One Table Is Ultra Slow At Being Copied, Others Are Fine
Jun 10, 2007
This is a really really odd problem.
Here's the situation. I've got a DB with several tables. Let's pick two of those tables from the bunch; they both have approximately 2,500 rows in them. I take one and duplicate it (right click, All Tasks -> Export Data, copy tables, etc.) and it duplicates just fine into another DB.
Now, the second table takes FOREVER, yet it has the same number of rows in it! And this is a flat table export; I'm not including dependent objects or anything like that.
Not only is the export slow, but when I use Red Gate SQL Data Compare, it also gets stuck when it hits this specific table.
We have one LEDGER table, where all the daily activities are stored. The LEDGER table has 4 indexes (1 clustered and 3 non-clustered). We use this table to get AR (accounts receivable).
The problem is that every 1-2 months, any simple AR query starts taking a long time and every other client gets slow responses (queries are very slow or sometimes block).
If we DROP any index on the LEDGER table and put it back again (recreate it), all our queries work fine and faster. This lasts another 1-2 months, until we see the same issue again.
A classic case happened today. Queries were running fine until 8 AM. We uploaded some 50 thousand records to the LEDGER table (data conversion). Within 30 minutes, all simple AR queries started taking a long time. We DROPPED an index on the LEDGER table and everything was faster. Just to be safe we added the same index back again, and everything is still faster.
What is causing this: our query, the index, the huge transaction, or a lack of free space?
We are scheduled to apply SP4 next week, but is there any solution in the meantime?
Also, is there any way to KILL all SQL Server processes that take more than a minute? We just don't want ALL our clients to slow down because of one query.
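Dropping and recreating an index also rebuilds its statistics, which typically go stale after a bulk load like the 50,000-row conversion; that is the usual reason a drop/recreate "fixes" things. A cheaper fix to try first, sketched here for SQL Server 2000 (the index name is made up):

UPDATE STATISTICS dbo.LEDGER WITH FULLSCAN
GO
-- or rebuild a single index in place instead of dropping it:
DBCC DBREINDEX ('dbo.LEDGER', 'idx_Ledger_Date')
GO

As for killing queries that run longer than a minute: there is no built-in server-side per-query timeout, only the query governor cost limit (sp_configure 'query governor cost limit'), which refuses queries the optimizer estimates as too expensive before they start rather than killing them afterwards.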
I have an SSIS 2008 parent/child package solution to manage data transfers between two different data sources, so we can copy multiple tables and capture how many rows were transferred and the duration of each transfer. This solution was working fine up until last week, when I made some changes to allow the package to perform a source count using either standard SQL determined by an expression or SQL provided from configuration tables. I also changed the package to truncate (or not) the destination table, again controlled by configuration settings in a table. The child packages which perform the data flows have not changed!
The day after the controlling package was promoted to live, I saw the bizarre behaviour of the package log stating all rows were transferred, but the actual table counts were not what the log stated; see the attached file. The package solution works OK on other servers and was OK in DEV, but there were fewer tables and rows transferred. Re-running the package gave the same errors, but on some of the same tables and some different ones.
As it is the child packages doing the transfers, and nothing has changed in them, I cannot see how the log would be able to say all rows were transferred and yet not all of the rows were actually moved.
Attached: the process output (where you can see the counts and the log), the table transfer controller (as txt, not dtsx), and an example of the data transfer child packages (as txt, not dtsx).
When I set ExecuteOutOfProcess = True the package worked fine. Unfortunately, this is not a good solution, as SSIS 2008 does not tidy up the DtsHost.exe processes it starts, and I'd be left with a memory issue after a very short time; we transfer hundreds of tables each day. (I could write a .NET script in the controlling package to kill the child processes, but that would still leave hundreds of processes running before I could end them, as we have three parallel streams to allow a bit better performance.)
I'm trying to learn SQL with SAMS Teach Yourself SQL in 24 Hours, using SQL Server 2000, but because the book uses generic SQL I'm finding some of the examples don't work for me. Firstly, is there a STORAGE command in MS SQL? When I tried to create my table I appended:
STORAGE (INITIAL 20M NEXT 1M);
All I got was an error and couldn't figure out how to get round it.
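For what it's worth, STORAGE (INITIAL ... NEXT ...) is Oracle syntax; SQL Server has no per-table storage clause, so the CREATE TABLE simply omits it and space is managed at the database-file level. A sketch (the table and database names here are made up):

CREATE TABLE Employees (
    EmployeeID INT NOT NULL PRIMARY KEY,
    LastName VARCHAR(50) NOT NULL
)
GO
-- Initial size and growth are properties of the database files instead:
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_Data, SIZE = 20MB, FILEGROWTH = 1MB)
GO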
Hi. I am getting an XML stream of data, putting it into an object, and then calling a big sproc to insert or update data in many tables across my database. But there is one table I am having trouble inserting into; if I run an update it works fine. This is my code for that part of the sproc:

IF EXISTS (SELECT * FROM PlanEligibility WHERE PlanId = @PlanId)
BEGIN
    UPDATE PlanEligibility
    SET LengthOfService = CASE WHEN @PD_EmployeeContribution = 0 THEN @rsLengthOfServicePS ELSE @rsLengthOfService END,
        EligibilityAge = CASE WHEN @PD_EmployeeContribution = 0 THEN @EligibilityAgePS ELSE @EligibilityAge END,
        EntryDates = @EntryDates,
        EligiDifferentRequirementsMatch = CASE
            WHEN @PD_EmployeeContribution = 0 THEN 0
            WHEN @PD_EmployeeContribution = 1 AND @PD_EmployerContribution = 0 THEN 0
            ELSE 1 END, --@CompMatchM
        LengthOfServiceMatch = CASE WHEN @MCompanyMatch = 0 THEN @rsLengthOfServicePs ELSE @rsLengthOfServiceMatch END,
        EligibilityAgeMatch = CASE WHEN @MCompanyMatch = 0 THEN @EligibilityAgePS ELSE @EligibilityAgeMatch END,
        OtherEmployeeExclusions = @OtherEmployeeExclusions
    WHERE PlanId = @PlanId
END
ELSE
BEGIN
    INSERT INTO PlanEligibility (
        PlanId, LengthOfService, EligibilityAge, EntryDates,
        EligiDifferentRequirementsMatch, LengthOfServiceMatch,
        EligibilityAgeMatch, OtherEmployeeExclusions )
    VALUES (
        @PlanId,
        CASE WHEN @PD_EmployeeContribution = 0 THEN @rsLengthOfServicePS ELSE @rsLengthOfService END, --@rsLengthOfService
        CASE WHEN @PD_EmployeeContribution = 0 THEN @EligibilityAgePS ELSE @EligibilityAge END, --@EligibilityAge
        @EntryDates,
        CASE
            WHEN @PD_EmployeeContribution = 0 THEN 0
            WHEN @PD_EmployeeContribution = 1 AND @PD_EmployerContribution = 0 THEN 0
            ELSE 1 END, --having trouble here
        CASE WHEN @MCompanyMatch = 0 THEN @rsLengthOfServicePs ELSE @rsLengthOfServiceMatch END,
        CASE WHEN @MCompanyMatch = 0 THEN @EligibilityAgePS ELSE @EligibilityAgeMatch END, --EligibilityAgeMatch, @EligibilityAgeMatch
        @OtherEmployeeExclusions )
END

Any help will be appreciated. Regards, Karen
I have a report problem. I'm using a parameter to dynamically control visibility for two tables. If the parameter is set to one value, I want to switch one of the tables to invisible, if the parameter is set to another, I want the other table to be invisible instead.
This all works fine in Visual Studio. When I publish it to my report server, the visibility controls no longer function and both tables always display. Any ideas here?
We have a point of sale application (C# .NET 2.0) and a Sql Server 2005 database back end.
Our customers are concerned that employees could create a backup of the SQL Server database (or even copy the MDF file) and use it to steal customer data.
Very often, the application is running on a single PC in a shop using Sql Server Express Edition 2005 under Windows XP. The users usually log on as local administrator. It's hard for us to force our customers to change their local security policies.
Ideally, I would like some form of security mechanism that prevents a backup from being restored on to another PC without either a password or some other form of authentication.
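One partial measure available in SQL Server 2005 is a password on the backup set; it blocks a casual RESTORE without the password, though it does not encrypt the data and does nothing for a copied MDF, so it is a deterrent rather than real protection. A sketch (database name and path are hypothetical):

BACKUP DATABASE PointOfSale
TO DISK = 'C:\Backups\PointOfSale.bak'
WITH PASSWORD = 'S3cretBackupPwd'
GO
-- Restoring then requires the same password:
-- RESTORE DATABASE PointOfSale FROM DISK = 'C:\Backups\PointOfSale.bak' WITH PASSWORD = 'S3cretBackupPwd'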
I restored a backup of a database running on SQL Server on W2K to my own laptop (Windows XP) for report-testing purposes. The restore worked perfectly, but when I ran the stored procedure that returns my "report" result set I noticed that several of the fields within the result set are different, although the number of rows and customers is a perfect match to the production report. The fields that are different are calculated fields that invoke a user-defined function, and the functions are exactly the same on both databases. I tried dropping the stored procedure and the 4 functions and recreating them, but I get the same results: the number of rows, the customers, and all non-function-calculated fields are perfect; only the fields calculated with the functions are wrong.
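Since the functions themselves are identical, differences in server-level settings are worth ruling out; date- and language-sensitive logic inside a UDF can return different results under different compatibility levels or session options. Quick checks to run on both servers (the database name here is hypothetical):

EXEC sp_dbcmptlevel 'MyDb'   -- compare compatibility levels on both servers
GO
DBCC USEROPTIONS             -- compare session SET options (dateformat, language, etc.)
GO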
I got an MDF file from my co-worker, who just stopped SQL Server, deleted the LDF file, and gave me the MDF file. If it had been detached it would work fine, but it was just copied. Is there any way to attach this MDF file? I tried sp_attach_db and sp_attach_single_file_db and both failed. Thanks.
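If the server is SQL Server 2005 or later, CREATE DATABASE ... FOR ATTACH_REBUILD_LOG can attach an MDF without its log, but only if the database was shut down cleanly; an MDF taken mid-activity may still refuse to attach. A sketch (name and path hypothetical):

CREATE DATABASE MyDb
ON (FILENAME = 'C:\Data\MyDb.mdf')
FOR ATTACH_REBUILD_LOG
GO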
I made a copy of a database "sac_prod" and named the new copy "vgs_prod". Now, when I do a backup of the new database, it still shows the name of the original. Is there any way to change this so it will be the same as the new database name? Here is the BACKUP script:

BACKUP DATABASE vgs_prod
TO DISK = '\\sac-srvr1\data$\Technical\Shared\Production\SQLBackup\LasVegas\vgs_prod_CopyOnly.BAK'
WITH COPY_ONLY

Here are the messages I received from this BACKUP:

Processed 1752 pages for database 'vgs_prod', file 'sac_prod' on file 1.
Processed 6 pages for database 'vgs_prod', file 'sac_prod_log' on file 1.
BACKUP DATABASE successfully processed 1758 pages in 0.412 seconds (34.955 MB/sec).

I would like to change the file 'sac_prod' to be 'vgs_prod' in the first two message lines above. Thanks.
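The 'sac_prod' in those messages is the logical file name, which a copied or restored database keeps from its source. It can be renamed, after which the backup messages will show the new names. A sketch:

ALTER DATABASE vgs_prod MODIFY FILE (NAME = 'sac_prod', NEWNAME = 'vgs_prod')
GO
ALTER DATABASE vgs_prod MODIFY FILE (NAME = 'sac_prod_log', NEWNAME = 'vgs_prod_log')
GO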
I recently encountered an error when I created several copies of one package.
It's always nearly the same package with small modifications. I call these packages from a parent package which is part of our data-warehousing framework.
The problem is that when copying a package, or using a package as a template, the package IDs and task IDs stay the same. And this isn't only an issue for logging!
When the parent package calls one of the copied packages, the first task is executed in every package in parallel. Furthermore, when I set a breakpoint on a data transformation task in one of the packages, the breakpoint is set on the same task in all the packages! This results in strange errors because the task states and variable values seem to get mixed up.
Unfortunately, it is only possible to change the package's ID; the IDs of tasks are read-only!
One solution is to create a new package and copy all the tasks into it, which creates new IDs, but doing so I have to manually recreate a long list of variables, all the configurations, and all the connection managers once again. Furthermore, I lose the layout of the tasks.
I found some posts about it here:
http://groups.google.de/group/microsoft.public.sqlserver.dts/browse_thread/thread/6f85a31ea190608a/0eae312aa8440cf8?lnk=gst&q=pitfall&rnum=1&hl=de#0eae312aa8440cf8
I have a database on SQL Server 2005. I usually take a full DB backup and sequential transaction log backups, and append each of the backups to a backup file. Now, if I want to copy my second transaction log backup to a specific folder on my server, is there any procedure to do it?
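There is no command that extracts one backup set out of an appended backup file; the usual approach is to inspect the file's contents and, going forward, write each log backup to its own file so it can be copied around on its own. A sketch (paths hypothetical):

-- See which backup sets (and file positions) the appended file contains:
RESTORE HEADERONLY FROM DISK = 'C:\Backups\MyDb.bak'
GO
-- Write future log backups to individual files instead of appending:
BACKUP LOG MyDb TO DISK = 'C:\Backups\Logs\MyDb_log_20070610.trn' WITH INIT
GO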
At one of our client sites a weird log shipping issue has come up. While monitoring the two servers I noticed that although the log shipping report says both servers are in sync, and displays information related to both backup and restore, it doesn't show information related to the copy, that is, when the last file was copied; the last-copied-file column shows blank. The same happens when I execute the proc "sp_help_log_shipping_monitor"; I get the same result.
When I expand the copy job history to analyse it, what I found is that although the job has executed successfully, reading each step in depth shows that no .trn file was copied.
My copy directory is on the secondary server itself, where the .trn files are placed, and the files are being restored from this same location.
SQL Server and Agent on both servers run under the same domain account.
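On SQL Server 2005 the monitor's copy information can also be read straight from msdb, which helps confirm whether the copy job ever recorded a file; if last_copied_file is NULL there while restores succeed, the copy step is most likely finding the .trn files already in place and copying nothing. A sketch:

SELECT secondary_server, secondary_database,
       last_copied_file, last_copied_date,
       last_restored_file, last_restored_date
FROM msdb.dbo.log_shipping_monitor_secondary
GO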
It appears that when you use the Import/Export Wizard from within Microsoft SQL Server Management Studio, the identity attributes of the table being copied are not transferred. For example, say the source table has a column
[ref] [int] IDENTITY ( 1 , 1 ) NOT NULL,
When the import wizard is done the destination table will have a column named ref, but will not be an identity column. The column definition will be
[ref] [int] NOT NULL,
instead. Is there a way to change this behavior somewhere in the GUI? When doing the import, the only option seems to be 'Enable Identity Insert', but checking this does not affect the definition of the column.
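The wizard only creates a plain column, so the usual workaround is to create the destination table yourself with the IDENTITY property, then let the wizard (with 'Enable Identity Insert' checked) or a manual INSERT append the rows. A sketch with made-up table and column names:

CREATE TABLE dbo.DestTable (
    [ref] [int] IDENTITY (1, 1) NOT NULL,
    [name] [varchar](50) NULL   -- remaining columns copied from the source definition
)
GO
-- If copying the rows manually rather than via the wizard:
SET IDENTITY_INSERT dbo.DestTable ON
INSERT INTO dbo.DestTable ([ref], [name])
SELECT [ref], [name] FROM SourceDb.dbo.SourceTable
SET IDENTITY_INSERT dbo.DestTable OFF
GO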
I'm using the methods of the Microsoft.SqlServer.Management.Smo namespace within a .NET application to create a backup file from a SQLEXPRESS database. I can then restore the database from that backup device using methods in the same namespace. Here is a snippet from the restore code:
' SMO restore snippet (requires references to Microsoft.SqlServer.Smo,
' Microsoft.SqlServer.SmoExtended and Microsoft.SqlServer.ConnectionInfo)
Dim srv As New Server("MYPC\SQLEXPRESS")   ' instance name; assuming "MYPC\SQLEXPRESS" (the separator was lost in the original post)
Dim db As Database = srv.Databases("washmaster")
Dim bdi As New BackupDeviceItem(BackupFileName, DeviceType.File)
Dim recoverymod As RecoveryModel
recoverymod = db.DatabaseOptions.RecoveryModel
Dim rs As New Restore()
rs.NoRecovery = False
rs.Devices.Add(bdi)
rs.Database = "washmaster"
rs.ReplaceDatabase = True
srv.KillAllProcesses("washmaster")
rs.SqlRestore(srv)
This works great as long as I used one of the backup files that I created directly on the disk. However, my application has a utility that allows the user to copy the backup files onto another drive, such as a CD or a thumb drive and when I try to restore from the copy of the backup, I get the following exception:
....Cannot open backup device..[filename]...Operating system error 5(Access is denied.)
The reason I get this error is that the "NETWORK SERVICE" account was removed from the file permissions when the file was copied.
How can I copy a backup to another drive and preserve the "NETWORK SERVICE" account? If I can't do that, is it wise to try to add the account back to the file before using it to restore or is there a better way?
Hello all. I've got a problem with really slow INSERTs on one (and only one) of the tables in a database. For example, using SQL Management Studio, it takes 4 minutes and 48 seconds to insert 25 rows. There are only about 8 columns in the table and only about 1500 records. All the other tables in the database are very fast for inserts.
Another odd thing uniquely associated with INSERTs on this table: prior to inserting the 25 new rows of data, SQL Management Studio tells me that it inserted 463 rows of data which I know did not happen. Here's the INSERT statement:
INSERT INTO FieldOps (StudySiteID, QA_StructureID, Notes, PersonID)
SELECT DISTINCT StudySiteKey, QA_StructureKey, SampleComments1, '25'
FROM ScriptOutput_Nitrate
WHERE ScriptOutput_Nitrate.StudySiteKey IS NOT NULL
and SQL Management Studio (eventually) says:
(463 row(s) affected)
(463 row(s) affected)
(25 row(s) affected)
The table has an index on the primary key (INT data type with auto increment). I tried running the following code to fix things but it made no difference:
USE [master]
GO
ALTER DATABASE [FieldData] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO

USE FieldData
GO
DBCC CHECKTABLE ('FieldOps', REPAIR_REBUILD) WITH ALL_ERRORMSGS
GO

USE [master]
GO
ALTER DATABASE [FieldData] SET MULTI_USER WITH ROLLBACK IMMEDIATE
GO
I'm guessing that the problem might be related to the index (?). I don't know. Does anyone here have a suggestion as to what I should do to fix this problem?
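Those two extra "(463 row(s) affected)" messages are a strong hint that a trigger (or two) fires on every INSERT into FieldOps, which would also explain why inserts into this one table are slow while the rest of the database is fine. A quick check on SQL Server 2005:

SELECT t.name, t.is_disabled
FROM sys.triggers AS t
WHERE t.parent_id = OBJECT_ID('dbo.FieldOps')
GO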
Hi, I have a table defined as:

CREATE TABLE [SH_Data] (
    [ID] [int] IDENTITY (1, 1) NOT NULL,
    [Date] [datetime] NULL,
    [Time] [datetime] NULL,
    [TroubleshootId] [int] NOT NULL,
    [ReasonID] [int] NULL,
    [reason_desc] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [maj_reason_id] [int] NULL,
    [maj_reason_desc] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [ActionID] [int] NULL,
    [action_desc] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [WinningCaseTitle] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [Duration] [int] NULL,
    [dm_version] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [ConnectMethod] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [dm_motive] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [HnWhichWlan] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [RouterUsedToConnect] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [OS] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [WinXpSp2Installed] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [Connection] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [Login] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [EnteredBy] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [Acct_Num] [int] NULL,
    [Site] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    CONSTRAINT [PK_SH_Data] PRIMARY KEY CLUSTERED ([TroubleshootId]) ON [PRIMARY]
) ON [PRIMARY]
GO

which contains 5.6 million rows and has non-clustered indexes on Date, ReasonID, maj_reason_id, Connection. Compared to other tables on the same server this one is extremely slow. A simple query such as:

SELECT SD.reason_desc, SD.Duration, SD.maj_reason_desc, SD.[Connection], SD.aolEnteredBy
FROM dbo.[Sherlock Data] SD
WHERE SD.[Date] > DATEADD(Month, -2, GETDATE())

takes over 2 minutes to run! I realise the table contains several large columns which make the table quite large, but unfortunately this cannot be changed for the moment. How can I assess what is causing the long query time? And what could I possibly do to speed this table up? The database itself is running on a dedicated server which hosts some other databases, none of which have this performance issue. Anyone have any ideas?
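To assess where the time goes, the standard first step is to measure the I/O and look at the plan; a clustered index scan with millions of logical reads would mean the nonclustered index on [Date] either isn't selective enough for two months of data or isn't being used at all. A sketch (using the column and table names from the CREATE TABLE above):

SET STATISTICS IO ON
SET STATISTICS TIME ON
GO
SELECT SD.reason_desc, SD.Duration, SD.maj_reason_desc, SD.[Connection], SD.[EnteredBy]
FROM dbo.[SH_Data] SD
WHERE SD.[Date] > DATEADD(Month, -2, GETDATE())
GO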
We have developed an ETL. For development we used small test files (10,000 rows) to test whether it works correctly. This runs in less than a minute.
In test we are using a file which contains all the rows (7 million). We ran the test twice: the first time we stopped the process after a week, and the second time we stopped it after a weekend.
We were able to trace the problem to the point where it has to sort the tables.
The process is pretty simple. We use two connectors to connect directly to the tables. Then we have two blocks to sort the data, and then one block to merge the data.
Should we switch to letting SQL Server do the sorting? Since the data is in staging, there is no index on that column. A SELECT on the tables with an ORDER BY takes 3 minutes to return all those rows.
Any ideas?
Also, is there a page with best practices for ETL?
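The SSIS Sort transform is fully blocking and has to hold all 7 million rows, which is usually the bottleneck in this pattern. Since the database sorts the same rows in about 3 minutes, a common approach is to put the ORDER BY in the source query and tell SSIS the output is already sorted (set IsSorted = True on the source's output and SortKeyPosition = 1 on the key column in the Advanced Editor), then feed the Merge directly. The source query is then just (names hypothetical):

SELECT KeyColumn, Col1, Col2
FROM dbo.StagingTable
ORDER BY KeyColumn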
SELECT * on a 4,000-row table is taking more than 12 seconds. Other, larger tables are not nearly as slow. I've run DBCC DBREINDEX, and DBCC SHOWCONTIG shows density at 100%.
How can I figure out why this is happening? What are some remedies?
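Since density looks fine, it is worth checking whether the table occupies far more pages than 4,000 rows should (for example after large deletes, or with off-row text columns); a full scan reads every page regardless of row count. A sketch (table name hypothetical):

EXEC sp_spaceused 'dbo.MyTable', @updateusage = 'TRUE'
GO
-- 'data' and 'unused' far above what ~4,000 rows need points at dead space
-- or off-row text that every SELECT * has to fetch.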
Hello. I have been administering a SQL Server (Enterprise Edition on Windows 2003) for some months and can't understand what has been going on in the latest week (when the DB grew a lot).
The DB is around 250 GB and has one table with 1 billion rows. It is performing in a decent way, but I can't understand why one particular table has strong performance problems.
I have a stored procedure that reads rows from table A and inserts them, after processing, into table B, and then moves them to other tables (similar to a star schema) for reporting. Table B holds, given how the SP is written, not more than 3,000 rows. Table B is very simple: it has 3 columns and no index.
What is very strange is that performance on table B is really slow. If I do a SELECT COUNT(*) FROM table_b, it takes between 30 seconds and 2 minutes to report that it has 0 rows. When the stored procedure inserts 1,000 rows, it takes 20-30 seconds, and it takes 20-30 seconds to delete them.
To me it doesn't look like a locking problem, because it is slow even when the only procedures that access that table are stopped. I did an UPDATE STATISTICS WITH FULLSCAN on this table with no improvement.
The DB is on a Storage Area Network that should perform decently. The LUN I use is configured on a set of 32 disks that are also used by other applications. I don't have performance data for the SAN. The machine is an HP DL580 with 4 CPUs (hyperthreading disabled), 8 GB of RAM, AWE and PAE, and 5 GB reserved for SQL Server.
I don't know what to do to resolve this situation. Could it be a "corruption problem" that slows this table so much? Is it possible that the DB growing a lot in the last week created problems for this small and simple table too?
Do you have any ideas or hints on how to manage this situation, or pointers to documentation that could help in analysing it?
For the ones that arrived till here, thank you for your time and patience reading my bad English.
Best Regards, Mamo
PS: I can't rewrite the stored procedure, because it is part of a closed-source product.
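A table that is constantly filled and emptied can accumulate tens of thousands of nearly empty pages (especially a heap, where emptied pages are not always deallocated), and COUNT(*) then has to scan them all, which matches these symptoms. A check that doesn't require touching the stored procedure:

DBCC SHOWCONTIG ('table_b') WITH TABLERESULTS
GO
-- If the page count is huge for ~3,000 rows, adding a clustered index to
-- table_b (a schema-only change, so the closed-source SP is unaffected)
-- lets the space be reused and keeps scans small.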
I'm working with a table with about 60 million records. This monster is growing every minute of the day as well, by 200,000 - 300,000 records/day. It's 11 columns wide, and has one index on a datetime column. My task is to create some custom reports based on three of these columns, including the datetime one.
The problem is response time. Any query executed on this table takes forever--anywhere between 30 seconds and 4 minutes. Queries such as this one below, as simple as it is, can take a minute or more:
select count(dt_date) as Searches from SearchRecords where datediff(day,getdate(),dt_date)=0
As the table gets larger and larger, the response time is going to get worse and worse. Long story short, what are my options to get query times down to just a few seconds with a table this big? So far the best I can come up with is indexing any other appropriate columns (of which there is one for sure, maybe two).
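One concrete option: the datediff() in that WHERE clause has to run against every one of the 60 million rows and prevents the index on the datetime column from seeking. Rewriting the filter as a range on the raw column lets the index do the work:

SELECT COUNT(dt_date) AS Searches
FROM SearchRecords
WHERE dt_date >= CONVERT(char(8), GETDATE(), 112)               -- midnight today
  AND dt_date <  DATEADD(day, 1, CONVERT(char(8), GETDATE(), 112))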
I have a table defined as such (sp_help output):

Column_name      Type     Computed  Length  Prec  Scale  Nullable  TrimTrailingBlanks  FixedLenNullInSource  Collation
PosterArtId      int      no        4       10    0     no        (n/a)               (n/a)                 NULL
Graphic          image    no        16                  yes       (n/a)               (n/a)                 NULL
GraphicFilename  varchar  no        50                  no        no                  no                    SQL_Latin1_General_CP1_CI_AS
I have a Stored Procedure defined in the database that queries this table (joins with some other tables) that takes about 1 1/2 minutes to return results (running directly in query analyzer). The table itself has 8900 records and the resulting stored procedure returns 33 rows.
I have backed up this database and restored it to another database on a different machine also running SQL Server 2000. When I run the same stored procedure on this 2nd database (note the contents of the database and this table are exactly the same), it runs very quickly - in about 2 seconds.
I'm trying to figure out what is causing the query to run so slow on the original database, which is our production database server (note that none of the other queries seem to be running extra slow on this machine, just this particular one).
I've since been reading up on storing images in the database and I don't think the images are stored "text in row" - I ran the command SELECT OBJECTPROPERTY(OBJECT_ID('tblPosterArt'), 'TableTextInRowLimit') and got a 0 return value.
My ultimate goal is to figure out: 1) why the timing is so different on the two databases even though they have the same data, and 2) whether there is something we can do to speed up the results on our production server.
For the first goal, I'm heading down the path that something in the database backup/restore did not recreate the btree storage of the image data in the same manner. Would this be correct? If not, is there some kind of analysis that I can do that will tell me some useful information?
I've run the Stored Procedure for both databases in query analyzer with the "show execution plan", "trace", and "statistics" turned on. In the Execution Plan of the production database I see a significant amount of time in three areas: Nested Loops/Left Semi Join, Clustered Index Scan, and Clustered Index Seek. But being as I'm not a dba (nor do we have one on staff), I'm not sure how to interpret this data. I keep wanting to point to some sort of environment issue since the data is the same between the two machines.
I suppose there is nothing to do about the 2nd goal without knowing why the query on the one machine is taking so long. Any thoughts on how to get more information here?
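Before digging further into image storage, it is worth ruling out stale statistics on the production side: a backup carries statistics with it, so the freshly restored copy may simply have compiled a better plan than the one production is still reusing. Cheap things to try on production (SQL Server 2000):

EXEC sp_updatestats
GO
-- then re-compare the two plans:
SET SHOWPLAN_TEXT ON
GO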
I have a table that has approx 3.2 million rows; see the sp_help output below.
Name            Owner  Type        Created_datetime
--------------  -----  ----------  -----------------------
TB_SAAI014_BPD  dbo    user table  2005-08-10 11:33:23.893

Column_name         Type      Comp  Lngth  Prec  Scale  Nullable
------------------  --------  ----  -----  ----  -----  --------
RowID               int       no    4      10    0      no
SPHInstID           int       no    4      10    0      no
BPDInstID           int       no    4      10    0      no
BMUID               varchar   no    11                  no
InfoImblCfw         numeric   no    9      12    2      no
BMUPrdNonDel        numeric   no    9      12    2      no
PrdFPN              numeric   no    9      13    3      no
PrdBMUBalSrvVol     numeric   no    9      13    3      no
PrdInfoImblVol      numeric   no    9      13    3      no
PrdExpdMtrVol       numeric   no    9      13    3      no
BMUMtrVol           numeric   no    9      13    3      no
PrdBMUNonDelBidVol  numeric   no    9      13    3      no
PrdBMUNonDelOfrVol  numeric   no    9      13    3      no
TranLossFctr        numeric   no    9      15    7      no
TranLossMtpl        numeric   no    9      15    7      no
TradUnitName        varchar   no    30                  no
TotTrdUnitMtrVol    numeric   no    9      13    3      no
BMUAppBalSrvVol     numeric   no    9      13    3      no
DTCreated           datetime  no    8                   yes
DTUpdated           datetime  no    8                   yes

Identity  Seed  Inc  Not For Repl
--------  ----  ---  ------------
RowID     0     1    0

RowGUIDcol: no rowguidcol column defined.

Data located on filegroup: PRIMARY

Index Name   Description                           Keys
-----------  ------------------------------------  --------------------
idx_SPH_BPD  clustered, unique located on PRIMARY  SPHInstID, BPDInstID
This table has 1 clustered index based on its own unique record ID and that of its parent table record
I have an import process that adds appx 980 rows of data to this table and numerous rows to several other tables as part of a transaction and it ran in about 15 seconds.
However, we suffered a server failure and the machine had to be rebuilt (Server 2003), SQL 2000 re-installed (with default options) and the database restored.
The same transaction is now taking 8 to 9 minutes. I tracked it down to this particular table. Just doing a COUNT(*) takes over 5 minutes; SELECT * WHERE ID = 1 takes over 5 minutes. Also, whenever the table is accessed you can hear the server thrashing the disks. Other tables, although smaller, do not seem to be suffering from this massive performance drop. I've tried dropping and recreating the index. I have even created a copy of the table, with index, and still get the same issue with speed. DBCC CHECKTABLE takes six and a half minutes and returns the following:

DBCC results for 'TB_SAAI014_BPD'.
There are 3168460 rows in 72011 pages for object 'TB_SAAI014_BPD'.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
No errors are shown
A DBCC CHECKTABLE on another table with 230 thousand rows, run at the same time only took 10 seconds
Can anyone please point me in the direction of things to check, try or repair?
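72,011 pages is only about 560 MB, which a healthy disk subsystem scans in well under a minute, so two directions seem worth checking: logical fragmentation left by the restore, and the rebuilt server's disk configuration itself. A sketch:

DBCC SHOWCONTIG ('TB_SAAI014_BPD')
GO
-- Low Scan Density / high Extent Switches => rebuild the clustered index:
-- DBCC DBREINDEX ('TB_SAAI014_BPD')
-- If those numbers look fine, suspect the rebuilt server's I/O instead
-- (e.g. the RAID controller's cache left disabled after the rebuild).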
I wonder how SQL Server 7 can be so slow!! I use an external application (that we made) which reads information from SQL Server 7 databases. What I mean is, one of my applications reads information about "users", and there are about 11,500 rows in the table with 34 columns each. My application shows information about one person at a time, and there's a scrollbox at the bottom where one can scroll to see other users (about 11,500 different users). When I'm using the scrollbars to move down and get information about other users, I see that the CPU usage is at 100% the whole time, and the "hour-glass" (the Windows thing that indicates a delay) is shown for maybe a second. And at first it takes almost 10 seconds to (I suppose?) read the information from the table into my application and show information about the first user.
I don't think it should take almost a second or so just to scroll this list of users. How can SQL Server be so slow? I have 196 megabytes of RAM on this computer that I use for development. All databases together are less than 15 megabytes; less than 10 megabytes, I suppose.
I mean, in Access 97 it was much faster, but it was getting slower in certain cases with much more data in the tables, and the whole idea of converting the system to SQL Server 7 was to get fast response times. Just to clarify, the client application and the SQL Server reside on the same computer right now, so they don't have to go over some kind of network connection.
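The usual culprit with this pattern is the client pulling all 11,500 rows (34 columns each) into the scroll list up front and re-fetching as you scroll. Fetching a page at a time keeps both CPU and latency down; a sketch with hypothetical names (TOP works on SQL Server 7):

SELECT TOP 50 UserID, LastName, FirstName
FROM dbo.Users
WHERE LastName >= @LastNameSeen   -- parameter: key of the last row displayed
ORDER BY LastName, UserID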
I am using DTS to transfer some tables from one server to another as part of a migration. We want to be down for as little time as possible, but we need the most up-to-date copy of the database tables in question.
I am currently testing the transfer process in our test environment by migrating the data from one database to another on the same SQL instance.
There are 7 tables to transfer and the total size of the database is 450 MB (with around 117 MB used). The two largest tables have around 17,000 records each.
One table (the header) has no text column and it takes just a few seconds to transfer. The other table (the detail) has two columns, one of which is a text column (actually, it's not fair to call it the detail table; the relationship is actually one-to-one, but for the sake of this discussion, let's leave it at that).
The header takes seconds to transfer, but the detail takes up to 18 minutes.
Physically, our test server is quite robust: 2 processors, a 3-disk RAID-5 for the data files and a separate RAID 1 partition for the logs. Performance counters don't indicate any real issues: during the transfer, the disk utilization on the data partition occasionally spikes to a high level, but comes right back down until the next spike (the spikes being separated by about 1 minute). No issues with memory, paging or CPU.
I have removed the clustered index on the affected table as well as the PK. No help.
Are text columns just slow? Is there something that I am missing?
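text columns are slow to bulk-copy because each value lives on its own off-row page chain, so every row costs extra random I/O. If most of the values are small, storing them in-row can cut that dramatically (SQL Server 2000; the table name here is hypothetical):

EXEC sp_tableoption 'dbo.DetailTable', 'text in row', '7000'
GO
-- Values up to 7000 bytes are then kept on the data page itself; existing
-- rows migrate in-row as they are next updated.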
Dear MS SQL experts, I have to get the number of records in several tables in my MS SQL 2000 SP4 database. Among these tables is one table with about 13 million entries. If I perform a SELECT COUNT(*) FROM table, it takes about 1-2 minutes to perform that task. Since I know other databases like MySQL that take less than 1 second for the same task, I'm wondering whether I have a bug in my software or whether there are other mechanisms to get the number of records in a table, or in the whole database. Can you give me some hints? Best regards, Daniel Wetzler
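MySQL's MyISAM engine stores an exact row count in the table header, which is why its COUNT(*) is instant; SQL Server scans an index instead. For a fast approximate count on SQL Server 2000, the row count kept in sysindexes can be read directly (exact only if the usage statistics are current; names hypothetical):

SELECT rows
FROM sysindexes
WHERE id = OBJECT_ID('dbo.BigTable') AND indid < 2
GO
-- DBCC UPDATEUSAGE ('MyDb', 'dbo.BigTable') corrects the stored count if needed.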
When I open a large table (say, more than 1,000,000 rows) in SSMS by right-clicking on the table name, it takes a very long time to fully open the table: more than 20 minutes for 1,000,000 records on a local instance.
SQL Server 2000 Enterprise Manager was extremely fast by comparison. Does anyone know of a workaround?
I need to be able to view and edit the data in SSMS.
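Open Table in SSMS 2005 pulls the entire table into the grid before showing it. One workaround that still allows editing is to open the table, show the SQL pane via the Query Designer menu, and limit what the grid loads, for example (names hypothetical):

SELECT TOP 1000 *
FROM dbo.BigTable
WHERE SomeKey BETWEEN 50000 AND 51000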
In my database / MY SERVER (SQL 7 / Win2K), I run a simple query against a table with 10,000 rows (without a clustered index):

SELECT * FROM TABLE

It takes over 30 s. Why is it so slow? How can I check for the reason? How should I configure my server to improve performance? Thanks in advance. TH

SP_CONFIGURE's result on MY SERVER:
Table 'spt_values'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0.

name                            minimum  maximum     config_value  run_value
------------------------------  -------  ----------  ------------  ----------
affinity mask                   0        2147483647  0             0
allow updates                   0        1           1             1
cost threshold for parallelism  0        32767       5             5
cursor threshold                -1       2147483647  -1            -1
default language                0        9999        0             0
default sortorder id            0        255         52            52
extended memory size (MB)       0        2147483647  0             0
fill factor (%)                 0        100         0             0
index create memory (KB)        704      1600000     0             0
language in cache               3        100         3             3
language neutral full-text      0        1           0             0
lightweight pooling             0        1           0             0
locks                           5000     2147483647  0             0
max async IO                    1        255         32            32
max degree of parallelism       0        32          0             0
max server memory (MB)          4        2147483647  2147483647    2147483647
max text repl size (B)          0        2147483647  65536         65536
max worker threads              10       1024        255           255
media retention                 0        365         0             0
min memory per query (KB)       512      2147483647  1024          1024
min server memory (MB)          0        2147483647  0             0
nested triggers                 0        1           1             1
network packet size (B)         512      65535       4096          4096
open objects                    0        2147483647  0             0
priority boost                  0        1           1             1
query governor cost limit       0        2147483647  0             0
query wait (s)                  -1       2147483647  -1            -1
recovery interval (min)         0        32767       0             0
remote access                   0        1           1             1
remote login timeout (s)        0        2147483647  5             5
remote proc trans               0        1           0             0
remote query timeout (s)        0        2147483647  0             0
resource timeout (s)            5        2147483647  10            10
scan for startup procs          0        1           0             0
set working set size            0        1           0             0
show advanced options           0        1           1             1
spin counter                    1        2147483647  10000         10000
time slice (ms)                 50       1000        100           100
two digit year cutoff           1753     9999        2049          2049
Unicode comparison style        0        2147483647  196609        196609
Unicode locale id               0        2147483647  1033          1033
user connections                0        32767       0             0
user options                    0        4095        0             0
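Before changing any sp_configure settings, it helps to measure where the 30 seconds actually go; for a 10,000-row scan the server-side cost is usually tiny, and the time is often spent transferring and rendering the rows on the client. A sketch (table name hypothetical):

SET STATISTICS IO ON
SET STATISTICS TIME ON
GO
SELECT * FROM dbo.MyTable
GO
-- A few thousand logical reads with low CPU and elapsed time on the server
-- means the delay is in the client fetching 10,000 rows, not in the query.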
Hi. We have SQL Server 2000 SP4 on a Windows 2003 2x3 GHz Xeon with 4 GB RAM. We have a table looking like this, with currently 6 rows; the total data in all rows together is approx 10 KB.

CREATE TABLE [dbo].[BIOMETRICPROFILE] (
    [BIOMETRICPROFILEID] [bigint] IDENTITY (1, 1) NOT NULL,
    [FINGERPRINTTEMPLATE1] [image] NOT NULL,
    [FINGERPRINTTEMPLATE2] [image] NOT NULL,
    [FINGERPRINTTEMPLATE3] [image] NOT NULL,
    [FINGERPRINTTEMPLATE4] [image] NOT NULL,
    [FINGERPRINTTEMPLATE5] [image] NOT NULL,
    [FINGERPRINTTEMPLATE6] [image] NOT NULL,
    [TYPE] [nvarchar] (50) COLLATE Danish_Norwegian_CI_AS NOT NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

SELECT * FROM BIOMETRICPROFILE takes ~4 seconds (!) to execute through Query Analyzer. All other tables have no performance problems. We have a SQL 2005 Express installation on the same server; if we restore a backup of the SQL 2000 database there, the query takes approx ~15 ms. What is going on here? Does SQL 2000 have problems with image fields, or how can we find the problem? Regards, Anders
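A quick way to tell whether the image columns themselves are the cost is to select everything except them; if this returns instantly, the 4 seconds is being spent fetching six off-row image page chains per row, and the difference on SQL 2005 Express likely comes down to how the restore laid those LOB pages out:

SELECT BIOMETRICPROFILEID, [TYPE]
FROM BIOMETRICPROFILE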
I am having performance issues with a SQL query in Access. My query is accessing and joining several tables (one very large one). The tables are linked via ODBC. The client submits the query to the server, separated by several states. It appears the query is retrieving gigabytes of data from the table and processing the joins on the client. Is there a way to perform more of the work on the server, thereby minimizing the amount of extraneous table data moving across the network and improving performance (woefully slow, about 6 hours)?
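The standard fix in Access is a pass-through query: its SQL is sent verbatim to SQL Server, so the joins and filtering run on the server and only the result set crosses the network, instead of Access dragging whole linked tables over ODBC. The query body is then plain T-SQL, along these lines (table and column names hypothetical):

SELECT o.OrderID, o.OrderDate, c.CustomerName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '20070101'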
Has anyone written or come across a routine that validates start and end dates in a slowly changing dimension (e.g. verifies there is no overlap in any of the records)?
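A self-join does this directly: two rows for the same business key overlap exactly when each starts before the other ends. A sketch assuming a type-2 dimension with hypothetical columns DimKey (surrogate key), BusinessKey, StartDate and EndDate:

SELECT a.BusinessKey,
       a.StartDate, a.EndDate,
       b.StartDate AS OverlapStart, b.EndDate AS OverlapEnd
FROM dbo.DimCustomer AS a
JOIN dbo.DimCustomer AS b
  ON  b.BusinessKey = a.BusinessKey
  AND b.DimKey <> a.DimKey          -- don't compare a row with itself
  AND b.StartDate < a.EndDate       -- b starts before a ends...
  AND a.StartDate < b.EndDate       -- ...and a starts before b ends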