Why Is SQL Server 7 So Slow? I Only Have About 11500 Rows In The Table
Sep 1, 2000
I wonder how SQL Server 7 can be so slow! I use an external application (that we made) which reads information from SQL Server 7 databases. One of my applications reads information about "users": there are about 11500 rows in the table, with 34 columns each. My application shows information about one person at a time, with a scrollbox at the bottom to scroll through the other users (about 11500 of them). When I use the scrollbar to move down and fetch information about other users, I see that CPU usage sits at 100% the whole time, and the hourglass (the Windows cursor that indicates a delay) is shown for maybe a second. And at first it takes almost 10 seconds to (I suppose?) read the information from the table into my application and show the first user.
I don't think it should take almost a second just to scroll this list of users. How can SQL Server be so slow? I have 196 megabytes of RAM on the computer I use for development. All the databases together are less than 15 megabytes, probably less than 10.
I mean, Access 97 was much faster, though it was getting slower in certain cases with much more data in the tables, and the whole idea of converting the system to SQL Server 7 was to get fast response times.
Just to clarify, the client application and SQL Server reside on the same computer right now, so they don't have to go over any kind of network connection.
When exporting data from Excel to a SQL Server table using an SSIS package, how would I check, after the export is done, that the source row count equals the destination row count, and throw an error message if not?
How can we handle transactions in SSIS? When some error happens during the export and the rows are not fully exported to the destination, how do we roll back the transaction in SSIS?
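For both questions, here is a database-side sketch (table names and the sample count are placeholders, not from the original post). The source row count would be captured into an SSIS variable by a Row Count transformation in the data flow, then compared against the destination in an Execute SQL Task; for the rollback question, setting the package's TransactionOption property to Required lets a failed data flow roll back the destination rows (this relies on the MSDTC service being available).

    -- Executed after the data flow; @SourceRows would be mapped in
    -- from the SSIS variable filled by a Row Count transformation.
    DECLARE @SourceRows int = 11500;  -- illustrative value
    DECLARE @DestRows   int;

    SELECT @DestRows = COUNT(*) FROM dbo.DestinationTable;

    IF @SourceRows <> @DestRows
        RAISERROR('Row count mismatch: source %d, destination %d.',
                  16, 1, @SourceRows, @DestRows);

The RAISERROR surfaces as a task failure in SSIS, which can then trigger the transaction rollback or an error-handling path.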
An SSIS package to transfer data from a DB instance on SQL Server 2005 to SQL Server 2000 is extremely slow. The package uses an OLE DB Source to OLE DB Destination data flow to transfer, basically, one table from SQL Server 2005 to SQL Server 2000. The job takes 5 minutes to transfer about 400 rows at night, when there is very little activity on the server. During the day the job almost always times out.
On the SQL Server 2000 instances the job ran in minutes in the old 2000 package.
Is there an alternative to this? The Transfer Objects task does not work, as there is apparently a defect according to Microsoft. Please let me know if there is any option other than an Execute 2000 Package task, or an ActiveX Script that reads records from the source and inserts them into the destination, which I am not certain how long it might take or how viable it would be.
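If the OLE DB Destination path stays slow, one alternative worth sketching (not from the original post): a linked server from the 2005 instance to the 2000 instance, with the transfer done as a plain INSERT ... SELECT run by a job. Server, database, table, and column names here are placeholders:

    -- Push the table's rows to the 2000 instance over a linked
    -- server (here named SRV2000); runs entirely in the engine,
    -- so no SSIS data flow is involved.
    INSERT INTO [SRV2000].TargetDB.dbo.TargetTable (Col1, Col2, Col3)
    SELECT Col1, Col2, Col3
    FROM dbo.SourceTable;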
Dear friends, I have a problem with a simple select statement and I don't know why it is happening. I have 2 tables, Fees and FeesDataRoles. Fees holds all the fees, and FeesDataRoles is a junction table between Fees and the Roles table, so each fee can have multiple roles and a role can have many fees. Now I have this select statement:

    Select *
    From Fees
    Inner Join FeesDataRoles ON Fees.FeeID = FeesDataRoles.FeeID
    Where (FeesDataRoles.DataRoleID = @DataRoleID)
    AND (FeesDataRoles.RecordStatus = 1)
    AND (FeesDataRoles.ValidFrom >= getdate())
    AND (FeesDataRoles.ValidTo <= getdate() OR FeesDataRoles.ValidTo is null)

It shouldn't take long to execute this procedure, but surprisingly, sometimes when I insert a value into the table and then execute this stored procedure, it does not show the data just added. Very strange! I ran the procedure 5 times after inserting an item, and roughly 1 run out of 5 does not return the right result (it does not include the recently inserted rows). Anyone have any idea? I used the Tuning Advisor; no suggestion. I changed the clustered index on FeesDataRoles from FeesDataRoleID (the primary key of the table) to DataRoleID to increase performance, and it still happens sometimes. Is my Where clause so costly that it causes this problem? Please help. I really appreciate your help. Regards, Mehdi
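One observation, offered tentatively since only the poster knows the intended semantics: a validity-window filter is usually written with the comparisons pointing the other way, so that rows currently valid are returned. If ValidFrom is set to the insert time, the posted ValidFrom >= getdate() test can exclude a freshly inserted row moments later, which would look exactly like this intermittent behaviour. The usual form, as a sketch:

    Select *
    From Fees
    Inner Join FeesDataRoles ON Fees.FeeID = FeesDataRoles.FeeID
    Where (FeesDataRoles.DataRoleID = @DataRoleID)
    AND (FeesDataRoles.RecordStatus = 1)
    AND (FeesDataRoles.ValidFrom <= getdate())  -- began on or before now
    AND (FeesDataRoles.ValidTo >= getdate() OR FeesDataRoles.ValidTo is null)  -- not yet expired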
I have a report with a single table, single grouping level, single data set, and no sub-reports. It has 3 rows for a grouping header and 3 detail rows per dataset row. The detail rows are initially hidden and can be expanded by clicking on the header +. It's a fairly standard master-detail report.
Regardless of data size, I get NO page breaks in HTML. I have the Interactive size set to 8.5x11, KeepTogether is set to False, and PageBreakAtEnd is set to False. I would like it to break based on the visible grouping rows.
As it is now, every time you expand any section, it takes forever to reload for a larger recordset.
I know that "HTML renderer and Preview (which are soft page break renderers) will ignore page breaks of conditionally hidden items and their children", but how do I get this report to page break? I've seen a lot of posts on this, but none that seem to have an answer.
I need to use a BULK INSERT statement for copying a table with 200 million rows to another table on the same server. The table has no primary key or identity column.... script for BULK INSERT ...
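Since the actual script is elided, here is a minimal BULK INSERT sketch of the usual shape; the file path, table name, and options are placeholders. Note that BULK INSERT reads from a data file, so for a table-to-table copy on the same server the source would first be exported (e.g. with bcp), or an INSERT ... SELECT used instead:

    -- Load an exported data file into the target table in batches.
    BULK INSERT dbo.TargetTable
    FROM 'C:\data\source_export.dat'   -- placeholder path
    WITH (
        DATAFILETYPE = 'native',
        BATCHSIZE    = 1000000,        -- commit every 1M rows
        TABLOCK                        -- helps qualify for minimal logging
    );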
I have a SQL script to insert data into a table as below:
INSERT into [SRV1INS2].BB.dbo.Agents2 select * from [SRV2INS14].DD.dbo.Agents
I just want to set a trigger on the Agents2 table that deletes all rows in the table before carrying out any insert via the statement above. I had the table trigger below on the [SRV1INS2].BB.dbo.Agents2 table, but it did not do what I intended:
USE [BB] GO /****** Object: Trigger   Script Date: 24/07/2015 3:41:38 PM ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON
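Since the posted trigger script is cut off, here is a minimal sketch of the usual way to get "empty the table, then load the incoming rows" semantics: an INSTEAD OF INSERT trigger. An AFTER INSERT trigger cannot work here, because by the time it fires the new rows are already in the table and the DELETE would remove them too. SELECT * from inserted assumes the column lists line up exactly:

    USE [BB]
    GO
    CREATE TRIGGER dbo.trg_Agents2_ReplaceAll
    ON dbo.Agents2
    INSTEAD OF INSERT
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Clear the table, then apply the rows the INSERT carried.
        DELETE FROM dbo.Agents2;
        INSERT INTO dbo.Agents2
        SELECT * FROM inserted;
    END
    GO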
I have a select statement joining table1 to table2. A row in table1 may have 0, 1, or many corresponding rows in table2, and I need to count those corresponding rows. Table2 also has a Boolean column, and I need to count the number of rows where it is true. So I need both the total number of matching rows and the count of those set to true. This is an example of what I have so far. I had to add each selected column to the GROUP BY to make it work, but I do not know why. Is there some other way this should be set up?
    SELECT c.CarId, c.CarName, c.CarColor,
           COUNT(t.TrailerId) as trailerCount,
           (add count of boolean, say t.TrailerFull is true)
    FROM Car c
    LEFT JOIN Trailer t on t.CarId = c.CarId
    GROUP BY c.CarId, c.CarName, c.CarColor
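A conditional aggregate handles the "count only the true ones" part. A sketch, assuming TrailerFull is a bit column:

    SELECT c.CarId, c.CarName, c.CarColor,
           COUNT(t.TrailerId) AS trailerCount,
           -- count only the trailers flagged as full
           SUM(CASE WHEN t.TrailerFull = 1 THEN 1 ELSE 0 END) AS fullTrailerCount
    FROM Car c
    LEFT JOIN Trailer t ON t.CarId = c.CarId
    GROUP BY c.CarId, c.CarName, c.CarColor;

As for the GROUP BY: every non-aggregated column in the SELECT list has to appear there; that is standard SQL behaviour, not something specific to this query.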
I want to return all rows in table giftregistryitems with an additional column that holds the sum of column `amount` in table giftregistrypurchases for the respective item in table giftregistryitems.
Here is what I tried, but it returns NULL for purchasedamount:
    SELECT (SELECT SUM(amount)
            FROM giftregistrypurchases gps
            WHERE registryid = gi.registryid
              AND gp.itemid = gps.itemid) as purchasedamount, *
    FROM giftregistryitems gi
    LEFT JOIN giftregistrypurchases gp on gp.registryid = gi.id
    WHERE gi.registryid = 2
How can I achieve what I need?
Here's my table definition and data:
/****** Object: Table [dbo].[giftregistryitems] Script Date: 02-05-15 22:37:11 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE TABLE [dbo].[giftregistryitems]( [id] [int] IDENTITY(1,1) NOT NULL,
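A sketch of what may have been intended, under one assumption not stated in the post: that giftregistrypurchases.itemid references giftregistryitems.id. The outer LEFT JOIN then isn't needed (it also multiplies rows), and the subquery can correlate directly on the item:

    SELECT gi.*,
           (SELECT SUM(gps.amount)
            FROM giftregistrypurchases gps
            WHERE gps.registryid = gi.registryid
              AND gps.itemid = gi.id) AS purchasedamount
    FROM giftregistryitems gi
    WHERE gi.registryid = 2;

SUM returns NULL when no purchase rows match; wrapping it in ISNULL(..., 0) gives a zero instead.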
In a T-SQL 2012 script, I have the following statement, which only works for a few records, since TST.dbo.LockCombination.seq contains only the value 1 in most cases. For every join listed below there should be 5 rows, each with a distinct seq value of 1, 2, 3, 4, and 5. My goal is to add the missing rows to TST.dbo.LockCombination where there are no rows for seq values 2 through 5, and then run the update statement below. Can you show me the SQL to add the rows for at least one of the missing sequence numbers?
    UPDATE LKC
    SET LKC.combo = lockCombo2
    FROM [LockerPopulation] A
    JOIN TST.dbo.School SCH ON A.schoolnumber = SCH.type
    JOIN TST.dbo.Locker LKR ON SCH.schoolID = LKR.schoolID AND A.lockerNumber = LKR.number
    JOIN TST.dbo.Lock LK ON LKR.lockID = LK.lockID
    JOIN TST.dbo.LockCombination LKC ON LK.lockID = LKC.lockID
    WHERE LKC.seq = 2
A normal select statement looks like the following:
    select *
    from TST.dbo.Locker LKR
    JOIN TST.dbo.Lock LK ON LKR.lockID = LK.lockID
    JOIN TST.dbo.LockCombination LKC ON LK.lockID = LKC.lockID
    where LKR.number in (000,001,1237)
In case you need them, here are the DDL statements for the affected tables:
CREATE TABLE [dbo].[Locker]( [lockerID] [int] IDENTITY(1,1) NOT FOR REPLICATION NOT NULL, [schoolID] [int] NOT NULL, [number] [varchar](10) NOT NULL, [serialNumber] [varchar](20) NULL, [type] [varchar](3) NULL, [locationID] [int] NULL,
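A sketch of one way to add the missing rows: cross join every lock against the sequence numbers 1 through 5 and insert the combinations that don't exist yet. The LockCombination column list (lockID, seq, combo) is an assumption, since its DDL isn't shown:

    INSERT INTO TST.dbo.LockCombination (lockID, seq, combo)
    SELECT LK.lockID, n.seq, ''   -- combo left blank, to be filled by the update
    FROM TST.dbo.Lock LK
    CROSS JOIN (VALUES (1),(2),(3),(4),(5)) n(seq)
    WHERE NOT EXISTS (SELECT 1
                      FROM TST.dbo.LockCombination LKC
                      WHERE LKC.lockID = LK.lockID
                        AND LKC.seq = n.seq);

After this runs, the UPDATE ... WHERE LKC.seq = 2 statement above has a row to hit for each lock.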
I decided to change over from a Microsoft Access database file to the new SQL Server Compact Edition. Although reading data from the database is greatly improved, inserting new rows is extremely slow.
I was getting between 60 and 70 rows per second using OLEDB and an Access database, but am now only getting 14 to 27 rows per second using SQLServerCe.
I have tried the code changes below and nothing seems to increase the speed. Any help would be appreciated, as I would prefer to use SQLServerCe since the database is much smaller and I'm used to SQL commands.
Details: VB2008 Pro, .NET Framework 2.0, SQL Compact Edition V3.5, Encryption = Engine Default, Database Size = 128 MB (but needs to be changed to 999 MB),
where Backup_On_Next_Run, OverWriteQuick, and CompressAns are Booleans; all other columns are nvarchar of size 10 to 30, except for Full Folder Address at size 260.
14 to 20 rows per second (Was 60 to 70 when using OLEDB Access)
TRY 2
Using Record Sets
Private Sub InsertRecordsIntoSQLServerce(ByVal Group_Name1 As String, ByVal Full_Folder_Address1 As String, ByVal File1 As String, ByVal File_Size_KB1 As String, ByVal Schedule_To_Run1 As String, ByVal Backup_Time1 As String, ByVal Last_Run1 As String, ByVal Result1 As String, ByVal Last_Modfied1 As String, ByVal Latest_Modfied1 As String, ByVal Backup_On_Next_Run1 As Boolean, ByVal Total_Backup_Times1 As String, ByVal Server_File_Number1 As String, ByVal Server_Number1 As String, ByVal File_Break_Down1 As String, ByVal No_Of_Servers1 As String, ByVal Full_File_Address1 As String, ByVal OverWriteQuick As Boolean, ByVal CompressAns As Boolean)
cmd.CommandText = "SELECT * FROM BackupDatabase" cmd.ExecuteNonQuery() Dim rs As SqlCeResultSet = cmd.ExecuteResultSet(ResultSetOptions.Updatable Or ResultSetOptions.Scrollable)
Dim rec As SqlCeUpdatableRecord = rs.CreateRecord()
rec.SetString(1, Group_Name1) rec.SetString(2, Full_Folder_Address1) rec.SetString(3, File1) rec.SetSqlString(4, File_Size_KB1) rec.SetSqlString(5, Schedule_To_Run1) rec.SetSqlString(6, Backup_Time1) rec.SetSqlString(7, Last_Run1) rec.SetSqlString(8, Result1) rec.SetSqlString(9, Last_Modfied1) rec.SetSqlString(10, Latest_Modfied1) rec.SetSqlBoolean(11, Backup_On_Next_Run1) rec.SetSqlString(12, Total_Backup_Times1) rec.SetSqlString(13, Server_File_Number1) rec.SetSqlString(14, Server_Number1) rec.SetSqlString(15, File_Break_Down1) rec.SetSqlString(16, No_Of_Servers1) rec.SetSqlString(17, Full_File_Address1) rec.SetSqlBoolean(18, OverWriteQuick) rec.SetSqlBoolean(19, CompressAns) rs.Insert(rec) Catch e As Exception MessageBox.Show(e.Message) Finally conn.Close() End Try End Sub
20 to 24 rows per sec
TRY 3
Using SQL Commands Direct
Private Sub InsertRecordsIntoSQLServerce(ByVal Group_Name1 As String, ByVal Full_Folder_Address1 As String, ByVal File1 As String, ByVal File_Size_KB1 As String, ByVal Schedule_To_Run1 As String, ByVal Backup_Time1 As String, ByVal Last_Run1 As String, ByVal Result1 As String, ByVal Last_Modfied1 As String, ByVal Latest_Modfied1 As String, ByVal Backup_On_Next_Run1 As Boolean, ByVal Total_Backup_Times1 As String, ByVal Server_File_Number1 As String, ByVal Server_Number1 As String, ByVal File_Break_Down1 As String, ByVal No_Of_Servers1 As String, ByVal Full_File_Address1 As String, ByVal OverWriteQuick As Boolean, ByVal CompressAns As Boolean)
I am finding it difficult to find an example of inserting additional rows into a table without dropping the table I'm inserting into, or of inserting specific values. Like this example...
[URL] ....
I have 6 tables. I am formatting the data to conform to the final table as I insert it, but none of these examples shows what I need. I am using SQL 2012.
    SELECT CONVERT(VARCHAR(50),[FName]) + ' ' + CONVERT(VARCHAR(50),[LName]) AS [CustName]
          ,CAST('ALARMCOM' as nvarchar(8)) as VendorName
          ,CONVERT(VARCHAR(25),[CUSTOMER_CS_ACCOUNT_NUMBER]) AS [Cust_ID]
          ,CONVERT(VARCHAR(40),[Charge_Description]) as [ChargeType]
          ,CASE
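The general shape being asked for is INSERT INTO ... SELECT against the existing table, which appends rows without dropping or recreating anything. A sketch continuing the posted SELECT; the target table and its column names are placeholders:

    -- Append formatted rows to the existing final table;
    -- the column list must line up with the SELECT output.
    INSERT INTO dbo.FinalTable (CustName, VendorName, Cust_ID, ChargeType)
    SELECT CONVERT(VARCHAR(50),[FName]) + ' ' + CONVERT(VARCHAR(50),[LName]),
           CAST('ALARMCOM' AS nvarchar(8)),
           CONVERT(VARCHAR(25),[CUSTOMER_CS_ACCOUNT_NUMBER]),
           CONVERT(VARCHAR(40),[Charge_Description])
    FROM dbo.SourceTable;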
I also have a RESOURCES table of phrases (for translation purposes) similar to this:
    res_id         res_lang   res_phrase
    AccessDenied   en         Access Denied
For some rows in the resources table I do not have all language codes present, so I am missing some translations for a given res_id. My question is: what query can I use to determine the RESOURCE.RES_IDs for which I do not have a translation?
For example, I might have de, en, and cz translations for a phrase but not a pl one, and I need to identify those rows so that I can obtain translations for the missing RESOURCE rows.
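One common shape for this: generate the full set of (res_id, language) pairs with a cross join, then keep the pairs that have no matching translation row. A sketch, with the language list hard-coded as an assumption:

    -- Every res_id / language combination that has no translation row yet.
    SELECT r.res_id, l.res_lang
    FROM (SELECT DISTINCT res_id FROM RESOURCES) r
    CROSS JOIN (VALUES ('en'), ('de'), ('cz'), ('pl')) l(res_lang)
    WHERE NOT EXISTS (SELECT 1
                      FROM RESOURCES x
                      WHERE x.res_id = r.res_id
                        AND x.res_lang = l.res_lang);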
Given one table, Table1, with columns Key1 (int), Key2 (int), and Type (varchar)...
I would like to get the rows where Type equals 'TypeA' and Key2 is NULL that do NOT have a corresponding row in the table where Type equals 'TypeB' and Key2 equals that row's Key1.
For example, I would like to return only the row where Key1 = 4, because that row meets the criteria of Type = 'TypeA'/Key2 = NULL and does not have a corresponding row with Type = 'TypeB'/Key2 = Key1.
I have tried this and it doesn't work...
SELECT t1.Key1, t1.Key2, t1.Type FROM Table1 t1 WHERE t1.Key2 IS NULL AND t1.Type LIKE 'TypeA' AND t1.Key1 NOT IN (SELECT Key1 FROM Table1 t2 WHERE t1.Key1 = t2.Key2 AND t1.Key1 <> t2.Key1 AND t2.Type LIKE 'TypeB')
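A sketch of the same intent using NOT EXISTS, which expresses the correlation directly (and, unlike NOT IN, is not thrown off by NULLs in the subquery):

    SELECT t1.Key1, t1.Key2, t1.Type
    FROM Table1 t1
    WHERE t1.Type = 'TypeA'
      AND t1.Key2 IS NULL
      -- exclude rows that have a TypeB row pointing back at them
      AND NOT EXISTS (SELECT 1
                      FROM Table1 t2
                      WHERE t2.Type = 'TypeB'
                        AND t2.Key2 = t1.Key1);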
I am doing performance testing of the In-Memory OLTP option in SQL Server 2014. As part of this I want to insert 500 million rows into a memory-optimized test table I have created.
I need a sample script to insert 500 million records into a table ....
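A minimal sketch of a batched generator; the table and columns are placeholders. A tally built by cross joining system catalog views yields a million rows per pass, and a loop repeats that 500 times (batch size and payload would need tuning against the memory budget of a memory-optimized table):

    DECLARE @batch int = 0;
    WHILE @batch < 500   -- 500 batches x 1,000,000 rows
    BEGIN
        INSERT INTO dbo.InMemTest (id, payload)
        SELECT @batch * 1000000 + n.rn,
               NEWID()   -- throwaway payload value
        FROM (SELECT TOP (1000000)
                     ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
              FROM sys.all_columns a
              CROSS JOIN sys.all_columns b) n;
        SET @batch += 1;
    END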
I have resulting rows from a query similar to the following:
The data is coming from a single table that contains only one coverage code column and one coverage date column, but the end user wants the two coverage code types and dates combined into a single row. So the SELECT looks something like this:
SELECT [Employee ID] = emp.employee_id, [Coverage Code 1] = enr.coverage_code, [Coverage Date 1] = enr.coverage_date, [Coverage Code 2] = case when enr.product_type = 'Accident.Accident' then enr.coverage_code else NULL end,
[Code] ....
I basically want to merge the like Employee ID's together into a single row like the following:
I know I have done this before and it is probably pretty simple.
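Conditional aggregation (MAX over CASE) is one way to collapse the per-product rows into one row per employee. A sketch along the lines of the posted SELECT; the table names employees/enrollments and the "everything other than Accident.Accident is code 1" split are assumptions:

    SELECT emp.employee_id AS [Employee ID],
           MAX(CASE WHEN enr.product_type <> 'Accident.Accident'
                    THEN enr.coverage_code END) AS [Coverage Code 1],
           MAX(CASE WHEN enr.product_type <> 'Accident.Accident'
                    THEN enr.coverage_date END) AS [Coverage Date 1],
           MAX(CASE WHEN enr.product_type = 'Accident.Accident'
                    THEN enr.coverage_code END) AS [Coverage Code 2],
           MAX(CASE WHEN enr.product_type = 'Accident.Accident'
                    THEN enr.coverage_date END) AS [Coverage Date 2]
    FROM employees emp
    JOIN enrollments enr ON enr.employee_id = emp.employee_id
    GROUP BY emp.employee_id;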
I have a CTE query against a table with 32K rows that runs fine in 2008R2. I am running it in 2014 Std Ed. against the same data and it runs very slowly. Looking at the execution plan I think I see what's contributing to the slowness.
Note that the "actual number of rows" is some 351M...how is this possible?
the query:
    declare @amts table (claim int, allowed decimal(12,2), copay decimal(12,2),
                         deductible decimal(12,2), coins decimal(12,2));

    ;with unpaid (claimID) as
     (select claimID from claim where amt + copay + disct + mm + ded = 0)
    insert @amts
    select lineID,
           sum(rc),
           sum(copay),
           sum(deduct),
           case when sum(mm) > 0 and (sum(mm) < sum(mmamt)) then sum(mm) else 0 end
    from claimln
    where status is null
      and lineID not in (select claimID from unpaid)
    group by lineID
It's as if there's some massively recursive process going on.
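Most likely not recursion: under a poor plan, a NOT IN subquery can be re-executed per outer row, and the "actual number of rows" shown in a plan is summed across all executions of an operator, which is how 351M can appear against a 32K-row table. A sketch of the same filter written as NOT EXISTS, which usually collapses to a single anti-join:

    -- Assumes the same @amts table variable declared in the post.
    ;with unpaid (claimID) as
     (select claimID from claim where amt + copay + disct + mm + ded = 0)
    insert @amts
    select lineID, sum(rc), sum(copay), sum(deduct),
           case when sum(mm) > 0 and sum(mm) < sum(mmamt) then sum(mm) else 0 end
    from claimln
    where status is null
      and not exists (select 1 from unpaid u where u.claimID = claimln.lineID)
    group by lineID;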
I tried the following, and the result comes back for one month (Jul) but not for the second month (Jun):
    SELECT 'Jul1' AS MON, [BNQ], [FNB], [RS]
    FROM (SELECT REVENUECODE,
                 SUM(ROUND(((Jul/31)*30),0)) AS JUL
          FROM RM_USERBUDGETTBL
          WHERE USERNAME='rahul' AND FY=2015
          GROUP BY REVENUECODE, USERNAME) AS SourceTable
    PIVOT (SUM(JUL) FOR REVENUECODE IN ([BNQ], [FNB], [RS])) AS PivotTable
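One way to get both months from a single query is to unpivot the month columns first (CROSS APPLY with a VALUES list) so each month becomes its own row, then pivot on the revenue code. A sketch, assuming Jun and Jul are columns of RM_USERBUDGETTBL and keeping the posted Jul proration:

    SELECT p.MON, p.[BNQ], p.[FNB], p.[RS]
    FROM (SELECT t.REVENUECODE, m.MON, m.AMT
          FROM RM_USERBUDGETTBL t
          CROSS APPLY (VALUES ('Jun', t.Jun),
                              ('Jul', ROUND((t.Jul / 31.0) * 30, 0))) m(MON, AMT)
          WHERE t.USERNAME = 'rahul' AND t.FY = 2015) s
    PIVOT (SUM(AMT) FOR REVENUECODE IN ([BNQ], [FNB], [RS])) p;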
I have several databases to deal with, all with + 250 tables. The databases are not identical and do not conform to a specific naming convention for table names. Most but not all tables have a column called "LastUpdated" containing a date/time (obviously). I'd like to be able to find all rows within a whole database (table by table) where the date/time is greater than a specified date/time.
I'm looking for a reliable query that will return all the rows in each of the tables but without me having to write hundreds of individual scripts "SELECT * FROM [dbo.xyz] WHERE LastUpdated > '2015-01-01 09:00:00:000'", or have to look through each table first to determine which of them has the LastUpdated field.
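This is a typical job for dynamic SQL driven off the catalog: find every table that actually has a LastUpdated column and build one SELECT per table. A sketch:

    DECLARE @cutoff datetime = '2015-01-01T09:00:00';
    DECLARE @sql nvarchar(max) = N'';

    -- Build one SELECT per table that has a LastUpdated column.
    SELECT @sql += N'SELECT ''' + TABLE_SCHEMA + N'.' + TABLE_NAME + N''' AS TableName, * '
                 + N'FROM ' + QUOTENAME(TABLE_SCHEMA) + N'.' + QUOTENAME(TABLE_NAME)
                 + N' WHERE LastUpdated > @cutoff;' + CHAR(10)
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE COLUMN_NAME = 'LastUpdated';

    EXEC sp_executesql @sql, N'@cutoff datetime', @cutoff = @cutoff;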
I have created a table with name as varchar and id as int. I started inserting rows like insert into Table values ('arun', 20), and that row went in fine. Now I have the values "arun's" and 50. With insert into Table values('arun's', 20), SQL Server gives me an error instead of inserting the row. How would you solve this problem?
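A single quote inside a T-SQL string literal is escaped by doubling it; better still, pass the value as a parameter so no escaping is needed at all. Both forms, with a placeholder table name:

    -- Escaped literal: '' inside the string stands for one quote.
    INSERT INTO dbo.MyTable (name, id) VALUES ('arun''s', 50);

    -- Parameterized form (no escaping, and safe from SQL injection).
    EXEC sp_executesql
         N'INSERT INTO dbo.MyTable (name, id) VALUES (@name, @id)',
         N'@name varchar(50), @id int',
         @name = 'arun''s', @id = 50;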
Hello all. I've got a problem with really slow INSERTs on one (and only one) of the tables in a database. For example, using SQL Management Studio, it takes 4 minutes and 48 seconds to insert 25 rows. There are only about 8 columns in the table and only about 1500 records. All the other tables in the database are very fast for inserts.
Another odd thing uniquely associated with INSERTs on this table: prior to inserting the 25 new rows of data, SQL Management Studio tells me that it inserted 463 rows of data which I know did not happen. Here's the INSERT statement:
    INSERT INTO FieldOps (StudySiteID, QA_StructureID, Notes, PersonID)
    SELECT DISTINCT StudySiteKey, QA_StructureKey, SampleComments1, '25'
    FROM ScriptOutput_Nitrate
    WHERE (ScriptOutput_Nitrate.StudySiteKey IS NOT NULL)
and SQL Management Studio (eventually) says: (463 row(s) affected) (463 row(s) affected)
(25 row(s) affected)
The table has an index on the primary key (INT data type with auto increment). I tried running the following code to fix things but it made no difference:
USE [master] GO ALTER DATABASE [FieldData] SET SINGLE_USER WITH ROLLBACK IMMEDIATE GO
use FieldData GO DBCC CHECKTABLE ('FieldOps', REPAIR_REBUILD) With ALL_ERRORMSGS GO
USE [master] GO ALTER DATABASE [FieldData] SET MULTI_USER WITH ROLLBACK IMMEDIATE GO
I'm guessing that the problem might be related to the index (?). I don't know... Does anyone here have a suggestion as to what I should do to fix this problem?
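The stray "(463 row(s) affected)" messages are the classic signature of triggers on the table: each trigger's own DML reports its row count before the statement's. A quick way to check:

    -- List any triggers defined on the slow table.
    SELECT t.name AS trigger_name,
           t.is_disabled,
           OBJECT_DEFINITION(t.object_id) AS trigger_body
    FROM sys.triggers t
    WHERE t.parent_id = OBJECT_ID('dbo.FieldOps');

If a trigger shows up, its body (e.g. an unindexed audit insert or a cursor) would be the first place to look for the 4 minutes 48 seconds.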
Hi, I have a table defined as:

    CREATE TABLE [SH_Data] (
    [ID] [int] IDENTITY (1, 1) NOT NULL,
    [Date] [datetime] NULL,
    [Time] [datetime] NULL,
    [TroubleshootId] [int] NOT NULL,
    [ReasonID] [int] NULL,
    [reason_desc] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [maj_reason_id] [int] NULL,
    [maj_reason_desc] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [ActionID] [int] NULL,
    [action_desc] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [WinningCaseTitle] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [Duration] [int] NULL,
    [dm_version] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [ConnectMethod] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [dm_motive] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [HnWhichWlan] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [RouterUsedToConnect] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [OS] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [WinXpSp2Installed] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [Connection] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [Login] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [EnteredBy] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [Acct_Num] [int] NULL,
    [Site] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    CONSTRAINT [PK_SH_Data] PRIMARY KEY CLUSTERED ([TroubleshootId]) ON [PRIMARY]
    ) ON [PRIMARY]
    GO

It contains 5.6 million rows and has nonclustered indexes on Date, ReasonID, maj_Reason, and Connection. Compared to other tables on the same server this one is extremely slow. A simple query such as:

    SELECT SD.reason_desc,
           SD.Duration,
           SD.maj_reason_desc,
           SD.[Connection],
           SD.aolEnteredBy
    FROM dbo.[Sherlock Data] SD
    Where SD.[Date] > Dateadd(Month, -2, Getdate())

takes over 2 minutes to run! I realise the table contains several large columns which make the table quite large, but unfortunately this cannot be changed for the moment. How can I assess what is causing the long query time? And what could I possibly do to speed this table up? The database is running on a dedicated server which hosts some other databases, none of which have this performance issue. Anyone have any ideas?
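On "how can I assess what is causing the query time": SQL Server can report per-query I/O and CPU directly, alongside the actual execution plan in Management Studio. A sketch:

    SET STATISTICS IO ON;    -- logical/physical reads per table
    SET STATISTICS TIME ON;  -- parse/compile and execution CPU + elapsed

    SELECT SD.reason_desc, SD.Duration, SD.maj_reason_desc,
           SD.[Connection], SD.aolEnteredBy
    FROM dbo.[Sherlock Data] SD
    WHERE SD.[Date] > DATEADD(MONTH, -2, GETDATE());

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;

If the output shows a scan or heavy key lookups, one thing to test (a suggestion, not a certainty) is a covering index on [Date] that includes the five selected columns, so the two-month range does not have to fetch each wide row.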
We have developed an ETL. For development we used small test files (10,000 rows) to check that it works correctly. This runs in less than a minute.
In test we are using a file which contains all rows (7 million). We ran the test twice: the first time we stopped the process after a week, and the second time after a weekend.
We are able to trace the problem to the point where it has to sort the tables.
The proces is pretty simple. We use two connectors to directly connect to the tables. Then we have two blocks to sort the data. And then we have one block to merge the data.
Should we switch to letting SQL Server do the sorting? Since it is in staging, there is no index on that column. A SELECT on the tables with an ORDER BY takes 3 minutes to return all those rows.
Any ideas?
Also, is there a page with best practices for ETL?
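On the sorting question: SSIS's in-memory Sort transformation tends to spill badly at this scale, so a common pattern is to sort in the database instead. Index the staging tables on the merge key, put the ORDER BY in the source query, and mark the OLE DB Source as pre-sorted (the IsSorted and SortKeyPosition properties in its advanced editor) so the Merge Join no longer needs the Sort blocks. The T-SQL half, with placeholder names:

    -- Index the staging tables on the merge key so the ORDER BY
    -- below is satisfied by an index scan instead of a full sort.
    CREATE INDEX IX_StagingA_MergeKey ON dbo.StagingA (MergeKey);
    CREATE INDEX IX_StagingB_MergeKey ON dbo.StagingB (MergeKey);

    -- Use these as the SSIS source queries (marked IsSorted).
    SELECT * FROM dbo.StagingA ORDER BY MergeKey;
    SELECT * FROM dbo.StagingB ORDER BY MergeKey;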
I have a master table and I need to import its rows into parent and child tables.
The master table name is Flatfile_Inventory. The parent table is INVENTORY. The child tables are INVENTORY_AMOUNT, INVENTORY_DETAILS, and INVENTORY_VEHICLE. Error details go to LOG_INVENTORY_ERROR.
I have 4 duplicate rows in Flatfile_Inventory which I have already inserted into the parent and child tables.
When I run the query again using the stored procedure, it reports that all 4 rows are duplicates and moves them to Log_Inventory_Error.
What I need: if Flatfile_Inventory contains duplicate rows when I start inserting into the parent and child tables, and an already-inserted row has the same unique ID, I must identify it, delete that row from both the parent and child tables, and insert the latest row from Flatfile_Inventory into the parent and child tables instead.
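A sketch of the delete-then-reinsert step; the column names (UniqueID as the business key, INVENTORY_ID linking parent to children) are assumptions, since the actual DDL isn't shown. Children are deleted before the parent to respect foreign keys:

    -- Remove child rows, then parent rows, for any inventory whose
    -- UniqueID reappears in the incoming flat file.
    DELETE d
    FROM INVENTORY_DETAILS d
    JOIN INVENTORY i ON i.INVENTORY_ID = d.INVENTORY_ID
    WHERE EXISTS (SELECT 1 FROM Flatfile_Inventory f WHERE f.UniqueID = i.UniqueID);

    DELETE a
    FROM INVENTORY_AMOUNT a
    JOIN INVENTORY i ON i.INVENTORY_ID = a.INVENTORY_ID
    WHERE EXISTS (SELECT 1 FROM Flatfile_Inventory f WHERE f.UniqueID = i.UniqueID);

    DELETE v
    FROM INVENTORY_VEHICLE v
    JOIN INVENTORY i ON i.INVENTORY_ID = v.INVENTORY_ID
    WHERE EXISTS (SELECT 1 FROM Flatfile_Inventory f WHERE f.UniqueID = i.UniqueID);

    DELETE i
    FROM INVENTORY i
    WHERE EXISTS (SELECT 1 FROM Flatfile_Inventory f WHERE f.UniqueID = i.UniqueID);

    -- The stored procedure's normal insert of the latest
    -- Flatfile_Inventory rows then follows.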