I am updating a db with data from a file. The data contains new info, info that has been updated, and info that is to be removed from the db.
Now I was wondering which approach results in better performance/shorter execution time:
1. first update existing values, then insert new ones, and last delete cancelled data
or
2. delete cancelled data and data that will be updated, then insert new and updated info
I get all this data from a file. In that file all rows have the same layout, and one column defines whether the row is new, updated, or to be deleted (thus the updated rows also include the information that has not been altered). A sketch of both orderings is below.
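For illustration, a minimal sketch of the two orderings, assuming the file has first been bulk-loaded into a staging table (Staging, with a RowAction column of 'N'/'U'/'D'; all names here are hypothetical):

-- Approach 1: update existing, insert new, delete cancelled
UPDATE t
SET    t.SomeValue = s.SomeValue
FROM   TargetTable t
JOIN   Staging s ON s.KeyCol = t.KeyCol
WHERE  s.RowAction = 'U';

INSERT INTO TargetTable (KeyCol, SomeValue)
SELECT KeyCol, SomeValue FROM Staging WHERE RowAction = 'N';

DELETE t
FROM   TargetTable t
JOIN   Staging s ON s.KeyCol = t.KeyCol
WHERE  s.RowAction = 'D';

-- Approach 2: delete both cancelled and updated rows, then insert new and updated
DELETE t
FROM   TargetTable t
JOIN   Staging s ON s.KeyCol = t.KeyCol
WHERE  s.RowAction IN ('U', 'D');

INSERT INTO TargetTable (KeyCol, SomeValue)
SELECT KeyCol, SomeValue FROM Staging WHERE RowAction IN ('N', 'U');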
Does anyone know how to improve performance on INSERT statements? I have to run a script of several thousand INSERT statements, but it just takes too long. Any good tips to improve performance?
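One common cause is that each INSERT commits as its own transaction, forcing a log flush per row. Wrapping batches in an explicit transaction, or using BULK INSERT when the data comes from a file, usually helps a lot. A minimal sketch, with hypothetical names:

-- Commit in batches so the log is flushed once per batch,
-- not once per row:
BEGIN TRANSACTION;
INSERT INTO TargetTable (Col1, Col2) VALUES (1, 'a');
INSERT INTO TargetTable (Col1, Col2) VALUES (2, 'b');
-- ... more rows ...
COMMIT TRANSACTION;

-- For data that already lives in a file, BULK INSERT is usually far faster
-- than individual statements:
BULK INSERT TargetTable FROM 'C:\data\rows.txt' WITH (FIELDTERMINATOR = ',');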
Hey there :) Sorry if I'm asking a dumb question here, but I'm still quite new to MS SQL, so this problem might appear larger to me than it really is. I'm trying to create a performance test environment for a Ruby on Rails and Mongrel setup with an MS SQL Server 2000.

The adapter, mssqlclient, uses some kind of "conversion" for unicode; here's a quote from the homepage: "Automatically translate from proper UTF-16LE nvarchar fields in the database to UTF-8 Ruby Strings you can display in your application." As far as the local DB designer knows, we're not using UTF-16LE nvarchar fields, unless it's something that happens implicitly.

Either way, this is how a query from the mssqlclient adapter might look:

SELECT TOP 1 * FROM Item WHERE (Item.Itemnumber = N'45783745')

Response time the first couple of times was upwards of 20 seconds; after the SQL server has "awoken from its slumber", it's roughly 4 seconds. Omitting the "N" from the WHERE clause, response time is in milliseconds (as one would expect, regardless of the fact that there's currently >2.5 million items in the table).

Any tips on how to resolve this? Is the SQL statement bad, or is it a question of configuring SQL Server correctly?

Thanks in advance for any help,
Daniel Buus :)
--
http://www.rhesusb.dk
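The symptom matches a data type precedence issue: N'...' is an nvarchar literal, and nvarchar outranks varchar, so if Itemnumber is varchar, SQL Server 2000 converts the column side of the comparison and can no longer seek the index. A sketch of the effect and a workaround (the varchar(20) length is an assumption):

-- nvarchar literal vs. varchar column: the column gets converted,
-- so the index on Itemnumber is scanned rather than seeked
SELECT TOP 1 * FROM Item WHERE Item.Itemnumber = N'45783745'

-- Casting the literal back to the column's type (or getting the adapter
-- to emit a plain literal) restores the index seek:
SELECT TOP 1 * FROM Item WHERE Item.Itemnumber = CAST(N'45783745' AS varchar(20))

-- Alternatively, make the column nvarchar so the types match to begin with.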
I have a number of complex search stored procedures that use the following syntax to try to simplify the code.
WHERE @SearchParam IS NULL OR SearchCol = @SearchParam
Unfortunately, it appears that this is really inefficient as far as the database is concerned.
If I run the following query on the AdventureWorks database (SQL Server 2005 with SP2 and fixes up to v3054)
SET STATISTICS IO ON

DECLARE @CustomerID int
SET @CustomerID = 1

SELECT SalesOrderID FROM Sales.SalesOrderHeader WHERE @CustomerID IS NULL OR CustomerID = @CustomerID

SELECT SalesOrderID FROM Sales.SalesOrderHeader WHERE CustomerID = @CustomerID
I see that the first select results in 45 logical reads, whereas the second results in only 2 logical reads.
Does anyone have any idea how I can get the benefit of a search procedure without loads of IF blocks or dynamic SQL, and without this major performance issue?
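One common workaround on SQL Server 2005 is parameterized dynamic SQL: build only the predicates that are actually needed, so each combination of supplied parameters gets its own plan. A minimal sketch against the AdventureWorks query above:

DECLARE @sql nvarchar(max), @CustomerID int;
SET @CustomerID = 1;

SET @sql = N'SELECT SalesOrderID FROM Sales.SalesOrderHeader WHERE 1 = 1';
IF @CustomerID IS NOT NULL
    SET @sql = @sql + N' AND CustomerID = @CustomerID';

-- Parameterized, so plans are reused per predicate combination and no
-- user values are concatenated into the string:
EXEC sp_executesql @sql, N'@CustomerID int', @CustomerID = @CustomerID;

Adding OPTION (RECOMPILE) to the single-statement version is another route, though on 2005 it does not embed the parameter values the way later versions do, so the dynamic SQL approach tends to help more there.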
We have an issue where a cube hasn't been designed properly - when someone queries it with Excel, it is doing a mega-crossjoin. When anyone else tries to do *anything* on the AS server (connect with management studio, etc.) it just hangs. We have to either track down the person running the query (via the flight recorder), or restart the service. Obviously the correct fix is to change the design of the cube - I plan on doing it asap. But it brings up this important question - is there a setting I can change to allow others to use the box while this is going on? Maybe some thread isolation, or parallelism? I'm just throwing out ideas, as I haven't experienced this part of AS administration yet.
Hello everyone, I am hoping someone can help me with this problem. I will say up front that I am not a SQL Server DBA, I am a developer. I have an application that sends about 25 simultaneous queries to a SQL Server 2000 Standard Edition SP4 running on Windows 2000 Server with 2.5 GB of memory. About 11 of these queries are over views (all over the same table), and these queries are all done from JDBC, but I am not sure that matters. Anyway, initially I had no problem with these queries on the tables and the views with about 4 years of information (I don't know how many rows offhand). Then we changed the tables to replicated tables from another server; that increased the amount of data to 15 years' worth and also required a simple inner join on 2 columns to another table for those views.

Now here is the issue. After times of inactivity, or at other times during the day with enough time between my test query runs, I get what looks like blocking behavior on the queries to the views (remember these all go to the same tables). I run my 25 queries and the 11 view queries all take about 120 seconds each to return (they all are within milliseconds of each other, like they all sat there and then were released for processing at the same time). The rest of the queries are fine. Now if I turn around and immediately run the 25 queries again, they all come back in a few seconds, which is the normal amount of time. Also, if I run a query on one of the views first (just one) and then run the 25 queries, they all come back in a few seconds as well.

This tells me that some caching must be involved, since the times are so different between identical queries, but I would expect that one of the queries would cache and thus take longer while the other 10 would be fast, not all block for 2 minutes. What is more puzzling is that this behavior didn't occur before, where now the only differences are:

1) 3 times more data (but that shouldn't cause a difference from 3 seconds to 120, and all tables have been through the index wizard with a SQL trace file to recommend indexes)
2) There is now a join between 2 tables where there wasn't before
3) The tables are replicated throughout the day.

I would appreciate any insight into this problem, as 120 seconds is way too long to wait. Thanks in advance.
Chris
I changed from Access 97 to Access XP and I have immense performance problems.

Details:
- Access XP MDB with Jet 4.0 (no ADP project)
- Linked tables to SQL Server 2000 over ODBC

I used the SQL Profiler to watch the T-SQL commands which Access creates (who creates the commands?) and noticed:

1) Some Jet SQL commands with JOINs and WHERE clauses are translated very well, using sp_prepexec and sp_execute, including a SQL statement similar to the one in Jet.
2) Other Jet SQL commands with JOINs and WHERE clauses are translated very badly, because the join wasn't sent as a join; Access collects the data of the individual tables separately. Access sends much too much data over the network, it is a disaster!
3) In Access 97 the same command was interpreted well.

Could it be possible that Access uses a wrong protocol stack, perhaps Jet to OLEDB, OLEDB to ODBC, ODBC to SQL Server, or Jet to ODBC, ODBC to OLEDB, OLEDB to SQL Server, instead of Jet to ODBC and ODBC direct to SQL Server?

Does anyone know anything about:
- the command interpreter of Jet/ODBC, its parameters, and how to influence the command interpreter
- the protocol stack of a Jet 4.0 / ODBC / SQL Server application

Thanks, Andreas
I am curious what major differences there are between these two versions. I am trying to decide whether or not to purchase the SQL 6.5 training kit from Microsoft. If the code and utilities are the same, then I could probably still learn from the 6.5 version. Any thoughts or suggestions will be greatly appreciated.
I am having the following problem AFTER converting to VS2008 from VS2005 and SQLCE 3.5 from 3.01:
SQL CE db file has a table called Court0 with various columns of type float. I populate the values by copying floats from another table/tables. I do this via ado.net using this code snippet:
We have a database where, when an update is released (and this happens very often), the release notes don't cover most of the actual changes. Every time, groups of our custom queries and reports get broken due to database changes. Does anyone know how to compare two databases and get a report of the differences between them? I can have the two versions either on the same server or on different servers, if that makes a difference.
I'm hoping for something where you input @oldversion, @newversion
I have just converted some Access VBA code to a sproc. I'm finding that for some reason the rounding is different, e.g. ROUND(17 * 97995 / 1000, 2); the value is 1665.915 before rounding.

SQL sproc: 1665.91 (rounds down). ADP VBA: 1665.92 (rounds up).
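One likely explanation (an assumption, since the column types aren't shown): if the T-SQL side computes in float, ROUND operates on the binary approximation of 1665.915, which is slightly below .915, so it rounds down; with an exact decimal, T-SQL rounds half away from zero and gives .92. A quick demonstration:

-- float: 1665.915 is stored as roughly 1665.914999999..., so ROUND gives 1665.91
SELECT ROUND(CAST(17 AS float) * 97995 / 1000, 2);

-- decimal: the value is exact, and ROUND rounds half away from zero: 1665.92
SELECT ROUND(CAST(17 AS decimal(18,4)) * 97995 / 1000, 2);

Casting to decimal with an explicit scale before rounding usually makes the sproc match the VBA result.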
/* Objects in Company1 Missing in Company2 */
Select 'Table Objects in Company1 but are not in Company2'

select Left(a.name,30), a.refdate
from sysobjects a
Where a.xtype = 'U'
  and a.name like 'TBL%'
  and Not Exists (Select 1 From ____________.dbname.dbo.sysobjects b
                  where a.name = b.name)

/* Objects in Company2 Missing in Company1 */
Select 'Table Objects in Company2 but are not in Company1'

select Left(a.name,30), a.refdate
from ____________.dbname.dbo.sysobjects a
Where a.xtype = 'U'
  and a.name like 'TBL%'
  and Not Exists (Select 1 From sysobjects b
                  where a.name = b.name)

/* Column Differences */
Select 'Column Differences between like named tables'

select Left(x.TabName,30) as TableName, Left(x.ColName,30) as ColumnName,
       Left(x.DataType,15) as Company1DataType, x.length as Company1Length, x.refdate as Company1RefDate,
       Left(y.DataType,15) as Company2DataType, y.length as Company2Length, y.refdate as Company2RefDate
from (Select a.name as TabName, b.name as ColName, b.length, c.name as DataType, a.refdate
      from sysobjects a, syscolumns b, systypes c
      where a.id = b.id
        and b.xusertype = c.xusertype
        and a.xtype = 'U'
        and a.name like 'TBL%') As x,
     (Select a.name as TabName, b.name as ColName, b.length, c.name as DataType, a.refdate
      from ____________.dbname.dbo.sysobjects a, ____________.dbname.dbo.syscolumns b, ____________.dbname.dbo.systypes c
      where a.id = b.id
        and a.xtype = 'U'
        and b.xusertype = c.xusertype
        and a.name like 'TBL%') As y
Where x.TabName = y.TabName
  and x.ColName = y.ColName
  and (x.length <> y.length or x.DataType <> y.DataType)

/* Column Differences */
Select 'Column in Company1.com not in Company2'

Select Left(a.name,30) as TableName, Left(b.name,30) as ColumnName, b.length, c.name, a.refdate
from sysobjects a, syscolumns b, systypes c
where a.id = b.id
  and b.xusertype = c.xusertype
  and a.xtype = 'U'
  and a.name like 'TBL%'
  and Not Exists (Select 1
                  from ____________.dbname.dbo.sysobjects d, ____________.dbname.dbo.syscolumns e
                  where d.id = e.id
                    and a.xtype = 'U'
                    and a.name like 'TBL%'
                    and a.name = d.name
                    and b.name = e.name)
Order by 1, 2

/* Column Differences */
Select 'Column in Company2 not in Company1.com'

Select Left(a.name,30) as TableName, Left(b.name,30) as ColumnName, b.length, c.name, a.refdate
from ____________.dbname.dbo.sysobjects a, ____________.dbname.dbo.syscolumns b, ____________.dbname.dbo.systypes c
where a.id = b.id
  and b.xusertype = c.xusertype
  and a.xtype = 'U'
  and a.name like 'TBL%'
  and Not Exists (Select 1
                  from sysobjects d, syscolumns e
                  where d.id = e.id
                    and a.xtype = 'U'
                    and a.name like 'TBL%'
                    and a.name = d.name
                    and b.name = e.name)
Order by 1, 2

--Select 'Table Objects that are still in use in both Company2 and Company1'
--select Left(a.name,30), a.refdate from sysobjects a, ____________.dbname.dbo.sysobjects b
--where a.name = b.name and a.xtype = 'U'
What's the difference between using CREATE TABLE #TempTable and DECLARE @Table TABLE for temp tables, and are there any advantages or disadvantages to using one over the other? Thanks
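For concreteness, the two forms side by side (a minimal sketch; column definitions are arbitrary):

-- Temp table: lives in tempdb, can have indexes added after creation, keeps
-- column statistics, is visible to nested stored procedures, and its changes
-- are undone by ROLLBACK
CREATE TABLE #TempTable (ID int PRIMARY KEY, Name varchar(50));
INSERT INTO #TempTable VALUES (1, 'a');
DROP TABLE #TempTable;

-- Table variable: also stored in tempdb, but has no statistics (the optimizer
-- assumes very few rows), is scoped to the batch/procedure, and its data is
-- NOT rolled back with the transaction
DECLARE @Table TABLE (ID int PRIMARY KEY, Name varchar(50));
INSERT INTO @Table VALUES (1, 'a');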
I have a server with two test instances of a database. I have a query script which creates a temp table, inserts 29 rows, performs 4 update queries to add counts, and then dumps out the results. This entire script runs in 1.33 minutes on one instance and 2.5 minutes on the other. On the production server it now runs in 9 seconds. If I run any one of the test updates individually, it executes in under 2 seconds, just like on the production server. The results are repeatable.
All are SQL 7 with all service packs on NT4 SP6. Both test databases are backups of production from last week. I suspect some kind of caching/buffer problem, but I do not know what to look for. I am not a DBA, so I have no idea what role tempdb may play in this.
Can anyone give us ideas on where to look for the performance difference? Will our impending upgrade to SQL2K solve this problem or make it worse? Any ideas would be appreciated.
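Not an answer, but one variable worth ruling out first: statistics on a restored copy can be stale or sampled differently than on production, which can change the update plans. On SQL 7 you can refresh them in each test database:

-- refresh statistics for every table in the current database
EXEC sp_updatestats;

-- per-table, with a full scan, if one specific update is the slow step
-- (table name here is hypothetical):
UPDATE STATISTICS dbo.SomeTable WITH FULLSCAN;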
Is there any tool to find the differences between two databases? I would like to know the differences between the development server and the production server: if the developers create any new objects, I want to migrate them to the production server.
Can we do it in SQL Server 2000, or do we need a separate tool?
I am kind of baffled. I have a table with a varchar(8) column in 2000 and the same in 6.5. When I insert into 2000 with a data length of more than 8 chars via ColdFusion, it fails. The same ColdFusion program inserts into the 6.5 table; it truncates the data but does not fail. Does anyone know why this happens? Thanks, Newbie.
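A likely explanation is the ANSI_WARNINGS session setting: when it is ON (the default for most modern drivers against SQL Server 2000), an over-length insert is an error; when OFF (the old 6.5-style behavior), the data is silently truncated. A quick demonstration:

CREATE TABLE #t (col varchar(8));

SET ANSI_WARNINGS ON;
INSERT INTO #t VALUES ('123456789');  -- fails: "String or binary data would be truncated."

SET ANSI_WARNINGS OFF;
INSERT INTO #t VALUES ('123456789');  -- succeeds, silently stored as '12345678'

DROP TABLE #t;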
I have written the SQL below and need to change the CASE. I need to say that if today's date (GETDATE()) is within 4 weeks either side of sv_latest_appraisal, then the status should be 'Outstanding'. Can this be done in SQL? I know it is very specific, so I'm not sure. Please help! Sam
SELECT Employee.FORENAME AS Forename,
       Employee.SURNAME AS Surname,
       Employee.LOCATION AS Location,
       Employee.DEPARTMENT AS Department,
       Employee.STARTDATE AS Startdate,
       Sv_latest_appraisal.NEXT_APP AS Next_app,
       Sv_latest_appraisal.USR_EAR_TYPE AS Usr_ear_type,
       Sv_latest_appraisal.USR_EAR_TYPE_NEW AS Usr_ear_type_new,
       CASE
           WHEN Sv_latest_appraisal.NEXT_APP = getdate() THEN 'DUE TODAY'  -- tested first; the <= branch would otherwise swallow it
           WHEN Sv_latest_appraisal.NEXT_APP <= getdate() THEN 'OVERDUE'
           WHEN Sv_latest_appraisal.NEXT_APP >= getdate() THEN 'NOT DUE'
           ELSE 'UNKNOWN'
       END
FROM (dbo.EMPLOYEE AS Employee
      INNER JOIN dbo.SV_latest_appraisal
          ON Employee.EMPLOY_REF = Sv_latest_appraisal.EMPLOY_REF)
INNER JOIN dbo.JOB AS Job
    ON Employee.JOB_REF = Job.JOB_REF
WHERE (Employee.LEAVER = 0)
  AND (Employee.LOCATION LIKE 'GE')
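To answer the 4-week question: yes, a branch testing whether GETDATE() falls within 4 weeks of the appraisal date can be added with DATEADD. A sketch, assuming "within 4 weeks either side" is the intent; since CASE evaluates branches in order, placing it first makes it win over OVERDUE/NOT DUE where they overlap:

CASE
    WHEN getdate() BETWEEN DATEADD(week, -4, Sv_latest_appraisal.NEXT_APP)
                       AND DATEADD(week,  4, Sv_latest_appraisal.NEXT_APP)
        THEN 'OUTSTANDING'
    WHEN Sv_latest_appraisal.NEXT_APP <= getdate() THEN 'OVERDUE'
    WHEN Sv_latest_appraisal.NEXT_APP >= getdate() THEN 'NOT DUE'
    ELSE 'UNKNOWN'
END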
I want to generate a query that returns the following results with an extra column named 'KMDifference', which is the difference between the previous day's 'ArrivalKM' and today's 'DepartureKM'.
So I need to find the difference between two consecutive records and put the result in a new column called Downtime. For example:
Record 2 Start (2/9/15 13:29:03) - Record 1 End (2/9/15 13:28:46) = 0:00:17
Record 3 Start (2/9/15 13:29:17) - Record 2 End (2/9/15 13:29:12) = 0:00:05
Record 4 Start (2/9/15 13:29:27) - Record 3 End (2/9/15 13:29:21) = 0:00:06
and so on…
Also what do I do about the 1st record since there is no previous record to subtract from?
So far I have this code in my query to generate my table (End is a reserved word, so it needs brackets):

SELECT [Start], [End] FROM group_table3
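If this is SQL Server 2012 or later, LAG does this directly, and the first record simply gets NULL (which answers the first-record question: leave it NULL, or substitute a default with ISNULL). A minimal sketch, assuming Start and End are datetime columns:

SELECT
    [Start],
    [End],
    -- downtime = this record's Start minus the previous record's End;
    -- LAG returns NULL for the first row, so Downtime is NULL there
    CONVERT(varchar(8),
            DATEADD(second,
                    DATEDIFF(second, LAG([End]) OVER (ORDER BY [Start]), [Start]),
                    0),
            108) AS Downtime   -- formatted as hh:mm:ss
FROM group_table3;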
I am trying to test some data handling between two different versions of an application.
I have restored the database schema twice, once as DB_old and once as DB_new.
I import a transaction using the new application into DB_new and I import the SAME transaction into the DB_old using the old version of application.
I then have to eyeball the data in SQL Query Analyzer to try to identify problems where the fields have received different values.
I have done this by running a select statement twice, telling it to use each of the databases in turn, and then viewing the results in two grids. There are a lot of columns, so I have to do a lot of scrolling across the screen to do the comparison, and since the view is in two separate grids I have to hop back and forth and click the scroll bars, etc.
It seems like there has to be a better way. I don't suppose there is a way to lock the two grids so they both scroll together is there?
I was thinking maybe I could insert each of the selects into a temporary table and then do some kind of comparison to identify which values were different in each column. Some of the columns will have differences, like the timestamp, but if I could somehow identify which columns were different then I could eyeball them to identify which of those were okay to be different and which of them were actually bugs from the changed application version.
I have no idea how to identify those individual columns with different data values or even where to start.
Just so you understand better what I am doing now, here is the query I am running that I then eyeball:

use DB_new
select * from claim where claim_id = 35144
use DB_old
select * from claim where claim_id = 35144
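If the server is SQL Server 2005 or later, EXCEPT reduces the eyeballing to "does anything differ at all" (the three-part names assume both databases are on the same server):

-- returns the DB_new row only if some column value differs from DB_old
SELECT * FROM DB_new.dbo.claim WHERE claim_id = 35144
EXCEPT
SELECT * FROM DB_old.dbo.claim WHERE claim_id = 35144;

-- and the other direction
SELECT * FROM DB_old.dbo.claim WHERE claim_id = 35144
EXCEPT
SELECT * FROM DB_new.dbo.claim WHERE claim_id = 35144;

Since Query Analyzer suggests SQL Server 2000, where EXCEPT doesn't exist, BINARY_CHECKSUM(*) on each side gives a quick same/different test there; identifying the individual differing columns still needs a column-by-column CASE comparison or a temp-table approach.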
Dear Experts, I've taken a backup of one database (which is running on SQL Server 2000), 'X', and restored it into another database (SQL Server 2005) as 'X', with the same name.
When I take the backup, the size of the database is 59.63 MB (test database); after restoring it on my local machine, on SQL Server 2005, it became 158.25 MB.
Why is there this much of a difference? Is there any architectural difference?
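Worth checking before assuming an architectural cause: the restore may simply have grown the files (for example a larger log or different autogrowth), while the actual data is the same size. Run in each database:

-- reserved vs. actually used space for data and indexes
EXEC sp_spaceused;

-- how full each database's log file is, server-wide
DBCC SQLPERF(LOGSPACE);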
Vinod
Even if you learn 1%, learn it with 100% confidence.
I am a newbie when it comes to MS SQL Server administration and I am hoping you can help me out. We are migrating from a shared web hosting platform to our own internal dedicated web / MS SQL (2005) server and have encountered an error that appears to stem from query syntax.
In our old system we could simply query via the following format:
"Select [Column] from [Table Name]"
But on the SQL Server I just set up we have to query via this format:
"Select [Column] from [Database Name].[Table Name]"
We have literally hundreds of preprogrammed queries and it would be quite difficult to change them all. Does anyone know how I can set up SQL Server so that our queries do not require the database name in the statement? I have placed the connection code below, if that helps any.
Set objConn = Server.CreateObject("ADODB.Connection")
objConn.Open "Provider=SQLOLEDB; Data Source=; Initial Catalog=; User Id=; Password="
Set objRec = Server.CreateObject("ADODB.Recordset")
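Two things to look at, as a sketch: the connection string above has an empty Initial Catalog, so sessions land in the login's default database (often master), which is why two-part names fail. Either fill in Initial Catalog with the right database, or point the login's default database at it (the login name below is hypothetical):

-- SQL Server 2005: set the login's default database so names resolve
-- without a database prefix
ALTER LOGIN webuser WITH DEFAULT_DATABASE = MyWebDb;

-- pre-2005 equivalent:
EXEC sp_defaultdb 'webuser', 'MyWebDb';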
Hello, I've got Win 2003 SBS Premium with SQL Server 2000 installed on a server machine. It works almost fine, except for the application which uses the SQL Server. The main part of the application runs fine (since the last update), but other tools of that application (the database import and the database manager for check and rebuild) don't. They hang up or kill the database. Our software developer says that these problems are in correlation with the server OS, but that there won't be any problem if we install Win 2000 Server and add an additional SQL Server 2000.

Finally, my question is: are there any differences between the SQL Server 2000 versions which are sold (a) as a single product, (b) as part of the Win 2003 SBS Premium package, and (c) as part of other Server versions?

Thanks in advance,
Martin
In previous threads I saw that in a scenario where log shipping is active, any other log backup activities should be avoided in order to keep the log chain intact. Until now we just used mirroring, full backups, and log backups. Introducing log shipping to a third server in a separate location would mean that the existing log backup jobs must be removed, or log shipping will not work.
I am glad to have read these threads before falling into that trap myself.
A question was raised by my colleague, who is responsible for system and network administration: are the log backups that will be performed by log shipping just differential? He would not like to see log shipping pushing or pulling logs that grow over time.
I named two reasons why they must be differential. A log backup truncates the existing log, which implies that the next log backup contains just the difference since the last backup. Also, transactions can only be committed once anyway, so log shipping implies that those log backups are only as big as the incoming transactions since the last log backup.
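A sketch of that behavior (file names here are hypothetical): each BACKUP LOG captures only the log records generated since the previous log backup, and by default truncates that inactive portion of the log afterwards, so the shipped files stay proportional to the transaction volume between backups:

BACKUP LOG MyDb TO DISK = 'D:\LogShip\MyDb_1.trn';  -- log since the last log backup
-- ... more transactions ...
BACKUP LOG MyDb TO DISK = 'D:\LogShip\MyDb_2.trn';  -- only what happened after MyDb_1.trn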
I'm trying to find the main differences between SQL Server 2005 64-bit and 32-bit. So far I've found some articles about TPC-C performance, but I would like to see response times or execution times for a batch or for SSIS packages.
Any information about these two versions is appreciated.