We have a view with many left joins. The original creators of this view might have been lazy or sloppy, I don't know. I have rewritten the query to use proper inner joins where required, and also nested left joins.
So rather than the following example fragment

select <many items> from A left join B on B.id_A = A.id left join C on C.id_B = B.id

this now looks like

select <many items> from A left join (B join C on C.id_B = B.id) on B.id_A = A.id
Compilation time of the original view was 18s, of the new rewritten view 4s. The performance of execution is also better (not counting the compile of course). The results of the query are identical. There are about 30 left joins in the original view.
I can imagine that the optimizer has difficulty with all these left joins. But 14s is quite a big difference. I haven't looked into detail in the execution plans yet. I noticed that in both cases the Reason for Early Termination of Statement Optimization was Time Out.
MS SQL 2008. I want to execute a delete query on certain tables in my database to delete some rows in them. The tables selected have a certain name pattern (the name ends with "Temp").
So I can do this to get a list of the table names
SELECT name FROM sys.Tables where name like '%Temp'
Now I want to check each table to see if it has a column with the name "DateStamp" and then execute a delete query as follows:
delete from TableName where DateStamp < '2010-01-01'
In other words, I need to iterate through the table names. How do I do this?
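A minimal sketch of one way to do this, using the cutoff date above and building dynamic SQL from the catalog views (the table pattern and column name come from the question; everything else is an assumption):

-- Build one DELETE per *Temp table that actually has a DateStamp column
DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql
    + N'DELETE FROM ' + QUOTENAME(SCHEMA_NAME(t.schema_id))
    + N'.' + QUOTENAME(t.name)
    + N' WHERE DateStamp < ''2010-01-01'';' + CHAR(10)
FROM sys.tables AS t
WHERE t.name LIKE '%Temp'
  AND EXISTS (SELECT 1
              FROM sys.columns AS c
              WHERE c.object_id = t.object_id
                AND c.name = 'DateStamp');

PRINT @sql;              -- inspect the generated statements first
EXEC sp_executesql @sql;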
Another classic example of something silly I must be doing
I have a "Foreach Loop Container" which uses a "Foreach ADO Enumerator". The object source is a recordset variable.
Inside the "Foreach Loop Container", a lot of things are happening - some Execute SQL Tasks, Send Mail Tasks, Script Tasks and also a couple of Sequence Containers.
Now, I am getting 2 records in the recordset, but the "Foreach Loop Container" executes just once.
For security reasons a customer wants to put a SQL database on an encrypted thumb drive (IronKey). Here's the rub though. He wants to be able to work with the data on a workstation. Then, if he takes his laptop out of the office, he wants to be able to simply plug that thumb drive into the laptop, fire up SQL on the laptop, and use that same database. Procedurally this would work, in that the database can be created so that the location is the same from both machines' viewpoints, but will the two different SQL instances allow moving the database back and forth like this?
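A sketch of the detach/attach round-trip this implies, with a hypothetical database name and drive letter (not from the question):

-- On the machine you are leaving:
EXEC sp_detach_db @dbname = N'CustomerDb';

-- On the machine you are moving to (thumb drive mounted at the same letter, e.g. E:):
CREATE DATABASE CustomerDb
ON (FILENAME = N'E:\Data\CustomerDb.mdf'),
   (FILENAME = N'E:\Data\CustomerDb_log.ldf')
FOR ATTACH;

One caveat: both instances should be the same SQL Server version, because attaching to a newer instance upgrades the database, and it can no longer be attached back to the older one.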
I am building a query. I have a table with 4 columns and need to try and put the times together. There are some inconsistencies with this, and I'm hoping to exclude them. Here is a sample table:
Function 1 = Clock In Type 1
Function 2 = Clock Out Type 1
Function 3 = Clock In Type 2
Function 4 = Clock Out Type 2
Basically what I need to do is take the time from rows with Function 1 and match it with Function 2 so I can get a total time of the clock in. Function 3 rows need to match up with Function 4 rows so I can get another set of total times. There may be more clock in rows than clock out rows, or more clock out rows than clock in rows, and there may be multiple clock ins & outs per day per employee. I'm basically trying to get totals for each Clock In/Out type.
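A sketch of one common pairing approach, with hypothetical table and column names (TimeClock, EmployeeID, [Function], EventTime) since the sample table didn't survive the post: number the Ins and Outs of each type separately and join on the sequence number, so unmatched rows simply drop out (though when the counts differ, later pairs can mis-match):

WITH Events AS (
    SELECT EmployeeID, EventTime, [Function],
           ROW_NUMBER() OVER (PARTITION BY EmployeeID, [Function]
                              ORDER BY EventTime) AS Seq
    FROM TimeClock
)
SELECT i.EmployeeID,
       CASE i.[Function] WHEN 1 THEN 'Type 1' ELSE 'Type 2' END AS ClockType,
       SUM(DATEDIFF(MINUTE, i.EventTime, o.EventTime)) AS TotalMinutes
FROM Events AS i
JOIN Events AS o
  ON o.EmployeeID = i.EmployeeID
 AND o.Seq = i.Seq
 AND o.[Function] = i.[Function] + 1   -- 1 pairs with 2, 3 pairs with 4
WHERE i.[Function] IN (1, 3)
GROUP BY i.EmployeeID, i.[Function];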
In the For Loop, how do I iterate from older flat files to newer flat files based on the files' timestamps? If there are older files in the folder, they should be processed first, before continuing with the newer ones.
I'm trying to quantify the number of times folks use SQL Server Management Studio to change client data in one of our production databases. Does SQL Server keep this statistic? How do I get to this data?
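SQL Server doesn't keep a per-application change counter out of the box. As a rough starting point (this only shows current activity, not history - counting actual changes would take SQL Server Audit or a trace filtered on the application name), the DMVs can at least show who is connected through Management Studio:

-- Sessions currently connected via SSMS
SELECT s.session_id, s.login_name, s.host_name,
       s.program_name, s.last_request_end_time
FROM sys.dm_exec_sessions AS s
WHERE s.program_name LIKE N'Microsoft SQL Server Management Studio%';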
Problem Summary: Merge Statement takes several times longer to execute than equivalent Update, Insert and Delete as separate statements. Why?
I have a relatively large table (about 35,000,000 records, approximately 13 GB uncompressed and 4 GB with page compression - including indexes). A MERGE statement pretty consistently takes two or three minutes to perform an update, insert and delete. At one extreme, updating 82 (yes 82) records took 1 minute, 45 seconds. At the other extreme, updating 100,000 records took about five minutes. When I changed the MERGE to the equivalent separate UPDATE, INSERT & DELETE statements (embedded in an explicit transaction) the entire update took only 17 seconds. The query plans for the separate UPDATE, INSERT & DELETE statements look very similar to the query plan for the combined MERGE. However, all the row count estimates for the MERGE statement are way off.
Obviously, I am going to use the separate UPDATE, INSERT & DELETE statements. The actual query plans for the four statements (combined MERGE and the separate UPDATE, INSERT & DELETE) are attached. SQL code to create the source and target tables, and the actual queries themselves, are below. I've also included the statistics created by my test run. Nothing else was running on the server when I ran the test.
Server Configuration:
SQL Server 2008 R2 SP1, Enterprise Edition
3 x Quad-Core Xeon Processor
Max Degree of Parallelism = 8
148 GB RAM
SQL Code:
Target Table:

USE TPS;
IF OBJECT_ID('dbo.ParticipantResponse') IS NOT NULL
    DROP TABLE dbo.ParticipantResponse;
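For reference, the separate-statement pattern described above looks roughly like this (a sketch with a hypothetical staging table and key column; the actual queries are in the attachment):

BEGIN TRANSACTION;

UPDATE t SET t.Response = s.Response
FROM dbo.ParticipantResponse AS t
JOIN dbo.ParticipantResponseStage AS s ON s.ParticipantID = t.ParticipantID;

INSERT INTO dbo.ParticipantResponse (ParticipantID, Response)
SELECT s.ParticipantID, s.Response
FROM dbo.ParticipantResponseStage AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.ParticipantResponse AS t
                  WHERE t.ParticipantID = s.ParticipantID);

DELETE t
FROM dbo.ParticipantResponse AS t
WHERE NOT EXISTS (SELECT 1 FROM dbo.ParticipantResponseStage AS s
                  WHERE s.ParticipantID = t.ParticipantID);

COMMIT TRANSACTION;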
Can I safely use top n select/delete in a while loop? For example:
declare @FieldVal int
while (select count(*) from @MyTempTable) > 0
begin
    select top 1 @FieldVal = FieldVal from @MyTempTable
    -- process @FieldVal then delete the row
    delete top (1) from @MyTempTable
end
I like the simplicity of the above approach as long as it's reliable and there aren't any gotchas that I may not be aware of.
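One gotcha worth noting: without an ORDER BY, the row that SELECT TOP 1 reads and the row that DELETE TOP (1) removes are not guaranteed to be the same row. A sketch of a variant that grabs and removes the row in a single atomic statement via OUTPUT (same table variable assumed):

declare @FieldVal int;
declare @Grabbed table (FieldVal int);

while 1 = 1
begin
    delete top (1) from @MyTempTable
    output deleted.FieldVal into @Grabbed;

    if @@ROWCOUNT = 0 break;   -- nothing left to process

    select @FieldVal = FieldVal from @Grabbed;
    -- process @FieldVal here
    delete from @Grabbed;      -- reset for the next iteration
end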
I have an SSIS job that dynamically loops through each server, grabbing data for typical DBA reporting, like disk space and error logs. If a server is down for whatever reason, the SSIS package fails. Is there any way I can prevent the SSIS package from failing if one of the servers is down?
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[PaymentsLog](
[Code] ....
Is there a way to look at the DatePeriod table and use the StartDate and EndDate as the periods to be used in the select statement, and then cursor through each date between these two dates and insert the data into the PaymentsLog table?
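A sketch of a set-based alternative to a cursor, with hypothetical column names on both tables: a recursive CTE expands each StartDate/EndDate pair into one row per day, which then feeds the insert.

WITH Dates AS (
    SELECT dp.PeriodID, dp.StartDate AS TheDate, dp.EndDate
    FROM dbo.DatePeriod AS dp
    UNION ALL
    SELECT d.PeriodID, DATEADD(DAY, 1, d.TheDate), d.EndDate
    FROM Dates AS d
    WHERE d.TheDate < d.EndDate
)
INSERT INTO dbo.PaymentsLog (PeriodID, PaymentDate)
SELECT PeriodID, TheDate
FROM Dates
OPTION (MAXRECURSION 0);   -- ranges longer than 100 days need this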
One of my database data files is 100 GB and the log file is 500 GB. The DB is in full recovery model and transaction log backups happen once every 6 hours. Even then, the database log file isn't reducing in size.
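A quick diagnostic sketch for this situation (substitute the real database name): check what the log is waiting on before it can be reused, and how full it actually is. Note too that a log backup only marks space inside the file as reusable; it never shrinks the physical file, which takes an explicit DBCC SHRINKFILE.

-- Why can't the log be truncated? (LOG_BACKUP, ACTIVE_TRANSACTION, REPLICATION, ...)
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDatabase';   -- hypothetical name

-- How full is each database's log file?
DBCC SQLPERF (LOGSPACE);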
I am trying to run a UNION ALL query in SQL Server 2014 on multiple large CSV files, the result of which I want to get into a table in SQL Server. Below is the query, which works in MS Access but not on SQL Server 2014:
SELECT * INTO tbl_ALLCOMBINED
FROM OPENROWSET
(
    'Microsoft.JET.OLEDB.4.0',
    'Text;Database=D:\Downloads\CSV;HDR=YES',
    'SELECT t.*, (substring(t.[week],3,4))*1 as iYEAR,
     ''SPAIN'' as [sCOUNTRY], ''EURO'' as [sCHAR],
[Code] ....
What I need is:
1] to create the resultant tbl_ALLCOMBINED table
2] transform this table using the PIVOT command with the following transformation as shown below:
PAGEFIELD: set on Level = 'Item'
COLUMNFIELD: Sale_Week (showing 1 to 52 numbers for columns)
ROWFIELD: sCOUNTRY, sCHAR, CATEGORY, MANUFACTURER, BRAND, DESCRIPTION, EAN (in this order)
DATAFIELD: 'Sale Value with Innovation'
3] Can the transformed form show more than 255 column fields, i.e. if I want to show all KPI values in the data field?
P.S: the CSVs contain the same number of columns and datatypes, but there are more than 100 columns, so I don't think it will be feasible to use a stored proc to create a table specifying that number of columns.
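For step 2], a minimal PIVOT sketch under the assumptions above (integer week numbers as column names, summing the 'Sale Value with Innovation' measure); a real query would list all 52 weeks or build the column list dynamically:

SELECT sCOUNTRY, sCHAR, CATEGORY, MANUFACTURER, BRAND, DESCRIPTION, EAN,
       [1], [2], [3], [4]   -- ... through [52]
FROM (
    SELECT sCOUNTRY, sCHAR, CATEGORY, MANUFACTURER, BRAND, DESCRIPTION, EAN,
           Sale_Week, [Sale Value with Innovation]
    FROM tbl_ALLCOMBINED
    WHERE [Level] = 'Item'
) AS src
PIVOT (
    SUM([Sale Value with Innovation])
    FOR Sale_Week IN ([1], [2], [3], [4])   -- ... through [52]
) AS p;

On the 255-column worry in 3]: a SQL Server SELECT can return up to 4,096 columns, so that ceiling comes from Access/Excel, not from SQL Server itself.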
Now with the above result, for every record I have to fire a query: Select SUM(sale), SUM(scrap), SUM(Production) from tableB where ProdID = ["ProdID from above query"]. How do I write this in a stored procedure so that I can get the required SUM columns for all the ProdIDs from the first query?
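Rather than firing one query per record, a single join and GROUP BY returns the sums for every ProdID at once. A sketch, assuming the first query's result is in a hypothetical tableA with ProdID unique (if it isn't, aggregate tableB in a derived table first):

SELECT a.ProdID,
       SUM(b.sale)       AS TotalSale,
       SUM(b.scrap)      AS TotalScrap,
       SUM(b.Production) AS TotalProduction
FROM tableA AS a
LEFT JOIN tableB AS b
       ON b.ProdID = a.ProdID
GROUP BY a.ProdID;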
I have an VB.NET web app which performs a fairly complicated SQL query. It seems in the morning, the 1st time the page is loaded (and query executed) it takes up to 10-15 seconds to complete loading. Sometimes it even times out. However anytime after that, the page loads up (even from another computer) in about 4-5 seconds. Can someone explain the reason for this and how I might fix the load times in the morning?
I have a column in a table that records when the date and time of an event took place.
Table Name: Chronicle Column Name: Created (of type DateTime)
I would like to select the Chronicle records that are between two dates (e.g. 1 May 2001 and 20 May 2001), and I would also like to select those records that are between two times (e.g. 6:00am and 1:00pm).
Does anyone know how to do this or have any pointers for me? I can see it would be easier if I had the date in one column and the time in the other. Can it be done without doing that?
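It can be done with the single column. A sketch, assuming a version with the time data type (SQL Server 2008+); on earlier versions the commented DATEPART predicate does the same job:

SELECT *
FROM Chronicle
WHERE Created >= '20010501' AND Created < '20010521'   -- 1 May through 20 May 2001
  AND CAST(Created AS time) >= '06:00'                 -- 6:00am
  AND CAST(Created AS time) <  '13:00';                -- to 1:00pm

-- Pre-2008 alternative for the time-of-day filter:
-- AND DATEPART(HOUR, Created) * 60 + DATEPART(MINUTE, Created)
--     BETWEEN 6 * 60 AND 13 * 60 - 1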
We recently upgraded from SQL 6.5 to SQL 7. I have a few .sql files that were each running around 5 - 8 minutes under 6.5. These same files now each take over 30 minutes to run. Has anybody had problems with their queries taking longer to run under 7.0? These files are quite large and are comprised of 3 - 4 batches with several queries in each batch. If anybody has any thoughts on the cause please let me know.
I'm in an unfortunate situation. We are posting information using a stored procedure to an outside SQL server connected to through a System DSN on our server (win 2003 server) using php's ODBC functions (we never had any luck connecting directly to the SQL server using php's mssql functions).
Everything is working fine, we can connect, send queries, etc ... but between 1am and 10am we receive errors when trying to execute queries (though we can connect fine).
Whoops ... forgot to get the error returned before it turned 10am ... I can post it tomorrow
I think the database is being locked, but unfortunately I know very little about MS SQL server
The people whose database we are connecting to are not being helpful ... I was hoping I could get some suggestions on what the cause might be.
Anything you can suggest would be a huge help! Thanks! - Joe
We are having a problem with Query Analyzer not connecting to SQL Server anymore. Sometimes it will, sometimes it won't. Sometimes when it does connect and you click on the databases drop-down, it may take a long time to return. Likewise, it may take a long time to open the object browser, or it may open without the database info, with just the "Common Objects" info.
I have multiple SQLDatasources on multiple pages. When I call the Select command for the datasource the query is run multiple times. I was wondering if anyone had any problems like this. The data is being read into listboxes. If you need any more info or have any specific questions, feel free to ask.
Dear Friends, I have a problem & have to solve one query. I have a table with employee time in & time out data; an employee can go out & come in frequently in a day.
I want to know how much time each employee has spent in the company per day.
Kindly do reply as soon as possible.
I am enclosing the data definition in a txt file, along with the data in the MS Excel file.
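Since the attachments didn't survive, here is a sketch against a hypothetical layout (EmployeeClock with EmpID, EventTime, and a Direction column holding 'IN'/'OUT'; SQL Server 2008+ assumed for the date type): number the INs and OUTs per employee per day, pair them by sequence, and sum the differences.

WITH Paired AS (
    SELECT EmpID,
           CAST(EventTime AS date) AS WorkDay,
           Direction,
           EventTime,
           ROW_NUMBER() OVER (PARTITION BY EmpID, CAST(EventTime AS date), Direction
                              ORDER BY EventTime) AS Seq
    FROM EmployeeClock
)
SELECT i.EmpID, i.WorkDay,
       SUM(DATEDIFF(MINUTE, i.EventTime, o.EventTime)) AS MinutesInOffice
FROM Paired AS i
JOIN Paired AS o
  ON o.EmpID = i.EmpID AND o.WorkDay = i.WorkDay AND o.Seq = i.Seq
WHERE i.Direction = 'IN' AND o.Direction = 'OUT'
GROUP BY i.EmpID, i.WorkDay;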
I have two tables in my database. I want to insert the date and no. of hours worked on that day for a particular project. So in a week, if the end user enters 7 dates and hours for each day, I have to insert those values into the table against one project only. Can anyone please help me out with how to run the insert query 7 times (in a week) with different parameters?
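One sketch, with hypothetical table and parameter names: a single INSERT with a row constructor (SQL Server 2008+) covers all seven days in one call.

-- Hypothetical schema: ProjectHours(ProjectID, WorkDate, HoursWorked)
INSERT INTO dbo.ProjectHours (ProjectID, WorkDate, HoursWorked)
VALUES (@ProjectID, @Date1, @Hours1),
       (@ProjectID, @Date2, @Hours2),
       (@ProjectID, @Date3, @Hours3),
       (@ProjectID, @Date4, @Hours4),
       (@ProjectID, @Date5, @Hours5),
       (@ProjectID, @Date6, @Hours6),
       (@ProjectID, @Date7, @Hours7);

Alternatively, the application can call a parameterized stored procedure once per day in a loop.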
I want the below query to run for 24 hours: once the insert is complete, run it again, and so on for 24 hours.
There is a way to run it every second as a job, but I want it to run only after the previous run completes. Is there a way to run the query after every complete run, and keep it in a job?
INSERT INTO [dbo].[Audit_Active]
    ([SPID],[LoginName],[HostName],[ProgramName],[Command],[LastQuery],[DBName],[ServerName])
SELECT --DISTINCT
    p.SPID, p.LogiName, p.HostName,
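A sketch of one way to get that run-after-run behaviour inside a single Agent job step: loop until 24 hours have elapsed, so each insert only starts once the previous one has finished (the wrapper proc name is hypothetical - the INSERT above would be its body):

DECLARE @stop datetime = DATEADD(HOUR, 24, GETDATE());

WHILE GETDATE() < @stop
BEGIN
    EXEC dbo.usp_Audit_Active_Insert;   -- hypothetical wrapper around the INSERT above

    WAITFOR DELAY '00:00:01';           -- optional breather between runs
END

Schedule the job to start once a day and it effectively runs back-to-back around the clock.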
For the following query, I am trying to fetch only one row (no duplicates) for each BL_ID based on the logic that if I have multiple rows for one BL_ID, then only the one which has the largest LEG_SEQ_NBR should be displayed and the other rows should be ignored.
I am using the below code to get to the above logic:
ROW_NUMBER() OVER (PARTITION BY ITIN.BL_ID ORDER BY ITIN.LEG_SEQ_NBR desc) AS RowNum
select *
FROM (
    SELECT TMIS.[BL_ID]
          ,[POL_NAME]
          ,[POD_NAME]
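To complete the pattern (a sketch - the join and remaining columns are assumptions, since the query is cut off): the ROW_NUMBER goes inside the derived table and the outer query keeps only RowNum = 1.

SELECT *
FROM (
    SELECT TMIS.[BL_ID], [POL_NAME], [POD_NAME],
           ROW_NUMBER() OVER (PARTITION BY ITIN.BL_ID
                              ORDER BY ITIN.LEG_SEQ_NBR DESC) AS RowNum
    FROM TMIS
    JOIN ITIN ON ITIN.BL_ID = TMIS.BL_ID   -- hypothetical join condition
) AS x
WHERE x.RowNum = 1;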
I want to take this XML and put it into a table with CustomerId and MatchingSetId. With this SQL, each MatchingSetId gets assigned to each CustomerId instead of retaining the relationships in the XML.
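Without seeing the XML, a sketch of the usual fix, assuming a shape where the MatchingSet elements are nested inside each Customer: shred the outer level first, then CROSS APPLY into the inner level so each MatchingSetId stays tied to its own CustomerId.

DECLARE @x xml = N'
<Customers>
  <Customer Id="1">
    <MatchingSet Id="10" /><MatchingSet Id="11" />
  </Customer>
  <Customer Id="2">
    <MatchingSet Id="20" />
  </Customer>
</Customers>';

SELECT c.n.value('@Id', 'int') AS CustomerId,
       m.n.value('@Id', 'int') AS MatchingSetId
FROM @x.nodes('/Customers/Customer') AS c(n)
CROSS APPLY c.n.nodes('MatchingSet') AS m(n);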
I have a person table with 1 billion rows on it, partitioned equally at 10 million rows per partition. The primary key constraint is a composite of an identity column and ssn( char(11) ) with the partitioning column built on the SSN.
This is built on my home grown workstation:
Microsoft Windows Server 2008 64-bit
Microsoft SQL Server 2008 64-bit
Intel 2.66 GHz quad core, 8 GB RAM
OS on RAID 1; data on 6-drive hardware/software RAID 50; transaction logs on 4-drive RAID 10
All drives SATA II / 3 Gb burst
I have updated statistics on the table, and I have 2 queries that both give a clustered index seek; one never comes back before I get impatient, the other comes back instantly, and the showplan looks the same for both queries.
SELECT *
FROM Person
WHERE PersonKey > -1 and SSN = '219-09-3987'
AND
SELECT TOP 100 PERCENT *
FROM Person
WHERE PersonKey > -1 and SSN = '219-09-3987'
Incidentally the query with the top 100 percent is the one that returns instantly.
I am puzzled:
1) Why the estimated plan looks the same
2) Why a top 100 Percent query is faster than one without it
SELECT CONVERT(char(3), dbo.tblCourses.CourseDate, 0) AS Month,
       YEAR(dbo.tblCourses.CourseDate) AS Year,
       SUM(CASE WHEN a.AttendanceStatus IN (9) THEN 1 ELSE 0 END) AS [City CCG Attended],
       SUM(CASE WHEN a.AttendanceStatus IN (3) THEN 1 ELSE 0 END) AS [City CCG DNA],
       SUM(CASE WHEN a.AttendanceStatus IN (7) THEN 1 ELSE 0 END) AS Cancelled,
       gp.CCGRegion
FROM dbo.tblGP_Practices AS gp INNER JOIN
[Code] ....
I wanted to add another column where it counts the results of the following query:
SELECT dbo.tblPatient.NHSnumber, dbo.tblPatient.DateOfBirth,
       dbo.tblPatient.DateReferralReceived, dbo.tblGP_Practices.OrganisationCode
FROM dbo.tblPatient
INNER JOIN dbo.tblGP_PatientLink ON dbo.tblPatient.PatientID = dbo.tblGP_PatientLink.PatientID
INNER JOIN dbo.tblGP_Practices ON dbo.tblGP_PatientLink.GPPracticeID = dbo.tblGP_Practices.GPPracticeID
LEFT OUTER JOIN dbo.tblAppointments ON dbo.tblPatient.PatientID = dbo.tblAppointments.PatientID
WHERE (dbo.tblAppointments.PatientID IS NULL)
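One way to fold that count in (a sketch - it assumes the count should be per GP practice, which may not match the real grouping): turn the second query into a correlated subquery against the first.

SELECT gp.CCGRegion,
       -- ... the existing aggregate columns ...
       (SELECT COUNT(*)
        FROM dbo.tblGP_PatientLink AS pl
        JOIN dbo.tblPatient AS p ON p.PatientID = pl.PatientID
        LEFT JOIN dbo.tblAppointments AS ap ON ap.PatientID = p.PatientID
        WHERE pl.GPPracticeID = gp.GPPracticeID
          AND ap.PatientID IS NULL) AS PatientsNeverSeen
FROM dbo.tblGP_Practices AS gp;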