T-SQL (SS2K8) :: Insert Causing Delay From Table To Table
Oct 28, 2014
I am trying to move data from one table to another (staging to real time) in a stored procedure.
There are no indexes or primary keys on the target table, and it still takes ages to execute (approximately 30 minutes). There are no defaults or constraints either. There is one identity int column, though.
There are some 500,000-odd rows in the target table.
table2 is initially populated (basically it will serve as a historical table for the view); temptable and table2 are similar except that table2 has two extra columns, insertdt and updatedt.
Process:
1. Get data from an existing view and insert it into temptable.
2. Truncate/delete the contents of table1.
3. Insert data into table1 by comparing temptable vs table2 (values that exist in temptable but not in table2 will be inserted).
4. Insert data into table2 that is not yet present (comparing ID in table2 and temptable).
5. UPDATE table2 rows whose field/column VALUE is not equal to temptable (meaning UNMATCHED VALUE).
* For #5: if a value in table2 (the historical table) has changed compared to temptable (the new result of the view), it must be updated, along with the updatedt field value. A hedged sketch of steps 4 and 5 follows.
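A minimal sketch of steps 4 and 5, assuming both tables share an ID key and a single comparable column called SomeValue (the real column names are not given in the post):

INSERT INTO table2 (ID, SomeValue, insertdt, updatedt)
SELECT t.ID, t.SomeValue, GETDATE(), GETDATE()
FROM temptable AS t
WHERE NOT EXISTS (SELECT 1 FROM table2 AS h WHERE h.ID = t.ID);   -- step 4: rows new to table2

UPDATE h
SET h.SomeValue = t.SomeValue,
    h.updatedt  = GETDATE()                                       -- step 5: changed rows, stamp updatedt
FROM table2 AS h
JOIN temptable AS t ON t.ID = h.ID
WHERE h.SomeValue <> t.SomeValue;

If a pattern like this is already in place and still slow, both the NOT EXISTS and the join would benefit from an index on table2.ID, which the post says does not exist yet.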
I've 2 tables as follows --> full script and data attached as Scripts.zip
CREATE TABLE [dbo].[myMenuCollection]( [menuCollection_idx] [int] NOT NULL, [parentID] [int] NULL,
[code]....
You can see the user selects 3 menus, with Menu Ids 1, 4 and 10. If the Parent Id for a menu is 0, there is only 1 record to insert. If the Parent Id for a menu != 0, we have to make sure the insert statement inserts the parent menu automatically.
Based on the photo above, 3 menus are selected, but in the back end the insert statement must insert 4 records. See Menu Id = 10: its Parent Id = 9, so we need to insert Menu Id = 9 automatically into the myInsertedMenu table.
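A rough sketch of one way to do it, assuming the selected menu ids are collected in a table variable and that myInsertedMenu mirrors the columns of myMenuCollection (only one level of parents is handled, as in the example):

DECLARE @selected TABLE (menuID int);
INSERT INTO @selected (menuID) VALUES (1), (4), (10);

INSERT INTO myInsertedMenu (menuCollection_idx, parentID)
SELECT m.menuCollection_idx, m.parentID
FROM myMenuCollection AS m
WHERE m.menuCollection_idx IN (SELECT menuID FROM @selected)           -- the selected menus
   OR m.menuCollection_idx IN (SELECT p.parentID                       -- plus their non-zero parents
                               FROM myMenuCollection AS p
                               WHERE p.menuCollection_idx IN (SELECT menuID FROM @selected)
                                 AND p.parentID <> 0);

With the example selection of 1, 4 and 10 this writes 4 rows, because the parent of Menu Id 10 (Menu Id 9) is pulled in automatically.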
I am trying to insert new records into the target table if no records exist in the source table. I am passing user-specific values for the insert, but it does not insert any values, nor does it throw any errors. The insert needs to occur in the LOAN_GROUP_INFO table, i.e. the target table.
MERGE INTO LOAN_GROUP_INFO AS TARGET
USING (SELECT LGI_GROUPID
       FROM LOAN_GROUPING
       WHERE LG_LOANID = 22720 AND LG_ISACTIVE = 1) AS SOURCE
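A hedged note rather than a definitive fix: if the USING query returns zero rows, MERGE has no source rows at all, so neither WHEN MATCHED nor WHEN NOT MATCHED BY TARGET ever fires and nothing is inserted. If the intent really is "insert when the source has no matching rows", an existence test may be closer to it (the column list and @UserGroupId below are placeholders for the user-specific values mentioned in the post):

IF NOT EXISTS (SELECT 1 FROM LOAN_GROUPING
               WHERE LG_LOANID = 22720 AND LG_ISACTIVE = 1)
BEGIN
    INSERT INTO LOAN_GROUP_INFO (LGI_GROUPID)   -- plus whatever other columns apply
    VALUES (@UserGroupId);                      -- hypothetical user-supplied value
END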
What I am trying to do: extract the data from a SQL table, insert it into an email body and email it to the user. I found a good article on the Internet and followed all its steps as written, but I am still getting an error.
Here is the link : [URL] ....
But I am getting this error:
Error: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.FormatException: Index (zero based) must be greater than or equal to zero and less than the size of the argument list. at System.Text.StringBuilder.AppendFormat(IFormatProvider provider, String format, Object[] args) at System.String.Format(IFormatProvider provider, String format, Object[] args) at ST_7f59d09774914001b60a99a90809d5c5.csproj.ScriptMain.Main()
I have created a table named Table with name as varchar and id as int. I started inserting rows like: insert into Table values ('arun',20). Yes, that row was inserted. Now I have the values "arun's", 50. With insert into Table values('arun's',20), SQL Server gives me an error instead of inserting the row. How would you solve this problem?
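Doubling the embedded single quote escapes it; passing the value as a parameter avoids the problem altogether. A quick sketch using the values from the post (the table name is bracketed because Table is an awkward identifier, and the varchar length is an assumption):

INSERT INTO [Table] (name, id) VALUES ('arun''s', 50);

-- or, parameterized:
EXEC sp_executesql N'INSERT INTO [Table] (name, id) VALUES (@n, @i)',
                   N'@n varchar(50), @i int',
                   @n = 'arun''s', @i = 50;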
I'm inserting from TempAccrual to VacationAccrual. It works nicely; however, if I run this script again it will insert the same values into VacationAccrual again. How do I block that? If there is a small change in one of the columns in TempAccrual, then the insert should be allowed. Here is my query:
INSERT INTO vacationaccrual (empno, accrued_vacation, accrued_sick_effective_date, accrued_sick, import_date)
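One common way to block the re-insert is to write only the rows that do not already exist in VacationAccrual, comparing every column that matters. A sketch using the column list from the INSERT above (treating empno as the identifying column is an assumption):

INSERT INTO vacationaccrual (empno, accrued_vacation, accrued_sick_effective_date, accrued_sick, import_date)
SELECT t.empno, t.accrued_vacation, t.accrued_sick_effective_date, t.accrued_sick, t.import_date
FROM TempAccrual AS t
WHERE NOT EXISTS (SELECT 1
                  FROM VacationAccrual AS v
                  WHERE v.empno = t.empno
                    AND v.accrued_vacation = t.accrued_vacation
                    AND v.accrued_sick_effective_date = t.accrued_sick_effective_date
                    AND v.accrued_sick = t.accrued_sick);

If any of the compared columns allow NULLs, the comparisons need ISNULL wrappers or explicit IS NULL checks; import_date is left out of the comparison on the assumption that it changes on every run.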
For reasons that are not relevant (though I explain them below *), I want, for all my users whatever their privilege level, an SP which creates and inserts into a temporary table and then another SP which reads and drops the same temporary table.

My users are not able to create dbo tables (e.g. dbo.tblTest), but are permitted to create tables under their own user (e.g. MyUser.tblTest). I have found that I can achieve my aim by using code like this . . .

SET @SQL = 'CREATE TABLE ' + @MyUserName + '.' + 'tblTest(tstID DATETIME)'
EXEC (@SQL)
SET @SQL = 'INSERT INTO ' + @MyUserName + '.' + 'tblTest(tstID) VALUES(GETDATE())'
EXEC (@SQL)

This becomes exceptionally cumbersome for the complex INSERT & SELECT code. I'm looking for a simpler way. Simplified down, I am looking for something like this . . .

CREATE PROCEDURE dbo.TestInsert AS
CREATE TABLE tblTest(tstID DATETIME)
INSERT INTO tblTest(tstID) VALUES(GETDATE())
GO
CREATE PROCEDURE dbo.TestSelect AS
SELECT * FROM tblTest
DROP TABLE tblTest

In the above example, if the SPs are owned by dbo (as above), CREATE TABLE & DROP TABLE use MyUser.tblTest while INSERT & SELECT use dbo.tblTest. If the SPs are owned by the user (e.g. MyUser.TestInsert), it works correctly (MyUser.tblTest is used throughout) but I would have to have a pair of SPs for each user.

* I have an MS Access ADP front end linked to a SQL Server database. For reports with complex datasets, it times out. Therefore it suits my purposes to create a temporary table first and then to open the report based on that temporary table.
I have a table with call data (ContactID, Queues Entered, Call Status, Date & Time Stamps etc). Each entry relating to a contact ID goes onto a new row. The first row for a contact is the date and time it is created. It then captures the queue (or queues) it enters before it is answered. Finally, it captures when the call is released (Completed).
I'm trying to link all this data into one single row per contact ID to make it easier to report on.
I started off by using DISTINCT to pull back all of the Contact IDs. I then used a LEFT JOIN to pull back the date and time of creation, and created a further LEFT JOIN to pull back the first queue it entered, and so on.
When I did this, I started getting duplicates. This is because some calls enter more than one queue.
How can I do this so that there is only one ContactID per row? Also, for the queue, is there anything I can do to ensure it pulls back the first queue the call enters? (These are time stamped.) Subsequently, I would then need to add the second and third queues it enters in other columns. (A call can enter a maximum of 3 queues.)
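One hedged approach, assuming a source shaped roughly like CallData(ContactID, Queue, EventTime): number each queue entry per contact by its timestamp, then fold the first three into columns on a single row.

;WITH q AS
(
    SELECT ContactID,
           Queue,
           ROW_NUMBER() OVER (PARTITION BY ContactID ORDER BY EventTime) AS rn
    FROM CallData
    WHERE Queue IS NOT NULL            -- only the queue-entry rows
)
SELECT ContactID,
       MAX(CASE WHEN rn = 1 THEN Queue END) AS Queue1,
       MAX(CASE WHEN rn = 2 THEN Queue END) AS Queue2,
       MAX(CASE WHEN rn = 3 THEN Queue END) AS Queue3
FROM q
GROUP BY ContactID;

This result can then be LEFT JOINed back to the creation and completion rows on ContactID, which avoids the duplicates that the multi-queue calls were causing.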
I would like to create a procedure which creates views, taking as parameters the table name and a field value (@Dist).
However, I still receive the 'must declare the scalar variable "@Dist"' error message, although I use sp_executesql to execute the parameterized query.
The code is below.
ALTER Procedure [dbo].[sp_ViewCreate]
/* Input Parameters */
@TableName Varchar(20),
@Dist Varchar(20)
AS
Declare @SQLQuery AS NVarchar(4000)
Declare @ParamDefinition AS NVarchar(2000)
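The usual cause of that error is that sp_executesql only sees variables handed to it through the parameter definition; the outer @Dist is invisible inside the dynamic string unless it is passed in. For a plain parameterized query the pattern looks roughly like this (the filtered column name, District, is an assumption):

SET @ParamDefinition = N'@Dist Varchar(20)';
SET @SQLQuery = N'SELECT * FROM ' + QUOTENAME(@TableName) + N' WHERE District = @Dist';
EXEC sp_executesql @SQLQuery, @ParamDefinition, @Dist = @Dist;

For CREATE VIEW, though, the value cannot stay a parameter (a view definition cannot reference a local variable), so it has to be embedded as a literal in the view text:

SET @SQLQuery = N'CREATE VIEW ' + QUOTENAME('vw_' + @Dist) + N' AS SELECT * FROM '
              + QUOTENAME(@TableName) + N' WHERE District = ''' + REPLACE(@Dist, '''', '''''') + N'''';
EXEC sp_executesql @SQLQuery;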
I'm running SQL server 2000 sp1. I created a stored procedure that (1) drops a table, (2) recreates it with a "select into" statement, (3) alters the table by adding a field, and then (4) updates that field.
The trouble I'm having is that when I execute the stored procedure I get an error stating that I have an "invalid column name" between steps (2) and (3). It seems as though when I drop the table in step (1), the entire procedure wants to re-compile and it can't get past step (4) because the table hasn't been altered yet.
I've noticed a similar problem in editing stored procedures when they refer to tables or fields that don't exist yet because WITHIN the procedure they are created/modified. I'm not able to get a successful syntax check and therefore not able to save my work.
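The usual workaround is to hide the reference to the not-yet-existing column from compile-time checking by running that statement as dynamic SQL. A minimal sketch with placeholder table and column names (the real ones come from the procedure in question):

-- step 3: add the field
ALTER TABLE dbo.MyWorkTable ADD NewFlag int NULL;

-- step 4: update it via dynamic SQL, so the procedure compiles even
-- though NewFlag does not exist when the procedure is created
EXEC ('UPDATE dbo.MyWorkTable SET NewFlag = 1');

The same trick gets the procedure past the syntax check when saving it, since the dynamic string is not resolved until it runs.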
I would like to return the nearest date from the other table for each ID in Table B, for example:
ID W001 in table B should return ID A002, CreatedDatetime: 2014-06-03 20:05:48.000
ID W002 in table B should return ID A004, CreatedDatetime: 2014-06-04 01:05:48.000
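One hedged way to do it, with assumed names TableA(ID, CreatedDatetime) and TableB(ID, EventDatetime): for each row in Table B, take the Table A row whose datetime is closest.

SELECT b.ID AS B_ID,
       n.ID AS Nearest_A_ID,
       n.CreatedDatetime
FROM TableB AS b
OUTER APPLY (SELECT TOP (1) a.ID, a.CreatedDatetime
             FROM TableA AS a
             ORDER BY ABS(DATEDIFF(second, a.CreatedDatetime, b.EventDatetime))) AS n;

DATEDIFF(second, ...) assumes the two dates are within a few decades of each other; switch to minute or day granularity for wider ranges.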
I have a table of hotel guests with their stay start date and stay end date. I would like to insert into a new table the original information plus a row for every day in between.
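A sketch using a recursive CTE, with assumed names HotelStays(GuestID, StartDate, EndDate) for the source and HotelStayDays(GuestID, StayDate) for the target:

;WITH days AS
(
    SELECT GuestID, StartDate AS StayDate, EndDate
    FROM HotelStays
    UNION ALL
    SELECT GuestID, DATEADD(day, 1, StayDate), EndDate
    FROM days
    WHERE StayDate < EndDate
)
INSERT INTO HotelStayDays (GuestID, StayDate)
SELECT GuestID, StayDate
FROM days
OPTION (MAXRECURSION 0);   -- stays longer than 100 days would otherwise hit the default recursion limit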
I have a SQL snippet from a 3rd-party application that will not complete its transaction. The SELECT statement executes but does not finish; instead the session just sits in AWAITING COMMAND for 1000 seconds and then dies, killing the UPDATE statement that is supposed to follow.
I am parsing a file, and along the flow I use a conditional split. One path of the split goes to the primary table (with IDENTITY values). The rest of the paths go to tables with a FOREIGN KEY to the primary table.
It seems that SSIS is trying to insert the rows at the same time (which makes sense) but this is causing a problem with the secondary tables and their FK constraint since the primary table is not yet written.
Is there a way to delay the secondary tables until the primary table is done?
(I guess one way is to run through the file twice... once for the primary table and another for the rest but that seems wasteful to me...)
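Besides splitting the load into two passes, one workaround sometimes used at the database end (offered only as an option, not necessarily the right one here) is to disable the foreign keys for the duration of the load and re-validate them afterwards. Constraint and table names below are placeholders:

ALTER TABLE dbo.ChildTable NOCHECK CONSTRAINT FK_ChildTable_ParentTable;
-- ... run the SSIS load ...
ALTER TABLE dbo.ChildTable WITH CHECK CHECK CONSTRAINT FK_ChildTable_ParentTable;

Re-enabling WITH CHECK forces SQL Server to verify the loaded rows, so any orphaned child rows from the parallel insert still surface as an error at the end.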
I am using SQL 2005 SP1 and merge replication on a database. One of the tables is used for an audit trail and has a dynamic filter applied so that it doesn't replicate every audit trail record to every subscriber.
Our SPs insert records into the audit trail table when someone inserts a new product (for example). The problem is that just recently the insert of new products has been taking >2 seconds, which is relatively slow compared to how it performed 2 months ago.
Using Profiler I have found that it is the insert into the audit trail table that is taking all the time, and that the time is going into something replication is doing. The following statement is the culprit; it is issued by replication, but why it takes so long I don't know:
select count(*)
from [dbo].[MSmerge_repl_view_000CC979122E4C88AF27FE08CDCC84EB_B5F96F71937D4D9A949DEECFE540D0C4] [AUDIT_TRAIL_DETAIL] with (rowlock)
where [RowGUID] in (select [AUDIT_TRAIL_DETAIL].[RowGUID]
                    from inserted [AUDIT_TRAIL_HEADER],
                         [dbo].[MSmerge_repl_view_000CC979122E4C88AF27FE08CDCC84EB_B5F96F71937D4D9A949DEECFE540D0C4] [AUDIT_TRAIL_DETAIL] with (rowlock)
                    where (AUDIT_TRAIL_HEADER.ID = AUDIT_TRAIL_DETAIL.FKAuditTrailHeaderID))
The AUDIT_TRAIL_DETAIL table currently has 1.1 million rows in it.
Can anyone give me any clues as to what I should do to help improve the performance once again? Should I stop filtering on this table?
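One thing worth checking (an assumption, not a known fix for this replication view): whether the underlying detail table has an index supporting the join and IN filter in the statement above, i.e. on FKAuditTrailHeaderID with RowGUID available. The table name below is taken from the post:

CREATE NONCLUSTERED INDEX IX_AuditDetail_FKHeader
    ON dbo.AUDIT_TRAIL_DETAIL (FKAuditTrailHeaderID)
    INCLUDE (RowGUID);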
I have 2 identical tables: one contains current settings, the other contains all historical settings. I could create a union view to display the current values from table A and all historical values from table B, but that would also require a variable to hold the tblid for both select statements.
Q. Can this be done with one joined or conditional select statement?
DECLARE @tblid int = 501
SELECT 1,2,3,4,'CurrentSetting' FROM TableA ta WHERE tblid = @tblid
UNION
SELECT 1,2,3,4,'PreviousSetting' FROM Tableb tb WHERE tblid = @tblid
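A hedged sketch of rolling both tables into one view so the variable only has to be applied once, at query time (the 1,2,3,4 columns stand in for the real setting columns, as in the snippet above):

CREATE VIEW dbo.vw_AllSettings
AS
SELECT tblid, 1 AS col1, 2 AS col2, 3 AS col3, 4 AS col4, 'CurrentSetting' AS Source FROM TableA
UNION ALL
SELECT tblid, 1, 2, 3, 4, 'PreviousSetting' FROM Tableb;
GO

DECLARE @tblid int = 501;
SELECT * FROM dbo.vw_AllSettings WHERE tblid = @tblid;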
Hi... I was hoping someone could share some thoughts on the issue I am having at the moment.
Problem: When I run the package on my local machine and update a local SQL Server DB/table, new records are written OK into the table. BUT when I change my destination, i.e. write records into another physical SQL Server DB/table, no INSERT occurs. AND when I move/copy that same package onto the other server (the one that would not receive the records earlier) and run it locally, IT WORKS fine too.
What I am trying to do is very simple - add new records to a SQL Server table using SSIS. I only care about new rows, not even changed rows. Here is my logic:
1. Create an OLE DB source to RemoteSERVER - using a SELECT statement.
2. A Lookup component looks for NEW records - all rows that don't find a match go to the redirect rows (error) output.
3. Since I don't care about any rows that are matched in my lookup, I do nothing with them (trash the rows).
4. I send the error rows (NEW rows) to an OLE DB destination.
RESULTS: when I run the package locally and the destination table is also local - WORKS FINE; but when I run the package locally and the destination table is on another server (remote) - no rows are written.
The package is run through BIDS manually, so there are no security restrictions attached to it.
I am not sure what I am missing, and I do not see an error in my package either. It is not failing.
I have SQL Server Management Studio Express (SSMS Express) and SQL Server 2005 Express (SS Express) installed on my Windows XP Pro PC, which is on a Microsoft Windows NT 4 LAN. My computer administrator grants me administrator privileges on my PC. I tried to use SQLQuery.sql (see the code below) to create a table "LabResults" and insert 20 rows of values into the table. I got error messages 102 and 156 when I did "Parse" or "Execute". This is my first time applying the data type 'decimal' and inserting "VALUES" into a table, and I do not know what is wrong with the 'decimal' or how to add the "VALUES": (1) Did I get the precision and scale of the decimal wrong? (2) Do I have to use "GO" after each "VALUES"? Please help and advise.
Thanks in advance,
Scott Chang
///////////--SQLQueryCroomLabData.sql--///////////////////////////
USE MyDatabase
GO
CREATE TABLE dbo.LabResults
(SampleID int PRIMARY KEY NOT NULL,
 SampleName varchar(25) NOT NULL,
 AnalyteName varchar(25) NOT NULL,
 Concentration decimal(6.2) NULL)
GO
--Inserting data into a table
INSERT dbo.LabResults (SampleID, SampleName, AnalyteName, Concentration)
VALUES (1, 'MW2', 'Acetone', 1.00)
VALUES (2, 'MW2', 'Dichloroethene', 1.00)
VALUES (3, 'MW2', 'Trichloroethene', 20.00)
VALUES (4, 'MW2', 'Chloroform', 1.00)
VALUES (5, 'MW2', 'Methylene Chloride', 1.00)
VALUES (6, 'MW6S', 'Acetone', 1.00)
VALUES (7, 'MW6S', 'Dichloroethene', 1.00)
VALUES (8, 'MW6S', 'Trichloroethene', 1.00)
VALUES (9, 'MW6S', 'Chloroform', 1.00)
VALUES (10, 'MW6S', 'Methylene Chloride', 1.00
VALUES (11, 'MW7', 'Acetone', 1.00)
VALUES (12, 'MW7', 'Dichloroethene', 1.00)
VALUES (13, 'MW7', 'Trichloroethene', 1.00)
VALUES (14, 'MW7', 'Chloroform', 1.00)
VALUES (15, 'MW7', 'Methylene Chloride', 1.00
VALUES (16, 'TripBlank', 'Acetone', 1.00)
VALUES (17, 'TripBlank', 'Dichloroethene', 1.00)
VALUES (18, 'TripBlank', 'Trichloroethene', 1.00)
VALUES (19, 'TripBlank', 'Chloroform', 0.76)
VALUES (20, 'TripBlank', 'Methylene Chloride', 0.51)
GO
//////////Parse///////////
Msg 102, Level 15, State 1, Line 5
Incorrect syntax near '6.2'.
Msg 156, Level 15, State 1, Line 4
Incorrect syntax near the keyword 'VALUES'.
////////////////Execute////////////////////
Msg 102, Level 15, State 1, Line 5
Incorrect syntax near '6.2'.
Msg 156, Level 15, State 1, Line 4
Incorrect syntax near the keyword 'VALUES'.
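A corrected sketch of the same script: the precision and scale of decimal are separated by a comma, not a period, and SQL Server 2005 does not accept repeated VALUES clauses on one INSERT, so each row needs its own INSERT statement (the multi-row VALUES list only arrived in SQL Server 2008). Two of the rows above (10 and 15) are also missing a closing parenthesis. No GO is needed between the INSERTs; GO only separates batches.

CREATE TABLE dbo.LabResults
(SampleID int PRIMARY KEY NOT NULL,
 SampleName varchar(25) NOT NULL,
 AnalyteName varchar(25) NOT NULL,
 Concentration decimal(6,2) NULL)
GO
INSERT dbo.LabResults (SampleID, SampleName, AnalyteName, Concentration) VALUES (1, 'MW2', 'Acetone', 1.00)
INSERT dbo.LabResults (SampleID, SampleName, AnalyteName, Concentration) VALUES (2, 'MW2', 'Dichloroethene', 1.00)
-- ... one INSERT per remaining row, through ...
INSERT dbo.LabResults (SampleID, SampleName, AnalyteName, Concentration) VALUES (20, 'TripBlank', 'Methylene Chloride', 0.51)
GO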
I'm trying to insert data from two tables into a single table, along with a hard-coded value.
insert into TABLE1 (THING, PERSONORGROUP, ACCESSRIGHTS)
VALUES ((select SYSTEM_ID from TABLE2
         where AUTHOR IN (select SYSTEM_ID from TABLE2 where USER_ID = ('USER1'))),
        (select SYSTEM_ID from TABLE2 where USER_ID = ('USER2')),
        255)
I get the following:
Msg 512, Level 16, State 1, Line 1
Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
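A hedged rewrite as INSERT ... SELECT: every TABLE2 row whose AUTHOR matches USER1's SYSTEM_ID produces one inserted row, paired with USER2's SYSTEM_ID and the constant 255. This assumes the USER2 subquery resolves to exactly one row; if it can return several, that part needs the same treatment.

INSERT INTO TABLE1 (THING, PERSONORGROUP, ACCESSRIGHTS)
SELECT t2.SYSTEM_ID,
       (SELECT SYSTEM_ID FROM TABLE2 WHERE USER_ID = 'USER2'),
       255
FROM TABLE2 AS t2
WHERE t2.AUTHOR IN (SELECT SYSTEM_ID FROM TABLE2 WHERE USER_ID = 'USER1');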
My question is: how can I insert a row for each unique TemplateId? So let's say I have TemplateIds like 2, 5, 6, 7... for each unique TemplateId, how can I insert one more row?
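Without the surrounding schema this can only be a minimal sketch with assumed table and column names: pull the distinct TemplateIds from wherever they live and insert one row per value.

INSERT INTO dbo.TargetTable (TemplateId)
SELECT DISTINCT TemplateId
FROM dbo.SourceTable;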
SET @QUERY = "INSERT INTO tblDocTable(FileName, FileType, ImportExportID, BuildingID, Document)
SELECT '"+@fName+"' AS FileName, '"+@fType+"' AS FileType, " + cast(@fID as nvarchar(18)) + " as ImportExportID, '"+@bID+"' AS BuildingID, * FROM OPENROWSET( BULK '" +@fPath+"' ,SINGLE_BLOB)
AS Document"
EXEC (@QUERY)
This puts some values including a pdf or .doc file into a table, tblDocTable.
Is it possible to change this so that I can get the values from a table rather than as parameters? The query would be in the form: insert into tblDocTable (a, b, c, d) select a, b, c, d from tblImportExport.
tblImportExport has the path for the document (DocPath), so I would substitute that field, i.e. DocPath, for the @fPath variable.
Otherwise, the only approach I can see is doing a FETCH NEXT from tblImportExport, where I would put every field into a variable and then run this EXEC query on those values, thus looping through every row in tblImportExport.
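A sketch of that cursor approach, keeping the same dynamic OPENROWSET insert; apart from DocPath, the tblImportExport column names below are assumptions:

DECLARE @fName nvarchar(255), @fType nvarchar(50), @fID int, @bID nvarchar(50),
        @fPath nvarchar(500), @QUERY nvarchar(max);

DECLARE doc_cursor CURSOR FAST_FORWARD FOR
    SELECT FileName, FileType, ImportExportID, BuildingID, DocPath
    FROM tblImportExport;

OPEN doc_cursor;
FETCH NEXT FROM doc_cursor INTO @fName, @fType, @fID, @bID, @fPath;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @QUERY = 'INSERT INTO tblDocTable(FileName, FileType, ImportExportID, BuildingID, Document) '
               + 'SELECT ''' + @fName + ''' AS FileName, ''' + @fType + ''' AS FileType, '
               + CAST(@fID AS nvarchar(18)) + ' AS ImportExportID, ''' + @bID + ''' AS BuildingID, * '
               + 'FROM OPENROWSET(BULK ''' + @fPath + ''', SINGLE_BLOB) AS Document';
    EXEC (@QUERY);
    FETCH NEXT FROM doc_cursor INTO @fName, @fType, @fID, @bID, @fPath;
END
CLOSE doc_cursor;
DEALLOCATE doc_cursor;

A single set-based INSERT ... SELECT cannot replace this, because OPENROWSET(BULK ...) only accepts a literal file path, not a variable or column value; that is why the dynamic SQL has to be built row by row.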