I have an application that processes a large number of input files in CSV format and then posts the data to a table on SQL Server Express.
The data table can end up very large and I have no requirements to store all the data locally.
I have included the table in my DataSet using Visual Studio Express, so I have access to the schema, but I will not run Fill() on it.
Ideally I would like to process one CSV file at a time.
I can add records to my local (empty) data table, and when I am happy, I can call tableAdapter.Update() or dataSet.DataTable.AcceptChanges() to generate lots of SQL 'INSERT' commands that update the physical database at the server end.
I would then like to empty my local data table (so it doesn't get too big) and repeat the same process over again for each CSV file.
How can I empty my local table without causing it to generate a load of SQL 'DELETE' commands? I want to empty the table and fool ADO.NET into thinking that everything is synchronised, as if it has just done an update but actually hasn't.
Hi, I would like to delete data from a 750 million row table in chunks of 10,000, without blocking the users. As ours is a 24/7 shop, I do not want to block the users for a long time. An answer to this is highly appreciated. Thanks, Samna
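For reference, a minimal sketch of the usual chunked-delete pattern, assuming a hypothetical table dbo.BigTable with a purge_flag filter column. Each DELETE runs as its own autocommit transaction, so the locks it takes are released between chunks:

DECLARE @rows int;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM dbo.BigTable   -- hypothetical table and filter
    WHERE purge_flag = 1;
    SET @rows = @@ROWCOUNT;
    WAITFOR DELAY '00:00:01';   -- brief pause so concurrent sessions get in
END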
We are using the Transfer SQL Server Objects Task to transfer a large table. The transaction log is filling up for this table. Is there a method to split the data transfer task into smaller batches? (Smaller tables are transferring without issue.)
Visual Studio .NET seems to default to single-batch mode when running SQL scripts. Does anyone know how to change this behavior? Typical batches will include object existence, drop, and create batches prior to processing. I have attempted removing the graphical plan, using the batch separator 'go', and the VBA trick using ';', to no avail. TIA
I am using SQL Server Express and Visual Studio 2005. I am new to batches and am trying to understand how they work. I am trying to write a query that creates an assembly and the functions that are contained in it. Here is my query:
USE ProductsDRM
GO

IF NOT EXISTS (SELECT 'True' FROM sys.assemblies WHERE name = 'ComputedColumnFunctions')
BEGIN
CREATE ASSEMBLY ComputedColumnFunctions
FROM 'C:\Websites\AssemblyTest\StoredFunctions\StoredFunctions\bin\StoredFunctions.dll'
GO

CREATE FUNCTION fImageFileName (
    @ProductID int,
    @ImageSizeCode nvarchar(4000)
)
RETURNS nvarchar(4000)
AS EXTERNAL NAME [ComputedColumnFunctions].[StoredFunctions.UserDefinedFunctions].ImageFileName
GO

CREATE FUNCTION fTestInt (
    @ProductID int
)
RETURNS int
AS EXTERNAL NAME [ComputedColumnFunctions].[StoredFunctions.UserDefinedFunctions].TestInt
GO

CREATE FUNCTION fTestInt2 (
    @TestInt int
)
RETURNS int
AS EXTERNAL NAME [ComputedColumnFunctions].[StoredFunctions.UserDefinedFunctions].TestInt2
END
ELSE
BEGIN
    PRINT 'The assembly named "ComputedColumnFunctions" already exists. No new assembly was created.'
END
GO
I read in a book about SQL Server 2005 about including a test for whether the object (such as the assembly in this case) exists before trying to create it. If I include only the CREATE ASSEMBLY statement and the FROM line below it, and delete everything from the next GO down through the last CREATE FUNCTION (just before the END ELSE), it works fine. If I leave it as is, I get a runtime error on the GO line just after the CREATE ASSEMBLY statement. What am I doing wrong?
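For what it's worth: GO is not a T-SQL statement but a batch separator recognized by client tools such as SSMS and sqlcmd. The client splits the script at each GO, so a GO inside BEGIN ... END leaves the first batch with an unterminated BEGIN, which matches the error described. A sketch of one common workaround (function bodies shortened here) keeps the existence test and runs each CREATE as dynamic SQL so it starts its own batch:

IF NOT EXISTS (SELECT 1 FROM sys.assemblies WHERE name = 'ComputedColumnFunctions')
BEGIN
    EXEC('CREATE ASSEMBLY ComputedColumnFunctions
          FROM ''C:\Websites\AssemblyTest\StoredFunctions\StoredFunctions\bin\StoredFunctions.dll''');
    -- CREATE FUNCTION must start its own batch, which EXEC() provides;
    -- repeat one EXEC per function.
    EXEC('CREATE FUNCTION fTestInt (@ProductID int) RETURNS int
          AS EXTERNAL NAME [ComputedColumnFunctions].[StoredFunctions.UserDefinedFunctions].TestInt');
END
ELSE
    PRINT 'The assembly named "ComputedColumnFunctions" already exists.';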
Hello SQL Team! I've been stuck on this problem for days and need help. The problem is with the GO keyword. I know that GO causes the batch to be executed and that all local variables are lost. But I can't seem to find a workaround. I would like each stored procedure to be executed in a different batch.
create table sql_cmd (cmd nvarchar(255))

insert into sql_cmd (cmd) values ('exec user_sp param1, ''param2''')
insert into sql_cmd (cmd) values ('exec user_sp2 param1, ''param2''')

declare @sql nvarchar(255)
declare c_sql cursor -- small table, performance not a problem; also open to other suggestions
for select cmd from sql_cmd
open c_sql
fetch next from c_sql into @sql
while @@fetch_status = 0
begin
    print @sql
    exec sp_executesql @sql
    GO
    fetch next from c_sql into @sql
end
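As noted above for another post, GO is a client-side batch separator, not a T-SQL keyword, so it cannot appear inside a WHILE loop; the tool splits the script at that point, which is why this fails. Each sp_executesql call already runs its command in its own scope, so a sketch of the loop with the GO simply removed:

declare @sql nvarchar(255)
declare c_sql cursor for select cmd from sql_cmd
open c_sql
fetch next from c_sql into @sql
while @@fetch_status = 0
begin
    print @sql
    exec sp_executesql @sql   -- executes in its own scope; no GO needed
    fetch next from c_sql into @sql
end
close c_sql
deallocate c_sql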
Recently I was stumped on a problem where I was granting permissions to a user from within a script that was creating a stored procedure. Then I would check the permissions on the procedure, but the permissions were empty. It turned out that I was missing a "go" statement to separate the create procedure statement from the grant statement. So that got me thinking: what other kinds of statements must be separated into separate batches? I would think that anything being created must be separated from any statements that grant or alter permissions, because the items need to exist first.
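A sketch of that rule: CREATE PROCEDURE, CREATE FUNCTION, CREATE VIEW, and CREATE TRIGGER must each be the first statement in their batch (the body then runs to the end of the batch), so the GRANT needs its own batch after a GO. Procedure and user names below are hypothetical:

CREATE PROCEDURE dbo.usp_demo   -- hypothetical procedure
AS
    SELECT 1;
GO
GRANT EXECUTE ON dbo.usp_demo TO demo_user;   -- hypothetical principal
GO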
So I tweaked a stored procedure that did a one-hour update for a specific countryId.
If that procedure was called at the same time with two different countryIds, one update took place and afterwards the other.
Since all the rows are distinct, I switched to a batch update, updating only 10,000 rows at a time (and not all of the 2 million).
The general locking looked better afterwards, but now I receive strange deadlocks.
My theory:
TX1 updates a row on page 1 (P1) with a row lock. TX2 also does this on P1. Now TX1 decides to escalate to a page lock. TX1 waits for TX2 to be done with its row so it can lock the page. TX2 waits for TX1 to leave the page, since TX2 may also want a page lock.
Everybody waits for each other, so we have a deadlock. Is that feasible? Or is there another common problem when doing batch updates on the same table with distinct rows (which can of course share a data page)?
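For context, a minimal sketch of the batched-update shape described above, with a hypothetical dbo.Prices table. Processing rows in the same (ideally clustered-key) order in every concurrent run, and keeping batches near or below the lock-escalation threshold (escalation is attempted at roughly 5,000 locks per statement), are the usual levers against these deadlocks:

DECLARE @countryId int, @rows int;
SET @countryId = 42;   -- hypothetical parameter value
SET @rows = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (10000) dbo.Prices    -- hypothetical table and columns
    SET price = price * 1.1, processed = 1
    WHERE countryId = @countryId AND processed = 0;
    SET @rows = @@ROWCOUNT;
END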
I have a query batch "update" script that upgrades my users' database from, say, version 0 to version 1, or from version 1 to version 2. I would like to know how I can wrap the entire script in a transaction, so that either the whole thing succeeds or none of it does. For example:

BEGIN TRANSACTION
..... Alter some tables .....
GO
..... Alter a stored procedure .....
GO
..... Create a new stored procedure .....
GO
COMMIT TRANSACTION or ROLLBACK TRANSACTION
GO

(How do I get to the "ROLLBACK TRANSACTION" if an error occurs in the update script?)
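One pattern that approximates this, sketched with hypothetical object names: SET XACT_ABORT ON makes most runtime errors roll back the open transaction and abort the current batch, and a guard between batches stops the rest of the script once the transaction has died (in sqlcmd mode, :on error exit is an alternative):

SET XACT_ABORT ON;   -- a runtime error now rolls back and aborts the batch
BEGIN TRANSACTION;
GO

ALTER TABLE dbo.Users ADD Version int NULL;   -- hypothetical schema change
GO

IF @@TRANCOUNT = 0 SET NOEXEC ON;   -- an earlier batch failed: skip the rest
GO

UPDATE dbo.Users SET Version = 1;             -- hypothetical data change
GO

IF @@TRANCOUNT > 0 COMMIT TRANSACTION;
SET NOEXEC OFF;
GO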
I want to update tableToUpdate in batches of 5000 rows per batch, setting lastenecryptionDT to NULL based on the join to tableValues using the ENCRYPTIONID column, and also output the updated rows into another table, in case I need to do a rollback.
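A sketch of one way to write that, assuming a pre-created audit table dbo.tableToUpdate_undo (ENCRYPTIONID, old_lastenecryptionDT) to receive the OUTPUT rows:

DECLARE @rows int;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (5000) t
    SET t.lastenecryptionDT = NULL
    OUTPUT deleted.ENCRYPTIONID, deleted.lastenecryptionDT
        INTO dbo.tableToUpdate_undo (ENCRYPTIONID, old_lastenecryptionDT)
    FROM dbo.tableToUpdate t
    INNER JOIN dbo.tableValues v ON v.ENCRYPTIONID = t.ENCRYPTIONID
    WHERE t.lastenecryptionDT IS NOT NULL;   -- keeps the loop finite
    SET @rows = @@ROWCOUNT;
END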
Hi, I have a table with three columns as below.

Table name: exp
no (int), name (char), refno (int)

I have data as below:

no  name  refno
1   a
2   b
3   c

I need to update refno with the no values, so I wrote a query as below:

update exp set refno = (select no from exp)

When I run the query I get an error: "Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <=, >, >= or when the subquery is used as an expression." I need to update one column with another column's value. What is the correct query for this? Thanks, Mani
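Since each row's refno should come from that same row's no column, no subquery is needed; a simple sketch:

-- Set every row's refno from its own no column.
UPDATE exp SET refno = no;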
In another forum post, a poster was deleting large numbers of rows from a table in batches of 50,000.
In the bad old days ('80s - '90s), I used to have to delete rows in batches of 500, then 1000, then 5000, due to the size of the transaction rollback segments (yes - Oracle).
I always found that increasing the number of deleted rows in a single statement/transaction improved overall process speed - up to some magic point, at which some overhead in the system began slowing the deletes down, so that deleting a single batch of 10,000 rows took more than twice as much time as deleting two batches of 5,000 rows each.
Does anyone have good rule-of-thumb numbers (or even better, some actual statistics and/or explanations) as to how many records should be deleted in a single transaction/statement for optimum speed? 50,000? 100,000? 1,000,000 or unlimited? Are there significant differences between SQL Server 2008, 2012, and 2014?
We have a SQL Server 2005 Enterprise merge replication publication with SQL Mobile 3.0 subscribers (Windows Mobile 5.0 and 6.0). We do not use pre-computed partitions due to trigger performance issues with an SSIS/ETL application that supplies data to the merge database. We do use the "Optimize" (=true) option, though we have tried this both ways with no significant difference. We use filters and joins for each worker ID (as HOST_ID) from the subscriptions.
The sync times become increasingly worse after we run the snapshot and bring the publication online. I have tried rerunning the snapshots, this helps little, as it often behaves like the subscription was set to reinitialize and forces a big sync (reload of all data) to the subscriber. We have tried much of the obvious (e.g., flattening filters and joins, adding indexes, etc.).
When users are synchronizing, we watch replication monitor and notice that a lot of time is spent processing "enumerating inserts and updates for article [any article]", especially processing the many generations and batches. This is true for any follow-up syncs after the 1st big sync (initializing the subscription).
I read several posts regarding the batches and generations of changes, and decided to try increasing DownloadGenerationsPerBatch. I tried adding this parameter to the snapshot agent job, and the job fails each time with a vague message, even with the default value of 100. How do you change this parameter for SQL Server 2005 Enterprise?
Hi SQL fans, I realized that I often encounter the same situation in a relational database context, where I really don't know what to do. Here is an example, where I have two tables as follows:

Portfolio
- folio_id (int, PK)
- folio_name (varchar)

PortfolioTitle
- tfolio_id (int, PK)
- tfolio_idfolio (int, FK to Portfolio.folio_id)
- tfolio_idtitle (int, FK to Titles)
- tfolio_weight (decimal(6,5))

Note that I also have a Titles table (hence the tfolio_idtitle link). My problem is: when I update a portfolio, I must update all the associated titles in it. That means that titles can be removed from the portfolio (the folio does not support the title anymore), added to it (a new title is supported by the folio), or simply updated (a title stays in the portfolio, but has its weight changed).

For example, if portfolio #2 contained:

id | idFolio | idTitre | poids
1  | 2       | 1       | 10
2  | 2       | 2       | 20
3  | 2       | 3       | 30

and I must update PortfolioTitle based on these values:

idFolio | idTitre | poids
2       | 2       | 20
2       | 3       | 35
2       | 4       | 40

then I should:
1) remove title #1 from the folio by deleting its entry in the PortfolioTitle table
2) update title #3 (weight from 30 to 35)
3) add title #4 to the folio

For now, the only way I've found to do this is to delete all the entries of the related folio (e.g. DELETE TitrePortefeuille WHERE idFolio = 2), and then insert new values for each entry based on the new given values. Is there a way to better manage this by detecting which value has to be inserted/updated/deleted? And this applies to many situations. If you need other examples, I can give you some. Thanks a lot! ibiza
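On SQL Server 2008 and later, the MERGE statement expresses exactly this reconciliation (update matches, insert new rows, delete missing ones) in one statement; on 2005 and earlier it takes three separate statements. A sketch against the schema above, assuming the incoming rows are staged in a table variable (the weight type is widened here, since decimal(6,5) cannot hold values like 35):

DECLARE @newTitles TABLE (idFolio int, idTitre int, poids decimal(9,5));
INSERT INTO @newTitles VALUES (2, 2, 20), (2, 3, 35), (2, 4, 40);

MERGE PortfolioTitle AS t
USING @newTitles AS s
   ON t.tfolio_idfolio = s.idFolio AND t.tfolio_idtitle = s.idTitre
WHEN MATCHED AND t.tfolio_weight <> s.poids THEN
    UPDATE SET tfolio_weight = s.poids                -- title #3: 30 -> 35
WHEN NOT MATCHED BY TARGET THEN
    INSERT (tfolio_idfolio, tfolio_idtitle, tfolio_weight)
    VALUES (s.idFolio, s.idTitre, s.poids)            -- title #4 added
WHEN NOT MATCHED BY SOURCE AND t.tfolio_idfolio = 2 THEN
    DELETE;                                           -- title #1 removed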
I have a table where each row gets updated multiple times (each column gets filled in) based on telephone call-in data.
Initially, I implemented an AFTER INSERT trigger at the row level, thinking that the whole row is inserted into the table with all column values at once. But the columns are not all filled at once: I observed that as telephone call data comes in, there are multiple updates to the row (that is, the column data in the row is filled in step by step).
I then thought to implement an AFTER UPDATE trigger, but performance would suffer on each and every hit while the row is updated.
Can I implement an AFTER UPDATE trigger that fires at the column level instead of the row level, to improve performance?
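For what it's worth, SQL Server triggers fire per statement, not per column, but the UPDATE() function inside an AFTER UPDATE trigger reports whether a given column appeared in the SET list, so the trigger can return immediately for updates that don't touch the interesting columns. A sketch with hypothetical table and column names (note UPDATE() is true even when the new value equals the old one):

CREATE TRIGGER trg_calls_upd ON dbo.CallData   -- hypothetical table
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF NOT UPDATE(call_result) RETURN;   -- cheap exit for irrelevant updates

    UPDATE s
    SET s.last_result = i.call_result
    FROM dbo.CallSummary s               -- hypothetical summary table
    JOIN inserted i ON i.call_id = s.call_id;
END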
I'm looking for a query that can "batch" update one table from another. For example, say there are fields on both tables like this: KeyField, Value1, Value2, Value3. The two tables match on KeyField. I would like to write one SQL query that will update the Value fields in Table1 with the data from Table2 when there is a match.
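A sketch of the usual UPDATE ... FROM join, using the field names from the post:

UPDATE t1
SET t1.Value1 = t2.Value1,
    t1.Value2 = t2.Value2,
    t1.Value3 = t2.Value3
FROM Table1 t1
INNER JOIN Table2 t2 ON t2.KeyField = t1.KeyField;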
I have an update trigger which fires from a transaction table to update a parent record in another table. I am getting no errors, but also no update. Any help is appreciated (see script below).
create trigger tr_cmsUpdt_meds on dbo.medisp
for UPDATE
as
if update(pstat)
begin
    update med
    set REC_FLAG = 2
    from deleted dt
    where med.uniq_id = dt.uniq_id
      and dt.pstat = 2
      and dt.spec_flag = 'kop'
end
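One likely cause, offered as a guess: in an UPDATE trigger, deleted holds the before values and inserted holds the after values, so filtering deleted on pstat = 2 only matches rows that already had pstat = 2 before the update. If the intent is to react to rows being changed to pstat = 2, the filter belongs on inserted, as in this sketch:

alter trigger tr_cmsUpdt_meds on dbo.medisp
for UPDATE
as
if update(pstat)
begin
    update med
    set REC_FLAG = 2
    from inserted i                  -- after-image: the new column values
    where med.uniq_id = i.uniq_id
      and i.pstat = 2
      and i.spec_flag = 'kop'
end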
I imported a SQL table into a SQL database, but I cannot update this table, even with SQL Server Management Studio. When I change any data in the table, a red exclamation sign appears to the left of the record. How can I correct this problem? Thanks.
This program gets the values of A and B passed in. They are for the table columns DXID and CODE. The textbox GET1 is initialized to B when the page is loaded. When I type another value into GET1 and try to save it, the original initialized value gets saved, not the new value I just typed in. A literal value, like "222", saves, but the new GET1.TEXT doesn't.
I have a contract table which has built-in foreign key constraints. How can I alter this table for DELETE/UPDATE CASCADE without recreating it, so that whenever a studentId/contactId is modified, the change is propagated to the contract table?
Thanks
The contract table DDL is:
create table contract (
    contractNum int identity(1,1) primary key,
    contractDate smalldatetime not null,
    tuition money not null,
    studentId char(4) not null foreign key references student (studentId),
    contactId int not null foreign key references contact (contactId)
);
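A sketch of the usual approach: the inline foreign keys above received system-generated names, so first look them up, then drop and re-create each constraint with cascade rules. The names in the DROP statements below are placeholders; substitute the names the query returns:

SELECT name
FROM sys.foreign_keys
WHERE parent_object_id = OBJECT_ID('contract');

ALTER TABLE contract DROP CONSTRAINT [FK__contract__studentId__xxxx];  -- placeholder name
ALTER TABLE contract ADD CONSTRAINT FK_contract_student
    FOREIGN KEY (studentId) REFERENCES student (studentId)
    ON UPDATE CASCADE ON DELETE CASCADE;

ALTER TABLE contract DROP CONSTRAINT [FK__contract__contactId__xxxx];  -- placeholder name
ALTER TABLE contract ADD CONSTRAINT FK_contract_contact
    FOREIGN KEY (contactId) REFERENCES contact (contactId)
    ON UPDATE CASCADE ON DELETE CASCADE;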
I am trying to use a stored procedure to update a column in a SQL table using the value from a table variable, and I am getting errors because my syntax is not correct. I think table aliases are not allowed in UPDATE statements.
This is my statement:
UPDATE [dbo].[sessions_teams] stc
SET stc.[Talks] = fmt.found_talks_type
FROM @Find_Missing_Talks fmt
WHERE stc.sessionid IN (SELECT sessionid FROM @Find_Missing_Talks)
  AND stc.coupleid IN (SELECT coupleid FROM @Find_Missing_Talks)
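For comparison, T-SQL does accept an alias in UPDATE, but only when the alias is introduced in a FROM clause and the UPDATE targets that alias. Moving the join into FROM also makes the IN subqueries unnecessary; a sketch:

UPDATE stc
SET stc.[Talks] = fmt.found_talks_type
FROM [dbo].[sessions_teams] AS stc
INNER JOIN @Find_Missing_Talks AS fmt
    ON fmt.sessionid = stc.sessionid
   AND fmt.coupleid = stc.coupleid;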
I have a scenario where I have to update a table with a date when there are new records in another table.
For example:
I load an ODS table with the data from a file in SSIS. The file has CustomerID and other columns.
Now, when there is a new record for any CustomerID in ODS, I need to update the dbo table with the most recent record for every CustomerID (i.e., update the date column in dbo for that CustomerID). I also need to include an identifier that relates back to the ODS table. How do I do this?
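A sketch under assumed names: the ODS rows land in dbo.OdsCustomer (CustomerID, LoadDate, OdsRowID) and the target is dbo.Customer (CustomerID, LastLoadDate, LastOdsRowID). ROW_NUMBER() picks each customer's most recent ODS row, and the stored OdsRowID is the identifier relating back to the ODS table:

;WITH latest AS (
    SELECT CustomerID, LoadDate, OdsRowID,
           ROW_NUMBER() OVER (PARTITION BY CustomerID ORDER BY LoadDate DESC) AS rn
    FROM dbo.OdsCustomer
)
UPDATE c
SET c.LastLoadDate = l.LoadDate,
    c.LastOdsRowID = l.OdsRowID      -- identifier pointing back to the ODS row
FROM dbo.Customer c
JOIN latest l ON l.CustomerID = c.CustomerID AND l.rn = 1;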
I am trying to update one table when records are inserted in another table.
I have added the following trigger to the table “ProdTr” and every time a record is added I want to update the field “Qty3” in the table “ActInf” with a value from the inserted record.
My problem appears to be that I am unable to fill the variables with values, and I cannot understand why it isn't working. My code is:
ALTER trigger [dbo].[antall_liter] on [dbo].[ProdTr]
for insert
as
begin
    declare @liter as decimal(28,6)
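Since the script is cut off above, a guess at the usual pitfall: inside the trigger, the new rows are only visible in the inserted pseudo-table, and a single INSERT can carry many rows, so assigning a scalar variable captures at most one of them. A set-based sketch, with a hypothetical liter column on ProdTr and a hypothetical join key:

ALTER trigger [dbo].[antall_liter] on [dbo].[ProdTr]
for insert
as
begin
    set nocount on;
    -- Aggregate the inserted rows per key, then apply them in one update.
    update a
    set a.Qty3 = a.Qty3 + i.liter_sum
    from ActInf a
    join (select ActId, sum(liter) as liter_sum
          from inserted
          group by ActId) i
      on i.ActId = a.ActId;   -- ActId and liter are hypothetical names
end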