I am currently wrapping up a website upgrade for a client and am working on a development server/database that will become the live version. When the upgrade goes live, I will need to update that database with the latest data from specific datatables (not all of them) in the previously live database, but I don't know how to do a bulk refresh of datatables.
Problem: specific datatables (not all datatables) from Database1 need to be updated with the data from Database2. Database1 and Database2 are copies of each other with vast differences in some of the data.
Result: All of the current, up-to-date data needs to reside on Database1.
Solution: Any ideas?
I am using MSSQL 2000 and the databases reside on the same server.
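Since both databases sit on the same server, one straightforward approach (a sketch only; TableA and its columns are placeholders, not your real table names) is to clear each table to be refreshed in Database1 and repopulate it from Database2 using three-part names:

-- repeat per table that needs refreshing; disable/re-enable foreign keys as needed
DELETE FROM Database1.dbo.TableA;

INSERT INTO Database1.dbo.TableA (Col1, Col2, Col3)
SELECT Col1, Col2, Col3
FROM   Database2.dbo.TableA;

If a table has an identity column, wrap the insert in SET IDENTITY_INSERT ... ON/OFF; on SQL 2000, TRUNCATE TABLE can replace the DELETE when no foreign keys reference the table.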
I'm having issues with bulk update in SQL Server. I'm using SAP BODS as the ETL tool and have roughly 20,000 updates. The target table has approximately 0.5 million records and a clustered index on the id column. I have selected the upsert option in BODS. The same setup is also done for Sybase IQ; IQ has a bulk update option which gives very good performance.
In IQ the same update load finishes in about 9 minutes, whereas SQL Server takes more than 2 hours for the same work, which doesn't seem right. When I look into it, the update is what causes the whole package to slow down. Sybase generates a query along the lines of "if the ID is present then update, else insert". Is there any way to make bulk update work faster in the SQL Server environment?
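If the target is SQL Server 2008 or later, one thing worth trying is to have BODS land the 20,000 incoming rows in a staging table (bulk load, minimal logging) and then apply them with a single set-based MERGE rather than 20,000 individual upsert statements. A sketch only; the staging table, key and column names are assumptions, not taken from your job:

MERGE dbo.target_table AS t
USING dbo.stg_batch    AS s
   ON t.id = s.id
WHEN MATCHED THEN
    UPDATE SET t.col1 = s.col1,
               t.col2 = s.col2
WHEN NOT MATCHED BY TARGET THEN
    INSERT (id, col1, col2)
    VALUES (s.id, s.col1, s.col2);

An index on the staging table's id column lets the join run as a seek against the clustered index on the target; on older versions the same effect comes from a paired UPDATE join plus an INSERT ... WHERE NOT EXISTS.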
I have to update a field within a table of 60 records or so. Each record has a different field value; the type is varchar. I was given an Excel file with the field values and was thinking of a bulk update along the lines of bulk insert, but I don't recall that being possible.
Is the only way to create a table, bulk insert, then merge the two tables together with UPDATE?
Just wanted to see if there was an easier way to do it; otherwise I'll take the latter route. Thanks!
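The staging-table-plus-UPDATE route works fine, but for only ~60 rows you can skip the bulk insert entirely and paste the values straight into the statement. A sketch, assuming hypothetical table and column names (MyTable, KeyCol, TextCol) and the SQL Server 2008+ row constructor:

UPDATE t
SET    t.TextCol = v.NewValue
FROM   MyTable t
JOIN   (VALUES (1, 'first value'),
               (2, 'second value'),
               (3, 'third value')    -- ...build one row per line of the Excel sheet
       ) AS v (KeyCol, NewValue)
  ON v.KeyCol = t.KeyCol;

On older versions, a SELECT ... UNION ALL derived table can stand in for the VALUES list, or fall back to the staging table you described.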
I am currently working with C and SQL Server 2012. My requirement is to bulk fetch records and insert/update them in another table with some business logic applied. How do I do this?
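If the business logic can be expressed in T-SQL, the fetch/transform/load can often stay entirely inside the server as a set-based statement, instead of bulk fetching into the C program and writing rows back. A sketch only, with made-up table and column names and a CASE standing in for the business rule:

INSERT INTO dbo.TargetTable (Id, Amount, Category)
SELECT  s.Id,
        s.Amount,
        CASE WHEN s.Amount >= 1000 THEN 'HIGH' ELSE 'LOW' END   -- placeholder business rule
FROM    dbo.SourceTable s
WHERE   NOT EXISTS (SELECT 1 FROM dbo.TargetTable t WHERE t.Id = s.Id);

For the update side of the same requirement, SQL Server 2012 supports MERGE, so both branches can be handled in one statement.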
I'm trying to use BULK INSERT for the first time and am getting the following error. I think it might have something to do with my format file; from the error message there's a conversion error for the first column. In my database the field is nvarchar(6), so my best guess is to use SQLNCHAR for the first column. I've checked that the end of each line is CR LF, therefore the row terminator is correct for line 7, right?
Msg 4863, Level 16, State 1, Line 1
Bulk load data conversion error (truncation) for row 1, column 1 (ASXCode).
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
BULK INSERT tbl_ASX_Data_temp FROM 'M:DataASXImportTest.txt' WITH (FORMATFILE='M:DataASXSQLFormatImport.Fmt')
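For reference, a minimal non-XML format file for a three-column, tab-delimited text file might look like the sketch below (the column names and widths are made up; only the first column mirrors the ASXCode/nvarchar(6) case, and the version number on the first line should match your server, e.g. 8.0 for SQL 2000 or 9.0 for 2005). Note that for a plain ANSI/ASCII text file the host file data type is normally SQLCHAR even when the target column is nvarchar; SQLNCHAR is meant for Unicode (UTF-16) data files, and using it against an ANSI file is a common source of conversion/truncation errors.

9.0
3
1   SQLCHAR   0   12   "\t"     1   ASXCode    SQL_Latin1_General_CP1_CI_AS
2   SQLCHAR   0   50   "\t"     2   SomeCol    SQL_Latin1_General_CP1_CI_AS
3   SQLCHAR   0   20   "\r\n"   3   OtherCol   SQL_Latin1_General_CP1_CI_AS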
I have an unusual question... I have a table that I have to select the top 300 records from, check the date on each record, and using an if... then... statement, loop through each record updating a field based on criteria.
The problem is that it takes so long to do this. Is there a way to populate an object and pass that object with the data to a SQL stored procedure to have the SQL server do all the updating as opposed to the application doing the record updating? Does that make sense? Here's a sample of the code and you'll see what I'm talking about.
Thanks in advance for any help or advice you can give.
Imports Microsoft.SqlServer.Server
Imports System.Data.SqlClient
Imports CallTracker_AgingCheck
Module CallTracker
Sub Main()
GetWebAgingData()
GetWebCallsData()
End Sub
#Region "Get Data"
Public agingStatusNumber As New ArrayList
Public agingEscalationNumber As New ArrayList
Public callStatusNumber As New ArrayList
Public CallStatus As New ArrayList
Public Sub GetWebCallsData()
Dim dt As New DataTable
Dim Criteria As String = "SELECT TOP 300 CallID, TIMEOFCALL, Status, StatusNumber " & _
"FROM WebCalls " & _
"WHERE (Status = 'OPEN') OR " & _
"(Status = 'IN PROCESS') OR " & _
"(Status = 'WORKORDER') OR " & _
"(Status = 'PRIORITY') OR " & _
"(Status = 'PENDING') " '& _
'"ORDER BY TimeOfCall DESC"
Dim Fa As String = String.Empty
Dim fromDATE As String = String.Empty
Dim toDate As String = String.Empty
Dim ds As New DataSet()
Dim tbl As String = "WebCalls"
Dim x1 As Integer = 0
Try
Using cn As New SqlClient.SqlConnection(Database.SQLConnection)
cn.Open()
Using cm As SqlClient.SqlCommand = cn.CreateCommand()
cm.CommandText = Criteria
cm.CommandType = CommandType.Text
cm.Parameters.AddWithValue("@CALLID", Fa)
cm.Parameters.AddWithValue("@TIMEOFCALL ", Fa)
cm.Parameters.AddWithValue("@STATUS", Fa)
cm.Parameters.AddWithValue("@STATUSNUMBER", Fa)
Dim Age As TimeSpan = Nothing
Dim _totalHours As Integer = Nothing
Dim EscalateTime As Integer = agingEscalationNumber.Count
Dim DAUpdateCmd As SqlCommand
Using da As New SqlDataAdapter(Criteria, cn)
da.Fill(dt)
Dim callID As Integer = Nothing
Dim statNO As Integer = Nothing
Dim toc As Integer = dt.Rows.Count
Dim x As Integer = 0
For x = 0 To toc - 1
Dim y As Date = dt.Rows(x).Item(1) 'gets the date from the first row in WebCalls for comparison with WebAging
Age = Today.Subtract(y)
_totalHours = Age.TotalHours
If _totalHours > 744 Then
Console.WriteLine("This record is older than 30 days")
Else
callID = dt.Rows(x).Item(0)
DAUpdateCmd = New SqlCommand("Update WebCalls SET STATUSNUMBER = @STATUSNUMBER where CALLID = @CALLID", da.SelectCommand.Connection)
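The row-by-row loop above can usually be replaced by a single set-based statement that the server executes itself, which is essentially what you are asking for. A sketch only, assuming the WebCalls columns shown in the SELECT and a hypothetical rule that maps a call's age to a new StatusNumber (adjust the CASE branches to your real aging/escalation rules):

UPDATE WebCalls
SET    StatusNumber = CASE
                          WHEN DATEDIFF(hour, TimeOfCall, GETDATE()) > 240 THEN 3
                          WHEN DATEDIFF(hour, TimeOfCall, GETDATE()) > 120 THEN 2
                          ELSE 1
                      END
WHERE  Status IN ('OPEN', 'IN PROCESS', 'WORKORDER', 'PRIORITY', 'PENDING')
  AND  DATEDIFF(hour, TimeOfCall, GETDATE()) <= 744;   -- skip records older than 30 days

Wrapped in a stored procedure, this removes the need to pull the rows into the application at all; if you really do need to pass a set of rows from the client, table-valued parameters (SQL Server 2008+) are the usual vehicle.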
We are planning to add a new attribute to one of our tables to speed up data access. Once the attribute is added, we will need to populate that attribute for each of the records in the table. Since the table in question is very large, the update statement is taking a considerable amount of time. From reading through old posts and Books Online, it looks like one of the big things slowing down the update is writing to the transaction log. I have found mention of "truncate log on checkpoint" and of using "SET ROWCOUNT" to limit the number of rows updated at once, or "DUMP TRANSACTION databaseName WITH NO_LOG". Does anyone have any opinions on these tactics? Please let me know if you want more information about the situation in order to provide an answer!
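A common way to keep the log manageable is to update in batches, so each transaction stays small and log space can be reused between batches. A sketch only, assuming a hypothetical table BigTable whose new column NewAttr is NULL until populated; on SQL 2000 SET ROWCOUNT does the limiting, on 2005+ UPDATE TOP (n) is preferred:

SET ROWCOUNT 10000              -- SQL 2000 style batch limit

WHILE 1 = 1
BEGIN
    UPDATE BigTable
    SET    NewAttr = 0          -- placeholder: substitute your real population expression
    WHERE  NewAttr IS NULL

    IF @@ROWCOUNT = 0 BREAK     -- nothing left to update

    CHECKPOINT                  -- with "truncate log on checkpoint" this frees log space between batches
END

SET ROWCOUNT 0                  -- reset

With the database option "truncate log on checkpoint" enabled (SIMPLE recovery in later versions), the log space from each committed batch is reused automatically; otherwise take log backups between batches. DUMP TRANSACTION ... WITH NO_LOG breaks the log backup chain and is best avoided.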
INSERT INTO Users (userName, UserSalt, UserHash1, UserHash2, UT_memberID) select memberFirstName + '.' + memberLastName + '56' as userName, '{AxxxxxDE-6xx6-4xxD-Bxx9-3xxxx79xxxxE}', '{4xxxxxx6-8xx5-6xxD-Cxx6-4xxxFxxx1xx9}', '{0xxx8xxE-Cxx4-6xx8-ExxB-Dxxxx4xxx2xC}', members.memberID From members Inner Join groupLeaders ON members.memberID = groupLeaders.memberID SELECT @@Identity AS UserID
How can I modify the portion that is inserting the '56' at the end of each username to do the following:
1) check to see if username already exists in the database (using a query with "LIKE %'")
2) if not, create the username "as-is" or how it should be without the number
3) if already exists, get a count of records matching your search criteria .... now make a new username + + (count + 1).ToString();
Any thoughts... I am struggling to put these two pieces together.
Thanks,
Zoop
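One set-based way to get the "append a number only when the name collides" behaviour is to count existing matches per candidate name and build the suffix from that count. A sketch only (SQL 2005+ CROSS APPLY); it reuses the Users/members/groupLeaders columns from your query and abbreviates the hash literals:

INSERT INTO Users (userName, UserSalt, UserHash1, UserHash2, UT_memberID)
SELECT  m.memberFirstName + '.' + m.memberLastName
        + CASE WHEN existing.cnt = 0 THEN ''                      -- name is free: use it as-is
               ELSE CAST(existing.cnt + 1 AS VARCHAR(10)) END,    -- name taken: append count + 1
        '{salt}', '{hash1}', '{hash2}',
        m.memberID
FROM    members m
JOIN    groupLeaders gl ON gl.memberID = m.memberID
CROSS APPLY (SELECT COUNT(*) AS cnt
             FROM   Users u
             WHERE  u.userName LIKE m.memberFirstName + '.' + m.memberLastName + '%') AS existing;

Note this only checks against names already in Users; if the same name appears twice within the incoming batch you would also need a ROW_NUMBER() partitioned by the candidate name to separate those.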
[EDIT - original post below this]
I have modified my method to make this a bit easier. I added a memberID field to my [Users] table so that I can update my [Members] table in a different statement after the insert takes place.
I have the following query, and it completes successfully in Query Analyzer (though I haven't actually executed the SP, just testing the syntax...). Anyway, here is what I have:
Code:
INSERT INTO Users (userName, UserSalt, UserHash1, UserHash2, UT_memberID) select memberFirstName + '.' + memberLastName + '56' as userName, '{AxxxxxDE-6xx6-4xxD-Bxx9-3xxxx79xxxxE}', '{4xxxxxx6-8xx5-6xxD-Cxx6-4xxxFxxx1xx9}', '{0xxx8xxE-Cxx4-6xx8-ExxB-Dxxxx4xxx2xC}', members.memberID From members Inner Join groupLeaders ON members.memberID = groupLeaders.memberID SELECT @@Identity AS UserID
I am hoping this will create a user for all members whose 'memberID' can be found in the groupLeaders table... is this correct?
Also, notice the 56 being appended to the end of each username. I would like this to be a random number generated within a given range... can this be done? any advice?
Thanks,
Zoop
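On the random suffix: RAND() evaluates once per statement, so it would give the same number for every row. A per-row pseudo-random value in a given range can be derived from NEWID(); the 100 to 999 bounds below are just an assumed example range:

-- a per-row random integer between 100 and 999 (inclusive)
SELECT  memberFirstName + '.' + memberLastName
        + CAST(100 + ABS(CHECKSUM(NEWID())) % 900 AS VARCHAR(10)) AS userName
FROM    members;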
[Original post below - provide more background]
I have three tables involved with this insert/update:
I want to insert into the [Users] table the memberFirstName.memberLastName + randomNum into the 'UserName' column from the [Members] table. Also, I want to make all passwords the same, in this case I know the Salt, Hash1, Hash2 I will be using and would like to pass these in for the 'UserHash1' 'UserHash2' fields.
Now, I only want to make this insert where the memberID is in the GroupLeaders table. And finally, I need to update my Members table with a UserID where the memberID matches the one used from the groupLeaders table.
Does anyone have any ideas on how I can accomplish this, even if it requires adding a temporary field to one of my tables... here is what I have so far, but I am receiving errors and can't quite figure this one out. (By the way, I also don't know how to generate the random number and was using the literal 23 as a placeholder.) Thanks...
Code:
INSERT INTO Users (userName, UserSalt, UserHash1, UserHash2) select a.memberFirstName + '.' + a.memberLastName + '23' + as userName, '{AA99FCDE-6E06-437D-B9E9-3E3D27955C3E}', '{7xxxxxx2-4xx6-9xx1-7xx9-4x3xx4Axxx59}', '{0xx8xxE-Cxx4-6xxx-xxxx-Fxx3xxxx3xxF}', b.memberID as newMemID From members a, groupLeaders b Where a.memberID = b.memberID SELECT @@Identity AS UserID
Update Members Set UserID = Ident_Current('Users') where memberID = newMemID
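Since the edited version stores memberID in Users.UT_memberID, the Members table can be updated in one set-based statement after the insert, instead of relying on Ident_Current per row (which returns only the last identity value and would stamp every member with the same UserID). A sketch using the column names from the edited query:

UPDATE m
SET    m.UserID = u.UserID
FROM   Members m
JOIN   Users   u ON u.UT_memberID = m.memberID;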
On a daily basis I need to perform a bulk update. The table totals about 50,000 records, with approximately 5,000 changing (deletes, edits, and new records) per day. I'd like to push just the changes somehow, but VB is too slow and I haven't found a way to handle it in DTS. I don't have much experience with DTS.
I'm transferring between two SQL 2000 servers with a VB app sitting in the middle.
Hello, I am trying to read in from a csv file which works like this:
DECLARE @doesExist INT
DECLARE @fileName VARCHAR(200)
SET @fileName = 'c:\file.csv'

SET NOCOUNT ON
EXEC xp_fileexist "' + @fileName + '", @doesExist OUTPUT
SET NOCOUNT OFF

IF @doesExist = 1
BEGIN
    BULK INSERT OrdersBulk FROM "' + @fileName + '"
    WITH ( FIELDTERMINATOR = ',', ROWTERMINATOR = '\n' )
END
ELSE
    print('Error cant find file')
What I want to do is check another table before each line inserts; if the data already exists I want to do an UPDATE instead. I think I could do what I need with a cursor, but the bulk insert just pushes all the data up at once and will not let me put a cursor in the middle. So is there a way I can read the csv in a cursor instead of using the bulk insert, so I can examine each row?
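You don't need a cursor over the file: BULK INSERT into a staging table first, then compare the staging rows to the existing table with set-based statements. A sketch, assuming hypothetical staging/target tables (OrdersBulk_stage, OrdersBulk) with a key column OrderId:

-- land the whole file in a staging table shaped like the target
BULK INSERT OrdersBulk_stage
FROM 'c:\file.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

-- rows that already exist: update them
UPDATE o
SET    o.SomeCol = s.SomeCol
FROM   OrdersBulk o
JOIN   OrdersBulk_stage s ON s.OrderId = o.OrderId;

-- rows that are new: insert them
INSERT INTO OrdersBulk (OrderId, SomeCol)
SELECT s.OrderId, s.SomeCol
FROM   OrdersBulk_stage s
WHERE  NOT EXISTS (SELECT 1 FROM OrdersBulk o WHERE o.OrderId = s.OrderId);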
We need to import data from flat/XML files into our database. We plan to do so in bulk, as the amount of data is huge (2 GB+). We need to do some validation checks in our code, and after that we create the insert queries.
We have identity columns that are used as foreign keys in child tables. The question is how I can write a bulk/batch insert statement that will propagate the identity column to the children, since for all the other columns we are creating the query in application memory. There are 2 parent tables; the first table's value needs to be referred to in 7 tables and the second table's value in 6.
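A common pattern is to bulk insert the parents together with their natural/business key, capture the generated identities with an OUTPUT clause (SQL Server 2005+) into a mapping table, and then join the staged child rows to that mapping. A sketch only; ParentStage, Parent, BusinessKey and the child columns are hypothetical names:

DECLARE @map TABLE (ParentId INT, BusinessKey VARCHAR(50));

-- insert the parents and remember which identity each business key received
INSERT INTO Parent (BusinessKey, SomeCol)
OUTPUT  inserted.ParentId, inserted.BusinessKey INTO @map (ParentId, BusinessKey)
SELECT  BusinessKey, SomeCol
FROM    ParentStage;

-- children pick up the captured identities via the shared business key
INSERT INTO Child1 (ParentId, ChildCol)
SELECT  m.ParentId, cs.ChildCol
FROM    ChildStage cs
JOIN    @map m ON m.BusinessKey = cs.BusinessKey;

The same mapping table serves all 7 (or 6) child tables of each parent, so the identities only need to be captured once per batch.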
I have three tables: Student, Daily_Attendance_Master and Daily_Attendence_Details.
I want to run SQL that inserts or updates a student's attendance (absent or present) in Daily_Attendence_Details based on Daily_Attendance_Master_Id and Student_Id (from one roll number to another).
If both are present in table Daily_Attendence_Details, then I want to update the attendance from one roll number to another in Daily_Attendence_Details on the basis of Daily_Attendence_Details_Id.
And if both, or either one, is not present, I want to insert the student attendance from one roll number to another roll number into Daily_Attendence_Details.
I give below the structure of the three tables Student, Daily_Attendance_Master and Daily_Attendance_Details.
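The table definitions did not come through above, so the following is only a sketch with assumed column names (Roll_Number, Attendance_Status, and the @ variables), but the insert-or-update requirement maps naturally onto MERGE (SQL Server 2008+), keyed on the master id and the student id:

DECLARE @MasterId INT = 101, @FromRoll INT = 1, @ToRoll INT = 50, @Status CHAR(1) = 'P';

MERGE Daily_Attendence_Details AS d
USING (SELECT @MasterId AS Daily_Attendance_Master_Id, s.Student_Id, @Status AS Attendance_Status
       FROM   Student s
       WHERE  s.Roll_Number BETWEEN @FromRoll AND @ToRoll) AS src
   ON d.Daily_Attendance_Master_Id = src.Daily_Attendance_Master_Id
  AND d.Student_Id                 = src.Student_Id
WHEN MATCHED THEN
    UPDATE SET d.Attendance_Status = src.Attendance_Status
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Daily_Attendance_Master_Id, Student_Id, Attendance_Status)
    VALUES (src.Daily_Attendance_Master_Id, src.Student_Id, src.Attendance_Status);

On versions before 2008 the same effect comes from an UPDATE joined on the two key columns followed by an INSERT ... WHERE NOT EXISTS.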
(Hope this isn't a "stupid" question, but I haven't been able to find a straightforward answer anywhere.) I currently have code that iterates through a DataView's records, making a change to a field in some of the records. The way I have this coded, a connection has to be opened and closed for each individual record that's updated:

dsrcUserIae.UpdateCommand = "UPDATE UserIAE SET blnCorrect = @blnCorrect WHERE (ID = @ID)"
dsrcUserIae.UpdateParameters.Add("blnCorrect", SqlDbType.Bit)
dsrcUserIae.UpdateParameters.Add("ID", SqlDbType.Int)
Dim myDataView As DataView = CType(dsrcUserIae.Select(DataSourceSelectArguments.Empty), DataView)
For Each myRow As DataRowView In myDataView
    If myRow("FkUsersAnswerID") = myRow("AnswerID") Then
        intCorrect = 1
    Else
        intCorrect = 0
    End If
    dsrcUserIae.UpdateParameters.Item("blnCorrect").DefaultValue = intCorrect
    dsrcUserIae.UpdateParameters.Item("ID").DefaultValue = myRow("ID")
    intUpdateResult = dsrcUserIae.Update()
Next

It seems like I should be able to do something like this (call Update once), but I'm not sure how...

dsrcUserIae.UpdateCommand = "UPDATE UserIAE SET blnCorrect = @blnCorrect WHERE (ID = @ID)"
dsrcUserIae.UpdateParameters.Add("blnCorrect", SqlDbType.Bit)
dsrcUserIae.UpdateParameters.Add("ID", SqlDbType.Int)
Dim myDataView As DataView = CType(dsrcUserIae.Select(DataSourceSelectArguments.Empty), DataView)
For Each myRow As DataRowView In myDataView
    If myRow("FkUsersAnswerID") = myRow("AnswerID") Then
        myRow("blnCorrect") = 1
    Else
        myRow("blnCorrect") = False
    End If
Next
intUpdateResult = dsrcUserIae.Update() 'Want all changed myRow("blnCorrect") to be updated to datasource

Can anybody explain how to do the bulk update? I've seen some info about AcceptChanges and Merge, but I'm not sure if they apply here, or if they are more for transactions.
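Since the flag is derived entirely from two columns already in the row, the whole loop can be replaced by one set-based statement that the server runs itself, with no per-row round trips. A sketch against the UserIAE columns referenced above:

UPDATE UserIAE
SET    blnCorrect = CASE WHEN FkUsersAnswerID = AnswerID THEN 1 ELSE 0 END;

If you do want to keep the DataSet approach, a SqlDataAdapter with its UpdateCommand configured will push all changed rows in one Update() call (and UpdateBatchSize can batch the statements), but for a simple derived column the single UPDATE above is usually both simpler and faster.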
I have to modify the structure of a table that already contains a lot of data. The log is getting full because of uncommitted transactions: a lot of data is being updated in large bulks, not all of the transactions are committed, and the update task cannot be completed. However, there is no more spare disk space for it to commit the transaction. Can anyone help?
I am trying my first bulk update into an existing SQL table from a CSV text file. The text file layout is exactly the same as the SQL table, with the same attributes.
The statement is: BULK INSERT [Jedox_prod].[dbo].[B_BP_Customer] FROM 'c:Baanjedox_dailyjdcom4401.txt' WITH
[code]....
The error message is:
Msg 4864, Level 16, State 1, Line 1
Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 2, column 3 (BP_Country). Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error. Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)". I have checked and re-checked the BP_Country field (the first field after the key) and I am not seeing any mismatches.
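When the data looks clean but the load keeps failing on one column, a useful trick is to load the file into a wide, all-varchar staging table first: everything loads, and you can then query for the rows that won't convert. A sketch only; the column names and widths, the field terminator, FIRSTROW and the 2-character country length are all assumptions:

CREATE TABLE dbo.B_BP_Customer_stage
(
    BP_Key     VARCHAR(255),
    BP_Name    VARCHAR(255),
    BP_Country VARCHAR(255)
    -- ...one VARCHAR column per field in the file
);

BULK INSERT dbo.B_BP_Customer_stage
FROM 'c:Baanjedox_dailyjdcom4401.txt'
WITH (FIELDTERMINATOR = ';', ROWTERMINATOR = '\n', FIRSTROW = 2);

-- find values that exceed the real column length or contain odd characters
SELECT *
FROM   dbo.B_BP_Customer_stage
WHERE  LEN(BP_Country) > 2;      -- adjust to the real BP_Country definition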
I have come up with an issue where I want to update data in a table using a bulk/SET-based update to get the result shown in the code below, with the output in the column titled "Arrear Amt".
Please use this test data.
CREATE TABLE ##vOD_Calc ( Seq_No INT , Contract_id INT , Rental_id INT , Actual_OD INT , Logic_OD INT , Due_dte DATETIME ,
[Code] .....
The logic required is that once the sum of column [ArrearAmt] for the current row and all previous rows becomes greater than $100, column [ChArrrearAmt] should show that summed-up value; otherwise column [ChArrrearAmt] should show the same value as column [ArrearAmt].
Once column [ChArrrearAmt] reaches the threshold of $100, the same cycle should start again. In the example above, rental #1 had $37.17 < $100, rental #1 + rental #2 was also < $100, and at rental #3 the sum of rentals #1, #2 and #3 becomes $111.51, which is greater than $100, so that summed value is what gets updated into column [ChArrrearAmt]. The same cycle then starts over from rental #4 onwards; the summation of [ArrearAmt] now begins at rental #4 and not from the start.
Below is the loop-based SQL script which handles this situation; however, in bulk it is a total deterioration of performance if thousands of rows are to be processed, i.e. a contract having multiple rentals.
The case here is that I have to use the previously updated value of column [ChArrrearAmt] to take the decision for the next row; with a bulk update, since the row is not yet updated with the latest amount, the decision on the next row also gives the wrong result.
This is the code with which I have managed to update the column [ChArrrearAmt]; however, it is a loop-based solution and a performance killer.
INSERT INTO ##vOD_Calc_loop ( Rows_count , contract_id ) SELECT COUNT(*) , T.Contract_id FROM ##vOD_Calc T GROUP BY T.Contract_id
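One set-based alternative for this kind of "running sum that resets at a threshold" is a recursive CTE (SQL Server 2005+) that carries the not-yet-flushed balance from row to row. A sketch only, since the full table definition was elided above: it assumes an [ArrearAmt] column in ##vOD_Calc and numbers the rentals by Seq_No within each Contract_id.

;WITH Ordered AS
(
    SELECT Contract_id, Rental_id, ArrearAmt,
           ROW_NUMBER() OVER (PARTITION BY Contract_id ORDER BY Seq_No) AS rn
    FROM   ##vOD_Calc
),
Calc AS
(
    -- anchor: first rental of each contract
    SELECT Contract_id, Rental_id, ArrearAmt, rn,
           CAST(ArrearAmt AS MONEY) AS ChArrrearAmt,
           CAST(CASE WHEN ArrearAmt > 100 THEN 0 ELSE ArrearAmt END AS MONEY) AS Carry
    FROM   Ordered
    WHERE  rn = 1
    UNION ALL
    -- each next rental adds to the carried balance and flushes it once it passes $100
    SELECT o.Contract_id, o.Rental_id, o.ArrearAmt, o.rn,
           CAST(CASE WHEN c.Carry + o.ArrearAmt > 100
                     THEN c.Carry + o.ArrearAmt ELSE o.ArrearAmt END AS MONEY),
           CAST(CASE WHEN c.Carry + o.ArrearAmt > 100
                     THEN 0 ELSE c.Carry + o.ArrearAmt END AS MONEY)
    FROM   Calc c
    JOIN   Ordered o ON o.Contract_id = c.Contract_id AND o.rn = c.rn + 1
)
UPDATE t
SET    t.ChArrrearAmt = c.ChArrrearAmt
FROM   ##vOD_Calc t
JOIN   Calc c ON c.Contract_id = t.Contract_id AND c.Rental_id = t.Rental_id
OPTION (MAXRECURSION 0);

The recursion still walks one row per rental within a contract, so very long rental chains may remain slow, but for moderate chain lengths it replaces the per-row UPDATE loop entirely.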
I have a fundamental problem with how CDC works for bulk updates. When a CDC-enabled table is updated for a single row, the CDC system tables record it as an update (operations 3 & 4), which is perfect and what it should be. No complaints! But when I do a bulk update in the same CDC-enabled table on the same columns, the CDC system tables record it as a delete and then an insert (operations 1 & 2). This is not correct, and this is my problem. Before CDC we used triggers and did not face this issue; everything was fine with triggers other than performance. The way CDC handles a bulk update is a big problem for me, because based on the output of the CDC system tables we are doing some migration work to a legacy system.
It would be impossible for me to go and change my migration logic scripts, because we have hundreds of procedures in them. Is this a known problem with CDC? Is there any solution in CDC so that when a bulk update happens on a table the CDC system tables record it as updates? I don't think CDC 'net changes' helps in this situation, because the net change would show as a single inserted row. If this can't be done with CDC then I have to completely abandon CDC and go back to triggers.
I am getting the below error message while performing a Bulk Insert/Update operation.
Could not allocate space for object 'dbo.pros_mas_det'.'PK__pros_mas__3213E83F22401542' in database 'admin_mbjobslive' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.
My Current SQL Server version :
Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64) Apr 2 2010 15:48:46 Copyright (c) Microsoft Corporation Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1) (Hypervisor)
My current database size crossed the limit size of 10 GB.
Can anyone provide an example of bulk copying XML data to a SQL table? I am also looking at using column mapping so that I can map fields and also insert a new GUID into the key of the SQL table. Many thanks.
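If the XML can be handed to the database directly, one option (SQL Server 2005+; a sketch only, with made-up element names, columns and table) is to shred it with nodes()/value() and generate the GUID key with NEWID() during the insert:

DECLARE @x XML
SET @x = N'<products>
  <product><ProductId>12345</ProductId><name>Widget</name></product>
  <product><ProductId>12346</ProductId><name>Gadget</name></product>
</products>'

INSERT INTO dbo.Products (ProductKey, ProductId, ProductName)
SELECT NEWID(),                                    -- new GUID for the key column
       p.value('(ProductId)[1]', 'INT'),
       p.value('(name)[1]',      'NVARCHAR(100)')
FROM   @x.nodes('/products/product') AS T(p);

From client code, SqlBulkCopy with ColumnMappings achieves the same mapping, but the GUID then has to be added to the source data (for example a DataTable column filled with Guid.NewGuid()) before the copy.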
I have a web form with a text field that needs to take in as much as the user decides to type and insert it into an nvarchar(max) field in the database behind it. I've tried using the new .WRITE() method in my update statement, but it cuts off the text after a while. Is there a way to do this insert/update in SQL 2005 without resorting to bulk insert? That bloats the transaction log, and turning the logging off requires a call to sp_dboption (or a straight-up ALTER DATABASE), which I'd like to avoid if I can.
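For reference, .WRITE appends rather than truncates when the offset argument is NULL, so chunked uploads normally look like the sketch below; the table, column and parameter names are assumptions:

DECLARE @id INT, @chunk NVARCHAR(MAX)
SET @id = 1
SET @chunk = N'...next block of text from the web form...'

-- first chunk: plain assignment (also initialises a NULL column, which .WRITE cannot do)
UPDATE dbo.Posts SET Body = @chunk WHERE PostId = @id

-- every subsequent chunk: append to the end of the existing value
UPDATE dbo.Posts SET Body.WRITE(@chunk, NULL, 0) WHERE PostId = @id

If the truncation is happening before the data reaches SQL Server, it is also worth checking the client-side parameter definition; an nvarchar parameter declared with a fixed size such as 4000 can clip the text silently.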
Hi! I'm building a web application. I need to read data from a text or Excel file, process the data, and then store the resulting records in the database. The number of records is big. I can store the data records in the database (SQL Server 2005) one at a time, but I think that's slow. Is there any way to insert the data in bulk?
I am working with SQL Server 7.0, and I need to transfer a bulk of data (millions of rows) from a remote database in Rdb to SQL Server. What is the best approach other than using a comma-delimited flat file? Is there a way to create a database link and then use a copy script in SQL to copy the data directly? I would appreciate any help. Thank you.
I have imported data in my table using the bulk insert command. I was supposed to fill specific columns of my table with that data so I used a view to put them in the column I wanted.
The table looks like this now:
id | id_param | val_param
---+----------+----------
 1 | no_tel   | 742062141
 2 | sex      | 1
 3 | age      | 23
 4 | no_tel   | 765234157
 5 | sex      | 1
 6 | age      | 34
When I want to select only the val_param that is = 1 for id_param = 'sex', using this query:
select * from bd_rox where id_param='sex' and val_param='1'
it returns no rows and I don't know why. The wanted result should look like this:
id | id_param | val_param
---+----------+----------
 2 | sex      | 1
 5 | sex      | 1
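A likely culprit after a bulk insert is invisible padding or stray characters (spaces, tabs, or a carriage return left over from the row terminator) stored inside id_param or val_param, which makes the equality comparison fail. A quick diagnostic plus a trimmed comparison, as a sketch:

-- see the true stored lengths; DATALENGTH exposes trailing spaces and CR characters
SELECT id, id_param, val_param,
       DATALENGTH(id_param)  AS len_id_param,
       DATALENGTH(val_param) AS len_val_param
FROM   bd_rox
WHERE  id_param LIKE '%sex%';

-- compare on trimmed values (also strips a trailing carriage return if present)
SELECT *
FROM   bd_rox
WHERE  LTRIM(RTRIM(REPLACE(id_param,  CHAR(13), ''))) = 'sex'
  AND  LTRIM(RTRIM(REPLACE(val_param, CHAR(13), ''))) = '1';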
I manage a legacy system that dumps its data into a number of different databases (same schema) on a nightly basis using bulk insert. I need to formulate a strategy for efficiently aggregating that data into a single database right after these nightly extractions complete. Here is my current strategy:
1. Duplicate the legacy system's database schema and add an identifier column to specify which database the data loaded from.
2. Each night, delete all records in the table.
3. Each night, for each database:
3a. Set each table's default value to a value that references the current database being loaded.
3b. Use the legacy system's flat files and format files to bulk insert into the database.
3c. Clear the default value.
What other steps would facilitate performance? Dropping and recreating the indexes? Does anyone foresee faults in this strategy?
Hello, I read several newsgroup articles about bulk delete, and I found one way is to:
- create a temporary table with all constraints of the original table
- insert the rows to be retained into that temp table
- drop constraints on the original table
- drop the original table
- rename the temporary table
My purge is a daily job, and my question is how this works on a heavily loaded operational database. I mean thousands of records are written into my tables (the same tables that I want to purge rows from) every second. While I am doing the copy to the temp table and dropping the table, what happens to that operational data?
I also realized another way of doing the bulk delete is using BCP:
1) BCP out the rows to be deleted to an archive file
2) BCP out the rows to be retained
3) Drop indexes and truncate the table
4) BCP in the rows to be retained
5) Create the indexes
Again the same question: when I'm doing the BCP, is there any blocking of insertions into my original table? What happens to the rows that are meant to be inserted in the meantime? Does BCP acquire an exclusive lock on the table which prevents any other insertion? Does anyone have experience with a BCP command querying out 2 million records, and how long it takes? I appreciate your help.
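A third option that avoids both the table swap and the BCP round trip is to delete in small batches, so each transaction holds its locks only briefly and concurrent inserts can continue between batches. A sketch in SQL 2000 style; the table name, PurgeDate column and 90-day cutoff are made up:

SET ROWCOUNT 5000

WHILE 1 = 1
BEGIN
    DELETE FROM dbo.MyBigTable
    WHERE  PurgeDate < DATEADD(day, -90, GETDATE())   -- rows eligible for purge

    IF @@ROWCOUNT = 0 BREAK
END

SET ROWCOUNT 0

Each batch only locks the rows (or pages) it touches, so the daily purge can run alongside the normal insert load, at the cost of taking longer overall than a single large delete.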
Any help would be appreciated. I am running a script that does the following in succession:
1 - Drop the existing database and create a new database
2 - Define tables, stored procedures and functions in the database
3 - Import data using bulk insert
4 - Analyze the data using stored procedures
I would like to improve the performance of the analysis in step 4 by creating indexes in step 2.
Question 1 - Are indexes updated when data is bulk inserted? I know they are when using normal insert, update, or delete T-SQL, but I am not sure about bulk insert of data.
Question 2 - Do I need to update the index statistics in any way, or would they be ready to use in step 4?
Thanks,
CJ
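On the statistics question, it is cheap to refresh them explicitly between step 3 and step 4; a sketch (the table name is a placeholder):

-- refresh statistics on a freshly loaded table before the analysis step
UPDATE STATISTICS dbo.MyLoadedTable WITH FULLSCAN

-- or, for every table in the database:
EXEC sp_updatestats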
I have a set of records in application memory separated by a record terminator ''. I can write the memory stream to a local disk file and call the bcp API functions to load the file into SQL Server. But how do I transfer the in-memory data directly to SQL Server, without writing to a data file, using ODBC? I am not using any .NET Framework classes in my code. The SQL Server and the application server (which generates the data records) are two different physical servers connected through the network. I am trying to figure out the fastest and most efficient way to load the data into SQL Server from a remote application server. Thanks for your help.
I need to periodically import a large amount of data into my CE database. When I tested the import over the network it took a lot of time. That's why I decided to send the raw data in ASCII files (because of their small size) and to import the files into the CE database.
Certainly, it's not a problem to write that CLI myself, but it would be interesting to know if someone has already done this...
I am trying to bulk copy some XML data into SQL and am generally doing quite well. The XML data I have been given has a child node called "name" which is the same as in the parent node, as shown in the highlighted part of the XML source below. I can retrieve the data in the parent node using bulk.ColumnMappings.Add("name", "Name"), but I cannot get any of the data from the child nodes "Category" or "Categories". Have you any suggestions on how I can get this data?
<?xml version="1.0" ?>
<products>
  <product>
    <ProductId>12345</ProductId>
    <name>Productname</name>
    <description>"This is some description text"</description>
    <Categories>
      <Category>
        <name>Category type</name>
        <merchantName>Category subtype</merchantName>
      </Category>
    </Categories>
    <fields />
  </product>
  Etc…….
</products>
Many thanks in advance, Simon
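SqlBulkCopy's ColumnMappings only sees the flat row set it is given, so the nested element has to be flattened before (or instead of) the copy. If the XML can be passed to SQL Server directly, nodes() with CROSS APPLY can reach the child-level name; a sketch with an assumed target table and a trimmed copy of the document:

DECLARE @x XML
SET @x = N'<products>
  <product>
    <ProductId>12345</ProductId>
    <name>Productname</name>
    <Categories><Category><name>Category type</name></Category></Categories>
  </product>
</products>'

INSERT INTO dbo.ProductCategories (ProductId, ProductName, CategoryName)
SELECT  p.value('(ProductId)[1]', 'INT'),
        p.value('(name)[1]',      'NVARCHAR(100)'),   -- parent-level name
        c.value('(name)[1]',      'NVARCHAR(100)')    -- child-level name inside Category
FROM    @x.nodes('/products/product') AS TP(p)
CROSS APPLY p.nodes('Categories/Category') AS TC(c);

If you would rather stay with SqlBulkCopy, the equivalent is to shape the source into a DataTable (for example via DataSet.ReadXml or an XmlReader) that already carries one column per nested value, and then map that column in ColumnMappings.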