What Is The Best Way To Manage Bulk Imports And Updates In Data?
Apr 28, 2008
I have anywhere from a couple hundred to a hundred thousand records that need to be updated or inserted into their SQL Server 2005 end destination. What are some of the best ways to accomplish this? Right now we are doing it manually through line-by-line updates and inserts. Would I use bcp or some other bulk import tool?
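A common pattern is to bulk load the file into a staging table and then run set-based updates and inserts from there. A minimal sketch, assuming a tab-delimited file; the table, path, and column names below are hypothetical:
Code Block
BULK INSERT dbo.StagingRecords
FROM 'C:\Imports\records.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', TABLOCK);

-- Update the rows that already exist in the destination...
UPDATE d
SET    d.SomeValue = s.SomeValue
FROM   dbo.Destination AS d
JOIN   dbo.StagingRecords AS s ON s.KeyID = d.KeyID;

-- ...then insert the rows that don't.
INSERT INTO dbo.Destination (KeyID, SomeValue)
SELECT s.KeyID, s.SomeValue
FROM   dbo.StagingRecords AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.Destination AS d WHERE d.KeyID = s.KeyID);
bcp does the same load from the command line if the file lives outside the server.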
I'm using Visual Basic 2005 Express and SQL Server 2005 Express. I have textboxes on a VB form linked to 2 database tables.
I am wondering if it is possible to use just ONE BindingNavigator to manage data entry and updates to THE database tables. I initially thought I could manage the tables, but I have encountered some problems:
i) When I enter a new record and click the SAVE button, the textboxes for the 1st table save the record to the database, but the textboxes for the 2nd table still retain data in them and do not save theirs to the database.
ii) The same textboxes for the 2nd table are NOT allowing updates either! Or, could it simply be that it is not possible to use this method for data entry and updates?
Looking for suggestions on this one. What I want to do is have a text file that may have any number of rows and cols (with a predefined format) that a user can update or insert into a table. The definition of the rows/cols and data mapping etc. has been done; it is the mechanics of actually doing the below I would appreciate help and advice on.
As the user is an 'end-user' (and has no SQL knowledge at all), the text file to import from will be placed in a predefined location and then a small script will be executed from their PC (as it happens, it's a Mac that runs an app that can exec an SQL command on the currently open database) that will in turn run a stored proc which then reads in (imports or updates) the appropriate tables with the contents of the external text file.
Sorry the explanation is a bit long winded but if anyone had any practical suggestions and examples, it would be greatly appreciated.
FYI, they are running SQL 2000 on both XP Pro and W2K3 server.
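A minimal sketch of such a stored procedure on SQL 2000, assuming BULK INSERT can reach the predefined drop location; the path, tables, and column names are all hypothetical, and the staging-then-upsert split keeps the load restartable:
Code Block
CREATE PROCEDURE dbo.usp_ImportExternalFile
AS
BEGIN
    SET NOCOUNT ON;

    -- Clear and reload the staging table from the predefined drop location.
    TRUNCATE TABLE dbo.ImportStaging;

    BULK INSERT dbo.ImportStaging
    FROM '\\FileServer\Imports\upload.txt'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

    -- Update rows that already exist in the target, then insert the new ones.
    UPDATE t
    SET    t.ColA = s.ColA
    FROM   dbo.TargetTable AS t
    JOIN   dbo.ImportStaging AS s ON s.KeyID = t.KeyID;

    INSERT INTO dbo.TargetTable (KeyID, ColA)
    SELECT s.KeyID, s.ColA
    FROM   dbo.ImportStaging AS s
    WHERE  NOT EXISTS (SELECT 1 FROM dbo.TargetTable AS t WHERE t.KeyID = s.KeyID);
END
The Mac-side script then only needs to exec dbo.usp_ImportExternalFile.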
I've been reading about the "table lock on bulk load" option and TABLOCK hint.
So my understanding is by default only row locks are taken out and other queries can read/write data while the bulk load is going on. However if you were doing parallel bulk loads with overlapping keys from a clustered index then they may block each other.
But if the option is enabled, you can do the parallel bulk loads without blocking because a table lock is taken out, however, other processes couldn't read/write the data until they're all done.
Is that the gist of it? I think I got confused by some misinformation. Don't all those row locks eventually likely escalate to a table lock anyway though?
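For reference, the two ways the table lock comes into play, sketched with hypothetical table and file names:
Code Block
-- Per-table setting: every bulk load against this table takes a table lock by default.
EXEC sp_tableoption 'dbo.StagingRecords', 'table lock on bulk load', 'ON';

-- Or request the lock explicitly for a single load.
BULK INSERT dbo.StagingRecords
FROM 'C:\Imports\records.txt'
WITH (TABLOCK, FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n');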
I have a project that consists of a SQL db with an Access front end as the user interface. Here is the structure of the table on which this question is based:
Code Block
create table #IncomeAndExpenseData (
    recordID nvarchar(5) NOT NULL,
    itemID int NOT NULL,
    itemvalue decimal(18, 2) NULL,
    monthitemvalue decimal(18, 2) NULL
)
The itemvalue field is where the user enters his/her numbers via Access. There is an IncomeAndExpenseCodes table as well which holds item information, including the itemID and entry unit of measure. Some itemIDs have an entry unit of measure of $/mo, while others are entered in terms of $/yr, others in %/yr.
For itemvalues of itemIDs with entry units of measure that are not $/mo, a stored procedure performs calculations which convert them into numbers with a unit of measure of $/mo and updates IncomeAndExpenseData, putting these numbers in the monthitemvalue field. This stored procedure is written to only calculate values for monthitemvalue fields which are null, in order to avoid recalculating every single row in the table.
If the user edits the itemvalue field there is a trigger on IncomeAndExpenseData which sets the monthitemvalue to null so the stored procedure recalculates the monthitemvalue for the changed rows. However, it appears this trigger is also setting monthitemvalue to null after the stored procedure updates the IncomeAndExpenseData table with the recalculated monthitemvalues, thus wiping out the answers.
How do I write a trigger that sets the monthitemvalue to null only when the user edits the itemvalue field, not when the stored procedure puts the recalculated monthitemvalue into the IncomeAndExpenseData table?
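One way is to gate the trigger on which column the UPDATE statement actually touched, and then compare old and new values. A minimal sketch, assuming recordID plus itemID identify a row (adjust the joins to whatever really is the key):
Code Block
CREATE TRIGGER trg_IncomeAndExpenseData_ItemValueChanged
ON dbo.IncomeAndExpenseData
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- The stored procedure only writes monthitemvalue, so this test filters it out.
    IF UPDATE(itemvalue)
    BEGIN
        -- Null out monthitemvalue only where itemvalue really changed.
        UPDATE t
        SET    t.monthitemvalue = NULL
        FROM   dbo.IncomeAndExpenseData AS t
        JOIN   inserted AS i ON i.recordID = t.recordID AND i.itemID = t.itemID
        JOIN   deleted  AS d ON d.recordID = i.recordID AND d.itemID = i.itemID
        WHERE  i.itemvalue <> d.itemvalue
           OR (i.itemvalue IS NULL AND d.itemvalue IS NOT NULL)
           OR (i.itemvalue IS NOT NULL AND d.itemvalue IS NULL);
    END
END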
I use an ASP.NET application to upload an Excel file, then a DTS package to import data from the file into SQL 2000, and finally to display the read data on the screen.
The DTS starts with setting some variables with the help of Dynamic properties.
On success, the DTS runs 2 simultaneous Transform Data tasks importing data from Excel to SQL 2000.
This works fine "most" of the time, but then there are the other times. One of the TDTs (Transform Data tasks) reads nothing from the Excel file, but it can read the data if I upload the same file again right after.
I was wondering about the stability of SSIS when it comes to importing data on a real-time basis. Let's say you have a scenario where flat files, for instance, will be dropped at random intervals ranging from 1 second to 10 seconds apart and the importer has to import these files immediately.
I would imagine that this is done with a package which runs a loop sniffing the directory forever but I stand corrected on the best ways of doing it.
I'm not too sure whether SSIS is a good idea for this, as lots of people in my company have had bad comments about SSIS in real-time scenarios, but they cannot elaborate on why well enough to convince me. I have done some pretty cool stuff and must admit that I am a fan of the technology, but I don't want to defend it into a corner.
I'm pretty new to using SQL 2005 Management Studio. Generally speaking, it works pretty much the way I'm used to (using Enterprise Manager) as far as moving data around and designing databases is concerned. But I've been trying to import some data to my local SQL server from an online SQL database and I am getting the most bizarre results.
Basically, it appears to work perfectly, but when the import is finished, there is only one new row in the destination table.
I have tried this with two completely different online databases and I have tried importing using a query and just downloading the table as-is, but whatever happens, I just get one row! All of the databases I'm exporting from or importing into are SQL 2000 - I just happen to be using the SQL 2005 client software.
Have you heard of anything like this?
I'm appending the report to the end of this email. As you can see, it says "success, success" all the way down and it clearly states "71236 rows transferred", but when I get done, there's just one.
Any thoughts?
One explanation is that every new row is overwriting the last new row somehow, but I don't think so because the row that actually gets copied is always the first one in the record set, not the last. Unless they're being transferred in reverse order, I suppose.
I'm pretty stumped and I haven't found any useful blogs or help on the web.
Hope you can think of something because I don't have Enterprise Manager on my computer any more and it's going to be a pain to install it.
Jon
Here's the report:
The execution was successful
- Initializing Data Flow Task (Success)
- Initializing Connections (Success)
- Setting SQL Command (Success)
- Setting Source Connection (Success)
- Setting Destination Connection (Success)
- Validating (Success)
- Prepare for Execute (Success)
- Pre-execute (Success)
- Executing (Success)
- Copying to [inframes].[dbo].[sum_shop_clicks] (Success) * 71236 rows transferred
Messages * Information 0x402090df: Data Flow Task: The final commit for the data insertion has started. (SQL Server Import and Export Wizard)
* Information 0x402090e0: Data Flow Task: The final commit for the data insertion has ended. (SQL Server Import and Export Wizard)
- Post-execute (Success)
- Cleanup (Success)
Messages * Information 0x4004300b: Data Flow Task: "component "Destination - sum_shop_clicks" (37)" wrote 71236 rows. (SQL Server Import and Export Wizard)
Dear all, I need your help. I'm working on a website project using ASP.NET. I have to register the users of this site; there are over 200 users, so I'm thinking of a way to save my time. All the information on these users is stored in an Excel file. What I want to do is import this data from the Excel file into a table in my database (SQL Server database). Could you help with the coding in VB.NET? Thanks in advance.
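The poster asks for VB.NET, but one server-side alternative is to read the workbook directly with OPENROWSET. A minimal sketch with a hypothetical file path and column names; it assumes the 'Ad Hoc Distributed Queries' option is enabled and the Jet OLE DB provider is available on the server:
Code Block
INSERT INTO dbo.SiteUsers (UserName, Email, Department)
SELECT UserName, Email, Department
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'Excel 8.0;Database=C:\Imports\Users.xls;HDR=YES',
                'SELECT UserName, Email, Department FROM [Sheet1$]');
The usual client-side equivalent in VB.NET is to read the sheet with an OleDbDataAdapter and push the rows to the table with SqlBulkCopy.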
I've been quite excited about SQL Server MDS that should allow non-IT staff to easily maintain data. However maintaining data that have many-to-many relationships seems to be quite a pain. I believe the best way is:
Open up your MDS web interface
Go to entities > product (for example)
Add a new member and fill the details
Click "product parts" in the bottom right "Related Entities" part
Add a new member
Try and find the product you just created from the dropdownlist
Select the first part and click OK
Again try and find the product you have created from the dropdownlist
Select the second part and click OK
Repeat...
Close the tab on your browser and finish your product entity.
How I wish it worked:
Open up your MDS web interface
Go to entities > product (for example)
Add a new member and fill the details
Check a checkbox for each part visible under "product parts" in the bottom right "Related Entities" part
Finish your product entity.
Hello, I am working on an ASP.NET web site using an SQL database. Is there a way to insert/delete and update data in the database without creating all the forms in ASP.NET? Since I am the only person to work with the database it would be easier for me than creating all the ASP.NET forms. Thanks, Miguel
1. The user may browse any website on the internet that may be in any language and enter the data into my application.
2. The data entered can be in English or any other language.
3. That's where the problem arises; the data that enters the database in languages other than English is displayed in the wrong format, like small boxes.
4. The user who had previously entered the data in the database can also alter that data. So when I display the data (other than English) to him it is not in the format in which the user had entered it.
So can anybody help me out how to manage multilingual data?
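The "small boxes" symptom usually means the text is stored in non-Unicode (varchar) columns or inserted without the N prefix. A minimal sketch with hypothetical table and column names:
Code Block
CREATE TABLE dbo.UserEntries (
    EntryID   int IDENTITY(1,1) PRIMARY KEY,
    EntryText nvarchar(2000) NOT NULL   -- nvarchar (Unicode), not varchar
);

-- The N prefix keeps the literal as Unicode on the way in;
-- from application code, pass the value as a parameterized nvarchar parameter.
INSERT INTO dbo.UserEntries (EntryText) VALUES (N'こんにちは');
The pages that display the data also need a Unicode encoding such as UTF-8, otherwise correctly stored text will still render as boxes.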
Hi All, How do I input a large text page (notepad) into a SQL column, or assign a pointer to the data? I've tried to use BOL (WRITETEXT) to no avail; I guess I'm missing something. I'm just using EM and Query Analyzer. I thought this should be easy. Image data should work the same way.
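A minimal sketch for a text/ntext column on SQL 2000-era servers; the table and column names are hypothetical, and the column has to be initialized before TEXTPTR returns a usable pointer:
Code Block
-- Initialize the column so TEXTPTR() is valid.
UPDATE dbo.Docs SET BodyText = '' WHERE DocID = 1;

DECLARE @ptr varbinary(16);
SELECT @ptr = TEXTPTR(BodyText) FROM dbo.Docs WHERE DocID = 1;

-- Write the contents of the notepad file into the column
-- (use an N'...' literal for an ntext column).
WRITETEXT dbo.Docs.BodyText @ptr 'paste the large text here...';
UPDATETEXT works the same way when appending to or patching an existing value, and image columns use the same pointer mechanism.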
I have an application which collects different types of history data from an SQL database. One type of those data is history log information, where the amount of records can be really big.
What I would like to implement is a kind of paging mechanism for collecting those data. For example, the first call will return the first 100 rows, then the next call would get rows 101 to 200, etc.
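A minimal sketch of paging with ROW_NUMBER(), assuming SQL Server 2005 or later and a hypothetical HistoryLog table keyed by LogID:
Code Block
DECLARE @PageNumber int, @PageSize int;
SET @PageNumber = 2;   -- the second call returns rows 101 to 200
SET @PageSize   = 100;

SELECT LogID, LogDate, LogMessage
FROM (
    SELECT LogID, LogDate, LogMessage,
           ROW_NUMBER() OVER (ORDER BY LogID) AS rn
    FROM dbo.HistoryLog
) AS numbered
WHERE rn BETWEEN (@PageNumber - 1) * @PageSize + 1
             AND @PageNumber * @PageSize
ORDER BY LogID;
If the caller can pass back the last LogID it received, a simple WHERE LogID > @LastSeenID with TOP 100 is even cheaper on very large log tables.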
I'm trying to use BULK INSERT for the first time and getting the following error. I think it might have something to do with my format file, and from the error msg there's a conversion error for the first column. In my database the field is nvarchar(6), so my best guess is to use SQLNCHAR for the first column. I've checked that the end of each line is CR LF, therefore the terminator is correct for line 7, right?
Msg 4863, Level 16, State 1, Line 1
Bulk load data conversion error (truncation) for row 1, column 1 (ASXCode).
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
BULK INSERT tbl_ASX_Data_temp FROM 'M:DataASXImportTest.txt' WITH (FORMATFILE = 'M:DataASXSQLFormatImport.Fmt')
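Two things worth checking, sketched below with placeholder names for everything except ASXCode (which comes from the error message). In a non-XML format file the data types describe the data file, not the table, so for a plain ANSI text file the first column is normally SQLCHAR even though the destination is nvarchar(6); SQLNCHAR is only for Unicode data files. A truncation error on column 1 also very often means a field terminator is wrong, so the whole line lands in the first field. Something along these lines, assuming a tab-delimited, CR/LF-terminated file with three columns:
Code Block
9.0
3
1   SQLCHAR   0   12   "\t"     1   ASXCode     SQL_Latin1_General_CP1_CI_AS
2   SQLCHAR   0   50   "\t"     2   SecondCol   SQL_Latin1_General_CP1_CI_AS
3   SQLCHAR   0   30   "\r\n"   3   ThirdCol    SQL_Latin1_General_CP1_CI_AS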
We have several read-only nodes in our AlwaysOn cluster, which are set to use Synchronous-commit mode, which ensures that the logs are updated on the read-only nodes before any update statements complete. Even with this option, if we query a read-only node before the logs have been processed, we can read old data. I would like to know a strategy to ensure that a read-only query will definitely return up to date information. I had an idea that if I just used a different transaction type, like Serializable, that it might block the read-only query from actually getting the data until after the log file was processed, but I have not tried it, yet.
I would like to move more queries to the read-only nodes, in an effort to offload CPU utilization from the primary node.
I am using SQL Server 2008 R2. I have created a database named testDB. I have a lot of tables, including some log tables, in this database; some of those log tables contain a very large number of records.
So my purpose is that I want to cap the size of those tables (log tables) and move the records to a table in another database placed at another location, so that my database has no problem.
Is there any way to do the above steps for my database?
Is any such functionality already built into SQL Server?
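There is no built-in per-table size cap; the usual approach is a scheduled job that moves old rows into an archive database. A minimal sketch with hypothetical database, table, and column names:
Code Block
BEGIN TRAN;

-- Copy log rows older than three months to the archive database...
INSERT INTO ArchiveDB.dbo.AppLog (LogID, LogDate, LogMessage)
SELECT LogID, LogDate, LogMessage
FROM dbo.AppLog
WHERE LogDate < DATEADD(MONTH, -3, GETDATE());

-- ...then remove them from the live log table.
DELETE FROM dbo.AppLog
WHERE LogDate < DATEADD(MONTH, -3, GETDATE());

COMMIT TRAN;
If the archive database lives on another server, the INSERT would go through a linked server name instead of a plain three-part name.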
I have 14 databases; the last database - the 14th one - will have lookup tables only. The other 13 databases will have these lookup tables and data tables. At the end of each day I will make updates to the lookup tables on the 14th database, and I want to be able to push the updates to any or some of the 13 databases. Lookup tables will have only up to 100 rows, so I am not concerned about the bandwidth. What is the best way to accomplish this?
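One option, sketched with hypothetical database, table, and column names and assuming SQL Server 2008 or later, is a per-database MERGE from the master lookup database; on older versions the same job is three statements (UPDATE, INSERT, DELETE), and replication is the other common answer:
Code Block
MERGE Db01.dbo.LookupCodes AS tgt
USING LookupMaster.dbo.LookupCodes AS src
      ON tgt.CodeID = src.CodeID
WHEN MATCHED AND tgt.Description <> src.Description
    THEN UPDATE SET tgt.Description = src.Description
WHEN NOT MATCHED BY TARGET
    THEN INSERT (CodeID, Description) VALUES (src.CodeID, src.Description)
WHEN NOT MATCHED BY SOURCE
    THEN DELETE;
The statement is repeated (or generated in a loop) once per target database that should receive the day's changes.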
I don't know if the title for the subject is appropriate here, anyway here goes: This process was set up by someone and I have inherited it. I have a SQL 2000 database that has about 13 tables that get populated with data from 3 different databases. I have identified where each of this data comes from, the stored procedures that do the updates, inserts, and deletes, and the jobs that run these stored procedures to do the updates, except for one table. The updates for all the other tables are done through scheduled jobs. For the one table I know where the data comes from and the stored procedure that needs to run to do the update on the table, however I have not been able to identify the process that runs the stored procedure. I am hoping that someone can give me a clue as to how to find out where a stored procedure is being used - or any other hint as to how I could go about finding out how this table gets updated. Thanks, KR
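Two places worth searching first on SQL 2000, sketched below; 'usp_UpdateMysteryTable' stands in for the actual procedure name. Job steps live in msdb, and the text of other procedures or triggers that might call it lives in syscomments (SQL Profiler is the fallback if the caller turns out to be an outside application):
Code Block
-- Does any SQL Agent job step call the procedure?
SELECT j.name AS job_name, s.step_name, s.command
FROM msdb.dbo.sysjobsteps AS s
JOIN msdb.dbo.sysjobs AS j ON j.job_id = s.job_id
WHERE s.command LIKE '%usp_UpdateMysteryTable%';

-- Does any other procedure or trigger reference it?
SELECT DISTINCT OBJECT_NAME(id) AS referencing_object
FROM syscomments
WHERE [text] LIKE '%usp_UpdateMysteryTable%';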
I have been successful with DTS packages and various SQL statements. However, I have a new challenge. I have a table in a SQL Server database. One of the columns is employee number, and there is a column for department number (which is not populated). In a remote AS400 file I have the employee number and department number. I want to create a package to connect to the remote table and update the SQL Server table with the department number where the two tables match on the employee number.
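If the AS/400 is reachable as a linked server, the update itself is one set-based statement; a minimal sketch in which the linked server, library/file, and column names are all hypothetical:
Code Block
UPDATE e
SET    e.DepartmentNumber = a.DEPTNO
FROM   dbo.Employees AS e
JOIN   AS400.S10XXXX.PAYROLL.EMPMAST AS a
       ON a.EMPNO = e.EmployeeNumber;
The same statement can also be the final step of the DTS package after a Transform Data task copies the AS/400 file into a local staging table, which avoids the linked-server dependency.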
I need to make a job that will update up to 8000 rows, changing the list description from 'berkhold' to 'berknew', in SQL 2000. This is something that I have to do manually for several projects every day by doing the following 2 steps.
SELECT ListDescription, CRRecordID FROM dbo.BerkleyGroupInventory WHERE ListDescription = 'BerkHold' AND CRCallDateTime < '1/1/2003' AND CRCallResultCode = 'CC' ORDER BY CRRecordID
I then scroll to the 8000th row and copy the CrrecordID and run the following query
UPDATE dbo.berkleygroupinventory SET listdescription = 'berknew' WHERE ListDescription = 'BerkHold' AND CRRecordID <= 5968432 AND CRCallDateTime < '1/1/2003' AND CRCallResultCode = 'CC'
I'm sure there's an easier way to do this, but I'm very new to SQL and haven't figured it out yet.
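A minimal sketch of doing it in one step on SQL 2000: SET ROWCOUNT caps the update at 8000 rows, so there is no need to find the 8000th CRRecordID by hand, and the statement can be scheduled as a SQL Agent job. (Without an ORDER BY the 8000 rows are not guaranteed to be the lowest CRRecordIDs, which the manual method did guarantee.)
Code Block
SET ROWCOUNT 8000;

UPDATE dbo.BerkleyGroupInventory
SET    ListDescription = 'BerkNew'
WHERE  ListDescription = 'BerkHold'
  AND  CRCallDateTime < '1/1/2003'
  AND  CRCallResultCode = 'CC';

SET ROWCOUNT 0;   -- always reset the session limit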
I am running a Access97 front end with a SQL Server 7 backend. On records with an ntext datatype, you are only allowed to update records if the ntext field is null. The tables are linked from access. You get a "cannot update linked table" and "ODBC error #306." Any suggestions??
We have a legacy database whose data needs to be included in our yet-to-be-built SQL 2005 data warehouse. Some of the tables don't have any natural candidates for a primary key. (Of course, we also need to add other data to the mix.)
Suppose we load the empty warehouse initially. In following loads we don't want to include those records that haven't changed from the first load ("duplicates") but we also don't want to delete the contents of the entire warehouse because of the load time. Any ideas/best practices on how to handle "incremental updates" to a warehouse would be appreciated.
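When there is no natural key, one workable pattern on SQL 2005 is to stage each new extract and insert only the rows the warehouse has never seen, comparing whole rows with EXCEPT. A minimal sketch with hypothetical table and column names:
Code Block
INSERT INTO dw.LegacySales (Col1, Col2, Col3)
SELECT Col1, Col2, Col3 FROM staging.LegacySales
EXCEPT
SELECT Col1, Col2, Col3 FROM dw.LegacySales;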
Hello, I am working on a project that involves one part where a field's value needs to be changed when the user updates the record. Here is the situation in detail: There is an InputData table where the user enters new records or changes existing records. There is a field called "calculated" in this table which has a default value of 'no'. A stored procedure runs math calculations on all the InputData records where the calculated field = 'no'. At the end of this stored procedure, it sets the calculated field = 'yes'. When new records are added by the user their "calculated" field value is 'no' by default so that the next time the stored procedure is executed, it only runs the math calculations on the new records. The problem is, if a user changes an existing record, the "calculated" field needs to be changed from 'yes' to 'no' so that the stored procedure recalculates the math for the modified record. How do I change the value from 'yes' to 'no' on records that the user modifies? Thanks.
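One way, assuming an AFTER UPDATE trigger and that the stored procedure sets calculated = 'yes' in the same UPDATE statement that writes its results; RecordID and the table name are hypothetical:
Code Block
CREATE TRIGGER trg_InputData_FlagForRecalc
ON dbo.InputData
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Skip the pass where the stored procedure itself flips calculated to 'yes'.
    IF UPDATE(calculated)
        RETURN;

    -- A user edit touches the data columns but not calculated,
    -- so mark those rows for recalculation.
    UPDATE t
    SET    t.calculated = 'no'
    FROM   dbo.InputData AS t
    JOIN   inserted AS i ON i.RecordID = t.RecordID;
END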
I have a table that is increasing quite largely each day. By now, I have on average 300 million records over 2.5 years. Before we received our new interface, the data we received was aggregated and thus not that big. The problem is that the table is so huge that I cannot use the Slowly Changing Dimension component. I was thinking about making a temp table where I load the incremental data before I load it into the final data mart table. Based on this temporary table I use a script to compare the temp data with the already existing data in the data mart. However, this requires a compare of each record (300 million records).
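One way to cheapen that comparison, sketched with hypothetical key and column names: keep a checksum per row in the mart so the join only reads the key and one narrow column instead of every field (BINARY_CHECKSUM can collide, so treat it as a pre-filter rather than proof of equality):
Code Block
-- Rows in the staging table that are new or whose content changed.
SELECT s.*
FROM staging.FactHistory AS s
LEFT JOIN dw.FactHistory AS d
       ON d.BusinessKey = s.BusinessKey
WHERE d.BusinessKey IS NULL                                   -- new row
   OR d.RowChecksum <> BINARY_CHECKSUM(s.Col1, s.Col2);       -- changed row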
I posted the question in the SQL forum and got a good SQL statement to work with. However, I want to see if there is a way to do it in SSIS.
Maybe this is a really basic question, but I am having a hard time doing it in SQL Server 2005 SSIS.
I have a flat file that I want to merge with table in SQL server 2005.
1> I have successfully created a data flow task to import data from flat file to Table X (new table I created for this package).
Now here is my question. I have a Table A already in the database with the same column structure as Table X (both tables have 20 columns, same names, same design).
I want to merge Table A and Table X and store the data in Table A. However, I just don't want to merge blindly; I need to insert a new row into Table A only if the same row does not exist in Table A (there is no primary key, I am looking at certain fields to see if the rows are the same).
Here is an example:
Table A
--------------
1 test test1 test2 test3 test4 test5
2 test test6 test7 test8 test9 test10

Table X
------------
1 test test1 test2 test99 test4 test5
2 test test98 test97 test96 test95 test94
--------------------------------------------------------
Now, I want to only insert row 2 of Table X since there is a match on 4 of the fields in row 1. The new Table A should look like:

NEW Table A'
-----------------
test test1 test2 test3 test4 test5
test test6 test7 test8 test9 test10
test test98 test97 test96 test95 test94
I think I could do this using an Execute SQL task and write all the code in SQL, but that will be cumbersome and time consuming. Is there a simpler way to achieve this?
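For reference, the set-based statement the Execute SQL task would run is short; in SSIS the same check is usually done with a Lookup transform (match on the chosen fields and send the non-matching rows to the destination). A minimal sketch with hypothetical column names, matching on the four fields that define "the same row":
Code Block
INSERT INTO dbo.TableA (Col1, Col2, Col3, Col4, Col5, Col6)
SELECT x.Col1, x.Col2, x.Col3, x.Col4, x.Col5, x.Col6
FROM dbo.TableX AS x
WHERE NOT EXISTS (
    SELECT 1
    FROM dbo.TableA AS a
    WHERE a.Col1 = x.Col1
      AND a.Col2 = x.Col2
      AND a.Col3 = x.Col3
      AND a.Col4 = x.Col4
);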