I need to load a lot of data into a SQL table as fast as possible (all of it string data from a CSV file). I have read that the "fastest" way to load data is the "Fast Load" option of the OLE DB Destination, but I have also read that the SQL Server Destination is fast.
1. Is there a general consensus as to which is the quickest way to load?
2. The file is a CSV file. Would a fixed-format file be faster to read?
I'm bulk loading a ton of data into SQL Server 2005 Standard Edition. I used to do this process in version 2000, and it seems there is some more overhead in 2005. Is there a way to reduce logging to almost nothing to speed up the insert?
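A minimal sketch of the usual first step, assuming a placeholder database name and that you can accept losing point-in-time recovery for the duration of the load: switch to the BULK_LOGGED recovery model so bulk operations (BULK INSERT, bcp, SELECT INTO) are minimally logged.

-- MyLoadDb is a placeholder name
ALTER DATABASE MyLoadDb SET RECOVERY BULK_LOGGED;
-- ... run the bulk load here ...
ALTER DATABASE MyLoadDb SET RECOVERY FULL;  -- put the original model back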
Hi, 1) I need to transfer 500 GB of data from one server to another; which is faster: DTS, BCP, or backup/restore? 2) What are the best methods for checking blocking, deadlocks, and indexes?
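On the second question, a quick sketch of the old standbys from the SQL 7.0/2000 era:

-- Who is blocking whom, and what locks are held
EXEC sp_who2;   -- the BlkBy column shows the blocking SPID
EXEC sp_lock;   -- lock detail per SPID

-- Write deadlock participants and resources to the SQL Server error log
DBCC TRACEON (1204, -1);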
I've got a view that is driven from an 80-million-record table in a data warehouse. I am trying to populate an aggregate table in a data mart but am running into performance problems. The data mart table needs to be updated daily. I understand there are many factors that affect performance, but in general, would the fastest approach be: 1) truncate the data mart table, 2) bcp the view out to a text file, 3) bulk insert into the data mart table?
If you need more information to answer this, please let me know.
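For reference, a minimal sketch of steps 2 and 3 (server, object, and file names are placeholders):

bcp "SELECT * FROM Warehouse.dbo.MyAggregateView" queryout C:\extract\agg.txt -c -T -S MyServer

TRUNCATE TABLE dbo.MartAggregate;

BULK INSERT dbo.MartAggregate
FROM 'C:\extract\agg.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', TABLOCK);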
I have NUnit tests. In each test I create a temp DB with data and drop it at the end of the test. Since I have several connections to it during the test, I do SET SINGLE_USER on the DB and then drop it, because I can't drop it while others are connected ("others" meaning other connections). This takes quite some time. I tried detaching instead, but no improvement.
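A minimal sketch of that teardown, assuming a placeholder database name; WITH ROLLBACK IMMEDIATE kills the other connections right away instead of waiting for them to finish:

USE master;
-- Kick the remaining test connections immediately
ALTER DATABASE TestDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE TestDb;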
Here I will describe my problem. 1. We are loading a large amount of data from the database on a background thread that is started in the Application_Start event in global.asax.cs. The data is later cached for subsequent requests to improve performance. 2. Now that we have put the application on a web farm/garden, it is not able to load the application. 3. We are sending requests to the servers through a router-type application. 4. The application works fine in a single-server environment.
I have just done the SSIS example in the tutorial document included when installing SQL 2005 Enterprise. I have a problem: whenever I test-run it, the package loads all the data from the source without taking the existing data into account (I mean it loads all the data to the destination). I have run it several times and it continues to load everything without checking. That means the data is duplicated each time the schedule runs.
I think there should be a parameter or something like that to make the engine load only the new data to the destination. Could you help, please?
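There is no single built-in switch for this; the usual pattern is an incremental source query. A sketch, assuming (both are assumptions) that the source rows carry a modified-date column and that the last successful load time is stored somewhere the package can read:

-- Only pull rows changed since the previous run
SELECT *
FROM dbo.SourceTable
WHERE ModifiedDate > @LastLoadDate;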
In my SSIS package I am using a Foreach Loop container to load multiple flat files.
Now my requirement is that I want to load only the files that match %vendor%. In the source folder I have many files, but I am only interested in loading the ones that contain the string "vendor" in the file name.
How can I load CSV file data into a SQL Server table? I know there are ways like BULK INSERT and others to load CSV data into a table, but in my case the table doesn't exist and has to be created at run time. With a simple insert into a temp table we do something like SELECT * INTO #temp FROM tablename, and that creates the temp table. So I need something like that: something that creates the table and loads the data into it, because the CSV files will have different numbers of columns and different names, so I cannot create the table structure in advance. I have to create the table at run time.
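A sketch of one common workaround, with several assumptions: the Jet OLE DB provider is installed, ad hoc distributed queries are enabled, and the folder and file names below are placeholders. SELECT ... INTO creates the table from whatever columns the CSV happens to have.

-- The destination table is created at run time from the file's columns
SELECT *
INTO #csv_data
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'Text;Database=C:\imports;HDR=YES',
                'SELECT * FROM data.csv');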
I am trying to insert a row into a table of Microsoft SQL Server 2000.
There are various columns.
[SNO] [numeric](3, 0) NOT NULL,
[DATT] [char](32) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[DATTA] [char](3000) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[CODECS] [char](32) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
The [DATTA] column is causing a problem. Even if I try to put only a 1700-character string into [DATTA], the Java code throws the following exception:
StaleConnecti A CONM7007I: Mapping the following SQLException, with ErrorCode 0 and SQLState 08S01, to a StaleConnectionException: java.sql.SQLException: [Microsoft][SQLServer 2000 Driver for JDBC]Connection reset
at com.microsoft.jdbc.base.BaseExceptions.createException(Unknown Source)
Why is it throwing an exception even though the sum-total of this row doesn't exceed 8000 characters?
I previously had an ASP.NET 1.1 site running on my IIS 6.0 server (not the default website) with Reporting Services running in a subdirectory of that website. I recently upgraded my website to ASP.NET 2.0 and was greeted with an error when trying to view a report. The error was very nondescript, but when I checked the server logs, the details were recorded as "It is not possible to run two different versions of ASP.NET in the same IIS process. Please use the IIS Administration Tool to reconfigure your server to run the application in a separate process."
First of all, I could not figure out where and how to do this. Secondly, I decided to try to also change the Reporting Services folders to run ASP.NET 2.0 and when I did, I was greeted with the following message when attempting to view a report:
"Failed to load expression host assembly. Details: StrongName cannot have an empty string for the assembly name."
Hello, I am trying to add a string to my database. Info is added, but it is the name of the string variable, not the data contained within. What am I doing wrong? The literal text "Company" and "currentUserID" is showing up in my database, but I need the values contained in those variables. All help is appreciated!
Imports System.Data
Imports System.Data.Common
Imports System.Data.SqlClient

Partial Class _Default
    Inherits System.Web.UI.Page
    Protected Sub CreateUserWizard1_CreatedUser(ByVal sender As Object, ByVal e As System.EventArgs) Handles CreateUserWizard1.CreatedUser
        'Database Connection
        Dim con As New SqlConnection("Data Source=.\SQLExpress;integrated security=true;attachdbfilename=|DataDirectory|ASPNETDB.mdf;user instance=true")

        'First Command Data
        Dim Company As String = ((CType(CreateUserWizard1.CreateUserStep.ContentTemplateContainer.FindControl("Company"), TextBox)).Text)
        Dim insertSQL1 As String
        Dim currentUserID As String = ((CType(CreateUserWizard1.CreateUserStep.ContentTemplateContainer.FindControl("UserName"), TextBox)).Text)
        insertSQL1 = "INSERT INTO Company (CompanyName, UserID) VALUES ('Company', 'currentUserID')"
        Dim cmd1 As New SqlCommand(insertSQL1, con)

        '2nd Command Data
        Dim selectSQL As String
        selectSQL = "SELECT companyKey FROM Company WHERE UserID = 'currentUserID'"
        Dim cmd2 As New SqlCommand(selectSQL, con)
        Dim reader As SqlDataReader

        '3rd Command Data
        Dim insertSQL2 As String
        insertSQL2 = "INSERT INTO Company_Membership (CompanyKey, UserID) VALUES ('CompanyKey', 'currentUserID')"
        Dim cmd3 As New SqlCommand(insertSQL2, con)

        'First Command
        Dim added As Integer = 0
        Try
            con.Open()
            added = cmd1.ExecuteNonQuery()
            lblResults.Text = added.ToString() & " records inserted."
        Catch err As Exception
            lblResults.Text = "Error inserting record."
            lblResults.Text &= err.Message
        Finally
            con.Close()
        End Try

        '2nd Command
        Try
            con.Open()
            reader = cmd2.ExecuteReader()
            Do While reader.Read()
                Dim CompanyKey = reader("CompanyKey").ToString()
            Loop
            reader.Close()
        Catch err As Exception
            lbl1Results.Text = "Error selecting record."
            lbl1Results.Text &= err.Message
        Finally
            con.Close()
        End Try

        '3rd Command
        Try
            con.Open()
            added = cmd3.ExecuteNonQuery()
            lbl2Results.Text = added.ToString() & " records inserted."
        Catch err As Exception
            lbl2Results.Text = "Error inserting record."
            lbl2Results.Text &= err.Message
        Finally
            con.Close()
        End Try
    End Sub
Hi! We do a complete data load once a week. Now we need additional steps to append data on a daily basis. We have a primary key on the table, and it doesn't allow appending duplicate rows. What steps should we create to append the data?
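A minimal sketch of the usual pattern, with placeholder names: land the daily extract in a staging table first, then insert only the rows whose keys are not already present.

INSERT INTO dbo.Target (pk_col, col1, col2)
SELECT s.pk_col, s.col1, s.col2
FROM dbo.Staging AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.Target AS t WHERE t.pk_col = s.pk_col);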
Hi everybody! When I run:

LOAD DATABASE db1 FROM DISK = 'c:\mssql\data\db1_backup.dat'
go

I get the error message:

Msg 3201, Level 16, State 1
Can't open dump device 'c:\mssql\data\db1_backup.dat', device error or device off line. Please consult the SQL Server error log for more details.

How can I fix the problem? Thank you. Alona
Right now I have a stored procedure that goes through each of the Line and Body fields using a cursor. The problem is that this method is very slow. How would you experts solve this problem? Any hints or suggestions?
BEFORE EXAMPLE

Part  Line    Body         Series  Engine  Year
1     1234    A,B          WET     C       1998
2     56789   91,93,94,95  WET     0       1997
3     345656  S,R5,6,12    WEN     C       1995
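For illustration, a sketch of the usual set-based alternative to a cursor, assuming a helper table dbo.Numbers(n) holding integers 1..8000 (an assumption, as is the table name dbo.Parts): each comma-separated Body value becomes its own row in one pass.

-- Split the comma-separated Body column without a cursor
SELECT p.Part,
       SUBSTRING(p.Body, n.n, CHARINDEX(',', p.Body + ',', n.n) - n.n) AS BodyValue
FROM dbo.Parts AS p
JOIN dbo.Numbers AS n
  ON n.n <= LEN(p.Body)
WHERE SUBSTRING(',' + p.Body, n.n, 1) = ',';  -- n marks the start of each value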
I need to load data on the client side into a DataGrid. I decided to use Excel as the data source, but it doesn't have to be. My problem is that it only loads from the server, not the client. I browse to find the file and get the path with a control named ctlFindFile. When a button labeled ctlLoadData is pressed, it displays the path in Label1 and also passes the path to the function GetDataFromExcel, which returns a DataSet from the spreadsheet and displays the data. Data is returned fine if I'm on the server, but when I'm on a remote machine I receive an error, unless I've placed a spreadsheet with the same name and path on the server as on the client machine; then I'm able to load the file. How do I get it to load from the client machine?
Code below:
Private Sub ctlLoadData_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles ctlLoadData.Click
    Label1.Text = ctlFindFile.Value
    DataGrid1.DataSource = GetDataFromExcel(ctlFindFile.Value, "SampleNamedRange").Tables(0)
    DataGrid1.DataBind()
End Sub

Public Function GetDataFromExcel(ByVal FileName As String, ByVal RangeName As String) As System.Data.DataSet
    'Returns a DataSet containing information from a named range in the passed Excel worksheet
    Try
        Dim strConn As String = "Provider=Microsoft.Jet.OLEDB.4.0;" & _
            "Data Source=" & FileName & ";Extended Properties=Excel 8.0;"
        Dim objConn As New System.Data.OleDb.OleDbConnection(strConn)
        objConn.Open()
        ' Create objects ready to grab data
        Dim objCmd As New System.Data.OleDb.OleDbCommand("SELECT * FROM " & _
            RangeName, objConn)
        Dim objDA As New System.Data.OleDb.OleDbDataAdapter
        objDA.SelectCommand = objCmd
        ' Fill DataSet
        Dim objDS As New System.Data.DataSet
        objDA.Fill(objDS)
        ' Cleanup and return DataSet
        objConn.Close()
        Return objDS
    Catch ex As Exception
        ' Possible errors include Excel file already open and locked, et al.
        Return Nothing
    End Try
End Function

Error on Client:

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

Exception Details: System.NullReferenceException: Object reference not set to an instance of an object.

Source Error:
Line 31: Private Sub ctlLoadData_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles ctlLoadData.Click
Line 32:     Label1.Text = ctlFindFile.Value
Line 33:     DataGrid1.DataSource = GetDataFromExcel(ctlFindFile.Value, "SampleNamedRange").Tables(0)
Line 34:     DataGrid1.DataBind()
Line 35: End Sub

Source File: C:\Inetpub\wwwroot\SAI_Load\WebForm1.aspx.vb    Line: 33

Stack Trace:
[NullReferenceException: Object reference not set to an instance of an object.]
   SAI_Load.WebForm1.ctlLoadData_Click(Object sender, EventArgs e) in C:\Inetpub\wwwroot\SAI_Load\WebForm1.aspx.vb:33
   System.Web.UI.WebControls.Button.OnClick(EventArgs e) +108
   System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument) +57
   System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +18
   System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData) +33
   System.Web.UI.Page.ProcessRequestMain() +1292
Version Information: Microsoft .NET Framework Version:1.1.4322.2032; ASP.NET Version:1.1.4322.2032
I have a user who is loading data via an Access load procedure into a table that is actually a SQL Server 2005 table linked into the Access database. He says the load is extremely slow. How can I monitor what it is doing on the SQL Server side?
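Besides a SQL Server Profiler trace, a quick sketch of the DMV route on 2005 (run it while the load is going):

-- Currently executing requests and their statements
SELECT r.session_id, r.status, r.wait_type, r.cpu_time, r.total_elapsed_time,
       t.text AS current_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50;  -- skip system sessions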
Hi, I'm extracting data from a mainframe application with a view to loading it into a MS SQL database. I'm trying to determine the most efficient way to format the mainframe extract file to make loading into the database easier.

The problem I have is that the existing record structure includes an array that can vary between 1 and 50 entries. If I include this array in a single record, the table I use to import the data would need 50 columns, though not all of these would be populated. There is a field in the record to identify how many occurrences of the array there are.

Current Record Structure:
Account Number
Account Name
Other Account Details
TotalNumberOfArrayFieldsPopulated
Array:
Value1
Value2
Value3
... up to Value50 (if required)

i.e.
12344,Mr Agent,$29.95,2,BX123,BX124
12345,Mr Jones,$14.95,3,XX123,XX124,XX125
12345,Mr Jones,$14.00,1,XY123
12345,Mr Jones,$15.95,2,XZ124,XZ125
12346,Mr Smith,$19.95,3,AX123,AX124,AX125
12346,Mr Smith,$19.00,1,BY123
12347,Mr Acant,$99.95,7,CX123,CX124,CX125,CX126,CX127,CX128,CX129

There may be up to 3 records created for each Account Number with different values in the array fields.

Am I better to break this file into two files: one with the core customer information and a second file with a row for each array value, linked back to the customer information file? Or is there a way to efficiently process the original file once it is loaded into the staging tables in the database?

i.e.

File 1 - Core Customer Information
====================================
Record Number
Account Number
Account Name
Other Account Details
TotalNumberOfArrayFieldsPopulated

File 2 - Array Information
====================================
Record Number
Array:
Value1
Value2
Value3
... up to Value50 (if required)

File 1
========================
12344,Mr Agent,$29.95,2
12345,Mr Jones,$14.95,3
12345,Mr Jones,$14.00,1
12345,Mr Jones,$15.95,2
12346,Mr Smith,$19.95,3
12346,Mr Smith,$19.00,1
12347,Mr Acant,$99.95,7

File 2
========================
12344,BX123
12344,BX124
12345,XX123
12345,XX124
12345,XX125
12345,XY123
12345,XZ124
12345,XZ125
12346,AX123
12346,AX124
12346,AX125
12346,BY123
12347,CX123
12347,CX124
12347,CX125
12347,CX126
12347,CX127
12347,CX128
12347,CX129

At times the individual array values will be used for lookups, though essentially the Customer Information record will be the primary lookup data. I'm leaning toward changing my COBOL code and creating the second output unless someone can suggest a simple way to process the information once it is loaded into the table. Any help that could be suggested would be greatly appreciated.
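If the single-file layout is kept, a sketch of normalizing it after it lands in a staging table, assuming SQL Server 2005 or later and placeholder names (dbo.Staging with columns AccountNumber, AccountName, Amount, ArrayCount, Value1 through Value50): UNPIVOT rotates the array columns into one row per value and silently drops the empty (NULL) slots.

-- One row per populated array value
INSERT INTO dbo.AccountArrayValues (AccountNumber, ArrayValue)
SELECT u.AccountNumber, u.ArrayValue
FROM dbo.Staging
UNPIVOT (ArrayValue FOR Slot IN (Value1, Value2, Value3 /* ... Value50 */)) AS u;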
I am using the Bulk Insert task to load data from a .dat file into a SQL table, but I am getting the error below.
[Bulk Insert Task] Error: An error occurred with the following error message: "Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)". The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error. The bulk load failed. The column is too long in the data file for row 1, column 1. Verify that the field terminator and row terminator are specified correctly.".
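The last sentence of that message is usually the key. For comparison, a minimal sketch of the equivalent BULK INSERT statement with the terminators spelled out; the table, path, and terminator values are placeholders and must match how the .dat file is actually delimited:

BULK INSERT dbo.TargetTable
FROM 'C:\data\load.dat'
WITH (
    FIELDTERMINATOR = ',',   -- placeholder: must match the file
    ROWTERMINATOR = '\n',    -- placeholder: must match the file
    TABLOCK
);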
I am trying to load data from a table on a different server. If I just want to limit the load to one year of data (using the date_key column in that table), what task do I need to do that? Please let me know. Thanks.
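One common approach is to give the source a query instead of the whole table. A sketch, assuming date_key is an integer yyyymmdd-style key (an assumption; adjust to the real format):

SELECT *
FROM dbo.SourceTable
WHERE date_key >= 20060101   -- placeholder year boundaries
  AND date_key < 20070101;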
How do I extract and load data using ADO.NET in SSIS? I know that to extract data we have the DataReader Source, but how do I load (insert) data with ADO.NET? And is ADO.NET quicker than OLE DB?
Hello, I am looking for some advice. I have a process to import flat text files. We are importing data from five vendors. These files are separated by vendor, with each vendor having their own directory. The general file layouts are different for each vendor. Each vendor may have up to five different types of files to be imported in their one directory (sales, inventory, transactions, etc.). The sales file for Vendor 1 has a different layout than the sales file for Vendor 2. Each vendor may have multiple instances of each type (store 1 inventory, store 2 inventory, store 1 sales, store 2 sales, etc.). There could be up to five hundred files (of the five different types) in a given vendor's directory. I am using an import package. This package has five different data flows (.dtsx files). Each of these data flows has connection managers that connect to the specific types of files (sales, inventory, transactions, etc.) for that vendor.
My current plan is to have the data flow (.dtsx file) parse the store name (from the file name) for each of the file types. It would load/process each of the available file formats (sales, inventory, transactions, etc.) for that store in that vendor folder. We want the process to load all data files for a given store, then move on to the next available store (same vendor). I would like the process to run multithreaded so that I am loading as many of the stores for that vendor as possible (there could be over a hundred stores per vendor) at the same time. How do I get each data flow to run multiple instances (instance 1 for vendor(x).dtsx, instance 2 for vendor(x).dtsx, etc.) for maximum vendor(x) throughput? What is the best way/process/design to track each store name so that each process (instance 1, instance 2, etc.) loads a distinct store? The files will be moved to a history folder as they are processed. Should I have a separate process that gets each store name up front and saves that information to a SQL table? The data flow would then load the next available store name from the SQL table query and let SQL lock that store.
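A sketch of that last idea, using a hypothetical queue table dbo.StoreQueue(StoreName, Status, ClaimedBy): the UPDLOCK and READPAST hints let concurrent package instances each claim a different pending store without blocking one another.

DECLARE @InstanceName varchar(50);
SET @InstanceName = 'instance1';  -- placeholder: identifies this running instance

-- Atomically claim the next unprocessed store and return its name
WITH next_store AS (
    SELECT TOP (1) StoreName, Status, ClaimedBy
    FROM dbo.StoreQueue WITH (UPDLOCK, READPAST)
    WHERE Status = 'Pending'
    ORDER BY StoreName
)
UPDATE next_store
SET Status = 'Processing',
    ClaimedBy = @InstanceName
OUTPUT inserted.StoreName;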
I have to load contacts data from different systems into a single SQL Server 2005 table. The scenario is the following:
I have three systems: System 1, System 2, and System 3.
Step 1: Load data from System 1.
Step 2: Load data from System 2; if there are contacts matching System 1, compare their details and keep the System 1 details.
Step 3: Load data from System 3; if there are contacts matching System 2, compare their details and flag them if different; do the same with the System 1 details.
The unique IDs in the three systems are not the same, so the only way to match is to concatenate name + address + zip code.
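A sketch of what that match key could look like (column and table names are placeholders); trimming and upper-casing the parts, and putting a delimiter between them, avoids trivial mismatches and accidental collisions:

SELECT c.*,
       UPPER(LTRIM(RTRIM(c.name))) + '|' +
       UPPER(LTRIM(RTRIM(c.address))) + '|' +
       UPPER(LTRIM(RTRIM(c.zip_code))) AS match_key
FROM dbo.Contacts AS c;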
My input is a flat file source, and it has spaces in a few columns of the data. These columns are linked to another table as a foreign key, and when I try loading them into a relational structure a foreign key violation occurs. Is there a standard method to replace these spaces?
What approach should I take so that the data gets loaded into the relational structure?
For example:

Name   Age   Salary   Address
dsds   23             fghghgh

Salary   description   level
2345     nnncncn       4

Salary is used in this example; the datatype is char in the real scenario.
What approach should I take in SSIS to load the data while cleansing the spaces?
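If the rows go through a staging table, a minimal T-SQL sketch of the usual cleanup (table and column names are placeholders): turn all-space values into NULL, so the foreign key check is skipped for those rows instead of violated.

-- Blank (all-space) values become NULL, which passes FK validation
UPDATE dbo.StagingTable
SET Salary = NULLIF(LTRIM(RTRIM(Salary)), '');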
Hi, in my SQL Server 7.0 I have 250 stored procedures in each database. Before using them in my application, I want to encrypt them all. I would have to add "WITH ENCRYPTION" to each SP in every database, and that would take a long time. Is there a faster way to encrypt all SPs in all DBs? Does anyone have a utility SP (or any other way) to do this? Thanks in advance.
What is the fastest way a stored procedure can copy a table from a linked server?
I would like to tune this statement, possibly with hints or other logging options. Assume that table_A and table_B have exactly the same structure and that I want to preserve table_A and all its indexes and constraints. The table will be truncated before this load, if that helps in any way.
insert into table_A select * from OpenQuery(Server,'select * from Table_B')
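One commonly suggested variant, sketched below with the same names as the question: on SQL Server 2008 and later, a TABLOCK hint on the insert can make it minimally logged, assuming table_A is a heap (or its indexes are dropped for the load) and the database is in the SIMPLE or BULK_LOGGED recovery model.

-- TABLOCK enables a bulk-optimized, minimally logged insert
-- under the conditions noted above
INSERT INTO table_A WITH (TABLOCK)
SELECT * FROM OPENQUERY(Server, 'select * from Table_B');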
In relation to my last post, I have a question for the SQL gurus. I need to update 70k records and mark all those updated in a special column for further processing by another system.

So, if the record was

Key1, foo, foo, ""

it needs to become

Key1, fap, fap, "U"

if and only if the data values are actually different (as above, foo becomes fap); otherwise it must stay

Key1, foo, foo, ""

Is it quicker to:

1) get the row from the destination table, inspect all values programmatically, and determine if an update query is needed, OR

2) just do an update on all rows, but adding "and (field1 <> value1 or field2 <> value2)" to the update query, that is:

update myTable
set field1 = "foo",
    markField = "u"
where key = "mykey" and (field1 <> "foo")

The first one will not generate new update queries if the record has not changed, on account of doing a select, whereas the second version always runs an update, but some of them will not affect any rows. Will I need a full index on the second version?

Thanks in advance,
Asger Henriksen
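For what it's worth, a sketch of option 2 as one set-based statement instead of 70k single-row updates, assuming the new values arrive in a hypothetical staging table dbo.Incoming keyed the same way (NULLable columns would need extra handling in the WHERE clause):

-- Only rows whose values actually differ are touched,
-- and they get markField = 'U' in the same pass
UPDATE t
SET t.field1 = s.field1,
    t.field2 = s.field2,
    t.markField = 'U'
FROM dbo.myTable AS t
JOIN dbo.Incoming AS s ON s.[key] = t.[key]
WHERE t.field1 <> s.field1
   OR t.field2 <> s.field2;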
Everything is OK: the connection is open, the page loads, and there is no error, BUT the data from the database cannot be retrieved; it just doesn't show up on screen. I don't know what's wrong. I have followed all the steps shown in a book. Can anyone please help me? Thanks.