Data Access :: BULK INSERT Yields Inconsistent Results
May 6, 2015
I am getting inconsistent results when BULK INSERTING data from a tab-delimited text file. As part of my testing, I run the same code on the same file again and again, and I get different results every time! I get this on SQL 2005 and SQL 2012 R2.
We have an application that imports data from a spreadsheet. The sheet contains section headers with account numbers and detail rows with transactions by date:
AAAA.1234 /* (account number)*/
1/1/2015 $150 First Transaction
1/3/2015 $24.233 Second Transaction
BBBB.5678
1/1/2015 $350 Third Transaction
1/3/2015 $24.233 Fourth Transaction
My import program saves this spreadsheet as tab-delimited text, then I use BULK INSERT to bring the data into a generic table full of varchar(255) fields. There are about 90,000 rows in each day's data; after the BULK INSERT about half of them are removed for various reasons. Next I add a RowID column to the table with the IDENTITY (1,1) property. This gives my raw data unique row numbers.
I then run a routine that converts and copies those records into another holding table that's a copy of the final destination table. That routine parses through the data, assigning the account number in the section header to each detail row. It ends up looking like this:
AAAA.1234 1/1/2015 $150 First Purchase
AAAA.1234 1/3/2015 $24.233 Second Purchase
BBBB.5678 1/1/2015 $350 Third Purchase
BBBB.5678 1/3/2015 $24.233 Fourth Purchase
My technique: I use a cursor to get the starting RowID for each account number, then use the upper and lower RowIDs to do an INSERT into the final table. The query that finds the section headers looks like this:
SELECT RowID, SUBSTRING(RowHeader, 6,4) + '.UBC1' AS AccountNumber
FROM GenericTable
WHERE RowHeader LIKE '____.____%'
Results look like this:
But every time I run the routine, I get different numbers! Needless to say, my results are not accurate. I get inconsistent results EVERY TIME. This is a high-profile project at my company. Here is my code, with table, field and account names changed for business confidentiality:
TRUNCATE TABLE GenericImportTable;
ALTER TABLE GenericImportTable DROP COLUMN RowID;
BULK INSERT GenericImportTable FROM '\\SERVER\GeneralAppname\DataFile.2015.05.04.tab.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', FIRSTROW = 6);
ALTER TABLE GenericImportTable ADD RowID int IDENTITY(1,1) NOT NULL;
SELECT RowID, SUBSTRING(RowHeader, 6, 4) + '.UBC1' AS AccountNumber
FROM GenericImportTable
WHERE RowHeader LIKE '____.____%';
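For context, a minimal sketch of the kind of cursor routine described above. This is pieced together from the description, not the poster's actual code; FinalHoldingTable and its columns are hypothetical:

DECLARE @AccountNumber varchar(20), @StartRow int, @NextStart int;

DECLARE AcctCursor CURSOR FOR
    SELECT RowID, SUBSTRING(RowHeader, 6, 4) + '.UBC1'
    FROM GenericImportTable
    WHERE RowHeader LIKE '____.____%'
    ORDER BY RowID;

OPEN AcctCursor;
FETCH NEXT FROM AcctCursor INTO @StartRow, @AccountNumber;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- upper bound = the next section header's RowID (NULL for the last account)
    SELECT @NextStart = MIN(RowID)
    FROM GenericImportTable
    WHERE RowID > @StartRow AND RowHeader LIKE '____.____%';

    INSERT INTO FinalHoldingTable (AccountNumber, RowData)
    SELECT @AccountNumber, RowHeader
    FROM GenericImportTable
    WHERE RowID > @StartRow
      AND RowID < COALESCE(@NextStart, 2147483647);

    FETCH NEXT FROM AcctCursor INTO @StartRow, @AccountNumber;
END
CLOSE AcctCursor;
DEALLOCATE AcctCursor;

Note that nothing in this chain pins down a row order: BULK INSERT makes no ordering guarantee, and IDENTITY values added to existing rows follow whatever order the engine reads them in, which would be consistent with the varying numbers described.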
View 2 Replies
May 16, 2006
As part of a c# program, utilizing .Net 2.0, I am calling a sproc via a SqlCommand to bulk load data from flat files to various tables in a SQL Server 2005 database. We are using format files to do this, as all of the incoming flat files are fixed length. The sproc simply calls a T-SQL BULK INSERT statement, accepting the file name, format file name and the database table as input parameters. As expected, this works most of the time, but periodically (too often for a production environment), the insert fails. The particular file to fail is essentially random, and when I rerun the process, the insert completes successfully.
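For reference, a minimal sketch of what such a wrapper sproc typically looks like (parameter names are assumptions; BULK INSERT does not accept variables for its arguments, so the statement has to be assembled dynamically, which matches the log output below):

CREATE PROCEDURE dbo.spRAS_BulkInsertData
    @TableName  sysname,
    @DataFile   nvarchar(260),
    @FormatFile nvarchar(260)
AS
BEGIN
    DECLARE @sql nvarchar(max);
    SET @sql = N'BULK INSERT ' + @TableName
             + N' FROM ''' + @DataFile + N''''
             + N' WITH (FORMATFILE = ''' + @FormatFile + N''');';
    PRINT 'Starting spRAS_BulkInsertData.';
    PRINT '@sql = ' + @sql;
    EXEC sp_executesql @sql;
END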
A sample of the error messages returned is as follows (@sql is the string executed):
Cannot bulk load. Invalid destination table column number for source column 1 in the format file "\\RASDMNTT\RAS_ROOT\BCP_Format_Files\EMODT3.fmt".
Starting spRAS_BulkInsertData.
@sql = BULK INSERT Raser.dbo.EMODT3_Work FROM '\\RASDMNTT\RAS_ROOT\AmeriHealth\work\pdclms\emodt3.20060511.0915.txt.DATA' WITH (FORMATFILE = '\\RASDMNTT\RAS_ROOT\BCP_Format_Files\EMODT3.fmt');
The format file for this particular example is as follows (I apologize for the length):
8.0
62
1 SQLCHAR 0 1 "" 1 Record_Type SQL_Latin1_General_CP1_CI_AS
2 SQLCHAR 0 15 "" 2 Vendor_Number SQL_Latin1_General_CP1_CI_AS
3 SQLCHAR 0 20 "" 3 Extract_Subscriber_Number SQL_Latin1_General_CP1_CI_AS
4 SQLCHAR 0 20 "" 4 Extract_Member_Number SQL_Latin1_General_CP1_CI_AS
5 SQLCHAR 0 2 "" 5 Claim_Nbr_Branch_Code SQL_Latin1_General_CP1_CI_AS
6 SQLCHAR 0 8 "" 6 Claim_Nbr_Batch_Date_CCYYMMDD SQL_Latin1_General_CP1_CI_AS
7 SQLCHAR 0 3 "" 7 Claim_Nbr_Batch_Sequence_Nbr SQL_Latin1_General_CP1_CI_AS
8 SQLCHAR 0 3 "" 8 Claim_Nbr_Sequence_Number SQL_Latin1_General_CP1_CI_AS
9 SQLCHAR 0 3 "" 9 LINE_NUMBER SQL_Latin1_General_CP1_CI_AS
10 SQLCHAR 0 1 "" 10 Patient_Sex_Code SQL_Latin1_General_CP1_CI_AS
11 SQLCHAR 0 3 "" 11 Patient_Age SQL_Latin1_General_CP1_CI_AS
12 SQLCHAR 0 4 "" 12 G_L_Posting_Tables_Code SQL_Latin1_General_CP1_CI_AS
13 SQLCHAR 0 50 "" 13 G_L_Posting_Tbls_Code_Desc SQL_Latin1_General_CP1_CI_AS
14 SQLCHAR 0 2 "" 14 Fund_TYPE SQL_Latin1_General_CP1_CI_AS
15 SQLCHAR 0 1 "" 15 Stop_Loss_Or_Step_Down_Code SQL_Latin1_General_CP1_CI_AS
16 SQLCHAR 0 2 "" 16 Stop_Loss_Fund SQL_Latin1_General_CP1_CI_AS
17 SQLCHAR 0 50 "" 17 Stop_Loss_Fund_Desc SQL_Latin1_General_CP1_CI_AS
18 SQLCHAR 0 8 "" 18 Post_Date SQL_Latin1_General_CP1_CI_AS
19 SQLCHAR 0 1 "" 19 Rebundling_Status_Indicator SQL_Latin1_General_CP1_CI_AS
20 SQLCHAR 0 8 "" 20 Co_Payment_Grouper SQL_Latin1_General_CP1_CI_AS
21 SQLCHAR 0 50 "" 21 Co_Payment_Grouper_Desc SQL_Latin1_General_CP1_CI_AS
22 SQLCHAR 0 8 "" 22 Co_Payment_Accumulator SQL_Latin1_General_CP1_CI_AS
23 SQLCHAR 0 50 "" 23 Co_Payment_Accumulator_Desc SQL_Latin1_General_CP1_CI_AS
24 SQLCHAR 0 8 "" 24 Co_Insurance_Grouper SQL_Latin1_General_CP1_CI_AS
25 SQLCHAR 0 50 "" 25 Co_Insurance_Grouper_Desc SQL_Latin1_General_CP1_CI_AS
26 SQLCHAR 0 8 "" 26 Co_Insurance_Accumulator SQL_Latin1_General_CP1_CI_AS
27 SQLCHAR 0 50 "" 27 CI_Accumulator_Desc SQL_Latin1_General_CP1_CI_AS
28 SQLCHAR 0 8 "" 28 Coverage_Grouper SQL_Latin1_General_CP1_CI_AS
29 SQLCHAR 0 50 "" 29 Coverage_Grouper_Desc SQL_Latin1_General_CP1_CI_AS
30 SQLCHAR 0 8 "" 30 Coverage_Accumulator SQL_Latin1_General_CP1_CI_AS
31 SQLCHAR 0 50 "" 31 Coverage_Accumulator_Desc SQL_Latin1_General_CP1_CI_AS
32 SQLCHAR 0 8 "" 32 Deductible_Grouper SQL_Latin1_General_CP1_CI_AS
33 SQLCHAR 0 50 "" 33 Deductible_Grouper_Desc SQL_Latin1_General_CP1_CI_AS
34 SQLCHAR 0 8 "" 34 Deductible_Accumulator SQL_Latin1_General_CP1_CI_AS
35 SQLCHAR 0 50 "" 35 Deductible_Accumulator_Desc SQL_Latin1_General_CP1_CI_AS
36 SQLCHAR 0 8 "" 36 Unit_Grouper SQL_Latin1_General_CP1_CI_AS
37 SQLCHAR 0 50 "" 37 Unit_Grouper_Desc SQL_Latin1_General_CP1_CI_AS
38 SQLCHAR 0 8 "" 38 Unit_Accumulator SQL_Latin1_General_CP1_CI_AS
39 SQLCHAR 0 50 "" 39 Unit_Accumulator_Desc SQL_Latin1_General_CP1_CI_AS
40 SQLCHAR 0 8 "" 40 Out_Of_Pocket_Grouper SQL_Latin1_General_CP1_CI_AS
41 SQLCHAR 0 50 "" 41 Out_Of_Pocket_Grouper_Desc SQL_Latin1_General_CP1_CI_AS
42 SQLCHAR 0 8 "" 42 Out_Of_Pocket_Accumulator SQL_Latin1_General_CP1_CI_AS
43 SQLCHAR 0 50 "" 43 Out_Of_Pocket_Acc_Desc SQL_Latin1_General_CP1_CI_AS
44 SQLCHAR 0 3 "" 44 Service_Edit_Code SQL_Latin1_General_CP1_CI_AS
45 SQLCHAR 0 50 "" 45 Service_Edit_Code_Desc SQL_Latin1_General_CP1_CI_AS
46 SQLCHAR 0 8 "" 46 System_Date_MEDMAS_CCYYMMDD SQL_Latin1_General_CP1_CI_AS
47 SQLCHAR 0 8 "" 47 Last_Change_MEDMAS_CCYYMMDD SQL_Latin1_General_CP1_CI_AS
48 SQLCHAR 0 10 "" 48 Medicare_Termination_Reason_Code SQL_Latin1_General_CP1_CI_AS
49 SQLCHAR 0 10 "" 49 User_ID_MEDMAS SQL_Latin1_General_CP1_CI_AS
50 SQLCHAR 0 10 "" 50 User_ID_Last_Modified SQL_Latin1_General_CP1_CI_AS
51 SQLCHAR 0 8 "" 51 Adjudication_Date_CCYYMMDD SQL_Latin1_General_CP1_CI_AS
52 SQLCHAR 0 9 "" 52 Adjudication_Time SQL_Latin1_General_CP1_CI_AS
53 SQLCHAR 0 10 "" 53 Adjudication_User_ID SQL_Latin1_General_CP1_CI_AS
54 SQLCHAR 0 9 "" 54 A_P_Batch_Number SQL_Latin1_General_CP1_CI_AS
55 SQLCHAR 0 7 "" 55 A_P_Sequence SQL_Latin1_General_CP1_CI_AS
56 SQLCHAR 0 3 "" 56 CPA_Batch_Number SQL_Latin1_General_CP1_CI_AS
57 SQLCHAR 0 8 "" 57 CPA_Date_CCYYMMDD SQL_Latin1_General_CP1_CI_AS
58 SQLCHAR 0 1 "" 58 Manual_Authorization_Flag SQL_Latin1_General_CP1_CI_AS
59 SQLCHAR 0 50 "" 59 Fund_Description SQL_Latin1_General_CP1_CI_AS
60 SQLCHAR 0 1 "" 60 DRG_Inclusion_Indicator SQL_Latin1_General_CP1_CI_AS
61 SQLCHAR 0 1 "" 61 Future_Expansion SQL_Latin1_General_CP1_CI_AS
62 SQLCHAR 0 2 "\r\n" 62 Company_Number SQL_Latin1_General_CP1_CI_AS
Has anybody run across this before, or have any ideas as to what might be happening?
Thanks in advance.
View 6 Replies
View Related
Nov 2, 2006
I'm importing a large csv file two different ways - one with Bulk Import Task and the other way with the Data Flow Task (flat file source -> OLE DB destination).
With the Bulk Import Task I'm putting all the csv rows in one column. With the Data Flow Task I'm mapping each csv value to its own column in the SQL table.
I used two different flat file sources and got the following:
Flat file 1: Bulk Import Task = 12,649,499 rows; Data Flow Task = 4,215,817 rows
Flat file 2: Bulk Import Task = 3,403,254 rows; Data Flow Task = 1,134,359 rows
Anyone have any guess as to why this is happening?
View 9 Replies
View Related
Apr 21, 2015
I am currently working with C and SQL Server 2012. My requirement is to bulk fetch the records and insert/update the same in the other table with some business logic. How do I do this?
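For the insert/update half, one T-SQL option on SQL Server 2012 is MERGE — a sketch only, with hypothetical table, key and column names; the business logic would go in the WHEN clauses:

MERGE dbo.TargetTable AS t
USING dbo.SourceTable AS s
    ON t.Id = s.Id
WHEN MATCHED THEN
    UPDATE SET t.Amount = s.Amount, t.UpdatedOn = GETDATE()
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Amount, UpdatedOn)
    VALUES (s.Id, s.Amount, GETDATE());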
View 14 Replies
View Related
Nov 20, 2007
I'm either missing something or this is a bug. I have a Lookup that finds no matches if I use the default option of full caching (everything on the Advanced tab unchecked). The lookup table is relatively small (15,348 bytes across only 544 rows). If I check only the Enable Memory Restriction box and eliminate caching, it works fine. I can also check the Enable Caching box and accept the default cache size of 5MB and it works fine. Anyone have any ideas? I'm running on Standard Edition, SP2.
Thanks!
View 7 Replies
View Related
Jan 17, 2008
I'm having some issues with bulk insert.
This is the table:
CREATE TABLE [dbo].[tmp_GA_status](
[GA_recno] [int] NOT NULL,
[GA_desc] [varchar](40) NULL
)
This is the file (unicode):
1|"test1"
2|"test2"
3|"test3"
4|"test4"
5|"test5"
6|"test6"
7|"test7"
8|"test8"
and this is the sql:
bulk insert tmp_GA_status from 'C:\temp\TextDump\GA_status.dta'
with (CODEPAGE='RAW', FIELDTERMINATOR='|', ROWTERMINATOR='\n', DATAFILETYPE='widechar')
so yeah, pretty simple. But whatever I do I get this:
Msg 4864, Level 16, State 1, Line 1
Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 2 (GA_desc).
So what am I doing wrong?
View 13 Replies
View Related
Oct 11, 2003
Hi,
I need to write a SQL statement to insert data into a table. Not a problem. But can I insert more than one row/record at a time?
e.g. If I have a 'Person' table with 'First_Name' and 'Last_Name' fields and 50 peeps to put in them, can I do them all in the same statement, or do I have to keep reusing the query 50 times?
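Two common single-statement patterns, sketched with the example table (the multi-row VALUES form needs SQL Server 2008 or later; UNION ALL works on older versions too):

-- SQL Server 2008 and later:
INSERT INTO Person (First_Name, Last_Name)
VALUES ('Joe', 'Smith'),
       ('Jane', 'Doe'),
       ('Ann', 'Jones');

-- Older versions:
INSERT INTO Person (First_Name, Last_Name)
SELECT 'Joe', 'Smith'
UNION ALL
SELECT 'Jane', 'Doe'
UNION ALL
SELECT 'Ann', 'Jones';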
Thanks
Dave
View 2 Replies
View Related
Feb 1, 2008
I was wondering if anyone could share with me the proper syntax for using "Bulk Insert" to load a SQL Server Table from an MS Access table?
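For what it's worth, BULK INSERT itself only reads flat files. Pulling rows straight from an .mdb usually goes through OPENROWSET with the Jet provider instead — a sketch with a hypothetical path and table name (ad hoc distributed queries must be enabled on the server):

INSERT INTO dbo.TargetTable (Col1, Col2)
SELECT Col1, Col2
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'C:\data\source.mdb'; 'Admin'; '',
                'SELECT Col1, Col2 FROM AccessTable');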
Thanks in advance.
View 1 Replies
View Related
Apr 11, 2006
Hi
I have an oddity. If I run a piece of SQL:
SELECT EmployeeNo, MailTo
FROM ST_PPS.dbo.Employee
WHERE AddedOn BETWEEN '01-jan-2006' AND '01-feb-2006'
AND MailTo NOT IN ('3', 'x')
ORDER BY MailTo
I get the results:
EmployeeNo MailTo
----------- ------
608384 1
606135 1
608689 1
609095 1
607163 1
606165 1
606472 1
608758 1
.....for 2594 rows
If I create a stored procedure with the same SQL:
CREATE PROCEDURE dbo.PPS_test
AS
SELECT EmployeeNo, MailTo
FROM ST_PPS.dbo.Employee
WHERE AddedOn BETWEEN '01-jan-2006' AND '01-feb-2006'
AND MailTo NOT IN ('3', 'x')
ORDER BY MailTo
GO
and run it:
EXEC PPS_test
I get three extra rows:
EmployeeNo MailTo
----------- ------
607922 NULL
606481 NULL
605599 NULL
606316 1
608871 1
607427 1
608795 1
.....for 2597 rows
Does anyone know what is happening here? It appears that the clause MailTo NOT IN ('3', 'x') excludes NULL in raw SQL, but includes NULL (correctly, I think) in a stored procedure.
Chloe Crowder
The British Library
View 5 Replies
View Related
May 5, 2008
Hi,
I am building a report with a recursive hierarchy for drill-down purposes. The hierarchy is built by querying a SSAS OLAP cube and defining a details grouping for the table/matrix.
Every time I run the report, one or more of the leaf members in the recursive hierarchy "jumps" up to the highest level. At first I thought that this might be because the leaf's parents are not part of the returned dataset. However, the queries make sense and the "offending" members never contain any data (while the query should return only non-empty members), which is why this is very strange behavior. Furthermore, the "offending" member differs between executions of the report, despite the fact that the parameters are exactly the same and the cube is untouched between executions.
I am actually pressing "View Report", waiting for the report to execute and when I press "View Report" again, the returned datasets seem to differ, yielding different "offending" members in the report.
When I run the queries individually in the Data-tab in BIDS, the returned datasets are always the same. Execution caching is turned off for the report.
Checking against SSRS's ExecutionLog, the RowCount for consecutive executions with the exact same parameters differs. For example, RowCount:
3094
3080
3079
3088
3087
Why does SSRS behave so inconsistently? Any tips or tricks?
View 3 Replies
View Related
Jun 7, 2006
Hi All
Same situation as described here , same issue.
SQL Server(SQL2005 on Windows2003) uses domain account. This domain account enabled to be trusted for delegation. Client connects to server using Windows auth. Client issues BULK INSERT with UNC path. Statement returns error:
Cannot bulk load because the file "\\Server\pub\file.txt" could not be opened. Operating system error code 5 (Access is denied.)
SQL 2000 runs this statement successfully, so the statement and file are OK. Everyone has all permissions on the network share. The domain account was granted all permissions explicitly, so there is no access trouble.
Audit show anonymous connections.
Question is - how to put delegation in work?
Thank you in advance,
Alexander Sinitsin
View 2 Replies
View Related
Oct 11, 2005
Problem: Insert a network file in the DB using BULK Insert
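A minimal sketch of the usual form, with hypothetical names. The path is evaluated on the SQL Server machine, and with SQL authentication it is the SQL Server service account (not the caller) that needs read access to the share; with Windows authentication the caller's account is impersonated instead:

BULK INSERT dbo.SomeTable
FROM '\\fileserver\share\datafile.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n');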
View 50 Replies
View Related
Apr 10, 2001
I am running SQL Server 7.0 on NT 4.0. I have created a simple query:
SELECT SUM(month1) As total_month1
FROM eac_manload
WHERE project_number = 8800
and dept IN (50,51,52,55,57,60,61,62,63,64,65,68,69)
The first time I run the query I get the correct result. Subsequent times that I run the query, the result is 1 record with a Null value. The data has not changed. If I stop MSSQLSERVER and restart the service I get the correct result the first time and the Null value each time thereafter. Anybody out there with any idea of what is going on here? Any help will be appreciated!!
View 1 Replies
View Related
Mar 24, 2006
I want to do a bulk insert of a file located on a different machine then the SQL Server database.
machine1 and machine2 are running Windows Server 2003 Standard Edition. SQL Server v8.0 is running on machine2. Neither machine1 nor machine2 are in any domain. (These are servers at a hosting company.)
I use a UNC filename to specify the file to load. It looks something like this:
\\machine1.someplace.com\reportdata\report200602.txt
I get this error message when I attempt the bulk insert using SQL Query Analyzer:
Server: Msg 4861, Level 16, State 1, Line 1
Could not bulk insert because file '\\machine1.someplace.com\reportdata\report200602.txt' could not be opened. Operating system error code 5 (Access is denied.).
The share reportdata on machine1 has READ permissions for EVERYONE. What do I need to do to allow the database machine (machine2) to access the files on machine1?
Thank you in advance for your help.
Phil
View 10 Replies
View Related
Oct 24, 2007
Hi ,
I am a newbie and I need to provide access for a developer to use BULK INSERT on temp tables.
What permission do I need to give the developer? I cannot grant him the bulkadmin role; what are the other ways to provide him the access?
Please help me with this.
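For reference, the permissions BULK INSERT checks are INSERT on the target table plus the server-level ADMINISTER BULK OPERATIONS permission; the latter is effectively the right the bulkadmin role carries, so whether granting it directly satisfies your restriction depends on why bulkadmin was ruled out. A sketch with hypothetical principal names:

USE master;
GRANT ADMINISTER BULK OPERATIONS TO [DeveloperLogin];
-- and in the user database:
GRANT INSERT ON dbo.SomeTable TO [DeveloperUser];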
View 4 Replies
View Related
Jan 11, 2000
I have a stored procedure (see below), in which I would like to check whether the step that creates an identity column and makes it a primary key succeeded. I check @@error after the exec statement. This used to pick up an error if the table already had an identity column. It has stopped doing that. Why? And, if this is not the way to capture the error after the exec statement, how do I do it?
CREATE PROCEDURE rasp_test3
/*
Written by Judith Farber Abraham
this procedure loops thru sysobjects looking for user tables.
If a user table, does it have a primary key?
If not, add an identity column to table and make it a primary key
*/
--would like to have sp in main db but use from all three
@fixDB nvarchar(50)--the db to which to add PKs
AS
Declare @TableName varchar(50)
Declare @TableID int
Declare @Msg varchar (50)
Declare @ColumnName varchar(50)
Declare @IndexName varchar(50)
Declare @MyCursor nvarchar(500)
declare @MyCursorC nvarchar(500)
declare @CName sysname
--Set @Msg = "********* Finished adding Ident fields *************"
/* */
/*
do for all user tables ( xtype = u )
*/
set @Mycursor = N'Declare SysCursor cursor for select Name, ID from ' + @fixdb +'.dbo.sysobjects where xtype = "u"'
execute sp_executesql @mycursor
open syscursor
Fetch next from SysCursor into @TableName, @TableID
/* -1 = no record; -2 = row deleted; 0 = got a row */
While (@@Fetch_status <> -1)
Begin
If (@@Fetch_status <> -2)
Begin /* have a user row (table) */
/* */
set @ColumnName = @TableName + 'ID'
set @IndexName = 'PK_' + @columnName
--only add ident and PK if no primary key in table
If not exists (Select * from Sysobjects where Parent_obj = @TableID and xtype = 'PK')
--add an identity column to user table and make it a Primary key
EXEC ('ALTER TABLE ' + @tablename + ' ADD ' + @columnName + ' INT IDENTITY CONSTRAINT ' + @IndexName + ' PRIMARY KEY ' )
--
Begin
--if error, assume already ident column, so find column name & make PK
print @@error
if @@error <> 0 print "jerror occured"
--set @MycursorC = N'Declare SysCursorC cursor for SELECT c.name
--FROM syscolumns c, sysobjects o
--WHERE ((c.id = o.id) AND (c.status = 128)) AND (o.name = ' + @tablename + ')'
--execute sp_executesql @mycursorC
--Open SyscursorC
--Fetch next from SysCursorC into @CName
--print @cname
--close syscursorc
--deallocate syscursorc
--Exec ('ALTER TABLE ' + @tablename + ' ADD ' + @columnName + ' INT IDENTITY CONSTRAINT ' + @IndexName + ' PRIMARY KEY ' )
--select @cname=c.name
--print c.name
End
End
Fetch next from SysCursor into @TableName, @TableID
End
--Print @Msg
Close SysCursor
Deallocate SysCursor
Return
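One note on the pattern above, stated as a general T-SQL fact rather than a diagnosis: @@ERROR is reset by every statement, including PRINT, so printing it before testing it guarantees the test sees 0. The usual pattern captures it into a variable immediately after the EXEC:

DECLARE @err int
EXEC ('ALTER TABLE ' + @tablename + ' ADD ' + @columnName + ' INT IDENTITY CONSTRAINT ' + @IndexName + ' PRIMARY KEY ')
SET @err = @@ERROR -- capture immediately; even PRINT resets @@ERROR
IF @err <> 0 PRINT 'error occurred'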
Thanks for any help,
Judith
View 3 Replies
View Related
Jul 13, 2006
I'm testing some code to look up values from my database and update a specific field when certain conditions are met. I'm having trouble with some code that is giving me the results I expect when I submit one set of parameters, but is not finding anything in the database for another set, when I know the data exists.
Here's the code for my stored procedure, SP:
Code:
@flightid bigint,
@departuretime smalldatetime
AS
SELECT flightid, flightno, departuretime, origincode, destinationcode
FROM flightschedules
WHERE flightid <> @flightid
AND departuretime = CONVERT(SMALLDATETIME, @departuretime, 120)
And here's the vbscript that calls it:
Code:
vOutboundID = 452
vReturnID = 453
'--- Get the flight details ---
strOrigin = "confirmflightdetails '" & vOutboundID & "';"
set rsOrigin = Server.CreateObject("ADODB.Recordset")
rsOrigin.Open strOrigin, objConn
response.write "origin " & rsOrigin("departuretime") & "<BR>"
strReturn = "confirmflightdetails '" & vReturnID & "';"
set rsReturn = Server.CreateObject("ADODB.Recordset")
rsReturn.Open strReturn, objConn
response.write "return " & rsReturn("departuretime") & "<BR>"
strGetOFID = "SP '" & vOutboundID & "', '" & rsOrigin("departuretime") & "';"
set rsOFID = Server.CreateObject("ADODB.Recordset")
rsOFID.Open strGetOFID, objConn
DO WHILE NOT rsOFID.EOF
response.write "OFNO " & rsOFID("flightno") & " " & rsOFID("flightid") & "<br>"
rsOFID.MoveNext
Loop
strGetRFID = "SP '" & vReturnID & "', '" & rsReturn("departuretime") & "';"
set rsRFID = Server.CreateObject("ADODB.Recordset")
rsRFID.Open strGetRFID, objConn
DO WHILE NOT rsRFID.EOF
response.write "RFNO " & rsRFID("flightno") & " " & rsRFID("flightid") & "<br>"
rsRFID.MoveNext
Loop
Here's the code for confirmflightdetails:
Code:
@flightid bigint
AS
SELECT flightid, flightno, departuretime
FROM flightschedules
WHERE flightid = @flightid
When confirmflightdetails is tested, I get the proper results, as confirmed by the response.write statements:
452 109 2006-07-29 08:00:00
453 110 2006-07-29 12:05:00
I put the response.write statements and loops in so I could verify the functionality.
Here's what it produces:
out 452
ret 453
origin 7/29/2006 8:00:00 AM
return 7/29/2006 12:05:00 PM
OFNO 109 450
Here's what it should produce:
out 452
ret 453
origin 7/29/2006 8:00:00 AM
return 7/29/2006 12:05:00 PM
OFNO 109 450
RFNO 110 451
If I do this in query analyzer:
Code:
select flightid, flightno, departuretime
from flightschedules
where flightid > 449 and flightid < 454
this is what I get from the database:
flightid flightno departuretime origin destination
452 109 2006-07-29 08:00:00 A C
450 109 2006-07-29 08:00:00 A B
453 110 2006-07-29 12:05:00 C A
451 110 2006-07-29 13:15:00 B A
What I'm trying to do is look up the chosen flight, then find the flight with the matching origin/destination (the other flight leg) on the same day.
I can't figure out why it's working for one set of parameters and not for the other.
Thanks in advance for any help!
View 1 Replies
View Related
Jun 29, 2015
I'm trying to use Bulk insert for the first time and getting the following error. I think it might have something to do with my format file; from the error msg there's a conversion error for the first column. In my database the field is nvarchar(6), so my best guess is to use SQLNChar for the first column. I've checked that the end of each line is CR LF, therefore "\r\n" is correct for line 7, right?
Msg 4863, Level 16, State 1, Line 1
Bulk load data conversion error (truncation) for row 1, column 1 (ASXCode).
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
BULK INSERT tbl_ASX_Data_temp
FROM 'M:\Data\ASX\ImportTest.txt'
WITH (FORMATFILE = 'M:\Data\ASX\SQLFormatImport.Fmt')
[code]...
View 5 Replies
View Related
Oct 12, 2007
Hi,
I have a file which consists of data as below:
3
123||
456||
789||
I am reading the file using bulk insert and inserting these phone numbers into a table having one column, as below.
BULK INSERT TABLE_NAME FROM 'FILE_PATH'
WITH (KEEPNULLS,FIRSTROW=2,ROWTERMINATOR = '||')
But I want to insert the data into a table having two columns. If I try to insert the data into a table having two columns, it does not insert.
Can anyone tell me how to do this?
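Loading a one-field file into a two-column table generally needs a format file, so the file's single field can be mapped to one table column while the other column is left unmapped (that column must then allow NULLs). A sketch, untested against this exact file, assuming the phone number goes into the table's first column and each row really ends with "||" plus a line break:

9.0
1
1 SQLCHAR 0 12 "||\r\n" 1 PhoneNumber SQL_Latin1_General_CP1_CI_AS

BULK INSERT TABLE_NAME FROM 'FILE_PATH'
WITH (FORMATFILE = 'FORMAT_FILE_PATH', FIRSTROW = 2)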
Thanks,
-Badri
View 5 Replies
View Related
Feb 26, 2007
Hello,
I have a linked server named 'Charlie_File' to an Excel Workbook that I set up in SQLServer 2005 Management Studio. The workbook is on my local C drive. Sometimes, I get the results back that I expect when I run the following query;
SELECT * FROM OPENQUERY(Charlie_file, 'SELECT * FROM [Feb$]')
Sometimes, on subsequent runs of the above query, I get the following message;
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "MSDASQL" for linked server "Charlie_file" reported an error. The provider did not give any information about the error.
Msg 7303, Level 16, State 1, Line 1
Cannot initialize the data source object of OLE DB provider "MSDASQL" for linked server "Charlie_file".
There seems to be about a minute or so of a delay before the query will run correctly on subsequent attempts. Is there a connection issue here where a connection blocks subsequent attempts to select the data within a specific time span?
Thank you for your help!
cdun2
View 1 Replies
View Related
Jun 5, 2014
I have a lot of rows of hours, set up like this: 0745, 0800, 2200, 1145 and so on (varchar(5), for some reason).
These are converted into a smalldatetime like this:
CONVERT(smalldatetime, STUFF(timestarted, 3, 0, ':')) [this would give output like this - 1900-01-01 11:45:00]
This code has been in place for years...and we stick the date on later from another column.
But recently, it's started to fail for some rows, with "The conversion of a varchar data type to a smalldatetime data type resulted in an out-of-range value".
My assumption is that new data being added in is junk. If I query for these values and just list them (rather than adding a column to convert them also), that's fine, of course. I've checked all the stuffed (but not yet converted, so 11:45 rather than 1145) output to see if it passes ISDATE(), and it does. There are no times with hours > 23 or minutes greater than 59 either.
If I add the CONVERT in, we see the error message. But here's the oddity, if I place all of the rows into a holding table, and retry the conversion, there is no error. It's this last bit that is puzzling me. Plus I can't see any errors in the hours data that would cause a conversion problem.
I've put the whole of this into a cursor to try to trap the error rows too, but it all processes fine. Why would it fail if NOT in a cursor?
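If the server is SQL Server 2012 or later, TRY_CONVERT can isolate the offending rows set-based, without a cursor. A sketch with an assumed table name:

SELECT timestarted
FROM dbo.SomeTable
WHERE timestarted IS NOT NULL
  AND TRY_CONVERT(smalldatetime, STUFF(timestarted, 3, 0, ':')) IS NULL;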
View 9 Replies
View Related
Mar 11, 2015
Firstly may I say that the sproc I am having problems with, and the service that calls it, are inherited technical debt from an unsupervised contractor. We are not able to go through a rewriting process at the moment, so we need to live with this if possible.
Background
We have a service written in c# that is processing packages of xml that contain up to 100 elements of goods consignment data. In amongst that element is an identifier for each consignment. This is nvarchar(22) in our table. I have not observed any IDs that are different in length in the XML element.
The service picks up these packages from MSMQ, extracts the data using XPATH and passes the ID into the SPROC in question. This searches for the ID in one of our tables and returns a bool to the service indicating whether it was found or not. If found then we add a new row to another table. If not found then it ignores and continues processing.
Observations
The service seems to be dealing with a top end of around 10 messages a minute... so a max of about 1000 calls to the SPROC per minute. Multi-threading has been used to process these packages but as I am assured, sprocs are threadsafe. It is completing the calls without issue but intermittently it will return FALSE. For these IDs I am observing that they exist on the table mostly (there are the odd exceptions where they are legitimately missing). e.g Yesterday I was watching the logs and on seeing a message saying that an ID had not been found I checked the database and could see that the ID had been entered a day earlier according to an Entered Timestamp.
So the Sproc...
USE [xxxxxxxxxx]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
[Code]....
So on occasions (about 0.33% of the time) it is failing to get a bit 1 setting in @bFound after the SELECT TOP(1).
The only suggestions I can make have been...
change @pIdentifier nvarchar(25) to nvarchar(22)
Trim any potential blanks from either side of both parts of the identifier comparison
Change the SELECT TOP(1) to an EXISTS
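A sketch of the EXISTS variant suggested above (table and column names are assumptions, since the sproc body is elided in the post):

IF EXISTS (SELECT 1 FROM dbo.Consignments WHERE ConsignmentId = @pIdentifier)
    SET @bFound = 1;
ELSE
    SET @bFound = 0;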
The only other thought is the two way parameter direction in the C# for the result OUTPUT. Not sure why he did it that way or what the purpose is.
I have been unable to replicate this using a test app and our test databases. Has anyone observed selects failing to find data that is there, like this, before?
View 6 Replies
View Related
Dec 5, 2007
I have a query that joins two large partitioned tables and depending on the values in the where clause, I can get dramatically different performance results.
The first query completed in around 7s and has 47,000 logical reads.
select mo.monitor_id,
mo.site_id,
mo.testtime,
sum(mo.NumBytes),
sum(mo.DNSTime),
sum(mo.ConnectTime),
sum(mo.FirstByteTime),
sum(mo.ContentTime),
sum(mo.RelocTime)
from monitor_raw mr(nolock), monitor_object mo(nolock)
where mr.monitor_id in (5339, 5341, 5342, 943842, 943866)
and mr.testtime between 'Oct 31 2007 3:00:00:000PM' and 'Nov 30 2007 3:00:00:000PM'
and mo.returncode = 200
and mr.site_id in (101,102,105,109,110,112,115,117,119,122,126,151,132,139,129,135,121,138,143,142,159,148,128,171,176,177,178,111,113,116,118,120,127,133,131,130,174,179,185,205,200,202,203,204,210,211,208,209,212,213,216,199,214,224,225,229,230,232,235,241,245,247,250,254,261,267,264,265,266,268,269)
and mr.escalationlevel = 0
and mr.monitor_id = mo.monitor_id
and mr.testtime = mo.testtime
and mr.site_id = mo.site_id group by mo.monitor_id, mo.site_id, mo.testtime
The second query takes 188s to complete and has 1.8m logical reads. The only difference between the two is the value of the monitor_ids in the where clause.
select mo.monitor_id,
mo.site_id,
mo.testtime,
sum(mo.NumBytes),
sum(mo.DNSTime),
sum(mo.ConnectTime),
sum(mo.FirstByteTime),
sum(mo.ContentTime),
sum(mo.RelocTime)
from monitor_raw mr(nolock), monitor_object mo(nolock)
where mr.monitor_id in (152682, 5339, 5341, 5342, 268080)
and mr.testtime between 'Oct 31 2007 3:00:00:000PM' and 'Nov 30 2007 3:00:00:000PM'
and mo.returncode = 200
and mr.site_id in (101,102,105,109,110,112,115,117,119,122,126,151,132,139,129,135,121,138,143,142,159,148,128,171,176,177,178,111,113,116,118,120,127,133,131,130,174,179,185,205,200,202,203,204,210,211,208,209,212,213,216,199,214,224,225,229,230,232,235,241,245,247,250,254,261,267,264,265,266,268,269)
and mr.escalationlevel = 0
and mr.monitor_id = mo.monitor_id
and mr.testtime = mo.testtime
and mr.site_id = mo.site_id group by mo.monitor_id, mo.site_id, mo.testtime
The two tables have clustered indexes on monitor_id, testtime and site_id. Comparing the execution plan, I can see why there is such a difference in performance. The second query performs a clustered index seek on the monitor_object table starting at the lowest monitor_id, testtime & site_id through the highest monitor_id, testtime & site_id. The first query performs a clustered index seek where the monitor_id, testtime and site_id equals the same values from the monitor_raw table.
My question is, how can I force the second query to use the same execution plan as the first so that I can get better performance?
One possible workaround that I could use is to execute five individual queries, one for each monitor_id and then union the results together but this would require significant code changes to my stored procs.
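Short of that, join hints can pin the plan shape. A sketch against the same tables, abbreviated to a few of the predicates — note that hints trade optimizer flexibility for stability, so this is something to test, not a guaranteed reproduction of the fast plan:

select mo.monitor_id, mo.site_id, mo.testtime, sum(mo.NumBytes)
from monitor_raw mr (nolock)
inner loop join monitor_object mo (nolock)
    on mr.monitor_id = mo.monitor_id
    and mr.testtime = mo.testtime
    and mr.site_id = mo.site_id
where mr.monitor_id in (152682, 5339, 5341, 5342, 268080)
    and mo.returncode = 200
group by mo.monitor_id, mo.site_id, mo.testtime
option (force order);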
Thanks,
Tim
View 5 Replies
View Related
Apr 18, 2008
Hello,
I'm just learning SSIS and I've hit my first bump. I am doing a bulk import from a tab-delimited text file to an empty SQL table that has an identity column defined. How do I tell the bulk insert task to skip that column when inserting from the text file? If I remove the identity column it imports the data fine, but I want to create the identity column in the table too.
Thanks.
View 8 Replies
View Related
Dec 27, 2007
Hi!
I'm building a web application. I need to read data from a text or excel file, process the data, and then store the result records in a database. The number of records is big. I can store the records in the database (SQL Server 2005) one at a time, but I think that's slow. Is there any way to insert the data in bulk?
Thanks!
ccy
View 4 Replies
View Related
Apr 18, 2001
SQL Server 7.0 doesn't seem to support data files for bulk insert that have quoted text fields.
e.g.
" 1","Farmer","Joe","AAA","Smith John","",20001001,
I've tried using the format file to strip out the quotes. But, this doesn't seem to work.
My format file looks like this:
4.2
9
1 SQLCHAR 0 0 "\"" 0 dummy1
2 SQLCHAR 0 9 "\",\"" 2 EmployeeID
3 SQLCHAR 0 35 "\",\"" 3 LastName
4 SQLCHAR 0 35 "\",\"" 4 FirstName
5 SQLCHAR 0 10 "\",\"" 5 Category
6 SQLCHAR 0 35 "\",\"" 6 Supervisor
7 SQLCHAR 0 5 "\"," 7 OpCode
8 SQLCHAR 0 8 "," 8 HireDate
9 SQLCHAR 0 8 "\r\n" 9 TermDate
Any idea on how I can bulk insert a data file where some of the fields are quoted?
View 2 Replies
View Related
Jun 25, 2014
I have imported data in my table using the bulk insert command. I was supposed to fill specific columns of my table with that data so I used a view to put them in the column I wanted.
The table looks like this now:
id | id_param | val_param
+-----------+--------------+
1 | no_tel | 742062141
2 | sex | 1
3 | age | 23
4 | no_tel | 765234157
5 | sex | 1
6 | age | 34
When I want to select only the val_param that is = 1 for the id_param = sex, using this query:
select * from bd_rox where id_param='sex' and val_param='1'
it returns no value and I don't know why. The wanted result should look like this:
id | id_param | val_param
+-----------+--------------+-
2 | sex | 1
5 | sex | 1
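When a filter matches nothing after a bulk load even though the values look right, trailing whitespace or an invisible character (often a stray carriage return picked up from the row terminator) is a common suspect. A diagnostic sketch:

SELECT id,
       '[' + id_param + ']'  AS id_param_bracketed,
       '[' + val_param + ']' AS val_param_bracketed,
       DATALENGTH(id_param)  AS id_param_bytes
FROM bd_rox
WHERE id_param LIKE '%sex%';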
View 9 Replies
View Related
Jan 30, 2008
I manage a legacy system that dumps its data into a number of different databases (same schema) on a nightly basis using bulk insert. I need to formulate a strategy for efficiently aggregating that data into a single database right after these nightly extractions complete. Here is my current strategy:
1. Duplicate the legacy system's database schema and add an identifier column to specify which database the data loaded from.
2. Each night, delete all records in the table.
3. Each night, for each database:
3a. Set each table's default value to a value that references the current database being loaded.
3b. Use the legacy system's flat files and format files to bulk insert into the database.
3c. Clear the default value.
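For illustration, a sketch of steps 3a-3c for one table, with hypothetical table, column and constraint names (the default is only applied to rows inserted while it is in place, which is what makes the per-database load work):

-- 3a: point the default at the database currently being loaded
ALTER TABLE dbo.LegacyTable
    ADD CONSTRAINT DF_LegacyTable_SourceDb DEFAULT ('LegacyDb01') FOR SourceDb;
-- 3b: run the BULK INSERT from that database's flat file / format file here
-- 3c: clear the default again
ALTER TABLE dbo.LegacyTable
    DROP CONSTRAINT DF_LegacyTable_SourceDb;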
What other steps would facilitate performance? Dropping and recreating the indexes? Does anyone foresee faults in this strategy?
Thanks,
Matt
View 3 Replies
View Related
Jul 20, 2005
Any help would be appreciated. I am running a script that does the following in succession:
1 - Drop existing database and create new database
2 - Define tables, stored procedures and functions in the database
3 - Import data using bulk insert
4 - Analyze data using stored procedures
I would like to improve the performance of the analysis in step 4 by creating indexes in step 2.
Question 1 - Are indexes updated when data is bulk inserted? I know they are when using normal insert, update, or delete T-SQL, but I am not sure about bulk insert of data.
Question 2 - Do I need to update the index statistics in any way, or would they be ready to use in step 4?
Thanks,
CJ
View 2 Replies
View Related
Sep 8, 2000
I have installed an MSDE engine (SQL Server 7 Desktop) on an NT 4.0 SP6 workstation. I have around 17 megs of data I need to transfer from a SQL Server 7 on an NT 4.0 SP6 server. I can't get anything to work. Bcp complains, something to the effect that there is no server attribute, which is true because it is a workstation. If I remove the -S option it complains that I did not designate the server. This bcp script works just fine from SQL Server 6.5 on servers. DTS puts up a message that I do not have a license to use DTS for the SQL Server Desktop. I tried transferring the database to Access and then using the upsizing wizard, but the database is too big. I tried a bulk insert but I got the vague message 'OLE DB provider 'STREAM' reported an error. The provider did not give any information about the error'. What is the best way to do this? Why am I having trouble with Bcp and Bulk insert?
View 1 Replies
View Related
Dec 11, 1998
Short version:
The best/fastest way to load large amounts of data from a comma-delimited text file into an SQL Server table, where the text file contains date fields in ccyy/mm/dd format and the SQL Server table defines those fields as datetime data types.
Details:
When I attempt to load files (using either bcp or BULK INSERT) containing datetime data the load process errors because the datetime fields in my text file are in ccyy/mm/dd format and the default format for SQL Server is mm/dd/yy. I have been unable to change the default format by using the SET DATEFORMAT statement (apparently the SET DATEFORMAT statement will not work for bcp because bcp runs outside of the SQL Server session???).
The only alternatives that I have come up with are: 1) Change the format of date fields in the text file from ccyy/mm/dd to mm/dd/ccyy. 2) Create a temporary table that defines the date fields as a char(n) datatype. Then load the data into the temp table. Then SET the DATEFORMAT to ccyy/mm/dd. Then copy the temp table into the permanent table (the permanent table using datetime data types).
Both of these alternatives would require additional processing time. Since this is a process that loads large amounts of data on a monthly (soon to be weekly) basis, speed is of the essence.
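For what it's worth, a sketch of alternative 2 with hypothetical file layout and names; the point is that SET DATEFORMAT does apply to the session running the INSERT...SELECT, even though it cannot influence bcp:

CREATE TABLE #stage (HireDate char(10), Amount varchar(20)); -- dates held as char
BULK INSERT #stage FROM 'C:\data\monthly.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

SET DATEFORMAT ymd; -- ccyy/mm/dd strings now convert correctly
INSERT INTO dbo.PermanentTable (HireDate, Amount)
SELECT CONVERT(datetime, HireDate), Amount
FROM #stage;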
I would appreciate any suggestions.
Thanks!
View 6 Replies
View Related
Jan 10, 2007
Hi everyone. Does anyone know if you can retrieve truncated data from a BULK INSERT operation?
We have a file that needs to be inserted into a SQL Server database. There is a field that has a maximum of 8000 characters, but sometimes users submit files that have more than that. We need to be able to capture the truncated data. The BULK INSERT operation does not throw an error. The only way I can think of to get the data is to bulk insert it into a temporary table with a memo field and then copy it over, but that may really slow down the SP.
Has anyone encountered this situation before? I also have the option of parsing the file in .NET.
Thanks and take care,
Angel
View 9 Replies
View Related