Missing 1 Million Rows During BCP PROCESS!! Urgent Help
Mar 12, 1999
Hi, I used the /e switch in my bcp command, yet I did not get all the rows from the mainframe into the SQL tables. Here is the case: I have 11 million rows in a file on an FTP server, and I use the code below to bcp it into SQL Server. Can anyone check whether this code is right for the process? I am missing one million rows in the bcp process and do not know why. I added /e so I could see any errors, but I could not find an error file anywhere on my hard drive.
Please check it out and let me know
regards
Ali
Exec master..xp_cmdshell "bcp dbname..tablename in c:\ftproot\Nbtorder.txt /f d:\ftproot\formatfile\tablename.fmt /S servername /U sa /P password /b250000 /a8000 /e errfileORD"
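As a side note: bcp gives up after 10 errors by default, and with /b 250000 every completed batch stays committed, so a load can end up silently short by roughly the size of the remaining batches. A minimal sketch of the same call with an explicit maximum-error count and a full path for the error file (all file, server and login names below are placeholders, not the originals):

EXEC master..xp_cmdshell 'bcp dbname..tablename in c:\ftproot\Nbtorder.txt -f c:\ftproot\tablename.fmt -S servername -U sa -P password -b 250000 -a 8000 -m 1000 -e c:\ftproot\bcp_errors.txt'

If the error file still does not appear, remember that xp_cmdshell runs on the SQL Server machine, so the file is written to the server's disk, not the client's.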
View 2 Replies
Jun 9, 2006
We have SQL Server 2000 with merge replication at a Publisher and subscriber.
We have some records getting deleted at Publisher and Subscriber and no conflicts are logged.
We have tried the compensate_for_errors setting and this has had no effect.
This is causing serious data corruption and has now become an URGENT issue. Our tech team is almost out of ideas.
Has anyone experienced this or have any ideas as to what to check next?
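One thing that may be worth checking is whether the deletes are arriving through the merge process itself: SQL Server 2000 merge replication records deleted rows in the MSmerge_tombstone system table in each replica. A rough sketch of a sanity check to run at both Publisher and Subscriber (purely illustrative):

-- Tombstones record rows deleted under merge replication
SELECT * FROM dbo.MSmerge_tombstone

-- Map the tablenick values back to article/table names
SELECT nickname, name FROM dbo.sysmergearticles

If the missing rows show up as tombstones, the deletes are being propagated by merge; if not, something outside replication is removing them.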
View 3 Replies
Dec 12, 2014
I run the following statement and it will not update beyond 7 million plus rows, and I have about 38 million to complete. I keep checking the updated row count, and after half a day it is still the same, so I know something is wrong, because it was rolling through with no problem when I initiated it. I need to complete this ASAP, so that is adding to my frustration. (The 'Acct_Num_CH' field is an encrypted field, FYI.)
SET rowcount 10000
UPDATE [dbo].[CC_Info_T]
SET [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
WHERE [Acct_Num_CH] IS NOT NULL
WHILE @@ROWCOUNT > 0
BEGIN
SET rowcount 10000
UPDATE [dbo].[CC_Info_T]
SET [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
WHERE [Acct_Num_CH] IS NOT NULL
END
SET rowcount 0
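As far as I can tell from the script alone, the likely culprit is the WHERE clause: once a row has been set to the constant value it still satisfies [Acct_Num_CH] IS NOT NULL, so each pass can re-qualify rows that were already updated and @@ROWCOUNT never reaches zero. A sketch of one common fix - excluding already-converted rows (and using UPDATE TOP, which is preferred over SET ROWCOUNT on SQL 2005 and later):

UPDATE TOP (10000) [dbo].[CC_Info_T]
SET [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
WHERE [Acct_Num_CH] IS NOT NULL
AND [Acct_Num_CH] <> 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'

WHILE @@ROWCOUNT > 0
BEGIN
    UPDATE TOP (10000) [dbo].[CC_Info_T]
    SET [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
    WHERE [Acct_Num_CH] IS NOT NULL
    AND [Acct_Num_CH] <> 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
END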
View 5 Replies
Apr 9, 2008
I'm new to using a DB and have a few questions about what I'm trying to do. I have some historical options data and want to place it into a SQL Express database. (I understand I might need to use a non-Express version once the DB gets too big.) A month's worth of data is over 5.5 million rows, so six years' worth is ~400 million rows. Is it possible to put this into a SQL DB and be able to search it very fast? I have a month's worth in a DB now and it is pretty slow. Should I use a new table for each month and then have 6 years * 12 months = 72 tables to increase the search speed? I search by date and stock_symbol, and the data looks like this:
Date, Stock_Symbol, Option_Symbol, Strike, BidPrice, AskPrice, Volume, OpenInterest, (and a few others)
The select statement is simple: SELECT * FROM Options WHERE Date = @Date and StockSymbol = @Symbol
Thanks
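For what it's worth, a single table with an index that matches the search predicate is usually a better first step than splitting the data into 72 monthly tables. A minimal sketch using the column names described above (treat the exact names and types as assumptions):

-- Clustered index on the two columns used in every search
CREATE CLUSTERED INDEX IX_Options_Date_Symbol
ON dbo.Options ([Date], Stock_Symbol);

-- The existing query can then seek straight to the requested slice
DECLARE @Date datetime, @Symbol varchar(10);
SELECT @Date = '20080102', @Symbol = 'MSFT';

SELECT *
FROM dbo.Options
WHERE [Date] = @Date
AND Stock_Symbol = @Symbol;

If the table later outgrows this, archiving old months to a separate table, or table partitioning in the editions that support it, is a cleaner next step than 72 lookalike tables.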
View 4 Replies
Mar 21, 2000
In our database we have a very large table that gets updated every morning. The start-of-day job copies about 4 million rows in the fact table from the previous date to today's date (within the same table) and then does some other processing. It takes 1.5 to 2 hours. A DTS package was created to copy these rows into a temp table and then into the fact table.
The table has more than 200 million rows.
Any ideas on how to accomplish this without copying the data twice and without running into locking problems?
Thanks for any suggestions.
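A minimal sketch of doing the copy in one pass instead of two, going straight from yesterday's slice of the fact table into today's; the table and column names (FactTable, DateKey, Col1, Col2) are placeholders for the real schema:

DECLARE @Today datetime, @Yesterday datetime
SELECT @Today = CONVERT(char(8), GETDATE(), 112)
SELECT @Yesterday = DATEADD(day, -1, @Today)

-- One-pass copy: previous day's rows re-inserted under today's date
INSERT INTO dbo.FactTable (DateKey, Col1, Col2)
SELECT @Today, Col1, Col2
FROM dbo.FactTable
WHERE DateKey = @Yesterday

If blocking of concurrent readers is the worry, the same insert can be broken into ranges on a key column, but avoiding the intermediate temp-table copy is usually the bigger win.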
View 5 Replies
Jul 16, 2013
I am deleting 8 million rows from my database. I am wondering how to keep the T-log under control; I have also heard something about row locks and table locks.
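A minimal sketch of the usual pattern - delete in small batches so each transaction (and its lock and log footprint) stays small; the table name and purge predicate are placeholders:

WHILE 1 = 1
BEGIN
    -- Batches of a few thousand rows stay below the ~5,000-lock threshold
    -- at which SQL Server escalates row locks to a table lock
    DELETE TOP (4000) FROM dbo.BigTable
    WHERE ArchiveDate < '20130101'

    IF @@ROWCOUNT = 0 BREAK

    -- In SIMPLE recovery a CHECKPOINT lets the log space be reused between
    -- batches; in FULL recovery, frequent log backups do the same job
    CHECKPOINT
END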
View 4 Replies
Feb 27, 2015
I have the following table:
table name : emp_master
empid efname emname elamane efathername emothername deptno edob edoj createdby updateby lastupdatedatetime lastactionperformed
empid is the primary key.
This table contains 20 million records, and I want to run the following query against it to get all the data for employees with more than 10 years since their date of joining:
select empid ,efname, emname, elamane, efathername, emothername, deptno ,edob ,edoj ,createdby, updateby, lastupdatedatetime ,lastactionperformed
from emp_master
where year(edoj) + 10 > year(getdate())
This returns approximately 10 million rows and takes 18 minutes. What approaches should I take to tune this query and reduce the execution time?
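Wrapping the column in YEAR() stops SQL Server from seeking on an index over edoj, and returning 13 columns for ~10 million rows is mostly an I/O cost. A sketch of a sargable version (the boundary below assumes "joined at least 10 years ago" - adjust it to the real business rule):

SELECT empid, efname, emname, elamane, efathername, emothername,
       deptno, edob, edoj, createdby, updateby, lastupdatedatetime, lastactionperformed
FROM dbo.emp_master
WHERE edoj <= DATEADD(YEAR, -10, GETDATE())

-- A supporting index; it mainly pays off if the predicate is selective
-- or the SELECT list can be trimmed to a covered subset
CREATE NONCLUSTERED INDEX IX_emp_master_edoj ON dbo.emp_master (edoj)

If the query genuinely has to return half the table with all columns, the time is dominated by reading and shipping the data, and trimming the column list (or paging the results) will matter more than any index.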
View 6 Replies
Mar 24, 2008
Hi,
I have to load 1 million rows from a database (or flat file) to a database (or flat file).
Which task is the best solution for this?
Appreciate any assistance in this regard.
Thanks,
Das
View 5 Replies
May 4, 2000
I need to update about 1.3 million rows in a table of mine.
I am getting the data from one of the columns of the same table and
updating the new column.
I am doing this using a cursor which I have put in a stored procedure.
This is a production table which users might be accessing; it is a web-based application and I can't slow the system down.
So I am willing to run the stored procedure during off-peak hours.
However, do I need to put this in a transaction?
If I did put it in a transaction what type of isolation level should I
opt for?
Data integrity is very important for me and I don't mind compromising on performance.
I am doing this because the column which holds the "short description" entry has become too small for business purposes, and we want to increase its length from varchar(100) to varchar(150).
As this is SQL 6.5, I can't increase the length of the column, so I added a new column and will run the stored proc.
What precautions are to be taken?
This is on a high priority basis and very important too.
Thanks in advance...
Stored procedure code:
USE DB_Registration_Dev
GO
IF EXISTS (SELECT NAME FROM SYSOBJECTS WHERE NAME='usp_update_product' AND TYPE='P')
DROP PROCEDURE usp_update_product
GO
CREATE PROC usp_update_product
AS
DECLARE @short_desc varchar(100)
DECLARE @prod_id int
DECLARE sdesc_curs CURSOR
FOR
SELECT [Product].[product_id] , [Product].[short_description]
FROM Product
OPEN sdesc_curs
FETCH NEXT FROM sdesc_curs
INTO @prod_id, @short_desc
WHILE @@FETCH_STATUS = 0
BEGIN
UPDATE Product
SET [Product].[sdesc] = @short_desc
WHERE Product_id=@prod_id
FETCH NEXT FROM sdesc_curs
INTO @prod_id, @short_desc
IF @@FETCH_STATUS <> 0
PRINT ' Finished Updating the table...go ahead and have fun ...! '
END
DEALLOCATE sdesc_curs
GO
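Unless each row needs different logic, a cursor is the slowest way to do this copy; a single set-based UPDATE (or a batched one, to keep the 6.5 transaction log in check) does the same work in one pass. A sketch, assuming the new sdesc column starts out NULL so unconverted rows can be identified (and if the cursor route is kept, CLOSE the cursor before DEALLOCATE):

-- Simple one-shot version
UPDATE Product
SET sdesc = short_description

-- Batched version for off-peak runs, to keep each transaction small
SET ROWCOUNT 50000
UPDATE Product
SET sdesc = short_description
WHERE sdesc IS NULL AND short_description IS NOT NULL
WHILE @@ROWCOUNT > 0
BEGIN
    UPDATE Product
    SET sdesc = short_description
    WHERE sdesc IS NULL AND short_description IS NOT NULL
END
SET ROWCOUNT 0

Each UPDATE statement is atomic on its own, so an explicit transaction around the whole 1.3 million rows usually isn't needed for integrity, and the default READ COMMITTED isolation level is fine.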
View 1 Replies
Feb 27, 2008
I'm looking for some performance assistance on updating a column value in a table that contains approximately 50 million rows. I have a permanent table in another database that has the key column and the value to be set. My query is listed below, but I'm afraid it will run for quite a while. Any suggestions would be appreciated.
update mytable
set column2 = b.column2
from mytable as a
join mytable1 as b
on a.column1 = b.column1
There is a one to one relationship between the two tables.
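Two things may be worth checking in a statement like this: the UPDATE target normally refers to the alias used in the FROM clause (update a rather than update mytable), and on ~50 million rows a keyed batch keeps each transaction's lock and log footprint manageable. A sketch that assumes column1 is an integer key (that assumption, and the batch size, are illustrative only):

DECLARE @start int, @batch int
SELECT @start = MIN(column1), @batch = 100000 FROM mytable

WHILE @start IS NOT NULL
BEGIN
    UPDATE a
    SET a.column2 = b.column2
    FROM mytable AS a
    JOIN mytable1 AS b ON a.column1 = b.column1
    WHERE a.column1 >= @start AND a.column1 < @start + @batch

    -- Move the watermark to the next key range
    SELECT @start = MIN(column1) FROM mytable WHERE column1 >= @start + @batch
END

An index on column1 in both tables keeps both the join and the watermark lookups cheap.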
View 8 Replies
Mar 10, 2000
Need help. I am managing a data warehouse (80 GB database size), and I purge data older than 6 months from a table which has more than 140 million rows, on a daily basis. The daily data load performance is degrading. The table has no clustered index (only non-clustered indexes).
Tried dropping and rebuilding the non-clustered indexes, didn't work.
One way to solve the problem is to drop the non-clustered indexes, bcp out the data, truncate the table, bcp the data back in and rebuild the non-clustered indexes. This is too risky, and it takes 14 hours just to bcp out the data.
This was not an issue in SQL Server 6.5, because SQL 6.5 always inserts new records at the end of the heap (a heap being a table with non-clustered indexes but no clustered index). In contrast, SQL Server 7.0 first checks for available space in existing pages using the free-space (PFS) information, and this is where it is killing the performance.
Thanks for your help!
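One option that often helps with a daily purge-and-load pattern like this is to give the table a clustered index on the column the purge filters on (typically the load date), so both the purge and the new load touch a contiguous range of pages instead of scattering rows across the heap and its non-clustered indexes. A sketch with placeholder names - building it on 140 million rows is itself a long, space-hungry operation, so it needs a maintenance window:

CREATE CLUSTERED INDEX IX_Fact_LoadDate ON dbo.FactTable (LoadDate)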
View 4 Replies
Aug 21, 2007
I have 1+ CSV files (using a foreach loop) which I'm doing a lot of transform work on and then inserting into a SQL database table.
Each CSV file usually contains about two days' worth of data (it contains date stamps) - somewhere in the region of 60k records per day.
The destination table currently contains 3 million+ rows and will get bigger.
I need to make sure that before inserting into the destination table, the data doesn't already exist.
I've read the following article: http://www.sqlis.com/311.aspx
While the lookup method works, it takes ages and eats up memory as it caches the 3m+ records before running for each CSV. Obviously this will only get worse as the table grows in size.
To make things a little more efficient what I'd like to do, is first derive the dates I'm dealing with in the current file - essentially storing the max(date) and min(date) in variables. Then in the lookup SQL use those vars, to reduce the amount of data that needs to be brought into the transformation to check against before inserting into the destination table.
Lookup SQL eg. SELECT * FROM MyTable WHERE Date BETWEEN varMinDate AND varMaxDate.
Ideally I'd use an aggregate transformation and then use the subsequent output from that either in the lookup query or store the output in vars, but I don't think you can do that and I get the feeling I'm approaching this with the wrong mindset.
Any thoughts would be great!
View 6 Replies
Jun 12, 2015
I have a requirement to delete 1 million records from a table holding 10 million rows that is queried on a 24/7 basis (we don't have a downtime window). How can I achieve that?
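A common approach is to trickle the delete through in small batches with a short pause between them, so the table stays available to the 24/7 queries and no single transaction gets large. A sketch with placeholder names and predicate:

DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    -- Small batches stay well below the lock-escalation threshold (~5,000 locks)
    DELETE TOP (2000) FROM dbo.MyBigTable
    WHERE PurgeFlag = 1;

    SET @rows = @@ROWCOUNT;

    -- Brief pause so concurrent readers and writers get a look in
    WAITFOR DELAY '00:00:01';
END

An index that matches the purge predicate keeps each batch from scanning the whole table.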
View 13 Replies
Feb 1, 2005
1) I am writing a select statement:
select a.columname1, b.columname1
from table1 a,table2 b
where a.columname2 = b. columname3
How do I compare a.columname2 (an int column) with b.columname3 (a varchar column)? How should I use the CONVERT function?
2) What's the best way to import 1.5 million rows? How about from a text file?
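For question 1, a sketch of both directions of the conversion (whichever way is used, the converted side can no longer use an index seek on that column):

SELECT a.columname1, b.columname1
FROM table1 AS a
JOIN table2 AS b
    ON CONVERT(varchar(20), a.columname2) = b.columname3
-- or, if every value in b.columname3 is numeric:
--  ON a.columname2 = CONVERT(int, b.columname3)

For question 2, 1.5 million rows from a text file is a natural fit for BULK INSERT or the bcp utility rather than row-by-row INSERTs.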
View 2 Replies
Jul 29, 2014
I am doing performance testing of the In-Memory option in SQL Server 2014. As part of this I want to insert 500 million rows into an in-memory enabled test table I have created.
I need a sample script to insert 500 million records into a table ....
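A minimal sketch of a generator, assuming a hypothetical memory-optimized table dbo.InMemTest (Id int, Payload char(100)) - the names, types and batch size are illustrative only. It inserts one million rows per iteration, 500 times:

DECLARE @i int = 0;
WHILE @i < 500
BEGIN
    ;WITH n AS (
        -- Generate 1,000,000 sequential numbers per pass
        SELECT TOP (1000000)
               ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
        FROM sys.all_objects AS a
        CROSS JOIN sys.all_objects AS b
    )
    INSERT INTO dbo.InMemTest (Id, Payload)
    SELECT rn + @i * 1000000, REPLICATE('x', 100)
    FROM n;

    SET @i += 1;
END

Keeping each insert to a million rows avoids one enormous transaction; for a realistic test the payload should mimic the production row size.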
View 9 Replies
Sep 17, 2015
I have been tasked with writing an update query to update a table with more than 150 million rows of data. Here are the table structures:
Source Tables :
OC
CREATE TABLE [dbo].[OC](
[OC] [nvarchar](255) NULL,
[DATE DEBUT] [date] NULL,
[DATE FIN] [date] NULL,
[Code Article] [nvarchar](255) NULL,
[INSERTION] [nvarchar](255) NULL,
[Code] ....
The update requirement is as follows:
DECLARE @Counter INT=0 --This causes the @@rowcount to be > 0
while @@rowcount>0
BEGIN
SET rowcount 10000
update r
set Comp=t.Comp
[Code] ....
The update ran for more than 48 hours and didn't terminate. How can I speed it up?
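Without the full script it is hard to say where the 48 hours go, but the two usual suspects are a WHERE clause that keeps re-qualifying rows that were already updated (so the loop never drains) and missing indexes on the join keys. A sketch of a watermark-driven batch instead - the table names, the integer key Id and the batch size are all assumptions:

DECLARE @minId int, @maxId int, @batch int = 100000;
SELECT @minId = MIN(Id), @maxId = MAX(Id) FROM dbo.TargetTable;

WHILE @minId <= @maxId
BEGIN
    UPDATE r
    SET r.Comp = t.Comp
    FROM dbo.TargetTable AS r
    JOIN dbo.SourceTable AS t ON t.Id = r.Id   -- join keys should be indexed on both sides
    WHERE r.Id >= @minId AND r.Id < @minId + @batch;

    SET @minId = @minId + @batch;
END

Because each pass works on a fixed key range, the loop is guaranteed to finish, and progress can be monitored simply by printing @minId.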
View 6 Replies
Aug 30, 2007
I have a table that is being used to log track plays on our website.
Here's the table:
CREATE TABLE [dbo].[Music_BandTrackPlays](
[ListenDate] [datetime] NOT NULL DEFAULT (getdate()),
[TrackId] [int] NOT NULL,
[IPAddress] [varchar](20)
) ON [PRIMARY]
There's a CLUSTERED INDEX on ListenDate ASC and a NON CLUSTERED INDEX on the TrackId.
I have a TRIGGER on the Music_BandTrackPlays table that looks like the following:
CREATE TRIGGER [trig_Increment_Music_BandTrackPlays_PlayCount]
ON [dbo].[Music_BandTrackPlays] AFTER INSERT
AS
UPDATE
Music_BandTracks
SET
Music_BandTracks.PlayCount = Music_BandTracks.PlayCount + TP.PlayCount
FROM
(SELECT TrackId, COUNT(*) AS PlayCount
FROM inserted
GROUP BY TrackId) AS TP
WHERE
Music_BandTracks.TrackId = TP.TrackId
When a simple INSERT statement is done on the Music_BandTrackPlays table, it can take quite a long time. When I remove the TRIGGER the INSERTs are immediate. The execution plan for the TRIGGER shows that an 'Inserted Scan' is taking up most of the resources.
How exactly is the pseudo 'inserted' table formed?
For now, I think the easiest thing to do is update my logging page so it performs 2 queries: one to UPDATE the Music_BandTracks table and increment the counter, and one to perform the INSERT into the Music_BandTrackPlays table separately.
I'm ok with that solution but I would really like to understand why the TRIGGER is taking so long. The 'inserted' pseudo table will be 1 row 99% of the time. Does SQL Server perform a table scan on all 20 million rows in order to determine what's new and put it in the inserted pseudo table?
Thanks!
View 6 Replies
Apr 24, 2013
IF NOT EXISTS (SELECT TOP 1 1 FROM dbo.syscolumns WHERE id = OBJECT_ID(N'dbo.Employee') AND name = 'DoNotCall')
BEGIN
ALTER TABLE [dbo].[Employee] ADD [DoNotCall] bit not null Constraint DoNot_Call_Default DEFAULT 0
IF ( @@ERROR <> 0 )
GOTO QuitWithRollback
END
It just takes a LOT of time in SQL Server Management Studio. I have to cancel the query, and cancelling takes a whole lot of time as well. I am using SQL Server 2008.
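On SQL Server 2008, adding a NOT NULL column with a default has to physically write the default into every row of the table (the metadata-only behaviour arrived in later versions/editions), so on a large Employee table some duration is expected. If it appears to do nothing at all, though, it is often just waiting on a schema-modification lock behind another session; a quick check to run from a second window while the ALTER is stuck (sketch):

SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;

If the ALTER's session shows up with a non-zero blocking_session_id, the fix is to deal with the blocker, not the ALTER itself.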
View 4 Replies
Apr 2, 2008
Hi, I need to write a query which I have never attempted before and could do with some help. I have a Groups table and a Users_Groups lookup table. In this model, users can only be assigned to one group. If a group is deleted, a trigger should fire and delete any rows in Users_Groups having a matching Groups.Ref. Unfortunately, the trigger hasn't been firing and I now have a load of defunct rows in Users_Groups relating users to groups which do not exist. I now need to find all of these defunct rows in Users_Groups so that I can delete them. How can I find rows in Users_Groups where the parent rows and refs in Groups are missing? I've tried searching the net for something similar but don't even know how to word the search properly to get any half-relevant results. Cheers. PS: I do realise I need to tighten the constraints on my database.
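As a sketch (the join column names below - Groups.Ref and Users_Groups.GroupRef - are guesses and should be replaced with the real ones), the defunct rows are simply the ones whose group reference no longer has a parent:

-- Find Users_Groups rows whose group no longer exists
SELECT ug.*
FROM Users_Groups AS ug
LEFT JOIN Groups AS g ON g.Ref = ug.GroupRef
WHERE g.Ref IS NULL

-- Same predicate as a delete, once the SELECT results have been checked
DELETE ug
FROM Users_Groups AS ug
LEFT JOIN Groups AS g ON g.Ref = ug.GroupRef
WHERE g.Ref IS NULL

A proper foreign key with ON DELETE CASCADE (or at least a trusted trigger) would stop the orphans from reappearing.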
View 5 Replies
Jan 20, 2006
In the SSIS Analysis Services processing task, I was wondering if
anyone knows why some dimensions do not have the Process Update option
in the list of options for processing them? If there is
only Process Full, Process Data, and Unprocess, I am not sure how
I can do incremental updates without scripting.
Also, will this affect the cubes if a full process is performed?
Any help is much appreciated!
View 1 Replies
May 24, 2000
Hi,
I ran the following command and it went well, but I didn't find a file authors.txt.
Where will I find this authors.txt?
exec master.. xp_cmdshell "bcp pubs..authors out authors.txt -c
-Sserver -Uuser -Ppwd"
--kavira
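xp_cmdshell runs on the server, not on the client, so authors.txt is written on the SQL Server machine - with no path given it lands in the current directory of the process that ran bcp, often the Windows system directory or the SQL Server binn directory. Giving a full path that exists on the server makes the file easy to find; a sketch:

exec master..xp_cmdshell "bcp pubs..authors out c:\temp\authors.txt -c -Sserver -Uuser -Ppwd"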
View 3 Replies
Apr 12, 2007
Can someone please help me with troubleshooting my SQL Server authenticated user? One of the users that I created has a login name of <None>. I am unable to delete this user and recreate his login, nor can I plug in a login name. I believe that there is a script to correct this problem, but I am having trouble researching it. I am working under the gun to resolve this issue, so anyone's help is much appreciated.
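A login shown as <None> for a SQL-authenticated database user usually means the user has become orphaned (its login was dropped, or the database was restored or attached from another server). On SQL Server 2000/2005 the standard repair is sp_change_users_login; a sketch, where 'problemuser' stands in for the real user name:

-- List orphaned users in the current database
EXEC sp_change_users_login 'Report';

-- Re-link the user to an existing login with the same name
EXEC sp_change_users_login 'Auto_Fix', 'problemuser';

If no matching login exists yet, create it first (or use the four-argument form of Auto_Fix, which can create one with a password) and then re-run the fix.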
View 2 Replies
Sep 30, 2004
Any help at this stage is welcome!
Due to collation discrepancies, the publisher and subscriber databases have been recreated.
However, when I attempt to restart publishing, SQL gives the message:
"The process could not make a generation at the 'Subscriber'"
There is enough space (3+ GB) on the hard disks of both the subscriber and the publisher.
User rights are OK and I have reinitialized the publication.
Everything seems OK and yet I get the error mentioned above.
Can anyone give me some help with this?
Thank you!! :)
VincentJS
View 2 Replies
Jul 20, 2005
We get the following error message: "a floating point exception occurred in the user process. current transaction is cancelled". This message comes when trying to execute a stored procedure. The exception is unpredictable. OS: Windows 2000 (SP3). Version: SQL Server 2000 (SP3).
View 1 Replies
Jan 31, 2008
Hi
Just thought I would ask the question on here with regard to a problem we have. We have a PDA-based auditing tool which uses SQL Compact to store information via a Compact Framework application. Now we have a user who swears blind they have carried out an audit, but there is no trace of it anywhere in the database, even though there are audits on there prior to the said date and after, and there is no other facility to delete data from the app other than a complete wipe of the database, in which case everything would have gone!
The data is stored over 4 tables, and there are no traces of it (or of any inconsistent leftovers) in any of them, so it has simply disappeared. Does anybody have any experience of data simply vanishing into thin air?
Thanks in advance
Andy
View 1 Replies
Jan 20, 2004
Hello,
I have a SQL 2000 database in which reports are generated on a monthly basis from the data inside one of my tables. The reports had been working fine, until some of the rows seemed to have disappeared!
I know the data used to be in the table, since it is showing on the old reports; however, when I try to pull that same data now, it is not in the database at all.
Does anyone have any ideas on what could have caused this or how I can resolve it?
Thxs!!
View 4 Replies
Jun 27, 2007
I have enabled drillthrough on a cube (AS 2005) and selected the columns required. This worked. However, when I use the drillthrough option in Excel, the drillthrough queries return fewer rows than expected. For example, for one cell combination in the pivot table I saw 21198 rows, but the drillthrough query returned just 21106 rows - fewer than expected.
Any ideas what causes this?
Thanks!
:)
View 1 Replies
Aug 7, 2006
Hello
I am using:
Microsoft SQL Server 2000 - 8.00.2039 (Intel X86) May 3 2005 23:18:38
Copyright (c) 1988-2003 Microsoft Corporation
Desktop Engine on Windows NT 5.1 (Build 2600: Service Pack 2)
I am using BULK INSERT to import some pipe-delimited flat files into a database.
I am firstly converting the file using VB.NET, to ensure each line of the file has a carriage return (by using streamwriter.writeline), and I am also ensuring there is no blank line at the end of the file (by using streamwriter.write).
Once I have done this, my BULK INSERT command appears to work OK. This is how I am using the statement:
BULK INSERT tempHISTORY
FROM 'C:\TEMP\HISTORY.TXT'
WITH
(
FIRSTROW = 2,
FIELDTERMINATOR = '|',
ROWTERMINATOR = '\n'
)
NB: The first row in the file is a header row.
This appears to work OK; however, I have found that certain files seem to miss the final line of the file! I have analysed these files in case they have an inconsistent number of columns, but they don't.
I have also found that if I knock off the last column of the tempHISTORY table, the correct number of rows are imported. However, of course, I can't just discard one of the columns from the file, I need to import the entire file.
I cannot understand why BULK INSERT is choosing to miss the final line in the file, when the schema of the destination table matches the structure of the file.
View 2 Replies
May 24, 2007
Hello everyone,
I'm trying to create a performant script to copy records from a table in a source database, to an identical table in a destination database.
In SQL 2000, I used to create a little lookup which did a count using certain fields. If the record was missing, I executed an INSERT query, otherwise an UPDATE query. The result was that the table on the destination side was always up to date. Duplicate rows were out of the question.
This was, if I'm not mistaken, a Data Transformation, using a bit of custom VBA code to govern the transfer. For each source row, the custom code was executed, and depending on its result, a different query was launched.
Now I'm trying to do the same using SSIS in SQL 2005. Is there a task which does this for me, or do I have to script again? In the latter case, which type of task would I use?
(I thought of the Script Task, but then I would need to set up quite a bit myself.)
Thank you,
Bram
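In SSIS 2005 the usual no-script pattern is a Lookup transformation on the destination's key: rows that match go to an OLE DB Command that runs the UPDATE, while rows that do not match (redirected from the Lookup's error output in 2005) go to the OLE DB Destination for the INSERT. For larger volumes it is often faster to land everything in a staging table with a plain data flow and finish with set-based T-SQL in an Execute SQL task; a sketch with placeholder table and column names:

-- Update rows that already exist in the destination
UPDATE d
SET d.Col1 = s.Col1, d.Col2 = s.Col2
FROM dbo.DestTable AS d
JOIN dbo.StagingTable AS s ON s.BusinessKey = d.BusinessKey;

-- Insert the rows that are new
INSERT INTO dbo.DestTable (BusinessKey, Col1, Col2)
SELECT s.BusinessKey, s.Col1, s.Col2
FROM dbo.StagingTable AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.DestTable AS d WHERE d.BusinessKey = s.BusinessKey);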
View 1 Replies
Feb 8, 2007
SQL 2k, DDL below. I have a simple table with the following data:
fldYear fldCode1 fldCode2
2000 ABC1 ABC12
2000 ABC1 ABC13
2001 ABC1 ABC12
2002 ABC1 ABC12
2002 ABC1 ABC13
I need to know, for every distinct combination of fldCode1 and fldCode2, if there are any years missing.
For example,
SELECT DISTINCT fldCode1, fldCode2 FROM MyTable
returns
ABC1 ABC12
ABC1 ABC13
I need to know that in 2001 there was no entry for ABC1/ABC13.
Thanks!
Edward
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[MyTable]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
drop table [dbo].[MyTable]
GO
CREATE TABLE [dbo].[MyTable] (
[fldYear] [char] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[fldCode1] [char] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[fldCode2] [char] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO
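One set-based way to answer this is to build every year/code combination that could exist (from the years and code pairs actually present) and outer-join the real data back against it; whatever fails to join is a missing year. A sketch that works on SQL 2000:

SELECT y.fldYear, c.fldCode1, c.fldCode2
FROM (SELECT DISTINCT fldYear FROM MyTable) AS y
CROSS JOIN (SELECT DISTINCT fldCode1, fldCode2 FROM MyTable) AS c
LEFT JOIN MyTable AS t
       ON t.fldYear = y.fldYear
      AND t.fldCode1 = c.fldCode1
      AND t.fldCode2 = c.fldCode2
WHERE t.fldYear IS NULL

Note that this only considers years that appear somewhere in the table; if whole years could be absent for every code, a small years/calendar table should replace the first derived table.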
View 2 Replies
Nov 25, 1998
hi, I am importing data daily into many tables, and I want to keep track of the number of rows for each import process. I have already created a trigger as follows:
CREATE TRIGGER tr_bcp_log ON dbo.A
FOR INSERT
AS
declare @name varchar(30),
@row_count int
select @name=name , @row_count= @@rowcount
from inserted
insert into bcp_tracks (name,row_count)
values(@name,@row_count)
GO
The problem is that I am getting a row for each inserted row in table A. For instance, if I have 500 rows in table A, I will get 500 rows in the log table, like this:
table_name,#of rows
A 1
A 1
A 1
etc up to 500 rows for table A
This is not what I want. I want to capture the number of rows for every bcp process, so in the log table I want to see the following:
table_name, #of rows
A 500
B 600
C 450
A 250
etc
Any help ?
thanks
Ali
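Two things seem to be going on: reading @@ROWCOUNT and a single name from the inserted table only behaves when exactly one row arrives per statement, and a trigger can only ever log per INSERT statement - so if the load really inserts one row at a time, even a corrected trigger will log one row per row. A sketch of the per-statement version (note also that bcp bulk loads do not fire triggers unless explicitly requested, e.g. via the FIRE_TRIGGERS hint in later versions, so for true bcp runs it is often simpler to log @@ROWCOUNT from the load script itself right after the bcp/INSERT completes):

CREATE TRIGGER tr_bcp_log ON dbo.A
FOR INSERT
AS
    -- One log row per INSERT statement, with the number of rows that statement inserted
    INSERT INTO bcp_tracks (name, row_count)
    SELECT 'A', COUNT(*)
    FROM inserted
GO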
View 1 Replies
Aug 11, 2014
I need to use Bulk insert statement for copying a table with 200 million rows to another table on the same server...the table has no primary key or identity column.... script for BULK INSERT ...
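BULK INSERT reads from a data file, so for a straight table-to-table copy on the same server it is usually simpler to skip the file and use a minimally logged INSERT...SELECT (database in SIMPLE or BULK_LOGGED recovery plus a TABLOCK hint on the target), and only fall back to bcp out / BULK INSERT in if a file is genuinely needed. A sketch with placeholder names:

-- Minimally logged when the target is a heap or empty, recovery is SIMPLE/BULK_LOGGED and TABLOCK is held
INSERT INTO dbo.TargetTable WITH (TABLOCK)
SELECT *
FROM dbo.SourceTable;

With no primary key or identity column there is nothing to renumber, but a 200-million-row copy still deserves a maintenance window, since TABLOCK blocks other writers on the target.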
View 9 Replies
Feb 12, 2008
Hi,
I have written a reporting application which has a SQL2005 backend. An import routine into SQL, written by a 3rd party, frequently fails. The main problems are missing rows in certain tables.
I am going to write an SP that accepts a from date and a to date. I then want to search for rows of type X between those dates that do not exist, so that we know for which days in that date range we have no data.
I have this working by returning all rows between the dates into a dataset, sorted by date, and then running through the rows and testing whether the next row's date is the next expected date. This works, but I think it is a very poor solution. This is all done on the client in C#.
I want to learn and implement the most efficient way of doing this. My only solution in an SP was to make a temporary table of all dates between the date range for row type X and then do a right outer join against the data table, returning all rows which are missing.
Something like this:
SELECT twmd.date
FROM #temp_table_with_all_dates ttwad
RIGHT OUTER JOIN table_with_missing_date twmd
ON ttwad.date = twmd.date
WHERE twmd.date IS NULL
Would this be a good, efficient solution, or should I just stick to my processing of a dataset in C#?
Many thanks in advance,
CB
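The calendar-table idea is the standard set-based way to do this, and it will comfortably beat iterating a dataset in C#. The only catch in the query above is that the preserved (outer) side has to be the calendar table and the SELECT should return the calendar's date column; as written it returns twmd.date, which is exactly the column being tested for NULL. An adjusted sketch:

SELECT ttwad.date
FROM #temp_table_with_all_dates AS ttwad
LEFT OUTER JOIN table_with_missing_date AS twmd
       ON twmd.date = ttwad.date
WHERE twmd.date IS NULL

With an index (or natural uniqueness) on both date columns this stays cheap even as the table grows.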
View 4 Replies