Counting # Of Imported Rows For Each Import Process
Nov 25, 1998
hi, I am importing data daily into many tables, and I want to keep track of the number of rows for each import process. I have already created a trigger as follows:
CREATE TRIGGER tr_bcp_log ON dbo.A
FOR INSERT
AS
declare @name varchar(30),
        @row_count int
select @name = name, @row_count = @@rowcount
from inserted
insert into bcp_tracks (name, row_count)
values (@name, @row_count)
GO
The problem is that I am getting a row for each inserted row in table A. For instance, if I have 500 rows in table A, I will get 500 rows in the log table, like this:
table_name, # of rows
A 1
A 1
A 1
etc., up to 500 rows for table A
This is not what I want. I want to capture the number of rows for every bcp process, so in the log table I want to see the following:
table_name, # of rows
A 500
B 600
C 450
A 250
etc
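For reference, an INSERT trigger fires once per INSERT statement, and counting the rows in the inserted pseudo-table logs that whole statement's row count in one go. A minimal sketch along those lines (bcp may still fire the trigger once per batch rather than once per file, depending on the version and the batch size):

CREATE TRIGGER tr_bcp_log ON dbo.A
FOR INSERT
AS
declare @row_count int
-- COUNT(*) over the inserted pseudo-table covers every row affected
-- by the triggering statement, giving one log row per INSERT
select @row_count = COUNT(*) from inserted
-- the table is known inside the trigger, so log its name as a literal
insert into bcp_tracks (name, row_count)
values ('A', @row_count)
GO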
Today I tried out Integration Services, and after a couple of hours of confusion I am impressed by the power of this product. I was wondering if I might get some help on the 'best practice' for the following requirements:
I have a foreach container that scans a directory and calls a data flow task for each file in the directory. The files import to the database fine, but I want to modify the procedure so it only imports files that have not yet been imported. There are a couple of scenarios:
- a file is created in the directory
- a file is modified in the directory
In both cases I want to insert (or reinsert) the file. How can I modify my package to accommodate this behaviour? Storing the filename is an option, but I am not sure how to also bring in the file creation/modified dates.
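One common pattern, sketched here purely as an assumption-laden example: keep a tracking table of imported files, have a Script Task read the file's last-modified date (System.IO.FileInfo.LastWriteTime) into a package variable alongside the filename, and gate the data flow on a lookup against the table. The ImportedFiles table and the variable names below are hypothetical:

-- tracking table, updated after each successful import
CREATE TABLE dbo.ImportedFiles (
    file_name     varchar(260) NOT NULL PRIMARY KEY,
    modified_date datetime     NOT NULL,
    imported_on   datetime     NOT NULL DEFAULT GETDATE()
)
GO

-- test an Execute SQL Task could run before the data flow; both
-- variables would be mapped from SSIS package variables
DECLARE @file_name varchar(260), @modified_date datetime
IF EXISTS (SELECT 1 FROM dbo.ImportedFiles
           WHERE file_name = @file_name
             AND modified_date >= @modified_date)
    SELECT 0 AS NeedsImport   -- this version was already imported: skip
ELSE
    SELECT 1 AS NeedsImport   -- new or modified file: (re)import

The result maps back into a package variable that a precedence constraint can test before the data flow task runs.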
I have a stored procedure which will run automatically. I've got try...catch code in the procedure, but I found a bug where, if there are any import errors, it doesn't recognize that there was an error and runs through the TRY code as though there were no problems. (I reported the bug.) I added some code using @@rowcount to check whether any rows were imported, and if not, it moves the data file to an error folder so I know there was a problem with the import. But this only checks that at least one row was imported, not that all the rows in the data file have been imported (i.e. if the first row imported correctly and the second did not, it still sees it as successful). The problem is some of the data files have only one row to import and some have multiple rows. Is there a way to count the number of rows in the data file, then count the number of rows imported, to verify they are the same number? Thanks, Laura
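One workaround, sketched under assumptions (the file path, table name, and terminators are placeholders): stage the raw lines of the file into a one-column table to get the file's own row count, then compare it against what the real import reports.

DECLARE @file_rows int, @imported_rows int

-- stage whole lines; the field terminator is a string that never
-- occurs in the data, so each entire line lands in the single column
CREATE TABLE #raw_lines (line varchar(8000))
BULK INSERT #raw_lines FROM 'C:\imports\datafile.txt'
    WITH (ROWTERMINATOR = '\n', FIELDTERMINATOR = '|~|')
SELECT @file_rows = COUNT(*) FROM #raw_lines

-- the real import (a stand-in for whatever the procedure does today)
BULK INSERT dbo.TargetTable FROM 'C:\imports\datafile.txt'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')
SET @imported_rows = @@ROWCOUNT   -- must be read before the next statement

IF @imported_rows <> @file_rows
    PRINT 'Counts differ: move the data file to the error folder'

If the file carries a header row, subtract it from @file_rows before comparing.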
I have data in an Excel format (Excel 2010) and I would like to bulk import it into SQL Server 2008, and also, while importing, have SQL Server automatically create a new table based on the header fields or row of the source file.
I am not sure if SQL Server 2008 has this capability.
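SELECT ... INTO does create the destination table on the fly, and OPENROWSET can read the workbook, provided the ACE OLE DB provider is installed and ad hoc distributed queries are enabled. A sketch, with the path and sheet name as placeholders:

-- HDR=YES takes the column names from the header row; the new table's
-- columns are created from the types the provider infers
SELECT *
INTO dbo.ImportedFromExcel
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'Excel 12.0;Database=C:\imports\data.xlsx;HDR=YES',
                'SELECT * FROM [Sheet1$]')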
is there any way to set up a column that has the row count in it? I need this for a program I am developing, and it would make things much easier to deal with. I know I can get a total count, but when I run a count within a SELECT statement I just get '1' for every row. thanks
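On SQL Server 2005 and later, window functions do exactly this; a minimal sketch with hypothetical table and column names:

SELECT OrderId,
       COUNT(*) OVER ()                     AS total_rows,  -- the same grand total on every row
       ROW_NUMBER() OVER (ORDER BY OrderId) AS row_num      -- a running 1..N instead
FROM dbo.Orders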
I have a SELECT statement joining table1 to table2. Table1 may have 0, 1, or many corresponding rows in table2. I need to count the corresponding rows in table2. Table2 also has a Boolean column, and I need to count the number of rows where it is true. So I need the total number of matching rows and the count of those that are set to true. This is an example of what I have so far. I had to add each column being selected into a GROUP BY to make it work, but I do not know why. Is there some other way this should be set up?
SELECT c.CarId, c.CarName, c.CarColor,
       COUNT(t.TrailerId) AS trailerCount,
       (add count of Boolean, say t.TrailerFull is true)
FROM Car c
LEFT JOIN Trailer t ON t.CarId = c.CarId
GROUP BY c.CarId, c.CarName, c.CarColor
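The GROUP BY is required because every non-aggregated column in the SELECT list has to appear in it, so that part is set up correctly. The conditional count can be written as a SUM over a CASE; a sketch using the names from the question:

SELECT c.CarId, c.CarName, c.CarColor,
       COUNT(t.TrailerId) AS trailerCount,
       -- counts only the joined rows where the flag is set
       SUM(CASE WHEN t.TrailerFull = 1 THEN 1 ELSE 0 END) AS fullTrailerCount
FROM Car c
LEFT JOIN Trailer t ON t.CarId = c.CarId
GROUP BY c.CarId, c.CarName, c.CarColor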
i have a report with one group and items under the group.
I want to add a new field called Sl. No. at the group level which keeps a count of the groups. By that I mean, if there are 10 groups, I want the numbers 1 to 10 to appear against each group, as 1 Grp1, 2 Grp2, 3 Grp3, ..., 10 Grp10.
How can I achieve the above? I tried RowNumber, but it returns the number of rows the group has. The scopes I have are "table1", the main scope, and "table1_Group1", the group scope, which is inside the table1 scope.
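On the report side, RunningValue with CountDistinct over the "table1" scope is the usual way to get a group serial number. A dataset-side alternative is sketched below, with hypothetical table and column names: number the groups in the query itself with DENSE_RANK and simply display that column at the group level.

SELECT GroupField,
       DetailColumn,
       -- every row of the same group carries the same sequential number
       DENSE_RANK() OVER (ORDER BY GroupField) AS sl_no
FROM dbo.ReportData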
hi, i have a stored procedure:

SELECT UserName AS Visitor, COUNT(VisitID) AS TotalVisit
FROM UserVisits
WHERE (ProductID = @ProductID) AND (AnonimIP IS NULL)
GROUP BY UserName
UNION
SELECT AnonimIP AS Visitor, COUNT(VisitID) AS TotalVisit
FROM UserVisits AS UserVisits_1
WHERE (ProductID = @ProductID) AND (UserName IS NULL)
GROUP BY AnonimIP

this will return something like:

zuperboy90 - 4 visits
ANONIMOUS - 6 visits
85.104.103 - 2 visits

etc. How can I count the rows returned by both selections (4 + 6 + 2 = 12)? thank you
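Wrapping the UNION in a derived table lets an outer query aggregate the whole result; a sketch built from the query above (UNION ALL, so identical totals are not collapsed):

SELECT COUNT(*)        AS VisitorCount,  -- number of visitors returned
       SUM(TotalVisit) AS TotalVisits    -- 4 + 6 + 2 = 12 in the example
FROM (
    SELECT UserName AS Visitor, COUNT(VisitID) AS TotalVisit
    FROM UserVisits
    WHERE ProductID = @ProductID AND AnonimIP IS NULL
    GROUP BY UserName
    UNION ALL
    SELECT AnonimIP, COUNT(VisitID)
    FROM UserVisits
    WHERE ProductID = @ProductID AND UserName IS NULL
    GROUP BY AnonimIP
) AS v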
Can anyone explain to me how to count the number of rows per page? I need to calculate an index size, and I think the factor that contributes the most to my formula's inaccuracies is the number of rows per page.
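For estimation, the Books Online index-size formula puts rows per page at 8096 / (row size + 2), the extra 2 bytes being each row's slot-array entry. On SQL Server 2005 and later the actual figure can also be read from a DMV; a sketch with a placeholder table name:

SELECT index_id, index_level, page_count, record_count,
       record_count * 1.0 / NULLIF(page_count, 0) AS avg_rows_per_page
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'DETAILED')
WHERE index_level = 0   -- leaf pages only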
I have a stored procedure which deletes a number of rows from a number of different tables. How do I count/return the number of deleted rows in each table?
Here is my stored procedure if it helps:
SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
GO

ALTER PROCEDURE [dbo].[usp_delete_entry]
    @new_venue_id int
AS
BEGIN
    DECLARE @new_customer_id int
    SET @new_customer_id = (SELECT customer_id FROM VENUE WHERE venue_id = @new_venue_id)

    DELETE FROM FEATURED WHERE venue_id = @new_venue_id
    DELETE FROM FACILITIES WHERE venue_id = @new_venue_id
    DELETE FROM SIC WHERE venue_id = @new_venue_id
    DELETE FROM SUBSCRIPTION WHERE venue_id = @new_venue_id
    DELETE FROM ADMIN WHERE venue_id = @new_venue_id
    DELETE FROM VENUE WHERE venue_id = @new_venue_id
    DELETE FROM CUSTOMER WHERE customer_id = @new_customer_id
END
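@@ROWCOUNT holds the row count of the immediately preceding statement only, so it has to be captured right after each DELETE. A sketch of the pattern, shown for the first two tables (the rest repeat the same two lines):

DECLARE @featured_rows int, @facilities_rows int

DELETE FROM FEATURED WHERE venue_id = @new_venue_id
SET @featured_rows = @@ROWCOUNT    -- grab it before the next statement resets it

DELETE FROM FACILITIES WHERE venue_id = @new_venue_id
SET @facilities_rows = @@ROWCOUNT

-- ...same pattern for SIC, SUBSCRIPTION, ADMIN, VENUE, and CUSTOMER...

SELECT @featured_rows   AS featured_deleted,
       @facilities_rows AS facilities_deleted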
I hate to ask for such silly help, but I'm missing something here and need help. I have a table with columns for createddate and deleteddate. The data gets created and deleted periodically, and I need to find out the number of created, deleted, and remaining records on each day. This query works but takes a lot of time; I'm not sure if there is a better way to do this. Please help.

SELECT CAST(createddate AS DATETIME) AS createdDate, Created, Deleted, Remaining
FROM (
    SELECT CONVERT(VARCHAR, createdon, 102) AS CreatedDate,
           COUNT(1) AS Created,
           (SELECT COUNT(1) FROM table ta2
            WHERE CONVERT(VARCHAR, ta2.deletedon, 102) = CONVERT(VARCHAR, ta.createdon, 102)) AS Deleted,
           ((SELECT COUNT(1) FROM table ta1
             WHERE CONVERT(VARCHAR, ta1.createdon, 102) <= CONVERT(VARCHAR, ta.createdon, 102))
            - (SELECT COUNT(1) FROM table ta1
               WHERE CONVERT(VARCHAR, ta1.deletedon, 102) <= CONVERT(VARCHAR, ta.createdon, 102))) AS Remaining
    FROM table ta
    WHERE CONVERT(VARCHAR, createdon, 102) >= (GETDATE() - 90)
    GROUP BY CONVERT(VARCHAR, createdon, 102)
) AS tmp
ORDER BY createdDate DESC
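The correlated subqueries rescan the table once per output row, which is most likely the slow part. A hedged, set-based alternative (SQL Server 2012 or later for the windowed running total; dbo.Items stands in for the real table, and the 90-day filter is left off, since cutting off created rows alone would skew the running remainder): turn each record into a +1 event on its created day and a -1 event on its deleted day, aggregate once, and take a cumulative sum.

SELECT ev.dt,
       SUM(CASE WHEN ev.delta = 1  THEN 1 ELSE 0 END) AS created,
       SUM(CASE WHEN ev.delta = -1 THEN 1 ELSE 0 END) AS deleted,
       SUM(SUM(ev.delta)) OVER (ORDER BY ev.dt)       AS remaining  -- running net total
FROM (
    SELECT CONVERT(varchar(10), createdon, 102) AS dt, 1 AS delta
    FROM dbo.Items
    UNION ALL
    SELECT CONVERT(varchar(10), deletedon, 102), -1
    FROM dbo.Items
    WHERE deletedon IS NOT NULL
) AS ev
GROUP BY ev.dt
ORDER BY ev.dt DESC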
Hi, I have a table that for ease has this data in:R1, R2, R....z---------------------A | 12A | 22A | 30B | 0B | -1B | -3C | 100I want to generate a table for each distinct row in R1, gives a countof all the rows with data correspondingFor the above table I would getA | 3B | 3C | 1Im probably being stupid but cannot see this at the moment... pleasehelp.Thanks
I would like to create a user-defined SQL function which returns the number of rows which meet a certain condition, and the average value of one of the columns. I cannot find a code example for it. Please help.
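An inline table-valued function can return both values in one row; a sketch in which the table, column, and condition are all placeholders:

CREATE FUNCTION dbo.fn_CountAndAvg (@min_price money)
RETURNS TABLE
AS
RETURN (
    SELECT COUNT(*)   AS row_count,   -- rows meeting the condition
           AVG(price) AS avg_price    -- average of one of the columns
    FROM dbo.Products
    WHERE price >= @min_price
)
GO

-- usage:
SELECT * FROM dbo.fn_CountAndAvg(10)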
Is there a way (perhaps a property) to capture the number of rows selected from a Flat File Data Flow Source without having to develop a script to loop through the rows and count them?
I need to count and display the number of records which have GradeTitle = "SHO". I'm only starting to use BI Development Studio, and all attempts at using the built-in aggregate functions have failed.
Also, the report I wish to create has a fixed number of columns and a fixed number of rows, as the info being displayed is really only counting values in the DB. I tried using a Table, but multiple rows were created.
I'd appreciate it if anyone could point me in the right direction, as searching this forum turned out to be pretty fruitless for me.
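If the aggregation can be pushed into the dataset query instead of report expressions, it reduces to a plain aggregate that returns exactly one row; a sketch with a placeholder table name:

SELECT COUNT(*) AS sho_count
FROM dbo.StaffGrades
WHERE GradeTitle = 'SHO'

A single-row dataset like this also fits a fixed layout of textboxes better than a data-bound table region does.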
Which method in MSSQL 7.0 would best suit de-duping, basically leaving the dupes behind in the import file?
I have two processes. In the first process I'm importing from two different tables, so any potential dupes would have their unique RecIDs given from either table. I can then de-dupe on the unique ucase(entry)+RecID combo.
This works fine; however, in the second process the import file has only one source, and therefore I could have real dupes. Currently I've only used a T-SQL cursor process to copy all the data into a temp table, delete the data in the live table, then use another cursor in the same process to copy only one instance of each (account_number + RecID) back into the live table.
This too works, but I'd like to make a DTS package that can do this on import in as few steps as possible. I'm thinking of using one connection and a proc(?)
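A set-based version of the second cursor, sketched with placeholder table and column names: land everything in a staging table on import, then copy a single instance of each key into the live table.

-- keep exactly one row per account_number + RecID combination
INSERT INTO dbo.LiveTable (account_number, RecID, entry)
SELECT account_number, RecID, MIN(entry)
FROM dbo.StagingTable
GROUP BY account_number, RecID

In a DTS package that is two steps: the import into the staging table and an Execute SQL task that runs the INSERT.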
Hopefully, there is a way to do this. I work with two SQL servers. One is our production server the other is our test server. In order to test various things, I often need to copy the source data from one server to the other. Most of our programming is in VBA. It's easy enough to open a recordset and fill it with the data I need from the production server, then upload each record, one at a time, to the test server. The problem is that I am dealing with a massive amount of data and this takes a long time.
I have found that I can use the import task in SQL Server Enterprise and it transfers the data extremely quickly. Is there a way, preferably using VBA, that I could automate this import task process?
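One way to keep the speed without driving Enterprise Manager by hand, sketched under assumptions (a linked server named TESTSRV and placeholder database/table names): a single set-based INSERT...SELECT across a linked server moves the rows in one statement instead of one record at a time, and VBA can fire it through an ordinary ADO Connection.Execute.

-- run on the production server; TESTSRV is a linked server pointing
-- at the test box (defined once with sp_addlinkedserver)
INSERT INTO TESTSRV.TestDb.dbo.TargetTable (col1, col2, col3)
SELECT col1, col2, col3
FROM dbo.SourceTable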
My value in SQL appears in the 4th column as 8251439.5. I defined the source as having a float field. Any idea why my value gets changed and how to work around it?
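Without the original value this is only a guess, but the symptom matches single-precision rounding: a real carries roughly seven significant digits, so near eight million the representable values are 0.5 apart. A small illustration with a hypothetical source value:

DECLARE @r real, @d decimal(12, 2)
SELECT @r = 8251439.53,    -- hypothetical source value
       @d = 8251439.53
SELECT @r AS as_real,      -- comes back as 8251439.5, the nearest representable value
       @d AS as_decimal    -- keeps 8251439.53

Defining the column as float (double precision) or, better, as decimal with enough digits avoids the rounding.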
Hi, I am having trouble importing data from an Excel spreadsheet into MS SQL Server 2000 using the DTS Wizard. The DTS import process is successful, no errors, but only 50 rows of the approx. 1500 rows of data are imported. I tried removing 20 rows from the Excel spreadsheet in the interval of rows 0-50. When I later ran the import, only 30 rows were imported. I deleted almost every row in the interval 0-50, with the result of the import having 0 rows imported (but the job ran successfully). I decided to delete rows 0-100 in the spreadsheet in order to see if that resolved the problem, but it didn't. As I suspected something in the Excel file to be the cause, I exported the Excel spreadsheet to a tab-delimited text file, with only one row. A DTS import resulted in importing approx. 100 rows, double the amount of the text file, but the other 1400 rows were not imported. The data in the column contains numeric values only. Please help me! What could possibly be the cause of DTS skipping rows like that? DTS doesn't feel reliable at all :/ Regards, Björn
Hi, I used the /e in my bcp code, yet I did not get all the rows from the mainframe into the SQL tables. Here is the case: I have 11 million rows on an FTP server, and I use this code to bcp them into SQL Server. Can anyone check whether this code is good for the process? I am missing one million rows in the bcp process and do not know why. I put the /e in to see if there were any errors, but could not find any error file on my hard drive. Please check it out and let me know.
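The bcp command itself isn't quoted above, so the line below is only a hypothetical shape of one, with every name a placeholder. Two things worth checking: -e only logs rows that bcp itself rejects, and bcp aborts the load once the -m error limit (10 by default) is exceeded, which can drop a large tail of the file; no error file appears at all if the -e path isn't writable.

bcp MyDb.dbo.TargetTable in D:\feeds\mainframe.dat -c -S myserver -T -e D:\feeds\mainframe.err -m 100 -b 10000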
We write to a log file each time a job runs, and we give each job a unique batchid. I want to compare the run times of each step/record between two batch ids: '20150101888' and '20150101777'. The column Mins is the number of minutes each step ran. I am having trouble comparing the rows that have a generic process and stepname (Trans Switch in this example). A new process within a batchid starts with an 'XX', 'Load' row.
So I want to compare CA's Trans Switch to CA's Trans Switch, and ER's Trans Switch to ER's, etc. There can be multiple Trans Switch rows per process. There should be the same number between the two batches, but there is no guarantee that something might not change. Also, in production the Trans Switch row is not necessarily the record right after the new process (CA, ER) row.
I have just made a very simplified example.
Want to compare 20150101888 to 20150101777 and end up with the result set below. Notice that a duplicated process/step within a process gets the process name (CA and ER in this example) and a sequential number added to it: 'CA Trans 1'. I need this to pull out the largest time differences.
Time difference, process, step, mins1, mins2, batchid1, batchid2
-6, CA, Load, 17, 23, 20150101888, 20150101777
 0, CA Trans 1, Switch, 8, 8, 20150101888, 20150101777
-6, CA Trans 2, Switch, 9, 15, 20150101888, 20150101777
-4, ER, Load, 7, 11, 20150101888, 20150101777
-4, ER Trans 1, Switch, 7, 11, 20150101888, 20150101777
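A sketch of one approach, in which the log table name and an ordering column (log_time here) are assumptions: ROW_NUMBER sequences the repeated Trans Switch steps within each batch and process so the two batches can be joined row for row, and the sequence number also produces the 'CA Trans 1', 'CA Trans 2' labels:

WITH numbered AS (
    SELECT batchid, process, step, mins,
           ROW_NUMBER() OVER (PARTITION BY batchid, process, step
                              ORDER BY log_time) AS seq   -- 1, 2, ... per repeated step
    FROM dbo.JobLog
    WHERE batchid IN ('20150101888', '20150101777')
)
SELECT a.mins - b.mins AS time_difference,
       CASE WHEN a.step = 'Switch'
            THEN a.process + ' Trans ' + CAST(a.seq AS varchar(10))
            ELSE a.process
       END AS process,
       a.step, a.mins AS mins1, b.mins AS mins2,
       a.batchid AS batchid1, b.batchid AS batchid2
FROM numbered AS a
JOIN numbered AS b
  ON  b.process = a.process
  AND b.step    = a.step
  AND b.seq     = a.seq
WHERE a.batchid = '20150101888'
  AND b.batchid = '20150101777'
ORDER BY ABS(a.mins - b.mins) DESC   -- largest differences first

An inner join drops steps that exist in only one batch; switching to a FULL OUTER JOIN would surface those instead.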
After the date you can see that there is a two-digit number, either 00 or 01. The rows also have different lengths.
Whenever that column contains 00, the line should be inserted into one text file; if the column contains 01, it should go to another file.
How can I solve this in a good way?
One of the problems I have is that when I try to import the rows, the flat file connection indicates (error message) that I have a partial row in the file, which is true, since the rows where the column contains 01 have more fields than the others.
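Inside SSIS the usual trick is to define the flat file connection with a single wide column, so the ragged rows cannot break the parser, and fan the lines out with a Conditional Split on a SUBSTRING of the code. The same idea in T-SQL is sketched below, with the path, code position, and names all placeholders; each SELECT could feed a bcp ... queryout command to write its own text file.

-- stage whole lines; the field terminator is a string that never
-- occurs in the data, so each entire line lands in the one column
CREATE TABLE #lines (line varchar(8000))
BULK INSERT #lines FROM 'C:\imports\mixed.txt'
    WITH (ROWTERMINATOR = '\n', FIELDTERMINATOR = '|~|')

-- position 9 is a guess at where the two-digit code after the date sits
SELECT line FROM #lines WHERE SUBSTRING(line, 9, 2) = '00'   -- first file
SELECT line FROM #lines WHERE SUBSTRING(line, 9, 2) = '01'   -- second file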