I am trying to get a result like the one below without using a cursor.
col1 col2 col3 col4
1111 date uniquenumber 6
1111 date uniquenumber 5
1111 date uniquenumber 4
1111 date uniquenumber 3
1111 date uniquenumber 2
1111 date uniquenumber 1
2222 date uniquenumber 4
2222 date uniquenumber 3
2222 date uniquenumber 2
2222 date uniquenumber 1
3333 date uniquenumber 2
3333 date uniquenumber 1
The column that says "uniquenumber" is unique and is not duplicated; the date column might have duplicates.
Please advise whether it is possible to write this query without a cursor.
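For reference, a set-based sketch of the requested output (SQL Server 2005+); the table name t is hypothetical, and driving the countdown off col3 is an assumption:

    SELECT col1, col2, col3,
           ROW_NUMBER() OVER (PARTITION BY col1 ORDER BY col3) AS col4
    FROM t
    ORDER BY col1, col4 DESC;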
With the function below, I receive this error:

Error: Transaction count after EXECUTE indicates that a COMMIT or ROLLBACK TRANSACTION statement is missing. Previous count = 1, current count = 0.

Function:

    Public Shared Function DeleteMesssages(ByVal UserID As String, ByVal MessageIDs As List(Of String)) As Boolean
        Dim bSuccess As Boolean
        Dim MyConnection As SqlConnection = GetConnection()
        Dim cmd As New SqlCommand("", MyConnection)
        Dim i As Integer
        Dim fBeginTransCalled As Boolean = False
        'messagetype 1 = internal messages
        Try
            ' Start transaction
            MyConnection.Open()
            cmd.CommandText = "BEGIN TRANSACTION"
            cmd.ExecuteNonQuery()
            fBeginTransCalled = True
            Dim obj As Object
            For i = 0 To MessageIDs.Count - 1
                bSuccess = False
                'delete userid-message reference
                cmd.CommandText = "DELETE FROM tblUsersAndMessages WHERE MessageID=@MessageID AND UserID=@UserID"
                cmd.Parameters.Add(New SqlParameter("@UserID", UserID))
                cmd.Parameters.Add(New SqlParameter("@MessageID", MessageIDs(i).ToString))
                cmd.ExecuteNonQuery()
                'then delete the message itself if no other user has a reference
                cmd.CommandText = "SELECT COUNT(*) FROM tblUsersAndMessages WHERE MessageID=@MessageID1"
                cmd.Parameters.Add(New SqlParameter("@MessageID1", MessageIDs(i).ToString))
                obj = cmd.ExecuteScalar
                If ((Not (obj) Is Nothing) _
                    AndAlso ((TypeOf (obj) Is Integer) _
                    AndAlso (CType(obj, Integer) > 0))) Then
                    'more references exist so do not delete message
                Else
                    'this is the only reference to the message so delete it permanently
                    cmd.CommandText = "DELETE FROM tblMessages WHERE MessageID=@MessageID2"
                    cmd.Parameters.Add(New SqlParameter("@MessageID2", MessageIDs(i).ToString))
                    cmd.ExecuteNonQuery()
                End If
            Next i
            ' End transaction
            cmd.CommandText = "COMMIT TRANSACTION"
            cmd.ExecuteNonQuery()
            bSuccess = True
            fBeginTransCalled = False
        Catch ex As Exception
            'LOG ERROR
            GlobalFunctions.ReportError("MessageDAL:DeleteMessages", ex.Message)
        Finally
            If fBeginTransCalled Then
                Try
                    cmd = New SqlCommand("ROLLBACK TRANSACTION", MyConnection)
                    cmd.ExecuteNonQuery()
                Catch e As System.Exception
                End Try
            End If
            MyConnection.Close()
        End Try
        Return bSuccess
    End Function
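For context, that error means a batch opened a transaction on the connection that was never committed or rolled back there. Note also that the loop re-adds parameters with the same names to the same SqlCommand on every iteration without calling cmd.Parameters.Clear(), which looks likely to fail on the second message and drop execution into the rollback path. A minimal T-SQL sketch of the pairing the server checks for (TRY/CATCH needs SQL Server 2005+):

    BEGIN TRANSACTION;
    BEGIN TRY
        -- the deletes would go here
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
    END CATCH;
    SELECT @@TRANCOUNT AS open_transactions;  -- 0 once the pairing is correct

On the client side, wrapping the commands in an ADO.NET SqlTransaction instead of issuing BEGIN TRANSACTION as its own batch keeps the pairing in one place.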
I am looking to create an incremental value based on the resulting insert that I am using. There is already another field being used as an identity field. I have a beginning value, and I just want to add the row number to it for the insert.
    insert into lineitem
    select substring(group_id, 4, len(ltrim(rtrim(group_id))) - 3) as co_code,
           0, 0,
           (case when enddate < cast(month(getdate()) as varchar(10)) + cast(day(getdate()) as varchar(10))
                 then 'Prior' else 'Current' end),
           left(acct_type, 2) as bene_type,
           convert(smalldatetime, left(ltrim(rtrim(eff_date)), 8)),
           0, trans_amt, 0, 0,
           convert(smalldatetime, left(ltrim(rtrim(sett_date)), 8)),
           ltrim(rtrim(b.fname)) + ' ' + ltrim(rtrim(b.lname)) as payee,
           0, 0, a.ssn, 'Y',
           999 + count(*),
           (case when isnull(b.location, '') = '' then '' else b.location end) as location
    from mbi_tran_temp a
    left join enrollees b
        on a.ssn = b.ssn
        and ltrim(rtrim(a.group_id)) = ltrim(rtrim(b.mbicode))
The '999+count(*)' is where I would like to have the incremental value.
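A windowed row number (SQL Server 2005+) can produce that value inline; a minimal sketch, with the ORDER BY column chosen arbitrarily as an assumption:

    select 999 + row_number() over (order by a.ssn) as line_no,
           a.ssn
    from mbi_tran_temp a
    left join enrollees b on a.ssn = b.ssn;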
Hi friends, please let me know how we can incrementally load a destination table from a source table, bearing in mind that we need to ensure there are no duplicates in the destination table. I need to load only changed or new data in the final load. Please give me some examples as well. I have been trying this for the last two days, as I am totally new to SSIS.
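Not SSIS itself, but as a rough illustration of the target behavior, a T-SQL MERGE does the same insert-new/update-changed work in one statement (SQL Server 2008+; table and column names are hypothetical):

    MERGE dest AS d
    USING src AS s
        ON d.id = s.id
    WHEN MATCHED AND d.col1 <> s.col1 THEN
        UPDATE SET d.col1 = s.col1
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (id, col1) VALUES (s.id, s.col1);

In SSIS, the classic shape is a Lookup against the destination, with no-match rows going to an insert and matched-but-changed rows to an update.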
I have done a bulk upload and I would like to start doing incremental uploads. I just want to upload only the new records that have been added to my data source (free FoxPro tables). Can anybody point me to an example or some info on how to accomplish this?
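One common pattern, sketched in T-SQL on the assumption that the source rows carry an ever-increasing key (the id column and table names are hypothetical):

    DECLARE @last_id int;
    SELECT @last_id = ISNULL(MAX(id), 0) FROM dest;

    INSERT INTO dest (id, name, created)
    SELECT id, name, created
    FROM src
    WHERE id > @last_id;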
Dear all, I have a SQL table where I am inserting from an ASP.NET web project. I have a primary key set to be incremented by 1 on each insert. I have another column, Col1, that should also be incremented on each insert. How can I do this? I need to have two incremental columns; do you have any idea how to do this? Right now I do it by calling a SELECT statement in the Add stored procedure on this table: I read the last value of Col1 and then increment it by 1. This works fine in the development environment, but when many users are accessing this website, I will have wrong values for Col1. Any ideas? Thanks.
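A common way to avoid that read-then-increment race is to let a single UPDATE hand out the next value atomically; a sketch assuming a hypothetical counter table Counters(name, next_val):

    DECLARE @next int;
    UPDATE Counters
    SET @next = next_val = next_val + 1
    WHERE name = 'Col1';
    -- @next is now unique to this caller, even under concurrent inserts

The compound assignment reads and increments the value inside one atomic statement, so two concurrent callers can never see the same number.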
Hi, I need to create two instances of a database and transfer incremental uploads from one DB to another without having to transfer the entire table of data again and again. How should I go about it? What commands should I use?
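Built-in options include replication and log shipping; if both databases are reachable from one query, a rowversion watermark also works. A sketch with hypothetical names, where the target keeps the source's rowversion in a plain binary(8) column src_rv:

    DECLARE @last_rv binary(8);
    SELECT @last_rv = ISNULL(MAX(src_rv), 0x0) FROM TargetDb.dbo.t;

    INSERT INTO TargetDb.dbo.t (id, col1, src_rv)
    SELECT id, col1, rv
    FROM SourceDb.dbo.t
    WHERE rv > @last_rv;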
I have a catalog and added a directory which has 10,000 documents, all of which are indexed. But if I add 1,000 documents to that directory, I don't want those 1,000 documents to be indexed. I want to keep only the previous 10,000 indexed documents and don't want the new documents to be indexed. Is there any way to stop the new documents from being indexed? Please let me know; it's a bit urgent. Thanking you in anticipation.
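If this is a SQL Server full-text catalog (an assumption; the post could also concern a file-system indexing product), disabling change tracking keeps the existing index contents while ignoring rows added later, until a population is explicitly requested; dbo.Docs here is a hypothetical table:

    ALTER FULLTEXT INDEX ON dbo.Docs
    SET CHANGE_TRACKING OFF;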
I've created an SSIS package that loads data from source to destination, using a Lookup and a Conditional Split to detect new rows and changed rows for one table.
Now I want to take this further by loading data for multiple tables, more than 100. I previously did this in T-SQL using dynamic SQL and a cursor.
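For reference, the dynamic-SQL-plus-cursor pattern mentioned looks roughly like this; the LoadList control table and the src/dest schemas are assumptions:

    DECLARE @t sysname, @sql nvarchar(max);
    DECLARE c CURSOR LOCAL FAST_FORWARD FOR
        SELECT table_name FROM LoadList;
    OPEN c;
    FETCH NEXT FROM c INTO @t;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @sql = N'INSERT INTO dest.' + QUOTENAME(@t) +
                   N' SELECT * FROM src.' + QUOTENAME(@t);
        EXEC sp_executesql @sql;
        FETCH NEXT FROM c INTO @t;
    END
    CLOSE c;
    DEALLOCATE c;

In SSIS, the usual equivalent is a Foreach Loop container driving a parameterized data flow or Execute SQL Task per table.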
I was wondering if there is a way to schedule a task that will dump a fixed-width text file of all the new entries in a table. So if I had a table like

    username - varchar(20)
    created - smalldatetime

I could get a weekly feed each week of all the new users in a text file. I know I could write a script that would go through and do this by looking at the timestamp and the last time the file previously ran, and get the new dates, but I was hoping there was a built-in way to do this. Or perhaps a more elegant solution. Thanks, Charlie
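A sketch of the watermark approach described, with a hypothetical ExportLog control table; bcp with queryout and a format file could then write the fixed-width file from the same query:

    DECLARE @last smalldatetime;
    SELECT @last = last_run FROM ExportLog;

    SELECT username, created
    FROM Users
    WHERE created > @last;

    UPDATE ExportLog SET last_run = GETDATE();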
Hi all. Need your help to do this; I got a table with these records:

    Supplier  RegNo     Status  PoNumber
    ABC       sbh1309m  1
    DCD       sbt99x    1
    FGJ       sbg3939m  1
    FGJ       sbg3939m  1
    OEE       ey3939d   1

Need to have a SQL command to transform it to:

    Supplier  RegNo     Status  PoNumber
    ABC       sbh1309m  1       50001
    DCD       sbt99x    1       50002
    FGJ       sbg3939m  1       50003
    FGJ       sbg3939m  1       50003
    OEE       ey3939d   1       50004

Any ideas? Thanks in advance. Rashid.
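DENSE_RANK (SQL Server 2005+) gives duplicate Supplier/RegNo rows the same number; a sketch with a hypothetical table name t:

    SELECT Supplier, RegNo, Status,
           50000 + DENSE_RANK() OVER (ORDER BY Supplier, RegNo) AS PoNumber
    FROM t;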
I am attempting to perform an incremental load, inserting new rows and updating existing rows. I am using a Lookup and everything works fine, except on the first load of the destination table: as there are no records in the table at all, the lookup fails. I thought of using a row count - if the count variable is zero, load everything from the temporary table to the load table; otherwise, perform the lookup and incremental load.
We are in the process of converting our existing incremental loads from DTS to SSIS.
Currently we get all the data for the past month into temp tables in the warehouse, compare on key fields, add the new rows, and update the changed rows. All this is done using the Execute SQL task.
Is there a better way to implement the incremental logic using SSIS - are there any new objects that can be used to avoid so much SQL code? Performance is very important, and we do a lot of aggregation after the load so that the reports run faster and we can meet customer SLAs.
We have around 20 tables that need to be loaded. Four have large amounts of data, between 20 and 40 million rows, out of which we bring over around 100 thousand during each incremental run. The other tables have fewer than 100,000 rows, so it does not hurt to truncate and reload the entire table.
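For comparison, the compare-and-apply step described can be written as two plain statements; the #stage table and key/column names are assumptions:

    UPDATE d
    SET d.col1 = s.col1
    FROM dest AS d
    JOIN #stage AS s ON s.key_id = d.key_id
    WHERE d.col1 <> s.col1;

    INSERT INTO dest (key_id, col1)
    SELECT s.key_id, s.col1
    FROM #stage AS s
    WHERE NOT EXISTS (SELECT 1 FROM dest AS d WHERE d.key_id = s.key_id);

In SSIS, a Lookup plus Conditional Split, or staging plus a set-based statement like the above, are the usual replacements depending on volume.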
Hi everyone. I'm trying to figure out how to run an incremental load into a Staging table.
At this point I'm not trying to Conditional Split it between "New" and "Changed" records... just the load.
The logic in my head says that after each load, you can take the most recent "modified" date/time and store that in an incremental load table. That way, next time you run an incremental load, you just have to look up that "modified" date/time, and only load the source records with a "modified" date/time later than the record in your incremental load table. Does that plan sound feasible?
I think my problem so far is that my source is on an ADO.NET connection while my incremental load table is on my SQL Server, so when I do my load from the ADO.NET database, I cannot read the data from the incremental load table.
Is my logic flawed? Any help would be appreciated.
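The plan described is the standard high-water-mark pattern; a sketch in T-SQL with a hypothetical LoadControl table:

    DECLARE @last datetime;
    SELECT @last = last_modified FROM LoadControl;

    SELECT *
    FROM src
    WHERE modified > @last;

    -- after a successful load:
    UPDATE LoadControl
    SET last_modified = (SELECT MAX(modified) FROM src);

One way around the two-connection problem is to read the stored date into an SSIS variable first (an Execute SQL Task against SQL Server) and then use that variable in the source query on the ADO.NET connection.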
I have to import a CSV file into a database via SSIS, and so far everything works fine. But now I have to add a column where an incremental int should be inserted, so every row has a unique number. I tried to do this with a Derived Column transformation, but without success. Does someone have an idea how to do this?
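The usual in-pipeline answer is a Script Component that increments a counter per row; if a T-SQL step is acceptable instead, an IDENTITY column on the staging table hands the numbers out for free. A sketch with hypothetical names:

    CREATE TABLE staging (
        row_no int IDENTITY(1, 1),   -- fills itself in on each insert
        col1   varchar(50),
        col2   varchar(50)
    );
    -- the SSIS data flow then inserts only col1 and col2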
Does a differential backup contain all the changes since the last full backup, including inserts, updates, and deletes? Our DB has "Truncate Log on Checkpoint" = True, so log backups are nonsensical. I want to apply the differentials to an archived, offline DB.
"one pebble does not fill the void, but it is a start." "Dedicated to only creating original mistakes."
I'm trying to update every record with an incremental number. I wrote the following query, but it updates all the records with the same number. Could someone please tell me what I'm doing wrong? Thanks.
    declare @NextKey int

    SELECT @NextKey = NEXT_KEY FROM TABLE_KEYS WHERE TABLE_NAME = "tblEmp"
    set @NextKey = @NextKey + 1

    DECLARE tblNewEmp_cursor CURSOR FOR
        select emp_pk from tblNewEmp

    open tblNewEmp_cursor
    FETCH NEXT FROM tblNewEmp_cursor
    WHILE @@FETCH_STATUS = 0
    begin
        update tblNewEmp set emp_pk = @NextKey
        set @NextKey = @NextKey + 1
        FETCH NEXT FROM tblNewEmp_cursor
    end
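The UPDATE has no WHERE clause, so every pass rewrites all the rows and the last value wins. Fetching into a variable and updating the current row fixes it; a corrected sketch (the cursor must not be declared read-only for WHERE CURRENT OF to work):

    DECLARE @emp_pk int

    OPEN tblNewEmp_cursor
    FETCH NEXT FROM tblNewEmp_cursor INTO @emp_pk
    WHILE @@FETCH_STATUS = 0
    BEGIN
        UPDATE tblNewEmp SET emp_pk = @NextKey
        WHERE CURRENT OF tblNewEmp_cursor
        SET @NextKey = @NextKey + 1
        FETCH NEXT FROM tblNewEmp_cursor INTO @emp_pk
    END
    CLOSE tblNewEmp_cursor
    DEALLOCATE tblNewEmp_cursor

On SQL Server 2005 and later, a set-based alternative drops the cursor entirely:

    WITH n AS (
        SELECT emp_pk, ROW_NUMBER() OVER (ORDER BY emp_pk) AS rn
        FROM tblNewEmp
    )
    UPDATE n SET emp_pk = @NextKey + rn - 1;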
I am working on an application that retrieves data from multiple tables in the database, and all the fields retrieved are exported as an Excel sheet. All the export functionality is done through DTS, and the data is retrieved using an SP.
I am supposed to use an "incremental backup" approach here. This means that for the first scheduled run of the package, the whole database dump will be in the Excel file; after that, only rows that are updated/inserted are to be written to the Excel file. Each time a dump of the database is taken, the Excel file is stored in an archive folder with a date and timestamp (e.g. TEST_11_04_2007_15_00_00.xls).
E.g. for the first dump, I get 100 records from the database. Before the next execution of the package, 10 rows get inserted and 1 row gets updated. So on the second execution of the package, I should populate the Excel sheet with those 11 rows only, not 110 records.
I am not authorised to change the database schema. So, is there any approach to try out this?
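Since the schema cannot change, one option is to keep a snapshot copy of the data elsewhere and diff against it on each run; a sketch with hypothetical database, table, and column names:

    SELECT s.*
    FROM SourceDb.dbo.t AS s
    LEFT JOIN Work.dbo.t_snapshot AS p ON p.pk = s.pk
    WHERE p.pk IS NULL                                           -- new rows
       OR CHECKSUM(s.col1, s.col2) <> CHECKSUM(p.col1, p.col2);  -- changed rows

    -- refresh the snapshot for the next run:
    TRUNCATE TABLE Work.dbo.t_snapshot;
    INSERT INTO Work.dbo.t_snapshot SELECT * FROM SourceDb.dbo.t;

CHECKSUM can produce collisions, so compare the columns directly if an occasional missed change matters.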
Hi all. I want a SELECT query with an incremental column generated on the fly.
For example:

    Use pubs
    GO
    select * from jobs where job_desc like '%e%' Order by max_lvl

returns

    job_id  job_desc                      min_lvl  max_lvl
    1       New Hire - Job not specified  10       10
    12      Editor                        25       100
    13      Sales Representative          25       100
    ...
I want to add a Rank column here, numbering the rows in order:

    select RankOnTheFly, * from jobs where job_desc like '%e%' Order by max_lvl

Then the result will be:

    Rank  job_id  job_desc                      min_lvl  max_lvl
    1     1       New Hire - Job not specified  10       10
    2     12      Editor                        25       100
    3     13      Sales Representative          25       100
    ...
I can get the result using a cursor, looping through and inserting, or using the IDENTITY function. But I saw before that there is just one simple SELECT query that does it.
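On SQL Server 2005 and later, ROW_NUMBER produces the column inline in exactly one SELECT; a minimal sketch:

    select ROW_NUMBER() over (order by max_lvl) as RankOnTheFly, *
    from jobs
    where job_desc like '%e%'
    order by max_lvl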
Hi all. I have a table which has about 30 thousand records.
I want to send all the records every night to the other interface (TIBCO) in the form of XML.
I can send all the records at once, or maybe 10 records at a time.
1. If I make one XML document with all the records and send it, it's risky, as the XML would be quite big; it would also be time-consuming and would put a big load on the program (a .NET program will create the XML). Also, any abnormal error would kill the whole process and we would have to start again.
2. Processing 10 or 20 records at a time, sending the XML (with those 10 or 20 records), and repeating until all the records are sent. If there is an error partway through the .NET program, it can start from that step again. Now the problem is how I can keep track of the record numbers already sent to the other interface. Can someone help? Thanks
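A sketch of the bookkeeping for option 2, assuming the rows carry an ever-increasing key column id and using a hypothetical SendLog control table:

    DECLARE @last int;
    SELECT @last = last_sent_id FROM SendLog;

    SELECT TOP (20) *
    FROM t
    WHERE id > @last
    ORDER BY id;

    -- once TIBCO acknowledges the batch, advance the watermark to the
    -- highest id that was in it:
    -- UPDATE SendLog SET last_sent_id = @new_last;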
Hi; before I start, let me say I know this is a silly way to go about things, but it is one of those "my company made me do it" things. My company is in the process of (finishing) migrating from FoxPro to SQL Server. They have a FoxPro program that does a lot of various updates. I would like to write a T-SQL script to replace it. The trick is that the T-SQL script would have to have some procedural programming artifacts that I don't think it has. I/my company would like the script to generate output messages that the user can see while the script is running. Every time I have run a script in Query Analyzer, I usually don't see the output ("print" statements, etc.) until all of the work is done. Is there a way to run things so I can see the output statements as the work is being done? Can print statements work from within a T-SQL loop? The other thing is that I/my company want the T-SQL script to write output messages to a log file while it is operating (for error checking and other logging). Is this possible with T-SQL? Thanks in advance for the info. Steve
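On the first question: PRINT output is buffered, but RAISERROR with severity 0 and WITH NOWAIT flushes each message immediately, including from inside a loop; a minimal sketch:

    DECLARE @i int
    SET @i = 1
    WHILE @i <= 5
    BEGIN
        RAISERROR ('processing step %d', 0, 1, @i) WITH NOWAIT
        WAITFOR DELAY '00:00:01'   -- stand-in for the real work
        SET @i = @i + 1
    END

For the log file, one option is to run the script through osql or sqlcmd with output redirected to a file (osql -i script.sql -o run.log), which captures those messages as they appear.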
Is there a way in SSIS to go through a certain column in a table, collect the values of this column incrementally for each row, and finally insert the final value into a second field or parameter?
Is it possible to do this with no external code, only within SSIS tasks?
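If a T-SQL step inside the package (e.g. an Execute SQL Task) counts as "within SSIS", the accumulation is a single statement; a sketch with hypothetical table and column names (the windowed form needs SQL Server 2012+):

    -- final accumulated value, ready to assign to a parameter:
    SELECT SUM(amount) AS total FROM t;

    -- or a per-row running total:
    SELECT id, amount,
           SUM(amount) OVER (ORDER BY id ROWS UNBOUNDED PRECEDING) AS running_total
    FROM t;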
When Station = 'D_CC', Cycle_Index is supposed to start a new cycle. So my problem is finding the next 'D_CC' and storing the incremented cycle # in a New_Cycle column.
Updated: sorry to confuse you by changing the text color!
Case: when Station = 'D_CC', then Cycle_Index = Cycle_Index + 1, based on the prior cycle #, and continue to find the next 'D_CC' until the end, regardless of whatever lies between the prior D_CC and the next D_CC.
Bottom line: each time the next row with Station = 'D_CC' is found, the cycle # in the Cycle_Index column needs to be incremented by 1, and the new cycle # stored in the New_Cycle column as the actual cycle.
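Since every 'D_CC' row starts a cycle, a running count of those markers numbers the cycles; a sketch assuming SQL Server 2012+ and an ordering column seq (the table name t and the seq column are assumptions):

    SELECT *,
           SUM(CASE WHEN Station = 'D_CC' THEN 1 ELSE 0 END)
               OVER (ORDER BY seq ROWS UNBOUNDED PRECEDING) AS New_Cycle
    FROM t;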
1) We are doing a weekly full and a daily nightly incremental backup of the transaction log (TL) using Veritas Backup Manager to tape.
One day I took an incremental backup of the TL manually using Management Studio and then deleted the backup file.
Will I be able to restore completely if something happens the next day? Does the automated backup take care of the incremental backup from last night, rather than my manual interim backup?
What is the recommendation? If automated backup is enabled, should we avoid manual backups?
2) In the Full Recovery Model, if I do a full backup, does it back up the transaction log as well, or only the data files?
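On question 1: a regular manual log backup becomes part of the restore chain, so deleting that file does leave a gap. SQL Server 2005 and later offer COPY_ONLY for exactly this situation; a sketch with a hypothetical database name and path:

    BACKUP LOG MyDb
        TO DISK = 'C:\bak\interim.trn'
        WITH COPY_ONLY;   -- does not disturb the automated log chain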
I have a full-text index created on a table with a PK, a text column, and a timestamp column. The table has 10 million rows. I tried a one-time full population and the CPU spiked, so after a couple of hours I stopped the full population.
Now, since I have a timestamp column in the table, I want to do an incremental population.
But when I run

    SELECT * FROM sys.fulltext_indexes

the incremental_timestamp column shows the value 0x0000000000000000.
How do I find out how long the incremental population will take to complete?
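There is no built-in duration estimate, but the population can be started and its progress watched; a sketch (SQL Server 2005+; the table name is hypothetical). Note that if the initial full population never completed, which a zero incremental_timestamp suggests, requesting an incremental population results in a full one:

    -- kick off the incremental population:
    ALTER FULLTEXT INDEX ON dbo.BigTable START INCREMENTAL POPULATION;

    -- 0 = idle, 1 = full population, 2 = incremental population in progress:
    SELECT OBJECTPROPERTYEX(OBJECT_ID('dbo.BigTable'),
                            'TableFulltextPopulateStatus') AS populate_status;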