I need to create a fairly simple package that copies data from one database using a SQL query and inserts it into another. What's the best way to handle rows that already exist in the destination table? The package needs to run once a day and ignore rows inserted by previous runs.
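The direction I am leaning is a set-based INSERT that skips anything already in the destination, run from an Execute SQL Task once the new rows are staged somewhere the destination can see them. This is only a rough sketch; the table and column names are placeholders and it assumes a key column, Id, that identifies a row in both tables:

INSERT INTO dbo.DestTable (Id, Col1, Col2)          -- placeholder destination table
SELECT s.Id, s.Col1, s.Col2
FROM   dbo.StagingTable AS s                        -- rows copied from the source query
WHERE  NOT EXISTS (SELECT 1
                   FROM dbo.DestTable AS d
                   WHERE d.Id = s.Id);              -- ignore rows loaded by a previous run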
Type  Name  Value
x     M1    5
x     M2    10
x     M3    20
y     M1    10
y     M2    15
y     M3    30
Now I need to add four more rows to the table:
Type  Name   Value
x     M1     5
x     M2     10
x     M3     20
y     M1     10
y     M2     15
y     M3     35
z1    Total  15   (xM1 + xM2)
z1    Diff   5    (xM3 - (xM1 + xM2))
z2    Total  25   (yM1 + yM2)
z2    Diff   10   (yM3 - (yM1 + yM2))
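Here is a rough sketch of the kind of INSERT ... SELECT I am picturing. The table name dbo.Measures is hypothetical, and it assumes each Type only ever has the three rows M1, M2 and M3, with x mapping to z1 and y to z2:

INSERT INTO dbo.Measures (Type, Name, Value)
SELECT 'z' + CASE Type WHEN 'x' THEN '1' ELSE '2' END AS Type,
       'Total' AS Name,
       SUM(CASE WHEN Name IN ('M1', 'M2') THEN Value ELSE 0 END) AS Value
FROM dbo.Measures
WHERE Type IN ('x', 'y')
GROUP BY Type
UNION ALL
SELECT 'z' + CASE Type WHEN 'x' THEN '1' ELSE '2' END,
       'Diff',
       SUM(CASE WHEN Name = 'M3' THEN Value ELSE -Value END)   -- M3 - (M1 + M2)
FROM dbo.Measures
WHERE Type IN ('x', 'y')
GROUP BY Type;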
Updating existing rows in a SQL Server 2005 destination table
I have a CSV file and every row needs to perform an update on a production table.
The most straightforward approach is to use an OLE DB Command data flow transformation, which sends one SQL statement per incoming row. How do I configure this object?
That might be okay if you've only got a small number of updates; however, if you've got a large set, I prefer landing the set in a temporary table and then doing set-based updates in a following Execute SQL task. Can you please tell me how I can set up the Execute SQL Task to do set-based updates in the control flow?
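For the set-based step, this is the sort of statement I would expect to put in the Execute SQL Task, assuming the CSV has already been landed in a staging table (dbo.Staging_CSV here, a made-up name) that shares a key with the production table:

UPDATE p
SET    p.Col1 = s.Col1,                  -- repeat for each column to refresh
       p.Col2 = s.Col2
FROM   dbo.ProductionTable AS p          -- placeholder production table
JOIN   dbo.Staging_CSV     AS s          -- staging table loaded from the CSV
       ON s.KeyCol = p.KeyCol;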
I am trying to insert some new rows into an existing SQL table. The table name is Agt_table, and I want to add data for some new agents into the existing columns: agent name, agent code, phone number, fax number.
Example - I want to add the following record to my existing table Agt_table:
Agent name: ABC Company
Agent code: 012345
Phone #: 555-555-5555
Fax #: 555-555-5555
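Something along these lines is what I am aiming for; the column names AgentName, AgentCode, PhoneNumber and FaxNumber are only my guesses and would need to match the actual columns in Agt_table, and it assumes they are character columns (so the leading zero in the agent code is kept):

INSERT INTO Agt_table (AgentName, AgentCode, PhoneNumber, FaxNumber)
VALUES ('ABC Company', '012345', '555-555-5555', '555-555-5555');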
I am using a DTS package to output a view to a pre-determined Excel file. Currently it just adds the output to the bottom of the current table in Excel, but I would like it to delete the contents of the worksheet before adding the new rows.
I know that this is legal SQL: "SELECT 1 AS Blah". I want to do something like this, except I need to select multiple rows, each with a different value for Blah. The query needs to be legal to pass to the SqlCommand.ExecuteReader method. Is this possible?
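Something like this UNION ALL of constant selects is what I am after, if it is valid to hand to ExecuteReader:

SELECT 1 AS Blah
UNION ALL
SELECT 2
UNION ALL
SELECT 3;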
I need to add a datetime column to an existing table that has about 1.2 million records and is being accessed frequently, but I can't afford to stop the DB at all.
Whenever I do: alter table mytable add Updated_date datetime
it just takes too long and I have to stop executing the query after a couple of minutes. I am running SQL Express 2005 SP2. The DB size is over 3 GB but still under the 4 GB limit.
Can you please advise on how to add this column? It's urgent!
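For what it's worth, this is the two-step pattern I was considering: add the column as NULLable with no default (which should normally be a quick metadata-only change on 2005, unless another session is blocking the ALTER from taking its schema lock), and then backfill it in small batches so no single statement holds locks for long. The batch size is just an example value:

ALTER TABLE mytable ADD Updated_date datetime NULL;     -- nullable, no default

-- backfill in small batches to keep each lock short
WHILE EXISTS (SELECT 1 FROM mytable WHERE Updated_date IS NULL)
BEGIN
    UPDATE TOP (5000) mytable
    SET    Updated_date = GETDATE()
    WHERE  Updated_date IS NULL;
END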
The combination of the Student_Id, Subject_Id and Quarter columns is the primary key. One student can take one subject in a quarter. Now the new requirement is that a student can take multiple subjects in a quarter, so I need to add another table like the one below:
NEW table name: Student_Subject, with the columns below:
Student_id
Subject_Id
Quarter1
The combination of all three columns above is the primary key.
After the new table Student_Subject is created, remove the Subject_Id column from the Student table.
When the user clicks a button after selecting multiple subjects and provides col1 and col2 data, one row gets inserted into the Student table and multiple rows get inserted into the Student_Subject table.
Is there any other table design that satisfies one student can take multiple subjects in a quarter?
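To make the proposed design concrete, here is a minimal sketch of the new table; the data types are my assumptions and would need to match the existing Student table:

CREATE TABLE dbo.Student_Subject
(
    Student_id int     NOT NULL,    -- assumed data type
    Subject_Id int     NOT NULL,    -- assumed data type
    Quarter1   tinyint NOT NULL,    -- assumed data type for the quarter
    CONSTRAINT PK_Student_Subject
        PRIMARY KEY (Student_id, Subject_Id, Quarter1)
);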
I have a client that requires me to add a header line and a trailer line to a flat file. The trick is that the header and footer rows are required to have a length of 120 with 5 columns, while the data included in the query has a length of 1125 and about 70 columns. How can I append a header row to a file through SSIS?
The objective is to identify orders where an order fee has been applied incorrectly. I have multiple orders per customer, my table contains an orderID and a customerID. Currently if the customer places additional orders before the previous orders have been closed/cancelled, then additional fees are being applied.
Let's say I'm comparing order #1 to order #2. I need to identify the rows where the following is true:
The CustID is the same.
Order #2 has a more recent order date.
Order #2 has a FeeDate Before the CancelledDate of Order #1 (or Order #1 has no cancellation date).
So in the table, orderID 2835692 for CustID 24643 has a valid order fee. But all the subsequently placed orders have fees which were applied before the first order was cancelled, so I want to update the FeeInvalid column with a 'Y'. The first fee will always be valid.
I think I understand why the code I am trying doesn't achieve the result I want but I can't figure out how to write it correctly. Below is one example of code I've tried and also code to create the table and insert some test data.
update t1
SET FeeInvalid = 'Y'
FROM MockData t1
Join MockData t2 on t1.CustID = t2.CustID
WHERE t1.CustID = t2.CustID
  AND t2.OrderDate > t1.OrderDate
  AND t2.FeeDate > t1.CancelledDate

CREATE TABLE [dbo].[MockData](
    [OrderID] [float] NULL,
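For comparison, here is a sketch of the conditions exactly as I stated them above (flag the later order, and treat a missing cancellation date on the earlier order as "still open"). It reuses the column names from my attempt, but I have not verified it against the test data:

UPDATE t2
SET    FeeInvalid = 'Y'
FROM   MockData t1
JOIN   MockData t2
       ON  t2.CustID = t1.CustID
       AND t2.OrderDate > t1.OrderDate              -- order #2 is more recent
       AND (t2.FeeDate < t1.CancelledDate           -- fee applied before order #1 was cancelled
            OR t1.CancelledDate IS NULL);           -- or order #1 was never cancelled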
I need to have a script that asks the user for a value and searches for all records that match the value. Then it will display the number of records found and ask the user to enter a different value. The rest of the script will use this new value and increment it by 1 n times, where n is the number of records found. I started the script where it asks for "HANDLE" and displays the number of records found with that "HANDLE".
declare @HANDLE as varchar(30)
declare @COUNT as varchar(10)
declare @STARTINV as varchar(20)

set @HANDLE = ?C    -- This is the parameter to search for records with this value
set @STARTINV = ?C  -- User will input the starting invoice number

SELECT COUNT(*) as OrderCount
FROM SHIPHIST
where HANDLE = @HANDLE
I just can't figure out how to proceed to use the entered invoice # and increment it by 1 until it reaches the number of records found.
This would be the end result:

Count = 5          -- result from the query
STARTINV = 00010   -- value entered by the user
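Here is the sort of loop I am trying to get to. It stores the count in an int variable and pads the generated invoice numbers back out to five digits, which is an assumption based on the 00010 example:

declare @i int
declare @ORDERCOUNT int

SELECT @ORDERCOUNT = COUNT(*) FROM SHIPHIST where HANDLE = @HANDLE

set @i = 0
WHILE @i < @ORDERCOUNT
BEGIN
    -- next invoice number, padded to 5 digits (assumed width)
    SELECT RIGHT('00000' + CAST(CAST(@STARTINV AS int) + @i AS varchar(20)), 5) AS NextInvoice
    set @i = @i + 1
END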
I'm using Service Broker and keep getting errors in the log even though everything is working as expected.
SQL Server 2005
Two databases
Two endpoints - one in each database
Two stored procedures:
SP1 is activated when a message enters the sending queue; it inserts a new row in a table.
SP2 is activated when a response is sent from the receiving queue; it cleans up the sending queue.
I have a table with an update trigger. In that trigger, if the updated row meets a certain condition, a dialog is created and a message is sent to the sending queue. I know that SP1 and SP2 are behaving properly because I get the expected results: SP1 is inserting the expected data in the table and SP2 is cleaning up the sending queue.
In the Sql Server log however i'm getting errors on both of the stored procs. error #1 The activated proc <SP 1 Name> running on queue Applications.dbo.ffreceiverQueue output the following: 'The conversation handle is missing. Specify a conversation handle.'
error #2 The activated proc <SP 2 Name> running on queue ADAPT_APP.dbo.ffsenderQueue output the following: 'The conversation handle is missing. Specify a conversation handle.'
I would appreciate anybody's help with why I'm getting this. Have I set up the stored procs incorrectly?
I can provide the code of the stored procs if that helps.
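For reference, the shape I understood an activated proc should have is roughly the following (using the sender queue name from above only as an example). The key part is bailing out when RECEIVE returns nothing, because as far as I can tell, calling SEND or END CONVERSATION with a NULL handle produces exactly the "conversation handle is missing" message:

-- sketch of an activated proc body; queue name is just an example
DECLARE @handle  uniqueidentifier
DECLARE @msgtype sysname

WAITFOR (
    RECEIVE TOP (1)
        @handle  = conversation_handle,
        @msgtype = message_type_name
    FROM dbo.ffsenderQueue
), TIMEOUT 1000

IF @handle IS NULL
    RETURN      -- queue was empty: skip SEND / END CONVERSATION so we never pass a NULL handle

IF @msgtype = 'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
    END CONVERSATION @handle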
I have two tables (both containing the same column names/datatypes), say table1 and table2. table1 is the most recent, but some rows were deleted by accident. table2 is a backup that has all the data we need, but some of it is old. What I want to do is overwrite the rows in table2 that also exist in table1 with the table1 rows, but leave as-is the rows in table2 that do not exist in table1. Both tables have a primary key, user_id.
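An update joined on the primary key is what I had in mind, roughly like this; col1 and col2 stand in for the real column list:

UPDATE t2
SET    t2.col1 = t1.col1,                 -- repeat for each real column
       t2.col2 = t1.col2
FROM   table2 t2
JOIN   table1 t1
       ON t1.user_id = t2.user_id;        -- rows that exist only in table2 are untouched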
I went to look at the connection string previously entered for a dataset created in a new report, and I am not seeing anything intuitive for bringing up the associated data source dialog box that was used to enter the name, type and connection string. I'm also noticing nothing intuitive for deleting an existing dataset. How do you do these two very simple things in an existing project? I don't see the dataset in Solution Explorer; I see it only in the text box on the Data tab, and in a limited kind of way in the dataset view, where the columns show and maintenance is allowed mostly on the columns only. I tried highlighting the dataset there and hitting the Delete key, to no avail.
I would like to restore a database using the RESTORE DATABASE ... REPLACE command. If the database already exists and has any open connections, this command will fail. I would like to close all existing connections to the specific database before running the RESTORE DATABASE ... REPLACE command. I can do the closing from Management Studio using the "Close existing connections" checkbox when deleting a database. I need to do the same, but from a script.
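The approach I am considering is to flip the database to SINGLE_USER first, which rolls back and disconnects other sessions, then restore and switch back. MyDatabase and the backup path are placeholders:

ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;   -- drops existing connections

RESTORE DATABASE MyDatabase
FROM DISK = N'C:\Backups\MyDatabase.bak'                             -- placeholder path
WITH REPLACE;

ALTER DATABASE MyDatabase SET MULTI_USER;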
We have implemented our Service Broker architecture using conversation handle reuse per MS/Remus's recommendations. We have all of a sudden started receiving the "conversation handle not found" errors in the SQL log every hour or so (which makes perfect sense considering the dialog timer is set for 1 hour). My question is: is this expected behavior when you have employed conversation recycling? Should you expect to see these messages pop up every hour, given that the logic in the queuing proc says to retry after deleting from your conversation handle table, so the message is enqueued as expected?
Second question: I think I know why we were not receiving these errors before and wanted to confirm this theory as well. In the queuing proc I was not initializing the variable @Counter to 0, so when it came down to the retry logic it could not add 1 to NULL and was never entering that part of the code. I am guessing that with this setup it would actually output the error to the application calling the queuing proc and NOT into the SQL error logs. Is that a correct assumption?
I have attached an example of one of the queuing procs below:
DECLARE @conversationHandle UNIQUEIDENTIFIER,
        @err int,
        @counter int,
        @DialogTimeOut int,
        @Message nvarchar(max),
        @SendType int,
        @ConversationID uniqueidentifier

select @Counter = 0 -- THIS PART VERY IMPORTANT LOL :)

select @DialogTimeOut = Value
from dbo.tConfiguration with (nolock)
where keyvalue = 'ConversationEndpoints' and subvalue = 'DeleteAfterSec'

WHILE (1=1)
BEGIN
    -- Lookup the current SPID's handle
    SELECT @conversationHandle = [handle]
    FROM tConversationSPID with (nolock)
    WHERE spid = @@SPID and messagetype = 'TestQueueMsg';

    IF @conversationHandle IS NULL
    BEGIN
        BEGIN DIALOG CONVERSATION @conversationHandle
            FROM SERVICE [InitiatorQueue_SER]
            TO SERVICE 'ReceiveTestQueue_SER'
            ON CONTRACT [TestQueueMsg_CON]
            WITH ENCRYPTION = OFF;

        BEGIN CONVERSATION TIMER ( @conversationHandle ) TIMEOUT = @DialogTimeOut

        -- insert the conversation in the association table
        INSERT INTO tConversationSPID ([spid], MessageType, [handle])
        VALUES (@@SPID, 'TestQueueMsg', @conversationHandle);

        SEND ON CONVERSATION @conversationHandle MESSAGE TYPE [TestQueueMsg] (@Message)
    END
    ELSE IF @conversationHandle IS NOT NULL
    BEGIN
        SEND ON CONVERSATION @conversationHandle MESSAGE TYPE [TestQueueMsg] (@Message)
    END

    SELECT @err = @@ERROR;

    -- if succeeded, exit the loop now
    IF (@err = 0)
        BREAK;

    SELECT @counter = @counter + 1;

    IF @counter > 10
    BEGIN
        -- Refer to http://msdn2.microsoft.com/en-us/library/ms164086.aspx for severity levels
        EXEC spLogMessageQueue 20002, 8, 'Failed to SEND on a conversation for more than 10 times. Error %i.'
        BREAK;
    END

    -- We tried on the said conversation, but failed --
    -- remove the record from the association table, then
    -- let the loop try again
    DELETE FROM tConversationSPID WHERE [spid] = @@SPID;
    SELECT @conversationHandle = NULL;
END;
SET @VRUSERVICEMIN = 46
SET @BILLEDFLAT = 0

-- PROCESS ONLY RECORDS WHERE THERE IS NO FLAT FEE
IF @BILLEDFLAT = 0
BEGIN
    IF @VRUSERVICEMIN BETWEEN 0 AND 15
        BEGIN SET @VRUSERVICEMIN = .25 END
    ELSE IF @VRUSERVICEMIN = 15
        BEGIN SET @VRUSERVICEMIN = .25 END
    ELSE IF @VRUSERVICEMIN BETWEEN 15 AND 30
        BEGIN SET @VRUSERVICEMIN = .5 END
    ELSE IF @VRUSERVICEMIN = 30
        BEGIN SET @VRUSERVICEMIN = .5 END
    ELSE IF @VRUSERVICEMIN BETWEEN 30 AND 45
        BEGIN SET @VRUSERVICEMIN = .75 END
    ELSE IF @VRUSERVICEMIN = 45
        BEGIN SET @VRUSERVICEMIN = .75 END
    ELSE IF @VRUSERVICEMIN > 45
        BEGIN SET @VRUSERVICEMIN = 1 END
END
Is it possible to catch an error and then keep the process going in a stored procedure? So if an update encounters a primary key violation on a row, is it possible to skip that row and keep the process going?
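One way I can picture doing this on SQL Server 2005 is to process the rows one at a time and wrap each statement in TRY/CATCH, so a failing row is simply skipped. This is only a rough sketch with made-up table names:

DECLARE @Id int

DECLARE rowcur CURSOR LOCAL FAST_FORWARD FOR
    SELECT Id FROM dbo.SourceRows              -- made-up source table

OPEN rowcur
FETCH NEXT FROM rowcur INTO @Id

WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        INSERT INTO dbo.TargetRows (Id)        -- made-up target table with a PK on Id
        SELECT Id FROM dbo.SourceRows WHERE Id = @Id
    END TRY
    BEGIN CATCH
        -- primary key violation (or any other error) on this row: skip it and carry on
    END CATCH

    FETCH NEXT FROM rowcur INTO @Id
END

CLOSE rowcur
DEALLOCATE rowcur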
Hi! I have a try..catch block trying to insert some data into the database. During its action a duplicate-key insert error could be raised, for example. The question is: how can I distinguish it from other SQL errors? The exception object (Catch ex As Exception) has only a message property, '{"Cannot insert duplicate key row in object 'dbo.Group_Courses' with unique index 'IX_Group_Courses'. The statement has been terminated."}', and the type System.Data.SqlClient.SqlException. Knowing the type of error is not enough, because there are different SqlExceptions. Even the message is not unique for this error, because right now I am dealing with 'dbo.Group_Courses' and later it could be another table. Is there something that uniquely identifies each error, for example an error code? If it exists, where can I get it? Thanks in advance!
I tested the Now function and it is getting the right date. When I try to send that to the SQL database I have, it turns it into 1/1/1900. Does anyone know why this is happening? I have tried everything. Here is my code:
Hi, I would like to handle a SQL error in T-SQL and return a certain value in case an error occurs. For example, when I add a record I want to return a certain identity value, or maybe a status for the transaction (0 for incomplete, 1 for successful). If an error occurs in SQL I cannot return any values back to ASP.NET because of the error. What I am doing at the moment is catching the error in ASP.NET and then displaying an error message. Is there a way to return only a return value to ASP.NET and somehow handle the error in T-SQL? Thanks.
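Something like this is what I am picturing, assuming SQL Server 2005 TRY/CATCH is available; the procedure and table names are invented for the sketch:

CREATE PROCEDURE dbo.AddRecord          -- invented name
    @Name   varchar(50),
    @NewId  int OUTPUT,
    @Status bit OUTPUT                  -- 0 = incomplete, 1 = successful
AS
BEGIN
    BEGIN TRY
        INSERT INTO dbo.MyTable (Name) VALUES (@Name);   -- invented table
        SET @NewId  = SCOPE_IDENTITY();
        SET @Status = 1;
    END TRY
    BEGIN CATCH
        SET @NewId  = 0;
        SET @Status = 0;                -- error is absorbed here; ASP.NET only sees the status
    END CATCH
END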
I don't know what's wrong with my SQL 2000 setup. The problem is: whenever I execute a query with a date/time such as 02/02/200, SQL gives me an error message saying that the date is "Out of Range".
I'm having trouble with something and I was hoping that you could point me in the right direction.
Here's the scenario: I have a VB application that clients use to add records to a SQL 2K DB. The info they have added that day is shown in a grid. They have the ability to edit items in the grid, and then update those changes to the database. The problem is that sometimes they change the values to something they shouldn't. To combat this I've started experimenting with check constraints. In Query Analyzer I test the constraint by trying to update an entry to an 'illegal' value. When I do this, I get an error saying "Server: Msg 547, Level 16, State 1, Line 1" and the change is not made. What I'd like to do is give the user a dialog box notifying him of the error. Is there a way to have a sub-routine or stored procedure be triggered when a message of this type is generated?
My specs are: W2K Pro clients, SQL 2K on an NT4 SP6a server. The application is written in VB6.
I have a select statement that works, but I know there has to be a better way (I apologize for being T-SQL brain-dead today). Here is the statement:
SELECT PATIENT_ACPT_STATUS, DISP_NOTES, RECORD_ID, modified_by, last_name, ddate
FROM PATIENT_MEDICATION_DISPERSAL_
WHERE (ddate = convert(char(10), getdate(), 101) and MODIFIED_BY = CURRENT_USER and rec_status = 1)
   OR (ddate = convert(char(10), getdate(), 101) and PATIENT_ACPT_STATUS = 1)
   OR (ddate = convert(char(10), getdate(), 101) and MODIFIED_BY = 'open')
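One tidier form I can think of simply factors out the shared date condition; it should be logically equivalent to the statement above:

SELECT PATIENT_ACPT_STATUS, DISP_NOTES, RECORD_ID, modified_by, last_name, ddate
FROM PATIENT_MEDICATION_DISPERSAL_
WHERE ddate = convert(char(10), getdate(), 101)
  AND (   (MODIFIED_BY = CURRENT_USER AND rec_status = 1)
       OR PATIENT_ACPT_STATUS = 1
       OR MODIFIED_BY = 'open')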
I have a need, when an update, insert or delete is done to a record in DB "A", to send the appropriate UID to a different table in a different DB "B".
My first thought was to have a trigger on the table in DB "A" simply call a stored procedure on DB "B" and do the UID.
However, my question is: what is the best approach, and what's the best way to establish the connection to DB "B" for the UID from within DB "A"? We can't use linked servers; a DSN-less string would be the preferred way to connect. I'm not sure how to execute that within a trigger.
Is even using a trigger to call a stored proc the best way?
What about transactional replication, which I've never attempted? Is that a better way?
Just looking for some guidance here so I don't needlessly burn time down a path that isn't recommended or simply won't work.
I build a local cube from a relational database. In the database there are 1:n relations. Is there a way to handle 1:n relations? For example: I have a table LOGGEDFLAW and a table LOGGEDREASON with a 1:n relation between them. We create a select statement over these tables, and as a result we get duplicate records of LOGGEDFLAW whenever more than one record of LOGGEDREASON is associated with one record of LOGGEDFLAW - this is the standard result of a relational JOIN operation. Now I want to count the LOGGEDFLAWs without the duplicates generated by the 1:n relationship.
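On the relational side, the sort of distinct count I mean looks like this; the key and join column names are only guesses:

SELECT COUNT(DISTINCT f.LOGGEDFLAW_ID) AS FlawCount   -- guessed key column
FROM   LOGGEDFLAW   f
JOIN   LOGGEDREASON r
       ON r.LOGGEDFLAW_ID = f.LOGGEDFLAW_ID           -- guessed join column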
Please help me figure out how to implement locking in the scenario below.
Requirement -
There are two tables, Table1 and Table2. If I insert into Table1, the related data fields are automatically updated in Table2; similarly, based on the data in Table2, Table1 data needs to be updated.
Right now the sync of Table1 and Table2 is working fine.
My problem is that we are handling the updates/inserts from UI screens, with a separate screen for each table. When we have multiple users accessing the screens - say User1 updates Table1 and User2 updates Table2 - we need to implement locking so that at any one time only one screen is allowed to update Table1 and hence Table2. The other screen shouldn't be allowed to update Table2 and hence Table1.
This is very common locking functionality, but I am not finding any way to implement it. Please advise.
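One idea I am weighing is to serialize the two screens on a shared application lock with sp_getapplock, roughly as below; the resource name is arbitrary, and both screens' save paths would have to use the same one:

BEGIN TRAN

DECLARE @rc int
EXEC @rc = sp_getapplock
        @Resource    = 'Table1_Table2_Sync',   -- arbitrary shared resource name
        @LockMode    = 'Exclusive',
        @LockOwner   = 'Transaction',
        @LockTimeout = 5000                    -- give up after 5 seconds

IF @rc < 0
BEGIN
    ROLLBACK TRAN                              -- the other screen currently holds the lock
    RETURN
END

-- ... perform the Table1 / Table2 updates here ...

COMMIT TRAN                                    -- lock is released with the transaction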
I have a friend who is doing a voting application for one of his customers, and they are concerned about the volume that SQL Server can handle. He's looking at a single SQL Server 2005 box with plenty of HD space and 4 GB of memory. The app will look to see if you voted and then insert a record accordingly.
Are there any papers out there or apps that can show the amount a server can handle?
I already asked this question; however, I am giving all the details now:

We get large files (millions of records) and we need to load them into our tables using the import/export wizard. Some of the fields in the file can be Null, and so we are forced to create tables with fields that allow Nulls with a default of ''. However, when we insert data into these tables it puts Null in those fields even though we have a default '' (I do not think we have any workaround for that; do we?)

Finally we need to go through each field and update it to '' if it is a Null, and that takes A LOT OF TIME.

If (select count(*) from <tablename> where <columnname> is Null) > 0
Begin
    Update <tablename>
    set <columnName> = ''
    where <columnName> is Null
end

Please let me know if there are any workarounds for this crisis. Thank you very much in advance!
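One thing I may try is to drop the per-column IF/COUNT check (the UPDATE simply affects zero rows if nothing is NULL) and clean several columns in a single pass over the table, along these lines; col1 and col2 are placeholders for the real column names:

UPDATE <tablename>
SET    col1 = ISNULL(col1, ''),
       col2 = ISNULL(col2, '')       -- repeat for each nullable column
WHERE  col1 IS NULL
   OR  col2 IS NULL;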
I need to know if there is a better way to construct this SQL statement. (Error handling is omitted.)

MS SQL Server 2000

Insert into FSSUTmp
Select a.acct_no, a.ac_nm, a.ac_type, 10, -1,
    -1 * sum(CHARINDEX(convert(char(4), b.post_yr), @post_yr) * CHARINDEX('-', CONVERT(char(2), b.post_prd - @post_prd - 1)) * b.prd_trn_amt),
    -1 * sum(CHARINDEX(convert(char(4), b.post_yr), @post_yr) * CHARINDEX('0', CONVERT(char(1), b.post_prd)) * b.prd_trn_amt)
FROM GLAccounts a, GLBalances b
WHERE b.cmpny_cd = a.cmpny_cd
  AND b.acct_no = a.acct_no
  AND a.cmpny_cd = @cmpny_cd
  AND ac_ctrl_type between '200' and '219'
Group by a.acct_no, a.ac_nm, a.ac_type

The part I'm wondering about is the two SUM sections.

The GLBalances table has the following important fields:
Post_yr -- the posting year
Post_prd -- the posting period
Prd_trn_amt -- the beginning balance if the period is 0, or the net transactions for periods 1 through 12

The first sum gives the current balance as of period @post_prd by adding all of the periods from 0 to @post_prd. The second sum is just the beginning balance. It is doing a conditional sum by using CHARINDEX to be 0 if the record should not be added and 1 if it should.

There is a problem as it stands when you are looking for the balances when @post_prd is 9 or greater, because "b.post_prd - @post_prd - 1" will be -10 or smaller. Then the CONVERT(char(2), ...) is an error, so CHARINDEX is 0 when it needs to be 1.

I can fix that by using SIGN and it will work fine. What I want to know is: is there a better way to populate the table, where one of the values is a conditional sum?

This is a STORED PROCEDURE from a commercial product, so I can't change anything else other than the STORED PROCEDURE.
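A cleaner alternative I am considering is to express the two conditional sums with CASE instead of the CHARINDEX trick, which also avoids the @post_prd >= 9 problem. This assumes the intent is exactly what I described above (periods 0 through @post_prd for the balance, period 0 alone for the beginning balance) and that the year comparison can be done directly:

Insert into FSSUTmp
Select a.acct_no, a.ac_nm, a.ac_type, 10, -1,
    -1 * sum(CASE WHEN b.post_yr = @post_yr AND b.post_prd <= @post_prd
                  THEN b.prd_trn_amt ELSE 0 END),        -- balance through @post_prd
    -1 * sum(CASE WHEN b.post_yr = @post_yr AND b.post_prd = 0
                  THEN b.prd_trn_amt ELSE 0 END)         -- beginning balance only
FROM GLAccounts a, GLBalances b
WHERE b.cmpny_cd = a.cmpny_cd
  AND b.acct_no = a.acct_no
  AND a.cmpny_cd = @cmpny_cd
  AND ac_ctrl_type between '200' and '219'
Group by a.acct_no, a.ac_nm, a.ac_type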
Hi, what exactly does the above (subject) error mean? I'm getting it from an .adp file when it is used by a few people at the same time (each user has the file in their own filespace, though). Access is through Windows authentication, and it only seemed to occur during an update of a specific table. The problem is it didn't happen to everyone, and I can't recreate it at all on my own, so I am wondering if it was something to do with the level of traffic to/from the server at that specific time. Any clues? Cheers, Chris