Getting Id Values From Both Source And Target Rows When Duplicating Records
Aug 27, 2007
Hi, I am copying records within a table. The source table and the target table are the same. I need the value of the id field from both the source row and the target row. Is there a way to do this with one query?
I tried the following, but it doesn't seem to work:
INSERT tableOne (value1, value2, value3)
OUTPUT source.id, inserted.id
SELECT value1, value2, value3 FROM tableOne AS source
WHERE ID = @number
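If you are on SQL Server 2008 or later, one possible workaround (a sketch, not tested against your schema) is a MERGE with an always-false match condition: the OUTPUT clause of MERGE can reference source columns, while the OUTPUT clause of a plain INSERT...SELECT cannot.

-- hedged sketch: every source row fails the match, so every row is inserted
MERGE tableOne AS target
USING (SELECT id, value1, value2, value3
       FROM tableOne
       WHERE id = @number) AS source
ON 1 = 0
WHEN NOT MATCHED THEN
    INSERT (value1, value2, value3)
    VALUES (source.value1, source.value2, source.value3)
OUTPUT source.id AS source_id, inserted.id AS new_id;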
I have a table that holds a number of offers made to an organization for placements at a lecture. What I'm wanting to do is have each of the rows for an organization repeated so that the names of the people attending can be put into the database.
The result I'm looking to get is something like this, where the name of the attendee would be entered in an application:

id | organization | lecture | nameofattendee
1  | orga         | lec1    | j. blog
2  | orga         | lec1    | s. smith
3  | orga         | lec1    | h. samual
4  | orga         | lec1    | j. sams
5  | orga         | lec1    | b.j. james
6  | orgb         | lec1    | m. curry
7  | orgb         | lec1    | k. murry
8  | orgb         | lec1    | g. hansen
In this situation do I need a proxy or forwarder at both ends to prevent connection issues? Are there plans to handle this in future SSSB upgrades? Thanks.
I need to create a bulk upload utility using ASP.NET and SQL Server. Below is the process for the uploads. There are two tables: a Temp table, which is a replica of the final table, and the final table itself.
1. An Excel template in which the user enters the details.
2. A tab-delimited output file is generated from the template using VBA.
3. Using File.OpenText(filePath).ReadLine(), all the rows from the tab-delimited data file are read into a DataTable.
4. Using SqlBulkCopy, the tab-delimited data is inserted into the Temp table.
5. The data is validated based on what was inserted into the Temp table. If the data has errors, the Temp table is cleared; otherwise the data is inserted from the Temp table into the final table.
My issue is that both tables have a column named PeopleKey (int, primary key). If the user enters an alphabetic value, the bulk utility fails. Below are the two options in my mind:
1. I can change the data type in the Temp table from INT to VARCHAR, so the data can be inserted first and then validated and corrected. But I am not sure whether that is the right way to fix the issue, since the source and target table columns would then have different types.
2. The data is inserted into the DataTable in step 3 of the process above, so I could validate it there instead. That way the source and target table data types stay the same.
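If you go with option 1, a minimal sketch of the promotion step (assuming PeopleKey is VARCHAR in the Temp table and INT in the final table; the table names here stand in for yours):

-- hedged sketch: only rows that are purely numeric are moved to the final table,
-- the rest stay behind in the Temp table for correction
INSERT INTO FinalTable (PeopleKey)              -- add the remaining columns as needed
SELECT CAST(t.PeopleKey AS INT)
FROM TempTable AS t
WHERE t.PeopleKey NOT LIKE '%[^0-9]%'           -- digits only
  AND LEN(t.PeopleKey) > 0;
-- on SQL Server 2012+ the filter can simply be: WHERE TRY_CONVERT(INT, t.PeopleKey) IS NOT NULL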
What could be simpler: map a flat file record structure, extract the data, and populate essentially the same flat file record structure into an Oracle table. Let the fun begin.
Specifically: the flat file record structure is fixed length, 196 bytes. A particular field consists of 4 bytes of integer data; IS deals very nicely with the definition, and there does not appear to be any issue with that. The issue is trying to get the 4 bytes of integer to map and load into the Oracle table. The data type in the flat file definition is DT_UI4. The data type in the Oracle target is DT_NUMERIC. One would think that perhaps a simple transform and voilà?! I've defined the transform, but it does not seem to matter - whatever I try yields the same results.
I've tried many different source/target data type definitions, but all yield the same results.
Execution Results from debug:
Everything validates and then...
[kcd [8671]] Error: Data conversion failed. The data conversion for column "load_time_min" returned status value 2 and status text "The value could not be converted because of a potential loss of data.".
[kcd [8671]] Error: The "output column "load_time_min" (11050)" failed because error code 0xC0209084 occurred, and the error row disposition on "output column "load_time_min" (11050)" specifies failure on error. An error occurred on the specified object of the specified component.
Can anyone help me out in getting the execution progress information of a package, such as "number of records migrated" or "which component is currently executing", when we migrate the data using a package that we have created and are executing programmatically? We can see this information in the "Progress" tab when we execute the package in BIDS in SSIS.
I am using DTS and VBScript in DataPump tasks in order to transfer large amounts of data from text files to an SQL database.
As the database uses a normalized schema, there is often the case of inserting multiple records in a destination table from various fields of the same record of the source text file.
For example, the source record contains information about goods sold (date, customer, item code, item name and total amount) for a maximum of 3 goods per sale (row), and therefore has the structure:
I have tried using a DataPump task and VBScript, and I guess it has to do with the DTSTransformStat_**** constants, but none of the ones I used seems to work.
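If the text file can first be landed in a staging table as-is, a set-based alternative is to split each wide row into up to three normalized rows. A sketch with hypothetical column names (CROSS APPLY with a VALUES constructor requires SQL Server 2008 or later):

-- hedged sketch; StagingSales and the Item1/Item2/Item3 columns are placeholder names
INSERT INTO SalesItems (SaleDate, Customer, ItemCode, ItemName, Amount)
SELECT s.SaleDate, s.Customer, x.ItemCode, x.ItemName, x.Amount
FROM StagingSales AS s
CROSS APPLY (VALUES
    (s.Item1Code, s.Item1Name, s.Item1Amount),
    (s.Item2Code, s.Item2Name, s.Item2Amount),
    (s.Item3Code, s.Item3Name, s.Item3Amount)
) AS x (ItemCode, ItemName, Amount)
WHERE x.ItemCode IS NOT NULL;                   -- skip the empty item slots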
I have an SSIS package that simply moves data from a SQL database A to another SQL database B. I have updated (increased) the size of an nvarchar column on both A and B. I am wondering if there is a way to "refresh" the SSIS package somehow so I don't have to rebuild and redeploy it. The error I get now is a truncation error: "Text was truncated or one or more characters had no match in the target code page".
Hi, I am less of a technical person and more of an analyst, and right now I am investigating various tools/options for a new conversion project I will be leading for an insurance client. One of the tools the client wants to use is SSIS, but the source and target databases are not on SQL Server; the plan is to build a staging SQL Server database for the transformation. Does SSIS support this kind of ETL process, where both the source and target systems are non-SQL Server?
I'm encountering a very peculiar situation when I'm trying to compare source and target data using conditional split. Following is the Data Flow and how I'm trying to achieve this.
Source Data:
Col_A (PK) | Col_B
1          | 100
8          | 500

Target Data:
Col_A (PK) | Col_B
1          | 100
3          | 700
8          | 500

I look up the target on Col_A to check for existing records. Now we have four columns in the Lookup match output: Col_A, Col_B, Lkp_Col_A (target column), Lkp_Col_B (target column).
Conditional Split: Compare Col_B with Lkp_Col_B
Update the target if there is any change in the existing value of Col_B. When I run the package for every record in the source, the conditional split fails, and even when there is no change in Col_B, some of the records (not all, and quite randomly) get updated with the same value. If I run the package for a few records, it works absolutely fine.
I am new to SSIS. Does anyone know how to verify the number of records that I load from a CSV file into a SQL database table?
For example, the source file is called product.csv and the target table is PRODUCT in a database named DSS. I load the data from the flat file into the table, and then I need a verification step: if the counts between source and target do not match, send an e-mail to me.
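A minimal sketch of the check in T-SQL, assuming the source row count is captured by the package (for example with a Row Count transform) and Database Mail is configured; the profile name and address below are placeholders:

-- hedged sketch: compare the captured source count against the table count and alert on mismatch
DECLARE @SourceCount INT;
DECLARE @TargetCount INT;
SET @SourceCount = 0;                           -- replace with the count captured by the package
SELECT @TargetCount = COUNT(*) FROM DSS.dbo.PRODUCT;
IF @SourceCount <> @TargetCount
    EXEC msdb.dbo.sp_send_dbmail
         @profile_name = 'MailProfile',         -- placeholder Database Mail profile
         @recipients   = 'me@example.com',      -- placeholder address
         @subject      = 'Row count mismatch loading PRODUCT',
         @body         = 'Source and target row counts do not match.';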
I am trying to exclude records that have an assessed value that has been waived in an aggregation. For Example:
Here is my table:
CREATE TABLE #temptable (
    ReportingMonth Varchar(6),
    Fee_Code Varchar(20),
    Fee_Transaction_Amount Decimal(12,2),
    Fee_Transaction_Date Datetime,
    Fee_Transaction_Type Char
)

INSERT INTO #temptable (ReportingMonth, Fee_Code, Fee_Transaction_Amount, Fee_Transaction_Date, Fee_Transaction_Type)
SELECT 'Jan-13', 'ONE TIME DRAFT FEE', '20', '01/24/2013', 'A' UNION ALL
SELECT 'Feb-13', 'LATE CHARGE', '33.6', '02/19/2013', 'A' UNION ALL
SELECT 'Mar-13', 'LATE CHARGE', '37.01', '03/18/2013', 'A'
[code]....
Here is the data mapping description:
Reporting Month = Month - Year
Fee Code = Fee Description Name
Fee Transaction Amount = Fee Amount
Fee Transaction Date = When Fee Amount was Applied
Fee Transaction Type = "A" = Assessed Fee; "W" = Waived Fee; "P" = Paid Fee
I've also included an image with the beginning data set, the rows I want to identify marked in red, and what my final data set should look like after those 4 records are excluded.
Here are the logic requirements: in the attachment, what I need the logic to do is essentially identify the $20 One Time Draft Fee from the first instance, using the MIN transaction date. Since $80 was waived for this fee code (ONE TIME DRAFT FEE), I would expect the first 4 assessed rows (highlighted in red) to be identified as the target, and as you can see in the attachment, the second data set has those 4 highlighted items removed. That should be my final output.
I'm trying to loop through and remove the waived amounts from the assessed amounts, and tie that back to remove the rows from my base data.
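A set-based sketch of that idea (the exact rules depend on the attachment, which isn't reproduced here, and the running-total syntax requires SQL Server 2012 or later): accumulate assessed amounts per fee code in date order and flag the leading rows whose running total is covered by the waived total.

-- hedged sketch: the rows returned here are the assessed rows to exclude
;WITH Waived AS (
    SELECT Fee_Code, SUM(Fee_Transaction_Amount) AS WaivedTotal
    FROM #temptable
    WHERE Fee_Transaction_Type = 'W'
    GROUP BY Fee_Code
),
Assessed AS (
    SELECT t.*,
           SUM(t.Fee_Transaction_Amount) OVER (PARTITION BY t.Fee_Code
                                               ORDER BY t.Fee_Transaction_Date
                                               ROWS UNBOUNDED PRECEDING) AS RunningAssessed
    FROM #temptable AS t
    WHERE t.Fee_Transaction_Type = 'A'
)
SELECT a.ReportingMonth, a.Fee_Code, a.Fee_Transaction_Amount, a.Fee_Transaction_Date
FROM Assessed AS a
JOIN Waived AS w ON w.Fee_Code = a.Fee_Code
WHERE a.RunningAssessed <= w.WaivedTotal;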
I need to update ilocationid from Table 1 on all Table 2 records related to Table 1, but there is no direct relation from Table 1 to Table 2; I need Table 3 to make the connection from Table 1 to Table 2.
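A sketch of an UPDATE that goes through the bridging table (the key column names below are hypothetical, since the actual relationships aren't shown):

-- hedged sketch; Table1ID / Table2ID are placeholder key names for the real relationships
UPDATE t2
SET t2.ilocationid = t1.ilocationid
FROM Table2 AS t2
JOIN Table3 AS t3 ON t3.Table2ID = t2.Table2ID
JOIN Table1 AS t1 ON t1.Table1ID = t3.Table1ID;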
declare tableName table
(
    uniqueid int identity(1,1),
    id int,
    starttime datetime2(0),
    endtime datetime2(0),
    parameter int
)
A stored procedure has a new set of values for a given id. Sometimes the starttime and endtime are the same, in which case I update the value of parameter. Sometimes I add a new time range (insert statement), and sometimes I delete a time range (delete statement).
I had a question on merge, with insert, delete and update and I got that resolved. However I have a different question regarding performance of the merge statement.
If my target table has hundreds of millions of records and I want to delete/update/insert a handful of records, will SQL server scan the entire target table? I can't have:
merge
(
    select * from tableName where id = 10
) as target
using ...
and I can't have:
merge tableName as target
using [my query] as source
    on source.id = target.id
    and source.starttime = target.starttime
    and source.endtime = target.endtime
where target.id = 10
...
This means I cannot filter the set of rows in the target table to a handful of records where id = 10.
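One approach that is often suggested for this (worth verifying on your version and checking the actual plan) is to make the target an updatable common table expression that pre-filters the big table; the MERGE then only considers the filtered rows, and a WHEN NOT MATCHED BY SOURCE clause only deletes within that scope. A sketch, assuming the new rows arrive in a table variable called @newValues:

-- hedged sketch: @newValues is a stand-in for however the procedure receives the new rows
;WITH target AS
(
    select id, starttime, endtime, parameter
    from tableName
    where id = 10
)
MERGE target
USING @newValues AS source
    ON  source.id        = target.id
    AND source.starttime = target.starttime
    AND source.endtime   = target.endtime
WHEN MATCHED THEN
    UPDATE SET parameter = source.parameter
WHEN NOT MATCHED BY TARGET THEN
    INSERT (id, starttime, endtime, parameter)
    VALUES (source.id, source.starttime, source.endtime, source.parameter)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;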
When exporting data from Excel to a SQL Server table using an SSIS package, after the export is done, how would I check that the source rows equal the destination rows, and if not, throw an error message?
How can we handle transactions in SSIS? For example, when some error happens during the export and the rows are not exported fully to the destination, how do we roll back the transaction in SSIS?
Running this code on my PC via VS 2005, .NET version 2.0.50727 on the server (shown in IIS). The code is in ASP.NET 2.0 and is a VB.NET console application, with SSIS 2005.
Problem & Info:
I am bringing in an Excel file. I need to first strip out any non-detail rows, such as the breaks you see with totals and what not. In the end I should have only detail rows left before I start moving them into my SQL table. I'm not sure how to strip this information out in SSIS, specifically which component to use and how to actually configure it to do this, based on my Excel file here: http://www.webfound.net/excelfile.xls
Then, I assume I just use a Flat File Source component or something to actually take the columns in the Excel file and push them through to an OLE DB destination, so each column goes into the corresponding column in my SQL Server table. I have used a Flat File Source in the past with a comma-delimited txt file, but never tried it with Excel.
Desired Help:
How to perform
1) stripping out all undesired rows
2) importing each column into the SQL table
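One hedged alternative (not the only way) is to land the whole sheet into a staging table via an Excel Source first, and then strip the non-detail rows in T-SQL before moving them on. The table and column names below are placeholders for whatever the real sheet contains:

-- hedged sketch: delete total/break rows after the sheet has been staged as-is
DELETE FROM StagingSheet                        -- placeholder staging table
WHERE DetailKeyColumn IS NULL                   -- placeholder: detail rows always carry this value
   OR DetailKeyColumn LIKE 'Total%';            -- placeholder: subtotal/total rows

INSERT INTO dbo.TargetTable (Col1, Col2)        -- placeholder target and columns
SELECT Col1, Col2
FROM StagingSheet;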
I've got a situation where the columns in a table we're grabbing from a source database keep changing as we need more information from that database. As new columns are added to the source table, I would like to dynamically look for those new columns and add them to our local database's schema if new ones exist. We're dropping and creating our target db table each time right now based on a pre-defined known schema, but what we really want is to drop and recreate it based on a dynamic schema, and then import all of the records from the source table to ours.

It looks like a starting point might be EXEC sp_columns_rowset 'tablename' and then creating some kind of dynamic SQL statement based on that. However, I'm hoping someone might have a resource that already handles this that they might be able to steer me towards.

Sincerely, Bryan Ax
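If the source table is reachable from the target server (for example through a linked server), one very simple sketch of the drop-and-recreate-with-whatever-columns-exist approach is SELECT ... INTO, which creates the table from the source's current column list; the linked-server path below is a placeholder:

-- hedged sketch: recreate the local copy with whatever columns the source currently has
IF OBJECT_ID('dbo.LocalCopy', 'U') IS NOT NULL
    DROP TABLE dbo.LocalCopy;

SELECT *
INTO dbo.LocalCopy
FROM SourceServer.SourceDb.dbo.SourceTable;     -- placeholder linked-server path

Note that SELECT ... INTO does not carry over constraints or indexes, so those would still need to be scripted separately.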
I want to update @Stop.UserField with the value from @UpdateSource where @UpdateSource.HasPathway = @Stop.UserField... but I need to use the @FieldDescription table to determine how to map the columns.
I have the following variables VehicleID, TransactDate, TransactTime, OdometerReading, TransactCity, TransactState.
VehicleID is the unique vehicle ID, OdometerReading is the Odometer Reading, and the others are information related to the transaction time and location of the fuel card (similar to a credit card).
The records will first be grouped and sorted by VehicleID, TransactDate, TransactTime and OdometerReading. Then all records where the VehicleID and TransactDate are the same for consecutive rows, AND the TransactCity or TransactState are different for consecutive rows, should be printed.
I also would like to add two derived variables.
1. Miles will be a derived variable that is the difference between consecutive odometer readings for the same Vehicle ID.
2. TimeDiff will be the second derived variable that will categorize the time difference for a particular vehicle on the same day.
My report should look like:
VehID | TrDt      | TrTime   | TimeDiff | Odometer | Miles | TrCity  | TrState
1296  | 1/30/2008 | 08:22:42 | 0:00:00  | 18301    | 000   | Omaha   | NE
1296  | 1/30/2008 | 15:22:46 | 7:00:04  | 18560    | 259   | KEARNEY | NE
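A sketch of the Miles calculation and the consecutive-row comparison in T-SQL (the table name is a placeholder; ROW_NUMBER needs SQL Server 2005 or later, and TimeDiff could be added the same way with DATEDIFF on the previous row's time):

-- hedged sketch: pair each transaction with the previous one for the same vehicle
;WITH Ordered AS
(
    SELECT VehicleID, TransactDate, TransactTime, OdometerReading, TransactCity, TransactState,
           ROW_NUMBER() OVER (PARTITION BY VehicleID
                              ORDER BY TransactDate, TransactTime, OdometerReading) AS rn
    FROM FuelTransactions                       -- placeholder table name
)
SELECT cur.VehicleID, cur.TransactDate, cur.TransactTime,
       cur.OdometerReading - prev.OdometerReading AS Miles,
       cur.TransactCity, cur.TransactState
FROM Ordered AS cur
JOIN Ordered AS prev
  ON  prev.VehicleID = cur.VehicleID
  AND prev.rn = cur.rn - 1
WHERE cur.TransactDate = prev.TransactDate
  AND (cur.TransactCity <> prev.TransactCity OR cur.TransactState <> prev.TransactState);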
I'm stuck. I have a table that I want to pull some info from that I don't know how to get.
There are two columns: one is the call_id column, which is not unique, and the other is the call_status column, which again is not unique. The call_status column can have several values: ('1 NEW','3 3RD RESPONDED','7 3RD RESOLVED','6 PENDING','3 SEC RESPONDED','7 SEC RESOLVED').
The call_id could be any number. I only want the 6 PENDING rows where there are other rows for that call_id which have either 3 3RD RESPONDED or 7 3RD RESOLVED. If someone knows how, it would be a great help.
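One way to express that, assuming the rows live in a single table (called Calls here as a placeholder):

-- hedged sketch: pending rows whose call_id also has a 3RD RESPONDED or 3RD RESOLVED row
SELECT p.*
FROM Calls AS p
WHERE p.call_status = '6 PENDING'
  AND EXISTS (SELECT 1
              FROM Calls AS o
              WHERE o.call_id = p.call_id
                AND o.call_status IN ('3 3RD RESPONDED', '7 3RD RESOLVED'));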
Ok. Here is what I need to do. I don't even know if it is possible. I have a production server and a db backup server. In a perfect world, I want to be able to place a copy of my databases (users and all) on the backup server and have it update (all changes) regularly.
I messed with DTS but it errors out because I don't have user accounts set up on the backup server. (I'm not entirely sure it does anything I want to do anyway)
I then use the XML source connection to connect to it. It sees all of the columns correctly, but when I run it and put a watch on it, or try to output the results to a .csv file, no records come through.
Any ideas on why there aren't any rows coming through? I'm using SQL 2005 with no SP1.
I like the "Script Table As..." function in SQL Server 2005, but I was wondering if there was a way to do this for the entire database. Like if I want to create the database and all the tables inside it, without having to copy each individual table's code. Know how to do this? Thankee.
I am developing an application that uses SQL Server 2000 for the back end. I am at the stage where some modules in the app can be tested while I finish development on some others. I run my own tests against SQL Server running on my own PC but for other people to test I have set up another server with SQL Server 2000 and have restored my database there.
My question is as follows: I would like any changes to my database (structure and data) to be replicated on the test server's database (not necessarily immediately, but without much delay). I've heard the buzz words (log shipping, replication, etc) but would like some advice on the best way to proceed. At the moment I don't need any data back from the test server and I don't particularly care if test data on that server is lost although these may become issues later on.
I have a pivot transform that pivots a batch type. After the pivot, each batch type has its own row, with null values for the other batch types that were pivoted. I want to group on two fields and MAX() the remaining batch types so that the multiple rows are collapsed onto one row. I tried using the Aggregate transform, but since the batch type field is a string, the MAX() function fails in the package. Is there another transform, or can I use the Aggregate transform another way, so that MAX() will work on a string?
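If the SSIS Aggregate transform won't do it on a string column, one hedged alternative is to land the pivoted rows in a staging table and collapse them in T-SQL, where MAX() does work on character columns; the names below are placeholders:

-- hedged sketch: collapse the one-row-per-batch-type output onto a single row per key
SELECT KeyCol1, KeyCol2,
       MAX(BatchTypeA) AS BatchTypeA,           -- MAX() ignores NULLs, so the single
       MAX(BatchTypeB) AS BatchTypeB            -- non-null value per group survives
FROM StagingPivot                               -- placeholder staging table
GROUP BY KeyCol1, KeyCol2;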