Using Conditional Split To Update Data
Jun 21, 2007
Hi all,
I am using SSIS to update data in tables based on other tables, as follows:
connect to the data source --> use a Lookup to locate the key fields across the two tables --> on lookup failure, insert the row into the second table; if the record already exists, update it instead.
My problem is performance:
-- source table row count: almost 60 million
-- destination table: almost 3 million
The package has been executing for 2 days and is still running; so far it has updated 122,000 rows and inserted 4 million.
I kindly need support.
Thanks & regards
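For reference, when this lookup/update pattern crawls, the row-by-row update side is usually the bottleneck. A set-based equivalent, run after landing the source rows in a staging table, looks like this sketch (staging/destination names and key columns are hypothetical):
-- Update rows that already exist in the destination.
UPDATE d
SET d.SomeCol = s.SomeCol
FROM dbo.Destination d
JOIN dbo.Staging s ON s.KeyCol = d.KeyCol
-- Insert rows that do not exist yet.
INSERT INTO dbo.Destination (KeyCol, SomeCol)
SELECT s.KeyCol, s.SomeCol
FROM dbo.Staging s
LEFT JOIN dbo.Destination d ON d.KeyCol = s.KeyCol
WHERE d.KeyCol IS NULL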
View 10 Replies
Oct 13, 2006
I have a variable called UpdateIDs.
I would like to create a conditional split on IDs that have no responses and IDs that have responses.
The idea is: there is a whole bunch of tables that can be updated if the foreign key is among the no-response IDs.
So I have created a data flow. The conditional split is there, but there is no "Update Global Variable" destination.
Is there a way to achieve this?
Basically this is the logic that I am trying to use:
Data flow --> store IDs into a global variable --> data flow updates/inserts tables and rows where the ID is in the global variable.
Is there another way this can be done?
Sorry I am very new to this, and would appreciate any help
Thanks
Andrew
View 2 Replies
View Related
Aug 28, 2007
Hi
I am using a Conditional Split to check whether a record exists: if it does, I update; otherwise, I insert. But this causes database deadlocks. Does anyone have a suggestion?
Thanks
View 7 Replies
View Related
Jun 15, 2007
Could I ask how to split the data into training and validation sets when doing data mining?
Thanks
View 1 Replies
View Related
Mar 21, 2007
My company uses SQL Server 2005 Standard. Since we deal with huge data volumes, how can we split the data (by date range, yearly or monthly) to ease transactions? Splitting with ad hoc queries is simple for us, but how can we set up a split that an operator (with non-admin privileges) can easily execute? Is there another way?
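One pattern that works on Standard edition (where the 2005 table-partitioning feature is not available) is a partitioned view over per-period tables; operators can then query or load through the view without admin rights on the underlying tables. A minimal sketch with hypothetical names:
CREATE TABLE Sales2006 (
    SaleID INT NOT NULL,
    SaleDate DATETIME NOT NULL
        CHECK (SaleDate >= '20060101' AND SaleDate < '20070101'),
    Amount MONEY NOT NULL,
    CONSTRAINT PK_Sales2006 PRIMARY KEY (SaleID, SaleDate)
)
CREATE TABLE Sales2007 (
    SaleID INT NOT NULL,
    SaleDate DATETIME NOT NULL
        CHECK (SaleDate >= '20070101' AND SaleDate < '20080101'),
    Amount MONEY NOT NULL,
    CONSTRAINT PK_Sales2007 PRIMARY KEY (SaleID, SaleDate)
)
GO
-- The CHECK constraints let the optimizer route queries to the right table.
CREATE VIEW dbo.AllSales AS
SELECT SaleID, SaleDate, Amount FROM Sales2006
UNION ALL
SELECT SaleID, SaleDate, Amount FROM Sales2007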
View 1 Replies
View Related
Mar 24, 2006
Hi all,
This is my first question to this forum.
Here is my situation:
I am doing report testing, and I need to test a report for which I have to write a query. I am using Query Analyzer to run the query.
Database: SQL Server
Table name: job_allocations
Column name: technician_code
Based on the technician code in the job_allocations table, I need to get the technician cost from another table for that particular technician.
The column is updated based on the technician code(s) the user chooses:
if a single technician, the data will be TC01
if more than one, the data will be TC01:TC02:TC03
The user can choose any number of technicians for a job.
My problem is: how do I split the value when there are multiple technicians and calculate the cost for the job?
I need it in a single query execution.
Table structure
job_allocations table
jobcardno_fk Technician_code
jc01 TC01
jc02 TC01:TC02:TC03......
I need it as
jobcardno_fk Technician_code
jc01 TC01
jc02 TC01
jc02 TC02
jc02 TC03
Thanks and regards,
Diwakar.R
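A sketch of one way to do the split in a single SQL Server 2000 query, using an auxiliary Numbers table of integers 1..n (the Numbers table, and the cost table you would join afterwards, are assumptions, not objects from the post):
-- Split each ':'-delimited code into its own row.
SELECT ja.jobcardno_fk,
       SUBSTRING(ja.Technician_code, n.n,
                 CHARINDEX(':', ja.Technician_code + ':', n.n) - n.n) AS Technician_code
FROM job_allocations ja
JOIN Numbers n
  ON n.n <= LEN(ja.Technician_code)
 AND SUBSTRING(':' + ja.Technician_code, n.n, 1) = ':'
Joining that result to the technician cost table and grouping by jobcardno_fk with SUM then gives the cost per job in the same statement.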
View 1 Replies
View Related
Aug 13, 2007
I'm trying to figure out how in a Data Flow Transform I can split some data.
I have data coming through that has a PK (col1) and datetime col (col2).
This data may contain multiple rows for the PK, col1.
I want to take the minimum datetime for each value of col1 and send those rows down one path, and all other rows down another path. E.g., there are 20 rows coming through the Data Flow: col1 has value 1 for the first 10 rows and value 2 for the remaining 10, and col2 differs on every row. Take the row with the minimum col2 where col1 = 1, and the row with the minimum col2 where col1 = 2, and send them down one path; send all other rows down the other path.
I thought of using the Conditional Split transform, but my expression knowledge isn't experienced enough (I'm looking into this) and I'm not sure it can be done.
I also thought of using the Multicast transform and then the Aggregate transform on each data set. I can do this easily for the first data flow,
SELECT col1, MIN(col2)
FROM tbl1
GROUP BY col1
which is easily done in the Aggregate transform, but I can't see how the other side of the data flow can be done using the Aggregate transform:
SELECT col1
FROM tbl1
WHERE col2 NOT IN (SELECT MIN(col2) FROM tbl1 GROUP BY col1).
Is either method feasible, or is there another way?
I want to avoid putting this data into temp tables in a SQL database and manipulating the data from there. The data has been extracted from a flat file source.
Any help and ideas welcome.
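For what it's worth, if staging the flat-file rows were acceptable, both queries above collapse into a single pass with ROW_NUMBER (SQL Server 2005); a sketch:
SELECT col1, col2,
       ROW_NUMBER() OVER (PARTITION BY col1 ORDER BY col2) AS rn
FROM tbl1
-- rn = 1 is the MIN(col2) row per col1 value (first path);
-- rn > 1 is every other row (second path).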
View 13 Replies
View Related
Jun 22, 2007
I have a query that returns a table similar to:
State Status Count
CA Complete 10
CA Incomplete 200
NC Complete 20
NC Incomplete 205
SC Incomplete 50
What sort of query will allow me to reformat the table into:
State Complete Incomplete
CA 10 200
NC 20 205
SC NULL 50
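A conditional aggregate does this reshaping (PIVOT is an alternative on SQL Server 2005+); a sketch, with the source table name assumed:
SELECT State,
       SUM(CASE WHEN Status = 'Complete'   THEN [Count] END) AS Complete,
       SUM(CASE WHEN Status = 'Incomplete' THEN [Count] END) AS Incomplete
FROM dbo.StateStatusCounts
GROUP BY State
With no ELSE branch, the CASE yields NULL for a missing status, which matches the SC row above.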
View 5 Replies
View Related
Feb 6, 2008
I have a table called products with the values like
ProductId ProductName
10 A
20 D,E,F,G
30 B,C
40 H,I,J
I need to display each ProductId like this:
ProductId ProductName
10 A
20 D
20 E
20 F
20 G
30 B
30 C
40 H
40 I
40 J
I would appreciate it if you could send me the code.
Thanks,
Mears
View 10 Replies
View Related
Feb 21, 2005
What's the best way to convert a large set of records from a simple schema where all fields are in one table to a schema where fields are split across two tables? The two table setup is necessary for reasons not worth getting into here.
Doing this via cursor is pretty straightforward, but is there a comparable set-based solution?
Here are sample create table commands. Obviously, the example below is simplified for discussion purposes.
-- One record from here will produce a record in TargetParentRecords and a record in TargetChildRecords for a total of two records.
CREATE TABLE OriginalSingleTableRecords (
ID INT IDENTITY (1, 1) NOT NULL,
ColumnA VARCHAR(100) NOT NULL,
ColumnB VARCHAR(100) NOT NULL,
CONSTRAINT PK_OriginalSingleTableRecords PRIMARY KEY CLUSTERED (ID)
)
CREATE TABLE TargetParentRecords (
ParentID INT IDENTITY (1, 1) NOT NULL,
ColumnA VARCHAR(100) NOT NULL,
CONSTRAINT PK_TargetParentRecords PRIMARY KEY CLUSTERED (ParentID)
)
-- Each row in this table must link to a TargetParentRecords row
CREATE TABLE TargetChildRecords (
ID INT IDENTITY (1, 1) NOT NULL,
ParentID INT NOT NULL, -- References TargetParentRecords.ParentID
ColumnB VARCHAR(100) NOT NULL,
CONSTRAINT PK_TargetChildRecords PRIMARY KEY CLUSTERED (ID)
)
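One set-based approach, sketched under the assumption that ColumnA uniquely identifies a source row (otherwise the join back is ambiguous):
-- 1) Create all parent rows in one statement.
INSERT INTO TargetParentRecords (ColumnA)
SELECT ColumnA
FROM OriginalSingleTableRecords
-- 2) Join back on ColumnA to pick up each generated ParentID.
INSERT INTO TargetChildRecords (ParentID, ColumnB)
SELECT p.ParentID, o.ColumnB
FROM OriginalSingleTableRecords o
JOIN TargetParentRecords p ON p.ColumnA = o.ColumnA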
View 5 Replies
View Related
Apr 4, 2006
Hi,
I have data split across 3 text files, with 3 fields repeated in each to link them (the key). I want to import this data into one table.
I used DTS to create 3 tables with the data. Now I want to combine the 3 tables into just one (which I have already created). How can I do this? Note: the field names in the source tables are different from those in the destination table.
Thanks
Guy
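The combining step can usually be a single INSERT ... SELECT joining the three imported tables on the shared key fields; the SELECT list is where the differing source column names get mapped onto the destination's. A sketch with hypothetical table and column names:
INSERT INTO dbo.Destination (KeyA, KeyB, KeyC, Value1, Value2, Value3)
SELECT t1.ka, t1.kb, t1.kc, t1.val1, t2.val2, t3.val3
FROM dbo.Import1 t1
JOIN dbo.Import2 t2
  ON t2.ka = t1.ka AND t2.kb = t1.kb AND t2.kc = t1.kc
JOIN dbo.Import3 t3
  ON t3.ka = t1.ka AND t3.kb = t1.kb AND t3.kc = t1.kc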
View 6 Replies
View Related
Mar 13, 2007
Hello all,
Little layout question. Assume my dataset returns the following data:
A
B
C
D
E
How can I present this data in a table (or list, or matrix), split into two columns:
A B
C D
E
Any idea will be very appreciated! Thanks a lot!
TG
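One way to drive that layout is to compute row and column indexes in the dataset query and group a matrix on them (SQL Server 2005+); a sketch with hypothetical names:
SELECT Val,
       (ROW_NUMBER() OVER (ORDER BY Val) - 1) / 2 AS RowNum,
       (ROW_NUMBER() OVER (ORDER BY Val) - 1) % 2 AS ColNum
FROM dbo.MyDataset
Grouping matrix rows on RowNum and columns on ColNum lays the values out two per row: A B / C D / E.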
View 4 Replies
View Related
May 19, 2015
I found a script on the net for splitting column data into rows:
SELECT A.JbIDFull, A.ProdID, A.OrderQty, A.OtherDetails, A.OrderDate, c.Item FROM
(SELECT JbIDFull, DeptID, ProdID, OrderQty, OtherDetails, OrderDate, LamORSteachType, JBID, RMIDs, RMQty
FROM tblJobCardforProduction WHERE JBID = '2' AND DeptID = '3') A
CROSS APPLY dbo.SplitStringNEW(a.RMIDs,';') b
CROSS APPLY dbo.SplitStringNEW(a.RMQty,'/') c
When I apply one CROSS APPLY it works fine, but when I apply it to one more column, each row comes back three times. I want to split RMIDs and RMQty together, keeping the same JbIDFull.
JbIDFull DeptID ProdID OrderQty RMIDs RMQty
PD-May15-00001 3 2044 10000 PROD-00052 0
PD-May15-00002 3 921 1000 PROD-00052;PROD-00383;PROD-00384 500;600;700
View 2 Replies
View Related
Dec 16, 2006
Hi All,
We are working on a project (C#.NET 2003, MS SQL 2000) where the database is growing at 2 to 3 GB per day (scanned documents are stored in the database). There is only one table in the database. Currently the database has grown to 200 GB. For safety reasons we are planning to split the database according to one key field in the table, say book_no.
Please guide/suggest how to split the database now based on a SQL query like "SELECT * FROM master_records WHERE book_no=1".
And later, how do we merge/combine all these split databases back into one?
I thought of using "Data Export" and later "Import with Append", but I don't know whether it will be effective or not. Is there any method or tool available to split the database table and later combine the pieces to make a single one again?
Please help me, it's urgent.
Thanks in advance.
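For the split itself, plain cross-database INSERT ... SELECT statements work (database names are hypothetical; a table with an IDENTITY column would also need SET IDENTITY_INSERT):
-- Split: copy one book's rows into its own database.
INSERT INTO BookDB1.dbo.master_records
SELECT * FROM MainDB.dbo.master_records WHERE book_no = 1
-- Merge: append the rows back later.
INSERT INTO MainDB.dbo.master_records
SELECT * FROM BookDB1.dbo.master_records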
View 3 Replies
View Related
Jan 13, 2005
Hi
This is probably a very basic question for most people in this group.
How do I split the data in a column into 2 columns? This can be done in Access with an update query, but in MS SQL Server I am not sure.
Here is an example of what I want to achieve:
FName
John?Doe
FName LName
John Doe
thanks for the help
prit
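A sketch, assuming an LName column has already been added and '?' is the delimiter shown above:
UPDATE MyTable
SET LName = SUBSTRING(FName, CHARINDEX('?', FName) + 1, LEN(FName)),
    FName = LEFT(FName, CHARINDEX('?', FName) - 1)
WHERE CHARINDEX('?', FName) > 0
-- Both right-hand sides read the original FName value:
-- SQL Server evaluates them before applying the SET.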
View 7 Replies
View Related
Jan 2, 2008
Hi,
I have a scenario where I have a string column from the database with a value such as "FTW*Christopher,Lawson|FTW*Bradley,James". In my report, I need to split this column at each "|" symbol and place each substring one below the other in one row of the report, as shown below:
"FTW*Christopher,Lawson
FTW*Bradley,James"
Please let me know how I can achieve this.
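If the report textbox is allowed to grow, one hedged option is to do the split in the dataset query by turning each delimiter into a line break (table and column names hypothetical):
SELECT REPLACE(MyCol, '|', CHAR(13) + CHAR(10)) AS MyCol
FROM dbo.MyTable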
View 3 Replies
View Related
Mar 28, 2012
I'm using SQL 2008 and trying to build a dynamic SQL script to split the records 50/50. I know that using NEWID() with an ORDER BY clause selects randomly, but how should I build the SELECT statement to split the data 50/50 so that I don't need to run the script manually every time?
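A sketch using NTILE(2) over a random order to deal the rows into two halves (table name hypothetical):
WITH halves AS (
    SELECT *, NTILE(2) OVER (ORDER BY NEWID()) AS half
    FROM dbo.SourceTable
)
SELECT * FROM halves WHERE half = 1 -- one 50%; half = 2 is the other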
View 10 Replies
View Related
Apr 1, 2015
How do I split one column's data into multiple rows? Below is the requirement...
CREATE TABLE #t3 (id int, country varchar(max))
INSERT #t3 SELECT 1,' AU-Australia
MM-Myanmar
NZ-New Zealand
PG-Papua New Guinea
PH-Philippines'
Output should be like below
1, AU-Australia
1, MM-Myanmar
1, NZ-New Zealand
1, PG-Papua New Guinea
1, PH-Philippines
Note: we are getting source data from sqlserver tables.
I googled and found the approach below, but didn't get the output as required:
SELECT A.id, a.country,
Split.a.value('.', 'VARCHAR(500)') AS String
FROM (SELECT id, country ,
CAST ('<M>' + REPLACE(country, ' ', '</M><M>') + '</M>' AS XML) AS String
FROM #t3) AS A CROSS APPLY String.nodes ('/M') AS Split(a);
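One likely culprit in the query above: REPLACE(country, ' ', ...) splits on every space, which also cuts values like 'New Zealand' apart. Since the source values are separated by line breaks, replacing CHAR(10) (after stripping CHAR(13)) should split on the right boundary; a sketch:
SELECT t.id,
       LTRIM(s.m.value('.', 'VARCHAR(100)')) AS country
FROM (SELECT id,
             CAST('<M>' + REPLACE(REPLACE(country, CHAR(13), ''),
                                  CHAR(10), '</M><M>') + '</M>' AS XML) AS x
      FROM #t3) t
CROSS APPLY t.x.nodes('/M') s(m)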
View 4 Replies
View Related
Sep 5, 2007
This is probably an easy question, and I just can't find the solution. I've searched extensively, but I am probably just not searching for exactly what I need.
Basically, I have a Conditional Split. For each row coming out of the split, I need to SELECT some data from another database based on one of the fields, and then place that data into a file for later processing.
Seems pretty simple, considering the power of SSIS. Using tools such as OLE DB Command didn't help - the data that comes out of the OLE DB Command is the input data, not the data returned by the command.
How can I do this?
Thank you!
Nolan
View 1 Replies
View Related
Aug 21, 2014
OK, so I have:
- 500 GB DW
- 5 GB in smaller DBs
- 220 GB TempDB
- 350 GB in Log files.
My machine is Fujitsu Primergy 64 cores (with HT) and 192 GB RAM.
I have several IO locations:
- 540 GB in-server HDD 15k RAID10
- 1 TB HDD 15k RAID10 on SAN (separate controller)
- 2 TB HDD 15k RAID10 on SAN (same controller as below)
- 800GB SSD RAID10 on SAN (same controller as above)
Data warehouse has 2 fact tables that are absolutely crucial and quite large.
Now I want to organize the DB into several filegroups and put them on different drives. The filegroups I'm thinking of:
- FILEGROUP1: for 1st crucial Fact Table
- FILEGROUP2: for 2nd crucial Fact Table
- FILEGROUP3: for tempDB
- FILEGROUP4: for dimensions data
- FILEGROUP5: for the rest of facts data
- FILEGROUP6: for dimensions indexes
- FILEGROUP7: for the rest of facts indexes
- FILEGROUP8: for 1 log file of one smaller DB (its in full-recovery and its quite large)
- FILEGROUP9: for the rest of log files
- FILEGROUP10: others
How should I organize them across the available drives? I was thinking about something like:
800 GB SSD: FILEGROUPS 1-3
2 TB RAID10: FILEGROUPS 5+7+8
1 TB RAID10: FILEGROUPS 4+6+10
540 GB in-server: FILEGROUP 9
I know that having multiple filegroups on the same drive is pointless regarding performance, but in the future I could actually add some more drives, so I want to separate them now.
Also, how many files per filegroup should I create? I am considering 1 or 2, except for TempDB where I am going for 4.
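Two side notes, for what they are worth: log files do not belong to filegroups, and tempdb lives in its own database files, so items 3, 8, and 9 above would be handled by placing those files directly rather than via user-database filegroups. For the rest, a minimal sketch of adding one filegroup with two files (names, paths, and sizes hypothetical):
ALTER DATABASE MyDW ADD FILEGROUP FG_Fact1
ALTER DATABASE MyDW ADD FILE
    (NAME = N'Fact1_01', FILENAME = N'S:\SSD\Fact1_01.ndf', SIZE = 50GB),
    (NAME = N'Fact1_02', FILENAME = N'S:\SSD\Fact1_02.ndf', SIZE = 50GB)
TO FILEGROUP FG_Fact1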
View 2 Replies
View Related
Jul 8, 2015
I have a field in a table that contains addresses, e.g.:
15 Green Street
5F Brown Steet
127 Blue Street
1512 Red Road
I want to output the numbers into one column and the address into another column, as I need to produce a report that only shows streets and roads, with no numbers.
So basically, no matter how many characters come before the first space (they can be numbers or letters), I want these output into two columns.
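A sketch splitting on the first space (table and column names hypothetical):
SELECT LEFT(Address, CHARINDEX(' ', Address) - 1) AS HouseNumber,
       SUBSTRING(Address, CHARINDEX(' ', Address) + 1, LEN(Address)) AS Street
FROM dbo.Addresses
WHERE CHARINDEX(' ', Address) > 0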
View 6 Replies
View Related
Feb 21, 2007
I have a source table which I'm splitting 3 ways based on a column value, but the target is the same OLE DB destination table. One conditional path goes to a Multicast two-way split to the same OLE DB destination table. The default split goes to a flat file for logging unknown record types. For a test I have data for only the 3 column values I want, but I'm having trouble getting the process to complete. If I pre-filter the data going into the source table to one or two values, I can get the process to complete even if one split is to the Multicast. If I include all three data types in the source table, I get different results depending on the order in which the conditions are specified - sometimes only two split paths are executed; other times all three are executed, but in some cases only one path of the Multicast split is executed. In any case, when all three source data types are used in the test, the process never completes - the paths stay in a yellow state and never finish.
Am I creating some kind of deadlock situation by having the source data directed to the same target table via 4 splits? Any help you can provide is appreciated. Thanks.
View 12 Replies
View Related
Aug 12, 2015
I'm encountering a very peculiar situation when I'm trying to compare source and target data using conditional split. Following is the Data Flow and how I'm trying to achieve this.
Source Data: Col_A (PK)   Col_B
             1            100
             8            500
Target Data: Col_A (PK)   Col_B
             1            100
             3            700
             8            500
Look-up Target on Col_A to check for existing records. Now we have four columns in Look-up match output: Col_A, Col_B, Lkp_Col_A (Target Col), Lkp_Col_B (Target Col).
Conditional Split: Compare Col_B with Lkp_Col_B
Update the target if there is any change in the existing value of Col_B. When I run the package, the conditional split fails for every record in the source: even when there is no change in Col_B, some of the records (not all, and quite randomly) get updated with the same value. If I run the package for only a few records, it works absolutely fine.
View 8 Replies
View Related
Mar 16, 2006
I've created, with the help of some great people, an SSIS 2005 package which does the following so far:
1) Takes an incoming txt file. Example txt file: http://www.webfound.net/split.txt
The txt file going from top to bottom is sort of grouped like this
Header Row (designated by 'HD')
Corresponding Detail Rows for the Header Row
.....
Next Header Row
Corresponding Detail Rows
...and so on
http://www.webfound.net/rows.jpg
2) Header Rows are split into one table, Maintenance Detail Rows into another, and Payment Detail Rows into a third table. A unique ID has been created for each header and its related detail rows to form a PK/FK relationship, as there was none prior to the import; originally the relation existed only in the ordering of a header and the related rows below it. The reason I split this out is so I can massage it later with stored proc filters, whatever…
Now I'm trying to somehow bring the data in those tables back together, like it was initially, using a query, so that I can cut out each of the Header / Detail Row sections into its own txt file. So, if you look at the original txt file, each new header and its related detail rows (an example of a cut piece would be http://www.webfound.net/rows.jpg) need to be cut out and put into their own separate txt file.
This is where I'm stuck: how to create a query to combine it all back in an OLE DB Source component, then somehow read that source and split out the sections into their own individual txt files.
The filenames of the txt files will vary and be based on one of the column values already in the header table.
Here is a print screen of my package so far:
http://www.webfound.net/tasks.jpg
http://www.webfound.net/Import_MaintenanceFile_Task_components.jpg
http://www.webfound.net/DataFlow_Task_components.jpg
Let me know if you need more info. Examples of the actual data in the tables are here:
http://www.webfound.net/mnt_headerRows.txt
http://www.webfound.net/mnt_MaintenanceRows.txt
http://www.webfound.net/mnt_PaymentRows.txt
Here's a print screen of the table schema:
http://www.webfound.net/schema.jpg
View 17 Replies
View Related
Aug 24, 2015
I am importing the values for field Atype from a .csv file as DT_STR, 13 and I need to fit them into a bit type CType field.
When I write the conditional split expression ((ISNULL(Atype) ? "a" : Atype) != (ISNULL(CType) ? "9" : CType)), it says that the DT_WSTR and DT_I4 types are incompatible and that I need to cast explicitly with a cast operator. I haven't been able to make it work - how do I cast explicitly?
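For what it's worth, the SSIS expression cast operator is a type name in parentheses before the operand, so casting the integer side to a string would look like (ISNULL(Atype) ? "a" : Atype) != (ISNULL(CType) ? "9" : (DT_WSTR, 1)CType) - treat this as a sketch, since which side needs the cast depends on the actual column types.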
View 4 Replies
View Related
Sep 30, 2015
I have a delimited text file with 650+ columns. The sum of the column lengths of a single row, if fully populated, exceeds 30K bytes. The "killer" fields lengthwise are the "Description" fields. If they were removed from the input file, the remaining columns would occupy about 5,000 bytes, which is within the SQL max row length.
Can SSIS be used to create these two tables? (One without the description fields, the other with those fields but arranged vertically in the table rows.)
The fundamental issue is that I cannot import a single file row into a SQL table, because that row's length could exceed the max byte count for a row.
View 8 Replies
View Related
May 7, 2007
Hi JayH (or anyone). Another week... a new set of problems. I obviously need to learn .NET syntax, but because of project deadlines in converting from DTS to SSIS it is hard for me to stop and do that. So, if someone could help me with some easy syntax, I would really appreciate it.
In DTS, there was a VBScript that copied a set of flat files from one directory to an archive directory after modifying the file name. In SSIS, the directory and archive directory will be specified in the config file. So, I need a .net script that retrieves a file, renames it and copies it to a different directory.
Linda
Here is the old VBScript Code:
Option Explicit
Function Main()
Dim MovementDataDir
Dim MovementArchiveDataDir
Dim MovementDataFile
Dim MovementArchiveDataFile
Dim FileNameRoot
Dim FileNameExtension, DecimalLocation
Dim CurMonth, CurDay
Dim FileApplicationDate
Dim fso ' File System Object
Dim folder
Dim FileCollection
Dim MovementFile
'======================================================================
'Create text strings of today's date to be appended to the archived file.
FileApplicationDate = Now
CurMonth = Month(FileApplicationDate)
CurDay = Day(FileApplicationDate)
If Len(CurMonth) = 1 Then
CurMonth = "0" & CurMonth
End If
If Len(CurDay) = 1 Then
CurDay = "0" & CurDay
End If
FileApplicationDate = CurMonth & CurDay & Year(FileApplicationDate)
'=====================================================================
' Set the movement data directory from the global variable.
MovementDataDir = DTSGlobalVariables("gsMovementDataDir").Value
MovementArchiveDataDir = DTSGlobalVariables("gsMovementDataArchiveDir").Value
Set fso = CreateObject("Scripting.FileSystemObject")
Set folder = fso.GetFolder(MovementDataDir)
Set FileCollection = folder.Files
' Loop through all files in the data directory.
For Each MovementFile In FileCollection
' Get the full path name of the current data file.
MovementDataFile = MovementDataDir & "\" & MovementFile.Name
' Get the full path name of the archive data file.
MovementArchiveDataFile = MovementArchiveDataDir & "\" & MovementFile.Name
DecimalLocation = InStr(1, MovementArchiveDataFile, ".")
FileNameExtension = Mid(MovementArchiveDataFile, DecimalLocation, Len(MovementArchiveDataFile) - DecimalLocation + 1)
FileNameRoot = Mid(MovementArchiveDataFile, 1, DecimalLocation - 1)
MovementArchiveDataFile = FileNameRoot & "_" & FileApplicationDate & FileNameExtension
If (fso.FileExists(MovementDataFile)) Then
fso.CopyFile MovementDataFile, MovementArchiveDataFile
' If the archive file was copied, then delete the old copy.
If (fso.FileExists(MovementArchiveDataFile)) Then
fso.DeleteFile(MovementDataFile)
End If
End If
Next
Set fso = Nothing
Set folder = Nothing
Set FileCollection = Nothing
Main = DTSTaskExecResult_Success
End Function
View 6 Replies
View Related
Dec 12, 2014
I ran the following statement and it will not update beyond 7 million plus rows, though I have about 38 million to complete. I keep checking updated row counts, and after half a day they are still the same, so I know something is wrong, because it was rolling through with no problem when I initiated it. I need to complete this ASAP, so that adds to my frustration. The 'Acct_Num_CH' field is an encrypted field (FYI).
SET rowcount 10000
UPDATE [dbo].[CC_Info_T]
SET [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
WHERE [Acct_Num_CH] IS NOT NULL
WHILE @@ROWCOUNT > 0
BEGIN
SET rowcount 10000
UPDATE [dbo].[CC_Info_T]
SET [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
WHERE [Acct_Num_CH] IS NOT NULL
END
SET rowcount 0
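One hedged observation about the loop above: rows already set to the constant still satisfy [Acct_Num_CH] IS NOT NULL, so each batch is free to pick the same rows again and the loop never runs out of work. Excluding already-converted rows gives it a finish line; a sketch:
SET rowcount 10000
UPDATE [dbo].[CC_Info_T]
SET [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
WHERE [Acct_Num_CH] IS NOT NULL
AND [Acct_Num_CH] <> 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
WHILE @@ROWCOUNT > 0
BEGIN
UPDATE [dbo].[CC_Info_T]
SET [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
WHERE [Acct_Num_CH] IS NOT NULL
AND [Acct_Num_CH] <> 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
END
SET rowcount 0
(SET ROWCOUNT persists for the session, so it does not need to be repeated inside the loop.)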
View 5 Replies
View Related
Dec 18, 2006
I have one of our production accounting databases, which started at 2 GB and has now grown into a 20 GB database over the period of a few years. I have been getting timeouts when transactions try to update different tables in the database. Most of the errors I get are I/O requests to the data file (the data file of the production db, Accounting_Data.MDF).
I would like to implement the following for this accounting database. I need to split the data file into multiple files by placing some of the tables in different filegroups. I have had the server upgraded to be able to have different drives on different channels, so I can place these data and log files on different drives and there will be fewer I/O conflicts.
I would like to have the following filegroups:
FileGroup 1 - which will have all database definitions (DDL).
FileGroup 2 - I will have the AR module tables under here.
FileGroup 3 - I will have the GL module tables under here.
FileGroup 4 - I will have the rest of the tables under here.
FileGroup 5 - I would like to place the indexes under here.
Also, where will the associated transaction files go?
I would like to get some help doing this. Are there any articles / help available that I can refer to? Any suggestions / corrections / criticisms of what I have mentioned above are much appreciated! Thanks in advance.
View 1 Replies
View Related
Oct 18, 2006
When I enter over 4000 chars in any ntext field in my SQL Server 2005 database (directly in the database and through the application), I get an error saying that the data could not be updated because string or binary data would be truncated. Has anyone ever seen this? I cannot figure out what is causing it; ntext should be able to hold a lot more data than this...
View 7 Replies
View Related
Aug 6, 2015
I want to make data changes in a read_only database, which is why I must set the database to read_write. While the database is in read_write mode, I want to be sure that no one else makes changes in the database.
To that end, I wrote the code below, but I suspect that after setting the database to read_write, and until the database is set to
single_user, it may be possible for another user to issue DML. Is the code below enough for this operation, or is there another way?
Reminder: a read_only database cannot be set to single_user mode. That's why you must first set the database to read_write.
The code;
use master
alter database xxx set read_write
with rollback immediate
alter database xxx set single_user
with rollback immediate
use xxx
update tablexxx set columnxxx=yyy
use master
alter database xxx set read_only
with rollback immediate
alter database xxx set multi_user
with rollback immediate
View 5 Replies
View Related
Sep 28, 2006
Hi,
First post, so apologies if this sounds a bit confusing!
I'm trying to run the following update. On a weekly basis I want to insert all the active user IDs from a users table into a timesheets table, along with the last day of the week and a submitted flag set to 0. I plan then on creating a scheduled job so the script runs weekly. The 3 queries I plan to use are below.
Insert statement:
INSERT INTO TBL_TIMESHEETS (TBL_TIMESHEETS.USER_ID, TBL_TIMESHEETS.WEEK_ENDING, TBL_TIMESHEETS.IS_SUBMITTED)
VALUES ('user ids', 'week end date', '0')
Get user IDs:
SELECT TBL_USERS.USER_ID FROM TBL_USERS WHERE TBL_USERS.IS_ACTIVE = '1'
Get the last date of the week:
SELECT DATEADD(wk, DATEDIFF(wk, 0, GETDATE()), 6)
I'm having trouble combining them, as I'm pretty new to this. Is the best approach to use a cursor? If you need any more info, let me know. Thanks in advance.
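A set-based combination of the three statements above needs no cursor; this sketch simply reuses the expressions from the post:
INSERT INTO TBL_TIMESHEETS (USER_ID, WEEK_ENDING, IS_SUBMITTED)
SELECT TBL_USERS.USER_ID,
       DATEADD(wk, DATEDIFF(wk, 0, GETDATE()), 6),
       0
FROM TBL_USERS
WHERE TBL_USERS.IS_ACTIVE = '1'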
View 4 Replies
View Related
May 10, 2007
How do you achieve this during the SSIS data load execution itself?
Update Col B with Col A value if Col B is NULL in the Data File?
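One common pattern, offered as a sketch: a Derived Column transformation that replaces Col B with the expression ISNULL([Col B]) ? [Col A] : [Col B] fills the value during the load itself. The equivalent post-load T-SQL (names hypothetical) is:
UPDATE dbo.TargetTable
SET ColB = ColA
WHERE ColB IS NULL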
View 1 Replies
View Related