Hi guys, I am working on a Sales Force Automation application in which I have a scenario where I create a team (base table) and team users (child table). One team can have many users, and a user can be in one or more teams.
My question is: what is the best practice (performance-wise) to add, say, 100 users to a team in one go? For example, the user can pick any number of users from the user list. In other words, how do I insert a bulk of records into the child table?
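One common pattern, sketched below under assumptions (a child table dbo.TeamUser(TeamID, UserID) and a user-defined table type; both names are illustrative, not from the original schema), is to pass the whole list of selected users to a stored procedure as a table-valued parameter (SQL Server 2008 and later), so that all 100 rows go in with one set-based INSERT instead of 100 separate calls:

-- Hypothetical names; adjust to the real team/user schema
CREATE TYPE dbo.UserIdList AS TABLE (UserID INT NOT NULL PRIMARY KEY);
GO
CREATE PROCEDURE dbo.AddUsersToTeam
    @TeamID INT,
    @Users  dbo.UserIdList READONLY
AS
BEGIN
    SET NOCOUNT ON;
    -- Single set-based insert for the whole batch of users
    INSERT INTO dbo.TeamUser (TeamID, UserID)
    SELECT @TeamID, u.UserID
    FROM @Users AS u;
END
GO

From the application side the list of selected users is sent as one structured parameter (for example a DataTable in ADO.NET), which keeps the whole operation to a single round trip.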
Hi, I am working on an application that has to read a large number of XML files, take specific values out of each file, and store them in a SQL Server database so that reports can be generated from these values. There are some 15-20,000 files for each month of the year. I am OK with parsing the files and getting the fields that I need, but I don't want to insert one record at a time as I parse the files. I was told that I can create a .exe file that parses the XML files and stores the required values in a CSV file, and then use these CSV files to initiate a bulk insert using Business Intelligence Studio. I have not been able to find any info or article on how to do this. Any help on how I can accomplish this, or alternate solutions, is greatly appreciated.
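For the load itself, once the parser .exe has written the values to a CSV, a plain T-SQL BULK INSERT into a staging table is one option. The sketch below assumes a hypothetical staging table and file path, and that the CSV is comma-delimited with a header row:

-- Assumed staging table whose columns match the CSV layout exactly
CREATE TABLE dbo.XmlExtractStaging
(
    SourceFile VARCHAR(260),
    FieldA     VARCHAR(100),
    FieldB     VARCHAR(100)
);

BULK INSERT dbo.XmlExtractStaging
FROM 'C:\Extracts\xml_values_200701.csv'    -- hypothetical path written by the parser
WITH
(
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    FIRSTROW        = 2,    -- skip the header row, if one is written
    TABLOCK                 -- allows minimal logging on an empty heap
);

The same load can also be wrapped in an Integration Services package (a Bulk Insert task, or a flat-file source feeding an OLE DB destination) if it needs to run as part of a larger process.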
Hi friends, I am retrieving Oracle data (via a linked server) into my local SQL Server 2005 instance. Initially I want a duplicate copy of the Oracle data on my local server. I have created the linked server and am inserting into a local table (an exact replica). This runs every 2 minutes (per my client's requirement) to pick up newly added records. But I also want to retrieve specific columns from the replicated table and insert them into other local tables, and I have written a stored procedure to do this. Some columns are stored in other tables to maintain normalization, and this should also run every two minutes. I have written a C# class to read the replica table, pass parameters to the SP, and insert into my local tables in normalized form. But this takes 10 minutes to insert 1,500 records, and my client insists it must run within the 2-minute window. Is what I am doing the correct way, or is there another solution? Please suggest. Thanks in advance.
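If the C# class is calling the stored procedure once per replicated row, those 1,500 round trips are the likely bottleneck. A sketch of a set-based alternative is below; the table and column names are assumptions standing in for the real replica and normalized tables:

-- Move every not-yet-copied row in one statement instead of row-by-row SP calls
INSERT INTO dbo.CustomerNormalized (CustomerCode, CustomerName, RegionID)
SELECT r.CustomerCode, r.CustomerName, r.RegionID
FROM dbo.OracleReplica AS r
WHERE NOT EXISTS
(
    SELECT 1
    FROM dbo.CustomerNormalized AS c
    WHERE c.CustomerCode = r.CustomerCode   -- assumed natural key
);

Scheduled as a SQL Agent job (or called once per cycle from the SP), a statement like this normally handles 1,500 rows in seconds, because the work stays on the server instead of looping through the client.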
I ran my package and it was successful. I tried running it again, but this time it throws this error:
Dim_T_Account [56575]: Unable to prepare the SSIS bulk insert for data insertion.
Error: 0xC004701A at CallerType, CallerChannel, Dealer, DODealer, HotlineType, Model, Reg'l Signal Code, Account, Contact, DTS.Pipeline: component "Dim_T_Account" (56575) failed the pre-execute phase and returned error code 0xC0202071.
Information: 0x40043008 at CallerType, CallerChannel, Dealer, DODealer, HotlineType, Model, Reg'l Signal Code, Account, Contact, DTS.Pipeline: Post Execute phase is beginning.
Why did I suddenly encounter this error without changing anything? What does it mean that it cannot prepare the SSIS bulk insert? My connection to the server is working OK.
I am using SQL Server Destinations in my data flow tasks. I was running this package on the server when I encountered this error:
OnError,,,LOAD AND UPDATE Dimension Tables,,,10/24/2007 1:22:23 PM,10/24/2007 1:22:23 PM,-1071636367,0x,Unable to prepare the SSIS bulk insert for data insertion. OnError,,,Load Dimensions,,,10/24/2007 1:22:23 PM,10/24/2007 1:22:23 PM,-1071636367,0x,Unable to prepare the SSIS bulk insert for data insertion. OnError,,,Discount Reason, ISIS Condition, ISIS Defect, ISIS Repair, ISIS Section, ISIS Symptom, Job Status, Parts, Purchase SubOrder Type, Service Contract, Service Reason, Service Type, TechServiceGrp, WarrantyType, Branch, Wastage Reason,,,10/24/2007 1:22:23 PM,10/24/2007 1:22:23 PM,-1073450974,0x,SSIS Error Code DTS_E_PROCESSINPUTFAILED. The ProcessInput method on component "Dim_T_ISISDefect" (56280) failed with error code 0xC0202071. The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running. There may be error messages posted before this with more information about the failure.
What could be the reason for this? I don't usually have an error.
Having searched the forum, this one clearly has form... However beyond assisting those who have fallen at the first hurdle (i.e. forgetting/not knowing that they cannot execute the package remotely to the instance of SQL Server into which they are inserting), the issues raised by others have not been addressed. Thus I am bringing nothing new to the table here - just providing an executive summary of problems which others have run into, written about, but not received answers for.
First the complete error: Description: Unable to prepare the SSIS bulk insert for data insertion. End Error Error: 2008-01-15 04:55:27.58 Code: 0xC004701A Source: <xxx> DTS.Pipeline Description: component "<xxx> failed the pre-execute phase and returned error code 0xC0202071. End Error DTExec: The package execution returned DTSER_FAILURE (1). Started: 4:53:34 AM Finished: 5:00:00 AM Elapsed: 385.384 seconds. The package execution failed. The step failed.
Important points
It mostly works - It produces no error more than 9 times out of 10.
It fails on random dataflows - My package has several dataflows, (mostly) executing concurrently. Where the error occurs it does not do so on the same dataflow each time: on one run it'll fail on dataflow A whilst B,C,D and E succeed, then A-E will all succeed (and continue doing so for the next ten runs thereafter), and then the error recurs for dataflow D, with A,B,C and E all succeeding. Hope someone has something interesting to say,
I am trying to bulk copy some XML data into SQL Server and am generally doing quite well. The XML data I have been given has a child node called "name" which is the same as in the parent node, as shown in the highlighted part of the XML source below. I can retrieve the data in the parent node using bulk.ColumnMappings.Add("name", "Name"), but I cannot get any of the data from the child nodes "Category" or "Categories". Have you any suggestions on how I can get this data?

<?xml version="1.0" ?>
<products>
  <product>
    <ProductId>12345</ProductId>
    <name>Productname</name>
    <description>"This is some description text"</description>
    <Categories>
      <Category>
        <name>Category type</name>
        <merchantName>Category subtype</merchantName>
      </Category>
    </Categories>
    <fields />
  </product>
  Etc…….
</products>

Many thanks in advance, Simon
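If loading the XML through T-SQL is acceptable as an alternative to the SqlBulkCopy column mappings, the nodes()/value() methods (SQL Server 2005 and later) can reach the child-level name without it colliding with the parent-level one. A sketch against the sample document:

DECLARE @x XML = N'<products>
  <product>
    <ProductId>12345</ProductId>
    <name>Productname</name>
    <Categories>
      <Category>
        <name>Category type</name>
        <merchantName>Category subtype</merchantName>
      </Category>
    </Categories>
  </product>
</products>';

SELECT
    p.n.value('(ProductId)[1]',    'INT')           AS ProductId,
    p.n.value('(name)[1]',         'NVARCHAR(100)') AS ProductName,   -- parent-level name
    c.n.value('(name)[1]',         'NVARCHAR(100)') AS CategoryName,  -- child-level name
    c.n.value('(merchantName)[1]', 'NVARCHAR(100)') AS MerchantName
FROM @x.nodes('/products/product') AS p(n)
OUTER APPLY p.n.nodes('Categories/Category') AS c(n);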
Given the sample data and query below, I would like to know if it is possible to have the outcome be a single row, with the ChildTypeId, c.StartDate, c.EndDate being contained in the parent row. So, the outcome I'm hoping for based on the data below for ParentId = 1 would be:
1 2015-01-01 2015-12-31 AA 2015-01-01 2015-03-31 BB 2016-01-01 2016-03-31 CC 2017-01-01 2017-03-31 DD 2017-01-01 2017-03-31
declare @parent table (Id int not null primary key, StartDate date, EndDate date)
declare @child table (Id int not null primary key, ParentId int not null, ChildTypeId char(2) not null, StartDate date, EndDate date)

insert @parent select 1, '1/1/2015', '12/31/2015'
insert @child select 1, 1, 'AA', '1/1/2015', '3/31/2015'
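One sketch of how the child rows could be folded into the parent row, assuming the set of ChildTypeId values that should become columns (AA, BB, ...) is known in advance, is conditional aggregation; for a variable set of types the same statement would have to be built with dynamic SQL:

SELECT
    p.Id, p.StartDate, p.EndDate,
    MAX(CASE WHEN c.ChildTypeId = 'AA' THEN c.ChildTypeId END) AS TypeAA,
    MAX(CASE WHEN c.ChildTypeId = 'AA' THEN c.StartDate   END) AS AA_StartDate,
    MAX(CASE WHEN c.ChildTypeId = 'AA' THEN c.EndDate     END) AS AA_EndDate,
    MAX(CASE WHEN c.ChildTypeId = 'BB' THEN c.ChildTypeId END) AS TypeBB,
    MAX(CASE WHEN c.ChildTypeId = 'BB' THEN c.StartDate   END) AS BB_StartDate,
    MAX(CASE WHEN c.ChildTypeId = 'BB' THEN c.EndDate     END) AS BB_EndDate
    -- repeat the three MAX(CASE ...) columns for CC, DD, ...
FROM @parent AS p
LEFT JOIN @child AS c ON c.ParentId = p.Id
GROUP BY p.Id, p.StartDate, p.EndDate;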
I am having a problem adding new records in a SQL Server 7.0 database. I have two databases: one with a table named DB1..WTEST and the other with a table named DB2..TEST. I want to UPDATE records in the DB2..TEST table when the records already exist in TEST (comparing on the primary key, with the ignore-duplicates option), and secondly INSERT records into DB2..TEST from DB1..WTEST if they do not already exist in DB2..TEST. The first part, the update, works fine, but I have a problem with the second part. It works and records are inserted, but I don't get the statistics of the inserted records; instead it gives a warning message: "Duplicate records were ignored".
Can somebody help me out or suggest a better solution for conditional record insertion? I used the CURSOR option, but that only covers updates and deletions.
IF EXISTS (SELECT * FROM DB1..WTEST INNER JOIN DB2..TEST ON DB1..WTEST.Name1 = DB2..TEST.Name1)
BEGIN
    UPDATE DB1..TEST SET add1 = DB1..WTEST.Add1 FROM DB1..WTEST
    PRINT 'Updated'
END

WHILE NOT EXISTS (SELECT * FROM DB1..WTEST INNER JOIN DB2..TEST ON DB1..WTEST.Name1 = DB2..TEST.Name1)
BEGIN
    INSERT DB2..TEST SELECT DB1..WTEST.Name1, DB1..WTEST.Add1 FROM DB1..WTEST
    CONTINUE
END

------------------ Alternate Script ---------------------

IF EXISTS (SELECT * FROM DB1..WTEST, DB2..TEST WHERE DB1..WTEST.Name1 = DB2..TEST.Name1)
BEGIN
    UPDATE DB2..TEST SET add1 = DB1..WTEST.Add1
    FROM DB1..WTEST
    WHERE DB1..WTEST.Name1 = DB2..TEST.Name1
END

INSERT DB2..TEST
SELECT DB1..WTEST.Name1, DB1..WTEST.Add1
FROM DB1..WTEST
WHERE NOT EXISTS (SELECT * FROM DB2..TEST WHERE DB1..WTEST.Name1 = DB2..TEST.Name1)
END
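Since the missing piece is the count of rows actually inserted, one sketch (using the same table names as above) is to drop the WHILE loop, run the conditional INSERT once, and capture @@ROWCOUNT immediately afterwards:

DECLARE @inserted int

INSERT DB2..TEST (Name1, Add1)
SELECT w.Name1, w.Add1
FROM DB1..WTEST AS w
WHERE NOT EXISTS (SELECT * FROM DB2..TEST AS t WHERE t.Name1 = w.Name1)

-- @@ROWCOUNT must be captured right after the INSERT, before any other statement runs
SET @inserted = @@ROWCOUNT
SELECT @inserted AS RowsInserted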
In our database we have a list of devices in a "Device" table, each having one or more IPs located in the "IP" table, linked through a foreign key on the DeviceID column. I would like to retrieve this information as such:

DeviceID   IpAddress
1          10.0.0.1, 10.0.0.2, 10.0.0.3
2          ...
3
4
5
etc.

Is it possible to do that without using cursors? Through a query?
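On SQL Server 2005 and later, one cursor-free sketch uses FOR XML PATH to build the comma-separated list per device (the IpAddress column name in the IP table is an assumption); on SQL Server 2017 and later, STRING_AGG does the same job more directly:

SELECT
    d.DeviceID,
    STUFF((SELECT ', ' + i.IpAddress
           FROM dbo.IP AS i
           WHERE i.DeviceID = d.DeviceID
           FOR XML PATH('')), 1, 2, '') AS IpAddresses   -- STUFF strips the leading ", "
FROM dbo.Device AS d;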
Greetings, I just want to know if anyone can tell me how to get all user-defined tables in a parent-then-child manner. I mean all the parents should be listed first and then the children. I don't think there is any direct way to do this, but I am not able to form any sort of query to achieve it. Any help will be greatly appreciated. TIA
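A sketch of one way to derive that ordering on SQL Server 2005 and later: walk the foreign-key graph with a recursive CTE so tables that reference nothing come out first and each referencing table lands after the tables it points at (cyclic references between tables are not handled here):

WITH fk AS
(
    SELECT DISTINCT
        parent_object_id     AS child_id,    -- the referencing (child) table
        referenced_object_id AS parent_id    -- the referenced (parent) table
    FROM sys.foreign_keys
    WHERE parent_object_id <> referenced_object_id
),
lvl AS
(
    -- tables that reference no other table sit at level 0
    SELECT t.object_id, 0 AS depth
    FROM sys.tables AS t
    WHERE NOT EXISTS (SELECT 1 FROM fk WHERE fk.child_id = t.object_id)

    UNION ALL

    -- a referencing table sits one level below each table it points at
    SELECT fk.child_id, lvl.depth + 1
    FROM fk
    JOIN lvl ON lvl.object_id = fk.parent_id
)
SELECT OBJECT_NAME(object_id) AS TableName, MAX(depth) AS Level
FROM lvl
GROUP BY object_id
ORDER BY Level, TableName;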
I have a situation that I must resolve. I have a program used by many people, but I had to create a new table to provide a new feature. The problem is that this table must use the primary key from the parent table as its own primary key, meaning when a user adds a new record to the parent table, I need to instantly add that primary key to the child table. This was done in the program using SQL statements, but I need to implement a trigger or something similar so I don't have to reinstall the application on many computers.
Basically, a person inserts a new record, and then I need to get the new primary key and insert it into the child table. How can I do this with a trigger? I have tried to use an INSERT INTO statement in my trigger, but I can't seem to pass the parameters correctly.
CREATE TRIGGER dbo.Table_Borrower_Insert_Keys ON Table_Borrower
AFTER INSERT
AS
BEGIN
    DECLARE @bid int

    -- original attempt: this picks up a single MAX value rather than the key(s) just inserted
    SELECT @bid = MAX(BorrowerID) FROM Table_SoldProgression

    INSERT Table_SoldProgression (BorrowerID) VALUES (@bid)
END
GO
Another attempt:
CREATE Trigger dbo.Table_Borrower_Insert_Keys ON Table_Borrower AFTER INSERT AS
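A sketch of the usual shape for this (assuming Table_SoldProgression.BorrowerID is meant to receive the new Table_Borrower key): triggers do not take parameters; instead they read the inserted pseudo-table, which also covers multi-row inserts:

CREATE TRIGGER dbo.Table_Borrower_Insert_Keys
ON Table_Borrower
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- "inserted" holds every row added by the triggering statement
    INSERT INTO Table_SoldProgression (BorrowerID)
    SELECT i.BorrowerID
    FROM inserted AS i;
END
GO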
Hi,

DDLs and DMLs:

create table #job (
    jobID int identity(1,1) primary key,
    jobName varchar(25) unique not null,
    jobEndDate dateTime,
    jobComplete bit default(0),
    check (([JobEndDate] is null and [JobComplete] = 0) OR ([JobEndDate] is not null and [JobComplete] = 1))
);

Q1, with the check constraint. Sample data:

insert into #job (jobName) values ('first job');

Transaction A:
update #job set jobEndDate = '12/19/2003', JOBCOMPLETE = 1 where jobID = 3;
RESULTSET/STATUS = Success

update #job set jobEndDate = NULL, JOBCOMPLETE = 0 where jobID = 3;
RESULTSET/STATUS = Success

Transaction C:
update #job set jobEndDate = '12/19/2003' where jobID = 3;
RESULTSET/STATUS = Failure

How come the check constraint can't set a value which is preset in the check constraint? If that's just the way it works with MS SQL Server 2000, well, IMHO, it's limiting, because transaction C above is a valid one. Or maybe a check constraint is not fit for this purpose?

Maybe it doesn't make much sense for me to go into Q2, but I'll try.

-- create job's child table, task
create table #task (
    taskID int identity(1,1) primary key,
    taskName varchar(25) unique not null,
    taskEndDate dateTime,
    taskComplete bit default(0),
    jobID int not null references #job (jobID)
);
-- skip check constraint for taskEndDate and taskComplete for now

Now, the business rule says:
1) if all tasks are complete, then automatically set jobComplete in the #job table to yes;
2) the jobEndDate in the #job table must be >= the last/Max taskEndDate.

I tend to think a trigger would slow down data updates quite a bit, so I try to stay away from that for this purpose if possible. Always appreciate your ideas.
I have 3 tables named Parent, Child1, and Child2. Parent has 1 record, Child1 has 2 records, and Child2 has 3 records. The script:
select Parent.*, child1.f1, child2.f2
from child1
inner join Parent on parent.id = child1.id
inner join child2 on child1.id = child2.id
Running the above query gives me six rows, but I want only the rows of the child tables, not their Cartesian product.
/****** Object: Table [dbo].[Parent] Script Date: 06/18/2015 17:33:02 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[Parent](
    [id] [int] NOT NULL,
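One sketch of how to line the two child tables up side by side instead of multiplying them (the f1/f2 column names are taken from the query above; the ordering within each child is assumed): number each child's rows per id and pair them on that row number.

WITH c1 AS
(
    SELECT id, f1, ROW_NUMBER() OVER (PARTITION BY id ORDER BY f1) AS rn
    FROM child1
),
c2 AS
(
    SELECT id, f2, ROW_NUMBER() OVER (PARTITION BY id ORDER BY f2) AS rn
    FROM child2
),
paired AS
(
    -- full join on (id, rn) so neither child multiplies the other
    SELECT COALESCE(c1.id, c2.id) AS id, c1.f1, c2.f2
    FROM c1
    FULL OUTER JOIN c2 ON c2.id = c1.id AND c2.rn = c1.rn
)
SELECT p.*, paired.f1, paired.f2
FROM Parent AS p
JOIN paired ON paired.id = p.id;

With 1 parent row, 2 rows in Child1, and 3 rows in Child2, this returns 3 rows, with NULL in f1 on the unmatched third row.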
I am trying to figure out how I can create an SSIS package to insert into multiple tables. After the first insert, I want to take the ID created (an identity column) and then use it to insert into the other associated (foreign key) tables.
For example, I have a table Users. The primary key is an Identity column. Once the SSIS insert is complete, the bulk load of new users has an identity ID value for each row. What I want to do, during the same SSIS package, is to take each row as it is inserted and add rows to other tables. Like, UserDepartment - it has a foreign key for the user id and a foreign key to the department being added. And, as part of this I will need to get the latest ID value and possibly some other values and store them in variables.
I looked at multi-cast, but I don't know/think that this will work for me.
Does anyone know of a good example or article on this?
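If a set-based T-SQL step is an option inside (or after) the package, a sketch is to capture the new identity values with the OUTPUT clause and use that mapping to load the child tables. The Users and UserDepartment names come from the post; the staging table, the UserID column name, and the UserName matching key are assumptions:

DECLARE @NewUsers TABLE (UserID INT, UserName NVARCHAR(100));

-- Bulk insert the new users and capture each generated identity
INSERT INTO dbo.Users (UserName)
OUTPUT inserted.UserID, inserted.UserName INTO @NewUsers (UserID, UserName)
SELECT s.UserName
FROM dbo.UserStaging AS s;          -- assumed staging table filled by the SSIS data flow

-- Use the captured IDs to populate the association table
INSERT INTO dbo.UserDepartment (UserID, DepartmentID)
SELECT n.UserID, s.DepartmentID
FROM @NewUsers AS n
JOIN dbo.UserStaging AS s ON s.UserName = n.UserName;   -- assumed natural key for matching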
In my data flow I have an OleDBDataSource and an OleDB Command. Using these I am inserting data into master and child tables.
In the OleDBDataSource I am inserting into the master table and returning the IDs of the newly inserted rows. Next, in the OleDB Command, I am inserting into the child table using the ID returned from the OleDBDataSource.
It is working fine. Now I want to put this in a transaction so that if the insert into the child table fails, the changes made to the master table are rolled back. I tried setting the data flow's transaction option to Supported, but that does not seem to work for me. Please suggest the best approach for this.
Suppose I have two records in a parent-child relationship (actually I have many more such records). Does SSIS offer any support for inserting one record into the parent table and the other into the child table, and updating the foreign key of the child to point to the parent?
TIA,
Barkingdog
P.S. I have to do just this as part of the data warehouse test I'm running. It seems like a common task, but I don't recall anything in SSIS addressing the issue.
I want to have a linking table; for example, let's call it Claim. Based on the claim number, it needs to relate to one of, say, 6 different types of claims. The types of claims relate to their own individual parent tables (individual because each type of claim tracks completely different information). Does anyone have an idea of how to set this up?
Sample Structure
Table = Claim
Field 1 = ClaimTypeA_ID
Field 2 = ClaimTypeB_ID
Field 3 = ClaimTypeC_ID
Field 4 = ClaimTypeD_ID
Field 5 = ClaimTypeE_ID
Field 6 = ClaimTypeF_ID
The six fields relate to the six different tables' IDs.
If I do this, how do I store the data? Do I put 0s in each of the claim-type fields that are not used?
My Question: Does anyone know of a decent way (i.e. I do not want to loop to insert each row and check the SCOPE_IDENTITY() field or anything like that) to copy parent/child rows to their same respective table besides using the method I have listed below (my manager does not really like the idea of the "PreviousID" field)? More details are listed below.
My Table Situation: I have a parent table and a child table. Both tables have an identity column as the primary key. The relationship between the tables is established using the parent table's primary key to the column in the child table that stores the relationship. The identity column in both tables is the only column that is unique in the tables.
Sample data (made up and simplified for visualization purposes):

ParentTable - "Items"
ID  ItemCategory  Price
1   T-Shirt       $20
2   Blue Jeans    $50
3   T-Shirt       $40
My Need: I need to make a copy of the records (keeping the parent/child relationship intact) to the same table as the source records. For example, in my data example above, I may need to make a copy of all the "T-Shirt" items and their child records. As the parent records are copied, they will be assigned new keys since the primary key is an identity. Obviously, this new key needs to be used when creating the child records, but I need to somehow associate this new key to the new child records.
Possible solution: I know this can be achieved by adding another column to the parent table to store the "PreviousID" (INT NULL). Using this new field, when I want to copy the "T-Shirt" items, I would insert the new records and store the ID of the source records (i.e. the identity value of the row that was copied would be stored in this "PreviousID" field). Once the parent records have been copied, I could then insert the child records, joining on the "PreviousID" field to get the new ID to use when inserting the copies of the child records.
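One alternative that avoids the extra column (a sketch, assuming a child table such as ItemDetail with an ItemID foreign key; requires SQL Server 2008 or later) is a MERGE whose OUTPUT clause can reference the source row, which a plain INSERT ... OUTPUT cannot. That gives an old-ID to new-ID map in one pass:

DECLARE @IdMap TABLE (OldID INT, NewID INT);

-- Copy the parent rows and capture old-ID -> new-ID pairs in one statement
MERGE dbo.Items AS tgt
USING (SELECT ID, ItemCategory, Price FROM dbo.Items WHERE ItemCategory = 'T-Shirt') AS src
    ON 1 = 0                                    -- never matches, so every source row is inserted
WHEN NOT MATCHED THEN
    INSERT (ItemCategory, Price) VALUES (src.ItemCategory, src.Price)
OUTPUT src.ID, inserted.ID INTO @IdMap (OldID, NewID);

-- Copy the child rows, re-pointing them at the new parent keys
INSERT INTO dbo.ItemDetail (ItemID, DetailColumn)        -- ItemDetail/DetailColumn are assumed names
SELECT m.NewID, d.DetailColumn
FROM dbo.ItemDetail AS d
JOIN @IdMap AS m ON m.OldID = d.ItemID;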
I am trying to insert data into two tables with an SSIS package. One table has a foreign key relationship to the other table's primary key. When I try to run the package, it just seems to hang in BIDS. I have found two ways around the issue, but I don't like either approach. Is there a way to set which table gets inserted first?
If I uncheck the check constraints option on the child table, the package will run very quickly but this option alters the child table and basically disables the constraint. I don't like this option because it is altering the database.
The second approach is to set the commit level on both tables to say 10,000 and make sure that the multicast component has the first output path moved to the parent table. I don't like this option because I am not sure if the records are backed out if the package should abend after records have been committed.
I have 6 tables which are very large in row count and need to be partitioned for better manageability.
Little info: every day, 300 million records are inserted and 300 million records are deleted in the tables below. We maintain only 8 days' worth of data in these tables, which is why records older than 8 days are continuously deleted.
Master table (has [ID], [Timestamp]):
Sample - 2,578,106
Child tables (the foreign key [ID] is common to all of them; there is no timestamp column in the child tables):
dbo.ConnectionDB - 1,147,578,048
dbo.ConnectionSS - 876,458,321
dbo.ConnectionRT - 118,133,857
dbo.ConnectionSample - 100,038,535
dbo.Command - 100,032,235
I would like to partition the above child tables based on the IDs that are inserted every 4 hours. Meaning, all IDs inserted within a given 4-hour window should be in one partition.
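A sketch of the usual plumbing, assuming [ID] is an ever-increasing BIGINT and that a job computes the boundary ID for each 4-hour window from the Sample table (the boundary values below are placeholders):

-- Placeholder boundary IDs; in practice a job adds a new boundary every 4 hours
CREATE PARTITION FUNCTION pfConnectionById (BIGINT)
AS RANGE RIGHT FOR VALUES (100000000, 200000000, 300000000);

CREATE PARTITION SCHEME psConnectionById
AS PARTITION pfConnectionById ALL TO ([PRIMARY]);

-- Rebuilding a child table's clustered index on the scheme partitions the table by ID
-- (an existing clustered index or primary key would have to be rebuilt onto the scheme)
CREATE CLUSTERED INDEX CIX_ConnectionDB_ID
    ON dbo.ConnectionDB (ID)
    ON psConnectionById (ID);

New 4-hour windows are then opened with ALTER PARTITION FUNCTION ... SPLIT RANGE, and old ones can be dropped cheaply with partition SWITCH followed by MERGE RANGE, which also covers the 8-day purge described in the next post.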
I have 6 tables which are very large in row count, and records older than 8 days need to be deleted.
Little info: every day, 300 million records are inserted into the tables below. We should maintain only 8 days' worth of data in these tables. How do I implement a purge script that can delete records from all the tables at the same time, with optimized parallelism?
Master table (has [ID], [Timestamp]):
Sample - 2,578,106
Child tables: the foreign key [ID] is common to all the tables. There is no timestamp column in the child tables, so the records need to be deleted based on Min(ID) from Sample.
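A sketch of one common pattern (table and column names are taken from the posts above; the batch size, the assumption that IDs increase with time, and the cutoff logic are mine): derive a cutoff ID from Sample's timestamp, then delete from each child table in batches so locks and log growth stay manageable.

DECLARE @CutoffID   BIGINT,
        @BatchSize  INT = 100000,
        @Rows       INT = 1;

-- Everything with an ID below this value is older than 8 days (assumes ID grows with time)
SELECT @CutoffID = MIN(ID)
FROM dbo.Sample
WHERE [Timestamp] >= DATEADD(DAY, -8, GETDATE());

-- Child tables first (they reference Sample), then the master table
WHILE @Rows > 0
BEGIN
    DELETE TOP (@BatchSize) FROM dbo.ConnectionDB WHERE ID < @CutoffID;
    SET @Rows = @@ROWCOUNT;
END
-- ... repeat the same loop for dbo.ConnectionSS, dbo.ConnectionRT, dbo.ConnectionSample, dbo.Command ...

SET @Rows = 1;
WHILE @Rows > 0
BEGIN
    DELETE TOP (@BatchSize) FROM dbo.Sample WHERE ID < @CutoffID;
    SET @Rows = @@ROWCOUNT;
END

For parallelism, each child table's loop can run as its own SQL Agent job so the five children are purged concurrently, with the master table purged last.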
I have a database with many tables. I would like to delete all rows with practiceID = 55 from all parent tables and all corresponding rows from their child tables. The tables are linked with foreign key constraints (but there is no ON DELETE CASCADE).

How do I write generalized code for removing rows from both parent and child tables?

The query should pick the parent tables one by one, delete the rows with practiceID = 55, and also delete all corresponding rows from their child tables.
I have used Aasim Abdullah's stored procedure (link below) to dynamically generate code for deleting from child tables based on a parent table with a certain filter condition. But I am getting output which is not correct (Query 1). I would like to have the output shown in Query 2.
Link:
[URL]
--[Patient] is the parent table, [Case] is the child table, and [ChartInstanceCase] is the grandchild
--When deleting from the grandchild table, it should be joined to the child table first, followed by the parent
--- query 1
DELETE TOP (100000) FROM [dbo].[ChartInstanceCase]
FROM [dbo].[Patient]
INNER JOIN [dbo].[Case] ON [Patient].[PatientID] = [Case].[PatientID]
INNER JOIN [dbo].[ChartInstanceCase] ON [Case].[CaseID] = [ChartInstanceCase].[CaseId]
WHERE [Patient].PracticeID = '55';
--Query 2
DELETE TOP (100000) [dbo].[ChartInstanceCase]
FROM [dbo].[ChartInstanceCase]
INNER JOIN [dbo].[Case] ON [ChartInstanceCase].[CaseId] = [Case].[CaseID]
INNER JOIN [dbo].[Patient] ON [Patient].[PatientID] = [Case].[PatientID]
WHERE [Patient].PracticeID = '55';
How do I modify the SP 'dbo.uspCascadeDelete' to get the output as in Query 2?
I have two tables, tblCharge and tblSentence. For each charge there are one or more sentences. If I join the two tables on ChargeID, such as:

select * from tblCharge c join tblSentence s on c.ChargeID = s.ChargeID

all the sentences for each charge are returned. There is a field called DateCreated in tblSentence, and I only want the latest sentence for each charge returned. How can I do this? I tried to create a function to get the latest sentence for a ChargeID, like the following:

select * from tblCharge c join tblSentence s on s.SentenceID = LatestSentenceID(c.ChargeID)

but it runs very slowly. Any ideas on how to improve it? Thanks.
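Two common set-based alternatives to the scalar function, sketched with the table and column names from the post; both usually perform far better because the optimizer can use an index on (ChargeID, DateCreated):

-- Option 1: ROW_NUMBER (SQL Server 2005 and later)
WITH latest AS
(
    SELECT s.*,
           ROW_NUMBER() OVER (PARTITION BY s.ChargeID ORDER BY s.DateCreated DESC) AS rn
    FROM tblSentence AS s
)
SELECT c.*, latest.SentenceID, latest.DateCreated
FROM tblCharge AS c
JOIN latest ON latest.ChargeID = c.ChargeID AND latest.rn = 1;

-- Option 2: APPLY with TOP (1)
SELECT c.*, latest.*
FROM tblCharge AS c
OUTER APPLY
(
    SELECT TOP (1) ts.*
    FROM tblSentence AS ts
    WHERE ts.ChargeID = c.ChargeID
    ORDER BY ts.DateCreated DESC
) AS latest;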
In a parent/child table structure (order/order detail) I have used identity columns for the order detail, or compound primary keys. I find a single identity column on the detail table easier to manage (with an FK to the parent), but what ends up being easiest for the user is to have an order (say #3456) with detail items numbered sequentially from 1 to n. This reflects a compound key structure, but generating the second field is a pain. Is there any way to tie an identity field to the parent key so that it will generate this number for me automatically?
Request ID   Parent ID   Account Name    Address
1452         1254789     Wendy's         Atlanta, Georgia
1453         1254789     Wendy's         Norcross, Georgia
1456         1254789     Waffle House    Atlanta, Georgia
child | parent
a1    | a
a2    | a
a11   | a1
b1    | b
b11   | b1
b111  | b1

Here is my table. I want to get all children of "a", i.e. {a1, a11, a2}. I mean recursively: children, children of children, children of children of children, etc., to whatever depth the table contains. How can I formulate the SELECT query?
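A sketch using a recursive CTE (SQL Server 2005 or later; the table name Hierarchy is an assumption standing in for the real one):

-- Assumed table: dbo.Hierarchy(child varchar(10), parent varchar(10))
WITH descendants AS
(
    -- anchor: direct children of 'a'
    SELECT h.child, h.parent
    FROM dbo.Hierarchy AS h
    WHERE h.parent = 'a'

    UNION ALL

    -- recursive step: children of anything already found
    SELECT h.child, h.parent
    FROM dbo.Hierarchy AS h
    JOIN descendants AS d ON h.parent = d.child
)
SELECT child
FROM descendants;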
1. Display all parent rows, ORDER BY ItemOrder (no need to sort by ItemDate).
2. Display all child rows right after their parent (ORDER BY ItemOrder if the ItemDate values are the same, else ORDER BY ItemDate).
3. Display all grandchild rows right after their parent (ORDER BY ItemOrder if the ItemDate values are the same, else ORDER BY ItemDate).