I'm using a "Execute SQL Task" to run a stored procedure to populate a Fact table in our dimensional datawarehouse. There is one row which violates a foreign key constraint, which causes the entire task to fail so zero rows are loaded. Is there any way to grab the offending row and send it off to some holding ground and go ahead and load the rest of the rows.
I'm using Execute SQL Task mostly because I am very comfortable with writing SQL whereas the rest of SSIS is a bit unfamiliar to me, but I'm guessing that to handle error rows I might have to change to a different kind of task ?
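One option that keeps everything inside the Execute SQL Task is to split the INSERT in the stored procedure itself: load only the rows whose keys exist in the dimension, and divert the rest to a holding table. A minimal sketch, assuming a hypothetical FactSales/DimCustomer pair and a FactSales_Rejects holding table with the same shape as the staging source (all names are placeholders):

-- Rows whose CustomerKey has no match in the dimension go to a holding table
INSERT INTO dbo.FactSales_Rejects (CustomerKey, SalesAmount)
SELECT s.CustomerKey, s.SalesAmount
FROM staging.Sales AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.DimCustomer AS d WHERE d.CustomerKey = s.CustomerKey);

-- Everything else loads into the fact table without tripping the foreign key
INSERT INTO dbo.FactSales (CustomerKey, SalesAmount)
SELECT s.CustomerKey, s.SalesAmount
FROM staging.Sales AS s
WHERE EXISTS (SELECT 1 FROM dbo.DimCustomer AS d WHERE d.CustomerKey = s.CustomerKey);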
I have a job that creates .CSV files at predetermined intervals (the delimiter is a tab stop), and every 5 minutes I have to retrieve these files one at a time and import them into the relevant database.
My solution was to create a list of the files and insert this list into a temporary table, where a cursor would extract each filename one at a time and insert that name into a BCP statement. The BCP statement is constructed using dynamic SQL.
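For reference, a stripped-down version of that pattern might look like the sketch below. The database, table, server and path names are placeholders, and it assumes xp_cmdshell is enabled for the BCP call; with -c (character mode) the default field terminator is already a tab, which matches the files described.

DECLARE @FileName nvarchar(260), @Cmd nvarchar(1000);

DECLARE file_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT FileName FROM #FileList;          -- temp table holding the directory listing

OPEN file_cursor;
FETCH NEXT FROM file_cursor INTO @FileName;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Build the BCP command for the current file and run it (char mode = tab-delimited by default)
    SET @Cmd = 'bcp MyDb.dbo.TargetTable in "C:\Drop\' + @FileName + '" -c -T -S MyServer';
    EXEC master.dbo.xp_cmdshell @Cmd;

    FETCH NEXT FROM file_cursor INTO @FileName;
END

CLOSE file_cursor;
DEALLOCATE file_cursor;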
I have a script that inserts rows from a linked server. It basically looks like this:
use mydb;
go
insert into my.table (col1, col2, ...)
select col1, col2, ...
from LinkedServer.db.my.table;
The DDL for both tables is identical: 550 columns (de-normalized from a data warehouse), of which 548 are varchar(max). The remaining two are varchar(255). (not my design! but unchangeable at the moment).
Running my query raises an error:
Cannot create a row of size 9948 which is greater than the allowable maximum row size of 8060. The statement has been terminated.
Now, inserting rows on the table on the LinkedServer has never been an issue. (Since most columns are varchar(max), SQL has the option to store the data off-row.) The data is there, intact and whole. However, when pulling it into my target server using a LinkedServer (4-part naming), I get the error.
Three pairs of eyes have confirmed that the table definitions are identical on both source and target.
I recently updated the datatype of a sproc parameter from bit to tinyint. When I executed the sproc with the updated parameter, it appeared to succeed and returned "1 row(s) affected" in the console. However, the update triggered by the sproc did not actually work.
The table column is a bit, which only allows 0 or 1, and the sproc was passing a value of 2, so the column did not end up with the intended value. However, the sproc did not return an error and appeared to succeed. Is there a way to configure the database or the sproc to return an error message when this type of error occurs?
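One thing worth checking before blaming the table: converting a nonzero integer to bit does not raise an error in SQL Server; it silently becomes 1, so passing 2 into a bit column or parameter can look like a successful update. A quick demonstration:

DECLARE @p tinyint;
SET @p = 2;

-- Conversion of any nonzero value to bit yields 1, with no error raised
SELECT CONVERT(bit, @p) AS ConvertedValue;   -- returns 1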
I have a parent package that calls child packages inside a For Each container. When I debug/run the parent package (from VS), I get the following error message: Warning: The Execution method succeeded, but the number of errors raised (3) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
It appears to be failing while executing the child package. However, the logs (via the "progress" tab) for both the parent package and the child package show no errors other than the one listed above (and that shows in the parent package log). The child package appears to validate completely without error (all components are green and no error messages in the log). I turned on SSIS logging to a text file and see nothing in there either.
If I bump up the MaximumErrorCount in the parent package and in the Execute Package Task that calls the child package to 4 (one above the error count indicated in the message above), the whole thing executes successfully. I don't want to leave the Max Error Count set like this. Is there something I am missing? For example, are there errors that do not get logged by default? I get some warnings; does a certain number of warnings equal an error?
Starwin writes "when i execute DBCC CHECKDB, DBCC CHECKCATALOG I reveived the following error. how to solve it?
Server: Msg 8909, Level 16, State 1, Line 1 Table error: Object ID -2093955965, index ID 711, page ID (3:2530). The PageId in the page header = (34443:343146507). . . . . . . . .
CHECKDB found 0 allocation errors and 1 consistency errors in table '(Object ID -1635188736)' (object ID -1635188736). CHECKDB found 0 allocation errors and 1 consistency errors in table '(Object ID -1600811521)' (object ID -1600811521).
. . . . . . . .
Server: Msg 8909, Level 16, State 1, Line 1 Table error: Object ID -8748568, index ID 50307, page ID (3:2497). The PageId in the page header = (26707:762626875). Server: Msg 8909, Level 16, State 1, Line 1 Table error: Object ID -7615284, index ID 35836, page ID (3:2534). The PageId in the page heade"
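Not an answer to the root cause, but the sequence usually tried (after taking a backup, and ideally restoring from a known-good backup instead, since repair can discard data) is to run CHECKDB with a repair option in single-user mode. A hedged sketch, with the database name as a placeholder:

-- Take a full backup first; REPAIR_ALLOW_DATA_LOSS can discard damaged pages
ALTER DATABASE MyDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

DBCC CHECKDB ('MyDb', REPAIR_ALLOW_DATA_LOSS);

ALTER DATABASE MyDb SET MULTI_USER;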
JobRequirements (A)
  JobID int
  QualificationTypeID int

EmployeeQualifications (B)
  EmployeeID int
  QualificationTypeID int

Employee (C)
  EmployeeID int
  EmployeeName int
I need to return a list of all employees fit for a specific job... The criterion is that only employees who have all the job requirements are returned. So if a job has 3 requirements and the employee has just 2 of those qualifications, they are not returned. Likewise, the employee might have more qualifications than the job requires, but unless the employee has all the specific qualifications the job requires, they are not included. If an employee has all the job qualifications plus extra qualifications, they should be returned...
How do I return only those records where all the child records are present in the other table?
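This is the classic "relational division" problem. One common way to express it is to count an employee's matching qualifications for the job and compare that count against the job's requirement count. A sketch using the tables as described in the question (column names assumed to be exactly as listed):

DECLARE @JobID int;
SET @JobID = 1;   -- the job being staffed

SELECT c.EmployeeID, c.EmployeeName
FROM Employee AS c
JOIN EmployeeQualifications AS b
  ON b.EmployeeID = c.EmployeeID
JOIN JobRequirements AS a
  ON a.QualificationTypeID = b.QualificationTypeID
 AND a.JobID = @JobID
GROUP BY c.EmployeeID, c.EmployeeName
-- keep only employees whose matched qualifications cover every requirement
HAVING COUNT(DISTINCT a.QualificationTypeID) =
       (SELECT COUNT(*) FROM JobRequirements WHERE JobID = @JobID);

Extra qualifications do not matter here, because only qualifications that match a requirement for @JobID are counted.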
I am using SQL Server 2005. I am stuck on a strange problem. I use a view in my stored procedure, and when I run the stored procedure, some of the rows get skipped: if the select query should return 10 rows, it returns 5 or some other number, but not all of them. The records displayed also vary randomly; sometimes it displays records 1, 2, 3, 4, 5, the next time 5, 6, 7, 8, and another time 2, 4, 6, 8.
But if I run the queries written in the SP separately, they return all the rows. Please help me understand why this is happening.
There are indexes in the tables.
This problem started happening after I shrank the database and rebuilt the indexes. I have rebuilt the indexes several times and also updated the statistics, but nothing has improved.
When exporting data from Excel to a SQL Server table using an SSIS package, after the export is done, how would I check that the source row count equals the destination row count, and throw an error message if it does not?
How can we handle transactions in SSIS? 1. When some error happens during the export and the rows are not fully exported to the destination, how do we roll back the transaction in SSIS?
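For the count check, one approach is to land the data in a staging table, compare its row count against the destination after the load, and raise an error so the package (or calling job) fails. The rollback side is usually handled by setting TransactionOption to Required on the container holding the Data Flow rather than in code. A hedged sketch of the statement an Execute SQL Task could run, with both table names as placeholders:

DECLARE @src int, @dst int;

SELECT @src = COUNT(*) FROM staging.ExcelRows;   -- placeholder staging table
SELECT @dst = COUNT(*) FROM dbo.TargetTable;     -- placeholder destination table

IF @src <> @dst
    RAISERROR ('Row count mismatch: %d staged vs %d loaded.', 16, 1, @src, @dst);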
I have a conditional split in an SSIS package - one split is where, if rows are returned according to a specific rule, those rows are inserted into a Recordset Destination, which points to a variable of Object type.
How can I use this variable to email fellow users? For example, what I would like is: if ANY rows are returned to the Object variable (1 or more), then I would like to execute an email SP that we have on our server.
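One common pattern is to also run the matching rows through a Row Count transformation into an Int32 variable, then put an expression such as @[User::MatchRowCount] > 0 on the precedence constraint leading to an Execute SQL Task that calls the email proc (the variable name here is just an example). The statement inside that task could be as simple as the sketch below; sp_send_dbmail is shown as a stand-in for your own email SP, and the profile and recipients are placeholders:

-- Runs only when the precedence-constraint expression says rows were found
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'DefaultProfile',
    @recipients   = 'team@example.com',
    @subject      = 'Conditional split produced rows',
    @body         = 'At least one row met the rule; please review the output.';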
Hello, I have a survey (30 questions) application in a SQL Server db. The application uses several relational tables. The results are arranged so that each answer is on a separate row:
user1 answer1
user1 answer2
user1 answer3
user2 answer1
user2 answer2
user2 answer3
For statistical analysis I need to transfer the results to an Excel spreadsheet (for later use in SPSS). In the spreadsheet I need the results arranged so that each user is on a single row with all of that user's answers on that row (a column for each answer):
user1 answer1 answer2 answer3
user2 answer1 answer2 answer3
How can this be done? How can all answers of a user appear on a single row?
Thanks, Danny.
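If the answers live in one row per user per question, a PIVOT (or a MAX(CASE ...) per question) turns them into one row per user, which can then be exported to Excel. A minimal sketch assuming a hypothetical SurveyAnswers(UserID, QuestionID, Answer) shape with questions numbered 1 to 30 (only three shown):

SELECT UserID,
       [1] AS Answer1, [2] AS Answer2, [3] AS Answer3        -- ... continue up to [30]
FROM (SELECT UserID, QuestionID, Answer
      FROM dbo.SurveyAnswers) AS src
PIVOT (MAX(Answer) FOR QuestionID IN ([1], [2], [3])) AS p;  -- ... continue up to [30]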
Hi, I tried designing an SSIS package which loads only those rows that differ from the existing rows in the table. I need to timestamp the existing row with an inactive date when an update of that row is inserted (e.g. same StudentID), and stamp the newly inserted row with an insert timestamp to indicate that it is the currently active row. In short, I need to maintain history and current rows in the same table. I tried using the Slowly Changing Dimension component but could not figure it out. Anyone with experience or knowledge of this kind of data load, please respond.
Example data:
Existing data:
StudentID  Name  AGE  Sex  ADDRESS  INSERTTIME  UPDATETIME
12         DDS   14   M    XYZ ST   2/4/06      NULL
14         hgS   17   M    ABC ST   3/4/07      NULL
New row to insert would be
12 DDS 15 M DFG ST 4/5/07
the data should reflect
StudentID  Name  AGE  Sex  ADDRESS  INSERTTIME  UPDATETIME
12         DDS   14   M    XYZ ST   2/4/06      4/5/07
12         DDS   15   M    DFG ST   4/5/07      NULL
14         hgS   17   M    ABC ST   3/4/07      NULL
Please provide your input as much as you can even though it might not be a 100% solution.
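In case a sketch helps: the history/current pattern can also be done with a plain UPDATE followed by an INSERT against a staging copy of the incoming rows. The table names below (dbo.Student, staging.Student) are assumptions, the column names match the sample data, and "changed" is simplified to "any of the non-key columns differ":

-- 1) Close out the currently active row for students that arrive with changes
UPDATE t
SET    t.UPDATETIME = GETDATE()
FROM   dbo.Student AS t
JOIN   staging.Student AS s
       ON s.StudentID = t.StudentID
WHERE  t.UPDATETIME IS NULL                                   -- current row
  AND (t.Name <> s.Name OR t.AGE <> s.AGE
       OR t.Sex <> s.Sex OR t.ADDRESS <> s.ADDRESS);          -- something changed

-- 2) Insert the new version as the active row (also covers brand-new students)
INSERT INTO dbo.Student (StudentID, Name, AGE, Sex, ADDRESS, INSERTTIME, UPDATETIME)
SELECT s.StudentID, s.Name, s.AGE, s.Sex, s.ADDRESS, GETDATE(), NULL
FROM   staging.Student AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.Student AS t
                   WHERE t.StudentID = s.StudentID
                     AND t.UPDATETIME IS NULL
                     AND t.Name = s.Name AND t.AGE = s.AGE
                     AND t.Sex = s.Sex AND t.ADDRESS = s.ADDRESS);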
I created a trigger which checks whether a table is updated; if it is, it copies the values of the updated row into another control table. Now I want to read the contents of the control table from BizTalk and, after reading, delete them. Can anyone suggest a suitable way to do this?
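For the "read it and then delete it" half, one approach that works well with a polling consumer such as BizTalk's SQL adapter is a DELETE with an OUTPUT clause, which removes the rows and returns them in a single statement (SQL Server 2005 or later). A minimal sketch, with the control table name assumed:

-- Atomically hand the queued rows to the caller and remove them from the table
DELETE FROM dbo.Control_Table
OUTPUT deleted.*;   -- the deleted rows come back as the statement's result set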
I have the following variables VehicleID, TransactDate, TransactTime, OdometerReading, TransactCity, TransactState.
VehicleID is the unique vehicle ID, OdometerReading is the Odometer Reading, and the others are information related to the transaction time and location of the fuel card (similar to a credit card).
The records will be first grouped and sorted by VehicleID, TransactDate, TransactTime and OdometerReading. Then all records where the Vehicle ID and TransactDate is same for consecutive rows, AND TransactCity or TransactState are different for consecutive rows should be printed.
I also would like to add two derived variables.
1. Miles will be a derived variable that is the difference between consecutive odometer readings for the same Vehicle ID.
2. TimeDiff will be the second derived variable that will categorize the time difference for a particular vehicle on the same day.
My report should look like:
VehID  TrDt       TrTime    TimeDiff  Odometer  Miles  TrCity    TrState
1296   1/30/2008  08:22:42  0:00:00   18301     000    Omaha     NE
1296   1/30/2008  15:22:46  7:00:04   18560     259    KEARNEY   NE
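On SQL Server 2005 a ROW_NUMBER self-join can line each transaction up with the previous one for the same vehicle, which gives the Miles column directly; the TimeDiff bucket can be derived from DATEDIFF over the same current/previous pair. A sketch with the column names from the question and an assumed table name of dbo.FuelTransactions:

WITH Ordered AS (
    SELECT VehicleID, TransactDate, TransactTime, OdometerReading,
           TransactCity, TransactState,
           ROW_NUMBER() OVER (PARTITION BY VehicleID
                              ORDER BY TransactDate, TransactTime, OdometerReading) AS rn
    FROM dbo.FuelTransactions                                  -- placeholder table name
)
SELECT cur.VehicleID, cur.TransactDate, cur.TransactTime,
       cur.OdometerReading - prev.OdometerReading AS Miles,
       cur.TransactCity, cur.TransactState
FROM Ordered AS cur
JOIN Ordered AS prev
  ON prev.VehicleID = cur.VehicleID
 AND prev.rn = cur.rn - 1
WHERE prev.TransactDate = cur.TransactDate                     -- same vehicle, same day
  AND (prev.TransactCity <> cur.TransactCity
       OR prev.TransactState <> cur.TransactState);            -- location changed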
Environment: running this code on my PC via VS 2005; .NET version 2.0.50727 on the server (shown in IIS); the code is ASP.NET 2.0 / a VB.NET console application; SSIS 2005.
Problem & Info:
I am bringing in an Excel file. I need to first strip out any non-detail rows, such as the breaks you see with totals and the like. In the end I should have only detail rows left before I start moving them into my SQL table. I'm not sure how to strip this information out in SSIS, specifically how to pick the right component and how to actually configure it to do this, based on my Excel file here: http://www.webfound.net/excelfile.xls
Then, I assume, I just use a source component to take the columns in the Excel file and push each one into the corresponding column of my SQL Server table via an OLE DB destination. I have used a Flat File Source in the past to do this with a comma-delimited txt file, but never tried it with Excel.
Desired Help:
How to perform
1) stripping out all undesired rows 2) importing each column into sql table
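If wrestling with the Excel source proves awkward, one workable alternative (the SSIS-native route would be a Conditional Split after the Excel Source) is to load the sheet as-is into an all-varchar staging table and strip the non-detail rows in T-SQL before moving them to the real table. A hedged sketch, assuming detail rows can be recognized by a numeric value in the first column and with all names as placeholders:

-- Remove subtotal/blank/header rows that came over from the spreadsheet
DELETE FROM staging.ExcelImport
WHERE Column1 IS NULL OR ISNUMERIC(Column1) = 0;

-- Then move the clean detail rows into the destination table
INSERT INTO dbo.TargetTable (Col1, Col2)
SELECT Column1, Column2
FROM staging.ExcelImport;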
I am facing an issue where a Data Flow Task fails after loading 29,000 rows out of about 200,000 (2 lakh) rows.
I am loading data from .csv file to OLE DB Destination.
This data flow task is placed inside For each loop container.
Is this issue caused by a performance setting in the SSIS package, such as the buffer size?
The error is below:
DFT Load Data from FlatFile:Error: The conditional operation failed. DFT Load Data from FlatFile:Error: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR.
The "DER Add Calc Columns" failed because error code 0xC0049063 occurred, and the error row disposition on "DER Add Calc Columns.Outputs[Derived Column Output].Columns[M_VALUE_NUM]" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure.
DFT Load Data from FlatFile:Error: SSIS Error Code DTS_E_PROCESSINPUTFAILED. The ProcessInput method on component "DER Add Calc Columns" (48) failed with error code 0xC0209029 while processing input "Derived Column Input" (49). The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running. There may be error messages posted before this with more information about the failure.
;WITH ctePreAgg AS (
    select top 500 act_reference "ActivityRef",
        row_number() over (partition by act_reference order by act_reference) as rowno,
        t3.s_initials "Initials"
    from mytablestuff
    order by act_reference
[code]...
But what I would love to do next is take each of the above rows and return the initials either in one column, with the nulls and duplicate values removed, separated by commas...
OR the above, but using a variable number of columns based on the maximum number of different initials per row. This is not strictly required, but might be neater for further work on the view.
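On SQL Server 2005/2008 the usual way to collapse the initials into a single comma-separated column per activity is the STUFF(... FOR XML PATH('')) trick (STRING_AGG only exists from SQL Server 2017). A hedged sketch: the column names are taken from the CTE, but it assumes the initials live on the same base table for brevity, whereas the original pulls them from a joined table aliased t3.

SELECT t.act_reference AS ActivityRef,
       STUFF((SELECT DISTINCT ', ' + t2.s_initials          -- DISTINCT drops duplicates
              FROM mytablestuff AS t2
              WHERE t2.act_reference = t.act_reference
                AND t2.s_initials IS NOT NULL                -- drops the NULLs
              FOR XML PATH('')), 1, 2, '') AS Initials       -- strip the leading ', '
FROM mytablestuff AS t
GROUP BY t.act_reference;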
I have a SQL script to insert data into a table as below:
INSERT into [SRV1INS2].BB.dbo.Agents2 select * from [SRV2INS14].DD.dbo.Agents
I just want to set a trigger on the Agents2 table which deletes all rows in the table before any insert operation carried out by the above statement. I had the table trigger below on the [SRV1INS2].BB.dbo.Agents2 table, but it did not do what I intended.
USE [BB]
GO
/****** Object: Trigger Script Date: 24/07/2015 3:41:38 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
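The preamble above is where the posted script stops. For what it's worth, if the original attempt was an AFTER INSERT trigger, it cannot empty the table "before" the insert: it fires after the rows are already in, so a blanket DELETE inside it would also remove the newly inserted rows. An INSTEAD OF INSERT trigger can do what was described, since it replaces the insert and can delete the existing rows and then re-insert the incoming ones from the inserted pseudo-table. A sketch with an abbreviated, assumed column list (adjust to the real Agents2 definition, and test that it also fires for the remote four-part-name insert):

CREATE TRIGGER trg_Agents2_ReplaceOnInsert
ON dbo.Agents2
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;

    DELETE FROM dbo.Agents2;                      -- clear the previous snapshot

    INSERT INTO dbo.Agents2 (AgentID, AgentName)  -- list the real columns here
    SELECT AgentID, AgentName
    FROM inserted;
END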
I have a table with the following structure in sql server 2005
create table app( sno int, name varchar(50), add varchar(50), city varchar(50), state varchar(50) )
it contains the following data:

sno  name  add         city     state
1    mark  street no1  newcity  newstate
2    mark  street no1  newcity  newstate
3    mark  street no1  newcity  newstate
4    mark  street no1  newcity  newstate
5    mark  street no1  newcity  newstate
6    mark  street no1  newcity  newstate
7    mark  street no1  newcity  newstate
8    mark  street no1  newcity  newstate
9    mark  street no1  newcity  newstate
10   mark  street no1  newcity  newstate
11   mark  street no1  newcity  newstate
12   mark  street no1  newcity  newstate
13   mark  street no1  newcity  newstate
14   mark  street no1  newcity  newstate
15   mark  street no1  newcity  newstate
16   mark  street no1  newcity  newstate
17   mark  street no1  newcity  newstate
18   mark  street no1  newcity  newstate
19   mark  street no1  newcity  newstate
20   mark  street no1  newcity  newstate
I want to retrieve the previous 5 records, the next 5 records, and the record that meets the WHERE condition of a select query.
When I run
select sno,add,name,city,state from app where sno=7
I want the following result
sno  name  add         city     state
2    mark  street no1  newcity  newstate  |
3    mark  street no1  newcity  newstate  |
4    mark  street no1  newcity  newstate  | -- previous 5 records
5    mark  street no1  newcity  newstate  |
6    mark  street no1  newcity  newstate  |
7    mark  street no1  newcity  newstate    -- searched record
8    mark  street no1  newcity  newstate  |
9    mark  street no1  newcity  newstate  |
10   mark  street no1  newcity  newstate  | -- next 5 records
11   mark  street no1  newcity  newstate  |
12   mark  street no1  newcity  newstate  |
if there is a method to get the above result set, kindly post the query.
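If sno is gap-free, filtering on sno BETWEEN @sno - 5 AND @sno + 5 is enough; when there may be gaps, a ROW_NUMBER approach works on SQL Server 2005. A sketch of the latter, with the add column bracketed because ADD is a reserved word:

DECLARE @sno int;
SET @sno = 7;

WITH numbered AS (
    SELECT sno, name, [add], city, state,
           ROW_NUMBER() OVER (ORDER BY sno) AS rn
    FROM dbo.app
)
SELECT n.sno, n.name, n.[add], n.city, n.state
FROM numbered AS n
JOIN numbered AS target
  ON target.sno = @sno
WHERE n.rn BETWEEN target.rn - 5 AND target.rn + 5   -- 5 before, the match, 5 after
ORDER BY n.sno;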
Hi, I need to write a query which I have never attempted before and could do with some help. I have a Groups table and a Users_Groups lookup table. In this model, users can only be assigned to one group. If a group is deleted, a trigger should fire and delete any rows in Users_Groups having a matching Groups.Ref. Unfortunately, the trigger hasn't been firing, and I now have a load of defunct rows in Users_Groups relating users to groups which no longer exist. I now need to find all of these defunct rows in Users_Groups so that I can delete them. How can I find rows in Users_Groups where the parent rows and refs in Groups are missing? I've tried searching the net for something similar but don't even know how to word the search properly to get any half-relevant results. Cheers. PS: I do realise I need to tighten the constraints on my database.
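A NOT EXISTS (or LEFT JOIN ... IS NULL) query finds the orphans, and the same shape works for the DELETE. A sketch assuming Users_Groups has a GroupRef column that should match Groups.Ref (adjust the column names to the real schema):

-- Review the defunct rows first
SELECT ug.*
FROM dbo.Users_Groups AS ug
WHERE NOT EXISTS (SELECT 1 FROM dbo.Groups AS g WHERE g.Ref = ug.GroupRef);

-- Then remove them
DELETE ug
FROM dbo.Users_Groups AS ug
WHERE NOT EXISTS (SELECT 1 FROM dbo.Groups AS g WHERE g.Ref = ug.GroupRef);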
I'm stuck. I have a table that I want to pull some info from, but I don't know how.
There are two columns: one is the call_id column, which is not unique, and the other is the call_status column, which again is not unique. The call_status column can have several values: ('1 NEW','3 3RD RESPONDED','7 3RD RESOLVED','6 PENDING','3 SEC RESPONDED','7 SEC RESOLVED').
The call_id could be any number. I only want the 6 PENDING rows where there are other rows for that call_id which have either 3 3RD RESPONDED or 7 3RD RESOLVED. If someone knows how, it would be a great help.
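An EXISTS against the same table, keyed on call_id, expresses that condition directly. A sketch, with the table name assumed to be dbo.calls:

SELECT c.call_id, c.call_status
FROM dbo.calls AS c
WHERE c.call_status = '6 PENDING'
  AND EXISTS (SELECT 1
              FROM dbo.calls AS other
              WHERE other.call_id = c.call_id
                AND other.call_status IN ('3 3RD RESPONDED', '7 3RD RESOLVED'));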
Dear Gurus, I have a table with the following entries.

Table name = Customer
Name     Weight
Sanjeev  85
Sanjeev  75
Rajeev   80
Rajeev   45
Sandy    35
Sandy    30
Harry    15
Harry    45

I need output as follows:
Name     Weight
Sanjeev  85
Rajeev   80
Sandy    30
Harry    45
OR
Name     Weight
Sanjeev  75
Rajeev   45
Sandy    35
Harry    15

i.e. only distinct names should be displayed, each with only one value of Weight. I tried GROUP BY on the Name column but it shows me all rows. Could anyone help me with the above? Thanking in advance. Regards, Sanjeev
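GROUP BY on Name alone does collapse the names, but every other selected column then has to be wrapped in an aggregate. The two sample outputs mix larger and smaller weights, so if literally any one value per name is acceptable, MIN(Weight) or MAX(Weight) will do. A sketch:

-- One row per name, keeping the larger weight (use MIN for the smaller one)
SELECT Name, MAX(Weight) AS Weight
FROM dbo.Customer
GROUP BY Name;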
Hello. I have 100,000 rows in the database and I want to read a slice of the results (e.g. rows 5000 to 5050) with a DataReader. I wrote this code to do it, but it's too slow:
Dim SlctStr As String = "select * from topicstbl where partID like '" & PagePartID & "'"
Dim ReadCom As New SqlClient.SqlCommand
ReadCom.CommandText = SlctStr
ReadCom.Connection = MainLib.MyConnection
Dim MyReader As SqlClient.SqlDataReader = ReadCom.ExecuteReader()
Dim StartTNum As Long = 5000
For IR As Long = 0 To StartTNum - 1
MyReader.Read()
Next
Do While MyReader.Read
StartTNum += 1
If StartTNum > 5500 Then Exit Do
'''''''''''''''''''
Loop
MyReader.Close()
Is there another, better way to do the same thing than this code?
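Rather than reading and discarding thousands of rows in the client, it is usually much faster to let SQL Server return only the slice you need. On SQL Server 2005 that can be done with ROW_NUMBER (OFFSET/FETCH only arrived in 2012). A sketch of the statement the command could execute instead; the ordering column topicID is an assumption, so substitute the table's real key:

DECLARE @PagePartID varchar(50);
SET @PagePartID = 'somePart';   -- placeholder value

SELECT *
FROM (SELECT t.*,
             ROW_NUMBER() OVER (ORDER BY t.topicID) AS rn   -- pick a deterministic order
      FROM topicstbl AS t
      WHERE t.partID = @PagePartID) AS numbered
WHERE rn BETWEEN 5000 AND 5050
ORDER BY rn;

Passing PagePartID as a SqlParameter rather than concatenating it into the SQL string is also worth doing; it avoids injection and lets the plan be reused.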
Dear friends, is there any way to display a table's data separately as odd rows and even rows? I don't know whether this is possible or not. If it is possible, how can I achieve it? Please guide me in the proper direction. Thanks all!
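Tables have no inherent "odd" and "even" rows, but if you number them over some ordering column you can split on the parity of that number. A sketch assuming an ID column to order by:

WITH numbered AS (
    SELECT *, ROW_NUMBER() OVER (ORDER BY ID) AS rn   -- ID assumed as the ordering column
    FROM dbo.MyTable
)
SELECT *
FROM numbered
WHERE rn % 2 = 1;   -- odd rows; use rn % 2 = 0 for the even rows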
Type  Name  Value
x     M1    5
x     M2    10
x     M3    20
y     M1    10
y     M2    15
y     M3    30
Now, I need to add four more rows to the table:

Type  Name   Value
x     M1     5
x     M2     10
x     M3     20
y     M1     10
y     M2     15
y     M3     35
z1    Total  15     (xM1 + xM2)
z1    Diff   5      (xM3 - (xM1 + xM2))
z2    Total  25     (yM1 + yM2)
z2    Diff   10     (yM3 - (yM1 + yM2))
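Those extra rows can be produced with a UNION ALL of two grouped queries over the same table and appended to the original rows. A sketch, assuming the table is called dbo.Measures, that Total = M1 + M2 and Diff = M3 - (M1 + M2) per Type; the z1/z2 labels would need a mapping, so here the original type is simply prefixed:

SELECT [Type], Name, Value
FROM dbo.Measures

UNION ALL

-- Total rows: M1 + M2 per original Type
SELECT 'z_' + [Type], 'Total',
       SUM(CASE WHEN Name IN ('M1', 'M2') THEN Value ELSE 0 END)
FROM dbo.Measures
GROUP BY [Type]

UNION ALL

-- Diff rows: M3 - (M1 + M2) per original Type
SELECT 'z_' + [Type], 'Diff',
       SUM(CASE WHEN Name = 'M3' THEN Value ELSE -Value END)
FROM dbo.Measures
GROUP BY [Type];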
I know this table is designed wrong for what I am doing, but I hope I can do it anyway. I have a table with columns like this:
Prod_A_Jan, Prod_A_Feb, Prod_B_Jan, Prod_B_Feb
I want a query that returns data like this (two rows of data):
"ProdA", Prod_A_Jan, Prod_A_Feb
"ProdB", Prod_B_Jan, Prod_B_Feb
I know two queries can get it, but I want one. Any help would be great!
Sheila T.
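A UNION ALL of two selects over the same row is the simplest way to get the two-rows-per-record shape in one query (UNPIVOT would also work, but needs a key column to pair the months back up). A sketch assuming the table is called dbo.MonthlyProd:

SELECT 'ProdA' AS Product, Prod_A_Jan AS Jan, Prod_A_Feb AS Feb
FROM dbo.MonthlyProd
UNION ALL
SELECT 'ProdB', Prod_B_Jan, Prod_B_Feb
FROM dbo.MonthlyProd;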
Hi, I'm having a bit of trouble creating a SQL statement which subtracts the newest row from the second newest row in the same column. The table looks like this:
What I need is this: the largest "Pricecalcid" (3100) has a "Price" of 2500, and the second largest "Pricecalcid" has a price of 2800; the price calculation should subtract 2500 from 2800 for "Itemid" 100.
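One way on SQL Server 2005 or later is to number the price rows per item, newest first, and join row 1 to row 2. A sketch assuming a table shaped like PriceCalc(Itemid, Pricecalcid, Price):

WITH ranked AS (
    SELECT Itemid, Pricecalcid, Price,
           ROW_NUMBER() OVER (PARTITION BY Itemid ORDER BY Pricecalcid DESC) AS rn
    FROM dbo.PriceCalc
)
SELECT newest.Itemid,
       prev.Price - newest.Price AS PriceChange   -- 2800 - 2500 in the example
FROM ranked AS newest
JOIN ranked AS prev
  ON prev.Itemid = newest.Itemid AND prev.rn = 2
WHERE newest.rn = 1;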
I run the following statement and it will not update beyond 7 million plus rows and I have about 38 million to complete. I keep checking updated row counts and after 1/2 day it's still the same so I know something is wrong because it was rolling through no problem when I initiated it. I need to complete ASAP so it's adding to my frustration. The 'Acct_Num_CH' field is an encrypted field (fyi).
SET rowcount 10000

UPDATE [dbo].[CC_Info_T]
SET [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
WHERE [Acct_Num_CH] IS NOT NULL

WHILE @@ROWCOUNT > 0
BEGIN
    SET rowcount 10000
    UPDATE [dbo].[CC_Info_T]
    SET [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
    WHERE [Acct_Num_CH] IS NOT NULL
END

SET rowcount 0
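One thing that stands out: the WHERE clause never excludes rows that already hold the new value, so each 10,000-row batch can keep picking up rows it has already converted and the loop may never finish. A hedged variation that skips rows already carrying the target value:

DECLARE @NewValue varchar(64);
SET @NewValue = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v';

SET ROWCOUNT 10000;

UPDATE [dbo].[CC_Info_T]
SET [Acct_Num_CH] = @NewValue
WHERE [Acct_Num_CH] IS NOT NULL
  AND [Acct_Num_CH] <> @NewValue;          -- do not touch rows already converted

WHILE @@ROWCOUNT > 0
BEGIN
    SET ROWCOUNT 10000;
    UPDATE [dbo].[CC_Info_T]
    SET [Acct_Num_CH] = @NewValue
    WHERE [Acct_Num_CH] IS NOT NULL
      AND [Acct_Num_CH] <> @NewValue;
END

SET ROWCOUNT 0;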
Stepping thru the code with the debugger shows the dataset rows being deleted.
After executing the code and getting to the page presentation, I stop the debugger and start the page creation process again (Page_Load). The database still contains the rows that were deleted from the dataset. Adding rows works and updating works fine, but deleting rows does not seem to work.
The dataset is configured to send the DataSet updates to the database. Use the standard wizard to create the dataSet.
cDependChildTA.Fill(cDependChildDs._ClientDependentChild, UserId);
rowCountDb = cDependChildDs._ClientDependentChild.Count;
for (row = 0; row < rowCountDb; row++)
{
    dr_dependentChild = cDependChildDs._ClientDependentChild.Rows[0];
    dr_dependentChild.Delete();
    //cDependChildDs._ClientDependentChild.Rows.RemoveAt(0);
    //cDependChildDs._ClientDependentChild.Rows.Remove(0);
    /* update the Client Process Table Adapter */
    // cDependChildTA.Update(cDependChildDs._ClientDependentChild);
    // cDependChildTA.Update(cDependChildDs._ClientDependentChild);
}
/* zero rows in the DataSet at this point */
/* update the Child Table Adapter */
cDependChildTA.Update(cDependChildDs._ClientDependentChild);
Is there any way to control this scenario? I know the trick to put 10 rows on each page, but I need to split them unevenly: 10 on the first page and the rest on the second page. Is it possible?
I have been receiving 644 errors on an index for one of my tables. We have dropped and rebuilt the index. We even BCPed everything out, dropped the table, recreated it, and BCPed everything back in, and we still have problems.
Has anyone else experienced problems with 644 errors? Did you ever determine the cause, and are there any preventative ideas?
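For 644 errors, the usual first diagnostic step is a consistency check on the affected table, since the error typically points at a mismatch between an index and its underlying data; if the check keeps flagging the same object after rebuilds, hardware or I/O problems are worth ruling out. A minimal sketch, with the table name as a placeholder:

-- Check the table and all of its indexes for corruption
DBCC CHECKTABLE ('dbo.MyTable');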