I have 25 columns in my Excel file, and the column names are in the second row. The first row consists of 7 merged columns, each spanning 3-4 of the underlying columns. How can I tell SSIS that the 2nd row holds the header/column names? Is there a way to validate the values in each Excel cell against their data types using SSIS?
Also, is there any way in SSIS to refer to a specific cell in the Excel file?
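One common workaround is the Excel source's "SQL command" data access mode with an explicit worksheet range, which also lets you target specific cells. A minimal sketch, assuming a sheet named Sheet1 with the real column names in row 2 and the 25 columns spanning A through Y (all names here are assumptions):

    -- Start reading at row 2 so the merged first row is skipped; with
    -- "first row has column names" enabled, row 2 then supplies the headers:
    SELECT * FROM [Sheet1$A2:Y65536]

    -- Read a single specific cell (here, B3):
    SELECT * FROM [Sheet1$B3:B3]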
I have had this question in my mind since I started using SSIS. In a package I have an OLE DB source and a flat file destination; I hook the source up to the destination and run the package. It works fine, but what is done internally? Does SSIS perform a BCP out?
Here I am going to ask some questions about SSIS; please clarify my doubts...
1. Can I include a package within another package? For example, if I have 20 packages and need to run them all at once, can I create one package and place all those packages inside it?
2. If I have 700 packages, how do I run them all at once? (A sketch follows below.)
3. Can I develop a workflow or package in SSIS without using Data Flow tasks? That is, can I create a package or workflow with the help of Control Flow tasks only?
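On questions 1 and 2, a parent package containing Execute Package tasks is the usual answer; for large numbers of packages you can also shell out to dtexec. A rough T-SQL sketch, assuming xp_cmdshell is enabled and a hypothetical table dbo.PackageList holding the names of packages stored in MSDB:

    -- Loop over the stored package names and run dtexec for each one.
    DECLARE @pkg sysname, @cmd varchar(500)
    DECLARE c CURSOR FOR SELECT PackageName FROM dbo.PackageList
    OPEN c
    FETCH NEXT FROM c INTO @pkg
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @cmd = 'dtexec /SQL "' + @pkg + '" /SERVER myserver'
        EXEC master..xp_cmdshell @cmd
        FETCH NEXT FROM c INTO @pkg
    END
    CLOSE c
    DEALLOCATE c

And yes to question 3: a package built purely from Control Flow tasks, with no Data Flow at all, is perfectly valid.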
I have a SQL Server 2005 cluster with 2 nodes, and I developed an ETL (DTS) package in SSIS. I used OLE DB to connect to Oracle and Informix, authenticating with a username and password. When I publish the package to Integration Services, I publish it to MSDB so that in the event of a failover the package is the same on both nodes. However, after a failover, the package I published on NODE 1 does not run on NODE 2. It fails with:
OnError,EDVSQLCLUSTER02,DOMAIN-LISBOA olo,Insert pt_fornsap,{F06434D9-431B-48B7-A3E6-F17E04A29488},{4FF70682-705E-4410-885F-36F25BC1EC8C},22-09-2006 2:00:01,22-09-2006 2:00:01,-1071636471,0x,
An OLE DB error has occurred. Error code: 0x80040E14.
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "Statement(s) could not be prepared.".
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "Invalid object name 'pt_lfa1'.".
I am trying to load an Excel file on a server where Excel is not installed. BIDS is on the server, but when I try to create an Excel source I am not able to. What is the workaround for this? How do I load Excel data when Excel is not installed on the server?
We have 10 sheets in an Excel file, and the 10th sheet contains error data. How do we load the data from the 9 good sheets into one destination and the error data into another?
I am creating an SSIS package with a Data Flow task that reads from an Excel source and then uses a Script Component to write the data to multiple tables in a SQL Server database.
I need to somehow make my Excel source dynamic; that is, the Excel template I use to map the Excel columns to the Script Component's input columns should be dynamic.
In other words, I should be able to define the Excel source, the column mapping information, and the precedence constraint to the Script Component dynamically.
I am currently using SQL Server version 6.5 and am trying to execute the BCP command from the command prompt to update my database. Right now, if the table doesn't have any primary key constraints, BCP works fine and inserts all the rows into the database. But suppose the table has a primary key and the data file given to BCP has some duplicate records in it. It then reports the following:
Starting copy...
Msg 2627, Level 14, State 1: Server 'DHRUVA', Line 1: Violation of PRIMARY KEY constraint 'PK_docket_booking4_1__10': Attempt to insert duplicate key in object 'docket_booking4'.
It does not insert any rows into the database. Is there any way I can get past this error and let BCP continue copying data? The data file I am given will always contain duplicate records.
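One common approach is to BCP into an unconstrained staging table and then copy across only the rows that are not already present. A minimal sketch, assuming a hypothetical staging table docket_booking4_stage with the same layout as docket_booking4 but no primary key, and an assumed key column docket_id:

    -- Insert only rows whose key is not already in the target table.
    INSERT INTO docket_booking4
    SELECT s.*
    FROM docket_booking4_stage s
    WHERE NOT EXISTS (
        SELECT 1 FROM docket_booking4 t
        WHERE t.docket_id = s.docket_id  -- substitute the real PK column(s)
    )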
Hi all, please find my queries below and help me to understand:
1. How do I find the port number of a specific SQL Server instance? (I have heard that different instances of SQL Server run on different port numbers. A sketch follows below.)
2. How do I create a new instance for an existing SQL Server database, and how would I connect to it with my JDBC connection string, which usually reads: jdbc:jtds:sqlserver://<Host_Name>:<Port_Number>/<DataBase_Name>
Thanks in advance, Shefu
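On the port question: on SQL Server 2005 and later the port of the instance you are connected to can be read from a DMV; on older versions it is written to the SQL Server error log at startup. A minimal sketch for the DMV route:

    -- TCP port used by the current connection (SQL Server 2005+):
    SELECT local_tcp_port
    FROM sys.dm_exec_connections
    WHERE session_id = @@SPID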
1. If we have files and filegroups in place, we get improved performance on backup and restore. Do we get any other performance improvements?
2. If we are rebuilding or reorganizing indexes on a table, which one degrades performance more on the production server, and which one writes more to the transaction log?
3. How can we check the table ID in the database? (A sketch follows below.)
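For question 3, the table ID can be queried directly; a minimal sketch using a hypothetical table name:

    -- Both return the object ID of the table:
    SELECT OBJECT_ID('dbo.MyTable')
    SELECT id FROM sysobjects WHERE name = 'MyTable' AND xtype = 'U'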
Hello, I attended an interview and have some doubts:
1. Indexes decrease performance for INSERT and UPDATE commands. What would you do to increase performance for these statements?
2. Tell me 5 steps you consider while writing a stored procedure to increase performance.
4. Is TRUNCATE a DDL or a DML operation?
5. How do you delete duplicate rows from a table? (A sketch follows below.)
6. Table1 has an identity column and 5 rows; a DELETE is performed on Table1. If a user enters a new row, will the identity value be 1 or 6? Table2 has an identity column and 5 rows; a TRUNCATE is performed. If a user enters a new row, will the identity value be 1 or 6?
7. When do we go for subqueries and when do we go for joins?
8. Could you tell me the difference between a stored procedure and a function?
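For question 5, one standard technique on SQL Server 2005+ is a ROW_NUMBER delete; a minimal sketch, assuming a hypothetical table dbo.T whose rows count as duplicates when col1 and col2 match:

    -- Number the rows within each duplicate group and delete all but the first.
    ;WITH d AS (
        SELECT ROW_NUMBER() OVER (PARTITION BY col1, col2 ORDER BY col1) AS rn
        FROM dbo.T
    )
    DELETE FROM d WHERE rn > 1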
Hi. I need to import an Excel file into a database. I first need to do an Unpivot task. The column names are dates, and SSIS seems unable to pick up the column names; they are replaced by F2, F3, F4, etc. Can you advise a solution? Thanks, Ken
Hi, I am totally new to SQL Server and have a few doubts. I couldn't find them in earlier posts; sorry if this is a repetition.
* Can we get all the table names of a database with a query?
* How do we create a similar copy of a table using AS? Does this work in SQL Server 2000? (I tried, but it shows an error.)
* Can we modify column properties (like ensuring NOT NULL) after the table is created?
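Sketches of all three, assuming SQL Server 2000 and a hypothetical table dbo.Orders:

    -- 1. All user table names in the current database:
    SELECT name FROM sysobjects WHERE xtype = 'U'

    -- 2. SQL Server uses SELECT ... INTO rather than CREATE TABLE ... AS:
    SELECT * INTO dbo.Orders_Copy FROM dbo.Orders

    -- 3. Changing nullability after creation (fails if the column already holds NULLs):
    ALTER TABLE dbo.Orders ALTER COLUMN OrderDate datetime NOT NULL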
set @DN = @DN + 'AA9999'
select @ID = ID from TableA where status = 'A' and ColumnA like '%' + @DN
ColumnA has a nonclustered index.
When I use the = symbol it does an index seek (as in the query below); otherwise it does a table scan. Could anyone advise what index I need in order to get an index seek when I use the LIKE operator? TableA has millions of records.
select @ID = ID from TableA where status='A' and Code = 'AA9999'
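A leading wildcard ('%' + @DN) can never use an index seek, because the index is ordered from the start of the string. One common workaround is to index a reversed copy of the column so the wildcard becomes trailing. A sketch, with the computed column and index names assumed:

    -- Add a reversed computed column and index it:
    ALTER TABLE TableA ADD ColumnA_Rev AS REVERSE(ColumnA)
    CREATE NONCLUSTERED INDEX IX_TableA_ColumnA_Rev ON TableA (ColumnA_Rev)

    -- The leading wildcard becomes a seekable trailing wildcard:
    SELECT @ID = ID
    FROM TableA
    WHERE status = 'A'
      AND ColumnA_Rev LIKE REVERSE(@DN) + '%'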
Soon I'll have to develop some procedures that read an ASCII file to populate MS SQL tables. Having read some old posts, I understand that I have to use BULK INSERT, or else BCP or DTS. I'd like to know the difference between these commands and which of them is more powerful, faster, and more efficient. If you can give me some implementation tips, I will be very grateful.
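For reference, a minimal BULK INSERT sketch, assuming a hypothetical target table dbo.Target and a comma-delimited ASCII file at C:\data\input.txt:

    -- Load the whole file in one statement, server side:
    BULK INSERT dbo.Target
    FROM 'C:\data\input.txt'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')

Broadly: BULK INSERT runs inside the server, bcp is a command-line client tool, and DTS wraps the same bulk facilities in a designer; for raw speed the first two are usually comparable.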
Hi, here I am inserting all the dates between 1983/08/26 and 2006/08/26, i.e. the table should have approximately 36,525 records. After executing the command I checked the table and noticed that the last record in it was approximately 2025/xx/xx. Why? What is the default capacity of a SQL table? Can I change the table properties so it holds more records? Please help me with this.
"One can never consent to creep when one feels an impulse to soar." - RAMMOHAN
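For what it's worth, the date arithmetic doesn't add up: 1983/08/26 to 2006/08/26 spans about 8,400 days, not 36,525 (that would be roughly 100 years of dates). A quick check:

    -- Days between the two endpoints:
    SELECT DATEDIFF(day, '1983-08-26', '2006-08-26')  -- 8401

There is no practical row-count cap on a table; capacity is limited by available storage, so nothing in the table's properties needs changing.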
I have to do a mining project, and I intend to use SSIS.
I built a clustering plugin for Analysis Services last year, and I also want to use it.
Let me try to explain the architecture of the process:
1) Receive data (read the data from the database; this data is actually text)
2) Pre-process the data (transform the texts into a sparse matrix) using a new plugin
3) Call my clustering plugin and point it at the table created in the previous step
4) Call my KNN plugin to classify other pre-processed texts, using the clusters found in the previous step as classes
5) Show results.
Alright... it would all run as a workflow in Integration Services.
Here are my doubts:
A) How can I view and use my plugin, built for Analysis Services, in Integration Services? (Is that possible, or will I have to create new plugins from scratch just to run in Integration Services?)
B) Assuming the previous step is possible, how do I modify my plugins to define inputs and outputs for correct communication between the plugins? I think this is the most important question. Is it simple to do? Are there any documented examples?
I have a few more clarifications regarding time series. Firstly, in my model the month-level product sales are recorded against the 1st day of every month, so the key time column is a datetime containing a sequence of dates representing the 1st of each month, e.g. 2006-01-01, 2006-02-01, etc., all in yyyy-mm-dd format. But when I make predictions for the next five months, although it makes monthly predictions, the day part of the predicted dates is random, whereas I expect it to be the 1st of every month. What is the reason for this, and how can I overcome it? Secondly, the predicted sales values for some time periods are negative, even though I do not have any negative values in the training data. What is the reason for this, and how can I rectify it?
Thirdly, in one of your earlier posts you said that the time series algorithm has no built-in time intelligence but uses the key time column as a time sequence stamp. So if I have to make predictions for time periods where the time slice of each period is 25 days or 50 days, etc., I understand that the input data used to train the model should follow the same time sequence. Or can I specify the span of the time period over which predictions should be made? Basically, how can I use the same time series model to make monthly, yearly, quarterly, or daily predictions, or predictions for a custom time period like the ones mentioned above?
I am currently using SQL Server 2000. I have run some BCP commands, scheduled via Windows Scheduler, to import data into SQL Server from CSV and text files with the help of FORMAT files.
My doubt is: if I switch to SQL Server 2005, will the same BCP commands still be supported? If not, what do I need to do to run the BCP commands in 2005 as I did in 2000?
I heard that the BCP utility is no longer supported in 2005 and that this must be done through SSIS. Is that right? Does SQL Server 2005 support running BCP commands without going through SSIS?
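BCP is in fact still fully supported in SQL Server 2005, format files included, and SSIS is not required; 2000-era command lines generally run unchanged against the 2005 bcp utility. A sketch, assuming xp_cmdshell is enabled and hypothetical file and table names:

    -- Same bcp syntax as under SQL 2000:
    EXEC master..xp_cmdshell 'bcp MyDB.dbo.Target in C:\data\input.csv -f C:\data\target.fmt -S MYSERVER -T'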
Is there any way to send an Excel file from SSIS using the Send Mail task without saving the Excel file locally? I need to automate a process that loads the Excel file from the database and sends it to some people.
Hi, I am trying to import data from an Excel file into my 2005 DB using an SSIS package.
The first thing I've done is create an Excel source and then a Derived Column task, as I need to format my date. I am using SUBSTRING to format the date, but the expression I am using will not work; I am getting an error on it.
The data in the Excel file is like 8122007. Here is my expression:
When I run the SSIS package from Business Intelligence Development Studio it works fine, but when I use DTEXEC.EXE /SQL it gives me an error at the Data Flow task for the Excel destination:
Description: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005.
An OLE DB record is available. Source: "Microsoft JET Database Engine" Hresult: 0x80004005 Description: "The Microsoft Jet database engine cannot open the file ''. It is already opened exclusively by another user, or you need permission to view its data.".
I have given the folder where the Excel file sits the same permissions as the SQL Server and SQL Server Agent accounts.
Is there anything else I have to add for the package to work?
I am new to SQL Server, and I am trying to import rows from Excel using SSIS and getting the following error.
Does anyone have any ideas on how to resolve it?
SSIS package "Package.dtsx" starting. Information: 0x4004300A at Data Flow Task, DTS.Pipeline: Validation phase is beginning. Warning: 0x80047076 at Data Flow Task, DTS.Pipeline: The output column "SupplierID" (161) on output "Excel Source Output" (9) and component "Excel Source" (1) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance. Information: 0x4004300A at Data Flow Task, DTS.Pipeline: Validation phase is beginning. Warning: 0x80047076 at Data Flow Task, DTS.Pipeline: The output column "SupplierID" (161) on output "Excel Source Output" (9) and component "Excel Source" (1) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance. Information: 0x40043006 at Data Flow Task, DTS.Pipeline: Prepare for Execute phase is beginning. Information: 0x40043007 at Data Flow Task, DTS.Pipeline: Pre-Execute phase is beginning. Information: 0x4004300C at Data Flow Task, DTS.Pipeline: Execute phase is beginning. Information: 0x402090DF at Data Flow Task, OLE DB Destination [583]: The final commit for the data insertion has started. Error: 0xC0202009 at Data Flow Task, OLE DB Destination [583]: An OLE DB error has occurred. Error code: 0x80004005. An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "The statement has been terminated.". An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Cannot insert the value NULL into column 'SupplierID', table 'Northwind.dbo.Suppliers'; column does not allow nulls. INSERT fails.". Information: 0x402090E0 at Data Flow Task, OLE DB Destination [583]: The final commit for the data insertion has ended. Error: 0xC0047022 at Data Flow Task, DTS.Pipeline: The ProcessInput method on component "OLE DB Destination" (583) failed with error code 0xC0202009. The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running. Error: 0xC0047021 at Data Flow Task, DTS.Pipeline: Thread "WorkThread0" has exited with error code 0xC0202009. Information: 0x40043008 at Data Flow Task, DTS.Pipeline: Post Execute phase is beginning. Information: 0x40043009 at Data Flow Task, DTS.Pipeline: Cleanup phase is beginning. Information: 0x4004300B at Data Flow Task, DTS.Pipeline: "component "OLE DB Destination" (583)" wrote 1 rows. Task failed: Data Flow Task Warning: 0x80019002 at Package: The Execution method succeeded, but the number of errors raised (3) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors. SSIS package "Package.dtsx" finished: Failure.
We had this ActiveX task in DTS on SQL 2000 that updates an Excel file and saves it:
================================================
Function Main()
    Dim xlApp
    Dim Desc

    Set xlApp = CreateObject("Excel.Application")
    xlApp.WorkBooks.Open DTSGlobalVariables("ExistingBOM_Path").Value
    xlApp.ActiveWorkbook.RefreshAll
    xlApp.ActiveWorkBook.Close True
    'xlApp.Quit True
    Set xlApp = Nothing

    Main = DTSTaskExecResult_Success
End Function
==================================================
We need to do this in an SSIS Script Task, since the ActiveX task will be deprecated. We successfully updated the Excel file through OLE DB, but the workbook needs to be saved for all the changes to take effect properly; without that we were getting invalid results. Can anyone help with an approach, given that we don't have access to the Excel object model in VSA? Thanks in advance
I've been battling with a client who's supplied us with what they consider to be a legitimate CSV file. First off, let me say that I understand there's no CSV "specification" per se, but here's the situation regardless.
The client has a test string in one of their fields that looks something like this:
He said "STOP" so, of course he stopped
CSV best practice requires that you double any embedded quotes, so the valid CSV field would look like this:
He said ""STOP"" so, of course he stopped
Once this is placed into a comma-delimited CSV, it looks something like this:
"Col1","Col2" "He said ""STOP"" so, of course he stopped","value in col2"
---------------------
So the problem here is that the client saves the above as a CSV, opens it in Excel, and says: look, Excel deals with this just fine, so why can't you handle it?
Trying to explain to the client that SSIS can't deal with a field that has embedded commas in it, but Excel can, is quite honestly a little embarrassing (especially considering the cost difference between the two).
It seems that having embedded quotes in the field is fine, but as soon as you have an embedded comma, SSIS can't handle it, yet Excel can.
---------------------
That said, I've also read quite a few posts where people flame the original poster, saying: change your delimiter. That's all good and well when you're the one generating the CSV, but when your client thinks they're generating a legit CSV (according to Excel and quite a few other CSV parsers) it's not a pleasant argument, especially when you know that asking them to make this change is going to take a few weeks of your project timeline.
I know I'm not the first person to experience this problem, but I did want to see if I could get a straight answer as to why the Excel CSV parser deals with this situation but the SSIS parser does not.
Dear friends, I need to import data from several Excel files. How can I configure the Excel source object to dynamically import each file? The file name will be in a parameter of the SSIS package, and this name changes frequently; each time the file name changes I don't want to change the configuration on the Excel source. What do you suggest? Should I use a Script Component as the source?! Regards!