I have a DTS package in SQL Server 2000 that imports some data from a remote server (SQL 2000) to a local server (SQL 2000). This works fine; the package takes at most 1 minute to execute.
Now I have created the same package in SQL Server 2005 (SSIS), where the source server is SQL Server 2000 and the destination server is 2005, and I have built the SSIS package on that server with the same logic I used in SQL 2000. But here it takes almost 10 minutes to execute the package.
Whereas the same package in SQL Server 2000 takes at most 1 minute. Why is this happening? Is there any configuration needed to execute the SSIS package?
I have SSIS projects that take a long time to open when their packages contain a large number of data flows. Is there a way to turn off validation of metadata when a package opens? Or to turn off validation during execution on the SSIS service (after the package was previously validated in dev)? Or to control when validation takes place in general?
In one package (1 of 5) I have 43 data flows (each with a single source-to-target mapping) in 4 sequence containers, and it takes approximately 2-3 seconds per source-to-target mapping and sequence container to validate, which translates to 1 ½ to 2 ½ minutes to open. When the project with all 100+ tables for the data warehouse goes through validation, I can make coffee in the time it takes to open the project. I have to delete the *.suo file (or verify all packages are closed in the designer and save the project file), and when I open the project I have to jump immediately to SSIS > Work Offline to stop it validating the metadata, just to be able to work in a timely fashion. DelayValidation=TRUE does not help much.
Running in debug mode causes packages that were not open and validated to go through validation, even though I am not running those packages. Validate once during design and run forever.
Even if I re-open a package that I just closed in the designer, and which has already gone through validation, it will go through the validation process again.
It would be great if there were an on-demand option on the menu bar to control when validation takes place for a project, or a more granular validation option for a specific data flow or container.
Does the validation time of a DTSX package depend on the amount of data?
I have a data flow task with an OLE DB source that reads from a SQL Server 2005 view. The data then goes through 3 Lookup transformations before going into another view, also on SQL Server 2005 but in a different database. The purpose of this package is to load fact data, so it has to deal with a few million rows. Before setting DelayValidation to true, it took a few minutes just to open the package in BI Studio. Now, with DelayValidation set to true, I can open the package without any problem, but it takes more than 5 minutes in the Validation, Prepare for Execution, and Pre-execute phases. During that time the memory usage and CPU time on the SQL Server go up significantly, although the CPU doesn't hit 100%. My client machine doesn't show any significant activity.
I have similar packages (OLE DB view -> 3 or 4 Lookup transformations -> OLE DB table) that deal with dimension data. With DelayValidation set to false, those packages open in BI Studio within a few seconds. They also take only a few seconds in those 3 phases before the actual execution phase starts.
So I have the impression that the validation time depends on the amount of data in the database. Shouldn't it depend only on the metadata?
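For what it's worth, validation itself should only touch metadata, but the Lookup transformation's default full-cache mode loads its entire reference result set during the pre-execute phase, which would explain the pre-execute time (and the server-side CPU and memory) growing with data volume on the fact package. A hedged mitigation, with hypothetical table and column names, is to point each Lookup at a narrow query rather than a whole table or view:

-- hypothetical narrow reference query for a Lookup; caching only the join key
-- and the returned column keeps the pre-execute phase small
SELECT ItemKey, ItemCode
FROM dbo.DimItem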
Hello all, I have an SSIS package that imports data from a DB2 database. I am using SSRS (BIDS/VS2005). I want to be able to show the last time a particular package ran in my report, for example in the page footer. I have found Globals!ExecutionTime for grabbing the time the report was run.
I want to do the same thing, except I want to show the date/time the last import was made to the database. Is there some easy way to grab that from SSIS, or will I need to reference the DB creation time instead? (The database is dropped and recreated, for now anyway, in the SSIS package.)
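One low-tech approach, sketched here with hypothetical names: have the package record its own finish time in a small audit table via an Execute SQL Task as its last step, and have the report read that. Since the database in question is dropped and recreated, the audit table would need to live in a database that survives the load.

-- hypothetical audit table, created once in a database that is not dropped
CREATE TABLE dbo.LoadAudit (LoadFinishedAt datetime NOT NULL DEFAULT GETDATE());

-- run from an Execute SQL Task as the final step of the package
INSERT INTO dbo.LoadAudit DEFAULT VALUES;

-- report dataset for the footer
SELECT MAX(LoadFinishedAt) AS LastImport FROM dbo.LoadAudit;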
I'm planning to write an SSIS package for a new project that requires downloading a large chunk of data and transforming it into diverse databases such as MS SQL or Oracle, depending on the client's database.
For the clients who have SQL Server installed at their end, I have no issues deploying this package on their server and running it on their licensed instance.
What about the clients who have an Oracle database? Wouldn't installing the SQL Server 2005 client tools install the necessary run-time services for running SSIS packages? What I understood from the MSDN library (http://msdn2.microsoft.com/en-us/library/ms403355.aspx) is that there is no run-time support for running SSIS packages (unlike the DTS run-time support) in a production environment!
Would that mean that, at minimum, a SQL Server Standard edition has to be installed at the production site to run this package (as Integration Services ships out of the box with Standard Edition and up)?
If so, the client wouldn't be ready (which is fair too) to buy a new license just to run this package. Are there any workarounds or suggestions for this case?
If not, can somebody please point me to the right place to download the run-time support for running SSIS packages?
I have a VB application that uses SQL Server as the database and Crystal Reports for reporting. We use a stored procedure to create a report and pass an ID from the VB side to run the stored procedure. In Boston the report shows up in 4 seconds, but in California it takes 7 minutes. We have a very good network (T3). Why does it take longer in California? Any ideas?
Dear Experts, I've one table named table11. This particular table has 30 columns and 40,000 rows of data, and it takes 35 seconds to run select * from table11.
It will definitely take even more time when I use the table in places like procedures, functions, or views.
Where is the problem? Does it generally take that much time, or is there some problem?
Guidance please...
Vinod. Even if you learn 1%, learn it with 100% confidence.
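One way to narrow down where those 35 seconds go, since 40,000 rows by 30 columns is not inherently slow: turn on timing and I/O statistics and compare the server-side elapsed time with the total. If the server-side numbers are small, the time is being spent shipping all 30 columns to the client, and selecting only the columns you need will help more than any tuning.

-- measure server-side time and reads for the same statement
SET STATISTICS TIME ON;
SET STATISTICS IO ON;
SELECT * FROM table11;
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;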
Dear All, here I'm posting my query, which is taking 3 minutes.
Please suggest the best query.
SELECT DISTINCT
    INP.COLUMN001 REPORT_INPUT_ID,
    INP.COLUMN002 REPORT_ID,
    INP.COLUMN003 OPERATION_ID,
    OPER.COLUMN004 OPERATION_CODE,
    OPER.COLUMN005 OPERATION_NAME,
    INP.COLUMN004 ITEM_ID,
    CONVERT(NVARCHAR, INP.COLUMN005, 110) RECEIVED_DATE,
    INP.COLUMN006 LOT_NO,
    INP.COLUMN007 RECEIVED_QTY,
    INP.COLUMN008 CONSUMED_QTY,
    (SELECT CODE FROM view1 WHERE item_id = INP.COLUMN004) my_val,
    (SELECT NAME FROM view1 WHERE item_id = INP.COLUMN004) Item_Name,
    INP.COLUMN009 UOM_ID,
    U.UOM_CODE,
    INP.COLUMN010 BASE_RECEIVED_QTY,
    INP.COLUMN011 BASE_CONSUMED_QTY,
    CASE WHEN INP.COLUMN012 = '1' THEN 'Progress' WHEN INP.COLUMN012 = '2' THEN 'Closed' END OPERATION_STATUS,
    CASE WHEN INGDTL.COLUMN006 = '0' THEN 'Ingredient' WHEN INGDTL.COLUMN006 = '1' THEN 'Intermediate' END INPUT_TYPE,
    INP.COLUMNB01 COLUMNB01, INP.COLUMNB02 COLUMNB02, INP.COLUMNB03 COLUMNB03,
    INP.COLUMNB04 COLUMNB04, INP.COLUMNB05 COLUMNB05, INP.COLUMNB06 COLUMNB06,
    INP.COLUMNB07 COLUMNB07, INP.COLUMNB08 COLUMNB08, INP.COLUMNB09 COLUMNB09,
    INP.COLUMNB10 COLUMNB10,
    INP.COLUMND01 BRANCHID, INP.COLUMND02 COMPANYID, INP.COLUMND03 CREATEDBY,
    INP.COLUMND04 CREATEDDATE, INP.COLUMND05 LASTUPDATEDBY, INP.COLUMND06 LASTUPDATEDDATE,
    INP.COLUMND07 ROWGUID, INP.COLUMND08 UPDATEDSITE, INP.COLUMND09 LANGID,
    WC.COLUMN009 WIP_WAREHOUSE_ID,
    (SELECT SUM(WIP.COLUMN011) - SUM(wip.column010)
     FROM TABLE066 WIP
     WHERE wip.column008 = INP.column004
       AND WIP.COLUMN005 = '8cd741c7-1ac6-4839-88e7-df85518170f1'
       AND wip.column006 = inp.column003) WIP_Qty,
    WIPM.Column005 WIP_ITEM_ID
FROM TABLE073 INP
LEFT JOIN view1 I ON I.ITEM_ID = INP.COLUMN004
LEFT JOIN view2 U ON U.UOM_ID = INP.COLUMN009
LEFT JOIN TABLE022 OPER ON OPER.COLUMN001 = INP.COLUMN003
LEFT JOIN TABLE066 WIP ON WIP.column008 = INP.column004
LEFT JOIN TABLE015 WC ON WC.COLUMN001 = OPER.COLUMN008
LEFT JOIN TABLE040 INGDTL ON INGDTL.COLUMN002 = INP.COLUMN004 AND WIP.column008 = INGDTL.COLUMN002
LEFT JOIN TABLE065 WIPM ON WIPM.column005 = INP.column004
WHERE INP.COLUMN002 = '057f87aa-7884-43fa-8984-9b74c971da62'
ORDER BY my_val
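One hedged observation: the query already left joins view1 as I on the same key, so the two correlated scalar subqueries that fetch CODE and NAME are evaluated once per output row and may be redundant. Assuming item_id is unique in view1, the joined columns can be used directly, as in this trimmed-down sketch:

-- assuming view1.item_id is unique, reuse the existing join instead of two
-- correlated subqueries that run once per row
SELECT DISTINCT
    INP.COLUMN001 REPORT_INPUT_ID,
    I.CODE my_val,     -- was: (SELECT CODE FROM view1 WHERE item_id = INP.COLUMN004)
    I.NAME Item_Name   -- was: (SELECT NAME FROM view1 WHERE item_id = INP.COLUMN004)
FROM TABLE073 INP
LEFT JOIN view1 I ON I.ITEM_ID = INP.COLUMN004
WHERE INP.COLUMN002 = '057f87aa-7884-43fa-8984-9b74c971da62'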
The issue is in the data flow that loads and sets the fact table dimension keys (the dimensions all load fine). After 16 rather pedestrian Lookup transformations, I have an escalating problem adding additional Lookup transforms to the data flow. The problem is not in execution; the problem is adding more transforms in design mode.
Lookup #    Fields in data flow    Time to validate that lookup
<17         47                     sub-second
17          48                     2 sec
18          49                     4 sec
19          50                     8 sec
20          51                     16 sec
21          52                     32 sec
22          53                     64 sec
While I'm intrigued by the mathematical progression forming here, the issue is that I have at least 6 more Lookups to perform. I hope you can see my dilemma.
I have gotten to the point where it takes a little over 4 minutes each to validate the Lookup transform and its associated Derived Column transform and Union All transform (12 minutes total). Not only does this add many idle minutes to each design step, BUT it breaks the debugger, as it pre-validates the ENTIRE data flow before it ever switches into debugging mode.
Some notes: 1. It doesn't matter what order the Lookup transforms occur in; the timings are exactly the same. 2. I tried many data flow execution optimizations, but they don't improve the validation times (or even get a chance to improve the execution times!)
I realize this may be somewhat of a unique problem.
I have a stored procedure that takes 23 seconds when I run it manually. I scheduled a job to run the same stored procedure, and it takes 33 minutes. Any ideas why it takes so much longer?
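One thing worth ruling out, as a hedged suggestion: a SQL Agent job step runs with its own session settings, and differing SET options (ARITHABORT is a common culprit) can yield a different, much worse query plan than your manual session gets. Comparing the options in both contexts is cheap:

-- run this once in your manual session and once from a job step
-- (capture the job-step output to a file or table), then compare
DBCC USEROPTIONS;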
I have SQL Server 2005 installed on my machine, and I am running the following query to insert 1500 records into a simple table having just one column.
Declare @i int
Set @i = 0
While (@i < 1500)
Begin
    Insert into test2 values (@i)
    Set @i = @i + 1
End
Here is the table definition:
CREATE TABLE [dbo].[test2](
    [i] [int] NULL  -- column name assumed; the original DDL omitted it
) ON [PRIMARY]
Now the problem is that on one of my servers this query takes just 500 ms to run, while on my production server and another test server it takes more than 25 seconds. The same goes for updates. I have checked the configurations of both servers and found them to be the same. Also, there are no indexes defined on either table. I was wondering what the possible reason for this could be. If any of you have pointers on this, they would be really useful.
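A hedged guess about the 500 ms vs. 25+ second gap: each single-row INSERT commits on its own, so the loop forces 1500 synchronous log flushes, and the difference between the servers may simply be the write latency of the disk holding the transaction log. Wrapping the loop in one transaction reduces it to a single commit and makes for a quick test:

-- same loop, one commit instead of 1500; if this runs fast on the slow server,
-- the bottleneck is likely log-write latency rather than the statements themselves
BEGIN TRAN
Declare @i int
Set @i = 0
While (@i < 1500)
Begin
    Insert into test2 values (@i)
    Set @i = @i + 1
End
COMMIT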
I have a query that returns approximately 50,000 records. I am using a linked server to connect to two databases and retrieve data. For some reason it takes a little more than an hour to execute from my application, whereas in the SQL Server query window rows start coming back after a few minutes, although the query still runs for a long time.
How can I expedite my query execution?
Environment details:
Database: MS SQL Server 2005, 64-bit
MS SQL jar file: sqljdbc_1.2.jar
OS: Windows on both server and client
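One pattern worth trying, sketched with a hypothetical linked server name: when a query joins local and remote tables through a linked server, SQL Server can end up pulling entire remote tables across the wire before filtering. OPENQUERY sends the remote portion to the other server as written, so only the needed rows travel back:

-- push the remote filtering to the linked server instead of joining four-part names
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
JOIN OPENQUERY([RemoteServer],
    'SELECT CustomerID, CustomerName FROM Sales.dbo.Customers WHERE Active = 1') AS c
    ON c.CustomerID = o.CustomerID;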
I'm running a query (see below) on my development server, and it takes around 45 seconds. The server hosts 18 user databases ranging from 3 MB to 400 MB. The production server, which is very similar but has only one 25 MB user database, runs the query in less than 1 second. Both servers have been running on VMware for almost a year with no problems. However, last week I applied SP2 to the development server, and yesterday I applied critical update KB934458. The production server is still running SQL Server 2005 Standard SP1. Other than that, both servers are identical and run Windows Server 2003 Standard SP1. I'm not seeing this discrepancy with other queries running against the user databases.
use MyDatabase
GO
select db_name(database_id) as 'Database', o.name as 'Table',
s.index_id, index_type_desc, alloc_unit_type_desc, index_level, i.name as 'Index Name',
I have an SSIS package where a small table of 270 rows is fuzzy-looked-up against a table on another SQL Server and the matching records are inserted into a temporary table. This takes more than 3 hours in debug mode and never gets beyond this step. I have used an OLE DB destination to insert into the temporary table, and the temporary table never receives a value.
Sometimes it is necessary to stop MSSQLServerOLAPService to do a full backup of my OLAP server disks. After the backup finished, I tried to start MSSQLServerOLAPService, but it takes almost 30 minutes for the service to start. What could be causing that?
I have a stored procedure that normally takes about 5 hours to complete: DELETE tblX WHERE PROC_DT < dateadd(day, -93 , getdate())
tblX has about 55 million records and has an index on PROC_DT.
I have this running as a scheduled task. Over the weekend, the task executed and it is still running 56+ hours later. Does anybody have any ideas as to where I should look for the problem? I am afraid to kill the process because of the rollback time.
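For what it's worth, a common way to keep a purge like this manageable on a SQL 2000-era server is to delete in batches, so each transaction (and any rollback you might ever need) stays small; a sketch:

-- delete in 10,000-row chunks; each iteration commits on its own
SET ROWCOUNT 10000
WHILE 1 = 1
BEGIN
    DELETE tblX WHERE PROC_DT < DATEADD(day, -93, GETDATE())
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0  -- always reset afterwards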
Hi, I have a table with 48 million rows. When I executed the following update query, it took 10 HOURS in SQL Server 2000 with SP1, whereas the same query against the same table in SQL Server 7.0 took 13 MINUTES. As for the machines, the SQL 2000 server has more processors and more memory than the SQL 7.0 machine. It looks strange, but it is true. Has anyone faced such a problem? Is there a bug in SQL 2000?
Here is the query:
update cus_pay_jan_dist
set univ_regdate = b.dayid
from cus_pay_jan_dist a with (nolock), tm_dayids b with (nolock)
where a.univ_regdate = b.dayidnum
    and a.univ_regdate like '2001%'
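A hedged observation: applying LIKE '2001%' to a column that is not a character type forces a conversion of every row before the comparison, which rules out an index seek. Assuming univ_regdate holds numeric YYYYMMDD day IDs (which the join to dayidnum suggests), a range predicate expresses the same filter sargably:

-- assuming univ_regdate is a numeric YYYYMMDD day id; the range replaces the LIKE
update cus_pay_jan_dist
set univ_regdate = b.dayid
from cus_pay_jan_dist a, tm_dayids b
where a.univ_regdate = b.dayidnum
    and a.univ_regdate >= 20010101
    and a.univ_regdate < 20020101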
I have one stored procedure, and it takes 10 minutes to execute. The stored procedure has 7 input parameters and one temp table (I load the temp table using the input parameters), and I also use SET NOCOUNT ON. But if I copy the whole body of the SP and execute it as a regular SQL batch in Query Analyzer, I get the result in 4 seconds. I am really puzzled by this.
What could be the reason the SP takes so much longer than the query? Unfortunately, I can't post the code here.
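A frequent cause of fast-as-a-batch, slow-as-a-procedure behavior is parameter sniffing: the plan compiled for the first parameter values the procedure happened to be called with gets reused for very different values. Since the code can't be posted, here is only a generic sketch (hypothetical procedure and table names) of one common workaround, copying the parameters into local variables so the optimizer estimates from average statistics instead:

-- hypothetical shape; the local-variable copy is the point
CREATE PROCEDURE dbo.usp_Example
    @FromDate datetime,
    @ToDate datetime
AS
BEGIN
    SET NOCOUNT ON;
    -- copying parameters to locals prevents the plan from being tailored
    -- to whatever values the procedure was first compiled with
    DECLARE @From datetime, @To datetime;
    SELECT @From = @FromDate, @To = @ToDate;

    SELECT COUNT(*)
    FROM dbo.SomeTable   -- hypothetical
    WHERE SomeDate BETWEEN @From AND @To;
END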
There are two applications running on different servers, say ServerA and ServerB. Both applications use the same database server, SQL Server 2005, on ServerB. Call them ApplicationA and ApplicationB, after their server names.
This means that for ServerA the database is remote, and for ServerB the database is local.
Both applications are Java applications and use a datasource to connect to the database. The driver used is the SQL Server 2000 driver (which consists of 3 jars). You may ask why the 2000 driver is used against 2005: the application on ServerA gets a driver error ("driver not proper") when using the SQL Server 2005 driver.
Problem Area:
When ApplicationB (local to the database) is doing some DB operations (which include a select and then a batch insert), ApplicationA (remote) tries to insert a record, and that insert takes far too long (around 40 seconds). This causes a timeout in ApplicationA.
ApplicationA is inserting data into the same table from which ApplicationB is selecting data.
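Since ApplicationA inserts into the very table ApplicationB is reading from, the 40-second insert looks like lock blocking rather than network latency. On SQL Server 2005 this is easy to confirm while the slow insert is in flight; a sketch:

-- run while the remote insert is hanging; a non-zero blocking_session_id
-- identifies the session holding the locks the insert is waiting on
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;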
I have an update statement in my SSIS package that used to take 10 minutes as a SQL 2000 DTS package and now takes 1 hour 15 minutes in SQL 2005.
This is my SQL update statement:
Update WeeklySalesHistory
set weekendingdate = (SELECT LastTransDateTime from ReplicationControl where TableName = 'WEEKHST')
where weekendingdate is null
It uses an OLE DB connection and updates about 36,000 records.
I have read that OLE DB can be slow and that a staging table should be used. Does that mean that for all updates like this I have to use a staging table and then insert? I didn't have to do this in SQL 2000. Has it changed? Are there any other options?
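One cheap thing to try before reaching for a staging table, offered as a hedged sketch: capture the scalar subquery into a variable first so it is evaluated once, then run the whole thing from an Execute SQL Task as a single set-based statement:

-- evaluate the lookup once, then do a plain update
DECLARE @weekend datetime;
SELECT @weekend = LastTransDateTime
FROM ReplicationControl
WHERE TableName = 'WEEKHST';

UPDATE WeeklySalesHistory
SET weekendingdate = @weekend
WHERE weekendingdate IS NULL;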
I'm trying to figure out why my transaction log backup takes up to an hour to complete. I started off with the full recovery model, a full database backup every Sunday, differential backups every Tuesday/Thursday, and log backups every 5 minutes. I would have thought the log backups would execute much more quickly because I'm backing the log up so often.
Here is my backup statement; I'm hoping I've got a wrong option that you can point out to me:
BACKUP LOG [xxxx] TO [LogFilexxxxBackups]
WITH NOINIT, NOUNLOAD, NAME = N'xxxx log backup', SKIP, STATS = 10, NOFORMAT
I'm using an OLE DB Command component to perform an update (a SQL statement), but the process takes about 9 hours. I changed it to a stored procedure, but it was the same. I need to update about a million rows, and the package is very simple.
How can I improve the time? Can I use another component or strategy?
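The OLE DB Command component fires its statement once per row, so a million rows means a million round trips, which would explain why switching to a stored procedure changed nothing. The usual strategy is to land the rows in a staging table with an OLE DB Destination (a fast bulk insert) and finish with one set-based UPDATE in an Execute SQL Task; a sketch with hypothetical names:

-- hypothetical table and column names; the single set-based statement is the point
UPDATE t
SET t.Amount = s.Amount
FROM dbo.TargetTable AS t
JOIN dbo.StagingTable AS s
    ON s.BusinessKey = t.BusinessKey;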
The destination table is truncated and its indexes are dropped before loading, and after the data has been inserted we re-create the indexes.
Before that, a view extracts data from more than 22 tables in a staging database, and its output is inserted into the destination table.
It used to take 12-15 minutes, but since yesterday, loading this one particular table never completes. While loading, the database is set to simple recovery, and there is no blocking. It's part of a daily batch that loads 6 GB of data every day, but while loading this particular table it just keeps running for hours. I tried rebuilding the indexes and restarting SQL Server, but to no avail.
Any help is much appreciated, as this is a production batch job.
Good afternoon everyone, I have written a view that pulls customer demographic information as well as data from multiple scalar-valued functions. I am using this view to pull and send data from one database to another on the same SQL Server. The problem is that I am running this import as a scheduled job in Windows, and the job takes almost 24 hours to complete. The total number of records being pulled is around 21,000+. I have tried removing the functions from the view, and it then takes only 20 seconds to pull the demographic information for the same 21,000+ records, but when I add the function calls the time to complete goes through the roof. Has anyone encountered this before? If so, what would you suggest? Any help would be appreciated. Here is the syntax for my view:
SELECT TOP 100 PERCENT
    CUS_EMAIL AS Email,
    CUS_CUSTNUM AS MemberID,
    CUS_PREFIX AS Prefix,
    CUS_FNAME AS FirstName,
    CUS_LNAME AS LastName,
    CUS_SUFFIX AS Suffix,
    CUS_TITLE AS Title,
    CUS_STATE AS State,
    CUS_COUNTRY AS Country,
    CUS_ZIP AS ZipCode,
    CUS_SEX AS Gender,
    CAST(CUS_DEMCODEA AS nvarchar(20)) + ',' + CAST(CUS_DEMCODEB AS nvarchar(20)) + ',' +
    CAST(CUS_DEMCODEC AS nvarchar(20)) + ',' + CAST(CUS_DEMCODED AS nvarchar(20)) AS DemoCodes,
    dbo.GetSubScribedDateMLA(CUS_CUSTNUM, CUS_EMAIL) AS MLASubscribedDate,
    dbo.GetSubScribedDateMLP(CUS_CUSTNUM, CUS_EMAIL) AS MLPSubscribedDate,
    dbo.GetSubScribedDateLDC(CUS_CUSTNUM, CUS_EMAIL) AS LDCSubscribedDate,
    dbo.GetMLAExpiration(CUS_CUSTNUM, CUS_EMAIL) AS MLASubExpireDate,
    dbo.GetMLPExpiration(CUS_CUSTNUM, CUS_EMAIL) AS MLPSubExpireDate,
    dbo.GetLDCExpiration(CUS_CUSTNUM, CUS_EMAIL) AS LDCSubExpireDate,
    dbo.IsProspect(CUS_CUSTNUM, CUS_EMAIL) AS AGMProspect,
    dbo.IsCurrentCustomer(CUS_CUSTNUM, CUS_EMAIL) AS AGMCurrentCustomer,
    dbo.IsMLAMember(CUS_CUSTNUM, CUS_EMAIL) AS MLAMember,
    dbo.IsMLPMember(CUS_CUSTNUM, CUS_EMAIL) AS MLPMember,
    dbo.IsLDCMember(CUS_CUSTNUM, CUS_EMAIL) AS LDCMember,
    dbo.CalculateTotalRevenue(CUS_CUSTNUM, CUS_EMAIL) AS AGMTotalRevenue,
    dbo.GetPubCodes(CUS_CUSTNUM, CUS_EMAIL) AS ProductsPurchased,
    dbo.GetEmailType(CUS_CUSTNUM, CUS_EMAIL, CUS_RENT_EMAIL) AS EmailType,
    CUS_COMPANY AS Company,
    CUS_CITY AS City
FROM dbo.CUS
WHERE (CUS_EMAIL IS NOT NULL) AND (CUS_EMAIL <> '') AND (CUS_EMAIL_VALID = 'Y')
    AND (CUS_EMAIL LIKE '%@%.%')
    AND (CUS_RENT_EMAIL = 'Y' OR CUS_RENT_EMAIL = 'R' OR CUS_RENT_EMAIL = 'I')
    AND (CHARINDEX(' ', CUS_EMAIL) = 0) AND (CUS_EMAIL NOT LIKE '@%')
Thanks in advance, Michael Reyeros
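A likely factor, offered as a hedged observation since the function bodies aren't shown: each scalar-valued function runs once per row per column, so 21,000+ rows times roughly 15 function calls is over 300,000 hidden queries, which fits the 20-seconds-without versus 24-hours-with behavior. One common rework is to fold each function's logic into the view as a set-based join or OUTER APPLY. The sketch below is hypothetical; dbo.Subscriptions and its columns are assumed stand-ins for whatever the real function reads:

-- hypothetical rewrite of one function call as a set-based OUTER APPLY
SELECT c.CUS_EMAIL,
    sub.MLASubscribedDate   -- replaces dbo.GetSubScribedDateMLA(CUS_CUSTNUM, CUS_EMAIL)
FROM dbo.CUS c
OUTER APPLY (
    SELECT MIN(s.SubscribedDate) AS MLASubscribedDate
    FROM dbo.Subscriptions s   -- assumed table
    WHERE s.CustNum = c.CUS_CUSTNUM
        AND s.Email = c.CUS_EMAIL
        AND s.Product = 'MLA'
) sub;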
I have a backup maintenance plan that does a full backup daily at 03:00 and then transaction log backups every 2 minutes throughout the day to a RAIDed hard drive (it is set to overwrite after 2 weeks). When I go into Enterprise Manager and select the database to restore, it just seems to take too long to read the backup history in. Can this time be reduced? I need to be able to restore the database A.S.A.P. but still need a point-in-time restore to within 2 minutes of going down.
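The restore dialog reads the backup history tables in msdb, and with log backups every 2 minutes those tables accumulate thousands of rows that nothing prunes by default. One commonly used maintenance step, sketched here, is to trim history older than the 2-week retention window:

-- remove msdb backup/restore history older than 14 days
DECLARE @cutoff datetime
SET @cutoff = DATEADD(day, -14, GETDATE())
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff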
I have a CLR stored procedure that runs in under 2 or 3 minutes in Management Studio. But when the report calls the same SP, it runs for around 30 minutes, CPU usage goes to 90 or 100%, and the SQL Server service uses a massive amount of memory. Previously the report worked fine; only after I created a few data-driven subscriptions and redeployed the reports with minor changes did this begin to happen. Please help, I am stumped.
There are no infinite loops in the report. I checked.