I have a DTS package on SQL 2000 which transfers data from an AS400 to SQL 2000 using an ODBC Client Access 5.1 driver (the DSN was configured by a sysadmin on the AS400, so it is configured properly). When I execute the package manually (by right-clicking and choosing "Execute Package") the package runs fine and returns data in no time (approximately 30,000 rows in 15 seconds).
The problem starts when a job executes the same package! When I start the job, the DTS package runs very, very slowly. Sometimes it takes hours to return a few rows, and it seems to be stuck.
The SQL Agent is running as an NT account with administrator rights and has full access to the AS400, so the problem is not the Agent.
By monitoring the AS400, I have noticed that the job/DTS retrieves the first fetch quickly and is then in a waiting status.
I have tried everything and can't seem to get this problem fixed.
Does anyone know what the problem could be? I need help quickly! Thank you.
I have a huge speed issue on one or two of my SQL tables. I have included the basic design below.
Structure: Id, ParentId, Name
Group: Id, ParentId, Name, Weight
Products: Id, Name
StructureProducts: StructureId, ProductId, Imported
StructureGroups: StructureId, GroupId
GroupProducts: GroupId, ProductId
AnswerDates: Id, AssessmentDate
Scores (this is the slow table): AnswerDateId, StructureId, GroupId (nullable), ProductId (nullable), Score (>= 0 and <= 100)
OK, structures are the start of everything. Structures have children. If a group/product is linked to a parent or child structure, then that group/product is visible along the structure tree's flow path. Groups, like structures, have children. And also like structures, if a group is given a product, then that product is visible through the structure tree's flow path.
Example:

Earth [Structure]
- Asia [Structure]
--- China [Structure]
--- Japan [Structure]
----- Computer Stuff [Group]
------- Desktops [Group]
------- Servers [Group]
------- Laptops [Group]
--------- HP [Product]
--------- Dell [Product]
--------- Fujitsu [Product]
- Europe [Structure]
--- Germany [Structure]
----- Berlin [Structure]
--- Italy [Structure]
----- Rome [Structure]
----- Venice [Structure]
- America [Structure]
--- United States of America [Structure]
----- New York [Structure]
------- Computer Stuff [Group]
--------- Desktops [Group]
--------- Servers [Group]
--------- Laptops [Group]
----------- HP [Product]
----------- Dell [Product]
------- Home Stuff [Group]
--------- Kitchen Stuff [Group]
--------- Bedroom Stuff [Group]
----- Washington [Structure]
------- Computer Stuff [Group]
--------- Desktops [Group]
--------- Servers [Group]
--------- Laptops [Group]
----------- HP [Product]
----------- Dell [Product]
----------- Acer [Product]
------- Home Stuff [Group]
--------- Kitchen Stuff [Group]
--------- Bedroom Stuff [Group]
----- Chicago [Structure]
------- Computer Stuff [Group]
--------- Desktops [Group]
--------- Servers [Group]
--------- Laptops [Group]
------- Home Stuff [Group]
--------- Kitchen Stuff [Group]
--------- Bedroom Stuff [Group]
- Africa [Structure]
--- South Africa [Structure]
----- Johannesburg [Structure]
------- Computer Stuff [Group]
--------- Desktops [Group]
--------- Servers [Group]
--------- Laptops [Group]
----------- Acer [Product]
------- Home Stuff [Group]
--------- Kitchen Stuff [Group]
--------- Bedroom Stuff [Group]
----- Durban [Structure]
----- Cape Town [Structure]
- Australasia [Structure]
So the initial steps that happen (with regards to scoring) are as follows:
1. Insert the root score (which would be for a structure, a group, an answer date, and either a product or no product).
2. Score the next group up the tree, using the scores for the groups at the same level as the original group (0 score if no score exists).
3. Continue this until the group is at the root of the group tree (ParentId == null).
4. Using the next structure up the tree, repeat steps 2 and 3.
5. Continue step 4 until the structure is at the root (ParentId == null).
Example: scoring a product for Johannesburg / Acer / Laptops would go as follows.
1. Initial score for [Acer] product against group [Laptops] for Johannesburg.
2. Calculate score for all products (ProductId = null) against [Laptops] for Johannesburg.
3. Calculate score for [Acer] product against group [Computer Stuff] for Johannesburg.
4. Calculate score for all products against group [Computer Stuff] for Johannesburg.
5. Calculate score for [Acer] product against all root groups for Johannesburg (groups [Computer Stuff] and [Home Stuff]).
6. Calculate score for all products against all root groups for Johannesburg (groups [Computer Stuff] and [Home Stuff]).
7. Calculate score for [Acer] product against group [Laptops] for South Africa.
8. Calculate score for all products (ProductId = null) against [Laptops] for South Africa.
9. Calculate score for [Acer] product against group [Computer Stuff] for South Africa.
10. Calculate score for all products against group [Computer Stuff] for South Africa.
11. Calculate score for [Acer] product against all root groups for South Africa (groups [Computer Stuff] and [Home Stuff]).
12. Calculate score for all products against all root groups for South Africa (groups [Computer Stuff] and [Home Stuff]).
13. Calculate score for [Acer] product against group [Laptops] for Africa.
14. Calculate score for all products (ProductId = null) against [Laptops] for Africa.
15. Calculate score for [Acer] product against group [Computer Stuff] for Africa.
16. Calculate score for all products against group [Computer Stuff] for Africa.
17. Calculate score for [Acer] product against all root groups for Africa (groups [Computer Stuff] and [Home Stuff]).
18. Calculate score for all products against all root groups for Africa (groups [Computer Stuff] and [Home Stuff]).
Etc., etc., etc...
This basically covers the concept behind the basic scoring methodology. Now the methodology splits in two. The first, Methodology 1, says the calculations should use exactly the same date as the original scored date (i.e. if I do a score today, only scores from today will be used in the calculations). The other, Methodology 2, says the calculations should use the latest available date (i.e. if I do a score today, scores from today and the latest before today will be used in the calculations).
Now, to add another problem to this already complex process, each group and each product within a structure can have either of the two scoring methodologies assigned to it. Also, products can only be scored against the structures and groups they are assigned to. I.e. Acer exists in the Laptops group in Johannesburg, South Africa and Africa, but doesn't exist in New York.
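As an aside on mechanics: the "walk up the tree" in the steps above maps naturally onto a recursive CTE if this is SQL Server 2005 or later. A minimal sketch, assuming the Structure table from the design above and a hypothetical @StructureId starting point:

DECLARE @StructureId INT;
SET @StructureId = 42;  -- hypothetical starting structure

WITH StructurePath AS
(
    -- anchor: the structure being scored
    SELECT [Id], [ParentId]
    FROM [dbo].[Structure]
    WHERE [Id] = @StructureId

    UNION ALL

    -- recurse: follow ParentId up to the root (where ParentId IS NULL)
    SELECT s.[Id], s.[ParentId]
    FROM [dbo].[Structure] s
    INNER JOIN StructurePath p ON s.[Id] = p.[ParentId]
)
SELECT [Id] FROM StructurePath;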
OK, so now that I've explained briefly how this scoring works, let me get to the heart of the problem. Basically it's speed (you can clearly see why), though the speed issue only comes up in one place: where it has to look backwards for the latest available score for the required group, structure and product.
For this to happen I wrote a function:

ALTER FUNCTION [dbo].[GetLatestAnswerDateId]
(
    @StructureId INT,
    @GroupId INT,
    @ProductId INT,
    @AnswerDateId INT
)
RETURNS INT
AS
BEGIN
    DECLARE @Id INT
    DECLARE @Date DATETIME

    SELECT TOP 1 @Date = [Date]
    FROM [dbo].[AnswerDate]
    WHERE [Id] = ISNULL(@AnswerDateId, [Id])
    ORDER BY [Date] DESC

    SELECT TOP 1 @Id = ad.[Id] -- gs.[AnswerDateId]
    FROM [dbo].[Scoring] gs
    INNER JOIN [dbo].[AnswerDate] ad ON ad.[Id] = gs.[AnswerDateId]
    WHERE [StructureId] = @StructureId
      AND ISNULL([GroupId], -1) = ISNULL(@GroupId, -1)
      AND ISNULL([ProductId], -1) = ISNULL(@ProductId, -1)
      AND [Date] <= @Date
    ORDER BY [Date] DESC

    RETURN @Id
END
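A side note on this function: wrapping [GroupId] and [ProductId] in ISNULL() makes those predicates non-sargable, so without a supporting index every call can end up scanning Scoring. A composite index along these lines might help (a sketch only, using the column names above):

CREATE NONCLUSTERED INDEX IX_Scoring_Structure_Group_Product
ON [dbo].[Scoring] ([StructureId], [GroupId], [ProductId], [AnswerDateId]);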
Now on small amounts of data (1,000 rows or so) it's quick, though that is because the data is minimal; on large amounts of data this function runs for a long time. Specifically, in the context of the following query, when there are 6 months of scoring data (100,000+ rows) to peruse:
SELECT [StructureId], [GroupId], [AnswerDateId], [ProductId], [Score]
FROM [Scoring]
WHERE [AnswerDateId] = [dbo].[GetLatestAnswerDateId]([StructureId], [GroupId], [ProductId], NULL)
  AND [StructureId] = <South Africa>
  AND [GroupId] = <Computer Stuff>
  AND [ProductId] = <Acer>

(The <...> values stand in for the relevant ids.)
Any ideas on how to make this quick? Or at least quicker?
My current runtime for calculating the 2,500 base scores (totalling roughly 100,000 rows) is 15 hours, though this is an initial calculation and is only supposed to be done once. Also, the calculations are all correct, so my only issue is the speed of the entire process.
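If the once-off batch ever needs to be faster, one direction worth testing (SQL Server 2005+ only, and a sketch rather than a drop-in replacement) is computing the latest answer date for every structure/group/product combination in one set-based pass with ROW_NUMBER(), instead of calling the scalar function once per row:

SELECT [StructureId], [GroupId], [ProductId], [AnswerDateId], [Score]
FROM (
    SELECT gs.[StructureId], gs.[GroupId], gs.[ProductId],
           gs.[AnswerDateId], gs.[Score],
           -- newest answer date first within each combination
           ROW_NUMBER() OVER (
               PARTITION BY gs.[StructureId], gs.[GroupId], gs.[ProductId]
               ORDER BY ad.[Date] DESC) AS rn
    FROM [dbo].[Scoring] gs
    INNER JOIN [dbo].[AnswerDate] ad ON ad.[Id] = gs.[AnswerDateId]
) x
WHERE rn = 1;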
Hi, I have a table defined as:

CREATE TABLE [SH_Data] (
    [ID] [int] IDENTITY (1, 1) NOT NULL,
    [Date] [datetime] NULL,
    [Time] [datetime] NULL,
    [TroubleshootId] [int] NOT NULL,
    [ReasonID] [int] NULL,
    [reason_desc] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [maj_reason_id] [int] NULL,
    [maj_reason_desc] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [ActionID] [int] NULL,
    [action_desc] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [WinningCaseTitle] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [Duration] [int] NULL,
    [dm_version] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [ConnectMethod] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [dm_motive] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [HnWhichWlan] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [RouterUsedToConnect] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [OS] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [WinXpSp2Installed] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [Connection] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [Login] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [EnteredBy] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    [Acct_Num] [int] NULL,
    [Site] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CS_AS NULL,
    CONSTRAINT [PK_SH_Data] PRIMARY KEY CLUSTERED ([TroubleshootId]) ON [PRIMARY]
) ON [PRIMARY]
GO

It contains 5.6 million rows and has non-clustered indexes on Date, ReasonID, maj_Reason and Connection. Compared to other tables on the same server this one is extremely slow. A simple query such as:

SELECT
    SD.reason_desc,
    SD.Duration,
    SD.maj_reason_desc,
    SD.[Connection],
    SD.aolEnteredBy
FROM dbo.[Sherlock Data] SD
WHERE SD.[Date] > DATEADD(Month, -2, GETDATE())

takes over 2 minutes to run! I realise the table contains several large columns which make the table quite large, but unfortunately this cannot be changed for the moment. How can I assess what is causing the length of the query time? And what could I possibly do to speed this table up? The database itself is running on a dedicated server which hosts some other databases, none of which have this performance issue. Anyone have any ideas?
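A note on the "how can I assess" question: the standard first step is to capture IO and timing statistics plus the actual execution plan for the slow statement. These are stock SQL Server commands; the sketch below uses the SH_Data name from the CREATE TABLE, since the posted query's [Sherlock Data] and aolEnteredBy identifiers don't match the DDL (probably an anonymisation slip):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- turn on "Include Actual Execution Plan" in Management Studio to see the plan
SELECT SD.reason_desc, SD.Duration, SD.maj_reason_desc, SD.[Connection], SD.EnteredBy
FROM dbo.SH_Data SD
WHERE SD.[Date] > DATEADD(MONTH, -2, GETDATE());

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

If logical reads run into the millions, the nonclustered index on Date is likely being ignored because, with rows this wide, bookmark lookups for a two-month range cost more than a scan.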
Hello! I have a query that joins five tables and returns around 45,000 rows, and it takes no more than a minute to execute in Management Studio on a SQL Server 2005 box: 2 CPUs (dual core, 32-bit), 4 GB RAM and a RAID 5 disk system. The O/S is Windows 2003 SP2 Standard Edition.
When the same query is executed in SSRS 2005, with some drilldown and summary of drilldown levels, it never finishes executing.
Looking at the hardware in Performance Monitor reveals nothing strange, except that CPU time is around 40 percent. More than 2 GB of memory is available at all times.
Any suggestions are appreciated!
Any problems with SQL Server 2005 source database running on SQL Server 2000 compatibility level?
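(For reference, on SQL Server 2005 the compatibility level can be checked and changed with sp_dbcmptlevel; the database name below is hypothetical:)

EXEC sp_dbcmptlevel 'MyDatabase';      -- report the current compatibility level
EXEC sp_dbcmptlevel 'MyDatabase', 90;  -- 90 = SQL Server 2005, 80 = SQL Server 2000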
I have a package that reads the contents of 11 Excel files into various tables. Opening this package in the designer, or with DTExecUI, is extremely slow. In both cases, when I open the package it takes over 10 minutes to do anything. Visual Studio gives the "Visual Studio is Busy" message for 10 minutes, and DTExecUI just hangs. DTExecUI actually hangs twice: once when opening the package and a second time when clicking "Execute" (totalling over 20 minutes). It seems like no matter how I try to run the package, it will always hang for 10 minutes before running, with no status message or anything. Once it runs, it completes quickly with no errors.
The various tasks in the package are fairly simple, most being Source > Data Conversion > Destination.
I'm experiencing an extremely slow connection from a Windows XP Pro SP2 client to MSSQL 2000 running on a Windows 2000 server. The client runs a VB6 application that connects with Windows authentication: every form requesting data opens with a long delay on first launch; subsequent attempts run normally fast.
In the same LAN there are some other identical clients, all running fine.
Every other network activity from that client is ok.
I'm working within the VS2005 / Business Intelligence Development Studio environment. I've got one master package, which loads about 18 sub-packages as tasks.
After opening the master package (and waiting 5-10 minutes for the packages to open and validate), maneuvering within the IDE is nearly impossible, it is so slow. Context menus can take 30 seconds to open. Certain operations, like closing a window, seem to hang the environment.
Does anyone have any feedback about this kind of IDE performance problem?
I have successfully upgraded from Windows 2000 / SQL Server 2000 to Windows 2003 / SQL Server 2005. Here is the process:
- Install Windows 2003 / SQL Server 2005 on a new VM instance
- Detach the SQL Server 2000 database, copy the data to the new server and attach it to SQL Server 2005
- Run scripts to restore user logins
- Change the compatibility level to SQL Server 2005 (90)
Now, opening the 3rd-party application and connecting to the new database server (via IP address), the application connects OK and accesses data without problems. However, the database takes more than 5 minutes to connect/open, instead of 30 seconds via the old server. Why?
I've got a football (soccer for the Yanks!) predictions league website that is driven by an Access database. It basically calculates points scored for a user getting certain predictions correct. This is the URL:
http://www.pool-predictions.co.uk/home/index.asp
There are two sections of the site, however, that have almost ground to a halt now that more users have registered through the season. The players section and the league table section have loaded progressively more slowly throughout the year and are now taking almost 2 minutes to load.
All the calculations are performed in the Access database I've written, and there are Access SQL queries to get the data out.
My question is: how can I speed the bloody thing up?! Someone has also suggested that I use stored procedures and SQL Server to speed things up. I've never used SQL Server before, so I am a bit scared about using it (I'm only a hobbyist), and I don't even know what an SP is or does. How easy will it be to upgrade the whole thing to SQL Server, and will it be worth the hassle, bearing in mind I expect my userbase to keep growing? Do SPs help speed things up significantly? I'd appreciate some advice!
I have a bunch of packages that take views and create tables from them. Some of the views are rather complex, but the packages themselves are very simple... drop and re-create a table using the data from a view on the same server. We create a new DB for each year, and this year we've upgraded to a new server with SQL 2005, so our DTS packages on the 2000 SQL server had to be recreated in SSIS on the new server. No problem, as I said the packages are really simple. But when I create the packages in SSIS they now take an extremely long time to execute, and I cannot figure out why.
For instance, one DTS package would take approximately 5 minutes to run when the view contained hundreds of thousands of rows and the underlying tables contained millions. But now, even with MUCH smaller tables (since it's the beginning of the year, new DB) the SSIS package I created on the new server takes over an hour, literally. The view that the SSIS package is using to create the table only takes about 15 seconds to execute in management studio (only about 16,000 rows). How can this possibly take so long??
The new server is virtually the same hardware-wise: 4 x 2400 MHz CPUs, 4 GB RAM, Windows 2003 Server.
I designed an SSIS project with about 200 packages. The packages extract from the live server to the reporting server. Some of my packages, about 10 of them, are very slow. Strangely enough, the ones with more rows run very fast. I'm using Source -> Lookup -> Conditional Split -> OLE DB Destination or OLE DB Command.
Can someone help me find out what the problem could be? I'm very new to SSIS.
I don't know what is wrong, but the moment I open SSIS it runs too slowly... and if I have 100 warnings, it takes its own sweet time even to open the designer...
I am just getting frustrated with this.
If I open my browser it's slow, and email is slow... Any help is appreciated.
Hi, our system runs a job with an SSIS package more slowly than with a DTS package. The SSIS package does the same thing as the DTS package. Why? What should I check?
We have noticed in our environment slowness when starting SSIS packages from SQL Server jobs. I did a quite detailed study on when the slowness actually occurs and what the consequences are. Here are the results.
The SSIS package execution is slow if all of the following are true:
- The package is started from a job. If started directly as an SSIS package, the execution is fast.
- The job is running on a 64-bit Windows Server (SQL Server 2005 SP2). The SSIS package and the job are either on the same server or on different servers (the second server is SQL Server 2005 SP1). If the job is run on a 32-bit workstation (Windows XP SP2), the execution is fast (the SSIS package still being on the server).
- The package contains tasks.
  - If there are no tasks, just an empty sequence container, the execution is fast.
  - If a package that has no tasks has logging into the database configured, the execution is fast.
  - Slowness has been verified with A) a package having a single Execute SQL task and B) a package having a Send Mail task.
It doesn't seem to matter which user account is used when running the job.
The slowness happens in several places; at least the following have been verified (there are also others):
- There is exactly a 30-second lag between starting the job (as seen from job history) and when the package's PreValidate event appears in the log (as seen in the sysdtslog90 table).
- The validation of the package takes 15 seconds (the time difference in the log between PreValidate and PackageStart).

The problem is really affecting our production environment. Currently the only solution we have come up with is to put all the jobs on a workstation and use the workstation as a production server for the jobs.
I haven't heard of anyone else having the same problem.
I am using SSIS packages for data transfer. When I run a package on a virtual server it takes more time than when run on a PC. After analysing, I found that the package, when run on the virtual server, spends around 50 seconds or so in startup. Could anyone help me with a bit more detail on why it runs slowly?
The package has 2 Execute SQL tasks, one loop container, 2 Data Flow tasks, 1 Foreach Loop container and 1 FTP task. The data flow tasks have 1 OLE DB source, 1 flat file source, 1 Row Count transformation, 1 Recordset destination and 1 OLE DB destination.
When I load the package into BIDS it takes 125 MB of memory, and then everything is slow: the properties panel slides in slowly and exits slowly, the objects in the packages are not painted properly, and making changes and running takes a lot of time.
Am I doing anything wrong here? Why is it consuming so much memory?
I am very new to SQL and am basically self-taught (apart from SQL for Dummies and a couple of other books). I am hoping someone can help me with the CONVERT statement.
My system outputs dates as numbers such as '12345'. What I have written so far is this:
SELECT
    Resprj.Re_code,
    Res.re_desc,
    Resprj.Pr_code,
    Projects.Pr_desc,
    Res.Re_status1,
    Projects.active,
    Projects.Pr_start,
    Projects.Pr_end
FROM Res
INNER JOIN Resprj ON (Res.Re_code = Resprj.Re_code)
INNER JOIN Projects ON (Projects.Pr_code = Resprj.Pr_code)
    AND Projects.Pr_desc LIKE '%C9%'
WHERE Projects.active = -1
ORDER BY Projects.Pr_code, Res.Re_desc
Could someone please help with using the CONVERT statement to change the date from '12345' to dd/mm/yy?
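If the number is a day count from a fixed base date (an assumption; the base depends on the system that produced it), then DATEADD plus CONVERT style 3 gives dd/mm/yy. A sketch, assuming Pr_start holds the day count and a base date of 1 January 1900:

-- adjust the '19000101' base date to whatever epoch your system uses
SELECT CONVERT(VARCHAR(8), DATEADD(DAY, Projects.Pr_start, '19000101'), 3) AS Pr_start_ddmmyy
FROM Projects;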
We have some SSIS packages whose data flows source data from DB2 using an OLE DB connection. These were working when we were using the OLE DB for DB2 drivers supplied in HIS 2004, but since upgrading to HIS 2006 some of these data flows are failing with the error shown below.
Also, a DTS package running the same query against the same database(s) works fine.
Can anyone shed any light on this? Do we need a patch?
The error text we're seeing is:
[OLE DB Source [1380]] Error: An OLE DB error has occurred. Error code: 0x80040E14. An OLE DB record is available. Source: "Microsoft DB2 OLE DB Provider" Hresult: 0x80040E14 Description: "".
[DTS.Pipeline] Error: The PrimeOutput method on component "OLE DB Source" (1380) returned error code 0xC0202009. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.
[DTS.Pipeline] Error: Thread "SourceThread0" has exited with error code 0xC0047038.
So, I just downloaded and installed Windows 8 and SQL Server 2014 Developer. I used to have Visual Studio; that seems to be gone now. Can't find it. I can't figure out a way to get to SSIS or SSAS.
I upgraded to the SQL Server 2005 SP2 release today. I had been working with build 9.00.1399. I had developed a few packages in the older version which were invoked through an ASP web application locally, and it was working fine.
Today I upgraded to SQL Server 2005 build 3042. Now the same packages are not working when invoked through the web application, but they work fine standalone. All errors refer to a connection problem.
I need to resolve this urgently. Can anyone help me please?
Errors -
"SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005. An OLE DB record is available. Source: "Microsoft JET Database Engine" Hresult: 0x80004005 Description: "The Microsoft Jet database engine cannot open the file ''. It is already opened exclusively by another user, or you need permission to view its data.". "
SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. The AcquireConnection method call to the connection manager "Excel Connection Manager" failed with error code 0xC0202009. There may be error messages posted before this with more information on why the AcquireConnection method call failed.
I have a test DB in SQL Server Express 2005 with a 65 MB data file and a 1.1 GB log file! The production DB has a slightly larger data file and only a 6 MB log file. I haven't updated my test DB from the production DB in a couple of months. I tried Shrink Database and Shrink File in Management Studio Express, but the log file size hasn't changed. How did the log file get so large, and how do I fix it? Thanks for any help.
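For what it's worth, the usual cause on a test box is the FULL recovery model with no log backups ever taken, and the usual fix is to switch to SIMPLE and then shrink the file. A sketch, assuming a hypothetical database called TestDb; the logical log file name (guessed here as TestDb_log) comes from EXEC sp_helpfile:

USE [TestDb];
GO
-- without log backups there is no point staying in FULL recovery
ALTER DATABASE [TestDb] SET RECOVERY SIMPLE;
GO
-- shrink the log file to about 10 MB; logical name per sp_helpfile
DBCC SHRINKFILE (N'TestDb_log', 10);
GO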
Hello everyone. I'm not sure if this is a problem, but I've got a database which is about 1,700 MB in size (at least that's the allocated space on disk) and the log file is over 4,600 MB. I've truncated the log file but it still keeps growing. None of our other databases are this large, and there are a lot of transactions performed regularly, but it looks odd to me that the log is this big when the data is half the size. How can I find out exactly how much space is being taken up by the data, and is there anything I can do to shrink the size of the log file? I am not really a DBA, so I'm not sure how crucial this is in the grand scheme of things. Thanks.
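On the "how much space" question, two stock commands report it directly (run them in the database concerned):

EXEC sp_spaceused;        -- database size, unallocated space, data/index totals
DBCC SQLPERF(LOGSPACE);   -- size of each log file and the percentage actually in use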
We have SQL Server running on a Windows 2003 server, only because Backup Exec requires it. At the location C:\Program Files\Microsoft SQL Server\MSSQL\Data there is this file: SuperVISorNet_log.LDF, which is 15 GB and is accessed daily. I apologise because I don't know what this is!
My question is: can this file be 'pruned' (for want of a better word)? It's taking up a lot of backup space.
"Document contains one or more extremely long lines of text. These lines will cause the editor to respond slowly when you open the file. Do you still want to open the file."
I click Yes and my project opens normally. Does anyone know why this happens? My project is small: it has one package that imports Excel files into SQL Server 2005.
I have a question about SSIS. Is it a totally new product in SQL Server 2005, or is it an upgraded version of DTS? Thanks a lot for any guidance.
Right, I'm no SQL programmer. As I type this, I have roughly a third of the hair I had at 5 o'clock last night. I even lost sleep over it.
I'm trying to return a list of records from a database holding organisation names. As I've built a table to hold record versions, the key fields (with sample data) from a view I created to display this are as follows:
As you can see, the record ID will always be unique. Record 3 is a newer version of record 1, and 4 of 2. The issue is this: I only want to return unique organisations. If a version of the organisation record is live on the system (in this case record ID 3), I want to return the live version with its unique record ID. I'm assuming for this I can perform a simple "SELECT WHERE live = 1" query.
However, some organisations will have no live versions (see the org with ID 2). I still wish to return a record for this organisation, but in this case the most recent version, i.e. version 2 (and again, its unique record ID).
In actual fact, it seems so much clearer when laid out like this. However, I feel it's not going to happen this end, so any help would be greatly appreciated.
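Since the sample data didn't survive the posting, the column names below are guesses from the description (RecordId, OrgId, Version, Live), but the usual shape for "the live version if one exists, otherwise the newest version" on SQL Server 2005+ is ROW_NUMBER() per organisation:

SELECT RecordId, OrgId, Version, Live
FROM (
    SELECT RecordId, OrgId, Version, Live,
           -- live rows sort first, then newest version first
           ROW_NUMBER() OVER (PARTITION BY OrgId
                              ORDER BY Live DESC, Version DESC) AS rn
    FROM dbo.OrganisationVersions   -- hypothetical view/table name
) v
WHERE rn = 1;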
I am attaching a database with 3 data files. When I execute "EXEC sp_attach_db ..." I obtain this error: database 'POINT' cannot be opened because some of the files could not be activated. I have deleted its LDF file. Usually I detach my DB, then delete the transaction log, and reattach the 3 data files... Now it doesn't work! Can someone help me? Thanks.
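For anyone hitting the same thing: sp_attach_single_file_db can rebuild a missing log only for a single-data-file database. With three data files, SQL Server 2005 can rebuild a deleted log at attach time, provided the database was cleanly detached, via FOR ATTACH_REBUILD_LOG. A sketch with hypothetical file paths:

CREATE DATABASE [POINT]
ON (FILENAME = N'C:\Data\POINT_1.mdf'),
   (FILENAME = N'C:\Data\POINT_2.ndf'),
   (FILENAME = N'C:\Data\POINT_3.ndf')
FOR ATTACH_REBUILD_LOG;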