Strange SSIS Behavior
Oct 10, 2007
I created a very simple SSIS package (it just updates a single row in a table). When I execute the package from the command line (using dtexec), it takes about a second to finish, as expected. But when I execute it with dtexec via xp_cmdshell, it takes about 91 seconds. When I use a SQL job to execute the package as an operating system step, it takes 91 seconds. Using a SQL job to execute it as an SSIS package step again takes about 91 seconds. It appears that something causes a delay of about 90 seconds before the package actually gets executed. I tried changing the SSIS service account, but that didn't change anything. Why is executing the package through SS2005 different from executing it directly from the command prompt?
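For reference, a minimal way to time the xp_cmdshell path from a query window, to confirm where the ~90 seconds goes (the .dtsx path below is a placeholder):

-- measure just the xp_cmdshell + dtexec round trip; the package path is hypothetical
DECLARE @start datetime
SET @start = GETDATE()
EXEC master..xp_cmdshell 'dtexec /F "C:\Packages\UpdateRow.dtsx"'
SELECT DATEDIFF(second, @start, GETDATE()) AS elapsed_seconds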
View 4 Replies
Feb 1, 2008
I have a new table that stores the UserId, a uniqueidentifier obtained from Membership.GetUser().ProviderUserKey. If I make the select statement through a stored procedure in code-behind, it runs as it should.

Code-behind:

Dim GetCustomersCars As CustomerCarByUserId = New CustomerCarByUserId
MyCars.DataSource = GetCustomersCars.CarByUserId(Membership.GetUser().ProviderUserKey)
MyCars.DataBind()

But when I use an ObjectDataSource it fails:

<asp:ObjectDataSource id="ObjectDataSource1" runat="server" selectmethod="CarByUserId" typename="CustomerCarByUserId">
    <SelectParameters>
        <asp:Parameter defaultvalue="Membership.GetUser().ProviderUserKey" name="UserId" type="Object" />
    </SelectParameters>
</asp:ObjectDataSource>

I've tried Membership.GetUser().ProviderUserKey.ToString(), but that doesn't work either. Error message: InvalidCastException. I connect to the same source in both cases. Anyone with an idea?
View 1 Replies
Nov 29, 2005
I have an SP that usually works fine (0-16 CPU time, 40 ms duration), but from time to time the server hangs with apparently no reason. The SP has a lock timeout set to 500, so it should abort if a lock timeout error (1222) occurs, but it doesn't. The Profiler reports very long execution times (over 30 sec), and because of that all other SP calls are blocked, because the transaction opened by the first SP execution is not finished yet.
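For context, a minimal sketch of the lock-timeout pattern described (the table, column, and values are hypothetical; on SQL 2000, error 1222 aborts only the statement, so the batch has to check @@ERROR itself):

SET LOCK_TIMEOUT 500  -- give up lock waits after 500 ms (raises error 1222)

BEGIN TRAN
UPDATE dbo.SomeTable SET SomeCol = 1 WHERE Id = 42
IF @@ERROR <> 0
BEGIN
    ROLLBACK TRAN  -- without this, the open transaction keeps blocking other callers
    RETURN
END
COMMIT TRAN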
Other attempts to identify blocking queries did not show me anything suspect (sp_lock, DBCC OPENTRAN) other than the usual blocked chain. I'm starting to think about an IO bottleneck, or an IO failure, that could block disk access and cause the delay. The status of the RAID 5 array is healthy.
The server is used as the storage system for a website (approx. 2000 concurrent users), and occasionally I noticed an ASP queue, but this strange behavior occurs even during off-peak hours.
Any thoughts ?
-----
HP Server - 2 CPUs @ 3.4 GHz; 4 GB RAM; SCSI - RAID 5
Windows 2000 Advanced Server - SQL Server 2000 SP4
View 1 Replies
Jul 23, 2005
Hi folks,
I have a C# app connecting to an MS Access database with several tables. In one specific situation I have a problem with a DateTime column in a table. The problem is that when I select records from a table for a specific period, the day and month seem to be swapped in the query, but it only happens when the swap gives a valid date, e.g.:

12/10/2005 (12 Oct 2005) returns records for 10/12/2005 (10 Dec 2005)
23/05/2005 (23 May 2005) returns records correctly, since 05/23/2005 is not a valid date under Danish regional settings.

The query is:

"SELECT [ID], [Activity], [BeginDate] FROM TimeReg WHERE [BeginDate] >= #" + _start + "# AND [BeginDate] <= #" + _end + "#"

_start and _end are of type DateTime. My PC is running with Danish regional settings, and if I switch to en-US settings in the control panel this fixes the problem, but that is not a solution for me. Any suggestions to solve this problem? Thanks in advance.
Kim W.
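For reference, a hedged sketch of a culture-invariant filter that sidesteps the ambiguity (dates are the example values from above); Jet parses the ISO yyyy-mm-dd form the same way regardless of regional settings:

SELECT [ID], [Activity], [BeginDate]
FROM TimeReg
WHERE [BeginDate] >= #2005-10-12# AND [BeginDate] <= #2005-12-10#

Formatting _start and _end with an invariant pattern such as yyyy-MM-dd, or better, passing them as OleDbParameter values instead of concatenating strings, achieves the same effect.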
View 4 Replies
Nov 1, 2007
I have the following code below on a standalone computer, and it worked perfectly. Suddenly, without any significant changes to the code, no server instances were found on my local computer. I know there are several server instances on the computer. Why is it acting so unpredictably? The same thing happened when I tried SQL-DMO.
// Get a list of SQL Server instances available on the network
DataTable dtSQLServers = SmoApplication.EnumAvailableSqlServers(false);
foreach (DataRow drServer in dtSQLServers.Rows)
{
    String ServerName;
    ServerName = drServer["Server"].ToString();
    // "Instance" is DBNull for default instances; append "\instance" for named ones
    if (drServer["Instance"] != DBNull.Value && drServer["Instance"].ToString().Length > 0)
        ServerName += "\\" + drServer["Instance"].ToString();
    if (cmbServer.Items.IndexOf(ServerName) < 0)
        cmbServer.Items.Add(ServerName);
}
View 3 Replies
Mar 5, 2008
I have 2 packages that for ease I'll call Parent & Child. The Parent package calls the Child package as the 4th step in the process. Once the Child has completed, the Parent has a few more imports that it does.
The Portfolio table is loaded in the Child package which is step 4 in the Parent package. Then in step 5 a few tasks utilize that Portfolio data for lookups.
The strange part is that there are probably 4 or 5 data tasks that do lookups against the Portfolio data in step 5 (step 5 is a container). All but 2 of the data tasks retrieve data from the Portfolio table. The other 2 don't find any data and just move on. Once the package stops, if I simply execute those tasks by themselves they run and load the data correctly.
It seems to me to be a caching or an isolation problem but I can't find a solution.
Any ideas?
View 7 Replies
Aug 2, 2005
Hi all,
I face a problem as follows: we have an application running on SS2K. We log every delete of documents (from the Archive table) in another table. Now it seems some of the rows have been deleted, strangely, without any delete log from our application. We assumed there was somebody with direct access to the database deleting them manually (obviously our app does not generate any log in this situation), but there is no such person; we checked with the admins many times.
Does SQL Server itself delete rows for any reason? How can I know what is happening? Do you think our app is flawed somewhere?
Thanks a lot for your attention.
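For what it's worth, a minimal server-side audit sketch (table and column names are hypothetical) of the kind of delete logging described; because it runs in a trigger, it also catches deletes that bypass the application:

CREATE TRIGGER trg_Archive_DeleteAudit ON dbo.Archive
FOR DELETE
AS
INSERT INTO dbo.ArchiveDeleteLog (DocumentID, DeletedOn, DeletedBy, FromHost)
SELECT d.DocumentID, GETDATE(), SYSTEM_USER, HOST_NAME()
FROM deleted d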
View 1 Replies
Oct 5, 2004
Hi,
We are noticing some strange behavior with MSSQL. I was hoping somebody can shed some light.
Since the past few days in our production database we have been getting the following error
Could not allocate space for object 'Person' in database 'PROD' because the 'PRIMARY' filegroup is full...
Some data on our system:
The PRIMARY filegroup is 20 GB in size, and 80% of it is free. Also, the PRIMARY filegroup is set to auto-grow and there is about 20 GB of free space at the OS level. So I don't think it has anything to do with the filegroup itself.
I started doing some research on the 'person' object (table), ran sp_spaceused, etc., to get some data. On a trial-and-error basis I ran DBCC INDEXDEFRAG on the 'person' table and the error went away.
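For reference, the commands involved, in the order used (SQL 2000 syntax; the index id 1 assumes the clustered index):

EXEC sp_spaceused 'person'            -- reserved vs. used space for the table
DBCC SHOWCONTIG ('person')            -- scan density and extent fragmentation
DBCC INDEXDEFRAG (PROD, 'person', 1)  -- online defragmentation of index id 1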
Questions
1. Why is the error misleading? Why does it say, the 'PRIMARY' filegroup is full?
2. Why am I getting this error and why does running DBCC INDEXDEFRAG fix the problem?
3. I can understand the index being fragmented and needing a defrag, but can MSSQL server actually fail with this error if the index is fragmented too much?
4. What data can I look at and prevent this from happening in the future?
Any other data will be much appreciated.
Thanks so much.
View 4 Replies
Mar 22, 2006
Running SQL 2000 SP4 on Windows 2000 Server.
When a SELECT query is executed in Query Analyzer, results are displayed in the results pane, fine... when an ORDER BY clause is added to the SELECT statement, the query runs for approx. 20 seconds then displays "TempDb log is full [Error 9002, Severity 17]". (Tempdb is set to autogrow/10%/unrestricted and there is plenty of storage space.) The next time the query is executed after getting the "tempdb log is full" error, the server reboots upon query execution. As soon as F5 or Ctrl-E is pressed to execute the query, the server does a hard crash - black screen, then reboot... no warnings, no Event Viewer entries, no SQL log warnings/errors, no drwtsn log, no hardware log errors... nothing.
Re-applied Windows 2000 SP3 and SQL SP4 to server, same behavior.
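For anyone hitting the same thing, two standard checks that can be run while the query executes, before the log fills:

DBCC SQLPERF (LOGSPACE)   -- log size and percent used per database, including tempdb
DBCC OPENTRAN ('tempdb')  -- oldest active transaction preventing log truncation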
View 4 Replies
Aug 17, 2007
Hi folks,
I have some code that just works. But when I put it into an exec() I get a strange error. First, the code:
exec ('
select
year,quarter, min(price) as minimum
into #temptable from
(
select
ntile(4) over (partition by year,quarter order by price) as rang
,year
,quarter
,price
from
(
select distinct id,year,quarter,price from #tbl1
) as a
) as b
group by rang,year,quarter
Select year ,quarter,
(
SELECT CAST(minimum as varchar(max)) + ","
FROM #temptable t2
where t1.year=t2.year AND t1.quarter=t2.quarter
FOR XML PATH("")
)
from #temptable t1
group by year,quarter
')
SQL Server says that INSERT INTO is missing a column name. It points at the line with FOR XML PATH("").
Any idea what's wrong here?
The output without exec (and correct quotes) looks like (year, quarter, then the concatenated minimums):
2004 3 0.00,252.90,331.40,470.00,
2004 4 0.00,241.00,325.00,450.00,
2005 1 102.00,242.90,326.37,448.00,
2005 2 0.00,253.00,340.00,480.00,
2005 3 0.00,250.00,325.00,465.00,
2005 4 43.00,260.00,355.00,490.00,
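For comparison, a minimal sketch of the usual escaping inside exec(): single quotes in the dynamic batch are doubled rather than replaced with double quotes, since "" is parsed as a quoted identifier under QUOTED_IDENTIFIER ON:

exec ('
SELECT CAST(minimum AS varchar(max)) + '',''   -- '','' is the string literal ","
FROM #temptable t2
FOR XML PATH('''')                             -- '''' is the empty string literal
')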
Thank you
View 3 Replies
Jun 1, 2004
I have a trigger on each table in a database which updates a datetime column (LastChangedOn) and a varchar column (EnteredBy) after each update on the individual table. The problem is that when one table is updated at the same instant as another table (by different users), the same varchar value (SYSTEM_USER) is put in both tables, even though the users are different.
Here is an example of the trigger:
CREATE TRIGGER EventUpdate ON jrowley.Event
AFTER UPDATE
AS
UPDATE jrowley.Event
SET LastChangedOn = GETDATE(), EnteredBy = SYSTEM_USER
WHERE EventID IN (SELECT EventID FROM deleted)
Any suggestions are welcome.
Thanks,
Jerry
View 2 Replies
May 1, 2008
Ok maybe someone smarter than me (not difficult) can help me out :)
Two queries:
#1:
select
    a.load_id,
    b.attribute_name,
    a.attribute_loc,
    b.attribute_loc
from
    PCI_Template_NR_Map b
    left outer join PCI_Master a
        on a.attribute_name = b.attribute_name
        and a.load_id in (select distinct top 53 load_id from PCI_Load)
#2:
select
    a.load_id,
    b.attribute_name,
    a.attribute_loc,
    b.attribute_loc
from
    PCI_Template_NR_Map b
    left outer join PCI_Master a
        on a.attribute_name = b.attribute_name
        and a.load_id in (select distinct top 54 load_id from PCI_Load)
#1 Produces a correct left outer join, any values in PCI_Template_NR_Map that are not in PCI_Master show null. This works for any number of load_id values in the subselect up to 53.
#2 is the exact same query, except I am no longer limiting it to 53; when I get to 54 (or if I take away the TOP altogether), it returns rows as if it were a normal inner join instead of a left outer join (i.e. it only shows rows that match between PCI_Master and PCI_Template_NR_Map).
Can anyone explain to me what is happening here, and how to get around this issue? I need to be able to filter this for as many load_ids as I need (usually about 200). Thanks in advance,
Brad
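For what it's worth, a hedged workaround sketch using the same tables: applying the load_id filter inside a derived table keeps the restriction on PCI_Master itself, so the outer join is preserved no matter how many load_ids are involved:

select
    a.load_id,
    b.attribute_name,
    a.attribute_loc,
    b.attribute_loc
from
    PCI_Template_NR_Map b
    left outer join (
        select load_id, attribute_name, attribute_loc
        from PCI_Master
        where load_id in (select distinct load_id from PCI_Load)
    ) a
        on a.attribute_name = b.attribute_name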
View 20 Replies
Apr 25, 2008
I have a fairly complex report. I have a couple of sub reports on the left hand side, a table in the middle and a couple of rectangles on the right side of the screen. When I try to add another sub report, even though I make it about a one inch square, it pushes the rectangles three or four inches further right when I display it.
Any way to rein in an errant sub report?
Thanks.
View 4 Replies
Jan 20, 2007
I have a data flow with two lookup components (call them lookup1 and lookup2). They both query the same relational table, but with different values. Each has a single-row result set containing one column, and each of the two columns is mapped to a corresponding package-level variable. The original data flow sequence had lookup1 executing after lookup2. Each component redirects errors to a separate text file.
Lookup1 succeeds, but lookup2 fails on every row, which populates its error text file; however, I can construct a SQL query from the lookup2 values that returns the expected result.
If I reverse the sequence of components (lookup2 followed by lookup1), lookup2 still fails on every row. Whenever both lookups are present in the data flow, lookup2 fails for every row and all its rows are redirected to the error text file.
Now this is where it gets interesting. If I omit either lookup1 or lookup 2 from the data flow, it works. If the data flow contains lookup1 only the destination is populated. If the dataflow contains only lookup2 then no errors are written to the error text file, all lookups succeed, and the destination is populated.
I'm stumped. Is it possible that both lookups selecting from the same table could cause a problem? Each works independently, but when both are together in the data flow, lookup2 fails for every row. I've been over the configuration and code a dozen times and am positive there are no errors; besides, lookup2 runs fine if lookup1 is excluded from the data flow.
View 11 Replies
Jul 7, 2006
Hi,
In my package I have a source, a script component to make some changes to the data, and a destination. To speed up the process, within one data flow I have created 6 copies of the above components and run them in parallel. Each source takes a different set of data; I have divided the data by record number such that each set reads 1 million records.
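A hedged sketch of the kind of source split described (table and column names are hypothetical): each of the six sources reads one contiguous one-million-row slice by record number:

SELECT *
FROM dbo.SourceTable
WHERE RecordNo > 0 AND RecordNo <= 1000000          -- pipeline 1
-- WHERE RecordNo > 1000000 AND RecordNo <= 2000000 -- pipeline 2, and so on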
Now, my question is, though each pipeline is supposed to process exactly 1 million records, they are not running at the same speed. For example, one pipeline completes processing all 1 million records while another has processed only 250,000 records in that time. I don't see any reason why one should run slowly while another runs fast, considering that both are doing the same thing.
Do you have any idea about this?
Thanks.
View 6 Replies
Jun 23, 2000
I have a stored proc with the following two lines of code:
Select @SumCredits = (Select Sum(CreditAmount) From AccountBalanceList WITH (NOLOCK) Where (AccountRowId = 2000) And (DocDate >= '1/1/1900' And DocDate <= '12/31/2099'))
Select @SumDebits = (Select Sum(DebitAmount) From AccountBalanceList WITH (NOLOCK) Where (AccountRowId = 2000) And (DocDate >= '1/1/1900' And DocDate <= '12/31/2099'))
If I execute this stored proc via Query Analyzer, it takes about 11 seconds. If I execute the two SQL statements individually within Query Analyzer, each takes less than a second (so the entire stored proc should take about a second). This hasn't always been happening; this behavior started recently, after we imported a large amount of data into our database. However, I don't know if the two events are related.
Has anyone ever noticed this type of thing?
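Since the slowdown began right after a large import, the usual first suspects are stale statistics and a cached plan built for the old data distribution. A hedged sketch of the standard remedies (SQL 2000 syntax, table name from the queries above):

UPDATE STATISTICS AccountBalanceList    -- refresh the statistics the optimizer uses
EXEC sp_recompile 'AccountBalanceList'  -- force plans touching the table to recompile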
View 1 Replies
Jan 18, 2007
Try a little experiment, partly to humor me, and to make me believe I am not quite insane.
Step 1: Install SQL Server 2000 such that the data files are not in the default location but in a location with a shorter path (i.e. install the data files to E:\MSSQL8).
Step 2: Run the following queries, and comment on any oddities:
select filename
from master..sysaltfiles
where dbid = 2
go
select reverse(rtrim(filename)), filename
from sysaltfiles
where dbid = 2
go
select reverse(rtrim(filename))
from sysaltfiles
where dbid = 2
I am guessing that #2 shows some sort of odd effect caused by the fixed-length data field, but I just want to make sure that other people get this oddity, and not just me. I have no idea what could be causing the third output... or perhaps the lack of it.
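A hedged diagnostic sketch (the character position 50 assumes the path is shorter than that): check what pads the fixed-length filename column; if it is CHAR(0) rather than a space, RTRIM leaves the padding in place and REVERSE moves it to the front, where it can truncate the displayed string:

select unicode(substring(filename, 50, 1)) as pad_char_code  -- 0 = null, 32 = space
from master..sysaltfiles
where dbid = 2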
View 2 Replies
Jul 23, 2005
Hello,
I developed a win32 .exe CGI that connects to a clustered SQLServer to report some data. The software is written with Borland C++ Builder. This is the OLEDB string:

Name=Provider=SQLOLEDB;Password=xxx;Persist Security Info=True;User ID=xxxx;Data Source=xxxxx;Initial Catalog=xxxxx;Network Library=dbmssocn

It suddenly stopped working on my customer's network, so I made some tests and verified that the problem is in the connection with SQLServer: my test program just opens a connection, closes it and exits, reporting in a log file whether the open succeeded or failed.

If I run the program locally, just launching it, no problem, it works. I can run it multiple times continuously and it connects every time. If I run the program through my webserver, as a CGI, which is how it is supposed to work (http://localhost/scripts/connect.exe), it connects the first time, and then I have to wait 40 seconds to connect again successfully, or it fails.

If I try against MY SQLServer on my laptop or on my network, no 40-second problem; but on my customer's network, with THEIR SQLServer, if I try to connect from their webserver, or from my laptop's webserver, I have this 40-second problem.

I analyzed the network traffic, and I discovered that when I run my test program locally it originates only TCP/IP packets, and SQLServer answers only with TCP/IP. But when I use it from the webserver as a CGI, it originates a UDP packet, then SQL answers with another UDP packet, and then they communicate over TCP/IP. That is when it works: the second time, my program again sends the UDP packet, but it receives no answer, and the communication fails.

I can only say that:
- we haven't touched the program for months, and it really stopped working suddenly, so I suspect that something in my customer's network has changed
- I tried many different OLEDB strings, disabling connection pooling and all the services, calling the SQLServer by name or IP...
- the problem can't be related to my program, because now it is really just an OLEDB connection test

Anyone have an idea?
Thank you very much,
Mattia
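For reference, the UDP exchange described matches the SSRP lookup the provider performs against UDP port 1434 to resolve the server instance. A hedged sketch of a connection string that skips that step by naming an explicit TCP endpoint (server name and port are placeholders):

Provider=SQLOLEDB;Data Source=tcp:servername,1433;Initial Catalog=xxxxx;User ID=xxxx;Password=xxx;Network Library=dbmssocn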
View 1 Replies
Apr 25, 2008
Has anyone experienced this? I'm upgrading my SSRS 2000 SP2 instance to 2005 SP2 and I have to "upgrade" the database twice (the first time I click "Upgrade" in the configuration tool, it throws an error).
So, I get Report Manager to open. But when I go to run a report, I receive an rsInternalError exception. I look in the log and see that a few columns are marked as NOT NULL and the system is trying to push null values into them. The columns are:
SnapshotDataID
IsPermanentSnapshot
HasInteractivity
I also get serious blocking on the key of the ChunkData table in tempdb. This was a serious issue that I had a ticket open with Microsoft about. As I'm finding is typical of the SSRS helpdesk, I had to fix it myself, by adding locking hints to the CreateChunkAndGetPointer sp. Changing the last line to:
Code Snippet
SELECT @ChunkPointer = TEXTPTR(Content)
FROM [ReportServerTempDB].dbo.ChunkData AS CH with (nolock)
WHERE CH.ChunkID = @ChunkID
stopped the blocking issues that I was having.
Now, my question is this: is this normal, or do I have a hosed installation of SSRS 2000 that I should just take the time to scrap and rebuild? Should I have to do this just to get Reporting Services working after upgrading?
PS - I found a bug in the upgrade script that is created for running later: if ReportServer is not the name of the database, the script will still use ReportServerTempDB in one of the stored procedures that is created.
Any input on this would be greatly appreciated. Any input from MS would be better!
Thanks
Scott
View 1 Replies
Dec 1, 2006
Hi to all!
I hope that you can help me, because I must build a release version of my SSIS package in 2 hours.
This is my big issue:
I have a SQL Server 2005 (SP1, 64-bit) and a remote SQL Server 2000 (SP4).
I've created a Data Flow task that contains an OLEDB source (Native Client on the local SQL 2005 server) and a Lookup transformation that has to connect to the remote SQL Server 2000 and return some columns.
I can see everything, tables, columns... no problem at first sight. I choose some columns to use in the lookup output and confirm.
When I re-open my lookup task, the columns are no longer selected (checked), so obviously my OLEDB destination can't see those columns.
I select them, but SSIS doesn't store any columns in the output.
I've tried using the error output, the advanced editor, and creating another package with SQL 2005 tables only... but no change, the same silent error. I can't get the columns into the output to add to my sets.
Can you help me, please? I need your help badly!
Thanks in advance.
View 3 Replies
Jan 25, 2007
Hi, I came upon a strange behavior of the Format function in a report, which I'm unable to explain.
I have a double value that I want to format. E.g., the value in the db field "Users" is 303870.
1)
If I set the Value property for the field to
=Fields!Users.Value or CDec(Fields!Users.Value)
//the value is already double, so the CDec has no effect
and the Format property for the field to
=Format(Me.Value,"#,#") or =Format(Fields!Users.Value,"#,#")
then the number visible in the report is 3303,873870 ???
2)
If I put in the Value property
=Format(Fields!Users.Value,"#,#")
and the Format property is empty,
then the output is OK, 303,870; however, this is not desired, because the value is then handled as a string.
3)
If I put in the Value property
=Fields!Users.Value
and into the Format property
=Format(CStr(Me.Value),"#,#")
then the output is OK again.
As I understand it, Format should behave the same as in VB (I am actually a C# programmer, not very familiar with VB), so I tried Format in VB on a double value, and the output was as expected (e.g. 303,870); but when I applied it to a string, I got only the style (e.g. #,#).
So I wonder: does the Reporting Services Format function work correctly only with string input? But why then does example 2) work? Or have I made a mistake somewhere?
Thanks for the advice.
View 3 Replies
Oct 1, 2004
I have two SQL Server instances on two servers. One server is my webserver and database server, and the other one is just a database server. I have an application that calls a stored procedure located on the webserver/database server that runs a query on the OTHER database server. I use a linked server on my first instance to make the call possible.
Everything was working just fine for months, until the database server was restarted and its IP address was changed. The name of the database server is the same, however, and my first SQL Server instance has no problem running queries on the other server's tables. But when I try to run the application, I get the following error:
Login failed for user 'sa'. Reason: Not associated with a trusted SQL Server connection
I have mixed mode authentication selected and my security uses the security context with username=sa and password=sa.
So here's the weird part.
The application will only run correctly when I manually run a SQL command from my webserver's Query Analyzer against the linked SQL Server; however, after a few minutes the same error comes back!! So as a temporary fix, I scheduled a DTS job to run a simple query on the linked server every two minutes, so the application keeps working! It's almost as if the webserver's SQL Server forgot that the linked server is there, and by running a simple query in Query Analyzer the connection gets refreshed and everything is normal again - for about 3 minutes!
I am completely stumped by what's happening and appreciate any help. Thank you.
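One thing worth trying is re-creating the login mapping after the IP change, so the sa context for the linked server is pinned explicitly (the linked server name here is a placeholder):

EXEC sp_droplinkedsrvlogin 'LINKEDSRV', NULL
EXEC sp_addlinkedsrvlogin @rmtsrvname = 'LINKEDSRV',
                          @useself = 'false',
                          @locallogin = NULL,
                          @rmtuser = 'sa',
                          @rmtpassword = 'sa'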
View 3 Replies
Jan 21, 2008
I'm a wee bit of a newbie concerning DTS and have inherited a db with a DTS containing a Copy SQL Server Objects task set to run nightly. Essentially, it does an informal backup of some core data.
Recently, I was notified that one of the tables it copies over is now empty on the Destination db. The DTS shows that it runs successfully with no errors logged, the table in question IS selected to be copied from the Source database, there IS data in the Source database table, and every other table in the Destination database is populated appropriately.
Any ideas on what would cause this one table to be empty without generating any errors?
FYI, running SQL Server 2000.
View 1 Replies
Jul 20, 2005
I'm using Access as the front end to SQL 2000. I have a table of contacts; UniqueID is the PK. There's also a column named "CreatedBy" (int) and a column "Category" (smallint). They're all set to disallow nulls. If I create an index on Category, an SP that searches the CreatedBy column fails and I get an error message: 2757 There was a problem accessing a property or method of the OLE object. If I delete that column from the index, then the opposite happens, with the same error. I have even deleted the columns, saved the table, and recreated the columns. This is an aggravating one!
View 1 Replies
May 31, 2007
I have an SSIS package, (actually many packages at this point because I can duplicate this problem each and every time) with an FTP Task and three connection managers (one for FTP, one for a file share, one OLEDB).
The package in its simplest form, grabs a file from an FTP site and transfers it to the file share. When there are no files found on the FTP site, the FTP task fails as expected.
Here is where it gets strange...I turned on a package configuration, with all variables / properties saved to a SQL table; all except for the OLEDB connection which I am leaving hardcoded in the package for the time being.
The problems I am seeing as soon as I activate the package configuration are twofold. The first occurs with the property "FileUsageType" defined on the file share connection. I have it set to "existing folder" in the SSIS tool, and the SQL config table also has a row for this item with a value of 2. I have even tried manually changing this value in the SQL config table, and the result is always the same error when I run the package:
Error: 0xC002F314 at FTP Task, FTP Task: File usage type of connection "Z:" should be "FolderExists" for operation "Receive".
If I remove this entry entirely from the SQL config table and leave that property hardcoded in the package, then I am able to continue execution and that particular error goes away. But then I come to weird problem #2. I stated above that when there are no files in the FTP directory to retrieve, the FTP task fails, as I would expect. However, as soon as I activate the package configuration, even when there are no files at the FTP site, the FTP task doesn't fail anymore; it marks itself as successful instead. This is wreaking havoc on the precedence constraints that follow the FTP task. As soon as I deactivate the package configuration, the FTP task behaves normally again.
The problems only seem to happen with configurations stored in SQL. I tested this with an XML file configuration instead and everything behaved as it should. Unfortunately, I need to store these configs in SQL, not XML.
Does anyone have any ideas on what I need to look at to fix these weird issues?
View 5 Replies
Jan 15, 2008
Hi all,
I have a very simple SSIS package that moves data from a DB2 database to a Teradata box. I've run it around 10 times; twice it pushed data over, and the balance of the time it executed with no error but moved nothing over. In the "incomplete" runs, a command-line box pops up for half a second, then the package ends.
Does anyone have ideas as to why this behavior is occurring?
Thanks,
Mark
View 1 Replies
Sep 5, 2007
I have an SSIS package that has multiple large lookups without memory restriction. When I run the package manually from SSMS on the same server on which it runs under the job agent, the package errors out when the server memory gets depleted by the loading of the large lookup reference data. One of the messages I get is:
"An out-of-memory condition prevented the creation of the buffer object. "
Anyway, the package runs successfully when it runs automatically under the job agent.
I am curious why the above happens. Is it a bug, or is the runtime behavior different in these two environments by design?
js40
View 2 Replies
Aug 9, 2007
I would like to know what happens when a very large reference data set for a lookup transform with full caching enabled is getting loaded during package execution and the computer memory runs out or is very low.
Does SSIS
a) give an out of memory error of some sort
b) resort to a no caching or partial caching mode
c) maintain the full caching mode but will switch to using the paging file(virtual memory).
I think it will resort to using the page file, in which case the benefits of in-memory lookups are lost and performance would suffer. If I cannot upgrade the memory or shrink the reference set somehow, I should switch that lookup to partial caching or no caching with an indexed lookup table. Would this make sense?
View 1 Replies
Sep 24, 2007
Hello,
I'm getting the following error running a package...
Date,Source,Severity,Step ID,Server,Job Name,Step Name,Notifications,Message,Duration,Sql Severity,Sql Message ID,Operator Emailed,Operator Net sent,Operator Paged,Retries Attempted

09/24/2007 15:00:00,Hourly Extract From OReSA,Error,0,INHSCTSTTOMVM82,Hourly Extract From OReSA,(Job outcome),,The job failed. The Job was invoked by Schedule 7 (Hourly). The last step to run was step 1 (OReSA Extract).,00:00:59,0,0,,,,0

09/24/2007 15:00:01,Hourly Extract From OReSA,Error,1,INHSCTSTTOMVM82,Hourly Extract From OReSA,OReSA Extract,,Executed as user: INENVts_hia. hod call failed. End Error Error: 2007-09-24 15:00:58.93 Code: 0xC0047017 Source: dtProduceExtractFiles DTS.Pipeline Description: component "ole_srcExtractDB" (22) failed validation and returned error code 0xC020801C. End Error Error: 2007-09-24 15:00:58.93 Code: 0xC004700C Source: dtProduceExtractFiles DTS.Pipeline Description: One or more component failed validation. End Error Error: 2007-09-24 15:00:58.93 Code: 0xC0024107 Source: dtProduceExtractFiles Description: There were errors during task validation. End Error DTExec: The package execution returned DTSER_FAILURE (1). Started: 3:00:01 PM Finished: 3:00:58 PM Elapsed: 57.75 seconds. The package execution failed. The step failed.,00:00:58,0,0,,,,0
But it works on other machines and other SSIS packages are running ok on the offending box.
Any ideas?
Thanks in advance,
Tony.
View 4 Replies
Jun 19, 2007
I have added a few fields to a table, and now when I try to populate it, it bombs. It always stops on the same record, even with different files, and gives me the following error messages. The data looks fine.
[OLE DB Destination [6525]] Error: An OLE DB error has occurred. Error code: 0x80004005. An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Invalid character value for cast specification". An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Invalid character value for cast specification". An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Invalid character value for cast specification".
[OLE DB Destination [6525]] Error: There was an error with input column "Column 87" (6742) on input "OLE DB Destination Input" (6538). The column status returned was: "Conversion failed because the data value overflowed the specified type.".
[OLE DB Destination [6525]] Error: The "input "OLE DB Destination Input" (6538)" failed because error code 0xC020907A occurred, and the error row disposition on "input "OLE DB Destination Input" (6538)" specifies failure on error. An error occurred on the specified object of the specified component.
[DTS.Pipeline] Error: The ProcessInput method on component "OLE DB Destination" (6525) failed with error code 0xC0209029. The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running.
[DTS.Pipeline] Error: Thread "WorkThread0" has exited with error code 0xC0209029.
[Flat File Source [1]] Error: The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020.
[DTS.Pipeline] Error: The PrimeOutput method on component "Flat File Source" (1) returned error code 0xC02020C4. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.
[DTS.Pipeline] Error: Thread "SourceThread0" has exited with error code 0xC0047038.
Any help would be appreciated.
View 2 Replies
Jan 28, 2008
Hi,
I have a strange problem scheduling SSIS package in SQL Server Agent.
An SSIS package which uses FTP to download a file from an FTP Server is scheduled through the SQL Server Agent to run every hour to look for the file and download it if exists.
- I am using an Agent proxy which has all the necessary permissions to run the package.
- The package is protected as "EncryptSensitiveWithUserKey".
It runs fine on schedule sometimes, i.e. it downloads the file and processes it alright.
But sometimes, very strangely, for an extended period while on schedule, it gives an error saying:
- Unable to connect to FTP server using "Click_FTP_Location".
Click_FTP_Location - is the FTP connection manager used in the package.
Has anyone experienced this in any of their work??
We are also trying to see if there is something wrong with the FTP server itself.
Any comments/suggestions appreciated.
Thanks
View 5 Replies
May 18, 2006
This package, which is a child package, has been running successfully for quite some time now. All of a sudden we are getting these intermittent error messages. Does anyone have any ideas what to do or check for?
thanks
===========================
Error portion
Error: 0xC0047012 at CF-DFT Oracle Sales Fact, DTS.Pipeline: A buffer failed while allocating 100483760 bytes.
Error: 0xC02020C4 at CF-DFT Oracle Sales Fact, order line id [1]: The attempt to add a row to the Data Flow task buffer failed with error code 0x8007000E.
Error: 0xC0047011 at CF-DFT Oracle Sales Fact, DTS.Pipeline: The system reports 30 percent memory load. There are 8587960320 bytes of physical memory with 5972680704 bytes free. There are 2147352576 bytes of virtual memory with 1324290048 bytes free. The paging file has 12673945600 bytes with 10005012480 bytes free.
Error: 0xC0047038 at CF-DFT Oracle Sales Fact, DTS.Pipeline: The PrimeOutput method on component "order line id" (1) returned error code 0xC02020C4. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.
Error: 0xC0047056 at CF-DFT Oracle Sales Fact, DTS.Pipeline: The Data Flow task failed to create a buffer to call PrimeOutput for output "Union All" (13359) on component "Union All Output 1" (13361). This error usually occurs due to an out-of-memory condition.
Error: 0xC0047021 at CF-DFT Oracle Sales Fact, DTS.Pipeline: Thread "SourceThread1" has exited with error code 0xC0047038.
Error: 0xC0047021 at CF-DFT Oracle Sales Fact, DTS.Pipeline: Thread "WorkThread2" has exited with error code 0x8007000E.
Error: 0xC0047039 at CF-DFT Oracle Sales Fact, DTS.Pipeline: Thread "WorkThread3" received a shutdown signal and is terminating. The user requested a shutdown, or an error in another thread is causing the pipeline to shutdown.
Error: 0xC0047039 at CF-DFT Oracle Sales Fact, DTS.Pipeline: Thread "WorkThread1" received a shutdown signal and is terminating. The user requested a shutdown, or an error in another thread is causing the pipeline to shutdown.
Error: 0xC0047021 at CF-DFT Oracle Sales Fact, DTS.Pipeline: Thread "WorkThread3" has exited with error code 0xC0047039.
Error: 0xC0047021 at CF-DFT Oracle Sales Fact, DTS.Pipeline: Thread "WorkThread1" has exited with error code 0xC0047039.
Error: 0xC0047039 at CF-DFT Oracle Sales Fact, DTS.Pipeline: Thread "WorkThread0" received a shutdown signal and is terminating. The user requested a shutdown, or an error in another thread is causing the pipeline to shutdown.
Error: 0xC0047021 at CF-DFT Oracle Sales Fact, DTS.Pipeline: Thread "WorkThread0" has exited with error code 0xC0047039.
====================================================
Complete child package log
Executing ExecutePackageTask: D:ssissrwpackagesSRW_ORACLE_SALES_FTBL.dtsx
Information: 0x40016041 at SRW_ORACLE_SALES_FTBL: The package is attempting to configure from the XML file "D:SSISconfigurationCONFIG-STAGE1.dtsConfig".
Information: 0x40016040 at SRW_ORACLE_SALES_FTBL: The package is attempting to configure from SQL Server using the configuration string ""MSSQL-CONFIG";"[dbo].[SSIS_Configurations]";"System Configuration Settings";".
Information: 0x40016040 at SRW_ORACLE_SALES_FTBL: The package is attempting to configure from SQL Server using the configuration string ""MSSQL-CONFIG";"[dbo].[SRW_SSIS_Configurations]";"SRW Main Configurations";".
Information: 0x4004300A at CF-DFT Oracle Sales Fact, DTS.Pipeline: Validation phase is beginning.
Warning: 0x802092A7 at CF-DFT Oracle Sales Fact, TEMP OUTPUT [998]: Truncation may occur due to inserting data from data flow column "IC_ORDER" with a length of 240 to database column "IC_ORDER" with a length of 1.
Warning: 0x80047076 at CF-DFT Oracle Sales Fact, DTS.Pipeline: The output column "SERIAL_NUMBER" (2680) on output "Sort Output" (2453) and component "Sort 1" (2451) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.
Warning: 0x80047076 at CF-DFT Oracle Sales Fact, DTS.Pipeline: The output column "ORG_ID" (13377) on output "Union All Output 1" (13361) and component "Union All" (13359) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.
Warning: 0x80047076 at CF-DFT Oracle Sales Fact, DTS.Pipeline: The output column "CUST_TRX_TYPE_ID" (13428) on output "Union All Output 1" (13361) and component "Union All" (13359) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.
Warning: 0x80047076 at CF-DFT Oracle Sales Fact, DTS.Pipeline: The output column "Data Conversion 1.Copy of CUST_TRX_TYPE_ID" (13443) on output "Union All Output 1" (13361) and component "Union All" (13359) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.
Warning: 0x80047076 at CF-DFT Oracle Sales Fact, DTS.Pipeline: The output column "GL_ID_REV" (13449) on output "Union All Output 1" (13361) and component "Union All" (13359) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.
Warning: 0x80047076 at CF-DFT Oracle Sales Fact, DTS.Pipeline: The output column "Copy of GL_ID_REV" (13458) on output "Union All Output 1" (13361) and component "Union All" (13359) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.
Information: 0x4004300A at CF-DFT Oracle Sales Fact, DTS.Pipeline: Validation phase is beginning.
Warning: 0x802092A7 at CF-DFT Oracle Sales Fact, TEMP OUTPUT [998]: Truncation may occur due to inserting data from data flow column "IC_ORDER" with a length of 240 to database column "IC_ORDER" with a length of 1.
Warning: 0x80047076 at CF-DFT Oracle Sales Fact, DTS.Pipeline: The output column "SERIAL_NUMBER" (2680) on output "Sort Output" (2453) and component "Sort 1" (2451) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.
Warning: 0x80047076 at CF-DFT Oracle Sales Fact, DTS.Pipeline: The output column "ORG_ID" (13377) on output "Union All Output 1" (13361) and component "Union All" (13359) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.
Warning: 0x80047076 at CF-DFT Oracle Sales Fact, DTS.Pipeline: The output column "CUST_TRX_TYPE_ID" (13428) on output "Union All Output 1" (13361) and component "Union All" (13359) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.
Warning: 0x80047076 at CF-DFT Oracle Sales Fact, DTS.Pipeline: The output column "Data Conversion 1.Copy of CUST_TRX_TYPE_ID" (13443) on output "Union All Output 1" (13361) and component "Union All" (13359) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.
Warning: 0x80047076 at CF-DFT Oracle Sales Fact, DTS.Pipeline: The output column "GL_ID_REV" (13449) on output "Union All Output 1" (13361) and component "Union All" (13359) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.
Warning: 0x80047076 at CF-DFT Oracle Sales Fact, DTS.Pipeline: The output column "Copy of GL_ID_REV" (13458) on output "Union All Output 1" (13361) and component "Union All" (13359) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.
Information: 0x4004300A at CF-DFT Oracle Sales Fact, DTS.Pipeline: Validation phase is beginning.
Warning: 0x802092A7 at CF-DFT Oracle Sales Fact, TEMP OUTPUT [998]: Truncation may occur due to inserting data from data flow column "IC_ORDER" with a length of 240 to database column "IC_ORDER" with a length of 1.
Warning: 0x80047076 at CF-DFT Oracle Sales Fact, DTS.Pipeline: The output column "SERIAL_NUMBER" (2680) on output "Sort Output" (2453) and component "Sort 1" (2451) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.
Warning: 0x80047076 at CF-DFT Oracle Sales Fact, DTS.Pipeline: The output column "ORG_ID" (13377) on output "Union All Output 1" (13361) and component "Union All" (13359) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.
Warning: 0x80047076 at CF-DFT Oracle Sales Fact, DTS.Pipeline: The output column "CUST_TRX_TYPE_ID" (13428) on output "Union All Output 1" (13361) and component "Union All" (13359) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.
Warning: 0x80047076 at CF-DFT Oracle Sales Fact, DTS.Pipeline: The output column "Data Conversion 1.Copy of CUST_TRX_TYPE_ID" (13443) on output "Union All Output 1" (13361) and component "Union All" (13359) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.
Warning: 0x80047076 at CF-DFT Oracle Sales Fact, DTS.Pipeline: The output column "GL_ID_REV" (13449) on output "Union All Output 1" (13361) and component "Union All" (13359) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.
Warning: 0x80047076 at CF-DFT Oracle Sales Fact, DTS.Pipeline: The output column "Copy of GL_ID_REV" (13458) on output "Union All Output 1" (13361) and component "Union All" (13359) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.
Information: 0x40043006 at CF-DFT Oracle Sales Fact, DTS.Pipeline: Prepare for Execute phase is beginning.
Information: 0x40043007 at CF-DFT Oracle Sales Fact, DTS.Pipeline: Pre-Execute phase is beginning.
Information: 0x400490F4 at CF-DFT Oracle Sales Fact, REV GL SEGS [307]: component "REV GL SEGS" (307) has cached 780 rows.
Information: 0x400490F4 at CF-DFT Oracle Sales Fact, get oper unit [813]: component "get oper unit" (813) has cached 12 rows.
Warning: 0x802090E4 at CF-DFT Oracle Sales Fact, get oper unit [813]: The Lookup transformation encountered duplicate reference key values when caching reference data. The Lookup transformation found duplicate key values when caching metadata in PreExecute. This error occurs in Full Cache mode only. Either remove the duplicate key values, or change the cache mode to PARTIAL or NO_CACHE.
Information: 0x400490F4 at CF-DFT Oracle Sales Fact, get header txn type for IC flag [13685]: component "get header txn type for IC flag" (13685) has cached 768 rows.
Information: 0x4004300C at CF-DFT Oracle Sales Fact, DTS.Pipeline: Execute phase is beginning.
Information: 0x4004800D at CF-DFT Oracle Sales Fact, DTS.Pipeline: The buffer manager failed a memory allocation call for 100484768 bytes, but was unable to swap out any buffers to relieve memory pressure. 83 buffers were considered and 83 were locked. Either not enough memory is available to the pipeline because not enough are installed, other processes were using it, or too many buffers are locked.
Error: 0xC0047012 at CF-DFT Oracle Sales Fact, DTS.Pipeline: A buffer failed while allocating 100484768 bytes.
Error: 0xC0047011 at CF-DFT Oracle Sales Fact, DTS.Pipeline: The system reports 31 percent memory load. There are 8587960320 bytes of physical memory with 5869387776 bytes free. There are 2147352576 bytes of virtual memory with 1223802880 bytes free. The paging file has 12673945600 bytes with 9901600768 bytes free.
Information: 0x4004800D at CF-DFT Oracle Sales Fact, DTS.Pipeline: The buffer manager failed a memory allocation call for 100483760 bytes, but was unable to swap out any buffers to relieve memory pressure. 162 buffers were considered and 162 were locked. Either not enough memory is available to the pipeline because not enough are installed, other processes were using it, or too many buffers are locked.
Error: 0xC0047012 at CF-DFT Oracle Sales Fact, DTS.Pipeline: A buffer failed while allocating 100483760 bytes.
Error: 0xC02020C4 at CF-DFT Oracle Sales Fact, order line id [1]: The attempt to add a row to the Data Flow task buffer failed with error code 0x8007000E.
Error: 0xC0047011 at CF-DFT Oracle Sales Fact, DTS.Pipeline: The system reports 30 percent memory load. There are 8587960320 bytes of physical memory with 5972680704 bytes free. There are 2147352576 bytes of virtual memory with 1324290048 bytes free. The paging file has 12673945600 bytes with 10005012480 bytes free.
Error: 0xC0047038 at CF-DFT Oracle Sales Fact, DTS.Pipeline: The PrimeOutput method on component "order line id" (1) returned error code 0xC02020C4. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.
Error: 0xC0047056 at CF-DFT Oracle Sales Fact, DTS.Pipeline: The Data Flow task failed to create a buffer to call PrimeOutput for output "Union All" (13359) on component "Union All Output 1" (13361). This error usually occurs due to an out-of-memory condition.
Error: 0xC0047021 at CF-DFT Oracle Sales Fact, DTS.Pipeline: Thread "SourceThread1" has exited with error code 0xC0047038.
Error: 0xC0047021 at CF-DFT Oracle Sales Fact, DTS.Pipeline: Thread "WorkThread2" has exited with error code 0x8007000E.
Error: 0xC0047039 at CF-DFT Oracle Sales Fact, DTS.Pipeline: Thread "WorkThread3" received a shutdown signal and is terminating. The user requested a shutdown, or an error in another thread is causing the pipeline to shutdown.
Error: 0xC0047039 at CF-DFT Oracle Sales Fact, DTS.Pipeline: Thread "WorkThread1" received a shutdown signal and is terminating. The user requested a shutdown, or an error in another thread is causing the pipeline to shutdown.
Error: 0xC0047021 at CF-DFT Oracle Sales Fact, DTS.Pipeline: Thread "WorkThread3" has exited with error code 0xC0047039.
Error: 0xC0047021 at CF-DFT Oracle Sales Fact, DTS.Pipeline: Thread "WorkThread1" has exited with error code 0xC0047039.
Error: 0xC0047039 at CF-DFT Oracle Sales Fact, DTS.Pipeline: Thread "WorkThread0" received a shutdown signal and is terminating. The user requested a shutdown, or an error in another thread is causing the pipeline to shutdown.
Error: 0xC0047021 at CF-DFT Oracle Sales Fact, DTS.Pipeline: Thread "WorkThread0" has exited with error code 0xC0047039.
Information: 0x40043008 at CF-DFT Oracle Sales Fact, DTS.Pipeline: Post Execute phase is beginning.
Information: 0x402090DF at CF-DFT Oracle Sales Fact, TEMP OUTPUT [998]: The final commit for the data insertion has started.
Information: 0x402090E0 at CF-DFT Oracle Sales Fact, TEMP OUTPUT [998]: The final commit for the data insertion has ended.
Information: 0x40043009 at CF-DFT Oracle Sales Fact, DTS.Pipeline: Cleanup phase is beginning.
Information: 0x4004300B at CF-DFT Oracle Sales Fact, DTS.Pipeline: "component "TEMP OUTPUT" (998)" wrote 0 rows.
Task failed: CF-DFT Oracle Sales Fact
Warning: 0x80019002 at SRW_ORACLE_SALES_FTBL: The Execution method succeeded, but the number of errors raised (15) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
Task failed: CF-EPGT SRW_ORACLE_SALES_FTBL
Warning: 0x80019002 at CF-SQC Facts: The Execution method succeeded, but the number of errors raised (15) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
Warning: 0x80019002 at SRW_MAIN: The Execution method succeeded, but the number of errors raised (15) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
SSIS package "SRW_Main.dtsx" finished: Failure.
View 9 Replies