Scheduled Job Fails With 'EXCEPTION - Insufficient Memory'
May 31, 2001
A series of export/import jobs is scheduled on a dozen databases sitting on one of our servers, and the jobs run at regular intervals through the day. Some of the jobs are failing with the following error recorded in 'View Job History':
EXCEPTION: Insufficient memory for this operation. Process Exit Code 2. The step failed.
Will this be cured by increasing the memory available to SQL Server (it already has 512 MB, half of the total physical RAM)? Also, why are only some jobs failing while others complete? Should I run Performance Monitor when the next schedule runs?
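Not part of the original post, but before the next schedule runs it may help to capture SQL Server's own memory counters (the same ones Performance Monitor exposes). A minimal hedged sketch against the SQL Server 7.0/2000 sysperfinfo view:

-- Rough check of SQL Server's internal memory counters (SQL Server 7.0/2000).
-- These are the same values Performance Monitor shows under SQLServer:Memory Manager.
SELECT counter_name, cntr_value
FROM master..sysperfinfo
WHERE object_name LIKE '%Memory Manager%'
  AND (counter_name LIKE '%Server Memory%' OR counter_name LIKE 'Memory Grants%');

Comparing Total Server Memory with Target Server Memory while the jobs run gives a rough idea of whether the 512 MB cap itself is under pressure.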
I have one SSIS package which was written in the Visual Studio business intelligence tool. For that SSIS package I scheduled a job from SQL Server Management Studio 2005, i.e. a job in SQL Server Agent. The job contains 6 SSIS packages; the other 5 execute successfully, but this one fails with a COM interop exception. I am not sure what type of error this is.
It gives the following error:
Microsoft (R) SQL Server Execute Package Utility Version 9.00.3042.00 for 32-bit
Copyright (C) Microsoft Corp 1984-2005. All rights reserved.
Started: 11:00:00 PM
Error: 2008-03-27 23:00:00.81 Code: 0x00000000 Source: Execute DTS 2000 Package Task Description: System.Runtime.InteropServices.COMException (0x80040427): Execution was canceled by user. at DTS.PackageClass.Execute() at Microsoft.SqlServer.Dts.Tasks.Exec80PackageTask.Exec80PackageTask.ExecuteThread() End Error
DTExec: The package execution returned DTSER_FAILURE (1).
Started: 11:00:00 PM Finished: 11:00:00 PM Elapsed: 0.579 seconds.
The package execution failed. The step failed.
I get the same error when I try to execute the package from the Visual Studio business intelligence tool.
Can you please help me understand why this "System.Runtime.InteropServices.COMException" occurs when scheduling or executing the job?
Hi, I get the following error when a lot of scheduled jobs are running during the night: Error: 17803, Severity: 17, State: 17 Insufficient memory available. Source: ODS. Does anyone know why this happens and how it can be fixed? Let me know.
About once a week I'm receiving this message in the SQL Server log, from ODS:
Error 17803,Severity: 17, State: 14 Insufficient Memory Available
The machine has 1 GB of RAM and is dedicated to SQL Server 7.0 with SP1. Any ideas of what might be causing this problem? Any help is greatly appreciated.
I have a dedicated SQL 2000 server on Windows 2000 with over 7 GB of memory; the SQL memory configuration is dynamic. This is a new server and doesn't have many processes yet. This morning, the SQL logs recorded the error 'insufficient memory available, error 17803, severity 20, state 17'. Does anyone have any clue what could be the cause? Thanks in advance.
I am running a .sql file containing a large number of delete and insert statements, using isql from the command line. After 2 minutes I get the message "Insufficient memory to continue", yet the same statements do not produce this message if I cut and paste them into SQL Server Query Analyzer. Task Manager shows a lot of available memory.
I was converting a lot of MS Access records to SQL Server (almost 500,000) and about midway through I got the old MSACCESS.EXE Not Responding. A few more times through this and SQL Server informed me that I was running out of locks. I upped the locks from 5000 to 20000 and retried it. I am now getting errors like "There is insufficient system memory to run this query". I receive this when I attempt to look at tables, devices, etc. in SQL Enterprise Manager. I am also not able to run the sp_configure command or choose Configure by right-clicking on the server in Enterprise Manager. I am running SQL Server 6.5, SP4, on a dual P200 with 256 MB RAM. Any help would be incredibly great.
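For reference (not from the original post), the lock limit mentioned above is the sp_configure 'locks' option; on SQL Server 6.5 each configured lock is preallocated from the server's fixed memory pool, so a higher setting leaves less memory for everything else. A hedged sketch of the change described:

-- Show the current value, then raise the static 'locks' option.
-- On SQL Server 6.5 this takes effect only after the service is restarted.
EXEC sp_configure 'locks';
EXEC sp_configure 'locks', 20000;
RECONFIGURE WITH OVERRIDE;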
My company has a database that is throwing a weird error. We've tried reinstalling both the OS and the SQL instance, and the error still persists. We think this error might have to do with some .NET code we've written to run on the box, but I cannot find anything out on the internet about it. Here is the Enterprise Manager Error Log:
Insufficient memory available.. Error: 17803, Severity: 20, State: 4
Query Memory Manager: Grants=0 Waiting=0 Maximum=97638 Available=97638
Global Memory Objects: Resource=912 Locks=42 SQLCache=67 Replication=2 LockBytes=2 ServerGlobal=20 Xact=12
Dynamic Memory Manager: Stolen=2138 OS Reserved=1048 OS Committed=1026 OS In Use=1022 Query Plan=1777 Optimizer=0 General=1066 Utilities=12 Connection=262
Procedure Cache: TotalProcs=488 TotalPages=1787 InUsePages=542
Buffer Counts: Commited=5168 Target=131072 Hashed=1917 InternalReservation=191 ExternalReservation=0 Min Free=128 Visible= 131072
Buffer Distribution: Stolen=351 Free=1113 Procedures=1787 Inram=0 Dirty=599 Kept=0 I/O=0, Latched=23, Other=1295
WARNING: Failed to reserve contiguous memory of Size= 65536.
I can find information if I do a Google search on "Error: 17803, Severity: 20" But as soon as I add "State: 4" to the query I get no results. Also, the articles that I have seen that give the same error messages (but different states) tend to deal with servers that have more than 4GB of memory. This server has ONLY 4GB of memory and in order to try and resolve this issue, we have limited the server to 1GB of physical memory to no avail.
We are hitting a crippling 701 "insufficient system memory" error intermittently in our production environment. I haven't gotten anywhere with PSS in two weeks. The error has occurred 4 times over the past two weeks, crippling our SQL Server and application each time. When the error occurs it lasts for 5 to 20 minutes, causing the app to time out, new connections to be refused, and a massive slow-down of anything that is running. SQL has recovered on its own two of those times. It recovered following a kill of hundreds of threads reporting "SEMAPHORE WAIT". The most recent occurrence nailed all 16 processors at 100%. We were forced to issue SHUTDOWN WITH NOWAIT. I have been monitoring Perfmon very closely; there are no symptoms that precede the error. Each occurrence captures a different query. Any of the queries, when run from Management Studio, complete in under a second. DBCC MEMORYSTATUS reports all memory as being in an unstressed state. The first time the error occurred there were 10 GB still available on the server.
Has anyone else experienced this problem or anything similar? We don't use linked servers or table-valued functions (there are known memory bugs related to each of these items).
The following server and configuration has been running in production for 6 weeks with no issues:
- SQL 2005 EE SP1, post-SP1 hotfix KB918222
- Win 2003 SP1 (dedicated box)
- Quad dual-core 3 GHz
- 32 GB memory
- AWE enabled
- No memory-related flags in boot.ini
- "Lock Pages in Memory" set for the SQL startup account
- 1 instance (default)
- 1,994 OLTP databases, avg less than 100 MB each
- 1,200 active user threads on average (from a connection pool of avg 4,000 concurrent users)
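Not the poster's code, but since the stalled threads were waiting on a memory-grant semaphore, one hedged way to see who is waiting for (or holding) large query-memory grants during the next episode on SQL Server 2005:

-- Sessions waiting for or holding query workspace memory (SQL Server 2005).
-- Rows where grant_time IS NULL are still waiting on RESOURCE_SEMAPHORE.
SELECT session_id, request_time, grant_time,
       requested_memory_kb, granted_memory_kb, query_cost
FROM sys.dm_exec_query_memory_grants
ORDER BY requested_memory_kb DESC;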
We have an application running on a SQL cluster (Win 2003) and SQL 2005 SP2 within its own instance - 12 databases and about 100 GB of data in total. The node this instance is on has 64 GB of RAM, with 16 GB allocated to this instance (only 8 GB allocated to other instances currently).
Now to the problem: there is one process that, when running, produces the error below, and we cannot figure out how to correct it. The process runs 8 times a day and had been running great, then all of a sudden stopped running with the memory error. I am in search of any tips to diagnose or correct this issue.
[298] SQLServer Error: 701, There is insufficient system memory to run this query. [SQLSTATE 42000]
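As a hedged starting point (not part of the original post), SQL Server 2005 exposes its internal memory consumers through DMVs; a minimal sketch to capture which clerks hold the most memory when the 701 error appears:

-- Top internal memory consumers at the moment of the failure (SQL Server 2005).
SELECT TOP (10)
       type,
       SUM(single_pages_kb + multi_pages_kb) AS total_kb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY total_kb DESC;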
We seem to have developed a memory leak in our SQL Server application and are getting the above error on occasion. Also, over several hours of hard usage the memory consumed by SQL Server ramps up and is never released. The only thing we have found to remedy the problem is to stop/start SQL Server. My question to the group is, how can I debug this problem? Are there system stored procedures that would be useful in identifying any temp tables, cursors, etc., not getting cleaned up? Thanks, John
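There is no single procedure that lists leaked objects, but a few hedged checks (assuming SQL Server 2000 or earlier, where DBCC MEMORYSTATUS and sp_cursor_list are available) can show what has accumulated:

-- Temp tables still present in tempdb, oldest first.
SELECT name, crdate
FROM tempdb..sysobjects
WHERE type = 'U' AND name LIKE '#%'
ORDER BY crdate;

-- Cursors still open on the current connection (scope 3 = local and global).
DECLARE @cur CURSOR;
EXEC sp_cursor_list @cursor_return = @cur OUTPUT, @cursor_scope = 3;
FETCH NEXT FROM @cur;
WHILE @@FETCH_STATUS = 0
    FETCH NEXT FROM @cur;
CLOSE @cur;
DEALLOCATE @cur;

-- Overall breakdown of where SQL Server's memory is going.
DBCC MEMORYSTATUS;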
Just wondering if anyone else has seen any memory issues with cross instance communication. Just this last week we moved from single instance to cross instance communication. I started up a process that will move 3,000,000 messages through service broker, but after a couple of hours the server tanks out and we get insufficient system memory errors in the sql error log. If no one else has seen or experienced this then I'll probably get a ticket opened to see what it may be. It's just that the change in our service broker network was the only change I was aware of on this instance.
This morning the server became unresponsive and we had to stop and restart the services. The log showed the dreaded 17803 "Insufficient memory available" error. The server has 4 GB of memory and 4 processors. Searching on this suggests it is really an internal error and not something we can do much about. Is that true? What should my next step be?
Thanks!
2007-08-20 21:04:47.89 server Microsoft SQL Server 2000 - 8.00.2039 (Intel X86) May 3 2005 23:18:38 Copyright (c) 1988-2003 Microsoft Corporation Enterprise Edition on Windows NT 5.0 (Build 2195: Service Pack 4)
. . .
2007-08-31 10:48:09.51 spid2 LazyWriter: warning, no free buffers found.
2007-08-31 10:48:09.51 spid2 Buffer Distribution: Stolen=243524 Free=0 Procedures=6 Inram=0 Dirty=60 Kept=0 I/O=0, Latched=98, Other=0
2007-08-31 10:48:09.51 spid2 Buffer Counts: Commited=243688 Target=243688 Hashed=158 InternalReservation=254 ExternalReservation=42 Min Free=512 Visible= 243688
2007-08-31 10:48:09.51 spid2 Procedure Cache: TotalProcs=4 TotalPages=6 InUsePages=6
2007-08-31 10:48:09.51 spid2 Dynamic Memory Manager: Stolen=243474 OS Reserved=1112 OS Committed=1090 OS In Use=1042 Query Plan=524 Optimizer=240444 General=2750 Utilities=143 Connection=489
2007-08-31 10:48:09.51 spid2 Global Memory Objects: Resource=1217 Locks=94 SQLCache=44 Replication=2 LockBytes=2 ServerGlobal=25 Xact=48
2007-08-31 10:48:09.51 spid2 Query Memory Manager: Grants=1 Waiting=0 Maximum=9793 Available=9695
2007-08-31 10:48:27.41 spid64 Error: 17803, Severity: 20, State: 8
2007-08-31 10:48:27.41 spid64 Insufficient memory available..
2007-08-31 10:48:39.51 spid2 LazyWriter: warning, no free buffers found.
2007-08-31 10:48:39.52 spid2 Buffer Distribution: Stolen=243522 Free=0 Procedures=6 Inram=0 Dirty=60 Kept=0 I/O=0, Latched=100, Other=0
I have scheduled an SSIS package to run repeatedly by creating a scheduled SQL job that runs every minute. Roughly every hour the package fails with the following error:
Description: System.OutOfMemoryException: Insufficient memory to continue the execution of the program. at Microsoft.SqlServer.Dts.Pipeline.Wrapper.IDTSCustomProperty90.get_Value() at Microsoft.SqlServer.Dts.Pipeline.ScriptComponentHost.CreateUserComponent() End Erro
The strange thing is that if I right-click the SQL job and run it manually, it works fine.
Did anyone encounter a similar problem, or know the solution to the above?
I have an instance of sql server 2005 Enterprise Edition 9.0.3152.
The server has 32 GB of RAM. SQL Server is AWE enabled and is configured to use 14 GB max memory (it is in a cluster, so it is configured that way for failover).
When I run the following command against a 1.9 GB database on the server:
DBCC CHECKDB('CLMS', NOINDEX) WITH NO_INFOMSGS
I get the following error message:
Msg 701, Level 17, State 123, Line 1
There is insufficient system memory to run this query.
I am stumped; I can see the server has 17 GB of available memory.
After this I run the following query:
DBCC CHECKDB('OLTP1PRO', NOINDEX) WITH NO_INFOMSGS
This database is 140 GB and CHECKDB works with no problem.
What the hell is going on? I have never received this error before with CHECKDB, and it is happening on the smallest database on the instance, while the exact same CHECKDB works fine for all the other, larger databases on the instance.
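Not from the original post, but when CHECKDB hits error 701 on SQL Server 2005 the usual first step is to capture the memory picture at the moment of failure; a hedged sketch:

-- Full internal memory breakdown at the time of the failing CHECKDB.
DBCC MEMORYSTATUS;

-- Query workspace memory semaphores; a non-zero waiter_count or very low
-- available_memory_kb points at grant pressure rather than total RAM.
SELECT resource_semaphore_id, target_memory_kb, available_memory_kb,
       granted_memory_kb, grantee_count, waiter_count
FROM sys.dm_exec_query_resource_semaphores;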
We got this error on a server after a reboot. I can appreciate that the server went into recovery after the shutdown, because services were still doing things when the shutdown occurred, and the recovery itself worked.
The machine has 4 GB of memory and, as you can see from the log, 4 CPUs. We are running SQL Server 2000 SP4.
Log file from the errorlog - any ideas?
2007-09-17 10:21:20.39 server Microsoft SQL Server 2000 - 8.00.2039 (Intel X86) May 3 2005 23:18:38 Copyright (c) 1988-2003 Microsoft Corporation Standard Edition on Windows NT 5.2 (Build 3790: Service Pack 1)
2007-09-17 10:21:20.39 server Copyright (C) 1988-2002 Microsoft Corporation.
2007-09-17 10:21:20.39 server All rights reserved.
2007-09-17 10:21:20.39 server Server Process ID is 2036.
2007-09-17 10:21:20.39 server Logging SQL Server messages in file 'D:\Microsoft SQL Server\MSSQL\log\ERRORLOG'.
2007-09-17 10:21:20.45 server SQL Server is starting at priority class 'normal'(4 CPUs detected).
2007-09-17 10:21:20.60 server SQL Server configured for thread mode processing.
2007-09-17 10:21:20.64 server Using dynamic lock allocation. [2500] Lock Blocks, [5000] Lock Owner Blocks.
2007-09-17 10:21:21.01 server Attempting to initialize Distributed Transaction Coordinator.
2007-09-17 10:21:23.18 spid3 Starting up database 'master'.
2007-09-17 10:21:23.73 spid3 0 transactions rolled back in database 'master' (1).
2007-09-17 10:21:23.73 spid3 Recovery is checkpointing database 'master' (1)
2007-09-17 10:21:23.81 server Using 'SSNETLIB.DLL' version '8.0.2039'.
2007-09-17 10:21:23.81 spid5 Starting up database 'model'.
2007-09-17 10:21:23.82 server SQL server listening on 128.1.100.121: 1433.
2007-09-17 10:21:23.82 server SQL server listening on 127.0.0.1: 1433.
2007-09-17 10:21:23.85 spid3 Server name is 'LDS-SSQL1'.
2007-09-17 10:21:23.85 spid8 Starting up database 'msdb'.
2007-09-17 10:21:23.85 spid10 Starting up database 'Sample_1'.
2007-09-17 10:21:23.85 spid9 Starting up database 'C8DM_Catalogs_RnD'.
2007-09-17 10:21:23.90 server SQL server listening on TCP, Shared Memory, Named Pipes.
2007-09-17 10:21:23.90 server SQL Server is ready for client connections
2007-09-17 10:21:23.90 spid11 Starting up database 'c8content'.
2007-09-17 10:21:23.90 spid12 Starting up database 'DataWarehouse'.
2007-09-17 10:21:23.90 spid13 Starting up database 'C8DM_Catalogs'.
2007-09-17 10:21:23.93 spid14 Starting up database 'DW_DS'.
2007-09-17 10:21:24.01 spid5 1 transactions rolled forward in database 'model' (3).
2007-09-17 10:21:24.04 spid5 0 transactions rolled back in database 'model' (3).
2007-09-17 10:21:24.04 spid5 Recovery is checkpointing database 'model' (3)
2007-09-17 10:21:24.18 spid5 Clearing tempdb database.
2007-09-17 10:21:24.28 spid13 113 transactions rolled forward in database 'C8DM_Catalogs' (9).
2007-09-17 10:21:24.37 spid8 199 transactions rolled forward in database 'msdb' (4).
2007-09-17 10:21:24.37 spid10 64 transactions rolled forward in database 'Sample_1' (6).
2007-09-17 10:21:24.39 spid10 0 transactions rolled back in database 'Sample_1' (6).
2007-09-17 10:21:24.42 spid13 0 transactions rolled back in database 'C8DM_Catalogs' (9).
2007-09-17 10:21:24.45 spid10 Recovery is checkpointing database 'Sample_1' (6)
2007-09-17 10:21:24.48 spid13 Recovery is checkpointing database 'C8DM_Catalogs' (9)
2007-09-17 10:21:24.49 spid9 49 transactions rolled forward in database 'C8DM_Catalogs_RnD' (5).
2007-09-17 10:21:24.51 spid9 0 transactions rolled back in database 'C8DM_Catalogs_RnD' (5).
2007-09-17 10:21:24.53 spid9 Recovery is checkpointing database 'C8DM_Catalogs_RnD' (5)
2007-09-17 10:21:24.59 spid5 Starting up database 'tempdb'.
2007-09-17 10:21:24.60 spid8 0 transactions rolled back in database 'msdb' (4).
2007-09-17 10:21:24.64 spid8 Recovery is checkpointing database 'msdb' (4)
2007-09-17 10:21:27.60 spid12 Analysis of database 'DataWarehouse' (8) is 8% complete (approximately 4 more seconds)
2007-09-17 10:21:28.12 spid14 Analysis of database 'DW_DS' (10) is 100% complete (approximately 0 more seconds)
2007-09-17 10:21:31.09 spid14 31 transactions rolled forward in database 'DW_DS' (10).
2007-09-17 10:21:31.10 spid14 0 transactions rolled back in database 'DW_DS' (10).
2007-09-17 10:21:31.14 spid14 Recovery is checkpointing database 'DW_DS' (10)
2007-09-17 10:21:36.00 spid12 Analysis of database 'DataWarehouse' (8) is 100% complete (approximately 0 more seconds)
2007-09-17 10:21:36.01 spid12 Recovery of database 'DataWarehouse' (8) is 0% complete (approximately 54 more seconds) (Phase 2 of 3).
2007-09-17 10:21:36.98 spid12 Recovery of database 'DataWarehouse' (8) is 9% complete (approximately 9 more seconds) (Phase 2 of 3).
2007-09-17 10:21:41.78 spid12 Recovery of database 'DataWarehouse' (8) is 56% complete (approximately 3 more seconds) (Phase 2 of 3).
2007-09-17 10:21:47.92 spid12 Recovery of database 'DataWarehouse' (8) is 100% complete (approximately 0 more seconds) (Phase 2 of 3).
2007-09-17 10:21:47.92 spid12 25 transactions rolled forward in database 'DataWarehouse' (8).
2007-09-17 10:21:48.10 spid12 0 transactions rolled back in database 'DataWarehouse' (8).
2007-09-17 10:21:48.20 spid12 Recovery is checkpointing database 'DataWarehouse' (8)
2007-09-17 10:21:48.37 spid3 Recovery complete.
2007-09-17 10:21:48.37 spid3 SQL global counter collection task is created.
2007-09-17 10:21:49.95 spid51 Using 'xpsqlbot.dll' version '2000.80.2039' to execute extended stored procedure 'xp_qv'.
2007-09-17 10:22:33.24 spid53 Using 'xpstar.dll' version '2000.80.2039' to execute extended stored procedure 'xp_sqlagent_monitor'.
2007-09-17 10:23:55.17 spid58 Error: 17803, Severity: 20, State: 12
2007-09-17 10:23:55.17 spid58 Insufficient memory available..
We're experiencing the following problem on our servers:
Server: Msg 701, Level 17, State 1, Line 3 There is insufficient system memory to run this query.
I've been able to fix the problem by a) lowering the Max Server Memory and b) lowering the Minimum Query Memory.
However, Microsoft states there is a hotfix available for this issue in KB article 817262 (http://support.microsoft.com/default.aspx?scid=kb;en-us;817262&Product=sql2k).
Any idea how one is supposed to contact Microsoft to get this fix without paying?
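For reference, the two settings mentioned above are both sp_configure options; a hedged sketch of the kind of change described (the values shown are placeholders, not recommendations):

-- Both options are advanced, so expose them first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap on the buffer pool, in MB (example value only).
EXEC sp_configure 'max server memory (MB)', 1536;
RECONFIGURE;

-- Minimum memory granted per query, in KB (1024 is the default).
EXEC sp_configure 'min memory per query (KB)', 1024;
RECONFIGURE;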
We receive this error after running a complex query. Could someone please shed some light on what this means exactly?
One of our developers said we need to purchase a server with more memory, but wouldn't SQL Server simply run slower by using virtual memory instead of physical RAM?
I know there is a limit and servers must be upgraded as processing requirements increase, due to data set size increases for example, but we have just been told to "purchase more power because after a while, as you process more rows, SQL Server will require more resources".
I'm looking for a sanity check. Service Broker and Event Notifications are new to me, so it is possible I'm either abusing or incorrectly using the technology. I'm experimenting with using Event Notifications and Service Broker to log error messages. The process works pretty well as long as I don't do something stupid, such as stressing it by calling raiserror in a loop of 100,000 - twice. It processed for a while, but eventually either SQL Server terminated (sev 25) or I had to force the SQL service to stop by killing its process via Task Manager. I could not connect to SQL Server or stop the service normally.
I have several general questions. I think they are good questions. Any thoughts or suggestions would be appreciated.
1) I'm looking into generating deadlock and blocking email alerts. It appears that Event Notification is the way to go since there are very few events and I can respond to the event with an email. Is there a better method to automate blocking and deadlock email alerts that include event details?
2) If I wanted to retain some history of errors for the developers, should Event Notifications be avoided if a high number of errors can be generated by a badly formed T-SQL? Avoid TRC_ERRORS_AND_WARNINGS and USER_ERROR_MESSAGE? (I could tell the developers not to be stupid, but why would that stop them if it does not stop me?)
3) Is there a way to efficiently filter an Event Notification from entering the queue before the Receive statement is called? I get some events that I throw away after receiving them. For example, perhaps I want all events from all non-system databases without having to add a notification for each individual non-system database. Or I want only errors with severity 11-17?
4) Is there a trick to filter out events from the procedure activated by the event. I tried using raiserror to debug a procedure without thinking. The result was that the queue never was empty because the processing produced more events to process. As a result, I don't use raiserror and use a try-catch to avoid raising errors in the procedure activated.
5) I can receive one message at a time using local variables or receive a batch of messages using a local table. Is a small batch the best way even if there is memory pressure?
6) In the activated procedure I continue processing in a loop until there are no more messages (a rough sketch of such a loop follows this list). This seems to be the most efficient. Is this always the case? Should I exit the procedure after a set (large) number of messages have been received? The procedure would activate again to continue processing?
7) Is there any point to using the MAX_QUEUE_READERS setting when processing event notifications? Should it be 1?
8) I currently get the next conversation group id and process its messages within a transaction. Is this a bad idea with event notifications? Should I just call Receive and get the next batch? I don't really care if I lose some messages if things are going badly. Should I avoid wrapping the receive in a transaction?
9) I could run a trace that starts with SQL Server; however, I think my only choice is to log to a file. Is there a way to trace to a table using SQLTrace without running profiler? I would like to automate the process and have the data in a table so that it can be easily queried and parsed for each database/team.
10) Is there a way to fix my process and handle 100,000 messages a minute? Is there a way to skip messages when it gets too busy? Can the query generating the messages get throttled - perhaps along with the query designer - before the server gets into trouble? Query the memory used by a queue and drop/create the queue with a delay if there is an issue?
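Not the poster's code, but a minimal sketch of the batched receive loop discussed in questions 5 and 6, assuming a hypothetical event notification queue dbo.ErrorNotificationQueue and a hypothetical log table dbo.EventLog:

-- Hedged sketch of an activated procedure body: drain the queue in small
-- batches into a table variable, then log the event notification XML.
DECLARE @batch TABLE (
    conversation_handle UNIQUEIDENTIFIER,
    message_type_name   NVARCHAR(256),
    message_body        VARBINARY(MAX));

WHILE 1 = 1
BEGIN
    DELETE FROM @batch;

    WAITFOR (
        RECEIVE TOP (100)
            conversation_handle,
            message_type_name,
            message_body
        FROM dbo.ErrorNotificationQueue
        INTO @batch
    ), TIMEOUT 1000;

    IF NOT EXISTS (SELECT 1 FROM @batch)
        BREAK;  -- queue drained; activation fires again when new messages arrive

    INSERT INTO dbo.EventLog (message_type_name, event_xml)   -- hypothetical table
    SELECT message_type_name, CAST(message_body AS XML)
    FROM @batch
    WHERE message_type_name = N'http://schemas.microsoft.com/SQL/Notifications/EventNotification';
END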
I'm using a single-CPU test machine with only 1 GB of RAM, a 2 GB pagefile, and a single disk. OS: Windows 2003 R2 SP2. SQL: 9.0.3054 and 9.0.3228 (update package 6 just came out today). I could add memory, but I think that would just let me queue more messages before running into trouble. I'll add code shortly - length limit.
I'm using VS 2008, developing in VB.NET, using a SQL 2005 DB and a local sync DB (SQL CE 3.5).
When attempting to append a record to the SQL CE DB's locally synced table I get the error <<Insufficient memory to continue the execution of the program>>; the same record gets appended to the SQL 2005 DB without error.
I received the error message below when trying to execute a SQL file which contains insert statements; the size of the file is 111 MB. Environment: SQL Server 2005 on Windows 2003.
Error Message - Insufficient Memory to continue the execution of the program. (mscorlib)
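One thing worth noting: the (mscorlib) source suggests it is the client tool that runs out of memory loading the 111 MB file, not the database engine. If it helps, a hedged example of running the same file from the command line with sqlcmd, which ships with SQL Server 2005 (server, database, and paths are placeholders):

sqlcmd -S MYSERVER -d MyDatabase -E -i "C:\scripts\inserts.sql" -o "C:\scripts\inserts_out.txt"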
Hi all, I am trying to restore a DB from my production to my standby server. It gives me the message "Msg 3105, Level 16, State 1 Data on dump will not fit into current database. Need 6500 Mbyte database."
The production server DB size is 5000 MB and I have increased the size of the standby DB to 6500 MB, but I still get the same message.
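Not from the original post, but the 'Need 6500 Mbyte database' check covers the source database's data and log allocations, so the target generally needs each piece to be large enough, not just a matching total. A hedged sketch for comparing the layout on both servers before retrying (database names are placeholders):

-- Run on each server and compare the data and log sizes reported.
EXEC sp_helpdb 'ProductionDB';   -- placeholder name: the source database
EXEC sp_helpdb 'StandbyDB';      -- placeholder name: the restore target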
My integrity job started failing recently with the following error: Msg 701, Level 17, State 123, Line 1 There is insufficient system memory in resource pool 'default' to run this query. Process Exit Code 1. The step failed.
select @@version returns: Microsoft SQL Server 2008 R2 (SP2) - 10.50.4033.0 (X64) Jul 9 2014 16:04:25 Copyright (c) Microsoft Corporation Developer Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1) (Hypervisor)
The system has 4 GB of RAM and SQL Server is using most of it. It has 2 processors.
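A hedged sketch (not part of the original post) for checking how much memory the 'default' resource pool and the instance as a whole are actually working with on SQL Server 2008 R2:

-- Memory per resource pool; error 701 names the pool that ran short.
SELECT name, used_memory_kb, target_memory_kb, max_memory_kb
FROM sys.dm_resource_governor_resource_pools;

-- Physical memory as the OS reports it to SQL Server.
SELECT total_physical_memory_kb, available_physical_memory_kb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;

-- Current cap on the buffer pool.
SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'max server memory (MB)';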
When attempting to save an SSIS package in Visual Studio I receive the error message detailed below. If I attempt to "Save As" to another location, I then receive an insufficient storage error. The development machine has over 1.5 GB of available physical memory and several GB of disk space available to save my 16 MB package. I have checked the event log and have found no related messages in the Application or server logs.
Any suggestions on how to determine the cause or resolution of this error message would be greatly appreciated.
Failure saving package. (Microsoft Visual Studio)
Insufficient memory to continue the execution of the program. (Microsoft.SqlServer.ManagedDTS)
Advanced Error Message Details
Failure saving package. (Microsoft Visual Studio)
------------------------------
Program Location:
at Microsoft.DataTransformationServices.Design.Serialization.DtrDesignerSerializer.SerializePackage(IDesignerSerializationManager manager, Package package, TextWriter textWriter)
at Microsoft.DataTransformationServices.Design.Serialization.DtrDesignerSerializer.SerializeComponent(IDesignerSerializationManager manager, IComponent component, Object serializationStream)
at Microsoft.DataWarehouse.Serialization.DesignerComponentSerializer.Serialize(IDesignerSerializationManager manager, Object value)
at Microsoft.DataWarehouse.VsIntegration.Designer.Serialization.DataWarehouseDesignerLoader.Serialize()
at Microsoft.DataWarehouse.VsIntegration.Designer.Serialization.BaseDesignerLoader.Flush(Boolean forceful)
at Microsoft.DataWarehouse.VsIntegration.Designer.Serialization.BaseDesignerLoader.Flush()
at Microsoft.DataWarehouse.VsIntegration.Designer.Serialization.DataWarehouseContainerManager.OnBeforeSave(UInt32 docCookie)
===================================
Insufficient memory to continue the execution of the program. (Microsoft.SqlServer.ManagedDTS)
------------------------------
Program Location:
at Microsoft.SqlServer.Dts.Runtime.Package.SaveToXML(String& packageXml, IDTSEvents events)
at Microsoft.DataTransformationServices.Design.Serialization.DtrDesignerSerializer.SerializePackage(IDesignerSerializationManager manager, Package package, TextWriter textWriter)
I have an SSIS Package that loads data from a log file. Prior to loading the data I need to prepare the file. I run a script that cleans the file. Then I import the flat file into SQL Server.
Log File Management Task
1. Run Unix Log File Task
2. Import the new log file (flat file) into SQL Server
Error i.Unix.dtsx Message: The script threw an exception: Exception of type 'System.OutOfMemoryException' was thrown.
Is this because the system is running out of memory? The RAM on the server is 4 GB. Below is a sample of the script. The job doesn't always fail; there are times when it executes successfully and other times when it fails.
Script Source Code
-----------------------------------------------------------
' Microsoft SQL Server Integration Services Script Task
' Write scripts using Microsoft Visual Basic
' The ScriptMain class is the entry point of the Script Task.
Imports System
Imports System.Data
Imports System.Math
Imports Microsoft.SqlServer.Dts.Runtime
Imports System.Collections.Generic
Imports System.IO
Imports System.Text
Imports System.Diagnostics
Imports System.Globalization
Imports Microsoft.VisualBasic
Imports System.Text.RegularExpressions

Public Class ScriptMain

    '********** Begin Error Log Settings **********
    'Dim sSource As String = "i.SSIS.Unix.FileManager"
    'Dim sLog As String = "Application"
    'Dim sMachine As String = "."
    'Dim ELog As New EventLog(sLog, sMachine, sSource)
    '********** End Error Log Settings **********

    Public Sub Main()
        'variables for the unix log file
        Dim newFile As String = "D:\iLog\unixlog.txt"
        Dim copyFile As String = "\\server16\iLog\unixlog.txt"
        'variables for working log files
        Dim oldFile As String = "D:\i\temp\unixlog.txt"
        Dim difFile As String = "D:\i\temp\unixdiff.txt"
        Dim trimdiff As String = "D:\i\temp\unixdifft.txt"
        Dim formatTemp As String = "D:\i\temp\unixlog_formatted.txt"
        Dim errorFile As String = "D:\i\temp\unixlog_bad.txt"

        'delete unixlog.txt, copy unixlog.txt
        'if the file is on the local server delete it and copy the new file over
        'if the file is not present copy the new file over
        Try
            If File.Exists(newFile) Then
                File.Delete(newFile)
                File.Copy(copyFile, newFile)
            Else
                File.Copy(copyFile, newFile)
            End If
            While Not File.Exists(newFile)
                System.Threading.Thread.Sleep(1000)
            End While
            'ELog.WriteEntry("FileCopy.Success".ToString(), EventLogEntryType.SuccessAudit, 4, CType(4, Short))
        Catch ex As Exception
            'ELog.WriteEntry("FileCopy.Failure" & ControlChars.CrLf & ex.ToString(), EventLogEntryType.Error, 2, CType(4, Short))
        End Try

        'open the old file; read backwards until we reach the carriage
        'return and store that "seek" position; now open the new file and
        'seek to that stored position. finally, read the rest of the file
        'and write that data to the difference file.

        ' determine position of last line in the old file
        Dim lastLine As Long = GetLastLinePosition(oldFile)

        ' get all data in new file starting at position determined above
        Dim fi As New FileInfo(newFile)
        Dim buffer(fi.Length - lastLine) As Byte
        Dim fs As New FileStream(newFile, FileMode.Open)
        Try
            fs.Seek(lastLine, SeekOrigin.Begin)
            fs.Read(buffer, 0, buffer.Length)
            fs.Close()
            ' write that new data to the difference file
            fs = New FileStream(difFile, FileMode.OpenOrCreate, FileAccess.Write, FileShare.None)
            fs.Write(buffer, 0, buffer.Length)
            fs.Close()
            'ELog.WriteEntry("FileCopy.Success".ToString(), EventLogEntryType.SuccessAudit, 4, CType(4, Short))
        Catch ex As Exception
            'ELog.WriteEntry("FileCopy.Failure" & ControlChars.CrLf & ex.ToString(), EventLogEntryType.Error, 2, CType(4, Short))
        End Try

        'remove the partial row from the difference file
        Try
            TrimFinal(difFile, trimdiff)
            'ELog.WriteEntry("TrimFinal.Call.Success".ToString(), EventLogEntryType.SuccessAudit, 4, CType(4, Short))
        Catch ex As Exception
            'ELog.WriteEntry("TrimFinal.Call.Failure" & ControlChars.CrLf & ex.ToString(), EventLogEntryType.Error, 2, CType(4, Short))
        End Try

        'perform the file formatting
        sFormatFile(trimdiff, formatTemp, errorFile)

        ' Dts.TaskResult = Dts.Results.Success
    End Sub

    Function GetLastLinePosition(ByVal fileName As String) As Long
        Dim pos As Long = -1
        Dim fs As New FileStream(fileName, FileMode.Open, FileAccess.Read, FileShare.ReadWrite)
        Try
            fs.Seek(-2, SeekOrigin.End) ' -2 to skip a potential vbcrlf at the end of file
            While fs.Position > 0
                fs.Seek(-1, SeekOrigin.Current)
                If fs.ReadByte = 10 Then
                    pos = fs.Position
                    Exit While
                Else
                    fs.Seek(-1, SeekOrigin.Current)
                End If
            End While
            fs.Close()
            'ELog.WriteEntry("GetLastLinePosition.Success".ToString(), EventLogEntryType.SuccessAudit, 4, CType(4, Short))
        Catch ex As Exception
            'ELog.WriteEntry("GetLastLinePosition.Failure" & ControlChars.CrLf & ex.ToString(), EventLogEntryType.Error, 2, CType(4, Short))
        End Try
        Return pos
    End Function

    Sub TrimFinal(ByVal difFile As String, ByVal trimdiff As String)
        Dim fi2 As New FileStream(difFile, FileMode.OpenOrCreate, FileAccess.Read)
        Dim fo2 As New FileStream(trimdiff, FileMode.OpenOrCreate, FileAccess.Write)
        Dim sr2 As New StreamReader(fi2)
        Dim sw2 As New StreamWriter(fo2)
        Dim line2 As String
        Try
            Do While sr2.Peek <> -1
                line2 = sr2.ReadLine()
                If (sr2.Peek <> -1) Then
                    sw2.WriteLine(line2)
                End If
            Loop
            sw2.Flush() : sw2.Close()
            sr2.Close()
            fi2.Close() : fo2.Close()
            'ELog.WriteEntry("TrimFinal.Success".ToString(), EventLogEntryType.SuccessAudit, 4, CType(4, Short))
        Catch ex As Exception
            'ELog.WriteEntry("TrimFinal.Failure" & ControlChars.CrLf & ex.ToString(), EventLogEntryType.Error, 2, CType(4, Short))
        End Try
    End Sub

    Sub sFormatFile(ByVal currentFile As String, ByVal tempFile As String, ByVal errorFile As String)
        Dim tfp As New Microsoft.VisualBasic.FileIO.TextFieldParser(currentFile)
        Dim sw As New System.IO.StreamWriter(tempFile)
        Dim swErrorFile As New System.IO.StreamWriter(errorFile)
        tfp.TextFieldType = FileIO.FieldType.Delimited
        tfp.SetDelimiters(",")
        tfp.HasFieldsEnclosedInQuotes = True
        tfp.TrimWhiteSpace = True
        Dim fields() As String
        Try
            While Not tfp.EndOfData
                Try
                    fields = tfp.ReadFields()
                    If fields.Length <> 23 Then
                        'write bad rows to error-file
                        swErrorFile.WriteLine(String.Join(",", fields))
                    Else
                        If fields(3) = "" And fields(13) = "" Then
                            sw.WriteLine(Chr(34) & String.Join(Chr(34) & "," & Chr(34), fields) & Chr(34))
                        ElseIf IsDate(fields(3)) = True OrElse fields(3) = Format(CDate(fields(3)), "yyyy-MM-dd HH:mms") And fields(13) = "" Then
                            sw.WriteLine(Chr(34) & String.Join(Chr(34) & "," & Chr(34), fields) & Chr(34))
                        ElseIf IsDate(fields(13)) = True OrElse fields(13) = Format(CDate(fields(13)), "yyyy-MM-dd HH:mms") And fields(3) = "" Then
                            sw.WriteLine(Chr(34) & String.Join(Chr(34) & "," & Chr(34), fields) & Chr(34))
                        ElseIf IsDate(fields(3)) = True OrElse fields(3) = Format(CDate(fields(3)), "yyyy-MM-dd HH:mms") _
                               And IsDate(fields(13)) = True OrElse fields(13) = Format(CDate(fields(13)), "yyyy-MM-dd HH:mms") Then
                            sw.WriteLine(Chr(34) & String.Join(Chr(34) & "," & Chr(34), fields) & Chr(34))
                        Else
                            swErrorFile.WriteLine(String.Join(",", fields))
                        End If
                    End If
                Catch ex As Exception
                    'ELog.WriteEntry("sFormatFile.TFP.Failure" & ControlChars.CrLf & ex.ToString(), EventLogEntryType.Error, 0, CType(4, Short))
                End Try
            End While
            tfp.Close()
            sw.Close()
            swErrorFile.Close()
            File.Delete(currentFile)
            File.Move(tempFile, currentFile)
            'ELog.WriteEntry("sFormatFile.Success".ToString(), EventLogEntryType.SuccessAudit, 0, CType(4, Short))
        Catch ex As Exception
            'ELog.WriteEntry("sFormatFile.Failure" & ControlChars.CrLf & ex.ToString(), EventLogEntryType.Error, 0, CType(4, Short))
        Finally
            GC.Collect()
        End Try
    End Sub
End Class
-------------------------
Does my script seem okay for releasing the server memory usage?
I have a DTS local package that transfers data from the host SQL Server to another. If I run the package directly (i.e. right-click - Execute Package), everything works as it should.
However, if I try and execute this as a Server Agent job it fails, the job history step details shows the following error:
Executed as user: MYSERVERNAME\SYSTEM. ...: Drop table customer Step
DTSRun OnError: Drop table customer Step, Error = -2147467259 (80004005) Error string: [DBNETLIB][ConnectionOpen (Connect()).]SQL Server does not exist or access denied. Error source: Microsoft OLE DB Provider for SQL Server Help file: Help context: 0
Error Detail Records: Error: -2147467259 (80004005); Provider Error: 17 (11) Error string: [DBNETLIB][ConnectionOpen (Connect()).]SQL Server does not exist or access denied. Error source: Microsoft OLE DB Provider for SQL Server Help file: Help context: 0
DTSRun OnFinish: Drop table customer Step
DTSRun OnStart: Create Table customer Step
DTSRun OnError: Create Table customer Step, Error = -2147467259 (80004005) Error string: [DBNETLIB][ConnectionOpen (Connect()).]SQL Server does not exist or access denied. Error source: Microsoft OLE DB Provider for SQL Server... Process Exit Code 2. The step failed.
Whilst this is obviously a connection issue, I don't understand how this can work if executed directly but not if executed as an Agent job.
Can anybody shed any light on this?
If it helps this is the sequence of events my DTS package uses
1) Connect to the remote DB
2) Drop table customers
3) Create new remote customers table
4) Connect to the local DB (hosting the DTS package)
5) Select * from the local customers table
6) Dump data from the local select into the remote DB
I have a DTS job that moves a text file from a workstation to the SQL server, transforms the data, pumps into a table and then archives the file. It runs fine within EM on the SQL Server, but once scheduled fails immediately on Step 1. I have logged the error as follows:
************************************************************
The execution of the following DTS Package succeeded:
Step 'DTSStep_DTSActiveScriptTask_3' was not executed
************************************************************
I have taken the following actions to resolve:
1. Tested the DTS job on the local SQL Server - it executes without error.
2. Checked security: the DTS job is owned by me, and I am a member of sysadmins, etc. For the SQL job I have tried several different accounts (mine, Domain Administrator, and most recently a proxy account I set up called SQLDTSuser). All have failed with the same error.
3. Checked the SQL Agent account to ensure that it has the necessary rights, stopped and started the SQL Agent service - same results.
4. As mentioned, I set up a proxy account which is in Domain Admins and sysadmins on SQL and changed the SQL Agent properties to use this account - result was the same error.
5. Set workflow properties on the DTS job's ActiveX task to "Execute on Main Package Thread" - result was the same error.
I'm at a loss; I have to be overlooking something obvious, but I need someone to hopefully point me in the right direction. Thank you in advance for any help I can get!
The platform is SQL 2000 and I've included the first ActiveX task code below:
Function Main()
    Dim fso
    Set fso = CreateObject("Scripting.FileSystemObject")
    Dim filePath
    filePath = DTSGlobalVariables("gv_SourceDir").Value & "\" & SourceFileName
    If fso.FileExists(filePath) Then
        fso.MoveFile filePath, DTSGlobalVariables("gv_WorkDir").Value & "\" & DTSGlobalVariables("gv_WorkFileName").Value
        Main = DTSTaskExecResult_Success
    Else
        Main = DTSTaskExecResult_Failure
    End If
End Function

Function SourceFileName
    Dim fileDate
    fileDate = DateAdd("d", -1, Date)
    SourceFileName = "LASERDATA" & PadLeft(DatePart("d", fileDate), 2) & PadLeft(DatePart("m", fileDate), 2) & Right(DatePart("yyyy", fileDate), 2) & ".CSV"
End Function

Function PadLeft(Value, Length)
    Dim retval
    retval = Value
    Do While Len(retval) < Length
        retval = "0" & retval
    Loop
    PadLeft = retval
End Function
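Not part of the original post, but since the first step moves the file from a workstation share, one hedged check is to list that share from inside SQL Server, which runs the command under the SQL Server service account rather than your interactive login (the UNC path is a placeholder):

-- If this cannot see the file, the scheduled job's server-side context
-- probably cannot either, even though an interactive run in EM still can.
EXEC master..xp_cmdshell 'dir \\workstation\share\LASERDATA*.CSV';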
I'm running a package that has an XML Task in the control flow. This task transforms an XML file with an XSLT.
The file is about 2 MB on a daily basis, but at the end of the month there is a full dump of data that makes the file around 400 MB. That is where my problem is.
I run this on my 2 GB memory workstation, and when the memory gauge in Task Manager reaches about 1.5 GB the package fails with an "Out of memory" exception.
I also ran this package on an 8 GB RAM server, and the same applies.
Is there any way of making this package utilize all the available memory? I even increased the virtual memory to see if that helped my issue, but nothing.