If I want to download a file but I don't know if it's available yet (actually, I'm positive it won't be available for some time), how do I make the FTP Task retry/wait until the file shows up in the FTP folder?
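For reference, one low-tech pattern is to poll from a Script Task ahead of the FTP Task and only pass control on once the file is listed. A minimal sketch, assuming an FTP connection manager named "FTP" and a hypothetical remote folder and file name:

Imports System.Threading
Imports Microsoft.SqlServer.Dts.Runtime

Public Class ScriptMain
    Public Sub Main()
        Dim ftp As New FtpClientConnection(Dts.Connections("FTP").AcquireConnection(Nothing))
        Dim folderNames() As String, fileNames() As String
        Dim found As Boolean = False

        ftp.Connect()
        ftp.SetWorkingDirectory("/incoming")            ' hypothetical remote folder
        For attempt As Integer = 1 To 10                ' up to 10 checks
            ftp.GetListing(folderNames, fileNames)
            If fileNames IsNot Nothing AndAlso Array.IndexOf(fileNames, "myfile.txt") >= 0 Then
                found = True                            ' the file has shown up
                Exit For
            End If
            Thread.Sleep(60000)                         ' wait a minute between checks
        Next
        ftp.Disconnect()

        If found Then
            Dts.TaskResult = Dts.Results.Success
        Else
            Dts.TaskResult = Dts.Results.Failure
        End If
    End Sub
End Class

Chain the real FTP Task after this Script Task with a success precedence constraint, so the download only fires once the file exists.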
Hello,

I am trying to create a view that shows the following:

Field1: Sum of Amounts from Table A
Field2: Count of Amounts from Table A
Field3: Sum of Amounts from Table B
Field4: Count of Amounts from Table B
...
Field15: Sum of Amounts from Table H
Field16: Count of Amounts from Table H

Things are a bit more complex, but this is the gist. I am using SQL 2000.

I know how to do this pretty easily using a stored procedure, but how can I do it in a view? A stored procedure won't meet my needs in this situation.

I tried OpenQuery('myserver', 'exec myprocedure') but get the message that my server is not configured for data access. I tried the system stored procedure to set data access to true, but nothing seemed to happen.

I also tried Select * from (Select Statement1, Select Statement2) but got a syntax error at the comma between Statement1 and Statement2. Trying to use Select Statement1 as ABC does not seem to work either.

Is there a way to do what I want without making 15 views and then a final view that shows them all together? I know I could probably do something by creating a ton of functions, but it really seems this should not be that hard... I am definitely open to any easy suggestions!

Thanks,
Ryan
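For reference, one shape that works as a plain view in SQL 2000 is a scalar subquery per field, which avoids both OPENQUERY and the pile of helper views. A minimal sketch, with hypothetical table and column names:

CREATE VIEW dbo.AmountSummary
AS
SELECT
    (SELECT SUM(Amount)   FROM dbo.TableA) AS Field1,  -- Sum of Amounts from Table A
    (SELECT COUNT(Amount) FROM dbo.TableA) AS Field2,  -- Count of Amounts from Table A
    (SELECT SUM(Amount)   FROM dbo.TableB) AS Field3,
    (SELECT COUNT(Amount) FROM dbo.TableB) AS Field4
    -- ...and so on through Table H

Each subquery is independent, so there is no join between the tables and the whole thing stays a single view.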
Is there a way to make a connection in the connection manager retry for a certain amount of time?

I have a Data Flow Task with an OLE DB source connected to a database. There are times when the connection to the database is not available due to some transport-level error, so I want the connection manager to make a couple of tries before giving up and reporting that it was not able to connect and timed out.

I have the Connect Timeout and the General Timeout properties set to 0, but is there a way to retry?
I have a flat source file (.csv) which I am importing to SQL Server. If the source file is not available at the specified location, the SSIS package should retry the execution n times (say, 3 times) after a certain time interval. The number of retries and the time interval should be configurable.

I have used a Flat File connection manager for the source and an OLE DB connection manager for the destination.
I am quite new to SSIS. Any help would be highly appreciated.
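One way to sketch this (not the only one): put the availability check in a Script Task ahead of the Data Flow and drive it from package variables, so the retry count and interval stay configurable. The variable names User::FilePath, User::MaxRetries and User::RetryIntervalSeconds below are assumptions; list them in the task's ReadOnlyVariables.

Imports System.IO
Imports System.Threading
Imports Microsoft.SqlServer.Dts.Runtime

Public Class ScriptMain
    Public Sub Main()
        Dim filePath As String = CStr(Dts.Variables("User::FilePath").Value)
        Dim maxRetries As Integer = CInt(Dts.Variables("User::MaxRetries").Value)
        Dim intervalMs As Integer = CInt(Dts.Variables("User::RetryIntervalSeconds").Value) * 1000

        Dts.TaskResult = Dts.Results.Failure
        For attempt As Integer = 1 To maxRetries
            If File.Exists(filePath) Then
                Dts.TaskResult = Dts.Results.Success   ' file arrived; let the Data Flow run
                Exit For
            End If
            Thread.Sleep(intervalMs)                   ' wait before the next check
        Next
    End Sub
End Class

Exposing the two variables through a package configuration keeps them editable without redeploying the package.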
I am not sure if this is the correct place to post this question. I am making a simple bill-pay system: people set up a schedule to pay a bill, it gets saved into the database, and when the time comes the money is transferred automatically. I am thinking I can do this in a stored procedure. Here is the interface:

From Account:
To Payee:
Amount:
Schedule Date:

Save the scheduled task / View scheduled tasks
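A stored procedure can do the transfer; the missing piece is something to fire it on time, which is usually a SQL Agent job running the procedure every few minutes. A minimal sketch, with every object name hypothetical (the Save button INSERTs into the schedule table, and View is just a SELECT against it):

CREATE TABLE dbo.PaymentSchedule (
    ScheduleID   int IDENTITY PRIMARY KEY,
    FromAccount  int      NOT NULL,
    ToPayee      int      NOT NULL,
    Amount       money    NOT NULL,
    ScheduleDate datetime NOT NULL,
    Processed    bit      NOT NULL DEFAULT 0
)
GO
CREATE PROCEDURE dbo.ProcessDuePayments
AS
BEGIN
    DECLARE @now datetime
    SET @now = GETDATE()

    BEGIN TRAN
    -- Debit the source account of every due, unprocessed payment.
    UPDATE a SET a.Balance = a.Balance - s.Amount
    FROM dbo.Accounts a
    JOIN dbo.PaymentSchedule s ON a.AccountID = s.FromAccount
    WHERE s.ScheduleDate <= @now AND s.Processed = 0

    -- Credit the payees.
    UPDATE a SET a.Balance = a.Balance + s.Amount
    FROM dbo.Accounts a
    JOIN dbo.PaymentSchedule s ON a.AccountID = s.ToPayee
    WHERE s.ScheduleDate <= @now AND s.Processed = 0

    -- Mark those payments as done.
    UPDATE dbo.PaymentSchedule SET Processed = 1
    WHERE ScheduleDate <= @now AND Processed = 0
    COMMIT
END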
I have a SQL Agent job that runs at 4:15 in the morning. The job has 5 steps, each step only runs if the preceding step succeeds. The second step, which calls an SSIS package that does the main processing, appears to finish as it goes on to the next step; however, when looking in 'View History' there are 2 entries for this step - the first one shows it as still running (Circled Green Arrow) but with a start and end time. The second entry says the job succeeded.
I have been seeing conflicts, such as deadlocks, with later jobs. I suspect this job is causing the conflicts - maybe the package is still running in the background instead of having actually completed?
Under what conditions might a job step show in the job history as both running AND completed successfully?
I have the following code in a Script Task. However, tablesInFile.Rows is sorted by name in ASCII order. Is there any way to get the "natural" order of the Excel workbook, or just the first tab?

connectionString = "Provider=Microsoft.Jet.OLEDB.4.0;" & _
    "Data Source=" & excelFileName & _
    ";Extended Properties=Excel 8.0"
excelConnection = New OleDbConnection(connectionString)
excelConnection.Open()
'tablesInFile = excelConnection.GetSchema("Tables")
tablesInFile = excelConnection.GetOleDbSchemaTable(OleDbSchemaGuid.Tables, _
    New Object() {Nothing, Nothing, Nothing, "TABLE_NAME"}) ' .GetSchema("Tables")
tableCount = tablesInFile.Rows.Count
For Each tableInFile In tablesInFile.Rows
I've already shrunk the tlog from 350 GB to 313 GB. My DB server (2008 R2 SP2) cannot be restarted, and the DB cannot go offline or be detached, due to company policy. After changing from full to simple recovery, the DB still has a 313 GB tlog file, and when I run DBCC OPENTRAN I get: Transaction information for database 'DB'.
Replicated Transaction Information:
Oldest distributed LSN : (0:0:0)
Oldest non-distributed LSN : (2882:26:1)
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
This suggests the DB is still marked as participating in replication, and that oldest non-distributed transaction is what is pinning the log.
So I run EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1.
This is useful when there are replicated transactions in the transaction log that are no longer valid and you want to truncate the log.
But I get an error:
Msg 18757, Level 16, State 1, Procedure sp_repldone, Line 1
Unable to execute procedure. The database is not published. Execute the procedure in a database that is published for replication.
There are currently 9 connections and all are sleeping. What else can I try in order to shrink the tlog file?
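For the record, the workaround usually suggested for exactly this error (sp_repldone refusing to run because the database is not published) is to temporarily mark the database as published, run sp_repldone, then strip the replication metadata back out. A sketch only, assuming 'DB_log' is the log's logical file name; it needs a configured Distributor and should be tested before touching production:

EXEC sp_replicationdboption @dbname = 'DB', @optname = 'publish', @value = 'true'

-- Mark every pending replicated transaction as distributed.
EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1

-- Undo the publish flag and clear leftover replication metadata.
EXEC sp_replicationdboption @dbname = 'DB', @optname = 'publish', @value = 'false'
EXEC sp_removedbreplication 'DB'

-- The log should now be shrinkable.
DBCC SHRINKFILE ('DB_log', 1024)   -- target size in MB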
Hi everyone! I'm new to using SQL Server (7.0) and I would like to know if it is possible to create a batch file that will automatically start a job... If so, let me know and give me some information (command, parameters) or a web site, anything that will help me do so... I'll be very grateful. If not, let me know anyway. Thanks a lot in advance...
Hi! I would first like to thank the person (Ray) who answered my question and also those who read it and tried to find an answer to it.
But it didn't quite help me, so I'll ask again and reformulate my question. Here it goes: I would like to know if I can make a batch file that executes or starts an Active Script job (in Enterprise Manager, go to SQL Server Group/Management/SQL Server Agent/Jobs, double-click a job, then go to the Steps tab and you will see that the type of the step is ActiveX Script; or just check whether you have an Active Script job). P.S. It's not a SQL script (*.sql). What I want to do is start that job from a batch file. If I can, can you give me more info on how to do it (command line, parameters), a web site, anything that can help me do that?
If I cannot do that, let me know please. Thanks a lot!
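For reference, a minimal sketch of the usual approach: start the job itself from the command line with osql and msdb.dbo.sp_start_job, which works no matter what the step type is. The server and job names here are hypothetical; -E uses a trusted connection:

@echo off
rem Start the SQL Agent job from a batch file (SQL Server 7.0/2000 ships osql).
osql -E -S MYSERVER -Q "EXEC msdb.dbo.sp_start_job @job_name = 'MyActiveScriptJob'"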
We have a database that, when we try to make it read only, throws the error below with a stack dump:
Location: recovery.cpp:4517
Expression: m_recoveryUnit->IsIntendedUpdateable ()
SPID: 51
Process ID: 6448

Msg 926, Level 14, State 1, Line 1
Database 'XXXX' cannot be opened. It has been marked SUSPECT by recovery.
See the SQL Server errorlog for more information.
Msg 5069, Level 16, State 1, Line 1
ALTER DATABASE statement failed.
Msg 3624, Level 20, State 1, Line 1
A system assertion check has failed. Check the SQL Server error log for details. Typically, an assertion failure is caused by a software bug or data corruption. To check for database corruption, consider running DBCC CHECKDB. If you agreed to send dumps to Microsoft during setup, a mini dump will be sent to Microsoft. An update might be available from Microsoft in the latest Service Pack or in a QFE from Technical Support.
Msg 3313, Level 21, State 2, Line 1
During redoing of a logged operation in database 'XXXX', an error occurred at log record ID (0:0:0). Typically, the specific failure is previously logged as an error in the Windows Event Log service. Restore the database from a full backup, or repair the database.
Msg 3414, Level 21, State 1, Line 1
An error occurred during recovery, preventing the database 'XXXX' (database ID 7) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.
Investigation done:
- DBCC CHECKDB came back clean
- The database is online and accessible
- Detached the database and attached it with a log rebuild; still could not bring the database READ_ONLY
I have packages that must generate error logs dynamically, with filenames that include the execution time and the name of the task.

I do this by changing the properties of the connection, inserting two expressions.
1. I set the FileUsageType property to 1 so the files get created.
2. I set the ConnectionString with an expression: @[User::myvariable] + "constant_description" + <time description> + ".txt"

The time description is:

(DT_STR,40,1252) DAY(GETDATE()) + "-" + (DT_STR,40,1252) MONTH(GETDATE()) + "-" + (DT_STR,40,1252) YEAR(GETDATE()) + "_" + REPLACE((DT_STR,10,1252)(DT_DBTIME)GETDATE(), ":", "_")

But that is not the problem.
In those packages I have one connection for all the tasks, and at execution time it creates one file for each task.

The problem is how I insert the task name into the filename.

I have an OnPreExecute event handler on each task that appends the task name to a string variable (myvariable). When I execute the whole package it works great, but when I execute just one task, the handler never fires and the task's name never gets added.

How can I add that name without using that handler? Is there another handler I could use that fires after pre-execute but before the system generates the new file name? Does anyone know another way to do this kind of thing?
Currently we have a database about 300 GB in size. Because our backup system failed for a while, we were left with a transaction log file which grew to about 160 GB. Our backups are working again and everything is fine. My understanding is that the transaction log file is now practically empty, but its capacity remains 160 GB.

When you delete records, the deleted transactions get logged to the transaction log file. My understanding is that when a log backup is done, these transactions are discarded from the log file.

Could I make use of this relatively large transaction log file and start deleting records without actually adding to the transaction log file's size?

The plan is to delete records from logging tables that are not referenced by any other table, without this increasing the transaction log file. For example, over a period of a few weeks we could delete a chunk of records from a table; then, after a backup has completed, delete another chunk, until we have the table down to just the records we now need. Will this work?
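In principle yes, provided each chunk's logging stays well under the log's existing 160 GB capacity and a log backup runs between chunks to free the space again. A sketch of the chunked delete, with the table, column and cutoff all hypothetical:

-- Delete in modest batches so no single batch bloats the log.
SET ROWCOUNT 50000                       -- cap each batch at 50,000 rows
WHILE 1 = 1
BEGIN
    DELETE FROM dbo.LoggingTable
    WHERE LogDate < '20060101'           -- records we no longer need
    IF @@ROWCOUNT = 0 BREAK              -- nothing left to delete

    WAITFOR DELAY '00:01:00'             -- breathing room; let log backups run
END
SET ROWCOUNT 0                           -- always reset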
Hi again! First, I would like to thank the two people who answered my previous question. Thanks a lot!
But it didn't quite answer my question, so I'll give a bit more info. I would like to know if I can make a batch file that executes or starts an Active Script job in SQL Server Agent (that is, in SQL Enterprise Manager). If I can, can you give me more info on how to do it (command line, parameters), a web site, anything that can help me do that.
If I cannot do that, let me know please. Thanks a lot!
In the Control Flow tab, I have an Execute SQL Task that outputs a full result set into a variable of the Object type. Now how can I write the contents of the full result set into a text file using a Script Task? I also want to format it the following way while I output to the file:
Column Name 1 : Column Value
Column Name 2: Column Value and so on
I tried writing the contents of the Object variable into a file, but the file contained a single word: System.__ComObject.
Code for Writing the Full Result Set into a Text File
Dim RSsqloutput As String = Dts.Variables("objVariable").Value.ToString
Dim strVal As String = "File completed on " & Now() & vbCrLf & "------------------------------------------------------" & vbCrLf
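That ToString is the clue: with a full result set, the Object variable holds an ADO recordset, which prints as System.__ComObject. The usual fix is to pour it into a DataTable with an OleDbDataAdapter and write the rows out from there. A sketch, assuming the variable objVariable is listed in ReadOnlyVariables and the output path is hypothetical:

Imports System.Data
Imports System.Data.OleDb
Imports System.IO
Imports Microsoft.SqlServer.Dts.Runtime

Public Class ScriptMain
    Public Sub Main()
        Dim dt As New DataTable()
        Dim da As New OleDbDataAdapter()
        ' Load the ADO recordset held by the Object variable into a DataTable.
        da.Fill(dt, Dts.Variables("objVariable").Value)

        Using writer As New StreamWriter("c:\output\results.txt")   ' hypothetical path
            writer.WriteLine("File completed on " & Now())
            writer.WriteLine("------------------------------------------------------")
            For Each row As DataRow In dt.Rows
                For Each col As DataColumn In dt.Columns
                    ' "Column Name: Column Value", one pair per line
                    writer.WriteLine(col.ColumnName & ": " & row(col).ToString())
                Next
            Next
        End Using
        Dts.TaskResult = Dts.Results.Success
    End Sub
End Class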
Hi, I need to trigger some packages upon the existence of specific files in a particular directory. Sounds like the File Watcher Task (from SQLIS) would do the job, but I am wondering what the difference is between using this tool and a For Each Loop container. I mean, if a file exists in a directory, the For Each Loop container will detect it. Since the File Watcher is not a service, the package containing it needs to be scheduled on a regular basis for the File Watcher to detect the file, right? So a For Each Loop container would do the job? So what would be the advantage of using the File Watcher Task?
A common issue that I run across with clients is that they only want to process a file once it has finished transmitting to the server. This SQL Server 2005 task reads the properties of a file and writes the values to a series of variables. For example, you can use this task to determine whether the file is in use (still being uploaded or written to) and then conditionally run the Data Flow task to load the file if it's not in use. You can also use it to determine when the file was created, in order to decide whether it must be archived.
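The core of that in-use check can also be sketched in a plain Script Task, if you'd rather not install a custom task: try to open the file exclusively and treat failure as "still being written". The variable name below is an assumption:

Imports System.IO
Imports Microsoft.SqlServer.Dts.Runtime

Public Class ScriptMain
    Public Sub Main()
        Dim path As String = CStr(Dts.Variables("User::FilePath").Value)  ' hypothetical variable
        Try
            ' FileShare.None fails while another process still holds the file open.
            Using fs As FileStream = File.Open(path, FileMode.Open, FileAccess.Read, FileShare.None)
            End Using
            Dts.TaskResult = Dts.Results.Success      ' transfer finished; safe to load
        Catch ex As IOException
            Dts.TaskResult = Dts.Results.Failure      ' still in use; check again later
        End Try
    End Sub
End Class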
Is it possible to set up some kind of error handler in VB, using the DTS object model, that will enable a retry of a step/task within a package? This is a DataPump task with a custom task. I am transferring 16 tables within each of the packages, one package for each of the 50 states, and executing them one at a time from a VB app. I can retry a package from the package error handler but can't find an equivalent for a step or task. I'd rather not resend tables that already made it, but restart from the table with the problem.
It is possible to write code to recreate an abbreviated form of the package but I'd like to avoid that, too.
Better yet, how can I avoid this error even appearing:
"Execution canceled (sic) by user".
I'm not cancelling anything, but about every 4th package this error comes up, and not even on the same table each time. If I retry the package, it almost always flies.
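One approach the DTS object model does allow (a sketch only; the server and package names are hypothetical and the constant usage is from memory, so verify before relying on it): after Execute, disable the steps that succeeded and re-execute, so the run restarts at the failing table instead of resending everything:

Dim oPkg As DTS.Package
Dim oStep As DTS.Step
Dim lAttempt As Long
Dim bFailed As Boolean

Set oPkg = New DTS.Package
oPkg.FailOnError = False                  ' inspect step results ourselves
oPkg.LoadFromSQLServer "MYSERVER", , , DTSSQLStgFlag_UseTrustedConnection, _
    , , , "StatePackage_AL"               ' hypothetical package name

For lAttempt = 1 To 3                     ' up to 3 tries
    oPkg.Execute
    bFailed = False
    For Each oStep In oPkg.Steps
        If oStep.ExecutionResult = DTSStepExecResult_Success Then
            oStep.DisableStep = True      ' don't resend tables that already copied
        Else
            bFailed = True                ' failed steps stay enabled for the retry
        End If
    Next
    If Not bFailed Then Exit For
Next

oPkg.UnInitialize
Set oPkg = Nothing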
>When a conversation is marked delayed, Service Broker performs the matching process again after a timeout period. Notice that failure to find a matching route is not considered an error.
For example, it could happen when there is no route for a service. The question is: what is this timeout? Is it configurable, on a system-wide basis?

It would be great if this process were described in more detail.
I need advice on how to retry a Data Flow if it errors. There is one problematic data flow within a .dtsx with many other data flows. The DF is a relatively simple OLE DB source to OLE DB target task, except that it is long-running and sometimes gets timed out at the source. I'm not sure how to do it with the event handler. Any advice appreciated.
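One pattern that skips event handlers entirely is to wrap just that Data Flow in a For Loop container driven by two Int32 variables (the names below are assumptions):

InitExpression:    @attempts = 0
EvalExpression:    @succeeded == 0 && @attempts < 3
AssignExpression:  @attempts = @attempts + 1

Inside the loop, a trailing Script Task on the Data Flow's success precedence constraint sets @succeeded = 1. For a failed attempt not to fail the whole package, set FailPackageOnFailure and FailParentOnFailure to False on the Data Flow and raise the container's MaximumErrorCount. This is a sketch only; test that the failure really stays contained in your version.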
I'm copying files to a folder with the naming convention as follows in the source folder:
CM_ABC_MY_TEST.txt
In the destination folder, this filename needs to appear as:
CM_XYZ_MY_TEST.txt
In my File System Task, I'm pretty sure I'm going to need an expression with a replace, substring, etc., but I am having a hard time nailing down the exact syntax.
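For what it's worth, a minimal sketch using the expression language's REPLACE, assuming hypothetical variables User::SourceFileName (just the file name) and User::DestFolder:

@[User::DestFolder] + "\\" + REPLACE(@[User::SourceFileName], "_ABC_", "_XYZ_")

Put this expression on a string variable (EvaluateAsExpression = True) and point the File System Task's DestinationVariable at that variable; "\\" is the escaped backslash in SSIS expressions.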
In my script task I have the following code. The task I'm trying to accomplish is: if the file name on the FTP server can be found in the local archive folder on the E: drive, then show the message "FileAlreadyThere" (I will ultimately change it to do nothing); if the file name on the FTP server cannot be found in the local archive folder on the E: drive, then transfer the file to the local package folder on the D: drive.

While the script task was executing I watched it closely, and the problem I saw is this: if some files on the FTP server are already in the local archive folder and some are not, then the files which are already in the archive folder get dumped to the package folder, and after that the files which are not in the archive folder get dumped to the package folder as well. But I only want the new files on the FTP server to be transferred to the package folder for further processing.

Then, after this finished, I saw all the files in the package folder get refreshed one after another; after the first round of refreshes a second round started, and after the second round finished it stopped. I could tell it was refreshing because the 'Date Modified' of each file changed. Then the script task turned green.

I don't see how the code below produced this result. Is something wrong in the logic of the loop? Does anyone have any idea why it's behaving the way it is now, and how to change the code to accomplish what I want? Thanks a lot!!
Imports System
Imports System.IO
Imports System.Data
Imports System.Math
Imports Microsoft.SqlServer.Dts.Runtime

Public Class ScriptMain
    Public Sub Main()
        Dim cm As ConnectionManager = Dts.Connections.Add("FTP")
        cm.Properties("ServerName").SetValue(cm, "ftp2.name.com")
        cm.Properties("ServerUserName").SetValue(cm, "username")
        cm.Properties("ServerPassword").SetValue(cm, "password")
        cm.Properties("ServerPort").SetValue(cm, "21")
        cm.Properties("Timeout").SetValue(cm, "0")
        cm.Properties("ChunkSize").SetValue(cm, "1000") ' 1000 KB
        cm.Properties("Retries").SetValue(cm, "1")

        Dim ftp As FtpClientConnection = New FtpClientConnection(cm.AcquireConnection(Nothing))
        ftp.Connect()
        ftp.SetWorkingDirectory("/directory")

        Dim fileNames() As String
        Dim folderNames() As String
        ftp.GetListing(folderNames, fileNames)

        If fileNames Is Nothing Then
            MsgBox("NoFileOnFTP")
        Else
            Dim fileName As String
            For Each fileName In fileNames
                If File.Exists("c:\temp\" + fileName) Then
                    MsgBox("FileAlreadyThere")
                Else
                    ' The original call passed the WHOLE fileNames array here, so every
                    ' pass through the loop re-downloaded every file - which explains the
                    ' repeated "refreshes". Receive only the current file instead:
                    ftp.ReceiveFiles(New String() {fileName}, "c:\temp", True, True)
                End If
            Next
        End If

        ftp.Disconnect()
    End Sub
End Class
Does anyone know how to do this using variables? Every time I try it, I get the
Error: Failed to lock variable for read access with error 0xc00100001.
I also tried writing a script and still got the same error. If I hard-code the values into the variables it works fine, but I will be running this every day so that it pulls in the current date along with the file name; the values of the variables will change every day. Here is my expression:
I was wondering if there is a way to 'Move File' with the File System Task inside of a For Each Loop container, but to dynamically set the destination path variable.

Currently, this is what I have:
FileDestinationPath variable - set to C:\TestFiles
FileSourcePath variable - set to C:\TestFiles
FileNameAndLocation variable - set to blank

For Each Loop Container - iterates through the folder C:\TestFiles, which has .txt files in it with dates in the file name, e.g. Test_09142006.txt. Sets the fully qualified file path in the Variable Mapping to FileNameAndLocation.

Script Task (within For Each Loop, first step) - sets FileDestinationPath to the correct dated folder within C:\TestFiles. For example, if the text files I want to move are for the 14th of September, it takes FileDestinationPath and appends the date folder to the end of it. The text files have a date in the file name (test_09142006.txt) and I am picking this apart (from FileNameAndLocation in the For Each Loop) to get the folder date:

Dts.Variables("User::FileDestinationPath").Value = Dts.Variables("User::FileDestinationPath").Value & "\" & Month & "_" & Day & "_" & Year & "\"

which gives me C:\TestFiles\9_14_2006\.

File System Task (within For Each Loop, second step) - this is where the action is supposed to occur. I want it to take FileDestinationPath and move the FileNameAndLocation file (from the For Each Loop) into that folder on each run.

Now for my problem. I want this package to run every day, but it has to set the FileDestinationPath variable dynamically according to that day's date. Basically, how do I get this to work, since I can't hard-code the destination path variable from the start? I have the DestinationVariable on the File System Task set to the FileDestinationPath variable, after the script task builds it. However, using FileNameAndLocation as the SourceVariable on my File System Task tells me that "Variable "FileNameAndLocation" is used as a source or destination and is empty."

Let me know if I need to clarify further... I may be missing something very simple. Any help would be greatly appreciated!
I've built an SSIS package which generates a file from a legacy system and then downloads the file into a designated folder on the server. I need the File Watcher Task to wait for the file to completely finish loading before it reports complete. Currently, as soon as the file is created, the WMI step finishes.
I am able to run SSIS packages as SQL Server Agent jobs with the Control Flow item "File System Task" if I move a file (test.txt) from a drive (C:) on the server (where the SQL Agent jobs run) to a subdirectory on the same drive. But if I try to move a file on a network drive, the package fails.
Historically I've always written a VBScript to copy a file from a SharePoint library. I don't like this method because I have to put a username & password in the script and maintain a config file.

Yesterday I was playing around with using a File System Task instead. The SharePoint file has a UNC path, so why not? I created a simple test package with a single File System Task that copies the SharePoint file (addressed via UNC) to another network location. The package runs fine locally.

When I try running it on our utility server, though, I get a "The file name [SHAREPOINT UNC PATH] specified in the connection was not valid" error. The package is running with a proxy on the server, and the proxy account has the same permissions to the SharePoint site (so far as I can tell) as me.
I have created a File System Task which is contained in a Foreach Loop Container. I have .bak files populating a directory from a maintenance backup plan.

At a certain point I need to delete the .bak files after I've zipped them all up.

How do I set the SourceVariable to read through the directory and pick up just the .bak files in the directory to delete?
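With the Foreach Loop itself, the usual setup is a Foreach File enumerator with Files = *.bak, the fully qualified name mapped to a variable, and that variable used as the File System Task's SourceVariable. If that gets fiddly, a hedged alternative is a single Script Task after the zip step (the folder variable below is an assumption):

Imports System.IO
Imports Microsoft.SqlServer.Dts.Runtime

Public Class ScriptMain
    Public Sub Main()
        ' Delete every .bak file in the backup folder once they've been zipped.
        Dim folder As String = CStr(Dts.Variables("User::BackupFolder").Value)  ' hypothetical variable
        For Each bakFile As String In Directory.GetFiles(folder, "*.bak")
            File.Delete(bakFile)
        Next
        Dts.TaskResult = Dts.Results.Success
    End Sub
End Class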
I have an issue with a DTS package. We create a zip file and then attach it to outgoing emails using DTS. The problem is that the attachment, when received, is named using the full path to the file, so the name is quite long.
Has anyone seen this before? Is there a way out of this?
I am considering mapping a drive to the share holding the file to be attached; that would shorten the name, but would still result in the path being included.
I am wondering if this is a bug, as I suspect this isn't the default behaviour.