I want to log package info, like when the package starts and ends, and write that info to a SQL Server table. There are of course many ways to do this; I just want some opinions from you if you have some clever ways to do it.
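One pattern that has worked well for this, sketched below with invented table and procedure names: keep a small custom log table, call a "start" procedure from an Execute SQL Task as the first step (or from the package's OnPreExecute event handler), capture the new row's key in a package variable, and close the row out with an "end" procedure as the last step. The System::PackageName and System::ExecutionInstanceGUID variables can be mapped to the parameters.

-- Minimal custom logging objects; all names here are illustrative.
CREATE TABLE dbo.PackageLog (
    LogID       int IDENTITY(1,1) PRIMARY KEY,
    PackageName varchar(255)     NOT NULL,
    ExecutionID uniqueidentifier NULL,          -- from System::ExecutionInstanceGUID
    StartTime   datetime         NOT NULL DEFAULT GETDATE(),
    EndTime     datetime         NULL,
    Status      varchar(20)      NOT NULL DEFAULT 'Running'
)
GO

-- Called at package start; hands the new key back for the closing call.
CREATE PROCEDURE dbo.LogPackageStart
    @PackageName varchar(255),
    @ExecutionID uniqueidentifier,
    @LogID       int OUTPUT
AS
    INSERT INTO dbo.PackageLog (PackageName, ExecutionID)
    VALUES (@PackageName, @ExecutionID)
    SET @LogID = SCOPE_IDENTITY()
GO

-- Called at package end (and from OnError with a different status, if you like).
CREATE PROCEDURE dbo.LogPackageEnd
    @LogID  int,
    @Status varchar(20)
AS
    UPDATE dbo.PackageLog
    SET EndTime = GETDATE(), Status = @Status
    WHERE LogID = @LogID
GO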
I have a master package with 5 Execute Package Tasks for sub-packages. How do I get the detailed logging information that I see in debug mode into the local file system?
What I am confused about is that the schema mentioned in this article is:
Computer - The name of the computer on which the log event occurred.
Operator - The identity of the user who launched the package.
SourceName - The name of the container or task in which the log event occurred.
SourceID - The unique identifier of the package; the For Loop, Foreach Loop, or Sequence container; or the task in which the log event occurred.
ExecutionID - The GUID of the package execution instance.
MessageText - A message associated with the log entry.
DataBytes - A byte array specific to the log entry. The meaning of this field varies by log entry.
It is missing certain things like event ID, type, and category (these are usually seen for all messages in the Windows application event log). I feel these are pretty important to log, because I will be setting up an alerting mechanism based on them.
Can the SSIS logging schema be changed so that things like event ID, type, and category can be assigned at the time of logging the message?
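The sysdtslog90 schema itself is fixed by the log provider, but you can layer your own classification on top of it instead of changing it at logging time. A minimal sketch, assuming the stock table and a CASE mapping you would adjust to your own alerting rules:

-- Derive a type/category from the built-in event column for alerting queries.
CREATE VIEW dbo.vw_SSISLogClassified AS
SELECT  id, event, computer, operator, source, executionid, starttime, message,
        CASE event
            WHEN 'OnError'      THEN 'Error'
            WHEN 'OnTaskFailed' THEN 'Error'
            WHEN 'OnWarning'    THEN 'Warning'
            ELSE                     'Information'
        END AS [type]
FROM    dbo.sysdtslog90

An alerting job can then poll the view for rows with [type] = 'Error' since its last run.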
Hello all, I am struggling with defining a logging mechanism for my packages. I have 2 questions concerning that matter: I have used event handlers for my logging (as defined here: http://blogs.conchango.com/jamiethomson/archive/2005/06/11/1593.aspx ), but the problem is with packages that fail validation. I cannot find a log entry for these cases, since the OnError event doesn't fire (for instance, when the table I'm loading to doesn't exist).
And the second question: many of my packages are executed using an Execute Process Task (using the dtexec command line). I am trying to capture the result of the execution as a log file by using ">" on the command line to redirect the execution output to a log file, in the following format: dtexec /FILE "MyPackage.dtsx" > "MyPackageLog.log". This works fine when executed by myself, but when using the Execute Process Task (defined: Executable: DTExec.exe, Arguments: /FILE "MyPackage.dtsx" > "MyPackageLog.log") I get an execution error... Thanks, Liran
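On the second question: the ">" redirection is a feature of the cmd.exe shell, not of DTExec itself, and the Execute Process Task launches the executable directly with no shell in between, so the ">" and the file name are passed to DTExec as ordinary arguments and the run fails. One fix is to let cmd.exe do the redirecting; the settings below are a sketch of that approach:

Executable: cmd.exe
Arguments:  /C dtexec /FILE "MyPackage.dtsx" > "MyPackageLog.log"

Alternatively, the Execute Process Task has a StandardOutputVariable property that captures the process output into a package variable, which you can then write to a file yourself.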
Ok, I understand it is possible, but I still can't quite get the mechanics of it to work.
I create a new BIDS project, add a package to it, and add 3 connection managers, and set the server, instance, DB, etc. for the SQL connections. I change the name of each connection manager so the server name is not there, using instead something more like a generic name for what the database is. Fine. I right-click in the Control Flow area and go to Configurations. I enable configurations and save to a common place on the C: drive. I save everything, exit VS, fine, and I export all connection manager properties.
Then I start Visual Studio again. I create another new BIDS project, add a package to it, and add 3 connection managers. I don't actually connect them, but I use those same generic names from the first iteration. I enable configurations for the package. I am somewhat expecting to see the connections change from the same old defaults to the data I used in the first go-around.
The scenario is an ETL that takes flat file feeds via FTP and moves the data into various production SQL Server databases nightly.
There are a number of packages involved, and this depends upon the type of data being sent.
There are a set number of servers and databases to receive the transformed data. I would like to be able to define, say, 3 servers and maybe a couple of databases in each, one time, in the configuration. For simplicity, let's say 6 databases total. I would like a single point of maintenance for these 6 locations, and I would like all connection managers in all packages in all solutions to share these 6 settings. Is this possible? From my initial attempts, it would appear each package gets its own independent list of connection managers, which must be configured separately. I don't see how to share settings, which is really where the power of SSIS configurations would be.
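It is possible, with one caveat: each package still needs its own connection managers, but if they carry the same generic names in every package, all packages can point at one shared XML configuration file, which becomes the single point of maintenance. A hedged sketch of such a file (server, database, and connection manager names invented; the DTSConfigurationHeading element that BIDS generates is omitted here for brevity):

<?xml version="1.0"?>
<DTSConfiguration>
  <!-- One entry per shared connection; the Path keys on the connection manager
       name, so any package with a connection manager named "StagingDB" picks
       this value up. -->
  <Configuration ConfiguredType="Property"
                 Path="\Package.Connections[StagingDB].Properties[ConnectionString]"
                 ValueType="String">
    <ConfiguredValue>Data Source=PRODSRV1;Initial Catalog=StagingDB;Integrated Security=SSPI;</ConfiguredValue>
  </Configuration>
  <Configuration ConfiguredType="Property"
                 Path="\Package.Connections[Warehouse].Properties[ConnectionString]"
                 ValueType="String">
    <ConfiguredValue>Data Source=PRODSRV2;Initial Catalog=Warehouse;Integrated Security=SSPI;</ConfiguredValue>
  </Configuration>
</DTSConfiguration>

Entries that do not match anything in a given package are skipped with a warning rather than an error, as far as I can tell, so one file can carry the settings for all 6 databases even if a particular package only uses 2 of them.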
Similarly, I would like to be able to locate flat files at given paths. The package may know the name of the file it is looking for, or of the file it will create; the folder path needs to be computed from a configured folder root, with the package connection manager storing only the name of the file, which never changes.
I cannot see how to set something like this up with connection managers and configurations. Perhaps I still need a highly customized solution to achieve this, as we did for DTS, where we would only execute packages using our own executor, which would load the package, search out all environment-specific settings, and modify them on the fly prior to executing the package.
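The flat-file half, at least, is workable without a custom executor: configure a single root-folder variable from the shared file, then put a property expression on each flat file connection manager's ConnectionString. A sketch, with invented variable and file names, of the expression to set on the ConnectionString property:

@[User::RootFolder] + "\\CustomerFeed.txt"

(User::RootFolder comes from the shared configuration; the backslash is escaped inside the expression literal.) The package then only ever stores the file name; the environment-specific root travels through the configuration.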
Thanks for any direction you can give here. The books I've read seem to imply I might be able to do what I need here, but I can't seem to find the mechanics of making it happen.
Since I can't seem to find the Microsoft SQL 2000 forum, I will post this here:
I currently have logging enabled on several of my packages. However, we are still developing our packages, and are reaching upwards of 100; logging will eventually need to be active on all of them. In production there will still be a development server and a production server, both with different server names and user IDs/passwords.
I am looking for a way to dynamically change the logon information for the logging so that we do not have to have someone go through and manually change the options. I have tried using the Dynamic Properties Task, but this only works on the 2nd run of the package.
-----
As a second question: can anyone explain to me why the errordescription field in sysdtssteplog is truncated?
It is pretty easy to create the package configuration pointing to the shared, never-changing path of the XML file. But to get this to work, I have to add a bunch of package variables which are driven from the configuration. I then use expressions to compute the actual paths to source files and packages.
This business of copying a block of variables from one package to another is extremely tedious and, at least to me, something of a maintenance issue. I have found that if I get ahead of myself and forget to create the variables first, Visual Studio is NOT very forgiving, and will complain repeatedly just prior to crashing without offering to save anything. If I add a new variable, it is like an Easter egg hunt. This does NOT feel very single-point-of-maintenance when variables must be used this way to really apply configuration effectively; it is tedious, error prone, and unforgiving. I seem to spend a great percentage of my time copying the names, datatypes, and values of variables ONE AT A TIME. What is that all about?
I hear people singing the praises of configurations, but I must be missing something.
With DTS I used a system where global scope settings were "injected" into packages as they ran, and I never had to worry about anything, and it was damn hard to goof it.
Please someone, evangelize me!! Are people bypassing VS and using NOTEPAD on the packages or something like that to save time?
I have deployed to production a number of nested packages (parent packages that call child packages) to the SQL msdb via the Save As option, rather than building a deployment utility. These packages reference configuration files in a static location off the C: drive on the production server. In the development environment, when connection changes are made and I run the Reload with Upgrade option, the connection manager takes on the new server and user ID settings. However, out on the production side I get the following error from the SQL job log:
Cannot load the XML configuration file. The XML configuration file may be malformed or not valid.
As a result, the SQL job uses the default connection information, which references the development database rather than the production database. I researched the error but found no good solutions. Is there a way to ensure the configuration files are formed correctly and that the packages correctly reference the configuration files? We are trying to run the ETL updates via a SQL job.
I am developing a package on my local workstation. I have defined two logging service providers. One is for SQL Server and the other is for the Windows Event Log. I am using the Dts.Log method in a script task to write log entries.
Logging is working properly with the SQL Server provider, and rows are being inserted into the sysdtslog90 table. However, the only events being logged in the Windows Event Log are the package start and end events, which I believe SSIS is doing automatically anyway.
Is there something I need to do to enable Windows Event Log logging, other than defining a log provider and making sure it is checked as active? Won't SSIS write to two different logs with one Dts.Log call? Any ideas on what might be going wrong with my approach?
Hi, I decided to use the SQL Server log provider to store logging data for all my Integration Services packages. I also created some reports on this data for operating purposes. But I have a problem: the name of the executing package is not always written to the log, only the name of the single task which failed. That is not very useful information for operations, because I see no way to get the name of the package from the information logged in the sysdtslog90 table in the database I defined for SSIS logging.
How do I configure the package to always log the package information into the table, too?
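One workaround, until something better comes along: every execution also writes PackageStart and PackageEnd rows whose source column is the package itself (as far as I can tell these two events are always written, whether or not they are selected), and the executionid ties them to the task-level rows. A hedged sketch:

-- Attach the package name to every log row via a self-join on executionid.
SELECT  p.source AS package_name,
        l.event, l.source AS task_name, l.starttime, l.message
FROM    dbo.sysdtslog90 AS l
JOIN    dbo.sysdtslog90 AS p
          ON  p.executionid = l.executionid
          AND p.event = 'PackageStart'

A report built on a query like this shows the package for each failed task without any change to the packages themselves.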
I want to be able to see when records have been added to a table. The issue is we have a DTS job scheduled to run every night. The developer who wrote it password-protected it and doesn't work here anymore. I want to add a step to this series of DTS jobs, and I want to run it just prior to his job. Is there a way to see when the records are being added, or when this job is being run? Thanks again, you guys are the best.
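Two angles on this, without touching the protected package. For the rows themselves, the usual trick is adding a datetime column with a DEFAULT of GETDATE() so each insert is stamped automatically. For the job, msdb records every run; a sketch (job name invented):

-- When has the scheduled job run, and did it succeed? (run_status 1 = succeeded)
SELECT  j.name, h.run_date, h.run_time, h.run_status
FROM    msdb.dbo.sysjobs        AS j
JOIN    msdb.dbo.sysjobhistory  AS h ON h.job_id = j.job_id
WHERE   j.name = 'Nightly DTS Load'
ORDER BY h.run_date DESC, h.run_time DESC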
I recently read the Project REAL ETL design best practices whitepaper. I, too, want to do custom logging as I do today, and also use SSIS logging. The paper recommended using the variable System::PackageExecutionId to tie the 2 logging methods together.
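For anyone attempting the same tie-in: the GUID lands in the executionid column of sysdtslog90, and the matching system variable in the package is ExecutionInstanceGUID, so if the custom log stores that value the two sides join cleanly. A sketch, assuming a hypothetical custom table with an ExecutionID column:

-- Correlate custom log rows with the built-in SSIS log on the execution GUID.
SELECT  c.PackageName, c.StartTime,
        s.event, s.source, s.message
FROM    dbo.PackageLog  AS c            -- hypothetical custom log table
JOIN    dbo.sysdtslog90 AS s ON s.executionid = c.ExecutionID
ORDER BY s.starttime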
I've run into a problem with SSIS packages wherein tasks that write or copy files, or create or delete directories, quit execution without any hint of an error or a failure message, when called from an ASP.NET 2.0 application running on any machine other than the one where the package was created. By all indications it appears to be an identity/permissions problem.
Our application involves a separate web server and database server. Both have SQL Server 2005 installed, but the application server originally only had Integration Services. The packages are file system-deployed on the application server, and are called using Microsoft.SqlServer.Dts.Runtime methods. For all packages that involve file system tasks, the above problem occurs.
When the above packages are run using the command prompt (either DTEXEC or DTEXECUI) the packages execute just fine. This is expected since we are using an administrative account. However when a ShellExecute of the same command is called from ASP.NET, the same problem occurs.
I've tried giving administrative permissions to the ASPNET worker process user to no avail.
I have likewise attempted to use the SQL Server Agent job approach but that approach might not be acceptable for our clients since it means installing SQL Server 2005 Database services on the application server.
I have read the relevant threads in this forum, namely http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1044739&SiteID=1 and http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=927084&SiteID=1 but failed to find any solution appropriate for our set up.
We manage some SSIS servers, which have only SSIS and the SSIS tools installed on them, not the SQL Server database engine.
SSIS packages and configuration files are deployed on a NAS. We run the SSIS packages through DTEXEC by logging in to the server.
We want to allow developers to run their packages on their own on the server, but at the same time we don't want to give them physical access to the server, i.e., we do not want to add them to the RDP users list in the server properties. We want to allow them to run their packages remotely on the server.
One way we could think of is using PowerShell remoting, and we are working on that. But is there any other way, or any tool already available, for the same?
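For what it's worth, the remoting route can be as small as the sketch below (server name and UNC paths invented); it runs dtexec on the SSIS server under the developer's own credentials, with no RDP session involved:

# Requires Enable-PSRemoting on the server and WinRM permissions for the devs.
Invoke-Command -ComputerName SSISSRV01 -ScriptBlock {
    dtexec /FILE "\\nas01\packages\LoadCustomers.dtsx" /Conf "\\nas01\configs\prod.dtsConfig"
}

One wrinkle to test early: because the packages live on a NAS, the remote session may hit the double-hop credential problem when dtexec reaches for the share; CredSSP or delegation settings are the usual ways around it.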
I've inherited the administration of two SQL Servers, one 7.0 and the other 2000. We are looking into upgrading to 2005, so I'm gathering all the info I can on the DBs and the applications using them.
Now, how can I check if a DB is currently in use? Is there a way to find out when a DB was last accessed?
By the way, I've never done DB administration so you can figure out how desperate I'm getting.
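There is no built-in last-accessed timestamp in 7.0 or 2000, but two crude checks help: sysprocesses shows who is connected right now, and a Profiler trace filtered to the database, left running over a representative window, catches occasional use. A sketch of the first (database name invented):

-- Who is currently connected to the database in question?
SELECT  spid, loginame, hostname, program_name, last_batch
FROM    master.dbo.sysprocesses
WHERE   dbid = DB_ID('SuspectDB')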
I need some brief information regarding how picture (image) data can be stored in the database, and how it can be made possible. Is there a need for any conversion?
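In short: image data goes into a binary column (the image datatype on SQL Server 2000, varbinary(max) on 2005), and the only conversion needed is reading the file into a byte array in the application before inserting it. A sketch with an invented table:

-- Hypothetical table for storing pictures as raw bytes.
CREATE TABLE dbo.Pictures (
    PictureID   int IDENTITY(1,1) PRIMARY KEY,
    FileName    varchar(255) NOT NULL,
    ContentType varchar(50)  NOT NULL,  -- e.g. 'image/jpeg', handy when serving it back
    Data        image        NOT NULL   -- use varbinary(max) on SQL Server 2005+
)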
Well, I need to document what tables are used in my stored procedures. I'm really new to SQL as well. The problem is that if I try the code I have, or sp_depends for instance, it will only tell me the tables that are in that DB; so if I have multiple tables from different DBs, they are left out. I was told that sp_MSForEachDb will go through each DB, and I could try it that way. Any help or example code, dumbed down for me, would be awesome. This was posted in a previous thread, but I thought this would be a better explanation.
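A hedged sketch of the sp_MSForEachDb route (the procedure is undocumented but widely used); it substitutes ? with each database name in turn, and the inner query hunts through syscomments for a table name (invented here):

-- List stored procedures in every database whose text mentions a given table.
EXEC sp_MSforeachdb '
    SELECT ''?'' AS db_name, o.name AS proc_name
    FROM [?].dbo.sysobjects  AS o
    JOIN [?].dbo.syscomments AS c ON c.id = o.id
    WHERE o.xtype = ''P''                   -- stored procedures only
      AND c.text LIKE ''%Customers%''       -- hypothetical table name
'

One caveat: syscomments stores long procedure definitions in 4000-character chunks, so a table name that happens to straddle a chunk boundary can be missed.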
How can I check a username and password against the database? It doesn't need any special authentication, just a lookup in the database, and if the user exists then continue with the next page. Thanks
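A minimal sketch, assuming a hypothetical Users table; the two details that matter are passing the values as parameters from the page (never concatenating user input into the SQL) and storing a hash rather than the plain password:

-- Hypothetical credential check: returns a row only when both values match.
CREATE PROCEDURE dbo.CheckLogin
    @Username     varchar(50),
    @PasswordHash varbinary(20)
AS
    SELECT UserID
    FROM   dbo.Users
    WHERE  Username = @Username
      AND  PasswordHash = @PasswordHash

The page code then treats "one row returned" as a successful login and moves on to the next page.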
I have a web application accessing a SQL Server database (the usual stuff).
I want to be able to log who did what on which table. I need to display this information on the web application. Is there an easy way of doing this, rather than duplicating a lot of data?
The best way I have thought of so far is making a new table with the following fields: Table_Changed, Table_Primary_Key, Old_Field_Value, New_Field_Value, User, Date_Changed.
Every time someone changes something, it is logged in this table, so that at any time I can display who changed what. I have one more question: if I do it this way, is there a way of getting the primary key value of any table? E.g., could I do something like this_table.primary_key.value?
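Inside a trigger you don't need a generic this_table.primary_key.value: the trigger is written per table, so it simply reads that table's known key column from the inserted and deleted pseudo-tables. A sketch against a hypothetical Customers table, writing to the kind of audit table described above (here called ChangeLog):

-- Hypothetical audit trigger: one INSERT...SELECT per audited column.
CREATE TRIGGER trg_Customers_Audit ON dbo.Customers
AFTER UPDATE
AS
    INSERT INTO dbo.ChangeLog
        (Table_Changed, Table_Primary_Key, Old_Field_Value, New_Field_Value,
         [User], Date_Changed)
    SELECT  'Customers',
            CAST(d.CustomerID AS varchar(20)),  -- this table's key column
            d.Phone, i.Phone,                   -- hypothetical audited column
            SUSER_SNAME(),                      -- who made the change
            GETDATE()
    FROM    deleted  AS d
    JOIN    inserted AS i ON i.CustomerID = d.CustomerID
    WHERE   ISNULL(d.Phone, '') <> ISNULL(i.Phone, '')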
Is there a way to produce a log of all SQL statements hitting a database in a given range of time by a specific SPID? Obviously the SQL Server activity logs do not go into that much detail, except when errors are produced or a change is made to a system table. Is there a setting to add more detail, or to log a specific SPID's actions, or maybe a third party software that will give me what I am looking for?
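Not from the standard logs, but this is exactly what a server-side trace (the engine underneath Profiler) captures. Run Profiler, or a scripted trace, collecting SQL:BatchCompleted and RPC:Completed events to a file, and then slice the file by SPID and time. A sketch of the reading side (file path, SPID, and times invented):

-- Read a trace file and filter to one SPID in a time window (SQL 2000 syntax).
SELECT  StartTime, SPID, TextData
FROM    ::fn_trace_gettable('C:\traces\mytrace.trc', DEFAULT)
WHERE   SPID = 53
  AND   StartTime BETWEEN '2007-01-15 08:00' AND '2007-01-15 17:00'
ORDER BY StartTime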
Does SQL 7.0 have any built-in logging capabilities to identify row-level actions by operator? For instance, can it tell me that a particular user deleted or inserted a row? How would I tell it who the operator is?
I've been asked to write a trigger that will basically log changes to certain fields in certain tables, then create a front-end application where the user will be able to review the info. The front-end app is not a problem for me; the trigger is. I have found examples of how to do this on UPDATE when it's a complete row you want to log, but not a specific field. In addition, I also need to know if someone is attempting to read certain data, and who that user is. If the user is not someone who is allowed to read the data, then I need to send an email alert. I believe it's possible to do the above (despite my lack of knowledge :) - does anyone know where I can get more information on how to accomplish this, or where to start looking? Thanks to any who can guide me in the right direction.
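For the column-specific half: inside an UPDATE trigger, IF UPDATE(column) tells you whether the statement assigned to that column at all, which keeps the logging narrow. A compact sketch (table, column, and log-table names invented):

-- Log only when the sensitive column was actually changed.
CREATE TRIGGER trg_Employees_SalaryLog ON dbo.Employees
AFTER UPDATE
AS
    IF UPDATE(Salary)
        INSERT INTO dbo.FieldLog (TableName, KeyValue, OldValue, NewValue, ChangedBy, ChangedAt)
        SELECT  'Employees', CAST(d.EmployeeID AS varchar(20)),
                CAST(d.Salary AS varchar(40)), CAST(i.Salary AS varchar(40)),
                SUSER_SNAME(), GETDATE()
        FROM    deleted  AS d
        JOIN    inserted AS i ON i.EmployeeID = d.EmployeeID
        WHERE   d.Salary <> i.Salary

The read-auditing half is harder: triggers do not fire on SELECT, so the usual routes are forcing all reads through stored procedures (which can log and call xp_sendmail for the alert) or running a trace against the table's access.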
Here's my string. I know it's way wrong right now:

SELECT binbox_receipt.partner_code, binbox_receipt.link_id,
       binbox_receipt_archive.partner_code, binbox_receipt_archive.link_id
FROM   binbox_receipt, binbox_receipt_archive
WHERE  binbox_receipt_archive.link_id = binbox_receipt.link_id
  AND  binbox_receipt.partner_code = '1154'
  AND  binbox_receipt.link_id = '2684'
  AND  (binbox_receipt_archive.partner_code = '1154' AND binbox_receipt_archive.link_id = '2684')

I need to check 2 tables: whether the link_id and partner_code exist in the first table, or the link_id and partner_code exist in the second table. Any help would be greatly appreciated. I'm a little new at this, but having fun trying.
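If the intent is "does the pair exist in either table", two EXISTS tests say that more directly than the join above (a join only returns rows when both tables match at once):

-- One flag per table for the given pair; no join required.
SELECT
    CASE WHEN EXISTS (SELECT 1 FROM binbox_receipt
                      WHERE partner_code = '1154' AND link_id = '2684')
         THEN 1 ELSE 0 END AS in_receipt,
    CASE WHEN EXISTS (SELECT 1 FROM binbox_receipt_archive
                      WHERE partner_code = '1154' AND link_id = '2684')
         THEN 1 ELSE 0 END AS in_archive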
I am not sure if the question is appropriate for this forum, but here goes....
I have just started playing around with ASP.NET, and since I do not have a lot of money I downloaded and installed MSDE as my database. So far I haven't been able to find any free tools to access MSDE. I am looking for GUI tools with which I can create databases, tables, etc. I have experience with the MySQL database and am looking for something like MySQLAdmin for MSDE.
I'm using a SQL selection to fill a DataGrid. One of the fields I have is called diagnosis. This field in the database can contain multiple diagnoses, but I use a set of characters to divide each one. Example: Sick!@#$%Hurt!@#$%Ill!@#$% My problem is that this is exactly how it looks in my DataGrid. Can someone tell me how to parse out each diagnosis?
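If the splitting has to happen on the SQL side, a CHARINDEX/SUBSTRING loop peels off one diagnosis at a time; a sketch using the delimiter from the post (though it is often simpler to split in the page code, e.g. String.Split, before binding the grid):

-- Walk the delimited string and emit each diagnosis on its own line.
DECLARE @s     varchar(8000),
        @delim varchar(10),
        @pos   int
SET @s     = 'Sick!@#$%Hurt!@#$%Ill!@#$%'
SET @delim = '!@#$%'
SET @pos   = CHARINDEX(@delim, @s)
WHILE @pos > 0
BEGIN
    PRINT LEFT(@s, @pos - 1)                   -- the piece before the delimiter
    SET @s   = SUBSTRING(@s, @pos + LEN(@delim), 8000)
    SET @pos = CHARINDEX(@delim, @s)
END
IF LEN(@s) > 0 PRINT @s                        -- trailing piece, if any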
Hi, here I have a question. Whenever I look at my SPID in Lock/Process ID, one command, DBCC INPUTBUFFER(<my SPID>), is always running, and it shows 'tempdb.dbo.##lockinfo', MODE: X. Is this pretty normal, or is something wrong? I am not running anything; I just opened EM. Any suggestions? Thank you.