I'm trying to add multiple files to a raw file. I use a loop for it and set the write option to "Create once", so that the file should be created when the package starts and data appended as it flows to the destination... However, I always get an error when I try to add the second file, saying the raw file already exists... Well, I expect that it exists, but I don't expect this error, because that's not the intended behaviour!
Is there anything else I have to do to use the raw file as described in BOL?
I need to get a list of customer IDs, use each one as a parameter to select from a transaction table, and then create a file for each customer. I have used an Execute SQL Task to get the list of customers and have put the result set into a variable.
How do I go through the recordset to create a file for each customer?
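For reference, a minimal sketch of the kind of query the Execute SQL Task could run, with ResultSet set to "Full result set" and captured into an Object variable such as User::CustomerList; the table and column names here are assumptions, not the actual schema:

-- Distinct list of customers to loop over
SELECT DISTINCT CustomerID
FROM dbo.CustomerTransactions;

A Foreach Loop container with the Foreach ADO enumerator can then walk User::CustomerList, mapping column 0 to a per-iteration variable (e.g. User::CustomerID) that parameterizes the data flow's source query and an expression on the destination file's ConnectionString.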
I have an SSIS package that creates a new file and saves results into that file. However, the requirement has it that I need to create this file on a server residing in a different domain. How can I achieve this in an SSIS package when it is scheduled as a job?
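One common pattern (a sketch only, with every name below an assumption) is to run the job step under a SQL Agent proxy tied to a credential for an account that the remote domain trusts to write to the share:

USE master;
GO
-- Credential for a hypothetical account with rights on the remote share
CREATE CREDENTIAL SsisFileCredential
    WITH IDENTITY = 'REMOTEDOMAIN\svc_ssis',
    SECRET = 'strong password here';
GO
USE msdb;
GO
EXEC dbo.sp_add_proxy
    @proxy_name = 'SsisFileProxy',
    @credential_name = 'SsisFileCredential';
-- Then grant the proxy the SSIS subsystem (see msdb.dbo.syssubsystems for
-- the exact name on your version) with sp_grant_proxy_to_subsystem, and
-- select the proxy in the job step's "Run as" list.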
I need to create about 1,000 (literally) Excel files that each contain 5 tabs. The data being placed on the tabs will always be the same (meaning the columns are static). I am fairly advanced at Excel VBA, so I can write code that does all of the following in Excel (looped 1,000 times):
Open an Excel template
Bring data in from the tables
Filter, then copy-paste the appropriate rows into each tab.
Save the new Excel file.
Email the file to the appropriate individual (via a Microsoft Exchange server).
As I started this in VBA, I thought I might be able to do it with SSIS instead. My concern is that I need the rows formatted (font, border, etc.) and the number of rows changes.
My questions are: Is it possible to format Excel output with SSIS? Can I email the files even if it is not via SMTP? Would SSIS process this data faster than Excel? Does this approach even make sense? Am I better off just doing it with VBA?
I created an SSIS solution for reading data from dBASE files and storing it in SQL Server. In a Foreach loop over a directory, up to one thousand dBASE files are read and stored. The system where the packages run has 16 GB of RAM. For the first few hundred dBASE files everything goes fine, but then the RAM no longer seems to suffice and a temp file is created (I changed the path in BufferTempStoragePath).
How can there be a need to create temp files if there is so much RAM available? Why does RAM fill up more and more during the SSIS package execution? Is there anything I can do to release some of it? (It is running in a loop and there is no need to keep all the data.) Could it be caused by dBASE? (I use the Microsoft Jet 4.0 OLE DB Provider.)
Another thing is that the temp file is not stored in the path I set in BufferTempStoragePath. Sufficient permissions are set, but the temp file is still created in the user's temp folder...
I have about 1200 SQL files in one of my folders. Almost all of these files do data inserts and updates, so they should be run only once. As and when required, I have already manually run around 150 of them. Whenever I run any of these scripts, I log the file name into a log table in my SQL Server, including the execution time. Since running the 1000+ remaining files takes a lot of time, I want to automate running them through a batch file. But I also want to filter out the files that have already been run. My file list looks as follows.
select * from sqlfileexecutionlog

FileName                   RunTime                  Result
-------------------------  -----------------------  -------
DeleteDuplicateOrders.sql  03/12/2014 14:23:45:091  Success
UpdateInventory.sql        04/06/2014 08:44:17:176  Success
Now I want to create a batch file to run the remaining files from my directory against my SQL Server. I also want to wrap each of these SQL file executions in a transaction and log success/failure along with the runtime and filename into the sqlfileexecutionlog table. As I add new SQL files into this directory, I should be able to run the same batch file and execute only the SQL files that have not been run.
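A minimal T-SQL sketch of the two database-side pieces the batch file would need, assuming the column names shown in the sample output above:

-- 1) Files already run successfully; the batch file skips these names:
SELECT FileName
FROM dbo.sqlfileexecutionlog
WHERE Result = 'Success';

-- 2) Logging stub the batch file calls after each script completes
--    (the literal values are placeholders):
INSERT INTO dbo.sqlfileexecutionlog (FileName, RunTime, Result)
VALUES ('UpdateInventory.sql', GETDATE(), 'Success');

The batch file itself can loop over *.sql in the directory, skip anything returned by the first query, run each remaining file with sqlcmd (its -b switch makes a failed script set ERRORLEVEL), and then call the logging INSERT with the outcome.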
I need to create a script that will import large XML files (500 MB - 7 GB) on a daily basis and store the data in a relational DB structure.
What is the best and fastest way of importing such files? I have played around with smaller files and found the following.
1. SSIS XML Data Source: it doesn't seem to like complex element types and throws out the file.
2. Bulk file import, storing the file in an XML variable and using XQuery to parse it: this works, but it can't take a file of more than 2 GB, so I can't use this method.
3. C# + XML serialization: this also works, but seems terribly slow. I open the DB connection once, so it doesn't open and close for each DB call, but it still takes a long time.
How do I import large XML files quickly into a relational table structure?
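For reference, a rough sketch of option 2 above (bulk-loading the document into an xml variable and shredding it with XQuery); the file path and element names are assumptions:

DECLARE @x XML;

-- Load the whole document as a single BLOB and cast it to xml
SELECT @x = CAST(BulkColumn AS XML)
FROM OPENROWSET(BULK 'C:\feeds\daily.xml', SINGLE_BLOB) AS src;

-- Shred repeated elements into relational rows
INSERT INTO dbo.ImportTarget (Id, Amount)
SELECT n.value('(Id)[1]', 'int'),
       n.value('(Amount)[1]', 'decimal(18,2)')
FROM @x.nodes('/root/row') AS t(n);

The 2 GB ceiling mentioned in option 2 is a hard limit of the xml data type itself, so for files approaching 7 GB the realistic routes are a streaming parser outside the engine (e.g. .NET's XmlReader feeding SqlBulkCopy) or splitting the file before loading.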
I was in the process of creating additional TempDB.ndf files, and received an error saying they already exist. I checked the location and it was empty, nothing to see here. So I looked in sys.master_files and there are several tempdb files listed in various locations, all of which come up empty.
So the files are listed as online in sys.master_files, but they do not exist on the server. I restarted SQL services but it did not change anything.
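Tempdb is rebuilt at startup from the file list stored in master, which is why a restart alone changes nothing. A sketch for inspecting and, where appropriate, removing a phantom entry (the logical file name is an assumption):

-- What the server believes tempdb's files are:
SELECT name, physical_name, state_desc
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');

-- Drop a catalog entry whose physical file no longer exists;
-- for tempdb the change generally takes effect at the next restart:
ALTER DATABASE tempdb REMOVE FILE tempdev2;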
I have to write a user-defined function in C# as a class library and create a DLL file. Now I want to use this function in SQL Server 2005 as part of CLR integration.
I have tried this:
CREATE ASSEMBLY abc FROM 'C:\abc.dll'
WITH PERMISSION_SET = SAFE
but it gives me an incorrect syntax error, so can anyone please help me with what to do about my problem?
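For comparison, a complete hedged example of the sequence (note the backslash in the path; the namespace, class, and method names are all assumptions):

-- CLR execution must be enabled once per instance:
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
GO
CREATE ASSEMBLY abc
FROM 'C:\abc.dll'
WITH PERMISSION_SET = SAFE;
GO
-- Expose a function from the assembly:
CREATE FUNCTION dbo.MyFunction (@input NVARCHAR(100))
RETURNS NVARCHAR(100)
AS EXTERNAL NAME abc.[MyNamespace.MyClass].MyMethod;

If the error really is "incorrect syntax", it is also worth confirming the statement is being run against a SQL Server 2005 instance, since SQL Server 2000 does not recognize CREATE ASSEMBLY at all.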
I have a folder which contains all the flat files used by all the packages (e.g. by their flat file connection managers) in my project. If we want to change the name of the folder, we have to change it in every package (in all the connection managers) manually. That is hard-coded and time-consuming.
Is there any way to change it in one place (an XML file, a variable) so that the change affects all the packages?
One more doubt:
If we configure a flat file connection manager in package configurations, a configuration file (e.g. XML) is created, and we can make changes in that file regarding that connection manager only.
But I want one configuration file (e.g. XML) in which I can configure the details of all the connection managers used in all the packages.
I've been using replication for a long while now but have never come across this error. It's a basic transactional replication from ServerA to ServerB, where ServerA is also the distributor. Everything had been running fine on it until yesterday, when this error started popping up and no further transactions could be delivered.
After some quick googling I was able to determine that the distribution agent account needed write access to C:\Program Files\Microsoft SQL Server\100\Com. According to the MSFT article, it's because the distribution agent is running under a non-default profile. I didn't change this. However, what I did change around the time these errors started occurring was the server's Max Text Replication Size setting. It would be far too coincidental for this not to be the cause, but what I don't understand is *why* it would have had this effect.
How do I change this? It is definitely not preferable to create temp files in this directory in our environment.
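If the suspicion about the setting is right, it can at least be inspected and put back with sp_configure (65536 is the documented default):

EXEC sp_configure 'max text repl size';         -- show the current value
EXEC sp_configure 'max text repl size', 65536;  -- revert to the default
RECONFIGURE;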
I have a master table containing details of over 800000 surveys made up of approximately 400 distinct document names and versions. Each document can have as few as 10 questions but as many as 150. Each question represents one row.
My challenge is to create a separate spreadsheet for each of the 400 distinct document names and versions containing all the rows and columns present in the master table. The largest number of rows would be around 150 and therefore each spreadsheet will not be very big.
e.g. in my sample data below, I will need to create individual Excel files named as follows...
"Document1Version1.xlsx" containing all the column names and 6 rows for the 6 questions relating to Document 1 version 1
"Document1Version2.xlsx" containing all the column names and 8 rows for the 8 questions relating to Document 1 version 2
"Document2Version1.xlsx" containing all the column names and 4 rows for the 4 questions relating to Document 2 version 1
I assume that one of the first things is to create a lookup of the distinct document names and versions, assign some variables, and then use this lookup to loop through and sequentially filter the master table data ready for creating the individual Excel files.
--CREATE TEMP TABLE FOR EXAMPLE
IF OBJECT_ID('tempdb..#excelTest') IS NOT NULL
    DROP TABLE #excelTest

CREATE TABLE #excelTest (
    [rowID] [nvarchar](10) NULL,
    [docName] [nvarchar](50) NULL,
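Under that approach, the lookup feeding the loop could be as simple as the following; the version column name is an assumption, since the sample DDL above is cut off before it appears:

-- Distinct document/version pairs to drive the per-file loop
SELECT DISTINCT docName, docVersion
FROM #excelTest
ORDER BY docName, docVersion;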
How do I create a text file in UNIX format using the Flat File connection manager? By default, when we create a connection manager for flat files, it uses "CRLF" as the row delimiter.
I believe the format of the file is decided by the line-ending control characters at the end of each row. Different operating systems use different line endings:
CR   - Mac OS (Carriage Return)
LF   - UNIX (Line Feed)
CRLF - Windows (Carriage Return + Line Feed)
I have a SQL2000 DTS package that executes vbscript to loop through a recordset which:
- runs a stored procedure and populated tables
- builds a recordset from the populated tables to write records to an Excel file
- writes status to text files with either the error or success notices
I use FSO to set up the success and error files, but the scheduled job in SQL2005 which calls the SSIS package returns the following error:
"Retrieving the file name for a component failed with error code 0x0015F74C"
I can successfully run this (VBScript) in both the SSIS package via BI Development Studio and in MS Access (exactly the same code in both) - but not as an SSIS package called from a scheduled job in SQL 2005.
I am at an impasse with this ... any and ALL assistance would be GREATLY appreciated.
Hi, I am trying to open or create a SQL Express database within Visual Studio 2005 Professional in the App_Data folder. If I attempt either method, I get the following dialog box:

Required Components Missing
Connections to SQL Server files (*.mdf) require SQL Server Express 2005 to function properly. Please verify the installation of the component or download from the URL: http://go.microsoft.com/fwlink/?LinkID=49251

The link above just takes you to the download page for SQL Server Express 2005. I have both SQL Server 2005 Developer and SQL Server 2005 Express instances running. The existing database works properly in my web application, but I cannot access it through Visual Studio.

As background, I did have a problem connecting to the database via the web application, receiving a "Failed to generate a User Instance of SQL Server" error. I was able to fix that by renaming my SQLEXPRESS folder in C:\Documents and Settings\user\Local Settings\Application Data\Microsoft\Microsoft SQL Server Data. When the web page ran, it created another SQLEXPRESS folder. However, I cannot figure out why I am having the other issue.

Thanks in advance,
Patrick
On a new server, I proposed that we separate data files, log files, tempdb, backups, etc. onto separate LUNs on a SAN with high-speed solid state drives. I was told that with the new solid-state SAN technology this would decrease performance, and that it does not work the same way as it did when you had RAID 5 etc. I thought that if things were carried out correctly by a SAN administrator, they would know how to configure it for optimal performance.
In the For Loop, how do I iterate from older flat files to newer flat files based on the files' timestamps? If there are older files in the folder, they should be processed first before continuing with the newer ones.
In the first step of my SSIS package I need to get files from FTP and dump them in a local directory, but it's more than that; the logic is like this:
1. If no file(s) are found, stop executing and send an email saying no file(s) were found.
2. If file(s) are found, compare them with the existing files in our archive folder. If the file(s) already exist in the archive folder, stop executing and send an email saying the file(s) already exist; if the file(s) are not in the archive folder yet, transfer them to the local directory for processing.
I know I have to use a Script Task to do this. I did some research and found examples for each of the above two steps, but not both combined, which is why I need some help getting the logic incorporated correctly.
Thanks for the help in advance, and I apologize for the long lines of code!
example for step 1: ----------------------------------------------------------------------------------------------------------
' Microsoft SQL Server Integration Services Script Task
' Write scripts using Microsoft Visual Basic
' The ScriptMain class is the entry point of the Script Task.
'
' The execution engine calls this method when the task executes.
' To access the object model, use the Dts object. Connections, variables, events,
' and logging features are available as static members of the Dts class.
' Before returning from this method, set the value of Dts.TaskResult to indicate success or failure.
'
' To open Code and Text Editor Help, press F1.
' To open Object Browser, press Ctrl+Alt+J.
Public Sub Main()
    Dim cDataFileName As String
    Dim cFileType As String
    Dim cFileFlgVar As String

    WriteVariable("SCFileFlg", False)
    WriteVariable("OOFileFlg", False)
    WriteVariable("INFileFlg", False)
    WriteVariable("IAFileFlg", False)
    WriteVariable("RCFileFlg", False)

    cDataFileName = ReadVariable("DataFileName").ToString
    cFileType = Left(Right(cDataFileName, 4), 2)
    cFileFlgVar = cFileType.ToUpper + "FileFlg"
    WriteVariable(cFileFlgVar, True)

    Dts.TaskResult = Dts.Results.Success
End Sub

Private Sub WriteVariable(ByVal varName As String, ByVal varValue As Object)
    Try
        Dim vars As Variables
        Dts.VariableDispenser.LockForWrite(varName)
        Dts.VariableDispenser.GetVariables(vars)
        Try
            vars(varName).Value = varValue
        Catch ex As Exception
            Throw ex
        Finally
            vars.Unlock()
        End Try
    Catch ex As Exception
        Throw ex
    End Try
End Sub

Private Function ReadVariable(ByVal varName As String) As Object
    Dim result As Object
    Try
        Dim vars As Variables
        Dts.VariableDispenser.LockForRead(varName)
        Dts.VariableDispenser.GetVariables(vars)
        Try
            result = vars(varName).Value
        Catch ex As Exception
            Throw ex
        Finally
            vars.Unlock()
        End Try
    Catch ex As Exception
        Throw ex
    End Try
    Return result
End Function
End Class
example for step 2: -------------------------------------------------------------------------------------------------------
' Microsoft SQL Server Integration Services Script Task
' Write scripts using Microsoft Visual Basic
' The ScriptMain class is the entry point of the Script Task.
Imports System
Imports System.Data
Imports System.Math
Imports Microsoft.SqlServer.Dts.Runtime
Public Class ScriptMain
' The execution engine calls this method when the task executes.
' To access the object model, use the Dts object. Connections, variables, events,
' and logging features are available as static members of the Dts class.
' Before returning from this method, set the value of Dts.TaskResult to indicate success or failure.
'
' To open Code and Text Editor Help, press F1.
' To open Object Browser, press Ctrl+Alt+J.
Public Sub Main()
Try
'Create the connection to the ftp server
Dim cm As ConnectionManager = Dts.Connections.Add("FTP")
It works remotely if I run it via the command prompt. But when I add this to a T-SQL job on my remote SQL instance, it runs without deleting anything. What am I missing?
Brief overview...

Windows Server 2003 Enterprise 64-bit - all service packs and patches current
SQL Server 2005 Enterprise Edition 64-bit, build:
Microsoft SQL Server 2005 - 9.00.3054.00 (X64) Mar 23 2007 18:41:50 Copyright (c) 1988-2005 Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 5.2 (Build 3790: Service Pack 2)
I cannot import any SSIS packages nor create any new folders under Stored Packages. I have googled the newsgroups and looked at BOL to no avail. Help!
I am thinking about replacing the INSERT data script files that I have with XML files. This way I can open the XML file using an XML editor, see the values in a grid, and make changes more easily.

Do you see any problem with this approach? I managed to put together some code that exports a SQL table with its data to an XML file, and also code that reads the XML file's data and inserts it into a table. Now I am researching XSD, dt:datatype, DTD... (I am new to XML) in order to figure out how I can use a single XML file that holds the SQL Server fields, the datatypes and their values.

If you have links to some sample code that has anything to do with the datatype export and import I am working on, can you please share them with me? Most importantly, what do you think about the idea of using XML files vs. SQL scripts? Thank you.
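For the export half, a minimal example of the sort of statement that works on SQL Server 2005 and later (the table name is an assumption, and the poster's own code may well differ):

-- Emit a table's rows as an element-centric XML document
SELECT *
FROM dbo.MyTable
FOR XML RAW('row'), ROOT('table'), ELEMENTS;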
I want to reduce the number of virtual log files (VLFs) I have. In reading through the Books Online documentation, I realized that I forgot to move the transaction log files to a different drive. Now that the server is in production, I wanted to get some input about the best way of making this change.
Can I just change the directory the log files are being written to in the DB properties without having any adverse problems occurring?
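Changing the path in the database properties dialog is not enough on its own; the usual sequence is roughly the following sketch (database name and paths are assumptions):

-- 1) Point the catalog at the new location:
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, FILENAME = 'L:\SQLLogs\MyDB_log.ldf');

-- 2) Take the database offline, move the physical .ldf in the OS,
--    then bring the database back online:
ALTER DATABASE MyDB SET OFFLINE;
-- (copy/move the file to L:\SQLLogs here)
ALTER DATABASE MyDB SET ONLINE;

To reduce the VLF count itself, the log can then be shrunk with DBCC SHRINKFILE and regrown to its working size in a few large increments rather than many small autogrowths.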
Hi! I'm using replication with two databases on SQL 2000. At the beginning, the log file size was 50 MB and the data file size was 150 MB. But now the log file size is 2 GB and the data file size is 4 GB. I would like to decrease the log files and the data files. How do I do this? (Using truncate and shrink doesn't change anything.) Thanks!
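A sketch of the usual SQL 2000-era sequence (logical file names are assumptions; note that with replication the log cannot be truncated past transactions the log reader agent has not yet delivered, which is one reason a shrink can appear to do nothing):

BACKUP LOG MyDB TO DISK = 'D:\Backup\MyDB_log.bak';
DBCC SHRINKFILE (MyDB_log, 100);   -- target size in MB
DBCC SHRINKFILE (MyDB_data, 500);  -- data file, only if really needed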
I need to write code to access many image files and wave files and display them on a web page. Both kinds of files are stored on the server (as many experts in this forum advised) and their locations are in the SQL Server database. As for the WAV files, I think I will have to convert them to MP3 first for fast delivery over the internet. Since I am a newbie, any help would be appreciated. Thanks.
I have a problem with bcp and format files.

We changed our databases from varchar to nvarchar to support Unicode. No problems so far with that; it is working fine. But now I need a format file for the customer table, and it is not working. It works fine with the old DB with varchar, but with nvarchar I'm not able to copy the data. The biggest problem is that I get no error message: bcp starts copying to the table and finishes without an error message.

This is my table:

CREATE TABLE [dbo].[Customer] (
    [ID] [int] NOT NULL,
    [CreationTime] [datetime] NULL,
    [ModificationTime] [datetime] NULL,
    [DiscoveryTime] [datetime] NULL,
    [Name_] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AI NULL,
    [Class] [int] NULL,
    [Subclass] [int] NULL,
    [Capabilities] [int] NULL,
    [SnapshotID] [int] NOT NULL,
    [CompanyName] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
    [TargetRCCountry] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
    [LocationID] [int] NULL,
    [MirrorID] [binary] (16) NULL,
    [DeleteFlag] [bit] NULL,
    [AdminStatus] [bit] NULL
) ON [PRIMARY]
GO

and this is the format file:

8.0
13
1   SQLINT       1  12   "#~@~#"  1   ID                ""
2   SQLDATETIME  1  24   "#~@~#"  2   CreationTime      ""
3   SQLDATETIME  1  24   "#~@~#"  3   ModificationTime  ""
4   SQLDATETIME  1  24   "#~@~#"  4   DiscoveryTime     ""
5   SQLNCHAR     2  510  "#~@~#"  5   Name_             SQL_Latin1_General_CP1_CI_AS
6   SQLINT       1  12   "#~@~#"  6   Class             ""
7   SQLINT       1  12   "#~@~#"  7   Subclass          ""
8   SQLINT       1  12   "#~@~#"  8   Capabilities      ""
9   SQLINT       1  12   "#~@~#"  9   SnapshotID        ""
10  SQLNCHAR     2  510  "#~@~#"  10  CompanyName       SQL_Latin1_General_CP1_CI_AS
11  SQLNCHAR     2  510  "#~@~#"  11  TargetRCCountry   SQL_Latin1_General_CP1_CI_AS
12  SQLINT       1  12   "#~@~#"  12  LocationID        ""
13  SQLBINARY    1  33   "#~@~#"  13  MirrorID          ""

"#~@~#" is the field terminator. We have a lot of text files with all kinds of characters in them, so we think this is a sequence that will never occur in our files.

Thanks for your help!
Are there any MDS descriptor files associated with database files? Are these associated with transaction logs? As per my information, there are .mdf, .ndf and .ldf files. Could anyone please guide me ASAP, as there is an urgent requirement from one of the clients?
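For what it's worth, the files attached to a database can be listed directly (SQL Server 2005 and later; the database name is an assumption), and the catalog only knows data (ROWS) and LOG file types:

SELECT name, physical_name, type_desc
FROM sys.master_files
WHERE database_id = DB_ID('MyDatabase');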
I've created a linked server (and set up the corresponding schema.ini file) in order to perform bulk-inserts from some CSV text files into SQL tables (from my standpoint the text files are just for reading purposes). The linked server works fine (I can select the data in the files without a problem).
Now the question: is it possible to automatically detect when one or more of those files change, in order to start the import process automatically? Something like having a trigger created on the CSV files? Or is there no easy way to do that, so I have to, say, create a job that periodically checks whether the files have changed programmatically (for instance, recording each file's timestamp every time it is imported and comparing the recorded value with the current one, or whatever)?
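Nothing inside SQL Server fires on a change to an external file, so the polling-job idea is the standard fallback. A sketch of the bookkeeping table such a job would need (all names are assumptions):

-- Remembers each file's timestamp as of its last import
CREATE TABLE dbo.CsvImportState (
    FileName      NVARCHAR(260) NOT NULL PRIMARY KEY,
    LastWriteTime DATETIME      NOT NULL
);

The job gathers the current timestamps (for example from a script step), re-imports any file whose timestamp is newer than the stored value, and then updates the table. An alternative outside SQL Server is a small watcher process built on .NET's FileSystemWatcher that kicks off the import as soon as a CSV changes.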