I am using a UDL file to connect to an Oracle database. In the UDL GUI the connection test succeeds. However, in the Connection Manager, when I set the File Name property to the UDL file name and test the connection, I get the message 'The connection failed because of an error in initializing provider. The ConnectionString property has not been initialized.'
In the Foreach Loop, how can I iterate over flat files from oldest to newest based on the files' timestamps? If there are older files in the folder, they should be processed first, and then the newer ones.
I need to move specific files from one server to another on a monthly basis. There are hundreds of files in the source directory and I need to move approximately 40 of them to the destination server. I would like to be able to easily add or remove files from the list as needed. I have seen examples where a separate variable was created for each file name (and one for the path) and a ForEach Loop would go through them. With 40 or more files, I was thinking I could instead connect to an Excel spreadsheet or text file with one record per file name, read a record, make that value the content of a "FileName" variable, move the file, and then move on to the next record. Then if I wanted to add or remove a file name I could just add or remove a record in the spreadsheet/text file and the package would handle it automatically.
In my requirement I need to read all the files from a folder (one by one) and insert the data from those files into a database table. I am facing one issue: if any of the files fails during execution, the other files are not executed and the package stops.
I cannot use the Redirect Rows option because, per the requirement, if a file has a data problem I am supposed to ignore the whole file rather than individual data rows.
Is there any property on the Foreach File enumerator that would help? Kindly suggest.
Why does SHRINKFILE with EMPTYFILE not redistribute data evenly across multiple files in the primary filegroup?
Please run the script attached to see what the end result is.
This is what I set up last night on my test machine.
1) Create database [FGTest], size 200MB.
2) Create a table called TEST on PRIMARY.
3) Insert 40MB of data into TEST.
4) Add another file called temp to PRIMARY, size 200MB.
5) Shrinkfile('FGTest', EMPTYFILE) so that all data is transferred from FGTest into the temp file.
6) Add another 2 files called DATA2 and DATA3, both 200MB.
7) We now have 3 empty files that I want the data distributed evenly across: FGTest, DATA2 & DATA3.
8) Shrinkfile('temp', EMPTYFILE) to move all the data from temp across the 3 files evenly.
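For reference, here is a minimal T-SQL sketch of the steps above (file paths, sizes, and the way the 40MB of data is generated are illustrative, not the exact script attached):

-- 1) Create the database with a 200MB primary file
CREATE DATABASE FGTest ON PRIMARY
    (NAME = 'FGTest', FILENAME = 'C:\Data\FGTest.mdf', SIZE = 200MB);
GO
USE FGTest;
GO
-- 2) and 3) Create a table on PRIMARY and load roughly 40MB of data
CREATE TABLE dbo.TEST (id INT IDENTITY PRIMARY KEY, filler CHAR(4000) NOT NULL DEFAULT 'x');
GO
INSERT INTO dbo.TEST DEFAULT VALUES;
GO 10000   -- ~10,000 rows of ~4KB each
-- 4) Add a second file, temp, to the PRIMARY filegroup
ALTER DATABASE FGTest ADD FILE (NAME = 'temp', FILENAME = 'C:\Data\temp.ndf', SIZE = 200MB);
-- 5) Empty the original file so the data moves into temp
DBCC SHRINKFILE ('FGTest', EMPTYFILE);
-- 6) Add two more 200MB files
ALTER DATABASE FGTest ADD FILE (NAME = 'DATA2', FILENAME = 'C:\Data\DATA2.ndf', SIZE = 200MB);
ALTER DATABASE FGTest ADD FILE (NAME = 'DATA3', FILENAME = 'C:\Data\DATA3.ndf', SIZE = 200MB);
-- 8) Empty temp, expecting the data to spread evenly over FGTest, DATA2 and DATA3
DBCC SHRINKFILE ('temp', EMPTYFILE);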
I would expect at this stage to have the following:
FGTest = 13MB, DATA2 = 13MB, DATA3 = 13MB
(40MB of data spread over 3 files should be roughly 13MB in each file.)
What I actually end up with is this:
FGTest = 20MB, DATA2 = 10MB, DATA3 = 10MB
It looks as though SQL Server is allocating 50% of all data to the original file and then 50% evenly over the remaining files in PRIMARY.
I have another question. If I use a Foreach Loop Container (Foreach File Enumerator), it will select all the files in that folder. What if I want to select just 100 files (assuming there are 500 files in the folder)?
I am attempting to document the various files that are incorporated into a Reporting Services project and need a more official explanation or definition of each particular file and its purpose. I understand what most of the files are and what they do, but would prefer to document them using Microsoft's official explanation so that we can decide which files may require source control.
I have tried searching MSDN for 'file extensions' and 'file types', and typing in the individual .xxx extensions, to see if there is a documented definition for those files, but the results I get either do not give an official definition, don't come close, or are entirely unrelated.
Any links to the official explanation or definition of the files that make up a Reporting Services report project, and their function/importance to the project, would be appreciated.
What data type should I use for my uploadedFiles column in the database? The uploaded files are in document format or .txt. Also, how can I convert those files into PDF files and enable users to download them? Thanks! forums.asp.net = "great help"
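If it helps, one common approach (just a sketch; the table and column names are made up) is to store the raw bytes in a VARBINARY(MAX) column along with the original file name and content type, and then stream the bytes back to the browser for download. Any conversion to PDF would happen in your ASP.NET code, not in the database itself.

-- Hypothetical table for storing uploaded files as raw bytes
CREATE TABLE dbo.UploadedFiles (
    FileId      INT IDENTITY(1,1) PRIMARY KEY,
    FileName    NVARCHAR(260)  NOT NULL,  -- original name, e.g. 'notes.txt' or 'report.doc'
    ContentType NVARCHAR(100)  NOT NULL,  -- e.g. 'text/plain' or 'application/msword'
    FileContent VARBINARY(MAX) NOT NULL,  -- the file bytes themselves
    UploadedOn  DATETIME       NOT NULL DEFAULT GETDATE()
);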
I have one Data Flow task where I'm getting data from an OLE DB source, doing some scripting using a Script Component, and then generating a file. Now I need to take the same data, apply some different logic, and generate another file. Can I use this same task to do the secondary work? If yes, how would I put it in place? I need the same data, but with a separate script and a separate output file.
In my environment I have one database with 6 NDF files, 5 LDF files, and one MDF file. What I am actually looking to do is merge the 6 NDF files into one NDF file and the 5 LDF files into one LDF file. Is it possible to do this? I tried using the MOVE ... TO option while restoring a backup, but I get the error message below.
ERROR: Msg 3176, Level 16, State 1, Line 4 File 'J:\NDF\abc.ndf' is claimed by 'Finance_data2'(4) and 'Finance_data1'(3). The WITH MOVE clause can be used to relocate one or more files.
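Rather than remapping files during RESTORE, the usual way to merge data files is to empty each extra NDF into the remaining files of the same filegroup and then drop it. A hedged sketch (the database and logical file names below are guesses based on the error message):

USE Finance;
GO
-- Move everything out of the extra data file into the other files of its filegroup
DBCC SHRINKFILE ('Finance_data2', EMPTYFILE);
-- Once the file is empty it can be removed
ALTER DATABASE Finance REMOVE FILE Finance_data2;
-- Repeat for the remaining extra NDFs (Finance_data3, Finance_data4, ...)

The extra LDF files can often be removed the same way with ALTER DATABASE ... REMOVE FILE once they no longer contain active log; a log backup and a shrink of the log file may be needed first.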
Hello, I need to generate a report which should display 4 reports: two tables and some charts. I have all of these reports (I mean the .RDL files) individually and can render them separately. But now the need is to combine these reports into one RDL file. Is this possible? If yes, how?
I also tried to create a stored procedure which would call all 4 SPs in turn and provide 4 result sets. I thought of having an RDL call only this SP, which would give 4 result sets. But unfortunately, it only returned the first SP's result set. So I have to combine the 4 RDL files into one to show on the Reporting Console. Can anyone please help me with this? Help would be greatly appreciated.
Thanks a lot. Let me know if the question is not clear.
I have one really long .sql file I'm working on. It's actually a data-conversion type script, and it has become really cumbersome to work on at its current length. I would like to split the various logical parts of the script into their own .sql files. How can I have one file (.bat, .sql or whatever) call each .sql file in the order I specify? Hoping this is easy. Thanks
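One option, assuming the scripts can be run through sqlcmd (or SSMS in SQLCMD mode), is a master .sql file that pulls the pieces in, in order, with the :r directive. The file names here are placeholders:

-- master.sql : run with   sqlcmd -S MyServer -d MyDatabase -E -i master.sql
-- (:r paths are resolved relative to the directory sqlcmd is started from)
:r 01_create_staging.sql
:r 02_convert_customers.sql
:r 03_convert_orders.sql

A plain .bat file that calls sqlcmd -i once per file, in the order you list them, accomplishes the same thing if you prefer to keep the pieces fully independent.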
A small database, ABC, has only 5MB of data, but its log is growing by around 20MB every day. I want to shrink its size daily, as I do for other databases.
1. backup log ABC with truncate_only
2. DBCC SHRINKDATABASE (ABC, 10)
I got the following error: <<Cannot shrink log file 2 (ABC_Log) because all logical log files are in use.>>
I also tried WITH NO_LOG, but I get the same error from DBCC SHRINKDATABASE. Any ideas?
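In case it helps, the usual sequence on SQL 2000/2005 is to truncate the log and then shrink the log file itself with DBCC SHRINKFILE rather than shrinking the whole database. A sketch using the logical name reported in the error:

USE ABC;
GO
-- Throw away the inactive log records (SQL 2000/2005 only; TRUNCATE_ONLY was removed in later versions)
BACKUP LOG ABC WITH TRUNCATE_ONLY;
-- Shrink just the log file, by its logical name, to a 10MB target
DBCC SHRINKFILE ('ABC_Log', 10);

If it still complains that all logical log files are in use, something (an open transaction, replication, a backup in progress) is keeping the tail of the log active and has to be dealt with first.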
I have an export file that updates each night and dumps into prices.txt
I have to send them to an FTP site that processes them automatically if they're formatted correctly. They all contain the same header information (saved in header.txt) and footer information (saved in footer.txt). I then combine them: header.txt + prices.txt + footer.txt = glpcwholesale.prn.
The script below is what I have so far but it isn't working and errors out on line 23.
' Create the File System Objects
Dim objFSO1, objFSO2, objFSO3, objFSO4
Set objFSO1 = CreateObject("Scripting.FileSystemObject")
Set objFSO2 = CreateObject("Scripting.FileSystemObject")
Set objFSO3 = CreateObject("Scripting.FileSystemObject")
Set objFSO4 = CreateObject("Scripting.FileSystemObject")

' Create string references to file locations
Dim strDirectory, strFileHeader, strFilePrices, strFileTrailer
Dim strFileGLPC
I would like to add the actual file size, in MB or GB, of each file to the results.
select sd.name, mf.name as logical_name, mf.physical_name,
       case when dm.mirroring_state is null then 'No' else 'Yes' end as Mirrored
from sys.sysdatabases sd
join sys.master_files mf on sd.dbid = mf.database_id
join sys.database_mirroring dm on sd.dbid = dm.database_id
The sp_spaceused procedure does a nice job on its own of giving me what I want (for only one database, though), plus a bonus allocated-space column. How can I combine this procedure with my other query, or is there a better way to add this information?
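Since sys.master_files already stores each file's size in 8KB pages, one way (a sketch built on your query, no sp_spaceused needed) is to compute the megabytes directly:

select sd.name,
       mf.name as logical_name,
       mf.physical_name,
       cast(mf.size * 8 / 1024.0 as decimal(18, 2)) as size_mb,  -- size is stored in 8KB pages
       case when dm.mirroring_state is null then 'No' else 'Yes' end as Mirrored
from sys.sysdatabases sd
join sys.master_files mf on sd.dbid = mf.database_id
join sys.database_mirroring dm on sd.dbid = dm.database_id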
I receive data via FTP to our webserver nightly as .txt files and .dic (if anybody is familiar with idx realtor websites, that's what this data is). I've learned recently that I'm not going to be able to use Access to import or link to this data, so I'm trying to get my feet wet with SQL. I have been practicing importing text files into SQL db, but I notice that the dts imports everything as varchar 8000, and that you can edit that. I've got a .dic file that accompanies every .txt file that contains definitions of each fieldname, fieldtype & length & I was wondering how to import that data as well, without having to manually retype everything. I would be happy to email these text files to anybody willing to take a look.
We have two DBs, one live and one test. When I right-click the live one in SQL Enterprise Manager and select Properties -> Data Files, the File Name is LIVE.MDF and the Location is F:\Data\LIVE.MDF. When I right-click the test one in SQL Enterprise Manager and select Properties -> Data Files, the File Name is LIVE.MDF and the Location is F:\Data\TEST.MDF. The same thing applies to the transaction log files too. My concern is that the File Name is the same in both cases even though the location is different. What are the consequences of this? Thanks for your help, GVV
I need help with a better solution for copying my .bak and .trn files to a file server. This is the background: I used to dump the backup files directly to the file server, but this was not "best practice" according to some books and forums. So I went back to dumping the files to a local disk first and then letting robocopy copy them to the file server, and I ended up with this problem.
This is the error from the backup log: Failed-1073548784) Executing the query "BACKUP DATABASE [BPROJCT] TO DISK = N'F:\backup\BPROJCT\BPROJCT_backup_200711281700.bak' WITH NOFORMAT, NOINIT, NAME = N'BPROJCT_backup_20071128170005', SKIP, REWIND, NOUNLOAD, STATS = 10 " failed with the following error: "Write on "F:\backup\BPROJCT\BPROJCT_backup_200711281700.bak" failed: 112(There is not enough space on the disk.)
This is the error from the robocopy log: New File 0 BPROJCT_backup_200711201700.bak 2007/11/20 17:02:11 ERROR 32 (0x00000020) Copying File F:\backup\BPROJCT\BPROJCT_backup_200711201700.bak The process cannot access the file because it is being used by another process. Waiting 30 seconds... Retrying...
Robocopy tries to copy the file before it's completely written to disk. I have made a batch file that runs robocopy to monitor the folder and copy all files every 5 minutes; the batch file is started with a Scheduled Task. Have I missed some switch? Can I start robocopy from SQL Server Agent via xp_cmdshell?
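On the last question: yes, a SQL Server Agent job step can shell out to robocopy through xp_cmdshell, provided xp_cmdshell has been enabled. A rough sketch (the paths and switch values are assumptions to tune, not your exact command):

-- xp_cmdshell must be enabled first (sp_configure 'show advanced options' / 'xp_cmdshell')
DECLARE @cmd VARCHAR(1000);
SET @cmd = 'robocopy F:\backup \\fileserver\sqlbackup *.bak *.trn /MOV /R:10 /W:60 /NP';
EXEC master.dbo.xp_cmdshell @cmd;

An even simpler fix for the sharing violation is to run the copy as a second job step after the backup step finishes, so robocopy never sees a half-written file; the /R and /W retry switches then only have to cover the odd straggler.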
Hi all, I'm facing a small problem with an XML file source. Let me explain the scenario!
We have 4 source XML files with the same format; each file has around 25k records.
XML Source -> Derived Column -> Data Conversion -> Conditional Split -> OLE DB Destination
It executes fine. But once we change the XML file source (to the second XML file, with the same XSD), all the other components show errors. The error is:
Error 21 Validation error. Staging: DTS.Pipeline: input column "LateRsnCd" (17917) has lineage ID 19380 that was not previously used in the Data Flow task. OnTimeOrderEntry.dtsx 0 0
It seems the lineage IDs change when we point the XML source at a new file. The XSD is of course the same, but the mappings are not being resolved by name. To work around this, we open the failing component, select all, map using column names, and apply; it then maps the columns as they are in the input. The Conditional Split still shows the error, but if we open it and click OK the error goes away; I think some metadata change occurs when the component is opened and OK is clicked.
What could be the problem, and how do we fix it? One more question: how can we include the XML source in the configuration file? Since it doesn't have a connection manager, I'm struggling to select the XML file dynamically. I don't want to hard-code the source file path (it resides on the server) in the properties of the XML Source component. Can I have some suggestions please?
I don't know if anyone else has faced this issue. We are having a strange problem. Our process worked well when it was implemented on a 32-bit processor; it ran perfectly for 6 months without a problem. But when we moved the packages to a 64-bit machine, this issue, along with some other issues, started to show up.
The issue is we are missing files in the source folder.
Our process is designed such that a source process brings in a file and updates a status for the file in an audit table. The ETL process picks up the file, assigns the status 'running' once the source process is complete, loads the data into the target DB, and updates the ETL status to complete. The current problem is that the ETL is losing files after it assigns the status 'running'. When we looked into the DB to check whether the data was loaded, we could not find any data related to these files.
We have mapping-level parameters for the source path and target path.
We are using a Foreach Loop task to process the files (which are simple flat files) in the source path. The file name is stored in the mapping-level parameter. Once a file is processed, we move it into the target path.
Our source and target file paths are on the same drive: there is a source folder, and inside it a processed folder and a failed folder. Files are picked up from the source folder and moved into the processed folder after processing. The missing files are not even moved to the failed folder.
There is a lot of other processing going on on this box, and the trend we have observed is that when more processes are running at peak hours, the count of missing files is higher.
Right now we are re-fetching those files as a workaround, but does anyone have a suggestion as to why this is happening, or any better implementation suggestions?
Hello, Completely new to this stuff, so sorry if I'm asking something that is painfully simple to resolve!!! Also, not sure whether there are standard formats for code, errors etc.
OK. When I create a Login.aspx file in VWD Express on my local hard disk drive, the file works and runs correctly, allowing me to log in etc. However, when I do exactly the same thing using a remote network drive connection to an intranet server, when I enter the username and password and click the Log In button, the following error message appears:
The file "[Network Drive]AuthorsApp_Dataaspnetdb.mdf" is on a network path that is not supported for database files.An attempt to attach an auto-named database for file S:AuthorsApp_Dataaspnetdb.mdf failed. A database with the same name exists, or specified file cannot be opened, or it is located on UNC share. Then I get the following information - which I'm sure is supposed to help, but doesn't!!!
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.Data.SqlClient.SqlException: The file "S:\Authors\App_Data\aspnetdb.mdf" is on a network path that is not supported for database files. An attempt to attach an auto-named database for file S:\Authors\App_Data\aspnetdb.mdf failed. A database with the same name exists, or specified file cannot be opened, or it is located on UNC share.
Source Error:
An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
Stack Trace:
[SqlException (0x80131904): The file "S:\Authors\App_Data\aspnetdb.mdf" is on a network path that is not supported for database files. An attempt to attach an auto-named database for file S:\Authors\App_Data\aspnetdb.mdf failed. A database with the same name exists, or specified file cannot be opened, or it is located on UNC share.] System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) +115 System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) +346 System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) +3244 System.Data.SqlClient.SqlInternalConnectionTds.CompleteLogin(Boolean enlistOK) +56 System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance) +1083 System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance) +272 System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection) +687 System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options) +82 System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject) +558 System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject) +126 System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject) +651 System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) +160 System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) +122 System.Data.SqlClient.SqlConnection.Open() +229 System.Web.DataAccess.SqlConnectionHolder.Open(HttpContext context, Boolean revertImpersonate) +114 System.Web.DataAccess.SqlConnectionHelper.GetConnection(String connectionString, Boolean revertImpersonation) +225 System.Web.Security.SqlMembershipProvider.GetPasswordWithFormat(String username, Boolean updateLastLoginActivityDate, Int32& status, String& password, Int32& passwordFormat, String& passwordSalt, Int32& failedPasswordAttemptCount, Int32& failedPasswordAnswerAttemptCount, Boolean& isApproved, DateTime& lastLoginDate, DateTime& lastActivityDate) +1105 System.Web.Security.SqlMembershipProvider.CheckPassword(String username, String password, Boolean updateLastLoginActivityDate, Boolean failIfNotApproved, String& salt, Int32& passwordFormat) +157 System.Web.Security.SqlMembershipProvider.CheckPassword(String username, String password, Boolean updateLastLoginActivityDate, Boolean failIfNotApproved) +68 System.Web.Security.SqlMembershipProvider.ValidateUser(String username, String password) +100 System.Web.UI.WebControls.Login.AuthenticateUsingMembershipProvider(AuthenticateEventArgs e) +100 System.Web.UI.WebControls.Login.OnAuthenticate(AuthenticateEventArgs e) +113 System.Web.UI.WebControls.Login.AttemptLogin() +178 System.Web.UI.WebControls.Login.OnBubbleEvent(Object source, EventArgs e) +134 System.Web.UI.Control.RaiseBubbleEvent(Object source, EventArgs args) +56 System.Web.UI.WebControls.Button.OnCommand(CommandEventArgs e) 
+107 System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +178 System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument) +31 System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +32 System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData) +72 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +3838
Version Information: Microsoft .NET Framework Version:2.0.50727.42; ASP.NET Version:2.0.50727.42
I have about 1200 SQL files in one of my folders. Almost all of these files do data inserts and updates, so they should be run only once. As and when required, I have already run around 150 of them manually. Whenever I run one of these scripts, I log the file name into a log table in my SQL Server, along with the execution time. Since running the 1000+ remaining files would take a lot of time, I want to automate running them through a batch file, but I also want to filter out the files that have already been run. My log looks as follows.
select * from sqlfileexecutionlog

FileName                   RunTime                  Result
-------------------------- ------------------------ -------
DeleteDuplicateOrders.sql  03/12/2014 14:23:45:091  Success
UpdateInventory.sql        04/06/2014 08:44:17:176  Success
Now I want to create a batch file to run the remaining files from my directory against my SQL Server. I also want to wrap each of these SQL file executions in a transaction and log success/failure, along with the runtime and file name, into the sqlfileexecutionlog table. As I add new SQL files to this directory, I should be able to run the same batch file and have it execute only the SQL files that have not yet been run.
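The SQL side of that can stay quite small: the batch only needs a query to find out which files to skip and an insert to record each outcome, with each file itself run under sqlcmd -b so a failure comes back as a non-zero errorlevel. A sketch using your table and column names (the surrounding batch plumbing is assumed):

-- Files already run successfully, so the batch can skip them, e.g. captured with
--   sqlcmd -S MyServer -d MyDb -h -1 -W -Q "..." > alreadyrun.txt
SELECT FileName
FROM dbo.sqlfileexecutionlog
WHERE Result = 'Success';

-- After each file is executed, the batch logs the outcome (values are placeholders)
INSERT INTO dbo.sqlfileexecutionlog (FileName, RunTime, Result)
VALUES ('DeleteDuplicateOrders.sql', GETDATE(), 'Success');

For the transactional part, putting SET XACT_ABORT ON plus BEGIN TRAN ... COMMIT around each script's contents (or running each file through a small wrapper that does this) keeps a failed file from leaving partial changes behind.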
When a database is configured for mirroring and you want to set up partitioning on that database, how do you do it? Is the process the same, or is there any variation when adding filegroups and files? Will the partitioning be reflected in the mirror database as well?
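The T-SQL itself is the same as for a non-mirrored database; the main catch is that ALTER DATABASE ... ADD FILE is replayed on the mirror, so the new file's drive and folder path must also exist on the mirror server or the mirroring session will be suspended. A minimal sketch with illustrative names, boundary values, and paths:

-- Add a filegroup and a file for the new partition (this path must exist on the mirror too)
ALTER DATABASE Sales ADD FILEGROUP FG2008;
ALTER DATABASE Sales ADD FILE
    (NAME = 'Sales2008', FILENAME = 'F:\Data\Sales2008.ndf', SIZE = 500MB)
    TO FILEGROUP FG2008;

-- Partition function, scheme, and a table placed on the scheme
CREATE PARTITION FUNCTION pfOrderDate (DATETIME)
    AS RANGE RIGHT FOR VALUES ('20080101');
CREATE PARTITION SCHEME psOrderDate
    AS PARTITION pfOrderDate TO ([PRIMARY], FG2008);
CREATE TABLE dbo.Orders
    (OrderId INT NOT NULL, OrderDate DATETIME NOT NULL)
    ON psOrderDate (OrderDate);

All of these are logged operations, so the partitioning is reflected on the mirror automatically.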
Hi. I need to give my customer a .sql file that they can run in Query Analyzer. All the stuff they need to run is in a set of existing files. I'd like to just tell them to load one file that calls the others (this is Oracle syntax):
@file1.sql
@file2.sql
@file3.sql
Is there some way of calling these files (which are in the same directory) from a master SQL file?
Thanks
Jeff Kish
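As far as I know, Query Analyzer has no equivalent of Oracle's @file include. If running the master file through sqlcmd (or SSMS in SQLCMD mode) is acceptable, the :r directive does exactly this; otherwise a small batch file calling osql/sqlcmd once per file, in order, is the usual workaround. A sketch of the master-file version:

-- master.sql : run with   sqlcmd -S MyServer -d MyDatabase -E -i master.sql
-- (the included paths are resolved relative to the directory sqlcmd is run from)
:r file1.sql
:r file2.sql
:r file3.sql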
The MDF and LDF files are placed on an SSD drive and the tempdb files are placed on an HDD drive. Snapshot isolation is enabled on the database. When a script is executed that inserts a NULL value into a table with a NOT NULL column, the transaction fails; a log undo then happens, which also fails and takes the database into suspect mode.
But when the MDF and LDF files are placed on the HDD drive, none of this happens; the transaction just fails.
I want to poll an ftp site to see when files arrive, then I would like to download them, and move them into a different directory on the ftp site. It seems like I would have to do a lot to work around the limitations of the FTP Task. Is it capable of this sort of work? If not is there a 3rd party task that is better suited for ftp operations within SSIS?
I have a ForEach file enumerator that executes a data flow for every source file in a specified directory. If there are no files in the directory, the package executes with a result code of 0.
I need the package to fail if there aren't any files found in the specified load directory. Is there a simple way to do this with the SSIS component, or will I have to script the entire file enumeration process inside script tasks to handle this requirement?
Thanks in advance for any assistance with this question. Please post replies to this forum.