Setting Up File Conn Mgr When File Is Sometimes Empty
Jun 4, 2007
Hi,
Here's an interesting problem. I have to set up connection managers for some files. The thing is, sometimes the files have data in them, sometimes not.
The files that don't have data in them just have some header info, so the file isn't technically empty, but I don't want to load these files when they have no data rows.
What would be an approach to solving this problem? I can't eliminate the file based on file size, since it's not 0, and there is no set file size that would be a reliable threshold, since they're small files to begin with.
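One hedged approach, sketched in T-SQL (the path is hypothetical, ADMINISTER BULK OPERATIONS permission is assumed, and it presumes every line, including the header, ends in a line feed): read the whole file server-side and count line breaks before deciding whether to run the load.

DECLARE @content VARCHAR(MAX);
-- 'C:\feeds\sample.txt' is a placeholder path
SELECT @content = BulkColumn
FROM OPENROWSET(BULK 'C:\feeds\sample.txt', SINGLE_CLOB) AS f;
-- more than one line feed means the header has at least one data row under it
IF DATALENGTH(@content) - DATALENGTH(REPLACE(@content, CHAR(10), '')) > 1
    PRINT 'Data rows present - load the file';
ELSE
    PRINT 'Header only - skip this file';

A precedence constraint on a task running a check like this could then gate the data flow.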
I have production SQL Server 7.0 SP3 on Windows NT. I had an 8GB data file of which 5GB were used and 3GB were unused. I wanted to take back the unused 3GB. So I did the following with the EM GUI:
1. I tried to "truncate free space from the end of the file". It didn't truncate the file. I believe there was no empty space at the end of the file.
2. Next I chose the option to "shrink file to 5GB". And to my horror, instead of taking just 5GB, the data file took in the empty space also and the size of the used data file went to 8GB.
Any idea what's going on?
TIA,
SP
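For reference, the two EM options correspond to DBCC SHRINKFILE calls; a hedged sketch with an assumed logical file name:

-- 'mydb_data' stands in for the real logical data file name
DBCC SHRINKFILE (mydb_data, TRUNCATEONLY)  -- releases only free space at the end of the file
DBCC SHRINKFILE (mydb_data, 5120)          -- target size in MB; relocates used pages toward the front first

The second form has to move pages around before it can cut the file, which is why it behaves so differently from a plain truncate.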
Here's a really annoying problem. Let's say you have a text file with 2 million rows. Delimiters all look good and rows preview well, but the file has a blank row at, say, line 1234567 - way deep in the file. When SSIS encounters the blank row, an error is raised and processing on the file STOPS! I verified this by checking the SSIS log and have even developed an error routine to notify me via email when the error occurs (really cool if I do say so myself). The main problem still remains - how to resume processing from the point of failure in the file? Any help is appreciated. Thanks.
Yesterday, I changed my SQL connection string and added:
AttachDbFilename=|DataDirectory|\dbase.mdf
And I moved the mdf file to the project's debug directory using cut-paste. I deleted dbase from the Databases section in SQL Server Management Studio, and I attached dbase according to the new path (the project debug path).
After this, the problem started to occur. If I open a table in SSMS, I can't start debugging my application; it throws the exception "Cannot open user default database. Login failed."
And when I start my application, I can't use SSMS to see my tables or run a query. It says "...cannot be opened due to inaccessible file...".
It seems like only one application can use my db at a time. But before I moved my database, I could open tables in SSMS and run my application at the same time.
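One hedged workaround is to attach the file once on the server and have both SSMS and the application connect to that copy by name, instead of letting the application attach the .mdf on demand (the path below is illustrative):

CREATE DATABASE dbase
ON (FILENAME = 'C:\MyProject\bin\Debug\dbase.mdf')  -- assumed path
FOR ATTACH;
-- then use Initial Catalog=dbase (no AttachDbFilename) in the app's
-- connection string, so both tools share the single server copy

An attach-on-demand connection holds an exclusive lock on the file, which would explain the one-application-at-a-time behavior.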
Why shrinkfile with emptyfile does not redistribute data evenly across multiple files in the primary filegroup:
Please run the script attached to see what the end result is.
This is what I set up last night on my test machine.
1) Create database [FGTest], size 200MB
2) Create a table called TEST on PRIMARY
3) Insert 40MB of data into TEST
4) Create another file called temp in PRIMARY, size 200MB
5) Shrinkfile('FGTest', emptyfile) so that all data is transferred from FGTest into the temp file
6) Add another 2 files called DATA2 and DATA3, both 200MB
7) We now have 3 empty files that I want data distributed evenly on: FGTest, DATA2 & DATA3
8) Shrinkfile('temp', emptyfile) to move all the data from temp over the 3 files evenly
I would expect at this stage to have the following:
FGTest = 13MB, DATA2 = 13MB, DATA3 = 13MB
(40MB of data over 3 files should be about 13 MBish in each file)
What I actually end up with is this:
FGTest = 20MB, DATA2 = 10MB, DATA3 = 10MB
It looks as though SQL Server is allocating 50% of all data to the original file and then 50% evenly over the remaining files in PRIMARY.
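A hedged way to watch the per-file distribution after each shrink step (SQL 2005 and later; run in the test database):

SELECT name,
       size / 128 AS size_mb,                          -- size is reported in 8 KB pages
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';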
Now, here's the problem. In the preview screen, it shows only up to column 518 correctly. In column 519, it shows the remaining hundreds of columns all glommed together as one big string, like: "data", 123, 10/17/2007, "more data", etc
Anyway, what should I do about this?
When I attempt to run the data flow I get this error: [Flat File Source [1]] Error: The column delimiter for column "Column 519" was not found.
However, the good news is that I only need the first 9 columns of the file. Some preprocessing in order, maybe?
I have a simple CSV/XLS dump that I need to put up on the server. I need to create an output file daily, with a variable name to identify each one easily. My problem is naming it in the "connection string"; I have tried variables and expressions too. I am new to this, so please help. Thanks
How can I empty an existing Excel file before using DTS to export new data into it? Or is there any way to delete the Excel file from a DTS task?
I am using this bcp out construct and it works fine, except that if the query does not return values it bcp's out a file anyhow. This is not wanted and I am looking for a workaround.
The subquery first checks in DBCleanerHist whether a file has already been extracted to disk, and if so it should not create an empty file and overwrite the existing one.
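One hedged workaround is to wrap the bcp call in an existence check so the shell command only runs when the query would return rows; the table, column, and path names below are placeholders:

IF EXISTS (SELECT 1
           FROM dbo.SourceTable s                      -- assumed source table
           WHERE NOT EXISTS (SELECT 1
                             FROM dbo.DBCleanerHist h
                             WHERE h.FileName = 'extract.txt'))  -- assumed column
BEGIN
    EXEC master..xp_cmdshell
        'bcp "SELECT * FROM MyDb.dbo.SourceTable" queryout "C:\out\extract.txt" -c -T';
END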
My team is working on a problem. Please help us solve it.
I am looping through a set of files, and on each loop I process the file and move it to another folder. I am using a File System task and variables holding the destination path and name to do so. It works fine.
Requirement:
However, now I want that, after processing the file, instead of moving it I create an empty text file at the destination named after the processed file. I want to do this with minimum effort. Can anyone suggest a way?
I was running a DBCC SHRINKFILE with EMPTYFILE to move data to a different drive. But somehow autogrow got unchecked on the new file while it was running. So the shrinkfile died and the file wasn't anywhere near empty. When I try to run it again on the file, it comes back right away and says that it completed, but it hasn't moved any data. It's like SQL thinks that the emptyfile operation is complete. But it isn't near done, with about 50 GB left to go. I made sure that the new file will autogrow, that it actually can grow, and that I can write to it. I created an index on the filegroup and it went to the new file, not the old. Any help would be appreciated.
Hi, my SQL database log file has filled up recently, because there are 55 million records in the main 3 tables. So how can I empty the log file? I don't want to attach a new log file or save any previous log info. Thanks for helping me and my company.
Abdul Salam
Sr. DBA + Programmer
Xebec Groups of Business
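On SQL Server 2000/2005 the usual sequence is something like the hedged sketch below (logical log file name is assumed; note that truncating the log breaks the log backup chain):

BACKUP LOG mydb WITH TRUNCATE_ONLY;   -- discard the inactive log records
DBCC SHRINKFILE (mydb_log, 100)       -- then shrink the file, e.g. to ~100 MB
-- if point-in-time restores aren't needed, simple recovery avoids the problem entirely:
-- ALTER DATABASE mydb SET RECOVERY SIMPLE;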
Hello,
I'm not getting any response to this on the SQLDTS newsgroup, so I thought that I would try here: I just ran into this problem and I can't find any other mention of it through Google. I have a text file that is comma-delimited. It also uses double quotes as text qualifiers. A new column has been added to the file, but it currently has no values. I would like to finish my development so that when it does finally get some values, they will be imported as well. The problem is, the last column does not show up in DTS.
I can reproduce this problem easily enough... create a text file with the following two lines in it:
1,"test",
2,"test2",
Now, create a new DTS package and add a text file connection. Point it to the new file and go through the properties for the file. You will notice that on the second screen, where it displays the preview of the data, there are only two columns shown.
This does not happen if there is no text qualifier or if at least one row has the final column value filled. Is there any way around this problem?
Thanks!
-Tom.
I am using SQL Server 2012. In my project, whenever I run the package individually, it runs successfully. But while executing the package through an SSIS task, I get the following warning and am not able to transfer the data from the flat file to the DB.
Foreach Loop Container:Warning: The For Each File enumerator is empty. The For Each File enumerator did not find any files that matched the file pattern, or the specified directory was empty.
In order to troubleshoot some deadlocking that I am seeing on SQL Server, I am trying to capture the Deadlock XML by enabling the Events Extraction Settings option 'Save Deadlock XML events separately' and specifying a Deadlock XML results file.
Meanwhile, I am also tracing the Deadlock graph, Lock:Deadlock, and Lock:Deadlock Chain events. Yet the xdl file remains empty even though I am getting hits on the events themselves in the SQL Profiler trace.
Also, I have the following trace flag settings enabled.
TraceFlag  Status  Global  Session
1204       1       1       0
1222       1       1       0
Why does the xdl file remain empty even though (I think) it should contain some XML for the deadlocks that are actually happening?
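As a sanity check, it may be worth confirming the flags really are active globally:

DBCC TRACESTATUS (1204, 1222, -1);  -- the trailing -1 reports global status
-- and, if either one is off, enable both server-wide:
DBCC TRACEON (1204, 1222, -1);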
I have a particular issue that has been causing me problems for a while. I have an SSIS package that imports an Excel file into my database and then performs various data manipulation that I won't go into. The problem I am having is at the import end. The Excel source file I am working on is provided to me by my client. It is a fixed format and doesn't change; it contains a header row and there are 32 headings. The trouble I am having is that quite often the last column is empty, i.e. it contains no data. The header is still there, but there's no data underneath. When I try to import this file using my SSIS package it fails, complaining that the metadata for this final column must be removed from the External Columns list (VS_NEEDSNEWMETADATA). When I try to preview the file in the properties of the Excel Data Source, the last column does not exist. It's as if, because there is no data in that final column, the column is being treated as unnecessary and not part of the data set, even though it has a header.
Now I've done a bit of research and found cases somewhat like mine. I know that the first 8 records of the Excel file are sampled to determine the data format. One suggestion was to use the IMEX=1 extension in the connection string, which didn't help. I also discovered that when using flat files, odd numbers of columns in your comma-separated list can cause problems. But neither of these issues seems to match the one I'm facing.
Has ANYONE had a similar problem, and can anyone offer any kind of assistance regarding what I need to do to import an Excel file that may or may not have data in the final column?
I was running an operation to shrink file/emptyfile a data file, and then remove it.
It blocked and caused a huge mess, I suspect on the removal part. But I want to confirm that the emptyfile completed (and that the engine isn't going to try to put more data in there for when I schedule the removal part again a week or more from now).
How does the engine know not to put any more data in there, and how long does that situation last?
In SSIS flat file import using fastload, I'm trying to import data into SQL 2005 previously created tables.
The table may contain column that are NULLable BUT there is NO DEFAULT for them.
If the incoming data from the flat file contains nothing between the delimiters, how can I have a NULL value inserted in the column instead of a blank/empty string?
I didn't find an easy flag, unless I'm doing something wrong. I know of at least two ways to do it the hard way:
1) set DEFAULT(NULL) for EVERY column that needs this behaviour
2) set up a Derived Column in the package to return NULL if the value is missing.
Both of the above are time-consuming since I'm dealing with many tables. Is there a quick option to default the value to NULL WHEN there is NO data ELSE insert the data itself? So, the same behavior that I have right now, except that I want NULL in place of the empty string/blank in the varchar(x) columns.
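As an alternative to per-column Derived Columns, one hedged option is a single post-load pass over a staging table, using NULLIF to turn empty strings into NULLs (table and column names below are placeholders):

UPDATE dbo.StagingTable
SET Col1 = NULLIF(Col1, ''),
    Col2 = NULLIF(Col2, ''),
    Col3 = NULLIF(Col3, '');

It is still one statement per table, but it can be generated from the catalog views rather than hand-built in each package.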
I am attempting to get this script provided by Microsoft to work, to no avail. Specifically, when I set the variable FFNonDataRows to 1 (to accommodate the header row), the variable is not being set to False as expected. I don't know enough about C# to understand why this script isn't working. How can I get this script to work?
Does anyone know exactly how to create a trace that runs continuously on a server and writes the data to a table? I know how to create a trace file with Profiler, but I want something set up so that I don't have to have Profiler running on the server all the time, as well as something that will restart itself if the server is rebooted. I have been looking at the xp_trace_* procedures. Is this the way to do it?
I have to trap login information in a table and have a scheduled job that runs once a month, looks for specific data in the table, and sends out e-mails based on certain values.
I have written the procedure that does this; I just need to know how to set up the trace so it runs in the background continuously.
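On SQL Server 2000 and later, the sp_trace_* procedures replace the old xp_trace_* calls. A hedged sketch that captures logins to a rollover file, which the monthly job can later read into a table (path, size, and event choices are illustrative); wrapping this in a stored procedure marked as a startup procedure with sp_procoption would restart it after a reboot:

DECLARE @TraceID int, @maxsize bigint, @on bit;
SET @maxsize = 50;  -- MB per rollover file
SET @on = 1;
EXEC sp_trace_create @TraceID OUTPUT, 2 /* TRACE_FILE_ROLLOVER */,
     N'C:\traces\logins', @maxsize;   -- .trc extension is added automatically
-- event 14 = Audit Login; column 11 = LoginName, column 14 = StartTime
EXEC sp_trace_setevent @TraceID, 14, 11, @on;
EXEC sp_trace_setevent @TraceID, 14, 14, @on;
EXEC sp_trace_setstatus @TraceID, 1;  -- start the trace
-- later, from the scheduled job:
-- SELECT * INTO dbo.LoginAudit FROM ::fn_trace_gettable('C:\traces\logins.trc', DEFAULT);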
I am using a Foreach Loop container to go through all the files downloaded from the FTP site, and I am assigning the file name of each file to a variable at the Foreach Loop level called filename. In the Data Flow task inside the Foreach Loop container, I have a source Script Component that uses a Flat File connection. The connection string of the Flat File connection is set to the filename variable declared at the Foreach Loop level. However, the Script Component throws an error: System.ArgumentException: Empty pathname is not legal.
Please let me know how to correct this. The ConnectionString property of the Flat File connection is set to the complete filename, including the path. Does a Script Component need to have a flat file name specified in the Flat File connection that it is using? I need a script source component because the flat file I am reading is not in any of the standard formats.
The Flat File connection manager's connection string property is blanked out the moment I specify an expression for the connection string. Is this a defect or expected behavior?
We have created an SSIS package to load text files into a table. The source system shares 10 text files, and recently they stopped generating data for one of them (it comes in empty); after a few months they will start generating data for that file again.
The issue is that the Data Flow task fails while loading the empty text file into the table. How can we handle this empty-file load in the SSIS package?
Hi everybody, is there a way to set the SelectParameter for a SQLDataSource in an ASPX file using System.Configuration.ConfigurationManager.AppSettings["SiteID"]? Thanks a lot in advance.
I'm running EM on my local box and SQL Server on a remote internet-accessed box. How do I specify a file path for a DTS package to access files on the remote box? For example, to run a local DTS package the file path is c:\filepath\file.txt. How would I change the file path/name to allow the DTS package to access files on the same remote machine?
-Dave
I was looking to change the file growth setting on the databases in our AlwaysOn environment. We have a single availability group with one primary and one secondary replica. I learned that when changing the file growth setting on the primary database's data file, the change flows through to the database on the secondary replica. However, after doing the same with the log files, the file growth setting changed on the primary but the change did NOT propagate to the secondary.
Is the solution to apply the change directly to the secondary? Here's the T-SQL code I used:
ALTER DATABASE myDB
MODIFY FILE ( NAME = N'myDB_log', FILEGROWTH = 512MB );
GO

SQL Server 2012 (11.0.5532)
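Secondary AG databases are read-only, so the ALTER can't simply be re-run there; a hedged first step is to check what the secondary actually recorded (run this on the secondary replica):

SELECT name, is_percent_growth,
       growth / 128 AS growth_mb   -- growth is in 8 KB pages when is_percent_growth = 0
FROM sys.master_files
WHERE database_id = DB_ID('myDB');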
I am using a BULK INSERT to insert the data from an ASCII file into a SQL table. The table has a ProductInstanceId column that exists in the table but does not exist in the ASCII DICast data. I am setting the ProductInstanceId to a Guid that will be used for metrics. I would like to create the Guid in C++ and then set it somehow during the BULK INSERT into DICastRaw1hr and DICastRaw6hr. I am calling the BULK INSERT from C++/ADO. I do not see how you can set static data in the BULK INSERT for a column that exists in the table but not in the source data ... it seems there should be a way to do this with the format file?
The other way to do this is with a TRIGGER. I have the TRIGGER below. Prior to calling the BULK INSERT using ADO, I will use ADO to ALTER the TRIGGER with the new Guid. When the BULK INSERT runs, the ProductInstanceId will be populated with the new Guid.
ALTER TRIGGER DICastRaw1hrInsertGuid ON Alphanumericdata.dbo.DICastRaw1hr
FOR INSERT AS
    UPDATE dbo.DICastRaw1hr
    SET ProductInstanceId = '4f9a44eb-092b-445b-a224-cc7cdd207092'
    WHERE modelrundatetime = (SELECT MAX(modelrundatetime)
                              FROM Alphanumericdata.dbo.DICastraw1hr (NOLOCK))
More Questions:
- The Trigger is slow. The BULK INSERT without the Trigger runs in about 10 sec; with the Trigger, about 40 sec. I tried to use the SQL code below in the Trigger, but it was only doing the UPDATE on the last row. The TRIGGER must run after the BULK INSERT is complete. Now I am using the select (bad). Any comments ...
ALTER TRIGGER DICastRaw1hrInsertDate ON Alphanumericdata.dbo.DICastRaw1hr
FOR INSERT AS
    DECLARE @ID as integer
    SELECT @ID = i.recordid FROM inserted i
    UPDATE dbo.DICastRaw1hr
    SET ProductInstanceId = '4f9a44eb-092b-445b-a224-cc7cdd207092'
    WHERE recordid = @ID
- I understand that I could set the Guid in the Default Value part of the table definition using the NEWID() function. I need the Guid to be the same for all the rows that are inserted during the BULK INSERT (all have the same modelrundatetime) ... how would I do this?
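There does appear to be a format-file route: BULK INSERT applies a column's DEFAULT when the format file omits that column (as long as KEEPNULLS is not specified). A hedged sketch, with an assumed constraint name, that the C++/ADO code could run before each load in place of altering the trigger:

-- re-point the column default at this batch's GUID
-- (the DROP fails if no default exists yet; constraint name is assumed)
ALTER TABLE dbo.DICastRaw1hr
    DROP CONSTRAINT DF_DICastRaw1hr_ProductInstanceId;
ALTER TABLE dbo.DICastRaw1hr
    ADD CONSTRAINT DF_DICastRaw1hr_ProductInstanceId
    DEFAULT '4f9a44eb-092b-445b-a224-cc7cdd207092' FOR ProductInstanceId;
-- every row of the subsequent BULK INSERT then gets the same GUID, with no trigger cost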