When I accept my input data, it's convenient to read it in from a flat file as a fixed-width file with a width of 100. Then I run a splitter to route the input down two different paths (say, path_1 and path_2) based on an indicator field.
The output of path_1 has one column with a width of 100, but now I want to split it up into lots of little columns. The output of path_2 also has one column with a width of 100, and I want to split it up too, only using a different fixed-width map than I used for path_1.
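If the 100-wide rows are staged to a table first, one option is to carve each path up with SUBSTRING in a view (or the equivalent expressions in a Derived Column). A minimal sketch, assuming path_1 rows sit in a hypothetical staging table dbo.Path1_Staging with a single RawRow VARCHAR(100) column; the offsets below are made up and would come from your real fixed-width map:

Code:
-- Offsets/lengths are placeholders for the real path_1 map
SELECT SUBSTRING(RawRow, 1, 10)  AS CustomerID,
       SUBSTRING(RawRow, 11, 30) AS CustomerName,
       SUBSTRING(RawRow, 41, 60) AS AddressLine
FROM dbo.Path1_Staging;

The path_2 branch would be the same pattern with its own offsets.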
I'm trying to split a hyphen-delimited string into three columns in a view. I've been using SUBSTRING and LEN to split up the string, but it is getting very complicated (and isn't working in all cases). I've used a SPLIT function in VBScript; does T-SQL have anything similar? I've attached a spreadsheet that shows what I am looking for. Maybe someone can guide me in the right direction?
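T-SQL has no built-in SPLIT, but for exactly three hyphen-separated parts PARSENAME can stand in, since it splits on periods. A hedged sketch, only safe when the value itself never contains a period and has at most four segments (with fewer than three parts, the values shift toward the right):

Code:
SELECT val,
       PARSENAME(REPLACE(val, '-', '.'), 3) AS Part1,
       PARSENAME(REPLACE(val, '-', '.'), 2) AS Part2,
       PARSENAME(REPLACE(val, '-', '.'), 1) AS Part3
FROM (SELECT 'AAA-BBB-CCC' AS val) AS t;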
I have to split a column using a comma as the delimiter into multiple columns. I am able to do it if I know how many columns will be present in the final output, but in the daily run the number of columns may vary randomly.
How do I split the column without hardcoding how many columns will come out?
This is the code I am using:
Code: WITH Split_Names (Fil_id,Name, xmlname) AS ( SELECT Fil_ID,
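The posted snippet is cut off, so here is one complete version of that XML-split pattern, written against a hypothetical dbo.Names table with Fil_ID and Name columns. It returns one row per comma-separated value, which sidesteps the variable-column problem; going from those rows to a variable number of columns would then need dynamic SQL to build the PIVOT list:

Code:
WITH Split_Names (Fil_ID, Name, xmlname) AS
(
    SELECT Fil_ID,
           Name,
           -- Fails if Name contains characters XML treats specially (< > &)
           CONVERT(XML, '<N>' + REPLACE(Name, ',', '</N><N>') + '</N>') AS xmlname
    FROM dbo.Names
)
SELECT s.Fil_ID,
       x.value('.', 'VARCHAR(100)') AS Part
FROM Split_Names AS s
CROSS APPLY s.xmlname.nodes('/N') AS T(x);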
I'm creating a web-based NT RAS report site and am looking for the most efficient way to import the data from the NT event log into SQL2K. I'm using the 'dumpel' utility from the resource kit and all is fine except the 10th column, the message detail:
"The user DOMAINuserid connected on port Mdm15 on 08/23/2002 at 07:25am and disconnected on 08/23/2002 at 07:27am. The user was active for 2 minutes 23 seconds. 78809 bytes were sent and 50675 bytes were received. The port speed was 49300."
I need to parse this one long text string into 6 distinct columns: userID, port, duration, bytes_xmt, bytes_rcv and portspeed. After a quick review of the rowsets, the strings seem to hold a consistent output ... no real variances I can see.
I've dabbled with views but am facing a small performance issue that could get bigger: the SQL Server not only has to run the text file import package, but also the view to format the text dump into a workable dataset, and then my report code bangs over 30 queries against the final dataset. It already takes our SQL2K server over 3 minutes to parse about 20,000 rows, and the server's a beast (dual 1.8GHz P4 CPUs, 3GB RAM, RAID, etc.).
What I think would work best is to abandon the view (performance will only get worse as the row count increases) and instead INSERT the rows into one table.
Any ideas, anyone? Any good scripts out there that can help me parse the long text string quicker than using SUBSTRING and REPLACE functions?
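Since the wording is consistent, the anchor phrases themselves can drive CHARINDEX/SUBSTRING. A hedged sketch (dbo.RasDump and Msg are hypothetical names for wherever the dumpel output lands), assuming every row follows the sample message's exact phrasing:

Code:
SELECT
    -- 'The user ' is 9 characters, so the ID starts at position 10
    SUBSTRING(Msg, 10, CHARINDEX(' connected', Msg) - 10) AS userID,
    SUBSTRING(Msg, CHARINDEX(' port ', Msg) + 6,
        CHARINDEX(' on ', Msg, CHARINDEX(' port ', Msg) + 6)
      - (CHARINDEX(' port ', Msg) + 6))                   AS port,
    SUBSTRING(Msg, CHARINDEX('active for ', Msg) + 11,
        CHARINDEX('.', Msg, CHARINDEX('active for ', Msg))
      - (CHARINDEX('active for ', Msg) + 11))             AS duration,
    SUBSTRING(Msg, CHARINDEX('seconds. ', Msg) + 9,
        CHARINDEX(' bytes were sent', Msg)
      - (CHARINDEX('seconds. ', Msg) + 9))                AS bytes_xmt,
    SUBSTRING(Msg, CHARINDEX('sent and ', Msg) + 9,
        CHARINDEX(' bytes were received', Msg)
      - (CHARINDEX('sent and ', Msg) + 9))                AS bytes_rcv,
    -- The message ends with the speed followed by a period
    SUBSTRING(Msg, CHARINDEX('speed was ', Msg) + 10,
        LEN(Msg) - (CHARINDEX('speed was ', Msg) + 10))   AS portspeed
FROM dbo.RasDump;

Running this once as an INSERT ... SELECT into a real table, as suggested above, would mean the 30 report queries hit already-parsed columns instead of re-parsing through a view.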
What I need is to split the data into two columns: if the data in column Main starts with 'PR-' then output the value to column P, and if it starts with 'CC-' then to column C (the output needs to be in one table).
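A hedged sketch of that conditional split with CASE; every name other than Main is a placeholder:

Code:
SELECT Main,
       CASE WHEN Main LIKE 'PR-%' THEN Main END AS P,  -- NULL when not 'PR-'
       CASE WHEN Main LIKE 'CC-%' THEN Main END AS C   -- NULL when not 'CC-'
INTO dbo.SplitResult                                   -- one output table
FROM dbo.SourceTable;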
I have a very interesting problem in T-SQL coding for which I can't figure out the solution. There is a Line_1_Address column in our data warehouse address table which is being populated from various sources. Some sources have already concatenated house number + street address in the Line_1_Address column, whereas one source has separate columns for both data fields.
Now I'm trying to extract data from this data warehouse table, and I need to split the house number from the street address and load them into separate columns in my destination table. In case there is no data for the house number, I should load it as NULL.
The issue is that data in this Line_1_Address column is very inconsistent so I don't know which functions to use. Here is some sample data for your consideration:
Line_1_Address
101 E Commerce ST
120 E Commerce ST 2
Po Box 301
W. Bel Air Ave
West Main Street, PO Box 1388
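Given that inconsistency, one defensible rule is: if the address starts with digits, treat the leading token as the house number, otherwise leave it NULL. A hedged sketch (dbo.AddressTable is a placeholder), which will still misfile oddities such as box numbers that happen to start with a digit:

Code:
SELECT Line_1_Address,
       CASE WHEN PATINDEX('[0-9]%', Line_1_Address) = 1
            THEN LEFT(Line_1_Address, CHARINDEX(' ', Line_1_Address + ' ') - 1)
       END AS House_Number,          -- NULL when there are no leading digits
       CASE WHEN PATINDEX('[0-9]%', Line_1_Address) = 1
            THEN LTRIM(STUFF(Line_1_Address, 1, CHARINDEX(' ', Line_1_Address + ' '), ''))
            ELSE Line_1_Address
       END AS Street_Address
FROM dbo.AddressTable;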
I have a description field in a table which also stores the unit of measure in the same column, with some space between them. I need to split these into two different columns.
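If the unit of measure is reliably the text after the last space, REVERSE plus CHARINDEX can cut at that point. A hedged sketch; dbo.Items and the column names are placeholders:

Code:
SELECT Description,
       RTRIM(LEFT(Description,
                  LEN(Description) - CHARINDEX(' ', REVERSE(Description)))) AS ItemText,
       RIGHT(Description, CHARINDEX(' ', REVERSE(Description)) - 1)         AS UnitOfMeasure
FROM dbo.Items
WHERE Description LIKE '% %';   -- guard: a value with no space would error out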
If you see below, there are 2 customer names on 1 loan; most of them share the same last name and address. I want to separate it into these fields: LoanID, Customer 1 FirstName, Customer 1 LastName, Customer 2 FirstName, Customer 2 LastName, Address, Zip.
LEFT JOIN Status AS S ON S.LoanID = L.LoanID
LEFT JOIN Borrower B ON B.LoanID = L.LoanID
LEFT JOIN MailingAddress MA ON MA.LoanID = L.LoanID
WHERE S.PrimStat = '1' AND B.Deceased = '0'
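One way to flatten two borrower rows into one loan row (on SQL 2005 or later) is to number the borrowers per loan and pivot with MAX(CASE ...). A hedged sketch; BorrowerID and the exact column names are assumptions inferred from the fragment above:

Code:
WITH B AS
(
    SELECT LoanID, FirstName, LastName,
           ROW_NUMBER() OVER (PARTITION BY LoanID ORDER BY BorrowerID) AS rn
    FROM Borrower
    WHERE Deceased = '0'
)
SELECT L.LoanID,
       MAX(CASE WHEN B.rn = 1 THEN B.FirstName END) AS Customer1FirstName,
       MAX(CASE WHEN B.rn = 1 THEN B.LastName  END) AS Customer1LastName,
       MAX(CASE WHEN B.rn = 2 THEN B.FirstName END) AS Customer2FirstName,
       MAX(CASE WHEN B.rn = 2 THEN B.LastName  END) AS Customer2LastName,
       MA.Address,
       MA.Zip
FROM Loan AS L
JOIN B ON B.LoanID = L.LoanID
LEFT JOIN MailingAddress AS MA ON MA.LoanID = L.LoanID
GROUP BY L.LoanID, MA.Address, MA.Zip;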
I'm a non-programmer and an SQL newbie. I'm trying to create a printer usage report using LogParser and SQL database. I managed to export data from the print server's event log into a table in an SQL2005 database.
There are 3 main columns in the table (PrintJob): Server (the print server name), TimeWritten (timestamp of each print job), and String (the event log message containing all the info I need). My problem is I need to split the String column, which is a varchar(255) delimited by | (pipe). Example:
2|Microsoft Word - ราย�ารรับ.doc|Sukanlaya|HMb1_SD_LJ2420|IP_192.10.1.53|82720|1
The values are, in order:
1. job number (which I don't need)
2. printed document name
3. owner of the printed document
4. printer name
5. printer port (which I don't need)
6. size in bytes of the printed document (which I don't need)
7. number of page(s) printed
How can I copy the data in this table (PrintJob) into another table (PrinterUsage), splitting the String column into 4 columns (Document, Owner, Printer, Pages) along with the Server and TimeWritten columns in the destination table?
In Excel, I would use a combination of FIND(text_to_be_found, within_text, start_num) and MID(text, start_num, num_char). But CHARINDEX() in T-SQL only starts from the beginning of the string, right? I've been looking at some of the user-defined functions and I can't find anything like Excel's FIND().
Or if anyone can think of a better "native" way to do this in T-SQL, I'd be very grateful for the help or suggestion.
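For what it's worth, CHARINDEX does take a third argument, CHARINDEX(find, search, start_location), so it can play the role of Excel's FIND(). A hedged sketch of the copy, chaining the pipe positions with CROSS APPLY (SQL 2005 syntax); it assumes every String row contains all six pipes:

Code:
INSERT INTO PrinterUsage (Server, TimeWritten, Document, Owner, Printer, Pages)
SELECT pj.Server,
       pj.TimeWritten,
       SUBSTRING(pj.String, p1 + 1, p2 - p1 - 1) AS Document,
       SUBSTRING(pj.String, p2 + 1, p3 - p2 - 1) AS Owner,
       SUBSTRING(pj.String, p3 + 1, p4 - p3 - 1) AS Printer,
       SUBSTRING(pj.String, p6 + 1, 255)         AS Pages
FROM PrintJob AS pj
CROSS APPLY (SELECT CHARINDEX('|', pj.String)         AS p1) a
CROSS APPLY (SELECT CHARINDEX('|', pj.String, p1 + 1) AS p2) b
CROSS APPLY (SELECT CHARINDEX('|', pj.String, p2 + 1) AS p3) c
CROSS APPLY (SELECT CHARINDEX('|', pj.String, p3 + 1) AS p4) d
CROSS APPLY (SELECT CHARINDEX('|', pj.String, p4 + 1) AS p5) e
CROSS APPLY (SELECT CHARINDEX('|', pj.String, p5 + 1) AS p6) f;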
I need to query some hierarchical data. I've written a recursive query that allows me to examine a parent and all its related children using an adjacency data model. The scenario is to allow users to track how columns are populated in an ETL process. I've set up the sample data so that there are two paths:
1. col1 -> col2 -> col3 -> col6
2. col4 -> col5
You can input a column name and get everything from that point downstream. The problem is, you need to be able to start at the bottom and work your way up. Basically, you should be able to put in col6 and see how the data got from col1 to col6. I'm not sure if it's a matter of rewriting the query or changing the schema to invert the relationships.
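No schema change should be needed; walking up is the same recursive CTE with the join direction flipped, so each step finds the row whose child equals the current parent. A hedged sketch against an assumed adjacency table dbo.ColumnMap(child, parent):

Code:
WITH Upstream AS
(
    SELECT child, parent
    FROM dbo.ColumnMap
    WHERE child = 'col6'                        -- start at the bottom
    UNION ALL
    SELECT m.child, m.parent
    FROM dbo.ColumnMap AS m
    JOIN Upstream AS u ON m.child = u.parent    -- step toward the ancestors
)
SELECT * FROM Upstream;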
One of our users tried to set up replication on a SQL Server (the idea was given up). SQL Server added to each table a column related to replication. We want to remove these columns, and I used the following script:
select 'ALTER TABLE dbo.' + object_name(id) + ' DROP CONSTRAINT ' + object_name(constid) + ' GO'
     + 'ALTER TABLE dbo.' + object_name(id) + ' DROP COLUMN ' + 'msrepl_tran_version GO'
from sysconstraints
where object_name(constid) like '%msrep%'
Questions:
1. I want to know how to introduce a carriage return in order to have something like this:

...
ALTER TABLE dbo.T_CommandCopyFile DROP CONSTRAINT DF__T_Command__msrep__44AB0736
GO
ALTER TABLE dbo.T_CommandCopyFile DROP COLUMN msrepl_tran_version
...

2. Is there any other solution to do this more simply?
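On question 1: concatenating CHAR(13) + CHAR(10) embeds a carriage return/line feed inside the generated string (run with results-to-text output so the breaks survive). A hedged reworking of the script above:

Code:
select 'ALTER TABLE dbo.' + object_name(id)
     + ' DROP CONSTRAINT ' + object_name(constid)
     + char(13) + char(10) + 'GO' + char(13) + char(10)
     + 'ALTER TABLE dbo.' + object_name(id)
     + ' DROP COLUMN msrepl_tran_version'
     + char(13) + char(10) + 'GO'
from sysconstraints
where object_name(constid) like '%msrep%'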
Now what I have to do is make sure that ID, Name, City, County, Phone are present in the flat file. If they are not there, then I have to send a mail to the client saying that the file is not valid.
Let me know how I can do this. I also need to calculate the size of the flat file.
I need to find out the number of columns in the flat file before I process that particular file. I have the file name in the @filename variable and the file path in the @filepath variable, but I do not know how to check the column names before I process the file.
@filePath = C:DatabaseSourceFilesCAHCVSSourceFiles, and I am using a Foreach Loop container to read the files one by one and put the file name in the @filename variable. My file name looks like...
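On the header check: one hedged approach is to BULK INSERT just the first line into a one-column table and compare it to the expected layout. The path below is a placeholder; in the package it would be composed from @filepath and @filename with dynamic SQL (and BULK INSERT reads the path from the server's point of view):

Code:
CREATE TABLE #hdr (HeaderLine VARCHAR(4000));

BULK INSERT #hdr
FROM 'C:\SourceFiles\input.csv'            -- hypothetical path
WITH (LASTROW = 1, ROWTERMINATOR = '\n');  -- default field terminator is tab,
                                           -- so the comma header stays in one column
IF NOT EXISTS (SELECT 1 FROM #hdr
               WHERE HeaderLine LIKE 'ID,Name,City,County,Phone%')
    RAISERROR('File is not valid: header columns missing.', 16, 1);

For the file size, the usual routes are a Script Task (System.IO.FileInfo.Length) or xp_cmdshell running a dir command.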
I was trying to extract data from the source server using an OLE DB Source and SQL Server Destination when I encountered this error:
"Transaction (Process ID 135) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.".
What must be done so that even if the table being queried is locked, I wouldn't experience any deadlock?
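Two hedged options (database and table names are placeholders). Row versioning lets readers avoid blocking writers entirely; alternatively, if occasional dirty reads are acceptable, a NOLOCK hint on the extract query works:

Code:
-- Needs exclusive access to the database for the one-time switch
ALTER DATABASE SourceDB SET READ_COMMITTED_SNAPSHOT ON;

-- Or hint the extract query (accepts dirty reads):
SELECT Col1, Col2
FROM dbo.SourceTable WITH (NOLOCK);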
Hello all, I am running into an interesting scenario on my desktop. I'm running Developer Edition on Windows XP Professional (9.00.3042.00 SP2, Developer Edition). The OS is auto-patched via corporate policy, and I saw some patches go in last week. This machine is also a hand-me-down, so I don't have a clean install of the databases on the machine, but I am local admin.
So, starting last week after a forced remote reboot (also a policy), I noticed a few of the databases didn't start back up. I chalked it up to the hard shutdown and went along my merry way. Friday, however, I know I shut my machine down nicely, and this morning when I booted up, I was in the same state I was last Wednesday. 7 of the 18 databases on my machine came up with:
FCB::Open: Operating system error 32(The process cannot access the file because it is being used by another process.) occurred while creating or opening file 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\Test.mdf'. Diagnose and correct the operating system error, and retry the operation.

and it also logs

FCB::Open failed: Could not open file C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\Test.mdf for file number 1. OS error: 32(The process cannot access the file because it is being used by another process.).
I've caught references to the auto close feature being a possible culprit; no dice, as the databases in question are set to False. Recovery mode varies on the databases from Simple to Full. If I cycle the SQL Server service, whatever transient issue it was having with those files is gone. As much as I'd love to disable the virus scanner, network security would not be amused. The data and log files appear to have the same permissions as unaffected database files. Nothing's set to read-only or archive, which I've caught on other forums as possible gremlins. I have sufficient disk space, and the databases are set for unrestricted growth.
Any thoughts on what I could look at? If everything came up in RECOVERY_PENDING it'd make more sense to me than the hit-or-miss type of thing I'm experiencing now.
Dear list, I'm designing a package that uses Microsoft's preplog.exe to prepare web log files to be imported into SQL Server.
What I'm trying to do is convert this command, which works, into an Execute Process Task:

D:\SSIS Process\Prepweblog\ProcessLoad>preplog ex.log > out.log

The above DOS command works 100%.
However, when I use the Execute Process Task I get this error: [Execute Process Task] Error: In Executing "D:\SSIS Process\Prepweblog\ProcessLoad\preplog.exe" "" at "D:\SSIS Process\Prepweblog\ProcessLoad", The process exit code was "-1" while the expected was "0".
There are two package variables: User::gsPreplogInput = ex.log and User::gsPreplogOutput = out.log.
How do I use the Execute Process Task? I am trying to unzip a file using the freeware PZUnzip.exe, and I tried to place the entire command in a batch file, specifying the working directory as the location of the batch file, but the task fails with the error:
SSIS package "IngramWeeklyPOS.dtsx" starting.
Error: 0xC0029151 at Unzip download file, Execute Process Task: In Executing "C:\ETL\POSData\IngramWeekly\Unzip.bat" "" at "C:\ETL\POSData\IngramWeekly", The process exit code was "1" while the expected was "0".
Then I tried to specify the exe directly in the Executable property and the arguments as the location of the zip file and the directory to unzip the files in, but this time it fails with the following message:
SSIS package "IngramWeeklyPOS.dtsx" starting.
Error: 0xC002F304 at Unzip download file, Execute Process Task: An error occurred with the following error message: "%1 is not a valid Win32 application".
The command in the batch file, when run from the command line, works perfectly and unzips the file, so there is absolutely no problem with the command. I believe it is just the setup of the variables in the Execute Process Task editor under Process. Any input on resolving this will be much appreciated.
I am designing a utility which will keep two similar databases in sync; in other words, copying the new data from db1 to db2 and updating the old data from db1 to db2.
For this I am making use of the 'tablediff' utility, which, when provided with server name, database, and table info, will generate a .sql file that can be used to keep the target table in sync with the source table.
I am using the Execute Process Task and the process parameters I am providing are:
The customer.bat file will have the following code:

tablediff -sourceserver "LV-SQL5" -sourcedatabase "TC_CTI" -sourcetable "CUSTOMER_1" -destinationserver "LV-SQL2" -destinationdatabase "TC_CTI" -destinationtable "CUSTOMER" -f "c:\SQL_bat_Files\sql5\TC_CTI\sql_files\customer1"

The .sql file will be generated at c:\SQL_bat_Files\sql5\TC_CTI\sql_files\customer1.
The problem: the Execute Process Task is working fine, i.e., the tables are being compared correctly and the .sql file is being generated as desired. But the task itself is reporting failure with the following error:
[Execute Process Task] Error: In Executing "C:\SQL_bat_Files\SQL5\TC_CTI\package_occurrence.bat" "" at "C:\Program Files (x86)\Microsoft SQL Server\90\COM", The process exit code was "2" while the expected was "0".
Some of you may suggest just setting ForceExecutionResult = Success (in fact, this is what I am doing now just to get the program working), but this is not what I desire.
I'm pulling data from an Oracle db and loading it into MS-SQL 2008. For my data type checks during the data load process, what are the options to ensure that the data being processed won't fail? I'd like to verify the data first against the target data types and, if the format is valid, load it into the destination table; otherwise mark it with an error flag and push it into an errors table, all at the row level. One way I can think of is to load into a staging table, then get the source and destination column data types, compare them, and proceed.
Or should I just try loading the data directly and, if it fails, troubleshoot (which could be a difficult task, as I wouldn't know what caused the error)?
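On SQL 2008 there is no TRY_CONVERT (that arrived in 2012), so the staging-table route described above, with ISNUMERIC/ISDATE as crude row-level gates, is a common pattern. A hedged sketch; every table and column name here is a placeholder, and note ISNUMERIC accepts oddities like currency signs and scientific notation:

Code:
-- Route bad rows to an errors table with a reason flag
INSERT INTO dbo.ErrorRows (ID, RawAmount, RawDate, ErrorReason)
SELECT ID, RawAmount, RawDate,
       CASE WHEN ISNUMERIC(RawAmount) = 0 THEN 'Bad amount'
            ELSE 'Bad date' END
FROM dbo.Staging
WHERE ISNUMERIC(RawAmount) = 0 OR ISDATE(RawDate) = 0;

-- Load only the rows that pass the checks
INSERT INTO dbo.Target (ID, Amount, LoadDate)
SELECT ID,
       CONVERT(DECIMAL(18, 2), RawAmount),
       CONVERT(DATETIME, RawDate)
FROM dbo.Staging
WHERE ISNUMERIC(RawAmount) = 1 AND ISDATE(RawDate) = 1;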
I am having this table locking issue that I need to start paying attention to, as it's getting more frequent.
The problem is that the data in the tables is live finance data that needs to be changed and viewed almost in real time, so what I have picked up so far is that using table hints may not be a good idea.
I have a guy at work telling me that introducing a data access layer is the only way to solve this. I am not convinced, but I haven't enough knowledge to back my own feeling up (it's an ASP system, not .NET).
Hi, I'm trying to upload the ASPNETDB.MDF file to a hosting server via FTP, and every time when it is uploaded halfway (40% or 50%) I get an error message saying: "550 ASPNETDB.MDF: The process cannot access the file because it is being used by another process" and then the upload fails. I'm using SQL Express. Does anybody know the cause? Thanks a lot.
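A hedged guess at the cause: SQL Express still has the .mdf attached, and an attached database keeps the file handle open, which is what the FTP client trips over. Detaching it first (after stopping the web app / user instance) releases the lock; the database name may differ on your machine:

Code:
EXEC sp_detach_db @dbname = 'ASPNETDB';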
Hi. When I try to start a package manually by clicking the Start Debugging button, I get this after a little while:
Cannot process request because the process (3880) has exited. (Microsoft.DataTransformationServices.VsIntegration)
How can I prevent this from happening? This happens every time I want to start the package, and every time the process ID is different. Here it is 3880.
I have had a full lock on my SQL Server, and I have a few logs to find the origin of the lock.
I know the process at the head of the lock is process 55.
Here is the information I have on this process (two lock rows):

Spid  ecid  Ecid  ObjId       IndId  Type  Resource   Mode  Status  TransID  TransUOW
55    5     0     0           0      DB               S     GRANT   0        00000000-0000-0000-0000-000000000000
55    5     0     1784601646  0      PAG   1:1976242  IS    GRANT   16980    00000000-0000-0000-0000-000000000000
lastwaittype: PAGEIOLATCH_SH
CMD: AWAITING COMMAND
physical_io: 1059
login_time: 2007-07-05 04:29:53.873
net_address: DFF06EBF974D
waittype: 0x0046
HostName: .
BlkBy: .
DBName: grpprddb
CPUTime: 54331
DiskIO: 1059
ProgramName:
Would someone know a way to identify the origin of process 55?
I have already tried to execute the following query: select * from SYSOBJECTS where id = 1784601646
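Since ObjId is 0 on the DB lock row, the PAG resource is the better lead: 1:1976242 means file 1, page 1976242 in grpprddb, and DBCC PAGE (undocumented but long-standing) prints that page's header, whose m_objId field names the owning object. A hedged sketch:

Code:
DBCC TRACEON (3604);                    -- send DBCC output to the client
DBCC PAGE ('grpprddb', 1, 1976242, 0);  -- read m_objId from the page header

Also note that sysobjects is per-database, so the select on id 1784601646 only finds the object when run inside grpprddb itself.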
I have a File System Task copy file operation in an SSIS package. When the package is scheduled as a job, it fails with the following error:
The process cannot access the file 'C:\ETL\Consignment\Apple\AppleRawFile.txt' because it is being used by another process.
However, when I right-click on the package and execute it manually from Integration Services, it runs successfully without any problem. I am not certain how to resolve this issue; any inputs will be much appreciated.
When running two File System Tasks one after the other on the same file, the file is still locked when the second task runs, resulting in the error: Error: 0xC002F304 at Rename file 1, File System Task: An error occurred with the following error message: "The process cannot access the file because it is being used by another process.".
I found a workaround by adding an Execute Process Task before the second File System Task that pings localhost. This results in a 5-second delay, but there must be a better solution. Anyone?
Case: Exporting a report to PDF/printing/TIFF. Report: contains 1 table with 19 columns; 1 column is static, the other 18 are visible at the user's discretion. When printed/exported to PDF the report naturally spans 2 pages, 16 columns on the first page and 3 on the second, and the column widths have been adjusted to provide a perfect page span.
User A elects to hide two of the columns and show the rest. The report compiles, the viewable version is perfect, and the Excel export is perfect. But in the PDF export, on the first page every fifth column, starting with the last column that was hidden, is expanded to take up additional width. On the spanned page, it renders the first column on that page correctly, then there is a white-space gap equal to the width of the hidden columns, and then the rest of the cells show, with the last column expanded to take up the same width that the original 2 columns would have taken up, plus its own width.
We have tried several different settings to see if anything helps this issue or makes it worse. So far CanGrow/CanShrink/KeepTogether have made no impact. It is not possible to increase the page size due to the limited page size selection available to the client. There are far too many combinations of what the user can elect to show or hide to put together different tables to show and hide on the same report to remove this effect.
Any help or suggestion on this issue would be appreciated.