Split Pipeline
Oct 27, 2006
This is probably obvious, but how do I split a pipeline? I.e., I've got a data source with 200 columns, and I need to split this into 20 pipelines, each containing 10 of the original columns.
The scenario is as follows: I have a source with many rows. Each row has a column called max_qty_value. I need to perform a calculation using another column called qty; the calculation is something like ceiling(qty / max_qty_value). Once I have that number, I need to write that many duplicate rows. For example, ceiling(15/4) = 4, so I need to write 4 rows to the same target table, as line information for a purchase order.
The multicast transform appears to support only a fixed, predetermined number of outputs. How do I design this logic in SSIS to write a dynamic number of rows to a target table?
Any ideas would be greatly appreciated.
thanks
John
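One way to get a dynamic number of output rows (a sketch, not a tested solution): put an asynchronous Script Component between the source and the destination and have it emit ceiling(qty / max_qty_value) copies of each input row. The output must be configured as asynchronous (SynchronousInputID set to None) so the script can add rows; the buffer column names Qty and MaxQtyValue below are assumptions standing in for the post's columns.
Code:
Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
    ' ceiling(qty / max_qty_value), e.g. ceiling(15 / 4) = 4
    Dim copies As Integer = CInt(Math.Ceiling(CDbl(Row.Qty) / CDbl(Row.MaxQtyValue)))

    ' Emit one output row per calculated unit; the destination then writes
    ' that many purchase-order line rows.
    For i As Integer = 1 To copies
        Output0Buffer.AddRow()
        Output0Buffer.Qty = Row.Qty
        Output0Buffer.MaxQtyValue = Row.MaxQtyValue
        ' copy any other line-item columns here
    Next
End Sub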
Hi,
I want to incorporate this code, but I don't know how to import Microsoft.SqlServer.Dts.Pipeline in an Integration Services project template. I was thinking of putting this code in the Script Task, but still, I can't import Pipeline. It doesn't appear in the Add Reference list either. Please let me know how to incorporate this code. Thanks!
Code:
// Validate that the runtime connection is a FILE connection configured as
// an existing file, then acquire the file name from it.
if (ComponentMetaData.RuntimeConnectionCollection["SourceFileConnection"].ConnectionManager != null)
{
    cm = DtsConvert.ToConnectionManager(ComponentMetaData.RuntimeConnectionCollection["SourceFileConnection"].ConnectionManager);
    if (cm.CreationName == "FILE")
    {
        fileUsage = (Microsoft.SqlServer.Dts.Runtime.DTSFileConnectionUsageType)cm.Properties["FileUsageType"].GetValue(cm);
        if (fileUsage == Microsoft.SqlServer.Dts.Runtime.DTSFileConnectionUsageType.FileExists)
        {
            connectionString = ComponentMetaData.RuntimeConnectionCollection["SourceFileConnection"].ConnectionManager.AcquireConnection(transaction).ToString();
            if (connectionString == null || connectionString.Length == 0)
            {
                throw new Exception("No file name specified");
            }
        }
        else throw new Exception("Incorrect file connection usage type; it should be set to existing file");
    }
    else throw new Exception("Connection is not a file connection");
}
else throw new Exception("Connection is not assigned");
We have deployed an SSIS package successfully to production. We needed to apply SP1 to fix a different issue, and now have encountered a new problem. We have numerous Data Reader Sources in different Data Flow Tasks that connect to an IBM iSeries (DB2) source. These are pretty simple extracts that have worked fine in the past. They pump the data into staging tables on the SQL2K5 instance running the package (64-bit).
After we applied SP1, however, all of the Data Reader tasks fail AFTER they successfully copy the records, with the following error.
[iSeries Invoice Details [1]] Error: System.NullReferenceException: Object reference not set to an instance of an object. at Microsoft.SqlServer.Dts.Pipeline.DataReaderSourceAdapter.PrimeOutput(Int32 outputs, Int32[] outputIDs, PipelineBuffer[] buffers) at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostPrimeOutput(IDTSManagedComponentWrapper90 wrapper, Int32 outputs, Int32[] outputIDs, IDTSBuffer90[] buffers, IntPtr ppBufferWirePacket)
If I delete the source and destination and recreate identical transforms, they work fine, but I don't feel like rebuilding all of the extracts. Any ideas? The problem occurs in all environments that we've tried.
TIA,
Michael Shugarman
P.S. I just tried the SP2 CTP, but that doesn't fix the problem.
Hi, I have created a simple SSIS project on my client that carries out 4 Data Flow Tasks, each one copying a few hundred rows from an Oracle 10.0.2 database. This works OK and also runs fine in debug mode.
I have copied the package to the file system on our development server and get the following error when in debug mode:
[DTS.Pipeline] Information: Validation phase is beginning.
Progress: Validating - 0 percent complete
[OLE DB Source [1]] Error: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. The AcquireConnection method call to the connection manager "Server.user" failed with error code 0xC0202009. There may be error messages posted before this with more information on why the AcquireConnection method call failed.
[DTS.Pipeline] Error: component "OLE DB Source" (1) failed validation and returned error code 0xC020801C.
Progress: Validating - 50 percent complete
[DTS.Pipeline] Error: One or more component failed validation.
Error: There were errors during task validation.
Validation is completed
[Connection manager "Server.user"] Error: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005. An OLE DB record is available. Source: "Microsoft OLE DB Provider for Oracle" Hresult: 0x80004005 Description: "Error while trying to retrieve text for error ORA-01019 ".
Validation is completed
If you go to the source of each data flow task and select preview, you can retrieve the data.
Thanks Paul
Is there any way I can capture the information below? I want to capture this to get the number of rows processed by each transformation.
[DTS.Pipeline] Information: "component "abc" (3798)" wrote 2142 rows.
[DTS.Pipeline] Information: "component "xyz" (4223)" wrote 1026 rows.
[DTS.Pipeline] Information: "component "abc2" (4324)" wrote 7875 rows.
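If the package is executed from managed code, one way to capture these (a sketch; the class and regex are illustrative, not an official row-count API) is to override OnInformation in a DefaultEvents-derived listener passed to Package.Execute, and parse the "wrote N rows" messages. Inside the designer, enabling the OnInformation log event, or adding Row Count transforms to the flow, exposes the same numbers.
Code:
using System;
using System.Text.RegularExpressions;
using Microsoft.SqlServer.Dts.Runtime;

class RowCountListener : DefaultEvents
{
    // Matches messages like: "component "abc" (3798)" wrote 2142 rows.
    static readonly Regex RowsWritten =
        new Regex("\"(?<component>.+)\" wrote (?<rows>\\d+) rows");

    public override void OnInformation(DtsObject source, int informationCode,
        string subComponent, string description, string helpFile, int helpContext,
        string idofInterfaceWithError, ref bool fireAgain)
    {
        Match m = RowsWritten.Match(description);
        if (m.Success)
            Console.WriteLine("{0} -> {1} rows",
                m.Groups["component"].Value, m.Groups["rows"].Value);
    }
}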
Thanks
I have a situation where we get XML files sent daily that need uploading into SQL Server tables, but the source system producing these files sometimes generates duplicate records in the file. The tricky part is that the record isn't entirely duplicated. What I mean is that if I look for duplicates by grouping the key columns, having count(*) > 1, I find which ones are duplicates; but when I inspect the data on these duplicates, the details in the remaining columns may differ. So our rule is: pick the first record, toss the rest of the duplicates.
Because we don't sort on any columns during the import, the first record kept of the duplicates is arbitrary. Again, we can't tell at this point which of the duplicated records is more correct. Someday down the road, we will do this research.
Now, I need to know the most efficient way to accomplish this in SSIS. If it makes it easier, I could just discard all the duplicates, since the number of them is so small.
If the source were a relational table, I could use a SQL statement to filter the records to remove the duplicates, but since the source is an XML file, I don't know how to filter these out in the pipeline, since the file has to be aggregated to search for dups.
Thanks
Kory
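Two possibilities, offered as sketches rather than the definitive answer. In the pipeline, the Sort transform's "Remove rows with duplicate sort values" option keeps one arbitrary row per set of sort keys, which matches the "pick one, toss the rest" rule. Alternatively, land the XML rows in a staging table first and dedupe there with ROW_NUMBER(); Key1/Key2 and StagingTable below are placeholders:
Code:
-- Keep one arbitrary row per key; the ORDER BY is a dummy since any row will do.
;WITH d AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY Key1, Key2
                              ORDER BY (SELECT 0)) AS rn
    FROM StagingTable
)
SELECT * FROM d
WHERE rn = 1;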
Hi
I have an existing application that programmatically builds SSIS 2005 packages.
I'm trying to get it working with the February CTP of SQL Server 2008. Having changed all the 2005 references to 2008 references, and things like IDTSComponentMetaData90 to IDTSComponentMetaData100, my application now compiles okay, but it hits a problem when it tries to create a Data Flow task.
The code which worked fine before (and still seems to be the recommended way in Books Online) is:
Code Snippet
Dts.TaskHost myMainPipe = (Dts.TaskHost)container.Add("DTS.Pipeline.1");
However, this now produces the exception:
Cannot create a task with the name "DTS.Pipeline.1". Verify that the name is correct.
Should I be using a different moniker now? I took a stab at "DTS.Pipeline.2", but that didn't make a difference.
Thanks,
Andrew
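For what it's worth, a guess to verify rather than an answer: the task's CreationName appears to have changed in the 2008 builds from "DTS.Pipeline.1" to "SSIS.Pipeline.2", so the Add call would become (same code, assumed moniker):
Code Snippet
// Assumption: the 2008 CTP registers the data flow task as "SSIS.Pipeline.2"
Dts.TaskHost myMainPipe = (Dts.TaskHost)container.Add("SSIS.Pipeline.2");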
Hello,
I wish to receive pipeline events fired by an SSIS package.
I execute the package successfully with the following code (C#):
MyEventListener eventListener = new MyEventListener();
DtsApplication app = new DtsApplication();
Package pkg = app.LoadPackage(@"c:\test.dtsx", null);
pkg.Execute(null, null, eventListener, null, null);
MyEventListener inherits from DefaultEvents, overriding all the OnXXX methods.
It works perfectly; however, I cannot intercept the following events:
- PipelineExecutionTrees
- PipelineExecutionPlan
- PipelineExecutionInitialization
- BufferSizeTuning
- PipelineInitialization
Does anyone know how to catch those pipeline events?
TIA,
Paolo.
A lot of people complain, legitimately, that they wish to remove columns from the SSIS pipeline that they know are not going to be used again. This would help to avoid the "clutter" that can exist when there are a lot of columns in the pipeline.
If you are one of those people, then click through below, vote and (most importantly) add a comment. The more people that do that, the more likely we are to get this functionality in a future version.
SSIS: Hide columns in the pipeline
https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=252462
-Jamie
Hi, my package hangs and the log says "DTS.Pipeline: Validation phase is beginning." Any ideas why this is happening? This same package runs fine when I run it without turning on the transaction.
I have several stage-to-star (i.e. moving data from a staging table through the key lookups into a fact table) ETL transformations in a single SSIS package. Each fact table has a different set of measures but an identical foreign key set, e.g. ConsultantKey, SubsidiaryKey, ContestKey, ContestParamKey and MonthKey.
Currently I have to replicate the key lookup (Surrogate Key Pipeline, or SKP) for each data flow. If I could cache each dimension one time in the package and reuse it for each stage-to-fact flow, it would be much more efficient.
Is there a way for me to reuse a common data flow?
Dear Experts,
I can look at the values of the properties in each PipelineComponentInfo, for example:
ComponentType: Transform
CreationName: DTSTransform.Merge.1
Description: Merge Transformation
FileName: C:\Program Files\Microsoft SQL Server\90\DTS\PipelineComponents\TxMerge.dll
FileNameVersionString: 2000.90.1049.0
IconFile: C:\Program Files\Microsoft SQL Server\90\DTS\PipelineComponents\TxMerge.dll
IconResource: -201
ID: {08AE886A-4124-499C-B332-16E3299D225A}
Name: Merge
NoEditor: False
ShapeProgID:
UITypeName: Microsoft.DataTransformationServices.....
but I don't know what the properties FileName, FileNameVersionString, IconFile, IconResource, NoEditor, ShapeProgID and UITypeName mean...
Can anyone help me?
Thanks
Francesco
I am using a Script Component to transform a comma-delimited list from row data to columns,
and I want to use a MessageBox to see the value:
Dim DataPnts As String
DataPnts = Row.DataPnts.ToString() ' my input column (data type = text in the source table; set as Unicode string [DT_WSTR] on the output column)
MessageBox.Show(DataPnts, "DataPoints1", MessageBoxButtons.OK)
Why can't I see the value? The message box just shows Microsoft.SqlServer.Dts.Pipeline.BlobColumn. Why?
Values = DataPnts.Split(CChar(","))
Please point me to more info on how to transform a comma-delimited list from row data to columns.
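The message appears because a text column (DT_TEXT/DT_NTEXT) reaches the script as a Microsoft.SqlServer.Dts.Pipeline.BlobColumn object, and ToString() on it just returns the type name. A sketch of reading the value first (the encoding is an assumption: Encoding.Unicode suits DT_NTEXT, while DT_TEXT needs the column's ANSI code page):
Code:
' Pull the raw bytes out of the BlobColumn, decode them, then split.
' CInt() assumes the blob fits in an Integer-sized length.
Dim bytes As Byte() = Row.DataPnts.GetBlobData(0, CInt(Row.DataPnts.Length))
Dim DataPnts As String = System.Text.Encoding.Unicode.GetString(bytes)
Dim Values As String() = DataPnts.Split(","c)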
Thanks.
I am pulling down a table called PRV from another server through an ODBC connection in my SSIS package. I have the source and destination tasks all set up. I get this error when I run the package. Most of the time the error is pretty self-explanatory, but this one is beyond me. Any ideas?
Error: 0xC02090F5 at PRV TABLE FROM CYPRESS, PRV SOURCE [1]: The component "PRV SOURCE" (1) was unable to process the data.
Error: 0xC0047038 at PRV TABLE FROM CYPRESS, DTS.Pipeline: The PrimeOutput method on component "PRV SOURCE" (1) returned error code 0xC02090F5. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.
Error: 0xC0047021 at PRV TABLE FROM CYPRESS, DTS.Pipeline: Thread "SourceThread0" has exited with error code 0xC0047038.
Error: 0xC0047039 at PRV TABLE FROM CYPRESS, DTS.Pipeline: Thread "WorkThread0" received a shutdown signal and is terminating. The user requested a shutdown, or an error in another thread is causing the pipeline to shutdown.
Error: 0xC0047021 at PRV TABLE FROM CYPRESS, DTS.Pipeline: Thread "WorkThread0" has exited with error code 0xC0047039.
Information: 0x40043008 at PRV TABLE FROM CYPRESS, DTS.Pipeline: Post Execute phase is beginning.
Information: 0x402090DF at PRV TABLE FROM CYPRESS, PRV Destination [4076]: The final commit for the data insertion has started.
Error: 0xC0202009 at PRV TABLE FROM CYPRESS, PRV Destination [4076]: An OLE DB error has occurred. Error code: 0x80004005.
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Arithmetic overflow occurred.".
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Arithmetic overflow error converting IDENTITY to data type smallint.".
Information: 0x402090E0 at PRV TABLE FROM CYPRESS, PRV Destination [4076]: The final commit for the data insertion has ended.
Error: 0xC0047018 at PRV TABLE FROM CYPRESS, DTS.Pipeline: component "PRV Destination" (4076) failed the post-execute phase and returned error code 0xC0202009.
Information: 0x40043009 at PRV TABLE FROM CYPRESS, DTS.Pipeline: Cleanup phase is beginning.
Information: 0x4004300B at PRV TABLE FROM CYPRESS, DTS.Pipeline: "component "PRV Destination" (4076)" wrote 113136 rows.
Task failed: PRV TABLE FROM CYPRESS
I cannot get a simple package to execute a data pump to an Access database from SQL2005.
I have tried it both in SSIS and by running the Export Data function.
I have been able to write to this database in the past using DTS in SQL2000, but I am not able to write to it using SQL2005.
What is the deal with the new SSIS?
Does anybody have any ideas I can try to get my export to work?
I have many more to do, and I have to migrate all of my SQL 2000 DTS packages over to SQL2005, some of which export to MS Access.
This is the only error message I can find:
[DTS.Pipeline] Information: "component "OLE DB Destination 1" (2196)" wrote 0 rows.
Edit:
I found more errors in the debug section, and a post here that discussed the problem as others had run into it. I was able to use part of that, plus some more research, to tackle my problem.
I would still be interested in finding out why this problem suddenly arose after I upgraded to SQL2005.
This is going to be a real pain, as apparently SQL2005 treats NULL as zero length, and now all of my databases that had that set in Access will have to be modified to deal with this in the export.
Hi
I have an SSIS project that has one parent package and three child packages. When I run the project on my development machine in debug mode, it works fine. Also, if I run the packages using dtexec on my development machine, it still works fine. However, the problem comes in when I try to run the project using dtexec on the staging server; I get the following error:
Microsoft.SqlServer.Dts.Pipeline.DoesNotFitBufferException: The value is too large to fit in the column data area of the buffer.
Does anyone have any idea how to fix this, please?
thanks
G
Hi,
I have an SSIS package which pumps data from one server to another without any additional steps. There are 11 tables for which data is transferred. This package runs fine in two different environments but fails in one environment, i.e. on SIT.
It doesn't throw any error, and every time it stops at the step below:
[DTS.Pipeline] Information: Pre-Execute phase is beginning.
Progress: Pre-Execute - 0 percent complete
Progress: Pre-Execute - 1 percent complete
Progress: Pre-Execute - 2 percent complete
Progress: Pre-Execute - 3 percent complete
Progress: Pre-Execute - 4 percent complete
Progress: Pre-Execute - 5 percent complete
Progress: Pre-Execute - 6 percent complete
Progress: Pre-Execute - 7 percent complete
It neither completes nor throws an error. Any pointers on what the problem could be?
Thanks
I have been working with DTS and ETL in data warehousing projects for several years, and my question is this: it seems you can only update a dimension column with SSIS by using T-SQL UPDATE statements.
Is there really no way to do this except issuing T-SQL from the control flow or the data flow?
This subject is not mentioned in the Wrox SSIS book, nor in Kirk Haselden's book.
When you run the SCD task in the data flow, you get an OLE DB Command that actually does this: it issues a T-SQL statement.
Is this correct?
Regards
Thomas Ivarsson
I have been trying to follow/implement the examples in the following help topics (thanks to Jamie for these links).
Building Packages Programmatically
(http://msdn2.microsoft.com/en-us/library/ms345167.aspx)
Connecting Data Flow Components Programmatically
(http://msdn2.microsoft.com/en-us/library/ms136086.aspx)
The problem I am having is that MainPipe is not recognized as a valid type in my Script Task, even though I have the Imports statements that are listed in the example. I get the message "Error 30002: Type 'MainPipe' is not defined". The other, related problem is that when I type "Imports Microsoft.SqlServer.Dts", IntelliSense offers only two choices: {}Runtime and {}Tasks. I don't see any choice for Pipeline. Can anyone tell me what I am missing? It seems to be some kind of configuration/installation issue, but I have no idea how to resolve it. I have tried this on 3 different machines, with both the RTM SQL 2005 Standard Edition and with SP2 installed, all with the same result. Any help is appreciated.
Here is my code:
' Microsoft SQL Server Integration Services Script Task
' Write scripts using Microsoft Visual Basic
' The ScriptMain class is the entry point of the Script Task.
Imports System
Imports System.Data
Imports System.Math
Imports Microsoft.SqlServer.Dts.Runtime
Imports Microsoft.SqlServer.Dts.Pipeline
Imports Microsoft.SqlServer.Dts.Pipeline.Wrapper

Public Class ScriptMain
    Public Sub Main()
        Dim package As Microsoft.SqlServer.Dts.Runtime.Package = _
            New Microsoft.SqlServer.Dts.Runtime.Package()
        Dim e As Executable = package.Executables.Add("DTS.Pipeline.1")
        Dim thMainPipe As Microsoft.SqlServer.Dts.Runtime.TaskHost = _
            CType(e, Microsoft.SqlServer.Dts.Runtime.TaskHost)
        Dim dataFlowTask As MainPipe = CType(thMainPipe.InnerObject, MainPipe)
        Dts.TaskResult = Dts.Results.Success
    End Sub
End Class
Hi,
I have some data coming through the pipeline, and I want to add a component at some point to pass on only selected rows, based on conditions, to the downstream components. My opinion is that I should use the Conditional Split object, but please suggest something else if you know better.
Thanks,
Fahad
I'm building a custom transform component. I want to mark some input columns as keys for deduplicating. In a similar way to the stock Sort component, I want to check those columns and allow pass-throughs (or not) for the others, so next to each input column name I need two checkboxes (1: use for dedupe; 2: include in output if 1 is not checked). If a column is checked for use in the dedupe, I want some other attributes to be shown indicating how it will be used. How do I display the checkboxes to let users select which columns to include for deduplication, and how do I then add further attributes underneath (copying the Sort component's look)?
Thanks in advance for guidance and pointers on this.
Hi,
I need to access columns from a data flow by ordinal position in a script transformation (I'm parsing an Excel file which has several rowsets across the page). The first problem I encountered is that the generated BufferWrapper does not expose the columns collection (i.e. Input0Buffer(0) does not work), but I got around that by implementing my own ProcessInput(InputID, Buffer) method instead of using the wrapper.
My problem now is that the column ordinals are in some random order (i.e. column "F1" is ordinal 1, but column "F2" is 243). Where in the object model can I map between the name and the ordinal? It's not jumping out at me.
Dave
PS: Why is the script editor modal? It's frustrating having to switch between the Visual Studio environment and the VSA one.
Dear experts,
My MS SQL Server 2005 is generating the following error. May I know what's wrong with it?
"
The Collect Procedure for the "DTSPipeline" service in DLL "XXX:\Program Files\Microsoft SQL Server (x86)\90\DTS\Binn\DTSPipelinePerf.dll" generated an exception or returned an invalid status. Performance data returned by the counter DLL will not be returned in the Perf Data Block. The exception or status code returned is the first DWORD in the attached data.
"
Thanks in advance for any assistance rendered.
pat
Hi all, I got this error:
[DTS.Pipeline] Error: "component "Excel Source" (1)" failed validation and returned validation status "VS_NEEDSNEWMETADATA".
and also this:
[Excel Source [1]] Warning: The external metadata column collection is out of synchronization with the data source columns. The column "Fiscal Week" needs to be updated in the external metadata column collection. The column "Fiscal Year" needs to be updated in the external metadata column collection. The column "1st level" needs to be added to the external metadata column collection. The column "2nd level" needs to be added to the external metadata column collection. The column "3rd level" needs to be added to the external metadata column collection. The "external metadata column "1st Level" (16745)" needs to be removed from the external metadata column collection. The "external metadata column "3rd Level" (16609)" needs to be removed from the external metadata column collection. The "external metadata column "2nd Level" (16272)" needs to be removed from the external metadata column collection.
I tried going to the data flow -> Excel connection -> Advanced Editor for the Excel source -> Input and Output Properties, and tried to refresh the affected columns.
It seems that somehow the 3 columns are not read in from the source file?
And also, Fiscal Year and Fiscal Week are not set up properly in my data destination?
Has anyone faced such errors before?
Thanks
Hi,
I am trying to create a simple BI application for SSIS. In Visual Studio 2005 I just grab a Data Flow Task from the Toolbox and add it to the project. When I double-click it I get the following error:
The task with the name "Data Flow Task" and the creation name "DTS.Pipeline.1" is not registered for use on this computer.
Then when I try to delete it, it gives this other error:
Cannot remove the specified item because it was not found in the specified Collection.
I am creating this application under an administrator account on this computer, so I doubt the problem is related to permissions. I am running SQL Server 2005 and Visual Studio 2005 on WinXP Tablet PC Edition.
Any suggestions why this is happening and how to fix it?
I can't find the 'SQL Server: SSIS Pipeline' performance object in Performance Monitor on a 64-bit SQL Server. I see it on a 32-bit one. Does anybody know why?
Thanks
I have a custom (dataset) destination component from the MS samples, and it has an input that holds a DT_NTEXT value.
Whenever I try to retrieve data from it, it returns "Microsoft.SqlServer.Dts.Pipeline.BlobColumn" as the value.
I tried this, but it didn't work:
String sValue = System.Text.Encoding.Default.GetString(Convert.FromBase64String(this.dataSet.Tables[0].Rows["Data"].ToString()));
It throws an exception: "invalid character in.."
Please help: how can I convert this?
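A sketch of the usual fix: call GetBlobData on the BlobColumn while it is still available in the pipeline (e.g. inside the destination's ProcessInput) and decode the bytes there, rather than converting after it lands in the DataSet. The buffer and columnIndex names below are placeholders; Encoding.Unicode matches DT_NTEXT's UTF-16 bytes:
Code:
// BlobColumn.ToString() only returns the type name; read the bytes instead.
// "buffer" and "columnIndex" are placeholders for the real buffer access.
BlobColumn blob = (BlobColumn)buffer[columnIndex];
byte[] bytes = blob.GetBlobData(0, (int)blob.Length);
string sValue = System.Text.Encoding.Unicode.GetString(bytes);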
Thanks in advance
Quick question.
I've got a CHAR(70) field called NAME that has a first and last name separated by a space. I want to split it into two fields, FIRST and LAST, with all the characters to the left of the space as the first name and all the characters to the right of the space as the last name. I couldn't find a string function that would let me do this simply (it may be right in front of me and I missed it).
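A sketch with plain string functions (the table name is a placeholder, and it assumes every NAME contains at least one space):
Code:
SELECT LEFT(NAME, CHARINDEX(' ', NAME) - 1)                 AS FIRST,
       LTRIM(SUBSTRING(NAME, CHARINDEX(' ', NAME) + 1, 70)) AS LAST
FROM   MyTable                      -- placeholder table name
WHERE  CHARINDEX(' ', NAME) > 0;    -- rows with no space need separate handling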
Thanks in advance.
Ray
I have a database with a "large" table containing date-based information; basically, they're reservations. I've thought about creating a new table and adding any records from past years to it. For the most part only current reservations need to be searchable, but in some circumstances it would be useful to be able to search through the archive too. So, my questions:
Is 8,000 or so rows of data "large" and unwieldy in SQL terms?
Would splitting this data into 2 tables - one small table for current and future reservations and one larger archive table - and then using a UNION SELECT query to make archive information searchable be a significant improvement on server resources/load? Or am I making the whole thing more complicated than it needs to be, as 8,000 rows of data is nothing to worry about?
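For reference, the UNION idea can be hidden behind a view so queries don't need to know about the split; the table and view names below are placeholders. (That said, 8,000 rows is tiny by SQL Server standards.)
Code:
-- Placeholder names; UNION ALL avoids the duplicate-elimination cost of UNION.
CREATE VIEW dbo.AllReservations AS
SELECT * FROM dbo.Reservations          -- current and future
UNION ALL
SELECT * FROM dbo.ReservationsArchive;  -- past years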
What did they say about a little bit of knowledge being a dangerous thing?
Thanks in advance of any guidance to a neophyte!!?
Hi to all
I have one problem regarding an SP and passing a value to the SP.
I am getting a value like Abc,Def,Ghi,
Now I want to split the whole passed value by ","
and fire one for-loop to store the values in the database.
This is currently done in an ASP.NET web form, but I want to do the whole process in the SP.
So please guide me on how to write the SP.
The purpose is to pass the value only once, so connection time decreases and performance is faster.
SQL UDF split()
The objective of this article is to help SQL developers with a UDF that can be used within a stored procedure or function to split a string (based on a given delimiter) and extract the required portion of the string.
Scripting languages like VBScript and JavaScript have built-in split() functions, but there is no such function available in SQL Server. In my experience this function is really handy when you're working on an ASP application with SQL Server as the backend, where you'll need to pass the ASP-page-submitted values to a SQL stored procedure.
To give a simple example, in a typical monthly-reporting ASP page the users would select a range of months and extract the information pertaining to this date range. The classic implementation of this model is to have an ASP page accept the input parameters and pass the values to the SQL stored procedure (SP). The SP returns a result set, which is then formatted in the ASP page as results.
If the date range is continuous, i.e. JAN07 to MAR07, then the SP can typically accept 'From' and 'To' range variables. But I've encountered situations where the users select 3 months from the current year and 2 months from the previous year (non-continuous date ranges). In such a scenario the SP cannot have a date range as input parameters.
What an ASP programmer typically does is have a single date input parameter in the SP and call the SP within a loop in the ASP page. This is an inefficient way of programming, as contacting the database server within an ASP loop can cause performance overhead, especially if the table being queried is an online transaction processing table.
Here is how I handled the above situation.
1. Declare one string input parameter of type varchar(8000) (if you're using SQL 2005 then it is advisable to use varchar(MAX))
2. Pass the ASP-submitted values as a string; in this case the months selected by the user would be supplied to the SP as a string
3. Within the stored procedure, call the split() function to extract each month from the string and query the corresponding data
The basic structure of the stored procedure is pasted below:
CREATE PROCEDURE FETCH_SALES_DETAIL (
    @MONTH VARCHAR(MAX)
)
AS
BEGIN
    DECLARE @MONTH_CNT INT, @MTH DATETIME
    SET @MONTH_CNT = 1
    WHILE DBO.SPLIT(@MONTH, ',', @MONTH_CNT) <> ''
    BEGIN
        SET @MTH = CAST(DBO.SPLIT(@MONTH, ',', @MONTH_CNT) AS DATETIME)
        --<<Application specific T-SQLs>>-- (BEGIN)
        SELECT [SALES_MONTH], [SALES_QTY], [PRODUCT_ID], [TRANSACTION_DATE]
        FROM SALES (NOLOCK)
        WHERE [SALES_MONTH] = @MTH
        --<<Application specific T-SQLs>>-- (END)
        SET @MONTH_CNT = @MONTH_CNT + 1
    END
END
The dbo.SPLIT() function takes 3 parameters:
1) The main string with the values to be split
2) The delimiter
3) The Nth occurrence of the string to be returned
The functionality of the UDF is explained step by step:
1. Function declaration
CREATE FUNCTION [dbo].[SPLIT]
(
    @nstring VARCHAR(MAX),
    @deliminator NVARCHAR(10),
    @index INT
)
RETURNS VARCHAR(MAX)
The function is declared with 3 input parameters:
@nstring, of type VARCHAR(MAX), will hold the main string to be split
@deliminator, of type NVARCHAR(10), will hold the delimiter
@index, of type INT, will hold the index of the string to be returned
2. Variable declaration
DECLARE @position int
DECLARE @ustr VARCHAR(MAX)
DECLARE @pcnt int
Three variables are needed within the function. @position is an integer variable that will be used to traverse along the main string. @ustr will store the string to be returned, and the integer variable @pcnt tracks which delimited word is currently being processed.
3. Variable initialization
SET @position = 1
SET @pcnt = 1
SELECT @ustr = ''
Initialize the variables
4.Main functionality
WHILE @position <= DATALENGTH(@nstring) and @pcnt <= @index
BEGIN
IF SUBSTRING(@nstring, @position, 1) <> @deliminator BEGIN
IF @pcnt = @index BEGIN
SET @ustr = @ustr + CAST(SUBSTRING(@nstring, @position, 1) AS nvarchar)
END
SET @position = @position + 1
END
ELSE BEGIN
SET @position = @position + 1
SET @pcnt = @pcnt + 1
END
END
4.1 The main while loop traverses the main string while the current position is within the string and the word count is less than or equal to the index passed as an input parameter.
4.2 Within the while loop, each character of the string is checked against the delimiter; if it does not match, the local word count variable is compared with the input index parameter.
4.3 If they are equal, i.e. the input index and the word currently being processed in the while loop are the same, then the character is appended to the @ustr variable. In either case the @position variable is then incremented.
4.4 If the character matches the delimiter, then the word count variable @pcnt is incremented along with the @position variable.
5. Return the value
RETURN @ustr
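As a quick sanity check, calling the function against the sample string from earlier in the thread should behave like this (assuming the function has been created as above):
-- Returns 'Def', the 2nd comma-delimited token
SELECT dbo.SPLIT('Abc,Def,Ghi', ',', 2)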
I hope this article will benefit those who are looking for a handy function to deal with strings.
Feel free to send your feedback at dearhari@gmail.com
I have 5 dynamic rows, each row consisting of 5 checkboxes and 5 dropdowns. I am concatenating the values of the controls in each row using a delimiter character "~", and each row I am concatenating using "|". The complete string is then assigned to one hidden field and passed as a SQL parameter to the backend.
Please help in writing the split function to get the values of each checkbox and dropdown from the string, in order to save them in separate columns.
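A sketch using the SPLIT() UDF from the article above: an outer loop over the "|"-delimited rows and an inner loop over the "~"-delimited control values. @input and the PRINT are placeholders for the real parameter and INSERT:
Code:
DECLARE @input VARCHAR(MAX), @row VARCHAR(MAX), @r INT, @c INT
SET @input = '1~0~1~0~1|0~1~0~1~0'   -- placeholder for the hidden-field value
SET @r = 1
WHILE DBO.SPLIT(@input, '|', @r) <> ''
BEGIN
    SET @row = DBO.SPLIT(@input, '|', @r)
    SET @c = 1
    WHILE DBO.SPLIT(@row, '~', @c) <> ''
    BEGIN
        PRINT DBO.SPLIT(@row, '~', @c)   -- INSERT into the proper column here
        SET @c = @c + 1
    END
    SET @r = @r + 1
END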
Thanks