SYNCHRONOUS TRANSFORMATION - FILE TO TABLE - Help Needed
Apr 2, 2007
Hi,
Can anyone please point me in the right direction?
What I am trying to do should be very straightforward:
Take a flat file, perform various transformations on various columns using the SCRIPT COMPONENT task, then send the transformed (and un-transformed) rows to a table in the database.
My question is, how do I do this using scripting? I have yet to see an example of what I'm trying to do. (I have Kirk Haselden's book, Donald Farmer's SSIS scripting book, and the MSDN website, but I have yet to see an example of what I'm trying to do!)
FILE SOURCE --> SCRIPT COMPONENT (synchronous transform) --> OLE DB DESTINATION
How do I account for all the columns that will be both transformed and un-transformed, and get them into the table? That is the missing piece of information I can't find anywhere.
The closest thing I found was the code snippet below. Do I need to use this syntax, e.g. Me.Output0Buffer.FirstName = ... (where FirstName is the actual column name)?
Then, once I hook the SCRIPT COMPONENT up to the OLE DB Destination, which uses a connection manager to the table, it will insert FirstName with what I specify?
Help. Thanks.
Me.Output0Buffer.AddRow()
Me.Output0Buffer.FirstName = columnValue ' or whatever the transformed value is
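For clarity, here is my best guess at the synchronous pattern I am asking about - a sketch only, where FirstName is the column from my example, and I am assuming the default output is left synchronous so any column I never touch passes through to the destination on its own:

Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
    ' Transform only the columns that need it, in place.
    If Not Row.FirstName_IsNull Then
        Row.FirstName = Row.FirstName.Trim().ToUpper()
    End If
    ' Columns I never assign to here (the un-transformed ones)
    ' should flow through to the OLE DB Destination unchanged.
End Sub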
View 8 Replies
Jun 5, 2006
Hi,
If you have two synchronous transformation components and the input of the second is connected to the output of the first, does the first transformation process (loop through) all rows in the buffer before outputting these rows to the second transformation? Or does the first transformation output each individual row to the second transformation as soon as it has finished processing it?
Thanks in advance,
Lawrie.
View 5 Replies
Mar 13, 2006
Hi
I am currently trying to write a custom transform component in C# that will take a row of data and perform a look-up via an external system;
if there is a match, it should send the data from the external system down the match output (which will have different columns from the input) and drop the row
that was read; otherwise it should send the data down the unmatched output, which will be the same as the input.
So I would like to write a synchronous transform, because I don't need to read all the rows from the input buffer before I start processing, and I don't want millions of rows
loaded in memory.
Can this be done? Also, does anyone have example code of how to do this? I can't see how to send data down the match output buffer,
since it will hold the lookup results, which have different columns from the input data, nor how to discard the input data.
Thanks Steve
View 7 Replies
Apr 23, 2008
I have a package with 10 synchronous dataflows, which, combined, load about 300MB of flat file data to a database. This package would run successfully on 2 of our database servers, but would regularly fail on a third. The server on which it was failing is a 4-processor box with 16GB RAM with Windows Server 2003, SQL 2005, SSIS and SSRS installed - much more robust than one of the others that the package worked on. The SSIS error messages returned alternated between the following (with no apparent reason why one would show up rather than another, though the first was the most common):
"The file name "\Server1Folder1File1.txt" specified in the connection was not valid."
"The file name property is not valid. The file name is a device or contains invalid characters."
"An error occurred while initializing the flat file parser."
For the first error message, the error would report different connection managers and their associated file as invalid from run to run. All of the files across the 10 dataflows resided in the same network folder, and the package would read in and process a few of them before failing, so the problem was definitely not the connection string.
Searching the forums, etc. for these errors provided no useful information. Given the real cause of the problem, these error messages are worse than unhelpful: they send you looking in the wrong direction. It was only when trying to track down another problem on the same server that I discovered the issue. When trying to copy database backups greater than 12GB over the network to this server, the operation would fail with an "Insufficient System Resources" message.
Some research led to the discovery that the problem was caused by the /3GB switch in the boot.ini file of the server (don't let your server team use that switch if you have 16GB of memory or more). Removing the switch and setting SQL to utilize AWE fixed both the file copy problem AND the SSIS package failure problem. The SSIS package failed not due to a bad connection string, but rather due to insufficient server resources (read: memory) to handle the simultaneous connections.
I hope this may help any others trying to track down this kind of SSIS package failure.
I will also provide here what I have gleaned about setting up memory usage for SQL Server 2005 running on 32 bit Windows Server 2003 (with the caveat that I am no expert - corrections and additional information are welcome).
The following links got me started in my research (thanks to the folks who provided such useful information):
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=55191
http://articles.techrepublic.com.com/5100-10878_11-6091280.html
http://www.simple-talk.com/community/blogs/brian_donahue/archive/2007/09/30/37747.aspx
http://blogs.technet.com/askperf/archive/2007/03/23/memory-management-demystifying-3gb.aspx
http://www.modhul.com/2007/11/10/optimising-system-memory-for-sql-server-part-i/
Also, search BOL for:
Server Memory Options
Enabling Memory Support for Over 4 GB of Physical Memory
Enabling AWE Memory for SQL Server
Windows Server 2003 provides access to 4GB of virtual address space. By default, 2GB is assigned to the OS and 2GB to applications. This default can be changed to 1GB for the OS and 3GB for applications by use of the /3GB switch in the boot.ini file.
Physical memory over 4GB can be addressed by enabling Physical Address Extension (PAE), which is done by setting the /PAE switch in the boot.ini file. This does not increase the system's virtual address space; rather, it increases the size of the page table (which is maintained within the virtual address space), adding entries to reference the physical memory above 4GB.
It is important to note that these two switches are not interdependent (they do different things, and you can turn each on or off regardless of the other's status), though the combination of them has an impact on server performance and the maximum amount of physical memory which can be addressed.
The /3GB switch only impacts the allocation of the first 4GB of memory (virtual address space) between the OS and applications (default 50/50 % split, with switch on - 25% OS and 75% applications). The /PAE switch enables the system to reference/manage physical memory above 4GB, but does not alter the allocation percentages of the first 4GB of memory between the OS and applications. However, when PAE is enabled, the OS requires more memory within the first 4GB to manage the physical memory above 4GB (due to increased page table entries). With the /3GB switch, the OS has only 1GB of virtual address space, and only enough space to manage a total of 16GB of physical memory. If 32GB of physical memory is installed, 16GB of it will go to waste.
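For illustration, a boot.ini [operating systems] entry with both switches applied might look like the following sketch (the disk/partition path is machine-specific):

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Enterprise" /fastdetect /PAE /3GB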
Address Windowing Extensions (AWE) is an API that allows an application to address more than the 2-3GB of memory that is available to applications within the virtual address space (first 4GB of memory). SQL Server can utilize AWE to take advantage of memory above the first 4GB that is made available via PAE, and can even reserve portions for its own use. I believe (though I can't remember where I got this bit) that SQL utilizes AWE memory only for the page cache (buffer pool - which seems to be a misnomer), and not for other operations.
To enable AWE, see the BOL references above.
The big question: what are the recommended settings for all of these? That all depends on what you have running on the server. You need to leave space for the OS, SQL Server and any other applications you have.
The hard and fast rules:
If you have more than 4GB of RAM, you must use the /PAE switch in order to take advantage of it.
If you have more than 16GB of RAM, you must NOT use the /3GB switch in order to take advantage of it.
Based on anecdotal evidence, I've noticed the following generally recommended guidelines - assuming the server is dedicated to SQL.
Use of the /3GB switch seems to be a generally accepted practice if you have 8GB of RAM or less. For between 8 and 16GB, some say never use the /3GB switch, others say you can use it up to 12GB and still others up to 16GB. I interpret this to mean that it all depends on what types of loads are being placed on the server and that testing on individual servers will be required to determine whether or not to use the switch. Certainly that was my experience - the /3GB switch worked fine with 16GB RAM, until the server encountered a certain workload. For me, no more /3GB switch.
For setting SQL to use AWE, most seem to agree that it should be enabled if you have more than 4GB RAM. The setting of max server memory is more complicated. BOL seems to suggest (the 'Server Memory Options' entry) a formula of Total Physical Memory minus 1-2GB for the operating system. Based on a desire to be a bit more conservative, I am now using the following formula:
max server memory = total physical memory
                    minus 4GB for the OS and application processes (since the AWE memory is utilized for page cache, not SQL processes)
                    minus AWE memory required by other applications, including other instances of SQL Server
If anyone has additional insight, or a more refined equation, I could certainly benefit from it.
View 1 Replies
May 24, 2007
Hi guys,
In my db I have these three tables:
1. Stores
2. Products
3. Parts
Their structure is something like: Stores ---- Products ---- Parts
Stores
----------------
StoreId, StoreName
Products
----------------
ProductId, StoreId, ProductName
Parts
----------------
PartId, ProductId, PartName
Now, in my application I want to implement a bulk-copy operation so a user can copy products from one store to another, and when a product is copied to the new store, all of its parts should be copied too. In fact I need a method to insert a Product item into the Products table and synchronously copy its parts into the Parts table, repeating these steps until all of the products are copied. How can I do that without cursors or loops?
Thanks
View 19 Replies
Mar 26, 2007
Hi,
I have 56 fields coming into the input of a script component. The need for the script component was just to check whether one of those 56 columns has a valid date or not: if valid, it will parse it and put it in an output date column; if not, it will put in NULL.
The other 55 fields should be passed on. I don't really want to write code to define output columns. How do I do this?
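A rough sketch of the kind of logic I mean, assuming a hypothetical input column RawDate and an output date column ParsedDate added to the synchronous output (the other 55 columns would pass through untouched):

Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
    Dim parsed As DateTime
    ' Parse the candidate column if possible, otherwise output NULL.
    If Not Row.RawDate_IsNull AndAlso DateTime.TryParse(Row.RawDate, parsed) Then
        Row.ParsedDate = parsed
    Else
        Row.ParsedDate_IsNull = True
    End If
End Sub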
Any input in this would be appreciated.
Thanks,
View 5 Replies
Apr 23, 2008
I am trying to use a merge transformation task and am receiving an error that I don't know how to troubleshoot further. Could I please have some advice on what else to look at to try to resolve the problem?
The error message text is: Error at Data Flow Task [Merge [1245]]: The metadata for "input column "LOCATION" (5451)" does not match the metadata for the associated output column
I have looked at the metadata and cannot see any differences: the following is output from the data flow path.
Name        Data Type  Precision  Scale  Length  Code Page  Sort Key Position  Source Component
ACCOUNT     DT_STR     0          0      6       1252       1                  Sort - FinSysData
PROGRAM     DT_STR     0          0      6       1252       2                  Sort - FinSysData
LOCATION    DT_STR     0          0      6       1252       3                  Sort - FinSysData
PROJECT     DT_STR     0          0      6       1252       4                  Sort - FinSysData
SUBPROJECT  DT_STR     0          0      2       1252       5                  Sort - FinSysData
ACTIVITY    DT_STR     0          0      6       1252       6                  Sort - FinSysData
FUNDING     DT_STR     0          0      3       1252       7                  Sort - FinSysData
CLIENT      DT_STR     0          0      6       1252       8                  Sort - FinSysData
NTWAGE      DT_STR     0          0      3       1252       9                  Sort - FinSysData
TYPE        DT_STR     0          0      1       1252       10                 Sort - FinSysData
PERIOD      DT_STR     0          0      6       1252       11                 Sort - FinSysData
CO          DT_STR     0          0      2       1252       12                 Sort - FinSysData
FIN_YEAR    DT_I4      0          0      0       0          13                 Sort - FinSysData
BALANCES    DT_R8      0          0      0       0          14                 Sort - FinSysData

Name        Data Type  Precision  Scale  Length  Code Page  Sort Key Position  Source Component
ACCOUNT     DT_STR     0          0      6       1252       1                  Sort - DataWarehouse
PROGRAM     DT_STR     0          0      6       1252       2                  Sort - DataWarehouse
LOCATION    DT_STR     0          0      6       1252       3                  Sort - DataWarehouse
Project     DT_STR     0          0      6       1252       4                  Sort - DataWarehouse
SubProject  DT_STR     0          0      2       1252       5                  Sort - DataWarehouse
Activity    DT_STR     0          0      6       1252       6                  Sort - DataWarehouse
Funding     DT_STR     0          0      3       1252       7                  Sort - DataWarehouse
Client      DT_STR     0          0      6       1252       8                  Sort - DataWarehouse
NTWage      DT_STR     0          0      3       1252       9                  Sort - DataWarehouse
TYPE        DT_STR     0          0      1       1252       10                 Sort - DataWarehouse
Period      DT_STR     0          0      6       1252       11                 Sort - DataWarehouse
CO          DT_STR     0          0      2       1252       12                 Sort - DataWarehouse
Fin_Year    DT_I4      0          0      0       0          13                 Sort - DataWarehouse
Balance     DT_R8      0          0      0       0          14                 Sort - DataWarehouse
View 7 Replies
May 21, 2007
My lookup data is in a csv file, not a table. Is there a way to get the Lookup transformation to use the csv file as the source 'table'? Obviously the alternative is to load the file into a SQL Server table and use that, but I want to keep it simple if possible.
View 4 Replies
Jun 4, 2007
hi
I have a flat file as below
Date 20070606
Empid Salary X1 x2 x3 x4 x5
100 10000 .............................
where 10000 is the salary received on date 20070606 by empid 100,
and x1, x2, x3, ... are the remaining columns in the file. I need to extract the date and also continue reading the remaining data. How can I do this?
Any help?
thanks
ami
View 1 Replies
Feb 9, 2006
Hello,
Probably a silly question, yet as a DOTNET developer I'm trying to simulate DTS when, for example, I don't have permission to perform DTS on a production server.
In particular, I am interested in how rows are cached before the service decides to flush the buffer and write to the target table. Is it safe to assume DTS is cursor based?
View 1 Replies
Dec 6, 2007
In DTS/SSIS packages, how do you do data validation, or check for special characters and remove them, at the Copy Column level?
thanks,
View 3 Replies
Apr 22, 2015
Can someone tell me the difference between the Audit transformation and the Row Count transformation?
Both the Audit and Row Count transformations expose environment variables.
The only difference I am finding is that Row Count returns the count of the rows it is processing.
Apart from that, is there any other difference?
Also, tell me a scenario where I would need to use the Audit transformation.
View 3 Replies
Jan 5, 2008
The following statement is from Microsoft documentation:
If you use the ExclusionGroup property to specify that rows should only go to one or another of a group of outputs, as in the Conditional Split transformation, you must call the DirectRow method to select the appropriate destination for each row. When you have an error output, you must call DirectErrorRow to send rows with problems to the error output instead of the default output.
I have a question about this because I have never used the "ExclusionGroup" property. For example, I have a script component where I specify 4 separate outputs, because I am sending different groups of rows to each output. I accomplish this programmatically using a lot of conditionals and it works fine.
I did not have to use the "ExclusionGroup" property to do this. So I'm not sure why I would ever need it, or need to use DirectRow. I'm trying to understand this better, because I feel like I'm not understanding DirectRow, or how and when to use it.
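For reference, a minimal sketch of the DirectRow pattern as I understand it, assuming two synchronous outputs named GoodRows and BadRows placed in the same ExclusionGroup (the Amount column is made up for illustration):

Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
    ' With a non-zero ExclusionGroup, a row goes nowhere unless it is
    ' explicitly directed to exactly one of the outputs in the group.
    If Row.Amount_IsNull OrElse Row.Amount < 0 Then
        Row.DirectRowToBadRows()
    Else
        Row.DirectRowToGoodRows()
    End If
End Sub

That seems to be the distinction: with ordinary outputs you copy rows around yourself, while ExclusionGroup plus DirectRow makes the "each row to exactly one output" contract explicit.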
Thanks
View 1 Replies
Jun 11, 2001
Hi,
Is there a way to use T-SQL to select text data from a table and add it to a file on a hard drive, saving the information into the file without changing anything that was in the file before - in other words, without rewriting, just appending?
Help needed - urgent!
Thanks beforhand.
Dmitri
View 2 Replies
Jan 3, 2008
Can someone please clarify:
If you have a data file, and you only want CERTAIN rows to pass to the destination, ie) a table
and you are using a script task to accomplish this,
is this a synchronous or asynchronous transformation?
Q. And how do you assign the values to the output? Do you have to create output columns, or not?
I am very, very confused right now. I can't seem to find a decent answer to what is a very basic question, either in my SSIS book or in the documentation. Perhaps it is so basic that the question doesn't seem valid? I don't know. But I just don't understand this at all.
Thank you
View 9 Replies
Oct 1, 2007
I'm new to SSB, so please bear with me. Our application requirements are:
1) Web app gathers user input from a web UI.
2) Web app calls a stored procedure, passing in the user input gathered in step (1).
3) Procedure issues queries to multiple data sources (SQL Server 2005 db's) derived from the user input.
4) Procedure waits for replies from these multiple data sources, governed by a timeout in case a data source is unavailable or slow to respond.
5) Procedure aggregates the data returned in step (4) from multiple data sources into an XML document.
6) Procedure returns the XML document as an output parameter.
This is different than the usual SSB asynchronous application paradigm. In particular, I'm wondering:
How can I set up a synchronous dialog, where the procedure that issues the SEND waits for a reply from the target service? I don't want the initiator procedure to end after SENDing, but rather to wait for the replies to those messages so it can aggregate the data from the reply message bodies.
Thanks - Dana Reed
View 4 Replies
Sep 25, 2006
Hi, I need to know whether or not I am on the last row in my script component. If so, I would alter a column in the row to indicate that I am processing the last row. Is there a way to do it? I tried with ProcessInput, but by the time EndOfRowset() tells me that the last row has been processed, I can no longer alter the row in the buffer.
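I wonder whether the answer is to hold each row back until I know whether another one follows, using an asynchronous output; something like this sketch (SomeColumn and the IsLastRow flag are invented for illustration):

Private hasPending As Boolean = False
Private pendingValue As String

Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
    ' Write out the previously held row; it cannot be the last one,
    ' because we are looking at another row right now.
    If hasPending Then
        Output0Buffer.AddRow()
        Output0Buffer.SomeColumn = pendingValue
        Output0Buffer.IsLastRow = False
    End If
    pendingValue = Row.SomeColumn
    hasPending = True
End Sub

Public Overrides Sub Input0_ProcessInput(ByVal Buffer As Input0Buffer)
    MyBase.Input0_ProcessInput(Buffer)
    If Buffer.EndOfRowset() AndAlso hasPending Then
        ' The held row really was the last one, so flag it on the way out.
        Output0Buffer.AddRow()
        Output0Buffer.SomeColumn = pendingValue
        Output0Buffer.IsLastRow = True
        Output0Buffer.SetEndOfRowset()
    End If
End Sub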
Thank you,
Ccote
View 4 Replies
Sep 26, 2006
hi all,
I'm a newbie in SSIS. I created a package to transfer data from one table to another. Before the data flow, I added an Execute SQL Task that truncates the dest table if it exists and creates a new one if it doesn't.
I encounter an error (invalid object name) when I run the whole package, but no error if I execute the tasks one by one.
What's the workaround for this? Thanks!
View 3 Replies
Dec 4, 2007
Hi everyone!
I'm on my way to learn SSIS by myself and it's a little complicated!
I'd like to ask you one thing:
I have two tables at my data source: one is "Clients" and the other one is "ClientsAddress". That is, a client can have more than one address. Both tables are related by a one-to-many relationship, and the table descriptions are:
CLIENTS           CLIENTSADDRESS
#PK_Client        #PK_Client
other fields...   #ID_address
                  other fields...
What I intend to do is to obtain one table with approximately 3 fields, each one for a possible client address; something like this:
CLIENTS
PK_Client
ID_Address1
ID_Address2
ID_Address3
My question is: what transformation can I use, and how?
Thanks very much in advance!!
Emilio Leyes
Salta, Argentina
View 1 Replies
May 13, 2006
I have synchronous mirroring. Sometimes I lose the connection to the witness and mirror servers, and at those times the primary server is down. Is there any way I can change mirroring to asynchronous when the primary server is down due to a communication breakdown between the witness and mirrored servers? I can break mirroring, but to re-establish mirroring I have to backup and restore on the other side. So if I can change mirroring to asynchronous when the primary server is down due to a connection breakdown between the witness and mirrored server, then when the witness and mirror servers come back, I don't need to restore the entire database. Of course I could use asynchronous always, but that does not fail over automatically. I am thankful for all answers and suggestions. Thanks.
View 3 Replies
Aug 30, 2006
hi there
I am using SQL SERVER 2005.
I am creating a merge replication (between two databases). It's working well, but I need to go to the Local Publications, right-click there, select "View Synchronization Status", and then start the agent manually to synchronize the databases.
Now I want the two databases to replicate (merge) automatically, without going to any menu.
I mean the synchronization should take place automatically whenever either of the databases changes, without touching anything.
For Example:
If one record is inserted in a table (on commit) of database ONE, it should be reflected in the corresponding table of database TWO.
Any Idea, or Link or solution???
Gurpreet S. Gill
View 4 Replies
Mar 19, 2008
Porting an existing SQL 2k DTS job over to a SQL 2k5 SQL Server running SSIS.
Background:
The job loads data into an empty work table and performs some work before clearing out the work table.
This job runs every minute.
Question:
If the job happens to take longer than a minute, does SSIS create a second instance of the job?
Or perhaps it does what DTS did and reschedules the job for the next iteration?
Concern:
I need to know because there would be key constraint violations if another instance of the job started before the working table was cleared out.
Thanks in advance
View 1 Replies
Jan 15, 2008
I'm creating a script component that reads from an OLEDB source and writes to an OLEDB destination.
For every input row, I need to output several rows. I tried using the Row.DirectRowToOutput0() method inside a loop in the
Input0_ProcessInputRow routine, but that's not working. Should I be using AddRow() instead? If I use AddRow(), does this mean it needs to be an asynchronous transformation?
I remember seeing a blog entry (Jamie's?) that did almost exactly what I wanted, but I can't find it now.
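For reference, the AddRow() pattern I have in mind looks roughly like this - a sketch that assumes Output 0 has been made asynchronous (SynchronousInputID set to None), with OrderId and CopyNumber as made-up columns:

Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
    ' With an asynchronous output, every AddRow() call creates a
    ' brand-new output row, so one input row can fan out to several.
    For i As Integer = 1 To 3
        Output0Buffer.AddRow()
        Output0Buffer.OrderId = Row.OrderId
        Output0Buffer.CopyNumber = i
    Next
End Sub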
Any pointers appreciated
View 13 Replies
Jun 8, 2006
Hi
I'd like to know if there's a way to control the execution of ETL packages, such that:
Different packages, or at least packages that don't access the same table or database, run asynchronously with respect to each other; e.g., two different packages run at the same time
and
If a package is called for execution more than once by different requests, force the requests to run synchronously, i.e. one after the other. If this is possible, what resources would it require? Is this possible on, say, a dual or quad processor machine?
Thanks.
View 6 Replies
Jun 13, 2007
I'm from Russia, sorry if my English is not very good.
Here's the case:
1)-------------------------------
I made a DTS package in sql2000 that transfers a [sql table] into a [dbf file] via Jet4.
First, I create (in Delphi) an empty dbf with the same name and the same columns as in the sql table.
Second, I run my DTS with variables - the source and destination table names.
In the DTS there are a source, a destination and a transformation. After I send the variables (table names),
the transformation "arrow" needs to be "refreshed" to make the column names in both tables correspond to each other. For that, in the transformation I chose ActiveX Script mode and wrote this VB Script:
'**********************************************************************
'  Visual Basic Transformation Script
'**********************************************************************
'  Copy each source column to the destination column
Function Main()
    Dim i
    For i = 1 To DTSSource.Count
        DTSDestination(i) = DTSSource(i)
    Next
    Main = DTSTransformStat_OK
End Function
And it works
2)------------------------------
I want to do the same thing in sql2005 SSIS but can't figure out how...
I managed to make a package that receives (in variables) the table names and runs correctly.
But after I change those variable values to any other table names, it crashes -
Description: "component "OLE DB Source" (1)" failed validation and returned validation status "VS_NEEDSNEWMETADATA".
Of course this happens because I didn't "refresh" the transformation (and maybe also the source and dest), but I don't know how.
Anyone can help ?!
View 8 Replies
Apr 25, 2008
Hello
I've been given the task of migrating a DTS package to SSIS (neither of which I am particularly familiar with). The first job in the DTS package is to read a .ini file and set a bunch of variables. These variables are then used throughout the DTS package. After running the DTS package through the SSIS migration wizard, this job turns into an Execute Script task and I can't see if it is still reading the .ini file. However, the only real purpose of this step is to allow different parameters to be passed in development, test, production etc. So I am thinking this whole step can be removed and effectively replaced with a package configuration (I'll probably use an XML file). My understanding is that selecting the appropriate name/value pairs in the XML package configuration file means these values will be passed in at runtime, achieving the same functionality. Is this the correct way to do this in SSIS, or do I still need the .ini file and variables?
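If it helps, my understanding is that the generated .dtsConfig file would look roughly like this (the package name, variable name and value are placeholders I made up):

<?xml version="1.0"?>
<DTSConfiguration>
  <DTSConfigurationHeading>
    <DTSConfigurationFileInfo GeneratedFromPackageName="MyPackage" />
  </DTSConfigurationHeading>
  <!-- Each Configuration element overrides one package property at runtime -->
  <Configuration ConfiguredType="Property"
                 Path="\Package.Variables[User::ServerName].Properties[Value]"
                 ValueType="String">
    <ConfiguredValue>DEVSERVER01</ConfiguredValue>
  </Configuration>
</DTSConfiguration>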
Thanks for any advice on this issue.
Regards, John
View 7 Replies
Nov 10, 2014
We are seeing a high number of hadr_sync_wait types on our server during peak times after setting up AOAG. We have set up the sync type as synchronous commit with automatic failover. Can we change these settings to asynchronous commit and manual failover whenever we need to, and change them back to synchronous commit during off-peak timings? Any drawbacks because of these changes?
View 4 Replies
Mar 8, 2015
I am trying to implement a read-only replica to move much of the data reads for an application to the secondary replica. Initially I had the primary and secondary set to asynchronous commit. QA brought up an issue with creating entities from the application, because after the creation of an entity the application turns around and repopulates the entire aggregate object. Well, it seems that the application was reading the secondary replica before the data had been committed. Although I understand the issues that synchronous commits can cause, I went ahead and made the change, as I expected it to fix the issue. After changing the primary replica to synchronous we still had the error, so I also changed the secondary, although that makes no sense, but the issue remains.
View 5 Replies
Nov 21, 2006
Wanted to enquire how this is done. I tried doing it in the SetUsageType method I have overridden, but it only allows the description to be changed. Basically I need to change "Name".
The best option would be to change it instantly when a user selects a column from the inputs in the custom component, i.e. it changes the Output Alias to a desired value (Input tab in the advanced editor).
All this is being done in a custom component which I would like to be synchronous; I can achieve a similar result asynchronously.
Thanks
View 2 Replies
Apr 9, 2008
Hello,
I have developed some packages to load data into "Fact" tables in the data warehouse.
Some packages are OK, others are not. What is the problem? Some packages load fact tables with lots of "Lookup - Data Flow Transformation" components in the data flow task (lookups against dimension tables), but they are very, very slow - much too slow to be chosen as a solution.
Do you have any other solutions to avoid using the "Lookup - Data Flow Transformation"? Any other solution (SSIS, TSQL and so on...) is welcome to speed up the fact table loading process.
Thank in advance
View 7 Replies
Jan 8, 2008
Hi,
Can someone explain how the PR testharness is supposed to function? When I run it with and without debugging, and by using dtexec, other processes can start and stop before the console prompts for the date and number of days to process have been answered. If you just look at the testharness and the loadgroupfulldaily, the increment date and SQL Audits will finish prematurely, before the console data is even entered. Is there a property that I am missing?
Thanks,
Larry
View 1 Replies
Jul 30, 2007
I'm creating a new Integration Services project that copies data out of a SQL 7 server, transforms it, and places the data on a SQL 2005 (SP2) server. When defining a lookup transformation, if I specify an OLE DB connection to my server running SQL 7 as the reference table, as soon as I click on the Columns tab, Visual Studio closes/crashes and dumps me to Windows. I don't get an error message. If, however, I specify a connection to a server running SQL 8 (2000) or SQL 2005, no problems.
Is this supposed to happen?
My workstation is running Windows XP Pro SP2, Visual Studio 2005 Pro.
Microsoft SQL Server Integration Services Designer
Version 9.00.1399.00
The server that doesn't work for a reference table is running Windows 2000 Server SP4
SQL 7.00.623
Thanks for your help,
Kirk
View 6 Replies
Jul 1, 2015
My source has 2.2 million records. I'm performing an incremental load. In the lookup transformation I used the destination table for the reference, using full cache mode. The first time, the package executed successfully, but when I executed the package a second time, the package suddenly hung while running. Then I truncated the data from the destination table and restarted the SQL Server services. After doing all this I executed the package again and it worked, but when I executed the package a second time, again the package hangs. I have 8GB RAM and an i5 2.5 GHz processor laptop.
View 7 Replies