Interesting Scenario - Multiple Files Updating One Table
Sep 6, 2007
Hi,
I have a scenario where I have to read 2 files that update the same table (a temp staging table)... this comes from the source system's limitation on the number of columns it can export. What we have done as a workaround is to split the data into 2 files, where the 2nd file contains the first file's primary key so we know on which record to do an update...
Here is my problem...
The table that needs to be updated contains 9 columns. File one contains 5 of them and file2 contains 4 of them.
File 1 inserts 100 rows and leaves the other 4 columns as nulls and ready for file 2 to do an update into them.
File 2 inserts 10 rows but fails on 90 rows due to incorrect data.
Thus only 10 rows are successfully updated and ready to be processed, but 90 are incorrect. I want to still do processing on the existing 10 but can't afford to try to do processing on the broken ones...
The easy solution would be to remove the incorrect rows from the temp table whenever an error occurs in the 2nd file's package, by running a SQL query on the table using the primary keys that exist in both files; but when the error occurs on the Flat File source, I can't get the primary key.
What would be the best suggestion? Should I rather fail the whole package if 1 row bombs out? I can't put any logic in the following package that does the master file update/insert from the temp table because of the nature of the data.
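One hedged option, given the staging layout described: after the second file's load finishes, delete (or flag) the staging rows whose file-2 columns are still NULL, since those are exactly the rows file 2 failed to update. This avoids needing the primary key from the failed flat-file rows. A rough sketch, with placeholder names for the staging table and one of the file-2 columns:
DELETE FROM StagingTable
WHERE File2ColumnA IS NULL;   -- rows file 2 never managed to update
-- or, if the rows should be kept for investigation, flag them instead (RowStatus is a hypothetical column):
-- UPDATE StagingTable SET RowStatus = 'Incomplete' WHERE File2ColumnA IS NULL;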
Regards
Mike
View 4 Replies
Nov 16, 2006
Hi
I need to update one of the test databases:
There are about 140 tables which don't get used, but I don't want to delete them, so I want to prefix them with 'x'.
Is this possible, or do I have to alter each one, one at a time?
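If the 140 tables can be identified (by a name pattern or a list you keep), one hedged approach is to generate the sp_rename calls instead of typing them. A minimal sketch; add a WHERE filter for the unused tables before running the output:
SELECT 'EXEC sp_rename ''' + TABLE_NAME + ''', ''x' + TABLE_NAME + ''';'
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE';
-- review the generated statements, then copy them into a query window and execute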
Thanks
Rich
View 2 Replies
View Related
Sep 25, 2007
Hi,
I have a table called "tblProducts" with following fields:-
ProductID(Pk, AutoIncrement), ProductCode(FK), ProdDescr.
To the above table I have added a new field/column named "ProdLongDescr" (varchar, null).
I need to populate this newly added column with specific values for each row, depending on "ProductCode", which is different for every row. The problem is that I have 25 rows, so instead of writing 25 individual update scripts, is there a way a single query will do the same job instead of one update query per row? If so, can someone guide me how to achieve that, or point me to a good resource?
Below are a couple of the individual update scripts I wrote. "ProductCode" is different for all 25 rows.
Update tblValAdPackageElement SET ProdLongDescr = 'Slideshows' WHERE ProductCode = 'SLID'
And szElementDescr='Slideshow'
if @@error <> 0
begin
goto ErrPos
end
Update tblValAdPackageElement SET ProdLongDescr = 'CategorySlideshows' WHERE ProductCode = 'SLDC'
And szElementDescr='CategorySlideshow'
if @@error <> 0
begin
goto ErrPos
end
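One hedged way to collapse the 25 scripts into a single statement is a CASE expression keyed on ProductCode. This sketch assumes ProductCode alone identifies each row (the szElementDescr condition from the scripts above would need to be folded in if it does not); only the two pairs shown above are listed, so the rest must be added:
UPDATE tblValAdPackageElement
SET ProdLongDescr = CASE ProductCode
        WHEN 'SLID' THEN 'Slideshows'
        WHEN 'SLDC' THEN 'CategorySlideshows'
        -- add the remaining ProductCode / description pairs here
        ELSE ProdLongDescr
    END
WHERE ProductCode IN ('SLID', 'SLDC')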
Thanks,
View 1 Replies
View Related
Sep 3, 2014
I'm trying to update a checkbox from "False" to "True" within a single table for multiple records. I can update a single record using the script below. However, I'm having trouble applying additional Id's to the string.
(Works) - Update Name_Demo set KEY_CONTACT = 'true' where ID = 225249
(doesn't work) - Update Name_Demo set KEY_CONTACT = 'true' where ID = '225249, 210014, 216543'
It says query executes successfully but returned no rows.
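The quoted string makes the whole list a single value, so no ID matches it. An IN list keeps each ID separate; a minimal sketch against the same table:
UPDATE Name_Demo
SET KEY_CONTACT = 'true'
WHERE ID IN (225249, 210014, 216543)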
View 3 Replies
View Related
Apr 12, 2007
Hello All
I am trying to figure out if what I am attempting to do is possible, and whether or not my approach is wrong to begin with.
I am trying to build a custom report for our accounting system, which is Traverse from Open Systems. This is what I have done in the stored procedure thus far:
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO
ALTER PROCEDURE rptArFLSalesByCustItemized_sp
@custId pCustID,
@dateFrom datetime,
@dateThru datetime,
@itemIdFrom pItemId,
@itemIdThru pItemId
as
set nocount on
-- define some variables for previous year
declare @LYqty int, @LyAmt money, @LYfrom datetime, @LYthru datetime
-- set defaults
SET @itemIdFrom=ISNULL(@itemIdFrom,(SELECT MIN(itemId) FROM tblInItem))
SET @itemIdThru=ISNULL(@itemIdThru,(SELECT MAX(itemId) FROM tblInItem))
SET @LYfrom=DATEADD(YEAR,-1,@dateFrom)
SET @LYthru=DATEADD(YEAR, -1, @dateThru)
-- create small temp table to hold customer info
Create Table #tmpArCustInfo
(
custId pCustID,
custName VARCHAR (30)
)
-- populate customer temp table with info
Insert into #tmpArCustInfo
select custId, custName
from tblArCust
WHERE custId = @custId
-- create a temp table to hold the Data for each Item
Create Table #tmpArSalesItemized
(
itemId pItemId,
productLine VARCHAR (12),
pLineDesc VARCHAR (35),
descr VARCHAR (35),
LYQtySold int,
LYTDQtySold int,
QtySold int,
LYTDsales money,
totalSales money,
LastInvDate datetime
)
-- populate the temp table with all of the inventory items
insert into #tmpArSalesItemized
select ii.itemId, ii.productLine, ip.Descr, ii.Descr, 0,0,0,0,0, NULL
from tblInItem ii, tblInProductLine ip
where ip.productLine = ii.productLine
AND ii.itemId BETWEEN @itemIdFrom AND @itemIdThru
-- update table with this years quantities
update #tmpArSalesItemized
SET QtySold = (select SUM(QtyOrdSell) from tblArHistDetail hd
where TransId IN (select TransId from tblArHistHeader where custId = @custId)
AND orderDate IN (select OrderDate from tblArHistHeader where OrderDate BETWEEN @dateFrom AND @dateThru)
AND hd.partId BETWEEN @itemIdFrom AND @itemIdThru
GROUP BY hd.partId
)
-- Return the temp tables results
select * from #tmpArSalesItemized, #tmpArCustInfo
drop table #tmpArSalesItemized, #tmpArCustInfo
return
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
My problems begin where I want to start updating the quantities in the QtySold field. I have managed to get it to write the same sum into every row, but I cannot figure out how to update each row based on the sum of the quantity found for that item in tblArHistDetail. The trouble is that there is no reference to custId in that table either; custId resides in tblArHistHeader and is linked to the detail table via the TransId column. So really I need to update many rows based on criteria from 2 other tables.
Can anyone please help? I don't have a clue how to make this work, and most of what I have learned about SQL thus far has been from opening other stored procs in the accounting system and reading how the developers have done things.
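A hedged sketch of the per-item update, deriving custId through tblArHistHeader; column names are taken from the procedure above, so the join columns should be verified against the real schema:
UPDATE t
SET QtySold = s.Qty
FROM #tmpArSalesItemized t
JOIN (
    SELECT hd.partId, SUM(hd.QtyOrdSell) AS Qty
    FROM tblArHistDetail hd
    JOIN tblArHistHeader hh ON hh.TransId = hd.TransId
    WHERE hh.custId = @custId
      AND hh.OrderDate BETWEEN @dateFrom AND @dateThru
    GROUP BY hd.partId
) s ON s.partId = t.itemId
Items with no sales keep the 0 they were seeded with, since they simply do not appear in the derived table.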
Thanks
Jamie
View 1 Replies
View Related
May 16, 2008
Hi Guys
I have 4 databases in the same space. My users use all of them, with the same username and password to log into all 4 databases. In each of these databases I have put a control table to keep track of all users who have to reset their passwords.
The control table consists of username and flag fields. When the flag is ON (1) the user is forced to reset their password; when the flag is OFF (0) they are not.
When a user logs into any one of these databases and resets their password, how do I make sure that the control tables in the other databases are also updated, so the user is not forced to reset their password again when they log into those other databases later, since they use the same username and password for all databases?
I am planning to use a stored procedure which I will put into all four databases; when a user logs in and resets their password, that proc is called and automatically updates the other 3 control tables in the other 3 databases.
Some SQL examples will be very appreciated.
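A hedged sketch of that procedure, assuming the databases are named DB1 through DB4 and the control table is dbo.PasswordControl(username, flag); substitute the real names:
CREATE PROCEDURE dbo.usp_ClearResetFlag
    @username varchar(50)
AS
BEGIN
    SET NOCOUNT ON;
    -- clear the flag in every database so the user is only prompted once
    UPDATE DB1.dbo.PasswordControl SET flag = 0 WHERE username = @username;
    UPDATE DB2.dbo.PasswordControl SET flag = 0 WHERE username = @username;
    UPDATE DB3.dbo.PasswordControl SET flag = 0 WHERE username = @username;
    UPDATE DB4.dbo.PasswordControl SET flag = 0 WHERE username = @username;
END
The same copy of the procedure can live in all four databases as long as the three-part names stay fully qualified.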
Thanks.
View 5 Replies
View Related
Apr 30, 2015
I am trying to update multiple records in Table B from a single record in Table A. To identify the records that need to be updated I used this:
SELECT
    Table1.cfirstname AS 'Table 1 First',
    Table1.clastname AS 'Table 1 Last',
    Table1.ifamilyid,
    CR.icontactid,
    CR.lLiveswithStudent,
    Table2.cfirstname AS 'Table 2 First',
    Table2.cLastName AS 'Table 2 Last',
    Table2.ilocationid AS 'Table 2 Location',
    Table1.iLocationID AS 'Table 1 Location'
FROM Table1
JOIN Table3 CR ON Table1.istudentid = CR.istudentid
JOIN Table2 ON CR.icontactid = Table2.iContactID
WHERE CR.lLivesWithStudent = 1
  AND Table1.ifamilyid > 0
  AND Table1.ilocationid <> Table2.iLocationID
  AND Table1.lCurrent = 1
I need to update the ilocationid from Table 1 onto all Table 2 records related to Table 1, but there is no direct relation from Table 1 to Table 2. I needed Table 3 to make the connection from Table 1 to 2.
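A hedged sketch of the corresponding update, reusing the joins from the select above:
UPDATE Table2
SET ilocationid = Table1.iLocationID
FROM Table2
JOIN Table3 CR ON CR.icontactid = Table2.iContactID
JOIN Table1 ON Table1.istudentid = CR.istudentid
WHERE CR.lLivesWithStudent = 1
  AND Table1.ifamilyid > 0
  AND Table1.ilocationid <> Table2.iLocationID
  AND Table1.lCurrent = 1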
View 2 Replies
View Related
Jul 20, 2005
Hi all, I have the following application to do: I receive several .csv files from another application in a determined folder on my PC. Those files are named with the format log1.csv, logs2.csv, logs... The number of files is variable but the internal format is always: time_sec;level. So the files contain a field that may be used as a unique key in the target database. I'm trying to build a DTS package that should periodically import all the CSVs present in the folder and then destroy them if done successfully. Apparently it's not as simple as I supposed; I always have to give the name of the table I want to import into. Any idea?
View 1 Replies
View Related
Jan 21, 2005
Dear All,
I am importing all the files from a particular folder to a table in my database KB. It is working perfectly if I use it on the same system where the DB exists, but not when working from the network.
USE TESTDB
--Table Creation Starts here
Create table Account([ID] int IDENTITY PRIMARY KEY, Name Varchar(100),
AccountNo varchar(100), Balance money)
Create table logtable (id int identity(1,1),
Query varchar(1000),
Importeddate datetime default getdate())
--Table Creation ends here
---Stored Procedure Starts here
Create procedure usp_ImportMultipleFiles @filepath varchar(500),
@pattern varchar(100), @TableName varchar(128)
as
set quoted_identifier off
declare @query varchar(1000)
declare @max1 int
declare @count1 int
Declare @filename varchar(100)
set @count1 =0
create table #x (name varchar(200))
set @query ='master.dbo.xp_cmdshell "dir '+@filepath+@pattern +' /b"'
insert #x exec (@query)
delete from #x where name is NULL
select identity(int,1,1) as ID, name into #y from #x
drop table #x
set @max1 = (select max(ID) from #y)
--print @max1
--print @count1
While @count1 <= @max1
begin
set @count1=@count1+1
set @filename = (select name from #y where [id] = @count1)
set @query ='BULK INSERT '+ @Tablename + ' FROM "'+ @Filepath+@Filename+'"
WITH ( FIELDTERMINATOR = ",",ROWTERMINATOR = "")'
--print @query
exec (@query)
insert into logtable (query) select @query
end
drop table #y
--sp ends here
Exec usp_ImportMultipleFiles 'c:\myimport\', '*.csv', 'Account'
If i use the above Exec like
Exec usp_ImportMultipleFiles '\\kb-02\C$\MyImport\', '*.csv', 'Account'
I am getting the following error:
Could not bulk insert because file '\kb-02C$MyImportAccess is denied.' could not be opened.
Operating system error code 5(Access is denied.).
C Drive and MyImport folder is shared on system kb-02
Would appreciate your valuable help.
Thanking you in advance.
K006B
View 2 Replies
View Related
Sep 23, 2014
I have over 600 Excel .xlsx files that I have been trying to import to a SQL database table. I've been trying to complete this task with SSIS but no luck yet. I have seen several videos and read articles, but when I run the package the source is validated and I always get an error in the destination. I am using Excel 2010 and SQL Server 2012.
View 3 Replies
View Related
Jan 29, 2007
How do I import multiple text files (residing in a single folder) into a SQL Server table? I know how to import a single file but am not sure how multiple files could be loaded. Please guide.
Thanks,
HShah
View 1 Replies
View Related
Oct 28, 2015
I have a requirement to load multiple flat files into a target table.
I have created a package which loads the files into the target table using a Foreach Loop container.
But now the requirement has changed: I have to take only those files from the table where status = 'Success' and with the max JobId. The query below returns the records that need to be loaded.
select [JobLogKey],[SrcNm],[DestNm]
FROM [ConfigRep].[dbo].[JobLog]
Where [JobId]=
(Select Max(cast([JobId] as Int)) Jobid
FROM [ConfigRep].[dbo].[JobLog]
Where [JobStat]='Success')
Output:-
JobLogKey SrcNm  DestNm
268 H:Data PlatformSource FileClient2LocHGSSpecLocation.txt Location.txt
269 H:Data PlatformSource FileClient1LocHGSSpecLocation.txt Location.txt
I have to load the above 2 files, which are under SrcNm. I have created one variable called FileToLoad as Object and mapped it to the result set of the above query. I have created JobId, SrcNm and DestNm variables to catch the record on every loop. I have created 2 Foreach Loop containers.
Below is a screen shot of the outer Foreach loop. Up to here it is working fine, but the inner Foreach Loop container is not executing any task under it. How do I get it done?
View 3 Replies
View Related
Oct 24, 2007
I am getting the following error when trying to load multiple Excel files using a Foreach Loop container in SSIS. I tried to put the quotes in several different ways but still can't get rid of this error. I was able to successfully load a single Excel file, but when I use the Foreach Loop container, that's when I have problems. Any help is greatly appreciated. Thx.
TITLE: Package Validation Error
------------------------------
Package Validation Error
------------------------------
ADDITIONAL INFORMATION:
Error at Package1 [Connection manager "SourceConnectionExcel"]: The connection string components cannot contain unquoted semicolons. If the value must contain a semicolon, enclose the entire value in quotes. This error occurs when values in the connection string contain unquoted semicolons, such as the InitialCatalog property.
Error at Package1: The result of the expression ""Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + @[User::Folder] + @[User::file] + ";Extended Properties="Excel 8.0;HDR=NO";"
" on property "ExcelFilePath" cannot be written to the property. The expression was evaluated, but cannot be set on the property.
(Microsoft.DataTransformationServices.VsIntegration)
------------------------------
BUTTONS:
OK
------------------------------
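A hedged reading of that error: the double quotes around Excel 8.0;HDR=NO end the expression string early, so the semicolons inside Extended Properties are left unquoted. Escaping the inner quotes with backslashes in the connection string expression usually resolves it, e.g.:
"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + @[User::Folder] + @[User::file] + ";Extended Properties=\"Excel 8.0;HDR=NO\";"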
View 8 Replies
View Related
Oct 24, 2007
Hi, I am using SQL 2005 and this is what I am trying to do. I have multiple tables, say tbl1, tbl2, tbl3, etc., and each of them has columns, say col1, col2, col3, col4, col5, etc., plus a table_total. Each of these tables contains the same foreign key, say col2, and gets populated at different times. In table_total I need to store the sum of particular columns (like the sum of col3 from tbl1, and also the sum of col3 and the sum of col4 from tbl2) for a given foreign key value, under the respective column in table_total. So table_total has columns like sum_col3tbl1, sum_col4tbl2, sum_col4tbl3, sum_col5tbl2. table_total should get updated at the same time as tbl1, tbl2, tbl3, etc., so that it always has the most up-to-date values. Can someone suggest how to achieve this? Thanks...
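One hedged alternative (table and column names below are just the placeholders from the question): instead of maintaining a physical table_total, a view computes the sums on demand, so it can never be out of date:
CREATE VIEW dbo.vw_table_total
AS
SELECT k.col2,
       ISNULL(t1.sum_col3, 0) AS sum_col3tbl1,
       ISNULL(t2.sum_col3, 0) AS sum_col3tbl2,
       ISNULL(t2.sum_col4, 0) AS sum_col4tbl2
FROM (SELECT col2 FROM dbo.tbl1 UNION SELECT col2 FROM dbo.tbl2) AS k
LEFT JOIN (SELECT col2, SUM(col3) AS sum_col3 FROM dbo.tbl1 GROUP BY col2) AS t1 ON t1.col2 = k.col2
LEFT JOIN (SELECT col2, SUM(col3) AS sum_col3, SUM(col4) AS sum_col4 FROM dbo.tbl2 GROUP BY col2) AS t2 ON t2.col2 = k.col2
If a physical table_total is required, the same SELECT can feed a periodic UPDATE/INSERT, or triggers on tbl1/tbl2 can maintain it.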
View 2 Replies
View Related
Sep 25, 2007
Hi,
What's the most efficient way to store the following information:
* Table contains 1 million listings
* Each listing can be geo-targeted to any of the 200+ countries
* Searches return listings based on location
Storage options:
Option #1 (normalized)
* Listings (PK listingID int) [1 million rows]
* ListingLocations (listingID, locationID) [could be up to 200 million rows]
Option #2 (denormalized)
* Listings (PK listingID int, binary(32) with bit-mask consisting of 200 bits one for each location)
Usage: Usually the query will simply lookup listings based on some keywords. It will get back 50-200 listings. Then the application (C#) will filter the listings based on location.
Did anyone have experience with similar structures? Which option is more efficient?
I know that using the intersection table in Option #1 is the "proper" relational-DB way of doing things. However, I do not like the idea of storing the listingID so many times (once for each locationID).
Thanks,
Av
View 1 Replies
View Related
Nov 13, 2007
I began with a partition function as follows:
CREATE PARTITION FUNCTION [TimeTicksRangePFN](bigint) AS RANGE RIGHT FOR VALUES (633294720000000000, 633320640000000000, 633347424000000000, 633374208000000000, 633399264000000000)
These numbers happen to correspond to the dates 11/1/7, 12/1/7, 1/1/8, 2/1/8 and 3/1/8 in ticks respectively.
I began with a partition scheme as follows:
CREATE PARTITION SCHEME [TimeTicksRangePScheme] AS PARTITION [TimeTicksRangePFN] TO ([FG_xxx_EventArchive00001], [FG_xxx_EventArchive00002], [FG_xxx_EventArchive00003], [FG_xxx_EventArchive00004], [FG_xxx_EventArchive00005], [PRIMARY])
While running my "sliding window script", which I hoped would 1) roll off the oldest partition of my EventArchive table and 2) add a new partition with a tick boundary that equates to 3/5/8, I get an error related to my switch out table's index, the same table's filegroup and PRIMARY.
After getting the error, I scripted the partition function as a create in mgt studio and got…
CREATE PARTITION FUNCTION [TimeTicksRangePFN](bigint) AS RANGE RIGHT FOR VALUES (633320640000000000, 633347424000000000, 633374208000000000, 633399264000000000, 633402720000000000)
...which looks like what I had intended cuz the last boundary is the tick representation of 3/5/8 and the oldest has rolled off
scripting the scheme produced...
CREATE PARTITION SCHEME [TimeTicksRangePScheme] AS PARTITION [TimeTicksRangePFN] TO ([FG_xxx_EventArchive00001], [FG_xxx_EventArchive00003], [FG_xxx_EventArchive00004], [FG_xxx_EventArchive00005], [PRIMARY], [FG_xxx_EventArchive00001])
which looks nothing like what I intended, I thought I’d end up with …00002,…00003,…00004,…00005,…00001,PRIMARY
the script steps that seem most relevant start at the 5th step as follows...
5) creates table [dbo].Switch on the switch out filegroup with columns, PK and indexes matching exactly those of [dbo].EventArchive
6) switches partition 1 of [dbo].EventArchive to [dbo].Switch
7) ALTER PARTITION FUNCTION TimeTicksRangePFN() MERGE RANGE (633294720000000000) --this was the oldest date corresponding to 11/1/7
8) truncates [dbo].Switch
9) drops all indexes on [dbo].Switch except a clustered index (IX_TimeTicks), leaves PK constraint alone
10) ships the new data whose values range from 3/1/8 to less than 3/5/8 to [dbo].Switch and deletes them from their source
11) recreates all non clustered indexes on [dbo].Switch
12)ALTER TABLE [dbo].[Switch] WITH CHECK ADD CONSTRAINT RangeCK CHECK ([TimeTicks] < the number of ticks represented by 3/5/8)
13)ALTER PARTITION SCHEME TimeTicksRangePScheme NEXT USED [FG_xxx_EventArchive00001] --fg isnt really hardcoded
14)ALTER PARTITION FUNCTION TimeTicksRangePFN() SPLIT RANGE (the number of ticks represented by 3/5/8)
15)ALTER TABLE [dbo].[Switch] SWITCH TO [dbo].[EventArchive] PARTITION 5
step 15 is the one that fails with message "ALTER TABLE SWITCH statement failed. index 'xxx.dbo.Switch.IX_TimeTicks' is in filegroup 'FG_xxx_EventArchive00001' and partition 5 of index 'xxx.dbo.EventArchive.IX_TimeTicks' is in filegroup 'PRIMARY'.
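A hedged sketch of one thing that lines up with that message: ALTER TABLE ... SWITCH requires the source and the target partition to sit on the same filegroup, so the switch table and its clustered index would need to be created on the filegroup that partition 5 of EventArchive actually maps to (PRIMARY, per the error) rather than on the switch-out filegroup. Roughly, with the remaining columns elided and names taken from the steps above:
CREATE TABLE dbo.Switch
(
    TimeTicks bigint NOT NULL
    -- ...remaining columns, PK and constraints matching dbo.EventArchive...
) ON [PRIMARY];
CREATE CLUSTERED INDEX IX_TimeTicks ON dbo.Switch (TimeTicks) ON [PRIMARY];
Alternatively, the NEXT USED filegroup in step 13 could be chosen so the new partition lands on the filegroup the switch table already uses; either way the two sides of the SWITCH must match.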
View 1 Replies
View Related
Nov 7, 2007
I have a partition function as follows:
CREATE PARTITION FUNCTION [TimeTicksRangePFN](bigint) AS RANGE RIGHT FOR VALUES (633294720000000000, 633320640000000000, 633347424000000000, 633374208000000000, 633399264000000000)
These numbers happen to correspond to the dates 11/1/7, 12/1/7, 1/1/8, 2/1/8 and 3/1/8 in ticks respectively.
I have a partition scheme as follows:
CREATE PARTITION SCHEME [TimeTicksRangePScheme] AS PARTITION [TimeTicksRangePFN] TO ([FG_xxx_EventArchive00001], [FG_xxx_EventArchive00002], [FG_xxx_EventArchive00003], [FG_xxx_EventArchive00004], [FG_xxx_EventArchive00005], [PRIMARY])
After running my "sliding window script", which I wanted to switch out the lowest partition with, and add a new partition with a new tick boundary that equates to 3/5/8, I get an error saying that 1 of my switch out table's indexes is in filegroup 1 but partition 5's index of the same name is in PRIMARY. At this point the partition function looks like...
CREATE PARTITION FUNCTION [TimeTicksRangePFN](bigint) AS RANGE RIGHT FOR VALUES (633320640000000000, 633347424000000000, 633374208000000000, 633399264000000000, 633402720000000000) which looks like what I had intended cuz the last boundary is the tick representation of 3/5/8 and the oldest has rolled off
and the scheme looks like...
CREATE PARTITION SCHEME [TimeTicksRangePScheme] AS PARTITION [TimeTicksRangePFN] TO ([FG_xxx_EventArchive00001], [FG_xxx_EventArchive00003], [FG_xxx_EventArchive00004], [FG_xxx_EventArchive00005], [PRIMARY], [FG_xxx_EventArchive00001]) which looks nothing like what I intended, I thought I'd end up with ...00002, ...00003, ...00004, ...00005, ...00001, PRIMARY
the relevant script steps are...
5) creates table [dbo].Switch on the switch out filegroup with columns, PK and indexes matching those of [dbo].EventArchive (allows default location for indexes)
6) switches partition 1 of [dbo].EventArchive to [dbo].Switch
7) ALTER PARTITION FUNCTION TimeTicksRangePFN() MERGE RANGE (633294720000000000)
8) truncates [dbo].Switch
9) drops all indexes on [dbo].Switch except a clustered index (IX_TimeTicks), leaves PK constraint alone
10) ships the new data whose values range from 3/1/8 to less than 3/5/8 to [dbo].Switch and deletes them from their source
11) recreates all non clustered indexes on [dbo].Switch
12)ALTER TABLE [dbo].[Switch] WITH CHECK ADD CONSTRAINT RangeCK CHECK ([TimeTicks] < the number of ticks represented by 3/5/8)
13)ALTER PARTITION SCHEME TimeTicksRangePScheme NEXT USED [FG_xxx_EventArchive00001]
14)ALTER PARTITION FUNCTION TimeTicksRangePFN() SPLIT RANGE (the number of ticks represented by 3/5/8)
15)ALTER TABLE [dbo].[Switch] SWITCH TO [dbo].[EventArchive] PARTITION 5
step 15 is the one that fails with message "ALTER TABLE SWITCH statement failed. index 'xxx.dbo.Switch.IX_TimeTicks' is in filegroup 'FG_xxx_EventArchive00001' and partition 5 of index 'xxx.dbo.EventArchive.IX_TimeTicks' is in filegroup 'PRIMARY'.
View 5 Replies
View Related
Oct 15, 2009
Is there a way to update multiple rows in one UPDATE query in T-SQL? What I want to do, for example: I have a table containing
code : desc
1 : a
2 : b
3 : c
4 : d
1 : e
3 : f
I wanted to update it to
code : desc
1 : x
2 : b
3 : y
4 : d
1 : x
3 : y
how to do it?
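A hedged sketch using a CASE expression; the table is assumed to be called MyTable and the columns code and [desc] ([desc] is bracketed because DESC is a reserved word):
UPDATE MyTable
SET [desc] = CASE code
        WHEN 1 THEN 'x'
        WHEN 3 THEN 'y'
        ELSE [desc]
    END
WHERE code IN (1, 3)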
View 5 Replies
View Related
Sep 21, 2006
I am very new to SQL Server 2005. I have created a package to load data from a flat delimited file into a database table. The initial load has worked. However, in the future I will have flat files used to update the table: some of the records will need to be inserted and some will need to update existing rows. I am trying to do this from SSIS, but I am very lost as to how to do this.
Any suggestions?
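One hedged pattern (SQL Server 2005 has no MERGE, so this assumes the file is first loaded into a staging table, here called StagingTable, matched to TargetTable on a placeholder key column BusinessKey):
-- update rows that already exist
UPDATE t
SET t.SomeColumn = s.SomeColumn
FROM TargetTable t
JOIN StagingTable s ON s.BusinessKey = t.BusinessKey;
-- insert rows that do not exist yet
INSERT INTO TargetTable (BusinessKey, SomeColumn)
SELECT s.BusinessKey, s.SomeColumn
FROM StagingTable s
WHERE NOT EXISTS (SELECT 1 FROM TargetTable t WHERE t.BusinessKey = s.BusinessKey);
Inside SSIS the same split can also be done with a Lookup transform, sending matched rows down an update path and unmatched rows to the insert destination.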
View 7 Replies
View Related
Jun 16, 2015
I have a requirement where I have around 15 different flat files. The filenames are fixed but the folder path can change (I think I should use a variable for the folder path). These 15 files' data should go to their respective tables in the database.
Do I need to create a separate data flow task for each file, or separate packages? In addition to this, for example: while importing product data into the product table, if a product ID already exists, we need to ignore it and upload only the new records.
View 4 Replies
View Related
Apr 30, 2006
Well, I need to know how to manually remove every piece of the Database Services from my previous SQL Server Express CTP installation, because the final release won't let me update these Database Services. Any suggestion besides removing the services manually? Thank you all.
View 5 Replies
View Related
Jun 27, 2006
I have a couple of hundred flat files to import into database tables using SSIS.
The files can be divided into groups by the format they use. I understand that I could import each group of files that have a common format at the same time using a Foreach Loop Container.
However, the example for the Foreach Loop Container has multiple files all being imported into the same database table. In my case, each file needs to be imported into a different database table.
Is it possible to import each set of files with the same format into different tables in a simple loop? I can't see a way to make a Data Flow Destination item accept its table name dynamically, which seems to prevent me doing this.
I suppose I could make a different Data Flow Destination item for each file, in the Data Flow. Would that be a reasonable solution, or is there a simpler solution, or should I just resign myself to making a separate Data Flow for every single file?
View 9 Replies
View Related
Aug 14, 2012
I am trying to restore multiple .bak backup SQL database files onto a new server. However, I have found that it will not allow me to restore multiple databases at once. Is there a way to do this so that I do not have to manually upload one at a time? I tried adding all the .bak files at once to the backup device window but it only did the first one listed. It would be so much easier to restore them all at once so that I do not have to continue this manual process. I am restoring them via device.
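Restores are per database, so the usual workaround is to script one RESTORE per .bak file and run them as a batch. A hedged sketch for one file; the database name, paths and logical file names below are placeholders (RESTORE FILELISTONLY against each .bak shows the real logical names):
RESTORE DATABASE [DatabaseOne]
FROM DISK = N'D:\Backups\DatabaseOne.bak'
WITH MOVE 'DatabaseOne_Data' TO N'D:\Data\DatabaseOne.mdf',
     MOVE 'DatabaseOne_Log'  TO N'D:\Logs\DatabaseOne_log.ldf',
     RECOVERY, STATS = 10;
Repeating the pattern for each .bak (or generating the statements from a directory listing) lets the whole set run unattended.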
View 13 Replies
View Related
Feb 15, 2008
I need to be able to bulk insert a bunch of tables from their corresponding flat files. I have created an XML file (see below) which has the file name/table name pair at each node. I then created a Foreach Loop task and used the Node enumeration type and the following OuterXPathString: ReferenceFiles/File. At this point I get lost. How do I pass the two inner node values (file name and table name) to variables which I can then use as expressions for the Bulk Insert task inside the Foreach?
Here is XML file:
Code Snippet
<ReferenceFiles>
<File>
<FileName>Ref_Categories.txt</FileName>
<TableName>Ref_Categories</TableName>
</File>
<File>
<FileName>Ref_Configs.txt</FileName>
<TableName>Ref_Configs</TableName>
</File>
</ReferenceFiles>
Thanks.
View 1 Replies
View Related
Nov 29, 2007
I used the data export wizard to export a single table to a single flat file (multiple wasn't allowed). I saved the package as a *.dtsx file which I'm attempting to edit to add the additional tables.
Creating additional sources is fairly easy: copy the first source and change the table name.
I've tried copying the destination connection and changing to a new text file, but can't get past having to add each column manually to the new destination.
How can I duplicate the mapping that must be taking place in the wizard in the *.dtsx editing environment?
This seems like a simple / common task, but I've been unable to find a solution.
Thanks, Richard
View 1 Replies
View Related
Jun 1, 2007
Hi,
I have searched but not found quite the best way to look at this so far..
I have an application that outputs data to several text files (up to 30). These have commonality by an object name, but then contain completely different column data.
In DTS I had each of the source text file connections going to one OLE DB connection and then individual transform data tasks pointing to the one OLE DB connection.
Looking at SSIS, it would appear that I would need to have one source and one destination for each of these and therefore 30 parallel data flows?
Just wondering if there is a neater way of doing this??
It is a regular data import that happens a few times a day - the text files are named the same as the SQL tables - ie app_userdata.txt goes to app_userdata table.
Hope that explains ok and thanks in advance.
Mike
View 3 Replies
View Related
Aug 28, 2006
Hello, I read the Data Tutorials from this site. In the DAL, different TableAdapters are created for different purposes. I want to know: when I want to update more than one table at a time, I have to fire update queries from more than one TableAdapter. Now how do I maintain consistency? What if one adapter succeeds and the other fails to update its tables in my database? How do I solve this problem?
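One hedged alternative is to push both updates into a single stored procedure wrapped in a transaction and call that from one adapter, so either both tables change or neither does. A minimal sketch, assuming SQL Server 2005 or later; the table, column and parameter names are placeholders:
CREATE PROCEDURE dbo.usp_UpdateBothTables
    @Id int,
    @ValueA varchar(50),
    @ValueB varchar(50)
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;
    BEGIN TRY
        -- both statements succeed or neither does
        UPDATE dbo.TableA SET ColA = @ValueA WHERE Id = @Id;
        UPDATE dbo.TableB SET ColB = @ValueB WHERE Id = @Id;
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        ROLLBACK TRANSACTION;
        RAISERROR('Update failed; both tables were rolled back.', 16, 1);
    END CATCH
END
On the ADO.NET side, wrapping the two adapter Update calls in a System.Transactions.TransactionScope is the other common way to get the same all-or-nothing behaviour.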
View 3 Replies
View Related
Mar 17, 2008
How would I update a table where id = a list of ids? Do I need to parse the string idList? I am being passed a comma separated string of int values from a Flex application, example: 1,4,7,8. Any help much appreciated.
Barry
[WebMethod]
public int updateFirstName(String toUpdate, String idList)
{
    SqlConnection con = new SqlConnection(connString);
    try
    {
        con.Open();
        SqlCommand cmd = new SqlCommand();
        cmd.Connection = con;
        cmd.CommandText = "UPDATE tb_staff SET firstName = @firstName WHERE id = @listOfIDs";

        SqlParameter firstName = new SqlParameter("@firstName", toUpdate);
        SqlParameter listOfIDs = new SqlParameter("@listOfIDs", idList);

        cmd.Parameters.Add(firstName);
        cmd.Parameters.Add(listOfIDs);

        int i = cmd.ExecuteNonQuery();
        con.Close();
        return 1;
    }
    catch
    {
        return 0;
    }
}
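On the SQL side a single parameter cannot act as an IN list, so the comma-separated string has to be split into rows first. A hedged T-SQL sketch; the table and column names are taken from the C# above, and the XML split trick assumes SQL Server 2005 or later:
CREATE PROCEDURE dbo.usp_UpdateFirstName
    @firstName varchar(50),
    @idList    varchar(4000)
AS
BEGIN
    SET NOCOUNT ON;
    -- turn '1,4,7,8' into XML like <i>1</i><i>4</i>... and shred it into rows
    DECLARE @x xml;
    SET @x = CAST('<i>' + REPLACE(@idList, ',', '</i><i>') + '</i>' AS xml);

    UPDATE tb_staff
    SET firstName = @firstName
    WHERE id IN (SELECT n.value('.', 'int') FROM @x.nodes('/i') AS t(n));
END
The C# method would then call this procedure (CommandType.StoredProcedure) with the two parameters instead of running the inline UPDATE.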
View 3 Replies
View Related
Jun 25, 2002
I have a problem like this: I have an application which should be internet hosted. All transactions should be updated to the central server on the net as well as to an intranet DB server (I'll have a DB server - SQL Server 2000 - in each intranet). Is this possible? I'm planning to use ASP to develop the net based application.
Thank you in advance..
Geetha R
View 2 Replies
View Related
Mar 19, 2008
Hi,
I'm trying to update a table with a value that is dependent on another table.
Scenario
Table 1 has 2 fields: FieldID, FieldA
Table 2 links each Table 1 row to a specific row in Table 3 using FieldID
Table 3 has 2 fields: FieldID, FieldA
I need to update Table1.FieldA to equal the value in Table3.FieldA in the appropriate row.
I was attempting this along the lines of:
UPDATE Table1
SET Table1.FieldA=
(
SELECT DISTINCT Table3.FieldA FROM Table1, Table2, Table3
WHERE Table1.FieldID=Table2.Table1ID
AND Table2.Table3ID=Table3.FieldID
)
However, I realised that the select query would not know which Table3 row to look at.
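A hedged rewrite that ties each Table1 row to its own Table3 row by joining inside the update; the column names are taken from the scenario above:
UPDATE Table1
SET FieldA = Table3.FieldA
FROM Table1
JOIN Table2 ON Table2.Table1ID = Table1.FieldID
JOIN Table3 ON Table3.FieldID = Table2.Table3ID
Because the joins run per row, each Table1 row picks up the FieldA value from the Table3 row it is linked to through Table2.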
I hope that makes sense and if someone can help me urgently I would be most appreciative!
Ian
View 1 Replies
View Related
Apr 25, 2014
I have a large table with many columns that have rows with N/A or null values, which are gotten from a bulk insert.
In order to format the data properly, we need to convert the data from N/A or null to 0, as the original values come from the supplier.
In my code the best way to achieve this is running multiple queries as
update csv_testing set testing5 = 0 where testing5 like'%N/A%'
update csv_testing set testing6 = 0 where testing6 like'%N/A%'
update csv_testing set testing7 = 0 where testing7 like'%N/A%'
update csv_testing set testing9 = 0 where testing9 like'%N/A%'
etc for about 12 columns
Is there a better way of doing this?
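A hedged consolidation into a single statement (the column list is shortened; this assumes the columns are character columns, since LIKE is used, and that NULL should also become 0):
UPDATE csv_testing
SET testing5 = CASE WHEN testing5 IS NULL OR testing5 LIKE '%N/A%' THEN '0' ELSE testing5 END,
    testing6 = CASE WHEN testing6 IS NULL OR testing6 LIKE '%N/A%' THEN '0' ELSE testing6 END,
    testing7 = CASE WHEN testing7 IS NULL OR testing7 LIKE '%N/A%' THEN '0' ELSE testing7 END,
    testing9 = CASE WHEN testing9 IS NULL OR testing9 LIKE '%N/A%' THEN '0' ELSE testing9 END
    -- repeat the CASE pattern for the remaining columns
One scan replaces a dozen, at the cost of touching every row; adding a WHERE that checks any of the columns keeps the update limited to rows that actually need it.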
View 2 Replies
View Related
Apr 2, 2007
Hi guys, I'm trying to update 2 tables in one stored procedure. However, I'm getting errors. Here's the code I have:
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[snow_ors_additionalInfoUpdate]') and OBJECTPROPERTY(id, N'IsProcedure') = 1)
drop procedure [dbo].[snow_ors_additionalInfoUpdate]
GO
CREATE PROCEDURE dbo.snow_ors_additionalInfoUpdate
@Reference int,
@CanTravel int,
@SEEmployee varchar(100),
@SE_EMPLOYEE_FROM datetime,
@SE_EMPLOYEE_TO datetime,
@WorkHours Varchar(100),
@DrivingLicence varchar(100),
@CriminalConvictions bit,
@CriminalConvictionsDetails1 text,
@CriminalConvictionsDate1 datetime,
@CriminalConvictionsDetails2 text,
@CriminalConvictionsDate2 datetime,
@CriminalConvictionsDetails3 text,
@CriminalConvictionsDate3 datetime,
@VacancyMonitoring Varchar(100),
@VacancyMonitoringDetails Varchar(50),
AS
UPDATE Account
SET CanTravel= @CanTravel,
SEEmployee = @SEEmployee,
WorkHours= @WorkHours,
DrivingLicence = @DrivingLicence,
CriminalConvictions = @CriminalConvictions,
CriminalConvictionsDetails1 = @CriminalConvictionsDetails1,
CriminalConvictionsDate1 = @CriminalConvictionsDate1,
CriminalConvictionsDetails2 = @CriminalConvictionsDetails2,
CriminalConvictionsDate2 = @CriminalConvictionsDate2,
CriminalConvictionsDetails3 = @CriminalConvictionsDetails3,
CriminalConvictionsDate3 = @CriminalConvictionsDate3
WHERE Account.Reference = @Reference
UPDATE Application
SET VacancyMonitoring = @VacancyMonitoring,
VacancyMonitoringDetails = @VacancyMonitoringDetails,
WHERE Application.Reference = @Reference
GO
and here are the errors i'm getting:
Server: Msg 156, Level 15, State 1, Procedure snow_ors_additionalInfoUpdate, Line 19
Incorrect syntax near the keyword 'AS'.
Server: Msg 156, Level 15, State 1, Procedure snow_ors_additionalInfoUpdate, Line 37
Incorrect syntax near the keyword 'WHERE'.
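A hedged reading of both errors: they point at trailing commas. The parameter list ends with a comma right before AS, and the second SET list ends with a comma right before WHERE. Removing those two commas from the procedure above should clear both messages, i.e.:
    @VacancyMonitoringDetails Varchar(50)   -- no comma before AS
AS
and
    VacancyMonitoringDetails = @VacancyMonitoringDetails   -- no comma before WHERE
WHERE Application.Reference = @Reference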
thanks again guys!
View 2 Replies
View Related