I have to pick up a row from Customers and transfer it to CustomerMaster and CustomerDetails. The CustomerId generated in CustomerMaster must become the CustomerId of CustomerDetails during the transfer.
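A minimal sketch of one way to carry the new key across, assuming CustomerId is an IDENTITY column on CustomerMaster and that the other column names (CustomerName, Address, Phone) are placeholders:

Code:
DECLARE @SourceCustomerId INT
SET @SourceCustomerId = 1                  -- the Customers row being transferred

DECLARE @NewCustomerId INT

-- copy the master part and capture the identity value it was given
INSERT INTO dbo.CustomerMaster (CustomerName)
SELECT CustomerName
FROM dbo.Customers
WHERE CustomerId = @SourceCustomerId

SET @NewCustomerId = SCOPE_IDENTITY()

-- reuse that key for the details part
INSERT INTO dbo.CustomerDetails (CustomerId, Address, Phone)
SELECT @NewCustomerId, Address, Phone
FROM dbo.Customers
WHERE CustomerId = @SourceCustomerId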
I have 2 tables with the following structure:

CREATE TABLE [dbo].[table1] (
    [RID] [int] IDENTITY (1, 1) NOT NULL ,
    [RText] [varchar] (400) NULL ,
    [DateModified] [datetime] NULL
) ON [PRIMARY]

CREATE TABLE [dbo].[Table2] (
    [GrpRID] [int] IDENTITY (1, 1) NOT NULL ,
    [GrpID] [varchar] (10) NOT NULL ,
    [RID] [int] NOT NULL ,
    [Status] [bit] NULL ,
    [SortOrder] [int] NULL ,
    [DateModified] [datetime] NULL
) ON [PRIMARY]
GO

I am transferring these 2 tables between 2 SQL Servers based on GrpRID from Table2. Since RID is an identity in table1, a specific RID sometimes carries different text on the second server. Somehow I need to match the right text from server1 to server2, and if the text doesn't exist, create a new entry in table1 and update Table2 so it reflects the correct RID.
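A sketch of how the re-keying could be done on the destination server, assuming a linked server named [Server1] points at the source, that Table2 arrives with the RID values it had on server1, and that RText is the value to match on (the source database name SourceDb is a placeholder):

Code:
-- 1) add any text values that exist on server1 but not locally
INSERT INTO dbo.table1 (RText, DateModified)
SELECT s.RText, GETDATE()
FROM [Server1].SourceDb.dbo.table1 AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.table1 AS d WHERE d.RText = s.RText)

-- 2) re-point Table2.RID at the local RID that carries the same text
UPDATE t2
SET t2.RID = d.RID
FROM dbo.Table2 AS t2
JOIN [Server1].SourceDb.dbo.table1 AS s ON s.RID = t2.RID
JOIN dbo.table1 AS d ON d.RText = s.RText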
Has anyone noticed that when they transfer SQL tables from one server (or machine) to another, the primary keys drop from the table (or is it just me)? If so, has anyone figured out why, and how to rectify this (apparent) error?
I have an SSIS package that transfers a couple of tables from one database to another database on the same server. It works fine on most of the machines we tested with. However, on one of the customer machines it consistently fails with the error message: The return value was unknown. The process exit code was -1073741795. The step failed.
This package runs as a scheduled job on the SQL Agent. When I ran SQL Profiler to see what is going on, I noticed that in the last step before the bulk insert it gets the collation and the schema id of the 2 tables. My guess is that it compares these values between the source and destination databases and makes sure everything is OK before copying.
On a machine where this works, it goes ahead with the next step, which is the bulk insert itself. On the machines where it doesn't work, it stops right after this step, i.e. it does not even call the 'bulk insert' API. Which makes me think it is doing some kind of validation with these values and getting something that is not expected.
If the collation or the schema were an issue, why throw a 'return value unknown' error? Could the error be more specific? Any other possible reasons for such behavior? Any clues?
I imported all rows of my txt file using SSIS 2005 into a table. I am now trying to figure out how to split out the header, payment rows, and maintenance rows. First, some information.
An example of the table results is here: http://www.webfound.net/split.txt. The table has just one field of type varchar(100) because the incoming file is a fixed-length file at 100 bytes per row.
The header rows are the rows with HD in them, followed by detail rows for that header (see here: http://www.webfound.net/rows.jpg).
I need to
1) Split out the header rows into a header table
2) Split out the maintenance rows (related to the header) into a maintenance table
3) Split out the payment rows (related to the header) into a payment table
I'll need to maintain a PK/FK relationship between each header and its corresponding maintenance and payment rows in the other 2 tables.
To determine whether a row is a payment or a maintenance row, I need to compare chars 30-31: if they contain 'MT' it is a maintenance row, else it is a payment row.
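A sketch of the row split in plain T-SQL, assuming the imported rows sit in a staging table dbo.Staging with the single column RowData varchar(100), that 'HD' identifies header rows, and that the target tables exist with a matching RowData column (tying each detail row back to its header's key would still need a running header id, e.g. from an SSIS Script Component, which is beyond this sketch):

Code:
-- header rows
INSERT INTO dbo.Header (RowData)
SELECT RowData FROM dbo.Staging
WHERE RowData LIKE '%HD%'

-- maintenance rows: characters 30-31 contain 'MT'
INSERT INTO dbo.Maintenance (RowData)
SELECT RowData FROM dbo.Staging
WHERE SUBSTRING(RowData, 30, 2) = 'MT'
  AND RowData NOT LIKE '%HD%'

-- everything else is a payment row
INSERT INTO dbo.Payment (RowData)
SELECT RowData FROM dbo.Staging
WHERE SUBSTRING(RowData, 30, 2) <> 'MT'
  AND RowData NOT LIKE '%HD%'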
I am using MS Access 2003 SP2 to maintain some data tables. I use SSIS to transfer them to SQL Server 2005, Enterprise Edition.
When I run the SSIS package from within Visual Studio 2005, the package runs without error.
When I try to run the same SSIS package by double-clicking on it in my File System (which invokes the Execute Package Utility, Version: 1.0) none of the tables get copied. Instead all I receive is a message for each table,
Error: SSIS Error Code DTS_E_PRODUCTLEVELTOLOW. The product level is insufficient for component "Data Conversion 1" (49).
The only data conversion I perform is double-byte characters to single-byte characters.
Bob Bojanic, MSFT, made a few suggestions about this in another thread -- but I have created this new thread to help focus on this specific issue. In particular, he asked if we have installed the complete SSIS support for SQL Server 2005, Enterprise Edition, and my network support and database support staff assure me that such complete SSIS support was installed.
Are others having this problem?
Dan
(I just took a look at some of the transformations in the Data Conversion task, and many of them are using an Output Alias identical to the Input Column name. Might that be causing the problem? I will try changing the Output Alias for some tables and see if they then transfer correctly. The "identical name" Output Alias values were created by the Migration Wizard for a DTS 2000 package.)
I have 3 tables with the following schema:

Table <Category> {
    UniqueID,
    LastDate DateTime
}

Assume the following tables with data following the above schema:

Table Cat1 {
    1, D1
    2, D2
    3, D3
}
Table Cat2 {
    2, D4
    3, D5
    4, D6
}
Table Cat3 {
    1, D7
    3, D8
    5, D9
}

I have a Master table and the schema is as follows:

Table master {
    UniqueId,
    Cat1 DateTime, -- This is same as the table name
    Cat2 DateTime, -- This is same as the table name
    Cat3 DateTime  -- This is same as the table name
}

After inserting the data from all these 3 tables, I want my master table to look like this:

Table Master {
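The expected result is cut off above, but if the intent is one master row per UniqueId with the LastDate from each category table (and NULL where an id is missing from a table), a sketch would be:

Code:
INSERT INTO dbo.master (UniqueId, Cat1, Cat2, Cat3)
SELECT ids.UniqueID,
       c1.LastDate, c2.LastDate, c3.LastDate
FROM (SELECT UniqueID FROM dbo.Cat1
      UNION
      SELECT UniqueID FROM dbo.Cat2
      UNION
      SELECT UniqueID FROM dbo.Cat3) AS ids
LEFT JOIN dbo.Cat1 AS c1 ON c1.UniqueID = ids.UniqueID
LEFT JOIN dbo.Cat2 AS c2 ON c2.UniqueID = ids.UniqueID
LEFT JOIN dbo.Cat3 AS c3 ON c3.UniqueID = ids.UniqueID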
I have a form to assign JOB SITES to a previously created PROJECT. The JOB SITES appear in a DataList because they vary by customer; there can be 3 to 50 JOB SITES per PROJECT. I have a "PROJECT" table with all the necessary fields for project information and a "JOBSITES" table for job sites. I also created a new table called "PROJECTSITES" which has only 2 columns: "ProjectId" and "SiteId". What I am trying to do is insert multiple rows into that "PROJECTSITES" table based on which checkboxes were checked. A checkbox is located next to each site and I want to be able to select only the ones I need. By the way, the DataList is located inside a FormView and has its own data source, which already determines which JOBSITES to display.

Sample:
ProjectId - SiteId
1 - 5
1 - 9
1 - 16
1 - 18
1 - 20
1 - 27
1 - 31

ProjectId stays the same; only the values for SiteId differ. I hope I am explaining it right. Do I have to use some sort of loop to go through the automatically populated DataList records, and how do I make multiple inserts to the database table? We use SQL Server 2005 and VB for the code-behind. Please ask if I missed some information. Thank you in advance.
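On the SQL side, a loop over the checked DataList items can issue one parameterized INSERT per site, or several sites can be batched into a single statement. A sketch of the batched form on SQL Server 2005 (which predates multi-row VALUES lists, hence the UNION ALL), using the sample ids above:

Code:
INSERT INTO dbo.PROJECTSITES (ProjectId, SiteId)
SELECT 1, 5  UNION ALL
SELECT 1, 9  UNION ALL
SELECT 1, 16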
I have: INSERT INTO TABLE2 (COL1, COL2, COL3) SELECT COL1, COL4, COL7 FROM TABLE1. What I want is something like: INSERT INTO TABLE2 (COL1, COL2, COL3) SELECT COL4, COL7 FROM TABLE1, but with one more value added (for example a Username).
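A constant or variable can simply be selected alongside the columns so the column counts line up; a sketch, assuming the extra value should land in COL1:

Code:
DECLARE @Username VARCHAR(50)
SET @Username = 'SomeUser'

INSERT INTO TABLE2 (COL1, COL2, COL3)
SELECT @Username, COL4, COL7
FROM TABLE1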
I have a table on one server that I would like to copy over to another server. I have done the full backup and restore, however that seems to overwrite the formatting on all of my existing tables. As there are some differences in the tables between servers, I would prefer to not have to do the full restore. I have been able to use the SQL export utility to copy a table from one database to another on the same server, but I was wondering how that can be done between different servers with different Windows logins.
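One common approach is a linked server on the destination box plus an INSERT ... SELECT; sp_addlinkedsrvlogin lets you map a local login to a SQL login on the remote server when the Windows accounts differ. A sketch with placeholder server, database, and table names:

Code:
EXEC sp_addlinkedserver  @server = N'REMOTESRV', @srvproduct = N'',
                         @provider = N'SQLNCLI', @datasrc = N'RemoteServerName'
EXEC sp_addlinkedsrvlogin @rmtsrvname = N'REMOTESRV', @useself = 'false',
                          @locallogin = NULL, @rmtuser = N'remote_login',
                          @rmtpassword = N'********'

INSERT INTO dbo.MyTable (Col1, Col2)
SELECT Col1, Col2
FROM REMOTESRV.RemoteDb.dbo.MyTable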
Hi, I'm new to SQL Server. What I'm trying to do is automate the import of data from an Excel spreadsheet into a table within SQL Server. The index of the table has a seed set to increment by 1, which I assume is the same as the AutoNumber data type in Access. When I import the data I leave out this field as it is auto-generated, but the DTS package fails because it is trying to put a NULL into this field. Where am I going wrong? Is there something I should change on the DTS import?
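For reference, an IDENTITY column only stays out of the way if the insert leaves it out of the column list entirely; the failure described usually means the identity column is still part of the column mapping, so the package tries to write NULL into it. A sketch with placeholder table and column names:

Code:
-- let SQL Server generate the identity: leave the column out of the list
INSERT INTO dbo.Target (Name, Amount)
SELECT Name, Amount
FROM dbo.ImportStaging

-- if the spreadsheet carries its own key values, IDENTITY_INSERT allows them
SET IDENTITY_INSERT dbo.Target ON
INSERT INTO dbo.Target (Id, Name, Amount)
SELECT Id, Name, Amount
FROM dbo.ImportStaging
SET IDENTITY_INSERT dbo.Target OFF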
I want to transfer an Access database (ABC.mdb) to a SQL table using C# (SSIS programming). I need to run a query on the Access database which, when run, will create 4 fields:

ABC.mdb:
Sorce_Db_Id_Col
Attribute_Type
Attribute_Value
Query_Flag

And the source database has:
Indentity_col
Att_typ
Att_value
Att_catg

I am very new to this stuff, so if anyone has any sample code for this, please send it to me; I really need it. My email id is ripal.parikh@softwaresolutionsindia.com. Thanks a lot!
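This is not the C#/SSIS route being asked for, but as a point of comparison, the Access data can also be pulled straight into SQL Server with OPENROWSET and the Jet provider (ad hoc distributed queries must be enabled). The file path, source table name, and the mapping of the source columns onto the 4 target fields below are all guesses:

Code:
SELECT Indentity_col AS Sorce_Db_Id_Col,
       Att_typ       AS Attribute_Type,
       Att_value     AS Attribute_Value,
       Att_catg      AS Query_Flag
INTO dbo.AccessImport
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'C:\Data\ABC.mdb'; 'Admin'; '',
                'SELECT * FROM SourceTable')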
I am currently working on transferring table contents from SQL Server into a CSV file. I have created a stored procedure which does this, but the problem I am facing is that the data in the table is not clean; it contains tabs, newlines etc., so I had to clean the data. I applied the following:

set @columns = ''
--'@columns+''replace(replace(replace(''+column_name+'',Char(10),''''''''),Char(13),''''''''),Char(19),'''''''')'''
--print @sql
select @columns = @columns + 'replace(replace(replace(' + column_name + ',Char(10),''''),Char(13),''''),Char(19),'''') as ' + column_name + ', '
from
[code]...

The table contains around 300 columns with column names of at least 20 characters. When I run the above query I get only a few columns instead of all of them, so I checked the length of @columns and it comes out as 4000. I had declared @columns as varchar(8000), so I don't know why this issue is coming up. Is there any other way I can clean the data and then transfer it into the file?
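With ~300 columns each wrapped in three REPLACE calls, the concatenated list easily blows past 8000 characters, so on SQL Server 2005 VARCHAR(MAX) is the usual escape hatch (the 4000 seen here is consistent with part of the expression being implicitly converted to nvarchar, which caps non-MAX strings at 4000). A sketch along the same lines, with the source table name as a placeholder; note it strips CHAR(9) for tab, where the original used CHAR(19):

Code:
DECLARE @columns VARCHAR(MAX), @sql VARCHAR(MAX)
SET @columns = ''

SELECT @columns = @columns
    + 'replace(replace(replace(' + COLUMN_NAME
    + ',char(10),''''),char(13),''''),char(9),'''') as ' + COLUMN_NAME + ', '
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'table_name'

-- drop the trailing comma and build the final statement
SET @columns = LEFT(@columns, LEN(@columns) - 1)
SET @sql = 'SELECT ' + @columns + ' FROM dbo.table_name'
EXEC (@sql)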
If TABLE2 has data in resource01-05 that isn't in resource01-05 of TABLE1, then I want to add it to the next free slot, where ref is the unique key.
Hi all, I've got a problem transferring objects between SQL databases. The source database is a SQL 2000 database, and the destination is a local SQL 2005 instance. When trying to transfer I get the error 0x80040E37, (table) does not exist at the source, but it does exist.
I am guessing it's a schema-related issue. I've tried setting the CopySchema setting to true, but no joy. The CopyAllSchemas option doesn't work against a SQL 2000 box, so it is set to false.
I just used the SSIS Import and Export Wizard to copy 50+ tables from SS05 to SS2K.
I found that the wizard created a package that I could not figure out how to edit, e.g., to change whether or not it had to CREATE a table or just use an existing one. (I created some problems for myself by manually editing the receiving table names to be ones that already existed; the original names it had did not exist, so it knew it had to create them. What I should have done, and eventually ended up doing, was scroll through my list of tables in the "receiving" box; I just figured editing the name would be faster, not realizing what problems I would create for myself.)
Anyhow, now that I see the complex package that the wizard creates, with a LOOP over the 50+ tables, I would like to know how/where in the package it is storing the information about the tables to copy.
Basically the wizard creates the following Control Flow tab entries (in processing sequence order):
an Execute SQL Task: NonTransactableSql
an Execute SQL Task: START TRANSACTION
a Sequence Container: Transaction Scoping Sequence, which contains
    an Execute SQL Task: AllowedToFailPrologueSql
    an Execute SQL Task: PrologueSql
    a Foreach Loop Container, which contains
        a Transfer Task with an icon I did not notice in the Toolbox
        an Execute Package Task: Execute Inner Package
    an Execute SQL Task: EpilogueSql
    an "on success" arrow to
        an Execute SQL Task: COMMIT TRANSACTION
        an Execute SQL Task: PostTransaction Sql
    an "on failure" arrow to
        an Execute SQL Task: ROLLBACK TRANSACTION
        an Execute SQL Task: CompensatingSql
Where, and how, can I look within this package to see the details about the tables I am transferring? I see that one of the Connection Managers is "TableSchema.XML" -- but it points to a temporary file on my hard drive, that I presume is populated by the package. Where does it get its information?
This is certainly much more complex than the package I would have written, based on my limited knowledge of SSIS. I would have been inclined to create 50+ Data Flow tasks, one for each table.
So now I'm trying to understand why the Wizard created this more-complex package.
Any help will be appreciated, including references to non-Microsoft books/websites/etc.
When exporting data from Excel to a SQL Server table using an SSIS package, after the export is done, how would I check that the source rows are equal to the destination rows, and if not, throw an error message?
How can we handle transactions in SSIS? 1. When some error happens during the export and the rows are not fully exported to the destination, how do we roll back the transaction in SSIS?
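Within SSIS itself, a Row Count transformation can capture the number of rows that reached the destination into a variable, and putting the Data Flow in a container with TransactionOption set to Required gives rollback on failure. If the counts are instead checked in T-SQL afterwards, a sketch (staging and target table names are placeholders) might be:

Code:
DECLARE @src INT, @dst INT
SELECT @src = COUNT(*) FROM dbo.StagingFromExcel
SELECT @dst = COUNT(*) FROM dbo.TargetTable

IF @src <> @dst
    RAISERROR('Row count mismatch: source %d, destination %d.', 16, 1, @src, @dst)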
I have a SQL script to insert data into a table as below:
INSERT into [SRV1INS2].BB.dbo.Agents2 select * from [SRV2INS14].DD.dbo.Agents
I just want to set a trigger on the Agents2 table which deletes all rows in the table before any insert via the above statement is carried out. I had the table trigger below on the [SRV1INS2].BB.dbo.Agents2 table, but it did not do what I intended:
USE [BB]
GO
/****** Object: Trigger Script Date: 24/07/2015 3:41:38 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
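The posted script is cut off before the trigger body, but note that an AFTER INSERT trigger which deletes all rows would also delete the rows it was just given. One way to get "replace everything on insert" semantics is an INSTEAD OF INSERT trigger, sketched below with a placeholder column list:

Code:
CREATE TRIGGER trg_Agents2_ReplaceAll
ON dbo.Agents2
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON
    DELETE FROM dbo.Agents2            -- wipe the existing rows
    INSERT INTO dbo.Agents2 (Col1, Col2)
    SELECT Col1, Col2 FROM inserted    -- then load the incoming batch
END
GO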
I have created a table named Table with name as varchar and id as int. I started inserting rows like: insert into Table values ('arun', 20). Yes, that row went in fine. Now I have the values "arun's", 50: insert into Table values ('arun's', 20). SQL Server gives me an error instead of inserting the row. How would you solve this problem?
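The embedded apostrophe ends the string literal early. Doubling it up escapes it, and a parameterized insert avoids the problem entirely; a sketch (brackets around the table name because TABLE is a reserved word):

Code:
INSERT INTO [Table] (name, id) VALUES ('arun''s', 50)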
Using the NumId from TitleData, I would like to delete the corresponding row in BookData using pure SQL. I want it to delete all rows in BookData where Titledata.NumID matches BookData.Id. The two tables are linked in that the NumId of table TitleData is identical to the Id of table BookData. I can, using ADO, loop through deleting one by one, but I would like to do this in a pure SQL statement. Is this possible? Any help is appreciated.

I was thinking something like this:

"Delete from Bookdata where Titledata.NumID = Bookdata.id"

But of course it will error.

My current code is (frmLogon.Tablename is really TitleData):

Dim rstry As New ADODB.Recordset
Dim values As Variant
SQLQuery = "Select Numid from " & frmLogon.Tablename
Set rstry = frmLogon.cnConnection.Execute(SQLQuery)
values = rstry.GetRows
Set rstry = Nothing
'now loop thru
Dim xx As Integer
xx = 0
Do Until xx > UBound(values, 2)
    SQLQuery = "Delete from Bookdata where bookdata.Id = '" & values(0, xx) & "'"
    frmLogon.cnConnection.Execute (SQLQuery)
    xx = xx + 1
Loop

The create statements for the 2 tables involved are:

conn.Execute "CREATE TABLE TitleData" & _
    "(Id INT IDENTITY (1, 1) NOT NULL PRIMARY KEY," & _
    "NumId INT DEFAULT 0 )"
conn.Execute "CREATE TABLE BookData" & _
    "(Id INT IDENTITY (1, 1) NOT NULL," & _
    "Titles TEXT DEFAULT ''," & _
    "GeneralNote TEXT DEFAULT ''," & _
    "Author VARCHAR(100) DEFAULT ''," & _
    "Imprint VARCHAR(100) DEFAULT ''," & _
    "ISBN VARCHAR(100) DEFAULT ''," & _
    "Description VARCHAR(100) DEFAULT ''," & _
    "CallNumberPre VARCHAR(5) DEFAULT ''," & _
    "CallNumber VARCHAR(25) DEFAULT '',LOCNumber VARCHAR(30) DEFAULT ''," & _
    "Accession VARCHAR(25) DEFAULT ''," & _
    "Bibliography VARCHAR(100) DEFAULT ''," & _
    "Series VARCHAR(100) DEFAULT ''," & _
    "MyStatus VARCHAR(70) DEFAULT ''," & _
    "Barcode VARCHAR(50) DEFAULT ''," & _
    "LocalData VARCHAR(100) DEFAULT ''," & _
    "CheckoutPeriod VARCHAR(10) DEFAULT ''," & _
    "CatalogCard TEXT DEFAULT ''," & _
    "Summary TEXT DEFAULT ''," & _
    "MyCount VARCHAR(10) DEFAULT ''," & _
    "ItemDate DATETIME DEFAULT ''," & _
    "MyUser VARCHAR(50) DEFAULT ''," & _
    "MarcData TEXT DEFAULT ''," & _
    "SdlsRecord TEXT DEFAULT '', LOSC VARCHAR(5) DEFAULT '', LOSN Decimal(14,6) DEFAULT 0," & _
    "Edits Char(1) DEFAULT '', TitleDuplicate VARCHAR(50) DEFAULT '')"
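A minimal sketch of the set-based delete being asked for, assuming the backend is SQL Server and the link is TitleData.NumId = BookData.Id; the IN form is the most portable:

Code:
DELETE FROM BookData
WHERE Id IN (SELECT NumId FROM TitleData)

-- equivalent T-SQL join form
DELETE b
FROM BookData AS b
INNER JOIN TitleData AS t ON t.NumId = b.Id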
Dear Friends, is there any way to display a table's data separately as odd rows and even rows? I don't know whether this is possible or not; if it is possible, how can I achieve it? Please guide me in the proper way. Thanks all!
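Tables have no inherent row order, so "odd" and "even" only make sense relative to some ordering column; a sketch using ROW_NUMBER on SQL Server 2005 or later, with placeholder table and ordering column names:

Code:
SELECT *
FROM (SELECT *, ROW_NUMBER() OVER (ORDER BY SomeKeyColumn) AS rn
      FROM dbo.SomeTable) AS t
WHERE rn % 2 = 1    -- odd rows; use rn % 2 = 0 for the even rows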
What I'm trying to do is delete a user and all their related information within the other tables. I'm not wanting to delete the table, just that one user and their related information. My primary key is UserID within the table [alumni] and my three foreign keys are CommentID, PhotoID, and AlbumID within the tables [comments], [photos], and [albums]. Here is some of the code that I have:

<asp:SqlDataSource ID="SqlDataSource2" runat="server"
    ConnectionString="<%$ ConnectionStrings:SoderquistString %>"
    DeleteCommand="DELETE FROM [alumni] WHERE [UserID] = @UserID"
    SelectCommand="SELECT [UserID], [UserName], [FirstName], [LastName], [State] FROM [alumni] WHERE ([State] = @State)">
    <DeleteParameters>
        <asp:Parameter Name="UserID" Type="Int32" />
    </DeleteParameters>
    <SelectParameters>
        <asp:ControlParameter ControlID="DropDownList1" Name="state" PropertyName="SelectedValue" Type="String" />
    </SelectParameters>
</asp:SqlDataSource>

The users are set up in a GridView. Is there some type of DELETE command that I need to write that is different from the one above? I have tried extending the DELETE statement as follows:

DeleteCommand="DELETE FROM [alumni] WHERE [UserID] = @UserID DELETE FROM [photo] WHERE [UserID] = @UserID; DELETE FROM [album] WHERE [UserID] = @UserID; DELETE FROM [comment] WHERE [UserID] = @UserID;

...but that doesn't work and doesn't look right. I would really appreciate any suggestions or help you may be able to provide. Thank you!
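Assuming the child tables also carry a UserID column (the post refers to them as both [photos]/[photo] etc., so the exact names may differ), the multi-table delete works if the child rows go first and each statement ends with a semicolon; the whole batch can go in the DeleteCommand, with @UserID supplied by the existing DeleteParameters. A sketch:

Code:
DELETE FROM [comments] WHERE [UserID] = @UserID;
DELETE FROM [photos]   WHERE [UserID] = @UserID;
DELETE FROM [albums]   WHERE [UserID] = @UserID;
DELETE FROM [alumni]   WHERE [UserID] = @UserID;

Alternatively, defining the foreign keys with ON DELETE CASCADE would make the single DELETE FROM [alumni] sufficient.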
I want to compare the tables using first name, and I have a log variable whose value should reflect the differences in the tables: if the first name matches but the last name and DOB don't match, the log value for that FN is 'LN and DOB dont match';
similarly, if the first name matches and the DOB matches, @log is 'LN not match';
and in case all three match, it should show 'match' as the log value. The query I use is as follows:
Code:
USE testing
GO
DECLARE @NR int
DECLARE @log varchar(200)
SELECT @NR = COUNT(*) FROM a
WHILE @NR > 0
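The posted loop is cut off, but the same result can usually be produced set-based with a CASE expression; a sketch, assuming the two tables are a and b (the second table name is a guess) and both have FirstName, LastName, and DOB columns:

Code:
SELECT a.FirstName,
       CASE
           WHEN a.LastName = b.LastName AND a.DOB = b.DOB THEN 'match'
           WHEN a.DOB = b.DOB                             THEN 'LN not match'
           ELSE 'LN and DOB dont match'
       END AS [log]
FROM a
JOIN b ON b.FirstName = a.FirstName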
This is a question that I have not had an opportunity to test, and I was wondering whether anyone in the SQL world knows the answer. In SQL 2K and 2005 your rows are limited to 8060 bytes without using varchar(MAX). My question is: do you have to specify varchar(MAX) before your row can exceed 8060 bytes, or does SQL 2005 exceed it without specifying varchar(MAX)? Also, does it expand the row across multiple pages automatically? Please assist if you can.
Dear All, I have a table with 10 billion records but there is no key on it. I cannot build a key on it as it is the data source. However, the data source contains duplicated rows. I have used DTS to transform the data into a new table and delete the duplicated rows. As there are 10 billion records, I need to divide the work into 3 parts, and the process lasts 6 hours for each part. I want to ask: is there any other good method to solve my problem? Thx, Esther
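One set-based alternative, sketched here with placeholder table and column names, is to copy only the distinct rows into the new table, optionally restricting each run to a key range so the batches stay manageable:

Code:
INSERT INTO dbo.CleanTable (Col1, Col2, Col3)
SELECT DISTINCT Col1, Col2, Col3
FROM dbo.SourceTable
-- WHERE Col1 BETWEEN @rangeStart AND @rangeEnd   -- optional batching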