Hi,
I have two tables named Tab1 and Tab2. Both are identical in structure; the only difference is that Tab2 has two additional fields (FromDate and ToDate).
The structure is like below :
Col1
Col2 (Date field)
Col3
Col4
Tab2 also has
Col5 (From Date)
Col6 (To Date)
Now I want to transfer a set of records from Tab1 to Tab2. The additional Tab2 fields (Col5 and Col6) should hold the minimum and maximum values of the Tab1 date field (Col2) for that set of records.
How to accomplish this? Kindly help me in this regard.
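One way to sketch this in T-SQL (assuming SQL Server 2005 or later for the window aggregates; the WHERE clause below is a placeholder standing in for whatever defines "the current set"):

    INSERT INTO Tab2 (Col1, Col2, Col3, Col4, Col5, Col6)
    SELECT Col1, Col2, Col3, Col4,
           MIN(Col2) OVER (),   -- smallest Col2 date in the selected set
           MAX(Col2) OVER ()    -- largest Col2 date in the selected set
    FROM Tab1
    WHERE Col1 IN (1, 2, 3);    -- placeholder for whatever defines the set being moved

On older versions, a CROSS JOIN against a one-row derived table of SELECT MIN(Col2), MAX(Col2) with the same filter gives the same result.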
I have two tables. I want to transfer data from Table1 to Table2, but the target column depends on BRK_TITLE. For example, if BRK_TITLE is "Testing" then Table1.BRK_QUANT should go to Table2.Qty1; if BRK_TITLE = 'My Testing' then it should go to Table2.Qty2, and so on. The problem is that BRK_TITLE changes every time; the titles are not constant (e.g. a title may be 'abc', '134', etc.).
If data belongs to title3 then it should go to the corresponding quantity column, i.e. Table2.Qty3.
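A hard-coded sketch of the idea with CASE expressions (the table and column names come from the post, everything else is assumed); since the titles are not constant, the title-to-column mapping would realistically live in its own lookup table and the INSERT would be built with dynamic SQL:

    INSERT INTO Table2 (Qty1, Qty2, Qty3)
    SELECT
        CASE WHEN BRK_TITLE = 'Testing'    THEN BRK_QUANT END,  -- title mapped to Qty1
        CASE WHEN BRK_TITLE = 'My Testing' THEN BRK_QUANT END,  -- title mapped to Qty2
        CASE WHEN BRK_TITLE = 'title3'     THEN BRK_QUANT END   -- title mapped to Qty3
    FROM Table1;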
I have to transfer data from one table to three other new tables, and if there are any duplicates in the original table I have to send them to a duplicates table. The remaining data should be sent to the three other tables. Can anyone help with writing a stored procedure for this?
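A rough sketch of the duplicate-splitting part, assuming SQL Server 2005+, that "duplicate" means the same key value appearing more than once, and placeholder table/column names throughout:

    ;WITH Numbered AS
    (
        SELECT KeyCol, Col1, Col2,
               ROW_NUMBER() OVER (PARTITION BY KeyCol ORDER BY KeyCol) AS rn
        FROM dbo.SourceTable
    )
    INSERT INTO dbo.DuplicatesTable (KeyCol, Col1, Col2)
    SELECT KeyCol, Col1, Col2
    FROM Numbered
    WHERE rn > 1;   -- every copy after the first is treated as a duplicate

The rn = 1 rows are the de-duplicated remainder; a similar INSERT with WHERE rn = 1 plus whatever rule routes a row to each target would fill the three new tables, and the whole thing can be wrapped in a stored procedure.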
I am transferring data from one SQL table to another. The first table has a PK on the unique id only; the second table has a PK on five fields (the idea being to reject duplicate records, etc.). I am using a DTS package to do this, but when run it fails as soon as it hits a PK violation. How do I get round this? What simple thing am I missing?
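One way around the PK violations (a sketch, not DTS-specific): land the incoming rows in a staging table first, then insert only the ones that have no match on the five key columns. K1..K5 and the table names are placeholders:

    INSERT INTO dbo.TargetTable (K1, K2, K3, K4, K5, OtherCol)
    SELECT s.K1, s.K2, s.K3, s.K4, s.K5, s.OtherCol
    FROM dbo.StagingTable AS s
    LEFT JOIN dbo.TargetTable AS t
           ON t.K1 = s.K1 AND t.K2 = s.K2 AND t.K3 = s.K3
          AND t.K4 = s.K4 AND t.K5 = s.K5
    WHERE t.K1 IS NULL;   -- keep only rows that do not already exist in the target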
When executing an inline INSERT INTO command from ISQL/w, the command executes perfectly. Yet when I try to create a stored procedure with exactly the same INSERT INTO statement, it will not create; it does not like the database.owner.table prefix on the selected rows. Any suggestions?
I have a database with three different tables having the exact same fields. New records are written to table1, before moving to table2 and ultimately table3. I was wondering if it's possible to run the same query on all three tables at the same time. I need to get all unique instances in the JC field from each table after a specified date. I get an "Ambiguous column name" error on the JC and TimeID fields.
SELECT distinct [JC] FROM [table1], [table2], [table3] where timeid > '20090900';
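Listing all three tables in one FROM clause cross-joins them, which is why JC and TimeID become ambiguous. Since the tables are identical, a UNION keeps each table separate and removes duplicates across them (a sketch based on the query above):

    SELECT [JC] FROM [table1] WHERE timeid > '20090900'
    UNION
    SELECT [JC] FROM [table2] WHERE timeid > '20090900'
    UNION
    SELECT [JC] FROM [table3] WHERE timeid > '20090900';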
Hi, I have a Users table in an Oracle database and the same table (Users) in a SQL Server 2000 database. I want to create a DTS package that copies the data from the Oracle database to the SQL Server 2000 database. The package should run automatically at midnight every day, so that any new entries made in the Oracle database get copied to SQL Server 2000. Also, is there any way to copy only those entries from the Oracle database which are not yet present in the SQL Server 2000 database? Please help me in this regard as I am new to DTS.
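The scheduling part can be handled by running the saved DTS package from a SQL Server Agent job. For copying only the missing rows, one sketch (assuming a linked server to Oracle named ORACLE_LINK and a key column UserID, both of which are assumptions):

    INSERT INTO dbo.Users (UserID, UserName)
    SELECT o.UserID, o.UserName
    FROM OPENQUERY(ORACLE_LINK, 'SELECT UserID, UserName FROM Users') AS o
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Users AS u WHERE u.UserID = o.UserID);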
Does anyone have a good query that would return records that are found in one table but not in the other? In my situation I have two tables that should be duplicates, and I need to find roughly 3,000 additional records that were added to one of them. I also have a composite key, so the query would use col1, col2 and col3 as the key. So far I have tried concatenating the three columns, giving the result an alias, and then trying to show the ones that are not in both tables, but I have been struggling. Thanks.
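Comparing on the real composite key avoids the false matches that concatenation can produce ('ab' + 'c' equals 'a' + 'bc'). A sketch with placeholder table names:

    SELECT a.col1, a.col2, a.col3
    FROM dbo.TableWithExtras AS a
    WHERE NOT EXISTS (SELECT 1
                      FROM dbo.OtherTable AS b
                      WHERE b.col1 = a.col1
                        AND b.col2 = a.col2
                        AND b.col3 = a.col3);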
I have two different tables... one for all Staff, and another for all Temp Staff. I need both to output to a datagrid, and so I need to grab both tables from a SQL query to output to my datagrid, but I can't seem to get the logic right for it to work. Can someone give me some suggestions on why my results are blank when I'm running this query? I thought a simple join would allow both sets of identical column names to coexist in peace...

    SELECT TOP 100 PERCENT dbo.StaffDirectory.UserName, dbo.StaffDirectory.LastName, dbo.StaffDirectory.FirstName,
           dbo.StaffDirectory.Dept, dbo.StaffDirectory.Title, dbo.StaffDirectory.EMail, dbo.StaffDirectory.Location
    FROM dbo.StaffDirectory
    INNER JOIN dbo.TempStaff
        ON  dbo.StaffDirectory.Location = dbo.TempStaff.Location
        AND dbo.StaffDirectory.EMail = dbo.TempStaff.Email
        AND dbo.StaffDirectory.Title = dbo.TempStaff.Title
        AND dbo.StaffDirectory.Dept = dbo.TempStaff.Dept
        AND dbo.StaffDirectory.FirstName = dbo.TempStaff.FName
        AND dbo.StaffDirectory.LastName = dbo.TempStaff.LName
        AND dbo.StaffDirectory.UserName = dbo.TempStaff.UName
        AND dbo.StaffDirectory.MDNo = dbo.TempStaff.MDNo

Is something wrong here? It just doesn't work. Any suggestions would be really appreciated. Thank you.
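The INNER JOIN only returns people who appear in both tables with every one of those columns equal, which is why the result is empty. Stacking the two tables on top of each other is a UNION, roughly (the TempStaff column names are taken from the join conditions above):

    SELECT UserName, LastName, FirstName, Dept, Title, EMail, Location
    FROM dbo.StaffDirectory
    UNION ALL
    SELECT UName, LName, FName, Dept, Title, Email, Location
    FROM dbo.TempStaff;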
I'd like to extend a package's functionality. I created it the drag-and-drop way, with hard-coded table names.
Now, for the same source and destination connections, I'd like to loop over 20 source tables of the same structure and transform them into 20 destination tables of the same structure, supplying the table names from the loop. The package also has preparation SQL tasks, such as dropping the destination table if it exists and then re-creating it, so those need to take a table name as a parameter from my loop as well.
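A sketch of what the parameterised "drop if exists" preparation step might look like, with the table name fed in from the loop (here just assigned a placeholder value); the re-create step can be built the same way with dynamic SQL:

    DECLARE @TableName sysname;
    DECLARE @sql nvarchar(max);
    SET @TableName = N'DestTable01';   -- in practice supplied by the loop / package variable

    IF OBJECT_ID(N'dbo.' + @TableName, N'U') IS NOT NULL
    BEGIN
        SET @sql = N'DROP TABLE dbo.' + QUOTENAME(@TableName);
        EXEC sp_executesql @sql;
    END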
I'm wondering if it is possible to write a procedure that checks two identical tables for any missing records. The table design is exactly the same, but some of the 40,000 records have not been copied over to the second table.
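On SQL Server 2005 and later, EXCEPT makes the comparison short (a sketch with placeholder names; the column lists of the two SELECTs must match):

    SELECT * FROM dbo.OriginalTable
    EXCEPT
    SELECT * FROM dbo.CopyTable;

The result is every row present in the original but missing from the copy; it can be wrapped in a stored procedure or fed straight into an INSERT to re-sync the second table.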
We have written an application which splits up our customers' data into their individual databases. The structure of the databases is the same. Is it better to create the same stored procedures in each database, or to keep them in one central location and use sp_executesql to execute the generated SQL statement? Thank you. Mayur Patel
I'm trying to import 7 tables from each of 30 SQL 2005 databases into a SQL 2005 consolidation database. I could simply create a data flow task for each one, but instead I would like to loop through a list.
I've created a table to hold the names of the databases from which I want to import the data. I've created a SQL task that returns the database names from that table as a "Full Result Set", assigned the result set to a user variable (type = Object), and named the result 0.
What I'd like to do is create a data flow task which connects to each of the databases and imports the 7 specified tables from each one, appending the database name from the result set to the destination table name.
I'm stuck on how I'd set the connection strings in my OLE DB Source in my Data Flow task. Any insight would be greatly appreciated.
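The usual pattern (described loosely, as an assumption about your setup): a Foreach Loop container with the Foreach ADO enumerator over the object variable, mapping column 0 of each row to a string variable such as User::DatabaseName, and then an expression on the OLE DB connection manager's ConnectionString property so it re-points itself on every iteration, along the lines of:

    "Data Source=MyServer;Initial Catalog=" + @[User::DatabaseName] + ";Provider=SQLNCLI.1;Integrated Security=SSPI;"

Setting DelayValidation to True on that connection manager (and on the data flow) avoids validation failures before the first iteration supplies a value.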
Sorry if this is a super-basic question... I'm used to join selects but not sure how to approach an append select (or something like it). I have two tables with identical field structures: a Master table with 10,000 rows and a Custom table with 1,000 rows. To keep it simple, let's say the two tables each have a FirstName field and a LastName field. Is it possible to use a view or a SELECT statement (or any other method) to 'append' the rows of both tables so that the result set still has only the two columns (FirstName and LastName) and has 11,000 rows? Thanks for your help! Randy
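Yes, that is exactly what UNION ALL does (plain UNION would also remove duplicate name pairs). A sketch as a view, with the table names assumed:

    CREATE VIEW dbo.AllNames
    AS
    SELECT FirstName, LastName FROM dbo.MasterTable
    UNION ALL
    SELECT FirstName, LastName FROM dbo.CustomTable;

Selecting from dbo.AllNames then returns just the two columns and all 11,000 rows.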
I am replicating an 80GB database between NY and CT and would like to know why table sizes are different between the two. Here is an example of sp_spaceused:
NY: IOI_2007_04_23 rows(279,664) reserved(464,832) data(439,960) index_size(24,624)
CT: IOI_2007_04_23 rows(279,666) reserved(542,232) data(493,232) index_size(48,784)
Thanks,
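One cheap first check (a guess, since replication can also legitimately leave different index fragmentation and free space on each side): the sp_spaceused figures can be stale, so refresh the usage metadata on both servers before comparing:

    DBCC UPDATEUSAGE ('YourDatabase', 'IOI_2007_04_23');
    EXEC sp_spaceused 'IOI_2007_04_23', @updateusage = 'TRUE';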
CREATE TABLE [RS_A] ([ColA] [varchar] (10), [ColB] [int] NULL)
CREATE TABLE [RS_B] ([ColA] [varchar] (10), [ColB] [int] NULL)

INSERT INTO RS_A VALUES ('hemingway', 1)
INSERT INTO RS_A VALUES ('vidal', 2)
INSERT INTO RS_A VALUES ('dickens', 3)
INSERT INTO RS_A VALUES ('rushdie', 4)
INSERT INTO RS_B VALUES ('hemingway', 1)
INSERT INTO RS_B VALUES ('vidal', 2)

I need to find all the rows in A which do not exist in B, matching on both ColA and ColB, so the output should be:
dickens 3
rushdie 4

If I write a query like this, I don't get the right result set:

    SELECT A.ColA, A.ColB
    FROM RS_A A
    INNER JOIN RS_B B
        ON A.ColA <> B.ColA
        OR A.ColB <> B.ColB

But if I do the following, I do get the right result, although it seems convoluted:

    SELECT A.ColA, A.ColB
    FROM RS_A A
    WHERE ColA + CAST(ColB AS VARCHAR)
        NOT IN (SELECT ColA + CAST(ColB AS VARCHAR) FROM RS_B B)
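Two less convoluted ways to get the same rows. NOT EXISTS works on any version, and EXCEPT (SQL Server 2005+) is shorter still and also treats NULLs as equal:

    SELECT a.ColA, a.ColB
    FROM RS_A AS a
    WHERE NOT EXISTS (SELECT 1 FROM RS_B AS b
                      WHERE b.ColA = a.ColA AND b.ColB = a.ColB);

    SELECT ColA, ColB FROM RS_A
    EXCEPT
    SELECT ColA, ColB FROM RS_B;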
Is it possible/advisable, when transferring very large amounts of data from server to server, to transfer the data to a new table first and then alter the new table, adding indexes, defaults, etc. based on the original table?
If so, what flow item would be used to transfer/alter the indexes and defaults?
I'm very new to ssis so the more detail you can give the better.
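Generally yes: bulk-loading into a bare table and adding the indexes and defaults afterwards is usually faster for large transfers. In SSIS the copy itself would be a Data Flow task, and the follow-up work an Execute SQL Task running statements along these lines (all names here are placeholders):

    ALTER TABLE dbo.NewTable
        ADD CONSTRAINT DF_NewTable_Status DEFAULT ('N') FOR Status;

    CREATE CLUSTERED INDEX IX_NewTable_Id ON dbo.NewTable (Id);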
Hello, I have a peculiar problem concerning MS SQL Server. My company works with a mailing application (ASP) which uses SQL Server as its repository. What I want to do is send data directly from my own application to this SQL Server in order to feed the mailing application. To test if this was possible I linked the tables from SQL Server in MS Access and entered the data. This worked fine and the data was picked up correctly by the mailing application. The problem occurs when I send the data from my application (a Java application with a JDBC connection). The data is in this case no longer picked up by the mailing application. The strange thing is that the data entered through Access and the data from the application look identical in the database view. The problem also occurs when the data is sent with the tool WinSQL, and when I view the data there it still looks identical. Even stranger: when I select the record which is not working in Access and copy it into a new record (only changing the key), it suddenly works! Does anyone have an idea how this can be? Thanks in advance, Sander Janssen.
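A common culprit in cases like this is an invisible difference: trailing spaces, a different Unicode character, or a stray control character that renders identically. Comparing lengths and raw bytes between a working row and a broken row can expose it (a sketch; the table and column names are placeholders):

    SELECT KeyColumn,
           LEN(TextColumn)                     AS visible_length,   -- ignores trailing spaces
           DATALENGTH(TextColumn)              AS byte_length,
           CONVERT(varbinary(200), TextColumn) AS raw_bytes
    FROM dbo.TheTable
    WHERE KeyColumn IN ('working-key', 'broken-key');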
Please guide me urgently: how can I extract data in SSIS from 10 identical Oracle databases into one SQL Server database? There is a table which lists all 10 databases.
I transfer tables from one server to the other on a regular basis. I'm doing this using the DTS import wizard (saved it as a package and scheduled a job), but I noticed that if the table structure changes on the source tables, the DTS package fails.
I want to make a DTS package that imports 6 tables, and I need it to keep working even if the structure of the tables changes, i.e. every time the tables are transferred, the old tables on the target server must be dropped and re-created with the source server's table definition. I hope this is doable.
Hi! I have 2 tables (both have the same structure): ID -> bigint (identity, not for replication, primary key) Url -> nvarchar(1000) MainUrl -> nvarchar(1000)
Tbl1 contains about 0.5 million records, and tbl2 about 1 million. What I need is to copy records from tbl2 to tbl1, but the records in tbl1 are unique and that can't change (only "Url" must be unique; ID is as well, but that's automatic). How can I do this in a fast way? Right now I'm running a SELECT for each record in tbl2 to see if it exists in tbl1, but that's a bit slow... Is there any faster method? (One thing: I'm a beginner with databases, so I wrote a VB application to transfer the records. How can I do it using only Microsoft SQL Server?) I forgot to write: I'm using MS SQL 2005.
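A set-based alternative to the row-by-row checks, which SQL Server can do in one statement (a sketch; the column list assumes only Url and MainUrl need copying, since ID is an identity):

    INSERT INTO tbl1 (Url, MainUrl)
    SELECT t2.Url, t2.MainUrl
    FROM tbl2 AS t2
    WHERE NOT EXISTS (SELECT 1 FROM tbl1 AS t1 WHERE t1.Url = t2.Url);

If tbl2 itself can contain the same Url more than once, the source side would also need a DISTINCT or a ROW_NUMBER filter so the insert doesn't introduce duplicates of its own.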
In the full recovery model, if I run a transaction that inserts 10MB of data into a table, then 10MB of data is written to the data file. Does this mean that the log file will grow by exactly 10MB as well?
I understand that all transactions are logged to the log file to enable rollback and point-in-time recovery, but what is actually physically stored in the log file for this transaction's record? Is it the text of the command from the transaction, or the actual physical data from that transaction?
I ask because say if I have two drives, one with 5MB/s write speed for the log file and one with 10MB/s write speed for the data file, if I start trying to insert 10 MB of data per second into the table, am I going to be limited to 5MB/s by the log file drive, or is SQL server not going to try and log all 10 MB each second to the log file?
Hi, I'm working with SQL Server 2005 Express and I'm trying to transfer some of my tables, with their primary and foreign key relations as well as the data in them, from one ASP.NET 2.0 website to another. But I can't seem to find the Import/Export option in SQL Server 2005 Express Edition. Does anyone know how I can do this? Thanks in advance.
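The Import/Export wizard does not ship with the Express edition. One workaround is to script the tables (keys included) from Management Studio Express and run that script on the target, then move the data. If both databases are reachable from the same instance, the data part is just a cross-database insert (a sketch with placeholder names):

    INSERT INTO TargetDb.dbo.MyTable (Col1, Col2)
    SELECT Col1, Col2
    FROM SourceDb.dbo.MyTable;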
I need to develop an off-line database that replicates/copies some key tables for reporting. The info is fairly static (codes, etc.) and does not need to be real-time. Is it feasible to set up a scheduled task on SQL 6.5 using Transfer Manager to copy and append data to this off-line db on a schedule? I may not be able to set up replication on our production server, so I am considering alternatives. Appreciate any input. Thank you :)
I have a table in MSSQL 7 that has 7,000,000 records, and I want to take 100,000 records out of it and place them on a new machine running MSSQL 2000. The new machine will also have the same table name, so I want to append the 100,000 records to that table. Thanks, Royal344
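One sketch, run from the new SQL 2000 machine and assuming a linked server named OLDSERVER pointing at the MSSQL 7 box (both the linked server name and the ORDER BY that picks which 100,000 rows are assumptions):

    INSERT INTO dbo.BigTable (Col1, Col2)
    SELECT TOP 100000 Col1, Col2
    FROM OLDSERVER.OldDb.dbo.BigTable
    ORDER BY Col1;   -- placeholder for whatever rule selects the 100,000 rows

A DTS transformation using the same SELECT as its source query would work just as well.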
How is it possible to set data to flow through the development, test, and production environments?
I have created one package. Right now I am transferring data to tables in the development environment.
I want to use the same package for several different destinations, based on the environment.
Is there a way for the destination component to set itself automatically, e.g. by passing the destination tables through the registry, or do I need to change it manually every time to whichever destination I want?