I'm kind of new to SQL and would appreciate some help with this. I have a table named Company. In this table there are fields called CompanyID, CompanyName, Address, and Year. (There are other fields too, but they are not relevant for this post.) There are multiple companies in the table and each company has a unique ID number. However, each company can have up to three records in the table (one for the year 2005, one for 2006, and one for 2007). Currently, only a company's 2007 record has data for the address; its 2005 and 2006 records have blank addresses. I want to be able to update the 2005 and 2006 records with the address from the 2007 record. Obviously, a very simple update won't work because there are multiple companies and multiple records that need updating. Can anyone fill me in on the SQL statement(s) I would need to make this happen? I would be extremely appreciative.
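If it helps, here is a hedged sketch of an UPDATE with a self-join back to each company's 2007 row, using only the columns named above and assuming Year is numeric and a "blank" address is either NULL or an empty string:
[CODE]
-- Sketch only: verify the column types and what "blank" means in your data before running.
UPDATE c
SET    c.Address = c2007.Address
FROM   Company AS c
JOIN   Company AS c2007
       ON  c2007.CompanyID = c.CompanyID
       AND c2007.[Year] = 2007
WHERE  c.[Year] IN (2005, 2006)
  AND  (c.Address IS NULL OR c.Address = '');
[/CODE]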
How can I create a table whose first field will be 'tableid INT IDENTITY(1,1)' and whose other fields will be the fields from the table "ashu"? Can this be done in SQL Server without explicitly writing out the "ashu" table's field names?
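One possible answer, as a sketch: SELECT ... INTO can generate the identity column via the IDENTITY() function, so the column list from "ashu" never has to be typed out. The target name ashu_copy is just a placeholder, and this approach fails if "ashu" already contains an identity column:
[CODE]
-- Sketch: creates a new table with tableid as IDENTITY(1,1) plus every column from ashu.
SELECT IDENTITY(INT, 1, 1) AS tableid, a.*
INTO   ashu_copy
FROM   ashu AS a;
[/CODE]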
I have a SQL local database in the application. I want to copy a table from one local database to another. The destination table is already created, with one field which is incremental, another field which is an image, and some other fields that are text. Any solutions on how to do it?
I have a database called marketing, and in it a table called products. Right now there are five products in the table, with product_id values 8003, 8004, 8005, 8006, and 8007. I want to create the same table in the database, but my product_id should start from 1, and I only want three products from the old table to be copied into the new table. Any idea how to make this happen?
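A hedged sketch of one way, again leaning on IDENTITY() inside SELECT ... INTO so the new product_id numbering starts at 1. The table name new_products, the column names after product_id, and the TOP 3 / ORDER BY filter are all placeholders for whatever three rows and columns you actually want:
[CODE]
-- Sketch: product_name and price are assumed column names; replace with the real ones.
SELECT IDENTITY(INT, 1, 1) AS product_id,
       p.product_name,
       p.price
INTO   new_products
FROM   (SELECT TOP 3 * FROM products ORDER BY product_id) AS p;
[/CODE]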
I'd like a really simple way of making a replica of a table. The thing is, I'd like the table name to be a variable. The following code doesn't work, any ideas??
Thanks in advance,
Alph
CREATE Procedure Test
@vMonth as varchar(3)
As
SELECT tbl_Targets.* INTO @vMonth FROM tbl_Targets;
GO
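The reason it fails is that SELECT ... INTO can't take a variable as the table name, so one workaround (a sketch, keeping the Test procedure and @vMonth parameter from above) is to build the statement as a string and run it with dynamic SQL:
[CODE]
CREATE PROCEDURE Test
    @vMonth AS varchar(3)
AS
BEGIN
    -- Splice the table name in safely with QUOTENAME, then execute the built statement.
    DECLARE @sql nvarchar(max);
    SET @sql = N'SELECT tbl_Targets.* INTO ' + QUOTENAME(@vMonth) + N' FROM tbl_Targets;';
    EXEC sp_executesql @sql;
END
GO
[/CODE]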
Hi, I restored a backup from a database that has replication configured. When I restored it, the system tables that merge replication creates were restored too. I was investigating on the internet and I found that I can delete them using this query:
sp_configure 'allow updates', 1
GO
reconfigure with override
GO
DROP TABLE aonflict_SiacDataEEC_security_info...
sp_configure 'allow updates', 0
GO
reconfigure with override
GO
Does somebody know whether using these queries to delete these tables could damage the database, or is it correct to use them? Thanks a lot for your help.
Is there any simple way to copy tables from one database to another in SQL Management Studio or VS 2005? I sometimes split work between home and the office, and I often need to copy a table and its data (data, stored procedures, etc.) to a different database, but having to create a new database and then copy the data is a pain. Is there an easier way?
Hi! I've got a very simple problem I can't find an answer to. I've got an MSDE database and I want to copy a table. I've tried something like: create table2 as select * from table1, with and without the "as", but I can't get it to work and I can't find a good answer on the internet. Very thankful for an answer!
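For what it's worth, a hedged guess at the answer: SQL Server (including MSDE) doesn't support CREATE TABLE ... AS SELECT; the equivalent is SELECT ... INTO, which creates the new table from the query's result:
[CODE]
-- Creates table2 with table1's columns and copies all of its rows.
SELECT *
INTO   table2
FROM   table1;
[/CODE]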
We've had a new server set up with SQL 2012 and I'm in the process of moving data to it from a 2008 (SP2) server.
Details are as follows:- 2012 instance:- Microsoft SQL Server 2012 - 11.0.5058.0 (X64) May 14 2014 18:34:29 Copyright (c) Microsoft Corporation Standard Edition (64-bit) on Windows NT 6.3 <X64> (Build 9600: ) (Hypervisor)
2008 instance:- Microsoft SQL Server 2008 (SP2) - 10.0.4000.0 (X64) Sep 16 2010 19:43:16 Copyright (c) 1988-2008 Microsoft Corporation Standard Edition (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (VM)
I don't want to do a backup/restore routine, as there are collation conflicts on the 2008 server. I've created the database and tables on the 2012 instance and now I want to transfer the data from the 2008 instance to the 2012 one.
The 2012 instance has a linked server to the 2008 instance. I was trying to use sp_MSForEachTable (I know, it's old and will probably disappear shortly), but that doesn't seem to work properly because some of the tables have identity columns set up.
Some of the tables have upwards of 10 million records in them and are quite sizeable. How can I achieve the transfer without a backup/restore?
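A hedged sketch of a per-table pull over the linked server, assuming the linked server is named SQL2008, the source database is SourceDb, and the table has the same name and columns on both sides. The explicit column list plus IDENTITY_INSERT is what gets around the identity columns that tripped up sp_MSForEachTable:
[CODE]
-- Sketch: repeat (or generate) per table; all names below are placeholders.
SET IDENTITY_INSERT dbo.MyTable ON;

INSERT INTO dbo.MyTable (Id, Col1, Col2)   -- an explicit column list is required when IDENTITY_INSERT is ON
SELECT Id, Col1, Col2
FROM   SQL2008.SourceDb.dbo.MyTable;

SET IDENTITY_INSERT dbo.MyTable OFF;
[/CODE]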
Hi, I currently have two 2005 boxes running 9.0.3050 in different DMZs, with the source running a DTS package to drop and copy its tables to the other box every night. It was working up until last Thursday. Nothing has changed in the firewall rules, and I'm getting no errors. One database is 3.5GB, which copies fine; the other is 21GB and runs all night while only getting a fraction of the tables populated. I'm the hardware guy, but I have some understanding of SQL. Thanks in advance for any help.
I have a large table that I need to copy, but I need to generate a new value for my id field using a SPROC and replace my existing ID value. I also have a few mapping tables I need to copy, so I need to store this new ID for later use. I currently have a SPROC that performs all these actions, but it takes about 3 or 4 minutes to complete and completely hogs the CPU time. Thus, I can't perform any actions until it finishes.
I'm looking for a way to run this procedure in the background. Unfortunately, my ID field is neither a GUID nor an IDENTITY column. I've researched Integration Services, but I was unable to find any Data Flow transformations that call a SPROC to retrieve a new ID, nor could I find anything that would let me store my new ID to update my mapping tables. SQLBulkCopy wasn't a good solution either.
If anyone has any insight to this, it would be greatly appreciated. Thanks,
Hi. I need to move data from one database table to another across database instances. A simple example of the typical move would be:
[CODE]
INSERT into destination_db.dbo.table1
SELECT column1, column2, column3, column4 from source_db.dbo.table2
[/CODE]
My options are:
1. Create an SSIS package to perform the move.
2. Create sprocs and schedule the data move as jobs.
3. Write .NET code using sprocs to perform the move.
I'll have to move hundreds of thousands of records, so I want the option that provides the best performance. I'm guessing that option 3 will be the slowest.
I need to create a fairly simple package. And almost because of the simplicity, I'm stumped.
I need to copy all non-system tables from server1.database1 to server2.database2. Additionally, four of the 30+ tables need to be renamed on the fly -- i.e. their names will reflect the year and month in which the copy takes place.
I've tried using the Transfer SQL Server Object Task to simply copy the tables, but I get flaky results at best with it. Sometimes it tells me the source table doesn't exist, when I can clearly see it (and I've selected it from the list). And even though I have turned on the Include Indexes option, they don't always come through.
I'm wondering if I need to do a For Each loop looking at an ADO object?
Hi, I'm trying to create a package with SSIS to replace the DTS process that we have in place already. The DTS package copies the contents of four tables from one server to another. I have created a simple SSIS package to do the same process, but it is a lot slower than DTS!!
I ran the SSIS package using Ctrl+F5 and also from the command prompt, but it's still quite slow. SSIS uses SMO to access the server, and both are running on 2005.
I have found a bunch of duplicate records in our housing database that ideally I need to delete. There are two tables that I need to remove data from: ih_cml_log_entry and ih_cml_log_notes. There is no unique identifier between the tables for a log entry, so I have had to join on the person_ref, log_seq and the date/time of entry. How do I go about deleting the data? I've used the script below to identify what I need to delete:
SELECT * FROM (
    SELECT cml.person_ref,
           cml.open_date + open_time AS 'datetime',
           cml.open_user,
           cml.log_type,
           ROW_NUMBER() OVER (PARTITION BY cml.person_ref, cml.open_date + cml.open_time,
                                           cml.open_user, cml.log_type
                              ORDER BY (SELECT 0)) AS RowNo,
           n.note
    FROM ih_cml_log_entry cml
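A hedged sketch of how that could become a delete for ih_cml_log_entry: wrap the same ROW_NUMBER logic in a CTE over that single table and remove everything numbered 2 or higher. A separate pass, joining on person_ref, log_seq and the date/time as described above, would be needed for ih_cml_log_notes, and it's worth running the inner SELECT first to check what would go:
[CODE]
-- Sketch only: test as a SELECT before deleting.
;WITH dupes AS
(
    SELECT cml.*,
           ROW_NUMBER() OVER (PARTITION BY cml.person_ref, cml.open_date + cml.open_time,
                                           cml.open_user, cml.log_type
                              ORDER BY (SELECT 0)) AS RowNo
    FROM   ih_cml_log_entry AS cml
)
DELETE FROM dupes
WHERE  RowNo > 1;
[/CODE]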
1. Take a subset of data from about 100 tables that have multiple references to other tables in this group of 100 from a first DB.
2. Insert the above data into a second DB, a database that already has data in the 100 tables, while maintaining the correct references.
As a general approach, the best way I can think of doing this is as follows:
1. Create mapping tables for every ID that is referenced in a different table (OldID, NewID).
2. Insert the old data into the new table and output the OldID and NewID into the mapping table.
3. Use that mapping data to make sure all tables that use those IDs have the new IDs in DB2.
This approach is extremely labor intensive both on initial implementation and would require a fairly substantial amount of work to maintain going forward.
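For step 2, a hedged sketch of one way to capture the OldID-to-NewID pairs when the target key is an IDENTITY: MERGE with an always-false match lets OUTPUT see both the source ID and the newly generated one in a single pass (dbo.Customers here is a placeholder for any of the 100 tables, and @IdMap for the mapping table):
[CODE]
-- Sketch: assumes the DB2 target dbo.Customers has an IDENTITY CustomerID; all names are placeholders.
DECLARE @IdMap TABLE (OldID int, NewID int);

MERGE dbo.Customers AS tgt
USING (SELECT CustomerID, CustomerName FROM DB1.dbo.Customers) AS src
      ON 1 = 0                               -- never matches, so every source row is inserted
WHEN NOT MATCHED THEN
    INSERT (CustomerName) VALUES (src.CustomerName)
OUTPUT src.CustomerID, inserted.CustomerID   -- OUTPUT in MERGE can reference source columns
INTO @IdMap (OldID, NewID);
[/CODE]
The step-3 updates on the referencing tables can then join to this mapping (persisted as a real table rather than a table variable) to rewrite the foreign keys.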
I have been using the query below to copy a record of data that exists in several draft tables to the original tables. How do I change this query so that I can input multiple records at one time, so that multiple records get copied to the original tables from the draft tables?
At this time, when I feed multiple request IDs to my query, it simply errors out.
SET NOCOUNT ON
GO
DECLARE @oldrequestid varchar(50), @newrequestid varchar(50)
DECLARE db_cursor CURSOR FOR
    SELECT RequestId FROM report_request_draft WHERE requestid IN (320762)
I have a table, dbo.Table1(Id,Col1,Col2,Col3,Col4,Col5,Col6,Col7,Col8), that I need to split between two tables, dbo.Table2(Id, Col1, Col3, Col4) and dbo.Table3(Id,dbo.Table2_Id,Col5,Col6,Col7,Col8). But in dbo.Table3 I need to have the Id column from dbo.Table2 populated, since it's a foreign key constraint in dbo.Table3. How do I go about doing this?
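If dbo.Table2.Id can simply keep the value of dbo.Table1.Id, a hedged sketch is just two inserts, with Table2_Id assumed as the foreign-key column name in dbo.Table3; if dbo.Table2.Id is instead an IDENTITY that generates new values, the MERGE ... OUTPUT mapping pattern sketched in an earlier post above is the usual workaround:
[CODE]
-- Sketch: assumes dbo.Table2.Id is not an identity and can reuse dbo.Table1.Id directly.
INSERT INTO dbo.Table2 (Id, Col1, Col3, Col4)
SELECT Id, Col1, Col3, Col4
FROM   dbo.Table1;

INSERT INTO dbo.Table3 (Table2_Id, Col5, Col6, Col7, Col8)
SELECT Id, Col5, Col6, Col7, Col8
FROM   dbo.Table1;
[/CODE]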
I have 3 tables with the following schema:
Table <Category> {
    UniqueID,
    LastDate DateTime
}
Assume the following tables with data following the above schema:
Table Cat1 {
    1, D1
    2, D2
    3, D3
}
Table Cat2 {
    2, D4
    3, D5
    4, D6
}
Table Cat3 {
    1, D7
    3, D8
    5, D9
}
I have a Master table and the schema is as follows:
Table master {
    UniqueId,
    Cat1 DateTime, -- This is the same as the table name
    Cat2 DateTime, -- This is the same as the table name
    Cat3 DateTime  -- This is the same as the table name
}
After inserting the data from all these 3 tables, I want my master table to look like this:
Table Master {
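The expected Master rows appear to have been cut off above, but from the description each UniqueId should end up on a single row, with each CatN column holding that table's LastDate (or NULL if the id isn't in that table). A hedged sketch of that using full outer joins, assuming UniqueID is the join key:
[CODE]
-- Sketch: one row per UniqueID found in any Cat table; missing entries stay NULL.
INSERT INTO master (UniqueId, Cat1, Cat2, Cat3)
SELECT COALESCE(c1.UniqueID, c2.UniqueID, c3.UniqueID) AS UniqueId,
       c1.LastDate AS Cat1,
       c2.LastDate AS Cat2,
       c3.LastDate AS Cat3
FROM       Cat1 AS c1
FULL JOIN  Cat2 AS c2 ON c2.UniqueID = c1.UniqueID
FULL JOIN  Cat3 AS c3 ON c3.UniqueID = COALESCE(c1.UniqueID, c2.UniqueID);
[/CODE]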
Hi there, my question is really simple. I want to setup an automatic task in SSIS that drops the tables in the target database and substitutes them with tables from the source database. We are talking about two or three dimension tables and one fact table. The dimension tables are pretty small. The fact table will contain, at maximum, 300,000 rows and 12 columns. I do not use delta or flag historisation btw. What tasks in SSIS would you suggest to use?
I have a rather sizeable SQL Server 2000 database. To work on an issue, I would like to copy just a couple of tables into SQL Server 2005 Express. How does one go about this efficiently?
I need to know what would be the best way to perform a task I have been assigned. I have read multiple posts online, and I came to the conclusion that the Import/Export wizard was my best choice. I'm trying to copy at least 80 tables from a SQL 2000 server to a SQL 2005 server. Currently I have these tables over on the destination server (SQL 2005), but this data is outdated and needs to be updated. The ultimate goal would be to set up an SSIS process so that I can schedule this copy to run once the data has passed QA. I followed through the Import/Export Wizard inside of BIDS, manually highlighted all of the tables, and edited them to select "delete rows in destination table". But to my alarm this did not occur, and now I have duplicate records in all of my 80 tables. I'm going to go through this process again, but I wanted to make sure this was going to be my best option.
I am trying to export a database from Access into SQL Server Express. The Access database is on a network and SQL Server Express is on my local machine.
Could someone give me step-by-step instructions, please, as to how to export the data from the tables into my SQL Server Express?
Hi, I've got a job which copies tables between different servers. I am feeding the tables one by one and the copying is done in a loop, so I have control over the process. It works fine, but sometimes, randomly, I get: Execution failed with the following error: "ERROR : errorCode=-1071636471 description=SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005.An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Invalid character value for cast specification".An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Invalid character value for cast specification". helpFile=dtsmsg.rll helpContext=0 idofInterfaceWithError={8BDFE893-E9D8-4D23-9739-DA807BCDC2AC}".
The process then fails, and this can happen at any point and on any table, and sometimes it runs all the way through successfully!! Any idea what the problem might be? Thanks
It seems that there should be a solution for my situation, but for the life of me I can't seem to figure it out.
I need to compare two "like" tables containing similar data. Tbl 1 is "BOOKED" (which is a snapshot of inventory) and tbl 2 is "CURRENT" (the live, working inventory table). If I write my query as follows, the subsequent result is "duplicate" data.
Code Block
SELECT booked.item, booked.bin, booked.quantity, current.bin, current.quantity
FROM BOOKED
LEFT JOIN CURRENT ON booked.item = current.item
No matter what type of join I use, there is duplicate data displayed for each table. For example, if there are more bins in the BOOKED table that contain a certain product, then the CURRENT table will repeat data, and vice versa.
As follows:
Item Bin Quantity Bin Quantity
12345 A01 500 A01 7680
12345 B01 6 A01 7680
12345 C01 20 A01 7680
54321 G10 1032 E15 1163
54321 G10 1032 F20 523
54321 G10 1032 H30 750
98765 Z20 7000 Z20 8500
98765 Y15 2500 Y15 3000
98765 X10 1200 Y15 3000
What I would like to do is display Bin and Quantity only once and the repeating values as NULL or [BLANK]. Or, to display all of the bins from both tables and only the quantities from each table in relation to the bin found in that table, returning a "0" if no quantity exists.
This is what I'm after:
Item Bin Quantity Bin Quantity
12345 A01 500 A01 7680
12345 B01 6 B01 0
12345 C01 20 C01 0
54321 G10 1032 E15 1163
54321 F20 0 F20 523
54321 H30 0 H30 750
98765 Z20 7000 Z20 8500
98765 Y15 2500 Y15 3000
98765 X10 1200 X10 0
Is this possible? If so, how?
I also might add that it is OK for each table to contain multiple entries for any given item. This is basically being requested as an inventory variance report - inventory before the physical count and immediately after the physical count - and will only be run once a year.
----------------------------------------------- Just thinking out loud here: What if I created three subqueries, the first containing only BOOKED information, the second containing only CURRENT information and the third being a UNION of both tables? Something like this:
Code Block SELECT q3.bin, q1.item, ISNULL(q1.quantity, 0) as QTY_BEFORE, ISNULL(q2.quantity, 0) as QTY_AFTER
FROM
(select item, bin, quantity from BOOKED)q1 Left Join
(select item, bin, quantity from CURRENT)q2 on q1.item = q2.item Left Join
(select bin, item from BOOKED UNION select bin, item from CURRENT)q3 on q1.item = q3.item
Order By q1.item
I don't know if I wrote the UNION statement correctly, but I will have to try this when I get back to work...
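For what it's worth, a hedged sketch of another angle on the second layout described above (each bin listed once, with a 0 where one side has no quantity): a FULL OUTER JOIN on item and bin keeps bins that exist in only one table, and ISNULL fills in the zeros. Table and column names are taken from the post; CURRENT is bracketed just in case the name clashes with a keyword:
[CODE]
-- Sketch: one row per item/bin seen in either table, with before/after quantities.
SELECT COALESCE(b.item, c.item) AS item,
       COALESCE(b.bin,  c.bin)  AS bin,
       ISNULL(b.quantity, 0)    AS qty_booked,
       ISNULL(c.quantity, 0)    AS qty_current
FROM   BOOKED AS b
FULL OUTER JOIN [CURRENT] AS c
       ON  c.item = b.item
       AND c.bin  = b.bin
ORDER BY COALESCE(b.item, c.item), COALESCE(b.bin, c.bin);
[/CODE]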
I have a real table with an identity column and a trigger to populate this column.
I need to import / massage data for data loads from a different format, so I have a temp table defined that contains only the columns that are represented in the data file so I can bulk insert.
I then alter this table to add all the other columns so that it reflects all the columns in the real table. I then populate all the values so that this contains the data I need.
I then want to insert into the real table pushing the data from the temp table, however this gives me errors stating that the query returned multiple rows.
I specified all the columns in the insert grouping as well as on the select from the temp table.
ANY thoughts / comments are appreciated. This is beginning to drive me nuts.