Only Copy Missing Rows From One Database To Another
May 24, 2007
Hello everyone,
I'm trying to create a performant script to copy records from a table in a source database, to an identical table in a destination database.
In SQL 2000, I used to create a little lookup which did a count using certain fields. If the record was missing, I executed an INSERT query, otherwise an UPDATE query. The result was that the table on the destination side was always up to date. Duplicate rows were out of the question.
This was, if I'm not mistaken, a Data Transformation, using a bit of custom VBA code to govern the transfer. For each source row, the custom code was executed. Depending on the result of the custom code, a different query was launched.
Now I'm trying to do the same using SSIS in SQL 2005. Is there a task which does this for me, or do I have to script again? In the latter case, which type of task would I use?
(I thought of the Script Task, but then I would need to set up quite a bit myself.)
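For reference, SSIS 2005 does not ship a single built-in upsert task; the usual pattern is a Lookup transformation that routes matched rows to an OLE DB Command (UPDATE) and unmatched rows to an OLE DB Destination (INSERT). If both databases are reachable from one server, the same result can also be had in plain T-SQL. A minimal sketch, in which the database, table, key, and column names (SourceDB, DestDB, MyTable, ID, col1, col2) are all hypothetical:

-- refresh rows that already exist in the destination
UPDATE d
SET    d.col1 = s.col1,
       d.col2 = s.col2
FROM   DestDB.dbo.MyTable AS d
JOIN   SourceDB.dbo.MyTable AS s ON s.ID = d.ID;

-- insert rows that are missing from the destination
INSERT INTO DestDB.dbo.MyTable (ID, col1, col2)
SELECT s.ID, s.col1, s.col2
FROM   SourceDB.dbo.MyTable AS s
WHERE  NOT EXISTS (SELECT 1 FROM DestDB.dbo.MyTable AS d WHERE d.ID = s.ID);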
Hi, I need to write a query I have never attempted before and could do with some help. I have a Groups table and a Users_Groups lookup table. In this model, users can only be assigned to one group. If a group is deleted, a trigger should fire and delete any rows in Users_Groups having a matching Groups.Ref. Unfortunately, the trigger hasn't been firing, and I now have a load of defunct rows in Users_Groups relating users to groups which do not exist.
I now need to find all of these defunct rows in Users_Groups so that I can delete them. How can I find rows in Users_Groups whose parent rows and refs in Groups are missing? I've tried searching the net for something similar but don't even know how to word the search properly to get any half-relevant results. Cheers.
PS, I do realise I need to tighten the constraints on my database.
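A hedged sketch of a query that would find (and then remove) those orphaned rows, assuming Users_Groups stores the group reference in a column called group_ref that should match Groups.Ref; the actual column names are assumptions:

-- list the defunct rows: group references that no longer exist in Groups
SELECT ug.*
FROM   Users_Groups AS ug
LEFT JOIN Groups AS g ON g.Ref = ug.group_ref
WHERE  g.Ref IS NULL;

-- once verified, delete them
DELETE ug
FROM   Users_Groups AS ug
WHERE  NOT EXISTS (SELECT 1 FROM Groups AS g WHERE g.Ref = ug.group_ref);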
On my workstation the option is there, but on a couple of other workstations that option is not available from the File menu. I tested doing the exact same thing and the option just isn't there. Anyone have any ideas?
Hi... Please help. I am having a problem with my hard drive, so I got a new one. I installed a new instance of SQL Server 2005 and copied all my projects from the old hard drive to the new one.
The problem is, when I open my packages, specifically the data flow tasks, they are empty. All my data flow items are gone.
I am not sure what I missed during the copy. Please help me figure out how I can recover my complete packages.
Just thought I would ask the question on here with regards to a problem we have. We have a PDA-based auditing tool which uses SQL Server Compact to store information via a Compact Framework application. Now we have a user who swears blind they have carried out an audit, but there is no trace of it anywhere on the database, even though there are audits on there from before and after the date in question, and there is no other facility to delete data from the app other than a complete wipe of the database, in which case everything would have gone!
Now, the data is stored over 4 tables, and there are no traces of any unrelated data between the tables etc., so it has simply disappeared. Anybody have any experience of data vanishing into thin air?
I have a SQL 2000 database in which reports are generated on a monthly basis from the data inside one of my tables. The reports have been working fine, until some of the rows seemed to have disappeared!
I know the data used to be in the table, since it is showing on the old reports; however, when I try to pull that same data now, it is not in the database at all.
Does anyone have any ideas on what could have caused this or how I can resolve it?
I set up DB mirroring between a primary (SQL1) and a mirror (SQL2); no witness. I have a problem when I issue this command:
alter database DBmirrorTest Set Partner = N'TCP://SQL2.mycom.com:5022'; go
The error message is:
The remote copy of database "DBmirrorTest" has not been rolled forward to a point in time that is encompassed in the local copy of the database log.
I have the steps below prior to the command. (Note that both servers' service accounts use the same domain account. The domain account I login to do db mirror setup is a member of the local admin group.)
1. backup database DBmirrorTest on SQL1
2. backup database log
3. copy db and log backup files to SQL2
4. restore db with norecovery
5. restore log with norecovery
6. create endpoints on both SQL1 and SQL2
CREATE ENDPOINT [Mirroring]
STATE=STARTED
AS TCP (LISTENER_PORT = 5022, LISTENER_IP = ALL)
FOR DATABASE_MIRRORING (ROLE = PARTNER)
7. enable mirror on mirror server SQL2
:connect SQL2
alter database DBmirrorTest
Set Partner = N'TCP://SQL1.mycom.com:5022';
go
8. Enable mirror on primary server SQL1
:connect SQL1
alter database DBmirrorTest
Set Partner = N'TCP://SQL2.mycom.com:5022';
go
This is where I got the error.
The remote copy of database "DBmirrorTest" has not been rolled forward to a point in time that is encompassed in the local copy
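That error usually means the mirror database is not restored far enough: the full backup and every log backup taken on the principal afterwards must be restored on the mirror WITH NORECOVERY before the SET PARTNER commands are run. A hedged sketch of the backup/restore sequence (the disk paths are hypothetical):

-- on SQL1 (principal)
BACKUP DATABASE DBmirrorTest TO DISK = N'C:\Backup\DBmirrorTest.bak';
BACKUP LOG DBmirrorTest TO DISK = N'C:\Backup\DBmirrorTest.trn';

-- on SQL2 (mirror), after copying both files across
RESTORE DATABASE DBmirrorTest FROM DISK = N'C:\Backup\DBmirrorTest.bak' WITH NORECOVERY;
RESTORE LOG DBmirrorTest FROM DISK = N'C:\Backup\DBmirrorTest.trn' WITH NORECOVERY;

If any further log backups run on SQL1 (for example from a maintenance plan) between these restores and the ALTER DATABASE ... SET PARTNER step, they also have to be restored on SQL2 WITH NORECOVERY.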
I have enabled drillthrough on a cube (AS 2005) and selected the columns required. This worked. However, when I use the drillthrough option in Excel, the drillthrough queries return fewer rows than expected. For example, for one cell combination in the pivot table I saw 21198 rows, but the drillthrough query returned just 21106 rows. Fewer rows than expected.
Microsoft SQL Server 2000 - 8.00.2039 (Intel X86) May 3 2005 23:18:38 Copyright (c) 1988-2003 Microsoft Corporation Desktop Engine on Windows NT 5.1 (Build 2600: Service Pack 2)
I am using BULK INSERT to import some pipe-delimited flat files into a database.
I am firstly converting the file using VB.NET, to ensure each line of the file has a carriage return (by using streamwriter.writeline), and I am also ensuring there is no blank line at the end of the file (by using streamwriter.write).
Once I have done this, my BULK INSERT command appears to work OK. This is how I am using the statement:
BULK INSERT tempHISTORY FROM 'C:TEMPHISTORY.TXT' WITH ( FIRSTROW = 2, FIELDTERMINATOR = '|', ROWTERMINATOR = ' ' )
NB: The first row in the file is a header row.
This appears to work OK; however, I have found that certain files seem to miss the final line of the file! I have analysed these files in case they have an inconsistent number of columns, but they don't.
I have also found that if I knock off the last column of the tempHISTORY table, the correct number of rows are imported. However, of course, I can't just discard one of the columns from the file, I need to import the entire file.
I cannot understand why BULK INSERT is choosing to miss the final line in the file, when the schema of the destination table matches the structure of the file.
Hi, I used the /e option in my bcp code, yet I did not get all the rows from the mainframe into the SQL tables. Here is the case: I have 11 million rows on an FTP server, and I use this code to bcp them into SQL Server. Can anyone check whether this code is good for the process? I am missing one million rows in the bcp process and do not know why. I put the /e in to see if there were any errors, but I could not find any error file on my hard drive. Please check it out and let me know.
SQL 2k, DDL below. I have a simple table with the following data:

fldYear fldCode1 fldCode2
2000    ABC1     ABC12
2000    ABC1     ABC13
2001    ABC1     ABC12
2002    ABC1     ABC12
2002    ABC1     ABC13

I need to know, for every distinct combination of fldCode1 and fldCode2, if there are any years missing.
For example,
SELECT DISTINCT fldCode1, fldCode2 FROM MyTable
returns
ABC1 ABC12
ABC1 ABC13
I need to know that in 2001 there was no entry for ABC1/ABC13.
Thanks!
Edward

if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[MyTable]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
drop table [dbo].[MyTable]
GO
CREATE TABLE [dbo].[MyTable] (
[fldYear] [char] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[fldCode1] [char] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[fldCode2] [char] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO
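A hedged sketch of one way to list the missing years on SQL 2000, using only the MyTable definition above: cross join the distinct years with the distinct code combinations and keep the pairs that have no matching row.

SELECT y.fldYear, c.fldCode1, c.fldCode2
FROM   (SELECT DISTINCT fldYear FROM MyTable) AS y
CROSS JOIN
       (SELECT DISTINCT fldCode1, fldCode2 FROM MyTable) AS c
LEFT JOIN MyTable AS t
       ON  t.fldYear  = y.fldYear
       AND t.fldCode1 = c.fldCode1
       AND t.fldCode2 = c.fldCode2
WHERE  t.fldYear IS NULL
ORDER BY c.fldCode1, c.fldCode2, y.fldYear;

Note that this only considers years that appear somewhere in the table; with the sample data it returns 2001 / ABC1 / ABC13.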
I have written a reporting application which has a SQL2005 backend. An import routine into SQL, written by a 3rd party, frequently fails. The main problems are missing rows in certain tables.
I am going to write an SP that accepts a from date and a to date. I then want to search for rows of type X between those dates that do not exist, so that for a given date range we know which days have no data.
I have this working by returning all rows between the dates into a dataset, sorted by date, and then running through the rows and testing whether the next row's date is the next expected date. This works, but I think it is a very poor solution. It is all done on the client in C#.
I want to learn and implement the most efficient way of doing this. My only solution in an SP was to make a temporary table of all dates between the date range for row type X and then do an outer join against the data table, returning all rows which are missing.
Something like this:
SELECT ttwad.date FROM #temp_table_with_all_dates ttwad LEFT OUTER JOIN table_with_missing_date twmd ON ttwad.date = twmd.date WHERE twmd.date IS NULL
Would this be a good, efficient solution, or should I just stick to my processing of a dataset in C#?
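The outer-join approach is generally far better than walking a dataset in C#. A hedged sketch that builds the date list with a recursive CTE (available on SQL 2005) instead of a temp table; MyData, RowDate, and RowType are hypothetical names for the data table and its columns:

DECLARE @FromDate datetime, @ToDate datetime;
SET @FromDate = '20070101';
SET @ToDate   = '20070131';

WITH AllDates AS
(
    SELECT @FromDate AS dt
    UNION ALL
    SELECT DATEADD(day, 1, dt) FROM AllDates WHERE dt < @ToDate
)
SELECT d.dt AS MissingDate
FROM   AllDates AS d
LEFT JOIN MyData AS m
       ON  m.RowDate = d.dt          -- assumes RowDate is stored with no time portion
       AND m.RowType = 'X'
WHERE  m.RowDate IS NULL
OPTION (MAXRECURSION 0);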
I also have a RESOURCES table of phrases (for translation purposes) similar to this:
res_id        res_lang  res_phrase
AccessDenied  en        Access Denied
For some rows in the RESOURCES table I do not have all language codes present, so I am missing some translations for a given res_id. My question is: what query can I use to determine the RESOURCES.res_id values for which I do not have a translation?
For example, I might have de, en, and cz translations for a phrase but not a pl one, and I need to identify those rows so that I can obtain translations for the missing RESOURCES rows.
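A hedged sketch, assuming the full language set is simply every res_lang value that already appears somewhere in RESOURCES: pair every res_id with every language and keep the combinations that have no row.

SELECT ids.res_id, langs.res_lang AS missing_lang
FROM   (SELECT DISTINCT res_id FROM RESOURCES) AS ids
CROSS JOIN
       (SELECT DISTINCT res_lang FROM RESOURCES) AS langs
LEFT JOIN RESOURCES AS r
       ON  r.res_id   = ids.res_id
       AND r.res_lang = langs.res_lang
WHERE  r.res_id IS NULL;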
We are using a mix of SQL 2005 and 2000 servers and our "main" database server is running SQL 2005 x64 (SP2 ver. 3042).
Our system has run perfectly for months, then subsequent to an SP2 update we are seeing several instances where the data record counts are different for several tables among all the servers.
We are using Merge Replication, with no filters and published every 2 minutes.
I was using the MDX Query Builder to create MDX queries for a SSRS report. I'm not sure what happened, but when I tried to create another dataset against the cube, the "Drop Column Fields Here" and "Drop Row Fields Here" areas were no longer available for me to drop attributes onto.
I have restarted VS, rebooted, you name it, I've tried it (short of re-installing). Has anyone encountered this, and how did you "fix" it?
BTW: In order to continue working, I decided to use ProClarity to build the MDX for me and when I tried to paste it into the MDX editor, I get the following error: "The query cannot be prepared: The query must have at least one axis. ..". So, as I've seen from other posts, you can't use "any" MDX in the MDX Query Builder.
If I have a given database (a model) and I want to copy this database within the same database instance, is it OK to copy the .mdf and .ldf files and attach the files under a new database name in the same instance?
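Copying the files works as long as the source database is detached (or the service is stopped) while you copy them, so the .mdf/.ldf pair is consistent. A hedged sketch of attaching the copies under a new name; the new database name and the file paths are hypothetical:

CREATE DATABASE ModelCopy
ON (FILENAME = N'C:\Data\ModelCopy.mdf'),
   (FILENAME = N'C:\Data\ModelCopy_log.ldf')
FOR ATTACH;

Alternatively, a BACKUP followed by RESTORE ... WITH MOVE achieves the same copy without ever taking the source database offline.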
I have access to a stored procedure that was written previously for a process that uses the output from the stored procedure to provide input to a BCP operation in a bat file that builds a flat text file for use in a different system.
To continue with the set up, here is the stored procedure in question:

CREATE PROCEDURE [dbo].[HE_GetStks] AS
select top 15 Rating, rank, coname, PriceClose, pricechg, DailyVol, symbol
from (
    select f.rating, f.rank, s.coname,
           cast(f.priceclose as decimal(10,2)) as PriceClose,
           cast(f.pricechg as decimal(10,2)) as pricechg,
           f.DailyVol, f.symbol
    from dailydata f, snames s
    where f.tendcash = 0 and f.status = 1 and f.typ = 1 and f.osid = s.osid
) tt
order by rating desc, rank desc
GO
The code in the calling bat file is:

REM *************************
REM BCP .WRK FILE
REM *************************
bcp "exec dailydb.[dbo].[HE_GetStks]" queryout "d:TABLESINPUTHE_GetStks.WRK" -S(local) -c -U<uname> -P<upass>
This works just peachy in the process for which it was designed, but I need to use the same stored procedure to grab the same data in order to store it in a historical table in the database. I know I could duplicate the code in a separate stored procedure that does the inserting into my database table, but I would like to avoid that and use this stored procedure in case the select statement is changed at some point in the future.
Am I missing something obvious in how to utilize this stored procedure from inside an insert statement in order to use the data it outputs? I know I cannot use an EXECUTE HE_GetStks as a subquery in my insert statement, but that is, in essence, what I am trying to accomplish.
I just wanted to bounce the issue off y'all before I go to The Boss and ask him to change the procedure to SET the data into a database table directly (change the SELECT in the proc to an INSERT into a local table), and then have the external bat file use a GET procedure that just does the SELECT from the local table. This is the method most of our similar jobs use when faced with this type of "intercept" task.
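For what it's worth, INSERT ... EXEC (which is not a subquery) can capture the procedure's result set directly, so the SELECT stays defined in one place. A hedged sketch; the history table name is hypothetical and its column list is assumed to match the procedure's output:

INSERT INTO dbo.HE_GetStksHistory
        (Rating, rank, coname, PriceClose, pricechg, DailyVol, symbol)
EXEC dbo.HE_GetStks;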
The T-SQL 2012 update script listed below only works for a few records, since TST.dbo.LockCombination.seq contains the value 1 in most cases. For every join listed below there should be 5 records, each with a distinct seq value of 1, 2, 3, 4, and 5. My goal is to work out how to add the missing rows to TST.dbo.LockCombination where there are no rows for seq values 2 to 5, and then run the update statement below. Can you show me the SQL to add the rows for at least one of the missing sequence numbers?
UPDATE LKC
SET    LKC.combo = lockCombo2
FROM   [LockerPopulation] A
JOIN   TST.dbo.School SCH ON A.schoolnumber = SCH.type
JOIN   TST.dbo.Locker LKR ON SCH.schoolID = LKR.schoolID AND A.lockerNumber = LKR.number
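A hedged sketch of one way to generate the missing seq rows 2 through 5: take each locker's existing seq = 1 row, cross join a list of the other sequence numbers, and insert the combinations that do not exist yet. The lockerID and combo column names are assumptions based on the description:

INSERT INTO TST.dbo.LockCombination (lockerID, seq, combo)
SELECT LKC1.lockerID, n.seq, ''                         -- placeholder combo, to be filled by the UPDATE
FROM   TST.dbo.LockCombination AS LKC1
CROSS JOIN (VALUES (2), (3), (4), (5)) AS n(seq)
WHERE  LKC1.seq = 1
  AND  NOT EXISTS (SELECT 1
                   FROM TST.dbo.LockCombination AS LKC2
                   WHERE LKC2.lockerID = LKC1.lockerID
                     AND LKC2.seq = n.seq);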
Stepping through the code with the debugger shows the dataset rows being deleted.
After executing the code and getting to the page presentation, I stop the debugger and start the page creation process again (Page_Load). The database still has the originally deleted dataset rows. Adding rows works, and updating works fine, but deleting rows does not seem to work.
The dataset is configured to send the DataSet updates to the database; I used the standard wizard to create the DataSet.
cDependChildTA.Fill(cDependChildDs._ClientDependentChild, UserId);
rowCountDb = cDependChildDs._ClientDependentChild.Count;
for (row = 0; row < rowCountDb; row++)
{
    dr_dependentChild = cDependChildDs._ClientDependentChild.Rows[0];
    dr_dependentChild.Delete();
    //cDependChildDs._ClientDependentChild.Rows.RemoveAt(0);
    //cDependChildDs._ClientDependentChild.Rows.Remove(0);
    /* update the Client Process Table Adapter */
    // cDependChildTA.Update(cDependChildDs._ClientDependentChild);
    // cDependChildTA.Update(cDependChildDs._ClientDependentChild);
}
/* zero rows in the DataSet at this point */
/* update the Child Table Adapter */
cDependChildTA.Update(cDependChildDs._ClientDependentChild);
Hi to all. I have a strange question, but I need to work out the situation that I'll describe here: I need to replicate one row in my table to the same table, but with small changes in some fields. I mean, copy the row and insert it again into the same table that I copied it from, with changes in field values. Do you know any way to do this (temporary table, stored procedure, and so on), but in one shot: one action for the select, insert, and update operations?
How would I copy entire rows within a table and change one value?
insert into user (user_id,account_id,user_type_cd,name,e_mail_addr,login_failure_cnt,admin_user,primary_user) select * from pnet_user where account_id='DDD111'
but now I want to change DDD111 to DDD222 on the inserted entries.
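A hedged sketch that lists the columns explicitly in the SELECT so the account_id can be swapped while everything else is copied as-is; the column list is taken from the INSERT above, and the target table is bracketed because USER is a reserved word:

insert into [user] (user_id, account_id, user_type_cd, name, e_mail_addr, login_failure_cnt, admin_user, primary_user)
select user_id, 'DDD222', user_type_cd, name, e_mail_addr, login_failure_cnt, admin_user, primary_user
from pnet_user
where account_id = 'DDD111';
-- note: if user_id must be unique in the target table, generate new ids here instead of copying them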
I have data in a table. I want the values in the rows to be placed in columns and the columns in rows. For example, there is one table:

name id dept
a    1  x
b    2  y
c    3  z

I want the resultant table to look like this:

a b c
1 2 3
x y z
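A hedged sketch of a full transpose using UNPIVOT followed by PIVOT (SQL 2005 and later); the table name MyTable is hypothetical, and both id and dept are cast to the same character type because UNPIVOT requires matching types:

SELECT attribute, [a], [b], [c]
FROM (
        SELECT name, attribute, value
        FROM  (SELECT name,
                      CAST(id   AS varchar(10)) AS id,
                      CAST(dept AS varchar(10)) AS dept
               FROM MyTable) AS src
        UNPIVOT (value FOR attribute IN (id, dept)) AS u
     ) AS up
PIVOT (MAX(value) FOR name IN ([a], [b], [c])) AS p;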
Hello, I have 2 tables, Table1 and Table2. I have copied all data from Table1 to Table2. However, Table1 is dynamic: it has new rows added and some old rows modified every day or every other day.
How can I continue to keep Table2 up to date without always having to copy everything from Table1? Basically, from now on I would only like to copy new or modified rows from Table1 to Table2 and skip rows that are already present and have not been modified in Table1. For any rows that were removed from Table1, I would like to do nothing and continue to keep a copy of them in Table2.
Is using a DTS package the best way to automate this update of Table2, to make sure Table2 is always up to date with Table1? Thanks for any help or advice :-) Yas
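A hedged T-SQL sketch of the two statements such a DTS package (or a scheduled job) could run, assuming the tables share a primary key ID and that Table1 carries a LastModified datetime column; without such a column you would have to compare every column to detect changes. ID, Col1, Col2, and LastModified are assumptions:

-- copy rows that are new in Table1
INSERT INTO Table2 (ID, Col1, Col2, LastModified)
SELECT s.ID, s.Col1, s.Col2, s.LastModified
FROM   Table1 AS s
WHERE  NOT EXISTS (SELECT 1 FROM Table2 AS d WHERE d.ID = s.ID);

-- refresh rows that were modified in Table1
UPDATE d
SET    d.Col1 = s.Col1,
       d.Col2 = s.Col2,
       d.LastModified = s.LastModified
FROM   Table2 AS d
JOIN   Table1 AS s ON s.ID = d.ID
WHERE  s.LastModified > d.LastModified;

-- rows deleted from Table1 are simply left alone in Table2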
Copy out all data from a DB table into/across delimited text file(s), ensuring that each text file's size is no more than 3MB. I have created an SSIS solution which achieves this requirement... well, sort of achieves it. Here is what the current solution (sparing the minute details) does in a nutshell, and the problems with it:
1) Created a function (Script below) which finds the maximum row size in bytes in a given DB table & uses it to calculate how many rows can be copied out into a text file without exceeding 3MB size limit.
For instance: a selected DB table had 788 rows in total, and for this particular table the function returned a value of 181 rows { select [dbo].[udf_GetRowPartitionNumber]('<TableName>') as #ofRowstoPartitionTableby --181 }, meaning that in order to not exceed the requirement of 3MB per text file, we had to copy all the data from the DB table across (create) 5 text files {Select CEILING(788
My Question: Does anyone know of a decent way (i.e. I do not want to loop to insert each row and check the SCOPE_IDENTITY() field or anything like that) to copy parent/child rows to their same respective table besides using the method I have listed below (my manager does not really like the idea of the "PreviousID" field)? More details are listed below.
My Table Situation: I have a parent table and a child table. Both tables have an identity column as the primary key. The relationship between the tables is established using the parent table's primary key to the column in the child table that stores the relationship. The identity column in both tables is the only column that is unique in the tables.
Sample Data (made up and simplified for visualization purposes):

ParentTable - "Items"
ID  ItemCategory  Price
1   T-Shirt       $20
2   Blue Jeans    $50
3   T-Shirt       $40
My Need: I need to make a copy of the records (keeping the parent/child relationship intact) to the same table as the source records. For example, in my data example above, I may need to make a copy of all the "T-Shirt" items and their child records. As the parent records are copied, they will be assigned new keys since the primary key is an identity. Obviously, this new key needs to be used when creating the child records, but I need to somehow associate this new key to the new child records.
Possible Solution: I know this can be achieved by adding another column to the parent table to store the "PreviousID" (INT NULL). Using this new field, when I want to copy the "T-Shirt" items, I would insert the new records and store the ID of the source records (i.e. the identity value of the row that was copied would be stored in this "PreviousID" field). Once the parent record has been copied, I could then insert the child records, joining on the "PreviousID" field to get the new ID to use for inserting the copies of the child records.
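If an upgrade is an option: on SQL Server 2008 and later, MERGE with an OUTPUT clause can capture the old-to-new identity mapping in one statement, without adding a PreviousID column. A hedged sketch; the child table (ItemDetails) and its columns are hypothetical since only the parent table was shown:

DECLARE @map TABLE (OldID int, NewID int);

-- copy the parent rows and record old ID -> new ID
MERGE INTO Items AS tgt
USING (SELECT ID, ItemCategory, Price FROM Items WHERE ItemCategory = 'T-Shirt') AS src
      ON 1 = 0                                   -- never matches, so every source row is inserted
WHEN NOT MATCHED THEN
     INSERT (ItemCategory, Price) VALUES (src.ItemCategory, src.Price)
OUTPUT src.ID, inserted.ID INTO @map (OldID, NewID);

-- copy the child rows, re-pointing them at the new parent keys
INSERT INTO ItemDetails (ItemID, Detail)
SELECT m.NewID, d.Detail
FROM   ItemDetails AS d
JOIN   @map AS m ON m.OldID = d.ItemID;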
Hello, I am a software developer with minimal SQL Server administration skills. Currently I am using SQL Server 2000. I need to know if there is a way to copy a particular table from one database into a different database. Basically, on a project I am working on we are using a table named "Customers" from a database named QTR. We need to copy this table into a different database named "Research". How can this be done? Is it very complicated?
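It is not complicated if both databases live on the same instance. A hedged sketch using SELECT ... INTO, which creates the table and copies the data (but not indexes, constraints, or triggers):

SELECT *
INTO   Research.dbo.Customers
FROM   QTR.dbo.Customers;

If Research.dbo.Customers already exists, use INSERT INTO Research.dbo.Customers SELECT * FROM QTR.dbo.Customers instead; the DTS Import/Export Wizard is the point-and-click equivalent in SQL Server 2000.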
This is the error I get with a simple Data Reader Source (SELECT * FROM A) to SQL Server Destination copy between identical tables, from a linked server to a local server:
I am copying data (~600000 rows) using SqlBulkCopy. The operation fails after copying 390000 rows with the following exception (this happens every time I run it, and after the same number of rows copied). Is there anything else I need to do differently? The server has 35GB of free space and 1GB of RAM.
System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
at System.Data.SqlClient.TdsParserStateObject.ReadSniError(TdsParserStateObject stateObj, UInt32 error)
at System.Data.SqlClient.TdsParserStateObject.ReadSni(DbAsyncResult asyncResult, TdsParserStateObject stateObj)
at System.Data.SqlClient.TdsParserStateObject.ReadPacket(Int32 bytesExpected)
at System.Data.SqlClient.TdsParserStateObject.ReadBuffer()
at System.Data.SqlClient.TdsParserStateObject.ReadByte()
at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
at System.Data.SqlClient.SqlBulkCopy.WriteToServerInternal()
at System.Data.SqlClient.SqlBulkCopy.WriteRowSourceToServer(Int32 columnCount)
at System.Data.SqlClient.SqlBulkCopy.WriteToServer(IDataReader reader)
at PSMigrate.Program.MigrateWorkItesLatestData() in E:dd_tfs_beta3vsetSCMworkitemtrackingToolsPSMigrateProgram.cs:line 522
at PSMigrate.Program.MigrateData() in E:dd_tfs_beta3vsetSCMworkitemtrackingToolsPSMigrateProgram.cs:line 106
at PSMigrate.Program.Main(String[] args) in E:dd_tfs_beta3vsetSCMworkitemtrackingToolsPSMigrateProgram.cs:line 86
Here is the part of the code
using (SqlCommand cmd = new SqlCommand())
{
SqlDataReader dataReader;
cmd.CommandText = sql;
cmd.CommandType = CommandType.Text;
cmd.Connection = conn;
conn.Open();
dataReader = cmd.ExecuteReader();
// write the data to the server
using (SqlBulkCopy sqlBulkCopy = new SqlBulkCopy(m_destConnString))
{
// column mappings
// event settings
sqlBulkCopy.NotifyAfter = 5000;
sqlBulkCopy.SqlRowsCopied += new SqlRowsCopiedEventHandler(RowsCopiedEventHandler);
Hello. We have a smaller system on one of our servers where a couple of users were beta-testing. This system used a SQL Server Express 2005 database (databaseName_data.mdf).
But yesterday we saw that we couldn't use the system anymore; we got errors about the connection to the database. We opened SQL Server Management Studio and connected to the SQL Server, and we saw the name of the database in the list, but it was completely empty. Nothing. Not even the "folders" for Tables, Programmability, Security... nothing.
We then browsed to the folder where the MDF file used to be, and there we only found the LDF file. The MDF file was gone.
We "know" that no one here have been shutting down the SQL Service and then deleted the DB, so we are trying to figure out what has happen.
It's not a major issue, because it's just a beta-test, but we don't want this to happen later on again...
Does anyone have a clue of what might be going on?
We are using three instances of SQL Express on this test machine, by the way... one for the public system (which used this DB), one for development, and one for some random tests...
The public server and develop server used databases with the same name, but of course, different files on the hdd (and different instances of SQL Express).
//J
Edit: I might add that we hadn't backed this db up yet... Is there some way to use the LDF-file to restore some of the data?
I've been assigned the task of setting up access to our SQL Server 2005 box. A consultant developing for us has access to 2 databases, and I've set this up fine. It appears, however, that one of these databases is re-copied over to the server every night to keep the data reasonably current.
I'm not interested in changing this method, as I'm not the maintainer (as yet).
Basically, I've set up access to this database (it works fine), but when the database is refreshed (with an SSIS package) the account seems to get deleted. Do the original permissions from the source database overwrite those of its destination?
I'm using SQL Server Management Studio Express and I'm trying to figure out how to copy a table (or tables) from my local database to my web hosting database. I know how to do it in 2000, but it's completely different now. Is this feature not available in SSMSE? If so, then how do I deploy database tables to a web host?
Also, how do you add local database(s) to SSMSE? I tried to use 'attach database' in SSMSE and it wouldn't allow me to navigate to the My Documents folder where the database resides. Thanks...