In a job of migrating from an old database to a new one (different structure, different server, different version), I'm copying from the old source tables and inserting into the new destination tables. The problem is that some records have inconsistencies (of any kind) and thus are not inserted due to foreign key, not null, etc. validation. When a problem occurs, no record is copied at all! And there is my question: how can I perform the copy so that it copies the good records (without inconsistencies) and leaves aside the bad records? I also want to know which records were not copied, and better still if, in the copy process, those were put in a temp table or exported to Excel for further analysis of their data.
I'm using this model of "migration":
BEGIN TRY
INSERT INTO DESTINATION_TABLE (
col_d1,
col_d2,
col_d3,
...)
SELECT
col_s1,
dbo.some_function(col_s2),
col_s3 * 100,
...
FROM SOURCE_TABLE join <other_table> ... where <some filters>
END TRY
BEGIN CATCH
print ERROR_MESSAGE()
END CATCH
(For now, with TRY/CATCH I've only managed to find out that an error occurred, if any.)
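One way to approach this (a sketch only, not tested against your schema; PARENT_TABLE, REJECTED_ROWS and the two sample validation rules are placeholders to be replaced with your real constraints, and the join/filters from your SELECT stay as they are): pre-filter the SELECT so only rows that satisfy the destination's constraints are inserted, and write the remaining rows into a holding table that can later be inspected or exported to Excel.
-- rows that pass validation go to the destination
INSERT INTO DESTINATION_TABLE (col_d1, col_d2, col_d3)
SELECT s.col_s1, dbo.some_function(s.col_s2), s.col_s3 * 100
FROM SOURCE_TABLE AS s
WHERE s.col_s1 IS NOT NULL                                   -- a NOT NULL rule of the destination
  AND EXISTS (SELECT 1 FROM PARENT_TABLE AS p
              WHERE p.parent_id = s.col_s1);                 -- a FK rule of the destination

-- everything else is kept aside for analysis
SELECT s.*
INTO REJECTED_ROWS                                           -- staging table; export to Excel from here
FROM SOURCE_TABLE AS s
WHERE s.col_s1 IS NULL
   OR NOT EXISTS (SELECT 1 FROM PARENT_TABLE AS p
                  WHERE p.parent_id = s.col_s1);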
Some records that should be dates are actually just strings, like 12345678900 in this example. I need a SQL script which can identify those records which are not of date type. Please help.
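ISDATE() can flag values that cannot be converted to a date. A minimal sketch, assuming a hypothetical table MyTable with a character column DateText:
SELECT *
FROM MyTable
WHERE ISDATE(DateText) = 0;   -- returns the rows whose text is not a valid date, e.g. '12345678900'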
I'm using MS SQL Server 2008 and I'm trying to figure out if it is possible to identify what tables / columns contain specific records.
In the example below the information is generated for the end user, so the column headers (Customer ID, Customer, Address, Phone, Email, Account Balance, Currency) are not necessarily the field names from the relevant tables; they are simply more identifiable headers for the user.
Customer ID | Customer   | Address            | Phone       | Email                | Account Balance | Currency
js0001      | John Smith | 123 Nowhere Street | 555-123-456 | jsmith@nowhere.com   | -100            | USD
jd2345      | Jane Doe   | 61a Down the road  | 087-963258  | jdoe@downtheroad.com | -2108           | GBP
mx9999      | Mr X       | Whoknowsville      | 147-852369  | mrx@whoknows.com     | 0               | EUR
In reality the column headers may be called eg (CustID, CustName, CustAdr, CustPh, CustMail, CustACBal, Currency).
As I am not the generator of this report, I would like to know whether or not it is possible to identify the field names and / or what tables they exist in, if I were to use the report info to search for it. For example, could I perhaps find out the field name and table for "jd2345" or for "mrx@whoknows.com", because the Customer ID or Email may not be what the actual fields are called.
I'm not a DB admin and I don't have rights to do a stored procedure on the server. I'm guessing what I want is not so simple to do, but is it possible to do via a query?
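It is possible with an ordinary ad-hoc query, no stored procedure needed. A sketch of one common trick (table and column names come from the system views; the literal 'jd2345' is just the sample value): generate one SELECT per character column from INFORMATION_SCHEMA.COLUMNS, then run the generated statements and see which of them return rows.
SELECT 'SELECT ''' + TABLE_SCHEMA + '.' + TABLE_NAME + '.' + COLUMN_NAME + ''' AS found_in, * '
     + 'FROM ' + QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME)
     + ' WHERE ' + QUOTENAME(COLUMN_NAME) + ' = ''jd2345'';'
FROM INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE IN ('char', 'varchar', 'nchar', 'nvarchar');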
How do I identify differing fields within a group of records?
Example:
create table #test (ID int, Text varchar(10))
insert into #test
select 1, 'ab' union all
select 1, 'ab'
[Code] ...
I want to show additional field as Matched as ID 1 has same Text field on both the records, and for the ID 2 I want to show Unmatched as the Text fields are different but with the same ID.
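One way to get that (a sketch, assuming SQL Server 2005 or later so the windowed MIN/MAX are available): compare the smallest and largest Text per ID; if they are equal, every row in the group has the same value.
SELECT ID, [Text],
       CASE WHEN MIN([Text]) OVER (PARTITION BY ID) = MAX([Text]) OVER (PARTITION BY ID)
            THEN 'Matched' ELSE 'Unmatched' END AS MatchStatus
FROM #test;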
I have a situation where I need a table of bad items to match against. For example, the main table may be:
Table Main:
fd_Id INT IDENTITY (1, 1)
fd_Type VARCHAR(100)
Table Matcher:
fd_SubType VARCHAR(20)
Table Main might have records like:
1 | "This is some full amount of text"
2 | "Here is half amount of text"
3 | "Some more with a catch word"
Table Matcher:
"full"
"catch"
I need to only get the records from the main table that do not match anything in the match table. This should return only record 2.
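A sketch of one way to write that, using NOT EXISTS plus a LIKE against the matcher values (table and column names taken from the post):
SELECT m.fd_Id, m.fd_Type
FROM Main AS m
WHERE NOT EXISTS (SELECT 1
                  FROM Matcher AS x
                  WHERE m.fd_Type LIKE '%' + x.fd_SubType + '%');
-- with the sample data this returns only record 2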
I have 1 table. In that table there are thousands of records. I have an ID that is autonumbered for a unique id.
What I need to do is, at any given time, select all the records from that table that have a certain foreign key in them, take all those records and copy them back into the same table, but at that time I need to change some of the values, including the foreign key and the name.
Is there a bulk way to do this without having to loop? If so can you show me.
If not, can someone show me the best way to get all the records and copy them back into the same table with a new foreign key and name.
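No loop should be needed; a single INSERT ... SELECT against the same table can do it. A sketch with hypothetical table/column names (the auto-numbered ID is the IDENTITY column, so it is left out of the insert list and generates itself):
DECLARE @OldForeignKeyId int, @NewForeignKeyId int, @NewName varchar(50)
SELECT @OldForeignKeyId = 1, @NewForeignKeyId = 2, @NewName = 'New name'

INSERT INTO dbo.MyTable (ForeignKeyId, Name, Col3, Col4)
SELECT @NewForeignKeyId,            -- the replacement foreign key
       @NewName,                    -- the replacement name
       Col3, Col4                   -- everything else copied as-is
FROM dbo.MyTable
WHERE ForeignKeyId = @OldForeignKeyId;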
Hi, I am looking for some advice about SQL databases and wondering if anybody can help me.
First the Background.
I am in the process of writing a payroll program that uses SQL as a data store. However, rather than creating individual databases for each payroll that can be run on the system, I have created a core payroll table that generates a unique id for each payroll and then reference that id throughout the system.
Now The Problem
The problem I now have is that when I work out the final pay for the employees, I want to make a quick copy of the various records just in case I need to undo the net pay calculations for adjustments to the data. It is effectively rolling back the transactions to the database, but I want to be able to keep them until I decide that everything is correct.
I can't take a snapshot of the entire database because the payrolls stored within may be in all sorts of states.
I need to keep the records for just the payroll I'm working on.
Any Ideas or Solutions !!!!!
Thanks
EJ Gibbons "Where poets speak there hearts, then bleed for it" - U2
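For the payroll copy question above, one option (a sketch, assuming a hypothetical pay-records table with a PayrollId column) is to copy just that payroll's rows into an undo table before running the calculation, and restore from it only if needed.
-- take the pre-calculation copy for one payroll only
SELECT *
INTO dbo.PayRecords_Undo_12345        -- hypothetical undo table, one per payroll run
FROM dbo.PayRecords
WHERE PayrollId = 12345;

-- if the calculation must be undone, put the old rows back
DELETE FROM dbo.PayRecords WHERE PayrollId = 12345;
INSERT INTO dbo.PayRecords
SELECT * FROM dbo.PayRecords_Undo_12345;
-- note: if PayRecords has an IDENTITY column, the re-insert needs an explicit
-- column list plus SET IDENTITY_INSERT, or the identity column left out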
I have 2 tables.
TABLE 1: AttachmentTypeDefaults (fields: DefaultId (pk), AttachmentType)
TABLE 2: AttachmentType (fields: AttachmentTypeId (pk), CompanyId, AttachmentType)
The @CompanyId variable has to be the IDENTITY of a new record that was just INSERTed into another table just before (in this new SP.)
What I want to do is loop thru the AttachmentTypeDefaults table and INSERT every record into the AttachmentType table and use the @CompanyId for every new record. (AttachmentTypeId will create itself.)
How would I do this?
Sounds easy but I am new.
I would like it to be compatible with SQL Server 2000 also.
I tried a query like this, but I don't think I'm close...
declare @MyNewIdentity int
SET @MyNewIdentity = SCOPE_IDENTITY()
-- Now INSERT default attachments
INSERT INTO AttachmentType(CompanyId, AttachmentType)
VALUES( SELECT @MyNewIdentity AS CompanyId, AttachmentType FROM AttachmentTypeDefaults)
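For what it's worth, the attempt above is close: INSERT ... VALUES cannot wrap a SELECT, but INSERT ... SELECT does it directly and needs no loop. A sketch (SCOPE_IDENTITY and this syntax also work on SQL Server 2000):
DECLARE @MyNewIdentity int
SET @MyNewIdentity = SCOPE_IDENTITY()   -- identity of the company row inserted just before this

-- copy every default row, stamping the new CompanyId on each
INSERT INTO AttachmentType (CompanyId, AttachmentType)
SELECT @MyNewIdentity, AttachmentType
FROM AttachmentTypeDefaults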
HELP. I've inherited some less-than-stellar design.
I have a table: tbMD
ObjectID uniqueidentifier not null
Tag nvarchar(50) not null
Data nvarchar(400) not null
This table has 1.6 billion records and has filled up the drive (nearly 1 TB in the DB, with about 600 GB in this one table).
I need to move this table to another drive with ZERO down time.
I have created a new filegroup on a second drive array and created a new table (tbMD_TEMP) and my thought was to copy all the data from tbMD into tbMD_Temp then drop tbMD, and rename tbMD_Temp.
So, anyone have any great ideas on how to get this done very quickly?
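One commonly suggested alternative to the copy/drop/rename approach (a sketch, not a guarantee of zero downtime; it assumes Enterprise Edition so the index can be built ONLINE, and FG2 is a hypothetical filegroup name): building a clustered index on the new filegroup physically moves the heap's pages there without a manual copy.
-- moves tbMD's data onto the new drive while the table stays readable/writable
CREATE CLUSTERED INDEX IX_tbMD_ObjectID
    ON dbo.tbMD (ObjectID, Tag)
    WITH (ONLINE = ON, SORT_IN_TEMPDB = ON)   -- SORT_IN_TEMPDB keeps the sort off the full drive
    ON [FG2];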
I'm trying to copy the details from a specific header as details of a different header (e.g. all sales items from invoice #10 copied as sales items of invoice #11).
So far I have two stored procedures:
1) sp_copyDetailsOne
/*Create a recordset of the desired items to be copied*/
CREATE PROCEDURE sp_copyDetailsOne
@invoiceIdFrom INT,
@outCrsr CURSOR VARYING OUTPUT
AS
SELECT itemId, itemPrice, itemDescription, itemQuantity
FROM tblSalesItems
WHERE (invoiceId = @invoiceIdFrom)
OPEN @outCrsr
2) sp_copyDetailsTwo
CREATE PROCEDURE sp_copyDetailsTwo
@invoiceIdFrom INT,
@invoiceIdTo INT
/*Allocate a cursor variable*/
DECLARE @crsrVar CURSOR
/*Loop through the recordset and insert a new record using the new invoiceId*/
FETCH NEXT FROM @crsrVar
WHILE @@FETCH_STATUS = 0
BEGIN
/*Run the insert here*/
INSERT INTO tblSalesItems (invoiceId, itemId, itemPrice, itemDescription, itemQuantity)
VALUES (@invoiceIdTo, 5, $25.00, N'Black T-Shirt', 30)
/*Fetch next record from cursor*/
FETCH NEXT FROM @crsrVar
END
CLOSE @crsrVar
DEALLOCATE @crsrVar
My question comes on the INSERT of sp_copyDetailsTwo: as you can see, the values are hard coded and I want them pulled from the cursor. However I don't know how to do this - do I need variables, or can I access the cursor values directly in my VALUES clause? Or does this whole approach need overhauling? Any advice is welcome.
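For this particular copy the cursor can be dropped entirely; a single set-based INSERT ... SELECT pulls the values straight from the source invoice. A sketch using the table and columns from the post, combining the two procedures into one:
CREATE PROCEDURE sp_copyDetails
    @invoiceIdFrom INT,
    @invoiceIdTo INT
AS
INSERT INTO tblSalesItems (invoiceId, itemId, itemPrice, itemDescription, itemQuantity)
SELECT @invoiceIdTo, itemId, itemPrice, itemDescription, itemQuantity
FROM tblSalesItems
WHERE invoiceId = @invoiceIdFrom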
I have a parts table which has partid (GUID) column and parentpartId (GUID) column. Need to copy the records to the same table with new GUIDs for partids. How to do that? cursor or temp tables?
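Neither a cursor nor a temp table is strictly needed; NEWID() can be generated inline during a set-based insert. A sketch, with PartName standing in for the remaining (hypothetical) columns:
INSERT INTO dbo.Parts (partid, parentpartid, PartName)
SELECT NEWID(),        -- fresh GUID for each copied row
       parentpartid,   -- or a remapped parent id, depending on the requirement
       PartName
FROM dbo.Parts;
If parentpartid must also point at the newly generated partids, you would first build an old-to-new GUID mapping (e.g. in a temp table) and join to it when inserting.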
I have a table in an offline database having, let's say, 2000 rows, and the same table in an online database with, let's say, 3000 rows. Now I want to copy just those records (1000) from the online database that are not in the offline database table. Also, both tables have the same name, same schema etc.
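A sketch, assuming the offline copy can reach the online one through a linked server (ONLINESRV, OnlineDb, MyTable and the columns are all hypothetical names) and that the table has a shared key Id:
-- run on the offline database
INSERT INTO dbo.MyTable (Id, Col1, Col2)
SELECT o.Id, o.Col1, o.Col2
FROM ONLINESRV.OnlineDb.dbo.MyTable AS o
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.MyTable AS l     -- the local/offline table
                  WHERE l.Id = o.Id);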
Hi, I already submitted this type of question before and received a reply. But unfortunately I found errors when running it on my system.
My problem regarding to this one:
Suppose I have two databases with the same tables but different records, and I would like to copy the records from one database to the other and vice versa, so that both tables contain the same records.
Example:
Database1 (EmployeeTable) contains 6 records. Database2 (EmployeeTable) contains 10 records.
It should copy only those records which are not present in the other database. No duplicate records.
Before, I was recommended to use the primary key, and if one is not present, to use an index.
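A sketch of the two-way copy, assuming both databases sit on the same instance (otherwise a linked-server name goes in front) and a hypothetical primary key EmpId on EmployeeTable:
-- rows that exist only in Database2 go to Database1
INSERT INTO Database1.dbo.EmployeeTable (EmpId, EmpName)
SELECT s.EmpId, s.EmpName
FROM Database2.dbo.EmployeeTable AS s
WHERE NOT EXISTS (SELECT 1 FROM Database1.dbo.EmployeeTable AS d WHERE d.EmpId = s.EmpId);

-- and the other direction
INSERT INTO Database2.dbo.EmployeeTable (EmpId, EmpName)
SELECT s.EmpId, s.EmpName
FROM Database1.dbo.EmployeeTable AS s
WHERE NOT EXISTS (SELECT 1 FROM Database2.dbo.EmployeeTable AS d WHERE d.EmpId = s.EmpId);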
The plant number field is a location based field that the application uses to filter/select data on for the end users. What they want to be able to do is to select a record, select another location from a dropdown list and then click a button that duplicates the master record and the detail records to the new location.
I am thinking that a stored procedure passing the JSAID and the new location number could do it; I am just not sure how to get the new ID when I go to copy the detail records.
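A sketch of that procedure (all table/column names besides JSAID and the location are hypothetical): insert the copied master row first, capture its new identity with SCOPE_IDENTITY(), then use that value while copying the detail rows.
CREATE PROCEDURE dbo.usp_CopyJsaToLocation
    @JSAID int,
    @NewLocation int
AS
BEGIN
    DECLARE @NewJSAID int;

    -- duplicate the master record under the new location
    INSERT INTO dbo.JsaMaster (PlantNumber, Title, CreatedDate)
    SELECT @NewLocation, Title, GETDATE()
    FROM dbo.JsaMaster
    WHERE JSAID = @JSAID;

    SET @NewJSAID = SCOPE_IDENTITY();   -- id of the master row just inserted

    -- duplicate the detail records, pointing them at the new master id
    INSERT INTO dbo.JsaDetail (JSAID, StepNo, Description)
    SELECT @NewJSAID, StepNo, Description
    FROM dbo.JsaDetail
    WHERE JSAID = @JSAID;
END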
I am using SQL Server Agent jobs that run each morning to update the records in a table to match what they should be for that day. I built them and tested them using a test table called "testtable1". It worked fine. But when I switched over to our production table, it fails, saying the table has to be declared. What would be the difference? The production table has a "@" in front of the name; is that causing issues?
USE [Live_build]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
BEGIN
DELETE FROM @ZIPLIST
INSERT INTO @ZIPLIST
SELECT * FROM tblZip3DSWed;
END
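A name starting with @ is a table variable, and a table variable only exists inside the batch that declares it, which is why the job step fails with "must be declared". Either declare it in the same batch or switch to a real/temporary table. A sketch of the first option (the column list is a guess and would need to match tblZip3DSWed):
USE [Live_build]
GO
DECLARE @ZIPLIST TABLE (Zip3 char(3));   -- hypothetical structure
INSERT INTO @ZIPLIST
SELECT * FROM tblZip3DSWed;
-- whatever uses @ZIPLIST must also be in this same batch (no GO in between)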
I have three tables: tblTest, tblTestQuestion, tblAnswers.
Each test can have multiple questions and each question can have multiple answers.
I already have records in the database. I want to create a clone copy of an existing test (except the test details in tblTest, because the test will be unique) and then insert the questions and answers into their respective tables.
I was trying to create an SP but got stuck.
Please find the table structure below:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[tblAnswer](
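A sketch of the cloning logic, assuming SQL Server 2008 or later (for MERGE ... OUTPUT, which lets the old question ids be captured next to the new ones) and hypothetical column names throughout:
DECLARE @OldTestId int, @NewTestId int;
SET @OldTestId = 1;

-- 1. clone the test row (new test details go here instead of the copied ones)
INSERT INTO dbo.tblTest (TestName)
SELECT TestName + ' (copy)'
FROM dbo.tblTest
WHERE TestId = @OldTestId;
SET @NewTestId = SCOPE_IDENTITY();

-- 2. clone the questions, keeping an old-id -> new-id map
DECLARE @QMap TABLE (OldQuestionId int, NewQuestionId int);
MERGE dbo.tblTestQuestion AS tgt
USING (SELECT QuestionId, QuestionText FROM dbo.tblTestQuestion WHERE TestId = @OldTestId) AS src
   ON 1 = 0                          -- never matches, so every source row is inserted
 WHEN NOT MATCHED THEN
      INSERT (TestId, QuestionText) VALUES (@NewTestId, src.QuestionText)
OUTPUT src.QuestionId, inserted.QuestionId INTO @QMap;

-- 3. clone the answers through the map
INSERT INTO dbo.tblAnswer (QuestionId, AnswerText, IsCorrect)
SELECT m.NewQuestionId, a.AnswerText, a.IsCorrect
FROM dbo.tblAnswer AS a
JOIN @QMap AS m ON m.OldQuestionId = a.QuestionId;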
I have an application which stores records on a local SQL Express and I need to move the records to a SQL 2000 database. I have the SQL 2000 server linked in the Express Management Console (under Linked Servers). I'm trying to use a stored procedure to accomplish this, but get an error "Invalid object name 'ngtxa4-rsmsz-01.newpurchase.tblRequest'." Express uses a table named tblTRequest in the TempPurchase database, while 2000 uses a table named tblRequest in a NewPurchase database. Here is the stored procedure I'm using:
USE [tempPurchase]
GO
/****** Object: StoredProcedure [dbo].[InsertRequestToMain] Script Date: 07/09/2007 08:54:56 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[InsertRequestToMain]
AS
BEGIN
INSERT INTO [ngtxa4-rsmsz-01].newpurchase.tblRequest
    (fldRequestDate, fldRequiredBy, fldUserID, fldWorkAreaID, fldVendorID, fldClassID,
     fldEstimate, fldMemo, fldStatusID, fldStatusDate, fldSystemID, fldtmpRequestID, fldUpdateCode)
SELECT fldRequestDate, fldRequiredBy, fldUserID, fldWorkAreaID, fldVendorID, fldClassID,
       fldEstimate, fldMemo, fldStatusID, fldStatusDate, fldSystemID, tmpRequestID, UpdateCode
FROM tblTRequest
WHERE (fldHold = 0)
END
Any assistance with this would greatly be helpful. Thank you.
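One likely cause (an assumption, since only the error text is shown): with just three name parts, SQL Server does not treat [ngtxa4-rsmsz-01] as the linked server, so the remote table needs the full four-part name including the schema (dbo assumed here):
-- four-part name: linked server . database . schema . table
INSERT INTO [ngtxa4-rsmsz-01].newpurchase.dbo.tblRequest
    (fldRequestDate, fldRequiredBy, fldUserID, fldWorkAreaID, fldVendorID, fldClassID,
     fldEstimate, fldMemo, fldStatusID, fldStatusDate, fldSystemID, fldtmpRequestID, fldUpdateCode)
SELECT fldRequestDate, fldRequiredBy, fldUserID, fldWorkAreaID, fldVendorID, fldClassID,
       fldEstimate, fldMemo, fldStatusID, fldStatusDate, fldSystemID, tmpRequestID, UpdateCode
FROM tblTRequest
WHERE (fldHold = 0)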
How can I copy a table from my sql server 2000 db to my sql server 2005 express edition? I have a project in VS.NET 2005 and I have a db in App_Data folder. However, when I look into that folder, there is nothing visible. I now need to copy a table from my existing sql server 2000 to my db located in my project's App_Data folder. Any help would be appreciated..
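One possible route (a sketch, assuming ad hoc distributed queries are enabled on the 2005 Express instance and using hypothetical server/database/table names): pull the table across with OPENDATASOURCE and SELECT ... INTO, run against the database in App_Data.
SELECT *
INTO dbo.MyTable            -- creates the table locally with the same columns
FROM OPENDATASOURCE('SQLOLEDB',
        'Data Source=OldSql2000Server;Integrated Security=SSPI'
     ).MyOldDatabase.dbo.MyTable;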
I am copying data from a database to an Excel file through SSIS. The database is MS SQL 2005 and BIDS is also 2005. However, the job doing this task fails every time. As per investigation, the result of the query is more than 100,000 rows, and we know that Excel has a limit of 65,000 rows of data. Is there a way of raising the limit, or some other setting? Or maybe a better approach, so that everything will be copied to the Excel file successfully?
The PrimeOutput method on component "Source - Query" (1) returned error code 0xC02020C4. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure. End Error Error: 2015-10-22 04:34:58.05 Code: 0xC0047021
Source: Data Flow Task Description: SSIS Error Code DTS_E_THREADFAILED.
Thread "SourceThread0" has exited with error code 0xC0047038. There may be error messages posted before this with more information on why the thread has exited. End Error DTExec: The package execution returned DTSER_FAILURE (1). Started: 4:30:00 AM Finished: 4:35:05 AM Elapsed: 304.844 seconds. The package execution failed. The step failed. "
Hi all, I have two tables in SQL Server with exactly the same structure, and I want to copy all records from one of them to the other. I came across the "INSERT ... SELECT ..." statement, but I have two problems: 1) I don't know anything about the column names!!! I just know the tables have the same structure, and as far as I know "INSERT ... SELECT ..." needs the column list to operate correctly, am I right? 2) These two tables have one primary key column with the IDENTITY feature. Any help greatly appreciated. Regards.
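A sketch of one way around both problems (Src and Tgt are placeholder table names): build the column list from INFORMATION_SCHEMA so nothing has to be typed by hand, and wrap the insert in SET IDENTITY_INSERT if the identity values must be preserved.
DECLARE @cols nvarchar(4000), @sql nvarchar(4000)

SELECT @cols = ISNULL(@cols + ', ', '') + QUOTENAME(COLUMN_NAME)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Src'
ORDER BY ORDINAL_POSITION

SET @sql = 'SET IDENTITY_INSERT dbo.Tgt ON; '
         + 'INSERT INTO dbo.Tgt (' + @cols + ') SELECT ' + @cols + ' FROM dbo.Src; '
         + 'SET IDENTITY_INSERT dbo.Tgt OFF;'

EXEC (@sql)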
I've got to copy several records (almost 100) in a table from one instance of SQL Server 2008 R2 to another instance of SQL Server 2008 R2, but on a different server. The table structure is identical. I've searched online and found examples of doing a bulk insert into a table from a .CSV file. However I don't know of a way of exporting records to a .CSV file using a SELECT statement.
I've heard of linked servers, but at least as far as I know linking the server would require administrative privileges that I don't have on either machine.
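Without a linked server or file export, one workaround for a small row count (a sketch; MyTable, Col1 and Col2 are placeholders, Col1 assumed numeric and Col2 assumed text) is to have a SELECT build the INSERT statements as text, then copy the generated script and run it on the other server:
SELECT 'INSERT INTO dbo.MyTable (Col1, Col2) VALUES ('
     + CAST(Col1 AS varchar(20)) + ', '
     + 'N''' + REPLACE(Col2, '''', '''''') + ''');'
FROM dbo.MyTable
WHERE Col1 BETWEEN 1 AND 100;    -- whatever identifies the ~100 rows to move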
Writing the query for the following: I need to collapse the continuity. If the termdate for an ID is one day less than the effdate of the next record (for the same ID), I need to collapse the records. See the example below ..... How should I write the query which will give me the desired output? I.e., get MIN(effdate) and MAX(termdate) if the termdate is one day less than the effdate of the next record.
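A sketch of the usual gaps-and-islands pattern for this, assuming a hypothetical table Coverage(ID, effdate, termdate): find rows that start an island (no row ends the day before) and rows that end one (no row starts the day after), then pair each start with the earliest end at or after it.
SELECT s.ID, s.effdate, MIN(e.termdate) AS termdate
FROM (SELECT ID, effdate FROM Coverage AS c
      WHERE NOT EXISTS (SELECT 1 FROM Coverage AS p
                        WHERE p.ID = c.ID
                          AND p.termdate = DATEADD(day, -1, c.effdate))) AS s   -- island starts
JOIN (SELECT ID, termdate FROM Coverage AS c
      WHERE NOT EXISTS (SELECT 1 FROM Coverage AS n
                        WHERE n.ID = c.ID
                          AND n.effdate = DATEADD(day, 1, c.termdate))) AS e    -- island ends
  ON e.ID = s.ID AND e.termdate >= s.effdate
GROUP BY s.ID, s.effdate;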
I set up DB mirror between a primary (SQL1) and a mirror (SQL2); no witness. I have a problem when I issue command:
alter database DBmirrorTest Set Partner = N'TCP://SQL2.mycom.com:5022'; go
The error message is:
The remote copy of database "DBmirrorTest" has not been rolled forward to a point in time that is encompassed in the local copy of the database log.
I have the steps below prior to the command. (Note that both servers' service accounts use the same domain account. The domain account I login to do db mirror setup is a member of the local admin group.)
1. backup database DBmirrorTest on SQL1
2. backup database log
3. copy db and log backup files to SQL2
4. restore db with norecovery
5. restore log with norecovery
6. create endpoints on both SQL1 and SQL2
CREATE ENDPOINT [Mirroring]
STATE=STARTED
AS TCP (LISTENER_PORT = 5022, LISTENER_IP = ALL)
FOR DATA_MIRRORING (ROLE = PARTNER)
7. enable mirror on mirror server SQL2
:connect SQL2
alter database DBmirrorTest
Set Partner = N'TCP://SQL1.mycom.com:5022';
go
8. Enable mirror on primary server SQL1
:connect SQL1
alter database DBmirrorTest
Set Partner = N'TCP://SQL2.mycom.com:5022';
go
This is where I got the error.
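That error usually means the mirror's restore sequence stops short of the principal's current log. A minimal sketch of the mirror-side restore (file paths are hypothetical); the key point is that every log backup taken after the full backup, and before SET PARTNER is run, has to be restored WITH NORECOVERY as well:
-- on SQL2 (the mirror)
RESTORE DATABASE DBmirrorTest
    FROM DISK = N'D:\backup\DBmirrorTest.bak'
    WITH NORECOVERY;

RESTORE LOG DBmirrorTest
    FROM DISK = N'D:\backup\DBmirrorTest_log.trn'
    WITH NORECOVERY;
-- repeat RESTORE LOG ... WITH NORECOVERY for any further log backups,
-- then run the ALTER DATABASE ... SET PARTNER commands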
Hi! I did:
alter database mydb set single_user with rollback immediate;
exec sp_detach_db @dbname='mydb', @keepfulltextindexfile='true';
then I tried to copy files to new location on other drives, same server but got >>Cannot copy <myfile>: Access is denied Make sure the disk is not full or write-protected and that the file is not currently in use<<
I also tried renaming the file, without success. I also tried with the DB service stopped (not preferred), without success.
How do I find out which process is locking the files? Best regards
I'm currently at a company with an MS SQL Server 2000. I'm looking into the indexes and tables to see if there are some bottlenecks there, but the LogicalFragmentation is very low in the indexes I have checked.
However, this table has a LogicalFragmentation of 99.9215698242188, which I get when I do DBCC SHOWCONTIG ([TInsurance]) WITH TABLERESULTS. Is that a value to be trusted or not, since this does not check an index? If it is, how do I defrag a table? I only know how to defrag an index (example: DBCC INDEXDEFRAG (MFSSEK, [TInsurance], PK_InsuranceID)).
Tips, suggestions, help, all are very welcome! :-)
DBCC SHOWCONTIG scanning 'TInsurance' table...
Table: 'TInsurance' (2051694557); index ID: 0, database ID: 17
TABLE level scan performed.
- Pages Scanned................................: 1275
- Extents Scanned..............................: 225
- Extent Switches..............................: 224
- Avg. Pages per Extent........................: 5.7
- Scan Density [Best Count:Actual Count].......: 71.11% [160:225]
- Extent Scan Fragmentation ...................: 74.67%
- Avg. Bytes Free per Page.....................: 520.0
- Avg. Page Density (full).....................: 93.58%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
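Index ID 0 means TInsurance is a heap, and logical fragmentation is not a meaningful number for heaps; Extent Scan Fragmentation and Scan Density are the figures to watch here. If the heap does need compacting, the usual SQL Server 2000 approach is to build a clustered index on it (and drop it again if the table must stay a heap). A sketch, assuming the key column behind PK_InsuranceID is called InsuranceID:
-- building a clustered index rewrites the heap in order and compacts its pages
CREATE CLUSTERED INDEX IX_TInsurance_Defrag ON dbo.TInsurance (InsuranceID)
-- drop it again if the table must remain a heap (PK_InsuranceID stays nonclustered)
DROP INDEX TInsurance.IX_TInsurance_Defrag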
I have seen several examples explaining the fact that a table containing a field for each day of the week is for the most part an array. A specific example is where data representing worked hours is stored in a table.
CREATE TABLE [hoursWorked] (
[id] [int] NOT NULL ,
[location_id] [tinyint] NOT NULL,
[sunday] [int] NULL ,
[monday] [int] NULL ,
[tuesday] [int] NULL ,
[wednesday] [int] NULL ,
[thursday] [int] NULL ,
[friday] [int] NULL ,
[saturday] [int] NULL
)
I had to work with a table with a similar structure about 7 years ago and I remember that writing code against the table was pretty close to Hell on earth.
I am now looking at a table that is similar in nature - but different.
CREATE TABLE [blah] (
[concat_1_id] [int] NOT NULL ,
[concat_2_id] [int] NOT NULL ,
[code_1] [varchar] (30) NOT NULL ,
[code_2] [varchar] (20) NULL ,
[code_3] [varchar] (20) NULL ,
[some_flg] [char] (1) NOT NULL
) ON [PRIMARY]
The value for code_2 and code_3 will be dependently null and they will represent similar data in both records (i.e. the value "abc" can exist in both fields). For example, if code_2 contains data then code_3 will probably not contain data.
I do not think that this is an array. But with so many rows where code_2 and code_3 will be NULL, something just does not feel right.
I will appreciate your input.
Does anyone know how to identify the hottest, most active tables in a database? We have hundreds of users hitting a PeopleSoft database with hundreds of tables. We are I/O bound on our SAN, and are thinking of putting the hottest tables on a solid state (RAM) drive for improved performance. Problem is: which are the hottest tables? Would like to do this based on hard data instead of developer/vendor guesses. Any suggestions are much appreciated.
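If the instance is SQL Server 2005 or later, the index usage DMV gives hard per-table numbers since the last service restart; a sketch:
SELECT OBJECT_NAME(s.object_id) AS table_name,
       SUM(s.user_seeks + s.user_scans + s.user_lookups) AS reads,
       SUM(s.user_updates) AS writes
FROM sys.dm_db_index_usage_stats AS s
WHERE s.database_id = DB_ID()
GROUP BY s.object_id
ORDER BY reads DESC;
On SQL Server 2000 there is no equivalent DMV, so a server-side trace/Profiler capture would be needed instead.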
Hi, for SQL Server 2000, how can we find the fix pack (service pack) level installed on this SQL Server? Is there any command, or any GUI tool, to identify the level?
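SERVERPROPERTY reports it directly and is available on SQL Server 2000; for example:
SELECT SERVERPROPERTY('ProductVersion') AS version,      -- e.g. 8.00.2039
       SERVERPROPERTY('ProductLevel')   AS service_pack, -- e.g. SP4 (or RTM)
       @@VERSION                        AS full_banner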
I have a huge DB with many services, users and applications hitting it. Suddenly one of our columns was nullified, and we are not able to track who did it or how.
Can anyone tell me what's the best way to identify this? A trace (which events to select), a trigger, or what?
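Finding the culprit after the fact only works if a trace was already capturing the relevant events. If it may happen again, one low-tech option is an UPDATE trigger that records who set the column to NULL. A sketch; MyTable, MyColumn, the key column Id and the audit table are all hypothetical names:
CREATE TABLE dbo.MyColumnAudit (
    AuditId   int IDENTITY(1,1) PRIMARY KEY,
    KeyValue  int,
    ChangedBy sysname       DEFAULT SUSER_SNAME(),
    HostName  nvarchar(128) DEFAULT HOST_NAME(),
    ChangedAt datetime      DEFAULT GETDATE()
);
GO
CREATE TRIGGER trg_MyTable_NullWatch
ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    -- log every row where MyColumn went from a value to NULL
    INSERT INTO dbo.MyColumnAudit (KeyValue)
    SELECT i.Id
    FROM inserted AS i
    JOIN deleted  AS d ON d.Id = i.Id
    WHERE i.MyColumn IS NULL AND d.MyColumn IS NOT NULL;
END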