Backing Up Online Log After Loss Of Primary Data File
May 24, 2004
My question is this:
Is there any way to back up the current online log after complete loss of the corresponding data file?
Example:
(2) Logical Disk Volumes
Disk 1 (D:) contains pubs_data.mdf
Disk 2 (E:) contains pubs_log.ldf
Disk 1 becomes corrupt and goes offline, leaving the database pubs in a suspect state. Is a backup of the current online log pubs_log.ldf possible? If a backup of the log is not possible, are there any other restoration methods that can be used to bring this database back online, rescuing as much information as possible from the current online log pubs_log.ldf?
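For reference, this is roughly what a tail-log backup attempt looks like in this scenario; WITH NO_TRUNCATE is the documented option for backing up the log when the data file is lost or the database is damaged (the target file name here is just an example):

    BACKUP LOG pubs
    TO DISK = 'E:\pubs_taillog.trn'
    WITH NO_TRUNCATE;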
I have an online SQL Server database provided by an ISP. I do not have permission to create a backup device, and I understand this is normal practice. I am not using Enterprise Manager to administer the online database. I know I can back up the structure of the database using SQL scripts.

My question is: how do I back up, on my own machine, the data contained in the online database tables I have created? If I were using Enterprise Manager I could do it by downloading tables using the DTS facility, but how can I do it without Enterprise Manager? Is there some workaround which I have missed, e.g. creating a CSV file of the data?

Best wishes for 2005 to all those helpful people in this newsgroup!

John Morgan
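One hedged suggestion: the bcp command-line utility, installed with the SQL Server client tools, can export a table's data to a delimited file without Enterprise Manager. The server, database, table, and login names below are placeholders:

    bcp YourDatabase.dbo.YourTable out YourTable.csv -c -t, -S yourserver.yourisp.com -U yourlogin -P yourpassword

-c exports character data and -t, sets a comma field terminator; run it once per table you want to save locally.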
I'm getting a very strange potential loss of data error on my flat file source in the data flow. The flat file is fixed-width and the column in question is defined as numeric [DT_NUMERIC]. The transform runs great as long as this column IS NOT ZERO. As soon as a zero value is found, I get the error. It errors on the flat file source, so I haven't been able to use a data viewer to see what's going on.
The following error is encountered when importing a delimited flat file with a date of format "dd.mm.yyyy":
Error: 0xC02020A1 at Data Flow Task, Source - DCDtest_xpt [1]: Data conversion failed. The data conversion for column "value date" returned status value 2 and status text "The value could not be converted because of a potential loss of data.".
This was when I manually built the package.
I get the same error when using the Import/Export wizard
I am even using the "suggest types" button and have tried sampling the default number of rows (?200) and also 2000.
The type it suggests is DT_DATE.
But reading the BOL, this would appear to be the wrong type:
http://msdn2.microsoft.com/en-us/library/ms141036.aspx (obviously the right hand doesn't know what the left hand is doing)
seems to indicate that
DT_DBTIMESTAMP
is the correct value
(I cannot believe that they have different datatypes in SSIS than in SQL. I can't believe for a minute this is for all the hundreds of thousands of Oracle users who obviously switched to SSIS when they saw what a high quality product it is.)
I tried other DT_... values but no dice.
Can anyone help?
I always thought that Classic ASP was the worst product I've ever worked with from the Microsoft stable, but I was wrong.
I am fed up of having to post on this board (no wonder it is so 'popular')
Talking to peers, reading books, and googling nearly always enables me to figure out a problem with any application I have ever used, but SSIS breaks the mould in sheer crapness and the weirdness and unfathomability of its cryptic errors.
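For what it's worth, one common workaround for the dd.mm.yyyy issue (a sketch, not the only answer) is to import the column as a plain string and convert it afterwards in T-SQL, where style 104 is the documented dd.mm.yyyy format; the staging table and column names are hypothetical:

    -- style 104 = dd.mm.yyyy
    SELECT CONVERT(datetime, '24.05.2004', 104);

    -- converting a staged string column into a datetime column
    UPDATE dbo.Staging
    SET value_date_dt = CONVERT(datetime, value_date_str, 104);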
Does anyone have any great suggestions on how I can import an online XML file into an SQL 2005 table?
So far I tried to accomplish it with SSIS Data Flow Task where my source is XML Source (Data access mode: XML file location; XML location: URL, Use inline schema = True). This set up properly identified the columns to be imported.
I used Copy Column data flow transformation task to load data to OLE DB destination table that has same structure.
When I run the task it does execute with no errors, however the table remains empty. It looks like I am failing to read the actual data.
Do you have any suggestions? I am willing to go around this approach with stored procs/com/you name it - just make it work!
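Since stored procs are on the table, here is a hedged T-SQL alternative for SQL Server 2005: download the XML file locally first (e.g. with an FTP task or other download step), then shred it with OPENROWSET(BULK ...) and the xml type's nodes()/value() methods. The file path, element names, and table columns below are assumptions:

    DECLARE @x xml;
    SELECT @x = BulkColumn
    FROM OPENROWSET(BULK 'C:\Import\feed.xml', SINGLE_BLOB) AS src;

    INSERT INTO dbo.MyTable (Col1, Col2)
    SELECT n.value('(Col1)[1]', 'nvarchar(100)'),
           n.value('(Col2)[1]', 'int')
    FROM @x.nodes('/Root/Row') AS t(n);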
First of all, great webcast today. My question is, I have everything up and running and would like to know what to do when the machine my primary is on quits or has some type of disaster. Do I need to manually run recovery on each db that was mirrored? I'm not currently running a witness.
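Without a witness there is no automatic failover, so the documented manual option is to force service on the mirror server for each mirrored database, accepting possible data loss. Run this on the mirror (the database name is a placeholder):

    ALTER DATABASE MyMirroredDb
    SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS;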
Why does DBCC SHRINKFILE with EMPTYFILE not redistribute data evenly in the primary filegroup with multiple files?
Please run the script attached to see what the end result is.
This is what I set up last night on my test machine.
1) Create database [FGTest], size 200MB
2) Create a table called TEST on PRIMARY
3) Insert 40MB of data into TEST
4) Add another file called temp to the PRIMARY filegroup, size 200MB
5) Shrinkfile('FGTest', emptyfile) so that all data is transferred from the FGTest file into the temp file
6) Add another 2 files called DATA2 and DATA3, both 200MB
7) We now have 3 empty files that I want the data distributed evenly across: FGTest, DATA2 & DATA3
8) Shrinkfile('temp', emptyfile) to move all the data from temp over the 3 files evenly
I would expect at this stage to have the following:
FGTest = 13MB, DATA2 = 13MB, DATA3 = 13MB
(40MB of data over 3 files should be about 13 MBish in each file)
What I actually end up with is this:
FGTest = 20MB, DATA2 = 10MB, DATA3 = 10MB
It looks as though SQL Server is allocating 50% of all data to the original file and then 50% evenly over the remaining files in PRIMARY.
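A minimal repro sketch of the steps as described above (paths and sizes are illustrative; the key statements are the two DBCC SHRINKFILE ... EMPTYFILE calls):

    CREATE DATABASE FGTest
    ON PRIMARY (NAME = FGTest, FILENAME = 'D:\Data\FGTest.mdf', SIZE = 200MB)
    LOG ON (NAME = FGTest_log, FILENAME = 'D:\Data\FGTest_log.ldf', SIZE = 50MB);

    -- create table TEST on PRIMARY and insert ~40MB of data here

    ALTER DATABASE FGTest
    ADD FILE (NAME = temp, FILENAME = 'D:\Data\temp.ndf', SIZE = 200MB)
    TO FILEGROUP [PRIMARY];

    DBCC SHRINKFILE (FGTest, EMPTYFILE);   -- push data out of the original file

    ALTER DATABASE FGTest
    ADD FILE (NAME = DATA2, FILENAME = 'D:\Data\DATA2.ndf', SIZE = 200MB),
             (NAME = DATA3, FILENAME = 'D:\Data\DATA3.ndf', SIZE = 200MB)
    TO FILEGROUP [PRIMARY];

    DBCC SHRINKFILE (temp, EMPTYFILE);     -- redistribute over FGTest, DATA2, DATA3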
Previously the same records existed both in the table with the primary key and in the table with the foreign key. We have found that 7 records were lost from the primary key table, but the same records still exist in the foreign key table.
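A hedged diagnostic for this situation: a query like the following (table and column names are hypothetical) lists the orphaned foreign key rows whose parent rows have gone missing, so you can see exactly which 7 records are affected:

    SELECT c.*
    FROM dbo.ChildTable AS c
    WHERE NOT EXISTS (SELECT 1 FROM dbo.ParentTable AS p WHERE p.ID = c.ParentID);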
Hi Everyone, I need help on how to produce a SQL Server Database Primary Data File. I have read the Visual C# 2006 book by Deitel and Deitel; unfortunately the book does not say how to produce the aforementioned data file. The software that I currently have to help me produce this data file is Visual C#, J#, Web Development, SQL Server 2005 and Office 2007.
I have a good understanding of databases from studying SQL and Oracle, but I am not sure which is the best software to use to produce the Primary Data File; ideally I would like to use Access from my Office 2007 package.
Many thanks for your help in resolving this problem.
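For context: in SQL Server the primary data file (.mdf) is created for you when you create a database, so SQL Server 2005 itself (not Access) is the tool to use here. A minimal sketch with illustrative names and paths:

    CREATE DATABASE MyDatabase
    ON PRIMARY (NAME = MyDatabase_data, FILENAME = 'C:\Data\MyDatabase.mdf', SIZE = 10MB)
    LOG ON (NAME = MyDatabase_log, FILENAME = 'C:\Data\MyDatabase_log.ldf', SIZE = 5MB);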
I have an odd problem that is driving me nuts. I have a very simple SSIS package that imports a 5-column flat file into a SQL Server 2005 table.
When I created this package with the wizard, it would execute perfectly fine and process all rows into the destination table.
But when I hit F5 to execute it manually it will fail before inserting a single row.
The error it generates is (Spalte 5 is a datetime in the format DD.MM.YYYY):
Error: 0xC02020A1 at Datenflusstask, Source - Daten_NC_1_txt [1]: Data conversion failed. The data conversion for column "Spalte 5" returned status value 2 and status text "The value could not be converted because of a potential loss of data.".
Error: 0xC0209029 at Datenflusstask, Source - Daten_NC_1_txt [1]: The "output column "Spalte 5" (25)" failed because error code 0xC0209084 occurred, and the error row disposition on "output column "Spalte 5" (25)" specifies failure on error. An error occurred on the specified object of the specified component.
Error: 0xC0202092 at Datenflusstask, Source - Daten_NC_1_txt [1]: An error occurred while processing file "C:\Work\Daten_NC_1.txt" on data row 177.
Edit: Modified the Title so it properly reflects the Problem & the Solution
Hi, I have a big problem with a database in MS SQL Server 2000. The rows in some tables have, for the second time, become mixed up among themselves for no apparent reason. The application that uses the db is totally transactional, and there are no queries without a WHERE clause. The database is on a computer with NAS architecture. I have this problem for the first time in 5 years of using MS SQL Server. Can someone help me? TIA
I'm using merge replication. I have created one publisher with dynamic filtering and I have created two subscribers from it (pull subscription). It was working fine, but after synchronizing 2-3 times, data from some tables went missing.
I got a .bak file from my friend and I want to restore the database (presently I don't have any database in my system).
But while I try to restore it I am getting an error:

"System.Data.SqlClient.SqlError: Directory lookup for the file "D:\Microsoft SQL Server 2005\mydatabase.mdf" failed with the operating system error 2 (The system cannot find the file specified.). (Microsoft.SqlServer.Smo)"

I have installed Microsoft SQL Server in a different directory, not where it was installed on my friend's system.
I can successfully restore backup files of databases created on my own system, but not backup files created on other systems.
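A hedged fix for this specific error: the backup records the original file paths, so restoring on a machine with a different layout needs WITH MOVE to point the files at paths that exist locally. First list the logical file names, then restore; the paths and logical names below are examples:

    RESTORE FILELISTONLY FROM DISK = 'C:\Backups\mydatabase.bak';

    RESTORE DATABASE mydatabase
    FROM DISK = 'C:\Backups\mydatabase.bak'
    WITH MOVE 'mydatabase'     TO 'C:\Data\mydatabase.mdf',
         MOVE 'mydatabase_log' TO 'C:\Data\mydatabase_log.ldf';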
I am using SQL Server 2005. I am trying to import data from CSV files into a SQL Server table using the Import wizard. The text qualifier is double quotes ("), the column delimiter is a comma (,), and the first row has column names.

One of the fields is named "id"; it is a GUID whose datatype in SQL Server is uniqueidentifier. It looks like this in the file: ..."data","data","dbf7edf8-0ca8-4e53-91e3-5901cdc1819a","data"... As you can see, there are no enclosing curly braces for the guid value. The DTS chokes on this and throws this error: The value could not be converted because of a potential loss of data.

If I add curly braces like this {dbf7edf8-0ca8-4e53-91e3-5901cdc1819a}, it imports with no problem. Is there a way to import this type of data? There is no way I can edit these files, and I would prefer not changing the datatype of the id field. Or is this a limitation of SQL Server?

Thanks, Bullpit
Hi All, I'm trying to track down a mysterious problem we're experiencing in which updates and inserts to tables in our mssql2k server appear to be 'disappearing.'

To explain our situation: we have a web page (written in ASP, if that's relevant) on which we accept enrollment information. When that page is submitted, the form data is passed to a stored procedure on our mssql2k server, which performs several operations, all of which are wrapped in a transaction. In particular, the stored procedure performs an update operation on a record in one table (I'll call it TableA) and an insert into another table (TableB).

If the procedure encounters a problem (i.e. after each update/insert operation in the procedure we test for IF @@Error<>0) it performs a rollback, performs a select similar to the one immediately below, and then RETURNs.

SELECT '1' as error, 'Unable to update TableA' as errormsg

If the procedure doesn't fail any of the @@Error tests, the transaction is committed, and a membership number is SELECTed to be returned.

SELECT '0' as error, @memnum as membershipnumber

The @memnum variable is populated within the transaction.

Back in the ASP page we test both for the proc returning an empty recordset, and for it passing an explicit value in the error field, and push the page to an error page if either of these conditions is met. If, on the other hand, neither condition is met, and the membershipnumber field in the recordset is populated with a valid membership number, we push to a confirmation page.

This confirmation page receives the membership number in a session variable, performs a SELECT against TableB (the table that received the insert during the proc) using that membership number in the WHERE clause, and the resultant recordset is used to populate the confirmation details on that page. That recordset is also then used to populate the details of a confirmation email, which is automatically sent by the confirmation page.

And now here's our problem: we've become aware of a handful of people who have gone through the enrollment process and have received the confirmation email containing the information they supplied as expected, but the data appears to be entirely missing from our tables. By that I mean that the record in TableA does not appear to have been updated (under normal circumstances that record should have had several flags set, and several other fields updated with information supplied by the person enrolling), and the record in TableB does not appear to have been inserted.

In essence, looking at our tables, it *feels* like the transaction in the stored procedure for that particular enrollment hit a problem and was rolled back. However, the evidence that we have in the form of the confirmation email argues strongly that the data must have existed in our tables (particularly in TableB), if only for an unknown period of time.

We're kind of at our wit's end to work out what is going wrong with these enrollments. From my understanding of transactions (and I could well be wrong), any changes to data (i.e. updates, inserts, etc.) contained within are essentially 'invisible' to any other operation (i.e. the SELECT that happens in the confirmation page) until the transaction is committed, implying that the effect of the update and insert should have been 'permanently' successful if no error code is received and if a valid membership number was returned.

I ask because someone on our team has suggested that maybe the operations in the transaction 'lasted long enough' in the tables to have been visible for the SELECT on the confirmation page to have worked, but were then subsequently rolled back, explaining why the confirmation email is appropriately populated and why the data then appears to be missing. However, as I said, this doesn't match my understanding of how transactions behave.

Sorry for the length of this post, but I felt it was best to explain this as well as I could. Does anyone have any advice they can give us on this situation? I.e., are there any known problems with operations in transactions 'bleeding over' into tables, but then being rolled back at some later point? Does anyone have any thoughts or suggestions on how we can further diagnose this issue?

Truly, any help will be immensely appreciated...

Thanks in advance,
M Wells
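For reference, a condensed sketch of the procedure pattern described above (all names are hypothetical; this is a reconstruction from the description, not the poster's actual code):

    CREATE PROCEDURE dbo.EnrollMember @MemberID int
    AS
    DECLARE @memnum int

    BEGIN TRAN

    UPDATE TableA SET EnrolledFlag = 1 WHERE MemberID = @MemberID
    IF @@ERROR <> 0
    BEGIN
        ROLLBACK TRAN
        SELECT '1' AS error, 'Unable to update TableA' AS errormsg
        RETURN
    END

    INSERT INTO TableB (MemberID) VALUES (@MemberID)
    IF @@ERROR <> 0
    BEGIN
        ROLLBACK TRAN
        SELECT '1' AS error, 'Unable to insert into TableB' AS errormsg
        RETURN
    END

    SET @memnum = SCOPE_IDENTITY()   -- membership number populated within the transaction

    COMMIT TRAN

    SELECT '0' AS error, @memnum AS membershipnumber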
During import from CSV I am getting the following error: "The value could not be converted because of a potential loss of data". My CSV file contains a column "years" and I have selected datetime in the import wizard. Can I escape this error and import all my data?
We had set up merge replication on 2 db servers. For some reason, the subscription started failing a month back with the error "invalid object sysmergexxxx" on the subscriber. I did a reinitialize, and now all the changes on the subscriber which weren't synced got deleted. I have tried all 3 log recovery tools with no luck. Is there any hope of recovering the data? The last backup on the subscriber was a month ago.
I've searched the threads and didn't see anything which seemed to fit this specific issue....
I have a Data Flow task which reads from an OLE DB Source (SQL Server 2005), uses a Data Conversion transformation to convert some field values, and finally outputs the result to distinct tabs of an Excel workbook. The task is failing with the following error:
There was an error with input column "oBBCompanyName" (2162) on input "Excel Destination Input" (57). The column status returned was: "The value could not be converted because of a potential loss of data.".
Using the Advanced Editor for the Excel Destination component, I examined the datatype of oBBCompanyName (ID = 2162) in the Input Columns list of the "Excel Destination Input" (identified with ID = 57). The Data Type is defined as DT_WSTR with Length = 255. The ExternalMetaDataColumnID = 203.
Also in the Advanced Editor for the Excel Destination, I examined the datatype of BBCompanyName (ID = 203) in the External Columns list of the Excel Destination Input. The Data Type is defined as "Unicode String [DT_WSTR]" with Length = 255.
What could I be overlooking that might be the root cause of this issue? The same error is occurring for different Excel Destination tasks in the data flow.
We have set up transactional replication across several databases using SQL Server 2000, spread across multiple sites in a fully connected network. There is one main table from which data is replicated from the publisher to the destination. Horizontal filtering is being used on this table to enable sending/routing of the records to the correct DB (site). It has been observed that documents/records are getting lost between some sites. Say 10 documents are sent from the publishing database, but only 5 are received at the destination database, although the sent history for all 10 documents is available at the publishing database.
Can anyone give guidance on how to analyse and resolve this problem? Could an unreliable network be the issue? If the network is not reliable and the connection is lost during replication, how does replication ensure that no data is lost?
I am getting the error below in my Flat File Source.
I've seen this error many times before, and have successfully resolved this problem in the past.
However, this time it's a little different. It's complaining about row 7 of myFile.csv, column 20. I have column 20 defined as a Numeric(18,6). It also maps to the Price field in the table, which is also a Numeric(18,6).
The problem is, on row 7 of myFile, column 20 is blank. That is, there's no data for row 7, column 20.
So, why should it care about this?? If it's blank, then how can you lose any data?? I have several other blank columns in this file, but they aren't throwing any errors. Just this one.
Thanks
Errors:
[Flat File Source - myFile [1]] Error: Data conversion failed. The data conversion for column "Column 20" returned status value 2 and status text "The value could not be converted because of a potential loss of data.".
[Flat File Source - myFile [1]] Error: The "output column "Price" (333)" failed because error code 0xC0209084 occurred, and the error row disposition on "output column "Price" (333)" specifies failure on error. An error occurred on the specified object of the specified component.
[Flat File Source - myFile [1]] Error: An error occurred while processing file "d:\myDir\myFile.CSV" on data row 7.
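A hedged note on the blank-value case: the flat file source cannot convert an empty string to DT_NUMERIC, so one workaround is to read the column as a string and turn blanks into NULLs, either in a Derived Column or after loading into a staging table. A T-SQL sketch with hypothetical staging names:

    SELECT CAST(NULLIF(LTRIM(RTRIM(Price_str)), '') AS numeric(18,6)) AS Price
    FROM dbo.Staging;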
I am transferring data from SQL Server 2005 to SQL Anywhere 9. Dates and numbers are successfully transferred, but chars and varchars are not copied. Though there are values in the SQL Server 2005 table, they are shown as blank in SQL Anywhere. I cannot understand the data loss.

Is this a code page problem or data size? For example, a varchar(20) data type exists in both SQL Server and Sybase SQL Anywhere. SQL Server is in code page 1252 and SQL Anywhere in 1253.
I am having a bit of difficulty trying to understand this one. I had imported two tables of around 2-300 rows each, ran this in 64-bit, scheduled it, and it ran okay. I then introduced a 3rd table which, if I change the 64-bit runtime setting (Run64BitRuntime) to false, will run; however, if left true it keeps mentioning the output column "descr" with a loss of data.

I did move it to 32-bit and then ran the package; it comes up as below. If I remember, I can run this in 32-bit mode, which I'm sure will work, hmm maybe!! But what I can't understand is why it works for two tables. Is it something to do with the translation of the table, or do I need to alter the select statement?
Currently it is a select *
Executed as user: jvertdochertyr. ...on 9.00.3042.00 for 64-bit Copyright (C) Microsoft Corp 1984-2005. All rights reserved. Started: 10:30:11 PM Error: 2007-12-06 22:30:21.02 Code: 0xC020901C Source: st_stock st-stock out [1] Description: There was an error with output column "descr" (56) on output "OLE DB Source Output" (11). The column status returned was: "The value could not be converted because of a potential loss of data.". End Error Error: 2007-12-06 22:30:21.02 Code: 0xC0209029 Source: st_stock st-stock out [1] Description: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "output column "descr" (56)" failed because error code 0xC0209072 occurred, and the error row disposition on "output column "descr" (56)" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure. End Error Erro... The package execution fa... The step failed.
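One hedged thing to try instead of SELECT *: list the columns explicitly and cast "descr" to a known type in the source query, so the type the provider reports matches what the pipeline expects (the column list and length here are assumptions):

    SELECT col1, col2, CAST(descr AS varchar(256)) AS descr
    FROM dbo.st_stock;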
We had merge replication set up between 2 tables, Table A and Table B, using SQL 2000. This was working 100%. The users asked us to disable updates/deletes to both these tables if data existed in 2 other tables, Table AA and Table BB. We implemented it as follows:
1) Created Insert/Update/Delete triggers for Tables A & B. Each basically checks, for Table A, whether a record exists in Table AA; if it exists, raise an error and don't commit.
2) Removed all foreign constraints from Table AA and BB
3) Added Table AA and BB to the current replication.
Then all hell broke loose; we got conflicts all over the place saying that Table AA cannot be updated because records do not exist in Table X. To our surprise we found triggers generated by Erwin in 1998 - triggers that check for "foreign constraints" - and removed them immediately.
We continued to get conflicts, but could see from the error messages that they were generated by the triggers in point 1. We added the NOT FOR REPLICATION clause and everything has been running smoothly, or so we thought...
After 2 months we got a call that data is missing. It's random data, and the only explanation I have is that replication caused it. My biggest reason for saying this is that, tracking the application audit trail, I've found that all the missing data was added during the period we had all the conflicts.
I need a solid explanation for this and can anyone confirm that this is possible?
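For reference, a sketch of the guard-trigger pattern described in point 1, with NOT FOR REPLICATION so the merge agent's own changes don't fire it (all names are hypothetical):

    CREATE TRIGGER trg_TableA_Guard
    ON TableA
    FOR UPDATE, DELETE
    NOT FOR REPLICATION
    AS
    IF EXISTS (SELECT 1
               FROM deleted d
               JOIN TableAA aa ON aa.KeyCol = d.KeyCol)
    BEGIN
        RAISERROR('Row is referenced in TableAA; update/delete not allowed.', 16, 1)
        ROLLBACK TRAN
    END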
Hi All, before posting this I did search for backing up the transaction log file (.LDF). Currently the transaction log file is 4 GB in size and I want to reduce it to a few MBs. Will the following procedure work? 1) Take a backup of the transaction log file by executing the statement: BACKUP LOG <name of DB> TO <name of db>backup
2) Run the DBCC SHRINKFILE statement to reduce the log file size: DBCC SHRINKFILE(<name of log file>, 25). This is as per the procedure explained in http://support.microsoft.com/kb/272318
Will this free up 4GB of physical space on the drive, or is there anything else I need to do?
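A concrete sketch of those two steps with placeholder names (note that the shrink reclaims disk space only after the log has been backed up, and the log will grow again under load):

    BACKUP LOG MyDb TO DISK = 'E:\Backups\MyDb_log.trn';
    DBCC SHRINKFILE (MyDb_log, 25);   -- target size in MB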
We want to rebuild our main SQL Server machine. How can we back up everything about the SQL Server (such as all databases/objects and settings for security and users) and then recover them without any data loss? A related question is how to recover the server machine in case of a system failure or a whole-machine crash. Thanks!
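Hedged pointer: server-level settings live in the system databases, so a full strategy backs those up alongside the user databases; logins and server configuration are in master, and jobs/alerts/operators are in msdb (the paths below are placeholders):

    BACKUP DATABASE master   TO DISK = 'E:\Backups\master.bak';
    BACKUP DATABASE msdb     TO DISK = 'E:\Backups\msdb.bak';
    BACKUP DATABASE MyUserDb TO DISK = 'E:\Backups\MyUserDb.bak';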
Hi All, further to my previous long-winded question about a situation in which we appear to be mysteriously losing data from our mssql2k server.

We discovered an update statement, in the stored procedure we believe is at fault, after which no error check was being performed. Under certain conditions, this update is fired against the same record in the same table as the immediately preceding update statement within the transaction. We are now suspecting that under some circumstances these two updates get into a locking conflict that is eventually forcing the transaction to be rolled back.

However, I'm still left with three questions.

1) Where an update in a transaction gets locked, and an error isn't tested immediately afterwards (i.e. no 'IF @@Error<>0' test is made), would the transaction proceed as normal?

2) Most critically, would statements in the stored procedure that appear after the COMMIT TRAN statement also be executed, even if an unresolved lock existed within the transaction?

3) Assuming that (2) does happen, would a SELECT made on another connection with a 'WITH(NOLOCK)' locking hint be able to see the changes made in the locked transaction, even if the server is set to READ COMMITTED and the SELECT takes place some time after the COMMIT TRAN is issued? More to the point, given (2), how long would the locked transaction survive before being rolled back after the COMMIT TRAN has been issued? Is it possible that the COMMIT TRAN takes place, the transaction is flagged for potential rollback while a lock resolution is attempted, the stored procedure exits as though everything was fine, a subsequent SELECT (i.e. performed as one of the next operations in the same application) using WITH(NOLOCK) 'sees' the changes made by the transaction, reinforcing the impression that the transaction succeeded, and then at some point thereafter the lock is determined to be unresolvable and the transaction is rolled back, making it seem as though the data disappeared, even though it had been SELECTable via a different connection to the server?

Thanks, by the way, to Simon and Erland for your advice on my previous questions about this problem.

Much warmth,
M Wells
We have a SQL cluster installed on top of a Windows cluster in a VM environment: Node1 and Node2 under Windows Failover Clustering. The SQL instance is currently on Node2; the instance is up and running, but the SQL cluster service remains online-pending and restarts the instance every 5 minutes.

The SQL Browser service is running successfully. TCP/IP ports are enabled and configured. If we start the SQL Server Agent, it is on for seconds and then stops immediately. The cluster service attempts to connect to the SQL service every few minutes (a setting on the SQL cluster resource) for the IsAlive check; if this fails, the SQL resource is restarted even if the instance was online. I hope that is exactly what is happening.
[sqsrvres] ODBC Error: [08001] [Microsoft][SQL Server Native Client 11.0]SQL Server Network Interfaces: Error Locating Server/Instance Specified [xFFFFFFFF]. (268435455)
00001024.00053314::2015/10/30-19:57:50.772 ERR [RES] SQL Server <SQL Server (SIMAH_COMMDB)>: [sqsrvres] ODBC Error: [HYT00] [Microsoft][SQL Server Native Client 11.0]Login timeout expired (0)
00001024.00053314::2015/10/30-19:57:50.772 ERR [RES] SQL Server <SQL Server (SIMAH_COMMDB)>: [sqsrvres] ODBC Error: [08001] [Microsoft][SQL Server Native Client 11.0]A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online. (268435455)
00001024.00053314::2015/10/30-19:57:50.772 INFO [RES] SQL Server <SQL Server (SIMAH_COMMDB)>: [sqsrvres] Could not connect to SQL Server (rc -1
I'm trying to build a DTSX package that FTP's an XML file to the local file system and then imports it into an existing table.
My "Data Flow" for the package starts with an XML Source component and then goes to Data Conversion component and then to an OLE DB Destination component.
It all executes with out error and seems to work fine, but when I look at the data in the table after I've run the package it seems to have inserted the appropriate number of rows from the XML file but all of the column values are NULL.
All of the data in the XML file is surrounded by <![CDATA[ ]]> and I discovered that if I remove the CDATA wrapper by hand then it inserts the data properly. The only problem is that I'm not in a position to have the data provider remove the CDATA tags and some of the data in the XML file needs the CDATA wrapper or else it will not validate.
Anyone know of anything I can do to get the CDATA to import properly?
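Hedged observation: the CDATA wrapper is only markup, and when SQL Server's xml type parses a document, the value() method returns the text inside CDATA transparently. So loading the file into an xml variable and shredding it server-side (as sketched for the earlier XML post above) may sidestep the problem entirely. A tiny self-contained check:

    DECLARE @x xml = N'<row><name><![CDATA[Acme & Co]]></name></row>';
    SELECT @x.value('(/row/name)[1]', 'nvarchar(100)');   -- returns: Acme & Co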
With my SSIS package, I want to import data from a flat file (TXT, delimited with "?") to a table in my database in SQL Server 2005. The problem is that I have a column of type datetime in my table, but as you know, the data in the txt is string.

First I created my package by importing data with the Import/Export wizard in Management Studio, selecting a flat file connection to my database. There, I selected my txt file and "?" as the column delimiter, then suggested types for the columns. It selects an 8-byte signed integer type for the datetime column in my table. After these steps I create my package and execute it, but it does not put data in my table in the database. It gives the error "The value could not be converted because of a potential loss of data" or "cast conversion failed". I tried other types - date, timestamp, string - but none of them was successful. What should I do to put data into my table from the txt files? Please can you help me urgently!!!
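A hedged guess based on the wizard suggesting an 8-byte integer: the dates in the file are probably plain digit strings like 20050324 (yyyymmdd). If so, one sketch is to load the column as a string into a staging table and convert afterwards; yyyymmdd is unambiguous for CONVERT (style 112), and the staging names are hypothetical:

    INSERT INTO dbo.TargetTable (DateCol)
    SELECT CONVERT(datetime, DateStr, 112)   -- 112 = yyyymmdd
    FROM dbo.Staging
    WHERE NULLIF(LTRIM(RTRIM(DateStr)), '') IS NOT NULL;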