Our SQL Server databases are on a Windows cluster server. How can we correct this issue, and why would it happen? FILESTREAM data container 'M:\TessituraDB Data\Documents' is corrupted. Database cannot recover.
Get a FILESTREAM download link with read-only access and without folder navigation
I need a link with the path to the FILESTREAM blob, a path that could be used to download a document with any Windows app (Windows Explorer, etc.). The requirement is that the path must not allow the customer to navigate the FILESTREAM share folders or see other files; they should only be able to read the file at that path.
What I have checked so far:
[file_stream].GetFileNamespacePath(2)
allows you to navigate through folders.
NON_TRANSACTED_ACCESS = READ_ONLY meets the requirement of disabling saves into the FileTable, but it still lets you navigate and see other files.
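For context, the documented FileTable pieces involved look like this (a sketch; dbo.Documents and MyDB are hypothetical names). GetFileNamespacePath(1) returns the full UNC path of a single file, and the database-level setting restricts file-system writes:

SELECT file_stream.GetFileNamespacePath(1) AS UncPath
FROM dbo.Documents
WHERE name = 'invoice.pdf'

ALTER DATABASE MyDB
SET FILESTREAM (NON_TRANSACTED_ACCESS = READ_ONLY)

As the post notes, neither call prevents someone who has the share path from browsing sibling folders, so on their own they do not produce a single-file, no-navigation link.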
I run MSDE on a Windows XP server, and my database was marked as "suspect" by Enterprise Manager (its icon turned grey). I detached that database and tried to re-attach it, but the attach failed with an error.
After that, I uninstalled MSDE, re-installed MSDE SP4, and tried to re-attach the database file again; it still fails with an error. Is there any way I can recover the data from that database?
A friend of mine called and said that his database was suspect; he detached it and was then unable to attach it. The SQL Server version is 2000 with the latest service packs installed. When I checked, the database (MDF) size was 1 MB and the LDF size was 3.8 GB. I was not able to attach the two together, and I was not able to attach the database using sp_attach_single_file_db. I did find an old backup of his database and attached that so he can work off of it, but it is a year old and he needs recent data. Since the MDF seems to be blank we can't do much with that, but there seems to be data in the LDF. Is it possible to extract any data from the LDF file?
We have a server with a FILESTREAM-enabled database. The filestream data is in a filegroup with three files spread across three LUNs (F:, G:, and H:), each with a capacity of 1.8 TB.
The filestream containers on those three LUNs all back the same column in the same table.
The F: drive has only 64 GB of free space left; H:, however, has around 700 GB free.
We are looking to move some filestream content from the container on F: to the container on H:.
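A sketch of the usual approach, assuming the F: container's logical file name is FS_F (hypothetical): DBCC SHRINKFILE with EMPTYFILE both moves the container's existing filestream data into the other containers in the filegroup and prevents new allocations to it.

DBCC SHRINKFILE (N'FS_F', EMPTYFILE)

The on-disk files under F: only disappear once the FILESTREAM garbage collector runs, which is tied to checkpoints and log truncation, so don't expect the free space back immediately.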
Our development organization has created an application employing FILESTREAM, which through the pilot has been incorporated into a schema within our Data Warehouse Staging database. Going to production, the development and BI teams have determined that they want it separated out into a separate database, and they'd like to separate it in the current pilot environment (DEV).
How can I best move (or at least copy) the existing FILESTREAM data from the current database into a new one?
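One approach that may fit here, assuming the FILESTREAM table can be copied while the application is quiesced (all names below are hypothetical): create the new database with its own FILESTREAM filegroup, re-create the table there, and copy the rows across; SQL Server writes fresh blobs into the new database's container.

INSERT INTO DocsDB.dbo.Documents (DocId, RowGuid, Contents)
SELECT DocId, RowGuid, Contents
FROM DWStaging.app.Documents

A backup/restore of the whole staging database followed by dropping the non-FILESTREAM objects is the other common route; which is cheaper depends on how large the rest of the database is.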
I have been asked to look into using FILESTREAM for centralising MS Office documents (mostly Excel). I am worried about the "user interface" aspect. I read that there are various APIs to read/write data to the filestream, but surely one would need to write a specific interface to Word/Excel... which feels like far too hard. I am a great admirer of SQL Server, but is it the right tool for document management?
We use SQL Server 2012 and have offices around the world with varying internet connection quality. Our main aim is to stop the current "spreadsheet nightmare" so common with Excel.
I have three filestream containers (FS1 on the F: drive, FS2 on H:, FS3 on E:) belonging to the same filestream filegroup of one particular database (DB), which is in SIMPLE recovery mode on SQL Server 2012.
FS1 contains a huge number of files, due to which the F: drive is completely full.
So I am trying to move some of the extra files from one container (FS1 on F:) to the other containers (FS2 on H: and FS3 on E:) using the command:
DBCC SHRINKFILE ('FS1', EMPTYFILE)
Then I take full and differential backups of the database, issue a CHECKPOINT, and try to delete the already-duplicated files from the FS1 container to free some space on the F: drive, using the command:
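The exact command isn't shown above; for reference, a plausible sequence on SQL Server 2012 in SIMPLE recovery looks like this (database name assumed to be DB, as above):

DBCC SHRINKFILE (N'FS1', EMPTYFILE)  -- relocate blobs to FS2/FS3
CHECKPOINT   -- in SIMPLE recovery, checkpoints let the filestream GC make progress
CHECKPOINT   -- the GC often needs more than one checkpoint cycle
EXEC sp_filestream_force_garbage_collection @dbname = N'DB'  -- SQL Server 2012+ can force a GC pass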
We have one database with FILESTREAM enabled. There is one table, "dbo.files", which uses FILESTREAM.
We created a filestream filegroup, Filegroup1, and added three data containers to it (three filestream data containers within the same filegroup).
We have three LUNs (F:, G:, H:), each with a capacity of 2 TB (that is the limitation). F: and G: are almost full, so I restricted their growth and inserts no longer go into those data containers. Inserts now go to the H: drive, which has plenty of free space. Our application code prevents any sort of deletes or updates to this table, so data in the growth-restricted containers will never change.
Now the database is around 6 TB in size and backups are a challenge. We are contemplating migrating storage to NetApp and using their SnapManager console, which is much faster.
However, until then, we need a solution with native SQL backups. We tried partial backups and piecemeal restore.
We tried this on a test server:
1) Partial-backup only the read-only data containers first (F: and G:). (The plan is to back these up just once a month, as this data never changes.)
2) Partial-backup the primary filegroup plus the third data container in the filestream filegroup, the one subject to inserts (H:).
While restoring, we tried an online restore. First I restored the backup from step 2 above with the RECOVERY option, then I restored the backup from step 1 with recovery. I saw that the database was brought online. However, when I try to query the dbo.files table, I get an error stating that some files of the filestream filegroup are offline.
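For comparison, a textbook piecemeal restore brings the read-write data online first, WITH PARTIAL, and the read-only filegroups afterwards; the sketch below assumes the never-changing containers have been split into their own filegroup that is marked READ_ONLY (all names hypothetical). One thing worth checking in the scenario above: backup and piecemeal restore treat the filegroup, not the individual file, as the read-only unit, so growth-restricted files that still share a read-write filegroup with the H: container are not considered read-only.

-- 1) Primary plus the read-write filegroups, restored WITH PARTIAL:
RESTORE DATABASE FilesDB
    READ_WRITE_FILEGROUPS
    FROM DISK = N'X:\backup\FilesDB_rw.bak'
    WITH PARTIAL, RECOVERY

-- 2) Then the read-only filegroup from its own (monthly) backup:
RESTORE DATABASE FilesDB
    FILEGROUP = N'Filegroup1_RO'
    FROM DISK = N'X:\backup\FilesDB_ro.bak'
    WITH RECOVERY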
I store files in the database in SQL Server 2008 using FILESTREAM. But when a column is added to the table that holds the FILESTREAM data, the table's properties change, and after any such change to the table, retrieving files fails with an error (storing files still appears to work). Also, the filestream filegroup shown at the following location becomes empty. Why?
Right-click the table --> Properties --> Storage --> FILESTREAM filegroup
We have one database of 5 GB; when we take a backup and restore it in another location, it says "Device Activation Error. The physical file name 'e:\database\nike_log.ldf' may be incorrect". The database has more than one log file. We then tried another procedure: create a new database, stop SQL Server, replace its MDF with the old MDF, and rebuild the log (i.e. dbcc rebuildlog(nike,1,0)). It says "incorrect DBCC statement".
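For what it's worth, the undocumented SQL Server 2000 command is usually written as DBCC REBUILD_LOG with the database name and a path for the new log file; the line below is an assumption based on that form, and being undocumented it is unsupported, so copy the MDF somewhere safe first:

-- Undocumented; syntax assumed, use at your own risk:
DBCC REBUILD_LOG ('nike', 'e:\database\nike_log.ldf')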
Need to recover from a corrupted log file. The database in Enterprise Manager shows as corrupt. Using sp_attach_db produces a corrupt-log message. Any way to recover the data? Thanks
Hello, I am using SQL Server CE 2.0 on Windows CE 4.2. Sometimes my SDF file gets corrupted, but I can't understand why. I know how to repair the file, but for me that is not a solution. Please, somebody help me!
OK... my friend sent me a video via file transfer, but when I tried to view it and open it up with Windows Movie Maker (that is the program she made the video in), it said the file was corrupted. Is there any way I can fix the corrupted file, or any other program I can view it in? Windows Media Player also declares it corrupted.
Hi, this is Chetan Jain, SQL Server DBA. I came across a problem while synchronising production with DR in log shipping. Log shipping stopped, and the message was "Cannot copy the ......TRN. File is corrupted."
Can anybody suggest what should be done if a TRN file is corrupted? If I manually restore the logs at DR, I will need every log file to sync prod with DR; a corrupted log file cannot be skipped, and neither can it be restored at DR.
Recently, one of our database's MDF and LDF were corrupted. We were able to bring back the database with the capability of importing and querying the data. However, the data is not the full list. Some of the data is still missing, because the tables we want still have some 'chain mis-linkage' errors when we run DBCC CHECKDB/CHECKTABLE, and DBCC couldn't fix those errors either. We're pretty sure that there is some data still buried under the corruption. Then we thought of the live LDF file. This file should have recorded every transaction up to the point of failure. So we are trying to recover that file, but there are two problems: first, we don't have a backup of that file, and second, the file itself is also corrupted. I couldn't back up the corrupted LDF in its current state; it gives me an unrelated error message, like disk space being out. All I know now is that the only hope of recovering the missing data is in this LDF file. We tried to use Lumigent Log Explorer to recover it; so far no such luck with that. So I'm asking anyone who has had a similar experience or problem: please let me know. I need to know any way to recover this LDF file, either with SQL Server's native tools or third-party tools. Any input is helpful. Thanks in advance. Wei
I developed an application which runs on PDAs, using a SQL Server CE database. My database file sometimes gets corrupted. Why do SQL CE databases get corrupted, and what can I do to prevent it? I can repair the database file when it is corrupted, but my customers don't understand what happened when the file was corrupted, so they think the program doesn't work.
Msg 1813, Level 16, State 2, Line 1
Could not open new database 'world'. CREATE DATABASE is aborted.
Msg 3456, Level 21, State 1, Line 1
Could not redo log record (9767813:109:10), for transaction ID (1:1182802845), on page (1:27846), database 'world' (database ID 5). Page: LSN = (9767810:70:66), type = 2. Log: OpCode = 3, context 19, PrevPageLSN: (9767812:47:11). Restore from a backup of the database, or repair the database.
Msg 3313, Level 21, State 2, Line 1
During redoing of a logged operation in database 'world', an error occurred at log record ID (9767813:109:10). Typically, the specific failure is previously logged as an error in the Windows Event Log service. Restore the database from a full backup, or repair the database.
I use this code to get TIFF files from the database, but the output file seems to be corrupted:

Dim pubsConn As SqlConnection = New SqlConnection("Server=(local);uid=sa2;pwd=sony;database=pubs;")
Dim logoCMD As SqlCommand = New SqlCommand("SELECT pub_id, logo FROM pub_info", pubsConn)
Dim fs As FileStream                 ' Writes the BLOB to a file (*.tiff).
Dim bw As BinaryWriter               ' Streams the binary data to the FileStream object.
Dim bufferSize As Integer = 100      ' The size of the BLOB buffer.
Dim outbyte(bufferSize - 1) As Byte  ' The BLOB byte() buffer to be filled by GetBytes.
Dim retval As Long                   ' The bytes returned from GetBytes.
Dim startIndex As Long = 0           ' The starting position in the BLOB output.
Dim pub_id As String = ""            ' The publisher id; must be read before the BLOB column.

' Open the connection and read data into the DataReader.
pubsConn.Open()
Dim myReader As SqlDataReader = logoCMD.ExecuteReader(CommandBehavior.SequentialAccess)

Do While myReader.Read()
    ' Get the publisher id, which must occur before getting the logo.
    pub_id = myReader.GetString(0)

    ' Create a file to hold the output.
    fs = New FileStream("\\Server1\shared1\logo" & "KK" & ".tiff", FileMode.OpenOrCreate, FileAccess.Write)
    bw = New BinaryWriter(fs)

    ' Reset the starting byte for a new BLOB.
    startIndex = 0

    ' Read bytes into outbyte() and retain the number of bytes returned.
    retval = myReader.GetBytes(1, startIndex, outbyte, 0, bufferSize)

    ' Continue reading and writing while there are bytes beyond the size of the buffer.
    Do While retval = bufferSize
        bw.Write(outbyte)
        bw.Flush()
        ' Reposition the start index to the end of the last buffer and fill the buffer.
        startIndex += bufferSize
        retval = myReader.GetBytes(1, startIndex, outbyte, 0, bufferSize)
    Loop

    ' Write the remaining buffer. Note: this must write retval bytes;
    ' the original "retval - 1" dropped the last byte and corrupted the file.
    bw.Write(outbyte, 0, CInt(retval))
    bw.Flush()

    ' Close the output file.
    bw.Close()
    fs.Close()
Loop

' Close the reader and the connection.
myReader.Close()
pubsConn.Close()

Me.Div1.InnerHtml = ("<embed height=650 width=100% toolbar='' src='" & "\\Server1\shared1\logoKK.tiff" & "' type='application/x-alternatiff'>")
I am trying to update the Aggregation Design for a partition using BIDS Helper. The current aggregation design contains about 60 aggregations, and the new aggregation I am trying to add is across 5 dimension attributes, the product of which is about 500,000 unique values. The fact table is about 13,000,000 rows.
When I deploy the aggregation and run ProcessIndex, I get the following error:
File system error: The following file is corrupted: Physical file: \\?\E:\Program Files\Microsoft SQL Server\MSAS10_50.MSSQLSERVER\OLAP\Data\TestDB.14.db\cube_2.607.cub\cubemeasuregroup_2.633.det\DefaultPartition_2.579.prt\852.agg.flex.data. Logical file.
If I remove the new aggregation, deploy, and run ProcessIndex again, it processes fine.
Is there some file size limitation I am running into? The agg.flex.data file is 7.8 GB before adding the new aggregation, so it isn't subject to the same 4 GB limit as .asstore.
We have a database that throws the error below, with a stack dump, when we try to make it read-only:
Location: recovery.cpp:4517
Expression: m_recoveryUnit->IsIntendedUpdateable ()
SPID: 51
Process ID: 6448
Msg 926, Level 14, State 1, Line 1
Database 'XXXX' cannot be opened. It has been marked SUSPECT by recovery. See the SQL Server errorlog for more information.
Msg 5069, Level 16, State 1, Line 1
ALTER DATABASE statement failed.
Msg 3624, Level 20, State 1, Line 1
A system assertion check has failed. Check the SQL Server error log for details. Typically, an assertion failure is caused by a software bug or data corruption. To check for database corruption, consider running DBCC CHECKDB. If you agreed to send dumps to Microsoft during setup, a mini dump will be sent to Microsoft. An update might be available from Microsoft in the latest Service Pack or in a QFE from Technical Support.
Msg 3313, Level 21, State 2, Line 1
During redoing of a logged operation in database 'XXXX', an error occurred at log record ID (0:0:0). Typically, the specific failure is previously logged as an error in the Windows Event Log service. Restore the database from a full backup, or repair the database.
Msg 3414, Level 21, State 1, Line 1
An error occurred during recovery, preventing the database 'XXXX' (database ID 7) from restarting. Diagnose the recovery errors and fix them, or restore from a known good backup. If errors are not corrected or expected, contact Technical Support.
Investigation done:
- DBCC CHECKDB came back clean; the database is online and accessible.
- Detached the database and re-attached it with a rebuilt log; still could not bring the database READ_ONLY.
Why DBCC SHRINKFILE with EMPTYFILE does not redistribute data evenly in the primary filegroup with multiple files:
Please run the script attached to see what the end result is.
This is what I set up last night on my test machine.
1) Create database [FGTest], size 200MB
2) Create a table called TEST on PRIMARY
3) Insert 40MB of data into TEST
4) Add another file called temp to the PRIMARY filegroup, size 200MB
5) Shrinkfile('FGTest', emptyfile) so that all data is transferred from FGTest into the temp file
6) Add another 2 files called DATA2 and DATA3. Both are 200MB
7) We now have 3 empty files that I want the data distributed evenly across: FGTest, DATA2 & DATA3
8) Shrinkfile('temp', emptyfile) to move all the data from temp over the 3 files evenly
I would expect at this stage to have the following:
FGTest = 13MB, DATA2 = 13MB, DATA3 = 13MB
(40MB of data over 3 files should be about 13 MBish in each file)
What I actually end up with is this:
FGTest = 20MB, DATA2 = 10MB, DATA3 = 10MB
It looks as though SQL Server is allocating 50% of the data to the original file and then spreading the rest evenly over the remaining files in PRIMARY.
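For anyone wanting to reproduce this without the attached script, the steps above correspond roughly to the following (file paths hypothetical):

CREATE DATABASE FGTest
ON PRIMARY (NAME = 'FGTest', FILENAME = 'C:\data\FGTest.mdf', SIZE = 200MB)
LOG ON (NAME = 'FGTest_log', FILENAME = 'C:\data\FGTest_log.ldf', SIZE = 50MB)
GO
-- (create table TEST on PRIMARY and insert ~40MB of data here)
ALTER DATABASE FGTest ADD FILE (NAME = 'temp', FILENAME = 'C:\data\temp.ndf', SIZE = 200MB)
GO
DBCC SHRINKFILE ('FGTest', EMPTYFILE)  -- pushes the 40MB into temp
GO
ALTER DATABASE FGTest
ADD FILE (NAME = 'DATA2', FILENAME = 'C:\data\DATA2.ndf', SIZE = 200MB),
         (NAME = 'DATA3', FILENAME = 'C:\data\DATA3.ndf', SIZE = 200MB)
GO
DBCC SHRINKFILE ('temp', EMPTYFILE)    -- expected to spread the data across FGTest, DATA2 and DATA3
GO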
We have a large 'History' database that is currently about 4.5 TB, with most of that in one data file of 4.2 TB. We wanted to stop growth on the one large data file and have SQL Server allocate new data to the other data files, but attempting to change the MAXSIZE setting throws an error:
ALTER failed for Database 'History'. MODIFY FILE failed. Specified size is less than or equal to current size.
SQL Server seems to be saying we can have a max size of 2 TB, and anything over that is rejected. Since the change is blocked, the file continues to grow.
Is there any way to cap the growth of the 4.2TB file and not allow any more data to be written to it?
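One option that avoids the MAXSIZE check entirely is to switch off autogrow for that file; a sketch, assuming the logical file name is History_Data1 (hypothetical):

ALTER DATABASE History
MODIFY FILE (NAME = N'History_Data1', FILEGROWTH = 0)

Once the file is full and cannot grow, SQL Server's proportional-fill algorithm directs new allocations to the other files in the filegroup.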
Yes, I have an error like 'The database file may be corrupted. Run the repair utility to check the database file. [ Database name = \SDMMC Storage Card\winpos_2005\WINPOS2005.sdf ]'. I develop a program for Pocket PCs, and this program's database sometimes gets corrupted. What can I do? Please help me.
Hi, I have a corrupted table which has some valuable data in it. I found out that only one row is corrupted. How do I find out which row of the table it is? Can anyone help me with this matter, please?
I have several sites which refer to a table in an MS SQL database on the server.
I'm looking for a good way to check that my tables don't get corrupted over time. It seems that I can't create a duplicate by selecting the individual table and doing Save As...
Can someone point me to the foolproof method that everyone else already uses, please?
This is my procedure for "rescuing" data from a corrupted database. Obviously restoring from backup is a lot easier!
0) Set the damaged database to Read-Only. If you don't have a backup, make one now.
1) Script the database
2a) Create a new, TEMP database - preferably on a different machine in case of hardware problems on the original machine
2b) Size the data for the TEMP database the same as the original (to avoid dynamic extensions). Size the log something large-ish!
3) Run the Script on the TEMP database. Do NOT create any FK etc. yet
4a) Attempt to transfer all tables:
-- Prepare script of: INSERT INTO ... SELECT * FROM ...
-- (IDENTITY_INSERT must name the *target* table; the original script set it
-- on the source table, which fails when run from the broken database.)
SET NOCOUNT ON
SELECT 'PRINT ''' + name + '''' + CHAR(13) + CHAR(10)
     + 'GO' + CHAR(13) + CHAR(10)
     + CASE WHEN C.id IS NULL THEN ''
            ELSE 'SET IDENTITY_INSERT MyTempDatabase.dbo.[' + name + '] ON' + CHAR(13) + CHAR(10)
       END
     + 'INSERT INTO MyTempDatabase.dbo.[' + name + ']' + CHAR(13) + CHAR(10)
     + 'SELECT * FROM dbo.[' + name + ']' + CHAR(13) + CHAR(10)
     + CASE WHEN C.id IS NULL THEN ''
            ELSE 'SET IDENTITY_INSERT MyTempDatabase.dbo.[' + name + '] OFF' + CHAR(13) + CHAR(10)
       END
     + 'GO'
FROM dbo.sysobjects AS O
     LEFT OUTER JOIN
     (
         SELECT DISTINCT C.id
         FROM dbo.syscolumns AS C
         WHERE C.colstat = 1  -- Identity column
     ) AS C
          ON C.id = O.id
WHERE type = 'U'
      AND name NOT IN ('dtproperties')
ORDER BY name
SET NOCOUNT OFF
this generates statements like this:
PRINT 'MyTable'
GO
SET IDENTITY_INSERT MyTempDatabase.dbo.[MyTable] ON
INSERT INTO MyTempDatabase.dbo.[MyTable]
SELECT * FROM dbo.[MyTable]
SET IDENTITY_INSERT MyTempDatabase.dbo.[MyTable] OFF
GO
4b) This will give some sort of error on the tables which cannot be copied, and they will need to be rescued by some other means.
5a) Each "broken" table needs to be rescued using an index. Ideally you will have a clustered index on the PK and that will be undamaged, so you can "rescue" all the PKs into a temp table:
-- Copy PK fields to a temporary table
-- DROP TABLE MyRestoreDatabase.dbo.TEMP_RESCUE_PK
-- TRUNCATE TABLE MyRestoreDatabase.dbo.MyBrokenTable
SELECT [ID] = IDENTITY(int, 1, 1),
       [IsCopied] = CONVERT(tinyint, 0),
       MyPK
INTO   MyRestoreDatabase.dbo.TEMP_RESCUE_PK
FROM   MyBrokenDatabase.dbo.MyBrokenTable
ORDER BY MyPK
5b) If that is successful you have a list of all the PKs, so you can try to copy data matching those PKs, in batches:
-- If OK then selectively copy data across
-- First prepare a temp Batch table
-- DROP TABLE MyRestoreDatabase.dbo.TEMP_RESCUE_BATCH
SELECT TOP 1
       [ID] = CONVERT(int, NULL),
       [IsCopied] = CONVERT(bit, 0),
       MyPK
INTO   MyRestoreDatabase.dbo.TEMP_RESCUE_BATCH
FROM   MyRestoreDatabase.dbo.TEMP_RESCUE_PK
GO

DECLARE @intStart int, @intStop int, @intBatchSize int

-- NOTE: After the first run set these to any "gaps" in the table that you want to fill
SELECT @intStart = 1,
       @intBatchSize = 10000,
       @intStop = (SELECT MAX([ID]) FROM MyRestoreDatabase.dbo.TEMP_RESCUE_PK)

SELECT @intStart = MIN([ID])
FROM   MyRestoreDatabase.dbo.TEMP_RESCUE_PK
WHERE  IsCopied = 0
       AND [ID] >= @intStart

WHILE @intStart < @intStop
BEGIN
    SET ROWCOUNT @intBatchSize

    -- Isolate batch of keys into a separate table
    TRUNCATE TABLE MyRestoreDatabase.dbo.TEMP_RESCUE_BATCH
    INSERT INTO MyRestoreDatabase.dbo.TEMP_RESCUE_BATCH
    SELECT T.*
    FROM   MyRestoreDatabase.dbo.TEMP_RESCUE_PK AS T
    WHERE  IsCopied = 0
           AND [ID] >= @intStart
           AND [ID] < @intStart + @intBatchSize

    -- Attempt to copy matching records, for this batch
    PRINT CONVERT(varchar(20), @intStart)
    INSERT INTO MyRestoreDatabase.dbo.MyBrokenTable
    SELECT S.*
    FROM   MyRestoreDatabase.dbo.TEMP_RESCUE_BATCH AS T
           LEFT OUTER JOIN MyRestoreDatabase.dbo.MyBrokenTable AS D
                ON D.MyPK = T.MyPK
           -- This will try to get the data from the broken table; it may fail!
           JOIN MyBrokenDatabase.dbo.MyBrokenTable AS S
                ON S.MyPK = T.MyPK
    WHERE  D.MyPK IS NULL  -- Belt and braces so as not to copy existing rows

    -- Flag the rows just "copied"
    UPDATE U
    SET    IsCopied = 1
    FROM   MyRestoreDatabase.dbo.TEMP_RESCUE_PK AS U
    WHERE  IsCopied = 0
           AND [ID] >= @intStart
           AND [ID] < @intStart + @intBatchSize

    -- Loop round, until done
    SELECT @intStart = @intStart + @intBatchSize
END
GO
SET ROWCOUNT 0  -- Turn OFF!!
GO
5c) This will copy in batches of 10,000 (you can adjust @intBatchSize depending on table size) until it gets to a damaged part of the table, then it will abort.
Change @intStart to the last ID number displayed, and reduce @intBatchSize (by an order of magnitude each time) until you have rescued as many records as possible from the first "part" of the table.
5d) Reset @intBatchSize to 10,000 (or whatever size is appropriate), and increase @intStart repeatedly until you are past the damaged section - copying will start again, and will abort if there are further damaged sections.
5e) Repeat that process until you have rescued as much of the data as is possible
6) Check what is left to be rescued
-- Check amount NOT done:
SELECT COUNT(*), MIN([ID]), MAX([ID])
FROM   MyRestoreDatabase.dbo.TEMP_RESCUE_PK
WHERE  IsCopied = 0
--     AND [ID] > 123456  -- Optionally count items after a "gap"

-- Double check that IsCopied is set correctly, and the number of records "lost"
SELECT COUNT(*),
       [IsCopied] = SUM(CONVERT(int, IsCopied)),
       [IsCopied+Record] = SUM(CASE WHEN IsCopied = 1 AND C.MyPK IS NOT NULL THEN 1 ELSE 0 END),
       [IsCopiedNoRecord] = SUM(CASE WHEN IsCopied = 1 AND C.MyPK IS NULL THEN 1 ELSE 0 END),
       [IsNOTCopied] = SUM(CASE WHEN IsCopied = 0 THEN 1 ELSE 0 END),
       [IsNOTCopied+Record] = SUM(CASE WHEN IsCopied = 0 AND C.MyPK IS NOT NULL THEN 1 ELSE 0 END),
       [IsNOTCopiedNoRecord] = SUM(CASE WHEN IsCopied = 0 AND C.MyPK IS NULL THEN 1 ELSE 0 END)
FROM   MyRestoreDatabase.dbo.TEMP_RESCUE_PK AS T
       LEFT OUTER JOIN MyRestoreDatabase.dbo.MyBrokenTable AS C
            ON C.MyPK = T.MyPK

-- List of "lost" records
SELECT MyPK
FROM   MyRestoreDatabase.dbo.TEMP_RESCUE_PK
WHERE  IsCopied = 0
ORDER BY [ID]
You will then have to "find" and rescue the lost records somewhere.
I have a further process using OPENQUERY() to rescue records to fill the gaps, in the event that they are available on a remote system - a straight JOIN to get them is going to be far too slow on anything other than tiny tables!
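That OPENQUERY() process isn't reproduced here, but its shape is a per-key fetch along these lines (linked server and database names hypothetical):

-- REMOTE_SRV is a hypothetical linked server holding a good copy of the data
INSERT INTO MyRestoreDatabase.dbo.MyBrokenTable
SELECT *
FROM OPENQUERY(REMOTE_SRV, 'SELECT * FROM SourceDB.dbo.MyBrokenTable WHERE MyPK = 12345')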
7a) Create the FKs etc. Update the statistics, rebuild indexes, and possibly shrink the log if it is unrealistically big.
7b) Backup and restore over the original database.
7c) DBCC CHECKDB ('MyDatabaseName') WITH ALL_ERRORMSGS, NO_INFOMSGS