I created a SQL job using a maintenance plan to run DBCC CHECKDB for all databases, but I didn't choose the option "Ignore databases where the state is not online". If any database is restoring at the same time the DBCC CHECKDB job runs, will the job fail?
Hi, I have to monitor the log. I am using DBCC SQLPERF(LOGSPACE). I need to refresh it every 15-20 minutes to monitor the growth (of course in %) of the log files. How is this possible if I want to automate it? TIA pd
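One way to automate this, as a minimal sketch: schedule a SQL Agent job every 15 minutes that dumps DBCC SQLPERF(LOGSPACE) into a history table (the table name LogSpaceHistory is just an assumption):

CREATE TABLE dbo.LogSpaceHistory
(
    CaptureTime  datetime      NOT NULL DEFAULT GETDATE(),
    DatabaseName sysname       NOT NULL,
    LogSizeMB    decimal(18,4) NOT NULL,
    LogSpaceUsed decimal(18,4) NOT NULL,  -- percent of the log currently in use
    Status       int           NOT NULL
);

-- The job step runs this; INSERT ... EXEC captures the four-column DBCC result set.
INSERT dbo.LogSpaceHistory (DatabaseName, LogSizeMB, LogSpaceUsed, Status)
EXEC ('DBCC SQLPERF(LOGSPACE)');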
Hi All, I want to check the fragmentation of a table, but I do not have permission to run DBCC commands on that server. Is there any other method to identify the health of the table?
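If you can get VIEW DATABASE STATE permission, the sys.dm_db_index_physical_stats DMV reports fragmentation without any DBCC command; a sketch (dbo.MyTable is a placeholder):

SELECT i.name AS index_name,
       ips.index_type_desc,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id;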
Recently one of our databases went into suspect mode. We resolved it (REPAIR_ALLOW_DATA_LOSS) and the DB came online, but when we run CHECKDB against it, it throws the following error:
Msg 7985, Level 16, State 2, Line 2
System table pre-checks: Object ID 3. Could not read and latch page (1:355) with latch type SH. Check statement terminated due to unrepairable error.
DBCC results for 'xxxxxxx'.
Msg 5233, Level 16, State 98, Line 2
Table error: alloc unit ID 196608, page (1:355). The test (IS_OFF (BUF_IOERR, pBUF->bstat)) failed. The values are 12716041 and -4.
CHECKDB found 0 allocation errors and 1 consistency errors not associated with any single object.
CHECKDB found 0 allocation errors and 1 consistency errors in database 'xxxxxxx'.
And the error log also keeps popping up the message below:
Error: 824, Severity: 24, State: 2. SQL Server detected a logical consistency-based I/O error ...
I am not a DBA, but I am responsible for a particular MSSQL 2008 R2 file server running a particular application, and I need to solve a database consistency check problem. The database fails DBCC CHECKDB with multiple 8903 errors. Unfortunately this was not discovered until well after any good backups were deleted. The good news is that the DB otherwise seems fine: we have experienced zero problems with the DB or the applications. Running CHECKDB with the "repair_allow_data_loss" option does not fix the problem.
However, I would still like to fix the problem. Using a popular SQL recovery product I am able to recover the database. The original, vendor-designed and supplied DB has 2 filegroups and three files (MDF, NDF, LDF). The output of the recovery process produces 1 filegroup and 2 files (MDF and LDF). The vendor says they cannot support me, since the recovered DB is 'non-standard' according to their design.
I am able to set up a new, blank version of the vendor's database on another dev system with the proper file and filegroup structure. How can I get the data moved/copied from the recovered (MDF/LDF) database into the dev database (MDF, NDF, LDF)? I've tried the import/export function but it fails (I can rerun and give details if necessary).
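One route, sketched under the assumption that both databases sit on the same instance (RecoveredDB and DevDB are placeholder names): copy table by table with INSERT ... SELECT, loading parent tables before child tables so the foreign keys are satisfied:

SET IDENTITY_INSERT DevDB.dbo.SomeTable ON;  -- only needed if the table has an identity column

INSERT INTO DevDB.dbo.SomeTable (Col1, Col2)
SELECT Col1, Col2
FROM RecoveredDB.dbo.SomeTable;

SET IDENTITY_INSERT DevDB.dbo.SomeTable OFF;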
I had a pivot table that pulled data from a cube. The pivot used certain measures which are now set as invisible at the cube level itself. After the cube was published, I just reconnected my pivot to the cube so that the new measures and dimensions are shown in the field list.
I then tried refreshing the pivot with my old measures (which are now set as invisible) and it allowed me to refresh. How can this be possible if the measure itself is set as invisible at the cube level?
select case ((sum(playtime)) / 3600) when 0 then 0 else ((sum(vtp) - (sum(moneyout))) / 100) / ((sum(playtime)) / 3600) end as avgloss from dbo.total where machineID = @mID and convert(varchar, njdate, 121) between convert(varchar, @startdate, 121) and convert(varchar, @enddate, 121)
When we use partition switching to load data into a table, can we rebuild the indexes for just that specific partition, so that we don't need to rebuild/refresh them for the complete table? Is this possible?
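Yes, ALTER INDEX supports a PARTITION clause for exactly this. A sketch, assuming an index IX_Sales on a partitioned table dbo.Sales and that the switched-in data landed in partition 3:

ALTER INDEX IX_Sales ON dbo.Sales
REBUILD PARTITION = 3;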
Something weird happened tonight... here's the deal. I have two databases that are exactly the same: one is for dev, one's for live data.
Anyway, the live DB had an outdated view, so I exported from the dev DB to the live one, copying just that one view over (using "copy objects" in the export DTS wizard). Very oddly, it actually copied over the data in the tables referenced by the view! Not good, because I told my coworkers I'd leave the data in the tables alone :( How do I copy a view over, but just the view definition, and NOT the actual table data?
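To move just the definition, one sketch: script the view out of the dev database and run that script on the live one (dbo.MyView is a placeholder; in SSMS you can also right-click the view and use Script View As > CREATE To):

-- Returns the CREATE VIEW text stored in the dev database:
SELECT OBJECT_DEFINITION(OBJECT_ID('dbo.MyView'));
-- Execute the returned script on the live database (change CREATE to ALTER if the view already exists there).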
I have a VM set up for offloading DBCC checks; specs are below. I've read through this, but I'm not seeing the performance gains from enabling the trace flags and using the PHYSICAL_ONLY switch.
Is the whole drawback that I'm on SATA storage? Is there a VM CPU configuration I can/should change? I've been playing with MAXDOP trying to see if I can get any benefit, but I'm not seeing much.
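For reference, the combination being described is roughly this (trace flags 2549 and 2562 are the documented CHECKDB performance flags; YourDatabase is a placeholder):

DBCC TRACEON (2549, -1);  -- treat each database file as if it sits on its own disk when reading
DBCC TRACEON (2562, -1);  -- run the check as a single batch, trading tempdb usage for throughput
DBCC CHECKDB (N'YourDatabase') WITH PHYSICAL_ONLY, NO_INFOMSGS;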
Every night DBCC CHECKDB runs on all our databases. The trouble is that one of them is very large and the database is inaccessible while the check runs:
DBCC CHECKDB (dbname) WITH physical_only executed by user found 0 errors and repaired 0 errors. Elapsed time: 0 hours 24 minutes 46 seconds
This normally happens fairly shortly after the backup, and normally (but not always) after a series of these entries in the log:
SQL Server has encountered 7976 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [D:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\tempdb.mdf] in database [tempdb] (2).
Would this cause SQL Server to automatically run a CHECKDB?
I have encountered an anomaly. The dbcc checkident(reseed) command behaves differently on two SQL servers.
In both cases, I am deleting from (not truncating) data in two tables, due to foreign key constraints. (I am truncating other tables, the issue is not with those tables, only with the deleted-from tables.) On one server, I need to use dbcc checkident(reseed,0) so that when I insert fresh data, it begins with identity key #1. According to MS documentation, that appears to be the correct behavior, when data has been deleted from a table, rather than truncated.
However, on the other server, I need to use dbcc checkident(reseed,1); if I use ...(reseed,0) on that server, it begins inserting data with identity key #0.
This is consistent, repeatable behavior on both servers.
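This matches the documented CHECKIDENT behavior: if the table has never held a row (or was truncated), the first insert after RESEED uses the new seed value itself; if rows were inserted and then deleted, the next insert uses seed + increment. A small repro sketch (dbo.t is a throwaway table):

CREATE TABLE dbo.t (id int IDENTITY(1,1), v int);
INSERT dbo.t (v) VALUES (10);            -- id = 1
DELETE dbo.t;                            -- deleted, not truncated
DBCC CHECKIDENT ('dbo.t', RESEED, 0);
INSERT dbo.t (v) VALUES (20);            -- id = 1 here (0 + increment) because rows once existed;
                                         -- on a never-used or truncated table it would be 0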
I can find many examples of loading DBCC results into tables. They all begin with a CREATE TABLE statement defining the results. My question is: other than trial and error, is there a way to determine what data types will be returned? Sure, you can say that the first element looks like an integer, but is it really a bigint? And that text string can be varchar(max), but will char(2) work?
I'm not looking for an answer for a specific DBCC function, but rather a generic way I can determine the characteristics of any DBCC result set.
I tried
SELECT * INTO #tmp FROM OPENROWSET('SQLOLEDB', 'Server=ray;Trusted_Connection=Yes;Database=Ed_sandbox', 'Set FmtOnly OFF; DBCC loginfo WITH tableresults ')
but I got back
Msg 11527, Level 16, State 1, Procedure sp_describe_first_result_set, Line 1
The metadata could not be determined because statement 'DBCC loginfo WITH tableresults' does not support metadata discovery.
I have a database in third normal form. There is a need to load rows into a child table. To avoid having to drop RI, an ALTER TABLE ... WITH NOCHECK is being used. I am looking for a way to verify integrity after the load is complete; after the load, an ALTER TABLE ... WITH CHECK will be executed. Can I use DBCC to verify that no orphaned child rows were loaded? If so, which parameters would I have to use with DBCC?
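Yes: DBCC CHECKCONSTRAINTS reports rows that violate FOREIGN KEY and CHECK constraints. A sketch, assuming the child table is dbo.ChildTable:

-- Lists every row that breaks a constraint on the table, including
-- constraints currently disabled by WITH NOCHECK:
DBCC CHECKCONSTRAINTS ('dbo.ChildTable') WITH ALL_CONSTRAINTS;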
We have got a query for fine-tuning and it uses a lot of CTEs; how can I check its execution plan? (A plan-capture sketch follows the view definition below.)
CREATE VIEW Mercy
AS
WITH ADR AS (
    SELECT urpx.RoleID, urx.UserID
    FROM [DBA].dbo.URPX WITH (NOLOCK)
    INNER JOIN [DBA].dbo.URX WITH (NOLOCK) ON urpx.RoleID = urx.RoleID
    WHERE PermissionID = '1'
),
SDR AS (
    -- Collect the roles that are configured with Sales Team Create permission.
    -- This will include Sales Director, Suite Admin.
    SELECT urpx.RoleID
    FROM [DBA].dbo.URPX WITH (NOLOCK)
    INNER JOIN [DBA].dbo.URX WITH (NOLOCK) ON urpx.RoleID = urx.RoleID
    LEFT OUTER JOIN ADR ON ADR.UserID = urx.UserID
    WHERE ADR.RoleID IS NULL AND PermissionID = '2'
)
-- (the view's final SELECT was cut off in the original post)
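CTEs do not get separate plans; they are inlined into the plan of the outer statement, so you capture the plan of the whole query as usual. A sketch (in SSMS you can instead press Ctrl+M to include the actual execution plan):

SET STATISTICS XML ON;
GO
-- run the query under investigation here, e.g. SELECT * FROM Mercy;
GO
SET STATISTICS XML OFF;
GO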
As part of my backup routine, I have a SQL job for each DB which is called "DB-NAME - Backup Job".
I need to put a script together to check that each database has a backup job associated to it.
select * From sys.sysdatabases where name not in ('msdb','model', 'master', 'Tempdb', 'DBA') order by name desc
select * from msdb.dbo.sysjobs order by name desc
I know all the details I need are in there, but I can't figure out the best way to tackle it. Do I need a cursor to go through each DB name, put it into a variable, then select the job name where name like '$variable%'?
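No cursor needed; a set-based sketch, assuming the job names follow the "DB-NAME - Backup Job" pattern exactly:

SELECT d.name AS database_name, j.name AS job_name
FROM sys.databases AS d
LEFT JOIN msdb.dbo.sysjobs AS j
  ON j.name = d.name + ' - Backup Job'
WHERE d.name NOT IN ('msdb', 'model', 'master', 'tempdb', 'DBA')
ORDER BY d.name DESC;
-- Rows where job_name IS NULL are databases with no matching backup job.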
Is there any way to check whether the server is having disk latency or IO issues? I found the below in the SQL error log:
Date: 10/1/2014 8:28:58 AM
Log: SQL Server (Current - 10/1/2014 12:00:00 AM)
Source: spid10s
Message: SQL Server has encountered 8500 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [D:\Fin.mdf] in database [Fin] (5). The OS file handle is 0x0000000000001368. The offset of the latest long I/O is: 0x0001104a7da000
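One way to quantify it from inside SQL Server, as a sketch: sys.dm_io_virtual_file_stats exposes cumulative I/O stall times per file since startup, so high average stalls point at slow storage.

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       1.0 * vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
       1.0 * vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY avg_read_stall_ms DESC;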
I want to create a check constraint on a table, but the constraint values depend on a column in another table as well. One possible way is to create a function to check the column value, but I don't want to use a function.
Can I do this with a view? If so, how can I achieve it?
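A plain view won't enforce anything on the base table; if a function is off the table, a common alternative is a trigger. A sketch with hypothetical tables dbo.Orders and dbo.Limits standing in for your two tables:

CREATE TRIGGER trg_Orders_CheckLimit ON dbo.Orders
AFTER INSERT, UPDATE
AS
BEGIN
    IF EXISTS (SELECT 1
               FROM inserted AS i
               JOIN dbo.Limits AS l ON l.CustomerID = i.CustomerID
               WHERE i.Amount > l.MaxAmount)
    BEGIN
        RAISERROR ('Amount exceeds the limit defined in dbo.Limits.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END;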
I have a job I want to run every day, but before it starts I want to check whether another job has completed. I would like to do this in the job steps in SSMS: step 1, is job 'xxxxxxx' running? If no, go to step 2; if yes, exit.
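A sketch for step 1, keeping the job name 'xxxxxxx' from the post: query the Agent activity tables and fail the step (so the job exits) while the other job is still running.

IF EXISTS (SELECT 1
           FROM msdb.dbo.sysjobactivity AS ja
           JOIN msdb.dbo.sysjobs AS j ON j.job_id = ja.job_id
           WHERE j.name = 'xxxxxxx'
             AND ja.start_execution_date IS NOT NULL
             AND ja.stop_execution_date IS NULL)   -- started but not yet finished
    RAISERROR ('Job xxxxxxx is still running; aborting.', 16, 1);
-- Strictly, you would also filter sysjobactivity to the most recent Agent session
-- (msdb.dbo.syssessions) to ignore rows left over from earlier service restarts.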
I have a table called SrcReg which has a column called IsSortSeqNo smallint. I map this column in SSIS, and the problem comes when I execute against a different database which has the same table but with the column named ISSortSeqNo. I mean both databases have the same column name, but with different casing, so SSIS fails with a metadata validation issue. Is there any way to check through a query whether the column name is in lower case or upper case?
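Yes; string comparisons follow the database collation, so force a case-sensitive collation in the query. A sketch against the SrcReg table from the post:

SELECT name
FROM sys.columns
WHERE object_id = OBJECT_ID('dbo.SrcReg')
  AND name COLLATE Latin1_General_CS_AS = 'IsSortSeqNo';
-- Returns a row only in the database where the stored column name matches this exact casing.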
I am looking for the best way to check whether any columns are still NULL in a record. I have a form that gets filled out by users, and the values are entered into TableA. There are 6 columns in the table: 5 are responses, and column6 indicates whether the record is complete. So I want a way to see if all of the first 5 columns are NOT NULL and, if so, mark column6 with a 1.
I am thinking this would be a good case for a trigger on INSERT or UPDATE: check whether the first 5 columns are filled in and then mark the record as complete.
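A minimal trigger sketch under these assumptions: the five response columns are col1..col5, column6 is named Complete, and PKID is the table's key (all hypothetical names):

CREATE TRIGGER trg_TableA_MarkComplete ON dbo.TableA
AFTER INSERT, UPDATE
AS
BEGIN
    UPDATE a
    SET    Complete = 1
    FROM   dbo.TableA AS a
    JOIN   inserted AS i ON i.PKID = a.PKID
    WHERE  a.col1 IS NOT NULL AND a.col2 IS NOT NULL AND a.col3 IS NOT NULL
       AND a.col4 IS NOT NULL AND a.col5 IS NOT NULL;
END;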
I have a table with a column Capacity which is char(10) and gets populated from user files. I want to find records with a negative Capacity value, so I first checked whether the value is numeric and then whether it is negative.
select * from table WHERE ISNUMERIC(LTRIM(RTRIM(Capacity))) = 1 AND Capacity < 0
BUT it still evaluates the comparison against char values too, giving errors like: Conversion failed when converting the varchar value 'asdf ' to data type int.
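That happens because SQL Server does not guarantee the ISNUMERIC predicate is evaluated before the Capacity < 0 comparison, so non-numeric rows can still hit the implicit conversion. On SQL Server 2012 or later, TRY_CONVERT sidesteps the problem entirely (dbo.YourTable is a placeholder for the table in the post):

SELECT *
FROM dbo.YourTable
WHERE TRY_CONVERT(int, LTRIM(RTRIM(Capacity))) < 0;
-- TRY_CONVERT returns NULL for non-numeric values such as 'asdf', so those rows simply drop out.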