Running DBCC CHECKIDENT Command Within Stored Proc?
Jun 17, 2015
For reasons beyond the scope of my question, is there a way to run this command within a stored procedure from a low-privileged user login? I can grant the login membership in the db_ddladmin role and the proc runs, but I'd rather not give out that level of permission to what is basically a glorified web-access login.
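One commonly suggested approach, sketched here with a hypothetical table dbo.MyTable, wrapper proc name, and database user WebAppUser, is to create the procedure WITH EXECUTE AS OWNER, so the DBCC statement runs with the owner's (usually dbo's) permissions and the caller only needs EXECUTE on the proc:

-- Minimal sketch: DBCC CHECKIDENT needs ALTER on the table, which the
-- owner (dbo) has; callers only need EXECUTE on this wrapper.
CREATE PROCEDURE dbo.ResetMyTableIdentity
    @NewSeed bigint
WITH EXECUTE AS OWNER
AS
BEGIN
    DBCC CHECKIDENT ('dbo.MyTable', RESEED, @NewSeed);
END
GO
GRANT EXECUTE ON dbo.ResetMyTableIdentity TO WebAppUser;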
I'm trying to set up merge replication, and when I try to synchronize with the subscriber I get the error "invalid column name ROWGUIDCOL". An article I was following recommended reseeding the identity columns on the subscriber so that there is no conflict. Now, the tables with the identity column have the "Not For Replication" option on. Using DBCC CHECKIDENT (table_name, RESEED, some_value) does not change the seed when I check. The SQL Server help file remarks on DBCC CHECKIDENT say that if the column was created with the "Not For Replication" option, the value cannot be changed.
From the SQL Server help file (Remarks):
If necessary, DBCC CHECKIDENT corrects the current identity value for a column. The current identity value is not corrected, however, if the identity column was created with the NOT FOR REPLICATION clause (in either the CREATE TABLE or ALTER TABLE statement).
Is there any way around this, or any other way to avoid the "invalid column name ROWGUIDCOL" error?
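For reference, a quick way to confirm that the NOT FOR REPLICATION setting is what is blocking the reseed (a sketch; dbo.MyTable and the column name Id are placeholders):

-- Returns 1 if the identity column was created WITH NOT FOR REPLICATION,
-- in which case DBCC CHECKIDENT will not adjust the current identity value.
SELECT COLUMNPROPERTY(OBJECT_ID('dbo.MyTable'), 'Id', 'IsIdNotForRepl') AS IsIdNotForRepl;

-- Report the current identity value and column seed without changing anything:
DBCC CHECKIDENT ('dbo.MyTable', NORESEED);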
I have encountered an anomaly. The dbcc checkident(reseed) command behaves differently on two SQL servers.
In both cases, I am deleting (not truncating) the data in two tables, due to foreign key constraints. (I am truncating other tables; the issue is not with those tables, only with the deleted-from tables.) On one server, I need to use dbcc checkident(reseed,0) so that when I insert fresh data, it begins with identity key #1. According to MS documentation, that appears to be the correct behavior when data has been deleted from a table, rather than truncated.
However, on the other server, I need to use dbcc checkident(reseed,1); if I use ...(reseed,0) on that server, it begins inserting data with identity key #0.
This is consistent, repeatable behavior on both servers.
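A repeatable way to compare the two servers is to reseed and immediately insert a probe row (a sketch with a hypothetical table dbo.ReseedTest):

DELETE FROM dbo.ReseedTest;
DBCC CHECKIDENT ('dbo.ReseedTest', RESEED, 0);
INSERT INTO dbo.ReseedTest (SomeColumn) VALUES ('probe');
-- Per the behavior described above, this returns 1 on one server and 0 on the other.
SELECT IDENT_CURRENT('dbo.ReseedTest') AS LastIdentityHandedOut;

The documented rule is that after a RESEED the next insert gets new_reseed_value + increment if rows have ever been inserted into the table, but new_reseed_value itself if the table has never held rows or was truncated; the server handing out 0 is treating the deleted-from table as if it were in that "never inserted" state.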
I have a set of staging tables that need to be used to update a hierarchy of tables with foreign keys between them, and identity columns for the primary keys. One way I'm thinking of doing this is to reset the identity seed on the target tables based on the number of rows I have in the staging tables, then to update the staging tables keys to match the vacated range of identity values. I'd insert them with SET IDENTITY_INSERT ON.
The question is: can this be done atomically? It seems that DBCC CHECKIDENT will return the current identity value, but can only change the seed to an absolute value. That would require that I get the current value, add "n" to it, then set the seed value. This would seem to be non-atomic, in that a new row could be inserted between the time I find the "current" value and the time I set the new value.
Does anyone know of a way to pre-allocate a block of identity values atomically? This has to be done in a live OLTP database.
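One pattern that behaves atomically with respect to other inserts, sketched under the assumptions that a brief exclusive lock on the target table is acceptable, that the identity increment is 1, and that the table and variable names are placeholders:

DECLARE @BlockSize bigint, @FirstId bigint, @NewSeed bigint, @dummy int;
SET @BlockSize = 1000;

BEGIN TRANSACTION;
    -- Exclusive table lock keeps other inserts out until COMMIT, so the
    -- read-then-reseed pair below cannot be interleaved with a new insert.
    SELECT TOP (1) @dummy = 1 FROM dbo.TargetTable WITH (TABLOCKX, HOLDLOCK);

    -- Assumes the table already contains rows and has an increment of 1.
    SET @FirstId = IDENT_CURRENT('dbo.TargetTable') + 1;
    SET @NewSeed = @FirstId + @BlockSize - 1;

    DBCC CHECKIDENT ('dbo.TargetTable', RESEED, @NewSeed);
COMMIT TRANSACTION;

-- Values @FirstId through @FirstId + @BlockSize - 1 are now reserved for use
-- with SET IDENTITY_INSERT dbo.TargetTable ON; the next normal insert will
-- receive @FirstId + @BlockSize.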
One of my stored procs, which takes one parameter, runs for over two minutes. But if I run the same script as the stored proc with the same parameter value hardcoded, the query finishes in a couple of seconds. The execution plans are different as well. Any reason why this could happen? TIA.
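This is the classic parameter-sniffing symptom: the cached plan was compiled for some earlier parameter value and does not suit the one being passed now, while the hardcoded literal gets its own plan. A hedged sketch of the two usual mitigations (proc, table, and parameter names are placeholders; OPTIMIZE FOR UNKNOWN requires SQL Server 2008 or later):

ALTER PROCEDURE dbo.MySlowProc
    @CustomerId int
AS
BEGIN
    -- Option 1: compile this statement for the actual value on every run.
    SELECT o.*
    FROM dbo.Orders AS o
    WHERE o.CustomerId = @CustomerId
    OPTION (RECOMPILE);

    -- Option 2 (alternative): build one plan for a "typical" value instead:
    --   ... OPTION (OPTIMIZE FOR (@CustomerId UNKNOWN));
END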
I have a stored procedure in SQL Server 2008 (the stored proc is actually stored there with a name) and I can run it with the 'exec storedproc_name insert_date' command (my stored proc needs a date to run).
The stored proc just creates a temp table with some data (but we can ignore this bit).
I only need a way to run this stored proc remotely (I don't care about getting the data, I just need to run it on the server).
Is there any way of doing this? Preferably via a unix system? I just need a way to run the 'exec' command. Returning data etc. isn't needed.
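If it helps, one hedged option is a one-line call from the unix box using a command-line client, assuming you can install Microsoft's sqlcmd (available for Linux/macOS in the mssql-tools package) or FreeTDS; the server name, credentials, database, and date below are placeholders:

sqlcmd -S myserver.example.com -U myuser -P mypassword -d MyDatabase -Q "EXEC storedproc_name '2015-06-17'"

FreeTDS's tsql or isql can issue the same EXEC if installing sqlcmd is not an option, and cron or any shell script can then schedule it.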
How does one prevent a long-running procedure from crapping out in CLR? I am trying to do a pull from a distant data source and it works, except that I have to break my stored procedure call down into several smaller calls. I would like to do everything in one shot, but I get the thread abort exception when I try to get a lot of data.
I'm getting an error when I run the stored proc with a string parameter in an Execute SQL Task object.
This is the only code I have:
exec sp_udt_keymaint 'table1'
I also set 'IsStoredProcedure' to 'True' in the properties, though when you edit the Execute SQL Task object, I can see that this parameter is disabled.
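For what it's worth, the stored-procedure flag on the Execute SQL Task is generally only enabled for ADO/ADO.NET connection managers; with an OLE DB connection the usual pattern is a parameter placeholder plus a mapping (a sketch; the variable name is an assumption):

-- SQLStatement for the task (OLE DB connection):
EXEC sp_udt_keymaint ?

-- Then, on the task's Parameter Mapping page, map a string variable such as
-- User::TableName to parameter name 0, direction Input, data type VARCHAR.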
I need to execute a DTS package from a stored procedure. I can call up the command prompt, enter the DTS command, and it executes perfectly. Here is what I am attempting to do from the stored procedure:
set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
SET NOCOUNT ON
GO

-- =============================================
ALTER PROCEDURE [dbo].[spPK]
    -- Add the parameters for the stored procedure here
    @varNSC varchar(4),
    @varNC varchar(2),
    @varIIN varchar(7),
    @varIMCDMC varchar(8),
    @varOut as int output
AS
Declare @varPK int
set @varPK = 0
BEGIN
    -- This checks Method 1
    -- NSC = @varNSC
    -- NC  = @varNC
    -- IIN = @varIIN
    begin
        if exists (select Item_id From Item
                   Where NSC = @varNSC and NC = @varNC and IIN = @varIIN)
            set @varPK = (select Item_id From Item
                          Where NSC = @varNSC and NC = @varNC and IIN = @varIIN)
        set @varOut = @varPK
        if @varPK <> 0
            Return
    end

    [There are some more methods here]

    Return
END
How do I get at the output value?
I have tried using a Derived Column and an OLE DB Command but can't seem to grasp how to pass the value to the derived column. I can get the OLE DB Command to run using 'exec dbo.spPK ?, ?, ?, ?, ? output' but don't know what to do from here.
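In plain T-SQL the output value comes back through an OUTPUT variable on the call; a minimal sketch using the signature above (the parameter values are placeholders):

DECLARE @Result int;

EXEC dbo.spPK
    @varNSC    = '1234',
    @varNC     = '56',
    @varIIN    = '7890123',
    @varIMCDMC = 'ABCD1234',
    @varOut    = @Result OUTPUT;   -- OUTPUT is required on the call as well

SELECT @Result AS Item_id;

In SSIS, many people find it simpler to make this call from an Execute SQL Task with the last parameter mapped as direction Output into a package variable, rather than wiring the output through an OLE DB Command in the data flow.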
Hi. I've got a report with 4 different sections - the datasets coming from some tables that are populated via a stored procedure. I'd love it if the first thing this report did was run that stored procedure, so that the data would then be available for the actual reporting piece. Is that possible? And if so, how do I make it work?
Please forgive the simplicity of this question - I am not the dba type. When I connect to a server and look at my connection attributes in Activity Monitor, the user column shows the correct information for my domain\username. When I run a certain stored procedure in that connection, the domain\username changes to another person. We are not using EXECUTE AS, SETUSER, or anything special to explicitly change the user. The stored procedure is in a schema that is owned by dbo (principal_id = 1 - I verified by checking sys.database_principals).
I am using SQL Server 2005 and I have an endpoint that exposes some stored procedures as web-methods in the endpoint.
One particular stored procedure I have exposed takes a long time to execute: about 10-15 minutes. While it is OK that this stored procedure takes this long, it is not desirable for the HTTP request that executed this proc to have to wait that long.
What I want to be able to do is to call the stored procedure and have the call return immediately while the stored proc continues what it's doing. I will call another stored proc at a later time to retrieve the result of the first stored proc. The first proc will store its results in a temp table. I am thinking of using SQL Server Service Broker to achieve this.
Is there a better way to achieve this? And how does SQL Server process the Service Broker requests, i.e., I don't want the query to be executed when the server is busy. Are there any hints that I need to give to Service Broker to be able to do this?
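Service Broker is the usual fit for this. A compressed sketch, assuming SQL Server 2005 with Service Broker enabled on the database (ALTER DATABASE ... SET ENABLE_BROKER) and placeholder object names; the activation procedure is what actually runs the slow work, asynchronously and on the server's schedule rather than the HTTP caller's:

-- One-time setup.
CREATE QUEUE dbo.LongWorkQueue;
CREATE SERVICE LongWorkService ON QUEUE dbo.LongWorkQueue ([DEFAULT]);
GO

-- Activation proc: drains the queue and does the 10-15 minute work.
CREATE PROCEDURE dbo.ProcessLongWork
AS
BEGIN
    DECLARE @h uniqueidentifier, @body varbinary(max);

    WHILE 1 = 1
    BEGIN
        WAITFOR (
            RECEIVE TOP (1) @h = conversation_handle, @body = message_body
            FROM dbo.LongWorkQueue
        ), TIMEOUT 5000;

        IF @@ROWCOUNT = 0 BREAK;

        -- ... decode @body (e.g. XML parameters) and run the long proc here,
        --     writing its results to a results table for later pickup ...

        END CONVERSATION @h;
    END
END
GO

ALTER QUEUE dbo.LongWorkQueue
    WITH ACTIVATION (STATUS = ON,
                     PROCEDURE_NAME = dbo.ProcessLongWork,
                     MAX_QUEUE_READERS = 1,   -- throttle: at most one worker at a time
                     EXECUTE AS OWNER);
GO

-- The web-facing proc just queues the request and returns immediately.
CREATE PROCEDURE dbo.QueueLongWork
    @Params xml
AS
BEGIN
    DECLARE @h uniqueidentifier;
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE LongWorkService
        TO SERVICE 'LongWorkService'
        ON CONTRACT [DEFAULT]
        WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @h (@Params);
END

MAX_QUEUE_READERS is the main knob for "don't run when the server is busy": activation never starts more than that many readers, and queued requests simply wait until one is free.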
In my environment, there is a maintenance plan configured on one of the servers, and while running DBCC CHECKDB on a database of around 200GB, tempdb's log file usage increases and causes the maintenance job to fail.
What can I do to make the maintenance job run successfully? The tempdb database is only 50GB and its recovery model is set to simple. It cannot be grown, because the mount point it resides on is only 50GB.
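Two hedged options to look at, since how much tempdb CHECKDB needs varies with the database (the database name below is a placeholder): ask CHECKDB for an estimate up front, and/or run the lighter physical-only check routinely, saving the full logical check for a window when more tempdb can be made available.

-- Estimate the tempdb space DBCC CHECKDB would need for this database:
DBCC CHECKDB (MyBigDatabase) WITH ESTIMATEONLY;

-- Lighter-weight check of physical/allocation consistency that uses far less tempdb:
DBCC CHECKDB (MyBigDatabase) WITH PHYSICAL_ONLY;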
Hullo all. I have two machines; one is a standalone machine and the other is on a domain network. How can I run a stored procedure/job on the standalone machine from the domain machine? Running the procedure as a domain user results in a failed job/stored proc. Creating a SQL login and attempting to run it as that user also fails. How can I solve this problem? Please e-mail me at wayde@sunnygrp.com if you have any thoughts...
I am working with a large application and am trying to track down a bug. I believe an error that occurs in the stored procedure is bubbling back up to the application and is causing the application not to run. Don't ask why, but we do not have some of the source code that was used to build the application, so I am not able to trace into the code. So basically I want to examine the stored procedure. If I run the stored procedure through Query Analyzer, I get the following error message:

Msg 2758, Level 16, State 1, Procedure GetPortalSettings, Line 74
RAISERROR could not locate entry for error 60002 in sysmessages.
(1 row(s) affected)
(1 row(s) affected)

I don't know if that error message is sufficient to stop the application from running? Does anyone know? If the RAISERROR occurs midway through the stored procedure, does the stored procedure terminate execution? Also, is there a way to trace into a stored procedure through Query Analyzer?

As a side note, below is a small portion of my stored proc where the error is being raised:

SELECT @PortalPermissionValue = isnull(max(PermissionValue), 0)
FROM Permission, PermissionType, #Groups
WHERE Permission.ResourceId = @PortalId
  AND Permission.PartyId = #Groups.PartyId
  AND Permission.PermissionTypeId = PermissionType.PermissionTypeId

IF @PortalPermissionValue = 0
BEGIN
    RAISERROR (60002, 16, 1)
    return -3
END
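Msg 2758 means the proc raises a user-defined message number (60002) that has never been registered in sysmessages on this server, so the RAISERROR itself fails. A severity 16 error like this normally aborts only that statement, not the whole procedure (so the return -3 should still run), but the client may well treat the unexpected error as fatal. A hedged fix is to register the missing message; the text below is an assumption, since only the number is known:

-- Register user-defined message 60002 so RAISERROR (60002, 16, 1) can resolve it.
EXEC sp_addmessage
    @msgnum   = 60002,
    @severity = 16,
    @msgtext  = N'You do not have permission to access this portal.';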
I am currently running the BackOffice Resource Kit log shipping option for a database running on a SQL 7 installation. As part of the ongoing maintenance work that we are being asked to perform by the application vendor, I need to run DBCC DBREINDEX on most of the tables in the database. Currently this is done by stopping the log shipping routine, running the reindex script, taking a full backup, restoring the backup to the secondary server, and then restarting the log shipping scripts. This is a very time-consuming task that has to be performed at unsociable hours.
Has anybody got an opinion as to whether this would work at the same time as the log shipping scripts, or do I have to continue as at present?
The syntax of the command is DBCC writepage ({ dbid, 'dbname' }, fileid, pageid, offset, length, data)
All I know is that this command can change the contents of a data page: it replaces data in the page, and it can cause an exception when I scan the table after executing the DBCC WRITEPAGE command. The error message is: Could not continue scan with NOLOCK due to data movement.
I want to know the purpose for which SQL Server provides the DBCC WRITEPAGE command and how to use it. Could anyone give me a proper introduction to the DBCC WRITEPAGE command?
I can't find any information about it on the internet.
Has anyone had experience running DBCC in a 24x7 environment? The only time that I can run it is after a server crash. I have had the server lock up when the results page returned. It is almost impossible to go down for more than an hour, because we have international clients. The database is 1.2 GB but it is in constant use, because we run reports from a Crystal Info server and through an ASP for client use. I have considered dumping the database to another site, running DBCC there, copying it back to the original and restoring the logs up to the current time. Any suggestions will be greatly appreciated.
So every time I run a DBCC command like DBCC SHOW_STATISTICS or DBCC INDEXDEFRAG, the query executes and goes on forever. When I run sp_who2 I notice that my DBCC SPID goes into sleep mode with high CPUTime and high DiskIO. I also notice that my server has no activity: no inserts, no updates and no select statements running by other users. Why can't I run a DBCC command?
Your thoughts and suggestions are highly appreciated! Thanks in advance
Lana
One of our databases seems to be looking dodgy as some scheduled jobs are failing, but DBCC CHECKDB is no use since it has been running for over 1/2 hour without giving me any results, just the spinning globe.
How do I find out what is wrong without resorting to backups?
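On SQL Server 2005 and later you can at least see how far the CHECKDB has got instead of watching the spinning globe (a hedged sketch; this DMV does not exist on SQL 2000):

-- Run from a second connection while CHECKDB is executing:
SELECT session_id, command, percent_complete, estimated_completion_time
FROM sys.dm_exec_requests
WHERE command LIKE 'DBCC%'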
I am trying to reindex a large table, and cannot because there isn't enough room in the primary filegroup. The database consists of one physical file in the primary filegroup, and the table is over 50% of the size of the database. When the table is less than 50% of the size of the database, I do not see this problem.
BTW, the only index on the table is the primary key which consists of two columns, one is an integer and the other datetime.
It seems as if SQL Server needs free space equal to the current size of the table in order to reindex. Is this the case?
It is not an option for me to allow the database to autogrow. Is there anything else I can do?
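Broadly yes: rebuilding the clustered index builds a complete new copy of the table (plus sort space) before the old one is dropped, so roughly the table's size has to be free in the filegroup. A hedged alternative that works in place, trading speed for space (database, table, and index names are placeholders; on SQL Server 2005 and later, ALTER INDEX ... REORGANIZE is the equivalent):

-- In-place defragmentation: needs almost no free space in the filegroup
-- and can be interrupted without losing the work done so far.
DBCC INDEXDEFRAG (MyDatabase, 'dbo.MyLargeTable', 'PK_MyLargeTable');

-- SQL Server 2005+ equivalent:
-- ALTER INDEX PK_MyLargeTable ON dbo.MyLargeTable REORGANIZE;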
SQL Server 2000, SP4, a database with a 17Gb log file. It has been backed up, so all transactions should be validated; now the physical file size needs to be shrunk, because I need the disk space and I also want to speed up the backup process.
http://support.microsoft.com/kb/272318/ Tells me what to do but not where to do it.
So I need to run this code : DBCC SHRINKFILE(pubs_log, 2)
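The "where" is simply the database that owns the log file: DBCC SHRINKFILE has to be run in the context of that database. The KB article's example uses pubs, so substitute your own database name and logical log file name (which you can look up in sysfiles):

USE MyDatabase
GO
-- SELECT name FROM sysfiles   -- shows the logical file names
DBCC SHRINKFILE (MyDatabase_log, 2)   -- target size is in MB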
I have a very strange problem. I have installed MS Windows Server 2003 SP2 and MS SQL Server 9.0.3054 (SP2). I have a database dbTraceIT with the data file on the D drive and the log file on the E drive. If I run the T-SQL commands:
use dbTraceIT
go
dbcc checkdb

or an Integration Services package (task: Check Database Integrity Task, developed with MS Visual Studio),
then this command or task generates hard disk errors on the D drive. The chkdsk tool reports errors when its index verification completes. After repairing the disk errors, if I run the CHECKDB T-SQL command the situation repeats. Question: is this a bug or something else? Do you get similar disk errors if you use, e.g., Integration Services packages (for instance an index rebuild or whatever)?
Regards, Dariusz
PS Steps to reproduce
1. Find any DB on your MS SQL instance. Run the chkdsk command on the hard drive where the DB's data file is stored. Verify that everything is OK.
2. Run the T-SQL commands: use dbTraceIT / go / dbcc checkdb
3. Run chkdsk again. It should show hard disk errors.
I am having trouble executing a stored procedure on a remote server. On my local server, I have a linked server set up as follows: Server1.abcd.myserver.com\SQLServer2005,1563
This works fine on my local server:
Select * From [Server1.abcd.myserver.com\SQLServer2005,1563].DatabaseName.dbo.TableName
This does not work (attempting to execute a remote stored proc named 'Data_Add'):
When I attempt to run the above, I get the following error: Could not locate entry in sysdatabases for database 'Server1.abcd.myserver.com\SQLServer2005,1563'. No entry found with that name. Make sure that the name is entered correctly.
Could anyone shed some light on what I need to do to get this to work?
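The error reads as if the linked server name is being parsed as a database name. A remote proc on a linked server has to be called with four-part naming, the linked server as the first part (a sketch; the parameter is an assumption since Data_Add's signature isn't shown):

-- [linked server].[database].[schema].[procedure]
EXEC [Server1.abcd.myserver.com\SQLServer2005,1563].DatabaseName.dbo.Data_Add @SomeParam = 'value'

-- If the call is rejected as an RPC, enable RPC Out on the linked server:
EXEC sp_serveroption 'Server1.abcd.myserver.com\SQLServer2005,1563', 'rpc out', 'true'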
Hi All, quick question: I have always heard it is best practice to check whether a proc exists, and if so, drop it, then create it. I just wanted to know why that's a best practice. I am trying to put that theory in place at my work, but they are asking for a good reason to do this before actually implementing it. All I could think of was that when you're creating a proc you won't get an error if the procedure already exists, but doesn't it also have to do with compilation and perhaps execution? Does anyone have a good argument for doing stored procs this way? All feedback is appreciated. TIA, ~CK
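For reference, the pattern in question (the proc name is a placeholder). The practical argument is mostly the one you already have: CREATE PROCEDURE fails if the name already exists, so a script that must be re-runnable drops first. Dropping also discards the old cached plan and the old object, which means permissions granted on the proc have to be re-granted afterwards - worth mentioning when the team weighs it up.

IF EXISTS (SELECT * FROM dbo.sysobjects
           WHERE id = OBJECT_ID(N'[dbo].[MyProc]')
             AND OBJECTPROPERTY(id, N'IsProcedure') = 1)
    DROP PROCEDURE [dbo].[MyProc]
GO

CREATE PROCEDURE [dbo].[MyProc]
AS
BEGIN
    SELECT 1
END
GO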
I am using SQL Server 2005 SP2 on Windows Server 2003 Standard Edition:
Microsoft SQL Server 2005 - 9.00.3042.00 (Intel X86) Feb 9 2007 22:47:07 Copyright (c) 1988-2005 Microsoft Corporation Standard Edition on Windows NT 5.2 (Build 3790: Service Pack 2)
I have a database that's rather large. The log file is 94656 pages, and the data file itself is 94197200 pages. There's only one data file and one log file. The database passes DBCC CHECKDB with no errors.
When I run DBCC SHRINKDATABASE against the database, the command runs for about twenty seconds then produces this error:
Msg 0, Level 11, State 0, Line 0 A severe error occurred on the current command. The results, if any, should be discarded.
I can't find anything interesting in the ERRORLOG around the time that I run this command. The error appears if I use the TRUNCATEONLY option or not.
How do I fix this problem?
And in general, why are the engine errors in SQL Server so confusing and not directly actionable?
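One hedged workaround while the root cause is unknown: skip DBCC SHRINKDATABASE and shrink the files one at a time with DBCC SHRINKFILE, which gives finer control and sometimes succeeds where the database-level command fails (the logical file names below are placeholders; look yours up in sys.database_files):

USE MyBigDatabase
GO
SELECT file_id, name, type_desc, size FROM sys.database_files   -- find the logical names

DBCC SHRINKFILE (MyBigDatabase_Data, TRUNCATEONLY)   -- release free space at the end of the data file
DBCC SHRINKFILE (MyBigDatabase_Log, 1024)            -- target size in MB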
I'm running a simple DBCC DBREINDEX ('myTable') and I receive the following error:

Server: Msg 169, Level 15, State 2, Line 2
A column has been specified more than once in the order by list. Columns in the order by list must be unique.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

I can successfully reindex other tables in this database. I thought that perhaps I had objects in the database that ended up with the same name, but I've pretty much ruled that out.
Any suggestions?
Thanks
John D. Morris
mailto://jmorris_42@hotmail.com
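One way to narrow it down (a hedged sketch; the index names are placeholders) is to rebuild each index on the table individually, since DBCC DBREINDEX accepts an index name as its second argument; the index whose rebuild reproduces Msg 169 is the one with the suspect definition:

-- List the table's indexes and their key columns first:
EXEC sp_helpindex 'myTable'

-- Then rebuild them one at a time:
DBCC DBREINDEX ('myTable', 'PK_myTable')
DBCC DBREINDEX ('myTable', 'IX_myTable_SomeColumn')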