What Should Be Considered Excessive Lock Requests
May 22, 2006
I have a third-party SQL 2000 app, mostly bad SQL, and I have lock issues. When monitoring SQL lock requests per second, I normally get between 500,000 and 1,000,000 requests. For a 4-way box with 16 GB of memory, what is considered an excessive number of locks?
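As a rough way to put a number on it, you can sample the cumulative lock counter from T-SQL and diff two readings. A minimal sketch for SQL 2000, assuming the standard counter and instance names in master..sysperfinfo:

-- sample the cumulative Locks counter twice, 10 seconds apart
DECLARE @first int, @second int
SELECT @first = cntr_value FROM master..sysperfinfo
WHERE object_name LIKE '%:Locks%'
AND counter_name = 'Lock Requests/sec' AND instance_name = '_Total'
WAITFOR DELAY '00:00:10'
SELECT @second = cntr_value FROM master..sysperfinfo
WHERE object_name LIKE '%:Locks%'
AND counter_name = 'Lock Requests/sec' AND instance_name = '_Total'
-- average lock requests per second over the interval
SELECT (@second - @first) / 10 AS lock_requests_per_sec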
Jul 23, 2005
Hello! I am trying to investigate a strange problem with a particular stored procedure. It runs OK for several days and suddenly we start getting a lot of locks. The reason is a [COMPILE] lock placed on this procedure. As a result, we have 40-50 other connections waiting, then the next connection using this procedure takes the [COMPILE] lock, etc. The client is fully qualifying the stored procedure by database/owner name and it doesn't start with sp_. I know these are the usual reasons for a [COMPILE] lock being placed. Is there something else that might trigger this lock? When troubleshooting this issue, I noticed there was no plan for this procedure in syscacheobjects. The stored procedure is very simple (I know it could be rewritten/optimized, but our developer wrote it):
CREATE PROCEDURE [dbo].[vsp_mail_select]
@user_id int,
@folder_id int,
@is_read bit = 1, -- if 1, pull everything, else just pull unread mail
@start_index int = null, -- unused for now, we return everything
@total_count int = null output, -- count of all mail in specified folder
@unread_count int = null output -- count of unread mail in specified folder
AS
SET NOCOUNT ON
select m1.* from mail m1 (nolock)
where m1.user_id = @user_id and folder_id = @folder_id
and ((@is_read = 0 and is_read = 0) or (@is_read = 1))
order by date_sent desc
select @total_count = count(mail_id) from mail m1 (nolock)
where m1.user_id = @user_id and folder_id = @folder_id
and ((is_read = 0 and @is_read = 0) or (@is_read = 1))
select @unread_count = count(mail_id) from mail m1 (nolock)
where m1.user_id = @user_id and folder_id = @folder_id and is_read = 0
GO
I was monitoring the server for a couple of days before this, and I am not sure why this happens every 3-4 days only! Any help on this matter would be greatly appreciated!
Thanks,
Igor
Oct 4, 2006
Hi all,
First of all, I should mention that I have posted a similar question in an Oracle forum and received enough Oracle-related feedback and solutions here (http://www.dbforums.com/showthread.php?t=1609238).
Now I want to know: if LIKE searches are awful (performance-wise) in SQL Server too, do you recommend any solution to enhance them with some indexing trick? I want to avoid using Microsoft Indexing Service (if I spell it correctly) as far as I can ;)
Note: We have a lot of LIKE '%abc%' queries in this system.
-Thanks in advance for your time and help :)
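For what it's worth, the core issue is the same on SQL Server: a pattern with a leading wildcard cannot seek an index on the column, so every LIKE '%abc%' becomes a scan, while LIKE 'abc%' can use an index. A small illustration with a hypothetical table and index:

-- hypothetical table: an index on the searched column only helps
-- patterns without a leading wildcard
CREATE TABLE Products (ProductId int PRIMARY KEY, Name varchar(100) NOT NULL)
CREATE INDEX IX_Products_Name ON Products (Name)

SELECT ProductId FROM Products WHERE Name LIKE 'abc%'   -- index seek possible
SELECT ProductId FROM Products WHERE Name LIKE '%abc%'  -- scans every row

For true contains-style searching without Indexing Service, SQL Server's own Full-Text Search feature is the usual alternative.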
Apr 4, 2006
Hello
I use a Flat File Connection Manager for a file with 18 columns.
My column delimiter is the "~" character and my row delimiter is "{CR}{LF}".
The source file contains about 2300 lines. None of them contain NULL values.
My last column is a numeric(16,2). Even though it is not the most appropriate type for the values I want to extract, it works for all my columns.
My problem is with the last column. I have read that SQL Server 2005 actually interprets the row delimiter as the delimiter of the last column.
But here my values are OK and are put in the destination table if the value is not 0: "+0000000001352", for example, in the file.
If it is 0 ("+00000000000000"), and I treat errors as IGNORE FAILURE and put the data in the destination table accepting NULL values, I get NULL values in the table for these 0 values, and correct values for the non-0 values.
How do you explain that? How can I fix the problem and correctly read my file?
For your information, if I REDIRECT ROW in the Flat File Error Output, the ErrorCode is -1071607676 and the ErrorDescription is "The data value cannot be converted for reasons other than sign mismatch or data overflow."
Feb 2, 2007
I simply made my script task (or any other task) fail
In my package error handler I have an Execute SQL task for a stored proc.
The SP statement is set in the following expression (it works fine at design time):
"EXEC [dbo].[us_sp_Insert_STG_FEED_EVENT_LOG] @FEED_ID= " + (DT_WSTR,10) @[User::FEED_ID] + ", @FEED_EVENT_LOG_TYPE_ID = 3, @STARTED_ON = '"+(DT_WSTR,30)@[System::StartTime] +"', @ENDED_ON = NULL, @message = 'Package failed. ErrorCode: "+(DT_WSTR,10)@[System::ErrorCode]+" ErrorMsg: "+@[System::ErrorDescription]+"', @FILES_PROCESSED = '" + @[User::t_ProcessedFiles] + "', @PKG_EXECUTION_ID = '" + @[System::ExecutionInstanceGUID] + "'"
From progress:
Error: The Script returned a failure result.
Task SCR REIL Data failed
OnError - Task SQL Insert Error Msg
Error: A deadlock was detected while trying to lock variable "System::ErrorCode, System::ErrorDescription, System::ExecutionInstanceGUID, System::StartTime, User::FEED_ID, User::t_ProcessedFiles" for read access. A lock could not be acquired after 16 attempts and timed out.
Error: The expression ""EXEC [dbo].[us_sp_Insert_STG_FEED_EVENT_LOG] @FEED_ID= " + (DT_WSTR,10) @[User::FEED_ID] + ", @FEED_EVENT_LOG_TYPE_ID = 3, @STARTED_ON = '"+(DT_WSTR,30)@[System::StartTime] +"', @ENDED_ON = NULL, @message = 'Package failed. ErrorCode: "+(DT_WSTR,10)@[System::ErrorCode]+" ErrorMsg: "+@[System::ErrorDescription]+"', @FILES_PROCESSED = '" + @[User::t_ProcessedFiles] + "', @PKG_EXECUTION_ID = '" + @[System::ExecutionInstanceGUID] + "'"" on property "SqlStatementSource" cannot be evaluated. Modify the expression to be valid.
Warning: The Execution method succeeded, but the number of errors raised (4) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
And how did I get 4 errors? I only set my script task result to failure.
Feb 11, 1999
Help !!
I am running a database of 500-600 MB, 20-30% of which is new data daily (5-day-old data is deleted as part of the nightly maintenance), and my nightly maintenance is regularly taking an hour plus.
CheckDB, New Alloc, Catalog, re-indexing and dumps are performed nightly (2am-ish), and as the system is in constant use I cannot afford such a long task. I can't use weekly dumps/CheckDB as we use transaction log replication and the logs are dumped every minute. I really need some suggestions on how I can improve matters. The deletion of old data in particular is taking a long time due to the use of a cursor and local variables, but is there a faster way to do this:
-- (the DECLARE CURSOR statement was omitted from the post;
-- the two variables are assumed to be ints)
DECLARE @connectionid int, @dRent int
OPEN tnames_cursor
FETCH NEXT FROM tnames_cursor INTO @connectionid
WHILE (@@fetch_status <> -1)
BEGIN
IF (@@fetch_status <> -2)
BEGIN
Select @dRent = DeliveredRetention from ControlDB..connectiontable
where ControlDB..connectiontable.Cid = @connectionid
-- the original filtered on @cid, which is never assigned (it should be the
-- cursor variable), and "!= NULL" never matches anything; use IS NOT NULL
Delete from MyDB..[Table] where Cid = @connectionid
and DateDelivered IS NOT NULL
and Datediff(hh, DateDelivered, getdate()) >= (@dRent * 24)
END
FETCH NEXT FROM tnames_cursor INTO @connectionid
END
CLOSE tnames_cursor
DEALLOCATE tnames_cursor
GO
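For the deletion itself, a hedged set-based alternative to the cursor, keeping the placeholder names from the post (note that DATEDIFF counts hour boundaries, so this is approximately, not exactly, equivalent, and the joined DELETE form assumes your version accepts it):

-- one joined DELETE instead of a row-by-row cursor loop
DELETE t
FROM MyDB..[Table] t
JOIN ControlDB..connectiontable c ON c.Cid = t.Cid
WHERE t.DateDelivered IS NOT NULL
AND t.DateDelivered <= DATEADD(hh, -24 * c.DeliveredRetention, GETDATE())

If a single DELETE is too big for the log, SET ROWCOUNT 5000 before it and repeat the statement in a loop while @@ROWCOUNT > 0, so each pass is a smaller transaction.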
These jobs have also started running out of locks and deadlocking on occasion, which seems odd as the system has 10,000 locks available (escalating at 2,000).
Any Suggestions would be very much appreciated
Damon
Apr 7, 2004
Hi
We are facing an acute situation in our web application. The technology is ASP.NET/VB.NET with SQL Server 2000.
Consider a scenario in which User 1 clicks a button which calls a SQL stored procedure. This procedure selects Group A of records on database Page1.
At the same time, User 2 also clicks the same button, which calls the same stored procedure. This procedure selects Group B of records on database Page1.
So it's the same Page1 but different sets of records. At this moment, both calls hold shared locks on Page1 inside the procedure.
Now, in call 1, inside the procedure after selecting Group A of records, the next statement is an update to those records. As soon as the update statement executes, SQL Server throws a deadlock exception as follows:
Transaction (Process ID 78) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction
We are able to understand why it's happening. It's because Group A and Group B of records are on the same Page1, and both users have a shared lock on Page1, so neither gets the exclusive lock on its records for the update, even though the records are different.
How can I resolve this issue? How can I get a lock on the wanted rows instead of the entire page?
Please advise. Thanks a bunch.
Pankaj
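One pattern that often avoids this particular shared-to-exclusive conversion deadlock (a sketch, not necessarily what this app must do; table and column names are hypothetical): take an update lock at row granularity when reading the rows you are about to modify, so the two calls queue instead of both holding shared page locks.

DECLARE @group int
SET @group = 1 -- whichever group of rows this call works on
BEGIN TRAN
-- UPDLOCK makes the read take an update lock; ROWLOCK asks for row granularity
SELECT * FROM Records WITH (UPDLOCK, ROWLOCK)
WHERE RecordGroup = @group
UPDATE Records SET Processed = 1
WHERE RecordGroup = @group
COMMIT TRAN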
Nov 27, 2006
Hi there group.
Could some please point me in the right direction?
We have a database that is about 28 GB in size; recently the SQL Server process has been using approximately 1.6 GB of memory.
I have tried running SQL Profiler to find out which stored procedure is causing this, but was unsuccessful.
When SQL is restarted, the process runs at about 50 MB for about 20 seconds and then starts climbing up to 1.6 GB of memory usage.
Please assist.
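For what it's worth, this pattern is normal: SQL Server grows its buffer cache toward whatever memory it is allowed to use and does not give it back, so climbing to a steady 1.6 GB against a 28 GB database usually means the cache has warmed up, not that one procedure is leaking. If the memory has to be bounded, the standard knob is max server memory; a sketch (value in MB):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
-- cap the buffer pool at 1 GB
EXEC sp_configure 'max server memory', 1024
RECONFIGURE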
Apr 16, 2008
I'm running into a blocking problem on my SQL 2000 server. I have a table that is frequently read/written (inserts, updates, deletes). I don't place any explicit locks, but I do a SELECT @@IDENTITY after I insert a record to get the identity value via SqlCommand.ExecuteScalar.
So my questions:
#1 Is blocking normal? (40-90 blocks consistently, with 350 or so client connections)
#2 Is there any better coding solution to avoid blocks?
#3 I need to get the identity value after the record is added, and I thought ExecuteScalar was the fastest with the least overhead, but perhaps I'm wrong?
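On #3: SELECT @@IDENTITY can return an identity generated by a trigger on the same connection; SCOPE_IDENTITY() (SQL 2000 and later) is the safer call, and both statements can be sent as a single batch so ExecuteScalar still costs one round trip. A sketch with a hypothetical table:

-- send both statements as one command; ExecuteScalar reads the value back
INSERT INTO MyTable (Name) VALUES ('example')
SELECT CAST(SCOPE_IDENTITY() AS int)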
Any suggestions or hints welcome.
Thanks, Rob.
.NET 2.0
Mar 16, 1999
We recently upgraded from SQL 6.5 to SQL 7. I have a few .sql files that were each running around 5 - 8 minutes under 6.5. These same files now each take over 30 minutes to run. Has anybody had problems with their queries taking longer to run under 7.0? These files are quite large and are comprised of 3 - 4 batches with several queries in each batch. If anybody has any thoughts on the cause please let me know.
Thanks in advance.
May 23, 2006
Hi there,
Currently using SQL Server 2000 (SP4). The following condition started occurring last week:
- Server has excessive blocking
- Majority of the processes are in runnable state
- Excessive blocking happens for a few mins. and repeats again during the day. Does not happen at night.
- Nothing on the server errorlog, profiler
- CPU averages 40 - 50% at that point of excessive blocking
Any help would be greatly appreciated.
Thanks.
Jun 25, 2007
Since the other related topic is closed/answered...
The Short version:
SQL is now logging too much info with every package. The volume of the new "User: Diagnostic" event has caused some packages to fail and the command-line exclusion option appears to have no effect on the events logged to the SQL provider. Is this a bug in dtexec or am I using the wrong syntax to exclude log entries? I don't want to modify all of my SSIS packages...
More Info:
SQL SP2 introduced new logging events, most of which appear to get logged by default. So far, none of our packages have used any sort of explicit logging configuration; it's all been set at the command line using a syntax like shown below:
dtexec.exe /FILE "D:\SSIS Packages\MyAppVendors.dtsx" /MAXCONCURRENT " -1 " /CHECKPOINTING OFF /REP E;Diagnostic /LOGGER "{6AA833A1-E4B2-4431-831B-DE695049DC61}";"MyDBConnName"
This does appear to correctly limit what gets logged to the console (and thereby the SQL Agent's job step log), but has no effect on what's logged to the database. Normally, I'd use /REP EWDCI, but I was attempting to limit the log entries to Errors only.
I first came across this error when a package failed, but it only logged the following to the console, with nothing in sysdtslog90 (while not the "latest/greatest" server, this is a relatively low-utilized quad 2.8 GHz Xeon ProLiant DL580 G2):
Error: 2007-06-21 06:01:30.45
Code: 0xC0202009
Source: MYPACKAGENAME Log provider "{0C3CBE9B-D828-41C2-98D2-99BA498B314A}"
Description: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80004005.
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Connection is busy with results for another command".
End Error
Error: 2007-06-21 06:01:30.46
Code: 0xC0014010
Source: MYPACKAGESTEP Load
Description: The SSIS logging provider "{0C3CBE9B-D828-41C2-98D2-99BA498B314A}" failed with error code 0xC0202009 ((null)). This indicates a logging error attributable to the specified log provider.
End Error
I changed this one package to only log OnError events, but I'd rather not have to change every package to do the same, plus I'd like the ability to easily turn on verbose or any other logging level when needed.
Apr 18, 2000
I have got SQL 6.5 SP5a with SMS 1.2 SP4 on separate Alpha boxes. I have automated the backups so they are scheduled for after hours. SMS gets backed up first and TEMPDB shortly afterwards. However, since a backlog in SMS MIFs happened, the TEMPDB backup shows 100,000 pages backed up. When you back it up on its own, it only shows 170+ pages.
The SMS DB is 600 MB in size, the log is 210 MB, open objects is 5000, and TEMPDB is set to 210 MB on its own device.
Any ideas?
Aug 9, 2007
Hi all,
I'm trying to get an understanding of a serious problem I have with a large DB in production. This is going to be obvious to someone (everyone probably) <bg>
I have a table which consists of numerous varchars and ints but also a Text type field. This table resides in a SQL 2000 Database. This DB currently has a data file size of 16Gb and a Transaction Log size of 17Gb. When I edit the table and increase the size of a Varchar field from 50 to 100 these files grow to more than double their size!
Why is this happening and how can I prevent this?
TIA
NozFx
Jun 8, 2015
I am getting this message in the error log:
"Database XYZ has more than 1000 virtual log files which is excessive. Too many virtual log files can cause long startup and backup times. Consider shrinking the log and using a different growth increment to reduce the number of virtual log files."
I am using SQL Server 2008 R2.
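The usual remedy (a sketch; substitute your real database name and the logical name of its log file) is to shrink the log and then re-grow it in a few large increments, which rebuilds it with far fewer VLFs:

USE XYZ
-- one row per VLF; compare the row count before and after
DBCC LOGINFO
-- shrink the log as far as possible, then re-grow it in big steps
DBCC SHRINKFILE (XYZ_log, 1)
ALTER DATABASE XYZ MODIFY FILE (NAME = XYZ_log, SIZE = 4096MB, FILEGROWTH = 1024MB)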
Jul 31, 2007
We are running SQL Server 2000 Enterprise Edition on a 2-node cluster with an IIS/ASP.NET front end hosting 150-200 active connections. There is an SVCHOST process running under the LOCAL SERVICE account, hosting the Remote Registry service, that is using only 4,200K but is page faulting 200-500 times per second. I realize this process is used for failover, but the page faulting seems excessive. Any thoughts on this?
The servers are running Windows Server 2003 with 4 processors and 4gb RAM.
Aug 3, 2007
We have a SQL2000 database (Publisher) replicating inserts and updates across a 10Mb link to a SQL 2005 database (Subscriber). The Publisher has two tables we are interested in, 1 with 50 columns and 1 with 15. Both tables have 6 insert/update triggers that fire when a change is made to update columns on the publisher database.
We have set up pull transactional replication from the Subscriber to occur against the Publisher every minute. We have limited the subscription/replication configuration to publish 6 columns from table 1 and 4 from table 2. Changes occurring on any other columns in the Publisher are of no interest. The SQL 2005 database has a trigger on table 1 and table 2 to insert values into a third table. There are around 7,000 inserts/updates on table 1 and 28,000 on table 2 per day. All fields in the tables are text.
We are seeing "excessive" network traffic of approximately 1 MB per minute (approx 2 GB per 24 hrs). We also see that the Distributor databases are getting very large, up to around 30 GB and growing until they get culled. We have reduced the culling interval from 72 hours to 24 hours to reduce the size.
Does anyone have any suggestions as to how this "excessive" network traffic can be minimised and how the distributor database size can be minimised? I think maybe they are both related?
Thanks,
Geoff
WA POLICE
May 22, 2006
Hi,
I have a set of 2 DTS packages, one of which calls the other by forming a command line (dtexec) using an Execute Process task.
From the parent package -> Execute Process Task ->
dtexec /F etc... /<pkg variable> = "servername"
Both the parent and the called package have a variable "User::DWServerSQLInstance", which is mapped to the SQL connection manager's server name property using an expression. The outer package has the above variable and so does the inner called package (which gets it assigned through the command-line call from the outer package to the inner).
I "sometimes" get the following error:
OnError,I4,TESTDOMAdministrator,ACDWAggregation,{A1F8E43F-15F1-4685-8C18-6866AB31E62B},{77B2F3C7-6756-46EB-8C01-D880598FB4B3},5/22/2006 5:10:28 PM,5/22/2006 5:10:28 PM,-1073659822,0x,The variable "User::DWServerSQLInstance" is already on the read list. A variable may only be added once to either the read lock list or the write lock list.
Help would be appreciated!
I have seen other posts on this but, not able to relate the solution to my scenario.
May 10, 2006
Hi All,
I have seen a few other people have this error.
The package works fine when run from BIDS, DTExec, or dtexecui. When I schedule it, I get these random errors. (See below)
The main culprit is a variable called "RecordsetFileDIR" which is set using an expression. (@[User::_ROOT] + "RecordSets\")
A number of other variables use this as part of their expression and as they all fail, pretty much everything dies.
I have installed SP1 (Not Beta) on server. Package uses config files to set the value of _ROOT.
The error does not always seem to be with this particular variable, though. It is always a variable that uses an expression, but the errors are random. Also, it will run 3 out of 10 times without a problem. I am the only person on the server at the time.
Any ideas?
Cheers,
Crispin
Error log:
OnError,,,POSBasketImport,,,10/05/2006 12:03:34,10/05/2006 12:03:34,-1073659822,0x,The variable "User::RecordsetFileDIR" is already on the read list. A variable may only be added once to either the read lock list or the write lock list.
OnError,,,POSBasketImport,,,10/05/2006 12:03:34,10/05/2006 12:03:34,-1073639420,0x,The expression for variable "rsHeaderFile" failed evaluation. There was an error in the expression.
OnError,,,DF_Header_Header,,,10/05/2006 12:03:34,10/05/2006 12:03:34,-1071636247,0x,Accessing variable "User::rsHeaderFile" failed with error code 0xC00470EA.
OnError,,,Move All Data,,,10/05/2006 12:03:34,10/05/2006 12:03:34,-1071636247,0x,Accessing variable "User::rsHeaderFile" failed with error code 0xC00470EA.
OnError,,,Load Open Batches and Process Files,,,10/05/2006 12:03:34,10/05/2006 12:03:34,-1071636247,0x,Accessing variable "User::rsHeaderFile" failed with error code 0xC00470EA.
OnError,,,POSBasketImport,,,10/05/2006 12:03:34,10/05/2006 12:03:34,-1071636247,0x,Accessing variable "User::rsHeaderFile" failed with error code 0xC00470EA.
OnError,,,DF_Header_Header,,,10/05/2006 12:03:34,10/05/2006 12:03:34,-1071636390,0x,The file name is not properly specified. Supply the path and name to the raw file either directly in the FileName property or by specifying a variable in the FileNameVariable property.
OnError,,,Move All Data,,,10/05/2006 12:03:34,10/05/2006 12:03:34,-1071636390,0x,The file name is not properly specified. Supply the path and name to the raw file either directly in the FileName property or by specifying a variable in the FileNameVariable property.
OnError,,,Load Open Batches and Process Files,,,10/05/2006 12:03:34,10/05/2006 12:03:34,-1071636390,0x,The file name is not properly specified. Supply the path and name to the raw file either directly in the FileName property or by specifying a variable in the FileNameVariable property.
OnError,,,POSBasketImport,,,10/05/2006 12:03:34,10/05/2006 12:03:34,-1071636390,0x,The file name is not properly specified. Supply the path and name to the raw file either directly in the FileName property or by specifying a variable in the FileNameVariable property.
OnError,,,DF_Header_Header,,,10/05/2006 12:03:34,10/05/2006 12:03:34,-1073450901,0x,"component "rsHeader" (365)" failed validation and returned validation status "VS_ISBROKEN".
OnError,,,Move All Data,,,10/05/2006 12:03:34,10/05/2006 12:03:34,-1073450901,0x,"component "rsHeader" (365)" failed validation and returned validation status "VS_ISBROKEN".
OnError,,,Load Open Batches and Process Files,,,10/05/2006 12:03:34,10/05/2006 12:03:34,-1073450901,0x,"component "rsHeader" (365)" failed validation and returned validation status "VS_ISBROKEN".
OnError,,,POSBasketImport,,,10/05/2006 12:03:34,10/05/2006 12:03:34,-1073450901,0x,"component "rsHeader" (365)" failed validation and returned validation status "VS_ISBROKEN".
May 23, 2001
Hi,
I want to lock a table so others cannot lock it but are still able to read it inside transactions.
The coding I need is something like this:
set implicit_transactions on
begin transaction
select * from table1 with (tablock, holdlock)
update table2 set field1 = 'test'
commit transaction
commit transaction
I have tried the coding above; it won't prevent others from locking table1.
So, I changed the tablock to tablockx to prevent others from locking table1. But this also prevents others from reading table1. So, how can I lock table1 so others cannot lock it but can still read it?
Thank you for any help
Sep 25, 2007
We have a large number of clients attempting to replicate two publications on 2005 Express databases (2 publications subscribed to the one subscriber database) with our 2005 Server (9.00.3042.00 SP2 Standard Edition) and experiencing two significant problems:
1) Users experience the following message:
The Merge Agent failed after detecting that retention-based metadata cleanup has deleted metadata at the Subscriber for changes not yet sent to the Publisher. You must reinitialize the subscription (without upload).
This problem should not apparently occur with SQL Server 2005 (or 2005 Express) instances with SP2 applied. All clients experiencing this problem have SP2 installed as does our Server and the retention period is 30 days. The subscribers have been replicating well under that.
2) Replications never succeed after appearing to replicate/loop around for hours
This issue is the most critical, as we have clients who have been installed and re-installed with new instances of SQL Server 2005 Express, new empty databases (on the subscriber, before snapshot extraction), and fresh snapshots (less than a few hours old), and still cannot successfully replicate.
Interestingly, there is at least one case where several computers are successfully subscribed to and replicating the same database as another computer on which replication refuses to succeed.
To test we have taken a republished database from another 2005 Server which is working fine and restored it to the same server as the one holding the database with which we are experiencing problems and subscribed to it. This test worked fine and replication of both publications went through fast and repeatedly without showing any signs of problem.
This indicates that the problem is perhaps data related as it appears localised to that database.
Below are two screenshots which may assist.
Screenshot 1 shows that on the server side the replication attempts look like they are succeeding, despite the fact that the subscriber end does not indicate success. Also, the history indicates that the subscription has spent all its time initialising and not merging any changes.
Screenshot 2 shows a rogue process which appears on many of the problem-child subscribers. It shows a process running with no end time, even though the job indicates failure in the message and even though other replication attempts appear to have succeeded after it. This process stays in the history showing that it is running, even when I can find no corresponding process for it.
Can anyone suggest a further course of action/further testing/further information required which may assist?
This is extremely urgent and any assistance would be greatly appreciated!
Thanks in advance!
Scott
Sep 18, 2006
Hello everyone! I have a web project where users access an .aspx page to view information stored in a SQL database. My client wants that when one user accesses a row of information and views it, all other users should not be able to view or update the same row. That means whenever a row of data is displayed to some user, this row should be locked, even against being viewed by all other users; when this user closes the page, the row becomes available again. Should I do this in code-behind or somehow in SQL? How can I do that?
Oct 6, 2003
I remember there's a command you can put into an SP so no one else can see your SP, not even the SA. Can anyone tell me what the command is? Thanks!!
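The option being remembered is almost certainly WITH ENCRYPTION, which obfuscates the procedure text in the catalog so it cannot be read with sp_helptext. Note it is obfuscation rather than strong protection, and there is no supported way to get the source back, so keep a copy of the script:

CREATE PROCEDURE dbo.usp_Secret
WITH ENCRYPTION
AS
SELECT 1
GO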
Mar 27, 2001
Hello,
An application we are designing is behaving rather strangely. Basically, we have a trigger on SQL 2000 on a W2K Server watching for a record update. When that happens, a stored procedure gets executed, which in turn spawns a VB app that extracts that record, applies some rules to it, and writes a text file to disk. 9 out of 10 times, this program will cause a CPU spin, and it can't be killed from the Task Manager.
I did not write the VB app; I am writing the front end for the application in ColdFusion. But I need some ideas on what might be causing such behavior and how the problem can be diagnosed. Thanks for any help!
Stas Newdel
Sep 3, 2007
Is it good practice to use WITH (NOLOCK) on SELECT statements, i.e.
SELECT * FROM MyTable WITH (NOLOCK)
Or does the SQL Server optimiser automatically use WITH (NOLOCK)?
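For the record, the optimizer never adds NOLOCK on its own; it is something you opt into, and it is simply the table-level form of the READ UNCOMMITTED isolation level, dirty reads included. These two forms are equivalent for the one table:

SELECT * FROM MyTable WITH (NOLOCK)

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT * FROM MyTable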
May 30, 2006
Hi,
I have a big problem. I work with SQL 2000 on Windows 2000.
When a user does a SELECT against my database, SQL Server locks the whole table and no one can work.
How can I change the isolation level for a row, and for the whole DB?
Thanks.
Mar 29, 2007
Hello,
Wondering if anyone might have a guess about this.
I have a small 4 table DB. it's got several stored proc's and it's accessed through .NET to fill it in and get data from it.
It's been working just fine. But this morning, while it was doing its thing, I experimented with it by adding then deleting a View (through the View Wizard.)
Then later I started noticing that my .NET calls had slowed to a crawl (I hadn't made any code changes), and even direct queries through Query Analyzer had slowed too.
My question is: being that there were no network issues, could the View create/delete have caused the DB to come to a halt or perhaps a table lock-up?
While I'm at it, is there anything that I can put in stored proc's or other places to prevent locking issues (if that's what happened here.)
I already use Begin/Commit Tran pairs.
And sorry if this post doesn't read like a SQL beginner, but believe me, I am.
Thanks for your advice!!
--PhB
Aug 20, 2007
Let's say I have a view:
vuTestingNoLocks
this view looks like this...
Create View vuTestingNoLocks as
Select *
From dbo.Employees
Inner Join dbo.EmployeeTerritories on EmployeeTerritories.EmployeeID = Employees.EmployeeID
-- (as written, the CREATE VIEW fails because EmployeeID appears twice in the
-- select list; listing the columns explicitly avoids that)
If I select from this view using Select * From vuTestingNoLocks (NOLOCK)
Does the (nolock) hint propagate down through the tables? Meaning, will it still scan tables that are locked, ignoring their locks?
Oct 13, 2006
I'm working with SQL Server Express, and I want to configure a named instance so that only the 'sa' user and a specified SQL Server user with a specified password have access. In particular, I'm trying to lock out BUILTIN\Administrators. Furthermore, I need to be able to do this from a command line, since I want to configure it in a script. Nothing I do seems to work.
I've attempted to use sqlcmd and the T-SQL call ALTER LOGIN [BUILTIN\Administrators] DISABLE, but that returns the error "Cannot alter the login 'BUILTIN\Administrators' because it does not exist or you do not have permission."
What I can (apparently) successfully do is run DENY CONTROL TO [BUILTIN\Administrators]. This runs without reporting an error. However, after running it against the 'master' database and the specific database in my named instance I care about, I can still run the following:
sqlcmd -S (local)\MyInstance -d MyDB -Q "select * from my_table"
and see the contents of my_table.
What do I need to do to restrict access exclusively to 'sa' and other SQL users I designate?
Jul 19, 2006
Hello.
I need to insert some records into an accounting table and calculate the balance after that. However, other users may be trying to do the same. How do I lock the DB and make the other users wait until the right moment? I'm using SqlDataSource for this.
Thanks.
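A hedged sketch of how this is often done on the server side, rather than locking the whole DB (table and column names hypothetical): put the balance calculation and the insert in one transaction and take an update lock on the account's rows, so concurrent posters for the same account queue up while plain readers continue.

CREATE PROCEDURE dbo.usp_PostEntry @accountId int, @amount money
AS
BEGIN TRAN
DECLARE @balance money
-- the update lock serializes concurrent posters for this account
SELECT @balance = COALESCE(SUM(Amount), 0)
FROM AccountEntries WITH (UPDLOCK, HOLDLOCK)
WHERE AccountId = @accountId
INSERT INTO AccountEntries (AccountId, Amount, BalanceAfter)
VALUES (@accountId, @amount, @balance + @amount)
COMMIT TRAN
GO

From the page, the procedure can be wired up as the SqlDataSource's InsertCommand with its command type set to StoredProcedure.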
Sep 19, 2007
Hi all, I am working on a ticketing application and I want to prevent two users from booking the same ticket. The requirements are as follows:
1. The system should show all the available tickets which are not yet booked.
2. When two users book a ticket at the same time, it should not allow both persons to update it at the same time.
The main aim is to avoid data concurrency problems.
How can I get this done?
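A common way to meet requirement 2 without holding locks while users browse (a sketch; schema hypothetical) is an optimistic, conditional UPDATE: both users can see the ticket as available, but only the first conditional update succeeds, and the second can be told to pick another seat.

CREATE PROCEDURE dbo.usp_BookTicket @ticketId int, @userId int
AS
UPDATE Tickets
SET BookedBy = @userId, BookedAt = GETDATE()
WHERE TicketId = @ticketId
AND BookedBy IS NULL          -- succeeds only if the ticket is still unbooked
IF @@ROWCOUNT = 0
RAISERROR ('Ticket has already been booked by another user.', 16, 1)
GO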
Jul 23, 2004
Hi, all:
This problem is almost driving me crazy; I hope I can get some hints from you guys!!!
OK, here is the situation:
I want only one user to be able to modify (update) the data from my page at a time, and if at the same time there are other users connecting to my database through the .aspx page, they can only browse the data until the first user finishes updating.
It seems I need to implement locking in the database, but I am not sure how I am going to do that using ASP.NET!!!
Thanx in advance!