We have several SQL 2000 databases on one server.
One of the applications I'm responsible for has batch jobs that run for an hour; all activity is on the database. During this hour, other applications that use other databases on the same server experience time-outs. One of my coworkers did a count(*) on an empty table and it took 11 seconds.
We pay people to keep our servers up and running. Is this something they might solve by reconfiguring the server? It seems strange to me that a single database is allowed to hog all server resources.
We are meeting with them later this week, and I'd like to have some knowledge about this; we don't want to be BS'ed into buying a new server.
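Before the meeting, it may help to capture a snapshot of what the server is doing while the batch job runs. A minimal sketch, assuming SQL 2000 and that you can run a query from Query Analyzer during the slow hour (it reads the same sysprocesses data that EM's Current Activity shows):

-- Run while the batch job is active; shows who is blocked, what each spid is waiting on,
-- and which database it belongs to (system spids in SQL 2000 are 50 and below).
SELECT spid,
       blocked,                        -- spid of the blocker, 0 if not blocked
       lastwaittype,
       cpu,
       physical_io,
       DB_NAME(dbid) AS database_name,
       hostname,
       program_name
FROM master.dbo.sysprocesses
WHERE spid > 50
ORDER BY blocked DESC, physical_io DESC

If the other applications' spids show up blocked by the batch job's spid, it points at locking; if nothing is blocked but the batch spid's physical_io is enormous, it looks more like disk or memory pressure, which is exactly the kind of thing the server people can speak to.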
Hi, we have a SQL 7 / Win2K cluster, and yesterday afternoon the users were complaining about poor performance. Their queries were timing out (not all of them, just some against some large tables).
I ran an ad-hoc query against the table from my machine and I also timed out. Then I went right to the box that had control of the cluster, did the same thing there, and also timed out. Because of time constraints (and we are in testing mode) we tested a failover, and everything was back to normal after that.
So now we want to try to figure out what could have been the problem. At the time I checked the memory and CPU usage: CPU was very low (0-5%) and only about a third of the memory was in use. It couldn't be a bad query or index, because after the failover it worked fine.
Could there be something wrong with the specific box that had control at the time? I don't know where to look.
Hi All. I have some rather large SQL Server 2000 databases (around 60GB). I have set up jobs to re-index the tables and update statistics every Sunday. This worked well for a few months. Now, after a day or two of use, connections keep timing out. If I start the jobs manually, all is well for two days or so. Surely there is a better solution to this? TIA, Ryan.
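One direction worth testing before adding more jobs or hardware (a sketch, not a known fix for this case): keep the Sunday re-index, but refresh statistics more often in between, since stale statistics after a day or two of churn would produce exactly this kind of gradual slowdown.

-- Hypothetical nightly job step for SQL Server 2000: refresh statistics on every table
-- without the cost of a full re-index. MyDatabase and MyLargeTable are placeholder names.
USE MyDatabase
EXEC sp_updatestats

-- Optionally check fragmentation on a specific table before deciding to re-index it mid-week.
DBCC SHOWCONTIG ('dbo.MyLargeTable')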
The C++ application calls the database to look up property data. One troublesome query is a function that returns a table, finding data which is assembled from four or five tables through a view that has a join, and then updating the resulting @table from some other tables. There are several queries inside the function, which are selected according to which parameters are supplied (house #, street, zip, or perhaps parcel number, or house #, street, town, city, ...etc.). If a lot of parameters are provided, and the property is not in the database, then several queries may be attempted -- it keeps going until it runs out of queries or finds something. Usually it takes ~1-2 sec for a hit, but maybe a minute in some failure cases, depending on the distribution of data. (~100 million properties in the DB) Some queries operate on the assumption the input data is slightly faulty, and take relatively a long time, e.g., if WHERE ZIP=@Zip fails, we try WHERE ZIP LIKE substring(@Zip,1,3)+'%'. While all this is going on the application may decide the DB is never going to return, and time out; it also seems more likely to throw an exception the longer it has to wait. Is there a way to cause the DB function to fail if it takes more than a certain amount of time? I could also recast it as a procedure, check the time consumed after every query, and abandon the search if a certain amount of time has elapsed. Thanks in advance, Jim Geissman
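SQL 2000 won't let a user-defined function call GETDATE(), so there is no clean way to make the function itself give up after a time limit; the procedure recast Jim describes is the usual way to do it. A minimal sketch of that pattern, with hypothetical object, column, and parameter names standing in for the real ones:

-- Sketch of a "time budget" search procedure: try the fast queries first,
-- and stop attempting the slower fallbacks once the budget is spent.
CREATE PROCEDURE dbo.usp_FindProperty
    @HouseNo      varchar(10),
    @Street       varchar(60),
    @Zip          varchar(10),
    @TimeBudgetMs int = 5000              -- give up after roughly 5 seconds
AS
SET NOCOUNT ON
DECLARE @start datetime
SET @start = GETDATE()

-- Attempt 1: exact match (dbo.PropertyView is a placeholder for the real view)
SELECT * FROM dbo.PropertyView
WHERE HouseNo = @HouseNo AND Street = @Street AND Zip = @Zip
IF @@ROWCOUNT > 0 RETURN
IF DATEDIFF(ms, @start, GETDATE()) > @TimeBudgetMs RETURN   -- out of time, stop here

-- Attempt 2: assume the ZIP is slightly faulty
SELECT * FROM dbo.PropertyView
WHERE HouseNo = @HouseNo AND Street = @Street
  AND Zip LIKE SUBSTRING(@Zip, 1, 3) + '%'
IF @@ROWCOUNT > 0 RETURN
IF DATEDIFF(ms, @start, GETDATE()) > @TimeBudgetMs RETURN

-- ...further attempts follow the same pattern
GO

The application then gets whatever the last completed attempt returned; setting the client's command timeout a little above the budget keeps the two sides consistent.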
I recently installed sharepoint 3.0 on our fileserver, which has our main db using MSDE. I didn't know at the time that it would also install 2k5 embedded edition, but even if I had, I don't think it would have changed my decision.
Anyway, soon after, performance on MSDE completely tanked. Queries would execute extremely slowly, or not at all. I checked the CPU and mem usage, and all were fine. No blocked sql commands either. We ended up just killing the Sql 2k5 EE service. We're moving off MSDE eventually, but I would still like to find out why it happened, and if there's a fix or workaround.
Okay, we are running our Master Package (and therefore all related Child packages) through a .bat file. The .bat file is scripted using the following logic for an entire month of daily runs:
Code Snippet
DTExec /FILE E:ETLFinancialDataMartMaster.dtsx /DECRYPT masterpwd /SET Package.Variables[ReportingDate].Value;"2/01/2007" > E:ETLErrorLogsProcessingetl_20070201log.txt
IF NOT %ERRORLEVEL%==0 GOTO ERROR%ERRORLEVEL%
mkdir E:ETLErrorLogsArchive20070201
move E:ETLErrorLogsProcessing*.txt E:ETLErrorLogsArchive20070201

DTExec /FILE E:ETLFinancialDataMartMaster.dtsx /DECRYPT masterpwd /SU /SET Package.Variables[ReportingDate].Value;"2/02/2007" > E:ETLErrorLogsProcessingetl_20070202log.txt
IF NOT %ERRORLEVEL%==0 GOTO ERROR%ERRORLEVEL%
mkdir E:ETLErrorLogsArchive20070202
move E:ETLErrorLogsProcessing*.txt E:ETLErrorLogsArchive20070202
etc...
Generally it takes about 40-45 minutes to run one day's worth of data. However, we have found unpredictable instances where the job will take 3 hours or even 6 hours and appear to hang...
The weirdness sets in when we kill the job and rerun it. In every instance of a rerun, the job executes in the normal 40-45 minute time frame. Obviously, we would like to institute some sort of logging, monitoring and error handling... including, if need be, a method to time out a process and restart it.
I am reviewing the WMI (Windows Management Instrumentation) Task but I'm not entirely convinced that it's the right tool for the job.
Questions:
Has anyone else experienced the type of processing behavior that I described? Has anyone been successful at using WMI or another process to monitor and time out packages? If so, are there sample packages or a good tutorial that maps it out? Unrelated to this issue, we also have instances of incomplete processing logs. The logs don't finish writing, and the weird part is that they all end at the same point. Does anyone have experience with incomplete job logs?
Hi everyone! I'm new to this forum and I suspect I'll be using this forum frequently. Good stuff.
Although this question may appear to be Web-related, I think the problem is with what I'm doing with the database. Please read.
I'm trying to implement a page tracking solution using ASP and SQL 2000. It basically writes a new record to a table every time a user visits a page on the site. It appeared to work fine at first, then I've increasingly been getting time out errors on my pages -- all pointing to the include file that fires the database write.
Here's the code that's referenced on every page:
Set Conn = Server.CreateObject("ADODB.Connection")
Conn.Open "dsn=x;uid=y;pwd=z;"

Set objRecordset1 = Server.CreateObject("ADODB.Recordset")
objRecordset1.Open "SELECT * FROM table", Conn, 1, 2
objRecordset1.AddNew
objRecordset1.Fields("PAGE") = Left(Request.ServerVariables("SCRIPT_NAME"), 100)
objRecordset1.Fields("QUERY_STRING") = Left(Request.ServerVariables("QUERY_STRING"), 100)
objRecordset1.Fields("DATE") = Date()
objRecordset1.Fields("TIME") = Time()
objRecordset1.Fields("PLATFORM") = Left(Request.ServerVariables("HTTP_USER_AGENT"), 100)
objRecordset1.Fields("REFERRER") = Left(Request.ServerVariables("HTTP_REFERER"), 100)
objRecordset1.Fields("USER_IP") = Left(Request.ServerVariables("REMOTE_ADDR"), 20)
If Request.Cookies("TEST")("ID") <> "" Then
    objRecordset1.Fields("VISITOR_ID") = Request.Cookies("TEST")("ID")
End If
objRecordset1.Update

Conn.Close
Set Conn = Nothing
%>
After taking out the reference to the above code everything speeds back up. So, I know the performance hit and time out issues have to do with the code above.
Is it the simultaneous writes to the table, the constant opening and closing of the recordset, the cursor type, the lock type, or a combination of things?
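Part of the cost is likely the recordset itself: opening SELECT * FROM table as a keyset recordset touches the whole, ever-growing tracking table just to append one row, so the hit gets worse as the table grows. A sketch of a leaner alternative (hedged, since the real column types aren't shown): build a plain INSERT and run it with Conn.Execute, which is a single round trip with no cursor. The statement the page would send looks roughly like this; the table name and the literal values are placeholders for what the ASP code assembles from Request.ServerVariables.

-- Placeholder INSERT for the page-tracking table ("table" in the snippet above).
-- The literal values stand in for the Request.ServerVariables the page would supply.
INSERT INTO dbo.PageHits
    ([PAGE], [QUERY_STRING], [DATE], [TIME], [PLATFORM], [REFERRER], [USER_IP], [VISITOR_ID])
VALUES
    ('/default.asp', 'id=1', '09/10/2007', '08:41:10',
     'Mozilla/4.0 (compatible)', 'http://example.com/', '10.0.0.1', '12345')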
We have 4 clustered SQL 2000 servers, each containing information specific to its application, related to customer information in a casino player-tracking database. My problem is as follows: on the player-tracking database I can join and return information from the tables there with no problems, and performance across the decently sizable transaction-based table is pretty decent. The problem is I need to filter this query down by the type of machine the customer plays. The child key exists in the playertransaction table; the parent key is on another server. Here is the layout of the tables (unnecessary columns have been truncated for brevity).
CREATE TABLE [dbo].[PlayerSession] (
    [PlayerId] [int] NOT NULL ,
    [Mnum] [int] NOT NULL ,
    [CoinIn] [money] NOT NULL ,
    [CoinOut] [money] NOT NULL ,
    [Games] [int] NOT NULL ,
    [Jackpot] [money] NULL ,
    [Win] [money] NULL ,
    [TheoWin] [money] NOT NULL ,
    [PlayerMod] [tinyint] NOT NULL
) ON [PRIMARY]

-- Player demographics information
CREATE TABLE [dbo].[Player] (
    [PlayerId] [int] NOT NULL ,
    [Status] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [Title] [varchar] (5) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [FirstName] [varchar] (40) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [MiddleName] [varchar] (40) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [LastName] [varchar] (40) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [SSN] [varchar] (16) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
--Machine Information that links to Machine Type table
CREATE TABLE [dbo].[Machine] (
    [MNum] [int] NOT NULL ,
    [MachineTypeId] [smallint] NOT NULL
) ON [PRIMARY]
-- Machine type code table
CREATE TABLE [dbo].[MachineType] (
    [MachineTypeId] [smallint] NOT NULL ,
    [Denom] [int] NOT NULL ,
    [Par] [decimal](6, 2) NOT NULL ,
    [GameType] [varchar] (40) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL
) ON [PRIMARY]
From the server where all the player information resides, I can query the linked database for the machine and machine type information like this.
SELECT m.MNum, mt.MachineTypeId, mt.GameType, mt.DisplayType
FROM ACCTV.Accounting.dbo.Machine m
INNER JOIN ACCTV.Accounting.dbo.MachineType mt ON m.MachineTypeId = mt.MachineTypeId
This is the information I am trying to get out but the query times out on me.
SELECT Player.PlayerId, Player.FirstName, Player.LastName,
       SUM(PlayerSession.CoinIn) AS SumCI, SUM(PlayerSession.CoinOut) AS SumCO,
       SUM(PlayerSession.TheoWin) AS SumTheo, AVG(PlayerSession.TheoWin) AS AvgTheo,
       SUM(PlayerSession.Win) AS SumWin, AVG(PlayerSession.Win) AS AvgWin,
       mt.GameType
FROM Player
INNER JOIN PlayerSession ON Player.PlayerId = PlayerSession.PlayerId
INNER JOIN ACCT.Accounting.dbo.Machine M ON PlayerSession.Mnum = M.MNum
INNER JOIN ACCT.Accounting.dbo.MachineType mt ON M.MachineTypeId = mt.MachineTypeId
GROUP BY Player.PlayerId, Player.FirstName, Player.LastName, mt.GameType
The other option would be some sort of subquery, but I don't know how to return results from the subqueries to the root query to be returned in the resulting recordset. I am not necessarily looking for an answer, more for a direction to go in to find my solution.
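One direction to explore (a sketch, using the object names exactly as they appear in the timing-out query): pull the machine / machine-type lookup down from the linked server into a local temp table first, then run the aggregation entirely locally. Distributed queries often time out because the optimizer has to marry the remote join to the local GROUP BY; materializing the presumably much smaller machine data first avoids that.

-- Step 1: copy the remote machine/machine-type lookup into a local temp table.
SELECT m.MNum, mt.MachineTypeId, mt.GameType
INTO   #MachineInfo
FROM   ACCT.Accounting.dbo.Machine AS m
INNER JOIN ACCT.Accounting.dbo.MachineType AS mt ON m.MachineTypeId = mt.MachineTypeId

-- Step 2: the original aggregation, joined to the local copy instead of the linked server.
SELECT Player.PlayerId, Player.FirstName, Player.LastName,
       SUM(PlayerSession.CoinIn) AS SumCI, SUM(PlayerSession.CoinOut) AS SumCO,
       SUM(PlayerSession.TheoWin) AS SumTheo, AVG(PlayerSession.TheoWin) AS AvgTheo,
       SUM(PlayerSession.Win) AS SumWin, AVG(PlayerSession.Win) AS AvgWin,
       mi.GameType
FROM Player
INNER JOIN PlayerSession ON Player.PlayerId = PlayerSession.PlayerId
INNER JOIN #MachineInfo AS mi ON PlayerSession.Mnum = mi.MNum
GROUP BY Player.PlayerId, Player.FirstName, Player.LastName, mi.GameType

DROP TABLE #MachineInfo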
Is there any way to find the time when the last DDL happened on a table? For example, the time when new column(s) were added to a table or a data type was changed.
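There is no per-table DDL timestamp in SQL Server 2000 (sysobjects only records the creation date), but if the instance is 2005 or later, sys.objects keeps a modify_date that is updated whenever the table is altered. A quick check, assuming 2005+:

-- SQL Server 2005 and later: when was each user table created and last altered?
SELECT name, create_date, modify_date
FROM sys.objects
WHERE type = 'U'
ORDER BY modify_date DESC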
Ok, please pardon my complete ignorance; but I am extremely SQL challenged. I manage the server right now and I am the closest thing to a DBA at the moment with this server.
Here is the problem. I have a job that runs every hour, and until last week it was running without any problems. Now, once a day it hangs, and I end up having to restart SQL to get it to be functional again. I guess I could just kill the spid, but regardless, it fails.
It doesn't happen at the same time each day. I thought it was from the backups doing a SQL backup job, but I eliminated that possibility. I can't find what is causing the problem, because the SQL logs just stop at the time that this happens. It's like some other job/query is running and taking all the sql server resources. But I can never catch it when it's happening and I have no event logs or SQL logs to go on.
Is there any query that I can run to find what happened around the specific time the job failed to run? I just need to find what process did what when this job stopped working.
Again, I apologize if my lack of SQL knowledge is making me ask dumb questions, but I am really lost.
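For the "what ran around that time" part, the Agent's own history in msdb is often enough to start with, and it survives a restart (a sketch against the SQL 2000 msdb schema); catching it live with sp_who2 while the job is hung would then show what it is waiting on.

-- Recent SQL Agent job step executions, newest first.
SELECT j.name      AS job_name,
       h.step_id,
       h.step_name,
       h.run_date,                 -- int, yyyymmdd
       h.run_time,                 -- int, hhmmss
       h.run_status,               -- 0 = failed, 1 = succeeded, 3 = canceled
       h.run_duration,             -- int, hhmmss
       h.message
FROM msdb.dbo.sysjobhistory AS h
INNER JOIN msdb.dbo.sysjobs AS j ON h.job_id = j.job_id
ORDER BY h.run_date DESC, h.run_time DESC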
How do I find the last login date/time for user DML activity on the databases on an instance?
Is there any way we can find out the last login date/time for the databases?
Note:
1. We could check whether a SQL Trace is running and store its output, but this is not a good solution.
2. Turning audit logins on/off is also not a good solution.
3. Using DMVs is also not a good option; if the SQL Server instance is rebooted, the historical values can no longer be seen.
I am joining the thread that Mike initially started. We have an x64 SQL Server with 32GB RAM. On start-up sqlserver.exe starts with 15 to 20MB of RAM. After that the memory gradually increases in steps of about 100MB and eventually reaches 31.8 GB. We haven't seen an out-of-memory situation so far and the memory stays at that level, but once sqlserver.exe reaches it we start facing application instability issues. There is no other application running on this server. At present the database size is 28GB and we have the following in place:
There is a database replication running
A weekly maintenance plan to reindex, backup and other maintenance is running.
There is a log back up job which runs once in 2hrs time.
The tempdb size grows to 3GB max. We didn't see any orphaned temp tables left in tempdb.
We have the required indexes in place on the tables to reduce scan time. Also, the server is configured to use dynamic memory allocation; that is, everything is at the factory settings.
The database encounters on average 200 to 500 connections at a time. One observation is that the memory goes up as soon as the replication starts; this happens on one of the 2 servers.
Please advise what is causing this issue and how to go about it.
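The climb to 31.8 GB is, by itself, normal SQL Server behaviour with dynamic memory: it keeps filling the buffer pool with data pages and only releases memory under pressure. If other processes on the box need headroom when it reaches that point, the usual lever is to cap max server memory rather than leave it at the default. A sketch (the 28000 MB value is only an assumption, not a sizing recommendation for this workload):

-- Cap the SQL Server buffer pool so the OS keeps some headroom.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory (MB)', 28000   -- assumed value; size it to leave the OS a few GB
RECONFIGURE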
What is the best way to produce a user activity log? Using triggers (insert, delete, update) is a rather heavy operation(?). Has anyone used SQL Profiler for that? The log should contain: who, what, when, workstation, table, changed fields, and some key data values.
I have registered a server onto my client machine successfully, but I am not finding the Current Activity item in the Management folder in EM. How do I see the current activity on the server? Do I need to be SA for that server? My machine has NT Workstation with SQL Server 7.0 on it. Anyone able to help?
If I run sp_who2, the following message is returned. (It is also returned randomly while using functions in EM such as Backup/Restore.)
Msg 268, Level 16, State 1 You cannot run SELECT INTO in this database. The DBO would have to run sp_dboption to enable this option.
What is it??? It's buggin me......
Dano
p.s. I remember a similar problem in the past; it had to do with turning 'truncate log on checkpoint' on in one of the system databases (msdb/master/tempdb or something like that). I turned off all the Trunc. and Select/Bulk options on these...
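That p.s. is very likely the cause: sp_who2 (and several of the EM screens) builds its results with SELECT INTO in tempdb, so turning the Select Into/Bulkcopy option off there produces exactly this Msg 268. If that is what happened, re-enabling the option should clear it (a sketch using the option name the error message refers to):

-- Re-enable SELECT INTO where the tools need it; repeat for msdb if it was turned off there too.
EXEC sp_dboption 'tempdb', 'select into/bulkcopy', 'true'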
I have a requirement to create an activity log that records all changes that take place in my tables as they happen. All my inserts, updates & deletes take place inside stored procs. A Windows .NET application calls the stored procs and passes the data set to them to execute against a SQL Server 2000 database.
When an update statement is run inside my stored proc, the dataset only passes me the new values, but I will need the old value from the database before the update takes place to report in my activity log. What is the best way to do it?
My activity log should report a description like this:
"Name Changed from 'Robert Johnson'(old value) to 'Bob Johnson'(New Value)"
Though I can do a select from the tables to get the old values and match them against the new values to find the differences before running the update statements, it is too much work, as I have well over 100 stored procs in my application and I would have to check every single field value in a table. If only one column changed in a table containing 20 columns, I would still have to check all 20 columns before determining which fields changed.
Can anyone suggest a better solution for reporting the old and new values for all the updates that take place in a database through stored procedures?
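One alternative that doesn't require touching the 100+ stored procs (a sketch with made-up table, column, and audit-log names): an UPDATE trigger sees both the before image (the deleted pseudo-table) and the after image (inserted), so it can write the "changed from X to Y" rows itself, and only for the columns that actually changed.

-- Sketch: log name changes on a hypothetical dbo.Customer table into a hypothetical audit table.
CREATE TRIGGER trg_Customer_Audit ON dbo.Customer
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON
    INSERT INTO dbo.ActivityLog (TableName, KeyValue, Description, ChangedOn)
    SELECT 'Customer',
           d.CustomerId,
           'Name changed from ''' + d.Name + ''' to ''' + i.Name + '''',
           GETDATE()
    FROM deleted AS d
    INNER JOIN inserted AS i ON d.CustomerId = i.CustomerId
    WHERE ISNULL(d.Name, '') <> ISNULL(i.Name, '')   -- only rows where the column really changed
END

A full version would repeat the comparison per audited column (or inspect COLUMNS_UPDATED()), but the .NET application and the stored procs stay untouched.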
Hi experts, I just want to know how I can kill all the processes of a database if the database has more than 100 connections. This is for the purpose of restoring the database.
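A hedged sketch of the usual approach: rather than killing 100+ spids one at a time, put the database into single-user mode and let SQL Server roll back every open connection, restore, then switch it back (MyDatabase and the backup path are placeholders):

-- Disconnect everyone so the RESTORE can take exclusive access.
ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE

RESTORE DATABASE MyDatabase FROM DISK = 'D:\Backups\MyDatabase.bak' WITH REPLACE

ALTER DATABASE MyDatabase SET MULTI_USER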
I usually run this command in SQL 2000, which tells me what is running along with all the SQL statements.
Is there an alternative command to run in SQL 2005, rather than using the Reports? I find it easier to execute an sp; I called this one sp_now.
set nocount on
declare @handle binary(20), @spid smallint, @rowcnt smallint, @output varchar(500)

declare ActiveSpids CURSOR FOR
    select sql_handle, spid
    from sysprocesses
    where sql_handle <> 0x0000000000000000000000000000000000000000
    --and spid <> @@SPID
    order by cpu desc

OPEN ActiveSpids
FETCH NEXT FROM ActiveSpids INTO @handle, @spid
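In SQL 2005 the same information is exposed through the DMVs, and the sql_handle can be resolved inline with CROSS APPLY instead of looping a cursor. A sketch of the equivalent query:

-- SQL Server 2005 replacement for the sysprocesses cursor above: current requests plus their SQL text.
SELECT r.session_id,
       r.status,
       r.cpu_time,
       r.total_elapsed_time,
       DB_NAME(r.database_id) AS database_name,
       t.text                 AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID
ORDER BY r.cpu_time DESC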
I am new at this site and new at using SQL also.... I was wondering if anyone could help me out with my prob...
I have set up a SQL 2000 server and made some accounts which my friends are using to log onto my server. Is there a way I could keep a check on who is doing what, or maintain a user log?
I have an SSIS package which copies data from a table to a flat file. The connection string of this file is variable, and the file is reused if it exists and created if it does not.
When I run the SSIS manually from my microsoft visual studio it works properly. However, when I run this SSIS from the Job Activity Monitor, I get the following error:
Message
Executed as user: REDCAMadminsql2k5. Microsoft (R) SQL Server Execute Package Utility Version 9.00.3042.00 for 64-bit. Copyright (C) Microsoft Corp 1984-2005. All rights reserved.
Started: 8:41:10 AM
Error: 2007-09-10 08:48:04.99 Code: 0xC020200E Source: Crear Historico Historico [1] Description: Cannot open the datafile "\srvnfileHISTORICOSCAJEROSOFI3210C01OFI3210C01_2007-7X.txt". End Error
Error: 2007-09-10 08:48:04.99 Code: 0xC004701A Source: Crear Historico DTS.Pipeline Description: component "Historico" (1) failed the pre-execute phase and returned error code 0xC020200E. End Error
DTExec: The package execution returned DTSER_FAILURE (1).
Started: 8:41:10 AM Finished: 8:48:53 AM Elapsed: 462.234 seconds.
The package execution failed. The step failed.
A developer is running SQL transactions from the application and needs to know how they are executed on the database side. I know I have to run a Profiler trace, but I'm not sure exactly which events to trace. Can you please guide me?
Does anybody know how to trace user activity after they log off from SQL Server 6.5? I know SQL Trace is able to trace user activity in real time.
Trying to view Current Activity in EM, however I get error 1222, "Lock request time out period exceeded"; this also occurs when trying to view the tempdb database.
Is this an EM GUI error or a locking issue, and how do you resolve it?
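It is usually a genuine locking issue rather than an EM bug: the Current Activity window gives up with error 1222 when something is holding locks on the objects it needs to read. You can normally still see what is blocking from Query Analyzer, which does not hit the same GUI timeout:

-- The same information the Current Activity window shows, from Query Analyzer.
EXEC sp_who2     -- sessions, blocking spid (BlkBy), database, command
EXEC sp_lock     -- which spid holds which locks, by database and object id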
On sql server 6.5 running on a win/nt platform my Current Activity window comes up blank when I click on it. Everything else seems to be working fine. This is a new problem, it just stopped reporting any info when I click on it. Any suggestions would be greatly appreciated.
Hi there! In MS Enterprise Manager, the Current Activity item under the server menu does not display anything on one of the servers in EM, but on the other servers it is okay.