I have re-initialized some subscriptions. After the snapshot generated successfully, the synchronization view shows 'The process is running and is waiting for a response from the server.' and I found the replication process stuck on an ASYNC_NETWORK_IO wait. The job has been running for more than 8 hours. Any ideas on how to improve it? Thanks in advance.
Recently we observed a problem. We run a stored procedure through our C# code. Three machines access the server and update or insert into the required tables. When there is no data on the server (on a first-time install of our application the database is usually clean), the stored procedure works fine and takes around 10 to 15 seconds to execute. The next time we execute it, the duration goes up to minutes, around 15 minutes. The time after that it runs for hours, around 4 hours, even if only 4 or 5 records need to be updated. Initially we thought it was because of the size of the data and tried retuning the indexes, but that did not solve it. What we now observe is that even with a small number of records on the server, the execution does not finish for hours. We are now running just the stored procedure in SQL Server Management Studio to measure the time, and that also executes for hours. When we looked at Activity Monitor, the process goes into a suspended state with wait type ASYNC_NETWORK_IO.
When we comment out one of the queries, it works fine. I am not sure whether this has something to do with the query itself; if it did, I would expect it to be slow every time.
Does the query make any sense, or is there a better way to write it? (See the sketch after this post.)
'UPDATE [server].[dbo].[DocumentMetadata]
 SET DocumentInfoID = b.DocumentInfoID,
     [Name] = b.[Name],
     MetadataType = b.MetadataType,
     [Value] = b.[Value],
     ValueType = b.ValueType
 FROM [server].[dbo].[DocumentMetadata] a WITH (UPDLOCK)
 INNER JOIN (SELECT c.DocumentInfoID, c.[Name], c.MetadataType, c.[Value], c.ValueType
             FROM MACHINENAME.[Client].[dbo].[DocumentMetadata] c
             INNER JOIN MACHINENAME.[Client].dbo.DocumentInfo DINF ON c.DocumentInfoID = DINF.DocumentInfoID
             INNER JOIN MACHINENAME.[Client].dbo.Path d ON DINF.NativeFileID = d.PathID
             INNER JOIN MACHINENAME.[Client].dbo.ActiveDataSource ADS ON d.DataSourceID = ADS.DataSourceID
             WHERE ADS.ProjectID = ''' + @ProjID + ''') b
     ON a.DocumentInfoID = b.DocumentInfoID AND a.[Name] = b.[Name]'
'INSERT INTO [server].[dbo].[DocumentMetadata] (DocumentInfoID, [Name], MetadataType, [Value], ValueType)
 SELECT c.DocumentInfoID, c.[Name], c.MetadataType, c.[Value], c.ValueType
 FROM MACHINENAME.[Client].[dbo].[DocumentMetadata] c
 INNER JOIN MACHINENAME.[Client].dbo.DocumentInfo DINF ON c.DocumentInfoID = DINF.DocumentInfoID
 INNER JOIN MACHINENAME.[Client].dbo.Path d ON DINF.NativeFileID = d.PathID
 INNER JOIN MYCLI.[Client].dbo.ActiveDataSource ADS ON d.DataSourceID = ADS.DataSourceID
 WHERE ADS.ProjectID = ''' + @ProjID + '''
   AND LTRIM(RTRIM(c.DocumentInfoID)) + LTRIM(RTRIM(c.[Name])) NOT IN
       (SELECT LTRIM(RTRIM(DocumentInfoID)) + LTRIM(RTRIM([Name])) FROM [server].[dbo].[DocumentMetadata])'
We have been fighting this for so many days now. Can anybody help?
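If rewriting is an option, one direction worth trying is to replace the concatenated NOT IN in the INSERT with a NOT EXISTS on the two key columns, which avoids trimming and concatenating every remote row before comparing. This is only a minimal sketch under the assumption that DocumentInfoID plus [Name] uniquely identify a row (which is what the original comparison implies); it keeps the same dynamic-SQL form and the same server and object names as the post:

'INSERT INTO [server].[dbo].[DocumentMetadata] (DocumentInfoID, [Name], MetadataType, [Value], ValueType)
 SELECT c.DocumentInfoID, c.[Name], c.MetadataType, c.[Value], c.ValueType
 FROM MACHINENAME.[Client].[dbo].[DocumentMetadata] c
 INNER JOIN MACHINENAME.[Client].dbo.DocumentInfo DINF ON c.DocumentInfoID = DINF.DocumentInfoID
 INNER JOIN MACHINENAME.[Client].dbo.Path d ON DINF.NativeFileID = d.PathID
 INNER JOIN MYCLI.[Client].dbo.ActiveDataSource ADS ON d.DataSourceID = ADS.DataSourceID
 WHERE ADS.ProjectID = ''' + @ProjID + '''
   -- only insert rows whose key pair is not already present locally
   AND NOT EXISTS (SELECT 1
                   FROM [server].[dbo].[DocumentMetadata] t
                   WHERE t.DocumentInfoID = c.DocumentInfoID
                     AND t.[Name] = c.[Name])'

Because all of the source tables sit behind a linked server, it may also help to pull the remote rows for the project into a local staging table first and run the UPDATE and INSERT against that; distributed queries that mix remote and local tables are a common source of long ASYNC_NETWORK_IO waits.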
I am running a stored procedure which selects from a table and returns approximately 800,000 records. When called from any client machine it takes a long time to return the result (90 sec), and it waits on ASYNC_NETWORK_IO while pushing the result to the client. If the SELECT statement is used with the TOP operator to return fewer records, it executes faster. When called from the server itself, the stored procedure returns all records in 13 seconds. On another machine with identical hardware and configuration this problem does not occur. Can anyone help with how to improve this ASYNC_NETWORK_IO issue?
Environment: SQL Server 2005 SP1 64-bit Standard on an Active/Passive cluster, Windows Server 2003 Enterprise.
We are having a problem with an SSIS package. The package gets stuck when it runs a data flow component that reads data from 5 large tables, performs a union on the data, does a conversion on one field, and then writes the data to another table. This all happens in the same database, and the SSIS package runs on the same server as the database.
When I look at the waiting tasks on the SQL Server I always see the following type of scenario; it is always the ASYNC_NETWORK_IO that is causing the others to wait.
SPID Wait type duration Waiting Address Blocking Address
In some cases the duration column goes up to several hours until we finally just kill the package. At the same time, we also see that session 118 is consuming very large amounts of memory (in sys.dm_exec_query_memory_grants). However, when we look at the dtexec process that is running the package on the server, it only has a working set of about 100 MB. We've tried rewriting the data flow task as a simple stored procedure and that goes through smoothly.
Can anyone shed some light on what is going on and how to avoid this type of locking situation (other than just using plain old T-SQL)?
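For reference, a minimal sketch of the kind of query used to see those memory grants while the package is stuck (the column selection here is illustrative, not what the poster ran):

-- which sessions have outstanding memory grants, and how large they are
SELECT session_id,
       request_time,
       grant_time,            -- NULL means the request is still waiting for its grant
       requested_memory_kb,
       granted_memory_kb,
       used_memory_kb,
       query_cost
FROM sys.dm_exec_query_memory_grants
ORDER BY requested_memory_kb DESC;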
I am running into problems while running a large procedure, and I think it may have something to do with a PAGEIOLATCH_SH wait problem. My server, whose sole purpose is to run this one procedure, is doing plenty of disk I/O, and the CPUs are bouncing around, so I assume it's working. But when I look at its process info, it seems to be sleeping a lot of the time on PAGEIOLATCH_SH. No other users are in the DB, so I'm quite confused. I can't find much information on this anywhere, so any insight would be very much appreciated.
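PAGEIOLATCH_SH means the session is waiting for a data page to be read in from disk, so the next step is usually to see which files the read stalls are coming from. A minimal sketch, assuming SQL Server 2005 or later (this DMV does not exist in 2000):

-- read stalls per database file, worst first
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       vfs.num_of_reads,
       vfs.io_stall_read_ms,
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms / vfs.num_of_reads END AS avg_read_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY vfs.io_stall_read_ms DESC;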
I'm new to SQL Server 2005 and I'm trying to do what Informatica (PowerCenter - ETL) does.
I have created a workflow and it is scheduled to run every night at 1:00 AM. The process loads a flat file (CRV.data) into the database from a shared location. The flat file is transferred from a third party, and once the file transfer is complete it creates an indicator file (0 bytes, e.g. a CRV.DONE file), which indicates that the CRV.data transfer is complete.
In my workflow I wait for the CRV.DONE indicator file; once it is available I start loading CRV.data, and once the load is completed I delete the CRV.DONE file and get ready for the next day's load.
Please let me know if there is any way in SQL Server 2005 to achieve this. Thanks.
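One low-tech option is an Agent job (or a WHILE loop at the start of the load procedure) that polls for the indicator file with the undocumented xp_fileexist extended procedure. A minimal sketch; the share path and polling interval below are made up for illustration:

-- poll for the indicator file once a minute, then fall through to the load
DECLARE @done int;
CREATE TABLE #f (file_exists int, is_directory int, parent_exists int);
WHILE 1 = 1
BEGIN
    DELETE #f;
    INSERT #f EXEC master.dbo.xp_fileexist N'\\fileserver\drop\CRV.DONE';  -- hypothetical path
    IF EXISTS (SELECT 1 FROM #f WHERE file_exists = 1) BREAK;
    WAITFOR DELAY '00:01:00';
END
DROP TABLE #f;
-- CRV.DONE is present: load CRV.data (e.g. BULK INSERT), then remove CRV.DONE

An SSIS package with a For Loop container doing the same check, or the WMI Event Watcher task, would be the more conventional 2005-era answer, but the polling loop keeps everything inside T-SQL.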
Hi. We are migrating a mainframe Datacom database to SQL Server. One of our client-server applications already uses SQL Server. This application uses a middleware product to query and update the Datacom database being migrated. We are considering using Service Broker to replace the middleware.
In many cases the client does not need a response, provided the message is queued and will eventually get delivered. However, in some cases the client would like to wait for the message to be processed before proceeding. Is there an easy way to both submit and optionally wait for a response, with data, in a single stored procedure? If the client does not want to continue waiting, is there a way to use a procedure to check for the returned message later?
We have not used Service Broker before and are asking for a "sanity" check before proceeding. We do not want to tightly couple the two databases at this time.
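The submit-and-optionally-wait pattern maps fairly naturally onto BEGIN DIALOG / SEND followed by a RECEIVE with a timeout on the initiator's queue. A minimal sketch; every service, contract, message type, and queue name below is hypothetical and would need to be created first:

DECLARE @handle uniqueidentifier, @reply xml;

-- submit the request; this part always returns immediately
BEGIN DIALOG CONVERSATION @handle
    FROM SERVICE [//Datacom/Requester]
    TO SERVICE '//Datacom/Processor'
    ON CONTRACT [//Datacom/RequestContract]
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @handle
    MESSAGE TYPE [//Datacom/Request] (N'<request>payload</request>');

-- optionally wait up to 5 seconds for the reply; @reply stays NULL if nothing arrived
WAITFOR (
    RECEIVE TOP (1) @reply = CAST(message_body AS xml)
    FROM RequesterQueue
    WHERE conversation_handle = @handle
), TIMEOUT 5000;

SELECT @handle AS conversation_handle, @reply AS reply;

A caller that chooses not to wait can hang on to the conversation handle and run the same RECEIVE from a second procedure later, which covers the "check for the returned message later" case.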
I have installed the Performance Dashboard on 2 different servers. The first server has a user session CPU time of 71% and a wait time of 28%; the other server has a CPU time of 20% and a wait time of 79%. Am I right in understanding, from what is described in "SQL Server Waits and Queues", that I have some type of wait problem on my second server?
Then I tried to run this SELECT:
SELECT '%signal waits'   = CAST(100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms) AS numeric(20,2)),
       '%resource waits' = CAST(100.0 * SUM(wait_time_ms - signal_wait_time_ms) / SUM(wait_time_ms) AS numeric(20,2))
FROM sys.dm_os_wait_stats
First server:  %signal waits = 0.07,  %resource waits = 99.93
Second server: %signal waits = 0.12,  %resource waits = 99.88
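Signal waits that low suggest CPU pressure is not the issue, so the next question is which resource waits dominate. A minimal sketch of the usual follow-up query; the list of benign waits filtered out here is illustrative, not exhaustive:

-- top waits by accumulated wait time since the last restart (or stats clear)
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'LAZYWRITER_SLEEP', N'SLEEP_TASK', N'SLEEP_BPOOL_FLUSH',
                        N'SQLTRACE_BUFFER_FLUSH', N'WAITFOR', N'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;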
I'm doing an update on a table with about 113 million rows. The update statement is fairly simple: UPDATE tab SET col = NULL WHERE col IS NOT NULL. The col column is mostly NULL.
sysprocesses shows three rows for this statement: one CXPACKET (it's a dual-processor SQL 2000 box with SP3 installed) and two PAGEIOLATCH_SH (waitresource is filled in). My guess would be that the WHERE clause is executed in a separate process, blocking the update.
I changed the statement to UPDATE [...] SET col = NULL; sysprocesses then shows one row with PAGEIOLATCH_SH. It executes forever.
I checked other processes, including those outside SQL Server, but none are using the DB, let alone accessing the table involved. I even restarted SQL Server to be sure no dead process was blocking the update. It didn't help.
So I added a search condition to the WHERE clause, involving a clustered index, in order to reduce the row count. The execution plan shows a 97% hit on the clustered index, but sysprocesses shows the three rows again...
So far the Profiler hasn't helped me out either: there's an SP:CacheInsert on the update statement... then nothing.
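Updating 113 million rows in a single statement means one enormous transaction, and the PAGEIOLATCH_SH waits are simply the scan reading pages in from disk. One workaround worth trying on SQL 2000 is to batch the update so each transaction stays small; a minimal sketch, with an arbitrary batch size:

-- process the update in 10,000-row chunks; SET ROWCOUNT limits each UPDATE on SQL 2000
SET ROWCOUNT 10000;
WHILE 1 = 1
BEGIN
    UPDATE tab SET col = NULL WHERE col IS NOT NULL;
    IF @@ROWCOUNT = 0 BREAK;   -- nothing left to update
END
SET ROWCOUNT 0;                -- always reset afterwards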
I have an ASP.NET web application that hangs on a single database UPDATE command for 5+ minutes. I can see this occur in SQL Profiler. It is a one-row UPDATE statement on a small table (~600 rows). There are no JOINs or subqueries, and there are no other users on the system. During these 5+ minutes, I can see the job in Enterprise Manager with a wait type of NETWORKIO. Since both IIS and SQL Server are running on the same machine, the network shouldn't be an issue. Any ideas?
I'm writing a small VBScript to back up a DB and some related files, so I use WScript.Shell to call OSQL to run a SQL BACKUP command, and after it finishes I XCOPY the resulting file plus some other related files. The problem is that OSQL ends its execution as soon as the BACKUP command is sent to SQL Server, not when the backup itself ends. Does anyone know how to synchronize the two, i.e. how to wait, inside OSQL, for the end of the BACKUP execution? TIA, Luigi
I set up a SQL Agent alert to send me an email when the Average Latch Wait Time is greater than 300 ms. Now I receive an email every 15 seconds stating that the current value is 3916 ms, and that value never changes between emails. However, Perfmon shows nothing at all (it shows zero).
I also have a Buffer cache hit ratio of 2848.00.
These numbers occur when there is NOBODY on the DB at all; it is just sitting there. When I reboot the server, the emails start again as soon as SQL starts.
Server: Intel Xeon Quad Core 2.66 GHz; RAM: 4 GB (with /3GB in boot.ini); RAID 1: OS; RAID 1: data (DB and logs); CPU utilization: 0-1%; RAM utilization: 527 MB; OS: Windows Server 2003 R2 with SP2; SQL: 2005 Standard with SP2
How can I determine whether the Average Latch Wait Time really is 3916 ms?
I executed 'Select * from sysprocesses where SPID>50 and waittime>0'
We have a few SSIS jobs that we currently kick off manually after we are sure that certain AS400 jobs have run. We want to completely automate this process so that we don't have to babysit it. What is the most efficient way to do this? In the past (on SQL Server 7, no less) I've seen the AS400 job set a flag to 'Y' in an AS400 file, FTP it down to a flat file, and then a SQL job run every five minutes checking the flag; when it was 'Y', the SQL job would run. We do not have the option of using FTP here. Any suggestions would be appreciated! After the job runs, we'd also like it to kick off a report.
So, I'm fairly new to SQL, and I'm working with a SQL2k5 database with pre-made packages and what-not. This database was set up before I started this job, and now I'm trying to improve part of the processing in SQL. So far so good, but I can't figure a couple of things out.
The main problem: when I start a SQL command to launch a DTS package from a .sql file, how can I make it wait for the package to complete or fail before moving on to the next part of the .sql script? I hope it's a simple question; I've just taught myself enough SQL to get by in a couple of weeks.
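One way to get that behaviour from inside a .sql script is to shell out to dtsrun with xp_cmdshell, because xp_cmdshell does not return until the command it launched has finished. A minimal sketch, assuming xp_cmdshell is enabled; the server and package names are placeholders:

-- run the DTS package and block until it completes or fails
DECLARE @rc int;
EXEC @rc = master.dbo.xp_cmdshell 'dtsrun /S "(local)" /E /N "MyDtsPackage"';
IF @rc <> 0
    RAISERROR('DTS package reported failure', 16, 1);
-- the rest of the .sql script only runs after the package has finished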
I am trying to figure out how I can run a T-SQL statement and have it start again 60 seconds after it completes, without me having to push the button. I just need it to loop over and over until my data is deleted. I have to do it this way so my site will still allow customers to log in; I need the break so they can. Any help would be great.
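A plain WHILE loop with WAITFOR DELAY does this in a single batch. A minimal sketch, assuming SQL Server 2005 or later for DELETE TOP (on 2000, SET ROWCOUNT would be used instead); the table, filter, and batch size are made up:

-- delete in small batches, pausing 60 seconds between batches so other sessions get in
WHILE 1 = 1
BEGIN
    DELETE TOP (5000) FROM dbo.MyTable WHERE PurgeFlag = 1;
    IF @@ROWCOUNT = 0 BREAK;      -- done: nothing left to delete
    WAITFOR DELAY '00:01:00';     -- 60-second breather for customer logins
END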
I will start off with the default warning: I am a beginner. That said, I have an SSIS process that calls an external executable to transform a data file through a homegrown C program. (This will eventually be converted, but for the moment it needs to remain.) The end of the run creates a *.done file. How do I use SSIS tasks to pause/wait, checking periodically, for the existence of this file before continuing with the processing tasks? I apologize if this is easy, but I am stumped.
Thanks in advance, Roger
Info: SQL 2005 Microsoft SQL Server Integration Services Designer Version 9.00.3042.00
When I run some of the queries, in Current Activity the wait type says PAGEIOLATCH_SH. What does it mean? Is there a source where I can see all of the wait types?
The CXPACKET wait type currently makes up 63% of all the wait types, which is causing latency. I need to identify the specific workloads responsible for the waits so I can optimise them or apply MAXDOP. I already know how to retrieve the top IO-, CPU-, and memory-consuming queries, but how do I identify the statements and order them by wait time?
Can someone point me in the direction of a command or DMV, or will the top-CPU list be adequate?
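For statements that are waiting right now, the request DMVs can be joined to the statement text and ordered by the current wait time. A minimal sketch (SQL 2005 or later); it only sees in-flight requests, so cumulative per-query wait analysis would still need a trace or periodic snapshots of the wait stats:

-- currently executing user requests, longest current wait first
SELECT r.session_id,
       r.wait_type,
       r.wait_time      AS wait_time_ms,
       r.status,
       r.total_elapsed_time,
       t.text           AS statement_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50            -- skip system sessions
ORDER BY r.wait_time DESC;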
It seems strange to me that the UNION ALL component waits for at least one row from each input buffer before it puts anything into the output buffer (that's the behaviour I have observed, anyway).
I would like to know what a particular session is waiting on (CPU, I/O, memory, etc.).
To do that, I am trying to figure out how to use Extended Events. I would prefer to avoid files being written to the database server, which is why I tried the histogram target.
However, my attempt does not produce useful output:
-- check current event sessions (FROM clause assumed: catalog of defined sessions vs. DMV of running sessions)
SELECT a.name,
       CASE WHEN b.name IS NOT NULL THEN 'Started' ELSE 'Stopped' END AS current_state
FROM sys.server_event_sessions AS a
LEFT JOIN sys.dm_xe_sessions AS b ON a.name = b.name;
The intention was to get a Pareto chart of the top wait events in the session. I would also like to see the amount of CPU used. Tuning effort could then be prioritized based on this.
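A minimal sketch of the kind of session that buckets completed waits per wait type for a single SPID using the histogram target (SQL Server 2012 or later syntax; on 2008 the equivalent target was called asynchronous_bucketizer). The session name and the session_id filter are placeholders:

-- bucket completed waits by wait_type for session 57
CREATE EVENT SESSION [session_waits] ON SERVER
ADD EVENT sqlos.wait_info (
    WHERE (sqlserver.session_id = 57
       AND duration > 0))                     -- ignore zero-length waits
ADD TARGET package0.histogram (
    SET filtering_event_name = N'sqlos.wait_info',
        source               = N'wait_type',
        source_type          = 0);            -- 0 = event column, 1 = action
GO
ALTER EVENT SESSION [session_waits] ON SERVER STATE = START;
GO
-- read the buckets back; the target data is XML held in memory, no files involved
SELECT CAST(t.target_data AS xml) AS histogram_xml
FROM sys.dm_xe_sessions AS s
JOIN sys.dm_xe_session_targets AS t
    ON s.address = t.event_session_address
WHERE s.name = N'session_waits';

CPU consumption is not a wait, so it will not show up here; for that, sys.dm_exec_requests (cpu_time) or sys.dm_exec_sessions is the simpler source.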
OK, I know this topic is all over the place, and yes, I did search for it; however, I am confused about tables with an image data type and about moving their text/image data to another filegroup.
Here is what I have:
I have a table storing scanned documents that has become very large. I want to move the table to another filegroup. The table is created like this:
USE [PD51_Data]
GO
/****** Object: Table [dbo].[SCANNEDDOCUMENTS] Script Date: 05/13/2008 14:52:40 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[SCANNEDDOCUMENTS](
    [DocID] [int] IDENTITY(1,1) NOT NULL,
    [CaseID] [int] NOT NULL,
    [DocName] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [Doc] [image] NOT NULL,
    [DocLocation] [varchar](255) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [DocNotes] [text] COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [TopicID] [int] NULL,
    [ScannedDocumentsCheckSum] [varchar](128) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    PRIMARY KEY CLUSTERED ([DocID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
ALTER TABLE [dbo].[SCANNEDDOCUMENTS] WITH NOCHECK
    ADD CONSTRAINT [ISCANNEDDOCUMENTS2] FOREIGN KEY([TopicID])
    REFERENCES [dbo].[TOPICS] ([TopicID])
GO
ALTER TABLE [dbo].[SCANNEDDOCUMENTS] CHECK CONSTRAINT [ISCANNEDDOCUMENTS2]
On a test DB I moved the clustered and nonclustered indexes to a secondary filegroup with no problem, but the table still shows as being stored in the primary filegroup. I read an article saying you have to create a new table in the secondary filegroup in order to move the image and text data. Has anyone come across this?
Do I need to drop the clustered index and FK to move to a secondary filegroup?
Or
Do I create a new table in the secondary filegroup and then add the clustered index and constraints?
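Rebuilding the clustered index on the new filegroup only moves the in-row data; the image/text pages stay wherever TEXTIMAGE_ON pointed when the table was created, and TEXTIMAGE_ON cannot be changed in place. So the usual route is indeed a new table created with TEXTIMAGE_ON on the target filegroup, copy the rows, then swap names. A minimal sketch, assuming a filegroup called [SECONDARY] already exists and a maintenance window allows the copy:

-- new table with both in-row and text/image data on the secondary filegroup
CREATE TABLE [dbo].[SCANNEDDOCUMENTS_new](
    [DocID] [int] IDENTITY(1,1) NOT NULL,
    [CaseID] [int] NOT NULL,
    [DocName] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [Doc] [image] NOT NULL,
    [DocLocation] [varchar](255) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [DocNotes] [text] COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [TopicID] [int] NULL,
    [ScannedDocumentsCheckSum] [varchar](128) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    PRIMARY KEY CLUSTERED ([DocID] ASC) ON [SECONDARY]
) ON [SECONDARY] TEXTIMAGE_ON [SECONDARY];

-- copy the data, preserving identity values
SET IDENTITY_INSERT [dbo].[SCANNEDDOCUMENTS_new] ON;
INSERT INTO [dbo].[SCANNEDDOCUMENTS_new]
       (DocID, CaseID, DocName, Doc, DocLocation, DocNotes, TopicID, ScannedDocumentsCheckSum)
SELECT DocID, CaseID, DocName, Doc, DocLocation, DocNotes, TopicID, ScannedDocumentsCheckSum
FROM [dbo].[SCANNEDDOCUMENTS];
SET IDENTITY_INSERT [dbo].[SCANNEDDOCUMENTS_new] OFF;

-- swap names, then move the FK from the old table to the new one
EXEC sp_rename 'dbo.SCANNEDDOCUMENTS', 'SCANNEDDOCUMENTS_old';
EXEC sp_rename 'dbo.SCANNEDDOCUMENTS_new', 'SCANNEDDOCUMENTS';
ALTER TABLE [dbo].[SCANNEDDOCUMENTS_old] DROP CONSTRAINT [ISCANNEDDOCUMENTS2];
ALTER TABLE [dbo].[SCANNEDDOCUMENTS] WITH NOCHECK
    ADD CONSTRAINT [ISCANNEDDOCUMENTS2] FOREIGN KEY([TopicID]) REFERENCES [dbo].[TOPICS] ([TopicID]);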
For one of our SQL Server 2005 Enterprise Edition 64-bit SP4 instances, which has transactional replication set up and is used for heavy reporting, I was trying to work out why slow-running queries run, get suspended, and are very often seen waiting. So I tried to analyse the wait stats and came up with the stats below, where ASYNC_NETWORK_IO dominated over a two-week collection period.
I'm hoping someone can clarify what I am seeing below:
I have a SQL Server whose second-highest wait type, according to the SQL Server 2005 SP2 dashboard, is CLR_AUTO_EVENT and CLR_MANUAL_EVENT. This server doesn't have CLR integration enabled in Surface Area Configuration, and we do not have any CLR applications. I've checked this dashboard many times and CLR never appeared before, and nothing has changed on the server in terms of new databases; we don't host any applications on the server either. I ran a short Profiler trace and didn't see any CLR assembly loads.
Here's my dilemma: I want to run a stored procedure that starts another stored procedure running, but does not wait for that stored procedure to complete execution.
The first stored procedure should return immediately and leave the other procedure to complete running in the background. Is there any way to do this?
Is there a way to execute a stored procedure and have the caller move on without waiting for a response from it? I am trying to create a button on a web page that will execute a stored procedure, but the procedure takes too long to run and my page times out. Instead I would like the button to start the procedure while the web page looks at a table of data; when the table of data is empty, I will know the stored procedure is complete. Is this possible?
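The simplest fire-and-forget mechanism that ships with SQL Server is a SQL Agent job: sp_start_job returns as soon as the job has been asked to start, not when it finishes. A minimal sketch; the job wrapping the long-running procedure is assumed to exist, and its name is a placeholder:

-- kick off the long-running work asynchronously and return immediately
EXEC msdb.dbo.sp_start_job @job_name = N'RunLongProcedure';

Service Broker activation is the other common pattern (queue a message and let an activated procedure do the work); it avoids the Agent dependency at the cost of more setup.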
Hi. I have a question. I work in BI (with SQL Server 2000), and this year (2008) my plan is to migrate to 2005. However, when consulting experts in Microsoft technology, they suggested waiting for 2008, saying SQL Server 2005 has many problems and SQL Server 2008 was a complete rewrite.
The question is: what problems does SQL Server 2005 have?
Hello all, I've got a query which suddenly became very slow. It now takes about 10 seconds instead of 2 seconds. I've got two identical DBs (one for test and the other production), and the query is slow only in production. When running this query in both DBs and looking at the execution plan, statistics, etc., the only difference is the cumulative wait time on server replies. In the test DB I get the value 2200, and in the production DB 1.22344e+009. What does this mean concretely? What do I have to do to solve this problem? TIA. Yannick. PS: I'm using SS2000 SP3 on NT 4.0.