Does anyone know how I can keep an SSIS package used for real-time reporting alive no matter how many errors it gets? So, for instance, if the server I'm streaming to is shut down for maintenance and the connection dies, the package needs to just keep retrying. In other words, the maximum error count is infinite. I don't just want to set the maximum error count high; I want it out of the picture altogether.
What is actually happening, in terms of the dialog_timer column in sys.conversation_endpoints, when a conversation timer is being used? For example, I was using the following query to tell me which conversation timers are currently running:
SELECT * FROM sys.conversation_endpoints
WHERE dialog_timer > '1900-01-01 00:00:00.000'
However, I have noticed that periodically the dialog_timer value goes back to '1900-01-01 00:00:00.000', so the check fails and starts up another timer. Then the original timer magically appears again, working just fine. So I changed to this query:
SELECT * FROM sys.conversation_endpoints
WHERE far_service
IN
(
'TimerService1',
'TimerService2',
'TimerService3'
)
But this seems like the long way around, and it doesn't really indicate whether the timer is running or not, just that it's present in the sys.conversation_endpoints catalog view. What is the proper way to see if timers are running? If one dies for some reason, I need to restart it.
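For context, each timer gets started more or less like this (the service name, contract, and timeout below are placeholders, not my real ones):

DECLARE @dialog uniqueidentifier;

BEGIN DIALOG CONVERSATION @dialog
    FROM SERVICE [TimerService1]
    TO SERVICE 'TimerService1'
    ON CONTRACT [DEFAULT]
    WITH ENCRYPTION = OFF;

-- Queues a DialogTimer message on this conversation after 60 seconds;
-- until it fires, dialog_timer in sys.conversation_endpoints holds the expiry time.
BEGIN CONVERSATION TIMER (@dialog) TIMEOUT = 60;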
Currently, our application issues a 'select count(*) from systypes' command to check whether the database connection is live (i.e., has not been reaped by the firewall). This is somewhat inefficient. Perhaps a less expensive way to implement this functionality would be to issue a ping to the instance? At the very least, a simpler SQL query is needed.
I'm sure this function is typical in application architecture. What recommendations do you have for confirming, from the application, that a database connection is alive and viable?
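The simplest candidate I can think of is a query that touches no tables at all:

-- Cheapest liveness probe I know of; succeeds only if the connection is usable
SELECT 1;

If the round trip completes, the connection is alive; if it has been reaped, the command fails immediately and the application can reconnect. Is there anything cheaper than this, short of a real network-level ping?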
We are having some problems with the SQL Server 2005 Keep Alive mechanism causing dropped connections. The Keep Alive time and interval appear to work as documented. However, the number of retries made before dropping the connection seems to be variable: running Network Monitor, I have seen it vary between 3 and 9 retries. The value for TcpMaxDataRetransmissions in the registry is set to 5. Does anyone know what the correct number of retries should be for SQL Server 2005 Keep Alive? Is this dynamically determined, or is this a bug?
We tested this new feature ("Keep Alive", which should automatically close orphaned connections) with no success; neither the standard properties nor slightly changed properties worked.
We tested like this: SQL Server 2005, ADO.NET client. The client established an explicit lock on one row in one database. Afterwards, we disconnected the client by pulling out the network cable. We waited about 35 seconds for the session to close, but nothing happened; we waited another minute, but nothing changed. SQL Server Management Studio and the command-line tool "netstat" told us that the connections were still alive... so what went wrong? Did we miss something?
Our goal is that such orphaned connections get cleaned up, along with the locks they hold on the database. BTW, we installed SP1 before all the tests started!
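For what it's worth, this is roughly how we look for the leftover session while testing (the session id in the comment is just an example):

-- List user sessions; the orphaned one keeps its row (and its locks) here
SELECT session_id, host_name, login_name, last_request_end_time
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;

-- Manual workaround while Keep Alive isn't doing its job:
-- KILL 53;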
I created a Profiler trace to run against a remote server locally. Then I logged out. After two hours, I logged in again and the Profiler trace was closed. I don't know when or why. Has anyone had the same problem? Is this normal?
I have the query below, which returns thousands of records. Can I make the returned result set faster without changing the structure of the database?

SELECT dbo.tblComponent.ComponentID,
       dbo.tblComponent.ComponentName,
       dbo.tblErrorLog.ShortErrorMessage,
       dbo.tblErrorLog.LongErrorMessage,
       dbo.tblErrorLog.LogDate,
       dbo.tblErrorLevel.Description,
       dbo.tblErrorLog.ErrorLogID
FROM dbo.tblErrorLevel
INNER JOIN dbo.tblErrorLog ON dbo.tblErrorLevel.ErrorLevelID = dbo.tblErrorLog.ErrorLevelID
INNER JOIN dbo.tblComponent ON dbo.tblErrorLog.ComponentID = dbo.tblComponent.ComponentID

Thanks.
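One idea I had is to stop returning everything and only pull back the newest rows, something like the following (the TOP value and ORDER BY column are just an example); but I'd prefer a way to speed up the full result set if possible:

SELECT TOP 500
       c.ComponentID, c.ComponentName,
       e.ShortErrorMessage, e.LongErrorMessage, e.LogDate,
       l.Description, e.ErrorLogID
FROM dbo.tblErrorLog e
INNER JOIN dbo.tblErrorLevel l ON l.ErrorLevelID = e.ErrorLevelID
INNER JOIN dbo.tblComponent c ON c.ComponentID = e.ComponentID
ORDER BY e.LogDate DESC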
I'm doing an update on a table with about 113m rows; the update statement is fairly simple: update tab set col = null where col is not null. The col column is mostly null.
Sysprocesses shows three rows for this statement: 1 CXPACKET (it's a dual-processor 2000 box with SP3 installed) and 2 PAGEIOLATCH_SH (waitresource is filled). My guess would be that the where clause is executed in a separate process, blocking the update.
I changed the statement to update [...] set col = null; sysprocesses then shows one row with PAGEIOLATCH_SH. It executes forever.
I checked other processes, including those outside SQL Server, but none are using the db, let alone accessing the table involved. I even restarted SQL Server to be sure there was no dead process blocking the update. It didn't help.
So I added a search condition to the where clause, involving a clustered index, in order to reduce the rowcount. The execution plan shows a 97% hit on the clustered index, but sysprocesses shows the three rows again...
So far the Profiler hasn't helped me out either: there's an SP:CacheInsert on the update statement... then nothing.
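The next thing I plan to try is batching the update so each pass touches a limited number of rows (the batch size is a guess; SET ROWCOUNT is used because this is a 2000 box):

DECLARE @rows int
SET @rows = 1
WHILE @rows > 0
BEGIN
    SET ROWCOUNT 10000    -- limit each UPDATE to 10,000 rows
    UPDATE tab SET col = NULL WHERE col IS NOT NULL
    SET @rows = @@ROWCOUNT
END
SET ROWCOUNT 0            -- back to unlimited

Would that at least keep each transaction, and its PAGEIOLATCH_SH waits, short?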
Two weeks ago I deleted about 200GB of data from a 300GB+ database. It's a custom DB we want to use to test a few things. We wanted a smaller DB for our testing, and since we didn't have one, we grabbed a production backup, removed sensitive data, and ran a large archiving script on it... Anyway, so far so good, but our data file was still the same size as before.
So we started a shrinkdatabase... and it has been running for two weeks now! After about a week I interrupted the shrinkdatabase process and ran a dbcc shrinkdatabase('DB', truncateonly), just to see whether the data file would get reduced a bit or not. It did get reduced, by about 20GB. I assume that dbcc shrinkdatabase('DB', 0) had freed up enough pages at the end of the data file that a truncateonly was able to release some space... Anyway, after this we started the dbcc shrinkdatabase('DB', truncateonly) again... still running...
The database was never shrunk before, and every index is highly fragmented... Is that why it's taking so long? Am I actually going to have to wait another few weeks before this thing finishes??
Does anyone have experience running shrink on large DBs?
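If I end up killing it again, my plan B is to shrink the data file in small increments instead of one monolithic shrinkdatabase, along these lines (the logical file name and target sizes are made up):

-- Shrink a few GB at a time; an interrupt then only loses the current step
DBCC SHRINKFILE ('MyDB_Data', 280000)   -- target size in MB
DBCC SHRINKFILE ('MyDB_Data', 260000)
-- ...and so on, down to the size we want

Does that approach behave any better on big, fragmented databases?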
I have the following common table expression query, which takes something like 15 hours to run. Can someone suggest what I can do to speed this thing up?
;with a as
(
    -- add Parent_Proj_ID column
    select proj_id, proj_start_dt, proj_end_dt,
           case when charindex('.', Proj_ID) > 0
                then left(Proj_ID, len(Proj_ID) - charindex('.', reverse(Proj_ID)))
           end as Parent_Proj_ID
    from ods32.dbo.Proj a
),
b as
(
    -- get all valid rows
    select proj_id, proj_start_dt, proj_end_dt, Parent_Proj_ID
    from a
    where PROJ_START_DT is not null and PROJ_END_DT is not null
    union all
    -- get all invalid children of valid rows and give them the dates of their parents
    select a.Proj_Id, b.PROJ_START_DT, b.PROJ_END_DT, a.Parent_Proj_ID
    from b
    inner join a on b.Proj_Id = a.Parent_Proj_ID
    where a.PROJ_START_DT is null or a.PROJ_END_DT is null
)
-- join up and update
update a
set PROJ_START_DT = b.PROJ_START_DT,
    PROJ_END_DT = b.PROJ_END_DT
from WPData a
left outer join b on a.Proj_ID = b.Proj_ID
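One idea I'm considering is materializing the first CTE into an indexed temp table, so the Parent_Proj_ID derivation isn't recomputed at every level of the recursion; a rough, untested sketch:

-- Compute Parent_Proj_ID once, up front
select proj_id, proj_start_dt, proj_end_dt,
       case when charindex('.', Proj_ID) > 0
            then left(Proj_ID, len(Proj_ID) - charindex('.', reverse(Proj_ID)))
       end as Parent_Proj_ID
into #proj
from ods32.dbo.Proj;

create clustered index ix_proj_parent on #proj (Parent_Proj_ID);

-- ...then point CTEs a and b at #proj instead of ods32.dbo.Proj

Would that help, or is the problem likely elsewhere?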
Hi, sorry if this has been asked before but I'm pretty strapped for time...
I have two tables: [photos] and [photoFolders]
[photoFolders] contains information about photo albums on the site I'm creating. The information in [photos] describes all photos, along with which [photoFolder] each belongs to. When a user logs in, I want to present a list of all 'folders' under their name, along with the TOP image for the corresponding folderId...
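What I have in my head so far is a correlated TOP 1 lookup per folder, roughly like this (the column names are guesses at my own schema, @userId is the logged-in user, and 'first' photo here just means lowest photoId):

SELECT f.folderId, f.folderName, p.photoId, p.fileName
FROM photoFolders f
LEFT JOIN photos p
  ON p.photoId = (SELECT TOP 1 photoId
                  FROM photos
                  WHERE folderId = f.folderId
                  ORDER BY photoId)
WHERE f.userId = @userId

Is there a better pattern for this?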
Lately I did a mass update on all our scripts and added dbo. in front of all tables and other objects. There is a function that returns a table; the function is, I think, 300 lines. There are a lot of UNIONs, EXISTS, NOT EXISTS, etc. Anyhow, there's a single place where, if I add dbo. in front of the table, this function (which is called from an SP) runs forever when I run the SP from QA or from the ASP application. As soon as I remove the dbo. again, it runs in 6-8 seconds. Here's the thing: this same table reference exists a few lines below, in the 2nd UNION for example, and having dbo. in front there doesn't cause any problems. Even more confusing: if I use DatabaseName.dbo.TableName and run it again, I don't run into any problem. So it's almost as if, when I specify dbo. in front of the table, SQL Server or our code is getting lost and searching for this table in other databases? Has anyone run into or seen such a problem? I am sure I could make changes to the code and end up writing different code, but before I make changes I would like to find out more about my mystery problem. I am running SQL 2000 SP4, and the same problem occurred on two different machines: Win 2000 Pro and Win 2000 Server. Any ideas or suggestions? Thank you
I am using RS 2005 and SQL Server 2005. I have a table with about 6 million rows, and I am extracting about 2 million rows for a report. When I run the report as a single user, the report comes up in 6-7 minutes, but when I run the report with 2 users, it takes forever to come up.
The timings are different each time: sometimes 19 minutes, sometimes 30 minutes. The report connects to the db with the same db user id for both people running it. The stored procedure being invoked uses temp tables, and indexes are created on the fly for these temp tables.
The moment 2 people are running the report, if I run sp_who2 I see that one process started by the report server blocks another process run by the report server.
Timeouts are not happening; the report just takes forever to come up. Any help? If you need any more information, please let me know; I will be glad to provide it.
The report is a matrix report and there are 4 levels of grouping on the report.
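Beyond sp_who2, I assume a DMV query like this would show exactly what the blocked report session is waiting on (SQL 2005 DMVs; I haven't dug further than this):

-- Sessions that are blocked, who blocks them, and the wait type involved
SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time, r.command
FROM sys.dm_exec_requests r
WHERE r.blocking_session_id <> 0;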
I tried to restore a database, but it popped up an error saying "the database is in use". So I tried to take the database offline so that I could restore it, but that is taking forever (1 hour so far); it is still showing "1 remaining". Do you have any idea why? Thank you very much in advance!
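From what I've read, SET OFFLINE waits politely for every open connection unless you force them off; would something like this be the right way to do it?

-- Roll back other sessions' work and take the database offline immediately
-- ('MyDatabase' stands in for the real name)
ALTER DATABASE MyDatabase SET OFFLINE WITH ROLLBACK IMMEDIATE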
I have a simple update/initialization query (set an integer column = 0 on all rows) that's been running for over 28 hours. There are just over 27 million rows in the table. Current Activity shows that the transaction is open but sleeping, and under Locks it shows 1 DB S-mode lock, 766 page X-mode locks, 1 page U-mode lock, and one table X-mode lock. The server is 7.0 with 1.7 GB of RAM. Does anyone have any idea why it's taking so long? The table is about 7 GB in size; I can't get to it in Enterprise Manager without locking it up...
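I suppose I could at least confirm it really is my transaction that's been open the whole time with something like this (the database name is a stand-in):

-- Reports the oldest active transaction in the given database
DBCC OPENTRAN ('MyDatabase')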
When I attempt to load a database from dump format across a network (100Mb Ethernet), it takes forever (15 hours for 16GB!). Can anyone help me find a starting point to troubleshoot this?
Thanks!
-Chris
P.S. File copies of the same size move rapidly, and I cannot find any bottlenecks in the network.
We have a MS SQL Server 6.5 database table with 643,000 records. There are several indexes, including a clustered one.
We run the statement: update wo set udf3 = '1234567890123456' where woid = '123'
This returns immediately.
Then we try the same statement with a string one character longer, and it takes 45 minutes to return. There is no indication of what the server is doing during this time.
There is no index on UDF3, and WOID is the primary key.
Any suggestions as to what is happening? What can we do to correct it? DBCC CHECKTABLE finds no errors.
name    rows      reserved    data        index_size  unused
----    ------    ---------   ---------   ----------  -------
WO      643124    493418 KB   321580 KB   169824 KB   2014 KB
Hi there. I have a SQL Server 7 database that I copied across to another server, this one running Windows 2003 and SQL Server 2000 (8.0). I have a routine that runs every night on each machine. On the old machine it takes about 2 hours; on the new machine it takes up to 5.5 hours to do the exact same job. The results are the same, but the time difference could become an issue later on, so I would like to nip it in the bud now.
Does anyone have any suggestions as to why the code would run so much slower on a newer and better-spec machine?
I have copied everything across, so there is no difference in tables or stored procedures. Is there an optimisation tool I can run?
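One suggestion I keep running into after moving a database between servers and versions is to refresh statistics and usage counts; is this the sort of 'optimisation tool' that could explain the difference?

-- Rebuild distribution statistics for all tables in the current database
EXEC sp_updatestats

-- Fix row/page counts that can be inaccurate after a cross-version copy
DBCC UPDATEUSAGE (0)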
Back in the days of SQL 7.0 I did a lot of ODBC SELECT querying from VB applications, in which I implemented NOLOCK in order to prevent the primary business applications from being locked out of tables once the queries were run.
Now, quite a few years later, I'm busying myself converting a lot of old Access-based forms and queries to T-SQL on SQL Server 2000, and wondering why NOLOCK queries (simple SELECT ones) are immensely slower than a standard SELECT clause.
SELECT * FROM employees
This would be much, much faster than the code below, but users would get "The current record could not be accessed, as it is being used by another user", evidently because I'm locking the record while producing the output.
SELECT * FROM employees (nolock)
So this should, as I remember it, do a dirty read on the table, not obstructing other users, and give me a snapshot of the data as it stands, even though some records might be locked for edit.
Could anyone explain to me why the NOLOCK query fails to give me any output? It is as if the NOLOCK request is waiting for the table/records to be freed, in which case I'll never be able to run a query.
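For completeness, these are the other spellings I know of for the same thing, in case the bare (nolock) shorthand is part of the problem:

-- Explicit table-hint syntax
SELECT * FROM employees WITH (NOLOCK)

-- Session-level equivalent: every SELECT does dirty reads
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT * FROM employees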
Hello all, I'm using SS 2000 and NT4 (and Access 97 as the front end, on another server). Well, probably from lack of knowledge about table locks, I don't really know where to start presenting this problem. In Enterprise Manager, under "Management -> Current activity -> Locks/Objects", we have a couple (5-7) of "forever runnable" processes, all related to two specific situations. Each of them is for a SELECT statement. It's been like that for a long time; I've always been curious about them, but they weren't causing any problem. Now, after a modification, a third situation has appeared (a SELECT again)... and sometimes a lock created by this new "forever runnable" process blocks other functions that use the same table. All my tables are linked with an ODBC link. Any help or suggestion on where to search would be appreciated. Thanks. Yannick
I have 2 reports that report on basically the same thing, just grouped differently.
Report 1 groups summary phone-call stats by Department, Day, and Hour, which are all drill-down options. This means that the department summary stats are shown when the report is rendered and can be expanded to see daily stats; the daily stats can then be expanded to see the hourly stats.
Kinda like:
-------------------------
- Department 1
    - 10/1/2007
        12:00 AM
        1:00 AM
    + 10/2/2007
+ Department 2
-------------------------
Report 2 shows the same summary stats by department and extension, which is also a drill-down option. This means that the department summary stats are shown when the report is rendered and can be expanded to see summary stats for each extension.
The queries for these reports run from Management Studio in about 10 seconds each, with the Report 1 query returning about 800 rows for the month of October 2007 and the Report 2 query returning about 30 rows for the same date range.
When the reports are rendered, Report 1 (with 800 rows) renders in about 20 seconds, while Report 2 (with only 30 rows) takes about 5 minutes.
The reports themselves are very similar, the only difference being the grouping. It is weird that the report returning the smaller dataset actually takes longer to render.
One thing I did try was running the queries from the Data tab of the .rdl files (in Visual Studio); the query for Report 2 took about 4 minutes to return data there, while (as I mentioned above) it runs in about 10 seconds in Management Studio.
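The gap between Management Studio and the Data tab makes me suspect the plan compiled for the report's parameters. As an experiment I was thinking of forcing a fresh plan on the Report 2 dataset query, along these lines (the table and columns are stand-ins for my real query; @StartDate/@EndDate are the report parameters):

SELECT Department, Extension, SUM(Calls) AS Calls
FROM dbo.CallStats
WHERE CallDate BETWEEN @StartDate AND @EndDate
GROUP BY Department, Extension
OPTION (RECOMPILE)   -- recompile per execution instead of reusing a bad plan

Is that a reasonable test, or is the renderer itself the more likely culprit?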
select t.name as TriggerName,
       ta.name as TableName,
       o.parent_obj
into GLPDemo.dbo.Temp_TablesAndTriggers
from sysobjects o
inner join sys.triggers t on t.object_id = o.id
inner join syscomments c on c.id = t.object_id
inner join sys.tables ta on ta.object_id = o.parent_obj
where xtype = 'tr' and c.text like '%Audit%'

DECLARE @DBTrigger as varchar(100), @DBTable as varchar(100), @exestr as varchar(100)

DECLARE TCursor CURSOR FOR
    SELECT TriggerName, TableName FROM Temp_TablesAndTriggers
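Presumably the loop that follows fetches each row and executes a statement per trigger via @exestr; a reconstruction of what that would look like (the DISABLE TRIGGER action is only a guess at the intent):

OPEN TCursor
FETCH NEXT FROM TCursor INTO @DBTrigger, @DBTable
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @exestr = 'ALTER TABLE ' + @DBTable + ' DISABLE TRIGGER ' + @DBTrigger
    EXEC (@exestr)
    FETCH NEXT FROM TCursor INTO @DBTrigger, @DBTable
END
CLOSE TCursor
DEALLOCATE TCursor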
I have a query that uses a lot of joins, subqueries, and temp tables (I inherited it and am leery about rewriting it). I tried to rewrite it using table variables at one point, but the run time became ridiculous for any query spanning more than a few months.
It currently runs as-is in about 3 minutes or less in Management Studio; however, when I try to run it via SRS or the Visual Studio 2005 designer, it runs indefinitely. It also runs indefinitely if I try to refresh the fields or run it from the Data tab (basically, any time it has to call the stored procedure).
Does anyone have an idea of how I can get this report to run or know why it runs differently in SRS than in Management Studio?
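One workaround I've read about for procs that behave differently under Reporting Services is to shield the query from parameter sniffing by copying the parameters into local variables; a sketch (the proc, table, and column names are invented):

ALTER PROCEDURE dbo.usp_MyReport
    @StartDate datetime,
    @EndDate   datetime
AS
BEGIN
    -- Locals hide the caller's values from the optimizer, so the plan
    -- isn't compiled specifically for whatever SRS passes in first
    DECLARE @s datetime, @e datetime
    SET @s = @StartDate
    SET @e = @EndDate

    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE OrderDate BETWEEN @s AND @e
END

Could that explain the SSMS-vs-SRS difference, or is something else going on?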
Hi, I've got a weird problem. I've created an SP that takes 7 seconds to run in Query Analyzer. When I call dataAdapter.Fill(dataSet.Tables(0)) from my code, it takes forever to finish!! What's going on? Any thoughts highly appreciated. T.i.a., ratjetoes.
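The one difference I know of between Query Analyzer and an ADO.NET connection is the default SET options (QA turns ARITHABORT ON, ADO.NET leaves it OFF), which can give the SP a different cached plan. To test whether that's my problem, I guess I can try to reproduce the slow case inside QA (the proc name is a stand-in):

SET ARITHABORT OFF
EXEC dbo.MySlowProc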
I am trying to execute the following query in Management Studio, but it takes forever. Can someone tell me why this is happening? I am running the query in the 'NorthWind' database. The Windows account I'm logged into WinXP with (Windows authentication is enabled for the SQL Server instance) is the database owner for the NorthWind database.

alter database NorthWind SET ENABLE_BROKER
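From what I've read since, ENABLE_BROKER needs exclusive access to the database, so it waits forever on any other open connection. Is the accepted fix really just to force the other sessions off, like this?

ALTER DATABASE NorthWind SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE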
I had to restore a database late this afternoon. I have the database set to FULL recovery; database backups are performed nightly and transaction log backups every other hour. I decided to perform a point-in-time restore. Everything seemed to go OK and it finished, but then the database was grayed out and said "Loading". Although I tried 4 separate times, once allowing over an hour, the grayed-out database and "Loading" never went away.
Freaking out, I deleted the "Loading" database (I didn't delete the logs and backup files) and tried a manual restore from the previous night's backup file. It attached and restored properly and was ready to go in 2 minutes.
Of course, I wanted to get the transaction log files restored too, since they contained work from earlier today. So I tried another point-in-time restore and got the same old messages. Currently, the database is running with the previous night's backup restored, but the users aren't too keen on having to redo 5 hours' worth of work to catch up to the last transaction log backup come Monday morning.
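If I understand the docs, the grayed-out "Loading" state means the restore finished WITH NORECOVERY and the database is still waiting for more log restores; presumably it can be brought online at any point with (the database name is a stand-in):

-- Finish the restore sequence and bring the database online
RESTORE DATABASE MyDatabase WITH RECOVERY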
I'm having a problem with an update operation in a stored procedure. It runs so slowly that it is unusable, unless I comment a part out, in which case it is very fast. However, I need the whole thing :). I have a table of email addresses of people who want to get invited to parties. Each row contains information like email address, city, state, country, and preferences for what types of events are of interest. The primary key is an EmailID, and there is a unique constraint on the email field. The stored procedure receives the field data as arguments and inserts the record if the email address passed is not in the database. This works perfectly. However, if the stored procedure is called for an email address that already exists, it updates the existing row instead of doing an insert. This way I can build a web page that lets people modify their preferences, opt in and out of the list, and so on.

If I am doing an update, the stored procedure runs SUPER SLOW (and the page times out) unless I comment out the part of the update statement for city, state, country, and zipcode. However, I really need to be able to update this! My database has 29 million rows.

Thank you for telling me anything about how I can speed up this update!

Here is the SQL statement to run the stored procedure:

declare @now datetime;
set @now = GetUTCDate();
EXEC usp_EMAIL_Subscribe @Email='dberman@sen.us', @OptOutDate=@now,
    @Opt_GenInterest=1, @Opt_DatePeople=0, @Opt_NewFriends=1,
    @Opt_OldFriends=0, @Opt_Business=1, @Opt_Couples=0, @OptOut=0,
    @Opt_Events=0, @City='Boston', @State='MA', @ZCode='02215',
    @Country='United States'

Here is the stored procedure:

SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO
ALTER PROCEDURE [usp_EMAIL_Subscribe]
(
    @Email [varchar](50),
    @Opt_GenInterest [tinyint],
    @Opt_DatePeople [tinyint],
    @Opt_NewFriends [tinyint],
    @Opt_OldFriends [tinyint],
    @Opt_Business [tinyint],
    @Opt_Couples [tinyint],
    @OptOut [tinyint],
    @OptOutDate datetime,
    @Opt_Events [tinyint],
    @City [varchar](30),
    @State [varchar](20),
    @ZCode [varchar](10),
    @Country [varchar](20)
)
AS
BEGIN
    declare @EmailID int
    set @EmailID = NULL

    -- Get the EmailID matching the provided email address
    set @EmailID = (select EmailID from v_SENWEB_EMAIL_SUBSCRIBERS
                    where EmailAddress = @Email)

    -- If the address is new, insert the address and settings.
    -- Otherwise, UPDATE the existing email profile.
    if @EmailID is null or @EmailID = -1
    begin
        INSERT INTO v_SENWEB_Email_Subscribers
            (EmailAddress, OptInDate, OptedInBy, City, StateProvinceUS,
             Country, ZipCode, GeneralInterest, MeetDate, MeetFriends,
             KeepInTouch, MeetContacts, MeetOtherCouples, MeetAtEvents)
        VALUES
            (@Email, GetUTCDate(), 'Subscriber', @City, @State, @Country,
             @ZCode, @Opt_GenInterest, @Opt_DatePeople, @Opt_NewFriends,
             @Opt_OldFriends, @Opt_Business, @Opt_Couples, @Opt_Events)
    end
    else
    begin
        UPDATE v_SENWEB_EMAIL_SUBSCRIBERS
        SET
            --City = @City,
            --StateProvinceUS = @State,
            --Country = @Country,
            --ZipCode = @ZCode,
            GeneralInterest = @Opt_GenInterest,
            MeetDate = @Opt_DatePeople,
            MeetFriends = @Opt_NewFriends,
            KeepInTouch = @Opt_OldFriends,
            MeetContacts = @Opt_Business,
            MeetOtherCouples = @Opt_Couples,
            MeetAtEvents = @Opt_Events,
            OptedOut = @OptOut,
            OptOutDate = CASE
                             WHEN (@OptOut = 1) THEN @OptOutDate
                             WHEN (@OptOut = 0) THEN 0
                         END
        WHERE EmailID = @EmailID
    end
    return @@Error
END
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO

Finally, here is the database schema for the table, courtesy of Enterprise Manager:

CREATE TABLE [dbo].[EMAIL_SUBSCRIBERS] (
    [EmailID] [int] IDENTITY (1, 1) NOT NULL ,
    [EmailAddress] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [OptinDate] [smalldatetime] NULL ,
    [OptedinBy] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [FirstName] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [MiddleName] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [LastName] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [JobTitle] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [CompanyName] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [WorkPhone] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [HomePhone] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [AddressLine1] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [AddressLine2] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [AddressLine3] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [City] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [StateProvinceUS] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [StateProvinceOther] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [Country] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [ZipCode] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [SubZipCode] [nvarchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
    [GeneralInterest] [tinyint] NULL ,
    [MeetDate] [tinyint] NULL ,
    [MeetFriends] [tinyint] NULL ,
    [KeepInTouch] [tinyint] NULL ,
    [MeetContacts] [tinyint] NULL ,
    [MeetOtherCouples] [tinyint] NULL ,
    [MeetAtEvents] [tinyint] NULL ,
    [OptOutDate] [datetime] NULL ,
    [OptedOut] [tinyint] NOT NULL ,
    [WhenLastMailed] [datetime] NULL
) ON [PRIMARY]
GO
CREATE UNIQUE CLUSTERED INDEX [IX_EMAIL_SUBSCRIBERS_ADDR]
    ON [dbo].[EMAIL_SUBSCRIBERS]([EmailAddress]) WITH FILLFACTOR = 90 ON [PRIMARY]
GO
ALTER TABLE [dbo].[EMAIL_SUBSCRIBERS] WITH NOCHECK ADD
    CONSTRAINT [DF_EMAIL_SUBSCRIBERS_OptedOut] DEFAULT (0) FOR [OptedOut],
    CONSTRAINT [DF_EMAIL_SUBSCRIBERS_WhenLastMailed] DEFAULT (null) FOR [WhenLastMailed],
    CONSTRAINT [PK_EMAIL_SUBSCRIBERS] PRIMARY KEY NONCLUSTERED ([EmailID]) WITH FILLFACTOR = 90 ON [PRIMARY]
GO
CREATE INDEX [IX_EMAIL_SUBSCRIBERS_WhenLastMailed]
    ON [dbo].[EMAIL_SUBSCRIBERS]([WhenLastMailed] DESC) ON [PRIMARY]
GO
CREATE INDEX [IX_EMAIL_SUBSCRIBERS_OptOutDate]
    ON [dbo].[EMAIL_SUBSCRIBERS]([OptOutDate] DESC) ON [PRIMARY]
GO
CREATE INDEX [IX_EMAIL_SUBSCRIBERS_OptInDate]
    ON [dbo].[EMAIL_SUBSCRIBERS]([OptinDate] DESC) ON [PRIMARY]
GO
CREATE INDEX [IX_EMAIL_SUBSCRIBERS_ZipCode]
    ON [dbo].[EMAIL_SUBSCRIBERS]([ZipCode]) ON [PRIMARY]
GO
CREATE INDEX [IX_EMAIL_SUBSCRIBERS_STATEPROVINCEUS]
    ON [dbo].[EMAIL_SUBSCRIBERS]([StateProvinceUS]) ON [PRIMARY]
GO