Inconsistent Application Performance

Feb 13, 2007

Basically, I have a web-based application that queries aggregates from several big tables that grow rather quickly. I feel I am fairly accomplished at tuning individual queries, and I have optimized these particular stored procedures as much as I can; I often see great results for a period of time. However, after some time passes, the stored procedure begins performing really badly. What's frustrating is that I have traced the server and located a stored procedure that consistently executes with an abnormally high duration, but when I run the exact same text directly against SQL Server it returns instantly.

If I recompile the stored procedure over my direct connection by adding the WITH RECOMPILE clause to the EXEC statement, the problem remains for the web app. However, if I add WITH RECOMPILE to the actual stored procedure definition, the problem is instantly resolved. I can then remove the WITH RECOMPILE from the stored procedure text and things remain speedy for a period of time before they slip back again.
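For reference, a minimal sketch of the two forms described above (hypothetical procedure, table and parameter names):

-- Form 1: recompile only this execution. A fresh plan is compiled for this call,
-- but the plan cached for other sessions (e.g. the web app) is left untouched.
EXEC dbo.usp_DailyTotals @SiteID = 42 WITH RECOMPILE;
GO

-- Form 2: create the procedure WITH RECOMPILE. No plan is cached at all,
-- so every execution is compiled for the parameter values it actually receives.
CREATE PROCEDURE dbo.usp_DailyTotals
    @SiteID int
WITH RECOMPILE
AS
    SELECT OrderDate, SUM(Amount) AS Total
    FROM dbo.Orders
    WHERE SiteID = @SiteID
    GROUP BY OrderDate;
GO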

I'm curious as to what I can do from a SQL standpoint to correct for this. For now, the app team is setting a counter for each execution and then issuing a WITH RECOMPILE when the counter limit is exceeded.

I apologize if this question has been asked before; I attempted a review of the forums but was unable to find any material. Any suggestions, advice, links or general assistance would be most appreciated.

View 4 Replies



Inconsistent SP Performance On Different Servers

Jun 8, 2006

One of my developers recently installed a backup of the production database onto his test site. His test server has the same configuration as the production server.

One of the Stored Procedures that is called takes 1:45 to run on his machine, but only 2 seconds on the production server. This same SP takes only 2 seconds on my development database.

The SP is called iteratively, up to 10 times, to run against 10 separate fields. The value of a parameter called @CriteriaClassID determines which portion of the SP runs.

The significant difference in processing time is in itself baffling (since the servers have the same specs/configuration, as far as I can tell, and the data is identical, since he has a backup of the most recent production data).

But more baffling: if, in his data, I switch the values from field 1 to field 2, and vice versa, his results take 2 seconds (switching the values in field 1 to field 2 switches the value in @CriteriaClassID which is passed through to this SP).

It's exactly the same SP; the only difference is that field 1 is processed first, field 2 second, field 3 third etc. On the production site and my development site, it doesn't make a difference in the order they are processed. On his machine it does.

Any ideas? I thought perhaps his indexes were corrupted in the rebuild, but we ran a SQL Server maintenance schedule to clean them up, and saw no improvement.

This is the SP, if it is of any help:

CREATE procedure [dbo].[st_pull_model_data] @ModelID as integer, @CriteriaID as integer
as

declare @ClientID as integer, @CriteriaClassId as char(1)

/*Procedure to pull data from org_model_data and postalcode_model_data for modeling and media analysis */
/*Need to have table #temp_data created outside of SP with fields org_id and zip_code */
/*This procedure is used by SP st_model_data */

If @CriteriaID is not null
begin

set @CriteriaClassId = (Select model_criteria_type from model_criteria where model_criteria_id = @CriteriaID)
if @CriteriaClassID = 'G' -- changes client_id from specific to general, if General is required.
begin
set @ClientID = 0
end
else
begin
set @ClientID = (Select client_id from model where model_id = @ModelID)
end

If @CriteriaClassId in ('G','P')
Begin
update #temp_data
set data1 = postal_criteria_value
from #temp_data t
left outer join
(select postalcode, postal_criteria_value
from postalcode_model_data pmd
join model_org_trade_area mota on mota.zip_code = pmd.postalcode
join model_org mo on mo.model_org_id = mota.model_org_id
where model_criteria_id = @CriteriaID
and client_id = @ClientID
and mo.model_id = @ModelID) as PMD
on PMD.postalcode = t.zip_code
end
else
Begin
update #temp_data
set data1 = org_criteria_value
from #temp_data t
left outer join
(select distinct postalcode, org_criteria_value, omd.org_id
from org_model_data omd
join org o on o.org_id = omd.org_id
join model_org_trade_area mota on mota.zip_code = omd.postalcode
join model_org mo on mo.model_org_id = mota.model_org_id and mo.org_id = o.org_id
where model_criteria_id = @CriteriaID and o.client_id = @ClientID and mo.model_id = @ModelID) as OMD
on OMD.postalcode = t.zip_code and omd.org_id = t.org_id
end
end

View 5 Replies View Related

Inconsistent Database Performance

Jul 23, 2005

Hi, I am facing a peculiar problem in a live database currently in operation in one of my client's projects. An application that updates 3 tables in the database is failing to update a certain number of fields in one of the tables. The issue is not frequent, and I have checked through the Server Performance Monitor that there is no performance lag on the server at any point in time. The tables are indexed with common key fields. Can anybody help me in this regard?

Thanks & Regards.

View 2 Replies View Related

Inconsistent Performance From Queues

Nov 13, 2006

Hi everyone! I have a very brief question. I have 10 queues in my database and each of them is sent an equal number of messages. There are instances where they execute/activate the stored procedures very fast, but there are times when they don't. Does anyone have an idea why this happens?
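Not an answer, but one way to see whether activation is keeping up when things slow down is to compare running activation tasks against each queue's configured readers. A minimal diagnostic sketch, assuming SQL Server 2005 Service Broker, run in the database that owns the queues:

-- Compare the number of activation tasks actually running on each queue
-- against the queue's configured MAX_QUEUE_READERS and its monitor state.
SELECT  q.name                 AS queue_name,
        q.max_readers          AS max_queue_readers,
        qm.state               AS monitor_state,
        qm.last_activated_time,
        COUNT(at.spid)         AS running_activation_tasks
FROM    sys.service_queues               AS q
LEFT JOIN sys.dm_broker_queue_monitors   AS qm
        ON qm.queue_id = q.object_id
       AND qm.database_id = DB_ID()
LEFT JOIN sys.dm_broker_activated_tasks  AS at
        ON at.queue_id = q.object_id
       AND at.database_id = DB_ID()
GROUP BY q.name, q.max_readers, qm.state, qm.last_activated_time;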

Thank you very much for taking the time to read my post. :)

View 10 Replies View Related

Execution Plans Inconsistent With Performance

May 16, 2008

I've been working with SQL Server 2005 for a while now and I've noticed some odd behavior that I want to bounce off other members of the community. I should preface this by saying I've been a forum viewer (and occasional contributor) here at SQL Team for a while, and I've naturally developed a keen sense for optimizations.

Fundamentally, longer stored procedures with perfectly fine/optimized execution plans are inconsistent with real-world performance. In some of these cases, a query with a low subtree cost, on a 4-core machine with 16 GB of RAM and two 15-drive SAS arrays under little load, takes excessively long to run or in some cases doesn't complete.

This isn't due to blocking or resource bottlenecks, as I'm quite familiar with the built-in tools to troubleshoot and resolve those issues. In all cases, I am able to rearchitect the stored procedure into a higher-subtree-cost variant and get reasonable performance, but it's frustrating to have to redo work, and there seems to be no common theme other than longer multi-statement procedures.

I've used SQL Server 2000 extensively and did not notice this level of inconsistency in performance with that product version. Just wondering if others in the community have experiences similar or if I'm just crazy.

Thanks for reading my rant.

- Shane

View 3 Replies View Related

Indexing Issue, Inconsistent Performance

Feb 13, 2006

Hi - Trying to chase down a baffling performance issue. Our database has been running very slow lately, so we are performance tuning the database. In doing so, we created a copy of our production database. In that database, I changed one clustered index on a table to try to improve performance. I ran one query, saw a slight improvement, but saw "lazy spool" in the execution plan.

I tried to change it back to the original index by dropping the changed index and recreating the original index. I then ran the original query, which now went from 5 seconds to 36 seconds.

I then ran DBCC DBREINDEX on that table. Performance of the query was still markedly worse. I then reran DBCC DBREINDEX on all tables and updated each table's statistics. Performance of that query has never returned to the original 5 seconds.

What could be at issue here? Is there something else that I caused in changing the index and changing it back?

Ideas much appreciated.

View 4 Replies View Related

MSSQL Express 2005 Performance So Inconsistent!

Jan 26, 2007

Hi, just wondering if anyone has noticed that MSSQL Express 2005 is so inconsistent in its performance. I was testing out a query with 2 joins, with the main table having about 13,600,000 records. With the same criteria value, sometimes it finishes executing in 3 seconds, sometimes in almost a minute.

Could it be just the volume of the data? This is really driving me crazy!

Any advice will be greatly appreciated.

Rick..

View 2 Replies View Related

Inconsistent Performance Results With Large Partitioned Tables.

Dec 5, 2007

I have a query that joins two large partitioned tables and depending on the values in the where clause, I can get dramatically different performance results.

The first query completed in around 7s and has 47,000 logical reads.


select mo.monitor_id,
       mo.site_id,
       mo.testtime,
       sum(mo.NumBytes),
       sum(mo.DNSTime),
       sum(mo.ConnectTime),
       sum(mo.FirstByteTime),
       sum(mo.ContentTime),
       sum(mo.RelocTime)
from monitor_raw mr(nolock), monitor_object mo(nolock)
where mr.monitor_id in (5339, 5341, 5342, 943842, 943866)
  and mr.testtime between 'Oct 31 2007 3:00:00:000PM' and 'Nov 30 2007 3:00:00:000PM'
  and mo.returncode = 200
  and mr.site_id in (101,102,105,109,110,112,115,117,119,122,126,151,132,139,129,135,121,138,143,142,159,148,128,171,176,177,178,111,113,116,118,120,127,133,131,130,174,179,185,205,200,202,203,204,210,211,208,209,212,213,216,199,214,224,225,229,230,232,235,241,245,247,250,254,261,267,264,265,266,268,269)
  and mr.escalationlevel = 0
  and mr.monitor_id = mo.monitor_id
  and mr.testtime = mo.testtime
  and mr.site_id = mo.site_id
group by mo.monitor_id, mo.site_id, mo.testtime


The second query takes 188s to complete and has 1.8m logical reads. The only difference between the two is the value of the monitor_ids in the where clause.


select mo.monitor_id,
       mo.site_id,
       mo.testtime,
       sum(mo.NumBytes),
       sum(mo.DNSTime),
       sum(mo.ConnectTime),
       sum(mo.FirstByteTime),
       sum(mo.ContentTime),
       sum(mo.RelocTime)
from monitor_raw mr(nolock), monitor_object mo(nolock)
where mr.monitor_id in (152682, 5339, 5341, 5342, 268080)
  and mr.testtime between 'Oct 31 2007 3:00:00:000PM' and 'Nov 30 2007 3:00:00:000PM'
  and mo.returncode = 200
  and mr.site_id in (101,102,105,109,110,112,115,117,119,122,126,151,132,139,129,135,121,138,143,142,159,148,128,171,176,177,178,111,113,116,118,120,127,133,131,130,174,179,185,205,200,202,203,204,210,211,208,209,212,213,216,199,214,224,225,229,230,232,235,241,245,247,250,254,261,267,264,265,266,268,269)
  and mr.escalationlevel = 0
  and mr.monitor_id = mo.monitor_id
  and mr.testtime = mo.testtime
  and mr.site_id = mo.site_id
group by mo.monitor_id, mo.site_id, mo.testtime



The two tables have clustered indexes on monitor_id, testtime and site_id. Comparing the execution plan, I can see why there is such a difference in performance. The second query performs a clustered index seek on the monitor_object table starting at the lowest monitor_id, testtime & site_id through the highest monitor_id, testtime & site_id. The first query performs a clustered index seek where the monitor_id, testtime and site_id equals the same values from the monitor_raw table.


My question is, how can I force the second query to use the same execution plan as the first so that I can get better performance?

One possible workaround that I could use is to execute five individual queries, one for each monitor_id, and then union the results together, but this would require significant code changes to my stored procs.
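For what it's worth, a schematic sketch of that per-monitor_id workaround, written as a single UNION ALL statement rather than five separate queries. This is not a drop-in replacement: only two branches are shown, and the long site_id filter is left as a comment placeholder that would be repeated verbatim in every branch.

SELECT mo.monitor_id, mo.site_id, mo.testtime,
       SUM(mo.NumBytes), SUM(mo.DNSTime), SUM(mo.ConnectTime),
       SUM(mo.FirstByteTime), SUM(mo.ContentTime), SUM(mo.RelocTime)
FROM monitor_raw mr (nolock)
JOIN monitor_object mo (nolock)
  ON  mr.monitor_id = mo.monitor_id
  AND mr.testtime   = mo.testtime
  AND mr.site_id    = mo.site_id
WHERE mr.monitor_id = 152682          -- one branch per monitor_id
  AND mr.testtime BETWEEN 'Oct 31 2007 3:00:00:000PM' AND 'Nov 30 2007 3:00:00:000PM'
  AND mo.returncode = 200
  AND mr.escalationlevel = 0
  -- AND mr.site_id IN ( same site_id list as the original query )
GROUP BY mo.monitor_id, mo.site_id, mo.testtime

UNION ALL

SELECT mo.monitor_id, mo.site_id, mo.testtime,
       SUM(mo.NumBytes), SUM(mo.DNSTime), SUM(mo.ConnectTime),
       SUM(mo.FirstByteTime), SUM(mo.ContentTime), SUM(mo.RelocTime)
FROM monitor_raw mr (nolock)
JOIN monitor_object mo (nolock)
  ON  mr.monitor_id = mo.monitor_id
  AND mr.testtime   = mo.testtime
  AND mr.site_id    = mo.site_id
WHERE mr.monitor_id = 5339            -- repeat for the remaining monitor_ids
  AND mr.testtime BETWEEN 'Oct 31 2007 3:00:00:000PM' AND 'Nov 30 2007 3:00:00:000PM'
  AND mo.returncode = 200
  AND mr.escalationlevel = 0
  -- AND mr.site_id IN ( same site_id list as the original query )
GROUP BY mo.monitor_id, mo.site_id, mo.testtime;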

Thanks,

Tim

View 5 Replies View Related

Help Me To Improve My Application Performance

Mar 13, 2007

Dear Experts,
I'm a DBA working for a product-based company.
We are implementing our product for a certain client with a huge OLTP workload.
Our reports team is facing a problem (error: all the reports are timed out). Though the queries are written properly, each query takes several minutes.
I've given the command DBCC DROPCLEANBUFFERS.
The time immediately dropped to 10 seconds.

Now my question is:
Please suggest DBCC commands or any DBA-related commands to improve the performance of the application for my reports team.

Thanks in advance.

Vinod

View 4 Replies View Related

Application Performance Advice Please?

Jul 20, 2005

Hello all,

I've been recruited to assist in diagnosing and fixing a performance problem on an application we have running on SQL Server 7.

The application itself is third-party software, so we can't get at the source code. It's a client management system, where consultants all over the country track their client meetings, results, action plans, etc., and it has apparently been problematic for a long time now. I came into this investigation in mid-stream, but here's the situation as I understand it:

We have users reporting it's slow, with no discernible pattern with respect to what part of the application they're using or any particular time of day. I am told that it doesn't appear to be a bandwidth or computer resource problem. They apparently added two app servers a year or so ago, which temporarily improved the performance. We're using a nominal percentage of CPU and memory.

There are three large tables (approx 8 million rows) that are queried often, as users click to see their calendar of appointments or review past meetings with a client, etc. The activity on these tables is over 90% reads (SELECTs) with about 10% INSERTs/UPDATEs. We have attempted to run the Index Analyzer Wizard twice, but so far it just seems to hang (it could be that the workload file is too big?). So, what we're doing now is isolating the SELECT statements that take a long time to run and manually comparing them to the indexes that exist on these large tables. Since we can't alter the SQL source code, we're trying to alter the indexes to improve performance.

What I would like to know is, is there a good way to get benchmark measurements so we can explicitly measure any performance changes? Also, do you think we're going about this the right way, or is there some other avenue we could be looking at to improve performance?

I recognize that performance questions are tricky to post/answer in a newsgroup, because usually you need more information than is provided. The problem is that this is a high-profile investigation (they're hauling us into meetings every two days to report our progress) and I need to be able to convincingly state that we have either improved performance by X%, or that it is the application itself that's the problem and we're stuck with it.

Any thoughts would be deeply appreciated.

Thanks and best regards,
Steve
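On the benchmarking question, one low-tech approach that works on SQL Server 7 is to wrap a representative slow SELECT in the standard statistics SET options and record the numbers before and after each index change. A minimal sketch, with a hypothetical table and filter standing in for one of the real queries:

-- Report logical/physical reads and CPU/elapsed time for the statement below,
-- so two runs (before and after an index change) can be compared directly.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT COUNT(*)                     -- placeholder for one of the slow SELECTs
FROM dbo.ClientMeetings             -- hypothetical table name
WHERE ConsultantID = 1234;          -- hypothetical filter

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;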

View 5 Replies View Related

Can't See 64-bit FTS Performance Counters From 32-bit Application

Aug 24, 2007

I have a 32-bit application which sometimes runs against 64-bit servers. When it does, I can't read the performance counters published by the FTS service because they're 64-bit only. I get event #1022 from PerfLib in the event log:


Windows cannot open the 64-bit extensible counter DLL msftesqlFD in a 32-bit environment. Contact the file vendor to obtain a 32-bit version. Alternatively if you are running a 64-bit native environment, you can open the 64-bit extensible counter DLL by using the 64-bit version of Performance Monitor. To use this tool, open the Windows folder, open the System32 folder, and then start Perfmon.exe.

Does Microsoft make available 32-bit counter libraries for 32-bit clients of the 64-bit FTS? How do I obtain those bits?

.B ekiM

View 1 Replies View Related

Performance Of An Oracle-based Application

Nov 22, 2007

We have an Oracle-based enterprise resource planning (ERP) application that facilitates order entry, invoicing, dispatch, stock management, and purchasing. However, it took as much as 30 seconds to perform queries from the remote offices, which were linked via Frame Relay.

Any experiences or ideas shared will be greatly appreciated.



Best,

Steve

View 5 Replies View Related

SQL Server Performance - Hardware Or Application Problem

May 30, 2007

Hi everyone, I am analyzing the performance of my SQL Server and need some advice on why I am seeing the results that I am.

One of my problems is that the SQL Server takes an extremely long time to execute the command 'exec sp_spaceused' against my tempdb. This command usually displays how much space is used and how much is free in the selected database.

My other problem is that one of the applications that uses SQL Server as a back end is acting extremely sluggish. Navigating through screens, performing functions and refreshing grids take some time to execute; however, when I do a trace in SQL Profiler the queries seem to be executing instantly. My first thought was that the queries executed by the application needed to be optimized, but when I do a trace on the queries spawned by the application they seem fine. Does this mean that I have a network bottleneck problem or possibly a PC problem?

The PC that we use to do our testing is a P4 3.0 with 512 MB of RAM running on Windows 2000. The SQL Server is running on Windows 2003 Server with a RAID 5 configuration, 4 Xeon 3.0 processors and 4 GB of RAM. I feel that our desktop PCs are sufficient to run any application, but I have received suggestions that I should upgrade my client PCs to at least 1 GB of RAM or move to a Citrix server, configure my SQL Server for RAID 10, try file splitting, use gigabit connections from the SQL Server to the desktops, etc.

Given the facts above, can someone give me advice on the most logical way to resolve my performance issue? Is my problem on the application end of things, or do I need to change my network architecture and/or hardware?

View 2 Replies View Related

DateTime Inconsistent

Feb 11, 1999

I need some advice on how to check the consistency of a DateTime attribute.

Some of my DateTime fields won't read as NULL and some do; for example:

Select FileNo,DateReceived,DateEntered,DateFU from RECEIVING
where DateReceived is not null and DateEntered is not null and DateFU is not null

I should get several records but the result is zero.
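Not a fix, but since COUNT(column) skips NULLs while COUNT(*) does not, a quick sketch against the table above shows how many rows are genuinely NULL in each of those columns:

SELECT COUNT(*)             AS total_rows,
       COUNT(DateReceived)  AS DateReceived_not_null,
       COUNT(DateEntered)   AS DateEntered_not_null,
       COUNT(DateFU)        AS DateFU_not_null
FROM RECEIVING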

Are there any articles out there about DateTime?

I appreciate any help.

Thanks

Rey Caunca

View 1 Replies View Related

Inconsistent SQL Results

Apr 11, 2006

Hi

I have an oddity. If I run a piece of SQL:

SELECT EmployeeNo, MailTo
FROM ST_PPS.dbo.Employee
WHERE AddedOn BETWEEN '01-jan-2006' and '01-feb-2006'
AND MailTo NOT IN ( '3', 'x')
ORDER BY MailTo

I get the results:

EmployeeNo MailTo
----------- ------
608384 1
606135 1
608689 1
609095 1
607163 1
606165 1
606472 1
608758 1
.....
for 2594 rows

If I create a stored procedure with the same SQL:

CREATE PROCEDURE dbo.PPS_test
AS
SELECT EmployeeNo, MailTo
FROM ST_PPS.dbo.Employee
WHERE AddedOn BETWEEN '01-jan-2006' and '01-feb-2006'
AND MailTo NOT IN ( '3', 'x')
ORDER BY MailTo
GO

and run it:

EXEC PPS_test

I get three extra rows:

EmployeeNo MailTo
----------- ------
607922 NULL
606481 NULL
605599 NULL
606316 1
608871 1
607427 1
608795 1
.....
for 2597 rows

Does anyone know what is happening here? It appears that the clause:

MailTo NOT IN ( '3', 'x')

excludes NULL in raw SQL, but includes NULL (correctly, I think) in a stored procedure.

Chloe Crowder
The British Library

View 5 Replies View Related

Inconsistent DTS Error - Help Please

Jul 20, 2005

I get an error every so often with a DTS package on SQL 7. Error as follows:

The connection is currently being used by a task. The connection cannot be closed or re-used.

This doesn't happen all the time and I can sometimes (more often than not) get the DTS package to complete in its entirety.

To explain what the DTS package does:

Truncate tables in reporting environment (several in a batch)
Clear transaction logs
Copy data from live environment into CSV (for speed)
Copy data from CSV files into the tables previously truncated
Build up a table based on the data copied (for reporting)
Clear transaction logs

I'm using a pretty basic set up: Connection (1st DB) -> Transformation to CSV -> Transformation to Connection (2nd DB). It seems to fail on either the first or second transformation at random (?).

I've checked the transformations so that they close the connection afterwards, so it should in theory be releasing the CSV files for the next step. I suspect that there is a timing issue with this. I can copy the CSV files over, but this is a little sloppy and I would prefer not to do it.

Any ideas how to find a tidy way to ensure these are closed both before and afterwards?

Thanks
Ryan

View 2 Replies View Related

Inconsistent Results

May 5, 2008

Hi,

I am building a report with a recursive hierarchy for drill-down purposes. The hierarchy is built by querying a SSAS OLAP cube and defining a details grouping for the table/matrix.

Every time I run the report, one or more of the leaf members in the recursive hierarchy "jumps" up to the highest level. First I thought that this might be due to the fact that the leaf's parents are not part of the returned dataset. However, the queries make sense and the "offending" members never contain any data (while the query should return only non-empty members), which is why this is very strange behavior. Furthermore, the "offending" member differs between executions of the report, despite the fact that the parameters are exactly the same and the cube is untouched between executions.

I am actually pressing "View Report", waiting for the report to execute and when I press "View Report" again, the returned datasets seem to differ, yielding different "offending" members in the report.

When I run the queries individually in the Data-tab in BIDS, the returned datasets are always the same. Execution caching is turned off for the report.

Checking against SSRS's ExecutionLog, the RowCount for consecutive executions with the exact same parameters differs. For example, RowCount:
3094
3080
3079
3088
3087

Why does SSRS behave so inconsistently? Any tips or tricks?

View 3 Replies View Related

Inconsistent Query Results

Apr 10, 2001

I am running SQL Server 7.0 on NT 4.0. I have created a simple query:

SELECT SUM(month1) As total_month1
FROM eac_manload
WHERE project_number = 8800
and dept IN (50,51,52,55,57,60,61,62,63,64,65,68,69)

The first time I run the query I get the correct result. Subsequent times that I run the query, the result is 1 record with a Null value. The data has not changed. If I stop MSSQLSERVER and restart the service, I get the correct result the first time and the Null value each time thereafter. Anybody out there with any idea of what is going on here? Any help will be appreciated!!

View 1 Replies View Related

Inconsistent Return Codes Using BCP

Mar 2, 1999

I apologize for the length of this message, but I think I need to include all this info so that the problem is understood. I am having what appears to be a problem capturing the return code from a failed BCP.


I create a stored proc to use BCP to load a table:

create procedure sp_bcp_load as
declare @RC int
execute @RC = master..xp_cmdshell "bcp JON..W4KPV in e:inetpubftprootfinreslaw4kpv.g4000.data /Sdbmtss1 /m 0 /f d:mssqluserdatafinresW4KPV.fmt /Usa /P /e d:mssqluserdatafinrescp1.err /t""|"" /r "

select 'Return code from bcp = ', @RC
if @RC <> 0
BEGIN
print 'BCP Error.'
return (8)
END
GO



If I execute the SP, and encounter a transaction log full error, the return code is still zero:


1000 rows sent to SQL Server. 45000 total
1000 rows sent to SQL Server. 46000 total
Msg 1105, Level 17, State 2:
Server 'DBMTSS1', Line 1:

Can't allocate space for object 'Syslogs' in database 'Jon' because
the 'logsegment' segment is full. If you ran out of space in Syslogs,
dump the transaction log. Otherwise, use ALTER DATABASE or sp_extendsegment to increase the size of the segment.

(54 row(s) affected)

----------------------- -----------
Return code from bcp = 0




If I execute the SP again, it correctly returns a non-zero value:


Msg 1105, Level 17, State 2:
Server 'DBMTSS1', Line 1:
Can't allocate space for object 'Syslogs' in database 'Jon' because
the 'logsegment' segment is full. If you ran out of space in Syslogs, dump the transaction log. Otherwise, use ALTER DATABASE or sp_extendsegment to increase the size of the segment.

(6 row(s) affected)

----------------------- -----------
Return code from bcp = 1

(1 row(s) affected)

BCP Error.



Does anybody have an idea why it behaves this way? Any suggestions on how to trap the error on the first call?
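For what it's worth, one way to get more to go on than the return code alone is to capture xp_cmdshell's output into a table and scan it for error text. A minimal sketch (the bcp command line below is a hypothetical placeholder, not the real path or switches):

create procedure sp_bcp_load_check as
declare @RC int, @errlines int

create table #bcp_out (line varchar(255) null)

-- capture every line bcp writes, along with xp_cmdshell's return status
insert into #bcp_out (line)
execute @RC = master..xp_cmdshell 'bcp JON..W4KPV in e:\data\w4kpv.dat /Usa /P /c'

select @errlines = count(*) from #bcp_out
where line like '%Msg%' or line like '%error%'

select 'Return code from bcp = ', @RC, 'error lines = ', @errlines

if @RC <> 0 or @errlines > 0
begin
    print 'BCP Error.'
    return (8)
end
GO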

Thanks,
Jon Carter

View 2 Replies View Related

Inconsistent Sort Orders

Jan 11, 2008

Hello all-
Maybe someone has some insight on this... I currently have some hash values (SHA512 through Enterprise Library) that I'm trying to compare to data in our database. When I sort the values on the pipeline (DT_STR, 1252) with a Sort and compare them against values coming out of an OLE DB Source (SQL_Latin_1_General_CP1_CI_AS) with a Merge Join, any hash that has a '/' in it is treated as unequal to the one in the database.

For even more fun, when I change the sort/merge join to a Lookup, everything (seems) to check out. Is the 1252 code page not the same as SQL_Latin_1_General_CP1_CI_AS? Any other reasons this might be happening?

Cheers,
-Brandon Tucker

View 15 Replies View Related

Inconsistent Package Failures

Jan 19, 2006

I'm debugging my first SSIS package and am getting inconsistent results. The package does not always complete successfully. When the package does fail, it fails at different tasks that acquire database connections. Any of the following error messages may show up:
[Execute SQL Task] Error: Failed to acquire connection "FORGE.FMC". Connection may not be configured correctly or you may not have the right permissions on this connection.

[OLE DB Destination [6374]] Error: The AcquireConnection method call to the connection manager "FORGE.FMC" failed with error code 0xC0202009.

[Connection manager "FORGE.FMC"] Error: An OLE DB error has occurred. Error code: 0x80004005. An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections.". An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Communication link failure". An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Named Pipes Provider: No process is on the other end of the pipe. ".

I never experience any connection errors when executing queries through Management Studio. It's only the SSIS packages that fail to connect every now and then. Any help is appreciated.

Thanks.

View 21 Replies View Related

Inconsistent Clearing Of Connection Pool

Jan 15, 2008

I'm opening a large number of connections to a SQL Server 2005 database using a SQL object based on the System.Data.SqlClient library. When I log out of the application, I call the ClearSQLConnectionPool method of the SQL object, which should clear the pool of these connections. This has worked in the past, but lately I have noticed some odd and very annoying behaviour.

Very often I log out of the application and few, if any, of the connections disappear. Just this morning I opened the application, logged out, and all but one of the connections disappeared. Then I opened it again, logged out, and none of the connections had gone. If I open the application again, all of the past connections are still there plus the new ones.

I know it works, it's just very inconsistent. The problem may be with IIS, SQL or with the library; I just can't tell which.

Any ideas?

Many thanks

View 1 Replies View Related

Inconsistent Results From Stored Procudure

Jan 11, 2000

I have a stored procedure (see below) in which I would like to check whether creating an identity column and making it a primary key succeeded. I check @@ERROR after the EXEC statement. This used to pick up an error if the table already had an identity column. It has stopped doing that. Why? And, if this is not the way to capture the error after the EXEC statement, how do I do it?


CREATE PROCEDURE rasp_test3
/*
Written by Judith Farber Abraham
this procedure loops thru sysobjects looking for user tables.
If a user table, does it have a primary key?
If not, add an identity column to table and make it a primary key
*/
--would like to have sp in main db but use from all three
@fixDB nvarchar(50)--the db to which to add PKs

AS
Declare @TableName varchar(50)
Declare @TableID int
Declare @Msg varchar (50)
Declare @ColumnName varchar(50)
Declare @IndexName varchar(50)
Declare @MyCursor nvarchar(500)
declare @MyCursorC nvarchar(500)
declare @CName sysname
--Set @Msg = "********* Finished adding Ident fields *************"
/* */
/*
do for all user tables ( xtype = u )
*/
set @Mycursor = N'Declare SysCursor cursor for select Name, ID from ' + @fixdb +'.dbo.sysobjects where xtype = "u"'
execute sp_executesql @mycursor
open syscursor
Fetch next from SysCursor into @TableName, @TableID
/* -1 = no record; -2 = row deleted; 0 = got a row */
While (@@Fetch_status <> -1)
Begin
If (@@Fetch_status <> -2)
Begin /* have a user row (table) */
/* */
set @ColumnName = @TableName + 'ID'
set @IndexName = 'PK_' + @columnName

--only add ident and PK if no primary key in table
If not exists (Select * from Sysobjects where Parent_obj = @TableID and xtype = 'PK')

--add an identity column to user table and make it a Primary key

EXEC ('ALTER TABLE ' + @tablename + ' ADD ' + @columnName + ' INT IDENTITY CONSTRAINT ' + @IndexName + ' PRIMARY KEY ' )
--
Begin
--if error, assume already ident column, so find column name & make PK
print @@error
if @@error <> 0 print "jerror occured"
--set @MycursorC = N'Declare SysCursorC cursor for SELECT c.name
--FROM syscolumns c, sysobjects o
--WHERE ((c.id = o.id) AND (c.status = 128)) AND (o.name = ' + @tablename + ')'
--execute sp_executesql @mycursorC
--Open SyscursorC
--Fetch next from SysCursorC into @CName
--print @cname
--close syscursorc
--deallocate syscursorc
--Exec ('ALTER TABLE ' + @tablename + ' ADD ' + @columnName + ' INT IDENTITY CONSTRAINT ' + @IndexName + ' PRIMARY KEY ' )
--select @cname=c.name
--print c.name
End

End
Fetch next from SysCursor into @TableName, @TableID
End
--Print @Msg
Close SysCursor
Deallocate SysCursor
Return
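For comparison, @@ERROR is reset by every statement that runs, including a successful PRINT, so the usual pattern is to copy it into a local variable on the very next statement after the EXEC and test the copy. A minimal sketch with a hypothetical table name:

DECLARE @err int

EXEC ('ALTER TABLE SomeTable ADD SomeTableID INT IDENTITY CONSTRAINT PK_SomeTableID PRIMARY KEY')
SET @err = @@ERROR   -- capture immediately; any later statement resets @@ERROR

IF @err <> 0
    PRINT 'ALTER TABLE failed with error ' + CONVERT(varchar(10), @err)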

Thanks for any help,
Judith

View 3 Replies View Related

Inconsistent Metadata From Linked 6.5 Server

Aug 16, 1999

I have a problem retrieving data from a linked SQL Server 6.5 server in version 7. I have set up the server as a linked server and most operations occur with no problem. With some tables, though, I get the following error when I attempt to query any column of data:

Server: Msg 7356, Level 16, State 1, Line 1
OLE DB provider 'SQLOLEDB' supplied inconsistent metadata for a column. Metadata information was changed at execution time.

This occurs consistently for a subset of tables. I cannot see any pattern to the configuration of these tables that would make them any different to those where a query succeeds.

Any ideas?

Regards

Philippe

View 1 Replies View Related

Inconsistent Stored Procedure Results

Jul 13, 2006

I'm testing some code to look up values from my database and update a specific field when certain conditions are met. I'm having trouble with some code that is giving me the results I expect when I submit one set of parameters, but is not finding anything in the database for another set, when I know the data exists.

Here's the code for my stored procedure, SP:

Code:

@flightid bigint,
@departuretime smalldatetime
AS
SELECT flightid, flightno, departuretime, origincode, destinationcode
FROM flightschedules

WHERE flightid <> @flightid
AND departuretime = CONVERT(SMALLDATETIME, @departuretime, 120)



And here's the vbscript that calls it:


Code:

vOutboundID = 452
vReturnID = 453

'--- Get the flight details ---
strOrigin = "confirmflightdetails '" & vOutboundID & "';"
set rsOrigin = Server.CreateObject("ADODB.Recordset")
rsOrigin.Open strOrigin, objConn
response.write "origin " & rsOrigin("departuretime") & "<BR>"

strReturn = "confirmflightdetails '" & vReturnID & "';"
set rsReturn = Server.CreateObject("ADODB.Recordset")
rsReturn.Open strReturn, objConn
response.write "return " & rsReturn("departuretime") & "<BR>"

strGetOFID = "SP '" & vOutboundID & "', '" & rsOrigin("departuretime") & "';"
set rsOFID = Server.CreateObject("ADODB.Recordset")
rsOFID.Open strGetOFID, objConn

DO WHILE NOT rsOFID.EOF
response.write "OFNO " & rsOFID("flightno") & " " & rsOFID("flightid") & "<br>"
rsOFID.MoveNext
Loop

strGetRFID = "SP '" & vReturnID & "', '" & rsReturn("departuretime") & "';"
set rsRFID = Server.CreateObject("ADODB.Recordset")
rsRFID.Open strGetRFID, objConn

DO WHILE NOT rsRFID.EOF
response.write "RFNO " & rsRFID("flightno") & " " & rsRFID("flightid") & "<br>"
rsRFID.MoveNext
Loop




Here's the code for confirmflightdetails:

Code:

@flightid bigint
AS
SELECT flightid, flightno, departuretime
FROM flightschedules
WHERE flightid = @flightid



When confirmflightdetails is tested, I get the proper results, as confirmed by the response.write statements:

452  109  2006-07-29 08:00:00
453  110  2006-07-29 12:05:00


I put the response.write statements and loops in so I could verify the functionality.

Here's what it produces:

out 452
ret 453
origin 7/29/2006 8:00:00 AM
return 7/29/2006 12:05:00 PM
OFNO 109 450

Here's what it should produce:

out 452
ret 453
origin 7/29/2006 8:00:00 AM
return 7/29/2006 12:05:00 PM
OFNO 109 450
RFNO 110 451

If I do this in query analyzer:

Code:

select flightid, flightno, departuretime
from flightschedules
where flightid > 449 and flightid < 454



this is what I get from the database:

flightid flightno departuretime origin destination
452  109  2006-07-29 08:00:00  A  C
450  109  2006-07-29 08:00:00  A  B
453  110  2006-07-29 12:05:00  C  A
451  110  2006-07-29 13:15:00  B  A

What I'm trying to do is look up the chosen flight, then find the flight with the matching origin/destination (the other flight leg) on the same day.

I can't figure out why it's working for one set of parameters and not for the other.

Thanks in advance for any help!

View 1 Replies View Related

DBCC CHECKDB Inconsistent Errors

Mar 28, 2007

I'm in the process of trying to cut over SQL 2000 database activity to a newer used server running Windows 2003. All the latest Windows and SQL service packs through late Feb 2007 have been installed. The server has a Promise FastTrak S150 TX2plus SCSI RAID with two 80 GB drives. I have been using it as a database development system for 3 months without any issues.

This morning I restored the latest full backup and 2 intervening transaction log backups. I am the only user on the system today. I ran DBCC CHECKDB WITH TABLERESULTS 5 times over a 45-minute period and received 5 different results!

Run 1 - No Error

Run 2 - Following Errors:

Error 2511, Level 16, State 2: Table error: Object ID 754101727, Index ID 0. Keys out of order on page (1:631938), slots 40 and 41. (repair_rebuild)

Error 8990, Level 10, State 1: CHECKDB found 0 allocation errors and 1 consistency errors in table 'DatabaseEventLog' (object ID 754101727).

Error 8928, Level 16, State 1: Object ID 1218103380, index ID 255: Page (1:373888) could not be processed. See other errors for details. (repair_allow_data_loss)

Error 8990, Level 10, State 1: CHECKDB found 0 allocation errors and 1 consistency errors in table 'WeatherTransaction' (object ID 1218103380).

Run 3 - Following Errors:

Error 2511, Level 16, State 2: Table error: Object ID 754101727, Index ID 0. Keys out of order on page (1:631938), slots 40 and 41. (repair_rebuild)

Error 8990, Level 10, State 1: CHECKDB found 0 allocation errors and 1 consistency errors in table 'DatabaseEventLog' (object ID 754101727).

Error 8928, Level 16, State 1: Object ID 1218103380, index ID 255: Page (1:373888) could not be processed. See other errors for details. (repair_allow_data_loss)

Error 8990, Level 10, State 1: CHECKDB found 0 allocation errors and 1 consistency errors in table 'WeatherTransaction' (object ID 1218103380).

Run 4 - Following Errors:

Error 2511, Level 16, State 2: Table error: Object ID 754101727, Index ID 0. Keys out of order on page (1:631938), slots 40 and 41. (repair_rebuild)

Error 8990, Level 10, State 1: CHECKDB found 0 allocation errors and 1 consistency errors in table 'DatabaseEventLog' (object ID 754101727).

Error 8928, Level 16, State 1: Object ID 1218103380, index ID 255: Page (1:373888) could not be processed. See other errors for details. (repair_allow_data_loss)

Error 8941, Level 16, State 102: Table error: Object ID 1218103380, index ID 255, page (1:373888). Test (sorted [i].offset >= PAGEHEADSIZE) failed. Slot 8, offset 0x2 is invalid. (repair_allow_data_loss)

Error 8942, Level 16, State 101: Table error: Object ID 1218103380, index ID 255, page (1:373888). Test (sorted[i].offset >= max) failed. Slot 0, offset 0x8 overlaps with the prior row. (repair_allow_data_loss)

Error 8990, Level 10, State 1: CHECKDB found 0 allocation errors and 3 consistency errors in table 'WeatherTransaction' (object ID 1218103380).

Run 5 - No Error!

I ran the obligatory CHKDSK several times and had no errors:

C:\>chkdsk
The type of the file system is NTFS.

WARNING! F parameter not specified.
Running CHKDSK in read-only mode.

CHKDSK is verifying files (stage 1 of 3)...
File verification completed.
CHKDSK is verifying indexes (stage 2 of 3)...
Index verification completed.
CHKDSK is verifying security descriptors (stage 3 of 3)...
Security descriptor verification completed.

78116030 KB total disk space.
45291176 KB in 33231 files.
10496 KB in 3135 indexes.
0 KB in bad sectors.
121086 KB in use by the system.
65536 KB occupied by the log file.
32693272 KB available on disk.

4096 bytes in each allocation unit.
19529007 total allocation units on disk.
8173318 allocation units available on disk.

Are these DBCC CHECKDB WITH TABLERESULTS differences normal, or is this a bug or a hardware issue? Any advice you can offer would be appreciated.

Thanks! Brad

View 10 Replies View Related

Inconsistent, Astronomical Duration In Trace

Jan 16, 2008

I have a dedicated SQL Server 2005 Workgroup server with Rackspace (RAID 1, which obviously needs to get changed). It's been running relatively smoothly and the last time I traced it (a couple of weeks ago), there were no significant issues. I run a reindex and I reboot every single night, and have built many indices on the tables using Analysis Services.

We move a ton of data in a single day, and the users were complaining of lag, which appeared related to this server. After running a trace, I noticed:

- my normal SPs were taking roughly 5-20x the normal duration and many were timing out
- I was frequently running SPs directly on the server that gave me transaction deadlocked on resource errors
- some of the tiny, quick calls made in the trace were showing as impossible, astronomical durations! Like things that should be "3"ms were showing up as "81544123941234123478090"ms

Is it corrupted?

HELP!

View 1 Replies View Related

'SQLNCLI' Supplied Inconsistent Metadata

Dec 20, 2007



I am running a SQL2K on a Server2K box. I have linked servers to SQL2005 Express on WinXP. I am trying to get data off of the SQL2005 DB to the central DB on SQL2K. This has been running for about a year. I am now getting an error message as follows:

Server: Msg 7356, Level 16, State 1, Line 1
OLE DB provider 'SQLNCLI' supplied inconsistent metadata for a column. Metadata information was changed at execution time.
OLE DB error trace [Non-interface error: Column 'TestVarChar' (compile-time ordinal 2) of object '"WKS"."DBo"."testtable"' was reported to have changed. The exact nature of the change is unknown].

When I run the following Queries on the SQL2K box:
SELECT * from WKSA400.WKS.DBo.testtable

or

SELECT * from OPENQUERY(WKSA400, 'SELECT * from WKS.DBO.testtable')

I did notice that if my testtable has only numeric fields, it works fine. It only has a problem with alphanumeric fields.

If I run 'exec sp_tables_ex WKSA400' on the SQL2K box, it works fine. I get a listing of all the tables on the remote server. This tells me that my security is fine. It seems to be a conversion issue between SQL 2005 and SQL 2000. If it had not been working for over a year, the dent in the wall the size of my head would be much smaller.

View 7 Replies View Related

Error: The Component Has Inconsistent Metadata

Jul 11, 2006



I have a Source Query with this sql set as a property expression:

"SELECT Category, Server_Name, Entitle_UserID,User_SubID,Start_Time,End_Time,Entitle_User_Name,Stat_Name,Stat_Count,Stat_Type,pk,create_date,run_num,Average,Median,Maximim FROM tbl_ws_stats WHERE pk > " + (DT_STR, 100, 1252)@[pk_var]

There is a message : 'The component has inconsistent metadata.'

Then when I click on the Source Query: 'The component is not in a valid state. Do you want the component to fix itself automatically?'

I also notice that there are no columns on the Column Mappings tab and no way to add columns.

How can I correct this?

Thanks

View 9 Replies View Related

Document Map Limits-inconsistent Behavior

Feb 27, 2008

In the preview view in Visual Studio, I can see all of the items in my document map (over 100 labels). After deployment, only the first 30 items are displayed and I cannot scroll down to see the missing items. Is this a known bug? Is there any workaround?

Thanks
Dean

View 1 Replies View Related

Virtualization Licensing: Confusing And Inconsistent

May 30, 2007

Hi, I'm researching the possibility of housing permanent virtual SQL Server instances. Part of the research includes licensing and costs:



I have to say I am confused by two sources of material related to SQL Server virtualization licensing on Microsoft's website...



Source 1: http://www.microsoft.com/sql/howtobuy/faq.mspx , quote below:



Q. How do I license SQL Server 2005 for my virtual environments?

A. For Standard, Workgroup, and Enterprise, if you decide to license on a per processor basis, you must buy a SQL Server license for each virtual processor. For Enterprise Edition, you can also choose to license all physical processors in a box. This gives you rights to run SQL Server on unlimited number of virtual processors running on the same machine. For Server/CAL mode, for Standard and Workgroup, you must obtain SQL Server licenses for each Virtual Operating System Environment on which you run instances of SQL Server. However, for Enterprise, if you have a Server license for the physical Server, you may run unlimited instances of SQL Server in any Virtual Operating System Environment that you run on that same machine.

If you are using hardware partitioning on a multi-processor server, you get unlimited virtualization for SQL Server Enterprise Edition as long as all processors in that hardware partition are licensed. For example, if you have a hard partition of 10 physical processors on a 32-processor superdome server, 10 processor licenses of SQL Server 2005 give you rights to run free unlimited virtual machines with SQL Server on those 10 physical processors.



I came up with the following table-format explanation to make sense of it all:








Licensing Mode: Per processor, per virtual
Applies to: Standard, Workgroup, Enterprise
Number of licenses needed: #virtual OSs * #virtual CPUs
To get this many standalone instances: Standard, Workgroup: #virtual OSs * up to 16 instances; Enterprise: #virtual OSs * up to 50 instances

Licensing Mode: Per processor, per physical
Applies to: Enterprise
Number of licenses needed: #physical CPUs
To get this many standalone instances: ¥ virtual OSs * up to 50 instances

Licensing Mode: Per seat (aka Server + CAL), per virtual
Applies to: Standard, Workgroup, Enterprise
Number of licenses needed: #virtual OSs + S(users)
To get this many standalone instances: Standard, Workgroup: #virtual OSs * up to 16 instances, among S(users); Enterprise: #virtual OSs * up to 50 instances, among S(users)

Licensing Mode: Per seat (aka Server + CAL), per physical
Applies to: Enterprise
Number of licenses needed: 1 physical OS + S(users)
To get this many standalone instances: ¥ virtual OSs * up to 50 instances, among S(users)

Where:
"CPU" is the number of CPUs as physically recognized by the virtual OS or physical OS. CPUs enumerated by hyperthreading or dual-core processing and the corresponding host CPU are considered together as one CPU. Refer to the Intel CPUCount utility for more information.
"¥ virtual OSs" represents the theoretical maximum number of virtual machines supported on the hardware of the physical host.
"S(users)" represents the sum of unique users or devices that may or may not be concurrently connected at any time.



Source 2: http://www.microsoft.com/licensing/userights , select "SQL Server", "-->", quotes below:



"Per Processor" page:

To run instances of the server software in virtual operating system environments on a server, you need a software license for each virtual processor that each of those virtual operating system environments uses. If a virtual operating system environment uses a fraction of a virtual processor, the fraction counts as a full virtual processor.



"Per Server/CAL" page:

For each server software license you assign, you may run, at any one time, one instance of the server software in one physical or virtual operating system environment on the licensed server.



Which I feel equates to this:






Licensing Mode: Per processor, per virtual
Applies to: Standard, Workgroup, Enterprise
Number of licenses needed: #virtual OSs * #virtual instances * #virtual CPUs
To get this many standalone instances: Standard, Workgroup, Enterprise: #virtual OSs * #virtual instances

Licensing Mode: Per seat (aka Server + CAL), per virtual
Applies to: Standard, Workgroup, Enterprise
Number of licenses needed: #virtual OSs + S(users)
To get this many standalone instances: Standard, Workgroup: #virtual OSs * #virtual instances, among S(users); Enterprise: #virtual OSs * #virtual instances, among S(users)



Can a Microsoft representative clear this up for me, please?



I want to present the information accurately and objectively short-term, as well as legally in the long-term.

Thanks in advance for any information.

View 9 Replies View Related

Inconsistent Result Of Sybase Linked Server

May 4, 2008

help...


I successfully linked the Sybase database to SQL Server. At first I can select a record, but after I execute the same statement the result becomes different. I was looking for an answer on the web but I can't find one. Can somebody help me?

I suspect that SQL Server caches the result, so I tried to create a view for the Sybase database table. When I select the view it gives me the complete result, but when I rerun the statement it becomes inconsistent.

Is there a cache or something in SQL Server? If there is, how can I clear it so I will always get a fresh copy of the result?

View 6 Replies View Related

Inconsistent Order By Using Insert Into In A Stored Procedure

Jul 20, 2005

hi there,

i am using sql server 7. below is the stored procedure that is giving me grief. its purpose is two-fold, depending on how it is called: either to return a pageset (based on page number and page size), or to return IDs of previous and next records (based on current record id).

the problem is that the order in which records are inserted into the temp table is inconsistent, even though the calling statement and the order by is always the same: sometimes records are ordered correctly, by project_number, and sometimes the order is broken starting at some record (which is always the same).

i have no idea what is wrong here, i would appreciate any help! thanks so much.

here is the calling statement:

EXECUTE spProjects 2, null, 'project_number', 'asc', '', 6, 50

here is the proc:

CREATE PROCEDURE spProjects
@action int,
@currID int,
@sortBy varchar(50),
@sortDir varchar(4),
@searchBy varchar(255),
@Page int,
@RecsPerPage int
AS
SET NOCOUNT ON

DECLARE @nextID int
DECLARE @prevID int
DECLARE @currRow int
DECLARE @rowCount int
DECLARE @firstRec int
DECLARE @lastRec int
DECLARE @total int
DECLARE @more int
DECLARE @sortBy2 varchar(50)

-- setup temp table
SELECT r.id as row, r.*,
    a.name agr_type,
    pu.name purpose,
    sp.name sponsor,
    pr.name prime,
    p.lname p_lname, p.fname p_fname, p.mname p_mi, p.email p_email,
    o.name org,
    convert(varchar(10), r.created_date, 101) adddate_c,
    convert(varchar(10), r.updated_date, 101) upddate_c
INTO #project_temp_table
FROM spm_projects r, spm_agreement_types a, spm_purpose_types pu, spm_sponsors sp, spm_sponsors pr, spm_pis p, spm_orgs o
WHERE 1 = 0

IF @sortBy IS NULL SELECT @sortBy = 'project_number'
IF @sortBy = '' SELECT @sortBy = 'project_number'
SELECT @sortBy2 = @sortBy + ' ' + @sortDir
IF @sortBy NOT LIKE '%project_number%' SELECT @sortBy2 = @sortBy2 + ', project_number'

-- get projects
EXEC ('INSERT INTO #project_temp_table
    SELECT r.id as row, r.*,
        a.name agr_type,
        pu.name purpose,
        sp.name sponsor,
        pr.name prime,
        p.lname p_lname, p.fname p_fname, p.mname p_mi, p.email p_email,
        o.name org,
        convert(varchar(10), r.created_date, 101) adddate_c,
        convert(varchar(10), r.updated_date, 101) upddate_c
    FROM spm_projects r, spm_agreement_types a, spm_purpose_types pu, spm_sponsors sp, spm_sponsors pr, spm_pis p, spm_orgs o
    WHERE r.agreement_type_id = a.id
    AND r.purpose_type_id = pu.id
    AND r.sponsor_id = sp.id
    AND r.prime_id *= pr.id
    AND r.pi_id = p.id
    AND r.org_id = o.id' + @searchBy + '
    ORDER BY ' + @sortBy2)

SET @rowCount = 0

-- number records
UPDATE #project_temp_table SET @rowCount = row = @rowCount + 1

-- prev/next
SELECT @currRow = row FROM #project_temp_table WHERE id = @currID
SELECT @prevID = id FROM #project_temp_table WHERE row = @currRow - 1
SELECT @nextID = id FROM #project_temp_table WHERE row = @currRow + 1

-- paging
SELECT @firstRec = (@Page - 1) * @RecsPerPage
SELECT @lastRec = (@Page * @RecsPerPage + 1)
SELECT @more = COUNT(*) FROM #project_temp_table WHERE row >= @LastRec
SELECT @total = COUNT(*) FROM #project_temp_table

SET NOCOUNT OFF

-- prev/next
IF @action = 1 SELECT @prevID as prevID, @nextID as nextID

-- paging
IF @action = 2
    SELECT *, @more as more, @total as total
    FROM #project_temp_table
    WHERE row > @firstRec AND row < @lastRec

DROP TABLE #project_temp_table

View 7 Replies View Related






