Trying to chase down a baffling performance issue. Our database has been running very slowly lately, so we are performance tuning it. As part of that, we created a copy of our production database. In that copy, I changed one clustered index on a table to try to improve performance. I ran one query and saw a slight improvement, but also saw a "lazy spool" in the execution plan.
I tried to change it back to the original index by dropping the changed index and recreating the original one. I then ran the original query, which now went from 5 seconds to 36 seconds.
I then ran DBCC DBREINDEX on that table. Performance of the query was still markedly worse. I then reran DBCC DBREINDEX on all tables and updated each table's statistics. Performance of that query has never returned to the original 5 seconds.
What could be at issue here? Is there something else that I caused in
changing the index and changing it back?
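For reference, this is roughly the sequence I used to put things back and refresh everything afterwards (a sketch only; the table, column, and index names here are placeholders, not our real ones):

-- placeholders: MyTable, KeyCol, and the index names are illustrative
DROP INDEX MyTable.IX_MyTable_Experiment                        -- drop the experimental clustered index
CREATE CLUSTERED INDEX IX_MyTable_Original ON MyTable (KeyCol)  -- recreate the original one
DBCC DBREINDEX ('MyTable')                                      -- rebuild all indexes on the table
UPDATE STATISTICS MyTable WITH FULLSCAN                         -- refresh statistics with a full scan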
One of my developers recently restored a backup of the production database onto his test server. His test server has the same configuration as the production server.
One of the Stored Procedures that is called takes 1:45 to run on his machine, but only 2 seconds on the production server. This same SP takes only 2 seconds on my development database.
The SP is called iteratively, up to 10 times, to run against 10 separate fields. A parameter called @CriteriaClassID determines which portion of the SP runs.
The significant difference in processing time is in itself baffling, since the servers have the same specs and configuration as far as I can tell, and the data is identical, since he has a backup of the most recent production data.
But more baffling: if, in his data, I switch the values from field 1 to field 2, and vice versa, his results take 2 seconds (switching the values in field 1 to field 2 switches the value in @CriteriaClassID which is passed through to this SP).
It's exactly the same SP; the only difference is that field 1 is processed first, field 2 second, field 3 third etc. On the production site and my development site, it doesn't make a difference in the order they are processed. On his machine it does.
Any ideas? I thought perhaps his indexes were corrupted in the rebuild, but we ran a SQL Server maintenance schedule to clean them up, and no improvement.
This is the SP, if it is of any help:
CREATE procedure [dbo].[st_pull_model_data]
    @ModelID as integer,
    @CriteriaID as integer
as
declare @ClientID as integer, @CriteriaClassId as char(1)

/* Procedure to pull data from org_model_data and postalcode_model_data for modeling and media analysis */
/* Needs table #temp_data created outside of the SP with fields org_id and zip_code */
/* This procedure is used by SP st_model_data */

if @CriteriaID is not null
begin
    set @CriteriaClassId = (select model_criteria_type from model_criteria where model_criteria_id = @CriteriaID)

    if @CriteriaClassID = 'G' -- changes client_id from specific to general, if General is required
        set @ClientID = 0
    else
        set @ClientID = (select client_id from model where model_id = @ModelID)

    if @CriteriaClassId in ('G', 'P')
    begin
        update #temp_data
        set data1 = postal_criteria_value
        from #temp_data t
        left outer join (select postalcode, postal_criteria_value
                         from postalcode_model_data pmd
                         join model_org_trade_area mota on mota.zip_code = pmd.postalcode
                         join model_org mo on mo.model_org_id = mota.model_org_id
                         where model_criteria_id = @CriteriaID
                           and client_id = @ClientID
                           and mo.model_id = @ModelID) as PMD
            on PMD.postalcode = t.zip_code
    end
    else
    begin
        update #temp_data
        set data1 = org_criteria_value
        from #temp_data t
        left outer join (select distinct postalcode, org_criteria_value, omd.org_id
                         from org_model_data omd
                         join org o on o.org_id = omd.org_id
                         join model_org_trade_area mota on mota.zip_code = omd.postalcode
                         join model_org mo on mo.model_org_id = mota.model_org_id and mo.org_id = o.org_id
                         where model_criteria_id = @CriteriaID
                           and o.client_id = @ClientID
                           and mo.model_id = @ModelID) as OMD
            on OMD.postalcode = t.zip_code and OMD.org_id = t.org_id
    end
end
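For what it's worth, this is the kind of check I'm planning to run next on his server to rule out stale statistics or a stale cached plan (a sketch only, using the table and procedure names from above):

-- refresh statistics on the tables the procedure touches
UPDATE STATISTICS org_model_data WITH FULLSCAN
UPDATE STATISTICS postalcode_model_data WITH FULLSCAN
-- flag the procedure so the next call compiles a fresh plan
EXEC sp_recompile 'st_pull_model_data'
-- then show the actual plan for a field-1-first run and a field-2-first run and compare
SET STATISTICS PROFILE ON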
Basically, I have a web-based application that queries aggregates from several big tables that grow rather quickly. I feel I am fairly accomplished at tuning individual queries, and I have optimized these particular stored procedures as much as I can; I often see great results for a period of time. However, as time passes, the stored procedures begin performing really badly. What's frustrating is that I have traced the server and located a stored procedure that consistently executes with an abnormally high duration, but when I run the exact same text directly against SQL Server it returns instantly.
If I recompile the stored procedure over my direct connection by adding the WITH RECOMPILE clause to the EXEC statement, the problem remains for the web app. However, if I add WITH RECOMPILE to the actual stored procedure, it is instantly resolved. I can then remove WITH RECOMPILE from the stored procedure text and things will remain speedy for a period of time before they again slip back.
I'm curious as to what I can do from a SQL standpoint to correct for this. For now, the app team is setting a counter for each execution and then issuing a WITH RECOMPILE when the counter limit is exceeded.
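In case it clarifies what we've tried, here is a sketch of the two recompile options (the procedure, table, and parameter names are made up for illustration):

-- option 1: bake recompilation into the procedure itself
CREATE PROCEDURE dbo.usp_GetAggregates   -- hypothetical procedure
    @CustomerID int
WITH RECOMPILE                           -- fresh plan on every execution
AS
SELECT COUNT(*) AS OrderCount
FROM dbo.Orders                          -- hypothetical table
WHERE CustomerID = @CustomerID
GO

-- option 2: leave the procedure text alone and invalidate its cached plan
-- from outside, so only the next execution compiles fresh
EXEC sp_recompile 'dbo.usp_GetAggregates'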
I apologize if this question has been asked before; I attempted a review of the forums but was unable to find any material. Any suggestions, advice, links or general assistance would be most appreciated.
Hi, I am facing a peculiar problem while looking into a live database currently under operation in one of my client's projects. An application that updates 3 tables in the database is intermittently failing to update a certain number of fields in one of the tables. The problem is not frequent, and I have checked through the server's Performance Monitor that there is no performance lag on the server at any point in time. The tables are indexed with common key fields. Can anybody help me in this regard? Thanks & Regards.
Hi everyone! I have a very brief question. I have 10 queues in my database and each of them is sent an equal number of messages. Sometimes they execute/activate the stored procedures very fast, but there are times when they don't. Does anyone have an idea why this happens?
Thank you very much for taking the time to read my post. :)
I've been working with SQL Server 2005 for a while now and I've noticed some odd behavior that I want to bounce off other members of the community. I should preface this by saying that I've been a forum viewer (and occasional contributor) here at SQL Team for a while, and I've naturally developed a keen sense for optimizations.
Fundamentally, longer stored procedures with perfectly fine, optimized execution plans are inconsistent with real-world performance. In some of these cases, a plan with a low subtree cost, on a 4-core machine with 16 GB of RAM and two 15-drive SAS arrays under little load, takes excessively long to run or in some cases doesn't complete.
This isn't due to blocking or resource bottlenecks as I'm quite familiar with built in tools to troubleshoot and resolve those issues. In all cases, I am able to rearchitect the stored procedure into a higher subtree cost variant and get reasonable performance, but it's frustrating to have to redo work and there seems to be no common theme other than longer multi-statement procedures.
I've used SQL Server 2000 extensively and did not notice this level of inconsistency in performance with that product version. Just wondering if others in the community have had similar experiences, or if I'm just crazy.
Hi, just wondering if anyone has noticed that MSSQL Express 2005 is inconsistent in its performance. I was testing out a query with 2 joins, with the main table having about 13,600,000 records. With the same criteria value, sometimes it finishes executing in 3 seconds, sometimes in almost a minute.
Could it be just the volume of the data? This is really driving me crazy!
I have a query that joins two large partitioned tables and depending on the values in the where clause, I can get dramatically different performance results.
The first query completed in around 7s and has 47,000 logical reads.
select mo.monitor_id,
mo.site_id,
mo.testtime,
sum(mo.NumBytes),
sum(mo.DNSTime),
sum(mo.ConnectTime),
sum(mo.FirstByteTime),
sum(mo.ContentTime),
sum(mo.RelocTime)
from monitor_raw mr(nolock), monitor_object mo(nolock)
where mr.monitor_id in (5339, 5341, 5342, 943842, 943866)
and mr.testtime between 'Oct 31 2007 3:00:00:000PM' and 'Nov 30 2007 3:00:00:000PM'
and mo.returncode = 200
and mr.site_id in (101,102,105,109,110,112,115,117,119,122,126,151,132,139,129,135,121,138,143,142,159,148,128,171,176,177,178,111,113,116,118,120,127,133,131,130,174,179,185,205,200,202,203,204,210,211,208,209,212,213,216,199,214,224,225,229,230,232,235,241,245,247,250,254,261,267,264,265,266,268,269)
and mr.escalationlevel = 0
and mr.monitor_id = mo.monitor_id
and mr.testtime = mo.testtime
and mr.site_id = mo.site_id group by mo.monitor_id, mo.site_id, mo.testtime
The second query takes 188s to complete and has 1.8m logical reads. The only difference between the two is the value of the monitor_ids in the where clause.
select mo.monitor_id,
mo.site_id,
mo.testtime,
sum(mo.NumBytes),
sum(mo.DNSTime),
sum(mo.ConnectTime),
sum(mo.FirstByteTime),
sum(mo.ContentTime),
sum(mo.RelocTime)
from monitor_raw mr(nolock), monitor_object mo(nolock)
where mr.monitor_id in (152682, 5339, 5341, 5342, 268080)
and mr.testtime between 'Oct 31 2007 3:00:00:000PM' and 'Nov 30 2007 3:00:00:000PM'
and mo.returncode = 200
and mr.site_id in (101,102,105,109,110,112,115,117,119,122,126,151,132,139,129,135,121,138,143,142,159,148,128,171,176,177,178,111,113,116,118,120,127,133,131,130,174,179,185,205,200,202,203,204,210,211,208,209,212,213,216,199,214,224,225,229,230,232,235,241,245,247,250,254,261,267,264,265,266,268,269)
and mr.escalationlevel = 0
and mr.monitor_id = mo.monitor_id
and mr.testtime = mo.testtime
and mr.site_id = mo.site_id group by mo.monitor_id, mo.site_id, mo.testtime
The two tables have clustered indexes on monitor_id, testtime and site_id. Comparing the execution plan, I can see why there is such a difference in performance. The second query performs a clustered index seek on the monitor_object table starting at the lowest monitor_id, testtime & site_id through the highest monitor_id, testtime & site_id. The first query performs a clustered index seek where the monitor_id, testtime and site_id equals the same values from the monitor_raw table.
My question is, how can I force the second query to use the same execution plan as the first so that I can get better performance?
One possible workaround that I could use is to execute five individual queries, one for each monitor_id and then union the results together but this would require significant code changes to my stored procs.
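For reference, the workaround would look something like this, one branch per monitor_id glued together with UNION ALL (sketched here for just two of the five ids, with a single SUM column, and with the long site_id IN list left out to keep the sketch readable; the real version would keep all the original predicates and columns):

select mo.monitor_id, mo.site_id, mo.testtime, sum(mo.NumBytes) as SumNumBytes
from monitor_raw mr(nolock), monitor_object mo(nolock)
where mr.monitor_id = 152682   -- single value: the optimizer can seek exactly as in the fast plan
and mr.testtime between 'Oct 31 2007 3:00:00:000PM' and 'Nov 30 2007 3:00:00:000PM'
and mo.returncode = 200
and mr.escalationlevel = 0
and mr.monitor_id = mo.monitor_id
and mr.testtime = mo.testtime
and mr.site_id = mo.site_id
group by mo.monitor_id, mo.site_id, mo.testtime
union all
select mo.monitor_id, mo.site_id, mo.testtime, sum(mo.NumBytes) as SumNumBytes
from monitor_raw mr(nolock), monitor_object mo(nolock)
where mr.monitor_id = 5339     -- next monitor_id, same shape
and mr.testtime between 'Oct 31 2007 3:00:00:000PM' and 'Nov 30 2007 3:00:00:000PM'
and mo.returncode = 200
and mr.escalationlevel = 0
and mr.monitor_id = mo.monitor_id
and mr.testtime = mo.testtime
and mr.site_id = mo.site_id
group by mo.monitor_id, mo.site_id, mo.testtime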
I have an ASP.NET application on SQL Server 2005. I have now indexed all the physical primary and foreign keys, the virtual primary and foreign keys, the sort-order columns, the WHERE-clause fields, and so on. On the first day, I indexed only the physical and virtual primary and foreign keys, and I noticed the loading performance improved. So I continued with the remaining indexing on the second day. This time, I noticed the loading performance is slower by 0.5 to 1 second. Is there any possibility that loading performance will be slower after indexing? Please advise. Thanks.
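In case it helps with an answer, this is the check I can run to see whether any of the second-day indexes are being maintained on every load but never actually read (SQL Server 2005 DMV; a sketch only):

-- indexes with high user_updates but zero seeks/scans/lookups are pure
-- write overhead: they slow inserts and updates without helping reads
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY s.user_updates DESC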
Anyways, I recently imported some records (4,000 or so) into it which didn't have a UPC code (NULL). Since then, all the stored procedures in my database that use this table have been incredibly slow; things that used to take 2 minutes to run now take an hour. I'm not really sure what is causing this, but since it worked fine before I imported the records, I have an idea that it has something to do with that. I'm wondering if, since there was an index on UPC code, maybe it is choking on all the NULLs? I'm still just learning about the wonderful world of indexes.
The types of queries I'm running on this table are not that involved, mainly stuff like:
SELECT Cost, ItemID FROM SupplierData WHERE UPC = @UPC AND Supplier = @SupplierName ORDER BY Cost

SELECT TOP 1 Cost FROM SupplierData WHERE UPC = @UPC ORDER BY Cost
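Since the slowdown started right after the import, the first thing I was going to try is refreshing the optimizer's statistics on that table, in case the 4,000 new NULL rows skewed them (a sketch, using the table name from the queries above):

-- recompute statistics from every row, not a sample
UPDATE STATISTICS SupplierData WITH FULLSCAN
-- or rebuild the table's indexes outright, which also refreshes their statistics
DBCC DBREINDEX ('SupplierData')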
Hello friends, I need to tune my stored procedures for the best performance. First, I have a question about the execution plan of a SQL statement.
-- assume "Table_Heat" has one million records
-- assume the "Mill" column holds either 1 or 2
-- assume the "HeatNumber" column is unique
1. If I have a statement like this:

SELECT * FROM Table_Heat WHERE HeatNumber = @nHeat AND Mill = @nMill

How does the WHERE clause filter the result from the table? In which order are the AND conditions of the WHERE clause evaluated?
2. If I have created indexes for the table Table_Heat, then for the above SQL statement:
-- if the index is on HeatNumber -- execution speed?
-- if the index is on (HeatNumber, Mill) -- execution speed?
-- if the index is on (Mill, HeatNumber) -- execution speed?
I know how to work with indexes, but creating too many indexes, or unwanted ones, also causes performance inefficiency, doesn't it? The three variants are sketched below. So please help me on this.
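To make the question concrete, these are the three index variants I'm weighing (the index names are mine, just for illustration):

-- HeatNumber is unique, so this seek finds at most one row;
-- Mill is then checked on that single row
CREATE INDEX IX_TableHeat_Heat ON Table_Heat (HeatNumber)

-- with equality predicates on both columns, a seek can use both keys;
-- leading with the selective HeatNumber also serves queries that
-- filter on HeatNumber alone
CREATE INDEX IX_TableHeat_Heat_Mill ON Table_Heat (HeatNumber, Mill)

-- Mill has only two distinct values, so leading with it mainly helps
-- queries that filter on Mill alone
CREATE INDEX IX_TableHeat_Mill_Heat ON Table_Heat (Mill, HeatNumber)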
I am about to heavily index a table and have to include at least 3 to 4 columns in the full-text index for this table. The table is updated very frequently, and the columns involved in the full-text indexing also undergo frequent updates. As of now, I can't avoid using full-text indexing, as these columns are very lengthy and basically contain text. The users of the database will give some keywords as the search criteria to get information about what they are looking for. How frequently should I update my full-text catalog? This is a scenario where the full-text search operates on various tables, and each of these tables might contain around 300,000 to 800,000 rows. I would appreciate an intelligent suggestion, as I need it as soon as possible.
I want to know: is a flat file faster than an RDBMS for indexing? For example, for a search engine index, would a flat file be better than an RDBMS in terms of performance, scalability, etc.?
The other day we tried the online re-indexing feature of SQL 2005, and it's performing faster than offline re-indexing. Could you please validate whether it's supposed to be this way? I always thought offline should be faster than online.
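For reference, this is the shape of the two rebuilds we compared (a sketch; the table and index names are placeholders):

-- offline rebuild: the table is locked for the duration
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD

-- online rebuild (Enterprise Edition): reads and writes continue while
-- the new index is built alongside the old one
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD WITH (ONLINE = ON)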
Hi
I have an oddity. If I run a piece of SQL:

SELECT EmployeeNo, MailTo
FROM ST_PPS.dbo.Employee
WHERE AddedOn BETWEEN '01-jan-2006' AND '01-feb-2006'
AND MailTo NOT IN ('3', 'x')
ORDER BY MailTo

I get the results:

EmployeeNo MailTo
----------- ------
608384 1
606135 1
608689 1
609095 1
607163 1
606165 1
606472 1
608758 1
.....
for 2594 rows.

If I create a stored procedure with the same SQL:

CREATE PROCEDURE dbo.PPS_test
AS
SELECT EmployeeNo, MailTo
FROM ST_PPS.dbo.Employee
WHERE AddedOn BETWEEN '01-jan-2006' AND '01-feb-2006'
AND MailTo NOT IN ('3', 'x')
ORDER BY MailTo
GO

and run it:

EXEC PPS_test

I get three extra rows:

EmployeeNo MailTo
----------- ------
607922 NULL
606481 NULL
605599 NULL
606316 1
608871 1
607427 1
608795 1
.....
for 2597 rows.

Does anyone know what is happening here? It appears that the clause MailTo NOT IN ('3', 'x') excludes NULL in raw SQL, but includes NULL (correctly, I think) in a stored procedure.

Chloe Crowder
The British Library
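PS: in case it makes the question clearer, the behaviour I expected is what I would get by spelling out the NULL handling explicitly, something like this (my sketch, not the production code):

SELECT EmployeeNo, MailTo
FROM ST_PPS.dbo.Employee
WHERE AddedOn BETWEEN '01-jan-2006' AND '01-feb-2006'
-- under ANSI_NULLS ON, NULL NOT IN ('3', 'x') evaluates to UNKNOWN and the
-- row is filtered out, so the NULL rows have to be kept deliberately:
AND (MailTo NOT IN ('3', 'x') OR MailTo IS NULL)
ORDER BY MailTo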
I get an error every so often with a DTS package on SQL 7. The error is as follows:

The connection is currently being used by a task. The connection cannot be closed or re-used.

This doesn't happen all the time, and I can sometimes (more often than not) get the DTS package to complete in its entirety.

To explain what the DTS package does:

Truncate tables in the reporting environment (several in a batch)
Clear transaction logs
Copy data from the live environment into CSV files (for speed)
Copy data from the CSV files into the tables previously truncated
Build up a table based on the data copied (for reporting)
Clear transaction logs

I'm using a pretty basic set up: Connection (1st DB) -> Transformation to CSV -> Transformation to Connection (2nd DB). It seems to fail on either the first or second transformation at random (?).

I've checked the transformations so that they close the connection afterwards, so it should in theory be releasing the CSV files for the next step. I suspect that there is a timing issue with this. I can copy the CSV files over, but this is a little sloppy and I would prefer not to do it.

Any ideas how to find a tidy way to ensure these are closed both before and afterwards?

Thanks
Ryan
I am building a report with a recursive hierarchy for drill-down purposes. The hierarchy is built by querying a SSAS OLAP cube and defining a details grouping for the table/matrix.
Every time I run the report, one or more of the leaf members in the recursive hierarchy "jumps" up to the highest level. First I thought this might be because the leaf's parents are not part of the returned dataset. However, the queries make sense, and the "offending" members never contain any data (while the query should return only non-empty members), which is why this is very strange behavior. Furthermore, the "offending" member differs between executions of the report, despite the fact that the parameters are exactly the same and the cube is untouched between executions.
I am actually pressing "View Report", waiting for the report to execute, and when I press "View Report" again, the returned datasets seem to differ, yielding different "offending" members in the report.
When I run the queries individually in the Data-tab in BIDS, the returned datasets are always the same. Execution caching is turned off for the report.
Checking against SSRS's ExecutionLog, the RowCount for consecutive executions with the exact same parameters differs. For example, RowCounts: 3094, 3080, 3079, 3088, 3087.
Why does SSRS behave so inconsistently? Any tips or tricks?
I am running SQL Server 7.0 on NT 4.0. I have created a simple query:
SELECT SUM(month1) As total_month1 FROM eac_manload WHERE project_number = 8800 and dept IN (50,51,52,55,57,60,61,62,63,64,65,68,69)
The first time I run the query I get the correct result. Subsequent times that I run the query, the result is 1 record with a NULL value. The data has not changed. If I stop the MSSQLSERVER service and restart it, I get the correct result the first time and the NULL value each time thereafter. Anybody out there with any idea of what is going on here? Any help will be appreciated!!
I apologize for the length of this message, but I think I need to include all this info so that the problem is understood. I am having what appears to be a problem capturing the return code from a failed BCP.
I create a stored proc to use BCP to load a table:
create procedure sp_bcp_load
as
declare @RC int
execute @RC = master..xp_cmdshell "bcp JON..W4KPV in e:\inetpub\ftproot\finres\law4kpv.g4000.data /Sdbmtss1 /m 0 /f d:\mssql\userdata\finres\W4KPV.fmt /Usa /P /e d:\mssql\userdata\finres\cp1.err /t""|"" /r "
select 'Return code from bcp = ', @RC
if @RC <> 0
begin
    print 'BCP Error.'
    return (8)
end
GO
If I execute the SP, and encounter a transaction log full error, the return code is still zero:
1000 rows sent to SQL Server. 45000 total 1000 rows sent to SQL Server. 46000 total Msg 1105, Level 17, State 2: Server 'DBMTSS1', Line 1:
Can't allocate space for object 'Syslogs' in database 'Jon' because the 'logsegment' segment is full. If you ran out of space in Syslogs, dump the transaction log. Otherwise, use ALTER DATABASE or sp_extendsegment to increase the size of the segment.
(54 row(s) affected)
----------------------- ----------- Return code from bcp = 0
If I execute the SP again, it correctly returns a non-zero value:
Msg 1105, Level 17, State 2: Server 'DBMTSS1', Line 1: Can't allocate space for object 'Syslogs' in database 'Jon' because the 'logsegment' segment is full. If you ran out of space in Syslogs, dump the transaction log. Otherwise, use ALTER DATABASE or sp_extendsegment to increase the size of the segment.
(6 row(s) affected)
----------------------- ----------- Return code from bcp = 1
(1 row(s) affected)
BCP Error.
Does anybody have an idea why this behaves this way? Any suggestions on how to trap an error on the first call?
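One workaround I'm considering, in case the return code simply can't be trusted on the first failure, is to capture the bcp output itself and scan it for error text (a sketch only; the command line and file path are shortened placeholders here):

create table #bcp_out (line varchar(255) null)
-- INSERT ... EXECUTE captures each output line of the command shell
insert #bcp_out
execute master..xp_cmdshell "bcp JON..W4KPV in e:\data\w4kpv.dat /Sdbmtss1 /Usa /P"
-- treat any server message line as a failure, regardless of the return code
if exists (select * from #bcp_out where line like 'Msg%')
    print 'BCP Error.'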
Hello all- Maybe someone has some insight on this... I currently have some hash values (SHA512 through Enterprise Library) that I'm trying to compare to data in our database. When I sort the values on the pipeline (DT_STR, 1252) with a Sort and compare them against values coming out of an OLE DB Source (SQL_Latin1_General_CP1_CI_AS) with a Merge Join, any hash that has a '/' in it is treated as unequal to the one in the database.
For even more fun, when I change the sort/merge join to a Lookup, everything (seems) to check out. Is the 1252 code page not the same as SQL_Latin1_General_CP1_CI_AS? Any other reasons this might be happening?
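One experiment I haven't run yet: force the database side into an explicitly chosen sort order in the source query, so both inputs to the Merge Join are ordered the same way (a sketch; the table and column names are invented, and I'm not certain a binary collation actually matches the 1252 sort):

-- hypothetical source query for the OLE DB Source; a binary collation gives
-- a deterministic byte order for characters like '/' to line up against
SELECT HashValue
FROM dbo.HashStore
ORDER BY HashValue COLLATE Latin1_General_BIN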
I'm debugging my first SSIS package and am getting inconsistent results. The package does not always complete successfully. When the package does fail, it fails at different tasks that acquire database connections. Any of the following error messages may show up: [Execute SQL Task] Error: Failed to acquire connection "FORGE.FMC". Connection may not be configured correctly or you may not have the right permissions on this connection.
[OLE DB Destination [6374]] Error: The AcquireConnection method call to the connection manager "FORGE.FMC" failed with error code 0xC0202009.
[Connection manager "FORGE.FMC"] Error: An OLE DB error has occurred. Error code: 0x80004005. An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections.". An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Communication link failure". An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Named Pipes Provider: No process is on the other end of the pipe. ".
I never experience any connection errors when executing queries through Management Studio. It's only the SSIS packages that fail to connect every now and then. Any help is appreciated.
I'm opening a large number of connections to a SQL Server 2005 database using a SQL object based on the System.Data.SqlClient library. When I log out of the application, I call the ClearSQLConnectionPool method of the SQL object, and this should clear the pool of these connections. This has worked in the past, but lately I have noticed some odd and very annoying behaviour.

Very often I log out of the application and few, if any, of the connections disappear. Just this morning I opened the application, logged out, and all but one of the connections disappeared. Then I opened it again, logged out, and none of the connections had gone. If I open the application again, all of the past connections are still there plus the new ones.

I know it works, it's just very inconsistent. The problem may be with IIS, with SQL, or with the library; I just can't tell which.

Any ideas? Many thanks
I have a stored procedure (see below) in which I would like to check whether creating an identity column and making it a primary key succeeded. I check @@ERROR after the EXEC statement. This used to pick up an error if the table already had an identity column; it has stopped doing that. Why? And if this is not the way to capture the error after the EXEC statement, how do I do it?
CREATE PROCEDURE rasp_test3
/* Written by Judith Farber Abraham.
   This procedure loops thru sysobjects looking for user tables.
   If a user table does not have a primary key, add an identity
   column to the table and make it a primary key. */
--would like to have the SP in the main db but use it from all three
@fixDB nvarchar(50) --the db to which to add PKs
AS
declare @TableName varchar(50)
declare @TableID int
declare @Msg varchar(50)
declare @ColumnName varchar(50)
declare @IndexName varchar(50)
declare @MyCursor nvarchar(500)
declare @MyCursorC nvarchar(500)
declare @CName sysname

--Set @Msg = "********* Finished adding Ident fields *************"

/* do for all user tables (xtype = 'U') */
set @MyCursor = N'declare SysCursor cursor for select Name, ID from ' + @fixDB + '.dbo.sysobjects where xtype = "u"'
execute sp_executesql @MyCursor
open SysCursor
fetch next from SysCursor into @TableName, @TableID

/* -1 = no record; -2 = row deleted; 0 = got a row */
while (@@fetch_status <> -1)
begin
    if (@@fetch_status <> -2)
    begin
        /* have a user row (table) */
        set @ColumnName = @TableName + 'ID'
        set @IndexName = 'PK_' + @ColumnName

        --only add ident and PK if no primary key in table
        if not exists (select * from sysobjects where parent_obj = @TableID and xtype = 'PK')
            --add an identity column to the user table and make it a primary key
            EXEC ('ALTER TABLE ' + @TableName + ' ADD ' + @ColumnName + ' INT IDENTITY CONSTRAINT ' + @IndexName + ' PRIMARY KEY ')

        --if error, assume an identity column already exists, so find the column name & make it a PK
        print @@error
        if @@error <> 0
            print "jerror occured"
        --set @MyCursorC = N'Declare SysCursorC cursor for SELECT c.name
        --FROM syscolumns c, sysobjects o
        --WHERE ((c.id = o.id) AND (c.status = 128)) AND (o.name = ' + @TableName + ')'
        --execute sp_executesql @MyCursorC
        --open SysCursorC
        --fetch next from SysCursorC into @CName
        --print @CName
        --close SysCursorC
        --deallocate SysCursorC
        --EXEC ('ALTER TABLE ' + @TableName + ' ADD ' + @ColumnName + ' INT IDENTITY CONSTRAINT ' + @IndexName + ' PRIMARY KEY ')
        --select @CName = c.name
        --print c.name
    end
    fetch next from SysCursor into @TableName, @TableID
end
--print @Msg
close SysCursor
deallocate SysCursor
return
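For what it's worth, I've read that @@ERROR is reset by every statement, including the PRINT that inspects it, so the value has to be captured immediately (a minimal sketch; the table name is a placeholder):

declare @err int
EXEC ('ALTER TABLE Foo ADD FooID INT IDENTITY CONSTRAINT PK_FooID PRIMARY KEY')
set @err = @@error          -- capture right away; the next statement resets @@ERROR
if @err <> 0
    print 'error occurred: ' + convert(varchar(12), @err)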
I have a problem retrieving data from a linked SQL Server 6.5 server in version 7. I have set up the server as a linked server, and most operations occur with no problem. With some tables, though, I get the following error when I attempt to query any column of data:
Server: Msg 7356, Level 16, State 1, Line 1 OLE DB provider 'SQLOLEDB' supplied inconsistent metadata for a column. Metadata information was changed at execution time.
This occurs consistently for a subset of tables. I cannot see any pattern in the configuration of these tables that would make them any different from those where a query succeeds.
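In case it points at anything, the one workaround I know of for linked-server metadata mismatches is to push the whole statement to the remote side with OPENQUERY, which returns a plain rowset and avoids the four-part-name metadata check (the linked server and table names below are placeholders):

SELECT *
FROM OPENQUERY(SQL65SERVER, 'SELECT col1, col2 FROM mydb.dbo.mytable')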
I'm testing some code that looks up values from my database and updates a specific field when certain conditions are met. I'm having trouble with some code that gives me the results I expect when I submit one set of parameters, but finds nothing in the database for another set, when I know the data exists.
I put the response.write statements and loops in so I could verify the functionality.
Here's what it produces:
out 452 ret 453 origin 7/29/2006 8:00:00 AM return 7/29/2006 12:05:00 PM OFNO 109 450
Here's what it should produce:
out 452 ret 453 origin 7/29/2006 8:00:00 AM return 7/29/2006 12:05:00 PM OFNO 109 450 RFNO 110 451
If I do this in query analyzer:
select flightid, flightno, departuretime from flightschedules where flightid > 449 and flightid < 454
this is what I get from the database:
flightid  flightno  departuretime        origin  destination
452       109       2006-07-29 08:00:00  A       C
450       109       2006-07-29 08:00:00  A       B
453       110       2006-07-29 12:05:00  C       A
451       110       2006-07-29 13:15:00  B       A
What I'm trying to do is look up the chosen flight, then find the flight with the matching origin/destination (the other flight leg) on the same day.
I can't figure out why it's working for one set of parameters and not for the other.
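Here is the lookup I'm attempting, written out as plain SQL (a sketch; 452 stands in for the chosen flight's id):

-- find the other leg: origin and destination swapped, departing the same day
SELECT ret.flightid, ret.flightno, ret.departuretime
FROM flightschedules dep
JOIN flightschedules ret
  ON ret.origin = dep.destination
 AND ret.destination = dep.origin
 AND DATEDIFF(day, dep.departuretime, ret.departuretime) = 0
WHERE dep.flightid = 452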
I'm in the process of trying to cut over SQL 2000 database activity to a newer used server running Windows 2003. All the latest Windows & SQL service packs through late Feb 2007 have been installed. The server has a Promise Fast Track S150 TX2plus SCSI RAID with 2 80 GB drives. I have been using it as a database development system for 3 months without any issues.
This morning I restored the latest full backup and 2 intervening transaction log backups. I am the only user on the system today. I ran DBCC CHECKDB WITH TABLERESULTS 5 times over a 45-minute period and received inconsistent results!
Run 1 - No Error
Run 2 - Following Errors:
2511 16 2 Table error: Object ID 754101727, Index ID 0. Keys out of order on page (1:631938), slots 40 and 41. repair_rebuild
8990 10 1 CHECKDB found 0 allocation errors and 1 consistency errors in table 'DatabaseEventLog' (object ID 754101727).
8928 16 1 Object ID 1218103380, index ID 255: Page (1:373888) could not be processed. See other errors for details. repair_allow_data_loss
8990 10 1 CHECKDB found 0 allocation errors and 1 consistency errors in table 'WeatherTransaction' (object ID 1218103380).
Run 3 - Following Errors:
2511 16 2 Table error: Object ID 754101727, Index ID 0. Keys out of order on page (1:631938), slots 40 and 41. repair_rebuild
8990 10 1 CHECKDB found 0 allocation errors and 1 consistency errors in table 'DatabaseEventLog' (object ID 754101727).
8928 16 1 Object ID 1218103380, index ID 255: Page (1:373888) could not be processed. See other errors for details. repair_allow_data_loss
8990 10 1 CHECKDB found 0 allocation errors and 1 consistency errors in table 'WeatherTransaction' (object ID 1218103380).
Run 4 - Following Errors:
2511 16 2 Table error: Object ID 754101727, Index ID 0. Keys out of order on page (1:631938), slots 40 and 41. repair_rebuild
8990 10 1 CHECKDB found 0 allocation errors and 1 consistency errors in table 'DatabaseEventLog' (object ID 754101727).
8928 16 1 Object ID 1218103380, index ID 255: Page (1:373888) could not be processed. See other errors for details. repair_allow_data_loss
8941 16 102 Table error: Object ID 1218103380, index ID 255, page (1:373888). Test (sorted[i].offset >= PAGEHEADSIZE) failed. Slot 8, offset 0x2 is invalid. repair_allow_data_loss
8942 16 101 Table error: Object ID 1218103380, index ID 255, page (1:373888). Test (sorted[i].offset >= max) failed. Slot 0, offset 0x8 overlaps with the prior row. repair_allow_data_loss
8990 10 1 CHECKDB found 0 allocation errors and 3 consistency errors in table 'WeatherTransaction' (object ID 1218103380).
Run 5 - No Error!
I also ran the obligatory CHKDSK several times and had no errors:
c:>chkdsk The type of the file system is NTFS.
WARNING! F parameter not specified. Running CHKDSK in read-only mode.
CHKDSK is verifying files (stage 1 of 3)... File verification completed. CHKDSK is verifying indexes (stage 2 of 3)... Index verification completed. CHKDSK is verifying security descriptors (stage 3 of 3)... Security descriptor verification completed.
78116030 KB total disk space. 45291176 KB in 33231 files. 10496 KB in 3135 indexes. 0 KB in bad sectors. 121086 KB in use by the system. 65536 KB occupied by the log file. 32693272 KB available on disk.
4096 bytes in each allocation unit. 19529007 total allocation units on disk. 8173318 allocation units available on disk.
Are these DBCC CHECKDB WITH TABLERESULTS differences normal, or is this a bug or a hardware issue? Any advice you can offer would be appreciated.
I have a dedicated SQL Server 2005 Workgroup server with Rackspace (RAID 1, which obviously needs to be changed). It's been running relatively smoothly, and the last time I traced it (a couple of weeks ago) there were no significant issues. I run a reindex and reboot every single night, and have built many indexes on the tables using Analysis Services.
We move a ton of data in a single day, and the users were complaining of lag, which appeared related to this server. After running a trace, I noticed:
- my normal SPs were taking roughly 5-20x their normal duration, and many were timing out
- SPs I ran directly on the server were frequently giving me "transaction deadlocked on resource" errors
- some of the tiny, quick calls in the trace were showing impossible, astronomical durations! Things that should be "3" ms were showing up as "81544123941234123478090" ms
I am running SQL2K on a Server2K box. I have linked servers to SQL 2005 Express on WinXP. I am trying to get data off the SQL 2005 DB into the central DB on SQL2K. This has been running for about a year. I am now getting an error message as follows:
Server: Msg 7356, Level 16, State 1, Line 1 OLE DB provider 'SQLNCLI' supplied inconsistent metadata for a column. Metadata information was changed at execution time. OLE DB error trace [Non-interface error: Column 'TestVarChar' (compile-time ordinal 2) of object '"WKS"."DBo"."testtable"' was reported to have changed. The exact nature of the change is unknown].
I get the error when I run either of the following queries on the SQL2K box:

SELECT * from WKSA400.WKS.DBo.testtable
or
SELECT * from OPENQUERY(WKSA400, 'SELECT * from WKS.DBO.testtable')
I did notice that if my testtable has only numeric fields, it works fine. It only has a problem with alphanumeric fields.
If I run 'exec sp_tables_ex WKSA400' on the SQL2K box, it works fine; I get a listing of all the tables on the remote server. This tells me that my security is fine. It seems to be a conversion issue between SQL 2005 and SQL 2000. If it had not been working for over a year, the dent in the wall the size of my head would be much smaller.
In the preview view in Visual Studio, I can see all of the items in my document map (over 100 labels). After deployment, only the first 30 items are displayed and I cannot scroll down to see the missing items. Is this a known bug? Is there any workaround?
Q. How do I license SQL Server 2005 for my virtual environments?
A. For Standard, Workgroup, and Enterprise, if you decide to license on a per-processor basis, you must buy a SQL Server license for each virtual processor. For Enterprise Edition, you can also choose to license all physical processors in a box; this gives you the right to run SQL Server on an unlimited number of virtual processors running on the same machine. For Server/CAL mode, for Standard and Workgroup, you must obtain SQL Server licenses for each virtual operating system environment in which you run instances of SQL Server. However, for Enterprise, if you have a Server license for the physical server, you may run unlimited instances of SQL Server in any virtual operating system environment on that same machine.
If you are using hardware partitioning on a multi-processor server, you get unlimited virtualization for SQL Server Enterprise Edition as long as all processors in that hardware partition are licensed. For example, if you have a hard partition of 10 physical processors on a 32-processor superdome server, 10 processor licenses of SQL Server 2005 give you rights to run free unlimited virtual machines with SQL Server on those 10 physical processors.
I came up with the following table-format explanation to make sense of it all:

Licensing Mode                            | Applies to                      | Number of licenses needed    | To get this many standalone instances
Per processor, per virtual                | Standard, Workgroup, Enterprise | #virtual OSs * #virtual CPUs | Standard, Workgroup: #virtual OSs * up to 16 instances; Enterprise: #virtual OSs * up to 50 instances
Per processor, per physical               | Enterprise                      | #physical CPUs               | unlimited virtual OSs * up to 50 instances
Per seat (aka Server + CAL), per virtual  | Standard, Workgroup, Enterprise | #virtual OSs + S(users)      | Standard, Workgroup: #virtual OSs * up to 16 instances, among S(users); Enterprise: #virtual OSs * up to 50 instances, among S(users)
Per seat (aka Server + CAL), per physical | Enterprise                      | 1 physical OS + S(users)     | unlimited virtual OSs * up to 50 instances, among S(users)

Where: "CPU" is the number of CPUs as physically recognized by the virtual OS or physical OS. CPUs enumerated by hyperthreading or dual-core processing and their corresponding host CPU are considered together as one CPU; refer to the Intel CPUCount utility for more information. "Unlimited virtual OSs" represents the theoretical maximum number of virtual machines supported on the hardware of the physical host. "S(users)" represents the sum of unique users or devices that may or may not be concurrently connected at any time.
To run instances of the server software in virtual operating system environments on a server, you need a software license for each virtual processor that each of those virtual operating system environments uses. If a virtual operating system environment uses a fraction of a virtual processor, the fraction counts as a full virtual processor.
"Per Server/CAL" page:
For each server software license you assign, you may run, at any one time, one instance of the server software in one physical or virtual operating system environment on the licensed server.
I successfully linked a Sybase database to SQL Server. At first I can select a record, but after I execute the same statement the result becomes different. I was looking for an answer on the web but I can't find one. Can somebody help me?
I suspect that SQL Server caches the result, so I tried to create a view for the Sybase database table. When I select from the view it gives me the complete result, but when I rerun the statement it becomes inconsistent again.
Is there a cache or something in SQL Server? If there is, how can I clear it so I will always get a fresh copy of the result?
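For completeness, this is what I've found so far for clearing SQL Server's caches while testing (a sketch; I'm not sure caching is actually my problem):

-- throw away all cached execution plans
DBCC FREEPROCCACHE
-- throw away cached data pages so everything rereads from disk
-- (only on a test box; this hurts performance until the cache warms up)
DBCC DROPCLEANBUFFERS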