I'm using the SQL Server Database Engine Tuning Advisor, and it's reporting a bunch of syntax errors. I'm not getting any errors in my apps, and when I manually execute the sprocs from Query Analyzer they aren't erroring out either.
Is this something to be concerned about? Here is an example error.
(The index 'IX_Paths' on table '#Paths' is on a temp table; perhaps that's why it's reporting an error? Or perhaps I messed up the code somehow?)
Thanks very much for any help!
mike123
E000 exec uspDijkstraResolve @fromID=157504, @ToID=697395 1 [Microsoft][SQL Native Client][SQL Server] Index 'IX_Paths' on table '#Paths' (specified in the FROM clause) does not exist.
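DTA generally cannot resolve session-scoped objects such as #Paths, because the temp table only exists while the sproc is running, so it flags those statements as errors even though they execute fine. One common workaround is to materialize a permanent copy of the temp table, tune the problem statement against it, and then drop it. A sketch only, since the real column list for #Paths isn't shown here and the names below are hypothetical:

```sql
-- Hypothetical permanent stand-in for the session-scoped #Paths table.
-- Replace the column list with the real definition from uspDijkstraResolve.
CREATE TABLE dbo.Paths_ForTuning (
    NodeID int            NOT NULL,
    FromID int            NOT NULL,
    Cost   decimal(18, 4) NOT NULL
);

CREATE INDEX IX_Paths ON dbo.Paths_ForTuning (NodeID);

-- Tune the FROM-clause statement against dbo.Paths_ForTuning,
-- then clean up:
DROP TABLE dbo.Paths_ForTuning;
```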
Does anyone know how to get DTA to correctly read the output from the Profiler?
I get the error:
TITLE: DTAEngine ------------------------------ 50% of consumed workload has syntax errors. Check tuning log for more information. ------------------------------ BUTTONS: OK ------------------------------
And the log is full of lines like:
E000 SELECT COUNT(*) FROM Pictures WHERE AdRecId = 16329 2 [Microsoft][SQL Native Client][SQL Server]Invalid object name 'Pictures'.
The "trouble" is that those objects do exist and the site apparently is working fine, but not as far as DTA is concerned. According to DTA, 54% of my workload consists of syntax errors!
A couple of hints. The database was an upgrade from 2000:
- I changed the compatibility level to 2005, but no luck there. I tried with a brand new database, and the errors keep cropping up.
- The errors were first observed on a kit comprising a 32-bit IIS and a 64-bit SQL 2005, so I thought it had to do with connectivity between those two. But I ran the traces on a single (32-bit) server that hosts both IIS and SQL and got the same errors.
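"Invalid object name" from DTA is often a database-context problem rather than a real syntax error: if the workload doesn't tell DTA which database each statement ran in, it resolves names in the wrong one. One sketch of a workaround (the database name below is a placeholder, and feeding DTA a script file instead of the trace is an assumption) is to add an explicit context to the workload, and to make sure the Profiler trace captures the DatabaseName column:

```sql
-- workload.sql: give DTA an explicit database context
-- so that Pictures resolves in the right database.
USE MySiteDb;   -- placeholder; substitute the real database name
GO

SELECT COUNT(*) FROM Pictures WHERE AdRecId = 16329;
GO
```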
In my case I have to log the errors raised by any task in a package to either the Windows event log, a text file, or SQL Server. I also need to send an email notification to a group of people telling them about the error.
Can I use SSIS package logging for logging the errors to the required destinations? I mean right-clicking on the package, selecting Logging, then adding the required log providers and enabling the events to log to them. I think I have to select the log providers up front, so I won't have the liberty of logging the error to a destination whose name is passed to the package as a variable. That is okay with me, though.
Now, what will a custom log provider help me do in this case? Also, can I somehow configure my package to call the Send Mail task every time an error is raised?
One more option could be developing a package that only does the error handling. It would take in parameters for the error codes and descriptions, the destination to write to, and a flag indicating whether or not to send mail for that particular type of error.
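One common pattern for the SQL Server destination, without a custom log provider, is an OnError event handler containing an Execute SQL Task that calls a logging procedure, mapping the system variables System::PackageName, System::ErrorCode, and System::ErrorDescription to its parameters. A sketch only; the table and procedure names are made up for illustration:

```sql
-- Hypothetical error-log table and procedure, called from an
-- Execute SQL Task inside the package's OnError event handler.
CREATE TABLE dbo.PackageErrorLog (
    LogID       int IDENTITY(1, 1) PRIMARY KEY,
    PackageName nvarchar(260)  NOT NULL,
    ErrorCode   int            NOT NULL,
    ErrorDesc   nvarchar(4000) NULL,
    LoggedAt    datetime       NOT NULL DEFAULT (GETDATE())
);
GO

CREATE PROCEDURE dbo.usp_LogPackageError
    @PackageName nvarchar(260),
    @ErrorCode   int,
    @ErrorDesc   nvarchar(4000)
AS
BEGIN
    INSERT INTO dbo.PackageErrorLog (PackageName, ErrorCode, ErrorDesc)
    VALUES (@PackageName, @ErrorCode, @ErrorDesc);
END
GO
```

A Send Mail task placed in the same OnError handler fires on every error, which covers the notification requirement as well.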
I recently updated the data type of a sproc parameter from bit to tinyint. When I executed the sproc with the updated parameter, it appeared to succeed and returned "1 row(s) affected" in the console. However, the update performed by the sproc did not actually work.
The table column was a bit, which only allows 0 or 1, and the sproc was passing a value of 2, so I expected the table to reject this value. However, the sproc did not return an error and appeared to succeed. Is there a way to configure the database or the sproc to return an error message when this type of error occurs?
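What is most likely happening here is not a rejection at all: SQL Server implicitly converts any nonzero value to 1 when assigning it to a bit column, so passing 2 silently stores 1 and reports success. A small demonstration, plus a sketch of an explicit guard you could add to the sproc (the parameter name is hypothetical):

```sql
-- Nonzero values implicitly convert to 1 when cast to bit,
-- so no error is raised:
SELECT CAST(2 AS bit) AS converted_value;   -- returns 1

-- A sketch of an explicit guard inside the sproc:
DECLARE @NewValue tinyint;
SET @NewValue = 2;   -- hypothetical parameter
IF @NewValue NOT IN (0, 1)
BEGIN
    RAISERROR('Value %d is not valid for a bit column.', 16, 1, @NewValue);
    RETURN;
END
```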
I have a parent package that calls child packages inside a For Each container. When I debug/run the parent package (from VS), I get the following error message: Warning: The Execution method succeeded, but the number of errors raised (3) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
It appears to be failing while executing the child package. However, the logs (via the "progress" tab) for both the parent package and the child package show no errors other than the one listed above (and that shows in the parent package log). The child package appears to validate completely without error (all components are green and no error messages in the log). I turned on SSIS logging to a text file and see nothing in there either.
If I bump up the MaximumErrorCount in the parent package, and in the Execute Package Task that calls the child package, to 4 (one above the error count indicated in the message above), the whole thing executes successfully. I don't want to leave the Max Error Count set like this. Is there something I am missing? For example, are there errors that do not get logged by default? I get some warnings; does a certain number of warnings equal an error?
Starwin writes: "When I execute DBCC CHECKDB and DBCC CHECKCATALOG I receive the following error. How do I solve it?
Server: Msg 8909, Level 16, State 1, Line 1 Table error: Object ID -2093955965, index ID 711, page ID (3:2530). The PageId in the page header = (34443:343146507). . . . . . . . .
CHECKDB found 0 allocation errors and 1 consistency errors in table '(Object ID -1635188736)' (object ID -1635188736). CHECKDB found 0 allocation errors and 1 consistency errors in table '(Object ID -1600811521)' (object ID -1600811521).
. . . . . . . .
Server: Msg 8909, Level 16, State 1, Line 1 Table error: Object ID -8748568, index ID 50307, page ID (3:2497). The PageId in the page header = (26707:762626875). Server: Msg 8909, Level 16, State 1, Line 1 Table error: Object ID -7615284, index ID 35836, page ID (3:2534). The PageId in the page heade"
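Msg 8909 indicates page-header corruption, which usually points at the I/O subsystem (disk, controller, or cache). The safest fix is restoring from a clean backup; failing that, CHECKDB can attempt a repair. A sketch of the repair path (the database name is a placeholder), with the caveat that REPAIR_ALLOW_DATA_LOSS may deallocate the damaged pages and lose the rows on them:

```sql
-- Placeholder database name; run from another database such as master.
-- WARNING: REPAIR_ALLOW_DATA_LOSS may discard corrupt pages entirely.
-- Restore from a known-good backup first if you possibly can.
ALTER DATABASE MyDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

DBCC CHECKDB ('MyDb', REPAIR_ALLOW_DATA_LOSS);

ALTER DATABASE MyDb SET MULTI_USER;
```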
I have a view which joins 3 tables. One has 15 million rows, the next 5 million, and the third 500k. When I join them, the execution plan tells me that 15 million rows were retrieved from the first (taking about 5 minutes), 1.5 million from the second (taking 3 minutes), and 4.5 million from the third (taking almost no time).
The first two use a clustered index, one being a seek and the other a scan, and the third a regular index seek. All are followed by a hash match/inner join which takes 2 minutes.
Any ideas on optimizing the SQL?
Here is the syntax (I have corrected two stray references: BENDTL.COMM_DATE should be c.COMM_DATE, and c.bendtl.gst_inc should be c.gst_inc):
SELECT b.packno, b.COMM_DATE, a.ben_grp_cr, a.ben_dsc_cr, c.gst_inc, SUM(a.credit)
FROM TABC c
INNER JOIN TABB b
    ON b.PACKNO = c.PACKNO
    AND b.COMM_DATE = c.COMM_DATE
    AND b.BEN_NUM = c.BEN_NUM
INNER JOIN TABA a
    ON b.tran_id = a.tran_id
WHERE b.tran_date > '20040401'
    AND c.gst_inc = 0
GROUP BY b.packno, b.COMM_DATE, a.ben_grp_cr, a.ben_dsc_cr, c.gst_inc
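Given the WHERE clause, one candidate for removing the clustered scan that feeds the hash join is a nonclustered index keyed on TABB's filter column. A sketch only; the index name is made up, and whether it actually helps depends on how selective tran_date is, so verify against the execution plan:

```sql
-- Hypothetical index: leads on the filter column, then carries the
-- join and grouping columns so the query can be answered from the index.
-- (On SQL Server 2005+, the trailing columns could be INCLUDE columns
-- instead of key columns.)
CREATE NONCLUSTERED INDEX IX_TABB_tran_date
    ON TABB (tran_date, PACKNO, COMM_DATE, BEN_NUM, tran_id);
```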
I tried to find the problem in a slow-running sp with Profiler. I found that it was waiting on the resources 'tracewrite' and 'async_network_io'. Any ideas on what these mean? Thanks in advance.
I need to find a way to make this query run faster. Please let me know if you can help. Here are some of the issues.
1) I have a database with one table, and no indexes can be put in place.
2) This is a very large database: 4,000,000 records, and growing daily.
3) the following query returns 720,000 records.
4) The query takes about 18 minutes to complete.
I understand that by doing a table scan, the way this has to run is difficult, but is there anything I can do?
The problem: when I run this query, I also run Performance Monitor, and I have found that my page file usage is at 100%. I have 1 GB of RAM on the server with two P III Xeon processors. I don't know what other information I can give, but please let me know.
I want to use the Index Tuning Wizard on some of my tables. Is it OK to use it while people are on the server, or should I do it during an off-peak period? Thanks!!!
I am interested in finding out if anyone out there has experience with extremely high-performance SQL Server applications. The I/O needs of my database server are growing very quickly, and I am on the verge of launching a major upgrade project.
We have done all the standard tuning tasks: proper indexing, stored procedure tuning, etc., and are running on good small-server-scale hardware (dual PIII 700s, 1 GB RAM, but no RAID). The only paths I can see to achieving higher performance are:
- lots of RAID, perhaps on a SAN
- a server upgrade, maybe 4-proc? I've been looking at RAIDZONEs and Netfinitys
- data partitioning (I REALLY want to avoid this if I can!)
What do you do when you need Major Enterprise scale database performance from SQL Server? I've found lots of resources for Oracle and DB2, but I can't find many case studies for serious SQL Server installations.
Hi, will upgrading SQL Server 6.5 to 7.0 simply solve some of the problems we are currently facing, like the ODBC errors "Insert failed" and "Update failed", and also support more users? We have an Access front end to a SQL Server back end, so do we need to touch the front-end code for optimizations?
Other than the SQL Server 7.0 Index Tuning Wizard (which isn't suggesting anything), is there a third-party index tuning piece of software out there? Or is there something special that needs to be done to get the SQL Index Tuning Wizard to work?
Good morning everybody. I have a single table consisting of one million records. Retrieving data from that table takes a lot of time. How can I reduce the execution time? What is the procedure to tune the database? I implemented a clustered index on the primary key of that table, but I still can't reduce the execution time. Can anybody help me with this issue?
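A clustered index on the primary key only helps queries that filter on that key. If the slow queries filter or sort on other columns, a nonclustered index on those columns is the usual next step. A sketch, with entirely hypothetical table and column names since the real schema isn't shown:

```sql
-- Hypothetical example: if queries filter on a non-key column,
-- index that column rather than relying on the PK's clustered index.
CREATE NONCLUSTERED INDEX IX_BigTable_OrderDate
    ON dbo.BigTable (OrderDate);

-- A query like this can now seek instead of scanning a million rows:
-- SELECT * FROM dbo.BigTable WHERE OrderDate >= '20040101';
```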
Hi, I am working on SQL Server version 6.5. This system was developed just one year back, but now it is almost dead (low performance). I think the reasons are database design, networking, hardware, etc. Is that correct, and how do I rectify these problems? I am suggesting that upgrading is the best option, so please give suggestions ASAP. Thanks, Janreddy
What are some of the things I can do to improve query performance if the query performance is relatively slow today compared to yesterday? Here are some of the things I looked at:
- updating statistics
- checking the execution plan
- DBCC SHOWCONTIG
In the older versions (6.5 and 7.0) you could adjust the max_async_io setting if you had fast controllers and disks that could handle more I/O from SQL Server. In 2000 this setting was removed. Are there any settings that I could adjust/tune regarding I/O and disk handling?
Hi guys, I need someone to assist me in fully understanding what performance tuning is. My challenge is interpreting the System Monitor graph in the Performance tool.
I need to know what the value on the vertical axis represents. There is none on the horizontal axis; instead I have Last, Average, Minimum, Maximum, and Duration. What do all these values stand for? I have observed that when I click on any of the selected counters the value changes, so kindly assist me so that I can make sense of this. Apart from that, please point me to any material that can explain performance tuning to my maximum benefit. Thanks in anticipation.
My query below takes considerable time to execute. Can anyone help me find another way to write this query?
Declare @p_Mkt_View_Id int
Set @p_Mkt_View_Id = 17
Select Distinct Customer_id
From Active_Product_Cust_Dtl
Where Product_Group_Code in
    (Select Distinct Product_Group_Code
     From Products
     Where Product_code in
        (Select Distinct ProductId
         From pit
         Where pitid in
            (Select pitid From marketviewdef Where mktviewid = @p_Mkt_View_Id)))
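One way to rewrite the nested IN subqueries is as a chain of joins, which often gives the optimizer more options; the outer DISTINCT keeps the result set equivalent. A sketch only, with the join keys taken from the query above, so verify them against the real schema:

```sql
-- Equivalent join-based form of the nested IN subqueries.
Declare @p_Mkt_View_Id int
Set @p_Mkt_View_Id = 17

Select Distinct apc.Customer_id
From Active_Product_Cust_Dtl apc
Inner Join Products p
    On p.Product_Group_Code = apc.Product_Group_Code
Inner Join pit
    On pit.ProductId = p.Product_code
Inner Join marketviewdef mvd
    On mvd.pitid = pit.pitid
Where mvd.mktviewid = @p_Mkt_View_Id
```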
Hi experts, I ran SQL Profiler alongside the % Processor Time counter, and it showed a large value of 75%. Please give me the steps to tune a long-running query with high CPU utilisation.
For what purpose would we split a non-clustered index into three instead of one?
create index si_acct_info_dtl_INDX1 on si_acct_info_dtl(account_code, ctrl_acct_type)
create index si_doc_hdrfk_ci_acct_info_dtl on si_acct_info_dtl(tran_ou, tran_type, tran_no)
Will every column of the index be used when a search is given? And what if we build the index in one statement instead, like this:
create index si_acct on si_acct_info_dtl(batch_id)
create index si_acct_info_dtl_guid on si_acct_info_dtl(batch_id, account_code, ctrl_acct_type, tran_ou, tran_type, tran_no)
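The reason for splitting comes down to the leading-column rule: a composite index can only support a seek when the query filters on its leading column(s). A sketch of which queries each index can serve (the queries and literal values below are illustrative assumptions, not from the original post):

```sql
-- Can seek on si_acct_info_dtl_INDX1, whose leading column is account_code:
SELECT * FROM si_acct_info_dtl WHERE account_code = 'A100';

-- Cannot seek on the single wide index si_acct_info_dtl_guid
-- (batch_id, account_code, ...), because its leading column batch_id
-- is not filtered here; the optimizer would fall back to a scan.
-- This query needs si_doc_hdrfk_ci_acct_info_dtl (tran_ou, tran_type, tran_no):
SELECT * FROM si_acct_info_dtl WHERE tran_ou = 10 AND tran_type = 'INV';
```

Three narrower indexes each match a different leading-column pattern, which is why they can outperform one wide index covering all six columns.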