Need Help Interpreting Some SQL

Apr 28, 2008

Can someone tell me what the following SQL does?

 

SELECT MIN(QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME))
FROM INFORMATION_SCHEMA.TABLES

 

 

Thanks in advance, Ralph
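For context: QUOTENAME wraps a name in square brackets, and MIN over character data is an alphabetical minimum, so the query returns the first schema-qualified table name in the database, e.g. [dbo].[Accounts]. A fragment like this is usually the anchor of a loop that visits every table one at a time; here is a sketch of that pattern (the loop is an assumption about the surrounding context, not something shown in the thread):

DECLARE @name sysname;

SELECT @name = MIN(QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME))
FROM INFORMATION_SCHEMA.TABLES;

WHILE @name IS NOT NULL
BEGIN
    PRINT @name;  -- process one table at a time (e.g. build and EXEC a statement)

    -- advance to the alphabetically next table name
    SELECT @name = MIN(QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME))
    FROM INFORMATION_SCHEMA.TABLES
    WHERE QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME) > @name;
END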

Interpreting Product A(-2) >= 1.978

Aug 14, 2007

Dear Jamie,
Thanks for the reply.
We have another problem to solve.

On the node we are getting Product A(-2) >= 1.978.

What does the (-2) mean?
It is described as two time slices ago. Please help me understand this.
From
menik


Need Help Interpreting Error Message From Job

Feb 29, 2000

I have a job whose first step is to run a DTS package via a DTSRUN Operating System Command. I get the following message.

DTSRun: Loading...
DTSRun: Executing...
Error: -2147220499 (800403ED); Provider Error: 0 (0)
Error string: No Steps have been defined for the transformation Package.
Error source: Microsoft Data Transformation Services (DTS) Package
Help file: sqldts.hlp
Help context: 700.
Process Exit Code 1. The step failed.
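For reference, a DTSRUN job-step command generally takes this shape (the server and package names here are placeholders, not the ones from the failing job):

DTSRun /S MyServer /E /N "MyPackageName"

where /S names the server, /E requests a Windows trusted connection, and /N names the package to execute.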

Prior to 2/29/2000, it had run dozens of times successfully, the last time on 2/23/2000.

I would be most appreciative of any help.

Thanks.


Interpreting Index Statistics On SQL 2005

Nov 28, 2006

I ran the DBCC SHOW_STATISTICS command for all of my indexes; I was told that high density numbers are bad, low numbers good. I have some questions about my results, though; I'm not sure how to interpret them.
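For reference, the command takes a table and a statistics target such as an index (the names below are placeholders), and WITH DENSITY_VECTOR limits the output to the density section discussed here:

DBCC SHOW_STATISTICS ('dbo.SomeTable', IX_SomeIndex) WITH DENSITY_VECTOR;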

Of 48 indexes, 14 have a density of 0. Does this mean that the indexes are not selective enough? Does it mean they're garbage and I should toss them?

6 have a density of NULL. They are all primary keys. I suppose this just means that they're never used because these tables are rarely queried. Would this assumption be correct?

13 have a density of 1. I have no idea what this means.

The others have densities ranging from 0.01210491 to 0.5841165. I was told that the lower this number is, the more selective and thus more useful an index is. I think 0.5841165 is too high a number. Would this be correct?

Thanks in advance.


SQL 2012 :: Interpreting Query Statistics

Jun 5, 2014

I'm designing a new database which will be the back-end to a heavily-used web-based application (all these terms are relative; I guess the use won't be that heavy in the grand scheme of things, as I'm only talking about 100 users or so at the very most). Data from the old application database will be migrated to this one, and the old database is around 7GB in size after 5 years of use.

I have two different ways of linking some tables in mind, one slightly more complex than the other but with potential benefits over the simpler method. However, I'm concerned that I might be 'over-cooking' the design and that performance would suffer as a result. So I've created two versions of the database (the part of it I'm concerned with, anyway), one for each of the solutions I've got in mind, migrated the data into the relevant tables, and run some queries against the data to collect statistics.

The problem is that, whilst I can see that the more complex method is more expensive, as expected, I don't really understand whether the difference is significant. Since I don't know what the numbers in the Client Statistics window actually mean (there are no units! I'm guessing times are in milliseconds?), or how much real-world impact the difference will have, I'm finding it hard to interpret my statistics and come to a decision.

Querying the entirety of my tables to return ~20,000 records listing one column from each of the main tables I'm playing with, the simpler method had a Total Execution Time of 199, and the more complex a Total Execution Time of 272. Is that the statistic I should be most concerned with? Is that a difference I should be concerned about? Is the difference likely to be magnified when the database is much larger and in use, such that a difference of 73 milliseconds in this test scenario could end up being as much as a whole second in production, for example?
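One way to get timings with known units is to measure each variant server-side; a minimal sketch, where the table and column names are placeholders for the two designs being compared:

SET STATISTICS TIME ON;

SELECT t.SomeColumn
FROM dbo.SimplerDesignTable AS t;
-- ...then repeat with the more complex design's query

SET STATISTICS TIME OFF;

SET STATISTICS TIME prints CPU and elapsed time in milliseconds to the Messages tab, which sidesteps the unlabeled Client Statistics numbers.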


Whitepaper On Interpreting CHECKDB Results

Aug 10, 2005

Folks,

I'm going to write an advanced whitepaper on interpreting the results of CHECKDB in SQL Server 2005 (mostly applicable to SQL Server 2000 as well); it should be available before the end of the year. A couple of questions for you:

1) Would this be interesting/useful to you?
2) Is there anything in particular you'd like to see covered?

Thanks

Paul Randal
Dev Lead, Microsoft SQL Server Storage Engine
(Legalese: This posting is provided "AS IS" with no warranties, and confers no rights.)


Need Help Interpreting Results Of SET STATISTICS TIME ON

Dec 14, 2007

Hi,

I used SET STATISTICS TIME ON to get execution stats for a query. I found that the CPU Time was sometimes greater than the elapsed time. How is this possible? The query does not use any parallelism since I used the query option MAXDOP 1. Is the elapsed time wait time? Is the total execution time the sum of the CPU time and elapsed time?


SQL Server Execution Times:
   CPU time = 797 ms, elapsed time = 162 ms.
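For reference, the measurement pattern described above was presumably along these lines (the query itself is a placeholder):

SET STATISTICS TIME ON;

SELECT SomeColumn
FROM dbo.SomeTable
OPTION (MAXDOP 1);  -- forces a serial plan

SET STATISTICS TIME OFF;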


Interpreting The Percentage In Decision-tree Model

Sep 15, 2006

Hi,

I used a decision-tree mining model to describe and predict fraud. The table contains 1039 records with 775 distinct values of A-number (the calling party). I used 9 columns in the model. SQL Server reports that only 3 columns are significant in predicting fraud:

- BPN_is_too_short (called party-number is too short)
- Duration_is_zero
- Invalid_area_code

The key column is A-number, and the predicted column is Is_Fraud, whose only values are 0 and 1. There is no record with NULL (a missing value) in the Is_Fraud column.

Mining Legend shows in the first split
[-] 625 cases of fraud
[-] 150 cases of non-fraud
[-] 0 cases of missing

In addition to that, Mining Legend shows
[-] 79.69% of fraud
[-] 19.64% of non-fraud
[-] 0.67% Missing

Now when I compare those values, they don't match.
(A) 625/775 is 80.645%, not 79.69%
(B) 150/775 is 19.355%, not 19.64%
(C) 0 cases of NULL (missing value) should imply 0% of missing, not 0.67% of missing

Furthermore, in one node (with the split on duration_is_zero), there are 541 cases of fraud and 0 cases of non-fraud. This implies the node is a leaf node. However, Mining Legend shows

[D] 514 cases of fraud, 99.35%

[E] 0 cases of non-fraud, 0.33%

[F] 0 cases of missing, 0.33%


My questions:
(1) Why don't the values match in cases A through C?
(2) Why don't the values match even in cases D through F, when there is no subtree at all?

I've searched for an explanation by reading about the mathematical reasoning, entropy, and the Gini index, but none of it accounts for the discrepancies between those values and the percentages in the Mining Legend.

Regards,

Bernaridho


SQL Server 2012 :: Interpreting JSON Data For Reporting Purpose

May 16, 2014

We have a gaming application which generates transactional data in MongoDB, which eventually sends the data to SQL Server in JSON format. This data needs to feed a reporting tool, but visualizing it in the form of a table is proving to be difficult. One example of a column we receive is:

{responseCode:0 transactionId:null amount:200.00 message:account balance }

We need to build a sort of ETL or batch job, but first we need to translate this into a form which SQL Server can understand.
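If upgrading is an option, SQL Server 2016 and later can shred JSON natively with OPENJSON; a minimal sketch, assuming the payload has first been normalized into valid JSON (the sample above is missing its quotes and commas) and that the column layout matches the example:

DECLARE @j nvarchar(max) =
    N'{"responseCode":0,"transactionId":null,"amount":200.00,"message":"account balance"}';

SELECT j.responseCode, j.transactionId, j.amount, j.message
FROM OPENJSON(@j)
WITH (
    responseCode  int           '$.responseCode',
    transactionId nvarchar(50)  '$.transactionId',
    amount        decimal(10,2) '$.amount',
    message       nvarchar(200) '$.message'
) AS j;

On SQL Server 2012 itself, the realistic options are parsing the strings with T-SQL string functions or a CLR function, or translating the JSON in the ETL layer before it lands in the table.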


Reporting Services :: Interpreting Specific Report Rendering From What The Log Shows

Jul 7, 2015

We run Standard 2008. In my SSRS log I see this for one of our most critical reports...

library!ReportServer_0-64!2244!07/07/2015-08:24:53:: Call to GetPermissionsAction(/somedirectory/somedirectory1)...

...which I assume indicates a report starting to render, by first checking permissions.

Around the time my user stopped the report because he felt it was running too long (he still saw the revolving arrow), I see...

webserver!ReportServer_0-64!1dbc!07/07/2015-08:54:44:: i INFO: Processed report. Report='/somedirectory/somedirectory1/importantreport', Stream=''

How can it be true that he stopped it, yet SSRS reports that it processed the report?

About 4 minutes later I see this entry in the log...

webserver!ReportServer_0-64!15e4!07/07/2015-08:58:34:: i INFO: Processed report. Report='/somedirectory/somedirectory1/importantreport', Stream=''

Which 'Processed report' message is right? Could there be multiple messages because of subreports? I see a number of errors and exceptions around these same times but do not know how to tie them to a specific report. Is there a way?
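One way to tie activity to a specific report is the execution log in the ReportServer catalog database rather than the trace log; a sketch, assuming the default database name and the ExecutionLog2 view that ships with 2008 (later versions add ExecutionLog3, where ReportPath becomes ItemPath):

SELECT ReportPath, UserName, TimeStart, TimeEnd, Status,
       TimeDataRetrieval, TimeProcessing, TimeRendering
FROM ReportServer.dbo.ExecutionLog2
WHERE ReportPath LIKE '%importantreport%'
ORDER BY TimeStart DESC;

Each row is one logged execution, which should make it easier to see whether the report actually completed once, twice, or not at all around the times in question.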







