Does Multiplication With 1 Affect Query Performance?
May 20, 2008
I have a stored procedure that converts results to another unit if required.
In alternative 1 below, the results are returned with a separate SELECT statement if no conversion is necessary - in other words, when no multiplication by a conversion factor is required. However, the code is not very nice, since I need to repeat the SELECT statement for the case where a conversion is required, this time including the conversion factor.
Alternative 2 uses cleaner-looking code. The conversion factor is set to 1 if no conversion is required, and a single SELECT statement is used to return the data.
The @factor variable is defined as a float.
I would rather use alternative 2, but I wonder if there is any performance penalty for doing so when no conversion is required, since the results are always multiplied by @factor. Or can SQL Server somehow recognize that @factor = 1 and skip the multiplication?
--- Alternative 1: ---

IF @fromunit_sid = @tounit_sid
    -- Return unconverted results
    SELECT
        ISNULL(ls_totalWaterConsumption, 0) AS ls_totalWaterConsumption,
        ls_theoreticalWaterConsumption AS ls_theoretical_WaterConsumption,
        ls_totalWaterConsumption - ls_theoreticalWaterConsumption AS ls_extra_WaterConsumption
    FROM Results
    WHERE scenario_id = @scenario_id
ELSE
BEGIN
    -- Get conversion factor
    EXEC getConversionFactor @fromunit_sid, @tounit_sid, @factor OUTPUT

    -- Get the converted results
    SELECT
        ISNULL(ls_totalWaterConsumption * @factor, 0) AS ls_totalWaterConsumption,
        ls_theoreticalWaterConsumption * @factor AS ls_theoretical_WaterConsumption,
        (ls_totalWaterConsumption - ls_theoreticalWaterConsumption) * @factor AS ls_extra_WaterConsumption
    FROM Results
    WHERE scenario_id = @scenario_id
END
--- Alternative 2: ---

IF @fromunit_sid = @tounit_sid
    SET @factor = 1
ELSE
    -- Get conversion factor
    EXEC getConversionFactor @fromunit_sid, @tounit_sid, @factor OUTPUT

-- Get the converted results
SELECT
    ISNULL(ls_totalWaterConsumption * @factor, 0) AS ls_totalWaterConsumption,
    ls_theoreticalWaterConsumption * @factor AS ls_theoretical_WaterConsumption,
    (ls_totalWaterConsumption - ls_theoreticalWaterConsumption) * @factor AS ls_extra_WaterConsumption
FROM Results
WHERE scenario_id = @scenario_id
And another question: is using an IF statement considerably faster than making a call to another stored procedure?
In alternative 2 above I use an IF statement to check whether @fromunit_sid = @tounit_sid, and only call getConversionFactor when they differ. But in fact the function getConversionFactor that I'm calling does exactly the same thing: if I pass in identical from- and to-values, it simply returns 1, so I could omit the IF statement completely and just use alternative 3. But is it slower?
--- Alternative 3: ---
-- Get conversion factor
EXEC getConversionFactor @fromunit_sid, @tounit_sid, @factor OUTPUT
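For reference, one rough way to compare the two shapes of the query is simply to time them side by side. This is only a sketch, assuming the Results table from the alternatives above exists; the @scenario_id value and its int type are placeholders:

SET STATISTICS TIME ON

DECLARE @factor float, @scenario_id int
SET @factor = 1
SET @scenario_id = 1   -- placeholder; use a real scenario id

-- Alternative 1 shape: no multiplication at all
SELECT ISNULL(ls_totalWaterConsumption, 0) AS ls_totalWaterConsumption
FROM Results
WHERE scenario_id = @scenario_id

-- Alternative 2 shape: multiplication by @factor = 1
SELECT ISNULL(ls_totalWaterConsumption * @factor, 0) AS ls_totalWaterConsumption
FROM Results
WHERE scenario_id = @scenario_id

SET STATISTICS TIME OFF

The CPU and elapsed times reported in the Messages tab should show whether the extra multiplication is even measurable for the data volumes involved.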
We think we're having performance problems, and among the areas of investigation is the tempdb database. Since it resets itself after SQL Server is restarted, is there a way to find out how big it has grown in the past? Does leaving it at the default size cause a performance hit? Right now it's 8.75 MB, with 7.38 MB available, which sounds pretty harmless.
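For reference, the current (not historical) tempdb file sizes can be checked with something like the sketch below; sysfiles reports size in 8 KB pages and should work on SQL 2000 and 2005:

USE tempdb

-- current size of each tempdb file, in MB (size is stored in 8 KB pages)
SELECT name,
       size / 128.0 AS current_size_mb,
       growth
FROM sysfiles

-- summary of allocated vs. unallocated space
EXEC sp_spaceused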
Everywhere I read, it states that running SQL Profiler can affect the performance of your SQL Server. My question is - how much of an impact will it really make? Will I see a 1% degradation in performance? 5%? 50%? I haven't been able to find a good answer. We have had SQL Profiler running all day long for almost 3 years, and the databases are still humming.
Is it the amount of data you are requesting from the trace that affects performance? There are some compliance tools out there (Idera Compliance Manager, IPLocks, etc.) that run a Profiler trace to get their data. There are other DBAs in my organization who don't want to use them because "profiler traces will degrade my SQL Server performance". How true is this, really?
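For what it's worth, a server-side trace scripted with the sp_trace_* procedures is generally regarded as lighter-weight than watching events in the Profiler GUI, because rows are written to a file on the server rather than streamed to a client, and the cost grows with the number of events and columns captured. A minimal sketch (the output path is just a placeholder; the IDs used are SQL:BatchCompleted = event 12 with TextData, SPID, Duration and StartTime columns):

DECLARE @TraceID int, @maxfilesize bigint, @on bit
SET @maxfilesize = 50   -- MB per trace file
SET @on = 1

-- create the trace, writing to a file on the server (path is a placeholder, .trc is appended automatically)
EXEC sp_trace_create @TraceID OUTPUT, 0, N'D:\Traces\server_side_trace', @maxfilesize, NULL

-- capture SQL:BatchCompleted (event 12): TextData (1), SPID (12), Duration (13), StartTime (14)
EXEC sp_trace_setevent @TraceID, 12, 1, @on
EXEC sp_trace_setevent @TraceID, 12, 12, @on
EXEC sp_trace_setevent @TraceID, 12, 13, @on
EXEC sp_trace_setevent @TraceID, 12, 14, @on

-- start the trace
EXEC sp_trace_setstatus @TraceID, 1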
Any help I can get would be extremely appreciated.
I have a db over which I have little control of most of its makeup because of the vendor-supplied tools. We currently have over 700 tables and 19000 columns. Has anyone seen a problem or saturation point with these kinds of numbers? The database delivered to clients will be from 2-50 GB depending on the site. I can probably throw hardware at problems, but if anyone has been down this road, any suggestions are appreciated.
I'm considering adding domain integrity checks to some of my database table items. How does adding such constraints affect SQL Server performance? For example, I have a simple constraint that restricts a couple of columns to having values within the values assigned in my application by an enumeration:

(([Condition] >= 0 and [Condition] <= 3) and ([Type] >= 0 and [Type] <= 2))

This enforces domain integrity for two enumerations having values 0, 1, 2, 3 and 0, 1, 2 in the application. Is this an efficient way of performing such checks? What are the pitfalls of domain integrity checking?

Thanks
Robin
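For reference, such a rule is normally attached as a CHECK constraint; a minimal sketch, with a hypothetical table name and the two columns described above:

-- hypothetical table name; [Condition] and [Type] as in the post
ALTER TABLE dbo.MyTable
ADD CONSTRAINT CK_MyTable_Condition_Type
CHECK (([Condition] >= 0 AND [Condition] <= 3) AND ([Type] >= 0 AND [Type] <= 2))

A constraint of this form is only evaluated when a row is inserted or updated, so it adds nothing to the cost of SELECT queries.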
I am currently building a website with PHP to deal with different product information and sales. I am using SQL to sort the database and pull out information.
The final thing I need to do is work out the total revenue of each product; however, the problem I am having is that the 'Price' column and the 'SalesVolume' column are in two different tables and they need to be multiplied together.
The two tables and column headings are as follows:
Product: ID, Name, Price
MonthlySales: ID, ProductCode, Month, Year, SalesVolume
(ID and ProductCode are linked together in a relationship)
I cannot see anything wrong with the syntax in my query; however, I believe there is something wrong.
Here is the query I am using:
Code: "SELECT SUM(Products.Price * SUM(MonthlySales.SalesVolume)) as revenue FROM Products INNER JOIN MonthlySales ON(Products.ProductCode = MonthlySales.id) GROUP BY Products.ProductCode";
Recently we added a new table into our SQL2000 database specifically to store scanned in images of documents. This new table contains a PK field, a couple of datetime fields, a couple of char(1) fields and one 'image' field.
Before adding this table, the database size was approx 6GB. Six months after adding this new table, the database has grown to 18GB - 11GB of this is due to the scanned in images.
Would this new table affect the SQL performance with regards to accessing other data in the database that has nothing related to the new table?
If so, would moving this new table into its own database be recommended?
In this table I have found an index on Name1-Name4, and I don't know why the SQL below is so slow:
select Name1, sum(C1), ...., Sum(C100)
from (
    select Name1, Name2, sum(C1) as C1, ...., Sum(C100) as C100
    from (
        select Name1, Name2, Name3, sum(C1) as C1, ...., Sum(C100) as C100
        from (
            select Name1, Name2, Name3, Name4, C1, ...., C100
            from My_Table
            group by Name1, Name2, Name3, Name4
        ) as T
        group by Name1, Name2, Name3
    ) as T
    group by Name1, Name2
) as T
group by Name1
Hi all...I have a friend who has a problem I'm trying to help with (no...REALLY...it's a FRIEND, not ME!!! *LOL*)
I haven't run across the need before, but in a nutshell: we have a table with many rows of data, and one column in each row needs to be multiplied by the same column in all of the table's other rows.
For example... MyKey int, MyFloat float(53)
I want to multiply MyFloat by all of the other MyFloat values in the table.
Similar to SUM(MyFloat) but something like PRODUCT(MyFloat).
Is there an aggregate kept in a basement closet somewhere, or a way to perform this operation without using a cursor to do it?
An example of my table:

MyKey  MyFloat
-----  -------
1      3.2
2      4.1
3      7.1
If I could do a PRODUCT(MyFloat), I would want the result to be (3.2 x 4.1 x 7.1), or 93.152.
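For reference, a common workaround is to go through logarithms, since multiplying values is the same as adding their logs. A minimal sketch, assuming a table named MyTable and strictly positive MyFloat values (LOG is undefined for zero or negative numbers, and the result is subject to floating-point rounding):

-- EXP(SUM(LOG(x))) behaves like a PRODUCT() aggregate for positive values
SELECT EXP(SUM(LOG(MyFloat))) AS MyFloatProduct
FROM MyTable

-- with the sample rows 3.2, 4.1 and 7.1 this returns approximately 93.152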
I'm trying to create a CASE statement so that if a field equals a certain code, I take another field * 0.9. But I'm getting a NULL value for the answer. Here is the statement:

case when parts.ndc = '50242-0138-01' then labels.BAGSDISP*0.9 end "Units Dispensed"

For this example labels.BAGSDISP has a value of 2, so in theory it should be 2 * 0.9 and the result should be 1.8, but I'm getting a NULL.
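For reference, a CASE expression with no ELSE branch returns NULL for every row where the WHEN condition is not met, and the result will also be NULL if labels.BAGSDISP itself is NULL for that row. A sketch of the same expression with an explicit ELSE (column names as in the post):

case when parts.ndc = '50242-0138-01' then labels.BAGSDISP * 0.9
     else labels.BAGSDISP              -- explicit fallback instead of NULL
end "Units Dispensed"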
Suppose I have users that can belong to organizations. Organizations are arranged in a tree. Each organization has only one parent organization, but a user may be a member of multiple organizations.

The problem that I'm facing is that both organizations and individual users may have relationships with other entities which are semantically the same. For instance, an individual user can purchase things and so can an organization. An individual user can have business partners and so can an organization. So it seems that I would need to have a duplicate set of link tables that link a user to a purchase and then a parallel link table linking an organization to a purchase. If I have N entities with which both users and organizations may have relationships, then I need 2*N link tables. There is nothing wrong with that per se, but it is just not elegant to have two different tables for a relationship which is the same in nature, e.g. purchaser->purchaseditem.

One other approach I was thinking of is to create an intermediate entity (say it's called "holder") that will be used to hold references to all the relationships that both an organization and an individual may have. There will be 2 link tables linking organizations to "holder" and users to "holder". Holder will in turn reference the purchases, partners and so on. In this case the number of link tables will be N+2 as opposed to 2*N, but it will have the performance cost of an extra join.

Is there a better way of modelling this notion of 2 different entities that can possess similar relationships with N other entities?
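For reference, a minimal DDL sketch of the "holder" idea described above (all table and column names are hypothetical) - in data-modelling terms this is essentially a supertype table that both Users and Organizations reference:

CREATE TABLE Holder (
    holder_id INT NOT NULL PRIMARY KEY
)

CREATE TABLE Organizations (
    org_id        INT NOT NULL PRIMARY KEY,
    parent_org_id INT NULL REFERENCES Organizations(org_id),   -- tree structure
    holder_id     INT NOT NULL REFERENCES Holder(holder_id)
)

CREATE TABLE Users (
    user_id   INT NOT NULL PRIMARY KEY,
    holder_id INT NOT NULL REFERENCES Holder(holder_id)
)

-- one table per related entity, keyed on holder_id instead of separate user/org link tables
CREATE TABLE Purchases (
    purchase_id INT NOT NULL PRIMARY KEY,
    holder_id   INT NOT NULL REFERENCES Holder(holder_id)
)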
Hello everyone,

I have a very complex performance issue with our production database. Here's the scenario. We have a production web server and a development web server. Both are running SQL Server 2000.

I encountered various performance issues on the production server with a particular query. It would take approximately 22 seconds to return 100 rows, that's about 0.22 seconds per row. Note: I ran the query in single user mode. So I tested the query on the development server by taking a backup (.dmp) of the database and moving it onto the dev server. I ran the same query and found that it ran in less than a second.

I took a look at the query execution plans and found that they were exactly the same in both cases. Then I took a look at the various indexes, and again I found no differences in the table indices.

If both databases are identical, I'm assuming that the issue is related to some external hardware issue like disk space, memory etc., or it could be OS or software related issues, like service packs, SQL Server configurations etc.

Here's what I've done to rule out some obvious hardware issues on the prod server:
1. Moved all extraneous files to a secondary hard drive to free up space on the primary hard drive. There is 55 GB of free space on the disk.
2. Applied the SQL Server SP4 service pack.
3. Defragmented the primary hard drive.
4. Applied all Windows Server 2003 updates.

Here are the prod server's system specs:
2x Intel Xeon 2.67 GHz
Total physical memory 2 GB, available physical memory 815 MB
Windows Server 2003 SE w/ SP1

Here are the dev server's system specs:
2x Intel Xeon 2.80 GHz
2 GB DDR2-SDRAM
Windows Server 2003 SE w/ SP1

I'm not sure what else to do; the query performance is an order of magnitude different and I can't explain it. To me it is a hardware or operating system related issue. Any ideas would help me greatly!

Thanks,
Brian T
Every morning I have 7-10 identical messages in the error log:
1. Configuration option 'allow updates' changed from 1 to 0. Run the RECONFIGURE statement to install.
2. Error: 15457, Severity: 0, State: 1
3. Configuration option 'allow updates' changed from 0 to 1. Run the RECONFIGURE statement to install.
It is a standby server with custom log shipping and a DTS package transferring logins every 15 minutes.
Hi, I wrote the following code in a stored procedure called p_bcp_all and then scheduled it to run overnight. What if the first two bcp commands were successful but the third one failed - is the whole procedure going to fail? Also, what if the first one failed - is the rest of the code going to be executed, so the bcp process still runs for the second and third tables? Thanks for your advice.
Regards, Ali
create procedure p_bcp_all
as
Exec master..xp_cmdshell 'bcp servername..tblone in d:\tblone.txt /f d:\formatfile\tblone.fmt /S servername /U sa /P password /b 250000 /a 8000'
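Regarding the error-handling question: xp_cmdshell returns 0 on success and 1 on failure, so one option is to capture that return code after each bcp call and decide whether to continue. A rough sketch only - the server, file and table names are placeholders following the pattern above:

create procedure p_bcp_all
as
declare @rc int

exec @rc = master..xp_cmdshell 'bcp servername..tblone in d:\tblone.txt /f d:\formatfile\tblone.fmt /S servername /U sa /P password'
if @rc <> 0
begin
    raiserror('bcp of tblone failed', 16, 1)
    return @rc   -- stop here instead of silently carrying on to the next bcp
end

-- repeat the same pattern for the second and third tables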
I'm using a unique index with IGNORE_DUP_KEY on a table to get around using DISTINCT in a stored procedure. In Query Analyzer, when I execute the stored procedure, I get a message and also the data in the grid. When the stored procedure is called from MS Access, the message means I have to add additional client-side programming to work around it. I know that duplicate keys are ignored; that is why I've created the table with this feature (a unique index in combination with IGNORE_DUP_KEY). So what's with the message? If I delete error message 3604, will I still get an error message?
In the Flat File Source properties window there's a Preview node; when you check that node there's an option to skip a number of data rows. Does it affect the result?
Hi, I have a column in a table which tells me whether tax should be calculated on a price. This is stored in a column called taxIncluded. I have a 'price before tax' column, e.g. grossPrice. I want to calculate the price after tax, e.g. netPrice, if the taxIncluded column is set to 1. How do I form my SQL statement to test whether taxIncluded is set to 1 and therefore add the tax at, say, 20%?
e.g.

id   name       grossPrice   taxIncluded   netPrice  <--- calculated within SQL
--   --------   ----------   -----------   --------
1    Product1   10.00        1             12.00
2    Product2   20.00        0             20.00
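For reference, a CASE expression is one way to do this; a minimal sketch, assuming a table named Products and a flat 20% tax rate:

SELECT id,
       name,
       grossPrice,
       taxIncluded,
       CASE WHEN taxIncluded = 1
            THEN grossPrice * 1.20   -- add 20% tax
            ELSE grossPrice          -- leave untaxed prices unchanged
       END AS netPrice
FROM Products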
I'm using MS SQL 7 as my database software, but now I plan to upgrade SQL 7 to SQL 2000. I would like to know whether there is any effect on the old databases already in use with SQL 7 after I upgrade to MSSQL 2000.
I had a report (.rdl) using SQL Reporting Services in SQL Server 2005, where it was running quite well. I have just installed SQL Server 2005 SP2. After that, when I run the report through the report viewer, the result of the report shows the first record of the dataset repeated for all other records. If I run the dataset, the results are correct, but if I preview it, then the first record alone is repeated for all the other records.
I feel that SP2 might have caused this rendering problem. Any suggestions please?
Does Service Pack 2 affect the client tools? In other words, if you have a client machine with just the tools installed (Management Studio, BIDS, etc.), do I need to run the SP2 package on that client as well?
I am developing a report that should be localized dynamically. The report will have both Arabic and Western users. Therefore I need to use the Direction property of some single text boxes, which works fine. But when setting the Direction property to RTL for a table it has no effect! You would assume that the whole table's content should be "reversed" and also that the text in the table would run from right to left. But no.
When viewing the report in IE, right-clicking and choosing "View Source", you can see that the table does not have a direction style set, even though I set the property on the table in design mode.
I am going crazy about this issue! Could this be a MS bug?
I really need some help here...
PS. I have tried numerous combinations of the different International properties to get the Direction property to work - with no luck.
Could a simple update statement on a user database ever cause space usage in tempdb, assuming the update statement fires no triggers and does not use any temp tables?
i.e., in user database A:

UPDATE TableX SET col1 = X
The reason I ask is that tempdb filled up, and the only thing I could see running at that time was the update statement.
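For reference, if this is SQL Server 2005 or later, the tempdb space-usage DMVs can show which sessions are allocating tempdb pages while the statement runs; a minimal sketch:

-- tempdb pages allocated per session (user objects vs. internal objects such as sorts and hashes)
SELECT session_id,
       user_objects_alloc_page_count,
       internal_objects_alloc_page_count
FROM sys.dm_db_session_space_usage
ORDER BY internal_objects_alloc_page_count DESC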
Hi, below is sample data incoming from a source that cannot be changed. Please ignore the mishandling of zls. Obviously it is not insurmountable - I am just interested in why it is happening because I cannot explain it.

DECLARE @t TABLE (the_data CHAR(73))
SET DATEFORMAT dmy
SET NOCOUNT ON

INSERT INTO @t
SELECT ' 11' + SPACE(5) + '1649KN889001 2' + SPACE(10) + '0' + SPACE(10) + '08 01 2002' + SPACE(10) + '04 10 2002'
UNION ALL
SELECT ' 11' + SPACE(5) + '1649KN889001 2' + SPACE(10) + '109 08 2004' + SPACE(20) + '21 07 2005'
UNION ALL
SELECT ' 11 13026721XX198734 1' + SPACE(10) + '0' + SPACE(10) + 'XXXXXXXXXX' + SPACE(10) + '09 01 2003'

SELECT
    CAST(REPLACE(REPLACE(date1_text, ' ', '/'), 'XXXXXXXXXX', NULL) AS SMALLDATETIME) AS date_1_prob,
    CAST(REPLACE(REPLACE(date1_text, ' ', '/'), 'XXXXXXXXXX', '') AS SMALLDATETIME) AS date_1_ok_ish,
    CAST(NULLIF(REPLACE(date1_text, ' ', '/'), 'XXXXXXXXXX') AS SMALLDATETIME) AS date_1_fine,
    date1_text
FROM
    -- derived table - selecting relevant substring
    (SELECT LTRIM(RTRIM(SUBSTRING(the_data, 44, 10))) AS date1_text FROM @t) AS der_t

Results:

date_1_prob   date_1_ok_ish         date_1_fine           date1_text
-----------   -------------------   -------------------   ----------
NULL          2002-01-08 00:00:00   2002-01-08 00:00:00   08 01 2002
NULL          1900-01-01 00:00:00   1900-01-01 00:00:00
NULL          1900-01-01 00:00:00   NULL                  XXXXXXXXXX

Can anyone explain the result in the first row, first column? Thanks
We have an Access application using Jet. I added some new indexes yesterday and now they are being blamed for poor Access application performance. I then dropped the new indexes. The poor performance continued until the Access application was re-linked to the SS2000 database. Then things returned to normal.
Question: does Access/Jet persist SQL Server schema info in its MDB (or elsewhere)? I am told that the MDB is copied from a share to the local PC, where it grows during its use. Some people are saying the MDB persists info about the SS2000 schema which influences how Jet accesses the SS2000 database. Is that true? Is there a link where I can read about this? I am a DBA and am not an Access developer . . .
With SQL 2005 SP2, we are seeing that when auto stats run on one or more indexes of a large table (1.5M rows), the stored proc using that table immediately starts acting as if the query plan is no longer any good. This causes a drastic slowdown in response time and a corresponding increase in table reads to get the data. E.g., the next execution of the procedure after the auto stats kick in goes from 355 reads to 755,000 reads (as depicted by Profiler). Generally, there are about 25 people using the DB at any one time. They connect through a mid-tier VB component.
I tried adding WITH RECOMPILE to the stored proc in question, but that caused almost all executions to run at the higher number. I thought that the WITH RECOMPILE hint would create a new query plan for each execution of the procedure and that plan would be the latest and greatest. Perhaps it did, but most users got stuck with the higher number of reads anyway. After taking the hint out, everyone went back to getting the 335 number and quick response times.
What we are wrestling with is that when those auto stats hit, it really messes up everyone until we manually recompile the procedure. Daily we delete all records in the table that are over 45 days old, so the table stays pretty much the same size. We also set the recompile flag to cause a new plan to be generated that will reflect the smaller amount of data. Should we also run a stats update before recompiling the procedure? Profiler has been very helpful in capturing what is going on, so I think I have a good handle on that. However, I don't understand why WITH RECOMPILE produced a messed up plan for everyone. The compile itself seems to take only 1 ms when done from the query screen.
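For reference, the manual sequence described above (refresh the statistics, then force the procedure to get a fresh plan) can be scripted with something like this sketch; the table and procedure names are placeholders:

-- rebuild statistics on the large table with a full scan (placeholder name)
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN

-- mark the stored procedure for recompilation on its next execution (placeholder name)
EXEC sp_recompile 'dbo.usp_MyProc'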
In a recent attempt to keep the size of my transaction log files down, I altered the schedule of my SQL Server log backups from every 15 minutes between 07:00 and 19:00 to every 15 minutes around the clock.

My company also uses the Dell AppAssure application to take backups. These backups are of the entire drives, so I don't think this will affect the size of my SQL log files, but I did notice that AppAssure has a tick box to truncate the SQL logs, which made me wonder whether it could affect the size of my log file. The AppAssure backups currently run every 15 minutes from 07:00 to 19:00. I'm wondering if I would be able to maintain my log files at a smaller size if I ran this every 15 minutes from 07:00 to 07:00.