How To Test Column Value To Affect Calculation Of Another
Jan 30, 2008
Hi,
I have a column in a table which tells me whether tax should be calculated on a price. This is stored in a column called taxIncluded. I have a 'price before tax' column, e.g. grossPrice, and I want to calculate the price after tax, e.g. netPrice, if the taxIncluded column is set to 1. How do I form my SQL statement to test whether taxIncluded is set to 1 and therefore add the tax at, say, 20%?
e.g.
id    name       grossPrice   taxIncluded   netPrice   <--- calculated within SQL
---   --------   ----------   -----------   --------
1     Product1   10.00        1             12.00
2     Product2   20.00        0             20.00
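A CASE expression handles this. A minimal sketch, assuming a table named Products with the columns shown above and a flat 20% rate:

SELECT id,
       name,
       grossPrice,
       taxIncluded,
       CASE WHEN taxIncluded = 1
            THEN grossPrice * 1.20   -- add 20% tax
            ELSE grossPrice          -- taxIncluded = 0: price unchanged
       END AS netPrice
FROM Products;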
Is there a way to use a column alias in another calculation within the same query? Since I am using some long and complex logic to compute total1 and total2, I don't want to repeat the same logic to compute the ratio of those two columns. I know that I can do a nested query, but that seems too lengthy as well since I actually have many, many columns.

select
    total1 = sum(case(long complex logic)),
    total2 = sum(case(another long complex logic)),
    ratio = total1/total2
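A column alias cannot be referenced later in the same SELECT list; the usual workaround is to compute the totals once in a CTE or derived table and build the ratio in the outer query. A minimal sketch, with MyTable, amount, and the CASE bodies standing in for the real logic (requires SQL Server 2005 or later for the CTE):

WITH totals AS (
    -- the long, complex logic lives in one place only
    SELECT SUM(CASE WHEN amount > 0 THEN amount ELSE 0 END) AS total1,
           SUM(CASE WHEN amount < 0 THEN -amount ELSE 0 END) AS total2
    FROM MyTable
)
SELECT total1,
       total2,
       total1 / NULLIF(total2, 0) AS ratio   -- NULLIF guards against divide-by-zero
FROM totals;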
I have a clustered index on a table; this column is of datatype date. How can I retrieve the following?
select [date], valueColumn from myTable where [date] = '2000-01-03' and ('2000-01-03'+1) and ('2000-01-03'+2)
My goal is to retrieve the 3 values of valueColumn for 3 subsequent days, calculate the average of these 3 values, and insert this average in a third column called [average3days].
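A minimal sketch of one way to do this, assuming myTable has one row per day and already contains the [average3days] column:

DECLARE @start date = '2000-01-03';

UPDATE myTable
SET [average3days] = (
    SELECT AVG(valueColumn)                  -- average over the 3-day window
    FROM myTable
    WHERE [date] >= @start
      AND [date] < DATEADD(DAY, 3, @start)   -- @start, @start+1, @start+2
)
WHERE [date] = @start;                       -- store it on the first day's row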
SELECT Column_1,
       Column_2,
       Column_3,
       10*Column_1 AS Column_4,
       10*Column_2 AS Column_5,
       Column_1*Column_5 AS Column_6   -- I am not able to understand how to do this particular step
FROM Table_1
The first 3 columns are available within the original Table_1. Column_4 and Column_5 have been created by me by doing some calculations on the original columns.
Now, when I try to do further calculations on these newly created columns, SQL Server does not allow that.
I was hoping that I would be able to use the newly created Columns 4 and 5 within this same query to do further calculations, but that does not seem to be the case. Or am I doing something wrong here?
If I have to create a new column by the name of Column_6, which is actually a multiplication of the original Column_1 and the newly created Column_5 (I tried Column_1*Column_5 AS Column_6), then what is the possible solution for me?
I have tried to present my problem in the simplest possible manner. The actual query has many original columns from Table_1 and many calculated columns that are created by me, and now I have to do various calculations that involve making use of both these types of columns.
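CROSS APPLY is one way around this: it materializes the computed columns so later expressions can reference them by alias. A minimal sketch using the names from the question (SQL Server 2005 and later):

SELECT t.Column_1,
       t.Column_2,
       t.Column_3,
       calc.Column_4,
       calc.Column_5,
       t.Column_1 * calc.Column_5 AS Column_6   -- reuses the computed alias
FROM Table_1 AS t
CROSS APPLY (
    SELECT 10 * t.Column_1 AS Column_4,
           10 * t.Column_2 AS Column_5
) AS calc;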
and I need to create a session temp table (e.g. ##output) that translates the calculation (NewAmt - OldAmt) into categories such as
"decrease -201 to -500"
"decrease -1 to -200"
"no change"
"increase 1 to 200"
"increase 201 to 500"
so that my final output would look like this:
ID     NewPer    NewAmt   OldPer     OldAmt   Change   ChangeCategory
334    1/07/08   200      22/01/08   200      0        no change
2396   1/07/08   4000     10/12/07   3600     400      increase 201 to 500
7650   1/07/08   1100     07/07/06   1200     -100     decrease -1 to -200
...
I understand how to add the "Change" column to my temp output table, but am struggling with the ChangeCategory column - can someone point me in the right direction?
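A CASE expression can derive ChangeCategory from the same (NewAmt - OldAmt) calculation. A minimal sketch, assuming a source table or join called SourceData with the columns shown above:

SELECT ID, NewPer, NewAmt, OldPer, OldAmt,
       NewAmt - OldAmt AS Change,
       CASE WHEN NewAmt - OldAmt BETWEEN -500 AND -201 THEN 'decrease -201 to -500'
            WHEN NewAmt - OldAmt BETWEEN -200 AND -1   THEN 'decrease -1 to -200'
            WHEN NewAmt - OldAmt = 0                   THEN 'no change'
            WHEN NewAmt - OldAmt BETWEEN 1 AND 200     THEN 'increase 1 to 200'
            WHEN NewAmt - OldAmt BETWEEN 201 AND 500   THEN 'increase 201 to 500'
       END AS ChangeCategory
INTO ##output
FROM SourceData;   -- hypothetical source table or join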
Do you know of some SQL I can use inside a stored procedure to test a table to see if a column exists or not?
What I have is a table that contains data for a 10 year history. I asked at the time and was told that at the beginning of every new year the table would get 2 new columns, so I dynamically set up some of my stored procedures to provide for that… well, as of Jan 22 they have not updated them, so I have 2 options:
1) hard-code the first year in there and change it when they tell us
2) test for the field: if it is there, start the countdown to grab the rest of the historic info… if not, check for the next year until it finds the field in the DB.
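For option 2, COL_LENGTH (or a lookup against INFORMATION_SCHEMA.COLUMNS) can do the test. A minimal sketch with hypothetical table and column names:

-- HistoryTable and Amt2008 are placeholder names.
IF COL_LENGTH('dbo.HistoryTable', 'Amt2008') IS NOT NULL
    PRINT 'column exists - use it'
ELSE
    PRINT 'column missing - fall back to the previous year'

-- The same check via the catalog views:
IF EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
           WHERE TABLE_NAME = 'HistoryTable' AND COLUMN_NAME = 'Amt2008')
    PRINT 'column exists'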
I have created calculated measures in a SQL Server 2012 SSAS multidimensional model by creating empty measures in the cube and using scope statements to fill in the calculation
(so I can use measure security on calculations, as explained here):
SCOPE [Measures].[C];
    THIS = IIF([Measures].[B] = 0, 0, [Measures].[A] / [Measures].[B]);
END SCOPE;
INSERT INTO MAIN VALUES ('1000', '1/1/2014', 3000, 1000, 700, 1500)
INSERT INTO MAIN VALUES ('1000', '3/5/2014', 1000, 2000, 650, 200)
INSERT INTO MAIN VALUES ('1000', '5/10/2014', 500, 5000, 375, 125)
INSERT INTO MAIN VALUES ('1000', '11/20/2014', 100, 2000, 400, 300)
INSERT INTO MAIN VALUES ('1000', '8/20/2014', 100, 3500, 675, 1300)
I am trying to compare the Sales value of year 2015 with the Sales value of 2016, with the difference stored in an alias column Sales_growth for year 2016 (for year 2015 the alias column should be '1'). Similarly, the difference between the margin of 2015 and 2016 should be stored in an alias column margin_rate in year 2016 (for 2015 as 1). But when there is no record for year 2015 and a record is present in 2016 for a given (month, SM, SG, CUST, SP), then the alias columns sales_growth and margin_rate should be 100.
Last record: as there is no record for year 2015 and a record is present in 2016 for that (month, SM, SG, CUST, SP), the alias columns sales_growth and margin_rate should be 100.
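A minimal sketch of the comparison as a self LEFT JOIN; the year and measure columns on MAIN ([date], Sales, Margin) are assumptions based on the description, and the 100 default covers the missing-2015 case:

SELECT cur.[month], cur.SM, cur.SG, cur.CUST, cur.SP,
       cur.Sales,
       CASE WHEN prev.Sales IS NULL THEN 100    -- no 2015 record
            ELSE cur.Sales - prev.Sales
       END AS sales_growth,
       CASE WHEN prev.Margin IS NULL THEN 100   -- no 2015 record
            ELSE cur.Margin - prev.Margin
       END AS margin_rate
FROM MAIN AS cur
LEFT JOIN MAIN AS prev
       ON  prev.[month] = cur.[month]
       AND prev.SM = cur.SM AND prev.SG = cur.SG
       AND prev.CUST = cur.CUST AND prev.SP = cur.SP
       AND YEAR(prev.[date]) = 2015
WHERE YEAR(cur.[date]) = 2016;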
Is there a way to check for System.DBNull when I use the column name instead of the column index? The column Photographer (in the example below) sometimes contains the value null, and then it throws an error. Or do I have to go back to counting columns (the column index of the returned data) again?

try
{
    connection.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        if (reader.Read())
        {
            albumItem = new Album((int)reader["Id"]);
            // The reader returns DBNull.Value (not null) for NULL columns,
            // so compare against DBNull.Value - this works by column name.
            if (reader["Photographer"] != DBNull.Value)
                albumItem.Photographer = (string)reader["Photographer"];
            albumItem.Public = (bool)reader["IsPublic"];
        }
    }
}
finally
{
    connection.Close();
}

Regards, Sigurd
We are setting up a test lab environment with 100 machines. We want one master testing db that gets replicated to each machine to run scripted application tests nightly.
My goal is to minimize the amount of work to move this thing to each of the 100 test machines. I am wondering whether we even need to have SQL local, and should instead invest in a monster db server with 100 copies of the db that we restore, with each test machine pointing to its own db on that server, or whether I should use db mirroring or something to get the master test db to each of those machines instead.
Now that we have a good programming model in SSIS, the question is whether to write automated unit tests for your packages - would it generally be a good idea for packages?
Also, if yes, where can I find more information on how to accomplish that?
Hi everyone, I need to test an SSIS package which will import data from a different database where the record count is around 5 million. I am planning to test it through C# code as well as manually. The SSIS source consists of 7 tables; the SSIS destination consists of 7 tables. Using C# code I am trying to run the SSIS package through a batch file. I am putting the expected row count and column count in an Excel file and comparing them with the destination tables by writing queries using ADO.NET. Am I going the right way? Can anyone suggest the best and most productive way to test the SSIS package? What are the other things I need to test? Can anyone add test cases to the list below?
S.No   Test Case
1      Verify all the tables have been imported.
2      Verify all the rows in each table have been imported.
3      Verify all the columns specified in the source query for each table have been imported.
4      Verify all the data has been received without any truncation for each column.
5      Verify the schema at source and destination.
6      Verify the time taken / speed for the data transfer.
7      Verify fields truncated due to difference in length of the field at destination.

Regards, Arif Shareef
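For test case 2 above, the row-count comparison can be done directly in T-SQL rather than via Excel. A minimal sketch for one of the seven table pairs, assuming a linked server named SRC for the source database (all names are placeholders):

SELECT (SELECT COUNT(*) FROM SRC.SourceDb.dbo.Table1) AS source_rows,
       (SELECT COUNT(*) FROM DestDb.dbo.Table1)       AS dest_rows,
       CASE WHEN (SELECT COUNT(*) FROM SRC.SourceDb.dbo.Table1)
               = (SELECT COUNT(*) FROM DestDb.dbo.Table1)
            THEN 'PASS' ELSE 'FAIL'
       END AS row_count_check;   -- repeat for the other six tables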
I need to restore a test DB from a production backup, but once it is restored I would need all the permissions of the SQL logins and Windows AD accounts in the test DB to be intact, as they were before.
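One common approach is to script out the test DB's permissions before the restore, re-apply them afterwards, and re-map orphaned users whose SIDs no longer match the server logins. A minimal sketch of the re-mapping step (user name is a placeholder; ALTER USER ... WITH LOGIN needs SQL Server 2005 SP2 or later):

USE TestDb;

-- Report database users whose SIDs no longer match a server login:
EXEC sp_change_users_login 'Report';

-- Re-map one user to its login (repeat per user):
ALTER USER [app_user] WITH LOGIN = [app_user];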
Every morning I have 7-10 identical messages in the error log:
1. Configuration option 'allow updates' changed from 1 to 0. Run the RECONFIGURE statement to install.
2. Error: 15457, Severity: 0, State: 1
3. Configuration option 'allow updates' changed from 0 to 1. Run the RECONFIGURE statement to install.
It is a standby server with custom log shipping and a DTS package transferring logins every 15 min.
We think we're having performance problems, and among the areas of investigation is the tempdb database. Since it resets itself after SQL is restarted, is there a way to find out how big it has grown in the past? Does leaving it at the default size cause a performance hit? Right now it's 8.75 MB, with 7.38 MB available, which sounds pretty harmless.
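The current size and free space can be queried at any time; historical growth is not kept across restarts unless you log it yourself on a schedule. A minimal sketch (the sys.database_files view requires SQL Server 2005 or later; size is reported in 8 KB pages):

SELECT name,
       size * 8 / 1024.0 AS size_mb   -- convert 8 KB pages to MB
FROM tempdb.sys.database_files;

-- On older versions:
EXEC sp_helpdb 'tempdb';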
Everywhere I read, it states that running SQL Profiler can affect the performance of your SQL Server. My question is - how much of an impact will it really make? Will I see a 1% degradation in performance? 5%? 50%? I haven't been able to find a good answer. We currently have SQL Profiler running all day long for almost 3 years, and the databases are still humming.
Is it the amount of data you are requesting from the trace that affects performance? There are some compliance tools out there (Idera Compliance Manager, IPLocks, etc.) that run a Profiler trace to get data. There are other DBAs in my organization who don't want to use them because "profiler traces will degrade my SQL Server performance". How true is this really?
Any help I can get would be extremely appreciated.
Does multiplication by 1 affect query performance?
I have a stored procedure that converts results to another unit if required. In alternative 1 below, the results are returned with a separate select statement if no conversion is necessary - in other words, no multiplication by a conversion factor is required. However, the code is not very nice, since I need to repeat the select statement again in case a conversion is required, this time including the conversion factor.
Alternative 2 uses cleaner-looking code. The conversion factor is set to 1 if no conversion is required, and a single SELECT statement is used to return the data. The @factor variable is defined as a float.
I would rather use alternative 2, but I wonder if there is any performance penalty for doing that if no conversion is required, since the results are always multiplied by @factor? Or can SQL Server somehow understand that @factor = 1 and no multiplication is required?

--- Alternative 1: ---
IF @fromunit_sid = @tounit_sid
    -- Return unconverted results
    SELECT ISNULL(ls_totalWaterConsumption, 0) AS ls_totalWaterConsumption,
           ls_theoreticalWaterConsumption AS ls_theoretical_WaterConsumption,
           ls_totalWaterConsumption - ls_theoreticalWaterConsumption AS ls_extra_WaterConsumption
    FROM Results
    WHERE scenario_id = @scenario_id
ELSE
BEGIN
    -- Get conversion factor
    EXEC getConversionFactor @fromunit_sid, @tounit_sid, @factor OUTPUT
    -- Get the converted results
    SELECT ISNULL(ls_totalWaterConsumption * @factor, 0) AS ls_totalWaterConsumption,
           ls_theoreticalWaterConsumption * @factor AS ls_theoretical_WaterConsumption,
           (ls_totalWaterConsumption - ls_theoreticalWaterConsumption) * @factor AS ls_extra_WaterConsumption
    FROM Results
    WHERE scenario_id = @scenario_id
END

--- Alternative 2: ---
IF @fromunit_sid = @tounit_sid
    SET @factor = 1
ELSE
    -- Get conversion factor
    EXEC getConversionFactor @fromunit_sid, @tounit_sid, @factor OUTPUT

-- Get the converted results
SELECT ISNULL(ls_totalWaterConsumption * @factor, 0) AS ls_totalWaterConsumption,
       ls_theoreticalWaterConsumption * @factor AS ls_theoretical_WaterConsumption,
       (ls_totalWaterConsumption - ls_theoreticalWaterConsumption) * @factor AS ls_extra_WaterConsumption
FROM Results
WHERE scenario_id = @scenario_id

And another question: is using an IF statement considerably faster than making a call to another stored procedure? In alternative 2 above I use an IF statement to check if @fromunit_sid = @tounit_sid. But in fact the procedure getConversionFactor that I'm calling does exactly the same thing: if I pass in identical from- and to-values, it simply returns 1, so I could omit the IF statement completely and just use alternative 3. But is it slower?

--- Alternative 3: ---
-- Get conversion factor
EXEC getConversionFactor @fromunit_sid, @tounit_sid, @factor OUTPUT
Hi, I wrote the following code in one stored procedure called p_bcp_all and then scheduled it to run overnight. What if the first two bcp calls were successful but the third one failed - is the whole procedure going to fail? Also, what if the first one failed - is the rest of the code going to be executed, with the bcp process going ahead for the second and the third table? Thanks for your advice.
Regards, Ali
create procedure p_bcp_all
as
Exec master..xp_cmdshell "bcp servername..tblone in d:\tblone.txt /f d:\formatfile\tblone.fmt /S servername /U sa /P password /b 250000 /a 8000"
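On the failure question: each Exec runs independently, so one failed bcp does not stop the others by itself, but xp_cmdshell returns a status (0 = success) that can be checked. A minimal sketch of guarding each step (names follow the pattern above and are assumptions):

DECLARE @rc int

EXEC @rc = master..xp_cmdshell 'bcp servername..tblone in d:\tblone.txt /f d:\formatfile\tblone.fmt /S servername /U sa /P password'
IF @rc <> 0
BEGIN
    RAISERROR ('bcp of tblone failed', 16, 1)
    RETURN   -- skip the remaining bcp calls
END
-- ...repeat the same pattern for the second and third tables.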
I have a db whose makeup I have little control over because of the vendor-supplied tools. We currently have over 700 tables and 19000 columns. Has anyone seen a problem or saturation point with these kinds of numbers? The database delivered to the clients will be from 2-50 gig depending on the site. I can probably throw hardware at problems, but if anyone has been down this road, any suggestions are appreciated.
I'm using a unique index on a table with ignore_dup_key to get around using DISTINCT in a stored procedure. In Query Analyzer, when I issue the stored procedure, you get a message and also the data in the grid. In a stored procedure call initiated from MS Access, the message means I have to add additional client-side programming to work around it. I know that duplicate keys are ignored; that is why I've created the table with this feature (a unique index in combination with ignore_dup_key). So what's with the message? If I delete error message 3604, will I still get an error message?
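For reference, a minimal sketch of the setup being described (names are placeholders): the second insert is silently skipped and SQL Server raises the informational 'Duplicate key was ignored.' message (3604) rather than an error:

CREATE TABLE #dedup (keycol int NOT NULL)

CREATE UNIQUE INDEX ux_dedup ON #dedup (keycol)
    WITH IGNORE_DUP_KEY   -- newer syntax: WITH (IGNORE_DUP_KEY = ON)

INSERT INTO #dedup (keycol) VALUES (1)
INSERT INTO #dedup (keycol) VALUES (1)   -- skipped: 'Duplicate key was ignored.'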
I'm considering adding domain integrity checks to some of my database table items. How does adding such constraints affect SQL Server performance? For example, I have a simple constraint that restricts a couple of columns to having values within the values assigned in my application by an enumeration:

(([Condition] >= 0 and [Condition] <= 3) and ([Type] >= 0 and [Type] <= 2))

This enforces domain integrity for two enumerations having values 0, 1, 2, 3 and 0, 1, 2 in the application. Is this an efficient way of performing such checks? What are the pitfalls of domain integrity checking?
Thanks
Robin
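For context, a minimal sketch of attaching that expression as a CHECK constraint (table name is a placeholder); it is evaluated only when the constrained columns are inserted or updated, so reads are unaffected:

ALTER TABLE dbo.MyTable   -- placeholder table name
ADD CONSTRAINT ck_mytable_enums
CHECK (([Condition] >= 0 AND [Condition] <= 3)
   AND ([Type] >= 0 AND [Type] <= 2));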
and in this table, I have found an index on Name1-Name4,
I don't know why the SQL below is very slow:
select Name1, sum(C1), ...., Sum(C100)
from (
    select Name1, Name2, sum(C1) as C1, ...., Sum(C100) as C100
    from (
        select Name1, Name2, Name3, sum(C1) as C1, ...., Sum(C100) as C100
        from (
            select Name1, Name2, Name3, Name4, C1, ...., C100
            from My_Table
            group by Name1, Name2, Name3, Name4
        ) as T
        group by Name1, Name2, Name3
    ) as T
    group by Name1, Name2
) as T
group by Name1