T-SQL (SS2K8) :: Varying Calculation For Fixed Number Of Fields
Aug 19, 2014
I'm writing a query that will be used in Jasper iReports, but I prefer to have the values computed ahead of time in SQL rather than relying on the report to do the heavy lifting. The fields are pretty straightforward; only the display is where I have a question.
Fields Used: PERIOD ('MON-yyyy') and VALUE
The results must start with the CURRENT PERIOD (AUG-2014) in one column, with the VALUE for the current period multiplied by 1/12 (VALUE*(1/12)). The next column should return the VALUE for CURRENT PERIOD - 1 (JUL-2014) multiplied by 2/12 (VALUE*(2/12)).
This should continue for the last 11 months, ending with OCT-2013 and the value multiplied by 11/12 (VALUE*(11/12)). Is the easiest solution to this a CASE statement looking at PERIOD, then PERIOD minus one month, minus two months...etc?
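A minimal sketch of one alternative to a long CASE list, assuming the data lives in a table such as dbo.PeriodValues(PERIOD varchar(8), VALUE decimal(18,2)) with PERIOD in 'MON-yyyy' form (those names and types are placeholders): the multiplier is derived from the month offset against the current period, and the weighted rows can then be pivoted into columns by the report or a later step.

DECLARE @Current date = '2014-08-01';   -- first day of the current period

SELECT
    PERIOD,
    VALUE,
    -- 0 months back -> 1/12, 1 month back -> 2/12, ... 10 months back -> 11/12
    VALUE * (DATEDIFF(MONTH, CONVERT(date, '01-' + PERIOD), @Current) + 1) / 12.0 AS WeightedValue
FROM dbo.PeriodValues
WHERE CONVERT(date, '01-' + PERIOD) BETWEEN DATEADD(MONTH, -10, @Current) AND @Current;

The CONVERT relies on an English language setting to parse the month abbreviation; a lookup table of period names would be safer if that cannot be guaranteed.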
Hello All, I have a requirement where the number of parameters being passed to a stored procedure is not fixed. It has to return a list of computers belonging to a particular domain, to several domains, or simply the full list regardless of domain. For this, the @Domain parameter could have one value, several values, or no values at all. Can you please let me know how I go about this?
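One hedged way to handle a variable number of values on SQL Server 2008 and later is a table-valued parameter; the type, procedure, and table names below are assumptions, and an empty parameter is treated as "no filter".

CREATE TYPE dbo.DomainList AS TABLE (DomainName sysname NOT NULL PRIMARY KEY);
GO
CREATE PROCEDURE dbo.GetComputers
    @Domains dbo.DomainList READONLY
AS
BEGIN
    SET NOCOUNT ON;
    SELECT c.ComputerName, c.DomainName
    FROM dbo.Computers AS c
    WHERE NOT EXISTS (SELECT 1 FROM @Domains)                  -- no values passed: return everything
       OR c.DomainName IN (SELECT DomainName FROM @Domains);   -- otherwise filter by the list
END
GO
-- caller passes as many domains as needed
DECLARE @d dbo.DomainList;
INSERT INTO @d VALUES ('DOMAIN_A'), ('DOMAIN_B');
EXEC dbo.GetComputers @Domains = @d;

A delimited string parameter split inside the procedure is the other common option if the callers cannot use table-valued parameters.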
I need to import data into an MSSQL table from massive (read: a million and a half rows, every single day) logs that come in .txt format, separated in tabs with a ";" symbol, and then have some stored procedures analyze that data to generate some reports in an Excel file with that info. The text files include the column headers in the first row and the data starts on the second one.
The challenge is that the text files differ in column order and count every single day.
The analysis that I need to do only needs about 15 columns from the nearly 90-120 that those files include, and those columns sadly happen to be in a different order in those files.
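One hedged approach, assuming the day's file has already been bulk-loaded into a staging table (dbo.StageRaw here, a placeholder) whose columns are created from the file's header row, and that dbo.LogData (also a placeholder) holds only the ~15 columns the reports need: map by column name with dynamic SQL, so the order and count of the columns in the file stop mattering.

DECLARE @cols nvarchar(max), @sql nvarchar(max);

-- columns that exist in both the staging table and the reporting table
SELECT @cols = STUFF((
    SELECT ',' + QUOTENAME(s.name)
    FROM sys.columns AS s
    WHERE s.object_id = OBJECT_ID('dbo.StageRaw')
      AND s.name IN (SELECT d.name FROM sys.columns AS d
                     WHERE d.object_id = OBJECT_ID('dbo.LogData'))
    ORDER BY s.name
    FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'), 1, 1, '');

SET @sql = N'INSERT INTO dbo.LogData (' + @cols + N') SELECT ' + @cols + N' FROM dbo.StageRaw;';
EXEC sp_executesql @sql;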
I have a large SQL 2012 table containing survey details. The number of questions varies in each survey and can range from as few as 10 questions to a maximum of 50. I need to adapt my crosstab code below to include a CASE statement that outputs a column (Q1, Q2, Q3 etc) representing the questions up to a maximum of 50 questions (Q50), and to place the answer under the corresponding question column within each survey. Ideally I want to avoid having to write 50 CASE statements in my code. I chose the CASE statement method as I understand that the PIVOT option isn't as flexible. I have included some test data and the output should look like this:
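A minimal sketch of generating the 50 CASE expressions instead of typing them, assuming a normalized answers table such as dbo.SurveyAnswers(SurveyID, QuestionNo, Answer); those names are placeholders taken from the description, not from the actual schema.

DECLARE @cols nvarchar(max), @sql nvarchar(max);

-- build one MAX(CASE ...) per possible question number, 1 through 50
;WITH n AS (
    SELECT TOP (50) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS q
    FROM sys.all_objects
)
SELECT @cols = (
    SELECT N', MAX(CASE WHEN QuestionNo = ' + CAST(q AS nvarchar(10))
         + N' THEN Answer END) AS Q' + CAST(q AS nvarchar(10))
    FROM n
    ORDER BY q
    FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)');

SET @sql = N'SELECT SurveyID' + @cols + N'
FROM dbo.SurveyAnswers
GROUP BY SurveyID
ORDER BY SurveyID;';

EXEC sp_executesql @sql;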
I currently have 5 fields in the same SQL Server table, 4 of which pull Stock Levels from other tables and the 5th is a Total Stock field which is to add the 4 previous fields.
The problem is that some of the first 4 fields contain NULLs as well as figures, and when the Total Stock tries to add these together the outcome is NULL, which is incorrect.
E.g.
COL 1 COL 2 COL 3 COL 4 TOTALSTOCK
2 NULL 0 1 NULL
I have tried to change it to a NOT NULL column but it won't let me.
Is there a way to do an update to change NULL's to 0?
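Both of the usual fixes are sketched below against assumed names (dbo.Stock, COL1..COL4, TOTALSTOCK, following the example): either update the NULLs to 0 once, or compute the total with ISNULL so stray NULLs can never blank it out.

-- one-off cleanup, after which the columns can be altered to NOT NULL with a DEFAULT of 0
UPDATE dbo.Stock
SET COL1 = ISNULL(COL1, 0),
    COL2 = ISNULL(COL2, 0),
    COL3 = ISNULL(COL3, 0),
    COL4 = ISNULL(COL4, 0);

-- defensive total regardless of NULLs
UPDATE dbo.Stock
SET TOTALSTOCK = ISNULL(COL1, 0) + ISNULL(COL2, 0) + ISNULL(COL3, 0) + ISNULL(COL4, 0);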
I am new to SSIS. I have been struggling with this for the past week. I have a weird task: I need to import several tables from one database to a different server with a new database name. We need to do this at the end of every year. The main problem is that the number of tables varies every year; you may not have all the tables you had last year, or you may have more. So I need to create a dynamic task that takes care of this every year without changing the package.
I have performed the following tasks:
1. Create a new dynamic database (I have used Execute SQL Task to do this).
2. Copy all the table structures (I have used Execute SQL Task to do this).
3. Import Data. This is the main problem. I was trying to create a dynamic connection string with variables as suggested in several forums, but I finally came to know that this cannot be done if the table structures are different, as the metadata cannot be refreshed at runtime.
4. The final step is to create a process to validate the data (the count from each table for both source and destination). I think this can be done with an Execute SQL Task.
What is the best method to do this? My DBA does not like "Transfer SQL Objects Task" or "Transfer Database Task". I would like to create this as a dynamic process.
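For the data step, one hedged option is to have an Execute SQL Task run a generated INSERT per table on the destination, pulling from the source through a linked server; 'SourceSrv' and the source database name below are assumptions, and tables with identity columns would additionally need generated column lists plus SET IDENTITY_INSERT.

DECLARE @sql nvarchar(max) = N'';

-- destination already has the table structures, so generate one INSERT per table
SELECT @sql = @sql + N'
INSERT INTO dbo.' + QUOTENAME(t.name) + N'
SELECT * FROM SourceSrv.LiveDB.dbo.' + QUOTENAME(t.name) + N';'
FROM sys.tables AS t;

EXEC sp_executesql @sql;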
I have the following problem with ROUND. When applying a percentage to each value using ROUND, the sum of the rounded per-value results does not equal the percentage applied to the sum of the values (also rounded).
IF OBJECT_ID('Tempdb..#Redondeo') IS NOT NULL
    DROP TABLE #Redondeo

Create Table #Redondeo (Orden int Identity(1,1), Valores money)

Insert into #Redondeo
Select 71374.24 Union
Select 16455.92 Union
Select 56454.20 Union
Select 9495.18
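A minimal sketch of the usual workaround, using the #Redondeo table above and an assumed percentage of 15%: round every row except the last one and let the last row absorb the accumulated rounding difference, so the detail always sums to the rounded total.

DECLARE @pct decimal(9,6) = 0.15;    -- assumed percentage
DECLARE @Target money = (SELECT ROUND(SUM(Valores) * @pct, 2) FROM #Redondeo);

SELECT
    Orden,
    Valores,
    CASE WHEN Orden = (SELECT MAX(Orden) FROM #Redondeo)
         THEN @Target - (SELECT SUM(ROUND(Valores * @pct, 2))
                         FROM #Redondeo
                         WHERE Orden < (SELECT MAX(Orden) FROM #Redondeo))
         ELSE ROUND(Valores * @pct, 2)
    END AS Importe
FROM #Redondeo
ORDER BY Orden;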
I am trying to produce a report that will show the duration in minutes that a room was occupied for a category. Whilst I have the start and end dates and times, the end user must be able to specify not only a range of dates, but also a start and end time for the hours of the day they are interested in (it will be applied to all days in the range; they are not allowed to specify a different start/end time per day).
The example I have is a date range of 6 to 17 October, but they only want the times from 09:00 to 21:00, so if a room was occupied from 08:00 to 11:00 they would only want to know the duration as 120 minutes (09:00 to 11:00) not 180.
The data is supplied by a third party, and duration in minutes is supplied, but it is not much use when they are not interested in the 'real' duration.
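A minimal sketch for rows that start and end on the same day, with assumed table and column names (dbo.RoomOccupancy, RoomID, StartDT, EndDT): clip each occupancy to the requested window first, then measure the minutes, so an 08:00-11:00 booking against a 09:00-21:00 window reports 120.

DECLARE @WindowStart time = '09:00', @WindowEnd time = '21:00';

SELECT
    RoomID, StartDT, EndDT,
    CASE WHEN x.ClippedEnd > x.ClippedStart
         THEN DATEDIFF(MINUTE, x.ClippedStart, x.ClippedEnd)
         ELSE 0                                   -- occupancy entirely outside the window
    END AS MinutesInWindow
FROM dbo.RoomOccupancy
CROSS APPLY (SELECT
        CASE WHEN CAST(StartDT AS time) < @WindowStart
             THEN DATEADD(DAY, DATEDIFF(DAY, 0, StartDT), CAST(@WindowStart AS datetime))
             ELSE StartDT END,
        CASE WHEN CAST(EndDT AS time) > @WindowEnd
             THEN DATEADD(DAY, DATEDIFF(DAY, 0, EndDT), CAST(@WindowEnd AS datetime))
             ELSE EndDT END
    ) AS x(ClippedStart, ClippedEnd);

Occupancies spanning several days would need to be split into one row per day first.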
A table has a column of int type. I need to select a fixed number of rows for each value. For example, if data in that column (c) are 5, 6, 7, and the number I want to select is 2, then I need 2 rows from c=5, 2 from c=6, and 2 from c=7. How to write that query? Any idea?
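A minimal sketch using ROW_NUMBER, with dbo.MyTable and column c as placeholders from the question and @N as the per-value row count; the ORDER BY inside the window decides which rows are picked.

DECLARE @N int = 2;

SELECT *
FROM (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY c ORDER BY c) AS rn   -- order by something meaningful if it matters
    FROM dbo.MyTable
) AS t
WHERE rn <= @N;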
I have an issue where I have multiple rows of data and I need to reduce a dollar amount by a fixed maximum. I am going to throw some code in here to give a rudimentary idea of the data and what the final result should be.
I need to run an update so that the result of the following query:
select LineNum, Code, Amt, MaxAmt from @tbl

looks like this:

LineNum     Code Amt        MaxAmt
----------- ---- ---------- ----------
1           AA   10.00      50.00
2           AA   20.00      50.00
3           AA   20.00      50.00
(3 row(s) affected)
I have tried cursors but got unexpected results or the MaxAmt always defaulted to the original even if I updated it. This seems like a simple problem but I have been banging my head against the wall for 2 days now. I've written some pretty complicated updates with less effort than this and I must have some mental block that is keeping me from figuring this out.
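A minimal sketch of one set-based way to do it, under the assumption (implied by the sample) that amounts are non-negative and rows are reduced in LineNum order once the running total for a Code reaches MaxAmt; @tbl and its columns follow the question.

UPDATE t
SET Amt = CASE
            WHEN prior.RunningBefore >= t.MaxAmt THEN 0                  -- cap already reached
            WHEN prior.RunningBefore + t.Amt > t.MaxAmt
                 THEN t.MaxAmt - prior.RunningBefore                     -- partial reduction
            ELSE t.Amt                                                   -- untouched
          END
FROM @tbl AS t
CROSS APPLY (SELECT ISNULL(SUM(p.Amt), 0) AS RunningBefore
             FROM @tbl AS p
             WHERE p.Code = t.Code
               AND p.LineNum < t.LineNum) AS prior;

Because the amounts are non-negative, the prior rows' original sum and their reduced sum reach the cap at the same point, so a single UPDATE is enough.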
I need to calculate the total discount on an item in the case where a user has several discounts and each one applies to the already-discounted amount. I thought to have something like:
DECLARE @Disc float
SET @Disc = 0

-- each row compounds on the already-discounted amount:
-- e.g. 10% then 20% gives 10 + (100 - 10) * 20 / 100 = 28, not 30
SELECT @Disc = @Disc + (100 - @Disc) * Disc / 100
FROM UserDiscounts
WHERE UserID = 123
I'm reviewing the CAST function using Microsoft SQL server. I generally understand the functionality, but my confusion lies in the results of a particular script which casts a DateTime value to a fixed-point number.
It is as follows:
DECLARE @From DATETIME
DECLARE @To NUMERIC(10,5)

SET @From = '2009-10-11T11:00:00'
SET @To = CAST(@From AS NUMERIC(10,5))
PRINT @To
This results in: 40095.45833
I am completely lost as to how 40095.45833 = 2009-10-11T11:00:00. I just do not understand how that fixed-point number equates to the source DateTime value.
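The value is DATETIME's internal representation surfacing: the integer part is the number of days since the type's base date of 1900-01-01, and the fraction is the portion of the day that has elapsed. 2009-10-11 is 40095 days after 1900-01-01, and 11:00 is 11/24 of a day, i.e. 0.45833. A quick check:

SELECT DATEDIFF(DAY, '19000101', '20091011');   -- 40095
SELECT CAST(11.0 / 24 AS NUMERIC(10,5));        -- 0.45833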
Is it possible to display only a certain number of columns in a matrix, say the first 6, and then hide the rest? That is, does the matrix allow you to somehow control how many columns are displayed from a column group and hide the remaining columns? (I need this to limit the number of columns a user is able to see so that the matrix width does not grow without limit.)
In other words.....
I need to display the subtotals for all dynamically generated columns but show only the first 6 columns. This way I can avoid having to display 50 columns, so the user does not have to scroll so far to the right and the page width stays within reasonable limits. Hope I have made it clear.
I need to count and display the number of records which have GradeTitle="SHO". I'm only starting to use BI development studio and all attempts at using the built in aggregate functions have failed.
Also, the report I wish to create has a fixed number of columns and a fixed number of rows as the info being displayed is really only counting values in the DB. I tried using Table but multiple rows were created.
I'd appreciate if anyone could point me in the right direction, as searching this forum turned out to be pretty fruitless for me.
I have such a problem: I need to add an additional column to my query. The column should consist of a set of a fixed number of text values (the same as the number of query rows). At the start I thought it was simple, but now I'm lost. Is there any chance to do it? I'd appreciate any help. I should mention that I only have SELECT access to this database, so I cannot perform any operations on the tables.
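A minimal sketch, since only SELECT access is needed: number the query rows and join them by position to an inline VALUES list carrying the fixed texts. All object and column names here are placeholders.

WITH q AS (
    SELECT ROW_NUMBER() OVER (ORDER BY SomeKey) AS rn, SomeKey, SomeValue
    FROM dbo.SomeTable
),
labels AS (
    SELECT rn, txt
    FROM (VALUES (1, 'first text'), (2, 'second text'), (3, 'third text')) AS v(rn, txt)
)
SELECT q.SomeKey, q.SomeValue, labels.txt AS ExtraColumn
FROM q
JOIN labels ON labels.rn = q.rn;

If the same text should appear on every row, a plain literal in the select list ('some text' AS ExtraColumn) already does the job.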
I have a dataset with 2 columns, a rownumber and a servername - eg
rownumber servername
1 server1
2 server2
....
15 server15
I want to display the servernames in a report so that you get 3 columns - eg
server1 | server2 | server3
server4 | server5 | server6
...
server13 | server14 | server15
I have tried using multiple tables and lists and filtering the data on each one, but this then makes formatting very hard; I either end up with a huge gap between columns or the columns overlap.
I have also tried using a matrix control but can't find a way to do this.
Does anybody know an easy way to do this? The data comes from SQL 2005 so I can use a PIVOT clause on the dataset if somebody knows a way to do it that way. The reporting service is also RS2005.
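A minimal sketch of doing it in the dataset query itself, assuming the rows sit in a table or prior result called #Servers (a placeholder): rows are bucketed three at a time by (rownumber - 1) / 3 and spread across columns by (rownumber - 1) % 3.

SELECT
    MAX(CASE WHEN (rownumber - 1) % 3 = 0 THEN servername END) AS Col1,
    MAX(CASE WHEN (rownumber - 1) % 3 = 1 THEN servername END) AS Col2,
    MAX(CASE WHEN (rownumber - 1) % 3 = 2 THEN servername END) AS Col3
FROM #Servers
GROUP BY (rownumber - 1) / 3
ORDER BY (rownumber - 1) / 3;

The report then only has to render three ordinary columns.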
I want to break the report into groups of 2 rows using ceiling(rownumber(nothing)/2). I used this expression in a row group. URL....But I want to use this expression in a matrix, and that matrix is within a list, so I am getting an error. How do I use rownumber in a list? Here I used the list to break the pages by id.
1. Copy old data from each table in LiveDB to the same table in ArchiveDB.
2. Delete the data from each table in LiveDB which is in ArchiveDB.
Both DBs are in SIMPLE recovery mode.
Each table has a clustered PK on a single int value, in both DBs.
The tables with varchar(max) columns are taking a very long time to copy over.
Is there anything I can change in the ArchiveDB to make it run faster?
It is the insert that is taking the time. I've tried dropping the clustered PKs in ArchiveDB tables and then rebuilding afterwards but it has not made any difference. After all I am adding data to the ArchiveDB in clustered index order, so wouldn't have expected it to.
Note that I can change the ArchiveDB but cannot touch the schema/settings of the LiveDB.
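One hedged option that often helps under SIMPLE recovery is to move the rows in batches with DELETE ... OUTPUT, so the copy and the delete become a single pass and each batch is a small transaction; the table names, cut-off column and @BatchSize below are assumptions, both databases must be on the same instance, the archive table must mirror the live table's columns, and the OUTPUT target must have no triggers or foreign keys.

DECLARE @BatchSize int = 50000;

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize)
    FROM LiveDB.dbo.BigTable
    OUTPUT deleted.* INTO ArchiveDB.dbo.BigTable
    WHERE CreatedDate < '20130101';

    IF @@ROWCOUNT < @BatchSize BREAK;   -- nothing (or only a partial batch) left to move
END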
I need to build an update query for all my articles beginning with '0.%' for my varchar field refkey, but depending on some conditions.
For example:
Case st.base <> '' and st.qttbase <> 0 Then refkey = B
Case st.cos <> '' and st.qttcos <> 0 Then refkey = C
Case st.refo <> '' and st.qttrefo <> 0 Then refkey = R
But my problem is that I can have many combinations of refkey, for example:
B, BC, BCR, C, CB, CBR, R...and so on.
Also I need to separate the letters with a comma, like:
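A minimal sketch of building the comma-separated flags in one pass; the table name and the article filter column are assumptions (the st alias and the condition columns follow the question), STUFF strips the leading comma, and ISNULL covers rows where nothing matches.

UPDATE st
SET refkey = ISNULL(STUFF(
        CASE WHEN st.base <> '' AND st.qttbase <> 0 THEN ',B' ELSE '' END
      + CASE WHEN st.cos  <> '' AND st.qttcos  <> 0 THEN ',C' ELSE '' END
      + CASE WHEN st.refo <> '' AND st.qttrefo <> 0 THEN ',R' ELSE '' END,
      1, 1, ''), '')
FROM dbo.Stock AS st
WHERE st.article LIKE '0.%';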
I have a fairly basic query I want to make but I'm struggling to figure out how to do it. Let's say I have some fields (e.g. Value1, Value2, Value3, etc). I simply want to do a SELECT statement that returns the highest value among those fields for every row in my db.
At first I thought of the built-in MAX function until I realized that it works within a single column only.
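A hedged sketch using CROSS APPLY with a VALUES list, which stays readable even with many fields; the table name is a placeholder and Value1-Value3 follow the example.

SELECT t.*, mx.HighestValue
FROM dbo.MyTable AS t
CROSS APPLY (SELECT MAX(v)
             FROM (VALUES (t.Value1), (t.Value2), (t.Value3)) AS vals(v)
            ) AS mx(HighestValue);

Extra fields just become extra rows in the VALUES list. A nested CASE comparing the columns works too, but it gets unwieldy quickly.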
I would like to store all information on them in this table but that amounts to quite a lot of fields (about 50) and some of these will store a lot of HTML text in them.
Is it best to split the table up for performance reasons or will it make little difference?
I'm wondering if there's a possibility for me to insert values into a table without knowing a fixed number of fields? In other words, I won't know how many fields I have in a single table until users enter data.
Any suggestions or recommendations are welcome. Thanks in advance.
I am working on a project which requires a form that must be filled in by the user. The number of fields on this form is around 150. Is it a good idea to have a database table with 150 fields (columns)? If it's not, what would be a better approach for this case? Handling this many fields on a Windows form is another issue, but first I would like to know how to deal with the data storage.
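If most of the 150 answers are simple text, one hedged alternative to a 150-column table is to store one row per submitted field (an attribute-value layout); all names below are assumptions, and the trade-off is that reporting needs a pivot to get back to one row per submission.

CREATE TABLE dbo.FormSubmission (
    SubmissionID int IDENTITY(1,1) PRIMARY KEY,
    SubmittedAt  datetime NOT NULL DEFAULT GETDATE()
);

CREATE TABLE dbo.FormField (
    FieldID   int IDENTITY(1,1) PRIMARY KEY,
    FieldName nvarchar(100) NOT NULL UNIQUE
);

CREATE TABLE dbo.FormValue (
    SubmissionID int NOT NULL REFERENCES dbo.FormSubmission (SubmissionID),
    FieldID      int NOT NULL REFERENCES dbo.FormField (FieldID),
    FieldValue   nvarchar(max) NULL,
    PRIMARY KEY (SubmissionID, FieldID)
);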