I have this bit of code below and would like to bring back stat_codes that have no matching rows with a count of 0...
select stat_code, COUNT(stat_code) AS Stat_Count
--INTO tableSroStatABD
from fs_sro
where sro_stat = 'O' and whse = 'ABD'
group by stat_code
having stat_code in ('1 QUOTE WN', '2 PR & DEF', '2.1 PRTORD', '3 PRTS AVL', '4 SCHEDULD', '5 LABB JBC', '6 CSTB JBC', '7 WRKS CMP', '8 CSH APP', '9 TO B INV')
order by stat_code asc
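A common way to get zero counts for the missing codes is to left-join the full list of codes to the filtered rows, so unmatched codes still come back with 0. A minimal sketch (the hard-coded list mirrors the HAVING clause above; the VALUES constructor needs SQL Server 2008 or later, otherwise use a lookup table or UNION ALL):

select codes.stat_code, COUNT(f.stat_code) AS Stat_Count   -- COUNT of the joined column skips unmatched codes, giving 0
from (values ('1 QUOTE WN'), ('2 PR & DEF'), ('2.1 PRTORD'), ('3 PRTS AVL'), ('4 SCHEDULD'),
             ('5 LABB JBC'), ('6 CSTB JBC'), ('7 WRKS CMP'), ('8 CSH APP'), ('9 TO B INV')) as codes(stat_code)
left join fs_sro f
  on f.stat_code = codes.stat_code and f.sro_stat = 'O' and f.whse = 'ABD'
group by codes.stat_code
order by codes.stat_code asc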
Hello, thanks for helping me with this... I really appreciate it. I have a table called tblPatientDemographics with a number of columns. I would like to count the number of NULL values per record within my table.

tblPatientDemographics
PatientID   Age   Weight   Height   Race
1234567     20    155      <NULL>   Caucasian
8912345     21    <NULL>   <NULL>   <NULL>

In the first example above I want to display '1'. In the second example above I want to display '3'. Any help would be very much appreciated. Thanks! Chad
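A straightforward approach is to add 1 for each nullable column that is NULL; a minimal sketch against the four columns shown above:

select PatientID,
       case when Age    is null then 1 else 0 end +
       case when Weight is null then 1 else 0 end +
       case when Height is null then 1 else 0 end +
       case when Race   is null then 1 else 0 end as NullCount
from tblPatientDemographics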
I joined different tables and got a result like this:
result | process  | goal | date
------ | -------- | ---- | ----------
ok     | process4 | 1    | 12.10.2013
bad    | process1 | 2    | 13.10.2013
ok     | process1 | 4    | 12.12.2013
good   | process4 | 1    | 03.01.2014
ok     | process1 | 3    | 10.04.2013
bad    | process3 | 6    | 09.01.2014
bad    | process4 | 3    | 30.12.2013
best   | NULL     | NULL |
Now I want to count the results by counting the processes and group them by the result.
But it should count only the latest result per process, e.g. for goal "1" just "good" at 03.01.2014. I solved that with a subquery (date = SELECT MAX(...)).
But now the result "best" disappears, because that row has no date.
Secondly, I want to count results for a specific process, e.g. process4. Every goal has at most one process, with different dates, but one process can have more than one goal.
I want to have this result for process4:
count | result
----- | ------
1     | good
1     | bad
0     | ok
0     | best
But I got only:
count | result
----- | ------
1     | good
1     | bad
I have tried a lot, but nothing works.
The whole set of results (best, good, ok, bad) is stored in another table and I joined it.
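One way to keep results that have no matching rows is to start from the results table and left-join only the latest row per goal for the chosen process; a hedged sketch assuming a lookup table results(result) and a joined table process_log(result, process, goal, date) - both names are assumptions:

select r.result, count(p.goal) as [count]
from results r
left join (
    select pl.result, pl.goal
    from process_log pl
    where pl.process = 'process4'
      and pl.[date] = (select max(pl2.[date])            -- latest row per goal for this process
                       from process_log pl2
                       where pl2.process = pl.process
                         and pl2.goal = pl.goal)
) p on p.result = r.result
group by r.result

Because the left join starts from the results table, "best" and "ok" survive with a count of 0 even though they have no process4 rows.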
I am using SQL Server 2005. I have a DB of professors and information related to them. I created the cube; it consists of:

Measures:
Measure group Professors:
  Amount of projects (COUNT proj_id)
  Amount of publications (COUNT pub_id)
  Amount of e_books (COUNT book_id)
Measure group Projects:
  Distinct amount of projects (DISTINCT COUNT proj_id)
Measure group Publications:
  Distinct amount of publications (DISTINCT COUNT pub_id)
Measure group E_books:
  Distinct amount of e_books (DISTINCT COUNT book_id)

Calculated measures:
Amnt_Projects: iif([Measures].[Amount of projects] = 0 OR [Measures].[Amount of projects] = NULL, 0, [Measures].[Distinct amount of projects])
Amnt_Publications (similar to the above one)
Amnt_E_books (similar to the above one)

Dimensions:
dimPROFESSORS: prof_id, surname, name, gender
dimPROJECTS: proj_id, type name, name
dimPUBLICATIONS: pub_id, type name, name
dimE_BOOKS: book_id, name
Date_Projects: date_id, years
Date_Publications: date_id, years
Date_E_books: date_id, years
For example, when I browse the cube:

prof_id       Amount of projects   Distinct amount of projects   Amnt_Projects
1032          30                   1                             1
1070          90                   2                             2
1111          0                    1                             0
1137          0                    1                             0
1234          1404                 9                             9
1721          504                  7                             7
2661          85                   5                             5
...           ...                  ...                           ...
6999          20                   1                             1
9956          50                   5                             5
Unknown       (empty)              (empty)                       0
Grand Total   2421                 11                            11
Grand Total "11" is the amount of distinct projects + 1 (because of the unknown member). So the last column shows the right amount of projects per professor, but I want Grand Total to sum those values and show how many projects the professors have (it should be "59" for all professors). How could I get the right value to be shown in Grand Total?
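For reference, the total being asked for is the sum of the per-professor distinct counts rather than one distinct count across all professors. In T-SQL terms, against a hypothetical fact table factProjects(prof_id, proj_id), that value would be:

select sum(cnt) as GrandTotal
from (
    select prof_id, count(distinct proj_id) as cnt   -- distinct projects per professor
    from factProjects
    group by prof_id
) per_prof

In SSAS this kind of leaf-level aggregation is typically achieved by summing the calculated measure over the professor members (e.g. in a calculated member or SCOPE assignment) rather than relying on the distinct-count measure's own grand total.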
I have a pivot transform that pivots a batch type. After the pivot, each batch type has its own row with null values for the other batch types that were pivoted. I want to group two fields and max() the remaining batch types so that the multiple rows are displayed on one row. I tried using the aggregate transform, but since the batch type field is a string, the max() function fails in the package. Is there another transform, or can I use the aggregate transform another way so that the max() will work on a string?
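If the collapsing can be pushed into the database instead of the data flow, T-SQL's MAX() does work on strings, so one hedged alternative is to group in the source query; a sketch with hypothetical table and column names:

select KeyField1, KeyField2,
       max(BatchTypeA) as BatchTypeA,   -- MAX ignores NULLs, so the single non-NULL value per group survives
       max(BatchTypeB) as BatchTypeB
from PivotedBatches
group by KeyField1, KeyField2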
How do I count the number of values that exist in a row based on the values from an array of numbers? Basically, the array of numbers I want to look for is in row 1 of table [test 1], and I want to search for them and count the "out of" in table [test 2]. Excuse me for not using the easiest way to convey my question. In short, I have 10 numbers and would like to find how many of those numbers exist in each row (a short example of one approach is sketched below).
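A hedged T-SQL sketch of one approach, assuming the ten target numbers live in a single-column table [test 1](num) and each row of [test 2] has an id plus ten number columns n1..n10 (all names are assumptions):

select t2.id,
       (select count(*)
        from [test 1] t1
        where t1.num in (t2.n1, t2.n2, t2.n3, t2.n4, t2.n5,
                         t2.n6, t2.n7, t2.n8, t2.n9, t2.n10)) as matches   -- how many target numbers appear in this row
from [test 2] t2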
I have a DTSX package which reads values from a fixed-length text file using a data reader and writes some of the column values from the file to an Oracle table. We have used this DTSX several times without incident, but recently the process started inserting NULL values for some of the columns when there was a valid value in the source file. If we extract some of the rows from the source file into a smaller file (i.e. 10 rows which incorrectly returned NULLs) and run them through the same package, they write the correct values to the table, but running the complete file again results in the NULL values. As well, if we rerun the same file multiple times, the incidence of NULL values varies slightly and does not always seem to impact the same rows. I tried outputting data to a log file to see if I can determine what happens; no error messages are returned, but it seems that the NULL values appear after pulling in the data via the data reader. Has anyone seen anything like this before, or does anyone have a suggestion on how to get some additional debugging information around this error?
I have SQL Server 2012 SSIS, with an Excel source and an OLE DB destination. I have a problem importing the CustomerSales column. CustomerSales values look like 1000.00, 2000.10, 3000.30, NotAvailable, so decimal values and nvarchar values are mixed in one Excel column. This is a requirement for the solution. However, SSIS reads only the numeric values correctly and the nvarchar values are set to NULL. Why?
CREATE TABLE [dbo].[Import_CustomerSales](
    [CustomerId] [nvarchar](50) NULL,
    [CustomeName] [nvarchar](50) NULL,
    [CustomerSales] [nvarchar](50) NULL
) ON [PRIMARY]
Can somebody explain how I can assign a NULL value to a datetime type field in the Script Transformation Editor in a Data Flow task? In the script hereunder, Row.Datum1_IsNull is true, but Row.OutputDatum1 is still assigned the value '0001-01-01', which generates an error (not a valid datetime). All alternatives known to me (CDate("") or Convert.ToDateTime("") or Convert.ToDateTime(System.DBNull.Value)) were not successful. Leaving out the ELSE clause generates the following error: Error: Year, Month, and Day parameters describe an un-representable DateTime.
My query "select blah, blah, rank from tablewithscores" will return results that can legitimately hold NULLs in the rank column. I want to order on the rank column, but those NULLs should appear at the bottom of the list.
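A common trick is to sort on an IS NULL flag before the column itself, so NULLs fall to the end; a minimal sketch:

select blah, blah, [rank]
from tablewithscores
order by case when [rank] is null then 1 else 0 end,   -- non-NULL rows first
         [rank]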
We have SharePoint list which has, say, two columns. Column A and Column B.
Column A can have three values - red, blue & green.
Column B can have four values - pen, marker, pencil & highlighter.
A typical view of list can be:
Column A - Column B
red   - pen
red   - pencil
red   - highlighter
blue  - marker
blue  - pencil
green - pen
green - highlighter
red   - pen
blue  - pencil
blue  - highlighter
blue  - pencil
We are looking to create a report from the SharePoint list using SSRS which has the following view:
             red   blue   green
pen           2     0      1
marker        0     1      0
pencil        1     3      0
highlighter   1     1      1
We tried Sum but were not able to display it in a single row.
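If the list data can be queried as a table, the counts behind that matrix can be produced with conditional sums; a hedged T-SQL sketch assuming a table ListItems(ColumnA, ColumnB) - the table name is an assumption:

select ColumnB,
       sum(case when ColumnA = 'red'   then 1 else 0 end) as red,
       sum(case when ColumnA = 'blue'  then 1 else 0 end) as blue,
       sum(case when ColumnA = 'green' then 1 else 0 end) as green
from ListItems
group by ColumnB

In SSRS itself this shape usually comes from a matrix (tablix) with Column A as the column group, Column B as the row group, and a count in the data cell, rather than a Sum.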
I have an SSIS package that imports data from an Excel file, replaces any value in Excel that reads "NULL" with "", then writes the data to a couple of databases.
What I have discovered today is that I have two columns of dates, an admit date column and a discharge date column, and anywhere I have a null value in the discharge date column, I have to replace it with the value in the admit date column.
I have searched around online and tried a few things using the REPLACE function in Derived Columns, but no dice so far.
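For comparison, the substitution itself is just a null-coalesce; if it can be applied on the database side after (or instead of) the derived column, a minimal T-SQL sketch - AdmitDate, DischargeDate and the table name are assumptions:

select AdmitDate,
       isnull(DischargeDate, AdmitDate) as DischargeDate   -- fall back to the admit date when discharge date is NULL
from dbo.Admissions                                        -- hypothetical table name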
I've tried variations on a SELECT statement like the one below, but have been unable to find a way to count only those types per FileNo that have all PartNo completed (and to count all types per FileNo with a PartNo of 0 and a completed date, as they have no parts):
SELECT [FileNo], COUNT(DISTINCT [Type]) AS CountOfAPs
FROM APs
WHERE (completed IS NOT NULL)
GROUP BY [FileNo]
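One hedged approach to "count only the types whose parts are all completed" is to group per FileNo and Type first and keep only the fully completed groups; a sketch against the columns mentioned above (a single PartNo 0 row with a completed date passes the same test):

select FileNo, count(*) as CountOfAPs
from (
    select FileNo, [Type]
    from APs
    group by FileNo, [Type]
    having count(*) = sum(case when completed is not null then 1 else 0 end)   -- every row of this type is completed
) t
group by FileNo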
The table columns are like this:

NO   ProductNo   Area   In     Out
1    0001        US     NULL   NULL
2    0002        UK     NULL   Y
3    0003        FR     Y      NULL
4    0004        FR     Y      NULL
5    0005        UK     Y      NULL

I have a query that gets the result below:

Area   Count
US     1
UK     2
FR     2

The rows are grouped by Area and the Count column counts how many records the table has for each Area. The problem is the column "In&Out": I have to check whether "In" or "Out" is NULL, and if so add 1, so the result would look like this:

Area   Count   In&Out
US     1       1
UK     2       1
FR     2       0

Which syntax can I use for this problem? I think maybe I can use ISNULL, but I have no idea how to write the query... can you give me a hint? Thank you.
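The usual pattern is a conditional SUM next to the COUNT; a minimal sketch (the table name and the exact NULL test are assumptions - adjust the CASE condition to the rule you need, e.g. [In] IS NULL, or both columns NULL, or either column NULL):

select Area,
       count(*) as [Count],
       sum(case when [In] is null then 1 else 0 end) as [In&Out]   -- condition is an assumption; change as needed
from dbo.Products                                                  -- hypothetical table name
group by Area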
I am stuck on a problem: I am not sure how to go about writing a query that will return, as a percentage, the number of fields in a row that are null.
For instance, a row from my table: Row1 : field1 field2 field3
If field3 is empty or null, my query should return 67%.
So far I have gotten the number of fields: select count(1) from information_schema.columns where table_name='myTable'
I could loop through the fields, but I am sure there is a simpler way of doing it; I have seen something simpler in the past with some built-in SQL functions. I am using MS SQL 2005.
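A hedged sketch of one loop-free approach: count the NULL columns per row with CASE expressions and divide by the column count from information_schema (the three field names mirror the example above; invert the CASE tests if you want the populated percentage instead):

select 100.0 *
       ( case when field1 is null then 1 else 0 end
       + case when field2 is null then 1 else 0 end
       + case when field3 is null then 1 else 0 end )
       / (select count(*)
          from information_schema.columns
          where table_name = 'myTable') as PctNull
from myTable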
Total amount = 1000
salemancode1 = space
salemancode2 = Staff-99
salemancode3 = space
salemancode4 = staff-88
How can I write a single query statement to do this? We want to count how many salemancode values are not space, and divide the total amount by that number of salesmen.
total amount / (no_of_saleman) as commission; the result is 1000 / 2, so the commission is $500.
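A hedged sketch, assuming the four codes and the total amount are columns on one row of a hypothetical Orders table:

select TotalAmount /
       nullif( case when ltrim(rtrim(salemancode1)) <> '' then 1 else 0 end
             + case when ltrim(rtrim(salemancode2)) <> '' then 1 else 0 end
             + case when ltrim(rtrim(salemancode3)) <> '' then 1 else 0 end
             + case when ltrim(rtrim(salemancode4)) <> '' then 1 else 0 end, 0) as commission   -- NULLIF avoids divide-by-zero
from Orders   -- hypothetical table name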
I am trying to create totals of the different values of a certain expression in the report footer. Currently I have the expression in a group, which gives me a running subtotal of the 4 different values of the expression. Now I need 4 running totals of the 4 different value subtotals. I tried placing some code in the Report Properties, but I had a hard time trying to code Visual Basic within the editor.
Hi, I have a table and I want to fetch certain rows based on the value of a column. That column is nullable and contains NULL values. I used the following query:

SELECT Col_A FROM TABLE1 WHERE SOME_ID = 1317 AND Col_B NOT IN (8,9)

Here, Col_B contains NULL values too. I need to fetch all rows where Col_B is not 8 or 9. Now, if I use "NOT IN", it does not work. I tried reading up on it and got to know why it does not work. Even "NOT EXISTS" does not help. But I still have to fetch my values. How do I do that? Thanks & Regards, Jahanzeb
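Because a comparison with NULL yields UNKNOWN, rows with a NULL Col_B are dropped by NOT IN; a minimal sketch that keeps them is to test for NULL explicitly:

SELECT Col_A
FROM TABLE1
WHERE SOME_ID = 1317
  AND (Col_B NOT IN (8, 9) OR Col_B IS NULL)   -- keep rows where Col_B is NULL as well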
I have tried doing a search, as I figured this would be a common problem, but I wasn't able to find anything. I know that my SP is functional because when I use VWD to execute the query outside of the webpage, I get the correct results; however I have to ensure that a field is either entered, or set to <NULL>. In my SETs I want it to use the wildcards. What I want is to do a search (plenty of existing topics on that, however none were of help to me). If a field is entered, then it is included in the search; otherwise it should be ignored. In my VB I have the standard stored procedure call, passing in values to all of the parameters in the stored proc below:

CREATE PROCEDURE dbo.SearchDog
    @tagnum int,
    @ownername varchar(50),
    @mailaddress varchar(50),
    @address2 varchar(50),
    @city varchar(50),
    @telephone varchar(50),
    @doggender varchar(50),
    @dogbreed varchar(50),
    @dogage varchar(50),
    @dogcolour varchar(50),
    @dogname varchar(50),
    @applicationdate varchar(50)
AS
    IF @tagnum = -1 SET @tagnum = NULL
    SET @ownername = '%' + @ownername + '%'
    SET @mailaddress = '%' + @mailaddress + '%'
    SET @address2 = '%' + @address2 + '%'
    SET @city = '%' + @city + '%'
    SET @telephone = '%' + @telephone + '%'
    SET @dogcolour = '%' + @dogcolour + '%'
    SET @dogbreed = '%' + @dogbreed + '%'
    SET @dogage = '%' + @dogage + '%'
    SET @doggender = '%' + @doggender + '%'
    SET @dogname = '%' + @dogname + '%'
    SET @applicationdate = '%' + @applicationdate + '%'

    SELECT DISTINCT *
    FROM DogRegistry
    WHERE ( TagNum = @tagnum
         OR OwnerName LIKE @ownername
         OR MailAddress LIKE @mailaddress
         OR Address2 LIKE @address2
         OR City LIKE @city
         OR Telephone LIKE @telephone
         OR DogGender LIKE @doggender
         OR DogBreed LIKE @dogbreed
         OR DogAge LIKE @dogage
         OR DogColour LIKE @dogcolour
         OR DogName LIKE @dogname
         OR ApplicationDate LIKE @applicationdate )
      AND TagNum > 0
GO

TagNum is the primary key, if that makes a difference. On the webpage, it ONLY works when every field has been filled (and then it will only return 1 row, as it should, given the data entered). Debugging has shown that when nothing is entered it passes "". Any ideas?
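One common pattern for this kind of optional-filter search is to treat an empty or NULL parameter as "match everything" and AND the filters together instead of OR-ing all the LIKEs; a hedged sketch of the WHERE clause, shown for a couple of columns only:

SELECT DISTINCT *
FROM DogRegistry
WHERE (@tagnum    IS NULL OR TagNum = @tagnum)
  AND (@ownername IS NULL OR @ownername = '' OR OwnerName LIKE '%' + @ownername + '%')
  AND (@city      IS NULL OR @city = ''      OR City      LIKE '%' + @city + '%')
  -- ...repeat the same pattern for the remaining parameters
  AND TagNum > 0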
I am trying to retrieve data from two different tables. One of the tables has more than 20 columns some of which are null. I would like to retrieve data from both tables excluding the columns which have null values. How do I do this?
Would this take care of null values in either a.asset or b.asset?
SELECT convert(decimal(15,1),
       (sum(isnull(a.asset, 0)) / 1000.0) + (sum(isnull(b.asset, 0)) / 1000.0)) as total_assets
What's throwing me off is that there are multiple a.asset or b.asset rows for each unique ID. It seems to work, but I'm not following the logic too well. If I were doing this in another language, I would loop through, summing a.asset and b.asset wherever they are not null for each unique ID.
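For what it's worth, ISNULL(asset, 0) replaces each NULL with 0 before SUM adds the rows up, which is the set-based equivalent of that loop (SUM already skips NULLs; ISNULL mainly guards against an all-NULL group returning NULL). A hedged sketch of the per-ID version - the join and ID column names are assumptions:

select a.id,
       convert(decimal(15,1),
               (sum(isnull(a.asset, 0)) / 1000.0) + (sum(isnull(b.asset, 0)) / 1000.0)) as total_assets
from table_a a
join table_b b on b.id = a.id   -- hypothetical join
group by a.id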
How can I use a "Derived Column" to check whether a datetime value is null, and if it is null, insert 00/00/00 instead?
The background being that while using a "Derived Column" to change a column from (DT_DATE) to (DT_DBTIMESTAMP), every time I get a null value it sees it as an error.
And the column in particular has ~37K blank/null fields, so I'm getting a lot of errors.
So far I have tried to use something like
ISNULL([Column 34])
Or
SELECT ISNULL(ID, '00/00/0000') FROM [Column 34]
Or
SELECT ISNULL(au_id, '00/00/0000 00:00') AS ssn FROM [Column 34]
but none seems to work, [Column 34] being the offending column.
What I normally use is just a simple "(DT_DBTIMESTAMP)[Column 34]" in the expression column, which seems to work well, but here I get a lot of errors.
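Note that 00/00/0000 is not a representable datetime, so some real placeholder date (or a true NULL) has to be used; in the Derived Column the same idea would be a conditional (?:) expression. If the substitution can instead be done in the source query, a minimal T-SQL sketch - the 1900-01-01 placeholder and table name are assumptions:

select isnull([Column 34], '19000101') as Column34   -- '19000101' = 1900-01-01 placeholder, an assumption
from dbo.SourceTable                                 -- hypothetical table name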
I set up a new SQL database file, and in that file I allowed nulls. When I went through the code to save the record, the exception says it doesn't allow nulls.
Before I get too involved with SQL, is it a bad practice to use nulls?
If it is, what do you enter in place of the null value?
I need a query to return two values. One will be the total units and the other will be the total unique units. See example data below. It does not have to be one query; this will be in an SP, so I can keep it separate if I have to.
Total Units = 7 - easy to do by using count(). Total unique units = 4 - I cannot use GROUP BY as it would return multiple results, one per unit, which is not what we want.
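COUNT(DISTINCT ...) gives the unique count without a GROUP BY; a minimal sketch with assumed column and table names:

select count(UnitID)          as TotalUnits,        -- counts non-NULL unit values
       count(distinct UnitID) as TotalUniqueUnits   -- counts each unit value once
from dbo.Units   -- hypothetical table name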
Here is the dataset used by my report definition. The combination of barcode and order id is unique. The 'isDiscountedItem' field indicates whether the customer used a coupon to purchase an item at a lower price.
I want to group my report by department id, class id and barcode. Then, I want to count all distinct order ids for which there was at least one discounted item.
My report would produce the following output considering the above dataset:
Merchandise        Number of customers who used a coupon
--------------------------------------------------------
Department 1       2
  Class 1          2
    Barcode 123    2
    Barcode 789    1
Department 2       0
  Class 7          0
    Barcode 456    0
I've been looking at a possible solution using hash tables defined in the report code but I would like to find a 'cleaner' solution. Any help would be appreciated.
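If the counting can be pushed into the dataset query, a distinct conditional count per group avoids report-side code; a hedged T-SQL sketch using the field names mentioned above (the table name is an assumption):

select DepartmentId, ClassId, Barcode,
       count(distinct case when isDiscountedItem = 1 then OrderId end) as CouponCustomers   -- CASE yields NULL for non-discounted rows, which COUNT DISTINCT ignores
from dbo.ReportDataset   -- hypothetical dataset table/view
group by DepartmentId, ClassId, Barcode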
select a.Assignment_UniqueID as DeploymentID,
       a.AssignmentName as DeploymentName,
       a.StartTime as Available,
       a.EnforcementDeadline as Deadline,
       sn.StateName as LastEnforcementState, -----Required Column