I have implemented a login audit on a particular system which catches the users login details, including their application logon name and NT username.
What I want to do is report on users who have logged on to the software using someone else's workstation (i.e. logged on to more than one workstation).
--Insert test data. Please note that loginName and ntUsername are rarely the same
DECLARE @logins TABLE (loginName varchar(50), ntUsername varchar(50), loginDate datetime) --table variable assumed; the post only shows the inserts
INSERT INTO @logins (loginName, ntUsername, loginDate)
SELECT 'Amy', 'Amy', '20070101' UNION
SELECT 'Amy', 'Amy', '20070102' UNION
SELECT 'Amy', 'Amy', '20070103' UNION
SELECT 'Bob', 'Bob', '20070101' UNION
SELECT 'Bob', 'Bob', '20070102' UNION
SELECT 'Bob', 'Amy', '20070103' UNION --Bob has logged on using 2 different NT accounts
SELECT 'Cal', 'Cal', '20070102' UNION
SELECT 'Cal', 'Amy', '20070102' UNION --So has Cal
SELECT 'Dom', 'Dom', '20070102' UNION
SELECT 'Dom', 'Dom', '20070102'
Any ideas? I just can't think of the logic needed to get what I want.
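One way to get at it from the audit table: group by the application login and keep the logins that appear with more than one NT username. A sketch against the test data above (it should return Bob and Cal):

SELECT loginName,
       COUNT(DISTINCT ntUsername) AS ntAccountsUsed
FROM @logins
GROUP BY loginName
HAVING COUNT(DISTINCT ntUsername) > 1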
This question is from a developer that I work with:
In SQL Server 7.0:
Do you know of a query or sp which will return the list of objects in a DB, sorted in descending order by last changed date?
I need to generate a list of all the stored procedures created or modified since a specified date. I can get the created ones, but I can't see how to get the modified ones.
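In SQL Server 7.0 the catalog only records a creation date; a last-modified date for objects does not appear until sys.objects.modify_date in SQL Server 2005. A sketch of the creation-date half, with the cut-off date as a placeholder:

SELECT name, crdate
FROM sysobjects
WHERE type = 'P'            -- 'P' = stored procedure
  AND crdate >= '19990101'  -- placeholder cut-off date
ORDER BY crdate DESC

For the "modified" half on 7.0, the usual workarounds are dropping and recreating the procedures (so crdate is refreshed) or logging changes yourself.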
I have the following query that I am trying to get working. What I want it to do is check for users who have sat a module and failed it, compare against a table to check that they have not passed the module a second time, and report only those who have failed with no passes. Query below.
SELECT DISTINCT dbo.PPS_SCOS.NAME, PPS_PRINCIPALS.NAME, pps_transcripts.date_created, score, max_score, status
FROM (dbo.PPS_SCOS JOIN dbo.PPS_TRANSCRIPTS ON dbo.PPS_SCOS.SCO_ID = dbo.PPS_TRANSCRIPTS.SCO_ID)
JOIN dbo.PPS_PRINCIPALS ON dbo.PPS_TRANSCRIPTS.PRINCIPAL_ID = dbo.PPS_PRINCIPALS.PRINCIPAL_ID
WHERE dbo.PPS_SCOS.NAME LIKE 'MTB-S001%'
AND PPS_PRINCIPALS.LOGIN LIKE '%test%'
AND dbo.PPS_TRANSCRIPTS.STATUS LIKE 'F'
AND PPS_TRANSCRIPTS.TICKET not like 'l-%'
AND dbo.PPS_PRINCIPALS.NAME NOT IN
(
    SELECT DISTINCT dbo.PPS_SCOS.NAME
    FROM (dbo.PPS_SCOS JOIN dbo.PPS_TRANSCRIPTS ON dbo.PPS_SCOS.SCO_ID = dbo.PPS_TRANSCRIPTS.SCO_ID)
    JOIN dbo.PPS_PRINCIPALS ON dbo.PPS_TRANSCRIPTS.PRINCIPAL_ID = dbo.PPS_PRINCIPALS.PRINCIPAL_ID
    WHERE dbo.PPS_TRANSCRIPTS.STATUS LIKE 'P'
    AND dbo.PPS_SCOS.NAME LIKE 'MTB-S001%'
    AND PPS_PRINCIPALS.LOGIN LIKE '%test%'
    AND PPS_TRANSCRIPTS.TICKET not like 'l-%'
)
ORDER BY pps_PRINCIPALS.NAME
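For the "failed with no pass" logic, a sketch that correlates the exclusion subquery on the principal and the course, rather than comparing PPS_PRINCIPALS.NAME against PPS_SCOS.NAME as the NOT IN above does; score, max_score, date_created and status are assumed to live on PPS_TRANSCRIPTS:

SELECT DISTINCT s.NAME AS CourseName,
       p.NAME AS PrincipalName,
       t.DATE_CREATED,
       t.SCORE,
       t.MAX_SCORE,
       t.STATUS
FROM dbo.PPS_SCOS AS s
JOIN dbo.PPS_TRANSCRIPTS AS t ON t.SCO_ID = s.SCO_ID
JOIN dbo.PPS_PRINCIPALS AS p ON p.PRINCIPAL_ID = t.PRINCIPAL_ID
WHERE s.NAME LIKE 'MTB-S001%'
  AND p.LOGIN LIKE '%test%'
  AND t.STATUS = 'F'
  AND t.TICKET NOT LIKE 'l-%'
  AND NOT EXISTS (SELECT 1
                  FROM dbo.PPS_TRANSCRIPTS AS tp
                  WHERE tp.PRINCIPAL_ID = t.PRINCIPAL_ID
                    AND tp.SCO_ID = t.SCO_ID
                    AND tp.STATUS = 'P'
                    AND tp.TICKET NOT LIKE 'l-%')
ORDER BY p.NAME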
If I use this to parse the current date to the right side of the time - right(getdate(),7) - I'll get something like 7:30AM.
I also have times stored in a column of a table, but as a string, not a datetime. It seems to compare okay, but when the time is, say, 1:30PM and I'm comparing whether it is greater than or equal to (>=) 7:30AM, it doesn't return.
I think it's ignoring the AM/PM meridian values and just comparing the numbers.
Is there a conversion I could use to do this? I've tried a military time conversion I found, but it converts to hrs, min, milliseconds: convert(char(8),(convert(datetime,current_timestamp,113)),114)
If anyone knows a good way to do this, I would appreciate it.
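The comparison becomes time-aware if both sides are converted to datetime, which parses strings like '7:30AM' and '1:30PM' (style 108 in CONVERT also gives a 24-hour hh:mm:ss string if a string comparison is preferred). Table and column names below are hypothetical:

DECLARE @cutoff varchar(10)
SET @cutoff = '7:30AM'

SELECT *
FROM dbo.MyTimes
WHERE CONVERT(datetime, TimeText) >= CONVERT(datetime, @cutoff)   -- both sides become 1900-01-01 hh:mm:00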
I've been reading a bit about full-text searches, phonetic values and match-queries and just don't know where to begin.
What I'm eventually going to do, is make procedures for matching names, finding records that are close matches and presenting them in a subform below the actual member that you look up.
E.g. if an employee looks up Sergej, he or she will also see Sergey, Sergei etc. below the membersheet.
BOL isn't very practical with its examples, and it's been about 7 years since I took my SQL Server 7.0 MS courses; plus I've primarily worked as an administrator up until last fall, not a developer. So where do I begin?
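As a simple place to start before full-text: the built-in SOUNDEX and DIFFERENCE functions give a rough phonetic match and would already group Sergej/Sergey/Sergei together. A sketch with hypothetical table and column names:

DECLARE @search varchar(50)
SET @search = 'Sergej'

SELECT MemberID, FirstName
FROM dbo.Members
WHERE DIFFERENCE(FirstName, @search) >= 3    -- 4 = strongest phonetic match, 0 = none
ORDER BY DIFFERENCE(FirstName, @search) DESC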
What is the best method for ignoring the time in datetime comparisons. Say I want all records on 07/08/1996 regardless of their time. Or all records between 01/01/1999 and 04/01/1999 even if one of the records on 04/01/1999 had a time of 16:32:22
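The usual approach is a half-open date range, which ignores the time portion and still lets an index on the column be used. Table and column names are hypothetical:

SELECT *
FROM dbo.Orders
WHERE OrderDate >= '19960708' AND OrderDate < '19960709'   -- everything on 07/08/1996

SELECT *
FROM dbo.Orders
WHERE OrderDate >= '19990101' AND OrderDate < '19990402'   -- 01/01/1999 through all of 04/01/1999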
This looks like a bug - hopefully somebody can explain what is actually happening. Using SQL Server 2000 SP4. Here's a repro script with comments:

/* repro table */
CREATE TABLE dbo.T (
    ID int NOT NULL,
    Time datetime NOT NULL,
    CONSTRAINT PK_T PRIMARY KEY (ID, Time)
)
GO

/* the problem does not happen without this index */
CREATE NONCLUSTERED INDEX IX_T ON dbo.T (Time)
GO

/*
sample row - note that
CAST('2006-04-08 13:14:58.870' AS smalldatetime) = '2006-04-08 13:15:00'
*/
INSERT INTO dbo.T (ID, Time)
VALUES (1, '2006-04-08 13:14:58.870')
GO

/*
This does not return any rows - why?
The comparison should evaluate to TRUE.
*/
SELECT *
FROM dbo.T
WHERE CAST(Time as smalldatetime) >= '2006-04-08 13:15:00'
GO

/*
This does return the row.
*/
SELECT *
FROM dbo.T
WHERE CAST(DATEADD(millisecond, 0, Time) as smalldatetime) >= '2006-04-08 13:15:00'
GO

DROP TABLE dbo.T
GO

The difference between the two SELECT statements is that the first one uses a non-clustered index seek, whereas the second one uses a scan of the same index.
I am trying to compare the ADDRESS fields between 2 tables in SQL Server 2008. However, when I run the comparison below, it throws the following error:
Query: select inner JOIN TABLE2 B ON COLLOAN = COLLOAN1 a.ADDRESS <> b.PropertyAdd
Error : Cannot resolve the collation conflict between "SQL_Latin1_General_Pref_CP437_CI_AS" and "SQL_Latin1_General_CP850_BIN" in the equal to operation.
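The usual fix is to force both sides of the comparison to a common collation with the COLLATE clause; the same treatment applies to the join columns if they differ too. A sketch following the fragment above - the first table's name is not given in the post, so TABLE1 is a placeholder:

SELECT a.ADDRESS, b.PropertyAdd
FROM TABLE1 AS a
INNER JOIN TABLE2 AS b
        ON a.COLLOAN = b.COLLOAN1
WHERE a.ADDRESS COLLATE DATABASE_DEFAULT
      <> b.PropertyAdd COLLATE DATABASE_DEFAULT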
I have a pretty intensive query that I need performance help on. There are ~1 million de-normalized 'adjustment rows' that I am checking about 20 different conditions on, but each of these conditions has multiple possible entries.
For example, one condition is 'what counties apply to each row?'. I could cross-join a table listing every county that applies to every row, which would mean 1 million rows x 3,000 potential counties. And for every one of these 20 conditions, I'd need to be joining tables for each of these lookups.
Instead, I was told to do a binary comparison of some sort, but I'm not exactly sure how to do it. This way I'm not doing any joins, just keeping a large binary string with bits representing each county.
Since for each query I know the exact county being searched, I can see if each row applies (along with each of the other conditions I must check against the other binary strings).
I accomplished this with a character string for county and a check like: AND Substring(County, @CountyIndex, 1) = '1' - but it is painfully slow when running all of these checks.
My hope is if the county in the lookup is 872, I can just scan the table, looking at bit #872 for the county field in each record, rather than joining huge tables for every one of these fixed fields I need to test.
My guess is the fastest way is some sort of binary string comparisons, but I can't find any good resources on the subject. PLEASE HELP!
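One way to read "bit #872" directly is to pack the county flags into a varbinary column (called CountyBits below, a hypothetical name), pick out the byte that the bit lives in, and AND it with the right power of two. This avoids the 1 million x 3,000 join, though the predicate still has to look at every row:

DECLARE @CountyId int
DECLARE @ByteIndex int
DECLARE @BitMask int

SET @CountyId = 872
SET @ByteIndex = ((@CountyId - 1) / 8) + 1     -- which byte of the varbinary holds this county's bit
SET @BitMask = POWER(2, (@CountyId - 1) % 8)   -- which bit within that byte

SELECT *
FROM dbo.AdjustmentRows                        -- hypothetical table name
WHERE CAST(SUBSTRING(CountyBits, @ByteIndex, 1) AS tinyint) & @BitMask > 0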
I have a column that has an expression with a RunningValue in it, a "Carrying Cost" for each month. I need to create another column that aggregates the monthly cost. I can't do a RunningValue on the RunningValue. I can't even do a Sum on the RunningValue.
I have a table that has 4 columns (id, projectno, date, price). I want to make a select that returns the sum per projectno. I used this query:
select projectno, sum(price) as sum from supplier group by projectno
but I want to include additional columns like id and date in the result, and it's giving this message: Column 'supplier.id' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
Is there a better way to do this without joining the main table with the upper select query? Best Regards
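If the server is SQL Server 2005 or later, a windowed SUM avoids the join back to the main table entirely; it repeats the per-project total on every detail row (column names taken from the post):

select id, projectno, date, price,
       sum(price) over (partition by projectno) as projectTotal
from supplier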
Hi, we have a client who gives us their invoices in a flat file format, and we import it into a SQL Server table. Nothing is normalized - everything is repeated in every record. The fields are:
customerNumber, Invoice_number, PO_number, Qty, Description, Line_number, Line_total, Freight, Tax, Invoice_date
So if an order has 10 line items, the header information (invoice number, PO number, invoice date) is repeated on each of the lines.
I am writing a query to show the following: order number, invoice total, date.
select invoice_no, sum(line_total + freight + tax) as invoiceTotal, customerNumber, Invoice_date
from invoices
group by invoice_no, Invoice_date, customerNumber
This works great - for each invoice I get the invoice number, InvoiceTotal, and date.
Then I was asked to add the PO number - this is where I can't get it right. When I added "PO_number" to the query, I got two lines for each invoice:
select invoice_no, sum(line_total + freight + tax) as invoiceTotal, customerNumber, Invoice_date, PO_number
from invoices
group by invoice_no, Invoice_date, Sold_To_Cust_No, PO_number
Please help - I need to end up with: invoice_no, invoiceTotal, customerNumber, Invoice_date and PO_number (sequence does not matter). Thanks
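If PO_number is repeated on every line of an invoice, adding it to the GROUP BY should not split the invoice, so getting two rows per invoice usually means it varies between lines (for example, blank on some of them). A sketch that keeps one row per invoice either way, using the column names above:

select invoice_no,
       max(PO_number) as PO_number,    -- one value per invoice even if some lines are blank
       sum(line_total + freight + tax) as invoiceTotal,
       customerNumber,
       Invoice_date
from invoices
group by invoice_no, customerNumber, Invoice_date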
I am attempting to write a query that will return aggregate totals from two different tables. The problem is that the TotalForecast totals are way too high. How do I write a query to obtain the correct totals?

Table 1 - dbo.QM_Results
Columns - dbo.QM_Results.Special8, dbo.QM_Results.SessionName, dbo.QM_Results.PersonNumber

Table 2 - dbo.PM_ForecastView
Columns - dbo.PM_ForecastView.Hierarchy, dbo.PM_ForecastView.Forecast

Select substring(dbo.QM_Results.Special8,0,6) AS Hierarchy,
       substring(dbo.QM_Results.SessionName,0,11) As CourseCode,
       count(dbo.QM_Results.PersonNumber) TotalAssociates,
       sum(dbo.PM_ForecastView.Forecast) TotalForecast
From dbo.QM_Results
INNER JOIN dbo.PM_ForecastView ON dbo.PM_ForecastView.Hierarchy = substring(dbo.QM_Results.Special8,0,6)
where SessionMid in ('96882139', '23620891', '45077427', '29721437')
AND substring(dbo.QM_Results.Special8,0,6) in ('EZHBA')
Group By substring(dbo.QM_Results.Special8,0,6),
         substring(dbo.QM_Results.SessionName,0,11)

Sample of data returned with my current query:

Hierarchy  CourseCode  TotalAssociates  TotalForecast
EZHBA      CARD167200  1179             141480
EZHBA      CARD167201  1416             169920
EZHBA      CARD167202  1119             134280
EZHBA      CARD167204  99               11880

Results when I run the aggregate queries separately:

Actual total taken
Hierarchy  CourseCode  TotalTaken
EZHBA      CARD167200  393
EZHBA      CARD167201  472
EZHBA      CARD167202  373
EZHBA      CARD167204  33

Forecasted total taken
Hierarchy  CourseCode  Forecast
EZHBA      CARD167200  999
EZHBA      CARD167201  900
EZHBA      CARD167202  800
EZHBA      CARD167204  800
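The inflated TotalForecast is the classic sign of a fan-out join: every QM_Results row matches every PM_ForecastView row for the same Hierarchy, so the Forecast is summed once per associate. A sketch that aggregates each table in its own derived table before joining, assuming PM_ForecastView rolls up to one total per Hierarchy (add CourseCode to that derived table if the view carries it):

SELECT r.Hierarchy,
       r.CourseCode,
       r.TotalAssociates,
       f.TotalForecast
FROM (SELECT substring(Special8,0,6)    AS Hierarchy,
             substring(SessionName,0,11) AS CourseCode,
             count(PersonNumber)         AS TotalAssociates
      FROM dbo.QM_Results
      WHERE SessionMid IN ('96882139', '23620891', '45077427', '29721437')
        AND substring(Special8,0,6) IN ('EZHBA')
      GROUP BY substring(Special8,0,6), substring(SessionName,0,11)) AS r
JOIN (SELECT Hierarchy, SUM(Forecast) AS TotalForecast
      FROM dbo.PM_ForecastView
      GROUP BY Hierarchy) AS f
  ON f.Hierarchy = r.Hierarchy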
Does anyone know how to make a query and use an aggregate function? This is my current code... any help would be great.
SELECT tblTopic.Topic_ID, tblTopic.Subject, MAX(tblThread.Message_Date) AS MessageDate, tblThread.Message
FROM tblThread INNER JOIN tblTopic ON tblThread.Topic_ID = tblTopic.Topic_ID
WHERE tblThread.Message_Date LIKE '%' + @fldGenus + '%'
GROUP BY tblTopic.Topic_ID, tblTopic.Subject, tblThread.Message
Also, how can I limit the query to only bringing up 5 records? I'm trying to get a datagrid to show the 5 most recent forum posts for a particular category. Thanks.
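A sketch of a TOP 5 version; tblThread.Message is left out because grouping on it would return one row per message instead of one per topic, and the genus filter is assumed to belong on a text column such as tblTopic.Subject rather than on Message_Date:

SELECT TOP 5
       tblTopic.Topic_ID,
       tblTopic.Subject,
       MAX(tblThread.Message_Date) AS MessageDate
FROM tblThread
INNER JOIN tblTopic ON tblThread.Topic_ID = tblTopic.Topic_ID
WHERE tblTopic.Subject LIKE '%' + @fldGenus + '%'
GROUP BY tblTopic.Topic_ID, tblTopic.Subject
ORDER BY MessageDate DESC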
I have a table that is used for employee evaluations. There are six questions that are scored either 1, 2, 3, 4, or 5. I want to tally the responses on a page, but I wonder if I can do it without 35 separate calls to the database (I also want to get the average response for each question). I know I can do "SELECT COUNT(intWorkQuality) AS Qual1 FROM dbo.Summer_Project_Req WHERE intWorkQuality = '1' " and then "SELECT COUNT(intWorkQuality) AS Qual2 FROM dbo.Summer_Project_Req WHERE intWorkQuality = '2' " and so on. But can I somehow do the aggregating at the page level, and just refer back to a datasource that uses a generic statement like "SELECT intWorkQuality, intDepend, intAnalyze, intWrite, intOral, intCompatibility FROM dbo.Summer_Project_Req"? If I can, I am not sure what type of control would be best to use or what syntax to use to write the code-behind. I would like the results to be displayed in a grid format. Thanks in advance for your help.
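The whole tally can come back in one round trip by pivoting with SUM(CASE ...). The sketch below covers one question (intWorkQuality); the same six expressions can be repeated for the other five columns, and the single result row can then be bound to a grid:

SELECT SUM(CASE WHEN intWorkQuality = 1 THEN 1 ELSE 0 END) AS Qual1,
       SUM(CASE WHEN intWorkQuality = 2 THEN 1 ELSE 0 END) AS Qual2,
       SUM(CASE WHEN intWorkQuality = 3 THEN 1 ELSE 0 END) AS Qual3,
       SUM(CASE WHEN intWorkQuality = 4 THEN 1 ELSE 0 END) AS Qual4,
       SUM(CASE WHEN intWorkQuality = 5 THEN 1 ELSE 0 END) AS Qual5,
       AVG(CAST(intWorkQuality AS decimal(4,2)))           AS QualAvg
FROM dbo.Summer_Project_Req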
I was doing a SUM on my returned rows and I found that what I really want is an aggregate bitwise OR across all the returned rows. Do you know what the function for that is?
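SQL Server has no built-in OR() aggregate, but each bit of the result can be recovered with MAX and added back together. A sketch assuming an int flag column; extend the pattern with more power-of-two masks as needed (table and column names are hypothetical):

SELECT MAX(Flags & 1) + MAX(Flags & 2) + MAX(Flags & 4) + MAX(Flags & 8)
     + MAX(Flags & 16) + MAX(Flags & 32) + MAX(Flags & 64) + MAX(Flags & 128) AS FlagsOrAll
FROM dbo.MyTable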
I have two tables: tb1 with item and qtyOnHand, and a second table tb2 with item and qtyOrdered. I am trying without success to make this happen:

select sum(onHand - Ordered)
from (select sum(qtyOnHand) from tb1 where item = RD35 group by item) as onHand,
     (select sum(qtyOrdered) from tb2 where item = RD35 group by item) as Ordered

I kind of gathered it would work based on this http://weblogs.asp.net/jgalloway/archive/2004/05/19/135358.aspx

I have also tried this:

select tb1.item
from (select sum(qtyOnHand) from tb1 where item = RD35 group by item) as onHand,
     (select sum(qtyOrdered) from tb2 where item = RD35 group by item) as Ordered,
     sum(onHand - Ordered) as available
from tb1
where tb1.item = RD35

Any ideas? There are multiple rows of each item in each table; tb1 is inventory with several different locations and tb2 is an orders table.
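A sketch of the two-derived-table version: aggregate each table first, give the totals column aliases, and then subtract (a LEFT JOIN keeps the item even when there are no orders). Table and column names follow the post:

SELECT onHand.item,
       onHand.totalOnHand - ISNULL(Ordered.totalOrdered, 0) AS available
FROM (SELECT item, SUM(qtyOnHand) AS totalOnHand
      FROM tb1
      WHERE item = 'RD35'
      GROUP BY item) AS onHand
LEFT JOIN (SELECT item, SUM(qtyOrdered) AS totalOrdered
           FROM tb2
           WHERE item = 'RD35'
           GROUP BY item) AS Ordered
  ON Ordered.item = onHand.item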
What I'm trying to solve: I have an application that generates SQL queries, and sometimes uses DISTINCT where the result set has no dupe rows. In terms of database resources, I'm trying to figure out if it's worth it to change the app to be smart enough not to use DISTINCT where it won't serve any purpose, or whether to let it do the DISTINCT and save added complexity in the query-building application. I.e. what is the cost of DISTINCT where there are no dupe rows?
What I want to know: Can someone explain how the stream aggregate operator actually goes about doing its work?
Does this always create a temp table for sorting and discarding duplicates (for DISTINCT)? If the answer is "no or sometimes", how does it do so in the case where a temp table is not involved? I noticed that the estimated I/O for this operator was zero for some queries I wrote against pubs. Does this mean that the optimizer believes the temp table needed will fit in memory and creates it in-memory? Or does the estimated I/O figure not include disk writes for work tables?
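One way to check this empirically rather than from the estimates, assuming the pubs database mentioned above: STATISTICS PROFILE shows whether the plan uses a Stream Aggregate (which needs sorted input but no work table) or a Sort/Hash, and STATISTICS IO lists a 'Worktable' entry when tempdb actually gets involved.

USE pubs
SET STATISTICS PROFILE ON
SET STATISTICS IO ON

SELECT DISTINCT au_lname FROM dbo.authors

SET STATISTICS IO OFF
SET STATISTICS PROFILE OFF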
I was told that on Oracle there's something called an Aggregate Navigator which should be capable of changing the table you're addressing in a query to another table (with aggregate data) and in this way optimize performance in a data warehousing environment.
I need to run a query to get the following result (by carrier and for each calc_date, calculate the percentage of all individuals who have an rcf greater than 0.73):
carrier, calc_date, count of individuals with rcf > 0.73, count of all individuals, percentage of individuals with rcf greater than 0.73.
Does anyone have an idea of how to achieve that result?
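A sketch using conditional aggregation; the source table name is hypothetical, the column names are taken from the post:

SELECT carrier,
       calc_date,
       SUM(CASE WHEN rcf > 0.73 THEN 1 ELSE 0 END) AS ind_over_073,
       COUNT(*) AS ind_total,
       100.0 * SUM(CASE WHEN rcf > 0.73 THEN 1 ELSE 0 END) / COUNT(*) AS pct_over_073
FROM dbo.rcf_scores
GROUP BY carrier, calc_date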
How can I aggregate a top 5 count across two satellite tables?
E.g. the Orders and Downloads tables each have multiple entries for the same customer ID. I would like to count the orders and add them to the downloads count (e.g. 5 orders added to 10 downloads gives 15 as the total for this customer) to get a total 'site activity' result, from which I would like to select the top 5.
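A sketch, assuming each table carries a CustomerID column: UNION ALL the two tables, count per customer, and keep the five busiest.

SELECT TOP 5
       CustomerID,
       COUNT(*) AS SiteActivity
FROM (SELECT CustomerID FROM dbo.Orders
      UNION ALL
      SELECT CustomerID FROM dbo.Downloads) AS activity
GROUP BY CustomerID
ORDER BY SiteActivity DESC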
I have three tables: tblSchedule, tblResource and tblEmployeeName. In tblSchedule there are ScheduleID, ResourceID and EmployeeID. In tblResource there are ResourceID and ResourceName. In tblEmployeeName there are EmployeeID, EmployeeFName and EmployeeLName. I want a report that shows how many times each resource has been reserved by each employee, looking like the following:
ResourceName, EmployeeFName, EmployeeLName (or use EmployeeName), Number of records.
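A sketch built from the three tables as described:

SELECT r.ResourceName,
       e.EmployeeFName,
       e.EmployeeLName,
       COUNT(*) AS NumberOfRecords
FROM tblSchedule AS s
JOIN tblResource AS r ON r.ResourceID = s.ResourceID
JOIN tblEmployeeName AS e ON e.EmployeeID = s.EmployeeID
GROUP BY r.ResourceName, e.EmployeeFName, e.EmployeeLName
ORDER BY r.ResourceName, NumberOfRecords DESC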
I need to find an aggregate for several fields in a row e.g. Max(date1, date2, ..., dateN)
I can pass this to a delimited string, pass the string to an UDF that returns a table and run Max(tablefield) on that UDF
Unfortunately I can only get this working for 1 delimited string at a time
Ideally I would want to include the function in a SELECT statement, e.g. something like
SELECT t1.a, dbo.MaxOfFieldValues(t1.d1+','+t1.d2+...+','+t1.dN) FROM t1
I got it working with the following two udfs, but I am sure visitors here have solved this a bit smarter:
ALTER Function [dbo].[MaxOfFieldValues]
(
    @ListOfValues varchar(8000),
    @delimiter varchar(10) = ','
)
RETURNS VARCHAR(8000)
AS
BEGIN
    --Need to get the maximum changedate first
    --pass the fields as one value (a delimited string)
    --and calc the max
    declare @result varchar(8000)
    declare @remainder varchar(8000)
    set @remainder = @ListOfValues

    declare @NoOfItems int --items = delimiters + 1
    SET @NoOfItems = (len(@ListOfValues) - Len(Replace(@ListOfValues, @delimiter, '')) / Len(@ListOfValues)) + 1

    declare @counter int
    set @counter = 1
    set @result = dbo.TakePart(@remainder, @delimiter, @counter)

    WHILE @counter <= @NoOfItems
    BEGIN
        set @counter = @counter + 1
        IF @result < dbo.TakePart(@remainder, @delimiter, @counter)
        BEGIN
            SET @result = dbo.TakePart(@remainder, @delimiter, @counter)
        END
    END

    RETURN (@result)
END

ALTER FUNCTION [dbo].[TakePart]
(
    @param varchar(8000),
    @delimiter varchar(10),
    @NumPart int
)
RETURNS varchar(8000)
AS
BEGIN
    --Note: maybe smarter to whack the delimiter to the end of the string to avoid the IF statement
    declare @result varchar(8000)
    declare @remainder varchar(8000)
    declare @counter int
    set @result = ''
    set @remainder = @param
    set @counter = 1

    WHILE @counter < @NumPart
    BEGIN
        SET @remainder = SUBSTRING(@remainder, CHARINDEX(@delimiter, @remainder, 1) + Len(@delimiter), 8000)
        SET @counter = @counter + 1
    END

    IF @counter > (len(@param) - Len(Replace(@param, @delimiter, '')) / Len(@delimiter))
    BEGIN
        SET @result = @remainder
    END
    ELSE
    BEGIN
        SET @result = LEFT(@remainder, CHARINDEX(@delimiter, @remainder, 1) - 1)
    END

    RETURN (@result)
END
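For a fixed, small number of fields the whole thing can also be done inline with a CASE expression; the sketch below assumes three columns d1, d2, d3 on t1 and that none of them are NULL. (On SQL Server 2008+ a CROSS APPLY over a VALUES list scales better to many columns, and SQL Server 2022 adds GREATEST().)

SELECT t1.a,
       CASE WHEN d1 >= d2 AND d1 >= d3 THEN d1
            WHEN d2 >= d1 AND d2 >= d3 THEN d2
            ELSE d3
       END AS MaxOfFieldValues
FROM t1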
I have a table called sample and I have the following requirement: I need sum(credit) grouped by ssn number.
One special condition is as follows:
For each distinct ssn, if "flag" has the same CX value, then out of all the records with that CX value, the highest "credit" value is added to the sum for that "ssn" and the rest are ignored. While adding "credit" to the sum, if the "credit" value is equal to zero then the "sum" value is used for summing, else the "credit" value is used. Can anyone help me with this logic? I have tried, but I couldn't embed the conditions in the SQL statement.
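This is only one reading of those rules, sketched under the assumption that the table is dbo.sample with columns ssn, flag, credit, and a separate fallback column referred to here as [sum]; within each (ssn, flag) group it keeps the highest credit, substituting that group's highest [sum] when the credit is zero:

SELECT ssn,
       SUM(CASE WHEN best_credit = 0 THEN best_sum ELSE best_credit END) AS total
FROM (SELECT ssn,
             flag,
             MAX(credit) AS best_credit,   -- highest credit per (ssn, flag) group
             MAX([sum])  AS best_sum       -- fallback value; column name assumed from the post
      FROM dbo.sample
      GROUP BY ssn, flag) AS g
GROUP BY ssn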