where each letter represents a column. I need a SELECT statement that returns one row for each CP, P, ST combination that has an entry with a date within the last 6 months.
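A minimal sketch of one approach, assuming a placeholder table name MyTable and a placeholder date column EntryDate (the real names are not shown above): group on the three columns and keep only the groups whose newest entry falls within the last six months.

SELECT CP, P, ST
FROM MyTable                                              -- placeholder table name
GROUP BY CP, P, ST
HAVING MAX(EntryDate) >= DATEADD(month, -6, GETDATE());   -- EntryDate is a placeholder column name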
Hello everyone. I'm working on a little problem that's driving me nuts. I have two tables:

EQUIPMENT:
-EQID
-EQName
-EQblabla

QUALITYCHECK:
-QCID
-EQID (foreign key to EQUIPMENT)
-QCDATE
-QCACTION
-QCnextdate

They're needed to manage quality actions on some machines we own. Every year we do quality actions on every machine; afterwards, we insert a new QUALITYCHECK entry into our database, and QCnextdate is automatically generated as QCDATE + 365 days. So far, so easy. Now, how do I write a query against the DB that returns the NEWEST QUALITYCHECK entry for every machine? The result should look something like this: EQID | EQName | QCDATE | QCnextdate
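A minimal sketch of one way to do this, assuming SQL Server 2005 or later (for ROW_NUMBER) and the table and column names exactly as listed above:

SELECT EQID, EQName, QCDATE, QCnextdate
FROM (
    SELECT e.EQID, e.EQName, q.QCDATE, q.QCnextdate,
           ROW_NUMBER() OVER (PARTITION BY e.EQID ORDER BY q.QCDATE DESC) AS rn
    FROM EQUIPMENT e
    JOIN QUALITYCHECK q ON q.EQID = e.EQID
) x
WHERE rn = 1;   -- keep only the newest QUALITYCHECK row per machine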
Hello, I am running SQL Server 2000. I would like to know whether Microsoft Transact-SQL has a method for limiting the result set from a query in a way analogous to MySQL's LIMIT keyword, so that, for instance, if the result set contains 10,000 rows, then only the first 10 rows from the record set are output. Thank you, Best Regards, Neil
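A minimal sketch of the usual SQL Server 2000 equivalents, using a placeholder table named Orders: TOP limits a single query, while SET ROWCOUNT limits every subsequent statement until it is reset.

SELECT TOP 10 * FROM Orders ORDER BY OrderDate;   -- Orders/OrderDate are placeholder names

-- or, session-wide:
SET ROWCOUNT 10;
SELECT * FROM Orders ORDER BY OrderDate;
SET ROWCOUNT 0;   -- turn the limit back off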
We are trying to limit our query that returns items from our database. The query currently returns 32,000 records. We are trying to figure out an efficient way so we can request the 1st 50, or the 3rd 50, or the 5th 50 to display to the screen. We don't want to return the entire 32,000 and then limit what's displayed to the screen in ADO. We want the SELECT statement to only return 50 at a time. Any suggestions?
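A minimal sketch of one way to fetch an arbitrary page of 50 rows, assuming SQL Server 2005 or later and placeholder table/column names (Items, ItemId, ItemName): number the rows once and filter on the page you want.

SELECT ItemId, ItemName
FROM (
    SELECT ItemId, ItemName,
           ROW_NUMBER() OVER (ORDER BY ItemId) AS rn
    FROM Items                  -- placeholder table/column names
) AS paged
WHERE rn BETWEEN 101 AND 150;   -- the "3rd 50": rows 101-150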
I am trying to write a query that gives me the personal records of speed skaters on, e.g., the 500 metres. I do this with the query:
SELECT cdsDistance AS Distance,
       prsFirstName,
       prsLastName,
       MIN(crtFinalTime) AS MinTime
FROM tb....... INNER JOIN etc..
GROUP BY cdsDistance, prsFirstName, prsLastName
ORDER BY MIN(crtFinalTime)
In itself this works fine. However, there are complicating factors. Sometimes a speed skater has multiple PRs, meaning that he/she has clocked the same fastest time more than once.
If these times are achieved on multiple days, the first date is the official PR (meaning: "min of race date"). If they are raced on the same day, the first race is the PR (meaning: "min of distance number").
Changing the code to:
SELECT cdsDistance AS Distance,
       prsFirstName,
       prsLastName,
       MIN(crtFinalTime) AS MinTime,
       MIN(cdsStartDate) AS RaceDate,
       MIN(cdsDistanceNumber) AS DistanceNumber
FROM tb.......
GROUP BY cdsDistance, prsFirstName, prsLastName
ORDER BY MIN(crtFinalTime)
This gives me the wrong outcome because it gives me the "MIN" of every field, and they are not necessarily on the same row.
An option would be to calculate MIN(crtFinalTime) first; if a person then has more than one result, calculate the MIN of the date, and (if there is still more than one row) the MIN of the distance number.
That seems complicated, and I have the feeling there must be a better way (quite apart from the question of how to write that code).
Stacking subqueries in the FROM clause seems like an option, but could be costly time-wise. There are more than 10 million rows (and growing) to run through.
As an example, here are a few times:
Distance | First name | Last name | Time  | Date     | Distance nr.
500      | Yuya       | Oikawa    | 34.49 | 20131115 | 5
500      | Yuya       | Oikawa    | 34.49 | 20131115 | 3
500      | Yuya       | Oikawa    | 34.49 | 20131117 | 2
Yuya has 3 best times (34.49); 15-11-2013 is the first date, and on that day distance nr. 3 is the first distance raced. Therefore the 2nd row is the only row I would like to get in my end result.
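A minimal sketch of one way to apply those tie-breakers in a single pass, assuming SQL Server 2005 or later and the column names from the queries above (the FROM clause is a placeholder, since the real joined tables are not shown): rank each skater's races by time, then date, then distance number, and keep only the top-ranked row.

SELECT Distance, prsFirstName, prsLastName,
       crtFinalTime AS MinTime, cdsStartDate AS RaceDate, cdsDistanceNumber AS DistanceNumber
FROM (
    SELECT cdsDistance AS Distance, prsFirstName, prsLastName,
           crtFinalTime, cdsStartDate, cdsDistanceNumber,
           ROW_NUMBER() OVER (
               PARTITION BY cdsDistance, prsFirstName, prsLastName
               ORDER BY crtFinalTime ASC, cdsStartDate ASC, cdsDistanceNumber ASC
           ) AS rn
    FROM SkateResults          -- placeholder for the real FROM ... INNER JOIN ...
) x
WHERE rn = 1
ORDER BY MinTime;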
I'm trying to decide whether MS SQL will allow me to accomplish the following objectives at no cost, or whether I'd eventually have to pay for an MS SQL upgrade to accomplish my objectives.
I have big, unrealistic dreams. I want to create a humorous news blog into which I would post more than a dozen times a day. Most of the posts would have large photographs. Presumably, I'd archive the posts by subject, ranking, "most viewed," etc., using a database.
1) Will the space limitations of the MS SQL Express edition be an issue after a while?
2) Could I hire a web developer to help me from a remote location, once the website is large enough to warrant expansion? Or does the MS SQL Express edition allow only one user? I read something I didn't understand about CPU restrictions.
3) I'm confused because web hosts advertise the availability of MS SQL databases on their servers... so does that mean I wouldn't have to buy an upgrade if it became necessary? (I know, I'm shockingly uneducated.)
4) I'm going to buy Office 2007. Is it important to purchase a package that includes Microsoft Access, given my goals?
5) Any other thoughts, in plain English, on how the MS SQL Express edition imposes limitations... Basically, I don't understand how MS SQL Express might limit me down the road if the site were actually a success. What would have to happen before I would be forced to spend a lot of money on an upgrade later?
I'm almost completely new to computing. I've read a bunch of criticisms of MS SQL Express on internet forums that I didn't understand, but that really made me worried about my decision to go with Microsoft products and ASP.NET web hosts. (I understand some people have an irrational dislike of Microsoft, but there was A LOT of bashing.)
How do I write a query to get the current date, or the end-of-month date, if we pass year and month as input?
E.g., if today's date is 2015-09-29: if we pass year = 2015 and month = 09, we should get 2015-09-29; if we pass year = 2015 and month = 08, we should get 2015-08-31 (for previous months we should get the EOMONTH date, and for the current month we should get the current date).
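A minimal sketch of one way to do this, assuming SQL Server 2012 or later (for DATEFROMPARTS and EOMONTH):

DECLARE @year int = 2015, @month int = 9;
DECLARE @monthEnd date = EOMONTH(DATEFROMPARTS(@year, @month, 1));
DECLARE @today date = CAST(GETDATE() AS date);

-- current (or future) month: return today's date; earlier months: return the end of that month
SELECT CASE WHEN @monthEnd >= @today THEN @today ELSE @monthEnd END AS ResultDate;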
Please do help. I've been wrestling with this for about 3 hours and have read a bunch of forums and searched the tutorial lists... AARRGH!
Anyhow,
I have to paginate a datalist (and I really can't use a datagrid because of the layout, blame the bluddy graphic designer)
I want to return the top 8 rows, then the next 8 rows, and so on.
This is the sql string I have:

' retrieve pagination in order to construct the rest of the sql string
Dim startrec As Integer
If pageno = 1 Then
    startrec = 0
Else
    startrec = (pageno - 1) * pagesize
End If

' this builds the sql string, getting ONLY the records we need for this.
' Page size is a constant expressed on the base page;
' startrec is the record where I want to start in the db.
strsql = "select top " & pagesize & " * " & strsqlbasic & " and itemID>" & startrec & " order by itemnotes1 asc"
noresults.text = strsql & " <br> " & searchwhat & searchfor
strsqlbasic is constructed elsewhere and is just the 'from X where y = var' part.
Of course, this returns all records where the value of itemID is greater than the value of startrec. How would I change this so it returns the next 4 rows starting from the row number given by the value of startrec?
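A minimal sketch of one common fix under the same assumptions (the "from X where y = var" fragment and the column names come from the code above): instead of treating startrec as an itemID value, skip the first startrec rows with a nested TOP, which works on SQL Server 2000 and later. The paging is only reliable if the ORDER BY is deterministic, and building the string by concatenation remains open to SQL injection; parameterised queries would be safer.

-- example: pagesize = 8, startrec = 16 (page 3); skip the first 16 rows, return the next 8
SELECT TOP 8 *
FROM X                       -- placeholder for the real FROM/WHERE built in strsqlbasic
WHERE y = var
  AND itemID NOT IN (SELECT TOP 16 itemID
                     FROM X
                     WHERE y = var
                     ORDER BY itemnotes1 ASC)
ORDER BY itemnotes1 ASC;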
Oracle uses a pseudo-column, ROWNUM, to limit the number of rows returned by a query. Does MS SQL Server 6.5 provide a similar feature (or any indirect way of doing that)?
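A minimal sketch of the closest equivalent I know of on that version: SET ROWCOUNT caps the number of rows affected by every subsequent statement until it is reset (the table name below is just a placeholder).

SET ROWCOUNT 10          -- cap subsequent statements at 10 rows
SELECT * FROM authors    -- placeholder table name
SET ROWCOUNT 0           -- remove the cap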
I have a problem on Microsoft SQL Server 2000. I need to limit the amount of resources (e.g. CPU, memory) a query uses. Is there a way to do this in SQL Server 2000?
I'm trying to write a query where, if the value of one of the parameters is equal to "-1", it will not include one of the search criteria. For example, I have a param called "Gender": 0 = Male and 1 = Female. If the value of the "Gender" param is "-1", I'd like it to find either Males or Females.
Code (please note this doesn't compile):
SELECT [UserID], [Gender]
FROM [table_user]
WHERE Country = @Country
  AND IsOnline = @IsOnline
  IF (@Gender != -1) AND Gender = @Gender
Obviously this doesn't work, but I think it conveys the gist of what I'm trying to do. Does the secret lie in default parameters? Or is my IF syntax just flat out wrong?
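A minimal sketch of the usual pattern for an optional parameter, using the same table and parameters as above: fold the check into the WHERE clause with OR, so that -1 makes the gender condition always true. (With this pattern, OPTION (RECOMPILE) or dynamic SQL is sometimes added for better plans, but that is a separate concern.)

SELECT [UserID], [Gender]
FROM [table_user]
WHERE Country = @Country
  AND IsOnline = @IsOnline
  AND (@Gender = -1 OR Gender = @Gender);   -- -1 means "either gender"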
I am trying to convert some MySQL code to SQL Server and was amazed to see that SQL Server has no functionality to specify which rows of a SELECT statement to return. In MySQL it's e.g. "SELECT * FROM table LIMIT 20, 10", which returns ten rows starting from row 21. Using TOP in SQL Server only returns the top n rows, but what if I want rows 20 to 30? Is there an easy way that I have just missed, or does one really have to do something like "SELECT TOP 60 * FROM myTable WHERE id NOT IN (SELECT TOP 40 id FROM myTable);"?
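A minimal sketch of one alternative, assuming SQL Server 2005 or later and the myTable/id names from the post (the second column is a placeholder): number the rows with ROW_NUMBER and filter on the range you want, the equivalent of LIMIT 20, 10.

SELECT id, name                       -- "name" is just a placeholder column
FROM (
    SELECT id, name,
           ROW_NUMBER() OVER (ORDER BY id) AS rn
    FROM myTable
) AS x
WHERE rn BETWEEN 21 AND 30;           -- ten rows starting from row 21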
Hi Group! I am struggling with a problem of building a date range given the start date. Here is my example: I would need to get all the accounts opened between each month end and the first 5 days of the next month. For example, in the table created below, I would need accounts opened between '5/31/2005' and '6/05/2005'. And my query is not working. Can anyone help me out? Thanks a lot!

create table a (person_id int, account int, open_date smalldatetime)

insert into a values(1,100001,'5/31/2005')
insert into a values(1,200001,'5/31/2005')
insert into a values(2,100002,'6/02/2005')
insert into a values(3,100003,'6/02/2005')
insert into a values(4,100004,'4/30/2004')
insert into a values(4,200002,'4/30/2004')

--my query
Select * From a Where open_date between '5/31/2005' and ('5/31/2005'+5)
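A minimal sketch of one fix, using the same table: '5/31/2005' + 5 is arithmetic on a string and fails, but converting to a date and using DATEADD works.

SELECT *
FROM a
WHERE open_date BETWEEN '5/31/2005'
                    AND DATEADD(day, 5, CAST('5/31/2005' AS smalldatetime));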
I am trying to limit the number of records returned so that I can start at a given position and show "n" records. Here is an example query:
SELECT TOP 60 *
FROM Entries
WHERE Id NOT IN (
        SELECT TOP 45 Id
        FROM Entries
        ORDER BY Id DESC
      )
  AND Static = False
  AND Authorized = False
ORDER BY Id DESC
The application retrieves limited records for paginated results. This way we can access large databases, but only return rows for one screen.
For example: select rows 1-10, then 11-20, then 21-30, etc.
With MySQL we can use LIMIT and OFFSET and it works great:

MySQL: SELECT <columns> FROM <tables> [WHERE <condition>] [ORDER BY <columns>] [LIMIT [offset,] howmany]
Does anybody know how we can do something similar with MS SQL?
I know how to do this in MySQL; but I was hoping there was a way to do it in MS SQL.
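A minimal sketch of the closest direct analogue, assuming SQL Server 2012 or later (on 2005-2008, the ROW_NUMBER approach shown in the earlier posts does the same job); Entries and Id are taken from the example query above.

SELECT Id
FROM Entries
ORDER BY Id DESC
OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY;   -- rows 21-30 of the ordered result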
I want to be able to limit the number of rows an UPDATE or DELETE will affect, regardless of the WHERE clause. I want to do this as a stopgap in the event that there is a logical error in the WHERE clause that would make it affect more rows than is intended.
If I write a complex query that I know should drill down to affecting only one row, I just want to lock that in before I run it on the database and take the risk of damaging some data.
In MySQL you would just do something like this:
Code:
DELETE FROM table_name WHERE ... LIMIT 1;
The only thing close to LIMIT I could come up with for MS SQL was TOP but it seems to only work for SELECT.
Any ideas would be greatly appreciated.
I ask this because, if you screw up, manually restoring 1, 2 or even 5 rows is a lot easier than having to rescue a whole table of data (even if it is on a development server).
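A minimal sketch of what does exist here, assuming SQL Server 2005 or later: TOP is in fact allowed on UPDATE and DELETE as well, with the caveat that which rows it touches is not guaranteed unless you drive it through an ordered subquery or CTE. Table and column names below are placeholders.

-- cap the statement at one row regardless of the WHERE clause
DELETE TOP (1) FROM table_name
WHERE some_column = 'some value';

-- older alternative (also works on SQL Server 2000):
SET ROWCOUNT 1;
DELETE FROM table_name WHERE some_column = 'some value';
SET ROWCOUNT 0;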
CREATE TABLE [dbo].[Competition] (
    [Id]       INT            IDENTITY (1, 1) NOT NULL,
    [Name]     NVARCHAR (150) NOT NULL,
    [Remarks]  NVARCHAR (350) NULL,
    [Start]    DATETIME       NOT NULL,
    [IsActive] BIT            NULL,
    [End]      DATETIME       NOT NULL,
[Code] ....
Then here is my query with the problem below. The problem is that multiple EntryImages can relate to a single Entry. My goal is to select only one row for each Entry (using @EntrySelection), but when I join to EntryImages I always get back multiple rows when an Entry has multiple EntryImages.
DECLARE @EntrySelection int = 10

SELECT TOP (@EntrySelection)
    [dbo].Entries.Id,
    [dbo].Entries.CatchDate,
    [dbo].Entries.CompetitionId,
    [dbo].Competition.Name,
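A minimal sketch of one common fix, assuming the rest of the query joins Entries to EntryImages on an EntryId column and that EntryImages has Id and ImageUrl columns (all assumptions, since the full query is cut off): replace the join with OUTER APPLY so that at most one image row is attached to each entry.

DECLARE @EntrySelection int = 10;

SELECT TOP (@EntrySelection)
    e.Id, e.CatchDate, e.CompetitionId, c.Name,
    img.ImageUrl                                        -- placeholder column name
FROM dbo.Entries AS e
JOIN dbo.Competition AS c ON c.Id = e.CompetitionId
OUTER APPLY (
    SELECT TOP (1) i.ImageUrl
    FROM dbo.EntryImages AS i
    WHERE i.EntryId = e.Id                              -- assumed FK column
    ORDER BY i.Id                                       -- pick a deterministic image
) AS img;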
We currently have a routine that "forks" out (to use the Unix term) T-SQL commands to run asynchronously via SQL Agent jobs. Each T-SQL command gets its own job, and the job starts immediately after creation. Sometimes we can have too many of these jobs running at the same time, and the box crawls to a slow speed until the jobs finish up. Is there a way we can limit the number of active jobs running under the SQL Agent at one time? Or is there a way to limit the number of active (runnable) processes on SQL Server, in general?
Hello, I have a problem: I have a table with 100 rows, and when presenting it in a report I want to limit the rows to 10 per page, so that I get 10 pages. Please tell me the procedure for this. Thanks, Baba.
SQL Server 2000 SP4. I built a large DTS package that grabs a number of tables from an Oracle DB, does some scrubbing and date verification, and loads to a SQL Server DB. Most of the tables are full refresh and a few are incremental. Main DW: DwSQL. Staging Area: DwLoadAreaSQL. The DW is about 60 gigs. The Staging Area is about 80 gigs. This is all good. However, the log file for the staging area is 50 gigs and I'm trying to find ways to not require such a large log file. I tried adding a few "BACKUP LOG DwLoadAreaSQL WITH TRUNCATE_ONLY" statements in the DTS package but figured out that because it's 1 DTS package, it's all 1 transaction. I've thought about breaking it up into multiple DTS packages and truncating the log between running them, but was hoping to avoid this. To be clear, I know how to shrink DBs and log files... that's not the issue. Any ideas? Thanks.
I have another question. If I use the Foreach Loop Container (Foreach File Enumerator), it will select all the files in that folder. What if I want to select just 100 files (assuming 500 files in the folder)?
I have a matrix object in a report that sometimes runs off the side of the page based on the underlying data. Essentially if there are more than 11 columns it stretches out my page.
How can I fix this? Ideally, I would like to show only the Top 11 results but cannot seem to figure out how (or where, or on what data element) to properly set a filter.
Is there any way to limit the content that is placed into the Progress Tab during execution of a package in debug mode?
My problem is that I have a complex package that puts lots of info in there. This package is called in a loop from another package. Each time the package is called, the new info is added to the existing info in the Progress Tab. Eventually the instance of Dev Studio hangs when it gets too much content in that screen.
The only solution would be if I can limit the output content of that screen or turn it off.
I'm currently using SQL to find records older than today's date in the SSD_SED field. I'm having to update the date manually each day. Is there a way I can automate this?
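A minimal sketch of the usual approach, assuming SQL Server 2008 or later, a placeholder table name MyTable, and that SSD_SED is a date/datetime column: compare against GETDATE() instead of a hard-coded date, so the query never needs manual editing.

SELECT *
FROM MyTable                                   -- placeholder table name
WHERE SSD_SED < CAST(GETDATE() AS date);       -- "older than today", recalculated on every run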
I'm doing a network monitoring app where basically I run checks on servers every few minutes and log the data to a table. Naturally the table can get big, quite quickly. What I want is to be able to overwrite the table data at the start of each new day. Alternatively, roll the data up into daily or weekly packets and then overwrite the table data. How do I do this?
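A minimal sketch of one way to structure the daily roll-up, with all table and column names as placeholders since the real schema isn't shown; this would typically be scheduled as a SQL Agent job that runs just after midnight.

-- summarise the raw checks into a daily table, then clear the raw table
INSERT INTO MonitorDaily (CheckDate, ServerName, Checks, Failures)
SELECT CAST(CheckTime AS date), ServerName, COUNT(*),
       SUM(CASE WHEN CheckOk = 0 THEN 1 ELSE 0 END)
FROM MonitorLog
GROUP BY CAST(CheckTime AS date), ServerName;

TRUNCATE TABLE MonitorLog;   -- start the new day with an empty raw table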
Hello all, I have a table called jobs that has a total column which contains integers between 0 and 25. I have a deal worked out with a client that caps the total per job at 9. How do I sum every entry to get the month's total while capping any day whose total is over 9? I have tried looping it in PHP, which I can't get to work; it seems like it should be part of the query structure anyway.
Just to clarify: if I had 4 days last month whose totals were 4, 6, 10 and 17, I would need 10 and 17 to cap out at 9, so the total = 28, not 37.
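A minimal sketch of doing the cap in the query itself, using the jobs table and total column from the post (the date column and range are placeholders): cap each row with CASE before summing.

SELECT SUM(CASE WHEN total > 9 THEN 9 ELSE total END) AS capped_month_total
FROM jobs
WHERE job_date >= '2015-08-01' AND job_date < '2015-09-01';   -- job_date and the month range are placeholders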
Ideally, I'd like to move away from using SQL-based logins for our internal applications and take advantage of integrated security instead.
Defining AD groups and their permissions in SQL is simple and getting the application to work with that is not an issue.
Where I'm having difficulty, though, is in isolating the accessibility under integrated security. Because the SQL-based login was isolated from the Windows user, they could only get access to the SQL Server via our app; their normal Windows accounts had no access.
If we switch to use only Windows authentication, the user would be able to connect fine from our application and have rights to various tables. The issue is that they could also connect via Enterprise Manager, Excel, or any other tool. Is there any way to limit the exposure so that we can make use of AD for our access but further restrict connections based upon the application? I realize that this could be impersonated, but it's still better than nothing...
I have a small requirement in the SSIS error logging mechanism. Presently in my SSIS package I am using a File Connection Manager for creating a log file. I have a problem in this regard. Every time my DTS package is executed, the error log messages get appended to my error log at the OS level (say D:\error_messg.log). For this reason, whenever my DTS package is executed the size of the file keeps increasing, thereby eating up my disk space.
I have a requirement for this error logging mechanism: at any time my log file should not exceed 20 MB. Or can we remove log events older than, say, a week, or more than 2 days? Just something ensuring the log file does not eventually fill up the disk space.
How can we do this? Any suggestions are greatly appreciated.
1) How can I keep my package from running more than 1 instance at a time?
I tried changing "MAXCONCURRENT" to "1" in my DTEXEC command in the batch file; however, this doesn't limit the number of instances. (If I run the batch file twice, one after the other, I get 2 instances running simultaneously.)
2) What "executable files" is this definition referring to?
MAXCONCURRENT is defined as:
"Specifies the number of executable files that the package can run concurrently. The value specified must be either a non-negative integer, or -1. A value of -1 means that SSIS will allow a maximum number of concurrently running executables that is equal to the total number of processors on the computer executing the package, plus two."
I’m thinking about the best way to run these queries, which need to be run regularly.
The first query links two tables: one is a data table containing a unique person identifier and three activity fields, the second a lookup table with an activity_type linked to the activity in the data table.
Data Table:
PersonID
(lots of other fields)
Activity1
Activity2
Activity3
The ACTIVITY fields can contain data anywhere between all being NULL and all being complete.
Lookup Table:
ActivityID
ActivityDesc (which appears in Activity1 – Activity3)
ActivityType
I’d like to create a function which will create a recordset containing the Person ID and the Activity Type. I am unsure as to whether to do this in a way which will create one record for each person, or potentially 3 records for each person. This is how I have done the 3 records:
SELECT PersonID, Activity1 AS Sport, ActivityType
FROM dbo.tblActivity
LEFT JOIN dbo.tblLUActivityType
    ON dbo.tblActivity.Activity1 = dbo.tblLUActivityType.ActivityDesc
UNION
SELECT PersonID, Activity2 AS Sport, ActivityType
[Code] ...
And this is how I have done the 1 record:
SELECT ClientID,
       Activity1,
       (SELECT ActivityType FROM dbo.tblLUActivityType WHERE ActivityDesc = Activity1) AS ActivityType1,
       Activity2,
       (SELECT ActivityType FROM dbo.tblLUActivityType WHERE ActivityDesc = Activity2) AS ActivityType2,
       Activity3,
       (SELECT ActivityType FROM dbo.tblLUActivityType WHERE ActivityDesc = Activity3) AS ActivityType3
FROM dbo.tblActivity
LEFT JOIN dbo.tblLUActivityType
    ON dbo.tblActivity.Activity3 = dbo.tblLUActivityType.ActivityDesc
ORDER BY PersonID
The reason I would like to do this is because I need to create a stored procedure which returns one record per person with two fields (Person Id, ActivityType) which states their ActivityType depending on certain rules:
Rule 1: If any of Activity 1 – 3 are ‘O’ report the Person ID and ActivityType = ‘O’
Rule 2: Of the rest of the recordset, if any of Activity 1 – 3 are ‘N’ but none of Activity 1-3 are ‘O’ or ‘A’ then report the Person ID and ‘N’.
Rule 3: Of the rest of the recordset, if any of Activity 1 – 3 are ‘A’ but none of Activity 1-3 are ‘O’ or ‘N’ then report the Person ID and ‘A’.
Rule 4: Of the rest of the recordset, if any of Activity 1 – 3 are ‘A’ and one of them is ‘N’, then report the Person ID and ‘AN’.
At the end of this I’m looking for one recordset with two fields containing a PersonID and one of ‘O’, ‘A’, ‘N’ or ‘AN’. I can do the first part of this in any way necessary, as long as the second part returns the recordset I need.
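A minimal sketch of one way to implement those rules in a single pass, assuming SQL Server 2008 or later (for CROSS APPLY over a VALUES list) and the table/column names from the queries above; the CASE ladder below is my reading of the rules' precedence and may need adjusting to match the exact intent.

SELECT PersonID,
       CASE WHEN HasO = 1 THEN 'O'
            WHEN HasA = 1 AND HasN = 1 THEN 'AN'
            WHEN HasN = 1 THEN 'N'
            WHEN HasA = 1 THEN 'A'
       END AS ActivityType
FROM (
    SELECT a.PersonID,
           MAX(CASE WHEN lu.ActivityType = 'O' THEN 1 ELSE 0 END) AS HasO,
           MAX(CASE WHEN lu.ActivityType = 'A' THEN 1 ELSE 0 END) AS HasA,
           MAX(CASE WHEN lu.ActivityType = 'N' THEN 1 ELSE 0 END) AS HasN
    FROM dbo.tblActivity AS a
    CROSS APPLY (VALUES (a.Activity1), (a.Activity2), (a.Activity3)) AS act(ActivityDesc)
    LEFT JOIN dbo.tblLUActivityType AS lu ON lu.ActivityDesc = act.ActivityDesc
    GROUP BY a.PersonID
) AS flags;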