Retrieving First N Rows From A Large Query
Feb 15, 2007
Sriram writes "Hi,
I want to retrieve only the first n rows from a query which returns a large number of rows.
Say,
select empno, name from emp where deptno=100
returns 1000 rows.
I want to improve the query so that it returns only the first 10 rows and not 1000 rows.
Thanks in Advance,
Sriram."
View 1 Replies
Jun 28, 2002
Message:
Hi,
Suppose I execute a query,
Select * from emp,
and get 100 rows of results. I want to write a SQL query similar to the one available in the MySQL database, where I can specify the starting row and the number of rows I want,
something like,
select * from emp LIMIT 10,20
I want all the records from the 10th row to the 20th row.
This would be helpful for me to do paging in the front end, where I can navigate with next/previous buttons and fetch the corresponding records.
I want to do something like Google's previous/next screens.
So I am trying to limit the number of rows fetched from a query,
something like,
select * from emp where startRowNum=10 and NoOfRecords = 20
so that I can get rows from 10 to 30.
Has anyone done this? Please let me know.
Cheers,
Prashant.
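SQL Server has no LIMIT clause, but here is a sketch of one common SQL Server 2000-era workaround, assuming empno is a unique ordering key. It skips the first 10 rows and takes the next 20, matching MySQL's LIMIT 10,20 semantics (rows 11-30):
SELECT *
FROM emp
WHERE empno IN
(
    SELECT TOP 20 empno
    FROM emp
    WHERE empno NOT IN (SELECT TOP 10 empno FROM emp ORDER BY empno)
    ORDER BY empno
)
ORDER BY empno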
View 1 Replies
View Related
Sep 11, 2001
Hello,
I have a web application which retrieves values from a table and the table has grown really huge now. This has slowed down the performance and I need to do something else instead of the select statement that I currently have. Can cursors be a solution?
Any help/suggestions would be greatly appreciated.
Thanks,
Subha
View 2 Replies
View Related
Sep 30, 2005
I need to retrieve a large amount of data from the SQL Server database, make changes to one field, and put the data back using a console application. How do I do it?
View 3 Replies
View Related
May 5, 2003
Hi,
My application needs to retrieve data from a table which has more than 15 lakh (1.5 million) records. The records keep increasing by thousands every 15 days.
Is there any way I can reduce the retrieval time? Basically I have a select statement with a few conditions and a clause for the IDs of these records.
View 2 Replies
View Related
Jun 3, 2008
I have the following sql:
SELECT DISTINCT patient.patientID, patientFirstName, patientLastName, patientDOB, patientGender, completed_date
FROM patient
LEFT JOIN patient_record ON patient_record.patientID = patient.patientID
WHERE (sub_categoryID = 4 OR patient_record.allocated = 4)
AND (patient_status = 1 OR patient_status = 2 OR patient_status = 5)
GROUP BY patient.patientID, patientFirstName, patientLastName, patientDOB, patientGender, completed_date
This brings up duplicate records. My aim is to return distinct records, and if I take out the other returned fields after patientID and use the following SQL:
SELECT DISTINCT patient.patientID
FROM patient
LEFT JOIN patient_record ON patient_record.patientID = patient.patientID
WHERE (sub_categoryID = 4 OR patient_record.allocated = 4)
AND (patient_status = 1 OR patient_status = 2 OR patient_status = 5)
GROUP BY patient.patientID
this brings up distinct results. But I need to retrieve the other fields from the database, i.e. patientFirstName and patientLastName.
Please can you help?
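A hedged sketch of one way to keep the name columns while returning one row per patient: number the joined rows per patientID with ROW_NUMBER() (SQL Server 2005+) and keep only the first. Picking the latest completed_date per patient is an assumption:
SELECT patientID, patientFirstName, patientLastName, patientDOB, patientGender, completed_date
FROM (
    SELECT patient.patientID, patientFirstName, patientLastName, patientDOB, patientGender, completed_date,
           ROW_NUMBER() OVER (PARTITION BY patient.patientID ORDER BY completed_date DESC) AS rn
    FROM patient
    LEFT JOIN patient_record ON patient_record.patientID = patient.patientID
    WHERE (sub_categoryID = 4 OR patient_record.allocated = 4)
      AND (patient_status = 1 OR patient_status = 2 OR patient_status = 5)
) AS ranked
WHERE rn = 1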
View 2 Replies
View Related
Sep 21, 2007
I have a table like this.
Depositors Table
Value(int) StartDate(Date) AccountID(int)
I want to create a report from this table. the report should look like this.
Value    No of Accounts    Average Value
For Yesterday
For Last 7 days
For Last 30 days
Please can anyone write a simple query for this?
Thanks
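A minimal sketch, assuming the three periods are measured back from today and using the column names from the post:
SELECT 'Yesterday' AS Period,
       SUM(Value)                AS Value,
       COUNT(DISTINCT AccountID) AS [No of Accounts],
       AVG(1.0 * Value)          AS [Average Value]   -- 1.0 * forces a decimal average
FROM Depositors
WHERE StartDate >= DATEADD(day, DATEDIFF(day, 0, GETDATE()) - 1, 0)
  AND StartDate <  DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0)
UNION ALL
SELECT 'Last 7 days', SUM(Value), COUNT(DISTINCT AccountID), AVG(1.0 * Value)
FROM Depositors
WHERE StartDate >= DATEADD(day, DATEDIFF(day, 0, GETDATE()) - 7, 0)
UNION ALL
SELECT 'Last 30 days', SUM(Value), COUNT(DISTINCT AccountID), AVG(1.0 * Value)
FROM Depositors
WHERE StartDate >= DATEADD(day, DATEDIFF(day, 0, GETDATE()) - 30, 0)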
View 3 Replies
View Related
Oct 11, 2007
Hello, currently we are in the process of implementing a SQL Server database where a couple of tables will have millions of rows (about 98 million, and growing) and a web site that will retrieve and sort the data (read only). How will the ASP.NET GridView and SqlDataReader behave in a situation like that? Will the response be very slow? Is there any alternative? Is there any example on the net?
Assuming tables are well tuned and well indexed.
Thank you in advance.
View 4 Replies
View Related
Dec 4, 2007
I wonder if you can help...
I have a simple setup: 2 tables and a joining table, and want to retrieve a data set showing every possible combination of table A and table B together with whether that combination actually exists in the joining table or not.
My tables:
channels
======
channel_id
channel_name
items
====
item_id
item_name
channels_items (joining table)
===========
channel_id
item_id
created
An example of the dataset I want (assuming 2 items and 2 channels, with itemA not being in channelB):
item_id item_name channel_id channel_name exists
======= ========= ========== ============ ======
1 ItemA 1 ChannelA True
2 ItemB 1 ChannelA True
1 ItemA 2 ChannelB False
2 ItemB 2 ChannelB True
I'm completely stuck on how to achieve this. Any guidance would be very much appreciated.
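A sketch of the standard pattern for this, using the posted table names: CROSS JOIN generates every channel/item combination, and a LEFT JOIN to the joining table tells you whether each combination exists:
SELECT i.item_id,
       i.item_name,
       c.channel_id,
       c.channel_name,
       CASE WHEN ci.channel_id IS NULL THEN 'False' ELSE 'True' END AS [exists]
FROM items AS i
CROSS JOIN channels AS c
LEFT JOIN channels_items AS ci
       ON ci.item_id = i.item_id
      AND ci.channel_id = c.channel_id
ORDER BY c.channel_id, i.item_id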
View 2 Replies
View Related
Nov 9, 2004
Hi All
by using this query
"select * from sample order by newid()" I'm getting a set of rows. On refreshing this query I need the same set of rows back for validation (provided there is sufficient data in the table).
Please provide me the query to use for this
Adv. Thanks
Hari...
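order by newid() produces a different order on every run, so one hedged alternative is a deterministic pseudo-random sort: CHECKSUM over a key column plus a fixed constant gives the same ordering on every refresh. The key column id and the constant 42 are assumptions; change the constant to reshuffle:
SELECT TOP 10 *
FROM sample
ORDER BY ABS(CHECKSUM(id, 42))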
View 2 Replies
View Related
Sep 28, 2000
Hi, can someone please give me an idea of how to proceed with this...
I've got a server-side cursor that retrieves a list of customers from a SQL Server table who match a specific criterion (those who have had orders in the year 2000).
For each customer, I need to retrieve a list of the top 5 items purchased, along with the dollars spent on each item by that customer.
I've tried to do this in a loop that uses a counter, but my syntax was probably off, and I'm not sure this is better than a correlated join. Can someone show me a template of the appropriate syntax to use for this operation?
I appreciate suggestions, thanks again.
Irene
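A hedged template, with all table and column names hypothetical: it assumes the per-customer, per-item dollar totals are already aggregated into CustomerItemTotals, and uses the correlated-TOP pattern available on SQL Server 7.0/2000 in place of an explicit loop:
SELECT t.CustomerId, t.ItemId, t.DollarsSpent
FROM CustomerItemTotals AS t
WHERE t.ItemId IN
(
    SELECT TOP 5 t2.ItemId
    FROM CustomerItemTotals AS t2
    WHERE t2.CustomerId = t.CustomerId
    ORDER BY t2.DollarsSpent DESC
)
ORDER BY t.CustomerId, t.DollarsSpent DESC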
View 2 Replies
View Related
Sep 15, 2006
OK, here's my question. I want to retrieve from my database's employee table all those employees with a given name, e.g. Smith, and display them in a list. Can anyone give me any pointers please? I'm using VB 2005 Express Edition. So far this is what I have, but it only seems to return 1 row when I know there is more than one entry with the name I am inputting:
Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
    Dim searchString As String
    searchString = Me.SearchStringBox.Text
    Try
        ' Note: LIKE with no wildcard behaves like an equality test; appending "%"
        ' (filter = "LastName LIKE '" & searchString & "%'") matches every last name
        ' that starts with the search text.
        Dim filter As String
        filter = "LastName LIKE '" & searchString & "'"
        ' DataTable.Select returns an array of ALL rows matching the filter
        Dim search() As System.Data.DataRow
        search = myCDDataSEt.ClientData.Select(filter)
        If search.Length > 0 Then
            'no code as yet
        Else
            MessageBox.Show("The client " & searchString & " is not in the database")
        End If
    Catch ex As Exception
        MessageBox.Show(ex.Message)
    End Try
End Sub
View 2 Replies
View Related
Sep 27, 2005
hi,
I need an SP that receives 2 integers, @NUM_ROWS and @PAGE_NUMBER,
and returns the rows in that page.
for example:
SP(4,2) will return 4 rows in page number 2.
So if I have a table with 9 rows I will get rows 5-8:
the first page is rows 1-4, the second page is rows 5-8, and the third page is row 9.
I have to assume that rows can be deleted from that table.
thanks
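A hedged sketch, assuming SQL Server 2005+ for ROW_NUMBER() and a hypothetical table MyTable keyed by id. Because rows are renumbered on every execution, deleted rows never leave gaps in the paging:
CREATE PROCEDURE dbo.GetPage
    @NUM_ROWS    int,
    @PAGE_NUMBER int
AS
BEGIN
    SELECT *
    FROM (
        SELECT t.*,
               ROW_NUMBER() OVER (ORDER BY t.id) AS rn   -- rn is carried along in the output
        FROM dbo.MyTable AS t
    ) AS numbered
    WHERE rn >  (@PAGE_NUMBER - 1) * @NUM_ROWS
      AND rn <= @PAGE_NUMBER * @NUM_ROWS;
END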
View 3 Replies
View Related
Jan 11, 2008
Mean_A  Std_Dev_A  Mean_B  Std_Dev_B  Mean_C  Std_Dev_C  X_Co  Y_Co  Posn
71.7    9.36       73.23   3.62       70.87   4.06       12    14    1
72.69   8.02       79.39   2.66       73.39   5.16       13    15    2
74.37   10.27      77.33   4.10       79.33   3.44       14    16    3
The above is my database. I need help in retrieving the X_Co and Y_Co by using values rcv_A, rcv_B and rcv_C to compare with Mean_A, Mean_B and Mean_C. The values rcv_A, rcv_B and rcv_C are received values that do not exactly match the mean columns; what we want is to compare them against our database and retrieve the rows that are closest to rcv_A, rcv_B and rcv_C.
Here is an example of what I need. Let's say my rcv_A = 71, rcv_B = 73 and rcv_C = 70.8; the row with the closest mean values would be row 1, followed by row 2, then row 3.
So the result I hope to retrieve is in order of the closest value, and I only need the X_Co and Y_Co.
This is what i want
X_Co Y_Co
---------------------------
12 14
13 15
14 16
So can anyone please help me query for the above results? Thanks
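A minimal sketch: treat the three received values as a point and sort by squared Euclidean distance to the stored means. The table name Readings is a placeholder; SQUARE() is built in:
DECLARE @rcv_A float, @rcv_B float, @rcv_C float
SELECT @rcv_A = 71, @rcv_B = 73, @rcv_C = 70.8

SELECT X_Co, Y_Co
FROM dbo.Readings   -- hypothetical table name
ORDER BY SQUARE(Mean_A - @rcv_A)
       + SQUARE(Mean_B - @rcv_B)
       + SQUARE(Mean_C - @rcv_C)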
View 5 Replies
View Related
Jul 23, 2005
We are busy designing a generic analytical system at work that will hold multiple analytic types over time. This system is being developed in SQL 2000.
Example table:
IDENTITY int
ItemId int [PK]
AnalyticType int [PK]
AnalyticDate DateTime [PK]
Value numeric(28,15)
ItemId - the item for which the analytic is being stored
AnalyticType - an arbitrary type
The [PK] tag indicates the composite primary key.
Our scenario is the following:
* For this time series data, we expect around 250 days per year (working days) and the dataset could extend to over 20 years
* Up to 50 analytic types
* Up to 20,000 items
Looking at the combined calculation, this comes to roughly something like 25 * 20,000 * 50 * 250, or around 5 billion rows. We will be inserting around 50 * 20,000, or around 1 million, rows each day (the inserts will take place in the middle of the night, outside the main query time); this could be done through something like BCP or BULK INSERT.
Our real problem is that we have not previously worked with such large tables before and are nervous that our system is going to grind to a halt. Our biggest tables are around 20 million rows at the moment. Scanning through Google and Microsoft's own site we have found a partitioning method that is available: http://www.microsoft.com/resources/...art5/c1861.mspx
Having experimented with the above system it seems rather quirky, and looking at the available literature it seems that this is no more effective than a clustered index as far as queries go.
It needs to be optimized for queries like: given the ItemID and the AnalyticType, search for a specific date or a specific range of dates.
If anyone has any experience or helpful suggestions I would really appreciate it.
Thanks
A
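A minimal sketch under those requirements, using the posted schema as an assumption: clustering on (ItemId, AnalyticType, AnalyticDate) lets the stated query shape resolve as a single range seek, which is often as effective as partitioning for pure query performance:
CREATE TABLE dbo.Analytic
(
    ItemId       int            NOT NULL,
    AnalyticType int            NOT NULL,
    AnalyticDate datetime       NOT NULL,
    Value        numeric(28,15) NOT NULL,
    CONSTRAINT PK_Analytic PRIMARY KEY CLUSTERED (ItemId, AnalyticType, AnalyticDate)
)

-- the target query shape: a single clustered-index range seek
SELECT AnalyticDate, Value
FROM dbo.Analytic
WHERE ItemId = 42            -- placeholder values
  AND AnalyticType = 7
  AND AnalyticDate >= '20050101' AND AnalyticDate < '20050201'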
View 4 Replies
View Related
Apr 3, 2007
Hi all,
A select query returns around 1 million rows. The column in the WHERE condition is indexed. This query takes nearly 1 minute to return all the records. Is this normal?
Does the number of records returned affect the performance in spite of the indexing?
Thanks,
DBLearner
View 3 Replies
View Related
Nov 24, 1999
Hi,
I need to insert a very large number of rows into a table (in SQL Server 7.0) using ADO.
Could you please tell me if there is a way to do a FAST insert, something similar to BCP, or any other way of
inserting a large number of rows efficiently?
Thanks
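If the rows originate in a file, the server-side BULK INSERT statement (available in SQL Server 7.0) can be issued from ADO like any other command; the path and format here are assumptions:
BULK INSERT dbo.TargetTable
FROM 'C:\data\rows.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK)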
View 2 Replies
View Related
Jul 23, 2005
Hello, I have experienced that some of my tables occupy an extremely large amount of pages but with few rows. An example is a table with 37 rows over 22000 pages! The columns are simple integer and char. I fixed the problem by introducing a clustered index; now it only uses 1 page. But can anyone explain this behaviour in SQL Server 2000?
Regards, Jakob Mathiasen
View 4 Replies
View Related
Sep 14, 2007
I have a table with about 10 million rows; it's been logging data incorrectly and I want to start again.
DELETE FROM tblLog
I just get a message about the server timing out. What can I do?
Thanks
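Two common approaches, sketched: if every row must go and no foreign keys reference the table, TRUNCATE TABLE is minimally logged and near-instant; otherwise, delete in batches so no single transaction runs long enough to time out (the TOP (...) form assumes SQL Server 2005+):
-- option 1: remove everything (fails if foreign keys reference the table)
TRUNCATE TABLE tblLog

-- option 2: batched delete
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM tblLog
    IF @@ROWCOUNT = 0 BREAK
END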
View 5 Replies
View Related
May 23, 2002
Table structure: col1 IDENTITY (seed=1 increment=1) + few other columns (col2...col7) + one text column (col 8)
I have around 50,000,000 rows per day inserted into the table T1. At the end of the day 40,000,000 rows are deleted. I have to keep the records for 12 months and then archive them. The database serves the web 24/7 and there is no down time allowed. The IDENTITY column will go out of range (overflow) after less than two years, unless the identity seed is reset to the start value (seed=1, increment=1).
At the end of the 12th month the data is archived into another table and only the last month is kept in table T1. So table T1 enters the new year with data from the last month of the previous year. There are a few other tables that refer to this table by using their own field with values from T1's IDENTITY column (referential integrity is not enforced). The identity column in T1 is needed as a unique id for some search actions. Performance is an issue, therefore the bigint data type is used for this identity column rather than decimal.
Another problem I have is how to do a table update on one column (1 million rows to be updated out of 2 million rows) with the minimum impact on the users who are querying this table heavily. Needless to say, it is a 24/7 web app with no down time.
Thank you in advance.
Goran
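For the second problem (updating 1 million of 2 million rows with minimal reader impact), a hedged SQL Server 2000-era sketch that commits in small batches via SET ROWCOUNT; the value and filter are placeholders:
SET ROWCOUNT 5000
WHILE 1 = 1
BEGIN
    UPDATE T1
    SET col2 = 'new value'          -- placeholder value
    WHERE col3 = 'needs updating'   -- placeholder filter
      AND col2 <> 'new value'       -- skip rows already done, so the loop terminates
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0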
View 1 Replies
View Related
Jan 26, 2004
Hi,
I have some problems with our database which is growing too large, and was hoping someone might have some tips on what I can do!
I have about 100 clients, each logging about 10 000 rows of status logs a day. So after just a few days the db is growing very large.
At present it's manageable, since I don't need to "dig" into the logs more than a few times a day. The system it self is not affected by the size of the log or traffic on the server. But it will increase to about 500 clients in 2004, and 1000-1500 in 2005. So I really need a smarter solution than what I have today to be able to use the log efficiently.
98-99% of these rows are status messages which are more or less garbage during normal operation. But I still need to keep them in case an error occurs and we need to go back an hour or two (maybe a day) to see what went wrong. After 24-48 hours these 98-99% are of no use. I do, however, like to keep the remaining 1-2%; they are messages like startup, errors, etc. Ideally they should be logged into two separate tables by the clients, but unfortunately I cannot make the clients change their logging.
This presents problems on multiple levels. Mainly in searching, which often times out, but also with backup and storage space. At the moment I check the system for errors, and every other day I just truncate the log file. It works, but it's not exactly elegant...
The server is a 1100 MHz P3 / 512MB / Windows 2000 Server /
SQL Server 2000. Faster hardware would help, but the problem is more of a "bad design" than "slow hardware" problem.
My log is pretty simple, as follows:
LogId - int - primary key - clustered index
ClientId - int - index asc
LogTypeId - int - index asc
LogValue - nvarchar[2500], no index
LogTimeStamp- datetime - index asc
I have deduced 3 different solutions:
Method 1:
Simply run "Delete from db_log where logtyipeid <> stuff_I_want_to_keep".
This is the simplest and the one i prefer, but it takes too long time to complete. Any tips to speed this process up?
Method 2:
Create a trigger which runs something like "Delete from db_log where logtypeid <> stuff_I_want_to_keep and date < today_minus_two_days" every hour or so. This will ensure that the db doesn't grow too large. But if I'm away from work a few days we might lose data we wanted to keep.
Method 3:
Copy what I want to keep into another table, and empty the log. Sort of like "Insert into db_log_keep stuff_to_keep; drop db_log; create table db_log; " (or truncate, but that takes a long time too)
But then I would be stuck with two log tables, "48-hour_db_log" and "db_log_keep". I could use a view to "union" them so they would appear as a single table, but that's not ideal either.
However, it seems this method is what will work best for my set-up, unless there are other suggestions (see the sketch at the end of this post).
Method 4:
...eagerly awaiting ideas!!! :-)
(Also, whatever tips and/or links to info on maintaing VLDB's are greatly appreciated. )
Thanks in advance for your help! :-)
Nikolai
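A hedged sketch of Method 3: copy the 1-2% worth keeping into db_log_keep, then truncate, inside one transaction so a failure can't lose the kept rows. The list of logtypeids to keep is a placeholder:
BEGIN TRAN

INSERT INTO db_log_keep (LogId, ClientId, LogTypeId, LogValue, LogTimeStamp)
SELECT LogId, ClientId, LogTypeId, LogValue, LogTimeStamp
FROM db_log
WHERE LogTypeId IN (1, 2, 3)   -- placeholder: the types to keep

TRUNCATE TABLE db_log          -- minimally logged, and still rolled back if the TRAN fails

COMMIT TRAN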
View 4 Replies
View Related
Dec 24, 2007
Hey Guys
I need to add a datetime column to an existing table that has about 1.2 million records and is being accessed frequently,
but I can't afford to stop the db at all.
Whenever I do: alter table mytable add Updated_date datetime
it just takes too long and I have to stop executing the query after a couple of minutes.
I am running SQL Express 2005 SP2. The db size is over 3 GB but still under the 4 GB limit.
Can you please advise on how to add this column? It's urgent!
thanks in advance
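A hedged note: adding a nullable column with no default is a metadata-only change in SQL Server 2005, so the long runtime is most likely the statement waiting on a schema-modification (Sch-M) lock behind the sessions that are constantly reading the table, not row-by-row work. A sketch:
-- the ADD itself touches no data pages; it only needs a brief Sch-M lock
ALTER TABLE mytable ADD Updated_date datetime NULL

-- from another connection, see what is blocking it while it waits
SELECT session_id, blocking_session_id, wait_type
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0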
View 5 Replies
View Related
Jan 25, 2001
I am attempting to generate files containing more than 1000 characters per line by outputting the results of a stored procedure via osql to a flat file. Osql (and isql) appear to force a newline after 1000 characters, even when specifying a -w2000 parameter.
I have also tried to output the results of the stored procedure via DTS and this appears to do the same thing!
Does anybody know how to prevent osql (or isql) from forcing the newline?
View 1 Replies
View Related
Aug 18, 2014
SQL 2012
I have a source table in the staging database stg.fact and it needs to be merged into the warehouse table whs.Fact.
stg.fact is not a delta feed; it is basically an intra-day refresh.
Both tables have a last-updated date, so it's easy to see which rows have changed.
It will be new (insert) or changed (update) data that I am interested in, there are no deletions.
As this could be in the millions of rows that are inserts or updates then this needs to be efficient.
I expect whs.Fact to go to >150 million rows.
When I have done this before I started with T-SQL Merge statement and that was not performant once I got to this size.
My original option was to do this in SSIS with a lookup task that marks the inserts and updates and deals with them separately. However, when I set up the lookup transformation, the reference data set needed a package variable in the SQL command, and this does not seem possible with the lookup in 2012! I am currently looking at the Merge Join transformation, and at any clever basic T-SQL that could work. This will need to be fast, and that's where I think T-SQL may be the better route.
Both tables will have >100,000,000 rows
Both tables have the last updated date
The Tables are in different databases but on the same SQL Instance
Each table holds 5 integer columns, one varchar, one datetime.
Last time I used Merge it was a wider table with lots of columns so don't know if this would be an option.
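A hedged T-SQL sketch of splitting the upsert into two set-based statements, which is a usual fallback when MERGE stops scaling; the key column Id, the column list, and the LastUpdated name are assumptions:
UPDATE w
SET    w.Col1 = s.Col1,              -- placeholder column list
       w.LastUpdated = s.LastUpdated
FROM   whs.Fact AS w
JOIN   stg.Fact AS s ON s.Id = w.Id
WHERE  s.LastUpdated > w.LastUpdated  -- only rows that actually changed

INSERT INTO whs.Fact (Id, Col1, LastUpdated)
SELECT s.Id, s.Col1, s.LastUpdated
FROM   stg.Fact AS s
WHERE  NOT EXISTS (SELECT 1 FROM whs.Fact AS w WHERE w.Id = s.Id)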
View 6 Replies
View Related
Jan 19, 2015
I have a simple query that joins a largish fact table (3 million rows) to a view that returns 120 rows. The SKEY in the view is returned via a scalar function. The view returns instantly if queried on its own; however, when joined to the fact table in the simple query below, it results in a query execution plan that runs forever. Interestingly, if I change the INNER JOIN to a LEFT OUTER JOIN, the query returns the matched results almost instantly.
Select
Dimension.Age_Band.[10_Year_Age_Band],
Count(*)
From
Fact.APC_Episodes
Inner Join Dimension.Age_Band ON
Fact.APC_Episodes.AGE_BAND_SKEY = Age_Band.AGE_BAND_SKEY
Group By
Dimension.Age_Band.[10_Year_Age_Band]
I know joining to a view using a column generated by a scalar function is not a good recipe for performance. I also know that I could fix this by populating a physical table with the view first as I have already tested this though I hoping not to have to go down that route.
Why does a LEFT OUTER JOIN work and not an INNER JOIN, and is there any way I can get the query optimizer to generate an execution plan that works?
View 9 Replies
View Related
Aug 30, 2007
I have a table that is being used to log track plays on our website.
Here's the table:
CREATE TABLE [dbo].[Music_BandTrackPlays](
[ListenDate] [datetime] NOT NULL DEFAULT (getdate()),
[TrackId] [int] NOT NULL,
[IPAddress] [varchar](20)
) ON [PRIMARY]
There's a CLUSTERED INDEX on ListenDate ASC and a NON CLUSTERED INDEX on the TrackId.
I have a TRIGGER on the Music_BandTrackPlays table that looks like the following:
CREATE TRIGGER [trig_Increment_Music_BandTrackPlays_PlayCount]
ON [dbo].[Music_BandTrackPlays] AFTER INSERT
AS
UPDATE
Music_BandTracks
SET
Music_BandTracks.PlayCount = Music_BandTracks.PlayCount + TP.PlayCount
FROM
(SELECT TrackId, COUNT(*) AS PlayCount
FROM inserted
GROUP BY TrackId) AS TP
WHERE
Music_BandTracks.TrackId = TP.TrackId
When a simple INSERT statement is done on the Music_BandTrackPlays table, it can take quite a long time. When I remove the TRIGGER the INSERTs are immediate. The execution plan for the TRIGGER shows that an 'Inserted Scan' is taking up most of the resources.
How exactly is the pseudo 'inserted' table formed?
For now, I think the easiest thing to do is update my logging page so it performs 2 queries: one to UPDATE the Music_BandTracks table and increment the counter, with the INSERT into the Music_BandTrackPlays table performed separately.
I'm ok with that solution but I would really like to understand why the TRIGGER is taking so long. The 'inserted' pseudo table will be 1 row 99% of the time. Does SQL Server perform a table scan on all 20 million rows in order to determine what's new and put it in the inserted pseudo table?
Thanks!
View 6 Replies
View Related
Oct 16, 2015
In another forum post, a poster was deleting large numbers of rows from a table in batches of 50,000.
In the bad old days ('80s - '90s), I used to have to delete rows in batches of 500, then 1000, then 5000, due to the size of the transaction rollback segments (yes - Oracle).
I always found that increasing the number of deleted rows in a single statement/transaction improved overall process speed - up to some magic point, at which some overhead in the system began slowing the deletes down, so that deleting a single batch of 10,000 rows took more than twice as much time as deleting two batches of 5,000 rows each.
Are there good rule-of-thumb numbers (or even better, some actual statistics and/or explanations) as to how many records should be deleted in a single transaction/statement for optimum speed: 50,000, 100,000, 1,000,000, or unlimited? Are there significant differences between 2008, 2012 and 2014?
View 9 Replies
View Related
May 12, 2015
I am using SQL SERVER 2008R2, not Denali, so I cannot use OFFSET FETCH Clause.
In my stored procedure, I am doing a SELECT INTO #tblTemp FROM... Working fine. This resultset is going to be used in an SSIS package which will generate a pipe-delimited .txt file... Working fine.
For recoverability's sake, I am trying to throttle back to commit chunks of 1000 rows per commit until there are no more rows. I am trying to avoid large rollbacks.
Q: Am I supposed to handle the transactions (begin/commit/rollback/end trans) when the records are being inserted into the temp table? Or when they are being selected from the temp table?
Q: Or can I handle this in my SSIS package for a flat file destination? I don't see options for a flat file destination like I do for an OLE DB Destination (like Rows per batch, Maximum insert commit size).
View 6 Replies
View Related
Feb 19, 2015
We have a database that is enabled for mirroring. We need to delete old records, around 500k records from one table, but it has foreign key relations. How should these kinds of deletes be done on production servers?
View 2 Replies
View Related
Apr 24, 2013
IF NOT EXISTS (SELECT TOP 1 1 FROM dbo.syscolumns WHERE id = OBJECT_ID(N'dbo.Employee') AND name = 'DoNotCall')
BEGIN
    ALTER TABLE [dbo].[Employee] ADD [DoNotCall] bit NOT NULL CONSTRAINT DoNot_Call_Default DEFAULT 0
    IF ( @@ERROR <> 0 )
        GOTO QuitWithRollback
END
It just takes a LOT of time in SQL Server Management Studio. I have to cancel the query, and cancelling takes a whole lot of time too. I am using SQL Server 2008.
View 4 Replies
View Related
May 14, 2008
Hi y'all, I have a problem in my query I hope you dudes can help with. You see, I have this query:
SELECT CustId, CustName, CustAddress, CreditLimit, dateStart, DateUpdated FROM AR_Customer
And these are the results:
CustId  CustName  CustAddress     CreditLimit  dateStart                DateUpdated
CS0201  Reena     Mayfield Str.   300000.000   2006-08-01 00:00:00.000  2006-08-01 00:00:00.000
CS0202  Bryant    Bringfield 202  50000.000    2006-08-01 00:00:00.000  2006-08-01 00:00:00.000
CS0203  Jack      Marjory 22      3000000.000  2006-08-01 00:00:00.000  2007-05-18 17:30:23.000
CS0204  Alan      Orchard 25      4500000.000  2006-10-02 00:00:00.000  2006-08-01 00:00:00.000
Then I add a WHERE clause to the query, like this:
SELECT CustId, CustName, CustAddress, CreditLimit, dateStart, DateUpdated FROM AR_Customer WHERE DateUpdated = '08/01/2006'
And the result is just as I expected:
CustId  CustName  CustAddress     CreditLimit  dateStart                DateUpdated
CS0201  Reena     Mayfield Str.   300000.000   2006-08-01 00:00:00.000  2006-08-01 00:00:00.000
CS0202  Bryant    Bringfield 202  50000.000    2006-08-01 00:00:00.000  2006-08-01 00:00:00.000
CS0204  Alan      Orchard 25      4500000.000  2006-10-02 00:00:00.000  2006-08-01 00:00:00.000
But when I change the date parameter to 05/18/2007, for example:
SELECT CustId, CustName, CustAddress, CreditLimit, dateStart, DateUpdated FROM AR_Customer WHERE DateUpdated = '05/18/2007'
it doesn't show any results at all; it just shows the header.
What I want to ask is why this happens. Is it because the date contains hours? According to the database the date does contain hours, like this: 2007-05-18 17:30:23.000. I have already used the <= and >= operators but they just show the wrong results. Can you guys tell me the correct query, please? I appreciate any help. Thanks. FYI: I use SQL Server 2000.
Best Regards.
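The stored value 2007-05-18 17:30:23.000 can never equal '05/18/2007', which means midnight, so an equality test finds nothing. A minimal sketch of the usual fix: compare against a half-open day range so the time portion doesn't matter:
SELECT CustId, CustName, CustAddress, CreditLimit, dateStart, DateUpdated
FROM AR_Customer
WHERE DateUpdated >= '20070518'
  AND DateUpdated <  '20070519'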
View 4 Replies
View Related
Apr 6, 2006
Hi,
first off, I'm a TOTAL novice at this stuff; I'm just currently blundering my way through a complex site to learn.
I'm trying to call the newest addition to a SQL database into a webpage; in this case it'll be 'newest user', one result only. I've done several other data retrieval sections using a DataTable, but the guy who was helping me through it is unavailable at the moment and I get the feeling I've jumped into the deep end slightly.
Could anyone give me an example of how retrieving the First N Records from SQL should look in VS? Does it need to be in a data table or can it go in a label?
Sorry if this is somewhat vague, but as I said, I've really only been using VS for a week!
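A hedged sketch of the query side, with hypothetical table and column names: TOP 1 plus ORDER BY on the signup timestamp returns just the newest user, and since it is a single value it can go straight into a label (for example via SqlCommand.ExecuteScalar) rather than a DataTable:
SELECT TOP 1 UserName
FROM Users
ORDER BY DateRegistered DESC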
View 3 Replies
View Related
Feb 20, 2000
I am trying to retrieve the user names of all users within a database. The problem is that I have only created a login within SQL 7.0 for the NT group that the users belong to.
As such, when I try to query the syslogins table I only get the NT group.
Is there any way of retrieving the users that belong to this NT group?
Any help or suggestions on this would be greatly appreciated.
Thanks,
Karl
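One option worth verifying: the system procedure xp_logininfo can list the members of a Windows NT group that has been granted a SQL Server login, which sidesteps syslogins entirely (the domain and group names below are placeholders):
EXEC master..xp_logininfo @acctname = 'YOURDOMAIN\YourNTGroup', @option = 'members'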
View 1 Replies
View Related