SQL Server 2005 Slows Down After A Large Number Of Queries
Jul 4, 2007
Hi,
We are running SQL Server 2005 Enterprise Edition with SP2 on Windows Server 2003 Enterprise SP2, with an Intel E6600 dual-core CPU and 4GB of RAM. We have a C# application that performs a large number of calculations in a loop. The application first loads the transactions that need to be updated, then goes through each row, queries another table to get some values, and updates the transaction.
I have set a limit of 2GB of RAM for SQL Server. When I run the application, it updates about 5 records per second (using the process described above). After roughly 10,000 records, it slows down to about 1 record per second. I have tried to examine Activity Monitor, but I can't find anything that might indicate what's causing this.
I have read that there are some known issues with Hyper-Threaded CPUs, but since my CPU is dual-core I don't know whether the issue applies to it as well, and I have no option to disable one core in the BIOS.
The only thing I have noticed is that if I change Max Degree of Parallelism when the server slows down (i.e. from 0 to 1 and then back to 0), the server speeds up for another 10,000 record updates and then slows down again. Does anyone have an idea of what's causing this? What does changing that property do that makes the server speed up again?
If there is no solution for this problem, does anyone know of a stored procedure or anything else that can be used programmatically to speed up the server when it slows down? (This is not the optimal solution, but I will use it as a workaround.)
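For reference, the kind of programmatic toggle described above can be done with sp_configure; this is only a rough sketch, assuming sysadmin rights and that the same values are cycled that get changed by hand today:

-- Sketch only: cycle Max Degree of Parallelism from a job or application call
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 0;
RECONFIGURE;

Changing this option invalidates cached plans, which may be why toggling it appears to reset the server; DBCC FREEPROCCACHE would have a similar effect, though neither addresses the underlying cause.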
Any advice will be greatly appreciated.
Thanks,
Joe
View 3 Replies
Jul 28, 2004
Hi All,
I'm currently in the middle of building quite a large CMS using ASP.NET and MSSQL2K, and I have begun to question whether the number of queries I am using to build one page is too many.
For one page (View Forum) I am getting all of the templates and checking access, then pulling a list of threads, getting the first and last posts, then user info for the first and last posts... anyway, to view 10 threads on the page the number of queries comes to about 54, and the page takes 0.064 seconds to load.
My question is: is this too many queries to be running for a single page load? All queries use stored procedures.
Thanks Guys.
View 3 Replies
View Related
May 21, 2007
I'm hoping someone will be able to point me in the right direction for solving this problem, as I've come a bit stuck.
The SQL Server 2005 stored procedure runs in about 3 seconds against a small table when run from SQL Server Management Studio (starting with DBCC FREEPROCCACHE before execution), but times out when run through ADO.NET (45 second timeout).
I've made sure the connection was closed prior to opening and executing the adapter. I'm a bit stuck as to where to check next, though.
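One possibility worth ruling out is a parameter-sniffing / cached-plan difference between the Management Studio session and the ADO.NET connection (they typically run with different SET options, so they get different cached plans). A hedged sketch of a recompile hint, with a hypothetical procedure and table since the real procedure isn't shown:

-- Hypothetical names; OPTION (RECOMPILE) builds a plan for the actual parameter value each call
ALTER PROCEDURE dbo.GetOrdersByCustomer
    @CustomerId int
AS
BEGIN
    SELECT OrderId, OrderDate, TotalAmount
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
    OPTION (RECOMPILE);
END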
Any ideas gratefully received. Thanks
View 6 Replies
View Related
May 30, 2006
I am trying to enable database mirroring for 100 databases.
It goes error-free up to 59 databases (sometimes 60), with the status (principal, synchronized) on the principal. On the 60th or 61st database it gives the status (principal, disconnected). The mirror also starts acting abnormally: connections to the mirror start timing out, and it won't enable database mirroring on any more databases. I have SQL Server 2005 Enterprise with SP1 on the servers; a witness is not included yet.
These are my test servers... I have more than 500 databases on my production servers.
The principal and mirror are both using port 5022 for endpoint communication.
View 1 Replies
View Related
Jun 1, 2006
I am trying to enable database mirroring for 100 databases.
It goes error-free up to 59 databases (sometimes 60), with the status (principal, synchronized) on the principal. On the 60th or 61st database it gives the status (principal, disconnected). The mirror also starts acting abnormally: connections to the mirror start timing out, and it won't enable database mirroring on any more databases. I have SQL Server 2005 Enterprise with SP1 on the servers; a witness is not included yet.
These are my test servers... I have more than 500 databases on my production servers.
The principal and mirror are both using port 5022 for endpoint communication.
All of the databases are critical and all must be included in the database mirroring.
So, after that, I tried to implement database mirroring again...
The system has 3 GB of RAM and SQL Server (the mirror) is using 85 MB of RAM, but it still gives this error while trying to enable database mirroring for the 37th database:
"There is insufficient system memory to run this query"
WHY?
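A couple of checks that may help narrow this down, assuming the SQL Server 2005 DMVs on the mirror instance; the 85 MB figure from Task Manager can be misleading, so it is worth confirming what the instance is actually allowed to use and the state of each mirroring session:

-- Current max server memory setting on the mirror
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';

-- State of every mirroring session on this instance
SELECT d.name, m.mirroring_role_desc, m.mirroring_state_desc
FROM sys.database_mirroring m
JOIN sys.databases d ON d.database_id = m.database_id
WHERE m.mirroring_guid IS NOT NULL;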
View 19 Replies
View Related
May 12, 2008
Hi
Will a change to an index slow down queries in SQL Server 2000 SP4?
Please help me
View 4 Replies
View Related
Dec 12, 2007
Hi
We are using the SQL Server 2005 Full-Text Search service. The data is not huge, but it is the kind of data where each record is small and there are a large number of records. There are 35 million records now, with 11 GB of data and about 1.6 GB of full-text catalog on the table. This is expected to grow to at least 10 times its current size. The issue is that FTS takes a long time to return results when the number of hits (rows) returned is large: for some searches it takes a very long time, while with the same data and catalog, full-text queries for less common words return promptly. The nature of the problem doesn't allow us to return only the top results; we need all of them. So it's not about the size of the data but the number of results returned from full-text (since the catalog is inverted). The machine is a dual processor with 4 GB RAM.
I am considering splitting the table, and hence the catalog, and using multiple servers to do full-text searches against smaller catalogs. Is there any other way this issue can be solved?
If splitting is the only way, can you give me an idea of a statistical/standard limit to the number of search results / catalog size at which FTS still gives good results?
Thanks in advance
View 1 Replies
View Related
Feb 28, 2008
Hi MSDN PPL!
I have a SQL Server 2005 instance that slows down over time; it almost grinds to a halt. The data is exposed via an ASP.NET 2.0 web interface. The web application gets slower and slower over a matter of days. If I restart the SQL Server process it comes back to life and starts serving as it should, nice and snappy.
The web application does not perform much writing to the DB; 90% of the time it is just reading. The DB server is worked hard by a console application that produces data each day. This console app runs for about 30 minutes, during which there is a lot of reading, processing and writing back to the DB as fast as the hardware will allow. It's this massive workload that is slowing the DB server.
This seems to be related to the amount of memory that SQL Server is using. Looking at Task Manager, I can see that sqlservr.exe is using 1,878,904 K, and this figure continues to rise while the console app runs; I have seen it over 2 GB. When the console app finishes the memory is still allocated and performance is slow. This continues to get worse after a few days of processing.
The machine's specs are:
* Windows Server 2003 R2 Standard
* SQL Server 2005 Standard 9.00.3054.00
* Twin 3.2Ghz Xeons
* 3.5 Gb RAM
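Seen from the outside, a SQL Server instance holding on to memory after a heavy load is expected behaviour (the buffer pool is not released just because the console app finished). One thing that may help the rest of the box, sketched below as an example only, is capping max server memory so the OS and IIS keep some headroom; the 2560 MB value is purely an illustration for a 3.5 GB machine:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Example cap only; choose a value that leaves enough for the OS and the web application
EXEC sp_configure 'max server memory (MB)', 2560;
RECONFIGURE;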
I plan to apply the "Cumulative hotfix package (build 3152) for SQL Server 2005 Service Pack 2" in the blind hope of solving the problem.
Any suggestions?
Sorry if this is in the wrong place guys, couldn't find a general performance topic. Please move accordingly.
Thanks,
Matt.
View 3 Replies
View Related
Feb 23, 2007
For anyone with a large number of databases (500+): how many do you have in a single instance? If you are using multiple instances on a single server, how many DBs per instance? This is why I'm asking:
We are experiencing 701 "out of system memory" errors and temporary (usually) system freezes when the error occurs. We have 32-bit 2005, version 9.00.2153.00, 32GB of memory, AWE enabled, on a quad dual-core 3GHz hyperthreaded server. Neither the buffer pool nor VAS show any pressure when the "out of system memory" error occurs. Since this error usually indicates a VAS problem we tried increasing VAS to 1GB with the -g flag. It made no difference. PSS has been working on the case for 3 weeks. They don't seem to be finding any evidence of memory pressure either. When I last spoke to the escalation engineer yesterday, it seemed they were going to recommend reducing the number of databases on the server. I asked for clarification as to whether we are hitting a 32-bit barrier, an instance limitation, or both, and I am awaiting the answer. How many databases do you have on your server? We had between 1700 and 1900 (the number varies) at times when the error occurred. We are now at 1500, and have not had the error in the 2 days since reducing the number of databases...
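When the 701 fires, a snapshot of the memory clerks sometimes shows whether the pressure is inside the buffer pool or in the multi-page (VAS) allocations; a sketch against the SQL Server 2005 DMV, purely as a diagnostic aid:

SELECT TOP (20)
    type,
    SUM(single_pages_kb) AS single_pages_kb,   -- allocations inside the buffer pool
    SUM(multi_pages_kb)  AS multi_pages_kb     -- allocations outside the buffer pool (VAS)
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY SUM(single_pages_kb) + SUM(multi_pages_kb) DESC;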
View 4 Replies
View Related
Mar 30, 2015
Our monitoring tool shows that our production system periodically experiences a high paging rate, up to 800 memory pages/sec. How can we find out which particular queries, stored procedures, or processes initiate this?
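Assuming SQL Server 2005 or later, one starting point is sys.dm_exec_query_stats ordered by physical reads, since heavy physical I/O is what usually drives pages/sec; a sketch:

SELECT TOP (20)
    qs.total_physical_reads,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_physical_reads DESC;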
View 3 Replies
View Related
Jan 26, 2004
Hi,
I have some problems with our database which is growing too large, and was hoping someone might have some tips on what I can do!
I have about 100 clients, each logging about 10 000 rows of status logs a day. So after just a few days the db is growing very large.
At present it's manageable, since I don't need to "dig" into the logs more than a few times a day. The system itself is not affected by the size of the log or by traffic on the server. But it will increase to about 500 clients in 2004, and 1000-1500 in 2005, so I really need a smarter solution than what I have today to be able to use the log efficiently.
98-99% of these rows are status messages which are more or less garbage during normal operation. But I still need to keep them in case an error occurs and we need to go back an hour or two (maybe a day) to see what went wrong. After 24-48 hours these 98-99% are of no use. I would however like to keep the remaining 1-2%; they are messages like startup, errors, etc. Ideally they should be logged in two separate tables by the clients, but unfortunately I cannot make the clients change their logging.
This presents problems on multiple levels. Mainly in searching, which often times out, but also with backups and storage space. At the moment I check the system for errors, and every other day I just truncate the log file. It works, but it's not exactly elegant...
The server is a 1100 MHz P3 / 512MB / Windows 2000 Server /
SQL Server 2000. Faster hardware would help, but the problem is more of a "bad design" than "slow hardware" problem.
My log is pretty simple, as follows:
LogId - int - primary key - clustered index
ClientId - int - index asc
LogTypeId - int - index asc
LogValue - nvarchar[2500], no index
LogTimeStamp- datetime - index asc
I have come up with three different solutions:
Method 1:
Simply run "Delete from db_log where logtyipeid <> stuff_I_want_to_keep".
This is the simplest and the one i prefer, but it takes too long time to complete. Any tips to speed this process up?
Method 2:
Create a trigger which runs something like "Delete from db_log where logtypeid <> stuff_I_want_to_keep and date < today_minus_two_days" every hour or so. This will ensure that the db doesn't grow too large. But if I'm away from work for a few days we might lose data we wanted to keep.
Method 3:
Copy what I want to keep into another table, and empty the log. Sort of like "Insert into db_log_keep stuff_to_keep; drop db_log; create table db_log; " (or truncate, but that takes a long time too)
But then I would be stuck with two log tables, "48-hour_db_log" and "db_log_keep". I could use a view to "union" them so they would appear as a single table, but that's not ideal either.
However, it seems as if this method will work best for my set-up, unless there are other suggestions?
Method 4:
...eagerly awaiting ideas!!! :-)
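For Method 1, referenced above, one way to keep the delete from running too long is to do it in small batches so each transaction stays short and the log does not balloon. A sketch only, using SET ROWCOUNT since the server is SQL Server 2000; the "keep" type list and the 48-hour cutoff are placeholders:

SET ROWCOUNT 10000
WHILE 1 = 1
BEGIN
    DELETE FROM db_log
    WHERE LogTypeId NOT IN (1, 2, 3)                       -- hypothetical "keep" types
      AND LogTimeStamp < DATEADD(hour, -48, GETDATE())
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0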
(Also, any tips and/or links to info on maintaining VLDBs are greatly appreciated.)
Thanks in advance for your help! :-)
Nikolai
View 4 Replies
View Related
Sep 8, 2015
I have the following scenario:
SQL database on SQL 2012
Large production table, 15 million records
The table has 3 years of data
New monthly data is being added every month.
New monthly data is loaded, checked and finally approved after 6 or 7 iterations. Because of this iteration the monthly data set is added, then deleted, then added, then deleted a few times. Because the table is big, this process takes time; any thoughts on how to make the delete/insert process faster? Keep in mind I cannot do much, because it is a production table and is being accessed by other users doing other analysis.
Delete is done based on trx_date which is a year/month combo, like 201508.
The table has monthly sales by customer aggregated.
The table structure is:
CREATE TABLE [dbo].[Sales](
[batch_key] [int] NOT NULL,
[Company_key] [int] NOT NULL,
[customer_key] [char](22) NOT NULL,
[Trx_Date] [int] NOT NULL,
[account] [nvarchar](35) NOT NULL,
[code].....
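For the monthly reload, one common compromise on a live table is to delete the month in small chunks so readers are not blocked for the whole operation; a sketch only, assuming SQL Server 2012 syntax and the Sales table above, with the batch size as a placeholder:

DECLARE @month int = 201508;   -- the year/month being reworked
WHILE 1 = 1
BEGIN
    DELETE TOP (50000) FROM dbo.Sales
    WHERE Trx_Date = @month;
    IF @@ROWCOUNT = 0 BREAK;
END

With a strictly monthly load pattern, partitioning the table on Trx_Date and switching the affected month's partition out, reworking it, and switching it back in is also worth evaluating, since the switch itself is a metadata operation.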
View 9 Replies
View Related
Oct 27, 2015
I have a 2-node cluster with 4 cores each, running 3 instances of SQL 2008 R2 Enterprise comprising 60 databases, 20 on each instance. I need to set up mirroring for each of the databases to a secondary server that has 4 cores and 3 instances. What I understand is that in this case the mirror server will provide a maximum of 512 worker threads and the 60 mirrored databases would consume 240 threads. What needs to be checked to assess the feasibility of going ahead with an async mirror setup as described above?
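Before committing to the setup, it is easy to confirm what each instance actually has available; a small sketch against the standard DMVs (each instance reports its own numbers):

-- Worker threads SQL Server has sized for this instance (0 in sp_configure means "automatic")
SELECT max_workers_count FROM sys.dm_os_sys_info;

-- Workers currently allocated
SELECT COUNT(*) AS current_workers FROM sys.dm_os_workers;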
View 0 Replies
View Related
Nov 6, 2015
I've installed the MDW (Management Data Warehouse) database on our central monitoring SQL Server. I've then added a number of servers to be monitored. The data is collected on the servers being monitored and uploaded to the central MDW monitoring server.
On the servers that are being monitored, I'm seeing a large number (over 1000) of SPIDs being generated by 'SQL Server Data Collector'.
Is this normal behaviour? I've seen more blocking as a result of this.
Is there any way to reduce the number of SPIDs generated?
View 0 Replies
View Related
Sep 23, 2006
I am importing a text file with a column (serial numbers) containing alphanumeric data, some mixed and some purely numeric. The very large values that are all numeric are being converted to exponential notation when I run the file through an import package in SQL Server Integration Services (2005).
Ex. 4110041233214321 --> 4110040000000000 (displays as 4.11E+15)
In the past I dealt with this by importing the text file into Excel and changing the format of the column to Number. This works even when many of the values contain alpha characters. I am not sure how to accomplish the same thing without going through Excel. If you have any ideas on this I would be happy to hear from you.
I am importing the text file into a sql table.
View 1 Replies
View Related
Feb 19, 2015
We have a database that is enabled for mirroring. We need to delete the old records, around 500k records from a table, but it has foreign key relations. How do you do this kind of delete on production servers?
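A common pattern for this, sketched below with hypothetical table and column names only: delete the referencing (child) rows first, then the parent rows, both in small batches so the transaction log and the mirror can keep pace:

-- Child rows first, in batches
WHILE 1 = 1
BEGIN
    DELETE TOP (5000) c
    FROM dbo.ChildTable c
    JOIN dbo.ParentTable p ON p.ParentId = c.ParentId
    WHERE p.CreatedDate < '20100101';
    IF @@ROWCOUNT = 0 BREAK;
END

-- Then the parent rows
WHILE 1 = 1
BEGIN
    DELETE TOP (5000) FROM dbo.ParentTable
    WHERE CreatedDate < '20100101';
    IF @@ROWCOUNT = 0 BREAK;
END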
View 2 Replies
View Related
Jan 11, 2008
Hi,
I have stumbled on a problem with running a large number of SSIS packages in parallel, using the "dtexec" command from inside a SQL Server job.
I've described the environment, the goal and the problem below. Sorry if it's a bit too long, but I tried to be as clear as possible.
The environment:
Windows Server 2003 Enterprise x64 Edition, SQL Server 2005 32bit Enterprise Edition SP2.
The goal:
We have a large number of text files that we're loading into a staging area of a data warehouse (based on SQL Server 2k5, as said above).
We have one "main" SSIS package that takes a list of files to load from an XML file, loops through that list and, for each file in the list, starts an SSIS package by using the "dtexec" command. The command is started asynchronously by using the system.diagnostics.process.start() method. This means that a large number of SSIS packages are started in parallel. These packages perform the actual loading (with BULK INSERT).
I have successfully run the loading process from the command prompt (using the dtexec command to start the main package) a number of times.
In order to move the loading to a production environment and schedule it, we have set up a SQL Server Agent job. We've created a proxy user with the necessary rights (the same user that runs the job from the command prompt) and created the SQL Agent job (there is one step of type "cmdexec" that runs the "main" SSIS package with the "dtexec" command).
If the input XML file for the main package contains a small number of files (for example 10), the SQL Server Agent job works fine - the SSIS packages are started in parallel and they finish their work successfully.
The problem:
When the number of the concurrently started SSIS packages gets too big, the packages start to fail. When a large number of SSIS package executions are already taking place, the new dtexec commands fail after 0 seconds of work with an empty error message.
Please bear in mind that the same loading still works perfectly from command prompt on the same server with the same user. It only fails when run from the SQL Agent Job.
I've tried to understand the limit at which the packages start to fail, and I believe that the threshold is 80 parallel executions (I understand that it might not be desirable to start so many SSIS packages at once, but I'd like to do it despite this).
Additional information:
The dtexec utility provides an error message in which the package variables are shown and the fact that the package ran 0 seconds, but the "Message" is empty ("Message: ").
Turning the logging on in all the packages does not provide an error message either, just a lot of run-time information.
The try-catch block around the process.start() script in the main package's script task also does not reveal any errors.
I've increased the "max worker threads" number for the cmdexec subsystem in the msdb.dbo.syssubsystems table to a safely high number and restarted the SQL Server, but this had no effect either.
The request:
Can anyone give ideas what could be the cause of the problem?
If you have any ideas about how to further debug the problem, they are also very welcome.
Thanks in advance!
Eero Ringmäe
View 2 Replies
View Related
Sep 20, 2007
Hi peeps,
We have just upgraded to Service Pack 4 on our SQL Server 2000.
We have a DTS job that normally takes about four hours to complete (this DTS job has been fine for the last three years).
However, after applying SP4, this DTS job now takes over 8 hours to complete.
There are no other processes running on the box and the box is a high end Dell machine with 8 Gig of RAM.
Any advice on this would be greatly appreciated.
Bal
View 14 Replies
View Related
Jul 26, 2007
I'm working with a table with about 60 million records. This monster is growing every minute of the day as well, by 200,000 - 300,000 records/day. It's 11 columns wide, and has one index on a datetime column. My task is to create some custom reports based on three of these columns, including the datetime one.
The problem is response time. Any query executed on this table takes forever--anywhere between 30 seconds and 4 minutes. Queries such as this one below, as simple as it is, can take a minute or more:
select
count(dt_date) as Searches
from
SearchRecords
where
datediff(day,getdate(),dt_date)=0
As the table gets larger and larger, the response time is going to get worse and worse. Long story short, what are my options for getting query times on a table this big down to just a few seconds? So far the best I can come up with is to index any other appropriate columns (of which there is one for sure, maybe two).
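One concrete option before adding more indexes: wrapping dt_date in DATEDIFF prevents the existing index on that column from being seeked. Rewriting the predicate as a range over dt_date returns the same rows but lets the index do the work; a sketch of the equivalent query:

select
count(dt_date) as Searches
from
SearchRecords
where
dt_date >= DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0)        -- midnight today
and dt_date < DATEADD(day, DATEDIFF(day, 0, GETDATE()) + 1, 0)  -- midnight tomorrow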
View 6 Replies
View Related
Nov 26, 2007
Hi,
I want to pass below given query into a variable
"if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[ <POS_MONTH>]]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
DROP TABLE [dbo].[ <POS_DATE>]
GO
SELECT * INTO [dbo].[<POS_DATE>] ]
FROM SG_POS_Template
WHERE 1 = 0;
GO"
where [<POS_DATE>] is a parameter whose value will be assigned dynamically... anybody please help me out!
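A sketch of one way to do this, assuming the table name arrives in a variable; GO is a batch separator and cannot live inside a string, so the statements are simply run in sequence through sp_executesql:

DECLARE @pos_date sysname;
SET @pos_date = N'POS_20071126';   -- hypothetical value, assigned dynamically in practice

DECLARE @sql nvarchar(max);
SET @sql = N'IF EXISTS (SELECT * FROM dbo.sysobjects
                        WHERE id = OBJECT_ID(N''[dbo].[' + @pos_date + N']'')
                          AND OBJECTPROPERTY(id, N''IsUserTable'') = 1)
                DROP TABLE [dbo].[' + @pos_date + N'];
             SELECT * INTO [dbo].[' + @pos_date + N'] FROM SG_POS_Template WHERE 1 = 0;';

EXEC sp_executesql @sql;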
View 5 Replies
View Related
May 31, 2007
Hi,
I currently have a large table (35 million rows, over 80GB). I have one varchar(max) column on the table that is used in the fulltext index.
To query the complete index is fast, for example:
SELECT 'ipod', COUNT(*)
FROM CONTAINSTABLE(MyDB.dbo.Contents, [Body], 'ipod') CT
This took 70 seconds (which I can live with). However, I seldom run queries like this; most are more like:
SELECT 'ipod', COUNT(*)
FROM CONTAINSTABLE(MyDB.dbo.Contents, [Body], 'ipod') CT
JOIN Pages ITP ON ITP.PageID = CT.[Key]
JOIN Feeds ITF ON ITP.IPID = ITF.IPID
JOIN Buyers ITB ON ITB.IBID = ITF.IBID
WHERE ITB.ID IN (1342,246)
These queries are much slower (this example took 17 minutes). I understand that FT searches the index and returns all rows that match the query to SQL. SQL then performs the joins and counts only the correct results. (Correct me if I'm wrong here).
One solution I've seen to this is to put data or "tags" into the FT column - so my Body column would become something like:
'{ID:1342}' + [Body]
That sounds like a very good idea. I could then change the 2nd query above to be:
SELECT 'ipod', COUNT(*)
FROM CONTAINSTABLE(MyDB.dbo.Contents, [Body], '("ID:1342" OR "ID:246") AND "ipod"') CT
That all works well until I want to select 1000 different IDs, because the FT query becomes very long and complex. Also, I'm only including one column (ID) in this example, but I have about 7 or 8 columns that I would need to include in these "tags". Querying multiple columns becomes very complex quickly, and no doubt I will reach a query limit at some point.
If anyone has any other suggestions for the above I'd love to hear them. Another thought I'm having is to partition the table. I can find very little online about how FT behaves on partitioned tables. I fear it behaves exactly the same; what I'd like to think is that I could partition the table on an ID, say 100 per partition or something, and then full-text would only search the relevant partitions. If it behaves like that it may work. If no one knows, then I'll give it a go, but this will take me a while due to the table size, so I'm hoping one of you clever lot knows!
Many thanks for any advice.
Simon
View 2 Replies
View Related
May 20, 2008
Hello,
We have created several table-valued user-defined functions in a production SQL Server 2005 DB that return large (tens of thousands of) rowsets obtained through a web service. Our code is based on the MSDN article Extending SQL Server Reporting Services with SQL CLR Table-Valued Functions.
What we have found in our implementations of variations of this code on three separate servers is that as the rowset grows, the length of time required to return the rows grows exponentially. With 10 columns, we have maxed out at approximately 2,500 rows. Once our rowset hit that size, no rows were being returned and the queries were timing out.
Here is a chart comparing the time elapsed to the rows returned at that time for a sample trial I ran:
Sec / Actual Rows Returned
0 0
10 237
20 447
30 481
40 585
50 655
60 725
70 793
80 860
90 940
100 1013
110 1081
120 1115
130 1151
140 1217
150 1250
160 1325
170 1325
180 1430
190 1467
200 1502
210 1539
220 1574
230 1610
240 1645
250 1679
260 1715
270 1750
280 1787
290 1822
300 1857
310 1892
320 1923
330 1956
340 1988
350 1988
360 2022
370 2060
380 2094
390 2094
400 2130
410 2160
420 2209
430 2237
440 2237
450 2274
460 2274
470 2308
480 2342
490 2380
500 2380
510 2418
520 2418
530 2451
540 2480
550 2493
560 2531
570 2566
It took 570 seconds (just over 9 1/2 minutes) to return 2,566 rows.
The minute breakdown during my trial is as follows:
1 = 655 (+ 655)
2 = 1081 (+ 426)
3 = 1325 (+244)
4 = 1610 (+285)
5 = 1822 (+212)
6 = 1988 (+166)
7 = 2160 (+172)
8 = 2308 (+148)
9 = 2451 (+143)
As you can tell, except for a few discrepancies in the resulting row count at minutes 4 and 7 (which I attribute to timing, as the results grid in SQL Management Studio was being updated only every 5 seconds or so), as time went on, fewer and fewer rows were returned in a given time period. This was a "successful" run, as the entire rowset was returned, but on more than several occasions we have hit the limit and had 0 new rows per minute towards the end of execution.
Allow me to explain the code in further detail:
[SqlFunction(FillRowMethodName = "FillListItem")]
public static IEnumerable DiscoverListItems(...)
{
ArrayList listItems = new ArrayList();
SPToSQLService service = new SPToSQLService();
[...]
DataSet itemQueryResult = service.DoItemQuery(...); // This is a synchronous call returning a DataSet from the Web Service
//Load the DS to the ArrayList
return listItems;
}
public static void FillListItem(object obj, out string col1, out string col2, out string col3, ...)
{
ArrayList item = (ArrayList) obj;
col1 = item.Count > 0 ? (string) item[0] : "";
col2 = item.Count > 0 ? (string) item[1] : "";
col3 = item.Count > 0 ? (string) item[2] : "";
[...]
}
As you will notice, the web service is called, and the DataSet is loaded into an ArrayList object (containing ArrayList objects) before the main ArrayList is returned by the UDF method. There are 237 rows returned within the first 10 seconds, which leads me to believe that all of this has occurred within 10 seconds: the GetListItems method has executed completely, and the ArrayList is now being iterated through by the code calling the FillListItem method. I believe that this code is causing the result set to be returned at a decreasing rate. I know that the GetListItems code is only being executed once and that the web service is only being called once.
Now a lot of my larger queries (> 20,000 rows) have timed out because of this behaviour, and my workaround was to customize my web service to page the data in reasonable chunks and call my UDFs in a loop using T-SQL. This means calling the web service up to 50 times per query in order to return the result set.
Surely someone else who has used table-valued UDFs has come across this problem. I would appreciate some feedback from someone in the know as to whether I'm doing something wrong in my code, or how to optimize SQL Server properly to allow for better performance with CLR functions.
Thanks,
Dragan Radovic
View 7 Replies
View Related
Jul 19, 2004
hi,
Does a large number of connections to a SQL Server result in slow queries?
By a large number of connections I mean in the range of 2000 to 2500 connections to various databases on the same server.
This happens on the production database. If I download a copy of the database and run the specific queries on my local box, they hardly take 1 second to execute, whereas on the production box they take about 45 to 60 seconds. I'm working on the indexes part... I was not sure whether the number of connections to a server affects the performance of the queries, or whether it is just that the indexes need defragmentation.
View 2 Replies
View Related
Jul 23, 2005
We are busy designing a generic analytical system at work that will hold multiple analytic types over time. This system is being developed in SQL 2000.
Example of table:
IDENTITY int
ItemId int [PK]
AnalyticType int [PK]
AnalyticDate DateTime [PK]
Value numeric(28,15)
ItemId - the item for which the analytic is being stored
AnalyticType - an arbitrary type
The [PK] tag indicates the composite primary key.
Our scenario is the following:
* For this time series data, we expect around 250 days per year (working days) and the dataset could extend to over 20 years
* Up to 50 analytic types
* Up to 20,000 items
Looking at the combined calculation, this comes to roughly something like 25 * 20,000 * 50 * 250, or around 5 billion rows.
We will be inserting around 50 * 20,000, or around 1 million rows, each day (the inserts will take place in the middle of the night, outside the main query time); this could be done through something like BCP or BULK INSERT.
Our real problem is we have not previously worked with such large tables before and are nervous that our system is going to grind to a halt. Our biggest tables are around 20 million rows at the moment.
Scanning through Google and Microsoft's own site we have found a partitioning method that is available:
http://www.microsoft.com/resources/...art5/c1861.mspx
Having experimented with the above system it seems rather quirky, and looking at the available literature it seems that this is not more effective than a clustered index as far as queries go.
It needs to be optimized for queries like: given the ItemId and the AnalyticType, search for a specific date or a specific range of dates.
If anyone has any experience or helpful suggestions I would really appreciate it.
Thanks
A
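Given that the stated access pattern is always ItemId + AnalyticType plus a date or date range, a composite clustered key in exactly that order keeps every such query to a narrow range scan; a sketch using the column names from the post (the IDENTITY column is left out here for brevity):

CREATE TABLE dbo.Analytic (
    ItemId        int            NOT NULL,
    AnalyticType  int            NOT NULL,
    AnalyticDate  datetime       NOT NULL,
    Value         numeric(28,15) NOT NULL,
    CONSTRAINT PK_Analytic PRIMARY KEY CLUSTERED (ItemId, AnalyticType, AnalyticDate)
)

-- the kind of query the clustered key is ordered for
SELECT AnalyticDate, Value
FROM dbo.Analytic
WHERE ItemId = 42
  AND AnalyticType = 7
  AND AnalyticDate BETWEEN '20040101' AND '20040331'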
View 4 Replies
View Related
Apr 3, 2007
Hi all,
A select query returns around 1 million rows. The column in the WHERE condition is indexed. This query takes nearly 1 minute to return all the records. Is this normal?
Does the number of records returned affect the performance inspite of the indexing ?
Thanks,
DBLearner
View 3 Replies
View Related
Nov 24, 1999
Hi,
I need to insert a very large number of rows into a table (in SQL Server 7.0) using ADO.
Could you please tell me if there is a way to do a FAST insert, something similar to BCP, or any other way of inserting a large number of rows efficiently?
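From the database side, BULK INSERT (available since SQL Server 7.0) gives BCP-class speed when the rows can be staged to a flat file first; a sketch with a hypothetical table and file path:

BULK INSERT dbo.TargetTable
FROM '\\server\share\rows_to_load.txt'      -- hypothetical file
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    TABLOCK                                  -- allows minimal logging where the recovery model permits
)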
Thanks
View 2 Replies
View Related
Jan 25, 2008
Hi gurus, I'm creating a web application in which I will have a large number of tables (between 10k and 20k). This is done for the sake of scalability, as tables will be moved to different database servers as the application grows, and also for performance (smaller indexes). I'm worried, though, about how having a large number of tables could affect the performance of SQL Server, as the application will start on a single database server. I tried to find some resources on this on the internet but couldn't find any.
I would really appreciate if you can give me some advice and if you have any good links that would be great...
Waleed Eissa
http://www.waleedeissa.com
View 9 Replies
View Related
Jun 20, 2006
Hi!
The DB I am working with has about 10 tables and some of the tables have 200,000 to 500,000 records. All tables have a clustered index on the primary key.
I think performance during INSERT could be better - I add thousands of records at a time from many connections.
Is there a way to defer the update of indexes, so that I can update the tables and then let the indexes regenerate?
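Indexes cannot really be paused during inserts, but for a large load the usual compromise is to disable (or drop) the nonclustered indexes, load the data, then rebuild them in one pass; the clustered index has to stay, since disabling it makes the table inaccessible. A sketch with a hypothetical index and table name, using the 2005 ALTER INDEX syntax (on SQL 2000 it would be DROP/CREATE INDEX instead):

-- before the bulk inserts
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders DISABLE;

-- ... perform the inserts ...

-- after the load: rebuilding re-enables the index
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;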
thanks,
Jas.
View 4 Replies
View Related
Oct 20, 2006
I have a datagrid that needs to display a log table which has more than a million records. Since this is a huge number, it is not possible to get a dataset using "select * from log_table" to fill and bind to the datagrid. Is there any way to display the first 100 rows on the first page and show the next 100 rows if the user clicks on page 2? Thank you very much in advance! Justin
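On SQL Server 2005, ROW_NUMBER() makes server-side paging straightforward, so each page of the datagrid only ever pulls its own 100 rows; a sketch with assumed column names:

DECLARE @PageNumber int, @PageSize int;
SELECT @PageNumber = 2, @PageSize = 100;

SELECT LogId, LogDate, LogMessage
FROM (
    SELECT LogId, LogDate, LogMessage,
           ROW_NUMBER() OVER (ORDER BY LogDate DESC) AS rn
    FROM dbo.log_table
) AS paged
WHERE paged.rn BETWEEN (@PageNumber - 1) * @PageSize + 1
                   AND @PageNumber * @PageSize
ORDER BY paged.rn;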
View 4 Replies
View Related
Jan 29, 2008
Hi, I have created a stored procedure which handles global updates. I am using a cursor to fetch rows one by one for updating (this is required to implement the business logic). Now when I execute this stored procedure it gives me a deadlock error, and I don't know why I'm getting it (approx. number of rows: 1.5 lakh, i.e. about 150,000). If I remove unnecessary records from the table (approx. 50,000) it works fine. Is there any way to handle this? I am calling this stored procedure from my Windows service. Please give me a good solution if possible.
View 3 Replies
View Related
Jun 4, 2007
All;
We have a Windows App and Web App that share business objects which points to a single database. When a Windows user logs in, an average of 50 processes are created in the first few seconds and never go away. The details window is blank and they all remain sleeping from that point on.
I have stepped through the code to see if there is anything odd going on but most of the processes are created when validating the number of parameters the stored procedure has or the length of the stored procedure name. This translates to 1000-1500 processes on average.
Is this normal? Will it hurt performance? Is there a way to remove them?
View 1 Replies
View Related
Sep 14, 2007
I have a table with about 10 million rows; it's been logging data incorrectly and I want to start again.
DELETE FROM tblLog
I just get a message about the server timing out; what can I do?
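If the goal really is to start over, TRUNCATE TABLE is minimally logged and near-instant, though it requires that no foreign keys reference the table and that you have the necessary permissions. Otherwise, deleting in batches avoids one huge long-running transaction; a sketch (the TOP form assumes SQL Server 2005 or later; on 2000, SET ROWCOUNT does the same job):

-- Option 1: wipe the table in one shot
TRUNCATE TABLE tblLog;

-- Option 2: delete in batches so no single statement runs long enough to time out
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM tblLog;
    IF @@ROWCOUNT = 0 BREAK;
END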
Thanks
View 5 Replies
View Related