Slow Database Update
Nov 23, 2007
I have designed a 22-table database in SQL Server that is to act as a backup/alternate access to the data we have stored in an ADABAS database. I've also written a VB.NET console program that takes data from ADABAS through a broker connection (one row at a time), checks the SQL Server database to see if that information is already stored, and then performs either an insert or an update.
I can write the rows from ADABAS to a text file (not using the broker) at a rate of about 1.3 million rows in 1.3 hours. Data can be imported (I'm not sure how this import is done, possibly via a CSV file; no SELECTs or UPDATEs, just INSERTs) at about 1 million rows an hour. But when I do the update, receiving information via the broker from ADABAS into the VB program (with its SELECT, then UPDATE/INSERT), I'm only doing about 20-25 thousand rows an hour.
I ran a Profiler trace on the database while the update program was running, then fed the generated workload to the index tuning tool, which created a few indices. I restarted the update program (I'm still developing, so I delete all rows from all tables each time I rerun it), but I haven't seen that it's really any faster.
I have a rather large set of data to transfer over, and 20-25 thousand rows an hour is not nearly fast enough.
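For what it's worth, one pattern that can cut the per-row cost is letting a single stored procedure do the existence check and the write in one round trip: try the UPDATE first and INSERT only when nothing was updated. This is only a sketch; dbo.MyTable, RecordKey and Col1 are placeholders for whatever the real schema looks like:

CREATE PROCEDURE dbo.UpsertRow
    @RecordKey int,
    @Col1 varchar(50)
AS
BEGIN
    SET NOCOUNT ON

    -- try the update first; if the row is not there yet, insert it
    UPDATE dbo.MyTable
    SET    Col1 = @Col1
    WHERE  RecordKey = @RecordKey

    IF @@ROWCOUNT = 0
        INSERT INTO dbo.MyTable (RecordKey, Col1)
        VALUES (@RecordKey, @Col1)
END

Calling that once per row from the VB.NET program replaces the separate SELECT plus UPDATE/INSERT pair; batching several rows per call would cut the broker-to-SQL round trips further.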
Any help will be appreciated.
Thanks,
View 1 Replies
Jan 18, 1999
UPDATE TABLE SET FIELD1=NULL WHERE FIELD2=something is running very, very slow. The table has 200,000 records and there is an index on FIELD2.
If the number of records to be updated is about 40,000, it takes about 3-4 hours.
I tried this:
- create a new MSSQL DB.
- migrate TABLE from the original database to the new test DB
- So, in the new DB there was only one table, with 200,000 records. The only index was on FIELD2.
- I ran that update and it still took about 3-4 hours.
I tried the same thing on InterBase and the update takes under ONE SECOND!
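One thing that sometimes helps a large single UPDATE like this is doing it in batches, so each transaction (and its log activity) stays small. A rough sketch only; TABLE1 and @FilterValue stand in for the real table name and filter value, and the 5000 batch size is arbitrary:

DECLARE @FilterValue int        -- assumed type; use whatever FIELD2 really is
SELECT @FilterValue = 1

SET ROWCOUNT 5000
WHILE 1 = 1
BEGIN
    UPDATE TABLE1
    SET    FIELD1 = NULL
    WHERE  FIELD2 = @FilterValue
      AND  FIELD1 IS NOT NULL
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0

If even small batches are slow, the usual suspects are an UPDATE trigger on the table or indexes and foreign keys that involve FIELD1.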
Thanks for any advice
View 3 Replies
View Related
Mar 22, 1999
I have the following stored procedure that I use for an update. The table now has just over two million rows and the stored procedure takes days to run. I am looking for any help that would produce the same results faster. This runs on SQL Server 6.5, on an NT machine with four Pentium Pro 200 processors and 512 MB of RAM, so I am relatively sure that the performance issue is in the statement below.
UPDATE PAY_CHECK_DETAILS
SET JobID = j.JobID
,HROrganizationID = j.HROrganizationID
,JobDetailID = j.JobDetailID
,GLAccountNumber = j.GLAccountNumber
FROM JOBS j, PAY_CHECK_DETAILS p
WHERE j.EmployeeID = p.EmployeeID
AND j.HRActionEffectiveDate =
(SELECT max(HRActionEffectiveDate)
FROM JOBS jj
WHERE p.EmployeeID = jj.EmployeeID
AND jj.HRActionEffectiveDate <= p.PayDate)
AND j.HRActionSequence =
(SELECT max(HRActionSequence)
FROM JOBS jjj
WHERE p.EmployeeID = jjj.EmployeeID
AND jjj.HRActionEffectiveDate =
(SELECT max(HRActionEffectiveDate)
FROM JOBS jjjj
WHERE p.EmployeeID = jjjj.EmployeeID
AND jjjj.HRActionEffectiveDate <= p.PayDate))
AND NOT (ISNULL(p.JobID,1) = ISNULL(j.JobID,1)
AND ISNULL(p.HROrganizationID,'A') = ISNULL(j.HROrganizationID,'A')
AND ISNULL(p.JobDetailID,1) = ISNULL(j.JobDetailID,1)
AND ISNULL(p.GLAccountNumber,'A') = ISNULL(j.GLAccountNumber,'A'))
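One approach worth trying on 6.5 is materialising the "latest job per employee and pay date" lookup once into temp tables and then doing a plain join for the final UPDATE, instead of re-running the correlated MAX subqueries for every row. This is only a sketch (untested; #LatestDate and #LatestJob are illustrative names, and the original NOT(...) filter that skips already-correct rows could be added back):

SELECT p.EmployeeID,
       p.PayDate,
       MAX(j.HRActionEffectiveDate) AS MaxEffectiveDate
INTO   #LatestDate
FROM   PAY_CHECK_DETAILS p, JOBS j
WHERE  j.EmployeeID = p.EmployeeID
  AND  j.HRActionEffectiveDate <= p.PayDate
GROUP BY p.EmployeeID, p.PayDate

SELECT d.EmployeeID,
       d.PayDate,
       d.MaxEffectiveDate,
       MAX(j.HRActionSequence) AS MaxSequence
INTO   #LatestJob
FROM   #LatestDate d, JOBS j
WHERE  j.EmployeeID = d.EmployeeID
  AND  j.HRActionEffectiveDate = d.MaxEffectiveDate
GROUP BY d.EmployeeID, d.PayDate, d.MaxEffectiveDate

UPDATE PAY_CHECK_DETAILS
SET    JobID            = j.JobID,
       HROrganizationID = j.HROrganizationID,
       JobDetailID      = j.JobDetailID,
       GLAccountNumber  = j.GLAccountNumber
FROM   PAY_CHECK_DETAILS p, #LatestJob l, JOBS j
WHERE  l.EmployeeID = p.EmployeeID
  AND  l.PayDate    = p.PayDate
  AND  j.EmployeeID = l.EmployeeID
  AND  j.HRActionEffectiveDate = l.MaxEffectiveDate
  AND  j.HRActionSequence      = l.MaxSequence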
Thanks for any help or suggestions you have.
Keith
View 3 Replies
View Related
Jul 20, 2005
I have the following statement that takes quite a long time. Longest of any of my SQL statement updates.

UPDATE F_REGISTRATION_STD_SESSION
SET PREVIOUS_YEAR_SESSION_ID =
    (SELECT s.previous_year_session_id
     FROM F_REGISTRATION_STD_SESSION R, DS_V4_TARGET.dbo.D_H_Session_By_SessQtr S
     WHERE r.STUDENT_ID = F_REGISTRATION_STD_SESSION.STUDENT_ID
       and s.previous_year_SESSION_ID = r.SESSION_ID
       and s.session_id = F_REGISTRATION_STD_SESSION.SESSION_ID)

STUDENT_ID is varchar(20), SESSION_ID is char(10), and both are indexed.

What I want to accomplish: I want to know if there was a student registration from the prior year of a registration. For example, if there is a registration for Fall 2004, was there also a registration for the same student for Fall 2003? Maybe there is a better way to approach this?
TIA
Rob
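If the update only needs to touch rows that actually have a prior-year registration (instead of also setting the rest to NULL), a join form of the same statement may behave better than the correlated subquery. A sketch only, using the names from the post and not tested against the real schema:

UPDATE F
SET    F.PREVIOUS_YEAR_SESSION_ID = S.previous_year_session_id
FROM   F_REGISTRATION_STD_SESSION F
       INNER JOIN DS_V4_TARGET.dbo.D_H_Session_By_SessQtr S
         ON S.session_id = F.SESSION_ID
       INNER JOIN F_REGISTRATION_STD_SESSION R
         ON R.STUDENT_ID = F.STUDENT_ID
        AND R.SESSION_ID = S.previous_year_session_id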
View 10 Replies
View Related
Sep 9, 2007
Hi,
I have a table with 110 nvarchar(255) columns in the database. I receive the data from the server (an array) and then loop through it to insert the data into the database. The problem is it takes more than 3 minutes to insert (or update) (in memory) the 310 records it got from the array. I am reusing the statement (with parameters). How could I improve the performance?
It's really a small quantity of data, and the CPU in the device is 520MHz... it should be alright!
Can anyone help me please?
Thanks
View 5 Replies
View Related
Apr 17, 2008
Hi all,
I'm having what you might call an optimisation issue - but I'm also not sure if my approach to this problem is right. I've spent the whole day trying various methods but none seem to be performing as admirably as I'd hoped.
Basically, here's the scenario:
1. Log files are being inserted into a SQL table via Log Parser 2.2.
2. I have a lookup table of ip address ranges to countries and other geographic data.
3. Once the log row has been inserted from Log Parser, I want to update the same log table with data from the lookup table.
The main problem seems to be the updating (the initial insert from Log Parser is lightning quick).
I initially wrote a trigger on AFTER INSERT on the log table that converted the actual user's IP address into IPNumber format using a simple algorithm, then pumped the IPNumber into a quick select query which retrieved the geolocation data. Then, in the same trigger, there was an update statement which basically said:
update dbo.Logs
set [Country] = @Country
where [IpNumber] = @IpNumber and [Country] is null
Therein lies the problem. The update statement slows everything down to almost a standstill and also seems to obtain a lock on the table.
Critical factors:
1. The log table must remain readable.
2. The query must execute in seconds -- not over half an hour :)
I've experimented with various combinations of indexing, so far to no avail.
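One way out of the trigger-per-row trap is to leave the inserted rows alone and run a periodic set-based update that fills in the missing countries in bulk. A sketch; dbo.IpCountryRanges and its StartIpNumber/EndIpNumber columns are assumptions about the lookup table:

-- run on a schedule (or after each Log Parser batch) rather than per inserted row
UPDATE L
SET    L.[Country] = G.[Country]
FROM   dbo.Logs L
       INNER JOIN dbo.IpCountryRanges G
         ON L.[IpNumber] BETWEEN G.[StartIpNumber] AND G.[EndIpNumber]
WHERE  L.[Country] IS NULL

Indexing Logs on IpNumber/Country and batching the update keeps each pass short, so readers are not blocked for long.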
Any suggestions would be very much appreciated.
Regards
View 10 Replies
View Related
Jul 20, 2005
I have two tables:

T1 : Key as bigint, Data as char(20) - size: 61M records
T2 : Key as bigint, Data as char(20) - size: 5M records

T2 is the smaller, with 5 million records. They both have clustered indexes on Key. I want to do:

update T1 set Data = T2.Data
from T2
where T2.Key = T1.Key

The goal is to match Key values, and only update the data field of T1 if they match. SQL Server seems to optimize this query fairly well, doing an inner merge join on the Key fields; however, it then does a Hash Match to get the data fields and this is taking FOREVER. It takes something like 40 mins to do the above query, where it seems to me the data could be updated much more efficiently. I would expect to see just a merge and update, like I would see in the following query:

update T1 set Data = [someconstantdata]
from T2
where T2.Key = T1.Key and T2.Data = [someconstantdata]

The above works VERY quickly, and if I were to perform the above query 5 mil times (assuming that my data is completely unique in T2 and I would need to) it would finish very quickly, much sooner than the previous query. Why won't SQL Server just match these up while it is merging the data and update in one step? Can I make it do this? If I extracted the data in sorted order into a flat file, I could write a program in ten minutes to merge the two tables and update in one step, and it would fly through this, but I imagine that SQL Server is capable of doing it and I am just missing it. Any advice would be GREATLY appreciated!
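If the single 40-minute statement itself is the problem (log growth, memory for the hash), one workaround is walking T2 in Key ranges so each pass stays small and stays on the clustered indexes. Only a sketch; the 500,000 batch size is arbitrary and untested:

DECLARE @MinKey bigint, @MaxKey bigint, @BatchSize bigint
SELECT @MinKey = MIN([Key]), @MaxKey = MAX([Key]) FROM T2
SET @BatchSize = 500000

WHILE @MinKey <= @MaxKey
BEGIN
    UPDATE T1
    SET    T1.Data = T2.Data
    FROM   T1
           INNER JOIN T2 ON T2.[Key] = T1.[Key]
    WHERE  T2.[Key] BETWEEN @MinKey AND @MinKey + @BatchSize - 1

    SET @MinKey = @MinKey + @BatchSize
END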
View 3 Replies
View Related
May 11, 2007
I have a linked server set up on my local SQL 2000 instance. I try to run an update against a SQL 2005 database and it takes 29 seconds. I checked the execution plan and it says it spends the entire time on the Remote Scan. Is there something I need to do to speed this up? There is an index on the PK that I am searching against.
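One thing that often helps with linked-server updates is pushing the WHERE clause to the remote server with an updatable OPENQUERY, so only the matching rows come back instead of a full remote scan. A sketch with placeholder names (LinkedServerName, RemoteDb.dbo.SomeTable, KeyCol, ColToChange):

UPDATE OPENQUERY(LinkedServerName,
       'SELECT ColToChange FROM RemoteDb.dbo.SomeTable WHERE KeyCol = 123')
SET    ColToChange = 'new value'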
View 2 Replies
View Related
May 30, 2007
Hi
I have an update query that joins with 2 table variables. This update updates about 50,000 rows in a table.
update table1
from @table2 t2
inner join @table3 t3
on t2.id = t3.id
where table1.id = t2.id
The problem is that sometimes this update takes forever. If I replace the table variables with regular tables, it only takes about 11 seconds.
I am not sure what the problem is. Could this be a problem with tempdb? What should I be looking for in tempdb that might cause a problem?
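Since regular tables already bring it down to 11 seconds, the simplest change is probably swapping the table variables for #temp tables, which have statistics and can be keyed, so the optimizer can pick a sensible join plan for 50,000 rows. A rough sketch; the extra column and SomeColumn/SomeValue are placeholders because the post's SET list isn't shown:

CREATE TABLE #t2 (id int PRIMARY KEY, SomeValue int)
CREATE TABLE #t3 (id int PRIMARY KEY)

INSERT INTO #t2 (id, SomeValue) SELECT id, SomeValue FROM @table2
INSERT INTO #t3 (id)            SELECT id FROM @table3

UPDATE t1
SET    t1.SomeColumn = t2.SomeValue
FROM   table1 t1
       INNER JOIN #t2 t2 ON t1.id = t2.id
       INNER JOIN #t3 t3 ON t2.id = t3.id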
View 3 Replies
View Related
Jun 7, 2004
The following basic UPDATE SQL statement has been running for 16 hours and counting. I need to get this done ASAP.
UPDATE Recipients SET UndeliverableTime = getdate()
FROM Recipients
INNER JOIN Domains ON (Recipients.DomainID = Domains.ID)
INNER JOIN Undeliverables ON (
Recipients.UserName + '@' + Domains.Domain =
Undeliverables.EmailAddress)
Is there any way I can see how far this has gone and how long it will take to finish? Will this take another hour to finish or another week?
Both tables (Recipients and Undeliverables) have approximately 80 million records
I did a nearly identical operation with another table that had only 7 million records and it took 10.5 hours. I hope this doesn't scale linearly to 115 hours.
I am tempted to cancel, retune, and rerun, but that may trigger a really expensive rollback operation that could take days. Any ideas?
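For the rerun, one option worth considering is materialising the concatenated address once into an indexed working table and driving the update from that, instead of concatenating 80 million strings inside the join. A sketch; RecipientID is an assumption about the key column of Recipients:

SELECT  r.RecipientID,
        r.UserName + '@' + d.Domain AS EmailAddress
INTO    #RecipientEmail
FROM    Recipients r
        INNER JOIN Domains d ON r.DomainID = d.ID

CREATE CLUSTERED INDEX IX_RecipientEmail ON #RecipientEmail (EmailAddress)

UPDATE  r
SET     r.UndeliverableTime = GETDATE()
FROM    Recipients r
        INNER JOIN #RecipientEmail re ON re.RecipientID = r.RecipientID
        INNER JOIN Undeliverables u  ON u.EmailAddress  = re.EmailAddress

Batching the final update (for example by ranges of RecipientID) also keeps the transaction log and any rollback manageable.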
thanks!
View 14 Replies
View Related
Dec 4, 2014
I have a database with merge replication enabled.
The problem is that an update query is taking a long time.
But when I disable the merge triggers, it updates quickly.
View 3 Replies
View Related
Jul 20, 2005
I am using FOR UPDATE triggers to audit a table that has 67 fields. My problem is that this slows down the system significantly. I have narrowed the problem down to the size (lines of code) that needs to be compiled after the trigger has been fired. There are about 67 IF UPDATE(fieldName) blocks inside the trigger, each containing a not very complex select statement followed by an insert into the audit table. When I leave only a few IFs in the trigger and comment out the rest of the code, performance increases dramatically. It seems like it is checking every single UPDATE() statement. Assuming that the slowdown was due to doing a select for every update, I tried doing two separate selects at the beginning, from Deleted and Inserted, assigning the column values to variables, and instead of doing IF UPDATE(fieldName) I did

if @DelFieldName <> @InsFieldName
begin
    INSERT INTO AUDIT
    SELECT what I need
end

This did not improve performance. If you have any ideas on how to get around this issue please let me know. Below is an example of what my triggers look like.

-- Trigger 1 -- this was my original design
CREATE TRIGGER trigger1 ON [Table]
FOR UPDATE
AS
if update(field1)
begin
    insert into Audit
    SELECT what I need
END
if update(field2)
begin
    insert into Audit
    SELECT what I need
END
-- ... repeated about 65 more times ...
if update(field67)
begin
    insert into Audit
    SELECT what I need
END

-- Trigger 2 -- this is what I tried, but it did not improve performance
CREATE TRIGGER trigger2 ON [Table]
FOR UPDATE
AS
Declare @DelField1 varchar
Declare @DelField2 varchar
-- ...
Declare @DelField67 varchar

Select @DelField1 = Field1,
       @DelField2 = Field2,
       -- ...
       @DelField67 = Field67
From Deleted

Declare @InsField1 varchar
Declare @InsField2 varchar
-- ...
Declare @InsField67 varchar

Select @InsField1 = Field1,
       @InsField2 = Field2,
       -- ...
       @InsField67 = Field67
From Inserted

-- I do not do IF UPDATE() but instead compare variables
if @DelField1 <> @InsField1
begin
    Insert into AUDIT
    SELECT what I need
end
if @DelField2 <> @InsField2
begin
    Insert into AUDIT
    SELECT what I need
end
-- ...
if @DelField67 <> @InsField67
begin
    Insert into AUDIT
    SELECT what I need
end

If you have any idea how to optimize this please let me know. Any input is greatly appreciated. I do not have a problem with the triggers doing what they are supposed to do; they are just very slow, and that is my concern. The reason I gave you two examples is that I suspect it has something to do with the enormous amount of code inside the trigger. Both examples perform about the same whether I use the two huge selects from Inserted and Deleted or not.
Thanks,
Gent
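For anyone hitting the same wall: a hedged alternative is to replace the 67 IF blocks with a single set-based insert that joins inserted to deleted and filters on changed columns, so the trigger body stays small. Only a sketch; PKColumn and the selected column list are assumptions about the real table and audit schema:

CREATE TRIGGER trigger3 ON [Table]
FOR UPDATE
AS
INSERT INTO Audit
SELECT i.PKColumn, GETDATE()          -- placeholder for "what I need"
FROM   inserted i
       INNER JOIN deleted d ON d.PKColumn = i.PKColumn
WHERE  d.Field1 <> i.Field1
   OR  d.Field2 <> i.Field2
   -- ... one OR per audited column (note: plain <> ignores NULL-to-value changes)
   OR  d.Field67 <> i.Field67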
View 1 Replies
View Related
Nov 5, 2006
We just moved a database from a shared SQL 2000 server to SQL Express on a VPS server. Everything seems to work great, really no performance loss at all, except for one stored proc that is giving us problems. Its function is to update a history table and then update the production table. On the old server it would take less than a second to run; now it takes anywhere from 45 seconds to 1 minute, or it times out. This database is used by a classic ASP web app. ANY help at all on this would be appreciated, as I am pulling my hair out trying to figure out what's wrong here. Here is the proc.
------------------------------------------ code ----------------------------------------------
set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
SET NOCOUNT ON
go
ALTER PROCEDURE [dbo].[sp_UpdatePersonalization]
@p_id nvarchar(50),
@cust_num varchar(50),
--@publication varchar(25),
@full_name varchar(500),
@email varchar(200),
@web_address varchar(300),
@include_web bit,
@ReturnAddressYN varchar(1),
@include_email bit,
@job_title varchar(500),
@company_name varchar(500),
@tagline varchar(1000),
@phone1 varchar(30),
@phone2 varchar(30),
@phone3 varchar(30),
@phone4 varchar(30),
@phoneext1 varchar(10),
@phoneext2 varchar(10),
@phoneext3 varchar(10),
@phoneext4 varchar(10),
@phonedesc1 varchar(20),
@phonedesc2 varchar(20),
@phonedesc3 varchar(20),
@phonedesc4 varchar(20),
@company_mail_address varchar(500),
@masthead varchar(250),
@verbiage_page1 varchar(1400),
@verbiage_page4 varchar(2650),
@inc_bbb bit,
@inc_ehl bit,
@inc_eho bit,
@inc_rl bit,
@p_comments varchar(500),
@p_update_flag varchar(10)
--@frequency varchar(50),
--@mail varchar(50),
--@fold varchar(50),
--@contact varchar(50),
-- do not add this on the update! @date_added,
AS
declare @last_updated datetime
select @last_updated=GETDATE()
declare @set_update bit
--if @p_update_flag='False'
--select @set_update = 1
--else
--select @set_update = 0
if @p_update_flag='False'
INSERT INTO pers_main_arch
(
p_id,
cust_num,
publication,
full_name,
email,
web_address,
include_web,
include_email,
job_title,
company_name,
tagline,
phone1,
phone2,
phone3,
phone4,
phoneext1,
phoneext2,
phoneext3,
phoneext4,
phonedesc1,
phonedesc2,
phonedesc3,
phonedesc4,
company_mail_address,
masthead,
verbiage_page1,
verbiage_page4,
inc_bbb,
inc_ehl,
inc_eho,
inc_rl,
p_comments,
frequency,
mail,
fold,
contact,
date_added,
last_updated,
start_month,
ReturnAddressYN
)
SELECT
pm.p_id,
pm.cust_num,
pm.publication,
pm.full_name,
pm.email,
pm.web_address,
pm.include_web,
pm.include_email,
pm.job_title,
pm.company_name,
pm.tagline,
pm.phone1,
pm.phone2,
pm.phone3,
pm.phone4,
pm.phoneext1,
pm.phoneext2,
pm.phoneext3,
pm.phoneext4,
pm.phonedesc1,
pm.phonedesc2,
pm.phonedesc3,
pm.phonedesc4,
pm.company_mail_address,
pm.masthead,
pm.verbiage_page1,
pm.verbiage_page4,
pm.inc_bbb,
pm.inc_ehl,
pm.inc_eho,
pm.inc_rl,
pm.p_comments,
pm.frequency,
pm.mail,
pm.fold,
pm.contact,
pm.date_added,
pm.last_updated,
pm.start_month,
pm.ReturnAddressYN
FROM pers_main pm
WHERE pm.cust_num = @cust_num
if @p_update_flag='True' OR @p_update_flag='False' OR @p_update_flag IS NULL OR @p_update_flag=''
UPDATE pers_main SET
--cust_num=@cust_num,
--publication=@publication,
full_name=@full_name,
email=@email,
web_address=@web_address,
include_web=@include_web,
include_email=@include_email,
job_title=@job_title,
company_name=@company_name,
tagline=@tagline,
phone1=@phone1,
phone2=@phone2,
phone3=@phone3,
phone4=@phone4,
phoneext1=@phoneext1,
phoneext2=@phoneext2,
phoneext3=@phoneext3,
phoneext4=@phoneext4,
phonedesc1=@phonedesc1,
phonedesc2=@phonedesc2,
phonedesc3=@phonedesc3,
phonedesc4=@phonedesc4,
company_mail_address=@company_mail_address,
masthead=@masthead,
verbiage_page1=@verbiage_page1,
verbiage_page4=@verbiage_page4,
inc_bbb=@inc_bbb,
inc_ehl=@inc_ehl,
inc_eho=@inc_eho,
inc_rl=@inc_rl,
p_comments=@p_comments,
--frequency=@frequency,
--mail=@mail,
--fold=@fold,
--contact=@contact,
--date_added,
last_updated=@last_updated,
updated_flag=1,
ReturnAddressYN=@ReturnAddressYN
WHERE cust_num=@cust_num
------------------------------------------ code ----------------------------------------------
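One cheap thing to check before anything else: both the archive INSERT ... SELECT and the final UPDATE filter pers_main on cust_num, so if the new server is missing an index on that column every call scans the table. A sketch, assuming the index really is missing (the name is arbitrary):

CREATE INDEX IX_pers_main_cust_num ON dbo.pers_main (cust_num)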
View 1 Replies
View Related
Jan 14, 2008
Hi
I have the following problem (on 2 PCs now).
On one PC I installed VS2008 alongside VS2005 and SQL 2005 (before I installed VS2008 everything was fine).
The other is a freshly installed XP machine with only VS2008, SQL 2005 and Office 2007.
The latest service packs are installed.
On both machines, when I open a table in SQL 2005 with the 'Open Table' command, the display of the table is very slow.
Every row is updated field by field, the grid is not shown, and the row and column headers are not grey as normal.
Scrolling is impossible, as the slow screen updates start all over again.
Even a small table with 10 rows and 15 fields behaves this slowly. It's not workable.
What can be done to resolve this?
Peter
View 11 Replies
View Related
Dec 6, 2007
Hi All,
I would like to know whether it is possible to add an updated-date column to a slowly changing dimension table using the Slowly Changing Dimension data flow transformation.
I would like to keep track of what record is updated in the dimension table based on the data being processed.
Thanks for your help and information
Regards,
Fadzli
View 1 Replies
View Related
Aug 14, 2001
In an ASP, I have a dynamically created SQL statement that amounts to "SELECT * FROM Server1.myDB.dbo.myTable WHERE Col1 = 1" (Col1 is the table's primary key). It returns the data immediately when executed.
However, when the same record is updated with "UPDATE Server1.myDB.dbo.myTable SET Comments = 'blah blah blah' WHERE Col1 = 1", the page times out before the query can complete.
I watched the program in Profiler, and I saw on the update that sp_cursorfetch was being executed as an RPC once for each row in the table. In a table of 78000 records, the timeout occurs well before the last record is fetched, and the update bombs.
I can run the same statements in Query Analyzer from a linked server and have the same results. The execution plan shows that a Remote Query is occurring on the select that returns 1 row, and a Remote Scan is taking place on the update scanning 78000 rows (I guess this is where all the sp_cursorfetch calls are happening...?).
How can I prevent the Remote Scan? How can I prevent the execution of the RPC sp_cursorfetch for each row in the remote table?
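One workaround for the row-by-row sp_cursorfetch behaviour is an updatable OPENQUERY, which sends the WHERE clause to the remote server so only the one matching row is touched there. A sketch using the names from the post:

UPDATE OPENQUERY(Server1,
       'SELECT Comments FROM myDB.dbo.myTable WHERE Col1 = 1')
SET    Comments = 'blah blah blah'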
Thank you!
View 2 Replies
View Related
Jul 4, 2007
Hi there,
We have a big problem here with merge replication, specifically whenever a schema change occurs. We are replicating schema changes, triggers and stored procedures. The example is that we changed about 150 stored procedures and about 30 triggers. This is then replicated to the subscriber database (which is also a merge publisher for further remote systems that were offline at the time) over a 10Mb link - hardly low bandwidth. However, the replication process takes about an hour and a half - considering the SQL on the primary server took less than a minute to run, this is a big surprise.
We've run a trace to see if we can identify what is going on. There seems to be a great number of calls to sp_MSunmarkschemaobject - we are still waiting for a trace to complete to fully analyse this, however it looks like it calls this repeatedly for every stored procedure in the database. We are currently re-testing to one of the remote servers with the merge agent set to the slow profile (not much hope that this will alter the poor performance).
This task looks to be excessive - and certainly does not seem to function in a sensible manner. Has anyone else had similar issues, or does anyone have any suggestions? This is very infuriating as it means that the servers are effectively offline for a minimum of an hour and a half (in fact the remote servers take over 4 hours!).
Thanks.
View 2 Replies
View Related
Aug 23, 2000
I have a database that's 2.5GB but only has about 17MB of actual data. I've set up a standby server that I load my dumps into. The load takes about 10 minutes. The dump takes about a minute and a half (which also seems slow to me for that small amount of data). I don't expect that it should take that long to load 8800 pages into a database. The standby server is the same hardware as the production server (single 500MHz Xeon, 2GB RAM, RAID 5). The server has only a single RAID 5 array to store the OS and all the SQL data; however, I still don't think it should take that long to load. Let me know what you think.
--Buddy
View 1 Replies
View Related
Oct 5, 2004
I have an Access database; the data is stored on another server. This noon, one of our users found the Access database running very slowly. Opening the database, searching the data, etc. all took a long time. Does anybody have experience with this and why it happens? We have eTrust installed on each user machine; could that be what is causing the slowness? Thanks in advance.
View 1 Replies
View Related
Jul 31, 2007
Dear friends,
I have dropped and recreated many objects in a database, and suddenly the database became very slow. Could anyone please suggest a solution?
View 9 Replies
View Related
Sep 28, 2007
HI Friends,
My database is performing slowly. Which areas should I look at? Can anybody please explain, step by step?
Thanks,
Siva
View 2 Replies
View Related
Jul 20, 2005
We just installed SQL Server version 800.194 on a dual processor server equipped with a gigabyte of RAM, running the Windows 2000 Server operating system. We set up a few databases with (so far) very tiny tables.

When I am working locally (i.e. on the server itself) with Query Analyzer, even the simplest operation is incredibly slow. If I bring up Windows Task Manager looking at the Processes pane (Query Analyzer shows up as "isqlw.exe"), and use "View/Select Columns..." to choose "I/O Writes" and "I/O Write Bytes", then I observe that doing a select * on a table with a single row results in over 500 "I/O Writes" and 170,000 "I/O Write Bytes" of data written. It requires 15 seconds to return the single row of information. Even clicking on the Change Database listbox results in hundreds of write operations!

However, when I am working remotely with Query Analyzer, the select * works perfectly normally. Neither "I/O Writes" nor "I/O Write Bytes" are recorded.

I figure maybe there is some sort of security logging turned on that records everything you do... Whatever is going on here, how do you turn it off?
Tom
View 1 Replies
View Related
Jul 20, 2005
I have a database that worked under MSDE 1.0 until it reached the 2GB size limit and the DB was blocked. To make it possible to keep working I deleted old data from some tables. The database started again, but the answers from the server became very slow. So I decided to move to SQL Server 2000, without success. How can I perform a check of this database? If it is an index problem, is there a way to rebuild the indexes?
Thanks in advance
Andy Wet
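As a first pass, a consistency check plus an index rebuild and statistics refresh is cheap to try. A minimal sketch; YourDb and YourTable are placeholders for the real names:

DBCC CHECKDB ('YourDb')            -- check physical/logical consistency
DBCC DBREINDEX ('YourTable')       -- rebuild all indexes on the table
EXEC sp_updatestats                -- refresh statistics database-wide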
View 1 Replies
View Related
Sep 15, 2007
I just found out that the response time for opening a connection or executing a SQL command over VPN is very slow. It takes around 150ms for each round trip. If the same program runs on the LAN, it takes less than 1ms. I understand that VPN may add encryption and thus a bit of delay. However, if the delay happens whenever I make a SQL call, it will be unacceptable. Is there anything I have missed? If the delay occurred only once, it would still be fine (I think this is the point of connection pooling, right?), but it's really bad if the delay occurs each time I call SQL. Please help!
View 5 Replies
View Related
Jan 9, 2008
Has anyone else experienced this?
Database Restore takes much longer on Windows 2003 64-bit than on 32-bit...
Is this simply the Service Pack level or does it have to do with the 64-bit/32-bit issue?
We have a Development/QA/Production environment setup in this manner:
DEV - fast restores - (about 2 hours)
OS: Windows Server 2003 R2 Service Pack 2
DB: SQL 2000 Service Pack 4 (32-bit)
QA - slow restores - (about 10 hours)
OS: Windows Server 2003 x64 Service Pack 1
DB: SQL 2000 Service Pack 4 (32-bit)
Production - slow restores - (about 10 hours)
OS: Windows Server 2003 x64 Service Pack 1
DB: SQL 2000 Service Pack 4 (32-bit)
Thank you for your time in advance!!!
Adminicrater
View 6 Replies
View Related
Feb 12, 2008
I have a portal site that has many iframes loading various pages. One of the iframes requires data from a database that has a slow connection; right now there is nothing we can do about the slow connection, and it is something we have to live with.
What seems to be happening though is that even though each page is loading seperatly in an iframe, when the page loads with the slow connection, it seems to hold up processing on the server for the other frames until the connection has been established with the server. It can be something like 10 seconds. I am guessing trying to establish the connection is holding up the worker process on IIS???
So I am trying to find a workaround bearing in mind there is nothing we can do about the slow connection for the time being? Does anyone have any suggestions? One I am thinking of is forcing this frame to load last so at least the other frames are not being held up. Another is maybe to use a seperate thread, but does anyone have any idea on this?
Thanks in advance
View 1 Replies
View Related
Aug 2, 2007
I'm querying a small SQL2005 database and finding that the query can sometimes complete in under a second and then 5 minutes later the same query can take 15 minutes to complete.
The query I'm running is very simple as follows:
select TOP 26 * from vSearchListOpportunityItem WHERE OpIt_OpportunityId=2495 ORDER BY Prod_Name, OpIt_OpportunityItemId
The view it is pulling data from contains only 1890 rows, which in turn pulls data from 3 tables with 821, 2560, and 1957 rows of data. In other words, it's small. I have noticed that if I try to open the smallest of these tables during a 'go slow' period it also takes around 15 minutes to return the data.
The database was originally on SQL 2000. It is the only database on this powerful quad core server.
The SQL Server CPU usage never goes above 40%, and always has free memory.
No sign of locks.
I can't figure out why such a small database is going so slow with such a simple query. Any ideas?
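Given the database came over from SQL 2000 and the same query flips between fast and slow, it may be worth ruling out stale statistics or a bad cached plan before anything else. A minimal sketch of the two cheapest checks:

EXEC sp_updatestats        -- refresh statistics across the database
DBCC FREEPROCCACHE         -- clear cached plans so the query recompiles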
View 8 Replies
View Related
Feb 28, 2007
I am developing a Mobile 5.0 application. I use SQL Mobile as the database on the PDA.
In the program, I use dataset.xsd to create the table and TableAdapter, but the performance is very slow just accessing the data from the database. It takes about 4200 ms just for
this.userAdapter = new PDA_USERTableAdapter();
MBDB2DataSet.PDA_USERDataTable ut = userAdapter.GetUser();
the "new PDA_USERTableAdapter()" is very fast.But...
The userAdapter.GetUser() will only return about 20 rows, each rows only contains 5 field .But it cost 4200 ms for this line.
The sql statement in userAdapter.GetUser() is
SELECT User, PASSWORD, TYPE, USER_ID, Supervisor_ID
FROM PDA_USER
WHERE (TYPE = 5)
ORDER BY User
Please Help, Urgent!!!!!
p.s (The total rows in the PDA_USER table is only 30 rows)
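With only ~30 rows the bigger cost is probably opening the database/connection rather than the query itself, but it is still worth making sure there is an index that matches the filter and the sort, so SQL Mobile does not scan and sort on the device. A sketch (the index name is arbitrary):

CREATE INDEX IX_PDA_USER_TYPE_USER ON PDA_USER ([TYPE], [User])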
Thank you very much
View 3 Replies
View Related
Dec 26, 2003
Hi, I have a SQL Server instance on my system and it is linked to an Oracle database on another server. When I run queries against this other server, it takes forever.
However, when I use Access, link the table, and run the same query against the Oracle database, it runs immediately.
I am very confused as to why there would be such a performance difference and why SQL Server would run so slowly.
I am wondering if it has something to do with the way I configured the linked server; there are several options whose meaning I didn't know:
collation compatible (not selected)
Data access (selected)
RPC (not selected)
RPC Out (not selected)
collation name
connection timeout
query timeout
View 1 Replies
View Related
Apr 18, 2002
I upgraded from 6.5 to 7.0 SP3. Now when I save (write) an invoice it takes about 10-12 seconds, at 6.5 it was 1-3 seconds. SQL Server and my Materials App are the only thing running on this box. This is the only area that has gotten slower everything else works great. I have 3 users saving invoices and about 15 people total using the system at one time. It's a compaq DL580 loaded with memory, database is 2,195MB in size. Same 6.5 client to access system as before. Should I rebuild/reindex the database? Is there something from the old 6.5 version I need to remove?? Thanks in advance!!!
View 1 Replies
View Related
May 29, 2008
Hi guys,
I am asking this question on behalf of a friend. I have little knowledge of SQL 2005, but my friend is quite knowledgeable, although this is the first time he is dealing with a large database for a client. So here's the story.
His client has a database containing 1.5 million books. Now he is setting up a website which will enable users to search for books. Searching by ISBN is no problem as it only takes 1 second. The problem is that searching by Title takes more than 20 seconds, which is unacceptable. My friend has only worked with smaller databases; he just recently thought of implementing indexing and is now looking for other ideas.
Each row contains book details such as Title, Author1, Author2, Author3, Publisher, Publication Date, ISBN, etc.
Can anyone who is more experienced with large databases share some design ideas? His client is aiming for 8 seconds or less.
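If the title searches are word matches rather than exact prefixes, a plain index will not help a LIKE '%term%' query; the usual SQL 2005 route is a full-text index. A hedged sketch only; Books, Title and PK_Books are placeholder names for the real table, column and primary key index:

CREATE FULLTEXT CATALOG BookCatalog

CREATE FULLTEXT INDEX ON dbo.Books (Title)
    KEY INDEX PK_Books ON BookCatalog

-- then search with something like
SELECT TOP 50 * FROM dbo.Books WHERE CONTAINS(Title, 'gardening')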
Thanks in advance!
View 14 Replies
View Related
Apr 30, 2007
Hi,
My database was previously running on sql2000 with 2 gigs of RAM and 2 x 2.8ghz XEON processors, and was running pretty decently.
I've now upgraded to SQL 2005, 8 gigs of RAM, and one Intel 5130 2GHz processor (supposedly more CPU power than before).
The problem is its now running very SLOW.
I have run a trace and I'm finding queries that used to take 50,000 reads are now taking 1.4 million reads (25x more). The system runs for a decent amount of time but then SLOWS down massively for a while. I can't find any cause yet.
What could be causing this ? What steps can I take towards resolving this? I have ran Tara's isp_ALTER_INDEX to try and help, I'm not sure what else to do.
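After a 2000-to-2005 move it is usually worth refreshing statistics and the usage counters carried over from the old server, since the 2005 optimizer can pick poor plans from stale 2000-era metadata. A minimal sketch to run in the upgraded database:

EXEC sp_updatestats        -- refresh statistics
DBCC UPDATEUSAGE (0)       -- correct row/page counts inherited from SQL 2000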
Any suggestions are GREATLY appreciated..
Thanks very much,
mike123
View 7 Replies
View Related