Performance Issue With Data Retrieval

Apr 13, 2008

Hi,

I have a table (a replica of my original table, for test purposes) with half a million records. The structure of the table is as follows:

id (int), Name (varchar), DataComplete (DateTime), Age (int), bit1 (bit), bit2 (bit), bit3 (bit), bit4 (bit), bit5 (bit), bit6 (bit), bit7 (bit), bit8 (bit), bit9 (bit), bit10 (bit), bit11 (bit), bit12 (bit)

 

Data retrieval on this table is (in my opinion) pretty slow. A simple (SELECT * FROM MYTABLE) takes 5 seconds. How can I improve the retrieval time? I have tried defining some indexes, but saw no improvement. Can some guru help me in this regard? At the least, the SELECT * should return data in 1 second or less.
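For illustration, a hedged sketch of the usual first steps. Note that no index can speed up a bare SELECT * with no WHERE clause: a full scan returns every row and column regardless, and with half a million rows most of the 5 seconds is likely spent transferring the result set to the client. The filter and column list below are hypothetical additions, using the columns from the table above:

-- Hypothetical: index the column the WHERE clause filters on
CREATE NONCLUSTERED INDEX IX_MYTABLE_DataComplete ON MYTABLE (DataComplete);

-- Return only the columns and rows actually needed
SELECT id, Name, DataComplete, Age
FROM MYTABLE
WHERE DataComplete >= '20080101';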

Thanks in advance...

View 2 Replies



Data Replication Performance.

Jul 20, 2005

I'm searching for information regarding database replication performance. We need to compare the performance of replication for SQL Server and Oracle, and it is urgent! Can anyone describe the performance bottlenecks for each database when performing replication, or point me to a white paper or web page?

View 1 Replies View Related

Performance Sqlclient Data Provider

Dec 12, 2005

Well, I hope, after my endless search on the web, somebody can help me here. I'm converting an application to a web-based application, so I have to use ADO.NET to access a SQL Server DB. But the performance is extremely poor!

Just to compare: with the normal application, or with the SQL Query Analyzer, the query needs about 2 seconds. Using ADO.NET, it's about 200 seconds! Of course, the query is quite long, but the difference is extreme... too extreme for the same query.

I have already tried a lot, actually everything I found on the web. What is really strange is that neither the processor nor the memory is fully used, and the web server uses more of the CPU power than the SQL Server itself (the two servers are still on the machine where I'm programming). So could the problem be in filling the DataSet with the SqlDataAdapter, or something similar?

Thx for your help!!

View 8 Replies View Related

Text Data Type And Performance

Dec 13, 2005

I was wondering if someone could clear this up for me...
I have a table that will be used to store information about products that I will be selling on a new website. I would like to store each product in a table which will include a description of around 1000 words. I was deliberating over whether to store this chunk of text in a column with the data type set as 'text', or to store the text in a separate .txt file on the server. The search facility on the site will not be required to query this text, so which would offer the best performance?
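For what it's worth, a minimal sketch of the in-database option (table and column names hypothetical; on SQL Server 2005, varchar(max) is the usual replacement for the older text type):

CREATE TABLE Products (
    ProductID int IDENTITY(1,1) PRIMARY KEY,
    ProductName varchar(100) NOT NULL,
    ProductDescription varchar(max) NULL  -- values too large for the row are moved off-row automatically
);

Queries that do not select the description column do not have to read its off-row pages, so keeping the text in the table should cost little as long as SELECT lists stay explicit.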

View 3 Replies View Related

How To Import Performance Log Data In Profiler

Nov 9, 2007

I am running a trace, and one of the columns is start time.
I want to import the corresponding performance log data into Profiler; it is a new feature in 2005. However, this option is disabled in Profiler. The option is at:
File -> Import Performance Data

Please advise on how to enable this option.

thanks in advance

View 5 Replies View Related

Performance And Data Structure Question

Feb 2, 2006

Hi SQL gurus,

I have a table structure question. I will have a table 'Models' that has one-to-many 'incomes' and one-to-many 'costs'. These 2 entities have exactly the same structure, which is 7 smallmoney fields and a name. Is it better to create a table 'Incomes' and a table 'Costs', both with the same number of fields, like this:

Incomes
-------------
in_id
model
in_1
in_2
in_3
in_4
in_5
in_6
in_7
in_name

Costs
-------------
c_id
model
c_1
c_2
c_3
c_4
c_5
c_6
c_7
c_name

or is it better to create one single table that will contain both entities, like this:

Incomes_Costs
-------------
ic_id
model
ic_1
ic_2
ic_3
ic_4
ic_5
ic_6
ic_7
ic_name
ic_isIncome

which only differs from the two above by the isIncome field, to tell which row is an income and which is a cost? I'd like to know which method is best in terms of performance and general structure, and I would greatly appreciate it if you explained a little the reasons that drove you to suggest one method over the other.

Thanks all for your time!
ibiza
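For concreteness, a hedged sketch of the two designs as DDL (the varchar lengths, key types, and the foreign-key target in Models are assumptions):

-- Design 1: two parallel tables (Costs would mirror Incomes exactly)
CREATE TABLE Incomes (
    in_id int IDENTITY(1,1) PRIMARY KEY,
    model int NOT NULL,  -- FK to Models; definition assumed
    in_1 smallmoney, in_2 smallmoney, in_3 smallmoney, in_4 smallmoney,
    in_5 smallmoney, in_6 smallmoney, in_7 smallmoney,
    in_name varchar(50)
);

-- Design 2: one table with a discriminator flag
CREATE TABLE Incomes_Costs (
    ic_id int IDENTITY(1,1) PRIMARY KEY,
    model int NOT NULL,
    ic_1 smallmoney, ic_2 smallmoney, ic_3 smallmoney, ic_4 smallmoney,
    ic_5 smallmoney, ic_6 smallmoney, ic_7 smallmoney,
    ic_name varchar(50),
    ic_isIncome bit NOT NULL  -- 1 = income row, 0 = cost row
);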

View 4 Replies View Related

Performance Degradation On Data Transfer

Jul 20, 2005

Hello guys,

Wonder if any of you could help me out here. I have just created a new empty database and imported data from another database into it. This was done with the import wizard from MMC.

First thing that I noticed was the size difference: the old database was well over 1GB, but the new one was only about 400MB. Second thing I've noticed, and this is the problem, is that accessing the new (smaller) database instead of the old one causes a huge speed degradation, about 5 times slower than the old version.

We are using MS SQL Server 7. Any help would be very gratefully received.

Regards
Gethyn
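One common culprit here: by default the import wizard copies table structure and data but not indexes, keys, or triggers, which would account for both the smaller size and the slowdown. Comparing the output of the following, per table, on both databases would confirm or rule that out (table name hypothetical):

EXEC sp_helpindex 'MyTable';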

View 1 Replies View Related

Performance Issue With Huge Data

Mar 20, 2007

Hello All,

I am using SSIS to transfer data between two SQL Servers (2000). There is no transformation involved, as the source and destination table structures are the same. Even then, the package execution takes a lot of time.

The tables hold on the order of 66,000,000 rows, and we were required to kill the package execution after it took more than 24 hours. The CPU usage was more than 13,000s and disk I/O was well above 330,000,000. I am new to the finer points of SSIS. Can anyone please tell me why the package has become so resource-hungry?



Thanks in advance,

Atul

View 3 Replies View Related

Data Model Design For Query Performance

Apr 22, 2008

I have an opportunity to rebuild a database model with the express purpose of improving query performance. So, given the following, I have a few questions.

Table A (~500M records)
Primary Key Field (int)
Field 1 (varchar)
Field 2 (varchar)
Field 3 (varchar)
Field 4 (varchar)
Field 5 (varchar)

Table B (1B+ records)
Primary Key Field (int)
Foreign Key Field (int)
Field 1 (varchar)
Field 2 (varchar)
Field 3 (varchar)
Field 4 (varchar)
Field 5 (varchar)

* Assumed: Tables are inner joined on all queries. The database is read-only.

-- Most of my lookups are based on querying Field 1 of Table A. The data content of Field 1, Table A is 90% unique.
1) Would it be more beneficial to put the clustered index on Field 1 instead of the PK field in Table A? (see the sketch after these questions)
2) Can an Identity column be non-clustered?
3) Alternatively, would it be beneficial to build a separate lookup table with just the PK & Field 1 of Table A, with a clustered index on the lookup table Field 1 which I join on Table A? (did that make sense?)

-- I have a secondary lookup that performs queries on Fields 1, 2, 3, 4 & 5 of Table B
1) Would it be more beneficial to create an additional indexed lookup column of the concatenated values of Fields 1-5 of Table B versus a covering index of all 5 columns?
2) Does a clustered index have to be unique?
3) Would a clustered index be more beneficial over Fields 1-5 or the special lookup column versus the PK or FK fields?
4) Would creating a special lookup table with just the requisite fields be more beneficial?
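On questions 1) and 2) of the first group: an IDENTITY primary key can indeed be declared NONCLUSTERED, which frees the clustered index for Field 1, and a clustered index does not have to be unique (SQL Server adds a hidden uniquifier to duplicate keys). A hedged sketch, with column types assumed:

CREATE TABLE TableA (
    PKField int IDENTITY(1,1) NOT NULL,
    Field1 varchar(100) NOT NULL,
    -- Fields 2 through 5 omitted for brevity
    CONSTRAINT PK_TableA PRIMARY KEY NONCLUSTERED (PKField)
);

-- Non-unique clustered index on the 90%-unique lookup column
CREATE CLUSTERED INDEX CIX_TableA_Field1 ON TableA (Field1);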

An extra question. The existing data model uses the CHAR datatype for all columns less than 9 characters wide and the columns are set to allow nulls. This requires every select statement to COALESCE() and RTRIM() all these columns. I intend to make all (affected) columns VARCHAR, NOT NULL with a default value of a 0-length string.
Will this enhance query performance?

Thanks in advance for any insight.

View 7 Replies View Related

How Do You Improve SQL Performance Over A Large Amount Of Data?

Jul 23, 2005

Hi,

I am using SQL 2000 and have a table that contains more than 2 million rows of data (and growing). Right now I have encountered 2 problems:

1) Sometimes, when I try to query against this table, I get a SQL command timeout. So I did more testing with Query Analyzer, and found that the same queries would not always take about the same time to execute. Could anyone please tell me what would affect the speed of the query, and what is the most important factor among them all? (I can think of the open connections, the server's CPU/memory...)

2) I am not sure if 2 million rows is considered a lot or not; however, it is starting to take 5-10 seconds for me to finish some simple queries. I am wondering what the best practices are for handling this amount of data while keeping decent performance.

Thank you,
Charlie Chang
[Charlies224@hotmail.com]
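Two routine checks when timings vary like this are blocking from other connections and stale statistics producing a bad plan; the latter can be ruled out cheaply with something like the following (table name hypothetical):

UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;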

View 5 Replies View Related

How To Acquire The SQL Server 2000 Performance Data?

Apr 11, 2006

I want to write an application monitoring program to collect SQL Server 2000 performance data, such as pages/sec, bytes total/sec, etc., BUT I don't know how to do it. In Oracle there are the v$ views and DBA views, where I can find the information I'm interested in. The question is: is there a similar suite of views in SQL Server 2000 that provides this performance information? Thank you very much; this question will drive me mad, for I have googled all day, but in vain.
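The closest SQL Server 2000 analogue to Oracle's v$ views is the system table master.dbo.sysperfinfo, which exposes SQL Server's own performance counters (OS-level counters such as Pages/sec still have to come from Performance Monitor or WMI). For example:

SELECT object_name, counter_name, instance_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE counter_name LIKE '%Transactions/sec%';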

View 1 Replies View Related

Performance To Write Data In SQL Server 2005

May 25, 2007

Dear friends,

A simple question for you... which of these data flow destinations performs best for writing data to SQL Server 2005:



1. SQL Server Destination?

2. OLEDB Destination?

3. Other?



Thanks!

View 1 Replies View Related

Performance Of Set-Type Data Types Created Using CLR

May 8, 2008

In my applications, I often encounter situations where the domain of a type is some set (e.g. Student, Staff, etc. for a Membership type). A common solution is to assign integral values to each possible value of the set and convert them to the corresponding strings programmatically when fetching data, or to use a foreign key into a relation defining each of the unique values.

MySQL has a SET data type. So, this time around, I decided to do the same with SQL Server, and resorted to the CLR to create a custom data type whose domain could only be a collection of predefined strings or other data types.

But just before deploying the assembly, the question of performance struck me. So, assuming a simple custom data type like:





<Serializable()> _
<Microsoft.SqlServer.Server.SqlUserDefinedType(Format.Native)> _
Public Structure MemberType
    Implements INullable

    ' Internal codes for the two allowed values
    Private Const Student As String = "Student"
    Private Const Staff As String = "Staff"

    Private memType As Int16
    Private m_Null As Boolean

    Public Overrides Function ToString() As String
        If (Me.m_Null) Then
            Throw New SqlTypes.SqlNullValueException("Null Reference Exception")
        End If

        Select Case memType
            Case 1
                Return Student
            Case 2
                Return Staff
            Case Else
                Throw New Exception("Invalid State for MemberType")
        End Select
    End Function

    Public ReadOnly Property IsNull() As Boolean Implements INullable.IsNull
        Get
            Return (Me.m_Null)
        End Get
    End Property

    Public Shared ReadOnly Property Null() As MemberType
        Get
            Dim h As MemberType = New MemberType
            h.m_Null = True
            Return h
        End Get
    End Property

    ' Accepts either the string name or its numeric code
    Public Shared Function Parse(ByVal s As SqlString) As MemberType
        If s.IsNull Then
            Return Null

        ElseIf (StrComp(s.ToString(), Student, CompareMethod.Text) = 0) _
                OrElse StrComp(s.ToString(), "1") = 0 Then
            Dim u As New MemberType
            u.memType = 1
            u.m_Null = False

            Return (u)

        ElseIf (StrComp(s.ToString(), Staff, CompareMethod.Text) = 0) _
                OrElse StrComp(s.ToString(), "2") = 0 Then
            Dim u As New MemberType
            u.memType = 2
            u.m_Null = False

            Return (u)

        Else
            Throw New SqlTypes.SqlTypeException("Invalid Value for MemberType")
        End If
    End Function
End Structure
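For reference, a hedged sketch of how such a type would be used from T-SQL once the assembly is deployed and the type registered (table name hypothetical; CREATE TYPE against the assembly is assumed to have been run):

CREATE TABLE dbo.Members (
    id int IDENTITY(1,1) PRIMARY KEY,
    mtype dbo.MemberType NOT NULL  -- the CLR UDT defined above
);

INSERT INTO dbo.Members (mtype) VALUES (CAST('Student' AS dbo.MemberType));

-- Method names on CLR UDT columns are case-sensitive in T-SQL
SELECT id, mtype.ToString() AS member_type FROM dbo.Members;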
Would it be a performance issue if I use such data types for columns whose data is fetched or updated frequently?

View 11 Replies View Related

Data Files/ Trans Logs --- Performance Counters?

Nov 7, 2002

There are so many to choose from; which ones are the most important to monitor?

Also, if you have your data files and transaction logs set to grow automatically and would like to change this to a fixed size, is there a way to determine how large you should set them? Thanks in advance.
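On the second question: a common rule of thumb is to pre-size files to the expected volume plus headroom so that autogrow rarely fires; the change itself is one ALTER DATABASE per file (database name, logical file names, and sizes below are hypothetical):

ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Data, SIZE = 2048MB, FILEGROWTH = 256MB);

ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Log, SIZE = 512MB, FILEGROWTH = 128MB);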

View 1 Replies View Related

Question On Data Mining Report Performance Optimization

May 3, 2007

Hi, all experts here,

Would any of you give me ideas on how we could optimize reports built on data mining models? (As we know, for a data mining report, we have to select the mining model and the case table.)

I hope the question is clear. Thanks for your advice and help.

Thanks a lot in advance and I am looking forward to hearing from you shortly.

With best regards,

Yours sincerely,

View 6 Replies View Related

Performance Problems With SQL Commands In Data Flow Task

Jan 29, 2007

A SQL statement within an OLE DB Command component is extremely slow (hours, days). The same SQL statement executed in a query window of SQL Server Management Studio takes only a few seconds. I am using a fairly simple SQL UPDATE statement against a table with only 21,000 rows. Query:

UPDATE Pearson_Load
SET Process_Flag = 'E',
Error_Msg = 'Error: Missing address elements Address_Line_1, City, and/or State'
WHERE (Address_Line_1 = ' '
OR City = ' '
OR State = ' ')
AND Process_Flag = ' '

Any suggestions on how to improve the performance of this task or an alternate solution are appreciated. Thank you.

View 8 Replies View Related

Performance Issue With Custom Data Destination Component

Feb 12, 2008

Hi,
All the examples I've seen for creating custom data destinations (scripts or components) show that in the ProcessInput method you need to iterate through each record and execute a commit for each row. Here's a sample from http://forums.microsoft.com/msdn/showpost.aspx?siteid=1&postid=70469



Public Overrides Sub MyAddressInput_ProcessInputRow(ByVal Row As MyAddressInputBuffer)
    With sqlCmd
        .Parameters("@addressid").Value = Row.AddressID
        .Parameters("@city").Value = Row.City
        .ExecuteNonQuery()
    End With
End Sub

This works for me but is extremely slow when processing lots of rows. Has anyone come across a better/faster example for committing rows to a database when creating a custom data destination component? I was thinking about using the ADO.NET batch update. Any thoughts on this or other approaches?

Thanks,

Mike Fulkerson

View 3 Replies View Related

Does A Large Amount Of 'image' Data Affect Overall SQL Performance?

Jul 20, 2007

Recently we added a new table into our SQL2000 database specifically to store scanned in images of documents. This new table contains a PK field, a couple of datetime fields, a couple of char(1) fields and one 'image' field.



Before adding this table, the database size was approx 6GB. Six months after adding this new table, the database has grown to 18GB - 11GB of this is due to the scanned in images.



Would this new table affect the SQL performance with regards to accessing other data in the database that has nothing related to the new table?



If so, would moving this new table into its own database be recommended?



Thanks

Rod

View 1 Replies View Related

How To Perform Case-Insensitive Search On XML Data Type In SQL Server 2005?

Apr 25, 2006

Does anyone know how to perform a case-insensitive search on the XML data type in SQL Server 2005? Or do I have to convert all the XML data to lower case before I store it?

Thanks in advance.
John
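One workable approach in SQL Server 2005 (XQuery string comparisons themselves are case-sensitive) is to shred the value out with the xml type's value() method and normalize case in T-SQL. A sketch with hypothetical table, column, and path:

DECLARE @search nvarchar(200);
SET @search = N'widget';

SELECT id
FROM dbo.Docs
WHERE LOWER(xmldata.value('(/product/name)[1]', 'nvarchar(200)')) = LOWER(@search);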

View 2 Replies View Related

[Performance Discussion] To Schedule A Time For An MSSQL Command, Which Way Would Be Faster And Give Better Performance?

Sep 12, 2004

1. Use the SQL Server Agent service to handle the schedule
2. Use a .NET Windows service with timers to call SqlClientConnection

Of the two above, which way would be faster and give better performance?

View 2 Replies View Related

Extremely Poor Query Performance - Identical DBs Different Performance

Jun 23, 2006

Hello Everyone,

I have a very complex performance issue with our production database. Here's the scenario. We have a production web server and a development web server. Both are running SQL Server 2000.

I encountered various performance issues on the production server with a particular query. It would take approximately 22 seconds to return 100 rows; that's about 0.22 seconds per row. Note: I ran the query in single-user mode. So I tested the query on the development server by taking a backup (.dmp) of the database and moving it onto the dev server. I ran the same query and found that it ran in less than a second.

I took a look at the query execution plans and found that they were exactly the same in both cases. Then I took a look at the various indexes and, again, found no differences in the table indices.

If both databases are identical, I'm assuming that the issue is related to some external hardware issue like disk space, memory, etc., or it could be an OS or software related issue, like service packs, SQL Server configurations, etc.

Here's what I've done to rule out some obvious hardware issues on the prod server:
1. Moved all extraneous files to a secondary hard drive to free up space on the primary hard drive. There is 55GB of free space on the disk.
2. Applied the SQL Server SP4 service pack
3. Defragmented the primary hard drive
4. Applied all Windows Server 2003 updates

Here are the prod server's system specs:
2x Intel Xeon 2.67GHz
Total Physical Memory 2GB, Available Physical Memory 815MB
Windows Server 2003 SE w/ SP1

Here are the dev server's system specs:
2x Intel Xeon 2.80GHz
2GB DDR2-SDRAM
Windows Server 2003 SE w/ SP1

I'm not sure what else to do; the query performance is an order of magnitude different and I can't explain it. To me it is a hardware or operating system related issue.

Any ideas would help me greatly!

Thanks,
Brian T

View 2 Replies View Related


Compare Performance (Execute SQL Task Insert And Data Flow Task)

Mar 12, 2008



I am using SQL 2005 SSIS. I am joining several large tables and then moving the result into another table in the same database.

I would like to know which method is faster:


Use an Execute SQL Task to insert the result set into the target table (sketched below)

Use a Data Flow Task to insert the result set into the target table (use an OLE DB source to execute the SQL command, and then use the SQL Server destination)
Could you tell me why one is slower than the other?
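For reference, the Execute SQL Task variant boils down to a single set-based statement like the following (table and column names hypothetical), which keeps the rows inside the engine instead of pulling them through the SSIS pipeline:

INSERT INTO dbo.TargetTable (col1, col2, col3)
SELECT a.col1, a.col2, b.col3
FROM dbo.BigTableA AS a
INNER JOIN dbo.BigTableB AS b ON b.a_id = a.id;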

Thanks.

View 7 Replies View Related

SQL Server Admin 2014 :: Setup Performance Monitor Collector On Workstation Collecting Remote Server Data?

Mar 31, 2015

I set up the collector and specify the Run As as my AD account on the Collector Set - Properties - General screen. My AD account is a local admin of the remote server.

However, the collector does not seem to work. Although the collection set is shown as running, the .blg file stays at 64K. If I open it, there is nothing inside (no counters at the bottom). What did I miss?

View 1 Replies View Related

Sampling Data Set Via Integration Services Data Flow For Data Mining Models Without Saving Training And Test Data Set?

Nov 24, 2006

Hi, all here,

Thank you very much for your kind attention.

I am wondering if it is possible to use SSIS to sample a data set into a training set and a test set and feed them directly to my data mining models, without saving them somewhere, as they occupy too much space? I really need guidance on that.

Thank you very much in advance for any help.

With best regards,

Yours sincerely,

View 5 Replies View Related

Performance...

Mar 9, 2007

We have the same application installed in a few different environments with similar servers and similar hardware. The only differences are the versions of SQL and the collations.
Is SQL 2005 a lot faster than SQL 2000? Could collation type have a big effect on performance?
ScAndal

View 1 Replies View Related

How Is The Performance Of The SQL With .Net?

Aug 31, 2007

Hi

I want to insert 1000s of records into a SQL Server 2005 database with some manipulation, so I put the insert inside a for loop. Inside the loop I am opening the connection and closing it after use. The sample code is below:

for(int i=0;i<1000;i++)
{
    sqlCmd.CommandText = "ProcName";
    sqlCmd.Connection = sqlCon;
    sqlCmd.Connection.Open();
    sqlCmd.ExecuteNonQuery();
    sqlCmd.Connection.Close();
}

What my question is: how is the performance of this code? Will it take time to get the connection and close the connection in every iteration? Or shall I open the connection once before the loop and close it after the loop ends? Will that increase the performance?

Please clarify these questions for me. Thanks in advance.

View 1 Replies View Related

SQL Performance

Dec 8, 2003

I have the following problem with SQL performance:

this line 'select * from [viewUserLatestFee]' executes instantly (in Query Analyzer)
this line 'select * from [viewUserLatestFee] where orgID = 1' takes up to 30 seconds for 1000 rows (still in Query Analyzer)

Can anyone please help? I seem to have run out of ideas.

I have a feeling people might be curious about the view so here it is:

SELECT dbo.viewUserPosition.id, dbo.viewUserPosition.username, dbo.viewUserPosition.password, dbo.viewUserPosition.title,
dbo.viewUserPosition.firstName, dbo.viewUserPosition.lastName, dbo.viewUserPosition.email, dbo.viewUserPosition.address1,
dbo.viewUserPosition.address2, dbo.viewUserPosition.suburb, dbo.viewUserPosition.postcode, dbo.viewUserPosition.country,
dbo.viewUserPosition.state, dbo.viewUserPosition.mailAddress1, dbo.viewUserPosition.mailAddress2, dbo.viewUserPosition.mailSuburb,
dbo.viewUserPosition.mailPostcode, dbo.viewUserPosition.mailCountry, dbo.viewUserPosition.mailState, dbo.viewUserPosition.birthDate,
dbo.viewUserPosition.joinDate, dbo.viewUserPosition.lastUpdated, dbo.viewUserPosition.orgID, dbo.viewUserPosition.positionID,
dbo.viewLatestPaidFee.feeID, dbo.viewLatestPaidFee.mshipID, dbo.viewLatestPaidFee.name, dbo.viewLatestPaidFee.[desc],
dbo.viewLatestPaidFee.terms, dbo.viewLatestPaidFee.period, dbo.viewLatestPaidFee.periodType, dbo.viewLatestPaidFee.fee,
dbo.viewLatestPaidFee.startDate, dbo.viewLatestPaidFee.endDate, dbo.viewLatestPaidFee.deleted, dbo.viewLatestPaidFee.feePaidID,
dbo.viewLatestPaidFee.paidDate, dbo.viewLatestPaidFee.effectiveDate, dbo.viewLatestPaidFee.approved, dbo.viewLatestPaidFee.optionID,
dbo.viewLatestPaidFee.paidAmount, dbo.viewLatestPaidFee.feePaidEndDate
FROM dbo.viewUserPosition LEFT OUTER JOIN
dbo.viewLatestPaidFee ON dbo.viewUserPosition.id = dbo.viewLatestPaidFee.userID

Here is viewUserPosition:
SELECT dbo.tblUser.id, dbo.tblUser.username, dbo.tblUser.password, dbo.tblUser.title, dbo.tblUser.firstName, dbo.tblUser.lastName, dbo.tblUser.email,
dbo.tblUser.address1, dbo.tblUser.address2, dbo.tblUser.suburb, dbo.tblUser.postcode, dbo.tblUser.country, dbo.tblUser.state,
dbo.tblUser.mailAddress1, dbo.tblUser.mailAddress2, dbo.tblUser.mailSuburb, dbo.tblUser.mailPostcode, dbo.tblUser.mailCountry,
dbo.tblUser.mailState, dbo.tblUser.birthDate, dbo.tblUser.joinDate, dbo.tblUser.lastUpdated, dbo.tblRelPosition.orgID,
dbo.tblRelPosition.positionID
FROM dbo.tblUser INNER JOIN
dbo.tblRelPosition ON dbo.tblUser.id = dbo.tblRelPosition.userID

and viewLatestPaidFee:
SELECT dbo.tblMshipFee.id AS feeID, dbo.tblMshipFee.mshipID, dbo.tblMshipFee.name, dbo.tblMshipFee.[desc], dbo.tblMshipFee.terms,
dbo.tblMshipFee.period, dbo.tblMshipFee.periodType, dbo.tblMshipFee.fee, dbo.tblMshipFee.startDate, dbo.tblMshipFee.endDate,
dbo.tblMshipFee.deleted, fp.id AS feePaidID, fp.paidDate, fp.effectiveDate, fp.approved, fp.optionID, fp.paidAmount, fp.endDate AS feePaidEndDate,
fp.userID
FROM dbo.tblRelMshipFeePaid fp INNER JOIN
dbo.tblMshipFee ON dbo.tblMshipFee.id = fp.feeID AND fp.endDate =
(SELECT MAX(fp2.[endDate])
FROM [dbo].[tblRelMshipFeePaid] fp2
WHERE fp2.[userID] = fp.[userID])
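One hedged experiment, since the filtered column comes from tblRelPosition: check whether orgID is indexed on that base table, e.g.

CREATE NONCLUSTERED INDEX IX_tblRelPosition_orgID
ON dbo.tblRelPosition (orgID, userID);

Whether this helps depends on the actual plan; the correlated MAX subquery in viewLatestPaidFee is the other place worth inspecting in the execution plan.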

View 4 Replies View Related

SQL Performance

Jan 13, 2005

We used a stored proc to pull totals from a database. Everything was fine until the table grew and started to time out. So we created a temp table to populate with a range of data and then pull the totals from there. Everything was fine until the table grew and started to time out. Any suggestions?

View 3 Replies View Related

Performance

Jan 17, 2002

Hi,

I have newly joined as a SQL DBA. I want to check the physical disk performance. We have RAID 5 with 5+1 disks. I calculated the number of IOs per disk, but how do we know the actual limit of IOs per disk?


Thanks
Praveen

View 1 Replies View Related

Db Performance

May 8, 2001

What's my best bet for getting better performance out of one of my database servers? Currently we have 1 set of RAID 5 disks partitioned into 2 drives. This houses everything (system, database, and logs). If that server has 2 slots left for drives, I was thinking of putting in 2 mirrored drives and getting the logs off the main database space. (Make sense?) This is a vendored application, so working with new indexes etc. isn't something I should do without the vendor's involvement. Will what I describe above help?

Thanks

View 2 Replies View Related

DTS Performance

Mar 31, 2001

hi,

I am using DTS to move data from Oracle to Oracle. I have used a stored procedure in Oracle for the update/insert.

The DTS package calls the stored procedure for each record; because of this, performance has gone down. How do I increase the speed of the data transfer?

Has anyone done anything similar?


Tushar

View 1 Replies View Related

Performance

Jun 26, 2001

We have SQL Server running on a dual-processor 500MHz Pentium server. Our database is hit by about 300 users. 200 of those users are doing constant searches through a client table of about 250,000 records, which in turn is linked to a history table containing over 5,000,000 records. This is only the tip of the iceberg; we have many triggers, procedures, updates, etc. going on in the background. The database has over 500 tables.

Keep in mind, these searches that are taking place can involve all kinds of fields: phone number, company name, fax number, first name, last name, status, wildcard searches, etc. So as you can imagine, the database is being hit with all kinds of funky requests to find records. I will be the first to admit that our developers (vendor) are not the best code writers, and we have a tough time getting them to optimize something they do not even understand themselves.

As I speak, our processor utilization is maxing out between 95 and 100 percent. I've done a lot of performance tuning, and all of the problems lie in the searching. We've built, tested, rebuilt, and re-tested each and every index. I even used the Profiler to filter what I could. It has improved, but our database is growing at a rate of 10 megs a day (already close to 3 gigs, not that huge). I think I've optimized my indexes as best I can, considering all the fields and possibilities available to users to search for records.

For a database that requires all of these different search criteria, what would be a more optimal server? We are looking to purchase something ASAP. I could really use help from someone in a similar situation. It seems odd, to my mind, that a company of 300 people would need to rely on a quad server (four-processor capability).

Thanks. JT

View 3 Replies View Related






