Performance Of XML Vs Database
Nov 29, 2007
How does reading data from an XML file compare with reading it from a database, performance-wise?
Will using System.IO to read the XML file be slower than reading the data from a database, if I am only reading the data and not sorting it?
Will the RAM/CPU usage be higher when getting data from XML than when getting it from a database?
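One rough way to see where the extra cost comes from is to contrast shredding an XML document in T-SQL with an equivalent table read. This is only a sketch (SQL Server 2005+; the table and element names are invented for illustration):

```sql
-- Shredding XML: every value is parsed out of the document text at query
-- time, and nothing is indexed.
DECLARE @doc xml;
SET @doc = N'<orders><order id="1" cost="10.00"/><order id="2" cost="20.00"/></orders>';

SELECT o.value('@id',   'int')   AS OrderId,
       o.value('@cost', 'money') AS Cost
FROM @doc.nodes('/orders/order') AS t(o);

-- Equivalent table read: the engine fetches already-typed values straight
-- from data pages and can use an index if one exists.
-- SELECT OrderId, Cost FROM dbo.Orders;
```

As the file grows, the parse step generally costs more CPU and memory than the table read, which is the usual answer to both questions above.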
View 5 Replies
Dec 14, 2000
Same database server, two databases, one a copy of the other. The original is giving bad performance: the new copy will return 300,000 rows in a second, while the original takes thirty seconds to return the same data set. There are 7 users on the bad one and 3 on the good one. The bad one has been reindexed, CHECKDBed and NEWALLOCed with no errors, and is still giving very bad performance. Anyone got any further ideas on what to do? Please help.
View 5 Replies
Jan 13, 2000
Hello
I'm an operator for backup (and many other things), but I have a problem with SQL 6.5 and ArcServeIT. SQL is the database for ArcServe.
Many of the file servers (30 servers) store over 700,000 filenames in one full backup session (every Friday) into the database (the table is called astpdat).
When I search for one file in the database (via ArcServe, it's like clicking through the explorer tree), the performance is very, very slow.
How can I speed this up?
The database is 5 GB. The server it is on has 4 CPUs and 1 GB RAM (HP LXr 8000), but the SQL Server has only 100 MB RAM and a 100 MB TempDB.
Thanks for an answer, Joe from (Bratwurst) Germany
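If the slow explorer-style lookups are searches on a filename column in astpdat, the usual first step is an index on that column. A hedged sketch only, since the real ArcServe column name would need checking first:

```sql
-- Hypothetical: assumes astpdat has a filename column that ArcServe
-- searches on. On SQL 6.5 this would be run from ISQL/w against the
-- ArcServe database.
CREATE INDEX idx_astpdat_filename
ON astpdat (filename)
```

Beyond indexing, 100 MB of RAM is very little for a 5 GB database on a 1 GB machine; raising SQL Server's memory allocation via sp_configure would likely help too (on 6.5 the 'memory' setting is given in 2 KB units).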
View 1 Replies
Jul 20, 2005
Hi, I wonder if someone can answer a question for me. I'm modifying a database with the purpose of adding a new feature: address change history. My model would consist of one table keeping client name/logon info (for a public site), and address info in another table, because the login info would likely be more frequently accessed/changed than address updates. A group that does data entry internally through a web interface always needs to see the address.
For the first stage I don't want to change the old table, just have a new one for now. But moving forward, I thought it would be neat to have all address update records in one table, and have a Profile type value to distinguish whether data entry or a public website user created the update record.
However, a thought occurred to me: if one table is responsible for showing the current address as well as adding records whenever there is an address change, would it hurt performance? Would I get better performance splitting the record types into two tables, or does it not matter, since the table I'm thinking of creating would have no deletions: only insertions, plus modifying an expiry date field so we know which record to use. I'm not a specialist on database performance, so if any of you database gurus out there can advise me on that, that would be GREAT. Thanks a million guys.
Jonah A. Libster
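The single-history-table idea described above can be sketched as follows; every name here is hypothetical, chosen just to make the expiry-date mechanism concrete:

```sql
-- Hypothetical sketch of the proposed address-history table.
CREATE TABLE AddressHistory (
    AddressHistoryID int IDENTITY(1,1) PRIMARY KEY,
    ClientID      int          NOT NULL,  -- FK to the client/logon table
    ProfileType   char(1)      NOT NULL,  -- 'D' = data entry, 'P' = public site
    Street        varchar(100) NOT NULL,
    City          varchar(50)  NOT NULL,
    EffectiveDate datetime     NOT NULL DEFAULT GETDATE(),
    ExpiryDate    datetime     NULL       -- NULL = current address
);

-- Current address for a client: the row whose ExpiryDate is still NULL.
-- SELECT * FROM AddressHistory WHERE ClientID = @id AND ExpiryDate IS NULL;
```

Because the table is insert-only, an index on (ClientID, ExpiryDate) usually keeps the current-address lookup fast regardless of how much history accumulates, so a single table is often fine here.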
View 2 Replies
Apr 6, 2007
Hi All
In Oracle I can get performance variables like Library Cache Hits, Dictionary Cache Hits, Database Buffers Read, Redo Log Buffers Read etc. from the system dynamic tables.
I want to know how to get the same or related performance details in SQL Server 2000 and 2005 (which parameters, their optimal values, and which table/dynamic view to query).
Thanks in Advance
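The closest SQL Server equivalents are the performance-counter views: master.dbo.sysperfinfo on SQL Server 2000 and sys.dm_os_performance_counters on 2005. As one example, the counters behind the buffer cache hit ratio (roughly comparable to Oracle's buffer cache statistics) can be read like this:

```sql
-- SQL Server 2005: raw counters behind the "Buffer cache hit ratio".
-- On SQL Server 2000, query master.dbo.sysperfinfo with the same
-- counter names; the columns match.
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Buffer cache hit ratio',
                       'Buffer cache hit ratio base');
```

Note that ratio counters must be divided by their matching "base" row to get a percentage; values close to 100% are generally considered healthy for the buffer cache.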
View 9 Replies
Sep 6, 2007
I have an asp.net application on SQL Server 2005. I have completed indexing all the physical primary and foreign keys, virtual primary and foreign keys, sorting order, where clause fields and so on.
On the first day, I only indexed the physical primary and foreign keys and the virtual primary and foreign keys, and I noticed the loading performance improved. So I continued with the remaining index process on the second day. This time, I noticed the loading performance is slower by 0.5 to 1 second. Is it possible for loading performance to become slower after indexing?
Please advise. Thanks.
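Yes, it is possible: every extra index is maintained on each insert and update, so indexes that the reads never use can cost more than they save. On SQL Server 2005 you can check which indexes are actually being used; a sketch:

```sql
-- SQL Server 2005: compare how often each index is read vs. maintained.
-- Indexes with many user_updates but no seeks/scans/lookups are
-- candidates for dropping.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID();
```

The stats reset on instance restart, so let the application run a representative workload before judging an index unused.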
View 3 Replies
Oct 10, 2002
Hello world -
Does anyone know any good resources to get ideas / scripts for measuring capacity and statistics on SQL Server 2000 database or otherwise? I want to incorporate SQL script jobs that will email me various statistics everyday and am trying to find a starting point.
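As a starting point, a sketch that collects database sizes from the system catalog; scheduled as a job step, its output can then be mailed with xp_sendmail on SQL Server 2000 once SQL Mail is configured:

```sql
-- Size of every database on the instance (SQL Server 2000).
-- 'size' in sysaltfiles is measured in 8 KB pages.
SELECT d.name                 AS database_name,
       SUM(a.size) * 8 / 1024 AS size_mb
FROM master.dbo.sysaltfiles AS a
JOIN master.dbo.sysdatabases AS d
  ON d.dbid = a.dbid
GROUP BY d.name
ORDER BY size_mb DESC;

-- Run inside a specific database, sp_spaceused breaks usage down further.
-- EXEC sp_spaceused;
```

From there, sqlperf counters and sp_monitor output can be appended to the same daily e-mail as the statistics of interest grow.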
Thanx!
View 1 Replies
Dec 12, 2004
First of all, sorry if this is in the wrong section, didn't know where to put it.
I'm doing a university paper comparing Microsoft Access versus SQL Server 2000 and I want to run a benchmark on them to see which is faster, does anyone know of any applications that will let me do this?
You may think this is silly since SQL Server is quite obviously faster; thing is, I can't just say that in my paper. I have to be more specific, so I was hoping I could run some benchmarks and show the scores in the paper.
I know I can run an ASP script that times how long a query takes to run, however this can't test multiple concurrent users accessing the system (unless I get all my friends' computers around my house, bring up the page and get them all to click refresh at the same time :) ).
I basically want to run a simple SQL SELECT statement on an identical database in both database systems (Northwind), but for multiple users. Anyone know of any application?
Or does anyone know of any performance tests comparing Access with SQL Server? All I can find is material comparing high-end database against high-end database (Oracle vs SQL Server vs IBM DB2 etc.). If I can't do my own I can always use other people's. Cheers!
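On the SQL Server side, single-query timings can be collected without writing any harness at all using the built-in statistics switches; a minimal sketch:

```sql
-- Reports parse/compile time, execution time, and logical/physical reads
-- for each statement that follows, in the Messages output.
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

SELECT * FROM Northwind.dbo.Orders;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
```

For the concurrent-user part, one low-tech option is launching the same script from several osql command windows at once; it is crude, but it at least produces numbers you can cite rather than a single-user timing.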
View 3 Replies
Mar 6, 2004
Hi
I am using SQL server database with asp as front end. I use asp command object to call sql stored procedure. The procedure runs a while loop for say 100,000 records and based on the IF condition calls particular stored procs which process the record.
I am running this app on a P4 IBM PC with Windows 2000 Server and IIS on the same machine. The CPU utilisation goes up to 98-99%, and the process has started running very slowly of late. Is this slow processing speed a hardware/OS problem, or is it due to the calls to stored procs from within a stored proc? How can I optimise the process? Each stored proc called does have IF conditions, table scans etc.
please advise
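The loop itself is a likely suspect: a WHILE over 100,000 records means 100,000 separate procedure executions. Where the IF logic can be expressed as a predicate, a set-based rewrite processes all rows in one statement. A hedged sketch with invented table and column names:

```sql
-- Hypothetical set-based replacement for a row-by-row loop.
-- Instead of: WHILE ... fetch one record ... IF condition ... EXEC per-row proc,
-- fold the branch into a CASE and update the whole set at once.
UPDATE r
SET    r.Status = CASE WHEN r.Amount > 1000 THEN 'REVIEW'
                       ELSE 'OK'
                  END
FROM   dbo.Records AS r
WHERE  r.Processed = 0;
```

One scan instead of 100,000 nested calls typically cuts CPU dramatically; the 98-99% utilisation is consistent with per-row processing rather than a hardware limit.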
View 7 Replies
Mar 8, 2007
Hi there
This is the scenario:
I have a heavy duty database, that is being accessed very, very frequently (i.e. 100 times in a minute).
Now, I would like to make a backup of the database, just in case something goes wrong (recovery reasons, etc.).
My question is: how will making a backup impact the performance of the database, and how can I be sure that the backup is in a consistent state?
Thank you
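On the consistency point: an online BACKUP DATABASE includes enough of the transaction log to bring the restored copy to a transactionally consistent point, so the database stays fully available while the backup runs. A minimal sketch (database name and path hypothetical):

```sql
-- Full online backup; users can keep reading and writing during it.
BACKUP DATABASE MyHeavyDutyDB
TO DISK = N'E:\Backups\MyHeavyDutyDB_full.bak'
WITH INIT, STATS = 10;

-- Checks that the backup file is readable and complete.
RESTORE VERIFYONLY FROM DISK = N'E:\Backups\MyHeavyDutyDB_full.bak';
```

The main runtime cost is the extra I/O of reading every data page, so scheduling the backup off-peak, or writing it to a disk separate from the data files, limits the impact on those 100-per-minute accesses.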
View 6 Replies
Jul 31, 2007
dear friends,
I have dropped and recreated many objects in a database, and suddenly the database became very slow. Can any of you friends suggest a solution?
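After dropping and recreating many objects, the optimizer may be working with missing or stale statistics, and cached plans may still reference the old objects. A common first step, sketched (the table name is only an example):

```sql
-- Refresh statistics on every table in the current database.
EXEC sp_updatestats;

-- Mark procedures touching a rebuilt table for recompilation, so stale
-- cached plans are not reused. Table name hypothetical; run per table.
EXEC sp_recompile 'dbo.SomeRebuiltTable';
```

If that does not help, rebuilding the indexes on the recreated tables is the next thing to try.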
View 9 Replies
Oct 23, 2007
I have client tools installed on a server, and I have registered our 30+ instances, hosted on various servers, in this one SQL Server 2005 Management Studio.
Question:
How can I use this setup to send an e-mail distribution list a nice monthly chart showing the sizes of the databases and the memory and CPU utilization of all the registered instances?
Many thanks for your help !!
seethem
View 3 Replies
Jan 15, 2008
Hi All,
From your experience with SQL 2005: is there any free software that can help improve performance or identify performance bottlenecks? Two examples I usually use are a maintenance plan that does (check DB > reorganize index > rebuild index > update statistics), and the SQL 2005 Performance Dashboard for the reporting help.
Do you have any other free tools or tips you can give me for performance, or anything that I must have on my SQL 2005 servers?
Thx
View 3 Replies
Jul 23, 2005
Hi, I am facing a peculiar problem while looking into a live database currently under operation in one of my client's projects. An application that is updating 3 tables in the database is failing to update a certain number of fields in one of the tables. The failure is not frequent, and I have checked through the Server Performance Monitor that there is no performance lag on the server at any point in time. The tables are indexed with common key fields. Can anybody help me in this regard? Thanks & Regards.
View 2 Replies
Jul 23, 2005
We have an application with a SQL Server 2000 back end that is fairly database intensive: lots of fairly frequent queries, inserts, updates, the gamut. The application does not make use of performance hogs like cursors, but I know there are lots of ways the application could be made more efficient database-wise. The server code is running VB6 of all things, using COM+ database interfaces. There are some clustered and non-clustered indexes defined, but I'm pretty sure there's room for improvement there as well.
Some tables have grown into the millions of records in recent months, and performance of the application slowed to a crawl. Optimizing the database helped a little, but not much. We know that several million records in a table is a lot, but one would think that SQL Server should be able to handle that pretty well. We do have plans to archive a lot of old data, but in the meantime we were hurting for a quick fix.
So we threw hardware at the problem, and transferred the database to a new, more powerful server. The performance improvement was dramatic. Transactions were many, many times faster than before. Without implementing any of the other, more difficult performance improvements we have planned, we suddenly became minor heroes. :-)
Well, the honeymoon seems to be somewhat over. While performance is still much better than when the database resided on our old server, it appears to have degraded rather significantly again. Performance is also not significantly better with fewer users on our system. What the heck?
Yes, the database continues to grow unchecked as we haven't quite got an archive utility in place yet, but the growth is relatively gradual, so you wouldn't think that would be the issue. The database is optimized on a weekly basis, and our web and database servers are both rebooted monthly.
Our database administrators don't seem to have answers, so I appeal to the experts reading this forum to maybe offer some clues. Prior to posting I did a fair amount of research to see what people have suggested in similar situations, and ran this by our database admin. Here's what I can tell you from this research:
- Statistics are updated weekly along with whatever else the database optimization does.
- We do not use the "autoshrink" option for automatically shrinking log files.
- Regarding preallocating space and setting growth factors for log and data files to minimize time spent allocating disk space, our admin says, "We do allow database files to grow unchecked, but we do monitor growth and manually expand as needed. Autogrow is typically set in 50MB increments or less as the amount of time it takes to expand this amount is negligible."
- Transaction logging is turned on, and data and log devices are on separate physical disks.
- The database server is monitored to ensure no process is hogging all of the CPU, I/O or memory.
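One item not on that checklist is index fragmentation, which builds back up steadily as large tables grow and would explain performance degrading again after a clean start on new hardware. On SQL Server 2000 it can be measured and repaired online; a sketch with hypothetical table and index names:

```sql
-- Measure fragmentation on a large table (SQL Server 2000).
-- Watch "Logical Scan Fragmentation" and "Scan Density" in the output.
DBCC SHOWCONTIG ('dbo.LargeTable') WITH TABLERESULTS, ALL_INDEXES;

-- Defragment a specific index online (0 = current database).
DBCC INDEXDEFRAG (0, 'dbo.LargeTable', 'IX_LargeTable_SomeColumn');
```

If the weekly "optimization" job is not already rebuilding or defragmenting indexes on the multi-million-row tables, this is worth ruling out before anything harder.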
View 6 Replies
Feb 6, 2008
I have a set of disks allocated for a high performance SQL implementation that will entail lots of large queries. My question is do I allocate more IOPS to the logs or to the database?
For example, if I have a 10 disk RAID 1/0 and a 4 disk RAID 1/0 available, which do I allocate to the logs and which do I allocate to the database? Which will require the most IOPS?
Thanks!
View 3 Replies
Sep 28, 2006
Hi,
I have been running a reporting app on a SQL 2000 server, which reads from one large table (roughly 80 million records, growing at around 2 million records a week).
However, it seems that when multiple users access the app and the one large table is read using a single database user name up to approximately 25 times concurrently, the system slows to a halt. Essentially, each request in the app opens a new connection to the SQL database using the same database user name. Recently we have been running into performance issues, since we have increased the number of users of the app.
What would be causing this slow down? and what could solve this problem?
View 2 Replies
Feb 12, 2007
Hi,
I'm developing a Data Mart, and I'm experiencing a performance gap between my fact table and its snapshot.
I create the snapshot with the statement:
CREATE DATABASE DB_SNAP ON
( NAME = DB_SNAP_Data, FILENAME =
'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\DB_SNAP_Data.ss' )
AS SNAPSHOT OF DB;
And it works. But queries executed on the snapshot run very slowly.
Can anyone tell me why?
Thanks.
F.
View 4 Replies
Jun 25, 2007
Hi geniuses,
I have a core database question in MS Access and SQL Server.
Assume the following table in MS Access or SQL Server:
Employees table:
EmpID, Name, Hiredate, Photo;
and the following query:
"SELECT EmpID, Name, Hiredate FROM Employees;"
The question is, "does the blob object (Photo) affect the performance of the query even when not included in the select statement?"
I mean, will I gain some additional performance when separating the blob fields to another table?
Thanks for the answer.
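In SQL Server, text/image blobs are by default stored off-row, with only a 16-byte pointer in the data row, so a SELECT that omits the Photo column largely avoids the blob pages already; the residual cost is mostly the pointer widening each row. If you still want to separate them, a sketch of the one-to-one split (column names from the question, key choice assumed):

```sql
-- One-to-one split: the wide blob lives in its own table.
CREATE TABLE Employees (
    EmpID    int          PRIMARY KEY,
    Name     varchar(100) NOT NULL,
    Hiredate datetime     NOT NULL
);

CREATE TABLE EmployeePhotos (
    EmpID int   PRIMARY KEY REFERENCES Employees(EmpID),
    Photo image NULL
);

-- The common query never touches the blob table at all:
-- SELECT EmpID, Name, Hiredate FROM Employees;
```

How much this gains depends on storage settings (e.g. the 'text in row' table option); in Access the split tends to matter more, since its storage model differs.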
View 4 Replies
Nov 15, 2007
Hi everybody,
We are having an application performance issue: the users are complaining that the application is really slow. They access the application through Citrix. I ran sp_who on the database side to see if there are any blocks, but the block column doesn't have anything greater than 0. So I'm trying to see how I can track down where the problem is. Please help. Thanks.
View 2 Replies
Jul 25, 2005
I have created an internal "e-mail" type system in our business application. The idea was pretty basic: have a Messages table which would hold the data such as To, From, blah blah. Then there would be an Inbox table which would have the user name and the Message.ID from the Messages table. By joining these two you would have a person's inbox. Each table has an ID column which is the clustered index; the Inbox table has an additional index on the user name. These tables are very busy and large; here are the stats.
Messages – 860,000 rows est. Total
Inbox – 2,584,000 rows est. Total
For June there were 144,000 Messages of which 780 went to multiple recipients causing 276,000 Inbox references.
Everything works, except that getting a person's inbox can get very slow. To address this we are currently defragging and rebuilding the indexes. These actions appear to correct the problem, but only for a short time, which is why we are doing them several times a day.
Solution?
I think the problem is that my most-used index (user name in the Inbox) is not clustered, and that I should have a fill factor larger than 90%?
Any advice greatly appreciated, the transaction logs are killing me.
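Since the dominant query is "all Inbox rows for one user", making the user name the leading column of the clustered index would store each person's inbox contiguously, turning the lookup into one range scan and slowing the fragmentation that forces the daily rebuilds. A sketch, with column names assumed from the description (the existing clustered primary key on ID would first need to be dropped or recreated as nonclustered):

```sql
-- Hypothetical: recluster Inbox so one user's rows sit together on disk.
-- ID keeps the key unique; a lower fill factor than the default leaves
-- room for the steady stream of new inbox rows between rebuilds.
CREATE UNIQUE CLUSTERED INDEX CIX_Inbox_UserName_ID
ON Inbox (UserName, ID)
WITH FILLFACTOR = 80
```

Note the trade-off: a lower fill factor reduces page splits (and log churn) but makes the table larger, so 80 is a starting point to tune, not a rule.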
View 4 Replies
Mar 27, 1999
At approximately what db size is sql 6.5 known to degrade in performance?
Also, what is the maximum db size 6.5 can handle?
thanks in advance...
View 1 Replies
Jul 23, 2005
In a nutshell, I have a stored proc that takes approx 15 minutes when it uses a read-only database, and about 2 minutes when the database is not read-only.
Details: SQL Server 7.
The stored proc exists on a writable database on the same server. It stuffs a subset of about 20,000 rows into a temp table and repeatedly updates the temp table while doing selects and joins on the read-only database, for example:
Select *
into #LocalTempTable
from SAMESERVER.dbo.ReadOnlyDatabaseTable
Update tmp set tmp.SomeColumn = 1
from #LocalTempTable tmp
inner join SAMESERVER.dbo.ReadOnlyDatabaseTable x on tmp.ID = x.ID
where SomeCriteria = SomeValue, etc, etc.
If I change the read-only database to writable, it works fine. There are no explicit transactions used in the stored proc. On the same server, a production database exists and is constantly using transactions and temp tables, but I wouldn't think this is an overall server performance issue, because it handles the same load when the suspect database is not read-only.
Any thoughts will be greatly appreciated. I'm searching all over and can't seem to find any reference to this type of issue.
View 1 Replies
Feb 28, 2007
I am developing a Windows Mobile 5.0 application, using SQL Server Mobile as the database on the PDA.
In the program I use a DataSet (.xsd) to create the table and TableAdapter, but the performance is very slow just accessing data from the database. It takes about 4200 ms just for:
this.userAdapter = new PDA_USERTableAdapter();
MBDB2DataSet.PDA_USERDataTable ut = userAdapter.GetUser();
The "new PDA_USERTableAdapter()" is very fast, but the userAdapter.GetUser() call costs 4200 ms on its own, even though it only returns about 20 rows, each containing only 5 fields.
The sql statement in userAdapter.GetUser() is
SELECT User, PASSWORD, TYPE, USER_ID, Supervisor_ID
FROM PDA_USER
WHERE (TYPE = 5)
ORDER BY User
Please Help, Urgent!!!!!
p.s. (The total number of rows in the PDA_USER table is only 30.)
Thank you very much
View 3 Replies
Mar 22, 2007
Hi, I am using Compact Framework 1.1 and a SQL CE database for my mobile application. My database has a total of 160,000 rows, and whenever I run a search query, it takes about 20 seconds to look through the whole database if the record does not exist. Is there any method to improve the search performance? I am using a data reader for the query.
Thanks.
View 1 Replies
May 23, 2008
We have more than 2000 tables in the database. Can existence of so many tables affect the performance on SQL SERVER 2005?
View 1 Replies
Oct 2, 2006
Hi,
I'm trying to restore a database on a server in a different domain.
The destination server is located in a physically remote place.
First I tried a UNC path (including the IP address) with the DBCC TRACEON | OFF (1807) option, but it didn't work.
Some Internet documents suggested an FTP file transfer followed by a local restore on the remote domain, but I didn't use this method because of the additional step.
Instead I used the following scripts:
DECLARE @COMMANDLINE VARCHAR(500)
SET @COMMANDLINE = 'NET USE B: \\192.168.1.xxx\sql_backup$ password /USER:SourceServer\read_sql_backup /YES'
EXEC master..xp_cmdshell @COMMANDLINE
RESTORE DATABASE [DestinationDB]
FROM DISK = N'B:\SourceBackup.bak' WITH
FILE = 1, NOUNLOAD, STATS = 10, REPLACE,
MOVE N'dMSLog' TO N'D:\MSSQL\Data\Database.ldf',
MOVE N'dMSData' TO N'D:\MSSQL\Data\Database.mdf'
SET @COMMANDLINE = 'NET USE B: /DELETE /YES'
EXEC master..xp_cmdshell @COMMANDLINE
The script works, but it's extremely slow.
The database is only 1152.88 MB, yet the restore took more than 2 hours.
Is there any way that I can improve the performance?
Any advice would be appreciated
Thanks,
- Hyung -
View 2 Replies
Feb 1, 2008
What is the relative performance cost of performing joins and inserts across databases, as opposed to performing them on tables within the same physical database? I can't find much on this at all.
Thanks
View 12 Replies
Feb 11, 2008
Hi,
I work in a trading environment and have experience only with SQL 2000.
Each day approximately 4 million rows get loaded, and transactions happen on my server throughout the day.
Please let me know which database (SQL/Oracle/...) I could use for high performance, free or with a single license.
I am looking for a database with high speed when retrieving large data sets and massive storage capacity, without performance issues.
Some suggested I use Informix, but I'm still doing R&D work on it.
Please suggest two or three different databases along with their high-performance features.
Thanks in advance
View 1 Replies
Feb 17, 2008
Hello All,
I have an issue where my SQL query takes an exceedingly long time to execute. It joins tables in 2 different databases. When I import the tables from both databases into a single database and execute the query, it runs instantly!
I had a look at the execution plans for the same query executed within a single database and with tables across databases, and there were differences: the single-database query used parallelism, whereas the cross-database join didn't have any parallelism.
Both databases are on the same server.
I haven't seen a case before where cross-database joins impact query performance so much. Any pointers?
Thanks
Arun
View 4 Replies
Jul 28, 2006
Hi
I have a database that stores all the data in one table; it holds only the data we are required to store, hence it didn't require any other tables.
The table has five columns:
Orderid, ordercost, orderdate, ordersystem, orderref
It is used in a web application that looks up the rest of the information using an order ID and displays the details. The table currently holds about 123,000 records and is increasing by 20,000 every week.
I have an archive of the same data containing 7,666,000 records that is also going to be placed into the same database.
My question is: is that a good idea?
Are there any performance issues I need to be aware of, given that the application currently runs quite quickly?
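Merging the 7.7 million archive rows into the same table is usually fine for this access pattern, provided the order-ID access path is indexed; a sketch using the column names from the question (the table name is assumed):

```sql
-- Hypothetical: assumes the table is dbo.Orders and Orderid identifies a row.
-- A clustered index on Orderid makes the web lookup a single seek,
-- regardless of whether the table holds 123,000 or 8 million rows.
CREATE CLUSTERED INDEX CIX_Orders_Orderid
ON dbo.Orders (Orderid);
```

With the seek in place, table size mainly affects full scans and maintenance windows (backups, reindexing), not the per-request lookup time the application depends on.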
View 15 Replies
Sep 21, 2006
Hi there. I'm on a SQL 2000 SP4 machine. This is a development machine, with only a couple of small databases on it. Yesterday I needed to recover a table from backup, so I went through the following process:
- Used 'Create SQL script' to generate a create db statement from the current db.
- Changed the db name, and all file paths, to not conflict with the current db.
- Created a new db in a new directory with this script.
- Restored last night's backup from the 'real' db to the new one.
- Went into the 'recovered' database, located the table which needed to be restored, and renamed it to "<table_name>_RECOVER".
- Used DTS to transfer that table to the 'real' database.
- Truncated the table to be recovered.
- Did an 'Insert into ... select * from' statement to recover the records.
- Dropped the table that I had copied in via DTS. This table was small, 8000 rows or so.
The database seemed fine at the time. Now (the next morning) I am finding that performance of the 'real' database is agonizingly slow. Even doing a 'select count(*) from <a small table>' simply does not return a result: the 'processing' globe icon spins merrily away, and I get no result set. However, the same query, submitted against the 'recovered' database (the one I restored in order to get the data I required), responds instantly, as it should. These databases are both hanging off the same named instance of this server (there are three instances). A quick Perfmon check shows the CPU to be nearly idle. I'm not sure what to look at here; can anyone suggest a direction? Thanks!
View 3 Replies
Mar 11, 2008
What are the ways to do that ?
View 2 Replies