Working with SQL Server 2000, I have a table with the following structure:
ID (INT)
userID (INT, foreign key)
productID (INT)
productQTY (DECIMAL(5,2))
purchaseDate (SMALLDATETIME)
I have about 1,000 users, each entering about 20-30 rows per day, i.e. ~20,000-30,000 new rows per day. The table might be queried with a simple SELECT for the products a user ordered per day or over a time frame (the purchaseDate column).
My question (finally) is: when should I expect to see performance degradation? Is there anything I can do to prevent it (e.g. splitting this table into several tables somehow)?
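For lookups like "one user's products per day or per date range", indexing usually matters more than raw row count. A minimal sketch, assuming the table is called dbo.Purchases (the name is a placeholder):

CREATE NONCLUSTERED INDEX IX_Purchases_userID_purchaseDate
    ON dbo.Purchases (userID, purchaseDate)

-- the kind of query the index is meant to cover: one user's orders in a date range
SELECT productID, productQTY, purchaseDate
FROM dbo.Purchases
WHERE userID = 42
  AND purchaseDate >= '20080101' AND purchaseDate < '20080201'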
I have an ASP.NET application used in my university for student management. Every now and then the application throws an error like "DataBinding: 'System.Data.DataRowView' does not contain a property with the name XXXX", where XXXX may be different each time. Of course these properties exist, and they are generated correctly by the stored procedures.
After a recompilation everything seems to work normally for a while.
I did not succeed in identifying the moment when the error is thrown, but I suspect some reports in SQL Reporting Services that lead to a server overload? That is also strange, because the server specifications are quite OK, I think:
- OS: Windows 2003 R2, 64-bit
- SQL Server 2005 Standard, 64-bit
- 8 GB RAM
- Intel Xeon E5345 2.33 GHz (2 quad-core CPUs)
I would need help with:
- identifying the moment when this error occurs, and why
- how I could repair it
- how I could distribute the load across more cores (from time to time Process Explorer from Sysinternals shows that only 1 core out of 8 is working at full capacity)
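On the "only one core busy" point: an individual query only uses more than one core if it gets a parallel plan, and that is governed by the server's "max degree of parallelism" setting. A hedged sketch for checking and, if needed, changing it on SQL Server 2005 (0 means "use all available CPUs"):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max degree of parallelism'      -- show the current value
EXEC sp_configure 'max degree of parallelism', 0   -- 0 = let SQL Server use all CPUs
RECONFIGURE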
Up until recently one of our SQL 7 databases has been running quite happily. However, over the last 2 weeks the users have started to complain about performance levels; operations that would normally take 2-3 seconds now run for about 40 seconds. I have rebuilt all indexes and even ran UPDATE STATISTICS, although both 'Auto create statistics' and 'Auto update statistics' are turned on. This failed to help, so I ended up stopping and starting SQL Server, but still no luck.
I have run a DBCC SHOWCONTIG and the fragmentation is well within acceptable levels; scan density is at 94%. The only other thing I can think of is to reboot the NT server, as it has been up for 67 days now. Can anyone else think of anything I might have overlooked?
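For reference, this is roughly the maintenance already described, sketched for a single table; dbo.Orders is a placeholder name, and SQL 7's DBCC SHOWCONTIG wants an object ID rather than a name:

DECLARE @id int
SET @id = OBJECT_ID('dbo.Orders')
DBCC SHOWCONTIG (@id)                        -- fragmentation / scan density
DBCC DBREINDEX ('dbo.Orders')                -- rebuild all indexes on the table
UPDATE STATISTICS dbo.Orders WITH FULLSCAN   -- refresh statistics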
Hello guys, wonder if any of you could help me out here. I have just created a new empty database and imported data from another database into it. This was done with the import wizard from MMC. The first thing that I noticed was the size difference: the old database was well over 1 GB, but the new one was only about 400 MB. The second thing I've noticed, and this is the problem, is that accessing the new (smaller) database instead of the old one causes a huge speed degradation, about 5 times slower than the old version. We are using MS SQL Server 7. Any help would be very gratefully received. Regards, Gethyn
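One way to see where the missing ~600 MB sits is to compare space usage in both databases; a minimal sketch (run in each database, and optionally against individual tables; the table name is a placeholder):

EXEC sp_spaceused                        -- whole database: data, index and unused space
EXEC sp_spaceused 'dbo.SomeLargeTable'   -- per table: reserved, data, index_size, unused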
We recently set up transactional replication, hoping to resolve performance issues we were experiencing. The replication is between 2 SQL Servers (2000), and since we have introduced the replication, performance has degraded considerably.
I will try and explain the scenario.
We have a primary db that our internal users use, and we also have the newly replicated db that our website and another application use. The users are complaining that the website and the internal application are extremely slow, and I was wondering: is it possible to do index tuning on both the primary db and the replicated db based on trace files, so as to create new indexes, or would this have an impact on the replication?
We have an application that runs across the WAN to multiple locations. Performance is poor and we are looking at ways to improve it. One suggestion from our Sr. Network Administrator is to change our network packet sizes across all points, SQL Server and NIC, to match the outgoing router. This would be a size of 1440.
Does anyone have any thoughts or recommendations on this?
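If you do try it, the server-side setting can be changed with sp_configure (it is an advanced option; the default is 4096 bytes, and clients can also override it per connection). A hedged sketch:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'network packet size', 1440   -- bytes; default is 4096
RECONFIGURE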
I am not sure whether the transaction log file size affects database performance. My SQL 2K suddenly became slow yesterday. The data file is 3 GB, and the transaction log file is 11 GB. Someone suggested I should shrink the transaction log file. Can that work?
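If the log itself is the problem, the usual pattern on SQL 2000 is to back up (or truncate) the log and then shrink the physical file. A hedged sketch, assuming the database is called MyDB and its log's logical name is MyDB_Log (both placeholders); note that TRUNCATE_ONLY breaks the log backup chain, so a full backup afterwards is advisable:

BACKUP LOG MyDB WITH TRUNCATE_ONLY
DBCC SHRINKFILE (MyDB_Log, 500)   -- target size in MB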
I am experiencing performance problems with one of my stored procedures. When the stored procedure is first compiled and executed, it behaves as expected (it usually takes 1 or 2 seconds to complete). But its performance then degrades, so that within a day it can take 120 seconds to complete. Once the stored procedure is recompiled, its performance is back to the expected level.
It is a complex stored procedure with two integer parameters and only one SELECT, but composed of multiple views and sub-queries. We have been trying to break the query into smaller pieces using temporary tables, but without success. SQL Profiler shows an unusual number of reads when it goes wrong (more than a million reads).
I think the problem is in the execution plan. I know that recompiling the stored procedure fixes the problem, but I do not know exactly when and why it starts to happen.
The stored procedure is running under the following configuration:
- Microsoft SQL Server Standard Edition (64-bit)
- Version: 9.00.1399.06
- 16 GB RAM
- 8 CPUs
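Since recompiling fixes it, one hedged way to confirm that a stale cached plan is the culprit is to force a fresh plan and compare timings; the procedure name and parameters below are placeholders:

EXEC sp_recompile N'dbo.MyProc'                            -- plan is rebuilt on the next execution
EXEC dbo.MyProc @param1 = 1, @param2 = 2 WITH RECOMPILE    -- one-off fresh plan for this call only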
I am running a query on my production server. It takes hardly 15 minutes. The same query takes more than 3 hours on my test server. The only difference I can see between these two servers is the tempdb size. Does tempdb size affect the performance of a query? Can anyone give me an answer?
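One quick way to compare the two servers is to look at how tempdb is actually sized and allowed to grow on each; a minimal sketch:

USE tempdb
EXEC sp_helpfile   -- size, growth increment and max size of tempdb's data and log files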
I just upgraded our application from SSCE 2.0 to SQL Mobile. Our app is written in C++, and we use OLE DB for most of our queries, including the routine that downloads and inserts our lookup table data. This application is running on a Dell Axim X51.
Using SSCE 2.0, this routine takes 236 seconds, with most time spent inserting data into various tables (using OLE DB). The resultant database size is 15.1 MB.
Using SQL Mobile, this routine now takes 675 seconds, with a resultant database size of 27.9 MB! There is a noticeable increase in time when the downloaded data is being inserted into the database.
What would be the reason(s) for the slower performance and the increased size of the database? This appears to be a monumental step backwards in performance. Any suggestions regarding improving the performance and reducing the size?
I have a general question concerning the performance impact of massively parallel data imports in one SSIS package.
We have a database on a SQL 2005 SP1 server (2 Xeons at 3.8 GHz, 4 GB of RAM) for a report web app, which is updated every day with data from the last year/3 years. The data is extracted from several different DBs on multiple machines at different locations. Right now, there are imports/transformations from 7 companies at 3 locations. The table has ~80 columns and about 2 million rows. I built an SSIS package with one company's import and added the others by copy-and-pasting all the tasks in the package and changing the connection parameters and values. Soon there will be 6 more companies to import for, and there will possibly be about 20 some day.
Now, when these 7 imports run in parallel, there are 3 simultaneous imports from the same source server. Sometimes one of these imports seems to hang. I cannot reproduce it; when I run the package 2 or 3 times, it's gone. So I put some of the imports in sequence to reduce the parallel tasks to 4, and then the problem disappears. The MaxConcurrentExecutables value is set to 6, and "Retain same connection" is set to TRUE.
My questions, regarding stability and performance, are:
1.) Is it better to do those imports in separate packages, and if yes, can I schedule multiple packages to execute in parallel with SQL Server Agent (see the sketch after this list)?
2.) Or should they be combined in one package, running (partly) in parallel?
3.) What is the appropriate value for MaxConcurrentExecutables, and what options do I have to speed up those imports?
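Regarding question 1: separate packages can be scheduled as individual SQL Server Agent jobs, and jobs whose schedules fire at the same time run in parallel. A rough sketch for one such job on SQL 2005 (the job name, package path and step command are assumptions to adapt):

USE msdb
EXEC sp_add_job       @job_name = N'Import Company A'
EXEC sp_add_jobstep   @job_name = N'Import Company A',
                      @step_name = N'Run import package',
                      @subsystem = N'SSIS',
                      @command   = N'/FILE "D:\Packages\ImportCompanyA.dtsx"'
EXEC sp_add_jobserver @job_name = N'Import Company A'   -- register the job on the local server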
When creating my database I have modeled some of the tables after the AdventureWorks sample database.
There are some fields, or entire tables, in AdventureWorks that I do not see an immediate use for; however, I would hate to omit them only to find out later that they would have been beneficial (e.g. the territory table).
In general terms, what would the impact be on the size and performance of a database that contains tables or fields with no data in them?
We currently have a fairly new SQL Server 2000 db (currently about 18 MB in size) as a backend to an application (Navision). Performance seems to be below what it should be.
The db is increasing quite rapidly in size, with a lot of data scheduled to be loaded onto the db and also more and more shops and users coming onto the system, with a lot more transactions going into the db.
The initial setup of the db has the database file properties set to "Automatically grow file" by "30%", with unrestricted file growth.
The server that the db sits on is high spec and has very large disk space.
Because the database will be expanding a lot, it will quite regularly be reaching its maximum space allocation and then performing a 30% increase in size (which I guess affects performance quite a bit?).
Is it best to set the initial size of the db a lot bigger in the first place, as we have a lot of disk space available, and also to set the growth increment bigger?
Any advice on best performance would be much appreciated.
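Pre-sizing the files and switching from percentage growth to a fixed increment can be done with ALTER DATABASE. A hedged sketch, assuming the database is called Navision and its data file's logical name is Navision_Data (both placeholders; check sp_helpfile for the real names and pick sizes that match your disk space):

ALTER DATABASE Navision
MODIFY FILE (NAME = 'Navision_Data', SIZE = 20480MB, FILEGROWTH = 512MB)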
Hi, I use this script, which shows me the size of each table and sums up all the table sizes:
SELECT  X.[name],
        REPLACE(CONVERT(varchar, CONVERT(money, X.[rows]), 1), '.00', '')       AS [rows],
        REPLACE(CONVERT(varchar, CONVERT(money, X.[reserved]), 1), '.00', '')   AS [reserved],
        REPLACE(CONVERT(varchar, CONVERT(money, X.[data]), 1), '.00', '')       AS [data],
        REPLACE(CONVERT(varchar, CONVERT(money, X.[index_size]), 1), '.00', '') AS [index_size],
        REPLACE(CONVERT(varchar, CONVERT(money, X.[unused]), 1), '.00', '')     AS [unused]
FROM (
        -- space used per user table, in KB, taken from sysindexes
        SELECT  CAST(object_name(id) AS varchar(50))                       AS [name],
                SUM(CASE WHEN indid < 2 THEN CONVERT(bigint, [rows]) END)  AS [rows],
                SUM(CONVERT(bigint, reserved)) * 8                         AS reserved,
                SUM(CONVERT(bigint, dpages)) * 8                           AS data,
                SUM(CONVERT(bigint, used) - CONVERT(bigint, dpages)) * 8   AS index_size,
                SUM(CONVERT(bigint, reserved) - CONVERT(bigint, used)) * 8 AS unused
        FROM    sysindexes WITH (NOLOCK)
        WHERE   sysindexes.indid IN (0, 1, 255)
          AND   sysindexes.id > 100                        -- user tables only
          AND   object_name(sysindexes.id) <> 'dtproperties'
        GROUP BY sysindexes.id WITH ROLLUP
     ) AS X
ORDER BY X.[name]
The problem is that the sum of all the table sizes is not the same as the size of a full database backup. For example, when I run this query against my database I see a total of 111,899 KB, which is about 111 MB, but when I do a full backup of that database, the backup is 1.5 GB. Why is that, and where does this size come from?
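The script above only counts data and index pages of user tables, while a full backup contains every used page in the database plus the active portion of the transaction log, so the gap may well be log space or space used outside the tables counted above. Two quick hedged checks:

EXEC sp_spaceused         -- total database size vs. unallocated space
DBCC SQLPERF (LOGSPACE)   -- size of each database's log and how full it is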
I am trying to resize a database's initial log file from 500 MB to 2 MB. I'm using:
ALTER DATABASE <DBNAME> MODIFY FILE (NAME = <DBLOGFILENAME>, SIZE = 2)
And I'm getting "MODIFY FILE failed. Specified size is less than current size." I tried going into the database properties and setting the log file to 2 MB, but it doesn't keep the changes.
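The error is accurate: MODIFY FILE can grow a file but not make it smaller than its current size. Shrinking is normally done with DBCC SHRINKFILE instead; a minimal sketch using the same placeholder name:

DBCC SHRINKFILE (<DBLOGFILENAME>, 2)   -- target size in MB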
The largest table in our database takes up over 4 GB; we ran sp_spaceused for this table. The total length of all columns of this table (just int, char, varchar, money and numeric field types) is about 200 bytes, and the table has around 1,300,000 rows, but the reserved space for this table is 4,800,000 KB and the data space is around 4,600,000 KB.
How can each row average 3.7 KB when the total size of all columns is just 200 bytes? Is there anything else I need to check? Can anyone suggest what causes this problem, or is it normal?
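One hedged thing worth ruling out first is stale space accounting: sp_spaceused reports figures from system tables that can drift, and asking it to recount is cheap. The table name below is a placeholder:

EXEC sp_spaceused 'dbo.BigTable', @updateusage = 'true'   -- recount before reporting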
I am wondering if there is a limitation on the maximum table size in SQL 6.5. I have a table with 2.6 GB and 12,000,000 rows in a SQL 6.5 database. Is this a problem?
Is there a practical size limit, in MB, of a table in SQL Server 6.5?
Is there a size that, once exceeded, degrades performance significantly?
I am speaking of raw megabytes. The table in question will consist of only 3 int columns but has the possibility of becoming VERY LARGE (1,000,000+ rows). I am still in the design phase and can change my strategy if this will prove to be a problem.
I have been trying to solve this problem for quite some time. I was wondering if I can get some help.
These questions are all about MSSQL 6.5:
1. Is there a limit on the size of a table?
2. If the row size is more than the limit set by 6.5, does it make sense to have more tables, or should I have more rows in a different table with duplicate entries for a particular field?
3. At what number of rows does the performance of a query start getting affected?
Is there a maximum or optimum number of rows I should have in a table so that I can get the fastest search queries? I am a novice programmer who just developed something for my workplace. The database has a table created by converting data from Excel spreadsheets. There were 24 spreadsheets covering 12 months, each having approximately 500 rows. Designed this way, the table will have approximately 24 * 500 = 12,000 records. Should I consider redesigning the database to make searches faster?
Hi. I am trying to get the size of each row of each table in the database. Is that possible, using an SP or UDFs? I don't want the column size of each table, but the total data size of each row. So for example, if I have 5 rows each in 3 tables, I need a query that will return 15 rows with the size of each row (the size of all column data summed together). Thanks.
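As far as I know there is no single built-in "row size" function, but per table you can sum DATALENGTH over the columns. A minimal sketch for one hypothetical table dbo.Customers (key and column names are placeholders; ISNULL keeps NULL columns from nulling out the total):

SELECT CustomerID,
       ISNULL(DATALENGTH(FirstName), 0)
     + ISNULL(DATALENGTH(LastName), 0)
     + ISNULL(DATALENGTH(Notes), 0) AS row_data_bytes
FROM dbo.Customers

Covering every table this way means writing the column list per table by hand or generating it dynamically (e.g. from syscolumns), since each table has a different set of columns.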