We just put our main accounting database (50 GB total, 8 GB largest table - GLTRAN) on a new Windows Advanced 2003 server with 8 GB of memory. Everything is essentially the same as the old box, aside from the fact that it's on Windows Advanced 2003 Server and it uses LUNs as the E: drive where the SQL database is kept. It runs fine for the most part, except this one report takes literally 20 times longer to run than on the old box.
It's SQL Server 2000 Enterprise SP4 (also the same). Are there new configuration options for SQL Server when it's running on a 2003 server? Or is it how the OS handles the SQL service? I'm perplexed. It's not indexes: I still have the old box and load the current database onto it for testing, and the report runs like lightning there.
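One thing worth ruling out (my assumption, not something stated in the post): on 32-bit Windows 2003 with 8 GB of RAM, SQL Server 2000 Enterprise can only use memory above 4 GB when AWE is enabled, together with the /PAE switch in boot.ini. A minimal sketch of the check:

-- Requires the /PAE boot switch and the "Lock Pages in Memory" right
-- for the SQL Server service account.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'awe enabled', 1  -- 0 means the buffer pool is capped near 4 GB
RECONFIGURE
GO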
How can I write a CASE statement with greater-than and less-than signs in it? I keep getting an error.
Here is the piece of code I'm working on; simply enough, it shows the idea of what I am trying to accomplish.
SELECT Weight.Weight, Height.Height,
    (Weight.Weight/(Height.Height*Height.Height)) AS BMI,
    CASE BMI
        WHEN (BMI < 18) THEN 'Under Weight'
        WHEN (BMI < 25) THEN 'Healthy Weight'
    END AS 'BMI Grouping'
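For reference, a corrected version might look like the sketch below (the sample values are made up, since the original snippet doesn't show its FROM clause). Two fixes: a comparison needs the searched form CASE WHEN <condition> THEN, not CASE <value> WHEN, and the BMI column alias can't be reused in the same SELECT, so the expression is repeated:

DECLARE @Weight float, @Height float
SELECT @Weight = 70, @Height = 1.75  -- hypothetical sample values
SELECT @Weight / (@Height * @Height) AS BMI,
    CASE
        WHEN @Weight / (@Height * @Height) < 18 THEN 'Under Weight'
        WHEN @Weight / (@Height * @Height) < 25 THEN 'Healthy Weight'
        ELSE 'Over Weight'  -- added so values >= 25 don't fall through to NULL
    END AS [BMI Grouping]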
I'm trying to transfer a large amount of data from Oracle 8 to SQL Server. For each table, I have an OleDBSource with DataAccessMode Command (a select with one inner join), a Data Conversion and an OleDBDestination. Sometimes I'm getting an ORA-01555 snapshot too old error, so I'd like to set a bigger rollback segment, but I don't know how. Please, can anybody help me?
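On the Oracle 8 side, a hedged sketch (the segment name, tablespace and storage sizes are invented; a DBA would size them to the workload): ORA-01555 usually means the undo a long-running query needs has been overwritten, so the standard remedy is larger rollback segments, optionally pinning the session to one:

-- Run as a DBA on the Oracle 8 instance.
CREATE ROLLBACK SEGMENT rbs_big
    TABLESPACE rbs
    STORAGE (INITIAL 10M NEXT 10M MINEXTENTS 20 MAXEXTENTS UNLIMITED);
ALTER ROLLBACK SEGMENT rbs_big ONLINE;

-- In the extracting session, before the long-running statement:
SET TRANSACTION USE ROLLBACK SEGMENT rbs_big;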
Hi.. We have an MSSQL application, and the DB file (not the log file) has been getting bigger over the last few years; right now we are running almost out of space. May I know how other companies deal with this kind of situation?
I am sure other companies' data is growing as well, and they have been at it longer than we have. How do they deal with it?
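A reasonable first step (my suggestion, not from the post): measure which tables account for the growth before deciding between archiving, purging and adding disk. sp_MSforeachtable is undocumented but ships with SQL Server:

-- Space used by the whole database.
EXEC sp_spaceused

-- Space used per table ('?' is replaced by each table name in turn).
EXEC sp_MSforeachtable 'EXEC sp_spaceused ''?'''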
I'm importing about 15 million rows of data from an Access file to an MSSQL database. Some of the fields in the Access file are of DataType "text". The destination fields in the SQL DB are of type varchar(50), and none of the text fields in the Access file actually use anything other than English characters. I put in a "data conversion" item to handle the switch from "text" (which usually tries conversion to nvarchar by default) to varchar.
The import works, and the resulting table ends up weighing about 1.2 GB. HOWEVER, the log itself is a crazy 7-8 GB heavy. I have no idea why the log bloats this much. I can back up/shrink later in the package, but those 8 GB could easily push the hard drive over its limit at some point before completion, and I'm looking for a better alternative.
The database is in "simple" recovery mode. Before the operation, the combined size of the db (log + data) is maybe around 5 MB.
Incidentally, I tried without the intermediate data conversion step, and a similar thing happened - the log finishes up at about 7 GB, the table at 1.2.
Seems ridiculous that the log should grow faster than the table. Any ideas why??
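A hedged explanation (my reading, not confirmed): even under simple recovery, log space can only be reused after the active transaction commits, so loading 15 million rows in one batch keeps the entire insert in the active log. Committing in chunks - the OLE DB Destination's fast-load option "Maximum insert commit size" - lets the log wrap instead of growing. You can watch this while the package runs:

-- Log size and percent used for every database; run it repeatedly
-- during the import to see whether the log wraps or keeps growing.
DBCC SQLPERF(LOGSPACE)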
SSRS Kills Kittens.
The reason I say this is that a subtotal of a dollar amount takes up more space than other values. Right now I'm forced to make all columns the same, larger width, because the matrix appears to wrap everything into one column-width setting. I can try to change the width of the subtotal column, "matrixcolumn4", but it reverts to the other value after I press Enter to apply the change.
I created a database with two filegroups: one primary and one for indexes.
- The primary filegroup contains one data file (*.mdf), which stores all the tables.
- The index filegroup contains one index file (*.ndf), which stores all the indexes.
Most of the indexes are non-clustered.
After a short time in use, the data file is 2 GB but the index file is 12 GB. I do not know what has happened in my database. I have some questions:
1/ How do I reduce the size of the index file?
2/ How can I see what is stored in the index file?
3/ How can I trace everything that affects the index file?
4/ How can I limit the growth of the index file?
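As a hedged starting point for question 2 (this assumes SQL Server 2005 or later, which the post doesn't state), list each index's size to see which ones make up the 12 GB:

SELECT i.name AS index_name, i.type_desc,
    SUM(ps.used_page_count) * 8 / 1024 AS used_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
    ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE i.index_id > 1  -- non-clustered indexes only
GROUP BY i.name, i.type_desc
ORDER BY used_mb DESC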
I was wondering if someone can point out the error, or the thing I shouldn't be doing, in a stored procedure on SQL Server 2005. I want to switch from SQL Server 2000 to SQL Server 2005, which all seems to work just fine, but one stored procedure is causing me a headache.
I could pin the problem down to this query:
DECLARE @Package_ID bigint
DECLARE @Email varchar(80)
DECLARE @Customer_ID bigint
DECLARE @Payment_Type tinyint
DECLARE @Payment_Status tinyint
DECLARE @Booking_Type tinyint
SELECT @Package_ID = NULL
SELECT @Email = NULL
SELECT @Customer_ID = NULL
SELECT @Payment_Type = NULL
SELECT @Payment_Status = NULL
SELECT @Booking_Type = NULL
CREATE TABLE #TempTable(
PACKAGE_ID bigint,
PRIMARY KEY (PACKAGE_ID))
INSERT INTO
#TempTable
SELECT
PACKAGE.PACKAGE_ID
FROM
PACKAGE (nolock) LEFT JOIN BOOKING ON PACKAGE.PACKAGE_ID = BOOKING.PACKAGE_ID
LEFT JOIN CUSTOMER (nolock) ON PACKAGE.CUSTOMER_ID = CUSTOMER.CUSTOMER_ID
LEFT JOIN ADDRESS_LINK (nolock) ON ADDRESS_LINK.SOURCE_TYPE = 1 AND ADDRESS_LINK.SOURCE_ID = CUSTOMER.CUSTOMER_ID
LEFT JOIN ADDRESS (nolock) ON ADDRESS_LINK.ADDRESS_ID = ADDRESS.ADDRESS_ID
AND PACKAGE.CUSTOMER_ID = ISNULL(@Customer_ID,PACKAGE.CUSTOMER_ID)
AND PACKAGE.PAYMENT_TYPE = ISNULL(@Payment_Type,PACKAGE.PAYMENT_TYPE)
AND PACKAGE.PAYMENT_STATUS = ISNULL(@Payment_Status,PACKAGE.PAYMENT_STATUS)
AND BOOKING.BOOKING_TYPE = ISNULL(@Booking_Type,BOOKING.BOOKING_TYPE)
-- If the line below is included, the query takes about 90 seconds; with it commented out, it takes 1 second
--AND ADDRESS.EMAIL LIKE '%' + ISNULL(@Email, ADDRESS.EMAIL) + '%'
GROUP BY
PACKAGE.PACKAGE_ID
DROP TABLE #TempTable
The query performs quite well on SQL Server 2000, but on SQL Server 2005 it takes much longer. I have already installed SP2 x64; I'm running SQL Server 2005 in an x64 environment. As I stated in the comment in the query, it takes 90 seconds to finish with the line included, but only 1 second if I exclude it. I think there must be something wrong with the joins, or something else that has maybe changed in SQL Server 2005. All the joined tables have a primary key. Maybe you folks can spot the error or mistake easily. I would appreciate any help you can offer me to solve this problem.
On the web I saw that there is a Cumulative Update 4 for the SP2 which fixes the following:
942659 (http://support.microsoft.com/kb/942659/) FIX: The query performance is slower when you run the query in SQL Server 2005 than when you run the query in SQL Server 2000
Anyhow, I think the problem is something else; I haven't tried the cumulative update yet, as I suspect the reason this query takes ages to process is something different and more general.
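For what it's worth, a hedged rewrite of the slow predicate (a sketch, not a confirmed fix): ISNULL(@Email, ADDRESS.EMAIL) forces every row's email to be LIKE-compared against itself whenever @Email is NULL, a pattern the 2005 optimizer may handle worse than 2000 did. Short-circuiting the optional filter lets the optimizer skip it entirely:

INSERT INTO #TempTable
SELECT PACKAGE.PACKAGE_ID
FROM PACKAGE (nolock)
LEFT JOIN BOOKING ON PACKAGE.PACKAGE_ID = BOOKING.PACKAGE_ID
LEFT JOIN CUSTOMER (nolock) ON PACKAGE.CUSTOMER_ID = CUSTOMER.CUSTOMER_ID
LEFT JOIN ADDRESS_LINK (nolock) ON ADDRESS_LINK.SOURCE_TYPE = 1 AND ADDRESS_LINK.SOURCE_ID = CUSTOMER.CUSTOMER_ID
LEFT JOIN ADDRESS (nolock) ON ADDRESS_LINK.ADDRESS_ID = ADDRESS.ADDRESS_ID
    AND PACKAGE.CUSTOMER_ID = ISNULL(@Customer_ID, PACKAGE.CUSTOMER_ID)
    AND PACKAGE.PAYMENT_TYPE = ISNULL(@Payment_Type, PACKAGE.PAYMENT_TYPE)
    AND PACKAGE.PAYMENT_STATUS = ISNULL(@Payment_Status, PACKAGE.PAYMENT_STATUS)
    AND BOOKING.BOOKING_TYPE = ISNULL(@Booking_Type, BOOKING.BOOKING_TYPE)
    -- When @Email is NULL, the LIKE never has to be evaluated.
    AND (@Email IS NULL OR ADDRESS.EMAIL LIKE '%' + @Email + '%')
GROUP BY PACKAGE.PACKAGE_ID
OPTION (RECOMPILE)  -- lets 2005 build a plan per parameter combination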
We have performance problems with MS SQL Server 2000. We upsized an Access 2000 application to MS SQL Server, using linked tables. Most of the time the performance is fine (there are at most 10 users connected to the server at the same time). However, it regularly happens that the database stops responding: queries which normally take 20 ms require 20 seconds or more. In the Access client this looks as if Access has hung; it is not responding, even though it eventually comes back to life.

What I have found is that if I restart the SQL Server, the problem disappears and performance stays fine for some time. With this in mind, I set up a batch job which stops and restarts SQL at night. However, recently the problem started appearing even when the SQL Server had been running for only a few hours. I also watched Performance Monitor on both the client workstation and the server, and even when response times are slow, processor usage on both is under 10 percent.

I wonder whether anybody could help me with this problem. I realise that using linked tables in Access is not the best thing for performance, but I would still expect at least decent performance; at the moment the situation is worse than if we were using just Access. For your information, the computer the SQL Server runs on is a dual-processor Pentium Pro 200 MHz with 320 MB RAM and a SCSI RAID. The server is the only Windows 2000 domain controller on the network, it runs Active Directory, and Exchange Server 5.5 is installed on it as well. This looks like a lot for a single server, but please bear in mind that there are only 15 users on the network. It may also be interesting to know that we only recently upgraded from SQL Server 7 to SQL Server 2000, but we were experiencing the same problem before, though not as often.
On the same network, when I run a Windows Server 2003 machine with a web server on it, SQL clients connected to a SQL Server on another machine experience slow queries. When I disconnect the Windows 2003 server, everything runs faster. What could be the problem?
Hiya folks, this is more a request for input from peeps with more experience of SQL than myself. A problem has shown itself on my SQL server over the last week or so: the server will 'slow down' intermittently, almost as if the connection to the server has been lost for about 30 seconds. All will be fine for another minute or so, and then the same problem occurs.
The only way I've found to get round this is to stop SQL Server, completely restart the machine SQL resides on, and then restart SQL. This cures the problem for about a day.
I've written a program which communicates via ODBC with multiple database platforms. On a local network everything seems to be OK, but when I connect via VPN (2 Mbit/s S-DSL) to MSSQL (2000 SP3), the connection is not only very slow; it seems that MSSQL only uses 1% of the bandwidth. I don't think 0.25 KB/s is a normal speed. A query takes about 5 - 10 minutes. (And I do a lot of queries...)
If I connect to an Oracle DB, the full bandwidth is used (125 KB/s).
Is there a problem with SQL 2000? How can I solve this behaviour?
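One hedged thing to rule out (my guess): over a VPN the SQL Server ODBC driver can end up on named pipes, which performs terribly on high-latency links, whereas the Oracle client is already on plain TCP. Forcing the TCP/IP Net-Library in the connection string tests this cheaply (server and database names below are placeholders):

Driver={SQL Server};Server=myserver;Database=mydb;Network=DBMSSOCN;Trusted_Connection=yes;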
I am having a problem accessing my SQL Server database using either Enterprise Manager or Query Analyzer. It is awfully slow: each time I click to expand a database in Enterprise Manager, it takes about 25 minutes to do anything. I was running a DTS package yesterday which failed, and I have had this problem since. If I access the database via my app, everything seems to run at normal speed. In Task Manager, the sql server process is using 750 MB of memory and 750 MB of virtual memory??
Does anyone have any experience with a server slowing down over time to the point that it must be rebooted? This occurs over a time frame of a few days to as long as a week. The server has a single Xeon 3.6 GHz processor and 8 GB of RAM. It executes production SQL scripts against databases contained on the server, as well as a data warehouse stored on an AS400 server.
After rebooting, all jobs seem to execute in a reasonable time frame, according to their size and scope.
Our server was running out of space, so we dropped a database to free some up. The server is slow now; it's taking more time to query or delete records than normal. What happened, and how do I fix it?
When I query or browse databases or tables in SQL Server 2000, it works extremely slowly. It started behaving like this from one day to the next. I tried reinstalling, but it stayed the same. Now I'm installing Service Pack 3, and it's curious that executing the replsys.sql, replcom.sql and repltran.sql scripts in the installation is extremely slow too. Did anyone experience this, or have any idea? Thanks in advance.
An SSIS package transferring data from a DB instance on SQL Server 2005 to SQL Server 2000 is extremely slow. The package uses an OLEDB Source to OLEDB Destination to transfer what is basically one table from SQL Server 2005 to SQL Server 2000. The job takes 5 minutes to transfer about 400 rows at night, when there is very little activity on the server. During the day the job almost always times out.
On SQL Server 2000 instances, the job ran in minutes as the old 2000 package.
Is there an alternative to this? The Transfer Objects task does not work, as there is apparently a defect according to Microsoft. Please let me know if there is any option other than using an Execute 2000 Package task, or an ActiveX Script that reads records from the source and inserts them into the destination, which I am not certain how long it would take or how viable it would be.
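One more hedged option (server, database and table names below are invented): bulk-export the table from the 2005 instance with bcp and bulk-load it into 2000. Native format keeps common column types intact and generally beats row-by-row transfer:

rem Export from the 2005 instance (trusted connection, native format).
bcp MyDb.dbo.MyTable out MyTable.bcp -n -S server2005 -T

rem Load into the 2000 instance.
bcp MyDb.dbo.MyTable in MyTable.bcp -n -S server2000 -T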
Our server is running slowly. There are no locks, and the server has been rebooted, but the problem is still there. This has been going on for some time now. I intend to restart the server. Does anybody have a quick solution? Please help. Thanks for your assistance!!
Our SQL Server needs to be rebooted every two weeks, sometimes even sooner. Otherwise it gets extremely slow and I can't even open any tables in Enterprise Manager. Also, the users cannot type any info into the application screen, and it takes forever to change from one screen to another. Can somebody please suggest how to avoid this situation, or any idea of why it happens?
When I first start my SQL Server, it runs fast. Ten to fifteen days later, performance is very slow; after I restart the SQL service, it returns to normal.
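A hedged guess that fits several of the restart-fixes-it reports above: by default SQL Server's buffer pool keeps growing until the OS and other services on the box are starved, and a restart merely hands the memory back. Capping it is a cheap experiment (the 1024 MB figure is made up; leave headroom appropriate to your machine):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'max server memory', 1024  -- value in MB
RECONFIGURE
GO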
We are facing a performance problem using SQL Server 2000.
We have one standalone P4 PC (128 MB of RAM) as the server, and around 30 users access SQL Server through the network.
We have written our application in VB6 with SQL Server 2000 as the back end. We have used stored procedures wherever necessary, and the cursor location is server-side.
When we start with 5 users it is not slow; when all the users, say 30, come in, it slows down.
Can someone help us find out what the problem is?
I'm still new to SQL Server, so some of my lingo/verbiage may be incorrect; please bear with me.
The company I work for relies strictly on ASP and SQL Server for 85% of its daily operations. We have some Access projects and some VB projects as well, but for the majority it's ASP and SQL Server.
Previously we had 2 T1 lines with something like 3 MB apiece and a handful of Dell servers. Our main server is also a Dell running Windows Server 2003 and is hosted through a reputable company here in town. They have a host of fiber lines running all over, so I know we're getting good throughput. We've actually just upgraded to a DS3, but we're still working out the kinks with that. Anyway, I just want to eliminate that up front - we have great connection speeds.
The problem lies, I believe, in our database design. The company supposedly had a DBA come in and help set up the design some 3 or 4 years ago; however, even with my limited knowledge I feel like something is just not working right.
Our main table is "Invoices", which holds all of our invoices, ever. This table has an identity field "JobID", which is also the clustered index. We have other indexes as well, but they appear to be just scattered about. The table has probably 30-40 fields per row and ONLY 740,000 rows - tiny in comparison to what I'm told SQL Server can handle.
However, our performance is embarrassing. We've just landed a new client who's going to be bringing us big business, and they're already complaining about the speed of their website. I am just trying to figure out ways to speed things up. SQL is on a dedicated machine, I believe with dual Xeon processors and a couple of gigs of RAM, so that should be OK. The Invoices table I spoke of is constantly accessed by all kinds of operations, as it's the heart of what we do. We also have other tables which are joined to this table to make up the reporting we do for clients.
So I guess my question is this: should the clustered index be the identity field, and is that causing us problems? We use this field a lot for accessing a single invoice at a time, and from what I understand that makes it a good clustered index, because the index IS the JobID we're looking for. But when it comes time to do reporting for a client, we're not looking at this field; we just pull the records for that client's number. And we only have 1400 clients at this point. So if we were to make the "ClientID" field the clustered index, it might be much faster to zero in on the group of invoices we wanted, because the ClientID is ALWAYS included in our queries.
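For the record, the change being debated would look something like the sketch below (constraint and index names are guesses, and rebuilding the clustered index rewrites the whole table, so this is an off-hours experiment, not a recommendation):

-- Assumes the current primary key is the clustered index on JobID.
ALTER TABLE Invoices DROP CONSTRAINT PK_Invoices

-- Cluster on ClientID, with JobID as a uniquifying tiebreaker.
CREATE UNIQUE CLUSTERED INDEX IX_Invoices_ClientID ON Invoices (ClientID, JobID)

-- Keep JobID as the primary key, now nonclustered, for single-invoice lookups.
ALTER TABLE Invoices ADD CONSTRAINT PK_Invoices PRIMARY KEY NONCLUSTERED (JobID)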
But because a "DBA" came in to design this setup, everyone is afraid to change it. I guess it's hard to explain without people sitting here going through the code and looking at the structures of all our tables - but what I need is a guide to easily increasing performance on SQL Server, and to the proper use of clustered and non-clustered indexes and how to mix and match them.
Sorry I wrote a book. Ideas? This place has always helped me before, so thanks in advance!
Our main production server has started running slow. It is a dual Xeon thingy with plenty of RAM, so hardware is not an issue.
Basically, a service connects to the database and executes a few stored procs. The only way I can get the system up to speed again is to recompile one of the SPs, but that is only a temporary fix.
Anyone had a similar thing?
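That symptom reads like classic parameter sniffing - a cached plan that suits one parameter set and cripples another. A hedged stopgap while the root cause is found (procedure, table and column names here are hypothetical): recreate the troublesome proc WITH RECOMPILE so every call gets a fresh plan:

-- Trades plan reuse for a parameter-appropriate plan on every call.
CREATE PROCEDURE dbo.usp_GetOrdersForCustomer
    @CustomerID int
WITH RECOMPILE
AS
SELECT OrderID, OrderDate, Amount
FROM dbo.Orders
WHERE CustomerID = @CustomerID
GO

Alternatively, EXEC sp_recompile 'dbo.usp_GetOrdersForCustomer' flags the existing proc for recompilation without redefining it.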
Can anyone give me help on performance tuning in SQL Server 2000?
You know how there are lots of hosted applications out there; many of them provide you with your own database (not shared).
1. If a server has 1K databases on it, will this slow down the server just due to the # of databases? (Each user has their own database, but they won't be accessing it that much, really.)
A separate database is usually required for security purposes.
Hi, I'm using SQL Server 2000 with Small Business Server 2003. I have a smallish database and an Access 2003 front end with ODBC links. The system has been running fine for about 2 months with 16 users. This weekend, I will be adding a further 30 users to the system.
I've been doing some work on a new copy of the front end over the past day or two, and found that occasionally the system runs really slowly, taking a couple of minutes to open my front screen, which normally takes a few seconds.
The only solution I have found is to stop and start SQL Server Manager on the server, and then everything is fine. However, this is clearly not an acceptable solution, because I'm doing it about twice a day.
Can anybody suggest why this might be, or how I might fix the problem?
I have a maintenance plan in place, which runs overnight.
I have a SQL 2k database - relatively small, < 1 GB - on WinXP. Whenever I try to do anything in Ent Man, I get the hourglass for minutes every time. Customers are not complaining. Performance Monitor and the db logs have not revealed any bottlenecks so far. The hardware tested good, and all other applications run normally. The log file is about 80 MB. This started just recently.
I am a rookie, so I need a hint on what to check. Indexes? Logging?
I have a database, and when I run a query on it, the query takes 10 minutes to complete. I am running the following query:
SELECT t103.cs_flag, t103.pr_flag, SUM(t103.amount), COUNT(t103.record_id)
FROM br_data t103
WHERE t103.acct_id = 12 AND (t103.state = 3 OR t103.state = 7)
GROUP BY t103.cs_flag, t103.pr_flag
The br_data table doesn't seem to be using its indexes?? It has around a million records. Now, when I export the database, import it onto another SQL Server and run the same query as above, it only takes 1 or 2 seconds.
On the server we are having problems with, I have tried to rebuild the indexes using DBCC DBREINDEX (br_data, '', 0), but this hasn't helped. I have also tried backing up the database, deleting it and restoring it; this hasn't helped either. I have no idea why the query runs slow on the original box but quick when I transfer it to another server??
Both servers are running Windows 2003 with SQL 2000 SP4. There are no resource problems such as CPU / memory. Any ideas??
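Since rebuilding the indexes didn't help, stale or skewed statistics are a plausible suspect (my guess - the export/import would have built fresh statistics on the second server). A hedged check using the table from the post:

-- Refresh optimizer statistics with a full scan.
UPDATE STATISTICS br_data WITH FULLSCAN

-- Then compare the query plans on both servers; the fast one
-- presumably seeks an index covering acct_id and state.
SET SHOWPLAN_TEXT ON
GO
SELECT t103.cs_flag, t103.pr_flag, SUM(t103.amount), COUNT(t103.record_id)
FROM br_data t103
WHERE t103.acct_id = 12 AND (t103.state = 3 OR t103.state = 7)
GROUP BY t103.cs_flag, t103.pr_flag
GO
SET SHOWPLAN_TEXT OFF
GO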