I am currently migrating a DB from Oracle to SQL Server (Microsoft SQL Server 2005 - 9.00.1399.06 (Intel X86))
I've used SSMA to do the migration, and I'm reviewing the procedures to check them. I have found a performance problem in one of them, which worked perfectly under Oracle, and I have tried lots of things with no luck, so I guess I need some help.
I insert a row in a table, and the time this takes is fine, but seconds later I need to read the row back, and this SELECT takes 1-2 ms longer every time. This process is repeated many times.
Every insert-select cycle takes 200 ms when it receives the first data (including some other operations that are not increasing the response time), and 200 insertions later it takes about 500 ms, which is far too much, considering it keeps increasing.
The table has 25+ columns, and some of them contain varchar of 3000+ characters.
I make the SELECT using 4 columns in the WHERE clause. One of them is numeric, and the rest are varchar (none of them is the primary key).
I've got a clustered index on the primary key, and two more non-clustered indexes. One of them is on the columns I use in the SELECT, and its parameters are Fill Factor: 90 and Recompute Statistics Automatically.
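For reference, this is roughly what that non-clustered index looks like (table and column names changed); making it covering with INCLUDE is just something I read might help on SQL Server 2005, not something I have verified:

    -- Hypothetical names. INCLUDE adds the remaining selected columns
    -- so the SELECT can be answered without a clustered index lookup.
    -- Note: index key columns are limited to 900 bytes, so the big
    -- varchar(3000) columns cannot be key columns.
    CREATE NONCLUSTERED INDEX IX_MyTable_Lookup
        ON dbo.MyTable (NumericCol, VarcharCol1, VarcharCol2, VarcharCol3)
        INCLUDE (SelectedCol1, SelectedCol2)
        WITH (FILLFACTOR = 90, STATISTICS_NORECOMPUTE = OFF);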
Regarding SSRS, what is considered a good response time? We have some reports running 2 minutes and the users think that is too long. Is there a guideline as to what a user should reasonably expect and if so, what is that guideline?
Hi, I have a problem: my response time is too slow. Does anyone know how to improve it? My database size is 11 GB. I didn't change any configuration parameters after installing SQL Server, so the server is still running with the default configuration. Do I need to change any parameters or not? My queries generate a lot of temporary tables.
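Since the workload creates a lot of temporary tables, the first thing I thought of checking is how tempdb is sized; this is just a sketch, assuming SQL Server 2005 or later and the default logical file names:

    -- Current tempdb file sizes (the size column is in 8 KB pages).
    SELECT name, physical_name, size
    FROM tempdb.sys.database_files;

    -- Pre-size tempdb so it does not keep autogrowing in small steps;
    -- 'tempdev' and 'templog' are the default logical names.
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 1024MB);
    ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 256MB);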
When I try to connect to a SQL Server instance from Enterprise Manager, I'm getting a connection timeout error. I have to change the timeout parameter from 4 (the default) to 30 in order for it to work. I've also noticed that some applications (like SharePoint) are having the same problem connecting to that server.
My question is:
Why is that happening?
It used to work fine; I only started getting this issue a couple of days ago.
We did an in-place conversion of our database from MS SQL Server 6.5 to 7.0. Our application is much slower now on SQL 7.0. Any idea why? The following is a sample SQL statement that runs quickly on SQL 6.5 and takes a long time on SQL 7.0. I have also attached the query plans from SQL 6.5 and 7.0.
SELECT Person_Name.PerNam_Person_Name_PK, Person_Name.PerNam_Row_Status,
       Person_Name.PerNam_Last_Name_Sndx, Person_Name.PerNam_Last_Name,
       Person_Name.PerNam_Name_Suffix, Person_Name.PerNam_First_Name,
       Person_Name.PerNam_Name_Prefix, Person_Name.PerNam_Middle_Name,
       Person_Name.PerNam_Event_Person_FK, Event.Evn_Event_Nbr, Event.Evn_Event_Type,
       Event_Person.EvnPer_Last_Name, Event_Person.EvnPer_First_Name,
       Event_Person.EvnPer_Middle_Name, Event_Person.EvnPer_Name_Prefix,
       Event_Person.EvnPer_Name_Suffix
FROM Person_Name, Event, Event_Person
WHERE (Person_Name.PerNam_Agency_ID = "CL")
  AND (Person_Name.PerNam_Event_Person_FK = Event_Person.EvnPer_Event_Person_PK)
  AND (Event_Person.EvnPer_Event_FK = Event.Evn_Event_PK)
  AND (Person_Name.PerNam_Person_Name_PK = 0 OR (Person_Name.PerNam_Event_Person_FK = 581541))
  AND (Person_Name.PerNam_Row_Status <> "D")
Query plan in SQL 6.5
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 31250 ms.

STEP 1
  The type of query is INSERT
  The update mode is direct
  Worktable created for REFORMATTING
  FROM TABLE Person_Name (Nested iteration, Index: PK_Person_Name)
  FROM TABLE Person_Name (Nested iteration, Index: PerNam_Event_Person_FK)
  FROM TABLE Person_Name (Nested iteration, Using Dynamic Index)
  FROM TABLE Event_Person (Nested iteration, Index: PK_Event_Person)
  TO TABLE Worktable 1
STEP 2
  The type of query is SELECT
  FROM TABLE Worktable 1 (Nested iteration, Table Scan)
  FROM TABLE Event (Nested iteration, Index: PK_Event)

SQL Server Parse and Compile Time: CPU time = 0 ms.
Table: Person_Name   scan count 2, logical reads: 6, physical reads: 5, read ahead reads: 0
Table: Event         scan count 0, logical reads: 0, physical reads: 0, read ahead reads: 0
Table: Event_Person  scan count 0, logical reads: 0, physical reads: 0, read ahead reads: 0
Table: Worktable     scan count 0, logical reads: 0, physical reads: 0, read ahead reads: 0
Table: Worktable     scan count 1, logical reads: 1, physical reads: 0, read ahead reads: 0

SQL Server Execution Times: CPU time = 0 ms, elapsed time = 62 ms.
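For comparison, here is the same statement rewritten with explicit ANSI JOIN syntax and single-quoted string literals; this is only a rewrite to test on 7.0, not a confirmed fix:

    SELECT pn.PerNam_Person_Name_PK, pn.PerNam_Row_Status,
           pn.PerNam_Last_Name_Sndx, pn.PerNam_Last_Name,
           pn.PerNam_Name_Suffix, pn.PerNam_First_Name,
           pn.PerNam_Name_Prefix, pn.PerNam_Middle_Name,
           pn.PerNam_Event_Person_FK, e.Evn_Event_Nbr, e.Evn_Event_Type,
           ep.EvnPer_Last_Name, ep.EvnPer_First_Name, ep.EvnPer_Middle_Name,
           ep.EvnPer_Name_Prefix, ep.EvnPer_Name_Suffix
    FROM Person_Name AS pn
    JOIN Event_Person AS ep ON pn.PerNam_Event_Person_FK = ep.EvnPer_Event_Person_PK
    JOIN Event AS e ON ep.EvnPer_Event_FK = e.Evn_Event_PK
    WHERE pn.PerNam_Agency_ID = 'CL'
      AND (pn.PerNam_Person_Name_PK = 0 OR pn.PerNam_Event_Person_FK = 581541)
      AND pn.PerNam_Row_Status <> 'D'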
Is there a global variable or something of the sort that would tell me how long it took to execute a query?
I need to monitor my DB response times, and we have a query that normally runs in under 2 seconds. So we want to run this query every couple of minutes, and if it takes more than 12 seconds to run, we want to send an email to our DB staff...
I know that I could take a timestamp before and after and then subtract, but I wanted to know if there is an easier way to do it.
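In case it helps, here is the timestamp approach I had in mind, wired to Database Mail; a sketch, assuming SQL Server 2005+ with Database Mail configured (the profile, recipient, and procedure names are made up):

    DECLARE @start DATETIME, @elapsed_ms INT;
    SET @start = GETDATE();

    EXEC dbo.usp_MonitoredQuery;  -- the query being timed (hypothetical)

    SET @elapsed_ms = DATEDIFF(ms, @start, GETDATE());
    IF @elapsed_ms > 12000  -- the 12-second threshold from above
        EXEC msdb.dbo.sp_send_dbmail
             @profile_name = 'DBA Profile',
             @recipients   = 'dba-team@example.com',
             @subject      = 'Monitored query exceeded 12 seconds',
             @body         = 'Check database response times.';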
My database tables have an identity column defined as IDENTITY(1,1), but the values sometimes increase by 1000 and sometimes by 100. I don't understand why this is happening in every one of my tables.
We have poor performance spikes on the drive containing our log file, but only for reads, and it seems to happen when we run a re-index job. Is that a likely correlation to the poor performance reading the log file, and what reads are done from a log file?
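To confirm where the reads are landing, I was planning to snapshot the file-level I/O stats before and after the re-index job; a sketch, assuming SQL Server 2005 or later (on a default layout, file_id 2 is the log file):

    SELECT DB_NAME(vfs.database_id) AS database_name,
           vfs.file_id,
           vfs.num_of_reads,
           vfs.num_of_bytes_read,
           vfs.io_stall_read_ms
    FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL) AS vfs;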
Using SSMS 2012, we are experiencing extremely slow response times when opening SQL job steps to edit and also when deploying SSIS packages. Sysadmins have no problem. Users in the ssis_admin role have no problem. It's the rest of the users who have issues.
I have a problem with querying system job history data. Response time is slow and varies from time to time; sometimes it takes a few seconds and sometimes more than 2 minutes. I understand that there are quite a number of jobs on the DB server, which might result in the slow response time.
Is it possible to shorten the response time, for example by using an index? My application always looks as if it has hung when the query takes very long to run.
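For reference, this is roughly the kind of query involved; restricting it to one job and a date range, and purging old rows, are my own ideas for a workaround, not confirmed fixes:

    -- Fetch recent history for one job instead of scanning everything.
    -- run_date is stored as an integer in yyyymmdd format.
    SELECT j.name, h.step_id, h.run_status, h.run_date, h.run_time, h.run_duration
    FROM msdb.dbo.sysjobs AS j
    JOIN msdb.dbo.sysjobhistory AS h ON h.job_id = j.job_id
    WHERE j.name = 'MyJob'          -- hypothetical job name
      AND h.run_date >= 20240101   -- yyyymmdd lower bound
    ORDER BY h.instance_id DESC;

    -- Purging old history should also shrink the table being scanned.
    EXEC msdb.dbo.sp_purge_jobhistory @job_name = 'MyJob', @oldest_date = '20240101';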
I have a report which takes around 5 seconds when run in BIDS but around 20 seconds when deployed on the report server. The execution log says TimeDataRetrieval is around 3-4 seconds and rendering time is around 15-17 seconds. From this report I am passing 8 parameters to a drill-through report, and there are 36 text boxes where I have defined these parameters for the drill-through action. All these parameters are populated in the main dataset. When I deployed the same report without any drill-through action or parameters, it took 5 seconds. So I suspect that the report is taking more rendering time on the server because of the drill-through parameters. I am using 2008 R2 and IE11.
Is it expected behavior that a report with so many drill-through parameters takes more rendering time? If yes, then why does it not take the same time in BIDS?
Hi, I had an idea that I want to share with you guys. Here's the thing: I have an Access DB that I will move to SQL Server, optimizing its structure and data in order to optimize its performance. On this last point, the present Access DB has some very redundant queries. Take this example as "source code": query A is built by selecting some fields, with some conditions, from queries B and C. I want to end this: query A should be query A, B should be B and C should be C, each built on its own. However, a new idea came to light! This data is to be displayed in a VB.Net web application, and I want to show the results of queries A, B and C one at a time, but all sequentially. What if I execute two commands, one reading query B (via a stored procedure) and another reading query C, save that data in an ArrayList of objects, and then generate query A's results from those application objects that are already instantiated and ready to use, selecting the final data from those objects and not from the database? Am I making a mistake, or am I optimizing performance somehow? The final purpose is really to get the most performance we can! Any tips on this? Thanks a lot!
I'm not sure if this is the right forum, but I have a general question about running/storing databases. I have been running a process with 60+ million records in one table and another 16 million in a second table, and it is taking forever to import everything and run the appropriate queries. I've been doing this all on a desktop, and I am anxious to learn a more efficient, faster method of processing this amount of data.
What solution should I pursue, given that I do this work a few times a year, so that it doesn't take three full days of processing to reach an answer with the data?
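For the import step specifically, a bulk load with minimal logging should be far faster than row-by-row inserts; a sketch, assuming a flat file and a staging table (all names hypothetical):

    -- SIMPLE or BULK_LOGGED recovery plus TABLOCK enables minimal logging.
    ALTER DATABASE MyWarehouse SET RECOVERY BULK_LOGGED;

    BULK INSERT dbo.StagingRecords
    FROM 'C:\data\records.csv'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n',
          TABLOCK,              -- table lock, required for minimal logging
          BATCHSIZE = 100000);  -- commit in batches to keep the log small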
I've received conflicting information from Microsoft personnel, so I thought I'd see what some thoughts here are.
In summary, we upgraded a server from SQL Server 2000 SP4 Standard to SQL Server 2005 SP2 Standard. This server's main purpose is to handle a lot of merge replication to anonymous pull subscribers. We also have some transactional replication occurring. There is 8GB of memory on the server.
During the upgrade we ran into memory pressure on MemToLeave. We put the /3GB parameter in boot.ini and -g512 in the startup parameters, per Microsoft's suggestions. This got us past the upgrade process.
After the upgrade, we took off the boot.ini setting and the -g512. We enabled AWE and assigned 6GB to SQL Server. Then once in a while when the merge snapshots were running, we'd receive some "system out of memory" errors. I went ahead and put -g512 back on and haven't received the error since.
My question to Microsoft then was: if we go to, say, 16GB of memory on the box and give, say, 14GB to SQL Server, would it be beneficial to set the -g option to a higher number? That's when I got into a discussion with the Microsoft person, who claimed that SQL Server 2005 Standard would not use anything above 4GB, which is the opposite of what the Microsoft site says, of what others have said, and of the memory usage I'm seeing, with DBCC MEMORYSTATUS showing the 6GB being used. We'll be talking to our TAM about our support, specifically on replication topics, as we've had some problems getting knowledgeable support in this area. If anyone knows of support outside of Microsoft on replication topics, I'd love to hear about it.
Any thoughts on the tweaking of memory related to our environment? I know it may be site-specific and we may have to do some trial and error, but with:
1. doing heavy merge replication processing on the server (1,500 subscribers), and
2. say we get 16GB on the box (the server is Windows 2003 SP2 Enterprise),
are there some suggestions on a -g setting to best utilize the Buffer Pool and MemToLeave? Some other things to do? Is there some process/method to help determine how best to define the memory settings? Is there a way to see how much BPOOL and/or MemToLeave the system is using at a given moment? DBCC MEMORYSTATUS gives a lot of info, but I'll be the first to admit that I don't know what a lot of the info there is really telling me. If there is some white paper, etc. that would help determine what the system is doing memory-wise, that would be great to know.
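For what it's worth, this is how I have been trying to break the usage down on 2005; my understanding is that single-page allocations come from the buffer pool and multi-page/virtual allocations come from MemToLeave, though I'm not certain this accounting is complete:

    SELECT SUM(single_pages_kb)             AS bpool_single_page_kb,
           SUM(multi_pages_kb)              AS memtoleave_multipage_kb,
           SUM(virtual_memory_committed_kb) AS vm_committed_kb
    FROM sys.dm_os_memory_clerks;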
One of our clients is running out of disk space on the SAN, and I was simply wondering if it's possible to increase the disks on the fly without any major problems. Should we take any special precautions? It's a clustered Win2k3 64-bit server with SQL Server 2005 Enterprise Edition...
Hi, I'm using SQL Server Express (9.0.2047). My database's *.ldf file's size is increasing every day... How can I decrease it? How can I make the ldf file small?
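This is what I was about to try, assuming the database does not need point-in-time recovery (the logical log file name is a placeholder; the real one is in sys.database_files):

    -- With SIMPLE recovery the log truncates on checkpoint and stays small.
    ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;

    -- One-time shrink of the log file down to about 10 MB.
    USE MyDatabase;
    DBCC SHRINKFILE (MyDatabase_log, 10);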
Can I get some detail about why the sqlservr.exe process is increasing in size drastically? I have checked all the parameters of the server and the processes running on the server.
I feel the server is not releasing its queues and is holding on to the memory. Can anyone suggest what the cause could be?
I have increased the number of connections my SQL Server will allow, and now I cannot restart my SQL Server; it keeps crashing and giving me an error message. Has anybody else come across this, or does anyone know how I can restart my SQL Server so I can at least do a bit of work today?
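The recovery path I am attempting, in case it helps anyone, is to start the instance in minimal configuration mode and put the setting back; a sketch I have not verified on my build:

    -- From a command prompt, first start the instance in minimal
    -- configuration mode:  sqlservr.exe -f
    -- Then, from a query window, reset 'user connections' to the
    -- default of 0 (unlimited) and restart normally:
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'user connections', 0;
    RECONFIGURE;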
I am having trouble increasing the size of the log device on a SQL 6.5 database. When I use SQL Enterprise Manager, I get an error saying that the device has 0 MB available. When I use the ALTER DATABASE statement, I get an error saying that there is not enough space on the disk, but I know this not to be the case. Does anyone have any suggestions? Thanks in advance, Michael Lawlor
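For what it's worth, this is the sequence I understood for 6.5: resize the device first, then extend the database onto it and mark the device for the log. I am going from memory on the exact syntax, so treat it as a sketch (names and sizes are made up):

    -- DISK RESIZE takes the new total size in 2 KB pages (25600 = 50 MB).
    DISK RESIZE NAME = 'logdev1', SIZE = 25600

    -- Extend the database onto the enlarged device (size here is in MB),
    -- then dedicate that device to the transaction log.
    ALTER DATABASE mydb ON logdev1 = 10
    EXEC sp_logdevice mydb, logdev1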
I faced a network problem for some days, which forced one of our replications to be stopped. The publisher database is a high-volume database. After I restarted the replication, the subscriber database had its transaction log size increase quickly, because of the high volume of information to be inserted.
My concern is that, the way it is working, there will not be enough space for the log or for its backup files.
So, I have created a T-SQL job with the following commands:
BACKUP LOG database_name WITH TRUNCATE_ONLY
DBCC SHRINKDATABASE (database_name, TRUNCATEONLY)
It runs every 20 minutes; however, the transaction log keeps increasing.
I have also changed the db option "SELECT INTO/BULKCOPY" to TRUE, in order to avoid logging bulk copies, but I believe it didn't work because it doesn't apply to the replication process.
Does anybody know if I can disable the transaction log or avoid this increase in size during replication?
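In case it matters, I have been checking which transaction is pinning the log like this (database name substituted); my understanding is that the log cannot truncate past the oldest transaction not yet delivered to the distributor:

    -- Shows the oldest active transaction and the oldest
    -- non-distributed replicated transaction in the database.
    DBCC OPENTRAN ('database_name')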
Thanks a lot! Regards, Felicia Schimidt felicia.schimidt@br.flextronics.com
I have a database with 1 .mdf data file and 1 .ldf t-log file. There are multiple inserts/deletes/transactions performed on the data daily, but the sizes of the two files remain constant (5,774,336 and 153,480 respectively). I perform daily full backups and hourly t-log backups (during business hours of 9-6), and these backup files change size, so why aren't my physical DB files changing? I have them set to auto-grow at 10%, unrestricted...
I'm using IIS7. Our company has been operating a server using Classic ASP, and recently we met a big problem: sessions increase indefinitely, and the error "503.3 - ASP.NET queue is full" occurs continuously. Is there any solution for diagnosing this problem, or a way to see which source-code page creates the sessions (e.g. 123.asp: 300 sessions, 234.asp: 4000 sessions)?
Would anyone have any suggestions/advice on how to determine what is causing the memory usage of sqlservr.exe to increase at a dramatic pace in Windows Task Manager? What would be a good way to slow down this memory usage? Thanks!
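As far as I know, SQL Server grows its buffer cache to whatever it is allowed by design, so capping it is the usual approach; a sketch, with 4096 MB as a placeholder value to adjust:

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    -- Cap the buffer pool so the OS and other processes keep some RAM.
    EXEC sp_configure 'max server memory (MB)', 4096;
    RECONFIGURE;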
Hi all, our production server has 4GB of RAM and is running SQL Server 7.0. Since SQL Server 7.0 Standard Edition can by default use less than 2GB, our SQL Server is now using only 1.8GB (leaving the rest for the OS, Windows 2000 Server).
In order for SQL Server to take advantage of more than 2GB of RAM, it is suggested that boot.ini be modified to include the /3GB switch.
Has anyone seen any issues with doing this? Is it safe to do so on the Standard Edition of SQL Server 7.0?
Does anyone know how to improve performance on insert statements? I have to run a batch of several thousand insert statements, but it just takes too long. Does anyone know of any good tips to improve performance?
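One batching approach I have seen suggested is wrapping the statements in an explicit transaction so each INSERT is not an individual commit; a sketch (table name hypothetical, batch size a guess to tune):

    SET NOCOUNT ON;  -- suppress per-statement rowcount messages

    -- Committing every few thousand rows avoids one log flush per INSERT.
    BEGIN TRANSACTION;
    INSERT INTO dbo.TargetTable (Col1, Col2) VALUES (1, 'a');
    INSERT INTO dbo.TargetTable (Col1, Col2) VALUES (2, 'b');
    -- ... remaining statements in this batch ...
    COMMIT TRANSACTION;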