Reducing Reads Question
Aug 24, 2007
I'm trying to insert all the rows from a table to a new table.
(insert A select * from AA)
The Reads column in Profiler shows a really high value (10253548).
First I created a unique clustered index and the reads showed 3258445; then I created a nonclustered index expecting to get even lower reads, but instead the reads showed 10253548.
I've read that creating indexes helps reduce reads, but that's not what's happening here.
Any ideas what is going on?
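For reference, the reads for a single statement can also be compared outside Profiler with SET STATISTICS IO; a minimal sketch using the tables named above:
Code:
SET STATISTICS IO ON;
INSERT A SELECT * FROM AA;   -- logical reads are reported per table in the Messages pane
SET STATISTICS IO OFF;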
=============================
http://www.sqlserverstudy.com
View 6 Replies
Sep 14, 2007
Hi, I have the following three tables containing Resources, Categories, and a link table so each Resource can belong to one or more Categories. I would like to create a view (ResourceID, ResourceName, CategoryID, CategoryName) that includes one row for each Resource with just one of the Categories that it belongs to.

Resource table
- ResourceID
- ResourceName
- etc.

Category table
- CategoryID
- CategoryName
- etc.

ResourceCategory table
- ResourceID
- CategoryID

Can anyone help? Thanks.
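A minimal sketch of one way to do this, picking the lowest CategoryID for each resource (the view name is a placeholder; column names are taken from the post):
Code:
CREATE VIEW dbo.ResourceSingleCategory AS
SELECT r.ResourceID, r.ResourceName, c.CategoryID, c.CategoryName
FROM Resource r
JOIN ResourceCategory rc ON rc.ResourceID = r.ResourceID
JOIN Category c ON c.CategoryID = rc.CategoryID
WHERE rc.CategoryID = (SELECT MIN(rc2.CategoryID)
                       FROM ResourceCategory rc2
                       WHERE rc2.ResourceID = r.ResourceID);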
View 2 Replies
View Related
Jan 2, 2001
i used "dbcc shrink file" to reduce the log file of a database.the query analyzer says "successfully executed" but the log file doesn't seem to reduce..am i missing something?
View 2 Replies
View Related
Dec 4, 2000
Hi all!
The transaction log of the database I am using has grown to 7 GB (previously the setting was unrestricted file growth; it is now changed to restrict file growth to approximately 7 GB).
This 7 GB of space is not needed now, and I would like to reduce the size to around 2 GB. How can I achieve this? I observe that in the properties I can only increase the size, not decrease it. Also, I am using transactional replication; this server is the publisher to four subscribers.
View 1 Replies
View Related
Jan 9, 2008
Hi guys, how are you?
I've got a little question; I hope you can help me out.
In SQL Server I have 3 tables:
Code:
user
id_user - autoincremental int, primary key
username - varchar 30
car
id_car - autoincremental int, primary key
carName - varchar 30
user_car
id_user - int, foreign key
id_car - int, foreign key
What I want to do is:
1. insert a row in User,
2. insert a row in Car,
3. using SCOPE_IDENTITY, insert a new row in user_car that relates the id of the last added car with the id of the last added user.
But I need that to be done in one single transaction.
Once that is done, I have another question:
how can I do the same with a variable number of different cars? In other words, I add 1 user and then add 5 cars, then create 5 rows in user_car, each holding the id of the last added user together with the id of one of the last 5 added cars.
How can this be done in one single transaction? I can't do it in 3 separate transactions because it would cause me a lot of trouble. Any help?
My transaction should be something like this:
Code:
insert into user(username) values('user1');
insert into car(carName) values('car1');
insert into user_car(???,???)
but I'm not sure how to fill in the user_car values or how to do this in one single transaction.
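A sketch of the single-transaction version using SCOPE_IDENTITY (the variable names are mine):
Code:
BEGIN TRANSACTION;

DECLARE @id_user int, @id_car int;

INSERT INTO [user](username) VALUES ('user1');
SET @id_user = SCOPE_IDENTITY();   -- id of the user just inserted

INSERT INTO car(carName) VALUES ('car1');
SET @id_car = SCOPE_IDENTITY();    -- id of the car just inserted

INSERT INTO user_car(id_user, id_car) VALUES (@id_user, @id_car);

COMMIT TRANSACTION;
For the variable-number case, the car insert and the user_car insert simply repeat once per car (capturing SCOPE_IDENTITY() after each car insert) before the final COMMIT.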
View 3 Replies
View Related
Apr 18, 2007
Hi,
If I understand it correctly, you only need the LDF file to restore to a point in time after the last full backup. If this is so, could the LDF file not be reduced in size after performing a full backup?
Most of the time it's not an issue, as there is enough space on the HDD, but is it possible to reduce the LDF file size periodically (manually would be fine)? Is changing the recovery mode from FULL to SIMPLE and then back to FULL an option?
If so, is anyone able to tell me how, exactly, I can do this? I've sifted through the documentation to no avail... :eek:
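A minimal sketch of the SIMPLE/FULL switch described above, with placeholder database and logical log file names (a full or differential backup afterwards restarts the log backup chain):
Code:
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;
DBCC SHRINKFILE (MyDatabase_Log, 500);   -- target size in MB
ALTER DATABASE MyDatabase SET RECOVERY FULL;
BACKUP DATABASE MyDatabase TO DISK = 'D:\Backups\MyDatabase_full.bak';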
Many thanks
Rob
View 3 Replies
View Related
May 8, 2007
I am trying to reduce the size of my tempdb using the DBCC SHRINKFILE command, and I get the following error:
DBCC SHRINKFILE: Page 1:1164376 could not be moved because it is a work table page.
How can I get around this so I can shrink the db?
View 1 Replies
View Related
Apr 29, 2002
hi
I have a database that becomes too big after a few days. Is there a way to say: if a table reaches a certain number of rows or a certain size, delete (or better, archive) the oldest entries?
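A rough sketch of what such a cleanup could look like, assuming the table has a datetime column to order by (all names here are placeholders):
Code:
-- copy rows older than 90 days to an archive table, then remove them
INSERT INTO my_table_archive
SELECT * FROM my_table WHERE created_at < DATEADD(day, -90, GETDATE());

DELETE FROM my_table WHERE created_at < DATEADD(day, -90, GETDATE());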
thanks
View 1 Replies
View Related
Jul 29, 2002
Hi all,
I have started to look at the way our production DB has been defined and set up, with the view to improving performance.
The DB is now 11 GB; the original size was set to 3000 MB, and the rest has been taken in 10% additional extents.
Now, back in my DB2 DBA days, it was a bad thing to have data spread across extents, as they may not be contiguous. I am assuming the same is true of SQL Server. Can someone confirm or deny this?
If this is the case, how can I get the DB back into one primary partition?
Thanks in advance.
Mike
View 2 Replies
View Related
May 25, 2001
If I have a transaction log of size 1 GB in a database (the space was allocated when the database was created) and currently only 300 MB of it is used, i.e. nearly 700 MB is free: if I want to reduce the physical file size of the transaction log by 200 MB and release that space to the operating system, how can I do it?
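A sketch of the shrink itself, assuming the logical name of the log file (DBCC SHRINKFILE takes the target size in MB):
Code:
-- shrink the 1 GB log file to roughly 800 MB, releasing about 200 MB to the OS
DBCC SHRINKFILE (MyDatabase_Log, 800);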
View 4 Replies
View Related
Dec 16, 1999
I have inherited a number of databases which were substantially oversized when they were set up. I'd like to reduce both the log and database files to be smaller than their original sizes. What's the easiest way to do this? If anyone has any experience of doing this, please reply.
View 1 Replies
View Related
Jul 17, 2003
I am new to SQL Server. I recently found that the transaction log of my database has reached 109 MB. How can I reduce it? A transaction log backup is scheduled daily at 12.00 noon and a full backup monthly.
View 3 Replies
View Related
Dec 14, 2001
I have a production database with a size of 70 GB. Half of the data was archived and deleted from the current database. What is the best way to reduce the size of the database, given that we cannot shrink an entire database to be smaller than its original size? Thanks a lot!
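One note that may apply here: DBCC SHRINKDATABASE will not take a database below the size it was originally created with, but DBCC SHRINKFILE can shrink an individual file below its initial size. A sketch, with placeholder file names and target sizes in MB:
Code:
DBCC SHRINKFILE (MyDatabase_Data, 35000);   -- shrink the data file to roughly 35 GB
DBCC SHRINKFILE (MyDatabase_Log, 2000);     -- shrink the log file to roughly 2 GB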
View 1 Replies
View Related
Jan 10, 2005
I created a few jobs that archive the production DB and delete the archived data,
but it looks like the DB size is not reducing! Sometimes it even looks like the size has increased!
I think this is because the log file has grown due to the DELETE operations. But what can I do about this?
Please help!
View 1 Replies
View Related
Feb 9, 2012
Having difficulty achieving an end-result in transforming the results of a rowset query into XML.
Here is simplified test code that displays my problem:
declare @TimesheetHdrs table (EmpID int,EntryYear smallint,EntryPeriod tinyint,AdminNotes varchar(max),UserNotes varchar(max))
declare @TimesheetDtls table (EmpID int,EntryYear smallint,EntryPeriod tinyint,ProjCode varchar(25),ActCode varchar(25),ExpendCode varchar(10),EntryDate date,EntryQty decimal(7,2))
declare @Projects table (ProjCode varchar(25),ProjName varchar(200))
[Code] ....
The result of the above code is the following:
Code:
<root>
<timesheet empid="1" entryyear="2012" entryPeriod="1" adminnotes="These are the admin notes" empnotes="These are the user notes">
<project projnum="TestProject" projname="The really big project for our best customer">
<activity actcode="000103020200302302322" actname="Demolish the 55th story of the main tower">
[Code] ....
Notice how there is a tremendous amount of redundancy in the XML. I was hoping to come up with an XML result like the following, which transmits the same data without the redundancies.
Code:
<root>
<timesheet empid="1" entryyear="2012" entryPeriod="1" adminnotes="These are the admin notes" empnotes="These are the user notes">
<project projnum="TestProject" projname="The really big project for our best customer">
<activity actcode="000103020200302302322" actname="Demolish the 55th story of the main tower">
<expenditure expcode="1" expname="Regular Hours">
[Code] ....
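The usual way to get that kind of nesting is a correlated subquery per level with FOR XML PATH ... TYPE; a minimal two-level sketch against the table variables declared above (treat it as an outline, not the full three-level query):
Code:
SELECT  h.EmpID       AS '@empid',
        h.EntryYear   AS '@entryyear',
        h.EntryPeriod AS '@entryPeriod',
        (SELECT DISTINCT d.ProjCode AS '@projnum'
         FROM @TimesheetDtls d
         WHERE d.EmpID = h.EmpID
           AND d.EntryYear = h.EntryYear
           AND d.EntryPeriod = h.EntryPeriod
         FOR XML PATH('project'), TYPE)
FROM @TimesheetHdrs h
FOR XML PATH('timesheet'), ROOT('root');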
View 2 Replies
View Related
Feb 23, 2004
The length of a column is 20 (varchar).
When I execute a SELECT on that column, it returns all 20 characters.
My requirement: is there any option by which I can see only the first 10 characters?
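A minimal sketch, with placeholder column and table names:
Code:
SELECT LEFT(column_name, 10) FROM my_table;            -- first 10 characters
SELECT SUBSTRING(column_name, 1, 10) FROM my_table;    -- equivalent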
View 5 Replies
View Related
Jul 20, 2005
I am working on a personal project and am drawing a complete blank (too much celebrating last night?) on the SQL term that is used to eliminate multiples of like data when it is returned from the database. I.e., instead of:

red
blue
red
green

it would return:

red
blue
green

Sorry for the trouble and thanks.
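The keyword being described is DISTINCT; a minimal sketch with placeholder names:
Code:
SELECT DISTINCT colour FROM my_table;   -- returns red, blue, green (each value once)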
View 2 Replies
View Related
May 19, 2008
Hi all,
We currently have an e-commerce app written in .NET with SQL Server backend and built-in CMS that works just fine. We are now implementing a service to remove the need for the CMS by automating the synchronisation of the e-commerce database with a back-office database (non SQL Server). The problem we have run into is that during some of the larger updates to the website (i.e. new product information), the e-commerce system is experiencing timeouts. The synchronisation service uses transactions while performing updates and so I am assuming that the timeouts are being caused by the transactions locking tables and data.
What steps can I take to try to reduce these locks? The transactions are as short as possible so I do not think we can reduce the amount of processing each transaction deals with. I was looking at different isolation modes, Snapshot in particular, to reduce the locks, but would like some advice from someone who may have dealt with this type of situation before I start messing around here. (The synchronisation service uses the default ReadCommitted level, BTW)
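For reference, a sketch of the snapshot-based settings mentioned above (the database name is a placeholder; both are one-off, database-level options on SQL Server 2005):
Code:
-- make ordinary READ COMMITTED reads use row versioning instead of shared locks
ALTER DATABASE MyEcommerceDb SET READ_COMMITTED_SNAPSHOT ON;

-- or let sessions opt in to full snapshot isolation explicitly
ALTER DATABASE MyEcommerceDb SET ALLOW_SNAPSHOT_ISOLATION ON;
-- then, in the reading session:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;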
Any advice you have to offer will be much appreciated.
Regards,
Stephen.
View 5 Replies
View Related
Dec 7, 2007
I have a log file that has grown to 28 GB.
TASK: I want to reclaim the hard disk space.
I want to use
--backup Log DB-NAME with truncate_only
--dbcc shrinkfile(DB-NAME_Log,1)
Is there any risk involved in the above steps? Or would any experienced folks like to share their approach to this task?
Many Thanks,
View 4 Replies
View Related
Jul 12, 2006
Hi All,
Is it possible to reduce the automatic failover time on MS SQL 2005? It seems to take around a minute on my servers.
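If this refers to database mirroring, the partner timeout (default 10 seconds) is one setting that affects how quickly automatic failover is initiated; a hedged sketch with a placeholder database name:
Code:
ALTER DATABASE MyMirroredDb SET PARTNER TIMEOUT 10;   -- in seconds; 5 is the minimum allowed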
Thanks for your help.
Regards,
View 3 Replies
View Related
Feb 12, 2008
I have a database that is in full recovery mode. I have four maintenance plans set up: database backup, log backup, optimization, and integrity check. The last two plans run weekly and the first two run daily. I found that the log often grows to a dramatic size in a very short period, almost the same size as the database file (4 GB). Furthermore, the size seems to increase a lot after the last two plans run.
My question: does the optimization operation (rebuilding index pages) write records to the log file? Could this be the reason?
Right now the log file occupies too much disk space (90% of it could be freed). What should I do? Shrink the database weekly?
Thanks
View 1 Replies
View Related
Mar 25, 2008
Hi!
I was assigned to solve performance problems for an application. I fired up SQL Server Profiler and started a trace, and downloaded SQL Server Trace Analyzer (a trial version, so it's very limited). What I found is that one stored procedure generates almost 400 000 reads every time it's used, and it's used every time a user wants to see his orders. I've tried to translate the T-SQL from Swedish to English; it looks something like this:
select top 100
o.orderid,
o.name,
o.latestdeldate,
os.name as OrderStatus,
os.orderstatusID,
p.placeID,
p.name as place,
p.address,
p.city,
a.name as worktype,
noOfActions=(select count(*) from actions a where a.order_orderid=o.orderid),
noOfServiceObjects = (select count(*) from Serviceobject s, Actions a where s.Place_PlaceID = o.Place_PlaceID and a.order_orderid = o.orderid and a.Serviceobject_serviceobjectid = s.serviceobjectid),
...
...
...
It has 8 SELECT COUNT(*) subqueries in the select list, and then 2 more SELECT COUNT(*) subqueries in the WHERE clause.
I know it's very difficult for you to come up with a solution, but do you know a better way than using SELECT COUNT(*) everywhere? The counts are used to show different status flags on the website.
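One pattern that sometimes cuts the reads substantially is replacing each correlated COUNT(*) with a pre-aggregated derived table joined once; a sketch, assuming an orders base table and the column names shown above:
Code:
SELECT TOP 100
    o.orderid,
    o.name,
    ISNULL(act.noOfActions, 0) AS noOfActions
FROM orders o
LEFT JOIN (SELECT order_orderid, COUNT(*) AS noOfActions
           FROM actions
           GROUP BY order_orderid) act
       ON act.order_orderid = o.orderid;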
/Magnus
Jesus saves. But Gretzky slaps in the rebound.
View 19 Replies
View Related
Jan 3, 2001
Can anyone help me reduce a transaction log? It is currently at 2.5 GB because it was set to autogrow with no backups.
I need to drastically reduce it. I have backed it up and tried DBCC SHRINKFILE, but it now says space used is 120 MB while the current size is still 2.5 GB.
How can I reduce this, please?
thanks
View 3 Replies
View Related
Jun 8, 2004
Hello. I am wondering how to effectively reduce the size of my database. After viewing the individual table sizes, I have come to realize that nearly 99% of the database's size is due to images. I am told that too much binary data is not good. How can I go about reducing the size of my database (possibly the images themselves)? I'd appreciate any help.
View 11 Replies
View Related
Oct 3, 2006
At this time I am only playing with the applications that generate the data and send it to the database. Without doing much, and after deleting most of the data tables that were created, my transaction log file has grown to over a gigabyte. I tried using SQL Server Management Studio (Express) to shrink the database (I tried shrinking the files, too), but that did not make the file smaller. Right now there is hardly any data in the database (6 tables, a dozen columns and rows each), so it must be old transactions that are kept in the log. How do I get rid of the old data and make the file size smaller? Thanks.
Kamen
View 3 Replies
View Related
Feb 5, 2007
Hi all,
Currently we take full database backups nightly for our SQL Server 2000 data warehouse systems. The backups take a very long time (over 20 hours), and we would like to find a good way to reduce these backup times. How can I change our backup plan to reduce the long backup run times? The data size is 1 TB for our data warehouse database server.
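One common way to shorten the nightly window is to take full backups less often and differential backups in between; a sketch with placeholder names and paths:
Code:
-- weekly
BACKUP DATABASE WarehouseDB TO DISK = 'E:\Backup\WarehouseDB_full.bak';
-- nightly: backs up only the extents changed since the last full backup
BACKUP DATABASE WarehouseDB TO DISK = 'E:\Backup\WarehouseDB_diff.bak' WITH DIFFERENTIAL;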
Thanks
View 3 Replies
View Related
Mar 26, 2008
Hi all,
I'm running a transformation script that takes decimal(18,10) data and tries to shoehorn it into a numeric(9,6). Generally this works, as most of the data in the original table doesn't use anywhere near the precision it's capable of, but once in a while I run into a value that does.
Is there any way to automagically reduce the precision so that I can cram the data into the destination table?
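A sketch of one way to force the values in, assuming a plain INSERT...SELECT and placeholder names; rounding only addresses the scale, so values whose integer part is too large for numeric(9,6) will still overflow and need separate handling:
Code:
INSERT INTO dest_table (dest_col)
SELECT CAST(ROUND(src_col, 6) AS numeric(9,6))   -- round to 6 decimals, then convert
FROM source_table;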
___________________________
Geek At Large
View 3 Replies
View Related
Jul 25, 2007
I've got a database with a very restricted file size. I noticed that when I insert 1000 rows the file size jumps to 80 KB, but when I delete all but 50 of those rows, the file size actually increases to 84 KB. How do I make sure the file size of my database shrinks when I delete rows?
Thanks!
View 5 Replies
View Related
Feb 20, 2008
We have a reporting system where the default rendering format is HTML. However, in some cases users may export the data into Excel after generating the report in HTML. This export is taking too much time; e.g. a 5500-row report in HTML takes around 8 minutes to export to Excel. Is there a workaround for this? Please note that the default rendering has to be in HTML only.
Another thing we have noticed is that in RS 2005 the report server execution log seems to log separate entries for the export feature as well. This was not happening in RS 2000. Is this a new feature in RS 2005, or is the underlying SP for the report being called again when the export to Excel happens?
Any help would be appreciated
View 3 Replies
View Related
Aug 1, 2001
If I'm doing dirty reads and someone updates a record while I'm trying to read it, is it possible to read both the old and the new version of the record, thereby retrieving two records?
View 2 Replies
View Related
Oct 30, 2006
How can you find the reads and writes per second of your hard drives in SQL? I am reading my SQL book and it says that an average disk should handle 125 or fewer I/Os. It gave the formula, but as mentioned, I don't know how to find the reads and writes.
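In Performance Monitor the counters are PhysicalDisk: Disk Reads/sec and Disk Writes/sec. On SQL Server 2005 the per-file I/O counts are also exposed inside SQL itself; a sketch (the numbers are cumulative since the service started, so take two samples and divide the difference by the elapsed seconds):
Code:
SELECT DB_NAME(database_id) AS database_name,
       file_id,
       num_of_reads,
       num_of_writes
FROM sys.dm_io_virtual_file_stats(NULL, NULL);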
View 4 Replies
View Related
May 1, 2008
server: QAT on clustering server ----> 23 seconds
----------------------------------------------------
SS 2000 developer edition SP4
win NT 5.2 (3790) SP4
MeM 7935 MB
processors 4
root directory C:\program files...
use a fixed memory size 640 MB
reserve physical memory for sql server
minimum query memory 1024 kb
use all available processors
minimum query plan threshold for considering 5
PROFILER READS = 5234
server: MILLER ----> 3 seconds
----------------------------------------------------
SS 2000 developer edition no service pack
win NT 5.2 (3790) SP4
MeM 2047 MB
processors 4
root directory f:\MSSQL$INAQAT
dynamically configure sql server memory
use all available processors
minimum query plan threshold for considering 5
PROFILER READS = 598
----------------------------------------------------
Making a long story short: I have an application that hits only one database, called RECORDS. I'm getting different durations when running the application: 23 seconds and 3 seconds.
Same database, same objects and same application.
SERVER QAT is our staging server, meaning it hosts lots of databases.
SERVER MILLER is a server I just assembled, with just one database (RECORDS).
I'm not sure whether the issue is caused by it being a clustered server or by the reads. If it's the reads, what is causing them? Do you think it's how the memory is configured? Will the experts please stand up?
View 20 Replies
View Related
Jul 18, 2006
So I'm at a dead-end looking for the reason behind the following behavior. Just to make sure no one misses it, the 'behavior' is the difference in the number of reads between using sp_executesql and not.
The following statements are executed against a SQL 2000 database that contains >1,000,000 records in the act_item table. They are run using Query Analyzer and the Duration and Reads come from SQL Profiler
SQL 1:
exec sp_executesql N'update act_item set Priority = @Priority where activity_code = @activity_code', N'@activity_code nvarchar(40),@Priority int', @activity_code = N'46DF335F-68F7-493F-B55E-5F9BC6CEBC69', @Priority = 0
Reads: ~22000
Duration: 250-350 ms
SQL 2:
DECLARE @Priority int
DECLARE @Activity_Code char(36)
SET @Priority = 0
SET @Activity_Code = '46DF335F-68F7-493F-B55E-5F9BC6CEBC69'
update act_item set Priority = @Priority where activity_code = @activity_code
Reads: ~160
Duration: 0 ms
Random information:
Activity_code is an indexed field on the table, although it is not the primary key. There are a total of four indexes on the table, none of which include the priority as one of the fields.
There are two triggers on the table, neither of which is executed for this SQL statement (there is an IF UPDATE(fieldname) surrounding the code in the trigger)
There are no foreign relationships
I checked (using perfmon) to see if a compilation/recompilation was happening. No, it's not.
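One avenue worth checking, based on the details above: the first call declares @activity_code as nvarchar(40) while the column is char(36), and that mismatch can force an implicit conversion of the column that prevents an index seek. A sketch of the same call with the parameter typed to match the column:
Code:
exec sp_executesql
    N'update act_item set Priority = @Priority where activity_code = @activity_code',
    N'@activity_code char(36), @Priority int',
    @activity_code = '46DF335F-68F7-493F-B55E-5F9BC6CEBC69',
    @Priority = 0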
Any suggestions as to avenues that could be examined would be appreciated.
TIA
View 3 Replies
View Related