Effect On Snapshots While Reindexing
Apr 30, 2007
Does anyone know whether doing a reindex on a clustered or non-clustered index causes the snapshot file to grow? In other words, is the data that makes up the snapshot copied from the source to the snapshot database? If a normal reindex is done on the underlying database, will it block users from accessing the snapshot? Any help would be appreciated.
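For context, a database snapshot grows by copy-on-write: the first time a page in the source database changes, its original image is copied into the snapshot's sparse file, and an index rebuild touches a great many pages. A minimal sketch for watching that growth on SQL Server 2005, assuming a snapshot database named MySnap (a placeholder name):
-- BytesOnDisk reports the actual space a sparse file uses, so sampling
-- it before and after a rebuild shows the copy-on-write growth.
SELECT DB_NAME(DbId) AS database_name, FileId, BytesOnDisk
FROM sys.fn_virtualfilestats(DB_ID(N'MySnap'), NULL);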
Apr 15, 2002
Attempt to fetch logical page (1:166354) in database 'pm_pmc_prod' belongs to object '837120', not to object 'ng_visit'.
I got this error while importing data into the database; when I ran DBCC CHECKDB it gave me Msg 8928, 8942, 8976, etc. with severity level 16...
Is it possible to fix these corrupt tables?
Jun 22, 1999
We load tables from text files for inquiries.
My procedure is to truncate the table
Use DTS to move the text file into the table
do the command DBCC DBREINDEX (TABLE,'')
Am I wasting my time?
Does SQL 7 rebuild the indexes as it loads the data from the text file?
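Loading into a table that still carries nonclustered indexes means SQL Server maintains them row by row during the load, and the DBCC DBREINDEX afterwards rebuilds them from scratch anyway, so that work is duplicated. One common alternative is sketched below with hypothetical object names (Inquiries, IX_Inquiries_Cust):
TRUNCATE TABLE Inquiries
-- Drop the nonclustered indexes so the DTS load runs unindexed
DROP INDEX Inquiries.IX_Inquiries_Cust
-- ... DTS text-file load goes here ...
-- Re-create the index once, from the fully loaded table
CREATE INDEX IX_Inquiries_Cust ON Inquiries (CustomerID)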
Aug 16, 2006
Hello guys,
Two things:
1) Could somebody explain to me how to reindex all the tables in my db?
2) How do I know when I should reindex my db?
Thank you very much for any help!
Regards,
Fabian
my favorite hoster is ASPnix : www.aspnix.com !
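On both questions, a hedged sketch: the undocumented (but widely used) sp_MSforeachtable can rebuild everything, and DBCC SHOWCONTIG gives the fragmentation numbers commonly used to decide when a rebuild is worth it; logical scan fragmentation above roughly 10-30% is a rule of thumb, not a hard rule:
-- 1) Rebuild the indexes on every user table in the current database.
EXEC sp_MSforeachtable @command1 = 'DBCC DBREINDEX (''?'')'
-- 2) Inspect fragmentation for every table to judge when this is needed.
DBCC SHOWCONTIG WITH ALL_INDEXES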
Sep 17, 2001
I've just recently tried to perform a scheduled reindexing job with the following command:
EXEC sp_MSforeachtable @command1="print '?' DBCC DBREINDEX ('?')"
Unfortunately, this command has finished only once in the five or six times I've tried to run it. The times it has failed, it deadlocked behind what seems to be a succession of other transactions. How can I make sure this command finishes?
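One hedged workaround is to drive the rebuild from a small work-queue table (dbo.ReindexQueue is a hypothetical name), so each table is its own statement and a deadlocked run can simply be restarted where it stopped:
-- Populate the queue once, then process it; rerunning the job resumes
-- with whatever is still marked as pending.
SELECT name, 0 AS done INTO dbo.ReindexQueue
FROM sysobjects WHERE type = 'U'

DECLARE @t sysname
WHILE EXISTS (SELECT * FROM dbo.ReindexQueue WHERE done = 0)
BEGIN
    SELECT TOP 1 @t = name FROM dbo.ReindexQueue WHERE done = 0
    EXEC ('DBCC DBREINDEX (''' + @t + ''')')
    UPDATE dbo.ReindexQueue SET done = 1 WHERE name = @t
END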
Dec 27, 2002
Hello Everyone,
I need to rebuild all indexes in one table; is there any command that will do this automatically instead of deleting and recreating them all?
Any help will be appreciated.
Thanks
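There is; assuming SQL Server 2000 here, DBCC DBREINDEX rebuilds every index on a table in one statement (the table name is a placeholder):
-- '' means all indexes on the table; fill factor 0 keeps each
-- index's original fill factor.
DBCC DBREINDEX ('dbo.MyTable', '', 0)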
Nov 15, 2007
Hi All,
Does reindexing tables really improve performance?
What I mean to say is: I have one script which automatically drops all the indexes in a database and reconstructs them with the same names.
Is it really worth doing that?.....
Thank you very much
Vinod
Even you learn 1%, Learn it with 100% confidence.
Oct 18, 2007
All,
I've got a medium sized database in a mirror configuration with witness. The database size is about 300 GB and I would like to reindex all of the tables in the database. My process would go something like this:
1) Backup principal
2) Break the mirror
3) Set the principal database to simple recovery mode
4) Perform the reindexing
5) Backup the principal and transfer that backup to the mirror
6) Restore the backup
7) Re-establish the mirror
Does anyone see any issues with the process itself?
Regards,
Ian
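One caveat worth flagging on step 3: database mirroring requires the FULL recovery model, so before step 5 the database must go back to FULL (BULK_LOGGED would also keep the rebuilds minimally logged while preserving log backups). A hedged sketch of the sequence, with placeholder names:
ALTER DATABASE MyDb SET PARTNER OFF                -- 2) break the mirror
ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED       -- 3) minimally logged rebuilds
-- 4) run the index rebuilds here
ALTER DATABASE MyDb SET RECOVERY FULL              -- required before re-mirroring
BACKUP DATABASE MyDb TO DISK = 'E:\Backups\MyDb.bak'   -- 5)
-- 6) on the mirror: RESTORE DATABASE MyDb FROM DISK = ... WITH NORECOVERY
ALTER DATABASE MyDb SET PARTNER = 'TCP://mirrorhost:5022'   -- 7) on the principal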
Dec 27, 2007
Hi experts,
For defragmentation and reindexing they are using the cursor below, and now they have asked me to remove the cursor and schedule a job that provides the same functionality. What other way is there to do this without a cursor? Can you please let me know the solution?
DECLARE @Database VARCHAR(255)
DECLARE @Table VARCHAR(255)
DECLARE @cmd NVARCHAR(500)
DECLARE @fillfactor INT
SET @fillfactor = 90
DECLARE DatabaseCursor CURSOR FOR
SELECT database_name FROM dbadmin.dbo.tdbstatus WHERE status='y'
AND database_name NOT IN ('master','model','msdb','tempdb')
ORDER BY 1
OPEN DatabaseCursor
FETCH NEXT FROM DatabaseCursor INTO @Database
WHILE @@FETCH_STATUS = 0
BEGIN
SET @cmd = 'DECLARE TableCursor CURSOR FOR SELECT table_catalog + ''.'' + table_schema + ''.'' + table_name as tableName
FROM ' + @Database + '.INFORMATION_SCHEMA.TABLES WHERE table_type = ''BASE TABLE'''
-- create table cursor
EXEC (@cmd)
OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @Table
WHILE @@FETCH_STATUS = 0
BEGIN
SET @cmd = 'ALTER INDEX ALL ON ' + @Table + ' REBUILD WITH (FILLFACTOR = ' + CONVERT(VARCHAR(3),@fillfactor) + ')'
EXEC (@cmd)
FETCH NEXT FROM TableCursor INTO @Table
END
CLOSE TableCursor
DEALLOCATE TableCursor
FETCH NEXT FROM DatabaseCursor INTO @Database
END
CLOSE DatabaseCursor
DEALLOCATE DatabaseCursor
Your help will be appreciated.
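One cursor-free sketch for the inner loop: build the whole batch of ALTER INDEX statements by string concatenation and execute it once per database. The aggregate-concatenation idiom below is undocumented but widely used; the outer per-database loop can become one Agent job step per database, or sp_MSforeachdb:
DECLARE @fillfactor INT, @sql NVARCHAR(MAX)
SET @fillfactor = 90
SET @sql = N''

-- Concatenate one ALTER INDEX ... REBUILD statement per base table.
SELECT @sql = @sql + N'ALTER INDEX ALL ON '
    + QUOTENAME(table_schema) + N'.' + QUOTENAME(table_name)
    + N' REBUILD WITH (FILLFACTOR = '
    + CONVERT(NVARCHAR(3), @fillfactor) + N');' + NCHAR(10)
FROM INFORMATION_SCHEMA.TABLES
WHERE table_type = 'BASE TABLE'

EXEC sp_executesql @sql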
Mar 24, 2005
In the Enterprise Manager of SQL Server 2000 I have set up a maintenance plan which rebuilds my indexes. I've studied the documentation, and from what I've learned, what happens behind the curtain is that several DBCC DBREINDEX commands are issued.
Question:
If I have 20 tables and 40 indexes: will SQL Server do the maintenance plan in 1 single transaction, or will it divide it up into, e.g., 20 or 40 transactions?
-h
Nov 22, 2004
Hi
We are upgrading from SQL 7 to 2000. During the upgrade process, do we have to reindex all tables, or will updating statistics take care of that?
Or do we have to do both?
What is the difference between reindexing and updating statistics?
Thanks
Madhukar Gole
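Roughly: reindexing physically rebuilds the index pages (and, as a side effect, refreshes that index's statistics with a full scan), while updating statistics only resamples the distribution data the optimizer uses and changes nothing physical. A sketch of the two, with a placeholder table name:
-- Physical rebuild; also refreshes the index's statistics.
DBCC DBREINDEX ('dbo.Orders')
-- Statistics only; no physical change to any index.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN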
Dec 14, 2004
Hi All,
Just after some feedback on a scenario where we have full logging set up on one of the databases, and the transaction logs are backed up every 60 min. Between 0000 and 0100 the log jumps from being a few thousand KB up to over 1.7 GB.
I did some profiling for this time, and it appears that this jump is related to the reindexing of the indexes on the database.
Is this normal for the log file to jump in size so much? Or is this an indication of some other issue (potentially with the indexes)?
Is there any way that the reindexing can be excluded from the log files or is this a necessity?
Thanks in advance for your help.
Cheers
Troy
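The jump itself is expected: under full logging an index rebuild writes the entire new index through the transaction log. If the trade-offs are acceptable, a common mitigation, sketched here with a placeholder name, is to switch to the BULK_LOGGED recovery model for the maintenance window so the rebuilds are minimally logged (note that a log backup spanning a bulk-logged operation cannot be restored to a point in time):
ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED
-- run the reindex job here
ALTER DATABASE MyDb SET RECOVERY FULL
BACKUP LOG MyDb TO DISK = 'E:\Backups\MyDb_log.bak'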
May 27, 2008
hi
I reindex my tables.
Does reindexing tables really improve performance?
Jul 23, 2005
Folks, I work on a system which is growing rapidly, with the number of transactions we process growing on a daily basis. While this is good news for the business, maintenance is starting to become an issue, as the database is the backend for a website which cannot be down for a lengthy period of time. While I do defrag the indexes, periodically the indexes do need to be rebuilt. When this happens, the process locks pages and transactions start getting bounced out. Are there any third-party utilities which will rebuild an index without this locking occurring? Any help in pointing me in the right direction would be appreciated.
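If an upgrade is an option rather than a third-party tool, SQL Server 2005 Enterprise Edition added online index rebuilds for exactly this situation; a sketch with placeholder names:
-- Readers and writers are not blocked for the bulk of an online rebuild.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (ONLINE = ON)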
Sep 5, 2006
I need to reindex all tables in my database and would like to do this without using a cursor. What is the simplest way to achieve this?
Cheers
Nat
Apr 25, 2008
Hi,
We have scheduled a job for DB reindexing (maintenance plan) for an OLTP database on Sunday.
We have used mirroring for automatic failover with a witness server; now the DB reindexing job fails after 30 mins without any error.
Please let me know why database reindexing fails.
Regards
Sufian
Nov 3, 2005
Hello,
I'm looking for the query command that will go out to all the user tables and tell me which indexes need to be reindexed.
We are having a problem with some of the tables, and we don't know when our tables need to be reindexed other than when operations are stopped for our company.
Thanks,
Ron
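A common SQL Server 2000-era sketch: capture DBCC SHOWCONTIG output into a table and filter on logical fragmentation (the 10% threshold and 100-page floor below are rules of thumb, not hard limits):
-- Column list matches DBCC SHOWCONTIG ... WITH TABLERESULTS output.
CREATE TABLE #frag (
    ObjectName CHAR(255), ObjectId INT, IndexName CHAR(255), IndexId INT,
    Lvl INT, CountPages INT, CountRows INT, MinRecSize INT, MaxRecSize INT,
    AvgRecSize INT, ForRecCount INT, Extents INT, ExtentSwitches INT,
    AvgFreeBytes INT, AvgPageDensity INT, ScanDensity DECIMAL,
    BestCount INT, ActualCount INT, LogicalFrag DECIMAL, ExtentFrag DECIMAL)

INSERT INTO #frag
EXEC ('DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')

SELECT ObjectName, IndexName, LogicalFrag
FROM #frag
WHERE LogicalFrag > 10.0 AND CountPages > 100
ORDER BY LogicalFrag DESC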
Jul 18, 2007
Hello. When reviewing the DBCC SHOWCONTIG output immediately after reindexing all indexes on a database, I see the extent fragmentation has values like 50 to 70%... These are SQL 2005 tables with clustered PKs, no large varchars/blobs, and at least 100 pages in the index... The numbers related to PAGE fragmentation are OK after reindexing, but not the EXTENT fragmentation numbers.
I noticed the drive is in need of being defragged at the disk level. Is that a reason why reindexing doesn't fix the extent frag numbers? Any other ideas on this? I can try defragging the DISK over the weekend, bringing the database offline then, but any other thoughts on why the extents show these high percentages? Is there any command to reset them, and maybe that isn't happening? Must I run DBCC UPDATEUSAGE to get valid extent frag numbers?
If there were MANY autogrows on the files, is that a different level of fragmentation? And how could all those small pieces of files be pulled back together? Thanks, Bruce
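For what it's worth, SHOWCONTIG's extent figures are widely treated as noise on databases with multiple files or heavy file-level fragmentation; on SQL 2005 the DMV below is usually the number acted on (avg_fragmentation_in_percent is logical fragmentation):
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.page_count >= 100
ORDER BY ips.avg_fragmentation_in_percent DESC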
Mar 26, 2007
Have 3 reports that run using a shared schedule to generate historic snapshots. This has been working for months and months, but out of nowhere it changed.
So now whenever the schedule runs, it will generate the historic snapshot for 1 report but NOT the 2 remaining ones. As a strange twist, the report that is successful changes from time to time...
SQL Server 2000, RS+SP2.
Any ideas?
Thanks
Regards
Jonas
Jul 13, 2005
We will soon be going to a new ERP system using MS SQL 2000. I'm looking into possible backup strategies. The database size will be about 50 Gigs.
We have a SAN, which the new ERP system will be backed up on. I know a little about a SAN, but I need to make decisions about the types of backups to make. We can have 10 active snapshots, so I could use snapshots during the day as a point in time backup. I could also use the SAN snapshot instead of using a full backup. Would it be safe to use the SAN snapshots instead of the normal SQL Server backups? I’m not sure how long it will take to do a full SQL Server backup because the server is still in testing mode. I’m not getting the big picture about when to use snapshots and when to use regular SQL Server backups.
Is it possible in Windows 2000 Server to stop the transactions and flush the buffers for a few seconds to do a snapshot? From what I understand, the database could be left in a suspect state if you use snapshots to restore a database.
Thanks in advance for any help!
May 16, 2008
I think I've read some conflicting advice in BOL. Maybe someone can clarify it for me.
Under "Database Mirroring and Database Snapshots" it says:
"You can take advantage of a mirror database that you are maintaining for availability purposes to offload reporting. To use a mirror database for reporting, you can create a database snapshots on the mirror database and direct client connection requests to the most recent snapshot"
To my mind, to enjoy a noteworthy performance gain, that mirror database would need to be on another server.
But then you read under "Database Snapshots":
"Multiple snapshots can exist on a source database and always reside on the same server instance as the database."
So how do you get the snapshot to the mirror?
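The two BOL statements are consistent once you note that the mirror database lives on the mirror server instance: the snapshot is created on the mirror instance, against the mirror copy. A sketch, run on the mirror server (all names are placeholders; NAME must match the source database's logical data-file name):
-- The mirror database itself is not readable, but a snapshot of it is.
CREATE DATABASE MyDb_Report_1200 ON
    (NAME = MyDb_Data, FILENAME = 'E:\Snapshots\MyDb_Report_1200.ss')
AS SNAPSHOT OF MyDb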
Nov 9, 2007
Hi
What would be the quickest way to create a backup-and-revert program on a SQL (2000) database?
- Can you create a transaction on a database, regardless of the connections, and then roll it all back via an external program?
- Could you monitor the changes with Profiler and then reverse those?
- If desperate, could you back up the db from a tool and then restore it? (Too slow to be practical?)
Not sure how to do this, so any offers would be appreciated.
ta
May 2, 2006
If multiple snapshots for the same report attempt to generate at the same second, is there something in place to prevent them from conflicting with each other?
Our SSRS 2005 application will use a console app that we are writing to change the default parameters on each report and call the CreateReportHistorySnapshot method. This application will be multi-threaded, so there is a possibility that multiple versions of the same report with different parameters will attempt to run at the same time. We need to be sure that these snapshots will not conflict and return the appropriate SnapshotHistoryID for that report run.
From looking at the History table in the ReportServer database it looks like there are uniqueIDs and other keys that would prevent duplicates, but what if the requests come in at the same time? Is SSRS 2005 smart enough to delay for a second so that there is not a duplicate? Since the way you reference these snapshots in your URL is with the Snapshot Time in UTC format, that only goes out to full seconds (2006-05-02T00:00:00).
Sorry for the length, but I could not think of how to condense this.
Thanks,
Steve
May 13, 2008
Hopefully not too much of a stupid question.
At the moment we are using SSRS to report off an Oracle database. If we take a snapshot of the data on the Oracle database and put it within SQL, should we expect an increase in performance/response?
And is it a good idea to keep the snapshot in the same place where SSRS is installed?
Thanks in advance.
Mar 12, 2008
I want to share one of my doubts about SSRS and the report server.
If I use snapshots or caching for my reports on the report server, is it going to increase performance?
If it is going to increase performance, is the SQL Server database going to carry any extra burden,
meaning, is it going to drag down the performance of the database?
Please give me a response to my questions....
Jun 15, 2007
Hi
I am creating a full Merge replication setup between two servers
Is there any way that I can set it up so that I do not have to create a snapshot?
I.e., I would guarantee that the data and structure are EXACTLY the same on both the subscriber and the publisher.
The problem is that my database is over 4 GB, and it is quicker for me to get the data from one server to the other by doing a backup and restore.
I am using SQL 2005 with the latest SP
Cheers
Ants
Mar 13, 2008
I have a SQL Server instance, whose service account is 'Local System', running on machine A.
A credential, associated with a Windows user U1 on machine A, is mapped to a SQL Server login Login1.
A file named 1.txt can only be accessed by U1 on machine A.
A CLR procedure P1 reads the 1.txt file.
I encounter an error when I access the 1.txt file through P1 as Login1:
A .NET Framework error occurred during execution of user-defined routine or aggregate "HelloWorld":
System.UnauthorizedAccessException: Access to the path 'E:\1.txt' is denied.
System.UnauthorizedAccessException:
at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy)
at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options)
at System.IO.StreamReader..ctor(String path, Encoding encoding, Boolean detectEncodingFromByteOrderMarks, Int32 bufferSize)
at System.IO.StreamReader..ctor(String path)
at StoredProcedures.StoredProcedure1(SqlString fileName)
.
I think a credential is needed when a SQL Server login accesses a resource outside SQL Server, so I added a credential and mapped it to the login.
Any suggestions would be appreciated.
And please correct me if I have any inaccurate concepts.
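One observation, offered tentatively: credentials govern things like Agent proxies, not CLR file access. When the caller is a SQL Server login (rather than a Windows login), an EXTERNAL_ACCESS assembly touches the file system as the SQL Server service account, 'Local System' here, so it is that account, not U1, that needs rights on 1.txt. A sketch of the assembly-side setup, with placeholder names:
-- EXTERNAL_ACCESS is the minimum permission set for file I/O from CLR code.
ALTER DATABASE MyDb SET TRUSTWORTHY ON   -- or sign the assembly instead
CREATE ASSEMBLY HelloWorldAsm
FROM 'C:\Assemblies\HelloWorld.dll'
WITH PERMISSION_SET = EXTERNAL_ACCESS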
Nov 12, 2006
I have this stored procedure. I want to run a few simple SQL functions against my tables. In particular I want to take a subset of records (One or Two years worth) and calculate AVG, VAR and STDEV.
It does not work the way I thought it would. I end up with the whole input table in #tempor1, which is about 6 years' worth of records.
set ANSI_NULLS ON
set QUOTED_IDENTIFIER OFF
GO
ALTER PROCEDURE [dbo].[findAve1YearDailyClose_MSFT]
AS
BEGIN
SET NOCOUNT ON;
SELECT adjClosed, volume INTO #tempor1 FROM dbo.dailyCl_MSFT
GROUP BY dateTimed, adjClosed, volume
HAVING (dateTimed > DATEADD (year, -1, MAX (dateTimed)))
SELECT AVG (adjClosed) AS "AVGAdjClose1Year",
VAR (adjClosed) AS "VARAdjClose1Year", AVG (volume) AS "AVGVolume1Year",
STDEV (volume) AS "STDEVVolume1Year", COUNT (*) AS "total"
FROM #tempor1
END
Thus if I change the number of years I subtract from the latest date from 1 to 2, I end up with the same result. What is the problem?
Thanks.
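The likely culprit: because dateTimed is in the GROUP BY, the MAX(dateTimed) in the HAVING clause is evaluated per group, where it simply equals each row's own dateTimed, so the condition is always true and every row lands in #tempor1. A hedged rewrite that computes the cutoff once over the whole table:
DECLARE @cutoff DATETIME
SELECT @cutoff = DATEADD(year, -1, MAX(dateTimed)) FROM dbo.dailyCl_MSFT

SELECT adjClosed, volume
INTO #tempor1
FROM dbo.dailyCl_MSFT
WHERE dateTimed > @cutoff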
Aug 28, 2007
I have a huge db of 80 GB and am trying to configure merge replication. The initial snapshot application is failing due to some schema issues. I have made some necessary changes to avoid the schema issues. If I try to start the merge agent, it is going to re-initialize the snapshot, and reapplying the whole snapshot will take at least 10-15 hrs.
Is there any way to resume the snapshot from the point of failure, avoiding reapplying all the data which has already been transferred to the subscriber?
Aug 7, 2006
Hi
I have set up my web synchronisation with very few problems until now. Initially I set the snapshots to be generated by the subscriber when they first synchronise; this worked perfectly every time until a bottleneck occurred after I reinitialised all the subscribers. When more than one subscriber then initialises the snapshot agent, the snapshot jobs fail because there is already a job running for user distributor_admin. The agent then retries etc., etc. until all retries have failed. Ultimately only one of the snapshots is successfully created, and only after all other jobs have failed and run out of retries; it is then left alone to carry out its job.
Following this I made the decision to run with pre-generated snapshots instead, so that I can schedule the server load and avoid these bottlenecks. Using the same publication, I unchecked the option "Automatically define a partition and generate..." in the publication properties and proceeded to add each of the partitions, specifying the correct "Host_Name()" value etc. After this I selected one partition and clicked "Generate the selected snapshots now". The snapshot was duly created, but when synchronised, the snapshot is not collected and used. Weirdly, the merge process begins enumerating changes at the publisher, followed by downloading 3100 chunks of data, before receiving an error concerning the format of the message from the distributor.
If I check the option to allow auto generation, the synchronisation begins absolutely perfectly but only once it has generate a snapshot (Ignoring the one I created previously).
To add further weirdness, I created a small app which generates the dynamic partitions using RMO. I used the sample code from BOL and it duly generated the dynamic snapshot job. I can then create a new Job class, passing it the JobServer and JobName; it gives me the correct job, but I cannot then call the Start() method because it keeps telling me the Job object is not created, even though it is clearly enabled as a job in SSMS and can be right-clicked and started from there.
Even partition jobs created from within the SSMS appear as state "Creating" when I make an instance of it using the Job class.
What am I missing? Are my pre-generated snapshots not available to the subscriber because they seem to still be in the creating state? Is this a SQL Server configuration issue or have I setup the publication incorrectly?
Any help would be much appreciated.
Many thanks
Rab
P.S. Code snippet below
Imports Microsoft.SqlServer.Replication
Imports Microsoft.SqlServer.Management.smo
Imports Microsoft.SqlServer.Management.smo.agent
Imports Microsoft.SqlServer.Management.Common
Public Class frmSnapshot
Private Sub btnGenerate_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnGenerate.Click
Dim hostName As String = txtSiteID.Text.Trim
' Define the server, database, and publication names
Dim publisherName As String = "WDU340"
Dim publicationName As String = txtOrgID.Text.Trim
Dim publicationDbName As String = publicationName
Dim distributorName As String = publisherName
Dim publication As MergePublication
Dim partition As MergePartition
Dim snapshotAgentJob As MergeDynamicSnapshotJob = Nothing
Dim schedule As ReplicationAgentSchedule
' Create a connection to the Distributor to start the Snapshot Agent.
Dim distributorConn As ServerConnection = New ServerConnection(distributorName)
Try
' Connect to the Publisher.
distributorConn.Connect()
' Set the required properties for the publication.
publication = New MergePublication()
publication.ConnectionContext = distributorConn
publication.Name = publicationName
publication.DatabaseName = publicationDbName
' If we can't get the properties for this merge publication,
' then throw an application exception.
If (publication.LoadProperties() Or publication.SnapshotAvailable) Then
' Set a weekly schedule for the filtered data snapshot.
If RunJob(publication, snapshotAgentJob, distributorConn, hostName) = False Then
schedule = New ReplicationAgentSchedule()
schedule.FrequencyType = ScheduleFrequencyType.OnDemand
'schedule.FrequencyRecurrenceFactor = 1
'schedule.FrequencyInterval = Convert.ToInt32("0x001", 16)
' Set the value of Hostname that defines the data partition.
partition = New MergePartition()
partition.DynamicFilterHostName = hostName
snapshotAgentJob = New MergeDynamicSnapshotJob()
snapshotAgentJob.DynamicFilterHostName = hostName
' Create the partition for the publication with the defined schedule.
publication.AddMergePartition(partition)
publication.AddMergeDynamicSnapshotJobForLateBoundComClients(snapshotAgentJob, schedule)
RunJob(publication, snapshotAgentJob, distributorConn, hostName)
End If
Else
Throw New ApplicationException(String.Format( _
"Settings could not be retrieved for the publication, " + _
" or the initial snapshot has not been generated. " + _
"Ensure that the publication {0} exists on {1} and " + _
"that the Snapshot Agent has run successfully.", _
publicationName, publisherName))
End If
Catch ex As Exception
' Do error handling here.
MessageBox.Show(String.Format( _
"The partition for '{0}' in the {1} publication could not be created.", _
hostName, publicationName) & ": " & ex.Message)
Finally
If distributorConn.IsOpen Then
distributorConn.Disconnect()
End If
End Try
End Sub
Private Function RunJob(ByVal publication As MergePublication, ByVal snapshotAgentJob As MergeDynamicSnapshotJob, ByVal distributorConn As ServerConnection, ByVal hostName As String) As Boolean
Dim jobs As ArrayList
Dim iJob As Integer
Dim bExists As Boolean = False
jobs = publication.EnumMergeDynamicSnapshotJobs()
For iJob = 0 To jobs.Count - 1
snapshotAgentJob = DirectCast(jobs(iJob), MergeDynamicSnapshotJob)
If snapshotAgentJob.DynamicFilterHostName = hostName Then
'Run the Job
bExists = True
Dim server As New Server(distributorConn)
Dim job As New Job(server.JobServer, snapshotAgentJob.Name)
MessageBox.Show(String.Format("About to run the dynamic snapshot job: {0} with status: {1}", job.Name, job.State.ToString))
Try
job.Start()
Catch ex As Exception
MessageBox.Show(ex.ToString)
Throw
End Try
Exit For
End If
Next
Return bExists
End Function
End Class
Mar 28, 2007
I posted last year about getting an Expand All/Collapse All link working in reports, and it does work perfectly for on-demand reporting. I was able to get these working using a boolean report parameter and then the Jump to Report option going back to itself, changing that parameter.
The issue that I have just discovered is that when a report with these Expand All/Collapse All links is in a report snapshot, clicking those links will cause it to re-render the report from the datasource at the time the link is clicked. This is an issue for one of the datasources we are using because the data is always changing, so when it goes back to re-render, the counts and data returned are completely different from what they were when the snapshot was first created. As time goes on, the data will no longer even be in the datasource, since it can only maintain 4 months of data.
Is there another way to get all detail groups to expand/collapse at once that does not require the report to be completely re-rendered from the datasource?
Feb 2, 2007
I am using SQL2005 merge replication and have pull subscribers on a low bandwidth link
I am compressing the snapshot into an alternate folder. Files are not put into the default folder
When I start a synchronization, I would expect the cab file to be copied to the subscriber and then the files to be extracted locally at the subscriber in order to apply the snapshot
However, what appears to be happening is that the files are being extracted from the cab file on the publisher (in a UNC specified directory) and then copied in their uncompressed form to the subscriber - resulting in an extremely slow snapshot application.
Any ideas what I am doing wrong? I have read about the options for using FTP to transfer snapshot files, but I am not clear whether I have to use FTP in order to transmit a compressed snapshot. I don't want to use FTP unless I need to.
Any suggestions would be much appreciated
aero1