SQL 2012 :: Querying The Transaction Log?
Jun 4, 2014
Just wondering if there is a way of querying the transaction log to identify the particularly large queries that are filling it?
If you have taken over another person's solution and the transaction log seems to be filling very fast, what would be the best way to break it down and find the main causes?
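A minimal sketch of one starting point (it assumes VIEW SERVER STATE permission and only covers transactions that are currently open, not historical ones): rank sessions by how much log space their active transactions are consuming.
SELECT s.session_id,
       s.login_name,
       s.program_name,
       t.database_transaction_log_bytes_used,
       t.database_transaction_log_bytes_reserved
FROM sys.dm_tran_database_transactions AS t
JOIN sys.dm_tran_session_transactions AS st
    ON t.transaction_id = st.transaction_id
JOIN sys.dm_exec_sessions AS s
    ON st.session_id = s.session_id
WHERE t.database_id = DB_ID()              -- run in the affected database
ORDER BY t.database_transaction_log_bytes_used DESC;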
View 6 Replies
May 5, 2014
I have an xml document that (for this example) I've simplified to look like this:
<xml xmlns:s='uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882'
xmlns:dt='uuid:C2F41010-65B3-11d1-A29F-00AA00C14882'
xmlns:rs='urn:schemas-microsoft-com:rowset'
xmlns:z='#RowsetSchema'>
<s:Schema id='RowsetSchema'>
[code]....
When I try querying the xml document in SQL, I get nothing back, unless I remove the schema information. I'm using this:
declare @x xml
select @x = P
from openrowset (bulk 'E:\VehicleOption0514.xml', single_blob) as Products(P)
declare @hdoc int
exec sp_xml_preparedocument @hdoc output, @x
[Code] ....
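A minimal sketch of one likely fix (the attribute in the WITH clause is a hypothetical stand-in, since the real schema is elided above): the s/rs/z prefixes are XML namespaces, so they must be declared through the optional third parameter of sp_xml_preparedocument before the XPath can see the rows.
declare @x xml
declare @hdoc int
select @x = P
from openrowset (bulk 'E:\VehicleOption0514.xml', single_blob) as Products(P)
-- declare the same namespaces the document uses, then address the nodes by prefix
exec sp_xml_preparedocument @hdoc output, @x,
    N'<root xmlns:rs="urn:schemas-microsoft-com:rowset" xmlns:z="#RowsetSchema"/>'
select *
from openxml (@hdoc, '/xml/rs:data/z:row', 1)
with (OptionCode nvarchar(50) '@OptionCode')   -- hypothetical attribute name
exec sp_xml_removedocument @hdoc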
View 3 Replies
View Related
May 23, 2014
I have a plain XML variable
<BS>
<B Id="5" />
<B Id="3" />
<B Id="4" />
<B Id="6" />
<B Id="15" />
<B Id="7" />
</BS>
When I insert this into a temp table, the order gets mixed up.
SELECT I.value('@Id', 'INT') ProductId
INTO #ProductList
FROM @Products.nodes('/BS/B') AS T(I)
When I select out of #ProductList the order is random and different from the XML.
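A minimal sketch of one fix: a table has no inherent order, so capture each node's document position explicitly (the "<<" node-order comparison counts the preceding <B> siblings) and ORDER BY it when reading the rows back.
SELECT I.value('@Id', 'INT') AS ProductId,
       I.value('for $n in . return count(../B[. << $n]) + 1', 'INT') AS Position
INTO #ProductList
FROM @Products.nodes('/BS/B') AS T(I)

SELECT ProductId
FROM #ProductList
ORDER BY Position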
View 8 Replies
View Related
Oct 30, 2014
I have a table (we will call it DateTable) with several (20) columns, each being a date type. Another table's (Project) PK is referenced in the DateTable.
I am trying to write a query that will pull all dates for a specific project from the DateTable if they meet certain criteria (i.e. if the date is <= 7 days from now).
I started with a normal select statement selecting each column with a join to the project and then a where clause using
(DateTable.ColumnName BETWEEN GETDATE() AND DATEADD(day, 7, GETDATE())) OR (DateTable.ColumnName BETWEEN GETDATE() AND DATEADD(day, 7, GETDATE())) OR ...
and so on for the rest of the columns (all with OR between them).
The problem with this is that because I am using OR, once one of the dates meets the criteria it selects all the dates that are associated with the project. I ONLY want the dates that meet the criteria and don't care about the rest.
Obviously that is because I have all the columns in the select statement... So I need something like
Select ALL Columns
from DateTable d
Join Project p
where p.ProjectID = d.ProjectID AND only dates BETWEEN GETDATE() AND DATEADD(day, 7, GETDATE()))
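A minimal sketch of one approach (Date1 to Date3 are hypothetical stand-ins for the 20 real column names): unpivot the columns into rows with CROSS APPLY ... VALUES, so the 7-day filter applies to each date individually rather than to the whole project row.
SELECT d.ProjectID, v.DateName, v.DateValue
FROM DateTable AS d
JOIN Project AS p
    ON p.ProjectID = d.ProjectID
CROSS APPLY (VALUES ('Date1', d.Date1),
                    ('Date2', d.Date2),
                    ('Date3', d.Date3)    -- list all 20 date columns here
            ) AS v(DateName, DateValue)
WHERE v.DateValue BETWEEN GETDATE() AND DATEADD(day, 7, GETDATE());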
View 2 Replies
View Related
Jan 29, 2014
I'm fairly new to SQL and am just setting up a Windows 8 app using an Azure SQL server. The issue I have is looking up a part number supersession and getting the latest number. One part number can have multiple supersessions (i.e. RTC5756 > STC8572 > STC3765 > STC9150 > STC9191 > SFP500160). The data I am supplied monthly has both the superseded items and the supersession information in both columns and is not easy to decipher - for example:
Supersessions Table
----------------------
RTC5756 | STC9191
SFP500160 | STC9191
STC9191 | STC2951
STC3765 | STC9191
STC8572 | STC9191
STC9150 | STC9191
[code]...
The newest part number is kept in a separate table - called "source" - which in this instance is SFP500160. I need access to the latest part number, but also to the part's previous numbers, since some people may still stock an item under an old number and want to search by it. Is there an easy and efficient way of doing both a lookup for the supersessions and a join on the two tables, to minimize the queries on the database?
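A minimal sketch of one approach (it assumes the supersessions table has columns OldPart and NewPart, and the source table has a column PartNo): a recursive CTE walks the chain from whatever number the user typed up to a number held in source, so old numbers remain searchable.
DECLARE @SearchedPart nvarchar(24) = N'RTC5756';   -- the number the user typed

;WITH Chain AS (
    SELECT s.OldPart, s.NewPart, 1 AS Depth
    FROM Supersessions AS s
    WHERE s.OldPart = @SearchedPart
    UNION ALL
    SELECT s.OldPart, s.NewPart, c.Depth + 1
    FROM Supersessions AS s
    JOIN Chain AS c
        ON s.OldPart = c.NewPart
)
SELECT TOP (1) c.NewPart AS LatestPart
FROM Chain AS c
JOIN Source AS src
    ON src.PartNo = c.NewPart      -- stop at a number kept in "source"
ORDER BY c.Depth DESC
OPTION (MAXRECURSION 100);         -- guard against circular supersession data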
View 9 Replies
View Related
Oct 13, 2015
I am stuck in the following scenario.
Tbl_Loan
Trans_ID
Emp_ID
Guarantor_1_ID
Guarantor_2_ID
TRN_01
EMP_01
[code]....
View 5 Replies
View Related
Mar 28, 2014
I am doing some general housekeeping for a couple of our SQL boxes in the Development environment. All the databases are set to Simple recovery mode; no need for anything else on these boxes. I have a database on all the boxes named "DatabaseMaintenance" that keeps things like all the sprocs for any type of database maintenance, etc....
I would like to schedule a single sproc that is located in the DatabaseMaintenance database to shrink the transaction logs on a set schedule. They sometimes grow quite large while testing and developing. The thing that I cannot seem to get around is that when using the ShrinkFile command, one must use the log name. If this code is in a sproc located in the DatabaseMaintenance database, it will fail when attempting to call out to a different database, because the log does not exist in the database where the sproc is located.
How can I get around this small dilemma? There are only about 10 databases per box. To a point we really do not care what happens to them. They are on a Full backup schedule daily, just to keep the objects. As I stated previously, the logs will still grow huge at times while pumping data.
Is there a way to create a piece of code that will run against each database on the server, and be stored in a single database? Other than the system databases of course.
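A minimal sketch of one approach (it assumes every online user database should be treated the same way): build the commands dynamically from sys.master_files, so a single sproc in DatabaseMaintenance can shrink each log in its own database context.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'USE ' + QUOTENAME(d.name) + N'; '
             + N'DBCC SHRINKFILE (' + QUOTENAME(mf.name) + N', 0);'
             + CHAR(10)
FROM sys.databases AS d
JOIN sys.master_files AS mf
    ON mf.database_id = d.database_id
WHERE mf.type_desc = 'LOG'
  AND d.database_id > 4            -- skip the system databases
  AND d.state_desc = 'ONLINE';

EXEC (@sql);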
View 5 Replies
View Related
May 13, 2014
I have a data set that consists of a transaction_id and then a list of line_id's for that transaction.
eg:
transaction_id = 12345 : Line_id = 129172
transaction_id = 12345 : Line_id = 1654572
transaction_id = 12345 : Line_id = 56229172
This would imply that 3 items were purchased for a single transaction.
The output I'm looking for is as follows:
transaction_id = 12345 : Line_id = 1
transaction_id = 12345 : Line_id = 2
transaction_id = 12345 : Line_id = 3
If there are 5 Line_id's for a transaction_id then the output I'm looking for would be
transaction_id = 992345: Line_id = 1
transaction_id = 992345: Line_id = 2
transaction_id = 992345: Line_id = 3
transaction_id = 992345: Line_id = 4
transaction_id = 992345: Line_id = 5
If there are duplicate Line_id's, i.e. two of the same product were purchased, then the logic must carry on as above
eg:
transaction_id = 56474: Line_id = 1
transaction_id = 56474: Line_id = 2
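A minimal sketch (the table name Transactions is an assumption): ROW_NUMBER partitioned by transaction restarts the numbering for each transaction_id and gives duplicate Line_id's their own number.
SELECT transaction_id,
       ROW_NUMBER() OVER (PARTITION BY transaction_id
                          ORDER BY Line_id) AS Line_id
FROM Transactions;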
View 3 Replies
View Related
Jan 21, 2015
I want to schedule transactional replication. How do I do it? Currently I have set up transactional replication that runs continuously and synchronizes the changes with immediate effect.
I need to configure the replication so that it gathers the logs from the publication once a day.
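A minimal sketch of one way to do it (the job name is a placeholder): the Distribution Agent is an ordinary SQL Agent job, so replace its continuous schedule with a daily one using the msdb job procedures.
EXEC msdb.dbo.sp_add_jobschedule
    @job_name = N'<your distribution agent job>',   -- placeholder name
    @name = N'DailyDelivery',
    @freq_type = 4,                -- daily
    @freq_interval = 1,            -- every 1 day
    @active_start_time = 10000;    -- 01:00:00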
View 3 Replies
View Related
Jan 29, 2015
Looking at the recovery model documentation on MSDN, I was under the belief that if you set your database to use the simple recovery model, then when you perform a full backup it would truncate the transaction log file, but this doesn't seem to be the case. We aren't looking to do a point-in-time recovery for this particular database.
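Correct - under the simple recovery model the log is truncated at checkpoint, not by backups. A minimal sketch for checking what (if anything) is currently preventing log reuse (the database name is a placeholder):
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDatabase';   -- NOTHING means the log can be reused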
View 7 Replies
View Related
Apr 15, 2014
We have poor performance spikes on a drive containing our log file, but this is only for reads, and it seems to happen at the time we run a re-index job. Is this a likely correlation for the poor performance, and what reads are done from a log file?
View 2 Replies
View Related
Apr 22, 2014
I have an update trigger. In this trigger I need to insert a few records into 3 tables. If an error occurs in any of these inserts, the previous inserts should still get committed. This trigger was originally written in Sybase, where it was possible to create transactions and commit them inside the trigger.
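A minimal sketch of one way to approximate that in SQL Server (table and column names are hypothetical): a trigger always runs inside the triggering statement's transaction, but a savepoint before each insert lets a failure undo only that insert.
CREATE TRIGGER trg_AuditInserts ON dbo.SourceTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- repeat the savepoint/TRY pattern for each of the 3 inserts
    SAVE TRANSACTION BeforeInsert1;
    BEGIN TRY
        INSERT INTO dbo.Audit1 (Id) SELECT Id FROM inserted;
    END TRY
    BEGIN CATCH
        -- undo only the failed insert; the triggering update and any earlier
        -- inserts survive (only possible while the transaction is not doomed)
        IF XACT_STATE() = 1
            ROLLBACK TRANSACTION BeforeInsert1;
    END CATCH
END;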
View 4 Replies
View Related
May 28, 2014
I have a publication on Sql Server 2012 that uses transactional replication to 7 subscribers (these are a mix of Sql Server 2008R2 and Sql Server 2012). Last night I scheduled the Snapshot job to run to "re-publish" the database to the subscribers. I had a few new tables to push down. Unfortunately the snapshot job became the deadlock victim. Now updates to the publisher are not being sent to the subscribers.
Short of rerunning the snapshot job, is there a way to repair the replication so the updates to the publisher are pushed to the subscribers? The "re-publish" can only be run overnight when there is very little impact to users.
Bill Soranno
MCP, MCTS, MCITP DBA
Database Administrator
Winona State University
Maxwell 143
"Quality, like Success, is a Journey, not a Destination" - William Soranno '92
View 0 Replies
View Related
Feb 24, 2015
I backed up my transaction log file using this:
BACKUP LOG iTest2_AELetters TO DISK = N'nul'
(Yes, I know this breaks the log chain - it's expected in this instance.)
But after I run it my t-log is the exact same size. I have refreshed many times.
I am using SQL Always On on this DB.
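That part is expected: a log backup only marks inactive VLFs as reusable; the physical file keeps its size until it is explicitly shrunk. A minimal sketch (the logical log file name is an assumption):
DBCC SQLPERF (LOGSPACE);                        -- "Log Space Used (%)" should now be low
DBCC SHRINKFILE (N'iTest2_AELetters_log', 0);   -- then shrink the physical file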
View 4 Replies
View Related
May 3, 2015
I'm taking the liberty to announce the availability of a suite of articles on my web site about error and transaction handling in SQL Server. In total there are three main parts and three appendixes.
The first part is a short jumpstart, while Part Two is a long in-depth discussion of what can happen in SQL Server in case of an error and what commands are available. Part Three covers implementation and has lots of examples as well as a facility for logging and raising errors.
The appendixes cover special areas: linked servers, the CLR and Service Broker.
Start at [URL] ....
View 1 Replies
View Related
Jul 7, 2015
For a few days now I have been having a discussion with a colleague about shrinking the transaction log as a daily maintenance job on an OLTP database. The problem is I can't figure out a way to convince her she is doing something really wrong. It's not the first discussion about her Maintenance Plans.
She implemented this "solution" with a lot of customers as a fix for VLF fragmentation and huge transaction log sizes. My thoughts about this are very clear, and I have used the following arguments, without success, to convince her:
- To solve too many VLFs you have to focus on the actual size of the transaction log and the autogrowth settings, in combination with regular transaction log backups. Check the biggest transaction and size the transaction log based on that; don't use shrinking as a solution for too many VLFs.
- Shrinking the transaction log file on a daily basis is disk I/O intensive. When the transaction log file is too small for new transactions, the transaction log needs to grow, and this causes more disk I/O, which can cause performance problems.
- It looks unprofessional.
These steps are used every morning at 6:00 AM and a transaction log backup is made every 30 minutes.
Step 1
DBCC SHRINKFILE (N'' , 0, TRUNCATEONLY);
go
Step 2
ALTER DATABASE
MODIFY FILE (NAME = N'', SIZE = 4098MB);
GO
My main purpose is making sure the customers have the best possible configuration, and I can't accept that this is being implemented. Are there any more arguments available for this issue?
View 2 Replies
View Related
Aug 14, 2015
I have a Database that is set to Simple Recovery mode.
It is not clearing the log and the transaction log is growing.
I thought that it would clear the log upon checkpoint?
How can I force the Transaction Log to clear since NO-TRUNCATE is no longer available?
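A minimal sketch for diagnosing it: force a checkpoint, then look for whatever is keeping the log active; in simple recovery the usual culprit is a long-running open transaction.
CHECKPOINT;
DBCC OPENTRAN;   -- reports the oldest active transaction in the current database, if any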
View 9 Replies
View Related
Jan 21, 2014
I'm trying to create a cte to return a list of lots and every period between the first and last transaction date for each lot. I've gotten this far:
SELECT IntLotKey
     , DATEPART(YYYY, StartDate) StartYear
     , DATEPART(MM, StartDate) StartPeriod
     , DATEPART(YYYY, EndDate) EndYear
     , DATEPART(MM, EndDate) EndPeriod
[Code] ....
This gives me the following results:
IntLotKey  StartYear  StartPeriod  EndYear  EndPeriod
27153      2013       1            2013     5
28468      2013       12           2013     12
28469      2013       1            2013     12
28470      2013       1            2013     12
28472      2013       12           2013     12
59302      2013       1            2013     1
59303      2013       1            2013     1
Now what I need is something that looks like this:
LotKey  Year  Period
27153   2013  1
27153   2013  2
27153   2013  3
27153   2013  4
27153   2013  5
[Code] .....
Some lots may not have any transactions for some of the periods between the start and end dates but I need to report every period between the start and end period for each lot. I have a period table that I thought I could use but haven't come up with a way to get the results I'm after.
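A minimal sketch of one approach (it assumes the query above is wrapped in a CTE named LotRange with the columns shown): recurse from each lot's start period to its end period, rolling over from period 12 to period 1 of the next year, so periods with no transactions are still produced.
;WITH Periods AS (
    SELECT IntLotKey, StartYear AS Yr, StartPeriod AS Period, EndYear, EndPeriod
    FROM LotRange
    UNION ALL
    SELECT IntLotKey,
           CASE WHEN Period = 12 THEN Yr + 1 ELSE Yr END,
           CASE WHEN Period = 12 THEN 1 ELSE Period + 1 END,
           EndYear, EndPeriod
    FROM Periods
    WHERE Yr * 100 + Period < EndYear * 100 + EndPeriod
)
SELECT IntLotKey AS LotKey, Yr AS [Year], Period
FROM Periods
ORDER BY LotKey, [Year], Period
OPTION (MAXRECURSION 0);   -- the date range bounds the recursion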
View 9 Replies
View Related
May 13, 2014
We use AlwaysOn Availability Groups with 2 SQL nodes configured (version: 11.0.3373.0). My full and differential backups are working for all my databases; however, I am unable to perform any LOG backups.
I have double checked my Availability Group settings, and the backup preference is set to 'Prefer Secondary'.
I've tried creating a maintenance job as well as using Ola Hallengren's maintenance script job to back them up, but nothing is written to the drive. All jobs return successful every time and take less than 3 seconds to run. There are no events being written to the SQL error log or event log.
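A minimal sketch for diagnosing this (the database name is a placeholder): run the check below on each replica. With 'Prefer Secondary', backup jobs on a replica that is not the preferred one exit immediately and report success without writing anything, which matches the symptom.
SELECT sys.fn_hadr_backup_is_preferred_replica(N'YourAGDatabase') AS IsPreferredReplica;
-- 1 = this replica should run the backup; 0 = the job will skip it here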
View 2 Replies
View Related
Jun 27, 2014
I am basically trying to update a table which reflects account transactions. Accounts get paid in full, but occasionally balance payments can be reversed, and I want to update the table to show this - I need to show the period in which the account was previously paid in full. I've created a simplified version of the scenario; below are a couple of examples of things I've tried that do not work. I understand why they do not work, but I'm struggling to figure out how to update the 'PeriodPrevPaidInFull' field.
create table Trans
(
AccNo int,
Transaction_Period_Index int,
PeriodOpeningBalance money,
DebtBalance money,
PeriodPaidInFull int NULL,
PeriodPrevPaidInFull int NULL,
[code]...
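A minimal sketch of one approach (it assumes a row's paid-in-full status can be read from its PeriodPaidInFull column; the real rule is elided above): for each row, look back for the most recent earlier period in which the account was paid in full.
UPDATE t
SET t.PeriodPrevPaidInFull = prev.PeriodPaidInFull
FROM Trans AS t
OUTER APPLY (SELECT TOP (1) p.PeriodPaidInFull
             FROM Trans AS p
             WHERE p.AccNo = t.AccNo
               AND p.Transaction_Period_Index < t.Transaction_Period_Index
               AND p.PeriodPaidInFull IS NOT NULL
             ORDER BY p.Transaction_Period_Index DESC) AS prev;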
View 9 Replies
View Related
Nov 24, 2014
create proc proc1 (@param1 int)
as
begin try
    declare @param2 int
    begin transaction
    exec proc2 @param2
    commit transaction
end try
begin catch
    if @@trancount > 0
        rollback transaction
end catch
I haven't had an opportunity to do this before. I have a nested stored proc, and both procs insert values into different tables. To maintain atomicity I want to be able to roll back everything if an error occurs in the inner or outer stored procedure.
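A minimal sketch of the usual pattern (the table name is hypothetical): the inner proc commits only a transaction it started itself and rethrows any error, so the ROLLBACK in proc1's CATCH block undoes the inserts from both procedures.
create proc proc2 (@param2 int)
as
begin try
    declare @startedTran bit = 0
    if @@trancount = 0
    begin
        begin transaction
        set @startedTran = 1
    end

    insert into dbo.InnerTable (Col1) values (@param2)   -- hypothetical table

    if @startedTran = 1
        commit transaction
end try
begin catch
    if @startedTran = 1 and @@trancount > 0
        rollback transaction;
    throw   -- rethrow so the outer proc's catch block rolls everything back
end catch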
View 3 Replies
View Related
Dec 4, 2014
Getting the below error most of the time:
"Server was unable to process request. ---> The transaction log for database 'xxxx' is full due to 'REPLICATION'."
I am doing the below steps to clear the log every time.
1. EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1
2. CHECKPOINT
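Before resetting anything with sp_repldone, a minimal check is to see how far behind the Log Reader Agent actually is; transactions it has not yet moved to the distribution database pin the log and produce exactly this error.
EXEC sp_replcounters;
-- "replicated transactions" is the backlog waiting for the Log Reader; if it
-- keeps growing, fix the Log Reader Agent rather than running sp_repldone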
View 8 Replies
View Related
Feb 3, 2015
I have a customer running a database in a High Availability Group, and I am not familiar with the setup... They have a transaction log that quadrupled in size during a data import and update generated by an external application. They have limited server space and would like to shrink the log again now, as this import/update only happens once a year. The way this has always been dealt with in the past was by shrinking the DB and logs after the update.
Now however, when attempting to do a log or db shrink, an error message is generated which says that the log cannot be shrunk as the DB is in use as part of an Availability Group....
The more I search and read up on this subject, the more it looks like the DB has to be removed from the Availability Group before the log can be shrunk, and then the Availability Group has to be re-created or restored in some way. Is there a simple solution to this conundrum?
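For what it's worth, shrinking a log file is normally possible on the primary replica without removing the database from the Availability Group. A minimal sketch (the file name, path and target size are assumptions), run on the primary after log backups have freed the tail of the log:
BACKUP LOG [YourAGDatabase] TO DISK = N'D:\Backups\YourAGDatabase.trn';   -- hypothetical path
DBCC SHRINKFILE (N'YourAGDatabase_log', 4096);   -- target size in MB; logical name assumed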
View 9 Replies
View Related
Feb 9, 2015
SQL 2012 Ent SP2
The database is in simple recovery mode and published with a transactional replication push subscription - just one subscriber, but the database is huge. I don't want to overwrite the schema at the subscriber either.
I had to run an alter database command on a published database; it created so many log records that an extra drive had to be added, along with an extra log file, to accommodate them all.
The problem I have is that I'd like to clear the file of log records so I can drop the temporary log file and give the drive back, but I cannot.
I have tried dbcc shrinkfile with the emptyfile option but it never clears; I have also tried it with the notruncate and truncateonly options (mainly out of desperation).
I do not need to worry about point in time restore as a full backup is taken before and after the operation. After which the database will be put back into Full recovery mode.
I have looked at log_reuse_wait_desc and the file says 'Replication', so I am now thinking the file cannot empty because replication is keeping one of the VLFs active. I tried dropping and recreating the subscription hoping it might free something up and I could get somewhere, but it made no difference.
Do I have to remove replication completely to get round this? Surely not.
I have also tried putting the database back into full recovery mode and doing a full DB backup and a transaction log backup, but it's made no difference, which is also what makes me think a portion of the log is still active because of replication, and perhaps the transactions have not gone through to the subscriber - which raises another question: why not?
I have not tried restarting SQL server, as I'd like to know a way out of this without having to do that, plus I do not think it would make any difference anyway.
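A minimal sketch for confirming the suspicion: list the VLFs and see where the active ones sit.
DBCC LOGINFO;
-- Status = 2 marks an active VLF, and FileId shows which physical file it lives
-- in; if the active VLFs are all in the temporary file, the Log Reader has not
-- caught up and EMPTYFILE cannot succeed yet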
View 1 Replies
View Related
Feb 26, 2015
Is there any way to have a process run that will not write its changes to the transaction log? I have a process that runs every three hours and has a huge impact on the transaction log (it becomes larger than the database itself). We do hourly backups of the transaction log, and normally it is reasonably sized, but when this process runs it gets HUGE.
The process takes source data, massages it and writes it to summary tables. It is not something we need to track as we can recreate the summary tables if needed and it has no impact on the source tables.
Everything is driven through a stored procedure. Is there a way to run a stored procedure and tell it that nothing it does should be written to the transaction log?
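Nothing in SQL Server runs completely unlogged, but a minimal sketch of the usual compromise (table and column names are hypothetical): rebuild the summary table with SELECT ... INTO, which is minimally logged under the simple and bulk-logged recovery models, instead of writing it row by row.
IF OBJECT_ID(N'dbo.Summary_New') IS NOT NULL
    DROP TABLE dbo.Summary_New;

SELECT src.CustomerId,
       SUM(src.Amount) AS TotalAmount    -- the "massaging" goes here
INTO dbo.Summary_New
FROM dbo.SourceData AS src
GROUP BY src.CustomerId;

-- swap the rebuilt table in place of the old one
BEGIN TRANSACTION;
    IF OBJECT_ID(N'dbo.Summary') IS NOT NULL
        DROP TABLE dbo.Summary;
    EXEC sp_rename 'dbo.Summary_New', 'Summary';
COMMIT TRANSACTION;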
View 6 Replies
View Related
Mar 5, 2015
I vaguely remember reading somewhere that all distributed transactions are executed at Serializable Isolation Level "under the covers."
1. Is this true?
2. What does "under the covers" mean in this case; i.e. will I not see the isolation level represented accurately in requests?
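A minimal sketch for checking rather than guessing: open a distributed transaction from your client and ask the DMVs what the session actually reports.
SELECT session_id, transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;
-- 1 = ReadUncommitted, 2 = ReadCommitted, 3 = RepeatableRead, 4 = Serializable, 5 = Snapshot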
View 9 Replies
View Related
May 18, 2015
I want to run a transaction across multiple instances of SQL Server without a linked server.
A distributed transaction can do it with a linked server; can it be done without one?
View 9 Replies
View Related
Aug 20, 2015
I am creating a report query that returns all unreconciled P/O lines. I am near completion but I am unable to find a way to remove the reconciled records.
I have included a script to produce a sample table, data & query.
The recordset displays 6 rows. All reconciled supplier invoices are duplicated and have transaction codes 40 and 50 and a reconcile code of 9 (5024, 921689471).
All unreconciled invoices appear only once and have transaction code 40 and a reconcile code of 0 (4835 & 921978016). These are the only records that I want to show.
CREATE TABLE [dbo].[Purch_Ledger](
[EPDIVI] [nvarchar](3) NULL,
[EPSUNO] [nvarchar](10) NULL,
[EPSINO] [nvarchar](24) NULL,
[EPDUDT] [numeric](8, 0) NULL,
[EPTRCD] [numeric](2, 0) NULL,
[code].....
Whatever I try, I can't find a way to get rid of the unwanted records.
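A minimal sketch of one approach (it assumes EPSUNO + EPSINO identify a supplier invoice and that the reconciling duplicate carries EPTRCD = 50): keep an invoice row only when no reconciling row exists for the same invoice.
SELECT pl.*
FROM dbo.Purch_Ledger AS pl
WHERE pl.EPTRCD = 40
  AND NOT EXISTS (SELECT 1
                  FROM dbo.Purch_Ledger AS r
                  WHERE r.EPSUNO = pl.EPSUNO
                    AND r.EPSINO = pl.EPSINO
                    AND r.EPTRCD = 50);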
View 3 Replies
View Related
Nov 4, 2015
I am developing a process that monitors a table in a high-transaction database. The process will count the number of rows in the table to verify whether it has changed or is stuck. Because the database has a lot of transactions, I don't want to execute a query against the database too often. Is there another suitable way to accomplish this goal?
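A minimal sketch of one alternative (the table name is hypothetical): read the row count from partition metadata instead of scanning the table; the figure is approximate but costs almost nothing, which suits a frequent monitor.
SELECT SUM(ps.row_count) AS ApproxRowCount
FROM sys.dm_db_partition_stats AS ps
WHERE ps.object_id = OBJECT_ID(N'dbo.MonitoredTable')
  AND ps.index_id IN (0, 1);   -- heap or clustered index only, to avoid double counting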
View 2 Replies
View Related
Jul 30, 2014
I have a setup of transactional replication between one publisher and subscriber on the same server. Now I need to add a new subscriber to the existing publisher. So the publisher database name is DB_A, the Subscriber 1 name is DB_B, and the new subscriber will be DB_C. Is this kind of setup possible on one server?
If yes, then at the time of reinitialization is it going to apply the snapshot to DB_B as well as DB_C? Also, let's say that due to a disk error DB_B gets corrupted - will data still be replicated between DB_A and DB_C? (Assuming publisher, subscriber 1 and 2 are sitting on individual disks.)
View 1 Replies
View Related
Nov 14, 2014
Got this situation: trying to use single-user mode to recover my hanging db after the ldf was lost (physically, too). I have tried all the things I know through SSMS and scripts (below) with no result. Is there anything else I can try to recover it? I don't need the log file.
An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
Could not open new database 'MyLostDB'. CREATE DATABASE is aborted.
File activation failure. The physical file name "C:\xxx\MyLostDB.ldf" may be incorrect.
The log cannot be rebuilt because there were open transactions/users when the database was shutdown, no checkpoint occurred to the database, or the database was read-only. This error could occur if the transaction log file was manually deleted or lost due to a hardware or environment failure. (Microsoft SQL Server, Error: 1813)
EXEC sp_attach_single_file_db @dbname='Commissions',
@physname=N'C:\SQLData\MyLostDB.mdf'
GO
CREATE DATABASE Commissions ON
(FILENAME = N'C:\SQLData\MyLostDB.mdf')
FOR ATTACH_REBUILD_LOG
GO
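A minimal sketch of the usual last resort - heavily hedged, because REPAIR_ALLOW_DATA_LOSS can lose the transactions that were open at shutdown: if the mdf can be made visible to the instance at all (for example by creating a stub database with the same name and file layout, taking it offline, and swapping the orphaned mdf over its data file), emergency-mode repair can rebuild the log.
ALTER DATABASE Commissions SET EMERGENCY;
ALTER DATABASE Commissions SET SINGLE_USER;
DBCC CHECKDB (N'Commissions', REPAIR_ALLOW_DATA_LOSS) WITH NO_INFOMSGS;  -- rebuilds the log
ALTER DATABASE Commissions SET MULTI_USER;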
View 6 Replies
View Related
Mar 3, 2015
Setting up transactional replication in a test environment. I am willing to bet that most of you take a production backup (if so, how, and using what?), restore the database to your test environment, then run a snapshot to your subscriber and away you go.
But perhaps you take a backup of your publisher and subscriber; if so, how do you know there are no inconsistencies because there were transactions sitting on the distributor?
What do you do if you have additional indexes on the subscriber for reporting, that are not on the publisher?
Here at work we are having issues getting consistent databases set up with T-Rep - missing rows, duplicate keys at the subscriber, etc. How can we avoid these issues?
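Whichever way the environment is built, replication can validate itself afterwards. A minimal sketch (the publication name is a placeholder):
EXEC sp_publication_validation
    @publication = N'YourPublication',   -- placeholder
    @rowcount_only = 2;                  -- 2 = row counts plus binary checksums
-- results appear in the Distribution Agent history / Replication Monitor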
View 0 Replies
View Related
Mar 4, 2015
I have a stored procedure that does the following
BEGIN TRANSACTION OUTERTXN
    BEGIN TRANSACTION
        Copy records from live to archive
    END TRANSACTION with commit or rollback
    Execute sproc to write audit log with success or fail
    IF transaction was committed
        BEGIN TRANSACTION
            Delete the copied records from live
        END TRANSACTION with commit or rollback
        Execute sproc to write audit log with success or fail
    END IF
END TRANSACTION OUTERTXN with commit if both inner transactions were successful, or rollback if either failed
If either inner transaction rolled back, execute sproc to write audit log saying the whole process is rolling back.
My problem is that if the outer transaction rolls back then I am losing the two audit records, because they are part of the transaction scope. I want these executes to commit even if the master transaction fails.
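A minimal sketch of the standard workaround (the audit table name is hypothetical): table variables are not undone by ROLLBACK, so collect the audit rows in one and persist them after the outer transaction has finished either way.
DECLARE @audit TABLE (LoggedAt datetime2 NOT NULL DEFAULT SYSDATETIME(),
                      Msg nvarchar(400) NOT NULL);

BEGIN TRANSACTION OuterTxn;
BEGIN TRY
    -- ... copy-to-archive and delete-from-live steps go here ...
    INSERT INTO @audit (Msg) VALUES (N'Archive and delete succeeded');
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    INSERT INTO @audit (Msg) VALUES (N'Whole process rolling back: ' + ERROR_MESSAGE());
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
END CATCH

-- @audit survives the rollback, so the audit rows can be written out now
INSERT INTO dbo.AuditLog (LoggedAt, Msg)   -- hypothetical audit table
SELECT LoggedAt, Msg FROM @audit;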
View 2 Replies
View Related