I am using change tracking for tracking changes to an OLTP database for daily data warehouse load. I am tracking changes on 4 tables:
Site, Well, GasEngine, GasField.
I am using an SSIS package for our DW load and, on completion, I store the results of the queries above in my own ChangeTracking table. I then use this as the last synced version for each table for the next DW load. I have the following change tracking settings:
CHANGE_TRACKING = ON
(
CHANGE_RETENTION = 10 DAYS,
AUTO_CLEANUP = ON
)
Now, the Site and Well tables may not have any changes for a few weeks or months, but the GasEngine table will have changes every half hour and the GasField table will have changes every week. I have recently received a NULL value from the Site changes query (below) and the SSIS package fails.
Is this due to there being no changes within the 10-day retention period? What can I do in order to keep the changes? Could I maybe update the scripts above to check whether the response from the query is NULL and, if so, set the current synced version to the last synced version from my ChangeTracking table?
Also, when the 10-day retention period is up, will the change tracking version counter continue on from the last change version?
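For the NULL-check idea, this is roughly what I had in mind; the ChangeTracking table columns and the Site primary key column (SiteID) are just placeholders for whatever I actually have:
DECLARE @last_sync bigint = (SELECT LastSyncVersion FROM dbo.ChangeTracking WHERE TableName = 'Site');

IF @last_sync IS NULL
   OR @last_sync < CHANGE_TRACKING_MIN_VALID_VERSION(OBJECT_ID('dbo.Site'))
BEGIN
    -- The stored version is missing or older than the minimum valid version,
    -- so the retention window has passed and a full reload of Site is needed.
    RAISERROR('Last synced version for Site is no longer valid; full reload required.', 16, 1);
END
ELSE
BEGIN
    SELECT CT.SiteID, CT.SYS_CHANGE_OPERATION, CT.SYS_CHANGE_VERSION
    FROM CHANGETABLE(CHANGES dbo.Site, @last_sync) AS CT;
END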
We are developing an application that requires change tracking. We tested it in development and test environments, and we are preparing our production deployment. The very first thing that needs to be done is:
ALTER DATABASE [db_name] SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON)
We are holding off on this first step because this statement alone executed for a good 4 minutes on the development server. The production environment is many times larger and busier, and we can’t afford service disruption, so we are at the point where we need to understand what’s involved in running this ALTER DATABASE statement. Is there any documentation on what happens behind the scenes when this statement executes, so that we can assess the risks of running it in production?
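For what it's worth, after the statement completes we were planning to simply confirm the settings from the catalog view (read-only, so it should not add any risk):
SELECT DB_NAME(database_id) AS database_name,
       is_auto_cleanup_on,
       retention_period,
       retention_period_units_desc
FROM sys.change_tracking_databases;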
We are going to use SQL Server change tracking. The problem is that some of the tables to be tracked have no primary keys; there are only unique clustered indexes. The question is: what is the best way to turn on change tracking for these tables in our circumstances?
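Is the only real option something like the following, i.e. converting the existing unique clustered index into a primary key so the table can be tracked? The table, index, and column names below are made up for illustration, and the key columns would have to be NOT NULL:
-- Hypothetical table dbo.Readings with a unique clustered index UX_Readings on (MeterID, ReadingDate)
DROP INDEX UX_Readings ON dbo.Readings;

ALTER TABLE dbo.Readings
    ADD CONSTRAINT PK_Readings PRIMARY KEY CLUSTERED (MeterID, ReadingDate);

ALTER TABLE dbo.Readings
    ENABLE CHANGE_TRACKING
    WITH (TRACK_COLUMNS_UPDATED = OFF);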
I have enabled CDC on the database using
USE MyDB
GO
EXEC sys.sp_cdc_enable_db
GO
and I have enabled CDC on tables within the database using
USE MyDB
GO
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name = N'MyTable',
    @role_name = N'MyRole',
    @filegroup_name = N'MyDB_CT',
    @supports_net_changes = 1
GO
As per [URL] ....
Data changes are then stored in a system table called cdc.dbo_MyTable_CT. So far so good. I back up the database and restore it:
backup database MyDB to Disk = 'CBackupMyDB.bak'
restore database MyDB1 from disk = 'CBackupMyDB.bak'
There is no table called cdc.dbo_MyTable_CT in the system tables of the restored database!
Have I missed a step, and is there a way to get these tables into the backup? I am now building an SSIS job to move the contents of cdc.dbo_MyTable_CT to an archive table, but I am worried that if I had to restore the database to a point in time, I may not be able to do it for these tables.
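Is the missing step the KEEP_CDC option on the restore? This is what I plan to test next (I haven't verified it yet):
restore database MyDB1 from disk = 'CBackupMyDB.bak' with KEEP_CDC
-- The capture and cleanup jobs are not restored with the database,
-- so I assume they would need to be recreated afterwards:
EXEC sys.sp_cdc_add_job @job_type = N'capture'
EXEC sys.sp_cdc_add_job @job_type = N'cleanup'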
I have a Windows 2012 server and would like to know how to audit DML on a table (delete, truncate, update). I want to see all T-SQL DML statements carried out on this table in a file. How can this be achieved, if possible with something already built into SSMS?
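Would SQL Server Audit be the built-in feature for this? My rough understanding is something like the following, where the file path, audit names, and dbo.MyTable are placeholders (and I'm not sure database-level audit specifications are available in every edition, or whether TRUNCATE shows up under these action types):
USE master;
GO
CREATE SERVER AUDIT Audit_MyTable_DML
TO FILE (FILEPATH = 'D:\Audit\');
GO
ALTER SERVER AUDIT Audit_MyTable_DML WITH (STATE = ON);
GO
USE MyDatabase;
GO
CREATE DATABASE AUDIT SPECIFICATION AuditSpec_MyTable_DML
FOR SERVER AUDIT Audit_MyTable_DML
ADD (INSERT, UPDATE, DELETE ON dbo.MyTable BY public)
WITH (STATE = ON);
GO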
I am new to programming and trying hard to learn. I made the code below for a view in MS SQL over the 2 tables. I am wondering if I could do it with 3, 4, or more tables at the same time. How can I incorporate that into the code I made?
Create View vwEmployeeInfoByGender as
Select FirstName, MiddleName, FamilyName, Gender, BirthDate, MaritalStatus
from tblEmployeeInfo
join tblGender on tblEmployeeInfo.GenderID = tblGender.GenderID
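For example, would it just be a matter of adding more joins like this? Here I'm imagining a third table tblDepartment with a DepartmentID key and a DepartmentName column (that table is made up purely for illustration):
Create View vwEmployeeInfoDetailed as
Select FirstName, MiddleName, FamilyName, Gender, BirthDate, MaritalStatus, DepartmentName
from tblEmployeeInfo
join tblGender on tblEmployeeInfo.GenderID = tblGender.GenderID
join tblDepartment on tblEmployeeInfo.DepartmentID = tblDepartment.DepartmentID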
I am attempting to reach some Clipper tables through a 32-bit ODBC driver from a 64-bit SQL Server. As there is no 64-bit driver offered for Clipper, I am pursuing a solution similar to the one described here:
Creating a Linked Server with 64 bit SQL Server 2008 to MS Access
It involves using a SQL Express 32 bit instance as a bridge.
I have created a Linked Server on the 32-bit instance MTESTEXPRESS as follows:
EXEC sp_addlinkedserver @server = N'ABDATA', @srvproduct=N'DataDirect 4.1', @provider=N'MSDASQL', @datasrc=N'ABServerCA'
On the 64-bit instance ALISTESTER I have another Linked Server as follows:
EXEC sp_addlinkedserver @server = N'ABACUS', @provider=N'SQLNCLI', @datasrc=N'ALISTESTER\MTESTEXPRESS'
The suggestion is to then use a select statement such as:
SELECT * FROM OPENQUERY(ABACUS, 'SELECT COUNT(*) FROM ABDATA...ABBATCH')
Unfortunately, the DataDirect driver for MTESTEXPRESS will not recognize the 'ABDATA...ABBATCH' 3-part naming convention. The error message is:
An invalid schema or catalog was specified for the provider "MSDASQL" for linked server "ABDATA"
Is there some other way to select from the MTESTEXPRESS linked server?
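One workaround I'm considering (untested) is to nest OPENQUERY, so the 32-bit instance runs the pass-through against ABDATA itself and ALISTESTER never has to resolve the multi-part name:
SELECT *
FROM OPENQUERY(ABACUS,
    'SELECT * FROM OPENQUERY(ABDATA, ''SELECT COUNT(*) FROM ABBATCH'')');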
I am learning about the Optimizer from the book "Querying Microsoft SQL Server 2012" for certification exam 70-461. I really cannot understand how it explains the number of possible ways to execute a query joining three tables. The pseudo-query is:
SELECT A.col5, SUM(C.col6) AS col6sum
FROM TableA AS A
INNER JOIN TableB AS B ON A.col1 = B.col1
INNER JOIN TableC AS C ON B.col2 = C.col2
WHERE A.col3 = constant1
  AND B.col4 = constant2
GROUP BY A.col5;
The book says:"Start with the FROM part. Which tables should SQL Server join first, TableA and TableB or TableB and TableC? And in each join, which of the two tables joined should be the left and which one the right table? The number of all possibilities is six, if the two joins are evaluated linearly, one after another."
Q1: How could it be six possibilities? From my understanding, let's say SQL Server has to join A and B first and then join C; in this case I can think of 4 possibilities, which are:
1. When A join B: Left: A, Right: B. When join C: Left: result of A join B, Right: C
2. When A join B: Left: B, Right: A. When join C: Left: result of A join B, Right: C
3. When A join B: Left: A, Right: B. When join C: Left: C, Right: result of A join B
4. When A join B: Left: B, Right: A. When join C: Left: C, Right: result of A join B
Q2: The section following the previous question says there are 4 different types of join: "This already gives four options for each join. So far, there are 6 x 4 = 24 different options for only the FROM part of this query."
How can it be 6 x 4? My understanding is that the 4 applies to a single join, but in our case there are 2 joins, so it should be 6 x 4 x 4.
I just wonder whether system base tables always use a clustered index. I am using SQL Server 2005 and find that the sys.sysidxstats base table is a heap, not a clustered index. Why?
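For context, a query along these lines lists the index types reported for the system base tables (the metadata is visible even though, as far as I understand, the tables themselves can only be read over the DAC; I'm not sure this is the proper way to inspect them):
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.index_id,
       i.type_desc                -- HEAP, CLUSTERED, NONCLUSTERED
FROM sys.indexes AS i
WHERE OBJECTPROPERTY(i.object_id, 'IsSystemTable') = 1
ORDER BY table_name, i.index_id;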
I want to compare two tables and log the differences in a new table with the fields (old value, new value, column name). The column name should be the column whose value changed.
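This is the sort of shape I'm after; the tables (dbo.CustomerOld / dbo.CustomerNew keyed on CustomerID), the two compared columns, and the dbo.ChangeLog target are all just placeholders for illustration:
INSERT INTO dbo.ChangeLog (CustomerID, ColumnName, OldValue, NewValue)
SELECT o.CustomerID, v.ColumnName, v.OldValue, v.NewValue
FROM dbo.CustomerOld AS o
JOIN dbo.CustomerNew AS n ON n.CustomerID = o.CustomerID
CROSS APPLY (VALUES
    ('Name', CAST(o.Name AS nvarchar(4000)), CAST(n.Name AS nvarchar(4000))),
    ('City', CAST(o.City AS nvarchar(4000)), CAST(n.City AS nvarchar(4000)))
) AS v (ColumnName, OldValue, NewValue)
WHERE ISNULL(v.OldValue, N'') <> ISNULL(v.NewValue, N'');  -- log only columns whose value changed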
We run 2008 Standard Edition (I believe R2 on this server, will double check) and I need to find a needle in a haystack.
I am looking at two tables and would like to learn which objects, if any, in the same database update these tables. I'd even live with any references to these tables. I already know SQL Agent is out of the picture. I'm looking for a static approach, not a monitoring one, because it's possible the updates don't occur till month end.
Is there a way in SQL Server to query the answer to such a question using DMVs or something like DMVs? What if I get bold and want to search all databases on this server for objects that update my two tables, even across databases?
In the meantime, I'll look at scripting all triggers and procs and searching those scripts for the table names.
I don't know that I've ever seen a way to script all triggers before. Procs I can easily script from Object Explorer Details, but I'm not sure whether Object Explorer Details --> Server Objects --> Triggers does what I hope it does.
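Is this what the dependency DMVs are for? Something like the following (dbo.MyTable stands in for my two tables) is what I had in mind for the single-database case; I guess for all databases I'd have to loop over them:
-- Objects in the current database that reference the table
SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities('dbo.MyTable', 'OBJECT');

-- Or the dependency catalog view
SELECT OBJECT_SCHEMA_NAME(referencing_id) AS schema_name,
       OBJECT_NAME(referencing_id)        AS object_name
FROM sys.sql_expression_dependencies
WHERE referenced_entity_name = 'MyTable';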
I have a fundamental problem with how CDC works for bulk updates. When a CDC-enabled table is updated for a single row, my CDC system tables record it as an update (3 & 4), which is perfect and what it should be. No complaints! But when I do a bulk update on the same CDC-enabled tables for the same columns, my CDC system tables record it as a delete and then an insert (1 & 2). This is not correct, and this is my problem. We used triggers before CDC and we did not face this problem with triggers; everything was fine with triggers other than performance. The way CDC handles bulk updates is a big problem for me, because based on the output of the CDC system tables we are doing some migration work to a legacy system.
It will be impossible for me to go and change my migration logic scripts because we have hundreds of procedures. Is this a known problem with CDC? Is there any solution in CDC so that when a bulk update happens on a table, the CDC system tables record it as updates? I don't think CDC 'net changes' will help in this situation, because the net change would show as a single inserted row. If this can't be done with CDC then I have to completely abandon CDC and go back to triggers.
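For reference, this is roughly how we read the change rows (the capture instance name dbo_MyTable is a placeholder); after a bulk update the counts show up under operations 1 and 2 instead of 3 and 4:
DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('dbo_MyTable'),
        @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();

-- __$operation: 1 = delete, 2 = insert, 3 = update (before image), 4 = update (after image)
SELECT __$operation, COUNT(*) AS row_count
FROM cdc.fn_cdc_get_all_changes_dbo_MyTable(@from_lsn, @to_lsn, N'all update old')
GROUP BY __$operation;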
1. I need to make use of the In-Memory OLTP engine for my pre-existing procedures, tables, and indexes. Do I need any code changes in the application, and how do I store tables/indexes in In-Memory OLTP?
Assume the table may have a primary key index as well.
2. If a table has one primary key index, 2 foreign key constraints, and 3 nonclustered indexes, which of these can be loaded into the memory area, and how do I do that? (A rough sketch of what I mean is after this list.)
3. In-Memory OLTP is a lock-free zone; locks usually happen in an RDBMS context. How does this work without locks?
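This is the kind of thing I mean, pieced together from what I've read so far. The database, filegroup, path, and table names are all invented, and I assume an existing disk-based table would have to be re-created as memory-optimized rather than switched in place (with restrictions on foreign keys and some index types depending on version):
-- The database first needs a memory-optimized filegroup
ALTER DATABASE MyDb ADD FILEGROUP mod_fg CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE MyDb ADD FILE (NAME = 'mod_dir', FILENAME = 'D:\Data\mod_dir') TO FILEGROUP mod_fg;

-- A memory-optimized copy of the table, declared with its indexes inline
CREATE TABLE dbo.OrderHot
(
    OrderID int NOT NULL PRIMARY KEY NONCLUSTERED,
    Amount  money NOT NULL,
    INDEX ix_Amount NONCLUSTERED (Amount)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);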
How do you track changes to objects in SQL Server, for example changes to stored procedures, views, and indexes? Which system table or column tracks or indicates changes to the text of a stored procedure or view? Help appreciated.
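For example, is the modify_date column in sys.objects the right thing to look at, something like the query below? It doesn't seem to cover indexes or show what actually changed in the text, though.
SELECT name, type_desc, create_date, modify_date
FROM sys.objects
WHERE type IN ('P', 'V')   -- stored procedures and views
ORDER BY modify_date DESC;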
We are building an inventory management system complete with BOM. It is important to track what employee is updating what tables. Currently all such tables have a Date field that is updated when a change is made, and an EmployeeID field which records the employee making the change. I am wondering if someone knows of a better way to track this information. Any suggestions?
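To make the question more concrete: would a trigger-based audit table along these lines be considered better than the Date/EmployeeID columns? The table, trigger, and column names below are invented for illustration:
CREATE TABLE dbo.PartAudit
(
    AuditID     int IDENTITY(1,1) PRIMARY KEY,
    PartID      int NOT NULL,
    OldQuantity int NULL,
    NewQuantity int NULL,
    ChangedBy   sysname  NOT NULL DEFAULT (SUSER_SNAME()),  -- database login, which may differ from the app's EmployeeID
    ChangedAt   datetime NOT NULL DEFAULT (GETDATE())
);
GO
CREATE TRIGGER trg_Part_Audit ON dbo.Part
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.PartAudit (PartID, OldQuantity, NewQuantity)
    SELECT d.PartID, d.Quantity, i.Quantity
    FROM deleted AS d
    JOIN inserted AS i ON i.PartID = d.PartID
    WHERE ISNULL(d.Quantity, -1) <> ISNULL(i.Quantity, -1);   -- log only rows where the value actually changed
END;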
OK. For DDL, please refer to the classic Northwind ORDERS table. The problem/challenge: find the longest duration (start_date and end_date) during which no orders were placed. FYI, the column names, in colid order per syscolumns, are: OrderID, CustomerID, EmployeeID, OrderDate, RequiredDate, ShippedDate, ShipVia, Freight, ShipName, ShipAddress, ShipCity, ShipRegion, ShipPostalCode, ShipCountry. Any idea/approach? TIA.
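The rough idea I'm toying with (not sure it's the intended approach) is to pair each order date with the next one and take the largest gap, using OrderDate only:
WITH d AS
(
    SELECT DISTINCT OrderDate
    FROM dbo.Orders
),
n AS
(
    SELECT OrderDate,
           ROW_NUMBER() OVER (ORDER BY OrderDate) AS rn
    FROM d
)
SELECT TOP (1)
       cur.OrderDate AS gap_start,
       nxt.OrderDate AS gap_end,
       DATEDIFF(DAY, cur.OrderDate, nxt.OrderDate) AS days_without_orders
FROM n AS cur
JOIN n AS nxt ON nxt.rn = cur.rn + 1
ORDER BY days_without_orders DESC;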
Hi, I'm trying to get CDC going. It works, however when I query the LSNs using the functions I get no changes. The min and max LSN return NULL. SQL Agent is running, the DB is on the full recovery model, etc.
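Here is what I'm checking, with the capture instance name as a placeholder. The min/max LSN query is where I get the NULLs; the two help procedures are just what I found for verifying that the capture instance and jobs exist:
-- Does the capture instance exist, and are the CDC jobs defined?
EXEC sys.sp_cdc_help_change_data_capture;
EXEC sys.sp_cdc_help_jobs;

-- This is the part that returns NULL for me:
SELECT sys.fn_cdc_get_min_lsn('dbo_MyTable') AS min_lsn,
       sys.fn_cdc_get_max_lsn()              AS max_lsn;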
Yesterday I had a problem with a stored procedure containing a join where the developer had forgotten the WHERE part. The SP joins very large tables and took more than 95% of a dual Itanium processor. How do I track this down fastest when it occurs, i.e. find the SP or SQL that is running and consuming that much processor resource? I tried a lot: sp_who, sp_who2, Activity Monitor.
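Would querying the DMVs directly, roughly like this, be the quickest way to catch it while it is running?
SELECT r.session_id,
       r.cpu_time,
       r.total_elapsed_time,
       OBJECT_NAME(t.objectid, t.dbid) AS object_name,   -- the proc, when the batch belongs to one
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50            -- skip system sessions
ORDER BY r.cpu_time DESC;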
I'm trying to create a procedure that will insert/update a small table that has an ID, AmtBefore & AmtAfter.
This is just a table that I'd be using to monitor other activity on the database.
I'm really pulling a price + ID every hour, and I need to track that somehow and know, when the procedure runs, whether a price is lower than it was the last hour, plus of course keep the IDs straight. Does anyone have a good approach to something like this?
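This is the rough shape I've got so far, assuming the table is called dbo.PriceWatch with the ID, AmtBefore, and AmtAfter columns (the name is made up); comparing AmtAfter to AmtBefore afterwards would then tell me whether the price dropped:
CREATE PROCEDURE dbo.usp_RecordPrice
    @ID  int,
    @Amt money
AS
BEGIN
    SET NOCOUNT ON;

    IF EXISTS (SELECT 1 FROM dbo.PriceWatch WHERE ID = @ID)
        UPDATE dbo.PriceWatch
        SET AmtBefore = AmtAfter,   -- last hour's price becomes the "before" value
            AmtAfter  = @Amt
        WHERE ID = @ID;
    ELSE
        INSERT INTO dbo.PriceWatch (ID, AmtBefore, AmtAfter)
        VALUES (@ID, NULL, @Amt);
END;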
I have several transaction tables on which I need to track changes; that is, I need to maintain a history of changes. Only a few column values are changed often.
Which is the best way to track the changes?
1. Store the whole record after the change?
Or
2. Store the column name and its respective old & new value (a rough sketch of what I mean is below)?
Or any other better way?
Note: The UI part & SPs will take care of the tracking; there are no plans for triggers.
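For option 2, this is roughly the kind of generic history table I have in mind (names and data types are only an example; ChangedBy would be supplied by the UI/SP layer):
CREATE TABLE dbo.ChangeHistory
(
    ChangeID   bigint IDENTITY(1,1) PRIMARY KEY,
    TableName  sysname       NOT NULL,
    RecordID   int           NOT NULL,
    ColumnName sysname       NOT NULL,
    OldValue   nvarchar(max) NULL,
    NewValue   nvarchar(max) NULL,
    ChangedBy  int           NOT NULL,   -- user id passed in by the UI/SP layer
    ChangedAt  datetime      NOT NULL DEFAULT (GETDATE())
);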
I have a table in my database that holds some important information. I want to track the users who are executing DML commands (select, update, delete) on the data in that table. Is there any way to do that? If so, how could I implement it? Can anyone help me in this regard? Thanks in advance...
First, thank you for all your help thus far. Now I'm stuck again. I've been doing a lot of reading on triggers and logging information into tables, but I'm trying to capture how many times someone enters an item into the search box.
So every time somebody types Gumballs into the search box, I want to capture it and the name of the person who is currently logged in. Is there a way to do this? Maybe this is something that I should be checking in the ASP.NET forums?
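On the database side, what I'm picturing is just a log table and a procedure that the ASP.NET page calls on every search, passing the term and the logged-in user name (all names below are made up):
CREATE TABLE dbo.SearchLog
(
    SearchLogID int IDENTITY(1,1) PRIMARY KEY,
    SearchTerm  nvarchar(200) NOT NULL,
    UserName    nvarchar(128) NOT NULL,
    SearchedAt  datetime      NOT NULL DEFAULT (GETDATE())
);
GO
CREATE PROCEDURE dbo.usp_LogSearch
    @SearchTerm nvarchar(200),
    @UserName   nvarchar(128)   -- the logged-in user, passed from the ASP.NET page
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.SearchLog (SearchTerm, UserName)
    VALUES (@SearchTerm, @UserName);
END;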
Hello everyone, I have a fairly unique need :) I am trying to determine the use/clients of databases in my corporation that I am maintaining, but that no one seems to know what they are for. Many of these databases never seem to have anybody connected to them in the current activity. What I'd like to do is find a way to audit the logins, so every time someone connects to a database it simply logs the client's IP address, what login they used, and maybe what time. I've been searching Google for this and have found tons of information on auditing the logins, but not the clients, such as by IP. Any help in this regard would be GREATLY appreciated! Joshua
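Would a logon trigger writing to a table, roughly as below, be a reasonable way to capture the client address? I'm wary because a buggy logon trigger can block all logins, and I'm not sure EXECUTE AS 'sa' is the right way to give it insert rights (all names are examples):
USE master;
GO
CREATE TABLE dbo.LoginAudit
(
    EventTime  datetime      NOT NULL DEFAULT (GETDATE()),
    LoginName  sysname       NOT NULL,
    ClientHost nvarchar(128) NULL,    -- client machine/IP reported by EVENTDATA()
    OriginalDb sysname       NULL     -- database named in the connection string
);
GO
-- WARNING: if a logon trigger raises an error, logins are refused,
-- so this needs careful testing (the DAC is the escape hatch).
CREATE TRIGGER trg_LoginAudit
ON ALL SERVER
WITH EXECUTE AS 'sa'
FOR LOGON
AS
BEGIN
    DECLARE @ed xml;
    SET @ed = EVENTDATA();
    INSERT INTO master.dbo.LoginAudit (LoginName, ClientHost, OriginalDb)
    VALUES (ORIGINAL_LOGIN(),
            @ed.value('(/EVENT_INSTANCE/ClientHost)[1]', 'nvarchar(128)'),
            ORIGINAL_DB_NAME());
END;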