How To Capture Data And Tune Indexes With Wizard
Jul 7, 1999
I want to tune the indexes on my database, and I am trying to use SQL Server Profiler to collect data for the Index Tuning Wizard to analyze. My question is: what do I need to trace with Profiler so that the Index Tuning Wizard can work? I am looking at the trace properties in Profiler (the Events, Data Columns, and Filters tabs), but I have no idea what I need to capture.
Thanks in advance.
Mike
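One workable starting point (a sketch, not the only workload format the wizard accepts): the Index Tuning Wizard mainly needs the text of the statements being executed, so capturing the RPC:Completed and SQL:BatchCompleted events with the TextData data column is usually enough, which is essentially what Profiler's tuning template does. The script below uses the SQL Server 2000 sp_trace_* syntax (in the 7.0 Profiler UI, picking the same two events and the TextData column achieves the same thing); C:\trace\tuning is a hypothetical output path.
DECLARE @traceid int, @maxsize bigint
SET @maxsize = 50   -- max trace file size in MB
EXEC sp_trace_create @traceid OUTPUT, 0, N'C:\trace\tuning', @maxsize
-- column 1 = TextData: the statement text the wizard analyzes
EXEC sp_trace_setevent @traceid, 10, 1, 1   -- event 10 = RPC:Completed
EXEC sp_trace_setevent @traceid, 12, 1, 1   -- event 12 = SQL:BatchCompleted
EXEC sp_trace_setstatus @traceid, 1         -- start the trace
Run the workload, stop the trace, and feed the resulting .trc file to the wizard as its workload.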
View 1 Replies
Jul 20, 2005
Just curious if anyone has a script to find and delete all indexes created by the Index Tuning Wizard, leaving the original indexes untouched. All of the original indexes in this particular database are preceded with IX_, whereas those created by ITW are the table name followed by a number. I'm thinking of something along the lines of sp_MSforeachtable @command1="print '?'" + a DBCC which just targets the ITW indexes (if such a thing exists). Any ideas how to go about this?
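A sketch of one way to do it, assuming the naming conventions described above hold; it only prints the DROP statements so they can be reviewed before anything is actually dropped:
SELECT 'DROP INDEX ' + QUOTENAME(OBJECT_NAME(i.id)) + '.' + QUOTENAME(i.name)
FROM sysindexes i
WHERE i.indid BETWEEN 1 AND 254                       -- real indexes only, skip heaps/text
  AND OBJECTPROPERTY(i.id, 'IsUserTable') = 1
  AND INDEXPROPERTY(i.id, i.name, 'IsStatistics') = 0 -- skip auto-created statistics
  AND i.name NOT LIKE 'IX[_]%'                        -- keep the original IX_ indexes
  AND i.name LIKE OBJECT_NAME(i.id) + '%[0-9]'        -- ITW style: table name + number
Paste the output into a new query window and execute it once you are happy with the list.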
View 3 Replies
Jan 8, 2006
I saw this tool for SQL Server 2000:
http://www.sql-server-performance.com/index_tuning_wizard_tips.asp
Is there anything similar for SQL Server 2005 Express?
Thank you very much for any help!
Regards,
Fabian
My favorite host is ASPnix: www.aspnix.com!
View 2 Replies
Oct 20, 2006
Hi All,
I have become frustrated and I am not finding the answers I expect.
Here's the gist, we support both Oracle and SQL for our product and we would like to migrate our Clients who are willing/requesting to go from Oracle to SQL. Seems easy enough.
So, I create a database in SQL 2005, right-click and select "Import Data"; the source is Microsoft OLE DB Provider for Oracle and I set up my connection. So far so good.
I create my destination using SQL Native Client to the database that I plan on importing into. Still good.
Next, I select "Copy data from one or more tables or views". I move on to the next screen and select all of the objects from a schema. These are tables that relate only to our application; in other words, nothing Oracle-system-wise.
When I get to the end it progresses to about 20% and then throws this error about 300 or so times:
Could not connect source component.
Warning 0x80202066: Source - AM_ALERTS [1]: Cannot retrieve the column code page info from the OLE DB provider. If the component supports the "DefaultCodePage" property, the code page from that property will be used. Change the value of the property if the current string code page values are incorrect. If the component does not support the property, the code page from the component's locale ID will be used.
So, I'm thinking "Alright, we can search on this error and I'm sure there's an easy fix." I do some checking and indeed find out that there is a property setting called "AlwaysUseDefaultCodePage" in the OLEDB Data Source Properties. Great! I go back and look at the connection in the Import and .... there's nothing with that property!
Back to the drawing board. I create a new SSIS package and quickly figure out that AlwaysUseDefaultCodePage is in there. I can transfer information from the Oracle source table to the SQL Server 2005 destination table, but it appears to be a one-to-one thing. Programming this, if I get it to work at all, will take me about 150 hours or so.
This makes perfect sense if all you are doing is copying a few columns or maybe one or two objects, but I am talking about 600+ objects with upwards of 2 million rows of data in each!
This generates 2 questions:
1. If the Import Data Wizard cannot handle this operation on the fly, then why can't the AlwaysUseDefaultCodePage property be shown as part of the connection?
2. How do I create an SSIS package that will copy all of the data from Oracle to SQL Server? The destination tables have already been created with the same schema and object names as the source. I don't want to create a Data Flow task 600 times. (One alternative is sketched below.)
Help!!!
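One alternative that avoids hand-building 600 Data Flow tasks (a sketch of a different technique, not the wizard: it uses a linked server, and ORA_SRC and SCHEMA1 are hypothetical names): have T-SQL generate one INSERT per destination table, then run the generated script.
-- assumes a linked server ORA_SRC pointing at the Oracle source,
-- and destination tables in dbo whose columns line up with the source
SELECT 'INSERT INTO dbo.' + QUOTENAME(t.name) +
       ' SELECT * FROM OPENQUERY(ORA_SRC, ''SELECT * FROM SCHEMA1.' + t.name + ''');'
FROM sys.tables AS t
ORDER BY t.name;
Paste the output into a new window and execute it in batches; for the very large tables it may be worth adding WHERE clauses to chunk the copies.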
View 8 Replies
Aug 12, 2015
I have a requirement to implement CDC for 50+ tables, so that the warehouse/reporting side takes incremental data changes rather than exporting the whole table data. The largest table has more than half a billion records.
The warehouse uses a daily copy of the OLTP db (daily DB refresh). How can I accomplish this? Is there a downside in implementing CDC just for the sake of taking incremental changes on the tables?
Is there any performance impact if we enable CDC on OLTP db?
Can we make use of the CDC tables on the environment we do daily db refresh so that the queries don't hit OLTP database?
What is the best way to implement CDC to take incremental changes for reporting?
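For reference, CDC is enabled per database and then per table; a minimal sketch (MyOLTPDb and dbo.Orders are hypothetical names, and @supports_net_changes = 1 requires a primary key or a unique index on the source table):
USE MyOLTPDb;
EXEC sys.sp_cdc_enable_db;
EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Orders',
     @role_name     = NULL,            -- no gating role
     @supports_net_changes = 1;
On the performance question: capture runs asynchronously off the transaction log, so the cost shows up as log reader work and change-table growth rather than inline DML overhead, but with 50+ tables (one of them half a billion rows) it is worth load-testing before enabling it in production.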
View 0 Replies
Apr 21, 2014
I am using SQL Server 2012 and to me a part of data captured by CDC is not making sense.
I have a table called 'Schema.Table1', and I enabled CDC on it by running 'sys.sp_cdc_enable_table'. I see that a table called 'cdc.Schema_Table1_CT' got created, which now gets an entry whenever I insert, update or delete a record in the original table.
Till this point every thing works fine.
My original table has a NOT NULL INT column called 'AuditTrackerUserID' with a default value of 1996. My application does not provide a value for this column, but because the column itself has a default value, records get inserted without error.
When I try to execute the following Query I see multiple records with __$operation of 3 and 1.
SELECT * from cdc.Schema_Table1_CT where AuditTrackerUserID IS NULL
My expectation is that I should not ever see any record returned by this query because AuditTrackerUserID is a not null column, but I do.
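For reference when reading the change table, the documented __$operation codes are 1 = delete, 2 = insert, 3 = update (before image) and 4 = update (after image). A small decode query against the same capture table (a sketch):
SELECT __$start_lsn, __$seqval,
       CASE __$operation
            WHEN 1 THEN 'delete'
            WHEN 2 THEN 'insert'
            WHEN 3 THEN 'update (before image)'
            WHEN 4 THEN 'update (after image)'
       END AS operation,
       AuditTrackerUserID
FROM cdc.Schema_Table1_CT
WHERE AuditTrackerUserID IS NULL
ORDER BY __$start_lsn, __$seqval;
Seeing which operations carry the NULLs, and checking whether the column was added to the table after the capture instance was created (columns added later are not tracked and come back NULL), usually narrows down where they come from.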
View 2 Replies
Jan 13, 2006
Again, looking for the best way to do this with SSIS.
I have a source table and I'd like to load it to a database daily, capturing what changed.
This is not a dimensional table but a fact table.
So, what I'd need to do for each record is to see if it already exists (using the business key); if it does, compare some of the data fields, and if there are changes, register them somehow, otherwise ignore the record.
Right now, the only two ways I see to do it with SSIS:
- Use the Slowly Changing Dimension transformation
- Use a Lookup and customize the SQL, adding something like: WHERE key = ? and (field1 <> ? or field2 <> ?...)
I was wondering if there is an easier way.
Dima.
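If the source can be staged to a table first, a set-based comparison is one option (a sketch; dbo.Staging, dbo.FactTable, BusinessKey and Field1/Field2 are hypothetical names):
-- rows whose business key exists but whose tracked fields differ
SELECT s.BusinessKey, s.Field1, s.Field2
FROM dbo.Staging AS s
JOIN dbo.FactTable AS f
  ON f.BusinessKey = s.BusinessKey
WHERE s.Field1 <> f.Field1
   OR s.Field2 <> f.Field2;   -- wrap in ISNULL/COALESCE if the fields are nullable
New rows fall out of the same staging table with a LEFT JOIN ... WHERE f.BusinessKey IS NULL.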
View 3 Replies
Jul 28, 2015
When using Change Data Capture on SQL Server 2012, I have read that you cannot truncate a table. Is this also true if one wanted to delete data from the table? I am getting a little confused about which DDL statements can be run against a table with CDC enabled. Does CDC have to be disabled before performing certain DDL statements against a table?
I would like to safeguard certain tables within the dbo schema against truncation and dropping, and I am wondering if I could do this in one fell swoop with CDC enabled on those tables. The other option would be to use a DDL trigger to prevent certain DDL statements from being performed.
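The behavior is easy to demonstrate (a sketch; dbo.Payroll is a hypothetical table already enabled for CDC): TRUNCATE is refused outright, while DELETE succeeds and every deleted row lands in the change table.
TRUNCATE TABLE dbo.Payroll;
-- fails: "Cannot truncate table 'Payroll' because it is published for
-- replication or enabled for Change Data Capture."
DELETE FROM dbo.Payroll
WHERE PayrollID = 42;   -- allowed; captured with __$operation = 1 (delete)
Most other DDL (adding or dropping columns, even dropping the table itself) is allowed with CDC enabled, so CDC alone will not safeguard tables against drops; the DDL trigger is the more reliable guard for that.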
View 2 Replies
Sep 3, 2007
I have a table with a field "StartedAt". I wish to capture all the data in that table which has yesterday's StartedAt date.
My script below captures the data which has yesterday's StartedAt info, as well as today's rows up to now. How can I capture only yesterday's info?
SELECT StartedAt
FROM myTable
WHERE StartedAt >= DATEADD(day, DATEDIFF(day, 0, getdate()), -1)
Please help.
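A corrected version (a sketch) keeps the existing lower bound and adds an exclusive upper bound at today's midnight, so today's rows drop out:
SELECT StartedAt
FROM myTable
WHERE StartedAt >= DATEADD(day, DATEDIFF(day, 0, GETDATE()), -1)  -- yesterday 00:00
  AND StartedAt <  DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0)   -- today 00:00, exclusive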
View 5 Replies
Oct 17, 2007
Is there built-in functionality to do CDC in SSIS 2005? If not, what is the best way to do this in SSIS 2005?
View 2 Replies
Oct 29, 2015
We have enabled Change Data Capture for auditing our table changes in SQL Server 2008. There is a request to NULL out a few columns (for all rows) in a couple of CDC tables, due to compliance with a certification. Is there a compelling reason not to modify these tables, i.e., to leave the audit trail as-is?
View 1 Replies
Jul 24, 2015
I want to create an SSIS package as follows:
Conditions
If there are about 100 records in the text file and there is an error at records 43 and 67, it should capture records 43 and 67 in the failure folder, and the remaining 98 records should be processed:
1) Successful records into the table, and move the success file from the folder to a new path, say (Success folder) (98 records to table)
2) Unsuccessful records to a new path (Failure folder) (2 lines)
3) Error messages capturing the failed records, stored in another folder (Error log) (2 lines of failure information)
While writing the 3rd condition to the error log table, it has to point out which record failed and for what reason; say it may be due to an invalid data type for column 10 on record 43, and an incorrect syntax error on record 67.
View 9 Replies
Sep 26, 2007
I have 2 tables: table 1 with 772 pieces of compliant data, and table 2 with 435 pieces of data that meet another criterion (all the columns are identical; the data just passed through an additional filter). I need to capture the values that are excluded from table 2.
Example Table 1
ID some value
1 x
2 x
3 x
4 x
5 x
Table 2
ID some value
2 x
3 x
5 x
I need to capture the data from IDs 1 and 4 and assign a new value to it; it is extra compliant data. Thanks!
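A standard anti-join does this (a sketch using the sample data above; Table1, Table2 and SomeValue are placeholder names):
SELECT t1.ID, t1.SomeValue
FROM Table1 AS t1
LEFT JOIN Table2 AS t2
  ON t2.ID = t1.ID
WHERE t2.ID IS NULL;   -- rows in Table1 that were filtered out of Table2 (IDs 1 and 4 here)
The same result can be had with NOT EXISTS, which behaves better than NOT IN when the key is nullable.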
View 2 Replies
Jul 14, 2015
SQL Server 2008 R2: Will enabling Change Data Capture on a replicated database or its tables have any performance impact on the existing transactional replication? Is it possible to use both of them concurrently?
View 5 Replies
Jun 25, 2015
My objective is to extract the source table data from SQL/Oracle or CSV files and load it into a destination table using the CDC mechanism. What steps are required to move this from development into production?
View 3 Replies
Sep 18, 2007
Let me preface by saying I am not very familiar with SSIS.
Ideally, since the Transfer SQL Server Objects task can do all tables, I would like to use it to copy only data from one server to a new server that has the tables pre-created. When I encounter any kind of error, in addition to the error information provided by SSIS, I also need the actual row data.
If the Transfer Objects task can't do that, how would I loop through all the tables on an OLE DB source and capture the same error information on the destination? I figured out how to build a Data Flow for a table with a redirected error output, but that does not give me the actual row data.
View 7 Replies
Jan 28, 2015
I am trying to use Change Data Capture to load the data into the second table from table 1, which is populated from the UI.
What will be the minimum latency? Can we use it in cases where the required latency is less than 5 seconds?
View 1 Replies
Mar 18, 2015
I set up data collection on a production server to capture growth rates.
When I run the disk usage report, it shows a daily growth rate of over 500 MB. This seems excessive to me.
As a troubleshooting step I then ran sp_spaceused and got these results:
database_name    database_size    unallocated space
rgc_prod         273442.63 MB     3648.48 MB

reserved         data             index_size      unused
265345488 KB     164385384 KB     99826072 KB     1134032 KB
What should my next steps be to try and determine why there is so much growth? And isn't the index size rather large?
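One way to see where the growth is going (a sketch; requires SQL Server 2005 or later) is to rank objects by reserved space and snapshot the result daily:
SELECT OBJECT_SCHEMA_NAME(ps.object_id) AS schema_name,
       OBJECT_NAME(ps.object_id)        AS object_name,
       SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb
FROM sys.dm_db_partition_stats AS ps
GROUP BY ps.object_id
ORDER BY reserved_mb DESC;
Diffing two days of output shows which tables and indexes account for the 500 MB/day, and whether the large index footprint is concentrated in a few objects.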
View 1 Replies
Dec 3, 2007
Hi All,
I am now working on the design phase of my project. We are looking to implement Change Data Capture (CDC), but I need some help if you guys have implemented it before using the SSIS 2005 components. I am trying to use the following:
Source---------Derived Column---------Lookup---------------Conditional Split (to split new records and updated records)-----------Destination. Respectively.
Let me make it clear: my source holds old records and newly added or updated records; the Derived Column is to derive new columns called Insert_Date and Update_Date. The Lookup I am using looks up the Fact_Table (the old records) as a reference, and based on this lookup I will split the records on a time basis using the Conditional Split. My questions are:
1. Am I using the right components?
2. What considerations should I keep in mind to make this work (some logic for the Conditional Split)?
3. Any script which helps in this strategy?
4. If you have a better idea, please try to help me; I need your help badly.
Thank you,
SamiDC
View 11 Replies
Jun 9, 2015
I have 5 tables that are joined together.
Each one of the tables listed below has a “CreateDateTime” and an “UpdateDateTime” field. I need to get yesterday's changes: I can get any record where either CreateDateTime or UpdateDateTime is greater than midnight yesterday, but I need to watch the dates on all of the tables, so I need to do at least 10 date checks.
If any table shows an updated or created record, I need to gather ALL of the information for that customer. So, if my name didn't change (SCUS table) but my email did (SEML table), I have to pull out both the SCUS and SEML tables (and the others, of course). So it may not be a simple WHERE clause. How can I achieve this:
SELECT
SCUS.CUSFULLNAME
,
SCUS.CUSMIDDLENM
,
SCUS.CUSLASTNM ,
[Code] ....
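One pattern that keeps the WHERE clause manageable (a sketch: CustomerID as the shared key is an assumption, and SCUS/SEML stand in for all five tables): first collect the keys of customers touched in any table, then pull the full row set for just those keys.
DECLARE @since datetime = DATEADD(day, DATEDIFF(day, 0, GETDATE()), -1);  -- yesterday 00:00

WITH Changed AS (
    SELECT CustomerID FROM SCUS
    WHERE CreateDateTime >= @since OR UpdateDateTime >= @since
    UNION
    SELECT CustomerID FROM SEML
    WHERE CreateDateTime >= @since OR UpdateDateTime >= @since
    -- ...repeat for the remaining three tables
)
SELECT SCUS.*, SEML.*   -- the full customer picture, joined as before
FROM SCUS
JOIN SEML ON SEML.CustomerID = SCUS.CustomerID
WHERE SCUS.CustomerID IN (SELECT CustomerID FROM Changed);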
View 3 Replies
Feb 24, 2007
I am using the following query to export data from sql server to ms access in export data wizard:
SELECT * FROM myView where myID = 123
Order by varcharColumnName1,varcharColumnName2 ,intColumnName3
This query will fetch about 700,000 records.
SQL Server 2005 returns the rows in the correct order, but the data in the Access table appears in the wrong order.
Please suggest a solution.
View 4 Replies
Jan 13, 2013
Or can it record before and after column changes based on the LSN only?
An extract from a file based legacy accounting system is performed every night. The system does not have a primary key because transactions are managed through program code. (the more things change...). The extract is copied to text in Unix and FTP'd to Windows, where the file is loaded into SQL Server by kill & fill. Because of the expense of modifying the source system, there is enormous inertia/resistance to injecting a primary key at the source, so kill & fill it stays.
In reading about Change Data Capture, it seemed to me that column-level inserts, updates and deletes are stored in tables that remember the before and after content of each column tracked. In my reading I have seen many references to the LSN to decide when and what to record as changed, but I have not seen any reference to the necessity of a primary key for Change Data Capture to work. This is in contrast to replication, where the requirement for the existence of a primary key is made plain.
Is it possible to use Change Data Capture against a table without a primary key? And how could it be used to change the extract from kill and fill to incremental?
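CDC itself does not require a primary key; what a key (or unique index) buys is @supports_net_changes = 1. Without one you can still enable capture and read all changes, just not net changes. A minimal sketch (dbo.LegacyExtract is a hypothetical name):
EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'LegacyExtract',
     @role_name     = NULL,
     @supports_net_changes = 0;  -- net changes need a PK, or a unique index named via @index_name
Note that CDC only sees what the log sees: as long as the load stays kill & fill, every nightly kill and fill would be captured as deletes and inserts, so the extract itself has to become incremental before CDC pays off.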
View 9 Replies
Mar 23, 2015
I have located a bug in the functions cdc.fn_cdc_get_net_changes_<capture_instance> generated when you enable cdc on a table. This bug can be triggered if 2 rows are created in the _CT table having the same values for the __$start_lsn, __$seqval and the table's key column(s). From research on the internet I have found such rows can be created by a "deferred update": a single update statement in which a column that is part of a unique constraint is updated.
In order to report the bug to Microsoft I need to create a complete series of steps-to-reproduce. But even though the situation happens several times a day in our production environment, I have not yet been able to reproduce it in my test environment. I need a single update statement (plus maybe some steps in advance) that makes the log reader insert 2 rows into the _CT table, one with __$operation = 1 (delete) and another with __$operation = 2 (insert), as opposed to the single row with __$operation = 4 that it inserts for a normal update. Below is the script I have so far to create a fresh database, enable cdc, create a test table, insert some data and update this data.
I would have liked the last update statement to be handled as a "deferred update". However, in all of my tests the log reader just inserts a single row into the cdc.dbo_NETTEST_CT table. How can I reproduce the situation where I get the 2 rows with __$operation 1 and 2 from a single update statement, instead of the single row with __$operation = 4?
CREATE DATABASE [cdcnet]
CONTAINMENT = NONE
ON PRIMARY
( NAME = N'cdcnet', FILENAME = N'S:\SQLDATA\cdcnet.mdf' , SIZE = 4096KB , FILEGROWTH = 1024KB )
LOG ON
( NAME = N'cdcnet_log', FILENAME = N'T:\SQLLOG\cdcnet_log.ldf' , SIZE = 1024KB , FILEGROWTH = 10%)
[code]....
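One thing worth trying (an unverified assumption on my part, not a confirmed repro): deferred updates are classically forced when a multi-row update to a uniquely-indexed column would collide mid-statement, for example swapping two key values. With dbo.NETTEST and ukey standing in for the test table and its unique column:
-- swapping two values inside the unique index in one statement cannot be
-- applied in place row by row without a transient duplicate, which is the
-- classic trigger for delete-then-insert (deferred update) logging
UPDATE dbo.NETTEST
SET ukey = CASE ukey WHEN 1 THEN 2 WHEN 2 THEN 1 END
WHERE ukey IN (1, 2);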
View 4 Replies
Mar 22, 2004
Hi, here is a question about MSSQL.
A problem occurs when I run a large database: the running speed is very slow.
For example, if I want to seek the records of 1500 employees with 15 records per person (on average) within 1 year (12 months), that means I have to search 1500 * 15 * 12 = 270,000 records.
So, is there a way to solve this problem? Is this what is called tuning the SQL Server/index/view?
What is the difference between tuning SQL Server with an index versus with a view?
Thanks for giving me advice.
:)
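Most of the win in a case like this usually comes from an index that matches the search, rather than from views. A sketch, with all names hypothetical:
CREATE INDEX IX_EmployeeRecords_Emp_Date
    ON dbo.EmployeeRecords (EmployeeID, RecordDate)

-- lets a lookup like this seek directly instead of scanning all 270,000 rows:
SELECT *
FROM dbo.EmployeeRecords
WHERE EmployeeID = 1024
  AND RecordDate >= '19980101' AND RecordDate < '19990101'
A view by itself does not speed anything up; it is just a stored query, so the indexes on the underlying table are what matter.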
View 4 Replies
Sep 21, 2005
I have written a query which fetches data from a table with a huge amount of data. This query is actually being used in a stored procedure and it takes a lot of time, hence it slows down my SP.
Here is the query that I'm building in the stored procedure:
-------------------------------------------------------------
SET @selectLeadsInPeriodString = ' SELECT LM_Dealer, LM_Brand, SUM(LM_ImpressionCount)
FROM LM_ImpressionCount_Dealer'
IF(@timeCriterion IS NULL OR LTRIM(RTRIM(@timeCriterion)) = '' OR LTRIM(RTRIM(@timeCriterion)) = 'NULL')
SET @selectLeadsInPeriodString = @selectLeadsInPeriodString +
' WHERE (CONVERT(varchar,[LM_ImpressionCount_Dealer].[LM_ImpressionDate],102) >= ''' +
CONVERT(varchar,@startDateTime,102) +
''' AND CONVERT(varchar,[LM_ImpressionCount_Dealer].[LM_ImpressionDate],102) <= ''' +
CONVERT(varchar,@endDateTime,102) + ''')'
ELSE IF(@timeCriterion = 'CurrentMonth')
SET @selectLeadsInPeriodString = @selectLeadsInPeriodString +
' WHERE MONTH([LM_ImpressionCount_Dealer].[LM_ImpressionDate]) = MONTH(GETDATE())'
ELSE IF(@timeCriterion = 'PreviousMonth')
SET @selectLeadsInPeriodString = @selectLeadsInPeriodString +
' WHERE MONTH([LM_ImpressionCount_Dealer].[LM_ImpressionDate]) = MONTH(GETDATE()) - 1'
ELSE IF(@timeCriterion = 'YearToDate')
SET @selectLeadsInPeriodString = @selectLeadsInPeriodString +
' WHERE ([LM_ImpressionCount_Dealer].[LM_ImpressionDate] >= cast((''1/1/''+cast(year(getdate()) AS varchar(4))) AS datetime)
AND CONVERT(varchar,[LM_ImpressionCount_Dealer].[LM_ImpressionDate],102) <= CONVERT(varchar,GETDATE(),102))'
IF(@brand IS NOT NULL AND LTRIM(RTRIM(@brand)) <> '')
SET @selectLeadsInPeriodString = @selectLeadsInPeriodString +
' AND [LM_ImpressionCount_Dealer].[LM_Brand] = ''' + @brand + ''''
SET @selectLeadsInPeriodString = @selectLeadsInPeriodString + ' GROUP BY LM_Dealer, LM_Brand'
The variables used in the query formation above are passed as input parameters to the SP.
The table being queried has columns 'LM_Dealer', 'LM_Brand', 'LM_ImpressionCount' and 'LM_ImpressionDate'.
Also, the table has a non-clustered index on the column LM_ImpressionDate.
With all this information, can anyone suggest how I can optimize the query above?
Thanks in advance.
-Dex
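One direction worth trying (a sketch that keeps the original table and column names): the CONVERT(varchar, ..., 102) and MONTH() wrappers on LM_ImpressionDate make every predicate non-sargable, so the nonclustered index on that column can never be used for a seek. Computing the boundaries once and leaving the column bare avoids both the conversions and the dynamic SQL:
DECLARE @from datetime, @to datetime

IF @timeCriterion = 'CurrentMonth'
    SELECT @from = DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0),
           @to   = DATEADD(month, DATEDIFF(month, 0, GETDATE()) + 1, 0)
ELSE IF @timeCriterion = 'PreviousMonth'
    SELECT @from = DATEADD(month, DATEDIFF(month, 0, GETDATE()) - 1, 0),
           @to   = DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0)
ELSE IF @timeCriterion = 'YearToDate'
    SELECT @from = DATEADD(year, DATEDIFF(year, 0, GETDATE()), 0),
           @to   = GETDATE()
ELSE
    SELECT @from = @startDateTime, @to = @endDateTime

SELECT LM_Dealer, LM_Brand, SUM(LM_ImpressionCount)
FROM LM_ImpressionCount_Dealer
WHERE LM_ImpressionDate >= @from
  AND LM_ImpressionDate <  @to    -- exclusive upper bound
  AND (@brand IS NULL OR LTRIM(RTRIM(@brand)) = '' OR LM_Brand = @brand)
GROUP BY LM_Dealer, LM_Brand
As a bonus this fixes the PreviousMonth branch, which with MONTH(GETDATE()) - 1 matched every year's data and broke in January.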
View 10 Replies
Sep 30, 2005
I think this is a very silly question, but it is hard for me :eek:
SELECT email, host FROM WAITING_AUTH WHERE email NOT IN
(SELECT email FROM MEMBER)
AND host NOT IN (SELECT host FROM MEMBER)
thanks~ Have a nice weekend
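If the intent is to exclude rows whose email/host pair already exists in MEMBER, note that the two separate NOT INs also drop rows where only one of the two values matches, and NOT IN returns no rows at all if the subquery ever yields a NULL. A NOT EXISTS sketch that checks the pair together:
SELECT w.email, w.host
FROM WAITING_AUTH AS w
WHERE NOT EXISTS (SELECT 1
                  FROM MEMBER AS m
                  WHERE m.email = w.email
                    AND m.host  = w.host)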
View 3 Replies
May 11, 2004
Hi,
I have the SQL desktop version installed, and for the last few days it has really slowed down. I have run many anti-virus scans etc. and all is okay on that front.
Any tips regarding how I can tune things up? What should I look for and how do I go about it? Deleting the LOG, etc.?
Please guide.
Thanks.
View 3 Replies
Sep 15, 2006
If anyone is able to provide advice for tuning the below query in SQL Server, it is much appreciated. In addition, any index suggestions are also appreciated as I have access to the tables. Thank you.
select a.id,
    isnull(b.advisement_satisfaction_yes, 0) as advisement_satisfaction_yes,
    isnull(c.advisement_satisfaction_no, 0) as advisement_satisfaction_no,
    case
        when isnull(b.advisement_satisfaction_yes, 0) > isnull(c.advisement_satisfaction_no, 0) then 'YES'
        when isnull(b.advisement_satisfaction_yes, 0) < isnull(c.advisement_satisfaction_no, 0) then 'NO'
        when isnull(b.advisement_satisfaction_yes, 0) = isnull(c.advisement_satisfaction_no, 0) then 'TIE'
    end as Satisfied_With_Advisement
from a
left join
    (select id, count(answer_text) as Advisement_Satisfaction_yes
     from a
     where question = 'The level of Academic Advisement I received from the University staff during this course was appropriate.'
       and answer_text = 'yes'
     group by id) b
  on a.id = b.id
left join
    (select id, count(answer_text) as Advisement_Satisfaction_NO
     from a
     where question = 'The level of Academic Advisement I received from the University staff during this course was appropriate.'
       and answer_text = 'NO'
     group by id) c
  on a.id = c.id
where question = 'The level of Academic Advisement I received from the University staff during this course was appropriate.'
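Since index suggestions were requested: both derived tables and the outer query filter on question and answer_text and group on id, so a composite index along these lines should cover all three passes (a sketch; it assumes 'a' is a base table and the columns fit within the index key size limit):
CREATE NONCLUSTERED INDEX IX_a_question_answer
    ON a (question, answer_text)
    INCLUDE (id);
INCLUDE requires SQL Server 2005; on 2000, add id as a third key column instead. The query itself could also be collapsed into a single pass with conditional aggregation (SUM(CASE WHEN answer_text = 'yes' THEN 1 ELSE 0 END)), which reads the table once instead of three times.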
View 2 Replies
Jul 8, 2015
I get the following error message when a job calls a Stored Procedure that TRUNCATES a Table:
Cannot truncate table 'CombinedSurveyData' because it is published for replication or enabled for Change Data Capture
Is my only option to change the TRUNCATE to DELETE?
[URL]
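Two alternatives, sketched (whether you can afford to lose the change data accumulated so far is the deciding factor):
-- option 1: a logged DELETE, slower on big tables but CDC keeps capturing
DELETE FROM dbo.CombinedSurveyData;

-- option 2: drop the capture instance, truncate, then re-enable
-- (discards the change data collected so far for this table)
EXEC sys.sp_cdc_disable_table
     @source_schema = N'dbo', @source_name = N'CombinedSurveyData',
     @capture_instance = N'all';
TRUNCATE TABLE dbo.CombinedSurveyData;
EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo', @source_name = N'CombinedSurveyData',
     @role_name = NULL;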
View 2 Replies
Jul 31, 2013
I am working on an HR project and I have one final component that I am stuck on.
I have an Excel File that is loaded into a folder every month.
I have built a package that captures the data from the excel file and loads it into a staging table (transforming a few bits of data).
I then combine it with another table in a view.
I have another package that loads that view into a Master table and I have added a Slowly Changing Dimension so that it only updates what has been changed. (it’s a table of all employees, positions, hire dates, term dates etc).
Our HR wants to have this data in a report (with charts and tables) and they wanted it to be in a familiar format. So I made a data connection with Excel loading the data into a series of pivot tables.
I have one final component that I can't seem to figure out. At the end of every year I need to capture a count of all active employees and all termed employees for that year. Just a count.
So the data will look like this.
|Year|HistoricalHC|NumbTermedEmp|
|2010|447 |57 |
|2011|419 |67 |
|2012|420 |51 |
The data is in one table labeled [EEMaster]. To test the count I have the following.
SELECT COUNT([PersNo]) AS HistoricalHC
FROM [dbo].[EEMaster]
WHERE [ChangeStatus] = 'Current' AND [EmpStatusName] = 'Active'
This returns the HistoricalHC for 2013 as 418.
SELECT COUNT([PersNo]) AS NumbOfTermEE
FROM [dbo].[EEMaster]
WHERE [ChangeStatus] = 'Current' AND [EmpStatusName] = 'Withdrawn' AND [TermYear] = '2013'
This returns the Number of Termed employees for 2013 as 42.
I have created a table to report from called [dbo].[TORateFY], into which I have manually entered previous years' data.
|Year|HistoricalHC|NumbTermedEmp|
|2010|447 |57 |
|2011|419 |67 |
|2012|420 |51 |
I need a script (or possibly a couple of scripts) that will add the numbers every year, along with the year the data came from
(so on Dec 31st this package will run and add |2013|418|42| as the next row, and so on).
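A sketch of the year-end insert, reusing the two counts above (it assumes Year in dbo.TORateFY is an int column and that the package or a SQL Agent job fires this on Dec 31st):
INSERT INTO dbo.TORateFY ([Year], HistoricalHC, NumbTermedEmp)
SELECT YEAR(GETDATE()),
       (SELECT COUNT([PersNo]) FROM [dbo].[EEMaster]
        WHERE [ChangeStatus] = 'Current' AND [EmpStatusName] = 'Active'),
       (SELECT COUNT([PersNo]) FROM [dbo].[EEMaster]
        WHERE [ChangeStatus] = 'Current' AND [EmpStatusName] = 'Withdrawn'
          AND [TermYear] = CAST(YEAR(GETDATE()) AS varchar(4)));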
View 20 Replies
May 6, 2015
I need to reflect changes from a CSV file in an Oracle DB. Suppose I load a single CSV file into the Oracle DB which contains 10 rows. After some time, I load another CSV file which contains modified rows of the previously loaded CSV file. So, how can I capture the CSV changes and how will they get reflected in the Oracle DB? There is no unique column in the CSV file to identify a particular row.
View 3 Replies
Oct 4, 2015
I am studying indexes and keys. I have a table whose first column is loaded with a fixed-width string, which is parsed in a view based on the data types within the fixed-width specification.
Example column A:
(name, phone, cost of house, zip code, county, state, country, all packed into one fixed-width string)
- a view will later split this large varchar string based on the specification
Column B: the source filename of the data load (varchar(256))
....
a. Would there be a benefit to adding a clustered or nonclustered index (if so, which, and why)?
b. Is there a benefit to making one of these two columns a primary key (millions of records), or to adding a 3rd new column as a PK?
c. View: this parses the data in column A so it ends up looking more like "name phone cost of house zipcode county state country", each having its own column.
- Are there any pros/cons of adding indexes (if so, which) to the view instead of the table, or to both, once the data is parsed? (One possible arrangement is sketched below.)
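One common arrangement for a staging table like this, sketched (dbo.StagingLoad and SourceFileName are stand-in names; whether it pays off depends on how the view is queried): add a narrow surrogate key as the clustered primary key, plus a nonclustered index on the filename if loads are filtered by file.
ALTER TABLE dbo.StagingLoad
    ADD LoadRowID int IDENTITY(1,1) NOT NULL;

ALTER TABLE dbo.StagingLoad
    ADD CONSTRAINT PK_StagingLoad PRIMARY KEY CLUSTERED (LoadRowID);

CREATE NONCLUSTERED INDEX IX_StagingLoad_SourceFile
    ON dbo.StagingLoad (SourceFileName);
Neither of the two existing columns makes a good key: the wide varchar is expensive to index and the filename is not unique per row. Indexing the parsing view would require an indexed view (SCHEMABINDING plus a unique clustered index), so it is usually simpler to keep the indexes on the base table.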
View 4 Replies
Mar 23, 1999
ENVIRONMENT:
I have SQL Server 6.5 running on a dedicated NT Server. The NT configuration includes dual Pentium 200MHz processors, 256MB RAM and a RAID system.
The Database size is 1GB with actual data size about 500MB.
PROBLEM:
I have an application which uses lots of joins to get the results. My select query is running too slow even when I run it on the server.
I updated the statistics and rebuild all the indexes on the tables used by the query.
Any suggestions on using SQL Trace and tuning the server/database are welcome.
Srini
View 4 Replies