We currently have a variety of SQL Server 2000 (and 2005) database servers, and we are having issues with the maintenance plans on a few of the SQL 2000 boxes: they no longer have enough hard disk space to do a full index rebuild.
We want to rebuild the database indexes approximately once a week, or perhaps a little less often; in the past this has worked fine with maintenance plans.
However, we now have some databases in offline mode, we are quite low on disk space, and there are no plans for hardware upgrades any time soon. The temporary solution has been to turn the index rebuilds off.
I have been working on a script that will cycle through each database and, within that database:
Go through each table and run DBCC DBREINDEX on it, then move on to the next table.
Once the reindexing of one database is complete, IF the database is not in simple recovery mode, back up the transaction log.
Run DBCC SHRINKDATABASE with the required amount of free space.
Go to the next database until all are complete.
The logic is quite simple, but so far this has not worked; it would appear something is holding the transaction log until the script exits.
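The loop described above can be sketched roughly like this. This is a simplified sketch of the logic, not the actual script; the backup path is a placeholder and the system-database exclusions are an assumption:

```sql
-- Simplified sketch of the per-database maintenance loop (SQL 2000 era syntax)
DECLARE @db sysname, @sql nvarchar(4000)

DECLARE dbs CURSOR FOR
    SELECT name FROM master.dbo.sysdatabases
    WHERE DATABASEPROPERTYEX(name, 'Status') = 'ONLINE'
      AND name NOT IN ('master', 'model', 'msdb', 'tempdb')

OPEN dbs
FETCH NEXT FROM dbs INTO @db
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Rebuild every index in every table of the current database
    SET @sql = 'USE [' + @db + '] '
             + 'EXEC sp_MSforeachtable ''DBCC DBREINDEX (''''?'''')'''
    EXEC (@sql)

    -- If the database is not in simple recovery, back up the log before shrinking
    IF DATABASEPROPERTYEX(@db, 'Recovery') <> 'SIMPLE'
    BEGIN
        SET @sql = 'BACKUP LOG [' + @db
                 + '] TO DISK = ''D:\Backups\' + @db + '_log.trn'''  -- placeholder path
        EXEC (@sql)
    END

    -- Shrink, leaving 10% free space
    SET @sql = 'DBCC SHRINKDATABASE ([' + @db + '], 10)'
    EXEC (@sql)

    FETCH NEXT FROM dbs INTO @db
END
CLOSE dbs
DEALLOCATE dbs
```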
Now the script works fine except for the SHRINKDATABASE step; I always get: DBCC execution completed. If DBCC printed error messages, contact your system administrator. [SQLSTATE 01000] Shrinking database: inf_dev target percentage: 10 at: Aug 2 2007 5:33PM [SQLSTATE 01000] Cannot shrink log file 2 (INF_PROD_Log) because all logical log files are in use. [SQLSTATE 01000]
Where I'm indexing the INF_Prod database. A DBCC LOGINFO shows something along the lines of:
Clearly there is something in the log file towards the end. However, I don't know why this is happening: I'm running the script in the master database and I've backed up the transaction log of the database I'm working on. I've tried doing a full backup + transaction log backup + shrink; it fails. I've tried waiting 10 minutes in the script before the shrink; it also fails.
However, if I open Query Analyzer and do a log backup followed by a shrink, it works perfectly every time. In the script it always fails, no matter what I do.
We have recently migrated quite a few databases (around 20) from SQL Server 2000 and 2005 to SQL Server 2008 R2.
I am using Ola's script for index maintenance on the databases with compatibility level above 80, as I heard it supports them that way.
Hence I have separated this into two jobs; for the databases with compatibility level 80, we are running a job with the query below on each database.
USE ABC
GO
EXEC sp_MSforeachtable @command1 = "print '?' DBCC DBREINDEX ('?', ' ', 80)"
GO
EXEC sp_updatestats
GO
I am not sure if this is the only way for those databases, because we are seeing the database struggling somewhere when using the above query (the log file seems to be filling very rapidly).
But that is not the case with the databases at compatibility level 90 and above.
I want to know: is a flat file faster than an RDBMS for indexing? For example, for search engine indexing, would a flat file be better than an RDBMS in terms of performance, scalability, etc.?
The other day we tried the online re-indexing feature of SQL 2005 and it's performing faster than offline re-indexing. Could you please validate whether it's supposed to be this way? I always thought offline should be faster than online.
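For reference, the two variants being compared look like this in SQL 2005 syntax (the table and index names here are hypothetical, and ONLINE = ON requires Enterprise Edition):

```sql
-- Online rebuild: the table stays available to readers and writers during the rebuild
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (ONLINE = ON);

-- Offline rebuild for comparison: holds exclusive locks for the duration
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (ONLINE = OFF);
```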
Dear Readers, is it possible, as in Access, to link to tables in other SQL databases that are on the same server? I have a query that I originally had in Access that queried from multiple databases. It did this by having the tables from the other databases linked into the database that held the query.
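For databases on the same SQL Server instance, you don't need linked tables at all: three-part naming (database.owner.table) lets one query span several databases directly. A hypothetical example (the database and table names are made up):

```sql
-- Join a table in one database to a table in another database on the same server
SELECT c.CustomerName, o.OrderDate
FROM Sales.dbo.Customers AS c
JOIN Accounting.dbo.Orders AS o
    ON o.CustomerID = c.CustomerID;
```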
Hi! I don't know if this is the correct forum for this or not, but still... Actually I wanted to know some details about SQL Server's full-text indexing services. I found this on my host's help pages: You need to use the SQL Query Analyzer tool for this.
This will enable full-text indexing for the current database:
exec sp_fulltext_database 'enable'
This creates a catalog:
exec sp_fulltext_catalog 'catalogname', 'create'
This enables indexing of a table:
exec sp_fulltext_table 'tablename', 'create', 'catalogname', 'indexname'
This adds a column to an index:
exec sp_fulltext_column 'tablename', 'columnname', 'add'
This activates full-text on a table:
exec sp_fulltext_table 'tablename', 'activate'
These two enable automatic filling of the full-text index when changes occur to a table:
exec sp_fulltext_table 'tablename', 'start_change_tracking'
exec sp_fulltext_table 'tablename', 'start_background_updateindex'
From the above I get that I need to set up my database for indexing, then make a catalog, and then add an index of a table to this catalog. Can anyone point me to any good tutorials on using this the proper way so that performance is not affected, with details on updating indexes etc. (especially using some criteria)? Moreover, does indexing columns lower performance? Is there a workaround? I am completely new to this.
What should I be looking at if I have real-time data (constant transactions) being written to a table that is experiencing index problems? The table constantly needs to be re-indexed, which is slowing the whole transaction process down.
Can you please help me find out if this statement is always true:
"Adding a new Index slows down updates"
This is more of a general question, applicable to SQL Server as well. If this is not the appropriate subforum, then I kindly ask a moderator to move this thread to the appropriate sub-forum.
Okay, so I've been creating a .NET app that basically gathers data from a web page, passes the parameters to a stored procedure I wrote in SQL, fetches a count, and displays the data on the web page. My problem lies in that I have the query command timeout set to 1:00, but a lot of my queries on the larger tables take longer than that to complete, so the page is timing out quite often.
I KNOW my problem is database design; I'm running an OLAP database. Transactions only occur once a week when we run a federal DO_NOT_CALL database update. I was wondering if anyone would be so kind as to help me tune my database a little more to get some more juice out of it. I can also tell you that I've noticed every time a query is run, the Disk Queue Length maxes out to nearly 100% for the entire length of the query. Don't know if that helps.
I have a problem with indexing. The field PK_hrSetBenefitsLeave is the primary key of the table "hrSetBenefitsLeave". When I view it in "Manage Indexes and Keys", the identity name has become PK_hrSetBenefitsLeave_1. Every time I change it back to its original name I get an error, and I can't save it.
Error msg on saving : 'hrSetBenefitsLeave' table - Unable to create index 'PK_hrSetBenefitsLeave'. There is already an object named 'PK_hrSetBenefitsLeave' in the database. Could not create constraint. See previous errors.
I tried to check using this query.
Select * from Information_Schema.Columns where column_name = 'PK_hrSetBenefitsLeave'
Hello, I need some help understanding why my indexes do not seem to be affecting my searches. I would really appreciate help understanding what indexes I need to make this query run faster. I realize that I use wildcards when searching on g1.gene_name, but is there anything I can do to make that less of a problem? I ran EXPLAIN on the search I wanted to optimize and got the following:

EXPLAIN SELECT c1.SFID
FROM Gene g1, cDNA c1, Transcript t1, Refseq r1
WHERE (c1.SFID = t1.cDNA_SFID AND t1.gene_SFID = g1.SFID
       AND (g1.gene_sym = 'hh' OR g1.genbank_acc = 'hh' OR g1.gene_name LIKE '%hh%'))
   OR (c1.genbank_acc = 'hh' OR c1.SUID = 'hh')
   OR (c1.SFID = t1.cDNA_SFID AND t1.gene_SFID = g1.SFID
       AND g1.locuslink_id = r1.locuslink_id AND (r1.mRNA_acc = 'hh'));

+-------+-------+--------------------------+------+---------+------+--------+-------------------------+
| table | type  | possible_keys            | key  | key_len | ref  | rows   | Extra                   |
+-------+-------+--------------------------+------+---------+------+--------+-------------------------+
| r1    | index | mRNA_acc,llid,rma,rllid  | rma  | 25      | NULL | 20093  | Using index             |
| g1    | ALL   | PRIMARY,llid,ggs,gga,gll | NULL | NULL    | NULL | 190475 |                         |
| c1    | ALL   | PRIMARY,cga,cs           | NULL | NULL    | NULL | 43714  | where used              |
| t1    | index | gene_SFID,gS,cS,tg,tc    | gS   | 4       | NULL | 47238  | where used; Using index |
+-------+-------+--------------------------+------+---------+------+--------+-------------------------+

I have the following indexes (which were all added after the database was populated):

ALTER TABLE cDNA ADD INDEX cga(genbank_acc, SFID);
ALTER TABLE cDNA ADD INDEX co(organism, SFID);
ALTER TABLE cDNA ADD INDEX cs(SUID, SFID);
ALTER TABLE Gene ADD INDEX ggs(gene_sym, SFID);
ALTER TABLE Gene ADD INDEX gga(genbank_acc, SFID);
ALTER TABLE Gene ADD INDEX ggn(gene_name, SFID);
ALTER TABLE Gene ADD INDEX go(organism, SFID);
ALTER TABLE Gene ADD INDEX gll(locuslink_id, SFID);
ALTER TABLE Gene ADD INDEX gui(unigene_id, SFID);
ALTER TABLE Transcript ADD INDEX tg(gene_SFID, cDNA_SFID);
ALTER TABLE Transcript ADD INDEX tc(cDNA_SFID);
ALTER TABLE Refseq ADD INDEX rma(mRNA_acc, locuslink_id);
ALTER TABLE Refseq ADD INDEX rllid(locuslink_id);
Hi, there is a table which I regularly run a select query on. The select query always has a fixed WHERE clause on the same three columns, with different parameter values.
This is a query that runs each time:
select * from tblData where PersonNo = 2 and EmployeeType = 4 and DataDate = getdate()
These are the types of indexes the table currently has, one index for each of the three fields:
index1 on PersonNo
index2 on EmployeeType
index3 on DataDate
In addition to the above, I have also created a covering index:
index4 on PersonNo, EmployeeType, DataDate
Is what I have enough for indexes on this table, please? Is there anything else I should do about indexing this table? Thanks
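For reference, the composite index described above would be declared as follows (names as in the post). A point worth checking: because PersonNo is the leading column of the composite, the separate single-column index on PersonNo is likely redundant for this query:

```sql
-- One composite index matching the query's WHERE clause on all three columns
CREATE INDEX index4 ON tblData (PersonNo, EmployeeType, DataDate);

-- index1 on (PersonNo) alone likely adds nothing the composite doesn't already cover
```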
I need some help with MS Indexing Services, and there doesn't seem to be much support for it on the web. Do you know of any good forums or sites?I'm using MS Indexing Services to power the search feature on my site. Should I be using something else like Sharepoint?
If you put an index on an integer column named 'test_column' in a table that had 1,000,000,000 rows in it, and you ran: select top 50 * from test_table where test_column = 1, then since 'test_column' has an index, that would perform extremely fast, wouldn't it? Cheers
I am working on SQL Server 7.0. Every weekend we reindex some tables. I want to know if it is possible to run the re-indexing of the tables in parallel so that I can save time.
Our database is about 80 GB and one table is around 22 GB. Rebuilding the index on this table takes a long time, and while it runs we are unable to index the other tables.
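One common workaround (a sketch with hypothetical table names; I haven't verified this on 7.0 specifically) is to split the tables across two or more SQL Server Agent jobs scheduled at the same time, so the big table rebuilds in one job while a second job works through the rest concurrently:

```sql
-- Job 1: the large table on its own
DBCC DBREINDEX ('dbo.BigTable')
```

```sql
-- Job 2: everything else, running at the same time in a separate job
DBCC DBREINDEX ('dbo.SmallTable1')
DBCC DBREINDEX ('dbo.SmallTable2')
DBCC DBREINDEX ('dbo.SmallTable3')
```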
Hoping someone could help me with an ongoing indexing question that I have.
On my site, over the past 5 years we have developed what is emerging as a fairly complicated database structure. As features have been added to the site and relations have increased between different database tables, there has been a need to index fields in different ways, and in some instances field indexing has overlapped. For example, we may have a table that has 5 fields (field1, field2, field3, field4, field5). A need to index field1 is required because of a query that reads:
SELECT * From Table1 where field1=XXXXX
Additionally there may be a need for another query that reads:
SELECT * From Table1 where field2=XXXXX
In this instance an index is placed on field2... But, for example, when there is the following query:
SELECT * From Table1 where field1=XXXXX and field2 = XXXXX
Is it necessary to set a new index on: field1,field2 ???
We have made the choice that yes, in fact there is... but now, over time, some of our tables have single fields being indexed alongside combinations of two of those already-indexed fields being indexed together. As tables have grown to over 1,000,000 records, with up to 15 or so indexes each, we realize that the number of indexes may be degrading performance. Also, the indexed columns vary in type, e.g. INT, BIGINT, VARCHAR fields... In the above instance, can we eliminate the multi-column indexes and improve performance overall?
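To make the overlap concrete (a sketch using the post's placeholder names): a composite index can serve both the two-column query and queries on its leading column alone, so one of the single-column indexes typically becomes redundant rather than the composite:

```sql
-- Serves: WHERE field1 = ... AND field2 = ...
-- Also serves: WHERE field1 = ...   (field1 is the leading column)
CREATE INDEX ix_table1_field1_field2 ON Table1 (field1, field2);

-- Still needed for: WHERE field2 = ...  (field2 is not the leading column above)
CREATE INDEX ix_table1_field2 ON Table1 (field2);
```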
On a second related question:
In the event that two tables are joined on a common field.
e.g. Select * from Table1,Table2 where Table1.field1=Table2.field1
Is it necessary to index both of these fields, in Table1 and in Table2?
Hope someone can help, as we are looking to improve the efficiency of our tables as they continue to grow.
I have a database with no index on any table. I have to pull records out of it, process them, and insert them into a set of tables in another database. There is no one-to-one mapping. What I have been doing is get the data into a cursor, manipulate it row by row, and insert into the target tables. This is very slow even for a few thousand records, and we have to do it for a few hundred thousand.
The process takes a long time to run (hours for 20,000 records). I created indexes to speed up the operation, but with the indexes my process just hangs. I have put some print statements within the transaction loop; those also do not appear in ISQL, only after I kill the process.
It's all confusing to me; the index is not helping at all. I checked the query plans after creating the indexes and they display fine, but the stored procedure just stops.
I'm using SQL Server 2000. I have a table called Contacts and I would like the UserID column to be indexed and to ignore duplicates. I set up the following properties within my SQL Server database table:
Every time I try to enter duplicates in the UserID column, I get an error that says, "Cannot insert duplicate key row in object 'Contacts'." Can anyone explain this? Is it possible to create an indexed column that ignores duplicate data?
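For what it's worth, the "ignore duplicates" behaviour in SQL Server 2000 comes from the IGNORE_DUP_KEY option on a unique index; it doesn't make the index accept duplicates, it only changes duplicate inserts from an error into a warning while discarding the duplicate row. A sketch (the table and column names are from the post, the index name is made up):

```sql
-- Duplicate inserts are discarded with a warning instead of failing the statement;
-- the stored data still contains no duplicate UserID values
CREATE UNIQUE INDEX IX_Contacts_UserID
ON Contacts (UserID)
WITH IGNORE_DUP_KEY
```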
I am not really sure how the whole indexing side of MS SQL works (I'm a noob), so my question has 2 parts:
1) Does SQL Server store every index in memory?
2) If so, can I perform a SELECT on a table's index(es) without hitting the disk?
For example: I have a table with a column called "Id" of type uniqueidentifier. I want to select all of the "Id"s in the table without accessing the server's hard drive (i.e. get the info from memory).
I'm looking for some help on how I should index this table.
The current table has about 500k records in it. The fields in the table are:
member_num (varchar(12), not null)
first_name (varchar(20), null)
last_name (varchar(20), null)
ssn (varchar(50), null)
address1 (nvarchar(200), null)
address2 (nvarchar(200), null)
city (nvarchar(200), null)
state (nvarchar(200), null)
zip (nvarchar(100), null)
phone1 (nvarchar(50), null)
All of the fields are searchable through an ASP.NET web form.
My first stab at this consisted of creating a clustered index on member_num and then creating a separate index on each of the remaining fields.
Sorry, it's been a while since I was taught about indexes. Can I place indexes on both FK fields of an associative table? What is the recommended number of rows at which to place an index on a table for SQL Server (if different from other DBMSs)? And also, what is a clustered index?
I have to make the following, but I have no clue; any help will be appreciated.
I have to search through three tables based on user preferences.
The tables are author name, book name, and topics (I have created the tables and their relations).
Now I want the user to select an option from the drop-down menu. The problem is how do I ascertain (dynamically) which table to search based on the option selected by the user. Thanks
Dear All, in my current databases, indexing is very poor. The same columns have both clustered and non-clustered indexes. Is there any tool to help me out? I'm thinking this way... please correct me if I'm thinking wrong...
1) I'm planning to drop all the indexes first.
2) I'm planning to create a clustered index on the ID column.
3) I'm planning to create non-clustered indexes on the columns used in WHERE conditions (many procedures and functions, as well as report queries).
4) Planning to run the index rebuild script every day at non-peak time.
5) Planning to run the index defragmentation script every week at non-peak time.
6) Planning to run the shrink database command every week.
please correct me and add flavour with your great experience.
thank you very much
Arnav Even you learn 1%, Learn it with 100% confidence.
Okay, so first off, here is a sample query I'm using:
SELECT o.state_abbrv, COUNT(o.state_abbrv) AS kount
FROM dbo.mortgage o
WHERE 1 = 1
  AND per1_age >= 20
  AND wealth_rating >= 1
  AND hm_purprice >= 100  -- 6 sec / 3 sec
  AND oo_mtg_amnt >= 100
  AND est_inc >= 'B'
  AND per1_ms = 'M'
  AND hm_year_build >= '1905'
  AND oo_mtg_lender_name <> ' '
  AND oo_mtg_rate_t IN ('f', 'v')
  AND oo_mtg_loan_t IN ('c', 'f')
  AND hm_purdate >= '20000101'
  AND child_pres = 'y'
  AND zip IN (85302, 85029)
  AND state_abbrv IN ('az')
  AND rtrim(city) + ' ' + state_abbrv IN ('glendale az', 'phoenix az')
  AND rtrim(county_name) + ' ' + state_abbrv IN ('maricopa az')
  AND substring(phone, 1, 3) IN ('602', '623')
GROUP BY o.state_abbrv
ORDER BY o.state_abbrv
I'm trying to fine-tune the database so queries come back in less than 30 seconds. EVERY query run will be a count.
I've managed to fine-tune it to the point where anything above the rtrim(city) predicate comes back in about 3-7 seconds; my problem is everything below that. I can't seem to get the query to respond fast enough; any recommendations? I've tried plugging the whole query into the Index Tuning Wizard and it gives me nothing.
CREATE INDEX [mortgage1] ON [dbo].[mortgage]
    ([oo_mtg_rate_t], [state_abbrv], [wealth_rating], [est_inc], [per1_age], [per1_ms],
     [hm_purprice], [hm_purdate], [hm_year_build], [oo_mtg_amnt], [oo_mtg_lender_name],
     [oo_mtg_loan_t]) ON [PRIMARY]
GO
CREATE INDEX [mortgage11] ON [dbo].[mortgage]
    ([oo_mtg_rate_t], [state_abbrv], [zip], [wealth_rating], [phone], [est_inc], [per1_age],
     [per1_ms], [hm_purprice], [hm_purdate], [hm_year_build], [oo_mtg_amnt],
     [oo_mtg_lender_name], [oo_mtg_loan_t]) ON [PRIMARY]
GO
CREATE INDEX [mortgage2] ON [dbo].[mortgage]
    ([oo_mtg_rate_t], [state_abbrv], [wealth_rating], [phone], [est_inc], [per1_age],
     [per1_ms], [child_pres], [hm_purprice], [hm_purdate], [hm_year_build], [oo_mtg_amnt],
     [oo_mtg_lender_name], [oo_mtg_loan_t]) ON [PRIMARY]
GO
CREATE INDEX [mortgage4] ON [dbo].[mortgage]([zip]) ON [PRIMARY]
GO
I assume the issue is the substring. Any assistance would be GREAT!
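One thing worth trying (a sketch, not tested against this schema) is rewriting the trailing predicates so they can use an index instead of forcing a computation on every row. A LIKE with a constant prefix can seek on an index containing phone, whereas substring(phone, 1, 3) cannot; and since equality comparisons on char/varchar ignore trailing blanks, the rtrim() concatenations can be split into plain predicates:

```sql
-- Sargable rewrites of the last three predicates
AND (phone LIKE '602%' OR phone LIKE '623%')
AND city IN ('glendale', 'phoenix')     -- state_abbrv = 'az' is already filtered above
AND county_name IN ('maricopa')
```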
Okay, so I am writing a program that takes any combination of about 30 parameters passed via .ASPX and Visual Basic code. My question is this: do I have to create an index for each possible combination of parameters in order to get the query to come back REALLY fast?
Or would it maybe be a better method to have the program pass every parameter, even if that means selecting all the data, and just set up a few indexes?
Hi folks, I have set up a re-indexing job on a SQL 2005 server. It runs great. But... in the morning the first transaction log backup is almost the size of the entire database. Am I missing something here? I am looking at a similar job on SQL 7 and those transaction logs are not changing much from normal.
I have a set of tables with about the same structure
dataID, recordID, 15 other columns
dataID is unique but is never referenced in queries
recordID is one of the most referenced columns but only has a cardinality of about 30%
The current structure has a clustered PK on (dataID,recordID)
Someone suggested reversing the clustered PK to (recordID, dataID) because of the number of references to recordID, but that didn't seem to boost performance any.
After staring at this for a while I came up with something but I'd like some advice whether it makes sense or not.
Create a non-clustered PK on dataID.
Create a non-unique clustered index on recordID.
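The proposed structure would look something like this (a sketch; the table and constraint names are made up, and any existing clustered PK would have to be dropped first):

```sql
-- Non-unique clustered index on the heavily referenced column
CREATE CLUSTERED INDEX ix_data_recordID ON dbo.Data (recordID);

-- Non-clustered primary key on the unique but rarely queried column
ALTER TABLE dbo.Data
ADD CONSTRAINT PK_Data PRIMARY KEY NONCLUSTERED (dataID);
```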
Let me know if any other information is needed. Thanks
I've got some tables which are indexed on primary key and a couple of search columns.
I've also got some views based on these tables, some quite complex. Do I index the views on the same search columns, or do they automatically index themselves based on the tables they reference?
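Views don't index themselves: a plain view simply uses the base tables' indexes when it is expanded at query time. If you want the view itself materialized and indexed, SQL Server requires an indexed view, which must be created WITH SCHEMABINDING, and its first index must be a unique clustered index. A hypothetical sketch (the names are invented, and the underlying column is assumed NOT NULL, one of several restrictions on indexed views):

```sql
-- A view can only be indexed if it is schema-bound
CREATE VIEW dbo.vOrderTotals
WITH SCHEMABINDING
AS
SELECT CustomerID,
       COUNT_BIG(*)     AS OrderCount,   -- COUNT_BIG(*) is required when grouping
       SUM(OrderAmount) AS TotalAmount
FROM dbo.Orders
GROUP BY CustomerID;
GO

-- The first index on a view must be unique and clustered; this materializes the view
CREATE UNIQUE CLUSTERED INDEX ix_vOrderTotals
ON dbo.vOrderTotals (CustomerID);
```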