Anyway, I recently imported some records (4,000 or so) into it that didn't have a UPC code (NULL). Since then, all the stored procedures in my database that use this table have been incredibly slow; things that used to take 2 minutes to run now take an hour.
I'm not really sure what is causing this, but since it worked fine before I imported the records, I suspect it has something to do with that.
I'm wondering whether, since there was an index on UPC code, it is choking on all the NULLs? I'm still just learning about the wonderful world of indexes.
The types of queries I'm running on this table are not that involved, mainly stuff like:
SELECT Cost, ItemID FROM SupplierData WHERE UPC = @UPC and Supplier = @SupplierName ORDER BY Cost
SELECT Top 1 Cost FROM SupplierData WHERE UPC = @UPC ORDER BY Cost
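For illustration only (the table and column names are taken from the queries above, everything else is an assumption): after a bulk import, two things worth trying are refreshing statistics on the table and making sure there is a composite index that matches the WHERE and ORDER BY clauses, roughly along these lines:

-- Hypothetical sketch; assumes the table is dbo.SupplierData as in the queries above.
-- Refresh statistics, since a bulk insert can leave the optimizer with a stale picture:
UPDATE STATISTICS dbo.SupplierData WITH FULLSCAN;

-- A composite index matching the WHERE and ORDER BY of both queries:
CREATE NONCLUSTERED INDEX IX_SupplierData_UPC_Supplier_Cost
    ON dbo.SupplierData (UPC, Supplier, Cost);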
I have an ASP.NET application on SQL Server 2005. I have finished indexing all the physical primary and foreign keys, virtual primary and foreign keys, sort-order columns, WHERE-clause columns, and so on. On the first day I indexed only the physical and virtual primary and foreign keys, and I noticed the page-loading performance improved. So I continued with the remaining indexes on the second day. This time the loading performance was slower by 0.5 to 1 second. Is it possible for loading performance to get slower after adding indexes? Please advise. Thanks.
Hi - trying to chase down a baffling performance issue. Our database has been running very slow lately, so we are performance-tuning it. In doing so, we created a copy of our production database. In that copy, I changed one clustered index on a table to try to improve performance. I ran one query and saw a slight improvement, but also saw a "lazy spool" in the execution plan. I tried to change it back to the original index by dropping the changed index and recreating the original one. I then ran the original query, which now went from 5 seconds to 36 seconds. I then ran DBCC DBREINDEX on that table. Performance of the query was still markedly worse. I then reran DBCC DBREINDEX on all tables and updated each table's statistics. Performance of that query has never returned to the original 5 seconds. What could be at issue here? Is there something else that I caused by changing the index and changing it back? Ideas much appreciated.
Hello friends, I need to tune my stored procedures to get the best performance. First, I have a question about the execution plan of some SQL statements:
-- assume "Table_Heat" has one Million records. -- assume "Mill" column hold value either 1 or 2.
-- assume "HeatNumber" column is Unique.
1. If I have a statement like this:
SELECT * FROM Table_Heat WHERE HeatNumber = @nHeat AND Mill = @nMill
How does the WHERE clause filter the result from the table? In which order will the ANDed conditions of the WHERE clause be evaluated?
2. If I have created indexes for the table Table_Heat, how does execution speed of the above SQL statement compare if:
-- the index is on HeatNumber?
-- the index is on HeatNumber, Mill?
-- the index is on Mill, HeatNumber?
I know how to work with indexes, but creating too many or unnecessary indexes also causes performance problems, doesn't it? So please help me with this.
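As a side note on the questions above, SQL Server does not evaluate the ANDed predicates in the order they are written; the optimizer chooses the access path. A hedged sketch of the three index layouts being compared, using the names from the example (index names are made up):

-- (a) Index on HeatNumber alone: because HeatNumber is unique, one seek finds
--     the row, and the Mill condition is checked on that single row.
CREATE UNIQUE NONCLUSTERED INDEX IX_TableHeat_HeatNumber
    ON dbo.Table_Heat (HeatNumber);

-- (b) Index on (HeatNumber, Mill): also a single seek; the extra key column
--     adds little here because HeatNumber is already unique.
CREATE NONCLUSTERED INDEX IX_TableHeat_Heat_Mill
    ON dbo.Table_Heat (HeatNumber, Mill);

-- (c) Index on (Mill, HeatNumber): with equality predicates on both columns it
--     can still seek on both keys, but it cannot help a query that filters on
--     HeatNumber alone, since HeatNumber is not the leading column.
CREATE NONCLUSTERED INDEX IX_TableHeat_Mill_Heat
    ON dbo.Table_Heat (Mill, HeatNumber);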
I am about to heavily index a table and have to include at least 3 to 4 columns in the full-text index for this table. The table is updated very frequently, and the columns involved in the full-text indexing also undergo frequent updates. As of now, I can't avoid using full-text indexing, as these columns are very lengthy and basically contain text. The users of the database will give keywords as the search criteria to find what they are looking for. How frequently should I update my full-text catalog? In this scenario the full-text search operates on various tables, each of which might contain around 300,000 to 800,000 rows. I would appreciate an intelligent suggestion, as I need it as soon as possible.
I want to know: is a flat file faster than an RDBMS for indexing? For example, for search-engine indexing, would a flat file be better than an RDBMS in terms of performance, scalability, etc.?
The other day we tried the online re-indexing feature of SQL 2005, and it's performing faster than offline re-indexing. Could you please confirm whether it's supposed to be this way? I always thought offline should be faster than online.
Hello everyone, I have a very complex performance issue with our production database. Here's the scenario. We have a production web server and a development web server. Both are running SQL Server 2000. I encountered various performance issues on the production server with a particular query. It would take approximately 22 seconds to return 100 rows, that's about 0.22 seconds per row. Note: I ran the query in single-user mode. So I tested the query on the development server by taking a backup (.dmp) of the database and moving it onto the dev server. I ran the same query and found that it ran in less than a second. I took a look at the query execution plan and found it was exactly the same in both cases. Then I took a look at the various indexes, and again I found no differences in the table indices. If both databases are identical, I'm assuming the issue is related to some external hardware issue like disk space, memory, etc., or could it be OS/software-related issues, like service packs, SQL Server configurations, etc.?
Here's what I've done to rule out some obvious hardware issues on the prod server:
1. Moved all extraneous files to a secondary hard drive to free up space on the primary hard drive. There is 55 GB of free space on the disk.
2. Applied the SQL Server SP4 service pack.
3. Defragmented the primary hard drive.
4. Applied all Windows Server 2003 updates.
Here are the prod server's system specs:
2x Intel Xeon 2.67 GHz
Total physical memory 2 GB, available physical memory 815 MB
Windows Server 2003 SE w/ SP1
Here are the dev server's system specs:
2x Intel Xeon 2.80 GHz
2 GB DDR2-SDRAM
Windows Server 2003 SE w/ SP1
I'm not sure what else to do; the query performance is an order of magnitude different and I can't explain it. To me it is a hardware or operating system related issue. Any ideas would help me greatly!
Thanks,
Brian T
Hi! I don't know if this is the correct forum for this or not, but still... Actually, I wanted to know some details about SQL Server's full-text indexing services. I found this on my host's help pages: you need to use the SQL Query Analyzer tool for this.
This enables full-text indexing for the current database:
exec sp_fulltext_database 'enable'
This creates a catalog:
exec sp_fulltext_catalog 'catalogname', 'create'
This enables indexing of a table:
exec sp_fulltext_table 'tablename', 'create', 'catalogname', 'indexname'
This adds a column to an index:
exec sp_fulltext_column 'tablename', 'columnname', 'add'
This activates full-text on a table:
exec sp_fulltext_table 'tablename', 'activate'
These two enable automatic filling of the full-text index when changes occur to a table:
exec sp_fulltext_table 'tablename', 'start_change_tracking'
exec sp_fulltext_table 'tablename', 'start_background_updateindex'
From the above I gather that I need to set up my database for indexing, then make a catalog, and then add an index of a table to that catalog. Can anyone point me to any good tutorials on doing this properly so that performance is not affected, with details on updating the indexes, etc. (especially using some criteria)? Moreover, does indexing columns lower performance? Is there a workaround? I am completely new to this.
What should I be looking at if I have real-time data (constant transactions) being written to a table that is experiencing index-related problems? The table needs to be re-indexed constantly, which is slowing the whole transaction process down.
Can you please help me find out if this statement is always true:
"Adding a new Index slows down updates"
This is more of a general question, applicable to SQL Server as well. If this is not the appropriate subforum, I kindly ask a moderator to move this thread to the right one.
Okay, so I've been creating a .NET app that basically gathers data from a web page, passes the parameters to a stored procedure I wrote in SQL, fetches a count, and displays the data on the web page. My problem is that I have the query command timeout set to 1:00, but a lot of my queries on the larger tables take longer than that to complete, so the page is timing out quite often.
I KNOW my problem is database design; I'm running an OLAP database. Transactions only occur once a week, when we run a federal DO_NOT_CALL database update. I was wondering if anyone would be so kind as to help me tune my database a little more to get some more juice out of it. I can also tell you that I've noticed every time a query is run, the disk queue length maxes out at nearly 100% for the entire length of the query. Don't know if that helps.
I have a problem with indexing. The field PK_hrSetBenefitsLeave is the primary key of the table "hrSetBenefitsLeave". When I look at it in "Manage Indexes and Keys", the identity name has become PK_hrSetBenefitsLeave_1. Every time I change it back to its original name I get an error, and I can't save it.
Error msg on saving : 'hrSetBenefitsLeave' table - Unable to create index 'PK_hrSetBenefitsLeave'. There is already an object named 'PK_hrSetBenefitsLeave' in the database. Could not create constraint. See previous errors.
I tried to check using this query.
Select * from Information_Schema.Columns where column_name = 'PK_hrSetBenefitsLeave'
Hello, I need some help understanding why my indexes do not seem to be affecting my searches. I would really appreciate help understanding what indexes I need to make this query run faster. I realize that I use wildcards when searching for g1.gene_name, but is there anything I can do to make that less of a problem? I ran EXPLAIN on the search I wanted to optimize and got the following:

EXPLAIN SELECT c1.SFID FROM Gene g1, cDNA c1, Transcript t1, Refseq r1
WHERE (c1.SFID = t1.cDNA_SFID AND t1.gene_SFID = g1.SFID AND (g1.gene_sym = 'hh' OR g1.genbank_acc = 'hh' OR g1.gene_name LIKE '%hh%'))
   OR (c1.genbank_acc = 'hh' OR c1.SUID = 'hh')
   OR (c1.SFID = t1.cDNA_SFID AND t1.gene_SFID = g1.SFID AND g1.locuslink_id = r1.locuslink_id AND (r1.mRNA_acc = 'hh'));

+-------+-------+--------------------------+------+---------+------+--------+-------------------------+
| table | type  | possible_keys            | key  | key_len | ref  | rows   | Extra                   |
+-------+-------+--------------------------+------+---------+------+--------+-------------------------+
| r1    | index | mRNA_acc,llid,rma,rllid  | rma  | 25      | NULL | 20093  | Using index             |
| g1    | ALL   | PRIMARY,llid,ggs,gga,gll | NULL | NULL    | NULL | 190475 |                         |
| c1    | ALL   | PRIMARY,cga,cs           | NULL | NULL    | NULL | 43714  | where used              |
| t1    | index | gene_SFID,gS,cS,tg,tc    | gS   | 4       | NULL | 47238  | where used; Using index |
+-------+-------+--------------------------+------+---------+------+--------+-------------------------+

I have the following indexes (which were all added after the database was populated):

ALTER TABLE cDNA ADD INDEX cga(genbank_acc, SFID);
ALTER TABLE cDNA ADD INDEX co(organism, SFID);
ALTER TABLE cDNA ADD INDEX cs(SUID, SFID);
ALTER TABLE Gene ADD INDEX ggs(gene_sym, SFID);
ALTER TABLE Gene ADD INDEX gga(genbank_acc, SFID);
ALTER TABLE Gene ADD INDEX ggn(gene_name, SFID);
ALTER TABLE Gene ADD INDEX go(organism, SFID);
ALTER TABLE Gene ADD INDEX gll(locuslink_id, SFID);
ALTER TABLE Gene ADD INDEX gui(unigene_id, SFID);
ALTER TABLE Transcript ADD INDEX tg(gene_SFID, cDNA_SFID);
ALTER TABLE Transcript ADD INDEX tc(cDNA_SFID);
ALTER TABLE Refseq ADD INDEX rma(mRNA_acc, locuslink_id);
ALTER TABLE Refseq ADD INDEX rllid(locuslink_id);
Hi, there is a table on which I regularly run a SELECT query. The query always has a fixed WHERE clause on the same three columns, with different parameter values.
This is a query that runs each time:
select * from tblData where PersonNo = 2 and EmployeeType = 4 and DataDate = getdate()
These are the indexes the table currently has: one index for each of the three fields, i.e.
index1 on PersonNo
index2 on EmployeeType
index3 on DataDate
In addition to the above, I have also created a covering index:
index4 on PersonNo, EmployeeType, DataDate
Is what I have enough in terms of indexes on this table? Is there anything else I should do about indexing it? Thanks.
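For reference, a hedged sketch of the DDL the setup above describes (the index names are made up; the table and columns are from the query):

CREATE NONCLUSTERED INDEX IX_tblData_PersonNo     ON dbo.tblData (PersonNo);
CREATE NONCLUSTERED INDEX IX_tblData_EmployeeType ON dbo.tblData (EmployeeType);
CREATE NONCLUSTERED INDEX IX_tblData_DataDate     ON dbo.tblData (DataDate);

-- The composite index answers all three equality predicates with one seek,
-- which generally makes the single-column indexes above redundant for this
-- particular query.
CREATE NONCLUSTERED INDEX IX_tblData_Person_Type_Date
    ON dbo.tblData (PersonNo, EmployeeType, DataDate);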
I need some help with MS Indexing Services, and there doesn't seem to be much support for it on the web. Do you know of any good forums or sites? I'm using MS Indexing Services to power the search feature on my site. Should I be using something else, like SharePoint?
If you put an index on an integer-type column named 'test_column' in a table that had 1,000,000,000 rows in it, and you ran SELECT TOP 50 * FROM test_table WHERE test_column = 1, then since 'test_column' has an index, that would perform extremely fast, wouldn't it? Cheers
I am working on SQL Server 7.0. Every weekend we reindex some tables. I want to know if it is possible to reindex tables in parallel so that I can save time.
Our database is 80 GB in size and one table is around 22 GB. Rebuilding the indexes on this table takes a lot of time, and we are unable to index the other tables.
Hoping someone could help me with an ongoing indexing question that I have.
On my site, we have over the past 5 years developed what is emerging as a fairly complicated database structure. As features have been added to the site and relations between database tables have increased, there has been a need to index fields in different ways, and in some instances the indexing has overlapped. For example, we may have a table that has 5 fields (field1, field2, field3, field4, field5). An index on field1 is required because of a query that reads:
SELECT * From Table1 where field1=XXXXX
Additionally there may be a need to for another query that reads:
SELECT * From Table1 where field2=XXXXX
In this instance an index is placed on field2. But, for example, when there is the following query:
SELECT * From Table1 where field1=XXXXX and field2 = XXXXX
Is it necessary to set a new index on: field1,field2 ???
We have decided that yes, in fact there is... but over time some of our tables now have single fields being indexed individually as well as combinations of two already-indexed single fields being indexed together. As tables have grown to over 1,000,000 records with up to 15 or so indexes, we realize that the number of indexes may be degrading performance. Also, the indexed columns vary in type, e.g. INT, BIGINT, VARCHAR... In the above instance, can we eliminate some of these overlapping indexes and improve performance overall?
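A hedged sketch of the composite-index option being discussed, using the names from the example queries (the index name is invented):

-- A composite index whose leading column is field1 can serve
-- WHERE field1 = X on its own (leftmost-prefix rule) as well as
-- WHERE field1 = X AND field2 = Y, so a separate single-column index on
-- field1 usually becomes redundant. The single-column index on field2 is
-- still needed for queries that filter on field2 alone.
CREATE NONCLUSTERED INDEX IX_Table1_field1_field2
    ON dbo.Table1 (field1, field2);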
On a second related question:
In the event that two tables are joined on a common field.
e.g. Select * from Table1,Table2 where Table1.field1=Table2.field1
Is it necessary to index both of these fields in tables: Table1 and Table2 ?
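For illustration, a sketch of indexing the join column on both sides (names taken from the example, index names invented), so the optimizer can use an index-based join strategy in either direction:

CREATE NONCLUSTERED INDEX IX_Table1_field1 ON dbo.Table1 (field1);
CREATE NONCLUSTERED INDEX IX_Table2_field1 ON dbo.Table2 (field1);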
Hope someone can help, as we are looking to improve the efficiency of our tables as they continue to grow.
I have a database with no indexes on any table. I have to pull records out of its tables, process them, and insert them into a set of tables in another database. There is no one-to-one mapping. What I have been doing is fetching the data into a cursor, manipulating it row by row, and inserting into the target tables. This is very slow even for a few thousand records, and we have to do it for a few hundred thousand.
The process takes a long time to run (hours for 20,000 records). I created indexes to speed up the operation, but with the indexes my process just hangs. I have put some PRINT statements within the transaction loop, but they do not appear in ISQL either; they appear only after I kill the process.
It's all confusing to me; the indexes are not helping at all. I checked the query plans after creating the indexes, and they look fine, but the stored procedure just stops.
I’m using SQL Server 2000. I have a table called Contacts and I would like to be able to have the UserID as an indexed column and to ignore duplicates. I set up the following properties within my SQL Server database table:
Every time I try to enter duplicates for the UserID column, I get an error that says, “Cannot enter duplicate key row in object ‘Contacts’.” Can anyone explain this? Is it possible to create an indexed column with duplicate data?
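For what it's worth, a hedged sketch of the two options on SQL Server 2000, assuming the column is UserID on dbo.Contacts (index names invented):

-- A plain (non-unique) index allows duplicate UserID values to be stored:
CREATE NONCLUSTERED INDEX IX_Contacts_UserID ON dbo.Contacts (UserID);

-- A unique index with IGNORE_DUP_KEY still does not store duplicates; it
-- discards duplicate rows during an INSERT and raises the warning
-- "Duplicate key was ignored." instead of failing the whole statement.
CREATE UNIQUE NONCLUSTERED INDEX UX_Contacts_UserID
    ON dbo.Contacts (UserID)
    WITH IGNORE_DUP_KEY;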
I am not really sure how the whole indexing side of MS SQL works (I'm a noob), so my question has 2 parts:
1) Does SQL Server store every index in memory?
2) If so, can I perform a SELECT against a table's index(es) without hitting the disk?
For example: I have a table with a column called "Id" which is of type uniqueidentifier. I want to select all of the "Id"s in the table without accessing the server's hard drive (get info from memory).
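A hedged sketch (the table name is made up): indexes are stored on disk just like the table data and are cached in the buffer pool as their pages are read, so nothing guarantees a purely in-memory read. However, a narrow nonclustered index on Id means this SELECT touches only index pages, which after the first read are typically served from cache:

-- Hypothetical table name; only the Id column matters here.
CREATE NONCLUSTERED INDEX IX_MyTable_Id ON dbo.MyTable (Id);

-- The optimizer would normally pick this index on its own; the hint is only
-- to make the illustration explicit.
SELECT Id
FROM dbo.MyTable WITH (INDEX(IX_MyTable_Id));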
I'm looking for some help on how i should index this table.
The current table has about 500k records in it. The fields in the table are:
member_num (varchar(12), not null)
first_name (varchar(20), null)
last_name (varchar(20), null)
ssn (varchar(50), null)
address1 (nvarchar(200), null)
address2 (nvarchar(200), null)
city (nvarchar(200), null)
state (nvarchar(200), null)
zip (nvarchar(100), null)
phone1 (nvarchar(50), null)
All of the fields are searchable through an ASP.NET web form.
My first stab at this consisted of creating a clustered index on member_num and then creating a separate index on each of the remaining fields.
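Sketched out, that first stab would look roughly like this (the table name is assumed; only a few of the columns are shown):

CREATE CLUSTERED INDEX IX_Members_member_num ON dbo.Members (member_num);

CREATE NONCLUSTERED INDEX IX_Members_last_name ON dbo.Members (last_name);
CREATE NONCLUSTERED INDEX IX_Members_ssn       ON dbo.Members (ssn);
CREATE NONCLUSTERED INDEX IX_Members_zip       ON dbo.Members (zip);
-- ...and so on for the other searchable columns. Whether each one earns its
-- keep depends on how selective the column is and how often it is searched;
-- leading-wildcard searches such as LIKE '%smith%' will not seek these indexes.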
Sorry, it's been a while since I was taught about indexes. Can I place indexes on both FK fields of an associative table? And what is the recommended number of rows at which to place an index on a table for SQL Server (if different from other DBMSs)? And also, what's a clustered index?
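A hedged sketch of a typical associative (junction) table, with all names invented and the parent tables assumed to exist. The composite primary key is created as the clustered index by default, which means the rows are physically ordered by it, so its leading column is already well indexed; a second nonclustered index covers lookups from the other side of the relationship:

CREATE TABLE dbo.AuthorBook
(
    AuthorID INT NOT NULL REFERENCES dbo.Author (AuthorID),  -- assumed parent table
    BookID   INT NOT NULL REFERENCES dbo.Book (BookID),      -- assumed parent table
    CONSTRAINT PK_AuthorBook PRIMARY KEY CLUSTERED (AuthorID, BookID)
);

CREATE NONCLUSTERED INDEX IX_AuthorBook_BookID ON dbo.AuthorBook (BookID);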
I have to build the following, but I have no clue; any help will be appreciated.
I have to search through three tables based on user preferences.
The tables are author name, book name, and topics. (I have created the tables and their relations.)
Now I want the user to select an option from the drop-down menu. The problem is how do I determine (dynamically) which table to search based on the option selected by the user? Thanks.
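One possible shape for this, sketched with invented table and column names (not the poster's actual schema): a stored procedure that branches on the drop-down value and searches the matching table.

CREATE PROCEDURE dbo.SearchCatalog
    @SearchBy   VARCHAR(10),    -- 'author', 'book', or 'topic' from the drop-down
    @SearchText NVARCHAR(100)
AS
BEGIN
    IF @SearchBy = 'author'
        SELECT AuthorID, AuthorName FROM dbo.Author WHERE AuthorName LIKE @SearchText + '%';
    ELSE IF @SearchBy = 'book'
        SELECT BookID, BookName FROM dbo.Book WHERE BookName LIKE @SearchText + '%';
    ELSE IF @SearchBy = 'topic'
        SELECT TopicID, TopicName FROM dbo.Topic WHERE TopicName LIKE @SearchText + '%';
END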
Dear all, in my current databases, indexing is very poor: the same columns have both clustered and non-clustered indexes. Is there any tool to help me out? I'm thinking along these lines... please correct me if I'm thinking wrong...
1) I'm planning to drop all the indexes first.
2) I'm planning to create a clustered index on the ID column.
3) I'm planning to create non-clustered indexes on the columns used in WHERE conditions (many procedures and functions, as well as report queries).
4) Planning to run the index rebuild script every day at a non-peak time.
5) Planning to run the index defragmentation script every week at a non-peak time.
6) Planning to run the shrink database command every week.
Please correct me and add flavour from your great experience.
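For steps 4 and 5, a sketch of the commands typically involved, assuming SQL Server 2005 syntax (on SQL Server 2000, DBCC DBREINDEX('dbo.MyTable') rebuilds all indexes on a table); dbo.MyTable stands in for each table the script would cover:

-- Daily rebuild (step 4):
ALTER INDEX ALL ON dbo.MyTable REBUILD;

-- Weekly defragmentation (step 5); REORGANIZE is online and lighter-weight:
ALTER INDEX ALL ON dbo.MyTable REORGANIZE;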
thank you very much
Arnav
Even if you learn 1%, learn it with 100% confidence.
Okay, so first off, here is a sample query I'm using:
SELECT o.state_abbrv, count(o.state_abbrv) as kount FROM dbo.mortgage o WHERE 1 = 1
and per1_age >= 20 and wealth_rating >= 1 and hm_purprice >= 100 --6 sec / 3 sec
AND oo_mtg_amnt >= 100 and est_inc >= 'B' and per1_ms = 'M' and hm_year_build >= '1905'
and oo_mtg_lender_name <> ' ' and oo_mtg_rate_t in ('f','v') and oo_mtg_loan_t in ('c','f')
and hm_purdate >= '20000101' and child_pres = 'y' and zip in (85302,85029) and state_abbrv in ('az')
and rtrim(city)+' '+state_abbrv in ('glendale az','phoenix az')
and rtrim(county_name)+' '+state_abbrv in ('maricopa az')
and substring(phone,1,3) in ('602','623')
group by o.state_abbrv ORDER BY o.state_abbrv
I'm trying to fine-tune the database so queries come back in less than 30 seconds. EVERY query run will be a count.
I've managed to fine-tune it to the point where anything above the rtrim(city) line comes back in about 3-7 seconds. My problem is everything below that; I can't seem to get a query to respond fast enough. Any recommendations? I've tried plugging the whole query into the Index Tuning Wizard and it gives me nothing.
CREATE INDEX [mortgage1] ON [dbo].[mortgage]([oo_mtg_rate_t], [state_abbrv], [wealth_rating], [est_inc], [per1_age], [per1_ms], [hm_purprice], [hm_purdate], [hm_year_build], [oo_mtg_amnt], [oo_mtg_lender_name], [oo_mtg_loan_t]) ON [PRIMARY]
GO
CREATE INDEX [mortgage11] ON [dbo].[mortgage]([oo_mtg_rate_t], [state_abbrv], [zip], [wealth_rating], [phone], [est_inc], [per1_age], [per1_ms], [hm_purprice], [hm_purdate], [hm_year_build], [oo_mtg_amnt], [oo_mtg_lender_name], [oo_mtg_loan_t]) ON [PRIMARY]
GO
CREATE INDEX [mortgage2] ON [dbo].[mortgage]([oo_mtg_rate_t], [state_abbrv], [wealth_rating], [phone], [est_inc], [per1_age], [per1_ms], [child_pres], [hm_purprice], [hm_purdate], [hm_year_build], [oo_mtg_amnt], [oo_mtg_lender_name], [oo_mtg_loan_t]) ON [PRIMARY]
GO
CREATE INDEX [mortgage4] ON [dbo].[mortgage]([zip]) ON [PRIMARY]
GO
I assume the issue is the substring. Any assistance would be GREAT!
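If the substring() and rtrim()+concatenation predicates are indeed the culprit, they are non-sargable, meaning the engine cannot seek an index on phone or city through them. Two hedged options, using the column names from the query above (the computed-column and index names are invented):

-- Option 1: rewrite the filter in a sargable form so an index on phone can seek:
--   AND (phone LIKE '602%' OR phone LIKE '623%')

-- Option 2: persist the derived value as a computed column and index it:
ALTER TABLE dbo.mortgage ADD area_code AS SUBSTRING(phone, 1, 3);
CREATE NONCLUSTERED INDEX IX_mortgage_area_code ON dbo.mortgage (area_code);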
Okay, so I am writing a program that takes any combination of about 30 parameters passed via .ASPX and Visual Basic code. My question is this: do I have to create an index for each possible combination of parameters in order to get the query to come back REALLY fast?
Or would it maybe be a better method to have the program pass every parameter, even if that means selecting all the data, and just set up a few indexes?
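Probably not one index per combination: a common pattern is to build the WHERE clause from only the parameters that were actually supplied and run it with sp_executesql, while keeping a handful of indexes on the most selective columns. A hedged sketch with only three of the thirty parameters; the procedure name is invented and the table/columns are borrowed from the mortgage query above:

CREATE PROCEDURE dbo.SearchMortgage
    @state_abbrv CHAR(2)     = NULL,
    @zip         VARCHAR(10) = NULL,
    @per1_age    INT         = NULL
AS
BEGIN
    DECLARE @sql NVARCHAR(4000);
    SET @sql = N'SELECT COUNT(*) FROM dbo.mortgage WHERE 1 = 1';

    -- Append a predicate only for the parameters the caller actually supplied.
    IF @state_abbrv IS NOT NULL SET @sql = @sql + N' AND state_abbrv = @state_abbrv';
    IF @zip         IS NOT NULL SET @sql = @sql + N' AND zip = @zip';
    IF @per1_age    IS NOT NULL SET @sql = @sql + N' AND per1_age >= @per1_age';

    EXEC sp_executesql @sql,
        N'@state_abbrv CHAR(2), @zip VARCHAR(10), @per1_age INT',
        @state_abbrv, @zip, @per1_age;
END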