Hi all,
I have a very general question about databases. What are the advantages and disadvantages of using a heavily indexed database?
The advantage I can think of is that search operations will be fast. The disadvantage (according to me, a newbie) is that the size of the database will increase.
My teacher, however, is not very happy with this answer and wants me to research more. Any help will be greatly appreciated.
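For what it's worth, a minimal sketch of the trade-off (the table and index names here are made up, not from any real schema): every additional index speeds up the reads it covers, but every INSERT, UPDATE, and DELETE has to maintain it, and it takes extra disk space.

CREATE TABLE dbo.Product (
    ProductID int NOT NULL PRIMARY KEY,
    ProductName varchar(100) NOT NULL,
    Price money NOT NULL
);

-- Speeds up lookups by name...
CREATE NONCLUSTERED INDEX IX_Product_Name ON dbo.Product (ProductName);

-- ...but this insert now writes to both the table and IX_Product_Name,
-- so a heavily indexed table pays for its fast reads with slower writes and more storage.
INSERT INTO dbo.Product (ProductID, ProductName, Price) VALUES (1, 'Widget', 9.99);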
As the title says, I re-indexed all of my databases using the wrong fill factor. Instead of using 90% as the fill factor, I misunderstood and set it to 10%. So I believe my databases are now packed with a ton of unused space. The DB sizes should be about 5-6 GB but have since grown to 20-40 GB. I am very new to SQL administration and don't know of a safe way to remove this unused space so that my databases return to their normal sizes. The databases do not grow very much at all, so the free space is not really necessary.
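In case it helps, a hedged sketch of one way to fix this (assuming SQL Server 2005 or later; the table and logical file names are placeholders): rebuild the indexes with a sensible fill factor, then shrink the data file back toward its real size.

-- Rebuild all indexes on a table with a 90% fill factor instead of 10%
ALTER INDEX ALL ON dbo.YourTable REBUILD WITH (FILLFACTOR = 90);

-- After rebuilding the affected tables, release the unused space
-- (target size in MB; use sys.database_files to find the logical file name)
DBCC SHRINKFILE (YourDataFileLogicalName, 6000);

Shrinking re-fragments indexes, so the usual advice is to shrink once after the rebuilds and leave a little free space rather than shrinking repeatedly.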
I have created a table from another table, where I specified that one of the fields, a number field, is sorted in ascending order. I have NOT specified that it is to be an indexed field, and there are 10 million records, numbered from 1 to 10,000,000 exactly.
Now, if I query that table, asking to return records 1-1,000 from that non-indexed number field that I sorted in ascending order (where number field <= 1,000), will it run as fast as if it were indexed?
In other words, does SQL Server somehow know that these records are sorted in ascending order, and so will not do a full table scan, stopping at 1,000 to return my data set?
Or is there no way for SQL Server to know this, and only specifying an indexed field lets it know the data is in some order so it doesn't have to do the full scan?
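If it turns out you do need the ordering guarantee, a minimal sketch (the table and column names are assumed, not yours): without an index, SQL Server has no metadata saying the column is ordered, so a predicate like this generally means a full scan; a nonclustered index gives the optimizer that guarantee.

CREATE NONCLUSTERED INDEX IX_MyTable_NumberField ON dbo.MyTable (NumberField);

SELECT NumberField
FROM dbo.MyTable
WHERE NumberField <= 1000;   -- can now seek on the index instead of scanning the table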
What are my options for finding the most heavily accessed DBs on a server? I know I can do this with Profiler and some counters. Is there any tool that gives me this information easily?
------------------------ I think, therefore I am - Rene Descartes
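One low-effort option, assuming SQL Server 2005 or later, is to look at cumulative file I/O per database with a DMV instead of running Profiler:

-- Reads and writes per database file since the instance last started
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.num_of_bytes_read,
       vfs.num_of_bytes_written
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY vfs.num_of_reads + vfs.num_of_writes DESC;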
Hi, I'm rather new here, but I would like to ask a question, since this project encompasses a lot of areas of ASP.NET (Visual Studio 2005). My project is a site that is heavily based on a search option. Basically, users just fill in the forms, click Search, and it searches the database. For example: the website is a property listing. The database (SQL Server, created right through Visual Studio 2005) consists of these attributes: ID, Property Name, Property Location, Property Cost. The user may search based on Name, Location, and Cost. For the cost, there are two forms to fill in: the lowest cost allowed and the highest cost allowed. All of these are on the Master Page (located at the top). When the Search button is clicked, it displays the records that match the filled-in forms in the main page located below the Master Page. If anyone can help me with this, I'll be very grateful. Thank you very much.
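On the database side, one common pattern for this kind of search page is a single parameterized query in which empty search boxes simply don't filter; the table and column names below are only assumed from the description above.

SELECT ID, PropertyName, PropertyLocation, PropertyCost
FROM dbo.Property
WHERE (@Name IS NULL OR PropertyName LIKE '%' + @Name + '%')
  AND (@Location IS NULL OR PropertyLocation LIKE '%' + @Location + '%')
  AND (@MinCost IS NULL OR PropertyCost >= @MinCost)
  AND (@MaxCost IS NULL OR PropertyCost <= @MaxCost);

In the ASP.NET page you would pass the four textbox values in as parameters (passing NULL when a box is empty) and bind the result set to a GridView in the main page below the Master Page content.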
We run a multiple database environment, with two of the databases receiving most of the user activity. (both write and read). These databases are roughly 25gb each and receive roughly the same amount of activity. Currently both of the .mdf files sit on the same drive shelf. Their log files are located on a separate drive shelf.
Debate: We have an extra fiber channel shelf available for us to use. We are not having too many problems related to performance, but we are always looking for different ways to increase application/server performance. The debate centers on what to do with the extra shelf. There are two different suggestions on how best to use the shelf. They are:
1) Separate the .mdf files for the two most utilized databases. This would split the databases, and the I/O associated with each, across two different shelves.
2) Move the indexes for all 5 databases onto the extra shelf. This would leave all the .mdf files on the same shelf, but it would move the I/O associated with the indexes to a different shelf.
Can anyone provide the pros and cons of either suggestion? I would like to see arguments for either side.
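For what it's worth, option 2 is usually implemented by adding a filegroup whose file lives on the new shelf and rebuilding the nonclustered indexes onto it; a rough sketch with made-up names, repeated per database:

ALTER DATABASE SalesDB ADD FILEGROUP INDEXES;
ALTER DATABASE SalesDB
    ADD FILE (NAME = SalesDB_ix1, FILENAME = 'F:\SQLData\SalesDB_ix1.ndf', SIZE = 2GB)
    TO FILEGROUP INDEXES;

-- Recreate an existing nonclustered index on the new filegroup
CREATE NONCLUSTERED INDEX IX_Orders_CustID
    ON dbo.Orders (CustID)
    WITH DROP_EXISTING
    ON INDEXES;

Note that moving a clustered index moves the table data itself, so "break off the indexes" in practice usually means the nonclustered indexes only.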
WHERE [AppraisalView_C].[AppraisalID_C] = 'APP-000006'
but I end up getting the dreaded "Msg 4104, Level 16, State 1, Line 1 The multi-part identifier "AppraisalView_C.AppraisalID_C" could not be bound." error....
I can't change the query that is called, but I can change the view. What is wrong?
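For context, error 4104 usually means the qualifier in front of the column doesn't match any table, view, or alias exposed in the FROM clause of the statement. A minimal illustration (object names assumed):

-- Fails: the view is aliased as AV, so the name AppraisalView_C is no longer visible
SELECT *
FROM dbo.AppraisalView_C AS AV
WHERE [AppraisalView_C].[AppraisalID_C] = 'APP-000006';

-- Works: qualify with the exposed alias (or remove the alias)
SELECT *
FROM dbo.AppraisalView_C AS AV
WHERE AV.[AppraisalID_C] = 'APP-000006';

Since only the view can change here, the view and its column would need to be exposed under exactly the names the fixed query references.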
Hi everyone, if I have a table with some indexes on the foreign keys, and these indexes are heavily fragmented (80%+), is it normal for queries to return incorrect results?
For example, say I have a table called Customer (CustID, Name) and Orders (OrderID, CustID, Product, Date). Let's say I have a non-clustered index on CustID in the Orders table, and the clustered indexes are on Customer.CustID and Orders.OrderID.
If the non-clustered index on Orders.CustID becomes heavily fragmented and I query the Orders table with the T-SQL "SELECT * FROM Orders where CustID = @CustID", I sometimes get missing data or incorrect results. In one case all orders for a particular year were missing, but if I queried using OrderID they were returned. Rebuilding the index fixed the problem.
I know the index should be rebuilt or reorganized depending on the fragmentation, but if one happened to become this fragmented, should it start returning incorrect data?
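As an aside, fragmentation on its own should only hurt performance, never correctness, so missing rows are worth checking as possible corruption. A small sketch (assuming SQL Server 2005 or later; the table follows your example and the index name is assumed):

-- Rule out corruption first
DBCC CHECKDB ('YourDatabase');

-- Check fragmentation of the indexes on Orders
SELECT index_id, avg_fragmentation_in_percent, page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'LIMITED');

-- Rebuild the nonclustered index if it is heavily fragmented
ALTER INDEX IX_Orders_CustID ON dbo.Orders REBUILD;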
We have a bunch of Audit tables that contain almost exact copies of the operations tables. The audit tables also include:
AuditID - the audit action (insert, modify - old, modify - new, deleted)
AuditDate - date and time of the action
AuditUser - user who did it...
At the end of the day I need to know, for any given record, what it looked like at the beginning of the day and what it looks like at the end of the day. There could have been numerous changes to the record throughout the day; those records I am not interested in. Only the first record and the last record of a given day.
I am going to be doing a lot of MIN(AuditDate) and MAX(AuditDate) and ... WHERE AuditDate BETWEEN '10/1/2007 00:00:00' AND '10/1/2007 11:59:59' ...
Question: What's better for performance?
1. Separating out the date and time and putting a clustered index on the date.
2. Keeping date and time in the same column and just using a normal index.
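Whichever layout you pick, a hedged sketch of the kind of query this enables (the table name, RecordID column, and index name are assumed); a half-open date range keeps the predicate index-friendly:

CREATE CLUSTERED INDEX IX_Audit_AuditDate ON dbo.AuditTable (AuditDate);

-- First and last audit row per record for 10/1/2007
SELECT RecordID,
       MIN(AuditDate) AS FirstChange,
       MAX(AuditDate) AS LastChange
FROM dbo.AuditTable
WHERE AuditDate >= '20071001'
  AND AuditDate <  '20071002'
GROUP BY RecordID;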
In the properties of a table in MS Access there is an Indexed property. If someone had set that to "Yes (Duplicates OK)", how can one implement this in MSSQL?
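Access's "Yes (Duplicates OK)" corresponds to an ordinary non-unique index in SQL Server; a minimal sketch with assumed table and column names:

-- Non-unique nonclustered index: duplicate LastName values are allowed
CREATE NONCLUSTERED INDEX IX_Customer_LastName ON dbo.Customer (LastName);

("Yes (No Duplicates)" would map to CREATE UNIQUE INDEX instead.)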
I have a table with three fields that are marked as indexed fields with unique keys. I want to remove one of the fields. When I do this and hit the rebuild button, I get an error that there is a duplicate field, so it will not let me just remove the one field and leave the other two. How can I get around this?
I am trying to create an indexed view but cannot seem to get it right.
CREATE VIEW dbo.D_Policy_View with schemabinding AS
SELECT Policy_ID, Environment_Code, CoB, Sub_CoB, Policy_No, Version_No
FROM dbo.D_Policy
WHERE (Policy_ID IN (SELECT MAX(Policy_ID)
                     FROM dbo.d_Policy
                     GROUP BY Environment_Code, COB, Policy_No, SUB_COB))
I have read in BOL that MAX is not allowed, but I don't know of any other way to get the latest record.
Hello, I'm currently performance tuning a table with 100 million rows in it (about 18 GB of data) and would like to know:
1. Is the table too large to be performance tuned? Is it better to just redesign the schema?
2. Can techniques such as indexed views really help me with tuning such a table?
3. How long would it take to create a clustered or non-clustered index on a varchar column (for instance) on a table with 100 million rows? (I know this is a function of hardware as well - let's assume I'm using a fairly maxed-out DL 360, i.e. dual processor with 4 GB of memory.)
I am looking for a little insight. I am using an SQL Server database created by a third party vendor. There are certain columns in a given table that I query for quite often. To speed things up, I created an indexed view.
Now I can no longer insert into the base table. Attempting an insert causes a SQL error stating that the system properties ARITHABORT and NUMERIC_ROUNDABORT are incorrect. If I remove the index from my view, the inserts work just fine.
Can somebody provide some insight as to why this happens and how I might be able to correct it? (Keep in mind that the DB was set up by a third party, so I cannot change too much of the underlying setup without possibly compromising their functionality.)
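For reference, DML against a table referenced by an indexed view needs specific session settings. One hedged workaround, if you keep the indexed view, is to set them explicitly in the session (or connection string) that performs the insert, before the INSERT statement:

SET ARITHABORT ON;
SET NUMERIC_ROUNDABORT OFF;
SET ANSI_NULLS ON;
SET ANSI_PADDING ON;
SET ANSI_WARNINGS ON;
SET CONCAT_NULL_YIELDS_NULL ON;
SET QUOTED_IDENTIFIER ON;

The third-party application may well be connecting with ARITHABORT OFF (a common default for older client libraries), which would explain why inserts only work while your view index is absent.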
I have a problem trying to create an indexed view on SQL Server 2000. There are multiple databases, one of which stores system wide data. I would like to create an indexed view on the system wide data for each of the site databases.
Is it possible to create an indexed view on data in another database?
I am trying to create an indexed view, but because I am using a MAX function, I get the error
Cannot create index on view "dbo.View" because it uses aggregate "MAX". Consider eliminating the aggregate, not indexing the view, or using alternate aggregates. For example, for AVG substitute SUM and COUNT_BIG, or for COUNT, substitute COUNT_BIG.
I'm totally stuck on how I can replace the MAX function.
Any help would be appreciated.
SET ANSI_NULLS ON
GO
SET ANSI_PADDING ON
GO
SET ANSI_WARNINGS ON
GO
SET CONCAT_NULL_YIELDS_NULL ON
GO
SET NUMERIC_ROUNDABORT OFF
GO
SET QUOTED_IDENTIFIER ON
GO
SET ARITHABORT ON
GO
CREATE VIEW [dbo].[View] WITH SCHEMABINDING AS
SELECT TOP 100 PERCENT MAX(js_id) AS job_event, job_id
FROM dbo.JobEvent
GROUP BY job_id
ORDER BY job_event
GO
CREATE UNIQUE CLUSTERED INDEX IX_VMaxJobEvent ON View (job_id)
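Since MIN and MAX can't be used in an indexed view, one common workaround (just a sketch; the view name below is made up to avoid clashing with yours) is to index the base table so that the MAX per group is a cheap seek, and leave the view itself un-indexed:

CREATE NONCLUSTERED INDEX IX_JobEvent_JobId_JsId
    ON dbo.JobEvent (job_id, js_id);
GO
CREATE VIEW [dbo].[View_MaxJobEvent]
AS
SELECT job_id, MAX(js_id) AS job_event
FROM dbo.JobEvent
GROUP BY job_id;
GO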
snehalata writes "I create the view as follows:

CREATE VIEW Data WITH SCHEMABINDING AS
SELECT A.PartitionID, FundID, ReportDate, ForeignTaxWithheld, DomDividendIncome, RGainShortTerm, RGainLongTerm, NewIssueRGainShortTerm,
NewIssueRGainLongTerm, ChgUnrealizedGain, ReplaceTax, TotIncomeBefFee, TotalIncome, EndingNetCapital, EndingRedemptionUnits,
BeginRedemptionAmount, EndingRedemptionAmount, EndingUnits, InterestOverseas, ExpenseOverseas, OrdIncome,
ReallocationExpense, BeginRedemptionFee, EndingRedFee, BeginGrossCapital, EndingGrossCapital, GPFees, FixedExpense, MergerCost,
SellingCommission, GrossRoR, NAV, GAV, GPMgmtFee, IMMgmtFee, GPIncentivefee, IMIncentivefee, NetRoR, MonthCounter,
BegUnits, BegAddUnits, BegAddAmount, EndAddAmount, GrossRealizedGain, BrokerCommission,
NetRealizedGain, OperatingExpense, OffsellExp, OrgExp, USObligationIncome, FixIncomeIntrIncome,
CapitalGain, SellingFee, SellingMgmtFeeMidQtr, SyndicateCost, BeginNetCapital, DomesticDividendExp, FixedIncomeIntrExp
FROM dbo.vPart1 A LEFT JOIN dbo.vPart2 B ON A.PartitionID = B.PartitionID
then if I create the index as follows
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
SET ARITHABORT ON
SET CONCAT_NULL_YIELDS_NULL ON
SET QUOTED_IDENTIFIER ON
SET NUMERIC_ROUNDABORT OFF
create unique clustered index ind1 on Data (FundID, ReportDate)

it gives me the following error: Cannot index the view 'MonthliesTest2.4.dbo.Data'. It contains one or more disallowed constructs."
Dear experts, I've been working for an ERP solutions company as a DBA....
We have around 1200 tables as well as 650 views.....
We are not using clustered indexes on the views..... Will using clustered indexes boost performance? The ERP is a web-based application, so modifications are made on a regular basis.... Is it a good idea to implement clustered indexes on these views?
Please guide me in this regard.
Thank you very much.
Vinod Even you learn 1%, Learn it with 100% confidence.
I am trying to index a large number of PDF files using SQL Server Full Text indexing, and am running into an issue where about 1% of the documents are not being indexed. I looked in the SQL Full Text logs and the following error appears thousands of times: Error '0x80043651: msftesql should reprocess this document in an isolated fashion to confirm the error.' occurred during full-text index population for table or indexed view '[DocumentWarehouse].[dbo].[Document2006_tbl]' (table or indexed view ID '485576768', database ID '5'), full-text key value 0x00E32429. Attempt will be made to reindex it.
The component 'MSFTE.DLL' reported error while indexing. Component path 'Y:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn\MSFTE.DLL'. Occasionally this warning appears: Warning: No appropriate filter was found during full-text index population for table or indexed view '[DocumentWarehouse].[dbo].[Document2006_tbl]' (table or indexed view ID '5', database ID '485576768'), full-text key value 0x00FE8A91. Some columns of the row were not indexed. The problem isn't because certain PDF documents can't be indexed, because when you reinsert a doc that wasn't indexed, it does get indexed. My version of SQL Server is 9.00.2153.00 running on Windows Server 2003.
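If a targeted retry is useful, a hedged sketch (SQL Server 2005 syntax; the catalog name is assumed) for kicking off another population pass on the affected table:

-- Re-run a full population on the table whose documents were skipped
ALTER FULLTEXT INDEX ON dbo.Document2006_tbl START FULL POPULATION;

-- Or rebuild the whole catalog if the problem is widespread
ALTER FULLTEXT CATALOG DocumentCatalog REBUILD;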
Hi, I have a large table, rrsn_security_t, with more than half a million rows. I do a complete update on all the rows of the table using the following query.
UPDATE rrsn_security_t
SET cusip = b.fmr_cusip,
    master_issuer_num = b.mstr_isr_cusip,
    ticker = b.fmr_symb,
    description = b.fmr_name,
    prim_exchange_code = c.exchange_key,
    shares_otstndng = d.amount,
    iv_type = b.iv_typ,
    active = CASE WHEN b.deact_date IS NULL THEN 'Y' ELSE 'N' END
FROM rrsn_security_t a
INNER JOIN ref_security_t b ON a.security_id = b.fmr_cusip
LEFT OUTER JOIN shares_outstanding_feed_t d ON b.fmr_cusip = SUBSTRING(d.fmr_cusip, 1, 9) AND d.fmr_type = 'OUTS'
LEFT OUTER JOIN rrs_exchange_t c ON b.dft_exch_cd = c.exchange_id
WHERE b.fmr_cusip NOT IN (SELECT security_id FROM rrsn_scrty_ovrrd_in_effect_t)
This is a part of a daily batch load and a DTS package. The table has one clustered index and three non clustered indexes. Two of them are covering indexes. The update is on all the columns that have the non clustered indexes.
The problem is that when I run the update, the transaction log grows to more than a GB and the update takes almost an hour. Without the index it takes around 300 MB and 7 minutes.
I am not comfortable with the idea of dropping and recreating the indexes, since that is not supposed to be necessary in SQL Server 7.0, though it was the case in previous versions of SQL Server.
Also, the query plan with the indexes in place shows that a Table Spool/Eager Spool (to optimize rewinds) takes 50% of the query cost.
Could anyone help me with how I should deal with this situation?
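One approach that often helps with the log growth, offered only as a sketch (the column list is abbreviated and the change-detection predicate is illustrative): break the update into batches so each chunk commits on its own, and skip rows whose values would not actually change, which also cuts down on nonclustered index maintenance.

DECLARE @batch int, @rows int
SET @batch = 50000
SET @rows = 1
WHILE @rows > 0
BEGIN
    SET ROWCOUNT @batch              -- SQL 7.0/2000-era way to cap rows per pass
    UPDATE a
    SET ticker = b.fmr_symb          -- ...remaining columns as in the original statement
    FROM rrsn_security_t a
    INNER JOIN ref_security_t b ON a.security_id = b.fmr_cusip
    WHERE a.ticker <> b.fmr_symb     -- only touch rows that actually change (NULLs need extra handling)
    SET @rows = @@ROWCOUNT
END
SET ROWCOUNT 0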
I need to insert 50,000 rows, 30 times daily, into a big 13-month invoice table, tb_Invoce.
The table has 15 columns; 4 of them are indexed (1 clustered). The table is heavily queried. I want to minimize insertion time and the time the table is locked. What is the best algorithm for inserting?
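One commonly used pattern, offered only as a sketch (the staging table and column names are invented, and the batch size is arbitrary): bulk load the 50,000 rows into a staging table first, then move them into tb_Invoce in clustered-key order in modest chunks, so no single transaction holds locks on the busy table for long.

-- Assumes the incoming rows are already in dbo.tb_Invoce_Staging (e.g. via BULK INSERT)
DECLARE @done bit
SET @done = 0
WHILE @done = 0
BEGIN
    INSERT INTO dbo.tb_Invoce (InvoiceID, InvoiceDate /* , ...other columns */)
    SELECT TOP 5000 s.InvoiceID, s.InvoiceDate /* , ... */
    FROM dbo.tb_Invoce_Staging s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.tb_Invoce i WHERE i.InvoiceID = s.InvoiceID)
    ORDER BY s.InvoiceID

    IF @@ROWCOUNT = 0 SET @done = 1
END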
I had some issues yesterday with the fact that some of the tables I had indexed views for did not have a unique clustered index. The tables had unique indexes and clustered indexes, but not a unique clustered index. What I was seeing were rows that should have been in the view not showing up in a regular select, but they would show up with a NOEXPAND hint.
To fix the problem I created a unique clustered index on each of the underlying tables, but I cannot find that requirement anywhere. Is this a requirement, and if so, can someone tell me where to find it?
How do we defrag indexed views? Can anyone give me a query to loop through all the indexed views in the database, find out the fragmentation levels, and defrag them? Thanks in advance!
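A hedged starting point (assuming SQL Server 2005 or later): indexed views show up in sys.views and sys.indexes just like tables, so you can list their fragmentation and then defrag them with ALTER INDEX.

-- Fragmentation of every index that lives on a view
SELECT v.name AS view_name,
       i.name AS index_name,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.views AS v
JOIN sys.indexes AS i ON i.object_id = v.object_id
JOIN sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
     ON ps.object_id = i.object_id AND ps.index_id = i.index_id
ORDER BY ps.avg_fragmentation_in_percent DESC;

-- Then per indexed view: REORGANIZE for light fragmentation, REBUILD for heavy
ALTER INDEX ALL ON dbo.YourIndexedView REORGANIZE;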
I have an indexed view with a clustered index on my database. When I try to run an update statement against the table that is referenced in the view, I get one of the following errors:
I created an indexed view in SQL 2000, and I expected to see the index created on the view referenced in the execution plan when I query the view. Instead, I see the index for the base table referenced in the execution plan. Why?
There are 6,000,000+ records in the base table, and the view only references 256 of these rows.
Here is some of the DDL if you need it:
CREATE TABLE [alarm_t] (
    [ct_dtm] [datetime] NOT NULL ,
    [dst_flg] [char] (3) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [stn_nm] [varchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [alarm_txt] [varchar] (255) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [utc_dtm] [datetime] NOT NULL ,
    [create_utc_dtm] [datetime] NOT NULL
) ON [PRIMARY]
GO

CREATE CLUSTERED INDEX [alarm_idx2] ON [dbo].[alarm_t] ([ct_dtm], [stn_nm], [dst_flg]) ON [PRIMARY]
GO

create view dbo.alarm_Mapbd_v with schemabinding as
SELECT [ct_dtm], [dst_flg], [stn_nm], [alarm_txt], [utc_dtm], [create_utc_dtm]
FROM [dbo].[alarm_t]
WHERE [stn_nm] = 'Mapbd'
GO

create unique clustered index alarm_Mapbd_idx1 on dbo.alarm_Mapbd_v (stn_nm, ct_dtm, dst_flg)
go

update statistics alarm_t
go
update statistics alarm_Mapbd_v
go
The following 2 queries have the exact same execution plan, both showing a cost of 50%. I expected to see the index created on the view referenced in the execution plan for the first query. Is the index created on the view being used?
select stn_nm, ct_dtm, dst_flg
from alarm_Mapbd_v
go

SELECT [ct_dtm], [dst_flg], [stn_nm], [alarm_txt], [utc_dtm], [create_utc_dtm]
FROM [dbo].[alarm_t]
WHERE [stn_nm] = 'Mapbd'
go
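One likely explanation, assuming this is not Enterprise or Developer edition: on SQL Server 2000, only those editions let the optimizer match an indexed view automatically; on other editions you have to query the view and hint it, roughly like this:

select stn_nm, ct_dtm, dst_flg
from dbo.alarm_Mapbd_v with (noexpand)   -- forces use of the view's clustered index
go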
I have created a unique clustered index on a view. The view does a GROUP BY on 3 of the columns and uses the COUNT_BIG aggregate function. I used the following SET commands before creating the view and the index:
SET ARITHABORT ON
SET CONCAT_NULL_YIELDS_NULL ON
SET QUOTED_IDENTIFIER ON
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
SET NUMERIC_ROUNDABORT OFF
I can insert and delete rows from the base table, and the indexed view is updated fine.
However, when a scheduled job does effectively the same thing (delete some rows, and insert some new rows) I get the following error:
Executed as user: NT AUTHORITY\SYSTEM. DELETE failed because the following SET options have incorrect settings: 'ARITHABORT'. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or query notifications and/or xml data type methods. [SQLSTATE 42000] (Error 1934). The step failed.
Why am I getting this error?
The same SET commands above are in the Transact-SQL code for the job before the delete and before the insert statements.
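One thing worth double-checking, offered only as a sketch: the SET options have to be in effect in the same session (the same job step) that executes the DELETE and INSERT, ahead of those statements, and if the step calls a stored procedure, note that QUOTED_IDENTIFIER and ANSI_NULLS are captured when the procedure is created, so the procedure may need to be dropped and recreated with those options ON.

SET ARITHABORT ON
SET QUOTED_IDENTIFIER ON
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
SET CONCAT_NULL_YIELDS_NULL ON
SET NUMERIC_ROUNDABORT OFF
-- ...DELETE and INSERT statements follow in this same job step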
Lalitha writes "Can I use DML statements against indexed views? If yes, how does it work internally? Will the pages get locked during updates and inserts, and when does the modified data get reflected in the base tables?"