Large Databases - Create Indexes For Fields That Are Used In Where Statements?
Aug 29, 2013
For large databases, is it a good idea to create indexes on fields that are used in WHERE clauses? Does that improve performance and reduce overhead?
I am new to writing SQL code and I read that you can use ALTER statements to create an index for a table. How would I go about doing that? Everything that I have tried in Query Analyzer comes up with an error.
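For what it's worth, the usual statement is CREATE INDEX rather than ALTER TABLE (ALTER TABLE only builds an index as a side effect of adding a PRIMARY KEY or UNIQUE constraint), which may be why Query Analyzer rejects the attempts. A minimal sketch with a hypothetical table and column:

-- create a nonclustered index on a column that appears in WHERE clauses
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)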
I'm working to improve performance on a database I've inherited, and there are several thousand indexes. I've got a list of ones which should definitely exist within the database, and I'm looking to strip out all the others and start fresh, though this list is still quite large (1000 or so).
Is there a way I can remove all the indexes that are not in my list without too much trouble, i.e. without having to go through them all manually? The list is currently in a CSV file.
I'm looking to either automate the removal of indexes not in the list, or possibly to generate the Create statements for the indexes on the list and simply remove all indexes and then run these statements.
As an aside, when trying to list all indexes in the database, I've found various scripts to do this, but they all seem to produce differing results. What is the best script to list all indexes?
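On the listing question: one reason scripts disagree is that some include heaps, statistics, or system objects. A minimal sketch against the SQL 2005 catalog views (SQL 2000 would use sysindexes instead):

SELECT o.name AS table_name,
       i.name AS index_name,
       i.type_desc,
       i.is_unique
FROM sys.indexes AS i
JOIN sys.objects AS o ON o.object_id = i.object_id
WHERE o.is_ms_shipped = 0
  AND i.name IS NOT NULL   -- heaps have no index name; this filters them out
ORDER BY o.name, i.name

The CSV list could then be bulk-loaded into a staging table and LEFT JOINed against this result to generate DROP INDEX statements for everything not on the list.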
I am trying to import a very large amount of data into a SQL database. The data is in the form of SQL statements already - it is all in one large text file consisting of a number of CREATE TABLE X and INSERT INTO X VALUES() statements.
The problem is that the number of values being inserted into some of the tables is so large that I am unable to open the .SQL file in Query Analyzer to run it: the line-size limit for Query Analyzer is 64 KB, whereas the actual line size in the file is in some cases in excess of 15 MB.
I would appreciate any advice on how to get all this data into a manageable format. I keep thinking that there simply has to be a way to execute these over-size SQL statements.
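One possibility, assuming the file is syntactically valid T-SQL: skip Query Analyzer entirely and stream the file through the osql command-line tool (or sqlcmd on SQL 2005), which reads the file rather than loading it into an editor. A hypothetical invocation (server, database, and paths are placeholders):

osql -S MyServer -d MyDatabase -E -i C:\data\bigload.sql -o C:\data\load_output.txt

Here -E uses a trusted connection, -i names the input script, and -o captures the output and any errors for review afterwards.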
Hello! I have a developer that is playing around with some SQL statements using VB.NET. He has a test table in a SQL 2000 database, and he has about 2000 generated INSERT statements.
When the 2000 INSERT statements are run in SQL Query Analyzer, all 2000 rows are added to the table. When he tries to send the 2000 statements to SQL Server through his app, a random number of statements do not get executed. But SQL Profiler shows that each of the 2000 statements is getting sent to the server. I suggested that he add a "GO" statement at the end of the INSERT block, but the statement fails when that is sent to the server.
I know that this is not the ideal manner to insert bulk data into the system, but now we are all just curious as to why SQL Server doesn't execute each individual INSERT. Any thoughts?
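One relevant detail: GO is a batch separator recognized by client tools such as Query Analyzer, not a T-SQL statement, which would explain why sending it through the app fails. A hedged sketch of one way to make silent failures visible, wrapping the generated inserts (hypothetical table) in a single aborting transaction:

SET XACT_ABORT ON   -- any run-time error rolls back and aborts the whole batch
BEGIN TRANSACTION
INSERT INTO dbo.TestTable (Col1) VALUES ('row 1')
INSERT INTO dbo.TestTable (Col1) VALUES ('row 2')
-- ... remaining generated statements ...
COMMIT TRANSACTION

With this, either all 2000 rows arrive or the app gets an error back, instead of a random subset succeeding quietly.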
I have come across a database system which isn't designed to work optimally. It is fairly large (~400GB) and performance of loading and querying is degrading (improper data types, fragmented indexes, non unique clustering key and other problems). So, I have quite a task in front of me, but I am up for the challenge. I figure this is not a unique situation, many of us would have come across this before. I have done this before too, but only for smaller databases, some of the operations here I expect to take a couple of hours or more to complete (depending on load/infrastructure speed etc, I know).
My plan is thus:
+ Take a full backup of the database
+ Set the recovery model of the DB to simple
+ Drop non-clustered indexes
+ Drop clustered indexes
+ Remove PKs (wrong data types, too large!)
+ Narrow data types (add new column, update column in batches to old value, rename new column to old column; see the sketch below)
+ Add PKs, which will create clustered indexes automatically based on PK ID
+ Create non-clustered indexes
+ Run a SHRINKDB (in normal operations I would never do this, but this is a special case; ensure the log file is truncated to a logical size, especially after all those table modifications...)
+ Set the recovery model of the DB to Full
+ Ensure everything works OK or better
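A sketch of the "narrow data types" step above, under hypothetical names and assuming the old column is never NULL, batching the copy so each transaction (and the log, under SIMPLE recovery) stays small:

ALTER TABLE dbo.BigTable ADD NewKey int NULL

WHILE 1 = 1
BEGIN
    -- copy 10,000 rows per transaction; relies on OldKey never being NULL
    UPDATE TOP (10000) dbo.BigTable
    SET NewKey = CONVERT(int, OldKey)
    WHERE NewKey IS NULL

    IF @@ROWCOUNT = 0 BREAK
END

EXEC sp_rename 'dbo.BigTable.OldKey', 'OldKey_obsolete', 'COLUMN'
EXEC sp_rename 'dbo.BigTable.NewKey', 'OldKey', 'COLUMN'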
I created a cursor that moves through a table to retrieve a user's name. When I open this cursor, I create a variable to store the fetched name to use within the BEGIN/END statements to create a login, user, and role.
I'm getting an 'incorrect syntax' error at the variable. For example ..
CREATE LOGIN @NAME WITH PASSWORD 'password'
I've done a bit of research online and found that you cannot use variables directly in statements like CREATE LOGIN. One person suggested a stored procedure or dynamic SQL, whereas another pointed out that you shouldn't use a stored procedure and that dynamic SQL is best.
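For reference, a minimal dynamic-SQL sketch (the name value is hypothetical; CREATE LOGIN is SQL 2005 syntax). QUOTENAME guards the identifier against injection; the password handling here is deliberately simplistic:

DECLARE @name sysname, @sql nvarchar(4000)
SET @name = N'SomeUser'   -- in the real code this comes from the cursor fetch
SET @sql = N'CREATE LOGIN ' + QUOTENAME(@name)
         + N' WITH PASSWORD = ''password'''
EXEC sp_executesql @sql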
Dear all, I'm using SQL Server 2005 Standard Edition. I have the following stored procedure that is executed against two tables, RecordedCalls and RecordedCallsTags. The table RecordedCalls has more than 10,000,000 records and RecordedCallsTags has about 7,500,000 records. The lines originally highlighted below (the dynamic WHERE clause) vary every time this stored procedure is executed: the condition may contain 7 columns, or 10 columns, or 2 columns, etc. Now I want to create non-clustered indexes on the columns used in the WHERE clause, but the DTA suggests different indexing whenever the WHERE clause changes. So what is the right way to create indexes: one index on all the columns at once, or separate indexes on each column? Sometimes the DTA suggests 5 columns together at once if I'm using 5 conditions. I can't accumulate all the possible indexes, since the WHERE clause always varies from situation to situation. Below is the SP:
CREATE TABLE #tempLookups (ID int identity(0,1),Code NVARCHAR(100),NameE NVARCHAR(500),NameA NVARCHAR(500))
CREATE TABLE #tempTable (ID int identity(0,1),TypesCount INT,CallsType NVARCHAR(50))
INSERT INTO #tempLookups SELECT Code, NameE, NameA FROM lookups WHERE [Type] = 'CALLTYPES' ORDER BY Ordering ASC
INSERT INTO #tempTable SELECT COUNT(DISTINCT(RecordedCalls.ID)) As TypesCount,RecordedCalls.CallType as CallsType
FROM RecordedCalls LEFT OUTER JOIN RecordedCallsTags ON RecordedCalls.ID = RecordedCallsTags.CallID
WHERE RecordedCalls.ID <= '9369907'
AND (RecordedCalls.CallDate BETWEEN cast ('01 Jan 1910 00:00:00:000' as datetime ) AND cast ( '01 Jan 2210 00:00:00:000' as datetime ))
AND (RecordedCalls.Duration BETWEEN 0 AND 1000000)
AND RecordedCalls.ChannelID NOT IN('62061','62062','62063','62064','64110','64111','64112','64113','64114','69860','69861','69862','69863','69866','69867','69868')
AND RecordedCalls.ServerID NOT IN('2')
AND RecordedCalls.AgentID NOT IN('1000010000')
AND (RecordedCallsTags.TagID is null OR RecordedCallsTags.TagID NOT IN('100','200'))
AND RecordedCalls.IsDeleted='false'
GROUP BY RecordedCalls.CallType
SELECT IsNull(#tempTable.TypesCount, 0) AS TypesCount, CASE('English')
WHEN 'Arabic' THEN #tempLookups.NameA
ELSE #tempLookups.NameE
END AS CallsType FROM
#tempTable RIGHT OUTER JOIN #tempLookups ON #tempTable.CallsType = #tempLookups.Code
DROP TABLE #tempLookups
DROP TABLE #tempTable
Thanks all, Tayseer
Any suggestions on how to create efficient indexes?
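Not an authoritative answer, but one common compromise when the WHERE clause varies: rather than one composite index per DTA suggestion, key a single nonclustered index on the column(s) that appear in nearly every execution and cover the rest with INCLUDE (SQL 2005 syntax; the column choice below is a guess based on the SP above and would need checking against the real workload):

CREATE NONCLUSTERED INDEX IX_RecordedCalls_CallDate
ON dbo.RecordedCalls (CallDate, CallType)
INCLUDE (Duration, ChannelID, ServerID, AgentID, IsDeleted)

Included columns let the optimizer answer many variants of the filter from one index without the write-time cost of maintaining a separate index per combination.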
I have two SQL statements that return SUM(pcs) and SUM(amt) from two different conditions. How do I get a total sum of pcs and amt from these two SQL statements when I UNION them?
SQL1 returns columns: value, SUM(pcs), SUM(amt), where amt is value*pcs:
$2, 4 pcs, $8
$5, 5 pcs, $25
SQL2 returns similar columns: value, SUM(pcs), SUM(amt) from another condition, giving:
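However the second result set looks, one way to get grand totals is to treat the UNIONed result as a derived table and aggregate it again. A sketch with a hypothetical dbo.Sales table standing in for the two conditions:

SELECT [value], SUM(pcs) AS pcs, SUM(amt) AS amt
FROM (
    SELECT [value], SUM(pcs) AS pcs, SUM(amt) AS amt
    FROM dbo.Sales WHERE region = 'A' GROUP BY [value]
    UNION ALL
    SELECT [value], SUM(pcs), SUM(amt)
    FROM dbo.Sales WHERE region = 'B' GROUP BY [value]
) AS combined
GROUP BY [value]

Note the UNION ALL: plain UNION would collapse rows where both branches happen to return identical values, which would throw off the re-summed totals.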
I have a table that contains articles (as in, newspaper articles, blog articles, whatever). I need to use a column of type uniqueidentifier because one of the requirements is that I be able to write the articles out to XML or import them from XML, and references (as in, "for more info read this: 2323-232-90934") have to still work after the export and import. So as a minimum, the table is going to look like this:
CREATE TABLE Articles (
ArticleID uniqueidentifier,
PublishDate datetime,
Title nvarchar (50),
ArticleContent ntext
)
GO
ALTER TABLE Articles ADD
CONSTRAINT PK_Articles
PRIMARY KEY NONCLUSTERED (ArticleID)
WITH FILLFACTOR = 100
GO
As you can see, I'm not going to use a clustered index on a column of type uniqueidentifier. I got that much from this newsgroup and from websites on SQL Server performance tuning. Two questions. 1: I will obviously need to list recent articles. I'll need to do: select top 10 ArticleID, PublishDate, Title from Articles order by PublishDate desc
Will there be any problem with an index on a datetime field to make that query faster?
CREATE UNIQUE CLUSTERED INDEX IX_Articles_PublishDate
ON Articles (PublishDate DESC)
WITH FILLFACTOR = 100
GO
Question 2: Is there anything else that I can do here that I'm missing? Should I maybe also have an auto-increment field and put the clustered index on it instead?
Thanks in advance
chris
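On question 2, a sketch of the auto-increment variant (the names are illustrative): the GUID stays for XML round-tripping, but the clustered index sits on a narrow, ever-increasing key. One caution on the original plan: a UNIQUE clustered index on PublishDate would reject two articles published at the same instant.

CREATE TABLE Articles (
ArticleRow     int IDENTITY(1,1) NOT NULL,
ArticleID      uniqueidentifier NOT NULL DEFAULT NEWID(),
PublishDate    datetime NOT NULL,
Title          nvarchar(50),
ArticleContent ntext,
CONSTRAINT PK_Articles_Row PRIMARY KEY CLUSTERED (ArticleRow)
)
GO
CREATE UNIQUE NONCLUSTERED INDEX IX_Articles_ArticleID ON Articles (ArticleID)
GO
CREATE NONCLUSTERED INDEX IX_Articles_PublishDate ON Articles (PublishDate DESC)
GO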
I have an Access database and a SQL database, and using Data Transformation Services I want to update the Access database using the SQL data.
Can anyone tell me the syntax for referencing the access database?
Is it something like: [TABLENAME].dbo.FIELDNAME ?
Just to clarify, I have:
Microsoft Access Database Table 1 (UnitHistory)
SQL Database Table 1 (UnitHistory)
How do I reference these separately? I want to update the Microsoft Access database based on the SQL database data.
Eventually I'm trying to update an Access database using the data held on my SQL Server. Is DTS the best way for me to accomplish this, or should I use another method?
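DTS can do it, but for a straight table-to-table update a distributed query may be simpler. A hedged sketch using OPENROWSET with the Jet provider; the file path, key column, and updated field are all assumptions, whether updates pass through depends on the provider, and on SQL 2005 the 'Ad Hoc Distributed Queries' option must be enabled first:

UPDATE a
SET    a.SomeField = s.SomeField
FROM   OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                  'C:\Data\History.mdb'; 'Admin'; '',
                  UnitHistory) AS a
JOIN   dbo.UnitHistory AS s ON s.UnitID = a.UnitID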
Okay, this is a test... actually I am still learning SQL and need some help. Does anyone have any information on being able to move indexes from one database to another? My scenario is that I have 3 databases: Development, QA, and Production. I want to move/copy indexes I created in Development to the QA database. I have many indexes, so I do not want to have to recreate them if I can avoid it. Any suggestions?
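Since indexes are just DDL, one route is to script them out of Development and run the script in QA: Management Studio and Enterprise Manager can both generate the scripts, or you can reconstruct them from the catalog. A sketch against the SQL 2005 catalog views listing each index with its key columns:

SELECT o.name AS table_name,
       i.name AS index_name,
       i.type_desc,
       i.is_unique,
       (SELECT QUOTENAME(c.name)
               + CASE WHEN ic.is_descending_key = 1 THEN ' DESC' ELSE '' END + ','
        FROM sys.index_columns AS ic
        JOIN sys.columns AS c
          ON c.object_id = ic.object_id AND c.column_id = ic.column_id
        WHERE ic.object_id = i.object_id
          AND ic.index_id = i.index_id
          AND ic.is_included_column = 0
        ORDER BY ic.key_ordinal
        FOR XML PATH('')) AS key_columns   -- trailing comma left in; this is a sketch
FROM sys.indexes AS i
JOIN sys.objects AS o ON o.object_id = i.object_id
WHERE o.is_ms_shipped = 0
  AND i.type IN (1, 2)   -- 1 = clustered, 2 = nonclustered
ORDER BY o.name, i.name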
I'm trying to transfer a table from a SQL Server 7 database to another SQL Server 7 database on another server. This table has a text field with lots of data (~0.5-1 GB). I'm using the export wizard and the transfer appears to complete successfully, but when I view it, the text field data has been truncated.
I have an idea to use the LIKE operator (with the wildcards) to match large (>10 KB) text fields of 'text' and 'ntext' types. Are there known problems here?
I have a table with raw scientific test results in a single field, some of which are over 25 MB. I need to parse the field to find and aggregate selected values from it.
Table structure is:
CREATE TABLE [dbo].[Gxxx_Data](
[id] [uniqueidentifier] NOT NULL,
[Status] [nvarchar](50) NULL,
[GxxxItem_ID] [int] NULL,
[Stats_Data] [varbinary](max) NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
[code]...
From this I need to parse and summarize the (Assembler) opcodes (MOV, CMPi, SHR, etc.). I need to parse the large [Stats_Data] field to locate the target data. The internal result strings are delimited with CHAR(10); conservative counts are from 64k to over 100k lines in each record. Is there a way to parse the individual lines into another (temp) table that could be queried/regexed?
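A sketch of one way to do it, assuming the varbinary really holds ASCII text: cast it to varchar(max) and walk it with a simple (if not the fastest) WHILE loop, writing each CHAR(10)-delimited line to a temp table that can then be queried with LIKE or PATINDEX. The GUID is a placeholder; this processes one record at a time:

DECLARE @txt varchar(max), @pos bigint, @next bigint

SELECT @txt = CAST(Stats_Data AS varchar(max)) + CHAR(10)
FROM dbo.Gxxx_Data
WHERE id = '00000000-0000-0000-0000-000000000000'  -- placeholder id

CREATE TABLE #StatsLines (LineNo int IDENTITY(1,1), Line varchar(4000))

SET @pos = 1
WHILE @pos <= LEN(@txt)
BEGIN
    SET @next = CHARINDEX(CHAR(10), @txt, @pos)
    IF @next = 0 BREAK
    INSERT INTO #StatsLines (Line) VALUES (SUBSTRING(@txt, @pos, @next - @pos))
    SET @pos = @next + 1
END

-- the temp table can then be queried normally, e.g. counting opcodes by first token
SELECT LEFT(Line, CHARINDEX(' ', Line + ' ') - 1) AS opcode, COUNT(*) AS occurrences
FROM #StatsLines
GROUP BY LEFT(Line, CHARINDEX(' ', Line + ' ') - 1)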
I want to log errors to a table. If the error is with a URL, I want to store the URL. These URLs can be very large, hundreds of characters, but I only need to store it if it causes the error, which should be very infrequent. Which is the better design:
Design 1: Create a large varchar field in the log table to hold the URL, or null if the error wasn't with the URL.
Design 2: Create a foreign key field in the log table to a second URL table, which has a unique ID and a large varchar, and only create a record in this table if the error is with the URL.
One concern I have with design 2 is that there could be many other fields that are infrequent. Do I create a separate table for every one?
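A sketch of design 2, with hypothetical names; the nullable foreign key means the side table is only touched on the rare URL errors:

CREATE TABLE dbo.ErrorUrls (
UrlID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
Url   varchar(2048) NOT NULL
)

CREATE TABLE dbo.ErrorLog (
ErrorID  int IDENTITY(1,1) NOT NULL PRIMARY KEY,
LoggedAt datetime NOT NULL DEFAULT GETDATE(),
Message  nvarchar(4000) NOT NULL,
UrlID    int NULL REFERENCES dbo.ErrorUrls (UrlID)   -- NULL unless the error involved a URL
)

For what it's worth on the side question: a NULL in a varchar column costs almost nothing in row storage, which is an argument for simply making such infrequent fields nullable (design 1) rather than multiplying side tables.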
Hi, my MS SQL 2000 application in one company has a backup file of more than 6 GB. The total number of tables is more than 280, while the number of fields is more than 8,000. In recent months I noticed that the backup database file is increasing rapidly. Is there a particular reason, or does anyone have an idea why? My largest table consists of more than 160 fields and has more than 1.2 million records and 32 indexes. I know this is not very good, and I'm thinking of methods to fix it, since I have noticed performance problems. Do you have any advice? I read that with MS SQL 2005 I can partition my table horizontally and vertically. However, I think that this option is available only with the Enterprise Edition, which costs more than 10,000 Euros. Is this correct? Is there another way to increase performance? My server is Windows 2003 with 4 GB memory. I think that Windows 2003 doesn't support more than 4 GB memory. Is this true? Is there any advice for my case from the experienced users of this forum? Regards, Manolis
I have been working as a SQL Server 2000/2005 DB administrator (small-to-medium DBs). In the near future I am going to work as a DBA for large DBs on SQL Server 2005 (200-300 GB and over), but I have no experience with that. Do you have any suggestions, recommendations, or useful docs, books, web links, or anything else on how to administer large databases?
Hi, we are discussing a possible implementation for SQL 2005 database(s). This database will serve one web portal. Part of the data will get into it by hand, and part will be replicated from an internal system. Some of us are for creating two separate databases, since there are two separate data sources. One, automatic, will change very little over time and requires almost no maintenance. The other data source will be manual input; tables and procedures related to this part will change over time. Some of us are for creating a single database, since it will serve one web site. More importantly, this group is concerned about performance issues, since almost every select will require a join between tables that would be stored in two separate databases. Do these issues exist? Can you share some insights, comments, or links about this?
For anyone with a larger number of databases (500+): how many do you have in a single instance? If you are using multiple instances on a single server, how many DBs per instance? This is why I'm asking:
We are experiencing 701 "out of system memory" errors and temporary (usually) system freezes when the error occurs. We have the 32-bit 2005 version 9.00.2153.00, 32 GB of memory, AWE enabled, and a quad dual-core 3 GHz hyperthreaded server. Neither the buffer pool nor VAS shows any pressure when the "out of system memory" error occurs. Since this error usually indicates a VAS problem, we tried increasing VAS to 1 GB with the -g flag. It made no difference. PSS has been working on the case for 3 weeks. They don't seem to be finding any evidence of memory pressure either. When I last spoke to the escalation engineer yesterday, it seemed that they are going to recommend reducing the number of databases on the server. I asked for clarification as to whether we are hitting a 32-bit barrier, an instance limitation, or both. I am awaiting the answer. How many databases do you have on your server? We had between 1700 and 1900 (the number varies) at times when the error occurred. We are now at 1500, and have not had the error in the 2 days since reducing the number of databases...
We are going to be getting a new system to house our large databases. Some of these databases will need to be encrypted. Ideally we'd like to set up encryption in SQL Server rather than have to purchase an appliance to encrypt the data. What is the trade-off of using SQL Server 2005 to encrypt data? Does performance suffer a great deal with a large database (500 GB-4 TB)? Also, how much more or less secure is the encryption in SQL Server? Some of our users need to be able to read the data as unencrypted while working, but we need to keep the system as secure as possible. Answers to any of these questions would be incredibly helpful.
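For what it's worth, SQL Server 2005 only offers cell-level (column) encryption, not transparent whole-database encryption (TDE arrived in 2008), so tables and queries must be changed to call the encrypt/decrypt functions, and indexes on encrypted columns stop being useful for searches; that, plus the per-row function calls, is where the performance cost shows up at this scale. A minimal sketch of the built-in mechanism, with hypothetical table and column names:

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng!Passphrase'
CREATE CERTIFICATE DataCert WITH SUBJECT = 'Data encryption certificate'
CREATE SYMMETRIC KEY DataKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE DataCert

OPEN SYMMETRIC KEY DataKey DECRYPTION BY CERTIFICATE DataCert

-- SsnEncrypted would be a varbinary column alongside (or replacing) the clear one
UPDATE dbo.Customers
SET    SsnEncrypted = ENCRYPTBYKEY(KEY_GUID('DataKey'), Ssn)

SELECT CONVERT(varchar(11), DECRYPTBYKEY(SsnEncrypted)) AS Ssn
FROM   dbo.Customers

CLOSE SYMMETRIC KEY DataKey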
I have been looking into mirroring a large number of small databases, approximately 150.
As I understand it, this won't be feasible because of the way mirroring threading works: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=441900&SiteID=1
As I understand it, for every database being mirrored, SQL will ping the mirror every second, causing a network bottleneck?
Also, the number of threads generated for each mirrored database will itself cause a bottleneck?
At the moment our database servers are under very little pressure and, as an estimate, use about 10% of the resources allocated to them, such as CPU utilization, memory, disk I/O, and network. Our server hardware is dual quad-core Xeons with 4-8 GB of memory and a variety of 10k SCSI RAID configurations (RAID 5 or RAID 10), running SQL 2005 32-bit.
I've done some calculations on the log file generation rate compared to network bandwidth; there is more than enough network bandwidth.
Has anybody had any luck in mirroring many small databases?
My concern is: how much traffic is caused by the pinging of the mirror for each database?
How many threads will the mirroring cause, and what is the maximum number of threads SQL can handle?
How much memory will be consumed by each one of these mirroring threads?
I have designed a CV database with the complete CV stored in a TEXT field. There is a keyword search which queries the TEXT field as well. The query conditions are defined in T-SQL submitted through an ASP page. There are about 20,000 records now. While querying the database for keyword searches I am receiving timeout errors. Is there any solution other than Index Server to rectify this situation? How can I speed up the query execution time? Please advise.
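If Index Server is off the table, SQL Server's own full-text search is the usual alternative to LIKE '%keyword%' over a TEXT column, since a LIKE with a leading wildcard can never use an index. A sketch in SQL 2005 syntax (SQL 2000 uses the sp_fulltext_* procedures instead); the table, key index, and column names are assumptions:

CREATE FULLTEXT CATALOG CvCatalog

-- requires a unique, single-column, non-nullable key index on the table
CREATE FULLTEXT INDEX ON dbo.CVs (CvText)
    KEY INDEX PK_CVs ON CvCatalog

SELECT CvID
FROM   dbo.CVs
WHERE  CONTAINS(CvText, '"project manager"')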
My company works with SQL Server 2005 express locally with Visual Studio to develop websites. Everything works very well.
We use SQL Server 2005 express on our production server as well. We change the database over to a non-User Instance when the site is ready to go live. Everything works fine here as well.
We run into issues when databases get near 100 MB. This is well below the stated database size limit for Express of 4GB.
At that point, about once a day, a site with a "large" database will stop responding. The error that we'll get is "Cannot open database 'DBNAME' requested by the login. The login failed. Login failed for user 'DBUSER'."
The only way we've found to fix this is to restart the SQL Express service. Obviously, that isn't a very useful alternative.
Has anyone run into anything like this? Could we have some setting wrong?
Would moving to the full version of SQL Server 2005 fix this?
I am trying to enable database mirroring for 100 databases. It goes error-free until 59 databases (sometimes 60), with the status (principal, synchronized) on the principal. On the 60th or 61st database it gives the status (principal, disconnected). Also, the mirror starts acting abnormally: connections to the mirror start to give connection timeouts, and it is not enabling database mirroring on any more databases. I have SQL Server 2005 Enterprise with SP1 on the servers. A witness is not included yet.
These are my test servers... I have more than 500 databases on my production servers.
The principal and mirror are both using port 5022 for ENDPOINT communication.
All of the databases are critical and all must be included in the database mirroring. So after that I tried to implement database mirroring again... The system has 3 GB of RAM, and SQL Server on the mirror is using 85 MB of RAM, but it is still giving this error while trying to enable database mirroring for the 37th database:
"There is insufficient system Memory to run this query"