Will this index be considered by the query optimiser when locking records? If I created another index with only the QuestionId field, would it boost performance? And how does the optimiser choose the right index during an update?
Is it necessary to schedule an update statistics job on indexes in SQL Server 2005 on a daily basis? Is it necessary to schedule an index rebuild on a daily basis?
I am using Ola Hallengren's scripts for index and stats maintenance, but I am wondering what most people do in terms of maintenance schedules. At present we do an index rebuild/reorg weekly, but do people also update stats nightly?
I suppose there is an element of "it depends" here in that the data may be fairly static so the update stats may not be required, or if heavily updated then perhaps rebuilding indexes may be required more frequently.
We have a 20 GB database, and the reorganize indexes and update statistics maintenance takes about 4 hours, and the log file grows out of control, which is a serious problem since it cannot be truncated (database mirroring).
How can I enable full-text change-tracking and update-index on my table? I recreated my full-text catalog and started the full population, but although my full-text index status shows active, full-text change-tracking and update index were disabled, and I don't know how to enable them. Thanks in advance.
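A minimal sketch of what usually turns those options back on, assuming a table named dbo.YourTable (a placeholder) that already carries the full-text index:

-- dbo.YourTable is a placeholder; run against the table that has the full-text index
ALTER FULLTEXT INDEX ON dbo.YourTable SET CHANGE_TRACKING AUTO;
ALTER FULLTEXT INDEX ON dbo.YourTable ENABLE;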
I have a rebuild index job that runs every week. Nowadays my queries are running very slowly, even after running the index rebuilds. Please guide me on how to check where the problem is.
I suspect statistics; how do I check when the statistics were last updated?
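A small sketch for checking when statistics were last updated (SQL Server 2005 and later; dbo.YourTable is a placeholder name):

-- lists every statistics object on the table and its last update time
SELECT s.name AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.YourTable');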
tblJobDates serves two purposes: to give us the most recently entered due date for a job, and to serve as a "repository" to track changes to the due date.
Report C: The report I want to generate does NOT provide historical information... it only serves to show the CURRENT due date for each job in the tblJobs table:

COLUMNS: LocationName, Due Date (alias of DateData)

OUTPUT (csv):
Jonesport ME, 6/8/2002
Garden City NY, 6/13/2002
Note that for Jonesport, an initial due date of 6/17/2002 was entered (based on the CRD). Then someone changed it so that the job was due EARLIER.
Note that for Garden City, an initial due date of 6/12/2002 was entered (based again on the CRD). Then someone changed it so that the job was due LATER.
The "most recently entered due date" is what should be reflected in my report -- just as it does above ("C")
Other Notes:
-- There are other columns of information from both tables that I would like to return, but above is the most basic form of my request. Most notably, we would need to return the JobPK in report (C).
-- A job should only appear ONCE in report (C), with its "current" due date, regardless of the other due dates that may have been entered for that job.
-- If a job has no due date, it should not appear on the report.
-- Although not shown here, each row in (B) DOES have a unique identifier (DatePK) as well... if that helps in your solution.
-- Note that the job that is "due first" appears at the top of report (C). This allows a person looking at the report to quickly determine which job "gets priority" -- the one on top!
Okay gurus -- how should the query look that would generate the desired output in Report C?
THANKS IN ADVANCE if you even can point me in the right direction!!
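A sketch of one way to produce Report C. It assumes tblJobDates has a JobFK column pointing at tblJobs, that LocationName lives on tblJobs, and that the highest DatePK per job is the most recently entered date; all of those are assumptions to adjust to the real schema:

SELECT  j.JobPK,
        j.LocationName,
        d.DateData AS [Due Date]
FROM    tblJobs AS j
JOIN    tblJobDates AS d
          ON  d.JobFK  = j.JobPK
          AND d.DatePK = (SELECT MAX(d2.DatePK)      -- newest entry for this job wins
                          FROM tblJobDates AS d2
                          WHERE d2.JobFK = j.JobPK)
ORDER BY d.DateData ASC;                             -- job due first appears at the top

The inner join also drops jobs that have no due date row, matching the note above.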
I need to decide between Standard and Enterprise Edition (cost is a criterion, but it's secondary to performance). The server spec under consideration: Dual Xeon, 1 GB RAM, 36 GB RAID 1 (Dell PowerEdge 1850). Application: Windows 2003 Standard Server, ASP.NET, MS SQL Server 2000 based data-driven web application. Approximately 25 simultaneous clients. Peak activity would probably be 50 transactions/activities per second (2 per second per client). I expect the database size to grow up to 4 GB in 1 year. The application would use only basic OLAP features (if at all)... so feature-set wise I believe that Standard Edition is good enough.

What I am concerned about is that the MS documentation says Standard Edition is for "organizations that do not require the advanced scalability, availability, performance, or analysis features of the SQL Server 2000 Enterprise Edition".

Is there a difference in performance between Standard and Enterprise editions, in terms of the number of transactions per second that can be serviced? What other criteria should I be aware of before deciding to go one way or the other? Any ideas?
There must be a way to do this simply. We're running SQL Server 2000. I'm looking for some generic SQL statement that I can apply.
If I have a table with a person column and a location column and multiple records for the same person/location combination, how do I select each person with the location they most frequently visited? Say George visits Mexico 5 times, the Bahamas twice, and Costa Rica once. I would have 8 records in my table for George. The data looks something like this:
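A sketch that works on SQL Server 2000 (no ROW_NUMBER), assuming a table named Visits with person and location columns; both names are placeholders, and ties return more than one row per person:

SELECT v.person, v.location, COUNT(*) AS visit_count
FROM Visits AS v
GROUP BY v.person, v.location
HAVING COUNT(*) >= ALL (SELECT COUNT(*)             -- at least as many visits as any
                        FROM Visits AS v2           -- other location for this person
                        WHERE v2.person = v.person
                        GROUP BY v2.location);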
This calls the SP that does the reindex. It fails at the update statistics step with a very generic message, like "Command: UPDATE STATISTICS [xxxx_DB].[dbo].[xxxx_xxx] [_WA_Sys_00000007_49C3F6B7] [SQ... The step failed."
I suspect there is more to the error, but this is all it shows me when I right-click on the job history. Therefore, I updated the job step on the Advanced tab to log to a text file. Am I on the right track, or is there another way to see the error somewhere else?
I looked at the logs but they didn't show anything.
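Logging the step output to a text file is a reasonable approach. As a hedged alternative, the message kept in msdb can also be queried directly (the job name below is a placeholder):

SELECT j.name, h.step_name, h.run_date, h.run_time, h.message
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
WHERE j.name = 'YourReindexJob'        -- placeholder job name
ORDER BY h.instance_id DESC;           -- most recent executions first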
Please help me out: I have some records in a SqlDataSource and want to show them column-wise. Right now I do it with a DataList because it's easy, but other options are open. Every item/record should have a radio button (in a group, so that you can only choose one of them all). People advised me to do this with an HTML radio button inside the template. After the user has selected an item and clicks the next button, I need to know which item the user has chosen. Furthermore, when the user steps back, the same radio button should already be selected. Please help, this has been bothering me for a while. Best regards from The Netherlands, Gert
My company has a website that connects to a SQL Server (on a different box). I am trying to convince them to get SQL Server 2005. However, I do not know if SQL Server 2005 Workgroup Edition is okay for our needs. Can someone please tell me if it is? Basically, our setup is the following:
The SQL Server will only have one/two clients - the web server
I have to store some data on a remote server (MS SQL Server 2000). The scenario is:
1. The web application runs on a local machine. Users (who enter the input) use it over the LAN.
2. The input should be stored on the remote server if the remote connection is OK; otherwise it should be saved in the local server's database (MS SQL 2000).
3. In the application's web.config there is a connection string pointing to the remote server and another (alternate) one pointing to the local server's database.

In a scenario like this I first test the remote connection; if it is not OK, then I initialize the local server's connection like this:

private MyConnection()
{
    try
    {
        // try the remote server first
        connectionSql = new SqlConnection(ConfigurationManager.ConnectionStrings["ConnForRemote"].ToString());
        connectionSql.Open();
    }
    catch (Exception ex)
    {
        // fall back to the local server if the remote one is unreachable
        connectionSql = new SqlConnection(ConfigurationManager.ConnectionStrings["ConnForLocal"].ToString());
    }
    finally
    {
        connectionSql.Close();
    }
    connectionSql2 = new SqlConnection(ConfigurationManager.ConnectionStrings["Temp"].ToString());
}

My problem is that when the remote connection is lost, it takes almost 1 minute to store in the local database. How can I make it more time efficient? Thanks.
Hello, I have a table with some data in it. What I want to do is create a query that randomly returns one of the records of the table. Can this be done?
If this is not possible from SQL server I have thought an alternative way. This is:
I want to return all rows of the table with SELECT *, but I want the select to return, in the first column, an auto-incrementing number for each row, without the need to add an autoincrement field to the table. E.g.:
Table
------
Banana
Tomatoe
Aple
...
...
Orange

Result from select
------------------
1 Banana
2 Tomatoe
3 Aple
. ....
. ....
23 Orange
Can this be done? At least this way 1) I can travel to the end of the results (from ASP), 2) read the ID of the last row, 3) create a random integer from 1 to the last ID, and 4) finally select the appropriate random row using that integer.
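If that workaround proves awkward, one commonly used alternative on SQL Server 2000 is to sort by NEWID(). A sketch, with TheTable as a placeholder name:

SELECT TOP 1 *
FROM TheTable
ORDER BY NEWID();   -- each execution returns a different random row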
I am purchasing a new/first server and could use some help with the details.
I am purchasing the server with the intent of managing a large database that will be quite extensive and requires a good amount of processing power. I have decided to go with Windows Server 2003 and SQL Server 2000 as the database. Within the next year I hope to have this database directly feeding a website that I could possibly be hosting, as well as 2-3 offsite employees logging into the system remotely.
I would say my biggest question is whether to choose the RAID 1 configuration or RAID 5. I want the hard drives to mirror each other. I was thinking of going with three hard drives, but I'm not really sure if I would even need that setup. With that, I will just show my current system:
Dell PowerEdge 1800
3.0 GHz Xeon
2 GB memory
SATA RAID 1
CERC 6-channel SATA RAID controller
160 GB HD x 2
Onboard NIC network adapter
I'm going price savvy on this one, so no UPS, redundant power supplies, or tape backup. Although I am open to any suggestions.
Definitely appreciate any help with this, as I have been hard pressed to find some quality reseller help. They just want to throw the biggest and baddest thing at me.
I would like to know the experts' views on the items I have listed below.
1. Is there any significant performance gain from choosing the native SQL Server driver rather than OLE DB, for example? I know there are a lot of specific features in the native SQL Server driver, but I am thinking in terms of performance.
2. Why not develop for a generic database rather than a specific database?
3. Does more generic mean less work when migrating to a different database?
Appreciate your valuable thoughts and any recommendations.
Please explain the differences between the logical and physical operations whose graphical icons we can see in the execution plan tab in Management Studio.
We face a slow performance issue: the same query takes a long time to execute after we run index rebuild and reorganize index. But after executing the query or procedure 2-3 times, performance becomes faster. I have the following questions:
1. Do we need to update stats after we rebuild and reorganize indexes?
2. Will every query and stored procedure execution be slow for the first 1-2 runs after we rebuild and reorganize indexes?
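As a hedged illustration of question 1: a reorganize does not refresh statistics on its own, so an explicit update is often scheduled afterwards. The table name below is a placeholder and FULLSCAN is optional:

ALTER INDEX ALL ON dbo.YourTable REORGANIZE;     -- reorganize leaves statistics untouched
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;   -- refresh statistics explicitly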
CREATE TABLE [dbo].[SomeTable](
    [key] [int] IDENTITY(1,1) NOT NULL,
    [keywords] [nvarchar](255) NOT NULL,
    CONSTRAINT [PK_SomeTable] PRIMARY KEY CLUSTERED ([key] ASC)
)
CREATE FULLTEXT CATALOG ft AS DEFAULT;
CREATE FULLTEXT INDEX ON [dbo].[SomeTable] KEY INDEX [PK_SomeTable] ON [ft] WITH CHANGE_TRACKING AUTO;
ALTER FULLTEXT INDEX ON [dbo].[SomeTable] ADD ([keywords]);
ALTER FULLTEXT INDEX ON [dbo].[SomeTable] ENABLE;
I need to choose a database based on the following criteria (using a .NET app): 1) a light but fully functional database, preferably with support for stored procedures and constraints, and less than 8,000 transactions a day; 2) portable, or a database that can be exported/imported very easily; 3) reliable and stable; 4) least maintenance.
I have two DBs in mind, Access and MSDE. Does anyone have some hands-on experience with the above two? Or any other better suggestions?
Hello, I am really dripping wet behind the ears on this and would really appreciate some help. I am setting up my first SQL table and am lost trying to choose data types for my fields. Basically, all I am doing is setting up a contact form. It is going to ask for phone number, name, address, city, state, zip, etc. I will also have two fields which, if I were using an Access db, would be "memo" with, say, 500 characters. So in researching SQL data types, I came across the following:
char Fixed-length non-Unicode character data with a maximum length of 8,000 characters.
varchar Variable-length non-Unicode data with a maximum length of 8,000 characters.
text Variable-length non-Unicode data with a maximum length of 2^31 - 1 (2,147,483,647) characters.
nchar Fixed-length Unicode data with a maximum length of 4,000 characters.
Can someone shed some light on what I need for simple fields like street, name, city, and more importantly, description? I will also have a "premium" field which should be a "yes" or "no"; I am thinking a data type of bit, which is set to 1 or 0? Thanks for any help, I appreciate it so much. Tom
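A minimal sketch of one possible set of choices; the column names and lengths are assumptions rather than requirements, and varchar is fine unless Unicode (nvarchar) is needed:

CREATE TABLE dbo.Contact (
    ContactID   int IDENTITY(1,1) PRIMARY KEY,
    Name        varchar(100)  NOT NULL,
    Street      varchar(100)  NULL,
    City        varchar(50)   NULL,
    State       char(2)       NULL,
    Zip         varchar(10)   NULL,
    Phone       varchar(20)   NULL,
    Description varchar(500)  NULL,         -- the "memo"-style field, capped at 500 characters
    IsPremium   bit NOT NULL DEFAULT 0      -- 1 = yes, 0 = no
);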
Hello group: I've done a lot of reading on this subject and have found that many people have many different opinions on it. My question centers mainly around using a lookup table to enable users to select from a pre-defined list of values.

I have developed a practice of avoiding AutoNumber type data fields for primary keys where the primary key will be related to a child table. Nevertheless, what do most users do with lookup tables? My thought is to create a small key value for each value in the lookup table. For example, I might have a Carriers table which shows a list of carriers that I might ship an order by. One of the entries may be 'Air Freight - Overnight', or 'Air Freight - 2nd Day Air'. I've seen a few examples where the primary key field for each entry like these would be autonumber, or at least a numeric value. What I like to do is create my own key: for 'Air Freight - Overnight' I might use 'AFO' for the key, and for 'Air Freight - 2nd Day Air' I might use 'AF2'. Any thoughts on this? Mine are that even though the users may never see this value, I as the developer will see it, and I tend to prefer a key value based on real data that means something, rather than an auto-incremented number. In referencing the well-known Northwind.mdb database, I noticed their Categories table used a number field value, like 1, 2, 3... etc., but their Customers table used values like 'ALFKI' to represent their key values.

What are some other thoughts out there? I'm working with Access currently, but this project is about to move to SQL Server.

James
I've got a couple of questions linked to partitioning tables.
-What criteria does the Database Engine follow when you have two NDFs assigned to one filegroup and this filegroup is part of a partition? What's more: could I force SQL Server to use one of them by default?
I mean, my first partition spans from 20020101 to 20030101. When I add data for, say, March or June, could I decide that these months belong to NDF1 rather than NDF2?
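Within a single filegroup that contains two data files, SQL Server spreads extents across the files using proportional fill, so individual rows cannot be directed to one NDF. Placement is only controllable at the partition-to-filegroup level, as in this sketch (the function, scheme, and filegroup names are placeholders):

-- each boundary range maps to its own filegroup via the partition scheme
CREATE PARTITION FUNCTION pfByYear (datetime)
    AS RANGE RIGHT FOR VALUES ('20020101', '20030101');

CREATE PARTITION SCHEME psByYear
    AS PARTITION pfByYear
    TO (FG_Old, FG_2002, FG_2003);   -- rows land in a filegroup according to their date range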
Let's assume database A is production and B is the copy. SQL Server 2005 SP2, SQL CE 3.5.
Database A has a variety of transactions against it 24x7. Database B (the copy) is for reporting and serves as a source of merge replication for SQL CE instances. Merge replication and reporting are used 24x7 as well.
I have the following requirements: maintain an up-to-date copy of the production database (it need not be up to the minute; hourly or even daily updates would do). Database B is read-only. The merge replication is NOT bi-directional.
Here is the caveat (which I think rules out some solutions to this problem): the production application accomplishes much of its functionality with in-memory copies of records. I have no control over the production application. When it works against the database, it does a sort of 'withdrawal-deposit' scenario (to the best of my knowledge it is not using SQL Server transactions). So, for every record it works with, a copy is made out of the database, changes are made in memory, the database record is deleted, then the record is re-inserted.
With this kind of behavior in db A, I'm not sure what it would do to log-shipping or transactional replication. I do know that I want to minimize the changes required at the SQL CE instances to keep the sync operation to a minimal cost.
At one of our client sites we have configured Always On in synchronous mode. We have also scheduled a rebuild index and update statistics job which runs at night every alternate day. The issue is that there are more than 100 sleeping queries blocking the update statistics job.
I have to stop the update statistics job manually once I come to the office.
Once, I killed the blocking sleeping query, but then another sleeping query blocked the job, and so on.
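A small sketch for seeing which sessions are doing the blocking before deciding what to kill (SQL Server 2005+ dynamic management views):

-- list only the requests that are currently blocked, and who is blocking them
SELECT r.session_id, r.blocking_session_id, r.wait_type, r.command
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;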
Hi All,

I have a database that is serving a web site with reasonably high traffic. We're getting errors at certain points where processes are being locked. In particular, one of our people has suggested that an update statement contained within a stored procedure, using a where condition that only touches a column that has a clustered primary index on it, will still cause a table lock.

So, for example:

UPDATE ORDERS SET
    prod = @product,
    val = @val
WHERE ordid = @ordid

In this case ordid has a clustered primary index on it.

Can anyone tell me if this would be the case, and if there's a way of ensuring that we are only doing a row lock on the record specified in the where condition?

Many, many thanks in advance!

Much warmth,
Murray
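As a hedged sketch, a locking hint can request row-level locks for that statement, though the engine may still escalate when many rows qualify or under memory pressure:

UPDATE ORDERS WITH (ROWLOCK)   -- ask for row locks instead of page/table locks
SET prod = @product,
    val  = @val
WHERE ordid = @ordid;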
I want to create an index on a hash (temp) table (#TEMPJOIN2) to reduce the update query run time, but I am getting the warning:
"The maximum key length is 900 bytes. The index 'R5IDX_TMP' has maximum length of 1013 bytes. For some combination of large values, the insert/update operation will fail." What is the right way to create an index on a temporary table?
The update query runs (without the index) for 6 hours 30 minutes. My aim is to reduce the run time by creating an index.
I am also not sure whether creating an index on more columns will cause issues or not.
Attached are the update query and the index query.
CREATE NONCLUSTERED INDEX [R5IDX_TMP] ON #TEMPJOIN2
(
    [PART] ASC,
    [ORG] ASC,
    [SPLRNAME] ASC,
    [REPITEM] ASC,
    [RFQ] ASC,
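One hedged way around the 900-byte warning on SQL Server 2005 and later is to keep only a couple of selective columns in the key and move the wide ones to INCLUDE; which columns belong in the key versus INCLUDE is an assumption here and depends on the update query's predicates:

CREATE NONCLUSTERED INDEX R5IDX_TMP ON #TEMPJOIN2 ([PART] ASC, [ORG] ASC)
    INCLUDE ([SPLRNAME], [REPITEM], [RFQ]);   -- included columns do not count toward the 900-byte key limit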
I have a database table on which I need to make the index on "ParentREF, UniqueName" unique, but this fails because duplicate keys are found. So I now need to clean up these duplicate rows, but I cannot just delete the duplicates, because they might have rows in detail tables. This means that all duplicate rows need an update of the "UniqueName" value, but not the first (valid) one!
I can find those rows by
SELECT OID, UniqueName, ParentREF, CreatedUTC, ModifiedUTC
FROM dbo.CmsContent AS table0
WHERE EXISTS (
    SELECT OID, UniqueName, ParentREF
    FROM dbo.CmsContent AS table1
    WHERE table0.ParentREF = table1.ParentREF
      AND table0.UniqueName = table1.UniqueName
      AND table0.OID != table1.OID
)
ORDER BY ParentREF, UniqueName, ModifiedUTC DESC
...but I struggle to write the required SQL (SP?) to update the "invalid" rows. Note: the "valid" row is the one with the newest ModifiedUTC value; this row must be kept unchanged!
At the moment the preferred (because easiest) way is to rename the invalid rows with UniqueName = OID, because if I use any other name I risk creating another duplicate entry.
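A sketch of that rename, assuming SQL Server 2005+ and that OID converts cleanly to the UniqueName type (the nvarchar(255) cast is an assumption); the newest row per (ParentREF, UniqueName) keeps its name and every older duplicate is renamed to its own OID:

;WITH ranked AS (
    SELECT OID, UniqueName, ModifiedUTC,
           ROW_NUMBER() OVER (PARTITION BY ParentREF, UniqueName
                              ORDER BY ModifiedUTC DESC) AS rn
    FROM dbo.CmsContent
)
UPDATE ranked
SET UniqueName = CAST(OID AS nvarchar(255))   -- target type and length are assumptions
WHERE rn > 1;                                 -- rn = 1 is the newest, valid row; leave it alone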
I am choosing a primary key for the database which I am designing.
I have a few tables which contain 5-15 fields, of which 3-9 columns combined form the uniqueness of the row.
All are unrelated tables. Three parent tables connect with 20 non-related child tables.
I believe it would not be a wise choice to use 3 to 9 fields for the primary key. But if I use an auto-increment as the key, will it be of any use, given that it might never be used to fetch rows? Then why should I still go with that?
Or is it OK to create a primary key of up to 5 attributes?
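A hedged sketch of the common compromise: a narrow surrogate key for the 20 child tables to reference, plus a unique constraint over the natural 3-9 column combination so uniqueness is still enforced. All names and types below are placeholders:

CREATE TABLE dbo.ParentThing (
    ParentThingID int IDENTITY(1,1) NOT NULL,   -- narrow surrogate key for joins from child tables
    ColA varchar(50) NOT NULL,
    ColB varchar(50) NOT NULL,
    ColC int NOT NULL,
    CONSTRAINT PK_ParentThing PRIMARY KEY (ParentThingID),
    CONSTRAINT UQ_ParentThing_Natural UNIQUE (ColA, ColB, ColC)   -- the natural-key combination stays unique
);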