Please give me some advice. In my application I calculate a list of identifiers (Guids) that are primary keys in my table, and I have to retrieve those rows from the database. My first approach is:
SELECT id, c2 FROM t1 WHERE id IN (@id1, @id2, @id3, ...)
where @idn are the calculated identifiers passed as parameters. This approach does not scale well, since there is a limit on the number of parameters that can be used. One possibility might be to use several SELECT statements, each with the maximum number of parameters, but I can't believe that is a good solution. A temporary table may be a better option - I don't know. Are there any better ways to retrieve these rows performantly? Any recommendations?
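For illustration, the temporary-table variant I have in mind would be something like this (just a sketch; the #ids table and how it gets populated are my assumptions):

CREATE TABLE #ids (id UNIQUEIDENTIFIER PRIMARY KEY);

-- populate #ids from the application, e.g. with batched INSERTs or SqlBulkCopy
INSERT INTO #ids (id) VALUES (@id1);
-- ...

SELECT t.id, t.c2
FROM t1 AS t
JOIN #ids AS i ON i.id = t.id;

DROP TABLE #ids;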
I have the Transact-SQL Programming book from O'Reilly, published in 1999. It states that "SELECT ... INTO" statements end up locking the entire database of the target table. Since tempdb is also involved (in many cases), this creates major deadlocks for the entire database and all its users. It suggests using the "INSERT ... SELECT" form instead. Considering that the book is somewhat dated, is this recommendation still valid, especially on target sizes of up to 5 million records?
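For reference, these are the two forms being contrasted (table and column names are made up):

-- SELECT ... INTO creates the target table on the fly:
SELECT col1, col2
INTO dbo.TargetCopy
FROM dbo.SourceTable;

-- INSERT ... SELECT requires the target table to exist already:
CREATE TABLE dbo.TargetCopy2 (col1 INT, col2 VARCHAR(50));
INSERT INTO dbo.TargetCopy2 (col1, col2)
SELECT col1, col2
FROM dbo.SourceTable;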
I will have to create a table that consists of only two fields: one, the EmployeeID, and two, the SupervisorID. My question is what I should define as my primary key. Should it be an additional field, or could it be the EmployeeID field?
EmployeeID is a unique field. The end user of this application will rarely be updating these records, and may be adding or deleting some records sporadically.
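In other words, something like this sketch (the INT data types are my assumption; EmployeeID as the natural primary key is what I am asking about):

CREATE TABLE dbo.EmployeeSupervisor (
    EmployeeID   INT NOT NULL PRIMARY KEY,  -- unique per the description above
    SupervisorID INT NOT NULL
);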
We have a SQL 2005 server with around 1,000 databases, totalling about 1 TB. We would like to have a warm standby with transactions replicated to it. In the event of a failure on the principal, we would want the warm standby to come online automatically and begin serving db requests.
I've looked at the SQL 2005 database mirroring option; however, this has a practical restriction of around 10 databases per SQL Server instance, which, unfortunately, I exceed. One method I've been looking at is transactional replication in the classic publisher/subscriber model; however, how would I handle automated failover to the subscriber if the publisher were to fail?
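For context, mirroring is configured per database along these lines (server names are placeholders), and it is the witness that provides the automatic failover - which is exactly what I don't see how to get with replication:

ALTER DATABASE MyDb SET PARTNER = 'TCP://standby.example.com:5022';
ALTER DATABASE MyDb SET WITNESS = 'TCP://witness.example.com:5022';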
Does anyone in the community have any thoughts or recommendations?
Hi, folks. I have a production SQL machine with more than 20 users making transactions 24 hours a day, 6 days a week; I have only Sunday for maintenance. The server has a fixed 2 GB RAM allocation for SQL. Is it good to restart SQL (or the machine) to clear the buffer cache, or is it better to keep the cache? :rolleyes:
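For what it's worth, I know the caches can also be cleared without a restart (shown for reference only; I am unsure whether throwing away a warm cache helps at all):

CHECKPOINT;              -- write dirty pages to disk first
DBCC DROPCLEANBUFFERS;   -- empty the buffer cache
DBCC FREEPROCCACHE;      -- empty the procedure/plan cache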
Are there any general recommendations concerning filegroups? My personal point of view is to place large tables in their own filegroups and to group smaller, more static tables in a single filegroup. Is it also good practice to group small and large indexes in two separate filegroups, or should each large index have its own filegroup? Are there any useful links out there concerning filegroups and configuration?
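To make the question concrete, this is the kind of layout I mean (database, filegroup, and path names are made up):

ALTER DATABASE MyDb ADD FILEGROUP LargeTables;
ALTER DATABASE MyDb ADD FILE
    (NAME = LargeTables1, FILENAME = 'D:\Data\MyDb_Large1.ndf')
    TO FILEGROUP LargeTables;

-- a large table placed on its own filegroup:
CREATE TABLE dbo.BigTable (id INT PRIMARY KEY, payload VARCHAR(100)) ON LargeTables;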
I am in need of some advice. I need to build a SQL machine that will be adequate for my company. Budget is a very big factor, but I need the machine to be reliable and as redundant as possible. This box will be 'vanilla' since I will be building it myself. I looked at some larger companies' websites and the prices are way out of control. Here's what my configuration is so far (keeping price in mind):

Case: rack-mount 4U
Motherboard: Intel 865GLCL (800MHz FSB)
Processor: Intel Pentium 4 2.4GHz
Memory: 1GB DDRAM
Hard Drive(s): 3x 36GB SATA [10,000 RPM] in RAID 5 configuration
CD-ROM: standard
Floppy: standard
RAID Controller: Promise SATA
NIC: 3Com

My machine does not come under a very heavy load, but it is used often. I'm interested in hearing others' comments about their SQL servers so I know how to gauge building my machine.
I have begun trying to break out of using Access databases (97!) and have been trying out SQL Server Express 2005 along with SQL Server Management Studio Express. I am a little confused, as I am trying to use the interface inside VB.NET 2005 as well as the Management Studio, and sometimes I can connect from one but not the other.
Anyway, this points to the fact that I have a lot to learn, and I was looking for a recommendation for a book that could serve as a tutorial for using VB.NET 2005 with SQL Server Express. I really need something that starts from square one but hopefully builds fast. Right now it appears I need to understand connection strings (when do I put ".\SQLEXPRESS" and when do I use the server name followed by the instance, for example?).
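For example, these are the two connection-string styles I keep running into (server, database, and file names are just placeholders):

A local SQL Express user instance attaching an .mdf file:
Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\MyDb.mdf;Integrated Security=True;User Instance=True

A named instance addressed by server name:
Data Source=MYSERVER\SQLEXPRESS;Initial Catalog=MyDb;Integrated Security=True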
I have tried some of the Books Online examples, and I ran into a dead end with the simple tutorial (http://msdn2.microsoft.com/en-us/library/ms165732.aspx): the headers didn't sort, I couldn't select any other pages, and the edit button didn't work. I don't have a clue what happened, as I followed the instructions.
Anyway, if someone could recommend something that teaches SQL Server Express while building an application with VB2005, that would be perfect.
I would like to locate a book that focuses on MSSQL administration from the command line. My background is in Informix, and I am used to doing things from the prompt. Any recommendations?
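To give an idea of the level I mean, the SQL Server prompt tools appear to be sqlcmd (2005) and osql (earlier versions), e.g. something like:

sqlcmd -S MYSERVER -E -Q "SELECT name FROM sys.databases"

(-S names the server, -E uses a trusted connection, -Q runs the query and exits; the server name is a placeholder.)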
I am rewriting an application for Windows CE which was originally written for the Palm OS. The original application was written in VB6 using Access databases. I will be rewriting it in VB.NET, and I was considering using SQL Everywhere, as it seems to fit the criteria that I need.
There is also an application written for the desktop that synchronizes with the mobile application. This is also written in VB6 and uses Access databases.
I found the Sync with Access CTP, which I thought was exactly what I would need for this project. However, I have a few concerns about SQL and Access and would like to ask a few questions before I continue with this project.
I read that this Sync with Access component will allow me to synchronize the data between my desktop application and the mobile device application.
What will happen when we rewrite the desktop application to use SQL? Will I be able to sync the data between the two applications without using SQL Server, i.e. sync using SQL Everywhere? If not, is there any way around it without implementing SQL Server? I thought of having an Access database between the two applications to utilize the Sync with Access component. Does this sound feasible? Also, is it possible to remotely sync the data without using SQL Server?
Need Recommendation on Tool for SQL Server Development
I have inherited a SQL Server 7 database (actually a SQL Server 6.5 database running on SQL 7 in compatibility mode) with thousands of objects, including triggers and stored procedures, and zero documentation. I need to make major changes to this database. Are there any tools available that will let me quickly search through the database objects and code (stored procedures, etc.) for keywords and other useful criteria? Do you recommend any SQL Server specific tools that will help me learn this database in the shortest amount of time?
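For instance, this is the kind of keyword search I would otherwise hand-roll against the system tables ('keyword' is a placeholder; on SQL 7, syscomments holds the source of procedures and triggers):

SELECT DISTINCT o.name
FROM sysobjects o
JOIN syscomments c ON c.id = o.id
WHERE c.text LIKE '%keyword%'
ORDER BY o.name;

(Long definitions span multiple syscomments rows, so a match straddling a row boundary can be missed - part of why a proper tool would help.)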
I have a database that is around 2 to 4 GB. If I were to estimate some growth like 4x or even 10x, the database size could reach 40 GB. The new server will be running SQL Server 2005. I am not sure which configuration option to take. I've gathered some information from different places:

Configuration #1:
OS - RAID 1, 2x36GB
Logs - RAID 1, 2x36GB
Data - RAID 5, 4x73GB

Configuration #2:
OS - RAID 1, 2x36GB
Logs - RAID 5 (not sure how many drives)
Data - RAID 5 (not sure how many drives)

Now, if I am using a separate RAID array for the database's transaction log, should I also put tempdb on this array? Here's the configuration I am thinking of right now. Please give me your comments:

OS - RAID 1, 2x36GB
Logs & TempDB - RAID 5, 3x36GB = 2x36GB usable space
Data - RAID 5, 3x73GB = 2x73GB usable space

If you have other configurations you recommend, please let me know. Thank you.
Our shop is expanding its use of SQL Server, both 2000 and 2005. We have Litespeed on some boxes to handle the backup/recovery jobs. Can I ask what are considered the best tools for monitoring SQL Server, in terms of things like performance monitoring, tuning, and auditing - if it is possible to get all of this functionality in one? What do you use and like? Thanks in advance. Gerry
I would like to have some clarification about an index-related recommendation from the Database Tuning Advisor.
Let me describe the scenario first:
There is a table with a clustered index defined on an ID column of type INT, and there are other columns of varchar/int types as well. When I run the Tuning Advisor, I get recommendations related to creating statistics as well as non-clustered indexes. When I view the syntax for a recommended non-clustered index, sometimes it explicitly adds the ID column as well, which already has the clustered index defined on it, e.g.:
CREATE NONCLUSTERED INDEX idx_TableName_IndexName ON dbo.TableName
(
ColName1 ASC,
ColName2 ASC,
ID ASC
)
My understanding is that for each non-clustered index, the clustered index key is automatically a part of it, and that is how the non-clustered index retrieves the actual data. More often than not, I have seen DTA's recommendations include the clustered index column somewhere among the key columns, for so many of my tables.
I could understand it if the recommendation were to INCLUDE the clustered-index column.
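That is, I would have expected the second form below rather than the first (same illustrative table and column names as above; the index names are invented):

-- what DTA recommends: ID as a trailing key column
CREATE NONCLUSTERED INDEX idx_KeyForm ON dbo.TableName (ColName1 ASC, ColName2 ASC, ID ASC);

-- what I expected: ID carried only at the leaf level
CREATE NONCLUSTERED INDEX idx_IncludeForm ON dbo.TableName (ColName1 ASC, ColName2 ASC) INCLUDE (ID);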
I would appreciate it if someone out there could help me understand what I am missing here.
The DBA is not around, and I would like to see if someone has a good recommendation on what the Maximum insert commit size (MICS) should be for an OLE DB Destination when the default of zero is not being used.
I want to use Fast Load, and I want to use Redirect Row to catch the errors. I just performed a test where the OLE DB Destination was NOT set to Fast Load - it took forever, and I cannot have that kind of performance.
I know that this may be totally dependent on what is being inserted, but is there any problem with just setting this value to, say, 800,000?
The destination SQL database's recovery model is set to SIMPLE, as it is not a transactional database.
We are getting prepared to move from SQL Server 2000 to 2005. We have a lot of DTS packages that will need to be converted to SSIS. Can you recommend a really good reference book or textbook on SSIS that will help us out both with DTS conversions and with SSIS development in general?
I have a report with a hidden parameter that defaults to the logged-in user. In this way, a user can only see his own information. However, I now need to give a group of administrators access to change the value of the parameter (to see information for other people). What is the best/most appropriate way to accomplish this?
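(For reference, the hidden parameter's default is the built-in user expression - assuming the usual setup here - i.e. =User!UserID.)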
I thought I'd try a Linked Report, but I do not see how I can set the default value of the User ID parameter to the currently logged-in user in Report Manager. Is that possible?
I also tried creating a report with the parameter open but with restricted access, and then using that report as a subreport of a report where the parameter is closed. However, a general user cannot open the subreport in this case, because he doesn't have access to the open report.
Hi, we have several Access databases that we plan to convert to SQL over time. I am looking for a tool that works better than the upsize wizard in helping with the transition - something reliable yet reasonably priced. Thanks in advance for all your help. KR
What's the most efficient way to store the following information:
* Table contains 1 million listings
* Each listing can be geo-targeted to any of the 200+ countries
* Searches return listings based on location
Storage options:
Option #1 (normalized)
* Listings (PK listingID int) [1 million rows]
* ListingLocations (listingID, locationID) [could be up to 200 million rows]
Option #2 (denormalized)
* Listings (PK listingID int, binary(32) with a bit-mask consisting of 200 bits, one for each location)
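As DDL, the two options would look roughly like this (column types are my guesses):

-- Option #1, normalized:
CREATE TABLE Listings (listingID INT PRIMARY KEY);
CREATE TABLE ListingLocations (
    listingID  INT NOT NULL REFERENCES Listings (listingID),
    locationID SMALLINT NOT NULL,
    PRIMARY KEY (listingID, locationID)
);

-- Option #2, denormalized bit-mask:
CREATE TABLE ListingsDenorm (
    listingID    INT PRIMARY KEY,
    locationMask BINARY(32) NOT NULL  -- 256 bits, one per possible location
);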
Usage: usually the query will simply look up listings based on some keywords. It will get back 50-200 listings. Then the application (C#) will filter the listings based on location.
Does anyone have experience with similar structures? Which option is more efficient?
I know that using the intersection table in Option #1 is the "proper" relational-DB way of doing things. However, I do not like the idea of storing the listingID so many times (once for each locationID).
We are working on converting to a SQL 2005 database. During the conversion we are having to rewrite a lot of code, and we are doing a lot of initial testing and development on development data. This is causing our transaction logs to get really big. I have created a maintenance plan that runs nightly and backs up the database and transaction log, but throughout the day the transaction logs are still getting really big and eating up a ton of disk space. Does anyone have suggestions on what sort of maintenance plan I can set up for my development data? At this point I am not concerned about being able to roll back the database; I just want to keep it as small as possible and "healthy".
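In case it helps frame an answer, the kind of thing I suspect I need for the development databases is just this (database and log file names are placeholders):

ALTER DATABASE DevDb SET RECOVERY SIMPLE;  -- log space is reused after each checkpoint
DBCC SHRINKFILE (DevDb_log, 100);          -- one-off shrink of the log back to 100 MB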
I have been working as a Sybase DBA for 5+ years, and I would very much like to add MS SQL Server to my resume. Given the common roots of the two RDBMSs, it seems that the learning curve would not be as steep as if I were going to learn Oracle or DB2. Does anyone out there know of any books that are geared toward learning MS SQL Server from a Sybase DBA's perspective?
I am reading "SQL Server Query Performance Tuning Distilled". On page 104 it talks about one of the index design recommendations, which is to choose a column that has very high selectivity of values (many distinct values) instead of a column with very low selectivity. My question is: if I currently have indexes on my tables whose columns contain only 1, 2, 3, 4, ... distinct values across thousands of rows, are these nonclustered indexes pretty much useless indexes that I should get rid of? And I know that the number of distinct values will pretty much always remain very low. Thank you.
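For example, this is how I measure the selectivity the book talks about (table and column names are illustrative); for the columns in question the result is nowhere near 1:

SELECT COUNT(DISTINCT StatusCode) * 1.0 / COUNT(*) AS selectivity
FROM dbo.MyTable;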
Hello everyone, I have a very complex performance issue with our production database. Here's the scenario: we have a production web server and a development web server, both running SQL Server 2000.

I encountered various performance issues on the production server with a particular query. It would take approximately 22 seconds to return 100 rows, that's about 0.22 seconds per row. Note: I ran the query in single-user mode. So I tested the query on the development server by taking a backup (.dmp) of the database and moving it onto the dev server. I ran the same query and found that it ran in less than a second.

I took a look at the query execution plans and found that they were exactly the same in both cases. Then I took a look at the various indexes, and again I found no differences in the table indices.

If both databases are identical, I'm assuming that the issue is related to some external hardware issue like disk space, memory, etc., or it could be an OS or software related issue, like service packs, SQL Server configurations, etc.

Here's what I've done to rule out some obvious hardware issues on the prod server:
1. Moved all extraneous files to a secondary hard drive to free up space on the primary hard drive. There is 55 GB of free space on the disk.
2. Applied the SQL Server SP4 service pack.
3. Defragmented the primary hard drive.
4. Applied all Windows Server 2003 updates.

Here are the prod server's system specs:
2x Intel Xeon 2.67GHz
Total Physical Memory 2GB, Available Physical Memory 815MB
Windows Server 2003 SE w/ SP1

Here are the dev server's system specs:
2x Intel Xeon 2.80GHz
2GB DDR2-SDRAM
Windows Server 2003 SE w/ SP1

I'm not sure what else to do; the query performance is an order of magnitude different and I can't explain it. To me it is a hardware or operating system related issue. Any ideas would help me greatly!

Thanks,
Brian T
We have the same application installed in a few different environments with similar servers and similar hardware. The only differences are the versions of SQL and the collations. Is SQL 2005 a lot faster than SQL 2000? Could collation type have a big effect on performance? ScAndal
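For completeness, the collations I am comparing can be checked at the server and database level (MyDb is a placeholder):

SELECT SERVERPROPERTY('Collation') AS ServerCollation;
SELECT DATABASEPROPERTYEX('MyDb', 'Collation') AS DatabaseCollation;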
Hi, I want to insert thousands of records into a SQL Server 2005 database with some manipulation, so I put the insert inside a for loop. Inside the loop I am opening the connection and closing it after use. The sample code is below:

for (int i = 0; i < 1000; i++)
{
    sqlCmd.CommandText = "ProcName";
    sqlCmd.Connection = sqlCon;
    sqlCmd.Connection.Open();
    sqlCmd.ExecuteNonQuery();
    sqlCmd.Connection.Close();
}

My question is: how is the performance of this code? Will it take time to get the connection and close it on every iteration? Or should I open the connection once before the loop and close it at the end of the loop - would that improve performance? Please clarify these questions for me. Thanks in advance.