I've created a few indexes on my tables but before I overdo it I wanted to see if there were any good websites out there with recommendations. My plan is to create clustered indexes on my primary keys and non-clustered indexes for each foreign key. Also, there are a few fields that are regularly searched, so I will index them separately as well.
I don't want too many though because I know that affects the performance of record inserts. I'm also not sure about using multiple keys in one index.
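For reference, the plan described above would look roughly like this (table and column names are made up):

-- Clustered index on the primary key (created implicitly by the PRIMARY KEY constraint)
CREATE TABLE dbo.Orders
(
    OrderID    int IDENTITY(1,1) NOT NULL,
    CustomerID int      NOT NULL,   -- foreign key to a Customers table
    OrderDate  datetime NOT NULL,   -- a regularly searched field
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID)
);

-- Non-clustered index on the foreign key column
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID);

-- Non-clustered index on a regularly searched field
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate ON dbo.Orders (OrderDate);

-- The part I'm unsure about: a composite (multi-key) index
CREATE NONCLUSTERED INDEX IX_Orders_Customer_Date ON dbo.Orders (CustomerID, OrderDate);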
Is there a good site out there with tips on what indexes to create or avoid, or have I pretty much covered it?
My team and I are soon going to work on a performance-critical application. My team has some experience writing SQL; however, we have not done performance-oriented coding.
I am looking for a comprehensive document that covers how to write SQL that performs well. Please point me to such a document or website if one exists.
I have to do row-by-row date comparisons in a date column. If the date difference is more than 30 days we keep the row; otherwise we suppress it. How can we write the query without using a cursor so that only the qualifying rows are returned?
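For illustration, would something like this work? This is only a sketch, assuming a hypothetical table dbo.Events(EventID, EventDate) and that each row is compared with the immediately preceding row by date:

-- Number the rows by date, then compare each row with the previous one;
-- keep a row only when the gap is more than 30 days (the first row is kept).
WITH Ordered AS
(
    SELECT EventID,
           EventDate,
           ROW_NUMBER() OVER (ORDER BY EventDate) AS rn
    FROM dbo.Events
)
SELECT cur.EventID, cur.EventDate
FROM Ordered AS cur
LEFT JOIN Ordered AS prev
       ON prev.rn = cur.rn - 1
WHERE prev.EventID IS NULL
   OR DATEDIFF(DAY, prev.EventDate, cur.EventDate) > 30;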
Dear all,
I'm designing a system, including the database, and security represents the most crucial aspect of the system. For database security I have implemented the following, and I need your advice on further aspects, or perhaps corrections. The system is web based, using ASP.NET under IIS 6.0 with HTTPS; on the ASP.NET side I have included client-side validation of whatever is input, and postback forms are validated against SQL injection.
The security features I have implemented on the SQL Server 2005 side:
1.) Created the MACHINE\ASPNET account
2.) Allowed the ASPNET account to access the DB
3.) Explicitly denied the ASPNET account all permissions on all tables, functions and views
4.) Denied the ASPNET user all permissions on stored procedures except EXECUTE
5.) Stored procedures were created WITH ENCRYPTION, EXECUTE AS 'MACHINE\ASPNET'
No SQL is included in the ASP.NET code except for calling stored procedures; the policy is to only call stored procedures from the ASP.NET pages and to encrypt the connection strings inside the web.config file.
Kindly give me some guidelines for better security, or discuss with me the security aspects I mentioned.
Regards
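For reference, the setup described above corresponds roughly to the following T-SQL; this is only a sketch, and the database, account, procedure and table names are illustrative:

USE MyAppDb;
GO
-- 2) allow the ASPNET account into the database
CREATE USER [MACHINE\ASPNET] FOR LOGIN [MACHINE\ASPNET];
GO
-- 3) deny direct data access to tables and views in the dbo schema
DENY SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO [MACHINE\ASPNET];
GO
-- 5) procedure created with encryption and an explicit execution context
CREATE PROCEDURE dbo.usp_GetOrders
WITH ENCRYPTION, EXECUTE AS 'MACHINE\ASPNET'
AS
BEGIN
    SELECT OrderID, OrderDate FROM dbo.Orders;
END
GO
-- 4) grant only EXECUTE on the stored procedures the application calls
GRANT EXECUTE ON dbo.usp_GetOrders TO [MACHINE\ASPNET];
GO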
Can somebody give me some tips or hints to speed up my SQL queries and procedures and get more performance? What is best to use, joins or cursors? Can using cursors give me more performance?
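For example, is a set-based statement generally faster than a cursor loop like this? This is only an illustrative sketch; the table and column names are made up.

-- Cursor version: processes one row at a time
DECLARE @OrderID int;
DECLARE order_cursor CURSOR FOR
    SELECT OrderID FROM dbo.Orders WHERE Status = 'NEW';
OPEN order_cursor;
FETCH NEXT FROM order_cursor INTO @OrderID;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Orders SET Status = 'PROCESSED' WHERE OrderID = @OrderID;
    FETCH NEXT FROM order_cursor INTO @OrderID;
END
CLOSE order_cursor;
DEALLOCATE order_cursor;

-- Set-based version: a single statement doing the same work
UPDATE dbo.Orders SET Status = 'PROCESSED' WHERE Status = 'NEW';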
I'm new to replication but I have already set up replication, have seen it working and failing, and have gotten myself out of jams so far, but there must be an easier way to administer it when things don't replicate as expected. I'm finding that I could easily kill half a day just trying to dig up information leading to troubleshooting tips. Is there documentation just on managing this feature? The regular MS Administrator's guide doesn't offer much.
Currently I have a problem: if replication fails on one command, I get a SQL Mail message telling me of the problem, but does replication continue to the next command, or does it just stop until the problem is fixed? I'm finding that I am constantly checking the publisher and subscriber databases and verifying whether replication is indeed doing what the msjob_commands table reports. I set the batch to commit after each transaction instead of every 100.
Hi, for your day-to-day SQL Server issues like query tuning, optimization, and TSQL problems, I am writing a blog at http://blog.namwarrizvi.com
Some of the latest articles are:
- Generating 1 million rows in less than a second
- Conditionally add column in the table
- Multiple Inserts in one statement
- Capture every data operation in SQL Server 2008
- 100 Nano seconds precision in SQL Server 2008
- Represent Trees and Graphs in TSQL
- MERGE Statement of SQL Server 2008
- Return Last n Orders by using APPLY operator
- Number padding in TSQL
- Microsoft Performance Point Server and Sharepoint
- Caching and Recompilation in SQL Server 2005
and many more....
I will really appreciate comments and suggestions.
We come across situations where people are running big updates on the database (i.e. 50,000 updates). Our problem is that those big updates are blocking other users' updates.
Thanks to snapshot isolation, users can query (select) the database without taking locks. We rebuilt the indexes so that the indexes used by the update procedure allow only row locks, not page locks. We also set the database to update statistics asynchronously. We are still facing blocking, and we would like to optimize the database to avoid it. What else could we check? Any tips on how to avoid this problem are very welcome.
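For reference, the settings described above correspond roughly to the following; this is only a sketch, and the database, table and index names are illustrative:

-- Readers use row versioning instead of shared locks
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Statistics are updated asynchronously
ALTER DATABASE MyDb SET AUTO_UPDATE_STATISTICS_ASYNC ON;

-- The index used by the update procedure takes row locks only, no page locks
ALTER INDEX IX_BigTable_Key ON dbo.BigTable
    SET (ALLOW_PAGE_LOCKS = OFF, ALLOW_ROW_LOCKS = ON);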
Hello, I have 2 SQL Servers in my company and many remote sites. I am trying to figure out the best way to keep them safe, since both have access to the internet behind the firewall. I was planning to disable the default gateway on one, or maybe disable file sharing on both. I was also thinking of blocking access to the terminal server that is running in admin mode, either through the firewall or through the permissions of the RDP protocol. I have a few admins that have Account Manager and Server Operator permissions as well as Exchange admin. Any ideas on how to restrict access to my servers? Thanks.
Please explain the differences between the logical and physical operations whose graphical icons we can see in the execution plan tab in Management Studio.
Hello everyone! I've got a problem with a really slow query; I would be very happy if somebody had any idea how to improve its speed... The idea is to get the top 2 products a customer hasn't bought which are in his interest...
query (simplified)
-------------------------------------------------
SELECT TOP 2 prodID, Title, Price
FROM bestSold7Days
WHERE prodID NOT IN
      (SELECT prodID
       FROM orders
       INNER JOIN orderProducts ON orders.orderID = orderProducts.orderID
       WHERE orders.custID = 394)
  AND prodType = COALESCE(
      (SELECT TOP 1 products.prodType
       FROM orders
       INNER JOIN orderProducts ON orders.orderID = orderProducts.orderID
       INNER JOIN products ON orderProducts.prodID = products.prodID
       WHERE orders.custID = 394
       GROUP BY products.prodType
       ORDER BY SUM(orderProducts.PCS) DESC), 2)
-------------------------------------------------
end query
(The COALESCE is there to supply a default when the customer hasn't ordered anything, or hasn't ordered anything of this type.)
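For what it's worth, would rewriting the NOT IN as NOT EXISTS make any difference? Just a sketch of what I mean, against the same tables:

SELECT TOP 2 b.prodID, b.Title, b.Price
FROM bestSold7Days AS b
WHERE NOT EXISTS
      (SELECT 1
       FROM orders
       INNER JOIN orderProducts ON orders.orderID = orderProducts.orderID
       WHERE orders.custID = 394
         AND orderProducts.prodID = b.prodID)
  AND b.prodType = COALESCE(
      (SELECT TOP 1 products.prodType
       FROM orders
       INNER JOIN orderProducts ON orders.orderID = orderProducts.orderID
       INNER JOIN products ON orderProducts.prodID = products.prodID
       WHERE orders.custID = 394
       GROUP BY products.prodType
       ORDER BY SUM(orderProducts.PCS) DESC), 2)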
Are there any best practices for indexing to support queries with MIN() and MAX() in them? What if MIN() and MAX() are partitioned? Super bonus question: what if MIN() and MAX() are not only partitioned, but are called on a field in a derived table, and one of the partitioning elements comes from a table that's being joined in the derived table?
I experimented with inserting the derived table into a temp table, putting a POC index on that, and querying out, but that actually took longer.
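To make the question concrete, this is the shape I mean; the table, column and index names are made up:

-- MIN/MAX partitioned by a column (windowed aggregates)
SELECT CustomerID,
       MIN(OrderDate) OVER (PARTITION BY CustomerID) AS FirstOrder,
       MAX(OrderDate) OVER (PARTITION BY CustomerID) AS LastOrder
FROM dbo.Orders;

-- and a POC-style (Partition, Order, Cover) index intended to support it
CREATE NONCLUSTERED INDEX IX_Orders_Customer_Date
    ON dbo.Orders (CustomerID, OrderDate)
    INCLUDE (OrderTotal);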
We are coming out of the dark ages with our app using SQL 7 and, following the excellent advice of the folks here on SQLTeam, installing SQL 2005 Express on our new webserver.
Not being terribly fluent in all things SQL, I was wondering if anybody could provide input on the best practices for getting SQL 2005 Express going on the new server.
So far I've:
- Installed SQL 2005 Express
- Downloaded and "installed" the SSEUtil for CMD line instructions
- Downloaded and installed the graphical management interface (very nice, makes me feel more comfortable - like SQL 7 console!)
- Copied backup files (made using SQL backup maintenance) to the new server
Should I simply create an empty db of the same name on the 2005 server and then restore the 7 data? Or ????
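For example, is a straight restore like this the right approach, without pre-creating the database? This is just a sketch; the database and file names are made up, and the logical file names would come from RESTORE FILELISTONLY.

-- Inspect the logical file names inside the backup first
RESTORE FILELISTONLY FROM DISK = N'C:\Backups\MyAppDb_SQL7.bak';

-- Then restore, moving the files to the new server's data folder
RESTORE DATABASE MyAppDb
FROM DISK = N'C:\Backups\MyAppDb_SQL7.bak'
WITH MOVE 'MyAppDb_Data' TO N'C:\SQLData\MyAppDb.mdf',
     MOVE 'MyAppDb_Log'  TO N'C:\SQLData\MyAppDb_log.ldf';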
I searched briefly for previous posts of this nature and didn't find too much info so I hope I'm not duplicating effort here...
Thanks in advance for any advice!
Mmmmmkay. Yeah, did you get the memo about the TPS reports?
I need to convert an Excel matrix into a table. Currently, the data consists of months going across the top and business names going down the left side. Each business name has three rows of data per monthly column, such that there are three numbers in the January column, three in the February column, and so on.
I want to convert to a table that has five columns: the business name, the date, and the three data columns.
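To be concrete, the target shape would be roughly this; the table and column names are just placeholders:

CREATE TABLE dbo.BusinessMonthly
(
    BusinessName varchar(100)  NOT NULL,
    ReportDate   datetime      NOT NULL,   -- the month from the column heading
    Value1       decimal(18,2) NULL,       -- first of the three rows per business
    Value2       decimal(18,2) NULL,       -- second
    Value3       decimal(18,2) NULL        -- third
);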
Any help would be greatly appreciated. As of right now I'm staring at keying in about 2000 rows of data by hand.
Our parts table has 5k records. I want to use part number as a parameter for one of my reports. Is there a way to do this and have the report generate in a reasonable amount of time?
I have questions about Slowly Changing Dimensions. I am quite confused about when we should use Type 1 (changing), Type 2 (historical), or Type 3 (fixed) for the dimensions in each table. Are there any good suggestions on that?
Thank you in advance and I am looking forward to hearing from you.
I have an SSIS package that reads a text file and generates an output file from it after transformations.
Now, a 20MB text file (containing about 50,000 records) is taking around 5 mins to complete. There is a Data Flow Task which is taking the major chunk of the time. It contains the following:
1) File Source
2) Conditional Splits (2 in number)
3) Derived Column
4) Data Conversion Transformation
5) OLE DB Destinations (3 in number)
The number of records being processed is close to 50,000
Please share your tips for optimizing the package.
Originally posted by Jeremy at 12/10/2001 11:39:38 AM
Hello all,
I've written a simple DTS job that uses Oracle (8.x) as a source and Oracle (8.x) as a destination. I'm using SQL 2000 and Microsoft's OLE DB provider for Oracle as the two connections. I've chosen "Transform Data Task" with the following SQL: "SELECT * FROM REPORTER_STATUS WHERE LASTOCCURRENCE > TRUNC(SYSDATE)". As you can see, it's very simple; however, it's very, very slow (averages about 1000 rows per minute). In my column transformations, I've selected many-to-many versus one-to-one. There are no ActiveX scripts or anything along those lines, just a simple push of the data from one Oracle box to the other. The table schemas are identical, etc. I've had this problem before when writing to Oracle and I can't imagine that it's really supposed to be this slow. If you need more details, please just let me know.
The official response from Microsoft is that DTS only allows single inserts... not bulk or BCP for Oracle. There must be someone out there who has figured out how to configure / modify / call (something) from a DTS package to insert millions of records into Oracle in a decent time frame...
I have a stored procedure that queries a database using a SELECT statement with some inner joins and conditions. With over 9 million records it takes 1 min 36 sec to complete. This is too slow for my requirements. Is there any way I can optimize this query? I have thought about using an indexed view. I haven't done one before; does anyone know if this would have potential to improve performance, or indeed any other performance-enhancing techniques I might try?

SELECT vehicle.vehicle_id
FROM (([vehicle]
INNER JOIN [vehicle_subj_item_assn] ON vehicle.vehicle_id = [vehicle_subj_item_assn].vehicle_id)
INNER JOIN [subj_item] ON [vehicle_subj_item_assn].subj_item_id = [subj_item].subj_item_id)
INNER JOIN [template_field] ON [subj_item].subj_item_id = [template_field].subj_attr_id
WHERE ([template_field].template_field_id = @template_field_id)
  AND ([template_field].template_field_type_id = 3)
  AND ([vehicle_subj_item_assn].subj_item_value_text = @value)
  AND (vehicle.end_dtm IS NOT NULL)

Thanks
Gavin
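For example, would an indexed view along these lines be the right idea? This is only a sketch: the column list is trimmed, and the unique clustered index key is a guess that may need extra columns to actually be unique.

CREATE VIEW dbo.vVehicleTemplateField
WITH SCHEMABINDING
AS
SELECT v.vehicle_id,
       tf.template_field_id,
       tf.template_field_type_id,
       a.subj_item_value_text,
       v.end_dtm
FROM dbo.vehicle AS v
INNER JOIN dbo.vehicle_subj_item_assn AS a ON v.vehicle_id = a.vehicle_id
INNER JOIN dbo.subj_item AS s ON a.subj_item_id = s.subj_item_id
INNER JOIN dbo.template_field AS tf ON s.subj_item_id = tf.subj_attr_id;
GO
-- An indexed view requires a unique clustered index; add key columns if this combination is not unique.
CREATE UNIQUE CLUSTERED INDEX IX_vVehicleTemplateField
    ON dbo.vVehicleTemplateField (template_field_id, subj_item_value_text, vehicle_id);
GO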
I would like my transformation to automatically create an output column for each input column. Any tips? I can't seem to determine which event to listen to or method to override.
SELECT *
FROM (
    SELECT TOP 15 *
    FROM (
        SELECT TOP 15
            CMDS.STOCKCODE AS CODE, CMDS.STOCKNAME AS NAME, CMDS.Sector AS SEC,
            CMD7.REFERENCE AS REF, T1.HIGHP AS HIGH, T1.LOW, T1.B1_CUM AS 'B/QTY',
            T1.B1_PRICE AS BUY, T1.S1_PRICE AS SELL, T1.S1_CUM AS 'S/QTY',
            T1.D_PRICE AS LAST, T1.L_CUM AS LVOL, T1.Chg AS CHG, T1.Chgp AS CHGP,
            T1.D_CUM AS VOLUME, substring(T1.ST, 7, 6) AS TIME, CMDS.SERIAL AS SERIAL
        FROM CMD7, CMDS, CMD4 AS T1
        WHERE T1.ST IN (
                SELECT max(T2.ST)
                FROM CMD4 AS T2, CMDS
                WHERE T1.SERIAL = T2.SERIAL
                  AND CMDS.SERIAL = T2.SERIAL
                  AND T2.sd = '20060821'
                  AND CMDS.sd = '20060821'
                  AND T2.L_CUM < '1900'
                  AND CMDS.sector >= '1'
                  AND CMDS.sector <= '47')
          AND CMDS.SERIAL = T1.SERIAL
          AND CMDS.SERIAL = CMD7.SERIAL
          AND CMDS.sd = '20060821'
          AND CMD7.sd = '20060821'
          AND T1.sd = '20060821'
          AND T1.L_CUM < '1900'
          AND CMDS.sector >= '1'
          AND CMDS.sector <= '47'
        ORDER BY T1.D_CUM DESC) AS TBL1
    ORDER BY VOLUME ASC) AS TBL1
ORDER BY VOLUME DESC;
Is it that I have a syntax error in the nested OPENQUERY or is there another issue? Do I need to specify a different provider in the Server Link such as OLEDB? Non-nested OPENQUERYs work fine.
I'm generally following the Tips and Tricks article.
"Executing predictions from the SQL Server relational engine". One problem is the sample doesn't actually complete the example query after the second nested OPENQUERY call.
e.g.
SELECT * FROM OPENQUERY(DMServer, 'select … FROM Modell PREDICTION JOIN OPENQUERY…')
The SQL Server server link's provider is configured to allow ad hoc access. It appears that the inner OPENQUERY cannot be prepared by Analysis Server or the server link provider? But I need to return a key value, t.[CardTransactionID], for joining to SQL Server data elements.
OLE DB provider "MSOLAP" for linked server "DMServer" returned message "Errors in the back-end database access module. The data provider does not support preparing queries.".
Msg 7321, Level 16, State 2, Line 2
An error occurred while preparing the query

SELECT * FROM OPENQUERY(DMServer,
'SELECT t.[CardTransactionID], t.[PostingDate],
        [Misuse Abuse Profile].[Even Dollar Purchase],
        PredictProbability([Misuse Abuse Profile].[Even Dollar Purchase]) AS Score,
        PredictSupport([Misuse Abuse Profile].[Even Dollar Purchase]) AS Suppt,
        t.[BillingAmount]
 FROM [Misuse Abuse Profile]
 PREDICTION JOIN OPENQUERY([Athena Dev],
   ''SELECT [CardTransactionID], [PostingDate], [BillingAmount], [AccountNumber],
            [SupplierStateProvinceCode], [MerchantCategoryCode], [PurchaseIDFormat],
            [TransactionTime], [TaxAmountIncludedCode], [Tax2AmountIncludedCode],
            [OrderTypeCode], [MemoPostFlag], [EvenDollarPurchase]
     FROM [dbo].[vMisuseAbuseProfile] '') AS t
 ON [Misuse Abuse Profile].[Account Number] = t.[AccountNumber]
 AND [Misuse Abuse Profile].[Supplier State Province Code] = t.[SupplierStateProvinceCode]
 AND [Misuse Abuse Profile].[Merchant Category Code] = t.[MerchantCategoryCode]
 AND [Misuse Abuse Profile].[Purchase ID Format] = t.[PurchaseIDFormat]
 AND [Misuse Abuse Profile].[Transaction Time] = t.[TransactionTime]
 AND [Misuse Abuse Profile].[Tax Amount Included Code] = t.[TaxAmountIncludedCode]
 AND [Misuse Abuse Profile].[Tax2 Amount Included Code] = t.[Tax2AmountIncludedCode]
 AND [Misuse Abuse Profile].[Order Type Code] = t.[OrderTypeCode]
 AND [Misuse Abuse Profile].[Memo Post Flag] = t.[MemoPostFlag]
 AND [Misuse Abuse Profile].[Even Dollar Purchase] = t.[EvenDollarPurchase] ')
In desperation I tried returning the case key (CardTransactionID) and the predictive column elements, but I get an error when I try that. I assume this is a no-no? OLE DB provider "MSOLAP" for linked server "DMServer" returned message "Error (Data mining): Only a predictable column (or a column that is related to a predictable column) can be referenced from the mining model in the context at line 2, column 15.".
I am using Full-Text Indexing to index emails stored in a BLOB column in a table. The indexing process parses the stored emails, and, if there are one or more files attached to an email, those documents get indexed too. As a result, when I query the full-text index for a word or phrase, I get a reference to the email containing the word or phrase of interest whether the word was used in the email body OR in any document attached to the email.
How can I distinguish in a full-text query whether the result came from an embedded document rather than from the "main" document? Or, if that's not possible, how can I disable indexing of embedded documents?
My goal is either to give a user an option if he or she wants to search emails (email bodies only) OR emails AND documents attached to them, or at least clearly indicate in the returned result the real source where the word or phrase has been found.
A web-based application and PDA devices are used to initiate orders from all over the country. The issue is that this table is not partitioned, although a good HP server with 30 GB RAM is installed. This is the main table, and it receives 18,0000 hits or more. All brokers and users are using this table to see the status of their orders.
They always search by OrderID, ClientID, or Order_SubNo, or enter any two, like (Client_ID + Order_Sub_ID), or any combination.
Queries take too much time whenever the server receives more queries. Some other indexes are also created on the same table, like (OrderDate, OrdCreate Date and Status).
My questions are:
Q1. IF Person "A" query to DB on Client_ID, then what Index will use ? (If any one do Query on any two combination like Client_ID+Order_ID, So what index will be uesd.? How does MS-SQL SERVER deal with these kind of issues.?
Q2. If I create 3 more indexes on ClientID, OrderID and OrderSubID, will this improve query performance? If person "A" searches for a record on OrderNo, which index will be used? (Mind that there would be 3 separate indexes, one for each PK column, and a composite clustered index is also available.)
Q3. How can I check which indexes have been used, and for which searches?
Q4. How can I check when a table was last populated, or the last date of update (DML)?
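For Q3 and Q4, is something like the following the right way to check? This is just a sketch using the index usage DMV, which I believe requires SQL Server 2005 or later, and the counters reset when the service restarts.

SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates,
       s.last_user_seek, s.last_user_scan, s.last_user_update
FROM sys.dm_db_index_usage_stats AS s
INNER JOIN sys.indexes AS i
        ON i.object_id = s.object_id
       AND i.index_id  = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY table_name, index_name;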
My limitation is that I can't create a partitioned table; I don't have permission to do it.
In Teradata we had more than 4 TB of CRM data with no issues. I am not a newcomer to databases, but I am not an expert in SQL Server 2003.
My SSIS package is running very slowly and taking a long time to execute. One task is taking 2 hours to insert 100k records; I have disabled unused indexes and it is still taking time. I am rebuilding/refreshing indexes and stats once a month; if I run that on a daily basis, will it improve my SSIS package performance?
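For reference, the monthly index and statistics maintenance I mean is roughly the following; this is just a sketch and the table name is made up:

-- Rebuild all indexes on the destination table and refresh its statistics
ALTER INDEX ALL ON dbo.TargetTable REBUILD;
UPDATE STATISTICS dbo.TargetTable WITH FULLSCAN;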