Hello, I have two SQL Servers in my company and many remote sites. I am trying to figure out the best way to keep them safe, since both have access to the internet behind the firewall. I was planning to disable the default gateway on one, or maybe to disable file sharing on both. I was also thinking of blocking access to the terminal server that is running in admin mode, either through the firewall or through the permissions of the RDP protocol. I have a few admins that have account manager and server operator permissions as well as Exchange admin. Any ideas on how to restrict access to my servers? Thanks.
My team and I are soon going to work on a performance-critical application. My team has some experience writing SQL; however, we have not done performance-oriented coding.
I am looking for a comprehensive document that lists information on writing good SQL for performance. Please point me to such a document or website if you know of one.
Hi, for your day-to-day SQL Server issues like query tuning, optimization, and T-SQL problems, I am writing a blog at http://blog.namwarrizvi.com
Some of the latest articles are:
- Generating 1 million rows in less than a second
- Conditionally add a column in the table
- Multiple inserts in one statement
- Capture every data operation in SQL Server 2008
- 100-nanosecond precision in SQL Server 2008
- Represent trees and graphs in T-SQL
- MERGE statement of SQL Server 2008
- Return last n orders by using the APPLY operator
- Number padding in T-SQL
- Microsoft PerformancePoint Server and SharePoint
- Caching and recompilation in SQL Server 2005
and many more....
I would really appreciate comments and suggestions.
Is it that I have a syntax error in the nested OPENQUERY or is there another issue? Do I need to specify a different provider in the Server Link such as OLEDB? Non-nested OPENQUERYs work fine.
I'm generally following the Tips and Tricks article "Executing predictions from the SQL Server relational engine". One problem is that the sample doesn't actually complete the example query after the second nested OPENQUERY call.
e.g.
SELECT * FROM OPENQUERY(DMServer, 'SELECT ... FROM Modell PREDICTION JOIN OPENQUERY(...)')
The SQL Server linked server's provider is configured to allow ad hoc access. It appears that the inner OPENQUERY cannot be prepared by Analysis Server or the linked server provider, but I need to return a key value, t.[CardTransactionID], for joining to SQL Server data elements.
OLE DB provider "MSOLAP" for linked server "DMServer" returned message "Errors in the back-end database access module. The data provider does not support preparing queries.".
Msg 7321, Level 16, State 2, Line 2
An error occurred while preparing the query

SELECT * FROM OPENQUERY(DMServer, '
    SELECT t.[CardTransactionID], t.[PostingDate],
           [Misuse Abuse Profile].[Even Dollar Purchase],
           PredictProbability([Misuse Abuse Profile].[Even Dollar Purchase]) AS Score,
           PredictSupport([Misuse Abuse Profile].[Even Dollar Purchase]) AS Suppt,
           t.[BillingAmount]
    FROM [Misuse Abuse Profile]
    PREDICTION JOIN OPENQUERY([Athena Dev],
        ''SELECT [CardTransactionID], [PostingDate], [BillingAmount], [AccountNumber],
                 [SupplierStateProvinceCode], [MerchantCategoryCode], [PurchaseIDFormat],
                 [TransactionTime], [TaxAmountIncludedCode], [Tax2AmountIncludedCode],
                 [OrderTypeCode], [MemoPostFlag], [EvenDollarPurchase]
          FROM [dbo].[vMisuseAbuseProfile] '') AS t
        ON [Misuse Abuse Profile].[Account Number] = t.[AccountNumber]
       AND [Misuse Abuse Profile].[Supplier State Province Code] = t.[SupplierStateProvinceCode]
       AND [Misuse Abuse Profile].[Merchant Category Code] = t.[MerchantCategoryCode]
       AND [Misuse Abuse Profile].[Purchase ID Format] = t.[PurchaseIDFormat]
       AND [Misuse Abuse Profile].[Transaction Time] = t.[TransactionTime]
       AND [Misuse Abuse Profile].[Tax Amount Included Code] = t.[TaxAmountIncludedCode]
       AND [Misuse Abuse Profile].[Tax2 Amount Included Code] = t.[Tax2AmountIncludedCode]
       AND [Misuse Abuse Profile].[Order Type Code] = t.[OrderTypeCode]
       AND [Misuse Abuse Profile].[Memo Post Flag] = t.[MemoPostFlag]
       AND [Misuse Abuse Profile].[Even Dollar Purchase] = t.[EvenDollarPurchase] ')
In desperation I tried returning the case key (CardTransactionID) and the predictive column elements, but I get an error when I try that. I assume this is a no-no? OLE DB provider "MSOLAP" for linked server "DMServer" returned message "Error (Data mining): Only a predictable column (or a column that is related to a predictable column) can be referenced from the mining model in the context at line 2, column 15.".
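One workaround I'm considering, based on what I've read, is to send the whole DMX statement as a pass-through with EXEC ... AT instead of the outer OPENQUERY, since pass-through execution skips the prepare step the provider is rejecting. This is only a sketch: it assumes RPC Out is enabled on the DMServer linked server, the query below is cut down from the real one, and the results would need to be captured with INSERT ... EXEC into a temp table before joining them to SQL Server data.

-- Sketch only: enable pass-through execution on the linked server (assumption: not already set)
EXEC sp_serveroption 'DMServer', 'rpc out', 'true';

-- Send the DMX directly to Analysis Services; no query preparation is attempted
EXEC ('
SELECT t.[CardTransactionID],
       PredictProbability([Misuse Abuse Profile].[Even Dollar Purchase]) AS Score
FROM [Misuse Abuse Profile]
PREDICTION JOIN OPENQUERY([Athena Dev],
    ''SELECT [CardTransactionID], [AccountNumber], [EvenDollarPurchase]
      FROM [dbo].[vMisuseAbuseProfile]'') AS t
  ON [Misuse Abuse Profile].[Account Number] = t.[AccountNumber]
 AND [Misuse Abuse Profile].[Even Dollar Purchase] = t.[EvenDollarPurchase]
') AT DMServer;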
I have to do row-by-row date comparisons in a date column. If the date difference is more than 30 days we keep the row, otherwise we suppress it. How can we write the query without using a cursor so that only the bold rows are returned?
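If the comparison is against the immediately preceding row by date, a self-join on row numbers avoids the cursor. A rough sketch, assuming SQL Server 2005 or later and a made-up table dbo.Events(EventID, EventDate):

-- Number the rows by date, then compare each row with the one before it
;WITH Ordered AS
(
    SELECT EventID,
           EventDate,
           ROW_NUMBER() OVER (ORDER BY EventDate) AS rn
    FROM dbo.Events
)
SELECT cur.EventID, cur.EventDate
FROM Ordered AS cur
LEFT JOIN Ordered AS prev ON prev.rn = cur.rn - 1
WHERE prev.EventDate IS NULL                               -- first row is always kept
   OR DATEDIFF(DAY, prev.EventDate, cur.EventDate) > 30;   -- keep only gaps over 30 days

Note this compares neighbouring rows only; if the rule is "more than 30 days since the last kept row", that chained comparison needs a different (recursive or iterative) approach.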
Dear all,
I'm designing a system including the database, and security represents the most crucial aspect of the system; hence for the database security I have implemented the following aspects and need your advice on further aspects, or perhaps corrections. The system is web based, using ASP.NET under IIS 6.0 with HTTPS; on the ASP.NET side, I have included client-side validation for whatever is input, and postback forms are validated against SQL injection.
The security features I have implemented on the SQL Server 2005 side:
1.) Created the MACHINE\ASPNET account
2.) Allowed the ASPNET account to access the DB
3.) Explicitly denied the ASPNET account all permissions to all tables, functions and views
4.) Denied all permissions to the ASPNET user for stored procedures except EXECUTE permission
5.) Stored procedures are created WITH ENCRYPTION and EXECUTE AS 'MACHINE\ASPNET'
No SQL is included in the ASP.NET code except calls to stored procedures; the policy is to only call stored procedures within the ASP.NET pages and to encrypt the connection strings inside the web.config file.
Kindly give me some guidelines for better security, or discuss with me the security aspects I mentioned.
Regards
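For reference, here is roughly what the permission setup above looks like in T-SQL; the database, login, and procedure names are placeholders for the real ones:

-- Sketch only: login/user for the ASP.NET worker process (names are placeholders)
CREATE LOGIN [MACHINE\ASPNET] FROM WINDOWS;
USE MyAppDb;
CREATE USER [MACHINE\ASPNET] FOR LOGIN [MACHINE\ASPNET];

-- Deny direct access to tables/views in the dbo schema, allow only procedure execution
DENY SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO [MACHINE\ASPNET];
GRANT EXECUTE ON dbo.usp_GetOrders TO [MACHINE\ASPNET];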
Can somebody give me some tips or hints to speed up my queries and stored procedures in SQL Server and get more performance? What is best to use, joins or cursors? Can using cursors give me more performance?
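As a general rule, set-based statements with joins beat cursors, which process one row at a time. A hedged illustration with made-up table names (dbo.Orders, dbo.Customers): instead of opening a cursor over customers and updating their orders one by one, a single joined UPDATE does the same work in one statement:

-- Set-based alternative to a row-by-row cursor (table and column names are examples)
UPDATE o
SET    o.Discount = 0.10
FROM   dbo.Orders    AS o
JOIN   dbo.Customers AS c ON c.CustomerID = o.CustomerID
WHERE  c.IsPreferred = 1;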
I'm new to replication, but I have already set up replication, have seen it working and failing, and have gotten myself out of jams so far. Still, there must be an easier way to administer it when things don't replicate as expected. I'm finding that I could easily kill half a day just trying to dig up information leading to troubleshooting tips. Is there documentation just on managing this feature? The regular MS Administrator's guide doesn't offer much.
Currently I have a problem: if replication fails on one command, I get a SQL Mail message telling me about the problem, but does replication continue to the next command or does it just stop until the problem is fixed? I'm finding that I am constantly checking the publisher and subscriber databases and verifying whether replication is indeed doing what the msjob_commands table reports. I set the batch to commit after each transaction instead of every 100.
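One thing that may help when a command fails is querying the distribution database directly; a rough sketch (these system objects ship with SQL Server replication, but check the version in use):

-- Recent distribution agent errors (run in the distribution database)
USE distribution;
SELECT TOP 20 time, error_code, error_text
FROM dbo.MSrepl_errors
ORDER BY time DESC;

-- Browse the replicated commands still queued up (can be narrowed with its optional parameters)
EXEC sp_browsereplcmds;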
We come across situations where people are running big updates on the database (e.g. 50,000 updates). Our problem is that those big updates are blocking other users' updates.
Thanks to snapshot isolation, users can query (SELECT) the database with no locks. We rebuilt the indexes so that the indexes used by the update procedure use only row locks, not page locks. We set the database to update statistics asynchronously. We are still facing blocking, and we would like to optimize the database to avoid it. What else could we check? Any tips on avoiding this problem are really welcome.
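A batching approach may also be worth checking: breaking the big update into smaller batches so each batch commits quickly and releases its locks. A rough sketch with made-up table and column names (dbo.Parts, NeedsUpdate); the batch size of 2,000 is arbitrary:

-- Batched update: each iteration touches at most 2,000 rows and commits immediately
DECLARE @rows int;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (2000) p
    SET    p.Price = p.Price * 1.05,
           p.NeedsUpdate = 0
    FROM   dbo.Parts AS p
    WHERE  p.NeedsUpdate = 1;

    SET @rows = @@ROWCOUNT;
END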
I've created a few indexes on my tables but before I over-do it I wanted to see if there were any good websites out there with recommendations. My plan is to create clustered indexes on my primary keys and non-clustered for each foreign key. Also there are a few fields that are regularly searched so I will index them separately as well.
I don't want too many though because I know that affects the performance of record inserts. I'm also not sure about using multiple keys in one index.
Is there a good site out there with tips on what indexes to create or avoid, or have I pretty much covered it?
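For concreteness, here is roughly the pattern I have in mind, with invented table and column names (dbo.OrderDetail):

-- Clustered index on the primary key (the default when the PK is created)
ALTER TABLE dbo.OrderDetail
    ADD CONSTRAINT PK_OrderDetail PRIMARY KEY CLUSTERED (OrderDetailID);

-- Non-clustered index on a foreign key column
CREATE NONCLUSTERED INDEX IX_OrderDetail_OrderID ON dbo.OrderDetail (OrderID);

-- A composite index ("multiple keys in one index") for a common search pattern
CREATE NONCLUSTERED INDEX IX_OrderDetail_Product_Date
    ON dbo.OrderDetail (ProductID, OrderDate);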
Hello everyone! I've got a problem with a really slow query; I would be very happy if somebody had any ideas to improve its speed... The idea is to get the top 2 products a customer hasn't bought which are in his interest...
query (simplified)
-------------------------------------------------
SELECT TOP 2 prodID, Title, Price
FROM bestSold7Days
WHERE prodID NOT IN
      (SELECT prodID
       FROM orders
       INNER JOIN orderProducts ON orders.orderID = orderProducts.orderID
       WHERE orders.custID = 394)
  AND prodType = COALESCE(
      (SELECT TOP 1 products.prodType
       FROM orders
       INNER JOIN orderProducts ON orders.orderID = orderProducts.orderID
       INNER JOIN products ON orderProducts.prodID = products.prodID
       WHERE orders.custID = 394
       GROUP BY products.prodType
       ORDER BY SUM(orderProducts.PCS) DESC), 2)
-------------------------------------------------
end query
(The COALESCE is there to supply a default if the customer hasn't ordered anything, or hasn't ordered anything of this type)...
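One rewrite that may be worth trying is NOT EXISTS instead of NOT IN, which often lets the optimizer use an anti-semi-join; this is just a sketch following the simplified column names above:

SELECT TOP 2 b.prodID, b.Title, b.Price
FROM bestSold7Days AS b
WHERE NOT EXISTS
      (SELECT 1
       FROM orders o
       INNER JOIN orderProducts op ON o.orderID = op.orderID
       WHERE o.custID = 394
         AND op.prodID = b.prodID)
  AND b.prodType = COALESCE(
      (SELECT TOP 1 p.prodType
       FROM orders o
       INNER JOIN orderProducts op ON o.orderID = op.orderID
       INNER JOIN products p       ON op.prodID = p.prodID
       WHERE o.custID = 394
       GROUP BY p.prodType
       ORDER BY SUM(op.PCS) DESC), 2);

Indexes on orderProducts(orderID, prodID) and orders(custID, orderID) would also be worth checking.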
Are there any best practices for indexing to support queries with MIN() and MAX() in them? What if MIN() and MAX() are partitioned? Super bonus question: what if MIN() and MAX() are not only partitioned, but are called on a field in a derived table, and one of the partitioning elements comes from a table that's being joined in the derived table?
I experimented with inserting the derived table into a temp table, putting a POC index on that, and querying out, but that actually took longer.
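For the simple case, the usual advice is a "POC"-shaped index on the base table: partitioning column(s) first, then the column the MIN/MAX runs over, with any remaining output columns included. A hedged sketch with invented names (dbo.Trades), assuming something like MIN(TradeDate) OVER (PARTITION BY AccountID):

-- Partition column first, then the aggregated column, then covering columns
CREATE NONCLUSTERED INDEX IX_Trades_Account_Date
    ON dbo.Trades (AccountID, TradeDate)
    INCLUDE (Amount);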
We are coming out of the dark ages with our app using SQL 7 and, following the excellent advice of the folks here on SQLTeam, installing SQL 2005 Express on our new webserver.
Not being terribly fluent in all things SQL, I was wondering if anybody could provide input on the best practices for getting SQL 2005 Express going on the new server.
So far I've:
- Installed SQL 2005 Express
- Downloaded and "installed" the SSEUtil for CMD line instructions
- Downloaded and installed the graphical management interface (very nice, makes me feel more comfortable - like the SQL 7 console!)
- Copied backup files (made using SQL backup maintenance) to the new server
Should I simply create an empty db of the same name on the 2005 server and then restore the SQL 7 data? Or ????
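Here is roughly what I had in mind for the restore route, if that's the right approach; the file names and paths are placeholders, and my understanding is that RESTORE creates the database itself, so no empty db would be needed first:

-- Find the logical file names inside the backup
RESTORE FILELISTONLY FROM DISK = N'C:\Backups\MyAppDb.bak';

-- Restore, relocating the files to the new server's paths
RESTORE DATABASE MyAppDb
FROM DISK = N'C:\Backups\MyAppDb.bak'
WITH MOVE 'MyAppDb_Data' TO N'C:\Data\MyAppDb.mdf',
     MOVE 'MyAppDb_Log'  TO N'C:\Data\MyAppDb.ldf',
     RECOVERY;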
I searched briefly for previous posts of this nature and didn't find too much info so I hope I'm not duplicating effort here...
Thanks in advance for any advice!
Mmmmmkay. Yeah, did you get the memo about the TPS reports?
I need to convert an Excel matrix into a table. Currently, the data consists of months going across the top and business names going down the left side. Each business name has three rows of data per monthly column, so there are three numbers in the January column, three in the February column, and so on.
I want to convert this to a table that has five columns: the business name, the date, and the three data columns.
Any help would be greatly appreciated. As of right now I'm staring at keying in about 2000 rows of data by hand.
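If the spreadsheet can be imported as-is into a staging table, UNPIVOT can turn the month columns back into rows without any re-keying. A hedged sketch, assuming SQL Server 2005+ and an invented staging table dbo.StagingMatrix(BusinessName, MeasureName, Jan, Feb, ..., Dec) with one row per business per measure:

-- Turn the twelve month columns into (MonthName, Amount) rows
SELECT BusinessName, MeasureName, MonthName, Amount
FROM dbo.StagingMatrix
UNPIVOT (Amount FOR MonthName IN
        ([Jan], [Feb], [Mar], [Apr], [May], [Jun],
         [Jul], [Aug], [Sep], [Oct], [Nov], [Dec])) AS u;

-- From there, the three measures can be re-pivoted (or self-joined) by business and month
-- to land on the five-column layout: business name, date, and the three data columns.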
Our parts table has 5k records. I want to use part number as a parameter for one of my reports. Is there a way to do this and have the report generate in a reasonable amount of time?
I have questions about Slowly Changing Dimensions. I am quite confused about when we should use Type 1 (changing), Type 2 (historical), or Type 3 (fixed) for the dimensions in each table. Are there any good suggestions on that?
Thank you in advance and I am looking forward to hearing from you.
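To make the Type 2 case concrete, here is a hedged illustration of what a historical attribute usually implies for the table design; all names are invented:

-- One row per version of the customer; a change to a Type 2 attribute adds a new row
CREATE TABLE dbo.DimCustomer
(
    CustomerKey     int IDENTITY(1,1) PRIMARY KEY,  -- surrogate key
    CustomerID      int           NOT NULL,         -- business key
    CustomerName    nvarchar(100) NOT NULL,         -- Type 1: overwrite in place on change
    CustomerSegment nvarchar(50)  NOT NULL,         -- Type 2: new row on change
    EffectiveFrom   datetime      NOT NULL,
    EffectiveTo     datetime      NULL,             -- NULL marks the current row
    IsCurrent       bit           NOT NULL DEFAULT 1
);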
I have an SSIS package that reads a text file and generates an output file from it after transformations.
Now, a 20MB text file (containing about 50,000 records) is taking around 5 mins to complete. There is a Data Flow Task which is taking the major chunk of the time. It contains the following:
1) File Source
2) Conditional Splits (2 in number)
3) Derived Column
4) Data Conversion Transformation
5) OLE DB Destinations (3 in number)
The number of records being processed is close to 50,000
Please share your tips for optimizing the package.
Originally posted by Jeremy at 12/10/2001 11:39:38 AM
Hello all,
I've written a simple DTS job that uses Oracle (8.x) as a source and Oracle (8.x) as a destination. I'm using SQL 2000 and Microsoft's OLE DB provider for Oracle as the two connections. I've chosen "Transform Data Task" with the following SQL: "SELECT * FROM REPORTER_STATUS WHERE LASTOCCURRENCE > TRUNC(SYSDATE)". As you can see, it's very simple; however, it's very, very, very slow (averages about 1000 rows per minute). In my column transformations, I've selected many-to-many versus the one-to-one. There are no ActiveX scripts or anything along those lines, just a simple push of the data from one Oracle box to the other. The table schemas are identical, etc. I've had this problem before with writing to Oracle and I can't imagine that it's really supposed to be this slow. If you need more details, please just let me know.
The official response from Microsoft is that DTS only allows single inserts - not bulk or BCP for Oracle. There must be someone out there who has figured out how to configure / modify / call (something) from a DTS package to insert millions of records into Oracle in a decent time frame...
I have a stored procedure that queries a database using a SELECT statement with some inner joins and conditions. With over 9 million records it takes 1 min 36 sec to complete. This is too slow for my requirements.
Is there any way I can optimize this query? I have thought about using an indexed view. I haven't done one before; does anyone know if this would have potential to improve performance, or indeed any other performance-enhancing techniques I might try?

SELECT vehicle.vehicle_id
FROM (( [vehicle]
INNER JOIN [vehicle_subj_item_assn] ON vehicle.vehicle_id = [vehicle_subj_item_assn].vehicle_id)
INNER JOIN [subj_item] ON [vehicle_subj_item_assn].subj_item_id = [subj_item].subj_item_id)
INNER JOIN [template_field] ON [subj_item].subj_item_id = [template_field].subj_attr_id
WHERE ([template_field].template_field_id = @template_field_id)
AND ([template_field].template_field_type_id = 3)
AND ([vehicle_subj_item_assn].subj_item_value_text = @value)
AND (vehicle.end_dtm IS NOT NULL)

Thanks
Gavin
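Before an indexed view, it may be worth checking that the join and filter columns are covered by non-clustered indexes (an indexed view also could not contain the @-variable filters, so those would stay in the procedure anyway). A hedged sketch - index names are invented, the best column order depends on selectivity, and INCLUDE needs SQL Server 2005+ (on 2000, add vehicle_id as a key column instead):

CREATE NONCLUSTERED INDEX IX_template_field_lookup
    ON dbo.template_field (template_field_id, template_field_type_id, subj_attr_id);

CREATE NONCLUSTERED INDEX IX_assn_item_value
    ON dbo.vehicle_subj_item_assn (subj_item_id, subj_item_value_text)
    INCLUDE (vehicle_id);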
I would like my transformation to automatically create an output column for each input column. Any tips? I can't seem to determine which event to listen to or method to override.
SELECT *
FROM (
    SELECT TOP 15 *
    FROM (
        SELECT TOP 15
            CMDS.STOCKCODE AS CODE, CMDS.STOCKNAME AS NAME, CMDS.Sector AS SEC,
            CMD7.REFERENCE AS REF, T1.HIGHP AS HIGH, T1.LOW, T1.B1_CUM AS 'B/QTY',
            T1.B1_PRICE AS BUY, T1.S1_PRICE AS SELL, T1.S1_CUM AS 'S/QTY',
            T1.D_PRICE AS LAST, T1.L_CUM AS LVOL, T1.Chg AS CHG, T1.Chgp AS CHGP,
            T1.D_CUM AS VOLUME, substring(T1.ST, 7, 6) AS TIME, CMDS.SERIAL AS SERIAL
        FROM CMD7, CMDS, CMD4 AS T1
        WHERE T1.ST IN (SELECT max(T2.ST)
                        FROM CMD4 AS T2, CMDS
                        WHERE T1.SERIAL = T2.SERIAL
                          AND CMDS.SERIAL = T2.SERIAL
                          AND T2.sd = '20060821'
                          AND CMDS.sd = '20060821'
                          AND T2.L_CUM < '1900'
                          AND CMDS.sector >= '1'
                          AND CMDS.sector <= '47')
          AND CMDS.SERIAL = T1.SERIAL
          AND CMDS.SERIAL = CMD7.SERIAL
          AND CMDS.sd = '20060821'
          AND CMD7.sd = '20060821'
          AND T1.sd = '20060821'
          AND T1.L_CUM < '1900'
          AND CMDS.sector >= '1'
          AND CMDS.sector <= '47'
        ORDER BY T1.D_CUM DESC) AS TBL1
    ORDER BY VOLUME ASC) AS TBL1
ORDER BY VOLUME DESC;
My server is a dual AMD x64 2.19 GHz with 8 GB RAM running Windows Server 2003 Enterprise Edition with Service Pack 1 installed. We have SQL 2000 32-bit Enterprise installed in the default instance. AWE is enabled using dynamically configured SQL Server memory, with a 6215 MB minimum memory and a 6656 MB maximum memory setting.
I have now installed, side by side, SQL Server 2005 Enterprise Edition in a separate named instance. Everything is running fine, but I believe SQL Server 2005 could run faster and I need to ensure I am giving it plenty of resources. I realize AWE is not needed with SQL Server 2005, and I have seen suggestions to grant the SQL Server account the 'lock pages in memory' right. This box only runs the SQL 2000 and SQL 2005 server databases, and I would like to ensure, if possible, that each is splitting the available memory equally, at least until we can retire SQL Server 2000 next year. Any suggestions?
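Whatever split is chosen, the cap is set per instance with sp_configure; a hedged sketch (the 3072 MB value below is only a placeholder, not a recommendation for this particular box):

-- Run against each instance separately, with the appropriate cap for that instance
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 3072;
RECONFIGURE;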
We have an old machine which holds a SQL Server 2000 database. We need to migrate the whole database to a new machine which has SQL Server 2005.
When we tried to move the whole database using the Import and Export Wizard, only tables could be selected to import/export. However, we want to import/export the whole database, including tables, stored procedures, views, etc. Which tool should we use?
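One option that carries everything (tables, procedures, views, users) in a single step is detach/attach; a hedged sketch with placeholder names and paths, noting that the database is upgraded on attach and cannot be moved back to SQL 2000 afterwards:

-- On the SQL Server 2000 machine
EXEC sp_detach_db 'MyOldDb';
-- Copy MyOldDb.mdf and MyOldDb_log.ldf to the new machine, then on SQL Server 2005:
CREATE DATABASE MyOldDb
    ON (FILENAME = N'D:\Data\MyOldDb.mdf'),
       (FILENAME = N'D:\Data\MyOldDb_log.ldf')
    FOR ATTACH;

Backup and restore (BACKUP DATABASE on 2000, RESTORE DATABASE on 2005) works the same way if the old server needs to stay online during the move.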