I'm aware that it's best practice to separate mdf and ldf files onto separate drives.
However, I see a lot of servers where all of the drives on the server share the same underlying disk array.
Is there still any performance benefit to separating mdf and ldf files in this situation?
For example, a single virtual server running SQL Server, with multiple drives attached. All of the drives are connected to shared storage via iSCSI. The drives C:, D:, E:, etc. are all actually sharing the same underlying disks.
Obviously, there are some benefits from an administration perspective whereby individual drives can be reconfigured without affecting the others.
I am pulling out-of-range values down from a single table on one database to a different table on a different database on a different server (one I have full access to). Basically, it looks something like this:
id1  value1  prev_value1  value2  prev_value2  date  prev_date
id2  value1  prev_value1  value2  prev_value2  date  prev_date
id3  value1  prev_value1  value2  prev_value2  date  prev_date
all the "prev"'s are null. I want to do one do one query that will get me the previous values and dates for each id from the original database. how to do this.
I have several reports that are looking for a code within a certain set of codes or ranges. The specific list of codes to be included is determined by the end user. Currently my "IN" statement can be a hundred lines long, listing several ranges, lists of specific codes, etc. I am constantly getting asked what codes it includes, whether this code is included, etc. Sometimes they'll give me a printed 10-page list of codes and want me to compare it to what I have included in the report. Not ideal in the slightest.
What I'd like to do is have a table or a file of some kind somewhere where the end user can view the codes contained, add new ones, and delete ones they no longer want. Then I'd like to be able to just reference that file in my IN statement, leaving the responsibility for listing the correct codes on them.
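A minimal sketch of that setup, assuming a hypothetical lookup table dbo.ReportCodes that the end users maintain (directly, or through a simple front end or linked spreadsheet):

-- One row per code the reports should include.
CREATE TABLE dbo.ReportCodes (
    Code varchar(20) NOT NULL PRIMARY KEY
);

-- The report query references the table instead of a hundred-line literal list:
SELECT *
FROM dbo.Claims AS c    -- hypothetical report table
WHERE c.Code IN (SELECT rc.Code FROM dbo.ReportCodes AS rc);

Answering "is this code included?" then becomes a simple SELECT against dbo.ReportCodes rather than a read through the report's source.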
I have two tables, each with one row-identifier column of int datatype. Both of these columns are part of the respective primary keys. Now, as part of my process, I'm inserting one small part of the data from one table into the other table. This was working fine but suddenly I started getting an error like:

Violation of PRIMARY KEY constraint 'PK_TargetTable'. Cannot insert duplicate key in object 'DW.TargetTable'. The duplicate key value is (58544748).

First I checked with DBCC CHECKIDENT with NORESEED and found that there is a difference between the current identity value and the current column value. I fixed it by running DBCC CHECKIDENT. But to my surprise I got the same issue again. The interesting thing is that the error comes after inserting 65466 records.
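For reference, a sketch of the reseed, assuming the identity should continue from the current maximum key value (the identity column name is hypothetical):

-- Compare the current identity value with the current maximum column value.
DBCC CHECKIDENT ('DW.TargetTable', NORESEED);

-- Reseed so the next generated value is MAX(RowID) + 1.
DECLARE @max int;
SELECT @max = MAX(RowID) FROM DW.TargetTable;   -- RowID: hypothetical identity column
DBCC CHECKIDENT ('DW.TargetTable', RESEED, @max);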
I am using the following queries in a stored procedure. This stored procedure is executed through a .NET application.
DECLARE @DEPTNBR BIGINT;

SELECT @DEPTNBR = DEPTNBR
FROM DEPARTMENT_DETAILS WITH (UPDLOCK, READPAST)
WHERE STATUS = 1;

UPDATE DEPARTMENT_DETAILS
SET STATUS = 0
WHERE DEPTNBR = @DEPTNBR;

SELECT DEPTNBR, DEPTNAME, DEPTLOC
FROM DEPARTMENT_DETAILS
WHERE DEPTNBR = @DEPTNBR;
These queries hand out an available department's information. Each user needs to get a unique available department. But when many users are using the application concurrently, multiple users get the same department information. How can I solve this? I always want each user to get a unique department even when multiple users are using the application concurrently.
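A minimal sketch of one common fix, assuming the intent is a queue-style "claim one row" operation: make the claim atomic by doing it in a single UPDATE with an OUTPUT clause, so no two sessions can read the same row between a separate SELECT and UPDATE. READPAST makes concurrent callers skip rows another session has already locked, so each caller claims a different department. (Column types on the table variable are assumptions.)

DECLARE @claimed TABLE (DEPTNBR BIGINT, DEPTNAME VARCHAR(100), DEPTLOC VARCHAR(100));

-- Atomically flip one available row to taken and capture it in the same statement.
UPDATE TOP (1) d
SET STATUS = 0
OUTPUT inserted.DEPTNBR, inserted.DEPTNAME, inserted.DEPTLOC INTO @claimed
FROM DEPARTMENT_DETAILS AS d WITH (ROWLOCK, UPDLOCK, READPAST)
WHERE STATUS = 1;

SELECT DEPTNBR, DEPTNAME, DEPTLOC FROM @claimed;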
I am extremely new to database design, and I ran into a problem that I know comes up often but attracts many opinions...
Basically I have a table that is going to have 50+ columns. The natural key on this table is actually 8 columns wide, four of them varchar columns (varchar(50) by default).
I have added an IDENTITY(1,1) column to the table, however I put the clustered index on the 8 natural key columns... My plan is to rebuild the clustered index nightly when the system isn't in use (after 7 pm).
I know others would say it would be better to put the clustered key on the identity column and then add indexes on the other 8 fields... However, honestly, I don't quite understand why...
Every single query against this table will use the 8 columns, and will NOT use the identity column, because the calls come from other systems that do not know the identity column....
Therefore if your database is set up for query speed, and every single query has to have a value for 8 columns to get a valid result, does it make sense to put a clustered index over the 8 columns?
If not why? Why is putting a clustered index on an identity column (that will literally never be used in a query) a better solution?
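For comparison, a sketch of the design the "cluster on the identity" camp usually suggests, with hypothetical names: a narrow, ever-increasing clustered key plus a unique nonclustered index on the 8 natural key columns. Queries filtering on the 8 columns still seek on an index; the gains are that inserts append instead of splitting pages in the middle of the table, and that every nonclustered index row carries a 4-byte identity as its row locator instead of the full 8-column (partly varchar(50)) key.

-- Hypothetical table and column names.
CREATE TABLE dbo.BigTable (
    Id int IDENTITY(1,1) NOT NULL,
    K1 varchar(50) NOT NULL, K2 varchar(50) NOT NULL,
    K3 varchar(50) NOT NULL, K4 varchar(50) NOT NULL,
    K5 int NOT NULL, K6 int NOT NULL, K7 int NOT NULL, K8 int NOT NULL,
    -- ... 40+ more columns ...
    CONSTRAINT PK_BigTable PRIMARY KEY CLUSTERED (Id)
);

-- Queries that supply all 8 natural key values seek here.
CREATE UNIQUE NONCLUSTERED INDEX UX_BigTable_NaturalKey
    ON dbo.BigTable (K1, K2, K3, K4, K5, K6, K7, K8);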
There is a bug in the SQL Server 2005/2008 replication system which may break data integrity when using the @@IDENTITY function to update the FOREIGN KEY of some table.
When merge replication is set up and there is a table article with an IDENTITY column in it, after inserting a new row in the table the @@IDENTITY function does not actually show the just-inserted row's identity value.
This issue is also generated when performing inserts via ADO.
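The standard advice for this class of problem (offered here as the usual workaround, not a confirmed fix for the replication bug itself) is to use SCOPE_IDENTITY() instead of @@IDENTITY: @@IDENTITY returns the last identity value generated anywhere in the session, including by the tracking triggers that merge replication adds, while SCOPE_IDENTITY() is limited to the current scope. Table names below are hypothetical.

INSERT INTO dbo.Parent (Name) VALUES ('example');

DECLARE @newId int;
SET @newId = SCOPE_IDENTITY();            -- unaffected by replication trigger inserts

INSERT INTO dbo.Child (ParentId, Detail)  -- the FOREIGN KEY column gets the right value
VALUES (@newId, 'detail row');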
I am trying to add 2 separate columns from separate tables, i.e. column1 should be added to column2 when a row is inserted, and I want to use a trigger but I don't know the syntax to use...
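A minimal sketch of the trigger syntax, under assumed names and intent (TableA.Column1 gets added into TableB.Column2 for rows sharing a key), since the question leaves the exact schema open:

-- Fires after inserts on TableA; multi-row safe because it joins the
-- "inserted" pseudo-table rather than reading single values.
CREATE TRIGGER trg_TableA_AddToTableB
ON dbo.TableA
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE b
    SET b.Column2 = b.Column2 + i.Column1
    FROM dbo.TableB AS b
    JOIN inserted AS i ON i.KeyId = b.KeyId;   -- KeyId: hypothetical shared key
END;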
Using VS 2008 Beta 2, SQL CE 3.5, on the desktop, with typed DataSets: the INSERT command of the DataSet's table adapter does not return the identity of the inserted row. Why?
Also, every time I try to modify the INSERT command to return the identity of the inserted row, I get the error: "Unable to parse query text."
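If I recall correctly, SQL Server Compact does not support multi-statement batches, which is why appending an identity SELECT to the generated INSERT fails to parse. The usual workaround is to run the identity lookup as a second command immediately after the insert, on the same open connection (table name hypothetical):

-- SQL CE rejects this as a single command ("Unable to parse query text"):
--   INSERT INTO Orders ([Name]) VALUES (@Name); SELECT @@IDENTITY;
-- Run the statements as two separate commands instead:
INSERT INTO Orders ([Name]) VALUES (@Name);   -- command 1
SELECT @@IDENTITY;                            -- command 2, same connection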
Hi, I am having a problem bulk-loading a SQL Server table that has an identity column from a DataTable (which has no identity column) using SqlBulkCopy. I tried several approaches, but it does not show any error, nor is the table getting updated. But the identity value seems to be getting increased every time. Thanks. Varun
There are a few features in the new SQL Server - Reporting Services that I really need in production. I have tested everything and it works great. I am running the CTP version since Microsoft is saying they aren't releasing the release version until 3rd quarter 2008.
Since Microsoft won't sell SQL 2008 until 3rd quarter, can I run the CTP in production until the release and then purchase SQL 2008?
Is it possible to stick RS on a separate web server without SQL Server being on that box? What steps do I have to take to install the RS web services portion on this web server?
Hello - does anyone have experience w/SQL Server 2005 in a virtual environment? I'm considering this for a production environment but am not sure if performance will suffer. Our databases will have a lot of writing but not too much reading. An SSRS solution is currently the only app connecting to the SQL db. Max users on the server at any given time will be very low (~10 users max). But the databases are pulling in data from multiple outside data sources on a daily basis.
Hello! Recently, I set up a server with Windows Web Server 2008 RC1, SQL 2008 Express beta, .NET 3.5, and IIS 7. I'm running an ASP.NET web application with a SQL database. Everything works fine until the first application state on the server expires. After that, any postback that starts a new application state on the server and connects to the database results in the following error:

Failed to generate a user instance of SQL Server due to a failure in starting the process for the user instance. The connection will be closed.

Is this a bug that will be fixed in the release of Windows / SQL, or am I doing something wrong? Many thanks for help, Jan
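That error is the Express "user instance" feature failing to spin up a per-user copy of SQL Server, typically for the IIS application pool identity. Two common workarounds, offered as assumptions to try rather than a confirmed fix: delete the stale user-instance data folder under that identity's profile, or stop using user instances altogether by setting User Instance=False in the connection string and attaching the MDF to the Express instance directly, e.g. (database name and path hypothetical):

CREATE DATABASE MyAppDb
ON (FILENAME = 'C:\inetpub\MyApp\App_Data\MyAppDb.mdf')
FOR ATTACH;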
Please help me decide what to do about my current hardware configuration. I have an ASP.NET app that uses SQL Server for the database. Currently both IIS and SQL are running on the same machine (see machine 1 below). I want to separate it so that IIS and SQL each have their own machine, but I have a very limited hardware budget right now. I am trying to decide if it would be worth moving either IIS or SQL to another machine that we have, or if I would actually lose performance by doing so, considering the extra machine I have is a bit outdated (see machine 2 below). Should I leave well enough alone or try to split it onto these 2 machines I have? (Buying new machines isn't an option right now, although that's what I'd like to do.) I could probably afford a memory upgrade on one or both computers if necessary.

Machine 1: dual Xeon 1.8 GHz w/ 1 GB RAM
Machine 2: P3 1.13 GHz w/ 512 MB RAM

Thanks
I am trying to tie together tables that show quantities of a product committed to an order and quantities on hand by a location.
My end result should look like the below example.
Item   Location  QtyOnHandByLocation  SumQtyCommitTotal
Prod1  NJ        10                   10
Prod1  NY        10                   0
Prod1  FL        0                    0
Prod1  PA        0                    0
So I can see I have 10 items on hand in NJ and committed to an order, 10 available in NY but not on an order, and the other two locations have no quantities.
Below is the CTE, but it produces inaccurate results. I've tried running it several different ways by playing with the grouping but have had no luck thus far.
--create the temp tables
CREATE TABLE #SalesLine (
    No           varchar(50) NOT NULL,
    LocationCode varchar(50) NOT NULL,
    QtyCommit    int NOT NULL
);

CREATE TABLE #ItemLedgerEntry
[code]....
I am close to the desired results but can't find a way.
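A sketch of one way to get the desired shape, assuming #ItemLedgerEntry carries No, LocationCode, and Quantity columns: aggregate each table per item and location first, then LEFT JOIN the committed totals onto the on-hand totals so locations with nothing committed still show 0. If a location must appear even with no ledger rows at all, drive the query from a locations table instead.

;WITH OnHand AS (
    SELECT No, LocationCode, SUM(Quantity) AS QtyOnHandByLocation
    FROM #ItemLedgerEntry            -- column names assumed
    GROUP BY No, LocationCode
),
Committed AS (
    SELECT No, LocationCode, SUM(QtyCommit) AS SumQtyCommitTotal
    FROM #SalesLine
    GROUP BY No, LocationCode
)
SELECT o.No AS Item,
       o.LocationCode AS Location,
       o.QtyOnHandByLocation,
       COALESCE(c.SumQtyCommitTotal, 0) AS SumQtyCommitTotal
FROM OnHand AS o
LEFT JOIN Committed AS c
       ON c.No = o.No AND c.LocationCode = o.LocationCode
ORDER BY o.No, o.LocationCode;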
I am currently running a Windows 2000 machine with ASP, SQL Server, mail server, FTP server, etc. all on the same box. The site runs several hundred ecommerce stores. Recently the processor utilization has been spiking and I have decided to get another server and use SQL Server on one and ASP on the other.

So now I have a new Windows 2003 server that I have set up all of the ASP code on. Problem is that when I run the ASP code from the new Windows 2003 server it is extremely slow compared to the code running on the old Windows 2000 server, which is where the SQL Server database is also located.

From everything I have read, the best way to optimize your site is to use 2 separate servers, one for IIS/ASP and one for SQL Server. Am I doing something wrong here or is this normal?? Could this possibly be just because the old server is still serving many requests and is pushing the requests from the new server to the back of the line? Does anyone have any ideas?

The syntax I am using to open the connection string is:

db_ConnectionString = "Driver=" & db_Driver & ";Server=" & db_Server & ";UID=" & db_UIN & ";PWD=" & db_pwd & ";Database=" & db_Database & ";"
conn_store.Open db_ConnectionString

where db_Server is the IP address of the Windows 2000 server. Is there a better way to do it across a network?? Any help or ideas would be much appreciated.
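One thing worth trying, offered as a guess rather than a diagnosis: the string above goes through the ODBC driver, while classic ASP across a network more commonly uses the SQLOLEDB provider, which skips the ODBC layer. A sketch using the same variables:

db_ConnectionString = "Provider=SQLOLEDB;Data Source=" & db_Server & ";Initial Catalog=" & db_Database & ";User ID=" & db_UIN & ";Password=" & db_pwd & ";"
conn_store.Open db_ConnectionString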
I downloaded the "Microsoft SQL Server 2008 Express CTP, February 2008" from http://www.microsoft.com/downloads/details.aspx?FamilyId=749BD760-F404-4D45-9AC0-D7F1B3ED1053&displaylang=en
I simply replaced the 2005 file "SQLEXPR.EXE" with the 2008 file "", recompiled the installation and tested, only for it to fail. I then read the 2008 Books Online and noted the change in command line options.
I then changed the command line to suit the Microsoft 2008 Books Online, recompiled the installation and tested, only for it to fail once more.
Interestingly, I tested the install from the default GUI, and at the point of adding the "sa" login credentials it fails to allow the installation to proceed. Strangely, by selecting the Windows authentication credentials, then "next" then "back", it now allows me to add the "sa" login credentials and continues to install correctly as required.
I hope I have explained this clearly enough.
1. Is this a bug in the "Microsoft SQL Server 2008 CTP, February 2008" installation?
2. If so, is this causing the command line install options to fail?
3. How do I obtain a version of "Microsoft SQL Server 2008 Express" that will work installing from the command line?
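On question 3, the 2008 setup switches genuinely differ from the 2005 SQLEXPR.EXE options. A sketch of an unattended install in the style the 2008 Books Online describes (values are placeholders, and the exact parameter set may differ between CTP builds, so treat this as a starting point rather than a definitive command):

setup.exe /q /ACTION=Install /FEATURES=SQLENGINE /INSTANCENAME=SQLEXPRESS /SECURITYMODE=SQL /SAPWD="StrongPassword!1" /SQLSYSADMINACCOUNTS="BUILTIN\Administrators"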
I've got Reporting Services on a different box from the database, and I can see all the reports, but when I try to set up a subscription, I get this weird error:
The SQL Agent service is not running. This operation requires the SQL Agent service. (rsSchedulerNotResponding)
The same error happens when I connect to the database server via management studio and try to run a job.
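Two likely suspects worth checking on the database server, offered as common causes rather than a confirmed diagnosis: either the SQL Server Agent service really is stopped (start it from Services, or net start SQLSERVERAGENT for a default instance), or the service is running but the 'Agent XPs' option is disabled, which can make tools report Agent as not running:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Agent XPs', 1;
RECONFIGURE;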
If you see below, there are 2 customer names on 1 loan; most of them share the same last name and address. I want to separate it into fields: LoanID, Customer 1 FirstName, Customer 1 LastName, Customer 2 FirstName, Customer 2 LastName, Address, Zip.
LEFT JOIN Status AS S ON S.LoanID = L.LoanID
LEFT JOIN Borrower B ON B.LoanID = L.LoanID
LEFT JOIN MailingAddress MA ON MA.LoanID = L.LoanID
WHERE S.PrimStat = '1' AND B.Deceased = '0'
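A sketch of the usual pattern for flattening two borrower rows into one loan row: number the borrowers within each loan, then pivot rows 1 and 2 into columns with conditional aggregation. The column names (FirstName, LastName, BorrowerID, Address, Zip) and the base Loan table behind alias L are assumptions based on the fragment above.

;WITH NumberedBorrowers AS (
    SELECT B.LoanID, B.FirstName, B.LastName,
           ROW_NUMBER() OVER (PARTITION BY B.LoanID
                              ORDER BY B.BorrowerID) AS rn   -- assumed ordering column
    FROM Borrower AS B
    WHERE B.Deceased = '0'
)
SELECT L.LoanID,
       MAX(CASE WHEN nb.rn = 1 THEN nb.FirstName END) AS Customer1FirstName,
       MAX(CASE WHEN nb.rn = 1 THEN nb.LastName  END) AS Customer1LastName,
       MAX(CASE WHEN nb.rn = 2 THEN nb.FirstName END) AS Customer2FirstName,
       MAX(CASE WHEN nb.rn = 2 THEN nb.LastName  END) AS Customer2LastName,
       MAX(MA.Address) AS Address,
       MAX(MA.Zip)     AS Zip
FROM Loan AS L
JOIN NumberedBorrowers AS nb ON nb.LoanID = L.LoanID
LEFT JOIN MailingAddress AS MA ON MA.LoanID = L.LoanID
GROUP BY L.LoanID;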
I'm encountering a "Named Pipes Provider, error: 40" issue and am having problems determining how to fix it due to the environment I'm using. I have two SQL Servers installed on two separate Win2K3 Server boxes: one is SQL Server 2000 and the other is SQL Server 2005. The SQL Server 2000 contains the actual application data. The 2005 database is used only for Reporting Services. I've set up the reports on SSRS such that their data sources hit the 2000 server. This is using SQL Server authentication.
When testing the reports via SSRS (in Visual Studio 2005), the connection to the data works and the reports are generated fine. When I deploy them to the reporting server and launch IE to test locally (still on the 2005 box), I get this "Named Pipes Provider, error 40" issue. I made sure that Named Pipes and TCP were enabled and the port set at 1433 (to match that on the 2000 box).
Now I changed the datasource's authentication from SQL Server to Windows authentication. I tested this in SSRS and this works too. When I redeployed the reports with this authentication change, testing the reports via IE locally (on the 2005 box) worked (using the Administrator login). Great. Now when I open IE on an external box, i.e. on the 2000 box, and try to test the reports using that server's Administrator login, I get this same error 40 issue. I've been through a few threads describing error 40, fiddling around with the SQL Server configuration as well as SSRS, to no avail. I have a feeling this error 40 issue has to do with permissions/authentication between the SQL Server boxes, but I can't really be sure. Anyone have any ideas on how to troubleshoot my situation? Thanks.
We have multiple databases on a single instance in an OLTP environment. I have my data files on a separate SAN LUN from my transaction log files (and a few NDFs split out onto additional LUNs). I was wondering if there is a performance benefit to putting each LDF file on its own LUN? Or at least my few busiest LDFs?
We are currently on 2012, but I'm having to put together specs for a 2014 installation and need to answer this question without having an environment in which I can benchmark different setups. I just want to hear whether or not others have done this (why or why not?).
I am setting up two active instances of MS SQL Server on a clustered box. In the past I have set up using the nomenclature SQLCLUSTERNAME\INSTANCE-NAME, such as SQLCLUS1\INST1 and SQLCLUS2\INST2.
But this time the client would like it installed as SQLCLUS1\INST1 and SQLCLUS1\INST2. I did not think this was possible --- that is, I assumed that cluster resource names have to be unique. But nevertheless I have tried to do so for the last few days, with no luck.
Can someone please let me know if it is indeed possible to have two separate instances in a single cluster?
BTW, I know this is quite possible with non-clustered instances --- where the SQLSERV1 in SQLSERV1\INST1 refers to an actual server name, and not a resource name.