I was informed by Microsoft that for all SQL Server versions prior to 2008 the recommended (or required) stripe size is 64 KB. Does anyone know the full set of settings that must be applied when setting up RAID 5 for a SQL Server box?
I've been asked to build a 64-bit Windows 2003 server which will run SQL Server 2000 Enterprise 64-bit edition. This is my first foray into the world of 64-bit server OSs, so I'm hoping to get some advice from the wise. This will be used as a back end to a web application. The server was purchased with 5 disks (all 146 GB) in it. Currently the manufacturer has set it up as RAID 1+1 with 1 disk not allocated.
I was going to reconfigure the logical drives as RAID 1 (2 drives) holding the OS and swap file, and RAID 5 (3 drives) holding all the SQL data (the main DB I estimate at 20-30 GB).
Reading through a couple of forums and Google results on determining stripe size, some people recommend putting tempdb and the logs on a separate drive from the rest of the SQL data, as well as trying to get the optimum stripe size.
Can you let me know your opinions on whether my proposed logical drive setup seems OK (or whether to just keep one logical drive as either RAID 5 or 6), and whether tempdb and the logs should go on the OS array or stay on the RAID 5 array? Also, for the stripe size, should I think the same way as on a 32-bit OS and just use 128/64 or 64/64?
Please could you tell me how big SQL tables are when people refer to them as small, medium and large? Preferably in terms of disk space or rows (each row in my table will contain a standard-length job advert plus 20 additional columns averaging 8 characters each).
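For a rough worked estimate, assuming the advert body averages around 2 KB of text: 20 columns x 8 characters adds roughly 160 bytes, so each row is on the order of 2.2 KB, or about 3 rows per 8 KB data page. A million such rows would then take roughly 2.7 GB before indexes; that kind of arithmetic is usually what sits behind the small/medium/large labels.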
I have a server set up with the standard recommended RAID 10-5-10 setup (RAID 10 for the OS, RAID 5 for the data, and RAID 10 for the transaction logs). I'm running out of space on my RAID 5 but have lots of extra space on the RAID 10 where my transaction logs are. I currently dump my backup files to disk and then use tape to back them up. I have been putting these files on the RAID 5 array, but was going to move them to the RAID 10 array. Has anyone seen any downside to doing this?
Background: the server's RAID failed; I rebuilt the RAID and ran chkdsk, but now I am unable to start the SQL service.
I've tried to start the service manually but receive the following message:
The MSSQLSERVER service on the local computer started and then stopped. Some services stop automatically if they have no work to do.
Here's all I have in the error logs
2007-08-19 10:56:39.98 server    Microsoft SQL Server 2000 - 8.00.2039 (Intel X86) May 3 2005 23:18:38 Copyright (c) 1988-2003 Microsoft Corporation Developer Edition on Windows NT 5.2 (Build 3790: Service Pack 1)
2007-08-19 10:56:39.99 server    Copyright (C) 1988-2002 Microsoft Corporation.
2007-08-19 10:56:39.99 server    All rights reserved.
2007-08-19 10:56:39.99 server    Server Process ID is 1812.
2007-08-19 10:56:39.99 server    Logging SQL Server messages in file 'f:\MSSQL\log\ERRORLOG'.
2007-08-19 10:56:40.01 server    SQL Server is starting at priority class 'normal' (2 CPUs detected).
2007-08-19 10:56:40.15 server    SQL Server configured for thread mode processing.
2007-08-19 10:56:40.18 server    Using dynamic lock allocation. [2500] Lock Blocks, [5000] Lock Owner Blocks.
2007-08-19 10:56:40.23 server    Attempting to initialize Distributed Transaction Coordinator.
2007-08-19 10:56:42.29 spid4     Starting up database 'master'.
2007-08-19 10:56:42.31 spid4     Error: 5172, Severity: 16, State: 15.
2007-08-19 10:56:42.31 spid4     Error: 5173, Severity: 16, State: 1.
2007-08-19 10:56:42.31 spid4     Error: 5180, Severity: 22, State: 1.
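For what it's worth, errors 5172/5173/5180 during 'Starting up database master' usually mean the master database files themselves were damaged by the failed RAID, so the service stops immediately. A minimal recovery sketch, assuming a recent backup of master exists (the backup path below is invented):

    -- Start the instance in single-user mode first, e.g. sqlservr.exe -c -m,
    -- then from a single connection restore master; the instance shuts
    -- itself down automatically when the restore completes.
    RESTORE DATABASE master
    FROM DISK = 'f:\backups\master_full.bak'   -- hypothetical backup path
    WITH REPLACE;

If there is no backup of master, SQL Server 2000 ships the rebuildm.exe utility to rebuild the system databases from the installation media, after which the user databases have to be re-attached.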
Quick question on setting up a 3-disk SQL 7.0 system: can anyone think of a benefit to segregating a single RAID 5 disk array into numerous logical partitions to separate the OS, the database files and the transaction logs? I would assume performance would be unaffected (as the drives act as a single array for reads and writes anyway), so other than general organization, what advantage (if any) would be gained over making a single large logical partition?
Can anyone help me? I just installed WinNT 4 SP3 and am now trying to install SQL 6.5 on an IBM Netfinity 5500 with RAID 5. There is one 500 MB partition for the system and then one 12 GB partition, and SQL only seems to see ~117 MB of the 12 GB partition. Is this a known problem? Thanks for any input. Josh
An interview question: when should you use a table variable and when a temp table? I told the interviewer that when the row count is small, say hundreds or a thousand, use a table variable, otherwise use a temp table. He then asked what I meant by "less data": even a small number of rows may involve many columns and so make up a large data set.
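A sketch of the two options side by side (the orders table and column names are invented). The practical difference is less about raw size than about statistics: a temp table gets column statistics and can be indexed after creation, while a table variable has no statistics, so the optimizer assumes it holds very few rows -- which hurts once you join it to other tables, regardless of how wide the rows are.

    -- Table variable: no statistics, scoped to the batch
    DECLARE @recent_orders TABLE (order_id INT PRIMARY KEY, total MONEY);
    INSERT INTO @recent_orders
    SELECT order_id, total FROM orders WHERE order_date > '20070101';

    -- Temp table: has statistics, can be indexed, visible to the whole session
    CREATE TABLE #recent_orders (order_id INT PRIMARY KEY, total MONEY);
    INSERT INTO #recent_orders
    SELECT order_id, total FROM orders WHERE order_date > '20070101';
    CREATE INDEX ix_total ON #recent_orders (total);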
The application will add items into a "bag"; that is, items in one table will refer to a record in another table. This happens gradually, with second- or minute-long delays between new items, and there can be up to a thousand items per bag. One option is to wait until a full bag accumulates and set up all the references at once by using
UPDATE items SET container_ref = bag WHERE id IN (...)
The disadvantage of the all-at-once approach, as I see it, is the inability to encapsulate the functionality in a stored procedure -- the problem is passing a set of IDs. The advantage should be efficiency in terms of total SQL Server load. How much would that gain be?
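For the encapsulation problem, one common workaround (sketched below; procedure and helper names are invented) is to pass the IDs as a comma-separated string and split it inside the procedure against a numbers table. On SQL Server 2008 and later a table-valued parameter does the same job more cleanly.

    CREATE PROCEDURE dbo.FillBag
        @bag INT,
        @ids VARCHAR(8000)    -- e.g. '17,42,133'
    AS
    BEGIN
        UPDATE i
        SET    i.container_ref = @bag
        FROM   items AS i
        JOIN   (SELECT CAST(SUBSTRING(',' + @ids + ',', n + 1,
                            CHARINDEX(',', ',' + @ids + ',', n + 1) - n - 1) AS INT) AS id
                FROM   dbo.numbers            -- assumed helper table of integers 1, 2, 3, ...
                WHERE  n < LEN(',' + @ids + ',')
                  AND  SUBSTRING(',' + @ids + ',', n, 1) = ',') AS parsed
               ON parsed.id = i.id;
    END

The single set-based UPDATE touches the items table once, so the saving over a thousand row-by-row updates is mostly in logging, locking and round trips.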
1. Combine all the company's data in one large database, and use schemas and filegroups to create the logical and physical distribution across drives and namespaces (a sketch of this option follows the list)
or
2. Distribute the data into smaller databases with related data -- e.g. products and product descriptions in one DB, customers in another, and orders and order lines in a third.
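A minimal sketch of what option 1 looks like in practice, with invented database, file and schema names: filegroups handle the physical placement on drives, schemas the namespaces.

    -- Physical distribution: a filegroup with its file on its own drive
    ALTER DATABASE CompanyDB ADD FILEGROUP SalesData;
    ALTER DATABASE CompanyDB
        ADD FILE (NAME = 'SalesData1', FILENAME = 'E:\Data\SalesData1.ndf', SIZE = 10GB)
        TO FILEGROUP SalesData;
    GO
    -- Logical distribution: a schema as a namespace
    CREATE SCHEMA Sales;   -- must be alone in its batch
    GO
    -- The table lives in the Sales namespace and on the SalesData drives
    CREATE TABLE Sales.Orders (OrderID INT PRIMARY KEY, CustomerID INT) ON SalesData;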
System.OverflowException: Value was either too large or too small for an Int32. Why does this error originate in the following lines?

    SqlCommand cmd = new SqlCommand("SELECT Count(*) FROM Contacts", conn);
    ...
    DataSetContacts.ContactsRow row = ds.Contacts.NewContactsRow();
    ...
    row["ContactNumber"] = Convert.ToInt32(txtContactNo.Text);

The ContactNumber field is SqlDbType.Int.
Hi,

This is a question of "what does it cost me". Let's say I have an integer value which would fit into a smallint field, but the field is actually defined as int, or even larger as bigint. What would that "cost" me? How would definitions larger than I need for the values in the field affect me?

It's obvious that the volume of the database would grow, but with the resources we have nowadays disk space isn't the problem it used to be, and I/O is much faster, so many people would tell me "who cares" -- or IS it a problem? How does it affect the performance of retrieves? Searches? Updates and inserts? How would it affect all DB access if tables point at each other with foreign keys?

Thanks!
David Greenberg
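A quick way to see the raw cost difference (a sketch; the table name is invented): smallint is 2 bytes, int 4, bigint 8, and every row, every index entry on the column, and every foreign key that copies the value pays that difference.

    CREATE TABLE dbo.width_test (a SMALLINT, b INT, c BIGINT);
    INSERT INTO dbo.width_test VALUES (1, 1, 1);

    SELECT DATALENGTH(a) AS smallint_bytes,   -- 2
           DATALENGTH(b) AS int_bytes,        -- 4
           DATALENGTH(c) AS bigint_bytes      -- 8
    FROM dbo.width_test;

Narrower rows mean more rows per 8 KB page, so fewer pages to scan and shallower indexes; that is where any performance effect shows up, not in the arithmetic on the values themselves.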
I've got some data in a table called Dim.Sources that I generated with a data-generation program, so the characters are weird, but it is in the database.

When I process the dimension, I get an error like: the size specified for the link is too small, and it will truncate one or more values of the column...

It only happens for the name column, not for the id column (I guess because that column contains only numbers).

I'm going nuts! Please, any help will be much appreciated! I can't find any information about this on the internet!

I don't remember defining any link size! Where can I change it? What is it?? : )
This is a tough error. The situation: we are trying to manage stored procedures (ALTER, CREATE, DROP statements) on one of our SQL 2005 servers and this error spews out: "Target string size is too small to represent the XML instance". Now, the script I wrote (a simple ALTER statement adding a column) works on 4 other SQL 2005 servers, but when I try this particular server I get that error. Searching Google, Yahoo, and MSN, all three point to one single instance of this error, right here on Don Kiely's blog http://www.sqljunkies.com/WebLog/donkiely/archive/2005/10/20.aspx and unfortunately... no solution :( Maybe in the 3 months since that post, someone has run across this and knows how to fix it?
I am trying to understand creating SQL Server projects and managed code. So I created a C# SQL Server Database project and named it "CSharpSqlServerProject1" and followed the steps in the following "How to: " from the Help files:
"How to: Create and Run a CLR SQL Server Stored Procedure "
I used the exact code from this "How to:" for creating a SQL Server managed-code stored procedure (see below) in C#. However, it didn't even compile! When I went to build the code I got the following error message:
"Error 1 Target string size is too small to represent the XML instance CSharpSqlServerProject1"
It does not give a line number or any further information! Since this is a Microsoft example I'm following I figure others must have run into this too. I can't figure out how to fix it!
Here's the code as copied directly from the howto:
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [SqlProcedure()]
    public static void InsertCurrency_CS(
        SqlString currencyCode, SqlString name)
    {
        using (SqlConnection conn = new SqlConnection("context connection=true"))
        {
            SqlCommand InsertCurrencyCommand = new SqlCommand();
            // The snippet I copied got cut off here; the rest of the sample
            // builds an INSERT against AdventureWorks' Sales.Currency and runs
            // it on the context connection, roughly like this (parameterized
            // here rather than concatenated):
            InsertCurrencyCommand.CommandText =
                "INSERT Sales.Currency (CurrencyCode, Name, ModifiedDate) " +
                "VALUES (@code, @name, GETDATE())";
            InsertCurrencyCommand.Parameters.AddWithValue("@code", currencyCode.Value);
            InsertCurrencyCommand.Parameters.AddWithValue("@name", name.Value);
            InsertCurrencyCommand.Connection = conn;
            conn.Open();
            InsertCurrencyCommand.ExecuteNonQuery();
        }
    }
}
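For reference, once the project builds and deploys successfully, the CLR procedure is callable from T-SQL like any other (assuming the AdventureWorks sample database the how-to targets; the values here are made up):

    EXEC InsertCurrency_CS @currencyCode = N'XYZ', @name = N'Test currency';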
I have a small number of rows in a dataset, Table 1. There is a CLOB column on a large dataset, Table 2. They join on a PK. I would like to retrieve this CLOB and add it to the data flow for Table 1. In short, I want to emulate the following:
Table 1: Small table without CLOB, 10 rows. Table 2: Large table with CLOB, 10,000,000 rows
select CLOB from table2 where pk in (select pk from table1)

I want this to return the CLOBs for the small number of rows in Table 1. The PK is indexed, obviously, so it should be a fast lookup.

Table 1 and Table 2 live on different Oracle databases. How do I perform this operation efficiently in SSIS? It seems the Lookup and Merge Join won't do this.
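One pattern that fits this shape (a sketch, not a definitive answer): since Table 1 contributes only ~10 rows, a parameterized single-row query against Table 2, executed once per incoming row -- for instance via an OLE DB Command transform, or a Lookup configured not to cache the whole table -- avoids ever touching the 10,000,000-row table in bulk:

    -- Executed once per incoming row, with ? bound to that row's PK
    select clob_col from table2 where pk = ?

With the PK indexed on the Oracle side, that is ~10 indexed single-row probes instead of a cross-database join or a full-table cache load.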
I am configuring a new database server, without SAN access, and want to know the best practice for SCSI RAID configuration. Do most folks prefer RAID 5 or RAID 10 for where their databases will reside?
I've been searching this site and the Web for info on an error message I get when importing from Access 2003 into SQL Server 2000.
'Data for Source Column 3('Col3') is too large for the specified buffer size'
A memo field in Access is larger than 255 characters.

I have followed advice about moving the field to the first column. This doesn't work -- the error just reports the new column number. In fact, I've tried importing just the first column -- no good.

I am wary about making Registry changes, as comments on the Web say this doesn't work either.
I have set up transactional replication between two databases. Data from a table in the first database is replicated to the same table in the other database.
The table at the publisher already has some data in it; the table at the subscriber is empty. When the replication synchronizes, I get the following errors in the Replication Monitor:

* The process could not bulk copy into table "dbo"."virtualdatalocations_waitingqueues". (Source: MSSQL_REPL, Error number: MSSQL_REPL20037) Get help: http://help/MSSQL_REPL20037
* Field size too large
The table looks like this:

    CREATE TABLE virtualdatalocations_waitingqueues (
        dataid   INT,
        personid INT,
        queueid  INT,
        CONSTRAINT FK_vw_dataid   FOREIGN KEY (dataid)   REFERENCES datalocations(id) ON DELETE CASCADE,
        CONSTRAINT FK_vw_personid FOREIGN KEY (personid) REFERENCES persons(id),
        CONSTRAINT FK_vw_queueid  FOREIGN KEY (queueid)  REFERENCES waitingqueues(id)
    );
It used to run fine in the past. I couldn't find any help on Google or in forums.
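One thing worth ruling out (a sketch; run it on both publisher and subscriber and compare) is that the subscriber's copy of the table has drifted from the publisher's column types, since "Field size too large" appears to come from the bulk-copy layer rather than from replication itself:

    SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
    FROM   INFORMATION_SCHEMA.COLUMNS
    WHERE  TABLE_NAME = 'virtualdatalocations_waitingqueues'
    ORDER  BY ORDINAL_POSITION;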
I developed a VB app that imports CSV data into a SQL Server DB. The original text file is 36.5 MB. The DB after import is 230 MB and the log file is 555 MB. Is this normal?
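That pattern is common when the import runs as many individually logged inserts under the full recovery model. A hedged checklist in T-SQL (the database name and paths are invented):

    -- How full is the log really?
    DBCC SQLPERF(LOGSPACE);

    -- If point-in-time recovery isn't needed during bulk loads, the
    -- bulk-logged (or simple) recovery model keeps the log much smaller:
    ALTER DATABASE ImportDB SET RECOVERY BULK_LOGGED;

    -- After a log backup, the file itself can be shrunk back down:
    BACKUP LOG ImportDB TO DISK = 'f:\backups\ImportDB_log.bak';  -- hypothetical path
    DBCC SHRINKFILE (ImportDB_log, 100);  -- target MB; logical log file name assumed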
Thanks in advance. What is the maximum SQL Server database (*.mdf) file size with SQL Server 2000 as part of Microsoft Small Business Server 2000? (Database files were limited to 10 GB in SBS 4.5 with SQL Server 7.0... has this changed?)
I recently started using differential backups. They are working but are growing in size much more quickly than I expected.

The backups grow by 2.5 GB every day, although the total size of all transaction log backups is under 350 MB. I would have imagined that the total transaction log backups would be a good indicator of total database change, and that the differential backups would therefore approach this figure.
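To watch the trend directly, the backup history lives in msdb; a sketch that lists each differential's size (type 'I') next to the log backups (type 'L'):

    SELECT database_name,
           type,                      -- D = full, I = differential, L = log
           backup_start_date,
           backup_size / 1048576.0 AS size_mb
    FROM   msdb.dbo.backupset
    WHERE  database_name = 'YourDB'   -- hypothetical name
      AND  type IN ('I', 'L')
    ORDER  BY backup_start_date;

One common cause of the mismatch: a differential contains every extent touched since the last full backup, so index rebuilds or anything else that rewrites pages inflates it far beyond the logical change the log backups record.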
I have an issue with the drill-down. In the report there is a drill-down on the Amount column. I am trying to pass the customer names in this drill-down, but there are more than 100 customers for that specific case and the drill-down is not able to pass all of them.

Is there any other way to pass a large string in the drill-down?
Hi, I want to store large files like PDF files, HTML pages and audio files in a SQL Server database. How can I do it? If somebody knows, please tell me as soon as possible. Thanks in advance. Bye.
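A minimal sketch, with invented table and file names: store the bytes in a binary column and load the file server-side with OPENROWSET ... SINGLE_BLOB, which exists from SQL Server 2005 on; on SQL 2000 use the image type and stream the bytes in from the client application instead.

    CREATE TABLE dbo.documents (
        doc_id   INT IDENTITY PRIMARY KEY,
        filename NVARCHAR(260),
        content  VARBINARY(MAX)       -- use IMAGE on SQL Server 2000
    );

    INSERT INTO dbo.documents (filename, content)
    SELECT 'manual.pdf', blob.BulkColumn
    FROM   OPENROWSET(BULK 'c:\files\manual.pdf', SINGLE_BLOB) AS blob;  -- 2005+ only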
Hello there, I have a small Excel file which, when I try to import it into SQL Server, gives the error "Data for source column 4 is too large for the specified buffer size". I have four columns in the Excel file; one of them contains a large chunk of data, so I created a table in SQL Server and changed the type of that field to text to accommodate it, but still no luck. Any suggestions as to how to go about this? Thanks in advance, Srikanth Pai
We currently have a fairly new SQL Server 2000 DB (currently about 18 MB in size) as the back end to an application (Navision). Performance seems to be below what it should be.

The DB is increasing quite rapidly in size, with a lot of data scheduled to be loaded into it and more and more shops and users coming onto the system, bringing many more transactions.

The initial setup has the database file properties set to "Automatically grow file" by "30%", with unrestricted file growth.

The server the DB sits on is high spec with very large disk space.

Because the DB will be expanding a lot, it will regularly hit its allocated size and then perform a 30% increase (which I guess affects performance quite a bit??).

Is it best to set the initial size of the DB to a much bigger value in the first place, since we have large disk space available, and also to set the growth increment bigger?

Any advice on best performance would be much appreciated.
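For what it's worth, a sketch of the usual remedy (database and logical file names are invented): pre-size the files to cover the expected growth, and switch autogrow from a percentage to a fixed increment so that growth events are rare and predictably sized.

    -- Pre-size the data file so routine growth never triggers autogrow
    ALTER DATABASE NavisionDB
        MODIFY FILE (NAME = 'NavisionDB_Data', SIZE = 20GB);

    -- Keep autogrow only as a safety net, with a fixed increment instead of 30%
    ALTER DATABASE NavisionDB
        MODIFY FILE (NAME = 'NavisionDB_Data', FILEGROWTH = 500MB);

A percentage increment grows ever larger in absolute terms as the database grows, and each growth event blocks activity while the new space is initialized, which is why a fixed increment on a pre-sized file tends to behave better.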