I was wondering what more experienced DBAs have observed with regard to the capacity of an MSSQL DB. Is there an upper threshold of rows where performance becomes unacceptable? I have a fairly slow but constant input rate of approximately 2,000 rows every 60 seconds or so (that is a little high, but I'm interested in the worst-case scenario here). That is up to 172,800 rows a day. (I'm being overly pessimistic here.) We'd like to be able to keep all of this around as long as possible.
Or would a more heavy duty DB be in order for these sorts of data rates?
There may/may not be an upper limit for the number of rows in a table, but is there any performance-related limit?
I'm designing a database that stores results that have been acquired from a number of devices. Each device provides a set of data measurements every 10 minutes, so each year a device will produce about 52,000 sets of results. If I design a table to store a row for each set of measurements from a device (the PK is based on the timestamp and the device ID), and if there are 100 devices recording for 5 years, there will be 52,000 x 100 x 5 = 26 million rows. Would I get a performance increase by separating this data into one table per year? Perhaps the year could be appended to the table name to identify the particular tables.
A secondary issue is that some devices can also be configured to produce a different set of measurements every 10 seconds. In this case there will be hundreds of millions of rows over a 5-year period. Therefore I am considering bulking the results into an array for each 10-minute period and storing this array as a blob every 10 minutes. Is this going to be faster or slower than having hundreds of millions of rows?
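For comparison with the table-per-year idea: SQL Server 2005 and later offer native table partitioning, which keeps one logical table while physically splitting it by date. A minimal sketch, where the Results table, its columns and the boundary dates are illustrative assumptions rather than the original design:

CREATE PARTITION FUNCTION pfResultsByYear (datetime)
    AS RANGE RIGHT FOR VALUES ('2005-01-01', '2006-01-01', '2007-01-01', '2008-01-01');

CREATE PARTITION SCHEME psResultsByYear
    AS PARTITION pfResultsByYear ALL TO ([PRIMARY]);

CREATE TABLE dbo.Results
(
    DeviceID        int      NOT NULL,
    MeasurementTime datetime NOT NULL,
    Value1          float    NULL,
    CONSTRAINT PK_Results PRIMARY KEY (DeviceID, MeasurementTime)
) ON psResultsByYear (MeasurementTime);

Queries and the primary key stay the same, while old years can be moved to separate filegroups or switched out for archiving. At roughly 26 million rows over 5 years, though, a single well-indexed table is usually manageable without splitting it at all.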
One of our databases is approaching a gigabyte in size. I know that Microsoft claims to support terabyte databases with SQL Server 7.0. I was wondering if anyone could tell me about the maximum size of database they have used on an OLTP site without running into problems, of course with SQL Server.
Our development team wanted to create a database user for each application user in the application and use these for granular data access control, which at first sounded like a good idea, but our initial testing ran into some interesting results.
Our target user base was about 15 million users with an estimated 1% concurrency rate, and finding no MS documentation on an upper limit to the number of users a database can have, we began some load testing to see how the database performed. In the hundreds-of-thousands-of-users range, our test database had a hard time performing well under light loads (even without any concurrent connections).
When we purged the users and reverted to just a handful of service accounts, performance went back to "normal" under the same loads. I began to wonder if this is a situation where throwing more hardware at the problem would overcome the issue, or if there is a practical upper limit to the number of users a single database can handle well.
(There were of course other cons to this arrangement and I certainly was never going to expand the users tree in the object explorer for a database like this, but we thought it a solution worth investigating.)
What is the largest number of users any of you have had in a single database?
I'm currently working with a DTS package used for importing an Excel sheet into SQL Server.
I have a "Microsoft Excel 97-2000" as source and a "Microsoft OLE DB" as destination.
A "Transformation Data Task" is used to shuffle the data.
The package works OK as long as the path to the Excel sheet doesn't exceed 128 characters! Beyond that it gets truncated at 128 characters, and of course the data task can't find the file...
I can browse to my Excel sheet, and when I look at the path it's fine. But when I close the package and then open it again, the path is truncated at 128 characters. Grrrr.
Is this a hard limit in SQL Server, or is it something I can mail the database administrator about tomorrow?
I am not sure about the architecture of the Issue Tracker and hence not sure if it applies here, but I will post in any case and wait for comments from users on this forum as well.
===========Earlier post================== This question is regarding the architecture of TimeEntry. In some programs it builds an ArrayList for a master-detail type of relationship, and when the user is ready to save it by clicking 'Submit', it builds a variable with pipe-delimited fields. This is then passed to a SQL query.
This does not seem to be an efficient approach to me, because the maximum is 1,500 characters as a parameter to the SQL query.
I was wondering if instead I could store it as XML and then use the XML to import into SQL.
Any ideas are greatly appreciated; I am running into problems where my variable construct grows to more than 1,500 characters. Any thoughts are much appreciated in this regard.
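One common alternative, assuming SQL Server 2005 or later, is to pass the whole master-detail set as a single xml parameter and shred it inside the procedure, which avoids both the pipe-delimited string and its length limit. A minimal sketch; the TimeEntryDetail table, element names and columns are illustrative assumptions:

CREATE PROCEDURE dbo.SaveTimeEntries
    @Entries xml
AS
BEGIN
    -- One row per <Entry> element in the XML document
    INSERT INTO dbo.TimeEntryDetail (EmployeeID, ProjectID, Hours)
    SELECT
        e.value('(EmployeeID)[1]', 'int'),
        e.value('(ProjectID)[1]',  'int'),
        e.value('(Hours)[1]',      'decimal(5,2)')
    FROM @Entries.nodes('/Entries/Entry') AS x(e);
END

-- Usage:
-- EXEC dbo.SaveTimeEntries
--     @Entries = N'<Entries><Entry><EmployeeID>1</EmployeeID><ProjectID>7</ProjectID><Hours>2.5</Hours></Entry></Entries>';

On SQL Server 2000 the same idea works with a text parameter plus sp_xml_preparedocument and OPENXML.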
In my application I must store over 16,000 characters in a SQL table field. When I split it into more than one field it gives an "unclosed quotation mark" message. How can I store over 16,000 characters in a SQL table field (only one field) with language-specific characters?
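The "unclosed quotation mark" error usually comes from an unescaped single quote in a concatenated SQL string; passing the text as a parameter avoids it, and a single ntext (SQL Server 2000) or nvarchar(max) (SQL Server 2005+) column holds far more than 16,000 Unicode characters. A minimal sketch with a hypothetical Documents table:

CREATE TABLE dbo.Documents
(
    DocID int IDENTITY(1,1) PRIMARY KEY,
    Body  nvarchar(max)        -- use ntext on SQL Server 2000
);

DECLARE @p nvarchar(max);
SET @p = N'... more than 16,000 characters, including '' embedded quotes ...';

-- The N prefix / nvarchar type preserves language-specific characters, and the
-- parameter (rather than string concatenation) avoids the quotation-mark error.
INSERT INTO dbo.Documents (Body) VALUES (@p);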
Hi everybody, I would like to know if there is any property in a SQL 2000 database to distinguish lowercase characters from uppercase characters. I mean not to treat the values 'child' and 'Child' as the same. We are transferring our Ingres database into SQL Server. In Ingres we have these values and we consider them as different values. Can we have that in SQL Server too?
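Collations control this, and they can be set per column (or per database). A minimal sketch with a hypothetical Kids table and a case-sensitive (CS) collation, which works on SQL Server 2000 and later:

CREATE TABLE dbo.Kids
(
    Name varchar(50) COLLATE SQL_Latin1_General_CP1_CS_AS   -- CS = case-sensitive
);

INSERT INTO dbo.Kids (Name) VALUES ('child');
INSERT INTO dbo.Kids (Name) VALUES ('Child');

-- With the CS collation these are treated as different values:
SELECT * FROM dbo.Kids WHERE Name = 'child';   -- returns only the row 'child'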
Is there a method for converting the first character of an account name to uppercase and the remaining characters to lower case? I've used the SUBSTRING function, but for a name like 'MY NEW COMPANY', how could I convert it to 'My New Company'? Thanks, Terry
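T-SQL has no built-in proper-case function, but a small scalar function can do it. A minimal sketch; the name dbo.ProperCase is illustrative, and it does not handle hyphens or names like 'McDonald':

CREATE FUNCTION dbo.ProperCase (@s varchar(200))
RETURNS varchar(200)
AS
BEGIN
    DECLARE @result varchar(200), @i int, @c char(1), @newWord bit;
    SELECT @result = '', @i = 1, @newWord = 1;
    WHILE @i <= LEN(@s)
    BEGIN
        SET @c = SUBSTRING(@s, @i, 1);
        -- Upper-case the first letter of each word, lower-case the rest
        IF @newWord = 1
            SET @result = @result + UPPER(@c);
        ELSE
            SET @result = @result + LOWER(@c);
        SET @newWord = CASE WHEN @c = ' ' THEN 1 ELSE 0 END;
        SET @i = @i + 1;
    END
    RETURN @result;
END
GO

SELECT dbo.ProperCase('MY NEW COMPANY');   -- My New Company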
Folks, what script must I use, as part of CREATE TABLE, to automatically convert characters to UPPER case on insert? I wrote <CHECK (country = UPPER (country)> in the CREATE TABLE, which was wrong, because the values were still in lower case. The sample script is:
CREATE TABLE address (street varchar(40), city varchar(20), state char (2), zip varchar (10), country varchar (20))
When a user types "Canada", I want the inserted value to be "CANADA".
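A CHECK constraint can only accept or reject a value, it cannot change it (and under a case-insensitive collation country = UPPER(country) is always true, so nothing is even rejected). An INSTEAD OF INSERT trigger on the address table above can do the conversion; a minimal sketch:

CREATE TRIGGER trg_address_upper_country
ON address
INSTEAD OF INSERT
AS
BEGIN
    -- Re-issue the insert with the country upper-cased
    INSERT INTO address (street, city, state, zip, country)
    SELECT street, city, state, zip, UPPER(country)
    FROM inserted;
END

-- INSERT INTO address VALUES ('1 Main St', 'Ottawa', 'ON', 'K1A 0A1', 'Canada')
-- now stores the country as 'CANADA'.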
Hi experts, I would like to ask about the UPPER function in a SQL query. I was trying to create a script that will give me all the names that are in UPPER case format, but when I execute the script the result is not right; it also retrieves all the records that are in Proper case.
SQL Script: SELECT id, name FROM table_1 WHERE UPPER(name) LIKE 'DAR%'
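That is expected: UPPER(name) LIKE 'DAR%' upper-cases the stored value before the comparison, so 'Darren' qualifies just as well as 'DARREN'. To keep only rows that are already stored entirely in upper case, force a case-sensitive comparison; a minimal sketch:

SELECT id, name
FROM table_1
WHERE name LIKE 'DAR%'
  AND name = UPPER(name) COLLATE SQL_Latin1_General_CP1_CS_AS;  -- case-sensitive check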
The all-caps text strings at the beginning of the field need to end up in a separate field from the mixed-case strings, and the mixed-case strings need to stay together. The field length varies, as do the lengths of the all-caps text strings. There are a lot of records, so I would be interested to know if there is a way to proceed without manually editing each line.
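One possible approach, assuming a hypothetical dbo.Notes table with a txt column: use PATINDEX with a binary collation to find the first lower-case letter, then split just before the word containing it. A minimal sketch (rows with no lower-case letters, or with no space before the first one, would need separate handling):

SELECT
    RTRIM(LEFT(txt, space_pos))                     AS caps_part,
    LTRIM(SUBSTRING(txt, space_pos + 1, LEN(txt)))  AS mixed_part
FROM (
    SELECT txt,
           -- position of the last character before the space preceding the
           -- first lower-case letter
           lower_pos - CHARINDEX(' ', REVERSE(LEFT(txt, lower_pos))) AS space_pos
    FROM (
        SELECT txt,
               PATINDEX('%[a-z]%', txt COLLATE Latin1_General_BIN) AS lower_pos
        FROM dbo.Notes
    ) AS a
    WHERE lower_pos > 0
) AS b;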
@DeptID nvarchar(10)
)
AS
IF EXISTS (SELECT DeptName FROM Departments WHERE DeptID LIKE @DeptID)
    RETURN 1
ELSE
    RETURN 0
Now I want to apply the REPLACE and UPPER functions to DeptID in the database before saying "DeptID LIKE @DeptID".
For example, the parameter is "D&V" and DeptID in the database is "d & v" (there are spaces).
If I say DeptID LIKE @DeptID, nothing is found because the characters don't match, so I have to apply the REPLACE and UPPER functions to the DeptID column in the database.
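A minimal sketch of that comparison inside the procedure above; note that wrapping the column in REPLACE/UPPER prevents SQL Server from using an index on DeptID, which may matter if Departments is large:

IF EXISTS (
    SELECT DeptName
    FROM Departments
    WHERE UPPER(REPLACE(DeptID, ' ', '')) LIKE UPPER(REPLACE(@DeptID, ' ', ''))
)
    RETURN 1
ELSE
    RETURN 0

-- With @DeptID = 'D&V', the stored value 'd & v' becomes 'D&V' and matches.
-- (UPPER is redundant on a case-insensitive collation, but it makes the intent
-- explicit and keeps working if the collation changes.)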
My SQL Server database is not case sensitive. How can I make a LIKE clause in a search distinguish between capital and small letters? For example: SELECT add1 FROM xcty_all WHERE add1 LIKE '%AL%'. I need only rows like 10 ltncewwod way AL and 456 Ruio St. AL, and NOT ones where the 'al' is in lower case.
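One way is to force a case-sensitive collation for just that predicate; a minimal sketch:

SELECT add1
FROM xcty_all
WHERE add1 LIKE '%AL%' COLLATE SQL_Latin1_General_CP1_CS_AS;
-- Matches '10 ltncewwod way AL' but not addresses where 'al' is in lower case.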
How can I convert the first letters of a string to upper case (i.e. UNITED KINGDOM -> United Kingdom)? I have a country names table which has all entries in upper case. This makes a select box very large and disproportionate. Thanks in advance for the help.
Hello, we've an Oracle transition in the pipeline and want to convert all our database objects to upper case. Anyone got a script or technique (other than manual) to do it? Many thanks, Kevin.
I have a problem: I need to rename all columns of a database to upper case. Since SQL Server 2005 does not support changing system tables, is there a smooth way to do this? Does anyone have ideas for a script? Point me in the right direction. I have found the stored procedure sp_rename, which could be useful (or would it be better to alter the tables)... Any help would be appreciated very much...
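A minimal sketch of a generator script: build one sp_rename command per column from INFORMATION_SCHEMA.COLUMNS, review the output, then run it (sp_rename will warn that renaming can break scripts and stored procedures that reference the old names):

SELECT 'EXEC sp_rename ''' + c.TABLE_SCHEMA + '.' + c.TABLE_NAME + '.' + c.COLUMN_NAME
       + ''', ''' + UPPER(c.COLUMN_NAME) + ''', ''COLUMN'';'
FROM INFORMATION_SCHEMA.COLUMNS AS c
JOIN INFORMATION_SCHEMA.TABLES  AS t
  ON t.TABLE_SCHEMA = c.TABLE_SCHEMA AND t.TABLE_NAME = c.TABLE_NAME
WHERE t.TABLE_TYPE = 'BASE TABLE'
  -- only columns that are not already upper case (case-sensitive comparison)
  AND c.COLUMN_NAME <> UPPER(c.COLUMN_NAME) COLLATE Latin1_General_CS_AS
ORDER BY c.TABLE_NAME, c.COLUMN_NAME;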
Almost all of our character fields are stored in upper-case. Is there an easy way to force SQL Server char and varchar fields to upper-case? Something I can do in SQL Server instead of in the client? It needs to apply to any new records.
There are some exceptions (email addresses for one). I don't mind going through each field and changing something.
Hi there, I have a column which contains alphanumeric values: aab123, add234, cdf423, dej553, edg543. If I try to return records between the values 'a' and 'e' it will only go as far as d (first letter): aab123, add234, cdf423, dej553. This is true if I use WHERE value BETWEEN 'a' AND 'e', or if I use greater-than-or-equal-to / less-than-or-equal-to operators. Any help would be great. Thanks, Stuart
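BETWEEN 'a' AND 'e' is a plain string comparison, and 'edg543' sorts after the single character 'e', so it falls outside the range. Comparing against the start of the next letter, or filtering on the first character, brings it back; a minimal sketch assuming a hypothetical dbo.MyTable with a column named value:

SELECT value FROM dbo.MyTable WHERE value >= 'a' AND value < 'f';
-- or, equivalently for the first character only:
SELECT value FROM dbo.MyTable WHERE value LIKE '[a-e]%';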
Hi, I want to SELECT * FROM table1 WHERE name = 'petter'. Now, if there are many variations of petter in the table, like PETTER, Petter and petter, which records will be displayed? And if I want all three (PETTER, Petter, petter) to be displayed, which command is for this? Regards
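With SQL Server's default case-insensitive collations the plain query already returns all three spellings. To get that behaviour even on a case-sensitive database or column, normalise the case on both sides; a minimal sketch:

SELECT * FROM table1 WHERE UPPER(name) = 'PETTER';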
I have several tables in a deployed database in which the primary key is of type int and auto-increments by 1 each time a record is added. My question is, since ints are 32-bit, what happens when its value reaches 4,294,967,296? I know that seems like an extremely large number of records, but when we imported the data into the database it started at key value 1,000,000. I don't know how to make it use lower numbers which are currently not being used (numbers below 1,000,000), and I am worried I will have problems when I reach the upper bound. What kind of problems could this cause? Should I change the primary key's type? Thanks!
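For what it's worth, SQL Server's int is signed, so an IDENTITY key actually fails with an arithmetic overflow error at 2,147,483,647 rather than 4,294,967,296. Widening the column to bigint is the usual fix; a minimal sketch with illustrative table and constraint names (the primary key, and any foreign keys that reference it, must be dropped and recreated around the change):

ALTER TABLE dbo.MyTable DROP CONSTRAINT PK_MyTable;
ALTER TABLE dbo.MyTable ALTER COLUMN MyTableID bigint NOT NULL;
ALTER TABLE dbo.MyTable ADD CONSTRAINT PK_MyTable PRIMARY KEY (MyTableID);

-- Reseeding below 1,000,000 with DBCC CHECKIDENT only postpones the problem and
-- risks duplicate key errors once the counter catches up to the existing rows.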