SQL Server Admin 2014 :: Practical Upper Limit On Number Of DB Users
Sep 9, 2015
Our development team wanted to create a database user for each application user in the application and use these for granular data access control, which at first sounded like a good idea, but our initial testing ran into some interesting results.
Our target user base was about 15 million users with an estimated 1% concurrency rate, and finding no MS documentation on an upper limit to the number of users a database can have, we began some load testing to see how the database performed. In the hundreds-of-thousands-of-users range, our test database had a hard time performing well under light loads (even without any concurrent connections).
When we purged the users and reverted back to just a handful of service accounts, performance went back to "normal" under the same loads. I began to wonder if this is a situation where throwing more hardware at the problem would overcome the issue or if there is a practical upper limit to the number of users a single database can handle well.
(There were of course other cons to this arrangement and I certainly was never going to expand the users tree in the object explorer for a database like this, but we thought it a solution worth investigating.)
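For context, the load-test setup amounted to something like the following sketch (user names and count are illustrative, not our actual script): one loginless database user per application user.
DECLARE @i int = 1, @sql nvarchar(400);
WHILE @i <= 100000
BEGIN
    SET @sql = N'CREATE USER ' + QUOTENAME(N'AppUser' + CAST(@i AS nvarchar(10)))
             + N' WITHOUT LOGIN;';
    EXEC sp_executesql @sql;
    SET @i += 1;
END;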
What is the largest number of users any of you have had in a single database?
Jan 14, 2015
I want to test the automatic switch between primary and secondary.
How do I do it?
Alternatively, and for a related purpose: how do I set suspect mode for a database?
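For mirroring, a manual failover can be forced from the principal. As for suspect mode, one commonly suggested trick for test servers only is to make the log file unavailable so recovery fails. A hedged sketch (database name illustrative):
-- Manual failover of a mirrored database (run on the principal):
ALTER DATABASE MyDB SET PARTNER FAILOVER;

-- To simulate a suspect/recovery-pending database (TEST ONLY):
ALTER DATABASE MyDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- ...rename or move the .ldf file at the OS level here...
ALTER DATABASE MyDB SET ONLINE;   -- recovery now fails and the DB is flagged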
Dec 3, 2013
We have applications connecting to SQL using Windows authentication. While the application holds a connection, the same user can also access the database instance directly at the same time. We need to limit users' access from outside the application.
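One hedged option is a server-level logon trigger keyed on the application name; note APP_NAME() is supplied by the client and can be spoofed, so this is a deterrent rather than a security boundary. Login and application names below are illustrative:
CREATE TRIGGER trg_BlockOutsideApp
ON ALL SERVER
FOR LOGON
AS
BEGIN
    IF ORIGINAL_LOGIN() = N'DOMAIN\AppUser'
       AND APP_NAME() NOT LIKE N'MyApp%'
        ROLLBACK;   -- refuses the connection
END;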
May 7, 2008
Does SQL Server 2005 Workgroup Edition have a limit to the number of user logins I can make?
Sep 24, 2014
Is there a way to back up a SQL database without having to stop users from connecting to it?
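Standard backups in SQL Server are online operations, so users can stay connected while they run. A minimal sketch (path illustrative):
BACKUP DATABASE MyDB
TO DISK = N'D:\Backups\MyDB_Full.bak'
WITH CHECKSUM, STATS = 10;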
Jul 22, 2015
I'm trying to find out what tables are being used in a Database.
I don't want just the last user; I want every user and the dates. I have a script that returns the last user, but that is not going to work.
The following script returns the last user, but not all users and not the login name:
WITH LastActivity (ObjectID, LastAction) AS
(
    SELECT object_id AS TableName,
           last_user_seek AS LastAction
    FROM sys.dm_db_index_usage_stats u
    WHERE database_id = DB_ID(DB_NAME())
[Code] .....
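For what it's worth, a fuller version of that query as a hedged sketch; note sys.dm_db_index_usage_stats only keeps the last seek/scan/lookup per index and never records which login did it, so true per-user history would need SQL Audit or a trace instead:
WITH LastActivity (ObjectID, LastAction) AS
(
    SELECT object_id, last_user_seek
    FROM sys.dm_db_index_usage_stats WHERE database_id = DB_ID()
    UNION ALL
    SELECT object_id, last_user_scan
    FROM sys.dm_db_index_usage_stats WHERE database_id = DB_ID()
    UNION ALL
    SELECT object_id, last_user_lookup
    FROM sys.dm_db_index_usage_stats WHERE database_id = DB_ID()
)
SELECT OBJECT_NAME(ObjectID) AS TableName,
       MAX(LastAction)       AS LastAccess
FROM LastActivity
GROUP BY ObjectID
ORDER BY LastAccess DESC;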
Mar 12, 2015
I am quite new to SSIS but managed to build a package which imports text files into SQL. The text files are generated after users complete a manufacturing process on a machine.
The SSIS package is stored in the SSIS catalog, and currently a SQL Agent job runs every evening to import new files that have been created during the day. Users have now requested the ability to run the import process as soon as they have finished their manufacturing runs, as they may want to query the data to look up stats etc.
What is the best way to do this, considering none of the users are SQL guys and won't have direct logins to the SQL Server or access to SQL Server Management Studio? They will have access to the PC where the files are generated, so ideally I need a batch file which they can just execute to import their new files.
I have seen lots of things on the web about running dtexec, but as the package is stored in the SSIS catalog, how can I execute this remotely?
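One option, assuming the users have (or can be given) Windows logins with execute rights on the SSISDB catalog: a batch file that calls sqlcmd and starts the package through the catalog's own stored procedures. Server, folder, project and package names below are illustrative:
@echo off
rem run_import.bat - start the catalog package via SSISDB
sqlcmd -S MYSQLSERVER -E -Q "DECLARE @eid bigint; EXEC SSISDB.catalog.create_execution @folder_name=N'ImportFolder', @project_name=N'ImportProject', @package_name=N'ImportFiles.dtsx', @use32bitruntime=0, @execution_id=@eid OUTPUT; EXEC SSISDB.catalog.start_execution @eid; PRINT 'Import started, execution_id = ' + CAST(@eid AS varchar(20));"
pause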
Oct 21, 2015
I have a requirement to delete all the orphaned users in our databases. The issue I am having: when a database principal owns a schema in the DB, the user cannot be dropped.
How do I transfer ownership to dbo while looping over multiple databases? This is what I have so far.
DECLARE @is_read_only bit;
SELECT @is_read_only = is_read_only
FROM master.sys.databases
WHERE name = 'test'; /* This should be a parameter value */
IF @is_read_only = 0
BEGIN
    DECLARE @SQL varchar(200);
[Code] .....
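The schema-ownership part can be handled by transferring any schemas the user owns back to dbo before the drop. A hedged sketch for one database (user name illustrative):
DECLARE @sql nvarchar(max) = N'';
SELECT @sql += N'ALTER AUTHORIZATION ON SCHEMA::' + QUOTENAME(s.name)
             + N' TO dbo;' + CHAR(13)
FROM sys.schemas s
JOIN sys.database_principals p ON p.principal_id = s.principal_id
WHERE p.name = N'OrphanUser';   -- illustrative orphaned user
EXEC sp_executesql @sql;
DROP USER [OrphanUser];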
Oct 7, 2015
I have heard that a high number of VLFs isn't good: it can impact performance and delay recovery time, so I wanted to test that.
I created 2 DBs, each with a 100MB data file and a 50MB log file.
TestDB's log file had 100MB autogrowth.
TestDB2's log file had 1% growth.
I inserted 1048576 records, took the backup
Ran DBCC loginfo and
TestDB had 40 VLFs and
TestDB2 had 165 VLFs
But when I restored both DBs, this is what I got.
TestDB:
RESTORE DATABASE successfully processed 42258 pages in 4.420 seconds (74.691 MB/sec).
SQL Server Execution times:
CPU Time = 125ms, elapsed time = 8323 ms.
TestDB2:
RESTORE DATABASE successfully processed 42257 pages in 3.943 seconds (83.724 MB/sec).
SQL Server Execution Times:
CPU time = 109 ms, elapsed time = 8314 ms.
The question is: where is the difference? How is TestDB, which has 40 VLFs, any better than TestDB2, which has 165 VLFs?
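For anyone repeating the test, a quick way to count VLFs (the column layout below matches DBCC LOGINFO on 2012/2014); note that the usual warnings concern VLF counts in the thousands, so the gap between 40 and 165 may simply be too small to measure:
CREATE TABLE #loginfo (
    RecoveryUnitId int, FileId int, FileSize bigint, StartOffset bigint,
    FSeqNo bigint, [Status] int, Parity int, CreateLSN numeric(38));
INSERT INTO #loginfo EXEC ('DBCC LOGINFO');
SELECT COUNT(*) AS VLFCount FROM #loginfo;
DROP TABLE #loginfo;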
Apr 7, 2015
I have this query
SELECT TOP 100 LTRIM([text]), objectid, total_rows, total_logical_reads, execution_count
FROM sys.dm_exec_query_stats AS a
CROSS APPLY sys.dm_exec_sql_text(a.sql_handle) AS b
WHERE last_execution_time >= '2015-04-07 10:01:01.01'
ORDER BY execution_count DESC
But the execution count returned is cumulative from when each plan was first cached. I want to know it for one day only.
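The counters in sys.dm_exec_query_stats are cumulative from the time each plan was cached, so a per-day number needs two snapshots and a delta. A hedged sketch (snapshot table name is illustrative):
SELECT sql_handle, plan_handle, statement_start_offset, execution_count,
       GETDATE() AS capture_time
INTO dbo.QueryStatsSnapshot
FROM sys.dm_exec_query_stats;
-- ...about 24 hours later, diff the new values against the snapshot:
SELECT LTRIM(t.[text]) AS query_text,
       q.execution_count - ISNULL(s.execution_count, 0) AS execs_since_snapshot
FROM sys.dm_exec_query_stats AS q
LEFT JOIN dbo.QueryStatsSnapshot AS s
       ON  s.sql_handle = q.sql_handle
       AND s.plan_handle = q.plan_handle
       AND s.statement_start_offset = q.statement_start_offset
CROSS APPLY sys.dm_exec_sql_text(q.sql_handle) AS t
ORDER BY execs_since_snapshot DESC;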
Sep 23, 2005
I was wondering what more experienced DBAs have observed with regard to the capacity of an MSSQL DB. Is there an upper threshold of rows where performance becomes unacceptable? I have a fairly slow but constant input rate of approximately 2,000 rows every 60 seconds or so (that is a little high, but I'm interested in the worst-case scenario here). That adds up to 2,880,000 rows a day. (I'm being overly pessimistic here.) We'd like to be able to keep all of this around as long as possible.
Or would a more heavy duty DB be in order for these sorts of data rates?
Jun 2, 2014
I am writing a performance baseline test.
The first test writes 5,000,000 rows into one table. I realise this is not representative OLTP behaviour, but it gave me a starting point for interpreting performance counters and for testing several setups to be discussed with our server, storage and network administrators. This way we have been able to compare the results of different hard disks, LUN vs VMDK, 1Gb vs 10Gb network, AMD vs Intel, etc. This way I can also compare several SQL setups (recovery model, max memory config, ...).
The screenshot shows the results of 2 runs on the same server : Win2012R2, SQL2014, 16GB RAM.
In test 1 min/max server memory was set to 9215MB/10751MB
In test 2 min/max server memory was set to 13311MB/14847MB
The script assures the number of bytes inserted in the nvarchar columns is always the same.
This explains why the number of pages and the number of MB in the table are the same at the end of the 2 tests (columns 5 and 6).
Since about 13GB has to be written, the results of test 1 show the lead time increasing once more than 10GB has been inserted (columns 8 and 9). In addition, you can see that at that moment:
- buffer cache hit ratio is decreasing
- page life expectancy becomes "terrible"
- free list stalls/sec increases
- lazy writes/sec increases
- read latency increases (write latency does not)
In test 2 (id 3 in column 1 in the screenshot) those counters are not really influenced (since the 5000000 rows can all be stored in memory).
Now what I do not understand is :
Why do the number of pages read (instance level), as well as the number of bytes read and the number of reads (database level), increase so dramatically during run 1?
I expected to see a serious impact on write behaviour, since SQL Server is forced to start flushing dirty pages once memory is filled. Actually you can see the number of writes (not the number of bytes written) starts to increase faster in test 1 after 4,000,000 rows, but there's no real impact on write latency.
Finally, note that:
- I'm the only user on this machine
- the table has a clustered index on an identity column
- there are no foreign key constraints
- inserts are executed using a loop, not one big transaction
- to monitor progress and behaviour/impact, the counters are stored using DMV queries every 10,000 loops
So I wonder why SQL Server starts to execute so many reads in test 1.
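For reference, the test loop reduced to a hedged sketch (table names and payload size are illustrative, not the actual script):
-- Assumed setup: dbo.BaselineTest(Id int IDENTITY PRIMARY KEY, Payload nvarchar(100)),
-- dbo.CounterLog(capture_time datetime, counter_name nchar(128), cntr_value bigint).
DECLARE @i int = 0;
WHILE @i < 5000000
BEGIN
    INSERT INTO dbo.BaselineTest (Payload)
    VALUES (REPLICATE(N'x', 100));              -- constant bytes per row
    IF @i % 10000 = 0                           -- snapshot counters every 10,000 loops
        INSERT INTO dbo.CounterLog (capture_time, counter_name, cntr_value)
        SELECT GETDATE(), counter_name, cntr_value
        FROM sys.dm_os_performance_counters
        WHERE counter_name IN (N'Page life expectancy',
                               N'Lazy writes/sec',
                               N'Free list stalls/sec');
    SET @i += 1;
END;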
Feb 2, 2015
Database File Placement Layout? We are planning to implement a new SQL Server 2014 OLTP Database with a 1 TB Data file and 1 TB Log File. I am looking at the possible layout of the database files and trying to determine the best possible configuration. My knowledge/research tells me that items which need separate storage due to constant simultaneous access are:
Data files – should go on the fastest reading storage.
Log files – should go on the fastest writing storage.
TempDb – involves a lot of writing at the same time the data files are being read.
Indexes - (including full text indexes) - involves a lot of writing at the same time the data files are being read.
Also, is there any benefit to having multiple OLTP database log files? Because SQL Server writes to the log file sequentially, I do not see any advantage to having multiple log files. In a SQL Server 2012 class I took last summer, under “Determining File Placement and Number of Files”, it states: “Use a single log file in most situations as log files are written sequentially.”
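A sketch of the separation described above (drive letters and sizes are illustrative):
CREATE DATABASE SalesOLTP
ON PRIMARY
    (NAME = SalesOLTP_data, FILENAME = N'D:\SQLData\SalesOLTP.mdf',
     SIZE = 500GB, FILEGROWTH = 10GB)
LOG ON
    (NAME = SalesOLTP_log,  FILENAME = N'L:\SQLLog\SalesOLTP.ldf',
     SIZE = 200GB, FILEGROWTH = 10GB);
-- tempdb files would be moved to their own volume (takes effect at restart):
-- ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = N'T:\TempDB\tempdb.mdf');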
Oct 27, 2015
I have a 2-node cluster with 4 cores each, running 3 instances of SQL 2008 R2 Enterprise comprising 60 databases, 20 on each instance. I need to set up mirroring for each of the databases to a secondary server that also has 4 cores and 3 instances. What I understand is that in this case the mirror server will provide a maximum of 512 worker threads, and the 60 mirror databases would consume about 240 of them. What needs to be checked to assess the feasibility of going ahead with an async mirror setup as described above?
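As a starting point, the configured worker-thread ceiling can be checked per instance on the prospective mirror server (a minimal query):
SELECT max_workers_count FROM sys.dm_os_sys_info;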
Nov 6, 2015
I've installed the MDW (Management Data Warehouse) database on our central monitoring SQL Server and added a number of servers to be monitored. The data is collected on the monitored servers and uploaded to the central MDW monitoring server.
On the servers that are being monitored, I'm seeing a large number (over 1000) of SPIDs being generated by 'SQL Server Data Collector'.
Is this normal behaviour? I've seen more blocking as a result of this.
Is there any way to reduce the number of SPIDs generated?
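A quick way to confirm where the sessions come from (a minimal sketch):
SELECT program_name, COUNT(*) AS session_count
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
GROUP BY program_name
ORDER BY session_count DESC;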
Apr 3, 2015
Basically the question is, which number should I pick?
May 3, 2015
ID | A  | B  | C  | AVG
---+----+----+----+----
1  | 08 | 09 | 10 | -
2  | 10 | 25 | 26 | -
3  | 09 | 15 | 16 | -
I want to calculate the average of the largest two numbers from columns A, B, and C for each ID, and store that average in the AVG column....
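One hedged way to do it, assuming a table dbo.Scores with those columns: unpivot the three values per row with VALUES, keep the top two, and average them.
UPDATE s
SET [AVG] = top2.avg2
FROM dbo.Scores AS s
CROSS APPLY (SELECT AVG(1.0 * v) AS avg2
             FROM (SELECT TOP (2) v
                   FROM (VALUES (s.A), (s.B), (s.C)) AS x(v)
                   ORDER BY v DESC) AS t) AS top2;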
Apr 19, 2001
One of our databases is approaching a gigabyte in size. I know that Microsoft claims to support terabyte databases with SQL Server 7.0. I was wondering if anyone could tell me the max size of database they have used on an OLTP site without running into problems, of course with SQL Server.
thanks,
rachna.
Jul 20, 2007
There may/may not be an upper limit for the number of rows in a table, but is there any performance-related limit?
I'm designing a database that stores results that have been acquired from a number of devices. Each device provides a set of data measurements every 10 minutes. Therefore each year a device will produce 52000 sets of results.
If I design a table to store a row for each set of measurements from a device (PK based on the timestamp and the deviceID), and if there are 100 devices recording for 5 years, there will be 52000 x 100 x 5 = 26 million rows. Would I get a performance increase by separating this data into one table per year? Perhaps the year could be appended to the table name to identify the particular tables (see the partitioning sketch below for an alternative).
A secondary issue is some devices can also be configured to produce a different set of measurements every 10 seconds. In this case there will be hundreds of millions of rows over a 5 year period. Therefore I am considering bulking the results into an array for a 10 minute period, and storing this array as a blob each 10 minutes. Is this going to be faster or slower than having hundreds of millions of rows?
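One hedged alternative to one-table-per-year, assuming SQL Server 2005+ Enterprise Edition: a single table partitioned on the timestamp, which gives per-year separation without changing table names (boundary dates and names illustrative):
CREATE PARTITION FUNCTION pfByYear (datetime)
    AS RANGE RIGHT FOR VALUES ('2004-01-01', '2005-01-01', '2006-01-01',
                               '2007-01-01', '2008-01-01');
CREATE PARTITION SCHEME psByYear
    AS PARTITION pfByYear ALL TO ([PRIMARY]);
CREATE TABLE dbo.DeviceReadings (
    DeviceID  int      NOT NULL,
    ReadingAt datetime NOT NULL,
    Value1    float    NULL,
    CONSTRAINT PK_DeviceReadings PRIMARY KEY (ReadingAt, DeviceID)
) ON psByYear (ReadingAt);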
Thanks in advance for any advice,
Mark.
Jul 23, 2005
Hi, all: I'd heard that the upper row limit in SQL of 8060 bytes may have been increased with SP3. Can anyone confirm/deny this? Is this still a carved-in-stone upper cap? Thanks, DW.
Aug 27, 2015
I want to set up a database role so that users can use sp_readerrorlog through SSMS. It does a check on membership in the securityadmin role.
I have tested it and can see you can grant execute on xp_readerrorlog but the SSMS GUI uses sp_readerrorlog.
I thought I could create a user/certificate and add the signature to sp_readerrorlog but it's not permitted (likely because it's not a normal database object).
So the other solution is to add the users to the securityadmin role but then explicitly deny ALTER ANY LOGIN (best done with a custom server role in 2012+, otherwise just manually in 2008). I tested this out and it works: I'm not able to alter any logins or increase my own permissions. I also checked what's reported by fn_my_permissions(null, null) and it shows minimal permissions, as I'd expect.
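For reference, a sketch of that approach on 2012+ (role and login names are illustrative):
CREATE SERVER ROLE ErrorLogReaders;
ALTER SERVER ROLE securityadmin ADD MEMBER ErrorLogReaders;
DENY ALTER ANY LOGIN TO ErrorLogReaders;
ALTER SERVER ROLE ErrorLogReaders ADD MEMBER [DOMAIN\HelpdeskUser];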
Feb 15, 2008
Does SQL Server limit the max number of connections, the max connections open from a given source, or connections over a given time period?
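For what it's worth, there is no built-in per-source or per-time-period limit as far as I know; a quick way to see both the hard ceiling and the configured cap:
SELECT @@MAX_CONNECTIONS AS max_possible_connections;  -- 32,767 unless capped
SELECT value_in_use
FROM sys.configurations
WHERE name = 'user connections';                       -- 0 = limited only by the max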
May 7, 2015
In log shipping, is there any limit on the number of secondary database servers you can configure?
Apr 14, 2015
I inherited a lot of servers to upgrade to 2014, including an SSRS server.
The encryption key was never backed up, and it seems that no one knows what the password is.
Do I have to manually reload the reports? There are a lot of reports.
[URL]
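If the key and its password truly cannot be recovered, the documented fallback (assumption: you accept re-entering data source credentials, since report definitions themselves are not encrypted and survive) is to delete the encrypted content with the Reporting Services key management utility:
rskeymgmt -d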
Sep 26, 2013
I want to use BCP to load data from a text file.
By default, constraints are turned off in bcp, so I use the CHECK_CONSTRAINTS hint.
bcp aborts if ANY of the rows contains an FK violation. No data gets loaded.
So if I add the -b 1 batch size option, it loads all data UNTIL the first FK violation, but nothing after that.
I want to load EVERYTHING ... except for the violations. But bcp won't let me. Is there a way?
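One hedged workaround, since bcp itself won't skip just the offending rows: load everything into a staging table with no constraints, then move the valid rows across in T-SQL (all names illustrative):
rem Step 1: bulk load everything into an unconstrained staging table:
bcp MyDB.dbo.Orders_Stage in orders.txt -c -T -S MYSERVER

-- Step 2 (T-SQL): move only the rows that satisfy the FK; whatever is left
-- behind in the staging table is the list of violations to inspect.
INSERT INTO dbo.Orders (OrderID, CustomerID, Amount)
SELECT s.OrderID, s.CustomerID, s.Amount
FROM dbo.Orders_Stage AS s
WHERE EXISTS (SELECT 1 FROM dbo.Customers AS c
              WHERE c.CustomerID = s.CustomerID);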
Jan 13, 2014
If I install an instance with Windows-only authentication and then change it to Mixed Mode, when I enable the sa login the password has already been set. What is the default? If it's generated, how secure is it? What algorithm is used for that?
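Regardless of what setup generated, the safe move is not to rely on it. A hedged sketch:
-- After switching to Mixed Mode, set the sa password explicitly
-- instead of trusting whatever was left behind at install time:
ALTER LOGIN sa WITH PASSWORD = N'<pick a strong password>';
ALTER LOGIN sa ENABLE;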
Mar 21, 2014
My SQL databases in SQL Server 2014 show the status "suspect" in SQL Server Management Studio. I can't bring the databases back to a serviceable condition through standard procedures; I need to recover from the .mdf file.
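A common last-resort sequence, offered as a hedged sketch only: REPAIR_ALLOW_DATA_LOSS can discard data, so copy the .mdf/.ldf files somewhere safe first and restore from backup if at all possible.
ALTER DATABASE MyDB SET EMERGENCY;
ALTER DATABASE MyDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (N'MyDB', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE MyDB SET MULTI_USER;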
Jun 18, 2014
I am using a monitoring system where I can monitor a numeric SQL result, assuming the result is one field and one row. I would like to use this to monitor, say, the free space (or free percentage) of the master database. DBCC SQLPERF gives me a few columns and results for all databases on the server.
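One hedged way to get a single-row, single-column result for free space in the master data files, using FILEPROPERTY:
USE master;
SELECT CAST(SUM(size - FILEPROPERTY(name, 'SpaceUsed')) * 8 / 1024.0
            AS decimal(10,2)) AS FreeMB
FROM sys.database_files
WHERE type_desc = 'ROWS';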
Jun 25, 2014
In our environment, applications use a DNS name which points to the physical server IP address. Now we are planning to move to 2014. We are planning to have servers in different subnets, so we will have two IP addresses for the listener. How can we point the DNS to the listener IPs? If a failover happens, can the DNS point to the exact IP address of the listener on the primary node?
Jul 31, 2014
Is there a way to schedule a SQL job to run at different intervals?
For eg:
The job should run at
7:00 Am
8:00 AM
and then at 10:00 Am
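A single Agent job can have several schedules, which would cover this. A hedged sketch using msdb.dbo.sp_add_jobschedule (job and schedule names illustrative; @freq_type = 4 means daily, @active_start_time is HHMMSS):
EXEC msdb.dbo.sp_add_jobschedule @job_name = N'ImportJob',
     @name = N'7am',  @freq_type = 4, @freq_interval = 1,
     @active_start_time = 70000;
EXEC msdb.dbo.sp_add_jobschedule @job_name = N'ImportJob',
     @name = N'8am',  @freq_type = 4, @freq_interval = 1,
     @active_start_time = 80000;
EXEC msdb.dbo.sp_add_jobschedule @job_name = N'ImportJob',
     @name = N'10am', @freq_type = 4, @freq_interval = 1,
     @active_start_time = 100000;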
Nov 19, 2014
We installed SQL 2014 on a test server.
We frequently observe:
"Process 0:0:0 (0x1e10) Worker 0x00000006B6D341A0 appears to be non-yielding on Scheduler 13. Thread creation time: 12906028806348. Approx Thread CPU Used: kernel 0 ms, user 0 ms. Process Utilization 13%. System Idle 84%. Interval: 70189 ms."
Is it better to run Profiler or Performance Monitor counters?
What filters should we select in Profiler to monitor the SQL Server?
Dec 8, 2014
I have a SQL server box running 2014 reporting services. I have another server running IIS v8.
I would like to be able to connect to the IIS site and be given the SSRS report browser.
So externally if I browse to [URL], I am presented with the report server interface, the same as if I browse to http://xxx.xxx.xxx.xxx/reports internally.
What are my options?
Dec 11, 2014
What is the best approach for a read-only copy of a database that is ~1TB? The primary database is fed nightly by an ETL process. We are currently trying to duplicate the ETL to the read-only server, but that process is not going well. So we are looking at other options to let SQL make the copy.
The primary database is on a Win12R2 with SQL 12 or 14, a 2 node A/P failover cluster.
The read only copy will be on a Win12R2 with SQL 12 or 14. It is not a requirement to fail over to the read only copy if the primary should go down.
What would be the best approach to accomplish the end result?
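One common option here is log shipping with the secondary kept in STANDBY, which leaves the copy readable between restores. A hedged sketch of the restore step (paths illustrative):
RESTORE LOG BigDB
FROM DISK = N'\\share\logs\BigDB_20141211.trn'
WITH STANDBY = N'D:\Standby\BigDB_undo.dat';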