SQL Server Admin 2014 :: Storing Very Large Tables?
Jan 24, 2015
We are in the middle of re-designing a few tables (namely transaction tables) that will store very large amounts of data and will be hosted in the cloud (Azure). The old design of this product breaks transaction tables into monthly tables, i.e. the ORDERS table would be physically broken into twelve monthly tables over a year, like ORDERS0115 (mmyy), ORDERS0215 and so on.
We are of the opinion that keeping all the transactions in one table is better. I would like to know what the best practices are for transaction tables like the one mentioned above. Is it better to use one table with partitions? I read somewhere that partitions can slow down SELECT queries if not designed and thought through properly. Since this will be hosted in the cloud (Azure), do you think any additional things need to be taken care of? How does a site like Amazon keep its transaction tables?
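For what it's worth, a single table with monthly partitions is the usual replacement for the ORDERSmmyy design. Here is a minimal sketch, assuming an OrderDate column; the names and boundary dates are illustrative only and new boundaries would be split in each month:

CREATE PARTITION FUNCTION pf_OrdersMonthly (datetime2)
AS RANGE RIGHT FOR VALUES ('2015-01-01', '2015-02-01', '2015-03-01');

CREATE PARTITION SCHEME ps_OrdersMonthly
AS PARTITION pf_OrdersMonthly ALL TO ([PRIMARY]);

CREATE TABLE dbo.ORDERS
(
    OrderID   bigint    NOT NULL,
    OrderDate datetime2 NOT NULL,
    -- remaining order columns ...
    CONSTRAINT PK_ORDERS PRIMARY KEY CLUSTERED (OrderDate, OrderID)
) ON ps_OrdersMonthly (OrderDate);

Queries that filter on OrderDate touch only the relevant partitions (partition elimination), which is the main way to avoid the SELECT slowdown mentioned above; queries that do not filter on the partitioning column still scan every partition.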
I want to analyze the procedure cache to find inefficient plans and parameter issues.
I do this through the DMVs, but my queries against the DMVs are very slow and resource-intensive because the procedure cache is several GB. I don't actually need online analysis.
Is it possible to take a fast snapshot of the procedure cache?
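One approach worth trying is a point-in-time copy of the cache DMVs into a regular table, so the heavy analysis queries run against the copy instead of the live cache. A minimal sketch, assuming a scratch database named DBA (any database you own will do):

SELECT GETDATE() AS snapshot_time,
       qs.*,
       st.text AS sql_text
INTO   DBA.dbo.plan_cache_snapshot
FROM   sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st;

Pulling the full query plans (sys.dm_exec_query_plan) as well is possible, but that is usually what makes these queries slow, so it may be better to fetch plans only for the handles you end up investigating.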
I have a 2-node cluster with 4 cores each, running 3 instances of SQL 2008 R2 Enterprise comprising 60 databases, 20 on each instance. I need to set up mirroring for each of the databases to a secondary server that also has 4 cores and 3 instances. What I understand is that in this case the mirror server will provide a maximum of 512 worker threads and the 60 mirror databases would consume 240 threads. What needs to be checked to assess the feasibility of going ahead with an async mirror setup as described above?
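A hedged starting point is simply to compare the configured worker-thread ceiling on each mirror instance with what is actually in use; a sketch:

-- maximum worker threads this instance can use
SELECT max_workers_count FROM sys.dm_os_sys_info;

-- workers currently allocated across the visible schedulers
SELECT SUM(current_workers_count) AS workers_in_use
FROM   sys.dm_os_schedulers
WHERE  status = N'VISIBLE ONLINE';

If the mirroring threads plus the normal workload approach max_workers_count, THREADPOOL waits on the mirror are the symptom to watch for.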
I've installed the MDW (Management Data Warehouse) database on our central monitoring SQL Server. I've then added a number of servers to be monitored. The data is collected on the servers being monitored and uploaded to the central MDW monitoring server.
On the servers that are being monitored, I'm seeing a large number (over 1000) of SPIDs being generated by 'SQL Server Data Collector'.
Is this normal behaviour? I've seen more blocking as a result of this.
Is there any way to reduce the number of SPIDs generated?
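To quantify it, a quick check on one of the monitored servers might be the following (assuming, as the symptom suggests, that the collector sessions identify themselves via program_name):

SELECT session_id, login_time, status, program_name
FROM   sys.dm_exec_sessions
WHERE  program_name LIKE N'%Data Collector%'
ORDER BY login_time;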
Hello,

Currently we have a database, and it is our desire for it to be able to store millions of records. The data in the table can be divided up by client, and it stores nothing but about 7 integers:

| table | id | clientId | int1 | int2 | int3 | ... |

Right now, our benchmarks indicate a drastic increase in performance if we divide the data into different tables, for example table_clientA, table_clientB, table_clientC, despite the fact that the tables contain the exact same columns:

| table_clientA | id | clientId | int1 | int2 | int3 | ... |
| table_clientB | id | clientId | int1 | int2 | int3 | ... |
| table_clientC | id | clientId | int1 | int2 | int3 | ... |

This, however, does not seem very clean or elegant to me, and rather illogical, since a database exists as a single file on the hard drive. Is there any way to duplicate this increase in database performance gained by splitting the table, perhaps by using a certain type of index?

Thanks,
Jeff Brubaker
Software Developer
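Not an authoritative answer, but one thing the per-client tables buy you is that each client's rows are stored physically together. A clustered index led by clientId on the single table gives a similar layout; a sketch using the hypothetical names from the post, assuming id is not already the clustered primary key (if it is, re-create that key as nonclustered or re-order the clustered key instead):

-- cluster the single table on (clientId, id) so each client's rows are stored together
CREATE CLUSTERED INDEX IX_table_clientId_id ON dbo.[table] (clientId, id);

Queries that filter on clientId then read one contiguous range instead of rows interleaved across all clients.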
I had to reinstall my local copy of SQL a few weeks ago, which naturally overwrote the
msdb.dbo.sysmanagement_shared_server_groups_internal and msdb.dbo.sysmanagement_shared_registered_servers_internal tables.
However, I still have the local XML file that SSMS reads, so I can still access the groups; I just get weird errors when trying to re-register my install as the new CMS. How can I rebuild those tables from the XML file, or is there a way to repopulate them?
I have 6 tables which are very huge in row count and need to be partitioned for better manageability.
A little info: every day, 300 million records are inserted and 300 million records are deleted across the tables below. We maintain only 8 days' worth of data in these tables, which is why records older than 8 days are continuously deleted.
Master table, which has [ID] and [Timestamp]. Table name: Sample - 2,578,106 rows.
Child tables (the foreign key [ID] is common to all of them; there is no timestamp column in the child tables):
dbo.ConnectionDB - 1,147,578,048
dbo.ConnectionSS - 876,458,321
dbo.ConnectionRT - 118,133,857
dbo.ConnectionSample - 100,038,535
dbo.Command - 100,032,235
I would like to partition the child tables above based on the IDs that are inserted every 4 hours. Meaning, all IDs inserted within a given 4-hour window should land in the same partition.
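A minimal sketch of what that could look like, assuming [ID] is ever-increasing so that a contiguous ID range corresponds to a 4-hour window; the boundary values are purely illustrative and would be computed from the Sample (master) table for each window:

CREATE PARTITION FUNCTION pf_ConnectionByID (bigint)
AS RANGE RIGHT FOR VALUES (1000000, 2000000, 3000000);

CREATE PARTITION SCHEME ps_ConnectionByID
AS PARTITION pf_ConnectionByID ALL TO ([PRIMARY]);

-- every 4 hours: add a new boundary at the max ID of the window that just closed
ALTER PARTITION SCHEME ps_ConnectionByID NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION pf_ConnectionByID() SPLIT RANGE (4000000);

-- after 8 days: empty the oldest partition (SWITCH it out to a staging table),
-- then remove its boundary
ALTER PARTITION FUNCTION pf_ConnectionByID() MERGE RANGE (1000000);

Switching out and merging an empty oldest partition is a metadata-only operation, which is what turns the 300-million-row daily delete into something cheap.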
I'm working on databases where the statistics on some indexes (tables) change too frequently. One minute after I update them manually they already show a 10-20% change, and five minutes later they show over a 100% change. The tables are updated very frequently (multiple times per second).
When I run a query against sys.stats, sys.dm_db_stats_properties and other dynamic views, I see that the statistics were last updated when I did it manually, but the modification rate has passed the 500 + 20% threshold (the tables have row counts in the tens of thousands). Auto create and auto update statistics are set to true on all databases, and I don't know why SQL Server does not do that automatically.
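For reference, this is roughly the check I would expect to show the problem (the table name is a placeholder):

SELECT OBJECT_NAME(s.[object_id]) AS table_name,
       s.name                     AS stats_name,
       sp.last_updated,
       sp.[rows],
       sp.modification_counter
FROM   sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.[object_id], s.stats_id) AS sp
WHERE  s.[object_id] = OBJECT_ID(N'dbo.MyTable');   -- hypothetical table name

If modification_counter keeps climbing well past the threshold without last_updated moving, keep in mind that auto-update fires only when a query next compiles against the table, not on a timer.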
- An MSSQL 2014 Standard server that houses multiple small databases (in excess of a hundred).
- These databases are frequently dropped and restored by an application that uses this SQL Server.
- There is a business need for this setup at this time, so I can't get away from it. Therefore answers like "don't have so many small databases that are frequently dropped and restored" would not be very useful.
This is the problem I have:
- When I connect SSMS 2014 to the server and expand the "Databases" node, it takes forever to display. In comparison, SSMS 2008 connected to SQL 2008R2 server with the same number of databases displays the Databases tree very quickly.
I ran a trace to see what exactly SSMS 2014 is doing. When the "Databases" node is expanded, it runs a query that checks each database for memory-optimized tables (a new and wonderful feature of SQL 2014, for sure, but I'm not using it, at least not yet). Naturally, when it has to loop through over a hundred DBs, this takes time. Worse yet, if one of these DBs is in the process of being restored, the query sits and waits to time out before proceeding to the next DB. Sometimes this causes outright timeouts. Here is the query:
USE [MyDatabase]
SELECT ISNULL((SELECT TOP 1 1 FROM sys.filegroups FG WHERE FG.[type] = 'FX'), 0) AS [HasMemoryOptimizedObjects]
To be sure, this is NOT a SQL Server performance issue. This server processes a rather heavy workload and has been doing so for over a month, and the workload completes within expected time limits or better. Even so I've done some basic performance measuring, and the server itself is quite all right.
Moreover, if I connect SSMS 2008 to it, I get an error message ("Index out of bounds" or some such), but SSMS 2008 does connect and displays the Databases tree much faster than SSMS 2014.
I'd like to turn off the option to check for Memory Optimized Objects altogether, as I'm not using the feature.
As per the attachment, I have created the replication, but nothing is populated under Local Subscriptions. At the same time, the subscription database has been created, but the tables are not populated as per the publication tables.
I want to set up a database role so that users can use sp_readerrorlog through SSMS. It does a check on membership in the securityadmin role.
I have tested it and can see you can grant execute on xp_readerrorlog but the SSMS GUI uses sp_readerrorlog.
I thought I could create a user/certificate and add the signature to sp_readerrorlog but it's not permitted (likely because it's not a normal database object).
So the other solution is to add the users to the securityadmin role but then explicitly deny ALTER ANY LOGIN (best done with a custom server role in 2012+, otherwise just manually in 2008). I tested this out and it works: I'm not able to alter any logins or increase my own permissions. I also checked what's reported from fn_my_permissions(null, null) and it shows minimal permissions, as I'd expect.
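For anyone else reading, a rough sketch of the 2012+ variant described above (the login and role names are illustrative):

CREATE SERVER ROLE errorlog_readers;
DENY ALTER ANY LOGIN TO errorlog_readers;

-- the user gets securityadmin (so sp_readerrorlog's role check passes)
-- plus the denying role (so the dangerous part of securityadmin is blocked)
ALTER SERVER ROLE securityadmin    ADD MEMBER [DOMAIN\SomeUser];
ALTER SERVER ROLE errorlog_readers ADD MEMBER [DOMAIN\SomeUser];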
I have a table that I'm inserting a file into, using the Image data type to store the binary object. The code below works fine for files around 1.5 MB, but for anything larger it's like the code won't even execute and I get a "Page Not Found" error. I'm in the process of running some traces to find out what's going on in the back end, but I'm assuming there's something amiss with my code. The Image data type should handle files that size with no problem, but for some reason it isn't. Does anyone see anything wrong? Thanks

Dim iLength As Integer = CType(File1.PostedFile.InputStream.Length, Integer)
If iLength = 0 Then Exit Sub 'not a valid file
Dim sContentType As String = File1.PostedFile.ContentType
Dim sFileName As String, i As Integer
Dim bytContent As Byte()
ReDim bytContent(iLength) 'byte array, set to file size

'strip the path off the filename
i = InStrRev(File1.PostedFile.FileName.Trim, "\")
If i = 0 Then
    sFileName = File1.PostedFile.FileName.Trim
Else
    sFileName = Right(File1.PostedFile.FileName.Trim, Len(File1.PostedFile.FileName.Trim) - i)
End If

conn = New SqlConnection(eco)
conn.Open()
cmd = New SqlCommand("INSERT INTO ECO_Attachments (ECOID, FromType, DocName, OldRev, NewRev, NtLogin, DisplayName, FileName, FileSize, FileData, ContentType) " & _
                     "VALUES (@ECOID, @FromType, @DocName, @OldRev, @NewRev, @NtLogin, @DisplayName, @FileName, @FileSize, @FileData, @ContentType)")
cmd.Connection = conn

Try
    'read the posted file into the byte array and bind all parameters
    File1.PostedFile.InputStream.Read(bytContent, 0, iLength)
    With cmd
        .Parameters.Add("@ECOID", SqlDbType.Int)
        .Parameters.Add("@FromType", SqlDbType.NVarChar, 50)
        .Parameters.Add("@DocName", SqlDbType.NVarChar, 250)
        .Parameters.Add("@OldRev", SqlDbType.NVarChar, 50)
        .Parameters.Add("@NewRev", SqlDbType.NVarChar, 50)
        .Parameters.Add("@NTLogin", SqlDbType.NVarChar, 100)
        .Parameters.Add("@DisplayName", SqlDbType.NVarChar, 200)
        .Parameters.Add("@FileName", SqlDbType.NVarChar, 255)
        .Parameters.Add("@FileSize", SqlDbType.Real)
        .Parameters.Add("@FileData", SqlDbType.Image)
        .Parameters.Add("@ContentType", SqlDbType.NVarChar, 50)
        .Parameters("@ECOID").Value = ECOID
        .Parameters("@FromType").Value = From
        .Parameters("@DocName").Value = DocName
        .Parameters("@OldRev").Value = OldRev
        .Parameters("@NewRev").Value = NewRev
        .Parameters("@NTLogin").Value = NTLogon
        .Parameters("@DisplayName").Value = DisplayName
        .Parameters("@FileName").Value = sFileName
        .Parameters("@FileSize").Value = iLength
        .Parameters("@FileData").Value = bytContent
        .Parameters("@ContentType").Value = sContentType
        .ExecuteNonQuery() '.ExecuteScalar()
    End With
Catch ex As Exception
    Response.Write(ex) 'Handle your database error here
Finally
    conn.Close() 'close the connection on both the success and failure paths
End Try
If I install an instance with Windows-only authentication and then change it to Mixed Mode, when I enable the sa login the password has already been set. What is the default? Is the password generated, and if so, how secure is it? What algorithm is used for that?
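Regardless of what the setup stored, the usual practice is not to rely on it: set your own password at the moment you enable sa. A sketch (the password below is a placeholder):

ALTER LOGIN sa WITH PASSWORD = N'<ReplaceWithStrongPassword>';
ALTER LOGIN sa ENABLE;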
My SQL databases in SQL Server 2014 have the status "suspect", as I saw in SQL Server Management Studio. I can't restore the databases to a serviceable condition through standard procedures. I need to recover them from the .mdf files.
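Not specific advice for this case, but the commonly cited last-resort sequence for a suspect database with no usable backup looks like the sketch below. REPAIR_ALLOW_DATA_LOSS can discard data, so restoring from backup is always preferable when possible (the database name is a placeholder):

ALTER DATABASE [MyDb] SET EMERGENCY;
ALTER DATABASE [MyDb] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (N'MyDb', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE [MyDb] SET MULTI_USER;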
I am using a monitoring system where I can monitor a numeric SQL result, assuming the result is one field and one row. I would like to use this to monitor, say, the free available space or percentage on the master database. DBCC SQLPERF gives me several columns and results for all databases on the server.
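A minimal sketch that returns a single row and a single column, assuming the monitoring tool can run an arbitrary T-SQL statement in the context of the master database:

-- free space (MB) inside the allocated files of the current database (run in master)
SELECT CAST(SUM(size - FILEPROPERTY(name, 'SpaceUsed')) * 8 / 1024.0 AS decimal(18, 2)) AS free_space_mb
FROM   sys.database_files;

The same idea divided by SUM(size) * 8 / 1024.0 gives a percentage instead of an absolute value.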
In our environment, applications use a DNS name which points to the physical server's IP address. Now we are planning to move to 2014. We are planning to have servers in different subnets, so we will have two IP addresses for the listener. How can we point the DNS name to the listener IPs? If a failover happens, can DNS point to the exact IP address of the listener on whichever node is primary?
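For context, with an availability group listener you normally don't point DNS at the nodes yourself: the listener gets one static IP per subnet and the cluster registers both under the listener's DNS name. A sketch with illustrative names and addresses:

ALTER AVAILABILITY GROUP [MyAG]
ADD LISTENER N'MyListener'
(
    WITH IP ( (N'10.10.1.50', N'255.255.255.0'),
              (N'10.20.1.50', N'255.255.255.0') ),
    PORT = 1433
);

After a failover, only the IP that lives on the new primary's subnet is brought online; clients that connect with MultiSubnetFailover=True try the registered addresses in parallel and reach whichever one answers.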
"Process 0:0:0 (0x1e10) Worker 0x00000006B6D341A0 appears to be non-yielding on Scheduler 13. Thread creation time: 12906028806348. Approx Thread CPU Used: kernel 0 ms, user 0 ms. Process Utilization 13%. System Idle 84%. Interval: 70189 ms."
Is it better to run Profiler or performance counters?
What filters do we have to select in Profiler to monitor SQL Server?
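Whichever tool you choose, it can also help to look at the scheduler named in the message directly; a sketch for Scheduler 13 from the error above:

SELECT scheduler_id, [status], is_idle,
       current_tasks_count, runnable_tasks_count,
       active_workers_count, work_queue_count
FROM   sys.dm_os_schedulers
WHERE  scheduler_id = 13;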
I have a SQL server box running 2014 reporting services. I have another server running IIS v8.
I would like to be able to connect to the IIS site and be given the SSRS report browser.
So externally if I browse to [URL], I am presented with the report server interface, the same as if I browse to http://xxx.xxx.xxx.xxx/reports internally.
What is the best approach for a read-only copy of a database that is ~1 TB? The primary database is fed nightly by an ETL process. We are currently trying to duplicate the ETL to the read-only server, but that process is not going well, so we are looking at other options that let SQL Server make the copy.
The primary database is on Windows Server 2012 R2 with SQL Server 2012 or 2014, in a 2-node active/passive failover cluster.
The read-only copy will be on Windows Server 2012 R2 with SQL Server 2012 or 2014. It is not a requirement to fail over to the read-only copy if the primary goes down.
What would be the best approach to accomplish the end result?
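One option that lets SQL Server make the copy is log shipping with the secondary restored in STANDBY mode, which keeps the read-only copy usable between restores. A rough sketch of the restore side, with illustrative paths and names:

-- one-time initialisation of the read-only copy
RESTORE DATABASE [MyDb_ReadOnly]
FROM DISK = N'\\backupshare\MyDb_full.bak'
WITH STANDBY = N'D:\Standby\MyDb_undo.dat';

-- repeated after each nightly ETL / log backup
RESTORE LOG [MyDb_ReadOnly]
FROM DISK = N'\\backupshare\MyDb_log.trn'
WITH STANDBY = N'D:\Standby\MyDb_undo.dat';

Users are disconnected while each restore runs; if that is not acceptable, an Availability Group readable secondary (Enterprise) or transactional replication are the other common options.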
I have 10 databases which are configured as principal in mirroring. I need to fail over all the databases as part of a failover. Instead of writing a SET PARTNER FAILOVER query for each database, is there a script which will generate the failover commands for the databases that are currently principal?
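A minimal sketch that generates the commands from sys.database_mirroring on the principal server (copy the output and execute it there):

SELECT N'ALTER DATABASE ' + QUOTENAME(DB_NAME(database_id)) + N' SET PARTNER FAILOVER;'
FROM   sys.database_mirroring
WHERE  mirroring_role_desc = N'PRINCIPAL';

Each generated ALTER DATABASE must be run on the current principal, and the mirroring session must be synchronized for a manual failover to succeed.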
After installing SSMS on some computers, the only way we can get SSMS to run correctly is to run it as administrator. Is there a way to avoid having to do that? These end users are logging in as themselves and have accounts in SQL Server all set up, but SSMS will only launch for them if we right-click and select "Run as administrator". After doing some digging, it seems that this is a common problem out there.
I have a SQL 2014 install and cannot for the life of me get the maintenance plan to remove old backups. I've tried everything. Rights to the folder where the backups are stored are adequate, the extension set in the cleanup task is as it should be, etc. The log shows the job ran successfully, and running the command manually shows successful completion, but the backups are still not removed.
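One heavily hedged thing to try: as far as I know, the cleanup task calls the undocumented xp_delete_file under the covers, so running it by hand with the same folder, extension and cutoff can show whether the parameters themselves are the problem (note the extension is given without a leading dot; the path and date below are illustrative):

EXECUTE master.dbo.xp_delete_file
    0,                           -- 0 = backup files
    N'D:\Backups\',              -- folder the cleanup task points at
    N'bak',                      -- extension, no dot
    N'2015-01-10T00:00:00',      -- delete files older than this
    0;                           -- 0 = this folder only, 1 = include first-level subfolders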
When I execute a RESTORE LOG command, the Messages window shows how many seconds the restore takes; at the same time, the status bar also shows how many seconds the command takes.
The two values are different and can be very different. Please see the examples below: one restore takes 1.8 seconds, but in total the command takes 4 seconds to complete; the other one is 8.1 seconds versus 12 seconds.
What does SQL Server or Windows do after the restore?
pic a:
pic b:
I did an xperf trace. I can see that after the restore is completed, SQL Server does a garbage collect and a log write, which run very quickly, but the storage is busy reading the log file for nearly 2.2 seconds (4 - 1.8) and 4 seconds (12 - 8.1).
pic 1:
pic 2:
See pic 1 above: from 13 to 17, the restore operation is finished, but the storage jumps to 100% active to do some reads, only reads, no writes. Zooming into that period shows pic 2: it reads 4096 (I don't know the unit size) for about 4 seconds. What is this doing?
The data file, log file and backup file are on different drives, but all local drives. The interesting point is that the reads jump after the restore. I tested it on a different server, same result...