CDC creates additional tables under System Tables.
What is the performance overhead of these tables on the database?
I am going to access the CDC records through an ETL tool, and once the data has been read I am going to delete the records.
If changes are frequent, a few more records may be added to the CDC tables after I have read the data. Will CDC truncate the tables, or remove only the records that have already been read?
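For reference, this is the read-then-clean-up cycle I have in mind - a minimal sketch, assuming a hypothetical capture instance named dbo_MyTable; from what I can tell, sys.sp_cdc_cleanup_change_table deletes rows up to a low-water-mark LSN in batches rather than truncating:

DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('dbo_MyTable');
DECLARE @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();

-- Read everything captured so far.
SELECT *
FROM cdc.fn_cdc_get_all_changes_dbo_MyTable(@from_lsn, @to_lsn, N'all');

-- Remove only what was just read; rows captured after @to_lsn survive.
EXEC sys.sp_cdc_cleanup_change_table
    @capture_instance = 'dbo_MyTable',
    @low_water_mark   = @to_lsn,
    @threshold        = 5000;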
Hello,

I have read several newsgroup articles about bulk deletes, and one approach I found is to:

- create a temporary table with all the constraints of the original table
- insert the rows to be retained into that temp table
- drop the constraints on the original table
- drop the original table
- rename the temporary table

My purge is a daily job, and my question is how this works on a heavily loaded operational database. Thousands of records are written into my tables (the same tables I want to purge rows from) every second. While I am copying to the temp table and dropping the original table, what happens to that operational data?

Another way of doing the bulk delete is using BCP:

1) BCP out the rows to be deleted to an archive file
2) BCP out the rows to be retained
3) Drop the indexes and truncate the table
4) BCP in the rows to be retained
5) Create the indexes

Again, the same question: while I'm running BCP, is there any blocking of inserts into the original table? What happens to rows that arrive in the meantime? Does BCP acquire an exclusive lock on the table that prevents any other insertion?

Does anyone have experience with a BCP command exporting 2 million records, and how long would it take?

I appreciate your help.
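For comparison, the alternative I keep seeing suggested is deleting in small batches so the table never goes offline - a minimal sketch, assuming SQL Server 2005 or later (older versions would use SET ROWCOUNT) and a hypothetical dbo.History table:

-- Each small DELETE is its own short transaction, so concurrent
-- inserts are blocked only briefly instead of for the whole purge.
DECLARE @rows int;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (5000) FROM dbo.History
    WHERE CreatedDate < DATEADD(DAY, -90, GETDATE());
    SET @rows = @@ROWCOUNT;
END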
We use timed subscriptions to do almost all of our reporting. Reports are delivered (primarily via e-mail and printer) once they are completed and users don't have to "watch the pot boil" so to speak.
Apparently SSRS has some load balancing capability whereby it lets only a limited number of threads/reports run concurrently. We often reach this max and lock ourselves up on some very long-running reports, causing other important reports to wait a long time.
We've added some operational reports (i.e. document prints) to the mix. These reports run off of OLTP data. They are very fast and very high priority. Waiting on them is not an option. Is there some way we can get SSRS to work on these operational reports in preference to other types of reports (e.g. "just for kicks" reports)? I think we'd almost like to add another SSRS server and dedicate it to the operational reports. Ideally the new SSRS server would use the same Report Server database but would only work on subscriptions for certain documents.
Has anybody else tried to solve this problem? This MS document does not really address subscriptions or load balancing by report: http://www.microsoft.com/technet/prodtechnol/sql/2005/pspsqlrs.mspx
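The closest setting I have found so far - and I am not certain it is the right lever, or exactly where it lives in the file on each version - is MaxQueueThreads in RSReportServer.config, which caps how many subscription notifications the report server processes at once:

<!-- RSReportServer.config: 0 means let the server decide; a small
     fixed number throttles concurrent subscription processing. -->
<MaxQueueThreads>0</MaxQueueThreads>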
I recently configured a SQL Server 2012 AlwaysOn availability group using two nodes - a primary and one secondary read-only replica. The group resides on a Windows 2012 cluster with an SMB file share as the quorum. I am able to successfully fail over through SQL and through the Windows 2012 cluster. However, when I look at the group dashboard on the primary server and view the operational state of each node, I notice an odd value: the secondary-role server is listed as Unknown. I also noticed that on the primary server the availability replica icons in Object Explorer display the same icon, but on the secondary server the primary server is shown as a server with a question mark.
Am I missing a permissions setting, or is this normal behavior?
For example:
ServerA is the primary; ServerB is the secondary.

ServerA lists the servers in Object Explorer as:
ServerA (Primary)
ServerB (Secondary)

ServerB lists the servers in Object Explorer as:
ServerA
ServerB (Secondary)
The primary is never listed as primary on the secondary server. Again, failovers are working properly, but I want to be sure I am not missing a setting somewhere.
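In case it is relevant, here is what I checked - a sketch with a placeholder login name; my understanding is that the dashboard state comes from the HADR DMVs, and reading them requires VIEW SERVER STATE on each replica:

-- Grant on the replica where the state shows as Unknown.
GRANT VIEW SERVER STATE TO [DOMAIN\dba_user];

-- What this replica can actually see:
SELECT r.replica_server_name, rs.role_desc, rs.operational_state_desc
FROM sys.availability_replicas AS r
JOIN sys.dm_hadr_availability_replica_states AS rs
    ON r.replica_id = rs.replica_id;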
Hi, is it overhead to use SqlTransaction (begin, commit, rollback) for a single statement? I am not using the application block or Enterprise Library - it is only a single insert statement.
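To make the question concrete - a minimal sketch with a hypothetical table; as I understand it, a single statement is already atomic under the default autocommit mode, so these two forms should behave the same:

-- Autocommit: the INSERT is its own atomic transaction.
INSERT INTO dbo.Orders (CustomerId, Total) VALUES (1, 99.00);

-- Explicit transaction: same effect for one statement, plus the
-- extra round trips for BEGIN and COMMIT.
BEGIN TRANSACTION;
INSERT INTO dbo.Orders (CustomerId, Total) VALUES (1, 99.00);
COMMIT TRANSACTION;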
As part of dealing with a locking problem I am fine-tuning a stored procedure that updates a table. The application updates a row by changing every single column except for the primary key, whether 1 or all of the columns have been modified. Although easier to code, this strikes me as using a sledgehammer to crack a nut.
Could anyone tell me if there is any benefit in attempting to break this down, that is, coding the stored procedure so that only the columns being changed are modified? I am thinking this might dramatically reduce the overhead of writing to the transaction log and making the changes to the actual row. If the benefit is non-existent (or insignificant) because of the way SQL Server updates a row, it will obviously be a waste of time to generate dynamic SQL.
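One middle ground I am considering, short of dynamic SQL - a sketch with a hypothetical dbo.Customer table - keeps the full SET list but skips the write entirely when nothing changed:

UPDATE dbo.Customer
SET    Name  = @Name,
       City  = @City,
       Phone = @Phone
WHERE  Id = @Id
  AND (Name  <> @Name
    OR City  <> @City
    OR Phone <> @Phone);
-- Note: <> never evaluates true against NULL, so nullable columns
-- need explicit IS NULL checks in the guard or a change can be missed.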
I have received some reports and have been asked to decide whether they should be developed as operational reports or analytical reports.
Basically, I want to understand what points need to be considered when deciding between analytical reporting (cubes) and operational reporting.
Does anyone have any benchmarks for the amount of overhead caused by autoshrink of the log and by having autostats enabled? We have a customer who insists that turning off these options was necessary to eliminate a performance problem they were having (query timeouts), but we are not convinced that these two options would have generated enough overhead to be the root cause (they also rebuilt all their indexes and made some other unspecified changes that more likely solved the problem).
We are hesitant to have them continue with these options disabled, because then we have to rely on them to keep the log file shrunk and the statistics updated, and given how the data changes during the day, we would prefer to have stats updated automatically rather than on a fixed schedule that may not be as appropriate.
Anyway, if anyone has any feedback on overhead generated and potential performance implications of having either of these options enabled, it would be greatly appreciated.
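For reference, these are the switches in question - a minimal sketch against a hypothetical database name, on SQL Server 2005 or later (older versions use sp_dboption):

-- Current settings:
SELECT name, is_auto_shrink_on, is_auto_update_stats_on
FROM sys.databases
WHERE name = 'MyAppDb';

-- Toggling them looks like:
ALTER DATABASE MyAppDb SET AUTO_SHRINK ON;
ALTER DATABASE MyAppDb SET AUTO_UPDATE_STATISTICS ON;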
We are having some issues with temporary tables (with # prefixes) within Stored Procedures.
When running a profile trace on them, the stored proc quite happily creates the temp # table (in fact several of them) but whenever it hits the first statement inserting data into one of them (and it doesn't matter which one), there is a 5-6 second delay.
By commenting out one and moving to the next piece of code, the same thing happens.
Following that, the rest of the stored proc runs fine and subsequent inserts into the # temp tables also run efficiently.
Is the stored proc getting recompiled perhaps ?
Any advice would be appreciated.
We are running SQL Server 7.0 - I don't know whether that helps.
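In case it matters, the structure is roughly like this - a sketch with hypothetical names; the standard advice for that era, as I understand it, is to create every temp table at the top of the procedure before any other statement, so any recompile triggered by their first use happens once and early:

CREATE PROCEDURE dbo.usp_BuildReport
AS
BEGIN
    -- All temp tables created first, before any DML.
    CREATE TABLE #detail (id int, val varchar(50))
    CREATE TABLE #totals (id int, total money)

    INSERT INTO #detail (id, val)
    SELECT id, val FROM dbo.SourceTable

    INSERT INTO #totals (id, total)
    SELECT id, SUM(amount) FROM dbo.Orders GROUP BY id

    SELECT d.val, t.total
    FROM #detail d JOIN #totals t ON t.id = d.id
END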
I have been doing some testing with SQL Server 2000 using a packet sniffer, and have found that it is sending bytes with a value of 0x00 between each "valid" character. For example, if it was going to send "hello" over the network, it would be transmitted as "h.e.l.l.o".
Can anybody suggest why this might be? It happens regardless of which client is used - ADO.NET, osql.exe, etc. Is it something to do with the encoding used?
Everything does work fine and the data is received intact, but if these seemingly redundant bytes could be removed, it would roughly halve the amount of data on the wire.
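The same 2x factor shows up inside the server, which makes me suspect the wire format is Unicode (UCS-2/UTF-16, two bytes per character, with the high byte 0x00 for ASCII text):

-- N'hello' is Unicode: 2 bytes per character.
SELECT DATALENGTH(N'hello') AS unicode_bytes,  -- 10
       DATALENGTH('hello')  AS ansi_bytes;     -- 5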
I would like to capture about 20 rows from the sysperfinfo table every 30 seconds on a production server, and I am thinking of ways to reduce the disk (not network) I/O overhead of this process. Instead of reading the table from a local SQL Agent job and writing to a local table, I am wondering if I should create the job to capture the data from a remote (less critical) server. The servers are connected via a gigabit line and are in the same server room on the same switch. This way the production server would only have to do the reads, while the other server would take care of the I/O of the writes to the capture table.

This job would run from 7am to 7pm non-stop; a WAITFOR DELAY of 30 seconds would control the twice-a-minute scheduling. Running this on the remote server would also free up a few SQL Agent CPU cycles - one less job for the production server to worry about.

Thanks
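Here is a sketch of the remote variant I have in mind, assuming a linked server named PRODSRV pointing at the production box and a hypothetical capture table on the monitoring server:

-- Runs on the monitoring server: the read hits PRODSRV, the write
-- lands locally, twice a minute until 7pm.
WHILE CONVERT(varchar(5), GETDATE(), 108) < '19:00'
BEGIN
    INSERT INTO dbo.PerfSnapshot (capture_time, counter_name, instance_name, cntr_value)
    SELECT GETDATE(), counter_name, instance_name, cntr_value
    FROM PRODSRV.master.dbo.sysperfinfo
    WHERE counter_name IN ('Page life expectancy', 'Buffer cache hit ratio');

    WAITFOR DELAY '00:00:30';
END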
I am a developer, and I have a disagreement with my DBA. He has convinced management that SQL 2005 Full Text Index is so much overhead on production that it should NEVER be used under any circumstances. We have a ColdFusion site, and somehow he convinced management that a bunch of ColdFusion developers can create a more efficient full-text indexing method than SQL 2005 Full Text Index. So now we have to come up with a method for doing this in ColdFusion.
Is there any statistical data that could possibly support or refute his statements? Thanks
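For benchmarking purposes, this is what the SQL side would look like - a minimal sketch with hypothetical table, catalog, and key index names:

-- One-time setup: a catalog plus a full-text index keyed on the PK.
CREATE FULLTEXT CATALOG DocsCatalog;
CREATE FULLTEXT INDEX ON dbo.Documents (Body)
    KEY INDEX PK_Documents ON DocsCatalog;

-- Query side: CONTAINS uses the full-text index rather than a LIKE scan.
SELECT DocumentId, Title
FROM dbo.Documents
WHERE CONTAINS(Body, 'efficient NEAR indexing');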
I've been asked to put together an estimation for the performance impact that replication would have on our database server during a particular operation. I know that this depends on a lot of different factors, including:
* Number of articles being replicated
* Types of articles being replicated
* Number of DML transactions that would result in delivery of replicated data
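On the measurement side, one thing I plan to use is tracer tokens, which transactional replication (SQL Server 2005 and later) provides for tracking its own delivery latency - a sketch with a hypothetical publication name:

-- Run at the publisher, in the published database.
EXEC sys.sp_posttracertoken @publication = N'MyPublication';

-- List the tokens, then pull the latency history for one of them
-- (the @tracer_id comes from the previous call).
EXEC sys.sp_helptracertokens @publication = N'MyPublication';
EXEC sys.sp_helptracertokenhistory @publication = N'MyPublication', @tracer_id = 1;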
I am a Crystal Reports developer and I am new to the SSIS environment. I have started reading the Professional SQL Server 2005 IS book, and I am really confused by the many tasks to choose from.
I need to develop reports from a data warehouse. But first I have to send the data from the operational database (SQL Server 2000) to the warehouse (SQL Server 2005) monthly - I have a script for retrieving the data. For my package, I chose a Data Flow Task, an Execute SQL Task, and an OLE DB Destination, and it does not work.
Could you please point me to similar packages so I can see how they are put together? Thank you!!
I have several users with an unusual problem with SSMS 2012. When they attempt to connect to a database using the "Connect to Server" dialog box, the connection just hangs. Sometimes after about 15 minutes the connection will be successful. Other attempts simply spin seemingly endlessly. Users experiencing this issue are both running SSMS 2012 on Win 7 Pro (64 bit). The following troubleshooting steps have been tried:
1. When the user runs SSMS "As Administrator" the connections work almost instantly. (Elevating privileges is not a solution in our environment.)
2. Wireshark shows that SSMS does not try to hit the SQL Server when a user experiencing this issue clicks Connect.
3. I can create a brand-new user on the PC, and that login experiences the same problem.
4. A complete rip-and-reinstall of SSMS 2012 does not resolve the issue.
I have just finished configuring my first test mirrored environment (high-safety mode). I set up the database engine service accounts on each of the servers with a domain\user account. I also inherited a production mirrored environment set up by someone else; on those production servers the database engine service account is a local NT AUTHORITY account. I am trying to practice installing Windows updates within a mirrored environment, and I am not sure how to proceed when the service account is an NT AUTHORITY account. Should I change the service account to a domain\user?
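If I do switch to domain accounts, my understanding is that endpoint permissions have to follow - a sketch with a placeholder account, assuming the default Mirroring endpoint name:

-- On each partner, let the other server's service account connect
-- to the mirroring endpoint.
CREATE LOGIN [MYDOMAIN\sqlsvc] FROM WINDOWS;
GRANT CONNECT ON ENDPOINT::Mirroring TO [MYDOMAIN\sqlsvc];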
Last week I backed up my SQL Server using BE 2012. I named the file "SQL Server BAK"; it contained copies of my SQL Server databases. A few days ago I lost part of my data due to accidental deletion. Since I had backed it up, I tried to restore the database from the .bkf file. Here is the problem: when I try to restore my .bkf file, it is inaccessible. What causes this?
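One check I ran - a sketch with a placeholder path; as far as I know SQL Server can only restore its own native backup format, so if this errors the .bkf has to go back through Backup Exec itself:

-- Errors here mean the file is not a native SQL Server backup.
RESTORE HEADERONLY FROM DISK = N'D:\Backups\SQL Server BAK.bkf';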
Sometime during the night last night some user account permissions were "lost". Am I right to think that restoring the master database would be the way to go? We have a 2-node 2012 cluster; I stop the cluster resource and start the database engine in single-user mode from the active node. Somehow the SharePoint farm is still trying to connect, and it grabs the single-user connection before I can log in. What method could I use to stop users from connecting when I don't have access to the SharePoint farm?
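The approach I am considering - a sketch against a default instance; my understanding is that -m can take an application name, so single-user mode only accepts connections from a client reporting that name (on a clustered instance the parameter would go into the startup parameters before bringing the resource online):

REM Only clients whose application name is "SQLCMD" can take the
REM single-user connection, which locks out the SharePoint farm.
net start MSSQLSERVER /m"SQLCMD"

REM Then connect from sqlcmd to do the work:
sqlcmd -S . -E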
Our auditors are trying to enforce a requirement that all users be disconnected from any application or database after 15 minutes of inactivity. Is there any way to force a logoff within SQL Server? For example, someone is in Management Studio connected to a database. They should be logged off after 15 minutes of no activity.
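The only workaround I can think of is a scheduled job that kills idle sessions - a sketch, where the 15-minute threshold and the filters are assumptions to adapt:

-- Build and run KILL statements for user sessions idle > 15 minutes.
DECLARE @kill nvarchar(max) = N'';

SELECT @kill = @kill + N'KILL ' + CAST(session_id AS nvarchar(10)) + N'; '
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
  AND session_id <> @@SPID
  AND status = 'sleeping'
  AND last_request_end_time < DATEADD(MINUTE, -15, GETDATE());

EXEC sys.sp_executesql @kill;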
I would like to know if it is possible to split a database across different servers, in the same manner you can split it over multiple drives on the same server. We are trying to balance the load because we are finding that the current server can't handle it.
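The closest thing I have found is a distributed partitioned view - a sketch, assuming linked servers Server1 and Server2 and a hypothetical Orders table range-partitioned on OrderId:

-- On each server, a member table owns one range via a CHECK constraint:
--   Server1: CREATE TABLE dbo.Orders_1 (OrderId int PRIMARY KEY
--            CHECK (OrderId < 5000000), Amount money)
--   Server2: same, with CHECK (OrderId >= 5000000)

-- The view unions the member tables across the linked servers:
CREATE VIEW dbo.Orders AS
SELECT OrderId, Amount FROM Server1.Sales.dbo.Orders_1
UNION ALL
SELECT OrderId, Amount FROM Server2.Sales.dbo.Orders_2;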
I ran database consistency checks on 2 databases, the first little used (and relatively new), the other much more active. On the little-used DB the report didn't return any lines. On the much more active DB it returned 6 lines (one for each of the last 6 days), all reporting 0 errors. Should I be concerned that the one database is actually reporting information, even if it appears to tell me "all clear"?
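For comparison I also ran the check directly - a sketch with a placeholder database name; it stays silent when there is nothing to report:

DBCC CHECKDB ('MyAppDb') WITH NO_INFOMSGS;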
I have inherited a new SQL Server 2008 database server and cannot figure out how my user databases are being backed up. This database server is running under a VM.
All the user databases are being backed up nightly, per the SQL Server log. The backups are written to a virtual disk and are kicked off by the NT AUTHORITY\SYSTEM user. I cannot see the virtual disk. A restore task does not provide any information about the last backup. I have created a new database, and it was automatically included in the next set of backups.
I have looked at the Windows event viewer without any luck. There are no SQL Server maintenance plans or Agent jobs that call a backup. I have also checked the Windows Task Scheduler and cannot find any task that does a backup. Could the backups be called from another server?
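One place I have not fully mined yet: msdb records every backup regardless of what took it - a sketch; as I understand it, device_type 7 (virtual device) or a GUID-style device name usually points at a VSS/VM snapshot tool:

SELECT TOP (20)
       bs.database_name,
       bs.backup_finish_date,
       bs.user_name,
       bmf.device_type,
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
    ON bs.media_set_id = bmf.media_set_id
ORDER BY bs.backup_finish_date DESC;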
I have been trying to learn about availability groups, since I have not implemented them yet.
From my understanding you can have more than one group.
Let's pretend we have two groups in one instance:
1. Accounting
2. Engineering
From my understanding you can't put a database in two AGs, because that wouldn't make sense.
But let's pretend there is one database that is used by both accounting and engineering.
Would you have to make a third AG for it, so that in future failovers the other databases in the two groups don't fail over when not needed? Because when you fail over an AG, all the databases inside it fail over.
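If a third group is the answer, my reading is it would look like this - a sketch with placeholder server, endpoint, and database names:

CREATE AVAILABILITY GROUP [SharedAG]
FOR DATABASE [SharedDb]
REPLICA ON
    N'SQLNODE1' WITH (ENDPOINT_URL = N'TCP://sqlnode1.corp.local:5022',
                      AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                      FAILOVER_MODE = AUTOMATIC),
    N'SQLNODE2' WITH (ENDPOINT_URL = N'TCP://sqlnode2.corp.local:5022',
                      AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                      FAILOVER_MODE = AUTOMATIC);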
I am receiving the messages below; however, when I go into my database properties and look at 'Files', the log files are set either to unrestricted growth or to the 2,097,152 MB limit, and they are only taking up about 3 GB.
Could not allocate new page for database. There are no more pages available in the file group.
Database log file is full. Back up the transaction log for the database to free up some log space.
Could not allocate space for object in database because the filegroup is full.
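Since the errors name a filegroup rather than just the log, this is what I am using to check per-file fullness, run inside the affected database - a minimal sketch:

-- Size vs. space actually used, per file; a data filegroup can be
-- full even while the log has plenty of room.
SELECT name,
       type_desc,
       size / 128 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb,
       max_size
FROM sys.database_files;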