SQL Server Admin 2014 :: Rebuild CMS Tables From XML File?
Jul 23, 2014
I had to reinstall my local copy of SQL a few weeks ago, which naturally overwrote the
msdb.dbo.sysmanagement_shared_server_groups_internal and
msdb.dbo.sysmanagement_shared_registered_servers_internal tables.
However, I still have the local XML file that SSMS reads, so I can still access the groups; I just get odd errors when trying to re-register my installation as the new CMS. How can I rebuild those tables from the XML file, or otherwise repopulate them?
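In case it's useful while the XML file is still around, a hedged sketch: SSMS maintains the CMS tree through msdb stored procedures rather than direct edits to the internal tables, so the groups and servers from the XML can be re-registered by script. The group and server names below are placeholders, and the exact parameter lists should be verified against your msdb before running.

DECLARE @group_id int;
EXEC msdb.dbo.sp_sysmanagement_add_shared_server_group
     @name = N'Production',        -- placeholder group name from the XML
     @description = N'',
     @parent_id = 1,               -- assumption: ID of the root DatabaseEngineServerGroup row
     @server_type = 0,             -- 0 = Database Engine
     @server_group_id = @group_id OUTPUT;

EXEC msdb.dbo.sp_sysmanagement_add_shared_registered_server
     @name = N'SQL01',             -- placeholder display name
     @server_group_id = @group_id,
     @server_name = N'SQL01',      -- placeholder instance name
     @description = N'',
     @server_type = 0;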
View 3 Replies
Jul 27, 2015
Do I need to rebuild my indexes separately for my High Availability listeners?
When I do a full index rebuild on my primary DBs, does the rebuild also get sent to the listener(s)?
View 1 Replies
Jun 26, 2015
We face a slow-performance issue: the same query takes a long time to execute right after we rebuild and reorganize indexes, but after the query or procedure has executed 2-3 times, performance is faster. I have the following questions:
1. Do we need to update statistics after we rebuild and reorganize indexes?
2. Is it expected that every query and stored procedure executes slowly the first 1-2 times after we rebuild and reorganize indexes?
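On question 1, a minimal sketch with placeholder names: an index REBUILD refreshes that index's statistics with a full scan as a side effect, but REORGANIZE does not touch statistics at all, so updating them afterwards is reasonable.

ALTER INDEX ALL ON dbo.Orders REORGANIZE;      -- reorganize leaves statistics stale
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;    -- refresh them explicitly
-- or, for the whole database: EXEC sp_updatestats;

On question 2, a plausible explanation (not confirmed from this post alone) is plan recompilation plus a cold buffer cache right after maintenance, which would match things speeding up after 2-3 executions.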
View 2 Replies
Sep 15, 2015
I'm trying to determine how much space I need on my data drive and log-file drive to do an index rebuild. The database is 100 GB and in SIMPLE recovery mode. What should I look at to determine how much space is required?
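As a rough starting point, a hedged sketch: a rebuild needs free space in the data files roughly equal to the size of the index being rebuilt, plus sort space (in tempdb instead, if SORT_IN_TEMPDB is on); in SIMPLE recovery the rebuild is minimally logged, which eases the log-drive demand. Listing the largest indexes gives the ballpark:

SELECT TOP (10)
       OBJECT_NAME(ps.object_id) AS table_name,
       i.name                    AS index_name,
       SUM(ps.used_page_count) * 8 / 1024 AS used_mb   -- pages are 8 KB
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
GROUP BY ps.object_id, i.name
ORDER BY used_mb DESC;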
View 7 Replies
Mar 2, 2015
I have a 10 GB index and only 5 GB of disk space left.
How can I rebuild the index?
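A hedged sketch of the two usual space-constrained options, with a placeholder index name: REORGANIZE compacts in place and needs almost no free disk, while REBUILD needs room for a full new copy of the index, so with 5 GB free a 10 GB index cannot be rebuilt in place.

ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REORGANIZE;  -- in-place, minimal extra space
-- a REBUILD would need ~10 GB free for the new copy (SORT_IN_TEMPDB only relocates the sort):
-- ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD WITH (SORT_IN_TEMPDB = ON);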
View 4 Replies
May 13, 2015
I added a maintenance plan with an index rebuild task in SQL Server 2014, and the log file has grown by a tremendous amount.
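A hedged sketch, assuming the database is in FULL recovery (names and paths are placeholders): rebuilds are fully logged there, so log backups around the maintenance window, or a temporary switch to BULK_LOGGED where rebuilds are minimally logged, keep the log from ballooning.

BACKUP LOG MyDatabase TO DISK = N'X:\Backups\MyDatabase_log.trn';
-- or, for the maintenance window only:
ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED;
-- ... run the index rebuild task ...
ALTER DATABASE MyDatabase SET RECOVERY FULL;
BACKUP LOG MyDatabase TO DISK = N'X:\Backups\MyDatabase_log_after.trn';  -- restart the log chain coverage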
View 6 Replies
Jul 6, 2015
For a database, we have 4 data files in a particular file group and the file sizes are almost 70 GB each.
Will I run into any performance issues if I create/pre-allocate an additional data file in the same filegroup so that the existing files don't grow too much?
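A minimal sketch of the pre-allocation, with placeholder names and paths. One behavior to weigh rather than a hard problem: proportional fill favors the file with the most free space, so a brand-new empty file will absorb most new allocations until the files level out.

ALTER DATABASE MyDatabase
ADD FILE (
    NAME = N'MyDatabase_Data5',                  -- placeholder logical name
    FILENAME = N'E:\Data\MyDatabase_Data5.ndf',  -- placeholder path
    SIZE = 70GB,
    FILEGROWTH = 4GB
) TO FILEGROUP [UserData];                       -- placeholder filegroup name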
View 5 Replies
Apr 27, 2015
On one server we had file growth, so we had to add a new hard drive and create a new file on it. Now we have a new server with a huge hard drive, but all the files remain. Can I consolidate these files into one data file, or not?
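A hedged sketch of collapsing a secondary data file, with a placeholder logical file name: EMPTYFILE migrates the contents into the other files of the same filegroup, after which the file can be dropped (the primary .mdf itself cannot be removed).

DBCC SHRINKFILE (N'MyDatabase_Data2', EMPTYFILE);        -- move contents to the remaining files
ALTER DATABASE MyDatabase REMOVE FILE MyDatabase_Data2;  -- then drop the now-empty file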
View 3 Replies
Jan 24, 2015
We are in the middle of redesigning a few tables (namely transaction tables) that will store very large amounts of data and will be hosted in the cloud (Azure). The old design of this product breaks transaction tables into monthly tables, i.e. the ORDERS table is physically split into twelve monthly tables per year, such as ORDERS0115 (mmyy), ORDERS0215, and so on.
We are of the opinion that keeping all transactions in one table is better. What are the best practices for transaction tables like the one mentioned above? Is it better to use one table with partitions? I have read that partitions can slow down SELECT queries if not designed and thought through properly. Since this will be hosted in the cloud (Azure), are there additional things to take care of? And how does a site like Amazon keep its transaction tables?
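For reference, a hedged sketch of the one-table-with-partitions shape, as an alternative to the ORDERS0115-style monthly tables; every name here is a placeholder:

CREATE PARTITION FUNCTION pf_OrdersMonthly (date)
AS RANGE RIGHT FOR VALUES (N'2015-02-01', N'2015-03-01', N'2015-04-01');  -- one boundary per month

CREATE PARTITION SCHEME ps_OrdersMonthly
AS PARTITION pf_OrdersMonthly ALL TO ([PRIMARY]);

CREATE TABLE dbo.Orders (
    OrderID   bigint NOT NULL,
    OrderDate date   NOT NULL,
    Amount    money  NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderDate, OrderID)  -- partitioning column must be in the key
) ON ps_OrdersMonthly (OrderDate);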
View 8 Replies
Jul 5, 2014
I have 6 tables with very high row counts that need to be partitioned for better manageability.
A little info: every day, 300 million records are inserted into and 300 million deleted from the tables below; we keep only 8 days' worth of data, which is why records older than 8 days are continuously deleted.
Master table which has [ID],[Timestamp]
Table Name: Sample - 2,578,106
Child tables: the foreign key [ID] is common to all of them. There is no timestamp column in the child tables.
dbo.ConnectionDB - 1,147,578,048
dbo.ConnectionSS - 876,458,321
dbo.ConnectionRT - 118,133,857
dbo.ConnectionSample - 100,038,535
dbo.Command - 100,032,235
I would like to partition the child tables above based on the IDs that are inserted every 4 hours; that is, all IDs inserted within a given 4-hour window should land in the same partition.
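A hedged sketch of a sliding window keyed on [ID], since the child tables carry no timestamp; the boundary values are placeholders, and in practice each new boundary would be the first ID captured at the start of a 4-hour window:

CREATE PARTITION FUNCTION pf_ConnectionID (bigint)
AS RANGE RIGHT FOR VALUES (100000000, 200000000, 300000000);

CREATE PARTITION SCHEME ps_ConnectionID
AS PARTITION pf_ConnectionID ALL TO ([PRIMARY]);

-- every 4 hours: open the next window ahead of the inserts...
ALTER PARTITION SCHEME ps_ConnectionID NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION pf_ConnectionID() SPLIT RANGE (400000000);
-- ...and after 8 days, switch out and merge the oldest window instead of row-by-row deletes:
-- ALTER TABLE dbo.ConnectionDB SWITCH PARTITION 1 TO dbo.ConnectionDB_Archive;
-- ALTER PARTITION FUNCTION pf_ConnectionID() MERGE RANGE (100000000);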
View 1 Replies
Nov 11, 2014
Is there a method of forcing existing tables into the memory-optimized filegroup so the table data can benefit from in-memory processing?
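A hedged sketch of the usual answer for SQL Server 2014, where no in-place conversion exists: create a memory-optimized twin and copy the rows across (names and bucket count are placeholders, and a memory-optimized filegroup must already exist):

CREATE TABLE dbo.Session_InMem (
    SessionID int NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload   nvarchar(400)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

INSERT INTO dbo.Session_InMem (SessionID, Payload)
SELECT SessionID, Payload FROM dbo.Session;   -- then rename or drop the disk-based table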
View 7 Replies
Jul 1, 2015
I created a columnstore index on a table with 20 columns and about 1,000,000,000 rows; about 5 million rows are added every day.
SELECT queries became faster because of batch mode, and the table demands less disk space than before.
I also have 6 similar tables with about 5,000,000,000 rows each and plan to move them to columnstore indexes as well.
The server has 128 GB of RAM.
What pitfalls could I face with so many columnstore indexes on one server, and how could I spot problems in the DMVs?
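On the DMV question, one hedged starting point: for clustered columnstore indexes, rowgroup state is visible in sys.column_store_row_groups (available from SQL Server 2014), where a pile-up of delta rowgroups or many undersized compressed rowgroups is usually the first sign the daily 5M-row loads are misbehaving.

SELECT OBJECT_NAME(rg.object_id) AS table_name,
       rg.state_description,
       COUNT(*)           AS rowgroups,
       SUM(rg.total_rows) AS rows_in_state
FROM sys.column_store_row_groups AS rg
GROUP BY rg.object_id, rg.state_description
ORDER BY table_name, rg.state_description;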
View 3 Replies
Mar 21, 2014
My SQL databases in SQL Server 2014 have the status "Suspect", as I see in SQL Server Management Studio. I can't restore the databases to a serviceable condition through standard procedures; I need to recover from the .mdf files.
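If the standard restore path is truly gone, a hedged sketch of the last-resort sequence for a suspect database; REPAIR_ALLOW_DATA_LOSS can discard data, so file-level copies of the .mdf/.ldf should be taken first. The database name is a placeholder.

ALTER DATABASE MyDatabase SET EMERGENCY;                        -- make the suspect DB readable
ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (N'MyDatabase', REPAIR_ALLOW_DATA_LOSS);           -- may lose data
ALTER DATABASE MyDatabase SET MULTI_USER;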
View 9 Replies
Feb 26, 2015
My 4 GB .OST file is corrupted and I am not able to access my mail and contacts.
View 2 Replies
Nov 7, 2015
The log file of a production database has grown from a 4 GB to a 150 GB initial size. Now I want to find out when it grows, by how much it grows, and which transactions are responsible.
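For the "when and how much" part, a hedged sketch: the default trace (enabled by default) records autogrow events, so recent log growths, their sizes, and the application that triggered them can be read back like this:

DECLARE @trace nvarchar(260);
SELECT @trace = path FROM sys.traces WHERE is_default = 1;

SELECT t.StartTime,
       t.DatabaseName,
       t.FileName,
       t.IntegerData * 8 / 1024 AS grow_mb,   -- growth is reported in 8 KB pages
       t.ApplicationName
FROM fn_trace_gettable(@trace, DEFAULT) AS t
JOIN sys.trace_events AS te ON te.trace_event_id = t.EventClass
WHERE te.name = N'Log File Auto Grow'
ORDER BY t.StartTime DESC;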
View 6 Replies
Dec 23, 2014
I'm working on databases where the statistics on some indexes (tables) change very frequently. One minute after I update them manually they already show a 10-20% change, and five minutes later more than 100%. The tables are updated very frequently (multiple times per second).
When I run a query against sys.stats, sys.dm_db_stats_properties, and the other dynamic views, I see that the statistics were last updated when I did it manually, even though the modification rate has passed the 500 + 20% threshold (the tables have tens of thousands of rows). Auto create and auto update statistics are set to true on all databases, and I don't know why SQL Server does not update them automatically.
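A hedged sketch for watching the counter that the auto-update decision reads, with a placeholder table name; one relevant detail is that auto-update statistics only fires when a query that uses the statistic is compiled, not on a timer, so a table nobody queries can sit stale indefinitely.

SELECT s.name AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter      -- compare against the 500 + 20% threshold
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID(N'dbo.MyTable');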
View 2 Replies
Oct 26, 2015
We would like to import some tables from a Microsoft Azure data mart to a local database on an on-premises server.
View 4 Replies
Jun 25, 2014
I have a 50 GB database with hundreds of columns. I would like to choose a certain column and export the data in it to a .csv or Excel file. How can I do that? I am very new to MSSQL.
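A minimal sketch using bcp from a command prompt, with placeholder database, table, column, and server names; -c exports character data and -t, separates fields with commas, which for a single column yields a plain .csv:

bcp "SELECT SomeColumn FROM MyDatabase.dbo.MyTable" queryout "C:\out\SomeColumn.csv" -c -t, -S MYSERVER -T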
View 1 Replies
Nov 10, 2014
We have one table where we store all documents in a column called "Doc" with the varbinary(max) data type.
We wanted to download those documents from the SQL table to Windows Explorer, and the BCP script I wrote in SQL 2005 worked fine.
The format file I used there looks like this:
9.0
1
1 SQLBINARY 0 0 "" 1 Doc ""
Now we are on SQL 2014, and when I try the same code with the same format file, it hangs in the middle. I changed the version in the file from 9.0 to 12.0, but it still doesn't work.
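In case it's useful, a hedged alternative that sidesteps the format-file trouble by shelling one bcp queryout per document; the table, key column, paths, and file extension are all placeholders, and xp_cmdshell must be enabled for this to run:

DECLARE @id int, @cmd varchar(4000);
DECLARE doc_cur CURSOR FAST_FORWARD FOR SELECT DocID FROM dbo.Documents;
OPEN doc_cur;
FETCH NEXT FROM doc_cur INTO @id;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @cmd = 'bcp "SELECT Doc FROM MyDB.dbo.Documents WHERE DocID = '
             + CAST(@id AS varchar(12))
             + '" queryout "C:\Docs\' + CAST(@id AS varchar(12)) + '.pdf" -S . -T -f "C:\Docs\doc.fmt"';
    EXEC master.dbo.xp_cmdshell @cmd, no_output;   -- one file per row
    FETCH NEXT FROM doc_cur INTO @id;
END
CLOSE doc_cur;
DEALLOCATE doc_cur;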
View 0 Replies
Oct 19, 2015
Is there a way to restore all filegroups except one? Example: database A has 10 filegroups, but one of them is defunct; I can't delete it, and there's no backup to restore it from. Can I create a new DB by restoring the 9 good filegroups from a backup of database A?
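A hedged sketch of a piecemeal restore into a new database that names only the good filegroups; names and paths are placeholders, one MOVE clause is needed per restored file, and the defunct filegroup simply stays unrestored in the copy:

RESTORE DATABASE A_Copy
    FILEGROUP = N'PRIMARY',
    FILEGROUP = N'FG1', FILEGROUP = N'FG2'      -- ...list all nine good filegroups
FROM DISK = N'X:\Backups\A.bak'
WITH PARTIAL,
     MOVE N'A_Data' TO N'E:\Data\A_Copy.mdf',   -- one MOVE per restored file
     MOVE N'A_Log'  TO N'F:\Log\A_Copy.ldf',
     RECOVERY;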
View 9 Replies
Oct 2, 2014
I have the following setup:
- An MSSQL 2014 Standard server that houses multiple small databases (in excess of a hundred).
- These databases are frequently dropped and restored by an application that uses this SQL Server.
- There is a business need for this setup at this time, so I can't get away from it; answers like "don't have so many small databases that are frequently dropped and restored" would therefore be unhelpful.
This is the problem I have:
- When I connect SSMS 2014 to the server and expand the "Databases" node, it takes forever to display. In comparison, SSMS 2008 connected to SQL 2008R2 server with the same number of databases displays the Databases tree very quickly.
I ran a trace to see what exactly SSMS 2014 is doing. When the "Databases" node is expanded, it runs a query that checks each database for Memory-Optimized Tables (new and wonderful feature of SQL 2014 for sure, but I'm not using it, at least yet). Naturally, when you have to loop through over a hundred DBs, it takes time. Worse yet, if one of these DBs is in process of being restored, the query sits and waits to time out before proceeding to the next DB. Sometimes this causes outright timeouts. Here is the query:
use [MyDatabase]
SELECT
ISNULL((select top 1 1 from sys.filegroups FG where FG.[type] = 'FX'), 0) AS [HasMemoryOptimizedObjects]
To be sure, this is NOT a SQL Server performance issue. This server processes a rather heavy workload and has been doing so for over a month, and the workload completes within expected time limits or better. Even so I've done some basic performance measuring, and the server itself is quite all right.
Moreover, if I connect SSMS 2008 to it, I get an error message ("Index out of bounds" or some such), but SSMS 2008 does connect and displays the Databases tree much faster than SSMS 2014.
I'd like to turn off the option to check for Memory Optimized Objects altogether, as I'm not using the feature.
View 3 Replies
Nov 6, 2015
As per the attachment, I have created the replication, but nothing shows under Local Subscriptions; the subscription database has been created, but its tables are not populated as per the publication tables.
View 2 Replies
Apr 8, 2014
What is the use of creating a manifest file in SSIS? I scheduled a job and the manifest file wasn't needed, so why do we create it?
View 4 Replies
Jan 10, 2015
I have a Windows 8 PC that I just got and installed SQL Express 2014 on. My buddy has Windows 7 and installed SQL Express on his PC. We created a DB on his PC, did a backup, and copied the backup to my PC. In SSMS I right-click "Databases" > Restore Database, click Device, and then the button to browse for my file. I navigate to the folder where the file shows in File Explorer, but the .bak file does not show in SSMS to restore from. This is probably a Windows thing, but I don't know what to look at.
View 4 Replies
Jan 21, 2015
I was running an operation to shrink and empty a data file (DBCC SHRINKFILE with EMPTYFILE) and then remove it.
It blocked and caused a huge mess, I suspect on the removal part. But I want to confirm that the EMPTYFILE step completed (and that the engine isn't going to put more data in the file before I schedule the removal again, a week or more from now).
How does the engine know not to put any more data in the file, and how long does that state last?
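A hedged check of where things stand, with a placeholder logical file name: once DBCC SHRINKFILE ... EMPTYFILE completes, the file is flagged so the engine makes no new allocations in it, and as far as I know that state holds until the file is removed; current usage is visible here:

SELECT name,
       size * 8 / 1024 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS used_mb   -- should be at or near 0 after EMPTYFILE
FROM sys.database_files
WHERE name = N'MyData2';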
View 3 Replies
Jul 7, 2015
When I execute DBCC SQLPERF(LOGSPACE), I get the following values:
Database Name   Log Size (MB)   Log Space Used (%)
master          16.17969        13.30275
tempdb          7.429688        61.7245
model           0.7421875       45.78947
msdb            5.554688        25.87904
distribution    2808.93         0.8172179
BANKDB          23438.87        48.20037
WSMIRSDB        109.7422        4.839111
For database BANKDB, Log Space Used (%) is about 48% and the log size is about 23,439 MB (roughly 23 GB), whereas the data size of BANKDB is 60 GB. A FULL database and log backup is done once every night. The database is performing slowly now.
Do we need to take log backups more frequently, say once an hour, so that less log space is used? The same query is taking more time to execute than before in the same database; is that because the log file has grown?
I reorganize and rebuild indexes once a week and update statistics nightly.
Once the log space used increases by more than 10%, should we take a log backup?
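A hedged sketch covering both halves of the question, with a placeholder backup path: hourly (or more frequent) log backups are the standard way to keep Log Space Used down in FULL recovery, and log_reuse_wait_desc shows what is pinning the log if backups alone don't free it:

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'BANKDB';

BACKUP LOG BANKDB TO DISK = N'X:\Backups\BANKDB_log.trn';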
View 4 Replies
Feb 2, 2015
Database File Placement Layout? We are planning to implement a new SQL Server 2014 OLTP Database with a 1 TB Data file and 1 TB Log File. I am looking at the possible layout of the database files and trying to determine the best possible configuration. My knowledge/research tells me that items which need separate storage due to constant simultaneous access are:
Data files – should go on the fastest reading storage.
Log files – should go on the fastest writing storage.
TempDb – involves a lot of writing at the same time the data files are being read.
Indexes - (including full text indexes) - involves a lot of writing at the same time the data files are being read.
Also, is there any benefit to having multiple OLTP database log files? Because SQL Server writes to the log file sequentially, I do not see any advantage to having multiple log files. In a SQL Server 2012 class I took last summer, under "Determining File Placement and Number of Files", it states: "Use a single log file in most situations as log files are written sequentially."
View 9 Replies
Apr 30, 2015
I recently installed a standalone copy of SQL 2014 Standard on my work computer. I used Access before, but I want to use SQL Server instead.
We have a shared drive where a file gets deposited every day at midnight. I want to be able to pick up this file and import it into the server (it's basically a list of names).
Here what I have done so far:
I created the database
Created the file and successfully imported data into it using the Import Data feature.
I saved the SSIS package
Scheduled an Agent Job for this package to run at certain time,daily
At first the jobs would fail with "Access is Denied". I added a user under Credentials with my network account (I have admin rights on the work computer), and also added a proxy for the credential user I created.
The jobs now fail with a "Cannot open data file" error. I have tried changing things here and there, but I can't get it to work.
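A hedged sketch of the credential/proxy wiring with placeholder names; the usual catch is that the proxy's Windows account, not the admin who created the job, is what needs read rights on the share (both the share and NTFS ACLs):

CREATE CREDENTIAL SSISFileCred
    WITH IDENTITY = N'DOMAIN\svc_ssis',   -- placeholder account that can read the share
         SECRET = N'...';                 -- its password

EXEC msdb.dbo.sp_add_proxy
     @proxy_name = N'SSISFileProxy',
     @credential_name = N'SSISFileCred';

EXEC msdb.dbo.sp_grant_proxy_to_subsystem
     @proxy_name = N'SSISFileProxy',
     @subsystem_name = N'SSIS';           -- then run the package step under this proxy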
View 9 Replies
May 14, 2015
I have installed SQL 2014 (Evaluation edition) on a test machine. We want to import some Excel files into a database. I manually created one test database and am now trying to import an Excel file. The import completes successfully, but I am not able to see any table created as a result of the import. I tried 3-4 times and even restarted the SQL services, but no luck.
View 3 Replies
Oct 29, 2015
The usual way to check whether a file exists:
DECLARE @File_Exists INT
EXEC Master.dbo.xp_fileexist '\\serverB\SQL_Backup\file.bak', @File_Exists OUT
print @File_Exists
1
And to check a folder you can append "nul", but that doesn't work for a UNC path:
DECLARE @File_Exists INT
EXEC Master.dbo.xp_fileexist '\\serverB\SQL_Backup\nul', @File_Exists OUT
print @File_Exists
0
If you use xp_subdirs, like:
EXEC master.dbo.xp_subdirs '\\serverB\SQL_Backups'
and the folder doesn't exist, you get: Msg 22006, Level 16, State 1, Line 3: xp_subdirs could not access '\\ServerB\SQL_Backups\*.*': FindFirstFile() returned error 67, 'The network name cannot be found.'
How can I check whether a UNC folder exists before a backup? In my code I want to verify the UNC folder exists before running the backup; the UNC path is retrieved from another table or from the backup history.
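One hedged workaround, with a placeholder path: called without the output parameter, xp_fileexist returns a one-row result set of three flags (File Exists, File is a Directory, Parent Directory Exists), and the directory flag does work for UNC folders:

DECLARE @r TABLE (file_exists int, is_directory int, parent_dir_exists int);
INSERT INTO @r
EXEC master.dbo.xp_fileexist N'\\serverB\SQL_Backups';

SELECT is_directory FROM @r;   -- 1 when the UNC folder exists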
View 6 Replies
Dec 18, 2014
I want to set up AlwaysOn, which requires Node and File Share Majority quorum settings.
I want to know how to configure the file share for Windows clustering.
View 2 Replies
Jun 29, 2015
I have a question about tempdb: it needs to be configured as 100 GB with a 64 KB block size, on a fresh installation.
How should I allocate the file sizes? I'm still not sure how many files need to be created for the 100 GB.
What is a 64 KB block size, and how do the files relate to it on a 50 GB or 100 GB drive? Is it that a 64 KB cluster has 128 sectors?
Should the tempdb drives be formatted with a 64 KB allocation unit? And how many files should be created for good performance with 50 GB, 100 GB, or 1 TB?
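To untangle the two settings, a hedged sketch: the 64 KB figure is the NTFS allocation-unit size chosen when formatting the drive (e.g. format T: /A:64K), not something configured inside SQL Server; within SQL Server the common guidance is one equally sized tempdb data file per core, up to 8, plus a single log file. The sizes and paths below are placeholders for a 100 GB budget split 8 ways:

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev,  SIZE = 12800MB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE   (NAME = tempdev2, FILENAME = N'T:\tempdb2.ndf', SIZE = 12800MB, FILEGROWTH = 0);
-- ...repeat for tempdev3 through tempdev8...
ALTER DATABASE tempdb MODIFY FILE (NAME = templog,  SIZE = 10240MB);   -- one log file is enough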
View 3 Replies
May 7, 2014
A little background on what I am trying to achieve first. We are moving to Azure virtual machines, and we will have 8 disks on the SQL Server box. I am adding more files to the primary filegroup, and each file will go on its own drive. I am then rebalancing data across these files by rebuilding all of the indexes on the tables, which is working fine. No problems so far; all is good.
I now have an additional problem. If there is a LOB or BLOB column on the table, rebuilding the clustered index and all the non-clustered indexes doesn't rebalance the LOB data across the disks the way it does with in-row data.
I cannot find any articles on rebalancing LOB data, because they all say to move it to a new filegroup. I do not want a new filegroup; I just want to use the primary filegroup where the data already resides and redistribute it evenly, the same way the in-row data is redistributed, which is working fine.
One solution I thought about was to BCP the data out of the table, truncate the table, and then BCP it back in, which I imagine would distribute the data evenly over the files.
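For completeness, a hedged sketch of that round trip with placeholder names and paths; -n keeps bcp in native format so the varbinary/LOB values survive intact, and a verified backup beforehand is the safety net:

-- 1) export:  bcp MyDB.dbo.BigTable out "D:\staging\BigTable.dat" -n -S . -T
TRUNCATE TABLE dbo.BigTable;   -- after scripting out or disabling any referencing foreign keys
-- 2) reload:  bcp MyDB.dbo.BigTable in "D:\staging\BigTable.dat" -n -S . -T -E -b 50000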
View 2 Replies