SQL 2012 :: Re-vamping Data Files In A Production Database?
Dec 22, 2014
(SQL 2012 Standard)
I have a database with one 320GB .mdf file and one 200GB .ndf file.
I would like to reconfigure my database to have four 200GB data files. How do I get from here to there?
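For what it's worth, a database can have only one primary (.mdf) file, so the practical target is four equally sized files in the PRIMARY filegroup: the .mdf plus three .ndf files. A minimal sketch, assuming hypothetical logical names and paths:
ALTER DATABASE MyDb ADD FILE (NAME = N'MyDb_Data2', FILENAME = N'E:\Data\MyDb_Data2.ndf', SIZE = 200GB);
ALTER DATABASE MyDb ADD FILE (NAME = N'MyDb_Data3', FILENAME = N'E:\Data\MyDb_Data3.ndf', SIZE = 200GB);
ALTER DATABASE MyDb ADD FILE (NAME = N'MyDb_Data4', FILENAME = N'E:\Data\MyDb_Data4.ndf', SIZE = 200GB);
-- Drain the existing 200GB .ndf into the new files, then drop it:
DBCC SHRINKFILE (N'MyDb_Data1', EMPTYFILE);
ALTER DATABASE MyDb REMOVE FILE MyDb_Data1;
-- The primary file cannot be fully emptied; shrink it toward 200GB instead (target is in MB):
DBCC SHRINKFILE (N'MyDb', 204800);
EMPTYFILE and shrink operations are I/O heavy and fragment indexes, so plan a maintenance window and an index rebuild afterwards.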
View 5 Replies
May 15, 2008
Hi,
We have a database in production with about 200 GB of free space in its data and index files, and I want to shrink those files. If I do an incremental shrink during the daytime, does it hurt the performance of the database, or what is the best practice?
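For reference, incremental shrinks are usually done in small steps with DBCC SHRINKFILE rather than one big operation; a minimal sketch, assuming a hypothetical logical file name and sizes in MB:
DBCC SHRINKFILE (N'MyDb_Data1', 250000);  -- then 245000, 240000, ... on subsequent runs
Shrinking moves pages and fragments indexes, so daytime runs can hurt a busy system; small off-hours increments followed by index maintenance are the usual practice.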
thanks
View 5 Replies
Jul 21, 2014
I've stepped into a new environment and have never dealt with multiple data files on user databases, only with tempdb. What would be the best way to get all my data files in sync? I have done this on databases that aren't that big, or not out of sync by a lot. Here is what I have (a query for checking current sizes follows the list):
Mdf -- 69 GB
ndf -- 3 GB
ndf -- 3 GB
ndf -- 3 GB
ndf -- 3 GB
ndf -- 4 GB
ndf -- 4 GB
ndf -- 2 GB
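A quick way to see size and used space per file before rebalancing (a sketch; run inside the database in question):
SELECT name,
       size / 128 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM sys.database_files;
The usual goal is equally sized (and equally full) files in a filegroup, since proportional fill then spreads new allocations evenly; that generally means growing the smaller files to match rather than shrinking the large one.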
View 7 Replies
Jul 23, 2014
I am attempting to create a Test db from a full backup of the production db. With 2012, I cannot do it the way I had done it in previous versions (and now I understand why, because of logical names).
The Test db runs in the same instance as Prod db.
I attempted to run this but come up with errors. This is what i executed:
RESTORE DATABASE TEST FROM DISK = 'E:\<path>\FULL.BAK'
WITH REPLACE, RECOVERY,
MOVE 'PROD' TO 'E:\<path>\TEST.MDF';
The errors all say the restore cannot execute because PROD is in use.
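A likely cause is that the MOVE list only covers the data file, so the restore still points the log at PROD's live .ldf. A sketch of the usual fix (the logical log name 'PROD_log' is an assumption; confirm it with FILELISTONLY):
RESTORE FILELISTONLY FROM DISK = 'E:\<path>\FULL.BAK';
RESTORE DATABASE TEST FROM DISK = 'E:\<path>\FULL.BAK'
WITH RECOVERY,
MOVE 'PROD' TO 'E:\<path>\TEST.mdf',
MOVE 'PROD_log' TO 'E:\<path>\TEST_log.ldf';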
View 9 Replies
Jan 17, 2007
Hello,
I am trying to refresh a test database with data from a production database. Both database structures are identical, e.g. constraints, stored procs, PKs, etc. I am trying to create a package in SSIS that accomplishes this task and I am having extensive problems. The Import/Export Wizard is out of the question because the constraints are not carried over; plus, when I try to refresh the data using the wizard, it fails on one specific table because of a column in that table named "Error code". I think "Error code" may collide with a reserved word, so it fails on this column. Does anyone know a workaround to accomplish this simple task, which could be completed in minutes using DTS? I understand that SSIS is not as straightforward as DTS, but this is something DBAs do on a regular basis and therefore should not be this difficult.
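If the space in the column name really is the trigger, delimiting the identifier usually gets around it; a hedged sketch (the table name dbo.MyTable and the database names are placeholders):
INSERT INTO TestDb.dbo.MyTable (Id, [Error code])
SELECT Id, [Error code]
FROM ProdDb.dbo.MyTable;
In SSIS the equivalent is to use a SQL command as the source, with the bracketed column name, instead of picking the table from the dropdown.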
Any help would be appreciated,
David
View 1 Replies
Nov 19, 2015
I have a question regarding SQL transactional replication methodology:
1. Let's say I have successfully created a transactional replication setup that is running and transferring data from the publisher to the subscriber.
2. Now one day the source production/publisher SQL Server goes down and only the DR SQL Server (the subscriber) remains up.
3. The next day, we fix and bring the production/publisher server back up; how can we sync all existing data records from the subscriber back into the publisher side?
View 3 Replies
Mar 3, 2015
I'm having an argument with our infrastructure architect, who has just gone and bought lots of SSD drives to use for our tempdb data and log files. Sounds great, doesn't it? There is a catch, though: his plan is to add the disks to the two available slots in each blade in a RAID 0+1 configuration, effectively giving you one usable drive, and to put both data and log files on that one disk.
I then pointed out that SQL Server best practice is to host tempdb data and log files on two separate drives to reduce contention. The architect then basically said that because this isn't spinning disk, the r/w contention issue doesn't apply. I don't agree with this and wanted to get some opinions from the community. I'm still advising that two separate disks should be used, but someone just went and spent £80k ($150k) on SSDs and doesn't want to back down...
View 4 Replies
Feb 25, 2006
Hi,
Is there any tool available to migrate data from a SQL Server test database to a SQL Server production database? The migration should be based on a condition the user can supply as input for a table, and the dependent tables should also be migrated based on the given condition, i.e. data subsetting based on matching conditions.
Ex: Salary > 2000
Only the rows of the table that match the condition need to be migrated, and the rows of its dependent tables should likewise be migrated based on the given condition. Please help me with a tool which can automate this.
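Absent a dedicated tool, the idea hand-rolled in T-SQL looks like this (all database, table, and column names are hypothetical):
-- Parent rows matching the condition:
INSERT INTO ProdDb.dbo.Employee (EmployeeId, Name, Salary)
SELECT EmployeeId, Name, Salary
FROM TestDb.dbo.Employee
WHERE Salary > 2000;
-- Dependent rows, limited to the parents that were migrated:
INSERT INTO ProdDb.dbo.EmployeeAddress (EmployeeId, AddressLine)
SELECT a.EmployeeId, a.AddressLine
FROM TestDb.dbo.EmployeeAddress AS a
JOIN TestDb.dbo.Employee AS e ON e.EmployeeId = a.EmployeeId
WHERE e.Salary > 2000;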
Thanks,
MiraJ
View 4 Replies
Jun 26, 2015
I can detach/attach an SSAS database, but I have a software product that protects (backs up) the files of the SSAS database.
What the customer needs is to be able to take these backed-up files and port them to a different server and attach them there. But the new server complains that these files have no corresponding detach-log files.
The customer doesn't want to back up and restore the SSAS databases.
View 2 Replies
Jul 23, 2013
I have a small project in which I need to fetch a PDF file from my system and save it in the database, and also fetch its file name and save that in the database.
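A minimal T-SQL sketch using OPENROWSET ... SINGLE_BLOB (the table, column names, and file path are all hypothetical):
CREATE TABLE dbo.PdfStore (
    PdfId    int IDENTITY(1,1) PRIMARY KEY,
    FileName nvarchar(260) NOT NULL,
    FileData varbinary(max) NOT NULL
);
INSERT INTO dbo.PdfStore (FileName, FileData)
SELECT N'report.pdf', b.BulkColumn
FROM OPENROWSET(BULK N'C:\Files\report.pdf', SINGLE_BLOB) AS b;
OPENROWSET(BULK ...) reads the file from the SQL Server machine, so the path must be visible to the service account.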
View 9 Replies
May 23, 2014
I am using SSIS to load raw files into a database. In my files I have a Date column in the format
1/1/2010 12:00:00 PM.
I want to load this column in 24-hour format, i.e. 1/1/2010 12:00:00.
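Note that a datetime value has no display format of its own; the 12/24-hour rendering only matters when converting to text. A sketch using FORMAT, available from SQL Server 2012 (the literal stands in for the column):
SELECT FORMAT(CAST('1/1/2010 12:00:00 PM' AS datetime), 'M/d/yyyy HH:mm:ss');
-- returns 1/1/2010 12:00:00
In the SSIS data flow, casting the string to DT_DBTIMESTAMP in a Derived Column achieves the same thing: store it as a date/time type and format it only on output.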
View 5 Replies
Aug 12, 2015
Been practicing DR strategies with a test SQL instance by following the scenarios listed here: [URL] ....
> Took a backup of the Model database
> Stopped SQL Server
> Deleted model database data & log file
> Tried to start SQL Server; it obviously wouldn't start because tempdb needs a model database present.
> Started SQL instance with trace flags 3608 & 3609
> Connected to SQL instance using command prompt.
> Issued restore command but was met with this error:
Shared Memory Provider: The pipe has been ended.
Communication link failure
And found this in the SQL log..
2015-08-12 16:21:32.83 spid51 Starting up database 'tempdb'.
2015-08-12 16:21:36.88 spid51 Error: 3456, Severity: 21, State: 1.
2015-08-12 16:21:36.88 spid51 Could not redo log record (59:136:21), for transaction ID (0:0), on page (1:20), allocation unit 458752, database 'tempdb' (database ID 2). Page: LSN = (30:165:3), allocation unit = 458752, type = 1.
[Code] .....
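For reference, the documented recovery path for a lost model database is roughly as follows (a sketch; the instance name and backup path are assumptions):
-- 1) From a command prompt, start the instance in minimal configuration with
--    trace flag 3608 (recover master only):  NET START MSSQLSERVER /f /T3608
-- 2) From sqlcmd, restore model:
RESTORE DATABASE model FROM DISK = N'E:\Backups\model.bak' WITH REPLACE;
-- 3) Stop the instance, remove the startup flags, and restart normally.
The error 3456 above suggests another connection triggered tempdb creation mid-restore; the -f (minimal configuration) start also puts the instance in single-user mode, which avoids that.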
View 9 Replies
Jun 8, 2015
I recently set up a SQL 2012 FCI with a NetApp fileshare to store the data files. The install worked just fine, but I can't run an integrity check for any of my databases. Whenever I try, I get these error messages:
Msg 1823, Level 16, State 2, Line 1
A database snapshot cannot be created because it failed to start.
Msg 1823, Level 16, State 8, Line 1
A database snapshot cannot be created because it failed to start.
Msg 5120, Level 16, State 104, Line 1
Unable to open the physical file "\\path-to-fileshare\MSSQL11.MSSQLSERVER\MSSQL\DATA\model.mdf:MSSQL_DBCC12". Operating system error 1: "1(Incorrect function.)".
Msg 7928, Level 16, State 1, Line 1
The database snapshot for online checks could not be created. Either the reason is given in a previous error or one of the underlying volumes does not support sparse files or alternate streams. Attempting to get exclusive access to run checks offline.
The error message suggests SQL had a problem creating the snapshot, but I checked through some NetApp documentation for configuring SMB 3.0 for SQL.
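One workaround while the share configuration is investigated (a sketch; the database name is a placeholder): run the check with a table lock so no internal snapshot is needed, accepting the blocking that comes with it:
DBCC CHECKDB (N'MyDb') WITH TABLOCK;
The error text fits the likely cause: the internal snapshot is created as an alternate stream of the data file, and not every SMB share configuration supports alternate streams or sparse files.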
View 3 Replies
Oct 1, 2015
I understand that we shouldn't shrink data files, as it can cause heavy fragmentation along with log usage, high IO/CPU, etc.
In a DB in which a lot of DML transactions occur, there will be empty spaces whenever deletions occur.
Will SQL Server fill that space with data when new insertions occur?
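Generally yes: space freed by deletes stays allocated to the database and is reused by subsequent inserts. A quick way to watch reusable space (a sketch; the table name is hypothetical):
EXEC sp_spaceused;                  -- database level: 'unallocated space' and 'unused'
EXEC sp_spaceused N'dbo.MyTable';   -- table level: reserved / data / index_size / unused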
View 4 Replies
Feb 24, 2014
When opening .sql files, I get a "Connect to Database Engine" prompt every single time. How do I stop this prompt and just use my current active connection?
View 4 Replies
Apr 15, 2014
I am currently investigating a high avg write time (ms) issue (145 ms) which seems to be occurring only on the tempdb data files. I have followed the recommended setup of tempdb, in that:
1. Data files = number of physical cores
2. Data files and logfiles are on separate partitions away from the other databases.
3. Tempdb is pre-sized, and incremental file growth does not appear to be happening with any frequency.
We have SharePoint 2012 set up on other SQL Servers with tempdb configured following the same guidelines, with far more SharePoint activity on similarly specified hardware, which is why this is confusing. File I/O auditing on the partitions themselves shows that I/O is very fast on the partitions hosting the tempdb data files, which leads me to believe that SharePoint may be the culprit, perhaps due to excess use of tempdb with operations taking a long time to resolve.
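A per-file latency check against the DMVs can confirm exactly where the stalls are (a sketch):
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name AS logical_file,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(DB_ID(N'tempdb'), NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id;
These counters are cumulative since the last restart, so take two snapshots during a slow period and compare the deltas.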
View 3 Replies
Dec 29, 2014
LocalDb cannot restore a backup whose original data and log files are in different folders
[URL]
View 1 Replies
Jan 9, 2015
I'm trying to move a log file of a database that is part of an availability group. I have been following steps from the article: [URL]
At first this worked fine for me in a test environment. When I tried it in a production environment, the database on the secondary went into a "Recovery Pending" state and I can't get it out.
I checked to ensure that the DB is looking in the right place for the log file, and it is. It just doesn't seem to actually use the new file. If I start and stop the SQL service, the DB comes back up and is fine.
Here are the steps I'm going through and what is happening at each step:
--------------------------------------
:Connect DEVSQL --This is currently PRIMARY
USE[master]
GO
ALTER AVAILABILITY GROUP [DP-AG-DEV] MODIFY REPLICA ON N'DEVSQL' WITH (SECONDARY_ROLE(ALLOW_CONNECTIONS = NO))
[Code] ....
All is good so far. Both the primary and the secondary have had their logical files changed, which has not taken effect yet because there has been no failover.
--Make SQL10 the PRIMARY
:Connect SQL10
ALTER AVAILABILITY GROUP [DP-AG-DEV] FAILOVER;
GO
SQL10 is now the Primary for this AG. And, as expected, the database [AG-Test] is in "Recovery Pending" because it is now looking for the log file in the new location. I need to move the file to the new location.
:Connect DEVSQL
--Enable XP_CMDSHELL
sp_configure 'show advanced options',1
go
reconfigure
go
sp_configure 'xp_cmdshell',1
[code].....
This is where the script is failing, returning the error:
Msg 1468, Level 16, State 5, Line 5
The operation cannot be performed on database "AG-Test" because it is involved in a database mirroring session or an availability group. Some operations are not allowed on a database that is participating in a database mirroring session or in an availability group.
Msg 5069, Level 16, State 1, Line 5
ALTER DATABASE statement failed.
I cannot get the DB to recognize the log file at its new location.
If I restart the SQL service, it comes back fine, which seems to indicate that it is not a permission problem and confirms that the file is in the right place.
How do I force SQL to look for the log file again without restarting the service?
View 2 Replies
Jan 21, 2015
Got the following query:
SELECT
event_data.value('(event/data/value)[4]', 'bigint') AS cpu_time,
--database name
event_data.value('(event/data/value)[5]', 'bigint') AS duration,
--estimated cost
--estimated rows
--nest level
[code]...
Basically, it is a simple T-SQL query that reads the local file for my already set up Extended Events sessions. But I can't find a way to retrieve the following attributes as part of the T-SQL query:
--database name
--estimated cost
--estimated rows
--nest level
--object name
I am trying to find a BOL page or some MS link with the full list of possible values for event_data.value, but can't find one.
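The positional indexes ((event/data/value)[4] and so on) depend entirely on the event definition, which is why some attributes are hard to line up; addressing fields by name is more reliable. A sketch of the by-name pattern (the file path is a placeholder, and the named fields are assumptions that must exist in your session; database_name is usually captured as an action, not a data element):
SELECT
    xd.value('(event/action[@name="database_name"]/value)[1]', 'sysname') AS database_name,
    xd.value('(event/data[@name="cpu_time"]/value)[1]', 'bigint')         AS cpu_time,
    xd.value('(event/data[@name="duration"]/value)[1]', 'bigint')         AS duration,
    xd.value('(event/data[@name="nest_level"]/value)[1]', 'int')          AS nest_level,
    xd.value('(event/data[@name="object_name"]/value)[1]', 'sysname')     AS object_name
FROM (
    SELECT CAST(event_data AS xml) AS xd
    FROM sys.fn_xe_file_target_read_file(N'C:\XE\MySession*.xel', NULL, NULL, NULL)
) AS t;
Rather than a single BOL page, the authoritative per-event field list lives in the metadata DMVs: join sys.dm_xe_object_columns to sys.dm_xe_objects for the event in question.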
View 2 Replies
Mar 17, 2014
I am testing out a blank database created over two physical files on two separate disks, with one table called data which has one column called values nvarchar(max).
I filled the table up with a whole load of data and ran a select * against it. If I run Perfmon at the same time I can see that the read load is spread over multiple disks, as each of these disks is read from in parallel. If I create the same database on a single file and run the same select * again, it takes much longer, confirming that in the two-file case the read load was distributed across multiple disks.
Now moving on to writes; this is where the confusion lies. I understand that SQL Server fills files evenly until they need growing, after which it will fill files individually until they are full in a round-robin fashion unless you have trace flag 1117 turned on. What I don't understand is why the writes aren't distributed out whilst it is filling these files.
I ran a continual insert into my table with GO 1000000 to monitor how the files are being filled up. I monitored where SQL Server was physically placing the rows as they were inserted by running the following query:
;WITH CTE AS
(SELECT
sys.fn_PhysLocFormatter (%%physloc%%) col1,
RIGHT(LEFT(sys.fn_PhysLocFormatter (%%physloc%%),2),1) AS [Physical RID],
DATAID
[Code] ....
I could see that it would write a thousand or so records into file 1, then a thousand or so into file 2, then a thousand or so into file 1, etc. In other words it would hit one disk, then another disk, then back to disk one, filling the files evenly. Is there any way to make SQL Server distribute the writes out in parallel so that both disks are writing in tandem?
By the looks of it, multiple disks only scale reads; with writes, only one disk is ever written to at once, which is annoying. Any way to harness the write power of multiple disks?
View 6 Replies
Jul 2, 2015
So I'm using the 2012 SSMS to connect to a SQL 2008 database, upon which we have code that audits all SQL logins to production and notifies us via email that someone is using logins they shouldn't. Mostly it's to notify us if people other than the DBAs are using these logins from their desktops instead of using their windows accounts.
This morning I opened up SSMS 2012 and logged into my production servers in Object Explorer using Windows Auth. An hour later, I had to update an exception table, so I opened a new query window with SQL Auth and used a SQL-only login for it. Immediately the email pops up that someone on my desktop is using a SQL login. That's okay. That's expected.
What I didn't expect is that after I closed the query window, the email kept popping up. The query window isn't even open / connected to production any more, but SQL still thinks I'm logged in using a SQL login instead of with Windows Authentication (which is what I used on the Object Explorer connection).
Is there a bug with 2012 that causes a new connection's type to affect all current connections? I.e., did it change my Windows Auth connection in Object Explorer to SQL Auth?
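One thing worth checking before assuming a bug: SSMS can keep pooled connections (IntelliSense, Object Explorer details, etc.) alive after a query window closes. A sketch of a query to see what is actually still connected from a given machine and how it authenticated:
SELECT s.session_id, s.login_name, s.program_name, c.auth_scheme
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_connections AS c ON c.session_id = s.session_id
WHERE s.host_name = HOST_NAME();
HOST_NAME() here reflects the session running the query, so run it from the desktop in question (or filter on the literal machine name instead).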
View 0 Replies
Feb 24, 2015
I have the need to delete old backup files via TSQL job. Found this solution online:
PushD "\\remoteserver\share\DIFF" && (
forfiles -m *DIFF*.sqb -d -1 -c "cmd /c del /q @path"
) & PopD
It works remotely if I run it via command prompt. But when I add this to a T-SQL job on my remote SQL instance, it runs without deleting anything. What am I missing?
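A common cause is that the job step runs as the SQL Server Agent service account, which may lack delete rights on the share (the interactive command prompt test ran as your own account). For testing, a hedged sketch of the same delete driven through xp_cmdshell with forfiles' /p switch instead of pushd:
EXEC master..xp_cmdshell
    'forfiles /p "\\remoteserver\share\DIFF" /m *DIFF*.sqb /d -1 /c "cmd /c del /q @path"';
Checking the command's output (xp_cmdshell returns it as a result set) usually reveals the access-denied or path error that the job step swallows.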
View 6 Replies
Jan 29, 2014
Full Text Searches are working on my test server, but not on my production server. My test scenario is as follows:
CREATE FULLTEXT CATALOG FTC_Test
AS DEFAULT
AUTHORIZATION dbo
CREATE FULLTEXT INDEX ON guest.FtsTest(FullName)
KEY INDEX PK_FtsTest ON FTC_Test
I wait briefly and then check to see if the index has been populated:
SELECT * FROM sys.fulltext_indexes -- crawl_end_date is not null
So I'm assuming I don't have to wait anymore before I try some FTS searches. Right? I can't get any queries to return anything, though.
The following tells me the full text item count for the table is zero:
DECLARE @TableId INT
SELECT @TableId = id FROM sys.sysobjects WHERE [Name] = 'FtsTest'
SELECT OBJECTPROPERTYEX(@TableId, 'TableFulltextItemCount') AS TableFulltextItemCount
As mentioned, the full text search works on my test server. Both of them are SQL 2012 SP1 (11.0.3000) x64 running on WinServer 2008 R2 SP1.
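Two quick checks on the production instance before digging deeper (a sketch; the CONTAINS query assumes the sample schema above):
SELECT FULLTEXTSERVICEPROPERTY('IsFullTextInstalled') AS fts_installed;  -- 1 = installed
SELECT * FROM guest.FtsTest WHERE CONTAINS(FullName, N'Smith');
If the property returns 0, the Full-Text Search feature was simply never installed on the production box, which would explain identical scripts behaving differently on the two servers.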
View 3 Replies
Mar 20, 2014
I am looking at a database with a table 'tbl_OutBox_MT' that has no primary key and one non-unique, non-clustered index. It stores almost 3,000,000 rows every day; data is wiped from it and archived to another location, and the table is broadcast to a mobile operator every day from morning to evening. When it performs this broadcast it takes a huge amount of time, because the table gathers data from different sources (tables) using a complex query and an INSERT INTO statement.
My first observation is that there is no primary key. When I run any complex query against this table it takes a huge amount of time and sometimes shows a transaction deadlock error. (A possible first fix is sketched after the DDL below.)
CREATE TABLE [dbo].[tbl_OutBox_MT](
[TRAN_ID] [varchar](36) NOT NULL,
[OUT_MSG_ID] [int] IDENTITY(1,1) NOT NULL,
[OUT_MSG_ID_TELCO] AS (CONVERT([bigint],((((CONVERT([varchar](4),datepart(year,[PROCESS_TIME]),(0))+case len(CONVERT([varchar](2),datepart(month,[PROCESS_TIME]),(0))) when (1) then '0' else '' end)
[code]....
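Given the existing identity column, a first step might be a clustered primary key on it (a sketch; verify uniqueness and insert patterns first):
ALTER TABLE dbo.tbl_OutBox_MT
ADD CONSTRAINT PK_tbl_OutBox_MT
PRIMARY KEY CLUSTERED (OUT_MSG_ID);
An ever-increasing clustered key keeps the heavy daily inserts appending at the end of the index rather than splitting pages, and gives the broadcast queries a seek target.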
View 1 Replies
Oct 17, 2014
I keep getting requests to increase the width of a varchar column every now and then.
I want to ask if it's perfectly OK to do this while active users are connected to the application.
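For what it's worth, widening a varchar column is a metadata-only change, but it still takes a brief schema-modification (Sch-M) lock, so it can block, and be blocked by, running queries on a busy table. A sketch with hypothetical names:
SET LOCK_TIMEOUT 5000;  -- fail fast instead of queueing behind long-running queries
ALTER TABLE dbo.Customer ALTER COLUMN Notes varchar(500) NULL;
Remember to restate NULL/NOT NULL in the ALTER, since omitting it can change the column's nullability.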
View 7 Replies
Mar 3, 2015
Setting up transactional replication in a test environment. I am willing to bet that most of you take a production backup (if so, how, and using what?), restore the database to your test environment, then run a snapshot to your subscriber and away you go.
But perhaps you take a backup of your publisher and subscriber; if so, how do you know there are no inconsistencies because there were transactions sitting on the distributor?
What do you do if you have additional indexes on the subscriber for reporting that are not on the publisher?
Here at work we are having issues with getting consistent databases set up with transactional replication: missing rows, duplicate keys at the subscriber, etc. How do we avoid these issues?
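On validating consistency after setup: replication ships a built-in row-count/checksum validation that can be scheduled, which at least surfaces the missing-row and duplicate-key cases early. A sketch (the publication name is hypothetical; run at the publisher on the publication database):
EXEC sp_publication_validation
     @publication = N'MyPublication',
     @rowcount_only = 2;   -- 2 = row count and binary checksum comparison
Mismatches show up in the Distribution Agent history, flagging subscriptions for reinitialization or manual repair.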
View 0 Replies
Jan 24, 2008
We have a 1 TB database and we recently got more disk space, so:
1) Can I add data files and put them on a different disk during production hours?
2) What are the effects of doing this?
Just want to get expert advice.
View 1 Replies
Mar 10, 2008
Hi,
Here's my problem: I need to create a task that reads a folder looking for new files. If the folder contains new files, the task should read them, extract the data, and insert it into a database. I read something about the Bulk Insert Task, but I didn't understand it very well.
If someone can provide a code example or a more detailed explanation I would be very grateful.
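As a starting point, the T-SQL that the Bulk Insert Task wraps looks like this (a sketch; the table, path, and delimiters are assumptions about your files):
BULK INSERT dbo.ImportStaging
FROM 'C:\Incoming\newfile.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);
In SSIS, the "reads a folder for new files" part is typically a Foreach Loop container enumerating *.csv, with the file name variable feeding the Bulk Insert Task (or a Data Flow) and an archive/delete step at the end.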
View 9 Replies
Feb 12, 2008
I am learning SQL Server Integration Services.
I created a file People.txt containing FirstName, LastName separated by a pipe.
------------------content-----------
John | Doe
Mike | James
Adam | Smith
-----------------------------------------
and another one called gender.txt
------------------content-----------
M
---------------------------------------
I would like to create an Integration Services package that combines each record of the first file with the record of the second file and inserts the result into a table (a set-based sketch follows the example below).
--------------Result table content------------------
John
Doe
M
Mike
James
M
Adam
Smith
M
-----------------------------------------------------------
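Since the second file has a single row, this is a cross join; once both files are staged, the set-based equivalent is one statement (a sketch; staging table and column names are hypothetical):
SELECT p.FirstName, p.LastName, g.Gender
FROM dbo.People AS p
CROSS JOIN dbo.Gender AS g;
Inside the SSIS data flow, a common pattern is to add a Derived Column with a constant key (e.g. 1) to both sources and combine them with a Merge Join, since there is no built-in cross-join transform.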
Thanks
View 5 Replies
May 6, 2015
I have a requirement to
a. Read data from different CSV files.
b. Insert and update data in multiple database tables using joins.
This execution runs for 1-2 hours. I can use C# with ADO.NET, but my one concern is that if execution fails partway through due to a connection or other error, all inserted data has to be cleaned up again. I feel that writing a stored procedure that runs inside a transaction, taking the CSV file paths as input and inserting the data into the database, gives us the flexibility to roll back to the original state.
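A minimal sketch of that stored procedure shape, using TRY/CATCH and dynamic SQL around BULK INSERT (the staging table and CSV layout are assumptions):
CREATE PROCEDURE dbo.LoadCsv @path nvarchar(260)
AS
BEGIN
    SET XACT_ABORT ON;  -- any error dooms and rolls back the transaction
    BEGIN TRY
        BEGIN TRANSACTION;
        DECLARE @sql nvarchar(max) =
            N'BULK INSERT dbo.Staging FROM ''' + REPLACE(@path, '''', '''''') +
            N''' WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'', FIRSTROW = 2);';
        EXEC sys.sp_executesql @sql;
        -- insert/update the real tables from dbo.Staging (joins) here
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        THROW;  -- surface the original error to the caller
    END CATCH;
END;
One caveat with a single 1-2 hour transaction: the log can't be truncated past it and locks are held throughout, so batching per file with a restartable watermark is often the gentler design.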
View 9 Replies
Apr 23, 2008
I would like to deploy several reports to a production server. Do I need to install the entire Reporting Services product in order to run the reports, or is it possible to just have runtime files installed on it?
Please help; I have almost 100 reports to be deployed on this server, which is located in another country.
Thanks for the helpful information.
(I am using SQL Server 2005 / Reporting Services 2005.)
View 6 Replies
Apr 29, 2014
Why does SHRINKFILE with EMPTYFILE not redistribute data evenly across the files in the primary filegroup when there are multiple files?
Please run the script attached to see what the end result is.
This is what I set up last night on my test machine.
1) Create database [FGTest], size 200MB
2) Create a table called TEST on PRIMARY
3) Insert 40MB of data into TEST
4) Create another file called temp in PRIMARY, size 200MB
5) Shrinkfile('FGTest', EMPTYFILE) so that all data is transferred from FGTest into the temp file
6) Add another 2 files called DATA2 and DATA3. Both are 200MB.
7) We now have 3 empty files that I want data distributed evenly across: FGTest, DATA2 & DATA3
8) Shrinkfile('temp', EMPTYFILE) to move all the data from temp over the 3 files evenly
I would expect at this stage to have the following:
FGTest = 13MB,
DATA2 = 13MB,
DATA3 = 13MB
(40MB of data over 3 files should be about 13 MBish in each file)
What I actually end up with is this:
FGTest = 20MB
DATA2 = 10MB
DATA3 = 10MB
It looks as though SQL Server is allocating 50% of all data to the original file and then 50% evenly over the remaining files in PRIMARY.
View 3 Replies
Sep 27, 2006
First of all, I do not know whether this is the right forum to ask the question. Let me describe the scenario I am using. I am generating XML files at a particular place and sending them to a server:
xml1 ---> dataset1 ---> adapter1.update(dataset1)
xml2 ---> dataset2 ---> adapter2.update(dataset2)
xml3 ---> dataset3 ---> adapter3.update(dataset3)
All three updates should happen in only one transaction; if any one of the updates fails, then the whole transaction should roll back. Can anyone tell me a way to do it? I am desperately in search of any way to do this. Can anybody help, please?
View 2 Replies