SQL Server Admin 2014 :: How A New Partition Function Apply For Current Data
Apr 15, 2015
I have a heavy database, more than 100 GB for only six months of data. Every query on it takes a long time, and I don't have enough space to add more indexes, so I decided to try partitioning. I created a partition function on a date field, so that each month's records are assigned to a separate file. Is partitioning only for future data entry, or does a new partition function apply to the current data as well?
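For anyone with the same question: a new partition function applies to the existing rows as soon as the table's clustered index is rebuilt on the partition scheme. A minimal sketch, with hypothetical object names:

-- Monthly boundaries (RANGE RIGHT: each boundary value starts a new partition)
CREATE PARTITION FUNCTION pf_Monthly (datetime)
AS RANGE RIGHT FOR VALUES ('2015-01-01', '2015-02-01', '2015-03-01');

CREATE PARTITION SCHEME ps_Monthly
AS PARTITION pf_Monthly TO (FG_Old, FG_201501, FG_201502, FG_201503);

-- Rebuilding the clustered index ON the scheme moves the EXISTING rows
-- into their partitions, so partitioning is not only for future inserts.
CREATE CLUSTERED INDEX cix_MyTable
ON dbo.MyTable (DateField)
WITH (DROP_EXISTING = ON)
ON ps_Monthly (DateField);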
View 9 Replies
Nov 28, 2014
I work for a 24/7 shop. We currently have a table that is partitioned monthly. I have created a script that adds a new filegroup, adds a new file to that filegroup, and alters the partition scheme and function. However, I need this process to avoid locking the table; I typically hit the locking issues when I run the SPLIT command. Is there a way to prevent this from happening?
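The usual pattern is to keep an empty partition at the leading edge and split that: splitting an empty partition is a quick metadata-only change, while splitting a partition that already holds rows moves data under a schema-modification lock. A sketch with hypothetical names:

ALTER DATABASE MyDb ADD FILEGROUP FG_201501;
ALTER DATABASE MyDb ADD FILE
    (NAME = F_201501, FILENAME = 'D:\Data\F_201501.ndf', SIZE = 1GB)
    TO FILEGROUP FG_201501;

ALTER PARTITION SCHEME ps_Monthly NEXT USED FG_201501;

-- Fast only if no rows fall at or beyond this boundary yet
ALTER PARTITION FUNCTION pf_Monthly() SPLIT RANGE ('2015-01-01');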
View 4 Replies
View Related
Jun 29, 2015
I have a question about a fresh installation where tempdb needs to be configured at 100 GB on a 64 KB block size volume.
How should I allocate the file sizes, and how many files (data and log) need to be created for a 100 GB tempdb on a 64 KB block size volume?
What exactly is a 64 KB block size, and how does it relate to dividing 100 GB or 50 GB into files?
Is it true that a 64 KB cluster has 128 sectors?
Should tempdb drives be formatted with a 64 KB allocation unit, and how many files should be created for good performance at 50 GB, 100 GB, or 1 TB?
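To untangle the terms: the 64 KB figure is the NTFS allocation unit size chosen when the volume is formatted (format T: /FS:NTFS /A:64K); with traditional 512-byte sectors, a 64 KB cluster is indeed 128 sectors. It is not something you divide up inside SQL Server, so it has no bearing on how many files you create. A common starting point, sketched with hypothetical sizes for a 100 GB tempdb on an 8-core box:

-- One data file per core, up to 8, evenly sized (8 x 12800 MB = 100 GB)
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 12800MB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 12800MB, FILEGROWTH = 0);
-- ...repeat for tempdev3 through tempdev8.
-- tempdb needs only ONE log file; multiple log files bring no benefit.
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 10240MB);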
View 3 Replies
View Related
Aug 12, 2015
I have a requirement to implement CDC on 50+ tables so that the warehouse/reporting load takes incremental data changes rather than exporting whole tables. The largest table has more than half a billion records.
The warehouse uses a daily copy of the OLTP database (a daily DB refresh). How can I accomplish this? Is there a downside to implementing CDC just for the sake of taking incremental changes on these tables?
Is there any performance impact if we enable CDC on the OLTP database?
Can we make use of the CDC tables in an environment with a daily DB refresh, so that reporting queries don't hit the OLTP database?
What is the best way to implement CDC to capture incremental changes for reporting?
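A minimal sketch of enabling CDC and pulling changes (object names hypothetical). CDC does add some load, since its capture job reads the transaction log; and if the reporting copy is produced by restoring a backup, the CDC data only survives when the restore is done WITH KEEP_CDC:

USE MyOltpDb;
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Orders',
    @role_name     = NULL;

-- Incremental pull for the warehouse load:
DECLARE @from binary(10) = sys.fn_cdc_get_min_lsn('dbo_Orders'),
        @to   binary(10) = sys.fn_cdc_get_max_lsn();
SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_Orders(@from, @to, N'all');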
View 0 Replies
View Related
Dec 4, 2013
I have chosen an unstructured (flat) file as the destination, but the wizard proposes to export only one table (dbo.Acocount); all the others in the list are not exported. How can I export ALL the data into one file? I need to do this so I can edit the syntax in an editor and then import the data and database structure into PostgreSQL.
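The Import and Export Wizard writes one table per flat file, so it won't do this in one pass. For a PostgreSQL migration, the SSMS Generate Scripts wizard (Tasks > Generate Scripts, with "Schema and data") produces a single editable .sql file. Alternatively, a query like this sketch (server name hypothetical) generates one bcp export command per table:

SELECT 'bcp ' + QUOTENAME(DB_NAME()) + '.' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
     + ' out "' + t.name + '.csv" -c -t, -T -S MyServer'
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id;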
View 4 Replies
View Related
Jun 8, 2015
I'm having problems with the following code:
--DROP MASTER KEY
--GO
USE master;
CREATE MASTER KEY
ENCRYPTION BY PASSWORD = 'Pass@word1';
GO
USE master;
[code]....
What am I missing? What do I have to do if I get in a situation where I need to back out and start over?
[URL]
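On the back-out question, a sketch of the usual escape hatch (path and passwords hypothetical); any certificates or symmetric keys protected by the master key must be dropped first:

USE master;
GO
-- Keep a backup before experimenting
BACKUP MASTER KEY TO FILE = 'C:\Keys\master_key.bak'
    ENCRYPTION BY PASSWORD = 'Backup@Pass1';
GO
-- To start over:
DROP MASTER KEY;
GO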
View 9 Replies
View Related
Nov 21, 2013
We deleted 120 GB of data, but the space still hasn't been released after two days. Is there a reason for this? How exactly does SQL Server release space after truncating a 120 GB table?
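Deleting or truncating deallocates pages inside the data file, but the file keeps its size on disk until it is explicitly shrunk. A sketch (logical file name and target size hypothetical):

EXEC sp_spaceused;                       -- shows the freed space inside the file
DBCC SHRINKFILE (N'MyDb_Data', 102400);  -- shrink the file to 102400 MB on disk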
View 8 Replies
View Related
Aug 6, 2014
Does replication use a linked server?
If not, how does the data get transferred from source to destination?
Note: as far as I know, it is not backup and restore, as in log shipping.
View 2 Replies
View Related
Aug 11, 2014
I am trying to replicate data from a view on the publisher to a table on the subscriber (transactional replication). I do not need the view's base table, or the view itself, replicated to the subscriber. I only want the data from the view to feed a table on the subscriber.
Is this possible?
Running SQL Server 2008 R2 Enterprise.
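This is possible if the view is an indexed view: transactional replication can publish an indexed view log-based, so the subscriber receives it as an ordinary table. A sketch with hypothetical names (the view needs a unique clustered index first):

EXEC sp_addarticle
    @publication       = N'MyPub',
    @article           = N'vSales',
    @source_owner      = N'dbo',
    @source_object     = N'vSales',
    @type              = N'indexed view logbased',
    @destination_table = N'vSales_tbl';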
View 1 Replies
View Related
Nov 13, 2014
If data is modified (by an insert, update, or delete) while the backup is running, will the backup contain those changes, or will the changes only appear in the database afterwards?
View 2 Replies
View Related
Dec 29, 2014
How do I identify data leakage in a database? I heard the term mentioned in one of my environments.
What exactly does data leakage mean?
View 3 Replies
View Related
Jan 21, 2015
I was running an operation to shrink (EMPTYFILE) a data file and then remove it.
It blocked and caused a huge mess, I suspect on the removal part. But I want to confirm that the EMPTYFILE step completed, and that the engine isn't going to put more data in the file before I schedule the removal again a week or more from now.
How does the engine know not to put any more data in there, and how long does that state last?
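For what it's worth: a successful EMPTYFILE both moves the data out and marks the file so the engine stops allocating new extents in it, and that state persists until the file is removed. A sketch with hypothetical names:

DBCC SHRINKFILE (N'MyDataFile', EMPTYFILE);
-- later, whenever convenient:
ALTER DATABASE MyDb REMOVE FILE MyDataFile;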
View 3 Replies
View Related
May 12, 2015
We save huge amounts of log data on user behaviour on our site, but when it came to data mining we found that most of it could not be used.
What is the best practice for gathering data on user movement around a site?
Is there a best-practice template for this?
View 0 Replies
View Related
Jun 17, 2015
I need to encrypt some column-level data in multiple tables in SQL Server 2014. I've never tried encryption in SQL Server 2014. How can I achieve it?
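A minimal cell-level encryption sketch for SQL Server 2014 (table, column, and password names hypothetical); the encrypted column must be varbinary:

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng!Passw0rd';
CREATE CERTIFICATE CertCols WITH SUBJECT = 'Column encryption';
CREATE SYMMETRIC KEY KeyCols WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE CertCols;

OPEN SYMMETRIC KEY KeyCols DECRYPTION BY CERTIFICATE CertCols;
UPDATE dbo.Customers
SET    SSN_Enc = EncryptByKey(Key_GUID('KeyCols'), SSN);
SELECT CONVERT(varchar(11), DecryptByKey(SSN_Enc)) AS SSN FROM dbo.Customers;
CLOSE SYMMETRIC KEY KeyCols;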
View 4 Replies
View Related
Oct 14, 2015
How do I fetch data from an Oracle database in SQL Server 2014?
For example, with Oracle schema t1 and SQL Server database t2: while connected to the t2 SQL Server database, I am executing the query below.
select * from t1.tablename;
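That syntax won't resolve on its own; SQL Server needs a linked server and then either a four-part name or OPENQUERY. A sketch (linked server and TNS alias names hypothetical):

EXEC sp_addlinkedserver
    @server     = N'ORA1',
    @srvproduct = N'Oracle',
    @provider   = N'OraOLEDB.Oracle',
    @datasrc    = N'OraTnsAlias';

-- Four-part name (Oracle object names are usually upper case):
SELECT * FROM ORA1..T1.TABLENAME;
-- Or let Oracle run the query and return only the result:
SELECT * FROM OPENQUERY(ORA1, 'SELECT * FROM T1.TABLENAME');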
View 1 Replies
View Related
Apr 8, 2015
My company is migrating all their servers to a new data center and I get to specify what we need for the db servers.
We've got 22 production servers (mainly physical) with a couple of TB of data, running SQL 2000 through 2012.
We expect to move to SQL 2014 and to consolidate and virtualise wherever possible.
But I'd like to start by specifying an overall architecture: some best practices to guide the build at the server and installation level.
View 1 Replies
View Related
Apr 15, 2015
In the distribution database, which table(s) store information about filtered data?
View 0 Replies
View Related
Apr 21, 2015
USE <database>
select * from sys.database_files
and
select * from sys.master_files where database_id= <db id>
give me different size of memory optimized file in <database>
Microsoft SQL Server 2014 - 12.0.2456.0 (X64)
View 1 Replies
View Related
Apr 30, 2015
I recently installed a standalone SQL 2014 Standard instance on my work computer. I used Access before, but I want to use a SQL Server instead.
We have a shared drive where a file gets deposited every day at midnight. I want to pick up this file and import it into the server (it's basically a list of names).
Here's what I have done so far:
I created the database
Created the file and successfully imported data into it using the Import Data feature.
I saved the SSIS package
Scheduled an Agent Job for this package to run at certain time,daily
At first the jobs would fail with an "Access is Denied" error. I added a credential using my network account (which has admin rights on the work computer) and also added a proxy for that credential.
Now the jobs fail with a "Cannot open data file" error. I've tried changing things here and there, but I can't get it to work.
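"Cannot open data file" from an Agent job usually means the proxy account cannot reach the file: the package must use a UNC path (not a mapped drive), and the credential's account needs rights on the share itself, not just the local machine. A sketch of the wiring (account and names hypothetical):

CREATE CREDENTIAL SsisShareCred
    WITH IDENTITY = N'DOMAIN\svc_ssis', SECRET = N'P@ssw0rd!';

EXEC msdb.dbo.sp_add_proxy
    @proxy_name = N'SsisProxy', @credential_name = N'SsisShareCred';

EXEC msdb.dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'SsisProxy', @subsystem_name = N'SSIS';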
View 9 Replies
View Related
Oct 7, 2015
How do I load data from Oracle into SQL Server?
The Oracle source has 7 tables.
The SQL Server target has 7 tables.
I have used Visual Studio and created a data flow for each one, but how do I run that SSIS package on SQL Server?
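Assuming the project is deployed to the SSIS catalog (folder/project names hypothetical), the package can be started from T-SQL, and the same statements can be wrapped in an Agent job step. For a file-based package, dtexec /F "C:\packages\LoadOracle.dtsx" does the same from the command line:

DECLARE @exec_id bigint;
EXEC SSISDB.catalog.create_execution
    @folder_name  = N'MyFolder',
    @project_name = N'OracleLoad',
    @package_name = N'LoadOracle.dtsx',
    @execution_id = @exec_id OUTPUT;
EXEC SSISDB.catalog.start_execution @exec_id;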
View 9 Replies
View Related
Oct 15, 2015
I need to modify a table to reside on a new filegroup and also point TEXTIMAGE_ON to that filegroup instead of PRIMARY. Apparently, in the past the only way to achieve this via SQL was to create a new table, copy over the data, drop the old table, and rename the new table to the original name. I found this solution in the SQL Server 2005 forum.
Is there any other way to alter this table so that TEXTIMAGE_ON points to the new filegroup in SQL Server 2014? We are on Standard edition. The technique I am using is DROP CONSTRAINT (with the MOVE option) followed by ADD CONSTRAINT (on the new filegroup). The data and indexes move, but not the text data (it stays in the PRIMARY filegroup).
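As far as I know this hasn't changed in 2014: TEXTIMAGE_ON can only be set at CREATE TABLE time, so the rebuild approach is still what moves the LOB data. A sketch with a hypothetical two-column table:

CREATE TABLE dbo.MyTable_new
(
    Id  int NOT NULL PRIMARY KEY,
    Doc varbinary(max) NULL
)
ON FG_NEW TEXTIMAGE_ON FG_NEW;

INSERT INTO dbo.MyTable_new (Id, Doc)
SELECT Id, Doc FROM dbo.MyTable;

DROP TABLE dbo.MyTable;
EXEC sp_rename 'dbo.MyTable_new', 'MyTable';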
View 0 Replies
View Related
Nov 6, 2015
I've installed the MDW (Management Data Warehouse) database on our central monitoring SQL Server and then added a number of servers to be monitored. The data is collected on the monitored servers and uploaded to the central MDW monitoring server.
On the servers that are being monitored, I'm seeing a large number (over 1000) of SPIDs being generated by 'SQL Server Data Collector'.
Is this normal behaviour? I've seen more blocking as a result of this.
Is there any way to reduce the number of SPIDs generated?
View 0 Replies
View Related
Sep 11, 2015
We have an Oracle linked server created on one of our SQL Server 2008 Standard instances; we fetch data from Oracle and update some records in SQL Server. Previously it worked fine, but we are suddenly facing the issue below.
The following error occurred during the process.
OLE DB provider "OraOLEDB.Oracle" for linked server "<linkedservername>" returned message "".
Msg 7346, Level 16, State 2, Line 1
Cannot get the data of the row from the OLE DB provider "OraOLEDB.Oracle" for linked server "<linked server name>".
View 7 Replies
View Related
Jan 13, 2014
I want to use service SIDs for my SQL Server service accounts but also want to have the data files on a NetApp filer CIFS share. The 2014 installer prevents installation if CIFS and service SIDs are used together. I tried installing with a domain account on CIFS and then swapping back to service SIDs afterwards, but couldn't find a way to do it.
I granted the AD computer account Full Control on the CIFS share, so it should work, but I just can't work it out at the moment.
View 0 Replies
View Related
Feb 2, 2015
I've been trying to get a definitive answer to this question, but so far I have only conflicting and patchy answers from other sources. I have an index that, let's say, requires 10 GB of space to rebuild. This index resides on a filegroup that spans two files on two separate drives (i.e. an mdf and an ndf).
When I rebuild this index, how will each of these data files grow as the rebuild proceeds to completion? Let's set aside, for the time being, any other activity hitting the example index/database in question. My tests seem to show that only the mdf grows (or the file with the lowest id in that filegroup), provided there is enough space available in that particular file to complete the operation. The secondary ndf data file doesn't grow at all if the mdf has enough space.
Is this expected behavior? i.e. is the index rebuilt in a contiguous manner relative to the files contained within the filegroup, i.e. file id 1 grows until its limit is reached, then the next file id grows, and so on?
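One way to observe this directly is to snapshot per-file size and used space before, during, and after the rebuild. Proportional fill allocates to each file in proportion to its free space, so a file with far more free space will absorb most of the new extents:

SELECT name,
       size / 128                            AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM sys.database_files;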
View 0 Replies
View Related
Jul 27, 2015
I have been creating databases in SQL 2008 with a primary filegroup for the system objects and a secondary filegroup, marked Default, for the data.
We are preparing a migration to SQL 2014, and the administrator is complaining that he won't adopt this structure on the new servers because 'there is no benefit' and 'a backup cannot be restored (!?)'.
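For reference, the pattern in question is just this (names hypothetical), and a full backup always contains every filegroup, so plain backup/restore is unaffected by it:

ALTER DATABASE MyDb ADD FILEGROUP DATA;
ALTER DATABASE MyDb ADD FILE
    (NAME = MyDb_data1, FILENAME = 'D:\Data\MyDb_data1.ndf', SIZE = 10GB)
    TO FILEGROUP DATA;
ALTER DATABASE MyDb MODIFY FILEGROUP DATA DEFAULT;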
View 2 Replies
View Related
Aug 4, 2015
Background information: SQL 2014 Enterprise, in a highly volatile OLTP environment. We generate 10-12 GB of compressed transaction log backups every 15 minutes.
Currently we have a two-node A/P cluster residing on a flash array, and we need to leverage AlwaysOn to offload processing. The replica server will have flash storage, and the replica node has the same CPU and memory footprint, with a 10 Gb connection between nodes. Is anyone else generating transaction logs this large per 15/30 minute period?
View 0 Replies
View Related
Aug 27, 2015
I want to set up a database role so that users can use sp_readerrorlog through SSMS. It does a check on membership in the securityadmin role.
I have tested it and can see you can grant execute on xp_readerrorlog but the SSMS GUI uses sp_readerrorlog.
I thought I could create a user/certificate and add the signature to sp_readerrorlog but it's not permitted (likely because it's not a normal database object).
So the other solution is to add the users to the securityadmin role but then explicitly deny ALTER ANY LOGIN (best done with a custom server role in 2012+, otherwise just manually in 2008). I tested this out and it works: I'm not able to alter any logins or increase my own permissions. I also checked what's reported by fn_my_permissions(null, null), and it shows minimal permissions, as I'd expect.
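A sketch of the custom server role approach described above (role and login names hypothetical):

CREATE SERVER ROLE ErrorLogReaders;
ALTER SERVER ROLE securityadmin ADD MEMBER ErrorLogReaders;
DENY ALTER ANY LOGIN TO ErrorLogReaders;
ALTER SERVER ROLE ErrorLogReaders ADD MEMBER [DOMAIN\SomeUser];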
View 0 Replies
View Related
Mar 31, 2015
I set up the collector and specified my AD account as the Run As account on the Collector Set - Properties - General screen. My AD account is a local admin on the remote server.
However, the collector does not seem to work. Although the collector set shows as running, the .blg file stays at 64 KB, and if I open it there is nothing inside (no counters at the bottom). What did I miss?
View 1 Replies
View Related
Aug 26, 2013
I'm trying to load data into a memory-optimized table (INSERT INTO ... SELECT ... FROM ...). The source table is about 1 GB in size with 13 million rows. During this load the LDF file grows to 350 GB, until the disk runs out of space. The server has about 110 GB of memory reserved for SQL Server, and tempdb doesn't grow. The bucket count in the CREATE statement is 262144, and the hash key has 4 fields (two int, one smallint, one varchar(200)). The disk for the data files (including the Hekaton files) still has free space.
How can I reduce the growth of the LDF file during the data load?
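Memory-optimized tables are always fully logged, and a single INSERT ... SELECT is one transaction, so nothing in the log can be truncated until it commits. Loading in batches, with log backups in between (in FULL recovery), keeps the LDF from ballooning. A sketch with hypothetical table and key names:

DECLARE @batch int = 500000;
WHILE 1 = 1
BEGIN
    INSERT INTO dbo.MemOptTarget (Id, Col1)
    SELECT TOP (@batch) s.Id, s.Col1
    FROM dbo.SourceTable AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.MemOptTarget AS t WHERE t.Id = s.Id);
    IF @@ROWCOUNT < @batch BREAK;
    -- take a transaction log backup here so the log space can be reused
END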
View 9 Replies
View Related
Oct 1, 2014
I have a Windows Server 2012 R2 two-node cluster with a SQL Server 2014 FCI installed. The data files are on a separate Windows Server 2012 R2 file server. The data file share has been permissioned with Full Control for the SQL Server service and SQL Server Agent service accounts, and the NTFS permissions are Full Control.
When I try to attach a database
CREATE DATABASE AdventureWorksDW2012
ON (FILENAME = '\\apricot\mssql_VIOLET\MSSQL12.MSSQLSERVER\MSSQL\DATA\AdventureWorksDW2012_Data.mdf')
FOR ATTACH
I get this error:
Msg 5120, Level 16, State 101, Line 4
Unable to open the physical file "\\apricot\mssql_VIOLET\MSSQL12.MSSQLSERVER\MSSQL\DATA\AdventureWorksDW2012_Data.mdf". Operating system error 5: "5(Access is denied.)".
If I log into the file server (called APRICOT) and look at the NTFS permissions they all look good. I have also reapplied the NTFS permissions from the root folder down.
EDIT
If I log on to one of the nodes in the cluster as the SQL Server service account, navigate to \\apricot\mssql_VIOLET\MSSQL12.MSSQLSERVER\MSSQL\DATA, and copy and paste the data file, it works fine.
EDIT2:
If I log on to the file server and Enable Inheritance at the root level, then Replace all child objects with inheritable permission entries from this object, I get this error:
User Account Control settings on all nodes and the file server are set to Never notify
View 0 Replies
View Related
Nov 12, 2014
I read that when a SQL Server database has multiple data files within a single filegroup, SQL Server writes data using a proportional fill algorithm, where the amount of data written to a file is proportional to the amount of free space in that file compared to the other files in the filegroup.
So if no extra filegroups are created and multiple secondary files are attached to the database, is data stored and written across the files by the same algorithm, or in some different way?
View 2 Replies
View Related
May 7, 2014
A little background on what I am trying to achieve first. We are moving to Azure virtual machines, and we will have 8 disks on the SQL Server box. I am adding more files to the primary filegroup, with each file on its own drive, and am then rebalancing data across these files by rebuilding all of the indexes on the tables, which is working fine. No problems so far; all is good.
I now have an additional problem. If there is a LOB or BLOB column on a table, rebuilding the clustered index and all the nonclustered indexes doesn't rebalance the LOB data across the disks the way it does with in-row data.
I cannot find any articles on rebalancing LOB data, because they all say to move to a new filegroup. I do not want a new filegroup; I just want to use the primary filegroup where the data already resides and redistribute it evenly, in the same way as the in-row data, which is working fine.
One solution I thought about was to BCP the data out of the table, truncate the table, and then BCP back into the table, which I imagine would have the desired effect of distributing the data evenly over the files; see the sketch below.
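A sketch of that round trip (server, database, and path names hypothetical). Native format (-n) preserves the LOB data, and the truncate-plus-reload means the table must be offline to applications for the duration:

bcp MyDb.dbo.BigTable out D:\staging\BigTable.dat -n -T -S MyServer
sqlcmd -S MyServer -d MyDb -Q "TRUNCATE TABLE dbo.BigTable"
bcp MyDb.dbo.BigTable in D:\staging\BigTable.dat -n -T -S MyServer -b 100000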
View 2 Replies
View Related