How Sparse Are My Datafiles?
Feb 21, 2008

Hi!
Is there a way to see how sparse my datafiles are?
I would like to know beforehand if running dbcc shrinkfile will make sense...
/Thanks
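For what it's worth, one way to check before shrinking (a sketch; it relies on the standard FILEPROPERTY function and sys.database_files on SQL Server 2005 and later, while on SQL 2000 sp_spaceused gives similar figures) is to compare each file's allocated size with the space actually used:

SELECT name,
       size / 128.0                                     AS size_mb,   -- size is reported in 8 KB pages
       FILEPROPERTY(name, 'SpaceUsed') / 128.0          AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0 AS free_mb
FROM sys.database_files;

If free_mb is large relative to size_mb, DBCC SHRINKFILE has something to reclaim; if not, shrinking will not gain you much.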
Greetings! I could really use some suggestions on how to improve on the following, if at all possible:

Table 'Customer'
---------------------
ID GUID PK
....

Table 'Facility'
-----------------
ID GUID PK
CustomerID GUID FK (FK to Customer GUID)
....

Table 'Rate'
----------------
ID PK
OwnerID GUID Nullable FK (FK to Customer, Facility GUID PK)
OwnerLevel INT Constraint 1-3
<Rate Data>

Table 'Rate' is a sparse hierarchy of data. There are 3 possible levels in the hierarchy, as follows:

OwnerID <NULL>
OwnerLevel 1
This indicates Global rate data.

OwnerID <Customer.ID>
OwnerLevel 2
This indicates Customer-specific rate data.

OwnerID <Facility.ID>
OwnerLevel 3
This indicates Facility-specific rate data.

Now, a given Customer need not have an entry in the Rate table. If a Customer does not have an entry, it is supposed to 'inherit' Global rate data. A given Facility need not have an entry in the Rate table. If a Facility does not have an entry, it is supposed to inherit Customer-specific rate data, and in the absence of an entry for the Facility's parent Customer, it is supposed to inherit Global rate data.

The challenge is that I want to write a view to give me back the appropriate rate record for Customer and Facility. Here's what I've done so far.

View _Rate
--------------
SELECT
Rate.*,
NULL AS TargetID
FROM
Rate
WHERE
Rate.OwnerID IS NULL
UNION
SELECT
Rate.*,
Customer.ID AS TargetID
FROM
Rate
CROSS JOIN
Customer
WHERE
Rate.OwnerID IS NULL
OR Rate.OwnerID = Customer.ID
UNION
SELECT
Rate.*,
Facility.ID AS TargetID
FROM
Rate
CROSS JOIN
Facility
WHERE
Rate.OwnerID IS NULL
OR Rate.OwnerID IN (Facility.CustomerID, Facility.ID)

View view_Rate
--------------------
SELECT
_Rate.*
FROM
_Rate
INNER JOIN
(
SELECT
TargetID,
MAX(OwnerLevel) AS OwnerLevel
FROM
_Rate
GROUP BY
TargetID
) AS Filtered_Rate
ON
_Rate.TargetID = Filtered_Rate.TargetID
AND _Rate.OwnerLevel = Filtered_Rate.OwnerLevel

The combination of these two views gives a resultset that contains 1 record for every TargetID, as follows:

TargetID <NULL>
OwnerID <NULL>
OwnerLevel 1
This indicates Global rate data established at the Global level.

TargetID <Customer.ID>
OwnerID <NULL>
OwnerLevel 1
This indicates Customer rate data for the specific Customer identified by Customer.ID is inherited from the Global rate data.

TargetID <Customer.ID>
OwnerID <Customer.ID>
OwnerLevel 2
This indicates Customer-specific rate data for the specific Customer identified by Customer.ID (not inherited).

TargetID <Facility.ID>
OwnerID <NULL>
OwnerLevel 3
This indicates Facility rate data is inherited from the Global rate data.

TargetID <Facility.ID>
OwnerID <Customer.ID>
OwnerLevel 2
This indicates Facility rate data for the specific Facility identified by Facility.ID is inherited from the Facility's parent Customer's Customer-specific rate data.

TargetID <Facility.ID>
OwnerID <Facility.ID>
OwnerLevel 3
This indicates Facility-specific rate data for the specific Facility identified by Facility.ID (not inherited).

I know this is a lengthy post, and a complicated query scenario, but I'm not willing to accept that my solution is the best solution just yet. Please consider that I really need this functionality in a VIEW as much as possible.

Thank you for your learned consideration. I eagerly await your replies.

Darryll
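One possible simplification, offered only as a sketch against the schema described above (the names are taken from the post, the CTE form assumes SQL Server 2005 or later, and the <Rate Data> columns would need to be listed explicitly): build the candidate set once per target and keep the most specific row with ROW_NUMBER(), which avoids joining the UNION view back onto itself.

CREATE VIEW view_Rate_Alt
AS
WITH Targets AS
(
    -- One row per target: the global level, every Customer, and every Facility
    SELECT CAST(NULL AS uniqueidentifier) AS TargetID,
           CAST(NULL AS uniqueidentifier) AS ParentCustomerID
    UNION ALL
    SELECT ID, NULL FROM Customer
    UNION ALL
    SELECT ID, CustomerID FROM Facility
),
Candidates AS
(
    SELECT t.TargetID,
           r.ID, r.OwnerID, r.OwnerLevel,   -- plus the <Rate Data> columns
           ROW_NUMBER() OVER (PARTITION BY t.TargetID
                              ORDER BY r.OwnerLevel DESC) AS rn
    FROM Targets AS t
    JOIN Rate AS r
      ON r.OwnerID IS NULL                    -- global rates apply to everyone
      OR r.OwnerID = t.TargetID               -- rates owned by the target itself
      OR r.OwnerID = t.ParentCustomerID       -- a Facility inherits its Customer's rates
)
SELECT TargetID, ID, OwnerID, OwnerLevel
FROM Candidates
WHERE rn = 1;

Unlike view_Rate, this returns exactly one row per target even if several rate rows share the winning OwnerLevel, so a tie-breaking column may need to be added to the ORDER BY.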
I have recently converted a table with many columns so that it now uses sparse columns. However, I have found that a lot of my queries are no longer working. It seems to be due to the fact that I use a lot of #Temp tables in my queries, along with the SELECT... INTO... method to get data into the temporary table. This previously worked fine; however, the sparse columns are not coming across to the #Temp table. Here is an example of what previously worked fine:
(This obviously just grabbed everything in the Client table)
Select *
INTO #temptable1
from Client
I have tried the following without any success (knowing I now need to specify the column names):
Select ClientId, Val1, Val2, Val3, Val4, Val5, Val6
INTO #temptable1
from Client
Val3, Val4, Val5 and Val6 are sparse columns.
The query runs fine; however, when I try to select any of the sparse columns from #temptable1, it just says they do not exist. After reading up, it would seem I cannot use SELECT... INTO..., but I have not found any other alternative that I can convert my code to. I have a lot of quite complex queries that use this method, and I am looking for something that I can convert them all over to.
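One workaround that is often suggested for this (a sketch only; the column names follow the example above and the data types are assumed, so they would need to match the real Client table) is to create the temp table explicitly with ordinary, non-sparse columns and load it with INSERT ... SELECT instead of SELECT ... INTO:

CREATE TABLE #temptable1
(
    ClientId int          NOT NULL,
    Val1     varchar(50)  NULL,
    Val2     varchar(50)  NULL,
    Val3     varchar(50)  NULL,   -- declared as plain (non-sparse) columns in the temp table
    Val4     varchar(50)  NULL,
    Val5     varchar(50)  NULL,
    Val6     varchar(50)  NULL
);

INSERT INTO #temptable1 (ClientId, Val1, Val2, Val3, Val4, Val5, Val6)
SELECT ClientId, Val1, Val2, Val3, Val4, Val5, Val6
FROM Client;

The downside is maintaining the explicit column list, but it keeps the rest of the query logic unchanged.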
As I understood it, if SPARSE is used on a column that has many NULL marks, then the storage can be used more efficiently (we need less space to store the NULL marks), hence a table with many NULL marks and the SPARSE property should need less storage than the same table without SPARSE. I created two tables as follows:
/******* Table with Sparse ******/
CREATE TABLE Sprstb(
unsprsid INT IDENTITY(1,1) NOT NULL,
Firstname varchar(20) NOT NULL,
Lastname varchar(20) NOT NULL,
Tel int NOT NULL,
adress nvarchar(60) SPARSE NULL)

/***** Table without Sparse *******/
CREATE TABLE Unsprstb(
unsprsid INT IDENTITY(1,1) NOT NULL,
Firstname varchar(20) NOT NULL,
Lastname varchar(20) NOT NULL,
Tel int NOT NULL,
address nvarchar(60) NULL)
I have populated Sprstb with 5 million records. It needs 509.961 MB of storage. Then I copied this table into Unsprstb:
SET IDENTITY_INSERT [dbo].[Unsprstb] ON
INSERT [dbo].[Unsprstb] (unsprsid, Firstname, Lastname, Tel, address)
SELECT unsprsid, Firstname, Lastname, Tel, adress FROM [dbo].[Sprstb]
SET IDENTITY_INSERT [dbo].[Unsprstb] OFF
Unsprstb needs only 466.031 MB!
That means the table with the SPARSE column needs more storage. Why?
By the way, in table Sprstb the adress column has 1,666,198 NULL marks (out of 5,000,000 rows).
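A likely explanation, offered as general guidance rather than a definitive answer: a sparse column stores NULLs for free but adds roughly 4 bytes of overhead to every non-NULL value, and the general guidance is that well over half of the values (on the order of 60% or more for (n)varchar columns) need to be NULL before SPARSE starts saving space. Here only about a third of the 5,000,000 rows are NULL, so the overhead on the other two thirds outweighs the savings. A quick way to check the NULL density and compare the two tables:

SELECT COUNT(*) AS total_rows,
       SUM(CASE WHEN adress IS NULL THEN 1 ELSE 0 END) AS null_rows,
       100.0 * SUM(CASE WHEN adress IS NULL THEN 1 ELSE 0 END) / COUNT(*) AS null_pct
FROM dbo.Sprstb;

EXEC sp_spaceused 'Sprstb';
EXEC sp_spaceused 'Unsprstb';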
I found a couple of threads about splitting, but didn't find the same situation as mine.
I have a server with RAID 10 and a big storage array, and a test server with two 30 GB hard drives (not in a RAID or anything). The situation is that the database on the real server has grown over 30 GB, and now I cannot restore a copy of the database onto the test server because I have one big data file.
Can I somehow split the 35 GB data file into 25 GB and 10 GB when I restore it to the test server?
Or can you recommend some other solutions?
At the moment I can't do a hardware upgrade.
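As far as I know, RESTORE cannot split one data file into two; the file layout has to already exist in the backup. One approach (a sketch with hypothetical database, filegroup, index, and path names) is to add a second file on a new filegroup on the production server, move the largest tables onto it by rebuilding their clustered indexes there, shrink the original file, take a fresh backup, and restore WITH MOVE so each file lands on its own 30 GB drive on the test server:

-- On the production server: add a second filegroup and data file
ALTER DATABASE MyDb ADD FILEGROUP FG2;
ALTER DATABASE MyDb
ADD FILE (NAME = MyDb_data2, FILENAME = 'E:\Data\MyDb_data2.ndf', SIZE = 10GB)
TO FILEGROUP FG2;

-- Move the biggest table(s) by rebuilding the existing clustered index on the new filegroup
CREATE CLUSTERED INDEX IX_BigTable ON dbo.BigTable (SomeKey)
WITH (DROP_EXISTING = ON) ON FG2;   -- assumes the table already has a clustered index with this name

-- Shrink the original file down to roughly what is left in it, then back up
DBCC SHRINKFILE (MyDb_data, 25600);   -- target size in MB
BACKUP DATABASE MyDb TO DISK = 'E:\Backup\MyDb.bak';

-- On the test server (after copying the backup over): place each file on its own drive
RESTORE DATABASE MyDb FROM DISK = 'D:\Backup\MyDb.bak'
WITH MOVE 'MyDb_data'  TO 'C:\SQLData\MyDb_data.mdf',
     MOVE 'MyDb_data2' TO 'D:\SQLData\MyDb_data2.ndf',
     MOVE 'MyDb_log'   TO 'D:\SQLData\MyDb_log.ldf';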
I have some huge tables (think 200+ GB for a single table) which are excellent candidates for sparse columns. The tables have many columns which are defined with decimal datatypes (13,2), with a large percentage of them (over 50% in most cases, some as much as 99%) being 0.00. Since this is very expensive in terms of storage, my idea is to set all the 0.00 values equal to NULL and then set the columns as sparse. Across 100 or so identical databases, I have 5 such tables, with 20-40 columns in each table. I can see two approaches:
1.) Three steps for each column in each table in each db (see the sketch after these options):
Step 1: alter the table to allow NULLs in the column
Step 2: update the table, setting the column to NULL where it is 0.00
Step 3: alter the table to mark the column as sparse
2.)
Step 1: Create entirely new table with sparse column definitions
Step 2: copy entire table, transforming 0.00 to null for affected columns via SSIS
Step 3: drop original table, rename new table to original name
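For option 1, the per-column sequence would look roughly like the following (a sketch with a hypothetical table and column name; on tables this size the UPDATE in step 2 should probably be batched to keep the transaction log under control):

-- Step 1: make the column nullable (keeping its decimal(13,2) type)
ALTER TABLE dbo.BigTable ALTER COLUMN Amount1 decimal(13,2) NULL;

-- Step 2: turn the zero values into NULLs
UPDATE dbo.BigTable SET Amount1 = NULL WHERE Amount1 = 0.00;

-- Step 3: mark the column as sparse
ALTER TABLE dbo.BigTable ALTER COLUMN Amount1 ADD SPARSE;

Note that ALTER COLUMN ... ADD SPARSE requires the column to be nullable already, which is why step 1 has to come first.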
Hi,
I'm trying to create a VERY wide table, with 1,000 columns of type varchar(MAX), nullable.
The CREATE TABLE statement (both in SQL 2005 & 2008), gives the following warning:
Warning: The table "WIDE_TABLE" has been created, but its maximum row size exceeds the allowed maximum of 8060 bytes. INSERT or UPDATE to this table will fail if the resulting row exceeds the size limit.
When I insert data into the table, filling all columns with small, 10-byte string values, I get the following error:
Msg 50000, Level 16, State 1, Procedure sp_pivot, Line 118
Cannot create a row of size 15034 which is greater than the allowable maximum of 8060.
I'd like to verify this observation: each row is created with 2,000 bytes of offset data (2 bytes * 1,000 columns), 125 bytes for the null bitmap (1,000 columns / 8 bits), and some more "wasted" row information. This leaves less than 6K for the data itself. But since not all columns can fit within the page, row-overflow pointers need to be created, 24 bytes per column, which very quickly add up to more than 8K, thus the error. So the 8K limit is hit with far fewer columns than the maximum 1,024-column restriction.
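A rough arithmetic check supports this (approximate figures, since the exact per-row overhead depends on storage-engine internals): even if every value stayed in-row, 1,000 variable-length columns at 10 bytes of data plus a 2-byte offset-array entry each come to about 12,000 bytes, and adding the 125-byte NULL bitmap and a few bytes of row header puts the row well past 8,060 bytes. Pushing a column off-row does not help, because its 10 bytes of data are replaced by a 24-byte pointer, so the in-row size only grows. In other words, the 8,060-byte limit is hit long before the 1,024-column limit matters.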
Furthermore, in SQL 2008, SPARSE columns will not solve the problem (they may save some "metadata" space when the columns are null, but if not, I'm facing the same problem again, or even worse, since now each value takes more storage space). The maximum of 30,000 columns in 2008 is only for cases where the column values are really sparse...
Is this the right observation? If so, is there a workaround besides splitting into multiple tables?
Thanks,
Aviv.
I need to move datafiles and logfiles to another drive. How can this be done?
Thanks
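One way to do this on SQL Server 2005 and later (a sketch with a hypothetical database name, logical file names, and paths; a detach/attach or backup/restore achieves the same thing) is to repoint the catalog with ALTER DATABASE ... MODIFY FILE, take the database offline, move the physical files, and bring it back online:

ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_data, FILENAME = 'D:\SQLData\MyDb_data.mdf');
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_log,  FILENAME = 'E:\SQLLogs\MyDb_log.ldf');

ALTER DATABASE MyDb SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- Move the .mdf and .ldf files to the new locations at the operating-system level
ALTER DATABASE MyDb SET ONLINE;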
Hi group, I want to make a new server instance, but based on some old files from a previous instance I once had, from a SQL Server installation I scrapped. I only know how to use the SQL Server Enterprise Manager console, and I haven't found any tools in there to do that. Any suggestion will be more than welcome.
Adrian
Hi,
I have a database with one datafile of 60 GB. Is it a good thing (performance-wise) to split this datafile up into smaller datafiles of 6 GB each?
I don't have separate disk slices, so I can't spread my datafiles across my disks, but I only need to know if a 60 GB datafile isn't too big for MSSQL 2000.
Thanks,
Ewoud,
Good morning,
When I install a database I set up the datafiles on another volume with greater capacity. I would like to know about Reporting Services: is there a need for its datafiles to be placed on another volume, or may they be placed on the same volume as the operating system?
Thank you,
Best Regards,
Ralph Nogueira Haddad
Hey guys, I want to relocate my database datafile and transaction logs from the C: drive to D:.
From what I have in mind, correct me if I am wrong: first I will create the same folders on the D: drive as they are on the C: drive, then copy the datafiles from C: to D:, then come back and change the paths on the database files to point to D:.
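That plan is essentially a detach/copy/attach, which can be scripted roughly like this (a sketch with a hypothetical database name and paths; the database is unavailable while detached, and taking a backup first is cheap insurance):

-- Release the files from SQL Server
EXEC sp_detach_db @dbname = 'MyDb';

-- Copy the .mdf and .ldf files from C:\ to the matching folders on D:\ at the OS level,
-- then attach the database pointing at the new locations:
CREATE DATABASE MyDb
ON (FILENAME = 'D:\SQLData\MyDb.mdf'),
   (FILENAME = 'D:\SQLData\MyDb_log.ldf')
FOR ATTACH;

(On SQL Server 2000, sp_attach_db does the same job as CREATE DATABASE ... FOR ATTACH.)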
Hi,
I need to set up a standby database using a copy of the datafiles of the primary database. (A SQL backup of the database is not an option.)
Is there any option to restore a database from datafiles?
Or is there an option to attach a database but in standby mode?
Please advise,
Zvi Gilinsky
Hello, I am trying to clean up a database I inherited. I have an 80 GB SQL 2000 database with 20 datafiles, each 4096 MB in size. I have been able to remove unneeded data and am now trying to clean up. If I do a shrink on each datafile I would be able to recover on average 2 GB out of 4 GB; however, I would prefer to have 10 full datafiles and 10 empty (or better yet, 5 full 8 GB datafiles and 15 empty). Can someone point me in the right direction on how to move the data around so that I don't have 20 partially filled datafiles?

I have noticed that I can shrink a single file and use the "empty the file (and move data to other files in the group)" option. I have already done this to the last 2 datafiles as a test, but I am not sure how to do this on a large scale. I have also set the first 10 datafiles to be able to grow to 8 GB. For lack of a better way to say this, is there a way to defrag or reorganize the data files so everything "moves to the front"? BTW, I have already run a maintenance plan to reorganize the data and index pages.
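The "empty the file" option in Enterprise Manager corresponds to DBCC SHRINKFILE with EMPTYFILE, so the same thing can be scripted for each file you want drained (a sketch with a hypothetical logical file name; repeat per file, and data only moves to other files in the same filegroup):

-- Drain one partially filled file into the remaining files of its filegroup
DBCC SHRINKFILE ('MyDb_data11', EMPTYFILE);

-- Once it is empty, release its space (or drop it entirely)
DBCC SHRINKFILE ('MyDb_data11', 1);
-- ALTER DATABASE MyDb REMOVE FILE MyDb_data11;   -- only if the file should go away completely

Running this for the 10 files you want emptied, from the last file backwards, should leave the data concentrated in the front files.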
Hi,
I have formatted my server because of a serious problem, and I did not back up my database. I only have a physical copy of the disk containing the data on another disk. :( How can I recover my db? Thank you in advance.
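If the disk copy includes the database's .mdf and .ldf files, one possible recovery path (a sketch with hypothetical paths; it only works if the files were copied in a consistent state, and there is no guarantee) is to attach them on a freshly installed instance. If only the data file survived, FOR ATTACH_REBUILD_LOG on SQL Server 2005 and later can try to rebuild the log:

-- If both data and log files are available:
CREATE DATABASE MyDb
ON (FILENAME = 'E:\OldDisk\MyDb.mdf'),
   (FILENAME = 'E:\OldDisk\MyDb_log.ldf')
FOR ATTACH;

-- If only the data file is available (SQL Server 2005+):
CREATE DATABASE MyDb
ON (FILENAME = 'E:\OldDisk\MyDb.mdf')
FOR ATTACH_REBUILD_LOG;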
I had asked a question last week and I realized that I had phrased it in a misleading way. Within a lab environment at my company, we are experimenting w/ accessing remote SQL data files stored separately on another machine from the SQL server itself (whether it be another client or server).
The experiment is this: You have one client machine accessing a SQL server via Enterprise Manager. The datafiles that you are trying to access are stored on another machine that does not have SQL Server installed. This remote machine only has the datafiles.
Can one map a static path via UNC from the SQL server to the datafiles on the remote machine and then access these datafiles (as long as the user's account has the appropriate permissions to the remote machine)? To my understanding, within SQL server you can only access datafiles that are stored locally on the SQL server itself.
The other thing that I was wondering was whether the type of user account has some significance in this situation. More than likely, a local user account would not be able to access the remote machine w/ the datafiles even if the SQL server could map a UNC path and retain it. A domain user account might be able to do this, though.
Any help that you could provide would be appreciated.
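On the main question, and offered as something to verify against your version's documentation rather than a recommendation: by default SQL Server refuses to create or attach database files on a UNC/network path, regardless of share permissions. On SQL Server 2000 and 2005 that check could be relaxed with trace flag 1807, and it is the SQL Server service account (not the logged-in user's account) that needs rights on the remote share, which is why a domain service account usually matters more than the user's own account. A sketch, with hypothetical server and share names:

DBCC TRACEON (1807, -1);   -- allow network-based database files (use with caution; unsupported for most production uses)

CREATE DATABASE NetDb
ON  (NAME = NetDb_data, FILENAME = '\\fileserver\share\NetDb.mdf')
LOG ON (NAME = NetDb_log, FILENAME = '\\fileserver\share\NetDb_log.ldf');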