This is a question that has always intrigued me: what is the ideal file allocation unit size for a disk holding only data or index pages on a server running SQL Server? It seems to me that 8,192 bytes would be the ideal size, as it would enable the system to gobble up an entire page in one go. Any ideas?
For the past couple of days, we have been getting this message in the error log of one of our SQL 2012 servers:
LogEntry: Error [36, 17, 145] occurred while attempting to drop allocation unit ID 451879652360192 belonging to worktable with partition ID 451879652360192. (version Microsoft SQL Server 2012 - 11.0.5058.0 (X64))
What is the best way to troubleshoot this issue? I do not know which of our databases it is coming from.
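Since worktables live in tempdb, the error most likely points there, but a quick way to check every database for that partition ID is to loop over them. This is just a diagnostic sketch; sp_MSforeachdb is an undocumented procedure, and the ID is copied from the error above:

-- Search all databases for the partition ID from the error message.
EXEC sp_MSforeachdb '
USE [?];
SELECT DB_NAME() AS db_name,
       p.partition_id,
       OBJECT_NAME(p.object_id) AS object_name
FROM sys.partitions AS p
WHERE p.partition_id = 451879652360192;';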
I am using DTS to transfer tables from Oracle 9i to SQL Server 2000 in a shared environment, and I managed to migrate a lot of tables without a glitch.
When I was migrating a table <XYZ> from Oracle to SQL Server, the table was created in SQL Server, but DTS threw an error while copying the data and 0 rows were copied. The error message was:
"Cannot create a row of size 8387 which is greater than the allowed maximum of 8060"
Incidentally, the table in the Oracle DB has 152 rows of data and 94 columns.
Does any change need to be made on the admin side of SQL Server to resolve this problem and facilitate effective transfer of data between the databases?
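SQL Server 2000 limits a row to 8,060 bytes, so a 94-column table can easily exceed it once Oracle types are mapped to wide SQL Server types. As a starting point, this sketch (table name XYZ is the placeholder from the post) shows the defined row width on the SQL 2000 side:

-- Sum of the maximum defined column widths for the migrated table (SQL 2000 system tables).
SELECT SUM(length) AS defined_row_bytes
FROM syscolumns
WHERE id = OBJECT_ID('XYZ');

If the total is over 8,060, the usual fix is to narrow the mapped column types (e.g. oversized varchar columns) or split the table, rather than any server-side admin setting.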
Sorry, I had started this topic in the wrong forum.
I have a DB that is 1.7 GB. The table data takes approximately 200 MB. The transaction logs were truncated. Where else can this large size be coming from, and how can I confirm?
DB is generally small. ~25 tables, 100 SPs, 10 views, etc.
Note:
I have 4 queues using SQL Notifications, but selecting from them returns no data.
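A quick way to see where the space is allocated (a diagnostic sketch; run it in the database in question):

-- Overall allocated vs. used space, with usage counters refreshed first.
EXEC sp_spaceused @updateusage = N'TRUE';
-- Per-table breakdown (sp_MSforeachtable is undocumented but widely used).
EXEC sp_MSforeachtable 'EXEC sp_spaceused ''?''';

Queue contents live in internal tables, so if the queues were holding orphaned messages, the per-object numbers would surface that even though a plain SELECT shows nothing.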
The following query returns a value of 0 for the unit percent when I divide a count by a subquery count. Is there a way to get the percent using a subquery? Another section of the query using sum() works.
Here is a test code snippet:
--Test Count/Count subquery
declare @Date datetime
set @Date = '8/15/2007'

select
    -- count returns unit data
    Count(substring(m.PTNumber,3,3)) as PTCnt,
    -- count returns total for all units
    (select Count(substring(m1.PTNumber,3,3))
     from tblVGD1_Master m1
     left join tblVGD1_ClassIII v1 on m1.SlotNum_ID = v1.SlotNum_ID
     where left(m1.PTNumber,2) = 'PT' and m1.Denom_ID <> 9
       and v1.Act = 1 and m1.Active = 1 and v1.MnyPlyd <> 0
       and not (v1.MnyPlyd = v1.MnyWon and v1.ActWin = 0)
       and v1.[Date] between DateAdd(dd,-90,@Date) and @Date) as TotalCnt,
    -- attempting to calculate the percent by PTCnt/TotalCnt returns 0
    (Count(substring(m.PTNumber,3,3)) /
     (select Count(substring(m1.PTNumber,3,3))
      from tblVGD1_Master m1
      left join tblVGD1_ClassIII v1 on m1.SlotNum_ID = v1.SlotNum_ID
      where left(m1.PTNumber,2) = 'PT' and m1.Denom_ID <> 9
        and v1.Act = 1 and m1.Active = 1 and v1.MnyPlyd <> 0
        and not (v1.MnyPlyd = v1.MnyWon and v1.ActWin = 0)
        and v1.[Date] between DateAdd(dd,-90,@Date) and @Date)) as AUPct
-- main select
from tblVGD1_Master m
left join tblVGD1_ClassIII v on m.SlotNum_ID = v.SlotNum_ID
where left(m.PTNumber,2) = 'PT' and m.Denom_ID <> 9
  and v.Act = 1 and m.Active = 1 and v.MnyPlyd <> 0
  and not (v.MnyPlyd = v.MnyWon and v.ActWin = 0)
  and v.[Date] between DateAdd(dd,-90,@Date) and @Date
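The 0 comes from integer division: both counts are int, so PTCnt/TotalCnt truncates to zero whenever PTCnt is less than TotalCnt. A minimal sketch of the fix is to force one operand to a non-integer type before dividing:

-- Multiply by 1.0 (or cast) so the division happens in decimal, not int.
    (1.0 * Count(substring(m.PTNumber,3,3)) /
     (select Count(substring(m1.PTNumber,3,3))
      from tblVGD1_Master m1
      ... )) as AUPct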
I am trying to resize a database's initial log file from 500 MB to 2 MB. I'm using:
ALTER DATABASE <DBNAME> MODIFY FILE ( NAME = <DBLOGFILENAME>, SIZE = 2MB )
And I'm getting "MODIFY FILE failed. Specified size is less than current size." I tried going into the database properties and setting the log file to 2 MB, but it doesn't keep the changes.
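ALTER DATABASE ... MODIFY FILE can only grow a file, which is why it fails when the target is smaller than the current size. To make the log smaller, shrink it instead (a sketch; the logical log file name is the placeholder from the post):

USE <DBNAME>;
DBCC SHRINKFILE (<DBLOGFILENAME>, 2);  -- target size in MB

Note the file can only shrink as far as the active portion of the log allows, so it may take a log backup (or a CHECKPOINT in SIMPLE recovery) before the shrink actually reaches 2 MB.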
I have a log file that is approximately 50 GB. I backed up just the log, and the file size of the .bak is 192 GB. Why is this? Shouldn't it be closer to 50 GB?
Normally I wouldn't let the log grow this much, but we are in the process of getting a new server up and running and don't have backups going yet. They are working on getting that up and running this week.
So I did a log backup to give me back some log space for now, but I was concerned when I saw the size of the .bak file.
When I view the media contents of the backup device, it shows one transaction log backup with a size of 192 GB.
What is up with this? I know in SQL 2000 the log backup files were never this big; they were about the size of the log itself.
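One thing worth ruling out: if the backup device is a pre-existing file, each BACKUP LOG appends a new backup set to it by default, so the .bak can hold several backups' worth of data. A quick check against the backup history (a diagnostic sketch):

-- Recent backups with their individual sizes; type 'L' = transaction log.
SELECT TOP (10)
       database_name,
       type,
       backup_size / 1024 / 1024 AS backup_mb,
       backup_start_date
FROM msdb.dbo.backupset
ORDER BY backup_start_date DESC;

Using WITH INIT (or a fresh file name per backup) keeps one backup per file.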
I installed SQL 2005 a while back. Then I recently found out my file system was FAT32 (I don't understand why the hardware people did this...) and I had to convert to NTFS. Naturally the SQL service no longer worked, so I uninstalled in order to reinstall. Now I can't reinstall it; I keep getting this message:
native_error=5039, msg=[Microsoft][SQL Native Client][SQL Server]MODIFY FILE failed. Specified size is less than current size.
I have one DB, test, with one .mdf and one .ldf file. The .mdf file size is 100 MB. For some reason I removed all the tables from that .mdf file and transferred them into a new secondary file, so all the tables have moved into the secondary file. Now I want to reduce the first .mdf file from 100 MB to 50 MB. Is that possible? It's showing 90 MB free. Please reply.
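Since the tables have already been moved to the secondary file, shrinking the primary file should work (a sketch; 'test' as the logical file name is an assumption, so check sys.database_files for the actual name first):

USE test;
-- Logical name of the primary data file, from sys.database_files.
DBCC SHRINKFILE (test, 50);  -- target size in MB

The primary file can never be emptied completely because it holds the system objects, but with 90 MB free, a 50 MB target should be reachable.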
We have an application with a replicated environment set up on SQL Server 2012. Users have a replica on their machines, and they replicate to the master database. It has 3 subscriptions subscribed to the publications on the master DB.
1) We set up a replica (which uses SQL Server 2012) on a machine with no SQL Server on it. After the initial synchronization (using the replmerge tool), the mdf file had grown to 33 GB and the ldf to 41 GB. In SQL Server Management Studio, I right-clicked and checked the properties of the local database: the overall size is around 84 GB with little free space available.
2) We set up a replica (which uses SQL Server 2012) on a machine with SQL Server 2008 on it. After the initial synchronization (using the replmerge tool), the mdf file had grown to 49 GB and the ldf to 41 GB. In SQL Server Management Studio, I right-clicked and checked the properties of the local database: the overall size is around 90 GB with 16 GB free space available.
3) We set up a replica (which uses SQL Server 2012) on a machine with SQL Server 2012 on it. We dropped the local database, recreated it, and did the initial synchronization using the replmerge tool. The mdf file grew to 49 GB and the ldf to 41 GB. In SQL Server Management Studio, I right-clicked and checked the properties of the local database: the overall size is around 90 GB with 16 GB free space available.
Why is it allocating the space differently? This is affecting our initial replica setup times.
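To compare the replicas like-for-like, it helps to look at allocated vs. actually used space per file rather than the overall properties number (a diagnostic sketch, run in the local database on each machine):

-- Allocated size vs. used space for every file in the current database.
SELECT name,
       type_desc,
       size * 8 / 1024 AS allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS used_mb
FROM sys.database_files;

If used_mb is similar everywhere and only allocated_mb differs, the difference is autogrowth history rather than data, and pre-sizing the files before the initial sync should even out the setup times.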
I need to write a process to get the file size in KB and the record count of a file. I was planning on writing a C# console app that takes the file path and name as parameters; however, should I use a CLR assembly instead?
I can't put a script task in the SSIS package that brings the file down, because it has been decided that we only use SSIS for file consumption.
What is the recommended size and file growth for a database and log file? We will be storing approx 10,000 records a day. Currently we have the following:
CREATE DATABASE Dummy
ON PRIMARY
( NAME = Dummy_data,
  FILENAME = 'D:\...\DATA\Dummy.mdf',
  SIZE = 250MB,
  FILEGROWTH = 25MB )
LOG ON
( NAME = Dummy_log,
  FILENAME = 'D:\...\DATA\Dummy_log.ldf',
  SIZE = 50MB,
  FILEGROWTH = 5MB );
GO
I have a database whose log file size is 4 times greater than the data file size, and it's continuously growing day by day. We recently faced a limited-disk issue because of it.
Is there any way to truncate the log file?
What is the impact on the DB if I truncate the log file?
Is there any way to prevent this file from continuously growing?
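The usual cause (an assumption here, worth verifying) is a database in FULL recovery with no transaction log backups, so the log can never reuse its space. A sketch of the check and the two standard remedies; YourDb is a placeholder:

-- 1) See why the log cannot be reused.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDb';

-- 2a) If you need point-in-time recovery: schedule regular log backups.
BACKUP LOG YourDb TO DISK = N'...\YourDb_log.trn';

-- 2b) If you do not: switch to SIMPLE so the log truncates on checkpoint.
ALTER DATABASE YourDb SET RECOVERY SIMPLE;

Truncation only frees space inside the file; DBCC SHRINKFILE is what actually reduces the file on disk afterwards. The impact of switching to SIMPLE is losing point-in-time restore; the data itself is unaffected.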
I'm trying to write a script that checks my database file and log sizes (in MB) and inserts them into a table. I need the following columns: dbid, dbname, compatibility_level, recovery_model, db_size_in_MB, log_size_in_MB. I tried to write this and got stuck:

select sysdb.database_id, sysdb.name, sysdb.compatibility_level,
       sysdb.recovery_model_desc, sysmaster.size
from sys.databases sysdb, sys.master_files sysmaster
where sysdb.database_id = sysmaster.database_id
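One way to finish it (a sketch): group per database and split data vs. log with conditional aggregation; size in sys.master_files is in 8 KB pages, hence the * 8 / 1024.

select d.database_id as dbid,
       d.name as dbname,
       d.compatibility_level,
       d.recovery_model_desc as recovery_model,
       sum(case when mf.type = 0 then mf.size end) * 8 / 1024 as db_size_in_MB,
       sum(case when mf.type = 1 then mf.size end) * 8 / 1024 as log_size_in_MB
from sys.databases d
join sys.master_files mf on d.database_id = mf.database_id
group by d.database_id, d.name, d.compatibility_level, d.recovery_model_desc;

Wrap it in an INSERT INTO your logging table to record the values.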
We have 2 SQL Server 2005 servers running the same build, 9.0.2047. When I back up any database from one server and attempt to restore it to the other, the log file generally increases a hundredfold. It errors out when I try to restore a 100 MB db and it attempts to create a 9.8 GB log file. This happens both when I use the GUI to restore and when I restore from a T-SQL script. What am I doing wrong?
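A restore recreates the files at the sizes recorded in the backup, not at the size of the backed-up data, so a 100 MB database whose log had once grown to 9.8 GB will try to recreate a 9.8 GB log file. You can confirm what the restore will create before running it (a sketch; the path is a placeholder):

-- Lists every file in the backup with the size it will be restored at.
RESTORE FILELISTONLY FROM DISK = N'...\YourDb.bak';

If the log size shown is huge, shrink the log on the source server before taking the backup.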
I've started researching Unit Testing, and I must admit I had never heard of Unit Testing until a couple of months ago. Obviously I am interested in Unit Testing stored procedures.

I read the TSQLUnit documentation (not all of it), and I also ran into a newsgroup post saying TSQLUnit is very small compared to NUnit. The conclusion I am drawing from this post is that I should rather spend time researching/reading about NUnit than TSQLUnit. Is that a good assessment?

I would like to know what you use, and whether you actually use Unit Testing or some other method. I ran into White Box/Black Box QA testing. All of this is new to me. Any good place to read about "Extreme Programming"? I ran into one link that I saved at work; that's one place I will read more.

Any links, documentation, or books you would suggest? I searched Amazon and didn't find anything interesting regarding SQL Server and stored procedures. Thank you.
I want to test my custom component with unit tests, and I thought I only had to instantiate the component to play around with it. But when I call the ProvideComponentProperties method, and within it the RemoveAllInputsOutputsAndCustomProperties method, a NullReferenceException is thrown. After debugging the test, I saw that the ComponentMetaData of the component is null. Is there a way to initialize the ComponentMetaData?
The code of the component looks like this:
[DtsPipelineComponent(
    DisplayName = "TestSourceAdapter",
    ComponentType = ComponentType.SourceAdapter,
    IconResource = "TestSourceAdapter.TestSourceAdapter.ico"
)]
public class TestSourceAdapter : PipelineComponent
{
    public override void ProvideComponentProperties()
    {
I have a set of revenue records where there is a UNITS column and a REVCHARGE column. What I need to do is break the records out into single records where the unit count is > 1 and calculate the actual charge per unit:
Ex:
Units  REVCHG  FIELD_A  FIELD_B ...
3      3.00    ABCD     EFGH
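A set-based sketch of the breakout, assuming a source table named RevenueRecords (the name is a placeholder) and that REVCHARGE should be split evenly across the units:

-- One output row per unit; a row-number "tally" drives the multiplication.
SELECT r.FIELD_A,
       r.FIELD_B,
       1 AS Units,
       r.REVCHARGE / r.Units AS RevCharge
FROM RevenueRecords AS r
JOIN (SELECT TOP (10000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
      FROM sys.all_objects a CROSS JOIN sys.all_objects b) AS tally
  ON tally.n <= r.Units;

For the example row this produces three records of 1.00 each; rounding may need attention when REVCHARGE does not divide evenly.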
I am developing automated .NET unit tests, and as a prerequisite of each test I would like to clear the Service Broker queues of any messages. Executing a RECEIVE * FROM <queue> statement appears to return only some of the messages at a time, not all of them as I expected. Any ideas on how to make this happen, short of dropping the queues and having to rebuild them?
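RECEIVE only pulls messages from one conversation group per call, so draining a queue takes a loop (a sketch; dbo.TargetQueue is a placeholder):

-- Keep receiving until the queue is empty; TIMEOUT avoids blocking forever.
DECLARE @drained TABLE (
    conversation_handle UNIQUEIDENTIFIER,
    message_type_name   NVARCHAR(256),
    message_body        VARBINARY(MAX));

WHILE 1 = 1
BEGIN
    WAITFOR (
        RECEIVE conversation_handle, message_type_name, message_body
        FROM dbo.TargetQueue
        INTO @drained
    ), TIMEOUT 1000;
    IF @@ROWCOUNT = 0 BREAK;
END;

Calling END CONVERSATION on the distinct handles afterwards cleans up the conversations as well.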
We were asked to create a SQL function to return a unit price based on various criteria. The function works fine except for the tiered pricing (use of BillingPriceTable) calculation. What we need to do is break up the total quantity passed to the function and return the total of the prices found. In our example, we passed a quantity of 9,721 units and need to return a total price of 231.92 using the table below.
Low Qty    High Qty    Fee      Actual Qty    Price
0          7500        0.025    7500          187.50
7501       15000       0.020    2221          44.42
Below is the table definition that we have to work with (ugghh).
What we have so far is shown below. The columns that start with bdxx are the "High Qty" values, and the columns that start with prxx are the prices for those quantity ranges. The current SELECT returns the fee based on the entire quantity of 9,721, giving a unit price of 0.020, when it should return 0.023857628: it applies one fee to the total rather than calculating the fee twice, once for 0-7500 and again for 7501-15000 (actually 7501-9721). Two things came to mind: one was a WHILE loop, and the other was possibly a ranking function of some sort.
ALTER FUNCTION [dbo].[fn_GetPrice]
(
    @plincd   varchar(3),
    @pgrpcode varchar(4),
    @pitmcode varchar(4),
    @qty      decimal(10,1) = 1,
    @corpnbr  varchar(9)
)
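The tier math itself can be done set-based, without a WHILE loop. A sketch of just the calculation, assuming the bdxx/prxx columns have first been unpivoted into rows of (LowQty, HighQty, Fee); the temp-table and column names are illustrative:

CREATE TABLE #Tiers (LowQty int, HighQty int, Fee decimal(9,6));
INSERT INTO #Tiers VALUES (0, 7500, 0.025);
INSERT INTO #Tiers VALUES (7501, 15000, 0.020);

DECLARE @qty decimal(10,1);
SET @qty = 9721;

-- Charge each tier only for the units that fall inside it.
SELECT SUM((CASE WHEN @qty < t.HighQty THEN @qty ELSE t.HighQty END
            - CASE WHEN t.LowQty = 0 THEN 0 ELSE t.LowQty - 1 END) * t.Fee) AS TotalPrice
FROM #Tiers t
WHERE @qty >= CASE WHEN t.LowQty = 0 THEN 1 ELSE t.LowQty END;

For 9,721 units this yields 7,500 * 0.025 + 2,221 * 0.020 = 231.92, and dividing by @qty gives the blended unit price 0.023857628 mentioned above.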
First of all, I would like to thank everyone for their time and effort on this web page. I am new to the field of DBA, and I have some unclear points that I would like anyone to clear up for me. Why does the transaction log file size not decrease after truncation of the log? Is there anything I have to do, or is this the normal behavior?
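This is normal: truncation marks space inside the log as reusable but never returns it to the operating system, so the file on disk keeps its size. A sketch of how to see and, if needed, reclaim it (the log's logical name is a placeholder):

-- Percentage of each database's log actually in use.
DBCC SQLPERF (LOGSPACE);

-- Physically reduce the file after truncation has freed space inside it.
DBCC SHRINKFILE (YourDb_log, 100);  -- target size in MB

Routinely shrinking the log is discouraged, though, since it will usually just grow again.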
I am currently trying to get file sizes and insert them into a table. The table already has the path to the actual file, so it's just a matter of using that path and getting the size.
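One T-SQL-only approach (a sketch with several assumptions: xp_cmdshell is enabled, and the table is dbo.FileList with columns FilePath and SizeKB, all placeholders):

-- Ask cmd.exe for each file's size in bytes and store it as KB.
CREATE TABLE #out (line nvarchar(4000));
DECLARE @path nvarchar(260), @cmd nvarchar(500);

DECLARE c CURSOR LOCAL FAST_FORWARD FOR
    SELECT FilePath FROM dbo.FileList;
OPEN c;
FETCH NEXT FROM c INTO @path;
WHILE @@FETCH_STATUS = 0
BEGIN
    DELETE #out;
    SET @cmd = N'for %I in ("' + @path + N'") do @echo %~zI';
    INSERT INTO #out EXEC master..xp_cmdshell @cmd;

    UPDATE dbo.FileList
    SET SizeKB = (SELECT TOP (1) CONVERT(bigint, line) / 1024
                  FROM #out WHERE ISNUMERIC(line) = 1)
    WHERE FilePath = @path;

    FETCH NEXT FROM c INTO @path;
END;
CLOSE c;
DEALLOCATE c;

A CLR function or an external C# job works just as well; this is only for staying inside T-SQL.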