Msdbdat.mdf Size Increased
Jun 3, 2008
How to decrease msdbdat.mdf size? This is a bit urgent, and it would be great if anyone could reply on this.
Gaurav Arora
Hello... Can someone help me please? I had an mdf file of 48 MB, and suddenly its size increased to 690 MB (without any action or movement of data). Why could this happen? Is there any solution for that? Thank you.
Tony
Here are my scenarios:
We have an application with a replicated environment set up on SQL Server 2012. Users have a replica on their machines, and they replicate to the master database. It has 3 subscriptions subscribed to the publications on the master db.
1) We set up a replica (which uses SQL Server 2012) on a machine with no SQL Server on it. After the initial synchronization (using the replmerge tool), the mdf file grew to 33 GB and the ldf to 41 GB. I went to SQL Server Management Studio, right-clicked the local database, and checked its properties: the overall size is around 84 GB with little free space available.
2) We set up a replica (which uses SQL Server 2012) on a machine with SQL Server 2008 on it. After the initial synchronization (using the replmerge tool), the mdf file grew to 49 GB and the ldf to 41 GB. I went to SQL Server Management Studio, right-clicked the local database, and checked its properties: the overall size is around 90 GB with 16 GB free space available.
3) We set up a replica (which uses SQL Server 2012) on a machine with SQL Server 2012 on it. We dropped the local database, recreated it, and did the initial synchronization using the replmerge tool. The mdf file grew to 49 GB and the ldf to 41 GB. I went to SQL Server Management Studio, right-clicked the local database, and checked its properties: the overall size is around 90 GB with 16 GB free space available.
Why is it allocating the space differently? This is affecting our initial replica setup times.
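As a starting point for comparing the replicas, here is a minimal sketch (assuming it is run inside each local database on SQL Server 2012) that shows allocated versus actually used space per file:

-- Allocated size vs. space actually used, per file of the current database
SELECT name,
       type_desc,
       size * 8 / 1024 AS allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS used_mb
FROM sys.database_files;

If used_mb is similar on all three machines but allocated_mb differs, the difference is likely pre-sizing and autogrowth settings (for example, growth increments inherited from the model database) rather than the replicated data itself.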
Database size increased from 600 GB to 1000 GB in 4 hours. There was no process running. I am not able to understand why the database size would increase in 4 hours.
Does anyone know how I should troubleshoot this?
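One hedged way to see exactly what grew and when: the default trace (enabled out of the box) records every data-file and log-file auto-grow event. A minimal sketch:

-- Read auto-grow events from the default trace
DECLARE @trace_path nvarchar(260);

SELECT @trace_path = path
FROM sys.traces
WHERE is_default = 1;

SELECT StartTime,
       DatabaseName,
       [FileName],
       (IntegerData * 8) / 1024 AS grow_mb,   -- IntegerData is the growth in 8 KB pages
       Duration / 1000 AS duration_ms
FROM sys.fn_trace_gettable(@trace_path, DEFAULT)
WHERE EventClass IN (92, 93)                  -- 92 = Data File Auto Grow, 93 = Log File Auto Grow
ORDER BY StartTime;

If the growth turns out to be the log file rather than the data file, a long-running open transaction or a missed log backup is the usual suspect.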
Does anyone know what could be making the size of a backup get so big in SQL Server?
I noticed that if you back up the same database in Enterprise Manager over and over (without making any changes to the database), the size of the backup gets bigger and bigger. To get around this I simply erase the backup and create a new one.
Now I'm experiencing the same kind of problem, different situation. I decided to make very few changes to my database. If anything, I shrunk the size of the tables and stored procedures.... Now all of a sudden my database backup is 7 times larger.
What could be increasing the size so much, if I haven't increased the amount of tables or stored procedures?
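One common cause, offered as a hedged guess: Enterprise Manager backs up to a backup device, and by default each new backup is appended to the existing file, so the .bak keeps growing even when the database does not change. A minimal sketch (the database name and path are placeholders) that overwrites the device instead of appending:

-- WITH INIT overwrites the existing backup sets in the file instead of appending another one
BACKUP DATABASE MyDatabase
TO DISK = N'C:\Backups\MyDatabase.bak'
WITH INIT;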
What is the log file about? Mine is huge. Is there a way to reset it or clear it?
any help would be great.
Thank you,
Alec
Hi,
I was running out of space and thus deleted some rows from a table. To my surprise the db size increased. I then shrunk it to bring it back to what it was earlier.
When I deleted some 5000 rows, some space must have been released. Where did the space go, and why did the db size increase after deleting the records?
I thought it might be the log file, but the db is set to Simple Recovery, which does not utilize a log file.
Any reasons?
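A hedged note: even under the Simple recovery model the database still has a transaction log file, and a large delete is fully logged while it runs, so the files can grow before the freed space inside them is reflected anywhere. A minimal sketch to see where the space sits:

-- Refresh the space accounting, then report reserved vs. unallocated space for the current database
EXEC sp_spaceused @updateusage = N'TRUE';

The unallocated space reported here is room inside the files that the delete freed; it is only returned to the operating system when the files are shrunk.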
Hi,
I want to set up one of the fields in a table so it increments sequentially (int data type), i.e. the first record should be record 1, the second one should be 2, and so on. This field will also be the key field. I am new to SQL and don't know how to do this. I am using SQL Server 2000. Thanks for the help in advance.
-S
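A minimal sketch of the usual answer, using an IDENTITY column as the primary key (the table and column names are made up for illustration):

-- IDENTITY(1,1) makes SQL Server assign 1, 2, 3, ... automatically on each insert
CREATE TABLE Orders
(
    OrderID      int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    CustomerName varchar(50) NOT NULL
)

-- Do not supply a value for OrderID; SQL Server fills it in
INSERT INTO Orders (CustomerName) VALUES ('First customer')
INSERT INTO Orders (CustomerName) VALUES ('Second customer')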
Hi, recently we increased our internal memory from 2 to 3.5 GB. Nevertheless, SQL Server does not seem to use this extra memory. Do we have to do something extra?
Arno de Jong, The Netherlands
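A hedged note, assuming this is a 32-bit SQL Server: a user-mode process only addresses about 2 GB by default, so memory above that is ignored until AWE is enabled (plus /PAE or /3GB on the Windows side, as appropriate) and max server memory is raised. A minimal sketch of the server-side settings:

-- Expose the advanced options
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
-- Let SQL Server use memory beyond the 2 GB user-mode limit (32-bit only; takes effect after a service restart)
EXEC sp_configure 'awe enabled', 1
-- Value is in MB
EXEC sp_configure 'max server memory (MB)', 3072
RECONFIGURE
GO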
Yesterday night I rebuilt all indexes on a table since they had a high percentage of fragmentation. After rebuilding, I immediately checked the fragmentation percentage; it was reduced and I felt happy. However, when I checked the fragmentation again after 6 hours, it had increased from 1% to 48%. How do I find what caused the increase, and how do I fix it?
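To see which indexes are fragmenting and how large they really are (fragmentation percentages on tiny indexes are not meaningful), here is a minimal sketch using the physical-stats DMV, assuming SQL Server 2005 or later and a placeholder table name:

-- Fragmentation and size per index for one table in the current database
SELECT i.name AS index_name,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id
 AND i.index_id = ps.index_id
ORDER BY ps.avg_fragmentation_in_percent DESC;

If a large index genuinely refragments that fast, the table is probably taking heavy inserts or updates on keys that are not ever-increasing; rebuilding with a lower FILLFACTOR can slow the churn.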
I have a field Count that I want to be automatically increased by 1 whenever a SELECT statement is called. I came up with the following SQL query, and it works in SQL Query Analyzer:
declare @count int
select @count = count+1 from pictures_posts where itemtype='3'
update pictures_posts set count=@count where itemtype='3'
select postID, color, count from pictures_posts where itemtype='3'
go
But I am thinking there must be a much better way to do this. Can you guys help?
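A hedged alternative, assuming SQL Server 2005 or later: do the increment and the read in one atomic UPDATE with an OUTPUT clause, which avoids the window between the SELECT and the UPDATE when two sessions run at the same time:

-- Increment the counter and return the updated row(s) in a single atomic statement
UPDATE pictures_posts
SET [count] = [count] + 1
OUTPUT inserted.postID, inserted.color, inserted.[count]
WHERE itemtype = '3';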
I have added a maintenance plan to rebuild indexes in SQL Server 2014, and the log file has grown by a tremendous amount of space.
Why do I see absolutely no performance improvement when I spread my primary filegroup over 8 separate files on 8 separate disks, as opposed to having the primary filegroup all in one file on one disk?
I have set up 2 identical databases, one spread over 8 disks and one on one disk. Each database has a table called DATA and a column called VALUE. Value is NVARCHAR(200). I have filled each table up in both databases with 20,000 rows.
I then perform a select on each table in each database using CHECKPOINT and DBCC DROPCLEANBUFFERS to ensure I am reading from disk before each query and the execution times are identical in both databases.
I then ran the same queries against each database using a load testing tool and the batch requests per second on each DB is identical under load.
Surely the database with data spread over 8 disks should be FAR faster than the single file database as you have the combined reading power of 8 disks as opposed to 2??
Also, the same is happening for write speeds. When I create the data on both databases, the time it takes is identical on both.
BOL says it should be faster with multiple disks.
Just FYI this is on an Azure virtual machine and each disk is a locally redundant data disk that I have attached to the virtual machine.
Should write speeds increase with multiple disks, or just read speeds?
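A hedged way to check what is really happening is the per-file I/O statistics DMV: if almost all reads hit one file, the data was never spread across the files; if the reads are spread evenly but timings are still identical, the bottleneck is elsewhere (on an Azure VM, often the VM-level IOPS/throughput cap rather than the individual data disks). A minimal sketch:

-- Cumulative reads, writes, and stall times per database file since the instance started
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name AS logical_file,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY vfs.num_of_reads DESC;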
I am trying to resize a database's initial log file from 500 MB to 2 MB. I'm using:
ALTER DATABASE <DBNAME> MODIFY FILE ( NAME = <DBLOGFILENAME>, SIZE = 2 )
And I'm getting "MODIFY FILE failed. Specified size is less than current size." I tried going into the database properties and setting the log file to 2 MB, but it doesn't keep the changes.
Any help with this process?
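A hedged sketch of the usual workaround: ALTER DATABASE ... MODIFY FILE can only grow a file, so to go below the current size you shrink the log file instead (the logical file name below is a placeholder; find the real one with sp_helpfile):

USE MyDatabase;
GO
-- Target size is in MB
DBCC SHRINKFILE (MyDatabase_log, 2);
GO

If the shrink does not release the space, the active tail of the log is in the way; taking a log backup (or, under the Simple recovery model, waiting for a checkpoint) and shrinking again usually does it.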
I installed SQL 2005 a while back. Then I recently found out my file system was FAT32 (I don't understand why the hardware people did this...) and I had to convert to NTFS. Naturally the SQL service no longer worked, so I uninstalled in order to reinstall. Now I can't reinstall it; I keep getting this message:
native_error=5039, msg=[Microsoft][SQL Native Client][SQL Server]MODIFY FILE failed. Specified size is less than current size.
I'll try to post the full log in a new post.
Hi folks,
Can anyone enlighten me here? I'm trying to use a SPROC which, when supplied with an int, looks up the table and returns certain columns from it. I'm using a SqlCommand; here's my codebehind:

SqlCommand dataSource = new SqlCommand("retrieveData", new SqlConnection(dbConnString));
dataSource.CommandType = CommandType.StoredProcedure;
dataSource.Parameters.AddWithValue("id", poid);
dataSource.Parameters.AddWithValue("title", title).Direction = ParameterDirection.Output;
dataSource.Parameters.AddWithValue("creator", creator).Direction = ParameterDirection.Output;
dataSource.Parameters.AddWithValue("assignee", assignee).Direction = ParameterDirection.Output;
etc, etc...

And the SPROC:

set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[retrieveData]
    @id int,
    @title varchar(50) OUTPUT,
    @creator varchar(50) OUTPUT,
    @assignee varchar(50) OUTPUT,
    @contact varchar(50) OUTPUT,
    @deliveryCost numeric(18,2) OUTPUT,
    @totalCost numeric(18,2) OUTPUT,
    @status tinyint OUTPUT,
    @project smallint OUTPUT,
    @supplier smallint OUTPUT,
    @creationDateTime datetime OUTPUT,
    @amendedDateTime datetime OUTPUT,
    @locked bit OUTPUT
AS
    /** SET NOCOUNT ON; **/
    SELECT [title] AS [@title],
           [datetime] AS [@creationDateTime],
           [creator] AS [@creator],
           [assignee] AS [@assignee],
           [supplier] AS [@supplier],
           [contact] AS [@contact],
           [delivery_cost] AS [@deliveryCost],
           [total_cost] AS [@totalCost],
           [amended_timestamp] AS [@amendedDateTime],
           [locked] AS [@locked]
    FROM purchase_orders
    WHERE [id] = @id;

The id being passed in is definitely not null, and is set to a value of an item I know exists. The resulting error is:
Exception Details: System.InvalidOperationException: String[1]: the Size property has an invalid size of 0.
Line 63: retrievePODetails.Connection.Open();
Line 64: retrievePODetails.ExecuteNonQuery();
[InvalidOperationException: String[1]: the Size property has an invalid size of 0.]
System.Data.SqlClient.SqlParameter.Validate(Int32 index) +717091
...

Can anyone see anything I'm missing?
Thanks,
Ally
Using C#, SQL Server 2005, ASP.NET 2, in a web app, I've tried removing the size from parameters of type NCHAR, NVARCHAR, and VARCHAR. I'd rather just send a string and let the size of the parameter in the SP truncate any extra chars if need be. I began getting the error below, and eventually realized it happened only with output parameters, as in the code snippet below.

String[3]: the Size property has an invalid size of 0.

par = new SqlParameter("@BusinessEntity", SqlDbType.NVarChar);
par.Direction = ParameterDirection.Output;
cmd.Parameters.Add(par);
cmd.ExecuteNonQuery();

What's the logic behind this? Is there any way around it other than either finding out what the size should be, or assigning a size larger than would ever be needed?
Thanks
Mike Thomas
I have one db, test, with one .mdf and one .ldf file. The mdf file size is 100 MB, and for some reason I removed all the tables from that .mdf file and transferred them into a new secondary file, so all the tables moved into the secondary file. Now I want to reduce the first .mdf file from 100 MB to 50 MB. Is that possible? It's showing 90 MB is free. Please reply.
I want to know the size of encrypted data so that I can design the database field size.
For example, for cardnumber varchar(20) encrypted by Triple DES with a passphrase, how much space does the field storing the encrypted data need?
I think the size does not depend on the passphrase's character length.
Regards,
Yoshihiro Kawabata
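A hedged way to answer this empirically rather than by formula, assuming SQL Server 2005 or later where EncryptByPassPhrase is available: encrypt a worst-case value and measure the result with DATALENGTH. The ciphertext is block-padded and carries a header, so it is noticeably larger than the plaintext:

-- Measure how many bytes an encrypted 20-character card number actually occupies
DECLARE @cipher varbinary(8000);

SET @cipher = EncryptByPassPhrase(N'sample passphrase', '12345678901234567890');

SELECT DATALENGTH(@cipher) AS encrypted_bytes;

Size the varbinary column from that measurement with some headroom; as suspected above, the passphrase length itself does not change the ciphertext size.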
I am getting an error when running a stored procedure using the ExecuteNonQuery method. The stored procedure has an OUTPUT parameter.
The ExecuteNonQuery statement is called using SqlHelper.
Error : String[18]: the Size property has an invalid size of 0
Just wanted to know what a general rule of thumb is when determining log file space against a database's data file.

We allow the data file for our database to grow by 10%, unlimited. We do not allow our log file to autogrow due to a specific and poorly written process (which we are in a three-month process of removing) that can balloon the log file size.

Should it be 10% of the data file, i.e. if the data file size is 800 MB, the log file should be 80 MB? I realize there are a myriad of factors that go into file size, but a general starting point would be nice.

Thanks
Jeff
--Message posted via http://www.sqlmonster.com
The Tabular model is showing 19 GB on disk, but it is acquiring around 40 GB in memory.
Hi,
An MSSQL DB running SAP indicates a smaller DB size (MMC & SAP) than the actual physical size. The difference is about 8 GB.
A lot of records were deleted before this. Did they remain in the DB as NULL values or something?
Does anyone know what the reason for this could be? And how to clean this up?
Thanks in advance,
Paul
Hi,
I use this script, which shows me the size of each table and sums up all the table sizes.
SELECT
X.[name],
REPLACE(CONVERT(varchar, CONVERT(money, X.[rows]), 1), '.00', '') AS [rows],
REPLACE(CONVERT(varchar, CONVERT(money, X.[reserved]), 1), '.00', '') AS [reserved],
REPLACE(CONVERT(varchar, CONVERT(money, X.[data]), 1), '.00', '') AS [data],
REPLACE(CONVERT(varchar, CONVERT(money, X.[index_size]), 1), '.00', '') AS [index_size],
REPLACE(CONVERT(varchar, CONVERT(money, X.[unused]), 1), '.00', '') AS [unused]
FROM
(SELECT
CAST(object_name(id) AS varchar(50)) AS [name],
SUM(CASE WHEN indid < 2 THEN CONVERT(bigint, [rows]) END) AS [rows],
SUM(CONVERT(bigint, reserved)) * 8 AS reserved,
SUM(CONVERT(bigint, dpages)) * 8 AS data,
SUM(CONVERT(bigint, used) - CONVERT(bigint, dpages)) * 8 AS index_size,
SUM(CONVERT(bigint, reserved) - CONVERT(bigint, used)) * 8 AS unused
FROM sysindexes WITH (NOLOCK)
WHERE sysindexes.indid IN (0, 1, 255)
AND sysindexes.id > 100
AND object_name(sysindexes.id) <> 'dtproperties'
GROUP BY sysindexes.id WITH ROLLUP) AS X
ORDER BY X.[name]
The problem is that the sum of all the table sizes does not match the size I get when I make a full database backup.
For example, when I run this query against my database I see a sum of 111,899 KB, which is about 111 MB, but when I do a full backup of that database, the size of the full backup is 1.5 GB. Why is that, and where does this size come from?
THX
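A hedged guess at the discrepancy: the script only sums user tables in the current database, while a full backup contains every used page (system tables, indexes, other objects) plus enough transaction log to make the restore consistent, and if the backup is written to an existing file without WITH INIT it also accumulates the older backup sets. A minimal sketch (the path is a placeholder) to see how many backup sets the 1.5 GB file actually holds:

-- Lists every backup set stored in the device; more than one row means old backups are being appended
RESTORE HEADERONLY
FROM DISK = N'C:\Backups\MyDatabase.bak';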
Is it possible to find a table's size, and the size of each row in that table?
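A hedged sketch of one way to get both, assuming a table named dbo.MyTable with placeholder column names: sp_spaceused gives the table-level numbers, and summing DATALENGTH over the columns approximates the size of each individual row:

-- Table-level size: rows, reserved, data, index, unused
EXEC sp_spaceused 'dbo.MyTable';

-- Approximate per-row size in bytes (list a DATALENGTH term for every column in the table)
SELECT id,
       ISNULL(DATALENGTH(col1), 0)
     + ISNULL(DATALENGTH(col2), 0)
     + ISNULL(DATALENGTH(col3), 0) AS approx_row_bytes
FROM dbo.MyTable;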
Hi, I have a problem importing data from SQL Server 2000 'text' columns to SQL Server 2005 nvarchar(max) columns. I get the following error when encountering a transfer of any column that matches the above.
The error is copied below,
Any help on this greatly appreciated...
ERROR : errorCode=-1071636471 description=An OLE DB error has occurred. Error code: 0x80004005.An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Unicode data is odd byte size for column 3. Should be even byte size.". helpFile=dtsmsg.rll helpContext=0 idofInterfaceWithError={8BDFE893-E9D8-4D23-9739-DA807BCDC2AC} (Microsoft.SqlServer.DtsTransferProvider)
Many thanks
Hi,
I am using

exec sp_helpdb
go
dbcc sqlperf(logspace)

to get the database size and log size. Does this give the correct database size and log size, or is there any other way to get the log size and database size by means of Query Analyzer?
Thanks in Advance.
Seenu. S
I have a SQL Server database that is showing 2853.44 MB in size, but when I export the data into MS Access the size is less than 1 MB. Can anyone tell me how to reduce the size of my SQL Server database so that it's less than 15 MB? Thanks in advance! Rob
Why is it that lately our full backup (.bak) file size is more than 2x the size of the actual database file (.mdf)?
I have a log file that is approximately 50 GB. I backed up just the log, and the file size of the .bak is 192 GB. Why is this? Shouldn't it be closer to 50 GB?
Normally I wouldn't let the log grow this much. But we are in the process of getting a new server up and running and don't have backups going yet. They are working on getting that up and running this week.
So I did a log backup to give me back some log space for now, but I was concerned when I saw the size of the .bak file.
When I view the media contents of the backup device, it shows one transaction log backup with a size of 192 GB.
What is up with this? I know in SQL 2000 the log backup files were never this big; they were about the size of the log itself.
Any ideas?
Stacy
Hi!
I'm using SQL Server 2000!
How can I find the data file size and log file size with stored procedures or a SQL query?
Thanks!
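A hedged sketch for SQL Server 2000: sp_helpfile lists the files of the current database, and the sysfiles system table gives the sizes in 8 KB pages, which convert easily to MB:

-- File list with usage (data vs. log) for the current database
EXEC sp_helpfile;

-- Same information as a query, converted to MB
SELECT name,
       filename,
       size * 8 / 1024 AS size_mb
FROM dbo.sysfiles;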
Hi all,
I need your help in fixing a SQL error. I have a SQL table which has more than 50 columns. 5 of them should hold 1000+ characters of text. I allocated nvarchar(2000) for each of those fields. But I am getting an error when I try to insert the text from a C#.NET application.
The error message is:
System.Data.SqlClient.SqlException: Cannot create a row of size 8739 which is greater than the allowable maximum of 8060. The statement has been terminated. at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream) at System.Data.SqlClient.SqlCommand.ExecuteNonQuery() at UsefulFunctions.DataBase.ExecuteSql(String Sql, ArrayList arrParam) in D:\Projects\UsefulFunctions\Class1.cs:line 395 at racf2.RacfTicketClose.Btn_Close_Click(Object sender, EventArgs e) in d:documents and settingswwwwwwwwwwwwwwinhousesecurity\racf2\racfticketclose.aspx.cs:line 167
But when I copied the complete row (from the table) into an MS Word file and checked the character count, it's not 8739; it's only 3457.
Do you guys have any idea why I am getting this error and how to fix it?
Thanks in advance.
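A hedged note on the arithmetic: nvarchar stores 2 bytes per character, so 3,457 characters is already about 6,914 bytes before the other 45+ columns are counted, which is how the row reaches 8,739 bytes against the 8,060-byte in-row limit. A tiny sketch of the 2-bytes-per-character point:

-- DATALENGTH returns bytes, not characters: this returns 4 for a 2-character nvarchar value
SELECT DATALENGTH(N'ab') AS bytes_for_two_chars;

If this is SQL Server 2000, the usual workaround is the text/ntext types for the wide columns; on SQL Server 2005 and later, nvarchar(max) (or row-overflow storage for ordinary nvarchar columns) avoids the error.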
Are there any tricks that will allow you to get around the 8060-byte row size limit?
Hi
What are the advantages of specifying a big size for a DB when I create it, taking into consideration that my DB will grow and reach that size in one year? For example, if I know my DB size in 1 year will be 10 GB, but at the time of creating it, it is about 500 MB, is there a big advantage to allocating 10 GB at the beginning?
Thanks
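A hedged sketch of pre-sizing at creation time (the names, paths, and sizes below are placeholders): pre-allocating avoids repeated autogrow pauses and the file fragmentation they cause, at the cost of holding disk you will not use for a while; a common middle ground is a moderate initial size with fixed-MB growth increments instead of percentage growth:

-- Create the database with room to grow and fixed growth increments
CREATE DATABASE MyDatabase
ON PRIMARY
(
    NAME = MyDatabase_data,
    FILENAME = N'D:\Data\MyDatabase.mdf',
    SIZE = 2GB,
    FILEGROWTH = 512MB
)
LOG ON
(
    NAME = MyDatabase_log,
    FILENAME = N'E:\Logs\MyDatabase.ldf',
    SIZE = 512MB,
    FILEGROWTH = 256MB
);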