Question About Log File Size When Altering a Huge Table
Oct 31, 2005
I have the following question, and I would like to hear what you think about it, and whether there is a better solution for "my problem".
Here it is: I have a huge table with 60GB of data (image files). The problem happens every time I try to ALTER the structure of the table. For example, I change a field from char(3) to char(4). SQL Server then performs the ALTER TABLE command, which must work something like "insert into a new table + drop the current table", so I need about 60GB of space for my LOG file, and the operation takes hours to complete.
Is this the only way to alter a single field in my table?
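One commonly suggested alternative to a single giant ALTER is to make the change yourself in small batches, so the log can be cleared between steps. A rough sketch, with placeholder table and column names (MyImages, MyCode) standing in for the real ones:

-- Add the new column (cheap, since it allows NULL), then copy the data
-- over in small batches; back up or truncate the log between batches.
ALTER TABLE MyImages ADD MyCode_new char(4) NULL

SET ROWCOUNT 10000          -- SQL 7/2000-era batching
WHILE 1 = 1
BEGIN
    UPDATE MyImages
    SET MyCode_new = MyCode
    WHERE MyCode_new IS NULL
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0

-- Finally swap the columns:
ALTER TABLE MyImages DROP COLUMN MyCode
EXEC sp_rename 'MyImages.MyCode_new', 'MyCode', 'COLUMN'

Each batch is its own small transaction, so the log never has to hold the whole 60GB rewrite at once.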
I need to alter a table (expand the column size from varchar(10) to varchar(255)) and the table has 200 million rows. Please suggest the best and fastest method to achieve this. The database is on SQL 7.0.
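Widening a variable-length column is usually much cheaper than the char(3)-to-char(4) case above, because every existing row already fits within the new maximum; on later versions this is documented as a metadata-only change, and it is worth verifying on a copy under SQL 7.0. Note that ALTER COLUMN should restate the nullability, or the column may come out nullable. MyBigTable and MyCol are placeholders:

ALTER TABLE MyBigTable ALTER COLUMN MyCol varchar(255) NOT NULL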
Hi, my DB size (right-click on the DB name, Data Files tab, Space Allocated field) was 10914 MB.
I deleted a huge table (1.2 million records * 15 columns) and checked the DB size again. It didn't change. Shouldn't it decrease because I deleted a huge table?
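Dropping or deleting a table frees pages inside the data file, but the file itself keeps its allocated size (which is what the Space Allocated field shows) until it is explicitly shrunk. A sketch, assuming the database is named MyDb:

DBCC SHRINKDATABASE (MyDb, TRUNCATEONLY)  -- release unused space at the end of each file
-- or target a single file by its logical name (see sp_helpfile):
DBCC SHRINKFILE (MyDb_Data, 5000)         -- shrink toward ~5000 MB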
I have transactional replication set up on two environments. On one server the distribution database is tiny, but on the second server the distribution database is 5 times bigger and taking up a lot of space, even though both environments have almost the same amount of data.
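One common cause is different retention settings: the distribution database keeps replicated commands and history for the whole retention window, so a longer window means a much bigger database. A sketch for comparing and lowering retention (the 48-hour value is just an example):

-- Run on each distributor to compare settings:
EXEC sp_helpdistributiondb

-- Lower the maximum distribution retention (in hours) on the oversized one:
EXEC sp_changedistributiondb @database = 'distribution',
     @property = 'max_distretention', @value = '48'

The distribution cleanup agent job then removes commands older than the window on its next run.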
Has anyone implemented splitting data for an application between two databases because the data size is extremely large? If so, could you please point me to relevant information? In this split-data scenario, a table would automatically carry over to another database whenever the size limit for the current database is reached. The challenge is for the DAL (data access layer) to automatically look in the appropriate database when the next row of data is in another database. Or perhaps there is another solution to this terabyte-scale data problem. Any help on this would be greatly appreciated.
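One established pattern from the SQL 2000/2005 era is a partitioned view: each database holds a member table covering a range of the partitioning key, and a UNION ALL view gives the DAL a single name to query. A sketch with made-up names (Db1/Db2, Orders):

-- Each member table carries a CHECK constraint on the partitioning column,
-- e.g. CHECK (OrderID <= 10000000) in Db1 and CHECK (OrderID > 10000000)
-- in Db2, so the optimizer only touches the relevant member table:
CREATE VIEW dbo.Orders
AS
SELECT * FROM Db1.dbo.Orders_1
UNION ALL
SELECT * FROM Db2.dbo.Orders_2

On SQL 2005, native table partitioning inside a single database may be the simpler option if the limit being hit is a drive rather than the database itself.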
I am in the process of designing a SQL 2005 database with tables that may hold several hundreds of millions of rows.
Due to various constraints, I am trying to save as much space as possible by optimizing the size of a row. Currently one row contains the following columns:
byte(4), byte(3), byte(3) = 10 bytes.
Adding 4 bytes for the row header, plus 3 bytes for the NULL bitmap (as described in BOL), I end up with 17 bytes/row. In a real-world test it was an average of 18.3 bytes/row.
There are no indexes, no primary key, and all columns are NOT NULL. Hence my question: since no column allows a NULL value, is there a way to "remove" the 3-byte NULL bitmap?
Is there any other way to shave off one or two more bytes, e.g. by using a clustered index?
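For reference, the arithmetic behind the 17 bytes, per the data row format described in BOL:

  4 bytes   row header (two status bytes + 2-byte fixed-length-data offset)
 10 bytes   fixed-length column data (4 + 3 + 3)
  2 bytes   number-of-columns field
  1 byte    NULL bitmap (1 bit per column, rounded up to a whole byte)
 --------
 17 bytes   minimum per row

As I understand the format, the column count and bitmap are fixed parts of every data row, so the "3 bytes" cannot be removed even when no column is nullable (and only 1 of those 3 bytes is the bitmap itself). A clustered index would add key overhead at the index levels rather than save anything; the gap to the measured 18.3 bytes/row is likely the 2-byte row offset entry each row also consumes in the page's slot array.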
I am trying to resize a database's initial log file from 500M to 2M. I'm using:
ALTER DATABASE <DBNAME> MODIFY FILE ( NAME = <DBLOGFILENAME>, SIZE = 2 )
And I'm getting "MODIFY FILE failed. Specified size is less than current size." I tried going into the database properties and setting the log file to 2M, but it doesn't keep the changes.
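MODIFY FILE can only set a size larger than the current one, which is exactly what the error says; making a file smaller is a shrink operation instead. A sketch, assuming the database is MyDb and the log's logical name is MyDbLog (check with sp_helpfile):

USE MyDb
DBCC SHRINKFILE (MyDbLog, 2)   -- target size in MB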
Hi people, I'm trying to alter an integer field to a decimal(12,4) field in MS Access 2000. Example:
table: item_nota_fiscal_forn_setor_publico
field: qtd_mercadoria integer NOT NULL
ALTER TABLE item_nota_fiscal_forn_setor_publico
ALTER COLUMN qtd_mercadoria decimal(12,4) NOT NULL
But it doesn't work; a syntax error rises. I need to change that field from a Visual Basic application, dynamically. How can I do it? How can I create a decimal(12,4) field via script in MS Access? Thanks, Euler Almeida
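The likely issue is the SQL dialect: DECIMAL is only recognized by the Jet 4.0 ANSI-92 dialect, which statements get when they run through an ADO connection (e.g. CurrentProject.Connection.Execute from Access VBA, or an ADODB.Connection from VB), not through DAO or the Access query grid. The statement itself should then work as written:

ALTER TABLE item_nota_fiscal_forn_setor_publico
ALTER COLUMN qtd_mercadoria DECIMAL(12,4) NOT NULL
-- run via ADO; DAO and the query UI reject DECIMAL with a syntax error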
We have 300+ databases on one single server. If I need to change the log size to "unlimited" for all of them, is there any way to do so? Please advise. -Julie
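There is no single switch for all databases, but the ALTER statements can be generated from the system tables and then executed. A sketch for SQL 2000-era systems (sysaltfiles; on 2005+ sys.master_files is the equivalent), assuming status bit 0x40 marks log files as documented for sysfiles:

-- Generate one statement per log file; review the output, then run it:
SELECT 'ALTER DATABASE [' + DB_NAME(dbid) + '] MODIFY FILE (NAME = '''
       + RTRIM(name) + ''', MAXSIZE = UNLIMITED)'
FROM master..sysaltfiles
WHERE status & 0x40 <> 0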
I recently saw that the transaction log files of user DBs grow indefinitely in SQL Server 2000 - one of our customers had an 11 GB log file which totally slowed down the server. Another customer of ours uses one of my applications, which logs all actions in an MSDE database file, and I fear that the corresponding transaction log file will grow and block the system too - is there any way I could shrink the transaction log file and set its max size through SQL?
I already know the command DBCC SHRINKFILE ('filename'), but I haven't found a SQL command to set the max size.
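The missing piece is MAXSIZE on ALTER DATABASE ... MODIFY FILE, which both SQL 2000 and MSDE accept. A sketch with placeholder names (shrink first if the log has already grown past the intended cap):

DBCC SHRINKFILE (MyAppDb_log, 100)   -- target size in MB
ALTER DATABASE MyAppDb MODIFY FILE (NAME = MyAppDb_log, MAXSIZE = 200MB)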
Hi all, I found my database log file is 26GB while the database file is just about 280MB. We are doing full backups every day. However, my SQL Server seems to be running very slowly now. Please advise:
1. How can I decrease/truncate my log file?
2. Could the huge log file be the reason my SQL Server is slowing down?
3. Could anyone point me toward more information on the transaction log? Thank you, much appreciated!
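On point 1: full database backups never truncate the log; in FULL recovery the log only becomes reusable after a log backup, which is the usual reason a 280MB database drags a 26GB log behind it. A sketch with placeholder names and paths:

BACKUP LOG MyDb TO DISK = 'D:\Backup\MyDb_log.bak'
DBCC SHRINKFILE (MyDb_log, 500)      -- then shrink to a sensible size (MB)
-- If point-in-time recovery is not needed, SIMPLE recovery sidesteps this:
-- ALTER DATABASE MyDb SET RECOVERY SIMPLE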
Hi guys, it's my first post! It's also about my first time really diving into SQL. We are using SharePoint on site here along with SQL Server 2005; one of our log files is 255 GB and needs to be made smaller very fast!! We are almost out of disk space and the log is growing fast.
I am very new to SQL and don't even know where to go to enter commands, so you'll have to bear with me here. I've read about truncating and shrinking and some other things, but I am worried and don't want to mess anything up. I know this is probably a simple task, but like I said, with the truncate command I was reading about, I don't even know where to go to type it in!!! If someone could please help it would be much appreciated. Thanks so much.
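The place to type commands is SQL Server Management Studio: connect to the server, click New Query, paste, and press F5. A hedged sketch for SQL 2005, with WSS_Content standing in for the real content database name (check the log's logical name with sp_helpfile first):

-- TRUNCATE_ONLY discards log records that were never backed up, so take a
-- full database backup right afterwards (the option was removed in SQL 2008):
BACKUP LOG WSS_Content WITH TRUNCATE_ONLY
DBCC SHRINKFILE (WSS_Content_log, 1024)   -- shrink toward ~1 GB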
I have a .bak file of 72GB, but my database size is only 32GB (I got this value from sp_spaceused). Does anyone know why the .bak file is so big? Is it possible to reduce the size? How could I reduce it?
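The usual cause is that backups append to an existing file by default (NOINIT), so one .bak can hold many backup sets. A sketch to check, and to overwrite instead of append next time (names and path are placeholders):

RESTORE HEADERONLY FROM DISK = 'D:\Backup\MyDb.bak'   -- one row per backup set
BACKUP DATABASE MyDb TO DISK = 'D:\Backup\MyDb.bak' WITH INIT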
Hi, this is regarding SQL Server 2000 (it was upgraded from SQL 7). Its log file keeps growing at a very high rate: 40 GB, 50 GB, and now 57 GB, while the MDF file is around 15 MB. We created a backup and tried to restore it on another system, but the restore asks for 57 GB of free space. How should we proceed with the file recovery? We have backups, but restoring asks for that much space for the log file. How can we retrieve the data? Rgds, Pramod
I am using SQL Server 7 and have about 5 databases. One of them has a data file of about 10 Meg, and most of the others are larger. I do a nightly backup to both a local and mapped drive. On both, the size of the backup file for this database is more than 500 Meg, but the rest appear to be an appropriate size. Does anyone know why this would be happening? The database works fine, it does not get a lot of insert/delete activity and I run DBCC every weekend. If anyone has any ideas I would sure like to hear from them.
I have a huge log file (285M) on SQL Server 7. The database itself is about 10M. How can I reduce the log file? Is it possible to rebuild it from scratch?
I tried truncating the transaction log, but it didn't help.
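Truncating only marks space inside the log as reusable; the file on disk keeps its size until it is physically shrunk as a separate step, e.g. (the logical name is a placeholder, see sp_helpfile):

DBCC SHRINKFILE (MyDb_log, 10)   -- target size in MB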
I would like to add an identity to an existing column in a table using a stored procedure, then add records to the table, and then remove the identity after the records have been added, or something similar. Here is a rough idea of what the stored procedure should do. (I do not know the syntax to accomplish this; can anyone help or explain this?) Thanks much, CBL

CREATE proc dbo.pts_ImportJobs
as
/* add identity to [BarCode Part#] */
alter table dbo.ItemTest
alter column [BarCode Part#] [int] IDENTITY(1, 1) NOT NULL
/* add records from text file here */
/* remove identity from [BarCode Part#] */
alter table dbo.ItemTest
alter column [BarCode Part#] [int] NOT NULL
return
GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO

Here is the original table:

CREATE TABLE [ItemTest] (
[BarCode Part#] [int] NOT NULL ,
[File Number] [nvarchar] (20) COLLATE SQL_Latin1_General_CP1_CI_AS NULL CONSTRAINT [DF_ItemTest_File Number] DEFAULT (''),
[Item Number] [nvarchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL CONSTRAINT [DF_ItemTest_Item Number] DEFAULT (''),
[Description] [nvarchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL CONSTRAINT [DF_ItemTest_Description] DEFAULT (''),
[Room Number] [nvarchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL CONSTRAINT [DF_ItemTest_Room Number] DEFAULT (''),
[Quantity] [int] NULL CONSTRAINT [DF_ItemTest_Quantity] DEFAULT (0),
[Label Printed Cnt] [int] NULL CONSTRAINT [DF_ItemTest_Label Printed Cnt] DEFAULT (0),
[Rework] [bit] NULL CONSTRAINT [DF_ItemTest_Rework] DEFAULT (0),
[Rework Cnt] [int] NULL CONSTRAINT [DF_ItemTest_Rework Cnt] DEFAULT (0),
[Assembly Scan Cnt] [int] NULL CONSTRAINT [DF_ItemTest_Assembly Scan Cnt] DEFAULT (0),
[BarCode Crate#] [int] NULL CONSTRAINT [DF_ItemTest_BarCode Crate#] DEFAULT (0),
[Assembly Group#] [nvarchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL CONSTRAINT [DF_ItemTest_Assembly Group#] DEFAULT (''),
[Assembly Name] [nvarchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL CONSTRAINT [DF_ItemTest_Assembly Name] DEFAULT (''),
[Import Date] [datetime] NULL CONSTRAINT [DF_ItemTest_Import Date] DEFAULT (getdate()),
CONSTRAINT [IX_ItemTest] UNIQUE NONCLUSTERED ([BarCode Part#]) ON [PRIMARY]
) ON [PRIMARY]
GO
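ALTER COLUMN cannot add or remove the IDENTITY property, which is why there is no syntax for the approach above. One workaround is a temporary identity column used just to number the rows, sketched here against the poster's table:

ALTER TABLE dbo.ItemTest ADD TempID int IDENTITY(1,1)
GO
UPDATE dbo.ItemTest SET [BarCode Part#] = TempID
GO
ALTER TABLE dbo.ItemTest DROP COLUMN TempID
GO

Alternatively, if [BarCode Part#] were a true identity column, SET IDENTITY_INSERT dbo.ItemTest ON would allow explicit values during the import without altering the column at all.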
I am using SQL Server CE and I change my tables sometimes. How do I use 'ALTER TABLE ... ALTER COLUMN ...'? For example: in my table 'customers' I want to delete column 'name' and add column 'age'. Right now I drop the table 'customers' and create it again, but I read something about 'ALTER TABLE ... ALTER COLUMN ...'. I tried the command but it did not work; I think the syntax I used was wrong. Please help me.
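As far as I know, SQL Server CE's ALTER COLUMN cannot change a column's data type (it mainly handles defaults and identity settings), which would explain the failure. For this particular change no ALTER COLUMN is needed anyway; dropping one column and adding the other should avoid recreating the table:

ALTER TABLE customers DROP COLUMN name
ALTER TABLE customers ADD age int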
I have a log file that is approximately 50 GB. I backed up just the log, and the file size of the .bak is 192 GB. Why is this? Shouldn't it be closer to the 50 GB?
Normally I wouldn't let the log grow this much, but we are in the process of getting a new server up and running and don't have backups going yet. They are working on getting that up and running this week.
So I did a log backup to give me back some log space for now, but was concerned when I saw the size of the .bak file.
When I view the media contents of the backup device, it shows one transaction log backup with a size of 192 GB.
What is up with this? I know in SQL 2000 the log backup files were never this big; they were about the size of the log itself.
I have inherited a SQL 2000 server, and am therefore an absolute beginner with SQL 2000.
I know this has been covered before, but I don't know how to apply the KB article because I don't know how to run the commands/script: http://support.microsoft.com/kb/272318/
I can no longer back up the SQL database because the 'transaction log backup' file is about 17GB. The SQL database is only about 2GB! The partition that it is backed up onto fills up every day.
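On SQL 2000 the place to run such scripts is Query Analyzer (installed with SQL Server: open it, log in to the server, paste the commands, press F5). A sketch of the KB's approach with placeholder names; note that TRUNCATE_ONLY discards log records, so a full backup immediately afterwards is important:

BACKUP LOG MyDb WITH TRUNCATE_ONLY
DBCC SHRINKFILE (MyDb_log, 200)   -- target size in MB
BACKUP DATABASE MyDb TO DISK = 'E:\Backup\MyDb_full.bak'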
I installed SQL 2005 a while back. Then I recently found out my file system was FAT32 (I don't understand why the hardware people did this...) and I had to convert to NTFS. Naturally the SQL service no longer worked, so I uninstalled in order to reinstall. Now I can't reinstall it; I keep getting this message:
native_error=5039, msg=[Microsoft][SQL Native Client][SQL Server]MODIFY FILE failed. Specified size is less than current size.
I am writing a .NET service, and I need to insert files into a SQL DB for temporary storage.
I have never inserted a file into a SQL database before. I am thinking about using the image field type. Has anyone done this before? How did you do it?
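A minimal sketch of a staging table for this, with made-up names; on SQL 2005 varbinary(max) is the successor to image, while image itself still works on SQL 2000:

CREATE TABLE dbo.TempFiles (
    FileID   int IDENTITY(1,1) PRIMARY KEY,
    FileName nvarchar(260) NOT NULL,
    FileData varbinary(max) NOT NULL   -- use image on SQL 2000 and earlier
)

From the .NET side, read the file into a byte array and insert it with a parameterized command (INSERT INTO dbo.TempFiles (FileName, FileData) VALUES (@name, @data)) so the bytes are passed as-is rather than embedded in the SQL text.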
I am creating a database for an application through script. After the tables, views, and sp's are created, the database is populated with data. After all of this (and before the application is even run), the log file is about 700MB. If I shrink the database, it takes the log down to 1MB. The mdf file is about 165 MB before and after it has been shrunk.
I have two questions:
1. Is there something I should look for in my database scripts, or is there a setting that could prevent the log from growing so large?
2. Is there a script I can run in my SQL code, after the database has been created and populated, to shrink it?
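On both questions: the growth comes from the population step being fully logged under the FULL recovery model, and yes, the shrink can be scripted. A sketch with a placeholder database name, to be woven into the creation script:

ALTER DATABASE MyAppDb SET RECOVERY SIMPLE   -- before populating: log space is reused as the load runs
-- ... create tables/views/procs and load the data ...
DBCC SHRINKFILE (MyAppDb_log, 10)            -- at the end of the script (target in MB)
-- ALTER DATABASE MyAppDb SET RECOVERY FULL  -- if full recovery is wanted afterwards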
I have one database, test, with one .mdf and one .ldf file. The .mdf file size is 100 MB. For various reasons I removed all the tables from that .mdf file and transferred them into a new secondary file, so all the tables now live in the secondary file. Now I want to reduce the first .mdf file from 100 MB to 50 MB. Is that possible? It's showing 90 MB free. Please reply.
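It should be, given the 90 MB free, though the primary file always keeps the system tables and so can never be emptied completely. A sketch (use sp_helpfile to confirm the logical file name, assumed here to be test_data):

USE test
DBCC SHRINKFILE (test_data, 50)   -- target size in MB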
I have a database data file at almost 2 TB, maxing out a Windows drive, with only 16 GB left. Should I just add another data file on another Windows drive for growth? Or move the current huge data file to a new GPT drive? Or do both: add another data file and move the existing one to its own new GPT drive?
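Adding a file is the lighter operation: it happens online, and new allocations are then spread proportionally across the files, whereas moving the existing 2 TB file means taking the database offline (or detaching it) while the file is copied. A sketch with placeholder names and paths:

ALTER DATABASE MyBigDb
ADD FILE (NAME = MyBigDb_Data2,
          FILENAME = 'F:\Data\MyBigDb_Data2.ndf',
          SIZE = 100GB, FILEGROWTH = 10GB)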