SQL Server 2008 :: Log File Space Is 5 Times The Data File
Mar 16, 2015
One of my database's data files is 100 GB and the log file is 500 GB. The DB is in the full recovery model and transaction log backups run once every 6 hours. Even then, the database log file isn't reducing in size.
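One thing worth checking first is what SQL Server says is holding the log. A minimal sketch; 'MyDatabase' is a placeholder for the real database name:

-- Ask SQL Server what is preventing log truncation for this database.
-- Common answers: LOG_BACKUP (a log backup is needed), ACTIVE_TRANSACTION,
-- REPLICATION, DATABASE_MIRRORING, etc.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyDatabase';

If the answer is anything other than NOTHING or LOG_BACKUP, the log cannot shrink no matter how often it is backed up.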
We are frequently receiving the following alerts at about 1:40 AM. We have backups running at 11:00 PM every day and a rebuild job running at 2:00 AM. I'm not sure of the exact cause of this error.
Error:
The file group "PRIMARY" for the database "tempdb" in SQL instance "MSSQLSERVER" on computer "XYZ" is running out of space.
tempdev: Initial size 133,100 MB; Growth by 10 percent, limited to 140,000 MB
templog: Initial size 5,475 MB; Growth by 10 percent, unlimited
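If the instance is SQL Server 2005 or later, a breakdown of what is consuming tempdb while the alert fires (user objects, internal objects such as sorts, or the version store) can narrow down the cause. A minimal sketch:

-- Break down tempdb space usage by category; run around 1:40 AM
-- while the alert is firing to see which category is growing.
SELECT
    SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,
    SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
    SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb,
    SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_mb
FROM tempdb.sys.dm_db_file_space_usage;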
I have production SQL Server 7 SP3 on Windows NT. I had an 8 GB data file of which 5 GB were used and 3 GB were unused, and I wanted to take back the unused 3 GB. So I did the following with the EM GUI:
1. I tried to "truncate free space from end of the file". It didn't truncate the file; I believe there was no empty space at the end of the file.
2. Next I chose the option to "shrink file to 5GB". And to my horror, instead of taking just 5 GB, the data file took the empty space also and the size of the used data file went to 8 GB.
Any idea what's going on? TIA, SP
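For reference, the T-SQL equivalents of those two GUI options behave the same way: TRUNCATEONLY only releases free space at the very end of the file, while a targeted shrink first moves used pages toward the front. A minimal sketch; the logical file name is a placeholder:

-- Release only the free space at the end of the file (no page movement):
DBCC SHRINKFILE (mydata1, TRUNCATEONLY);

-- Move used pages toward the front, then shrink to a 5 GB target (in MB).
-- Note: this page movement fragments indexes heavily.
DBCC SHRINKFILE (mydata1, 5120);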
Historically I've always written a VB script to copy a file from a SharePoint library. I don't like this method because I have to put a username and password in the script and maintain a config file.
Yesterday I was playing around with using a File System Task. The SharePoint file has a UNC path, so why not? I created a simple test package with a single File System Task that copies the SharePoint file (addressed via UNC) to another network location. The package runs fine locally.
When I try running it on our utility server, I get a "The file name [SHAREPOINT UNC PATH] specified in the connection was not valid" error. The package runs with a proxy on the server, and the proxy account has the same permissions to the SharePoint site (as far as I can tell) as I do.
Have you ever designed a solution for loading data into a SQL destination from a single 5-10 GB flat file? If yes, what kind of performance measures did you take while designing the solution?
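For a plain T-SQL baseline, the usual measures show up even in a simple BULK INSERT: a table lock for minimally logged loads into a heap, plus batched commits. A minimal sketch; the table name, file path, and delimiters are placeholders:

-- Bulk-load a large flat file with a table lock and batching.
BULK INSERT dbo.Staging
FROM 'D:\loads\bigfile.csv'
WITH (
    TABLOCK,              -- enables minimal logging when loading a heap
    BATCHSIZE = 100000,   -- commit every 100k rows to bound log use
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
);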
I want to create an XML file with the data in my table. I have a question about tags.
SELECT -- Root element attributes
    'http://tempuri.org/Form.xsd' AS 'xmlns',
    'http://www.w3.org/2001/XMLSchema-instance' AS 'xmlns:xsd',
    ( SELECT -- Creating a default element
[Code] ....
This is my query. When I use the 'xmlns' namespace, the result is below:
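If the goal is to declare the namespaces properly rather than faking them as attribute aliases, WITH XMLNAMESPACES is the usual approach. A minimal sketch against a hypothetical dbo.Form table; adjust the table and column names to the real schema:

-- Declare the namespaces once; FOR XML emits them as xmlns declarations.
WITH XMLNAMESPACES (
    DEFAULT 'http://tempuri.org/Form.xsd',
    'http://www.w3.org/2001/XMLSchema-instance' AS xsd
)
SELECT FormId   AS 'FormId',    -- hypothetical columns
       FormName AS 'FormName'
FROM dbo.Form
FOR XML PATH('Form'), ROOT('Forms');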
We have installed a SQL Server 2008 R2 SP1 instance, and it hosts SharePoint 2010 databases.
We have 2 dedicated drives for tempdb on the SAN with 50 GB of space. Both the tempdb data and log files were created with the default size. I would like to presize them.
What are the best values to start with?
U: Tempdbdata, holding the tempdb.mdf file
V: Tempdblog, holding the templog.ldf file
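Once target sizes are chosen, presizing is a one-time ALTER DATABASE per file. A minimal sketch, assuming the default logical names tempdev and templog; the sizes below are illustrative placeholders, not recommendations, and should leave headroom on the 50 GB drives:

-- Presize the tempdb files so they don't depend on autogrow at runtime.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 20480MB, FILEGROWTH = 1024MB);

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 10240MB, FILEGROWTH = 512MB);

tempdb is recreated at these configured sizes on every instance restart, so the settings take full effect after the next restart.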
I'm trying to create an import package using BIDS. I'm using SQL Server 2008. The data is saved as a .csv file so that I can use the flat file option for the data source. The issue I am having is that when I preview the flat file after selecting it as the data source, some of the data in numeric format shows up as non-numeric; for instance, the value -1,809,575,682,700 is being viewed as ""1 and the package is giving a conversion error.
Someone created a Word input file (15 pages, including check boxes, text boxes, drop-down lists...). Is it possible to save the data from the Word input file to a SQL table?
I'm trying to quantify the number of times folks use SQL Server Management Studio to change client data in one of our production databases. Does SQL Server keep this statistic? How do I get to this data?
Today we received an issue on an application database: the internal free space on the DB is 0%. The database was designed as below:
name     fileid   filename                                     filegroup   size          maxsize          growth      usage
XX       1        I:\Data\MSSQL.1\MSSQL\Data\New XX.mdf        PRIMARY     68140032 KB   Unlimited        0 KB        data only
XX_log   2        I:\Data\MSSQL.1\MSSQL\Data\New XX_log.LDF    NULL        1050112 KB    2147483648 KB    102400 KB   log only
XX_2     3        I:\Data\MSSQL.1\MSSQL\Data\New XX_2.ndf      PRIMARY     15458304 KB   Unlimited        0 KB        data only
XX_3     4        I:\Data\MSSQL.1\MSSQL\Data\New XX_3.ndf      PRIMARY     13186048 KB   Unlimited        0 KB        data only
XX_4     5        I:\Data\MSSQL.1\MSSQL\Data\New XX_4.ndf      PRIMARY     19570688 KB   Unlimited        204800 KB   data only
XX_5     6        I:\Data\MSSQL.1\MSSQL\Data\New XX_5.ndf      PRIMARY     19591168 KB   Unlimited        204800 KB   data only
Two of the secondary data files had autogrowth enabled, unrestricted with 200 MB growth, and three of the data files, including the primary, had autogrowth turned OFF. The application users are complaining that there is no internal free space on the DB.
What we fail to understand is this: when autogrowth was already turned OFF on 3 data files (1 primary and 2 secondary), why was the application still trying to increase the space on those .mdf and .ndf files? And with autogrowth turned ON on 2 of the secondary data files, why was the DB not able to expand into those files?
What more data do I need to collect so I can submit an analysis of this?
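As a starting point for the analysis, a per-file space report run inside the affected database shows exactly which files are full. A minimal sketch using FILEPROPERTY:

-- Per-file size, used, and free space for the current database.
-- size and SpaceUsed are in 8 KB pages; dividing by 128 gives MB.
SELECT name,
       size / 128                                     AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128          AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb,
       growth, is_percent_growth, max_size
FROM sys.database_files;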
I have a table of 300+ GB. It holds 10 years of data. I need to delete 5 years of data and move it to another server so I can have more space.
If I delete 5 years of data, the transaction log gets huge and the database even gets bigger because the .ldf file grows! I think I can shrink the log file and the data file afterwards. Is this the best way to do it?
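One common pattern is to delete in batches so each transaction stays small, letting the routine log backups reuse the log between batches instead of growing it. A minimal sketch, assuming a hypothetical dbo.History table with a TransactionDate column:

-- Delete old rows in small batches; between batches the regular log
-- backups can truncate the log, so it never has to grow to hold
-- the whole purge as one giant transaction.
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM dbo.History
    WHERE TransactionDate < DATEADD(YEAR, -5, GETDATE());
    SET @rows = @@ROWCOUNT;
END;

After the purge, shrinking the files can release the space, at the cost of heavy index fragmentation, so plan a rebuild afterwards.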
For a database, we have 4 data files in a particular filegroup, and the file sizes are almost 70 GB each.
Will I run into any performance issues if I create/pre-allocate an additional data file in the same filegroup so that the existing files don't grow too much?
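Pre-allocating during a quiet window generally avoids, rather than causes, performance problems, since the expensive part is the growth itself (especially for data files without instant file initialization). A minimal sketch of the statement; the database, filegroup, names, and sizes are placeholders:

-- Add a pre-sized data file to the existing filegroup.
ALTER DATABASE MyDatabase
ADD FILE (
    NAME = MyDatabase_Data5,
    FILENAME = 'E:\Data\MyDatabase_Data5.ndf',
    SIZE = 70GB,
    FILEGROWTH = 1GB
) TO FILEGROUP [DataFG];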
On one server we had file growth, and then we had to add a new hard drive and a new file on it. Now we have a new server with a huge hard drive, but all the old files remain. Can I reduce these files to one data file or not?
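If the extra files are in the same filegroup, their contents can usually be migrated and the files dropped. A minimal sketch; MyDatabase and MyDatabase_Data2 are placeholder names, and note the primary .mdf itself cannot be removed:

-- Move all data out of the extra file into the remaining files of the
-- same filegroup, then remove the now-empty file.
USE MyDatabase;
DBCC SHRINKFILE (MyDatabase_Data2, EMPTYFILE);
ALTER DATABASE MyDatabase REMOVE FILE MyDatabase_Data2;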
I have a SharePoint content database in SQL 2008 R2 (WSS_Content) that is 230 GB in size, but 40% of it is empty space. This is because we have removed a large amount of old content from SharePoint. The log file is fine. I have 60 GB left on the drive that hosts the database files. I would like to shrink the data file to get disk space back. I found that under the file properties, the WSS_Content data file's initial size is 228,702 MB (220 GB or so).
When I try to do a shrink file (data file) from Management Studio, I see the 60 GB of drive space keep dropping, so I have to kill the process. What should I do to reduce this data file?
Why does it keep using up all the free space on the drive when I try to shrink the data file?
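One workaround is to shrink in small steps rather than one big jump: each step moves fewer pages and generates less transaction log, which is a likely reason the drive space keeps dropping. A minimal sketch; the logical file name and target sizes (in MB) are placeholders:

-- Shrink a few GB at a time instead of all at once; repeat with a
-- lower target until the desired size is reached.
USE WSS_Content;
DBCC SHRINKFILE (WSS_Content, 215000);
-- ...let log backups / checkpoints catch up, then continue:
DBCC SHRINKFILE (WSS_Content, 210000);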
When I execute DBCC SQLPERF(LOGSPACE), I get the following values:
Database Name    Log Size (MB)   Log Space Used (%)
master           16.17969        13.30275
tempdb           7.429688        61.7245
model            0.7421875       45.78947
msdb             5.554688        25.87904
distribution     2808.93         0.8172179
BANKDB           23438.87        48.20037
WSMIRSDB         109.7422        4.839111
For database BANKDB, Log Space Used (%) is about 48% and the log size is about 23,438.87 MB, whereas the database size of BANKDB is 60 GB. A FULL database and log backup is done once every night. My database is performing slowly now.
Do we need to take log backups frequently, like once an hour, so that less log space is used? The same query is taking more time to execute than before in the same database; is that because the log file has grown?
I reorganize and rebuild indexes once a week and update statistics nightly.
Is it correct that once log space used grows by more than 10%, we need to take a log backup?
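In the full recovery model the log is only cleared by log backups, so more frequent log backups (hourly is common) do keep Log Space Used (%) down. A minimal sketch of the job step; the path is a placeholder, and each scheduled run should write to a unique file name:

-- Back up the BANKDB transaction log; schedule hourly via SQL Agent.
-- Each log backup marks the backed-up portion of the log as reusable.
BACKUP LOG BANKDB
TO DISK = 'G:\Backups\BANKDB_log_201503160100.trn';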
I have one .mdf and two .ndf files on the same drive. The .mdf file is 275 GB, one .ndf file is 300 GB, and the other .ndf file is 135 GB. Is it normal to have 3 different file sizes? If not, what can I do to fix this? I don't have the option to make all files' initial sizes equal to the 300 GB .ndf. If I have to add an .ndf file (in case the above drive runs out of space), what initial file size should I set for the new file on the new drive? And how does data get distributed across all 4 files (including the new .ndf on a different drive)?
I have a flat source file (.csv) which I am importing to SQL Server. If the source file is not available at the specified location, the SSIS package should retry execution n times (say, 3 times) after a certain time interval. The number of retries and the time interval should be configurable.
I have used Flat File connection manager for the source and OLEDB connection manager for the destination.
I am quite new to SSIS. Any help would be highly appreciated.
I am working with an SSIS package that executes every day. It has a File System Task that moves the production backup from one server to a different server. In today's execution the package failed with the following error:
Error Description: An error occurred with the following error message: "The process cannot access the file 'ECOSQLDumpsTest_backup_2015_02_03_230004_1557700.bak' because it is being used by another process.".
How to find which process is using that test backup file?
I work with SQL Server 2008 on a database. We exported the schema and data with the export data command:
right-click the database => Tasks => Generate Scripts => select all objects => click Advanced => set "Types of data to script" => Schema and data
Now we have a file with all the data and schema. That's perfect... But how can I insert the file into another database? OK, I can copy and paste all the data into Management Studio and press F5, but when I do this Management Studio fails because the size of the file is over 200 MB!
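One way around the editor limit is to run the script file from the command line instead of opening it. A sketch using sqlcmd; the server, database, and path are placeholders:

sqlcmd -S MyServer -d TargetDatabase -E -i "C:\export\schema_and_data.sql"

Here -E uses Windows authentication (use -U and -P for a SQL login), and -i points at the script file, which is streamed rather than loaded into an editor.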
I want to import an XML file directly from a web page into a Microsoft SQL table. At the moment the import is done after the XML file is downloaded locally. I want to skip the step of manually downloading the file. Can it be done in SQL? When I change the path I get this error: Cannot bulk load because the file URL... could not be opened. Operating system error code 123 (The filename, directory name, or volume label syntax is incorrect.)
Below is the code:

DECLARE @idoc INT
DECLARE @doc XML
-- Local file works:
SET @Doc = (SELECT * FROM OPENROWSET(BULK 'F:\Folder\brfxrates.xml', SINGLE_CLOB) AS xmlData)
-- From the web I get the error:
--SET @Doc = (SELECT * FROM OPENROWSET(BULK 'http://www.bnr.ro/nbrfxrates.xml', SINGLE_CLOB) AS xmlData)
SELECT @Doc
I know I could use Process Explorer to find processes which are accessing a drive or a folder; I need exactly the same thing, but recorded/monitored over time. Basically, I need a list of all processes accessing a drive/folder.
I have a CSV file with roughly 6 million rows. The file is unstructured; that is, some rows have 5 fields, others have 15, and there are as many as 50 fields in one row.
I am using BULK INSERT to read the entire file into a table in the database, with each row being a database record. With that, I have one column that contains a whole row of comma-delimited fields. All fields are character strings, and I want to find a quick way of parsing each row and placing each comma-delimited value in a column. For example:
Column CSVString contains a CSV row. I don't know how many fields (number of commas + 1) are in the row, but if the row contains 10 fields, I need to populate columns C1-C10; if the row has 15 fields, I populate columns C1-C15.
How can I do this in a very efficient way? I tried a CTE, but the performance was not very good.
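One set-based approach that works on SQL Server 2008 is to turn each CSV row into XML and read the values by position. A minimal sketch, assuming a hypothetical dbo.Staging(CSVString) table and data containing no XML-special characters such as < or &; extend the pattern up to C15 (missing positions simply come back NULL):

-- Split each comma-delimited row into positional columns.
;WITH x AS (
    SELECT CAST('<v>' + REPLACE(CSVString, ',', '</v><v>') + '</v>' AS XML) AS vals
    FROM dbo.Staging
)
SELECT vals.value('/v[1]', 'varchar(100)') AS C1,
       vals.value('/v[2]', 'varchar(100)') AS C2,
       vals.value('/v[3]', 'varchar(100)') AS C3
FROM x;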
We had some SAN issues and we don't have the transaction log files for some databases; the drive that was holding the T-log files went missing. How do we bring the databases back?
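If the databases were shut down cleanly, attaching the data files and rebuilding a new log may work. A minimal sketch; the name and path are placeholders. If a database was not cleanly shut down, this will fail, and restoring from backups is the safe route:

-- Attach the data file(s) and let SQL Server build a brand-new log file.
CREATE DATABASE MyDatabase
ON (FILENAME = 'D:\Data\MyDatabase.mdf')
FOR ATTACH_REBUILD_LOG;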