Name Size Type Modified
------------------------------------------------
student.ldf 52,416 KB Database File 2/11/2004 10:34PM
student.mdf 63,424 KB Database File 2/11/2004 10:34PM
My question: given that 'Unrestricted file growth' is specified for both the data file and the log file, why hasn't either one increased in size since 2/11/2004? Actually, the modified timestamps of some other database files are also '2/11/2004 10:34PM'. That's weird.
I found the following message relevant to the above timestamp in the SQL Server logs:
=====
2004-02-11 22:34:10.12 server SQL Server terminating because of
system shutdown.
=====
I'm pretty sure there have been a lot of updates taking place on this database. We haven't heard any complaints from the customers that they have had any problems (such as running out of space) with the databases.
Did SQL Server write the data and log somewhere else?
Any insight on what's going on would be appreciated.
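One quick check, assuming the 'student' database above: NTFS generally does not update a file's modified timestamp while SQL Server holds the file open, so the sizes SQL Server reports internally are more trustworthy than Explorer's view. A minimal sketch:

USE student
GO
EXEC sp_spaceused
GO
-- Per-file sizes; the size column is in 8 KB pages
SELECT name, filename, size * 8 / 1024 AS size_mb
FROM sysfiles
GO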
My primary (and only) data file has reached the point where it is auto-growing. I would like to grow this file in one big chunk at an off-peak time, but I can't seem to find the code I need to make the file grow when I want it to.
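A minimal sketch using ALTER DATABASE ... MODIFY FILE; the database name 'MyDb', the logical file name 'MyDb_Data', and the target size are placeholders, and SIZE must be larger than the file's current size:

ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_Data, SIZE = 20480MB)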
Bear with me; my SQL Server 2005 maintenance knowledge is that of a newbie.
I was running a very large transaction over the weekend (say 10 million inserts). After waiting 3-4 hours for the transaction to complete, I checked the LDF file; it had grown to 100 GB.
After that I discovered that I had the recovery model set to FULL, so I killed the job and changed the recovery model from Full to Simple.
Now I see that the LDF file is not growing in size even though many transactions have completed successfully (still very slow, though).
What am I missing here? I am clueless as to why my LDF is not growing in size.
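A hedged check, with 'MyDb' as a placeholder: confirm the current recovery model and see how full the log actually is. Under SIMPLE recovery the log is truncated at checkpoints, so the physical file can stay the same size while the space inside it is reused:

SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'MyDb'

DBCC SQLPERF(LOGSPACE)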
I'm running a long and heavy query. While it runs, the DB's log file grows by more than 20 GB, and consequently I'm running out of disk space. Is there a way to restrict the log file size without damaging my query?
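One hedged option, with placeholder names: cap the log file with MAXSIZE. Note that the query will fail with error 9002 if the log fills up, so breaking the work into smaller committed batches is usually the safer fix:

ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_Log, MAXSIZE = 20480MB)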
For one of our databases we have an issue where the log file grew rapidly last week on Friday and Saturday. The database is on SQL Server 2008 R2 with the compatibility level at 80. Please see the log growth events below:
First, we thought index maintenance such as re-indexing and updating stats could have been the reason, but when we checked the schedule, that job ran on the 16th, using the code below:
USE ABC
GO
EXEC sp_MSforeachtable @command1 = "print '?' DBCC DBREINDEX ('?', ' ', 80)"
GO
EXEC sp_updatestats
GO
I know the above is old-fashioned, but we believe it should not be the major cause here. How can I determine what happened on the 14th and 15th that caused the growth events to trigger and the log file to bump to 80 and 70 GB on those two days?
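A hedged way to investigate: the default trace (on by default in SQL Server 2008 R2) records Log File Auto Grow events (EventClass 93), including when each growth happened and which application and login drove it:

DECLARE @trace nvarchar(260)
SELECT @trace = path FROM sys.traces WHERE is_default = 1

SELECT StartTime, DatabaseName, ApplicationName, LoginName,
       IntegerData * 8 / 1024 AS growth_mb   -- IntegerData is pages grown
FROM sys.fn_trace_gettable(@trace, DEFAULT)
WHERE EventClass = 93   -- Log File Auto Grow
ORDER BY StartTime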
My understanding is that the log file is not supposed to grow if the database is in the simple recovery model. I am in a situation where the log grows if I do any inserts that involve millions of rows. How do I make sure that it does not grow?
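A hedged illustration with hypothetical tables dbo.Source and dbo.Target: even under SIMPLE recovery, the log must hold everything for the active transaction, so one huge insert still grows it. Splitting the work into separately committed batches lets checkpoints reuse the log space:

WHILE 1 = 1
BEGIN
    -- Copy the next batch of rows that have not been moved yet
    INSERT INTO dbo.Target (id, col1)
    SELECT TOP (50000) s.id, s.col1
    FROM dbo.Source s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Target t WHERE t.id = s.id)

    IF @@ROWCOUNT = 0 BREAK
    CHECKPOINT   -- under SIMPLE recovery, lets the log space be reused
END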
For one of our databases, we have 4 data files in a particular filegroup, and the file sizes are almost 70 GB each.
Will I run into any performance issues if I create/pre-allocate an additional data file in the same filegroup so that the existing files don't grow too much?
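A minimal sketch of pre-allocating a fifth file in the same filegroup; the database, filegroup, logical name, and path are all placeholders:

ALTER DATABASE MyDb
ADD FILE (NAME = MyDb_Data5,
          FILENAME = 'E:\Data\MyDb_Data5.ndf',
          SIZE = 70GB,
          FILEGROWTH = 1GB)
TO FILEGROUP MyFileGroup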
On one server we had file growth, and we had to add a new hard drive and put a new file on it. Now we have a new server with a huge hard drive, but all the files remained. Can I consolidate these files down to one data file or not?
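A hedged sketch of consolidating, with 'MyDb_Data2' as a placeholder logical name for the extra file: EMPTYFILE migrates the file's data into the other files of the same filegroup, after which the empty file can be dropped:

USE MyDb
DBCC SHRINKFILE ('MyDb_Data2', EMPTYFILE)
ALTER DATABASE MyDb REMOVE FILE MyDb_Data2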
Hi, I am trying to use BULK INSERT with a format file. All of our data has a few bytes of header in the data file which I would like to skip before doing the BULK INSERT. Is it possible to write the format file so that it skips these few bytes of header? For example, I have a 1 GB data file with a 1000-byte header. Except for the first 1000 bytes, the rest of the data is good for BULK INSERT. Thanks in advance. Sorry if it is really a dumb question, as I am new to BULK INSERT and still practicing.
Bob
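A hedged note: a format file describes every row, so it cannot skip a one-time byte header. If the header happens to end in the same row terminator as the data rows, FIRSTROW can skip it as if it were a row (the table and file paths below are placeholders); otherwise the header usually has to be stripped before the load:

BULK INSERT dbo.Target
FROM 'C:\data\big.dat'
WITH (FORMATFILE = 'C:\data\big.fmt', FIRSTROW = 2)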
I have production SQL Server 7 SP3 on Windows NT. I had an 8 GB data file, of which 5 GB were used and 3 GB were unused. I wanted to take back the unused 3 GB. So I did the following with the EM GUI:
1. I tried to "truncate free space from the end of the file". It didn't truncate the file. I believe there was no empty space at the end of the file.
2. Next I chose the option to "shrink file to 5GB". And to my horror, instead of shrinking to just 5 GB, the data file absorbed the empty space as well, and the reported used size of the data file went to 8 GB.
Any idea what's going on?
TIA, SP
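A hedged sketch of the T-SQL equivalents, with 'MyDb_Data' as a placeholder logical file name and the target size in MB. A targeted shrink relocates pages toward the front of the file before releasing the tail, whereas TRUNCATEONLY only releases space past the last allocated extent, which would explain why option 1 did nothing:

DBCC SHRINKFILE ('MyDb_Data', 5120)
DBCC SHRINKFILE ('MyDb_Data', TRUNCATEONLY)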
This is the flow in one of my packages (an ETL job): an Excel file contains monthly revenue details, and I want to import the Excel data into my database staging table, so I've created the package. It's working fine...
The problem: when we swap in the new data for the next month and run the package, it fails. It is the same file and the same format; we only delete the contents of the file, except the first row of the Excel sheet, and paste in the new data. The new data comes from an Oracle database in the form of an Excel sheet (they copy the data manually and send it to us).
I open the package in design mode, and when I double-click the Excel file source it says: <column name>'s metadata needs to be synchronized. Do you want to fix this issue automatically with the available external column's metadata?
Clearly it's a data type issue; I changed the corresponding data types to match the previous Excel sheet, which is equivalent to the table it copies into.
Now the package runs with validation warnings, such as: External Column "Invoice Amount" needs to be updated... etc. I can see some two or three warning messages in the package execution wizard.
OK, I'm ready to accept these warnings, and I want my package to run from my server. (The packages have been deployed to a centralized server; whenever we want to run a package, we have an ASP.NET web page that executes it in an OnClick event.)
The package is not running from the server; I guess it's due to the metadata change in the Excel file.
Please suggest some guidelines to resolve this metadata issue; I want my Excel sheet's metadata not to change when we put new updates into it.
Otherwise, suggest a way I can validate the Excel sheet before running the package, testing whether the data is in the correct format or not; a kind of data profiling activity.
I know it's somewhat crazy, but I need to maintain the system with a permanent solution instead of facing this metadata mismatch issue!
Somewhat lengthy explanation, but it's needed for my dear, powerful Microsoft responders. I think I've explained my problem clearly; if I haven't, let me know your questions and I'll try my level best.
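One commonly suggested mitigation, offered here on the assumption that the source uses the Jet Excel provider: the driver guesses each column's type from the first rows of data, so re-pasted data can change the inferred metadata. Adding IMEX=1 to the Extended Properties makes mixed-type columns come through as text, which tends to keep the external metadata stable (the path and sheet details are placeholders):

Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\data\revenue.xls;Extended Properties="Excel 8.0;HDR=YES;IMEX=1"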
How do I use a Microsoft Data Link (.udl) file to connect to SQL Server using SQL Native Client? If I use an OLE DB connection, I can make a connection to the database using a UDL file, but how can I use a file for the connection with SQL Native Client? Is there any other data link file that can be used to connect to SQL Server using SQL Native Client?
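A hedged sketch: a .udl file is just an OLE DB init string, and SQL Native Client ships an OLE DB provider (SQLNCLI), so the provider can be named inside the UDL itself. The server and database names below are placeholders:

[oledb]
; Everything after this line is an OLE DB initstring
Provider=SQLNCLI.1;Data Source=MyServer;Initial Catalog=MyDb;Integrated Security=SSPI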
In a foreach loop, I am inserting records into a flat file, which is working fine. But the thing is that as the file grows, it takes longer to locate the EOF (end of file) of the flat file before inserting the records.
I have around 70-100 lines written to the file in each loop, and there are more than 20k iterations to loop over, which means that at the end I should have 1,400k-2,000k lines in the text file.
One solution would be to insert the records at the start of the file itself, so that it does not have to look up the EOF each time before writing.
Another would be to generate separate files and then merge them.
Any idea how this can be done?
Besides this, I have to zip the file and then SFTP it to a given address.
I am running my package on SQL Server 2012, giving a network path for the flat file destination, and it's working fine. But if I give my local path, it gives me the error "cannot open data file"...
I am trying to create and later read a data file from a package deployed in SSISDB; it creates the file successfully but does not read it. The same package, when run from the file system, runs successfully. Also, generating the ispac and deploying it to SSISDB runs for an infinite time. Is it a permission issue?
One of my databases has a 100 GB data file and a 500 GB log file. The DB is in the full recovery model and transaction log backups happen once every 6 hours. Even then, the database log file isn't reducing in size.
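A hedged first diagnostic, with placeholder names: ask SQL Server why it cannot reuse the log. Also note that log backups only free space inside the log for reuse; the physical file never shrinks unless shrunk explicitly:

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyDb'

DBCC SHRINKFILE ('MyDb_Log', 10240)   -- target size in MB, once the wait reason is clear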
How do I insert data from a flat file or .csv file into an existing SQL database?
Here's what I've come up with thus far, but it doesn't work. Can someone please help? Let me know if there is a better way to do this... Ideally I'd like to write straight to the SQL database and skip the dataset altogether...
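A hedged server-side alternative that skips any client-side dataset entirely; the table name, file path, and options are placeholder assumptions (FIRSTROW = 2 skips a header row):

BULK INSERT dbo.Target
FROM 'C:\data\input.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2)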
I want to load data that I receive every day from my customers in .xls (Excel) or .csv file format into the database that I have created for this purpose. But when trying to do that with SSIS, I got an error message .... that I can't import redundant data into my database column.
Why does SHRINKFILE with EMPTYFILE not redistribute data evenly across the primary filegroup when it has multiple files?
Please run the script attached to see what the end result is.
This is what I set up last night on my test machine.
1) Create database [FGTest], size 200MB
2) Create a table called TEST on PRIMARY
3) Insert 40MB of data into TEST
4) Create another file called temp in PRIMARY, size 200MB
5) Shrinkfile('FGTest', emptyfile) so that all data is transferred from FGTest into the temp file
6) Add another 2 files called DATA2 and DATA3, both 200MB
7) We now have 3 empty files that I want the data distributed evenly across: FGTest, DATA2 & DATA3
8) Shrinkfile('temp', emptyfile) to move all the data from temp over the 3 files evenly
I would expect at this stage to have the following:
FGTest = 13MB, DATA2 = 13MB, DATA3 = 13MB
(40MB of data over 3 files should be about 13 MBish in each file)
What I actually end up with is this:
FGTest = 20MB, DATA2 = 10MB, DATA3 = 10MB
It looks as though SQL Server allocates 50% of the data to the original file and then spreads the remaining 50% evenly over the other files in PRIMARY.
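A hedged way to watch where the pages actually land after each step (run in the FGTest database); FILEPROPERTY reports used space in 8 KB pages:

SELECT name,
       size * 8 / 1024 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS used_mb
FROM sys.database_files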
I created 1 database with 2 filegroups: 1 primary and 1 index.
- The primary filegroup includes 1 data file (*.mdf) that stores all tables.
- The index filegroup includes 1 index file (*.ndf) that stores all indexes.
Most of the indexes are non-clustered indexes.
After a short time in use, the data file is 2 GB but the index file is 12 GB. I do not know what has happened in my database. I have some questions:
1/ How do I reduce the size of the index file?
2/ How do I know what is stored in the index file?
3/ How do I trace everything that affects the index file?
4/ How do I limit the growth of the index file?
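A hedged sketch toward question 2, assuming the index filegroup is named 'INDEX': list which indexes live in that filegroup and how big each one is:

SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name AS index_name,
       SUM(au.total_pages) * 8 / 1024 AS size_mb
FROM sys.indexes i
JOIN sys.partitions p
  ON p.object_id = i.object_id AND p.index_id = i.index_id
JOIN sys.allocation_units au
  ON au.container_id = p.partition_id   -- simplified; LOB units join via hobt_id
JOIN sys.filegroups fg
  ON fg.data_space_id = i.data_space_id
WHERE fg.name = 'INDEX'
GROUP BY i.object_id, i.name
ORDER BY size_mb DESC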
I have a database whose log file is 4 times greater than the data file in size, and it's continuously growing day by day. We recently faced a low-disk-space issue because of it.
Is there any way to truncate the log file?
What is the impact on the DB if I truncate the log file?
Is there any way to prevent this file from continuously growing?
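A hedged sketch of the usual options, with placeholder names. If point-in-time recovery matters, schedule regular log backups so the log space can be reused; if it does not, SIMPLE recovery stops the growth at the cost of losing log-based restores:

BACKUP LOG MyDb TO DISK = 'E:\Backups\MyDb_log.trn'

ALTER DATABASE MyDb SET RECOVERY SIMPLE
DBCC SHRINKFILE ('MyDb_Log', 10240)   -- reclaim the disk, target size in MB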
We have a large 'History' database that is currently about 4.5 TB, with most of that in a data file that is 4.2 TB. We wanted to stop growth of the one large data file and have SQL Server allocate new data to the other data files, but this throws an error when we attempt to change the MAXSIZE setting:
ALTER failed for Database 'History'.
MODIFY FILE failed. Specified size is less than or equal to current size.
SQL Server is saying we can have a max size of 2 TB, and anything over that is blocked. Since the change is blocked, the file continues to grow.
Is there any way to cap the growth of the 4.2 TB file and not allow any more data to be written to it?
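A hedged alternative to MAXSIZE: disabling autogrow on the big file pins it at its current size, so new allocations go to the other files via proportional fill ('History_Data1' is a placeholder logical name):

ALTER DATABASE History
MODIFY FILE (NAME = History_Data1, FILEGROWTH = 0)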
One of our production accounting databases started at 2 GB and has now grown into a 20 GB database over the period of a few years. I have been getting timeouts when transactions try to update different tables in the database. Most of the errors I get are I/O requests to the data file (the data file of the production DB, Accounting_Data.MDF).
I would like to implement the following for this accounting database. I need to split the data file into multiple files by placing some of the tables in different filegroups. I have had the server upgraded so it can have different drives on different channels, and I can place the data and log files on different drives so there will be fewer I/O conflicts.
I would like to have the following filegroups:
FileGroup 1 - all database definitions (DDL).
FileGroup 2 - the AR module tables.
FileGroup 3 - the GL module tables.
FileGroup 4 - the rest of the tables.
FileGroup 5 - the indexes.
Also, where will the associated transaction log files go? I would like to get some help doing this. Are there any articles or help available that I can refer to? Any suggestions/corrections/criticisms of what I have mentioned above are much appreciated! Thanks in advance.
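A hedged sketch of the mechanics, with all names as placeholders: add a filegroup with its own file on a separate drive, then rebuild a table's clustered index onto it to move the table. (On the log question: log files do not belong to any filegroup; the log stays its own file, ideally on its own drive.)

ALTER DATABASE Accounting ADD FILEGROUP FG_AR

ALTER DATABASE Accounting
ADD FILE (NAME = Accounting_AR1,
          FILENAME = 'F:\Data\Accounting_AR1.ndf',
          SIZE = 2GB)
TO FILEGROUP FG_AR

-- Moving a table: recreate its clustered index on the new filegroup
CREATE UNIQUE CLUSTERED INDEX PK_ARInvoice
ON dbo.ARInvoice (InvoiceID)
WITH DROP_EXISTING
ON FG_AR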
I'm trying to write a script that checks my database file and log sizes (in MB) and inserts them into a table. I need the following columns: dbid, dbname, compatibility_level, recovery_model, db_size_in_MB, log_size_in_MB. I tried to write this and got stuck:

select sysdb.database_id, sysdb.name, sysdb.compatibility_level,
       sysdb.recovery_model_desc, sysmaster.size
from sys.databases sysdb, sys.master_files sysmaster
where sysdb.database_id = sysmaster.database_id
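A hedged completion of the script above: size in sys.master_files is in 8 KB pages, so multiply by 8 and divide by 1024 for MB, and conditional aggregation separates data files from log files:

SELECT sysdb.database_id AS dbid,
       sysdb.name AS dbname,
       sysdb.compatibility_level,
       sysdb.recovery_model_desc AS recovery_model,
       SUM(CASE WHEN sysmaster.type_desc = 'ROWS'
                THEN sysmaster.size ELSE 0 END) * 8 / 1024 AS db_size_in_MB,
       SUM(CASE WHEN sysmaster.type_desc = 'LOG'
                THEN sysmaster.size ELSE 0 END) * 8 / 1024 AS log_size_in_MB
FROM sys.databases sysdb
JOIN sys.master_files sysmaster
  ON sysdb.database_id = sysmaster.database_id
GROUP BY sysdb.database_id, sysdb.name,
         sysdb.compatibility_level, sysdb.recovery_model_desc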
I've been through the forum and read a number of threads about people's DBs not growing, and the answer usually is that they don't have 'automatically grow data file' enabled. Unfortunately I have this on, yet when I look at the properties of the database it reports the space available as 0.00 MB. Up until about two weeks ago I was showing approximately 48% space utilization. When I ran an SP to show growth, it told me the database was expanded by 20% yesterday, but SQL Server is still telling me the space available is zero.
The log file is also set for auto-growth. The DB is 14.5 GB in size and the drives still have around 92 GB of space.
Has anyone experienced this before? Any ideas? Does anyone know of any SPs that can give me detailed info on internal data file size compared to stated size (i.e., wasted space in the data file)? Is SQL Server doing something funny in the way it sees the database or the data files individually? Any help is appreciated.
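One hedged guess worth trying: stale space accounting can make 'space available' report 0 MB. sp_spaceused with @updateusage recomputes the counts (run it in the affected database):

EXEC sp_spaceused @updateusage = N'TRUE'

DBCC UPDATEUSAGE (0)   -- 0 means the current database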