I have a large database currently using about 5GB of space.
I deleted some records from one table and the free space increased.
So the question is: if I delete some of the data from this table,
will the database size decrease? Is there anything else I can do to
reduce the size of the database?
Thank you so very much for any help with this!
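In case it helps, a minimal sketch of how the free space inside the files can be checked and then released back to the operating system; MyDB is a placeholder for the real database name, and shrinking is worth doing only occasionally since it can fragment indexes:

-- Database-level picture: total size vs. unallocated and unused space.
EXEC sp_spaceused;

-- Release free space back to the OS, leaving roughly 10% free inside the files.
DBCC SHRINKDATABASE (MyDB, 10);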
How does one make tempdb smaller? When I try to reduce the allocated size it won't let me. It fell over during a process; I have tried to shrink the database in Enterprise Manager and tried DBCC SHRINKDATABASE in Query Analyzer, but to no avail.
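A sketch of the file-level shrink for tempdb, assuming the default logical file names tempdev and templog (sp_helpfile confirms them); if space is still held by active work, restarting the instance re-creates tempdb at its configured size, which is often the practical way out:

USE tempdb;
EXEC sp_helpfile;                 -- confirm the logical file names
DBCC SHRINKFILE (tempdev, 100);   -- target size for the data file, in MB
DBCC SHRINKFILE (templog, 50);    -- target size for the log file, in MB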
Hi, if we consider the following scenario: a school with 500 students. On average a student takes x courses per year and y tests per course. Table of grades:

+------------+-----------+---------+-------+
| student_id | course_id | term_id | grade |
+------------+-----------+---------+-------+
|            |           |         |       |
+------------+-----------+---------+-------+
|            |           |         |       |
+------------+-----------+---------+-------+

On average, x = 20 and y = 6, so the number of records = 500 * 20 * 6 = 60,000 records per year. Is this design correct considering the number of records generated per year? Is it important to reduce the number of records or not? Thank you.
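For scale, 60,000 rows per year is tiny for any modern database engine, so the row count by itself is not a reason to change the design. A hypothetical sketch of the grades table as drawn above (names and types are assumptions):

-- Hypothetical DDL for the grades table described above. Note that with y tests
-- per course the four columns shown do not form a unique key on their own; a
-- test/assessment identifier (not in the original sketch) would be needed for that.
CREATE TABLE grades (
    student_id int NOT NULL,
    course_id  int NOT NULL,
    term_id    int NOT NULL,
    grade      decimal(5,2) NULL
);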
I have a table in my database with almost 45 columns, and the row size is 10,468 bytes. Most of the columns are varchar, and I think that because of poor knowledge of the data most of the varchar columns were given more length than they need. Now I want to decrease the size of those columns so that the row size comes down to around 8K bytes. If I do this now, does it affect the table's performance much? In fact, can I do this at all, given that there is a lot of data (almost 2 million rows) in the table? If it is possible, is there anything that should be taken care of before changing the column lengths?
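Reducing a varchar's declared length does not rewrite existing rows (varchar already stores only the actual data), but the ALTER will fail if any value is longer than the new limit, so checking first is prudent on a 2-million-row table. A sketch with placeholder names (dbo.BigTable, SomeColumn):

-- Check whether any existing value would exceed the proposed new length.
SELECT MAX(LEN(SomeColumn)) AS longest_value
FROM dbo.BigTable;

-- Shrink the column once the data is known to fit; keep the NULL/NOT NULL setting explicit.
ALTER TABLE dbo.BigTable
ALTER COLUMN SomeColumn varchar(100) NULL;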
In an ASP application, which used Access and is now being migrated to SQL Server 2005, I have a medium-sized query with many inner joins. I optimized the tables with indexes, etc., and if I run the query in SSMS it returns about 55 rows in less than a second. On the other hand, when the ASP page passes the query to SQL Server 2005, it takes 10 seconds to get the data. Using the Profiler I found that sp_fetch is being executed a lot of times. It looks like the query is decomposed into smaller pieces and the rows are selected one by one. I'm using OleDb.
How do I make ADO (the culprit, in my opinion) execute that query all at once?
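If the repeated fetches come from ADO opening a server-side cursor over the raw SQL text, one workaround sometimes suggested is to move the joins into a stored procedure with SET NOCOUNT ON and call it as a single command (adCmdStoredProc with a forward-only, client-side cursor). The name and body below are placeholders, not the actual query:

-- Hypothetical wrapper so the client runs one batch instead of many cursor fetches.
CREATE PROCEDURE dbo.usp_GetReportRows
AS
BEGIN
    SET NOCOUNT ON;   -- suppress extra 'rows affected' messages that can confuse clients
    SELECT a.Col1, b.Col2                     -- placeholder columns and tables
    FROM dbo.TableA AS a
    INNER JOIN dbo.TableB AS b ON b.AId = a.Id;
END

The real fix may simply be the ADO cursor settings rather than the procedure, so it is worth testing the cursor type and location first.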
We are processing 6,000,000 rows (a 2 GB file) from a flat file and loading them into database tables using OLE DB Destination components. In the data pipeline of an SSIS package we have 1 flat file source reader, 7 Lookup components (full cache mode), 1 Multicast component and 2 OLE DB destinations with the fast load option.
We have observed that the first 1,000,000 rows are processed and loaded into the target tables in just 4 minutes. The second set of 1,000,000 rows takes 15 minutes. After that, SSIS takes approximately 8-10 minutes for each further 100,000 rows. We are not able to identify the reason for this unexpected behaviour of SSIS.
We thought that, because the input file is 2 GB, SSIS was not able to manage it and was slowing down over the course of execution. We split the big input file into 60 small files of about 37 MB each. Then we modified the package by adding a For Each Loop task to process all 60 small files and load them into the database server sequentially. Even with this approach, data loading slowed down drastically after processing 13 files.
In order to verify whether there is any problem with reading the source file or with the transformations, we replaced the OLE DB destination components with Flat File destinations. With the Flat File destinations the processing rate is very constant: every 8 minutes the package processes 1,000,000 rows and writes them to the destination files. So there is no problem with either the Lookup components or the flat file source reader.
We are sure that the target database server is in the same state/condition from the start to the end of package execution. The client box on which we are running the package has 1 GB of RAM. During package execution the CPU usage is around 30% and PF usage is 580 MB. SP1 is also installed on both the client and the server.
Does anyone have a clue what is causing the data load to slow down over the course of package execution?
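Since writes to flat files stay constant, one hypothesis (not confirmed in the post) is that the slowdown comes from index maintenance or one very large transaction on the OLE DB destination tables. A sketch to check the destination indexes on SQL Server 2005; dbo.TargetTable and IX_TargetTable_Col1 are placeholders:

-- List nonclustered indexes on a destination table.
SELECT i.name, i.type_desc
FROM sys.indexes AS i
WHERE i.object_id = OBJECT_ID('dbo.TargetTable')
  AND i.type_desc = 'NONCLUSTERED';

-- If index maintenance is the bottleneck, one test is to disable a nonclustered
-- index before the load and rebuild it afterwards.
ALTER INDEX IX_TargetTable_Col1 ON dbo.TargetTable DISABLE;
-- ...run the SSIS load...
ALTER INDEX IX_TargetTable_Col1 ON dbo.TargetTable REBUILD;

Checking the destination's 'Rows per batch' and 'Maximum insert commit size' settings is another common suggestion, since the default commits all rows in one batch at the end.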
Hi, I use this script that shows me the size of each table and sums up all the table sizes.
SELECT X.[name],
       REPLACE(CONVERT(varchar, CONVERT(money, X.[rows]), 1), '.00', '') AS [rows],
       REPLACE(CONVERT(varchar, CONVERT(money, X.[reserved]), 1), '.00', '') AS [reserved],
       REPLACE(CONVERT(varchar, CONVERT(money, X.[data]), 1), '.00', '') AS [data],
       REPLACE(CONVERT(varchar, CONVERT(money, X.[index_size]), 1), '.00', '') AS [index_size],
       REPLACE(CONVERT(varchar, CONVERT(money, X.[unused]), 1), '.00', '') AS [unused]
FROM (SELECT CAST(object_name(id) AS varchar(50)) AS [name],
             SUM(CASE WHEN indid < 2 THEN CONVERT(bigint, [rows]) END) AS [rows],
             SUM(CONVERT(bigint, reserved)) * 8 AS reserved,
             SUM(CONVERT(bigint, dpages)) * 8 AS data,
             SUM(CONVERT(bigint, used) - CONVERT(bigint, dpages)) * 8 AS index_size,
             SUM(CONVERT(bigint, reserved) - CONVERT(bigint, used)) * 8 AS unused
      FROM sysindexes WITH (NOLOCK)
      WHERE sysindexes.indid IN (0, 1, 255)
        AND sysindexes.id > 100
        AND object_name(sysindexes.id) <> 'dtproperties'
      GROUP BY sysindexes.id WITH ROLLUP) AS X
ORDER BY X.[name]
The problem is that the sum of all the tables is not the same as the size when I make a full database backup. For example, when I run this query against my database I see a sum of 111,899 KB, which is about 111 MB, but when I do a full backup of that database the size of the full backup is 1.5 GB. Why is that, and where does that size come from?
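A full backup contains every used page plus the portion of the log needed to make it consistent, so it can easily be larger than the sum of the table figures above, especially if the log file is big. A sketch to compare the allocated file sizes with the per-table totals (run in the database in question):

-- Allocated size of each data and log file, in MB (SQL Server 2000-era system tables).
SELECT name, size * 8 / 1024 AS allocated_mb
FROM sysfiles;

-- Database-level summary for comparison with the per-table sum.
EXEC sp_spaceused;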
I am trying to resize a database's initial log file from 500 MB to 2 MB. I'm using:
ALTER DATABASE <DBNAME> MODIFY FILE ( NAME = <DBLOGFILENAME>, SIZE = 2 )
And I'm getting "MODIFY FILE failed. Specified size is less than current size." I tried going into the database properties and setting the log file to 2 MB, but it doesn't keep the changes.
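ALTER DATABASE ... MODIFY FILE can only increase a file's size; going below the current size requires a shrink instead. A minimal sketch, assuming the database is MyDB and the logical log file name is MyDB_Log (sp_helpfile shows the real name):

USE MyDB;
EXEC sp_helpfile;                 -- find the logical name of the log file
DBCC SHRINKFILE (MyDB_Log, 2);    -- target size in MB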
Hi, I am using EXEC sp_helpdb and DBCC SQLPERF(LOGSPACE) to get the database size and log size. Does this give the correct database size and log size, or is there another way to get the log size and database size through Query Analyzer?
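For a query-only alternative (a sketch, not necessarily more accurate than sp_helpdb), the per-file allocations can be read from sysfiles inside the database in question:

-- Allocated size per file in MB; on SQL Server 2000-era systems status bit 0x40 marks log files.
SELECT name,
       size * 8 / 1024 AS size_mb,
       CASE WHEN (status & 0x40) <> 0 THEN 'log' ELSE 'data' END AS file_type
FROM sysfiles;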
I'm getting this error while trying to insert records into a SQL Server Compact Edition database. I have pasted my connection string that was used when creating the database as well as for accessing that same database from my Windows application.
Thanks for any help any of you can give!
Data Source=OnTheGo.sdf;Encrypt Database=True;Password=<password>;Max Database Size=4091
I am developing a smart device application with Visual Studio .NET 2005 and a SQL Server Compact Edition database. I am also using merge replication to synchronize the data from the mobile device to the SQL Server.
My database size is around 350 MB, so when I try to synchronize, this is the error message I get: "The database file is larger than the configured maximum database size. The setting takes effect on the first concurrent database connection only. [Required Max Database size (in MB; 0 if unknown) = 129]."
I tried changing the Max Database Size in the connection string, and my connection string looks as follows, but I still did not have any luck.
Hi, my database is on a remote server that I cannot access directly; I can access it only with Query Analyzer. My log file size is 9 MB but there is hardly anything in the database, only a few tables. So how can I reduce the log file size with a query?
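Since only Query Analyzer is available, a script-only approach works. A sketch for SQL Server 2000/2005 (WITH TRUNCATE_ONLY was removed in later versions); MyDB and MyDB_Log are placeholders, and sp_helpfile shows the real logical log name. Note that truncating the log breaks the log-backup chain, so a full backup afterwards is advisable:

USE MyDB;
EXEC sp_helpfile;                        -- find the logical name of the log file
BACKUP LOG MyDB WITH TRUNCATE_ONLY;      -- discard the inactive log (SQL 2000/2005 only)
DBCC SHRINKFILE (MyDB_Log, 2);           -- shrink the log file to about 2 MB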
We have a SQL Server database that is read-only most of the time, because the data comes from another database (which is not a SQL Server database; it is now about 2 GB). Every day we have a job that runs as follows:
Step 0. Extract the data from the other database and create a plain text file for each table for the BCP job.
Step 1. Drop all of the indexes.
Step 2. Truncate all of the tables.
Step 3. BCP in all of the data from the plain text files.
Step 4. Create all of the indexes again.
Step 5. Shrink the database.
Everything runs fine, but the database has grown 1 GB since yesterday. I am sure we do not have that much data entry in one day. Can anyone give some suggestions? I wonder whether I should shrink the database, or shrink the data file, before creating the indexes. How can I find out how much space all of the indexes take? Thank you very much. Judy
I am a beginner with SQL. I have been loading data (txt files) into my database, it has now approached 4 GB in size, and it will not let me expand any further on the primary filegroup. My 'boss' said that SQL has no size limit. I am using the Desktop edition.
Is there a stored procedure that returns the current size of a database, and the maximum size of a database whose growth property IS NOT set to unlimited?
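I am not aware of a single built-in procedure that returns both values in one row, but a sketch over sysfiles gives the current and maximum size per file; maxsize is -1 when growth is unrestricted, so those files can be filtered out or reported as NULL:

-- Current and maximum size per file, in MB (run in the database of interest).
SELECT name,
       size * 8 / 1024 AS current_mb,
       CASE WHEN maxsize = -1 THEN NULL
            ELSE maxsize * 8 / 1024 END AS max_mb
FROM sysfiles;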
When I run sp_spaceused on my database it gives me a total size of 258 MB. The backup of this database reads 226 MB. The Properties screen tells me that my data is a total of 278 MB. I am running SQL 7.0. Why am I getting different sizes?
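Each of those numbers measures something slightly different: allocated file space, used pages, and what actually went into the backup. A sketch to line them up; the backup query assumes backups are recorded in msdb, and MyDB is a placeholder for the database name:

EXEC sp_spaceused;    -- database size and unallocated space as tracked in the catalog

EXEC sp_helpfile;     -- allocated size of each data and log file

-- Size of the most recent backups as recorded in msdb (backup_size is in bytes).
SELECT TOP 5 backup_finish_date, backup_size
FROM msdb.dbo.backupset
WHERE database_name = 'MyDB'
ORDER BY backup_finish_date DESC;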
I have a simple question: why am I running out of space with my current 3.5 GB database? I am constantly expanding it, yet when I dump the database nightly the dump is just around 1.5 GB.
Newbie here... I started with a database of 90 MB; it has grown to 300 MB in 2 months. What is the best way to see the largest table, or to find out why this database grew so fast? Thanks!
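One common sketch uses the undocumented (so unsupported) sp_MSforeachtable helper to collect sp_spaceused output for every table, then sorts by reserved space so the largest tables come out on top:

-- Collect per-table space usage into a temp table.
CREATE TABLE #spaceused (
    name       sysname,
    rows       varchar(20),
    reserved   varchar(20),
    data       varchar(20),
    index_size varchar(20),
    unused     varchar(20)
);

EXEC sp_MSforeachtable 'INSERT INTO #spaceused EXEC sp_spaceused ''?''';

SELECT *
FROM #spaceused
ORDER BY CAST(REPLACE(reserved, ' KB', '') AS int) DESC;

DROP TABLE #spaceused;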
Hi, my database data file is over 2.1 GB now. Can anyone tell me whether my data file is too big, or unsafe, or whether there is a limit in MS SQL 2000? My last choice is to move some data to another database, but I have to consider the reports I have to produce, so I really don't want to do that. Can anyone advise? Thanks.
Hi, I have 30 servers and about 600 databases. I want to know the size of the databases (data files as well as log files), both used and allocated space. Can anyone tell me the easiest way to get this information?
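One sketch that can be run on each server (and scheduled or collected centrally) loops over every database with the undocumented sp_MSforeachdb helper and records allocated versus used space per file; FILEPROPERTY(name, 'SpaceUsed') returns the used pages for files of the current database:

-- Allocated vs. used space per file for every database on an instance.
CREATE TABLE #dbsizes (
    db_name      sysname,
    file_name    sysname,
    allocated_mb int,
    used_mb      int
);

EXEC sp_MSforeachdb '
USE [?];
INSERT INTO #dbsizes
SELECT ''?'', name,
       size * 8 / 1024,
       CAST(FILEPROPERTY(name, ''SpaceUsed'') AS int) * 8 / 1024
FROM sysfiles;';

SELECT * FROM #dbsizes ORDER BY db_name, file_name;
DROP TABLE #dbsizes;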
Hi all, we are using SQL 6.5, service pack 3. I noticed that one of our databases was maxed out, so I increased the size of the device and then increased the size of the database. When I checked the database, it still shows as maxed out. I did it twice, and it still shows that the space has been used. Can anyone please explain why this is happening and how to get the actual space measurement? Thanks in advance.
After we ran sp_spaceused on one database, the value in the UNUSED column is a negative number, -1324 KB. Can someone explain how this could happen? I really appreciate it.
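Negative numbers from sp_spaceused usually mean the page and row counters in sysindexes have drifted out of date; a sketch to refresh them and report again:

DBCC UPDATEUSAGE (0);                      -- 0 = the current database
EXEC sp_spaceused @updateusage = 'TRUE';   -- recount, then show the corrected figures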
I need to check the log size of my database periodically and send a notification if the size exceeds 8 GB. I want to set this up as a job that runs every 30 minutes or so and pages the DBA if the size exceeds that limit. Can anyone give me the command that can be used to read the log size into a variable? I am new to SQL and would appreciate your expert advice.
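A sketch of the check itself (the paging piece would be whatever alert/operator mechanism is already in place; raising a severity-16 error lets a SQL Agent alert pick it up). It captures DBCC SQLPERF(LOGSPACE) into a temp table and reads the value for one database into a variable; MyDB is a placeholder:

-- Capture log sizes for all databases, then test one against an 8 GB threshold.
CREATE TABLE #logspace (
    db_name      sysname,
    log_size_mb  float,
    log_used_pct float,
    status       int
);

INSERT INTO #logspace
EXEC ('DBCC SQLPERF(LOGSPACE)');

DECLARE @log_mb int;
SELECT @log_mb = CAST(log_size_mb AS int)
FROM #logspace
WHERE db_name = 'MyDB';

IF @log_mb > 8192
    RAISERROR ('Log for MyDB is %d MB, over the 8 GB threshold.', 16, 1, @log_mb);

DROP TABLE #logspace;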
I want to know how I can do capacity planning in SQL Server 2000. Do you write scripts (VBS) that query the sysfiles tables and transfer the results to Excel? Please help me. Phil
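Rather than exporting to Excel, one sketch is to snapshot sysfiles into a history table on a schedule (for example a nightly SQL Agent job step run in each database) so growth can be trended later with a plain query; the table name and schedule are assumptions:

-- History table to trend file growth over time (create once).
CREATE TABLE dbo.FileSizeHistory (
    captured_at datetime NOT NULL DEFAULT (GETDATE()),
    file_name   sysname  NOT NULL,
    size_mb     int      NOT NULL
);

-- Scheduled step: record the current allocated size of every file.
INSERT INTO dbo.FileSizeHistory (file_name, size_mb)
SELECT name, size * 8 / 1024
FROM sysfiles;

-- Trend query: growth per file over time.
SELECT file_name, captured_at, size_mb
FROM dbo.FileSizeHistory
ORDER BY file_name, captured_at;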
I have a database that is almost 1.55 GB in size according to the database properties window.
However, I checked the size of all the tables and indexes using the stored procedure sp_spaceused and the sysindexes table, and the total size of all tables and indexes is only 162 MB.
Does anyone know why the database is so big, and how I may be able to reduce the size of it?
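A sketch to see where the remaining space lives, i.e. whether it is unused space inside the data file or a large log file; allocated minus used is what a shrink could hand back to the operating system:

-- Allocated vs. used space per file for the current database, in MB.
SELECT name,
       size * 8 / 1024 AS allocated_mb,
       CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) * 8 / 1024 AS used_mb
FROM sysfiles;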
I know I have read that MSDE had a 2 GB limit on database size and that Express should have one of 4 GB, but I have by 'accident' ended up with a database with a 4400 MB .mdf and a 2000 MB .log in my Express SP1... when will it stop me, and how?
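For what it is worth, the 4 GB cap in SQL Server 2005 Express applies to data files only (the log is not counted), and to my understanding it is enforced when the engine tries to grow or allocate space beyond it. A sketch to see how close the data files are to the cap:

-- Total data-file size in MB; the Express limit does not include the log file.
SELECT SUM(size) * 8 / 1024 AS data_mb
FROM sys.database_files
WHERE type = 0;   -- 0 = rows (data) files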
-- This one's tricky. You have to use calculus and imaginary numbers for this. You know, eleventeen, thirty-twelve and all those.