The following SQL statement seems to find a particular table's row count
programmatically:
select top 1 [rows],rowcnt
from sysindexes
where ID = object_id('aUserTable')
and status = 0
and used > 0
However,
a) I'm not 100% sure of its consistency;
b) both the [rows] column and the [rowcnt] column seem to produce the same data; which one is supposed to be more accurate (or more up to date)?
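A minimal sketch of a more reliable check, assuming 'aUserTable' is just a placeholder name: sp_spaceused reports both the row count and the space used, and passing @updateusage = 'true' refreshes the counters in sysindexes first, so the figures should be at least as current as [rows]/[rowcnt].

-- sp_spaceused returns rows plus reserved/data/index_size/unused for the table;
-- @updateusage = 'true' corrects the page and row counters in sysindexes before reporting.
EXEC sp_spaceused @objname = 'aUserTable', @updateusage = 'true'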
I'm trying to determine how much space some tables use (SQL2000), and I found 2 suggestions posted earlier, but can't get them to work for me.
The first was "Right click on the DB in Enterprise Manager, select view then taskpad."
When I try that, I get some of the tables displayed with the info I want, but I can't see them all and can't scroll down beyond the first 22 tables in the database.
It says "command copleted successfully, but where does the output to this go ?? Is there something other than "print" I should use ? The grid pane is empty.
Is there any way to get the size of an individual column in a table?
I know we can use sp_spaceused to get the size of the table, but my question is different. I have a table with 50 columns and approximately 2 million rows in it. I want to know which column is taking up most of the space.
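A minimal sketch, assuming a hypothetical table MyTable with columns Col1 and Col2: summing DATALENGTH per column gives the actual bytes stored in each column's data (one SUM per column; the column list can be generated from syscolumns for a 50-column table). This ignores per-row and index overhead.

-- Approximate bytes consumed by each column's data across all rows.
SELECT SUM(CONVERT(bigint, DATALENGTH(Col1))) AS Col1_bytes,
       SUM(CONVERT(bigint, DATALENGTH(Col2))) AS Col2_bytes
FROM MyTable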
How can we monitor all the tables in all databases and send notifications to the team? Is there a way to find the number of rows and the size of a table last month and work out the growth % now?
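SQL 2000 keeps no size history of its own, so one common approach is to snapshot the counters into a history table from a scheduled job and compare months later; a rough sketch, assuming a hypothetical TableSizeHistory table and that the insert runs in each database being tracked:

-- One-time setup: history table for the size snapshots.
CREATE TABLE TableSizeHistory (
    capture_date datetime DEFAULT GETDATE(),
    db_name      sysname,
    table_name   sysname,
    row_count    bigint,
    reserved_kb  bigint
)

-- Scheduled snapshot (e.g. a monthly SQL Agent job), run in the database being tracked.
INSERT INTO TableSizeHistory (db_name, table_name, row_count, reserved_kb)
SELECT DB_NAME(),
       OBJECT_NAME(id),
       SUM(CASE WHEN indid < 2 THEN rowcnt ELSE 0 END),
       SUM(CONVERT(bigint, reserved)) * 8
FROM sysindexes
WHERE indid IN (0, 1, 255)
  AND OBJECTPROPERTY(id, 'IsUserTable') = 1
GROUP BY id

-- Growth % then comes from a self-join of two capture dates on db_name + table_name;
-- the same job could send the notification (for example with xp_sendmail).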
Hi, how do I find the max row size for a particular table? This was the error I received while executing my proc with the relevant i/p I need: "cannot sort a row of size 8192, which is greater than the allowable maximum of 8094". I also understand that the max byte size of a row is 8060 bytes. But what is this 8094? TIA, Seetha
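For the first part of the question, a sketch that computes the maximum possible (defined) row size from the column definitions, assuming the table name below is a placeholder; actual rows can of course be shorter when the variable-length columns are not full.

-- Maximum defined row size in bytes, from the declared column lengths.
SELECT OBJECT_NAME(id) AS table_name,
       SUM(length)     AS max_defined_row_bytes
FROM syscolumns
WHERE id = OBJECT_ID('aUserTable')   -- placeholder table name
GROUP BY id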
Every time, I am doing my database shrinking manually. What I am asking is: every time I shrink my database, SQL shows the minimum size the database can be shrunk to. How does it get that minimum size? I need to find out how to get the minimum size of a particular database programmatically.
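A sketch of one way to approximate that figure programmatically, assuming the minimum size shown by the shrink dialog is roughly the space actually in use in each file: FILEPROPERTY(..., 'SpaceUsed') reports the used pages per file, so run this in the database in question.

-- Allocated vs. used space per file, in MB; the used figure approximates
-- how small each file could be shrunk (the log file figure is only a rough guide).
SELECT name,
       size / 128.0                            AS file_size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS space_used_mb
FROM sysfiles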
I'm trying to establish the MB usage of a series of nonclustered indexes. I'm used to using the Manage Indexes GUI in 6.5, and SHOWCONTIG doesn't quite give me what I want; any suggestions?
We're looking at optimizing some of our tables because we have indexes on columns that are not used. So for example we might have a table that has 6GB of data and 4GB in indexes (according to sp_spaceused). We need to know how much of the 4GB of indexes is consumed by each of the indexes individually. I've tried dbcc showcontig, but that doesn't tell me the amount of space used by the index that I run it for.
Second (and by far not as important), is there a way to determine how much space is being consumed by the data in a specific column? The original creator of some of the tables added an incrementing identity column that is used nowhere in the system. I'd like to be able to say, "If we drop this column we'll save XXX in space."
Knowing the index space is more critical; the column space would be nice but not necessary.
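A sketch of a per-index breakdown on SQL 2000, assuming the sysindexes counters are reasonably current (DBCC UPDATEUSAGE can refresh them) and that the table name below is a placeholder; note that for indid 1 the figure includes the clustered index's data pages, and auto-created statistics (names starting with _WA_Sys_) may also appear and can be ignored.

-- Space used by each index of one table, in KB
-- (indid 1 = clustered index incl. data pages, 2-250 = nonclustered indexes).
SELECT name                      AS index_name,
       indid,
       CONVERT(bigint, used) * 8 AS used_kb
FROM sysindexes
WHERE id = OBJECT_ID('aUserTable')   -- placeholder table name
  AND indid BETWEEN 1 AND 250
ORDER BY indid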
Hi, there is a facility to back up the database, and I can use it. But before I do a backup I want to check whether the available disk space is enough to back up that database. I have a 22GB database MDF file, and when I took a backup of it, the backup was only 3GB, so I cannot use the size of the MDF file to estimate the database dump file. Is there any facility available to find out the size of the backup dump before doing the backup? Thanks, Nabhonil.
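A rough sketch of an estimate, assuming a full backup is close to the size of the pages actually allocated to objects rather than the size of the MDF file (log activity and backup overhead are not covered by this):

-- Approximate full-backup size: total reserved pages across all objects, in MB.
SELECT SUM(CONVERT(bigint, reserved)) * 8 / 1024.0 AS approx_backup_mb
FROM sysindexes
WHERE indid IN (0, 1, 255)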
Hi, I use this script, which shows me the size of each table and sums up all the table sizes.
SELECT X.[name],
       REPLACE(CONVERT(varchar, CONVERT(money, X.[rows]), 1), '.00', '')       AS [rows],
       REPLACE(CONVERT(varchar, CONVERT(money, X.[reserved]), 1), '.00', '')   AS [reserved],
       REPLACE(CONVERT(varchar, CONVERT(money, X.[data]), 1), '.00', '')       AS [data],
       REPLACE(CONVERT(varchar, CONVERT(money, X.[index_size]), 1), '.00', '') AS [index_size],
       REPLACE(CONVERT(varchar, CONVERT(money, X.[unused]), 1), '.00', '')     AS [unused]
FROM (SELECT CAST(object_name(id) AS varchar(50))                      AS [name],
             SUM(CASE WHEN indid < 2 THEN CONVERT(bigint, [rows]) END) AS [rows],
             SUM(CONVERT(bigint, reserved)) * 8                        AS reserved,
             SUM(CONVERT(bigint, dpages)) * 8                          AS data,
             SUM(CONVERT(bigint, used) - CONVERT(bigint, dpages)) * 8  AS index_size,
             SUM(CONVERT(bigint, reserved) - CONVERT(bigint, used)) * 8 AS unused
      FROM sysindexes WITH (NOLOCK)
      WHERE sysindexes.indid IN (0, 1, 255)
        AND sysindexes.id > 100
        AND object_name(sysindexes.id) <> 'dtproperties'
      GROUP BY sysindexes.id WITH ROLLUP) AS X
ORDER BY X.[name]
The problem is that the sum of all the tables is not the same as the size of a full database backup. For example, when I run this query against my database I see a sum of 111,899 KB, which is about 111MB, but when I do a full backup of that database the size of the backup file is 1.5GB. Why is that, and where does this size come from?
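A sketch to help narrow that gap down, assuming it comes from space a full backup includes that the per-table sum does not (for example the active portion of the transaction log): compare the physical file sizes and the log usage with the table total.

-- Physical file sizes for the current database (data and log), in MB.
SELECT name,
       CASE WHEN groupid = 0 THEN 'log' ELSE 'data' END AS file_type,
       size / 128.0                                     AS file_size_mb
FROM sysfiles

-- Log usage per database, as a percentage.
DBCC SQLPERF(LOGSPACE)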
I need to set up a script to read all the table names in the database above and then query the database to find the list of stored procedures using each table (SQL Server).
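A rough sketch of one way to do it on SQL 2000, assuming a text search of procedure definitions is acceptable (sysdepends is often incomplete because of deferred name resolution); matching the table name against syscomments can produce false positives when the name also appears in comments or inside other identifiers.

-- For every user table, list the stored procedures whose definition mentions its name.
SELECT DISTINCT t.name AS table_name,
                p.name AS procedure_name
FROM sysobjects t
JOIN syscomments c ON c.text LIKE '%' + t.name + '%'
JOIN sysobjects p  ON p.id = c.id
                  AND p.xtype = 'P'
WHERE t.xtype = 'U'
ORDER BY t.name, p.name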
I am trying to resize a database's initial log file from 500M to 2M. I'm using:
ALTER DATABASE <DBNAME> MODIFY FILE ( NAME = <DBLOGFILENAME>, SIZE = 2 )
and I'm getting "MODIFY FILE failed. Specified size is less than current size." I tried going into the database properties and setting the log file to 2M, but it doesn't keep the changes.
I have a table 'stores' that has 3 columns (storeid, article, doc), and a second table 'allstores' that has 3 columns (storeid (always 'ALL'), article, doc). The 'stores' table's storeid column holds a store id, which will have multiple articles and docs. The 'allstores' table has 'ALL' as the store for every article and doc combination; it is like the master lookup table for all possible article and doc combinations. The 'stores' table has the actual article and doc per storeid.
What I want to pull is every article/doc combination that exists in the 'allstores' table but does not exist in the 'stores' table, per storeid. So if an article/doc combination exists in the 'allstores' table, and store 50 does not use that combination in the 'stores' table but store 51 does, I want the output to be storeid 50 together with the combination that does not exist for that storeid. I will try this example:
'allstores'                 'stores'
storeid  doc   article      storeid  doc   article
ALL      0010  001          101      0010  001
ALL      0010  002          101      0010  002
ALL      0011  001          102      0011  002
ALL      0011  002
So I want the query to pull the row from 'allstores' that does not exist in 'stores', which in this case would be the 3rd record, "ALL 0011 001".
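A sketch of one way to write that, assuming every storeid present in 'stores' should be checked against the full 'allstores' list and using the column names from the example above:

-- For each store, the article/doc combinations from 'allstores'
-- that have no matching row in 'stores' for that storeid.
SELECT s.storeid, a.doc, a.article
FROM (SELECT DISTINCT storeid FROM stores) AS s
CROSS JOIN allstores AS a
WHERE NOT EXISTS (SELECT 1
                  FROM stores st
                  WHERE st.storeid = s.storeid
                    AND st.doc     = a.doc
                    AND st.article = a.article)
ORDER BY s.storeid, a.doc, a.article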
The largest table in our database eats up over 4GB. We ran "sp_spaceused" for this table. The length of all columns of this table (just int, char, varchar, money, and numeric field types) is about 200 bytes, and the table has around 1,300,000 rows, but the reserved space for this table is 4,800,000 KB and the data space is around 4,600,000 KB.
How can each row take 3.7 KB on average when the total size of all columns is just 200 bytes? Is there anything else I need to check? Can anyone suggest what causes this problem, or is it normal?
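A sketch of the first things worth checking, assuming the table name below is a placeholder: the counters sp_spaceused reads can be badly out of date, and heavy fragmentation or very low page density can also inflate the reserved figure.

-- Refresh the page/row counters that sp_spaceused reads, then re-check the numbers.
DBCC UPDATEUSAGE (0, 'aUserTable')   -- 0 = current database; placeholder table name
EXEC sp_spaceused @objname = 'aUserTable', @updateusage = 'true'

-- Check fragmentation and average page density for the table.
DBCC SHOWCONTIG ('aUserTable')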
I am wondering if there is a limitation on maximum table size in SQL 6.5. I have a table with 2.6GB and 12,000,000 rows in a SQL 6.5 database. Is this a problem?
Is there a practical size limit, in MB, for a table in SQL Server 6.5?
Is there a size that, once exceeded, degrades performance significantly?
I am speaking of raw megabytes. The table in question will consist of only 3 int columns but has the possibility of becoming VERY LARGE (1,000,000+ rows). I am still in the design phase and can change my strategy if this will prove to be a problem.
I have been trying to solve this problem for quite some time. I was wondering if I can get some help.
These questions are all about MSSQL 6.5:
1. Is there a limit on the size of a table?
2. Does it make sense to have more tables if the row size is more than the limit set by 6.5, or should I instead have more rows in a different table with duplicate entries for a particular field?
3. What is the number of rows before the performance of a query starts getting affected?
Is there a maximum or optimum number of rows I should have in a table so that I can have the fastest search queries? I am a novice programmer and just developed something for my workplace. The database has a table created by converting data from Excel spreadsheets. There were 24 spreadsheets for 12 months, each having approximately 500 rows. Designed this way, the table will have approximately 24 * 500 = 12,000 records. Should I consider redesigning the database to make searches faster?
Hi. I am trying to get the size of each row of each table in the database. Is that possible, using an SP or UDFs? I don't want the column sizes of each table but the total data size of each row. So, for example, if I have 5 rows in each of 3 tables, I need a query that will return 15 rows with the size of each row (the size of all column data summed together). Thanks.
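A sketch for a single table, assuming a hypothetical MyTable with columns Col1 and Col2; the same expression would have to be generated per table (for example from syscolumns) to cover every table in the database.

-- One output row per table row: total bytes of data across all columns (NULLs count as 0).
SELECT ISNULL(DATALENGTH(Col1), 0)
     + ISNULL(DATALENGTH(Col2), 0) AS row_data_bytes
FROM MyTable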