Does DB Size Decrease When I Delete A Huge Table?
Jan 15, 2004
Hi,
My DB size (right-click the DB name, Data Files tab, Space Allocated field) was 10914 MB.
I deleted a huge table (1.2 million records x 15 columns).
I checked the DB size again. It didn't change.
Shouldn't it decrease, since I deleted a huge table?
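For what it's worth, deleting or dropping a table frees pages inside the data file, but SQL Server does not shrink the file itself, so the Space Allocated figure stays put and the freed space just shows up as unallocated. A hedged way to confirm this:

EXEC sp_spaceused
-- 'database_size' stays the same after the drop, while 'unallocated space' grows;
-- the file only gets smaller if it is shrunk explicitly (e.g. DBCC SHRINKDATABASE)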
View 14 Replies
Oct 31, 2005
I have the next question, and I would like to hear what you think about it, and whether there is a better solution for "my problem". Here is the question: I have a huge table with 60 GB of data (image files). The problem happens whenever I try to ALTER the structure of the table. For example, I change a field from char(3) to char(4). SQL Server then performs the "alter table" command, which must be something similar to "insert into the new table + drop the actual table", and for that I need about 60 GB of space for my LOG file, and it takes hours to complete the operation.
Is this the only way to alter a single field in my table?
I would like to hear your opinions. Thanks.
Alberto
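A hedged note on the command itself (table and column names hypothetical): changing a fixed-width type such as char(3) to char(4) forces SQL Server to rewrite, and fully log, every row of the 60 GB table, which explains the time and log usage:

ALTER TABLE dbo.ImageTable ALTER COLUMN FileCode char(4) NOT NULL
-- note: had the column been varchar, widening it (varchar(3) -> varchar(4))
-- would be a metadata-only change and near-instant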
View 2 Replies
View Related
May 30, 2008
Hi Everyone,
We have a large test database with millions of records for more than one company site code. Sometimes we want to refresh the data of that database for one or more site codes.
In order to do that, I have to delete all records of the site code we want to refresh on the test database first, then copy a new set of data over from the production database. Since we refresh data based on the site code, I have to use the DELETE command instead of TRUNCATE.
Since this is a huge database with thousands of tables and millions of records per table, I have performance issues with the DELETE command. So what would be the best way to delete a large number of records without writing any information to the database log file?
FYI: The recovery model of this database is Simple.
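Every DELETE is logged, so it cannot bypass the log entirely, but a hedged sketch of a batched delete (table and column names hypothetical) keeps each transaction small so that, under the Simple recovery model, the log space is reused between batches instead of growing:

DECLARE @rows int
SET @rows = 1
WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM dbo.BigTable WHERE SiteCode = 'ABC'  -- small, fully logged batch
    SET @rows = @@ROWCOUNT
    CHECKPOINT  -- in Simple recovery this lets the inactive log space be reused
END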
Regards,
Jdang
View 9 Replies
View Related
Dec 7, 2005
Dear all,
I want to ask how to decrease the log file size.
For example, I have a database whose data file size is 122,166 KB
while the database log file is 6,330,176 KB.
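A hedged sketch (the database and log file names are hypothetical; sp_helpfile shows the real logical name). If the database is in Full recovery, back the log up first so its contents become inactive, then shrink the file:

EXEC sp_helpfile                               -- find the log file's logical name
BACKUP LOG MyDb TO DISK = 'D:\MyDb_log.bak'    -- Full recovery only: empty the active log
DBCC SHRINKFILE ('MyDb_log', 100)              -- shrink the log file to about 100 MB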
Thanks,
Oko Sakti Banget
View 4 Replies
View Related
Jul 30, 2007
Hello Guys,
Just wondering if someone can help me decrease the size of the mdf and ldf files. In the past, the production database "NewUniverse" was allocated 100 GB for the mdf file and 8 GB for the ldf file. However, the data file has only used 30 GB. Now, for disk space reasons, I tried to decrease the data file size from 100 GB to 40 GB, but I am not able to do it.
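A hedged sketch (the data file's logical name is hypothetical; check it with sp_helpfile). DBCC SHRINKFILE takes its target in MB and cannot shrink below the used pages, so shrinking in a few steps is often more reliable than jumping straight to the target:

EXEC sp_helpfile                               -- confirm the data file's logical name
DBCC SHRINKFILE ('NewUniverse_Data', 40960)    -- target size in MB (40 GB)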
Any help in this regard would be appreciated.
View 4 Replies
View Related
Jun 6, 2004
Hi
I installed the "Web Wiz Forums ASP SQL 2000 DB".
It works fine, but
when I added some data in the forum, my DB size grew to, for example, 1.45 MB;
after I deleted that data, the DB size did not decrease.
Is there any code that I must enter in the SQL Server setup file?
Excuse me, I asked this question on the Web Wiz forum site but they didn't
answer me.
If you know what I must do, please tell me.
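For what it's worth, SQL Server keeps the freed pages inside the file until the database is shrunk; there is nothing to change in a setup file. A hedged one-liner (the database name here is hypothetical):

DBCC SHRINKDATABASE ('WebWizForums', 10)  -- release unused space, leaving 10% free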
Thanks
View 6 Replies
View Related
Jan 23, 2008
I have a database whose transaction log grows about 1 GB per day, and I would like to decrease its size daily. Does anyone have any suggestions?
Some friends told me that after the full backup that is done daily, I should perform a backup of the transaction log with the option to truncate it, and afterwards shrink the database. Is that exactly what should be done?
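A hedged sketch of that sequence (database and file names hypothetical; WITH TRUNCATE_ONLY exists through SQL Server 2005 and breaks the log backup chain, so a full backup should follow it). If point-in-time recovery is not needed, switching to the Simple recovery model avoids the daily ritual entirely:

BACKUP DATABASE MyDb TO DISK = 'D:\MyDb_full.bak'  -- the existing daily full backup
BACKUP LOG MyDb WITH TRUNCATE_ONLY                 -- discard the inactive log records
DBCC SHRINKFILE ('MyDb_log', 500)                  -- then shrink the file, e.g. to 500 MB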
Thank you,
Best Regards,
Ralph Haddad
View 22 Replies
View Related
Jul 19, 2007
Hi all,
How do I decrease the database file size in the Primary filegroup?
Thanks,
View 5 Replies
View Related
Oct 15, 2015
I have a database consisting of two main tables and 12 sub-tables.
This was leading to an increase in database size, so we thought of storing the sub-table data in the main tables as XML, in a column of varchar(2000) type.
So we created a new database that had only the 2 main tables, and we stored the data of the sub-tables in the new column of the main table.
Surprisingly, we saw that the database size increased rather than decreased.
View 9 Replies
View Related
Apr 3, 2000
SQL 7 SP1 NT4 SP5
I have a TRANSACTION table with 150 million rows.
I have a USER table.
Each user has about 600 records in the TRANSACTION table.
The TRANSACTION clustered index is on USERID + RECID. The second index is on USERID + Fieldx + Fieldy.
The TRANSACTION table gets about 1.4 million inserts in a normal day and about 40,000 updates.
I want to go through the USER table and delete all users who have not visited me in a while.
I want to do this without substantially hindering performance in a production environment. I can spread the work over a week or two if needed.
The best way I thought of was to grab x users at a time in a cursor and loop through, deleting their corresponding TRANSACTION records.
Does anyone have ideas on a better way? What is going to happen to my indexes during this time?
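A hedged sketch of the batch loop on SQL 7 (the purge-list table is hypothetical): SET ROWCOUNT caps each DELETE so locks and log growth stay small, and the clustered index on USERID + RECID means each batch removes contiguous rows. The indexes are maintained row by row as usual, so expect fragmentation and plan a DBCC DBREINDEX afterwards:

-- assumes the inactive users were first collected into USERS_TO_PURGE (hypothetical)
SET ROWCOUNT 5000                 -- cap each DELETE at 5000 rows
DECLARE @rows int
SET @rows = 1
WHILE @rows > 0
BEGIN
    DELETE FROM TRANS_TBL         -- hypothetical name for the TRANSACTION table
    WHERE USERID IN (SELECT USERID FROM USERS_TO_PURGE)
    SET @rows = @@ROWCOUNT
END
SET ROWCOUNT 0                    -- always reset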
Thanks !!!
View 3 Replies
View Related
Feb 15, 2006
How do I delete a very huge log file to free up some hard disk space?
View 3 Replies
View Related
Jul 14, 2015
I have transactional replication set up on two environments. On one server the distribution database is tiny, but on the second server the distribution database is 5 times bigger and taking up a lot of space, even though both environments have almost the same amount of data.
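A few hedged diagnostics (assuming the default distribution database name); a size gap like this usually traces back to different retention settings or to commands piling up for a slow or orphaned subscription:

USE distribution
SELECT COUNT(*) FROM dbo.MSrepl_commands       -- replicated commands still being retained
SELECT COUNT(*) FROM dbo.MSrepl_transactions   -- transactions still being retained
EXEC sp_helpdistributiondb                     -- compare retention settings between the two servers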
View 15 Replies
View Related
Aug 30, 2005
Has anyone implemented splitting data for an application between two databases because the data size is extremely large? If so, could you please point me to relevant information?
In this split-data scenario, a table would automatically carry over to another database whenever the size limit for the current database is reached. The challenge here is for the DAL (data access layer) to automatically look in the appropriate database when the next row of data is in another database. Or perhaps there is another solution to this tera-size data problem.
Any help on this would be greatly appreciated.
View 8 Replies
View Related
Jul 20, 2005
Hi All,
I've the following table with a PK defined on an IDENTITY column (INSERT_SEQ):

CREATE TABLE MYDATA (
    MID NUMERIC(19,0) NOT NULL,
    MYVALUE FLOAT NOT NULL,
    TIMEKEY INTEGER NOT NULL,
    TIMEKEY_DTTM DATETIME NULL,
    IID NUMERIC(19,0) NOT NULL,
    EID NUMERIC(19,0) NOT NULL,
    INSERT_SEQ NUMERIC(19,0) IDENTITY(1,1) NOT NULL
)
GO
ALTER TABLE MYDATA
ADD CONSTRAINT PK_MYDATA PRIMARY KEY (INSERT_SEQ)
GO

The TIMEKEY_DTTM field is generated, from the value actually inserted into the TIMEKEY field, by the following trigger:

CREATE TRIGGER TIMEKEY1
ON MYDATA
FOR INSERT AS
BEGIN
    DECLARE @M_TIMEKEY_DTTM DATETIME
    SELECT @M_TIMEKEY_DTTM = DATEADD(SECOND, INS.TIMEKEY + EP.GMT_OFFSET * 0, '1970-01-01 00:00:00.000')
    FROM INSERTED INS, LOCATIONINFO EP
    WHERE INS.EID = EP.EID
    UPDATE MYDATA
    SET TIMEKEY_DTTM = @M_TIMEKEY_DTTM
    FROM INSERTED INS, MYDATA MD
    WHERE MD.INSERT_SEQ = INS.INSERT_SEQ
END
GO

There is also a composite, non-unique index defined on the tuple (MID, IID, TIMEKEY, EID):

CREATE INDEX IX_METDATA ON MYDATA (MID, IID, TIMEKEY, EID)
GO

As a consequence of an application design change, I would also change this index to be UNIQUE, but when I try to drop and create it I get an error, because the table stores some duplicated rows...

In order to successfully upgrade the index definition, I wrote some DML statements to look up and remove the duplicated rows, keeping only the first record inserted, i.e. the one with the lowest INSERT_SEQ:

--
-- This table stores the number of duplicated records eventually discovered
-- in the MYDATA table; the initial value for the NUM_DUPLICATES field is
-- 0 (no duplicated record)
--
DROP TABLE DUPLICATES
GO
CREATE TABLE DUPLICATES (
    TABLENAME VARCHAR(17),
    NUM_DUPLICATES NUMERIC(19,0)
)
GO
INSERT INTO DUPLICATES VALUES ('MYDATA', 0)
GO
INSERT INTO DUPLICATES VALUES ('CATEGORIESDATA', 0)
GO

--
-- ///////// CLEAN UP OF MYDATA TABLE
--
DROP TABLE TMP_MYDATA
GO
CREATE TABLE TMP_MYDATA (
    MID NUMERIC(19,0) NOT NULL,
    TIMEKEY INTEGER NOT NULL,
    IID NUMERIC(19,0) NOT NULL,
    EID NUMERIC(19,0) NOT NULL,
    INSERT_SEQ NUMERIC(19,0)
)
GO

--
-- Insert into the TMP_MYDATA table all the duplicated records for
-- the tuple (MID, IID, TIMEKEY, EID) and NULL for the INSERT_SEQ field
--
INSERT INTO TMP_MYDATA (MID, IID, TIMEKEY, EID)
SELECT MID, IID, TIMEKEY, EID
FROM MYDATA
GROUP BY MID, IID, TIMEKEY, EID
HAVING COUNT(*) > 1
GO

--
-- Update the INSERT_SEQ field to the lowest value in the group
-- of duplicated records
--
UPDATE TMP_MYDATA
SET TMP_MYDATA.INSERT_SEQ = (
    SELECT MIN(INSERT_SEQ)
    FROM MYDATA
    WHERE TMP_MYDATA.MID = MYDATA.MID AND
          TMP_MYDATA.IID = MYDATA.IID AND
          TMP_MYDATA.TIMEKEY = MYDATA.TIMEKEY AND
          TMP_MYDATA.EID = MYDATA.EID
)
GO

--
-- Update the value of NUM_DUPLICATES for the MYDATA table.
--
UPDATE DUPLICATES
SET NUM_DUPLICATES = (SELECT COUNT(*) FROM TMP_MYDATA)
WHERE TABLENAME = 'MYDATA'
GO

--
-- Delete from the MYDATA table all the duplicated records,
-- keeping only the row with the lowest INSERT_SEQ.
-- The delete is performed only if there are duplicated records;
-- this is achieved using a "short circuit" AND on the number of records
-- stored in the NUM_DUPLICATES field of the DUPLICATES table for
-- the MYDATA table...
--
DELETE FROM MYDATA
WHERE (SELECT NUM_DUPLICATES FROM DUPLICATES WHERE TABLENAME = 'MYDATA') > 0 AND
      EXISTS (
          SELECT 1
          FROM TMP_MYDATA
          WHERE MYDATA.MID = TMP_MYDATA.MID AND
                MYDATA.IID = TMP_MYDATA.IID AND
                MYDATA.TIMEKEY = TMP_MYDATA.TIMEKEY AND
                MYDATA.EID = TMP_MYDATA.EID AND
                MYDATA.INSERT_SEQ > TMP_MYDATA.INSERT_SEQ
      )
GO

This technique works fine on a normal table (1M recs) but is not very performant on huge tables (>10M records)!

Do you know a better way to achieve the task of removing all the duplicate records, preserving the lowest INSERT_SEQ between the duplicates and also preserving the sequence seed, so that a new record inserted at time t1 > t0 is enumerated with an INSERT_SEQ|t1 > max(INSERT_SEQ)|t0?

Thanks a lot for your help!
Patrizio

PS. Sorry for such a large post!
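A hedged alternative that skips the helper tables entirely: delete every row whose INSERT_SEQ is above its group's minimum. It keeps the lowest INSERT_SEQ per (MID, IID, TIMEKEY, EID) tuple and never touches the identity seed, though on >10M rows it still relies on the composite index being in place:

DELETE M
FROM MYDATA M
WHERE M.INSERT_SEQ > (
    SELECT MIN(M2.INSERT_SEQ)
    FROM MYDATA M2
    WHERE M2.MID = M.MID
      AND M2.IID = M.IID
      AND M2.TIMEKEY = M.TIMEKEY
      AND M2.EID = M.EID
)
GO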
View 1 Replies
View Related
Apr 21, 2007
I am in the process of designing a SQL 2005 database with tables that may hold several hundreds of millions of rows.
Due to various constraints, I am trying to save as much space as possible by optimizing the size of a row. Currently one row contains the following columns:
byte(4), byte(3), byte(3) = 10 bytes.
Adding 4 bytes for the row header plus 3 bytes for the null bitmap (as described in BOL), I end up with 17 bytes/row. In a real-world test it averaged 18.3 bytes/row.
There are no indexes, no primary key, and all columns are NOT NULL. Hence my question: since no column allows a null value, is there a way to "remove" the 3-byte null bitmap?
Is there any other way to shave off one or two more bytes, by using a clustered index, etc.?
Thanks
View 4 Replies
View Related
Jun 10, 2014
Is it possible to find the size of a table, and the size of each row in that table?
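A hedged sketch (table and column names hypothetical): sp_spaceused reports the table-level sizes, and DATALENGTH summed across the columns approximates each row's data bytes (row header and overhead not included):

EXEC sp_spaceused 'dbo.MyTable'   -- reserved, data, and index size for the table

SELECT DATALENGTH(Col1) + DATALENGTH(Col2) AS approx_row_bytes   -- add one term per column
FROM dbo.MyTable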
View 4 Replies
View Related
Feb 1, 2007
Hi seniors
There are two tables involved in replication: let's say table1, and the replicated table rep.table1.
We are not deleting records physically in table1; we only set a bit in table1 to true when we want to delete a record. The strange thing is that the replication agent reports this as a hard delete operation on table1, so it downloads the hard delete and deletes the record in the replicated table, which is very crucial.
Please let me know where I am wrong and how to put it right.
There are no triggers on the published tables, and no other trigger has been created on the published table.
regards
Ahmad Drshen
View 6 Replies
View Related
Mar 14, 2001
We have a huge table with 12 million records. When I ran the following script, it took 50 hours. Can anyone help? Thanks.
update TableA
set In=e.In, EA=e.ea, We=e.we
from TableA c, TableB e
where c.code=e.code
TableA: 12,000,000 records.
TableB: 750,000 records.
Both tables have a clustered index on their code field.
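A hedged sketch (key boundaries hypothetical): batching the update over ranges of the clustered code key keeps each transaction and its locks small; [In] is bracketed because IN is a reserved word:

DECLARE @lo int, @hi int
SET @lo = 1
SET @hi = 100000   -- advance this window over the full key range in a loop
UPDATE c
SET [In] = e.[In], EA = e.ea, We = e.we
FROM TableA c
INNER JOIN TableB e ON c.code = e.code
WHERE c.code BETWEEN @lo AND @hi

It is also worth skipping rows whose values already match, so reruns do less work.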
View 6 Replies
View Related
May 18, 2001
I need to alter a table (expand a column from varchar(10) to varchar(255)), and the table has 200 million rows.
Please suggest the best and fastest method to achieve it. The database is on SQL 7.0.
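A hedged sketch (names hypothetical): widening a varchar column is a metadata-only change in SQL Server, since every existing row already fits, so it should be near-instant even with 200 million rows. Keep the column's current NULL/NOT NULL setting explicit, because ALTER COLUMN otherwise applies the default nullability:

ALTER TABLE dbo.BigTable ALTER COLUMN MyCol varchar(255) NOT NULL  -- match the existing nullability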
Thanks
View 1 Replies
View Related
May 13, 2008
Hello
I want to ask about the backup time cost of a huge table (a table with tera-scale records); can anyone help me estimate roughly how long it will take?
View 2 Replies
View Related
Jul 1, 2005
Hi,
I have a table with 52 million rows which resides on the Primary filegroup in my database. Because of the huge number of rows, performance has degraded badly, and I would like to break the table into parts.
Can anyone suggest the steps for doing this, and the number of parts that should be made? The table is named Account_Transactions and contains policy information in an insurance database.
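One hedged option on a SQL 2000-era server is a local partitioned view: split the rows into per-period tables with CHECK constraints on a partitioning column, then union them behind one view. All names and columns here are hypothetical:

CREATE TABLE Account_Transactions_2004 (
    TransID int NOT NULL,
    TransYear int NOT NULL CHECK (TransYear = 2004),
    Amount money NOT NULL,
    PRIMARY KEY (TransID, TransYear)   -- partitioning column must be in the key
)
CREATE TABLE Account_Transactions_2005 (
    TransID int NOT NULL,
    TransYear int NOT NULL CHECK (TransYear = 2005),
    Amount money NOT NULL,
    PRIMARY KEY (TransID, TransYear)
)
GO
CREATE VIEW Account_Transactions_All
AS
SELECT * FROM Account_Transactions_2004
UNION ALL
SELECT * FROM Account_Transactions_2005
GO

The optimizer can then skip partitions when queries filter on the partitioning column, and the number of parts follows naturally from the data (e.g. one per year).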
Rajat
View 2 Replies
View Related
Mar 12, 2008
Hey guys,
I have a table with about 80 columns and 400 million records. Each column has different responses that I need frequencies for; I need counts of each response across all the columns. I have a query that does it, but it runs forever. What is the best way to do this?
My starting query:
select res, sum(cnt) from
(
select col1 res, count(*) as cnt from table1 with (nolock)
group by col1
union all
select col2 res, count(*) as cnt from table1 with (nolock)
group by col2
........................
select col80 res, count(*) as cnt from table1 with (nolock)
group by col80
)a group by res
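A hedged alternative (SQL 2005+) that scans the table once instead of 80 times, assuming the 80 columns share a compatible data type (the column list is elided the same way as above). One behavioral difference: UNPIVOT drops NULLs, whereas the original query counts them as a group:

select res, count(*) as cnt
from table1 with (nolock)
unpivot (res for colname in (col1, col2, /* ... */ col80)) as u
group by res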
View 1 Replies
View Related
Mar 16, 2004
I have a huge table with a 4-column primary key. I need to delete data from this table (approx. 5.6 million records). It takes a hell of a lot of time to delete with a normal query.
Can someone please suggest a better way?
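When a sizeable fraction of the table is being removed, one hedged pattern (names and the keep-filter hypothetical) is to copy the survivors out, truncate, and copy back, since TRUNCATE deallocates pages with minimal logging while DELETE logs every row:

SELECT * INTO dbo.BigTable_keep FROM dbo.BigTable WHERE KeepFlag = 1  -- hypothetical filter
TRUNCATE TABLE dbo.BigTable   -- note: fails if the table is referenced by foreign keys
INSERT INTO dbo.BigTable SELECT * FROM dbo.BigTable_keep
DROP TABLE dbo.BigTable_keep

Otherwise, deleting in small batches keeps the log and locking under control.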
Any help will be appreciated.
View 14 Replies
View Related
Jul 24, 2013
I have a very large table, and from that table I need just 2 records: one with column1 = 'A' and one with column1 = 'B'.
I don't think I can use the OR, IN, or CASE operators here, because I need exactly 2 records, not more.
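A hedged sketch (table name hypothetical) that returns at most one row per value by combining two TOP (1) queries:

SELECT * FROM (SELECT TOP (1) * FROM dbo.BigTable WHERE column1 = 'A') AS a
UNION ALL
SELECT * FROM (SELECT TOP (1) * FROM dbo.BigTable WHERE column1 = 'B') AS b

An index on column1 lets each branch stop at its first match instead of scanning the table.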
View 6 Replies
View Related
Mar 23, 2008
Hi Guys,
What is a fast way to move a huge table (77 million records, 25 columns) across servers? The servers are not linked, though.
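A hedged sketch using the bcp utility from the command line (server, database, and path names hypothetical); native format (-n) avoids character conversion, and a batch size (-b) on the import keeps each transaction manageable:

bcp MyDb.dbo.BigTable out C:\bigtable.dat -n -S SourceServer -T
bcp MyDb.dbo.BigTable in C:\bigtable.dat -n -S DestServer -T -b 50000

Dropping or disabling indexes on the destination before the load, and rebuilding them afterwards, usually speeds this up further.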
Thanks for the help.
View 3 Replies
View Related
Nov 19, 2015
SQL Server: 2008 R2
Question A: I need to truncate a table; it has 21 million rows and a size of 14 GB.
1 - How do I find out whether this table is referenced by a FOREIGN KEY?
2 - Does it participate in an indexed view?
3 - Is it being published using transactional replication or merge replication?
Question B: How do I safely truncate that table?
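Some hedged checks for A1-A3 (the table name is hypothetical; catalog views as of SQL 2008 R2). For Question B: TRUNCATE TABLE is refused outright when a foreign key references the table, when an indexed view is schema-bound to it, or when it is published for replication, so these checks cover most of "safely":

-- A1: foreign keys referencing the table
SELECT name FROM sys.foreign_keys
WHERE referenced_object_id = OBJECT_ID('dbo.MyBigTable')

-- A2: schema-bound references, which include indexed views
SELECT OBJECT_NAME(referencing_id) AS referencing_object
FROM sys.sql_expression_dependencies
WHERE referenced_id = OBJECT_ID('dbo.MyBigTable')
  AND is_schema_bound_reference = 1

-- A3: replication flags on the table itself
SELECT name, is_published, is_merge_published, is_replicated
FROM sys.tables
WHERE object_id = OBJECT_ID('dbo.MyBigTable')

If all three come back clean, TRUNCATE TABLE dbo.MyBigTable should be safe, and it deallocates the 14 GB with minimal logging.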
View 8 Replies
View Related
Jul 12, 2007
Hey Guys
I have met the same problem, too.
I created a table with 1,380,000 rows of data;
the real DB size is about 114 MB.
The primary key is nchar(6).
When I use RDA Pull, I found that the primary
key on the PDA disappears, so it takes a long time
to get a query response.
But when I delete some rows, down to 680,000 rows of data,
then after I Pull, the primary key can be pulled from SQL Server.
PS: I didn't change any code, just deleted some rows.
Is that a SQL Mobile bug?
PS: 1. The Database and Temp Database limitations are both 384 MB.
2. If I use Query Analyzer to add the primary key, it works! So strange!!
3. The Pull process returns "S_OK".
4. After the Pull process finishes, the DB connection is still alive, so it does not seem like
a timeout problem.
5. Local connection string: "Data Source='%s\%s';SSCE:Database Password='%s';SSCE:Encrypt Database='true';SSCE:Max Database Size=384;SSCE:Temp File Max Size=384;SSCE:Temp File Directory=%s"
View 1 Replies
View Related
Jul 23, 2005
I encountered one weird problem. I have a database of around 7 GB. When I delete a bunch of data from it, it is supposed to reduce the database file size, but weirdly, the file size increased to 8 GB.
Wondering why. Is it supposed to be like that? Is the architecture designed to work like that? Is there any way for me to reduce the database file size?
Thanks.
Peter CCH
View 2 Replies
View Related
Jan 24, 2008
Hi
I have a table (SQL Server 2000) which has 14 cost columns for each record. Now, due to a new requirement, I have 2 taxes which need to be applied using two more fields, called Share1 and Share2.
For example:
Sales tax = 10%
Use Tax = 10%
Share1 = 60%
Share2 = 40%
So Sales Tax Amt (A) = Cost1 * Share1 * Sales Tax
So Use Tax Amt (B) = Cost1 * Share2 * Use Tax
The same calculation applies for all the costs, and then total cost with Sales Tax = Cost1 + A, Cost2 + A, and so on,
and total cost with Use Tax = Cost1 + B, Cost2 + B, etc.
So around 14 new fields are required to save the Sales Tax amount for each cost, another 14 new fields to store the Cost with Sales Tax, and the same again for Cost with Use Tax. That increases the table size.
Some of these fields might be used for making reports.
I was wondering which is the better approach out of the 4 below:
1) Calculate these fields dynamically while displaying them on the user interface and don't save them in the DB (when making reports, again calculate these fields dynamically and show them), or
2) Add new formula-field columns in the database table to save each value, which would make the table bigger, but reporting becomes easier (see the computed-column sketch after this list), or
3) Add only those columns to the database on which reports need to be made, and calculate the rest of the fields dynamically on screen, or
4) Create a view just for reports, and calculate values dynamically in the UI without adding any computed values to the table.
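On option 2, a hedged sketch (table and column names hypothetical): SQL Server computed columns give each formula a real column name for reporting, and on SQL 2000 they are always computed on read, so they add no storage to the table:

-- A and Cost1-with-sales-tax for Cost1; repeat per cost column
ALTER TABLE dbo.Costs ADD SalesTaxAmt1 AS (Cost1 * 0.60 * 0.10)
ALTER TABLE dbo.Costs ADD Cost1WithSalesTax AS (Cost1 + Cost1 * 0.60 * 0.10)

That effectively gives option 2's reporting convenience with option 1's storage profile.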
Your help is greatly appreciated.
Thanks
View 4 Replies
View Related
Oct 18, 2015
I want to append a column to our transaction table (60 million records in it).
The transaction table is being used in production, but I have a very small window of time.
Instead of the usual ALTER TABLE approach (if we take a backup of the table and do the processing, it will take more time), is there any way to append the column to the transaction table quickly?
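For what it's worth, a hedged note (table and column names hypothetical): adding a NULLable column with no DEFAULT is a metadata-only change in SQL Server, so it completes almost instantly no matter how many rows the table has:

ALTER TABLE dbo.Transactions ADD NewCol int NULL  -- metadata-only; no rows are touched

Backfilling values afterwards can then be done in small batches off-peak.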
View 8 Replies
View Related
Oct 23, 2004
Hello:
Need some serious help with this one...
Background:
I am working on completing an ORM that not only handles CRUD actions, but can also update the structure of a table transparently when the class defs change. The reason for this is that I can't get SQL scripts for updating software on SqlServer to be portable to other DBMS systems. Doing it in code, rather than a SQL batch, has a chance of producing cross-platform, updateable software.
Anyway, because it needs to be cross-DBMS capable, the constraint is that the system used must work for the lowest common denominator, i.e., a 'recipe' of steps that will work on all DBMSs.
The Problem:
There might be simpler ways to do this with SqlServer (all ears :-) - just in case I can't make it cross-platform right now) but, with simplistic DBMSs (SQLite, etc.), there is no way to ALTER a table once formed: one has to COPY the table to a new TMP name, adding a column in the process, then delete the original, then rename the TMP to the original name.
This appears possible in SqlServer too, as long as there are no CASCADE operations.
TRUNCATE TABLE doesn't seem to be the solution, nor DROP, as they both seem to trigger a cascade delete in the foreign table.
So -- please correct me if I am wrong here -- it appears that the operations would be
along the lines of:
a) Remove the Foreign Key references
b) Copy the table structure, and make a new temp table, adding the column
c) Copy the data over
d) Add the FK relations, that used to be in the first table, to the new table
e) Delete the original
f) Done?
The questions are:
a) How does one alter a table to REMOVE the foreign key references part if the constraint has no 'name'?
b) Does anyone know of a good, clean way to get and save these constraints, to reapply them to the new table, hopefully with some cross-platform ADO.NET solution? GetSchema etc. appears to me to be very DBMS-dependent. (See the sketch below.)
c) Any and all tips on things I might run into later that I have not mentioned are also greatly appreciated.
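On (a) and (b), a hedged SQL Server-specific sketch (table name hypothetical): a foreign key declared without a name still gets a system-generated one, which can be looked up in the catalog and then dropped by that name:

-- SQL 2000-era catalog: xtype 'F' marks FOREIGN KEY constraints
SELECT name FROM sysobjects
WHERE xtype = 'F' AND parent_obj = OBJECT_ID('dbo.MyTable')

-- then, for each name returned (system-generated names look like FK__MyTable__...):
-- ALTER TABLE dbo.MyTable DROP CONSTRAINT <returned name>

Capturing that SELECT's output (plus sysforeignkeys for the column pairs) before step (a) gives you what is needed to recreate the constraints in step (d).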
Thanks!
Sky
View 1 Replies
View Related
Nov 17, 2006
I'm trying to clean up a database design, and I'm in a situation where two tables need an FK, but since it didn't exist before, there are orphaned records.
Tables are:
Brokers, whose PK is BID.
The 2nd table is Broker_Rates, which also has a BID column.
I'm trying to figure out a T-SQL statement that will parse through all the records in the Broker_Rates table and delete a record if there isn't a match for its BID in the Brokers table.
I know this isn't correct syntax, but it should hopefully clear up what I'm asking:
DELETE FROM Broker_Rates
WHERE (Broker_Rates.BID <> Broker.BID)
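A hedged working version of that statement using NOT EXISTS (assuming the tables are exactly as described):

DELETE FROM Broker_Rates
WHERE NOT EXISTS (
    SELECT 1 FROM Brokers
    WHERE Brokers.BID = Broker_Rates.BID
)

Once the orphans are gone, the FK can be added so new orphans can't appear.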
Thanks
View 6 Replies
View Related