I'm altering a table that has more than 100 million rows. I would like to know the best possible way to add a new column to this table without impacting performance much.
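For what it's worth: on SQL Server 2005 and later, adding a nullable column with no default is a metadata-only change and completes almost instantly regardless of row count; it's a default value (before SQL Server 2012 Enterprise) that forces every row to be rewritten. A minimal sketch, with placeholder names:

    ALTER TABLE dbo.BigTable        -- table/column names are placeholders
        ADD NewCol INT NULL;        -- nullable, no default: metadata-only, near-instant

If the column must eventually be NOT NULL with values, add it nullable first and backfill in batches before adding the constraint.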
I have been using Master Data Services for a couple of months now. I can load, update, merge, and soft delete data in MDS. Occasionally we even have to hard delete data from MDS. If we keep soft deleting records in an MDS table, eventually there will be a huge number of soft-deleted records. Is there an easy way to hard delete all the soft-deleted records from all MDS tables in a specific model?
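One approach I've seen suggested - heavily hedged, because it relies on internal, undocumented MDS objects (mdm.tblModel, mdm.tblEntity, mdm.udpMembersPurge), so verify they exist in your version and test on a copy first - is to call the purge procedure for every entity in the model:

    -- Assumption: mdm.udpMembersPurge exists in your MDS database (internal/undocumented)
    DECLARE @Model_ID INT, @Entity_ID INT;

    SELECT @Model_ID = ID FROM mdm.tblModel WHERE Name = 'MyModel';  -- model name is a placeholder

    DECLARE entity_cursor CURSOR FOR
        SELECT ID FROM mdm.tblEntity WHERE Model_ID = @Model_ID;
    OPEN entity_cursor;
    FETCH NEXT FROM entity_cursor INTO @Entity_ID;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- purges soft-deleted (deactivated) members for this entity
        EXEC mdm.udpMembersPurge @Model_ID = @Model_ID, @Entity_ID = @Entity_ID;
        FETCH NEXT FROM entity_cursor INTO @Entity_ID;
    END
    CLOSE entity_cursor;
    DEALLOCATE entity_cursor;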
Presently, we have allocated 600 MB to tempdb, and when I run sp_spaceused against tempdb I notice that the unallocated space is a negative number. I have not received any errors in the error log about expanding the segment because the segment is full. Is this just a bug in SQL 6.5, or something else?
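One thing worth trying before assuming a bug: sp_spaceused works from stored allocation counters that can drift out of date, and DBCC UPDATEUSAGE recomputes them. A hedged sketch - I'm assuming your 6.5 build supports UPDATEUSAGE; check the error output if it doesn't:

    USE tempdb
    GO
    DBCC UPDATEUSAGE (tempdb)   -- recompute the counters sp_spaceused reads
    GO
    EXEC sp_spaceused
    GO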
My tempdb data device has the default size of 2 MB, and it has completely filled up. I am trying to add space to it: I created a new device, tempdb_data_ext (250 MB), and tried to expand the tempdb data device onto it. But every time I do, the space ends up being added to the tempdb log device instead. How can I expand the tempdb data device?
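For reference, the 6.5 expansion statement itself is just ALTER DATABASE ... ON, with a check afterwards to see where the space actually landed - a hedged sketch:

    ALTER DATABASE tempdb ON tempdb_data_ext = 250   -- size in MB
    GO
    EXEC sp_helpdb tempdb   -- verify whether the new fragment registers as data or log
    GO

If the new space keeps registering as log, it may be because the log was previously bound to that device with sp_logdevice; expanding onto a device that has never held the log is the usual workaround. This is a guess at the cause, not a confirmed diagnosis.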
The tempdb data file is 8 MB and the log file is 1 MB, but I'm getting a message that the log is full.

Once tempdb has been shrunk and expanded by the system (we don't even see it in the database folder!), what can be done (short of reinstalling from scratch and restoring the databases) to make tempdb less vulnerable to this very frequent expanding/shrinking? I'm guessing this is one of the roots of the problem.
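If the files keep autogrowing and shrinking back, one mitigation is to presize them so the system never needs to grow them under normal load. A sketch, assuming the default logical file names (check sp_helpfile) and sizes you would adjust to your workload:

    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 200MB, FILEGROWTH = 50MB);
    ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 100MB, FILEGROWTH = 50MB);

tempdb is recreated from these settings at every startup, so a presized file also comes back at the right size after a restart.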
Against my better judgement, we are using fixed allocations of tempdb on some of our servers. This is to deal with specific limitations of our applications and hardware configuration that I'm not allowed to discuss in much detail.

The problem is that the present plan is to configure the data file at around 18 GB and the log file at around 2 GB. This seems just plain wrong to me, but I haven't been able to find a formal recommendation that gives any relative sizing. I would expect to have about twice as much log as data space, especially for tempdb.
Does anyone know of a formal citation (preferably from Microsoft) that discusses this?
Is it possible that Data Collection can cause a massive increase in MB/sec against tempdb? I cannot find the connection with tempdb, and I did configure the cache file, but it is on the same disk.

Or could it be something different? Over the last two weeks, the read/write MB/sec to tempdb has been increasing progressively. At one point it was about 20 MB/sec; then it reset and went back to 1 MB/sec.

What I found when I checked: the external company that installed SQL Server created only one file for tempdb. Next weekend, or during a break if that's possible, I would like to split it into 8 files.

I also noticed that the tempdb .mdf kept growing even though only 8-10% of it was in use.
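To see which tempdb file the throughput is actually hitting, and whether it is reads or writes, the file-stats DMV (SQL 2005+) gives cumulative numbers you can sample twice and subtract:

    -- Cumulative I/O per tempdb file since startup; sample twice and diff the results
    SELECT DB_NAME(vfs.database_id) AS db,
           mf.name                      AS file_name,
           vfs.num_of_bytes_read    / 1048576 AS read_mb,
           vfs.num_of_bytes_written / 1048576 AS written_mb
    FROM sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id;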
I'm having an argument with our infrastructure architect, who has just gone and bought lots of SSD drives to use for our tempdb data and log files. Sounds great, doesn't it? There is a catch, though: his plan is to add the disks to the two available slots in each blade in a RAID 0+1 configuration, effectively giving you one usable drive, and to put both the data and log files on that one disk.

I pointed out that SQL Server best practice is to host the tempdb data and log files on two separate drives to reduce contention. The architect then basically said that because this isn't spinning disk, read/write contention isn't an issue. I don't agree with this and wanted to get some opinions from the community. I'm still advising that two separate disks should be used, but someone just went and spent £80k ($150k) on SSDs and doesn't want to back down...
I have some stored procedures that declare table data types as temporary working tables. From everything I've read about these, they shouldn't occupy space in tempdb. However, we've been having problems with our tempdb log growing quite large, so I did some profiling on the database and found that the temporary tables are being created in tempdb, and I don't understand why.
The stored procedure looks like:

    -- WORKING TABLES
    DECLARE @t_InsertParams TABLE (
        ParamKey1  VARCHAR(50) NOT NULL,
        ParamValue VARCHAR(1024),
        ParamOrder INT
    )

and GetParamsOffload is a user-defined function that returns a table:

    CREATE FUNCTION GetParamsOffload (@p_ParamsList AS VARCHAR(8000), @p_TargetTable AS VARCHAR(50))
    RETURNS @t_Params TABLE (
        ParamKey1  VARCHAR(50) NOT NULL,
        ParamValue VARCHAR(1024),
        ParamOrder INT
    )
    AS
    <snip>
        INSERT @t_Params
        SELECT @_ParamKey, @_ParamValue, ISNULL(@_ParamOrder, 0)
    <snip>
    RETURN
During profiling of the application, I see numerous records of EventType "Object:Created" for the tempdb DatabaseId, and the ObjectName is both @t_InsertParams and @t_Params. I am also monitoring the "Object:Deleted" EventType, but I never see these objects deleted.
My first question is why this code uses tempdb in the first place. I've read that you can't use the table data type in a few instances (you can't insert the result of a stored procedure execution into a table data type, and you cannot use it with SELECT INTO or INSERT INTO statements). Is there an issue with inserting data into the table data type from a function? Is there something that can be changed in how the table data type is defined or used so that it will not use tempdb?

If I am stuck with the above using tempdb, is there something that should be explicitly called to delete these objects from tempdb? Do I need to drop those tables in my SP?

This SP is called via a CSV file parser, so it gets called a lot, over and over, as files are parsed, and I see these objects getting created in tempdb. I need to figure out how this process can manage tempdb space better.
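For what it's worth, table variables do allocate pages in tempdb exactly like #temp tables - the "memory only" description is a myth - so those Object:Created events are expected. Cleanup is automatic and often deferred, which is likely why no Object:Deleted events show up; there is nothing to drop explicitly. A quick way to see it for yourself on SQL 2000:

    -- A table variable gets a system-generated object in tempdb's catalog
    DECLARE @t TABLE (i INT)
    INSERT @t VALUES (1)
    SELECT name, crdate
    FROM tempdb..sysobjects
    WHERE type = 'U' AND crdate >= DATEADD(minute, -1, GETDATE())

Since each call creates and later destroys these objects, the practical levers are keeping the rows per call low and presizing tempdb so the churn doesn't trigger file growth.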
"tempdb is skipped. You cannot run a query that requires tempdb"?
We're running a .NET web application with a SQL Server 2000 backend, and we get this error intermittently. Restarting the SQL Server service seems to fix it, as it causes tempdb to be rebuilt, but this isn't a long-term solution. Any direction or hints would be greatly appreciated. Thanks! - Mike
I am using 6.5. Here is the error that I get. I think tempdb is too small - how do I change that, and what is this error about?
--------------------------
AIMSMan
---------------------------
Application-defined or object-defined error 40002

37000: [Microsoft][ODBC SQL Server Driver][SQL Server]Can't allocate space for object '##RevByNetSALIMJUMMA' in database 'tempdb' because the 'default' segment is full. If you ran out of space in Syslogs, dump the transaction log. Otherwise, use ALTER DATABASE or sp_extendsegment to increase the size of the segment. (1105)

ODBC
I only have a data device and a log device - how do I increase the size of the tempdb device?
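On 6.5 that means creating a new device and extending tempdb onto it. A hedged sketch - the device name, path, VDEVNO, and sizes are placeholders, and note that DISK INIT's SIZE is expressed in 2 KB pages:

    DISK INIT
        NAME = 'tempdb_ext',
        PHYSNAME = 'C:\MSSQL\DATA\TEMPDB_EXT.DAT',
        VDEVNO = 10,
        SIZE = 51200          -- 100 MB expressed in 2 KB pages
    GO
    ALTER DATABASE tempdb ON tempdb_ext = 100   -- size in MB
    GO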
We had someone create an extra data file and log file for tempdb, so we currently have two data files and two log files. Is it possible to delete the newly created data and log files? If I just delete the physical files, I assume they'll get created again as soon as SQL Server gets started back up. Any help would be great, since a single data and log file for tempdb is my goal. Thanks much. Sean
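Don't delete the physical files; remove them through ALTER DATABASE instead. A sketch, assuming the extra files are called tempdev2 and templog2 - check sp_helpfile for the real logical names. REMOVE FILE tends to succeed right after a restart, before the files are in use again:

    USE tempdb;
    GO
    DBCC SHRINKFILE (tempdev2, EMPTYFILE);       -- move any allocated pages off the data file
    ALTER DATABASE tempdb REMOVE FILE tempdev2;
    ALTER DATABASE tempdb REMOVE FILE templog2;  -- log files don't need EMPTYFILE
    GO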
In tweaking tempdb performance by adding additional data files, I wanted to reset back to the defaults and remove all the additional files I'd created. I was not able to do it for most of them as they were in use, but by starting the server in single-user mode with all other SQL services shut off, and using sqlcmd, I was able to run ALTER DATABASE tempdb REMOVE FILE <tempdev#> to remove the files... except for one.

I restarted SQL Server and tried the ALTER DATABASE ... REMOVE FILE again, but I am always denied with the message that the file can't be removed because it's still in use.

I also tried to shrink it with EMPTYFILE, but that also fails with the message that a page is a work table page and can't be removed.

I really need to get tempdb back to just one data file, but I'm unable to find a way to remove this last data file.
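One thing that sometimes releases the work-table pages without another restart is flushing the caches immediately before the shrink - a hedged sketch, not guaranteed to work, and note it drops all cached plans server-wide, so expect a brief recompile hit:

    DBCC FREEPROCCACHE;             -- drop cached plans (their work tables go with them)
    DBCC FREESYSTEMCACHE ('ALL');   -- drop cached system/temp structures
    GO
    USE tempdb;
    DBCC SHRINKFILE (tempdev4, EMPTYFILE);        -- logical name is a placeholder
    ALTER DATABASE tempdb REMOVE FILE tempdev4;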
I've been trying to make the following query more performant by breaking it up into smaller pieces.
SELECT MT.A3 + MT.A4 AS A34,
       MT.A3 -- , M.*
FROM Master_TAB M
JOIN (SELECT M.A1, t3.A3, t3.A4, M.A6, M.A2,
             ROW_NUMBER() OVER (PARTITION BY A1, A6, A3, A4 ORDER BY A5 DESC) AS rownum
[Code] ....
I know that the spill is caused by the Sort, but I can't remove the sort (the sorting can't be done in the front end). My master table has 1.7 million rows and almost 200 columns (bad design, I know, but it can't be changed as too much would be affected), and every row is a little over 1 KB.
Here's my attempt...
-- MASTER_TAB has 1.7 million rows and 50 columns
CREATE TABLE [dbo].[tmp_ABC](
    [A1] [varchar](13) NOT NULL,
    [A2] [varchar](5) NOT NULL,
    [A3] [varchar](4) NOT NULL,
    [A4] [varchar](4) NOT NULL,
    [A5] [int] NULL
) ON [PRIMARY]
[Code] ...
This is the query that is causing the spill (in reality I'm supposed to bring back all 200 columns from the master table, but for debugging purposes I limited the columns):

    SELECT c.A3 + c.A4 AS A34, c.A3, c.A1 -- , M.*
    FROM tmp_DEF c
    JOIN MASTER_TAB M ON M.A1 = c.A1 AND M.A2 = c.A2
    ORDER BY c.A3, c.A4

If I just run the following, I get no spill:

    SELECT c.A3 + c.A4 AS A34, c.A3, c.A1
    FROM tmp_DEF c
    ORDER BY c.A3, c.A4

As soon as I add the master table as a join, I get the spill...
I've read many articles and tried many of the suggested fixes (creating indexes, clustered and non-clustered) without success. Maybe I'm totally out in left field and should approach the performance problem another way?
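One hedged idea, in case it wasn't among the indexes already tried: the sort's memory grant is sized from the estimated row count and row width of the join output, so a grant that fits the narrow query can be far too small once MASTER_TAB's ~1 KB rows are joined in. Giving the optimizer presorted input can remove the sort altogether - a sketch, assuming tmp_DEF is the table from the snippet:

    -- A clustered index matching the ORDER BY may let the plan skip the sort.
    -- A nested-loops join driven from tmp_DEF preserves this order; a hash or
    -- merge join on A1/A2 will not, so check the resulting plan.
    CREATE CLUSTERED INDEX IX_tmp_DEF_A3_A4 ON dbo.tmp_DEF (A3, A4);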
As far as I know, SQL Server iterators spill to tempdb when the estimated number of rows is less than the actual number of rows. In my example the estimate is greater than the actual, but the sort iterator still spills to tempdb.
We have installed a SQL Server 2008 R2 SP1 instance, and it hosts SharePoint 2010 databases.

We have 2 dedicated drives for tempdb on the SAN with 50 GB of space each. Both the tempdb data & log files were created with the default size. I would like to presize them.

What are the best values to start with?

U: -> TempDBData, holding tempdb.mdf; V: -> TempDBLog, holding templog.ldf
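A hedged starting point, given 50 GB per dedicated drive - the sizes below are assumptions to adjust to what the SharePoint workload actually uses; the idea is to presize with headroom rather than rely on autogrow (logical names assumed to be the defaults; check sp_helpfile):

    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 30GB, FILEGROWTH = 1GB);  -- on U:
    ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 20GB, FILEGROWTH = 1GB);  -- on V:

If you add more data files (a common recommendation, up to the number of cores, typically capped around 8), size them all equally so the round-robin allocation stays balanced.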
I have a scenario where a process loads data into a SQL Server 2012 database, doing some manipulation on the data (sorting, aggregation, etc.). Once this process completes, it does not free up the tempdb space; if I restart the instance, it does.

Is there any way (apart from shrinking) to release space in tempdb, like running some post-process SQL to delete/truncate the data and logs in tempdb?
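Before reaching for a shrink, it's worth checking whether the space is still allocated to objects or is merely reserved file size - a sketch using a standard DMV:

    -- How tempdb space splits between user objects, internal objects, version store, and free
    SELECT SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,
           SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
           SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb,
           SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_mb
    FROM tempdb.sys.dm_db_file_space_usage;

If free_mb accounts for most of the file, nothing is actually being held; the file is just sized that large, which is generally desirable - tempdb space is reused, and shrinking only buys a smaller file that will grow again.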
I'm running a procedure which does INSERT INTO table_name (id, name, ...) SELECT id, name, ... FROM table_name. For some reason the tempdb data file grew to 200 GB. tempdb is set to expand unrestricted by 10%. How can I prevent that from happening? Thanks.
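If the growth comes from one giant sort or hash operation spilling for the whole INSERT ... SELECT, splitting it into key batches keeps each operation small enough to avoid the spill. A sketch for SQL Server 2005 or later, with placeholder table and column names:

    -- Copy rows in ~100k batches by key (all names are placeholders)
    DECLARE @batch INT, @rows INT
    SET @batch = 100000
    SET @rows = 1
    WHILE @rows > 0
    BEGIN
        INSERT INTO dbo.target_table (id, name)
        SELECT TOP (@batch) s.id, s.name
        FROM dbo.source_table AS s
        WHERE NOT EXISTS (SELECT 1 FROM dbo.target_table AS t WHERE t.id = s.id)
        ORDER BY s.id
        SET @rows = @@ROWCOUNT
    END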
I am currently investigating a high average write time (145 ms) issue which seems to be occurring only on the tempdb data files. I have followed the recommended setup of tempdb, in that:

1. Number of data files = number of physical cores.
2. Data files and log files are on separate partitions, away from the other databases.
3. tempdb is presized, and no incremental file growths appear to be happening with any frequency.

We have SharePoint 2012 set up on other SQL Servers, with tempdb configured following the same guidelines and with far more SharePoint activity on similarly specified hardware, which is why this is confusing. File I/O auditing on the partitions themselves shows that I/O is very fast on the partitions holding the tempdb data files, which leads me to believe that SharePoint may be the culprit, perhaps due to excessive use of tempdb with operations taking a long time to resolve.
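To confirm whether SharePoint sessions really are the heavy tempdb consumers, the task space-usage DMV (SQL 2005+) attributes current tempdb page allocations to sessions - it shows allocation volume rather than write latency, but it identifies the workload:

    -- Top current tempdb allocators by session
    SELECT TOP (10)
           s.session_id,
           s.login_name,
           SUM(t.user_objects_alloc_page_count
             + t.internal_objects_alloc_page_count) * 8 / 1024 AS alloc_mb
    FROM sys.dm_db_task_space_usage AS t
    JOIN sys.dm_exec_sessions AS s ON s.session_id = t.session_id
    GROUP BY s.session_id, s.login_name
    ORDER BY alloc_mb DESC;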
This probably has been addressed before, but I was unable to get the search to work properly on this site. I need a script/way of deleting all rows from a table except one record for each set of rows that has duplicate column data. Example:

Row 1: Field1 = 12345, Field2 = xxxxx, Field3 = yyyyy, Field4 = zzzzz, etc.
Row 2: Field1 = 12345, Field2 = zzzzzz, Field3 = xxxxxx, Field4 = yyyyyy, etc.
Row 3: Field1 = 12345, Field2 = 20202, Field3 = 11111, Field4 = zzzzz, etc.
Row 4: Field1 = 54321, Field2 = xxxxx, Field3 = yyyyy, Field4 = zzzzz, etc.
Etc.

I want to be able to find the duplicates for Field1 and then delete all but one of those rows (I don't care which one I keep, just so only one is left). The data in the other fields may or may not be unique.

I know how to find the duplicates; it's just the deleting part I am having problems with. Any help would be much appreciated. Thanks.
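A common pattern on SQL Server 2005 or later, using the column name from the example (the table name is a placeholder):

    -- Keep one arbitrary row per Field1, delete the rest
    WITH dupes AS (
        SELECT ROW_NUMBER() OVER (PARTITION BY Field1 ORDER BY (SELECT 0)) AS rn
        FROM dbo.MyTable
    )
    DELETE FROM dupes
    WHERE rn > 1;

Deleting through the CTE removes the underlying table rows; swap the ORDER BY inside ROW_NUMBER if it ever starts to matter which duplicate survives.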
Hello, I have a little problem with deleting data from a database. I have two tables:

Category: CategoryID, CategoryName, CategoryImageID (columns)
CategoryImage: CategoryImageID, CategoryImage (columns)

I have a foreign key between Category.CategoryImageID and CategoryImage.CategoryImageID. How can I delete a category from the Category table?
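Since Category is the referencing side of that foreign key, deleting the Category row itself is allowed; the image row just becomes orphaned unless you remove it too. A sketch, assuming no other table references CategoryImage:

    -- Capture the image id, delete the category first (it holds the FK), then the image
    DECLARE @CategoryID INT, @ImageID INT;
    SET @CategoryID = 42;   -- placeholder: the category to remove

    SELECT @ImageID = CategoryImageID FROM Category WHERE CategoryID = @CategoryID;
    DELETE FROM Category      WHERE CategoryID = @CategoryID;
    DELETE FROM CategoryImage WHERE CategoryImageID = @ImageID;  -- clean up the orphaned image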
I'd like to delete data. I can read from and write to the database, but somehow I can't delete data from my Microsoft SQL Server; I always get an error on CanUpdate().
    POSITION pos = m_list_all.GetFirstSelectedItemPosition();

    // Open the recordset updatable - CanUpdate() fails if it was opened
    // read-only (e.g. CRecordset::readOnly or a forward-only cursor)
    if (m_pData.Open(CRecordset::dynaset) == 0)
    {
        AfxMessageBox("Database connection failed", MB_OK);
        return;
    }
    if (MessageBox("Do you really want to delete this record?", "Delete record",
                   MB_YESNO | MB_ICONQUESTION) == IDYES)
    {
        do
        {
            nItem = m_list_all.GetNextSelectedItem(pos);
            m_ListItem = m_list_all.GetItemText(nItem, 1);

            // scan the recordset for the selected item, then delete it
            while (!m_pData.IsEOF())
            {
                if (strcmp(m_pData.m_articel, m_ListItem) == 0)
                {
                    m_pData.Delete();               // delete the row in the database;
                                                    // note: no Update() after Delete() -
                                                    // CRecordset::Delete commits immediately
                    m_list_all.DeleteItem(nItem);   // remove the row from the list control
                    MessageBox("Record deleted successfully", "Delete record", MB_OK);
                    break;
                }
                m_pData.MoveNext();
            }
            m_pData.MoveFirst();   // rewind before searching for the next selection
        } while (pos != NULL);

        m_pData.Close();
    } // end if
} // end OnBnClickedDelete
I want to reset my application and delete all the data in all tables, but I have two problems:

1. I do not know how to loop over all the tables and delete the data.
2. The database has a database diagram, so there are dependencies between tables, and the delete order is hard to decide.

One solution is to temporarily delete the diagram, then loop over all the tables and delete all the data. My question is: how do I loop over all the tables?
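For the looping part, the undocumented (but long-standing) sp_MSforeachtable procedure saves writing a cursor, and disabling the constraints first sidesteps the delete-order problem without touching the diagram - a hedged sketch:

    EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';          -- ignore FK order
    EXEC sp_MSforeachtable 'DELETE FROM ?';                                 -- empty every table
    EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL'; -- re-validate FKs

The ? placeholder expands to each table name. Being undocumented, sp_MSforeachtable could change between versions, so test it on a copy first.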
I'm using a DTS package to import a large CSV file. There is a particular column that contains either text or numbers. I want to delete the row if that column holds a number. I've used IsNumeric in the SELECT portion of the statement, but can't figure out how to use it as part of my WHERE clause.
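ISNUMERIC can be used in a WHERE clause directly - a sketch with placeholder table and column names, with the caveat that ISNUMERIC also returns 1 for things like currency symbols and scientific ('1e5') notation, so check a sample of what it matches first:

    -- Remove rows whose value parses as a number (names are placeholders)
    DELETE FROM dbo.ImportStaging
    WHERE ISNUMERIC(MixedColumn) = 1;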
How can I erase all data from every table in a SQL Server 7.0 database and leave all constraints and relationships intact? I want to have just the structure or framework with no data in any table. There are over 130 tables, so I need to automate this. Any suggestions?
Hi,
I have a SQL 2000 server with 128m rows of data, and I want to delete about 65m of them. So far I have bcp'ed the relevant data out and put it into another SQL database.

We have a small amount of space for our transaction log, so I cannot delete all 65m rows in one go. So far I have been doing them in 0.5m chunks, but it is extremely slow.

Would a faster way be to bcp out the data I want to keep, truncate the table, and bulk import the rows in again? What happens to the log size while a bulk import is happening, and is there another way of doing this?

Thanks for any help.
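If you stay with chunked deletes, a pattern that keeps the log small on SQL 2000 is SET ROWCOUNT with a log backup between batches - a sketch with placeholder table name, delete criteria, and backup path:

    -- Delete in 500k-row batches; back up the log between batches so its space recycles
    SET ROWCOUNT 500000
    WHILE 1 = 1
    BEGIN
        DELETE FROM dbo.BigTable WHERE KeepRow = 0   -- placeholder criteria
        IF @@ROWCOUNT = 0 BREAK
        BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log.bak'
    END
    SET ROWCOUNT 0

That said, for deleting half the table, the bcp/truncate/reload route is often faster: TRUNCATE TABLE is minimally logged, and the bulk load is minimally logged too if the database is in the Bulk-Logged recovery model while it runs.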
I proposed on a new server that we separate data files, log files, tempdb, backups, etc. onto separate LUNs on a SAN with high-speed solid state drives. I was told that with the new solid-state SAN technology this would decrease performance, and that it does not work the same way as it did when you had RAID 5 etc. I thought that if things were carried out correctly by a SAN administrator, they would know how to configure for optimal performance.