The column I'm adding needs to be part of the clustered PK (it will be the last of three columns), so I need to recreate all the indexes.
My DB is set to the FULL recovery model with ALLOW_SNAPSHOT_ISOLATION ON. I've tried two methods so far.
Method 1:
BEGIN TRANSACTION
CREATE TABLE dbo.Tmp_copyoftablewithnewfield
(
<original fields plus new field>
) ON [PRIMARY]
IF EXISTS(SELECT * FROM dbo.originaltable)
EXEC('INSERT INTO dbo.Tmp_copyoftablewithnewfield (<original fields>)
SELECT <original fields> FROM dbo.originaltable WITH (HOLDLOCK TABLOCKX)')
GO
DROP TABLE dbo.originaltable
GO
EXECUTE sp_rename N'dbo.Tmp_copyoftablewithnewfield', N'originaltable',
'OBJECT'
GO
<recreate PK constraint>
<rebuild indexes>
COMMIT
Pros: Lets me add the new field in the spot I'd like it (not a big deal).
Cons: Tons of wasted space and time. It took about 15 hours.
Method 2:
SET XACT_ABORT ON
GO
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
GO
BEGIN TRANSACTION
<drop PK constraint>
<drop indexes>
ALTER TABLE [dbo].[originaltable] ADD
[newfield] [tinyint] NOT NULL CONSTRAINT [DF_originaltable_newfield] DEFAULT ((1))
Pros: No making a copy of the entire table, which would take up 200GB more space in the db data file.
Cons: My tempdb grew to accommodate the row-versioning info for every row in the 200GB table. It took over 30 hours.
A lot of time and disk space is wasted with both.
Since the db is going to be unavailable to users, I have some flexibility here. I was considering turning ALLOW_SNAPSHOT_ISOLATION OFF, trying method 2 again (which should stop the versioning in tempdb), and then turning it back on afterwards.
I was also curious whether setting the database recovery model to SIMPLE would cut down on log usage; I could then set it back to FULL when done.
Do these really need to be in a transaction? If there's some hardware failure or something unexpected I can just restore from backup and do the conversion again. If the presence of the transaction itself is causing more disk usage for logging or any other slowdown, I think I'd rather do without.
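For reference, the settings changes being weighed here would look roughly like the sketch below (the database name is hypothetical). Two caveats: switching ALLOW_SNAPSHOT_ISOLATION off needs the database quiet, and switching from FULL to SIMPLE breaks the log-backup chain, so a full (or differential) backup is needed after going back to FULL. Also, even under SIMPLE the whole ALTER is still one transaction, so the log must be able to hold it.

ALTER DATABASE MyBigDb SET ALLOW_SNAPSHOT_ISOLATION OFF;   -- stops row-version generation in tempdb
ALTER DATABASE MyBigDb SET RECOVERY SIMPLE;

-- ... run the conversion here ...

ALTER DATABASE MyBigDb SET RECOVERY FULL;
ALTER DATABASE MyBigDb SET ALLOW_SNAPSHOT_ISOLATION ON;
BACKUP DATABASE MyBigDb TO DISK = N'<backup path>';        -- restart the log-backup chain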
Given the amount of time this conversion takes, I wanted to get some feedback other than "just try it" before doing any new tests.
I simply need the ability, using SQL, to add columns to an existing table before (or after) columns that already exist.
The MS SQL implementation of ALTER TABLE doesn't seem to provide the BEFORE or AFTER placement I require. How is this done in MS SQL using SQL, or is there a stored procedure I can use?
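For what it's worth, a sketch of the limitation itself (table and column names are hypothetical): SQL Server's ALTER TABLE has no BEFORE or AFTER clause, and there is no system stored procedure for positional placement, so an added column always lands at the end.

-- there is no BEFORE/AFTER option; the new column is always appended
ALTER TABLE dbo.MyTable ADD NewCol int NULL;

The SSMS/Enterprise Manager table designer appears to reorder columns, but it does so by generating a rebuild-and-copy script behind the scenes (create a new table, copy the data, drop and rename), much like Method 1 earlier on this page.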
If the source has a new column, the script generated by SqlPackage.exe recreates the table in the background, moving the data through temporary storage. If the table is big, this approach can cause issues.
An example of the script is below; in the source project I added the columns [MyColumn_LINE_1] and [MyColumn_LINE_5].
Is there any way I can make it generate an ALTER statement instead?
BEGIN TRANSACTION;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET XACT_ABORT ON;
CREATE TABLE [dbo].[tmp_ms_xx_MyTable] (
[MyColumn_TYPE_CODE] CHAR (3) NOT NULL,
[Code] ....
The same script is generated regardless of whether the table has data or not, or has a clustered or nonclustered PK.
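If the rebuild is being triggered purely by where the new columns sit in the column list (the _LINE_1 / _LINE_5 names suggest they were inserted mid-table), two things are commonly suggested: add the new columns at the end of the table definition in the project, or set the IgnoreColumnOrder deployment property, assuming your SqlPackage/DacFx version exposes it. A sketch of the latter:

SqlPackage.exe /Action:Script /SourceFile:MyDb.dacpac /TargetServerName:MyServer /TargetDatabaseName:MyDb /OutputPath:deploy.sql /p:IgnoreColumnOrder=True

With column order ignored, a new column should come out as a plain ALTER TABLE ... ADD instead of the tmp_ms_xx table rebuild.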
I have a table of size 2,078 MB with 530,900+ rows. Is it normal for SQL Server to lock users out of the db when I add a column to the end of the table? I'm running SQL 7.0. The table has 4 regular column indexes and no primary key. It locked the users out for about 10 minutes. I thought this problem went away with SQL 7.0?
Can I add a column to a database table without dropping and recreating the table?
The problem is that every time a user performs an action that requires a new column, at the moment I drop the table and recreate it with the new column.
This requires lots of resources, as I have to populate the table again.
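For reference, a plain ALTER TABLE ... ADD does not drop or copy the table; it adds the column in place. A sketch with hypothetical names:

-- adding a nullable column is a quick, in-place metadata change
ALTER TABLE dbo.UserActions ADD NewColumn int NULL;

-- or, if existing rows need a value
ALTER TABLE dbo.UserActions ADD NewFlag bit NOT NULL
    CONSTRAINT DF_UserActions_NewFlag DEFAULT (0);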
Hi, I would like to delete data from a 750-million-row table in chunks of 10,000 without blocking the users. As ours is a 24/7 shop, I do not want to block the users for a long time. An answer for this is highly appreciated. Thanks, Samna
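A common pattern (a sketch only; the table name and purge filter are hypothetical, and the batch size and delay should be tuned) is to delete in a loop of small batches so each transaction stays short and locks are released between batches:

DECLARE @rows int;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM dbo.BigTable
    WHERE CreatedDate < '20070101';        -- hypothetical purge condition
    SET @rows = @@ROWCOUNT;
    WAITFOR DELAY '00:00:01';              -- give other sessions room between batches
END

DELETE TOP (n) needs SQL Server 2005 or later; on SQL Server 2000 the same effect comes from SET ROWCOUNT 10000 before the DELETE (and SET ROWCOUNT 0 afterwards).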
I've got a table with 36+ million rows. I've been asked to modify the table and add in an identity column. The code I used caused SQL to lock up and it maxed out the log files. :)
The code I used is:
Begin Transaction
Alter Table ODS_DAILY_SALES_POS
ADD ODS_DAILY_SALES_POS_ID BigInt NOT NULL IDENTITY (1,1)
Commit
Is there a way to break up the code? Maybe only do a few million records at a time? Or is there a way to do this without locking anything up?
Thanks,
Jennifer
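Since ADD ... IDENTITY rewrites the whole table in a single transaction (hence the log growth), one alternative sometimes used is to create a new table that already has the identity column, copy the rows over in batches, and then swap the names. A rough sketch; the batching column and ranges are hypothetical:

CREATE TABLE dbo.ODS_DAILY_SALES_POS_new
(
    ODS_DAILY_SALES_POS_ID bigint IDENTITY(1,1) NOT NULL
    -- , <the original columns>
);

-- copy in ranges keyed on an existing column so each batch is its own transaction
INSERT INTO dbo.ODS_DAILY_SALES_POS_new (<original columns>)
SELECT <original columns>
FROM dbo.ODS_DAILY_SALES_POS
WHERE SalesDate >= @RangeStart AND SalesDate < @RangeEnd;   -- hypothetical batching column; loop over ranges

-- once all ranges are copied, swap the names
EXEC sp_rename 'dbo.ODS_DAILY_SALES_POS', 'ODS_DAILY_SALES_POS_old';
EXEC sp_rename 'dbo.ODS_DAILY_SALES_POS_new', 'ODS_DAILY_SALES_POS';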
I have the following tables:

tblGroups
GroupID int
GroupName nvarchar(50)

tblGroupMembers
GroupID int (FK)
UserID int

I need a stored proc which returns the GroupID and name of all the groups of which UserID 5 is a member, AND also returns the number of members that each group has (so the number of records in tblGroupMembers with a specific GroupID).

I have 2 sps:
myspGetGroupMembersCount, which takes a GroupID as input and has an integer output value
myspGetGroupsforUser, which must return the entire resultset

So say that UserID 5 is a member of GroupID 6, 18 and 22; the following must be the resultset:

UserID  GroupID  GroupName  Members
5       6        bla        132
5       18       yes        17
5       22       whatever   200

I think I need to call myspGetGroupMembersCount from within myspGetGroupsforUser and add it to the resultset (I don't want to work with a temp table)... but I don't know how...
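One way to do this without the second proc at all is a join plus a correlated COUNT, which returns the member count in the same resultset. A sketch of what the body of myspGetGroupsforUser could look like, against the tables above:

CREATE PROCEDURE myspGetGroupsforUser
    @UserID int
AS
SELECT  gm.UserID,
        g.GroupID,
        g.GroupName,
        (SELECT COUNT(*)
           FROM tblGroupMembers gm2
          WHERE gm2.GroupID = g.GroupID) AS Members   -- total members of the group
FROM    tblGroups g
JOIN    tblGroupMembers gm ON gm.GroupID = g.GroupID
WHERE   gm.UserID = @UserID;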
I have the following databases: DB1, DB2, DB3, DB4, DB5, ... I have to loop through each of the databases and find out whether the database has a table name containing the word 'Documents' (like 'tbdocuments' or 'tbemployeedocuments' and so on). If a table name containing the word 'Documents' is found in that database, I have to add a column named 'IsValid varchar(100)' to that table in that database, and there can be more than one 'Documents' table in a database. Can someone show me the script to do it? Thanks.
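A sketch of one way to do it (it assumes the databases are known by name, that the tables live in the dbo schema, and it uses the old sysdatabases/sysobjects views since no version was given; the IsValid definition is the one from the post):

DECLARE @db sysname, @t sysname, @sql nvarchar(4000);

CREATE TABLE #DocTables (dbname sysname, tablename sysname);

-- 1. collect every user table whose name contains 'Documents' from each database
DECLARE dbs CURSOR FOR
    SELECT name FROM master.dbo.sysdatabases
    WHERE name IN ('DB1', 'DB2', 'DB3', 'DB4', 'DB5');   -- adjust the list as needed
OPEN dbs;
FETCH NEXT FROM dbs INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'SELECT ''' + @db + N''', name FROM [' + @db + N'].dbo.sysobjects '
             + N'WHERE xtype = ''U'' AND name LIKE ''%Documents%''';
    INSERT INTO #DocTables (dbname, tablename) EXEC (@sql);
    FETCH NEXT FROM dbs INTO @db;
END
CLOSE dbs; DEALLOCATE dbs;

-- 2. add the IsValid column to each table found
DECLARE tabs CURSOR FOR SELECT dbname, tablename FROM #DocTables;
OPEN tabs;
FETCH NEXT FROM tabs INTO @db, @t;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'ALTER TABLE [' + @db + N'].dbo.[' + @t + N'] ADD IsValid varchar(100)';
    EXEC (@sql);
    FETCH NEXT FROM tabs INTO @db, @t;
END
CLOSE tabs; DEALLOCATE tabs;

DROP TABLE #DocTables;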
I have an existing table which has about 70 columns and 3 million rows in it. I was asked to add an additional 50 new columns to the table. I tried to add them through the Enterprise Manager table designer but ran into problems: the add process seemed like it was never going to end. Is there any efficient way to do it? I appreciate the help!
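Rather than the designer (which often rebuilds and copies the table behind the scenes), a single ALTER TABLE statement can add many columns at once. A sketch with hypothetical names:

ALTER TABLE dbo.BigTable ADD
    NewCol1  int          NULL,
    NewCol2  varchar(50)  NULL,
    NewCol3  datetime     NULL;   -- ...and so on for the remaining columns

Adding the columns as NULL (with no default) is a metadata-only change, so it should complete almost immediately even with 3 million rows.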
Sorry, I'm pretty new to SQL, so I don't know if this is a simple question. I have a table, and I am trying to add a column to it and populate this column using what would be called an 'IF' function in Excel.
Basically 'column A' has numbers in it. I want SQL to look at 'column A' and, if the first 5 digits of the number in 'column A' are 00001, put 'description A' into the new column 'column B'. If the first 5 digits of the number in 'column A' are 00002, put 'description B' into 'column B', etc.
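In T-SQL the Excel-style IF is a CASE expression. A sketch, assuming hypothetical table/column names and that 'column A' is stored as character data (if it is a numeric type, the leading zeros won't be there, so convert or pad it first):

ALTER TABLE dbo.MyTable ADD ColumnB varchar(100) NULL;
GO
UPDATE dbo.MyTable
SET ColumnB = CASE LEFT(ColumnA, 5)
                  WHEN '00001' THEN 'description A'
                  WHEN '00002' THEN 'description B'
                  ELSE ColumnB               -- leave other rows untouched
              END;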
I want to add a new primary key to an existing table which already has a primary key. But I do not want to remove the old primary key, since there are many records and the old primary key also has relationships with other tables.
When I am using this query:
alter table hem154 add indexNO uniqueidentifier default newid()
alter table hem154 add CONSTRAINT pk_hem154_indexNo PRIMARY KEY (PK_indexNO)
go
Note: hem154 is the table name; indexNO is the column name.
I get this runtime error:
Msg 1779, Level 16, State 0, Line 1
Table 'hem154' already has a primary key defined on it.
Msg 1750, Level 16, State 0, Line 1
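Msg 1779 is the real constraint here: a table can only have one PRIMARY KEY. If the old key must stay, the usual alternative is to put a UNIQUE constraint on the new column instead; foreign keys in other tables can reference a UNIQUE constraint just as they can a PK. A sketch, assuming the first ALTER already added indexNO as a nullable column (in which case the default did not populate existing rows, so they need backfilling first):

UPDATE hem154 SET indexNO = NEWID() WHERE indexNO IS NULL;
go
ALTER TABLE hem154 ALTER COLUMN indexNO uniqueidentifier NOT NULL;
go
ALTER TABLE hem154 ADD CONSTRAINT UQ_hem154_indexNO UNIQUE (indexNO);
go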
I'm merge replicating a SQL Server 2005 database (publisher) to SQL Compact databases (subscribers) on mobile devices. I understood that I could add a "not null" column to a replicated table on the server as long as I specified a default value, but it seems this is not possible. I ran the following script on the server database:
ALTER TABLE Activity ADD ActivityRequiresProject bit not null default(0)
which executed OK. When I went to synchronize the db on the mobile device I got the following error:
Alter table only allows columns to be added which can contain null values. The column cannot be added to the table because it does not allow null values. The SQL statement failed to execute. If this occurred while using merge replication, this is an internal error. If this occurred while using RDA, then the SQL statement is invalid either on the PULL statement or on the SubmitSQL statement. [ SQL statement = alter table "Activity" add "ActivityRequiresProject" bit not NULL constraint "DF__Activity__Activi__4A47DDAE" default ( ( 0 ) ) ]
Does anyone know if this is a valid error? Is it possible to add a not null column with a default, and if not, how do I update the schema on a replicated database?
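I can't say definitively whether this restriction is expected for SQL Compact subscribers, but one workaround sometimes tried (a sketch only, worth testing on a copy first, and whether the final step replicates cleanly to the Compact subscribers is exactly what you would need to verify) is to stage the change so the column is never added as NOT NULL in a single step:

-- 1. add the column as NULL with a named default (replicates as a nullable add)
ALTER TABLE Activity ADD ActivityRequiresProject bit NULL
    CONSTRAINT DF_Activity_ActivityRequiresProject DEFAULT (0);
GO
-- 2. backfill the existing rows
UPDATE Activity SET ActivityRequiresProject = 0 WHERE ActivityRequiresProject IS NULL;
GO
-- 3. then tighten the column to NOT NULL
ALTER TABLE Activity ALTER COLUMN ActivityRequiresProject bit NOT NULL;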
I removed all constraints in order to load a bunch of data into a table, and now I'm wondering if I can add an identity column to this table even though it already contains data, or if I have to create a new table with the identity column and insert the data into that.
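You shouldn't need a new table: SQL Server lets you add an IDENTITY column to a table that already has rows, and it numbers the existing rows as part of the operation (which rewrites the table, so expect it to take time and log space on a large table). A sketch with hypothetical names:

ALTER TABLE dbo.LoadedTable
    ADD LoadedTableID int IDENTITY(1,1) NOT NULL;   -- existing rows are numbered 1..n automatically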
I am mapping an entity from SQL 2005 to another entity in another system on SQL 2000. Since the destination system has its own ID generator, I want to keep the generated ID for each row of my table in a column of my table in SQL 2005. The new column is in the dataset now, but I don't know how to update my table to hold those column values (the OLE DB Destination just inserts new rows).
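One common pattern (a sketch; the table, key and column names are hypothetical) is to land the generated IDs in a staging table with the destination component, then run an UPDATE that joins the staging table back to the main table on the business key:

-- staging table filled by the data flow (hypothetical shape)
CREATE TABLE dbo.Staging_DestIds (
    BusinessKey  int NOT NULL,
    DestSystemId int NOT NULL
);

-- copy the generated IDs into the real table
UPDATE t
SET    t.DestSystemId = s.DestSystemId
FROM   dbo.MyTable AS t
JOIN   dbo.Staging_DestIds AS s ON s.BusinessKey = t.BusinessKey;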
I'm new to replication and am trying to determine the best approach to add a column (NOT NULL with no DEFAULT) to a replicated table. The only success I have had is if I do the following:
Delete the entire subscription
Delete the entire publication
Add the column to the table
Create a new publication
Create a new subscription
Run a snapshot
The problem with this approach is that each step affects the entire database and not just the modified table. I think it is inefficient to redo replication for a simple object change. What am I missing? Is there a way to only replicate the changes made to the one table without having to run a SnapShot for the entire publication? Keep in mind the column must be defined as NOT NULL and cannot have a Default.
After I run the SQL which adds some columns to one particular table, I get this warning:
Warning: The table 'usac499_499A' has been created but its maximum row size (9033) exceeds the maximum number of bytes per row (8060). INSERT or UPDATE of a row in this table will fail if the resulting row length exceeds 8060 bytes.
I got a series of the above warning messages, but the column was created.
I have a situation where I need to add a field to a table over a linked server. The specification for this is dynamic, it is being done in T-SQL / stored procedures, and this cannot change. My code generates the statement just fine, and if I copy and paste it into a new SSMS window and execute it, it WORKS. The problem is I need to dynamically generate the statement (I am doing that just fine, I THINK) and THEN execute the statement IN THE SPROC, and this part is not working.
Here is the code:
SET @AlterSQL = @DestinationServerName + '.[' + @DestinationDBName +'].' + @DestinationSchemaName + '.sp_executesql N'' ALTER TABLE ' + @DestinationTableName + ' ADD ' + @TempColumn + ' int' + CHAR(39)
The above Creates this when I expose it via a PRINT statement:
addb15.[FSParallel].dbo.sp_executesql N' ALTER TABLE Node ADD ImportIdentity int'
After I create the statement I use:
EXEC @AlterSQL
And this returns the following error:
Msg 2812, Level 16, State 62, Procedure ETLDynamicImport, Line 244 Could not find stored procedure 'FSParallel.dbo.sp_executesql N' ALTER TABLE Node ADD ImportIdentity int''.
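The error happens because EXEC @AlterSQL treats the entire string as a procedure name rather than a batch to run. Two approaches usually suggested (a sketch; adjust for your variables): wrap the variable in parentheses so it executes as dynamic SQL, or skip the four-part sp_executesql call and use EXECUTE ... AT, which requires the linked server to have RPC Out enabled. Note that AT needs a literal linked-server name, so if the server name itself is dynamic you would need another level of dynamic SQL around option 2.

-- Option 1: keep the four-part sp_executesql string, but execute it as a batch
EXEC (@AlterSQL);

-- Option 2: build just the ALTER statement and run it at the linked server
DECLARE @stmt nvarchar(max);
SET @stmt = N'ALTER TABLE ' + QUOTENAME(@DestinationDBName) + N'.'
          + QUOTENAME(@DestinationSchemaName) + N'.'
          + QUOTENAME(@DestinationTableName) + N' ADD ' + QUOTENAME(@TempColumn) + N' int';
EXEC (@stmt) AT [addb15];   -- linked server from the post; needs: EXEC sp_serveroption 'addb15', 'rpc out', 'true'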
I have a scenario where I need to add a blank column to a table that is a publisher. This table contains over 100 million records. What is the best way to add the column? In the past, when I had to make an update, it broke replication because the update took forever while jobs were continuously updating the table, so replication couldn't catch up.
If I alter a table and add a column, would this column automatically get picked up in replication?
Hello all,
I'm using SS2K on W2K.
I've got a table, say "Orders", with two fields in the PK: OrderDate and CustomerID. I would like to add an "ID" column which would be auto-increment (and would be the new PK). But I would really like orders with the oldest OrderDate to have the smallest ID number and, for the same OrderDate, I'd like the smallest CustomerID first. So my question is: how can I add an auto-increment column to a table and have it create its values in a particular order (sorted by OrderDate then CustomerID here)?
In the real situation, the table I want to modify has around 500k records, the PK has 5 fields, and I want to sort on three of them.
Thanks for your help,
Yannick
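One approach (a sketch; only OrderDate and CustomerID are shown, so add the real columns) is to create a new table that already has the IDENTITY column and load it with an INSERT ... SELECT ... ORDER BY. SQL Server guarantees that identity values are assigned in the ORDER BY order of such an insert, even though the physical row order isn't guaranteed:

CREATE TABLE dbo.Orders_new (
    ID         int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    OrderDate  datetime NOT NULL,
    CustomerID int NOT NULL
    -- ... the remaining Orders columns ...
);

INSERT INTO dbo.Orders_new (OrderDate, CustomerID /*, ... */)
SELECT OrderDate, CustomerID /*, ... */
FROM dbo.Orders
ORDER BY OrderDate, CustomerID;   -- identity values follow this order

-- then swap the names so Orders_new takes over
EXEC sp_rename 'dbo.Orders', 'Orders_old';
EXEC sp_rename 'dbo.Orders_new', 'Orders';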
I have come up with a scenario where I have three tables: Product, Services and Subscription. I have to create one table, say Bundle, where I can have some of the product IDs, service IDs and subscription IDs, i.e. a bundle may contain some products, services and subscriptions. How can I design these relations?
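A common way to model this (a sketch; the table and key-column names are assumptions) is a Bundle header table plus one junction table per thing a bundle can contain, so a bundle can hold any mix of products, services and subscriptions:

CREATE TABLE dbo.Bundle (
    BundleID   int IDENTITY(1,1) PRIMARY KEY,
    BundleName nvarchar(100) NOT NULL
);

CREATE TABLE dbo.BundleProduct (
    BundleID  int NOT NULL REFERENCES dbo.Bundle(BundleID),
    ProductID int NOT NULL REFERENCES dbo.Product(ProductID),
    PRIMARY KEY (BundleID, ProductID)
);

CREATE TABLE dbo.BundleService (
    BundleID  int NOT NULL REFERENCES dbo.Bundle(BundleID),
    ServiceID int NOT NULL REFERENCES dbo.Services(ServiceID),
    PRIMARY KEY (BundleID, ServiceID)
);

CREATE TABLE dbo.BundleSubscription (
    BundleID       int NOT NULL REFERENCES dbo.Bundle(BundleID),
    SubscriptionID int NOT NULL REFERENCES dbo.Subscription(SubscriptionID),
    PRIMARY KEY (BundleID, SubscriptionID)
);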
I’ve recently taken on a SQL Server related assignment with a client running a very large database in a very poor environment.
We have a 200GB database with virtually no failover support. I'm interested in transactional replication, hardware redundancy and failover support, and I find that before moving forward on that we probably need to seriously re-evaluate our whole logical and physical design. My experience is more on the T-SQL side rather than the full DBA hardware, optimization and tuning areas. Any suggestions for quickly ramping up my skills on the DBA side (books, courses, etc.) and recommendations for short-term support resources would be appreciated. I want to make sure I cover all my bases with regard to future scalability, disaster recovery, high availability, etc.
IF NOT EXISTS (SELECT TOP 1 1 FROM dbo.syscolumns
               WHERE id = OBJECT_ID(N'dbo.Employee') AND name = 'DoNotCall')
BEGIN
    ALTER TABLE [dbo].[Employee]
        ADD [DoNotCall] bit NOT NULL CONSTRAINT DoNot_Call_Default DEFAULT 0
    IF (@@ERROR <> 0) GOTO QuitWithRollback
END
It just takes a LOT of time in SQL Server Management Studio. I have to cancel the query, and cancelling takes a whole lot of time too. I am using SQL Server 2008.
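Adding a NOT NULL column with a default on SQL Server 2008 does have to touch every row, but long apparent hangs are very often blocking: the ALTER needs an exclusive schema lock and will wait behind any session that has the table open. A quick check to run from a second connection while the ALTER appears stuck (a sketch):

-- shows blocked requests and which session is blocking them
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;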
Hi, I have a doubt about the behaviour of SQL Server 2005 in the situation I'm going to describe. Suppose that you have a SQL Server 2005 database on your PC, and suppose that this database has a table with a column defined as "unique" (so it's impossible for this table to contain 2 records having the same value in this column). Suppose that you publish this database and you create 2 SQL Server Mobile 2005 subscriptions on 2 Pocket PCs. Suppose now that the first PPC (using an embedded program) creates a record with a certain value for the column (and adds it to the table), and the second PPC does the same thing (it inserts a record with the same column value as the first PPC). At this point, you connect the 2 PPCs to your PC (one by one, of course) to synchronize the databases using merge replication...
What happens? Is an error raised? Must you configure the publication so that, if this situation occurs, SQL Server on the PC keeps the last (or the first, as you decide) record acquired? Is that possible?
My production box is running SQL 7.0 with merge replication. I want to add one more table, and I also want to add one more column to an existing replicated table. Can anybody guide me on how to do this? It is very urgent. Regards, Don
Is there any way or option to get all the columns of a dataset added to the table when we add a table data region? It takes a lot of time to add them one by one, and there is also a chance of adding a column more than once.
What is the syntax for adding a column where you are adding a year to a date, keeping the date format? For example, adding a column that displays the date one year after the participation date, in date format?
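If the new column should be derived from an existing date column, DATEADD does the year arithmetic, either as a computed column or directly in the query. A sketch with hypothetical table/column names:

-- as a computed column on the table
ALTER TABLE dbo.Participation
    ADD OneYearLater AS DATEADD(YEAR, 1, ParticipationDate);

-- or just in a query
SELECT ParticipationDate,
       DATEADD(YEAR, 1, ParticipationDate) AS OneYearLater
FROM dbo.Participation;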