I'm merge replicating a SQL Server 2005 database (publisher) to SQL Compact databases (subscribers) on mobile devices. I understood that I could add a "not null" column to a replicated table on the server as long as I specified a default value, but it seems this is not possible. I ran the following script on the server database:
ALTER TABLE Activity ADD ActivityRequiresProject bit not null default(0)
which executed OK. When I went to synchronize the db on the mobile device I got the following error:
Alter table only allows columns to be added which can contain null values. The column cannot be added to the table because it does not allow null values.
The SQL statement failed to execute. If this occurred while using merge replication, this is an internal error. If this occurred while using RDA, then the SQL statement is invalid either on the PULL statement or on the SubmitSQL statement. [ SQL statement = alter table "Activity" add "ActivityRequiresProject" bit not NULL constraint "DF__Activity__Activi__4A47DDAE" default ( ( 0 ) ) ]
Does anyone know if this is a valid error? Is it possible to add a not null column with a default, and if not, how do I update the schema on a replicated database?
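For what it's worth, the quoted error is SQL Server Compact itself refusing the propagated DDL: its ALTER TABLE "only allows columns to be added which can contain null values." A workaround sketch, assuming a nullable column is acceptable on the subscribers and that Compact accepts the default on a nullable add (the constraint name is illustrative):

ALTER TABLE Activity ADD ActivityRequiresProject bit NULL CONSTRAINT DF_Activity_ActivityRequiresProject DEFAULT(0)
UPDATE Activity SET ActivityRequiresProject = 0 WHERE ActivityRequiresProject IS NULL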
I'm new to replication and am trying to determine the best approach to add a column (NOT NULL with no DEFAULT) to a replicated table. The only success I have had is if I do the following:
Delete entire Subscription
Delete entire Publication
Add column to table
Create new Publication
Create new Subscription
Run Snapshot
The problem with this approach is that each step affects the entire database and not just the modified table. I think it is inefficient to redo replication for a simple object change. What am I missing? Is there a way to only replicate the changes made to the one table without having to run a SnapShot for the entire publication? Keep in mind the column must be defined as NOT NULL and cannot have a Default.
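One pattern that avoids a full snapshot on SQL Server 2005 (a sketch, assuming the publication replicates schema changes, which is the 2005 default, and that subscribers have synchronized the backfill before the final step) is to add the column as NULL, backfill it, then tighten it to NOT NULL; the end state is NOT NULL with no default, as required. Table and column names are placeholders:

ALTER TABLE dbo.MyTable ADD NewCol int NULL
UPDATE dbo.MyTable SET NewCol = 0   -- backfill existing rows
ALTER TABLE dbo.MyTable ALTER COLUMN NewCol int NOT NULL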
I have a scenario where I need to add a blank column to a published table. This table contains over 100 million records. What is the best way to add the column? In the past, when I had to make an update like this, it broke replication because the update took forever while jobs were continuously updating the table, so replication couldn't catch up.
If I alter a table and add a column, would this column automatically get picked up in replication?
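Worth noting: on the publisher itself, adding a nullable column with no default is a metadata-only change even on 100+ million rows, so the ALTER itself is quick (a sketch with placeholder names):

ALTER TABLE dbo.BigPublishedTable ADD NewCol int NULL

Whether it flows to subscribers depends on the publication's schema-change settings (see the next answer below).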
I hope this will be my last question. I'm using transactional replication with updatable subscriptions. When adding a new column to a replicated table, do I need to generate a new snapshot? I just want to know how I can apply the new schema to the subscriber DB without regenerating the snapshot, recreating all the tables, and bulk copying. Please help.
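On SQL Server 2005 transactional replication, an ALTER TABLE ... ADD run at the publisher is propagated to subscribers without a new snapshot, as long as the publication replicates schema changes (the default). A sketch to re-enable it if it was turned off (the publication name is a placeholder):

EXEC sp_changepublication
    @publication = N'MyPublication',
    @property = N'replicate_ddl',
    @value = 1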
Hi All, I have one table named tableAB. I added one new column to it with the NOT NULL option in Enterprise Manager, then generated the script and ran that script against the client database. Because data is already there, it refuses to put a null value into the new column, so the new column is missing on the client. Anyway, I have a backup.
I have a strange problem. I have a computed column in a replicated table, the Formula is as follows:
(isnull(hashbytes('SHA2_256',CONVERT([varchar](256),[AccrualReference],(0))),(0))) the column is also Persisted.
This gets around a case sensitivity issue and is used as the primary key column, which works well. The problem is that this table is then replicated to another server. On the subscriber the value of this computed field is returned as 0x00000000 for every row, so this must be the ISNULL function doing its job. But why? AccrualReference is the true primary key and is never NULL.
If I remove the computed specification and set the field up as varbinary(64) the value then gets replicated. This then means maintaining a different table schema for in excess of 500 tables.
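One thing worth checking on the subscriber is whether the column actually arrived as a computed column, or whether the article's schema options scripted it out as a plain base column (which would explain a constant value). A diagnostic sketch, with a placeholder table name:

SELECT c.name, c.is_computed, cc.definition
FROM sys.columns c
LEFT JOIN sys.computed_columns cc
    ON cc.object_id = c.object_id AND cc.column_id = c.column_id
WHERE c.object_id = OBJECT_ID(N'dbo.YourTable')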
How would I best go about changing a published table's column from smallint to int? I could not find anything about it in BOL or MS.com. I do not think EM/Replication Properties allows the change. I suspect I have to run "Alter Table/Column" on the Publisher and each Subscriber the old-fashioned way. Is that true?
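For what it's worth, EM's Replication Properties indeed doesn't allow this, and the old-fashioned way you suspect is the usual answer, with one caveat: the publisher may refuse to alter a replicated column, in which case the article has to be dropped and re-added around the change. A sketch of the statement itself (placeholder names):

ALTER TABLE dbo.MyTable ALTER COLUMN MyCol int

Since int is strictly wider than smallint, existing data converts without loss.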
Hi All, Is there a way by which we can modify the width of a column of a table which is being replicated without touching the ongoing transactional replication? This is for MSSQL2000 Transactional Replication.
I know (and have successfully tried) that we can add a column to a table and that gets propagated to the replicate database, and indeed the added column is reflected there. How to add a column? sp_repladdcolumn, or right-click on the Publication, Properties, and it shows a button to Add a Column (see the sketch after this post).
This is what I have tried for modifying the width of a column of a table participating in Transactional Replication from varchar(10) to varchar(100)
MH (source) -> MH1 (Replicate)
The column "col1" had a width of varchar(10) and this was altered to varchar(100).
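For SQL 2000, the supported route for adding a column to a published table is sp_repladdcolumn (note the spelling), which adds the column at the publisher and propagates it to subscribers without touching the rest of the publication. A sketch with placeholder names:

EXEC sp_repladdcolumn
    @source_object = N'MyTable',
    @column = N'NewCol',
    @typetext = N'varchar(100) NULL'

Changing the width of an existing replicated column, by contrast, has no equivalent system procedure in SQL 2000, which is why the alter-in-place attempt above runs into trouble.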
I receive this message when I try to run any report. The reportserver and reportservertempdb databases were upgraded using backup/restore from SQL2000 to SQL2005 on a separate server which is running RS2005. Please help. Thanks
I simply need the ability using SQL to add columns in an existing table before (or after) columns that already exist.
The MS SQL implementation of ALTER TABLE doesn't seem to provide the before or after placement criteria I require. How is this done in MS SQL using SQL or is there a stored procedure I can use?
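Correct: SQL Server's ALTER TABLE has no BEFORE/AFTER placement clause; ADD always appends, and column order is only cosmetic to the engine. If the order genuinely matters, the usual pattern is to rebuild the table, which is also what the SSMS table designer does behind the scenes. A sketch with hypothetical names:

CREATE TABLE dbo.MyTable_new (ColA int, ColNew int, ColB varchar(50))
INSERT INTO dbo.MyTable_new (ColA, ColB)
    SELECT ColA, ColB FROM dbo.MyTable
DROP TABLE dbo.MyTable
EXEC sp_rename 'dbo.MyTable_new', 'MyTable'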
If the source has a new column, the script generated by SqlPackage.exe recreates the table in the background, moving the data into temporary storage. If the table is big, this approach can cause issues.
An example of the script is below: in the source project I added columns [MyColumn_LINE_1] and [MyColumn_LINE_5].
Is there any way I can make it generate an ALTER statement instead?
BEGIN TRANSACTION;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET XACT_ABORT ON;
CREATE TABLE [dbo].[tmp_ms_xx_MyTable] (
    [MyColumn_TYPE_CODE] CHAR (3) NOT NULL,
[Code] ....
The same script is generated regardless of whether the table has data, or whether it has a clustered or nonclustered PK.
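Since the new columns sit between existing ones (LINE_1 and LINE_5), the rebuild is most likely triggered by the column-order comparison. The publish option IgnoreColumnOrder may be worth trying; with it set, the deployment should fall back to ALTER TABLE ... ADD. A sketch (file, server, and database names are placeholders):

SqlPackage.exe /Action:Publish /SourceFile:MyDb.dacpac /TargetServerName:MyServer /TargetDatabaseName:MyDb /p:IgnoreColumnOrder=True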
I have a table 2078 MB in size, with 530,900+ rows. Is it normal for SQL to lock users out of the db when I add a column to the end of the table? I'm running SQL 7.0. The table has 4 regular column indexes and no primary key. It locked the users out for about 10 minutes. I thought this problem went away with SQL 7.0?
Can I add a column to a database table without dropping and recreating the table?
The problem is that every time a user creates an action that requires a new column, at the moment I drop the table and recreate it with the new column.
This requires lots of resources as I have to populate the table again.
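Yes, you can: ALTER TABLE ... ADD does not drop or rebuild the table, and adding a nullable column with no default is a metadata-only operation, so the existing rows stay put and nothing needs repopulating. A sketch with placeholder names:

ALTER TABLE dbo.MyTable ADD NewColumn int NULL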
I've got a table with 36+ million rows. I've been asked to modify the table and add in an identity column. The code I used caused SQL to lock up and it maxed out the log files. :)

The code I used is:

Begin Transaction
Alter Table ODS_DAILY_SALES_POS
ADD ODS_DAILY_SALES_POS_ID BigInt NOT NULL IDENTITY (1,1)
Commit

Is there a way to break up the code? Maybe only do a few million records at a time? Or is there a way to do this without locking anything up?

Thanks,
Jennifer
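One lower-impact alternative (a sketch, with the caveat that it builds a copy rather than altering in place): SELECT ... INTO with the IDENTITY() function creates a new table with the identity column already populated, and is minimally logged under the SIMPLE or BULK_LOGGED recovery model; you can then drop the original and rename the copy:

SELECT IDENTITY(bigint, 1, 1) AS ODS_DAILY_SALES_POS_ID, *
INTO dbo.ODS_DAILY_SALES_POS_new
FROM dbo.ODS_DAILY_SALES_POS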
I have the following tables:

tblGroups
GroupID int
GroupName nvarchar(50)

tblGroupMembers
GroupID int (FK)
UserID int

I need a stored proc which returns the GroupID and name of all the groups of which UserID 5 is a member, AND also returns the number of members that each group has (so the number of records in tblGroupMembers with a specific GroupID).

I have 2 sp's:
myspGetGroupMembersCount, which takes a GroupID as input and has an integer value as output
myspGetGroupsforUser, which must return the entire resultset

So say that UserID 5 is a member of GroupIDs 6, 18 and 22; the following must be the resultset:

UserID  GroupID  GroupName  Members
5       6        bla        132
5       18       yes        17
5       22       whatever   200

I think I need to call myspGetGroupMembersCount from within myspGetGroupsforUser and add it to the resultset (I don't want to work with a temp table)... but I don't know how. (See the sketch below.)
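A sketch of myspGetGroupsforUser that folds the count in with a correlated subquery, so no temp table is needed:

CREATE PROCEDURE myspGetGroupsforUser
    @UserID int
AS
SELECT gm.UserID,
       g.GroupID,
       g.GroupName,
       (SELECT COUNT(*) FROM tblGroupMembers m
        WHERE m.GroupID = g.GroupID) AS Members
FROM tblGroups g
JOIN tblGroupMembers gm ON gm.GroupID = g.GroupID
WHERE gm.UserID = @UserID

Note this computes the count directly rather than calling myspGetGroupMembersCount: a stored procedure's output can't be embedded in a SELECT list, which is why the subquery is the usual answer.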
I have the following databases: DB1, DB2, DB3, DB4, DB5, ... I have to loop through each of the databases and find out if the database has a table name containing the word 'Documents' (like 'tbdocuments' or 'tbemployeedocuments' and so on). If a table name containing 'Documents' is found in that database, I have to add a column 'IsValid varchar(100)' to that table, and there can be more than one 'Documents' table in a database. Can someone show me the script to do it? Thanks.
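A sketch using the undocumented sp_MSforeachdb (assumptions: SQL 2005+, the '?' token is each database's name, and a simple LIKE match on the table name is good enough):

EXEC sp_MSforeachdb N'
USE [?];
DECLARE @sql nvarchar(max);
SET @sql = N'''';
SELECT @sql = @sql + N''ALTER TABLE '' + QUOTENAME(SCHEMA_NAME(schema_id))
            + N''.'' + QUOTENAME(name) + N'' ADD IsValid varchar(100) NULL;''
FROM sys.tables
WHERE name LIKE N''%Documents%'';
EXEC (@sql);'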
I have an existing table which has about 70 columns, with 3 million rows in it. I was asked to add an additional 50 new columns to the table. I tried to add them through the Enterprise Manager table designer but experienced some problems; the adding process seemed like it would never end. Is there a good, efficient way to do it? I appreciate the help!
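Enterprise Manager's designer rebuilds the table and copies all 3 million rows, which is why it crawls. A plain ALTER TABLE adds nullable columns as a metadata-only change, and can add many at once; a sketch with placeholder names and types:

ALTER TABLE dbo.BigTable ADD
    NewCol1 int NULL,
    NewCol2 varchar(50) NULL,
    NewCol3 datetime NULL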
Sorry I'm pretty new to SQL so I don't know if this is a simple question. I have a table, and I am trying to add a column to the table and populate this column using what would be called an 'IF' function in Excel.
Basically 'column A' has numbers in it. I want SQL to look at 'column A' and, if the first 5 digits of the number in 'column A' are 00001, put 'description A' into the new column 'column B'. If the first 5 digits of the number in 'column A' are 00002, put 'description B' into 'column B', etc.
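In T-SQL, Excel's IF translates to a CASE expression. A sketch, assuming 'column A' is stored as a character type (placeholder table and column names):

ALTER TABLE dbo.MyTable ADD ColumnB varchar(100) NULL
GO
UPDATE dbo.MyTable
SET ColumnB = CASE LEFT(ColumnA, 5)
                  WHEN '00001' THEN 'description A'
                  WHEN '00002' THEN 'description B'
              END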
I want to add a new primary key to an existing table which already has a primary key. But I do not want to remove the old primary key, since there are many records and the old primary key also has relationships with other tables.
When I am using this query:
alter table hem154 add indexNO uniqueidentifier default newid()
alter table hem154 add CONSTRAINT pk_hem154_indexNo PRIMARY KEY (PK_indexNO)
go
Note: Hem154 ~ Table name indexNo ~ Column Name
I get this runtime error:
Msg 1779, Level 16, State 0, Line 1 Table 'hem154' already has a primary key defined on it. Msg 1750, Level 16, State 0, Line 1
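A table can only ever have one PRIMARY KEY, which is exactly what Msg 1779 is saying (and, separately, the constraint as written references PK_indexNO where the column is actually named indexNO). If the goal is to enforce uniqueness on the new column alongside the existing key, a UNIQUE constraint does that; a sketch:

ALTER TABLE hem154 ADD CONSTRAINT uq_hem154_indexNo UNIQUE (indexNO)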
The column I'm adding needs to be part of the clustered PK (it will be the last of three columns) so I need to recreate all the indexes.
My DB is set to the FULL recovery model with ALLOW_SNAPSHOT_ISOLATION ON. I've tried two methods so far.
Method 1:
BEGIN TRANSACTION
CREATE TABLE dbo.Tmp_copyoftablewithnewfield ( ) ON PRIMARY
IF EXISTS(SELECT * FROM dbo.originaltable)
    EXEC('INSERT INTO dbo.Tmp_copyoftablewithnewfield (<original fields>)
          SELECT <original fields> FROM dbo.originaltable WITH (HOLDLOCK TABLOCKX)')
GO
DROP TABLE dbo.originaltable
GO
EXECUTE sp_rename N'dbo.Tmp_copyoftablewithnewfield', N'originaltable', 'OBJECT'
GO
<recreate PK constraint>
<rebuild indexes>
COMMIT
Pro's: Lets me add the new field in the spot I'd like it (not a big deal)
Con's: Tons of wasted space and time. It took about 15 hours.
Method 2:

SET XACT_ABORT ON
GO
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
GO
BEGIN TRANSACTION
<drop PK constraint>
<drop indexes>
ALTER TABLE [dbo].[originaltable] ADD [newfield] [tinyint] NOT NULL CONSTRAINT [DF_originaltable_newfield] DEFAULT ((1))
Pro's: No making a copy of the entire table taking up 200GB more space in the db data file
Con's: My tempdb grew to accommodate the row versioning info for every row in the 200GB table. It took over 30 hours.
A lot of time and disk space is wasted with both.
Since the db is going to be unavailable to users I have some flexibility here. I was considering turning ALLOW_SNAPSHOT_ISOLATION OFF and then trying method 2 again, which should stop the versioning in tempdb, and then turning it back on.
I was also curious whether setting the database recovery model to SIMPLE would cut down on db log usage; I could then set it back to FULL when done.
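A sketch of both knobs (the database name is a placeholder; note that SIMPLE mainly lets the log truncate at checkpoints rather than reducing what a fully logged ALTER writes, and that after switching back to FULL you need a full backup to restart the log chain):

ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION OFF
ALTER DATABASE MyDb SET RECOVERY SIMPLE
-- ... run the conversion ...
ALTER DATABASE MyDb SET RECOVERY FULL
BACKUP DATABASE MyDb TO DISK = N'<backup path>'   -- re-establish the backup chain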
Do these really need to be in a transaction? If there's some hardware failure or something unexpected I can just restore from backup and do the conversion again. If the presence of the transaction itself is causing more disk usage for logging or any other slowdown, I think I'd rather do without.
Given the amount of time this conversion takes, I wanted to get some feedback other than "just try it" before doing any new tests.
I removed all constraints in order to load a bunch of data into a table. Now I'm wondering if I can add an identity column to this table even though it already contains data, or if I have to create a new table with the identity column and insert the data into that.
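For what it's worth, ALTER TABLE ... ADD with IDENTITY does work on a table that already contains rows; SQL Server assigns the numbers to the existing rows for you (in no guaranteed order). A sketch with placeholder names:

ALTER TABLE dbo.LoadedTable ADD RowID int IDENTITY(1,1) NOT NULL

Be aware it has to touch every existing row, so on a big table it can be slow and log-heavy.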
I am mapping an entity from SQL 2005 to another entity in another system on SQL 2000. Since the destination system has its own ID generator, I want to keep the generated ID for each row of my table in a column of my table in SQL 2005. The new column is in the dataset now, but I don't know how to update my table with that column's values (the OleDbDestination just inserts new rows).
After I run the SQL that adds some columns to one particular table, I am getting this warning:
Warning: The table 'usac499_499A' has been created but its maximum row size (9033) exceeds the maximum number of bytes per row (8060). INSERT or UPDATE of a row in this table will fail if the resulting row length exceeds 8060 bytes.
I got a series of the above warning messages, but the columns were created.
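The 8060-byte limit applies to the declared maximum row size, which is why the warning fires even though the ALTER succeeds. A rough way to see where the 9033 comes from (a sketch; max_length is in bytes, and -1 means a (max) type, approximated here by its in-row pointer):

SELECT SUM(CASE WHEN max_length = -1 THEN 24 ELSE max_length END) AS declared_bytes
FROM sys.columns
WHERE object_id = OBJECT_ID(N'dbo.usac499_499A')

On SQL 2005 and later, variable-length columns can spill to row-overflow pages at run time, so an INSERT or UPDATE only fails if a single row's in-row data actually exceeds 8060 bytes.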
I have a situation where I need to add a field to a table over a linked server. The specification for this is dynamic, and it is being done in T-SQL / stored procedures; this cannot change. My code is generating the statement just fine, and if I copy and paste it into a new SSMS window and execute it, it WORKS. The problem is that I need to dynamically generate the statement (I am doing that just fine, I THINK), and THEN I need to execute the statement IN THE SPROC; this part is not working.
Here is the code:
SET @AlterSQL = @DestinationServerName + '.[' + @DestinationDBName +'].' + @DestinationSchemaName + '.sp_executesql N'' ALTER TABLE ' + @DestinationTableName + ' ADD ' + @TempColumn + ' int' + CHAR(39)
The above Creates this when I expose it via a PRINT statement:
addb15.[FSParallel].dbo.sp_executesql N' ALTER TABLE Node ADD ImportIdentity int'
After I create the statement I use:
EXEC @AlterSQL
And this returns the following error:
Msg 2812, Level 16, State 62, Procedure ETLDynamicImport, Line 244 Could not find stored procedure 'FSParallel.dbo.sp_executesql N' ALTER TABLE Node ADD ImportIdentity int''.
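The error arises because EXEC @AlterSQL (no parentheses) treats the variable's entire contents as a procedure name. Since the server and database names are dynamic, one pattern is to put only the four-part procedure name in one variable and pass the ALTER statement as its parameter; a sketch built from the variables already in the proc:

DECLARE @proc nvarchar(400), @stmt nvarchar(max);
SET @proc = @DestinationServerName + N'.[' + @DestinationDBName + N'].dbo.sp_executesql';
SET @stmt = N'ALTER TABLE ' + @DestinationTableName + N' ADD ' + @TempColumn + N' int';
EXEC @proc @stmt;

EXEC @variable is valid when the variable holds just a module name, and the remote sp_executesql then runs the statement in that database's context.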