Adding A New Column To A Table With 16 Million Rows
Jul 11, 2013

We have a table with 16 million records, and this table is replicated. We want to add a new column to this table.
IF NOT EXISTS (SELECT TOP 1 1 FROM dbo.syscolumns WHERE id = OBJECT_ID(N'dbo.Employee') AND name = 'DoNotCall')
BEGIN
ALTER TABLE [dbo].[Employee] ADD [DoNotCall] bit not null Constraint DoNot_Call_Default DEFAULT 0
IF ( @@ERROR <> 0 )
GOTO QuitWithRollback
END
It just takes a LOT of time in SQL Server Management Studio. I have to cancel the query, and cancelling takes a whole lot of time as well. I am using SQL Server 2008.
I also have a requirement to delete 1 million records from a table containing 10 million rows that is queried on a 24/7 basis (we don't have a downtime window). How can I achieve that?
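A batched delete is the usual way to remove that many rows from a table that must stay online: each small transaction commits quickly, so locks stay short and the log can clear between batches. A hedged sketch, with a hypothetical table name and delete condition:

DECLARE @rows int
SET @rows = 1
WHILE @rows > 0
BEGIN
    -- Delete in small chunks; DELETE TOP requires SQL Server 2005 or later.
    DELETE TOP (5000) FROM dbo.BigTable
    WHERE DeleteFlag = 1            -- whichever condition marks the 1 million rows
    SET @rows = @@ROWCOUNT
    WAITFOR DELAY '00:00:01'        -- give concurrent queries a turn
END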
I'm looking for some performance assistance on updating a column value in a table that contains approximately 50 million rows. I have a permanent table in another database that has the key column and the value to be set. My query is listed below, but I'm afraid it will run for quite a while. Any suggestions would be appreciated.
update a
set a.column2 = b.column2
from mytable as a
join mytable1 as b
on a.column1 = b.column1
There is a one to one relationship between the two tables.
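If a single 50-million-row update proves too heavy for the log, the same join can be run in batches; a hedged sketch reusing the names from the post (it assumes column2 in mytable1 is non-null, so already-updated rows can be skipped):

DECLARE @rows int
SET @rows = 1
WHILE @rows > 0
BEGIN
    UPDATE TOP (10000) a
    SET a.column2 = b.column2
    FROM mytable AS a
    JOIN mytable1 AS b ON a.column1 = b.column1
    WHERE a.column2 IS NULL OR a.column2 <> b.column2   -- rows not yet updated
    SET @rows = @@ROWCOUNT
END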
I simply need the ability, using SQL, to add columns to an existing table before (or after) columns that already exist.
The MS SQL implementation of ALTER TABLE doesn't seem to provide the before or after placement criteria I require. How is this done in MS SQL using SQL, or is there a stored procedure I can use?
Thanks.
If on the source I have a new column, the script generated by SqlPackage.exe recreates the table in the background, moving the data through temporary storage. If the table is big, such an approach can cause issues.
Example of the script is below: in the source project I added columns [MyColumn_LINE_1] and [MyColumn_LINE_5].
Is there any way I can make it generate an ALTER statement instead?
BEGIN TRANSACTION;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SET XACT_ABORT ON;
CREATE TABLE [dbo].[tmp_ms_xx_MyTable] (
[MyColumn_TYPE_CODE] CHAR (3) NOT NULL,
[Code] ....
The same script is generated regardless of whether the table has data, and regardless of whether it has a clustered or nonclustered PK.
I have a table of size 2,078 MB with over 530,900 rows. Is it normal for SQL to lock users out of the db when I add a column to the end of the table? I'm running SQL 7.0. The table has 4 regular column indexes and no primary key. It locked users out for about 10 minutes. I thought this problem went away with SQL 7.0?
Can I add a column to a database table without dropping and recreating the table?
The problem is that every time a user creates an action that requires a new table, at the moment I drop the table and recreate it with the new column.
This requires lots of resources as I have to populate the table again.
Is there a design way I can go around this?
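In general no redesign is needed for this: ALTER TABLE adds a column in place, and adding a nullable column is a quick metadata-only change that does not rewrite the existing rows (a NOT NULL column with a default may rewrite every row on older versions). A minimal sketch with illustrative names:

ALTER TABLE dbo.MyTable ADD NewColumn int NULL

-- Or, when existing rows must get a value:
ALTER TABLE dbo.MyTable ADD NewColumn int NOT NULL
    CONSTRAINT DF_MyTable_NewColumn DEFAULT 0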
I'm trying to add a column through the ALTER command and I've tried it many different ways.
Here is the result of the statement...
I am having a problem adding a new column to a table via SQLCMD. Here is the code:
:r ServerParam.sql
:connect $(sqlServer) -U $(sqlUser) -P $(sqlPass)
DECLARE @cmd varchar(5000)
DECLARE @SServer varchar(1000)
DECLARE @database varchar(1000)
DECLARE @tableName varchar(500)
[Code] ....
When I run this script, it creates the database and the table, but when I try to add the last two fields via
ALTER TABLE ADD
I get this error:
----------------------------------------------------------------------
C:\CalJobs\SQLCMD\SQLScripts>SQLCMD -v subgt=SJI txtFile=LTS_ERROR_10022014.TXT
tblName=LTS_ERROR_10022014 -i CreateErrorTableSQL.sql
Sqlcmd: Successfully connected to server 'VULCAN'.
------------------------------------------------
Start processing Table: CalJobsErrors.dbo.SJI_LTS_ERROR_10022014
------------------------------------------------
[Code] ....
I've got a table with 36+ million rows. I've been asked to modify the table and add in an identity column. The code I used caused SQL to lock up and it maxed out the log files. :)

The code I used is:

Begin Transaction
Alter Table ODS_DAILY_SALES_POS
ADD ODS_DAILY_SALES_POS_ID BigInt NOT NULL IDENTITY (1,1)
Commit

Is there a way to break up the code? Maybe only do a few million records at a time? Or is there a way to do this without locking anything up?

Thanks,
Jennifer
I have the following tables:

tblGroups
GroupID int
GroupName nvarchar(50)

tblGroupMembers
GroupID int (FK)
UserID int

I need a stored proc which returns the GroupID and name of all the groups of which UserID 5 is a member, AND also returns the number of members that a group has (so the number of records in tblGroupMembers with a specific GroupID).

I have 2 sp's:

myspGetGroupMembersCount, which takes as input a GroupID and has as output an integer value
myspGetGroupsforUser, which must return the entire resultset

So say that UserID 5 is a member of GroupIDs 6, 18 and 22; the following must be the resultset:

UserID GroupID GroupName Members
5      6       bla       132
5      18      yes       17
5      22      whatever  200

I think I need to call myspGetGroupMembersCount from within myspGetGroupsforUser and add it to the resultset (I don't want to work with a temp table)... but I don't know how...
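One way around calling one procedure from the other is to compute the count inline; a hedged sketch of myspGetGroupsforUser using a grouped join over the tables from the post:

CREATE PROCEDURE myspGetGroupsforUser
    @UserID int
AS
BEGIN
    SELECT @UserID AS UserID,
           g.GroupID,
           g.GroupName,
           COUNT(*) AS Members              -- all members of that group
    FROM tblGroups AS g
    JOIN tblGroupMembers AS m ON m.GroupID = g.GroupID
    WHERE g.GroupID IN (SELECT GroupID FROM tblGroupMembers WHERE UserID = @UserID)
    GROUP BY g.GroupID, g.GroupName
END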
I have the following databases: DB1, DB2, DB3, DB4, DB5...
I have to loop through each of the databases and find out if the database has a table name containing the word 'Documents' (like 'tbdocuments' or 'tbemployeedocuments' and so on...).
If a table name containing the word 'Documents' is found in that database, I have to add a column 'IsValid varchar(100)' to that table, and there can be more than one 'Documents' table in a database.
Can someone show me the script to do it?
Thanks.
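A hedged sketch of one way to script it, using the undocumented (but long-standing) sp_MSforeachdb to visit each database and dynamic SQL to alter every matching table; test on a copy first:

EXEC sp_MSforeachdb N'
IF ''?'' IN (''master'', ''model'', ''msdb'', ''tempdb'') RETURN;   -- skip system dbs
USE [?];
DECLARE @sql nvarchar(max);
SET @sql = N'''';
SELECT @sql = @sql + N''ALTER TABLE ''
    + QUOTENAME(SCHEMA_NAME(t.schema_id)) + N''.'' + QUOTENAME(t.name)
    + N'' ADD IsValid varchar(100);''
FROM sys.tables AS t
WHERE t.name LIKE N''%Documents%''
  AND NOT EXISTS (SELECT 1 FROM sys.columns AS c
                  WHERE c.object_id = t.object_id AND c.name = N''IsValid'');
EXEC (@sql);'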
I have an existing table which has about 70 columns and 3 million rows in it. I was asked to add an additional 50 new columns to the table. I tried to add them through the Enterprise Manager table designer but am experiencing some problems; the adding process seems like it will never end. Is there a good, efficient way to do it? I appreciate the help!
J8
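For what it's worth, the slowness here is usually the Enterprise Manager designer rebuilding the whole table behind the scenes; a plain ALTER TABLE adding the columns as NULL avoids that rewrite. A hedged sketch with illustrative names:

ALTER TABLE dbo.MyWideTable ADD
    NewCol1 int NULL,
    NewCol2 varchar(50) NULL,
    NewCol3 datetime NULL
    -- ...and so on for the remaining new columns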
Sorry, I'm pretty new to SQL, so I don't know if this is a simple question. I have a table, and I am trying to add a column to it and populate this column using what would be called an 'IF' function in Excel.
Basically, column A has numbers in it. I want SQL to look at column A and, if the first 5 digits of the number in column A are 00001, put 'description A' into new column B. If the first 5 digits of the number in column A are 00002, put 'description B' into column B, etc.
Any ideas?
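The SQL counterpart of Excel's IF is a CASE expression. A hedged sketch, assuming column A is stored as a character type (cast it first if it is numeric) and using illustrative names:

ALTER TABLE dbo.MyTable ADD ColumnB varchar(100) NULL
GO

UPDATE dbo.MyTable
SET ColumnB = CASE LEFT(ColumnA, 5)
                  WHEN '00001' THEN 'description A'
                  WHEN '00002' THEN 'description B'
                  -- ...one WHEN per code...
              END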
I want to add a new primary key to an existing table which already has a primary key. I do not want to remove the old primary key, since there are many records and the old primary key also has relationships with other tables.
When I use this query:
alter table hem154
add indexNO uniqueidentifier default newid()
alter table hem154
add CONSTRAINT pk_hem154_indexNo PRIMARY KEY (PK_indexNO)
go
Note:
Hem154 ~ Table name
indexNo ~ Column Name
I get this runtime error:
Msg 1779, Level 16, State 0, Line 1
Table 'hem154' already has a primary key defined on it.
Msg 1750, Level 16, State 0, Line 1
Could not create constraint. See previous errors.
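Error 1779 is SQL Server saying a table can carry only one primary key. If the old key has to stay, the closest alternative is a UNIQUE constraint on the new column; a hedged sketch (note the original script also names the wrong column, PK_indexNO instead of indexNO, in the constraint):

ALTER TABLE hem154
ADD indexNO uniqueidentifier NOT NULL
    CONSTRAINT DF_hem154_indexNO DEFAULT NEWID()

ALTER TABLE hem154
ADD CONSTRAINT UQ_hem154_indexNO UNIQUE (indexNO)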
The column I'm adding needs to be part of the clustered PK (it will be the last of three columns) so I need to recreate all the indexes.
My DB is set to the FULL recovery model with ALLOW_SNAPSHOT_ISOLATION ON. I've tried two methods so far.
Method 1:
BEGIN TRANSACTION
CREATE TABLE dbo.Tmp_copyoftablewithnewfield
(
) ON PRIMARY
IF EXISTS(SELECT * FROM dbo.originaltable)
EXEC('INSERT INTO dbo.Tmp_copyoftablewithnewfield (<original fields>)
SELECT <original fields> FROM dbo.originaltable WITH (HOLDLOCK TABLOCKX)')
GO
DROP TABLE dbo.originaltable
GO
EXECUTE sp_rename N'dbo.Tmp_copyoftablewithnewfield', N'originaltable',
'OBJECT'
GO
<recreate PK constraint>
<rebuild indexes>
COMMIT
Pro's: Lets me add the new field in the spot I'd like it (not a big deal)
Con's: Tons of wasted space and time. It took about 15 hours.
Method 2:
SET XACT_ABORT ON
GO
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
GO
BEGIN TRANSACTION
<drop PK constraint>
<drop indexes>
ALTER TABLE [dbo].[originaltable] ADD
[newfield] [tinyint] NOT NULL CONSTRAINT [DF_originaltable_newfield] DEFAULT
((1))
<recreate PK constraint>
<rebuild indexes>
COMMIT TRANSACTION
Pro's: No making a copy of the entire table taking up 200GB more space in the db data file
Con's: My tempdb grew to accommodate the row versioning info for every row in the 200GB table. It took over 30 hours.
A lot of time and disk space is wasted with both.
Since the db is going to be unavailable to users, I have some flexibility here. I was considering turning ALLOW_SNAPSHOT_ISOLATION OFF, trying Method 2 again (which should stop the versioning in tempdb), and then turning it back on.
I was also curious if setting the database recovery mode to SIMPLE would cut down on db log usage and then I could set it back to FULL when done.
Do these really need to be in a transaction? If there's some hardware failure or something unexpected I can just restore from backup and do the conversion again. If the presence of the transaction itself is causing more disk usage for logging or any other slowdown, I think I'd rather do without.
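For reference, a hedged outline of that settings dance, assuming exclusive access during the window (database name is illustrative):

ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION OFF
ALTER DATABASE MyDb SET RECOVERY SIMPLE     -- lets the log truncate between operations

-- ...drop indexes, ALTER TABLE ... ADD, recreate the PK, rebuild indexes...

ALTER DATABASE MyDb SET RECOVERY FULL
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON
BACKUP DATABASE MyDb TO DISK = 'MyDb.bak'   -- switching out of FULL breaks the log backup chain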
Given the amount of time this conversion takes, I wanted to get some feedback other than "just try it" before doing any new tests.
Thanks.
Hi,
I'm merge replicating a SQL Server 2005 database (publisher) to SQL Compact databases (subscribers) on mobile devices. I understood that I could add a "not null" column to a replicated table on the server as long as I specified a default value, but it seems this is not possible. I ran the following script on the server database:
ALTER TABLE Activity ADD ActivityRequiresProject bit not null default(0)
which executed OK. When I went to synchronize the db on the mobile device I got the following error:
Alter table only allows columns to be added which can contain null values. The column cannot be added to the table because it does not allow null values.
The SQL statement failed to execute. If this occurred while using merge replication, this is an internal error. If this occurred while using RDA, then the SQL statement is invalid either on the PULL statement or on the SubmitSQL statement. [ SQL statement = alter table "Activity" add "ActivityRequiresProject" bit not NULL constraint "DF__Activity__Activi__4A47DDAE" default ( ( 0 ) ) ]
Does anyone know if this is a valid error? Is it possible to add a not null column with a default, and if not, how do I update the schema on a replicated database?
Regards,
Greg
I removed all constraints in order to load a bunch of data into a table; now I'm wondering if I can add an identity column to this table, which already contains data, or if I have to create a new table with the identity column and insert the data into that.
thx
Kat
Dear Friends,
I need a SQL query to add an identity column to an existing table; i.e., when I alter the table I want to add an identity column.
Thanks in advance.
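For what it's worth, ALTER TABLE can add an identity column even when the table already holds data; existing rows are numbered as part of the operation. A hedged sketch with illustrative names:

ALTER TABLE dbo.MyTable
ADD MyTableID int IDENTITY(1,1) NOT NULL
-- Existing rows receive sequential identity values during the ALTER;
-- on a large table this rewrites every row, so expect heavy logging.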
Hi,
I am mapping an entity from SQL 2005 to another entity in another system on SQL 2000. Since the destination system has its own ID generator, I want to keep the generated ID for each row of my table in a column of my table in SQL 2005. The new column is in the dataset now, but I don't know how to update my table with that column's values (the OleDbDestination just inserts new items).
Samy
I'm new to replication and am trying to determine the best approach to add a column (NOT NULL with no DEFAULT) to a replicated table. The only success I have had is if I do the following:
Delete entire Subscription
Delete entire Publication
Add column to table
Create new Publication
Create new Subscription
Run SnapShot
The problem with this approach is that each step affects the entire database and not just the modified table. I think it is inefficient to redo replication for a simple object change. What am I missing? Is there a way to only replicate the changes made to the one table without having to run a SnapShot for the entire publication?
Keep in mind the column must be defined as NOT NULL and cannot have a Default.
Thanks, Dave
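Depending on the version there may be a lighter path than rebuilding the publication. A hedged sketch: on SQL Server 2000, sp_repladdcolumn adds a column to a published table and propagates it to subscribers. Note that a NOT NULL column still needs a default at add time (SQL Server itself cannot add a NOT NULL, no-default column to a non-empty table), but the default could be dropped afterward to meet the requirement:

-- SQL Server 2000-era procedure; on 2005+ the publication option
-- "replicate schema changes" lets a plain ALTER TABLE flow instead.
EXEC sp_repladdcolumn
    @source_object      = N'dbo.MyTable',       -- illustrative names
    @column             = N'NewCol',
    @typetext           = N'int NOT NULL DEFAULT 0',
    @publication_to_add = N'MyPublication'

-- Then, to satisfy the no-default requirement:
-- ALTER TABLE dbo.MyTable DROP CONSTRAINT <name of the default constraint>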
After I run the SQL which adds some columns to one particular table, I get this warning:
Warning: The table 'usac499_499A' has been created but its maximum row size (9033) exceeds the maximum number of bytes per row (8060). INSERT or UPDATE of a row in this table will fail if the resulting row length exceeds 8060 bytes.
I got a series of the above warning messages, but the columns were created.
I have a situation where I need to add a field to a table over a linked server. The specifications for this are dynamic, it is being done in T-SQL / stored procedures, and this cannot change. My code generates the statement just fine, and if I copy-paste it into a new SSMS window and execute it, it WORKS. The problem is that I need to dynamically generate the statement (I am doing that just fine, I THINK), THEN execute the statement IN THE SPROC, and this part is not working.
Here is the code:
SET @AlterSQL = @DestinationServerName + '.[' + @DestinationDBName +'].' + @DestinationSchemaName + '.sp_executesql N'' ALTER TABLE '
+ @DestinationTableName + ' ADD ' + @TempColumn + ' int' + CHAR(39)
The above creates this when I expose it via a PRINT statement:
addb15.[FSParallel].dbo.sp_executesql N' ALTER TABLE Node ADD ImportIdentity int'
After I create the statement I use:
EXEC @AlterSQL
And this returns the following error:
Msg 2812, Level 16, State 62, Procedure ETLDynamicImport, Line 244
Could not find stored procedure 'FSParallel.dbo.sp_executesql N' ALTER TABLE Node ADD ImportIdentity int''.
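The error is consistent with how EXEC binds a bare variable: EXEC @var treats the variable's contents as a procedure name rather than as a batch to run. Two hedged fixes using the names from the post:

-- Option 1: parentheses make EXEC run the string as an ad hoc batch,
-- sp_executesql call and all:
EXEC (@AlterSQL)

-- Option 2: call the remote sp_executesql directly with just the DDL
-- (requires "RPC Out" enabled on the linked server):
DECLARE @ddl nvarchar(max)
SET @ddl = N'ALTER TABLE ' + @DestinationTableName + N' ADD ' + @TempColumn + N' int'
EXEC addb15.[FSParallel].dbo.sp_executesql @ddl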
I have a SQL VIEW with col1, col2, col3. I need to add a new column, col4, to the view, coming from a TABLE in SQL Server.
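A hedged sketch of the usual approach: redefine the view with ALTER VIEW and join in the table that supplies col4 (table and key names are illustrative):

ALTER VIEW dbo.MyView
AS
SELECT b.col1, b.col2, b.col3,
       t.col4                           -- new column from the table
FROM dbo.BaseTable AS b
JOIN dbo.OtherTable AS t ON t.KeyCol = b.KeyCol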
I have a scenario where I need to add a blank column to a table that is a publisher. This table contains over 100 million records. What is the best way to add the column? In the past when I had to make an update, it broke replication, because the update would take forever; jobs are continuously updating the table, so replication can't catch up.
If I alter a table and add a column, would this column automatically get picked up in replication?
Hello all,

I'm using SS2K on W2K.

I've got a table, say, humm, "Orders", with two fields in the PK: OrderDate and CustomerID. I would like to add an "ID" column which would be auto-increment (and would be the new PK). But I would really like to have orders with the oldest OrderDate having the smallest ID number and, for a same OrderDate, to have the smallest CustomerID first. So my question is:

How could I add an auto-increment column to a table and make it create its values in a particular order (sort by OrderDate then CustomerID here)?

In the real situation, the table I want to modify has around 500k records, the PK has 5 fields, and I want to sort on three of them.

Thanks for your help
Yannick
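For what it's worth, SQL Server honors the ORDER BY of an INSERT ... SELECT when assigning identity values, so the order can be controlled by loading a new table. A hedged sketch:

-- Build a copy with the identity column, then load it in the desired order.
CREATE TABLE dbo.Orders_new (
    ID         int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    OrderDate  datetime NOT NULL,
    CustomerID int NOT NULL
    -- ...remaining columns from Orders...
)

INSERT INTO dbo.Orders_new (OrderDate, CustomerID /* , ... */)
SELECT OrderDate, CustomerID /* , ... */
FROM dbo.Orders
ORDER BY OrderDate, CustomerID      -- identity values follow this order

-- Then drop dbo.Orders and rename dbo.Orders_new with sp_rename.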
I have come up with a scenario where I have three tables: Product, Services, and Subscription. I have to create one table, say Bundle, where I can have some of the product IDs, service IDs, and subscription IDs; i.e. a bundle may contain some products, services, and subscriptions. How can I design these relations?
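One common design is a Bundle header table plus one junction table per item type, so each foreign key stays honest. A hedged sketch with illustrative names and key columns:

CREATE TABLE dbo.Bundle (
    BundleID   int IDENTITY(1,1) PRIMARY KEY,
    BundleName nvarchar(100) NOT NULL
)

CREATE TABLE dbo.BundleProduct (
    BundleID  int NOT NULL REFERENCES dbo.Bundle(BundleID),
    ProductID int NOT NULL REFERENCES dbo.Product(ProductID),
    PRIMARY KEY (BundleID, ProductID)
)

CREATE TABLE dbo.BundleService (
    BundleID  int NOT NULL REFERENCES dbo.Bundle(BundleID),
    ServiceID int NOT NULL REFERENCES dbo.Services(ServiceID),
    PRIMARY KEY (BundleID, ServiceID)
)

CREATE TABLE dbo.BundleSubscription (
    BundleID       int NOT NULL REFERENCES dbo.Bundle(BundleID),
    SubscriptionID int NOT NULL REFERENCES dbo.Subscription(SubscriptionID),
    PRIMARY KEY (BundleID, SubscriptionID)
)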
Hi, I have a doubt about the behaviour of SQL Server 2005 in the situation I'm going to describe.
Suppose that you have a SQL Server 2005 database on your PC, and suppose that this database has a table with a column classified as "unique" (so it's impossible for this table to contain 2 records having the same value in this column).
Suppose that you publish this database and you create 2 SQL Server Mobile 2005 subscriptions on 2 Pocket PCs.
Suppose now that the first PPC (using an embedded program) creates a record with a certain value for the column (and adds it to the table), and the second PPC does the same thing (it inserts a record with the same column value as the first PPC).
At this point, you connect the 2 PPCs to your PC (one by one, of course), to synchronize (using merge replication) the databases...
WHAT HAPPENS??? Is an error raised?
Must you configure a publication setting saying that, if this situation occurs, SQL Server on the PC keeps the last (or the first, as you decide) record acquired? Is that possible?
Thank you very much
I have a directory database with approx. 80 million records. I am feeding the database with BULK INSERT. Indexing one of the fields took about 8 hrs. After indexing, when I run queries on the indexed field the response time is under 1 sec. However, if I run SELECT queries with LIKE on non-indexed fields it takes more than 2 mins. So I decided to index 4 other fields in the database, and it looks like the indexing process is going to run for 2 days.
I am a novice in SQL database design and I am not sure if this is the best way to index the table; I am just using CREATE INDEX. Any suggestions / advice welcome.
Hello,

We maintain a 175 million record database table for our customer. This is an extract of some data collected for them by a third party vendor, who sends us regular updates to that data (monthly). The original data for the table came in the form of a single, large text file, which we imported.

This table contains name and address information on potential customers. It is a maintenance nightmare for us, as prior to this the largest table we maintained was about 10 million records, with less complicated updates required.

Here is the problem:

* In order to do the searching we need to do on the table, it has 8 of its 20 columns indexed.
* It takes hours and hours to do anything to the table.
* I'd like to cut down as much as possible the time required to update the file.

We receive monthly one file containing 10 million records that are new, and can just be appended to the table (no problem, simple import into SQL Server).

We also receive monthly one file containing 10 million records that are updates of information in the table. This is the tricky one. The only way to uniquely pair up a record in the update file with a record in the full database table is by a combination of individual_id, zip, and zip_plus4. There can be multiple records in the database for any given individual, because that individual could have a history that includes multiple addresses.

How would you recommend handling this update? So far I have mostly tried a number of execution plans involving deleting out the records in the table that match those in the text file, so I can then import the text file, but the best of those plans takes well over 6 hours to run.

My latest thought: Would it help in any way to partition the table into a number of smaller tables, with a view used to reference them? We have no performance issues querying the table, but I need some thoughts on how to better maintain it.

One more thing, we do have 2 copies of the table on the server at all times so that one can be actively used in production while we run updates on the other one, so I can certainly try out some suggestions over the next week.

Regards,
Warren Wright
Dallas
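One hedged approach to the monthly update file: bulk load it into a staging table, index the staging table on the three-column match key, and update the big table in place with a join instead of delete-and-reimport. Names and column layouts are illustrative:

-- Stage the update file and index the match key.
CREATE TABLE dbo.staging_updates (
    individual_id int, zip char(5), zip_plus4 char(4)
    -- ...remaining columns from the update file...
)
-- BULK INSERT dbo.staging_updates FROM 'updates.txt' WITH (FIELDTERMINATOR = ',')
CREATE INDEX IX_staging_key
    ON dbo.staging_updates (individual_id, zip, zip_plus4)

-- Update matching rows in place; disabling nonclustered indexes on the
-- updated columns first (and rebuilding after) can cut the run time.
UPDATE t
SET t.address_line1 = s.address_line1   -- one assignment per updated column
FROM dbo.big_table AS t
JOIN dbo.staging_updates AS s
  ON  t.individual_id = s.individual_id
  AND t.zip           = s.zip
  AND t.zip_plus4     = s.zip_plus4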
I don't work much with the back end of software development, so there is a lot about SQL Server I do not know.
We are building a database. The database will have about 10 tables in it; 3 of these tables will probably have a huge amount of data in them. Specifically, each of the 3 tables will have about half a million records in it. Each record is about 100 characters max in length (I am including numbers as characters and summing the individual columns/fields to come up with 100).
Will a SQL Server database table with half a million records in it be workable? We have tried to normalize the database to cut down on the size of the tables, but it all comes out to about half a million records per table.
Any help is deeply appreciated.
Bill