Transact SQL :: Adding New Columns To Server 2012 Table
Apr 28, 2015
I am planning to add some new columns to an existing SQL Server 2012 table. I know that I need to use the ALTER TABLE statement to accomplish this goal. However, my question is about the location where I want to add the new columns in the table. It would make more sense to add the new columns to the middle of the table, since these columns have a similar meaning to other columns in the middle of the table.
However, is it better to add these new columns at the end of the table? I am asking because I am thinking I might need some SQL to move the values of existing columns around.
Thus, is it better to add new columns in the middle of the table or at the end of the table? And why is one location better than the other?
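For reference, a minimal sketch of the ALTER in question (names are illustrative). SQL Server offers no syntax for inserting a column at a particular position; ALTER TABLE ... ADD always appends, and getting a different order would mean rebuilding the table (create new, copy, rename), which is what the SSMS table designer does behind the scenes. Since column order carries no performance meaning, appending at the end is the usual choice:

ALTER TABLE dbo.MyTable
ADD NewColumn1 int NULL,
    NewColumn2 varchar(50) NULL;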
I have a table. I want to add 2 date columns: one to record when a record is inserted, and another to record when the record was last updated.
I also want to insert dummy data for the previous dates. How do I insert those dummy dates in batches?
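A minimal sketch of one common pattern, assuming a table dbo.MyTable with a key column ID (both names are placeholders): a default fills the insert date, an update trigger maintains the modified date, and a TOP-batched loop backfills dummy dates for pre-existing rows.

ALTER TABLE dbo.MyTable
ADD CreatedDate  datetime NULL CONSTRAINT DF_MyTable_Created DEFAULT (GETDATE()),
    ModifiedDate datetime NULL;
GO
CREATE TRIGGER trg_MyTable_Update ON dbo.MyTable
AFTER UPDATE AS
BEGIN
    SET NOCOUNT ON;
    UPDATE t
    SET ModifiedDate = GETDATE()
    FROM dbo.MyTable AS t
    JOIN inserted AS i ON i.ID = t.ID;   -- assumes an ID key column
END;
GO
-- A default added to a nullable column does not backfill existing rows,
-- so pre-existing rows stay NULL; fill them with a dummy date in batches:
WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) dbo.MyTable
    SET CreatedDate = '20150101'
    WHERE CreatedDate IS NULL;
    IF @@ROWCOUNT = 0 BREAK;
END;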
Is there any way or option to get all the columns of a dataset added to a table when we add a table to a data region? It takes a lot of time to add them one by one, and there is also the chance of adding a column more than once.
Is there a good way to add columns to a table type?
I built several procs which make use of table-valued parameters, and they work pretty nicely, until I need them to accept additional columns. Then I have to drop all the procs that use them, alter the types, and rebuild all the procedures, which is a huge pain in the rear.
Is there any good way (built-in, or custom) to alter the definition of a table type that's used as a parameter to multiple stored procedures?
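There is no ALTER TYPE, so the usual (admittedly clunky) routine is a scripted drop-and-recreate. A sketch with hypothetical names:

-- 1. Find every procedure that references the type:
SELECT OBJECT_NAME(d.referencing_id) AS proc_name
FROM sys.sql_expression_dependencies AS d
WHERE d.referenced_entity_name = 'MyTableType';
-- 2. Drop those procedures, then the type:
DROP PROCEDURE dbo.MyProc1, dbo.MyProc2;
DROP TYPE dbo.MyTableType;
-- 3. Recreate the type with the extra column, then re-run the proc scripts:
CREATE TYPE dbo.MyTableType AS TABLE (
    ID int NOT NULL,
    Name varchar(50) NULL,
    NewColumn int NULL
);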
I am trying to insert a single row into the temp table #InventoryItems. The temp table is used in a cursor, childcur_inventory. While looping through the cursor I have a nested IF/ELSE condition. In the ELSE condition, where I have put PRINT 'Hello World', I want to insert a single row into #InventoryItems. I am trying to use SELECT TOP 1, but the cursor loops indefinitely, trying to insert multiple records.
I want to insert only one record with null values.
DECLARE childcur_inventory CURSOR FOR
    SELECT Structure_Number, State, Neighbor_State, Border_Bridge_Structure_Number
    FROM #InventoryItems
OPEN childcur_inventory
FETCH childcur_inventory INTO @Structure_Number, @State, @Neighbour_State, @Border_Bridge_Structure_Number
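For what it's worth, the endless loop is commonly caused by the cursor reading #InventoryItems while the loop inserts into that same table; a non-static cursor can keep fetching the newly inserted rows. A sketch of a @@FETCH_STATUS loop with a one-time-insert flag (the IF condition here is a stand-in; declaring the cursor STATIC snapshots the data so the inserted row is not re-fetched):

DECLARE @NullRowInserted bit = 0;
OPEN childcur_inventory;
FETCH NEXT FROM childcur_inventory
    INTO @Structure_Number, @State, @Neighbour_State, @Border_Bridge_Structure_Number;
WHILE @@FETCH_STATUS = 0
BEGIN
    IF @State = @Neighbour_State          -- stand-in for the real IF branch
        PRINT 'Hello World';
    ELSE IF @NullRowInserted = 0
    BEGIN
        INSERT INTO #InventoryItems (Structure_Number, State, Neighbor_State, Border_Bridge_Structure_Number)
        VALUES (NULL, NULL, NULL, NULL);
        SET @NullRowInserted = 1;         -- guarantees exactly one NULL row
    END;
    FETCH NEXT FROM childcur_inventory
        INTO @Structure_Number, @State, @Neighbour_State, @Border_Bridge_Structure_Number;
END;
CLOSE childcur_inventory;
DEALLOCATE childcur_inventory;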
I have a situation where I need to add a field to a table over a linked server. The specification for this is dynamic, it is being done in T-SQL stored procedures, and this cannot change. My code is generating the statement just fine, and if I copy and paste it into a new SSMS window and execute it, it WORKS. The problem is I need to dynamically generate the statement (I am doing that just fine, I THINK). THEN I need to execute the statement IN THE SPROC, and this part is not working.
Here is the code:
SET @AlterSQL = @DestinationServerName + '.[' + @DestinationDBName + '].' + @DestinationSchemaName
    + '.sp_executesql N'' ALTER TABLE ' + @DestinationTableName
    + ' ADD ' + @TempColumn + ' int' + CHAR(39)
The above creates this when I expose it via a PRINT statement:
addb15.[FSParallel].dbo.sp_executesql N' ALTER TABLE Node ADD ImportIdentity int'
After I create the statement I use:
EXEC @AlterSQL
And this returns the following error:
Msg 2812, Level 16, State 62, Procedure ETLDynamicImport, Line 244 Could not find stored procedure 'FSParallel.dbo.sp_executesql N' ALTER TABLE Node ADD ImportIdentity int''.
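The error occurs because EXEC @AlterSQL treats the entire string as a stored procedure *name* rather than a batch to run, hence Msg 2812. Two hedged alternatives, using the names from the question (the second form requires the linked server's "RPC Out" option to be enabled):

-- (a) Wrap the variable in parentheses so the string executes as a batch:
EXEC (@AlterSQL);
-- (b) Build only the ALTER statement and run it at the linked server:
DECLARE @stmt nvarchar(max) =
    N'ALTER TABLE ' + @DestinationTableName + N' ADD ' + @TempColumn + N' int';
EXEC (@stmt) AT addb15;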
I am getting inconsistent results when BULK INSERTING data from a tab-delimited text file. As part of my testing, I run the same code on the same file again and again, and I get different results every time! I get this on SQL 2005 and SQL 2012 R2.
We have an application that imports data from a spreadsheet. The sheet contains section headers with account numbers and detail rows with transactions by date:
AAAA.1234            (account number)
1/1/2015   $150      First Transaction
1/3/2015   $24.233   Second Transaction
BBBB.5678
1/1/2015   $350      Third Transaction
1/3/2015   $24.233   Fourth Transaction
My import program saves this spreadsheet as tab-delimited text, then I use BULK INSERT to bring the data into a generic table full of varchar(255) fields. There are about 90,000 rows in each day's data; after the BULK INSERT, about half of them are removed for various reasons.
Next I add a RowID column to the table with the IDENTITY (1,1) property. This gives my raw data unique row numbers.
I then run a routine that converts and copies those records into another holding table that's a copy of the final destination table. That routine parses through the data, assigning the account number in the section header to each detail row. It ends up looking like this:
AAAA.1234   1/1/2015   $150      First Purchase
AAAA.1234   1/3/2015   $24.233   Second Purchase
BBBB.5678   1/1/2015   $350      Third Purchase
BBBB.5678   1/3/2015   $24.233   Fourth Purchase
My technique: I use a cursor to get the starting RowID for each account number, and I then use the upper and lower RowIDs to do an INSERT into the final table. The query looks like this:
SELECT RowID, SUBSTRING(RowHeader, 6, 4) + '.UBC1' AS AccountNumber
FROM GenericTable
WHERE RowHeader LIKE '____.____%'
Results look like this:
But every time I run the routine, I get different numbers!
Needless to say, my results are not accurate. I get inconsistent results EVERY TIME. Here is my code, with table, field and account names changed for business confidentiality.
TRUNCATE TABLE GenericImportTable;
ALTER TABLE GenericImportTable DROP COLUMN RowID;
BULK INSERT GenericImportTable
FROM 'SERVERGeneralAppnameDataFile.2015.05.04.tab.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', FIRSTROW = 6)
ALTER TABLE GenericImportTable ADD RowID int IDENTITY(1,1) NOT NULL
SELECT RowID, SUBSTRING(RowHeader, 6, 4) + '.UBC1' AS AccountNumber
FROM GenericImportTable
WHERE RowHeader LIKE '____.____%'
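A likely explanation: ALTER TABLE ... ADD RowID int IDENTITY assigns values in no guaranteed order, so the numbering need not follow the file from one run to the next. A hedged sketch of one workaround, creating the IDENTITY column up front and loading through a view that hides it (the column list is abbreviated, and even this ordering is only reliable in practice for a single-threaded load, not contractually):

CREATE TABLE GenericImportTable (
    RowID     int IDENTITY(1,1) NOT NULL,
    RowHeader varchar(255) NULL
    -- ... remaining varchar(255) columns ...
);
GO
-- BULK INSERT can target a view; RowID is then generated as rows arrive.
CREATE VIEW GenericImportLoad AS
    SELECT RowHeader /* , remaining file columns */
    FROM GenericImportTable;
GO
BULK INSERT GenericImportLoad
FROM 'SERVERGeneralAppnameDataFile.2015.05.04.tab.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', FIRSTROW = 6);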
I just created a new table with over 100 columns and I need to populate just the first 2 columns.
The first column to populate is an identity column that is the primary key. The second column is a foreign key to another table, and I am trying to populate this column with all the values from that foreign key's source table. This is what I am trying to do.
column1 = ID, column2 = P_CLIENT_ID
SET IDENTITY_INSERT PIM1 ON
INSERT INTO PIM1 (P_CLIENT_ID) SELECT ID FROM P_Client
So I am trying to insert both an identity value and a value from another table while leaving the other columns blank. How do I go about doing this?
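If the goal is simply to let the identity generate its own values while copying the foreign-key values over, IDENTITY_INSERT is unnecessary; a minimal sketch (the remaining columns must be nullable or have defaults):

-- No SET IDENTITY_INSERT needed: the ID column numbers itself, and every
-- unlisted column falls back to NULL or its default.
INSERT INTO PIM1 (P_CLIENT_ID)
SELECT ID
FROM P_Client;
-- SET IDENTITY_INSERT PIM1 ON is only for supplying explicit ID values,
-- and in that case ID must appear in the column list.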
So I have been trying to get my SQL query to work for a large database that I have. I have (let's say) two tables, Table_One and Table_Two. Table_One has three columns: Type, Animal and TestID, and Table_Two has 2 columns: Test_Name and Test_ID. An example with values is below:
In Table_One, all types come under one column, and the values of all Types (Mammal, Fish, Bird, Reptile) come under another column (Animals). Table_One and Table_Two can be linked by Test_ID.
I am trying to create a table such as shown below:
This should be my final table. The approach I am currently using is to make multiple instances of Table_One and use joins to form this final table. So the columns Bird, Reptile, Mammal and Fish all come from a different copy of Table_One.
For example:
SELECT Test_Name AS 'Test_Name', Table_Bird.Animal AS 'Birds', Table_Mammal.Animal AS 'Mammal',
       Table_Reptile.Animal AS 'Reptile', Table_Fish.Animal AS 'Fish'
FROM Table_One
[Code] .....
The problem with this query is that it only works when all entries for Birds, Mammals, Reptiles and Fish have some value. If one field is empty, as for Test_Two or Test_Three, it doesn't return that record. I used OR instead of AND in the WHERE clause, but that didn't work either.
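A hedged sketch of an alternative that avoids the self-joins entirely: conditional aggregation, which tolerates missing animals because each CASE simply yields NULL (table and column names follow the question):

SELECT t2.Test_Name,
       MAX(CASE WHEN t1.Type = 'Bird'    THEN t1.Animal END) AS Bird,
       MAX(CASE WHEN t1.Type = 'Mammal'  THEN t1.Animal END) AS Mammal,
       MAX(CASE WHEN t1.Type = 'Reptile' THEN t1.Animal END) AS Reptile,
       MAX(CASE WHEN t1.Type = 'Fish'    THEN t1.Animal END) AS Fish
FROM Table_Two AS t2
LEFT JOIN Table_One AS t1 ON t1.TestID = t2.Test_ID
GROUP BY t2.Test_Name;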
I have a business need to create a report by querying data from a MS SQL 2008 database and displaying the result to the users on a web page. The report initially has 6 columns of data, and 2 out of 6 have JSON data, so the users requested to have those 2 JSON columns parsed into 15 additional columns (the first JSON column has 8 key/value pairs and the second JSON column has 7 key/value pairs). Here is what I have done so far:
I found a table-valued function (fnSplitJson2) from this link [URL]. Using this function I can parse a column of JSON data into a table. So when I use the function against the first column (with JSON data) in my query (with CROSS APPLY), I got the right data back, but I got 8 additional rows for each row in my table. The reason for this side effect is that the function returns a table of 8 rows (8 key/value pairs) for each JSON string it parses.
1. First question: How do I modify my current query (see below) so that for each row in my table I get back one row with 19 columns?
SELECT A.ITEM1,A.ITEM2,A.ITEM3,A.ITEM4, B.* FROM PRODUCT A CROSS APPLY fnSplitJson2(A.ITEM5,NULL) B
If I update my query (see below) and call the function twice within the CROSS APPLY clause, I get this error: "The multi-part identifier "A.ITEM6" could not be bound."
2. My second question: How do I get around this error?
SELECT A.ITEM1,A.ITEM2,A.ITEM3,A.ITEM4, B.*, C.* FROM PRODUCT A CROSS APPLY fnSplitJson2(A.ITEM5,NULL) B, fnSplitJson2(A.ITEM6,NULL) C
I am using Microsoft SQL Server 2008 R2 on a Windows 7 desktop.
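The binding error comes from the comma: it turns the second fnSplitJson2 call into an old-style joined derived table, which cannot reference A. Chaining a second CROSS APPLY fixes that:

SELECT A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4, B.*, C.*
FROM PRODUCT AS A
CROSS APPLY fnSplitJson2(A.ITEM5, NULL) AS B
CROSS APPLY fnSplitJson2(A.ITEM6, NULL) AS C;

Note this still multiplies rows (8 × 7 per product row); to get one row with 19 columns, the function's key/value rows would additionally have to be pivoted into columns, for example with MAX(CASE WHEN [key] = '...' THEN [value] END) grouped per product.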
I have a table with 8 columns, and I need to update data in multiple columns on this table. The table contains 1 million records. A single update was taking time, so I broke the single update into multiple update statements and am running them in parallel, with each update statement updating a different column.
This approach is working fine, but I am getting a deadlock error:
Transaction (Process ID 65) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
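Deadlocks are expected here: the parallel statements update different columns of the same rows, so they contend for the same row and page locks. One way to sidestep them (a sketch with hypothetical names and stand-in expressions) is a single UPDATE that sets all columns at once, batched by key range so only one writer touches the table at a time:

DECLARE @batch int = 50000, @minId int = 0;
WHILE 1 = 1
BEGIN
    UPDATE dbo.BigTable
    SET Col1 = UPPER(Col1),      -- stand-in expressions; use the real ones
        Col2 = LTRIM(Col2),
        Col3 = LTRIM(RTRIM(Col3))
    WHERE ID > @minId AND ID <= @minId + @batch;
    IF @@ROWCOUNT = 0 BREAK;
    SET @minId += @batch;
END;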
I am trying to produce a query result that uses a CASE statement to determine values based on scores in a table for each row. The result needs to be exported and used to upload to a state reporting website. My problem is that the state requires, in the CSV file that is uploaded, a lot of fields that we do not actually have in the database table we are creating the result set from. After I build my result set using the CASE statement, is there a way to add additional columns that don't actually exist in a table so I can export directly from SQL?
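Columns that exist only in the state's CSV spec can be emitted as literals or expressions in the SELECT itself; a sketch with hypothetical names:

SELECT s.StudentID,
       CASE WHEN s.Score >= 70 THEN 'P' ELSE 'F' END AS ResultCode,
       '' AS Filler1,            -- required by the upload spec, not stored
       'NY' AS StateCode         -- constant for every row
FROM dbo.Scores AS s;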
I have a LIVE SQL 2000 database, and I am trying to add some new columns to a table. My question is: will I need to take the database offline to perform this operation? I have replicated it to a sample table in case I can't.
Code written so far: this pivots the columns deck and jib_in into rows, but that's it, only TWO ROWS, i.e. the one I put inside the aggregate function under the PIVOT function and the one I put inside QUOTENAME().
DECLARE @columns NVARCHAR(MAX), @sql NVARCHAR(MAX);
SET @columns = N'';
SELECT @columns += N', p.' + QUOTENAME(deck)
FROM (SELECT p.deck FROM dbo.report AS p GROUP BY p.deck) AS x;
[Code] ....
I need all the columns to be pivoted and shown in the pivoted table. I am very new at dynamic pivot. I tried so many ways to add the other columns, but to no avail!
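A sketch of a complete dynamic PIVOT over the deck column, carrying jib_in as the value. This is based only on the two column names visible above; the rest of dbo.report is assumed:

DECLARE @columns nvarchar(max), @sql nvarchar(max);
-- Build the column list from the distinct deck values:
SELECT @columns = STUFF((SELECT DISTINCT N',' + QUOTENAME(deck)
                         FROM dbo.report
                         FOR XML PATH('')), 1, 1, N'');
SET @sql = N'SELECT ' + @columns + N'
FROM (SELECT deck, jib_in FROM dbo.report) AS src
PIVOT (MAX(jib_in) FOR deck IN (' + @columns + N')) AS p;';
EXEC sys.sp_executesql @sql;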
I have a table (we will call it DateTable) with several (20) columns, each being a date type. Another table's (Project) PK is referenced in the DateTable.
I am trying to write a query that will pull all dates for a specific project from the DateTable if they meet certain criteria (i.e. if the date is <= 7 days from now).
I started with a normal SELECT statement selecting each column, with a join to Project and then a WHERE clause using
(DateTable.ColumnName BETWEEN GETDATE() AND DATEADD(day, 7, GETDATE()))
OR (DateTable.ColumnName BETWEEN GETDATE() AND DATEADD(day, 7, GETDATE()))
... and so on for the rest of the columns (all with OR between them).
The problem with this is that, because I am using OR, once one of the dates meets the criteria it selects all the dates that are associated with the project. I ONLY want the dates that meet the criteria and don't care about the rest.
Obviously because I have all the columns in the select statement... So I need something like
Select ALL Columns from DateTable d Join Project p where p.ProjectID = d.ProjectID AND only dates BETWEEN GETDATE() AND DATEADD(day, 7, GETDATE()))
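One hedged way to get only the qualifying dates is to UNPIVOT the 20 date columns into rows and filter those rows; UNPIVOT also drops NULLs automatically, which suits this case. A sketch with three hypothetical column names standing in for the twenty:

DECLARE @ProjectID int = 1;   -- hypothetical project
SELECT u.ProjectID, u.DateName, u.DateValue
FROM (SELECT d.ProjectID, d.Date1, d.Date2, d.Date3   -- list all 20 date columns
      FROM dbo.DateTable AS d
      WHERE d.ProjectID = @ProjectID) AS src
UNPIVOT (DateValue FOR DateName IN (Date1, Date2, Date3)) AS u
WHERE u.DateValue BETWEEN GETDATE() AND DATEADD(day, 7, GETDATE());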
What is the best and quickest way to add 500 columns to an existing table that already has 145 columns?
Is there any way to avoid the manual work of adding the columns one by one in design mode or in a script?
I have a TXT file (comma delimited) that contains all those column names as its first row, but I am not sure if I can use a DTS package to create the table design from such a source TXT file.
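If DTS/SSIS proves awkward, the header row can drive generated DDL directly. A rough sketch (the table name and sample header are hypothetical, and every column is typed varchar(255) purely for illustration):

DECLARE @header nvarchar(max) = N'col1,col2,col3';   -- paste the TXT file's first line here
DECLARE @sql nvarchar(max) =
    N'ALTER TABLE dbo.MyTable ADD '
    + REPLACE(@header, N',', N' varchar(255) NULL, ')
    + N' varchar(255) NULL;';
EXEC sys.sp_executesql @sql;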
I am trying to insert values into a single table with four columns from 4 different sources. Is it possible to run these 4 insertions in parallel? All these insertions are independent of each other.
SELECT TOP (100) PERCENT
    dbo.Filteredfs_franchise.fs_franchiseid AS FranchiseId,
    dbo.Filteredfs_franchise.fs_brandidname AS Brand,
    dbo.Filteredfs_franchise.fs_franchisetypename AS [Franchise Type],
    dbo.Filteredfs_franchise.fs_franchisenumber AS [Franchise Number],
    dbo.Filteredfs_franchise.fs_transactiontypename AS [Transaction Type],
    dbo.Filteredfs_franchise.fs_franchisestatusname AS [Status Code],
[Code] ....
I need to pivot this so I can get one row per FranchiseID and multiple columns for [Franchisee Name Entity] and [Franchise Name Individual]. Each [Franchisee Name Entity] and [Franchise Name Individual] has an associated percentage of ownership.
This has to be dynamic, because each FranchiseID can have anywhere from 1 to 12 respective owners, and those can be any combination of Entity and Individual. Please see the attached example for Franchise Number 129 (that one would have 6 additional columns because there are 3 Individual owners, each with a respective Percentage of ownership).
The question is how do I PIVOT and preserve the percentage of ownership?
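The percentage can survive the pivot if owner name and percentage are pivoted as a *pair*, keyed by an owner ordinal. A static sketch for the first two owners; a dynamic version would generate these column pairs up to 12, and all names here are assumptions:

SELECT f.FranchiseId,
       MAX(CASE WHEN o.rn = 1 THEN o.OwnerName END)    AS Owner1,
       MAX(CASE WHEN o.rn = 1 THEN o.OwnershipPct END) AS Owner1Pct,
       MAX(CASE WHEN o.rn = 2 THEN o.OwnerName END)    AS Owner2,
       MAX(CASE WHEN o.rn = 2 THEN o.OwnershipPct END) AS Owner2Pct
FROM dbo.Franchise AS f
LEFT JOIN (SELECT FranchiseId, OwnerName, OwnershipPct,
                  ROW_NUMBER() OVER (PARTITION BY FranchiseId ORDER BY OwnerName) AS rn
           FROM dbo.FranchiseOwner) AS o
       ON o.FranchiseId = f.FranchiseId
GROUP BY f.FranchiseId;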
For each customer, I want to add each of their telephone numbers in a different column; that is, multiple columns (depending on the number of telephone numbers) for each customer/row. How can I achieve that?
I want my output to be
CUSTOMER ID, FIRST NAME, LAST NAME, TEL1, TEL2, TEL3, ... etc
Each 'Tel' will relate to one or more records in the PHONES table that is linked back to the customer.
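A sketch that numbers each customer's phones and pivots a fixed number of them into columns (the CUSTOMERS/PHONES schemas are assumed from the question; a truly variable column count would need dynamic SQL):

WITH NumberedPhones AS (
    SELECT p.CustomerID, p.PhoneNumber,
           ROW_NUMBER() OVER (PARTITION BY p.CustomerID ORDER BY p.PhoneNumber) AS rn
    FROM dbo.PHONES AS p
)
SELECT c.CustomerID, c.FirstName, c.LastName,
       MAX(CASE WHEN n.rn = 1 THEN n.PhoneNumber END) AS Tel1,
       MAX(CASE WHEN n.rn = 2 THEN n.PhoneNumber END) AS Tel2,
       MAX(CASE WHEN n.rn = 3 THEN n.PhoneNumber END) AS Tel3
FROM dbo.CUSTOMERS AS c
LEFT JOIN NumberedPhones AS n ON n.CustomerID = c.CustomerID
GROUP BY c.CustomerID, c.FirstName, c.LastName;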
I'm fairly new to SQL and am just setting up a Windows 8 app using an Azure SQL server. The issue I have is looking up a part number supersession and getting the latest number. One part number can have multiple supersessions (i.e. RTC5756 > STC8572 > STC3765 > STC9150 > STC9191 > SFP500160). The data I am supplied monthly has both the superseded items and the supersession information in both columns and is not easy to decipher; for example:
The newest part number is kept in a separate table, called "source", which in this instance is SFP500160. I need access to the latest part number, but also to the part's previous numbers, due to the fact that some people may still be stocking them as an old part number and will search by it. Is there an easy and efficient way of doing both a lookup for the supersessions and a join on the two tables to minimize the queries on the database?
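Assuming the monthly data can be shaped into an OldPart/NewPart pair table (a guess at the schema), a recursive CTE can walk the chain from any number to the newest; selecting from the CTE without TOP also lists every previous number for old-number searches:

WITH Chain AS (
    SELECT s.OldPart, s.NewPart, 1 AS Depth
    FROM dbo.Supersessions AS s
    WHERE s.OldPart = 'RTC5756'          -- the number being searched
    UNION ALL
    SELECT s.OldPart, s.NewPart, c.Depth + 1
    FROM dbo.Supersessions AS s
    JOIN Chain AS c ON s.OldPart = c.NewPart
)
SELECT TOP (1) NewPart AS LatestPart     -- SFP500160 for the example chain
FROM Chain
ORDER BY Depth DESC;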
Say you have a fact table with a few columns that all reference the same key column in a dimension table. How do you write a view to return the information for those keys?
USE MyTestDB;
GO
SET NOCOUNT ON;
IF OBJECT_ID('dbo.FactTemp', 'U') IS NOT NULL
    DROP TABLE dbo.FactTemp;
[Code] ....
I'm using very small data at the moment, and the query plan and statistics don't really say which way is better.
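The usual pattern here is the "role-playing dimension" join: reference the dimension once per key column under a different alias. A sketch with assumed names (FactTemp follows the snippet above; DimPerson and its columns are hypothetical):

CREATE VIEW dbo.FactTempView AS
SELECT f.FactID,
       d1.Name AS OpenedByName,
       d2.Name AS ClosedByName
FROM dbo.FactTemp AS f
LEFT JOIN dbo.DimPerson AS d1 ON d1.PersonKey = f.OpenedByKey
LEFT JOIN dbo.DimPerson AS d2 ON d2.PersonKey = f.ClosedByKey;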
I have a table of raw data where each column can be null. The thought was to create an identity key (1,1) and set it as the primary key for each row (name / address / zip / country / joindate / spending), with surrogate key "pkid". However, other queries will not use this primary key. So, for instance, they may count the number of folks at a zip, or select all names, addresses, etc. The queries may order by join date, or select all the people that joined on a specific date. No other code would logically use the surrogate primary key, so would it still have any performance benefits? At this time the table has no clustered or nonclustered indexes or keys. I'm curious what happens if there are millions of records.
SELECT MemberNbr, EligYear, EligMonth, COUNT(*)
FROM [MemberMonth]
GROUP BY MemberNbr, EligYear, EligMonth
HAVING COUNT(*) > 1
THIS WILL GIVE ME ZERO ROWS. That is GOOD, which means NO DUPLICATES. I need to add a constraint to the table MemberMonth to make sure no one can insert duplicate records.
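Since the data is already clean, a unique constraint (or unique index) on the three columns will reject any future duplicate; a minimal sketch:

ALTER TABLE dbo.MemberMonth
ADD CONSTRAINT UQ_MemberMonth_MemberNbr_EligYear_EligMonth
    UNIQUE (MemberNbr, EligYear, EligMonth);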
IF NOT EXISTS (SELECT TOP 1 1 FROM dbo.syscolumns
               WHERE id = OBJECT_ID(N'dbo.Employee') AND name = 'DoNotCall')
BEGIN
    ALTER TABLE [dbo].[Employee]
        ADD [DoNotCall] bit NOT NULL
        CONSTRAINT DoNot_Call_Default DEFAULT 0
    IF (@@ERROR <> 0) GOTO QuitWithRollback
END
It just takes a LOT of time in SQL Server Management Studio. I have to cancel the query, and cancelling takes a whole lot of time too. I am using SQL Server 2008.
I am having trouble trying to find the max of 2 columns in one table. I've tried using a common table expression and a subquery, but can't seem to get the correct results. I want to get the max refnum, then the max "number" associated with that max refnum, along with the date and decision.
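A sketch that avoids the CTE/subquery entirely: sort by refnum, then by number, and take the first row (the table name is a placeholder; column names follow the question):

SELECT TOP (1) refnum, [number], [date], decision
FROM dbo.MyTable
ORDER BY refnum DESC, [number] DESC;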