I've got a large table (140+ million rows of very wide
data) that we want to change the schema on, basically
to remove a number of unused data elements.
Anyway, does anyone know if SQL Server will do an in-place
change, or if it will copy the table to a new table, thereby
increasing my space allocation needs? If it copies the table
first, I would effectively need space for two tables while the
change is happening. This is not good, as I do not have
enough available space at the moment.
If you've got pointers to specific MS docs regarding
this issue, please let me have 'em.
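For what it's worth, in SQL Server ALTER TABLE ... DROP COLUMN is a metadata-only change, so it should not copy the table; the dropped column's bytes simply stop being used, and the space is only reclaimed later. A minimal sketch (table, column, and database names here are made up):

-- Dropping a column is a metadata-only operation; no second copy of
-- the table is needed.
ALTER TABLE dbo.WideTable DROP COLUMN UnusedCol;

-- Reclaiming the space afterwards is a separate, heavier step:
-- DBCC CLEANTABLE frees space from dropped variable-length columns;
-- fixed-length column space comes back only with an index rebuild.
DBCC CLEANTABLE ('MyDatabase', 'dbo.WideTable');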
I've got a situation where the columns in a table we're grabbing from a source database keep changing as we need more information from that database. As new columns are added to the source table, I would like to detect those new columns dynamically and add them to our local database's schema if new ones exist. Right now we're dropping and recreating our target table each time based on a pre-defined, known schema, but what we really want is to drop and recreate it based on a dynamic schema, and then import all of the records from the source table into ours.

It looks like a starting point might be EXEC sp_columns_rowset 'tablename' and then building some kind of dynamic SQL statement based on that. However, I'm hoping someone might have a resource that already handles this that they could steer me towards.

Sincerely, Bryan Ax
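A hedged sketch of the drop-and-recreate-dynamically approach, assuming a linked server named SRC pointing at the source database (all names are placeholders): SELECT ... INTO recreates the target with whatever columns the source currently has and loads all rows in one step.

IF OBJECT_ID('dbo.StageCopy') IS NOT NULL
    DROP TABLE dbo.StageCopy;

-- Recreates dbo.StageCopy with the source table's current column list,
-- then copies every row. Keys, constraints, and indexes are not copied.
SELECT *
INTO dbo.StageCopy
FROM SRC.SourceDb.dbo.SourceTable;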
We have a table in our ERP database, and we copy data from this table into another "stage" table on a nightly basis. Is there a way to dynamically alter the schema of the stage table when the source table's structure is changed? In other words, if a new column is added to the source table, I would like to add that column to the stage table during the nightly refresh.
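One possible sketch for the add-missing-columns case, assuming both tables are visible from the same instance (names are placeholders, and the type rendering below is simplistic: it ignores decimal precision, collation, and so on). It diffs INFORMATION_SCHEMA.COLUMNS and generates ALTER TABLE ... ADD statements for columns the stage table lacks.

DECLARE @sql nvarchar(max);
SET @sql = N'';

-- Build one ALTER TABLE ... ADD per column that exists in the source
-- table but not in the stage table.
SELECT @sql = @sql
    + N'ALTER TABLE dbo.StageTable ADD '
    + QUOTENAME(s.COLUMN_NAME) + N' ' + s.DATA_TYPE
    + CASE WHEN s.CHARACTER_MAXIMUM_LENGTH = -1 THEN N'(max)'
           WHEN s.CHARACTER_MAXIMUM_LENGTH IS NOT NULL
                THEN N'(' + CAST(s.CHARACTER_MAXIMUM_LENGTH AS nvarchar(10)) + N')'
           ELSE N'' END
    + N' NULL; '
FROM INFORMATION_SCHEMA.COLUMNS AS s
WHERE s.TABLE_NAME = 'SourceTable'
  AND NOT EXISTS (SELECT 1
                  FROM INFORMATION_SCHEMA.COLUMNS AS t
                  WHERE t.TABLE_NAME = 'StageTable'
                    AND t.COLUMN_NAME = s.COLUMN_NAME);

EXEC sp_executesql @sql;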
I'd like to create a temporary table with the same schema as an existing table. How can I do this without hard-coding the column definitions into the temporary table definition?
I'd like to do something like:
CREATE TABLE #tempTable LIKE anotherTable
..instead of...
CREATE TABLE #tempTable (id INT PRIMARY KEY, created DATETIME NULL etc...
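There is no CREATE TABLE ... LIKE in T-SQL, but a common workaround is SELECT ... INTO with a false predicate, which copies the column definitions (though not keys, constraints, or indexes):

-- WHERE 1 = 0 keeps the new table empty while still copying the columns.
SELECT *
INTO #tempTable
FROM anotherTable
WHERE 1 = 0;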
I am wondering whether tempdb temporarily stores all results whenever I query a large fact table (over 4 million records) that joins another dimension table. Each time I run the query, tempdb grows to nearly 1 GB, which nearly uses up all the space on my local system drive, and as a result performance drops badly. Is there any way to fix this problem? Thanks a lot in advance; I look forward to your advice.
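If the system drive is simply too small for tempdb's working size, one common fix (a sketch, assuming a larger D: drive exists; the move only takes effect after the service restarts) is to relocate the tempdb files:

-- tempdev and templog are the default logical file names for tempdb.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'D:\SQLData\tempdb.mdf');
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'D:\SQLData\templog.ldf');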
I come from a MySQL background and have been having some trouble finding the MS SQL command that corresponds to MySQL's "describe." I Googled for around half an hour but can't seem to find it.
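The closest equivalents (assuming a table named dbo.MyTable):

EXEC sp_help 'dbo.MyTable';  -- columns, types, nullability, keys, indexes

SELECT *                     -- the portable, query-style alternative
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MyTable';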
I'd like it to simply use the schema of an already existing table (plus an additional column).
Is there a way to do this without having to manually write the table schema?
A simple example is:
I have a table OriginalTable (idCol, NameCol, InfoCol)
I'd like to create a temp instance of that table called tempTable which would have a final schema of
tempTable (idCol, NameCol, InfoCol, myTempCol)
The reason I'd like to do this using the schema is so that I don't need to update all my procedures in the future when we decide to add more detail to OriginalTable that needs to be selected as well.
Thanks a lot for any help or direction you can provide. -Ashleigh
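A sketch for the example above, using the same SELECT ... INTO trick shown earlier: copy OriginalTable's current columns and bolt on the extra one, so the procedures pick up future schema changes automatically. The int type for myTempCol is an assumption; use whatever the column actually needs.

SELECT o.*, CAST(NULL AS int) AS myTempCol
INTO #tempTable
FROM OriginalTable o
WHERE 1 = 0;  -- copies the shape only; drop this line to copy the rows too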
I have a group of TV listing data I need to map into database tables. The data looks like the following: http://www.oniva.com/upload/1356/crew.jpg

I want to create a table for the productionCrew of each TV program. The data is like:

crew
  programID
    member, member, member ... etc
  programID
    member, member, member ... etc
  ... etc

The above is productionCrew data for all TV programs; for each programID we have a list of members. Should I merge all members into one big string? Or should I use 2 tables to store the crew data? If I use 2 tables, what would the fields/columns look like?
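A sketch of the two-table layout, which keeps individual members queryable (merging them into one big string would make joins and searches much harder). All names and types here are illustrative:

CREATE TABLE Program (
    ProgramID int PRIMARY KEY,
    Title     varchar(200) NOT NULL
);

CREATE TABLE ProductionCrew (
    ProgramID  int          NOT NULL REFERENCES Program (ProgramID),
    MemberName varchar(100) NOT NULL,
    PRIMARY KEY (ProgramID, MemberName)
);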
I am developing an application that has a table with lots of records (network traffic), but the data is summarized every so often to create summary records (old records are deleted). The problem is that I have a PK based on an auto-increment ID (int) that will run out of numbers. However, this ID is not referenced anywhere: it is not a foreign key from another table, it is not used for deletion, and there are no updates in this table whatsoever.
So my possibilities are:
1. Reseed the ID when it is about to run out.
2. Make the ID bigint.
3. Remove the ID and change the PK to 2 other fields.
4. Remove the ID and go without a PK.
I am leaning toward option 4, because I do not see the need for a PK, but I understand that it is quite out of the ordinary, so I would like to hear from other people (I do not have much experience with databases).
I also like option 3. I already have an index on one of the other fields (time).
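Hedged sketches of options 1 and 2; the table and constraint names are assumptions:

-- Option 1: reseed once the old, summarized-away IDs are gone. Only safe
-- if no surviving row can collide with newly generated values.
DBCC CHECKIDENT ('dbo.Traffic', RESEED, 0);

-- Option 2: widen the column to bigint. The PK constraint has to come off
-- first, and the rewrite takes time and space on a large table.
ALTER TABLE dbo.Traffic DROP CONSTRAINT PK_Traffic;
ALTER TABLE dbo.Traffic ALTER COLUMN ID bigint NOT NULL;
ALTER TABLE dbo.Traffic ADD CONSTRAINT PK_Traffic PRIMARY KEY (ID);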
If I use BCP to export a very large table, will that table be blocked for writes during the export process? I don't want to prevent users from accessing that table during the bcp process. Thank you, TFD.
I have a problem: in one of my databases I have a table tblpersons in schema web; the rest of the tables in the db are in schema dbo. I removed the user web, but now I want to change the schema of tblpersons from web.tblpersons to dbo.tblpersons. I created a stored procedure with the line "ALTER SCHEMA dbo TRANSFER web.tblpersons", but when executing it I get an error that the table tblpersons couldn't be found.
Hi all, I'm using SQL Server 2005 CTP. I created a database called TEL, and in that database I created a user (in Security) as:

USE [TEL]
GO
/****** Object: User [COLL_DB] Script Date: 09/27/2005 15:38:51 ******/
GO
CREATE USER [COLL_DB] FOR LOGIN [loginName] WITH DEFAULT_SCHEMA=[COLL_DB]
Now, when I try to create a table in the database TEL as

CREATE TABLE [COLL_DB].abc (c numeric)
commit;

it gives me an error saying: The specified schema name "COLL_DB" either does not exist or you do not have permission to use it.

Can someone tell me what I have to do to fix this error? Thanks...
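The likely cause (an inference from the error text): CREATE USER ... WITH DEFAULT_SCHEMA does not create the schema itself, it only records the name. A minimal sketch of creating the schema explicitly before using it:

USE [TEL]
GO
-- CREATE SCHEMA must be the only statement in its batch.
CREATE SCHEMA [COLL_DB] AUTHORIZATION [COLL_DB]
GO
CREATE TABLE [COLL_DB].abc (c numeric)
GO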
We currently have all tables in the dbo schema, but for organizational reasons we would like to split them up into multiple schemas, and I wonder if that can be done without re-creating the tables.
So my question is, is there a way to move a table into a different schema without re-creating it? (For those familiar with Postgres I'm looking for an equivalent to "ALTER TABLE foo SET SCHEMA newschema") sp_rename only allows a "one-part name" for the new name, so apparently that cannot be used.
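The SQL Server equivalent is ALTER SCHEMA ... TRANSFER, which is a metadata-only change (no table rebuild). A minimal sketch, assuming the target schema is named newschema:

-- CREATE SCHEMA is only needed if the schema doesn't exist yet.
CREATE SCHEMA newschema;
GO
ALTER SCHEMA newschema TRANSFER dbo.foo;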
I have created a table on SQL Server from SAS. The table gets created fine. However, the table schema has my user ID in it (AD-ENTmyuserid.Table1). How can I change the schema to dbo (dbo.Table1)? It's fine if I have to make this change in SQL Server Management Studio.
I am a bit confused and wish to share this with you for help. We are designing a billing application to bill telephone calls. It currently handles a single rate plan: it looks up the RATES table, matches the called number's area code against the RATES.ACCESS_Code field to find the tariff for that area, and multiplies that by the number of minutes. Here is the current schema.
CALLS
• ID (pkid)
• Called Number
• Duration

RATES
• Destination Name
• Access_Code (pkid)
• Tariff
Now the problem is that we need to process calls based on RATES per OPERATOR. Each operator is a telephony carrier with similar RATES. However, each call will be prefixed with a number to indicate which operator carried that call. Accordingly, the database should relate that prefix to the proper operator and then look up the RATES that are related to that operator.
In effect we will have a replica of the RATES table for each operator. An operator only needs two fields, I guess (name and ID).
So now we need to re-engineer the schema to adapt to this situation.
E.g. 95004433313445 (will be identified as the BT operator), 93004422376234 (will be identified as the AT&T operator).
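One possible re-engineered schema (a sketch; names and types are illustrative). RATES gets a composite key so each operator can carry its own tariff for the same access code, and the prefix on the dialled number resolves the operator:

CREATE TABLE OPERATORS (
    OperatorID int         PRIMARY KEY,
    Name       varchar(50) NOT NULL,
    Prefix     varchar(10) NOT NULL UNIQUE  -- e.g. '95' for BT, '93' for AT&T
);

CREATE TABLE RATES (
    OperatorID      int          NOT NULL REFERENCES OPERATORS (OperatorID),
    Access_Code     varchar(10)  NOT NULL,
    DestinationName varchar(50)  NOT NULL,
    Tariff          decimal(9,4) NOT NULL,
    PRIMARY KEY (OperatorID, Access_Code)
);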
I created a schema, Admin. I have to transfer a table from the dbo schema to the admin schema. I keep getting an error that I do not have permission or the table does not exist.
Simply looking for confirmation here - is my syntax correct?
ALTER SCHEMA Admin TRANSFER MyShop.Addresses; (MyShop is the Database, Addresses is the table)
NOTE: When I created the schema, I did not create an inner table. The syntax for that was simply CREATE SCHEMA Admin;
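For what it's worth, the TRANSFER clause takes schema.object, not database.table, so the statement above would be read as a schema named MyShop. A sketch of the corrected form, run inside the MyShop database and assuming Addresses currently lives in dbo:

USE MyShop;
GO
ALTER SCHEMA Admin TRANSFER dbo.Addresses;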
I am more familiar with writing SQL for Oracle than MS SQL Server. Within Oracle there is a simple command, 'Describe', which actually shows the data types, and whether or not an attribute accepts NULLS.
MS SQL Server does not appear to support such a command; however, I am looking for a way to describe the attributes of tables nonetheless.
I need the schema details of the table: the datatypes of columns A, B, and C, the default values of all the columns, and all other details of those 3 columns.
Actually, I am looking to get the script of a table that is already in the database. For example, there is a query

SELECT ROUTINE_DEFINITION
FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_TYPE = 'PROCEDURE'

which returns the routine definition of a procedure. Is there anything similar for a table? I know I can just right-click on the table and create a script, but I don't want to use that method.
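There is no ROUTINE_DEFINITION equivalent that returns a table's CREATE script; the metadata has to be read and assembled (or scripted via SMO). A sketch of the raw ingredients, which also covers the datatype, nullability, and default details asked about above (the table name is a placeholder):

SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH,
       IS_NULLABLE, COLUMN_DEFAULT
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MyTable'
ORDER BY ORDINAL_POSITION;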
Locally I develop in SQL Server 2005 Enterprise. Recently I recreated my DB on the server of my hosting company (in SQL Server 2005 Express). I basically recreated the tables and copied the data into them. I now receive the following error when I hit the DB:

The 'System.Web.Security.SqlMembershipProvider' requires a database schema compatible with schema version '1'. However, the current database schema is not compatible with this version. You may need to either install a compatible schema with aspnet_regsql.exe (available in the framework installation directory), or upgrade the provider to a newer version.

I heard something about running aspnet_regsql.exe, but I don't have that kind of access to the DB. Also, I don't know if this command does anything more than creating the membership tables and filling them with some default data. Any other solutions/thoughts on what this could be? Thanks!
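One thing worth checking (an assumption, not a confirmed fix): aspnet_regsql.exe creates stored procedures and a version table as well as the membership tables, so copying only the tables can leave the provider unable to verify its schema version. Comparing this table between the two databases would show whether that data came across:

SELECT Feature, CompatibleSchemaVersion, IsCurrentVersion
FROM dbo.aspnet_SchemaVersions;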
Hi, I have this page that uploads PDFs to a table. In principle this works fine, until I try to upload large files (3 to 4 MB), and I need to upload even larger files than that (I don't really know yet what users are going to come up with). I get timeout problems. Some people say it is not possible to exceed a limit of about 4 MB, but that there is a workaround by changing something in the web.config file. Can somebody give me info about that? (I am quite a novice, really.) I tried to change it like this, but to no avail:

<system.web>
  <httpRuntime maxRequestLength="102400"
               enable="True"
               requestLengthDiskThreshold="102400"
               useFullyQualifiedRedirectUrl="True"
               executionTimeout="102400" />
</system.web>

Thanks for any help!
On a nightly basis, this table gets rebuilt in a temporary database. Once the table has been built and scrubbed, I need to move it into our web server's DB.
I'd like to do this with minimal interruption to the website.
Possible techniques:
1) I could set up a DTS package to copy the table object overwriting the destination table
2) I could export to a flat file and then bulk import into the live table (after truncating it)
3) I could run a process to update smaller chunks of data at a time running delete queries and insert queries.
Anybody have a thought on the best way to do this so that the web users would be virtually unaware that anything was happening ?
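A fourth technique that often minimizes the visible gap (a sketch; the table names are assumptions): load the scrubbed data into a side table in the web server's DB, then swap it in with two quick renames rather than truncating and reloading the live table.

-- Users see the old table right up until the near-instant rename swap.
EXEC sp_rename 'dbo.LiveTable', 'LiveTable_old';
EXEC sp_rename 'dbo.LiveTable_new', 'LiveTable';
DROP TABLE dbo.LiveTable_old;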
I am absolutely innocent as far as T-SQL is concerned. I need to detect all duplicates (the key consists of 5 fields) in the table and delete them. I tried different approaches, like joins etc., but no luck. Any help is appreciated. Thanks.
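A common pattern for this (a sketch, assuming SQL Server 2005 or later, a table dbo.MyTable, and key fields named K1 through K5): number the rows within each duplicate group and delete everything past the first.

WITH numbered AS (
    SELECT ROW_NUMBER() OVER (
               PARTITION BY K1, K2, K3, K4, K5
               ORDER BY (SELECT NULL)) AS rn
    FROM dbo.MyTable
)
DELETE FROM numbered
WHERE rn > 1;  -- keeps exactly one row per 5-field key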
OK, I imported 680 million records into an unindexed table. That went well.
Then, I went into Enterprise Manager and added a two column non-unique clustered index to that table to speed access.
It's been running for ~36 hours and I have no idea when it will complete. I have deadlines that I'm going to miss and am very nervous; what can I do?
SQL Server 2000 Enterprise Edition (8.00.818 - SP3 + hotfixes)
Dual 3GHz Xeon (two physical CPUs, each with HyperThreading enabled)
Windows 2000 SP4
4GB RAM (although I just noticed the /3GB OS switch wasn't on)
SCSI boot drive
tempdb, data, and transaction log are on a Fibre Channel RAID SAN
Hi folks! I'm looking for advice on partitioning a large table. In the DDL below I've changed names to protect the guilty.
My table has this schema:
CREATE TABLE [dbo].[BigTable] (
    [TimeKey]   [int]     NOT NULL,
    [SegmentID] [int]     NOT NULL,
    [MyVal]     [tinyint] NOT NULL
) ON [BigTablePS1] (TimeKey) -- see below for partition scheme
-- will evaluate whether this one is needed, my thinking is yes
-- based on the expected select queries.
CREATE INDEX NCI_SegmentID ON BigTable (SegmentID ASC)
The TimeKey column is sort of like a Unix time: it's the number of minutes since 2001/01/01, but always floored to a 5-minute boundary, so only multiples of 5 are allowed.
Now, this table will be rather big. There are about 20k possible SegmentIDs. For every TimeKey from 2008/01/01 to 2009/01/01 (12 months), I'll have on the order of 20000 rows, one for each SegmentID.
For the 12 month period, there are 365*24*60/5=105120 possible TimeKey values. So the total rowcount is over 2 billion. (20k * 105120)
Select queries are expected to be something like this:
-- fetch just one particular row...
SELECT MyVal
FROM BigTable
WHERE TimeKey = 5555 AND SegmentID = 234234
-- fetch for a certain set of SegmentIDs and a particular time...
SELECT b.SegmentID, b.MyVal
FROM BigTable b
JOIN OtherTable t ON t.SegmentID = b.SegmentID
WHERE b.TimeKey = 5555 AND t.SomeColumn = 'SomeValue'
Besides selects, I also need to be able to efficiently issue update statements against the table with new values in the MyVal column, based on a range of TimeKey values (a contiguous span of a few days) and a set of about 1000 SegmentIDs. Updates would always look like this:
UPDATE t
SET t.MyVal = p.MyVal
FROM BigTable t
JOIN #myTempTable p
  ON t.TimeKey = p.TimeKey AND t.SegmentId = p.SegmentId
where #myTempTable would have on the order of 1000*24*60 rows in it, all with contiguous TimeKey values, and about 1000 different SegmentID values. #myTempTable also has a clustered PK on (TimeKey ASC, SegmentId ASC).
After the table is loaded, it would never get any inserts or deletes, only selects and updates.
Given the size, and the nature of the select and update queries, this table seems like a good candidate for partitioning. I'm thinking it makes sense to partition on TimeKey.
So my question is, is it stupid to create a separate partition for each day in the year long span of TimeKeys this table covers? That would mean 365 partitions in the partition function and partition scheme. Something like this:
CREATE PARTITION FUNCTION [BigTableRangePF1] (int)
AS RANGE LEFT FOR VALUES (
    3680640 + 0*1440,   -- 3680640 is the number of minutes between 2001/01/01 and 2008/01/01
    3680640 + 1*1440,
    3680640 + 2*1440,
    3680640 + 3*1440,
    ...snip...
    3680640 + 363*1440,
    3680640 + 364*1440,
    3680640 + 365*1440
);
GO
CREATE PARTITION SCHEME [BigTablePS1]
AS PARTITION [BigTableRangePF1] TO (
    [PRIMARY], [PRIMARY], [PRIMARY],
    ...snip...
    [PRIMARY], [PRIMARY], [PRIMARY]
);
GO
Does anyone have any experience with partitioned tables with so many partitions? Are a few hundred partitions too many? From my understanding of partitions, it seems like having so many will be OK. Is it somehow worse than having hundreds of tables in a database?
Even with one partition for each day, I'll still have 24*60/5 * 20000 ≈ 5.8M rows in each one.
Greetings all. I was wondering what would happen if I were to do a "select * from table" on a table that has about 5 million rows. Would my read block other writers to the same table? Would it block other readers? I know SQL uses optimistic locking by default, but I am not sure what this means for other users trying to access the same table. Any advice would be greatly appreciated. TFD
I have a query that takes 12 minutes to execute. The query uses around 9 tables, but I have narrowed the problem down to one table that has over 65 million rows. The problem table has only 3 fields.
The query uses the primary key of this table to perform the join. FieldTwo and FieldThree are only used as output parameters.
I noticed if I remove FieldTwo and FieldThree from the output (but still leave the table in the query), the query executes in 1 second. However if I include FieldTwo and FieldThree in the output, the query takes over 12 minutes to execute.
I cannot index FieldTwo and FieldThree because of the field size, and I cannot reduce the size of the fields because of the data that needs to be stored in them. How can I index, or do something similar, to speed up the table lookup?
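One option that sidesteps the key-size limit (a hedged sketch; names are placeholders, and it assumes SQL Server 2005 or later): wide columns that are only ever output, never searched, can ride along in the leaf level of an index as INCLUDE columns, which are not subject to the 900-byte index key limit.

CREATE NONCLUSTERED INDEX IX_BigTable_Lookup
ON dbo.BigTable (KeyField)          -- the field used in the join
INCLUDE (FieldTwo, FieldThree);     -- covered output columns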
I have been asked to look at some performance issues with an application that utilises an 800GB table. This table is huge and contains 4 int columns and 1 decimal column. The table has a clustered index that covers 4 of the int columns; it is heavily fragmented and has not been maintained for a long time. The system has limited free space to even attempt rebuilding the index. Does anyone have any experience of running the ALTER INDEX REORGANIZE command on such a large table? Any information on what storage would be required to attempt this, and how long it would take?
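For reference, REORGANIZE is the space-friendly choice in this situation: it compacts pages in place in small transactions, needs no large free-space allocation the way a rebuild does, and can be stopped and restarted without losing the work already done. A sketch (the index and table names are placeholders):

ALTER INDEX CI_BigTable ON dbo.BigTable REORGANIZE;

-- Check fragmentation before and after:
SELECT avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats
     (DB_ID(), OBJECT_ID('dbo.BigTable'), NULL, NULL, 'LIMITED');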