We have a table that simply stores all changes to a specific record in another table and can get very large. The relationship is such that there are many records in the transaction table for each record in the parent table. How many depends on how many times the record has been updated, and there can be multiple entries for each column.
The transaction table has a clustered index on a column defined as a UNIQUEIDENTIFIER. There are other indexes as well over business fields (basically foreign key columns). This obviously has some performance implications, and the index becomes fragmented very quickly during heavy loads. Then, as expected, the performance issues cascade to queries, etc.
Anyway, we are looking at two options: (a) removing the clustered index altogether and treating the table as a heap, or (b) adding another column defined as IDENTITY(1,1) and making that the clustered index.
My initial research tells me the heap is not the way to go, as there may still be performance issues with it. The second option guarantees that all new data is always added to the end of the table, which should minimize fragmentation. Keep in mind we do have regular maintenance jobs to rebuild/reorganize indexes and LOB data.
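In case it helps frame the comparison, here is a minimal sketch of option (b); all object names are invented for illustration, and note that adding a NOT NULL identity column to a large table is a size-of-data operation, so it would need a maintenance window:

-- Hypothetical transaction table; TransactionGuid currently carries the clustered index
ALTER TABLE dbo.RecordTransaction ADD TransactionSeq BIGINT IDENTITY(1,1) NOT NULL;

-- Drop the existing clustered index on the UNIQUEIDENTIFIER column
DROP INDEX IX_RecordTransaction_Guid ON dbo.RecordTransaction;

-- Re-cluster on the ever-increasing identity so new rows always append at the end
CREATE UNIQUE CLUSTERED INDEX CIX_RecordTransaction_Seq
    ON dbo.RecordTransaction (TransactionSeq);

-- Keep a nonclustered index on the GUID if queries still seek on it
CREATE NONCLUSTERED INDEX IX_RecordTransaction_Guid
    ON dbo.RecordTransaction (TransactionGuid);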
Can anyone shed their thoughts on these two options for this situation?
Hi guru, I've been at my new company for only a month and have started analysing index fragmentation. After I ran DBCC DBREINDEX and captured the data into a permanent table, I've seen lots of tables with no indexes. These tables showed: very low scan density, high extent fragmentation, and high Avg. Bytes Free per Page. What are the best strategies to defragment tables with no indexes? I'm planning to make a rule that each table must have a clustered index and that this index must be created on the best column (highest selectivity). Please help. Thanks, Silaphet
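For context on defragmenting heaps (table and column names below are placeholders): DBCC DBREINDEX only touches indexes, so the usual trick is to build a clustered index and then drop it, which rewrites and compacts the heap pages; on SQL Server 2008 and later, ALTER TABLE ... REBUILD does the same thing without the temporary index. The DROP INDEX syntax shown is the 2005+ form.

-- Option 1: build and then drop a clustered index (works on older versions too)
CREATE CLUSTERED INDEX CIX_Compact ON dbo.SomeHeapTable (SomeColumn);
DROP INDEX CIX_Compact ON dbo.SomeHeapTable;

-- Option 2: SQL Server 2008 and later only
ALTER TABLE dbo.SomeHeapTable REBUILD;

Be aware that creating and dropping the clustered index also rebuilds any nonclustered indexes on the table, so it is not free.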
I have a big table (heap)... well, not so big, but I have a small server and I want to spread access to it across several new disks dedicated only to that table. I know it's possible to do that by creating a clustered index with the "ON filegroup" option, but I want to keep it as a heap. Is there any way to do this without dropping indexes/references, bulk unloading, creating the table, bulk loading, and creating the indexes again?
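For reference, the approach I have seen for this (sketched with made-up names) is the one mentioned above: create a clustered index with ON <target filegroup>, then drop it. The data pages stay on the filegroup the clustered index was built on, so the table ends up as a heap on the new disks without a manual unload/reload; existing nonclustered indexes do get rebuilt as a side effect.

-- Moves the heap's data pages to FG_NewDisks, then returns the table to heap form
CREATE CLUSTERED INDEX CIX_Move ON dbo.BigHeapTable (SomeColumn) ON FG_NewDisks;
DROP INDEX CIX_Move ON dbo.BigHeapTable;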
"The system reports 99 percent memory load. There are 8584744960 bytes of physical memory with 5799936 bytes free. There are 8796092891136 bytes of virtual memory with 8794956038144 bytes free. The paging file has 7447801856 bytes with 5201920 bytes free."
The packages have been running for the last year with no issues. Admittedly, they were the only jobs running against the instance. I have now introduced additional databases and packages.
The packages do not overlap when they run, so there is currently no contention for resources.
I have no idea where to start looking to identify the culprit.
I have observed the available page file and physical memory being completely consumed. The only way I can get the memory released is by bouncing the instance.
The only thing that has changed is the page file. The system has 8 GB RAM and was configured with a 4 GB page file. I have recently created a second page file of 12 GB on an alternative disk, reduced the 4 GB page file to 100 MB, and left it on the root drive. When I view the page file size in properties, it says there is only 100 MB even though it creates a 12 GB page file.
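If the instance is SQL Server 2008 or later, a rough starting point (standard memory DMVs; no custom objects assumed) is to compare what Windows reports with what the SQL Server process itself is holding:

SELECT total_physical_memory_kb,
       available_physical_memory_kb,
       available_page_file_kb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;

SELECT physical_memory_in_use_kb,
       memory_utilization_percentage,
       process_physical_memory_low,
       process_virtual_memory_low
FROM sys.dm_os_process_memory;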
We have a highly transactional database. It was owned by a third party before, but now both the database and the application are on our site and we are trying to improve this project. We have a big (902,919 rows) heap table, which is getting bigger and bigger every day, and sometimes deadlocks occur. The table has only 4 columns, "token", "type", "value" and "cacheTime", and a unique index cannot be created. It has one index on "token" (char(36)) and "type" (varchar(50)) ("value" should also be included, but it is nvarchar(max)).
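One thing that may be worth double-checking (I may be missing a constraint on your side): on SQL Server 2005 and later, nvarchar(max) cannot be an index key column, but it can be an INCLUDE column, so a covering index along these lines might be possible. The table and index names below are hypothetical; the column names are taken from the description above.

CREATE NONCLUSTERED INDEX IX_Cache_Token_Type
    ON dbo.CacheTable (token, type)
    INCLUDE (value);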
The driver table, which keeps track of which datamarts ran and for what date range, gets updated frequently during the ETL run. There can be as many as 250 updates issued against this table in a single second.
This table is a heap, and there are no indexes on it.
During these updates we encounter deadlocks, causing the ETL job to fail.
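If it helps as a starting point, the usual first step is a narrow index on whatever columns the UPDATE statements filter on, since updates against a heap with no indexes must scan the whole table and tend to escalate locks. The column names below are only guesses based on the description; substitute the real update predicate columns.

-- Hypothetical: assumes the updates locate rows by datamart name and load date range
CREATE NONCLUSTERED INDEX IX_Driver_Datamart_Date
    ON dbo.EtlDriver (DatamartName, LoadDate);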
IF (SELECT OBJECT_ID('t1')) IS NOT NULL
    DROP TABLE t1
GO
CREATE TABLE t1 (c1 INT, c2 INT)

DECLARE @n INT
SET @n = 1
WHILE @n <= 454
BEGIN
    INSERT INTO t1 VALUES (@n, @n)
    SET @n = @n + 1
END

SELECT name, indid,
       CASE indid
           WHEN 0 THEN 'Table'
           WHEN 1 THEN 'Clustered Index'
           ELSE 'Nonclustered Index'
       END AS Type,
       dpages, rowcnt
FROM sysindexes
WHERE id = OBJECT_ID('T1')

name  indid  Type   dpages  rowcnt
----  -----  -----  ------  ------
NULL  0      Table  2       454

I have a table containing 454 rows of two columns of type INT, each being 4 bytes: c1 int = 4 bytes + c2 int = 4 bytes = 8 bytes per row. If I entered 454 rows: 454 * 8 = 3,632 bytes. Each SQL page is 8 KB = 8 * 1,024 bytes = 8,192 bytes. A data page header takes the first 96 bytes, leaving 8,096 bytes for data and row offsets. Each record uses a row offset at the end of the page consisting of 2 bytes: 454 * 2 = 908 bytes. 8,096 - 3,632 - 908 = 3,556 bytes. Should this be free data bytes?

For a heap table, does SQL add an internal uniqueifier column as well? Or rather, my question is: when does SQL add a uniqueifier? I am reading Inside SQL 2000 and trying to understand a few things. A uniqueifier of 4 bytes gets added when a clustered index exists but it is NOT a UNIQUE clustered index, and only when a duplicate key value is added do those records get a uniqueifier value. But in my example it's a heap table with no indexes. Even on a heap table with no indexes, does a ROWID or uniqueifier get added? Based on the INSERT statement above, all values are unique.

So what am I missing to understand why 453 rows fit on one data page whereas 454 rows require two data pages? Thank you
I have a bunch of heap tables and the fragmentation seems to be high. I am not sure whether I should add indexes to them, as these tables are inserted into and updated every day.
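On SQL Server 2005 and later, a query along these lines (database name is a placeholder) shows whether the problem is page fragmentation or forwarded records, which are specific to heaps and are often the real cost of frequent updates; note that DETAILED mode reads every page, so run it off-hours on big tables.

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       ips.index_type_desc,
       ips.avg_fragmentation_in_percent,
       ips.forwarded_record_count,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID('YourDb'), NULL, NULL, NULL, 'DETAILED') AS ips
WHERE ips.index_id = 0;   -- heaps only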
While I have learned a lot from this thread I am still basically confused about the issues involved.
I wanted to INSERT a record into a parent table, get the identity back, and use it in a child table. Seems simple.
To my knowledge, mine would be the only process running that would update these tables. I was told that there is no guarantee, because the OLEDB provider could write the second destination row before the first, that the proper parent-child relationship would be generated as expected. It was recommended that I create my own variable in memory to hold the Identity value and use that in my SSIS package.
1. A simple SSIS package example illustrating the approach of using a variable for the identity would be helpful.
2. Suppose I actually had two processes updating these tables, running at the same time. Then it seems the "variable" method will also have its problems. Is there a final solution other than locking the tables involved prior to updating them or doing something crazy like using a GUID for the primary key!
3. We have done the type of parent-child inserts I originally described from t-sql for years without any apparent problems. (Maybe we were just lucky.) Is the entire issue simply a t-sql one or does SSIS add a layer of complexity beyond t-sql that needs to be addressed?
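For what it's worth, the T-SQL side of this is usually handled with SCOPE_IDENTITY() rather than @@IDENTITY, since it is not affected by triggers on the parent table. A minimal sketch (table and column names invented) that could sit in a stored procedure called from an Execute SQL Task:

DECLARE @ParentId INT;

INSERT INTO dbo.Parent (Name) VALUES (N'example');
SET @ParentId = SCOPE_IDENTITY();     -- identity generated in this scope only

INSERT INTO dbo.Child (ParentId, Detail) VALUES (@ParentId, N'child row');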
I want to insert a new record into a table with an Identity field and return the new Identity field value back to the data stream (for later insertion as a foreign key in another table).
What is the most direct way to do this in SSIS?
TIA,
barkingdog
P.S. Or should I pass the identity value back in a variable and not make it part of the data stream?
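Regarding the P.S.: on SQL Server 2005 and later, one pattern that avoids a separate round trip is to have the INSERT itself return the new key as a one-row result set, which an Execute SQL Task (ResultSet = Single row) can map into a package variable. A sketch with hypothetical names:

INSERT INTO dbo.ParentTable (Name)
OUTPUT INSERTED.ParentId          -- the identity value just generated
VALUES (N'example');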
I have a table of three columns; the first column is an ID column. However, at creation of the table I did not set this column to auto-increment. I then copied 50 rows from another table into this table and set the ID column values to zero.
Now I have changed the ID column to auto-increment (seed = 1, increment = 1), but the problem is I can't figure out how to update this ID column, which currently has zero in every row, with the auto-increment values so that the ID column would have values from 1 to 50. Is there a way to do this?
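One caveat that may shape the approach: SQL Server does not allow an UPDATE on an identity column, so the renumbering generally has to happen while the column is still a plain int (or by copying the rows into a new table that already has the identity defined). A sketch of the renumbering step on a non-identity int column (SQL Server 2005+, names hypothetical, and it assumes any row order is acceptable):

WITH numbered AS
(
    SELECT ID,
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
    FROM dbo.MyTable
)
UPDATE numbered SET ID = rn;   -- assigns 1 through 50 across the 50 rows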
OK, I just need to know how to get the last record inserted, by the highest IDENTITY number, even if the computer was rebooted and the insert happened two weeks ago (it does not have to be tied to the session). Any help is appreciated. Thanks, Trint
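If the goal is simply the highest identity value currently in the table, independent of any session, the two usual options (table and column names are placeholders) are:

-- Last identity value generated for the table, regardless of session or connection
SELECT IDENT_CURRENT('dbo.MyTable');

-- Or fetch the latest row itself
SELECT TOP 1 * FROM dbo.MyTable ORDER BY ID DESC;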
Hi, I am having a problem doing a bulk update of a SQL Server table that has an identity column, from a DataTable (which has no identity column), using SqlBulkCopy. I tried several approaches, but it does not show any error, nor is the table getting updated. However, the identity value seems to be getting increased every time. Thanks. varun
I'm working with a third-party database (SQL Server 2005) and the problem here is the following:
- There are a bunch of ETL processes that need to insert rows into a table (let's call this table T), and at the same time an ERP (the owner of T) is up and running (reading, updating and inserting on T).
- The PK of T is an Integer.
Today all ETL processes use (select max(ID) + 1 from T) to insert new rows, so just picture the scenario. It is a mess! Every day they get duplicate key errors when 2 or more concurrent processes try to insert a row (with the max) at the same time.
Considering that I can't change the PK, what is the best approach to solve this problem?
To sum up:
* I need to have processes in parallel inserting on T
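Given that the PK and the ERP cannot be changed, one approach I have seen (all names hypothetical) is to replace select max(ID) + 1 with a single atomic allocation from a one-row key table, optionally wrapped in a retry on duplicate key. Note it only removes collisions between the ETL processes themselves; if the ERP keeps generating its own IDs, the allocator has to be kept ahead of whatever range the ERP uses.

DECLARE @NewId INT;

-- One-row allocator table, seeded above the current MAX(ID) in T
UPDATE dbo.T_KeyAllocator
   SET @NewId = NextId = NextId + 1;

INSERT INTO dbo.T (ID /*, other columns */)
VALUES (@NewId /*, other values */);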
When I alter a non-identity column to an identity column using this query:

alter table testid alter column test int identity(1,1)

I get this error message:

Msg 156, Level 15, State 1, Line 3
Incorrect syntax near the keyword 'identity'.
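As far as I know that error is expected: ALTER COLUMN cannot add the IDENTITY property to an existing column. The usual alternatives are to add a new identity column (sketch below) or to create a new table with the identity already defined and copy the data across.

ALTER TABLE testid ADD test_new INT IDENTITY(1,1) NOT NULL;

-- then, if needed, drop the old column and rename the new one
-- ALTER TABLE testid DROP COLUMN test;
-- EXEC sp_rename 'testid.test_new', 'test', 'COLUMN';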
I have found loads of topics on this but have yet to find one that gives the answer I need. I want to display the last insert ID in a label/textbox after the INSERT. How do I do this?
I have two tables. Table1 has the fields orderid, ofd, orderdate, and customername, where orderid is an autonumber. Table2 has orderid, ofd, productid, and productname. The problem here is that if a customer purchases 3 products at a time, all 3 products get the same ofd number, and any 2 customers can have the same ofd number. Now I have to pull the orderid value from table1 into table2. Can somebody help me with this? The front end is ASP.NET and the database is managed in SQL Server Management Studio.
Hi, I was looking through this thread about @@Identity: http://forums.asp.net/p/1039145/1443971.aspx#1443971 I'm still unsure how to use it. I have an Orders table, a Products table and a Products_Orders table. When I add an order to the Orders table I want the PK OrderID in this table to also update the FK OrderID in my Products_Orders table. I'm using SQL EXP05 and I'm in C#.
I know an identity key in a table can cause problems when the table is replicated. Should we avoid using identity keys altogether if we don't know in advance whether replication will come into the picture?
I've got a problem reading @@identity in VB.NET. I tried it the way below and get the error: Public member 'EOF' on type 'Integer' not found. (--> means with rsLastIdent)
comm_user = "SET NOCOUNT ON; INSERT INTO user (firstname, lastname, company, emailAddress) VALUES ...); SELECT @@IDENTITY AS Ident;"
I was wondering if someone could help me out with this stored procedure I have. I am trying to execute a transaction in one of my SPs and am getting PK violations on 'OrderID'. This is where I encounter the error:
SELECT @OrderID = @@Identity

/* Copy items from the given shopping cart to the OrderDetails table for the given OrderID */
INSERT INTO OrderDetails (OrderID, ProductID, Quantity, UnitCost)
SELECT @OrderID, ShoppingCart.ProductID, ShoppingCart.Quantity, Prices.UnitCost
FROM ShoppingCart
INNER JOIN Prices ON ShoppingCart.ProductID = Prices.ProductID
WHERE CartID = @CartID
Is there any way to rewrite this statement so that I can put it in the form INSERT () VALUES ()?
Hi, I'm trying to get a return value from this code, but I only get a 0. I am using SQL Express.

SqlParameter[] p = new SqlParameter[4];
p[0] = new SqlParameter("@a", "aaa");
p[1] = new SqlParameter("@b", "bbb");
p[2] = new SqlParameter("@c", "ccc");
p[3] = new SqlParameter("@d", SqlDbType.Int, 40);
p[3].Direction = ParameterDirection.ReturnValue;

string s = @"set nocount on INSERT INTO ABC(A, B, C) VALUES(@a,@b,@c) SELECT scope_identity()";

using (SqlConnection conn = new SqlConnection(this._connection))
{
    conn.Open();
    SqlHelper.ExecuteNonQuery(conn, CommandType.Text, s, p);
    int foo = p[3].Value;
}
I am trying to follow other examples I have seen on the site, and am still getting "Must declare the scalar variable "@@INDENTITY"".

string sqlAdd = string.Format(
    "INSERT INTO " + siteCode + "_campaign_table (campaign_name, prod_id, type) " +
    "VALUES('{0}', '{1}', '{2}'); SELECT @@INDENTITY",
    campaignName, prodID, type);

SqlCommand comAdd = new SqlCommand(sqlAdd, con);
comAdd.CommandType = CommandType.Text;
con.Open();
//comAdd.ExecuteNonQuery();
int identity;
identity = Decimal.ToInt32((decimal)comAdd.ExecuteScalar());
lblErrorMessageAdd.Text = identity.ToString();
con.Close();
I would like to know the best way to select/maintain a sequence number in SQL Server. I've seen locking problems with using the @@identity and was wondering if there is a better way.
Several of our applications have the need to generate a sequence number that is inserted into one or several tables. In one application they have done the following: created a table with a column defined with the identity attribute, for example TableA with ColA defined as int with Identity checked and ColB defined as char(20).
- In the application, the code to get the sequence number looks like this: insert into TableA (ColB) values ('anything'); select (@@identity) as sequence from TableA
Then the last 5 positions of the sequence are used for an insert into another table. The problem with this is that several rows are being created in TableA when only a sequence number is needed. Also, we need to make sure no one else does an insert before the select @@identity.
Another approach I'm thinking about would be to create a one-row table that contains an integer field initialized with a value of 1. To select/update the sequence number the code would need to do something like:

set transaction isolation level serializable
select number from tableA with (updlock)
update tableA set number = number + 1
How are most people generating a sequence number in SQL Server? In Oracle this would be done by selecting sequence.nextval. For example: Select sequenceA.nextval from dual;
Is there an equivalent way in SQL Server 7.0? Thanks.
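As far as I know, SQL Server 7.0 has no direct equivalent of Oracle's sequence.nextval, so the one-row counter table described above is the standard workaround on that version; native sequences only arrived in SQL Server 2012. Sketches of both, with hypothetical names (the hint syntax shown is the later WITH (...) form):

-- Counter-table pattern: serialize readers with UPDLOCK/HOLDLOCK inside a transaction
DECLARE @seq INT;
BEGIN TRAN;
    SELECT @seq = number FROM dbo.SequenceTable WITH (UPDLOCK, HOLDLOCK);
    UPDATE dbo.SequenceTable SET number = number + 1;
COMMIT;

-- SQL Server 2012 and later only
CREATE SEQUENCE dbo.SequenceA START WITH 1 INCREMENT BY 1;
SELECT NEXT VALUE FOR dbo.SequenceA;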
Is there a way I can give access (not dbo or sa) to a person so they can bcp into tables that have identity columns? I want to be able to grant the permissions ahead of time so I do not have to bother with setting IDENTITY_INSERT on every time he wants to bcp.
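I can't say for certain which permission applies on your version, but on SQL Server 2005 and later, keeping identity values during a bulk load (and SET IDENTITY_INSERT itself) requires ALTER permission on the table rather than dbo or sa, so something along these lines may be enough (object and user names hypothetical):

GRANT INSERT, ALTER ON dbo.TargetTable TO BulkLoadUser;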
I'm successfully inserting into a db using a stored proc, but I need to replicate the ClientID to 10 other tables. For some reason, this one is escaping me.
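If it helps, the shape I would expect inside the proc (SQL Server 2000 or later; names hypothetical) is to capture the new ClientID once with SCOPE_IDENTITY() and reuse the variable in each of the other inserts:

DECLARE @ClientID INT;

INSERT INTO dbo.Client (ClientName) VALUES (@ClientName);   -- @ClientName: a proc parameter
SET @ClientID = SCOPE_IDENTITY();

INSERT INTO dbo.ClientAddress (ClientID /*, ... */) VALUES (@ClientID /*, ... */);
INSERT INTO dbo.ClientContact (ClientID /*, ... */) VALUES (@ClientID /*, ... */);
-- ...and so on for the remaining tables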
Does anyone know if there is a bug/problem with the @@identity global variable in SQL Server 7.0? I have a stored procedure that inserts a row into a table with an identity column and returns (outputs) the value of the identity column just generated. The SP is called by a Java program. The SP works fine most of the time, however from time to time it returns a NULL value! Your comments/suggestions are much appreciated.