End 35 Million Conversations Quickly - Production Issue!

Dec 15, 2006

We have a system with 35 million conversations piled up. We didn't know to explicitly end each conversation once its processing completed. Oops. Now our production box has 35 million of them sitting in the table, and sys.conversation_endpoints has outgrown memory and is spilling into tempdb, which is eating our disk space and bringing the box down. We have fixed the code to end conversations going forward, but we still have to end the existing ones in a hurry. Selecting them one by one out of the table and ending each via END CONVERSATION is slow. Very slow. It will finish in a few months. :(

Does anyone know how to get rid of these conversations in a hurry? All of the messages have been applied to our system, so killing the conversations will (should) have no effect on the processed data. Something like a TRUNCATE statement?

Thank you so much in advance,

John Hennesey
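One approach that should be much faster than a plain END CONVERSATION loop - a sketch only, not tested at this scale - is to end each dialog WITH CLEANUP, which drops the endpoint locally without the normal shutdown handshake:

DECLARE @handle uniqueidentifier

SELECT TOP 1 @handle = conversation_handle
FROM sys.conversation_endpoints

WHILE @handle IS NOT NULL
BEGIN
    -- WITH CLEANUP removes the endpoint immediately; no acknowledgement
    -- messages are exchanged, so nothing is sent to the far endpoint
    END CONVERSATION @handle WITH CLEANUP

    SET @handle = NULL
    SELECT TOP 1 @handle = conversation_handle
    FROM sys.conversation_endpoints
END

It is still one END CONVERSATION per dialog, but skipping the acknowledgement round-trip makes each one far cheaper.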




Help On Updating 1.3 Million Rows On The Production Server

May 4, 2000

I need to update about 1.3 million rows in a table of mine.
I am getting the data from one of the columns of the same table and
updating a new column.
I am doing this using a cursor, which I have put in a stored procedure.
As this is a production table which users might be accessing (it is a
web-based application), I can't slow the system down,
so I am willing to run the stored procedure during off-peak hours.
However, do I need to put this in a transaction?
If I did put it in a transaction, what isolation level should I opt for?
Data integrity is very important to me and I don't mind compromising
on performance.
I am doing this because the column holding our "short description"
entries has become too small for business purposes and we want to increase its
length from varchar(100) to varchar(150).
As this is SQL 6.5, I can't increase the length of the column,
so I added a new column and will run the stored proc.
What precautions should be taken?
This is on a high-priority basis and very important too.

Thanks in advance...

Stored procedure code:

USE DB_Registration_Dev
GO
IF EXISTS (SELECT NAME FROM SYSOBJECTS WHERE NAME='usp_update_product' AND TYPE='P')
DROP PROCEDURE usp_update_product
GO
CREATE PROC usp_update_product
AS
DECLARE @short_desc varchar(100)
DECLARE @prod_id int

DECLARE sdesc_curs CURSOR
FOR
SELECT [Product].[product_id] , [Product].[short_description]
FROM Product

OPEN sdesc_curs

FETCH NEXT FROM sdesc_curs
INTO @prod_id, @short_desc

WHILE @@FETCH_STATUS = 0
BEGIN
UPDATE Product
SET [Product].[sdesc] = @short_desc
WHERE Product_id=@prod_id
FETCH NEXT FROM sdesc_curs
INTO @prod_id, @short_desc
IF @@FETCH_STATUS <> 0
PRINT ' Finished Updating the table...go ahead and have fun ...! '
END
-- the cursor must be closed before it is deallocated
CLOSE sdesc_curs
DEALLOCATE sdesc_curs
GO
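For reference, a set-based batch avoids the cursor entirely - a sketch only, assuming the new sdesc column starts out NULL (SQL 6.5 has no TOP, so SET ROWCOUNT caps each batch):

SET ROWCOUNT 10000

WHILE 1 = 1
BEGIN
    BEGIN TRANSACTION
    -- copy the old column into the new one, 10K rows per transaction;
    -- the IS NOT NULL test keeps NULL descriptions from looping forever
    UPDATE Product
    SET sdesc = short_description
    WHERE sdesc IS NULL
    AND short_description IS NOT NULL
    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRANSACTION
        BREAK
    END
    COMMIT TRANSACTION
END

SET ROWCOUNT 0

Small transactions keep the log and lock footprint low, which matters on a production box.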


DB Engine :: Deleting 1 Million Records From Transaction Table Of 10 Million Data On 24/7 Environment

Jun 12, 2015

I have a requirement to delete 1 million records from a table holding 10 million, and the table is queried on a 24/7 basis (we don't have a downtime window). How can I achieve that?
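The usual answer - a sketch, assuming SQL 2005 or later and an indexed column that identifies the rows to purge - is to delete in small batches so each transaction stays short and the table stays available:

WHILE 1 = 1
BEGIN
    -- small batches keep locks short-lived on a 24/7 table
    DELETE TOP (5000)
    FROM dbo.BigTransactionTable      -- illustrative name
    WHERE CreatedDate < '2015-01-01'  -- illustrative purge condition
    IF @@ROWCOUNT = 0
        BREAK
END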


Conversations

Apr 24, 2007

I am currently designing an auditing application using Service Broker. Right now, when I send a message from a trigger, I start a conversation, and later on, when the message has been processed, I end the conversation. One thing I am concerned about is whether, when a lot of updates are occurring on the system, the number of conversations being created will eat up system resources. Does it make sense to create them and end them later, or should I try to reuse them?
Tim
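If you do try reuse, the pattern generally looks something like this - a sketch only, with illustrative service, contract and table names - caching one dialog per session and creating it only on first use:

DECLARE @h uniqueidentifier
DECLARE @msgBody xml
SET @msgBody = N'<event/>'            -- illustrative payload

-- look for a cached dialog for this session
SELECT @h = Handle FROM dbo.DialogCache WHERE Spid = @@SPID

IF @h IS NULL
BEGIN
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE [//Audit/Source]
        TO SERVICE '//Audit/Target'
        ON CONTRACT [//Audit/Contract]
        WITH ENCRYPTION = OFF
    INSERT INTO dbo.DialogCache (Spid, Handle) VALUES (@@SPID, @h)
END

SEND ON CONVERSATION @h
    MESSAGE TYPE [//Audit/Event] (@msgBody)

Reuse trades the per-message BEGIN DIALOG overhead for the bookkeeping of a handle table.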


Do All Conversations Have To Be Bi-directional?

Mar 20, 2008

From a service broker newbie...

Most of the examples I've found and played with demonstrate two-way conversation. A sender initiates a call and gets a message back.

My requirements don't really need two-way communication. I have a scenario where triggers on two different tables result in modifications to a third table, and I don't want the triggers to deadlock each other, so an asynchronous queuing mechanism seems like the perfect solution...

But I can't seem to make it work one way.

I can get one message through, and then all subsequent messages hang up in the transmission queue with the very informative "One or more messages could not be delivered to the local service targeted by this dialog."

I'm thinking all the examples work the way they do because you have to notify the transmitter that the message was received by sending a message back... and by not doing this I'm stuck in the first conversation. I thought that issuing END CONVERSATION <Msg Handle> in the stored procedure bound to the receiver's queue would do that.

Do I have to communicate bi-directionally always? I guess this is a safety feature, but I trust MSMQ to deliver messages...

Thx
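One-way does work, but both sides still have to end their half of the dialog: the initiator can call END CONVERSATION right after SEND, and the target ends its half when the EndDialog system message shows up. A sketch of the target-side handling (queue name illustrative):

DECLARE @h uniqueidentifier, @mt sysname

RECEIVE TOP (1)
    @h = conversation_handle,
    @mt = message_type_name
FROM dbo.TargetQueue

IF @mt = 'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
   OR @mt = 'http://schemas.microsoft.com/SQL/ServiceBroker/Error'
    END CONVERSATION @h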


Viewing Closed Conversations

Jan 5, 2006

Weird problem here.

As one user, when I select * from sys.conversation_endpoints, I can see all (I assume) conversations in all states, specifically DO, DI and CD.

However, when I change to another user, I see only DI.

Why is this?

If it is a permissions issue, what permission do I have to grant a user so they can see all conversations in sys.conversation_endpoints?
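A hedged guess: catalog views filter rows by metadata visibility, so a non-privileged user only sees the endpoints they own. If that is what is happening here, a database-level grant along these lines may expose the rest (principal name illustrative):

GRANT VIEW DEFINITION TO [SomeUser]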


Conversations & Machine Restart

Aug 26, 2006

Say I have a conversation established and the initiator server needs to reboot. Will the conversation automatically restart when the server comes back up? If not, can I make it do so with some setting? If not, what is the best way to handle this?

Thanks - Amos


Need A Way To Guarantee Message Ordering When Re-using Conversations

Sep 17, 2007

Hi -

In my application, I need to be able to guarantee that processing for a re-used conversation is completed prior to starting processing for the next (re-used) conversation. My application is based on the concepts from the sample posted on Remus's blog: http://blogs.msdn.com/remusrusanu/archive/2007/05/02/recycling-conversations.aspx#comments. Essentially (in this sample), we create a new conversation for each SPID and re-use the conversation, so that messages are sent through the queue (and processed in order) for each SPID. SPID was used in the sample code as an example of some application-specific "thing" that you care about message ordering for. To prevent a conversation from living forever (using up log/resources), conversations are ended after 1 hour using DialogTimer and a custom message type.


My conundrum is this:

Assume conversation 1 (on SPID 1) is flooded with a large number of messages just before the conversation timer expires. The DialogTimer then expires before the target queue is drained. The sample code (mentioned above) then creates a new conversation for the same SPID (with a DialogTimer of 1 hour). Until the queue for conversation 1 is drained, we have 2 conversations being processed for the same SPID. This same problem would occur in any application where you re-use conversations for a period of time (using DialogTimer) and then start a new conversation when the DialogTimer expires.



So although I like the idea of being able to re-use conversations, I would need to guarantee that conversation 1 is finished processing before conversation 2 starts processing (for the same SPID, to be consistent with the sample above). If I could get these 2 conversations into the same conversation group on the target queue, the conversation-group locking would solve the problem. But because conversation groups only apply to the initiator queue (when you BEGIN DIALOG with a related conversation), I have no out-of-the-box way to control how the conversation groups are associated on the target queue. Remus posted an idea here (bottom of thread): http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=182646&SiteID=1, which was to just send a special message at the beginning of each new conversation (containing the conversation group to use), and then do a MOVE CONVERSATION ... TO conversation group on the target queue. I've tried this solution, and the problems are: 1) if the set-conversation-group message fails for some reason, the conversation group is not set, and 2) the target queue seems to reject most of my MOVE CONVERSATION commands with the error "The destination conversation group '<conversation guid>' is invalid" - which I am guessing is because that conversation group id is also in use on the initiator.



Any ideas?



Thanks!



Terryc


Target Sys.conversation_endpoints Not Purged Of Closed Conversations

Dec 7, 2006

I hope someone can help me with this as we plan on using Service Broker in a high volume production environment. The script that builds everything is available if it's needed to diagnose the problem I'm having.

I'm having an issue where sys.conversation_endpoints on the target side of a conversation never gets purged of closed conversations, even after the 30-minute delay. The view is filled with closed conversations, and database size is growing every day. I'm aware I can END CONVERSATION ... WITH CLEANUP on these conversations, but I would prefer that Service Broker behave as expected. I'm also aware of the problems with the fire-and-forget model, but my model is request/response/end between 2 databases on the same server instance. Here's the typical series of events:

Initiator sends request
Target receives request
Target processes request
Target sends response
Initiator receives response
Initiator processes response
Initiator ends conversation
Target receives EndDialog message
Target ends conversation

Occasionally during the target's processing of a request, an exception is caught and the Target ends the conversation with an error:

Initiator sends request
Target receives request
Target processes request and recognizes error
Target ends conversation with error
Initiator receives EndDialog message
Initiator ends conversation

Here's the trace where Database ID 23 is initiator and 24 is target, no error:










EventClass                 DatabaseID  TextData               EventSubClass
Broker:Conversation Group  23                                 1 - Create
Broker:Conversation        23          STARTED_OUTBOUND       11 - BEGIN DIALOG
Broker:Conversation        23          CONVERSING             1 - SEND Message
Broker:Message Classify    23                                 1 - Local
Broker:Conversation Group  24                                 1 - Create
Broker:Conversation        24          STARTED_INBOUND        12 - Dialog Created
Broker:Conversation        24          CONVERSING             6 - Received Sequenced Message
Broker:Activation          24                                 1 - Start
Broker:Conversation        24          CONVERSING             1 - SEND Message
Broker:Message Classify    24                                 1 - Local
Broker:Conversation        23          CONVERSING             6 - Received Sequenced Message
Broker:Activation          23                                 1 - Start
Broker:Conversation        23          DISCONNECTED_OUTBOUND  2 - END CONVERSATION
Broker:Conversation Group  23                                 2 - Drop
Broker:Message Classify    23                                 1 - Local
Broker:Conversation        24          DISCONNECTED_INBOUND   7 - Received END CONVERSATION
Broker:Conversation        23          CLOSED                 10 - Received END CONVERSATION Ack
Broker:Conversation        24          CLOSED                 2 - END CONVERSATION
Broker:Conversation Group  24                                 2 - Drop
Broker:Activation          23                                 2 - Ended
Broker:Activation          24                                 2 - Ended

Here are the typical records in the target sys.conversation_endpoints. These records never disappear:










Column                           Normal                                                   With Error
conversation_handle              3FE27EE5-1E86-DB11-B009-000BDB714730                     53E17EE5-1E86-DB11-B009-000BDB714730
conversation_id                  0A432392-55F5-461B-87D5-0058795BC3AE                     BCCDFA85-86A3-43B8-9648-24FFE5C0ED3F
is_initiator                     0                                                        0
service_contract_id              0                                                        0
conversation_group_id            00000000-0000-0000-0000-000000000000                     00000000-0000-0000-0000-000000000000
service_id                       0                                                        0
lifetime                         2074-12-25 21:29:28.640                                  2074-12-25 21:29:28.000
state                            CD                                                       CD
state_desc                       CLOSED                                                   CLOSED
far_service                      http://my.domain.com/schemas/test/Initiator/2006-12-07  http://my.domain.com/schemas/test/Initiator/2006-12-07
far_broker_instance              227D0898-0399-40E0-954B-C8B685EE415A                     227D0898-0399-40E0-954B-C8B685EE415A
principal_id                     5                                                        5
far_principal_id                 6                                                        6
outbound_session_key_identifier  DEBEB4DB-D186-410B-9555-A34F8F5C9FE2                     B82BB074-5AE5-4164-9D0B-53E364B0B52B
inbound_session_key_identifier   1DBAE307-5DFF-4050-9D94-71003D8BD058                     57B5C7D8-9E8B-4614-9325-5AA30AED3670
security_timestamp               2006-12-07 18:45:52.763                                  1900-01-01 00:00:00.000
dialog_timer                     1900-01-01 00:00:00.000                                  1900-01-01 00:00:00.000
send_sequence                    1                                                        1
last_send_tran_id                0x550800000000                                           0x700700000000
end_dialog_sequence              -1                                                       1
receive_sequence                 2                                                        1
receive_sequence_frag            0                                                        0
system_sequence                  0                                                        0
first_out_of_order_sequence      -1                                                       -1
last_out_of_order_sequence       0                                                        0
last_out_of_order_frag           0                                                        0
is_system                        0                                                        0


Why Is Sys.conversation_endpoints Filling Up Even When Reusing Dialog Conversations

Aug 5, 2007

Hi! I'm wondering why my sys.conversation_endpoints table gets a new row for each message I send, even when I reuse conversations.
When I send the first message, I get the first row in sys.conversation_endpoints with a uniqueidentifier for the conversation_handle. This uniqueidentifier is then saved in a table, which I query the next time I send a message so I can reuse the dialog conversation.
But even though the uniqueidentifier is reused, I still get a new row for every message I send, each with a different conversation_handle.
This happens in both the target and initiator DBs.

I've tried to understand this, but I don't.

Also, for the moment I don't end conversations. But as I understand it, this shouldn't matter.

The messages successfully arrive at the target, and sys.transmission_queue is empty in both databases.
Neither queue has any error messages in it.

Thanx


Closed Conversations Are Not Purged From The Receiver Endpoints Table

Nov 30, 2007



Hi,

I implemented the pattern suggested in the 'Recycling Conversations' article that Remus Rusanu presented. Everything works great, except that ended conversations on the receiver remain in the sys.conversation_endpoints table forever in the 'CLOSED' state.

Is there some setting I am missing to have those conversations purged from the endpoints table? I am concerned that in the production environment this table will grow very large.

Thanks


How To Quickly Fill A Database

Apr 23, 2008

I have a database with a couple of tables that I need to expand to 4 gigabytes in order to run some tests (currently 300 megs).
Does anyone have a script or some method that would quickly populate my tables with random data, so that I can grow my database to the desired size for testing?

Thanks

I have SQL Server 2005 Express, with Management Studio installed too.
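One trick that works on 2005 Express - a sketch with an illustrative target table and columns - is to cross join a couple of system views to generate a big rowset and pad each row with REPLICATE:

;WITH n AS (
    SELECT TOP (1000000)
        ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS i
    FROM sys.all_objects a
    CROSS JOIN sys.all_objects b
)
INSERT INTO dbo.TestData (SomeInt, SomeText)  -- illustrative target
SELECT i, REPLICATE('x', 800)                 -- ~800 bytes of filler per row
FROM n

Run it a few times (or raise the TOP) until the file hits the size you need.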


How Quickly Does RAID10 Format?

Mar 20, 2007

We have a server with some pesky RAID5, which has on 3 separate occasions corrupted the databases when a drive failed. So we had a maintenance window and decided to change it to RAID10.

We started the configuration at 13:00 today; it's now 18:00 and it has done 25%... is that normal?

It's 8 disks (so 4 mirror-pairs), and it will have around 300GB of usable space when it's done.

What would happen if we needed to do this in a time-critical window? (For this debacle we have moved the database onto the web server, so we can survive for a few more hours...)

Kristen


How To Quickly Learn The Syntax Of SQL?

Jun 15, 2007

Hi, All:

I know Oracle SQL; now I need to do a lot of SQL queries on Microsoft SQL Server. Can anyone point out a place where I can find the syntax of SQL Server statements? Since this is just a short-term assignment, I don't want to buy a book; I'm just hoping I can learn something quickly online. I don't need to learn anything deep, just some simple syntax so I can do joins, count, concatenate, min(), max(), sum(), etc.

thanks in advance.
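For an Oracle-to-T-SQL crash course, the differences that bite first fit in a few lines - table and column names here are illustrative:

SELECT TOP 10                                   -- instead of WHERE ROWNUM <= 10
       e.first_name + ' ' + e.last_name AS nm,  -- concatenation is +, not ||
       ISNULL(e.region, 'n/a') AS region,       -- NVL() becomes ISNULL()
       GETDATE() AS today                       -- SYSDATE becomes GETDATE(), no DUAL needed
FROM employees e
ORDER BY e.last_name

Joins, COUNT, MIN, MAX and SUM work essentially as in Oracle, and Books Online (the help bundled with SQL Server) covers the rest for free.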


Recycling Conversations - Locking When Trying To Insert New Conversation Handle To SessionConversations Table

Feb 8, 2008



Hi,

I have implemented Remus Rusanu's implementation from the Recycling Conversations article, and I am experiencing a locking issue when I try to insert new conversation handles into the SessionConversations table. I have copied the code in the article exactly, including the activation procedure. Any ideas why I may be blocking? I am thinking it is related to the HOLDLOCK hint on the table.

The specific line where I see locking is directly from the article:

INSERT INTO [SessionConversations] (SPID, FromService, ToService, OnContract, Handle) VALUES (...etc)

Thanks


Problem With Date Search Plz Help Me Quickly

Oct 22, 2007

I am trying to search for stored files (for example from date 15/12/2003 to 24/6/2006), and when I press search no results appear. The following is the database code:

public DataTable searchData(string fileNo, string Title, string dFrom, string dTo, string brief)
{
    string str = "";

    str = "select * from Tb_File where Active = 1 ";

    if (fileNo != "")
        str += " and FileNo='" + fileNo + "'";
    if (Title != "")
        str += " and Title like '%" + Title + "%' ";
    if (brief != "")
        str += " and Brief like '%" + brief + "%' ";
    if (dFrom != "")
        str += " and DFrom >= convert(datetime,'" + Convert.ToDateTime(dFrom).ToShortDateString() + "',103) ";
    if (dTo != "")
        str += " and DTo < convert(datetime,'" + Convert.ToDateTime(dTo).ToShortDateString() + "',103) ";

    ole.Open();
    SqlDataAdapter DA = new SqlDataAdapter(str, ole);
    DataTable DT = new DataTable();
    DA.Fill(DT);
    ole.Close();
    return DT;
}

I am using SQL 2000 with Visual Studio 2005.


Selecting Distinct Records Quickly

Mar 26, 2007

Good day,

I have a table of approximately 10 million rows. The table has 3 fields making up the key, namely:
ID, Date, Program

I need to extract all the distinct Programs from the table.
I have done so with:
Select distinct Program from table
Unfortunately this takes roughly 2 minutes, which is far too long. Is there something I can do to help speed this process up?

Thanks in advance.
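If Program is not already the leading column of an index, a narrow single-column index lets the engine answer the DISTINCT from a much smaller structure - a sketch, table and index names illustrative:

CREATE NONCLUSTERED INDEX IX_MyTable_Program ON dbo.MyTable (Program)

SELECT DISTINCT Program FROM dbo.MyTable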


Change Database Collation - Quickly

May 19, 2004

There have been several threads about changing a database's collation, but none have come up with an easy answer before.
The suggestion before was to create an empty database with the correct collation and then copy the data across.
However, this is hard work, as you have to populate tables in a specific order so as not to violate foreign keys, etc. You can't just DTS the whole data set.

There follow scripts we have written to do the job. If people use them, please add to this thread whether they worked successfully or not.

First we change the default collation, then change all the types in the database to match the new collation.

===================
--script to change database collation - James Agnini
--
--Replace <DATABASE> with the database name
--Replace <COLLATION> with the collation, eg SQL_Latin1_General_CP1_CI_AS
--
--After running this script, run the script to rebuild all indexes

ALTER DATABASE <DATABASE> COLLATE <COLLATION>

exec sp_configure 'allow updates',1
go
reconfigure with override
go
update syscolumns
set collationid = (select top 1 collationid from systypes where systypes.xtype=syscolumns.xtype)
where collationid <> (select top 1 collationid from systypes where systypes.xtype=syscolumns.xtype)
go
exec sp_configure 'allow updates',0
go
reconfigure with override
go
===================

As we have directly edited system tables, we need to run a script to rebuild all the indexes. Otherwise you will get strange results, like string comparisons between different tables not working.
The indexes have to actually be dropped and recreated in separate statements.
You can't use DBCC DBREINDEX or CREATE INDEX with the DROP_EXISTING option, as they won't do anything (thanks to SQL Server "optimization").
This script loops through the tables, and then loops through the indexes and unique constraints in separate sections. It gets the index information, then drops and re-creates each one.
(The script could probably be tidied up with the duplicated code put into a stored procedure.)

====================
--Script to rebuild all table indexes, Version 0.1, May 2004 - James Agnini
--
--Database backups should be made before running any set of scripts that update databases.
--All users should be out of the database before running this script

print 'Rebuilding indexes for all tables:'
go

DECLARE @Table_Name varchar(128)
declare @Index_Name varchar(128)
declare @IndexId int
declare @IndexKey int

DECLARE Table_Cursor CURSOR FOR
select TABLE_NAME from INFORMATION_SCHEMA.tables where table_type != 'VIEW'

OPEN Table_Cursor
FETCH NEXT FROM Table_Cursor
INTO @Table_Name

--loop through tables
WHILE @@FETCH_STATUS = 0

BEGIN
print ''
print @Table_Name

DECLARE Index_Cursor CURSOR FOR
select indid, name from sysindexes
where id = OBJECT_ID(@Table_Name) and indid > 0 and indid < 255 and (status & 64)=0 and
not exists(Select top 1 NULL from INFORMATION_SCHEMA.TABLE_CONSTRAINTS
where TABLE_NAME = @Table_Name AND (CONSTRAINT_TYPE = 'PRIMARY KEY' or CONSTRAINT_TYPE = 'UNIQUE') and
CONSTRAINT_NAME = name)
order by indid

OPEN Index_Cursor
FETCH NEXT FROM Index_Cursor
INTO @IndexId, @Index_Name

--loop through indexes
WHILE @@FETCH_STATUS = 0
begin

declare @SQL_String varchar(256)
set @SQL_String = 'drop index '
set @SQL_String = @SQL_String + @Table_Name + '.' + @Index_Name

set @SQL_String = @SQL_String + ';create '

if( (select INDEXPROPERTY ( OBJECT_ID(@Table_Name) , @Index_Name , 'IsUnique')) =1)
set @SQL_String = @SQL_String + 'unique '

if( (select INDEXPROPERTY ( OBJECT_ID(@Table_Name) , @Index_Name , 'IsClustered')) =1)
set @SQL_String = @SQL_String + 'clustered '

set @SQL_String = @SQL_String + 'index '
set @SQL_String = @SQL_String + @Index_Name
set @SQL_String = @SQL_String + ' on '
set @SQL_String = @SQL_String + @Table_Name

set @SQL_String = @SQL_String + '('

--form column list
SET @IndexKey = 1

-- Loop through index columns, INDEX_COL can be from 1 to 16.
WHILE @IndexKey <= 16 and INDEX_COL(@Table_Name, @IndexId, @IndexKey)
IS NOT NULL
BEGIN

IF @IndexKey != 1
set @SQL_String = @SQL_String + ','

set @SQL_String = @SQL_String + index_col(@Table_Name, @IndexId, @IndexKey)

SET @IndexKey = @IndexKey + 1
END

set @SQL_String = @SQL_String + ')'

print @SQL_String
EXEC (@SQL_String)

FETCH NEXT FROM Index_Cursor
INTO @IndexId, @Index_Name
end

CLOSE Index_Cursor
DEALLOCATE Index_Cursor



--loop through unique constraints
DECLARE Constraint_Cursor CURSOR FOR
select indid, name from sysindexes
where id = OBJECT_ID(@Table_Name) and indid > 0 and indid < 255 and (status & 64)=0 and
exists(Select top 1 NULL from INFORMATION_SCHEMA.TABLE_CONSTRAINTS
where TABLE_NAME = @Table_Name AND CONSTRAINT_TYPE = 'UNIQUE' and CONSTRAINT_NAME = name)
order by indid

OPEN Constraint_Cursor
FETCH NEXT FROM Constraint_Cursor
INTO @IndexId, @Index_Name

--loop through indexes
WHILE @@FETCH_STATUS = 0
begin

set @SQL_String = 'alter table '
set @SQL_String = @SQL_String + @Table_Name
set @SQL_String = @SQL_String + ' drop constraint '
set @SQL_String = @SQL_String + @Index_Name

set @SQL_String = @SQL_String + '; alter table '
set @SQL_String = @SQL_String + @Table_Name
set @SQL_String = @SQL_String + ' WITH NOCHECK add constraint '
set @SQL_String = @SQL_String + @Index_Name
set @SQL_String = @SQL_String + ' unique '

if( (select INDEXPROPERTY ( OBJECT_ID(@Table_Name) , @Index_Name , 'IsClustered')) =1)
set @SQL_String = @SQL_String + 'clustered '

set @SQL_String = @SQL_String + '('

--form column list
SET @IndexKey = 1

-- Loop through index columns, INDEX_COL can be from 1 to 16.
WHILE @IndexKey <= 16 and INDEX_COL(@Table_Name, @IndexId, @IndexKey)
IS NOT NULL
BEGIN

IF @IndexKey != 1
set @SQL_String = @SQL_String + ','

set @SQL_String = @SQL_String + index_col(@Table_Name, @IndexId, @IndexKey)

SET @IndexKey = @IndexKey + 1
END

set @SQL_String = @SQL_String + ')'

print @SQL_String
EXEC (@SQL_String)

FETCH NEXT FROM Constraint_Cursor
INTO @IndexId, @Index_Name
end

CLOSE Constraint_Cursor
DEALLOCATE Constraint_Cursor

FETCH NEXT FROM Table_Cursor
INTO @Table_Name
end

CLOSE Table_Cursor
DEALLOCATE Table_Cursor

print ''
print 'Finished, Please check output for errors.'
====================

Any comments are very welcome.


New To SSIS Need Help In Solving This Problem Quickly

Mar 6, 2008

I am very new to SSIS. Can someone give me a basic outline for this problem? I kind of understand control tasks, data flow, etc., but not in detail (I've watched a couple of webcasts). I need to see something like the below in action to understand this better.

Basically, I need to process a flat CSV file on a daily basis and load it into a table. As I load the records, I need to check (on a key column) whether each record already exists in the table. If so, just update the record; otherwise insert a new one. When I find a record, I may also need to do a checksum on a set of columns before the update, so I only update if that set of columns differs between the file and the table. I also need to keep performance in mind, as I am looking these records up one at a time. I think this should be fairly easy, but I am getting a little lost in control tasks and data flow as to what goes where. By the way, I am using Visual Studio 2005 and SQL Server 2005.

I would appreciate your help. Thanks again. I don't mind an example solution file.


Load A Dataset To SQL Mobile (Quickly)

Jul 10, 2007

I have five small tables that I need to insert into a SQL CE database.

I am using the 2.0 Compact Framework with the 2.0 System.Data.SqlServerCe.



My table definition is dynamic, so I never know its design.



1 - If I go row by row using this.ExecuteNonQuery(_global, par), it takes about 26 seconds to insert 5 tables of 330 rows.



2 - If I use:

StringBuilder sbColumns = new StringBuilder();
foreach (DataColumn dc in table.Columns)
{
    if (sbColumns.ToString() != "")
        sbColumns.Append(",");
    sbColumns.Append(dc.ColumnName);
}

SqlCeDataAdapter da = new SqlCeDataAdapter("SELECT " + sbColumns.ToString() + " FROM " + _tablename, m_con);
SqlCeCommandBuilder cb = new SqlCeCommandBuilder(da);
da.MissingMappingAction = MissingMappingAction.Passthrough;
da.InsertCommand = cb.GetInsertCommand();
da.Update(table);
da.Dispose();



it takes about 46 seconds.



How can I write it faster, or is this the fastest it can go?



Thanks


Transferring A LARGE Database To A New Server - QUICKLY!!!!

Nov 13, 2000

Good morning one and all,

I need to transfer a database (containing one table) with over 35 million records from one server to another. I have two options at present:
(a) Use DTS to do the transfer
(b) Copy the mdf file across and sp_attach_db it

Does anyone have a better idea, or does anyone know which of the two methods will be the quickest?

TIA

Gurmi
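For the record, option (b) in script form - a sketch; paths are illustrative, and the database is unavailable while detached:

-- on the source server:
EXEC sp_detach_db 'BigDb'

-- copy BigDb.mdf (and BigDb_log.ldf) to the new server, then:
EXEC sp_attach_db 'BigDb',
    N'D:\data\BigDb.mdf',
    N'D:\data\BigDb_log.ldf'

The file copy is raw disk I/O, so it generally beats DTS pumping 35 million rows across.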


SQL Server 2008 :: Index Fragmented Quickly

Apr 8, 2015

I just did index defragmentation for some databases, including msdb. I notice there are 3 indexes in the msdb database that fragment quickly (I rebuilt them last night at 10 PM and the fragmentation level went to zero, but by 9 AM today it was back to 80%). The indexes are backupsetuuid, backupmediafamilyuuid and backupmediasetuuid. I am thinking of setting the fill factor for those indexes to 80.
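Setting it as part of a rebuild would look like this - a sketch using SQL 2008 syntax, with 80 being the post's own figure:

ALTER INDEX backupsetuuid ON msdb.dbo.backupset
REBUILD WITH (FILLFACTOR = 80)

A uniqueidentifier key takes inserts all over the tree, so leaving some free space per page is what keeps page splits (and thus fragmentation) down between rebuilds.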


SQL Server 2008 :: Indexes Fragment Really Quickly

May 1, 2015

This application runs on a SQL Server 2008 R2 database. It receives messages from an integration module and has a core table: Table-A. Each message is inserted as 1 row into Table-A; when it is processed, that row in Table-A is updated.

There are two environments, both connected to the same integration, so in both environments Table-A has exactly the same records inserted and updated. In both environments Table-A has around 80 million rows, with an extra 150,000 rows being inserted and then updated every day. Table-A has 8 indexes. For some reason unknown to me, the 8 indexes fragment really quickly in one environment but not in the other.

For example, in Environment-1 the index fragmentation ranges from 0 - 19%, and this environment has not been re-indexed for over 2 months. BUT a reindex was performed in Environment-2, and only 2 days later the index fragmentation ranges from 72 - 99.93%!

Our DBA has confirmed the re-index in Environment-2 completed successfully and has shown stats before and after the reindex to show that the 8 indexes for Table-A in Environment-2 went down to 0% fragmentation.

My question is, how can the indexes in Environment-2 fragment so much more quickly than the indexes in Environment-1? Both environments are on exactly the same hardware and have exactly the same inbound messages. The database on Environment-1 is actually a clone from Environment-2. The only known differences between the 2 databases is Environment-1 is STANDARD edition - SQL Server 2008 R2 (SP2) whereas Environment-2 is ENTERPRISE edition - SQL Server 2008 R2 (SP1). Could this difference be due to the Service Pack levels or even because one is STANDARD and the other ENTERPRISE?

This is what I have checked so far:

1) In both Environments all 8 indexes have "Set Fill Factor" unchecked and "Automatically recompute statistics", "Use row locks...", "Use page locks..." checked.
2) The "Index Usage Statistics" report in both Environments shows a similar amount of #UserUpdates and #UserScans


How Do I Quickly Populate A New Database With Place-holder Content?

Apr 28, 2008

I am using SQL Server to create a rather complicated client database for a nonprofit organization. I have access to an ancient version of the database in Access format, but would rather create a new database from scratch instead of "up-sizing" the old database. Although the old database is mostly useless, it contains a goldmine of names and addresses that I could use to populate the new database I'm creating. My question is this: is there any relatively easy way to cut and paste from external data sources into a new SQL database? For example, I would love to just select twenty rows of first names from the old database and then paste them into my new table. Can anyone suggest any quick and easy tricks for populating a new database with place-holder content? Thanks!
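If the old .mdb file is reachable from the server, one low-ceremony route is OPENROWSET against the Jet provider - a sketch; the path, table and column names are illustrative, and the 'Ad Hoc Distributed Queries' option must be enabled:

INSERT INTO dbo.Clients (FirstName, LastName)
SELECT FirstName, LastName
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
     'C:\old\clients.mdb'; 'Admin'; '',
     OldClients)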


T-SQL (SS2K8) :: How To Quickly Update XML Column In A Table To NULL

Mar 9, 2015

I have a table with hundreds of millions of records and need to update an XML column to null. I do something like this:

UPDATE [Mydb].[MySchema].[MyTable]
SET [XMLColumn] = null

Currently it is taking around 6 hours.

Is there a quicker way to do this?
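Batching usually shrinks both the blocking and the log growth - a sketch (UPDATE TOP needs SQL 2005+); the IS NOT NULL test keeps each pass from re-touching finished rows:

WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) [Mydb].[MySchema].[MyTable]
    SET [XMLColumn] = NULL
    WHERE [XMLColumn] IS NOT NULL

    IF @@ROWCOUNT = 0
        BREAK
END

It won't necessarily beat 6 hours in total, but each transaction stays small, so the table stays usable and the log doesn't balloon.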


SQL Server 2012 :: How To Quickly Update / Insert 3M Records In Large Table

Mar 28, 2015

Our system runs a SQL Server 2012 DB. It has a table (table_a) which has over 10M records. Our system receives a data file from the previous system daily, containing approximately 3M updated or new records for table_a. My job is to update table_a with the new data.

The initial solution is:

1 Create a table (table_b) whose structure is the same as table_a

2 Use BCP to import updated records into table_b

3 Remove outdated data from table_a:
delete a from table_a a inner join table_b b on a.key_fields = b.key_fields

4 Append updated or new data into table_a:
insert into table_a select * from table_b

As the test results show, this solution is very inefficient; step 3 alone costs several hours. How can I improve it?
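One alternative worth testing - a sketch with illustrative key and column names - is a single MERGE, so matched rows are updated in place instead of being deleted and re-inserted:

MERGE table_a AS t
USING table_b AS s
    ON t.key_field = s.key_field
WHEN MATCHED THEN
    UPDATE SET t.col1 = s.col1,
               t.col2 = s.col2
WHEN NOT MATCHED THEN
    INSERT (key_field, col1, col2)
    VALUES (s.key_field, s.col1, s.col2);

Either way, an index on the join key in both tables is what makes or breaks step 3.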


75 Million Row Update???

Apr 17, 2003

Hi all,
I have a table with approx 75 million rows of names and addresses in it that I am trying to update... so far the update has been running 5 hours with no end in sight. A little background: this is running on a quad Xeon 500 with 3 GB RAM and one 145 GB drive (boooo). Without improving the hardware, can I improve the performance? I have indexed all the fields in my WHERE clauses and only update the table once or twice a month, but I do daily selects by zip or county (all indexed); I even have a composite key on phone and zip.

I have heard of horizontal partitioning, but I always thought that was reserved for archiving old transactional data that rarely gets read...

When I performed a trace there were plenty of reads but no writes - is this normal during an update like this?

I have been running this proc for the past 7 HOURS!!... any help is appreciated, since all I have is time at this point.

THANKS!!!!

--Set rowcount to 100000 to limit number of updates
--performed in each batch to 100K rows.
Set rowcount 100000

--Declare variable for row count
Declare @rc int
Set @rc=100000

While @rc=100000
Begin

Begin Transaction

--Use tablockx and holdlock to obtain and hold
--an immediate exclusive table lock. This usually
--speeds the update because only one lock is needed.
--The WHERE clause stops each batch from re-updating
--rows that have already been set.
Update [2000] With (tablockx, holdlock)
set [source] = '2000'
where [source] is null or [source] <> '2000'

--Get number of rows updated
--Process will continue until fewer than 100K rows are updated
Select @rc=@@rowcount

--Commit the transaction
Commit
End


500 Million Rows Of Data?

Apr 9, 2008

I'm new to using a DB and have a few questions about what I'm trying to do. I have some historical options data and want to place it into a SQL Express database. (I understand I might need to use a non-Express version once the DB gets too big.) A month's worth of data is over 5.5 million rows, so six years' worth is ~400 million rows. Is it possible to put this into a SQL DB and be able to search it very fast? I have a month's worth in a DB now and it is pretty slow. Should I use a new table for each month, and then have 6 years * 12 months = 72 tables, to increase the search speed? I search by date and stock symbol, and the data looks like this:
 Date, Stock_Symbol, Option_Symbol, Strike, BidPrice, AskPrice, Volume, OpenInterest, (and a few others)
The select statement is simple: SELECT * FROM Options WHERE Date = @Date and StockSymbol = @Symbol
Thanks
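Before splitting into 72 tables, it's worth making sure the one table has a composite index matching both predicates - a sketch:

CREATE NONCLUSTERED INDEX IX_Options_Date_Symbol
ON dbo.Options ([Date], Stock_Symbol)

With that in place, the query touches only the rows for one date/symbol pair, and 400 million rows is well within what a single properly indexed table can handle.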


SQL INSERT 1.6 Million Records

Jan 27, 2006

I am currently working on a simple page to insert 1.6 million UK postcode records into a SQL Server table. The table has three columns: the postcode, the longitude coordinate and the latitude coordinate. The data is sourced from a pipe (|) delimited txt file and inserted into the database using a FOR loop. The problem I have is that the page hangs after inserting only 10,000 records, displaying either an invalid ViewState error or a page-cannot-be-found error.
Now, I assume the ViewState error stems from the fact that there is a form on the page, which simply contains a button to execute the script and a few labels to show progress. But without the form and associated ViewState, the insert still fails to complete... any ideas? Would I be better off running this on a thread, or should I just do it in stages and be patient? I have now modified the page to read the database on load and pick up from where it crashed.
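Server-side bulk load avoids the page lifecycle entirely - a sketch; the path is illustrative and has to be visible to the SQL Server service account:

BULK INSERT dbo.Postcodes
FROM 'C:\data\postcodes.txt'
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n')

1.6 million rows this way is typically a job of seconds rather than a page timeout.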


Updating 4 Million Records

Aug 30, 2006

Meg writes "Hi,

I have a table that has 4+ million records. I need to update those records. I am facing some performance issues. Can someone please advise?

update stage
set batch_status = 1
where update_status = 0


Update [transaction]
Set aId = s.aId,
    b = s.b
from stage s
Where s.aId = [transaction].aId
and s.batch_status = 1


Update stage
Set update_status = 1,
    batch_status = 2
where batch_status = 1

When I run the above query with "set rowcount 1000", it runs in one minute. When I run it with "set rowcount 10000", it runs in 1 hour 56 minutes. Can someone help me optimize it?

Thanks.
Meg"


56 Million Records Search

Jul 20, 2005

Hey folks... So I have a table that looks like this:

CREATE TABLE [tblStation] (
[CAMPAIGN] [varchar] (8),
[LISTNUM] [varchar] (10),
[PHONE] [varchar] (10),
[EVENTTIME] [datetime],
[STATION] [int],
[OPERATOR] [varchar] (16),
[EVENTCODE] [varchar],
[CALLSPAN] [decimal](18, 0),
[FDISP] [int],
[RECORDNUM] [varchar],
[STC] [varchar],
[PROMOC] [varchar],
[EXP_CAMP] [varchar],
[PROMO3] [varchar],
[MAXATT] [char],
[LISTNAME] [varchar],
[SITENAME] [char],
[Row_id] [int] IDENTITY
)

It's taking nine seconds to run the following command:

SELECT count([fdisp])
FROM [TrunkFiles_new].[dbo].[tblStation] WITH (NOLOCK)
WHERE fdisp IS NULL

Anyone familiar with a table of this size having performance like this? The [fdisp] column has a non-clustered index on it. Thanks in advance...


How Well SQL Server Can Support 300 Million Records...

Nov 16, 2001

How well can SQL Server support 300 million records?
Is anybody working with a big database like this? Can anyone give me some input on this? It's going to be a 60GB database.


Copying 4 Million Rows Everyday

Mar 21, 2000

In our database, we have a very large table that gets updated every morning. The start of the day involves copying 4 million rows from the previous date to today's date in the same fact table, followed by some other processing. It takes 1.5 to 2 hours. There is a DTS package that copies these rows into a temp table and then into the fact table.

This table has more than 200 million rows.

Any ideas on how to accomplish this without doing the copy twice and without running into locking problems?

Thanks for any suggestions.
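A single INSERT...SELECT skips the temp-table hop - a sketch with illustrative names; whether it eases the locking depends on the indexes and isolation level in play:

DECLARE @today datetime, @yesterday datetime
SELECT @yesterday = '20000320', @today = '20000321'  -- illustrative dates

INSERT INTO dbo.FactTable (FactDate, Col1, Col2)
SELECT @today, Col1, Col2
FROM dbo.FactTable
WHERE FactDate = @yesterday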
