Slow Inserts Into Large Tables

Nov 29, 2000

We are inserting into a table that includes an identity primary key column. When the table gets really large (e.g. 1.5 million records), the performance of the inserts degrades.

I noticed that when we insert into the table an exclusive lock on the table is obtained. Do inserts into tables with identities always lock the table?

Given the table size is unavoidable, does anyone have a suggestion to improve the performance?

Thanks,
Matt
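
A quick way to see which locks an insert actually takes (a minimal sketch, assuming a hypothetical table named BigTable) is to hold the insert open in a transaction and inspect it from a second connection with sp_lock:

-- Connection 1: start an insert but don't commit yet
BEGIN TRAN
INSERT INTO BigTable (SomeColumn) VALUES ('test')

-- Connection 2: list the locks currently held, including connection 1's spid
EXEC sp_lock

-- Connection 1: release the locks
ROLLBACK

Looking at the Type and Mode columns, an ordinary insert normally takes an intent-exclusive (IX) lock on the table and an exclusive (X) lock only on the new row or key, not an exclusive lock on the whole table; a table-level X lock usually points to lock escalation or a hint.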

View 6 Replies



Large FullText Tables - Slow Queries

May 31, 2007

Hi,



I currently have a large table (35 million rows, over 80GB). I have one varchar(max) column on the table that is used in the fulltext index.



To query the complete index is fast; for example:

SELECT 'ipod', COUNT(*)
FROM CONTAINSTABLE(MyDB.dbo.Contents, [Body], 'ipod') CT

This took 70 seconds (which I can live with). However, I seldom run queries like this; most are more like:



SELECT 'ipod', COUNT(*)
FROM CONTAINSTABLE(MyDB.dbo.Contents, [Body], 'ipod') CT
JOIN Pages ITP ON ITP.PageID = CT.[Key]
JOIN Feeds ITF ON ITP.IPID = ITF.IPID
JOIN Buyers ITB ON ITB.IBID = ITF.IBID
WHERE ITB.ID IN (1342,246)

These queries are much slower (this example took 17 minutes). I understand that full-text searches the index and returns every matching row to SQL Server, which then performs the joins and counts only the correct results. (Correct me if I'm wrong here.)



One solution I've seen to this is to put data or "tags" into the FT column, so my Body column would become something like:

'{ID:1342}' + [Body]



That sounds like a very good idea. I could then change the second query above to be:

SELECT 'ipod', COUNT(*)
FROM CONTAINSTABLE(MyDB.dbo.Contents, [Body], '("ID:1342" OR "ID:246") AND "ipod"') CT



That all works well until I want to select 1000 different IDs, because the FT query becomes very long and complex. Also, I'm only including one column (ID) in this example, but I have about 7 or 8 columns that I would need to include in these "tags". Querying multiple columns becomes very complex quickly, and no doubt I will reach a query-length limit at some point.



If anyone has any other suggestions to the above I'd love to hear them.

Another thought I'm having is to partition the table. I can find very little online about how FT behaves on partitioned tables; I fear it behaves exactly the same. What I'd like to think is that I could partition the table on an ID (say 100 per partition or something), and then full-text would only search the relevant partitions. If it behaves like this it may work. If no one knows then I'll give it a go, but this will take me a while due to the table size, so I'm hoping one of you clever lot knows!



Many thanks for any advice.



Simon





View 2 Replies View Related

Large Inserts

Jul 5, 2007

Dear Experts,

What is the best way to do a large insert WITHOUT having direct access to the machine SQL Server is running on? For example, imagine I want to insert something like 20,000 records. If I were to have access to the server, I could BULK INSERT into a temp table and then insert into the destination table. But if I can't create a file on the server to use for BULK INSERT, what is the next best alternative to doing lots of one-record insert statements?

Thanks,
-Emin
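
One pattern that avoids file access entirely (a sketch with a hypothetical destination table; the batch size is illustrative) is to send multi-row batches in a single statement using UNION ALL, wrapped in one transaction:

BEGIN TRAN
INSERT INTO dbo.DestTable (Col1, Col2)
SELECT 'value 1', 1
UNION ALL SELECT 'value 2', 2
UNION ALL SELECT 'value 3', 3
-- ... a few hundred rows per statement, repeated until all 20,000 are sent ...
COMMIT TRAN

This turns thousands of round trips into a few dozen.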

View 4 Replies View Related

ADO Inserts Running Slow

Oct 13, 1999

Hi list,
I'm a long time lurker on this list and really enjoy the discussions, although I rarely get a chance to participate.

Here is my situation: we are importing chunks of data (500 records at a time) from a C++ interface. The records have to be transformed before inserting into the target table, which I am doing using a stored proc that is working fine. The records are in memory in C++, and the programmer is looping through them, building inserts into a temp table through ADO (which my proc picks up). The server business object is using the Connection.Execute method, which inserts one record at a time. That part of the process is taking over 15 seconds for 500 records, which is the bulk of the total time.

My question is: using ADO, is there a better way to insert these records into the temp table? I see mention of a Recordset interface, but my programmers are new to ADO, and since I am the DBA and have never used ADO myself, I am not sure what to tell them.

Any insight would be greatly appreciated.

shawn
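
For what it's worth, one cheap change (a sketch; the temp table and column are hypothetical, and the table is assumed to already exist) is to wrap each 500-row chunk in one explicit transaction, so SQL Server flushes the log once per batch instead of once per insert:

BEGIN TRAN
INSERT INTO #staging (RawData) VALUES ('record 1')
INSERT INTO #staging (RawData) VALUES ('record 2')
-- ... remaining rows in the 500-record batch ...
COMMIT TRAN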

View 2 Replies View Related

ODBC Inserts Are Very Slow

Dec 22, 2004

Hi,

I am new to the Windows world. We use Informatica on UNIX for our ETL process. We have a requirement to load approximately 200,000 rows into a MS SQL Server table. The table is not that big and is a heap (no indexes). Inserts are running at 69 rows per minute. We are using the DataDirect Connect 4.10 SQL Server ODBC driver.

SQL Profiler tells us that it is doing row-by-row processing using the sp_execute procedure.

Is there a way we can speed up the ODBC process?

-Thanks in advance
srv

SQL Server Version:
Microsoft SQL Server 2000 - 8.00.818 (Intel X86) May 31 2003 16:08:15 Copyright (c) 1988-2003 Microsoft Corporation Standard Edition on Windows NT 5.0 (Build 2195: Service Pack 4)
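
If the flat file produced by the ETL can be staged anywhere the SQL Server service account can read (a UNC share, say), a single bulk statement avoids the row-by-row path entirely; a sketch with hypothetical table, path, and delimiters:

BULK INSERT dbo.TargetTable
FROM '\\fileserver\etl_share\load_file.dat'
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n', BATCHSIZE = 50000, TABLOCK)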

View 3 Replies View Related

Slow INSERTs On A Table

Apr 5, 2007

Hello all. I've got a problem with really slow INSERTs on one (and only one) of the tables in a database. For example, using SQL Management Studio, it takes 4 minutes and 48 seconds to insert 25 rows. There are only about 8 columns in the table and only about 1500 records. All the other tables in the database are very fast for inserts.

Another odd thing uniquely associated with INSERTs on this table: prior to inserting the 25 new rows of data, SQL Management Studio tells me that it inserted 463 rows of data, which I know did not happen. Here's the INSERT statement:

INSERT INTO FieldOps(StudySiteID
, QA_StructureID
, Notes
, PersonID)
SELECT DISTINCT StudySiteKey
, QA_StructureKey
, SampleComments1
, '25'
FROM ScriptOutput_Nitrate
WHERE (ScriptOutput_Nitrate.StudySiteKey IS NOT NULL)

and SQL Management Studio (eventually) says:
(463 row(s) affected)
(463 row(s) affected)

(25 row(s) affected)

The table has an index on the primary key (INT data type with auto increment). I tried running the following code to fix things but it made no difference:

USE [master]
GO
ALTER DATABASE [FieldData] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO

use FieldData
GO
DBCC CHECKTABLE ('FieldOps', REPAIR_REBUILD) With ALL_ERRORMSGS
GO

USE [master]
GO
ALTER DATABASE [FieldData] SET MULTI_USER WITH ROLLBACK IMMEDIATE
GO

I'm guessing that the problem might be related to the index (??). I don't know... Does anyone here have a suggestion as to what I should do to fix this problem?
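
One thing worth ruling out first: extra "(n row(s) affected)" messages during a single INSERT are usually produced by a trigger firing on the target table. A quick check on SQL Server 2005 (a sketch, assuming the table is dbo.FieldOps):

SELECT t.name, t.is_disabled
FROM sys.triggers t
WHERE t.parent_id = OBJECT_ID('dbo.FieldOps')

If a trigger shows up, its body (and any tables it touches) is a more likely culprit than the index.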

View 9 Replies View Related

Inserts Gradually Slow Down

Jul 23, 2005

I'm inserting 2 million+ records from a C# routine which starts out very fast and gradually slows down. Each insert is through a stored procedure with no transactions involved. If I stop and restart the process it immediately speeds up and then gradually slows down again, but closing and re-opening the connection every 10,000 records didn't help. Stopping and restarting the process is obviously clearing up some resource on SQL Server (or DTC?), but what? How can I clean up that resource manually?
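
Two quick things worth checking while the slowdown is happening (sketches; run them in the database being loaded): whether a transaction is being held open, and whether the log keeps autogrowing during the run:

-- Shows the oldest active transaction in the current database, if any
DBCC OPENTRAN

-- Shows log size and percent used for every database
DBCC SQLPERF(LOGSPACE)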

View 9 Replies View Related

Distributed Query Doing Inserts - Very Slow

Jul 26, 2002

The query below is being executed on a SQL 2000 box with 4 CPUs and 2GB RAM. testXX.DB_GRP.dbo.group1 is on a SQL 7 box with a single CPU and 512MB RAM. The result set is about 30,000 rows, and the whole insert process takes about 5 minutes. Is there a way to optimise the query and bring down the execution time?


insert into testXX.DB_GRP.dbo.group1
select num, group_num,group_desc from group2
where id = 20

---------------
If we just run

select num, group_num,group_desc from group2
where id = 20

it takes 10 seconds to execute the select statement, so I was wondering why it takes 5 minutes to do the insert across the network through the linked-server query.

Any help would be appreciated.

Thanks,

MK
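
One thing that sometimes helps with linked-server inserts is reversing the direction: run the INSERT on the destination server and pull the rows, so the remote side executes the SELECT as one set instead of receiving inserted rows one at a time. A sketch, assuming a linked server named SQL2KBOX defined on the SQL 7 box and a hypothetical source database named SRCDB:

-- Run this on the SQL 7 server (testXX) instead of pushing from the SQL 2000 box:
INSERT INTO DB_GRP.dbo.group1
SELECT num, group_num, group_desc
FROM OPENQUERY(SQL2KBOX, 'SELECT num, group_num, group_desc FROM SRCDB.dbo.group2 WHERE id = 20')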

View 3 Replies View Related

Slow DB Inserts. Is Reflection Really This Expensive?

Aug 10, 2007

I'm relatively new to compact framework and SQL Server Compact so bear with me if I've got an obvious thing I've forgotten.

I've written my own database helper layer. My idea is to generate SQL INSERT statements dynamically based on the contents of each object. Performance, however, is horrible: I try to do 1800 inserts and it takes about 50 seconds on the device (Release build, outside of the IDE).

I pass in a list of objects to be inserted which derive from a ModelBase class. ModelBase includes some ORM information (what table the object goes into, what fields are mapped to which properties). I generate one SqlCeCommand object and one SQL string (new params for each insert), and am using SqlCeResultSets.

What can I do to make this run faster? Thank you





Code Snippet
public bool Insert(List<ModelBase> recs)
{
    SqlCeConnection con = new SqlCeConnection(connectionString);
    SqlCeTransaction tran = null;

    try
    {
        con.Open();
        // Keep a reference to the transaction. The original code discarded the
        // return value of BeginTransaction(), so the batch could never be committed.
        tran = con.BeginTransaction();
    }
    catch
    {
        return false;
    }

    SqlCeCommand cmd = new SqlCeCommand();
    cmd.Connection = con;
    cmd.Transaction = tran;

    // Build the parameterized INSERT text once, from the first record's field map.
    String sqlString = "INSERT INTO [" + recs[0].tableName + "] (";
    String sqlString2 = ") VALUES (";
    PropertyInfo[] props = recs[0].GetType().GetProperties();
    for (int x = 0; x < props.Length; x++)
    {
        PropertyInfo pi = props[x];
        if (recs[0].fieldMap.ContainsKey(pi.Name))
        {
            sqlString += "[" + recs[0].fieldMap[pi.Name] + "], ";
            sqlString2 += "@" + pi.Name + ", ";
        }
    }
    // Trim the trailing ", " from both halves and stitch them together.
    sqlString = sqlString.Substring(0, sqlString.LastIndexOf(", "));
    sqlString2 = sqlString2.Substring(0, sqlString2.LastIndexOf(", "));
    sqlString += sqlString2 + ")";
    cmd.CommandText = sqlString;
    cmd.CommandType = CommandType.Text;

    foreach (ModelBase rec in recs)
    {
        // Rebuilding the parameter collection for every row is itself costly;
        // creating the parameters once and assigning only .Value would be cheaper.
        cmd.Parameters.Clear();
        for (int x = 0; x < props.Length; x++)
        {
            PropertyInfo pi = props[x];
            if (rec.fieldMap.ContainsKey(pi.Name))
            {
                if (pi.GetValue(rec, null) == null)
                    cmd.Parameters.AddWithValue("@" + pi.Name, DBNull.Value);
                else
                    cmd.Parameters.AddWithValue("@" + pi.Name, pi.GetValue(rec, null));
            }
        }

        try
        {
            cmd.ExecuteResultSet(ResultSetOptions.None);
        }
        catch (Exception e)
        {
            tran.Rollback();
            con.Close();
            Program.log.Error(e.Message, e);
            return false;
        }
    }

    tran.Commit();
    con.Close();
    return true;
}

View 7 Replies View Related

Temporarily Disable Logging During Large Inserts?

Feb 19, 2007

I have an application that dumps massive amounts of data into a database during the installation. My log file always ends up being 30-40GB+ at the end of the install. Can I turn off logging while I do the install and enable it after? What are my options?
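
You can't disable logging, but the recovery model controls how much of the log sticks around. Switching to SIMPLE (or BULK_LOGGED, if the load uses bulk operations) for the duration of the install keeps the log from growing unbounded, because log space is reused after each checkpoint. A sketch with a hypothetical database name and backup path:

ALTER DATABASE InstallTarget SET RECOVERY SIMPLE
-- ... run the installation load ...
ALTER DATABASE InstallTarget SET RECOVERY FULL
BACKUP DATABASE InstallTarget TO DISK = 'C:\backups\InstallTarget.bak'  -- restart the backup chain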

View 6 Replies View Related

Problems With Replicating Large Batch Inserts

Nov 14, 2006

I have transactional replication set up between 2 SQL Server 2005 databases on 2 different boxes. Both the log reader and distribution agents run in continuous mode. The distributor resides on a third SQL Server box. We are having performance issues with replication when there are large batch deletes/inserts happening on the publisher. There is a batch job that runs for about 8-10 hours every day on the publisher and deletes/inserts thousands of records as part of transactions. The time to replicate all this data to the subscriber is around 13-15 hours, which is not acceptable to our user community. While monitoring I found that the distribution database (MSrepl_commands table) at times has millions of records in it, which would explain why the latency is so high. Add to that the fact that there are large transactions occurring on the publisher.

I was wondering if anyone has faced a similar problem before. Are there any configuration changes I can make to the replication infrastructure to reduce latency?

Would appreciate your help.

Thanks.

View 4 Replies View Related

Slow SELECT Blocking Critical INSERTs

May 20, 2008

Our company records live sales into an SQL Server database. The same tables that store the sale information are also used by a reporting interface to query sales figures, but occasionally a SELECT generated by the reports is so slow that it causes the INSERT operations to time out and fail.

We've tried increasing the CommandTimeout property in .NET for the application performing the inserts, but despite it being set to 90 seconds, it's still not enough to prevent the occasional sale from failing to record, which is a big no-no for us. We've run the Database Tuning Advisor on the stored procedure that generates the SELECT for the reports, and it is now fully optimized, but still this isn't enough. The tables are quite massive (millions of rows) and the SELECT requires JOINs to other tables, some of which are in a separate database on the same server.

Is there a solution to this problem, aside from increasing the CommandTimeout property to the point where no timeout errors occur? My concern is that doing this could increase the number of concurrent connections and we'd hit another limit, so it's not really solving the problem. Is there a way to configure SQL Server to always favour INSERTs over SELECTs? The reporting users won't really care if the reports are slower but it's critical to get these INSERTs up to 100% reliability.

I'm not a DBA (just a developer) so I don't have an intricate knowledge of databases and this problem is a bit beyond my level of expertise. Obviously we don't want to re-design our entire system, but we'll do whatever necessary to ensure we aren't failing to record sales.

One idea I had, which may be awful (I don't know), is to submit these sales to an MSMQ queue and write some software that reads from the queue and inserts the records from there instead. We could then deal with the timeout issue by just re-submitting the sale until it is accepted, then removing it from the queue.
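
If the server is SQL Server 2005 or later, one option worth testing (a sketch; the database name is hypothetical) is row versioning, which lets the report SELECTs read the last committed version of each row instead of blocking behind, or being blocked by, the INSERTs:

-- Requires no other active connections in the database while it runs:
ALTER DATABASE SalesDB SET READ_COMMITTED_SNAPSHOT ON

This adds some tempdb overhead for version storage, so it deserves a load test before production.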

View 4 Replies View Related

Slow Inserts/updates As The Database Size Grows

Oct 23, 2007

Hi all,

This managed application was written to run on a Symbol 3090 Win CE 5.0 scanning device. We are using the Symbol-provided classes to access the scanning interface and a SQL Compact database on the device to collect the scanned data, then using merge replication to synchronize the scanned data when the device is docked. The problem we have experienced seems to be related to the performance of inserting and updating records in the database.

We have tested inserting/updating 1000 randomly generated records in the database. At first, the time to commit a record increases when the database is flushing (the flush interval in the connection string is 10 seconds by default). Then, as the database size grows, the time to commit every single record increases, which causes the application to perform slowly as items are scanned into the database. The device program memory, however, remains consistent while scanning. From our tests, the time to execute either an update or an insert command on a 2MB SQL Mobile database (up to 10,000 records, depending on the size of the columns) is nearly 2 to 2.5 seconds. Below is the only code I am executing:


If Not sqlObj.UpdateItem(1061022, itemNo, 1) Then

sqlObj.InsertResultSet(1061022, itemNo, itemObj.Style, itemObj.Color, itemObj.Size, itemObj.Description, 0, 1)

End If

For the record, I am using a prepared update command and ResultSet.Insert methods to perform the update and insert operations against the database.

Any help on this issue is highly appreciated.

Thanks
Ravi.



View 1 Replies View Related

Large Table, Really Slow Queries

Jul 26, 2007

I'm working with a table with about 60 million records. This monster is growing every minute of the day as well, by 200,000 - 300,000 records/day. It's 11 columns wide, and has one index on a datetime column. My task is to create some custom reports based on three of these columns, including the datetime one.

The problem is response time. Any query executed on this table takes forever--anywhere between 30 seconds and 4 minutes. Queries such as this one below, as simple as it is, can take a minute or more:

select
count(dt_date) as Searches
from
SearchRecords
where
datediff(day,getdate(),dt_date)=0


As the table gets larger and larger, the response time is going to get worse and worse. Long story short, what are my options to get the speed of queries down to just a few seconds with a table this big? So far the best I can come up with is to index any other appropriate columns (of which there is one for sure, maybe two).
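
One specific problem with the example query: datediff(day, getdate(), dt_date) = 0 applies a function to the indexed column, so the datetime index can't be used for a seek and the whole table gets scanned. Rewriting the filter as a range over the raw column (a sketch of the same "today" predicate) lets the existing index do its job:

select
count(dt_date) as Searches
from
SearchRecords
where
dt_date >= convert(char(8), getdate(), 112)                    -- midnight today ('yyyymmdd')
and dt_date < dateadd(day, 1, convert(char(8), getdate(), 112)) -- midnight tomorrow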

View 6 Replies View Related

Large Database, Slow Query Speed. Help!

May 29, 2008

Hi guys,

I am asking this question on behalf of a friend. I have little knowledge of SQL 2005 but my friend is quite knowledgeable, although this is the first time he is dealing with large database for a client. So here's the story.

His client has a database containing 1.5 million books. Now he is setting up a website which will enable users to search for books. Searching by ISBN is no problem, as it only takes 1 second. The problem is that searching by Title takes more than 20 seconds, which is unacceptable. My friend has only worked with smaller databases; he only recently thought of implementing indexing and is now looking for other ideas.

Each row contains book details such as Title, Author1, Author2, Author3, Publisher, Publication Date, ISBN, etc.

Can anyone who is more experienced with large databases share some design ideas? His client is aiming for 8 seconds or less.

Thanks in advance!
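
For title searching specifically, a full-text index usually beats LIKE '%word%' scans by orders of magnitude on 1.5 million rows. A sketch, assuming a hypothetical table dbo.Books with a primary key index named PK_Books:

CREATE FULLTEXT CATALOG BookCatalog;

CREATE FULLTEXT INDEX ON dbo.Books (Title, Author1, Author2, Author3)
    KEY INDEX PK_Books ON BookCatalog;

-- Then search with CONTAINS instead of LIKE:
SELECT ISBN, Title FROM dbo.Books WHERE CONTAINS(Title, '"database design"');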

View 14 Replies View Related

Large Table / Slow Query: Can Performance Be Improved?

Jul 20, 2005

I am having performance issues with a SQL query in Access. My query is accessing and joining several tables (one very large one). The tables are linked via ODBC. The client submits the query to the server, which is separated from it by several states. It appears the query is retrieving gigs of data from the table and processing the joins on the client. Is there a way to perform more of the work on the server, thereby minimizing the amount of extraneous table data moving across the network and improving performance (woefully slow, about 6 hours)?

View 3 Replies View Related

Fulltext Large (500,000) Count Query Performance Too Slow

Feb 14, 2008

Hi,

The response time for a simple count on a full-text search is too slow.

Even the most simple query on a good server (64-bit dual Opteron, 4GB RAM, with high-speed 16-disk RAID storage):

select count(*) from content_books where contains(searchData,'"english"')

This takes 4 seconds to count the average 500,000 results. I have removed all the joins with real table data, so the query runs only inside the full-text engine.

I would expect this to be down to 4 milliseconds. Isn't it just getting the size of the "english" word result index?

It seems the engine is going through all the results, because if I do a more complex search that returns fewer results, the performance is better.

Any clues on how to do this faster? I never read the thousands of records, but I need to count them...

Thank you very much.

View 2 Replies View Related

SQL Server 2012 :: Asynchronous Cursor Population Slow For Large Result Sets

Jul 2, 2015

So async cursor population is supposed to create the cursor and return the cursor ID quickly, while the server works on populating the results asynchronously. For a keyset-driven cursor, SQL Server stores the keysets in tempdb, which it then uses to fetch data for the cursor results. Anyway, this works fine for smaller tables, but I'm finding that for large result sets, the async cursor population is very slow and indeed seems to approximate synchronous time. The wait stat I get while it is running (supposedly asynchronously) is TRANSACTION_MUTEX.

Example:
--enable async cursor
exec dbo.sp_configure 'cursor threshold', 0; reconfigure;
declare @cursor int, @stmt nvarchar(max), @scrollopt int, @ccopt int, @rowcount int;
--example of giant result set
set @stmt = 'select * from sys.all_objects o1, sys.all_objects o2';

[code]...

Note that using the SQL "select * from sys.all_objects o1" is much faster than "select * from sys.all_objects o1, sys.all_objects o2". However, if cursor population is async, I'd expect the time to return a cursor id to be similar between the two.

View 7 Replies View Related

Transact SQL :: Check No Inserts Occurring In Tables

Sep 14, 2015

I want to check that no inserts are occurring in 5 tables that depend on each other, and then drop and recreate those 5 tables. I have scripts to drop and recreate the tables. How do I check that no inserts are happening in these 5 tables?

Table A
Table B, dependent on Table A
Table C, dependent on Table B
Table D, dependent on Table C
Table E, dependent on Table D
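
One low-impact check on SQL Server 2005 and later (a sketch; the table names are placeholders for the five real ones) is the last_user_update column in sys.dm_db_index_usage_stats, which records the last insert/update/delete against each table since the instance started:

SELECT OBJECT_NAME(us.object_id) AS table_name, MAX(us.last_user_update) AS last_write
FROM sys.dm_db_index_usage_stats us
WHERE us.database_id = DB_ID()
  AND us.object_id IN (OBJECT_ID('dbo.TableA'), OBJECT_ID('dbo.TableB'),
                       OBJECT_ID('dbo.TableC'), OBJECT_ID('dbo.TableD'),
                       OBJECT_ID('dbo.TableE'))
GROUP BY us.object_id

Run it twice a few minutes apart; if last_write doesn't move, nothing is writing to the tables. Note that these stats reset when the instance restarts.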

View 9 Replies View Related

Counting Inserts & Updates From A Specific Or Group Of Tables?

Mar 11, 2002

I am kinda new with SQL and am trying to get a count of the number of updates and/or inserts to any given table (or group of tables), and cannot get the syntax correct... can anyone help with this?
Thank you in advance.
Colin P.
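
One way to do this without any special server features (a minimal sketch; the audited table dbo.Orders and the counter table are hypothetical) is an AFTER trigger that bumps counters in a small tracking table:

CREATE TABLE dbo.ChangeCounts (TableName sysname PRIMARY KEY, Inserts int, Updates int)
INSERT INTO dbo.ChangeCounts VALUES ('Orders', 0, 0)
GO
CREATE TRIGGER trg_Orders_Count ON dbo.Orders
AFTER INSERT, UPDATE
AS
SET NOCOUNT ON
IF EXISTS (SELECT * FROM deleted)   -- rows in 'deleted' mean this was an UPDATE
    UPDATE dbo.ChangeCounts
       SET Updates = Updates + (SELECT COUNT(*) FROM inserted)
     WHERE TableName = 'Orders'
ELSE
    UPDATE dbo.ChangeCounts
       SET Inserts = Inserts + (SELECT COUNT(*) FROM inserted)
     WHERE TableName = 'Orders'
GO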

View 1 Replies View Related

Transact SQL :: Multiple Inserts Different Columns And Tables Into One Table

Oct 12, 2015

I have a problem with my SQL statement. I am trying to insert different columns from different tables into one new table. Unfortunately, my statement doesn't do this.

If object_ID(N'Bezeichnungen') is not NULL
   Drop table Bezeichnungen;
GO
create table Bezeichnungen
(
 Artikelnummer nvarchar(18),
 Artikelbezeichnung nvarchar(80),
 Artikelgruppe nvarchar(13),
 
[code]...

View 19 Replies View Related

Multiple Tables, Inserts, Identity Columns And Database Integrity

Apr 30, 2008

Hi all,
I am writing a portion of an app that is of intensely high online eCommerce usage. I have a question about identity columns and locking or not.
What I am doing is this: I have two normalized tables, OrderDemographics (firstname, lastname, ccnum, etc.) and OrderItems. The primary key of OrderDemographics is a column called 'ID' (an incrementing identity integer). In the OrderItems table, the 'OrderID' column is a foreign key to the OrderDemographics primary key column 'ID'.
What I have previously done is insert the demographics into OrderDemographics, then do a 'select top 1 ID from OrderDemographics order by ID DESC' to get that last ID, since you can't tell what it is until you've added the row....
The problem is, there are up to 20,000 users/sessions at once, and there is a possibility that in the fraction of a second it takes to select back that ID and use it for the initial OrderItems row, another user might click 'order' a fraction of a second after the first and create another row in OrderDemographics, thus incrementing the ID column and throwing all the items that Customer #1 orders into Customer #2's order....
How do I lock a SQL table, or lock the application in .NET, to handle this problem and keep it from occurring?
Thanks, appreciate it.
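
The standard fix for this race is SCOPE_IDENTITY(), which returns the identity value generated by the current session and scope, so another user's insert can never bleed in. A minimal sketch (column and parameter names are hypothetical):

CREATE PROCEDURE dbo.CreateOrder
    @FirstName nvarchar(50), @LastName nvarchar(50), @ItemSKU nvarchar(20)
AS
SET NOCOUNT ON
BEGIN TRAN
    INSERT INTO OrderDemographics (FirstName, LastName) VALUES (@FirstName, @LastName)

    DECLARE @OrderID int
    SET @OrderID = SCOPE_IDENTITY()   -- this session's identity value, not the table's latest

    INSERT INTO OrderItems (OrderID, ItemSKU) VALUES (@OrderID, @ItemSKU)
COMMIT TRAN
GO

No table locking is needed. @@IDENTITY is close but can be thrown off by triggers that insert into other identity tables, so SCOPE_IDENTITY() is the safer choice.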

View 2 Replies View Related

Better Practices Wanted For Cascading Inserts Of Hierarchical Data From Staging Tables

Aug 28, 2007

I apologize if this has been asked, but I can't find a complete answer.

We have a situation with parent/child tables which have an identity column as their PK. We need to be able to insert into the live tables from staging tables. The data in the staging tables are related via a surrogate key.

I have found the OUTPUT clause, but that can only refer to columns of the actual table (since there is no FROM clause in an INSERT). Our current best solution to this problem involves adding bogus "staging" columns to the destination tables, and removing them after we've inserted everything from staging. This is an unattractive solution to say the least.

I'll give an example that mirrors our actual solution, and ask if anyone has a better solution?
----------




Code Snippet
CREATE TABLE [dbo].[TABLE_A](
[ID] [int] IDENTITY(1,1) NOT NULL,
[DATA] [nchar](10) NOT NULL,
[STAGING_COLUMN] [bigint] NULL,
CONSTRAINT [PK_TABLE_A] PRIMARY KEY ([ID] ASC)
)
GO
CREATE TABLE [dbo].[TABLE_B](
[ID] [int] IDENTITY(1,1) NOT NULL,
[A_ID] [int] NOT NULL,
[DATA] [nchar](10) NOT NULL,
[STAGING_COLUMN] [bigint] NULL,
CONSTRAINT [PK_TABLE_B] PRIMARY KEY ([ID] ASC)
)
GO
ALTER TABLE [dbo].[TABLE_B]
ADD CONSTRAINT [FK_TABLE_A_TABLE_B] FOREIGN KEY([A_ID]) REFERENCES [dbo].[TABLE_A] ([ID])
GO
CREATE TABLE [dbo].[STAGE_TABLE_A](
[A_Key] [bigint] NOT NULL,
[DATA] [nchar](10) NOT NULL
)
GO
CREATE TABLE [dbo].[STAGE_TABLE_B](
[B_Key] [bigint] NOT NULL,
[DATA] [nchar](10) NOT NULL,
[A_Key] [bigint] NOT NULL
)
GO


The STAGING_COLUMN columns are the ones that will be added before, and dropped after.






Code Snippet
DECLARE @TABLE_A_MAP TABLE (
A_ID INT,
A_Key BIGINT
)
INSERT INTO TABLE_A (DATA, STAGING_COLUMN)
OUTPUT INSERTED.ID, INSERTED.STAGING_COLUMN INTO @TABLE_A_MAP
SELECT DATA, A_Key FROM STAGE_TABLE_A
INSERT INTO TABLE_B (A_ID, DATA)
SELECT TAM.A_ID, STB.DATA
FROM STAGE_TABLE_B STB INNER JOIN @TABLE_A_MAP TAM ON TAM.A_Key = STB.A_Key






This seems to work, but I'd really like another alternative. Even though this is happening when nobody else is using the database, I cringe at the thought of adding and removing columns just to make this work.

Here are a few of my constraints:

- The above is a simplification of the actual problem. The actual problem goes about five levels deep (hence the B_Key in STAGE_TABLE_B). At the top level, our larger customer will have 100,000 rows to insert. Each level will average 3 times as many rows as the next higher level, so we're talking about real volumes here.
- This has to finish over the course of a weekend.
- This has to be delivered to QA this Friday.
Thanks for any help or insight.
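
If upgrading to SQL Server 2008 is ever an option, its MERGE statement removes the need for the staging columns entirely, because OUTPUT in a MERGE (unlike in an INSERT...SELECT) can reference source columns. A sketch against the tables above:

DECLARE @TABLE_A_MAP TABLE (A_ID INT, A_Key BIGINT)

MERGE INTO TABLE_A AS T
USING STAGE_TABLE_A AS S
   ON 1 = 0                           -- never matches, so every source row inserts
WHEN NOT MATCHED THEN
    INSERT (DATA) VALUES (S.DATA)
OUTPUT INSERTED.ID, S.A_Key INTO @TABLE_A_MAP (A_ID, A_Key);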

View 3 Replies View Related

Large Tables In SQL 7.0

Jul 12, 2001

We currently have a data warehouse running on SQL 7.0 SP2. One of our primary fact tables now has well over 155 million rows. The table is not very wide, as it only contains 17 columns, most of which are defined as integers. The entire database is only 20 GB.

The issue is that the loads from the staging table to this fact table have significantly deteriorated over the last month or so, dropping from over 400 transactions per second to around 85. We drop all the indexes on the fact table before we load the data into it.

Are there issues with a manageable table size in SQL 7.0 that we need to be concerned about? And should we consider partitioning the table into several smaller tables and join them with a "union all" view?

I really need to get this performance issue resolved, as our IT support vendor is pushing us to port the data warehouse to UDB because they tell us that SQL server is not scalable enough to handle this volume of data.

Thanks for any help you can provide.

George M. Parker
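
On the partitioned-view question: a UNION ALL view over range-limited member tables looks like the sketch below (names and ranges are illustrative). The CHECK constraints are what allow the optimizer to skip member tables, though how much elimination SQL 7.0 does, compared with SQL 2000's partitioned views, is worth verifying on your build:

CREATE TABLE FactSales_1999 (
    SaleDate datetime NOT NULL CHECK (SaleDate >= '19990101' AND SaleDate < '20000101'),
    Amount   int      NOT NULL
)
CREATE TABLE FactSales_2000 (
    SaleDate datetime NOT NULL CHECK (SaleDate >= '20000101' AND SaleDate < '20010101'),
    Amount   int      NOT NULL
)
GO
CREATE VIEW FactSales AS
SELECT SaleDate, Amount FROM FactSales_1999
UNION ALL
SELECT SaleDate, Amount FROM FactSales_2000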

View 6 Replies View Related

Large Tables

Aug 10, 2000

Hi,

How can I partition large tables so that the inserts and updates I am doing on them take less time?

I want to know how I can partition large tables, and if I do, how performance will be improved.

Thanks.

View 1 Replies View Related

Large Tables

Mar 13, 2001

How can I find largest 5 or 10 tables in a database?

Thanks in advance
Chan
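
A sketch that works on SQL 7.0/2000, summing reserved pages from sysindexes (8 KB per page):

SELECT TOP 10 o.name AS table_name,
       SUM(i.reserved) * 8 AS reserved_kb
FROM sysindexes i
JOIN sysobjects o ON o.id = i.id
WHERE o.type = 'U'
  AND i.indid IN (0, 1, 255)   -- heap/clustered rows plus text/image pages
GROUP BY o.name
ORDER BY reserved_kb DESC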

View 2 Replies View Related

Searching In Large Tables

Mar 7, 2007

Hi there, I am having a problem related to SQL Server. I have a table called ZipCodes that has around 80,000 rows, and the size of the table is around 100 MB. The table is now on a web server. My problem is that when I fire a query that needs to go through the whole table, the estimated time to execute the query comes to 13 seconds, and the cursor threshold is set to 7 seconds (and I can't change that), so SQL Server cancels the query. I need some methodology/technique through which I can search large tables with minimum calculations in minimum time... (any ideas?)

View 3 Replies View Related

Compressing Large Tables

Mar 19, 2001

Is it possible to compress large tables in the database, like the COMPRESS or ARCHIVE options we use to reduce the size of files stored on an operating system?

I know there is a difference between a file stored on disk and a table created in the database, but currently I am facing space problems wherein I have to manage my database within the space available, so please advise me if the option is available in SQL Server 6.5 or 7.
I will be happy if I get the solution immediately, as I am currently facing this problem and waiting for your reply.
Thank you
Amol

View 1 Replies View Related

Restore Two Tables From Large DB Into A New DB

Feb 12, 2008

I am fairly new to SQL, so please forgive me if my question is a bit elementary. I need to pull two individual tables out of a massive DB into a new DB for testing.

Thanks for the help.
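
The simplest route is usually SELECT INTO across databases, which creates the target table and copies the data in one statement (a sketch with hypothetical database and table names; note that indexes and constraints are not copied and must be scripted separately):

SELECT * INTO TestDB.dbo.Table1 FROM MassiveDB.dbo.Table1
SELECT * INTO TestDB.dbo.Table2 FROM MassiveDB.dbo.Table2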

View 2 Replies View Related

Manipulating Large Tables

Feb 15, 2008

I'm in the midst of a long file conversion job. Today I found that one of the tables (converted from CSV) is 6.7 million records. My SQL script, which I use to reconfigure the weird original date format into something the rest of the planet uses, times out due to the size.

Does anyone know of a file utility to automagically split SQL Server 2005 tables for later re-combining once my scripts have successfully completed their task on the smaller tables?
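
Rather than physically splitting the table, the usual trick is to run the conversion in chunks so no single statement runs long enough to hit a timeout. On SQL Server 2005, UPDATE TOP makes this easy (a sketch; the table, columns, and date style are hypothetical):

DECLARE @rows int
SET @rows = 1
WHILE @rows > 0
BEGIN
    UPDATE TOP (50000) dbo.ImportedCsv
       SET CleanDate = CONVERT(datetime, RawDate, 103)   -- 103 = dd/mm/yyyy
     WHERE CleanDate IS NULL
       AND RawDate IS NOT NULL
    SET @rows = @@ROWCOUNT
END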

View 7 Replies View Related

Partitioning Large Tables

Nov 14, 2007

I am making a warehouse management system. The system will contain a lot of data, but only a small portion of it will be accessed frequently. Most of the data will be accessed only seldomly, but the customer wants to keep all historical data (just in case they should need it sometime). I have figured I need to partition the tables somehow to keep what is fresh in one place and historical data in another. What is the best way to do this?

I am thinking about making historical tables. For example, I can have a table named PickList and another named PickListHistorical. When a picklist is processed/complete I can move it over to the PickListHistorical table, but when users need to search for a specific picklist I have to look in both tables. I can of course create a view for this to make it transparent. SQL Server 2005 introduced automatic partitioning. Would it be better to use this than to create my own historical tables? If so, can you please tell me how to do it?

Thank you!
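
For reference, SQL Server 2005 table partitioning is driven by a partition function and scheme; a minimal sketch that range-partitions a hypothetical PickList table by date (one filegroup used for brevity):

CREATE PARTITION FUNCTION pf_PickListDate (datetime)
AS RANGE RIGHT FOR VALUES ('20060101', '20070101');

CREATE PARTITION SCHEME ps_PickListDate
AS PARTITION pf_PickListDate ALL TO ([PRIMARY]);

CREATE TABLE dbo.PickList (
    PickListID  int      NOT NULL,
    CreatedDate datetime NOT NULL,
    CONSTRAINT PK_PickList PRIMARY KEY (PickListID, CreatedDate)
) ON ps_PickListDate (CreatedDate);

Queries that filter on CreatedDate then touch only the relevant partitions, and old partitions can be switched out to an archive table almost instantly with ALTER TABLE ... SWITCH.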

View 11 Replies View Related

Comparing Large Tables

Oct 19, 2007



I've successfully created SSIS packages where I compare two tables in different databases on different servers. This works well enough to compare hundreds of thousands of records quickly, but the process becomes a huge performance problem when the tables each contain tens of millions of records.

One database is on a SQL 2005 box and the other is SQL 7.0, so the lookup component fails for that version of SQL Server. I've been implementing merge joins and conditional-split components to do my standard table comparisons.

Is there another way to implement this process, or maybe partition it somehow to take pieces of the table at a time and compare them? I'm open to ideas.
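
One way to cut the work down (a sketch; the key and amount columns are hypothetical, and the bucket size is arbitrary) is to run the same bucketed aggregate on both servers first. It uses only features present in SQL 7.0, and only buckets whose counts or sums differ need a detailed row-by-row comparison:

SELECT ID / 100000 AS bucket,
       COUNT(*)    AS row_cnt,
       SUM(Amount) AS amount_sum
FROM dbo.BigTable
GROUP BY ID / 100000
ORDER BY bucket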

View 11 Replies View Related

What Is Most Efficient Way To Use DataAdapters With Large SQL Tables

Jan 9, 2008

I'm using DataAdapters with my SQL database, with the intention of having all the SELECT, UPDATE, INSERT, and DELETE commands automatically generated. One table is huge, so I'm wondering: is it more efficient to use "SELECT TOP(1) * FROM hugetable" instead of "SELECT * FROM hugetable" in order to facilitate the generation of commands?

I hope this isn't too confusing.

Thanks,
Geoff

View 2 Replies View Related






