Searching In Large Tables

Mar 7, 2007

Hi there,

I am having a problem related to SQL Server. I have a table called ZipCodes that has around 80,000 rows, and the size of the table is around 100 MB. The table is now on a web server. My problem is that when I fire a query that needs to go through the whole table, the estimated time to execute the query comes to 13 seconds, but the cursor threshold is set to 7 seconds (and I can't change that), so SQL Server cancels the query before it is fired.

Now I need some methodology/technique through which I can search large tables with minimum calculations in minimum time.

(Any ideas?)
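One standard first step: if the query can seek on an index instead of scanning all 80,000 rows, the optimizer's cost estimate normally drops far below the cursor threshold. A minimal sketch, assuming the search is on a City column (the column and index names here are invented):

CREATE INDEX IX_ZipCodes_City ON dbo.ZipCodes (City)

-- With the index in place this seeks instead of scanning the whole table:
SELECT ZipCode, City, State
FROM dbo.ZipCodes
WHERE City = 'Chicago'

Note that a leading wildcard (LIKE '%...') defeats any index and forces the full scan, so the shape of the search pattern matters as much as the index itself.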

View 3 Replies

Searching Large Text

May 28, 2007

I am building a generic job site and I have hit a speed bump. I need to store resumes in the database to be searched on. What is the best way to store these full-text resumes so that they can be easily searched?

SQL 2005
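For SQL 2005 the standard answer is a full-text index on the resume column; LIKE '%...%' won't scale for this. A sketch, assuming a Resumes table whose text sits in a varchar(max) column and whose primary key index is named PK_Resumes (names are illustrative):

CREATE FULLTEXT CATALOG ResumeCatalog

CREATE FULLTEXT INDEX ON dbo.Resumes (ResumeText)
    KEY INDEX PK_Resumes ON ResumeCatalog

-- Search with word/phrase semantics instead of LIKE:
SELECT ResumeID
FROM dbo.Resumes
WHERE CONTAINS(ResumeText, '"project manager" AND "accounting"')

If the resumes arrive as Word or PDF files, they can instead go into a varbinary(max) column alongside a document-type column, and full-text search will index them through the appropriate IFilter where one is installed.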

View 2 Replies View Related

Searching In Tables

Dec 8, 2004

Hi All,


Just wanted to run this idea past you all before I have a go at it.

I have two tables A & B that are similar to the below


Table A
Name1 Name2 Name3
Tom Bill John
Gary Harry Eric


Table B
Name1 Name2 Name3
John Bill Tom
Tom Eric john
Leslie Philip Colin


What I want to do is see if the records from Table A row 1 exist in Table B.

As you can see, they can appear in any order, partially, or not at all.

What I propose to do is take Table A Name1 and see if it matches Table B row 1 Name1 OR Name2 OR Name3, and if I find a match, use a variable to assign the value 1 (so I can then see if the match is full (score 3), partial (score 1 or 2), or not at all (score 0)).

Then I will need to do the same for Table B row 2, and so on.

Then go to row 2 of Table A and start again.

Does this make sense? Are there better ways of doing this?

I am using SQL Server to do the searching.
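A set-based way to get the scoring without row-by-row loops: join every Table A row to every Table B row and let CASE expressions add up the matches, so each pair gets its 0-3 score in a single pass. A sketch:

SELECT a.Name1, a.Name2, a.Name3,
       b.Name1 AS B_Name1, b.Name2 AS B_Name2, b.Name3 AS B_Name3,
       CASE WHEN a.Name1 IN (b.Name1, b.Name2, b.Name3) THEN 1 ELSE 0 END
     + CASE WHEN a.Name2 IN (b.Name1, b.Name2, b.Name3) THEN 1 ELSE 0 END
     + CASE WHEN a.Name3 IN (b.Name1, b.Name2, b.Name3) THEN 1 ELSE 0 END AS Score
FROM TableA AS a
CROSS JOIN TableB AS b
-- Score: 3 = full match, 1 or 2 = partial, 0 = no match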

All comments appreciated and welcome.

rgs

Tonuy

View 4 Replies View Related

Searching Tables

Jun 3, 2008

This subject probably isn't correct, but I'm not sure how else to describe what I want to do.

I have two tables.
Table 1:
Act_Num.....Code
1...........33
2...........23
3...........22
4...........26
5...........20

Table 2:
Part_ID.....Act_Listings
1927..........33 14
5819..........23 26

What I need to do is identify records from Table 2 that have a code in the Act_Listings field that is not contained in Table 1. So for Part_ID 1927, code 33 exists in Table 1, but code 14 doesn't, so that record should be selected.
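Because Act_Listings packs several codes into one field, the codes have to be split out before they can be checked against Table 1. A sketch using the XML-split trick (SQL 2005 syntax; assumes the codes are separated by single spaces):

SELECT DISTINCT t2.Part_ID
FROM Table2 AS t2
CROSS APPLY (SELECT CAST('<c>' + REPLACE(t2.Act_Listings, ' ', '</c><c>')
                         + '</c>' AS xml) AS x) AS src
CROSS APPLY src.x.nodes('/c') AS n(c)                 -- one row per code
WHERE NOT EXISTS (SELECT * FROM Table1 AS t1
                  WHERE t1.Code = n.c.value('.', 'int'))

With the sample data this returns Part_ID 1927 (code 14 has no match) but not 5819. Longer term, storing one code per row in a child table turns this into a plain NOT EXISTS with no splitting at all.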

View 15 Replies View Related

Searching In All Tables In A Database

Apr 7, 2005

What's the easiest way to search for a keyword in all the tables present in a database? Or in just 5-6 tables?
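A sketch of one roll-your-own approach: generate a test per character column from INFORMATION_SCHEMA and run each with dynamic SQL. It scans everything, so treat it as a development-time tool (add a TABLE_NAME filter to restrict it to the 5-6 tables):

DECLARE @keyword varchar(100), @sql nvarchar(4000)
SET @keyword = 'widget'                  -- the value being hunted for

DECLARE c CURSOR FAST_FORWARD FOR
    SELECT 'IF EXISTS (SELECT * FROM [' + TABLE_NAME + '] WHERE ['
         + COLUMN_NAME + '] LIKE ''%' + @keyword + '%'') '
         + 'PRINT ''' + TABLE_NAME + '.' + COLUMN_NAME + ''''
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE DATA_TYPE IN ('char', 'varchar', 'nchar', 'nvarchar')
      AND TABLE_NAME IN (SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES
                         WHERE TABLE_TYPE = 'BASE TABLE')

OPEN c
FETCH NEXT FROM c INTO @sql
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_executesql @sql              -- prints table.column for every hit
    FETCH NEXT FROM c INTO @sql
END
CLOSE c
DEALLOCATE c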
Thanks,
AzamSharp

View 1 Replies View Related

Searching Multiple Tables

Jul 26, 2004

I'm in need of some help; I'm new at SQL and am trying to search multiple tables.

I am doing a football database with a table for each of the teams,
and each table has the following columns:
"First Name" "Last Name" "Age" "Date of Birth" etc.

What I am trying to do is search for either the "First Name" or "Last Name" in every table.

If anyone can help me it would be appreciated.
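One table per team makes this awkward; the usual workaround is a view that stitches the team tables together with UNION ALL, so a single query searches them all (the team table names here are made up):

CREATE VIEW dbo.AllPlayers AS
SELECT 'Arsenal' AS Team, [First Name], [Last Name], Age, [Date of Birth] FROM dbo.Arsenal
UNION ALL
SELECT 'Chelsea', [First Name], [Last Name], Age, [Date of Birth] FROM dbo.Chelsea
-- ...one SELECT per team table
GO

SELECT Team, [First Name], [Last Name]
FROM dbo.AllPlayers
WHERE [First Name] = 'Thierry' OR [Last Name] = 'Henry'

Longer term, a single Players table with a Team column avoids the per-team tables entirely.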

View 1 Replies View Related

Searching Multiple Tables

Nov 19, 2004

I have multiple tables that vary very little in name. What I mean by this is they are named tbleffort1, tbleffort2, etc. I need to search all the tables together. There is a large, constantly changing number of tables, so I would prefer not to have to write them out one by one! Any suggestions would be most appreciated!
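Since the table list keeps changing, the UNION ALL can be generated from the system catalog at run time instead of being maintained by hand. A SQL 2000-style sketch (assumes all tbleffort tables share one column layout):

DECLARE @sql varchar(8000)
SET @sql = ''

SELECT @sql = @sql
            + CASE WHEN @sql = '' THEN '' ELSE ' UNION ALL ' END
            + 'SELECT * FROM [' + name + ']'
FROM sysobjects
WHERE type = 'U' AND name LIKE 'tbleffort%'

EXEC (@sql)   -- note the 8000-character cap; a very long table list needs chunking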

View 4 Replies View Related

Searching Thru Stored Procs Via The Sys Tables

Sep 3, 2003

Basically, I want to be able to have a stored procedure that will search through all the stored procedures looking for a given string and returning the names of all the stored procs that contain that string value.

I know I can script it off and then do a text search in Notepad or whatever, but I figured there must be a more elegant way to do it. More than likely dealing with the DB sys tables.

Is there already a tool in SQL Server that does this? Or has anybody had a chance to roll their own?
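The usual roll-your-own is a query over syscomments (SQL 2000); note that the text of a long procedure is split across multiple 4000-character rows, so a match that straddles a row boundary can occasionally be missed:

SELECT DISTINCT o.name
FROM sysobjects o
JOIN syscomments c ON c.id = o.id
WHERE o.type = 'P'                      -- stored procedures only
  AND c.text LIKE '%SearchString%'      -- the string being hunted for
ORDER BY o.name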

Thanks,
Tim

View 4 Replies View Related

Searching A List Of Tables, Derived From Another Table

Sep 21, 2005

Relative SQL newbie here... this is probably easy, but...

Let's say I have a table (MainTable) that stores a list of input table names, a primary key (PKey), and a field called "Configured" for each one. Each of these input tables also contains a field called "Configured", which is set to true or false in another process based on an OrderNumber. (So an order's inputs are stored in several input tables, and the MainTable is a summary table that shows which input tables have been configured for any given OrderNumber.)

What I need to do is open each input table, and look for a record containing a specific OrderNumber and where Configured=true. If a record is found, I need to update the Configured field for that table in the MainTable, and then move on to the next sub-table.

The way I'm doing it now is with simple SQL and loops. Here is the basic code (ASP):

*****************************************
OrderNumber = "562613" ' the current order that is being processed

' reset all configured flags
sql = "UPDATE MainTable SET Configured = 0"
conn.execute sql, , &H00000080

' get list of all table names
sql = "SELECT InputTableName, PKey FROM MainTable WHERE InputTableName <> '---'"
set rsTableNames = conn.execute(sql)

while not rsTableNames.eof
    ' test each input table for the configured flag
    sql = "SELECT Configured FROM " & rsTableNames("InputTableName") & _
          " WHERE Configured = 1 AND OrderNumber = '" & OrderNumber & "'"
    set rs = conn.execute(sql)

    If Not rs.EOF Then
        ' update the main table
        sql = "UPDATE MainTable SET Configured = 1 WHERE PKey = '" & rsTableNames("PKey") & "'"
        conn.execute sql, , &H00000080
    end if

    set rs = nothing
    rsTableNames.movenext
wend
*****************************************

There has to be a faster way... I think... maybe something that could be written as a stored procedure? I use a similar technique in a couple of other places and it's a bit of a performance hit, especially as the number of input tables grows.

TIA!

Calan
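A stored-procedure version of the same loop would at least keep all the round trips on the server. A sketch with dynamic SQL, assuming the layout described above and varchar keys as in the ASP code:

CREATE PROCEDURE dbo.UpdateConfiguredFlags
    @OrderNumber varchar(20)
AS
BEGIN
    DECLARE @tbl sysname, @key varchar(20), @sql nvarchar(4000)

    UPDATE MainTable SET Configured = 0          -- reset all flags

    DECLARE c CURSOR STATIC FOR
        SELECT InputTableName, PKey
        FROM MainTable
        WHERE InputTableName <> '---'

    OPEN c
    FETCH NEXT FROM c INTO @tbl, @key
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- one round trip per input table, all server-side
        SET @sql = N'IF EXISTS (SELECT * FROM [' + @tbl + N'] '
                 + N'WHERE Configured = 1 AND OrderNumber = @ord) '
                 + N'UPDATE MainTable SET Configured = 1 WHERE PKey = @k'
        EXEC sp_executesql @sql,
             N'@ord varchar(20), @k varchar(20)',
             @ord = @OrderNumber, @k = @key
        FETCH NEXT FROM c INTO @tbl, @key
    END
    CLOSE c
    DEALLOCATE c
END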

View 6 Replies View Related

Large Tables In SQL 7.0

Jul 12, 2001

We currently have a data warehouse running on SQL 7.0, SP2. One of our primary fact tables is now well over 155 million rows in it. The table is not very wide, as it only contains 17 columns, most of which are defined as integers. The entire database is only 20 GB.

The issue is that the loads from the staging table to this fact table have significantly deteriorated over the last month or so, dropping from over 400 transactions per second to around 85. We drop all the indexes on the fact table before we load the data into it.

Are there table-size limits in SQL 7.0 that make a table this large hard to manage? And should we consider partitioning the table into several smaller tables and joining them with a "union all" view?
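For reference, this is roughly what the "union all" view route looks like: member tables carry CHECK constraints on the partitioning column so that queries touch only the members that can qualify (the constraint-based elimination is a SQL 2000 optimization; on 7.0 the view still works but may scan all members). Names here are invented:

CREATE TABLE Fact1999 (
    SaleDate datetime NOT NULL CHECK (SaleDate <  '20000101'),
    Qty      int      NOT NULL
)
CREATE TABLE Fact2000 (
    SaleDate datetime NOT NULL CHECK (SaleDate >= '20000101'),
    Qty      int      NOT NULL
)
GO
CREATE VIEW FactAll AS
SELECT SaleDate, Qty FROM Fact1999
UNION ALL
SELECT SaleDate, Qty FROM Fact2000
GO
-- The CHECK constraints let the optimizer skip member tables that
-- cannot contain qualifying rows:
SELECT SUM(Qty) FROM FactAll WHERE SaleDate >= '20000601'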

I really need to get this performance issue resolved, as our IT support vendor is pushing us to port the data warehouse to UDB because they tell us that SQL server is not scalable enough to handle this volume of data.

Thanks for any help you can provide.

George M. Parker

View 6 Replies View Related

Large Tables

Aug 10, 2000

Hi,

How can I partition large tables so that the inserts and updates I am doing on them take less time?

And if I do partition them, how exactly will that improve performance?

Thanks.

View 1 Replies View Related

Large Tables

Mar 13, 2001

How can I find largest 5 or 10 tables in a database?
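On SQL 7.0/2000 a quick (approximate) answer comes straight out of sysindexes, where rows and reserved pages are tracked per table (indid 0 for a heap, 1 for a clustered index):

SELECT TOP 10
       o.name,
       i.rows,
       i.reserved * 8 AS reserved_kb   -- pages are 8 KB
FROM sysobjects o
JOIN sysindexes i ON i.id = o.id
WHERE o.type = 'U'
  AND i.indid IN (0, 1)
ORDER BY i.reserved DESC

For exact figures per table, sp_spaceused 'tablename' does the same accounting one table at a time.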

Thanks in advance
Chan

View 2 Replies View Related

Compressing Large Tables

Mar 19, 2001

Is it possible to compress the large tables in the database, like the COMPRESS or ARCHIVE options we use to reduce the size of files stored on an operating system?

I know there is a difference between a file stored on disk and a table created in the database, but currently I am facing space problems and have to manage my database within the space available, so please advise me if such an option is available in SQL Server 6.5 or 7.
I will be happy to get a solution quickly, as I am facing this problem now and waiting for your reply.
Thank you
Amol

View 1 Replies View Related

Restore Two Tables From Large DB Into A New DB

Feb 12, 2008

I am fairly new to SQL, so please forgive me if my question is a bit elementary. I need to pull two individual tables out of a massive DB into a new DB for testing.
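If both databases live on the same server, SELECT ... INTO is the quickest way to get a testable copy; note it copies data and column definitions only, not indexes, constraints, or triggers. A sketch (database and table names below are placeholders):

USE TestDB
SELECT * INTO dbo.Customers FROM MassiveDB.dbo.Customers
SELECT * INTO dbo.Orders    FROM MassiveDB.dbo.Orders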

Thanks for the help.

View 2 Replies View Related

Manipulating Large Tables

Feb 15, 2008

I'm in the midst of a long file conversion job. Today I found that one of the tables (converted from CSV) has 6.7 million records. My SQL script, which I use to reconfigure the weird original date format into something the rest of the planet uses, times out due to the size.

Does anyone please know of a file utility to automagically split sql server 2005 tables for later re-combining once my scripts have successfully completed their task on the smaller tables?
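An alternative to physically splitting the table: run the conversion in batches, so no single statement lives long enough to hit the timeout. A sketch (SQL 2005 syntax; the Converted flag column and the style-103 conversion are placeholders for the real logic):

WHILE 1 = 1
BEGIN
    UPDATE TOP (50000) dbo.ImportedData              -- 50k rows per pass
    SET FixedDate = CONVERT(datetime, RawDate, 103), -- e.g. a dd/mm/yyyy source format
        Converted = 1
    WHERE Converted = 0

    IF @@ROWCOUNT = 0 BREAK                          -- nothing left to convert
END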

View 7 Replies View Related

Partitioning Large Tables

Nov 14, 2007

I am making a warehouse management system. The system will contain much data, but only a small portion of the data will be accessed frequently. Most of the data will be accessed only seldom, but the customer wants to keep all historic data (just in case they should need it sometime). I have figured I need to partition the tables somehow, to keep what is fresh in one place and historical data in another place. What is the best way to do this?

I am thinking about making historical tables. For example, I can have a table named PickList and another table named PickListHistorical. When a picklist is processed/complete I can move it over to the PickListHistorical table, but when the users need to search for a specific picklist I have to look in both tables. I can of course create a view for this to make it transparent.

SQL Server 2005 introduced some automatic partitioning. Will it be better to use this than to create my own historical tables? If so, can you please tell me how I do it?
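A sketch of the SQL 2005 built-in approach (an Enterprise Edition feature) for the picklist example: one PickList table partitioned on a date column, so fresh and historical rows land in different partitions but queries still see a single table. Names and the boundary value are illustrative:

CREATE PARTITION FUNCTION pfPickListDate (datetime)
AS RANGE RIGHT FOR VALUES ('20070101')          -- history to the left, current to the right

CREATE PARTITION SCHEME psPickListDate
AS PARTITION pfPickListDate ALL TO ([PRIMARY])  -- or map partitions to separate filegroups

CREATE TABLE dbo.PickList (
    PickListID  int      NOT NULL,
    CreatedDate datetime NOT NULL,
    -- ...remaining picklist columns...
    CONSTRAINT PK_PickList
        PRIMARY KEY (PickListID, CreatedDate)   -- must include the partitioning column
) ON psPickListDate (CreatedDate)

-- Users query dbo.PickList as one table; the optimizer reads only
-- the partitions the WHERE clause can reach.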

Thank you!

View 11 Replies View Related

Comparing Large Tables

Oct 19, 2007

I've successfully created SSIS packages where I compare two tables in different databases on different servers. This works well enough to compare hundreds of thousands of records quickly, but it becomes a huge performance problem when I'm comparing tables that each contain tens of millions of records.


One database is on a SQL 2005 box and the other DB is SQL 7.0, so the lookup component fails for that version of SQL Server. I've been implementing merge joins and conditional split components to do my standard table comparisons.

Is there another way to implement this process or maybe partition it somehow to take pieces of the table at a time and compare them? I'm open to ideas.

View 11 Replies View Related

What Is Most Efficient Way To Use DataAdapters With Large SQL Tables

Jan 9, 2008

I'm using DataAdapters with my SQL database with the intention of having all the SELECT, UPDATE, INSERT, and DELETE commands automatically generated. One table is huge, so I'm wondering: is it more efficient to "SELECT TOP(1) * FROM hugetable" instead of "SELECT * FROM hugetable" in order to facilitate the generation of commands?

I hope this isn't too confusing.

Thanks,
Geoff

View 2 Replies View Related

4 Separate Tables Or One Large Table?

May 10, 2008

I have 4 tables with the respective amount of records
1) 6755
2) 2021
3) 2021
4) 355

They all have the same columns. However, they need to be separate, or at least queried separately. I'll be accessing this database via the web. I was at first afraid that a large table would cause major slowdown when accessing the db, so I broke it up into 4 tables. If I combined all 4 tables into one large table and just had a column that differentiated the 4, how significant would the change in speed be when accessing the table? It's not a big deal to keep them separate, it's just that when I have to add or remove a column from one table I have to do it in all the tables. Furthermore, I'm using a module from DevExpress; don't know if anyone has heard of it, but when you use a gridview it loads up the entire table even though you're paging (which I think is retarded), so for that reason I was afraid it would slow up my access to the db. Any thoughts?
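For scale, a sketch of the combined design: one table with a discriminator column, indexed so that each per-category query touches only its own rows (names here are invented). At roughly 11,000 total rows, the speed difference versus four separate tables should be negligible:

CREATE TABLE dbo.Listings (
    ListingID int IDENTITY(1,1) PRIMARY KEY,
    Category  tinyint NOT NULL,   -- 1-4: which of the old tables the row belongs to
    -- ...the columns all four tables share...
    Title     varchar(100) NOT NULL
)
CREATE INDEX IX_Listings_Category ON dbo.Listings (Category)

-- Equivalent of querying one of the old tables:
SELECT * FROM dbo.Listings WHERE Category = 2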

View 2 Replies View Related

Slow Inserts Into Large Tables

Nov 29, 2000

We are inserting into a table which includes an identity primary key column. When the table gets really large (e.g. 1.5 million records), the performance of the inserts degrades.

I noticed that when we insert into the table an exclusive lock on the table is obtained. Do inserts into tables with identities always lock the table?

Given the table size is unavoidable, does anyone have a suggestion to improve the performance?

Thanks,
Matt

View 6 Replies View Related

Temp Tables Vs. Large Table

Aug 4, 2005

I have a few hundred users, maybe a dozen or two active at any given time, accessing the same database via ASP. The database has many tables, one being a very large orders table with a few million records, in which I have created a view against. A view only because I need to allow the user to filter quite extensively against the results. The users typically only need to view records for the last 30 days and results for each user might be five thousand records or less.

My question is this: would I be better off writing each user's resultset to a temp table for that user's session, letting the user's filtering and sorting run against that temp table, and increasing my hardware requirements to accommodate that, possibly to the point of creating a database cluster? Or would I be better off leaving it as is, where each user uses the same view?

FYI: each user may need visibility to only a handful of fields, but overall the view must expose many fields.

Any thoughts on this would be greatly appreciated. Thanks in advance.

Dave

View 2 Replies View Related

Query Optimization Help For Very Large Tables

Nov 1, 2007

I have the following table structure:
tableA (~85,000 rows) primary key = [colA,colB]
tableB (~850,000 rows) primary key = [colA,colC]
tableC (~120,000,000 rows) primary key = [colA,colB,colC]

IMPORTANT: colC is DATETIME

For a SET of rows in tableA (about 50,000) I need to pull the MOST RECENT (given a date) corresponding values from tables B and C. The only way I can think of doing this is the following:

SELECT tableA.colA
,(SELECT TOP 1 colX FROM tableB WHERE colA = tableA.colA AND colC <= @INPUTDATE ORDER BY colC desc)
,(SELECT TOP 1 colY FROM tableB WHERE colA = tableA.colA AND colC <= @INPUTDATE ORDER BY colC desc)
,... --some more columns from tableB
,(SELECT TOP 1 colX FROM tableC WHERE colA = tableA.colA AND colB = tableA.colB AND colC <= @INPUTDATE ORDER BY colC desc)
,(SELECT TOP 1 colY FROM tableC WHERE colA = tableA.colA AND colB = tableA.colB AND colC <= @INPUTDATE ORDER BY colC desc)
,... --some more columns from tableC
FROM tableA
WHERE tableA.colX = 'some criteria'


Is there any other way anyone can suggest? Unfortunately, because tableC is so large, the disk IO (I think) causes this query to take over an hour. (If I had monster RAM and super fast disk this wouldn't be as big an issue, but that's not an option right now )
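One rewrite worth testing on SQL 2005: replace the stack of per-column TOP 1 subqueries with one OUTER APPLY per table, so tableB and tableC are each probed once per tableA row rather than once per column. An index on (colA, colB, colC DESC) covering the pulled columns is what makes each probe cheap. A sketch using the same placeholder names:

SELECT a.colA,
       b.colX AS b_colX, b.colY AS b_colY,   -- ...more tableB columns
       c.colX AS c_colX, c.colY AS c_colY    -- ...more tableC columns
FROM tableA AS a
OUTER APPLY (SELECT TOP 1 colX, colY
             FROM tableB
             WHERE colA = a.colA AND colC <= @INPUTDATE
             ORDER BY colC DESC) AS b
OUTER APPLY (SELECT TOP 1 colX, colY
             FROM tableC
             WHERE colA = a.colA AND colB = a.colB AND colC <= @INPUTDATE
             ORDER BY colC DESC) AS c
WHERE a.colX = 'some criteria'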

Thanks in advance!

View 7 Replies View Related

Large Number Of Tables And Performance

Jan 25, 2008

Hi gurus, I'm creating a web application where I will have a large number of tables (between 10k and 20k). This is done for the sake of scalability, as tables will be moved to different database servers as the application grows, and also for performance (smaller indexes). I'm worried, though, about how having a large number of tables could affect the performance of SQL Server while the application starts out on a single database server. I tried to find some resources on that on the internet but couldn't find any.

I would really appreciate if you can give me some advice and if you have any good links that would be great...

Waleed Eissa
http://www.waleedeissa.com

View 9 Replies View Related

Performance Issues With Large Tables

Dec 5, 2007

Hi,

I have a table with over 61 million records with a clustered index on an identity column (the primary key). Simple count queries take minutes to execute on this table (e.g. select count(1) from table1). I checked the statistics on the primary key, and the histogram showed roughly the 39-millionth record as the RANGE_HI_KEY. I updated the statistics on this column and tried requerying, but it still took at least 5 minutes to give me the count of records in the table. There were no users using the table when I queried, and inserts into the table work fine. I have other tables in my database with 41 million records that have no such issues. Can anyone point me to the problem areas in such scenarios?
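As a side note, if this is SQL 2005, the storage engine keeps per-partition row counts that can be read without touching the 61 million rows at all, and they are maintained accurately (unlike the old sysindexes.rows):

SELECT SUM(row_count) AS total_rows
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID('dbo.table1')   -- table name from the example
  AND index_id IN (0, 1)                    -- heap or clustered index only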


Thanks,
Harish

View 6 Replies View Related

Reducing Large Tables, Re-index And Backup Them

Jan 2, 2003

What is the best procedure/sequence to reduce some tables containing a large number of rows on a SQL 2000 server?

The idea is first to check which tables grow extremely fast (all statistics, user, or log tables), then reduce each table according to the number of months the user wishes to keep in it.
As a second step, back up the remaining rows of the table as txt files on the hard disk (using DTS), UPDATE STATISTICS, and re-index the reduced table.
Run the DTS package once every month (delete the oldest month and back up the newest month) and repeat the steps above to keep the size of the tables adequate.

What is a fast way to reduce the number of rows of a large table? The following example produces an error (timeout expired) on my ADO connection when executing:

SET @str = 'DELETE FROM ' + @ProcessTable + ' WHERE ' + @SelectedColumn +
           ' < DATEADD(m, -' + @KeepMonthsInDatabase + ', GETDATE())'
EXEC (@str)

Adding ConnectionTimeout = 0 did not help, unfortunately.

What is the best way to re-index the table just maintained?
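Both problems can be attacked together by deleting in small batches: each DELETE commits quickly (so the ADO timeout never trips) and the log stays small, and the table can be re-indexed once the purge is done. A sketch in the same dynamic-SQL style as above (SQL 2000 syntax, using SET ROWCOUNT to cap each pass):

SET @str = 'SET ROWCOUNT 10000 ' +
           'WHILE 1 = 1 BEGIN ' +
           ' DELETE FROM ' + @ProcessTable +
           ' WHERE ' + @SelectedColumn +
           ' < DATEADD(m, -' + @KeepMonthsInDatabase + ', GETDATE()) ' +
           ' IF @@ROWCOUNT = 0 BREAK ' +
           'END ' +
           'SET ROWCOUNT 0'
EXEC (@str)

-- Afterwards, rebuild all indexes on the trimmed table (SQL 2000):
EXEC ('DBCC DBREINDEX (''' + @ProcessTable + ''')')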

Thanks

mipo

View 2 Replies View Related

SQL 2012 :: Index Maintenance For Large Tables?

Mar 8, 2014

We have very big tables (in the TBs) and want to set up a strategy for index maintenance.
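A common starting point on SQL 2012 is to key the action off sys.dm_db_index_physical_stats: reorganize the mildly fragmented, rebuild the heavily fragmented. A sketch of the decision query (the 5/30 percent thresholds are the usual rules of thumb, not gospel; for TB-sized tables, ONLINE = ON rebuilds in Enterprise Edition keep the table available):

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent,
       CASE WHEN ips.avg_fragmentation_in_percent > 30 THEN 'ALTER INDEX ... REBUILD'
            WHEN ips.avg_fragmentation_in_percent > 5  THEN 'ALTER INDEX ... REORGANIZE'
            ELSE 'leave alone' END AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.page_count > 1000        -- ignore tiny indexes
ORDER BY ips.avg_fragmentation_in_percent DESC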

View 3 Replies View Related

Rebuilding Large Database Tables And Indexes

Jul 7, 2015

I have come across a database system which isn't designed to work optimally. It is fairly large (~400GB) and performance of loading and querying is degrading (improper data types, fragmented indexes, non unique clustering key and other problems). So, I have quite a task in front of me, but I am up for the challenge. I figure this is not a unique situation, many of us would have come across this before. I have done this before too, but only for smaller databases, some of the operations here I expect to take a couple of hours or more to complete (depending on load/infrastructure speed etc, I know).

My plan is thus:

+ Take a full backup of the database
+ Set the recovery model of the DB to simple
+ Drop non clustered indexes
+ Drop clustered indexes
+ Remove PKs (wrong data types, too large!)
+ Narrow data types (add new column, update column in batches to old value, rename new column to old column)
+ Add PKs, which will create clustered indexes automatically based on PK ID
+ Create non clustered indexes
+ Run a DBCC SHRINKDATABASE (in normal operations I would never do this, but this is a special case; ensure the log file is truncated to a logical size, especially after all those table modifications...)
+ Set the recovery model of the DB to Full
+ Ensure everything works OK or better

View 9 Replies View Related

SQL Server 2008 :: Large Tables In OLTP

Jul 14, 2015

How many records does a table need before it is considered a large table?

We are getting more deadlocks. We are using the default isolation level, and read and insert statements are blocking each other, causing the deadlocks.

I am thinking that purging might reduce the deadlocks.

The table has 15 million records. Is this considered a large table or not in OLTP systems?

In general, how many records before we should consider a table large?
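Deadlock frequency depends less on raw row count than on how readers and writers collide. If the deadlocks are between reads and inserts under the default READ COMMITTED level, one option worth testing on SQL 2005/2008 is row versioning, which stops readers from taking shared locks at all (at the cost of tempdb version-store load). The database name below is a placeholder:

ALTER DATABASE MyDatabase
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE   -- rolls back open transactions so the change can apply
-- Existing READ COMMITTED queries now read the last committed row version
-- instead of blocking on (or deadlocking with) concurrent inserts and updates.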

View 1 Replies View Related

Exceedingly Long Update On Large Tables - Why?

Mar 28, 2006

We have a simple UPDATE query, joining two tables, that takes much longer than 10 hours to run, but if we break the table into six pieces (10 million rows in each), each part takes only fifteen minutes to run.

Why? And how can we tell in advance whether a query will cross the threshold into l.o.n.g.r.u.n.n.i.n.g query? Or, how can we prevent it?

The system is Windows XP Pro with 4GB RAM (/3GB switch), and SQL Server Standard 2005. Log files, swap files, dbf files are on separate drives. The system is dedicated to SQL Server. No other queries are running at the same time. The database is in Simple logging mode. Each table is a few GB with 60 million rows.

An example problem query is: (updating fewer than 10 bytes)
UPDATE bigtable
SET bigtable.custage = scores.custage, bigtable.custscore = scores.custscore
FROM bigtable
JOIN scores ON bigtable.custid = scores.custid

In this case, each table has 60 million rows. 'custid' is a sequential, unique integer. SCORES table is clustered on 'custid' and is 1.5GB in size. BIGTABLE has an index on 'custid', and is 6GB in size. There is a one-to-one match between the tables on 'custid', but not enforced. The SCORES table was created by exporting a few fields (but all 60 million records) from BIGTABLE, updating the values in a separate program, then importing back in SQL Server into the SCORES table.

The first time this query was run, we stopped it after it ran 16 hours. When we broke up the bigtable into 10 million record chunks (big1, big2, big3..., big6) each update only took 15 minutes, for 90 minutes total.

* How can in we tell in advance that the full chunk would take more than a few hours?
* Why is it taking SO MUCH LONGER than in smaller chunks?
* When a query is taking that long to run, is there any way to tell where in the plan it is?
* What should we do differently?
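One practical answer to the last question: since custid is a sequential integer, the manual six-way split can be automated as a loop over key ranges, so each UPDATE stays small enough to behave like the fast chunks did, and each batch commits (and releases log space) separately. A sketch:

DECLARE @lo int, @hi int, @step int
SELECT @lo = MIN(custid), @hi = MAX(custid) FROM bigtable
SET @step = 10000000              -- 10 million rows per chunk, as in the manual test

WHILE @lo <= @hi
BEGIN
    UPDATE b
    SET b.custage = s.custage,
        b.custscore = s.custscore
    FROM bigtable AS b
    JOIN scores AS s ON b.custid = s.custid
    WHERE b.custid >= @lo AND b.custid < @lo + @step

    SET @lo = @lo + @step         -- move to the next key range
END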

Thanks for any help; this is a real head scratcher for us.

View 9 Replies View Related

TableDiff Out Of Memory Exception On Large Tables.

Sep 20, 2007

Hello,
I hope I am posting this in the right forum.

I am using tableDiff.exe to create a diff SQL script for a very large table (~4 million rows).


After a few minutes, I receive a "System.OutOfMemoryException".

I have 4GB of ram on the machine executing the table diff.
The server is 32-bit, so adding ram is not an option.

I am executing the following command line:

Code Snippet

TableDiff.exe -sourceserver "SERVER" -sourcedatabase "SourceDB" -sourcetable "Table1" -destinationserver "SERVER" -destinationdatabase "DestDB" -destinationtable "Table1" -f "C:\TableDiffs\Table1"

I have seen reports of other users executing tableDiff against 2 million row tables.

Is there anyway to buffer tableDiff, so that I do not run out of memory on the server?

Could anything else be causing this error?

Thanks,
Dave

View 3 Replies View Related

Large FullText Tables - Slow Queries

May 31, 2007

Hi,


I currently have a large table (35 million rows, over 80GB). I have one varchar(max) column on the table that is used in the fulltext index.

Querying the complete index is fast, for example:

SELECT 'ipod', COUNT(*)
FROM CONTAINSTABLE(MyDB.dbo.Contents, [Body], 'ipod') CT

This took 70 seconds (which I can live with). However, I seldom run queries like this; most are more like:

SELECT 'ipod', COUNT(*)
FROM CONTAINSTABLE(MyDB.dbo.Contents, [Body], 'ipod') CT
JOIN Pages ITP ON ITP.PageID = CT.[Key]
JOIN Feeds ITF ON ITP.IPID = ITF.IPID
JOIN Buyers ITB ON ITB.IBID = ITF.IBID
WHERE ITB.ID IN (1342,246)

These queries are much slower (this example took 17 minutes). I understand that FT searches the index and returns all rows that match the query to SQL; SQL then performs the joins and counts only the correct results. (Correct me if I'm wrong here.)

One solution I've seen to this is to put data or "tags" into the FT column - so my Body column would become something like:

'{ID:1342}' + [Body]

That sounds like a very good idea. I could then change the 2nd query above to be:

SELECT 'ipod', COUNT(*)
FROM CONTAINSTABLE(MyDB.dbo.Contents, [Body], '("ID:1342" OR "ID:246") AND "ipod"') CT

That all works well until I want to select 1000 different IDs, because the FT query becomes very long and complex. Also, I'm only including one column (ID) in this example - but I have about 7 or 8 columns that I would need to include in these "tags". Querying multiple columns becomes very complex quickly, and no doubt I will reach a query limit at some point.

If anyone has any other suggestions to the above I'd love to hear them. Another thought I'm having is to partition the table. I can find very little online about how FT behaves on partitioned tables - I fear it behaves exactly the same. What I'd like to think is that I could partition the table on an ID, say 100 per partition or something, and then fulltext would only search the relevant partitions. If it behaves like this it may work. If no-one knows then I'll give it a go, but this will take me a while due to the table size - so I'm hoping one of you clever lot knows!

Many thanks for any advice.

Simon


View 2 Replies View Related

Design Of Tables With Large Optional Fields?

Jan 4, 2006

I have a general SQL design-type question.

I want to log errors to a table. If the error is with a URL, I want to store the URL. These URLs can be very large, hundreds of characters, but I only need to store it if it causes the error, which should be very infrequent. Which is the better design:

1. Create a large varchar field in the log table to hold the URL, or null if the error wasn't with the URL.
2. Create a foreign key field in the log table to a second URL table, which has a unique ID and a large varchar, and only create a record in this table if the error is with the URL.

One concern I have with design 2 is that there could be many other fields that are infrequent. Do I create a separate table for every one?
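For what it's worth, a sketch of design 1 in SQL 2005 types: a nullable varchar(max) column costs essentially nothing in rows where it is NULL, which is one reason infrequent wide fields don't each have to be pushed into their own side table (all names here are invented):

CREATE TABLE dbo.ErrorLog (
    ErrorID  int IDENTITY(1,1) PRIMARY KEY,
    LoggedAt datetime NOT NULL DEFAULT GETDATE(),
    Message  varchar(500) NOT NULL,
    Url      varchar(max) NULL    -- populated only when the error involves a URL
)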

Richard

View 3 Replies View Related
