Efficiency: 40 Million Records Script

Oct 12, 2007

Hi all,


I have a SQL script that updates records in a table with 40 million records.

Some functionality in the script could be factored out into functions for code reuse and elegance.

However, functions would add execution overhead.

What else could I use besides functions that would give me the code reuse without that overhead? Is there anything like includes in T-SQL that would allow me to do so?

TIA..
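One option, if the script is run through sqlcmd (or SSMS with SQLCMD Mode enabled), is the :r directive, which pulls shared script files in at parse time with no runtime cost; it is the closest thing T-SQL has to an include. A minimal sketch with hypothetical file paths:

-- SQLCMD mode only (sqlcmd.exe, or SSMS with "SQLCMD Mode" turned on).
-- File paths below are made up for illustration.
:r C:\scripts\shared_helpers.sql
:r C:\scripts\update_40m_batch.sql

Another route worth considering is keeping reusable logic in views or inline table-valued functions, which the optimizer expands into the calling query, unlike scalar UDFs, which pay a per-row call cost.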

View 4 Replies



DB Engine :: Deleting 1 Million Records From Transaction Table Of 10 Million Data On 24/7 Environment

Jun 12, 2015

I have a requirement to delete 1 million records from a table holding 10 million rows, and the table is queried on a 24/7 basis (we don't have a downtime window). How can I achieve that?
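A common approach when there is no downtime window is deleting in small batches, so each transaction stays short and lock escalation is avoided. A minimal sketch, assuming the rows to remove are identified by a hypothetical purge_flag column:

DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    -- short transaction per batch; readers queue briefly instead of blocking for hours
    DELETE TOP (5000)
    FROM dbo.TransactionTable          -- hypothetical table name
    WHERE purge_flag = 1;              -- hypothetical predicate identifying the 1M rows
    SET @rows = @@ROWCOUNT;

    WAITFOR DELAY '00:00:01';          -- gives concurrent queries a gap between batches
END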

View 13 Replies View Related

SQL INSERT 1.6 Million Records

Jan 27, 2006

I am currently working on a simple page to insert 1.6 million UK postcode records into a SQL Server table. The table has three columns for the postcode, longitude coordinate and latitude coordinate. The data is sourced from a pipe (|) delimited txt file and inserted into the database using a FOR loop. The problem I have is that the page hangs after inserting only 10,000 records, displaying either an invalid ViewState error or a "page cannot be found" error.
Now, I assume the ViewState error stems from the fact that there is a form on the page, which simply contains a button to execute the script and a few labels to show the progress. But without the form and associated ViewState, the insert still fails to complete... any ideas? Would I be better off running this on a thread, or should I just do it in stages and be patient? I have now modified the page to read the database on load and pick up from where it crashed.
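Rather than looping INSERTs from a page (which ties the load to the page lifecycle and its timeouts), a set-based bulk load is usually the answer. A hedged sketch using BULK INSERT; the table name and file path are assumptions:

BULK INSERT dbo.Postcode               -- hypothetical destination table
FROM 'C:\data\postcodes.txt'           -- hypothetical source path
WITH (
    FIELDTERMINATOR = '|',             -- pipe-delimited source
    ROWTERMINATOR = '\n',
    BATCHSIZE = 50000,                 -- commit every 50k rows
    TABLOCK                            -- enables minimal logging where the recovery model allows
);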

View 2 Replies View Related

Updating 4 Million Records

Aug 30, 2006

Meg writes "Hi,

I have a table that has 4+ million records. I need to update those records. I am facing some performance issues. Can someone please advise?

UPDATE stage
SET batch_status = 1
WHERE update_status = 0

UPDATE [transaction]
SET aId = s.aId,
    b = s.b
FROM stage s
WHERE s.aId = [transaction].aId
  AND s.batch_status = 1

UPDATE stage
SET update_status = 1,
    batch_status = 2
WHERE batch_status = 1

When I run the above query with "set rowcount 1000", it runs in one minute. When I run the query for "set rowcount 10000", it runs in 1 hour 56 minutes. Can someone help me to optimize it?

Thanks.
Meg"
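The jump from one minute at ROWCOUNT 1000 to nearly two hours at 10000 usually means the updates are scanning rather than seeking. Indexes on the driving columns are the first thing to check; a hedged sketch, with column choices inferred from the statements above (INCLUDE needs SQL Server 2005 or later; on 2000, a plain index on batch_status still helps):

CREATE INDEX IX_stage_update_status ON stage (update_status);
CREATE INDEX IX_stage_batch_status  ON stage (batch_status) INCLUDE (aId);
CREATE INDEX IX_transaction_aId     ON [transaction] (aId);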

View 4 Replies View Related

56 Million Records Search

Jul 20, 2005

Hey folks... So I have a table that looks like this:

CREATE TABLE [tblStation] (
    [CAMPAIGN] [varchar] (8),
    [LISTNUM] [varchar] (10),
    [PHONE] [varchar] (10),
    [EVENTTIME] [datetime],
    [STATION] [int],
    [OPERATOR] [varchar] (16),
    [EVENTCODE] [varchar],
    [CALLSPAN] [decimal](18, 0),
    [FDISP] [int],
    [RECORDNUM] [varchar],
    [STC] [varchar],
    [PROMOC] [varchar],
    [EXP_CAMP] [varchar],
    [PROMO3] [varchar],
    [MAXATT] [char],
    [LISTNAME] [varchar],
    [SITENAME] [char],
    [Row_id] [int] IDENTITY
)

It's taking nine seconds to run the following command:

SELECT count([fdisp])
FROM [TrunkFiles_new].[dbo].[tblStation] WITH (NOLOCK)
WHERE fdisp IS NULL

Anyone familiar with a table of this size having performance like this? The [fdisp] column has a non-clustered index on it.

Thanks in advance...
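Two observations, hedged since the full workload isn't shown. First, COUNT([fdisp]) ignores NULLs, so combined with WHERE fdisp IS NULL it always returns zero; COUNT(*) is presumably what's intended. Second, on SQL Server 2008 or later a filtered index keeps just the NULL rows, so the count becomes a scan of a tiny structure:

-- SQL Server 2008+ only; earlier versions would need an indexed view instead
CREATE NONCLUSTERED INDEX IX_tblStation_fdisp_null
ON dbo.tblStation (fdisp)
WHERE fdisp IS NULL;

SELECT COUNT(*)
FROM dbo.tblStation WITH (NOLOCK)
WHERE fdisp IS NULL;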

View 1 Replies View Related

How To Improve The Efficiency When Search Data From More Than 1000000 Records?

Sep 10, 2007

Hi everyone,

My company has a website that uses MS SQL Server. One table has more than 1,000,000 records. When users search data from this table (such as records that contain the word "school" in the NewsTitle field), the server often hits deadlock errors. How can I improve this?

Thanks.

P.S. The table has these fields: NewsID, NewsTitle, NewsContent, NewsClickTimes, NewsInsertTime.
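If the deadlocks come from readers blocking against writers, row versioning often helps; a hedged sketch (the database name is assumed, and the switch needs a moment of exclusive access). Note also that a search like LIKE '%school%' cannot use an ordinary index, so a full-text index on NewsTitle is worth considering separately.

ALTER DATABASE NewsDB                   -- hypothetical database name
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;                -- rolls back in-flight transactions to apply the change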

View 14 Replies View Related

How Well SQL Server Can Support 300 Million Records...

Nov 16, 2001

How well can SQL Server support 300 million records?
Is anybody working on a big database like this? Can anyone give me some input? It's going to be a 60GB database.

View 1 Replies View Related

Indexing A Table With 80 Million Records

Mar 26, 2004

I have a directory database with approx. 80 million records. I am feeding the database with BULK INSERT. Indexing one of the fields took about 8 hrs. After indexing, when I run queries on the indexed field the response time is under 1 sec. However, if I run SELECT queries with LIKE on non-indexed fields, it takes more than 2 mins. So I decided to index 4 other fields in the database, and it looks like the indexing process is going to run for 2 days.
I am a novice in SQL database design and I am not sure if this is the best way to index the table. I am just using CREATE INDEX. Any suggestions / advice welcome.
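For bulk loads it is usually cheaper to load first and index afterwards: building the four indexes in one pass after the load beats maintaining them row by row during it. On SQL Server 2005 or later an existing index can be disabled during a reload and rebuilt afterwards (on 2000, drop and re-create it instead); index and table names below are hypothetical:

ALTER INDEX IX_directory_field2 ON dbo.directory DISABLE;

-- ... run the BULK INSERT here ...

ALTER INDEX IX_directory_field2 ON dbo.directory REBUILD;

Also note that LIKE with a leading wildcard ('%term%') cannot use those indexes at all; for substring search, full-text indexing is the usual route.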

View 5 Replies View Related

Fastest Way To Update 20 + Million Records

Mar 19, 2008

Hello,
What is the fastest way to update 20 million records in our database?
I have tried to do a simple update statement like this:
update trail_log with (tablockx, holdlock)
set trail_log.entry_by = users.user_identity
from users
where trail_log.entry_by = users.user_id

but it takes 10-plus hours to run since it cannot commit the transactions until the very end. So I was thinking that I need to commit in batches, say every 50K rows, but that is slow as well.
SET ROWCOUNT 50000
DECLARE @rc int
SET @rc = 50000
WHILE @rc = 50000
BEGIN
    BEGIN TRANSACTION
    UPDATE trail_log WITH (TABLOCKX, HOLDLOCK)
    SET trail_log.entry_by = users.user_identity
    FROM users
    WHERE trail_log.entry_by = users.user_id
      AND trail_log.entry_by NOT LIKE '%[0-9]%'
    SELECT @rc = @@ROWCOUNT
    -- commit each batch
    COMMIT
END
GO
I have let the above statement run for 1.5 hours and it only updated 450,000 rows. Any ideas?
Maybe I'm doing it wrong. Please help!
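SET ROWCOUNT for data modification is deprecated; on SQL Server 2005 and later the usual batching pattern is UPDATE TOP, and an index supporting the join column stops each batch from rescanning the whole table. A sketch mirroring the filter above (it assumes, as the original loop does, that updated values no longer match the filter, so the loop terminates); each batch auto-commits, keeping the log small:

DECLARE @rc int
SET @rc = 1
WHILE @rc > 0
BEGIN
    UPDATE TOP (50000) t
    SET t.entry_by = u.user_identity
    FROM dbo.trail_log AS t
    JOIN dbo.users AS u
        ON t.entry_by = u.user_id
    WHERE t.entry_by NOT LIKE '%[0-9]%'   -- same filter as the loop above

    SET @rc = @@ROWCOUNT                  -- loop ends when nothing is left to update
END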

View 1 Replies View Related

Free Text Search For 2 Million Records

Apr 23, 2007

Hi

I have a new client with an existing system that has just over 2 million business listings in one table. Each business listing is associated with one business category.

* Company Table (around 20 fields):

companyID
companyName
categoryID
state
postCode
etc.

* Category Table (5 fields)

categoryID
categoryName
etc.

We are using MSSQL 2005 Express Edition with Advanced Services

A free-text search needs to be performed on companyName and categoryName, limited by region (state and/or postcode).

1) What kind of response times should I expect for the free-text search? (I have not used free-text search before.)

2) How should I index companyName and categoryName so they are both used in a joined query? i.e., do I just configure the free-text search index on each field separately and it should work?

Any suggestions appreciated.

Best Regards

Kevan
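On question 2: full-text indexes are created one per table, each needing a single-column unique key index, and the two CONTAINS predicates can then be combined in a join. A rough sketch under those assumptions (catalog, key index names, and sample values are made up); with around 2 million rows, sub-second responses are typical, though it's worth testing the OR form against a UNION of two separate searches:

CREATE FULLTEXT CATALOG ftListings AS DEFAULT;

CREATE FULLTEXT INDEX ON dbo.Company (companyName)
KEY INDEX PK_Company ON ftListings;

CREATE FULLTEXT INDEX ON dbo.Category (categoryName)
KEY INDEX PK_Category ON ftListings;

-- search both names, narrowed by region
DECLARE @state varchar(10), @term varchar(100);
SET @state = 'NSW';          -- example values; supplied by the application
SET @term = 'plumbing';

SELECT c.companyID, c.companyName
FROM dbo.Company AS c
JOIN dbo.Category AS g ON g.categoryID = c.categoryID
WHERE c.state = @state
  AND (CONTAINS(c.companyName, @term) OR CONTAINS(g.categoryName, @term));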

View 2 Replies View Related

T-SQL (SS2K8) :: Compare Tables With More Than 4.9 Million Records?

Mar 18, 2014

I want to compare values from ONLY one column between 2 tables having more than 4.9 million records each. There is a difference of 4000 rows between the 2 tables.

SELECT ID From TABLE1 where ID not in (SELECT DISTINCT ID From TABLE2)

My above query took nearly 4.5 hours to run and I had to cancel it. Is there a better way to write the query? I just want to compare the ID column values and find those missing from TABLE2.
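NOT IN over a 4.9-million-row subquery often produces a poor plan (and silently returns nothing if TABLE2.ID ever contains a NULL). NOT EXISTS or EXCEPT is the usual rewrite, ideally with an index on ID in both tables:

SELECT t1.ID
FROM TABLE1 AS t1
WHERE NOT EXISTS (SELECT 1 FROM TABLE2 AS t2 WHERE t2.ID = t1.ID);

-- or, equivalently for this purpose:
SELECT ID FROM TABLE1
EXCEPT
SELECT ID FROM TABLE2;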

View 7 Replies View Related

SQL 2012 :: 1.5 Million Records Into Temp Table

Sep 23, 2014

I come from a web-based world where loading 1.5 million records into a temp table is suicide. I'm doing more data warehouse stuff now, and I was looking into optimizing a buddy's proc when I noticed he was loading 1.5 million records into a temp table. We had a discussion about it because, being from a web world, I was drastically against it. He, on the other hand, didn't feel it was an issue since it only gets called once, maybe twice, a day. The tempdb is set to autogrow and is on a different drive than all the other databases on the box. It has one ldf and one mdf. He's creating an index on the table after the load. Why shouldn't we be loading 1.5 million records into a temp table?

View 5 Replies View Related

Join 2 Tables With More Then Million Records With 2 Parameters

Apr 8, 2008

Hi
I have 2 tables with more than a million records in each, and I have to perform a full outer join.
The problem is that the join clause contains 2 different parameters (int and string), like this:

Select *
From a full outer join b
On a.cli = b.cli OR a.reference = b.reference

Because of the OR in the clause and the million-plus records, the query effectively never finishes. If I change it to one rule only, it works fine.

How can I join these 2 big tables with 2 rules?
Thanks
Itay
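An OR in a join predicate usually defeats hash and merge join strategies and forces scan-heavy nested loops. One common workaround is splitting the OR into two equijoins and combining the results. The sketch below covers the matched rows (UNION, not UNION ALL, removes rows matched by both keys); the unmatched rows a FULL OUTER JOIN would add can be appended with two NOT EXISTS queries against both keys, so treat this as a starting point:

SELECT a.cli AS a_cli, a.reference AS a_ref,
       b.cli AS b_cli, b.reference AS b_ref
FROM a
JOIN b ON a.cli = b.cli
UNION
SELECT a.cli, a.reference,
       b.cli, b.reference
FROM a
JOIN b ON a.reference = b.reference;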

View 2 Replies View Related

Transact SQL :: Updating A Table With 45 Million Records

Jul 21, 2015

I am trying to update a large table which consists of 45 million records. The update is taking more than 2 days. Below is my approach:

1. The table has only one clustered index and no other indexes on the table.
2. I am updating in batches, say 20,000 records at a time.
3. I changed the recovery model to bulk-logged, the auto-growth size is set to 300MB, and there is enough space on my disk for the transaction log.

But still the query is running slowly.
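One thing that makes big batched updates crawl is carving batches with TOP and no supporting predicate, so every batch rescans from the start of the table. Since the table has a clustered index, walking its key in fixed ranges keeps every batch a seek. A sketch, assuming an integer clustered key named id (hypothetical), with the real SET clause substituted in; note that under bulk-logged recovery the log still needs periodic log backups during the run, or it simply grows:

DECLARE @from int = 0, @step int = 20000, @max int;
SELECT @max = MAX(id) FROM dbo.BigTable;          -- hypothetical table / clustered key

WHILE @from < @max
BEGIN
    UPDATE dbo.BigTable
    SET some_column = 'new value'                 -- placeholder for the real change
    WHERE id > @from AND id <= @from + @step;     -- clustered key range: a seek, not a scan

    SET @from = @from + @step;
END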

View 10 Replies View Related

Fuzzy Grouping: Any Success With > 3 Million Records?

May 18, 2006

I have tried to process more than 3 million fuzzy grouping records on two different servers with no success. 3 million works, but anything above 4 million doesn't. Some background:

We are trying to de-dup our customer table on: name (0.5 minimum similarity), address1 (0.5 minimum similarity), city (0.5 minimum similarity), state (exact match), with a 0.8 overall record minimum score.
Output includes additional fields: customerid, sourceid, address2, country, phonenumber
Without SP1 installed I couldn't even get a few hundred thousand records to process
Two different servers - same problems. Note that SSIS and SQL Server are running locally on both
The higher end server has 4GB RAM, the other 2.5 GB RAM. Plenty of free disk space on both
SQL Server is configured to use 2 GB of RAM max
The page file is currently at 15GB

After running a number of tests on both servers trying different batch sizes etc., the one thing I noticed is that it seems to always error out when SSIS takes over and starts chewing up all the available RAM. This happens after the index is created and SSIS starts "warming caches". On both servers SQL Server uses up about 1.6GB of RAM at this point, while SSIS keeps taking over RAM until all physical RAM is used up.

Some questions:

Has anyone been able to process more than 3 million records, and if so, what is your hardware configuration?
Should we try running SSIS from a different server so it has access to the full amount of physical RAM? (so it doesn't have to fight for RAM with SQL Server)
Should we install Win 2003 Enterprise Server so we can add more RAM?
Any ideas why switching to the page file might be causing errors?

Thanks!!

Keith Doyle

View 17 Replies View Related

SQL Server 2012 :: Updating 25 Million Records In Batches

Nov 10, 2014

I have 2 tables with this schema

CREATE TABLE tableValues(
[LASTENCRYPTIONDT] [datetime] NULL,
[ENCRYPTIONID] [int] NULL,
[NAME] [varchar](50) NULL

[Code] ....

I want to update tableToUpdate in batches of 5000 per batch and set the LASTENCRYPTIONDT to null based on the join to tableValues using the column ENCRYPTIONID, and also output the updated rows into another table, in case I need to do a rollback.
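The OUTPUT clause captures the old values in the same statement as the update, which covers the rollback requirement, and TOP gives the 5000-row batching. A sketch; tableToUpdate's columns and the audit table are assumptions based on the description:

DECLARE @rc int = 1;
WHILE @rc > 0
BEGIN
    UPDATE TOP (5000) t
    SET t.LASTENCRYPTIONDT = NULL
    OUTPUT deleted.ENCRYPTIONID, deleted.LASTENCRYPTIONDT
        INTO dbo.tableToUpdate_audit (ENCRYPTIONID, LASTENCRYPTIONDT)  -- hypothetical audit table
    FROM dbo.tableToUpdate AS t
    JOIN dbo.tableValues AS v
        ON v.ENCRYPTIONID = t.ENCRYPTIONID
    WHERE t.LASTENCRYPTIONDT IS NOT NULL;    -- processed rows drop out, so the loop terminates

    SET @rc = @@ROWCOUNT;
END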

View 3 Replies View Related

SQL Server 2008 :: Data Fetching 80 Million Records?

Mar 24, 2015

I have the table below:

CREATE TABLE [dbo].[DR_Test](
[source_item_id] [int] NOT NULL,
[source_line_no] [int] NULL,
[buyer_id] [int] NOT NULL,
[seller_member_id] [int] NULL,

[code]...

The table contains more than 80 million records, so when I fetch the data using buyer_id and timezone it takes more than an hour or so, and buyer_id is not unique. How can I fetch the data faster, or do I need to change the structure of the table?
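If the query filters on buyer_id and a timezone column and selects only a handful of columns, a covering nonclustered index usually turns the fetch into a range seek even though buyer_id is not unique. A sketch; the timezone column name and the INCLUDE list are assumptions, since the full table definition isn't shown:

CREATE NONCLUSTERED INDEX IX_DR_Test_buyer_timezone
ON dbo.DR_Test (buyer_id, timezone)                          -- timezone column name assumed
INCLUDE (source_item_id, source_line_no, seller_member_id);  -- cover the selected columns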

View 3 Replies View Related

Need Suggestion On Loading A 50 Million Records Table From Oracle

Feb 16, 2006

All,

I need to load a 50-million-record table monthly. Any suggestions about the best/fastest way to do it?

Thanks a lot

View 2 Replies View Related

Checking To See If A Records Exists Before Inserting - 3 Million + Rows

Aug 21, 2007

I have 1+ CSV files (using a foreach loop) which I'm doing a lot of transform work on and then inserting into a SQL database table.
Each CSV file usually contains about 2 days worth of data (contains date stamps) - somewhere in the region of 60k records per day.
The destination table currently contains 3 million+ rows and will get bigger.
I need to make sure that before inserting into the destination table, the data doesn't already exist.

I've read the following article: http://www.sqlis.com/311.aspx
While the lookup method works, it takes ages and eats up memory as it caches the 3m+ records before running for each CSV. Obviously this will only get worse as the table grows in size.

To make things a little more efficient, what I'd like to do is first derive the dates I'm dealing with in the current file - essentially storing the max(date) and min(date) in variables. Then, in the lookup SQL, use those vars to reduce the amount of data that needs to be brought into the transformation to check against before inserting into the destination table.
Lookup SQL eg. SELECT * FROM MyTable WHERE Date BETWEEN varMinDate AND varMaxDate.

Ideally I'd use an aggregate transformation and then use the subsequent output from that either in the lookup query or store the output in vars, but I don't think you can do that and I get the feeling I'm approaching this with the wrong mindset.

Any thoughts would be great!

View 6 Replies View Related

T-SQL (SS2K8) :: Table With 3 Million Plus Records Taking Half A Minute?

Aug 6, 2015

I have a table on which I need to do some computations across all the data, but first I need to remove the duplicate records and insert the results into a destination table. Here's the example below. My table has 3.1 million rows. I have tried using DISTINCT and GROUP BY, but both ways of selecting the data take about half a minute to run. I'm wondering if there is a way to increase performance. Users are OK with this time since the process runs overnight, but improving it won't hurt. I do have a clustered index on these fields, but that doesn't seem to improve anything.

SELECT DateYear,
       DateMonth,
       Nbr,
       Nbr1,
       Nbr2,
       Datafield1,
       Datafield2,

[code].....
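One alternative to DISTINCT/GROUP BY for the dedup step is ROW_NUMBER, which keeps exactly one row per key; paired with a clustered index ordered on the partitioning columns, it can avoid a full sort. A sketch using the columns visible above (the source and destination table names are assumptions):

WITH ranked AS (
    SELECT DateYear, DateMonth, Nbr, Nbr1, Nbr2, Datafield1, Datafield2,
           ROW_NUMBER() OVER (
               PARTITION BY DateYear, DateMonth, Nbr, Nbr1, Nbr2
               ORDER BY Datafield1                  -- any deterministic tiebreaker
           ) AS rn
    FROM dbo.SourceTable                            -- hypothetical name
)
INSERT INTO dbo.DestinationTable                    -- hypothetical name
    (DateYear, DateMonth, Nbr, Nbr1, Nbr2, Datafield1, Datafield2)
SELECT DateYear, DateMonth, Nbr, Nbr1, Nbr2, Datafield1, Datafield2
FROM ranked
WHERE rn = 1;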

View 7 Replies View Related

SQL 2012 :: Snapshot Getting Corrupted After Insert Update Few Million Records Into A Table

Mar 12, 2015

We are facing a weird scenario in which the snapshot is getting corrupted after inserting/updating a few million records into a table.

SQL Server 2012
windows server 2008 R2
service pack 1
64-bit OS

View 1 Replies View Related

SQL Server 2008 :: Setting To Not Rollback A Failed SSIS Package That Inserts 100 Million Records?

May 20, 2015

I have a pretty simple SSIS package that fast loads a 100 million record table into a SQL Server 2008 table on a daily basis. This normally runs fine and completes in about 1 hour. As this is perhaps one of our largest running SSIS packages, about once every 2-3 weeks this SSIS package will fail or drop its connection. Once it fails, the large number of records will start rolling back. This rollback process can take 1+ hours, so I cannot even restart the failed SSIS package immediately. This is a problem.

I am looking for a solution or option so I do not have to wait on that rollback to restart this particular, long-running SSIS package. Is there an option/setting to leave the partial data set committed and not roll back? Then I could just restart the SSIS package immediately, or set the SSIS package to auto-restart once on failure. The first step in the SSIS package does a truncate of the destination table.

View 2 Replies View Related

About Efficiency

Sep 20, 2006

I want to select one field from a table, but it should meet some conditions that refer to 5 tables, such as A.FIELD1 = B.FIELD1 AND B.FIELD2 = C.FIELD3 AND ... Should I use the form "select sum(a.amount) from a, b, c, ... where a.field1 = b.field1 and b.field2 = c.field2 and ...", or nested derived tables like "select sum(a.amount) from (select b.field1 from (select c.field2 from ...))"? Which case is more efficient? Thanks!

View 2 Replies View Related

Search Efficiency

Jan 29, 2007

Hello,

I am looking at optimizing site searching on a web application. I have two thoughts on the idea:

1. Create views with full-text indexes combining records from multiple tables.
2. Create a table with an XML column and primary XML index.

I understand the XML column type has the overhead of a BLOB under the hood, but that a primary XML index can "shred" the contents and improve parsing. I also read that the XML column is actually searched as a tree, providing some variant of log(n) run time.

Does anyone know of good literature on this subject? The more big-O notation, runtime-analysis types of posts the better.

Thanks

View 5 Replies View Related

SQLClient Efficiency

Jul 24, 2007

Hi guys,
Since the project I'm developing is growing rapidly, the pages seem to be getting slower every time you view them. I would like to ask whether the code below would be efficient enough for several simultaneous requests for data; if you have any other suggestions, you are welcome to add them:
Public Shared Function QueryDatabase(ByVal sql As String) As DataTable

    ' SQL Server Connection Object Variable
    Dim _oConnection As SqlConnection
    ' SQL Server Command Object Variable
    Dim _oCommand As SqlCommand
    ' SQL Server Data Adapter Object Variable
    Dim _oAdapter As SqlDataAdapter
    ' DataTable Object Variable (Early Binding)
    Dim _oDataTable As New DataTable

    ' Instantiate Connection Object with connection string
    _oConnection = New SqlConnection("Data Source=XXX.XXX.XXX.XXX;Initial Catalog=XXXXXX;User=XXX;Pwd=XXX;")
    ' Instantiate Command Object with SQL String and Connection Object
    _oCommand = New SqlCommand(sql, _oConnection)
    ' Instantiate Data Adapter Object with Command Object
    _oAdapter = New SqlDataAdapter(_oCommand)
    ' Fill the DataTable Object with the retrieved records
    _oAdapter.Fill(_oDataTable)

    ' Release resources used by DataAdapter Object
    _oAdapter.Dispose()
    ' Release resources used by Command Object
    _oCommand.Dispose()
    ' Close the connection of the Connection Object from SQL Server
    _oConnection.Close()
    ' Release resources used by Connection Object
    _oConnection.Dispose()

    ' Return the retrieved records
    Return _oDataTable

End Function

Thanks a lot.

View 2 Replies View Related

SQL Efficiency 3 QUESTIONS

Nov 13, 2005

Hey,

I am developing a website which will be used by a large number of people, so I am concerned about efficiency. Sorry for the three posts, but anyone with any info would be appreciated.

The database has the following tables, linked by one-to-many relationships: FACILITY, MEETING, USERS, MEETING_INVITE, REMINDER, and CONTACTS.

When the user logs in, I use their username to access the rest of the tables. I get all of the user's information out of the database in one go and store it in a DataSet. So when a user accesses their meetings page, I pass the DataSet to that page with a server transfer.

Question 1 > Is it more efficient to open the database once, read all the information, and pass it between pages, or is it more efficient to access the database on the individual pages and avoid passing the information around?

In order to access the information I use 6 SELECT statements in a row. Here is an example of my select statements:

SELECT * FROM USERS WHERE email = textbox_email
SELECT FACILITY.* FROM FACILITY, USERS WHERE FACILITY.email = USERS.email AND USERS.email = textbox_email

By the time I get to the REMINDER table I am combining all the tables and my query is eight lines long.

Question 2 > Is there a way of combining the results of a previous select to access information?

Question 3 > What do you think of my table design? The relationships are one-to-many. If you can give me any tips on databases, please do.

Thanks for your time,
Padraic

View 2 Replies View Related

Database Efficiency

Nov 21, 2005

Hello all,

I am developing a website which may be used by a large number of people in the future, and I am concerned about performance.

Is it better to have one table with 50, 000 rows or 5,000 tables with 10 rows each?
Is there a way to divide a table in two if the table reaches a certain size?
Is there a limit on the size of tables?
Is there a limit on the number of tables?
Is it possible to create tables from vb.net?
Is it possible to program checks into sql server? For example, could I delete data that has passed a certain date or send an automated email when a time is reached?
Thanks for your time,
Padraic

View 2 Replies View Related

SQL Efficiency Problem

Sep 7, 2000

Hey people

I'd be really grateful if someone can help me with this. Could someone explain the following:
If the following code is executed, it runs instantly:

declare @SellItemID numeric (8,0)
select @SellItemID = 5296979

SELECT distinct s.sell_itm_id
FROM stor_sell_itm s
WHERE (s.sell_itm_id = @SellItemID )

However, if I use this WHERE clause instead -

WHERE (@SellItemID = 0 OR s.sell_itm_id = @SellItemID)

- it takes 70 microseconds. When I join a few more tables into the statement, the difference is 4 seconds!

This is an example of a technique I'm using in loads of places - I only want the statement to return all records if the filter is zero, otherwise the matching record only. I think that by checking the value of the variable in the WHERE clause, a table scan is used instead of an index. This seems nonsensical since the variable is effectively a constant. Wrapping the entire select statement with an IF or CASE works, but with 10 filters I'd have to write 100 select statements.
I DON'T GET IT!! There must be a simple answer, HELP!!
Jo

PS this problem seems to occur both in 6.5 and 7.0

View 1 Replies View Related

ADO Update Efficiency

Aug 31, 2004

Hi All,

I tried my luck in the Access forum and I've search the web and MSDN for an answer with little luck.

Simply, is it better to update a table via an UPDATE query or Recordset manipulation?

I have read that if you were to update 10,000 records an UPDATE query is more efficient (obviously), but does that carry down to, say, 1-10 updates?

i.e. There are six unique updates I want to make to 6 different rows. Should I code the backend VB to execute 6 different queries or seek and update a recordset?

It's a MS Access XP app with ADO 2.8.

My gut feeling on this is that making 6 update queries is more efficient, both with system resources and record-locking issues; I'd just like another opinion on the matter.

I appreciate your help!
Thanks,
Warren

View 2 Replies View Related

Cursor Efficiency?

Apr 8, 2008

I am using nested cursors in my script below, and I wonder if there is a more efficient way?


USE ar
GO
DECLARE @mortgage INT,
@mortgage_sequence int,
@getMortgage CURSOR,
@notes_1 varchar(MAX),
@notes_2 varchar(MAX),
@notes_3 varchar(MAX),
@notes_4 varchar(MAX),
@notes_5 varchar(MAX),
@notes_6 varchar(MAX),
@notes_7 varchar(MAX),
@notes_8 varchar(MAX),
@notes_9 varchar(MAX),
@notes_10 varchar(MAX),
@notes_11 varchar(MAX),
@notes_12 varchar(MAX),
@notesComplete varchar(MAX),
@addedUser varchar(255),
@addedDate varchar(255),
@amendedUser varchar(255),
@amendedDate varchar(255),
@sequence int,
@getDetail CURSOR


SET @getMortgage = CURSOR FOR
SELECT DISTINCT Mortgage_Number, Mortgage_Note_Sequence_No
FROM format_additional_notes
GROUP BY Mortgage_Number, Mortgage_Note_Sequence_No
ORDER BY Mortgage_Number ASC
OPEN @getMortgage
FETCH NEXT
FROM @getMortgage INTO @mortgage, @mortgage_sequence
WHILE @@FETCH_STATUS = 0
BEGIN

SET @getDetail = CURSOR FOR
SELECT ltrim(rtrim(Additional_Text_1)),
ltrim(rtrim(Additional_Text_2)),
ltrim(rtrim(Additional_Text_3)),
ltrim(rtrim(Additional_Text_4)),
ltrim(rtrim(Additional_Text_5)),
ltrim(rtrim(Additional_Text_6)),
ltrim(rtrim(Additional_Text_7)),
ltrim(rtrim(Additional_Text_8)),
ltrim(rtrim(Additional_Text_9)),
ltrim(rtrim(Additional_Text_10)),
ltrim(rtrim(Additional_Text_11)),
ltrim(rtrim(Additional_Text_12)),
Mortgage_Note_Sequence_No,
Extra_Added_by_User,
Extra_Added_on_Date,
Extra_Amended_By_User,
Extra_Amended_By_Date

FROM format_additional_notes
WHERE Mortgage_Number = @mortgage AND Mortgage_Note_Sequence_No = @mortgage_sequence
ORDER BY Mortgage_Note_Sequence_No
OPEN @getDetail
SET @notesComplete = ''
FETCH NEXT FROM @getDetail INTO @notes_1,
@notes_2,
@notes_3,
@notes_4,
@notes_5,
@notes_6,
@notes_7,
@notes_8,
@notes_9,
@notes_10,
@notes_11,
@notes_12,
@sequence,
@addedUser,
@addedDate,
@amendedUser,
@AmendedDate
WHILE (@@FETCH_STATUS = 0)
BEGIN
SET @notesComplete = @notesComplete +
ISNULL(@notes_1,'') + ' ' +
ISNULL(@notes_2,'') + ' ' +
ISNULL(@notes_3,'') + ' ' +
ISNULL(@notes_4,'') + ' ' +
ISNULL(@notes_5,'') + ' ' +
ISNULL(@notes_6,'') + ' ' +
ISNULL(@notes_7,'') + ' ' +
ISNULL(@notes_8,'') + ' ' +
ISNULL(@notes_9,'') + ' ' +
ISNULL(@notes_10,'') + ' ' +
ISNULL(@notes_11,'') + ' ' +
ISNULL(@notes_12,'')

FETCH NEXT FROM @getDetail INTO @notes_1,
@notes_2,
@notes_3,
@notes_4,
@notes_5,
@notes_6,
@notes_7,
@notes_8,
@notes_9,
@notes_10,
@notes_11,
@notes_12,
@sequence,
@addedUser,
@addedDate,
@amendedUser,
@AmendedDate
END


INSERT INTO format_additional_notes_1
(Mortgage_Number,
Mortgage_Note_Sequence_No,
Additional_Text,
Extra_Added_By_User,
Extra_Added_on_Date,
Extra_Amended_By_User,
Extra_Amended_By_Date)
VALUES
( @mortgage,
@sequence,
@notesComplete,
@addedUser,
@addedDate,
@amendedUser,
@amendedDate)

CLOSE @getDetail
DEALLOCATE @getDetail
FETCH NEXT
FROM @getMortgage INTO @mortgage, @mortgage_sequence
END
CLOSE @getMortgage
DEALLOCATE @getMortgage
GO
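For what it's worth, on SQL Server 2005 and later the nested-cursor loop can usually be replaced by one set-based INSERT that concatenates across rows with FOR XML PATH. A simplified sketch showing the first three note columns (repeat the pattern through Additional_Text_12, and carry the audit columns along with MAX() or similar); treat it as a starting point, not a drop-in replacement:

INSERT INTO format_additional_notes_1 (Mortgage_Number, Mortgage_Note_Sequence_No, Additional_Text)
SELECT n.Mortgage_Number,
       n.Mortgage_Note_Sequence_No,
       (SELECT LTRIM(RTRIM(ISNULL(x.Additional_Text_1, ''))) + ' ' +
               LTRIM(RTRIM(ISNULL(x.Additional_Text_2, ''))) + ' ' +
               LTRIM(RTRIM(ISNULL(x.Additional_Text_3, ''))) + ' '
               -- ...repeat for Additional_Text_4 through Additional_Text_12
        FROM format_additional_notes AS x
        WHERE x.Mortgage_Number = n.Mortgage_Number
          AND x.Mortgage_Note_Sequence_No = n.Mortgage_Note_Sequence_No
        FOR XML PATH(''), TYPE).value('.', 'varchar(max)')   -- concatenate across rows
FROM format_additional_notes AS n
GROUP BY n.Mortgage_Number, n.Mortgage_Note_Sequence_No;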

View 6 Replies View Related

Linking Efficiency

May 6, 2008

I would like to use MVJ's formula for creating a date table.

I would like to use it with our main ERP database. However, I am reluctant to make changes to it because I fear that at some point, when we upgrade that software and its database, the upgrade program will delete my table.

So, here is my question. Performance-wise, does it matter whether I add the date table to our ERP database or create another database (on the same server) for the custom date table? Does linking between databases take substantially longer than linking within the same database?

View 1 Replies View Related

Efficiency Of Views

Jun 12, 2008

Hi,

okay, so I'm refactoring some code at the moment. At the moment, I'm working on a search screen. This search screen lets the user enter a number of criteria; the code I'm working on drags data from a view and then programmatically filters it according to the search filters.

This is obviously inefficient and non-scalable, as the view drags out every entry and returns it to the data layer, which then filters it.

I'm wondering what the best way to refactor this is? I'm thinking the best way is to tell the db what to filter on, so it'll only drag out the right amount of data.

Therefore, should I keep the view? Is there any way of entering parameters into views, or am I going to need to change this into a stored proc?
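Views can't take parameters, but an inline table-valued function is effectively a parameterized view: the optimizer expands it into the calling query, so the filter is applied at the source instead of in the data layer. A minimal sketch with hypothetical names:

CREATE FUNCTION dbo.fn_Search (@categoryId int, @state varchar(10))
RETURNS TABLE
AS
RETURN
(
    SELECT itemID, itemName, categoryID, state     -- hypothetical columns
    FROM dbo.Items                                 -- hypothetical base table
    WHERE categoryID = @categoryId
      AND state = @state
);
GO

-- usage: only the matching rows ever leave the database
SELECT * FROM dbo.fn_Search(42, 'CA');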

View 2 Replies View Related

About Efficiency (Rephrased)

Sep 21, 2006

hi, All

Could you tell me which case is more efficient? (My tables have no indexes.) And is there any other case that is more efficient?

Case 1:

"select sum(Invoice_Production.Quantity) from Invoice_Production, (select [dat_Item].ItemCode from [dat_Item], (select [dat_MachineType].MachineTypeID from [dat_MachineType]" & subQuery & ") as T3 where [dat_Item].MachineTypeID = T3.machinetypeid) as T1, (select [Invoice].InvoiceNo from Invoice, (select [users].user_id from [users] where [Users].User_ID = '" & rs2(0) & "') as T4 where T4.User_ID = invoice.dealerno and Invoice.Cyear >= " & startYear & " and Invoice.Cyear <= " & endYear & " and Invoice.Cmonth >= " & startMonth & " and Invoice.Cmonth <= " & endMonth & ") as T2 where invoice_production.ItemCode = T1.ItemCode and T2.invoiceno = invoice_production.invoiceno"

Case 2:

"select sum(Invoice_Production.Quantity) from [Invoice_Production],[Invoice],[dat_MachineType],[dat_Item],[users] where [users].user_id = [invoice].DealerNo and [dat_Item].ItemCode = [Invoice_Production].ItemCode and [dat_Item].MachineTypeID = [dat_MachineType].MachineTypeID and [Invoice_Production].InvoiceNo = [Invoice].InvoiceNo and [Users].User_ID = '" & rs2(0) & "' and Invoice.Cyear
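For what it's worth, the flat join (case 2) and the nested derived tables (case 1) will usually optimize to the same plan; with no indexes, neither will be fast, so indexing the join columns matters more than the query shape. A sketch of the same query in explicit-JOIN form, with T-SQL variables standing in for the ASP string concatenation:

DECLARE @dealer varchar(20), @startYear int, @endYear int, @startMonth int, @endMonth int;
-- values come from the application (rs2(0), startYear, endYear, startMonth, endMonth)

SELECT SUM(ip.Quantity)
FROM Invoice_Production AS ip
JOIN dat_Item        AS i  ON i.ItemCode = ip.ItemCode
JOIN dat_MachineType AS mt ON mt.MachineTypeID = i.MachineTypeID
JOIN Invoice         AS v  ON v.InvoiceNo = ip.InvoiceNo
JOIN users           AS u  ON u.user_id = v.DealerNo
WHERE u.user_id = @dealer
  AND v.Cyear  BETWEEN @startYear  AND @endYear
  AND v.Cmonth BETWEEN @startMonth AND @endMonth;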



View 2 Replies View Related
