Most Efficient Way To Run Update Query
Jun 9, 2006
Hi all,
Any thoughts on the best way to run an update query to update a specific
list of records where all records get updated to the same thing? I would think
a temp table to hold the list would be best, but I am also looking for the
easiest approach for an end user to run. The list of items is over 7,000.
Example:
update imitmidx_sql set activity_cd = 'O', activity_dt = '20060601',
prod_cat = 'OBS' where item_no = '001-LBK'
update imitmidx_sql set activity_cd = 'O', activity_dt = '20060601',
prod_cat = 'OBS' where item_no = '001-LYE'
update imitmidx_sql set activity_cd = 'O', activity_dt = '20060601',
prod_cat = 'OBS' where item_no = '001-XLBK'
update imitmidx_sql set activity_cd = 'O', activity_dt = '20060601',
prod_cat = 'OBS' where item_no = '001-XLYE'
update imitmidx_sql set activity_cd = 'O', activity_dt = '20060601',
prod_cat = 'OBS' where item_no = '002-LGR'
update imitmidx_sql set activity_cd = 'O', activity_dt = '20060601',
prod_cat = 'OBS' where item_no = '002-LRE'
All records get set to the same values. I tried using an IN list, but this was
significantly slower:
update imitmidx_sql set activity_cd = 'O', activity_dt = '20060601',
prod_cat = 'OBS'
where item_no in
('001-LBK',
'001-LYE',
'001-XLBK',
'001-XLYE',
'002-LGR',
'002-LRE')
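A temp-table approach along these lines is one option (a sketch; imitmidx_sql and its columns are from the post, the #items table is hypothetical, and the list could equally be bulk-loaded from a file):
CREATE TABLE #items (item_no VARCHAR(30) PRIMARY KEY)
-- populate #items with the 7000+ item numbers
INSERT INTO #items (item_no) VALUES ('001-LBK')
INSERT INTO #items (item_no) VALUES ('001-LYE')
-- ... remaining items ...
UPDATE i
SET activity_cd = 'O', activity_dt = '20060601', prod_cat = 'OBS'
FROM imitmidx_sql i
INNER JOIN #items t ON t.item_no = i.item_no
With item_no indexed on both sides, one set-based update like this typically beats 7,000 separate statements, and an end user only has to supply the list.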
Thanks
View 3 Replies
Apr 1, 2008
What's the difference, if I use SqlDataReader at the code level, between making a query that retrieves 500 rows and 2 columns and making one that retrieves 2 rows and 500 columns?
View 6 Replies
Jun 12, 2008
This query is giving me a very slow search. What could be a more efficient way?
SELECT
    (SELECT COUNT(applicationID)
     FROM Vw_rptBranchOffice
     WHERE statusDate BETWEEN '2008-03-13 16:12:11.513' AND '2008-05-30 00:00:00.000'
       AND SearchString LIKE '%del%') AS totalNO,
    ApplicationID, SearchString, StudentName, IntakeID, CounslrStatusDate
FROM Vw_rptBranchOffice
WHERE statusDate BETWEEN '2008-03-13 16:12:11.513' AND '2008-05-30 00:00:00.000'
  AND SearchString LIKE '%del%'
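One rewrite worth trying (a sketch, assuming SQL Server 2005 or later) computes the total in the same pass with a window function, so the view is scanned once instead of twice; note that the leading wildcard in LIKE '%del%' will still rule out an index seek:
SELECT COUNT(*) OVER () AS totalNO,
       ApplicationID, SearchString, StudentName, IntakeID, CounslrStatusDate
FROM Vw_rptBranchOffice
WHERE statusDate BETWEEN '2008-03-13 16:12:11.513' AND '2008-05-30 00:00:00.000'
  AND SearchString LIKE '%del%'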
View 5 Replies
Dec 11, 2007
I have a table that has a date and time column. I need to search the table by day, and will eventually need to search by hour as well.
Will performance get better if I create two additional columns, one stating the day of week and the other the hour of day? These would be prepopulated with numeric values, i.e. 1 for Sunday, 2 for Monday, and so on up to 7 for Saturday; and for the time, 1 for 1:00-1:59 pm, 2 for 2:00-2:59 pm, 3 for 3:00-3:59 pm, etc.
The total number of rows in the table could reach half a million; filtered by day of week it may be reduced to around 80,000.
Is adding two numeric columns to the table, with indexes on those two fields, a good and efficient solution? Or is it better to just use the DATEPART function against the actual date column, with the day-of-week and hour parameters as the case may be?
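A sketch of the two-column approach (the EventLog table and its columns are hypothetical); note that DATEPART(dw, ...) depends on the server's DATEFIRST setting, so pin that down before backfilling:
ALTER TABLE EventLog ADD DayOfWeek tinyint NULL, HourOfDay tinyint NULL

UPDATE EventLog
SET DayOfWeek = DATEPART(dw, EventDate),   -- 1..7, mapping depends on DATEFIRST
    HourOfDay = DATEPART(hh, EventDate)    -- 0..23 clock hour, not the 1..12 pm scheme above

CREATE INDEX IX_EventLog_DayOfWeek ON EventLog (DayOfWeek)
CREATE INDEX IX_EventLog_HourOfDay ON EventLog (HourOfDay)
Whether the indexes pay off depends on selectivity: a day-of-week filter keeps roughly a seventh of the table, which is often not selective enough for the optimizer to prefer an index over a scan.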
Thanks for your help.
View 3 Replies
Mar 15, 2006
Please help me with an efficient JOIN query to produce the result below:
create table pk1(col1 int)
create table pk2(col1 int)
create table pk3(col1 int)
create table fk(col1 int, col2 int NOT NULL, col3 int, col4 int)
insert into pk1 values(1)
insert into pk1 values(2)
insert into pk1 values(3)
insert into pk2 values(1)
insert into pk2 values(2)
insert into pk2 values(3)
insert into pk3 values(1)
insert into pk3 values(2)
insert into pk3 values(3)
insert into fk values(1, 1, null, 10)
insert into fk values(null, 1, 1, 20)
insert into fk values(1, 1,null, 30)
insert into fk values(1, 1, null, 40)
insert into fk values(1, 1, 1, 70)
insert into fk values(2, 3, 1, 60)
insert into fk values(1, 1, 1, 100)
insert into fk values(2, 2, 3, 80)
insert into fk values(null, 1, 2, 50)
insert into fk values(null, 1, 4, 150)
insert into fk values(5, 1, 2, 250)
insert into fk values(6, 7, 8, 350)
insert into fk values(10, 1, null, 450)
The query below gives this result:
select fk.*
from fk
inner join pk1 on pk1.col1 = fk.col1
inner join pk2 on pk2.col1 = fk.col2
inner join pk3 on pk3.col1 = fk.col3
Result :
+------+------+------+------+
| col1 | col2 | col3 | col4 |
+------+------+------+------+
| 1 | 1 | 1 | 70 |
| 2 | 3 | 1 | 60 |
| 1 | 1 | 1 | 100 |
| 2 | 2 | 3 | 80 |
+------+------+------+------+
But I also require the NULL values in col1 and col3, hence I do the following:
select distinct fk.*
from fk
inner join pk1 on pk1.col1 = fk.col1 or fk.col1 is null
inner join pk2 on pk2.col1 = fk.col2
inner join pk3 on pk3.col1 = fk.col3 or fk.col3 is null
+------+------+------+------+
| col1 | col2 | col3 | col4 |
+------+------+------+------+
| null | 1 | 1 | 20 |
| null | 1 | 2 | 50 |
| 1 | 1 | null | 10 |
| 1 | 1 | null | 30 |
| 1 | 1 | null | 40 |
| 1 | 1 | 1 | 70 |
| 1 | 1 | 1 | 100 |
| 2 | 2 | 3 | 80 |
| 2 | 3 | 1 | 60 |
+------+------+------+------+
The above is the required output, but the query will be very slow if there are many NULL-valued rows in col1 and col3, since I also need DISTINCT once I use the 'IS NULL' check in the JOIN.
Please let me know if there is an alternative to this query which can return the same result set in a more efficient manner.
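One rewrite to consider (a sketch against the tables above): move the NULL handling out of the join conditions and into EXISTS predicates. Each fk row is then tested exactly once, so no duplicates arise and the DISTINCT goes away:
SELECT fk.*
FROM fk
WHERE (fk.col1 IS NULL OR EXISTS (SELECT 1 FROM pk1 WHERE pk1.col1 = fk.col1))
  AND EXISTS (SELECT 1 FROM pk2 WHERE pk2.col1 = fk.col2)
  AND (fk.col3 IS NULL OR EXISTS (SELECT 1 FROM pk3 WHERE pk3.col1 = fk.col3))
Against the sample data this returns the same nine rows as the required output.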
View 2 Replies
Mar 31, 2008
I am using a table many times in LEFT OUTER JOINs and INNER JOINs for various conditions.
Is there any way of writing the query with minimal table usage, instead of hitting the same table over and over?
**********************************
SELECT
blog.blogid,
BM.TITLE,
U.USER_FIRSTNAME+ ' ' + U.USER_LASTNAME AS AUTHORNAME,
Blog_Entries = (cASE WHEN Blog_Entries is NULL or Blog_Entries = ' ' then 0 else Blog_Entries END),
Blog_NewEntries = (cASE WHEN Blog_NewEntries is NULL or Blog_NewEntries = ' ' then 0 else Blog_NewEntries END),
Blog_comments = (cASE WHEN Blog_comments is NULL or Blog_comments = ' ' then 0 else Blog_comments END),
dbo.DateFloor(VCOM.objCreationDate) AS CreationDate,
dbo.DateFloor(BLE.entryDate) AS Date_LastEntry
FROM vportal4VSEARCHCOMM.dbo.blog_metaData BM
INNER JOIN vportal4VSEARCHCOMM.dbo.blog BLOG
ON BM.BLOGID = BLOG.BLOGID
INNER JOIN vportal4VSEARCH.dbo.[USER] U
ON U.USER_ID = BLOG.OWNERID
INNER JOIN vportal4VSEARCHCOMM.dbo.vComm_obj VCOM
ON BLOG.vCommObjID = VCOM.vCommObjId
INNER JOIN vportal4VSEARCHCOMM.dbo.blog_entry BLE
ON BLOG.BLOGID = BLE.BLOGID
LEFT OUTER JOIN
(SELECT BlogID, Blog_Entries = COUNT(*)
FROM vportal4VSEARCHCOMM.dbo.Blog_Entry
GROUP BY BlogID )B on B.BLOGID = BM.BLOGID
LEFT OUTER JOIN
(
SELECT BlogID, Blog_NewEntries = COUNT(*)
FROM vportal4VSEARCHCOMM.dbo.Blog_Entry
WHERE ENTRYDATE > '01/01/2008'
GROUP BY BlogID )C on C.BLOGID = BM.BLOGID
LEFT OUTER JOIN
(
SELECT BEN.BLOGID, Blog_comments = COUNT(*)
FROM vportal4VSEARCHCOMM.dbo.blog_comment BC
INNER JOIN vportal4VSEARCHCOMM.dbo.blog_entry BEN
ON BEN.blog_entryId = BC.blogEntryId
GROUP BY BEN.BLOGID )D on D.BLOGID = BM.BLOGID
WHERE VCOM.objName like '%blog%'
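Since the derived tables B and C both aggregate Blog_Entry by BlogID, they can be collapsed into a single scan with conditional aggregation (a sketch based on the query above):
LEFT OUTER JOIN
    (SELECT BlogID,
            Blog_Entries = COUNT(*),
            Blog_NewEntries = SUM(CASE WHEN EntryDate > '01/01/2008' THEN 1 ELSE 0 END)
     FROM vportal4VSEARCHCOMM.dbo.Blog_Entry
     GROUP BY BlogID) BC ON BC.BlogID = BM.BLOGID
The comment-count subquery D still needs its own join through blog_comment, but this removes one full pass over Blog_Entry.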
thanks
View 5 Replies
Nov 8, 2006
Hi, everyone.
I have read a lot of topics about execution plans for queries, but got little out of them.
Please give me some help, with examples, for comparing different select statements to find the most efficient one.
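A minimal way to compare two candidate queries is to run them with statistics switched on and compare logical reads and elapsed time (a sketch; the Orders table and its OrderDate column are hypothetical):
SET STATISTICS IO ON
SET STATISTICS TIME ON

-- candidate 1: range predicate, can seek on an OrderDate index
SELECT * FROM Orders WHERE OrderDate >= '20061101' AND OrderDate < '20061201'

-- candidate 2: same rows, but the function on the column forces a scan
SELECT * FROM Orders WHERE DATEPART(yy, OrderDate) = 2006 AND DATEPART(mm, OrderDate) = 11

SET STATISTICS IO OFF
SET STATISTICS TIME OFF
The messages pane then shows reads and CPU/elapsed time for each form side by side, and the graphical execution plan tells the same story visually.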
Thank you very much.
View 4 Replies
Mar 30, 2006
Which is the most efficient way to find out the total number of rows in a table, other than using SELECT COUNT(*) in a query?
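One common shortcut is to read the count from the system catalog (shown here for SQL Server 2000's sysindexes; the figure can be slightly out of date, so it suits monitoring rather than exact accounting, and MyTable is a placeholder):
SELECT rows
FROM sysindexes
WHERE id = OBJECT_ID('MyTable')
  AND indid < 2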
View 10 Replies
Jan 18, 2008
I have a query similar to the following. The intent of this query is to retrieve the top 6 records meeting the specified criteria (LOGTYPENAME = 'Process Status Start' OR LOGTYPENAME = 'Process Status End') based on the most recent dates. Please keep in mind that I expect to return up to 6 records for each unique LogProcessName; there could be thousands of different LogProcessNames, with up to 6 records each.
1) The table I am executing against is currently very large, so queries take a long time to execute. It would seem there must be a more efficient query to get the results I am looking for.
2) CTEs don't work on SQL 2000. I need a query that does.
3) I cannot modify the database itself in the process.
;WITH cte AS (
SELECT [LogProcessName], [LogBody], [LogDate], [LogGUID], row_number()
OVER(PARTITION BY [LogProcessName]
ORDER BY [LogDate] DESC)
AS RN
FROM [LOGTABLE]
WHERE [LogTypeGUID] IN (
SELECT LogTypeGUID
FROM LOGTYPE
WHERE LogTypeName = 'Process Status Start'
OR LogTypeName = 'Process Status End' ) )
SELECT *
FROM cte
WHERE RN = 1 OR RN = 2 OR RN = 3 OR RN = 4 OR RN = 5 OR RN = 6
ORDER BY [LogProcessName] DESC, [LogDate] DESC
Does anybody else have any idea that would yield the results that I am looking for and take into account items 1-3 above?
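A SQL 2000-compatible rewrite of the same top-6-per-group logic uses a correlated TOP subquery instead of ROW_NUMBER (a sketch against the tables in the post; ties on LogDate within a LogProcessName could let in extra rows):
SELECT t.LogProcessName, t.LogBody, t.LogDate, t.LogGUID
FROM LOGTABLE t
WHERE t.LogTypeGUID IN (
        SELECT LogTypeGUID FROM LOGTYPE
        WHERE LogTypeName IN ('Process Status Start', 'Process Status End'))
  AND t.LogDate IN (
        SELECT TOP 6 t2.LogDate
        FROM LOGTABLE t2
        WHERE t2.LogProcessName = t.LogProcessName
          AND t2.LogTypeGUID IN (
                SELECT LogTypeGUID FROM LOGTYPE
                WHERE LogTypeName IN ('Process Status Start', 'Process Status End'))
        ORDER BY t2.LogDate DESC)
ORDER BY t.LogProcessName DESC, t.LogDate DESC
On a large table the subquery still runs once per LogProcessName, so a composite index on (LogProcessName, LogDate) would matter more than the query shape, if point 3 allows it.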
Thanks in advance.
View 4 Replies
Mar 26, 2007
Hi! This select gets all records that contain illegal chars... OK, to catch '[', '{' and some other chars I will add more AND '%..%' intervals; that is not the problem. The problem is: how do I replace the disallowed chars ( ! @ # $ % ^ & * ( ) etc. ) with '_'? I have seen that there is a REPLACE function, but I can't figure out how to use it here.
SELECT user_username
FROM users
WHERE user_username LIKE '%[!-)]%';
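REPLACE swaps one fixed string for another, so it cannot target a whole character class on its own. A common workaround is a small loop with PATINDEX and STUFF that rewrites one offending character at a time (a sketch; the allowed set [a-zA-Z0-9] is an assumption, widen the pattern to match your rules):
DECLARE @username varchar(100)
SET @username = 'jo!hn@doe#'

-- keep replacing the first disallowed character until none remain
WHILE PATINDEX('%[^a-zA-Z0-9]%', @username) > 0
    SET @username = STUFF(@username, PATINDEX('%[^a-zA-Z0-9]%', @username), 1, '_')

SELECT @username  -- jo_hn_doe_
Wrapped in a user-defined function, the same loop can clean the whole users table in one UPDATE.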
View 2 Replies
Aug 3, 2007
Hi there,
I'm using a Repeater at the moment, bound to a SqlDataSource. I expect a heavy load on that website; should I choose another data source? Which other data source is better in terms of performance?
I read some stuff about the SqlDataAdapter and a DataSet... is that better for performance? Why is it better?
What about LINQ?
Thanks a lot for any clarification.
View 3 Replies
Dec 13, 2007
Hi all, I have the code listed below and feel that it could run much more efficiently. I run this same code for attrib2, attrib3, description, etc. for a total of 21 fields, so on each postback I am running a total of 21 different connections; I have listed only 3 of them here for the general idea. I run this same code for update and for insert as well, so 21 times for each of them too. In fact, if someone is adding a customer, after they hit the new customer button it first runs 21 inserts of blanks for each field, then runs 21 updates on the same records for anything they put in the fields. This is running too slow... any ideas on how I can combine these? We have 21 different entries for EVERY customer. The Pf_property does not change; it is 21 set entries, and the only thing that changes is the Pf_Value.
Try
    Dim queryString As String = "select Pf_Value from CustomerPOFlexField where [Pf_property] = 'Attrib1' and [Pf_CustomerNo] = @CustomerNo"
    Dim connection As New SqlClient.SqlConnection("connectionstring")
    Dim command As SqlClient.SqlCommand = New SqlClient.SqlCommand(queryString, connection)
    command.Parameters.AddWithValue("@CustomerNo", DropDownlist1.SelectedValue)
    Dim reader As SqlClient.SqlDataReader
    command.Connection.Open()
    reader = command.ExecuteReader
    reader.Read()
    TextBox2.Text = Convert.ToString(reader("Pf_Value"))
    command.Connection.Close()
Catch ex As SystemException
    Response.Write(ex.ToString)
End Try

Try
    Dim queryString As String = "select Pf_Value from CustomerPOFlexField where [Pf_property] = 'Attrib1Regex' and [Pf_CustomerNo] = @CustomerNo"
    Dim connection As New SqlClient.SqlConnection("connectionstring")
    Dim command As SqlClient.SqlCommand = New SqlClient.SqlCommand(queryString, connection)
    command.Parameters.AddWithValue("@CustomerNo", DropDownlist1.SelectedValue)
    Dim reader As SqlClient.SqlDataReader
    command.Connection.Open()
    reader = command.ExecuteReader
    reader.Read()
    TextBox5.Text = Convert.ToString(reader("Pf_Value"))
    command.Connection.Close()
Catch ex As SystemException
    Response.Write(ex.ToString)
End Try

Try
    Dim queryString As String = "select Pf_Value from CustomerPOFlexField where [Pf_property] = 'Attrib1ValMessage' and [Pf_CustomerNo] = @CustomerNo"
    Dim connection As New SqlClient.SqlConnection("connectionstring")
    Dim command As SqlClient.SqlCommand = New SqlClient.SqlCommand(queryString, connection)
    command.Parameters.AddWithValue("@CustomerNo", DropDownlist1.SelectedValue)
    Dim reader As SqlClient.SqlDataReader
    command.Connection.Open()
    reader = command.ExecuteReader
    reader.Read()
    TextBox6.Text = Convert.ToString(reader("Pf_Value"))
    command.Connection.Close()
Catch ex As SystemException
    Response.Write(ex.ToString)
End Try
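Since every lookup hits the same table for the same customer, all 21 values can come back in a single round trip (a sketch of the SQL side, using the names from the code above):
SELECT [Pf_property], Pf_Value
FROM CustomerPOFlexField
WHERE [Pf_CustomerNo] = @CustomerNo
Loop the resulting reader once and assign each row's Pf_Value to the right textbox with a Select Case on Pf_property; that turns 21 connections into one. The blank-insert-then-update pattern for new customers can likewise collapse into a single 21-row insert followed by one update.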
Thanks,
Randy
View 2 Replies
Feb 12, 2002
I have a table that has the following...
ID  Status  Type    Check_Num  Issued  IssueTime  Paid    PaidTime
------------------------------------------------------------------
1   I       <null>  10         10.00   2/1/02     <null>  <null>
2   E       IDA     10         <null>  <null>     10.01   2/3/02
3   E       CAP     10         <null>  <null>     10.00   2/4/02
4   E       PNI     11         <null>  <null>     15.00   2/6/02
I want to return the Check_Num, Type, Paid, and MAX(PaidTime) from this...
Example:
Check_Num  Type  Paid   Time
----------------------------
10         CAP   10.00  2/4/02
11         PNI   15.00  2/6/02
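One way to pull the row carrying the latest PaidTime per Check_Num is a join back against a grouped derived table (a sketch; Checks is a placeholder for the real table name):
SELECT t.Check_Num, t.Type, t.Paid, t.PaidTime
FROM Checks t
INNER JOIN (SELECT Check_Num, MAX(PaidTime) AS MaxPaidTime
            FROM Checks
            GROUP BY Check_Num) m
    ON m.Check_Num = t.Check_Num
   AND m.MaxPaidTime = t.PaidTime
Rows with a NULL PaidTime (like the issued-only row) never match the MAX and so drop out, which is exactly the desired result here.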
Any assistance will be greatly appreciated.
Thanks,
Brian
View 1 Replies
Sep 7, 2000
Hey people
I'd be really grateful if someone can help me with this. Could someone explain the following:
If the following code is executed, it runs instantly:
declare @SellItemID numeric (8,0)
select @SellItemID = 5296979
SELECT distinct s.sell_itm_id
FROM stor_sell_itm s
WHERE (s.sell_itm_id = @SellItemID )
However, if I use this WHERE clause instead -
WHERE (@SellItemID = 0 OR s.sell_itm_id = @SellItemID)
- it takes 70 microseconds. When I join a few more tables into the statement, the difference is 4 seconds!
This is an example of a technique I'm using in loads of places - I only want the statement to return all records if the filter is zero, and otherwise only the matching record. I think that by checking the value of the variable in the WHERE clause, a table scan is used instead of an index. This seems nonsensical, since the variable is effectively a constant. Wrapping the entire select statement in an IF or CASE works, but with 10 filters I'd end up with 100 select statements.
I DON'T GET IT!! There must be a simple answer, HELP!!
Jo
PS this problem seems to occur both in 6.5 and 7.0
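One workaround that keeps index seeks without resorting to dynamic SQL is to split the two cases into a UNION ALL, so the optimizer can plan each branch on its own (a sketch using the post's table):
SELECT DISTINCT s.sell_itm_id
FROM stor_sell_itm s
WHERE s.sell_itm_id = @SellItemID
UNION ALL
SELECT DISTINCT s.sell_itm_id
FROM stor_sell_itm s
WHERE @SellItemID = 0
When @SellItemID is zero the first branch returns nothing and the second returns everything; otherwise the roles swap, and the first branch can still seek on the index.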
View 4 Replies
Dec 8, 2004
Sumanesh writes "Recently I read the article 'Converting Multiple Rows into a CSV String'
(http://www.sqlteam.com/item.asp?ItemID=256)
I have found one easy and more efficient way to achieve the same thing.
First I have a table called test with 2 columns (ID and Data)
And some data inserted into it.
CREATE FUNCTION [dbo].CombineData(@ID SMALLINT)
RETURNS VARCHAR(2000)
AS
BEGIN
DECLARE @Data VARCHAR(2000)
SET @Data = ''
SELECT @Data = @Data + Data + ',' FROM Test WHERE ID = @ID
RETURN LEFT(@Data,LEN(@Data)-1)
END
GO
Then used a simple select query to achieve the same
SELECT DISTINCT ID, [dbo].CombineData(ID) FROM Test
This achieves the same thing without using any temporary tables or cursors. I am sure that you will accept that this is more efficient than the one that was published.
Thanks
Sumanesh"
View 1 Replies
Apr 19, 2006
Hi all, I am having issues with the efficiency of backing up data from one SQL database to another. The two servers in question are on different networks, behind different firewalls. We have MS SQL 2000.
On the source server I run a job with the following steps:
1> take a trans log backup every 4 hrs
2> ftp it to the remote server
3> if the ftp fails, disable the whole job
On the target server I run a job which does the following:
1> restore the trans backup with NORECOVERY
If the job fails at the target, I have to go through the whole process of doing a complete backup of the source, restoring it at the other end, and then starting trans backups again. Also, if we do a failover to the target server, then when we roll back to the source server again we have to do a backup of the target and restore it on the source server.
Is there a more efficient way of doing this??
View 3 Replies
Jul 20, 2005
Which is more efficient:
Select * from table1 where id in (select id from table2)
or
Select * from table1 where exists (select * from table2 where table2.id = table1.id)
View 8 Replies
Jul 20, 2005
It seems I should be able to do one select that both returns the recordset and sets a variable from that recordset, e.g.:
Declare @refid int
Select t.* from mytable as t  --return the recordset
Set @refid = t.refid
The above doesn't work - how can I do it without making a second trip to the database?
Thanks,
Rick
View 3 Replies
Aug 21, 2006
I have two tables JDECurrencyRates and JDE Currency Conversion
I want to insert all the records from JDECurrencyRates into JDECurrencyConversion that do not already exist in the JDECurrencyConversion table. For matching I need to use three keys, i.e. FromCurrency, ToCurrency and EffDate.
To achieve this task I wrote the following query:
INSERT INTO PresentationEurope.dbo.JDECurrencyConversion(Date,FromCurrency,FromCurrencyDesc,
ToCurrency, ToCurrencyDesc, EffDate, FromExchRate, ToExchRate,CreationDatetime
,ChangeDatetime)
(SELECT effdate as date, FromCurrency, FromCurrencyDesc, ToCurrency, ToCurrencyDesc, EffDate,
FromExchRate, ToExchRate, GETDATE(),GETDATE() FROM MAINTENANCE.DBO.JDECURRENCYRATES
WHERE FROMCURRENCY NOT IN (SELECT FromCurrency FROM PRESENTATIONEUROPE.DBO.JDECURRENCYCONVERSION)
OR TOCURRENCY NOT IN (SELECT TOCURRENCY FROM PRESENTATIONEUROPE.DBO.JDECURRENCYCONVERSION)
OR EFFDATE NOT IN (SELECT EFFDATE FROM PRESENTATIONEUROPE.DBO.JDECURRENCYCONVERSION))
Can anyone suggest a better way to accomplish this task, or is this query OK (or efficient enough)?
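One caution: three independent NOT IN tests will skip a row whenever any one of its key values appears anywhere in the target, even on an unrelated row, and NOT IN also misbehaves if the subquery returns NULLs. Matching all three keys together with NOT EXISTS avoids both problems (a sketch based on the query above):
INSERT INTO PresentationEurope.dbo.JDECurrencyConversion
    (Date, FromCurrency, FromCurrencyDesc, ToCurrency, ToCurrencyDesc,
     EffDate, FromExchRate, ToExchRate, CreationDatetime, ChangeDatetime)
SELECT r.EffDate, r.FromCurrency, r.FromCurrencyDesc, r.ToCurrency, r.ToCurrencyDesc,
       r.EffDate, r.FromExchRate, r.ToExchRate, GETDATE(), GETDATE()
FROM Maintenance.dbo.JDECurrencyRates r
WHERE NOT EXISTS (SELECT 1
                  FROM PresentationEurope.dbo.JDECurrencyConversion c
                  WHERE c.FromCurrency = r.FromCurrency
                    AND c.ToCurrency = r.ToCurrency
                    AND c.EffDate = r.EffDate)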
View 5 Replies
Feb 22, 2007
I have a number of tables, and I need to create a summary of the data. Now I have two choices.
I could create a stored procedure that creates a temp table with the filters applied. This would require 12 selects for each year (we have 3 years of data so far). That means 36 selects so far to fill the temp table.
The second choice is to create a table with the required columns and rows. As I am essentially cross-joining 4 tables, the table would have about 10 x 10 x 12 x A rows, where A is a variable that will grow quickly. I can then keep this table up to date using triggers on each insert. Then all I need to do is use an aggregate function with the relevant filter.
My question is: which is more efficient - creating the stored procedure that builds the table dynamically, or having a table with thousands of rows and using an aggregate function on it?
PS I am using SQL Server 2000
Jag
View 1 Replies
Dec 11, 2007
My e-commerce site currently runs the following process when items are shipped from the manufacturer:
1. The manufacturer sends the retailer a 2-column CSV containing an order number and its shipping tracking number.
2. Through the web admin panel I've built, a retail staff member uploads the CSV to the server.
3. In code, the CSV is parsed. The tracking number is saved to the database, attached to the order number.
4. After a tracking # is saved, each item in that order has its status updated to Shipped.
5. The customer is sent an email using ASPEmail from Persits which contains their tracking #.
The process works without a problem as long as the CSV contains roughly 50 tracking #'s or so. The retailer has gotten insanely busy and wants to upload 3 or 4 thousand tracking #'s in a single CSV, but the process times out, even with large server timeout values set in code. Is there a way to streamline the process so it works more efficiently? I can provide the code if that helps.
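A pattern that often removes this kind of timeout is to stop issuing one UPDATE per CSV row: bulk-load the parsed CSV into a staging table, then apply it with set-based statements (a sketch; every table and column name here is hypothetical):
-- staging table filled from the parsed CSV (e.g. via BULK INSERT or SqlBulkCopy)
CREATE TABLE TrackingStaging (OrderNumber varchar(20), TrackingNumber varchar(40))

-- one statement instead of thousands
UPDATE o
SET o.TrackingNumber = s.TrackingNumber
FROM Orders o
INNER JOIN TrackingStaging s ON s.OrderNumber = o.OrderNumber

UPDATE i
SET i.Status = 'Shipped'
FROM OrderItems i
INNER JOIN TrackingStaging s ON s.OrderNumber = i.OrderNumber
The notification emails can then be queued from the staged rows rather than sent inline during the upload request.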
View 5 Replies
Feb 6, 2001
If I have 5000 rows to update with different values based upon the primary key, is there a better way to do it than 5000 separate update statements?
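If the new values can be loaded into a table first (a staging or temp table, or a table already holding them), a single joined UPDATE replaces the 5000 statements (a sketch; all names are hypothetical):
UPDATE t
SET t.SomeColumn = s.NewValue
FROM TargetTable t
INNER JOIN StagingTable s ON s.PKColumn = t.PKColumn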
View 2 Replies
Sep 25, 2000
I have a function that works, converting GETDATE() to an 8-character string. I have tried other ways but this one works OK. However, the more I look at it, the more I think a more efficient way has to exist. Any ideas greatly appreciated. Here is my approach:
declare @order_date char(8), @year char(4), @month char(2), @day char(2)
set @year = cast(datepart(yyyy, getdate()) as char(4))
if datepart(dd, getdate()) < 10
    set @day = '0' + cast(datepart(dd, getdate()) as char(2))
else
    set @day = cast(datepart(dd, getdate()) as char(2))
if datepart(mm, getdate()) < 10
    set @month = '0' + cast(datepart(mm, getdate()) as char(2))
else
    set @month = cast(datepart(mm, getdate()) as char(2))
set @order_date = @year + @month + @day
select @order_date
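CONVERT with style 112 produces the same yyyymmdd string in a single step:
declare @order_date char(8)
set @order_date = convert(char(8), getdate(), 112)
select @order_date  -- e.g. 20000925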
View 1 Replies
Jul 14, 2006
My buddy has an application that logs entries, and for space reasons needs to retain at most N records in the log table. He wants to delete old log entries as part of the insert procedure. Here is his stab at it:
CREATE PROCEDURE spInserttblLog
@Message varchar(1024),
@LogDate datetime,
@ElapsedSeconds float,
@LogLevel varchar(50),
@UserName varchar(100),
@ProcessName varchar(100),
@MachineName varchar(100),
@MaxEntries int=0
AS
declare @SQL varchar(300)
if @MaxEntries > 0
Begin
set @MaxEntries = @MaxEntries -1
set @SQL='Delete From tblLog Where LogID Not In (Select Top ' + cast(@MaxEntries as varchar) + ' LogID from tblLog order by LogID desc)'
execute (@SQL)
End
insert into tblLog
(Message,
LogDate,
ElapsedSeconds,
LogLevel,
UserName,
ProcessName,
MachineName)
values(@Message,
@LogDate,
@ElapsedSeconds,
@LogLevel,
@UserName,
@ProcessName,
@MachineName)
I think this would be more efficient:
--SQL Server
CREATE PROCEDURE spInserttblLog
@Message varchar(1024),
@LogDate datetime,
@ElapsedSeconds float,
@LogLevel varchar(50),
@UserName varchar(100),
@ProcessName varchar(100),
@MachineName varchar(100),
@MaxEntries int=0
AS
delete
from tblLog
where LogID < (select max(LogID) from tblLog) - @MaxEntries - 2
  and @MaxEntries > 0
insert into tblLog
(Message,
LogDate,
ElapsedSeconds,
LogLevel,
UserName,
ProcessName,
MachineName)
values(@Message,
@LogDate,
@ElapsedSeconds,
@LogLevel,
@UserName,
@ProcessName,
@MachineName)
Comments, or suggestions for an even faster method? This is a relatively high-activity table, so there are a lot of inserts.
View 4 Replies
Feb 15, 2006
I have a table with data that is refreshed regularly, but I still need to store the old data. I have created a separate table with a foreign key to the table and the date on which it was replaced. I'm looking for an efficient way to select only the active data.
Currently I use (the NULL test belongs on the outer table's key, corrected here):
SELECT ...
FROM DataTable AS D
LEFT OUTER JOIN InactiveTable AS I ON I.Key = D.Key
WHERE I.Key IS NULL
However I am not convinced that this is the most efficient, or the most intuitive, method of achieving this. Can anyone suggest a more efficient way of getting this information please?
Many thanks.
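An equivalent anti-join that is often clearer, and that the optimizer usually plans the same way, is NOT EXISTS:
SELECT D.*
FROM DataTable AS D
WHERE NOT EXISTS (SELECT 1 FROM InactiveTable AS I WHERE I.Key = D.Key)
Either form benefits from an index on InactiveTable.Key.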
View 3 Replies
Jul 20, 2005
Hi all
I have a bit of a dilemma that I am hoping some of you smart dudes might be able to help me with.
1. I have a table with about 50 million records in it and quite a few columns. [Table A]
2. I have another table with just over 300 records in it and a single column (besides the id). [Table B]
3. I want to: select all of those records from Table A where [Table A].description does NOT contain any of (select color from [Table B])
4. An example:
Table A
id ... description
1 the green hornet
2 a red ball
3 a green dog
4 the yellow submarine
5 the pink panther
Table B
id color
55 blue
56 gold
57 green
58 purple
59 pink
60 white
So I want to select all those rows in Table A where none of the words from Table B.color appear in the description field in Table A. I.e. the query would return the following from Table A:
2 a red ball
4 the yellow submarine
The real-life problem has more variables and is a little more complicated than this, but this should suffice to give the right idea. Due to the number of rows involved I need this to be reasonably efficient. Can someone suggest the most efficient way to proceed?
PS. Please excuse my ignorance.
Cheers
Sean
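A set-based way to express "description contains none of the colors" is an anti-join with LIKE (a sketch using the post's tables; note LIKE matches substrings, so 'green' would also hit 'greenhouse' unless you pad with spaces or add word delimiters):
SELECT a.*
FROM TableA a
WHERE NOT EXISTS (SELECT 1
                  FROM TableB b
                  WHERE a.description LIKE '%' + b.color + '%')
With 50 million rows this is still a scan of Table A with up to 300 LIKE tests per row, so full-text indexing may be worth a look if this needs to be fast.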
View 3 Replies
Jan 24, 2008
We are writing an in-house backup tool. It's written in C# and the program allows me to select which records we want to archive and remove from the existing database. Functionally, we can achieve what we want to do, but I feel that we did not do it right. Can someone show me the right way to go? Here is what we are doing in the backup process:
1) Create an archive database (with the file name, say archive_101.mdf)
2) Move the data from the original database to the archive database using the following SQL statement:
INSERT INTO [archive_database].dbo.records SELECT a, b, c, date FROM [original_database] WHERE date < CONVERT(DATETIME, '2007-10-10', 120)
3) Delete the moved data from original database.
4) Of course, step 2 and step3 are done in a transaction
5) Then, we detach the archive database and store the file "archive_101.mdf somewhere.
I feel it's wrong because the performance is very bad. If I run the SELECT statement on its own, it takes 1 min to dump all the data to the console. If I run the same SELECT statement together with the INSERT INTO, as in step 2, it takes more than 1 hour to write to the other database. I checked tempdb and its size grows a lot while step 2 runs. I know the data I am selecting is about 500MB, but still, it should not take that long. Can somebody give me any hints?
Thanks in advance.
View 5 Replies
Sep 15, 2001
I'm looking for a query that can "batch" update one table from another. For example, say there are fields on both tables like this:
KeyField
Value1
Value2
Value3
The two tables will match on "KeyField". I would like to write one SQL query that will update the "Value" fields in Table1 with the data from Table2 when there is a match.
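T-SQL's UPDATE ... FROM extension does exactly this in one statement (a sketch using the post's field names, with Table1 and Table2 as placeholders):
UPDATE t1
SET t1.Value1 = t2.Value1,
    t1.Value2 = t2.Value2,
    t1.Value3 = t2.Value3
FROM Table1 t1
INNER JOIN Table2 t2 ON t2.KeyField = t1.KeyField
Rows in Table1 with no match in Table2 are simply left untouched by the inner join.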
View 1 Replies
Jul 20, 2005
Hi there,
I'm a little stuck and would like some help. I need to create an update trigger which will run an update query on another table. However, what I need to do is update the other table with the changed record's value from the table which has the trigger. Can someone please show me how this is done? I can write both queries, but am unsure as to how to get the value of the changed record for use in my trigger.
Please help
M3ckon
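Inside an update trigger, the new values are available in the inserted pseudo-table (and the old ones in deleted), so the cross-table update is just a join against it. A sketch with hypothetical table and column names:
CREATE TRIGGER trg_Source_Update ON SourceTable
FOR UPDATE
AS
    -- inserted holds the post-update rows of SourceTable
    UPDATE o
    SET o.MirroredValue = i.ChangedValue
    FROM OtherTable o
    INNER JOIN inserted i ON i.KeyField = o.KeyField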
View 1 Replies
Jan 9, 2008
I'm using DataAdapters with my SQL database, with the intention that all the SELECT, UPDATE, INSERT and DELETE commands be automatically generated. One table is huge, so I'm wondering: is it more efficient to use "SELECT Top(1) * FROM hugetable" instead of "SELECT * FROM hugetable" in order to facilitate the generation of commands?
I hope this isn't too confusing.
Thanks,
Geoff
View 2 Replies
Oct 21, 2004
Hello All,
I am new to the world of ASP.NET. Right now I am building an application that will IMPORT about 5,000 records from an Excel spreadsheet to a table in MS SQL Server. Right now the code works correctly, but I feel it is not efficient and takes a little too long doing the import. Could you guys throw some light on how I can make the code run faster? Someone suggested that I could use a DataAdapter and update the table in the database through an Update method available with it. I don't know how to do that - could anyone share with me a snippet of code that does this?
Here is my code:
Private Sub ProcessRecords()
Dim ds2 As New DataSet
' readExcelSheet is a user-defined function that reads a spreadsheet and returns a DataSet object
ds2 = readExcelSheet("C:\Inetpub\wwwroot\Project1\Book2.xls", "SELECT * FROM [Sheet1$]")
Dim myConnection As SqlConnection = Connection() ' user-defined function that returns a SQLConnection object
myConnection.Open()
Dim strSQL As String = "insert_member" ' stored procedure that inserts records
Dim myCommand As New SqlCommand(strSQL, myConnection)
myCommand.CommandType = CommandType.StoredProcedure
myCommand.Parameters.Add("@salutation", SqlDbType.NVarChar)
myCommand.Parameters.Add("@firstname", SqlDbType.NVarChar)
myCommand.Parameters.Add("@lastname", SqlDbType.NVarChar)
myCommand.Parameters.Add("@company", SqlDbType.NVarChar)
Dim i, j As Integer
Response.Write(Date.Now() & "<br>")
For i = 0 To ds2.Tables("Members").Rows.Count() - 1
myCommand.Parameters("@salutation").Value = ds2.Tables("Members").Rows(i).Item("sal")
myCommand.Parameters("@firstname").Value = ds2.Tables("Members").Rows(i).Item("firstname")
myCommand.Parameters("@lastname").Value = ds2.Tables("Members").Rows(i).Item("lastname")
myCommand.Parameters("@company").Value = ds2.Tables("Members").Rows(i).Item("company")
j = myCommand.ExecuteNonQuery()
If (j > 0) Then
Response.Write("Record Inserted - " & i + 1 & "<br>")
End If
Next
Response.Write(Date.Now() & "<br>")
myConnection.Close()
End Sub
Please reply soon.
Thank You.
View 2 Replies
Mar 1, 2005
Hi All,
I want to give the users the ability to highlight words in sentences on the page in different colors.
Please see link below for an example(watch out for URL wrapping):
http://64.233.161.104/search?q=cache:fKHSRIxERbUJ:www.codeguru.com/columns/DotNet/article.php/c6603/+XML+serialization+in+.net&hl=en
My MASTER table that contains text is very simple:
TEXT_ID INT
TEXT varchar
I was thinking of something like creating another table to keep track of selection criteria:
USER_NAME varchar // keeps track of which user made the selection
TEXT_ID INT // points to the text id in the MASTER table
START_POS INT // start position of the selection
END_POS INT // end position of the selection
COLOR INT // color of the selection
The thing I don't like about this method is that if users have many words highlighted in the sentences, this table may get very large.
I also thought of maybe combining START_POS, END_POS & COLOR into comma delimited entries:
EX: 1:5:240, 25:30:125 // START_POS:END_POS:COLOR
The thing I don't like about this method is that the parsing calculations may take a while to execute.
My goal is to stream as much text data as possible and make it as fast as possible. How does Google or anyone else do it?
P.S. Having a copy of a MASTER table with HTML tags in TEXT field is not an option.
Thanks,
Vlad
View 1 Replies