Efficient Way Of Backing Up Data

Jan 24, 2008

We are writing an in-house backup tool. It's written in C# and the program allows us to select which records we want to archive and remove from the existing database. Functionally, we can achieve what we want, but I feel that we did not do it right. Can someone show me the right way to go? Here is what we are doing in the backup process.

1) Create an archive database (with the file name, say archive_101.mdf)
2) Move the data from the original database to the archive database using the following SQL statement:
INSERT INTO [archive_database].dbo.records SELECT a, b, c, date FROM [original_database].dbo.records WHERE date < CONVERT(DATETIME, '2007-10-10', 120)
3) Delete the moved data from the original database.
4) Of course, steps 2 and 3 are done in a transaction.
5) Then we detach the archive database and store the file "archive_101.mdf" somewhere.

I feel it's wrong because the performance is very bad. If I run the SELECT statement alone, it takes me 1 min to dump all the data to the console. If I run the same SELECT statement together with the INSERT INTO, as in step 2, it takes more than 1 hour to write to the other database. I checked tempdb and its size grows a lot while step 2 runs. I know the data I am selecting is about 500MB, but still, it should not take that long. Can somebody give me any hints?
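For reference, a common fix for this symptom is to do the move in small batches so that each transaction (and the tempdb/log work it generates) stays bounded; a hedged sketch, assuming SQL Server 2005+ and a hypothetical primary key column id on the records table:

DECLARE @keys TABLE (id INT PRIMARY KEY)

WHILE 1 = 1
BEGIN
    DELETE FROM @keys

    -- Grab the next slice of archivable rows
    INSERT INTO @keys (id)
    SELECT TOP (10000) id
    FROM [original_database].dbo.records
    WHERE date < CONVERT(DATETIME, '2007-10-10', 120)

    IF @@ROWCOUNT = 0 BREAK

    BEGIN TRAN

    INSERT INTO [archive_database].dbo.records (a, b, c, date)
    SELECT r.a, r.b, r.c, r.date
    FROM [original_database].dbo.records AS r
    JOIN @keys AS k ON k.id = r.id

    DELETE r
    FROM [original_database].dbo.records AS r
    JOIN @keys AS k ON k.id = r.id

    COMMIT TRAN
END

An index on the date column also matters here; without one, both the SELECT and the DELETE scan the whole table on every pass.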

Thanks in advance.




Most Efficient Way Of Returning Data

Feb 22, 2007

I have a number of tables, and I need to create a summary of the data.

Now I have two choices. I could create a stored procedure that creates a temp table with the filters applied. This would require 12 selects for each year (we have 3 years of data so far). This means 36 selects so far that fill a temp table.

The second choice is to create a table with the required columns and rows. As I am essentially cross-joining 4 tables, the table would have about 10 x 10 x 12 x A rows, where A is a variable that will grow quickly. I can then keep this table up to date using triggers on each insert. Then all I need to do is use an aggregate function on the relevant filter.

My question is: which is more efficient, creating a stored procedure that builds the table dynamically, or maintaining a table with thousands of rows and using an aggregate function on it?

PS I am using SQL Server 2000

Jag


Efficient Way To Store Game Data: Help?

Jun 22, 2007

Hello everyone!



So I am trying to wrap my head around the most efficient way to store some game data in sql server 2005.

I won't list all of my tables or columns as there would just be too many, but I will try to post as exactly as I can.



Basically, let's say I have 3000 monsters in my game, and over 8000 items, some of which are monster loot.

Each monster could have 1 drop item or 200 different item drops, all depending on the monster.



So I am just trying to find the best method of creating a loot table.



So I have the following in mind.



[MonsterTable]

monsterID;

monsterName;

etc...



[ItemTable]

itemID;

itemName;

etc..



[LootTable]

lootTableID;

refMonsterID;

refItemID;



So I'm not worried about how I would get all of the data in there, I'm just wondering if this simple method of using Ref. ID's between tables would provide the best method for extracting the data when needed.
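For what it's worth, this junction-table shape is the standard way to model a many-to-many relationship, and with the right indexes it extracts loot lists cheaply even at these row counts. A hedged DDL sketch (the names follow the post; the foreign keys, index, and data types are my assumptions):

CREATE TABLE MonsterTable (
    monsterID   INT PRIMARY KEY,
    monsterName VARCHAR(100) NOT NULL
)

CREATE TABLE ItemTable (
    itemID   INT PRIMARY KEY,
    itemName VARCHAR(100) NOT NULL
)

CREATE TABLE LootTable (
    lootTableID  INT IDENTITY PRIMARY KEY,
    refMonsterID INT NOT NULL REFERENCES MonsterTable (monsterID),
    refItemID    INT NOT NULL REFERENCES ItemTable (itemID)
)

-- The common lookup is "all loot for monster X", so index that access path
CREATE INDEX IX_LootTable_Monster ON LootTable (refMonsterID, refItemID)

-- Example: fetch one monster's loot list
SELECT i.itemID, i.itemName
FROM LootTable AS l
JOIN ItemTable AS i ON i.itemID = l.refItemID
WHERE l.refMonsterID = 42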



Any ideas or thoughts are more than welcome. Any questions, just ask, and thanks for any guidance or information you can provide on a more efficient manner.


Are Data Sets Memory Efficient?

Feb 8, 2008



Hi,

I have designed a contact manager with a Data Grid control bound to a Data Set.

When the application closes, data from the Data Set is written to an XML file, and when the application opens, data from the XML file is loaded into the Data Set and shown in the Data Grid control.

Contacts in my application can exceed 1,000, so is a Data Set capable of handling that much data efficiently in memory?




Please advise


Backing Up Data

Sep 4, 2007

I have an existing database that is very large. I want to split it into different filegroups to make the backup faster. How do you split it? Thanks
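For reference, a minimal sketch of adding a filegroup and backing it up on its own (the database name, file path, and sizes are hypothetical):

ALTER DATABASE MyBigDb ADD FILEGROUP Archive2007

ALTER DATABASE MyBigDb
ADD FILE (NAME = Archive2007_1, FILENAME = 'D:\Data\MyBigDb_Archive2007_1.ndf', SIZE = 1GB)
TO FILEGROUP Archive2007

-- After moving or creating large tables on the new filegroup, back up one filegroup at a time:
BACKUP DATABASE MyBigDb FILEGROUP = 'Archive2007' TO DISK = 'E:\Backup\MyBigDb_Archive2007.bak'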


Doing A Data Import Using DTS Wizard In SQL Server 2005 - Being Efficient With 5 Flat Files

Apr 13, 2006

Hi,

I'm a new user of SQL Server 2005. I have the full version installed. I also have SQL Server Business Integration Dev Studio installed. My OS is Windows XP.

I'm importing a series of 5 flat files into a database on one of the SQL Servers we have. My goal is to get 5 different tables (though perhaps I should do one and add an extra field to distinguish each import) into the database for further analysis.

I tried doing an import via the DTS Wizard. There are no column names in the flat file, so I defined them during the import process (all 58 of them). When I got to the end, I had an option to save the import process as an SSIS (SQL Server Integration Services) Package on:

SQL SERVER (I don't have permission for this)

or

FILE SYSTEM (did this one)

I saved the Package locally in hopes of being able to go back in, change the source file and destination table of the package and quickly get the other 4 flat files imported.

My problems are:

1) I couldn't find out how to run the *.DTSX Package file in SQL Server Management Studio (basically reusing the Package with minor changes, saving me having to redefine the same 58 columns on each flat file import)

2) Tried but didn't understand how to run it in SQL Server Bus Intel Dev Studio (i.e. understanding the mapping and getting the data types right so it wouldn't error out)

3) Don't know how to make the necessary changes so that the Package handles the next source file and puts in a new destination table (do I need to do 5 CREATE TABLES so this Package has a place to run to?)

4) Does the Package need to be part of a Project to run (I haven't found how to take an existing Package and make it part of a Project/Solution)?

5) Is there a good book or online resource for just getting the basics of using SQL Server 2005 and SQL Server Business Intelligence Development Studio?

I'm really at a loss after spending a day fruitlessly on it scouring the help files, forums and experimenting around.

Hope somebody can point me in the right direction.

Regards,

Patrick Briggs,
Pasadena, CA



Doing A Data Import Using DTS Wizard In SQL Server 2005 - Being Efficient With 5 Flat Files

Apr 18, 2006

I just spent some time working out how to do a seemingly simple task. I'm sharing the steps I took to do this in hopes it saves other SQL Server 2005 users (especially newbies like myself) time.

My original question posed on several SQL newsgroups was based on this goal:


I'm importing a series of 5 flat files (all with the same file layout) into a database on one of the SQL Servers we have, using SQL Server 2005 (SQL Server Management Studio). My goal is to get 5 different tables. I want to do this without having to redo all the layout criteria 4 additional times.

Below are the steps I followed to get a solution (all done in Microsoft SQL Server Management Studio):

Create the Package (data import)

1) Use the SQL Server Import Export Wizard (equivalent to the SQL Server 2000 Data Transfer Wizard) to import your first flat file. At the CHOOSE DATA SOURCE window, browse for your file.
2) Under the Advanced tab, you can set your column attributes ("output column width" or "data type", to name a few). I highlighted all the columns and selected "string [DT_STR]" for data type. To avoid truncation errors, I selected 255 for output column width. You can name the columns whose data you are most concerned with (I did import all the available fields).
3) After choosing a server destination you will have a "SELECT SOURCE TABLES AND VIEWS" window pop up. Under the "Mapping" column you can choose to tweak your mapping further by editing the SQL (see the Edit SQL button). I didn't.
4) The "SAVE AND EXECUTE PACKAGE" window will pop up. The "Execute Immediately" box should be checked, and you should check "Save SSIS Package" (SQL Server Integration Services). When you do, select "File System" for where to save this import-file-package to.
5) Click OK for the Package Protection Level and the "SAVE SSIS PACKAGE" window will appear. Browse for a path on your local computer to save to.

Modify Package (data import) for Next Use

6) In SQL Server Management Studio, browse for the Package and open it.

Preparation for SQL Task - box

7) You should see a screen that shows two boxes ("Preparation for SQL Task") and ("Data Flow Task").
8) Right click on the former and select "Edit".
9) On the "SQL Statement" row, click into the right column and select the "..." box.
10) Change the destination table (the table you will create with this package) to a meaningful name and click OK.
11) Click OK for the "SQL Task Editor".

Data Flow Task - box

12) Right click on the "Data Flow Task" box and select "Edit".
13) Three boxes will appear: "SourceConnectionFlatFile", "Data Conversion 1", and "Destination - <whatever table name your original data import went to>". Below them is a section that displays "Connection Managers".

SourceConnectionFlatFile - editing

14) The first thing you will want to do is change the import source to a new flat file. You do this by going below the boxes to the "Connection Managers" window, right clicking on "SourceConnectionFlatFile", and then selecting "Edit".
15) Browse for the new "File Name" and select it.
16) A "Microsoft SQL Server Management Studio" window will pop up asking you if you want to "keep or reset the existing metadata". The metadata is just your column definitions, and choosing "YES" to keep it makes sense if you are doing data imports on files with the same file layout.
17) Still in the "Flat File Connection Manager Editor" window, change the "Connection Manager Name" to something meaningful (I add <_> at the end and then the name of the table the flat file is going to) and click OK.

SourceConnectionFlatFile - box (editing)

18) Right click on the "SourceConnectionFlatFile" box and select "Edit".
19) Your newly named "Flat File Connection Manager" should appear in the select box.
20) Click OK, right click again on the "SourceConnectionFlatFile" box, and select "Show Advanced Editor".
21) Under the "Connections Manager" tab, your newly named "Flat File Connection" should appear (the prior step is necessary for the advanced editor to recognize your change).
22) Under the "Component Properties" tab, on the "Name" row, click into the right column and rename to something meaningful (notice the "Identification String" row description changes too once you click out of the "Name" row).
23) Under the "Column Mappings" tab, just confirm you are mapping your flat file fields ("Available External Columns") to a destination table's fields ("Available Output Columns").
24) Under the "Input and Output Properties" tab you can check in "Flat File Source Output" to make modifications to either your "External Columns" or your "Output Columns" - you shouldn't need to for a simple import.
(NOTE: any changes you make here would likely need to be consistent with the column properties found under the "Connection Manager Window" for the "SourceConnectionFlatFile" as well as the "Data Conversion 1" box under the "Data Flow Tasks" window, so exercise caution.)
25) NOTE: This process has worked for me by making my source columns all "string [DT_STR]" data type and the output columns all "Unicode String [DT_WSTR]" data type.

Data Conversion 1 - box (editing)

26) There is nothing you need to do here. By right clicking on the "Data Conversion 1" box and selecting "Edit", you can see and change the data type of the output columns (the ones in the table you're importing the flat file to). There are probably more edits one can do, but they're beyond what I've learned.

Destination - <whatever table name your original data import went to> - box (editing)

27) Right click on the "Destination - <whatever table name your original data import went to>" box and select "Show Advanced Editor".
28) Select the "Component Properties" tab.
29) Select the right column at the "Name" row and change the name to something meaningful (i.e. related to the source file name or the table name you're importing to).
30) Select the right column at the "Identification String" row and it will update to this change.
31) Select the right column at "OpenRowSet" and change it to the name of the table you are importing your flat file to (this should be consistent with the table name under step 10).
32) Click OK.
33) Select FILE and select "Save As...", then give your package a new name that's meaningful (this will be helpful if you have to rerun the import of the flat file later).

Run (execute) the Revised Package (data import)

34) Go back to SQL Server Management Studio and open the Object Explorer.
35) Connect to an "Integration Services" component. This should essentially be a local instance (not sure where it is on the local computer or in SQL Server Management Studio on the local computer).
36) In "Object Explorer" go down to your "Integration Services" object and expand it.
37) Expand "Stored Packages".
38) Right click on "File System" and select "Import Package", and an "IMPORT PACKAGE" window will appear.
39) For "Package Location" choose "File System" and then browse for the "Package Path".
40) Click into the "Package Name" and it defaults to your Package's file name.
41) Click OK and the Package is imported.
42) Right click on the newly imported Package and select "Run Package".
43) An "Execute Package Utility" window appears.
44) Select "Execute" and the package runs.
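As a possible shortcut, once the package is saved to the file system, steps 34-44 can usually be replaced by a single command using the dtexec utility that ships with SQL Server 2005; a minimal sketch with a hypothetical path:

dtexec /File "C:\Packages\FlatFileImport.dtsx"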


Backing Up Only The Table Data?

Jun 22, 2007

How can I just back up the table data of a database, without the SPs, views, etc.? Please, it's Veeeeeeeery urgent!
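One common approach is bcp, which exports a single table's data to a file; a minimal sketch with hypothetical database, table, and file names (-n keeps native format, -T uses a trusted connection):

bcp MyDatabase.dbo.Customers out C:\Backup\Customers.dat -n -T -S MyServer

Repeat per table (or generate the commands from the table list), and load back later with bcp ... in.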


Backing Up Data To Network Drive Mapped

Aug 6, 2003

I am new to DB administration.

How do I back up the data to a mapped network drive on a day-to-day basis?
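Note that the SQL Server service usually cannot see drive letters mapped in a user's session, so scheduled backups to network storage are normally written to a UNC path that the service account has write permission on; a minimal sketch with hypothetical names:

BACKUP DATABASE MyDb TO DISK = '\\FileServer\SqlBackups\MyDb.bak' WITH INIT

Schedule this daily as a SQL Server Agent job.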


Backing Up Online Log After Loss Of Primary Data File

May 24, 2004

My question is this:

Is there any way to back up the current online log after complete loss of the corresponding data file?

Example:

(2) Logical Disk Volumes

Disk 1 (D:) contains pubs_data.mdf
Disk 2 (E:) contains pubs_log.ldf

Disk 1 becomes corrupt and goes offline, leaving the database pubs in a suspect state. Is a backup of the current online log pubs_log.ldf possible? If a backup of the log is not possible, are there any other restoration methods that can be used to bring this database back online, rescuing as much information as possible from the current online log pubs_log.ldf?
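For what it's worth, this is the scenario the NO_TRUNCATE option is meant for: it lets you back up the tail of the log even though the data file is gone; a minimal sketch:

BACKUP LOG pubs TO DISK = 'E:\Backup\pubs_tail.trn' WITH NO_TRUNCATE

You would then restore the last full backup and any subsequent log backups WITH NORECOVERY, and finish with this tail-log backup WITH RECOVERY.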

Thanks,
P


Backing Online Database Table Data To Local Machine

Jul 23, 2005

I have an online SQL Server database provided by an ISP. I do not have permission to create a backup device, and I understand this is normal practice. I am not using Enterprise Manager to administer the online database.

I know I can back up the structure of the database using SQL scripts. My question is: how do I back up, on my own machine, the data contained in the online database tables I have created? If I were using Enterprise Manager I could do it by downloading tables using the DTS facility, but how can I do it without Enterprise Manager?

Is there some workaround which I have missed, e.g. creating a CSV file of the data?

Best wishes for 2005 to all those helpful people in this newsgroup!

John Morgan
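One workaround along exactly those lines is bcp with queryout, which writes a query's result to a delimited file from the command line; a hedged sketch with hypothetical names (-c character format, -t, comma field terminator):

bcp "SELECT * FROM MyDb.dbo.Orders" queryout C:\Backup\Orders.csv -c -t, -S isp.server.example -U myuser -P mypassword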


Most Efficient DataSource?

Aug 3, 2007

Hi there,
I'm using a Repeater at the moment which is bound to a SqlDataSource. I expect a lot of load on that website; should I choose another DataSource? Which DataSource is better performance-wise?
I read some stuff about the SqlDataAdapter and a DataSet... is that better in performance? Why is it better?
What about LINQ?
Thanks a lot for any clarification.


More Efficient Code

Dec 13, 2007

Hi all, I have the code listed below and feel that it could be run much more efficiently. I run this same code for attrib2, 3, description, etc. for a total of 21, so on each postback I am running a total of 21 different connections; I have listed only 3 of them here for the general idea. I run this same code for update and for insert, so 21 times for each of them as well. In fact, if someone is adding a customer, after they hit the new customer button it first runs 21 inserts of blanks for each field, then runs 21 updates on the same records for anything they put in fields. This is running too slow... any ideas on how I can combine these? We have 21 different entries for EVERY customer. The Pf_property does not change; it is 21 set entries, the only one that changes is the Pf_Value.
Try
    Dim queryString As String = "select Pf_Value from CustomerPOFlexField where [Pf_property] = 'Attrib1' and [Pf_CustomerNo] = @CustomerNo"
    Dim connection As New SqlClient.SqlConnection("connectionstring")
    Dim command As SqlClient.SqlCommand = New SqlClient.SqlCommand(queryString, connection)
    command.Parameters.AddWithValue("@CustomerNo", DropDownlist1.SelectedValue)
    Dim reader As SqlClient.SqlDataReader
    command.Connection.Open()
    reader = command.ExecuteReader
    reader.Read()
    TextBox2.Text = Convert.ToString(reader("Pf_Value"))
    command.Connection.Close()
Catch ex As SystemException
    Response.Write(ex.ToString)
End Try

Try
    Dim queryString As String = "select Pf_Value from CustomerPOFlexField where [Pf_property] = 'Attrib1Regex' and [Pf_CustomerNo] = @CustomerNo"
    Dim connection As New SqlClient.SqlConnection("connectionstring")
    Dim command As SqlClient.SqlCommand = New SqlClient.SqlCommand(queryString, connection)
    command.Parameters.AddWithValue("@CustomerNo", DropDownlist1.SelectedValue)
    Dim reader As SqlClient.SqlDataReader
    command.Connection.Open()
    reader = command.ExecuteReader
    reader.Read()
    TextBox5.Text = Convert.ToString(reader("Pf_Value"))
    command.Connection.Close()
Catch ex As SystemException
    Response.Write(ex.ToString)
End Try

Try
    Dim queryString As String = "select Pf_Value from CustomerPOFlexField where [Pf_property] = 'Attrib1ValMessage' and [Pf_CustomerNo] = @CustomerNo"
    Dim connection As New SqlClient.SqlConnection("connectionstring")
    Dim command As SqlClient.SqlCommand = New SqlClient.SqlCommand(queryString, connection)
    command.Parameters.AddWithValue("@CustomerNo", DropDownlist1.SelectedValue)
    Dim reader As SqlClient.SqlDataReader
    command.Connection.Open()
    reader = command.ExecuteReader
    reader.Read()
    TextBox6.Text = Convert.ToString(reader("Pf_Value"))
    command.Connection.Close()
Catch ex As SystemException
    Response.Write(ex.ToString)
End Try
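For what it's worth, all 21 values can usually come back over a single connection with one query by dropping the Pf_property filter; a hedged sketch of the SQL - you would then loop the one SqlDataReader and assign each row's Pf_Value to its textbox based on Pf_property:

SELECT Pf_property, Pf_Value
FROM CustomerPOFlexField
WHERE [Pf_CustomerNo] = @CustomerNo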
 Thanks,
Randy


Which Query Is More Efficient?

Apr 1, 2008

What's the difference, if I use a SqlDataReader at code level, between making a query that retrieves 500 rows and 2 columns, and making a query that retrieves 2 rows and 500 columns?


Most Efficient Way To Do This Select....

Feb 12, 2002

I have a table that has the following...

ID   Status   Type     Check_Num   Issued   IssueTime   Paid     PaidTime
--------------------------------------------------------------------------
1    I        <null>   10          10.00    2/1/02      <null>   <null>
2    E        IDA      10          <null>   <null>      10.01    2/3/02
3    E        CAP      10          <null>   <null>      10.00    2/4/02
4    E        PNI      11          <null>   <null>      15.00    2/6/02


I want to return the Check_Num, Type, Paid, and Max(PaidTime) from this...

Example:
Check_Num   Type   Paid    Time
--------------------------------
10          CAP    10.00   2/4/02
11          PNI    15.00   2/6/02
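For reference, a hedged sketch of one way to get that on SQL Server 2000, joining back to the per-check maximum paid time (the table name checks is hypothetical):

SELECT c.Check_Num, c.Type, c.Paid, c.PaidTime AS Time
FROM checks AS c
JOIN (SELECT Check_Num, MAX(PaidTime) AS MaxPaidTime
      FROM checks
      WHERE PaidTime IS NOT NULL
      GROUP BY Check_Num) AS m
  ON m.Check_Num = c.Check_Num
 AND m.MaxPaidTime = c.PaidTime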

Any assistance will be greatly appreciated.

Thanks,
Brian


In-efficient SQL Code

Sep 7, 2000

Hey people

I'd be really grateful if someone can help me with this. Could someone explain the following:
If the following code is executed, it runs instantly:

declare @SellItemID numeric (8,0)
select @SellItemID = 5296979

SELECT distinct s.sell_itm_id
FROM stor_sell_itm s
WHERE (s.sell_itm_id = @SellItemID )

However, if I use this WHERE clause instead -

WHERE (@SellItemID = 0 OR s.sell_itm_id = @SellItemID)

- it takes 70 micro seconds. When I join a few more tables into the statement, the difference is 4 seconds!

This is an example of a technique I'm using in loads of places - I only want the statement to return all records if the filter is zero, otherwise the matching record only. I think that by checking the value of the variable in the WHERE clause, a table scan is used instead of an index. This seems nonsensical since the variable is effectively a constant. Wrapping the entire select statement with an IF or CASE works, but with 10 filters I'd have to write 100 select statements.
I DON'T GET IT!! There must be a simple answer, HELP!!
Jo

PS this problem seems to occur both in 6.5 and 7.0
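One workaround for this catch-all pattern in 6.5/7.0 (which predate the OPTION (RECOMPILE) hint) is to build the statement dynamically, so the optimizer only ever sees the filters that are actually set; a minimal sketch:

declare @SQL varchar(500)
select @SQL = 'SELECT DISTINCT s.sell_itm_id FROM stor_sell_itm s WHERE 1 = 1'
if @SellItemID <> 0
    select @SQL = @SQL + ' AND s.sell_itm_id = ' + convert(varchar, @SellItemID)
exec (@SQL)

Each extra filter appends its own AND clause, so 10 filters stay 10 IF blocks instead of a combinatorial pile of SELECTs.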


Need Efficient Query

Jun 12, 2008

This query is giving me a very slow search. What would be a more efficient way to write it?


SELECT (SELECT COUNT(applicationID)
        FROM Vw_rptBranchOffice
        WHERE statusDate BETWEEN '2008-03-13 16:12:11.513' AND '2008-05-30 00:00:00.000'
          AND SearchString LIKE '%del%') AS totalNO,
       ApplicationID, SearchString, StudentName, IntakeID, CounslrStatusDate
FROM Vw_rptBranchOffice
WHERE statusDate BETWEEN '2008-03-13 16:12:11.513' AND '2008-05-30 00:00:00.000'
  AND SearchString LIKE '%del%'
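Two things stand out: the leading-wildcard LIKE '%del%' forces a scan, and the whole filter runs twice (once for the count, once for the rows). The second cost can be removed by filtering once; a hedged sketch assuming SQL Server 2005+ for COUNT(*) OVER ():

WITH filtered AS (
    SELECT ApplicationID, SearchString, StudentName, IntakeID, CounslrStatusDate
    FROM Vw_rptBranchOffice
    WHERE statusDate BETWEEN '2008-03-13 16:12:11.513' AND '2008-05-30 00:00:00.000'
      AND SearchString LIKE '%del%'
)
SELECT COUNT(*) OVER () AS totalNO,
       ApplicationID, SearchString, StudentName, IntakeID, CounslrStatusDate
FROM filtered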


More Efficient Way Than You Have Published

Dec 8, 2004

Sumanesh writes "Recently I read one article, 'Converting Multiple Rows into a CSV string'

(http://www.sqlteam.com/item.asp?ItemID=256)

I have found one easy and more efficient way to achieve the same thing.
First I have a table called test with 2 columns (ID and Data)
And some data inserted into it.

CREATE FUNCTION [dbo].CombineData(@ID SMALLINT)
RETURNS VARCHAR(2000)
AS
BEGIN
DECLARE @Data VARCHAR(2000)

SET @Data = ''
-- Concatenate every matching row's Data value, comma-separated
SELECT @Data = @Data + Data + ',' FROM Test WHERE ID = @ID
-- Guard: LEFT(@Data, -1) would raise an error when no rows matched
IF @Data = '' RETURN NULL
RETURN LEFT(@Data, LEN(@Data) - 1)

END
GO

Then I used a simple select query to achieve the same:

SELECT DISTINCT ID, [dbo].CombineData(ID) FROM Test

This achieved the same thing without using any temporary tables or cursors. I am sure you will accept that this is more efficient than the one that was published.


Thanks
Sumanesh"


Efficient SQL Backup?

Apr 19, 2006

Hi all,

I am having issues with the efficiency of backing up data from one SQL database to another. The two servers in question are on different networks, behind different firewalls. We have MS SQL 2000.

On the source server I run a job with the following steps:
1) take a transaction log backup every 4 hrs
2) ftp it to the remote server
3) if the ftp fails, disable the whole job

On the target server I run a job which does the following:
1) restore the transaction log backup with NORECOVERY.

If the job fails at the target, I have to go through the whole process of doing a complete backup of the source, restoring it at the other end, and then starting transaction log backups again. Also, if we fail over to the target server, then when we roll back to the source server we have to do a backup of the target and restore it on the source server.

Is there a more efficient way of doing this?


More Efficient - Exists Or In

Jul 20, 2005

Which is more efficient:

Select * from table1 where id in (select id from table2)

or

Select * from table1 where exists (select * from table2 where table2.id = table1.id)


Efficient Select

Jul 20, 2005

It seems I should be able to do 1 select and both return that recordset and set a variable from it, e.g.:

Declare @refid int

Select t.* from mytable as t  -- return the recordset
Set @refid = t.refid

The above doesn't work - how can I do it without making a second trip to the database?

Thanks,
Rick


Efficient Way Than IN Statement

Aug 21, 2006

I have two tables, JDECurrencyRates and JDECurrencyConversion.

I want to insert all the records from JDECurrencyRates into JDECurrencyConversion that do not already exist in the JDECurrencyConversion table. For matching I am to use three keys, i.e. FromCurrency, ToCurrency and EffDate.

To achieve this task I wrote the following query:

INSERT INTO PresentationEurope.dbo.JDECurrencyConversion
    (Date, FromCurrency, FromCurrencyDesc, ToCurrency, ToCurrencyDesc,
     EffDate, FromExchRate, ToExchRate, CreationDatetime, ChangeDatetime)
SELECT EffDate AS Date, FromCurrency, FromCurrencyDesc, ToCurrency, ToCurrencyDesc,
       EffDate, FromExchRate, ToExchRate, GETDATE(), GETDATE()
FROM Maintenance.dbo.JDECurrencyRates
WHERE FromCurrency NOT IN (SELECT FromCurrency FROM PresentationEurope.dbo.JDECurrencyConversion)
   OR ToCurrency NOT IN (SELECT ToCurrency FROM PresentationEurope.dbo.JDECurrencyConversion)
   OR EffDate NOT IN (SELECT EffDate FROM PresentationEurope.dbo.JDECurrencyConversion)



Can anyone suggest a better way to accomplish this task, or is this query OK (or efficient enough)?
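One issue with the OR'd NOT INs above is that they test each key column independently rather than the three-column combination, so a row is skipped whenever any one of its values already appears somewhere in the target. A hedged sketch using NOT EXISTS to compare the full composite key:

INSERT INTO PresentationEurope.dbo.JDECurrencyConversion
    (Date, FromCurrency, FromCurrencyDesc, ToCurrency, ToCurrencyDesc,
     EffDate, FromExchRate, ToExchRate, CreationDatetime, ChangeDatetime)
SELECT r.EffDate, r.FromCurrency, r.FromCurrencyDesc, r.ToCurrency, r.ToCurrencyDesc,
       r.EffDate, r.FromExchRate, r.ToExchRate, GETDATE(), GETDATE()
FROM Maintenance.dbo.JDECurrencyRates AS r
WHERE NOT EXISTS (SELECT 1
                  FROM PresentationEurope.dbo.JDECurrencyConversion AS c
                  WHERE c.FromCurrency = r.FromCurrency
                    AND c.ToCurrency   = r.ToCurrency
                    AND c.EffDate      = r.EffDate)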


Is There A More Efficient Way To Perform This Process?

Dec 11, 2007

My e-commerce site is currently running the following process when items are shipped from the manufacturer:

1. The manufacturer sends a 2-column CSV to the retailer containing an Order Number and its Shipping Tracking Number.
2. Through the web admin panel I've built, a retail staff member uploads the CSV to the server.
3. In code, the CSV is parsed. The tracking number is saved to the database attached to the Order Number.
4. After a tracking # is saved, each item in that order has its status updated to Shipped.
5. The customer is sent an email using ASPEmail from Persits which contains their tracking #.

The process seems to work without a problem so long as the CSV contains roughly 50 tracking #'s or so. The retailer has gotten insanely busy and wants to upload 3 or 4 thousand tracking #'s in a single CSV, but the process times out, even with large server timeout values being set in code. Is there a way to streamline the process to make this work more efficiently? I can provide the code if that helps.
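One way to avoid thousands of row-by-row round trips is to bulk-load the CSV into a staging table and then apply the changes as set-based statements; a hedged sketch with hypothetical table and column names:

CREATE TABLE dbo.TrackingStage (OrderNumber varchar(20), TrackingNumber varchar(40))

BULK INSERT dbo.TrackingStage
FROM 'C:\Uploads\tracking.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')

UPDATE o
SET o.TrackingNumber = s.TrackingNumber
FROM dbo.Orders AS o
JOIN dbo.TrackingStage AS s ON s.OrderNumber = o.OrderNumber

UPDATE i
SET i.Status = 'Shipped'
FROM dbo.OrderItems AS i
JOIN dbo.TrackingStage AS s ON s.OrderNumber = i.OrderNumber

The notification emails can then be sent from a background loop instead of inside the upload request.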


Date Efficient Query ??

Dec 11, 2007

I have a table that has a date and time column. I need to do a search on the table by day, and will eventually need to do it by hour as well.

I wanted to ask whether performance will get better if I create two additional columns, one stating the "Day of Week" and the other the "Hour of Day". These would hold prepopulated numerical values, i.e. 1 for Sunday, 2 for Monday, 7 for Saturday, etc., and for the time, 1 for 1pm-1:59pm, 2 for 2pm-2:59pm, 3 for 3pm-3:59pm, etc.

The total number of rows in the table could reach half a million; filtered by day of week it may be reduced to 80,000 or so.

Is adding two numeric columns to the table and putting indexes on those two numeric fields a good, efficient solution, or is it better to use the DATEPART functionality on the actual date column, with the day-of-week and time parameters as the case may be?

Thanks for your help.
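On SQL Server 2005 and later, one middle ground is persisted computed columns, which stay correct automatically and can be indexed; a hedged sketch with hypothetical table and column names (the DATEDIFF trick is used because DATEPART(weekday, ...) depends on the DATEFIRST setting and so cannot be persisted):

-- 0 = Monday ... 6 = Sunday, independent of DATEFIRST
ALTER TABLE dbo.Events ADD DayOfWeek AS (DATEDIFF(day, 0, EventDate) % 7) PERSISTED
ALTER TABLE dbo.Events ADD HourOfDay AS DATEPART(hour, EventDate) PERSISTED

CREATE INDEX IX_Events_DayHour ON dbo.Events (DayOfWeek, HourOfDay)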
 
Thanks
 


More Efficient Updates Of SQL Server

Feb 6, 2001

If I have 5000 rows to update with different values based upon the primary key, is there a better way to do it than 5000 separate update statements?
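One common pattern is to bulk-load the key/value pairs into a temp table and apply them with a single joined UPDATE; a minimal sketch with hypothetical table and column names:

CREATE TABLE #NewValues (ID int PRIMARY KEY, NewValue varchar(50))

-- load the 5000 (ID, NewValue) pairs here, e.g. with bcp or a generated INSERT script

UPDATE t
SET t.SomeColumn = n.NewValue
FROM dbo.MyTable AS t
JOIN #NewValues AS n ON n.ID = t.ID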


More Efficient Date Conversion

Sep 25, 2000

I have a function which works that converts getdate() to an 8-character string. I have tried other ways, but this one works OK. However, the more I look at it, the more I think a more efficient way has to exist. Any ideas greatly appreciated. Here is my approach:

declare @order_date char(8), @year char(4), @month char(2), @day char(2)

set @year = cast(datepart(yyyy, getdate()) as char(4))

if datepart(dd, getdate()) < 10
    set @day = '0' + cast(datepart(dd, getdate()) as char(2))
else
    set @day = cast(datepart(dd, getdate()) as char(2))

if datepart(mm, getdate()) < 10
    set @month = '0' + cast(datepart(mm, getdate()) as char(2))
else
    set @month = cast(datepart(mm, getdate()) as char(2))

set @order_date = @year + @month + @day

select @order_date
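For what it's worth, CONVERT style 112 produces exactly this yyyymmdd string in a single call:

select convert(char(8), getdate(), 112)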


Efficient JOIN Query

Mar 15, 2006

Please help me with an efficient JOIN query to produce the result below:


create table pk1(col1 int)

create table pk2(col1 int)

create table pk3(col1 int)

create table fk(col1 int, col2 int NOT NULL, col3 int, col4 int)


insert into pk1 values(1)
insert into pk1 values(2)
insert into pk1 values(3)

insert into pk2 values(1)
insert into pk2 values(2)
insert into pk2 values(3)

insert into pk3 values(1)
insert into pk3 values(2)
insert into pk3 values(3)

insert into fk values(1, 1, null, 10)
insert into fk values(null, 1, 1, 20)
insert into fk values(1, 1,null, 30)
insert into fk values(1, 1, null, 40)
insert into fk values(1, 1, 1, 70)
insert into fk values(2, 3, 1, 60)
insert into fk values(1, 1, 1, 100)
insert into fk values(2, 2, 3, 80)
insert into fk values(null, 1, 2, 50)
insert into fk values(null, 1, 4, 150)
insert into fk values(5, 1, 2, 250)
insert into fk values(6, 7, 8, 350)
insert into fk values(10, 1, null, 450)

The below query gives this result:

select fk.*
from fk
inner join pk1 on pk1.col1 = fk.col1
inner join pk2 on pk2.col1 = fk.col2
inner join pk3 on pk3.col1 = fk.col3

Result :
+------+------+------+------+
| col1 | col2 | col3 | col4 |
+------+------+------+------+
|    1 |    1 |    1 |   70 |
|    2 |    3 |    1 |   60 |
|    1 |    1 |    1 |  100 |
|    2 |    2 |    3 |   80 |
+------+------+------+------+

But I also require the rows with NULL values in col1 and col3.

Hence I am doing the below:

select distinct fk.*
from fk
inner join pk1 on pk1.col1 = fk.col1 or fk.col1 is null
inner join pk2 on pk2.col1 = fk.col2
inner join pk3 on pk3.col1 = fk.col3 or fk.col3 is null

+------+------+------+------+
| col1 | col2 | col3 | col4 |
+------+------+------+------+
| null |    1 |    1 |   20 |
| null |    1 |    2 |   50 |
|    1 |    1 | null |   10 |
|    1 |    1 | null |   30 |
|    1 |    1 | null |   40 |
|    1 |    1 |    1 |   70 |
|    1 |    1 |    1 |  100 |
|    2 |    2 |    3 |   80 |
|    2 |    3 |    1 |   60 |
+------+------+------+------+

The above is the required output, but the query will be very slow if there are more NULL-valued rows in col1 and col3, since I also need DISTINCT when I use the 'IS NULL' check in the JOIN.

Please let me know if there is an alternative to this query which can return the same result set in a more efficient manner.
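A hedged sketch of one alternative: keep the mandatory key as an INNER JOIN, turn the optional keys into LEFT JOINs, and accept a row when its key is NULL or its join found a match. Assuming col1 is unique within each pk table (true for the sample data), each fk row matches at most once, so DISTINCT is no longer needed:

select fk.*
from fk
inner join pk2 on pk2.col1 = fk.col2
left join pk1 on pk1.col1 = fk.col1
left join pk3 on pk3.col1 = fk.col3
where (fk.col1 is null or pk1.col1 is not null)
  and (fk.col3 is null or pk3.col1 is not null)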


Most Efficient FIFO Logging.

Jul 14, 2006

My buddy has an application that logs entries, and for space reasons needs to retain only a maximum of N records in the log table. He wants to delete old log entries as part of the insert procedure. Here is his stab at it:

CREATE PROCEDURE spInserttblLog
@Message varchar(1024),
@LogDate datetime,
@ElapsedSeconds float,
@LogLevel varchar(50),
@UserName varchar(100),
@ProcessName varchar(100),
@MachineName varchar(100),
@MaxEntries int=0
AS
declare @SQL varchar(300)
if @MaxEntries > 0
Begin
set @MaxEntries = @MaxEntries -1
set @SQL='Delete From tblLog Where LogID Not In (Select Top ' + cast(@MaxEntries as varchar) + ' LogID from tblLog order by LogID desc)'
execute (@SQL)
End

insert into tblLog
(Message,
LogDate,
ElapsedSeconds,
LogLevel,
UserName,
ProcessName,
MachineName)
values(@Message,
@LogDate,
@ElapsedSeconds,
@LogLevel,
@UserName,
@ProcessName,
@MachineName)


I think this would be more efficient:

--SQL Server
CREATE PROCEDURE spInserttblLog
@Message varchar(1024),
@LogDate datetime,
@ElapsedSeconds float,
@LogLevel varchar(50),
@UserName varchar(100),
@ProcessName varchar(100),
@MachineName varchar(100),
@MaxEntries int=0
AS

delete
from tblLog
where LogID < (select max(LogID) from tblLog) - @MaxEntries - 2
and @MaxEntries > 0

insert into tblLog
(Message,
LogDate,
ElapsedSeconds,
LogLevel,
UserName,
ProcessName,
MachineName)
values(@Message,
@LogDate,
@ElapsedSeconds,
@LogLevel,
@UserName,
@ProcessName,
@MachineName)

Comments, or suggestions for an even faster method? This is a relatively high-activity table, so there are a lot of inserts.


Efficient Way Of Writing This Query

Mar 31, 2008

I am using a table many times in LEFT OUTER JOINs and INNER JOINs for various conditions.
Is there any way of writing the query with minimal table usage, instead of hitting the same table repeatedly?

**********************************
SELECT
blog.blogid,
BM.TITLE,
U.USER_FIRSTNAME+ ' ' + U.USER_LASTNAME AS AUTHORNAME,
Blog_Entries = (CASE WHEN Blog_Entries IS NULL OR Blog_Entries = ' ' THEN 0 ELSE Blog_Entries END),
Blog_NewEntries = (CASE WHEN Blog_NewEntries IS NULL OR Blog_NewEntries = ' ' THEN 0 ELSE Blog_NewEntries END),
Blog_comments = (CASE WHEN Blog_comments IS NULL OR Blog_comments = ' ' THEN 0 ELSE Blog_comments END),
dbo.DateFloor(VCOM.objCreationDate) AS CreationDate,
dbo.DateFloor(BLE.entryDate) AS Date_LastEntry
FROM vportal4VSEARCHCOMM.dbo.blog_metaData BM
INNER JOIN vportal4VSEARCHCOMM.dbo.blog BLOG
ON BM.BLOGID = BLOG.BLOGID
INNER JOIN vportal4VSEARCH.dbo.[USER] U
ON U.USER_ID = BLOG.OWNERID
INNER JOIN vportal4VSEARCHCOMM.dbo.vComm_obj VCOM
ON BLOG.vCommObjID = VCOM.vCommObjId
INNER JOIN vportal4VSEARCHCOMM.dbo.blog_entry BLE
ON BLOG.BLOGID = BLE.BLOGID
LEFT OUTER JOIN
(SELECT BlogID, Blog_Entries = COUNT(*)
FROM vportal4VSEARCHCOMM.dbo.Blog_Entry
GROUP BY BlogID )B on B.BLOGID = BM.BLOGID
LEFT OUTER JOIN
(
SELECT BlogID, Blog_NewEntries = COUNT(*)
FROM vportal4VSEARCHCOMM.dbo.Blog_Entry
WHERE ENTRYDATE > '01/01/2008'
GROUP BY BlogID )C on C.BLOGID = BM.BLOGID
LEFT OUTER JOIN
(
SELECT BEN.BLOGID, Blog_comments = COUNT(*)
FROM vportal4VSEARCHCOMM.dbo.blog_comment BC
INNER JOIN vportal4VSEARCHCOMM.dbo.blog_entry BEN
ON BEN.blog_entryId = BC.blogEntryId
GROUP BY BEN.BLOGID )D on D.BLOGID = BM.BLOGID
WHERE VCOM.objName like '%blog%'
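For what it's worth, the first two derived tables both scan Blog_Entry; they can usually be collapsed into a single pass with conditional aggregation. A hedged sketch of the combined join (replacing the B and C joins above):

LEFT OUTER JOIN
(SELECT BlogID,
        Blog_Entries = COUNT(*),
        Blog_NewEntries = SUM(CASE WHEN EntryDate > '01/01/2008' THEN 1 ELSE 0 END)
 FROM vportal4VSEARCHCOMM.dbo.Blog_Entry
 GROUP BY BlogID) BC ON BC.BlogID = BM.BLOGID

The SELECT list then reads BC.Blog_Entries and BC.Blog_NewEntries; a similar trick does not apply to the comments subquery, which joins a different table.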



thanks


More Efficient Than LEFT JOIN

Feb 15, 2006

I have a table with data that is refreshed regularly, but I still need to store the old data. I have created a separate table with a foreign key to the table and the date on which it was replaced. I'm looking for an efficient way to select only the active data.

Currently I use:

SELECT ...
FROM DataTable AS D
LEFT OUTER JOIN InactiveTable AS I ON I.Key = D.Key
WHERE I.Key IS NULL

However I am not convinced that this is the most efficient, or the most intuitive, method of achieving this.

Can anyone suggest a more efficient way of getting this information please.

Many thanks.
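A hedged sketch of the equivalent NOT EXISTS form, which reads more directly as "rows with no inactive match" and which the optimizer typically plans as the same anti-join:

SELECT ...
FROM DataTable AS D
WHERE NOT EXISTS (SELECT 1 FROM InactiveTable AS I WHERE I.Key = D.Key)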


Most Efficient Way To Run Update Query

Jun 9, 2006

Hi all,

Any thoughts on the best way to run an update query to update a specific list of records where all records get updated to the same thing? I would think a temp table to hold the list would be best, but am also looking at what is easiest for an end user to run. The list of items is over 7000.

Example:

update imitmidx_sql set activity_cd = 'O', activity_dt = '20060601', prod_cat = 'OBS' where item_no = '001-LBK'
update imitmidx_sql set activity_cd = 'O', activity_dt = '20060601', prod_cat = 'OBS' where item_no = '001-LYE'
update imitmidx_sql set activity_cd = 'O', activity_dt = '20060601', prod_cat = 'OBS' where item_no = '001-XLBK'
update imitmidx_sql set activity_cd = 'O', activity_dt = '20060601', prod_cat = 'OBS' where item_no = '001-XLYE'
update imitmidx_sql set activity_cd = 'O', activity_dt = '20060601', prod_cat = 'OBS' where item_no = '002-LGR'
update imitmidx_sql set activity_cd = 'O', activity_dt = '20060601', prod_cat = 'OBS' where item_no = '002-LRE'

All records get set to the same values. I tried using an IN list, but this was significantly slower:

update imitmidx_sql set activity_cd = 'O', activity_dt = '20060601', prod_cat = 'OBS'
where item_no in ('001-LBK','001-LYE','001-XLBK','001-XLYE','002-LGR','002-LRE')

Thanks


Please Help Obi-Wan: Efficient Join/Cursor/Something/Anything?

Jul 20, 2005

Hi all

I have a bit of a dilemma that I am hoping some of you smart dudes might be able to help me with.

1. I have a table with about 50 million records in it and quite a few columns. [Table A]
2. I have another table with just over 300 records in it and a single column (besides the id). [Table B]
3. I want to: select all of those records from Table A where [Table A].description does NOT contain any of (select color from [Table B]).
4. An example:

Table A
id ... [other columns] ... description
1   the green hornet
2   a red ball
3   a green dog
4   the yellow submarine
5   the pink panther

Table B
id   color
55   blue
56   gold
57   green
58   purple
59   pink
60   white

So I want to select all those rows in Table A where none of the words from Table B.color appear in the description field in Table A. I.e., the query would return the following from Table A:

2   a red ball
4   the yellow submarine

The real life problem has more variables and is a little more complicated than this, but this should suffice to give me the right idea. Due to the number of rows involved I need this to be relatively efficient. Can someone suggest the most efficient way to proceed?

PS. Please excuse my ignorance.

Cheers
Sean
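A hedged sketch of the basic anti-match (the table names TableA and TableB are hypothetical; the description is padded with spaces so each color only matches as a whole word). With 50 million rows this is still necessarily a scan, so expect it to be expensive however it is phrased:

SELECT a.*
FROM TableA AS a
WHERE NOT EXISTS (SELECT 1
                  FROM TableB AS b
                  WHERE ' ' + a.description + ' ' LIKE '% ' + b.color + ' %')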


Which Is More Efficient To Use: Joins Or SubQuery

Aug 28, 2007







