ID  Status  Type    Check_Num  Issued  IssueTime  Paid    PaidTime
------------------------------------------------------------------
1   I       <null>  10         10.00   2/1/02     <null>  <null>
2   E       IDA     10         <null>  <null>     10.01   2/3/02
3   E       CAP     10         <null>  <null>     10.00   2/4/02
4   E       PNI     11         <null>  <null>     15.00   2/6/02
I want to return the Check_Num, Type, Paid, and Max(PaidTime) from this...
Example:
Check_Num Type Paid Time
---------------------------
10 CAP 10.00 2/4/02
11 PNI 15.00 2/6/02
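The usual shape for this (a sketch; the table name Checks is an assumption, since the post doesn't name the table) is a correlated max per Check_Num, which against the sample data above returns exactly the two rows shown:

SELECT c.Check_Num, c.Type, c.Paid, c.PaidTime
FROM Checks c
WHERE c.PaidTime = (SELECT MAX(c2.PaidTime)
                    FROM Checks c2
                    WHERE c2.Check_Num = c.Check_Num)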
It seems I should be able to do 1 select and both return that recordset and be able to set a variable from that recordset, e.g.

Declare @refid int
Select t.* from mytable as t  --return the recordset
Set @refid = t.refid

The above doesn't work - how can I do it without making a second trip to the database?
Thanks,
Rick
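One pattern sometimes used for this (just a sketch, assuming SQL Server 2000 or later for the table variable; which refid to keep when there are several rows is up to you, MAX is only a placeholder) is to read the base table once into a table variable, then both return it and assign from it:

Declare @refid int
Declare @t table (refid int /* , ...the other columns of mytable... */)

Insert into @t (refid /* , ... */)
Select refid /* , ... */ from mytable      -- the single trip to the base table

Select * from @t                           -- return the recordset to the client
Select @refid = MAX(refid) from @t         -- set the variable from the same rows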
Which way of retrieving a record is more efficient?

Select tbl1.field1, tbl2.field1
from table1 tbl1
inner join table2 tbl2 on tbl1.id = tbl2.id
where someid = somevalue and someid = somevalue

or

Select
field1 = (Select field1 from table1 where someid = somevalue),
field2 = (Select field2 from table2 where someid = somevalue)
I have read a lot of topics about execution plans for queries, but I haven't gotten much out of them. Please give me some help, with examples, on comparing different select statements to find the most efficient one.
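A common way to compare two candidate statements (a sketch; the Orders/Customers tables and the 'ALFKI' value are only example names, substitute your own queries) is to turn on I/O and timing statistics, run the versions back to back, and compare logical reads and CPU time. In Query Analyzer / Management Studio you can also display the estimated or actual execution plan for each statement and compare the operators.

SET STATISTICS IO ON
SET STATISTICS TIME ON

-- version 1
SELECT o.OrderID FROM Orders o WHERE o.CustomerID = 'ALFKI'

-- version 2
SELECT o.OrderID
FROM Orders o
INNER JOIN Customers c ON c.CustomerID = o.CustomerID
WHERE c.CustomerID = 'ALFKI'

SET STATISTICS IO OFF
SET STATISTICS TIME OFF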
Hi there, I'm using a Repeater at the moment which is bound to a SqlDataSource. I expect a lot of load on that website; should I choose another data source? Which other data source is better when it comes to performance? I read some stuff about the SqlDataAdapter and a DataSet... is that better in performance? Why is it better? What about LINQ? Thanks a lot for any clarification.
Hi all, I have the code listed below and feel that it could be run much more efficiently. I run this same code for attrib2, 3, description, etc. for a total of 21, so on each postback I am running a total of 21 different connections; I have listed only 3 of them here for the general idea. I run this same code for update and for insert, so 21 times for each of them as well. In fact, if someone is adding a customer, after they hit the new customer button, it first runs 21 inserts of blanks for each field, then runs 21 updates for anything they put in fields, on the same records. This is running too slow... any ideas on how I can combine these?? We have 21 different entries for EVERY customer. The Pf_property does not change, it is 21 different set entries; the only one that changes is the Pf_Value.

Try
    Dim queryString As String = "select Pf_Value from CustomerPOFlexField where [Pf_property] = 'Attrib1' and [Pf_CustomerNo] = @CustomerNo"
    Dim connection As New SqlClient.SqlConnection("connectionstring")
    Dim command As SqlClient.SqlCommand = New SqlClient.SqlCommand(queryString, connection)
    command.Parameters.AddWithValue("@CustomerNo", DropDownlist1.SelectedValue)
    Dim reader As SqlClient.SqlDataReader
    command.Connection.Open()
    reader = command.ExecuteReader
    reader.Read()
    TextBox2.Text = Convert.ToString(reader("Pf_Value"))
    command.Connection.Close()
Catch ex As SystemException
    Response.Write(ex.ToString)
End Try

Try
    Dim queryString As String = "select Pf_Value from CustomerPOFlexField where [Pf_property] = 'Attrib1Regex' and [Pf_CustomerNo] = @CustomerNo"
    Dim connection As New SqlClient.SqlConnection("connectionstring")
    Dim command As SqlClient.SqlCommand = New SqlClient.SqlCommand(queryString, connection)
    command.Parameters.AddWithValue("@CustomerNo", DropDownlist1.SelectedValue)
    Dim reader As SqlClient.SqlDataReader
    command.Connection.Open()
    reader = command.ExecuteReader
    reader.Read()
    TextBox5.Text = Convert.ToString(reader("Pf_Value"))
    command.Connection.Close()
Catch ex As SystemException
    Response.Write(ex.ToString)
End Try

Try
    Dim queryString As String = "select Pf_Value from CustomerPOFlexField where [Pf_property] = 'Attrib1ValMessage' and [Pf_CustomerNo] = @CustomerNo"
    Dim connection As New SqlClient.SqlConnection("connectionstring")
    Dim command As SqlClient.SqlCommand = New SqlClient.SqlCommand(queryString, connection)
    command.Parameters.AddWithValue("@CustomerNo", DropDownlist1.SelectedValue)
    Dim reader As SqlClient.SqlDataReader
    command.Connection.Open()
    reader = command.ExecuteReader
    reader.Read()
    TextBox6.Text = Convert.ToString(reader("Pf_Value"))
    command.Connection.Close()
Catch ex As SystemException
    Response.Write(ex.ToString)
End Try

Thanks, Randy
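One way to collapse the 21 reads into a single round trip (a sketch, using the same CustomerPOFlexField table and columns as the code above) is to fetch every property/value pair for the customer at once and assign the textboxes from that one result set on the client:

select Pf_property, Pf_Value
from CustomerPOFlexField
where Pf_CustomerNo = @CustomerNo

Read this with a single SqlDataReader (or fill a DataTable) and map each Pf_property to its textbox in a loop; the same idea applies to the inserts and updates, which can be sent as one batch or one stored procedure call instead of 21 separate connections each.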
What's the difference, if I use SqlDataReader at code level, between making a query that retrieves 500 rows and 2 columns, and making a query that retrieves 2 rows and 500 columns?
SELECT distinct s.sell_itm_id FROM stor_sell_itm s WHERE (s.sell_itm_id = @SellItemID )
However, if I use this WHERE clause instead -
WHERE (@SellItemID = 0 OR s.sell_itm_id = @SellItemID)
- it takes 70 microseconds. When I join a few more tables into the statement, the difference is 4 seconds!
This is an example of a technique I'm using in loads of places - I only want the statement to return all records if the filter is zero, otherwise the matching record only. I think that by checking the value of the variable in the WHERE clause, a table scan is used instead of an index. This seems nonsensical since the variable is effectively a constant. Wrapping the entire select statement with an IF or CASE works, but when I've got 10 filters I'd have to write 100 select statements. I DON'T GET IT!! There must be a simple answer, HELP!! Jo
PS this problem seems to occur both in 6.5 and 7.0
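A common workaround (a sketch for a single filter; it does not scale neatly to ten of them) is to split the two cases so each branch is a plain, index-friendly predicate:

IF @SellItemID = 0
    SELECT DISTINCT s.sell_itm_id FROM stor_sell_itm s
ELSE
    SELECT DISTINCT s.sell_itm_id FROM stor_sell_itm s WHERE s.sell_itm_id = @SellItemID

With many optional filters, the usual alternative on 6.5/7.0 is to build the WHERE clause dynamically (EXEC of a constructed string) so the optimizer only ever sees the predicates that are actually in play.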
This query is giving me a very slow search. What could be a more efficient way?
SELECT (SELECT COUNT(applicationID)
        FROM Vw_rptBranchOffice
        WHERE statusDate BETWEEN '2008-03-13 16:12:11.513' AND '2008-05-30 00:00:00.000'
          AND SearchString LIKE '%del%') AS totalNO,
       ApplicationID, SearchString, StudentName, IntakeID, CounslrStatusDate
FROM Vw_rptBranchOffice
WHERE statusDate BETWEEN '2008-03-13 16:12:11.513' AND '2008-05-30 00:00:00.000'
  AND SearchString LIKE '%del%'
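If this is running on SQL Server 2005 or later (an assumption), the total can be taken from the same pass with a window aggregate instead of repeating the whole filtered read of the view:

SELECT COUNT(ApplicationID) OVER () AS totalNO,
       ApplicationID, SearchString, StudentName, IntakeID, CounslrStatusDate
FROM Vw_rptBranchOffice
WHERE statusDate BETWEEN '2008-03-13 16:12:11.513' AND '2008-05-30 00:00:00.000'
  AND SearchString LIKE '%del%'

Note that the leading-wildcard LIKE '%del%' will still force a scan of the qualifying rows either way; the window function only removes the second pass.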
Sumanesh writes "Recently I read one article “Converting Multiple Rows into a CSV string”
(http://www.sqlteam.com/item.asp?ItemID=256)
I have found one easy and more efficient way to achieve the same thing. First I have a table called Test with 2 columns (ID and Data), and some data inserted into it.
CREATE FUNCTION [dbo].CombineData(@ID SMALLINT) RETURNS VARCHAR(2000)
AS
BEGIN
    DECLARE @Data VARCHAR(2000)
    SET @Data = ''
    SELECT @Data = @Data + Data + ',' FROM Test WHERE ID = @ID
    RETURN LEFT(@Data, LEN(@Data) - 1)
END
GO
Then I used a simple select query to achieve the same:
SELECT DISTINCT ID, [dbo].CombineData(ID) FROM Test
Which achieved the same thing without using any temporary tables and Cursors. I am sure that you will accept that this is more efficient than the one that has been published.
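For what it's worth, on SQL Server 2005 and later the same CSV column can also be produced without a scalar UDF at all, using FOR XML PATH (a different technique from the one above, shown only as a sketch against the same Test table):

SELECT t.ID,
       STUFF((SELECT ',' + t2.Data
              FROM Test t2
              WHERE t2.ID = t.ID
              FOR XML PATH('')), 1, 1, '') AS CombinedData
FROM Test t
GROUP BY t.ID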
Hi all, I am having issues with the efficiency of backing up data from one SQL database to another. The two servers in question are on different networks, behind different firewalls. We have MS SQL 2000.

On the source data I run a job with the following steps:
1> take trans backup every 4 hrs
2> ftp to the remote server
3> if ftp fails, disable the whole job

On the target server I run a job which does the following:
1> restore the trans backup with NORECOVERY.

If the job fails at the target, I will have to go through the whole process of doing a complete backup of the source, restoring it at the other end and then starting trans-backup again. Also, if we do a failover to the target server, then when we roll back to the source server again we have to do a back-up of the target and restore it on the source server. Is there a more efficient way of doing this??
Which is more efficient:

Select * from table1 where id in (select id from table2)

or

Select * from table1 where exists (select * from table2 where table2.id = table1.id)
I have two tables, JDECurrencyRates and JDECurrencyConversion.

I want to insert all the records from JDECurrencyRates into JDECurrencyConversion that do not already exist in the JDECurrencyConversion table. For matching I am to use three keys, i.e. FromCurrency, ToCurrency and EffDate.

To achieve this task I wrote the following query:
INSERT INTO PresentationEurope.dbo.JDECurrencyConversion(Date,FromCurrency,FromCurrencyDesc,
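The typical shape for this kind of "insert only the missing rows" statement is a NOT EXISTS test on the three matching keys. This is only a sketch: the column list is abbreviated, and the exact spelling of the key columns (FromCurrency, ToCurrency, EffDate) is taken from the description above, so adjust it to the real schema.

INSERT INTO PresentationEurope.dbo.JDECurrencyConversion ([Date], FromCurrency, FromCurrencyDesc /* , ...remaining columns... */)
SELECT r.[Date], r.FromCurrency, r.FromCurrencyDesc /* , ...remaining columns... */
FROM JDECurrencyRates r
WHERE NOT EXISTS (SELECT 1
                  FROM PresentationEurope.dbo.JDECurrencyConversion c
                  WHERE c.FromCurrency = r.FromCurrency
                    AND c.ToCurrency = r.ToCurrency
                    AND c.EffDate = r.EffDate)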
I have a number of tables, and I need to create a summary of the data. Now I have two choices.

I could create a stored procedure that creates a temp table with the filters applied. This would require 12 selects for each year (we have 3 years of data so far). This means 36 selects so far that fill a temp table.

The second choice is to create a table with the required columns and rows. As I am essentially cross-joining 4 tables, the table would have about 10 x 10 x 12 x A rows, where A is a variable that will grow quickly. I can then keep this table up-to-date using triggers for each insert. Then all I need to do is use an aggregate function on the relevant filter.

My question is, which is more efficient: creating a stored procedure that creates the table dynamically, or having a table with thousands of rows and using an aggregate function on these?

PS I am using SQL Server 2000
Jag
My e-commerce site is currently running the following process when items are shipped from the manufacturer:

1. Manufacturer sends a 2-column CSV to the retailer containing an Order Number and its Shipping Tracking Number.
2. Through the web admin panel I've built, a retail staff member uploads the CSV to the server.
3. In code, the CSV is parsed. The tracking number is saved to the database attached to the Order Number.
4. After a tracking # is saved, each item in that order has its status updated to Shipped.
5. The customer is sent an email using ASPEmail from Persits which contains their tracking #.

The process seems to work without a problem so long as the CSV contains roughly 50 tracking #'s or so. The retailer has gotten insanely busy and wants to upload 3 or 4 thousand tracking #'s in a single CSV, but the process times out, even with large server timeout values being set in code. Is there a way to streamline the process to make this work more efficiently? I can provide the code if that helps.
I have a table that has a date and time column. I need to do a search on the table by days and will eventually need to do it by hours as well. I wanted to ask whether performance will get better if I create two additional columns, one stating the "Day of Week" and the other stating the "Hour of Week". These will have numerical values prepopulated, i.e. for Saturday 7, Sunday 1, Monday 2, etc. And for the time, I will have 1 for 1pm-1:59pm, 2 for 2pm-2:59pm, 3 for 3pm-3:59pm, etc. The total number of rows in the table could total half a million; filtered by day of week it may be reduced to 80,000 or so. Is adding two numeric columns to the table and putting indexes on those two numeric fields a good, efficient solution, or is it better to just use the datepart functionality against the actual date column with the day-of-week and time parameters as the case may be? Thanks for your help.
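One middle ground (a sketch; the MyEvents table and EventDate column names are assumptions, and the numbering differs from the Saturday=7 scheme above, it only has to be consistent) is to add the two values as computed columns rather than manually populated ones, and index them. They then stay in sync automatically, and the expressions are deterministic so the indexes are allowed:

ALTER TABLE MyEvents ADD DayOfWeekNo AS (DATEDIFF(day, 0, EventDate) % 7)  -- 0 = Monday ... 6 = Sunday
ALTER TABLE MyEvents ADD HourOfDayNo AS DATEPART(hour, EventDate)
GO
CREATE INDEX IX_MyEvents_DayOfWeek ON MyEvents (DayOfWeekNo)
CREATE INDEX IX_MyEvents_HourOfDay ON MyEvents (HourOfDayNo)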
I have a function which works that converts getdate() to an 8 character string. I have tried other ways but this one works OK. However, the more I look at it the more I think a more efficient way has to exist. Any ideas greatly appreciated. Here is my approach:
declare @order_date char(8), @year char(4), @month char(2), @day char(2)

set @year = cast(datepart(yyyy, getdate()) as char(4))

if datepart(dd, getdate()) < 10
    set @day = '0' + cast(datepart(dd, getdate()) as char(2))
else
    set @day = cast(datepart(dd, getdate()) as char(2))

if datepart(mm, getdate()) < 10
    set @month = '0' + cast(datepart(mm, getdate()) as char(2))
else
    set @month = cast(datepart(mm, getdate()) as char(2))

set @order_date = @year + @month + @day
select @order_date
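For what it's worth, the same yyyymmdd string can be produced in a single call with CONVERT style 112, which replaces the whole routine above:

declare @order_date char(8)
set @order_date = convert(char(8), getdate(), 112)   -- style 112 = yyyymmdd
select @order_date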
Please help me with an efficient JOIN query to produce the result below:
create table pk1(col1 int)
create table pk2(col1 int)
create table pk3(col1 int)
create table fk(col1 int, col2 int NOT NULL, col3 int, col4 int)
insert into pk1 values(1)
insert into pk1 values(2)
insert into pk1 values(3)

insert into pk2 values(1)
insert into pk2 values(2)
insert into pk2 values(3)

insert into pk3 values(1)
insert into pk3 values(2)
insert into pk3 values(3)

insert into fk values(1, 1, null, 10)
insert into fk values(null, 1, 1, 20)
insert into fk values(1, 1, null, 30)
insert into fk values(1, 1, null, 40)
insert into fk values(1, 1, 1, 70)
insert into fk values(2, 3, 1, 60)
insert into fk values(1, 1, 1, 100)
insert into fk values(2, 2, 3, 80)
insert into fk values(null, 1, 2, 50)
insert into fk values(null, 1, 4, 150)
insert into fk values(5, 1, 2, 250)
insert into fk values(6, 7, 8, 350)
insert into fk values(10, 1, null, 450)
The below query will give the result:

select fk.*
from fk
inner join pk1 on pk1.col1 = fk.col1
inner join pk2 on pk2.col1 = fk.col2
inner join pk3 on pk3.col1 = fk.col3

But I also require the NULL values in col1 and col3.
Hence I am doing the below:

select distinct fk.*
from fk
inner join pk1 on pk1.col1 = fk.col1 or fk.col1 is null
inner join pk2 on pk2.col1 = fk.col2
inner join pk3 on pk3.col1 = fk.col3 or fk.col3 is null
The above gives the required output, but the query will be very slow if there are many NULL-valued rows in col1 and col3, since I also need to use DISTINCT when I use the 'IS NULL' check in the JOIN.

Please let me know if there is an alternative to this query which can return the same result set in an efficient manner.
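One alternative that avoids both the OR conditions in the joins and the DISTINCT is to express the lookups as EXISTS tests in the WHERE clause (a sketch; it should return the same rows as the DISTINCT query above, because each pk table only needs to contain a match rather than multiply the fk rows):

select fk.*
from fk
where (fk.col1 is null or exists (select 1 from pk1 where pk1.col1 = fk.col1))
  and exists (select 1 from pk2 where pk2.col1 = fk.col2)
  and (fk.col3 is null or exists (select 1 from pk3 where pk3.col1 = fk.col3))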
My buddy has an application that logs entries, and for space reasons needs to retain only a maximum of N records in the log table. He wants to delete old log entries as part of the insert procedure. Here is his stab at it:

CREATE PROCEDURE spInserttblLog
    @Message varchar(1024),
    @LogDate datetime,
    @ElapsedSeconds float,
    @LogLevel varchar(50),
    @UserName varchar(100),
    @ProcessName varchar(100),
    @MachineName varchar(100),
    @MaxEntries int = 0
AS
declare @SQL varchar(300)
if @MaxEntries > 0
Begin
    set @MaxEntries = @MaxEntries - 1
    set @SQL = 'Delete From tblLog Where LogID Not In (Select Top ' + cast(@MaxEntries as varchar) + ' LogID from tblLog order by LogID desc)'
    execute (@SQL)
End
I think this would be more efficient:

--SQL Server
CREATE PROCEDURE spInserttblLog
    @Message varchar(1024),
    @LogDate datetime,
    @ElapsedSeconds float,
    @LogLevel varchar(50),
    @UserName varchar(100),
    @ProcessName varchar(100),
    @MachineName varchar(100),
    @MaxEntries int = 0
AS

delete from tblLog
where LogID < (select max(LogID) from tblLog) - @MaxEntries - 2
  and @MaxEntries > 0

insert into tblLog (Message, LogDate, ElapsedSeconds, LogLevel, UserName, ProcessName, MachineName)
values (@Message, @LogDate, @ElapsedSeconds, @LogLevel, @UserName, @ProcessName, @MachineName)

Comments, or suggestions for an even faster method? This is a relatively high-activity table, so there are a lot of inserts.
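If this ends up on SQL Server 2005 or later, the dynamic-SQL delete in the first procedure can also be written without EXEC, since TOP accepts a variable there (a sketch of just that one statement; on SQL 2000 the constructed string is still needed):

delete from tblLog
where LogID not in (select top (@MaxEntries) LogID
                    from tblLog
                    order by LogID desc)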
I am using a table many times in left outer joins and inner joins for various conditions. Is there any way of writing the query with minimal table usage, instead of repeating it all the time?
**********************************
SELECT blog.blogid,
       BM.TITLE,
       U.USER_FIRSTNAME + ' ' + U.USER_LASTNAME AS AUTHORNAME,
       Blog_Entries = (CASE WHEN Blog_Entries IS NULL OR Blog_Entries = ' ' THEN 0 ELSE Blog_Entries END),
       Blog_NewEntries = (CASE WHEN Blog_NewEntries IS NULL OR Blog_NewEntries = ' ' THEN 0 ELSE Blog_NewEntries END),
       Blog_comments = (CASE WHEN Blog_comments IS NULL OR Blog_comments = ' ' THEN 0 ELSE Blog_comments END),
       dbo.DateFloor(VCOM.objCreationDate) AS CreationDate,
       dbo.DateFloor(BLE.entryDate) AS Date_LastEntry
FROM vportal4VSEARCHCOMM.dbo.blog_metaData BM
INNER JOIN vportal4VSEARCHCOMM.dbo.blog BLOG ON BM.BLOGID = BLOG.BLOGID
INNER JOIN vportal4VSEARCH.dbo.[USER] U ON U.USER_ID = BLOG.OWNERID
INNER JOIN vportal4VSEARCHCOMM.dbo.vComm_obj VCOM ON BLOG.vCommObjID = VCOM.vCommObjId
INNER JOIN vportal4VSEARCHCOMM.dbo.blog_entry BLE ON BLOG.BLOGID = BLE.BLOGID
LEFT OUTER JOIN (SELECT BlogID, Blog_Entries = COUNT(*)
                 FROM vportal4VSEARCHCOMM.dbo.Blog_Entry
                 GROUP BY BlogID) B ON B.BLOGID = BM.BLOGID
LEFT OUTER JOIN (SELECT BlogID, Blog_NewEntries = COUNT(*)
                 FROM vportal4VSEARCHCOMM.dbo.Blog_Entry
                 WHERE ENTRYDATE > '01/01/2008'
                 GROUP BY BlogID) C ON C.BLOGID = BM.BLOGID
LEFT OUTER JOIN (SELECT BEN.BLOGID, Blog_comments = COUNT(*)
                 FROM vportal4VSEARCHCOMM.dbo.blog_comment BC
                 INNER JOIN vportal4VSEARCHCOMM.dbo.blog_entry BEN ON BEN.blog_entryId = BC.blogEntryId
                 GROUP BY BEN.BLOGID) D ON D.BLOGID = BM.BLOGID
WHERE VCOM.objName LIKE '%blog%'
I have a table with data that is refreshed regularly but I still need to store the old data. I have created a separate table with a foreign key to the table and the date on which it was replaced. I'm looking for an efficient way to select only the active data. Currently I use:

SELECT ...
FROM DataTable AS D
LEFT OUTER JOIN InactiveTable AS I ON I.Key = D.Key
WHERE I.Key IS NULL

However I am not convinced that this is the most efficient, or the most intuitive, method of achieving this. Can anyone suggest a more efficient way of getting this information please.
Many thanks.

*** Sent via Developersdex http://www.developersdex.com ***
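A NOT EXISTS version is the usual thing to benchmark against the LEFT JOIN (a sketch; D.* stands in for whatever columns the original SELECT lists, and the key column is bracketed only because KEY is a reserved word). With an index on InactiveTable's key column the two typically produce very similar plans:

SELECT D.*
FROM DataTable AS D
WHERE NOT EXISTS (SELECT 1 FROM InactiveTable AS I WHERE I.[Key] = D.[Key])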
Hi all, any thoughts on the best way to run an update query to update a specific list of records where all records get updated to the same thing? I would think a temp table to hold the list would be best, but am also looking at the easiest for an end user to run. The list of items is over 7000. Example:

update imitmidx_sql set activity_cd = 'O', activity_dt = '20060601', prod_cat = 'OBS' where item_no = '001-LBK'
update imitmidx_sql set activity_cd = 'O', activity_dt = '20060601', prod_cat = 'OBS' where item_no = '001-LYE'
update imitmidx_sql set activity_cd = 'O', activity_dt = '20060601', prod_cat = 'OBS' where item_no = '001-XLBK'
update imitmidx_sql set activity_cd = 'O', activity_dt = '20060601', prod_cat = 'OBS' where item_no = '001-XLYE'
update imitmidx_sql set activity_cd = 'O', activity_dt = '20060601', prod_cat = 'OBS' where item_no = '002-LGR'
update imitmidx_sql set activity_cd = 'O', activity_dt = '20060601', prod_cat = 'OBS' where item_no = '002-LRE'

All records get set to the same values. I tried using an IN list but this was significantly slower:

update imitmidx_sql set activity_cd = 'O', activity_dt = '20060601', prod_cat = 'OBS'
where item_no in ('001-LBK','001-LYE','001-XLBK','001-XLYE','002-LGR','002-LRE')

Thanks
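The temp-table idea is usually the right call here. A minimal sketch (the staging table name and the varchar(30) length are made up; imitmidx_sql and item_no come from the post): load the 7000 item numbers once, give the staging table a key, then run one set-based update.

create table #items_to_close (item_no varchar(30) primary key)

-- load the 7000 item numbers, e.g. via BULK INSERT from a file the user supplies
insert into #items_to_close (item_no) values ('001-LBK')
insert into #items_to_close (item_no) values ('001-LYE')
-- ...

update i
set activity_cd = 'O', activity_dt = '20060601', prod_cat = 'OBS'
from imitmidx_sql i
inner join #items_to_close t on t.item_no = i.item_no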
Hi all
I have a bit of a dilemma that I am hoping some of you smart dudes might be able to help me with.

1. I have a table with about 50 million records in it and quite a few columns. [Table A]
2. I have another table with just over 300 records in it and a single column (besides the id). [Table B]
3. I want to: Select all of those records from Table A where [Table A].description does NOT contain any of (select color from [Table B])
4. An example:

Table A
id ... [other columns] ... description
1   the green hornet
2   a red ball
3   a green dog
4   the yellow submarine
5   the pink panther

Table B
id   color
55   blue
56   gold
57   green
58   purple
59   pink
60   white

So I want to select all those rows in Table A where none of the words from Table B.color appear in the description field in Table A. I.e. the query would return the following from Table A:

2   a red ball
4   the yellow submarine

The real life problem has more variables and is a little more complicated than this, but this should suffice to give the right idea. Due to the number of rows involved I need this to be relatively efficient. Can someone suggest the most efficient way to proceed?

PS. Please excuse my ignorance.
Cheers
Sean
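A set-based sketch of that check (TableA and TableB stand in for the real table names; description and color are the columns shown above):

SELECT a.id, a.description
FROM TableA a
WHERE NOT EXISTS (SELECT 1
                  FROM TableB b
                  WHERE a.description LIKE '%' + b.color + '%')

Because of the leading wildcard every Table A row still has to be tested against the colors, so on 50 million rows this is more about doing it in one pass than about using an index; full-text search is the usual next step if it is still too slow.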
We are writing an in-house backup tool. It's written in C# and the program allows me to select which records we want to archive and remove from the existing database. Functionally, we can achieve what we want to do, but I feel that we did not do it right. Can someone show me the right way to go? Here is what we are doing in the backup process.
1) Create an archive database (with the file name, say, archive_101.mdf)
2) Move the data from the original database to the archive database using the following SQL statement:
   INSERT INTO [archive_database].dbo.records
   SELECT a, b, c, date FROM [original_database] WHERE date < CONVERT(DATETIME, '2007-10-10', 120)
3) Delete the moved data from the original database.
4) Of course, step 2 and step 3 are done in a transaction.
5) Then, we detach the archive database and store the file archive_101.mdf somewhere.
I felt it's wrong because the performance is very bad. If I run the select statement, it takes me 1 min to dump all the data to console. If I run the same SELECT statement together with INSERT INTO, like what I wrote in step 2, it takes me more than 1 hour to write to another database. I checked the tempdb and its size does grow a lot when I am doing step 2. I know the data I am selecting is about 500MB large but still, it should not take that long. Can somebody give me any hints?
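One thing worth trying (a sketch only; the month-sized slices, the starting date, and the [original_database].dbo.records table name are assumptions, and it presumes an index on the date column) is to copy and delete in smaller date-range batches, so no single transaction has to carry the whole 500 MB through the log and tempdb:

DECLARE @from datetime, @to datetime
SET @from = '2007-01-01'          -- earliest date to archive (placeholder)

WHILE @from < '2007-10-10'
BEGIN
    SET @to = DATEADD(month, 1, @from)
    IF @to > '2007-10-10' SET @to = '2007-10-10'

    BEGIN TRAN
        INSERT INTO [archive_database].dbo.records (a, b, c, date)
        SELECT a, b, c, date
        FROM [original_database].dbo.records
        WHERE date >= @from AND date < @to

        DELETE FROM [original_database].dbo.records
        WHERE date >= @from AND date < @to
    COMMIT TRAN

    SET @from = @to
END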
I'm using DataAdapters with my SQL database with the intention of having all the SELECT, UPDATE, INSERT, DELETE commands automatically generated. One table is huge, so I'm wondering if it is more efficient to "SELECT Top(1) * FROM hugetable" instead of "SELECT * FROM hugetable" in order to facilitate the generation of commands. I hope this isn't too confusing.
Thanks,
Geoff
I am new to the world of ASP.NET. Right now I am building an application that will IMPORT about 5,000 records from an Excel spreadsheet to a table in MS SQL Server. Right now the code works correctly, but I feel it is not efficient and takes a bit too much time doing the import. Could you guys throw some light on how I can make the code run faster? Someone suggested that I could use a DataAdapter and update the table in the database through an update method available with it. I don't know how to do that. Could anyone share with me a snippet of code that does this?
Here is my code:
Private Sub ProcessRecords()
    Dim ds2 As New DataSet
    ' readExcelSheet is a user-defined function that reads a spreadsheet and returns a DataSet object
    ds2 = readExcelSheet("C:\Inetpub\wwwroot\Project1\Book2.xls", "SELECT * FROM [Sheet1$]")

    Dim myConnection As SqlConnection = Connection() ' user-defined function that returns a SqlConnection object
    myConnection.Open()

    Dim strSQL As String = "insert_member" ' stored procedure that inserts records
    Dim myCommand As New SqlCommand(strSQL, myConnection)
    myCommand.CommandType = CommandType.StoredProcedure
    myCommand.Parameters.Add("@salutation", SqlDbType.NVarChar)
    myCommand.Parameters.Add("@firstname", SqlDbType.NVarChar)
    myCommand.Parameters.Add("@lastname", SqlDbType.NVarChar)
    myCommand.Parameters.Add("@company", SqlDbType.NVarChar)

    Dim i, j As Integer
    Response.Write(Date.Now() & "<br>")
    For i = 0 To ds2.Tables("Members").Rows.Count() - 1
        myCommand.Parameters("@salutation").Value = ds2.Tables("Members").Rows(i).Item("sal")
        myCommand.Parameters("@firstname").Value = ds2.Tables("Members").Rows(i).Item("firstname")
        myCommand.Parameters("@lastname").Value = ds2.Tables("Members").Rows(i).Item("lastname")
        myCommand.Parameters("@company").Value = ds2.Tables("Members").Rows(i).Item("company")
        j = myCommand.ExecuteNonQuery()
        If (j > 0) Then
            Response.Write("Record Inserted - " & i + 1 & "<br>")
        End If
    Next
    Response.Write(Date.Now() & "<br>")
    myConnection.Close()
End Sub
My MASTER table that contains text is very simple:

TEXT_ID INT
TEXT varchar

I was thinking of something like creating another table to keep track of selection criteria:

USER_NAME varchar   // keeps track of which user made the selection
TEXT_ID INT         // points to the text id in the MASTER table
START_POS INT       // start position of the selection
END_POS INT         // end position of the selection
COLOR INT           // color of the selection
The thing I don't like about this method is that if user(s) have many words highlighted in the sentences, this table may get very large.
I also thought of maybe combining START_POS, END_POS & COLOR into comma delimited entries:
EX: 1:5:240, 25:30:125 // START_POS:END_POS:COLOR
The thing I don't like about this method is that the parsing calculations may take a while to execute.
My goal is to stream as much text data as possible and make it as fast as possible. How does Google or any one else do it?
P.S. Having a copy of a MASTER table with HTML tags in TEXT field is not an option.
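If the per-selection table route is taken, a sketch of the DDL (the TextSelection name is made up; the columns follow the outline above, and the clustered index assumes the typical lookup is "all selections for this text, for this user"):

CREATE TABLE TextSelection (
    USER_NAME  varchar(100) NOT NULL,  -- who made the selection
    TEXT_ID    int          NOT NULL,  -- points at MASTER.TEXT_ID
    START_POS  int          NOT NULL,  -- start offset within the text
    END_POS    int          NOT NULL,  -- end offset within the text
    COLOR      int          NOT NULL   -- highlight color
)

CREATE CLUSTERED INDEX IX_TextSelection_Text_User ON TextSelection (TEXT_ID, USER_NAME)

With that index the highlights for one document and user are a small range seek, so the table growing large is less of a worry than it first appears.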
I am running a website of crossword puzzle and Sudoku games. The website is designed like this: there are 20-30 games online each day. Every registered user can play and submit a game to win scores. For each game, every registered user can get the score ONLY one time, i.e. no score will be calculated if the user has finished the game before. To avoid wasting time on a game finished before, the user is shown a hint message on the page when they enter an already finished game.

The current solution is: 3 tables are designed for the functions mentioned above.

Table A: UserTable -- storing user information, userid
Table B: GameList -- storing all the game information. Related fields:
    GameID: primary key
    FinishedTimes: recording how many times the game has been finished
Table C: FinishHistory -- storing who finished which game and when. Related fields:
    GameID: ID of the game
    UserID: ID of the user
    FinishedDate: the time when the game was finished
PS: Fields listed above are only related ones, not the complete structure.
Each time a user enters, the program reads Table B (GameList), listing all the available games and how many times each game has been finished. The user can then choose a desired game to play.

When the user clicks the link and enters a page showing the detail content of the game, the program reads Table C (FinishHistory) to check whether the user has finished this game before. If yes, a hint message is shown on the page.

When the user finishes the game and submits, the program again reads Table C (FinishHistory) to check whether the user has finished this game before. If yes, a hint message is shown on the page. If no, the user gets the score.

Existing problems: with the increase of games and users, the size of Table C (FinishHistory) grows rapidly. And each time a game is loaded, Table C is queried to check, and when a game is submitted, Table C is queried to check again. So it is only a question of time before Table C becomes a bottleneck.
Does anyone here have any good suggestions to change / re-invent a new structure or design to avoid this bottleneck?
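Whatever the final design, the per-game check itself can usually stay cheap with a composite index on FinishHistory and a targeted EXISTS probe, instead of loading the table (a sketch using the column names listed above, inside whatever procedure receives @GameID and @UserID):

CREATE INDEX IX_FinishHistory_Game_User ON FinishHistory (GameID, UserID)

-- has this user already finished this game?
IF EXISTS (SELECT 1 FROM FinishHistory WHERE GameID = @GameID AND UserID = @UserID)
    SELECT 1 AS AlreadyFinished
ELSE
    SELECT 0 AS AlreadyFinished

With that index the check is a single seek regardless of how many rows the history table accumulates.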
Hi there, I have created an sp and function that return, amongst other things, a comma separated string of values via a one to many relationship. The code works perfectly but I am not sure how to test its performance. Is this an efficient way to achieve my solution? If not, any suggestions how I can improve it? What are the best ways to check query speed???

MY SP:

CREATE PROCEDURE sp_Jobs_GetJobs
AS
BEGIN
    SELECT j.Id, j.Inserted, Title, Reference, dbo.fn_GetJobLocations(j.id) AS location, salary, summary, logo
    FROM Jobs_Jobs j INNER JOIN Client c ON j.ClientID = c.id
    ORDER BY j.Inserted DESC
END
GO

--------------------------------------------
MY Function:

CREATE FUNCTION fn_GetJobLocations (@JobID int)
RETURNS varchar(5000) AS
BEGIN
    DECLARE @LocList varchar(5000)
    SELECT @LocList = COALESCE(@LocList + ', ', '') + ll.location_name
    FROM Jobs_Locations l inner join List_Locations ll on ll.LocationID = l.LocationID
    WHERE l.JobID = @JobID
    RETURN @LocList
END

Any help or guidance much appreciated...
So I am trying to wrap my head around the most efficient way to store some game data in sql server 2005.
I won't list all of my tables or columns as there would just be too many, but I will try to be as exact as I can.

Basically, let's say I have 3000 monsters in my game, and over 8000 items, some of which are monster loot.
Each monster could have 1 drop item or they could have 200 different item drops all depending on the monster.
So I am just trying to find the best method of creating a loot table.
So I have the following in mind.
[MonsterTable]
monsterID;
monsterName;
etc...
[ItemTable]
itemID;
itemName;
etc..
[LootTable]
lootTableID;
refMonsterID;
refItemID;
So I'm not worried about how I would get all of the data in there; I'm just wondering if this simple method of using Ref. IDs between tables would provide the best method for extracting the data when needed.
Any ideas or thoughts are more than welcome; any questions, just ask, and thanks for any guidance or information you can provide on a more efficient approach.
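That junction-table shape is the standard way to model a many-to-many loot relationship. A sketch of the DDL (names follow the outline above; it assumes monsterID and itemID are the primary keys of their tables, and it drops the separate lootTableID in favour of a composite key, which is optional):

CREATE TABLE LootTable (
    refMonsterID int NOT NULL REFERENCES MonsterTable (monsterID),
    refItemID    int NOT NULL REFERENCES ItemTable (itemID),
    CONSTRAINT PK_LootTable PRIMARY KEY (refMonsterID, refItemID)
)

With the primary key leading on refMonsterID, "give me all loot for monster X" is a single index seek, and extra columns such as a drop rate can be added to the same row later.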
I want to create a web page where a user can select from 1-100 fields to include in output, and then query a table to only return the fields the user has selected.
I do not want to construct a dynamic SQL SELECT statement. I would rather use a stored procedure, but am not certain how to only return the fields that the user selected.
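For reference, the way this usually ends up being done is still dynamic SQL, just kept inside the stored procedure and built only from a validated column list (a sketch with made-up names; the preference to avoid dynamic SQL is understood, but a fixed SELECT statement cannot vary its column list at run time):

CREATE PROCEDURE dbo.GetSelectedColumns
    @ColumnList nvarchar(4000)   -- e.g. N'[Field1],[Field7],[Field42]', each name validated against
                                 -- INFORMATION_SCHEMA.COLUMNS and wrapped with QUOTENAME by the caller
AS
BEGIN
    DECLARE @sql nvarchar(4000)
    SET @sql = N'SELECT ' + @ColumnList + N' FROM dbo.MyWideTable'
    EXEC sp_executesql @sql
END

The alternative without dynamic SQL is to return all 100 columns and let the page decide which ones to display, which trades a wider result set for a simpler procedure.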