I am trying to generate scripts for route optimization, that is, in what order a machine should operate on different sites with the lowest cost of transportation.
I already have generated the matrix with distances between all sites.
The problem now is how to generate the lists of all possible routes.
That is, all possible combinations of the order in which the sites can be operated on.
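In case it is useful, here is a minimal T-SQL sketch of one way to enumerate every visiting order with a recursive CTE. It assumes a site list in a #Sites table with a SiteID column (both names are placeholders for your own data), and note that the number of routes grows factorially, so this is only practical for a small number of sites.

-- Hypothetical site list; replace with your real table of sites.
CREATE TABLE #Sites (SiteID int PRIMARY KEY);
INSERT INTO #Sites VALUES (1);
INSERT INTO #Sites VALUES (2);
INSERT INTO #Sites VALUES (3);
INSERT INTO #Sites VALUES (4);

-- Recursively extend each partial route with every site not yet visited.
WITH Routes (Route, Stops) AS
(
    SELECT CAST(SiteID AS varchar(max)), 1
    FROM #Sites
    UNION ALL
    SELECT r.Route + ',' + CAST(s.SiteID AS varchar(10)), r.Stops + 1
    FROM Routes r
    JOIN #Sites s
      ON ',' + r.Route + ',' NOT LIKE '%,' + CAST(s.SiteID AS varchar(10)) + ',%'
)
SELECT Route              -- one row per complete visiting order, e.g. '2,4,1,3'
FROM Routes
WHERE Stops = (SELECT COUNT(*) FROM #Sites)
ORDER BY Route;

From there each route can be joined back to your distance matrix to total its transportation cost.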
I have a small, tricky problem here and need the help of you experts.
Let me explain in detail. I have three tables:
1. Emp table: columns -> EmpID and DeptID
2. Dept table: columns -> DeptName and DeptID
3. Team table: columns -> Date, EmpID1, EmpID2, DeptNo
There is a stored procedure which runs every day and, for EVERY DeptID that exists in the Dept table, selects two employees from the Emp table and puts them in the Team table. Assuming that there are several thousand departments in the Dept table, the amount of data entered into the Team table every day is tremendous.
If I continue to run the stored proc for a month, the Team table will have a huge number of rows in it, and I have to retain all the records.
The real problem is when I want to retrieve data for an employee (EmpID1 or EmpID2) from the Team table and view the related details like Date, DeptNo and EmpID1 or EmpID2 together with the Emp table. How do we optimize data retrieval and storage for the Team table? I cannot use partitions as I have SQL Server 2005 Standard Edition.
Please help me optimize the query and the data retrieval time from the Team table.
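For what it's worth, a minimal sketch of what usually helps for this kind of lookup, using the column names you listed (everything else here is an assumption): put a nonclustered index on each employee column, and search with a UNION ALL instead of an OR so each branch can seek on its own index.

-- Indexes so a lookup by either employee column can seek; the INCLUDE list
-- covers the columns the report needs, avoiding extra lookups (SQL Server 2005 syntax).
CREATE INDEX IX_Team_EmpID1 ON Team (EmpID1) INCLUDE ([Date], DeptNo, EmpID2);
CREATE INDEX IX_Team_EmpID2 ON Team (EmpID2) INCLUDE ([Date], DeptNo, EmpID1);

-- All team rows for one employee; UNION ALL keeps both branches seekable.
DECLARE @EmpID int;
SET @EmpID = 12345;   -- placeholder employee id

SELECT t.[Date], t.DeptNo, t.EmpID1, t.EmpID2
FROM Team t
WHERE t.EmpID1 = @EmpID
UNION ALL
SELECT t.[Date], t.DeptNo, t.EmpID1, t.EmpID2
FROM Team t
WHERE t.EmpID2 = @EmpID;

Joining the result to Emp (and Dept) for the remaining details is then a cheap lookup against the much smaller tables.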
I have a very simple and probably stupid question. I am new to SB. My question is: when I create a service and queue, do I always have to create a route? What is the purpose of creating a route? What happens if I don't create one? As I understand it, creating a route creates an entry in the routing table in the database, but I am perplexed as to what the actual use of this routing table is and in what way it helps.
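As a point of reference, every database already contains an AutoCreatedLocal route, which is what resolves services living in the same instance, so for purely local conversations you normally do not create a route yourself. You only add one when the target service is on another instance, along the lines of this sketch (service name and address are placeholders):

-- Tells Service Broker where to deliver messages addressed to a remote service.
CREATE ROUTE RouteToRemoteTarget
WITH SERVICE_NAME = 'TargetService',             -- name of the remote target service
     ADDRESS      = 'TCP://remote-server:4022';  -- Broker endpoint of the remote instance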
I am using the TRANSPORT and self-routing and it works great. My question is: is there ever a need for more than one transport route when using multiple services, or should there just be one per database, generally speaking? What exactly does the presence of this route tell service broker? It seems as though it just signals service broker to use self-routing if it can't resolve the service any other way....similar to the way the LOCAL route works, only for remote services.
I have an application running off an Access database. I'm trying to convert the application to a client-server architecture and thus moving everything to SQL Server 2000. I have read a lot of articles about ways to accomplish that, but I have yet to decide which approach is best (best as in robust and scalable).
I pretty much eliminated the ODBC route due to all the layers of translation a request has to go through to reach SQL Server, although this route seems to be the quickest to accomplish.
Mind you, I have plenty of time on hand and am willing to rewrite the whole thing from scratch if it means a better app.
Now, should I go the ADP route and keep Access as the user interface, or should I rebuild the whole front end in pure Visual Basic interacting with SQL Server? I'm leaning towards the latter.
I haven't read any articles about rebuilding the whole app using VB and SQL Server instead of just using ADP. Why is that, and which do you think is the better solution for a client-server architecture?
Thanks in advance for any replies to my questions.
I have two databases (A and B) on the same SQL Server instance. Both have SSB enabled and are running fine within themselves. All authorizations are at present set to dbo.
Recently I had a requirement to start a dialog and send a message from within database A to a queue via a service that is in database B.
I tried coding the SSB instance in the BEGIN DIALOG, then I set up a route and tried that. On both occasions I got the following in sys.transmission_queue:
"An exception occurred while enqueueing a message in the target queue. Error: 916, State: 3. The server principal "sa" is not able to access the database "B" under the current security context."
Is this something to do with the security lockdowns in 2005?
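In case it helps: error 916 in this scenario usually means the message is being delivered under the database owner's security context and cross-database access is blocked because the databases are not trusted. A minimal sketch of the usual workarounds, using your database names A and B (owner, service and contract names are placeholders, so adapt them):

-- Option 1: allow cross-database impersonation for dbo-owned activity.
ALTER DATABASE A SET TRUSTWORTHY ON;
ALTER DATABASE B SET TRUSTWORTHY ON;

-- Make sure both databases also have a valid owner login, e.g.:
ALTER AUTHORIZATION ON DATABASE::A TO sa;   -- placeholder owner
ALTER AUTHORIZATION ON DATABASE::B TO sa;

-- Option 2: if no certificates are set up, start the dialog without dialog security.
DECLARE @h uniqueidentifier;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE [InitiatorServiceInA]   -- placeholder service in database A
    TO SERVICE 'TargetServiceInB'        -- placeholder service in database B
    ON CONTRACT [YourContract]           -- placeholder contract
    WITH ENCRYPTION = OFF;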
I am experiencing the same problem, and I can't get the easy fix to work. I drop and create the DB's in between tests, so it is not related to having an old certificate in the DB, as in the case of Tilfried.
The situation is as follows:
DB1 owned by login1, has a user for login2; this DB is for the initiator
DB2 owned by login2, has a user for login1; this DB hosts the target
Both DB's have TRUSTWORTHY flag set to ON
Error in sys.transmission_queue: 'Error 916, State 3: The server principal "Login1" is not able to access the database "DB2" under the current security context.
Going out on a limb, I decided to add a remote service binding in DB1, binding the user for Login2 to the target service, even though BOL explicitly states that this is only required for cross-server communication. This does change the situation - I still get an error, but a new message in sys.transmission_queue: "Dialog security is unavailable for this conversation because there is no certificate bound to the database principal (Id: 5). Either create a certificate for the principal, or specify ENCRYPTION = OFF when beginning the conversation."
I already know that the first option works, but I wanted to get the simple solution running. As for the second option, I double-checked and the initiating procedure DOES already specify ENCRYPTION = OFF in the BEGIN DIALOG CONVERSATION command. My theory is that the remote service binding somehow forces SB to use encryption, but (a) that is not stated in the error message, and (b) if so, then how do I get the messages sent over to the target service without using the binding?
==> EDIT: Just saw that you confirmed this theory in your last reply to Tilfried. So I am indeed back to having to find out how to get this to work without the remote service binding - it should be possible, but how???
BTW, SELECT @@VERSION shows that I'm on build 3054, in case it matters.
Between all the errors in BOL and the less than helpful error messages produced by SB, I feel like I'm slowly losing my sanity. Please help!
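For anyone comparing notes, this is the check and clean-up I would try, purely as a sketch of the no-certificate path described above (the binding name is a placeholder):

-- In the initiator database, see whether a remote service binding exists;
-- its presence is what switches the dialog over to full dialog security.
USE DB1;
SELECT name, remote_service_name
FROM sys.remote_service_bindings;

-- Dropping it should let ENCRYPTION = OFF in BEGIN DIALOG CONVERSATION
-- take effect again for the cross-database conversation.
DROP REMOTE SERVICE BINDING [BindingToTargetService];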
Dear all, I used one stored procedure to retrieve 3 different result sets, and in the code-behind I separate them: from the DataSet I pull out three different DataTables and then show the data as needed. The main problem is that after retrieving the data from the database I have to use a foreach loop to bind the column data to my different custom classes, for example:

foreach (DataRow oDrow in MyDataTable.Rows)
{
    oClass = new Class();
    oClass.Name1 = oDrow["Name1"].ToString();
    oClass.Name2 = oDrow["Name2"].ToString();
    ....
}

1. So my first question: is there any optimization possible here?
2. My result set is very long, so should I keep to just one hit to the database or hit it more than once? Currently I am optimizing my web application. In the previous version I had to hit the database 3-4 times for different purposes, but now it hits only one time; however, it now takes time in the code-behind to perform the different operations. Any suggestions?
I have an SP that calls about 10 stored procedures sequentially. The 10 SPs are basically complex update statements, each one individual. Is there any way to optimize this? I know putting the 10 into 1 SP would make it compile faster, but that's about it. Are there any execution tricks for stored procedures firing off sequentially, or anything else I should know?
Hello All, What is the best way to optimize this code or rewrite it using ISNULL ?
CREATE PROCEDURE get_employees (@dept char(8), @class char(5))
AS
IF (@dept IS NULL AND @class IS NOT NULL)
    SELECT * FROM employee WHERE employee.dept IS NULL AND employee.class = @class
ELSE IF (@dept IS NULL AND @class IS NULL)
    SELECT * FROM employee WHERE employee.dept IS NULL AND employee.class IS NULL
ELSE IF (@dept IS NOT NULL AND @class IS NULL)
    SELECT * FROM employee WHERE employee.dept = @dept AND employee.class IS NULL
ELSE
    SELECT * FROM employee WHERE employee.dept = @dept AND employee.class = @class
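One way this is commonly collapsed into a single statement with ISNULL, assuming a NULL parameter is meant to match only NULL column values (the '~' sentinel is a placeholder that must never occur in real data, and note that wrapping the columns in ISNULL can stop the optimizer from using indexes on dept/class, so compare plans before adopting it):

CREATE PROCEDURE get_employees (@dept char(8), @class char(5))
AS
SELECT *
FROM employee
WHERE ISNULL(employee.dept,  '~') = ISNULL(@dept,  '~')
  AND ISNULL(employee.class, '~') = ISNULL(@class, '~')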
I am wondering if the size of the data file makes a difference when running INSERTs and/or doing fetches. Our DB was 11 GB in size; I ran DBCC SHRINKDATABASE and it shrank it to 5.5 GB. Now that it is smaller, will a SELECT query run faster, as opposed to when we run large inserts and the file has to grow automatically to accommodate them? I am trying to figure out whether I should leave my .mdf file large or keep it small, or whether it even makes a difference. I am only doing large inserts while loading data to get ready for production; after that the inserts will be hourly but much smaller, although our queries against the DB once it is in production will be much more intensive.
We're building a company-wide network monitoring system in Java, and need some advice on the database design and tuning. The application will need to concurrently INSERT, DELETE, and SELECT from our EVENT table as efficiently as possible. We plan to implement an INSERT thread, a DELETE thread, and a SELECT thread within our Java program. The EVENT table will have several hundred million records in it at any given time. We will prune, using DELETE, about every five seconds to keep the active record set down to a user-controlled size. One of the three queries will be executed about every twenty seconds. Finally, we'll INSERT as fast as we can in the INSERT thread. Being new to MSSQL, we need advice on
1) Server tuning - memory allocations, etc.
2) Table tuning - field types
3) Index tuning - are the indexes right
4) Query tuning - hints, etc.
5) Process tuning - better ways to INSERT and DELETE, etc.
Thanks, in advance, for any suggestions you can make :-)

The table is

CREATE TABLE EVENT (
    ID INT PRIMARY KEY NOT NULL,
    IPSOURCE INT NOT NULL,
    IPDEST INT NOT NULL,
    UNIXTIME BIGINT NOT NULL,
    TYPE TINYINT NOT NULL,
    DEVICEID SMALLINT NOT NULL,
    PROTOCOL TINYINT NOT NULL
)

CREATE INDEX INDEX_SRC_DEST_TYPE
ON EVENT (IPSOURCE, IPDEST, TYPE)

The SELECTs are

private static String QueryString1 =
    "SELECT ID,IPSOURCE,IPDEST,TYPE " +
    "FROM EVENT " +
    "WHERE ID >= ? " +
    "  AND ID <= ?";

private static String QueryString2 =
    "SELECT COUNT(*),IPSOURCE " +
    "FROM EVENT " +
    "GROUP BY IPSOURCE " +
    "ORDER BY 1 DESC";

private static String QueryString3 =
    "SELECT COUNT(*),IPDEST " +
    "FROM EVENT " +
    "WHERE IPSOURCE = ? " +
    "  AND TYPE = ? " +
    "GROUP BY IPDEST " +
    "ORDER BY 1 DESC";

The DELETE is

private static String DeleteIDString =
    "DELETE FROM EVENT " +
    "WHERE ID < ?";
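One observation that may help with the index tuning point: QueryString3 filters on IPSOURCE and TYPE, but the existing index has IPDEST between them, so the TYPE filter cannot be applied within the seek. A possible alternative index, assuming SQL Server 2005 or later for the INCLUDE syntax (adjust if you are on an older version):

-- Key columns match the WHERE clause of QueryString3; IPDEST is carried
-- as an included column so the GROUP BY needs no extra lookups.
CREATE INDEX INDEX_SRC_TYPE_DEST
ON EVENT (IPSOURCE, TYPE)
INCLUDE (IPDEST);

-- The PRIMARY KEY on ID is clustered by default, which already supports the
-- range SELECT in QueryString1 and the DELETE ... WHERE ID < ? pruning.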
There are two main tables in my app. In order to optimize searches via scope conditions, I have set up many indexes on these two tables.
However, the same two tables are also used by my ETL app: every day thousands of rows need to be updated or inserted, and that many indexes are not suitable for heavy modification. Any idea how to handle this?
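One common pattern, sketched here on the assumption that you are on SQL Server 2005 or later and that the table/index names are placeholders for yours: disable the nonclustered indexes before the ETL batch and rebuild them once afterwards, so the load does not maintain every index row by row.

-- Before the ETL batch: disable the nonclustered indexes on the target table
-- (do NOT disable the clustered index, that would take the table offline).
ALTER INDEX IX_BigTable_Search1 ON dbo.BigTable DISABLE;
ALTER INDEX IX_BigTable_Search2 ON dbo.BigTable DISABLE;

-- ... run the bulk INSERT / UPDATE work here ...

-- After the batch: rebuild them once; this also refreshes their statistics.
ALTER INDEX IX_BigTable_Search1 ON dbo.BigTable REBUILD;
ALTER INDEX IX_BigTable_Search2 ON dbo.BigTable REBUILD;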
Hello, what is the meaning of <MissingIndexGroup Impact="99.9521"> in the query plan? Should I create a grouped index? And what is the meaning of Impact="99.9521"?
If the impact is 100 do you get 100% better performance, and if the impact is 20 do you get 20% better performance - is that the meaning?
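In case it helps, the same information is exposed through the missing-index DMVs, so you can see the suggested key/included columns and the impact figure outside the plan XML. A minimal sketch (requires SQL Server 2005 or later and VIEW SERVER STATE permission):

SELECT d.statement            AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       gs.avg_user_impact,                      -- the "Impact" percentage from the plan
       gs.user_seeks + gs.user_scans AS times_it_would_have_helped
FROM sys.dm_db_missing_index_details d
JOIN sys.dm_db_missing_index_groups g
  ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats gs
  ON gs.group_handle = g.index_group_handle
ORDER BY gs.avg_user_impact DESC;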
Hi, can anyone help me optimize the SELECT statement in the 3rd step? I am actually writing a monthly report, so for each employee (500 employees) in a row, his attendance totals for all days in a month are displayed. The problem is that in the 3rd step there are actually 31 SELECT statements which are assigned to 31 variables. After I assign these variables, I insert them into a table (4th step) and display it. The troublesome part is the 3rd step: as there are 500 employees, the variables are assigned and inserted into the table 500 x 31 times. This is taking more than 4 minutes, which I know should not be necessary :). Can anyone help me optimize the SELECT statements I have in the 3rd step, or give a better suggestion?

DECLARE @EmpID, @DateFrom, @Total1 ....          -- declaring different variables
SELECT @DateFrom = ......                        -- 1st: set to start of any month, e.g. 2007-06-01
Loop (condition -- get all employees, working fine)
BEGIN
    SELECT @EmpID = ......                       -- 2nd: get EmployeeID
    SELECT @Total1 = SUM(Abences)                -- 3rd
    FROM Attendance
    WHERE employee_id_fk = @EmpID                -- from 2nd step
      AND Date_Absent = DATEADD("day", 0, CONVERT(varchar, @DateFrom))   -- from 1st step
    SELECT @Total2 ...........................   -- same as above
    SELECT @Total3 ...........................   -- same as above
    INSERT INTO @TABLE (@EmpID, @Total1, ...... @Total31)   -- 4th
    Iterate (condition) to next employee         -- 5th
END

It's only the loop which consumes the 4 minutes. If I can somehow optimize this part, I will be most satisfied. Thanks to anyone helping me.
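For what it's worth, a set-based sketch that computes all 31 day totals for every employee in one statement, so the loop disappears entirely. The names Attendance, employee_id_fk, Date_Absent and Abences are taken from your snippet; everything else is an assumption, so adjust to your schema:

DECLARE @DateFrom smalldatetime;
SET @DateFrom = '2007-06-01';   -- start of the month being reported

SELECT a.employee_id_fk AS EmpID,
       SUM(CASE WHEN DATEPART(day, a.Date_Absent) = 1  THEN a.Abences ELSE 0 END) AS Total1,
       SUM(CASE WHEN DATEPART(day, a.Date_Absent) = 2  THEN a.Abences ELSE 0 END) AS Total2,
       SUM(CASE WHEN DATEPART(day, a.Date_Absent) = 3  THEN a.Abences ELSE 0 END) AS Total3,
       -- ... repeat the same pattern up to Total31 ...
       SUM(CASE WHEN DATEPART(day, a.Date_Absent) = 31 THEN a.Abences ELSE 0 END) AS Total31
FROM Attendance a
WHERE a.Date_Absent >= @DateFrom
  AND a.Date_Absent <  DATEADD(month, 1, @DateFrom)
GROUP BY a.employee_id_fk;

Employees with no attendance rows in the month will not appear in this result; if they must show up with zeros, LEFT JOIN from your employee table to this grouped result.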
Could anyone tell me the best way to declare a connection from ASP.NET to a SQL database so that SQL Server can support the maximum number of users? It seems that the way I'm doing it is not correct, because when I make some transactions from my website to the database, the database sends back an error message saying that there are no more free connections.
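That error usually means connections are being opened faster than they are closed, so the pool runs dry; the fix is normally to close/dispose every connection as soon as its command finishes. To confirm it from the database side, here is a small check you can run while the site is busy (it uses master..sysprocesses, which is available on SQL Server 2000 and later; the database name is a placeholder):

SELECT hostname,
       program_name,
       COUNT(*) AS open_connections
FROM master..sysprocesses
WHERE dbid = DB_ID('YourDatabase')   -- placeholder: your database name
GROUP BY hostname, program_name
ORDER BY open_connections DESC;

If the count keeps climbing while the site runs, the application is leaking connections rather than SQL Server running out of capacity.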
This may sound a little silly, but does anyone have any words of wisdom on how to optimize a server/database for minimum rollback? We have some multimillion-row tables we were trying to run updates against, and after several days they had grown the transaction log to the point where it filled up the drive the database files/logs were on. We've now been running a rollback for about five days. I'd like to make sure this doesn't happen again.
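One approach that keeps both the transaction log and any potential rollback small is to break the multimillion-row update into batches, so each transaction is short. A rough sketch, assuming SQL Server 2005 for UPDATE TOP (on 2000, SET ROWCOUNT can play the same role); the table, column and predicate are placeholders:

DECLARE @rows int;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    -- Each batch commits on its own, so a failure only rolls back one batch,
    -- and the log space can be reused between batches (with frequent log
    -- backups, or SIMPLE recovery during the maintenance window).
    UPDATE TOP (50000) dbo.BigTable
    SET    SomeColumn = 'new value'
    WHERE  SomeColumn <> 'new value';

    SET @rows = @@ROWCOUNT;
END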
I am using database maintenance plans on a database that is about 4 GB. The database optimization step runs for about an hour. Does this job only do an update of statistics? If I run the stored procedure sp_updatestats on the database it only takes a couple of minutes. Are these two processes doing the same thing? Do I need them if the auto create and auto update statistics options are turned on?
Trying to optimize a query, and having problems interpreting the data. We have a query that queries 5 tables with 4 INNER JOINS. When I use INNER HASH JOIN, this is the result:
(Using SQL Programmer)
SQL Server Execution Times: CPU time = 40 ms, elapsed time = 80 ms.
Now, when timing the code execution on my ASP page, it's "faster" not using the HASH. Using HASH, there are a few Hash Match/Inner Joins reported in the Execution Plan. Not using HASH, there are Bookmark Lookups/Nested Loops.
My question is which is better to "see" for the CPU/server: Bookmark Lookups/Nested Loops, or Hash Match/Inner Joins?
Is there any way to rewrite this query in a more optimized way?
SELECT dbo.Table1.EmpId E
FROM dbo.Table1
WHERE EmpId IN
(
    SELECT dbo.Table1.EmpId
    FROM (SELECT DISTINCT PersonID, MAX(dtmStatusDate) AS dtmStatusDate
          FROM dbo.Table1
          GROUP BY PersonID) derived_table
    INNER JOIN dbo.Table1
            ON derived_table.PersonID = dbo.Table1.PersonID
           AND derived_table.dtmStatusDate = dbo.Table1.dtmStatusDate
)
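A possible rewrite, assuming the goal is "the EmpId of each person's most recent row" and that you are on SQL Server 2005 or later for ROW_NUMBER(); verify it returns the same rows against your data:

SELECT EmpId
FROM (SELECT EmpId,
             ROW_NUMBER() OVER (PARTITION BY PersonID
                                ORDER BY dtmStatusDate DESC) AS rn
      FROM dbo.Table1) t
WHERE rn = 1;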
How can I optimize the following query?

SELECT e.SID
FROM Students s
JOIN Table1 e  ON e.SID = s.SID
JOIN Table2 ed ON ed.Enrollment = e.Enrollment
JOIN Table3 t  ON t.TNum = e.TNum
JOIN Table4 bt ON bt.TNum = t.TNum
JOIN Table5 b  ON b.Batch = bt.Batch
JOIN IPlans i  ON i.IPlan = ed.IPlan
JOIN PGroups g ON g.PGroup = i.PGroup
WHERE t.TStatus = 'ACP'
  AND ed.EStatus = 'APR'
  AND e.SID = (SELECT DISTINCT SID FROM Table1 WHERE Enrollment = @DpEnrollment)
  AND ed.EffectiveDate = (SELECT EffectiveDate
                          FROM Table2 ed
                          JOIN Table1 e ON e.Enrollment = ed.Enrollment
                          WHERE IPlan = @DpIPlan
                            AND TCoord = @DpTCoord
                            AND AGCoord = @DpAGCoord
                            AND DCoord = @DpDCoord
                            AND DSeq = @DpDSeq
                            AND e.SID = (SELECT DISTINCT SID FROM Table1 WHERE Enrollment = @DpEnrollment))
  AND ed.TerminationDate = (SELECT TerminationDate
                            FROM Table2 ed
                            JOIN Table1 e ON e.Enrollment = ed.Enrollment
                            WHERE IPlan = @DpIPlan
                              AND TCoord = @DpTCoord
                              AND AGCoord = @DpAGCoord
                              AND DCoord = @DpDCoord
                              AND DSeq = @DpDSeq
                              AND e.SID = (SELECT DISTINCT SID FROM Table1 WHERE Enrollment = @DpEnrollment))
DECLARE @PTEffDate_tmp AS smalldatetime
SELECT @PTEffDate_tmp = DATEADD(day, -1, PDate)
FROM PDates pd
WHERE iplan = @DIPlan and pd.TCoord = @DTCoord and DType = 'EF'

DECLARE @PTCoord_tmp as char(3)
SELECT @PTCoord_tmp = tc.TCoord
FROM PDates pd
JOIN TCoords tc ON (pd.TCoord = tc.TCoord)
WHERE pd.Iplan = @DIPlan and tc.TGroup = @TGroup_tmp
  and PDate = @PTEffDate_tmp and DateType = 'TR1'

DECLARE @EStatus_tmp as char(3)
SELECT @EStatus_tmp = EDStatus
FROM EDetails ed
JOIN ENR e ON (ed.enr = e.enr)
JOIN Trans t ON (e.transID = t.TransID)
WHERE iplan = @DIPlan and ed.TCoord = @PTCoord_tmp
  and t.TransS = 'ACP' and DCoord = @DCoord and CEnr is null
How can I optimize my query? Since my DB has more than 1 million rows, it takes a while to do all those joins.

SELECT *
FROM EEMaster eem
JOIN NHistory nh
  ON eem.SNumber = nh.SNumber
  OR eem.OldNumber = nh.SNumber
  OR eem.CID = (REPLICATE('0', 12 - LEN(nh.SNumber)) + nh.SNumber)
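One rewrite that often helps, since an OR in a join predicate usually forces a scan: split the join into three separate joins combined with UNION, so each branch can use an index on its own column. A sketch using the names from your query (verify it returns the same rows, especially if a history row could match more than one of the conditions):

SELECT eem.*, nh.*
FROM EEMaster eem
JOIN NHistory nh ON eem.SNumber = nh.SNumber
UNION
SELECT eem.*, nh.*
FROM EEMaster eem
JOIN NHistory nh ON eem.OldNumber = nh.SNumber
UNION
SELECT eem.*, nh.*
FROM EEMaster eem
JOIN NHistory nh ON eem.CID = REPLICATE('0', 12 - LEN(nh.SNumber)) + nh.SNumber;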
Well, I wanted to prove to some guys that cursors are not really that important. So this code is supposed to remove duplicate tuples from a table without temporary tables or cursors. Except it needs some optimization (and a lot of system downtime, though I'm not sure about that). I would like it if someone could find an instance of the table where the code below fails, or some way to optimize the code, or anything.
--trashtable for real data create table abc (col1 tinyint, col2 tinyint, col3 tinyint)
--trash values for trash table
insert into abc values (1,1,1)
insert into abc values (1,1,1)
insert into abc values (1,1,1)
insert into abc values (1,1,1)
insert into abc values (2,2,2)
insert into abc values (2,2,2)
insert into abc values (2,2,2)
insert into abc values (3,2,1)
insert into abc values (2,2,3)
insert into abc values (3,2,4)

--check that there are ten rows
select * from abc
--check that there are only five distinct rows
select distinct * from abc

--run code : next lines as a batch
declare @lp tinyint
declare @col1 tinyint, @col2 tinyint, @col3 tinyint
set @lp = 1
while @lp > 0
begin
    if not exists (select top 1 * from abc group by col1, col2, col3 having count(col1) > 1)
        set @lp = 0
    else
    begin
        select top 1 @col1 = col1, @col2 = col2, @col3 = col3
        from abc
        group by col1, col2, col3
        having count(col1) > 1

        delete from abc where col1 = @col1 and col2 = @col2 and col3 = @col3
        insert into abc values (@col1, @col2, @col3)
    end
end
--only distinct values left in trash table select * from abc
--think code can be optimized --just wanted to prove: can be done without cursors or temporary tables
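For comparison, on SQL Server 2005 or later the same clean-up can be done in a single set-based statement with no loop, no cursor and no temporary table, by deleting every row after the first within each duplicate group (a sketch against the abc table above):

-- Number the rows inside each (col1, col2, col3) group and delete all but the first.
WITH numbered AS
(
    SELECT ROW_NUMBER() OVER (PARTITION BY col1, col2, col3
                              ORDER BY col1) AS rn
    FROM abc
)
DELETE FROM numbered
WHERE rn > 1;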
Hi all, where I work we have a standalone system that writes information to an event log. Currently this event log is in .mdb (MS Access) format. The problem we have is that the .mdb seems to get very slow to access after 100,000 rows or so, so it needs to be cleared out regularly. We have long discussed logging the events to a SQL Server instead of an .mdb file.
I have written a VB program to test the two DB formats, and I expected MS SQL Server 2005 to be faster at reading/writing than the .mdb. Both the server and the .mdb are local to the system (it's a standalone system), so we know it's not the network that is making the SQL Server slower. So here is my question: does anyone know of any good tips/tricks in the server configuration options to speed it up or generally improve performance?
The table definitions are the same in both SQL Server and the .mdb file:

Table: event_log_0000_000000
    Module        - Text
    Event_date    - Text
    Event_Time    - Text
    Event         - Text
    Record_Number - int, primary key

I know it would probably be better to have Event_date and Event_Time as datetime types, but I'm not in charge of that decision. The data/table doesn't matter too much; I just need to prove that the SQL Server is better (and faster) than a .mdb file.
The VB program uses DAO to access the .mdb DB and ADODB to access the SQL server - this is the only difference to how the DB's are accessed and I don't think it would account for the slowness of the SQL server.
This is my first post here, so I’ve probably missed out some vital information, so please ask.
Also sorry if this is the wrong place to post this question, it sort of covers Access/SQL Server 2005/Database programming areas, so wasn't sure.
Generally speaking, when you want to optimise an application that relies on a database, what is the order in which to apply the following optimization techniques?
a) optimizing the spread of the physical elements of the database across different disks of the server
b) optimizing the use of the RAM
c) optimizing the SQL
d) optimizing the OS
My company is undertaking a database optimization project: optimizing the schema, the code, etc. I would like to ask, if you guys could help out, the following:
1. What risks are there? What are the pitfalls?
2. My company is hesitant to do a database freeze and stop all new development until our vendor (who's restructuring tables and changing database objects) has a stable database for us to obtain, then, and only then can we continue development on this newer copy. My question to this: how can we either reduce the database code freeze or work in parallel?
3. Can anyone point me to other sources of information? Another thread? A book? A URL?
I have a problem where my optimization job seems to fail all the time. I have this set up as a SQL maintenance plan and it is run once every week. I have checked for things that could come into conflict, but there's nothing. Here is the error I am getting from the job history step:
Executed as user: SAPCORPadminsg. sqlmaint.exe failed. [SQLSTATE 42000] (Error 22029). The step failed.