Hearing complaints from users about speed on a db server (I have almost no control over the design; it just has to work), I ran Profiler looking for all SQL statements over 4000 milliseconds, and in one hour it returned over 715 T-SQL statements. Over 300 of these were over 10000 milliseconds. This is on an 8-way Dell with 8 GB of RAM. Looking for opinions: how bad does this look compared to other servers you are taking care of? Cache hit ratio is at 99% and system queue length is still under 1, but this does not look good.
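For anyone wanting to slice a saved trace the same way: if the trace was written to a file, fn_trace_gettable can load it into a query. A minimal sketch; the file path is hypothetical, and note that SQL Server 2000 reports the Duration column in milliseconds (SQL Server 2005 trace files switched to microseconds):

SELECT TextData, Duration, StartTime
FROM ::fn_trace_gettable('C:\Traces\slow_statements.trc', DEFAULT)
WHERE Duration > 4000   -- the same 4000 ms threshold used above
ORDER BY Duration DESC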
Does anyone know why SSIS sometimes just sits in the Pre-Execute phase of a data flow and does nothing? It doesn't matter how elaborate the data flow is or the volume of data. It can sometimes take 80% of the task's run time.
Which works fine, but now I need to calculate the total duration of a request based on the durations of the tasks completed in the request, grouped by Req_ID. I would like to use the CASE statement I have to determine the SLA_Mins for each task and add them together to get the total request SLA_Mins.
Below is the CREATE TABLE schema and data:
GO
/****** Object: Table [dbo].[MidrangeOtherSourceControl] Script Date: 06/03/2015 18:13:15 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[MidrangeOtherSourceControl](
    [Req_ID] [float] NULL,
    [Service_Name] [nvarchar](255) NULL,
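For the total-SLA question above, a minimal sketch of the SUM(CASE ...) shape, using the two columns shown in the schema; the task names and SLA minute values are placeholders for the actual CASE logic:

SELECT Req_ID,
       SUM(CASE Service_Name
               WHEN 'Provision Server' THEN 60   -- hypothetical task/SLA pairs
               WHEN 'Install Patch'    THEN 30
               ELSE 0
           END) AS Total_SLA_Mins
FROM dbo.MidrangeOtherSourceControl
GROUP BY Req_ID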
Hi all. We are currently running SQL 7 on an NT4 server (dual 800MHz, 1GB RAM) and it is being pounded mercilessly 24/7!
We are currently in the market to upgrade, and I would like to get your opinions on this setup. Maybe someone has experience with this box, or with other issues in upgrading to a new OS and new version of SQL...
Box: Compaq ProLiant ML530
Processors: 2x Xeon 2.8GHz/512KB with 400MHz system bus
Hi, I have probably exhausted the topic of shapes etc., but I am still having a hard time determining the best solution for my problem:
I have several products, each with several specific properties:
Double Tee: Width | Height | Flange | Leg | Count
Column: Width | Height
Round Column: Radius
Now originally I wanted to create a scalable table structure, so with the help of several people on this site (and SQL Team) I have developed the following:

tbShape
ShapeID | Shape | XSectionFormula
1 | Rect | Length X Width
From the above table structure, I was able to select a product, obtain the formula from the tbShape table, replace the attribute names in the formula with the attribute values from the tbProductAttributeValues table (using a cursor), and then determine the cross section of any selected product with dynamic SQL.
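A rough sketch of that cursor-plus-dynamic-SQL step, assuming it runs inside a procedure that receives @ShapeID and @ProductID, and assuming column names (AttributeName, AttributeValue) in tbProductAttributeValues:

DECLARE @formula varchar(1000), @sql varchar(1100),
        @name varchar(100), @value varchar(50)

SELECT @formula = XSectionFormula
FROM tbShape
WHERE ShapeID = @ShapeID

DECLARE attr_cur CURSOR FOR
    SELECT AttributeName, CONVERT(varchar(50), AttributeValue)
    FROM tbProductAttributeValues
    WHERE ProductID = @ProductID
OPEN attr_cur
FETCH NEXT FROM attr_cur INTO @name, @value
WHILE @@FETCH_STATUS = 0
BEGIN
    -- swap each attribute name in the formula for its numeric value
    SET @formula = REPLACE(@formula, @name, @value)
    FETCH NEXT FROM attr_cur INTO @name, @value
END
CLOSE attr_cur
DEALLOCATE attr_cur

-- treat the stored ' X ' as multiplication, e.g. 'Length X Width' -> '10 * 1'
SET @formula = REPLACE(@formula, ' X ', ' * ')
SET @sql = 'SELECT ' + @formula + ' AS CrossSection'
EXEC (@sql)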
The problem now is: what if I need to apply different functions to the data for any given product? This proves to be very difficult because the attributes for the products are not necessarily consistent.
For example, let's say the above was a slab 10 feet by 1 foot, giving a cross section of 10 square feet. Because it is simple to get the cross-sectional area, I can easily figure out the cubic feet of concrete used by multiplying the cross section by a length. But let's say the user wants to get the cost per square foot? How is the application sure which attribute is the width of the product?
I guess what I am getting at is: why is the structure below not any better than the one above?
Now there would be a 1-to-1 relationship between tbTemplates and tbDoubleTeeTemplates ON TemplateID - fkTemplateID. To add a new product, simply add the category and the new table, and then alter the stored procs, which would use IF ... ELSE IF statements based on the category to go to the appropriate template table, as sketched below.
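A small sketch of that IF/ELSE routing; tbColumnTemplates and the parameter names are hypothetical:

IF @Category = 'DoubleTee'
BEGIN
    SELECT Width, Height, Flange, Leg, [Count]
    FROM tbDoubleTeeTemplates
    WHERE fkTemplateID = @TemplateID
END
ELSE IF @Category = 'Column'
BEGIN
    SELECT Width, Height
    FROM tbColumnTemplates
    WHERE fkTemplateID = @TemplateID
END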
Also, now I can write customized functions for any product without worrying about a user misspelling an attribute between the formula and the attributes, etc.
Any opinions, thoughts on this would be appreciated!
I've always used the identity field in SQL Server to maintain the unique id for a table. With the new DB design at work we brought in a DBA, and she made us move away from letting SQL Server maintain the unique field to maintaining the unique field in code. To do that we had to start a transaction, do a SELECT MAX(id) + 1, insert into the table, and commit the transaction. Doing it this way, I'm starting to see deadlocks due to the transactions locking the table.
Getting down to what I wanted to know: what are the pros and cons you guys see in maintaining the unique ID this way, and is there a better way of creating a unique id in T-SQL code?
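For what it's worth, if the MAX + 1 pattern has to stay, one common way to stop the deadlocks is to take an update lock with a range hold on the read, so concurrent callers serialize on the key read instead of deadlocking; a sketch with hypothetical table and column names:

BEGIN TRAN
    DECLARE @id int
    SELECT @id = ISNULL(MAX(id), 0) + 1
    FROM dbo.MyTable WITH (UPDLOCK, HOLDLOCK)   -- serializes concurrent callers
    INSERT dbo.MyTable (id, SomeCol)
    VALUES (@id, 'some value')
COMMIT TRAN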
I took a search through the archives for related topics (and got Des in trouble along the way :( ) but couldn't find a directly related thread. If I missed one, feel free to tell me where to go (hey...watch that...only if I MISSED one!)
I wrote what is, essentially, a data verification stored proc that goes out to each of FOUR servers we have - each one running a mirror database. In a nutshell, there is one table that contains a row with a column in it that, if everything has gone well in the daily processing in all 4 databases, will match identically between all 4 DBs.
So, that said, here is the output: Job 'Index - Verify PortfolioIndex Across Servers' : Step 1, 'PortfolioIndex Check across all servers and portfolios' : Began Executing 2004-11-09 15:30:00
------------------- BEGINNING PortfolioIndex VERIFICATION -------------------- [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 2 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 3 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 11 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 67 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 72 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 84 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 90 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 92 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 100 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 105 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 110 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 115 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 120 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 125 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 130 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 135 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 140 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 145 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 150 on 11/09/2004! [SQLSTATE 01000]
WHOO-HOO!!! EVERYTHING MATCHES for porfolio number 155 on 11/09/2004! [SQLSTATE 01000]
UH-OH - TROUBLE!!! CloseIndex mismatch for porfolio number 160 on 11/09/2004! [SQLSTATE 01000]
--> Server TA1 shows an index of 110.582 [SQLSTATE 01000]
--> Server TRADEANALYSIS shows an index of 110.582 [SQLSTATE 01000]
--> Server RECEIVE1 shows an index of NULL [SQLSTATE 01000]
--> Server RECEIVE2 shows an index of NULL [SQLSTATE 01000]
UH-OH - TROUBLE!!! CloseIndex mismatch for porfolio number 1000 on 11/09/2004! [SQLSTATE 01000]
--> Server TA1 shows an index of 189.623 [SQLSTATE 01000]
--> Server TRADEANALYSIS shows an index of 189.623 [SQLSTATE 01000]
--> Server RECEIVE1 shows an index of NULL [SQLSTATE 01000]
--> Server RECEIVE2 shows an index of NULL [SQLSTATE 01000]
UH-OH - TROUBLE!!! CloseIndex mismatch for porfolio number 1001 on 11/09/2004! [SQLSTATE 01000]
--> Server TA1 shows an index of 164.058 [SQLSTATE 01000]
--> Server TRADEANALYSIS shows an index of 164.058 [SQLSTATE 01000]
--> Server RECEIVE1 shows an index of NULL [SQLSTATE 01000]
--> Server RECEIVE2 shows an index of NULL [SQLSTATE 01000]
UH-OH - TROUBLE!!! CloseIndex mismatch for porfolio number 1002 on 11/09/2004! [SQLSTATE 01000]
--> Server TA1 shows an index of 255.978 [SQLSTATE 01000]
--> Server TRADEANALYSIS shows an index of 255.978 [SQLSTATE 01000]
--> Server RECEIVE1 shows an index of NULL [SQLSTATE 01000]
--> Server RECEIVE2 shows an index of NULL [SQLSTATE 01000]
UH-OH - TROUBLE!!! CloseIndex mismatch for porfolio number 1003 on 11/09/2004! [SQLSTATE 01000]
--> Server TA1 shows an index of 159.009 [SQLSTATE 01000]
--> Server TRADEANALYSIS shows an index of 159.009 [SQLSTATE 01000]
--> Server RECEIVE1 shows an index of NULL [SQLSTATE 01000]
--> Server RECEIVE2 shows an index of NULL [SQLSTATE 01000]
UH-OH - TROUBLE!!! CloseIndex mismatch for porfolio number 1004 on 11/09/2004! [SQLSTATE 01000]
--> Server TA1 shows an index of 318.981 [SQLSTATE 01000]
--> Server TRADEANALYSIS shows an index of 318.981 [SQLSTATE 01000]
--> Server RECEIVE1 shows an index of NULL [SQLSTATE 01000]
--> Server RECEIVE2 shows an index of NULL [SQLSTATE 01000]
UH-OH - TROUBLE!!! CloseIndex mismatch for porfolio number 1005 on 11/09/2004! [SQLSTATE 01000]
--> Server TA1 shows an index of 145.921 [SQLSTATE 01000]
--> Server TRADEANALYSIS shows an index of 145.921 [SQLSTATE 01000]
--> Server RECEIVE1 shows an index of NULL [SQLSTATE 01000]
--> Server RECEIVE2 shows an index of NULL [SQLSTATE 01000]
UH-OH - TROUBLE!!! CloseIndex mismatch for porfolio number 1006 on 11/09/2004! [SQLSTATE 01000]
--> Server TA1 shows an index of 141.035 [SQLSTATE 01000]
--> Server TRADEANALYSIS shows an index of 141.035 [SQLSTATE 01000]
--> Server RECEIVE1 shows an index of NULL [SQLSTATE 01000]
--> Server RECEIVE2 shows an index of NULL [SQLSTATE 01000]
UH-OH - TROUBLE!!! CloseIndex mismatch for porfolio number 1007 on 11/09/2004! [SQLSTATE 01000]
--> Server TA1 shows an index of NULL [SQLSTATE 01000]
--> Server TRADEANALYSIS shows an index of NULL [SQLSTATE 01000]
--> Server RECEIVE1 shows an index of NULL [SQLSTATE 01000]
--> Server RECEIVE2 shows an index of NULL [SQLSTATE 01000]
UH-OH - TROUBLE!!! CloseIndex mismatch for porfolio number 1008 on 11/09/2004! [SQLSTATE 01000]
--> Server TA1 shows an index of 123.179 [SQLSTATE 01000]
--> Server TRADEANALYSIS shows an index of 123.179 [SQLSTATE 01000]
--> Server RECEIVE1 shows an index of NULL [SQLSTATE 01000]
--> Server RECEIVE2 shows an index of NULL [SQLSTATE 01000]
------------------- COMPLETE -------------------- [SQLSTATE 01000]
This was cut-and-pasted here from a log file created by the actual SQL Server 2000 job that runs the aforementioned stored procedure.
After all that... my quandary is this:
What is the best way to send this info out in an email format to interested parties? Currently I have the job send out an email notification on completion, but that still requires my lazy buttocks to go look at the log file in the job (or, more accurately, on the server in the logfile directory).
I want to get the actual DATA as shown above into the email.
As I see it, my options are:

(1) Write the data out to a flat file during the run (or, as is done now, into a log file by the SQL Server scheduled job) and then attach that FILE to the email - this still requires my lazy buttocks to OPEN the attachment that comes with the email; or

(2) Write the message out a line at a time to a table with an IDENTITY column (used to order them on the select) and a VARCHAR(128) column that each line in the log would be written to. This option allows me to just do a SELECT in the call to xp_sendmail to get the data into the actual email... but I just really hate the idea of creating a permanent table for this cheesy solution.
I tried it with a temp table within my stored proc, but of course, when I made the call to xp_sendmail, it couldn't see my temp table to select from it (mind you, it's not that I mind USING a cheesy table, just that I don't want it to have a lifespan longer than the time I need to use it before tossing it aside).
I know the common denominator here is "My Lazy Buttocks", but I really can't overstate the laziness of my buttocks, so this is a valid concern ;)
Any thoughts? How do people get status messages like this into an email without using an attachment or a cheesy middleman table?
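One workaround for the temp-table visibility problem described above: xp_sendmail runs its @query in a separate session, so a local #temp table is invisible to it, but a global ##temp table is visible and still dies as soon as it is dropped. A sketch, with a hypothetical recipient address:

CREATE TABLE ##StatusLog (LineID int IDENTITY(1,1), MsgText varchar(128))

INSERT ##StatusLog (MsgText) VALUES ('--- BEGINNING PortfolioIndex VERIFICATION ---')
-- ... INSERT one row per status message as the proc runs ...

EXEC master.dbo.xp_sendmail
    @recipients = 'interested.parties@example.com',
    @query      = 'SELECT MsgText FROM ##StatusLog ORDER BY LineID',
    @subject    = 'PortfolioIndex verification results',
    @no_header  = 'true'

DROP TABLE ##StatusLog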
Sorry, as always, about the miniseries...just trying to set the mood before popping the question ;)
I've a core component that is a Win32 DLL; this DLL implements some basic math calculations and conversions between several video systems: PAL, NTSC and so on. Plus, this DLL has a memory-mapped file that stores a system value that is the current time of day (House Clock); it is written by a service app and read by external apps that need this frame-accurate value.
I have full control of the DLL, and now I have to decide whether to use this DLL from SQL Server with P/Invoke or whether it would be better to port this DLL to C#. Both solutions have pros and cons. (The DLL is written in Delphi32 and cannot be easily ported.)
- The most important pro is "code reuse"... SQL implements the same math as the applications, and once the DLL is bug free, the SQL math behaves the same as the apps. No need to write code twice, and so on.
- The most important con is code security... I am not sure whether a hard failure in the DLL could take down the whole server, even if that situation is very unlikely to happen given the type of DLL... but you never know...
I would like some opinions on how you would deal with the following scenario:
We probably have somewhere from 500 to 1000 reports (written in Crystal). We also have about 120 clients; each client has their own database. One of the reasons for so many reports is that a lot of our customers want report A but with this or that extra column added, so we end up with a lot of custom versions of one report for a particular client. My question is: in converting over to Reporting Services, how would you set up the file structure?
Right now, we are thinking that every client is going to have their own folder which would contain all Reports, Models, and Datasources. What do you guys think?
Hi all, I was wondering if I could get some experienced opinions on SQL hardware to run an ERP app on SQL 2000. The app does not yet support SQL 2005. The ERP app has 25 users and likely won't exceed 30 users for several years. All traffic is on the LAN. The ERP clients basically submit SQL requests for reads and writes. The app makes heavy use of temp tables and temp views, but not many stored procedures. The current size of the db is 6GB and will likely double in 4 years.

Planned server:
Windows Server 2003
4 GB RAM
SQL 2000 Standard (ERP app does not yet support SQL 2005)
RAID1 for OS
RAID10 for SQL data
RAID1 for SQL logs
RAID1 for tempdb
Dual, teamed NICs

I would try to get 15K SCSI drives. Any thoughts on SATA instead of SCSI? Could I expect much of an improvement by using SQL 2000 Enterprise, since it can use more RAM? I would rather wait for SQL 2005 to be supported. Does anyone have a Dell or HP server configured in an email-able cart that they would care to share? Thank you.
Has anyone here tried ListCleaner by a company called WinPure (http://www.winpure.co.uk/lists), a data-deduping software, in comparison to other products that may be out there? It looks like some companies like Hewlett-Packard use this. I am looking for a good (and inexpensive) data-cleansing tool to dedupe and standardise lists, and I'm happy to extract the data from the database and re-import it (which it seems WinPure does) before incorporating it into other BI tools. I tried MatchIt from helpIT systems, but it is a bit cumbersome. I'm particularly interested in a tool with a good phonetic matching engine that handles multiple lists. I mostly work with Oracle, SQL and Access. Recommendations appreciated.

dbdb
Looking for some insight from the professionals about how they handle row inserts, specifically single-row inserts through a stored procedure versus bulk inserts.

One argument comes from people who say all inserts (and updates and deletions, I guess) should go through stored procedures. The reasoning is that the developers who code the client side have no reason to understand HOW the data is stored, just that it is. Another problem is an insert that deals with multiple tables; it would be very easy for the developer to forget a step. That last point also applies to business logic. In my case, adding a security to our SecurityMaster can touch 1 to 4 tables depending on the type of security. Also, certain fields are required while others are set to null depending on the type. Because a stored procedure cannot be passed datasets, only scalar values, when you need to deal with multiple (i.e. bulk) rows you are stuck using cursors. This post is NOT about the pros and cons of cursors; there are plenty of those on the boards (some of them probably started by me and showing my understanding, or more correctly lack of it, of the way to do things). Stored procedures also give you the ability to abort and/or log inserts that cannot happen because of constraints and/or business rule failures.

Another approach is to write code (not accessible from outside the database) that handles bulk inserts. You would need to write in rules to "extract" or "exclude" rows that do not match constraints or business rules; otherwise ALL the inserts would fail because of one bad row. I guess you could put the "potential" rows into a temp table, apply your rules to the temp table, and delete/move rows that would fail. Any rows left can then be bulk inserted. (You could also use the rows that were moved to another temp table for logging why they failed.)

So that leaves us with two possible ways to get data into the system: a single-row-based approach for client apps and a bulk-based one for internal use. But that leaves us with another problem: you now have business logic in TWO separate areas. You have to remember to modify code or fix bugs in multiple locations.

For those that are still reading my post, my question is... How do you handle this? What is the approach you take?
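A rough sketch of the staging-table idea described in that second approach; every table and column name here is hypothetical:

-- land the candidate rows somewhere you can scrub them
SELECT * INTO #Staging FROM dbo.IncomingSecurities

-- log rows that break a business rule, then remove them so one bad row
-- doesn't abort the whole set
INSERT dbo.InsertErrors (SecurityCode, Reason)
SELECT SecurityCode, 'Coupon required for bonds'
FROM #Staging
WHERE SecurityType = 'BOND' AND CouponRate IS NULL

DELETE #Staging
WHERE SecurityType = 'BOND' AND CouponRate IS NULL

-- whatever survives the rules goes in as a single set-based insert
INSERT dbo.SecurityMaster (SecurityCode, SecurityType, CouponRate)
SELECT SecurityCode, SecurityType, CouponRate
FROM #Staging

DROP TABLE #Staging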
I have a client who wants to be able to upload images to his website for his customers to access. It will probably max out at 100 images a month...so not a huge amount of data. I am using asp.net 2.0 and SQL Server 2005. Does anyone have thoughts or opinions on why I should or should not take this approach?
Before I started with this employer the server support staff had planned a backup strategy that included using database agents for Oracle and SQL Server. Nothing is really set up yet for SQL Server so I can still change this direction. Has anyone seen a definite benefit to using database backup agents? If so, what benefits have you seen? There doesn't seem to be much value added by paying for and using an additional product when the database's own utilities are so easy to use, and all the backup files can be backed up to tape with the basic backup software. I've not worked with it, though, so perhaps I am missing something. These are small databases so space is not an issue. Any opinions/comments are appreciated.
I'm wondering what other people do in regards to running hard drive defragmentation programs on SQL Server 2005 servers (assume 64-bit and Windows 2003). From what I can tell the most common opinions are:
1. Don't defragment, because it doesn't help and it can cause problems.
2. Use Diskeeper.
3. Use the built-in Windows defragmenter.
Other respected defragmentation programs are PerfectDisk, O&O Defrag, JkDefrag, and Contig.
Dear All, I am writing a procedure to import the customer Excel file daily into SQL Server 2000. I managed to do that: the Excel file is imported directly into SQL Server after creating a new data table, and then I read the created table and import it row by row into my original data table.

The problem:
I. The original Excel file has the following:
a. a protection password
b. two merged headers in the contents (which affects the import procedure)
c. a totals line as the last line

Before importing the file I have to manually remove (a, b & c)!!

The solution:
II. I am trying to find a way to handle the above points automatically inside the project.
III. I also thought of importing the Excel file into a DataGrid first, then:
a. let the user approve the file contents, and
b. remove point (I.b.) above manually (I don't know how yet; need to try it), and
c. then import the DataGrid into the SQL server.

I think I prefer solution (III); any suggestions are highly appreciated. BR
Please help me decide what to do about my current hardware configuration. I have an ASP.NET app that uses SQL Server for the database. Currently both IIS and SQL are running on the same machine (see Machine 1 below). I want to separate it so that IIS and SQL each have their own machine, but I have a very limited hardware budget right now. I am trying to decide if it would be worth moving either IIS or SQL to another machine that we have, or if I would actually lose performance by doing so, considering the extra machine I have is a bit outdated (see Machine 2 below). Should I leave well enough alone or try to split across these 2 machines I have? (Buying new machines isn't an option right now, although that's what I'd like to do.) I could probably afford a memory upgrade on one or both computers if necessary.

Machine 1: Dual Xeon 1.8 GHz w/ 1 GB RAM
Machine 2: P3 1.13 GHz w/ 512 MB RAM

Thanks
We have been told by the director over the DBAs that we may be standardizing ALL scheduled jobs and tasks (including SQL jobs) onto one tool called AutoMate (by Network Automation), although I suspect the decision has already been made. I've argued that a standard for batch jobs is good, but SQL has a job scheduler designed for SQL and integrated with SQL that works extremely well; I don't think I'm getting through. Has anyone used AutoMate as a replacement for SQL Agent? I am open to hearing both pros and cons, please. Thank you.
I'm reviewing a data warehouse design schema for a client that is following Kimball's data warehousing principles. One of the first things I noticed was a table of dates with expanded columns giving such information as the year, month, month name, fiscal year, quarter, etc. for each date. They also have a surrogate key (int) for the date value, and the fact tables store the surrogate key rather than the date value itself. They were very surprised when I questioned the purpose of this table, assuring me that Kimball is very strong on the concept of having a date dimension for each table. I don't see the purpose of a table containing nothing but derived date formats. I think they will take a bigger performance hit from having to link through the surrogate key than they would suffer from having to convert date values stored in the fact tables. Has anybody else ever seen this before? Does Kimball really advise this?
Situation: SQL Server 2000. At my new employer they have a production database on one server and a copy of it that is set to read only on another server which is used for reporting.
#1 They have a SQL Server Agent job on the production server that (2 times a day):

- Backs up the production database.
- Copies the backup file to a directory on the reporting server. (It's pretty big and can take time if there are problems with the LAN.)

#2 They have a SQL Server Agent job on the reporting server (scheduled to run 2 hours or so after the job on server 1 has run... they figured that it would be a safe bet that the backup and copy process of the first job would be done by then) that:

- Breaks the user connections to the reporting database.
- Performs a restore on the reporting database using the backup file that was copied to the holding directory by the production job.
- Sets some permissions for various users.
- Sets the reporting database to READ ONLY.

What I would like to do is find a more efficient way to create this reporting database. I have started doing research into DTS methods, but would like some opinions from more experienced users.
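For reference, the reporting-side sequence described above can be scripted along these lines; a sketch in SQL Server 2000 syntax, with hypothetical database names and paths:

-- break the user connections and take exclusive control
ALTER DATABASE ReportingDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE

-- overwrite the reporting copy from the file the production job delivered
RESTORE DATABASE ReportingDB
FROM DISK = 'D:\Holding\ProductionDB.bak'
WITH REPLACE

-- reapply the user permissions here, then flip back for reporting use
ALTER DATABASE ReportingDB SET READ_ONLY
ALTER DATABASE ReportingDB SET MULTI_USER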
If monitoring for duration with SQL Profiler, what does the number represent? e.g. for 2733906, is it milliseconds or thousandths? I looked in BOL; no clear definition.
I found this nifty code on Stack Overflow that works well, but I'm trying to send the results to a text file and the column lengths are huge. I used CAST for the first line and it worked great, but I can't seem to make it work with duration. Here's the original code:
I am trying to get a query that will allow me to report the time taken to complete a certain training module.
The database itself does not have a duration field, so I am trying to derive the duration using MIN and MAX. I can get the time when the module was opened and the time of the last mouse click on it; from these I need to calculate the time taken to complete.
The query I am using to get the basic info pulls from 3 tables, so I have only attached the relevant output. The query used is as follows:
SELECT *
FROM PPS_SCOS, PPS_TRANSCRIPTS, PPS_TRANSCRIPT_DETAILS, PPS_PRINCIPALS
WHERE PPS_SCOS.SCO_ID = PPS_TRANSCRIPTS.SCO_ID
  AND PPS_TRANSCRIPTS.TRANSCRIPT_ID = PPS_TRANSCRIPT_DETAILS.TRANSCRIPT_ID
  AND PPS_TRANSCRIPTS.PRINCIPAL_ID = PPS_PRINCIPALS.PRINCIPAL_ID
  AND PPS_SCOS.NAME LIKE 'MTM-106 The Dangers of Smoking'
  AND PPS_PRINCIPALS.NAME LIKE 'Nigel Cordiner'
  AND PPS_TRANSCRIPTS.TICKET NOT LIKE 'l-%'
ORDER BY PPS_TRANSCRIPT_DETAILS.DATE_CREATED
Output:
Source tables: pps_scos, pps_scos, pps_transcript_details, pps_principals, pps_principals

SCO_ID | NAME | DATE_CREATED | PRINCIPAL_ID | NAME
136850 | MTM-106 The Dangers of Smoking | 08:17:25 | 16287 | Nigel Cordiner
136850 | MTM-106 The Dangers of Smoking | 08:17:25 | 16287 | Nigel Cordiner
136850 | MTM-106 The Dangers of Smoking | 08:17:40 | 16287 | Nigel Cordiner
136850 | MTM-106 The Dangers of Smoking | 08:18:25 | 16287 | Nigel Cordiner
136850 | MTM-106 The Dangers of Smoking | 08:18:57 | 16287 | Nigel Cordiner
136850 | MTM-106 The Dangers of Smoking | 08:19:14 | 16287 | Nigel Cordiner
136850 | MTM-106 The Dangers of Smoking | 08:19:47 | 16287 | Nigel Cordiner
136850 | MTM-106 The Dangers of Smoking | 08:20:21 | 16287 | Nigel Cordiner
136850 | MTM-106 The Dangers of Smoking | 08:20:44 | 16287 | Nigel Cordiner
136850 | MTM-106 The Dangers of Smoking | 08:21:26 | 16287 | Nigel Cordiner
136850 | MTM-106 The Dangers of Smoking | 08:22:13 | 16287 | Nigel Cordiner
136850 | MTM-106 The Dangers of Smoking | 08:24:55 | 16287 | Nigel Cordiner
136850 | MTM-106 The Dangers of Smoking | 08:25:12 | 16287 | Nigel Cordiner
136850 | MTM-106 The Dangers of Smoking | 08:25:29 | 16287 | Nigel Cordiner
136850 | MTM-106 The Dangers of Smoking | 08:26:49 | 16287 | Nigel Cordiner
136850 | MTM-106 The Dangers of Smoking | 08:27:02 | 16287 | Nigel Cordiner
136850 | MTM-106 The Dangers of Smoking | 08:27:29 | 16287 | Nigel Cordiner
136850 | MTM-106 The Dangers of Smoking | 08:27:43 | 16287 | Nigel Cordiner
Have added the column heading and the tables the output comes from.
Relatively new to SQL so any help would be greatly received.
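For what it's worth, a sketch of the MIN/MAX idea described above, built on the same query; it collapses the transcript rows and treats the first and last DATE_CREATED as the open time and last click:

SELECT PPS_SCOS.SCO_ID,
       PPS_PRINCIPALS.PRINCIPAL_ID,
       MIN(PPS_TRANSCRIPT_DETAILS.DATE_CREATED) AS OpenedAt,
       MAX(PPS_TRANSCRIPT_DETAILS.DATE_CREATED) AS LastClick,
       DATEDIFF(second,
                MIN(PPS_TRANSCRIPT_DETAILS.DATE_CREATED),
                MAX(PPS_TRANSCRIPT_DETAILS.DATE_CREATED)) AS DurationSeconds
FROM PPS_SCOS, PPS_TRANSCRIPTS, PPS_TRANSCRIPT_DETAILS, PPS_PRINCIPALS
WHERE PPS_SCOS.SCO_ID = PPS_TRANSCRIPTS.SCO_ID
  AND PPS_TRANSCRIPTS.TRANSCRIPT_ID = PPS_TRANSCRIPT_DETAILS.TRANSCRIPT_ID
  AND PPS_TRANSCRIPTS.PRINCIPAL_ID = PPS_PRINCIPALS.PRINCIPAL_ID
  AND PPS_SCOS.NAME LIKE 'MTM-106 The Dangers of Smoking'
  AND PPS_PRINCIPALS.NAME LIKE 'Nigel Cordiner'
  AND PPS_TRANSCRIPTS.TICKET NOT LIKE 'l-%'
GROUP BY PPS_SCOS.SCO_ID, PPS_PRINCIPALS.PRINCIPAL_ID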
SELECT h.JobNum,
    (CASE WHEN MONTH(h.JobCompletionDate) = 1
          THEN DATEDIFF(day, MIN(l.ClockInDate), MAX(l.ClockInDate))
          ELSE 0 END) AS JAN,
    (CASE WHEN MONTH(h.JobCompletionDate) = 2
          THEN DATEDIFF(day, MIN(l.ClockInDate), MAX(l.ClockInDate))
          ELSE 0 END) AS FEB,
    ...
FROM JobHead h
INNER JOIN LaborDtl l ON h.JobNum = l.JobNum
WHERE JobCompletionDate >= '20070101'
  AND JobCompletionDate < '20080101'
  AND l.ClockInTime <> 0
GROUP BY h.JobNum, h.JobCompletionDate
The query shows, for each job, the month in which the job completed, and the number of hours it took to complete. I'm calculating the number of days' duration by doing a DATEDIFF between the oldest and newest ClockInDates. I need to ignore adjustment transactions in the LaborDtl table; these rows are easily identified as they have ClockInTime values of 0. So far, so good. Now here's my problem.
There are some jobs which have only one "real" labor transaction; this could happen if the job only took one day to complete. Other labor transactions may exist for that job, but let's say they are adjustments which we can ignore; the date they were entered should not extend the duration of the job. In this situation, my DATEDIFF between the oldest valid transaction and the newest returns 0. I don't have to count hours between ClockInTime and ClockOutTime. The rule is simply that if there is only one "real" labor transaction, I need to count this as a 1 day job.
I thought a nested CASE statement or expression might be the way to go but I didn't make any real progress.
Any ideas to solve this problem would be appreciated.
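One possible shape for that rule, replacing each month column in the query above: when the valid clock-in dates span zero days (i.e. a single "real" transaction), count the job as one day. A sketch for the JAN column:

(CASE WHEN MONTH(h.JobCompletionDate) = 1 THEN
      -- a zero-day span means only one valid transaction: count it as 1 day
      CASE WHEN DATEDIFF(day, MIN(l.ClockInDate), MAX(l.ClockInDate)) = 0
           THEN 1
           ELSE DATEDIFF(day, MIN(l.ClockInDate), MAX(l.ClockInDate))
      END
 ELSE 0 END) AS JAN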
I'm developing a web app that displays the running packages and the total elapsed time. I'm calling the GetRunningPackages() method and using the ExecutionDuration property of the returned package. The duration seems to be only for the currently executing container and not the entire package. Is there a way to get the duration of the entire package? Thanks.
Hi all, I want to find the working duration between two datetimes in C#. I'm using the following code:

DateTime starttime = Convert.ToDateTime(Session["StartTime"]);
DateTime endtime = DateTime.Now;
TimeSpan duration = endtime - starttime;
DateTime period = new DateTime(duration.Ticks);

I want to store this duration in the database through a stored procedure. I've given duration a datetime datatype, but it is giving an error in the conversion of TimeSpan to DateTime. Please help... Thanks
SQL 6.5 - run duration 6-7 hours
SQL 7.0 - run duration 12-13 hours
175+ columns with total record size = 570
4.2M records with table size 2.5G
It's a simple 'SELECT INTO...' with some embedded logic, going from a work table with all char fields into the actual table, converting the char fields into various data types (int, datetime, real, etc.). Here is a sample of the code:
SELECT
    LoanNum = CASE WHEN ISNUMERIC(ACCT#) = 1 THEN CONVERT(int, ACCT#)
                   ELSE NULL END,
    PaidToDt = CASE
                   WHEN PAIDDT = '0001-01-01' THEN NULL
                   WHEN ISDATE(PAIDDT) = 1 AND SUBSTRING(PAIDDT,1,2) = '19'
                       THEN CONVERT(smalldatetime, PAIDDT)
                   WHEN ISDATE(PAIDDT) = 1 AND SUBSTRING(PAIDDT,1,2) = '20'
                        AND SUBSTRING(PAIDDT,3,2) < '79'
                       THEN CONVERT(smalldatetime, PAIDDT)
                   ELSE NULL
               END,
    . . .
INTO db.owner.tablename
FROM db.owner.wrktablename (NOLOCK)
There is a trigger that monitors modifications on a table, and it is turned on. For a specific duration, I need to turn off this trigger to modify the table, and then turn the trigger on again.
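A sketch of SQL Server's built-in enable/disable syntax, with hypothetical table and trigger names:

-- suspend the trigger for the maintenance window
ALTER TABLE dbo.MyTable DISABLE TRIGGER trMyTable_Audit

-- ... perform the modifications here ...

-- turn it back on
ALTER TABLE dbo.MyTable ENABLE TRIGGER trMyTable_Audit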
I have a table called Tickets which contains ticket information for a machine. Each machine can have more than one ticket number open at the same time. Each ticket contains the start date/time and end date/time of the ticket. Therefore the table looks something like this:
I want to be able to calculate the total duration time (in hours) that EACH MACHINE had a ticket open... but here is the tricky part. The total duration time that a machine had a ticket open has to encompass any tickets that fall in the same time period. For example: if Machine A has a ticket open at 8:30 which is closed at 10:00, and Machine A had another separate ticket open at 9:30 which was closed at 10:30, then the total duration time for this machine would be from 8:30 to 10:30, for a total of 2 hours.
Can anyone help me get started in tackling this problem or provide any examples?
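As a starting point, here is a sketch of the classic "packing intervals" approach, written without window functions so it works on older SQL Server versions; the Tickets column names (MachineID, StartTime, EndTime) are assumptions. It finds ticket starts not covered by any overlapping earlier ticket ("island starts"), pairs each with the nearest valid end, and sums the merged spans, so the 8:30-10:00 and 9:30-10:30 example above collapses to one 2-hour span:

SELECT m.MachineID,
       SUM(DATEDIFF(minute, m.StartTime, m.EndTime)) / 60.0 AS TotalHours
FROM (
    SELECT s.MachineID, s.StartTime, MIN(e.EndTime) AS EndTime
    FROM (
        -- island starts: starts not inside another ticket for the same machine
        SELECT t1.MachineID, t1.StartTime
        FROM Tickets t1
        WHERE NOT EXISTS (SELECT * FROM Tickets t2
                          WHERE t2.MachineID = t1.MachineID
                            AND t2.StartTime < t1.StartTime
                            AND t2.EndTime  >= t1.StartTime)
    ) s
    JOIN Tickets e
      ON  e.MachineID = s.MachineID
      AND e.EndTime  >= s.StartTime
      -- island ends: ends not inside another ticket for the same machine
      AND NOT EXISTS (SELECT * FROM Tickets t3
                      WHERE t3.MachineID = e.MachineID
                        AND t3.StartTime <= e.EndTime
                        AND t3.EndTime   >  e.EndTime)
    GROUP BY s.MachineID, s.StartTime
) m
GROUP BY m.MachineID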
Is there any way to tell how long this will run for, or how far it has got? I have a large database that has just had most of the data removed. The command has been running for 8 hours, and I have just stopped it to let something else run quickly. Any way of telling how much longer it will take?