I'm creating a burn down chart using a running total of cumulative hours with the following formula:
CumulativeHoursLeft :=
CALCULATE (
    SUM ( 'Projects'[Budget hours] ) - SUM ( 'hours'[Hours] ),
    FILTER (
        ALL ( 'hours'[Date] ),
        'hours'[Date] <= MAX ( 'hours'[Date] )
    )
)
It works great, except that in a Line Chart using [Date] as the axis and CumulativeHoursLeft as the value, I get spikes on days for which the employee reported no hours. I don't know what exactly the measure is doing in this instance, and I do not get this in a table - those dates simply do not appear. I have tried both Categories and Continuous for the Line Chart. I have also tried filtering where [Date] is not blank. How do I get rid of the spikes?
I tried to install "Cumulative update package 2 for SQL Server 2005 Service Pack 2" Everything went well except for the "SQL Server Database Services" update. It errored out as it was trying to "Finalize" the update.
The kicker to this whole thing is that the database file "temp_MS_AgentSigningCertificate_database.mdf" does not even exist.
I could not see any references to it in the master database. I checked the registry and can find a couple of references to it there. It apparently may have been a database that existed on the server at one time.
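For what it's worth, a check along these lines against the sys.master_files catalog view also comes back empty for that file (the LIKE pattern is just taken from the file name in the error below):

SELECT DB_NAME(database_id) AS DatabaseName,
       name                 AS LogicalName,
       physical_name        AS PhysicalName
FROM   sys.master_files
WHERE  physical_name LIKE '%AgentSigningCertificate%'
   OR  name LIKE '%AgentSigningCertificate%';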
I am not sure if I should remove the registry references to the database.
Below is the part of the install summary where it failed:

===========================================================
Product Installation Status
Product                    : SQL Server Database Services 2005 (MSSQLSERVER)
Product Version (Previous) : 3050
Product Version (Final)    :
Status                     : Failure
Log File                   : C:\Program Files\Microsoft SQL Server\90\Setup Bootstrap\LOG\Hotfix\SQL9_Hotfix_KB936305_sqlrun_sql.msp.log
Error Number               : 29537
Error Description          : MSP Error: 29537  SQL Server Setup has encountered the following problem:
[Microsoft][SQL Native Client][SQL Server]A file activation error occurred. The physical file name 'F:\Data\temp_MS_AgentSigningCertificate_database.mdf' may be incorrect. Diagnose and correct additional errors, and retry the operation.. To continue, correct the problem, and then run SQL Server Setup again.
If someone can tell me what I need to do to resolve this issue, I would greatly appreciate it.
I have found two cumulative update packages available for sql 2005 sp2 - build 3161 (http://support.microsoft.com/kb/935356) and build 3175 (http://support.microsoft.com/kb/936305).
The fixes are supposed to be cumulative and include all fixes since the last service pack, but the list of hotfixes in each package is different.
Does build 3175 contain the fixes in build 3161? And what about fixes mentioned in build 3175 that have a version lower than 3161 - are they included in build 3161?
Re: KB 940287 - Will it be in a Cumulative Update?

SYMPTOMS:
You use Service Broker in SQL Server 2005. When the initiating service starts a conversation and then sends a message, the initiating service ends the conversation immediately. However, after the target service processes the message, the endpoint on the target server is not deleted when the target service ends the conversation. Therefore, when the initiating service starts a new conversation, the number of the conversation endpoints on the target server keeps increasing.
Additionally, you receive the following error message on the initiator server:
An error occurred while receiving data: '64(The specified network name is no longer available.)' This message could not be delivered because the conversation endpoint has already been closed.
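For context, the pattern described in the symptoms is the classic fire-and-forget shape; a rough sketch of what the initiator side looks like (the service, contract, and message type names here are made up, not taken from the KB article):

DECLARE @h UNIQUEIDENTIFIER;

BEGIN DIALOG CONVERSATION @h
    FROM SERVICE [InitiatorService]
    TO SERVICE 'TargetService'
    ON CONTRACT [SampleContract]
    WITH ENCRYPTION = OFF;

SEND ON CONVERSATION @h
    MESSAGE TYPE [RequestMessage] (N'<request/>');

-- ended immediately by the initiator, before the target has processed anything
END CONVERSATION @h;

-- on the target server, the build-up the KB describes shows up as a growing count here
SELECT COUNT(*) FROM sys.conversation_endpoints;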
The KB article lists several workarounds. I've not been affected by this yet, but the article says a hotfix has been available since July 2007, so it is curious that it has not been put into a Cumulative Update yet, at least as far as the Knowledge Base documentation is concerned. Does anyone know any more about this?
Cumulative Update 5 Build 9.00.3215 KB - Problem http://support.microsoft.com/kb/943656/
We contacted Microsoft and installed the update, but when I do a SELECT @@Version I still get build 3042.
I'm not sure if they sent us the wrong file or if the update didn't apply correctly. Under Add/Remove Programs, though, SQL Native Client shows the new build 3215.
Microsoft sent us 335091_intl_i386_zip.exe, which extracted to: hotfix.txt and sqlncli.msi
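For reference, besides @@VERSION, this is the check I'm using to read the engine build (plain SERVERPROPERTY calls):

SELECT SERVERPROPERTY('ProductVersion') AS EngineBuild,   -- still showing 3042 for me
       SERVERPROPERTY('ProductLevel')   AS ProductLevel,
       SERVERPROPERTY('Edition')        AS Edition;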
Does an UPDATE statement lock the entire table or just the rows that will be affected by the UPDATE?
I ask because -
Can I run UPDATE statements in parallel on the same table on the same column. The need for doing this is because the table is a large fact table. I plan to execute the same UPDATE statements on different time sections of the data to expedite the processing.
If the UPDATE statement locks the entire table, then I cannot run the UPDATEs in parallel. If the UPDATE statement just locks the rows that will be affected, then maybe I can, because the rows affected will be different for each UPDATE.
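Here is a sketch of the kind of slicing I have in mind, run from two separate sessions (the fact table and column names are hypothetical). My understanding is that SQL Server takes row or page locks by default and may escalate to a table lock once a single statement holds many thousands of locks, so whether these truly run in parallel depends on indexing on the date column and on lock escalation:

-- session 1
UPDATE dbo.FactSales
SET    AdjustedAmount = Amount * 1.10
WHERE  SaleDate >= '20070101' AND SaleDate < '20070701';

-- session 2, run concurrently from a second connection
UPDATE dbo.FactSales
SET    AdjustedAmount = Amount * 1.10
WHERE  SaleDate >= '20070701' AND SaleDate < '20080101';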
I have a table as given below:

Name     Amount
------------------
aaa      10
bbb      20

and I want an output like the one given below when running a SQL query or stored procedure:

Name     Amount     Cumulative amount
----------------------------------------
aaa      10         10
bbb      20         30
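For what it's worth, this is the sort of query I am imagining, assuming the table is dbo.Amounts and that Name is unique and sorts in the order shown (a correlated subquery, so no windowed SUM is needed):

SELECT  a.Name,
        a.Amount,
        (SELECT SUM(b.Amount)
         FROM   dbo.Amounts AS b
         WHERE  b.Name <= a.Name) AS CumulativeAmount   -- running total up to this row
FROM    dbo.Amounts AS a
ORDER BY a.Name;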
One more question about this Custom Calendar table I'm creating. I have a column called "IsWorkdays" which indicates whether the day represented by a row is a workday or not. For our purposes, I also need to create a column that accumulates those numbers by month. So, if it is the 3rd workday of the month, this column would have a 3. This is beyond my current T-SQL ability. Does anyone know how to do this?
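To illustrate, this is the shape of what I am picturing (CustomCalendar, [Date], and WorkdayOfMonth are assumed names; IsWorkdays is the flag column mentioned above), but I am not sure it is the right T-SQL approach:

-- number each workday within its month; non-workdays are left untouched
UPDATE c
SET    c.WorkdayOfMonth =
       (SELECT COUNT(*)
        FROM   dbo.CustomCalendar AS c2
        WHERE  c2.IsWorkdays = 1
          AND  c2.[Date] <= c.[Date]
          AND  YEAR(c2.[Date])  = YEAR(c.[Date])
          AND  MONTH(c2.[Date]) = MONTH(c.[Date]))
FROM   dbo.CustomCalendar AS c
WHERE  c.IsWorkdays = 1;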
Hi everyone.... I'm trying to execute this update statement and it takes an eternity. Any ideas on how to rewrite it or speed it up?
It's a several-step process; below is everything that I run, one step at a time. The final update statement is what takes so long. It should only affect about 2,600 rows out of a potential 9,000, which is why I'm confused about the response time.
select d.olddevicename, de.device, d.newdevicename into #temp9 from dns d, devices de where de.device = d.olddevicename
update #temp9 set device = newdevicename where olddevicename = device
update devices set device = #temp9.device from #temp9, devices where #temp9.device in (select #temp9.device from #temp9, devices where #temp9.olddevicename = devices.device)
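For reference, a joined form of that last statement that I think expresses the same intent (set each matching devices.device to the renamed value from #temp9), though I have not verified it gives identical results:

UPDATE de
SET    de.device = t.device
FROM   devices AS de
JOIN   #temp9  AS t
  ON   t.olddevicename = de.device;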
I'm trying to update a table which has a trigger on it. When I run the update, it fires the trigger, which saves the values of the modified fields and the type of operation the user performed into another table.
That other table is for auditing changes to the data.
When I try to run the update, SQL Server gives me this error:
Process 67 unlocking unowned resource: TAB: 10:334884510
I don't know what it means. Can anybody help me figure out how to correct this?
Is it possible for me to run an UPDATE statement at a certain time, say midnight for example?
I'm trying to update a bit field in my table (which acts as a checkbox in my Access front end), but only if three date fields are before today's date. The dates in question are in two other tables.
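To make it concrete, the shape of the update I have in mind is below; all the table and column names are placeholders rather than my real schema. My thinking is that a SQL Server Agent job scheduled for midnight could simply run this as a job step:

-- set the bit only when all three dates (held in two other tables) fall before midnight today
UPDATE m
SET    m.Done = 1
FROM   dbo.MainTable AS m
JOIN   dbo.TableA AS a ON a.MainID = m.ID
JOIN   dbo.TableB AS b ON b.MainID = m.ID
WHERE  a.Date1 < DATEADD(dd, DATEDIFF(dd, 0, GETDATE()), 0)
  AND  a.Date2 < DATEADD(dd, DATEDIFF(dd, 0, GETDATE()), 0)
  AND  b.Date3 < DATEADD(dd, DATEDIFF(dd, 0, GETDATE()), 0);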
I'm having what you might call an optimisation issue - but I'm also not sure if my approach to this problem is right. I've spent the whole day trying various methods but none seem to be performing as admirably as I'd hoped.
Basically, here's the scenario:
1. Log files are being inserted into a SQL table via Log Parser 2.2.
2. I have a lookup table of IP address ranges to countries and other geographic data.
3. Once the log row has been inserted from Log Parser, I want to update the same log table with data from the lookup table.
The main problem seems to be the updating (the initial insert from Log Parser is lightning quick).
I initially wrote a trigger on AFTER INSERT on the log table that converted the actual user's IP address into IPNumber format using a simple algorithm, then pumped the IPNumber into a quick select query which retrieved the geolocation data. Then, in the same trigger, there was an update statement which basically said:
update dbo.Logs set [Country] = @Country where [IpNumber] = @IpNumber and [Country] is null
Therein lies the problem. The update statement slows everything down to almost a standstill and also seems to obtain a lock on the table.
Critical factors:
1. The log table must remain readable.
2. The query must execute in seconds -- not over half an hour :)
I've experimented with various combinations of indexing, so far to no avail.
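One alternative I have been weighing, instead of doing the lookup inside the insert trigger: leave [Country] NULL at insert time and run a periodic set-based pass that backfills it from the lookup table. A sketch, with the lookup table and its columns guessed for illustration (dbo.IpCountry with StartIpNumber / EndIpNumber):

-- backfill geolocation for rows whose country has not been resolved yet
UPDATE l
SET    l.Country = c.Country
FROM   dbo.Logs AS l
JOIN   dbo.IpCountry AS c
  ON   l.IpNumber BETWEEN c.StartIpNumber AND c.EndIpNumber
WHERE  l.Country IS NULL;

Batching it (SET ROWCOUNT in a loop, or TOP on newer versions) would presumably keep the locks short enough for the log table to stay readable, but I have not proven that yet.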
Hi,

I have a table with three columns as below. Table name: exp

No (int)   name (char)   refno (int)

I have data as below:

No   name   refno
1    a
2    b
3    c

I need to update refno with the No values. I wrote a query as below:

update exp set refno = (select no from exp)

When I run the query I get this error:

Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <=, >, >= or when the subquery is used as an expression.

I need to update one column with the other column's value. What is the correct query for this?

Thanks,
Mani
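P.S. Is it perhaps as simple as a plain column-to-column update, with no subquery at all? Something like:

UPDATE exp SET refno = [No]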
The following basic UPDATE SQL statement has been running for 16 hours and counting. I need to get this done ASAP.
UPDATE Recipients
SET    UndeliverableTime = getdate()
FROM   Recipients
INNER JOIN Domains
        ON (Recipients.DomainID = Domains.ID)
INNER JOIN Undeliverables
        ON (Recipients.UserName + '@' + Domains.Domain = Undeliverables.EmailAddress)
Is there any way I can see how far this has gone and how long it will take to finish? Will this take another hour to finish or another week?
Both tables (Recipients and Undeliverables) have approximately 80 million records
I did a nearly identical operation with another table that had only 7 million records and it took 10.5 hours. I hope this doesn't scale linearly to 115 hours.
I am tempted to cancel, retune, and rerun, but that may trigger a really expensive rollback operation that could take days. Any ideas?
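One thing I have been watching from a second connection, assuming SQL Server 2005 or later where the DMVs exist (percent_complete is not populated for an ordinary UPDATE, but it is during a rollback, so it would at least show progress if I do end up cancelling):

-- replace 999 with the SPID of the long-running UPDATE
SELECT r.session_id, r.status, r.command, r.wait_type, r.wait_time,
       r.percent_complete,
       r.total_elapsed_time / 60000 AS elapsed_minutes
FROM   sys.dm_exec_requests AS r
WHERE  r.session_id = 999;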
I am using FOR UPDATE triggers to audit a table that has 67 fields. My problem is that this slows down the system significantly. I have narrowed the problem down to the amount of code (lines of code) that needs to be compiled after the trigger has fired. There are about 67 IF UPDATE(fieldName) blocks inside the trigger, each with a not very complex select statement inside the IF followed by an insert into the audit table. When I leave only a few IFs in the trigger and comment out the rest of the code, performance increases dramatically. It seems like it is checking every single UPDATE() statement. Assuming that it was slowing down due to doing a select for every update, I tried doing two separate selects at the beginning, from Deleted and Inserted, assigning the column values to specific variables, and instead of doing if Update(fieldName) I did:

if @DelFieldName <> @InsFieldName
begin
    INSERT INTO AUDIT
    SELECT WHAT I NEED
END

This did not improve performance. If you have any ideas on how to get around this issue, please let me know. Below is an example of what my triggers look like.

------------------------------------
Trigger 1 -- this was my original design

CREATE trigger1 on Table
FOR UPDATE
AS
if update(field1)
begin
    insert into Audit
    SELECT What I need
END
if update(field2)
begin
    insert into Audit
    SELECT What I need
END
...... Repeated about 65 more times
if update(field67)
begin
    insert into Audit
    SELECT What I need
END

---------------------------------------------------------------------------
Trigger 2 -- this is what I tried, but it did not improve performance

CREATE trigger2 on Table
FOR UPDATE
AS
Declare @DelField1 varchar
Declare @DelField2 varchar
....
Declare @DelField67 varchar

Select
    @DelField1 = Field1,
    @DelField2 = Field2,
    ....
    @DelField67 = Field67
From Deleted

Declare @InsField1 varchar
Declare @InsField2 varchar
....
Declare @InsField67 varchar

Select
    @InsField1 = Field1,
    @InsField2 = Field2,
    ....
    @InsField67 = Field67
From Inserted

-- I do not do if Update() but instead compare variables
if @DelField1 <> @InsField1
begin
    Insert into AUDIT
    SELECT what I need
end
if @DelField2 <> @InsField2
begin
    Insert into AUDIT
    SELECT what I need
end
............
if @DelField67 <> @InsField67
begin
    Insert into AUDIT
    SELECT what I need
end

----------------------------------------------

If you have any idea how to optimize this, please let me know. Any input is greatly appreciated. I do not have a problem with the triggers doing what they are supposed to do; they are just very slow, and that is my concern. The reason I gave you two examples is that I suspect it has something to do with the enormous amount of code inside the trigger; both examples perform about the same whether I use the two huge selects from Inserted and Deleted or not.

Thanks,
Gent
I'm trying to write a script that would only update a column if it exists.
This is what I tried first:
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS
           WHERE TABLE_NAME = 'Enrollment' AND COLUMN_NAME = 'nosuchfield')
BEGIN
    UPDATE dbo.Enrollment SET nosuchfield = '666'
END
And got the following error:
Server: Msg 207, Level 16, State 1, Line 1
Invalid column name 'nosuchfield'.
I'm curious why MS-SQL would do syntax checking in this case. I've used this type of check with ALTER TABLE ADD COLUMN commands before and it worked perfectly fine.
The only way I can think of to get around this is with:
IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS
           WHERE TABLE_NAME = 'Enrollment' AND COLUMN_NAME = 'nosuchfield')
BEGIN
    declare @sql nvarchar(100)
    SET @sql = N'UPDATE dbo.Enrollment SET nosuchfield=''666'''
    execute sp_executesql @sql
END
which looks a bit awkward. Is there a better way to accomplish this?
My DataAdapter.Update() method is running this query against my database, and all the parameters and the formatting look correct to me. I was wondering if anyone could identify obvious errors.... Thanks!
I have come up with an issue where I want to update data in a table using a bulk/SET-based update to get the result shown in the code below, with the output in the column titled "Arrear Amt".
Please use this test data.
CREATE TABLE ##vOD_Calc
(
    Seq_No INT ,
    Contract_id INT ,
    Rental_id INT ,
    Actual_OD INT ,
    Logic_OD INT ,
    Due_dte DATETIME ,

[Code] .....
The logic required is that once the sum of column [ArrearAmt] for the current row and all previous rows becomes greater than $100, column [ChArrrearAmt] should show that summed-up value; otherwise column [ChArrrearAmt] should show the same value as column [ArrearAmt].
Once column [ChArrrearAmt] reaches the $100 threshold, the same cycle should start again. In the example above, rental #1 had $37.17 < $100, rental #1 + rental #2 was also < $100, and at rental #3 the sum of rentals #1, #2 and #3 becomes $111.51, which is greater than $100, so that value is written to column [ChArrrearAmt]. The same cycle then starts over from rental #4 onwards; the summation of [ArrearAmt] begins again at rental #4 rather than from the start.
Below is the loop-based SQL script which handles the above situation; however, run in bulk it is a total deterioration of performance if thousands of rows are to be processed, i.e. with a contract having multiple rentals.
The issue here is that I have to use the previously updated value of column [ChArrrearAmt] to make the decision for the next row; with a bulk update, since that row is not yet updated with the latest amount, the decision on the next row gives the wrong result.
This is the code with which I have managed to update the column 'chArrear Amount'; however, it is a loop-based solution and a performance killer.
INSERT INTO ##vOD_Calc_loop
        ( Rows_count ,
          contract_id )
SELECT  COUNT(*) ,
        T.Contract_id
FROM    ##vOD_Calc T
GROUP BY T.Contract_id
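A set-based alternative I have been experimenting with is a recursive CTE that carries the running amount forward and resets it once the $100 threshold is crossed. It assumes ##vOD_Calc also contains the [ArrearAmt] and [ChArrrearAmt] columns described above and that Rental_id orders the rentals within a contract; I have not benchmarked it against the loop version:

WITH Ordered AS
(
    SELECT Contract_id, Rental_id, ArrearAmt,
           ROW_NUMBER() OVER (PARTITION BY Contract_id ORDER BY Rental_id) AS rn
    FROM   ##vOD_Calc
),
Walk AS
(
    -- first rental of each contract: the running amount is just its own arrears
    SELECT Contract_id, Rental_id, rn, ArrearAmt,
           CAST(ArrearAmt AS money) AS ChArrrearAmt,
           CAST(CASE WHEN ArrearAmt > 100 THEN 0 ELSE ArrearAmt END AS money) AS Carry
    FROM   Ordered
    WHERE  rn = 1

    UNION ALL

    -- later rentals: add to the carried amount; once it crosses $100,
    -- show the summed value and restart the carry at zero
    SELECT o.Contract_id, o.Rental_id, o.rn, o.ArrearAmt,
           CAST(CASE WHEN w.Carry + o.ArrearAmt > 100
                     THEN w.Carry + o.ArrearAmt
                     ELSE o.ArrearAmt END AS money),
           CAST(CASE WHEN w.Carry + o.ArrearAmt > 100
                     THEN 0
                     ELSE w.Carry + o.ArrearAmt END AS money)
    FROM   Walk AS w
    JOIN   Ordered AS o
      ON   o.Contract_id = w.Contract_id
     AND   o.rn = w.rn + 1
)
UPDATE v
SET    v.ChArrrearAmt = w.ChArrrearAmt
FROM   ##vOD_Calc AS v
JOIN   Walk AS w
  ON   w.Contract_id = v.Contract_id
 AND   w.Rental_id   = v.Rental_id
OPTION (MAXRECURSION 0);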
SQL Version: 2008 (not R2). Problem: the following code returns correct results when the variable declarations and update statement are moved outside a stored procedure, but fails to return a value other than zero for the "COMPANY TOTAL" records. The "DEPT TOTAL" result works fine both inside and outside the sp. This may have to do with handling NULL values, since I was getting a warning message earlier about a value being eliminated by an aggregate function involving a NULL. I only got this message when running inside the sp, not when running standalone. I wrapped the values inside the SUM functions with an ISNULL, and now get a zero rather than NULL for the "COMPANY TOTAL" records when running inside the sp. All variable values are correct when running.
SQL CODE:

DECLARE @WIPMonthCurrent date = (SELECT TOP 1 WIPMonth
                                 FROM budxcWIPMonths
                                 WHERE ActiveWIPPeriod = 'Y')

select @WIPMonthCurrent as WIPMonthCurrent
I need to find stored procedures in my database in which an UPDATE against a table has a WHERE clause that uses a column other than the table's primary key.
For example, if the employee table has empId as its primary key and an UPDATE query uses empName in the WHERE clause to update an employee record, then that SP should be listed. There could be hundreds of tables, each with their own primary key, and thousands of SPs in the database. How can I find the ones where the WHERE clause uses some column other than the primary key?
Any other hint or query to identify such statements, which end up locking tables, would also help; so far I have only found a few queries by hand that are not using the primary key in the WHERE clause.
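The closest I have gotten is a crude text search over the procedure definitions (this assumes SQL Server 2005 or later catalog views; 'employee' and 'empName' are just the example table and non-key column from above). It is nowhere near a real parse, which would need to cross-check each table's primary key columns via sys.indexes / sys.index_columns:

SELECT SCHEMA_NAME(p.schema_id) AS SchemaName,
       p.name                   AS ProcName
FROM   sys.procedures  AS p
JOIN   sys.sql_modules AS m ON m.object_id = p.object_id
WHERE  m.definition LIKE '%UPDATE%employee%'
  AND  m.definition LIKE '%WHERE%empName%';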
Hi SQL fans,

I realized that I often encounter the same situation in a relational database context where I really don't know what to do. Here is an example, where I have 2 tables as follows:

Portfolio
    folio_id (int)        -- PK, referenced by PortfolioTitle.tfolio_idfolio
    folio_name (varchar)

PortfolioTitle
    tfolio_id (int)
    tfolio_idfolio (int)  -- FK to Portfolio.folio_id
    tfolio_idtitle (int)  -- FK to the Titles table
    tfolio_weight (decimal(6,5))

Note that I also have a "Titles" table (hence the tfolio_idtitle link).

My problem is: when I update a portfolio, I must update all the associated titles in it. That means that titles can either be removed from the portfolio (the folio no longer holds the title), added to it (a new title is held by the folio), or simply updated (a title stays in the portfolio but has its weight changed).

For example, if portfolio #2 contained:

[ PortfolioTitle ]
id | idFolio | idTitre | poids
1  | 2       | 1       | 10
2  | 2       | 2       | 20
3  | 2       | 3       | 30

and I must update PortfolioTitle based on these values:

idFolio | idTitre | poids
2       | 2       | 20
2       | 3       | 35
2       | 4       | 40

then I should:
1) remove title #1 from the folio by deleting its entry in the PortfolioTitle table
2) update title #3 (weight from 30 to 35)
3) add title #4 to the folio

For now, the only way I've found to do this is to delete all the entries of the related folio (e.g.: DELETE TitrePortefeuille WHERE idFolio = 2) and then insert new values for each entry based on the new given values.

Is there a way to better manage this by detecting which values have to be inserted/updated/deleted?

And this applies to many situations :( If you need other examples, I can give you some.

Thanks a lot!
ibiza
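P.S. The closest I have sketched to "detecting" the differences myself is the three statements below, run against a staging table holding the new values (#NewTitles and its columns are my own invention; tfolio_id is assumed to be an identity, so it is not inserted explicitly). I am wondering whether there is something cleaner than this:

-- 1) remove titles that are no longer in the folio
DELETE pt
FROM   PortfolioTitle AS pt
WHERE  pt.tfolio_idfolio = 2
  AND  NOT EXISTS (SELECT 1 FROM #NewTitles AS n
                   WHERE n.idFolio = pt.tfolio_idfolio
                     AND n.idTitre = pt.tfolio_idtitle);

-- 2) update weights for titles that stay in the folio
UPDATE pt
SET    pt.tfolio_weight = n.poids
FROM   PortfolioTitle AS pt
JOIN   #NewTitles AS n
  ON   n.idFolio = pt.tfolio_idfolio
 AND   n.idTitre = pt.tfolio_idtitle;

-- 3) add titles that are new to the folio
INSERT INTO PortfolioTitle (tfolio_idfolio, tfolio_idtitle, tfolio_weight)
SELECT n.idFolio, n.idTitre, n.poids
FROM   #NewTitles AS n
WHERE  NOT EXISTS (SELECT 1 FROM PortfolioTitle AS pt
                   WHERE pt.tfolio_idfolio = n.idFolio
                     AND pt.tfolio_idtitle = n.idTitre);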
I have a table where a row gets updated multiple times (each column gets filled in) based on telephone call-in data.
Initially, I implemented an AFTER INSERT trigger at the row level, thinking that the whole row is inserted into the table with all column values at once. But that is not the case: the columns are not all filled at once. While the telephone call-in data arrives, there are multiple updates to the row, i.e. the column data in the row is filled in step by step.
I then thought about implementing an AFTER UPDATE trigger, but performance would suffer for each and every hit while the row is being updated.
Can I implement an AFTER UPDATE trigger that fires at the column level instead of the row level, to improve performance?
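From what I understand, SQL Server triggers fire once per statement rather than per column, but the UPDATE() function inside an AFTER UPDATE trigger at least lets the work be skipped unless a particular column was actually in the SET list. A minimal sketch with made-up table and column names (CallData, CallDataAudit, CallID, CallResult):

CREATE TRIGGER trg_CallData_Audit ON dbo.CallData
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- bail out cheaply unless the column we care about was part of the UPDATE statement
    IF UPDATE(CallResult)
    BEGIN
        INSERT INTO dbo.CallDataAudit (CallID, OldValue, NewValue, ChangedAt)
        SELECT d.CallID, d.CallResult, i.CallResult, GETDATE()
        FROM   inserted AS i
        JOIN   deleted  AS d
          ON   d.CallID = i.CallID
        WHERE  ISNULL(d.CallResult, '') <> ISNULL(i.CallResult, '');  -- only audit real changes
    END
END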
Is it possible to create a cumulative log using SSIS? Basically, I have 5 logs which hold failed records. I would like to create a cumulative log and send it via email using SSIS. Thoughts?
SQL Server 2000 SP3

Hi,
How can I get the cumulative weeks from a given date to the current date? I know I can get the week number by using datepart(wk, getdate()), but this will give me the week number for this year. What if I want to know the number of weeks that have passed since June 1 2001? If I use datepart(wk, '20010106') I will get the week number for 2001, but I would like the number of weeks expired between then and now.

Thanks,
Reg
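P.S. This is what I have been playing with, writing June 1 2001 as '20010601' (YYYYMMDD). DATEDIFF with the week datepart counts week boundaries crossed (Sundays by default), so it can differ by one from the number of full 7-day periods, which dividing the day difference by 7 gives instead:

SELECT DATEDIFF(wk, '20010601', GETDATE())     AS WeekBoundariesCrossed,
       DATEDIFF(dd, '20010601', GETDATE()) / 7 AS FullSevenDayWeeks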
I have a table consisting of 3 columns: Parent varchar(50), Child varchar(50), Pop int.
The table is setup as follows:
Parent          Child            Pop
----------------------------------------
Europe          France           0
France          Paris            1
New York        New York City    10
North America   United States    0
North America   Canada           0
United States   New York         0
United States   Washington       0
Washington      Redmond          200
Washington      Seattle          100
World           Europe           0
World           North America    0
This is just some sample data, modified a tiny bit from a hierarchical print-out example: a stored procedure that allows me to pass in any place and see all of that place's children/grandchildren.
I need to figure out how to write a query to show me cumulative sums (ROLLUP?) of the whole tree. So the output should basically be something like this (it can include parent and child columns too):
World           NULL             311
World           Europe           1
Europe          France           1
France          Paris            1
World           North America    310
North America   United States    310
North America   Canada           0
United States   New York         10
United States   Washington       300
New York        New York City    10
Washington      Redmond          200
Washington      Seattle          100
Hopefully you understand what I'm looking for. I've tried using WITH ROLLUP and I also tried using an INNER JOIN, but I'm not really sure what I need to do to pull this off. I seem to only be able to get it to work 1-2 levels deep, but not through the whole tree.
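A recursive CTE is one thing I have been experimenting with for the subtree totals (a sketch assuming SQL Server 2005 or later, that the table is dbo.Places(Parent, Child, Pop), and that each place appears at most once in the Child column). It produces one row per place with its cumulative Pop; the root ('World') never appears as a Child in my data, so it would need a seed row (e.g. NULL -> World) or a small UNION to show up, and the result can be joined back to the Parent/Child pairs to get the layout above:

WITH Subtree AS
(
    -- anchor: every place that appears as a Child starts its own subtree
    SELECT Child AS Root, Child AS Node, Pop
    FROM   dbo.Places

    UNION ALL

    -- recursive step: pull each node's children into the subtree of every root above them
    SELECT s.Root, p.Child, p.Pop
    FROM   Subtree AS s
    JOIN   dbo.Places AS p
      ON   p.Parent = s.Node
)
SELECT Root AS Place, SUM(Pop) AS CumulativePop
FROM   Subtree
GROUP BY Root
ORDER BY CumulativePop DESC;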