I am new to this level of coding in SQL Server 2012, but I am looking to update the TITLE_YEAR field in the temp table with the year the employee is in that title (1 for the first year in the title, 2 for the second, and so on). For example, for employee 11127 the data should look like this:
EMP_ID  Title              DateValue                TITLE_YEAR
3       Senior Consultant  2009-01-01 00:00:00.000  1
3       Director           2010-01-01 00:00:00.000  1
3       Director           2011-01-01 00:00:00.000  2
3       Director           2012-01-01 00:00:00.000  3
3       Director           2013-01-01 00:00:00.000  4
3       Senior Director    2014-01-01 00:00:00.000  1
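A sketch of one way to do this in SQL Server 2012, assuming the temp table is called #EmpTitles, has the columns shown above, and has one row per EMP_ID and DateValue; the double ROW_NUMBER (gaps-and-islands) trick restarts the counter whenever the title changes:

;WITH Grouped AS
(
    SELECT  EMP_ID, Title, DateValue,
            -- rows in the same unbroken run of a title share the same grp value
            ROW_NUMBER() OVER (PARTITION BY EMP_ID ORDER BY DateValue)
          - ROW_NUMBER() OVER (PARTITION BY EMP_ID, Title ORDER BY DateValue) AS grp
    FROM    #EmpTitles
),
Numbered AS
(
    SELECT  EMP_ID, Title, DateValue,
            ROW_NUMBER() OVER (PARTITION BY EMP_ID, Title, grp ORDER BY DateValue) AS title_year
    FROM    Grouped
)
UPDATE t
SET     t.TITLE_YEAR = n.title_year
FROM    #EmpTitles AS t
JOIN    Numbered   AS n
        ON  n.EMP_ID    = t.EMP_ID
        AND n.Title     = t.Title
        AND n.DateValue = t.DateValue;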
I would like a single SQL query to return every employee's total billable compensation for a year. Their billable rates change throughout the year, so under the employee table (the one side) there is a compensation table (the many side) which has the employee ID, effective date and billable hourly rate. So in a given calendar year they could have many different rates (though usually 2 at most). These rates then have to correspond to, and be multiplied by, their corresponding billable hours from the time sheet table.

I know I could create a series of UNIONs and hard-code the effective dates, i.e.

select from time sheets where employee = john and timesheet.task_date between jan 1 and jun 1, compensation.billable_rate * timesheet.billable_hours
UNION
select from time sheets where employee = john and timesheet.task_date between jun 1 and dec 31, compensation.billable_rate * timesheet.billable_hours

I'd have to do that for every employee in a very large SQL statement. Is there an easier way using straight SQL? If not, could it be done with a stored procedure? Thanks for any insight.
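One possible shape for this, as a hedged sketch only: the table and column names below (Timesheet, Compensation, etc.) are assumptions, and OUTER APPLY picks, for each time sheet row, the rate whose effective date is the latest one on or before the task date:

SELECT  t.employee_id,
        YEAR(t.task_date)                        AS comp_year,
        SUM(t.billable_hours * c.billable_rate)  AS total_billable_comp
FROM    dbo.Timesheet AS t
OUTER APPLY
(
    -- most recent rate in effect on the task date
    SELECT TOP (1) billable_rate
    FROM   dbo.Compensation AS comp
    WHERE  comp.employee_id    = t.employee_id
      AND  comp.effective_date <= t.task_date
    ORDER BY comp.effective_date DESC
) AS c
GROUP BY t.employee_id, YEAR(t.task_date);

Because the rate lookup is correlated per time sheet row, no per-employee UNIONs or hard-coded date ranges are needed.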
I have huge export files in a DB and I need to check if there are any datasets that have the same value in the first column, but a different value in another one, via a query of course.
Like this:
ID   IS NULL
1    1
2    1
3    0
1    0
The expected ID I get as a result of my query should be 1 in this case.
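A minimal sketch, assuming a hypothetical table name dbo.ExportData and the two columns shown above:

SELECT ID
FROM   dbo.ExportData
GROUP BY ID
HAVING COUNT(DISTINCT [IS NULL]) > 1;   -- same ID appears with more than one distinct value in the other column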
I have 3 SSRS 2014 environments (Dev, UAT and Prod). I would like to change the background color of each environment and customize the title 'SQL Server Reporting Services' to 'SSRS Development'.
I would prefer to implement both a background color change and a title change. The reason for this is to make clear to end users which environment they are working with.
Where can I make those minimal changes in SSRS 2014?
It seemed quite easy at first glance. I built it up via string concatenation and thought to execute the dynamic SQL with sp_executesql and get the result. As I don't like dynamic SQL, I was wondering if there is any other way...
SELECT TOP 100
       LTRIM([text]), objectid, total_rows, total_logical_reads, execution_count
FROM   sys.dm_exec_query_stats AS a
CROSS APPLY sys.dm_exec_sql_text(a.sql_handle) AS b
WHERE  last_execution_time >= '2015-04-07 10:01:01.01'
ORDER BY execution_count DESC
But execution_count is cumulative from when the plan was first cached. I want to know the count for one day only.
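A hedged sketch of one workaround: snapshot the DMV once a day into a baseline table (dbo.QueryStatsBaseline is a hypothetical name) and subtract the baseline from the live counters.

-- 1) run once a day (e.g. at midnight) from a SQL Agent job
INSERT INTO dbo.QueryStatsBaseline (capture_date, sql_handle, plan_handle, execution_count)
SELECT CAST(SYSDATETIME() AS date), sql_handle, plan_handle, execution_count
FROM   sys.dm_exec_query_stats;

-- 2) executions so far today = live counter minus this morning's baseline
SELECT  LTRIM(st.[text])                                  AS query_text,
        qs.execution_count - ISNULL(b.execution_count, 0) AS executions_today
FROM    sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
LEFT JOIN dbo.QueryStatsBaseline AS b
       ON  b.sql_handle   = qs.sql_handle
      AND  b.plan_handle  = qs.plan_handle
      AND  b.capture_date = CAST(SYSDATETIME() AS date)
ORDER BY executions_today DESC;

Plans that leave the cache between snapshots will not line up, so treat the numbers as approximate.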
I created a new login and then created a new user [COM] in DB with default schema pointing to [COM]
I then created schema [COM] WITH AUTHORIZATION [COM].
I want this [COM] user to have all the permissions it needs on the [COM] schema only. How do I do that? When I try to create table [Com].Table it gives me a permission denied error.
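A minimal sketch of the likely missing piece: since [COM] owns the [COM] schema it already has full control inside it; what it typically still lacks are the database-level CREATE permissions, which it can then only exercise in schemas it owns or has ALTER on.

GRANT CREATE TABLE     TO [COM];
GRANT CREATE VIEW      TO [COM];
GRANT CREATE PROCEDURE TO [COM];   -- add further CREATE permissions as needed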
I'm looking for a quick script that someone has already written to update statistics (not to rebuild or reorganise) on specific indices in specific databases - I guess looping through a table comprising a list of databases and the indices.
I know Ola has one but I'm not looking for something that complicated. If I cannot find one I'm going to have to write one myself - I want to try and avoid re-inventing the wheel as tomorrow I have to do this work and it's about 7K plus indices in about 10+ databases.
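A hedged sketch of such a loop, assuming a hypothetical control table dbo.StatsTargets (DatabaseName, SchemaName, TableName, IndexName); adjust the sampling option to taste:

DECLARE @db sysname, @schema sysname, @table sysname, @index sysname, @sql nvarchar(max);

DECLARE target_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT DatabaseName, SchemaName, TableName, IndexName
    FROM   dbo.StatsTargets;

OPEN target_cur;
FETCH NEXT FROM target_cur INTO @db, @schema, @table, @index;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- USE inside the dynamic batch scopes the statistics update to the right database
    SET @sql = N'USE ' + QUOTENAME(@db) + N'; '
             + N'UPDATE STATISTICS ' + QUOTENAME(@schema) + N'.' + QUOTENAME(@table)
             + N' ' + QUOTENAME(@index) + N' WITH FULLSCAN;';
    EXEC sys.sp_executesql @sql;

    FETCH NEXT FROM target_cur INTO @db, @schema, @table, @index;
END

CLOSE target_cur;
DEALLOCATE target_cur;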
This stored procedure gets some executable queries from the select statement; the cursor fetches each row, executes the query, and inserts the query into table_3 marked as 'E'. Once 17:00 is reached, the stored procedure stops executing the queries and just takes the queries from the select statement and inserts them into table_3 marked as 'C'.
I don't know why the output in table_3 is quite different from what I expect. The stored procedure comes out with two exactly identical queries, one marked as 'C' and another marked as 'E'.
CREATE PROCEDURE procedure1
AS
DECLARE cursor_1 CURSOR FOR
    SELECT 'This is an executable query' FROM table_1;

DECLARE @table_2  NVARCHAR(MAX);        -- type was missing in the original; assumed to hold the fetched query text
DECLARE @stoptime DATETIME = NULL;
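A hedged sketch of how the loop described above could look (table_3's column names are assumptions, not the original code); a query fetched before 17:00 gets executed and marked 'E', while one fetched after 17:00 is only recorded and marked 'C':

OPEN cursor_1;
FETCH NEXT FROM cursor_1 INTO @table_2;

WHILE @@FETCH_STATUS = 0
BEGIN
    IF CAST(GETDATE() AS time) < '17:00'
    BEGIN
        EXEC sys.sp_executesql @table_2;                                  -- run the query
        INSERT INTO table_3 (query_text, status) VALUES (@table_2, 'E');
    END
    ELSE
    BEGIN
        INSERT INTO table_3 (query_text, status) VALUES (@table_2, 'C');  -- past the cut-off: record only
    END

    FETCH NEXT FROM cursor_1 INTO @table_2;
END

CLOSE cursor_1;
DEALLOCATE cursor_1;

If the same query text appears twice in table_1 and the 17:00 boundary is crossed mid-loop, the earlier fetch would be marked 'E' and the later one 'C', which could explain the duplicate you are seeing.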
I have a user who needs access to views like dbo.viewnameabc1, dbo.viewnameabc2 and so on (dbo.viewnameabc*), and any time such a view is created the user should already have permission to select from it....
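Permissions cannot be granted on a name pattern, but a schema-level grant covers both existing and future views in that schema. A minimal sketch ([SomeUser] is a placeholder):

GRANT SELECT ON SCHEMA::dbo TO [SomeUser];

If granting on all of dbo is too broad, a dedicated schema that holds only those views keeps the grant narrow.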
I have a database that has dozens of tables. Many of these tables reference the employee ID. For example tblDaysOff has a column employeeID that is matched on tblEmployees.ID, and there are many such tables.
Now the employee IDs are changing the way they are generated. Instead of an alphanumeric value being stored as a text value, all employee IDs will be uniqueidentifiers stored as text values. The question is, how can I change every instance of "somevalue" in every record, in every column where the column name is "employeeID", in every table in the database, to "differentvalue" where employeeID = "somevalue"? This is what I have cobbled together from multiple sources ... but there is a syntax error where @max is located.
Code:
USE CsDB
DECLARE @t TABLE(tRow int identity(1, 1), tSchemaName nvarchar(max), tTableName nvarchar(max))
INSERT INTO @t
SELECT SCHEMA_NAME(schema_id), t.name
FROM sys.tables AS t
JOIN sys.columns c ON c.object_id = t.object_id
WHERE c.name LIKE '%employeeID%'
[code]...
Obviously I don't want to run this and then have to try and recover the database when things go wrong.
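Since the rest of the code is not shown, here is a hedged sketch of how the loop over @t could look, with @oldValue/@newValue as hypothetical parameters; it should be tested against a restored copy first:

DECLARE @i int = 1, @max int, @sql nvarchar(max);
DECLARE @oldValue nvarchar(100) = 'somevalue', @newValue nvarchar(100) = 'differentvalue';

SELECT @max = MAX(tRow) FROM @t;

WHILE @i <= @max
BEGIN
    SELECT @sql = N'UPDATE ' + QUOTENAME(tSchemaName) + N'.' + QUOTENAME(tTableName)
                + N' SET employeeID = @new WHERE employeeID = @old;'
    FROM   @t
    WHERE  tRow = @i;

    EXEC sys.sp_executesql @sql,
         N'@new nvarchar(100), @old nvarchar(100)',
         @new = @newValue, @old = @oldValue;

    SET @i += 1;
END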
How do I do this? I have an employee table; every employee has a unique ID "empid":

empid   VAL_OK
--------------
111     0
222     0
333     0

Now multiple rows are inserted into my work_table: shifts for the whole month for every employee, like this (this is work_table):

empid   date         val
--------------------------
111     01/02/2008   1
111     02/02/2008   2
...............
111     29/02/2008   5
-- next employee
222     01/02/2008   1
222     02/02/2008   4
...............
222     29/02/2008   6
-- next employee
333
-- next employee
444
-- next employee
555

Now, for every employee whose shifts for the whole month were inserted OK, I need to go to TB_Employee and update that employee once, from VAL_OK = 0 to VAL_OK = 1, like this:

empid   VAL_OK
--------------
111     1
222     1
333     1

That way I know which employees have a shift for the whole month and which do NOT!
I think it should be something like this:
Code Snippet
CREATE TRIGGER for_insert ON tb_work
FOR INSERT
AS
BEGIN
    -- works whether one row or many rows were inserted:
    -- flag every employee that appears in the inserted rows
    UPDATE e
    SET    val_ok = 1
    FROM   tb_employee AS e
    WHERE  e.empid IN (SELECT empid FROM inserted);
END
I have two queries that give me the total sales amount for the current year, and the last year.
SELECT SUM([Sales (LCY)])
FROM   [$Cust_ Ledger Entry] cle
LEFT OUTER JOIN dw.dim.FiscalDate fd ON fd.CalendarDate = cle.[Posting Date]
WHERE  [Customer No_] = '10135'
  AND  fd.CalendarYear = '2013'
[Code] ....
I would like to learn how to make this a single query that ends up with two columns and their summed totals, like it shows in the attached image.
This is my query without the columns I need:
SELECT c.CustomerNumber
     , c.Name
     , c.ChainName
     , c.PaymentTermsCode
     , cle.CreditLimit AS 'CreditLimit'
     , SUM(cle.Amount) AS 'Amount'
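A hedged sketch of the conditional-aggregation approach, reusing the names from the first query; the join back to the customer table is an assumption:

SELECT  c.CustomerNumber
      , c.Name
      , SUM(CASE WHEN fd.CalendarYear = '2013' THEN cle.[Sales (LCY)] ELSE 0 END) AS CurrentYearSales
      , SUM(CASE WHEN fd.CalendarYear = '2012' THEN cle.[Sales (LCY)] ELSE 0 END) AS LastYearSales
FROM    [$Cust_ Ledger Entry] cle
LEFT OUTER JOIN dw.dim.FiscalDate fd ON fd.CalendarDate = cle.[Posting Date]
JOIN    Customer c ON c.CustomerNumber = cle.[Customer No_]      -- assumed join
WHERE   fd.CalendarYear IN ('2012', '2013')
GROUP BY c.CustomerNumber, c.Name;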
We have a database on a 2005 box, which we need to keep in sync with one on a 2014 box (until we can turn off the one on 2005). The 2005 database is still being updated with changes that must be applied to the 2014 database, given the nature of the data (medical documents) we need to ensure updates are applied to the 2014 database in very near real time (these changes are - for example - statuses, not the documents themselves).
Cunning plan #1, ugly - I'm not at all a fan of triggers - but use an AFTER UPDATE trigger to run an sp on the remote box via a linked server in this format, with a SQL Server login for the linked server that has permissions to EXEC the remote proc.
CREATE TRIGGER [dbo].[SourceUpdate]
ON [dbo].[SourceTable]
AFTER UPDATE
AS
SET XACT_ABORT ON;
SET NOCOUNT ON;
IF UPDATE(ColumnName)
[Code] ....
However, while the sp can be run against the linked server as a standalone query OK, when running it in a trigger it's throwing
OLE DB provider "SQLNCLI" for linked server "WIBBLE" returned message "The transaction manager has disabled its support for remote/network transactions.".
Msg 7391, Level 16, State 2, Procedure TheAfterUpdateTrigger, Line 19
The operation could not be performed because OLE DB provider "SQLNCLI" for linked server "WIBBLE" was unable to begin a distributed transaction.
Is it actually possible to call a proc on a remote box via a trigger, and if so, what additional hoops need to be jumped through (like I said, it runs OK when called via SSMS)?
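One option, as a hedged sketch: the trigger runs inside the local transaction, so the linked-server call tries to promote to a distributed transaction (hence MSDTC and error 7391). The linked-server option below stops that promotion so the remote proc runs in its own transaction:

EXEC master.dbo.sp_serveroption
     @server   = N'WIBBLE',
     @optname  = N'remote proc transaction promotion',
     @optvalue = N'false';

The trade-off is that the remote work no longer rolls back with the local transaction; a more robust pattern is for the trigger to write to a local queue table and have a job (or Service Broker) push the changes across, which also keeps the trigger fast.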
I have this script in my database, but it always reports 2054 rows affected, and if I actually DO change something it doesn't even notice...
UPDATE a
SET    a.[omschrijving] = SP.[omschrijving]
     , a.[verkoopprijs] = SP.[verkoopprijs]
     , a.[gewijzigd]    = getDate()
FROM   [artikelen] a
LEFT OUTER JOIN [Hofstede].[dbo].[sparepartsupdate] SP ON a.PartNrFabrikant = sp.PartNrFabrikant
WHERE  ((A.omschrijving != SP.[omschrijving]) OR (A.[verkoopprijs] != SP.[verkoopprijs]))
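A hedged guess at the cause, with a sketch: != is never true when either side is NULL, so rows where one of the compared columns is NULL are silently skipped or matched in unexpected ways. An EXISTS/EXCEPT test compares the values NULL-safely (the inner join is an assumption - unmatched rows cannot be updated from SP anyway):

UPDATE a
SET    a.[omschrijving] = SP.[omschrijving]
     , a.[verkoopprijs] = SP.[verkoopprijs]
     , a.[gewijzigd]    = GETDATE()
FROM   [artikelen] a
JOIN   [Hofstede].[dbo].[sparepartsupdate] SP ON a.PartNrFabrikant = SP.PartNrFabrikant
WHERE  EXISTS (SELECT a.omschrijving, a.verkoopprijs
               EXCEPT
               SELECT SP.omschrijving, SP.verkoopprijs);   -- true only when the values really differ, NULL-safe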
One process (a service) inserts data into this table, and also updates certain fields of the table periodically.
Another process ( SQL Job ) updates the table with certain defaults and rules that are unknown to the service - to deal with some calculations and removal of null values where we can estimate the values etc.
These 2 processes have started to deadlock each other horribly.
The SQL Job calls one stored procedure that has around 10 statements in it. This stored proc runs every minute. Most of the statements are of the form below - the idea being that once this has corrected the data, the update will not affect those rows again. I guess there are read locks on the selecting part of this query - but usually it updates 0 rows - so I am wondering if locks are still taken?
UPDATE s
SET    equivQty = Qty * ISNULL(p.Factor, 4.5) / 4.5
FROM   Stock s
LEFT OUTER JOIN Pack p ON s.Product = p.ProductId AND s.Pack = p.PackId
WHERE  ISNULL(equivQty, 0) <> Qty * ISNULL(p.Factor, 4.5) / 4.5
The deadlocks are always between these statements from the stored procedure - and the service updating rows. I can't really see how the deadlocks occur but they do.
Another suggestion has been to try and use an exists before the update as below
IF EXISTS ( SELECT based on the above criteria )
BEGIN
    UPDATE as before
END
Does this reduce the locking at all? I don't know how to test the theory - I added this code to some of the statements, and it didn't seem to make much difference.
Is there a way to make a process (in my case the stored procedure) give up if it can't acquire the locks, rather than being deadlocked - which leads to job failures and emails etc.?
We are currently trying to filter down the data that is updated to be only the last few months - to reduce the amount of rows even analyzed - as the deadlocking does seem to be impacted by the number of rows in the tables.
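On the "give up rather than deadlock" question, a hedged sketch of two session options that make the job's procedure back off; the 2-second timeout and the error handling are illustrative choices, not a tuned recommendation:

SET DEADLOCK_PRIORITY LOW;   -- if a deadlock happens, this session is chosen as the victim
SET LOCK_TIMEOUT 2000;       -- stop waiting for a lock after 2 seconds (raises error 1222)

BEGIN TRY
    UPDATE s
    SET    equivQty = Qty * ISNULL(p.Factor, 4.5) / 4.5
    FROM   Stock s
    LEFT OUTER JOIN Pack p ON s.Product = p.ProductId AND s.Pack = p.PackId
    WHERE  ISNULL(equivQty, 0) <> Qty * ISNULL(p.Factor, 4.5) / 4.5;
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() IN (1205, 1222)        -- deadlock victim / lock request timeout
        PRINT 'Skipped this run; the next scheduled run will pick the rows up.';
    ELSE
        THROW;                               -- SQL Server 2012+; use RAISERROR on older versions
END CATCH;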
I have got 4 MS Access database files, each with 3 tables - 12 tables in total - which get updated with new data every evening by an external application, i.e. new data gets appended to all 12 tables.
I want to have the exact same 4 databases, with 3 tables each (12 tables in total), but WITHIN MS SQL SERVER, and then update all 12 of these tables every evening with the corresponding updates from the respective tables in the MS Access databases.
I do not want to manually update all 12 tables into SQL Server every evening. Hopefully there is some easier method to do this automatically.
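One hedged sketch of the automatic route: a nightly SQL Agent job that appends only the new Access rows via the ACE OLE DB provider (the file path, table name and key column are assumptions; 'Ad Hoc Distributed Queries' and the ACE provider must be enabled/installed on the server). SSIS packages scheduled the same way are the other common option.

INSERT INTO dbo.Table1 (ID, Col1, Col2)
SELECT a.ID, a.Col1, a.Col2
FROM   OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                  'C:\Data\Database1.accdb'; 'Admin'; '',
                  'SELECT ID, Col1, Col2 FROM Table1') AS a
WHERE  NOT EXISTS (SELECT 1 FROM dbo.Table1 t WHERE t.ID = a.ID);   -- append only rows not yet imported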
I'm working on databases where the statistics of some indexes (tables) are changing too frequently. Once I update them manually, one minute later they show a 10-20% change, and five minutes later over a 100% change. The tables are updated very frequently (multiple times a second).
When I run a query to read from sys.stats, sys.dm_db_stats_properties and other dynamic views, I see that they were last updated when I did it manually, even though the modification rate has passed the 500 + 20% threshold (the tables have tens of thousands of rows). Auto create and auto update statistics are set to true on all databases, and I don't know why SQL Server does not do it automatically.
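One point worth noting: auto-update statistics is not timer-based - stale statistics are only refreshed when a query that would use them is compiled or recompiled, so the counters can sit above the threshold until the next compilation. A sketch for watching the modification counter against the classic pre-2016 threshold:

SELECT  OBJECT_NAME(s.object_id)  AS table_name,
        s.name                    AS stats_name,
        sp.last_updated,
        sp.rows,
        sp.modification_counter,
        500 + 0.20 * sp.rows      AS classic_threshold    -- default threshold before SQL Server 2016
FROM    sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE   OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY sp.modification_counter DESC;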
I have a job scheduled that imports a table from an Oracle database. The job runs at 3am and reports success. But for some reason, when I query the table to see how many records there are, I see the same row count as the day before (it should increase every day - student enrollment). When I execute the package manually, the table updates fine.
We face a slow performance issue: the same query takes a long time to execute after we rebuild and reorganize indexes. But after executing the query or procedure 2-3 times, performance becomes faster. I have the following questions:
1. Do we need to update statistics after we rebuild and reorganize indexes?
2. Will every query and stored procedure be slow for the first 1-2 executions after we rebuild and reorganize indexes?
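For question 1, a hedged sketch of the usual rule of thumb: ALTER INDEX ... REBUILD refreshes that index's statistics with a full scan as a side effect, while REORGANIZE does not touch statistics at all, so a statistics update is commonly scheduled after a reorganize (dbo.SomeTable and the sample size are placeholders). The first slow executions are typically the plan recompiles and cold cache that follow maintenance, not something a further rebuild would fix.

ALTER INDEX ALL ON dbo.SomeTable REORGANIZE;              -- does not update statistics
UPDATE STATISTICS dbo.SomeTable WITH SAMPLE 50 PERCENT;   -- refresh statistics afterwards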
If you were to do a fresh install it would set permissions on the disk so everything just works.
Now, when changing the service account (e.g. to a domain user) using Configuration Manager, does it do the same magic (except possibly where the database data/log files are on another disk)? Or do you need to trawl through the dozens of folders and assign rights manually?
I am trying to execute a stored procedure to update a table and I am getting Invalid Object Name. I am creating a CTE named Darin_Import_With_Key and I am trying to update table [dbo].[Darin_Address_File]. If I remove one of the update statements it works fine; it just doesn't like trying to execute both. The message I am getting is: Msg 208, Level 16, State 1, Line 58 Invalid object name 'Darin_Import_With_Key'.
BEGIN
    SET NOCOUNT ON;

    WITH Darin_Import_With_Key AS
    (
        SELECT [pra_id]
             , [pra_ClientPracID]
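The likely issue, with a hedged sketch: a CTE is only in scope for the single statement that immediately follows it, so the second UPDATE cannot see Darin_Import_With_Key. Materializing the rows into a temp table lets both UPDATE statements reuse them; the source table, columns and join conditions below are placeholders, not the real ones.

SELECT [pra_id], [pra_ClientPracID]
INTO   #Darin_Import_With_Key
FROM   dbo.Darin_Import_Source;                 -- placeholder for the CTE's real source query

UPDATE a
SET    a.SomeColumn = k.[pra_id]                -- placeholder column
FROM   [dbo].[Darin_Address_File] AS a
JOIN   #Darin_Import_With_Key     AS k ON k.[pra_ClientPracID] = a.ClientPracID;   -- placeholder join

UPDATE a
SET    a.OtherColumn = k.[pra_ClientPracID]     -- placeholder column
FROM   [dbo].[Darin_Address_File] AS a
JOIN   #Darin_Import_With_Key     AS k ON k.[pra_id] = a.PracticeKey;              -- placeholder join

DROP TABLE #Darin_Import_With_Key;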
I have a table #vert where I have value column. This data needs to be updated into two channel columns in #hori table based on channel number in #vert table.
CREATE TABLE #Vert (FILTER VARCHAR(3), CHANNEL TINYINT, VALUE TINYINT)
INSERT #Vert VALUES ('ABC', 1, 22),('ABC', 2, 32),('BBC', 1, 12),('BBC', 2, 23),('CAB', 1, 33),('CAB', 2, 44)
-- COMBINATION OF FILTER AND CHANNEL IS UNIQUE

CREATE TABLE #Hori (FILTER VARCHAR(3), CHANNEL1 TINYINT, CHANNEL2 TINYINT)
INSERT #Hori VALUES ('ABC', NULL, NULL),('BBC', NULL, NULL),('CAB', NULL, NULL)
-- FILTER IS UNIQUE IN #HORI TABLE
One way to achieve this is to write two update statements. After update, the output you see is my desired output
UPDATE H SET CHANNEL1 = VALUE
FROM #Hori H JOIN #Vert V ON V.FILTER = H.FILTER
WHERE V.CHANNEL = 1   -- updates only CHANNEL1

UPDATE H SET CHANNEL2 = VALUE
FROM #Hori H JOIN #Vert V ON V.FILTER = H.FILTER
WHERE V.CHANNEL = 2   -- updates only CHANNEL2

SELECT * FROM #Hori   -- this is the desired output
The number of channels in the #Vert table grows (1, 2, 3, 4, ...), and so do CHANNEL3, CHANNEL4, ... in the #Hori table, so I cannot keep writing more and more update statements. One other way is to pivot the #Vert table and do a single update into #Hori.
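A sketch of the single-update version using conditional aggregation over #Vert (equivalent to a pivot); note that each extra channel still needs one extra line here, so a fully open-ended channel count would need dynamic SQL:

UPDATE H
SET    CHANNEL1 = P.C1,
       CHANNEL2 = P.C2
FROM   #Hori AS H
JOIN  (SELECT FILTER,
              MAX(CASE WHEN CHANNEL = 1 THEN VALUE END) AS C1,
              MAX(CASE WHEN CHANNEL = 2 THEN VALUE END) AS C2
       FROM   #Vert
       GROUP BY FILTER) AS P
  ON   P.FILTER = H.FILTER;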
I do not insert/update/delete on the view directly.
For every insert/update in table A/B, the values should get inserted/updated in the view respectively, and this insert/update on the view should invoke the trigger.
But I am unable to get this trigger to fire on the view when an insert/update occurs at the base table level.
The trigger works only if the operation is done directly on the view.
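That is expected behaviour: a trigger on a view (INSTEAD OF) only fires for DML issued against the view itself, not for changes made directly to the base tables. To react to base-table changes, the trigger has to live on the base tables; a minimal sketch with placeholder names (dbo.TableA, dbo.AuditLog):

CREATE TRIGGER trg_TableA_Change
ON dbo.TableA
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- do whatever the view trigger was meant to do, once per affected row
    INSERT INTO dbo.AuditLog (SourceTable, ChangedAt)
    SELECT 'TableA', SYSDATETIME()
    FROM   inserted;
END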
We have a "main" SQL 2014 server who imports XML files using SSIS in a datacenter. In remote sites (which are warehouses), there is an instance of SQL 2014 Express. A merge replication is setup, as every operations done on each site must be "forwared" to the main database, as some XML files are generated as output for an ERP system.
Now, the merge replication replicate all the data to the server on each sites. But a specific site don't need the data of every other sites, only the data relevant to itself (which is the warehouse code). Is there a way to replicate only the data relevant to each individual sites to the subscribers? Or is there a better way than replication to accomplish this?
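Merge replication supports this through parameterized row filters, so each subscriber only synchronizes the rows for its own warehouse. A hedged sketch (publication, article and column names are placeholders); HOST_NAME() can be overridden per subscriber via the Merge Agent's -Hostname parameter so it carries the warehouse code:

EXEC sp_addmergearticle
     @publication         = N'WarehousePub',
     @article             = N'Orders',
     @source_object       = N'Orders',
     @type                = N'table',
     @subset_filterclause = N'WarehouseCode = HOST_NAME()';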