SQL Server 2012 :: Delete Last Job In Parallel Processing?
Mar 18, 2015
I created a main program that launches two jobs at a time; each job does some processing, and at the end I try to delete those jobs after storing the job details in a custom table I created (a cleanup sub-program).
Of the two jobs, I am able to store the details of one (job_name, job_id, start_time and end_time) in the custom table and to delete that job, but the job that completes last is neither captured nor deleted from the sysjobs and sysjobhistory tables.
I included this step (which calls the cleanup sub-program to store the job details and delete the job) at the end. I can see from a debug message that the cleanup procedure is being called, but it neither stores the details nor deletes the job.
When I execute the cleanup program separately, it does store the job details and delete the job.
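For reference, a minimal sketch of such a cleanup step, assuming a custom audit table named dbo.JobAudit (a made-up name). One detail worth knowing: the job outcome row in sysjobhistory (step_id = 0) is written only after the job finishes, so if this runs as the job's own last step, the still-running job's final history may not exist yet, which would match the behavior described.

DECLARE @job_name sysname = N'ChildJob1';  -- hypothetical job name

INSERT INTO dbo.JobAudit (job_name, job_id, start_time, end_time)
SELECT j.name, j.job_id,
       msdb.dbo.agent_datetime(h.run_date, h.run_time) AS start_time,
       DATEADD(SECOND,
               (h.run_duration / 10000) * 3600
             + (h.run_duration / 100 % 100) * 60
             + (h.run_duration % 100),
               msdb.dbo.agent_datetime(h.run_date, h.run_time)) AS end_time
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobhistory AS h
  ON h.job_id = j.job_id
 AND h.step_id = 0                 -- step 0 = the job outcome row
WHERE j.name = @job_name;

EXEC msdb.dbo.sp_delete_job @job_name = @job_name;  -- removes the job and, by default, its history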
Dec 9, 2005
SQL Server 2000 SP3a. Last week one of our processes started issuing or suffering deadlock-detected errors every 15 minutes or so. I have read several articles at MS on the subject. I set a couple of startup parameters related to producing deadlock detection information and ran SQL Profiler. I found the SQL statements being issued by the deadlocked sessions. In every deadlock the same UPDATE statement appears, though the data values being searched on are different. As best I can tell from querying the actual data, each update hits only one or very few rows. No indexed column is updated, so the indexes should not be the source of conflict.
Looking at the query I noticed that it has no available index, and Query Analyzer shows the full table scan being done in parallel.
My question: does SQL Server change or modify its locking rules when queries are run using parallel processing? If so, do you have a reference?
Here are the deadlock entries posted to the error log:
SPID=167
ResType:LockOwner Stype:'OR' Mode: IX SPID:63 ECID:0 Ec:(0x65971510) Value:0x3c577e60 Cost:(0/0)
Input Buf: Language Event: UPDATE Station_Upload SET Station_Accept_Status = 'ACC', HeadStatus = 'ACC', LastProcessedSta = '110', HeadPartType = '1' WHERE Part_Serial_No = 'SCH1119323' AND Station = 'H110'
SPID=63
ResType:LockOwner Stype:'OR' Mode: IX SPID:167 ECID:0 Ec:(0x65801510) Value:0x3c27d060 Cost:(0/0)
Input Buf: Language Event: UPDATE Station_Upload SET Station_Accept_Status = 'ACC', HeadStatus = 'ACC', LastProcessedSta = '70', HeadPartType = '1' WHERE Part_Serial_No = 'SCH1119060' AND Station = 'H070'
I have suggested adding an index to support the query. Any ideas?
Thanks -- Mark D Powell
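Based on the WHERE clause of the deadlocking UPDATE, a supporting index would let each statement seek its one row instead of scanning the table in parallel. A sketch, using the table and column names from the log above (the index name is made up):

CREATE INDEX IX_Station_Upload_Serial_Station
    ON dbo.Station_Upload (Part_Serial_No, Station);
-- Each UPDATE filters on Part_Serial_No and Station, so this index
-- turns the parallel full scan into a single-row seek.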
Oct 7, 2006
Hi,
I am facing some problems while using FOR loop containers to execute 7-10 packages in parallel.
The main package has 7 FOR loop containers say F1-F7.
Each FOR loop container has 2 tasks:
T1==> exec child package C1
T2==> exec delay task Delay1.
The idea is to run child packages C1-C7 in parallel, delay for some time, and then run them again, since they are inside FOR loop containers.
I am facing some problems:
1. The execution of tasks T1-T7 is not guaranteed. SSIS picks any 6 of the tasks T1-T7 at random to start with; 6 is the maximum it processes at once, whereas I have more than that. Can I change this setting?
2. It's not guaranteed that if, say, task T1 of FOR loop F1 is executed, the subsequent Delay task within that FOR loop will be executed next. Typically what happens is it starts with T1-T6 (T7 on hold), then executes the Delay for T1-T5 and passes control to T7 without going into the Delay for T6. This is not the intended execution.
What I want is: execute T1-T7, delay until the next execution, and start again.
How do I do this?
Apr 12, 2007
I've got an SSRS report that is set up using a data-driven subscription to supply input parameters to the stored procedure that is called to generate the report results.
I was wondering if there is any way to specify the execution processing method (running the reports in parallel or serially). The subscription that we have set up appears to run all of the reports in parallel, which is causing massive load on our servers.
Thanks.
Jul 22, 2015
I am creating an SSIS Script Task that will be used to process SSAS dimensions and partitions and, ideally, log the details of each in a table. I am looking for any info on the benefits or drawbacks of using the built-in SSAS parallel processing as opposed to doing it manually in a multi-threaded "Parallel.ForEach" loop using the .NET AMO library.
In my testing, when I use a Parallel.ForEach loop, I am able to obtain and log information about each object, such as end time and time to process, immediately after it is processed. This allows me to keep a history of processing time for each object:
public void processDimensions(Server Server, Database Database, ProcessType processType)
{
Parallel.ForEach(Database.Dimensions.OfType<Microsoft.AnalysisServices.Dimension>(), d =>
{
DateTime beginTime = DateTime.Now;
try
{
d.Process(processType);
[code]....
If circumventing the built-in SSAS parallel processing is not best practice I'd like to know in advance before I go too far down that path.
Jan 22, 2015
I have a stored procedure that calls several views that rely on each other. In the past these views used to go parallel and use up 100% of the CPU (12 cores); now when the same stored procedure runs it only uses 8% of the CPU (1 core). This extends the time spent on the query from roughly 10-15 seconds to 2-3 minutes, and I'm not sure why it is happening.
Are there obvious things to look at when optimizing views to utilize all cores/threads? Also, it doesn't matter whether I set Cost Threshold for Parallelism to 1, 5, or 50; the behavior is always the same. I also have Max Degree of Parallelism set to 0, which should mean all cores are used when available.
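As a starting point, the instance settings mentioned above can be confirmed, and per-session worker counts inspected, with standard tools (a sketch; note that certain constructs, such as a scalar user-defined function referenced by a view, force a serial plan regardless of these settings):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism';
EXEC sp_configure 'max degree of parallelism';

-- How many worker tasks each active session is really using:
SELECT session_id, COUNT(*) AS tasks
FROM sys.dm_os_tasks
GROUP BY session_id
ORDER BY tasks DESC;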
Aug 5, 2015
I have a table with 8 columns and 1 million records, and I need to update data in multiple columns on it. A single UPDATE was taking too long, so I broke it into multiple UPDATE statements running in parallel, each statement updating a different column.
This approach is working fine but I am getting the deadlock error.
Transaction (Process ID 65) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
I tried with various lock hints but no success.
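Deadlocks on "communication buffer" (exchange) resources typically involve parallel plans. One hedged workaround is to force each concurrent UPDATE to run serially, and/or to split the work by disjoint key ranges rather than by column (all names below are made up for illustration):

UPDATE dbo.BigTable
SET    Col1 = UPPER(Col1)
WHERE  Id BETWEEN 1 AND 250000      -- each session takes a disjoint range
OPTION (MAXDOP 1);                  -- a serial plan avoids exchange deadlocks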
Jun 20, 2014
I have an XML file of about 25MB. When I read it with OPENROWSET it gives me 'System.OutOfMemoryException'. The machine I'm running on has 16GB of RAM and memory is definitely not exhausted. When I run smaller XML files it works fine.
I've read that older versions of SQL Server had this problem, caused by the parser having a limited amount of memory. Is this still the case? Is there a way to change this?
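For comparison, the usual way to load a file of this size is OPENROWSET(BULK ... SINGLE_BLOB), which reads the file as a single value before casting it to XML (the path is made up; this sidesteps some parsing paths but may not cure a parser-side memory limit):

DECLARE @xml XML;
SELECT @xml = CAST(BulkColumn AS XML)   -- BulkColumn is the column OPENROWSET(BULK) returns
FROM OPENROWSET(BULK 'C:\data\bigfile.xml', SINGLE_BLOB) AS x;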
Feb 24, 2015
We have a monitoring tool that finds the query using the most execution time across all sessions on the server.
I located it and found it is a Data Flow task in an SSIS package. It exports data from a table holding a large amount of data to another database.
I know it executes only once, but in the monitoring tool I see it execute 4 times. Is that because SSIS is doing parallel execution automatically? We use all default settings, and the server has 4 physical CPUs.
The tool also says the query is slow because of a wait called PREEMPTIVE_OS_WAITFORSINGLEOBJECT; I am not sure what that means.
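For what it's worth, PREEMPTIVE_OS_* waits mean the worker thread has left SQL Server's cooperative scheduler to wait on a Windows call (here, WaitForSingleObject); it often shows up while an external client such as SSIS is consuming rows. A quick way to watch what the export's session is waiting on while it runs:

SELECT session_id, status, command, wait_type, wait_time, last_wait_type
FROM sys.dm_exec_requests
WHERE session_id > 50;   -- user sessions only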
Nov 6, 2015
I am trying to work out the ideal CPU count and Max Degree of Parallelism for a 3rd-party database server. The server has 12 CPUs and 32GB RAM, and all database sizes add up to < 30GB, so they can all fit in memory (I tried to force this by doing a SELECT * from every table). On certain payroll days, the CPU maxes out at 100% for a few seconds.
MAXDOP was originally set to the default of 0. We later changed it to 8 based on several 'best-practices' articles. However, the vendor suggests changing it to 1 (no parallelism), while others suggest 4, so that one runaway query doesn't hog most of the CPUs.
I'd like to find out how many CPUs are actually being used by queries. There is a Degree of Parallelism event in URL.... Its BinaryData column says:
0x00000000 indicates a serial plan running in serial.
0x01000000 indicates a parallel plan running in serial.
>= 0x02000000 indicates a parallel plan running in parallel.
What does "parallel plan running in serial" mean? I see a lot of 0x01000000's and a few 0x08000000's in my trace. How can I determine whether one query is hogging CPUs and whether reducing MAXDOP to 4 will work?
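If the decision is made to cap parallelism, the change itself is a one-liner per setting (a sketch; the value 4 is just the option discussed above, not a recommendation):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;   -- takes effect immediately, no restart needed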
Jun 1, 2015
I come from other ETL tools (Oracle Warehouse Builder, BODI, BODS & DataStage) and I'm having trouble finding the best practice for scheduling a collection of packages to be processed in parallel and retrying those that fail. I created a staging project which contains all the packages (50) that extract data from one source system, and grouped the packages into 2 sequence containers to make sure that the 'heavy' packages are started first and together in parallel.
I soon discovered that there is no standard option to have one child package retry on failure. Currently if 1 package fails the whole project is retried.
I explored checkpoints as a solution but that seems a dead end when running packages in parallel.
There seem to be 2 solutions for my issue:
(1) create a loop around every EPT with 3 variables (waittime, retry_counter & succes_flag)
(2) create an event handler to keep a list of ID's that failed and enable/disable EPT's based on that list (there's a lot more to it).
Option 1 seems like a lot of bloatware for what I expected to be standard functionality. I'm still investigating option 2.
How do others handle this kind of scheduling? Is it so different with SSIS that I'm approaching it incorrectly?
Dec 11, 2014
I have a scenario where I have to run an update task on multiple servers in parallel, and once all of them are completed (success or failure) another task is to be run on another server.
1. In a maintenance plan, if we add tasks which are not joined, will they run in parallel at the same time?
2. If we link the last task to all the other tasks with link type 'Completed', will the last task run after all the tasks are completed, or when any one of them is completed? (I have a big doubt here.)
The business requirement behind this is to bring data from multiple servers into local shadow copies and then process them together. It's OK if some server's data transfer fails, but it's not OK to start processing centrally while a data transfer is still going on. Further, we want to run the data transfers from multiple servers in parallel to save time.
Aug 19, 2015
Is the SQL Server Profiler Reads Column Incorrect For Parallel Plans?
I often use Profiler as one tool to identify bad plans. The Reads column gives me a good indication of excessive IO to dig into and correct if necessary. I often use it with Showplan so I can see what a query does, replicate it, and fix it.
However, I have just lost some faith in it. I am looking at a poorly performing query joining five tables. A parallel plan has been generated, and one table is being scanned (in parallel) due to a missing index. This table has in excess of 4 million rows; the rest hit indexes well. Yet the entire query registers ONLY 12 reads.
Once corrected, a single-processor plan is used. This looks really efficient and uses 120 reads. That looks like the right figure to me.
Does the profiler only display one thread of a parallel plan perhaps? Or something else?
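A hedged cross-check on Profiler's numbers is SET STATISTICS IO, which reports logical reads per table for your own session, independent of the trace:

SET STATISTICS IO ON;
-- run the five-table query here and compare the per-table
-- 'logical reads' in the Messages tab with Profiler's Reads column
SET STATISTICS IO OFF;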
Oct 22, 2015
I have a deadlock trace running on a production server that is filling up with the following:
Lock: Deadlock Chain
Parallel query worker thread was involved in a deadlock SPID 8
105 - Resource Type Exchange
SPID 8 is a SQL process, the LOCK MONITOR.
There are no graphs as seen with a typical deadlock, so it is not two queries deadlocking, and I've never seen a deadlock involving 'LOCK MONITOR'.
Mar 27, 2012
I am trying to create a calculated member using the ParallelPeriod function in SSAS 2012. I have 10 measures for which I need to create a parallel period.
I can successfully create one for a single measure, but when I add multiple measures it breaks. Below is the sample I tried for multiple measures:
sum(ParallelPeriod([Date].[Calendar].[year], 1, [Date].[Calendar].currentmember),
    ([Measures].[Revenue], [Measures].[Expenses]))
Jun 11, 2014
Is there any way I can turn off DELETE in SQL Server? I want to prevent anyone from inadvertently deleting rows in tables. I thought, worst case, I could have triggers on tables to perform a rollback.
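Two hedged options along those lines, with illustrative names (SomeRole and dbo.ImportantTable are made up): deny the permission outright, or use an INSTEAD OF trigger so the delete simply never happens.

-- Option 1: deny DELETE for a role (or user) across a schema
DENY DELETE ON SCHEMA::dbo TO SomeRole;

-- Option 2: an INSTEAD OF DELETE trigger that blocks all deletes
CREATE TRIGGER trg_NoDelete ON dbo.ImportantTable
INSTEAD OF DELETE
AS
BEGIN
    RAISERROR('Deletes are disabled on this table.', 16, 1);
END;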
Apr 22, 2015
I am using SQL 2012 SE. I have 2 databases, say A and B, with the same structure and relationships. There are 65 tables in each database. A is already replicating data to database C for 35 tables. Now I need to move data from A to B that is newer than getdate()-1, every day, for all the tables, and once the move is done I need to delete this data from A; then the same thing the next day, and every day after. Since this covers 65 tables, it is challenging to identify the insert order. Once the insert order is identified, the delete order will be the reverse of it.
Is there a tool or any SP that could generate the insert-order script? Generate Scripts (data only) scripts the entire data set, and these databases are almost 400GB, with some tables at 200M+ rows, so it takes forever.
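As a starting point, the foreign-key graph that determines the order can be read from the catalog views; turning this output into a full insert order is still a topological sort, but for 65 tables it is usually manageable by hand (a sketch):

SELECT fk.name                              AS fk_name,
       OBJECT_NAME(fk.parent_object_id)     AS child_table,   -- insert later, delete earlier
       OBJECT_NAME(fk.referenced_object_id) AS parent_table   -- insert earlier, delete later
FROM sys.foreign_keys AS fk
ORDER BY parent_table, child_table;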
Apr 4, 2014
I have an issue with a DELETE statement. In the code given below (part of the actual proc), if we use TRUNCATE to clean the temp tables, everything goes fine. But if I use DELETE in place of TRUNCATE, the system skips the IF block 'if (@script_type = 1 OR @script_type = 2)'. I am not able to understand this behavioral difference between DELETE and TRUNCATE. Recently the database has been used for replication, but that should not be a reason.
SELECT @max_rows = COUNT('X') FROM #temp_table1
SET @row_cnt = 1
WHILE @row_cnt <= @max_rows
BEGIN
[code],...
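One concrete difference worth ruling out (a self-contained sketch, not the original proc): TRUNCATE resets an IDENTITY seed while DELETE does not, so any logic that depends on regenerated identity values can diverge between the two.

CREATE TABLE #t (id INT IDENTITY(1,1), v CHAR(1));
INSERT INTO #t (v) VALUES ('a'), ('b');

DELETE FROM #t;        -- identity continues: next id is 3
-- TRUNCATE TABLE #t;  -- identity resets: next id would be 1

INSERT INTO #t (v) VALUES ('c');
SELECT id FROM #t;     -- 3 after DELETE, 1 after TRUNCATE
DROP TABLE #t;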
Jun 16, 2015
I have a table with about 466 million rows. In this table there is an int column called WeeksToRetain, as well as an EventDate column containing the date the row was inserted. I am trying to delete all the rows that should be deleted according to WeeksToRetain. For example, if the EventDate is 5/07/15 with a 1 in the WeeksToRetain column, the row should be removed by 5/14/15. I am not sure what days SQL considers the beginning and end of the week. However, the core issue I am having is the sheer mass of deletions I must do and the resulting log growth.
So I am trying to do the delete in batches. More specifically, I want to load a temporary table with a million rows, then use it to load a sub temporary table with 100,000 rows and join that to the table I want to delete from, looping 10 times to get through the million. The Logging.EventLog table, which is the table I'm trying to purge, has a clustered index on EventDate (ASC). I would like to run this as a scheduled job with enough time between executions for log backups to run.
DECLARE @i int
DECLARE @RowCount int
DECLARE @NextBatchDate datetime
CREATE TABLE #BatchProcess
(
EventDate datetime,
ApplicationID int,
[Code] .....
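For comparison, a simpler batched pattern keeps each delete in its own small transaction so log backups can truncate the log between batches (a sketch; the week arithmetic assumes WeeksToRetain counts whole weeks from EventDate, and the DATEADD on the column means each batch scans rather than seeks):

DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (100000) FROM Logging.EventLog
    WHERE DATEADD(WEEK, WeeksToRetain, EventDate) < GETDATE();

    SET @rows = @@ROWCOUNT;
    -- optionally WAITFOR DELAY here so log backups can run between batches
END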
Aug 4, 2015
I have a table 'library' with two columns:
user_name, access_time
Every user has more than 500 entries; I want to keep only the top 50 records for every user and delete the remaining records.
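A common pattern for this is a ROW_NUMBER delete through a CTE (a sketch; 'top 50' is interpreted here as the 50 most recent access_time values per user):

WITH ranked AS
(
    SELECT ROW_NUMBER() OVER (PARTITION BY user_name
                              ORDER BY access_time DESC) AS rn
    FROM dbo.library
)
DELETE FROM ranked
WHERE rn > 50;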
Aug 18, 2015
I have a table which holds XML data, so deletes take a lot of time.
Here's the requirement: archive all data older than 2 months, then delete it.
What would be a good approach for this?
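One hedged approach does the archive and the delete in a single statement, in batches, via the OUTPUT clause (table and column names are made up; note the archive target cannot have triggers or foreign keys for OUTPUT INTO to work):

DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM dbo.SourceTable
    OUTPUT DELETED.* INTO dbo.ArchiveTable
    WHERE CreatedDate < DATEADD(MONTH, -2, GETDATE());
    SET @rows = @@ROWCOUNT;
END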
Sep 8, 2015
I have the following scenario:
SQL database on SQL 2012
Large production table, 15 million records
The table has 3 years of data
New monthly data is being added every month.
A new monthly data set is loaded, checked, and finally approved after 6 or 7 iterations. Because of this iteration, the monthly data set is added, then deleted, then added, then deleted several times. Because the table is big, this process takes time; any thoughts on how to make the delete/insert process faster? Keep in mind I cannot do much, because it is a production table being accessed by other users for other analysis.
Delete is done based on trx_date which is a year/month combo, like 201508.
The table has monthly sales by customer aggregated.
The table structure is:
CREATE TABLE [dbo].[Sales](
[batch_key] [int] NOT NULL,
[Company_key] [int] NOT NULL,
[customer_key] [char](22) NOT NULL,
[Trx_Date] [int] NOT NULL,
[account] [nvarchar](35) NOT NULL,
[code].....
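Since each delete targets a single year/month, a hedged option is to batch it so readers are not blocked for the whole run; a nonclustered index on Trx_Date would let each batch seek its rows. A sketch against the table above:

DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (50000) FROM dbo.Sales
    WHERE Trx_Date = 201508;   -- the year/month being reloaded
    SET @rows = @@ROWCOUNT;
END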
Mar 19, 2014
create table #sample
(
Name varchar(100),
value int
[code]....
From that I want to delete some records based on the following condition: randomly select any number of records such that sum(value) = 125 and name = 'xxx'.
Nov 11, 2014
I'm new to using SQL Server. I've been asked to optimize a series of scripts that query over 4 million records. I've managed to add indexes and remove a cursor, which increased performance. Now when I look at the execution plan, the only costly statement is a DELETE from the main table: it shows a SORT that accounts for 71% of the cost. The table has 2 columns and a unique index. Here is the current index:
ALTER TABLE [dbo].[Qry] ADD CONSTRAINT [Qry_PK] PRIMARY KEY NONCLUSTERED
(
[QryNum] ASC,
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = ON, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
Question: Will the SORT affect the overall performance? If so, is there anything I should change within the index that would speed up my query?
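One hedged idea, if the table is otherwise a heap: making that primary key clustered, so the delete can locate rows in key order without a separate sort (whether this removes the SORT depends on the plan, and the change rebuilds the table, so test first):

ALTER TABLE [dbo].[Qry] DROP CONSTRAINT [Qry_PK];
ALTER TABLE [dbo].[Qry] ADD CONSTRAINT [Qry_PK]
    PRIMARY KEY CLUSTERED ([QryNum] ASC, [ID] ASC);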
Feb 24, 2015
I have 2 tables, ResumeSkill (child table) and Skill (parent table). There were duplicates in the parent table, and after removing the foreign key constraint on the child table I deleted all the duplicate values from the parent table. But those deleted duplicates still have references in the child table, which now need to be deleted too.
ResumeSkill: Id, SkillId
Skill: SkillId, Name
I want to delete all the records from ResumeSkill that don't have a matching SkillId in the Skill table.
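A hedged statement for that orphan cleanup (NOT EXISTS is used because NOT IN behaves surprisingly when SkillId can be NULL):

DELETE rs
FROM dbo.ResumeSkill AS rs
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.Skill AS s
                  WHERE s.SkillId = rs.SkillId);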
Mar 12, 2015
I have created a dynamic MERGE statement SCD2 stored procedure, which inserts records when there is no match and, when the BBXKey matches from the source table to the destination table, updates the old record to latest-version 0 and inserts the new record with latest-version 1.
I am getting the error below when I have more than one row per BBXKey in my source table. How can I avoid this?
BBXKey is derived by combining 2 columns.
Msg 8672, Level 16, State 1, Line 6
The MERGE statement attempted to UPDATE or DELETE the same row more than once. This happens when a target row matches more than one source row. A MERGE statement cannot UPDATE/DELETE the same row of the target table multiple times. Refine the ON clause to ensure a target row matches at most one source row, or use the GROUP BY clause to group the source rows.
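The usual hedged fix for Msg 8672 is to de-duplicate the source inside the USING clause so each target row matches at most one source row. A sketch (column names other than BBXKey are made up, and a full SCD2 "expire old + insert new" usually also needs an INSERT driven by MERGE's OUTPUT, which is omitted here):

MERGE dbo.TargetTable AS t
USING (
    SELECT *
    FROM (SELECT src.*,
                 ROW_NUMBER() OVER (PARTITION BY src.BBXKey
                                    ORDER BY src.LoadDate DESC) AS rn
          FROM dbo.SourceTable AS src) AS d
    WHERE d.rn = 1                       -- keep one row per BBXKey
) AS s
ON t.BBXKey = s.BBXKey
WHEN MATCHED THEN
    UPDATE SET t.LatestVersion = 0       -- existing row becomes the old version
WHEN NOT MATCHED THEN
    INSERT (BBXKey, LatestVersion) VALUES (s.BBXKey, 1);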
Aug 19, 2014
I just finished migrating from SQL 2005 to 2012 and I am having issues processing my cubes. I included the error message below with generic names, but I think it should be transparent. I seem to be having an issue with one of the logins, but which one? The error message points to "domain_nameserver_name$", with a $ at the end. Am I right in assuming this is a SQL login? I have included the error message below:
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
<Parallel>
<Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2"
[code]...
Oct 19, 2014
I have a master table and I need to import its rows into a parent table and several child tables.
Master table: Flatfile_Inventory
Parent table: INVENTORY
Child tables: INVENTORY_AMOUNT, INVENTORY_DETAILS, INVENTORY_VEHICLE
Error details go to LOG_INVENTORY_ERROR.
I have 4 duplicate rows in Flatfile_Inventory which I have already inserted into the parent and child tables.
When I run the query again using the stored procedure, it reports that all 4 rows are duplicates and moves them to LOG_INVENTORY_ERROR.
What I need: when there are duplicate rows in Flatfile_Inventory, then as I start inserting into the parent and child tables, I must identify the already-inserted row by its unique ID, delete that row from both the parent and child tables, and insert the latest row from Flatfile_Inventory into the parent and child tables.
-- ===============================================================================================
-- STORED PROCEDURE FOR FLATFILE_INVENTORY
-- ===============================================================================================
USE [IconicMarketing]
---=======================================SALES_CURSUR===========================================
--USE IconicMarketing
--GO
DECLARE
@FileType varchar(50) ,
@ACDealerID varchar(50) ,
[code].....
Apr 16, 2015
I am using SQL Server 2012 SE. I am trying to delete rows from a couple of tables (GetPersonValue has 250 million rows and I am trying to delete 50 million; GetPerson has 35 million rows and I am trying to delete 20 million). These tables are in transactional replication. The plan is to delete data older than 400 days.
I tried moving the last 400 days of data to new tables instead, and it took about 11 hours. If I delete data in chunks of 500,000, rebuilding the indexes takes a long time (delete plus index rebuild: 13 hours). Since I am using Standard Edition, partitioning won't work.
Find the DDL below:
GO
CREATE TABLE [dbo].[GetPerson](
[GetPersonId] [uniqueidentifier] NOT NULL,
[LinedActivityPersonId] [uniqueidentifier] NOT NULL,
[CTName] [nvarchar](100) NULL,
[SNum] [nvarchar](50) NULL,
[PHPrimary] [nvarchar](50) NULL,
[code]....
Apr 16, 2015
I am working on SQL 2012. We have an SSAS cube built, and on top of it clients use Excel to connect to the cube and view reports. The cube is processed every hour and takes 1-2 minutes to process. Whenever an end user views or refreshes an Excel report (which uses the cube as its source) while the cube is processing, they get an error that the source is not available.
We have tried our best, but we cannot bring the processing time of the cube down to 3-5 seconds so that end users never hit a refresh during processing.
Requirement: if an end user views the report while the cube is processing in the back end, then instead of an error they should see a customized message, something like "Data is refreshing, please wait".
Oct 31, 2012
All I get back is the error message "Analysis Services Processing Task Error: A connection cannot be made. Ensure the server is running." The server is running: I can process the cube by connecting to the AS instance and right-click processing it.
I can also process the cube by running the SSIS task inside SSDT. Only when I deploy the SSIS package (in project deployment mode) and then execute it do I get the error message.
SQL Server, SSAS, and SSIS processes are all running under the same account. SSAS is on a separate server from SSIS and SQL, if that matters.
Jul 25, 2015
I have three tables: plandescription, plandetail and analysisdetail. The table plandescription has the column DetailQuestionID, which is the primary-key identity column, and a QuestionDescription column.
The table plandetail consists of the column PlanDetailID, which is the primary-key identity column, DetailQuestionID, which is a foreign key referencing the plandescription table, and a planID column.
The third table, analysisdetail, consists of an analysisID, which is the primary-key identity column, PlanDetailID, which is a foreign key referencing the plandetail table, and a scenario column.
Below is the schema of the three tables
I have two web forms that insert, update and delete data in these three tables in two transactions. One web form performs CRUD operations on the plandescription and plandetail tables. When the user enters a QuestionDescription and planID in this web form, I insert the QuestionDescription value into the plandescription table, which generates a DetailQuestionID value; this value is fed to the plandetail table along with the planID, where a PlanDetailID is generated.
Once this transaction is done, I show the second web form, in which the user enters the scenario, and this is mapped to the plandescription using the PlanDetailID.
This schema cannot be changed, as it is the client's requirement. When I insert values I don't have any problem. However, when I update existing data, I need to delete the existing PlanDetailID rows in the plandetail table and recreate the PlanDetailID data for that DetailQuestionID and planID, because the user may be adding or deleting a planID associated with the QuestionDescription.
Once I recreate the PlanDetailIDs for that DetailQuestionID and planID, I need to update the old PlanDetailID with the new PlanDetailID in the third table, analysisdetail, for the associated analysisID.
I created a temp table called #DetailTable holding analysisID, planID, the old PlanDetailID and the new PlanDetailID, so that I can use them in an update statement once I delete the data from the plandetail table for that PlanDetailID.
Then I delete the PlanDetailIDs from the plandetail table and recreate them for that DetailQuestionID. During the recreation I fetch the newly created PlanDetailIDs into another temp table called #InsertedRows.
After this I run a WHILE loop to update the temp table #DetailTable with the newly created PlanDetailIDs for the appropriate planIDs. The problem is here. When I have the same set of planIDs, for example the 2 planIDs 1 and 2, I get exactly one old PlanDetailID and one new PlanDetailID per planID and analysisID. But when I add a new planID or remove an existing one, I get a NULL value for that newly added or deleted planID. This breaks my update of the analysisdetail table, as PlanDetailID cannot be NULL.
I tried to remove the NULL values from #DetailTable by running the analysisdetail update statement in a WHILE loop, but it's not working.
DECLARE @categoryid INT = 8
DECLARE @DetailQuestionID INT = 1380
/*------- I need the query to run for the below three data.
Here i'm updating my planids that already exists in my database*/
DECLARE @planids VARCHAR(MAX) = '2,4,5'
[code].....
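For what it's worth, one well-known way to capture an old-key-to-new-key mapping in a single statement is MERGE with OUTPUT, because, unlike INSERT ... OUTPUT, the OUTPUT clause of a MERGE may reference source columns. A sketch using the table names above (it assumes #DetailTable carries DetailQuestionID and planID; #Mapping and the OldPlanDetailID column are made up):

MERGE dbo.plandetail AS tgt
USING #DetailTable AS src
    ON 1 = 0                               -- never matches, so every source row is inserted
WHEN NOT MATCHED THEN
    INSERT (DetailQuestionID, planID)
    VALUES (src.DetailQuestionID, src.planID)
OUTPUT src.OldPlanDetailID,                -- a source column: legal in MERGE's OUTPUT
       inserted.PlanDetailID               -- the newly generated identity
INTO #Mapping (OldPlanDetailID, NewPlanDetailID);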
Oct 30, 2007
Is there any way to run a stored procedure in parallel with another one? For example, I have a stored procedure that sends an email; I then scan a table and send any unsent emails. I do not want the second part to slow the response to the user.
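One hedged, low-tech way to make the second part asynchronous is to wrap it in a SQL Agent job and have the user-facing procedure merely start the job and return (the job name is made up; Service Broker activation is the heavier-duty alternative):

-- Inside the user-facing procedure, after queuing the email row:
EXEC msdb.dbo.sp_start_job @job_name = N'Send Unsent Emails';
-- sp_start_job returns as soon as the job is started; the job then
-- scans the table and sends any unsent emails on its own worker.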