Our package design looks like this:

OLE DB Source -> Derived Column -> Lookup
                                     |
               Matching records   Unmatched records
                      |                  |
               OLE DB Command     OLE DB Destination
                  (Update)            (Insert)
Both the source and destination tables are in Oracle. When we execute the package, performance is very poor; sometimes the components just show as processing (yellow) for an hour or more. What could be the problem? Can anyone help us? Is there any reason why using an Oracle database would slow down the package's performance?
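One common cause, independent of the database vendor, is the OLE DB Command on the matched path: it fires one UPDATE statement per row, so large inputs crawl. A frequently used workaround is to land the matched rows in a staging table with a fast-load destination and run a single set-based UPDATE afterwards. A minimal sketch in T-SQL (the Oracle equivalent would typically use MERGE); DestTable, StagingTable, KeyCol and the columns are illustrative placeholders:

```sql
-- Run from an Execute SQL Task after the data flow has landed
-- the matched rows in StagingTable (all names are illustrative).
UPDATE d
SET    d.Col1 = s.Col1,
       d.Col2 = s.Col2
FROM   DestTable d
       INNER JOIN StagingTable s
               ON s.KeyCol = d.KeyCol;

-- Clear the staging area for the next run.
TRUNCATE TABLE StagingTable;
```

One set-based statement replaces thousands of per-row round trips, which is usually where the hour disappears.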
I have recently taken up a performance optimization activity for our database. Can anyone suggest a really good source of articles/tutorials/guides on performance optimization for SQL Server 2005?
Does anyone know any good references (web, books, etc.) about optimizing performance for MS SQL Server 6.5, not from a developer's perspective but more from that of the SQL Server admin?
I'm using a very data-intensive application, and I have the feeling things would run a lot smoother if the database/server were optimized once in a while (like you can do with Access).
I am able to get this to work by using nested loops, but they are very inefficient, and with the size of my tables I cannot afford to use them. There must be a more efficient solution?
Some general rules... (Hope these are clear enough)
- Each person has at least one Initial_Procedure.
- There may be zero, one, or more Procedure_2 rows for each Initial_Procedure.
- If there is more than one Procedure_2 for an Initial_Procedure, get the most recent.
- To link Procedure_2 to Initial_Procedure: Initial_Procedure.Completed_DTTM < Procedure_2.Completed_DTTM and Initial_Procedure.Person_ID = Procedure_2.Person_ID.
- If there is more than one Initial_Procedure where Initial_Procedure.Completed_DTTM < Procedure_2.Completed_DTTM: Procedure_2.Completed_DTTM must fall between consecutive Initial_Procedure.Completed_DTTM values (row 1 and row 2, assuming Initial_Procedure is in date order), and Initial_Procedure.Person_ID = Procedure_2.Person_ID.
Some example data.....
Declare @Initial_Procedure table (ID int, Person_ID int, Completed_DTTM datetime)
Insert into @Initial_Procedure
Select 1, 1, '01/10/2007' union all
Select 1, 1, '02/15/2007' union all
Select 1, 1, '02/20/2007' union all
Select 1, 2, '01/02/2007' union all
Select 1, 3, '06/26/2007' union all
Select 1, 4, '03/14/2006' union all
Select 1, 4, '10/10/2006' union all
Select 1, 4, '08/27/2007'

Declare @Procedure_2 table (ID int, Person_ID int, Completed_DTTM datetime)
Insert into @Procedure_2
Select 2, 1, '01/09/2007' union all
Select 2, 1, '01/15/2007' union all
Select 2, 1, '01/16/2007' union all
Select 2, 1, '01/17/2007' union all
Select 2, 1, '02/19/2007' union all
Select 2, 1, '07/25/2007' union all
Select 2, 1, '09/02/2007' union all
Select 2, 2, '01/01/2007' union all
Select 2, 2, '01/14/2007' union all
Select 2, 2, '01/20/2007' union all
Select 2, 3, '05/04/2007' union all
Select 2, 3, '06/27/2007' union all
Select 2, 4, '11/06/2006'
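Under one reading of the rules above, a set-based alternative to nested loops is to number each person's Initial_Procedure rows, use the next row's date as the upper bound of the window, and pick the most recent Procedure_2 inside that window with OUTER APPLY (available in SQL Server 2005). A sketch against the table variables above; treat it as a starting point, not a verified answer:

```sql
;WITH ip AS (
    SELECT ID, Person_ID, Completed_DTTM,
           ROW_NUMBER() OVER (PARTITION BY Person_ID
                              ORDER BY Completed_DTTM) AS rn
    FROM @Initial_Procedure
)
SELECT ip.Person_ID,
       ip.Completed_DTTM  AS Initial_DTTM,
       p2.Completed_DTTM  AS Procedure2_DTTM   -- NULL when no match
FROM   ip
       -- the next Initial_Procedure for the same person bounds the window
       LEFT JOIN ip nxt
              ON nxt.Person_ID = ip.Person_ID
             AND nxt.rn        = ip.rn + 1
       -- most recent Procedure_2 inside the window, if any
       OUTER APPLY (
           SELECT TOP 1 p.Completed_DTTM
           FROM   @Procedure_2 p
           WHERE  p.Person_ID = ip.Person_ID
             AND  p.Completed_DTTM > ip.Completed_DTTM
             AND (nxt.Completed_DTTM IS NULL
                  OR p.Completed_DTTM < nxt.Completed_DTTM)
           ORDER BY p.Completed_DTTM DESC
       ) p2;
```

With supporting indexes on (Person_ID, Completed_DTTM) this scales far better than row-by-row looping.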
Hello, I have a question regarding stored procedure design that provides the optimal performance. Let's say we have a table Products that consists of three columns: Name, Status, RegistrationTime. All columns are indexed and users should be able to look up data by any of the columns. We have two main options to design stored procedures for data retrieval:

1. Design separate stored procedures for each search criterion: LookupProductsByName, LookupProductsByStatus, LookupProductsByTime.

2. Write a generic stored procedure that will fit any search criteria:

CREATE PROCEDURE GetProducts (
    @Name varchar(20),
    @Status int = NULL,
    @FromTime datetime = NULL,
    @ToTime datetime = NULL
)
AS BEGIN
    SELECT [Name], [Status], [RegistrationTime]
    FROM [Products]
    WHERE [Name] = CASE WHEN @Name <> NULL THEN @Name ELSE [Name] END
      AND [Status] = CASE WHEN @Status <> NULL THEN @Status ELSE [Status] END
      AND [RegistrationTime] >= CASE WHEN @FromTimestamp <> NULL THEN @FromTimestamp ELSE [RegistrationTime] END
      AND [RegistrationTime] <= CASE WHEN @ToTimestamp <> NULL THEN @ToTimestamp ELSE [RegistrationTime] END
    ORDER BY [RegistrationTime]
END;

The second option is very attractive, because it is obviously easier to maintain such code. However, I am a little concerned about the performance of such a stored procedure. It is not possible to foresee which index should be used; an index can only be selected during procedure execution, because the search criteria can include either Name, Status, or RegistrationTime. Will that make this SP inefficient? Or is the performance difference in such a case not big (if any), and we should choose the second option because of its significant code reduction?

Thanks in advance
Vagif Abilov
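Two bugs in the generic version are worth flagging before performance even enters the picture: `@Name <> NULL` never evaluates to TRUE (any comparison with NULL yields UNKNOWN), so every CASE falls through to its ELSE branch and the filters never apply; the tests need `IS NOT NULL`. Also, the body references @FromTimestamp/@ToTimestamp while the parameters are declared as @FromTime/@ToTime. A common rewrite of such a catch-all procedure uses `(@p IS NULL OR col = @p)` predicates, optionally with OPTION (RECOMPILE) so the optimizer can choose an index per call. A sketch of that alternative:

```sql
CREATE PROCEDURE GetProducts
    @Name     varchar(20) = NULL,
    @Status   int         = NULL,
    @FromTime datetime    = NULL,
    @ToTime   datetime    = NULL
AS
BEGIN
    SELECT [Name], [Status], [RegistrationTime]
    FROM   [Products]
    WHERE  (@Name     IS NULL OR [Name]   = @Name)
      AND  (@Status   IS NULL OR [Status] = @Status)
      AND  (@FromTime IS NULL OR [RegistrationTime] >= @FromTime)
      AND  (@ToTime   IS NULL OR [RegistrationTime] <= @ToTime)
    ORDER BY [RegistrationTime]
    -- recompile per call so the plan can seek on whichever index fits
    OPTION (RECOMPILE);
END;
```

The recompile trades a little CPU per execution for plans tailored to the supplied criteria; without it, one cached plan must serve every parameter combination.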
I am facing some performance issues in a Stored Procedure. The procedure needs to return a resultset based on some search criteria. There are around 20 possible search criteria. Below is the SQL query used in my Stored procedure. Any help to optimize the search will be great:
--get LOV details into table variables
INSERT INTO @tblLov (LovCode, LovDesc, ParamCode)
SELECT LovCode, LovDesc, ParamCode
FROM tp_Lov WITH (NOLOCK)
WHERE ParamCode IN ('FileSrc', 'CommTrailInd', 'CommTxnStatus', 'AgencyPrincipalInd', 'ProdSubType', 'AuditTransStatus')
--get commission transactions according to the search criteria
INSERT INTO @tblSearchResults
SELECT  l1.LovDesc AS TransSource,
        l2.LOVDesc AS CommTrailInd,
        r.RemitCode AS RemitNumber,
        t.IntTransId AS TransNumber,
        CONVERT(VARCHAR, t.TrdDt, 110) AS TradeDate,
        CONVERT(VARCHAR, t.SettlementDt, 110) AS SettlementDate,
        rp.RepCode,
        (ISNULL(rp.LstNm,'') + ', ' + ISNULL(rp.FstNm,'')) AS RepName,
        (CASE WHEN ISNULL(t.IntClntId,0) = 0
              THEN ISNULL(t.ClntShortNM,'') +
                   (CASE WHEN (t.TransSrc = 'NSM' OR (t.TransSrc = 'MCE' AND ISNULL(t.ProdType,'') <> 'VA'))
                              AND ISNULL(t.FundAcctNum,'') <> '' THEN ' - ' + ISNULL(t.FundAcctNum,'')
                         WHEN (t.TransSrc = 'NSV' OR (t.TransSrc = 'MCE' AND ISNULL(t.ProdType,'') = 'VA'))
                              AND ISNULL(t.PolicyNum,'') <> '' THEN ' - ' + ISNULL(t.PolicyNum,'')
                         WHEN t.TransSrc IN ('PSH','MSR') AND ISNULL(t.ClrHouseAcctNum,'') <> ''
                              THEN ' - ' + ISNULL(t.ClrHouseAcctNum,'')
                         ELSE '' END)
              ELSE dev.udf_COMM_PCD_GetClientName(t.IntClntId, t.IntTransId) END) AS Client,
        (CASE WHEN ISNULL(t.CUSIP,'') = '' THEN t.ProdName ELSE p.ProdNm END) AS [Product],
        t.InvAmt AS InvestmentAmt, t.GDC AS GDC, t.ClrChrg AS ClearingCharge, t.NetComm AS NetCommission,
        (CASE WHEN t.Status IN (@strLov_TxnStatus_Tobepaid, @strLov_TxnStatus_Paid)
              THEN dev.udf_COMM_PCD_GetPayoutRateString(t.IntTransId) ELSE '' END) AS PayoutRate,
        (CASE WHEN t.Status IN (@strLov_TxnStatus_Tobepaid, @strLov_TxnStatus_Paid)
              THEN dev.udf_COMM_PCD_GetPayoutAmountString(t.IntTransId) ELSE '' END) AS Payout,
        l3.LOVDesc AS TransStatus, t.Comments, t.OrderMarkup AS BDMarkup, t.IntTransId, rp.IntRepId,
        sch.SchCode, t.IntClntId, t.CUSIP, t.RepIdValue AS RepAlias, t.RepIdType, t.SplitInd,
        l4.LOVDesc AS AgencyPrincipalInd, t.AgencyPrincipalFee, t.EmployeeTradeInd, t.ShareMarkup,
        t.UnitsTraded, s.SponsorNm,
        CASE WHEN t.TransSrc = 'NSM' OR (t.TransSrc = 'MCE' AND ISNULL(t.ProdType,'') <> 'VA')
             THEN ISNULL(t.FundAcctNum,'')          --Production Defect #873 & 877
             WHEN t.TransSrc = 'NSV' OR (t.TransSrc = 'MCE' AND ISNULL(t.ProdType,'') = 'VA')
             THEN ISNULL(t.PolicyNum,'')
             ELSE t.ClrHouseAcctNum END,
        CASE WHEN ISNULL(t.ProdSubType,'') IN ('', 'Z') THEN 'Not Defined' ELSE l6.LovDesc END AS ProdSubType,
        l5.LOVDesc AS TransAuditStatus,
        t.TransAuditStatus AS TransAuditStatusCode,
        t.OriginalTransId, t.RowId, t.Status, t.intParentTransId, t.CancelTrdInd, t.ClrChrgOverrideInd,
        9999 AS AuditKey
FROM    tr_CommTrans t WITH (NOLOCK)
        INNER JOIN @tblLov l1 ON t.TransSrc = l1.LOVCode AND l1.ParamCode = 'FileSrc'
        INNER JOIN @tblLov l2 ON t.CommTrailInd = l2.LOVCode AND l2.ParamCode = 'CommTrailInd'
        INNER JOIN @tblLov l3 ON t.Status = l3.LOVCode AND l3.ParamCode = 'CommTxnStatus'
        INNER JOIN td_Remit r WITH (NOLOCK) ON t.IntRemitId = r.IntRemitId
        LEFT OUTER JOIN @tblLov l4 ON t.AgencyPrincipalInd = l4.LOVCode AND l4.ParamCode = 'AgencyPrincipalInd'
        LEFT OUTER JOIN @tblLov l5 ON t.TransAuditStatus = l5.LOVCode AND l5.ParamCode = 'AuditTransStatus'
        LEFT OUTER JOIN @tblLov l6 ON t.ProdSubType = l6.LOVCode AND l6.ParamCode = 'ProdSubType'
        LEFT OUTER JOIN tm_BDProd p WITH (NOLOCK) ON t.CUSIP = p.CUSIP
        LEFT OUTER JOIN tm_BDSponsors s WITH (NOLOCK) ON t.IntBDSponsorId = s.IntBDSponsorId
        LEFT OUTER JOIN tm_Reps rp WITH (NOLOCK) ON t.IntRepId = rp.IntRepId
        LEFT OUTER JOIN tm_PayoutSch sch WITH (NOLOCK) ON t.IntSchId = sch.IntSchId
WHERE   t.IntTransId = (CASE WHEN @intTransId IS NULL THEN t.IntTransId ELSE @intTransId END)
AND     t.TransSrc = @strTransSrc
AND     r.RemitCode = (CASE WHEN ISNULL(@strRemitCode,'') = '' THEN r.RemitCode ELSE @strRemitCode END)
AND     ISNULL(t.SettlementDt,'01-01-1900') BETWEEN @dtmFromSettlementDt AND @dtmToSettlementDt
AND     ISNULL(t.TrdDt,'01-01-1900') BETWEEN @dtmFromTradeDt AND @dtmToTradeDt
AND     t.CommTrailInd = (CASE WHEN @chrShowTrails = 'Y' THEN t.CommTrailInd ELSE 'C' END)
AND     t.Status = (CASE WHEN ISNULL(@strStatus,'') = '' THEN t.Status ELSE @strStatus END)
AND     ISNULL(t.ClrHouseAcctNum,'') LIKE (CASE WHEN ISNULL(@strAccountId,'') = '' THEN ISNULL(t.ClrHouseAcctNum,'')
                                                WHEN @strTransSrc IN ('PSH','MSR','MSA') THEN @strAccountId
                                                ELSE ISNULL(t.ClrHouseAcctNum,'') END)
AND     ISNULL(t.FundAcctNum,'') LIKE (CASE WHEN ISNULL(@strAccountId,'') = '' THEN ISNULL(t.FundAcctNum,'')
                                            WHEN @strTransSrc = 'NSM' THEN @strAccountId
                                            WHEN @strTransSrc = 'MCE' AND ISNULL(t.ProdType,'') <> 'VA' THEN @strAccountId
                                            ELSE ISNULL(t.FundAcctNum,'') END)
AND     ISNULL(t.PolicyNum,'') LIKE (CASE WHEN ISNULL(@strAccountId,'') = '' THEN ISNULL(t.PolicyNum,'')
                                          WHEN @strTransSrc = 'NSV' THEN @strAccountId
                                          WHEN @strTransSrc = 'MCE' AND ISNULL(t.ProdType,'') = 'VA' THEN @strAccountId
                                          ELSE ISNULL(t.PolicyNum,'') END)
AND     ISNULL(t.IntBDSponsorId,-1) = (CASE WHEN @intSponsorId IS NULL THEN ISNULL(t.IntBDSponsorId,-1) ELSE @intSponsorId END)
AND     ISNULL(t.ProdType,'') = (CASE WHEN ISNULL(@strProdType,'') = '' THEN ISNULL(t.ProdType,'') ELSE @strProdType END)
AND     ISNULL(t.ProdSubType,'') = (CASE WHEN ISNULL(@strProdSubType,'') = '' THEN ISNULL(t.ProdSubType,'') ELSE @strProdSubType END)
AND     ISNULL(t.CUSIP,'') = (CASE WHEN ISNULL(@strCUSIP,'') = '' THEN ISNULL(t.CUSIP,'') ELSE @strCUSIP END)
AND     ISNULL(rp.SSN, 0) = (CASE WHEN @numRepSSN IS NULL THEN ISNULL(rp.SSN, 0) ELSE @numRepSSN END)
AND     ISNULL(rp.RepCode,'') = (CASE WHEN ISNULL(@strRepCode,'') = '' THEN ISNULL(rp.RepCode,'') ELSE @strRepCode END)
AND     ISNULL(rp.LstNm,'') = (CASE WHEN ISNULL(@strRepLstNm,'') = '' THEN ISNULL(rp.LstNm,'') ELSE @strRepLstNm END)
AND     ISNULL(rp.FstNm,'') = (CASE WHEN ISNULL(@strRepFstNm,'') = '' THEN ISNULL(rp.FstNm,'') ELSE @strRepFstNm END)
AND     ISNULL(rp.RepStatus,'') <> (CASE WHEN @chrIncludeTerminated = 'Y' THEN 'Z' ELSE 'T' END)
AND     ISNULL(t.IntClntId,-1) = (CASE WHEN @intClientId IS NULL THEN ISNULL(t.IntClntId,-1) ELSE @intClientId END)
AND     ( (@chrAuditReportFlag = 'N' AND t.Status NOT IN (@strLov_TxnStatus_Loaded, @strLov_TxnStatus_Cancelled)
           AND ISNULL(TransAuditStatus, @strLov_TransAuditStatus_Active) = @strLov_TransAuditStatus_Active)
       OR (@chrAuditReportFlag = 'Y' AND t.Status NOT IN (@strLov_TxnStatus_Loaded)  -- query truncated in the original post; Defect IDs 880, 895
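With this many optional criteria, every `col = CASE WHEN @param ... END` and `ISNULL(col, ...)` predicate is non-sargable, so the optimizer cannot commit to an index seek on any of them. A common alternative is parameterized dynamic SQL that concatenates only the predicates actually supplied, executed through sp_executesql so each predicate combination gets its own cached plan. A heavily abbreviated sketch, showing only a few of the real parameters (the column list and parameter types here are assumptions):

```sql
DECLARE @sql nvarchar(max);

SET @sql = N'SELECT t.IntTransId, t.Status, t.TrdDt
             FROM tr_CommTrans t
             WHERE t.TransSrc = @strTransSrc';

-- Append each predicate only when its parameter was supplied.
IF @intTransId IS NOT NULL
    SET @sql = @sql + N' AND t.IntTransId = @intTransId';
IF ISNULL(@strStatus, '') <> ''
    SET @sql = @sql + N' AND t.Status = @strStatus';

EXEC sp_executesql @sql,
     N'@strTransSrc varchar(10), @intTransId int, @strStatus varchar(10)',
     @strTransSrc = @strTransSrc,
     @intTransId  = @intTransId,
     @strStatus   = @strStatus;
```

Because the values stay parameters rather than being concatenated in, this avoids SQL injection and plan-cache bloat while still letting each search shape use the right index.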
Would any of you give me ideas on how we could optimize a report on data mining models? (As we know, for a data mining report, we have to select the mining model and the case table.)
I hope this is clear enough for your advice and help.
Thanks a lot in advance; I am looking forward to hearing from you shortly.
We have a package that simply reads rows from a table and puts them into a flat file destination, repeating through a Foreach Loop. It is a very simple package. The problem is that it takes 10 minutes to do a thousand rows. This is incredibly slow; I have checked that the indexes are fine and the table has only 1000 rows, but it seems to read and write only about 3 rows a second, which is crawling.
How can we make this faster? Are there any obvious property settings we should be checking?
We have started using SSIS a lot around here. The main problem is that all our packages seem very slow, whether they run in GUI debug or in a job in SQL Server (twice as fast, but still slow).
Does anyone know some good resources on SSIS optimization?
I have a small, tricky problem here... I need the help of all you experts.
Let me explain in detail. I have three tables:
1. Emp table: columns EmpID and DeptID
2. Dept table: columns DeptName and DeptID
3. Team table: columns Date, EmpID1, EmpID2, DeptNo
There is a stored procedure that runs every day and, for EVERY DeptID that exists in the Dept table, selects two employees from the Emp table and puts them in the Team table. Now, assuming there are several thousand departments in the Dept table, the amount of data entered into the Team table every day is tremendous.
If I continue to run the stored proc for 1 month, the team table will have lots of rows in it and I have to retain all the records.
The real problem is when I want to retrieve data for an employee (EmpID1 or EmpID2) from the Team table and view the related details like Date, DeptNo, and EmpID1 or EmpID2 from the Emp table. How do we optimize data retrieval and storage for the Team table? I cannot use partitioning, as I have SQL Server 2005 Standard Edition.
Please help me optimize the query and the data retrieval time from the Team table.
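Even without table partitioning, Standard Edition still allows the usual levers: a covering nonclustered index on each employee column, plus a UNION ALL lookup so each branch can do an index seek instead of one scan with an OR. A sketch, assuming the table and column names described above:

```sql
-- Covering indexes: one seek path per employee position.
CREATE NONCLUSTERED INDEX IX_Team_EmpID1
    ON Team (EmpID1) INCLUDE ([Date], DeptNo, EmpID2);
CREATE NONCLUSTERED INDEX IX_Team_EmpID2
    ON Team (EmpID2) INCLUDE ([Date], DeptNo, EmpID1);

-- Lookup for an employee in either position; UNION ALL lets
-- each branch seek its own index rather than forcing a scan.
SELECT [Date], DeptNo, EmpID1, EmpID2
FROM   Team
WHERE  EmpID1 = @EmpID
UNION ALL
SELECT [Date], DeptNo, EmpID1, EmpID2
FROM   Team
WHERE  EmpID2 = @EmpID;
```

If the table is mostly queried by date range as it grows, a clustered index on Date can additionally keep each day's inserts appending at the end.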
I have been working on a project for the last few months. I have developed the project on my laptop, which is reasonably powerful. It runs through fine, within 9 minutes, with the sample data set.
If I replicate the same environment on a 64 Bit machine with 32 Bit Win 2003 and SP1, it takes just over 7 mins.
If I rerun it on a 64 Bit machine with 64 Bit Win 2003, it takes between 21 and 24 mins.
We are executing the packages via dtexec on a command prompt.
I am currently in the process of migrating DTS packages to SSIS. I am finding that most of the packages are running faster, but some of them are taking longer to execute.
The DTS package copies data from our Production server to Development. It uses a Copy SQL Server Objects Task to copy only the data from about 50 tables. This takes about 3.5 minutes. I created the exact same package in SSIS using the Transfer SQL Server Objects Task and it is running about 5 minutes.
Another package I am having this problem with is only copying data from 1 table using a Copy SQL Server Objects Task. This package executes in 19 minutes. I have created the exact same package twice. Once using Transfer SQL Server Objects and once with a data flow task. The Transfer SQL Server Objects package takes about 50 minutes while the Data Flow package takes about 40 minutes.
As I said, most of the packages are faster with SSIS, which is why I am confused about these couple that are just copying data.
I've written new SSIS packages to do what DTS packages did and the performance I'd say is about 20 times slower. In this package, I have a loop that loops through different servers based on server entries in a SQL database. Each loop pumps 10 tables. The source query is set by a variable and the destination table is set also by a variable, since all this data goes to the same tables on the SQL server and the definitions are all the same on the source server (Sybase). It's still going and has taken about 12 hours to pull roughly 5 million records.
The source query ends up being:
SELECT *, 'ServerName' FROM SourceTable1 WHERE Date >= Date
The 'ServerName' , the "sourcetable1" and the "Date" are all set by variables which in turn build the source query variable.
Anyway, I just mention this for completeness; I wouldn't think setting the variables has anything to do with the transfer's performance. How can I check to see where the performance is getting held up?
Also, I have checked via ping the timeout to the 3 servers. The slowest one pings in about 62 ms, the fastest at 1 ms, and the other somewhere in between.
I have a bunch of packages that take views and create tables from them. Some of the views are rather complex, but the packages themselves are very simple... drop and re-create a table using the data from a view on the same server. We create a new DB for each year, and this year we've upgraded to a new server with SQL 2005, so our DTS packages on the 2000 SQL server had to be recreated in SSIS on the new server. No problem, as I said the packages are really simple. But when I create the packages in SSIS they now take an extremely long time to execute, and I cannot figure out why.
For instance, one DTS package would take approximately 5 minutes to run when the view contained hundreds of thousands of rows and the underlying tables contained millions. But now, even with MUCH smaller tables (since it's the beginning of the year, new DB) the SSIS package I created on the new server takes over an hour, literally. The view that the SSIS package is using to create the table only takes about 15 seconds to execute in management studio (only about 16,000 rows). How can this possibly take so long??
The new server is virtually the same hardware-wise: 4 x 2.4 GHz CPUs, 4 GB RAM, Windows Server 2003.
I'm working on a conversion project and I'm trying to compare performance of SSIS with Other ETL Tools, especially Informatica PowerCenter. Which one do you think is better ETL performer, when source and destination being SQL Server databases. Is there any benchmark available?
I'll just throw my question: how could I increase SSIS-performance?
I have a really heavy job with thousands of records in my base selection; then I perform some lookups (I replaced most of them with SQL) and derived columns (again, I replaced as much as possible with SQL). Finally, after a Slowly Changing Dimension task, I do an update/insert on a given table. Is there a trick to speed up lookups and inserts (something like manipulating the buffer sizes, just asking)? In fact, by replacing a Script task with pure SQL joins I gained back 6 of the 12 hours this job took.
Dear friends, I always use this forum to find support and to try to help others. But this time I need your feedback about my package, which will be in production in a few weeks. So, could you give me your opinions? I'd prefer you write the comments on the blog, but you can write here too... http://pedrocgd.blogspot.com/2007/10/bicasestudy-package-v2.html
I created a data flow that transferred about 1 million records from a SQL database on one server to a different SQL database on the same server. The processing took about 30 minutes. I used the Fast Load option.
I then created an Execute SQL Task with a "SELECT * INTO TABLE" statement, and this processing took about 30-60 seconds.
Can someone tell me why a Data Flow Task would take so much longer, or explain the differences between the two options above? Can someone give some pointers on how to make a Data Flow Task more efficient?
Hello everyone, can anyone update me on performance tuning of SSIS, and what difference would it make if I changed the default values of these two parameters in each Data Flow: DefaultBufferMaxRows and DefaultBufferSize?
Also, what are these parameters used for?
I have a multi-threaded C# application that loads a bunch of tables into ado.net datasets in memory for surrogate key lookups. Depending on what else is going on, it can process 100,000 to 170,000 rows per minute and usually utilizes 20-30% of each cpu.
I created a procedure that completes execution in 20 minutes in SQL Server 2005, but if I call the same procedure from an Execute SQL Task in SSIS, it takes 3 hours.
Is there any way to improve the performance of the above?
On 32 bit SSIS installations, both of the following performance counter objects are visible in perfmon.
SQLServer:SSIS Service
SQLServer:SSIS Pipeline
On 64 bit SSIS installations, only the following is available.
SQLServer:SSIS Service
The SQLServer:SSIS Pipeline counters are nowhere to be found.
Should I re-install? Is this a known issue with 64 bit SSIS?
P.S. Remote or local administrative access with perfmon makes no difference; the "SQLServer:SSIS Pipeline" performance counters don't appear in the list box when connecting to a Windows 2003 x64 server.
I have been running massive ssis packages and testing the performance.
This is my execution design:
I have a main package that gets a list of packages to execute from a table.
Then, using a Foreach Loop, I send each package to execute (somewhere in the middle I delete the corresponding old log file for that package); each of the packages configures itself from the parent package variables.
What I have been analyzing tells me that, for example, a package runs in 2 minutes, and then the time wasted from the end of that package to the start of the next task is on average 3 to 6 minutes... that's a lot, since I run about 20x12 packages, which gives me about 20 hours of wasted time.
My question is: what can be causing the delay between the end of one package and the start of the next?
The tasks types i am using in the execution controller package are:
Foreach loops, For Loop, File System task, Execute Package Task and some Execute SQL Tasks
Can anyone tell me the best way in SSIS to log performance at control flow level i.e. per task I have in my control flow and what performance characteristics it is possible to log.
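If you enable the package's built-in logging with the SQL Server log provider and select at least the OnPreExecute and OnPostExecute events, each task's start and end are written to dbo.sysdtslog90 in the target database, and per-task durations can be queried afterwards. A sketch:

```sql
-- Per-task duration for logged executions; assumes the package uses the
-- SQL Server log provider with OnPreExecute/OnPostExecute events enabled.
SELECT   pre.source                                AS task_name,
         pre.starttime,
         post.endtime,
         DATEDIFF(ss, pre.starttime, post.endtime) AS duration_sec
FROM     dbo.sysdtslog90 pre
         INNER JOIN dbo.sysdtslog90 post
                 ON post.executionid = pre.executionid
                AND post.sourceid    = pre.sourceid
                AND post.event       = 'OnPostExecute'
WHERE    pre.event = 'OnPreExecute'
ORDER BY duration_sec DESC;
```

Beyond durations, data flow tasks can also log diagnostic events such as BufferSizeTuning and PipelineExecutionTrees, which describe buffer sizing decisions and execution trees for the pipeline.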
We have SSIS installed and everything is working great. We are now at the point of wanting to tune one of our longer-running packages, and the performance counters are not working at all. They show up OK, but the counters always read 0. Is there anything special I have to do to get this to work?
One comment I found was that the Performance Logs and Alerts service needs to be running to see these counters. I tried to start it and it immediately quit. I set it to automatic startup and ran a package. The counters still read 0.
Is there anything else out there I can try to get these counters to return something? Thanks.
My apologies if this is a very basic question, but I am having a very difficult time finding the answer.
My very, very simple data flow task is painstakingly slow (it took over an hour to transfer ~300,000 records). I'm doing no transformations whatsoever. In fact, the only reason I'm using the Data Flow component here is for its error-tracking capabilities.
Here's a brief description-
1) The source is an OleDB datasource object that uses an OLEDB connection to access a SQL Server 2000 database.
2) The output from the source is dumped directly (no data transformations) into an OLE DB Destination object (which uses an OLE DB connection to access a view on a SQL Server 2005 database). Individual row errors are pushed to a separate logging table.
Based on the advice of an article I read, I removed the OLE DB Destination object and used the records from the OLE DB source as the input to a Row Count transformation. This still took a significant amount of time. I'm guessing that my problem is with using an OLE DB Source component? That seems really strange, though... wouldn't it be optimized? What are my workaround options?
I have multiple data flow tasks defined in my package. The task of the package is to extract data from Oracle/InfoLease tables and put them on to a SQL Server 2005 database.
Listed below are few queries that I had:
1. In the SSIS package, I need to add a Data Conversion component to convert from the Unicode string data type to the string data type. This was not required in the SQL Server 2000 DTS package.
2. By default, an individual transformation is created for each column. Is there a way to create one transformation for all columns?
3. This SSIS package is executed as part of a job, and the execution takes around 33 minutes. The same functionality was replicated as a SQL Server 2000 DTS package executed as a job, and that completed in 9 minutes. So there has been a drastic increase in job execution time. Are there any ways to improve performance?
I am facing two problems. PROBLEM 1: We have a few packages that run pretty fast on a desktop server with 2 GB RAM and a dual processor (approximately 4-5 hours). But the same packages run very, very slowly on another server with 8 CPUs and 12 GB RAM (they ran for 24 hours without completing).
PROBLEM 2: On the desktop server, CPU usage ranges from 40-80% and PF usage is stagnant at 2 GB for the same package. But on the 8-CPU server, CPU usage ranges from 0-10% while PF usage rises from 750 MB to 8 GB.