Auto Stats Adversely Affect Cached Plans?

May 31, 2007

With SQL 2005 SP2, we are seeing that when auto stats run on one or more indexes of a large table (1.5M rows), the stored proc using that table immediately starts acting as if the query plan is no longer any good. This causes a drastic slowdown in response time and a corresponding increase in table reads to get the data. E.g., the next execution of the procedure after the auto stats kick in goes from 355 reads to 755,000 reads (as reported by Profiler). Generally, there are about 25 people using the DB at any one time. They connect through a mid-tier VB component.



I tried adding WITH RECOMPILE to the stored proc in question, but that caused almost all executions to run at the higher number. I thought that the WITH RECOMPILE hint would create a new query plan for each execution of the procedure and that plan would be the latest and greatest. Perhaps it did, but most users got stuck with the higher number of reads anyway. After taking the hint out, everyone went back to getting the 355-read number and quick response times.



What we are wrestling with is that when those auto stats hit, it really messes things up for everyone until we manually recompile the procedure. Daily we delete all records in the table that are over 45 days old, so the table stays pretty much the same size. We also set the recompile flag to cause a new plan to be generated that will reflect the smaller amount of data. Should we also run a stats update before recompiling the procedure? Profiler has been very helpful in capturing what is going on, so I think I have a good handle on that. However, I don't understand why WITH RECOMPILE produced a messed-up plan for everyone. The compile itself seems to take only 1 ms when done from the query screen.
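A minimal sketch of that manual fix, with placeholder object names since the real table and procedure aren't named above - updating statistics before flagging the procedure means the fresh plan compiles against current data:

UPDATE STATISTICS dbo.LargeTable WITH FULLSCAN  -- placeholder table name; refresh stats after the daily purge
EXEC sp_recompile N'dbo.usp_GetData'            -- placeholder proc name; the next execution compiles a new plan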




Too Many Stored Procedures Or Cached Plans?

Aug 16, 2005

I was wondering if anyone had any concrete information about whether there is a problem with having too many stored procedures or plans in the cache. Obviously there is an impact on memory, but if we can ignore that for the time being, does SQL perform just as well with 100 query plans as it does with tens of millions of plans?
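For sizing the question, a quick way to count what is currently cached - a sketch, using syscacheobjects on SQL 2000 and sys.dm_exec_cached_plans on SQL 2005 and later:

-- SQL 2000
SELECT COUNT(*) AS cached_plans FROM master..syscacheobjects

-- SQL 2005 and later
SELECT COUNT(*) AS cached_plans FROM sys.dm_exec_cached_plans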


SQL 2012 :: Way To Invalidate Cached Query Plans?

Apr 29, 2014

Is there any way to invalidate cached query plans? I would rather target a specific query than invalidate all of them. Also, is there any SQL Server setting that will cause a cached query plan to be invalidated even though only one character in the query has changed?

exec sp_executesql N'select
cast(5 as int) as DisplaySequence,
mt.Description + '' '' + ct.Description as Source,
c.FirstName + '' '' + c.LastName as Name,
cus.CustomerNumber Code,
c.companyname as "Company Name",
a.Address1,
a.Address2,

[code]....

In this query we have seen (on some databases) that simply changing '@CustomerId int',@CustomerId=1065 to '@customerId int',@customerId=1065 fixed a speed problem... just changing the case on the Customer bind parameter. On other servers this has no effect. I'm thinking the server is using an old cached query plan, but I don't know for sure.
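The plan cache is keyed on the exact statement text, so even a one-character case change in a parameter name misses the cache and forces a fresh compile - consistent with the behavior described above. A hedged sketch for evicting a single plan instead (SQL 2008 and later; the LIKE filter text is an assumption):

-- Locate the plan_handle for the statement in question
SELECT cp.plan_handle, cp.usecounts, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE N'%DisplaySequence%'   -- filter text is an assumption

-- Then evict only that plan, substituting the handle returned above
-- DBCC FREEPROCCACHE (0x0600...)         -- hypothetical plan_handle literal

Alternatively, sp_recompile on a table the query references invalidates every cached plan that touches that table.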


Auto Stats

Apr 10, 2008



Hi all,
I have a dtsx (SSIS) package to manually "clone" one SQL Server database to another.

How do I copy all stats from one database to another? I am having a problem with the "auto stats".

When I try to DROP STATISTICS for the auto stats, I get this error:

Cannot DROP the index 'dbo.ACTIVIDAD_PROVEEDOR.PK_ACTIVIDAD_PROVEEDOR'. It is not a statistics collection.


What can I do?


-- Get stats list
SELECT
    '[' + SCHEMA_NAME(tbl.schema_id) + '].[' + tbl.name + ']' AS [Table_Name_With_Schema],
    '[' + st.name + ']' AS [Name],
    SCHEMA_NAME(tbl.schema_id) + '.' + tbl.name + '.' + st.name AS [Estadistica]
FROM
    sys.tables AS tbl
    INNER JOIN sys.stats AS st ON st.object_id = tbl.object_id
ORDER BY
    [Table_Name_With_Schema] ASC, [Name] ASC
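The error occurs because statistics that back an index (like the primary key above) cannot be removed with DROP STATISTICS; only auto-created column statistics can. A hedged sketch that limits the list to the droppable auto stats (the _WA_Sys_* ones), assuming SQL 2005 or later:

-- Generate DROP STATISTICS commands for auto-created column stats only;
-- index-backed statistics are excluded, which avoids the error above
SELECT
    'DROP STATISTICS [' + SCHEMA_NAME(tbl.schema_id) + '].[' + tbl.name + '].[' + st.name + '];' AS [Drop_Command]
FROM sys.tables AS tbl
INNER JOIN sys.stats AS st ON st.object_id = tbl.object_id
WHERE st.auto_created = 1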


Thanks in advance, any help will be appreciated, regards, greetings


Duration Of Auto Update Stats

Mar 10, 2003

Does anyone know how to tell how long an auto update statistics took to run? I looked under DBCC SHOW_STATISTICS and it shows the time the stats were last updated, but not how long the update took. Thanks.


Auto Stats Not Working Sql 2005

Dec 29, 2007

The auto stats are not working.

I have both Auto Update Statistics and Auto Update Statistics Asynchronously set to True

Created a little test table.
USE [TEST]
GO

/****** Object: Table [dbo].[CUSTOMER] Script Date: 12/29/2007 10:42:49 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[CUSTOMER](
    [Customer_Id] [nchar](10) NOT NULL,
    [Customer_Name] [nvarchar](1000) NULL,
    [Customer_Address] [nvarchar](1000) NULL,
    [Customer_Address1] [nchar](1000) NULL,
    CONSTRAINT [PK_CUSTOMER] PRIMARY KEY CLUSTERED
    (
        [Customer_Id] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
) ON [PRIMARY]

-- Insert the first 1000 rows
DECLARE @COUNT INT
SET @COUNT = 1
WHILE @COUNT <= 1000
BEGIN
    INSERT INTO CUSTOMER (CUSTOMER_ID, CUSTOMER_NAME)
    VALUES (@COUNT, '12345678901234567890')
    SET @COUNT = @COUNT + 1
END



I then look at Tables > Statistics; the statistics are empty, so I fire UPDATE STATISTICS and see 1000 rows in there.



I run the insert script again:

DECLARE @COUNT INT
SET @COUNT = 1001
WHILE @COUNT <= 2000
BEGIN
    INSERT INTO CUSTOMER (CUSTOMER_ID, CUSTOMER_NAME)
    VALUES (@COUNT, '12345678901234567890')
    SET @COUNT = @COUNT + 1
END



Looking again at the statistics, they do not show 2000 rows.


If I do select * from CUSTOMER where CUSTOMER_ID = '2000' and then go check the statistics, it works.



I was under the impression that when you do an insert, update, or delete, the statistics update is fired.



The sys.sysindexes rowmodctr shows the 1000 rows.



I checked the conditions under which SQL fires the auto update: if the number of rows in the table is greater than 6 and up to 500, stats are updated after 500 modifications; if the table has more than 500 rows, the auto update happens once 500 + 20% of the rows have been modified.



So both are met.



Does anyone have any other suggestions about the auto stats?
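A hedged note consistent with the behavior above: auto update statistics fires lazily, during compilation of a query that needs the stale statistics, not at INSERT time - which is why the SELECT, and not the WHILE loop, refreshed them. A small sketch for watching the trigger counter (assumes the TEST database above):

-- rowmodctr accumulates modifications until a compile consumes them
SELECT name, rowmodctr
FROM sys.sysindexes
WHERE id = OBJECT_ID('dbo.CUSTOMER')

-- Any query that compiles against the stale stats then triggers the update
SELECT * FROM dbo.CUSTOMER WHERE Customer_Id = '2000'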


Auto Update Stats Causing Blocking

Mar 10, 2003

Recently a production server suffered a critical blocking period and I wanted to know if I could solicit some input. It seems that a stored procedure was in the middle of recompiling when an auto update statistics started. This caused blocking for about an hour on the single object (the stored procedure) that was originally called. The table that the update occurred on, and that the stored procedure reads from, is quite large: 2 million rows and about 140 columns wide. Some info from sysprocesses is below. The table alone takes up almost 4GB of space, according to sp_spaceused. I have some questions:

1. Can the update statistics for a '_WA%' stats cause blocking on a table?
2. Does an update stats on an index survive a restart of SQL Server? We tried restarting, but the blocking did not end.
3. If the stored procedure is running under a compile, can the server automatically start an update stats and cause the stored procedure to wait?
4. Can the server automatically start an update stats on more than one column's stats at a time, causing one to be blocked by the other?
5. We had never seen this issue before going to SQL2K clustering. Is this something specific to SQL2K and not SQL7?

Thanks for your input.
John Lee

This is the lock info for the blocking processes.

spid dbid ObjId IndId Type Resource Mode Status name
------ ------ ----------- ------ ---- ---------------- -------- ------ -------------------------
142 7 2 1 KEY (6f00035ef42b) S GRANT sysindexes
142 7 2 1 KEY (6f00035ef42b) S GRANT sysindexes
142 7 421576540 0 TAB Sch-S GRANT tJob
142 7 1141579105 0 TAB Sch-S GRANT tPatient_info
142 7 1141579105 0 TAB [UPD-STATS] Sch-M GRANT tPatient_info
142 7 1659921035 0 TAB [COMPILE] X GRANT iDBGetPatInfoRecord
142 7 1659921035 0 TAB Sch-S GRANT iDBGetPatInfoRecord


These are the processes that are being blocked:

spid
------
137
140


Below is a snapshot of the SQL processes on the server being blocked.

spid kpid blocked waittype waittime lastwaittype waitresource
------ ------ ------- -------- ----------- -------------------------------- -----------------------------
140 4292 142 0x0005 68609 LCK_M_X TAB: 7:1659921035 [[COMPILE]]
137 2576 140 0x0005 64671 LCK_M_X TAB: 7:1659921035 [[COMPILE]]
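On question 1, a sketch for listing the auto-created ('_WA%') column statistics on the table named in the lock output above (SQL 2000 era, hence sysindexes; the underscore is escaped because it is a LIKE wildcard):

SELECT name
FROM sysindexes
WHERE id = OBJECT_ID('tPatient_info')
  AND name LIKE '[_]WA%'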


Auto-Creating DB Maint Plans?

Aug 3, 2005

Hi Guys,

Does anyone know an easier way to create DB maintenance plans instead of having to create each one manually - i.e., a way to copy maintenance plans from one server to another automatically? Does anyone know of a quick export method?

Thanks in advance.


SQL 2012 :: SSMS Auto-recovery / Auto-save New (unsaved) Queries

Feb 16, 2014

Since upgrading from SQL Server Management Studio 2008 R2, I've noticed that it no longer autosaves queries that have not been manually saved first. If a file has been manually saved, the autorecover files end up in the following directory:

%appdata%\Microsoft\SQL Server Management Studio\11.0\AutoRecover\Dat\Solution1

However, I have ended up in the situation where I have unsaved queries when my computer has crashed and have not been able to recover them.

I have also found references to .sql files stored in temp files in the following directory, but the files here seem to be kept very haphazardly:

%userprofile%\AppData\Local\Temp


Cached Datasets?

Aug 8, 2007

Hello all,
I have a report with a table and a chart. It uses dataset1 as the data source.
All works fine.
I create a new dataset called dataset2.
The queries are exactly the same. The only differences between the two datasets are the database server and the fact that one of the columns is a smallint (in dataset2) and an int (in dataset1).
I change the DataSetName property of both the table and the chart to use dataset2.
When I run the report I get a conversion error stating that there was an overflow of int2 while using dataset1. I have verified that the report is not using dataset1 anywhere. If I delete dataset1 and run the report, the error goes away. If I add it back, I get the error again. Why is the report looking at dataset1 if it is not referenced at all in the report? Does SQL RS cache the datasets and verify each one when it compiles?

regards,
Bill


View Result Cached

Aug 1, 2007

I am using SQL Server 2005. I have a VIEW that joins several tables. A column can be added to one of the tables dynamically by the user from a GUI interface. However, after a column is added, it does not show up in the VIEW immediately. It takes a while (I haven't figured out exactly how long) before the extra column shows up in the execution result of the VIEW.
So it seems like SQL Server is caching that VIEW's schema. Is there any way I can make this view always come back with the latest schema?
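A hedged note: SQL Server persists a view's metadata at creation time, so base-table schema changes are not reflected until that metadata is refreshed. A minimal sketch, with a placeholder view name:

-- Refresh the view's cached metadata after the underlying table changes
EXEC sp_refreshview N'dbo.MyView'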
Thanks a lot!
Penn


Cached Search Of Top Terms

Mar 11, 2008

Hi, I have a search and I want to create a hyperlinked list of the top 5 search terms below it. What's the most efficient way to go about this?
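One common approach, sketched under the assumption that each search is logged to a table (table and column names are hypothetical):

-- Rank logged search terms by frequency
SELECT TOP 5 SearchTerm, COUNT(*) AS Searches
FROM dbo.SearchLog
GROUP BY SearchTerm
ORDER BY COUNT(*) DESC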


Cached Query Result

Sep 20, 2007

I want to check the performance of my query, and I just want to remove the cached query results first. Any suggestion on how I can do this?
I want to check, after each modification, how much the performance improves.
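A standard sketch for this (requires sysadmin rights and flushes the caches server-wide, so use a test server):

CHECKPOINT              -- flush dirty pages so clean buffers can be dropped
DBCC DROPCLEANBUFFERS   -- clear the data cache, forcing cold reads
DBCC FREEPROCCACHE      -- clear cached query plans, forcing recompiles

Then run the query with SET STATISTICS IO ON and SET STATISTICS TIME ON to compare before and after each modification.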


Non-cached Lookup Method

May 28, 2008

Phil, great links, really helpful and appreciated.

I just need to verify one thing on the lookup method:
One of the lookup methods people were discussing is the non-cached lookup, which seemed to be evaluated as the fastest. Is non-cached the default for the Lookup transformation? And when I want the lookup to be cached, I need to go into the Advanced tab and set the cache percentage, right? Thanks.


Snapshot And Cached Instances

Oct 23, 2007

Hi,

I'm trying to understand the cases where it's more interesting to use snapshot and when it's more interesting to use cached instances.

If I have 100 users trying to reach a report, is it better to use a snapshot or a cached instance? In both cases, the 100 users will have the same report result. And what about performance - is it similar?

Thanks for your time and response,

See you,


Have a nice day!

regards,
sandy


Snapshots Vs Cached Instances

Jun 19, 2007

Hello,

I would like to know what is the difference between a snapshot and a cached instance in SSRS?

Which one performs best, and which one is best for multiple users and reports containing parameters (the parameters are then passed in the WHERE clause of the SQL code; e.g., WHERE IN (@param1))?

Thanks for your answers.
Zoz


Keeping A Cached Lookup In Memory

Dec 10, 2007

Is it possible to keep a cached Lookup in memory when executing multiple Data Flows? Executing DFTs in parallel will cache and use the same LOOKUP statement. But what if I'm executing the DFTs sequentially - can I keep the LOOKUP from the first DFT in memory for the second DFT? For example, in my case I'm caching a lookup against the Customer dimension for invoices. The second DFT then processes credits and again does a lookup against the Customer dimension. I want to use the cached Customer records from the first DFT.


Generating A Cached Copy Of The Report

May 27, 2005

Is there a way to programmatically determine if a report should be generated from the cache or run against real-time data?


Full Cached Lookup With Parameters

Jul 8, 2006

Parameterized queries are only allowed on partial or no-cache style lookup transforms, not 'full' ones. Is there some trick to parameterizing a full-cache lookup, or should the join simply be done at the source, obviating the need for a full-cache lookup at all (other suggestions certainly welcome)?

More particularly, I'd like to use the lookup transform in a surrogate key pipeline. However, the dimension is large (900 million rows), so it would be useful to restrict the lookup transform's cache by a join to the source.

For example:

Source query is: select a,b,c from t where z=@filter (20,000 rows)

Lookup transform query: select surrogate_key,business_key from dimension (900 M rows, not tenable)



Ideal Lookup transform query:

select distinct surrogate_key, business_key
from dimension d
inner join t on d.business_key = t.c
where t.z = @filter


Auto Increment Auto Non-identity Field

Jan 23, 2004

I have an MS SQL Server table with a Job Number field. I need this field to start at a certain number and then auto-increment from there. Is there a way to do this programmatically or within MSDE?
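A minimal sketch, with placeholder table and column names: an IDENTITY column takes a seed and an increment, so the numbering can start at any value:

-- Job numbers start at 5000 and increment by 1
CREATE TABLE dbo.Jobs (
    JobNumber int IDENTITY(5000, 1) NOT NULL PRIMARY KEY,
    JobName varchar(100) NOT NULL
)

If the field must remain a plain non-identity column (as the title suggests), the usual fallback is SELECT MAX(JobNumber) + 1 inside a transaction holding a lock on the table, at the cost of some contention.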

Thanks, Justin.


Linked Servers And Cached Execution Path

May 1, 2006

I have a procedure that generates dynamic SQL and then executes it via the execute(strSQL) syntax. BOL states that if I use sp_executesql with hard-typed parameters passed in variables, the query optimizer will 'probably' match the SQL statement with the cached execution plan, thus avoiding recompilation and speeding up the results for heavily-run procedures.

Can anyone tell me if this is also true if the SQL references an object on a linked SQL Server 2000 database? Technically, the SQL is exactly the same, but I'm unsure if there is some exception due to the way linked objects are processed.
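For reference, a sketch of the sp_executesql pattern in question against a linked-server object; the four-part name is an assumption:

-- Parameterized, so the plan can be reused across @id values
EXEC sp_executesql
    N'SELECT col1 FROM LinkedSrv.RemoteDb.dbo.RemoteTable WHERE id = @id',
    N'@id int',
    @id = 42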

Thanks!


Images Not Displaying From Snapshot Or Cached Report

Apr 11, 2008

Hello,
This is my first post, and I'm hoping you all can help.

Using Reporting Services 2005, I have several reports that use embedded images.
All images render fine when the report execution is set to:
1) Always run this report with the most recent data
1a) Do not cache temporary copies of this report

However, when I change the execution to either a Cache or a Snapshot, the images and some charts render as red "X" placeholders. This is sometimes remedied when the user clicks the page refresh button, but not always.

Of course, I could just have all the concurrent users use the uncached report that hits the OLAP server, but that would be highly inefficient, and just plain slow.

Thanks for any help on this subject.

-michael


Clear Cached Reports In Visual Studio

Jun 12, 2007

I frequently go back and forth between making changes to my reports and previewing those changes, all from within Visual Studio, without publishing the reports each time.



The problem is that the changes I make are frequently not reflected in the preview window unless I either close out of Visual Studio completely or wait some length of time (I'm not sure how long exactly, but half an hour seems to always do the trick).



Is there a way to clear any cache and force visual studio to completely reprocess a report?



At least when it is formatting changes I can identify whether the change has stuck, but when I'm fixing bugs in the code, I can't tell if I didn't fix it or if the change just hasn't taken effect.


Storing Daily Cached Query Results In SQL Server

Mar 7, 2005

I'm working on a reporting tool that could bring back hundreds of thousands of results at once. I need some way to run the actual query only once a day, and then the reporting tool would just pull back those cached results. In short, I need to figure out how to do this using a minimum amount of resources. Would a DataView work with something like this? How would I have it update only once a day? I appreciate any advice!
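One server-side approach, sketched with hypothetical object names: materialize the result once a day into a cache table (for example from a SQL Server Agent job) and have the reporting tool read only that table:

-- Nightly job body: rebuild the cached result set
TRUNCATE TABLE dbo.ReportCache
INSERT INTO dbo.ReportCache (CustomerId, Total)
SELECT CustomerId, SUM(Amount)
FROM dbo.Orders
GROUP BY CustomerId

The report then selects from dbo.ReportCache, which stays cheap no matter how expensive the source query is.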


SQL 2012 :: Entity Framework Returning Cached Data

Mar 23, 2014

I have a DataGridView bound to a table that is part of an Entity Framework model. A user can edit data in the DataGridView and save the changes back to SQL. But there is a stored procedure that can also change the data - in SQL, not in the DataGridView. When I try to refresh the DataGridView, the LINQ query always returns the older, cached data. Here's the code that I have tried using to force EF to retrieve new data:

// now refresh the maintenance datagridview data source
using (var context = new spdwEntities())
{
    // Query the newly created context: the original code queried the old,
    // long-lived spdwContext, whose state manager hands back the entities
    // it already has cached instead of reloading them from SQL
    var maintData =
        from o in context.MR_EquipmentCheck
        where o.ProdDate == editDate
        orderby o.Caster, o.Strand
        select o;
    // Materialize the query before the context is disposed
    mnt_DGV.DataSource = maintData.ToList();
}

When I debug, I can see that the SQL table has the updated data in it, but when this snippet of code runs, maintData has the old data in it.


What Could Affect 'allow updates' Option

Mar 19, 2002

Every morning I have 7-10 identical messages in the error log:

1.
Configuration option 'allow updates' changed from 1 to 0. Run the RECONFIGURE statement to install..
2.
Error: 15457, Severity: 0, State: 1
3.
Configuration option 'allow updates' changed from 0 to 1. Run the RECONFIGURE statement to install..

It is a standby server with custom log shipping and DTS transferring logins every 15 min.

What could cause this error?


Tempdb - Does It Affect Performance ?

Aug 7, 2001

We think we're having performance problems, and among the areas of investigation is the tempdb database. Since it resets itself after SQL is restarted, is there a way to find out how big it has grown in the past? Does leaving it at the default size cause a performance hit? Right now it's 8.75 MB, with 7.38 MB available, which sounds pretty harmless.
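There is no built-in history of past tempdb size, so one hedged approach is to record it on a schedule (the logging table is hypothetical) and review the trend later:

-- Run from a scheduled job; sysfiles reports size in 8 KB pages
INSERT INTO admin.dbo.TempdbSizeLog (LoggedAt, SizeMB)
SELECT GETDATE(), SUM(size) * 8.0 / 1024
FROM tempdb..sysfiles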

Any thoughts ?


SQL Profiler - How Does It Affect SQL Performance?

Jun 23, 2006

Everywhere I read, it states that running SQL Profiler can affect the performance of your SQL Server. My question is: how much of an impact will it really make? Will I see a 1% degradation in performance? 5%? 50%? I haven't been able to find a good answer. We have had SQL Profiler running all day long for almost 3 years, and the databases are still humming.

Is it the amount of data you are requesting from the trace that affects performance? There are some compliance tools out there (Idera Compliance Manager, IPLocks, etc.) that run a Profiler trace to get data. There are other DBAs in my organization who don't want to use them because "Profiler traces will degrade my SQL Server performance". How true is this really?

Any help I can get would be extremely appreciated.

Thanks.


Does Multiplication With 1 Affect Query Performance?

May 20, 2008

Does multiplication by 1 affect query performance?

I have a stored procedure that converts results to another unit if required. In alternative 1 below, the results are returned with a separate select statement if no conversion is necessary - in other words, when no multiplication with a conversion factor is required. However, the code is not very nice, since I need to repeat the select statement again for the case where a conversion is required, this time including the conversion factor.

Alternative 2 uses cleaner-looking code. The conversion factor is set to 1 if no conversion is required, and a single SELECT statement is used to return the data. The @factor variable is defined as a float.

I would rather use alternative 2, but I wonder if there is any performance penalty for doing that when no conversion is required, since the results are always multiplied by @factor? Or can SQL Server somehow understand that @factor = 1 and skip the multiplication?

--- Alternative 1: ---

IF @fromunit_sid = @tounit_sid
    -- Return unconverted results
    SELECT ISNULL(ls_totalWaterConsumption, 0) AS ls_totalWaterConsumption,
        ls_theoreticalWaterConsumption AS ls_theoretical_WaterConsumption,
        ls_totalWaterConsumption - ls_theoreticalWaterConsumption AS ls_extra_WaterConsumption
    FROM Results
    WHERE scenario_id = @scenario_id
ELSE
BEGIN
    -- Get conversion factor
    EXEC getConversionFactor @fromunit_sid, @tounit_sid, @factor OUTPUT

    -- Get the converted results
    SELECT ISNULL(ls_totalWaterConsumption * @factor, 0) AS ls_totalWaterConsumption,
        ls_theoreticalWaterConsumption * @factor AS ls_theoretical_WaterConsumption,
        (ls_totalWaterConsumption - ls_theoreticalWaterConsumption) * @factor AS ls_extra_WaterConsumption
    FROM Results
    WHERE scenario_id = @scenario_id
END

--- Alternative 2: ---

IF @fromunit_sid = @tounit_sid
    SET @factor = 1
ELSE
    -- Get conversion factor
    EXEC getConversionFactor @fromunit_sid, @tounit_sid, @factor OUTPUT

-- Get the converted results
SELECT ISNULL(ls_totalWaterConsumption * @factor, 0) AS ls_totalWaterConsumption,
    ls_theoreticalWaterConsumption * @factor AS ls_theoretical_WaterConsumption,
    (ls_totalWaterConsumption - ls_theoreticalWaterConsumption) * @factor AS ls_extra_WaterConsumption
FROM Results
WHERE scenario_id = @scenario_id

And another question: is using an IF statement considerably faster than making a call to another stored procedure? In alternative 2 above I use an IF statement to check whether @fromunit_sid = @tounit_sid. But in fact the getConversionFactor procedure that I'm calling does exactly the same thing: if I pass in identical from- and to-values, it simply returns 1, so I could omit the IF statement completely and just use alternative 3. But is it slower?

--- Alternative 3: ---

-- Get conversion factor
EXEC getConversionFactor @fromunit_sid, @tounit_sid, @factor OUTPUT

-- Get the converted results
...


If One Of The BCP Fail. Does It Affect The Rest Of The Process??

Dec 8, 1998

Hi, I wrote the following code in one stored procedure called p_bcp_all, and then scheduled it to run overnight. What if the first two bcp commands were successful but the third one failed - is the whole procedure going to fail? Also, what if the first one failed - is the rest of the code going to be executed, with the bcp process still running for the second and third tables?
Thanks for your advice.

Regards, Ali

create procedure p_bcp_all as
Exec master..xp_cmdshell "bcp servername..tblone in d:\tblone.txt /fd:\formatfile\tblone.fmt /Sservername /Usa /Ppassword /b250000 /a8000"

Exec master..xp_cmdshell "bcp servername..tbltwo in d:\tbltwo.txt /fd:\formatfile\tbltwo.fmt /Sservername /Usa /Ppassword /b250000 /a8000"

Exec master..xp_cmdshell "bcp servername..tblthree in d:\tblthree.txt /fd:\formatfile\tblthree.fmt /Sservername /Usa /Ppassword /b250000 /a8000"
GO
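On the question itself: each xp_cmdshell call runs independently, so a failed bcp does not by itself stop the procedure - the next Exec still runs. xp_cmdshell does return the command's exit status, so a hedged sketch of making the behavior explicit (command text abbreviated as above):

create procedure p_bcp_all as
declare @rc int
exec @rc = master..xp_cmdshell "bcp servername..tblone in d:\tblone.txt ..."
if @rc <> 0
begin
    -- stop here instead of silently loading the remaining tables
    raiserror ('bcp of tblone failed; aborting remaining loads', 16, 1)
    return
end
-- repeat the same check for tbltwo and tblthree
GO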







