SQL 2012 :: Way To Invalidate Cached Query Plans?
Apr 29, 2014
Is there any way to invalidate cached query plans? I would rather target a specific query instead of invalidating all of them. Also, is there any SQL Server setting that will cause a cached query plan to be invalidated even though only one character in the query has changed?
exec sp_executesql N'select
cast(5 as int) as DisplaySequence,
mt.Description + '' '' + ct.Description as Source,
c.FirstName + '' '' + c.LastName as Name,
cus.CustomerNumber Code,
c.companyname as "Company Name",
a.Address1,
a.Address2,
[code]....
In this query we have seen (on some databases) that simply changing '@CustomerId int',@CustomerId=1065 to '@customerId int',@customerId=1065 fixed a speed problem: we just changed the case on the Customer bind parameter. On other servers this has no effect. I suspect the server is using an old cached query plan, but I don't know for sure.
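Since this is SQL 2012, one way to target a single plan rather than the whole cache is to look up the plan_handle for the statement and hand it to DBCC FREEPROCCACHE. A minimal sketch; the LIKE filter and the handle value are placeholders to substitute:

SELECT cp.plan_handle, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE N'%DisplaySequence%';  -- placeholder filter for the query text

DBCC FREEPROCCACHE (0x060006001ECA270EC0215D05000000000000000000000000);  -- placeholder handle taken from the query above

As for the one-character observation: the cache lookup for ad hoc and sp_executesql batches hashes the exact statement text, so changing the case of @CustomerId produces a brand-new plan entry, which would explain why the edit "fixed" servers that were stuck on a stale plan.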
Aug 16, 2005
I was wondering if anyone had any concrete information about whether there is a problem with having too many stored procedures or plans in the cache. Obviously there is an impact on memory, but if we can ignore that for the time being, does SQL perform just as well with 100 query plans as it does with tens of millions of plans?
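For reference, on SQL Server 2005 and later you can see how many plans are cached and how much memory they occupy, which makes the trade-off measurable rather than hypothetical:

SELECT objtype,
COUNT(*) AS plan_count,
SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS size_mb
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY size_mb DESC;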
May 31, 2007
With SQL2005 SP2, we are seeing that when auto stats run on one or more indexes of a large table (1.5M rows), then immediately the stored proc using that table starts acting as if the query plan is no longer any good. This causes a drastic slowdown in response time and a corresponding increase of table reads to get the data. E.g., the next execution of the procedure after the auto stats kick in goes from 355 reads to 755000 reads (as depicted by Profiler). Generally, there are about 25 people using the DB at any one time. They connect through a mid-tier VB component.
I tried adding WITH RECOMPILE to the stored proc in question, but that caused almost all executions to run at the higher number. I thought that the WITH RECOMPILE hint would create a new query plan for each execution of the procedure and that plan would be the latest and greatest. Perhaps it did, but most users got stuck with the higher number of reads anyway. After taking the hint out, everyone went back to getting the 355 number and quick response times.
What we are wrestling with is that when those auto stats hit, it really messes up everyone until we manually recompile the procedure. Daily we delete all records in the table that are over 45 days old, so the table stays pretty much the same size. We also set the recompile flag to cause a new plan to be generated that will reflect the smaller amount of data. Should we also run a stats update before recompiling the procedure? Profiler has been very helpful in capturing what is going on, so I think I have a good handle on that. However, I don't understand why WITH RECOMPILE produced a messed up plan for everyone. The compile itself seems to take only 1 ms when done from the query screen.
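A sketch of the sequence described above, with hypothetical object names; updating statistics first and then flagging the procedure means the next execution compiles against current data:

UPDATE STATISTICS dbo.LargeTable WITH FULLSCAN;  -- hypothetical table name
EXEC sp_recompile N'dbo.usp_GetData';            -- hypothetical proc; recompiles on its next run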
Mar 23, 2014
I have a datagridview bound to a table that is part of an Entity Framework model. A user can edit data in the datagridview and save the changes back to SQL. But there is a stored procedure that can also change the data, in SQL, not in the datagridview. When I try to "refresh" the datagridview, the LINQ query always returns the old cached data. Here's the code that I have tried using to force EF to retrieve new data:
// now refresh the maintenance datagridview data source
using (var context = new spdwEntities())
{
    // note: the original snippet queried spdwContext, which is not the context
    // created above; the query must use the new context to see fresh data
    var maintData =
        (from o in context.MR_EquipmentCheck
         where o.ProdDate == editDate
         orderby o.Caster, o.Strand
         select o).ToList();  // materialize before the context is disposed
    mnt_DGV.DataSource = maintData;
}
When I debug, I can see that the SQL table has the updated data in it, but when this snippet of code runs, maintData has the old data in it.
Nov 24, 2014
I ran the two SELECT statements below and ended up seeing multiple cached instances of the same stored procedure. The majority have only one cached instance, but more than a handful have multiple cached instances. When there are multiple cached instances of the same sproc, which one will SQL Server reuse when the sproc is called?
SELECT o.name, o.object_id,
ps.last_execution_time ,
ps.last_elapsed_time * 0.000001 as last_elapsed_timeINSeconds,
ps.min_elapsed_time * 0.000001 as min_elapsed_timeINSeconds,
ps.max_elapsed_time * 0.000001 as max_elapsed_timeINSeconds
[code]...
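As to why several instances exist at all: a cached plan is reusable only by sessions whose cache-key attributes (set_options, user_id, language, and so on) match, so differing SET options between connections commonly produce duplicates, and SQL Server reuses whichever plan matches the calling session. A sketch that exposes the set_options attribute per cached plan (SQL 2005+, proc name is a placeholder):

SELECT st.objectid,
cp.plan_handle,
pa.value AS set_options,
cp.usecounts
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
WHERE pa.attribute = 'set_options'
AND st.objectid = OBJECT_ID(N'dbo.MyProc');  -- placeholder proc name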
Sep 20, 2007
I want to check the performance of my query and I just want to remove the cached query results. Is there any suggestion how I can do this?
I just want to check, after each modification, how much the performance improves.
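A common pattern on a test or dev server only (these flush caches server-wide, so never on production), so each timing run starts cold:

CHECKPOINT;              -- write dirty pages to disk so they become clean
DBCC DROPCLEANBUFFERS;   -- empty the buffer pool (cached data pages)
DBCC FREEPROCCACHE;      -- empty the plan cache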
Mar 7, 2005
I'm working on a reporting tool that could bring back hundreds of thousands of results at once. I need some way to run the actual query only once a day, and then the reporting tool would just pull back the cached results. In short, I need to figure out how to do this using a minimum amount of resources. Would a DataView work with something like this? How would I have it update only once a day? I appreciate any advice!
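One server-side option, sketched with hypothetical names: run the expensive query once a day from a scheduled job into a summary table, and have the reporting page query that table (optionally with page-level output caching on top):

-- nightly job step; table and source names are hypothetical
TRUNCATE TABLE dbo.ReportSnapshot;

INSERT INTO dbo.ReportSnapshot (CustomerId, Total)
SELECT CustomerId, SUM(Amount)
FROM dbo.Orders
GROUP BY CustomerId;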
Feb 25, 2015
One server I manage has an issue with an unusually small procedure cache.
Server has 60GB of RAM assigned to its min and max server memory settings, optimise for ad hoc workloads is disabled.
Procedure cache at the moment on the server is 2.41MB with only 6 objects inside, all related to the mssqlsystemresource database. I can see plans dropping in for user databases, but as soon as the proc has finished, the plan is removed from the cache.
Buffer cache is around the 17GB mark and free pages around the 42GB mark, so around 60GB is accounted for with a bit in stolen pages, but no proc cache.
All other servers in the environment are reporting over 8GB of proc cache in use which is more healthy.
Using Spotlight to monitor all of this.
What could be wrong with this one server that it is not keeping plans in cache?
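To see where the memory is actually going on that server, the memory clerks DMV gives a quick breakdown (column name per SQL 2012 and later; earlier versions split pages into single/multi):

SELECT TOP (10) type,
SUM(pages_kb) / 1024 AS size_mb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY size_mb DESC;
-- CACHESTORE_SQLCP and CACHESTORE_OBJCP are the ad hoc and proc plan stores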
Jul 11, 2014
We have just implemented a SQL 2012 AlwaysOn environment. We have a primary and a secondary server. I am confused about how to set up the backup plans. The application team was happy to tell me that in SQL 2012 AlwaysOn we can offload the backups to the secondary, thus reducing overhead on the primary server.
However, the secondary only supports copy-only full backups. I am unsure how these would be useful in a disaster event. I could not apply any trx log backups on top of a copy-only backup. Does this mean I need to run my full backups on the primary server?
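For what it's worth, regular (non-copy-only) log backups are supported on a readable secondary, and log backups taken across replicas form a single chain; a copy-only full restored WITH NORECOVERY can serve as the base for those log restores. A sketch with hypothetical names and paths:

-- on the secondary replica
BACKUP DATABASE MyAgDb
TO DISK = N'\\backupshare\MyAgDb_full.bak'
WITH COPY_ONLY;

BACKUP LOG MyAgDb
TO DISK = N'\\backupshare\MyAgDb_log1.trn';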
Aug 19, 2015
Is the SQL Server Profiler Reads Column Incorrect For Parallel Plans?
I often use profiler as one tool to identify bad plans. The reads column gives me a good indication of excessive IO to dig into and correct if necessary. I often use it with Showplan so I can see what a query does, replicate it and fix it.
However, I have just lost some faith in it. I am looking at a poorly performing query joining five tables. A parallel plan has been generated and one table is being scanned (in parallel) due to a missing index. This table had in excess of 4 million rows in it. The rest hit indexes well. However, the entire query generates ONLY 12 READS.
Once corrected, a single-processor plan is used. This looks really efficient and uses 120 reads. That looks like the right figure to me.
Does the profiler only display one thread of a parallel plan perhaps? Or something else?
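One way to cross-check Profiler's figure is STATISTICS IO for the same statement, which reports logical reads per table regardless of the degree of parallelism:

SET STATISTICS IO ON;
SELECT COUNT(*) FROM sys.objects;  -- stand-in for the five-table join
SET STATISTICS IO OFF;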
Apr 13, 2006
I have 2 SQL databases which are the same and are giving me different query plans.

select s.* from hlresults h
inner join specimens s on s.specimen_tk = h.specimen_tk
where s.site_tk = 9 and s.location in ('ABC','WIAD')
and s.date_collected between '2/1/2003' and '2/3/2006'
order by s.location, s.date_collected

Both boxes have the same configuration; the only difference is that one of them is a cluster. The Acluster box is taking twice as long to run the query. I have run statistics on both, and the cluster is still creating a bitmap and running some parallelism which the other box is not.

Also, in the first step, the A1 box estimates the rows returned to be around 80K and the actual rows returned is about 40K (subtree cost = 248). The Acluster box estimates 400K (subtree cost = 533)! After running statistics, how can it be so off? I've also reindexed to no avail. Any insight would be very much appreciated. We just moved to this new system and I hate that the db is now slower.

A1 sp_configure output (name, minimum, maximum, config_value, run_value):

affinity mask -2147483648 2147483647 0 0
allow updates 0 1 0 0
awe enabled 0 1 1 1
c2 audit mode 0 1 0 0
cost threshold for parallelism 0 32767 0 0
Cross DB Ownership Chaining 0 1 0 0
cursor threshold -1 2147483647 -1 -1
default full-text language 0 2147483647 1033 1033
default language 0 9999 0 0
fill factor (%) 0 100 90 90
index create memory (KB) 704 2147483647 0 0
lightweight pooling 0 1 0 0
locks 5000 2147483647 0 0
max degree of parallelism 0 32 4 4
max server memory (MB) 4 2147483647 14336 14336
max text repl size (B) 0 2147483647 65536 65536
max worker threads 32 32767 255 255
media retention 0 365 0 0
min memory per query (KB) 512 2147483647 1024 1024
min server memory (MB) 0 2147483647 4096 4096
nested triggers 0 1 0 0
network packet size (B) 512 32767 4096 4096
open objects 0 2147483647 0 0
priority boost 0 1 0 0
query governor cost limit 0 2147483647 0 0
query wait (s) -1 2147483647 -1 -1
recovery interval (min) 0 32767 0 0
remote access 0 1 1 1
remote login timeout (s) 0 2147483647 0 0
remote proc trans 0 1 0 0
remote query timeout (s) 0 2147483647 0 0
scan for startup procs 0 1 1 1
set working set size 0 1 0 0
show advanced options 0 1 1 1
two digit year cutoff 1753 9999 2049 2049
user connections 0 32767 0 0
user options 0 32767 0 0

Acluster sp_configure output: identical to A1 except min server memory (MB) 0 2147483647 4095 4095.
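If sampled statistics are behind the bad estimate on the cluster, a full-scan update of the two tables in the query is a cheap experiment (schema assumed to be dbo):

UPDATE STATISTICS dbo.specimens WITH FULLSCAN;
UPDATE STATISTICS dbo.hlresults WITH FULLSCAN;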
Aug 16, 2006
As part of my data warehouse nightly build, I truncate my tables in my target database. As an example, I find it is much quicker to do a bulk API load of 13M records than to do an update/insert of 100K rows. I also drop the indexes before the builds and reindex after. That's an aside. What I am wondering is how this impacts the statistics. Do I need to update them? I'm not well versed on statistics and any data is welcomed. Thanks, Rob
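A common pattern after a nightly truncate-and-load, sketched with a hypothetical table name; an explicit update avoids waiting for the auto-update threshold to trip mid-day:

UPDATE STATISTICS dbo.FactSales WITH FULLSCAN;  -- hypothetical fact table
-- or, across the whole database:
EXEC sp_updatestats;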
Jan 23, 2004
Gurus,
I'm trying to get an application finished that works like Query Analyzer in terms of returning query plans and statistics.
Problem the co-author is having:
>In using ADO to connect to SQL Server, I'm trying to retrieve multiple
>datasets AND statistics that are usually returned via the OnInfoMessage
>event. For those that are familiar with SQL Server, I need the results
>returned by the SET STATISTICS IO ON and SET STATISTICS PROFILE ON options.
>Anyone had any luck doing this before?
Can anyone shed any light on this please?
Thanks.
BTW if anyone wants to take a look at the tool so far - to see what I'm
delving into:
http://81.130.213.94/myforum/forum_posts.asp?TID=78&PN=1
Much Appreciated!!
Jul 16, 2007
Hi, we are trying to solve a real puzzle. We have a stored procedure that exhibits *drastically* different execution times depending on how it's executed. When run from QA, it can take as little as 3 seconds. When it is called from an Excel VBA application, it can take up to 180 seconds. Although, at other times, it can take as little as 20 seconds from Excel.

Here's a little background. The 180 second response time *usually* occurs after a data load into a table that is referenced by the stored procedure. A check of DBCC SHOW_STATISTICS shows that the statistics DO get updated after a large amount of data is loaded into the table.

*** So, my first question is, do the updated statistics force a recompile of the stored procedure?

Next, we checked syscacheobjects to see what was going on with the execution plan for this stored procedure. What I expected to see was ONE execution plan for the stored procedure. This is not the case at all. What is happening is that TWO separate COMPILED PLANs are being created, depending on whether the sp is run from QA or from Excel. In addition, there are several EXECUTABLE PLANs that correspond to the two COMPILED PLANs. Depending on *where* the sp is run, the usecount increases for the various EXECUTABLE PLANs.

To me, this does not make any sense! Why are there *multiple* compiled and executable plans for the SAME sp? One theory we have is that we need to call the sp with the dbo qualifier, i.e., EXEC dbo.sp. Has anyone seen this? I just want to get to the bottom of this and find out why sometimes the query takes 180 seconds and other times only takes 3 seconds!! Please help. Thanks much
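Since the post is already looking at syscacheobjects: its setopts column records the SET options each plan was compiled with, and QA and ADO/Excel typically differ on ARITHABORT, which alone is enough to force separate compiled plans. A sketch (the proc name is a placeholder):

SELECT cacheobjtype, objtype, usecounts, setopts, sql
FROM master.dbo.syscacheobjects
WHERE sql LIKE '%spYourProc%';  -- placeholder proc name

If the two plan rows show different setopts values, that, rather than the dbo qualifier, is the likely reason for the duplicates.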
Jan 17, 2008
We know that a query execution plan exists for Stored Procedures in the Procedure Cache.
What about views? Does a view have a query execution plan? We know that a view is a virtual table and that virtual table is populated when the view is invoked, but does it have a query execution plan?
I have tried to find this info for views in BOL but I cannot see it anywhere.
Mar 10, 2006
Hi, I need to shrink a database file and was wondering whether it is required to run a full backup after the shrink operation. In SQL Server 7.0, shrinkfile was a non-logged operation so it would invalidate your transaction logs. Is the same true for 2000? Obviously, as a matter of course, I would backup before and after the operation, but going forward I may want to implement this on a regular basis. Cheers, Dee
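For reference, the operation in question, with a hypothetical logical file name and target size:

DBCC SHRINKFILE (N'MyDb_Data', 1024);  -- target size in MB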
Jul 23, 2005
I have a stored procedure that suddenly started performing horribly. The query plan didn't look right to me, so I copy/pasted the code and ran it (it's a single SELECT statement). That ran pretty well and used a query plan that made sense. Now, I know what you're all thinking... stored procedures have to optimize for variable parameters, etc. Here's what I've tried to fix the issue:

1. Recompiled the stored procedure
2. Created a new, but identical, stored procedure
3. Created the stored procedure with the RECOMPILE option
4. Created the stored procedure with a hard-coded value instead of a parameter
5. Changed the stored procedure to use dynamic SQL

In every case, performance did not improve and the query plan remained the same (I could not easily confirm this with the dynamic SQL version, but performance was still horrible). I am currently running UPDATE STATISTICS on all of the involved tables, but that will take awhile. Any ideas? Thanks! -Tom.
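One more thing worth trying that isn't on the list above: assigning the parameter to a local variable inside the procedure, which makes the optimizer plan from average density instead of the sniffed value. A sketch with hypothetical names:

CREATE PROCEDURE dbo.usp_Example
@Id int
AS
BEGIN
DECLARE @LocalId int;
SET @LocalId = @Id;  -- the optimizer can no longer sniff the caller's value
SELECT col1, col2
FROM dbo.SomeTable
WHERE KeyCol = @LocalId;
END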
Feb 21, 2013
I think not. Microsoft says it is possible: one for parallel and one for serial execution. But I don't believe it's possible for a stored procedure to change execution plans on the fly. We have an ongoing problem with timeouts occurring in an application, and I narrowed the culprit down to a stored procedure. I couldn't find any obvious issues database-wise, no locks, etc., so I recompiled (altered) the sproc without making any changes and the issue cleared for a couple of days.
It happened again today, so I recompiled (altered) the sproc and it went away again. No code changes to either the application (so they say) or the stored procedure. I ran the code snippet below to check for sprocs with multiple cached plans and the offending one came up on a short list. So, my question is: is it one query plan per sproc, or can there be more than one? I understand the connection issues.
Code:
SELECT db_name(st.dbid) DBName,
object_schema_name(st.objectid, dbid) SchemaName,
object_name(st.objectid, dbid) StoredProcedure,
MAX(cp.usecounts) Execution_count,
st.text [Plan_Text]
INTO #TMP
[Code] .....
Jul 17, 2006
Try this script to see what queries are taking over a second. To get some real output, you need a long-running query. Here's one (estimated to take over an hour):

PRINT GETDATE()
select count_big(*)
from sys.objects s1, sys.objects s2, sys.objects s3,
sys.objects s4, sys.objects s5
PRINT GETDATE()

Output is:

session_id elapsed task_alloc task_dealloc runningSqlText FullSqlText query_plan
51 32847 0 0 select count_big(*) from sys.objects s1, sys.objects s2, sys.objects s3, sys.objects s4, sys.objects s5 SQL Plan

Clicking on SQL opens the full SQL batch as a .txt file, including the PRINT statements. Clicking on Plan allows you to see the .sqlplan file in MSSMS.

========

Title: Using a VB Script to show long-running queries, complete with query plans.

Today (July 14th), I found a query running for hours on a development box. Rather than kill it, I decided to use this opportunity to develop a script to show long-running queries, so I could see what was going on. (Reference Roy Carlson's article for the idea.)

This script generates a web page which shows long-running queries with the currently-executing SQL command, full SQL text, and .sqlplan files. The full SQL query text and the sqlplan file are output to files in your temp directory. If you have SQL Management Studio installed on the local computer, you should be able to open the .sqlplan to see the query plan of the whole batch for any statement.

'LongestRunningQueries.vbs
'By Aaron W. West, 7/14/2006
'Idea from:
'http://www.sqlservercentral.com/columnists/rcarlson/scriptedserversnapshot.asp
'Reference: Troubleshooting Performance Problems in SQL Server 2005
'http://www.microsoft.com/technet/prodtechnol/sql/2005/tsprfprb.mspx

Sub Main()
    Const MinimumMilliseconds = 1000

    Dim srvname
    If WScript.Arguments.count > 0 Then
        srvname = WScript.Arguments(0)
    Else
        srvname = InputBox("Enter the server Name", "Server", ".", VbOk)
        If srvname = "" Then
            MsgBox("Cancelled")
            Exit Sub
        End If
    End If

    Const adOpenStatic = 3
    Const adLockOptimistic = 3
    Dim i
    ' making the connection to your sql server
    ' change yourservername to match your server
    Set conn = CreateObject("ADODB.Connection")
    Set rs = CreateObject("ADODB.Recordset")
    ' this is using the trusted connection; if you use sql logins
    ' add username and password, but I would then encrypt this
    ' using Windows Script Encoder
    conn.Open "Provider=SQLOLEDB;Data Source=" & _
        srvname & ";Trusted_Connection=Yes;Initial Catalog=Master;"

    ' The query goes here
    sql = "select " & vbCrLf & _
        " t1.session_id, " & vbCrLf & _
        " t2.total_elapsed_time AS elapsed, " & vbCrLf & _
        " -- t1.request_id, " & vbCrLf & _
        " t1.task_alloc, " & vbCrLf & _
        " t1.task_dealloc, " & vbCrLf & _
        " -- t2.sql_handle, " & vbCrLf & _
        " -- t2.statement_start_offset, " & vbCrLf & _
        " -- t2.statement_end_offset, " & vbCrLf & _
        " -- t2.plan_handle," & vbCrLf & _
        " substring(sql.text, statement_start_offset/2, " & vbCrLf & _
        " CASE WHEN statement_end_offset<1 THEN 8000 " & vbCrLf & _
        " ELSE (statement_end_offset-statement_start_offset)/2 " & vbCrLf & _
        " END) AS runningSqlText," & vbCrLf & _
        " sql.text as FullSqlText," & vbCrLf & _
        " p.query_plan " & vbCrLf & _
        "from (Select session_id, " & vbCrLf & _
        " request_id, " & vbCrLf & _
        " sum(internal_objects_alloc_page_count) as task_alloc, " & vbCrLf & _
        " sum (internal_objects_dealloc_page_count) as task_dealloc " & vbCrLf & _
        " from sys.dm_db_task_space_usage " & vbCrLf & _
        " group by session_id, request_id) as t1, " & vbCrLf & _
        " sys.dm_exec_requests as t2 " & vbCrLf & _
        "cross apply sys.dm_exec_sql_text(t2.sql_handle) AS sql " & vbCrLf & _
        "cross apply sys.dm_exec_query_plan(t2.plan_handle) AS p " & vbCrLf & _
        "where t1.session_id = t2.session_id and " & vbCrLf & _
        " (t1.request_id = t2.request_id) " & vbCrLf & _
        " AND total_elapsed_time > " & MinimumMilliseconds & vbCrLf & _
        "order by t1.task_alloc DESC"

    rs.Open sql, conn, adOpenStatic, adLockOptimistic
    'rs.MoveFirst

    pg = "<html><head><title>Top consuming queries</title></head>" & vbCrLf
    pg = pg & "<table border=1>" & vbCrLf
    If Not rs.EOF Then
        pg = pg & "<tr>"
        For Each col In rs.Fields
            pg = pg & "<th>" & col.Name & "</th>"
            c = c + 1
        Next
        pg = pg & "</tr>"
    Else
        pg = pg & "Query returned no results"
    End If
    cols = c

    dim filename
    dim WshShell
    set WshShell = WScript.CreateObject("WScript.Shell")
    Set WshSysEnv = WshShell.Environment("PROCESS")
    temp = WshShell.ExpandEnvironmentStrings(WshSysEnv("TEMP")) & "\"
    filename = temp & filename

    Dim fso, f
    Set fso = CreateObject("Scripting.FileSystemObject")

    i = 0
    Dim c
    Do Until rs.EOF
        i = i + 1
        pg = pg & "<tr>"
        For c = 0 to cols-3
            pg = pg & "<td>" & RTrim(rs(c)) & "</td>"
        Next
        'Output FullSQL and Plan Text to files, provide links to them
        filename = "topplan-sql" & i & ".txt"
        Set f = fso.CreateTextFile(temp & filename, True, True)
        f.Write rs(cols-2)
        f.Close
        pg = pg & "<td><a href=""" & filename & """>SQL</a>"
        filename = "topplan" & i & ".sqlplan"
        Set f = fso.CreateTextFile(temp & filename, True, True)
        f.Write rs(cols-1)
        f.Close
        pg = pg & "<td><a href=""" & filename & """>Plan</a>"
        'We could open them immediately, eg:
        'WshShell.run temp & filename
        rs.MoveNext
        pg = pg & "</tr>"
    Loop
    pg = pg & "</table>"

    filename = temp & "topplans.htm"
    Set f = fso.CreateTextFile(filename, True, True)
    f.Write pg
    f.Close

    Dim oIE
    SET oIE = CreateObject("InternetExplorer.Application")
    oIE.Visible = True
    oIE.Navigate(filename)
    'Alternate method:
    'WshShell.run filename

    ' cleaning up
    rs.Close
    conn.Close
    Set WshShell = Nothing
    Set oIE = Nothing
    Set f = Nothing
End Sub

Main
Mar 9, 2006
I compared a view's query plan with the query plan I get if I run the same statement from the view definition, and I get different results. The view plan is more expensive and runs longer. The view contains 4 inner joins; statistics are updated for all tables. Any ideas?
Aug 8, 2007
Hello all,
I have a report with a table and a chart. It uses dataset1 as the data source.
All works fine.
I create a new dataset called dataset2.
The queries are exactly the same. The only differences between the 2 datasets are the database server and the fact that one of the columns is a smallint (in dataset2) and an int (in dataset1).
I change the datasetName property of both the table and the chart to use dataset2.
When I run the report I get a conversion error stating that there was an overflow of int2 while using dataset1. I have verified the report is not using dataset1 anywhere. If I delete dataset1 and run the report the error goes away. If I add it back, I get the error again. Why is the report looking at dataset1 if it is not referenced at all in the report? Does SQL RS cache the datasets and verify each when it compiles?
regards,
Bill
Aug 1, 2007
I am using SQL Server 2005. I have a VIEW that joins several tables. Columns can be added to one of the tables dynamically by the user from a GUI interface. However, after a column is added, it does not show up in the VIEW immediately. It takes a while (I haven't figured out exactly how long) before the extra column shows up in the execution result of the VIEW.
So it seems like SQL server is caching that VIEW's schema. Is there anyway I can make this view always comes back with the latest schema?
Thanks a lot!
Penn
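A view's metadata is indeed persisted when the view is created; one documented way to rebind it after the underlying tables change (the view name below is a placeholder):

EXEC sp_refreshview N'dbo.MyView';

It is also worth avoiding SELECT * in the view definition, which is what usually causes the stale-column symptom in the first place.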
Mar 11, 2008
Hi, I have a search and I want to create a hyperlinked list of the top 5 search terms below it. What's the most efficient way to go about this?
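Assuming searches are logged to a table (the schema below is hypothetical), the list itself is a simple aggregate that could be cached on the page for a few minutes:

SELECT TOP 5
SearchTerm,
COUNT(*) AS TimesSearched
FROM dbo.SearchLog          -- hypothetical log table
GROUP BY SearchTerm
ORDER BY TimesSearched DESC;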
May 28, 2008
Phil, great links, really helpful and appreciated.
I just need to verify one thing on the lookup method:
One of the lookup methods people were discussing is the non-cached lookup, which seems to have been evaluated as the fastest. Is non-cached the default for the Lookup transformation? And when I want the lookup method to be cached, I need to go into the Advanced tab and set the cache percentage, right? Thanks.
Oct 23, 2007
Hi,
I'm trying to understand the cases where it's better to use a snapshot and when it's better to use cached instances.
If I have 100 users trying to reach a report, is it better to use a snapshot or a cached instance? In both cases, the 100 users will have the same report result. And what about performance, is it similar?
Thanks for your time and response,
See you,
Have a nice day!
regards,
sandy
Jun 19, 2007
Hello,
I would like to know: what is the difference between a snapshot and a cached instance in SSRS?
Which one has the best performance, and which one is the best for multiple users and reports containing parameters (the parameters are then passed in the WHERE clause of the SQL code; e.g., WHERE IN (@param1))?
Thanks for your answers.
Zoz
Dec 10, 2007
Is it possible to keep a cached Lookup in memory when executing multiple Data Flows? Executing DFTs in parallel will cache and use the same LOOKUP statement. But what if I'm executing the DFTs sequentially, can I keep the LOOKUP from the first DFT in memory for the second DFT? For example, in my case, I'm caching a lookup against the Customer dimension for invoices. The second DFT then processes credits and again does a lookup against the Customer dimension. I want to use the cached Customer records from the first DFT.
May 27, 2005
Is there a way to programmatically determine if a report should be generated from the cache or run against real-time data?
Jul 8, 2006
Parameterized queries are only allowed on partial or no-cache style Lookup transforms, not "full" ones. Is there some "trick" to parameterizing a full-cache lookup, or should the join simply be done at the source, obviating the need for a full-cache lookup at all? (Other suggestions certainly welcome.)
More particularly, I'd like to use the Lookup transform in a surrogate key pipeline. However, the dimension is large (900 million rows), so it would be useful to restrict the Lookup transform's cache by a join to the source.
For example:
Source query is: select a,b,c from t where z=@filter (20,000 rows)
Lookup transform query: select surrogate_key,business_key from dimension (900 M rows, not tenable)
Ideal Lookup transform query:
select distinct surrogate_key
,business_key
from dimension d inner join
t on d.business_key = t.c
where t.z = @filter
May 1, 2006
I have a procedure that generates dynamic sql and then executes via the execute(strSQL) syntax. BOL states that if I use sp_executesql with hard-typed parameters passed in variables, the query optimizer will 'probably' match the sql statement with the cached execution path, thus avoiding recompilation and speeding up the results for heavily run procedures.
Can anyone tell me if this is also true if the sql references an object on a linked sql server 2000 database? Technically, the sql is exactly the same, but I'm unsure if there is some exception due to the way linked objects are processed.
Thanks!
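For reference, the pattern BOL describes, pointed at a linked-server object (all names hypothetical); the statement text, including the four-part name, must match exactly for the cached plan to be reused:

EXEC sp_executesql
N'SELECT col1 FROM LinkedSrv.RemoteDb.dbo.RemoteTable WHERE id = @id',
N'@id int',
@id = 42;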
Apr 11, 2008
Hello,
This is my first post, and I'm hoping you all can help.
Using Reporting Services 2005, I have several reports that use embedded images.
All images render fine when the report execution is set to:
1) Always run this report with the most recent data
1a) Do not cache temporary copies of this report
However, when I change the execution to either a Cache or a Snapshot, the images and some charts render as red "X" placeholders. This is sometimes remedied when the user clicks the page refresh button, but not always.
Of course, I could just have all the concurrent users use the uncached report that hits the OLAP server, but that would be highly inefficient, and just plain slow.
Thanks for any help on this subject.
-michael
Jun 12, 2007
I frequently go back and forth between making changes to my reports and previewing those changes, all from within Visual Studio, without publishing the reports each time.
The problem is that frequently the changes I make are not reflected in the preview window unless I either close Visual Studio completely or wait some length of time (I'm not sure how long exactly, but half an hour seems to always do the trick).
Is there a way to clear any cache and force visual studio to completely reprocess a report?
At least when it is a formatting change I can identify whether the change has stuck, but when I'm fixing bugs in the code, I can't tell if I didn't fix it or if the change just hasn't taken effect.