Generating A Cached Copy Of The Report
May 27, 2005
Is there a way to programmatically determine whether a report should be generated from the cache or run against real-time data?
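For checking the configured behavior up front, the ReportingService2005 SOAP endpoint exposes GetExecutionOptions and GetCacheOptions for a given report. For checking what actually happened, here is a hedged sketch against the (unsupported) ReportServer catalog of a default SSRS 2005 install; the Source codes are assumptions worth verifying on your version:
-- Recent executions of one report, with where each run's data came from.
-- Source: 1 = Live, 2 = Cache, 3 = Snapshot (assumed codes; check your version's docs).
SELECT TOP 20 c.Path, e.TimeStart, e.Source
FROM ReportServer.dbo.ExecutionLog e
INNER JOIN ReportServer.dbo.Catalog c ON c.ItemID = e.ReportID
WHERE c.Path = '/MyFolder/MyReport'
ORDER BY e.TimeStart DESC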
Hi All
I have a ReportViewer in a Windows form that behaves very strangely. When I open the form and run the report, it shows up nicely in the viewer. If I print the report, only one page prints. When I print the report a second time, the whole report prints. Next I change the report parameters and run the report; it shows up nicely in the viewer, but when I print it, the first report is printed.
Does anyone know what to do with this problem?
Thanks in advance
Hello,
This is my first post, and I'm hoping you all can help.
Using Reporting Services 2005, I have several reports that use embedded images.
All images render fine when the report execution is set to:
1) Always run this report with the most recent data
1a) Do not cache temporary copies of this report
However, when I change the execution to either a Cache or a Snapshot, the images and some charts render as red "X" placeholders. This is sometimes remedied when the user clicks the page refresh button, but not always.
Of course, I could just have all the concurrent users use the uncached report that hits the OLAP server, but that would be highly inefficient, and just plain slow.
Thanks for any help on this subject.
-michael
We are generating an Excel report using ReportViewer at run time, but if the report has more than 65,000 records it fails with: Microsoft.ReportingServices.OnDemandReportRendering.ReportRenderingException: Excel Rendering Extension: Number of rows exceed.
Hi
Below is the error message I receive when I try to generate a report in System Center Essentials 2007.
Can anybody help me in this regard?
An internal error occurred on the report server. See the error log for more details. Could not find stored procedure 'CopyChunksOfType'.
Thanks and Regards,
KMohan
I am using VS2003 and SQL Server 2000 with Reporting Services.
When I try to generate the report I get the error "Must declare the variable '@qcode'".
I checked the file quotation.rdl and the parameter does appear to be declared:
</PageFooter>
<ReportParameters>
<ReportParameter Name="qcode">
<DataType>String</DataType>
<Nullable>true</Nullable>
<DefaultValue>
<DataSetReference>
<DataSetName>DS_quot_code</DataSetName>
<ValueField>quot_code</ValueField>
</DataSetReference>
</DefaultValue>
<AllowBlank>true</AllowBlank>
<Prompt>Quotation Code:</Prompt>
<ValidValues>
<DataSetReference>
<DataSetName>DS_quot_code</DataSetName>
<ValueField>quot_code</ValueField>
<LabelField>quot_code</LabelField>
</DataSetReference>
</ValidValues>
</ReportParameter>
I would like to know what triggers the problem and what I should do to solve it.
Thank you.
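An editorial guess at the cause, hedged: declaring the report parameter (as above) is separate from mapping it into the dataset query. A sketch, with the table and column names invented from the file name:
-- The dataset query presumably looks something like this, referencing @qcode
-- (quotation, quot_date, quot_total are assumptions):
SELECT quot_code, quot_date, quot_total
FROM quotation
WHERE quot_code = @qcode
-- For @qcode to be bound at run time, the dataset's <Query> element in the RDL must
-- also contain a <QueryParameters> entry named @qcode whose value is
-- =Parameters!qcode.Value; a missing mapping commonly produces exactly this error.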
Hi,
I have created an interface for generating the RDL file through C# (the user selects some fields for grouping and some for details, and the report is generated accordingly). I am generating the XML for the RDL according to the schema. The report runs fine on my local system, where I am using SQL Express, but it is not working in production. The problem is with the data source, as follows:
An error has occurred during report processing.
Cannot create a connection to data source 'SOP'.
Login failed for user '(null)'. Reason: Not associated with a trusted SQL Server connection
I am using Integrated Authentication; below is the data source code generated by C#:
<DataSource Name="SOP">
<ConnectionProperties>
<IntegratedSecurity>true</IntegratedSecurity>
<ConnectString>Data Source=SIGMASQL;Initial Catalog=SOP</ConnectString>
<DataProvider>SQL</DataProvider>
</ConnectionProperties>
<rd:DataSourceID>36a274d3-f283-4ac9-9f26-401ddf14f733</rd:DataSourceID>
</DataSource>
When I paste the RDL XML into my BI project's report, the preview works properly, but after deployment it does not.
I have also tried SQL authentication by removing the IntegratedSecurity element and changing the connection string to add the "sa" user ID, but it still fails with the same error, and if I refresh the report with the report viewer's refresh button, it shows a Wrong String Format error.
When I edit the dataset in the report designer (BI project), the connection string does not store the password and username in the RDL XML; I read somewhere that VS2005 and the Reporting Services database store these values with some encryption and keep them out of the XML. So it seems that I am not sending these credentials to Reporting Services when deploying through my C# code. Below is my deployment code:
byte[] byteRDL;
System.Text.UTF8Encoding encoder = new UTF8Encoding();
byteRDL = encoder.GetBytes(reportDefination);
Property[] rsProperty = new Property[10];
//Property property = new Property();
Warning[] warnings;
warnings = rs.CreateReport(reportName, "/QuoteReports", true, byteRDL, null);
But I have no idea why it is not even running with Windows authentication.
I am in big trouble, guys. Please help. I have to show this to my client.
We have a database in MS Access and reports in Crystal Reports 5. Recently we converted the database to SQL Server 7 and the reports to Crystal Reports 8. When I open the reports directly in Crystal Reports, connecting to the database through OLE DB, I get the reports, but from the application (VB6) the report window pops up and later an error message says it is unable to open the SQL Server. When I checked, the connection is OK. Please help.
anil
I have been generating report models for users to use with Report Builder and there is no data when they select the model. I noticed that the tables I chose did not have a primary key and when I chose a different table, with a primary key, and generated a model from it, then there was data for the user to use in Report Builder.
Is there a documented workaround, or will I need to set a primary key on each table?
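For what it's worth, report models do rely on key information, so the usual options are to define logical primary keys on the tables in the Data Source View, or to add real keys. A minimal sketch of the latter, with placeholder names:
-- Placeholder table/column names; repeat per keyless table.
ALTER TABLE dbo.MyTable
ADD CONSTRAINT PK_MyTable PRIMARY KEY (MyTableId);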
Hi All,
I am using SSRS and VS2005 to develop reports. The deployed reports are placed in a source directory (c:\source), and I am using an ASP.NET page with a button on it to generate a report in PDF format; after generation the PDF document should be placed in a destination folder (c:\target).
Can anyone help me out in solving this, or at least direct me on how to go about it?
thanks
Hi,
I have the table below.
Query
PKEY id int
name varchar(128)
date_add DateTime
What is the SQL statement to get the number of queries on each day?
The output should be date and quantity, and there should still be a row even if there were no queries on that day.
The only way I can think of is a table-valued UDF, roughly like this:
CREATE FUNCTION dbo.QueryCountByDay (@startdate datetime, @enddate datetime)
RETURNS @result TABLE (querydate datetime, quantity int)
AS
BEGIN
    WHILE @startdate <= @enddate
    BEGIN
        INSERT INTO @result
        SELECT @startdate, COUNT(*) FROM Query
        WHERE date_add >= @startdate AND date_add < DATEADD(dd, 1, @startdate)
        SET @startdate = DATEADD(dd, 1, @startdate)
    END
    RETURN
END
Is there a more efficient way to do this?
Thanks,
Max
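For the "more efficient way": a set-based sketch, assuming the Query table above and SQL Server 2005 or later for the recursive CTE. Empty days come back as zero via the LEFT JOIN.
DECLARE @startdate datetime, @enddate datetime
SET @startdate = '2008-01-01'
SET @enddate = '2008-01-31'
;WITH Days (d) AS (
    SELECT @startdate
    UNION ALL
    SELECT DATEADD(dd, 1, d) FROM Days WHERE d < @enddate
)
SELECT d AS querydate, COUNT(q.id) AS quantity
FROM Days
LEFT JOIN Query q ON q.date_add >= d AND q.date_add < DATEADD(dd, 1, d)
GROUP BY d
ORDER BY d
OPTION (MAXRECURSION 0)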
Hi all,
I have tried asking the same question in other forums, and all I get is links.
Please help me. I have the following requirement:
I have the following tables:
Theater - TheaterId, TheaterName, Revenues,locationid, stateid
State - StateId, StateName
Location - LocationId, LocationName, StateId
I want to generate reports that tell me the revenue generated by each theater in each location in a state. I want to run a batch process that loops through the three tables, passing the location and state IDs as parameters one by one. I want each report to be generated as a PDF and stored in a location. How do I do this?
Thanks.
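One hedged approach: generate one SSRS URL-access request per state/location and have the batch process fetch and save each URL; rs:Format=PDF makes the server return the rendered PDF. Server name, report path, and parameter names below are assumptions:
-- Build one render URL per location; a batch downloader then saves each response.
SELECT 'http://yourserver/reportserver?/Reports/TheaterRevenue'
     + '&rs:Command=Render&rs:Format=PDF'
     + '&StateId=' + CAST(s.StateId AS varchar(10))
     + '&LocationId=' + CAST(l.LocationId AS varchar(10)) AS RenderUrl
FROM State s
INNER JOIN Location l ON l.StateId = s.StateId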
Hi,
I have been using an MS SQL 2000 database for generating reports up till now, but per a new requirement I have to obtain the data for the report from SQL Express rather than MS SQL 2000, and in offline mode at that. I do not know how data is structured in SQL Express.
1) Is data stored in tables, or do we use XML?
2) Do we write queries similar to those we write in MS SQL?
3) And lastly, what is this offline stuff?
Can anyone please throw some light on the comparison between SQL 2000 and SQL Express on these three points?
Thanks in advance
Wasted a lot of time figuring out the possible cause of this. I was working in connected mode (with VSS), and when Visual Studio tried to create a report model (after doing all its validations/checks), it had to check out the project file, which is normal. But before doing so, it tries to check out the .dsv file as well (the data source view file from which the report model is being generated). While attempting this, Visual Studio crashes.
I could never have guessed that this was the issue; all along I was trying to figure out whether there was anything wrong with the data in my tables.
So, for me, a simple solution worked: check out the .dsv file before you start creating the data model. I hope this saves time for others.
My problem is this: How do I dynamically generate a Report Model Definition with c#?
Is there some sort of method I could call from the ReportingService2005 web service? Or some sort of APIs I could use?
If I didn't have a dynamic database structure, I would just create a Report Model Definition with BIS and then deploy the same model to each customer. However, our product creates additional tables in the database, depending on what data users wish to collect.
There are currently two solutions to this problem. First, I can manually create a report model definition through the Business Intelligence Studio (BIS). However, I wish to be able to generate the report model dynamically without having to go through BIS. Second, I could use C# and write the XML of the SMDL by hand. However, this seems problematic.
I'm really hoping for some MS API that I'm missing out on here. Thanks for the help.
- Sean
Daily Report Generating Monthly Rollup Stats
I have a daily report which each morning generates monthly information for the current month; it was implemented in December. Everything worked correctly until January 1st. On the 1st the report came out blank: it was supposed to cover Dec 1-31, but the current month was now January, so it failed. How do I program it so that on the 1st of a month it generates the previous month, while still generating the current month during the rest of the month? Any help is appreciated. (A sketch follows the query below.)
SELECT GETDATE() - 1 AS rptdate, Errors.WTG_ID, lookup.Phase, Errors.STATUS_TYPE, Errors.STATUS_CODE,
       STATUS_CODES.STATUS_DEF, Errors.TIME_STAMP, Errors.ANSI_TIME, lookup.WTG_TYPE, Errors.POSITION
FROM Errors
INNER JOIN lookup ON Errors.WTG_ID = lookup.WTG_id
RIGHT OUTER JOIN STATUS_CODES ON Errors.STATUS_CODE = STATUS_CODES.STATUS_CODE
    AND lookup.WTG_TYPE = STATUS_CODES.WTG_TYPE
WHERE (STATUS_CODES.STATUS_DEF IS NOT NULL)
  AND (Errors.TIME_STAMP BETWEEN DATEADD(mm, DATEDIFF(mm, 0, GETDATE()), 0)
                             AND DATEADD(mm, DATEDIFF(mm, 0, GETDATE()) + 1, 0))
ORDER BY Errors.WTG_ID, Errors.TIME_STAMP, position
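A minimal sketch of one fix: anchor the month calculation on yesterday instead of today, so a run on the 1st still covers the month that just ended, while a run on any other day covers the current month. Variable names are placeholders; the WHERE clause then uses these bounds:
DECLARE @anchor datetime, @monthstart datetime, @monthend datetime
SET @anchor = DATEADD(dd, -1, GETDATE())                 -- yesterday drives the month
SET @monthstart = DATEADD(mm, DATEDIFF(mm, 0, @anchor), 0)
SET @monthend = DATEADD(mm, 1, @monthstart)
-- ... WHERE Errors.TIME_STAMP >= @monthstart AND Errors.TIME_STAMP < @monthend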
Hi
I was wondering if it is possible to call the Reporting Services web service directly from my SQL Server stored procedure. The call I need to make to the web service should generate the report in PDF format.
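Calling the SOAP API from T-SQL is awkward; a hedged alternative sketch uses OLE Automation (which must be enabled on the instance, and carries real security and stability trade-offs) to hit the URL-access endpoint, where rs:Format=PDF returns the rendered PDF. Server and report path are placeholders:
DECLARE @obj int, @hr int, @status int, @url varchar(500)
SET @url = 'http://yourserver/reportserver?/Reports/MyReport&rs:Format=PDF'
EXEC @hr = sp_OACreate 'MSXML2.ServerXMLHTTP', @obj OUT
EXEC @hr = sp_OAMethod @obj, 'open', NULL, 'GET', @url, 0
EXEC @hr = sp_OAMethod @obj, 'send'
EXEC @hr = sp_OAGetProperty @obj, 'status', @status OUT   -- 200 means the PDF came back
EXEC sp_OADestroy @obj
-- Saving the response body to disk takes additional work (e.g. ADODB.Stream or CLR).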
When creating a report using Reporting Services with a cube as a data source, Visual Studio 2005 hangs. I have applied SP1 to Visual Studio 2005.
This is the query generated by the query designer in Reporting Services. When I try to drag and drop another dimension to slice by, Visual Studio hangs.
The fact table has just 10,000 records. Are there too many dimensions slicing the cube? How do I optimize this query?
SELECT NON EMPTY {
    [Measures].[Avg MT Rate - Speech], [Measures].[Change in Avg MT Rate], [Measures].[Edited Lines Per 1000],
    [Measures].[Speech Utilization], [Measures].[Time Saved %], [Measures].[Speech Edited for Rev 1],
    [Measures].[Change in Account Productivity], [Measures].[Hours Saved], [Measures].[Lines Per 1000],
    [Measures].[Average MT Rate - Total Production], [Measures].[Typed Lines],
    [Measures].[Productivity Time in hours - Edited Docs], [Measures].[Productivity Time in Hours - Typed Docs],
    [Measures].[Productivity Time in Hours]
} ON COLUMNS,
NON EMPTY {
    ( [Dim Customer].[Customer Name].[Customer Name].ALLMEMBERS
    * [Dim Customer].[Dictator Site Name].[Dictator Site Name].ALLMEMBERS
    * [Dim Date].[The Month].[The Month].ALLMEMBERS
    * [Dim Date].[The Year].[The Year].ALLMEMBERS
    * [Dim MT].[Creator Last Name].[Creator Last Name].ALLMEMBERS
    * [Dim MT].[Creator First Name].[Creator First Name].ALLMEMBERS )
} DIMENSION PROPERTIES MEMBER_CAPTION, MEMBER_UNIQUE_NAME ON ROWS
FROM ( SELECT ( STRTOMEMBER(@FromDimDateTheYear, CONSTRAINED) : STRTOMEMBER(@ToDimDateTheYear, CONSTRAINED) ) ON COLUMNS
       FROM ( SELECT ( STRTOMEMBER(@FromDimDateTheMonth, CONSTRAINED) : STRTOMEMBER(@ToDimDateTheMonth, CONSTRAINED) ) ON COLUMNS
              FROM [ETL DW] ) )
CELL PROPERTIES VALUE, BACK_COLOR, FORE_COLOR, FORMATTED_VALUE, FORMAT_STRING, FONT_NAME, FONT_SIZE, FONT_FLAGS
Hello all,
I have a report with a table and a chart. It uses dataset1 as the data source.
All works fine.
I create a new dataset called dataset2.
The queries are exactly the same. The only differences between the two datasets are the database server and the fact that one of the columns is a smallint (in dataset2) and an int (in dataset1).
I change the DataSetName property of both the table and the chart to use dataset2.
When I run the report I get a conversion error stating that there was an overflow of int2 while using dataset1. I have verified the report is not using dataset1 anywhere. If I delete dataset1 and run the report the error goes away. If I add it back, I get the error again. Why is the report looking at dataset1 if it is not referenced at all in the report? Does SQL RS cache the datasets and verify each when it compiles?
regards,
Bill
I am using SQL Server 2005. I have a VIEW that joins several tables. Columns can be added to one of the tables dynamically by the user from a GUI. However, after a column is added, it does not show up in the VIEW immediately; it takes a while (I haven't figured out exactly how long) before the extra column shows up in the VIEW's results.
So it seems SQL Server is caching the VIEW's schema. Is there any way I can make this view always come back with the latest schema?
Thanks a lot!
Penn
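A minimal sketch of the usual remedy (the view name is a placeholder): sp_refreshview rebuilds the view's stored metadata, which is what goes stale when base tables change underneath a view.
EXEC sp_refreshview 'dbo.MyJoinedView'
-- Run this right after the GUI adds a column, e.g. from the same code path that
-- issues the ALTER TABLE, so the view's cached schema is refreshed immediately.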
Hi, I have a search box, and I want to create a hyperlinked list of the top 5 search terms below it. What's the most efficient way to go about this?
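A hedged sketch of the query side, assuming each search is logged to a SearchLog table (table and column names are placeholders); the page would then render each row as a link that re-runs that search:
SELECT TOP 5 term, COUNT(*) AS hits
FROM SearchLog
GROUP BY term
ORDER BY COUNT(*) DESC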
I want to check the performance of my query, and I just want to remove cached query results first. Is there any suggestion how I can do this?
I just want to check, after each modification, how much the performance improves.
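A minimal sketch of the usual test-server approach; both DBCC commands affect the whole instance, so don't run them in production:
CHECKPOINT                  -- flush dirty pages so the next command can drop them
DBCC DROPCLEANBUFFERS       -- empty the data cache (gives cold-read timings)
DBCC FREEPROCCACHE          -- empty the plan cache (forces recompilation)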
Phil, great links, really helpful and appreciated.
I just need to verify one thing on the lookup method:
One of the lookup methods people were discussing is the non-cached lookup, which seems to have been evaluated as the fastest. Is non-cached the default for the Lookup transformation? And when I want the lookup to be cached, I need to go into the Advanced tab and set the cache percentage, right? Thanks.
Hi,
I'm trying to understand the cases where it's better to use a snapshot and when it's better to use cached instances.
If I have 100 users trying to reach a report, is it better to use a snapshot or a cached instance? In both cases the 100 users will get the same report result. And what about performance, is it similar?
Thanks for your time and response,
See you,
Have a nice day!
regards,
sandy
Hello,
I would like to know: what is the difference between a snapshot and a cached instance in SSRS?
Which one performs best, and which one is best for multiple users and reports containing parameters (the parameters are then passed in the WHERE clause of the SQL code, e.g. WHERE ... IN (@param1))?
Thanks for your answers.
Zoz
I was wondering if anyone had any concrete information about whether there is a problem with having too many stored procedures or plans in the cache. Obviously there is an impact on memory, but if we ignore that for the time being, does SQL Server perform just as well with 100 query plans as it does with tens of millions of plans?
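A minimal sketch for sizing the question on a live instance (SQL Server 2005 and later): count the cached plans and the memory they hold before worrying about plan-count limits.
SELECT COUNT(*) AS plan_count,
       SUM(CAST(size_in_bytes AS bigint)) / 1048576 AS cache_mb
FROM sys.dm_exec_cached_plans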
View 9 Replies View RelatedIs it possible to keep a Cached Lookup in memory when executing multiple Data Flows? Executing DFT€™s in parallel will cache and use the same LOOKUP statement. But what if I€™m executing the DFT sequentially, can I keep the LOOKUP from the first DFT in memory for the second DFT? For example, in my case, I€™m caching a lookup against the Customer dimension for invoices. The second DFT then processes credits and again does a lookup against the Customer dimension. I want to use the cached Customer records from the first DFT.
View 1 Replies View RelatedParameterized queries are only allowed on partial or none cache style lookup transforms, not 'full' ones. Is there some "trick" to parameterizing a full cache lookup, or should the join simply be done at the source, obviating the need for a full cache lookup at all (other suggestion certainly welcome)
More particularly, I'd like to use the lookup transform in a surrogate key pipeline. However, the dimension is large (900 million rows), so its would be useful to restrict the lookup transform's cache by a join to the source.
For example:
Source query is: select a,b,c from t where z=@filter (20,000 rows)
Lookup transform query: select surrogate_key,business_key from dimension (900 M rows, not tenable)
Ideal Lookup transform query:
select distinct surrogate_key
,business_key
from dimension d inner join
t on d.business_key = t.c
where t.z = @filter
Can anyone give me info on how the report processing page works in Reporting Services?
My application makes some pretty heavy queries to the database, and I would like to have a message appear on the page while the request is being processed.
Something identical to the way Reporting Services deals with this would be absolutely perfect!
Any ideas?
I have three SSRS reports that I modified. The original reports had several subscriptions associated with them. When I deployed the new reports, I placed the old reports into a folder called Archive and renamed them. Now I have these new reports with the original names of the first versions of the reports.
My question is: is it possible to somehow replicate the schedules of the original reports to the new reports? There are about 30 subscriptions with between 5 and 10 recipients each, and it would be a real pain to have to manually recreate those subscriptions.
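One hedged starting point is to inventory the old subscriptions straight from the (unsupported) ReportServer catalog, then recreate them against the new reports via the web service or the UI. Table and column names are from a default SSRS 2005 install; the Archive path is from the post:
SELECT c.Path, s.Description, s.EventType, s.LastStatus, s.ModifiedDate
FROM ReportServer.dbo.Subscriptions s
INNER JOIN ReportServer.dbo.Catalog c ON c.ItemID = s.Report_OID
WHERE c.Path LIKE '/Archive/%'
ORDER BY c.Path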
I have a report whose parameters are limited by the user who is logged in. E.g., if a user has full access they see all 100 hospitals in our hospital parameter; if another user has limited access they might only see 5 - this will vary largely by who is logged in. The data source that drives these parameters is set as follows:
Credentials stored securely in the report server, a username and password are entered, and "Impersonate the authenticated user after a connection has been made to the data source" is checked. I now want to cache this report for X minutes, so I go to Properties/Execution and select "Cache a temporary copy of the report. Expire copy of report after a number of minutes".
Applying this gives me the error: Credentials used to run this report are not stored.
I have a procedure that generates dynamic SQL and then executes it via the EXECUTE(strSQL) syntax. BOL states that if I use sp_executesql with hard-typed parameters passed in variables, the query optimizer will 'probably' match the SQL statement with the cached execution plan, thus avoiding recompilation and speeding up the results for heavily run procedures.
Can anyone tell me if this is also true if the SQL references an object on a linked SQL Server 2000 database? Technically, the SQL is exactly the same, but I'm unsure whether there is some exception due to the way linked objects are processed.
Thanks!
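For reference, a minimal sketch of the two forms (placeholder table and column names). Whether plan reuse survives a reference to a linked-server object is exactly the open question here; the parameterized form below is the one BOL's claim applies to:
DECLARE @sql nvarchar(500), @id int
SET @id = 42
-- Ad hoc string: each distinct literal compiles as a new statement.
SET @sql = N'SELECT * FROM dbo.Orders WHERE CustomerId = '
         + CAST(@id AS nvarchar(10))
EXEC (@sql)
-- Parameterized: one cached plan, reused across @id values.
EXEC sp_executesql N'SELECT * FROM dbo.Orders WHERE CustomerId = @id',
                   N'@id int', @id = @id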
Is there any way to invalidate cached query plans? I would rather target a specific query than invalidate all of them. Also, is there any SQL Server setting that will cause cached query plans to be invalidated even though only one character in the query has changed? (A sketch follows the example below.)
exec sp_executesql N'select
cast(5 as int) as DisplaySequence,
mt.Description + '' '' + ct.Description as Source,
c.FirstName + '' '' + c.LastName as Name,
cus.CustomerNumber Code,
c.companyname as "Company Name",
a.Address1,
a.Address2,
[code]....
In this query we have seen (on some databases) that simply changing '@CustomerId int', @CustomerId=1065 to '@customerId int', @customerId=1065 fixed a speed problem... we just changed the case on the CustomerId bind parameter. On other servers this has no effect. We suspect the server is using an old cached query plan, but don't know for sure.
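Two hedged notes. First, the plan cache keys ad hoc and sp_executesql statements on their exact text, so a one-character case change really does compile a fresh plan, which is consistent with what you saw. Second, a sketch for targeting a single plan: the per-plan DBCC FREEPROCCACHE(plan_handle) form needs SQL Server 2008 or later, while on 2005 sp_recompile on an object the query touches is the closest per-object tool. The LIKE filter and object name are placeholders:
-- Find the plan(s) for the statement in question.
SELECT cp.plan_handle, cp.usecounts, st.text
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE st.text LIKE '%DisplaySequence%'
-- SQL Server 2008+: evict exactly one plan using the handle found above.
-- DBCC FREEPROCCACHE (0x06000500A1B2C3...)
-- SQL Server 2005: mark plans touching an object for recompilation instead.
EXEC sp_recompile 'dbo.Customer'   -- placeholder object name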