I posted last year about getting an Expand All/Collapse All link working in reports, and it works perfectly for on-demand reporting. I got it working with a Boolean report parameter and a Jump to Report action that points back to the same report, flipping that parameter.
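(For reference, the wiring for that technique is roughly the following; ExpandAll is a hypothetical name for the Boolean parameter.)

' Hidden expression on each detail group:
=Not Parameters!ExpandAll.Value
' Action on the Expand All/Collapse All textbox: "Jump to report", targeting this
' same report, with the parameter passed back inverted, i.e. parameter ExpandAll
' is given the value:
=Not Parameters!ExpandAll.Value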
The issue I have just discovered is that when a report with these Expand All/Collapse All links is in a report snapshot, clicking those links causes the report to re-render from the data source as of the time the link is clicked. This is a problem for one of the data sources we are using because the data is always changing, so when the report re-renders, the counts and data returned are completely different than they were when the snapshot was first created. As time goes on, the data will no longer even be in the data source, since it only retains 4 months of data.
Is there another way to expand/collapse all detail groups at once that does not require the report to be completely re-rendered from the data source?
I have a report that is grouping phone calls by "Category" (Outgoing, Incoming, Voicemail, etc). When the report is rendered, each category (along with totals) is shown and the details (each actual call) are hidden. Using toggling, I allow the user to burst open any category to view all of the call details.
I use a join in the query for this report so that even if a category has 0 calls for the day, it is still shown. When I expand a category with no calls I get a blank record (as I should, because there are no call details).
The question I have is ... Can the toggle be conditional? Can I allow the group to be toggled only when there are call details? Can I keep the + to the left of the category name from being shown?
Thanks in advance for any help! Merry Christmas!!!
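(For reference, a sketch of one possible workaround, since the toggle indicator itself cannot be driven by an expression in SSRS: layer two textboxes in the category cell, one designated as the ToggleItem and one plain, and flip their Hidden expressions on the detail count. CallID is a hypothetical field name; Count ignores nulls, so the blank joined row counts as 0.)

' Hidden expression on the textbox that carries the toggle:
=Count(Fields!CallID.Value) = 0
' Hidden expression on the identical, toggle-free textbox layered in the same cell:
=Count(Fields!CallID.Value) > 0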
On one of my reports, the +/- for expand/collapse of groups is reversed -- i.e. when the group is expanded it displays "+" and when the group is collapsed it displays "-". The display of the document map is correct.
I have a database query that returns quite a few rows, which I must show in a table. At initial load the report shows only the row group heading for the detailed data, and when clicked, the detailed data for a group expands.
My problem is that reporting services assumes that when collapsed it should summarize everything in each column and display it.
Here is an illustration of the problem.
First, the detailed view of my data (notice the 5th and 6th rows, which are simply the values in the 1st through 4th rows divided into groups, and the "Total" row, which shows the actual total for Row 1 Group Header):

[-] Row 1 Group Header
      Detail data 1    10
      Detail data 2    10
      Detail data 3     4
      Detail data 4     6
      Row 1 group 1    20
      Row 1 group 2    10
      Total            30

The detailed view is just as I want it - no problems there.
However, when the detailed view is collapsed, Reporting Services calculates the sum for the column - which is 90, not 30 as is the actual total:

[+] Row 1 Group Header    90
Is there any way I can redefine the formula/insert filters used for calculating the "collapsed-total" for each column when the row is collapsed? Or maybe simply prevent it from showing a "collapsed-total" altogether? Or can you think of another way to structure the data so the "collapsed-totals" gives me the actual value (30)?
I thought about splitting the values into a column for each data group, so that I'd have a column for the "Detail data n" values, a 2nd column for the "Row 1 group n" values, and a 3rd column for the "Total" values - this would yield correct "collapsed-total" values. But that is really a plan B - I do not want to do it unless it is the only way out of my dilemma.
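(One hedged option, assuming the dataset can tag each row's kind: have the group header's value sum only the detail rows, so the subtotal and total rows do not inflate the collapsed figure. RowType and Amount are hypothetical field names.)

' Value expression for the collapsed group-header cell (shows 30, not 90):
=Sum(IIf(Fields!RowType.Value = "Detail", Fields!Amount.Value, 0))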
In my report I have more than 10 subreports, each of which is displayed when you click on a label, but my requirement is to expand all the subreports at once, or collapse them. URL.....problem here is it is a genius solution working beautifully on my local machine, but on the client server, if I click the Expand/Collapse radio button it asks for the parameters again.
I have a problem with collapsing of groups. It seems that the report saves the expand/collapse state of groups between renderings ... bug or feature?
In my report I have a dataset which contains some numerical data over time.
The report is designed to show data from the selected month as well as from January through the selected month. A parameter is used to select the month.
The presentation looks like the following:

Company total
  Department
    Employee
    Employee
    ...
  Department
    Employee
    ...
  ...

The employees have their initial state set to hidden, with their respective department as toggler.
This works perfectly ... within Visual Studio. When I deploy this report to my web server, strange things happen.
For example:
1) I expand one department to see the data for the employees for May, and then decide to look at data from another month.
2) I change the month parameter and click 'View report' to render the report again.
3) The report renders as it should. All departments are collapsed.
4) If I then expand some department other than before, both the newly selected department and the old one are expanded ... a behavior that is not that practical.
It seems that the report saves the expand/collapse state for the groups between renderings. So when I click expand for the second time it expands the one I just clicked and the department which was expanded before rendering.
Is there a way to expand/collapse all items in the document map?
I know there is a DocumentMapCollapsed option, but that collapses/expands the document map panel itself. I would like to control the collapse state of the items inside the document map ...
I have a report with some groups which can all be expanded/collapsed by clicking on a textbox with an action attached to it. The example I used can be found here: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=600583&SiteId=1
But I want to hide the row containing this textbox (or the textbox itself) on the actual printed report. Is there any report item that can tell me whether the report is in the preview phase, or can this be solved in any other way?
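(A hedged note: SSRS 2008 R2 and later expose Globals!RenderFormat, which can drive visibility by renderer; earlier versions have no built-in equivalent.)

' Hidden expression on the toggle row/textbox (SSRS 2008 R2+ only);
' IsInteractive is False for print and most export renderers:
=Not Globals!RenderFormat.IsInteractive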
Has anyone out there found a way to get the best of both recursive hierarchy and drill-down in the same report, i.e. without needing to know how many levels there are in your hierarchy, while still being able to present them like a tree view with collapse and expand capability at each level?
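(A sketch of the usual recipe, with hypothetical field names: group the detail rows on their own ID, set the group's recursive parent to the parent ID, indent by depth with Level(), and toggle visibility on the group's own textbox.)

' Group expression:   =Fields!EmployeeID.Value
' Recursive parent:   =Fields!ManagerID.Value
' Left padding expression on the name textbox, indenting by hierarchy depth:
=CStr(2 + Level() * 10) & "pt"
' Visibility on the group: Hidden = True, ToggleItem = the group's name textbox.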
How do I programmatically check a row group's Visibility or Expand/Collapse flag in a matrix table? For example, I have a matrix table that contains the following groups:
Row groups: Facility --> Category Type --> Category
Column groups: Year --> Quarter --> Month
I want to be able to programmatically update the table content if Category rows are not visible (Category Type row group is collapsed).
I'm relatively new to this, so bear with me here. (SQL Server 2005 Express, Datatypes are all varchar, int or money, nothing crazy...)
I currently have a table (not designed by me...) which looks like this:
ProjectID Months Expenses
3214 JAN 45.67
3214 MAR 56.78
1234 JAN 78.99
4567 MAY 43.56
And so on.... And I need this:
Project ID   Jan     Feb     Mar     Apr     May     Jun  etc....
3214         45.67           56.78
1234         78.99
4567                                 43.56
I had attempted to do it using this code (really sloppy... I know. Beginner's attempt...)
DECLARE @Months varchar(4)
DECLARE @Counter int
DECLARE @Rowcount int
EXEC @RowCount=dbo.ReturnRowCount
SET @Counter = 0
WHILE @Counter <= @RowCount
BEGIN
SET @Counter=@Counter+1
SET @Months=(SELECT Months FROM TestData WHERE RowNum=@Counter)
SELECT @Months
WHILE @Months='JAN'
BEGIN
INSERT INTO SusansOutputTable (ProjectID, Jan)
SELECT ProjectID, Expenses FROM TestData WHERE @Counter=TestData.RowNum
SET @Months=''
END
WHILE @Months='FEB'
BEGIN
INSERT INTO SusansOutputTable (ProjectID, Feb)
SELECT ProjectID, Expenses FROM TestData WHERE @Counter=TestData.RowNum
SET @Months=''
END
WHILE @Months='MAR'
BEGIN
INSERT INTO SusansOutputTable (ProjectID, Mar)
SELECT ProjectID, Expenses FROM TestData WHERE @Counter=TestData.RowNum
SET @Months=''
END
WHILE @Months='APR'
BEGIN
INSERT INTO SusansOutputTable (ProjectID, Apr)
SELECT ProjectID, Expenses FROM TestData WHERE @Counter=TestData.RowNum
SET @Months=''
END
WHILE @Months='MAY'
BEGIN
INSERT INTO SusansOutputTable (ProjectID, May)
SELECT ProjectID, Expenses FROM TestData WHERE @Counter=TestData.RowNum
SET @Months=''
END
WHILE @Months='JUN'
BEGIN
INSERT INTO SusansOutputTable (ProjectID, Jun)
SELECT ProjectID, Expenses FROM TestData WHERE @Counter=TestData.RowNum
SET @Months=''
END
WHILE @Months='JUL'
BEGIN
INSERT INTO SusansOutputTable (ProjectID, Jul)
SELECT ProjectID, Expenses FROM TestData WHERE @Counter=TestData.RowNum
SET @Months=''
END
WHILE @Months='AUG'
BEGIN
INSERT INTO SusansOutputTable (ProjectID, Aug)
SELECT ProjectID, Expenses FROM TestData WHERE @Counter=TestData.RowNum
SET @Months=''
END
WHILE @Months='SEP'
BEGIN
INSERT INTO SusansOutputTable (ProjectID, Sep)
SELECT ProjectID, Expenses FROM TestData WHERE @Counter=TestData.RowNum
SET @Months=''
END
WHILE @Months='OCT'
BEGIN
INSERT INTO SusansOutputTable (ProjectID, Oct)
SELECT ProjectID, Expenses FROM TestData WHERE @Counter=TestData.RowNum
SET @Months=''
END
WHILE @Months='NOV'
BEGIN
INSERT INTO SusansOutputTable (ProjectID, Nov)
SELECT ProjectID, Expenses FROM TestData WHERE @Counter=TestData.RowNum
SET @Months=''
END
WHILE @Months='DEC'
BEGIN
INSERT INTO SusansOutputTable (ProjectID, Dec)
SELECT ProjectID, Expenses FROM TestData WHERE @Counter=TestData.RowNum
SET @Months=''
END
SET @Months=''
END
SELECT RowNum, ProjectID, Jan, Feb, Mar, Apr, May, Jun, Jul, Aug, Sep, Oct, Nov, Dec FROM SusansOutputTable
I also tried using an IF statement instead of the embedded WHILE statements and had the same result. What it's returning is 98,765 results of the same thing, with the exception of the last 25 rows, which return what I expected (which still needs to be collapsed...). I know there HAS to be an easier way to do this; I'm afraid it might be a bit beyond me though. Any help?
P.S. The ReturnRowCount function just returns the row count of the base table, assuming a row-number column with an int IDENTITY, which I can safely assume in this case. Also, the counter seems to work fine, but I think something is wrong in the inner portion of the WHILE loop...
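(For what it's worth, a set-based sketch that replaces the loop and the work table entirely, assuming TestData(ProjectID, Months, Expenses) as shown above. SQL Server 2005's PIVOT operator would also work; the CASE form is just easier to read.)

-- One pass over TestData, one output row per project, no loops:
SELECT ProjectID,
       SUM(CASE WHEN Months = 'JAN' THEN Expenses END) AS [Jan],
       SUM(CASE WHEN Months = 'FEB' THEN Expenses END) AS [Feb],
       SUM(CASE WHEN Months = 'MAR' THEN Expenses END) AS [Mar],
       SUM(CASE WHEN Months = 'APR' THEN Expenses END) AS [Apr],
       SUM(CASE WHEN Months = 'MAY' THEN Expenses END) AS [May],
       SUM(CASE WHEN Months = 'JUN' THEN Expenses END) AS [Jun],
       SUM(CASE WHEN Months = 'JUL' THEN Expenses END) AS [Jul],
       SUM(CASE WHEN Months = 'AUG' THEN Expenses END) AS [Aug],
       SUM(CASE WHEN Months = 'SEP' THEN Expenses END) AS [Sep],
       SUM(CASE WHEN Months = 'OCT' THEN Expenses END) AS [Oct],
       SUM(CASE WHEN Months = 'NOV' THEN Expenses END) AS [Nov],
       SUM(CASE WHEN Months = 'DEC' THEN Expenses END) AS [Dec]
FROM TestData
GROUP BY ProjectID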
I am having a difficult time trying to generate a result set that collapses multiple rows into one. The problem is that some of the columns I am trying to SUM end up doubled in value.
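(A common cause is joining to a one-to-many table before aggregating, so each value is summed once per matching row. A sketch of the usual fix, with hypothetical table and column names: aggregate each side in a derived table first, then join the already-collapsed results.)

-- SUM over the raw join doubles Orders.Amount whenever an order has 2 payments;
-- pre-aggregating each side avoids that:
SELECT o.CustomerID,
       o.OrderTotal,
       p.PaymentTotal
FROM (SELECT CustomerID, SUM(Amount) AS OrderTotal
      FROM Orders
      GROUP BY CustomerID) AS o
JOIN (SELECT CustomerID, SUM(Amount) AS PaymentTotal
      FROM Payments
      GROUP BY CustomerID) AS p
  ON p.CustomerID = o.CustomerID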
I have created a report where all the Z-Index values are set correctly, but I do not see the toggling effect that can be seen in the AdventureWorks SalesOrder sample report that ships with SSRS. What am I missing?
I have 3 reports that run on a shared schedule to generate historic snapshots. This had been working for months and months, but out of nowhere it changed.
So now whenever the schedule runs, it generates the historic snapshot for 1 report but NOT for the remaining 2. As a strange twist, the report that succeeds changes from time to time...
We will soon be going to a new ERP system using MS SQL 2000. I'm looking into possible backup strategies. The database size will be about 50 Gigs.
We have a SAN, which the new ERP system will be backed up on. I know a little about a SAN, but I need to make decisions about the types of backups to make. We can have 10 active snapshots, so I could use snapshots during the day as a point in time backup. I could also use the SAN snapshot instead of using a full backup. Would it be safe to use the SAN snapshots instead of the normal SQL Server backups? I’m not sure how long it will take to do a full SQL Server backup because the server is still in testing mode. I’m not getting the big picture about when to use snapshots and when to use regular SQL Server backups.
Is it possible in Windows 2000 Server to stop the transactions and flush the buffers for a few seconds to do a snapshot? From what I understand, the database could be left in a suspect state if you use snapshots to restore a database.
I think I've read some conflicting advice in BOL. Maybe someone can clarify it for me.
Under "Database Mirroring and Database Snapshots" it says: "You can take advantage of a mirror database that you are maintaining for availability purposes to offload reporting. To use a mirror database for reporting, you can create a database snapshots on the mirror database and direct client connection requests to the most recent snapshot"
To my mind, to enjoy a noteworthy performance gain, that mirror database would need to be on another server.
But then you read under "Database Snapshots": "Multiple snapshots can exist on a source database and always reside on the same server instance as the database."
Hi. What would be the quickest way to create a backup-and-revert program for a SQL (2000) database?
- Can you create a transaction on a database, regardless of the connections, and then roll it all back via an external program?
- Could you monitor the changes with Profiler and then reverse those?
- If desperate, could you back up the db from a tool and then restore it? (Too slow to be practical?)
Not sure how to do this, so any offers would be appreciated. Ta.
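(For the third option, the plain T-SQL form on SQL Server 2000 would be something like this; the database name and backup path are hypothetical.)

-- Take the "savepoint" backup before making changes:
BACKUP DATABASE MyDb TO DISK = 'C:\Backups\MyDb_revert.bak' WITH INIT
-- ... make the changes ...
-- Revert everything by restoring over the database:
RESTORE DATABASE MyDb FROM DISK = 'C:\Backups\MyDb_revert.bak' WITH REPLACE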
If multiple snapshots for the same report attempt to generate at the same second, is there something in place to prevent them from conflicting with each other?
Our SSRS 2005 application will use a console app that we are writing to change the default parameters on each report and call the CreateReportHistorySnapshot method. This application will be multi-threaded, so there is a possibility that multiple versions of the same report with different parameters will attempt to run at the same time. We need to be sure that these snapshots will not conflict and return the appropriate SnapshotHistoryID for that report run.
From looking at the History table in the ReportServer database, it looks like there are unique IDs and other keys that would prevent duplicates, but what if the requests come in at the same time? Is SSRS 2005 smart enough to delay for a second so that there is not a duplicate? I ask because the way you reference these snapshots in your URL is with the snapshot time in UTC format, which only goes out to full seconds (2006-05-02T00:00:00).
Sorry for the length, but I could not think of how to condense this.
At the moment we are using SSRS to report off an Oracle database. If we take a snapshot of the data in the Oracle database and put it in SQL Server, should we expect an increase in performance/response time?
And is it a good idea to keep the snapshot on the same server where SSRS is installed?
I want to share a doubt about SSRS and the report server: if I use snapshots or caching for my reports on the report server, will it improve performance? And if it does improve performance, does it place any extra burden on the SQL Server database, i.e. will it degrade the database's performance? Please give me a response to my questions.
I have a huge db of 80 GB and am trying to configure merge replication. The initial snapshot application is failing due to some schema issues. I have made the necessary changes to avoid them, but if I start the merge agent it will re-initialize the snapshot, and reapplying the whole snapshot will take at least 10-15 hrs.
Is there any way to resume the snapshot from the point of failure, avoiding reapplying all the data that has already been transferred to the subscriber?
Does someone know whether doing a reindex on a clustered or non-clustered index causes the snapshot file to grow? In other words, is the data that makes up the snapshot copied from the source to the snapshot database? If a normal reindex is done on the underlying database, will it block users from accessing the snapshot? Any help would be appreciated.
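(One way to answer the growth question empirically is to watch the snapshot's sparse file while the reindex runs; size_on_disk_bytes reports the actual on-disk size. The snapshot database name is hypothetical.)

-- Actual space used by the snapshot's sparse file(s):
SELECT DB_NAME(database_id) AS snapshot_db,
       file_id,
       size_on_disk_bytes
FROM sys.dm_io_virtual_file_stats(DB_ID('MySnapshot'), NULL)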
I have set up my web synchronisation with very few problems until now. Initially I set the snapshots to be generated by the subscriber when they first synchronise; this worked perfectly every time until a bottleneck occurred after I reinitialised all the subscribers. When more than one subscriber initialises the snapshot agent, the snapshot jobs fail because there is already a job running for user distributor_admin. The agent then retries, etc., until all retries have failed. Ultimately only one of the snapshots is successfully created, and only after all other jobs have failed and run out of retries; it is then left alone to carry out its job.
Following this I decided to run with pre-generated snapshots instead, so that I can schedule the server load and avoid these bottlenecks. Using the same publication, I unchecked the option "Automatically define a partition and generate..." in the publication properties and proceeded to add each of the partitions, specifying the correct "Host_Name()" value etc. After this I selected one partition and clicked "Generate the selected snapshots now". The snapshot was duly created, but when synchronising, the snapshot is not collected and used. Weirdly, the merge process begins enumerating changes at the publisher, followed by downloading 3100 chunks of data, before receiving an error concerning the format of the message from the distributor.
If I check the option to allow auto generation, the synchronisation begins absolutely perfectly, but only once it has generated a snapshot (ignoring the one I created previously).
To add further weirdness, I created a small app to generate the dynamic partitions using RMO. I used the sample code straight from BOL, and it duly generated the dynamic snapshot job. I can then create a new Job class, passing it the JobServer and JobName; it gives me the correct job, but I cannot then call the Start() method because it keeps telling me the Job object is not created, even though it is clearly enabled as a job in SSMS and can be right-clicked and started from there.
Even partition jobs created from within SSMS appear with state "Creating" when I make an instance of them using the Job class.
What am I missing? Are my pre-generated snapshots not available to the subscriber because they seem to still be in the creating state? Is this a SQL Server configuration issue or have I setup the publication incorrectly?
Imports System.Collections
Imports Microsoft.SqlServer.Management.Common
Imports Microsoft.SqlServer.Management.Smo
Imports Microsoft.SqlServer.Replication

Public Class GeneratorForm ' (class name hypothetical; the original fragment omitted the opening line)

    Private Sub btnGenerate_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnGenerate.Click
        Dim hostName As String = txtSiteID.Text.Trim

        ' Define the server, database, and publication names.
        Dim publisherName As String = "WDU340"
        Dim publicationName As String = txtOrgID.Text.Trim
        Dim publicationDbName As String = publicationName
        Dim distributorName As String = publisherName

        Dim publication As MergePublication
        Dim partition As MergePartition
        Dim snapshotAgentJob As MergeDynamicSnapshotJob = Nothing
        Dim schedule As ReplicationAgentSchedule ' Note: never assigned, so Nothing is passed below.

        ' Create a connection to the Distributor to start the Snapshot Agent.
        Dim distributorConn As ServerConnection = New ServerConnection(distributorName)

        Try
            ' Connect to the Publisher.
            distributorConn.Connect()

            ' Set the required properties for the publication.
            publication = New MergePublication()
            publication.ConnectionContext = distributorConn
            publication.Name = publicationName
            publication.DatabaseName = publicationDbName

            ' If we can't get the properties for this merge publication,
            ' then throw an application exception.
            If (publication.LoadProperties() Or publication.SnapshotAvailable) Then
                ' Set a weekly schedule for the filtered data snapshot.
                If RunJob(publication, snapshotAgentJob, distributorConn, hostName) = False Then
                    ' Set the value of Hostname that defines the data partition.
                    partition = New MergePartition()
                    partition.DynamicFilterHostName = hostName
                    snapshotAgentJob = New MergeDynamicSnapshotJob()
                    snapshotAgentJob.DynamicFilterHostName = hostName

                    ' Create the partition for the publication with the defined schedule.
                    publication.AddMergePartition(partition)
                    publication.AddMergeDynamicSnapshotJobForLateBoundComClients(snapshotAgentJob, schedule)

                    RunJob(publication, snapshotAgentJob, distributorConn, hostName)
                End If
            Else
                Throw New ApplicationException(String.Format( _
                    "Settings could not be retrieved for the publication, " + _
                    "or the initial snapshot has not been generated. " + _
                    "Ensure that the publication {0} exists on {1} and " + _
                    "that the Snapshot Agent has run successfully.", _
                    publicationName, publisherName))
            End If
        Catch ex As Exception
            ' Do error handling here.
            MessageBox.Show(String.Format( _
                "The partition for '{0}' in the {1} publication could not be created.", _
                hostName, publicationName) & ": " & ex.Message)
        Finally
            If distributorConn.IsOpen Then
                distributorConn.Disconnect()
            End If
        End Try
    End Sub

    Private Function RunJob(ByVal publication As MergePublication, ByVal snapshotAgentJob As MergeDynamicSnapshotJob, ByVal distributorConn As ServerConnection, ByVal hostName As String) As Boolean
        Dim jobs As ArrayList
        Dim iJob As Integer
        Dim bExists As Boolean = False

        jobs = publication.EnumMergeDynamicSnapshotJobs()
        For iJob = 0 To jobs.Count - 1
            snapshotAgentJob = DirectCast(jobs(iJob), MergeDynamicSnapshotJob)
            If snapshotAgentJob.DynamicFilterHostName = hostName Then
                ' Run the job.
                bExists = True
                Dim server As New Server(distributorConn)
                Dim job As New Job(server.JobServer, snapshotAgentJob.Name)
                MessageBox.Show(String.Format("About to run the dynamic snapshot job: {0} with status: {1}", job.Name, job.State.ToString))
                Try
                    job.Start()
                Catch ex As Exception
                    MessageBox.Show(ex.ToString)
                    Throw
                End Try
                Exit For
            End If
        Next
        Return bExists ' Report whether a matching dynamic snapshot job was found.
    End Function
End Class
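(A hedged guess at why Start() fails with "the Job object is not created": New Job(server.JobServer, name) only constructs an in-memory SMO object, which would need Create() to exist on the server. Since the dynamic snapshot job already exists, binding to it through the JobServer.Jobs collection avoids the problem. A sketch:)

' Bind to the existing server-side job by name instead of constructing a new one:
Dim smoServer As New Server(distributorConn)
Dim existingJob As Job = smoServer.JobServer.Jobs(snapshotAgentJob.Name)
If existingJob IsNot Nothing Then
    existingJob.Start()
End If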
I am using SQL 2005 merge replication and have pull subscribers on a low-bandwidth link.
I am compressing the snapshot into an alternate folder; files are not put into the default folder.
When I start a synchronisation, I would expect the cab file to be copied to the subscriber and then the files to be extracted locally at the subscriber in order to apply the snapshot.
However, what appears to be happening is that the files are being extracted from the cab file on the publisher (in a UNC specified directory) and then copied in their uncompressed form to the subscriber - resulting in an extremely slow snapshot application.
Any ideas what I am doing wrong? I have read about the options for using FTP to transfer snapshot files, but I am not clear whether I have to use FTP in order to transmit a compressed snapshot. I don't want to use FTP unless I need to.
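(For reference, a sketch of how the compressed, alternate-folder snapshot is typically configured on the publication; the publication name and UNC path are hypothetical, and changing these properties can force the snapshot to be regenerated.)

-- Put snapshot files in an alternate folder:
EXEC sp_changemergepublication
    @publication = N'MyPublication',
    @property = N'alt_snapshot_folder',
    @value = N'\\Publisher\AltSnapshots',
    @force_invalidate_snapshot = 1

-- Compress the snapshot in that folder into a .cab file:
EXEC sp_changemergepublication
    @publication = N'MyPublication',
    @property = N'compress_snapshot',
    @value = N'true',
    @force_invalidate_snapshot = 1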
Hi everybody, I'm quite new to SQL 2005 and I'm trying to understand some key concepts regarding replication. I need to develop an application with characteristics similar to the Sales Order Sample for Merge Replication: on the client side it should run with the Express edition of SQL Server, and synchronization should work only via the web. I tried to run the sample but got an exception in the CreateSubscription method on invoking publisherConn.Connect();
I am trying to create daily automatic snapshots by using the code below...
declare @MyDay varchar(20)
declare @query varchar(1000)
declare @DatabaseName varchar(128)
declare @snapshotName varchar(128)
declare @snapDataName varchar(128)
declare @snapFileName varchar(128)
declare @snapFilePath varchar(128)

set @MyDay = (select datename(weekday, getdate()))
print 'It is ' + @MyDay

set @DatabaseName = 'Cerritos_Net'
set @SnapDataName = 'Cerritos_Net_Data'
set @SnapshotName = 'Cerritos_Net_Snapshot' + '_' + @MyDay
set @SnapFilename = 'E:\Share\INCOMING\Snapshots\Daily\Cerritos_Net_Data' + '_' + @MyDay + '.ss'
print 'Snapshot name is ' + @SnapshotName

select * from sys.databases
where source_database_id = db_id(@DatabaseName) and name = @SnapshotName

if @@rowcount <> 0
begin
    set @query = 'Drop database ' + @SnapshotName
    print @query
    exec(@query)
end

set @query = 'Create database ' + @SnapshotName + ' on (Name = ''' + @SnapDataName + ', FileName="' + @SnapFilename + '") AS SNAPSHOT of ' + @DatabaseName + ';'
print @query
exec(@query)
But I keep getting this error:
It is Tuesday
Snapshot name is Cerritos_Net_Snapshot_Tuesday

(0 row(s) affected)
Create database Cerritos_Net_Snapshot_Tuesday on (Name = 'Cerritos_Net_Data, FileName="E:\Share\INCOMING\Snapshots\Daily\Cerritos_Net_Data_Tuesday.ss") AS SNAPSHOT of Cerritos_Net;
Msg 105, Level 15, State 1, Line 1
Unclosed quotation mark after the character string 'Cerritos_Net_Data, FileName="E:\Share\INCOMING\Snapshots\Daily\Cerritos_Net_Data_Tuesday.ss") AS SNAPSHOT of Cerritos_Net;'.
Msg 102, Level 15, State 1, Line 1
Incorrect syntax near 'Cerritos_Net_Data, FileName="E:\Share\INCOMING\Snapshots\Daily\Cerritos_Net_Data_Tuesday.ss") AS SNAPSHOT of Cerritos_Net;'.
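(The error points at the dynamic string itself: the single quote opened before the logical name is never closed, and the file name is wrapped in double quotes inside the literal. A sketch of a corrected final statement, leaving the rest of the script as-is:)

set @query = 'Create database ' + @SnapshotName
           + ' on (Name = ''' + @SnapDataName + ''','
           + ' FileName = ''' + @SnapFilename + ''')'
           + ' AS SNAPSHOT of ' + @DatabaseName + ';'
print @query
exec(@query)

-- This builds: Create database ... on (Name = 'Cerritos_Net_Data',
-- FileName = 'E:\...\Cerritos_Net_Data_Tuesday.ss') AS SNAPSHOT of Cerritos_Net;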
We are running a Mirrored instance of SQL Server 2005 SP2 (in High Performance mode) and using snapshots on the Mirror to provide a reporting solution.
The problem is that quite frequently the Database Snapshot starts going through a Recovery Process. This is causing large delays in the reporting process.
The sequence of events in the SQL logs is:
1. An event "Starting up Database Snapshotname".
2. A series of events "Analysis of Database Snapshotname is X% complete". This can take up to 2 minutes.
3. A series of events "Recovery of Database Snapshotname is X% complete". This recovery process can take up to 1 hr 45 minutes.
The database is running on new servers and there are no apparent disk problems.
Can anybody advise on why a snapshot would start a recovery process, and how to prevent this? Regards