Can anybody help me with the following? I want to pivot the following data:
Pcode  Year  Month  Mcode  Value
2      2008  March  EN10A  56349.1
2      2008  March  EN10B  1061.6
2      2008  March  EN10C  2.67
2      2008  March  EN10D  8370
2      2008  April  EN10A  819.31
2      2008  April  EN10B  245.09
2      2008  April  EN10C  33.38
2      2008  April  EN10D  2.31
After the pivot, the data should look like this:
Pcode  Year  Month  EN10A    EN10B   EN10C  EN10D
2      2008  March  56349.1  1061.6  2.67   8370
2      2008  April  819.31   245.09  33.38  2.31
Can we use the PIVOT function, or is there an easier way of doing this? Also, the Mcodes are dynamic: right now there are only four distinct Mcodes, but there may be more than four.
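A minimal sketch of a dynamic PIVOT that handles any number of Mcodes, assuming the data sits in a table called dbo.SourceData with the columns shown above (the table name is an assumption):

-- Build the column list from the distinct Mcode values, then pivot dynamically.
declare @cols nvarchar(max), @sql nvarchar(max)

set @cols = stuff(
    (select N',' + quotename(Mcode)
     from (select distinct Mcode from dbo.SourceData) as m
     order by Mcode
     for xml path('')), 1, 1, N'')

set @sql = N'select Pcode, [Year], [Month], ' + @cols + N'
from (select Pcode, [Year], [Month], Mcode, Value from dbo.SourceData) as d
pivot (sum(Value) for Mcode in (' + @cols + N')) as p'

exec sp_executesql @sql

Because the column list is built at run time, new Mcodes show up as new columns without changing the query.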
In the following code examples I used to learn PIVOT, I found an error for SUM. However, when this is run against the AdventureWorks db, it works fine. Notice it is using a table variable and not an actual table. What do I need to do to my db to get this to work?
For the Data Driven Subscription in SSRS we are using the following stored procedure
In Step 3 - Create a data-driven subscription
create procedure spRSGetReportSettings
(
    @ReportID as integer
) as
begin
    set nocount on

    declare
        @cols as nvarchar(max),
        @sql as nvarchar(max)

    -- Build a delimited, quoted column list from this report's parameter names.
    set @cols = stuff(
        (select N',' + quotename(y) as [text()]
         from (select ParameterName as y from ReportSettings where ReportID = @ReportID) as Y
         order by y
         for xml path('')), 1, 1, N'');

    -- Pivot the ParameterName/ParameterValue pairs into one row per report.
    set @sql = N'select * from
    (select reportid, parametername, parametervalue from ReportSettings where reportid = ' + cast(@ReportID as varchar(5)) + N') as D
    pivot(min(parametervalue) for parametername in (' + @cols + N')) as p'

    exec sp_executesql @sql
end
Basically the idea is to maintain a single report parameter setting table for multiple reports.
Structure of the table is as given below
ReportID, ParameterName, ParameterValue.
Using PIVOT we can generate the ParameterName/ParameterValue combinations for each report. This stored procedure works fine in query editors (Management Studio), but in SSRS it is not returning any results.
In Step 4 - Create a data-driven subscription, under the "Get the value from the database" drop-down, I am not getting any database columns.
I've done both a CTE and a pivot, but never together. I did see a few examples out there and followed them, but mine isn't working. I have four 'tables' within the CTE, and then my final select statement joins all of them and attempts to pivot. The statement that errors is:
SELECT LocationType_Name, FiscalYear_Name, FiscalPeriodOfYear, FiscalWeekOfPeriod,
       FiscalWeekOfYear, His.Week_Idx AS 'Week_Idx',
       His.LocationType_Code AS 'LocationType_Code',
       His.Scenario_Idx AS 'Scenario_Idx', Scenario_Code,
       PlannedSalesAmtUSD AS PlannedTY, ActualSalesAmtUSD
FROM dbo.FactInventoryHistory HIS
INNER JOIN DimLocation LOC
    ON His.LocationType_Code = Loc.LocationType_Code
INNER JOIN DimDate Date
    ON His.Week_idx = Date.Date_Idx
INNER JOIN DimScenario Scenario
    ON His.Scenario_Idx = Scenario.Scenario_Idx
WHERE Scenario_Code = 'FY08P'
GROUP BY LocationType_Name, FiscalYear_Name, FiscalPeriodOfYear, FiscalWeekOfPeriod,
         FiscalWeekOfYear, His.Week_Idx, His.LocationType_Code, His.Scenario_Idx,
         Scenario_Code, PlannedSalesAmtUSD, ActualSalesAmtUSD
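For what it's worth, a minimal sketch of a CTE feeding a PIVOT, using the table and column names from the query above and assuming FiscalPeriodOfYear takes the values 1-3 here (that value list is an assumption):

WITH SalesData AS
(
    -- Only the grouping column, the spreading column, and the value column.
    SELECT LOC.LocationType_Name,
           D.FiscalPeriodOfYear,
           HIS.PlannedSalesAmtUSD
    FROM dbo.FactInventoryHistory HIS
    INNER JOIN DimLocation LOC
        ON HIS.LocationType_Code = LOC.LocationType_Code
    INNER JOIN DimDate D
        ON HIS.Week_Idx = D.Date_Idx
    INNER JOIN DimScenario S
        ON HIS.Scenario_Idx = S.Scenario_Idx
    WHERE S.Scenario_Code = 'FY08P'
)
SELECT LocationType_Name, [1], [2], [3]
FROM SalesData
PIVOT (SUM(PlannedSalesAmtUSD)
       FOR FiscalPeriodOfYear IN ([1], [2], [3])) AS P;

The key detail is that the PIVOT sees only the columns the CTE exposes, so any column you don't want grouped by must be left out of the CTE's SELECT list.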
id | value
-----------
1  | abc
1  | def
1  | ghi
2  | def
2  | jkl
I want these values to go horizontally into another table matched on id, to look like this:
id | value
----------------
1  | abc def ghi
2  | def jkl
I built a cursor to parse through it, but it was taking forever (there are 185,000 records in the reference table). Any ideas on the fastest way to perform this function?
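A set-based alternative to the cursor, sketched here with the FOR XML PATH concatenation trick and a hypothetical reference table dbo.Ref(id, value):

-- One row per id, with all of its values concatenated into one string.
SELECT r.id,
       STUFF((SELECT ' ' + r2.value
              FROM dbo.Ref r2
              WHERE r2.id = r.id
              FOR XML PATH('')), 1, 1, '') AS value
FROM dbo.Ref r
GROUP BY r.id;

The correlated subquery runs once per id rather than once per row, which is typically far faster than a cursor at this volume.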
Good day. Please help me. I'm working right now on a case study that will retrieve/produce a simple report in SQL. My problem is that I don't know how to pivot queries like in Access. Please help me. Thanks.
I want to pull data from an XLS file and put it in a table.
The source file looks like this:
Month                  Mar-07        Apr-07        May-07
Non-Accruals           $304,732,515  $307,051,978  $308,274,921
REO                    $115,072,839  $123,957,394  $149,744,174
Home Equity Total NPA  $419,805,354  $431,009,372  $458,019,095
The destination table should look like the layout below. The date (only the first row) in the XLS file is in the following format: DATE(YEAR($O1)-1,MONTH($O1)-1,DAY($O1)). From the second row onwards the data is of Money type. I expect I need to convert the date row into a SQL datetime type too; otherwise it comes through as NULL.
Month   Non-Accruals  REO           Home Equity Total NPA
Mar-07  $304,732,515  $115,072,839  $419,805,354
Apr-07  $307,051,978  $123,957,394  $431,009,372
May-07  $308,274,921  $149,744,174  $458,019,095
Can I create an SSIS package to do this job? If so, how? I'm not sure which transformation I should use, or how. I hope someone can help me.
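One possible approach is an SSIS package that lands the sheet as-is into a staging table and then transposes it in T-SQL; a sketch assuming a staging table dbo.NpaStaging with one row per measure (all names here are assumptions):

-- Staging shape: Measure varchar(50), [Mar-07] money, [Apr-07] money, [May-07] money.
-- UNPIVOT turns the month columns into rows, then PIVOT spreads the measures
-- into columns, giving one row per month.
SELECT p.[Month],
       p.[Non-Accruals],
       p.[REO],
       p.[Home Equity Total NPA]
FROM (SELECT Measure, [Month], Amount
      FROM dbo.NpaStaging
      UNPIVOT (Amount FOR [Month] IN ([Mar-07], [Apr-07], [May-07])) AS u
     ) AS s
PIVOT (MIN(Amount) FOR Measure IN ([Non-Accruals], [REO], [Home Equity Total NPA])) AS p;

The Month values would still need a CONVERT to datetime on the way into the destination table, per the format note above.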
I’d like to get some data which includes month values bound to a data grid. The data is stored in a table like so:
Measure  Month  Value
A        June   10.00
A        July   9.00
A        Aug    11.00
B        Jun    100.00
B        Jul    98.00
B        Aug    99.00
C        Jun    0.75
C        Jul    0.8
C        Aug    0.91
I need to report the data like this:

Measure  Jun  Jul  August
A        10   9    11
B        100  98   99
C        75%  80%  91%
This was simple in classic ASP. Just use two recordsets, create a new table cell for each month using the first recordset then use the second recordset for each row.
But is there a way to “Pivot” or rotate the data so I can use the DataGrid? It only seems possible if each month has its own column field in the table, adding a new column each month.
I can restructure the database, if needed.
I thought about creating a Cube, but that seems to have its own limitations. For example, what if I want to add a column for quarter and year totals? I don’t think it’s possible to show multiple planes like that in a query of a cube.
It seems that this might be resolved in the presentation layer or the data layer. Any Suggestions?
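For the data-layer option, a minimal PIVOT sketch that gives each month its own column for the DataGrid, assuming SQL Server 2005 or later for the PIVOT operator and a table dbo.Measures(Measure, [Month], Value) (names are assumptions):

-- LEFT(..., 3) just normalizes the 'June' vs 'Jun' inconsistency in the data.
SELECT Measure, [Jun], [Jul], [Aug]
FROM (SELECT Measure, LEFT([Month], 3) AS [Month], Value
      FROM dbo.Measures) AS d
PIVOT (SUM(Value) FOR [Month] IN ([Jun], [Jul], [Aug])) AS p;

Formatting measure C's fractions as percentages would stay in the presentation layer.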
I need some help here in pivoting a table. I have a table with the following columns, and here is the sample data. Assume that the table will hold just one week's worth of data.
I can write a stored proc using a cursor, but I just want to learn how to do it without using cursors.
I have a query in which I would like to pivot the results. I presently have my results displaying something like this:

OrderNumber  Product  OrderQuantity
-----------  -------  -------------
0608         Prod1    3
0608         Prod2    12
0608         Prod3    2

What I am after is for the results to display something like this:

OrderNumber  Prod1  Prod2  Prod3
-----------  -----  -----  -----
0608         3      12     2

This is using SQL Server ver 8.0.
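Since SQL Server 8.0 (2000) predates the PIVOT operator, a CASE-based crosstab is the usual approach; a minimal sketch assuming the results come from a table dbo.Orders (the table name is an assumption):

-- One output row per order; each product gets its own conditional sum.
SELECT OrderNumber,
       SUM(CASE WHEN Product = 'Prod1' THEN OrderQuantity ELSE 0 END) AS Prod1,
       SUM(CASE WHEN Product = 'Prod2' THEN OrderQuantity ELSE 0 END) AS Prod2,
       SUM(CASE WHEN Product = 'Prod3' THEN OrderQuantity ELSE 0 END) AS Prod3
FROM dbo.Orders
GROUP BY OrderNumber;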
I'm trying to use the GROUP BY CUBE aggregation. Currently I have this working as such:
SELECT ISNULL(CONVERT(VARCHAR, Date), 'Grand Total') Date
      ,ISNULL([1 Attempt], 0) [1 Attempt]
      ,ISNULL([2 Attempts], 0) AS [2 Attempts]
      ,ISNULL([3 Attempts], 0) AS [3 Attempts]
      ,ISNULL([4 Or More], 0) AS [4 Or More]
[Code] .....
Basically this is used to work similarly to a pivot table in Excel. My data will look as follows:
Date        1 Attempt  2 Attempts  3 Attempts  4 Or More  Total
2012-09-04  239        68          2           8          317
The problem I'm having is the Total column. Although this is summing the line values correctly, the total should be based on the sum, not the count, of attempts, i.e. 1 x 239 + 2 x 68 + 3 x 2 + 4 x 8 = 413 rather than 317.
If I change the FROM select clause to use SUM instead of COUNT
SELECT CONVERT(DATE, [Date]) Date
      ,ISNULL(AttemptsFlag, 'Total') AS Attempt
      ,SUM(NoOfTimes) AS Totals
FROM XXXXX
GROUP BY CUBE([Date], AttemptsFlag)
It will return the correct Total amount but not the right numbers for the Attempt groupings...
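If the grouping labels can be tied back to a numeric attempt count, one hedged sketch is to carry both the row-style total and a weighted total through the CUBE; this assumes a numeric AttemptNo column (1-4) exists, which is an assumption:

SELECT CONVERT(DATE, [Date]) AS [Date],
       ISNULL(AttemptsFlag, 'Total') AS Attempt,
       SUM(NoOfTimes) AS Totals,                    -- count-style total
       SUM(AttemptNo * NoOfTimes) AS WeightedTotal  -- 1 x 239 + 2 x 68 + ...
FROM XXXXX
GROUP BY CUBE([Date], AttemptsFlag);

A "4 Or More" bucket would still need a rule for which weight to apply, since the exact attempt count is not recoverable from the label.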
I am wondering whether it is possible to use SSIS to sample a data set into a training set and a test set and feed them directly into my data mining models, without saving them somewhere, as they occupy too much space. I really need guidance on this.
I have used both DataReaders and DataAdapters (with DataSets) in the projects that I have worked on. I am trying to get some clarification on when I should be using which one. I think I am doing this correctly, but I want to be sure I am developing good habits.
As the name suggests, a DataReader seems to be for reading data only. I have read that the DataAdapter and DataSet are for a disconnected architecture, or at least that they can be used for that type of setup. I have been using the DataAdapter and DataSets when writing to a database and the DataReader when reading from a database.
Is this how these should be used? Is the DataReader the best choice for reading data? Am I doing this the optimal way from a performance standpoint?
Thanks in advance.
We have already integrated different clients' data into MDS with the MS Excel plugin; now we want to push updated or newly added records back to the source database. Is it possible to do this using MDS? Is there any background sync process which automatically syncs data to and from the subscriber and MDS?
When I enter over 4,000 characters into any ntext field in my SQL Server 2005 database (directly in the database and through the application), I get an error saying that the data could not be updated because string or binary data would be truncated. Has anyone ever seen this? I cannot figure out what is causing it; ntext should be able to hold a lot more data than this...
I have a requirement to implement CDC for 50+ tables, to feed incremental data changes into a warehouse/reporting database rather than exporting the whole table data. The largest table has more than half a billion records.
The warehouse uses a daily copy of the OLTP db (daily DB refresh). How can I accomplish this? Is there a downside to implementing CDC just for the sake of taking incremental changes on the tables?
Is there any performance impact if we enable CDC on the OLTP db?
Can we make use of the CDC tables in an environment where we do a daily db refresh, so that the queries don't hit the OLTP database?
What is the best way to implement CDC to take incremental changes for reporting?
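For reference, a minimal sketch of enabling CDC on a database and one table (the database, schema, and table names are hypothetical; CDC relies on the SQL Server Agent capture/cleanup jobs it creates):

USE MyOltpDb;
GO
-- Enable CDC at the database level (creates the cdc schema and metadata).
EXEC sys.sp_cdc_enable_db;
GO
-- Track one table; repeat (or script) per table for the 50+ tables.
EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Orders',
     @role_name     = NULL;  -- NULL = no gating role
GO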
Hi,
This is driving me nuts. I have a table that stores notes regarding an operation in an IMAGE data type field in MS SQL Server 2000. I can read and write with no problem from Access using the StrConv function, and I can update the field correctly in T-SQL using:

DECLARE @ptrval varbinary(16)
SELECT @ptrval = TEXTPTR(BITS_data)
FROM mytable_BINARY WHERE ID = 'RB215'
WRITETEXT OPERATION_BINARY.BITS @ptrval 'My notes for this operation'

However, I just cannot seem to convert the information back to text once it is stored, using T-SQL. My selects keep returning binary data.
How do I do this? Thanks for your help.
SD
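For what it's worth, a hedged sketch of reading the bytes back as text in SQL Server 2000 via a double cast (column and table names are taken from the post above; note this truncates at 8,000 bytes):

-- image -> varbinary -> varchar recovers the text that WRITETEXT stored.
SELECT CAST(CAST(BITS AS varbinary(8000)) AS varchar(8000)) AS NotesText
FROM OPERATION_BINARY
WHERE ID = 'RB215';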
I'm using a Script Component to load data into an Oracle DB because of a poor performance issue. Now I have found it is missing some data during the transmission. Please see the screenshot below:
[DTS.Pipeline] Error: "component "Excel Source" (1)" failed validation and returned validation status "VS_NEEDSNEWMETADATA".
and also this:
[Excel Source [1]] Warning: The external metadata column collection is out of synchronization with the data source columns. The column "Fiscal Week" needs to be updated in the external metadata column collection. The column "Fiscal Year" needs to be updated in the external metadata column collection. The column "1st level" needs to be added to the external metadata column collection. The column "2nd level" needs to be added to the external metadata column collection. The column "3rd level" needs to be added to the external metadata column collection. The "external metadata column "1st Level" (16745)" needs to be removed from the external metadata column collection. The "external metadata column "3rd Level" (16609)" needs to be removed from the external metadata column collection. The "external metadata column "2nd Level" (16272)" needs to be removed from the external metadata column collection.
I tried going to Data Flow -> Excel Connection -> Advanced Editor for Excel Source -> Input and Output Properties and tried to refresh the affected columns. It seems that somehow the three columns are not read in from the source file, and Fiscal Year and Fiscal Week are not set up properly in my data destination? Has anyone faced such errors before?
When I execute the stored procedure below, I get the error "Arithmetic overflow error converting expression to data type int".
USE [FileSharing]
GO
/****** Object: StoredProcedure [dbo].[xlaAFSsp_reports] Script Date: 24.07.2015 17:04:10 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
[Code] .....
Msg 8115, Level 16, State 2, Procedure xlaAFSsp_reports, Line 25
Arithmetic overflow error converting expression to data type int.
The statement has been terminated.
(1 row(s) affected)
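Without the procedure body it is hard to be sure, but error 8115 against int typically means an expression exceeded the int range; a sketch of the usual fix, against a hypothetical dbo.Sales table, is to widen an operand to bigint:

-- SUM over an int column returns int and raises error 8115 past 2,147,483,647:
SELECT SUM(Quantity) FROM dbo.Sales;
-- Casting the operand to bigint makes the aggregate bigint:
SELECT SUM(CAST(Quantity AS bigint)) FROM dbo.Sales;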
Is there a step-by-step paper to get there? Here is what I need to consider: I will have many customers that will need their own set of records and access pages "branded for their company", and each customer will have many clients. I am hosting this application on a Windows 2003 server with SQL Server 2005 Enterprise.
I am using Windows authentication. I have created a username in Windows, then I added the Windows user in SQL Management Studio under Security, granted "DB Read" and "DB Write", and did the same under the database's Security tab; still, authentication from the web fails. I must be missing a step or two?
I expect to set up a username for each database as I set up new customers.
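For reference, a minimal sketch of the same grants in T-SQL (the login and database names are hypothetical):

-- Create a Windows login at the server level, map it into the database,
-- and grant read/write via the fixed database roles.
CREATE LOGIN [MYSERVER\WebAppUser] FROM WINDOWS;
GO
USE CustomerDb;
GO
CREATE USER [MYSERVER\WebAppUser] FOR LOGIN [MYSERVER\WebAppUser];
EXEC sp_addrolemember N'db_datareader', N'MYSERVER\WebAppUser';
EXEC sp_addrolemember N'db_datawriter', N'MYSERVER\WebAppUser';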
RE: XML Data source .. Expression? Variable? Connection? Error: unable to read the XML data.
I want my XML data source to be an expression, as I will be looping through a directory of XML files.
I don't see an expression property or a connection property on the XML Source.
I tried setting the XMLData property to @[User::filename], but that results in:
Information: 0x40043006 at Load XML Files, DTS.Pipeline: Prepare for Execute phase is beginning.
Error: 0xC02090D0 at Load XML Files, XML Source [108]: The component "XML Source" (108) was unable to read the XML data.
Error: 0xC0047019 at Load XML Files, DTS.Pipeline: component "XML Source" (108) failed the prepare phase and returned error code 0xC02090D0.
Information: 0x4004300B at Load XML Files, DTS.Pipeline: "component "OLE DB Destination" (341)" wrote 0 rows.
Task failed: Load XML Files
Information: 0xC002F30E at Bad, File System Task: File or directory "d:jcpxmlLoadjcp2.xml.bad" was deleted.
Warning: 0x80019002 at Package: The Execution method succeeded, but the number of errors raised (2) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
SSIS package "Package.dtsx" finished: Failure.
The program '[3312] Package.dtsx: DTS' has exited with code 0 (0x0).
I set up this package to import data from a SharePoint list into a SQL Server table. The primary key of my SQL table is mapped to the Title column of my SharePoint list. There is a possibility that duplicate values will be entered in the Title field of the SharePoint list, so when importing data into my table via SSIS, my package always errors out when it comes across duplicate values. How have others managed data integrity when importing from a SharePoint list with the Title column mapped to the primary key of a table?
"pRecordSet" is an ADO recordset. The database column "MyColumn" is of type "decimal(19,10)".
The most important question for me is whether the regional settings of the database server or the regional settings of the client PC are considered during the conversion from the string to the decimal value. For example, in standard French regional settings the "." would not be recognized as a decimal separator.
I am also wondering whether the language of the database instance in which this data is saved, or any other settings of that instance, are considered during this conversion.
So my general question is: Does anybody know exactly what rules apply during the above mentioned conversion?
I have a question about how to handle structural data model changes in a data source of PowerPivot. Suppose I'm developing a star model in SQL Server and occasionally a data type changes, or a field in a table is renamed. It seems to me that PowerPivot does not handle this as gracefully as Analysis Services multidimensional (mostly) does: I received an error because of a wrong field name, and no error at all when a data type changed. Is this common, or am I doing something wrong here? Does this mean that every time the data model changes the PowerPivot workbook should be recreated? Or am I missing the clue here?
I have to extract, daily, a list of contacts on an Exchange server into a table in our EDW on SQL Server 2005. Is it possible to get the information directly from a data flow, or will I have to develop a script task?