Business Intelligence – Oracle

Archive for the ‘EPM’ Category

Oracle BI EE 10.1.3.4.1 – Running Hyperion Essbase Data Loads/Calculations from BI EE – Using JAPI and JSPs

Posted by Venkatakrishnan J on April 26, 2009

As consultants, one of the major hurdles we face with customers on a daily basis is that however many features a BI tool offers, roughly 10 to 30% of the requirements (especially on big implementations) need some kind of custom work (sometimes this can even go up to 60%, depending on the project). For example, last week a customer wanted to know of a method for kick-starting Essbase allocations/calculations/data loads directly from BI EE. Since they are using the BI EE/Essbase connectivity, they wanted to give some users the ability to trigger the calculations directly from BI EE. Unfortunately, there is currently no out-of-the-box way of doing this, but with some amount of work the integration can be achieved. Today, let's look at how to go about achieving this integration. This will serve as a precursor to another blog series in which I will walk you through some complex integration techniques between BI EE and Essbase.

The basic aim is to have a link in BI EE; clicking it should run a calculation in Essbase and then refresh a report to show the newly calculated data. The first step is to create a simple JSP page that serves as a placeholder for firing the Essbase calculation. This JSP page in turn uses the Essbase Java API (JAPI) to fire the calculation. In my case, I used JDeveloper 10.1.3.2 to create the JSP page.

In order to create the JSP page, create a sample project and package called AllocEssbase.

       

Create a custom JSP page within this application. Include the Essbase jar files Ess_japi.jar and Ess_es_server.jar (the two main jar files that make up the Essbase JAPI) in the project classpath.

       

Now, create the JSP using the code below. The code basically executes a calculation script called CalcAll.

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<%@ page contentType="text/html;charset=windows-1252"%>
<%@ page import="java.io.*" %>
<%@ page import="com.essbase.api.base.*" %>
<%@ page import="com.essbase.api.dataquery.*" %>
<%@ page import="com.essbase.api.session.*" %>
<%@ page import="com.essbase.api.datasource.*" %>
<%@ page import="com.essbase.api.domain.*" %>
<%@ page import="com.essbase.api.metadata.*" %>
<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=windows-1252"/>
    <title>AllocEssbase</title>
  </head>
  <body>
  <font size="5"><%="Essbase Allocation Process Started" %></font>
<%
        // Connection details. In a real deployment, read these from a
        // configuration file instead of hardcoding them in the page.
        String s_userName = "admin";
        String s_password = "password";
        String s_olapSvrName = "localhost";
        String s_provider = "http://localhost:13080/aps/JAPI";

        IEssbase ess = null;
        IEssOlapServer olapSvr = null;
        try
        {
            // Sign on through Provider Services and connect to the Essbase server.
            ess = IEssbase.Home.create(IEssbase.JAPI_VERSION);
            IEssDomain dom = ess.signOn(s_userName, s_password, false, null, s_provider);
            olapSvr = (IEssOlapServer)dom.getOlapServer(s_olapSvrName);
            olapSvr.connect();

            // Run the CalcAll calculation script against the Global.Global cube.
            IEssCube cube = olapSvr.getApplication("Global").getCube("Global");
            cube.calculate(false,"CalcAll");

            // Alternatives: a server-side data load or a MaxL statement.
            //String maxLstat = "import database 'Global'.'Global' data connect as 'global' identified by 'global' using server rules_file 'UnitsLd' on error abort;";
            //cube.loadData(IEssOlapFileObject.TYPE_RULES,"UnitsLd",0,"SH",false,"global","global");
            //cube.beginDataload("UnitsLd",IEssOlapFileObject.TYPE_RULES,"global","global",0);
            //IEssMaxlSession maxLsess = (IEssMaxlSession)dom.getOlapServer(s_olapSvrName);
            //maxLsess.execute(maxLstat);
        } catch (EssException x) {
            System.out.println("ERROR: " + x.getMessage());
        } finally {
            // Always release the server connection and the JAPI session.
            try {
                if (olapSvr != null && olapSvr.isConnected()) olapSvr.disconnect();
                if (ess != null && ess.isSignedOn()) ess.signOff();
            } catch (EssException x) { /* ignore cleanup failures */ }
        }
%>
  <font size="5"><%="Essbase Allocation Process Ended" %></font>
  </body>
</html>
<%
    // Send the user back to BI EE once the calculation finishes. This works
    // only while the response buffer has not yet been flushed.
    response.sendRedirect("http://localhost:9704/analytics");
%>

       

As you see, this is a very basic JSP which runs a calculation script. You can use the same approach for data loads or just about anything else in Essbase. As a next step, execute the JSP from JDeveloper itself to be sure that it works.
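For instance, to turn the same page into a data-load trigger, the cube.calculate call can be swapped for the data-load or MaxL variants that appear commented out in the code above. A minimal sketch of the relevant scriptlet lines (same try block and Global cube as above, and assuming the UnitsLd server rules file and the 'global' schema credentials shown earlier):

// Variant 1: server-side data load through a rules file
// (arguments exactly as in the commented-out line above).
cube.loadData(IEssOlapFileObject.TYPE_RULES, "UnitsLd", 0, "SH", false, "global", "global");

// Variant 2: the equivalent MaxL import, executed through a MaxL session.
IEssMaxlSession maxLsess = (IEssMaxlSession) dom.getOlapServer(s_olapSvrName);
maxLsess.execute("import database 'Global'.'Global' data "
    + "connect as 'global' identified by 'global' "
    + "using server rules_file 'UnitsLd' on error abort;");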

       

This JSP would then have to be deployed on the OC4J (or application server) of BI EE. Since the JSP uses a couple of external jar files, the only way to deploy it is to bundle it as a WAR file. In order to do that, create a new deployment profile and include both the jars.

       

       

Deploy the war file in the OC4J of BI EE.

       

       

Now you should be able to access the JSP using the URL given below.


http://localhost:9704/AllocEssbase/AllocEssbase.jsp

Embed this URL in the BI EE dashboards.
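One simple way of doing this (a hypothetical example, not from the original post) is a dashboard Text object with the Contains HTML Markup option checked, holding a plain link to the page:

<a href="http://localhost:9704/AllocEssbase/AllocEssbase.jsp">Run Essbase Allocation</a>

Clicking the link runs the calculation and, thanks to the redirect at the end of the JSP, drops the user back into BI EE afterwards.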

       

This should get you started. There are lots of different applications that one can put this to use for, like kicking off Planning loads, ODI integration, BI Scheduler integration etc. I will cover all of these use cases in detail in the coming blog entries.
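As a teaser, one natural extension (again hypothetical, not part of the original post) is to let the dashboard pass the calculation script name as a request parameter, so that a single page can serve several calculations. The scriptlet change would look something like this, with AllocQtr and FcstLoad being made-up script names:

// Read the calc script name from the URL,
// e.g. .../AllocEssbase.jsp?calc=CalcAll
String calcName = request.getParameter("calc");

// Only allow known scripts to run; never pass raw user input to Essbase.
if (calcName == null || !calcName.matches("CalcAll|AllocQtr|FcstLoad")) {
    calcName = "CalcAll"; // fall back to the default script
}
cube.calculate(false, calcName);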

Posted in All Posts, EPM, Hyperion Essbase, OBI EE Plus | 1 Comment »

Oracle BI EE 10.1.3.4 and Essbase Connectivity – Understanding Aggregations – Part 4

Posted by Venkatakrishnan J on April 13, 2009

Today I will summarize the observations that I made over the last couple of blog entries. As a general rule, while designing BI EE reports with Essbase as the data source, it is important to consider the 2 points mentioned below.

1. The physical layer aggregation and the BMM layer aggregation together determine how the MDX is generated.
2. Neither of them should be looked at, or changed, in isolation without considering the end-to-end impact.

Let's look at the results of each combination one by one.

Physical Layer: Aggr_External and BMM: Aggr_External

1. If a report requests data that exists directly in Essbase, the MDX generated goes only against those intersections.
2. If a report requests data that does not exist directly in Essbase (but can be derived through a custom aggregation), the aggregation is pushed back to Essbase. The MDX goes only against the intersections necessary to get the aggregated/calculated data, and the Essbase aggregation uses the AGGREGATE function to calculate the summarized data.
3. This combination does not support relational database functions; everything has to be converted into MDX.

Physical Layer: Aggr_External and BMM: SUM

1. If a report requests data that exists directly in Essbase, the MDX generated goes only against those intersections.
2. If a report requests data that does not exist directly in Essbase, the aggregation is again pushed back to Essbase. The major difference from the scenario above is that the aggregation in this case is done using the SUM function, so it generates wrong results if the outline contains time-balance properties like TB Average, TB Last etc. This can vary from outline to outline.
3. This combination supports relational calculations like concatenation etc., but then the aggregation happens only at the BI Server layer and the MDX goes against all the level-0 intersections. So this is generally not recommended; the numbers produced might also be wrong if you have lots of Label Only or time-balance properties in your outline.

Physical Layer: SUM and BMM: Aggr_External

1. This combination requires a member from every dimension in the report layout.
2. It has no specific advantage over the other combinations.
3. It always does a level-0 roll-up using the AGGREGATE function. This can at times be pretty slow and is not recommended unless you want the aggregation to happen dynamically all the time.

Physical Layer: SUM and BMM: SUM

1. This behaves much like Aggr_External set on both the physical layer and the BMM.
2. If a report requests data that exists in Essbase, the MDX generated goes only against the necessary intersections.
3. If a report requests data that does not exist directly in Essbase, the aggregation is pushed back to Essbase. The MDX goes only against the intersections necessary to calculate the aggregated/calculated data, with the Essbase aggregation using the SUM function.
4. The major advantage is that this combination also supports relational calculations. When relational calculations are introduced, the relational manipulations are done in BI Server memory, but the MDX generated still goes only against the necessary intersections rather than doing a level-0 dynamic sum.

Of course, some of the behavior above may change as new releases of BI EE come out, but the above summarizes the current behavior of the various aggregation combinations in the BMM and physical layers. Other combinations are possible as well; I have not covered them since they are not used that much.

Posted in All Posts, EPM, Hyperion Essbase, OBI EE Plus | 2 Comments »

Oracle BI EE 10.1.3.4 and Essbase Connectivity – Understanding Aggregations – Part 3

Posted by Venkatakrishnan J on April 9, 2009

Let's look at one more interesting aggregation combination today, one which results in different MDX queries and hence different data being displayed. The combination we will use is given below.

BMM: SUM and Physical Layer: SUM

Again, to test this combination we shall use two forms of a similar report. The data for one report can be obtained from Essbase directly; the data for the other has to be derived from the existing Essbase data.

       

       

The MDX for this report is given below. As you can see, this aggregation combination goes directly against the existing Essbase intersections.

With
set [Channel1] as '[Channel].Generations(1).members'
set [Product1] as '[Product].Generations(1).members'
select
{ [Measures].[Units] } on columns,
NON EMPTY {crossjoin ({[Channel1]},{[Product1]})} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from
[Global.Global]

This is very similar to Aggr_External set on both the physical layer and the BMM. Now let's test a report where the data to be displayed has to be derived.

       

       

If you look at the MDX below, you will notice that the SUM function is shipped instead of AGGREGATE. What this means is that if you use this combination, you need to be sure your Essbase outline does not contain any time-balance properties; if it does, the numbers generated with this combination would be wrong.

With
set [Channel1] as '[Channel].Generations(1).members'
set [Product1] as '[Product].Generations(1).members'
set [Time2] as '{[Time].[1998], [Time].[1999], [Time].[2000]}'
member [Measures].[MS1] as 'SUM({[Time2]},Measures.[Units])'
select
{ [Measures].[MS1] } on columns,
NON EMPTY {crossjoin ({[Channel1]},{[Product1]})} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from
[Global.Global]

So far we have seen all the combinations that are commonly used. There are other combinations, like Evaluate_Aggr, None etc., which can generate a completely different set of MDX queries. I will not be covering those, as they are seldom useful. Nevertheless, this wraps up my aggregation series. In the next blog entry I will summarize the observations so that anyone using BI EE on Essbase can refer to them during an implementation.

Posted in All Posts, EPM, Hyperion Essbase, OBI EE Plus | 1 Comment »

Oracle BI EE 10.1.3.4 and Essbase Connectivity – Understanding Aggregations – Part 2

Posted by Venkatakrishnan J on April 7, 2009

Yesterday we saw the 3 types of aggregation that are currently possible in BI EE (while using Essbase as a data source), and we discussed the first type, which is the default. Today we shall go into more detail on how BI EE generates the MDX and how the aggregations set in the BMM and the physical layer affect it. As you might have already noticed, aggregations can be specified for Essbase data sources at both the BMM layer and the physical layer. For simplicity and easy understanding, we shall consider only 3 aggregation settings in each layer, listed in the table below.

Layer            Aggregation
BMM              Aggr_External
BMM              SUM
BMM              None
Physical Layer   Aggr_External
Physical Layer   SUM
Physical Layer   None

The important point to note here is that different combinations of aggregations in the BMM and the physical layer can produce different MDX queries. Let us start with the first combination, shown below (we shall use the same reports as yesterday: one where the data is directly available in Essbase, and one where it is not).

BMM: Aggr_External and Physical Layer: SUM

       

       

You will notice that this produces an error due to wrong MDX being generated. That's because whenever the physical layer aggregation is set to anything other than Aggr_External, the reports we create must include a dimension member from every dimension. Though this produced an error, let's take a look at the MDX.

With set [Channel1] as '[Channel].Generations(1).members'
set [Product1] as '[Product].Generations(1).members'
member [Measures].[MS1] as 'AGGREGATE(crossjoin (Descendants([Channel].currentmember,[Channel].Generations(3)),crossjoin (Descendants([Product].currentmember,[Product].Generations(5)),{[Time4]})),Measures.[Units])', SOLVE_ORDER = 100
select
{ [Measures].[MS1] } on columns,
NON EMPTY {crossjoin ({[Channel1]},{[Product1]})} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from
[Global.Global]

Now, let's go back to the report and add Gen2, Year to bypass the above error.

       

The MDX generated is given below

With
set [Channel1] as '[Channel].Generations(1).members'
set [Product1] as '[Product].Generations(1).members'
set [Time2] as '[Time].Generations(2).members'
member [Measures].[MS1] as
'AGGREGATE(crossjoin (Descendants([Channel].currentmember,[Channel].Generations(3)),crossjoin (Descendants([Product].currentmember,[Product].Generations(5)),
Descendants([Time].currentmember,[Time].Generations(4)))),Measures.[Units])', SOLVE_ORDER = 100
select
{ [Measures].[MS1] } on columns,
NON EMPTY {crossjoin ({[Channel1]},crossjoin ({[Product1]},{[Time2]}))} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from
[Global.Global]

As you see, due to the SUM aggregation property in the physical layer, the MDX generated does a level-0 dynamic aggregation using the AGGREGATE function (even though the intersections for the above members actually exist in Essbase). What this means is that you do not necessarily have to have aggregated data in Essbase. As a next step, let's change the report to request data that does not even exist inside Essbase.

       

       

Even this has produced an error. So, in effect, with the combination of BMM: Aggr_External and Physical Layer: SUM it is not possible to do in-memory or custom aggregations on Essbase. But if the report includes generations from every dimension, a level-0 roll-up happens by default using the AGGREGATE function.

BMM: SUM and Physical Layer: Aggr_External

To test this combination, let's first create the simple report without the Time dimension and then look at the MDX query.

       

       

The MDX query generated would be

With
set [Channel1] as '[Channel].Generations(1).members'
set [Product1] as '[Product].Generations(1).members'
select
{ [Measures].[Units] } on columns,
NON EMPTY {crossjoin ({[Channel1]},{[Product1]})} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from
[Global.Global]

As you see, this behaves like the first case discussed above: if the report requests data that already exists in Essbase, the MDX generated goes directly against those intersections. Now let's add a year filter to generate a report whose data does not exist in Essbase (it can be obtained only through calculation/aggregation).

       

       

Now, if you look at the MDX below, you will notice that there is no AGGREGATE function; there is a SUM function instead. So even in this case the aggregation is pushed back to Essbase, just with SUM. But if you have time-balance properties in the outline, like TB Last or TB Average, this generates wrong results. You need to be extremely careful in such cases, as the numbers reported might be wrong.

With
set [Channel1] as '[Channel].Generations(1).members'
set [Product1] as '[Product].Generations(1).members'
set [Time2] as '{[Time].[1998], [Time].[1999], [Time].[2000], [Time].[2001]}'
member [Measures].[MS1] as 'SUM({[Time2]},Measures.[Units])'
select
{ [Measures].[MS1] } on columns,
NON EMPTY {crossjoin ({[Channel1]},{[Product1]})} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from
[Global.Global]

Now the question is: what is the difference between this and the default setting? What difference does SUM aggregation in the BMM layer make to the report? There are 2 main differences (one is noted above).

1. When a report requests data that is not in Essbase (and so has to be calculated on the fly), the aggregation is pushed back to Essbase with the SUM function. In the default case (both aggregations set to Aggr_External), the aggregation is pushed back to Essbase with the AGGREGATE function.
2. When we set SUM as the aggregation in the BMM layer, we can use relational functions in Answers to manipulate the data. For example, suppose that in the report above we need a relational calculation such as concatenating a literal to the dimension members. All we need to do is append “Test” to the Product dimension as shown below.

       

       

As you see, the number above is entirely different from the actual numbers. The reason for this is the MDX generated.

With
set [Channel3] as '[Channel].Generations(3).members'
set [Product5] as '[Product].Generations(5).members'
set [Time4] as '[Time].Generations(4).members'
set [Axis1Set] as 'crossjoin ({[Channel3]},crossjoin ({[Product5]},{[Time4]}))'
select
{[Measures].[Units]} on columns,
NON EMPTY {[Axis1Set]} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from
[Global.Global]

As you see, just because of the relational calculation, BI EE has generated MDX that goes against the level-0 members. If you fire this directly against Essbase, you will see all the non-empty tuples being returned, as shown below.

       

So what this means is that the BI Server itself does the aggregation. Again, one has to be extremely careful while using this, as it can turn out to be very slow depending on the amount of data being extracted. Not many people are aware of these subtle differences, and this is where I believe understanding both Essbase and the BI EE RPD helps a lot while creating Essbase-specific BI EE reports. I will document the various aggregation combinations (in the form of a tabular chart) in the coming blog entries so that everyone can refer to it while using Essbase as a data source.

Posted in All Posts, EPM, Hyperion Essbase, OBI EE Plus | 1 Comment »

Oracle BI EE 10.1.3.4 and Essbase Connectivity – Understanding Aggregations – Part 1

Posted by Venkatakrishnan J on April 6, 2009

One of the least explored/documented parts of the BI EE and Essbase connectivity is the ability of BI EE to do aggregations at 3 different layers. Whenever Essbase is used as a data source in BI EE, 3 types of aggregation are possible. All 3 can produce different results, which warrants a correct understanding of how exactly they work. I have seen quite a few customer cases where the report results were outright wrong due to the wrong aggregation, or a lack of understanding of how BI EE treats Essbase. So let's try to understand the 3 types of aggregation that the BI EE and Essbase connectivity provides to the end user.

1. Aggregation by going directly against the aggregated intersections in Essbase.
2. Aggregation in Essbase by using MDX. Essbase would fire Aggregate/Sum functions to calculate the data from Level-0.
3. Aggregation at the BI EE layer

All 3 of the above are currently possible in BI EE. Remember that by default, when you create the BMM for an Essbase cube by a simple drag and drop, the first type of aggregation is used. There are certain caveats one has to understand while using this type of aggregation. Wherever possible, try to include a member from every dimension in the report. Neither MDX nor BI EE requires us to qualify a report with all the dimensions, so if you create a report with only a subset of the dimensions in the outline, you need to be aware that the topmost member of each missing dimension is implicitly assumed by the MDX query. For example, consider the outline shown below.

       

       

As you see, this is a very simple outline containing 4 dimensions. Now let's create a simple report (assuming the default BMM created by the drag-and-drop option) as shown below.

       

This report shows the topmost member of the Channel and Product dimensions against the Units measure. Since we have not included the Time dimension in this report, we might assume that the report would show the sum of all the years' data. That is not the case, since MDX implicitly assumes the topmost node of the missing dimension. And because Time is a label-only member, the value you see in the report is that of the year 1999 (its first immediate child).

       

The MDX query for this report is given below

With
set [Channel1] as '[Channel].Generations(1).members'
set [Product1] as '[Product].Generations(1).members'
select
{ [Measures].[Units] } on columns,
NON EMPTY {crossjoin ({[Channel1]},{[Product1]})} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from
[Global.Global]

Now, since all the data we requested in the report is present in Essbase, the MDX goes directly against the data source and extracts the intersecting data values. But what if we want the sum of all the years? To achieve this, include all the years as a filter in your report.

       

       

So basically, in the above report we are requesting data that does not even exist in Essbase; we only have the individual year values, not the summed-up year values. If you look at the MDX now, you will notice that BI EE has automatically fired an AGGREGATE function to retrieve the data.

With
set [Channel1] as '[Channel].Generations(1).members'
set [Product1] as '[Product].Generations(1).members'
set [Time2] as '{[Time].[1998], [Time].[1999], [Time].[2000], [Time].[2001], [Time].[2002], [Time].[2003], [Time].[2004]}'
member [Measures].[MS1] as 'AGGREGATE({[Time2]},Measures.[Units])', SOLVE_ORDER = 100
select
{ [Measures].[MS1] } on columns,
NON EMPTY {crossjoin ({[Channel1]},{[Product1]})} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from
[Global.Global]

The question is: how did BI EE determine that the AGGREGATE function was needed here? What if we let the BI Server do the aggregation? How do we make BI EE generate SUM-based MDX instead of AGGREGATE? These questions will be answered in the next couple of blog entries.

In the same fashion, whenever you create a logical column based on a custom EVALUATE MDX function, be aware of the intersecting dimension members while creating the report. This is absolutely critical for the correctness of the data in the report. We will go into more detail in the next couple of blog entries.

Posted in All Posts, EPM, Hyperion Essbase, OBI EE Plus | 1 Comment »

Oracle BI EE 10.1.3.4 and Hyperion Essbase 11.1.1.0 – Ago and ToDate functions – Custom MDX and Evaluate_Aggr – Achieving Time based Shifting and Period To Date Calculations

Posted by Venkatakrishnan J on March 30, 2009

One of the few topics I have not covered so far in my BI EE and Essbase connectivity series is the support for the Ago and ToDate functions. As you probably know, Ago and ToDate are BI EE RPD functions that help in achieving time-based shifting and period-to-date calculations. Depending on the underlying database, BI EE automatically function-ships these 2 functions into the corresponding relational database's SQL queries. I have covered this before for a relational source here and here. Now, with the support for Essbase data sources in BI EE, Ago and ToDate are function-shipped automatically to the corresponding MDX functions. Let's try to understand what these 2 functions do and how they work on an Essbase data source.

1. Ago – This function calculates the value of a measure with the Time member shifted back by the index specified in the function.
2. ToDate – This function calculates the period-to-date value for every intersection of the Time dimension; i.e. if you have Jan and Feb in your report, ToDate calculates Jan and Jan + Feb respectively.

For the above functions to work, you need a dimension marked as the Time dimension in the repository. So let's start with the Demo->Basic cube.

       

Now, go to the BMM of this imported Essbase cube in the BI EE repository. Mark Year as the Time dimension and set the chronological key.

       

       

Once this is done, go to the fact table and create a custom column. This custom column will be used for a periodic shift of 2 months back. Enter the formula shown below in the column.

Ago("Demo"."Basic"."Actual" ,  "Demo"."Year"."Gen3,Year" , 2)

       

Now, include this column in the presentation layer and create a report as shown below. Ensure that the grain of the report matches the Ago function.

       

As you see, the Ago function automatically creates a custom metric that calculates the value of the measure with the month shifted back by 2 for every time member. Let's look at the MDX query.

       

With
set [Year3] as '[Year].Generations(3).members'
member [Scenario].[MS1] as '(ParallelPeriod([Year].[Gen3,Year],2,[Year].currentmember),Scenario.[Actual])', SOLVE_ORDER = 100
select
{ [Scenario].[Actual], [Scenario].[MS1] } on columns,
NON EMPTY {{[Year3]}} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from
[Demo.Basic]

As you see above, the Ago function is automatically function-shipped to an MDX function called ParallelPeriod. The major drawback of Ago is that it hardcodes the ParallelPeriod call, so not all of the capabilities of ParallelPeriod are leveraged (such as hierarchy filters). If those are required, EVALUATE has to be used; we shall see how later. Now that we are clear on how Ago works, let's move on to the ToDate function. Go to the fact logical table in the BMM, add a custom column, and enter the formula shown below.

 TODATE(Demo.Basic.Actual, Demo."Year"."Gen2,Year")

       

       

If you look at the MDX query generated, you will notice that ToDate is automatically function-shipped to the PeriodsToDate MDX function.

With
set [Year3] as '[Year].Generations(3).members'
member [Scenario].[MS1] as 'AGGREGATE({PeriodsToDate([Year].[Gen2,Year],[Year].currentmember)},Scenario.[Actual])', SOLVE_ORDER = 100
member [Scenario].[MS2] as '(ParallelPeriod([Year].[Gen3,Year],2,[Year].currentmember),Scenario.[Actual])', SOLVE_ORDER = 100
member [Scenario].[MS3] as 'Rank([Year].Generations(3).dimension.currentmember,[Year].Generations(3).dimension.members)'
select
{ [Scenario].[Actual], [Scenario].[MS1], [Scenario].[MS2], [Scenario].[MS3] } on columns,
NON EMPTY {{[Year3]}} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from
[Demo.Basic]

If you want to achieve the same functionality as Ago and ToDate without actually using them, express them as EVALUATE_AGGR functions, as shown below.

EVALUATE_AGGR('AGGREGATE({PeriodsToDate([Year].[Gen2,Year],%1.dimension.currentmember)},Scenario.[Actual])' , "vejanaki-lap.in.oracle.com"."Demo".""."Basic"."Gen3,Year")
EVALUATE_AGGR('(ParallelPeriod([Year].[Gen3,Year],2,%1.dimension.currentmember),Scenario.[Actual])',"vejanaki-lap.in.oracle.com"."Demo".""."Basic"."Gen3,Year")

       

It is good that the BI Server automatically function-ships these functions to Essbase in the form of MDX. But the drawback, as noted above, is that one does not have the flexibility to use all the parameters of the underlying MDX functions. Hopefully this will be addressed in the coming releases.

Posted in All Posts, EPM, Hyperion Essbase, OBI EE Plus | 7 Comments »

Oracle Data Integrator 10.1.3.5 and Hyperion Essbase 11.1.1.0 – Columnar ASO export using MDX

Posted by Venkatakrishnan J on March 24, 2009

I was on a customer call last week in which the customer wanted to know of ways to export partial data from an ASO cube. The issue was that they were on version 9.3.1, and since 9.3 does not support clearing regions of an ASO cube, they wanted to export partial data from the ASO cube and then reload it into the same cube after clearing it. This is a very common scenario, and I thought I would blog about it here. As you may know, Oracle Data Integrator 10.1.3.5 now supports exporting data from an Essbase cube using report scripts as well as MDX. We shall use that feature today to understand how MDX exports work for an ASO cube. It is also an interesting exercise, considering that there can be different ways of framing an MDX query. So let's start with the GlobaASO cube that I have been using for the last 3 or 4 blog entries, and first take a look at the outline.

       

Basically, this cube has data for the months of April, May and July, as shown below.

       

Now, our intention is to take an export for the months of April and May from the ASO cube. There are a couple of options. One is to use a report script, as shown below, to take the export. But generally MDX is recommended over report scripts for ASO cubes.

<Sym
{ SUPALL  }{ NAMESON  }{ TABDELIMIT  }{ ROWREPEAT  }{ NOINDENTGEN  }{ SUPMISSINGROWS  }
<SUPSHARE
<ROW ("Time","Measures","Channel","Product","Customer")
"Apr-99","May-99"
<DIMBOTTOM "Measures"
<DIMBOTTOM "Channel"
<DIMBOTTOM "Product"
<DIMBOTTOM "Customer"
!

       

The above report script can be rewritten in MDX in 2 ways (the main difference between the 2 being that you would get a lot of #Missing rows in the first query). Both MDX scripts are provided below.

SELECT
{[Measures].[Units]} on AXIS(0),
NON EMPTY {[Customer].Levels(0).members} on AXIS(1),
NON EMPTY {[Apr-99],[May-99]} ON AXIS(4),
NON EMPTY {[Channel].Levels(0).members} ON AXIS(3),
NON EMPTY {[Product].Levels(0).members} ON AXIS(2)
FROM
[GlobaASO].[GlobaASO]

       

With
set [Time1] as '{[Apr-99],[May-99]}'
set [Channel1] as '[Channel].Levels(0).members'
set [Product1] as '[Product].Levels(0).members'
set [Customer1] as '[Customer].Levels(0).members'
select
{[Measures].[Units] } on columns,
NON EMPTY {crossjoin ({[Time1]},crossjoin({[Channel1]},crossjoin({[Product1]},{[Customer1]})))}   on rows
from [GlobaASO.GlobaASO]

       

Even if we save the above MDX outputs as a CSV file, we get the member names wrapped within parentheses (depending on the MDX fired).
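If the parentheses get in the way of reloading the file, they can be stripped with a few lines of Java; this is a minimal sketch under the assumption that the export file is named mdx_export.csv and that the member names themselves never contain parentheses:

import java.io.*;

public class CleanMdxCsv {
    public static void main(String[] args) throws IOException {
        try (BufferedReader in = new BufferedReader(new FileReader("mdx_export.csv"));
             PrintWriter out = new PrintWriter(new FileWriter("mdx_export_clean.csv"))) {
            String line;
            while ((line = in.readLine()) != null) {
                // Drop the literal parentheses wrapped around member names.
                out.println(line.replace("(", "").replace(")", ""));
            }
        }
    }
}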

       

Now, let's go to Oracle Data Integrator and create an interface with GlobaASO as the input. Our idea is to test both of the above MDX queries. So, first let's start with the first MDX query and insert its output into a dummy table.

       

       

Now, let's try the same thing with the other MDX query.

       

       

It does look like ODI can handle all types of MDX queries (in effect, every MDX query works on the AXIS specification). The interesting fact is that BI EE chooses the ROWS and COLUMNS style of query (basically AXIS(0) and AXIS(1)) instead of the first style, where every axis apart from AXIS(0) comes out as a separate column. One of the main reasons for this is that the NON EMPTY clause does not work on individual members; it works only on the actual existing tuples. So the second style of query, with the crossjoin, generates a result set without #Missing values (fewer rows returned). The more you analyze the BI EE and Essbase connectivity, the more you get to know how it has been designed, and it also gives you insight into future product directions. For example, in EPM 11.1.1.0 a new feature has been added whereby saved sets can be used for the duration of a session. My guess is that this was added with BI EE 11g in mind (based on the screenshots in the OOW presentations). Anyway, I am digressing. The above was more of an initial walkthrough of the ODI and Essbase MDX export capabilities. I will go into more detail on the Java API and MDX in future blog entries.

Posted in All Posts, EPM, Hyperion Essbase, OBI EE Plus, Oracle Data Integrator | 1 Comment »

Hyperion Essbase 11.1.1.0 – Transparent Partition – Using BSO as Partition Targets – Write-Back Partitions on ASO – Part 2

Posted by Venkatakrishnan J on March 23, 2009

In the last blog entry, I gave a brief overview of how transparent partitioning works with ASO targets. One of the major drawbacks of ASO targets is that they cannot store local data. Today we shall see the advantages of using BSO cubes as transparent partition targets. The major advantage of a BSO target is that it provides the capability to do write-backs on ASO (more of a workaround). Let's take a simple use case today. We shall use the same GlobaASO cube as our ASO source. Let's look at this ASO outline first.

       

As you see, we have 4 analysis dimensions and one accounts dimension. This ASO cube stores only actual units sold. Let's assume we have end users who do active planning/forecasting of the number of units sold, and who need the capability to store the forecast data directly in the cube. Typically they forecast along the Product dimension. Let's also assume that forecasting is done for the last 6 months of the year, after looking at the first 6 months of actual units sold. To facilitate this, let's create a BSO cube as shown below.

       

As you see, this BSO outline has only 3 dimensions. A user involved in forecasting would look at the actual units sold for the first 6 months and then forecast the remaining 6 months accordingly. This outline also has one extra account called Forecast Units, which holds the forecast data. Now let's create a transparent partition with the ASO cube as the source and the BSO cube as the target.

       

       

The important point to note while creating the partition is that we do not have the same set of dimensions in the source and the target. For each dimension that does not exist in the target, just include the topmost member (the dimension member, or whichever member you feel holds the relevant data for the forecast). The source partition area would look like the one shown below.

       

As you see, I have chosen the topmost member (in the source) for the dimensions that do not exist in the target. The target partition area would look like the one shown below.

       

Now, if you validate the partition, you get an error stating that the number of dimensions in the source and the target do not match.

       

To bypass this, go to the Mappings tab and map the dimensions that do not exist in the target from the source, as shown below.

       

With this, the partition validates. Now go to the Excel add-in and query the new BSO cube.

       

Now, as an end user, I would typically look at the Q1 and Q2 actual numbers and, based on those, forecast the remaining 2 quarters across multiple products. Of course, a lot of other factors come into play while forecasting; for simplicity I will just multiply the Q2 actual numbers by 1.3 for the Q3 and Q4 forecasts. You should be able to do this directly from the Excel add-in. You might get warnings about the write-back going against ASO, but you should still be able to write back (as long as you are sure that the particular cell you have locked is not part of the transparent partition area, you should be good). Essbase performs the partition-area check based on the members chosen in Excel, not on the cells locked, hence you may get warnings even when you have locked the correct cell.

       

Posted in All Posts, EPM, Hyperion Essbase | Leave a Comment »

Hyperion Essbase 11.1.1.0 – Transparent Partition – Using ASO as Partition target – Part 1

Posted by Venkatakrishnan J on March 19, 2009

In the last blog entry we saw how a transparent partition works. Today we shall go into more detail on what this partition does and how it works on BSO and ASO cubes. To demonstrate this, I will use an ASO cube as the partition source; both ASO and BSO cubes will be used as partition targets. This will let us understand the differences between ASO and BSO when used as transparent partition targets. Let's start with the Global ASO outline shown below. This acts as our transparent partition source.

       

Now, assume that we have 2 partition targets, one ASO and the other BSO. As a rule of thumb, for partitioning to work, the number of dimensions in the source and the target should match, though the target would typically cover a subset of the source partition's data. The outlines of the target ASO and BSO cubes are provided below.

       

Let's create a partition on the source ASO cube. There are 4 main steps in creating a transparent partition:

1. Assigning a target ASO/BSO database
2. Assigning security
3. Assigning Areas
4. Creating mappings between the source and target.

The first 2 steps are self-explanatory. In our first case, we shall use ASO as the target.

       

       

       

Once the security and the target have been assigned, the next step is to define the areas. The area definition (which defines the slices of the source and target databases) should follow certain rules:

1. The number of cells chosen in the source and the target should match.
2. When the source and target dimension members match, just include them in the area definition; no additional mapping is needed.
3. When the source and target dimension members do not match, the mapping editor should be used to map the individual members.
4. If a dimension is not included in the area, the partition implicitly assumes a one-to-one match between the source and the target for that dimension.

In our case, since the dimension member names are the same, we shall map just a single level-0 member combination of the source to the target, as shown below.

       

Since we do not have any Customer dimension mapping, the partition assumes that every member in the source Customer dimension matches the corresponding member in the target Customer dimension. Now, let's load some data into the source.

       

And let's try to view the data from the target.

       

As you see, we are able to query the source data from the target. This is straightforward. Now, the next step is to define another area like the one above; this time, map the Apr-99 member from the source to the target. The basic reason for doing this is to see whether the target does a dynamic aggregation across multiple partitions/multiple areas within a partition.

       

After doing this, let's validate the partition and verify the data from the Excel add-in.

       

As you see, the target is able to aggregate data from 2 different areas of the partition dynamically. This is the value-add of a transparent partition.

One of the major differences between ASO and BSO transparent partition targets is that ASO does not support local data. Even if one loads data into the unmapped regions of the partition target, it is not recognized by the ASO target. To validate this, let's load data into the ASO target for the month of Jun-99.

       

After loading this data, validate the partition.

       

As you see, the warning message clearly states that local data is not supported. Even when we query from the Excel add-in, we do not get the local Jun-99 data.

       

In the next blog entry we shall see how a BSO cube behaves when used as a transparent partition target. I will also show how a BSO transparent partition target can be used for custom ASO write-backs.

Posted in All Posts, EPM, Hyperion Essbase | 1 Comment »

Hyperion Essbase 11.1.1.0 – Partitioning and its use cases – Transparent, Replicated and Linked partitions – Introduction

Posted by Venkatakrishnan J on March 17, 2009

I was working with a customer this week who wanted to find ways of tuning their existing Hyperion Planning BSO cubes. Multiple options were discussed, like loading parts of the BSO cubes into an ASO cube, partitioning etc. Since they were on the 11.1.1.0 release of EPM, the task was even simpler and the available options broader. One of the major advantages of the new release of Essbase is that quite a few excellent enhancements have been made to partitioning. I thought it would be worth covering the various aspects of partitioning in Essbase in another series of blog entries. Today we shall see which partitioning techniques are available and which storage options (ASO or BSO) each technique supports.

Basically Essbase supports 3 types of partitions.

1. Transparent Partition
2. Replicated Partition
3. Linked Partition

If you have worked with Oracle OLAP and are familiar with the partitioning techniques available there, you will realize that Essbase partitioning is fundamentally quite different, though it achieves the same goal of providing better performance/scalability depending on the technique chosen. With Oracle OLAP, partitioning is generally recommended for bigger AWs; with Essbase that is not always the case. Each of the partitioning techniques has its own pros and cons, and they need to be understood in detail before using any one of them.

Transparent Partition:

This is an excellent partitioning technique that basically exposes the same cube's data through multiple outlines. The other advantage is that it can be used to provide a consolidated view of data from multiple databases. To understand this better, look at the screenshot below.

       

As you see above, a transparent partition is basically used to consolidate data from multiple databases. This technique is typically used to provide smaller versions of a bigger database. There can also be cases where a new database is created to store each year's worth of data; a transparent partition provides a seamless way of looking at all the years' data without loading it completely into a new database. In addition, the transparent partition target can have its own local data. So, in effect, whenever a calculation is triggered in the target database, if the calculation requires data from the transparent partition source, Essbase automatically gets the data out of the partition and combines it with the local data. The other advantage of this technique is that data loads can be done directly against either the data source or the data target. In the coming blog entries I will discuss a couple of use cases:

1. Using ASO as a transparent partition target
2. Using BSO as a transparent partition target

Replicated Partition:

This is generally the most commonly used partitioning technique; it copies a portion of the source database's data to the target. It does not always ensure that users are looking at the latest data, as there is the possibility of the source and the target going out of sync. This technique supports data loads only at the source; if data is loaded into the target, that data is overwritten on the next synchronization with the source. A transparent partition target can use a replicated partition target as a source, but a replicated partition target cannot use a transparent partition target as a source.

       

       

Linked Partition:

Though this is called a partitioning technique, it does not provide the generic partitioning capabilities. It basically provides a link to other databases and, based on the mapping done while creating the partition, the capability to drill from a cell in the source database to the corresponding cell in the target database. Since it provides only linkage, it does not have the limitations of replicated and transparent partitions. This is illustrated below.

       

We will go into the details of each of these partitioning techniques in future blog entries. Each technique has certain limitations and is supported only in certain configurations, as shown below (for the 11.1.1.0 release).

       

As you see above, ASO is now supported as a partition target for both replicated and transparent partitions. This opens up a lot of tuning opportunities, especially in cases where both the complex calculation abilities of BSO and the fast aggregation capabilities of ASO are required.

Posted in All Posts, EPM, Hyperion Essbase | 3 Comments »

 