Business Intelligence – Oracle

Archive for the ‘OBI EE Plus’ Category

Oracle BI EE 10.1.3.4.1 – Report SUM and Pivot Calculations – An alternative SQL on Oracle 10g and above – Equivalence to MDX – MODEL Clause – Part 2

Posted by Venkatakrishnan J on May 16, 2009

One of the things that was discussed at the BI Forum was the fact that BI EE sometimes generates absolutely unnecessary queries when a custom report sum is used. For example, if you look at the report below, you would notice that we have added a custom summation row after every Year to get the year-wise sub-totals.

There are two ways of producing this summation. One is by making the BI Server fire a query back to the database (which would generate the sum), and the other is by letting the BI Server itself produce the sum. Let's first choose the first option by disabling the report-based totals (making BI EE generate the sub-total query as well).

If you look at the query generated by BI EE, you would notice that it does not actually produce the result in the desired format. The query is given below

select distinct D1.c2 as c1,
     D1.c3 as c2,
     D1.c1 as c3,
     sum(D1.c1) over () as c4
from
     (select sum(T10976.UNITS) as c1,
               T10939.FISCAL_YEAR_DSC as c2,
               T10881.CHANNEL_DSC as c3
          from
               CHANNEL_DIM T10881,
               TIME_DIM T10939,
               UNITS_FACT T10976
          where ( T10881.CHANNEL_ID = T10976.CHANNEL_ID and T10939.MONTH_ID = T10976.MONTH_ID )
          group by T10881.CHANNEL_DSC, T10939.FISCAL_YEAR_DSC
     ) D1
order by c1, c2

This query would not generate the data in the desired format (instead of a new summation row, it would generate a new column that the BI Server would have to manipulate). So, this begs the question of how we can generate a SQL query that produces output exactly matching the above report. This in itself is a nice SQL challenge, as it is not possible out of the box using normal SQL constructs. This is where the MODEL clause can come to our rescue. In the last blog entry on this topic, I showed how to go about generating the AGO and ToDate functions using direct SQL on the summed-up result set instead of the base tables. In the same fashion, the MODEL clause can be used to generate custom sub-totals. For example, the same query above can be re-written as shown below

SELECT FISCAL_YEAR_DSC, CHANNEL_DSC, UNITS FROM (select sum(T10976.UNITS) as UNITS,
               T10939.FISCAL_YEAR_DSC as FISCAL_YEAR_DSC,
               T10881.CHANNEL_DSC as CHANNEL_DSC
          from
               CHANNEL_DIM T10881,
               TIME_DIM T10939,
               UNITS_FACT T10976
          where  ( T10881.CHANNEL_ID = T10976.CHANNEL_ID and T10939.MONTH_ID = T10976.MONTH_ID )
          group by T10881.CHANNEL_DSC, T10939.FISCAL_YEAR_DSC)
MODEL
PARTITION BY (FISCAL_YEAR_DSC)
DIMENSION BY (CHANNEL_DSC)
MEASURES (UNITS)
RULES (UNITS['Total'] = SUM(UNITS)[ANY])
ORDER BY 1,2

In order to use MODEL, we would have to understand certain basics of how and in what situations it can be used. MODEL was introduced to generate SQL queries that enable an end user to visualize data in a multi-dimensional format. One can even use multiple MODEL references to do currency restatements etc. To understand this, let's look at the BSO architecture of Essbase. As you would probably know, the BSO option consists of two parts

1. Sparse dimensions that are stored as an index – these dimensions are typically stored in a normal relational, row-wise format in Essbase.
2. Dense dimensions that are stored as a multi-dimensional array – these dimensions form a multi-dimensional array (a block). The pointer to each array is denoted by the corresponding sparse dimension combination.

In the same way, a MODEL clause consists of an index denoted by the PARTITION BY clause, while the multi-dimensional array, or block, is denoted by the DIMENSION BY clause. The only difference between the two is that you would have multiple measure blocks per PARTITION BY dimension combination, whereas in the case of Essbase an index entry points to only one block. The reason for this is that the MODEL clause does not treat the measure values as a separate dimension. So, a MODEL clause treats a summarized SQL query as the multi-dimensional source it operates on, as the sketch below illustrates.
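Here is a minimal sketch of that mapping (on a hypothetical SALES table with YEAR, CHANNEL and UNITS columns, not the Global schema used above):

SELECT YEAR, CHANNEL, UNITS
FROM SALES
MODEL
PARTITION BY (YEAR)     -- the "index": one partition per sparse combination
DIMENSION BY (CHANNEL)  -- addresses the cells within each "block"
MEASURES (UNITS)        -- the data stored in each cell
RULES (UNITS['Total'] = SUM(UNITS)[ANY])  -- each rule writes new cells into the block
ORDER BY YEAR, CHANNEL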

If you understand this analogy, it becomes easier to understand the analogy between Essbase member formulas/calculation scripts and the MODEL RULES clause. I would go into the details of how the two are related in a future blog entry. For now, let's take another pivot table example as shown below.

As you see, the above is a very simple pivot table containing a calculated row (Direct Sales + Catalog). If you look at the query generated by BI EE, you would notice that the BI Server does this calculation in memory. Now, our aim is to generate the pivot table report, calculation included, in a single query using the MODEL clause. The query currently generated by BI EE is given below

select T10939.FISCAL_YEAR_DSC as c1,
     T10881.CHANNEL_DSC as c2,
     sum(T10976.UNITS) as c3
from
     CHANNEL_DIM T10881,
     TIME_DIM T10939,
     UNITS_FACT T10976
where  ( T10881.CHANNEL_ID = T10976.CHANNEL_ID and T10939.MONTH_ID = T10976.MONTH_ID )
group by T10881.CHANNEL_DSC, T10939.FISCAL_YEAR_DSC
order by c1, c2

In order to do that, we would have to generate a rule that would produce a new set of rows in addition to the existing rowset.

SELECT CHANNEL_DSC, FISCAL_YEAR_DSC,  UNITS FROM (select sum(T10976.UNITS) as UNITS,
               T10939.FISCAL_YEAR_DSC as FISCAL_YEAR_DSC,
               T10881.CHANNEL_DSC as CHANNEL_DSC
          from
               CHANNEL_DIM T10881,
               TIME_DIM T10939,
               UNITS_FACT T10976
          where  ( T10881.CHANNEL_ID = T10976.CHANNEL_ID and T10939.MONTH_ID = T10976.MONTH_ID )
          group by T10881.CHANNEL_DSC, T10939.FISCAL_YEAR_DSC)
MODEL
PARTITION BY (FISCAL_YEAR_DSC)
DIMENSION BY (CHANNEL_DSC)
MEASURES (UNITS)
RULES (UNITS['Calculated'] = UNITS['Direct Sales'] + UNITS['Catalog'])
ORDER BY 1,2
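Note that several rules can be stacked in a single MODEL clause. A variant of the above (again assuming the channel members are literally named 'Direct Sales' and 'Catalog', as in the pivot table) that produces both the year-wise total row and the calculated row in one pass:

RULES (UNITS['Total'] = SUM(UNITS)[ANY],
       UNITS['Calculated'] = UNITS['Direct Sales'] + UNITS['Catalog'])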

As you see, the MODEL clause has quite a few advantages whenever we have to do data manipulations on a summarized result. We shall see more complex examples in a future blog entry.


Posted in All Posts, Hyperion Essbase, OBI EE Plus | Leave a Comment »

Oracle BI EE 10.1.3.4.1 – Understanding Logical SQL – Part 1

Posted by Venkatakrishnan J on May 7, 2009

I was recently involved in a migration project wherein the customer was migrating from a custom OLAP solution to Oracle OLAP. The customer had around 1200 reports reporting out of a single BI EE repository. I was brought in to build a repository on Oracle OLAP 10g. Since it was a migration project and only the backend was changing, the customer expected all the reports to work out of the box on Oracle OLAP. So, my job was to build a repository from scratch using SQL views on OLAP 10g and, at the same time, ensure that none of the reports broke due to this new repository. It was a very interesting exercise, since one would have to use the concept of dimension binning on all the dimensions. Though dimension binning itself is a straightforward concept, the implementation was pretty tricky due to multiple hierarchies in each dimension, and I had to dedicate around 8 hours to build the repository and get it right in all aspects. There are certain concepts that one would have to be aware of while building a BI EE rpd on Oracle OLAP 10g. On OLAP 11g (which the customer was not willing to migrate to), though, this would be a straightforward exercise, as 11g offers out of the box MVs for facts and dimensions; in effect, an 11g OLAP cube would be treated as a simple star schema. I would cover concepts like using EVALUATE_PREDICATE, dimension binning, multi-hierarchy fragmentation etc. in a future blog entry. While working on this repository, I noticed that there were certain prompt requirements wherein I had to hardcode the logical SQL, since a direct select on the column would produce a drastically costly query on the OLAP SQL views. This led me to write today's blog entry, which is about logical SQL.

As you would probably know, the Oracle BI Server generates two types of queries: logical and physical. When columns are chosen in a report, a logical SQL is generated, and the BI Server converts this logical SQL into the actual physical queries. Now, how different is a logical SQL from a physical SQL, in terms of structure as well as functionality? This blog series would cover that.

Structure of Logical SQL:

A logical SQL is structurally very similar to a physical SQL; the major difference is that it follows the ANSI SQL structure. But there are cases wherein it has its own constructs. At a high level, the logical SQL structure would be like the one shown below

SELECT
<DISTINCT>
<PRESENTATION TABLE>.<PRESENTATION COLUMN LIST>
FROM
<SUBJECT AREA>
WHERE
<PRESENTATION TABLE>.<PRESENTATION COLUMN LIST EXPRESSIONS>
GROUP BY
<PRESENTATION TABLE>.<PRESENTATION COLUMN LIST>
ORDER BY
<PRESENTATION TABLE>.<PRESENTATION COLUMN LIST>

When do I join and when do I not?

There is no need for any join in a logical SQL when the presentation table columns belong to a single Subject Area. For example, consider the logical SQL and the corresponding subject area organization below

SELECT
Time."Fiscal Year" saw_0,
Customer.Region saw_1,
Facts.UNITS saw_2
FROM "Global - Prod & Dev"
ORDER BY saw_0, saw_1

The idea behind the SQL is that we need the number of units sold for each year in every region. In normal SQL terms this would have involved a join between the Time dimension, the Customer dimension and the fact table. It would also have involved a group by on the Year and Region columns. But in logical SQL, joins and group bys are not needed as long as we are reporting against a single subject area. But assume we have the following requirement wherein we need to show all the products and regions in a single report. Now, in our physical layer we do not have a join between products and regions. Hence, when we create such a report in BI EE, the query generated would make a join with the fact table. So, instead of getting every combination of products and regions, we would get the list of products being sold in every region based on whether transactions have occurred for them in the fact table. The actual query generated is given below

select
T11023.ITEM_DSC as c1,
T10994.REGION_DSC as c2
from
PRODUCT_DIM T11023,
CUSTOMER_DIM T10994,
UNITS_FACT T11080
where ( T10994.SHIP_TO_ID = T11080.SHIP_TO_ID and T11023.ITEM_ID = T11080.ITEM_ID )

But sometimes this is not what we need; all we need is a cartesian product between the Customer and Product dimensions. Though we can create such a cartesian join in the physical layer, changing the repository to satisfy every similar case would not work all the time. To circumvent this, we can make the BI Server perform an in-memory cartesian join. The logical SQL is given below

SELECT
B.saw_1 saw_0,
A.saw_0 saw_1
FROM
(SELECT Customer.Region saw_0 FROM "Global - Prod & Dev") A,
(SELECT Product.Family saw_1 FROM "Global - Prod & Dev") B
ORDER BY
saw_0, saw_1

If you look at the actual query generated, you would notice two queries being fired in parallel.

In effect, we have made the BI Server perform an in-memory cartesian join. Now, in some cases, the requirement would be the other way around. In a dashboard prompt, we might have to show, as a drop down, the list of all products for which there has been a transaction in the fact table. In such a case, there are two ways of getting the list of products. One way is to include the following SQL in the SQL results section of the dashboard prompt

SELECT Product.Product,
Facts.UNITS
FROM
"Global - Prod & Dev"

In effect, the dashboard prompt would ignore the second column, and hence you should see all the products for which there is a transaction in the fact table.

The other approach is to create a logical SQL that will use the above query as a subquery.

SELECT Sub.Prod saw_0
FROM
(SELECT Product.Product Prod,
Facts.UNITS Units
FROM
"Global - Prod & Dev") Sub
ORDER BY saw_0
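Logical SQL also accepts WHERE filters on presentation columns, which the BI Server translates into the corresponding physical predicates. A minimal sketch against the same subject area (assuming the column names used above):

SELECT Product.Product saw_0
FROM "Global - Prod & Dev"
WHERE Facts.UNITS > 0
ORDER BY saw_0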

Once we understand how logical SQL works, it becomes even clearer how the BI Server generates the physical queries. Today we just saw some basic logical SQL queries; more complex queries are possible to cater to different kinds of reporting requirements. The only drawback in using custom SQL is that end users lose the capability of doing the native hierarchy drills. But sometimes logical SQL becomes an indispensable tool to have. We shall see more later.

Posted in All Posts, OBI EE Plus | 2 Comments »

Oracle BI EE 10.1.3.4.1 – Content Accelerator Framework (CAF) Synchronizer – A review/preview – XUDML

Posted by Venkatakrishnan J on May 5, 2009

I have been on a couple of customer calls this month wherein the customer wanted to know a seamless means of migrating their web catalog from one environment to another. The first thing that came to my mind was the CAF (Content Accelerator Framework) that came out a couple of weeks back. Though I recommended it to the customers, I also informed them that this is its first release and hence there are bound to be one or two resolvable hurdles. Now that I am involved in creating a couple of custom utilities myself, I thought it was time to check out the capabilities of this one. This blog entry is a result of that. I must thank Phillipe (the CAF PM) and his dev team for working with me on this. There are certain pre-requisites for CAF to work, and work is under way to update the documentation as well. Following are my findings.

1. CAF is certified to work on 10.1.3.4 and above.
2. CAF requires JDK 1.6 as the active JDK.
3. CAF mandates the use of a repository with relational objects. If you have Essbase as a data source, it might not work, though I am not sure about this one, as it was the only reason I could attribute to some of my repositories not working with the utility. If you find otherwise, just leave a note in the comments section.
4. I believe CAF does not mandate BI EE itself to be installed using JDK 1.6; JDK 1.6 is required only for the client Catalog Manager. Ensure that the PATH variable (on the client Catalog Manager machine) points to JDK 1.6.

Now let's get into an actual use case scenario. Consider that we have two repositories and two web catalogs. One is the default SampleSales repository and the other is the UsageTracking repository provided with the default BI EE install. BI EE provides out of the box web catalogs containing reports and dashboards going against each of these repositories. Our aim is to merge both of them into a single repository and web catalog. To do that, let's first start with merging the SampleSales and UsageTracking repositories. This is done using the 3-way merge that I have discussed here before. As a first step, create a copy of UsageTracking.rpd. Once this is done, open SampleSales.rpd as the current repository. Choose UsageTracking.rpd as the original repository.

Now choose the UsageTracking.rpd copy as the modified repository, and make the necessary mappings to ensure that the merge happens correctly.

Now you should have a merged version of both the repositories.

As a next step, we need to bring the usage tracking reports into the web catalog of SampleSales. So, log into the Catalog Manager, navigate to the UsageTracking web catalog and choose all the reports that we want to migrate.

This would pop up another screen wherein we have to choose the source rpd and the target rpd (which would be the merged repository in our case). An online web catalog would also have to be chosen as the target. The major advantage with this migration is that all we need is access to the target web catalog (which would be accessed through the same HTTP server port). Also, CAF utilizes two other utilities called nqXUDMLGen.exe and nqXUDMLExec.exe, which are basically similar to the older nqudmlgen.exe and nqudmlexec.exe. The major difference lies in the fact that the entire repository is now visualized as an XSD schema containing references to the actual objects, and the output of nqXUDMLGen.exe is not a UDML text file anymore; rather, it is an XML file conforming to that XSD.

Click Next and select the subject area to which the report metadata would be mapped. In effect, in the previous screen you would have to choose reports whose columns fall within a single subject area. In our case, it would be UsageTracking.

In the next screen you would find a mapping between the source report columns and the target metadata. What I would have liked is an auto-mapper for when the source and target rpd columns match. Unfortunately, we have to map these one by one. But to circumvent this, the mapping structure can be saved so that it can be reused in future migrations as well.

In the next step, enter the path in the target catalog. This is the path the reports would be migrated to in the target web catalog.

Now, if you open up the target web catalog you would notice that the reports have been migrated successfully.

Though the extent of usage of this utility (customer adoption) is pretty small, the concept it introduces is very robust and can be reused across multiple implementations. This also introduces XUDML, which should give you a sneak peek into the world of 11g. Apart from the installation hassles, this utility does what it promises to do, and does it very well. The only hiccup that I see is the Essbase-related XUDML issue, but I believe this is something that can be sorted out pretty soon. Hopefully we should see more similar tools.

Posted in All Posts, OBI EE Plus | 4 Comments »

Oracle BI EE 10.1.3.4.1 – Ago and ToDate – An alternative SQL on Oracle 10g and above – Equivalence to MDX – MODEL Clause – Part 1

Posted by Venkatakrishnan J on April 30, 2009

One of the things that I have been exploring recently is the ability to create SQL queries similar to the ones created by MDX. The basic idea behind this is that MDX has excellent calculation capabilities. It also has excellent member-based selection queries, which in most cases are relevant to relational reporting as well. In relational reporting we are, in most cases, tied to the columns, and hence for some use cases like AGO and TODATE the queries generated are actually not necessary at all. For example, let's take the Ago function provided by BI EE. For demonstration purposes, let's create a Prior Year measure on the Global BMM as shown below


Now, let's create a report as shown below


All the above report does is calculate the Units sold for each year and also display the prior year's Units sold. There are two ways of generating queries for such a report

1. Generate both Units and Prior Year Units from the base time and fact tables.
2. Generate Units from the base time and fact tables, then derive the Prior Year Units from the Units data.

Currently, BI EE generates queries using the first approach mentioned above. But sometimes you realize that working on the resultant dataset is generally more efficient and predictable than going against the base tables again for the calculated dataset. If you look at the query generated to produce the above result, you would notice that it is pretty complex and in fact can sometimes generate wrong results if you don't have the proper chronological key.

select D1.c2 as c1,
     D1.c1 as c2,
     D1.c3 as c3
from
     (select D1.c1 as c1,
               D1.c2 as c2,
               D1.c3 as c3
          from
               (select sum(T10976.UNITS) as c1,
                         D3.c3 as c2,
                         D3.c2 as c3,
                         ROW_NUMBER() OVER (PARTITION BY D3.c2 ORDER BY D3.c2 ASC) as c4
                    from
                         UNITS_FACT T10976,
                         (select distinct D1.c1 + 1 as c1,
                                   D1.c2 as c2
                              from
                                   (select min(D1.c1) over (partition by D1.c3)  as c1,
                                             D1.c2 as c2
                                        from
                                             (select Case when case D1.c4 when 1 then D1.c2 else NULL end  is not null then Rank() OVER ( ORDER BY case D1.c4 when 1 then D1.c2 else NULL end  ASC NULLS LAST ) end as c1,
                                                       D1.c2 as c2,
                                                       D1.c3 as c3
                                                  from
                                                       (select T10939.MONTH_ID as c2,
                                                                 T10939.YEAR_ID as c3,
                                                                 ROW_NUMBER() OVER (PARTITION BY T10939.YEAR_ID ORDER BY T10939.YEAR_ID DESC) as c4
                                                            from
                                                                 TIME_DIM T10939
                                                       ) D1
                                             ) D1
                                   ) D1
                         ) D4,
                         (select distinct min(D1.c1) over (partition by D1.c2)  as c1,
                                   D1.c2 as c2,
                                   D1.c3 as c3
                              from
                                   (select Case when case D1.c5 when 1 then D1.c4 else NULL end  is not null then Rank() OVER ( ORDER BY case D1.c5 when 1 then D1.c4 else NULL end  ASC NULLS LAST ) end as c1,
                                             D1.c2 as c2,
                                             D1.c3 as c3
                                        from
                                             (select T10939.YEAR_ID as c2,
                                                       T10939.YEAR_DSC as c3,
                                                       T10939.MONTH_ID as c4,
                                                       ROW_NUMBER() OVER (PARTITION BY T10939.YEAR_ID ORDER BY T10939.YEAR_ID DESC) as c5
                                                  from
                                                       TIME_DIM T10939
                                             ) D1
                                   ) D1
                         ) D3
                    where  ( T10976.MONTH_ID = D4.c2 and D3.c1 = D4.c1 )
                    group by D3.c2, D3.c3
               ) D1
          where  ( D1.c4 = 1 )
     ) D1
order by c3


Sometimes the queries above are not necessary at all. This is where Oracle 10g's MODEL clause can come to the rescue. The MODEL clause was introduced to let you visualize relational data in the form of a multi-dimensional data source, so it typically works on top of a GROUP BY sub-query (visualize this as a summarized cube). In future blog entries (and in a planned whitepaper) I would cover how the MODEL clause is similar to visualizing data in an Essbase cube (both MDX as well as Calc Scripts). Today, I will just show how the above Ago query can be re-organized into a MODEL query. The query is given below

SELECT YEAR_DSC, SALES, PRIOR_YEAR
FROM
(SELECT YEAR_DSC, SUM(UNITS) SALES
FROM
TIME_DIM A, UNITS_FACT B
WHERE
A.month_id = b.month_id (+)
GROUP BY
YEAR_DSC)
MODEL
DIMENSION BY (YEAR_DSC)
MEASURES (SALES,0 PRIOR_YEAR)
RULES (PRIOR_YEAR[ANY] = SALES[cv(YEAR_DSC)-1])
ORDER BY 1
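The companion ToDate function can be produced with the same pattern. Here is a hedged sketch of a running year-to-date total at the month level (assuming MONTH_ID is numeric and ascends chronologically within each year):

SELECT YEAR_DSC, MONTH_ID, UNITS, YTD_UNITS
FROM
(SELECT A.YEAR_DSC, A.MONTH_ID, SUM(B.UNITS) UNITS
FROM TIME_DIM A, UNITS_FACT B
WHERE A.MONTH_ID = B.MONTH_ID
GROUP BY A.YEAR_DSC, A.MONTH_ID)
MODEL
PARTITION BY (YEAR_DSC)
DIMENSION BY (MONTH_ID)
MEASURES (UNITS, 0 YTD_UNITS)
RULES (YTD_UNITS[ANY] = SUM(UNITS)[MONTH_ID <= cv(MONTH_ID)])
ORDER BY 1, 2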


Currently, BI EE does not create MODEL clause based SQL queries out of the box, nor is it possible to hardcode them directly into the BI Server. I am just hoping that this feature would be added in the coming releases. And I have just scratched the surface of the capabilities the MODEL clause can offer. There are quite a few use cases wherein this can be extremely useful/efficient. I would cover them in future blog entries.

Posted in All Posts, EPM, Hyperion Essbase, OBI EE Plus | Leave a Comment »

Oracle BI EE 10.1.3.4.1 – Hyperion Essbase Security Integration – Resolved

Posted by Venkatakrishnan J on April 27, 2009

Now that 10.1.3.4.1 is out, I thought I would test out the security integration of BI EE and Essbase. If you have read my blog entries here and here, you would have noticed that there was a known bug in the 10.1.3.4 version of BI EE. Though a patch was out earlier to resolve this, I never got around to installing it. 10.1.3.4.1 is a patch release, so all the intermediary patches should have been bundled with this release as well.

To see how this has been resolved, let's use the Global cube that we built in one of the earlier blog entries. We shall be testing it with two users: admin and admin1. admin would be the super user; admin1 would just have access to the Global cube, with a filter assigned to the user. So, to start, let's first create a filter in the Global cube as shown below

create filter 'Global'.'Global'.'Sample'
read on '"1999","Units","All Channels","Total Product"';

Once this filter is created, assign the filter to the admin1 user.
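If you prefer MaxL to the EAS console, the assignment would look something along these lines (a sketch, assuming a standard MaxL shell):

grant filter 'Global'.'Global'.'Sample' to 'admin1';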


So basically, admin1 has access to only one intersection; every other intersection would produce #NoAccess in the Excel add-in. As a next step, let's create a report out of the Global cube as the admin user.


Now let's create a report in BI EE on this Global cube with the admin user.


Let's switch this user to admin1 in the connection pool and try viewing the same report.


That's so much better. So, by default, BI EE displays the #NoAccess intersections as null. The MDX generated still remains the same.

With
  set [Channel2]  as '[Channel].Generations(2).members'
  set [Product2]  as '[Product].Generations(2).members'
  set [Time2]  as '[Time].Generations(2).members'
select
  { [Measures].[Units]
  } on columns,
  NON EMPTY {crossjoin ({[Channel2]},crossjoin ({[Product2]},{[Time2]}))} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from [Global.Global]


Let's change the filter a little, as shown below, to include more intersections in admin1's access.

create or replace filter 'Global'.'Global'.'Sample'
read on '@IDESCENDANTS("Time"),"Units",@IDESCENDANTS("All Channels"),"Total Product"';


With
set [Channel2] as '{[Channel].[All Channels]}'
set [Product2] as '{[Product].[Total Product]}'
set [Product3] as 'Generate({[Product2]}, Descendants([Product].currentmember, [Product].Generations(3),SELF), ALL)'
set [Time2] as '{[Time].[1998]}'
select
{ [Measures].[Forecast Units], [Measures].[Units] } on columns,
NON EMPTY {crossjoin ({[Channel2]},crossjoin ({[Product3]},{[Time2]}))} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from
[Global.Global]


It does look like this indeed works, which makes things more interesting. We now have a complete solution, and it sure opens up a lot more opportunities for integration.

Posted in All Posts, EPM, Hyperion Essbase, OBI EE Plus | 2 Comments »

Oracle BI EE 10.1.3.4.1 – Running Hyperion Essbase Data Loads/Calculations from BI EE – Using JAPI and JSPs

Posted by Venkatakrishnan J on April 26, 2009

As consultants, one of the realities we face with customers on a daily basis is that however many features a BI tool offers, roughly 10 to 30% of the requirements (especially on big implementations) need some kind of custom work (sometimes this can even go up to 60% depending on the project). For example, last week I had a customer who wanted a method for kicking off Essbase allocations/calculations/data loads directly from BI EE. Since they are using the BI EE/Essbase connectivity, they wanted to give some users the capability to kick off the calculations directly from BI EE. Unfortunately, there is currently no out of the box way of doing this. But with some amount of work, the integration can be achieved. Today, let's look at how to go about achieving this integration. This would serve as a precursor to another blog series wherein I would walk you through some complex integration techniques between BI EE and Essbase.

The basic aim is to have a link in BI EE, clicking on which should run a calculation in Essbase and then refresh a report to show the new calculated data. To do this, the first step is to create a simple JSP page which would be used as a placeholder for firing the Essbase calculation. This JSP page would in turn use the Essbase JAPI to fire the calculation. In my case, I have used JDeveloper 10.1.3.2 to create the JSP page.

In order to create the JSP page, create a sample project and package called AllocEssbase.


Create a custom JSP page within this application. Include the Essbase jar files Ess_japi.jar and Ess_es_server.jar (the two main jar files of the Essbase JAPI) in the project classpath.


Now, create the JSP using the code below. The code basically executes a calculation script CalcAll.

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<%@ page contentType="text/html;charset=windows-1252"%>
<%@ page import="java.io.*" %>
<%@ page import="java.util.Map" %>
<%@ page import="java.util.Map.Entry" %>
<%@ page import="java.util.jar.Attributes" %>
<%@ page import="java.util.Iterator" %>
<%@ page import="com.essbase.api.base.*" %>
<%@ page import="com.essbase.api.dataquery.*" %>
<%@ page import="com.essbase.api.session.*" %>
<%@ page import="com.essbase.api.datasource.*" %>
<%@ page import="com.essbase.api.domain.*" %>
<%@ page import="com.essbase.api.metadata.*" %>
<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=windows-1252"/>
    <title>AllocEssbase</title>
  </head>
    <body>
  <font size="5"><%="Essbase Allocation Process Started" %></font>
<%
        String s_userName = "admin";
        String s_password = "password";
        String s_olapSvrName = "localhost";
        String s_provider = "http://localhost:13080/aps/JAPI";
       
        try
        {       
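        // Sign on through the Provider Services URL and connect to the Essbase server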
        IEssbase ess = IEssbase.Home.create(IEssbase.JAPI_VERSION);
        IEssDomain dom = ess.signOn(s_userName, s_password, false, null, s_provider);
        IEssOlapServer olapSvr = (IEssOlapServer)dom.getOlapServer(s_olapSvrName);
        olapSvr.connect();
       
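        // Get a handle to the Global.Global cube and run the CalcAll calculation script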
        IEssCube cube = olapSvr.getApplication("Global").getCube("Global");
        String maxLstat = "import database 'Global'.'Global' data connect as 'global' identified by 'global' using server rules_file 'UnitsLd' on error abort;";
        //cube.loadData(IEssOlapFileObject.TYPE_RULES,"UnitsLd",0,"SH",false,"global","global");
        cube.calculate(false,"CalcAll");
        //cube.beginDataload("UnitsLd",IEssOlapFileObject.TYPE_RULES,"global","global",0);
        //IEssMaxlSession maxLsess = (IEssMaxlSession)dom.getOlapServer(s_olapSvrName);
        //maxLsess.execute(maxLstat);
        }catch (EssException x){
            System.out.println("ERROR: " + x.getMessage());
        }
%>
               
  <font size="5"><%="Essbase Allocation Process Ended" %></font>
        </body>
</html>
<%
    response.sendRedirect("http://localhost:9704/analytics");
%>


As you see, this is a very basic JSP which runs a calculation script. You can use the same approach for data loads or just about anything else in Essbase. As a next step, execute this JSP from JDeveloper itself to be sure that it works.


This JSP would then have to be deployed on the OC4J (or app server) of BI EE. Since this JSP uses a couple of external jar files, the only way to deploy it is by bundling it as a WAR file. In order to do that, create a new deployment profile and include both the jars.


Deploy the war file in the OC4J of BI EE.


Now you should be able to access the JSP using the URL given below

http://localhost:9704/AllocEssbase/AllocEssbase.jsp

Embed this URL in the BI EE dashboards.


This should get you started. There are lots of different uses one can put this to, like kicking off Planning loads, ODI integration, BI Scheduler integration etc. I would cover all of these use cases in detail in the coming blog entries.

Posted in All Posts, EPM, Hyperion Essbase, OBI EE Plus | 1 Comment »

Oracle BI EE 10.1.3.4 – Content Accelerator Framework V1 – Free Utility released – Migrating Reports/Dashboards/RPD

Posted by Venkatakrishnan J on April 16, 2009

Update: I had originally noted that this utility would not work against existing installations, as it was certified only against the then-unreleased 10.1.3.4.1 patch. Thanks to Phillipe Lions, the PM for CAF: this works with 10.1.3.4 as well. Ensure that after deleting the jar files there are no lingering in-memory older Catalog Manager processes.

I happened to stumble upon a new utility on OTN today. I am not sure when it was released, but I am blogging about it as not a lot of people are aware of it.

One of the major disadvantages of BI EE has been the fact that there was no out of the box utility to migrate dashboards/reports from one environment to another. The other major issue has been that any changes made to the RPD are not propagated automatically to the reports/dashboards. To circumvent that, a new free public utility called Content Accelerator Framework (CAF) has been released. It has excellent features, which I list below

1. Ability to use a report or a dashboard as a template to create further reports. This is an excellent feature which basically helps end users reuse a report across multiple subject areas (the utility helps in reverse engineering the RPD to an extent).
2. Propagating RPD changes to the Reports/Dashboards.
3. Easy cloning of Development and Production instances (helps in seamless migration of Reports/Dashboards from one environment to another).
4. Paves the way for BI EE Answers templates (similar to BI Publisher templates) which can be re-used by lots of users. I can foresee a BI EE template depot like what has been done for BI Publisher.

The above are just the high-level advantages. There are even more, which you would realize once you start using the utility. This utility can be downloaded here and the documentation is available here. To be honest, this looks like a precursor to the kind of capabilities 11g can offer. This utility basically uses nqudmlexec and nqudmlgen to reverse engineer and rebuild the RPD.


As the name of the utility suggests, I believe it is meant as a framework over which many similar applications can be built (I am just guessing). The utility uses the Catalog Manager jar file to add custom properties within the Catalog Manager.

Posted in All Posts, OBI EE Plus | 3 Comments »

Oracle BI EE 10.1.3.4 and Essbase Connectivity – Understanding Aggregations – Part 4

Posted by Venkatakrishnan J on April 13, 2009

Today I would summarize the observations that I have made over the last couple of blog entries. As a generic rule, while designing BI EE reports with Essbase as the data source, it is important to consider the two points mentioned below

1. Physical layer aggregation and BMM layer aggregation both determine how the MDX is fired.
2. Neither of them should be looked at or changed in isolation without considering the end-to-end impact.

Let's look at the results of each combination one by one.

Physical Layer: Aggr_External and BMM: Aggr_External

1. If a report requests data that exists directly in Essbase, then the MDX generated would go only against those intersections.
2. If a report requests data that does not exist directly in Essbase ( can be retrieved through a custom aggregation), then the aggregation would be pushed back to Essbase. The MDX would only go against the necessary intersections to get the aggregated/calculated data. The Essbase aggregation would use the AGGREGATE function to calculate the summarized data.
3. This does not support relational database functions, and hence everything would have to be converted into MDX.

Physical Layer: Aggr_External and BMM: SUM

1. If a report requests data that exists directly in Essbase, then the MDX generated would go only against those intersections.
2. If a report requests data that does not exist directly in Essbase, then the MDX fired would be pushed against the Essbase layer. The major difference between this scenario and the one above is that the aggregation in this case is done using the SUM function. So, this would generate wrong results if the outline contains time-balance properties like TBAverage, TBLast etc. This can sometimes vary based on the outline.
3. This supports relational calculations like concatenation etc., but the aggregation would happen only at the BI Server layer. The MDX fired would go against all the level-0 intersections, so this is generally not recommended. Also, the numbers produced might be wrong if you have lots of Label Only or time-balance properties in your outline.

Physical Layer: SUM and BMM: Aggr_External

1. This combination requires all the dimensions in the report layout.
2. This combination does not have any specific advantage over other combinations.
3. This always does a level-0 roll-up using the AGGREGATE function. This can at times be pretty slow and is not recommended unless you want the aggregation to happen dynamically all the time.

Physical Layer: SUM and BMM: SUM

1. This is very similar to Aggr_External aggregation set for both the physical layer and the BMM.
2. If a report requests data that exists in Essbase, then the MDX generated would go only against the necessary intersections.
3. If a report requests data that does not exist directly in Essbase, then the aggregation would be pushed back to Essbase. The MDX would go against only those intersections necessary to calculate the aggregated/calculated data, and the Essbase aggregation would use the SUM function to calculate the summarized data.
4. The major advantage of this is that it supports relational calculations as well. When relational calculations are introduced, the relational manipulations are done in BI Server memory. The MDX generated would go only against the necessary intersections rather than doing a level-0 dynamic sum.

Of course, some of the behavior above can change as new releases of BI EE come out. But currently, the above summarizes the behavior of the various aggregation combinations in the BMM and the physical layer. There are other combinations possible as well; I have not covered them, as they are not used that much.

Posted in All Posts, EPM, Hyperion Essbase, OBI EE Plus | 2 Comments »

Oracle BI EE 10.1.3.4 and Essbase Connectivity – Understanding Aggregations – Part 3

Posted by Venkatakrishnan J on April 9, 2009

Let's look at one more interesting aggregation combination today, one which results in different MDX queries and hence in different data being displayed. The combination that we would be using is given below

BMM: SUM and Physical Layer: SUM

Again, to test this combination we shall use two forms of a similar report. The data for one report can be obtained from Essbase directly; the data for the other would have to be derived from the existing Essbase data.


The MDX for this report is given below. As you see, this aggregation combination goes directly against the Essbase intersections.

With
set [Channel1] as '[Channel].Generations(1).members'
set [Product1] as '[Product].Generations(1).members'
select
{ [Measures].[Units] } on columns,
NON EMPTY {crossjoin ({[Channel1]},{[Product1]})} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from
[Global.Global]

This is very similar to the Aggr_External aggregation set on both the physical layer and the BMM. Now let's test a report wherein the data to be displayed has to be derived.


If you look at the MDX below, you would notice that the SUM function is shipped instead of AGGREGATE. So, what this means is that if you use this combination, you need to be sure your Essbase outline does not contain any time-balance properties; if it does, the numbers generated would be wrong.

With
set [Channel1] as '[Channel].Generations(1).members'
set [Product1] as '[Product].Generations(1).members'
set [Time2] as '{[Time].[1998], [Time].[1999], [Time].[2000]}'
member [Measures].[MS1] as 'SUM({[Time2]},Measures.[Units])'
select
{ [Measures].[MS1] } on columns,
NON EMPTY {crossjoin ({[Channel1]},{[Product1]})} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from
[Global.Global]

So far we have seen all the combinations which would be commonly used. There are other combinations like Evaluate_Aggr, None etc. which can generate a completely different set of MDX queries; I would not be covering those, as they are seldom useful. Nevertheless, this wraps up my aggregation series. In the next blog entry I would summarize the observations so that anyone using BI EE on Essbase can refer to them during an implementation.

Posted in All Posts, EPM, Hyperion Essbase, OBI EE Plus | 1 Comment »

Oracle BI EE 10.1.3.4 and Essbase Connectivity – Understanding Aggregations – Part 2

Posted by Venkatakrishnan J on April 7, 2009

Yesterday we saw the three types of aggregations that are currently possible in BI EE (while using Essbase as a data source), and we discussed the first type, which is the default. Today we shall go more into the details of how BI EE generates the MDX, and how the aggregations set in the BMM and the physical layer affect the MDX generated. As you might have already noticed, both BMM layer and physical layer aggregations can be specified for Essbase data sources. For simplicity and easy understanding, we shall take only three types of aggregation in each of the BMM and the physical layer, as listed below

BMM: Aggr_External, SUM, None
Physical Layer: Aggr_External, SUM, None

The important part to note here is that different combinations of aggregations set in the BMM and the physical layer can produce different MDX queries. Let us start with the first combination shown below (we shall use the same two reports that we used yesterday: one wherein the data is directly available in Essbase and the other wherein it is not).

BMM: Aggr_External and Physical Layer: SUM


You would notice that this produces an error due to wrong MDX being generated. That's because whenever the physical layer aggregation is set to anything other than Aggr_External, the reports that we create should have a dimension member from every dimension. Though this has produced an error, let's take a look at the MDX.

With set [Channel1] as '[Channel].Generations(1).members'
set [Product1] as '[Product].Generations(1).members'
member [Measures].[MS1] as 'AGGREGATE(crossjoin (Descendants([Channel].currentmember,[Channel].Generations(3)),crossjoin (Descendants([Product].currentmember,[Product].Generations(5)),{[Time4]})),Measures.[Units])', SOLVE_ORDER = 100
select
{ [Measures].[MS1] } on columns,
NON EMPTY {crossjoin ({[Channel1]},{[Product1]})} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from
[Global.Global]

Now, let's go back to the report and add Gen2, Year to bypass the above error.


The MDX generated is given below

With
set [Channel1] as '[Channel].Generations(1).members'
set [Product1] as '[Product].Generations(1).members'
set [Time2] as '[Time].Generations(2).members'
member [Measures].[MS1] as
'AGGREGATE(crossjoin (Descendants([Channel].currentmember,[Channel].Generations(3)),crossjoin (Descendants([Product].currentmember,[Product].Generations(5)),
Descendants([Time].currentmember,[Time].Generations(4)))),Measures.[Units])', SOLVE_ORDER = 100
select
{ [Measures].[MS1] } on columns,
NON EMPTY {crossjoin ({[Channel1]},crossjoin ({[Product1]},{[Time2]}))} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from
[Global.Global]

As you see, due to the aggregation property (SUM) in the physical layer, the MDX generated does a level-0 dynamic aggregation using the AGGREGATE function (even though the intersection for the above members actually exists in Essbase). So, what this means is that you do not necessarily have to always have aggregated data in Essbase. Now, as a next step, let's change the report to produce one for which data does not even exist inside Essbase.


Even this has produced an error. So, in effect, what this means is that with a combination of BMM: Aggr_External and Physical Layer: SUM, it is not possible to do in-memory or custom aggregations on Essbase. But by default, if the report has generations from every dimension, a level-0 roll-up happens using the AGGREGATE function.

BMM: SUM and Physical Layer: Aggr_External

To test this combination, let's first create the simple report without the Time dimension and then look at the MDX query.


The MDX query generated would be

With
set [Channel1] as '[Channel].Generations(1).members'
set [Product1] as '[Product].Generations(1).members'
select
{ [Measures].[Units] } on columns,
NON EMPTY {crossjoin ({[Channel1]},{[Product1]})} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from
[Global.Global]

As you see, this behaves like the first case discussed above. If we are requesting a report for which data already exists in Essbase, then the MDX generated would go directly against those intersections. Now, let's add a year filter to generate a report for which data does not exist in Essbase (it can be obtained only through calculations/aggregation).


Now, if you look at the MDX below, you would notice that there is no AGGREGATE function; there is a SUM function instead. So even in this case the aggregation is pushed back to Essbase, just with a different function. But if you have time-balance properties in the outline like TBLast, TBAverage etc., then this would generate wrong results. You need to be extremely careful in such cases, as the numbers reported might be wrong.

With
set [Channel1] as '[Channel].Generations(1).members'
set [Product1] as '[Product].Generations(1).members'
set [Time2] as '{[Time].[1998], [Time].[1999], [Time].[2000], [Time].[2001]}'
member [Measures].[MS1] as 'SUM({[Time2]},Measures.[Units])'
select
{ [Measures].[MS1] } on columns,
NON EMPTY {crossjoin ({[Channel1]},{[Product1]})} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from
[Global.Global]

Now the question is: what is the difference between this and the default setting? What difference does SUM aggregation in the BMM layer make to the report? There are two main differences (one is noted above).

1. When a report requests data that is not there in Essbase (and has to be calculated on the fly), the aggregation is pushed back to Essbase with the SUM function. In the default case (both aggregations set to Aggr_External), the aggregation is pushed back to Essbase with the AGGREGATE function.
2. The other difference is that when we set SUM as the aggregation in the BMM layer, then in Answers we can use relational functions to manipulate the data. For example, in the report above, let's assume we need to do a relational calculation like concatenation on the dimension members in the report. All we need to do is append “Test” to the Product dimension as shown below.
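In Answers, the column formula would look something along these lines (a sketch; concatenation via the logical SQL || operator, column names as used earlier):

Product.Product || 'Test'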


As you see, the numbers produced are outright different from the actual ones. The reason for this is the MDX generated.

With
set [Channel3] as '[Channel].Generations(3).members'
set [Product5] as '[Product].Generations(5).members'
set [Time4] as '[Time].Generations(4).members'
set [Axis1Set] as 'crossjoin ({[Channel3]},crossjoin ({[Product5]},{[Time4]}))'
select
{[Measures].[Units]} on columns,
NON EMPTY {[Axis1Set]} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from
[Global.Global]

As you see, just because of the relational calculation, BI EE has generated MDX that goes against the level-0 members. If you fire this directly against Essbase, you would notice all the non-empty tuples being generated.


So what this means is that the BI Server itself does the aggregation. Again, one would have to be extremely careful while using this, as it can turn out to be very slow depending on the amount of data being extracted. Not many people are aware of these minute differences, and this is where I believe understanding Essbase and the BI EE RPD helps a lot while creating Essbase-specific BI EE reports. I would document (in the form of a tabular chart) the various aggregation combinations in the coming blog entries so that everyone can refer to it while using Essbase as a data source.

Posted in All Posts, EPM, Hyperion Essbase, OBI EE Plus | 1 Comment »