Business Intelligence – Oracle

Archive for the ‘All Posts’ Category

Oracle BI EE – Consuming Web Services in BI EE – Displaying Live Stock Quotes – Using UTL_HTTP and EVALUATE – Part 1

Posted by Venkatakrishnan J on May 19, 2009

When I was at the BI Forum in Brighton, one of the forum participants had a question on using Web Services in BI EE. BI EE currently does not support consuming a WSDL response directly by passing it a request. So, one way to achieve this is to write a PL/SQL wrapper which calls the web service, and then use EVALUATE within BI EE to display the results. There are other approaches as well, like writing a pipelined table function; we shall see those in a later blog entry.

Assume that we have a table as shown below containing the companies and their corresponding Stock Symbols.

The requirement is to display the live stock quotes (Day High as well as Day Low) from a live Web Service feed within BI EE. One good thing about this service is that it offers a variety of web service feeds along with their statuses. In order to consume the stock feed, let's first create a PL/SQL function which is nothing but a wrapper that sends a request to the WSDL above and gets the response back. The request passes the Stock Symbol as a parameter, and the PL/SQL function extracts the Day High and Day Low (or any other parameter) values directly from the stock feed.

create or replace function stock_quote
( p_symbol in varchar2 default 'ORCL',
  p_tag    in varchar2 default 'High'
) return varchar2
is
  s_request  varchar2(32000);
  s_response varchar2(32000);
  h_request  utl_http.req;
  h_response utl_http.resp;
begin
  s_request := '<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetQuote xmlns="http://www.webserviceX.NET/">
      <symbol>' || p_symbol || '</symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>';
  h_request := utl_http.begin_request
             ( '' -- the web service endpoint URL goes here
             , 'POST'
             , 'HTTP/1.1'
             );
  utl_http.set_header(h_request, 'Content-Type', 'text/xml');
  utl_http.set_header(h_request, 'Content-Length', length(s_request));
  utl_http.write_text(h_request, s_request);
  h_response := utl_http.get_response(h_request);
  utl_http.read_text(h_response, s_response);
  utl_http.end_response(h_response);
  -- the quote XML comes back entity-escaped inside the SOAP response,
  -- hence the search for &lt;tag&gt; rather than <tag>
  return substr(s_response,
                instr(s_response, '&lt;'||p_tag||'&gt;') + length(p_tag) + 8,
                instr(s_response, '&lt;/'||p_tag||'&gt;') - instr(s_response, '&lt;'||p_tag||'&gt;') - length(p_tag) - 8);
end stock_quote;

As you can see, the above function does simple string manipulation to extract the parameter values. Typically this is not a recommended approach; the recommended approach is to use an XMLType function or a regular expression to extract the values. But in our case the string manipulation is sufficient. If you are behind a proxy, ensure that you call UTL_HTTP.SET_PROXY before the begin_request call. To test this out, let's fire the function from the command line.
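For reference, the XMLType route would look something like this (a hedged sketch: the XML literal here only mimics the shape of the quote feed and assumes the entity-escaped response has already been converted back to plain XML):

```sql
-- Illustrative only: the XML literal stands in for the unescaped quote feed
select extractvalue(
         xmltype('<StockQuotes><Stock><High>22.15</High><Low>21.40</Low></Stock></StockQuotes>'),
         '/StockQuotes/Stock/High') as day_high
from dual;
```

The XPath-based extraction keeps working even if the feed adds attributes or whitespace, which is exactly where the substr/instr approach breaks down.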

select stock_quote('ORCL','Low') from dual;

As you can see, we get the Oracle stock quote for the day directly from this function. Now, let's go to BI EE and call this function through EVALUATE.
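The column formula in Answers would look something like this (a hedged sketch: the presentation column "Company"."Stock Symbol" is an assumed name based on the table above; %1 and %2 are placeholders that EVALUATE substitutes with the trailing arguments before shipping the function call to the database):

```sql
EVALUATE('STOCK_QUOTE(%1, %2)', "Company"."Stock Symbol", 'High')
```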

As you can see, we have now displayed the response of a web service directly within BI EE. We shall look at the other approaches in the future.


Posted in All Posts, OBI EE Plus | 3 Comments »

Oracle BI EE – Time Based Aggregation – Essbase Equivalence – Part 1

Posted by Venkatakrishnan J on May 17, 2009

Right now I am in the process of comparing certain Essbase functionality with its BI EE equivalents and vice versa. One of the main reasons for doing this is to understand how a feature of one product can be leveraged while working with the other, now that more and more customers seem to have both of them together. While doing a quick comparison, one of the key Essbase functionalities that came to the forefront was the ability to do time-based aggregations. Any Essbase user would be aware that both Block Storage and (to an extent) Aggregate Storage cubes support time-based aggregations. Time-based aggregations are aggregations of measures wherein the aggregation rule differs for the Time dimension alone. There are 3 types of time-based aggregations:

1. Time Based Last – In some cases, when we are at the Year level, we would like to see the amount sold for the last month (December) alone, instead of a summation of all the months. The aggregation for all the other dimensions (and their levels) remains SUM.
2. Time Based First – In some cases, when we are at the Year or Quarter level, we would like to see the amount sold for the first month of that quarter or year, instead of a summation of all the child months. The aggregation for all the other dimensions remains SUM.
3. Time Based Average – In some cases, when we are at the Year or Quarter level of the Time dimension, we would like to see the amount sold as an average of all the corresponding child months. The aggregation for all the other dimensions remains SUM.

Just to explain this a bit further, let's consider the screenshot below.

As you can see, we basically have 2 dimensions, Product and Year. In a report containing the top level of Product and the Year level of Time, the number produced is the sum of the amount sold for all products in the last month of that year (December). So there are effectively 2 kinds of aggregation for the same measure. Though the method for achieving this is pretty simple, I wanted to introduce it here so that it can act as a precursor to the even more complex dimension-based aggregations that I will show in coming blog entries. To achieve this, open up the repository and go to the Aggregation tab of the measure that you are reporting against.

Now choose the "Based on dimensions" option. In the aggregation list, first choose Other Dimensions and set the aggregation to SUM. After that, choose the Time dimension and set the aggregation to LAST.

Ensure that the dimensions are listed in the same order as above. Basically, this sets the aggregation of the measure for every dimension apart from Time to SUM; for Time alone it does a LAST aggregation. The order of the dimensions ensures that the LAST operation is carried out after the SUM.
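In plain Oracle SQL terms, the SUM-then-LAST combination behaves like the following sketch (an illustration against the same Global schema tables, not the exact query BI EE emits):

```sql
-- sum units to (product, year, month) grain first, then keep the last month per year
select total_product_dsc, year_dsc,
       max(units) keep (dense_rank last order by month_id) as units
from (select p.total_product_dsc, t.year_dsc, t.month_id, sum(f.units) as units
      from PRODUCT_DIM p, TIME_DIM t, UNITS_FACT f
      where p.item_id = f.item_id and t.month_id = f.month_id
      group by p.total_product_dsc, t.year_dsc, t.month_id)
group by total_product_dsc, year_dsc
order by total_product_dsc, year_dsc
```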

Now create a report on Time and Product. You would notice that whenever upper levels of the Time dimension are queried, the result is a LAST aggregation.

If you look at the actual query fired, you would notice that the time-based LAST aggregation is performed on top of a summed-up Product dimension.

select D1.c1 as c1,
     D1.c2 as c2,
     D1.c3 as c3
from (select distinct D1.c2 as c1,
               D1.c3 as c2,
               max(D1.c6) keep (dense_rank last order by D1.c7) over (partition by D1.c4, D1.c5) as c3,
               D1.c4 as c4,
               D1.c5 as c5
      from (select T3663.TOTAL_PRODUCT_DSC as c2,
                     T3683.YEAR_DSC as c3,
                     T3663.TOTAL_PRODUCT_ID as c4,
                     T3683.YEAR_ID as c5,
                     sum(T3720.UNITS) as c6,
                     T3683.MONTH_ID as c7
            from PRODUCT_DIM T3663,
                     TIME_DIM T3683,
                     UNITS_FACT T3720
            where  ( T3663.ITEM_ID = T3720.ITEM_ID and T3663.TOTAL_PRODUCT_DSC = 'Total Product' and T3683.MONTH_ID = T3720.MONTH_ID )
            group by T3663.TOTAL_PRODUCT_DSC, T3663.TOTAL_PRODUCT_ID, T3683.MONTH_ID, T3683.YEAR_DSC, T3683.YEAR_ID
           ) D1
     ) D1
order by c1, c2

The above was pretty straightforward. But there is another similar use case of time-based properties in Essbase which is tougher to simulate in BI EE against a relational source: the time-based aggregation can vary across products. For example, one product might need a time-based aggregation of LAST, another FIRST and another AVERAGE. In such cases the above approach cannot be used, as the aggregation rule there is constant across a dimension; in the example above, we had the same time-balance property for all members within the dimension. We shall see how to achieve such member-varying time-based aggregations in a future blog entry (in Part 2).
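As a teaser of what Part 2 has to solve, here is a hedged SQL sketch of member-varying time balances (the product names 'Product A' and 'Product B' are hypothetical; the CASE picks a different aggregation per product):

```sql
select total_product_dsc, year_dsc,
       case total_product_dsc
         when 'Product A' then max(units) keep (dense_rank last  order by month_id) -- TB Last
         when 'Product B' then max(units) keep (dense_rank first order by month_id) -- TB First
         else avg(units)                                                            -- TB Average
       end as units
from (select p.total_product_dsc, t.year_dsc, t.month_id, sum(f.units) as units
      from PRODUCT_DIM p, TIME_DIM t, UNITS_FACT f
      where p.item_id = f.item_id and t.month_id = f.month_id
      group by p.total_product_dsc, t.year_dsc, t.month_id)
group by total_product_dsc, year_dsc
```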

Posted in All Posts, Hyperion Essbase, OBI EE Plus | 2 Comments »

Oracle BI EE – Report SUM and Pivot Calculations – An alternative SQL on Oracle 10g and above – Equivalence to MDX – MODEL Clause – Part 2

Posted by Venkatakrishnan J on May 16, 2009

One of the things that was discussed at the BI Forum was the fact that BI EE sometimes generates absolutely unnecessary queries when using a custom report sum. For example, if you look at the report below, you would notice that we have added a custom summation row after every Year to get the year-wise sub-totals.

There are 2 ways of producing this summation. One is by making the BI Server fire a query back to the database (which generates the sum), and the other is by letting the BI Server itself produce the sum. Let's first choose the first option by disabling the report-based totals (making BI EE generate the sub-total query as well).

If you look at the query, you would notice that what BI EE generates does not actually produce the result in the desired format. The query is given below.

select distinct D1.c2 as c1,
     D1.c3 as c2,
     D1.c1 as c3,
     sum(D1.c1) over () as c4
from (select sum(T10976.UNITS) as c1,
          T10939.FISCAL_YEAR_DSC as c2,
          T10881.CHANNEL_DSC as c3
      from CHANNEL_DIM T10881,
          TIME_DIM T10939,
          UNITS_FACT T10976
      where ( T10881.CHANNEL_ID = T10976.CHANNEL_ID and T10939.MONTH_ID = T10976.MONTH_ID )
      group by T10939.FISCAL_YEAR_DSC, T10881.CHANNEL_DSC
     ) D1
order by c1, c2

This query would not generate the data in the desired format (instead of a new summation row, it generates a new column that the BI Server then has to manipulate). So, this begs the question of how to generate a SQL query that produces output exactly like the above report. This is a nice SQL challenge in itself, as it is not possible using normal SQL constructs alone. This is where the MODEL clause can come to our rescue. In the last blog entry on this topic, I showed how to generate AGO and TODATE functions using direct SQL on the summed-up result set instead of the base tables. In the same fashion, the MODEL clause can be used to generate custom sub-totals. For example, the same query above can be re-written as shown below.

select fiscal_year_dsc, channel_dsc, units
from (select T10939.FISCAL_YEAR_DSC as FISCAL_YEAR_DSC,
             T10881.CHANNEL_DSC as CHANNEL_DSC,
             sum(T10976.UNITS) as UNITS
      from CHANNEL_DIM T10881, TIME_DIM T10939, UNITS_FACT T10976
      where  ( T10881.CHANNEL_ID = T10976.CHANNEL_ID and T10939.MONTH_ID = T10976.MONTH_ID )
      group by T10881.CHANNEL_DSC, T10939.FISCAL_YEAR_DSC)
model partition by (fiscal_year_dsc)
      dimension by (channel_dsc)
      measures (units)
      rules (units['Total'] = sum(units)[any])
order by fiscal_year_dsc, channel_dsc

In order to use MODEL, we have to understand certain basics of how and in what situations it can be used. The MODEL clause was introduced to enable SQL queries that let an end user visualize data in a multi-dimensional format; one can even use multiple MODEL references to do currency restatements etc. To understand this, let's look at the BSO architecture of Essbase. As you would probably know, a BSO cube consists of 2 parts:

1. Sparse dimensions that are stored as an index – these dimensions are stored in a normal relational row-wise format in Essbase.
2. Dense dimensions that are stored as a multi-dimensional array – these dimensions form a multi-dimensional array (a block), and the pointer to each block is denoted by the corresponding sparse dimension combination.

In the same way, a MODEL clause consists of an index denoted by the PARTITION BY clause, while the multidimensional array (or block) is denoted by the DIMENSION BY clause. The only difference between the two is that you can have multiple measure blocks per PARTITION BY dimension combination, whereas in Essbase an index entry points to only one block. The reason for this is that the MODEL clause does not treat the measure values as a separate dimension. So, a MODEL clause treats a summarized SQL query as shown below.
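That anatomy can be sketched as follows (a hedged illustration using the Global schema tables from the queries above; the rule here is a no-op identity, so the data simply passes through):

```sql
select channel_dsc, fiscal_year_dsc, units
from (select c.channel_dsc, t.fiscal_year_dsc, sum(f.units) as units
      from CHANNEL_DIM c, TIME_DIM t, UNITS_FACT f
      where c.channel_id = f.channel_id and t.month_id = f.month_id
      group by c.channel_dsc, t.fiscal_year_dsc)
model
  partition by (channel_dsc)      -- the "index", like an Essbase sparse dimension combination
  dimension by (fiscal_year_dsc)  -- the "block" co-ordinates, like a dense dimension
  measures (units)                -- the cells stored within each block
  rules (units[any] = units[cv(fiscal_year_dsc)])
order by channel_dsc, fiscal_year_dsc
```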

If you understand this analogy, it becomes easier to see the parallel between Essbase member formulas/calculation scripts and the MODEL RULES clause. I will go into detail on how these two are related in a future blog entry. For now, let's take another pivot table example as shown below.

As you can see, the above is a very simple pivot table containing a calculated row (Direct Sales + Catalog). If you look at the query generated by BI EE, you would notice that the BI Server does this calculation in memory. Now, our aim is to generate the pivot table report, including the calculation, in a single query using the MODEL clause.

select T10939.FISCAL_YEAR_DSC as c1,
     T10881.CHANNEL_DSC as c2,
     sum(T10976.UNITS) as c3
from CHANNEL_DIM T10881,
     TIME_DIM T10939,
     UNITS_FACT T10976
where  ( T10881.CHANNEL_ID = T10976.CHANNEL_ID and T10939.MONTH_ID = T10976.MONTH_ID )
group by T10881.CHANNEL_DSC, T10939.FISCAL_YEAR_DSC
order by c1, c2

In order to do that, we have to write a rule that produces a new set of rows in addition to the existing rowset.

select fiscal_year_dsc, channel_dsc, units
from (select T10939.FISCAL_YEAR_DSC as FISCAL_YEAR_DSC,
             T10881.CHANNEL_DSC as CHANNEL_DSC,
             sum(T10976.UNITS) as UNITS
      from CHANNEL_DIM T10881, TIME_DIM T10939, UNITS_FACT T10976
      where  ( T10881.CHANNEL_ID = T10976.CHANNEL_ID and T10939.MONTH_ID = T10976.MONTH_ID )
      group by T10881.CHANNEL_DSC, T10939.FISCAL_YEAR_DSC)
model partition by (fiscal_year_dsc)
      dimension by (channel_dsc)
      measures (units)
      RULES (UNITS['Calculated'] = SUM(UNITS)[ANY])
order by fiscal_year_dsc, channel_dsc

As you can see, the MODEL clause has quite a few advantages whenever we need to manipulate data on a summarized result. We shall see more complex examples in a future blog entry.

Posted in All Posts, Hyperion Essbase, OBI EE Plus | Leave a Comment »

Brighton BI Forum – Updates – Day 1

Posted by Venkatakrishnan J on May 15, 2009

I just woke up after a very nauseating night. I believe the jet lag finally got to me. In retrospect, when I think about the presentation I did yesterday, there were a few things (in fact a lot) that I could have done better. Mark was kind enough to say that it was one of the best sessions so far, judging by the number of questions. But again, I think I could have done it more justice by being better prepared. For my own memory, the following are the key aspects of any presentation that one has to focus on:

1. Get the flow correct – I missed that completely yesterday. For the first 15 minutes, I did not get a single question, which meant I was not coherent enough for the audience to follow.
2. Get the content to match the flow – I thought I had this right as well. But when I was going through the slides (during the presentation, which was another mistake) I realized they had more words than actually helpful content.
3. Rehearse at least once before you present – I thought this was not needed at all. To my utter dismay, I was found lacking in this aspect, as I could not keep the audience glued to the presentation.

Christian was kind enough to fill in, and after that the Q&A session went very well. Unfortunately, I could not get any photos to post here; let me try to take some today. There were key presentations from Craig Stewart, who presented the roadmap for ODI. Today we have a session from Mike which I believe should be interesting, as he will be presenting the 11g roadmap. All in all, the conference has been excellent so far, and I should thank Mark & Jon and their team of Borkur, Peter and Ragnar for making us feel at home. It should be another interesting day of presentations today.

Posted in All Posts | 1 Comment »

Oracle BI Forum – Brighton Conference – Update

Posted by Venkatakrishnan J on May 14, 2009

As you might already know, I am right now at the Oracle BI Forum in Brighton for my presentation on BI EE and Essbase security integration. More than the conference itself, this has given me an excellent opportunity to interact face to face with fellow active bloggers like Christian and Andreas, fellow forumers, conference presenters and of course Mark, who was pivotal in organizing this entire conference. Now I know whom I have been interacting with over email :-). I arrived pretty late yesterday and just about missed Mark's keynote. I should not be missing any presentations today. I am not sure how much interest my presentation will invoke, as it is tilted more towards the underpinnings of the Essbase Java API. Well, let's see. I will keep everyone posted with conference updates here. Of course, do follow Mark's twitter feed if you want more frequent updates.

The picture above was taken this morning from my room. The view is breathtaking, though I have not done justice to it here. I will post more later.

Posted in All Posts, Conference, General | Leave a Comment »

Oracle BI EE – Understanding Logical SQL – Part 1

Posted by Venkatakrishnan J on May 7, 2009

I was recently involved in a migration project wherein the customer was migrating from a custom OLAP solution to Oracle OLAP. The customer had around 1200 reports running against a single BI EE repository. I was brought in to build a repository on Oracle OLAP 10g. Since it was a migration project and only the backend was changing, the customer expected all the reports to work out of the box on Oracle OLAP. So, my job was to build a repository from scratch using SQL views on OLAP 10g, and at the same time ensure that none of the reports broke due to the new repository. It was a very interesting exercise, since one has to use the concept of dimension binning on all the dimensions. Though dimension binning itself is a straightforward concept, the implementation was pretty tricky due to multiple hierarchies in each dimension, and I had to dedicate around 8 hours to build the repository and get it right in all aspects. There are certain concepts that one has to be aware of while building a BI EE RPD on Oracle OLAP 10g. On OLAP 11g (which the customer was not willing to migrate to), this would be a straightforward exercise, as 11g offers out-of-the-box MVs for facts and dimensions; in effect, an 11g OLAP cube can be treated as a simple star schema. I will cover concepts like using EVALUATE_PREDICATE, dimension binning and multi-hierarchy fragmentation in a future blog entry. While working on this repository, I noticed certain prompt requirements wherein I had to hardcode the logical SQL, since a direct select on the column would produce a drastically costly query on the OLAP SQL views. This led me to write today's blog entry, which is about logical SQL.

As you would probably know, the Oracle BI Server generates 2 types of queries: logical and physical. When columns are chosen in a report, a logical SQL query is generated, and the BI Server converts this logical SQL into the actual physical queries. Now, how different is logical SQL from physical SQL, in terms of structure as well as functionality? This blog series will cover that.

Structure of Logical SQL:

Logical SQL is very similar to physical SQL; it generally follows the ANSI SQL structure, though there are cases wherein it has constructs of its own. At a high level, the structure of a logical SQL query is like the one shown below.


When do I join and when do I not?

So, there is no need for any join in a logical SQL query when the presentation table columns belong to a single subject area. For example, consider the logical SQL and the corresponding subject area organization below.

SELECT Time."Fiscal Year" saw_0,
Customer.Region saw_1,
Facts.UNITS saw_2
FROM "Global - Prod & Dev"
ORDER BY saw_0, saw_1

The idea behind the SQL is that we need the number of units sold for each year in every region. In normal SQL terms this would have involved a join between the time dimension, the customer dimension and the fact table, as well as a group by on the Year and Region columns. But in logical SQL, joins and group bys are not needed as long as we are reporting against a single subject area. Now assume we have a requirement wherein we need to show all products and regions in a single report. In our physical layer we do not have a join between products and regions, hence when we create such a report in BI EE, the generated query makes the join through the fact table. So, instead of getting every combination of products and regions, we get the list of products sold in every region, based on whether transactions have occurred for them in the fact table. The actual query generated is given below.

select distinct T11023.ITEM_DSC as c1,
     T10994.REGION_DSC as c2
from PRODUCT_DIM T11023,
     CUSTOMER_DIM T10994,
     UNITS_FACT T11080
where ( T10994.SHIP_TO_ID = T11080.SHIP_TO_ID and T11023.ITEM_ID = T11080.ITEM_ID )

But sometimes this is not what we need. All we need is a cartesian product between the Customer and Product dimensions. Though we could create such a cartesian join in the physical layer, changing the repository to satisfy every such case would not work all the time. To circumvent this, we can make the BI Server perform an in-memory cartesian join. The logical SQL is given below.

SELECT B.saw_1 saw_0,
A.saw_0 saw_1
FROM (SELECT Customer.Region saw_0 FROM "Global - Prod & Dev") A,
(SELECT Product.Family saw_1 FROM "Global - Prod & Dev") B
ORDER BY saw_0, saw_1

If you look at the actual queries generated, you would notice 2 queries being fired in parallel.

In effect, we have made the BI Server perform an in-memory cartesian join. Now, in some cases the requirement is the other way around: in a dashboard prompt, we have to show, as a drop-down, the list of all products for which there has been a transaction in the fact table. In such a case, there are 2 ways of getting the list of products. One way is to include the following SQL in the SQL Results section of the dashboard prompt.

SELECT Product.Product,
Facts.UNITS
FROM "Global - Prod & Dev"

In effect, the dashboard prompt ignores the 2nd column, and hence you should see all the products for which there is a transaction in the fact table.

The other approach is to create a logical SQL that will use the above query as a subquery.

SELECT Sub.Prod saw_0
FROM (SELECT Product.Product Prod,
Facts.UNITS Units
FROM "Global - Prod & Dev") Sub
ORDER BY saw_0

Once we understand how logical SQL works, it becomes even clearer how the BI Server generates the physical queries. Today we just saw some basic logical SQL queries; more complex queries are possible to cater to different kinds of reporting requirements. The only drawback of using custom SQL is that end users lose the ability to do native hierarchy drills. But sometimes logical SQL becomes an indispensable tool to have. We shall see more later.

Posted in All Posts, OBI EE Plus | 2 Comments »

Oracle BI EE – Content Accelerator Framework (CAF) Synchronizer – A review/preview – XUDML

Posted by Venkatakrishnan J on May 5, 2009

I have been on a couple of customer calls this month wherein the customer wanted a seamless means of migrating their web catalog from one environment to another. The first thing that came to my mind was the CAF (Content Accelerator Framework) that came out a couple of weeks back. Though I recommended it to the customers, I also informed them that this is its first release, and hence there are bound to be one or two resolvable hurdles. Now that I am involved in creating a couple of custom utilities myself, I thought it was time to check out the capabilities of this utility; this blog entry is the result. I must thank Phillipe (the CAF PM) and his dev team for working with me on this. There are certain pre-requisites for CAF to work, and work is under way to update the documentation as well. Following are my findings.

1. CAF is certified to work on and above.
2. CAF requires JDK 1.6 as the active JDK.
3. CAF mandates the use of a repository with relational objects. If you have Essbase as a data source, it might not work; I am not sure about this one, as it was the only reason I could attribute to some of my repositories not working with the utility. If you find otherwise, just leave a note in the comments section.
4. I believe CAF does not require BI EE itself to be installed with JDK 1.6; JDK 1.6 is needed only for the client Catalog Manager. Ensure that the PATH variable (on the client Catalog Manager machine) points to JDK 1.6.

Now let's get into an actual use-case scenario. Consider that we have 2 repositories and 2 web catalogs: the default SampleSales repository and the UsageTracking repository provided with the default BI EE install. BI EE provides out-of-the-box web catalogs containing reports and dashboards going against each of these repositories. Our aim is to merge both of them into a single repository and web catalog. To do that, let's first merge the SampleSales and UsageTracking repositories, using the 3-way merge that I have discussed here before. As a first step, create a copy of UsageTracking.rpd. Once this is done, open SampleSales.rpd as the current repository and choose UsageTracking.rpd as the original repository.

Now choose the UsageTracking.rpd copy as the modified repository, and make the necessary mappings to ensure that the merge happens correctly.

Now you should have a merged version of both the repositories.

As a next step, we need to bring the usage tracking reports into the SampleSales web catalog. So, log into the Catalog Manager, navigate to the usage tracking web catalog, and choose all the reports that we want to migrate.

This pops up another screen wherein we choose the source RPD and the target RPD (the merged repository in our case). An online web catalog also has to be chosen as the target. The major advantage of this migration approach is that all we need is access to the target web catalog (which is accessed through the same HTTP server port). CAF utilizes two new utilities, nqXUDMLGen.exe and nqXUDMLExec.exe, which are similar to the older nqudmlgen.exe and nqudmlexec.exe. The major difference is that the entire repository is now visualized as an XSD schema containing references to the actual objects; the XSD schema is given below. The output of nqXUDMLGen.exe is no longer a UDML text file; rather, it is an XML file conforming to that XSD.

Click Next and select the subject area to which the report metadata will be mapped. In effect, in the previous screen you have to choose reports whose columns fall within a single subject area; in our case, UsageTracking.

In the next screen you will find a mapping between the source report columns and the target metadata. What I would have liked is an auto-mapper for cases where the source and target RPD columns match; unfortunately, we have to do the mapping one by one. To circumvent this, the mapping can be saved so that it can be reused in future migrations.

In the next step, enter the path within the target catalog. This is the path the reports will be migrated to in the target web catalog.

Now, if you open the target web catalog, you will notice that the reports have been migrated successfully.

Though the current extent of usage of this utility (customer adoption) is pretty small, the concept it introduces is very robust and can be reused across multiple implementations. It also introduces XUDML, which should give you a sneak peek into the world of 11g. Apart from the installation hassles, this utility does what it promises to do and does it very well. The only hiccup that I see is the Essbase-related XUDML issue, but I believe this is something that can be sorted out pretty soon. Hopefully we will see more similar tools.

Posted in All Posts, OBI EE Plus | 4 Comments »

Oracle BI EE – Ago and ToDate – An alternative SQL on Oracle 10g and above – Equivalence to MDX – MODEL Clause – Part 1

Posted by Venkatakrishnan J on April 30, 2009

One of the things I have been exploring recently is the ability to create SQL queries similar to the ones created by MDX. The basic idea behind this is that MDX has excellent calculation capabilities. It also has excellent member-based selection queries, which in most cases are relevant to relational reporting as well. In relational reporting we are mostly tied to columns, and for some use cases like AGO and TODATE the queries generated are more complex than necessary. For example, let's take the Ago function provided by BI EE. For demonstration purposes, let's create a Prior Year measure in the Global BMM as shown below.



Now, let's create a report as shown below.


All the above report does is calculate the units sold for each year and also display the prior year's units sold. There are 2 ways of generating queries for such a report:

1. Generate Units and Prior Year Units, both from the base time and fact tables.
2. Generate Units from the base time and fact tables, then derive the Prior Year units from the Units data.

Currently, BI EE generates queries using the 1st approach. But sometimes working on the resultant dataset is more efficient and predictable than going against the base tables again for the calculated dataset. If you look at the query generated to produce the above result, you would notice that it is pretty complex, and in fact it can generate wrong results if you do not have a proper chronological key.

select D1.c2 as c1,
     D1.c1 as c2,
     D1.c3 as c3
from (select D1.c1 as c1,
               D1.c2 as c2,
               D1.c3 as c3
      from (select sum(T10976.UNITS) as c1,
                     D3.c3 as c2,
                     D3.c2 as c3,
                     ROW_NUMBER() OVER (PARTITION BY D3.c2 ORDER BY D3.c2 ASC) as c4
            from UNITS_FACT T10976,
                     (select distinct D1.c1 + 1 as c1,
                               D1.c2 as c2
                      from (select min(D1.c1) over (partition by D1.c3)  as c1,
                                      D1.c2 as c2
                            from (select Case when case D1.c4 when 1 then D1.c2 else NULL end  is not null then Rank() OVER ( ORDER BY case D1.c4 when 1 then D1.c2 else NULL end  ASC NULLS LAST ) end as c1,
                                            D1.c2 as c2,
                                            D1.c3 as c3
                                  from (select T10939.MONTH_ID as c2,
                                                  T10939.YEAR_ID as c3,
                                                  ROW_NUMBER() OVER (PARTITION BY T10939.YEAR_ID ORDER BY T10939.YEAR_ID DESC) as c4
                                        from TIME_DIM T10939
                                       ) D1
                                 ) D1
                           ) D1
                     ) D4,
                     (select distinct min(D1.c1) over (partition by D1.c2)  as c1,
                               D1.c2 as c2,
                               D1.c3 as c3
                      from (select Case when case D1.c5 when 1 then D1.c4 else NULL end  is not null then Rank() OVER ( ORDER BY case D1.c5 when 1 then D1.c4 else NULL end  ASC NULLS LAST ) end as c1,
                                      D1.c2 as c2,
                                      D1.c3 as c3
                            from (select T10939.YEAR_ID as c2,
                                            T10939.YEAR_DSC as c3,
                                            T10939.MONTH_ID as c4,
                                            ROW_NUMBER() OVER (PARTITION BY T10939.YEAR_ID ORDER BY T10939.YEAR_ID DESC) as c5
                                  from TIME_DIM T10939
                                 ) D1
                           ) D1
                     ) D3
            where  ( T10976.MONTH_ID = D4.c2 and D3.c1 = D4.c1 )
            group by D3.c2, D3.c3
           ) D1
      where  ( D1.c4 = 1 )
     ) D1
order by c3


Sometimes the queries above are not necessary. This is where the MODEL clause, introduced in Oracle 10g, can come to the rescue. The MODEL clause lets you treat relational data as a multidimensional data source, and it typically works on top of a GROUP BY sub-query (think of the sub-query as a summarized cube). In future blog entries (and in a planned whitepaper) I will cover how the MODEL clause compares with working on data in an Essbase cube (both MDX and Calc Scripts). Today, I will just show how the Ago query above can be re-organized into a MODEL query. The query is given below.

A.month_id = b.month_id (+)
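As a rough illustration (not the exact query from the post), an Ago-style prior-year comparison can be written with MODEL on top of a GROUP BY sub-query along these lines. Table and column names follow the generated SQL above; the UNITS column on UNITS_FACT is an assumption:

```sql
select year_dsc, units, units_ago
from (-- the "summarized cube" the MODEL clause works on top of
      select t.year_id, t.year_dsc, sum(f.units) as units
      from   units_fact f, time_dim t
      where  f.month_id = t.month_id
      group  by t.year_id, t.year_dsc)
model
  dimension by (year_id)
  measures (year_dsc, units, cast(null as number) as units_ago)
  rules (
    -- for every year cell, look up the units of the previous year
    units_ago[any] = units[cv(year_id) - 1]
  )
order by year_id;
```

The point-and-offset cell reference `units[cv(year_id) - 1]` replaces the nested RANK/ROW_NUMBER machinery of the generated query above.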


Currently BI EE does not create MODEL clause based SQL queries out of the box, nor is it possible to hardcode them directly into the BI Server. I am hoping this feature will be added in the coming releases. And I have just scratched the surface of what the MODEL clause can offer; there are quite a few use cases where it can be extremely useful and efficient. I will cover them in future blog entries.

Posted in All Posts, EPM, Hyperion Essbase, OBI EE Plus | Leave a Comment »

Oracle BI EE – Hyperion Essbase Security Integration – Resolved

Posted by Venkatakrishnan J on April 27, 2009

Now that the new release is out, I thought I would test the security integration of BI EE and Essbase. If you have read my blog entries here and here, you would have noticed that there was a known bug in the earlier version of BI EE. Though a patch was released earlier to resolve this, I never got around to installing it. This is a patch release, so all the intermediary patches should be bundled with it as well.

To see how this has been resolved, let's use the Global cube that we built in an earlier blog entry. We shall test it with two users: admin, the super user, and admin1, who will have access only to the Global cube, with a filter assigned. To start, let's first create a filter in the Global cube as shown below

create filter 'Global'.'Global'.'Sample'
read on '"1999","Units","All Channels","Total Product"';

Once this filter is created, assign the filter to the admin1 user.
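Assigning the filter can also be scripted rather than done through the EAS console; a MaxL statement along these lines (a sketch following the MaxL grant syntax) should do it:

```sql
grant filter 'Global'.'Global'.'Sample' to 'admin1';
```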


So basically, admin1 has access to only one intersection; every other intersection would produce #NoAccess in the Excel add-in. As a next step, let's create a report out of the Global cube as the admin user.


Now let's create a report in BI EE against this Global cube as the admin user.


Let's switch the user to admin1 in the connection pool and try viewing the same report.


That's much better. By default, BI EE displays the #NoAccess intersections as null. The MDX generated remains the same.

with
  set [Channel2]  as '[Channel].Generations(2).members'
  set [Product2]  as '[Product].Generations(2).members'
  set [Time2]  as '[Time].Generations(2).members'
select
  { [Measures].[Units]
  } on columns,
  NON EMPTY {crossjoin ({[Channel2]},crossjoin ({[Product2]},{[Time2]}))} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from [Global.Global]


Let's change the filter a little, as shown below, to include more intersections in admin1's access.

create or replace filter 'Global'.'Global'.'Sample'
read on '@IDESCENDANTS("Time"),"Units",@IDESCENDANTS("All Channels"),"Total Product"';


with
set [Channel2] as '{[Channel].[All Channels]}'
set [Product2] as '{[Product].[Total Product]}'
set [Product3] as 'Generate({[Product2]}, Descendants([Product].currentmember, [Product].Generations(3),SELF), ALL)'
set [Time2] as '{[Time].[1998]}'
select
{ [Measures].[Forecast Units], [Measures].[Units] } on columns,
NON EMPTY {crossjoin ({[Channel2]},crossjoin ({[Product3]},{[Time2]}))} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from [Global.Global]


It does look like this indeed works, which makes things even more interesting. We now have a complete solution, and it surely opens up a lot more opportunities for integration.

Posted in All Posts, EPM, Hyperion Essbase, OBI EE Plus | 2 Comments »

Oracle BI EE – Running Hyperion Essbase Data Loads/Calculations from BI EE – Using JAPI and JSPs

Posted by Venkatakrishnan J on April 26, 2009

As consultants, one of the major hurdles we face with customers on a daily basis is that, however feature-rich a BI tool is, roughly 10 to 30% of the requirements (especially on big implementations) need some kind of custom work, and this can go up to 60% depending on the project. For example, last week a customer wanted to know of a method for kicking off Essbase allocations/calculations/data loads directly from BI EE. Now that they are using the BI EE/Essbase connectivity, they wanted to give some users the ability to kick off calculations directly from BI EE. Unfortunately, there is currently no out-of-the-box way of doing this, but with some amount of work the integration can be achieved. Today, let's look at how to go about it. This will serve as a precursor to another blog series in which I will walk you through some complex integration techniques between BI EE and Essbase.

The basic aim is to have a link in BI EE which, when clicked, runs a calculation in Essbase and then refreshes a report to show the newly calculated data. The first step is to create a simple JSP page that acts as a placeholder for firing the Essbase calculation. This JSP page in turn uses the Essbase JAPI to fire the calculation. In my case, I used JDeveloper to create the JSP page.

In order to create the JSP page, create a sample project and package called AllocEssbase.


Create a custom JSP page within this application. Include the Essbase jar files Ess_japi.jar and Ess_es_server.jar (the two main jar files that make up the Essbase JAPI) in the project classpath.


Now, create the JSP using the code below. The code basically executes a calculation script named CalcAll.

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<%@ page contentType="text/html;charset=windows-1252"%>
<%@ page import="java.io.*" %>
<%@ page import="java.util.Map" %>
<%@ page import="java.util.Map.Entry" %>
<%@ page import="java.util.jar.Attributes" %>
<%@ page import="java.util.Iterator" %>
<%@ page import="com.essbase.api.base.*" %>
<%@ page import="com.essbase.api.dataquery.*" %>
<%@ page import="com.essbase.api.session.*" %>
<%@ page import="com.essbase.api.datasource.*" %>
<%@ page import="com.essbase.api.domain.*" %>
<%@ page import="com.essbase.api.metadata.*" %>
<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=windows-1252"/>
  </head>
  <body>
  <font size="5"><%="Essbase Allocation Process Started" %></font>
<%
        String s_userName = "admin";
        String s_password = "password";
        String s_olapSvrName = "localhost";
        String s_provider = "http://localhost:13080/aps/JAPI";
        try {
            // Sign on through the provider and get a handle to the Global.Global cube
            IEssbase ess = IEssbase.Home.create(IEssbase.JAPI_VERSION);
            IEssDomain dom = ess.signOn(s_userName, s_password, false, null, s_provider);
            IEssOlapServer olapSvr = (IEssOlapServer)dom.getOlapServer(s_olapSvrName);
            olapSvr.connect();
            IEssCube cube = olapSvr.getApplication("Global").getCube("Global");
            // Execute the CalcAll calculation script against the cube
            cube.calculate(false, "CalcAll");
            // A MaxL statement could be fired instead, e.g. for a data load:
            String maxLstat = "import database 'Global'.'Global' data connect as 'global' identified by 'global' using server rules_file 'UnitsLd' on error abort;";
            //IEssMaxlSession maxLsess = (IEssMaxlSession)dom.getOlapServer(s_olapSvrName);
            olapSvr.disconnect();
        }catch (EssException x){
            System.out.println("ERROR: " + x.getMessage());
        }
%>
  <font size="5"><%="Essbase Allocation Process Ended" %></font>
<%
    response.sendRedirect("http://localhost:9704/analytics");
%>
  </body>
</html>


As you can see, this is a very basic JSP that runs a calculation script; you can use the same approach for data loads or just about anything else in Essbase. As a next step, execute the JSP from JDeveloper itself to make sure it works.


This JSP then has to be deployed on the OC4J (or app server) of BI EE. Since it uses a couple of external jar files, the only way to deploy it is by bundling it as a WAR file. To do that, create a new deployment profile and include both the jar files.



Deploy the WAR file in the OC4J of BI EE.



Now you should be able to access the JSP using the URL given below.


Embed this URL in the BI EE dashboards.


This should get you started. There are lots of different uses one can put this to, such as kicking off Planning loads, ODI integration, BI Scheduler integration and so on. I will cover these use cases in detail in the coming blog entries.

Posted in All Posts, EPM, Hyperion Essbase, OBI EE Plus | 1 Comment »