Business Intelligence – Oracle

Archive for February, 2009

Oracle BI EE 10.1.3.4 – Switching between Production and Development Data Sources using same Reports and Dashboards

Posted by Venkatakrishnan J on February 27, 2009

I was recently involved in a client project where the client was implementing BI EE on top of an Oracle database. Since they were on a shoestring budget, they could not afford separate servers for BI EE development and production. Though not an ideal situation, they did have two instances of the database (one for development and the other for production), both populated from an external source using Informatica. Since the client had only one instance of BI EE, they wanted to know whether it was possible to switch between the production and development databases by means of a single prompt (for administrators alone, to test data validity across the database instances). Though an unusual request, it made sense given how frequently the dev database was changing. They also wanted one set of users to run a set of reports against the dev database while another set of users ran the same reports against the production database. The diagram below details what is required.

       

Let's look at an approach that achieves the above requirement. In effect, our aim is to have a prompt across all the reports that switches between the development and production databases (for administrators). For demonstration purposes, I will use the Global schema. Let's treat the Global schema as the development database, and another schema called GLOBAL_PROD as the one holding the production tables.

       

As you can see, we have the same set of tables in both the prod and dev databases. Typically, an administrator would want to run the reports against both dev and prod to spot any possible issues. So, let's start with designing the BMM layer for these two schemas. The idea is to create a single BMM layer that fetches data from either prod or dev depending on the fragmentation set in the repository. To start with, let's create a simple select view in both dev and prod that returns the value we will be choosing in the prompt (Dev or Prod).

SELECT 'Dev' AS SOURCE FROM DUAL

SELECT 'Prod' AS SOURCE FROM DUAL

       

Now, for each of these new sources, create a complex join to the fact tables, as shown below.

       

Now, in the BMM that we are building, add a custom dimension with a single column called Source, using both of the above select views as logical table sources. Also, set the fragmentation so that one LTS is chosen when Dev is selected in the prompt and the other when Prod is selected, as shown below.

       

Now, for all the dimensions and the fact tables, include both the prod and dev sources. Ensure that fragmentation is set on each of them to Prod or Dev depending on the LTS. For example, the Dev LTS of the fact table is shown below.

       

I have not shown the dimension build and the other aspects of the RPD BMM, as everything else remains the same. Instead of one LTS for each dimension, use two: one for prod and the other for dev. Of course, as your RPD becomes more complex, adding more and more LTSs will make the RPD grow big; that is something to consider. But this approach is very flexible and can be implemented by anyone with a similar requirement.
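For reference, the fragmentation content on the dev and prod logical table sources would look roughly like this (a minimal sketch — the "Global" business model and "Source" names follow this example and will differ in your own BMM):

Dev LTS  – fragmentation content:  "Global"."Source"."Source" = 'Dev'
Prod LTS – fragmentation content:  "Global"."Source"."Source" = 'Prod'

The same pair of expressions goes on the dev and prod LTSs of every dimension and fact logical table described above.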

Now let's log into Answers and check how this works. Create a very simple report as shown below and set the Source dimension column to Is Prompted in the report filter. Also create a prompt to show the Source values Prod and Dev. Include the report and the prompt in a dashboard. Let's choose Dev first and run the report.

       

If you look at the query, you will notice that the Dev database is used for generating the above report.
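Roughly, the physical SQL in the query log would look something like the sketch below (the table and column names are purely illustrative; the point is the schema prefix, which is GLOBAL for dev and GLOBAL_PROD for prod):

select T1.CALENDAR_YEAR, sum(T2.UNITS)
from GLOBAL.TIME_DIM T1, GLOBAL.UNITS_FACT T2
where T1.MONTH_ID = T2.MONTH_ID
group by T1.CALENDAR_YEAR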

       

The same would be the case for Prod as well. But one drawback is that, if we do not include this filter, we get a union all from both the LTSs, as shown below.

       

       

One cannot expect every user to add a prompt like the one above to each and every report. So, to avoid the duplicate-row issue and to ensure that a group of users automatically gets a Dev or Prod filter assigned to them, just add custom security groups and apply the filter, as shown below, to each and every logical table in the BMM. One security group would carry the Dev filter and the other the Prod filter. Then, whenever a user from one of these groups creates a report, the Dev or Prod filter is passed automatically and the fragmentation happens properly.
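As a rough sketch (the group names are illustrative; the filter expression is the same one used for the fragmentation above), the filters assigned in the Admin Tool would look like:

Security group "Dev Users"  – filter on each logical table:  "Global"."Source"."Source" = 'Dev'
Security group "Prod Users" – filter on each logical table:  "Global"."Source"."Source" = 'Prod'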

       

       

Posted in All Posts, OBI EE Plus | 5 Comments »

Hyperion Essbase 11.1.1.0 – Database Archive and Replay in BSO Cubes

Posted by Venkatakrishnan J on February 18, 2009

One of the new features introduced in the EPM 11.1.1.0 release is the ability to bring an Essbase database back to its latest state after a failure. Restoring a database from a backup only takes it back to the point in time when the backup was taken, but there may have been transactions/data loads after that backup. Since those are not part of the backup, up to the previous release (9.3.1) we could restore the database only to the time of the backup. In the current release, the data from those later transactions/data loads can be recovered as well, using the Database Replay feature. Before we see how replay works, let's quickly go over the basics of database backup in Essbase.

BSO cubes can be backed up in multiple ways. The most commonly used one is the database archiving feature, which in 11.1.1.0 is supported only for BSO cubes. In 9.3.1, archive works even on ASO databases (only the ESSCMD version works; the MaxL version does not. Before using ESSCMD on ASO cubes in 9.3.1, get a confirmation from Oracle Support, as the documentation clearly says it is not supported). The archive works in three steps:

1. Put the database in read-only mode using the begin archive command.
2. Manually back up all the files listed in the archive file generated by the begin archive command.
3. Revert the database to normal mode using the end archive command.

Basically, the begin archive command does the following:

1. Commits to disk any modified data that has not yet been committed.
2. Puts the database in read-only mode.
3. Creates a file listing all the Essbase-specific files that need to be backed up.

As you can see, the archive command only facilitates the backup; it does not perform the backup itself, which was the case up to the last release. In the current release, the backup itself can also be done, using the Archive and Restore feature. Let's first use the traditional approach described above, in MaxL:

alter system logout session on application 'Demo';
alter application 'Demo' disable connects;
alter database 'Demo'.'Basic' begin archive to file 'D:\DemoBasicArchive.lst';

The above MaxL generates the archive list file shown below.

Once this file has been generated, the listed files have to be backed up manually, or the entire application folder can be copied using external backup software. After the copy completes, the database is brought back to read-write mode with the end archive command; a minimal MaxL sketch of that step, mirroring the statements above, would be:
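alter database 'Demo'.'Basic' end archive;
alter application 'Demo' enable connects;

In the current release, even that backup step has been made simple, using the MaxL grammar shown below: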

alter database 'Demo'.'Basic' archive to file 'D:\DemoBasicArchive.arcfile';

The above command can also be run from the Administration Services Console, as shown below.

And the same database can be restored again using

alter database 'Demo'.'Basic' restore from file 'D:\DemoBasicArchive.arcfile';

Now that we have seen how to back up/restore an Essbase database, let's look at the Database Replay option. Database replay works on the premise that transaction logging has been enabled on the database. This is done through a couple of settings in the essbase.cfg file, given below:

TRANSACTIONLOGDATALOADARCHIVE Demo Basic SERVER_CLIENT
TRANSACTIONLOGLOCATION Demo Basic D:\EssbaseReplay NATIVE ENABLE

The above settings in the Essbase configuration file tell Essbase where to store the transaction logs written after a backup. The important aspect of this feature is that it can be used much like the Flashback option in an Oracle database. To test out replay, let's follow a simple sequence:

1. Archive the database.
2. Update the data in Essbase using the Excel Add-in.
3. Restore the database.
4. Replay the transactions to check that we get back the data updated from the Excel Add-in.

Let's take a very simple Excel Add-in report, as shown below. This is the data we have before doing the archive.

Now let's go ahead and archive the cube.

After the archive, let's update the data in Excel as shown below. After changing it, update the cube using the Lock and Send mechanism.

Now, let's restore the cube using the Restore option. Before restoring, we have to stop/unload the database.

If we go back to Excel, we get the data that we had before the Lock and Send transaction.

Now, let's apply Replay Transactions to bring back the transactions that happened after the backup. This can be done from the Administration Services Console or through MaxL; a rough MaxL sketch is shown below (the timestamp is only a placeholder — list the logged transactions first and pick a cut-off just after the restore point).
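/* list the logged transactions to find the right cut-off time */
query database 'Demo'.'Basic' list transactions;

/* replay everything logged after the given time; placeholder timestamp in MM_DD_YYYY:HH:MM:SS format */
alter database 'Demo'.'Basic' replay transactions after '02_18_2009:09:00:00';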

This should bring back the data that we had updated from Excel.

This replay option works for data loads done through rule files as well as through data files.

Posted in All Posts, EPM, Hyperion Essbase | 9 Comments »

Oracle Data Integrator 10.1.3.5 – Loading Data into Hyperion Essbase Cubes from Relational Data Sources – Using Rule Files

Posted by Venkatakrishnan J on February 13, 2009

I was on a call yesterday with an Oracle partner who was developing a custom data loading solution (for a complex reporting system involving Essbase, Oracle OLAP and Oracle 10g) for one of their customers. One of the issues they were facing was in using ODI to do data loads into Hyperion Essbase. Before I proceed further, I would like to point out John’s article series here, which details how to go about using ODI for loading into Essbase. The issue faced by the partner was that they were not comfortable using the Java API (JAPI) that the ODI knowledge modules rely on, as they felt the data loads could be much faster using traditional approaches. They were used to loading data through rule files using the SQL interface, and they were of the opinion that the SQL interface performed better than the JAPI. Though that conclusion was not backed by any quantifying facts/metrics showing the JAPI load to be slower than the SQL interface, the question made sense as the partner was fairly new to ODI and not comfortable using the JAPI for data loads. One feature they did like about the ODI and Essbase integration was the ability to define Essbase connections, which offered the flexibility of doing every data load through ODI interfaces. The partner basically wanted a way to create an ODI interface with no mappings to the source, where all the interface does is call a rule file through the SQL interface and load the data. I thought I would blog about this since it is a fairly common requirement, considering that many Essbase customers use the SQL interface rather than the JAPI. First, let us start with creating a physical schema using the Hyperion Essbase technology.

       

The next step is to create a logical schema and establish a relationship between the physical and logical connections using a context.

       

Now, let's jump over to the Designer and create a model. While importing the objects, choose customized reverse-engineering and enter the relevant details as shown below.

       

       

       

Now, the next step is to create a custom project (or use an existing one); this project should have all the relevant knowledge modules imported. Then duplicate the “IKM SQL to Hyperion Essbase (DATA)” and give it a custom name, “IKM Essbase Data Load using Rule Files”.

       

And in the duplicated (custom) IKM, remove the steps “Load Data Into Essbase” and “Report Statistics”.

       

The reason for removing these two steps is to ensure that the IKM does not use the JAPI to do the data load. Now, let's create a new interface and include the fact table of the Global cube as the target in the interface. Ensure that the staging area is the SUNOPSIS_MEMORY_ENGINE.

       

       

Now go to the Flow tab and choose the custom IKM that we created above. Since we have removed the two steps, Load Data Into Essbase and Report Statistics, not all of the options you see for the IKM are relevant anymore. The only options that are still relevant are given below.

       

Now, in order to load the data through the rule file and the SQL interface, create a MaxL script as shown below and save it as GlobalUnitsLoad.maxL:

spool on to 'D:\GlobalUnitsLoad.log';
import database 'Global'.'Global' data connect as 'global' identified by 'global' using server rules_file 'UnitsLd' on error write to 'D:\GlobalUnitsLoad.err';

As you can see, the MaxL script spools the log to a file and calls the rule file to do the data load; since the rule file sources its data over SQL, the load happens through the SQL interface. Now, come back to the interface and enter the details below in the IKM options. The MaxL script is called through the PRE_LOAD_MAXL_SCRIPT option.
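For example (the path is the one used for the script saved above; the remaining options are largely irrelevant here since the load steps have been removed):

PRE_LOAD_MAXL_SCRIPT = D:\GlobalUnitsLoad.maxL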

       

Now save and run this interface.

       

With this, you should be able to bypass the JAPI for your data loads. This is not always the recommended option, but nevertheless it is something that anyone who is new to ODI and used to loading through rule files and the SQL interface will want.

Posted in All Posts, EPM, Hyperion Essbase, Oracle Data Integrator | 4 Comments »

Oracle BI EE 10.1.3.4 and Hyperion Web Analysis – Drill from BI EE to Web Analysis – Part 1

Posted by Venkatakrishnan J on February 10, 2009

If you have looked at my blog entry here, you will have seen how to go about drilling from BI EE to HFR. Along the same lines, let's look at another common requirement, which is to drill from BI EE to Web Analysis. As I have said here before, the fundamental idea is to get the URL parameters for Web Analysis and understand how Web Analysis accepts its parameters. To understand this, let's create a simple report using Web Analysis Studio, as shown below.

       

Our aim is to determine the URL parameters for Web Analysis. So, for one of the measure cells, add Related Content and just include the same report, as shown below.

       

       

Now, as you can see, there are three types of links offered by Web Analysis:

1. Embedded Link
2. Java Link (The Web Analysis Studio Link)
3. HTML Link

Let's analyze the three links one by one. The first one is the Embedded link.

http://localhost:19000/workspace/index.jsp?module=tools.relatedcontent&repository_uuid=0000011f02bc7ee8-0000-0000-0a0a0a0a&repository_format_id=html&repository_name=Parent%20Report&repository_path=/Users/admin/Reports/Parent%20Report&sso_token=$SSO_TOKEN$&$CONTEXT$&mimetype=application/hyperion-analyzer-report

As you can see, this has the following parameters:

1. module – A workspace-specific parameter which specifies that the report opens as related content.
2. repository_uuid – A constant id for the Web Analysis report.
3. repository_format_id – Since Workspace can embed multiple objects of different mime types, this specifies the output type of the report.
4. repository_name – Name of the Web Analysis report.
5. repository_path – Path to the actual Web Analysis report.
6. sso_token – This token is used for all SSO-enabled Workspace content. If it is passed in the URL, the target application opens without signing in. (If SSO is not enabled, sso_username and sso_password can be passed in the URL instead.)
7. $CONTEXT$ – This passes the actual POV of the cell that was clicked. This is the most important parameter, as it tells us how parameter values are passed to Web Analysis.
8. mimetype – This determines the application type.

Typically it is recommended to use the Embedded link, as the output type is driven by the URL parameter itself. The second link is the Java link, which drills into Web Analysis Studio; it is not used much unless the drill needs to happen within the studio.

http://localhost:19000/WebAnalysis/WebAnalysis.jsp?sso_token=$SSO_TOKEN$&report=/Users/admin/Reports/Parent%20Report&$CONTEXT$&sso_token=$SSO_TOKEN$&$CONTEXT$&mimetype=application/hyperion-analyzer-report

The parameters are essentially the same as those of the Embedded link, but not every parameter of the Embedded link is supported in the Java link.

The third type of link is the HTML link. This is used for generic content viewing in HTML. Its URL is shown below

http://localhost:19000/WebAnalysis/servlet/HTMLViewer?sso_token=$SSO_TOKEN$&action=login&postLogin=LoadReport&report=/Users/admin/Reports/Parent%20Report&$CONTEXT$&sso_token=$SSO_TOKEN$&$CONTEXT$&mimetype=application/hyperion-analyzer-report

In all three links above, the most important parameter is $CONTEXT$. This is what passes the individual Point of View (POV) values down to the target. To pass this from BI EE, we need to know how the $CONTEXT$ parameter is structured. So, instead of using a report as related content, let's include a custom link as shown below.

       

Now, launch this URL from within Web Analysis Studio. You will see a new browser window with the actual CONTEXT parameter expanded, as shown below.

http://sample/?cepversion=1.1.0&applicationtype=WebAnalysis&applicationurl=&applicationversion=11.1.1.1.0.797&dsclass=OLAP&disconnection=ADM&dsurl=&dsprovider=&schema=SH&attribute=Times%2eid%2eTimes&attribute=Promotion%2eid%2eNO+PROMOTION&attribute=Product%2eid%2eProduct&attribute=Measures%2eid%2eMeasures&attribute=Channel%2eid%2eChannel&attribute=Customers%2eid%2eRio+de+Janeiro&attribute=SH%2evalue%2e3%2e0

       

In HFR, the $CONTEXT$ that we saw earlier had a specific format. If you look closely, Web Analysis follows a similar pattern. To pass individual member values down to the report, just use

attribute=<Dimension Name>.id.<Dimension Value>

For example,

attribute=Times.id.2008

There are other parameters in the URL that are specific to Web Analysis. Now that we have seen how the Web Analysis URL parameters are structured, the integration from BI EE remains the same as before: the idea is to generate the above URL dynamically (which I have shown in the HFR blog entry). An example is given below.
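As a rough sketch (the presentation column names, credentials and report path here are purely illustrative; take the fixed portion of the URL from the Related Content link of your own target report), the Answers column formula would concatenate the URL with the attribute parameters like this:

'<a href=http://localhost:19000/workspace/index.jsp?module=tools.relatedcontent&repository_format_id=html&repository_name=Parent%20Report&repository_path=/Users/admin/Reports/Parent%20Report&sso_username=admin&sso_password=password&attribute=Times.id.' || Times."Calendar Year" || '&attribute=Customers.id.' || Customers."City" || ' >Web Analysis Drill</a>'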

       

Posted in All Posts, EPM, Hyperion Essbase, Hyperion Web Analysis, OBI EE Plus | 2 Comments »

Oracle BI EE 10.1.3.4 – Integration between BI EE & Hyperion Financial Reporting(HFR) – BI EE Essbase Reports to HFR Drills – Part 3

Posted by Venkatakrishnan J on February 6, 2009

This is the third and final part of the BI EE and HFR integration series. In the prior blog entries here and here, I had shown the integration between a relational BI EE report and an HFR report. But there are cases wherein one would need integration between an Essbase-based BI EE report and an HFR report. In such cases, it is not possible to generate the dynamic URL directly from BI EE as described before, mainly because concatenating custom HTML strings is not supported in MDX queries. So, in order to achieve this integration, we have to bring in a relational source somehow. Let's understand this through an example. Consider the sample report in BI EE shown below.

       

As you can see, it is a very simple report based on the Demo->Basic cube. Our intention is to have a custom hyperlink column called Drill which is used for drilling to an HFR report. While drilling, the corresponding row member names for Gen3, Year and Gen3, Market need to be passed to the target report. The target HFR report is given below.

       

What we did here earlier was use the capability of BI EE to generate a dynamic URL from Answers itself. But if we do that on a purely Essbase-sourced Answers report, it throws an error, as normal string functions are not supported on a MOLAP source. To overcome this, we shall use the horizontal federation approach. For this we need two conforming dimension tables, as shown below (these dimension tables are where we pick up the values that are passed into the URL).

       

       

Now, let's set up horizontal federation for the two conforming Year and Market dimensions by importing these two tables into the physical layer. Also create a custom select-based physical table (opaque view) in the physical layer, as shown below.

SELECT
YEAR,
QUARTER,
MONTH,
MARKET,
REGION,
CITY,
'<a href=http://localhost:19000/workspace/index.jsp?module=tools.relatedcontent&repository_path=/BIEE%20Integration&elementType=2&repository_name=BIEE%20Integration&repository_format_id=html&run=1&sso_username=admin1&sso_password=welcome1&attribute=Product.id.Product&attribute=Market.id.'||City||'&attribute=Year.id.'||Month||'&attribute=Accounts.id.Profit&attribute=Scenario.id.Actual >HFR Drill</a>' CustomURL
FROM
MARKET,
YEAR

       

As you can see, what this view does is create a cartesian join between the Year and Market tables; this is what we will use to produce our dynamic URL, and it also acts as a fact placeholder for generating it. Be careful while using this cartesian join approach on large dimensions — if you have very big dimensions, I would recommend using a proper fact view with only the necessary dimension values. The screenshots below show how to set up the federation. The first step is to add another provider LTS for the Year dimension covering all its columns.

       

Similarly, set the same up for the Market dimension. Once that is done, include the cartesian view we created in the BMM. The idea is to use the CustomURL column (from the query above) as the column that generates the dynamic URL. Since we are including it as a fact column and we are using the conforming dimensions approach, we need to ensure that the Essbase measures and the CustomURL column are at the same grain. So, make sure to set First or Last as the aggregation rule of the CustomURL column.

       

       

       

       

       

Now, let's create the Answers report as shown below.

       

As you can see, we get the URL in our report along with the Actual and Budget measures. Now, let's change the data format of the URL column to HTML. This converts it into a drillable column, as shown below.

       

       

Now, if you click on any one of the links, it takes you to the HFR report, passing the Year and Market dimension values as parameters as well.

       

There are other possible approaches as well, but I wanted to use this one to demonstrate the kind of flexibility BI EE offers from an integration standpoint. To make it even more flexible, instead of hardcoding the values in the opaque view, just create a function in the database and use EVALUATE to generate the URL.
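As a minimal sketch of that idea — the function name, parameter list and URL skeleton below are purely illustrative and would need to be adapted to your own report path — the database function could look like this:

-- Hypothetical helper function that builds the HFR drill link for a given month and city
CREATE OR REPLACE FUNCTION HFR_DRILL_URL (p_month IN VARCHAR2, p_city IN VARCHAR2)
RETURN VARCHAR2
IS
BEGIN
  RETURN '<a href=http://localhost:19000/workspace/index.jsp?module=tools.relatedcontent'
      || '&repository_path=/BIEE%20Integration&repository_name=BIEE%20Integration'
      || '&repository_format_id=html&run=1'
      || '&attribute=Market.id.' || p_city
      || '&attribute=Year.id.' || p_month
      || '&attribute=Accounts.id.Profit&attribute=Scenario.id.Actual >HFR Drill</a>';
END HFR_DRILL_URL;
/

The CustomURL column would then call it through EVALUATE, for example EVALUATE('HFR_DRILL_URL(%1, %2)', "Year"."Month", "Market"."City"), with the column references adjusted to your own model.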

Posted in All Posts, EPM, Hyperion Essbase, Hyperion Financial Reporting, OBI EE Plus | Leave a Comment »

Oracle BI EE 10.1.3.4 and Essbase Connectivity – Differences between EVALUATE and EVALUATE_AGGR – Creating Custom Measures

Posted by Venkatakrishnan J on February 4, 2009

One of the frequent questions I have been getting recently on the Essbase and BI EE connectivity is about the fundamental difference between EVALUATE and EVALUATE_AGGR. If you have looked at my blog entries here and here, you will have seen two variations of using EVALUATE: the former shows how to use EVALUATE for producing dynamic members, the latter how to use it to produce custom measures. Since both are possible using EVALUATE itself, the question was, “What is the use of EVALUATE_AGGR?”. Well, there are two contexts that determine how EVALUATE and EVALUATE_AGGR behave against Essbase:

1. When used in Answers
2. When used in the Repository

In the first case, the MDX query is determined by the BI Server. Since we do not have the option of specifying from the UI whether the EVALUATE column is a dimension column or a fact measure, we are sometimes at the mercy of the BI Server's choice of MDX. That is where my series of blog entries here is helpful in understanding how the BI Server generates these MDX queries. The second case is more straightforward:

1. When used in the repository, EVALUATE should be used for generating members, or one-to-one values such as a rank, within dimensions. It accepts dimension generations as its parameters.
2. When used in the repository, EVALUATE_AGGR should be used for generating custom fact measures. It accepts static measure (Accounts dimension) members as parameters; it can also accept dimension generations, but at least one static measure member is a must.

Let's understand this with an example. We shall use the same example that we used here: our aim is to create a measure in the repository that computes the variance between the Actual and Budget measures. From Answers, it is not possible to pass measure values to EVALUATE, as it supports only dimension generations as input; that is why, in that post, we had hardcoded the measure members within the EVALUATE function. Instead, let us go to the repository and create a new measure column as shown below.

       

This custom measure is defined manually in the repository using the EVALUATE_AGGR expression shown below.

EVALUATE_AGGR('%1 - %2', "localhost"."Demo".""."Basic"."Actual", "localhost"."Demo".""."Basic"."Budget")

       

Ensure that this measure has Evaluate_Aggr as its aggregation property.

       

Now, let's go to Answers and create a report using this custom measure.

       

If you look at the MDX query, you will notice that the custom measure has been correctly translated into a calculated member in the MDX itself.

       

With
  set [Market3]  as '[Market].Generations(3).members'
  member [Scenario].[MS1] as '[Scenario].[Actual] - [Scenario].[Budget]', SOLVE_ORDER = 101
select
  { [Scenario].[Actual],
    [Scenario].[Budget],
    [Scenario].[MS1]
  } on columns,
  NON EMPTY {{[Market3]}} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from [Demo.Basic]

So, for creating any custom measure in the repository, EVALUATE_AGGR is the function to use.

Posted in All Posts, EPM, Hyperion Essbase, OBI EE Plus | Leave a Comment »