Business Intelligence – Oracle

Archive for December, 2008

Hyperion Essbase 9.3.1 – Currency Conversion in ASO Cubes – Using MDX and Dynamic Hierarchies – Approach 1

Posted by Venkatakrishnan J on December 31, 2008

In a couple of earlier blog entries, here and here, we saw an alternative approach to currency conversion in BSO cubes. One of the main reasons I introduced that approach was to ensure that the approach to currency conversion remains seamless as we move to ASO. The idea remains the same, i.e. we load the rates and the data within the same cube. The major difference between currency conversion in ASO and BSO is that ASO conversion is dynamic, i.e. the conversion happens during data retrieval, whereas in BSO the conversion happens during data aggregation/calculation. To start with, let us use the Aggregate Outline Migration Wizard to migrate the outline from BSO to ASO. When you migrate, ensure that you use the interactive migration so that you can modify the violating members manually. Following are the changes that one would have to make in the outline.

1. Convert the Products dimension to be multiple hierarchies enabled, and change the Diet member to be a dynamic hierarchy. This is because ASO does not support multiple shared members within a stored hierarchy.

2. Convert the Scenario dimension to have multiple hierarchies. The idea is to load all the input local currency data into the stored Local hierarchy, and have all the conversions happen under the dynamic Dollar hierarchy. We need to convert Dollar to a dynamic hierarchy since ASO does not support calculations on stored members of non-Accounts dimensions.

3. Change the input data for Rates to be loaded against Act and Bud intersection members instead of Actual and Budget.

4. Create a new ASO cube and copy the members from the wizard to the ASO outline.

5. Copy the rule files that we created the other day from the BSO cube to the new ASO cube.

6. The outline conversion wizard does not automatically convert the BSO formulas to ASO MDX formulas. So, change the formulas for the members shown below

Margin % ----- ([Margin] - [Sales])/[Margin] * 100
Profit % ----- ([Profit] - [Sales]) / [Profit] * 100
Variance ----- [Actual] - [Budget]
Variance % ----- ([Actual] - [Budget]) / [Actual] * 100

7. This ensures that the outline is now ASO compliant. In order to accommodate the currency conversion, we need to add formulas to the Actual, Budget and Actual @ Budget XChg members. The formulas are given below

Actual ----- [Act] / ([Act],[Rate Product]).value
Budget ----- [Bud] / ([Bud],[Rate Product]).value
Actual @Budget XChg ----- [Act] / ([Bud],[Rate Product]).value

As you see, all we are doing is making the Actual, Budget and Actual @ Budget XChg members dynamically calculated using the MDX formulas above.

8. Let us load the data using the same rule files which we copied over from BSO. Once that is done, let us look at the data to see whether the calculations are done correctly.
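
If you want to spot check the converted numbers outside of the spreadsheet retrieval, a quick MDX query against the migrated cube works as well. The query below is only a sketch: the application.database name (IntlASO.IntlASO) and the Market and Product members in the slicer are placeholders, so substitute whatever exists in your migrated outline.

SELECT
{ [Scenario].[Act], [Scenario].[Actual], [Scenario].[Budget] } ON COLUMNS,
{ [Year].Children } ON ROWS
FROM [IntlASO.IntlASO]
WHERE ( [Measures].[Sales], [Market].[New York], [Product].[100-10] )

If the member formulas are working, the Actual and Budget columns should simply be the Act and Bud input values divided by the corresponding rates stored against Rate Product.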

As you see, the numbers above match exactly with what we had in the BSO cube. But the drawback with ASO is that certain BSO specific features like expense reporting are not supported; these would have to be handled manually using MDX and UDAs. Also, the above approach might not be suitable in cases where retrieval performance is very important. For such cases, we shall look at another approach in a future blog entry.

Belated New Year Wishes everyone!!!

Posted in All Posts, EPM, Hyperion Essbase | 3 Comments »

Oracle Data Integrator 10.1.3.5 – Configuring Metadata Navigator & Lightweight Designer

Posted by Venkatakrishnan J on December 30, 2008

I had the opportunity to dedicate some of my time to Oracle Data Integrator 10.1.3.5 today. Oracle Data Integrator is a very interesting ETL tool since it is based on an ELT architecture, like Oracle Warehouse Builder. But apart from that, the similarity between these two tools ends, both in terms of product usage and product-related jargon. Now that there are plans to use ODI as the back-end ETL tool for loading data into Hyperion Financial Management, Hyperion Planning, Hyperion Essbase (connectivity to these three already exists) and BI Applications, my interest in this tool has grown a bit further. I have worked with both Informatica and Oracle Warehouse Builder in the past, and hence it was a very interesting prospect to analyze the similarities and differences across these toolsets. My intention is to primarily cover the connectivity to HFM, Hyperion Planning and Hyperion Essbase, which I will do in future blog entries. For an introduction to this tool, I would recommend Mark's series of blog entries here. In today's blog entry we shall cover the new features (the web based Metadata Navigator and Lightweight Designer) introduced in the 10.1.3.5 version of Oracle Data Integrator. Though the documentation is technically correct, certain details are missing. In this blog entry, I will briefly explain how to go about configuring these two pieces as part of an Oracle Data Integrator installation.

Pre-requisites (for the below set of steps to work):

1. Oracle Data Integrator 10.1.3.5 should have been installed
2. The master repository should be created on an Oracle Database
3. A sample Work Repository called WORK REPOSITORY should have been created.
4. A sample OC4J instance. If you have BI EE installed, then just use that instance itself. These navigators are pretty lightweight.

Configuring Metadata Navigator:

Configuring the Metadata Navigator is straightforward. All we need to do is deploy the corresponding war file in the OC4J instance.

1. Open the OC4J console and deploy oracledimn.war (from the installation setup/manual folder) in the OC4J container.
2. Use oracledimn as the application context and click on Next.

3. Just deploy the war file with the default configurations.

4. Now we should be able to open up the Metadata Navigator console. The link to be used is http://<hostname>:<port>/oracledimn

5. Ensure that you have configured your work repository. Also, ensure that you are able to log in to the work repository using the Designer java client. In order for the web application to use the same repository connection details, just copy the {OracleDI_HOME}\oracledi\bin\snps_login_work.xml file to the {OC4J_HOME}\j2ee\home\applications\oracledimn\oracledimn\WEB-INF directory. This will ensure seamless login of the Metadata Navigator into the repository.

6. Restart OC4J. Now, we should be able to browse the metadata navigator.

As you see, we have the entire metadata repository (Topology) in the web based console. One very good thing is that this is very lightweight and much easier to configure than the OWB metadata repository. Currently this is just a navigator; online updates are not supported. This, I believe, is a big difference between Informatica and the two ELT based tools, since Informatica currently supports online updates to the repository, Integration Service etc.

Configuring Lightweight Designer:

Configuring the Lightweight Designer involves more detailed configuration than the Metadata Navigator, and some of the steps are not explained in detail in the configuration guide. Let's go through the steps one by one.

1. Deploy the oracledilwd.war in the same OC4J container as above. Use oracledilwd as the application context. Ensure that you are able to access the login page of the online designer using the URL http://<hostname>:<port>/oracledilwd

2. Since we are using Oracle as the repository database, we need to copy the Oracle JDBC jar file to this application. So, copy ojdbc14.jar from {OC4J_HOME}\j2ee\home\applications\oracledimn\oracledimn\WEB-INF\lib to {OC4J_HOME}\j2ee\home\applib. The ojdbc14.jar would have been deployed automatically when we installed the Metadata Navigator above. If you do not have that navigator, you can get the jar file from the ODI or Oracle DB installation directory.

3. Restart OC4J and open up the OC4J Enterprise Manager. Go to the Administration tab and then to the JDBC Resources configuration screen.

4. We need to create 1 connection pool and 2 data sources. Let us start by creating the connection pool first. Give this connection pool the name ORACLE_MASTER, though the name itself does not matter in this case.

Ensure that you are also providing the schema details of the master repository and test the connection. It should be successful. This will validate the jar file copy that we did above.

5. Now, we need to create 2 data sources, one with the name ORACLE_MASTER and the other with the name ORACLE_WORK. Ensure that the details for these data sources are as shown below. The JNDI path is very important; if it is wrong, the connection will not happen.

Do not worry about the name Lightweight Designer Application for the connection pool in the above screenshots. The name of the connection pool can be anything; in your case, it would be ORACLE_MASTER.
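
For reference, the entries that the console writes into {OC4J_HOME}\j2ee\home\config\data-sources.xml end up looking roughly like the sketch below. The pool name, user, password and JDBC URL are placeholders for your master repository schema; only the data source names and the jndi-name values (jdbc/ORACLE_MASTER and jdbc/ORACLE_WORK) need to match what we reference in the next steps. If your work repository sits in a different schema, create a second connection pool for it and point ORACLE_WORK at that pool instead.

<connection-pool name="ODI Repository Pool">
    <connection-factory factory-class="oracle.jdbc.pool.OracleDataSource"
        user="odi_master" password="password"
        url="jdbc:oracle:thin:@dbhost:1521:orcl"/>
</connection-pool>
<managed-data-source name="ORACLE_MASTER"
    connection-pool-name="ODI Repository Pool"
    jndi-name="jdbc/ORACLE_MASTER"/>
<managed-data-source name="ORACLE_WORK"
    connection-pool-name="ODI Repository Pool"
    jndi-name="jdbc/ORACLE_WORK"/>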

6. Go to {OC4J_HOME}\j2ee\home\applications\oracledilwd\oracledilwd\WEB-INF and edit the web.xml file. Add the below XML directives to that file.

<resource-ref>
    <description>OracleDI Local Repository</description>
    <res-ref-name>jdbc/ORACLE_MASTER</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
    <res-sharing-scope>Unshareable</res-sharing-scope>
</resource-ref>
<resource-ref>
    <description>OracleDI Local Work Repository</description>
    <res-ref-name>jdbc/ORACLE_WORK</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
    <res-sharing-scope>Unshareable</res-sharing-scope>
</resource-ref>

7. The above directives basically point to the JNDI path that we specified in step 5 above.

8. Go to {OC4J_HOME}\j2ee\home\applications\oracledilwd\oracledilwd and edit the Repositories.xml file. Add the below XML directive to the file.

<logins>
	<login name="OracleDI Local Repository">
		<master name="jdbc/ORACLE_MASTER" masterDriver="oracle.jdbc.OracleDriver"/>
		<work name="jdbc/ORACLE_WORK" workName="WORK REPOSITORY"/>
	</login>
</logins>

Ensure that the Work Repository name provided in the above directive matches with your Work Repository name.

9. Restart OC4J. Now you should be able to log in to the Lightweight Designer application.

Posted in All Posts, Oracle Data Integrator | 4 Comments »

Hyperion Essbase 9.3.1 – Alternative Approach to Currency Conversion in BSO Cubes – Using Calculation Scripts – Part 2

Posted by Venkatakrishnan J on December 29, 2008

In the last blog entry we saw how to design the Essbase outline to accommodate rate loads into the same database. Today we shall look at how the currency conversion is accomplished. To begin with, let us load some sample data into the Essbase cube. The sample data is shown below

As you see, the input data is in Local currency of each city/country. Let us create a simple rule which will load the above data into the Essbase Cube.

Now, let us load the rates (the rates that we saw in the last blog entry) using a rules file similar to the one above.

Once the rate load is done, the next step is to convert the local currency into the corresponding USD values. This is achieved through calculation scripts. In effect, the members of the Scenario dimension under Local store the local currency, and the members of the Scenario dimension under Dollars store the converted USD. To achieve this, let us create a simple calculation script as shown below

SET CACHE HIGH;
SET CALCPARALLEL 2;

FIX("Sales",@LEVMBRS("Market",0))
"Actual" = "Act" / "Actual"->"Rate Product";
"Budget" = "Bud" / "Budget"->"Rate Product";
"Actual @ Bud XChg" = "Act" / "Budget"->"Rate Product";
ENDFIX;

FIX("Actual","Budget","Actual @ Bud XChg","Act","Bud")
AGG("Product");
CALC DIM("Year");
ENDFIX;

FIX("Actual","Budget","Actual @ Bud XChg")
AGG("Market");
ENDFIX;

The above script can actually be split into 3 parts. They are

1. Currency conversion at the lowest level
2. Aggregation of all the relevant dimensions, for all the Scenario members, except Market
3. Aggregation of the Market dimension for the USD converted Scenario members alone

The above trifurcation is needed to ensure that we are not aggregating local currency across Markets, as it would not make sense to add Euros to Pounds. Now, let us execute the above calculation and look at the data.

As you see, the conversions have happened correctly based on the input rates. The above approach basically overwrites the existing rate values; hence, after a calculation the rates would no longer exist in the cube. If the rates are needed for future calculations, just exclude the "Rate Product" member from the above calculation. The method above can be used and configured to suit quite a few scenarios. But the major drawback with this approach is the possibility of significant cube fragmentation. Due to the complex nature of the calculations (getting the rates from a specific intersection), unnecessary blocks might get created, which can result in reduced block density as well as cube fragmentation. So, whenever this approach is used, frequent export and import of the lowest level data (along with the rollups) is needed.
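
To sketch that exclusion, the first FIX can restrict the Product dimension to everything except the rate member, so that the stored rates survive the calculation. The member set functions used below (@REMOVE, @IDESCENDANTS, @LIST) are standard calc functions; adjust the member names to your outline.

FIX("Sales", @LEVMBRS("Market",0),
    @REMOVE(@IDESCENDANTS("Product"), @LIST("Rate Product")))
"Actual" = "Act" / "Actual"->"Rate Product";
"Budget" = "Bud" / "Budget"->"Rate Product";
"Actual @ Bud XChg" = "Act" / "Budget"->"Rate Product";
ENDFIX;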

Posted in All Posts, EPM, Hyperion Essbase | 2 Comments »

Hyperion Essbase 9.3.1 – Alternative Approach to Currency Conversion in BSO Cubes – Using Calculation Scripts – Part 1

Posted by Venkatakrishnan J on December 29, 2008

In our blog series on Essbase, we have seen how to go about building ASO and BSO cubes. We also saw various approaches for loading data and building dimensions using rule files. In the next couple of blog entries we shall look at more advanced topics like currency conversion in both ASO and BSO cubes. We shall start with the BSO cube first. BSO cubes support currency conversion using 2 methods

1. Using Currency Database
2. Using inherent Rate Members against each dimension

The first approach is pretty straightforward and is the recommended option while using BSO cubes. It is also documented pretty clearly in the Admin Guide. Unfortunately, ASO does not support a separate currency database. So, we shall look at an approach today which can be used for both ASO and BSO cubes. The actual implementation would vary between ASO and BSO, but the underlying concept remains the same. To begin with, we shall understand the currency conversion from a BSO outline standpoint. To demonstrate this, we shall be using the Interntl database outline under the Sample application. The reason for using this specific database is that it comes with a corresponding currency conversion database which stores the rates. Before we jump into the outline design, we shall understand what exactly we are trying to do. This is summarized in the points below

1. The Interntl database is loaded with data (in the local currency of each Market), i.e. each record of lowest level data will be coming in Euros, Canadian Dollars, Pounds etc.
2. The intention is to convert all the incoming local currency data into a common reporting currency, which is USD.
3. For this, the rates would be loaded against each Market (assuming each Market has only one currency).
4. The rates would be in the form of a ratio of USD to the local currency (for example, USD/GBP).

This is just a high level requirement. Our intention is to replicate whatever has been done by the Currency Database using custom inherent Rate members within the database itself. This is achieved using the high level flow listed below

       

Let us take a look at our base outline first.

       

As you see, we basically have the below listed dimensions

1. Year
2. Measures (Containing both P/L and B/S accounts)
3. Product
4. Market
5. Scenario

So, our input data has the grain of the 5 dimensions above. Since our approach is to load the rates into the same database, we need to identify the grain of the rates data as well. Our rates depend on the following dimensions

1. Year – Rates would vary over time
2. Measures – Rates would be different for P/L accounts and B/S accounts (in some businesses)
3. Market – Rates would vary based on countries. If the data is stored at city level, rates can be loaded at that level as well. For example, New York would be using USD. Hence while loading the rates one can load the rate directly against New York assuming USD as the base.
4. Scenario – Actual Rates would be different from Budget rates

As you see, in our case the rates have a lower dimensional grain than the actual data. So, let us list down all the member combinations for each dimension against which rates would be loaded.

1. Year – All lowest level month members (Jan, Feb, Mar, Apr, May, Jun, Jul, Aug, Sep, Oct, Nov, Dec)
2. Measures – All lowest level Account members
3. Product – Rates do not vary based on products. Hence we need to manually add a member called “Rate Product” which will be used for loading rates alone. This is shown in the screenshot below
4. Market – Rates would be loaded against all lowest level members in this dimension.
5. Scenario – Rates would be loaded against Act and Bud members since there are two types of rates that we would be loading.

       

To give an example of how the rates would be loaded, a set of sample input records is given below

       

As you see, we are loading the rates against all the possible intersections that we identified above. So, in effect, we would be storing the rates in the same database as the actual loaded data. There can be multiple variations of the above approach, like using UDAs for currency to geography mapping etc, but this should give an idea. In the next blog entry we shall see how we can use the above rates to convert the data from local currency to a common USD using calculation scripts.
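
To give an idea of what the rate load rule file could look like on the SQL side, here is a sketch that follows the same one-column-per-dimension pattern used by the data load rule files in this series. The table and column names (CURRENCY_RATES, SCENARIO_DSC, EXCHANGE_RATE and so on) are purely illustrative; the only fixed point is the hardcoded 'Rate Product' member for the Product dimension.

SELECT
MONTH_DSC,
ACCOUNT_DSC,
'Rate Product',
MARKET_DSC,
SCENARIO_DSC,
EXCHANGE_RATE DATA
FROM
CURRENCY_RATES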

Posted in All Posts, EPM, Hyperion Essbase | 3 Comments »

Enterprise Performance Foundation (EPF or FEM) – Data Model Introduction

Posted by Venkatakrishnan J on December 24, 2008

As you might already know, with the acquisition of Hyperion, Oracle's product stack for financial clients has been strengthened quite a bit by products like Hyperion Planning and Financial Management, which are best of breed products in their line of offerings. Having said that, there are quite a few customers who are still on Oracle's own products like Enterprise Performance Foundation, Financial Consolidation Hub, Enterprise Planning and Budgeting etc. Last week I was on calls with three different customers who were using Enterprise Performance Foundation as their primary reporting data warehouse, and all of these clients were using CPM Analytics for doing their reporting in BI EE. One of the major advantages of this reporting data warehouse (it is not a true data warehouse per se; it is tuned more towards reporting) is that it is integrated very tightly with E-Business Suite. It is also tightly integrated with products like Profitability Manager, Financial Consolidation Hub etc, and it caters to almost all kinds of reporting. I had covered CPM Analytics in an earlier blog entry here. CPM Analytics is one of those BI Applications which does not have an ETL layer; rather, it provides a set of reports and dashboards for reporting directly on Enterprise Performance Foundation. One of the frequent issues end users face with this application is that the reports and dashboards do not give correct or expected results. This is primarily not because of any issues in CPM Analytics but because of customizations in the EPF layer. To understand this better, we shall first try to understand how EPF is structured at a high level from a data model perspective. We shall also compare and contrast this data model with Hyperion Financial Management later. At a high level, the below diagram gives a basic overview of the EPF data model.

       

I typically call EPF or FEM the financial data warehouse. Its fact tables primarily contain financial data like customer loans, credit cards, consumer credits, account balances, company profit and loss etc. Each of these different types of financial data is identified by means of different dimensions. Enterprise Performance Foundation provides the below out of the box dimensions.

1. Geography
2. Financial Element
3. Customer
4. Product and Product Type
5. Cost Center
6. Channel
7. Natural Account
8. Project
9. Task
10. Ledger
11. Activity
12. Entity
13. Line Item

These are the out of the box dimensions that EPF provides. Not everyone uses all of the above dimensions; the extent of usage is driven by the type of company and the source system. The typical source for EPF would be Oracle General Ledger. Apart from the above dimensions, EPF also provides 20 more user customizable dimensions (named User1…User20). In all the cases that I have seen so far, there has always been a necessity to use at least 3-4 of these user customizable dimensions. This is primarily because not every business requirement is satisfied by the 13 or more out of the box dimensions. In fact, there are cases wherein custom dimensions have to be used as a replacement for the out of the box dimensions. For example, the Customer dimension provided by EPF might not satisfy all the tracking required by the business; in such a case, one would have to use one of the custom dimensions as a replacement for it. This is the primary reason why the reports/dashboards provided by CPM Analytics do not give a true state of the business. Having said that, the repository and the approach to reporting provided by CPM Analytics are excellent and can in fact be leveraged to provide better results.

Now let's go back to the data model and try to understand the various aspects of reporting that EPF provides. One of the excellent flexibilities of EPF is that it can maintain multiple hierarchies for the same dimension. In addition to multiple hierarchies, it can also maintain multiple versions of the same hierarchy, and all the hierarchies are maintained in the form of parent-child relationships. For example, let us take a look at the NATURAL ACCOUNT dimension.

       

As you see, this dimension supports both SCD-2 and SCD-1 kinds of reporting. The ENABLED_FLAG and LAST_UPDATE_DATE columns provide the flexibility to maintain history for the same account (if necessary). Also, each dimension can itself be further sub-classified based on different value sets. The dimension structure is the same for all the dimensions. Now, let's take a look at the hierarchy structure (always parent-child).

       

The CHILD_ID column above in the hierarchy view actually holds the member ids of the source dimension table. As you see, this view has quite a few useful columns like SINGLE_DEPTH_FLAG, CHILD_DEPTH_NUM etc which are very crucial for reporting. The SINGLE_DEPTH_FLAG helps in identifying whether the child and parent in a row are an immediate parent and child. Also, for a member that does not have any parent (a root node), the PARENT_ID and CHILD_ID would be the same; hence, when we do a CONNECT BY in reporting, this results in a loop (in 10g and above we have NOCYCLE and CONNECT_BY_ISCYCLE to handle loops in CONNECT BY queries). The SINGLE_DEPTH_FLAG helps in identifying such members. Also, the HIERARCHY_OBJ_DEF_ID helps in maintaining different hierarchies for the same dimension, and there can be different versions of the same hierarchy.
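
As an illustration of how these columns are typically used together, a hierarchical query against one hierarchy version could look like the sketch below (the view name is hypothetical; the column names are the ones discussed above).

SELECT LEVEL AS lvl, h.parent_id, h.child_id
FROM (SELECT parent_id, child_id
      FROM fem_hier_relationships                 -- hypothetical hierarchy view
      WHERE hierarchy_obj_def_id = :hier_def_id   -- pick one hierarchy/version
      AND single_depth_flag = 'Y') h              -- immediate parent-child rows only
START WITH h.parent_id = h.child_id               -- root rows store parent = child
CONNECT BY NOCYCLE PRIOR h.child_id = h.parent_id

As a next step, let's take a look at the attributes view.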

       

EPF supports 3 types of attributes: Varchar, Numeric and Date. The above is pretty self explanatory. Each dimension can have a lot of attributes; for example, Customer would have an Age attribute, and each attribute can take many values. The attributes view above provides all such combinations for reporting. The structures above are very generic and remain the same across all the dimensions. All of the above is maintained through a generic EPF UI (similar to the E-Business Suite screens). EPF has around 20 out of the box fact tables catering to different kinds of financial businesses like corporate loans, banks, credit cards etc. Also, the fact tables come in 2 types: transaction fact tables and summarized granular fact tables. This should give us an understanding of the EPF data model. In the future I will be blogging about how CPM Analytics uses the above dimensions, attributes and facts to provide comprehensive reporting. But the above should serve as an introduction to anyone who is making a foray into EPF.

Posted in All Posts, Data Modelling, Enterprise Performance Foundation | Leave a Comment »

Oracle BI EE, BI Publisher 10.1.3.4 & Hyperion Essbase Connectivity – Security Integration

Posted by Venkatakrishnan J on December 22, 2008

One of the major letdowns (if I can call it that) of the BI EE and Essbase connectivity is the security integration between the 2 components. Currently, BI EE cannot leverage Essbase security. Though I had stated here that it does work, that test was done for a specific use case, and the same does not hold for the other major scenarios. This blog entry gives a brief summary of all my findings on the BI EE and Essbase security integration. We shall take the same Demo->Basic cube for all our testing. To start with, let us use the below report on the Demo->Basic cube.

       

As you see, it is a very simple report containing all the dimensions in the Demo->Basic cube. Now, let us create a new user in Essbase called Demo1. This user would only be given filter access to this cube.

       

Let us create a new filter in Essbase which gives access to only the Profit account and to all the members of the other dimensions. So the filter specification would be as shown below

@IDESCENDANTS ("Year"),@IDESCENDANTS ("Market"),@IDESCENDANTS ("Product"),@IDESCENDANTS ("Scenario"),"Profit"

       

Then assign this filter to the Demo1 user.
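
If you prefer scripting this instead of using the Administration Services screens, the filter creation and the grant can be done through MaxL along these lines. This is only a sketch of the MaxL approach; the filter name profit_only is arbitrary.

create or replace filter 'Demo'.'Basic'.'profit_only' read on
'@IDESCENDANTS("Year"),@IDESCENDANTS("Market"),@IDESCENDANTS("Product"),@IDESCENDANTS("Scenario"),"Profit"';
grant filter 'Demo'.'Basic'.'profit_only' to 'Demo1';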

       

Let's change the user in the connection pool to Demo1.

       

Once this is done, let us take a look at the report.

       

If you look at the report closely, the data itself is completely wrong. We are not supposed to get any data for accounts other than Profit. But in our case, not only are we getting data, it is juxtaposed with numbers from prior members. Now, let us take a look at the MDX to see whether the filter that we set up above works correctly.

With
set [Accounts2] as '[Accounts].Generations(2).members'
set [Market2] as '[Market].Generations(2).members'
set [Product2] as '[Product].Generations(2).members'
set [Year2] as '[Year].Generations(2).members'
select { [Scenario].[Actual] } on columns,
NON EMPTY {crossjoin ({[Accounts2]},crossjoin ({[Market2]},crossjoin ({[Product2]},{[Year2]})))} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from [Demo.Basic]

Now, let us fire the above MDX query directly in Essbase as the Demo1 user. This will let us know whether the filter has been applied properly on the cube.

       

As you see above, the MDX does generate the No Access label for all the non-Profit account intersections. Hence the problem actually lies in the BI EE server, wherein the conversion from this MDX output to a BI EE specific Answers output does not happen correctly. I believe this is probably because, since the ACTUAL measure is numeric, the UI expects only numeric values in the MDX output; if there is any non-numeric output then the rendering of the output goes wrong. This prompted me to look into another important question: how does BI EE handle the #Missing label? If you notice, BI EE will always pass the NON EMPTY clause in the MDX it generates, hence there is no way #Missing can come in the output. The only thing that I can conclude as of now is that the #No Access label is not handled in the BI EE server and hence it is bound to produce wrong results. Now, let's investigate further to see whether this is actually due to the BI EE UI. One way is to test the MDX directly in BI Publisher and the other is to fire the logical SQL directly. Let us take the BI Publisher approach first: let us log into BI Publisher and fire the same MDX query as the admin user first.

       

As you see, we get all the accounts, including Profit and non-Profit accounts. But in the above output we do not have the dimension members as part of the rowset; investigating that in itself is a separate blog entry which I shall do later. For now, we are sure that the above works for the admin user. Next, let's try the same report as the Demo1 user.

       

So far so good. It looks like BI Publisher does understand the No Access label, as it has filtered out all the no-access rows. I believe this is either getting filtered in the UI, or the BI Publisher engine modifies the MDX queries in such a way that it does not display the No Access rows. This is again a bit strange, as there are no out of the box properties to filter the No Access rows in BI EE. In order to test this completely, let us remove the NON EMPTY property and also add one more scenario to the query, as shown below

With
set [Accounts2] as '[Accounts].Generations(2).members'
set [Market2] as '[Market].Generations(2).members'
set [Product2] as '[Product].Generations(2).members'
set [Year2] as '[Year].Generations(2).members'
select { [Scenario].[Actual], [Scenario].[Budget] } on columns, {crossjoin ({[Accounts2]},crossjoin ({[Market2]},crossjoin ({[Product2]},{[Year2]})))} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from [Demo.Basic]

       

       

Though we thought the security did work fine above, this is not the case anymore once we add the Budget scenario to the query. What has happened is that BI Publisher has converted all the No Access labels to the #Missing label. But again, this is not applicable everywhere, as we get #Missing for the second rowset. One thing that I can conclude is that the security integration with Essbase is very primitive in this current release of both BIP and BI EE (I just hope someone proves me wrong here). And I just hope that this is addressed properly in 11g.

As a last effort, let us try to pull the data from the same Answers report into BI Publisher.

       

       

As you see, we still get wrong data. So, the only way to handle secure Essbase data sources is from within BI EE, using the same approach as for relational data sources.

Posted in All Posts, Bi Publisher, EPM, Hyperion Essbase, OBI EE Plus | 1 Comment »

Hyperion Essbase 9.3.1 – OBE Series – Using Outline Migration Wizard to Convert BSO to ASO outlines – Understanding ASO Cubes – Part 5

Posted by Venkatakrishnan J on December 18, 2008

In the last 4 articles, we saw how to go about designing a BSO cube. In this article we shall see how to convert a BSO cube to an ASO cube using the outline migration wizard. Before we do that, let us try to understand the basic differences between ASO and BSO cubes. ASO cubes have certain restrictions, but they are an excellent fit if you have huge dimensions. Typically an ASO cube aggregates very fast, primarily due to the fundamental differences in the architecture of ASO and BSO. At a high level, the following are the ASO properties that one would have to know

1. ASO cannot be used to load data at non-level0 members. ASO will accept data only at the lowest level.
2. ASO does not support calculations in stored members for Non Account dimensions.
3. Each non-Accounts dimension in an ASO cube can be of 3 types: Stored, Dynamic or Multiple Hierarchies enabled.
4. A stored hierarchy is like a normal hierarchy in BSO. But the major difference is that the same member cannot be shared more than once within the hierarchy, and non-level0 members cannot be shared within a stored hierarchy. Member formulas are also not supported on stored hierarchy members.
5. A dynamic hierarchy on a non-accounts dimension has all the properties of a dimension in a BSO cube. But the major difference is that the upper level member values are dynamically obtained (during data retrieval). Also, calculated members are supported in this hierarchy.
6. A multiple hierarchies enabled dimension can have both stored and dynamic hierarchies, but it should have at least one stored hierarchy.
7. ASO data loads are typically more flexible than BSO data loads. ASO supports the concept of load buffers, which can perform addition, subtraction etc of data coming in from multiple data sources in memory.
8. There is no need for identifying sparse and dense dimensions.

There are other differences as well (for example, attribute dimensions are not supported on all the dimensions), but the ones listed above are the most important, at least from an outline and data load standpoint. Our goal in this article, as stated above, is to create the same Global BSO cube in ASO as well. Let us first start with creating a new ASO application and an ASO database called GlobASO.
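
If you would rather script this step than use the Administration Services console, the ASO application and database can also be created through MaxL. A sketch, assuming both the application and the database are named GlobASO:

create application 'GlobASO' type aggregate_storage;
create database 'GlobASO'.'GlobASO';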

       

Now, let us migrate the BSO outline that we created before using the Aggregate Outline Migration Wizard.

       

In the source, choose the Global Database’s outline.

       

As you see below, as soon as we click on Next, we see a lot of errors/warnings which inherently show the differences between ASO and BSO. Do not do the conversion automatically; instead use the interactive conversion.

       

Due to these inherent differences, the wizard would not migrate everything correctly. So, we shall correct all the errors manually now.

       

The first step in the correction is to make one of the customer hierarchies a dynamic hierarchy, since by default the wizard chooses it as a stored hierarchy. The reason for doing this is that ASO does not allow a shared member to occur twice in a stored hierarchy. To make this change, make the Customer dimension multiple hierarchies enabled and make Total Market a dynamic hierarchy.

       

Once this is done, the verification goes through without any problem. But from a data load standpoint we still have one open issue. If you recollect, the UNIT_PRICE and UNIT_COST measures have a grain of only 2 dimensions. Since BSO supported the option of loading values directly to parent members, we had loaded those values against the topmost members of the CHANNELS and CUSTOMER dimensions (the non-grain dimensions). But ASO does not allow data to be loaded at non-level0 members. So, we need to create 2 dummy members (or use existing level-0 members) under the Channels and Customer dimensions. These members are shown below

       

We also have to change their hierarchy types to ensure that the dummy members do not roll up. Instead of proceeding further in the wizard, just copy all the members (dimension by dimension) to the GlobASO outline and save the outline.

       

Now that we have created the outline, the next step is to load the data. The rule file that we created for the Units measure would still work in the ASO cube, but the SQL for the other 2 data load rule files would change since we would be loading the data against the dummy level-0 members. The SQLs are given below. Just copy the rule files to the new cube, make the below SQL changes and then use them to load the data.

SELECT
MONTH_DSC,
'Unit Price',
'Channel No Grain',
ITEM_DSC,
'Customer No Grain',
SUM(UNIT_PRICE) DATA
FROM
PRICE_AND_COST_FACT UF,
TIME_DIM TD,
PRODUCT_DIM PD
WHERE
UF.MONTH_ID = TD.MONTH_ID AND
UF.ITEM_ID = PD.ITEM_ID
GROUP BY
MONTH_DSC,
ITEM_DSC

 

SELECT
MONTH_DSC,
'Unit Cost',
'Channel No Grain',
ITEM_DSC,
'Customer No Grain',
SUM(UNIT_COST) DATA
FROM
PRICE_AND_COST_FACT UF,
TIME_DIM TD,
PRODUCT_DIM PD
WHERE
UF.MONTH_ID = TD.MONTH_ID AND
UF.ITEM_ID = PD.ITEM_ID
GROUP BY
MONTH_DSC,
ITEM_DSC

Once the data load is done, let us run the aggregation on the ASO cube. Typically an ASO cube can be aggregated based on different parameters, like query statistics or a target cube growth size. For now let us run the default recommended aggregation.
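
Equivalently, the default aggregation can be kicked off from MaxL, which is handy when it has to be scripted as part of a batch load. A sketch, assuming the application and database are both named GlobASO; the optional stopping clause caps how much the cube is allowed to grow:

execute aggregate process on database 'GlobASO'.'GlobASO'
stopping when total_size exceeds 1.2;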

       

       

Now, let us check the data in this cube.

       

       

As you see, the data matches exactly with what we saw in the BSO cube.

Posted in All Posts, EPM, Hyperion Essbase | 1 Comment »

Hyperion Essbase 9.3.1 – OBE Series – Designing Essbase Databases – BSO Outlines & Data Load – Global Schema – Part 4 – Loading Data

Posted by Venkatakrishnan J on December 17, 2008

In the last 3 blog entries, we saw how to go about designing and loading the dimensions for a block storage cube. Today we shall see how to go about loading data. A block storage cube can accept input data at any level; the data need not always be at the lowest level. The process of loading data into Essbase is again done through rule files. As a general rule, let's create one rule file for each measure (UNITS, UNIT_PRICE & UNIT_COST). Generally the number of rule files is determined by how your SQL source is structured. The SQL for the rule file (for UNITS) in our case is given below

SELECT
MONTH_DSC,
'Units',
CHANNEL_DSC,
ITEM_DSC,
SHIP_TO_DSC,
SUM(UNITS) DATA
FROM
UNITS_FACT UF,
TIME_DIM TD,
CHANNEL_DIM CD,
PRODUCT_DIM PD,
CUSTOMER_DIM CD1
WHERE
UF.MONTH_ID = TD.MONTH_ID AND
UF.CHANNEL_ID = CD.CHANNEL_ID AND
UF.ITEM_ID = PD.ITEM_ID AND
UF.SHIP_TO_ID = CD1.SHIP_TO_ID
GROUP BY
MONTH_DSC,
CHANNEL_DSC,
ITEM_DSC,
SHIP_TO_DSC

As a general rule, it is recommended to have the columns in the select statement arranged in the same order as the dimensions in the outline; this will ensure that the data load is fast. The process of creating the rule file remains the same as for the dimensions. The major difference lies in the property settings. Let's start with creating a new rule file. Go to Open SQL and enter the above SQL. Then ensure that the Data Load property is set.

       

The next step is to make sure that each and every column is properly tagged to the corresponding dimension as shown below

       

This needs to be set for all the columns apart from the data column. The last column needs to be set with the Data property. Also, by default in a BSO cube, if the existing cells have data they will be overwritten; there are properties which can change this default behaviour. This is shown below

       

       

Once this is done, validate the rule file and save it. Now, let's load the data from this rule file using the same approach as before. The only difference during the data load is that we need to choose the load only option.
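
For completeness, the same data load can also be scripted in MaxL rather than run from Administration Services, which is useful once the rule files are stable. A rough sketch, assuming the application and database are both named Global, the rule file is saved as Units on the server and the SQL source credentials belong to the relational schema:

import database 'Global'.'Global' data
connect as 'global_schema' identified by 'password'
using server rules_file 'Units'
on error write to 'units_load.err';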

       

This would load all the data into the Essbase cube. This can be verified from the database properties; you would see that the Input Level 0 Blocks statistic now shows data.

       

The various statistics above are very important while loading an Essbase cube. In our case, we have a block size of 3K, which is on the lower side. This can be increased further by making another dimension dense, but I will not do that in this blog entry. Also, our block density is OK; anything around 25% block density is considered very good, and this density will reduce as we have not aggregated the cube yet. Also, the index hit ratio of 1 is very good. This means that the index cache size is being utilized completely and there is scope for increasing this value further. Before we look at calculation scripts, let's create the remaining 2 rule files to load the UNIT_PRICE and UNIT_COST measures. The SQLs for both rule files are provided below.

SELECT
MONTH_DSC,
'Unit Price',
'Channels',
ITEM_DSC,
'Customer',
SUM(UNIT_PRICE) DATA
FROM
PRICE_AND_COST_FACT UF,
TIME_DIM TD,
PRODUCT_DIM PD
WHERE
UF.MONTH_ID = TD.MONTH_ID AND
UF.ITEM_ID = PD.ITEM_ID
GROUP BY
MONTH_DSC,
ITEM_DSC

 

SELECT
MONTH_DSC,
'Unit Cost',
'Channels',
ITEM_DSC,
'Customer',
SUM(UNIT_COST) DATA
FROM
PRICE_AND_COST_FACT UF,
TIME_DIM TD,
PRODUCT_DIM PD
WHERE
UF.MONTH_ID = TD.MONTH_ID AND
UF.ITEM_ID = PD.ITEM_ID
GROUP BY
MONTH_DSC,
ITEM_DSC

Since UNIT_PRICE and UNIT_COST have a grain of only two dimensions, we need to load the data against the topmost node of all the other (non-grain) dimensions. That is the reason why the topmost nodes of the non-grain dimensions have been hardcoded in the above queries. The screenshots below act as a reference for creating the rule files.

       

       

Now, let's load the data from these 2 rule files into our cube again. Once the data load is done, let us look at the statistics again

       

As you see, the number of upper level blocks has increased since we are loading the data directly against the topmost nodes of the Channels and Customer dimensions. Now, the next step is to aggregate the cube. Aggregation in block storage cubes is done through scripts called calculation scripts. The major advantage of these scripts is that they provide the flexibility to aggregate any portion of the cube. So, we shall create a single calculation script which will aggregate UNITS across all the dimensions and UNIT_PRICE, UNIT_COST across 2 dimensions. The calculation script is provided below

SET CACHE HIGH;
SET CALCPARALLEL 2;

FIX ("Units")
CALC DIM ("Time");
AGG("Channels");
AGG("Product");
AGG("Customer");
ENDFIX;

FIX("Unit Price","Unit Cost","Channels","Customer")
CALC DIM("TIME");
AGG("Product");
ENDFIX;

Basically, this calculates all the dimensions for the Units measure, but for the Unit Price and Unit Cost measures it calculates only the Product and Time dimensions.

       

Now, let's execute this calculation and look at the database properties.

       

       

As you see, the number of upper level blocks has increased. Also the block density has gone up, which is good. Now, as the last step, let us look at the data.

       

Posted in All Posts, EPM, Hyperion Essbase | Leave a Comment »

Hyperion Essbase 9.3.1 – OBE Series – Designing Essbase Databases – BSO Outlines & Data Load – Global Schema – Part 3 – Building Dimensions

Posted by Venkatakrishnan J on December 15, 2008

In the last article, we saw how to go about building rule files for loading the level based primary and alternate hierarchies of the Customer dimension. In this article we shall see how to go about building the remaining dimensions. We shall start with the Product dimension first. The source for our Product dimension is the PRODUCT_CHILD_PARENT table. As you see, this dimension contains the Product hierarchy as a value based hierarchy. The idea is to build a rule file to load this hierarchy. When the SQL for the rule file is built, it has to be ensured that the parents come first in the query result. So, we shall be using the query shown below in our rule file to build the hierarchy.

SELECT
NVL(PARENT_PRODUCT,'Product'),
PRODUCT_DSC
FROM
(SELECT PRODUCT_DSC, PRODUCT_ID, PARENT_ID, LEVEL LVL
FROM
PRODUCT_CHILD_PARENT
CONNECT BY PRIOR PRODUCT_ID = PARENT_ID
START WITH PARENT_ID IS NULL) A,
(SELECT PRODUCT_DSC AS PARENT_PRODUCT, PRODUCT_ID
FROM
PRODUCT_CHILD_PARENT) B
WHERE
A.PARENT_ID = B.PRODUCT_ID (+)
ORDER BY A.LVL

As you see above, the query ensures that we are getting the parent and child member names (not the ids) in the query itself. Also, the ORDER BY on LEVEL ensures that we are loading the parents first. The properties for the rule file are listed below

       

       

As you see above, we are using the Parent Child build property to load the value based hierarchy. Save and validate the rule file, then load the Product dimension using this rule file. The outline should look like the one shown below

       

Now that we have loaded the Product dimension, the next step is to load the Channels and Time dimensions. Again, we shall be creating 2 separate rule files for loading the Channels and Time dimensions. Since the procedure remains the same, I will just provide the SQLs used in the rule files along with their properties.

SELECT
YEAR_DSC, QUARTER_DSC, MONTH_DSC
FROM
TIME_DIM

 

SELECT
TOTAL_CHANNEL_DSC, CHANNEL_DSC
FROM
CHANNEL_DIM

       

       

       

       

Now, load both the rule files to the Essbase Outline. This will load all the dimension members of Time and Channels. So, the outline should be complete now with all the dimension members.

       

Next we shall see how to go about loading data into this Essbase cube. We shall also see how to go about aggregating portions of the Essbase database using calculation scripts.

Posted in All Posts, EPM, Hyperion Essbase | 2 Comments »

Hyperion Essbase 9.3.1 – OBE Series – Designing Essbase Databases – BSO Outlines & Data Load – Global Schema – Part 2 – Building Dimensions

Posted by Venkatakrishnan J on December 14, 2008

In the last article we saw how to go about designing the outline for an Essbase block storage cube. Today we shall see how to go about loading the various dimensions using rule files. Rule files, as the name suggests, help in loading data as well as dimension members using certain rules. Loading a dimension means loading the various hierarchies that comprise the dimension. In our example, we have 2 hierarchies for the Customer dimension, 1 hierarchy for Channels, 1 hierarchy for Time and 1 for the Product dimension. Let's start with the Customer dimension first. Before starting to build these dimensions, the first step is to identify which would be the primary hierarchy and which would be the alternate hierarchy. In Essbase, it does not matter what the hierarchy type is, since every hierarchy is treated in the same way. But it is mandatory to understand how the members would be present in each of the hierarchies. For example, in some cases we can have hierarchies wherein the members within and across the hierarchies are completely unique, while in other cases the same members are shared across multiple hierarchies. Let's look at the screenshots from AWM of the Customer dimension hierarchies.

       

       

As you see, we basically have 2 different hierarchies sharing the same level column called SHIP_TO. So, let us start with building the first hierarchy, Shipments. In order to do that, we shall create a new rule file and enter the SQL below in it.

       

        

SELECT
TOTAL_CUSTOMER_ID, TOTAL_CUSTOMER_DSC,
REGION_ID, REGION_DSC,
WAREHOUSE_ID, WAREHOUSE_DSC,
SHIP_TO_ID,SHIP_TO_DSC
FROM
CUSTOMER_DIM

       

Ensure that you are pointing to the right SQL source. Once that is done, click on OK/Retrieve, which will retrieve the data into the rule file.

       

Since we are loading a dimension, we need to set the data source property as Dimension Build.

       

After that, we need to set the dimension build specific properties. There are 6 types of dimension build methods available in Essbase. They are shown below

       

Generation References - This facilitates column based loading from data sources. Typically used while loading a level based hierarchy.
Level References - This is similar to Generation Type loading. But this is bottom up loading instead of a top down approach. Typically used while loading a level based hierarchy.
Parent Child References - This is used when source data is in the form of a parent child hierarchy. Typically used while loading a value based hierarchy.
Add as Sibling With match - Typically used while loading members as siblings to other members.
Add as a Sibling to Lowest level - Typically used while loading members as siblings to the lowest level members.
Add as a Child of - Typically used while loading members as children of a specific member.

In our case, since we are loading a level based hierarchy, we shall be using the Generation References method. Also, there are quite a few options available within each dimension build method. They are given below

       

Each option has its own significance, as explained below.

Allow Moves - This will allow a member to change its parent.
Allow Property Changes - This will allow changes to the property of a member like Alias, Aggregation Property, Time Based Aggregation etc
Allow Formula Changes - This will allow formula for a member to be dynamically populated
Allow UDA Changes - This will allow UDA changes to a member

So, in our case this is what we shall use for the primary Segments Hierarchy.

       

       

Once this is done, we need to set the field properties for each column obtained from our SQL query. This is to designate the member, alias etc. In Essbase, the same member name cannot be used across dimensions, so we shall not be using the dimension IDs as our member names, as other dimensions would have the same ids. So, for each column containing the dimension ids, let us just ignore the column.

       

We shall mark Field2, Field4, Field6 and Field8 as Generation2, Generation3, Generation4 and Generation5 respectively. Our rule file should look like the one shown below

       

Then validate the rule file and just save it as Segments. Once that is done, right click on the Global database and click on load data. Then choose the rule file and start building the dimension.

       

       

So, effectively we should get the dimension as shown below in the outline.

       

Now that we have built the primary hierarchy, the next step is to build the alternate hierarchy. Building the alternate hierarchy is pretty tricky. One of the main reasons is that the same members (the lowest SHIP_TO level) have to be made shared members in the alternate hierarchy. If we use the same logic as above, the lowest level members would move to the new hierarchy instead of getting shared. So, we shall start with a rule file which will build the first 3 generations (without the SHIP_TO level) of the alternate Market-Segment hierarchy. The rule file would be similar to the one above, without the SHIP_TO column.
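
For reference, the SQL for this alternate hierarchy rule file would be along the lines of the sketch below. The TOTAL_MARKET_DSC and MARKET_SEGMENT_DSC column names are assumptions about the Global schema's customer dimension; ACCOUNT_DSC is the same column used for the shared member load later in this entry.

SELECT
TOTAL_MARKET_DSC, MARKET_SEGMENT_DSC, ACCOUNT_DSC
FROM
CUSTOMER_DIM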

       

       

Now, let's load the new hierarchy through the new rule file. The outline should look like the one shown below

       

Now, the next step is to automate the process of making the SHIP_TO members come under Generation5 of the alternate hierarchy as shared members. This is done through another rule file, which would be loaded as a parent-child dimension build. The main reason for splitting this up is that only a parent-child dimension build will automatically share an existing member in the outline. With the level and generation references methods it is not possible to share a member that already exists in the outline, i.e. to share a member, the parent member should also come in the same SQL. So, use the below SQL in the new rule file.

SELECT
ACCOUNT_DSC,
SHIP_TO_DSC
FROM
CUSTOMER_DIM

Also, use the parent child dimension build property. Do not set the Allow moves property.

       

       

Now, load the data from this new rule file. This will result in the outline shown below

       

As you see, the new members in the alternate hierarchy get shared automatically. Just make sure the alternate hierarchy does not roll up into the topmost member; that is done by changing the aggregation operator of Total Market.

       

Now that we have built the customer dimension, we shall see how to go about building the Product, Time and Channel dimensions in the next blog entry.

Posted in All Posts, EPM, Hyperion Essbase, OBI EE Plus | 1 Comment »