Business Intelligence – Oracle

Archive for March, 2009

Oracle BI EE 10.1.3.4 and Hyperion Essbase 11.1.1.0 – Ago and ToDate functions – Custom MDX and Evaluate_Aggr – Achieving Time based Shifting and Period To Date Calculations

Posted by Venkatakrishnan J on March 30, 2009

One of the few topics that I have not covered so far in my BI EE and Essbase connectivity series is the support for the Ago and ToDate functions. As you probably know, Ago and ToDate are BI EE RPD-specific functions that help in achieving time-based shifting and period-to-date calculations. Depending on the underlying database, BI EE automatically function-ships these 2 functions into the corresponding relational database’s SQL queries. I have covered this before for a relational source here and here. Now, with the support for Essbase data sources in BI EE, Ago and ToDate are automatically function-shipped to MDX-specific functions. Let’s try to understand what these 2 functions do and how they work on an Essbase data source.

1. Ago – This function calculates the value of a measure with the Time member shifted back by the index specified in the function.
2. ToDate – This function calculates the period-to-date value for every intersection of the Time dimension, i.e. if you have Jan and Feb in your report, the period-to-date calculation would produce Jan and Jan + Feb respectively.
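For reference, their generic signatures in the RPD expression builder look like this (schematic only; the level argument must point at a level of the dimension that is marked as the Time dimension):

Ago(<measure>, <time dimension level>, <number of periods to shift back>)
ToDate(<measure>, <time dimension level>)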

For the above functions to work, you need a dimension marked as the Time dimension. So, let’s start with the Demo->Basic cube.

       

Now, go to the BMM of this imported Essbase cube in the BI EE repository. Mark Year as the Time Dimension and also set the chronological key.

       

       

Once this is done, go to the fact table and create a custom column. This column will be used to shift the time member back by 2 months. In the column, enter the formula shown below.

Ago("Demo"."Basic"."Actual" ,  "Demo"."Year"."Gen3,Year" , 2)

       

Now, include this column in the presentation layer and create a report as shown below. Ensure that the grain of the report matches the Ago function.

       

As you can see, the Ago function automatically creates a custom metric that calculates the value of the base measure with the month shifted back by 2 for every time member. Let’s look at the MDX query.

       

With
set [Year3] as '[Year].Generations(3).members'
member [Scenario].[MS1] as '(ParallelPeriod([Year].[Gen3,Year],2,[Year].currentmember),Scenario.[Actual])', SOLVE_ORDER = 100
select
{ [Scenario].[Actual], [Scenario].[MS1] } on columns,
NON EMPTY {{[Year3]}} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from
[Demo.Basic]

As you can see above, the Ago function is automatically function-shipped to an MDX function called ParallelPeriod. The major drawback with Ago is that it hardcodes the ParallelPeriod call, and hence not all the capabilities of the ParallelPeriod function are leveraged (like hierarchy filters etc.). If those are required, then Evaluate has to be used; we shall see how this can be done below. Now that we are clear on how Ago works, let’s move on to the ToDate function. Go to the fact logical table in the BMM and add a custom column. In the custom column, enter the formula shown below.

 TODATE(Demo.Basic.Actual, Demo."Year"."Gen2,Year")

       

       

If you look at the MDX query generated, you will notice that ToDate is automatically function-shipped to the PeriodsToDate MDX function.

With
set [Year3] as '[Year].Generations(3).members'
member [Scenario].[MS1] as 'AGGREGATE({PeriodsToDate([Year].[Gen2,Year],[Year].currentmember)},Scenario.[Actual])', SOLVE_ORDER = 100
member [Scenario].[MS2] as '(ParallelPeriod([Year].[Gen3,Year],2,[Year].currentmember),Scenario.[Actual])', SOLVE_ORDER = 100
member [Scenario].[MS3] as 'Rank([Year].Generations(3).dimension.currentmember,[Year].Generations(3).dimension.members)'
select
{ [Scenario].[Actual], [Scenario].[MS1], [Scenario].[MS2], [Scenario].[MS3] } on columns,
NON EMPTY {{[Year3]}} properties ANCESTOR_NAMES, GEN_NUMBER on rows
from
[Demo.Basic]

If you want to achieve the same functionality as Ago and ToDate without actually using them, use the Evaluate family of functions instead, as shown below.

EVALUATE_AGGR('AGGREGATE({PeriodsToDate([Year].[Gen2,Year],%1.dimension.currentmember)},Scenario.[Actual])' , "vejanaki-lap.in.oracle.com"."Demo".""."Basic"."Gen3,Year")
EVALUATE_AGGR('(ParallelPeriod([Year].[Gen3,Year],2,%1.dimension.currentmember),Scenario.[Actual])',"vejanaki-lap.in.oracle.com"."Demo".""."Basic"."Gen3,Year")
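Note the %1 placeholder in the expressions above: the BI Server substitutes the Nth column argument passed after the MDX string for %N inside the expression. Schematically:

EVALUATE_AGGR('MDX aggregate expression referencing %1 ... %N', column1, ..., columnN)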

       

It’s good that the BI Server automatically function-ships these functions back to Essbase in the form of MDX. But the drawback, as noted above, is that one does not get the flexibility of all the parameters the underlying MDX functions support. Hopefully this will become available in the coming releases.

Posted in All Posts, EPM, Hyperion Essbase, OBI EE Plus | 7 Comments »

Oracle Data Integrator 10.1.3.5 and Hyperion Essbase 11.1.1.0 – Columnar ASO export using MDX

Posted by Venkatakrishnan J on March 24, 2009

I was on a customer call last week wherein the customer wanted to know of ways to export partial data from an ASO cube. The issue was that they were on version 9.3.1. Since 9.3 does not support clearing regions of an ASO cube, they wanted to export partial data from the ASO cube and then reload it back into the same cube after clearing the cube. This is a very common scenario, and I thought I would blog about it here. As you would know, Oracle Data Integrator 10.1.3.5 now supports exporting data from an Essbase cube using report scripts as well as MDX. We shall use that feature today to understand how MDX exports from an ASO cube work. It is also an interesting exercise, considering that there can be different ways of framing an MDX query. So, let’s start with the GlobaASO cube that I have been using for the last 3 or 4 blog entries. First, let’s take a look at the outline.

       

This cube has data for the months of April, May and July, as shown below.

       

Now, our intention is to take an export of the April and May data from the ASO cube. There are a couple of options. One is to use a report script, as shown below, and take the export. But MDX is generally recommended over report scripts for ASO cubes.

<Sym
{ SUPALL  }{ NAMESON  }{ TABDELIMIT  }{ ROWREPEAT  }{ NOINDENTGEN  }{ SUPMISSINGROWS  }
<SUPSHARE
<ROW ("Time","Measures","Channel","Product","Customer")
"Apr-99","May-99"
<DIMBOTTOM "Measures"
<DIMBOTTOM "Channel"
<DIMBOTTOM "Product"
<DIMBOTTOM "Customer"
!
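If the report script is saved on the server (here under a hypothetical name, 'AprMay'), it can be executed and spooled to a flat file with a single MaxL statement:

export database 'GlobaASO'.'GlobaASO' using server report_file 'AprMay' to data_file 'AprMay.txt';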


The above report script can be rewritten as MDX in 2 ways (the main difference between the 2 being that the first query returns a lot of #Missing rows). Both MDX scripts are provided below.


SELECT
{[Measures].[Units]} on AXIS(0),
NON EMPTY {[Customer].Levels(0).members} on AXIS(1),
NON EMPTY {[Apr-99],[May-99]} ON AXIS(4),
NON EMPTY {[Channel].Levels(0).members} ON AXIS(3),
NON EMPTY {[Product].Levels(0).members} ON AXIS(2)
FROM
[GlobaASO].[GlobaASO]

       

With
set [Time1] as '{[Apr-99],[May-99]}'
set [Channel1] as '[Channel].Levels(0).members'
set [Product1] as '[Product].Levels(0).members'
set [Customer1] as '[Customer].Levels(0).members'
select
{[Measures].[Units] } on columns,
NON EMPTY {crossjoin ({[Time1]},crossjoin({[Channel1]},crossjoin({[Product1]},{[Customer1]})))}   on rows
from [GlobaASO.GlobaASO]

       

Even if we save the above MDX outputs as a CSV file, we get the member names wrapped within parentheses (depending on the MDX fired).

       

Now, let’s go to Oracle Data Integrator and create an interface with this GlobaASO cube as the input. Our idea is to test both of the above MDX queries. So, let’s start with the first MDX query and insert its output into a dummy table.

       

       

Now, let’s try the same thing with the other MDX query.

       

       

It does look like ODI can handle all types of MDX queries (though in effect every MDX query works on the AXIS specification). The interesting fact is that BI EE chooses the ROWS and COLUMNS type of query (basically AXIS(0) and AXIS(1)) instead of the first type, where every axis apart from AXIS(0) comes back as a separate column. One of the main reasons for this is that the NON EMPTY clause does not work on individual members; it works only on actually existing tuples. So, the second type of query, with the crossjoin, generates a result set without #Missing values (fewer rows returned). The more you analyze the BI EE and Essbase connectivity, the more you get to know how the connectivity has been designed. It also gives you insight into future product directions. For example, if you notice, in EPM 11.1.1.0 a new feature has been added where saved sets can be used for the duration of the session. My guess is that this has been added with BI EE 11g in mind (based on the screenshots in the OOW presentations). Anyway, I am digressing. The above was more of an initial walkthrough of the ODI-Essbase MDX export capabilities. I will go into more detail on the Java API and MDX in future blog entries.

Posted in All Posts, EPM, Hyperion Essbase, OBI EE Plus, Oracle Data Integrator | 1 Comment »

Hyperion Essbase 11.1.1.0 – Transparent Partition – Using BSO as Partition Targets – Write-Back Partitions on ASO – Part 2

Posted by Venkatakrishnan J on March 23, 2009

In the last blog entry, I gave a brief overview of how a transparent partition works with ASO targets. One of the major drawbacks of ASO targets is that they cannot be used to store local data. Today we shall see the advantages of using BSO cubes as transparent partition targets. The major advantage of using BSO as a transparent partition target is that it provides the capability to do write-backs on ASO (more of a workaround). Let’s take a simple use case today. We shall use the same GlobaASO cube as our ASO source. Let’s look at this ASO outline first.

       

As you see, we have 4 analysis dimensions and one accounts dimension. This ASO cube stores only the actual units sold. Let’s assume that we have end users who do active planning/forecasting on the number of units sold, and they need the capability to store the forecast data directly in the cube. Typically they do the forecasts along the product dimension. Let’s also assume that forecasting is typically done for the last 6 months of the year, after looking at the first 6 months of actual units sold. So, to facilitate this, let’s create a BSO cube as shown below.

       

As you see, this BSO outline has only 3 dimensions. Any user involved in forecasting would look at the actual units sold for the first 6 months and then forecast accordingly for the remaining 6 months. This outline also has one extra account, Forecast Units, which will hold the forecast data. Now let’s create a transparent partition with the ASO cube as the source and the BSO cube as the target.

       

       

The important point to note while creating the partition is that we do not have the same set of dimensions in the source and the target. So, for every dimension that does not exist in the target, just include the topmost member (the dimension member, or whichever member you feel holds the data relevant for the forecast). The source partition area would look like the one shown below.

       

As you see, I have chosen the topmost member (in the source) for the dimensions that do not exist in the target. The target partition area would look like the one shown below.

       

Now, if you validate the partition, you will get an error stating that the number of dimensions in the source and the target do not match.

       

To bypass this, go to the Mappings tab and map the dimensions that do not exist in the target from the source, as shown below.

       

This will make the partition validate. Now go to the Excel add-in and query the new BSO cube.

       

Now, as an end user, I would typically look at the Q1 and Q2 actual numbers and, based on those, forecast the remaining 2 quarters across multiple products. Of course, a lot of other factors come into play while forecasting; for simplicity, I will just multiply the Q2 actual numbers by 1.3 to get the Q3 and Q4 forecasts. You should be able to do this directly from the Excel add-in. Though you might get warnings about the write-back going against ASO, you should still be able to write back (as long as you are sure that the particular cell you have locked is not part of the transparent partition, you are good). Essbase performs the partition area check based on the members chosen in Excel, not on the cells actually locked; hence you might get warnings even when you have locked the correct cell.
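Instead of typing the forecast numbers in by hand, the seeding could also be scripted on the BSO target. A minimal calc script sketch, assuming the accounts are named Units and Forecast Units and the Time dimension has Q2/Q3/Q4 members (names taken loosely from the outline screenshots; in a real outline you would more likely FIX on the level-0 months):

/* Seed Q3 and Q4 forecast as 1.3x the Q2 actuals. Units comes through */
/* the transparent partition from the ASO source; Forecast Units is    */
/* stored locally in the BSO target. Member names are assumptions.     */
FIX ("Forecast Units")
   "Q3" = "Q2"->"Units" * 1.3;
   "Q4" = "Q2"->"Units" * 1.3;
ENDFIX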

       

Posted in All Posts, EPM, Hyperion Essbase | Leave a Comment »

Hyperion Essbase 11.1.1.0 – Transparent Partition – Using ASO as Partition Target – Part 1

Posted by Venkatakrishnan J on March 19, 2009

In the last blog entry we saw how a transparent partition works. Today we shall go into more detail on what this partition does and how it works on BSO and ASO cubes. To demonstrate this, I will use an ASO cube as the partition source, and both ASO and BSO as partition targets. This will let us understand the differences between ASO and BSO when used as transparent partition targets. Let’s start with a Global ASO outline as shown below. This will act as our transparent partition source.

       

Now, assume that we have 2 partition targets, one ASO and the other BSO. As a rule of thumb, for partitioning to work, the number of dimensions in the source and the target should match, though the target dimensions would typically be a subset of the source partition. The outlines of the target ASO and BSO cubes are provided below.

       

Let’s create a partition on the source ASO cube. There are 4 main steps in creating a transparent partition:

1. Assigning a target ASO/BSO database
2. Assigning security
3. Assigning Areas
4. Creating mappings between the source and target.

The first 2 steps are self-explanatory. In our first case, we shall use ASO as the target.
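The same partition can also be created from MaxL rather than the EAS wizard. A minimal sketch, with a hypothetical target application/database called GlobTgt.GlobTgt and a deliberately simplified area definition (the real area specs are the ones shown in the screenshots below):

create or replace transparent partition 'GlobaASO'.'GlobaASO'
   area '"Mar-99", "Units"'
   to 'GlobTgt'.'GlobTgt'
   area '"Mar-99", "Units"';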

       

       

       

Once the security and the target have been assigned, the next step is to define the areas. The area definition (which basically defines the slices of the source and target databases) should follow certain rules:

1. The number of cells chosen in the source and the target should match.
2. When the source and target dimension members match, just include them in the area definition; there is no need for additional mapping.
3. When the source and target dimension members do not match, the mapping editor should be used to map the individual members.
4. If a dimension is not included in the area, the partition inherently assumes that the dimension has a one-to-one match between the source and the target.

In our case, since we have the same dimension member names, we shall map just a single level-0 dimension member combination of the source with the target, as shown below.

       

Since we do not have any Customer dimension mapping, the partition assumes that every member in the source Customer dimension matches the corresponding member in the target Customer dimension. Now, let’s load some data into the source.

       

And let’s try to view the data in the target.

       

As you see, we are able to query the source data from the target. This is straightforward. Now, the next step is to define another area like the one above; this time, map the Apr-99 member from the source to the target. The basic reason for doing this is to understand whether the target does a dynamic aggregation across multiple partitions/multiple areas within a partition.

       

After doing this, let’s validate the partition and verify the data from the Excel add-in.

       

As you see, the target is able to aggregate data from 2 different areas of the partition dynamically. This is the value-add of a transparent partition.

One of the major differences between ASO and BSO transparent partition targets is that ASO does not support local data. Even if one loads data into the unmapped regions of the partition target, it will not be recognized by the ASO target. To validate this, let’s load data into the ASO target for the month of Jun-99.

       

After loading this data, validate the partition.

       

As you see, the warning message clearly states that local data is not supported. Even when we query from the Excel add-in, we do not get the local Jun-99 data.

       

In the next blog entry we shall see how a BSO cube behaves when used as a transparent partition target. I will also show how a BSO transparent partition target can be used for custom ASO write-backs.

Posted in All Posts, EPM, Hyperion Essbase | 1 Comment »

Hyperion Essbase 11.1.1.0 – Partitioning and its use cases – Transparent, Replicated and Linked partitions – Introduction

Posted by Venkatakrishnan J on March 17, 2009

I was working with a customer this week who wanted to find ways of tuning their existing Hyperion Planning BSO cubes. Multiple options were discussed, like loading parts of the BSO cube into an ASO cube, partitioning etc. Since they were on the 11.1.1.0 release of EPM, the task was even simpler and the options broader. One of the major advantages of the new release of Essbase is that quite a few excellent enhancements have been made with regard to partitioning. I thought it would be worth covering the various aspects of partitioning in Essbase through another series of blog entries. Today we shall see which partitioning techniques are available and which storage options (ASO or BSO) each technique supports.

Basically, Essbase supports 3 types of partitions:

1. Transparent Partition
2. Replicated Partition
3. Linked Partition

If you have worked with Oracle OLAP and are familiar with the partitioning techniques available there, you will realize that partitioning in Essbase is fundamentally quite different, though it achieves the same goal of providing more performance/scalability depending on the technique chosen. With Oracle OLAP, partitioning is generally recommended for bigger AWs; with Essbase, that is not always the case. Each of the partitioning techniques above has its own pros and cons, and they need to be understood in detail before using any one of them.

Transparent Partition:

This is an excellent partitioning technique that basically serves the same cube’s data through multiple outlines. The other advantage is that it can be used to provide a consolidated view of data from multiple databases. To understand this better, look at the screenshot below.

       

As you see above, a transparent partition is basically used to consolidate data from multiple databases. This technique is typically used to provide smaller versions of a bigger database. There can also be cases wherein a new database is created to store each year’s worth of data; a transparent partition then provides a seamless way of looking at all the years’ data without loading it completely into a new database. In addition, the transparent partition target can have its own local data as well. So, in effect, whenever a calculation is triggered in the target database, if the calculation requires data from the transparent partition source, Essbase automatically gets the data out of the source and combines it with the local data. The other advantage of this technique is that data loads can be done directly against either the data source or the data target. In the coming blog entries I will discuss a couple of use cases:

1. Using ASO as a transparent partition target
2. Using BSO as a transparent partition target

Replicated Partition:

This is generally the most commonly used partitioning technique; it copies a portion of the source database’s data to the target. This does not always ensure that users are looking at the latest data, as there is the possibility of the source and the target going out of sync. This technique supports data loads only on the source. If data is loaded into the target, that data will be overwritten by the next synchronization with the source. A transparent partition target can use a replicated partition target as a source, but a replicated partition target cannot use a transparent partition target as a source.
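The synchronization itself is a one-liner in MaxL. A sketch, with hypothetical source and target application/database names:

refresh replicated partition 'SrcApp'.'SrcDb' to 'TgtApp'.'TgtDb' updated data;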

       

       

Linked Partition:

Though this is called a partitioning technique, it does not provide the generic partitioning capabilities. It basically provides a link to other databases and, based on the mapping done while creating the partition, provides the capability to drill from a cell in the source database to a cell in the target database. Since it provides only a linkage, it does not have the limitations of replicated and transparent partitions. This is illustrated below.

       

We will go into the details of each of these partitioning techniques in future blog entries. Each of the techniques above has certain limitations and is supported only in certain configurations, as summarized below (this is for the 11.1.1.0 release).

       

As you see above, ASO is now supported as a partition target for both replicated and transparent partitions. This opens up a lot of tuning opportunities, especially in cases where both the complex calculation abilities of BSO and the quick aggregation capabilities of ASO are required.

Posted in All Posts, EPM, Hyperion Essbase | 3 Comments »

Hyperion Essbase 9.3.1 – Clearing regions in ASO cubes – Using Data Slices – Part 3

Posted by Venkatakrishnan J on March 11, 2009

We are at the final entry of the ASO data clearance blog series. In the blog entry here, I covered the new feature that was introduced in the 11.1.1.0 release, and in the blog entry here, I covered an approach that can be used to clear portions of an ASO cube in the 9.3 version. Today we shall look at another approach that can be used under certain conditions. Let’s start with a simple use case.

Use Case: Data is loaded into Essbase on a monthly basis, but within the same month data can be reloaded again and again (and can vary from the data loaded in the prior load). In such cases, how do we ensure that the last loaded data is always cleared before the new data is loaded into the ASO cube?

The approach we use today works only for the above use case. Since the requirement is pretty common, I thought it would be helpful for people who come across a similar requirement. ASO supports a concept called data slices. Data slices are nothing but individual portions of the database that can work in isolation. This concept is supported only in ASO cubes. For example, the screenshot below shows 2 slices: one main slice and one incremental slice.

       

We can have as many incremental slices as we want, and all these slices can be merged together into the main slice or into a single incremental slice. ASO provides the option to clear either the entire cube or the incremental slices alone. To solve the above use case, we put the volatile data (the current month’s data being loaded) into an incremental slice. During every load, the incremental slice alone is cleared and reloaded, without affecting the main slice. Once the data for the month has been frozen, the incremental slice is merged into the main slice as shown below.

       

Now, to understand this better, let’s first go ahead and load some sample data for the month of Apr-99 using the following MaxL.

alter database 'GlobaASO'.'GlobaASO' initialize load_buffer with buffer_id 1;
import database 'GlobaASO'.'GlobaASO' data connect as 'global' identified by 'global' using server rules_file 'UnitsLd' to load_buffer with buffer_id 1 on error abort;
import database 'GlobaASO'.'GlobaASO' data from load_buffer with buffer_id 1;

The above set of commands loads the data for the month of Apr-99 (this is determined by the rule file) into the main slice. We can confirm this by looking at the properties of the ASO cube.

       

As you see, since we loaded the data normally (without a slice), the number of incremental slices is zero. Now, let’s assume that we are moving into the month of May and have the requirement to reload the May data alone 4 or 5 times. To do that, let’s load the data again, but this time create a slice.

alter database 'GlobaASO'.'GlobaASO' initialize load_buffer with buffer_id 1;
import database 'GlobaASO'.'GlobaASO' data connect as 'global' identified by 'global' using server rules_file 'UnitsLd' to load_buffer with buffer_id 1 on error abort;
import database 'GlobaASO'.'GlobaASO' data from load_buffer with buffer_id 1 create slice;

If we look at the statistics again, you will notice that we now have one incremental slice.

       

Now, since we need to reload the May data again and again, we have to clear and reload the incremental slice (which does not touch the Apr-99 data sitting in the main slice). This is done using the MaxL shown below.

alter database 'GlobaASO'.'GlobaASO' initialize load_buffer with buffer_id 1;
import database 'GlobaASO'.'GlobaASO' data connect as 'global' identified by 'global' using server rules_file 'UnitsLd' to load_buffer with buffer_id 1 on error abort;
import database 'GlobaASO'.'GlobaASO' data from load_buffer with buffer_id 1 override incremental data;

       

As you see, the above statement has removed the older incremental May data and overwritten it with the new data set. Now, when we want to load the June data after freezing the May data, we need to merge the incremental slice into the main slice. That is achieved using the command below.

alter database 'GlobaASO'.'GlobaASO' merge all data;
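Note that merge all data folds everything, including the main slice, into a single slice. If you only want to collapse multiple incremental slices into one, leaving the main slice untouched, MaxL also supports the incremental variant:

alter database 'GlobaASO'.'GlobaASO' merge incremental data;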

       

Posted in All Posts, EPM, Hyperion Essbase | 3 Comments »

Hyperion Essbase 9.3.1 – Clearing regions in ASO cubes – Using rule files and Report Scripts – Part 2

Posted by Venkatakrishnan J on March 8, 2009

In the last blog entry here, we saw how the new feature introduced in the 11.1 release helps us clear regions of an ASO cube easily. But the majority of customers/users are still on the 9.3 or earlier releases of Essbase, and unfortunately those releases have no out-of-the-box way of clearing portions of an ASO cube. If you are on one of the earlier releases, you might find this and the next blog entry useful. Depending on the type of data clearance required, portions of an ASO cube can be cleared using 3 approaches:

1. Subtract the existing data with a new data set (an exact replica of what was loaded before) using rule files – for this to work, a backup of the older data that was loaded must be available.
2. Extract portions of the ASO cube using a report script or MDX and reload the extract into the same cube with a subtracting load.
3. Use incremental slices to hold the volatile data; clear the volatile slices and reload the data – this is applicable only if the prior data load needs to be cleared, and does not allow clearing data for an arbitrary point in time.

We shall discuss only options 2 and 3, since option 1 is pretty straightforward. Option 1 is also rarely used in practice, since we seldom have a copy of the already loaded data lying readily somewhere. So, let’s start with the second option. For illustration purposes, let’s assume that we have loaded 3 months of data (Jan to Mar) into the ASOSamp->Sample cube. In the month of April, it comes to the notice of the Essbase admin that a portion of the January data loaded earlier was not correct, and hence a reload for that month alone is desired. Furthermore, it is noted that the erroneous data originated from some of the “Electronics Online” stores, and that it affected the sales quantity (Units) for all types of digital cameras in the Western region. So, the first step in the data clearance activity is to take an extract of the previously loaded level-0 erroneous data using a report script. The report script is given below as an example (MDX could also be used, but right now extracting data directly via MDX is possible only from ODI or through the JAPI).

<Sym
{ SUPALL  }{ NAMESON  }{ TABDELIMIT  }{ ROWREPEAT  }{ NOINDENTGEN  }{ SUPMISSINGROWS  }
<COLUMN ("Measures")
"Units"
<SUPSHARE
<ROW ("Years","Time","Transaction Type","Payment Type","Promotions","Age","Income Level","Products","Stores","Geography")
"Curr Year",
"Jan",
"Sale",
"Check",
<DIMBOTTOM "Age"
<DIMBOTTOM "Promotions"
<DIMBOTTOM "Income Level"
<DIMBOTTOM "Digital Cameras"
<DIMBOTTOM "Electronics Online"
<DIMBOTTOM "West"
!

The output of the report script is a tab-delimited text file containing all the erroneous data.


The idea is to use this output text file to load the data back into the cube using a simple rule file. So, let’s fire a simple MaxL script that runs the above report script and exports the data into a text file.


export database 'ASOsamp'.'Sample' using server report_file 'Export' to data_file 'Export.txt';

Now, create a simple rule file using this text file as the input, as shown below.

       

Also ensure that this rule file subtracts from existing data.

       

Now, import this file back into the database; the load will subtract the values from the database, thereby clearing them, and the cube is ready for a reload. In the next blog entry we shall see the third and final approach: handling the data clearance activity through multiple slices.
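The reload itself is an ordinary MaxL import. A sketch, assuming the subtracting rule file created above was saved on the server as 'SubLd':

import database 'ASOsamp'.'Sample' data from server data_file 'Export.txt' using server rules_file 'SubLd' on error abort;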

Posted in All Posts, EPM, Hyperion Essbase | 3 Comments »

Hyperion Essbase 11.1.1.0 – Clearing regions in ASO cubes – Using MDX and MaxL – Part 1

Posted by Venkatakrishnan J on March 5, 2009

Until release 9.3.1, there was no automatic way of clearing portions of an ASO cube. This is probably one of the most common requirements, as there is always a possibility of the data being volatile. In the 11.1 release of Hyperion Essbase, a new feature has been added that allows users to clear portions of an ASO cube using a combination of MDX and MaxL. Let’s look at the various approaches for clearing an ASO cube in both the 9.3.1 and 11.1 versions, starting with the 11.1 release.

The 11.1 release basically allows for 2 kinds of data clearance from an ASO cube:

1. Logical delete – In this case, Essbase automatically creates another slice of data containing an exact negative of the actual data in the region. So, when end users query the region, they will in effect get zeroes for all intersecting cells. This is the recommended delete option, as it is much faster than a physical delete.
2. Physical delete – In this case, Essbase physically deletes the cells from the defined region. It is generally slower than a logical delete, as it has to do I/O to remove the cells.

For example, let’s take the ASOSamp->Sample ASO cube. A typical scenario is one where data is loaded every month, and there is always the possibility of a prior month’s loaded data being volatile. So, in order to load the data for a prior month again, it is necessary to remove that month’s data alone (for all intersections) and then reload it. In our example, let’s first take a look at an Excel add-in report for the month of Jan.

       

As you see above, we have 3 months of data loaded into the ASO cube. In the month of Mar, due to some error somewhere, we see the need to reload the Jan data. In BSO this is straightforward, as the Jan data can be cleared using a simple calculation script. In ASO, up through 9.3.1, it was not possible to remove the Jan data alone using a simple approach (we shall look at some 9.3.1 approaches in the next blog entry). With the advent of logical and physical deletes, this is now straightforward in ASO as well. Let’s do a logical delete first and see what happens.

The syntax for the command is

alter database <appname>.<dbname> clear data in region '{MDX Set Expression}' [physical]

The logical delete for Jan is done as shown below

alter database ASOSamp.Sample clear data in region '{[Jan]}';

       

Now, let us take a look at the Jan data in the Excel add-in.

       

As you see, this delete has not physically deleted the data. The actual logical delete is explained in the picture shown below.

       

Let us do a physical delete now.

alter database ASOSamp.Sample clear data in region '{[Jan]}' physical;

       

       

As you see above, the physical delete has actually removed the data from the cells. That’s the reason we see #Missing against Jan. The actual physical delete is explained below.

       

The above is easy. The example below shows a much more complex MDX-based delete.

alter database 'ASOsamp'.'Sample' clear data in region '{CrossJoin(CrossJoin([Jan].children,[Age].children),[Stores].children),[Geography]}';
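The region argument is simply an MDX set expression, so any valid set works. For instance, a sketch (using members from the standard ASOSamp outline) that clears the January data for Sale transactions only:

alter database 'ASOsamp'.'Sample' clear data in region 'CrossJoin({[Jan]}, {[Sale]})';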

In the next blog entry, we shall see how we can achieve the same functionality in the 9.3 version.

Posted in All Posts, EPM, Hyperion Essbase | 3 Comments »