Hyperion Essbase 9.3.1 – Clearing regions in ASO cubes – Using Data Slices – Part 3
Posted by Venkatakrishnan J on March 11, 2009
We are in the final entry of the ASO data clearance blog series. In the first entry, I had covered the new feature that was introduced in the 9.3.1 release. And in the second entry, I had covered an approach that can be used to clear portions of an ASO cube in the 9.3 version. Today we shall look at another approach that can be used under certain conditions. Let's start with a simple use case first.
Use Case: Data is loaded into Essbase on a monthly basis. But within the same month, data can be reloaded again and again (and this data can vary from the data loaded in the prior load). In such cases, how do we ensure that the previously loaded data for that month is always cleared before the new data is loaded into the ASO cube?
The approach that we would use today works only for the above use case. Since the requirement is pretty common, I thought it would be helpful for people who come across a similar requirement. ASO supports a concept called data slices. Data slices are nothing but individual portions of the database that can work in isolation. This concept is supported only in ASO cubes. For example, the screenshot below shows 2 slices: one is the main slice and the other is the incremental slice.
We can have as many incremental slices as we want. All these slices can be merged together to form a main slice or a single incremental slice. ASO provides the option to clear either the entire cube or the incremental slices alone. In order to solve the above use case, what we would do is put the volatile data (the current month's data being loaded) into the incremental slice. So during every load, the incremental slice alone will be cleared and reloaded without affecting the main slice. Once the data has been frozen for that month, the incremental slice would be merged into the main slice as shown below.
Now, to understand this better, let us first go ahead and load some sample data for the month of Apr-99 using the following MaxL.
alter database 'GlobaASO'.'GlobaASO' initialize load_buffer with buffer_id 1;
import database 'GlobaASO'.'GlobaASO' data connect as 'global' identified by 'global' using server rules_file 'UnitsLd' to load_buffer with buffer_id 1 on error abort;
import database 'GlobaASO'.'GlobaASO' data from load_buffer with buffer_id 1;
The above set of commands would load the data for the month of Apr-99 (this is decided in the rules file) into the main slice. We can confirm this by looking at the properties of the ASO cube.
As you can see, since we have loaded the data normally (without a slice), the number of incremental slices is zero. Now, let's assume that we are moving to the month of May and we have the requirement to reload the May data alone 4 or 5 times. In order to do that, let's load the data again, but this time by creating a slice.
alter database 'GlobaASO'.'GlobaASO' initialize load_buffer with buffer_id 1;
import database 'GlobaASO'.'GlobaASO' data connect as 'global' identified by 'global' using server rules_file 'UnitsLd' to load_buffer with buffer_id 1 on error abort;
import database 'GlobaASO'.'GlobaASO' data from load_buffer with buffer_id 1 create slice;
If we look at the statistics again, we would notice that we now have one incremental slice.
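If you prefer to stay in MaxL rather than checking the cube properties in Administration Services, the slice counts can also be queried directly. This is just an alternative way of confirming the same statistics; the exact columns returned can vary by release:

```
query database 'GlobaASO'.'GlobaASO' get cube_size_info;
```

The output includes the number of incremental data slices along with their cell counts, which should show one incremental slice after the load above.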
Now, since we need to reload the May data again and again, we would have to clear the incremental slice before each load (which will not affect the data already committed to the main slice). This is done by using the MaxL shown below.
alter database 'GlobaASO'.'GlobaASO' initialize load_buffer with buffer_id 1;
import database 'GlobaASO'.'GlobaASO' data connect as 'global' identified by 'global' using server rules_file 'UnitsLd' to load_buffer with buffer_id 1 on error abort;
import database 'GlobaASO'.'GlobaASO' data from load_buffer with buffer_id 1 override incremental data;
As you can see, the above statement has removed the older incremental May data and replaced it with a new data set. Now, when we want to load the June data once the May data is frozen, we need to merge the incremental slice with the main slice. That is achieved using the command below.
alter database 'GlobaASO'.'GlobaASO' merge all data;
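To recap, the entire monthly cycle can be sketched as a single MaxL script. This is only an illustration assuming the same 'GlobaASO' cube, 'UnitsLd' rules file and 'global' source credentials used above; the admin login is a placeholder:

```
/* Placeholder credentials - replace with your own */
login 'admin' 'password' on 'localhost';

/* 1. One-time load of frozen months into the main slice */
alter database 'GlobaASO'.'GlobaASO' initialize load_buffer with buffer_id 1;
import database 'GlobaASO'.'GlobaASO' data connect as 'global' identified by 'global' using server rules_file 'UnitsLd' to load_buffer with buffer_id 1 on error abort;
import database 'GlobaASO'.'GlobaASO' data from load_buffer with buffer_id 1;

/* 2. Repeat for every reload of the current month: rebuild the incremental slice */
alter database 'GlobaASO'.'GlobaASO' initialize load_buffer with buffer_id 1;
import database 'GlobaASO'.'GlobaASO' data connect as 'global' identified by 'global' using server rules_file 'UnitsLd' to load_buffer with buffer_id 1 on error abort;
import database 'GlobaASO'.'GlobaASO' data from load_buffer with buffer_id 1 override incremental data;

/* 3. At month end: fold the incremental slice into the main slice */
alter database 'GlobaASO'.'GlobaASO' merge all data;

logout;
```

As a side note, if the reloads include #MISSING or zero values that should not survive the merge, the final statement can also be written as merge all data remove_zero_cells; to drop zero-value cells while merging.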