Business Intelligence – Oracle

Archive for the ‘Oracle Data Integrator’ Category

Oracle Data Integrator 10.1.3.5 – Connectivity to Open LDAP of Shared Services

Posted by Venkatakrishnan J on June 9, 2009

One of the strengths of Oracle Data Integrator is its ability to connect to a wide range of disparate data sources using JDBC, and one such capability is exposing any LDAP directory as a relational source. If you are on an earlier release of Hyperion EPM like 9.3, where there is no out-of-the-box SSO and authentication/authorization integration between BI EE and OpenLDAP, one approach is to configure BI EE to authenticate against OpenLDAP and then get the user-group information from a custom table (or by using the DBMS_LDAP package). I had shown how to configure BI EE to authenticate against OpenLDAP here. Since BI EE cannot automatically pick up the groups directly from OpenLDAP in those releases, one way is to extract the user-group information from OpenLDAP and populate it into a set of custom tables; BI EE can then read the groups from those tables. The architecture, in short: ODI reads OpenLDAP and loads the custom user-group tables, and BI EE queries those tables.
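
As an illustration, the custom tables could be as simple as the following (a minimal sketch; the table and column names are hypothetical):

CREATE TABLE LDAP_USERS (
    USER_NAME  VARCHAR2(100) PRIMARY KEY
);

CREATE TABLE LDAP_USER_GROUPS (
    USER_NAME   VARCHAR2(100) REFERENCES LDAP_USERS (USER_NAME),
    GROUP_NAME  VARCHAR2(100)
);

BI EE could then populate its GROUP session variable from LDAP_USER_GROUPS in a row-wise initialization block.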

Let's look at what it takes to set up the OpenLDAP connectivity from ODI. As a first step, log into Topology Manager and create a new LDAP connection, choosing "Sunopsis JDBC Driver for LDAP" as the JDBC driver.

And then choose the JDBC URL.

To enable the connectivity to any LDAP directory, the password has to be passed in an encoded format. To encode the password, run the command below from a command prompt.

 
java -cp {OracleDI}\oracledi\drivers\snpsldapo.jar
com.sunopsis.ldap.jdbc.driver.SnpsLdapEncoder
<password of the OpenLDAP root user>
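
For illustration, assuming the OpenLDAP root password is root (the default, as noted later in this post), the call and its output would look something like this (the encoded value below is the one used in the URL that follows):

java -cp {OracleDI}\oracledi\drivers\snpsldapo.jar com.sunopsis.ldap.jdbc.driver.SnpsLdapEncoder root
KILAKMNJKKLHKJJJDDGPGPDB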

Copy the encoded password and use it in a JDBC URL of the following form:

 
jdbc:snps:ldap?ldap_url=ldap://localhost:28089/&ldap_password=KILAKMNJKKLHKJJJDDGPGPDB&ldap_basedn=dc=css,dc=Hyperion,dc=com

The basedn above is what would be used for searching all the users, groups, roles etc. In the Data Server definition, enter root as the username, since the root user has traverse access to the entire OpenLDAP directory.

You should be able to test the connection to the LDAP server from here. Note that the root user of OpenLDAP is different from the admin user. In fact, the admin user's original cn is not admin but 911; admin is just the givenName attribute of the 911 user. The root user password is root by default. One behavior I noticed across the releases is that in 9.3 the admin user had the traverse directory privilege, but in EPM 11 the 911 user does not. In my case the default root password did not work, so I had to reset the root user password from Shared Services.

As a side note, if you feel that the Shared Services web console does not give you a clear view of the actual LDAP directory structure, I would recommend a free LDAP client like JXplorer, which lays out the Shared Services OpenLDAP tree much more clearly.

Now go to the Designer and reverse engineer this data source using a selective reverse.

This converts the entire directory structure to a relational format. From this point onwards, it's a matter of building the interfaces and loading the custom user-group tables. Though the setup above is pretty straightforward, it can come in very handy, especially when you are trying to consolidate/report against multiple user sources.

Posted in All Posts, EPM, Hyperion Shared Services, OBI EE Plus, Oracle Data Integrator | 1 Comment »

Oracle Data Integrator 10.1.3.5 – Using OBI EE as a Data Source – Using BI EE JDBC Driver

Posted by Venkatakrishnan J on June 7, 2009

In a prior blog entry here, I had shown how to use BI Server as a data source for loading data from Essbase into custom tables. That approach used the Sun JDBC-ODBC bridge, which in most cases performs poorly. Today we shall see how to use the native BI Server JDBC driver to connect to BI Server and, in turn, use that for loading Essbase data into a custom table.

One of the major advantages of ODI is that it can connect to any data source that has a valid JDBC driver. To make ODI use the BI EE JDBC driver, we need to make the BI Server driver jar accessible to ODI. The jar file (bijdbc14.jar) can be obtained from the BI Publisher web server installation folder {OracleBI}\oc4j_bi\j2ee\home\applications\xmlpserver\xmlpserver\WEB-INF\lib. The reason we go to the BIP folder structure is that BIP uses the JDBC driver of BI Server to report against BI EE metadata.

The BI Publisher data source configuration provides the database driver class and the connection string for the BI EE JDBC driver. Now copy this driver jar file to the {OracleDI}\oracledi\drivers directory.
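
For reference, the driver class and URL are typically of the following form (9703 is the default BI Server listener port; treat these values as an assumption to verify against your own BIP configuration):

Driver class: oracle.bi.jdbc.AnaJdbcDriver
JDBC URL:     jdbc:oraclebi://localhost:9703/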

Now open up Topology Manager and create a new Universal Data Server, using the driver class and connection string above in the JDBC URL.

Ensure that you are able to test the connection using the Administrator username and password.

Now, in the physical schema choose any subject area in the Presentation Layer.

Once this is done, go to the Designer, create a Project, and import the necessary Knowledge Modules that I had shown in the last blog entry. Since we are using JDBC, we can do a selective reverse of all the necessary tables.

But be aware that BI EE supports multiple presentation tables with the same name across subject areas, while ODI currently does not support importing multiple tables with the same name (there is no BI EE specific RKM; this could be resolved by building a separate RKM). Also, though you might be able to reverse objects with special characters, they would not work within an Interface, so it is better to create aliases that follow ODI-supported naming conventions. Let's choose just 4 such tables: Basic, Market, Scenario and Year.

Now let's build a very simple interface wherein we insert Gen2, Market and the Measure value into a dummy table. Remember that in BI EE, whenever you bring in multiple tables from a single presentation subject area, no join conditions are needed; in ODI terms the tables are combined as a cross join, and BI Server resolves the actual relationship internally.
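
To make the no-join point concrete, the logical SQL sent to BI Server is just a plain select across the presentation tables with no join clause, along these lines (a sketch; the subject area name Demo and the column names are illustrative):

SELECT Year.Gen2, Market.Gen2, Basic.Measure FROM Demo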

If you execute this interface, you will notice that BI Server generates the corresponding MDX and the interface then inserts the output of that MDX into the database table.
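
The generated MDX would be roughly of the crossjoin form discussed later in this series (a sketch with hypothetical generation numbers and measure name):

SELECT
{[Measures].[Measure]} ON COLUMNS,
NON EMPTY {crossjoin([Year].Generations(2).members,[Market].Generations(2).members)} ON ROWS
FROM [Demo].[Basic]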

This should give you an idea of how to go about using multiple BI EE subject area tables in an ODI interface and then using that interface to load data into a database table. This method should also typically perform as well as your BI Server does, since it uses native JDBC connectivity as opposed to the JDBC-ODBC bridge.

Posted in All Posts, EPM, Hyperion Essbase, OBI EE Plus, Oracle Data Integrator | 3 Comments »

Oracle Data Integrator 10.1.3.5 – Using Oracle BI EE as a Data Source – Loading Essbase data into custom tables – Using JDBC-ODBC Bridge

Posted by Venkatakrishnan J on June 2, 2009

In the blog entry here, I had shown multiple ways of loading data into Essbase. Today we shall see an approach wherein one can extract data from Oracle BI Server using the JDBC-ODBC bridge. The idea is to use BI Server to import Essbase metadata; Oracle Data Integrator is then used to extract the Essbase data through MDX via the BI Server and load it into a custom table. Though there are no out-of-the-box RKMs to reverse engineer BI Server metadata, we can work around that using some other functionality. If you want to extend this, you can create your own custom RKM to reverse-engineer BI Server metadata tables.

To start with, let's first import a sample Demo->Basic cube into our BI EE repository. While creating the subject area for this cube, ensure that you do not have spaces/special characters in the column names. If you need the special characters for Answers, then add aliases with names free of spaces and special characters. For example, if you have "Gen1, Year" as one of your column names, add an alias named Year.

The idea behind doing this is to ensure that we can treat these tables as normal tables over the AnalyticsWeb ODBC connection. Of course, we could make ODI append double quotes when special characters are present in a column or table name, but for now let's go with this approach.

Once the aliases have been added, go to the Topology Manager and create a Universal Data Server using the Sun JDBC-ODBC bridge.

Create a physical schema. Ensure that you are entering the correct subject area name in the schema.

Ensure that you are able to test the connection. Once this is done, create a logical schema and assign it to the Physical Data Server through the Global Context.

After this, go to the Designer and create a project. Import the following Knowledge Modules into the Project.

1. LKM SQL to SQL
2. CKM SQL

Then create a model. Since we do not have an RKM to reverse engineer the data sources automatically (though you could extend other RKMs for this purpose), for now let's manually enter the tables and columns in the Model.

Use the Aliases as the column names.

Now ensure that you are able to view the data directly from Essbase using this setup.

Notice that BI Server returns the data against the actual names (Gen1, Year etc.), so you can be sure that the aliases can be accessed even outside of BI EE using the ODBC connection. Now, to keep this simple, let's create a straightforward table called YEAR_DIM_ESS in any database schema (this will act as our target):

 
CREATE TABLE 
YEAR_DIM_ESS ( 
YEAR VARCHAR2(100), 
QUARTER VARCHAR2(20), 
MONTH VARCHAR2(20)); 

Import this table into another Model, and then create an interface mapping the columns from the source to the target. Ensure that you are using the SUNOPSIS_MEMORY_ENGINE as the staging area.

Now, if you execute the interface, you will have Essbase data loaded inside the database table.

This is another good option to consider if you have the full set of end-to-end Oracle tools like ODI, Essbase, BI EE etc. Today I have used the JDBC-ODBC bridge; later I will cover how to use the JDBC jar of BI Server directly within ODI.

Posted in All Posts, EPM, Hyperion Essbase, OBI EE Plus, Oracle Data Integrator | 3 Comments »

Oracle Data Integrator 10.1.3.5 and Hyperion Essbase 11.1.1.0 – Columnar ASO export using MDX

Posted by Venkatakrishnan J on March 24, 2009

I was on a customer call last week wherein the customer wanted to know of ways to export partial data from an ASO cube. The issue was that they were on version 9.3.1; since 9.3 does not support clearing regions of an ASO cube, they wanted a way to export partial data from the ASO cube and then reload it after clearing the cube. This is a very common scenario, and I thought I would blog about it here. As you would know, Oracle Data Integrator 10.1.3.5 now supports export of data from an Essbase cube using Report Scripts as well as MDX. We shall use that feature today to understand how MDX exports from an ASO cube work. It is also an interesting exercise considering that there can be different ways of framing an MDX query. So let's start with the GlobaASO cube that I have been using for the last 3 or 4 blog entries, and first take a look at the outline.

Basically this cube has data for the months of April, May and July.

Now, our intention is to take an export for the months of April and May from the ASO cube. There are a couple of options. One is to use a report script as shown below. But MDX is generally recommended over Report Scripts for ASO cubes.

<Sym
{ SUPALL  }{ NAMESON  }{ TABDELIMIT  }{ ROWREPEAT  }{ NOINDENTGEN  }{ SUPMISSINGROWS  }
<SUPSHARE
<ROW ("Time","Measures","Channel","Product","Customer")
"Apr-99","May-99"
<DIMBOTTOM "Measures"
<DIMBOTTOM "Channel"
<DIMBOTTOM "Product"
<DIMBOTTOM "Customer"
!
The above report script can be rewritten as MDX in 2 ways (the main difference between the 2 being that you would get a lot of #Missing rows in the first query). Both MDX scripts are provided below.


SELECT
{[Measures].[Units]} on AXIS(0),
NON EMPTY {[Customer].Levels(0).members} on AXIS(1),
NON EMPTY {[Apr-99],[May-99]} ON AXIS(4),
NON EMPTY {[Channel].Levels(0).members} ON AXIS(3),
NON EMPTY {[Product].Levels(0).members} ON AXIS(2)
FROM
[GlobaASO].[GlobaASO]

With
set [Time1] as '{[Apr-99],[May-99]}'
set [Channel1] as '[Channel].Levels(0).members'
set [Product1] as '[Product].Levels(0).members'
set [Customer1] as '[Customer].Levels(0).members'
select
{[Measures].[Units] } on columns,
NON EMPTY {crossjoin ({[Time1]},crossjoin({[Channel1]},crossjoin({[Product1]},{[Customer1]})))}   on rows
from [GlobaASO].[GlobaASO]

Even if we save the above MDX outputs as a CSV file, we get the member names wrapped within parentheses (depending on the MDX fired).

Now, let's go to Oracle Data Integrator and create an interface with this GlobaASO as the input. Our idea is to test both of the above MDX queries. So, first let's start with the first MDX query and insert its output into a dummy table.

Now, let's try the same thing with the other MDX query.

It does look like ODI takes into account all types of MDX queries (though in effect every MDX query works on the AXIS specification). The interesting fact is that BI EE chooses the ROWS and COLUMNS (basically AXIS(0) and AXIS(1)) style of query instead of the first style, where every AXIS apart from AXIS(0) comes back as a separate column. One of the main reasons for this is that the NON EMPTY clause does not work on individual members; it works only on actually existing tuples. So the second style of query, with the crossjoin, generates a result set without #Missing values (fewer rows returned). The more you analyze the BI EE and Essbase connectivity, the more you get to know how the connectivity has been designed. It also gives you insight into future product directions. For example, in EPM 11.1.1.0 a new feature has been added where saved sets can be used for the duration of the session; my guess is that this has been added with BI EE 11g in mind (based on the screenshots in the OOW presentations). Anyway, I am digressing. The above was more of an initial walkthrough of the ODI-Essbase MDX export capabilities. I will go into more detail on the Java API and MDX in future blog entries.

Posted in All Posts, EPM, Hyperion Essbase, OBI EE Plus, Oracle Data Integrator | 1 Comment »

Oracle Data Integrator 10.1.3.5 – Loading Data into Hyperion Essbase Cubes from Relational Data Sources – Using Rule Files

Posted by Venkatakrishnan J on February 13, 2009

I was on a call yesterday with an Oracle partner who was developing a custom data loading solution (for a complex reporting system involving Essbase, Oracle OLAP and Oracle 10g) for one of his customers. One of the issues they were facing was in using ODI to do data loads into Hyperion Essbase. Before I proceed further, I would like to point out John's article series here, which has the details on how to go about using ODI for loading into Essbase. The issue was that the partner was not comfortable using the JAPI-based load that ODI provides, as they felt the data loads could be much faster using traditional approaches. They were used to loading data through rule files using the SQL Interface, and were of the opinion that the SQL Interface performed better than the JAPI. Though that conclusion was not well founded, as there were no quantifying facts/metrics showing the JAPI load to be slower than the SQL Interface, the question made sense: the partner was more or less new to ODI and not comfortable using the JAPI for data loads. One of the features they did like about the ODI and Essbase integration was the ability to define Essbase connections, which gave them the flexibility of doing every data load through ODI interfaces. The partner basically wanted a way to create an ODI interface with no mappings to the source, where all the interface does is call a rule file through the SQL Interface and load the data. I thought I would blog about this since it is a fairly common requirement, considering that many Essbase customers use the SQL Interface rather than the JAPI. First, let us start with creating a physical schema using the Hyperion Essbase Technology.

The next step is to create a logical schema and establish a relationship between the physical and logical connections using a context.

Now, let's jump over to the Designer and create a model. While importing the objects, choose customized reverse and enter the relevant details.

Now, the next step is to create a custom project (or use an existing project); this project should have all the Knowledge Modules. Duplicate the "IKM SQL to Hyperion Essbase (DATA)" and give it a custom name, "IKM Essbase Data Load using Rule Files".

In the duplicated IKM, remove the steps "Load Data Into Essbase" and "Report Statistics".

The reason for removing these 2 steps is to ensure that the IKM does not use the JAPI to do the data loads. Now, let's create a new interface and include the fact table related to the Global cube as the target. Ensure that the staging area is the SUNOPSIS_MEMORY_ENGINE.

Now go to the flow control and choose the custom IKM that we created above. Since we have removed the 2 steps, Load Data into Essbase and Report Statistics, not all the options that you see for the IKM are relevant anymore; the only ones that remain relevant are the MaxL-related options.

Now, in order to use the SQL Interface to load through the rule file, create a MaxL script as shown below and save it as GlobalUnitsLoad.maxL

spool on to 'D:\GlobalUnitsLoad.log';
import database 'Global'.'Global' data connect as 'global' identified by 'global' using server rules_file 'UnitsLd' on error write to 'D:\GlobalUnitsLoad.err';

As you can see, the MaxL script spools the log to a specific file and calls the rules file to do a data load; when this load happens, it goes through the SQL Interface. Now come back to the Interface and enter the details in the options of the IKM. The MaxL script is called through the PRE_LOAD_MAXL_SCRIPT option.
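
For example (assuming the stock option names of the Essbase data IKM; only the script path is strictly required for this technique):

PRE_LOAD_MAXL_SCRIPT:         D:\GlobalUnitsLoad.maxL
ABORT_ON_PRE_LOAD_MAXL_ERROR: Yes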

Now save and run this interface.

Now you should be able to bypass the JAPI for your data loads. This is not always the recommended option, but nevertheless it is something that anyone who is new to ODI, and used to loading through rule files via the SQL Interface, will want.

Posted in All Posts, EPM, Hyperion Essbase, Oracle Data Integrator | 4 Comments »

Oracle Data Integrator Enterprise Edition – New bundling – ODI and OWB merged

Posted by Venkatakrishnan J on January 31, 2009

Well, this news has been floating around for some time and now it has been officially released. Oracle Warehouse Builder and Oracle Data Integrator have been merged into a single product called Oracle Data Integrator Enterprise Edition. This has a primary impact from a product pricing/purchasing standpoint: going forward, it is not possible to purchase OWB (Enterprise ETL) and ODI separately (I believe this is the case, though I am not completely sure). The official details are available below.

1. Oracle Data Integrator Enterprise Edition

2. Oracle Data Integrator – Statement of Direction

So, now with a single product (from a licensing standpoint) it is possible to deploy both OWB and ODI, which gives customers more flexibility in their deployments. But effectively both components will have their separate releases (according to the SOD above); the actual product merging will happen only after a separate individual release of each product. More details are available in the statement of direction.

Posted in All Posts, Oracle Data Integrator, OWB | 1 Comment »

Oracle Data Integrator 10.1.3.5 and Oracle OLAP – RKM and IKM – Using Oracle OLAP as a Target – Part 2

Posted by Venkatakrishnan J on January 6, 2009

In the blog entry here, we had seen how the new RKM for Oracle OLAP works. Today we shall look at the other KM, the IKM, which is used for loading data directly into Analytic Workspaces. I had noted earlier that the Oracle OLAP RKM just reverses the OLAP cube to its underlying source tables; it is not Oracle OLAP aware per se and does not support all kinds of data extract and load. This is the major difference between the Oracle OLAP RKM and the Essbase RKMs: the Essbase RKMs treat Essbase itself as a multidimensional data store, so all data extracts through MDX, Report Scripts etc. are supported. We shall look at how ODI uses the Essbase RKMs in a later blog entry. Let's proceed with our test of the Oracle OLAP IKM, which is used to load an Oracle OLAP AW cube. To test this out, we shall start with a new cube in the Global Analytic Workspace, mapped as described below.

In our case, UNITS_FACT_TGT is the table that is used as the source for the cube. We shall create another table, UNITS_FACT_SRC, which is used as the source to populate data into this target table. The idea is to simulate an environment wherein the target table is populated temporarily before loading into Oracle OLAP; the target table contains only the incremental data.

CREATE TABLE UNITS_FACT_SRC AS SELECT * FROM UNITS_FACT;
CREATE TABLE UNITS_FACT_TGT AS SELECT * FROM UNITS_FACT WHERE 1=0;

Now, let's create an interface using the above 2 tables (one as the source and the other as the target).

In the flow control, ensure that you are choosing the “IKM Oracle AW Incremental Update”.

This is very similar to a normal Oracle to Oracle data load. The only additions are the 4 options specific to AW loading, listed below (sample values follow the list).

1. REFRESH_CUBE – Triggers the data load into Oracle OLAP automatically once the load into the target table completes.
2. AW_OWNER – The owner of the Oracle OLAP AW.
3. AW_NAME – The Analytic Workspace name.
4. CUBE_NAME – The cube whose data needs to be refreshed.
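
For the Global AW used here, the values would be along these lines (names taken from the refresh code shown later in this post):

REFRESH_CUBE: true
AW_OWNER:     GLOBAL
AW_NAME:      GLOBAL
CUBE_NAME:    UNITS_FROM_ODI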

This IKM assumes a straight data load into Oracle OLAP; the data load itself cannot be customized. The only way to do that is to call custom OLAP-specific PL/SQL packages, though I was expecting that to be available in the IKM itself. Now let's execute the interface and see what it does under the covers.

Let us take a look at the OLAP Refresh step to see the exact code.

declare
xml_clob clob;
xml_str varchar2(4000);
isAW number;
v_awname varchar2(50);
v_owner varchar2(50);
v_cube varchar2(50);
v_dimension varchar2(50);
v_line varchar2(250);

cursor c1 is select DIMENSION_AW_NAME from
olapsys.all$olap2_aw_cube_dim_uses
where
aw_owner = v_owner
and aw_name = v_awname
and aw_logical_name = v_cube
and DIMENSION_AW_NAME<> 'TIME';
begin

v_awname := 'GLOBAL';
v_owner  :=  'GLOBAL';
v_cube   :=  'UNITS_FROM_ODI';

sys.dbms_aw.execute('AW ATTACH '||upper(v_owner)||'.'||upper(v_awname)||' RW');
DBMS_LOB.CREATETEMPORARY(xml_clob,TRUE);
dbms_lob.open(xml_clob, DBMS_LOB.LOB_READWRITE);
v_line := '  <BuildDatabase  Id="Action7" AWName="'||upper(v_owner)||'.'||upper(v_awname)||'" BuildType="EXECUTE" RunSolve="true" CleanMeasures="false" CleanAttrs="true" CleanDim="false" TrackStatus="false" MaxJobQueues="0">';
dbms_lob.writeappend(xml_clob, length(v_line),v_line );
dbms_output.put_line(v_line);
open c1;
loop
fetch c1 into v_dimension;
exit when c1%notfound;
v_line := '    <BuildList XMLIDref="'||v_dimension||'.DIMENSION" />';
dbms_output.put_line(v_line);
dbms_lob.writeappend(xml_clob, length(v_line),v_line );
end loop;
close c1;
v_line := '    <BuildList XMLIDref="'||v_cube||'.CUBE" />';
dbms_output.put_line(v_line);
dbms_lob.writeappend(xml_clob, length(v_line),v_line );
dbms_lob.writeappend(xml_clob, 18, '  </BuildDatabase>');
dbms_lob.close(xml_clob);
xml_str := sys.interactionExecute(xml_clob);
dbms_output.put_line(xml_str);
dbms_output.put_line('finish');
sys.dbms_aw.execute('AW DETACH '||upper(v_owner)||'.'||upper(v_awname));
end;

The above is PL/SQL: you can see calls to the DBMS_AW package (for attaching and detaching the AW), while the actual refresh is driven by the XML build document assembled above and handed to the OLAP XML API. This can be customized according to our needs, but not from the front end; it would have to be changed in the backend class files of ODI.

After I saw how the RKM worked, I was pretty sure how the IKM would work, and the above confirmed my guess. In a future blog entry I will compare this with the Essbase-specific RKMs and IKMs.

Update: John Goodwin has an excellent series of articles on ODI and Essbase/Planning connectivity. I would recommend everyone check that out. Thanks, John, for letting me know.

Posted in All Posts, OLAP, Oracle Data Integrator | 2 Comments »

Oracle Data Integrator 10.1.3.5 and Oracle OLAP – RKM and IKM – Using Oracle OLAP as a Source – Part 1

Posted by Venkatakrishnan J on January 4, 2009

As you might be aware, the latest version of ODI, 10.1.3.5 (and also the prior release), supports a new set of KMs which can extract and load data into Oracle OLAP cubes. We shall try to understand how this connectivity works in this release. There are 2 aspects to it:

1. Using Oracle OLAP as a data source – This is supported by a Reverse Knowledge Module called “RKM Oracle Olap (Jython)”.
2. Incremental data load into Oracle OLAP – This is supported by an Integrating Knowledge Module called "IKM Oracle AW Incremental Update"

Both the above KMs ship with the default install of ODI, but they have to be imported manually from the {ODI_HOME}\oracledi\impexp directory. We shall be using the Global Analytic Workspace for all our testing. This Global AW contains 4 dimensions:

1. Channels Dimension – Containing one normal Level Based Hierarchy
2. Customer Dimension – Containing one normal Level Based Hierarchy
3. Product Dimension – Containing a value based hierarchy
4. Time Dimension – Containing the normal level based fiscal time hierarchy

Also, this AW contains 2 cubes, each having one measure.

To use this AW as our data source in ODI, we need to create the physical connection in the Topology Manager of ODI.

Once the physical connection is created, let's create a logical connection to denote the usage of this connection as a source.

Also, tie the physical and logical connections together by means of the Global context; this ensures that the logical connection uses the physical connection defined above. Once this is done, let us log in to Designer and import all the out-of-the-box KMs provided by ODI.

As a next step, let's create a model and import this AW using the RKM imported above. While creating the model, we have to use Oracle as the Technology and then assign the Logical connection that was created above.

In the Reverse tab, choose the context and use the customized option.

Ensure that you are choosing the RKM properly in the dropdown, and enter all the Analytic Workspace related connection details. Now, for the reversing operation to work, there are 2 jar files that are needed:

1. olap_api.jar
2. awxml.jar

These 2 files need to be copied from the Oracle database home, i.e. copy them from {ORACLE_HOME}\olap\api\lib to {ODI_HOME}\oracledi\drivers. The documentation states that awxml.jar alone is sufficient, but I was not able to get the reverse functionality to work with just that single jar; only after I copied olap_api.jar over to the drivers directory did the reverse start to work. Once the files are copied over, a restart of the Designer is required. After the restart, just click on Reverse from the Model that we created. The reversing operation is itself treated as an execution and can be tracked using the Operator.
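
On Windows this amounts to something like the following (paths as in the text; adjust them to your own homes):

copy {ORACLE_HOME}\olap\api\lib\olap_api.jar {ODI_HOME}\oracledi\drivers
copy {ORACLE_HOME}\olap\api\lib\awxml.jar {ODI_HOME}\oracledi\drivers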

Once the reverse operation is successful, we should see the imported objects in the Model.

If you observe closely, what we have obtained is not actually the OLAP object itself. Instead, the RKM has used the XML API (via awxml.jar) of the Global AW to reverse engineer all the dimension and fact tables from the mappings defined in the AW. We have obtained 3 dimension tables and 2 fact tables; one dimension table is missing, namely the value-based Product dimension. I believe the current version of the RKM does not support reverse engineering value-based hierarchies. So, the question now is: if we incorporate these imported dimension/fact tables into our interface, will we be extracting data out of the cube or from the fact tables? Unfortunately, I believe the current version does not support direct extraction of data from OLAP using these 2 KMs (if data is required directly from the cubes, one would have to create custom SQL views on top of the AW and then use them in the interface). To test this out, let's create a sample dummy fact table with no rows as shown below.

CREATE TABLE UNITS_FACT_TEST AS SELECT * FROM UNITS_FACT WHERE 1=0;

We shall use the above table as our target in the interface; for the source, we shall use the imported UNITS_FACT table.

As you see, we are treating this as a normal Oracle table in our interface. So, when we execute this interface, the data extract happens only from the underlying fact tables instead of the cube AGGMAPs. I was expecting a more advanced form of the KMs that supports Oracle OLAP natively. But considering that the cube-level data can be exposed through SQL views, this is not much of a hindrance, as it is possible to use such views to extract data directly from the cube (a sketch of one follows below). Nevertheless, it will be interesting to see what the Incremental Update IKM does; my hunch is that it will support data load into the fact tables and in turn kick-start the cube aggregation. We shall get into more details on the AW data load using ODI in a future blog entry.
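
On the SQL-view approach mentioned above: such views are typically built with the OLAP_TABLE table function. The sketch below assumes hypothetical AW object names for the units measure and its time dimension; the limit map in particular would have to match the actual objects in the GLOBAL AW.

CREATE OR REPLACE VIEW UNITS_CUBE_V AS
SELECT *
FROM TABLE(OLAP_TABLE(
       'GLOBAL.GLOBAL DURATION SESSION',  -- owner.aw_name to attach
       '',                                -- no predefined ADT; columns come from the limit map
       '',                                -- no OLAP DML to run on attach
       'MEASURE units AS NUMBER FROM units_cube_units
        DIMENSION time_id AS VARCHAR2(30) FROM time'));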

Posted in All Posts, OLAP, Oracle Data Integrator | 2 Comments »

Oracle Data Integrator 10.1.3.5 – Configuring Metadata Navigator & Lightweight Designer

Posted by Venkatakrishnan J on December 30, 2008

I had the opportunity to dedicate some of my time to Oracle Data Integrator 10.1.3.5 today. Oracle Data Integrator is a very interesting ETL tool since it is based on an ELT architecture, like Oracle Warehouse Builder. But apart from that, the similarities between these 2 tools end, both in terms of product usage and in terms of product jargon. Now that there are plans to introduce ODI as the back-end ETL tool for loading data into Hyperion Financial Management, Hyperion Planning, Hyperion Essbase (these 3 already exist now) and BI Applications, my interest in this tool grew a bit further. I have worked with both Informatica and Oracle Warehouse Builder in the past, and hence it was a very interesting prospect to analyze the similarities and differences across these toolsets. My intention is primarily to cover the connectivity to HFM, Hyperion Planning and Hyperion Essbase, which I will do in future blog entries. For an introduction to this tool, I would recommend Mark's series of blog entries here. But in today's blog entry we shall cover the new features (the web-based Metadata Navigator and Lightweight Designer) that were introduced in the 10.1.3.5 version of Oracle Data Integrator. Though the documentation is technically correct, certain details are missing; in this blog entry I will briefly explain how to go about configuring these 2 pieces as part of an Oracle Data Integrator installation.

Pre-requisites (for the below set of steps to work):

1. Oracle Data Integrator 10.1.3.5 should have been installed
2. The master repository should be created on an Oracle Database
3. A sample Work Repository called WORK REPOSITORY should have been created.
4. A sample OC4J instance. If you have BI EE installed, then just use that instance itself. These navigators are pretty lightweight.

Configuring Metadata Navigator:

Configuring Metadata Navigator is straightforward. All we need to do is deploy the corresponding WAR file in the OC4J instance.

1. Open the OC4J console and deploy the oracledimn.war (from the installation setup/manual folder) in the OC4J container.
2. Use oracledimn as the application context and click Next.

3. Deploy the WAR file with the default configuration.

4. Now we should be able to open up the Metadata Navigator console. The link to be used is http://<hostname>:<port>/oracledimn

5. Ensure that you have configured your work repository, and that you are able to log in to the work repository using the Designer java client. In order for the web app to use the same repository connection details, just copy the {OracleDI_HOME}\oracledi\bin\snps_login_work.xml file to the {OC4J_HOME}\j2ee\home\applications\oracledimn\oracledimn\WEB-INF directory. This will ensure seamless login of the Metadata Navigator into the repository.

6. Restart OC4J. Now, we should be able to browse the metadata navigator.

As you see, we have the entire metadata repository (Topology) in the web-based console. One very good thing is that this is very lightweight and much easier to configure than the OWB metadata repository. Currently this is just a navigator; online updates are not supported. This, I believe, is a big difference between Informatica and the 2 ELT-based tools: Informatica currently supports online updates to the repository, Integration Service etc.

Configuring Lightweight Designer:

Configuring the Lightweight Designer involves more detailed configuration than the Metadata Navigator, and some of the steps are not explained in detail in the configuration guide. Let's go through them one by one.

1. Deploy the oracledilwd.war in the same OC4J container as above. Use oracledilwd as the application context. Ensure that you are able to access the login page of the online designer using the URL http://<hostname>:<port>/oracledilwd

2. Since we are using Oracle as the repository database, we need to copy the Oracle JDBC jar file to this application. So, copy ojdbc14.jar from {OC4J_HOME}\j2ee\home\applications\oracledimn\oracledimn\WEB-INF\lib to {OC4J_HOME}\j2ee\home\applib. The ojdbc14.jar would have been deployed automatically when we installed the Metadata Navigator above; if you do not have the navigator, you can get the jar file from the ODI or Oracle DB installation directory.

3. Restart OC4J and open up the OC4J enterprise manager. Go to the Administration tab and then to the JDBC Resources configuration screen.

4. We need to create 1 connection pool and 2 data sources. Let us start by creating the connection pool. Give it the name ORACLE_MASTER, though the name does not actually matter in this case.

Ensure that you are also providing the schema details of the master repository and test the connection. It should be successful. This will validate the jar file copy that we did above.

5. Now, we need to create the 2 data sources, one with the name ORACLE_MASTER and the other with the name ORACLE_WORK. The JNDI path is very important: the data sources must be at jdbc/ORACLE_MASTER and jdbc/ORACLE_WORK (as referenced in the directives below); if this is wrong, the connection will not happen.

The name of the connection pool itself can be anything; in our case it is ORACLE_MASTER.

6. Go to {OC4J_HOME}\j2ee\home\applications\oracledilwd\oracledilwd\WEB-INF and edit the web.xml file, adding the below XML directives.

<resource-ref>
    <description>OracleDI Local Repository</description>
    <res-ref-name>jdbc/ORACLE_MASTER</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
    <res-sharing-scope>Unshareable</res-sharing-scope>
</resource-ref>
<resource-ref>
    <description>OracleDI Local Work Repository</description>
    <res-ref-name>jdbc/ORACLE_WORK</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
    <res-sharing-scope>Unshareable</res-sharing-scope>
</resource-ref>

7. The above directives basically point to the JNDI path that we specified in step 5 above.

8. Go to {OC4J_HOME}\j2ee\home\applications\oracledilwd\oracledilwd and edit the Repositories.xml file. Add the below XML directive to the file.

<logins>
    <login name="OracleDI Local Repository">
        <master name="jdbc/ORACLE_MASTER" masterDriver="oracle.jdbc.OracleDriver"/>
        <work name="jdbc/ORACLE_WORK" workName="WORK REPOSITORY"/>
    </login>
</logins>

Ensure that the Work Repository name provided in the above directive matches your Work Repository name.

9. Restart OC4J. Now you should be able to log in to the Lightweight Designer application.

Posted in All Posts, Oracle Data Integrator | 4 Comments »