Monthly Archives: February 2015

First look at HFM

Hyperion Financial Management

Lighter, Faster, Simpler & Portable





There has been much anticipation and speculation about this delayed platform release, v11.1.2.4, particularly from the HFM community, as Oracle had indicated major revisions to the HFM ‘engine’ and a plan to set HFM free from its Windows shackles and make it platform independent.

This surely makes the release the most significant update to HFM delivered by Oracle since the ‘unlimited’ custom dimensions feature (we won’t get into the theoretical design debate here).

Since the release a couple of weeks ago there have been some good technical blog posts (John G covered a first look at installing the new release and the intricacies of installing on Windows 2012 Server; see: More to life than this…).
However, there is no real detail out there on what has happened ‘under the bonnet’ for HFM.

So let’s start with the highlights and take a look at the new engine and some stats.

Firstly, there is HFM’s new simplified architecture, with the application server components freshly ported to Java. Reworking these components in Java removes the reliance on Windows technologies like IIS and DCOM.

For the first time in its existence, this makes HFM “platform independent”. For customers, though, the choice for the moment is only between Windows servers and Oracle’s own engineered hardware, Exalytics; commodity Linux is not yet supported…

While the developers were tinkering with the ‘HFM engine’, they also:

  • fitted an optimised central data query engine, improving data retrieval used by the web UI, SmartView and Financial Reporting;
  • replaced the database ADO driver with an ODBC driver to improve database interaction, with claimed better performance against the Oracle RDBMS;
  • fitted new ‘multi-core scaling’ wizardry that ensures the system uses all available hardware cores while consolidating up the entity hierarchy;
  • fitted ‘SmartHeap’ technology to improve memory allocation and reduce heap thrashing;
  • redesigned the ‘CalcStatus’ code to store only the currencies in use, reducing unnecessary data storage and improving metadata and data loads;
  • replaced web services with ‘Thrift’, claimed to optimise the transfer of objects between the C++ server and the Java-based web tier.
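To make the ‘multi-core scaling’ idea concrete: entities at the same level of the hierarchy have no dependency on each other, so they can be consolidated in parallel, level by level, from the leaves up. The sketch below is purely illustrative (HFM’s real engine is C++ and its scheduler is not public); the entity names and figures are invented.

```python
# Illustrative sketch only: consolidate sibling entities in parallel,
# level by level up the hierarchy, using all available cores.
from concurrent.futures import ThreadPoolExecutor
import os

hierarchy = {                      # parent -> children (hypothetical entities)
    "Group":    ["Europe", "Americas"],
    "Europe":   ["UK", "FR"],
    "Americas": ["US", "CA"],
}
base = {"UK": 10, "FR": 20, "US": 30, "CA": 40}  # leaf-level data

def consolidate(entity, results):
    # A parent is simply the sum of its children here; real consolidation
    # rules (eliminations, translations, etc.) are far richer.
    children = hierarchy.get(entity, [])
    results[entity] = sum(results[c] for c in children) if children else base[entity]

def consolidate_hierarchy(root):
    results = {}
    # Build the levels top-down: [["Group"], ["Europe", "Americas"], [leaves]].
    levels, frontier = [], [root]
    while frontier:
        levels.append(frontier)
        frontier = [c for e in frontier for c in hierarchy.get(e, [])]
    # Walk the levels bottom-up; entities within a level run in parallel.
    with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
        for level in reversed(levels):
            list(pool.map(lambda e: consolidate(e, results), level))
    return results

print(consolidate_hierarchy("Group"))  # "Group" consolidates to 100
```

The key constraint the sketch captures is that parallelism is bounded by the shape of the hierarchy: each level must complete before the one above it can start.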

Some interesting statistics from Oracle product development on the optimisations achieved compared to the v11.1.2.1 code line:

  • reduced the number of files in HFM by 45%
  • reduced libraries by 88%
  • reduced total installed files by 94%

Diagram of HFM’s technology stack transition:

[HFM Architecture diagram]


Much like a Moto GP race machine, the developers have managed to make the ‘engine’ lighter, simpler and more efficient. In testing by Oracle’s labs using real client applications, the claimed result is performance improvements of 2-5x on the same hardware.

The performance claims alone are quite impressive for just an upgrade (with no application optimisation), and it is exciting to think what could be achieved by moving HFM onto an Exalytics box for the larger HFM clients.

The consolidation times comparison provided by Oracle:

  • Windows (previous release): 60 min
  • Windows (v11.1.2.4): 23 min
  • Exalytics X4 (v11.1.2.4): 13 min
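For what it’s worth, those quoted times do line up with the claimed 2-5x improvement; a quick check of the arithmetic:

```python
# Sanity-check Oracle's quoted consolidation times against the 2-5x claim.
# All figures in minutes, taken from the comparison above.
baseline = 60          # previous release on Windows
new_windows = 23       # v11.1.2.4 on Windows
exalytics = 13         # v11.1.2.4 on Exalytics X4

for label, t in [("Windows", new_windows), ("Exalytics X4", exalytics)]:
    print(f"{label}: {baseline / t:.1f}x faster")
# Windows: 2.6x faster; Exalytics X4: 4.6x faster -- both within the 2-5x claim
```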


Which version supports what is listed in Oracle’s Compatibility Matrix, which is usually not the easiest document to navigate if you just need to know whether SmartView works with the latest MS Office.

So it’s useful to know that the release now supports:

  • Browsers: Internet Explorer 9, 10, 11 and Firefox 31 ESR
  • Office 2013
  • Desktop: Windows 7, 8 & 8.1


Like any other dot-zero release, there are going to be some quirks and some undocumented features once we get into implementing. However, this is the kind of scenario in which skilled HFM consultants thrive: pushing the design to make the best use of the new features while also understanding the limitations.

In the next blog I will take a look at some of the new usability features in this release and the new utilities provided. Some of the new features are stated to be included only in certain patch set updates (PSUs), so I will examine the roadmap provided by Oracle to understand what will be available when.

Watch this space…



Seismi sponsor Hyperion SIG


Event date: 25 February 2015

We are delighted to be supporting the UK Hyperion community by proudly sponsoring the UK Oracle User Group (UKOUG) Hyperion Special Interest Group (SIG) on 25 February 2015.

This is the first of two Hyperion-focused events taking place this year organised by the UKOUG, an independent, not-for-profit membership organisation.

James Gordon, our Managing Director, will be presenting on a topic which is fundamental to our approach to delivering integrated financial applications: Unleashing the Power of Financial Master Data Governance. This presentation examines the critical area of financial master data governance and how a controlled approach to financial master data improves the quality and confidence in the numbers an organisation reports. We explore the benefits of effective Financial Master Data Governance and contrast these against the risks associated with allowing financial applications to move independently of one another.

We hope to see many of our clients and members of the community at the SIG. James will be available throughout the day and at the drinks social to answer any questions you may have. Feel free to ask about our complimentary review of your financial master data management processes if you manage to corner him!

Details of the event can be found here.

DRM and DRG

As you probably know by now, Oracle have released a new version of DRM. The first thing that strikes me is the new look: the buttons and elements are in the same places, but the colours and theme have been updated to give it a clean, modern feel. The next thing we noticed is that overall performance seemed better. Oracle have implemented a new, improved architecture optimised for multi-processor deployments on 64-bit hardware. The trade-off is that this new version is no longer compatible with 32-bit Windows operating systems. Applications now use a single engine and server, reducing the overhead associated with the multi-engine architecture of previous versions. The result is an application capable of handling many more concurrent processes with less hardware.

The new features delivered in this new version can be divided between DRM and DRG.

DRM New Features

To start, we had a look at the new hierarchy group exports feature. In DRM you can now define exports that run on all the hierarchies within a specific group. These exports always run from the top node of each hierarchy; you cannot select a specific branch as you can in normal exports, although this can be mitigated through effective use of filters. This new feature can be very useful for companies governing their chart of accounts in E-Business Suite or Fusion from DRM, as it makes it a lot simpler to define which hierarchies should be published to the value sets.

The second feature worth noting is that DRM imports now support reverse lookups, meaning you can take advantage of the lookup tables you have defined. The goal is to have a single point where you define the relationship between a system attribute and its meaning for users. For example, account types are stored in E-Business Suite as single characters (“A”, “L”, “E”, “R”…). In DRM you can have a user-friendly property for the account type (“Asset”, “Liability”, “Expense”, “Revenue”…) and a lookup property to transform that to the E-Business Suite format. Thanks to reverse lookups, you can import the chart of accounts from E-Business Suite with the single character, and the reverse lookup will automatically understand that “A” means your user-defined account property should be set to “Asset”. This may not seem like a major breakthrough, but it goes to the core goal of Master Data Management: centralising definitions in a single location.
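The mechanism boils down to maintaining one mapping and applying it in both directions. A minimal sketch of the idea (the property names, account codes and values below are invented for illustration, not DRM’s actual API):

```python
# One lookup table is the single point of definition: it maps the
# user-friendly value to the source-system code.
account_type_lookup = {   # user-friendly value -> E-Business Suite code
    "Asset": "A",
    "Liability": "L",
    "Expense": "E",
    "Revenue": "R",
}

# Export direction: friendly value -> EBS code.
def to_ebs(friendly):
    return account_type_lookup[friendly]

# Import direction ("reverse lookup"): the same table, inverted,
# decodes incoming EBS codes back to the friendly value.
reverse = {code: friendly for friendly, code in account_type_lookup.items()}

imported_rows = [("1000", "A"), ("2000", "L"), ("5000", "E")]
decoded = {acct: reverse[code] for acct, code in imported_rows}
print(decoded)  # {'1000': 'Asset', '2000': 'Liability', '5000': 'Expense'}
```

The point is that nothing is defined twice: change the lookup table and both the export and the import pick up the new mapping.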

Other noteworthy new features include substitution parameters in imports and exports to use runtime parameters, dynamic columns in exports to add columns without creating new properties, imports with no sections to ease the integration with upstream systems and the ability to set node types and validations assigned to an imported hierarchy within the import definition. These will all make the integration simpler to create and manage.

DRG New Features

Just like the general DRM interface, DRG requests have a new look. Requests now contain separate tabs to view items, comments, attachments, participants’ details and activity. Information is easier to access and, yes, you can now attach documents to a request! DRG also supports custom labels and property instructions, so you can adapt the label of a property to a specific workflow and add instructions for that property. These small changes go a long way towards making requests clearer for all the different stakeholders. Another important new feature is the separation-of-duties option, which enables workflows to enforce that an approver cannot have participated in any previous stage of the request. This is a good, simple way to ensure that every request is reviewed by at least two pairs of eyes!

The main new feature that many of us working with DRG have been waiting for is also included in this release: conditional stages. When building a workflow, you can now assign conditions to the execution of a stage. The condition can be based on properties or the result of validations, and you can also choose to apply the stage to the full request if at least one item meets the condition, or to split the request and apply the stage only to the relevant items. This new feature does exactly what it needs to.
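The routing logic described above can be sketched as follows. This is a hypothetical illustration of the two modes (whole-request vs. split), not DRG’s actual implementation; the function and item names are invented.

```python
# Hypothetical sketch of a conditional workflow stage: a predicate decides
# which request items need the stage, and the workflow either applies the
# stage to the whole request or splits it so only matching items go through.
def route_request(items, needs_stage, split=False):
    matching = [i for i in items if needs_stage(i)]
    if not split:
        # Whole-request mode: the stage runs if at least one item matches.
        return {"stage_runs": bool(matching), "requests": [items]}
    # Split mode: matching items follow the conditional stage; the rest skip it.
    skipping = [i for i in items if not needs_stage(i)]
    return {"stage_runs": bool(matching),
            "requests": [r for r in (matching, skipping) if r]}

items = [{"node": "4100", "account_type": "Expense"},
         {"node": "1200", "account_type": "Asset"}]
result = route_request(items, lambda i: i["account_type"] == "Expense", split=True)
print(result["requests"])  # the Expense item is routed separately from the Asset item
```

In split mode a single submission fans out into separate requests following different workflow paths, which is exactly what lets different approvers handle different item types.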

The next feature we tested is the request-from-file option. This allows users to create a request with multiple items from a file. The theory here is that a user can prepare a large request in a spreadsheet, save it in the correct format and load it into a request. DRG requires that similar commands are grouped into independent files; these files can then be loaded and combined into one request. In practice, this feature was a little disappointing: it does not look like you can automate this load into the DRG queue. As a result, a third-party system can generate a load file but cannot automatically submit the identified changes into the DRG process for onward processing. Hopefully this limitation will be addressed soon.

Note – After publication of this post, we were contacted by Oracle to let us know that it is possible to trigger the load of a file containing items of a DRG request from a batch file. The batch parameters required to do so were just missing from the documentation. This is great news as it means we will be able to automatically import master data from a third party system then trigger a set of requests to ensure that master data is enriched as required. Users will simply arrive in the morning and find the required actions waiting in their inbox. We will be testing and posting a review of that feature soon.

Finally, one point worth noting is that DRG is going mobile. This release is compatible with the Oracle EPM app which allows users to review and action DRG requests in their inbox directly from their tablet or smartphone. This same app can also allow you to interact with Hyperion Financial Management, Planning, Tax Provision and Financial Close Management. Obviously, access through a smartphone or tablet will depend on each company’s security policy and we would always recommend ensuring that these clients have the appropriate protection before allowing them to interact with core financial applications. However, provided this is done, this shows how Oracle is adapting to the new ways we work and interact.

In conclusion, this may not be a ground-breaking release like the one in which Oracle introduced the new DRG workflow and JavaScript-derived properties and validations. It feels more like a natural evolution into a finer-tuned version. The improved architecture, the clearer layout of information in DRG requests and the new conditional workflow stages are reason enough to look at adopting this release.



Offers extended due to popular demand!

We are pleased to announce that we are extending our special offers until May 2015 due to popular demand. Take advantage of our free Master Data Management Process Review or for those who already use Oracle DRM, our free Data Relationship Management Health Check.

For more details, don’t hesitate to contact us.

DRM Highlights

Having started our review of the new release, we thought we would highlight what we anticipate to be the most valuable features from both a business and a technical perspective. The full list of features is available here. From a business perspective, some of the most notable features are:

  • Conditional Workflow Stages – A workflow model can be configured to conditionally alter the workflow path for individual requests. You can include or exclude particular workflow stages depending on whether request items have certain property values or if they fail certain validations. You can also separate request items that require different approvers and split enrichment tasks into different requests to follow separate workflow paths.
  • Separation of Duties – You can configure workflow stages to require a separate approving user who has not submitted or approved for any other stage in the request.
  • Request Items from File – Request items can be loaded into a workflow request from an external flat file created by a user or source system. You can load request items during a Submit or Enrich stage. Source files may be loaded using the Web Client or Batch Client.

From a technical perspective the following three have real potential:

  • Improved Architecture – Data Relationship Management offers a streamlined application server architecture optimized for single machine, multi-processor deployments on 64-bit hardware. Each application utilizes a single engine and server, instead of the multiple engine and server configuration used in previous releases. These improvements result in higher concurrency of read operations, eliminate event traffic between engines, and reduce connections to and data transferred from the repository.
  • Reverse Lookup on Import – Lookup type properties can be selected for import section columns to perform a reverse lookup on column values being imported. The resulting value is stored in the defined property that uses the lookup property.
  • Hierarchy Groups in Exports – Hierarchy groups can be used to auto-select hierarchies for exports instead of having to manually select the hierarchies for each export. Each export profile is configured with a hierarchy group property and hierarchy group. When a hierarchy is assigned to a hierarchy group, the hierarchy becomes immediately included in all exports using the group.

We will update you on our progress once we have completed our full review. Watch this space!