A while ago I was asked to come up with a solution for Smart Merge. Now, I’m more a database developer than much else, so my first thought was: let’s do it on the DB. As I spent a lot of time coding this, and I have to say I’m not yet really happy with my Jython skills, I’m not sharing the code, just the idea. Hopefully it will help you get through developing it yourself.

I used two additional artifacts on the DB: one table and one view. The view is dropped and recreated during BefLoad. Basically, I created a table similar to TDATASEG; before a new load into TDATASEG happens, the rows for the pertinent CATKEY, PERIODKEY and ENTITY are copied into this auxiliary table.

The overview of steps executed:
I. In BefImport: data from TDATASEG for the previous relevant load is stored in TDATASEG_SM, and the view TD_TO_TDSM (which holds TDATASEG_SM minus TDATASEG) is created for the current CATKEY, PERIODKEY and ENTITY
II. In AftExportToDat: based on the view, we append the lines with NODATA
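To make the idea a bit more concrete, here is a hedged SQL sketch of what those two artifacts could look like. This is not my actual code; the column list, data types and key values are assumptions you would adapt to your own TDATASEG layout:

-- Sketch only: adapt columns, data types and key values to your own setup.
-- Keep a copy of the previous load for the relevant slice:
INSERT INTO TDATASEG_SM (CATKEY, PERIODKEY, ENTITY, ACCOUNT, AMOUNT)
SELECT CATKEY, PERIODKEY, ENTITY, ACCOUNT, AMOUNT
  FROM TDATASEG
 WHERE CATKEY    = 12                   -- hypothetical current category key
   AND PERIODKEY = DATE '2015-01-31'    -- hypothetical current period key
   AND ENTITY IN ('E100', 'E200');      -- entities present in the incoming file

-- Recreate the "difference" view: rows that existed in the previous load
-- but are no longer present in the current TDATASEG slice:
CREATE OR REPLACE VIEW TD_TO_TDSM AS
SELECT CATKEY, PERIODKEY, ENTITY, ACCOUNT, AMOUNT
  FROM TDATASEG_SM
 WHERE CATKEY = 12 AND PERIODKEY = DATE '2015-01-31'
MINUS
SELECT CATKEY, PERIODKEY, ENTITY, ACCOUNT, AMOUNT
  FROM TDATASEG
 WHERE CATKEY = 12 AND PERIODKEY = DATE '2015-01-31';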

The additional candy is a function that sorts through the file and builds a list of all the entities in it, so you can dynamically query which entities are present in the file.

Got questions about it? Just leave me a comment below and I will try to explain, without giving out the code 🙂

So what if we would like to export data to a file via the calc script DATAEXPORT function, and then feed the output file to FDMEE? How do we do that? Easy: call the calc script from a MaxL script, wrap that in a batch file, and call the batch from BefImport, so the file is there at the Import step. All of this builds on the knowledge Francisco shared a while ago on OTN.

But let’s look into the exact steps to take:
1) Create a simple MaxL script (which I encrypted, because reasons, so it’s called mxl.mxls):

2) Create a calc script that looks something like this (named Export, as seen above):

Why like that? Because I set up the data load rule (DLR) to use the file output.txt in the inbox:

3) Then make a batch file (mxl.bat) with the following content – just feeding Essbase back its private key so it can decrypt the script for itself:

4) As the last step, put the following code into the BefImport script:

Quick and painless!

I was helping out a colleague of mine, so I was investigating why a script that would export data in 11.1.2.2 was not exporting anything in 11.1.2.4. We started by only using:

Essbase would respond with:
Received Command [Calculate] from user [admin@Native Directory] using [EXP.csc]
DataExport detects Dynamic Calc member [KittenMbr] in the range. Exporting Dynamic Calc data may slow down performance.
This DataExport operation will export data from existing blocks only. Any FIX on sparse dynamic calc members will be ignored. Use DATAEXPORTNONEXISTINGBLOCKS ON option to export data from all potential blocks.
Data Export Completed. Total blocks: [15]. Elapsed time: [0.023].
Total Number of Non-Missing cells exported: [60].

Now, this is a cube with many sparse members set to Dynamic Calc – I have no influence over that part – which was mitigated in the old version by using:
DataExportDynamicCalc ON;
DATAEXPORTNONEXISTINGBLOCKS OFF;

Same result. Hmm interesting.

After two minutes on Metalink we were able to find Document 2123909.1, where Oracle development explains the changed behaviour. Short story: you are supposed to use them as:
DataExportDynamicCalc ON;
DATAEXPORTNONEXISTINGBLOCKS ON;

After using them as above, I get this response from Essbase:
Received Command [Calculate] from user [admin@Native Directory] using [ESC.csc]
DataExport Warning: This DataExport operation will export a total of [3958416] blocks. Exporting dynamic calc members from all blocks has significant performance overhead. Use DATAEXPORTNONEXISTINGBLOCKS OFF option to export data from existing blocks only.
Data Export Completed. Total blocks: [45]. Elapsed time: [508.234].
Total Number of Non-Missing cells exported: [180].

Do I even have to add: be careful with this? If nothing else is an indication, just look at how much more time the export took, for still such a small amount of data.

I’ve already written partially about this, but now I’m trying to solve it by adjusting the listing.xml that LCM uses to import artifacts. LCM moves most of the needed stuff between environments correctly, with a few exceptions, organizing folders by date of creation being the main one.
Now this is a dummy case, but it could happen – especially in older versions than the one I’m testing this on – that when importing from Acceptance to Production you end up with a similar mess:

When you look at listing.xml, it will look like this:

The circled lines are the only ones that need adjusting, both on the Folder side and on the Forms one. This is what they will look like once re-ordered:

Once you have saved the changes, and after deleting the incorrectly ordered folders, simply import using the new listing file and ta-da:

With the introduction of version 11.1.2.4, it’s more and more relevant to use Firefox instead of IE with the EPM System. While testing the new version, the only problem I have run into is that, when trying to use the “Open in SmartView” functionality:
OpenInSmartView
you may encounter this:
Error

To fix it, for that one user you will need to add a registry key, something like this, pointing to the installation location of Smart View:

For this to take effect, restart Firefox. After the restart it will ask you whether it may install the extension, which should then be visible in the add-ons tab:
SVExtension

Hope this helps 🙂

Working with HP, sometimes you get to dig into the Planning DB; here is my latest story. It might be good to mention: don’t do this at home alone, without supervision by an adult.

I have been dealing with an application sitting on an older Hyperion version (x.2.1), and when using LCM, it decided to re-order folders as it wished. “Move” did not do much in the sense of letting me re-order them. Here is a view of me trying to import the data forms – by the way, you might notice it’s not the same version, as my demo lab is x.2.3.

In LCM:

LCM

In HP after import with LCM (AppX*):
InPlanning

Which leaves me with a nice mess. So I decided to have a look at what happens in the background.

As I had to start somewhere, I first had a look at a select statement to get the counts of the most common object types:
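Something along these lines, a sketch assuming the standard HSP_OBJECT repository table:

-- Sketch: count the most common object types in the Planning repository
SELECT object_type, COUNT(*) AS object_count
  FROM HSP_OBJECT
 GROUP BY object_type
 ORDER BY object_count DESC;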

On one of the apps I’m running in my x.2.3 lab, it gives this result:
Result01

What about the folder count per parent? To be able to compare two environments, you need the name and not the ID, as the ID is probably different from app to app.

So let’s run the query, which just gives us the count and the names. All the queries from here on were run on our AppX database:
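A sketch of such a query (the folder type code below is an assumption, so verify it against your own repository first):

-- Sketch: folder count per parent, by name rather than by ID
SELECT p.object_name AS parent_folder, COUNT(*) AS folder_count
  FROM HSP_OBJECT o
  JOIN HSP_OBJECT p ON p.object_id = o.parent_id
 WHERE o.object_type = 1            -- assumed type code for folders; verify in your repository
 GROUP BY p.object_name
 ORDER BY folder_count DESC;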
Result02

But let’s go back to the problem that brought me here. You might already know, or have noticed while reading this, that the important column is POSITION, as that is how Planning knows how to sort the folders. Let’s take the “two ends” of the folders, just for comparison – as you can see, the positions start at 1:

If you don’t know the “Forms” object ID, you can always do this to see the second generation:
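Something like this should do it, assuming the usual GENERATION and POSITION columns of HSP_OBJECT:

-- Sketch: list generation-2 objects to spot the OBJECT_ID of the "Forms" root folder
SELECT object_id, object_name, object_type, position
  FROM HSP_OBJECT
 WHERE generation = 2
 ORDER BY object_type, position;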


Result03
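The re-ordering itself is then just an update of the POSITION values under that parent. This is a sketch only: the object IDs and positions below are made up, and you should take a backup (and stop the Planning service) before touching the repository:

-- Sketch only: object IDs and position values are made up, adapt to your own folders
UPDATE HSP_OBJECT SET position = 1 WHERE object_id = 50001;   -- folder that should come first
UPDATE HSP_OBJECT SET position = 2 WHERE object_id = 50002;   -- folder that should come second
COMMIT;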

After the update has run:
Result04

If we now check our AppX, we see this:
InPlanningOrdered

You can very easily do something similar with task lists:
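Again only a sketch; the task list type code is an assumption to look up in your own repository first:

-- Sketch: task lists with their position, ordered the way Planning shows them
SELECT object_id, object_name, parent_id, position
  FROM HSP_OBJECT
 WHERE object_type = 208             -- hypothetical type code for task lists; verify before use
 ORDER BY parent_id, position;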

ID_and_POSITION_TaskList

On the other hand, this is how it looks when creating folders in Planning:
Step 1:
FolderCreation
Step 2
FolderCreationMove
Step 3
FolderCreationInFolder3
And step 4 looks like step 1, so it is working as intended. So if you want to re-order, you just have to do it with some SQL skill.

Here is a teaser for a session on COLLABORATE 15 that I encourage you to join, as I’m helping with the prep work a bit 😀

Collaborate

I will post some tricks you might need to do this on your own, if you want to try it out – after the session though 😀

As promised, the lessons learned:
I) The Essbase application on which we are reporting should preferably roll up to the top (Gen1).
II) Name the generations in Essbase – for less confusion:
Essbase_Named_Gen
III) Don’t flatten the account dimension – keep “Fact” – and change the setting on the Account dimension:
Accounts
IV) Tag one dimension as a time dimension:
Period
V) (“the other option”): This can be done through a Data Model (BI Publisher) created from the SQL statement of a report created in Analysis (copied from under the “Advanced” tab). Here is an example of such a SQL statement (which you can already adjust):
GetSQL

SELECT s_0, s_1, s_2, s_3, s_4, s_5, s_6, s_7, s_8, s_9, s_10, s_11, s_12 FROM (
SELECT
0 s_0,
"PRS"."Account"."Accounts" s_1,
"PRS"."Department"."G2: AllDepts and Settings" s_2,
"PRS"."Department"."G3: Department" s_3,
"PRS"."Product"."G2: AllProd and NoProd" s_4,
"PRS"."Product"."G3: Product" s_5,
SORTKEY("PRS"."Account"."Accounts") s_6,
SORTKEY("PRS"."Department"."G2: AllDepts and Settings") s_7,
SORTKEY("PRS"."Department"."G3: Department") s_8,
SORTKEY("PRS"."Product"."G2: AllProd and NoProd") s_9,
SORTKEY("PRS"."Product"."G3: Product") s_10,
"PRS"."IncStmt"."Fact" s_11,
REPORT_AGGREGATE("PRS"."IncStmt"."Fact" BY "PRS"."Department"."G2: AllDepts and Settings","PRS"."Department"."G3: Department","PRS"."Product"."G2: AllProd and NoProd","PRS"."Account"."Accounts") s_12
FROM "PRS"
WHERE
(("Year"."Year - Default" = 'FY16') AND ("Scenario"."Scenario - Default" = 'Budget') AND ("Version"."Version - Default" = 'Working Version') AND ("Period"."G2: Total Year and BegBalance" = 'YearTotal') AND ("Product"."G2: AllProd and NoProd" <> 'NoProd') AND ("Department"."G2: AllDepts and Settings" <> 'Settings') AND ("Account"."Accounts" IN ('Price', 'Volume', 'Revenue', 'COGS', 'Discount', 'Bonus', 'SalesExp', '')))
) djm ORDER BY 1, 10 ASC NULLS LAST, 8 ASC NULLS LAST, 9 ASC NULLS LAST, 7 ASC NULLS LAST
FETCH FIRST 65001 ROWS ONLY

A) Working with Data Model
Creating a Data Set
Create_Data_Model

In the Data tab:
I) Click View
II) Click “Table View”
III) (Optional) Either in the SQL or in the Structure tab, change the column names to make them easier to identify in the Mobile designer
Create_Data_Model02

Ready to save and use

The normal way, by using Subject Areas:
Create_Mobile_BISA

Using a BI Publisher Data Model:
Create_Mobile_BIPublisher

And this is how it looks in the end:
Mobile

While I thought I was done with my installation, it seems I was wrong: while working with HPCM, I wanted to test “design aggregation” and found this:

JavaFX

Hopefully I’ll get it before Friday – I’ll keep you posted.

So I looked at it again and found something I found funny when reading the following note from Oracle Doc ID 1608746.1. They say that if you want to obtain JavaFX, go here. No Linux edition there. So then you have a look at easconsole.jnlp and see this:
easconsole_jnlp

Why are all these Windows references jumping out now? 😀

So, no news… which in this case is not good news.

Better late than never: I got to install the new version of HP. It’s still not patched, but that will be my next step. For now I’ve got this:

In the installation I used: CentOS 5.11 and Oracle DB 12c.

Kernel info (you will be able to see it later on in the validation as well):
CentOS511

The uninformed surprise (as I did not read up on the new things in the DB, this bit was quite a surprise – both a positive and a negative one):
DB_EM_Express

I was surprised how easy the Hyperion bit was. The install was pretty much as always – next, next, next… – and the configuration, though I did it in a few iterations, was very simple and without any pain.

Validation (“Green…green everywhere”):
validation

At this point I had to see a new application in the system…as seeing EAS empty made me a bit sad.

HCSample seemed a good idea. At this point I wanted to see an LCM error, before anything else, so I imported the CalcMgr artifacts first just to see it fail:
LCM_Errors

Sounds correct… I did try to trick you, CalcMgr. Now, importing module by module works. But did they somehow improve the import process? Almost… almost.

LCM_RAF

…2 minutes later. Did I even configure FR? Nope 😀

Okay, let’s try this again. Delete the existing app and rules… and, just in case, the DS as well, so at least I know I started with a clean environment.

And ta-da:
LCM_All_In

After a further check – I still had to create the FR connections (5 in this case).
The FR book still does not work as expected.

On a further note, it all looked good in Planning (& FR), until I tried to look into CalcMgr.
CalcMgr
Turns out only some parts got imported. After a reimport on its own, many more artifacts are present now.

Last thoughts on this point:
– For now I would still NOT recommend importing the whole starter kit in one go… at least based on this experience.
– I’m using 8 GB on my virtual machine, running the DB, WebLogic and Hyperion, and it’s not slow – so kudos*.

*For Reference
Resources

So, as we were saying in January, this not very well-known feature of HP is not a miracle cure. But after spending some days trying to prototype it for an example with something like an 11-dimension application (plus all the attribute dimensions), I thought it would be good to add something to my previous post.

I had one HP application (one plan type), which is an EPMA application. After a few attempts I was able to create an application with a data model compatible with the HP one. I reused a few dimensions from the Shared Dimension Library and removed members, like the ones linked to SmartLists, from the Account dimension. I created two additional dimensions to describe the options we have in the SmartLists and deployed. Now, something I had been chasing around for a bit was the labels: if you read the documentation, you will notice that the name or alias of the new dimensions has to match the label of the SmartList members/options. This point is very, very important.

The application created is an Essbase BSO, deployed from EPMA. Now, I made it BSO just because… in fact, you should think about this very carefully, as there is a lot of sense in making it an ASO application.

This is an example from the prototype (I changed some names, but you get the point):


Something that took me a while to grasp is that the two SmartLists are now real dimensions, which means you need an intersection with at least one member from each of them to hold the data. So what we can do is add an “N/A” member for those combinations of data; in the prototype I just chose the first member of each list. In any case, what transpires is that these new dimensions would probably be dense… making everything “a bit” bigger, on both sides: source (HP) and target (Essbase BSO). Maybe ASO? Maybe custom mapping?

I explored the custom option as well: the thought was to export data from the HP application’s Essbase cube first, and re-map it based on what is in the metadata of the application. Here is a sample select statement for this:

SELECT entry_id, name, label
  FROM HSP_ENUMERATION_ENTRY
 WHERE enumeration_id = (SELECT enumeration_id
                           FROM HSP_ENUMERATION
                          WHERE name = 'SmartListName');

HSP_ENUMERATION contains all the SmartLists and their ID codes.
HSP_ENUMERATION_ENTRY holds the SmartList members, with their IDs, their labels and the ID that ties them to their SmartList.
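To give an idea of the re-mapping itself, here is a hedged sketch. The staging table EXPORT_STAGE and its SL_VALUE column are hypothetical, standing in for wherever you land the Essbase export; only the HSP_* tables come from the Planning repository:

-- Sketch only: EXPORT_STAGE / SL_VALUE are made-up names for the export staging data
SELECT s.account,
       s.entity,
       e.label AS smartlist_member,   -- matches the member name/alias of the new dimension
       s.amount
  FROM export_stage s
  JOIN HSP_ENUMERATION_ENTRY e
    ON e.entry_id = s.sl_value        -- the numeric SmartList value stored as data
 WHERE e.enumeration_id = (SELECT enumeration_id
                             FROM HSP_ENUMERATION
                            WHERE name = 'SmartListName');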

The point being: it is doable, and without THAT much effort. It would mean the source does not need to hold any value for the original SmartList; only the target would have the N/A member to map these slices of data to.

Anyway, a bit of a long post. And last but not least, my question is: does anyone have any “how to” they would like to recommend for this?