Mark Norton, 5 June 2014 01:24:44 p.m.
This will be a large post, currently in development.
Welcome to the IDIOM Decision Products Knowledgebase.
This blog is a growing base of knowledge that will help you get the most out of your IDIOM products. It offers comments on features, usage, approaches, background theory, helpful hints and other insights that will allow you to harvest outstanding ROI from your IDIOM product investments.
IDIOM’s senior practitioners contribute posts that offer advice from different viewpoints, providing a 360-degree view of IDIOM product usage. All posts are moderated to ensure that each viewpoint is consistent with evolving IDIOM best practice, but you should expect to see alternative approaches to the same subject from time to time.
Customer comment is welcome and encouraged. We will moderate any comments to ensure that they are consistent with an evolving best practice model, but we do hope to learn from your real-world experiences and we hope that you will participate and share them with us.
IDIOM started on the decision automation mission in 2001. We have developed some powerful approaches that competitors have only recently begun to emulate. These are now being captured in the Knowledgebase on a regular basis. With your help, this Knowledgebase will become the industry-leading resource for decision automation within commercial systems.
Thank you for visiting, and again welcome!
Mark Norton, 28 April 2014 10:00:58 a.m.
The current favored design pattern for IDIOM-related applications usually requires that each instance of the core transaction document be stored in a database. That is, over its life a document is stored many times.
Also, IDIOM Decision Manager is typically used to address large commercial problems. This results in transaction context documents that often grow to hundreds of thousands of nodes.
Multiple copies and large documents can result in bloated data storage and excessive DOM instantiation times. The point of this brief post is to highlight that full use should be made of attributes in schema design. In the early stages of development, elements can be used for flexibility, but in any long-lived design, elements that are in fact attributes (that is, atomic values) should be converted to schema attributes before work progresses on any final design.
Also, pay careful attention to naming standards, be consistent, and check for typos etc. immediately after the POC and at each regular rules release.
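As a minimal sketch of the storage impact (using only Python's standard library; the "Reading" document and its field names are invented for illustration), compare the serialized size of the same atomic values carried as child elements versus as attributes:

```python
import xml.etree.ElementTree as ET

def as_elements(reading):
    # Element-centric form: each atomic value becomes a child element
    # with its own open and close tags.
    node = ET.Element("Reading")
    for name, value in reading.items():
        child = ET.SubElement(node, name)
        child.text = value
    return node

def as_attributes(reading):
    # Attribute-centric form: the same atomic values become attributes
    # on a single element.
    return ET.Element("Reading", reading)

reading = {"meterId": "M-1001", "date": "2014-04-28", "kwh": "342.7"}

element_form = ET.tostring(as_elements(reading))
attribute_form = ET.tostring(as_attributes(reading))
```

Multiplied across hundreds of thousands of nodes and many stored copies of each document, the difference between the two serialized forms is material.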
Mark Norton, 15 April 2014 12:42:37 p.m.
The issue is that the Microsoft component that IDIOM uses to sign the Repository 'release' documents is unable to sign large XML documents on import/export, causing an out-of-memory exception. We are not sure what the upper limit is, but a recent repository of 68 MB generated an exception. The issue can be alleviated by switching to a 64-bit build, which can address more than 2 GB of virtual memory (unlike a 32-bit process), but even this may still cause problems.
Note that this has nothing to do with the physical amount of memory installed on a machine. This has been reported to Microsoft who have closed the issue under the "will not fix" category (http://connect.microsoft.com/visualstudio/feedback/details/663085/out-of-memory).
We have been using the same signing code for a decade, and the upper limit appears to vary between IDIOM builds for unknown reasons.
Suggestions to address this issue include switching to a 64-bit build, as noted above.
Mark Norton, 29 April 2013 09:03:56 a.m.
Idiom agrees strongly with this approach, which is described in greater detail in the Idiom-authored article on Modern Analyst here: http://www.modernanalyst.com/Resources/Articles/tabid/115/articleType/ArticleView/articleId/1354/Requirements-and-the-Beast-of-Complexity.aspx
Mark Norton, 28 March 2013 11:10:07 a.m.
In this interesting article, Matt Asay* describes a world that is withdrawing from the large legacy infrastructures (Oracle, IBM, SAP, and Microsoft are quoted). As the financial headwinds build, those that stay with these vendors are going to find a world of increasing cost for relatively less investment by the vendors in their products.
Assuming this to be true, the question becomes: ‘How do I protect myself and create options that allow me to move when I have to?’
IDIOM Decision Models are a technology-independent representation of core business IP. By building up a library of decision models that capture the organization's core IP, any move to alternatives, including open-source and/or cloud-based service infrastructures, is made dramatically easier. Simply regenerate the source code and (possibly) wrap it in different software wrappers according to the target infrastructure.
There are significant benefits that accrue before any transition takes place, so this is a positive move with a free ride on the ‘freedom to move’ when needed. With the corporate IP safely sitting in decision models, which are themselves represented in XML, logical English, and source code (Java, C#, others possible), there need never be another legacy system.
Mark Norton, 25 March 2013 05:13:47 p.m.
Idiom has regularly addressed ‘rules problems’ that have been beyond the capabilities of some of our better-known peers, regardless of their size and pedigree.
This post explains why we think this is occurring. Those of you who have worked with Idiom know that we do not claim to be a rules engine per se, notwithstanding our credentials in this space. We do claim to address the decision automation space. The key is the difference between rules and decisions in the preceding statements.
Idiom seeks to fully address the ‘big’ decisions that are required to manage the state of core enterprise assets – assets like pension fund accounts, insurance policies, patient episodes, passenger itineraries, and utility meters. For a background discussion on this topic please see our article on Modern Analyst, Requirements and the Beast of Complexity.
It is likely that a single ‘big’ decision will require hundreds, perhaps thousands, of intermediate decisions to be calculated and re-consumed in the overall decision-making process before the ‘big’ decision can be made.
What is less obvious is that in our experience, the bigger the decision, the more likely we are to need to manipulate the context data. That is, the ultimate decision cannot be fully determined until the context data has been transformed and re-evaluated, often more than once.
This ability to dynamically transform context data between bouts of decisioning is as important to being able to address the ‘big’ decision as are any of the more traditional business rules capabilities like condition/action tables, decision tables, or forwards and backwards chaining!
In fact, if you can’t make these transformations ‘on-the-fly’ within the decision-making process, then you are limited to addressing ‘little’ decisions within a staged, traditionally coded application. Complexity, cost, and risk have just gone up by an order of magnitude. More importantly, addressing the ‘big’ decision as a delegated task is now beyond our target user, the SME and/or analyst; this fundamental goal becomes unreachable.
A recent example can be used to make this point:
A utility company runs power meters. The ‘big’ decision is: 'Is the meter working correctly?'
Before any assessment can be made, the traditional irregular billing and power consumption patterns need to be normalised into standardised, artificial billing periods, and then aligned with locally standardised consumption. Only after this can we apply rules to detect likely meter errors, which is a precondition to any meaningful decision making about meter status and rectification.
If the utility SME and/or analyst is to be able to meaningfully assess the raw context data (that is, the prior billing) and resolve this into useful decisions (like ‘the meter is not working correctly, go and fix it!’), then transformation into standardised data that is aligned on like date-time boundaries, and which is smoothed according to local variations in consumption, is a necessary part of the decisioning process. This would not fit into a traditional business rules paradigm (and certainly not rete). Ergo, business rules cannot achieve decision automation, and decision automation is ‘more than just business rules’.
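A toy sketch of the normalisation step described above (Python; the meter readings are invented, and real consumption smoothing would be far richer than this even-spread assumption) re-slices irregular billing periods onto standard calendar boundaries:

```python
from datetime import date

# Hypothetical irregular billing reads: (period_start, period_end, kwh).
reads = [
    (date(2013, 1, 5), date(2013, 2, 20), 460.0),
    (date(2013, 2, 20), date(2013, 4, 2), 410.0),
]

def daily_rate(start, end, kwh):
    # Assumption: spread each read evenly over the days it covers.
    return kwh / (end - start).days

def usage_between(reads, start, end):
    # Sum the portion of each read that overlaps the window [start, end).
    total = 0.0
    for r_start, r_end, kwh in reads:
        overlap_start = max(start, r_start)
        overlap_end = min(end, r_end)
        days = (overlap_end - overlap_start).days
        if days > 0:
            total += days * daily_rate(r_start, r_end, kwh)
    return total

# Re-slice the irregular reads into a standardised calendar month,
# ready for rules that compare like periods with like.
feb_usage = usage_between(reads, date(2013, 2, 1), date(2013, 3, 1))
```

Only once every period has been standardised in this way can rules meaningfully compare consumption across periods to detect likely meter errors.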
Mark Norton, 8 February 2013 09:03:22 a.m.
Bottom line: the issue is not the tool! Idiom’s major difference is in the approach to building a decision-centric system, and our tool is simply the best (but possibly not the only) implementer of our approach. With this approach, we have invariably developed large-scale decisioning systems much faster than customers have expected. Some referenceable examples include auditing for a $AUD30bn pension fund, underwriting for one of the world’s largest insurers, and billing for the 47 hospitals and 121 outpatient clinics of the Hong Kong Hospital Authority.
The risk is in the approach, so that should be the primary focus. A traditional development approach leaves the full complexity of the decision-making policies until later in the development cycle, and can only confirm the correctness of the developer's interpretation of that complexity once the data dependencies are fully wired in and available to the Drools (or other Rete-based) engine. This inverts the development dependency: you need a system to wire the rules into, but you can’t know what the system must look like before knowing what the decision-making looks like when correctly modeled. This inverted dependency is the root cause of failures in complex decisioning projects (because you are building blind, and the subsequent complexity undoes you).
With the Idiom approach, we would expect to fully develop (i.e. model) the decision-making policies and test them completely independently of any application. This activity is an order of magnitude (literal use of the term) smaller than the application development build implied above, and is therefore both cost-effective and risk-averse. It is a business/analysis function, not a development function. When proven, we would then loosely couple the model(s) at the database or application service level as required. The implication, correctly, is that the rules may stay the same even if the system changes, and vice versa. The system reflects current technology choices, whereas the decision model(s) always reflect current business policy; thereafter, policy and application live in two distinct life-cycles.
Idiom delivers the decision-making complexity in a form that the policy makers can manage ongoing, more or less independently of the technical application and its developers. This policy focus is further supported by a new tool, the Idiom Decision Tracker, which traces and reconciles the bi-directional dependencies between policy documents (as defined in MSWord, Excel) and the decision models that are implementing those policies.
Some areas where Idiom differs technically from Drools are as follows:
1. Drools has no visible global rules model; Idiom has the Decision model and the Formula Palette:
· Drools implements Rete, a ‘condition/action’ paradigm in which the conditions and actions are atomic (they don’t know about each other; the role of Rete is to orchestrate them at run time), so that it is impossible to see and test their combined effects without testing the entire model in situ;
· Idiom builds a decision model where the linkages and dependencies between decisions are visible and manageable, and can be tested immediately and interactively even when incomplete.
· Drools is developer-oriented, and rules are programmed within a Java development environment (Eclipse); rules are defined in DRL, a text-based language (a Drools rule file is a text file) that is scripted by authors just as they would script Java itself.
· Idiom is business oriented, with rules ‘programmed’ by dragging ‘lego-blocks’ in a graphical model that is ‘more fun than playing golf’ – quote from the CEO of the company that delivered the rules for the Hong Kong Hospital Authority billing.
· Idiom’s decision model does for the rules what the data model does for a database!
2. Drools is tightly bound to the underlying system object definitions; Idiom is tightly bound to a data schema representing the domain problem:
· Drools wires in every required attribute with code at the method level. Any change to the required data requires a programmatic change to the application code base.
· Idiom defines all data using an XML schema – new data can be added without any programming change.
· This highlights a high degree of decoupling in the Idiom approach that can be made absolute with some smart technical design – without this decoupling, the ‘independent life-cycle’ concept is impossible to manage.
3. Drools' ability to manipulate the context data is very limited; Idiom can manufacture new, large and complex data objects as required within the rules execution process:
· Drools’ limited condition/action paradigm means that large scale manipulation of the context data cannot be done – for instance, dynamically creating intermediate data constructs that represent different data conditions over time.
· Idiom can generate large, virtual data objects from scratch (within the rules execution process) to synthesise different situations derived from the supplied context data. This allows Idiom to build multiple images of each transaction that are amenable to different rules, but which are not naturally visible in the inbound data (that is, it takes the rules-builder’s knowledge to make them visible).
An example of this is entitlement calculations where policy changes overlap with changes in claimant circumstances within individual cases through time, so that each unique time slice within one case may give rise to a different application of rules. This was also a critical complexity of the hospital billing problem above, where many contracts may concurrently apply to a single patient episode, overlapping over time.
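The time-slicing idea in this example can be sketched as follows (a simplified illustration; the labels are invented, and integers stand in for dates). Each elementary slice carries the set of policies and circumstances active within it, and so may attract a different application of rules:

```python
def time_slices(intervals):
    # intervals: list of (start, end, label) with half-open ranges.
    # Returns the elementary slices, each with the labels active in it.
    points = sorted({p for s, e, _ in intervals for p in (s, e)})
    slices = []
    for a, b in zip(points, points[1:]):
        active = {label for s, e, label in intervals if s <= a and b <= e}
        if active:
            slices.append((a, b, active))
    return slices

# Hypothetical overlap: a change in claimant circumstances part-way
# through the life of one policy version.
slices = time_slices([
    (1, 10, "policy-v1"),
    (5, 10, "claimant-moved"),
])
```

Here the case splits into two slices: one governed by the policy alone, and one where the changed circumstances also apply, so each slice can be decisioned separately.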
Mark Norton, 21 January 2013 10:43:53 a.m.
The Idiom definition of a decision model is as follows:
Defn: An ordered assembly of decisions that creates new and proprietary information to definitively determine an optimal course of action for the business.
Because of its discrete and autonomous nature, a decision model is managed externally to the computer applications that use it (see also decisioning).
A decision model is an automated proxy decision-maker that can directly control both systems and business behavior in a well-architected system. When we put decision models in the hands of a power business user who is the authorised owner, and by implication the ultimate subject matter authority for the relevant area of the business, we allow that user to directly control how the business responds to relevant events. These power users are the business users and subject matter experts who legitimately own the area of business that is controlled or influenced by the decision-making.
The Idiom Decision Manager allows decision models to be promoted into computer applications through IT-sanctioned processes that need not require hands-on IT intervention. Therefore, when a decision model is assigned to a power user, it is fair to say that the aspect of systems or business behavior controlled by that decision model has been delegated from the application to the responsible user. The decision model itself is the means of delegation.
This delegation of direct, hands-on control of the programmed components (the decision models) that make the decisions in applications reclaims business control for the authorised business owners without compromising system integrity. Having a single source of truth (the business owner) for both the development of business policy and the decisioning that implements that policy reduces both business and IT risk while improving business agility and decision making transparency. At the same time the shorter change cycle implied ensures lower costs and improved timeliness.
Mark Norton, 18 September 2012 04:30:37 p.m.
In response to a partner query, we have made some notes here regarding the relationship between decision models and data models.
Question: “We can’t build final decision models until the data model is stable, as the decision engine schema must match the data model?”
For a green-fields development, the relationship between the data and the decisioning is that BOTH will become more correct and more stable as understanding of the domain increases. Both disciplines are very useful analysis techniques, and both are rigorous, in that they are disciplined approaches to deriving an end point that can be verified correct before traditional systems development even begins. In this regard, decision/data modelling can be used as a form of provable requirements analysis preceding the traditional SDLC.
There is a loose and cyclic coupling between data and decision modelling as follows:
· The data needs of the decision models define the scope and context of the data models (defined as XSDs in Idiom).
· Data modelling then creates the vocabulary that we will use to build the decision models. Each context implied by the decision models becomes synonymous with a schema and its root element. All required data that is in scope is then normalised into the context schemas.
· As the decision definitions get more complete they will create explicit demands for new data elements that may need to be added to the schemas. This process of adding to the data models and decision models is iterative until the decision models execute all required use cases correctly (these use cases are supplied as XML documents that comply with the schemas).
· Successfully executing all use cases verifies both the data and decision models.
The resulting schema-defined, context-specific data models must align with the enterprise data model; that is, the enterprise data model must encompass the synthetic superset of all the schema models.
The enterprise data model will also collate other data elements that are required by the processes but not necessarily by the decision models (for instance, any un-enumerated text, identities and keys, audit data, etc).
If it is not a green-fields development and the application already exists, then we can start with the assumption that the current data model is correct (at least we know it is working, so it is correct with respect to its current usage). If any decision cannot be defined against it, then the business cannot make that decision unless it first addresses the data anomaly.
So for a decision model development in support of an existing application (for instance, for validations, supplementary calculations, or complete add-in products), the data model can be assumed to be correct and the schemas should be drawn from it. In this case the existence (or lack) of data is a constraint on the scope of decision modelling, and not the other way around. Idiom would argue that a constraint on decision-making implies a failure on the part of the existing application, and is a strategic issue whose importance is directly derived from the importance of the constrained decisions.
Question: “If so, what is the role of the Idiom Database Mapper? For example could we build generic decision engines using aliases in the schema and map the aliases to the database?”
The mapper can map the global data structures in the enterprise data-model onto the transaction specific schemas required for the decision models using aliases if required. This can also be easily achieved by any competent development tool (Visual Studio for instance). The specific benefit of the mapper is tight integration with the decision model execution cycle and very high performance while doing so.
Question: “If not does this mean that if I build a portfolio of standard decision models then they have to be partially rewritten for each new site?”
That is an option. On the assumption that your sites are running a common application, you could build a standard set of decision models based on the common elements of the application, to be run in series with possibly bespoke decision models that address any customised portions of the application. Or, write a standard series of decision models and include a transformation step as a precursor. The transformation can include tailored mappings, managed via a decision model, to accommodate non-standard data representations in dissimilar databases, mapping to a common schema for the standardised decision models. However, this presumes that the required schema-defined data structures are logically contained within, and can be distilled from, the application database.
Mark Norton, 11 October 2011 10:41:52 a.m.
XML System Design Principles
This post highlights the benefits of using XML as the core of a systems design philosophy.
Idiom promotes XML-centric design approaches. These approaches underlie the Idiom Decision Manager and Idiom Forms.
The design impact of an XML centric approach can be significant and positive.
What do we mean by an XML centric approach?
An important characteristic of an XML document is that with an appropriately designed schema (that is, an xsd) it can carry a complete and substantial (hundreds of thousands of nodes is not uncommon) transaction context within a single document. This means one data artifact for the transaction – not tens, hundreds or thousands of tuples that might need to be designed, captured, and otherwise managed in a traditional relational database representation of the same transaction.
In an XML centric approach, this transaction XML becomes a single data artifact that can be stored in a database. Contrary to popular belief, this blending of XML and relational technologies is not necessarily detrimental to database search performance. Modern databases like SQL Server allow for database supported XML datatypes and blended XPath and SQL queries. Performance is enhanced by the huge reduction in database reads required to acquire the transaction data. Similarly, bloat that is sometimes presumed to occur with XML syntax is more than compensated for by the reduction in the number of tables, and their associated indexes, that are otherwise required to hold the transaction data.
The bottom line is that for a business with complex transactional data (Idiom customer examples of this sort of transactional data include: finance, insurance, logistics, health admin, clinical health, Telco servicing and billing, government policy) an XML based approach can reduce the number of tables in the database by an order of magnitude. A rule of thumb suggests that 10 dependent entities are required to support each primary transaction entity – it is these dependent entities that are being collapsed into the XML transaction document.
It is important to recognize that no denormalization is implied here: the XML schema should be fully normalized, and the document that complies with the schema will be placed in its proper context in the database. The entire fabric of normalization is maintained.
The data that remains explicitly declared in the database includes the full set of reference data, and all other data that lives outside of and supports the core business transactions. For an extended discussion regarding the primacy of this transactional data, please see our article on Modern Analyst or our original 2006 decisioning article published by the Business Rules Journal.
On the database tables that store the transaction XML, we should retain sufficient database-defined attributes to support the following:
· Locating and identifying the transaction;
· Managing transaction state.
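A minimal illustration of this pattern follows (using SQLite purely as a stand-in for a database with XML support such as SQL Server; the policy document, table, and column names are invented). Identifying and state attributes stay relational, while the complete transaction context travels as one XML payload:

```python
import sqlite3
import xml.etree.ElementTree as ET

# A hypothetical transaction document: one artifact for the whole policy.
policy_xml = """<Policy id="P-123" status="Quoted">
  <Insured name="A. Customer"/>
  <Cover type="House" sum="450000"/>
</Policy>"""

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE policy (
    policy_id TEXT PRIMARY KEY,   -- locating and identifying the transaction
    status    TEXT,               -- managing transaction state
    body      TEXT                -- the complete XML transaction document
)""")

# The identifying/state columns are lifted from the document itself,
# so the XML remains the single source of the transaction's content.
root = ET.fromstring(policy_xml)
db.execute("INSERT INTO policy VALUES (?, ?, ?)",
           (root.get("id"), root.get("status"), policy_xml))

# Retrieval is a single read; the document is re-hydrated on the way out.
row = db.execute(
    "SELECT body FROM policy WHERE policy_id = 'P-123'").fetchone()
cover_sum = ET.fromstring(row[0]).find("Cover").get("sum")
```

In a database with native XML support, the `body` column would be an XML datatype and the final extraction an XPath query inside the SQL, rather than client-side parsing as sketched here.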
We will assume that supporting activities such as accounting and reporting will be managed by subsidiary systems, and that transaction data will be transformed into appropriate views for these systems as required. This clear separation of function leads to clean system boundaries with well defined and transaction agnostic interfaces.
The discussion above has outlined a design philosophy that isolates the business transaction inside an appropriate database – the transaction itself (e.g. an insurance policy, an airline reservation, a loan, a patient episode, etc) is a payload. We can now think of it as CONTENT.
The benefits of this design approach are already significant. By designing systems that treat the transactions as ‘black box’ content we simplify system design, which in turn encourages better engineered, faster and more robust systems at lower cost. These systems are more agile because we can introduce new and different types of content, potentially even repurpose the system, without changes to the underlying system.
We now need to turn our attention to managing this transaction ‘content’. By using declarative, XML aware tools we can significantly extend the benefits already outlined.
There are typically three types of functions that need to be ‘content aware’ – that is, to manage the internal state of the transaction content, and to manage the interaction of this content with the outside world.
1. Decisioning: for managing internal state and for transforming the content to meet the needs of external system consumers.
2. Forms: for managing interactive human involvement.
3. Documents: for managing passive human involvement.
All non human users can be supported by XML that is generated by decision models, so that (1) above can be used to provide interfaces for multiple external system consumers.
Idiom has built a suite of tools and capabilities that support the management and operation of XML based content within XML centric systems as described above.
These tools provide a comprehensive capability to declaratively define the processes that are required to manage a transaction. For the sake of clarity, this means that we can now build a host system and plug in new transaction types by defining new decision models, and if required, forms and/or documents. By way of example, a new transaction type might be a new insurance product, a new billing regime for a hospital, a new loyalty points calculation for a new business partner, etc.
Idiom Decision Models
Idiom Decision Manager is the heart of the capability. It provides an easy-to-use tool for analysts or SMEs to define the various transformations that create value for the business transactions. For example, approving and rating insurance policies, approving claims, calculating health billing, calculating loyalty points, etc.
This includes decisions regarding validation, acceptance, value (cost and/or price), state, and workflow.
Decision models can also act as proxies for external consumers. For instance, generating accounting records that comply with the rules of the accounting system; or transforming data for use in generic reporting systems; or generating transaction updates for business partners that comply with partner system requirements.
The Decision Models can also be used to drive webforms and human readable document generation.
Idiom Forms
Idiom Forms is not a universal forms development tool. It is targeted at a specific design goal: the integration of ‘full strength’ decision models into dynamic webforms. Note the use of the word ‘dynamic’ in the preceding sentence; if the decision model is only applied on submission of the form, then use of Idiom Forms is not mandated. Idiom Forms provides significant development advantages when binding is required between server-side and browser-side data images: the decision models execute against the server-side image, and the webform interacts with the browser-side image. Idiom Forms automatically manages the synchronization of the two images.
That being said, Idiom Forms does allow rapid forms development (faster than equivalent native forms) and it does support a wide range of events and custom controls.
The downside is that extending beyond the available events, controls, and other forms components already supplied by Idiom Forms is a programmer task.
Therefore, when integrated decisioning is not required and/or a significant degree of bespoke programming is required in the form, then native forms development is indicated. Things that would indicate the use of native forms include sustained interaction with a database (use of one or more cursors to manage search results, for instance), sustained interaction with external services, or maintaining multiple transaction contexts within one session.
Idiom has recently developed and used a hybrid forms model, which allows Idiom Forms to be managed within natively coded web pages. This hybrid approach means that native code development can be used to manage workflow, orchestration, database interaction, and other complex transaction requirements, while Idiom Forms can be plugged in to manage the decision-intense transactions within the workflow.
The Idiom Decision Manager provides a range of capabilities that can be used to help produce documents for human consumption within an otherwise generic document generator.
In this context Decision Models can be used to:
· Calculate new variables;
· Transform calculated variables into print substitution variables - i.e. strings for printing;
· Calculate the presence of circumstances that control the inclusion of text into the document;
· Create the actual text to be included in response to the presence of circumstances.
When the above are applied to a pre-written document template, a transaction specific document can be produced with relative ease using a generic reporting program.
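The four steps above can be sketched as follows (Python's string.Template standing in for a generic document generator; the variable names, clause text, and decision outputs are all invented for illustration):

```python
from string import Template

# Hypothetical outputs from a decision model for one transaction.
decisions = {
    "premium": 1234.5,     # 1. a newly calculated variable
    "is_high_risk": True,  # 3. a circumstance controlling text inclusion
}

# 2. Transform the calculated variable into a print-substitution string.
substitutions = {"premium_text": f"${decisions['premium']:,.2f}"}

# 4. Create the actual text to include in response to the circumstance.
substitutions["risk_clause"] = (
    "A loading applies due to the assessed risk profile."
    if decisions["is_high_risk"] else ""
)

# The pre-written document template, with substitution points.
template = Template("Your premium is $premium_text. $risk_clause")
letter = template.safe_substitute(substitutions)
```

The generic document generator never needs to understand the business rules; it simply merges the substitution variables that the decision model has already resolved.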
The use of XML centric design and development supported by appropriate tools leads to more agile systems at lower cost, time, and risk. Idiom is currently developing systems using these techniques and tools for multiple insurers, financial intermediaries, airlines, billing systems and Telcos.
Call us for a discussion on whether we can help you too.
+64 21 434669