Welcome to the IDIOM Software Blog.

This blog is a growing base of knowledge that will help you get the most out of your IDIOM products. It offers comments on features, usage, approaches, background theory, helpful hints and other insights that will allow you to harvest outstanding ROI from your IDIOM product investments.

IDIOM’s senior practitioners contribute posts that collectively offer advice from different viewpoints, giving a 360 degree view of IDIOM product usage. All posts are moderated to ensure that each viewpoint is consistent with the evolving IDIOM best practice, but you should expect to see alternative approaches to the same subject from time to time.

Customer comment is welcome and encouraged. We will moderate comments to ensure that they are consistent with the evolving best practice model, but we hope to learn from your real-world experiences and encourage you to participate and share them with us.

IDIOM started on the decision automation mission in 2001. We have developed powerful approaches that competitors have only recently begun to emulate, and these are now being captured into the Knowledgebase on a regular basis. With your help, this Knowledgebase will become the industry leading resource for decision automation within commercial systems.

Thank you for visiting, and again welcome!

    Transform your Systems and your Business

    Mark Norton  25 March 2016 01:26:08 PM
    System Audit and Remediation

    An assurance that the output of current processes matches the business's current obligations is important in its own right, and as a foundation for almost any change agenda. The Virtual System Image produces, on a customer-by-customer basis, the delta between the current system's results and those from its own virtual image of a perfectly implemented system. If the delta is zero, good - the organisation can proceed at speed with confidence. If there is a delta, we can fast-track rectification because the nature and cause of the delta is made transparent.
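
    By way of illustration only, the delta check can be thought of as the following sketch. The class and method names are invented for the example (they are not the IDIOM API): recalculate each customer through the virtual image, then difference the result against the legacy system's stored figure.

    ```java
    import java.math.BigDecimal;
    import java.util.List;

    // Illustrative only: recalculates each customer via a rules-driven virtual image
    // and reports any difference against the legacy system's stored results.
    public class DeltaAudit {

        public record CustomerResult(String customerId, BigDecimal balance) {}

        public interface VirtualSystemImage {
            // Re-derives the 'correct' result for a customer from business policy rules.
            CustomerResult recalculate(String customerId);
        }

        public interface LegacySystem {
            CustomerResult currentResult(String customerId);
        }

        public static void audit(List<String> customerIds, VirtualSystemImage vsi, LegacySystem legacy) {
            for (String id : customerIds) {
                BigDecimal expected = vsi.recalculate(id).balance();
                BigDecimal actual = legacy.currentResult(id).balance();
                BigDecimal delta = actual.subtract(expected);
                if (delta.signum() != 0) {
                    // A non-zero delta is transparent: we know the customer, the amount, and
                    // the virtual image shows which rules produced the expected figure.
                    System.out.printf("Customer %s: delta %s%n", id, delta);
                }
            }
        }
    }
    ```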

    This assurance is also useful as a 'before and after' validation when migrating systems, products, or customers, either to new processes or computer systems, or to new product propositions.  

    Generally a proof of concept for this can be done in two to four weeks, and a production-ready capability for a significant subsection of a business is possible in the region of 4-12 weeks.

    System Simplification and/or Enhancement

    The virtual image at the customer level becomes the executional cornerstone for a rationalised product and service offering. The IDIOM derived virtual images can be projected for scenario analysis, and also to automate the data transformation that delivers the chosen product taxonomy. Mass migrations of customer and other system data can be completely and flexibly handled by the extensions to the Virtual System Image. The Virtual System Image also allows simulation of past, present and future scenario based business cases.

    Digital Ready

    Digitisation goes beyond automation.  It is automation plus knowing in real time the actual customer context.  Using a GPS analogy, information is always based on the exact real time location of the customer.  The same idea can apply to financial services, except that 'location' is not a geographic concept; it is the 'context' of the customer in relation to all of the company’s product and service rules.  Because the Virtual System Image provides the complete real time context for each customer at each interaction, e.g. all financial and benefit information, it enables digitisation of the customer experience - i.e. real time determination of the ‘best-next’ process at any device at the moment that device is used.  

    A relevant banking analogy is what happened to enable ATMs to work in real time when the underlying infrastructure was batch. ATMs operated in real time using 'mirror' files that were periodically synchronised with the 'real' batch files. The idea here is not so dissimilar. The Virtual System Image is a mirror system that is akin to the mirror file. The legacy infrastructure can remain batch-like behind an agile, real-time, customer-facing shop front that is enabled by the customer's virtual image within the Virtual System Image. (The analogy is crude in the sense that the Virtual System Image is much more sophisticated than a mirror file.)

    Replacement

    The mirror file analogy can continue to deliver value indefinitely. As it does so, more and more data from the legacy system will end up residing in the Virtual System Image's own ‘mirrored database’. Sooner or later its functionality will supersede and then exceed that of the legacy system it is mirroring. This is the time to consider the replacement ‘switch’. With minimal effort and minimal risk, the increasingly redundant legacy system can simply be 'switched off' as we transition fully to a new world of nimble, continuous, and perpetual systems improvement.

      Are Processes and Rules Two Separate Things?

      Mark Norton  3 January 2016 12:36:23 PM
      https://www.linkedin.com/groups/1175137/1175137-6063447769715073028

      Original Comment by Paul Harmon, Owner at BPTrends

      Are Processes and Rules Two Separate Things?

      In a recent blog, Ron Ross, who is a columnist for BPTrends, and therefore presumably aware of what process is all about, ends up making the following distinction:

      "A major characteristic of the new digital world is that activity is never static in any sense of the word. You simply get no points for hardwiring repetitive behaviors. You must:
      - Continuously make informed operational decisions in the blink of an eye (actually often faster than that).
      - Selectively respond to changing circumstances with subtle adjustments.
      - Be as dynamic as possible, yet still produce outcomes of predictable, consistent quality.
      These too are business rule problems, not process problems."

      In effect, Ron is saying two things: first, that processes describe static situations; and second, that rules aren't a part of a process, but are somehow separate from processes.

      To my mind this is all wrong. It reflects that same bad thinking we find in Zachman's Framework that wants to separate things into little boxes, with rules in some boxes and processes in others. And, two, it represents the very unuseful thinking that we find in the Business Architecture Guild's approach, which suggests that processes are entirely defined by BPMN, and somehow define static things, while "value streams" are dynamic. And, incidentally, this goes very much against the grain of the OMG's efforts to establish a standard for Case Management.

      My assumption is that Ross is feeling a little upset that a field that he's worked in for years has largely been subsumed into the business process field. From my perspective, after all, people do different things when they undertake activities, and one of them is that they make decisions (often using business rules). If you want to define what is done in a process or one of its activities, and what is done involves a decision, then you need to understand how the decision is made. In our process courses, we urge people to define the rules used in the decision. It's just one more tool in the process analyst's tool kit.

      I'd be interested in the views of others on this discussion group.

      IDIOM Response

      Hi Paul,

      I am with Ron on this one. There are only two steps in a process - the current one, and the best-next one. And you can only determine the best-next one when you have completed the current one, because the current one might change it. IDIOM calls this BinaryBPM.

      So Ron is right in my view, there are no points for hardwiring process patterns.

      A process is a real-time, rules driven orchestration of activities that is only visible in hindsight. It emerges based on the current state of the data, the rules, and the available activities, all of which can change independently, and all of which can have different owners. Think GPS managed navigation, not train tracks and bus routes!

      The rules that determine best-next are fluid and responsive to organizational learning. They are the heart of the learning organisation, and have an existence quite separate from the activities, and from the processes that emerge.

      Regards,

      Mark

        Competitive Product Analysis

        Mark Norton  26 June 2015 01:47:28 PM
        Evaluation of a Rules Management Product

        If you are evaluating rules products, here are some suggested areas where you could ask questions and confirm capabilities for comparative purposes.

        Complexity of rules

        What are the fundamental limitations of the underlying rules design patterns and does this limit the complexity of the decision-making that can be automated?

        For instance, decision tables and/or Rete are unable to define a complete rules requirements specification unless the scope and type of rules are constrained. Focusing on constrained decision-making use cases will give a false picture of capability.

        Intermediate data – the ‘idiom effect’

        In our experience, generating intermediate data on a large scale (often thousands of new data nodes per rules invocation) usually forms the majority of both analysis and computational effort in rules based adjudication of externally supplied data.

        This data transformation is required to derive the idiom (viz. ‘a specialized vocabulary used by a group of people’) of the business policy, which is also, by definition, the language of the primary decisions that implement it. The idiom includes the nouns and noun clauses of the proprietary language that we assert is always present when describing business policies and the rules that implement them. See our earlier Modern Analyst paper for more on this subject.

        What is the ability of the product to dynamically define and populate data on this scale within each rules invocation?
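
        To make the 'idiom effect' concrete, here is a minimal sketch (with invented field and class names, not IDIOM syntax) of raw, real-world data being transformed into the intermediate, policy-language terms that the primary decisions are actually written against:

        ```java
        import java.time.LocalDate;
        import java.time.Period;

        // Illustrative sketch: raw application data is transformed into the intermediate,
        // policy-language ('idiom') terms that the primary decisions are written against.
        public class IdiomDerivation {

            public record RawApplicant(LocalDate dateOfBirth, double annualIncome, double loanAmount) {}

            // Intermediate, derived data - the vocabulary of the business policy.
            public record DerivedTerms(int ageAtApplication, double loanToIncomeRatio, String riskBand) {}

            public static DerivedTerms derive(RawApplicant raw, LocalDate applicationDate) {
                int age = Period.between(raw.dateOfBirth(), applicationDate).getYears();
                double lti = raw.loanAmount() / raw.annualIncome();
                // 'riskBand' exists only in the policy idiom, never in the source system.
                String band = lti > 5.0 ? "HIGH" : lti > 3.0 ? "MEDIUM" : "LOW";
                return new DerivedTerms(age, lti, band);
            }
        }
        ```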

        Continuous, perpetual, versioning

        The primary purpose of a rules engine is to embed decision-making as agile ‘content’ within an application. As such it will be managed using a discrete 'rules development life cycle' that must version independently of the host application.

        Does the product allow continuous and perpetual versioning and effective dating of every aspect of the rules definition, so that rules development is future proofed independently of the application – forever?
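
        As a hedged sketch of what effective dating implies at execution time (the repository and interface below are assumptions for illustration, not a product API), the engine must select the rules version in force at the transaction's effective date rather than simply the latest version:

        ```java
        import java.time.LocalDate;
        import java.util.NavigableMap;
        import java.util.TreeMap;

        // Illustrative only: every rules version carries an effective date, and the engine
        // selects the version in force for the transaction's effective date - not today's date.
        public class EffectiveDatedRules {

            public interface DecisionModelVersion {
                String evaluate(String contextXml);
            }

            private final NavigableMap<LocalDate, DecisionModelVersion> versions = new TreeMap<>();

            public void publish(LocalDate effectiveFrom, DecisionModelVersion version) {
                versions.put(effectiveFrom, version);
            }

            public String evaluateAsAt(LocalDate transactionEffectiveDate, String contextXml) {
                var entry = versions.floorEntry(transactionEffectiveDate);
                if (entry == null) {
                    throw new IllegalStateException("No rules version in force at " + transactionEffectiveDate);
                }
                return entry.getValue().evaluate(contextXml);
            }
        }
        ```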

        Ease of integration

        Proprietary, independently licensed servers can increase infrastructure complexity and operational overheads. By way of contrast, source code generation (C# and/or Java) allows for all native integration options using a single interface that is easily wrapped and deployed.

        Do you adapt to the rules engine, or does the rules engine adapt to you?

        What integration overhead is required to implement and use the product?
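
        As an illustration of the difference, generated source code reduces integration to wrapping a single call in whatever host technology you already run. The interface shape below is assumed for the sketch rather than taken from any particular product:

        ```java
        // Illustrative wrapper: a generated decision model is just compiled source code,
        // so exposing it as a REST endpoint, a message listener, or an in-process call is
        // ordinary application code - no separately licensed rules server is involved.
        public class DecisionServiceAdapter {

            // Assumed shape of a generated engine: XML/JSON document in, decorated document out.
            public interface GeneratedDecisionEngine {
                String execute(String decisionModelName, String contextDocument);
            }

            private final GeneratedDecisionEngine engine;

            public DecisionServiceAdapter(GeneratedDecisionEngine engine) {
                this.engine = engine;
            }

            // The same one-line delegation can sit behind a servlet, a queue consumer,
            // or a batch job - the integration style adapts to the host, not the reverse.
            public String adjudicate(String contextDocument) {
                return engine.execute("PolicyAdjudication", contextDocument);
            }
        }
        ```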

        Enterprise class toolset, Enterprise platform support

        Development and deployment of complex enterprise scale rules requires a rules development process that is a first-class participant in the over-arching enterprise development environment.

        And deploying into enterprise scale proprietary environments like Oracle, WebSphere and SAP can extend this challenge.

        Again, does the product fit into your development and operational processes, or do these processes need to be restructured around the product’s own requirements?

        Cost, support, and vendor agility

        Compare cost, vendor support, and the ability and willingness of the product vendor to adapt to local needs.

        For an overview of the IDIOM product capabilities please see our product overview.

          binary BPM

          Mark Norton  20 May 2015 11:17:02 AM
          This post was written by guest author John Salamito.

          It is a truism to say that everything is a process. It is reasonable to suggest that processes can be modelled and improved. Thus the usual transformation effort starts with mapping the processes, identifying waste and rework so they can be avoided, and then automating steps as far as possible. It is a seemingly foolproof method with boundless opportunities.

          In practice we know it isn't that simple; there is a 'knowing-doing' chasm. But do we know why? The blame could be laid upon 'doing' factors, like project or change management, but the more likely root cause lies at the 'knowing' end; that is to say, it is much more difficult than presumed to map the processes in the detail required. There are just too many variations and too many compromises imposed by pre-existing processes and promises. This often comes as a surprise because at first process improvement feels "all-so-doable"; the exponential variations don't rear their heads until later.

          Mountain climbers know that getting to Everest base camp is a modest achievement.  It is climbing the last few hundred meters that is a monumental achievement requiring mammoth preparation as well as good fortune.  Scaling the process mountain is similar: level 1 is straightforward and builds early confidence (of the type that climbers know not to fall prey to) but getting to the final goal of OPOTOP - i.e. what one person or machine does at one time in one place - is elusive as each exception, constraint, and new requirement breeds exponential complexity.  

          The way around this is to avoid pre-designing processes altogether! Of course processes always exist, because they are the way activities string together. While this is always evident in hindsight, the open question is whether there is value in knowing it with foresight. After all, process adds no value; it is the activities that add value, and so we must recognise process as a means to an end and not an end in itself. So... what if the processes simply emerged as needed rather than being predefined? By definition this would not only best fit the situation at hand but would also be simpler.

          We call the capability that enables this real-time orchestration 'decision management'. A decision model determines what the best next action is, given the context at that one time for that one customer. If you use a sat-nav you are already familiar with the concept; the sat-nav constantly adjusts in real time to whatever new context you find yourself in. A bank or insurer or any other organisation can do the same thing.

          Even in a complex bank the number of activities (or services) is finite, say only a few hundred.  These can be arranged in millions of ways, but for any given context there is only a small number of sensible next steps to take, perhaps even only one according to company policy.  A decision model can determine what this next action is.  Decision models can be built quickly (weeks) because the only input is company policy without regard to existing operational design, systems, processes, etc.  
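
          A minimal sketch of the idea, with invented activity names and deliberately trivial policy logic: the decision model inspects the current customer context and returns the best-next activity, so no end-to-end process ever needs to be hard-wired.

          ```java
          // Illustrative only: 'best next' is re-decided from the current context on every
          // interaction, rather than being read from a predefined process map.
          public class BestNextDecision {

              public record CustomerContext(boolean identityVerified, boolean arrearsOutstanding,
                                            boolean renewalDue) {}

              public enum Activity { VERIFY_IDENTITY, COLLECT_ARREARS, OFFER_RENEWAL, NO_ACTION }

              // The company policy, expressed as a decision rather than a process diagram.
              public static Activity bestNext(CustomerContext ctx) {
                  if (!ctx.identityVerified()) return Activity.VERIFY_IDENTITY;
                  if (ctx.arrearsOutstanding()) return Activity.COLLECT_ARREARS;
                  if (ctx.renewalDue())         return Activity.OFFER_RENEWAL;
                  return Activity.NO_ACTION;
              }

              public static void main(String[] args) {
                  var ctx = new CustomerContext(true, false, true);
                  System.out.println("Best next activity: " + bestNext(ctx)); // OFFER_RENEWAL
              }
          }
          ```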

          The decision model captures company policy for (a) risk and compliance, (b) product offerings, and (c) customer service standards. Together these policies define exactly what we want to happen in any situation. The decision model is the 'knowing' model.  It is not a 'doing' model and does not say how these policies get implemented via the activities that involve people, computers and suppliers.  

          Armed with a decision model and a catalogue of business activities/services, a massively simpler description of the end to end business emerges. We can now see it as a finite set of activities that are triggered according to a finite set of policies.  Each activity is self contained, and most of them can be automated.  The medium in which they are invoked is flexible - e.g. by a device or a human channel, etc. - and does not impact the core activity itself.  Similarly, the event that triggers the activity is flexible - e.g. it could be a calendar date, an ad hoc phone call, etc. - and does not change the core activity.  To clarify this, suppose an activity is to value a property.  This could be triggered by a calendar event like 'end of policy' or by a request from the customer, and the medium could be a smart app or a letter, but in all cases the property valuation activity itself can be the same.  
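
          To make the decoupling concrete (again with invented names), the same core activity can be invoked from any trigger and through any medium without changing the activity itself:

          ```java
          // Illustrative sketch: the core activity is self-contained; the event that triggers it
          // and the medium that requests it are just parameters around it.
          public class PropertyValuationActivity {

              public enum Trigger { END_OF_POLICY, CUSTOMER_REQUEST, SCHEDULED_REVIEW }
              public enum Channel { SMART_APP, LETTER, CALL_CENTRE }

              public record ValuationRequest(String propertyId, Trigger trigger, Channel channel) {}

              public double execute(ValuationRequest request) {
                  // The valuation logic is identical regardless of trigger or channel;
                  // only acknowledgement/notification handling would vary by channel.
                  return valueProperty(request.propertyId());
              }

              private double valueProperty(String propertyId) {
                  // Placeholder valuation for the sketch.
                  return 500_000.0;
              }
          }
          ```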

          Let's summarise. The business can be reframed into its policies (via the decision model) and its activities, which are a finite set of reusable building blocks. Together they fully describe what the business thinks and what the business does. New products are mostly rearrangements of existing building blocks. New channels and events can be connected to existing building blocks (easily if service enabled). New regulations are adjustments to the existing decision model, and will mostly use existing building blocks, perhaps with minor modification. So far in the story there is little justification for ongoing complexity, and one can see how the business can handle new products, new channels, and new regulations easily.

          However, the truth is companies can't do these things easily and have become enormously complex. Why? It is largely because predefining how the building blocks of activity are orchestrated is enormously complex, and this is aggravated by building block duplication. There is now a chicken-and-egg conundrum as to whether unnecessary new building blocks arose in order to satisfy orchestration patterns, or vice versa, but the net result is seen universally: too many duplicated building blocks and too many process variations, often only minutely different but nonetheless causing major difficulties for flexibility, cost, and compliance.

          It is worth further explaining that these complexities are often not conscious choices but are adopted implicitly when particular solutions are bought or developed. For example, a product admin system might impose an approach to process management. A customer relationship system might have product rules embedded in it. And so on, until inevitably a mishmash of paradigms, processes, rules, controls and data pools exists in uneasy co-dependency.

          But despite all this, we can reframe the company as being little more than a decision model that directs a finite set of self-contained activities. For the most part the traditional complexities of process management can be eliminated, and this makes it practical to find a path to this much simplified operating model, which is distinguished by its level of automation, agility, and real-time response to each individual customer interaction.

          Because of the binary nature of the linkages between the activities that make up the processes, we refer to this approach to process analysis as 'binary BPM'.

          Further detailed discussion on this topic can be found in this IDIOM article 'Taming the IT Beast with Decision Centric Processes'.

            Does DMN spell the end of the "rules engine"?

            Mark Norton  30 April 2015 05:34:28 PM
            https://www.linkedin.com/groupItem?view=&gid=4225568&type=member&item=5999240280551731203&commentID=5999308008557920256&report%2Esuccess=8ULbKyXO6NDvmoK7o030UNOYGZKrvdhBhypZ_w8EpQrrQI-BBjkmxwkEOwBjLE28YyDIxcyEO7_TA_giuRN#commentID_5999308008557920256

            Does DMN spell the end of the "rules engine"?

            Paul Vincent

            Consultant, Isvana Ltd

            DMN transformations to decision services do not require any traditional "rules engine"! There is no inferencing implied in DMN (arguably DMN could be simplified to exploit inferencing, but no one is asking for that). Incremental development is no longer a big deal (so interpreted rules engines or declarative rules languages as targets are less important), although rapid prototyping of data models might drive a requirement. So does DMN mean the end of the rules engine?

            Case in point: the Sapiens DECISION (TDM modelling) tool I've been using can generate decision services via rules engines or Java. But if there are no restrictions on the Java implementation, the question (as asked by a slightly offended vendor in one customer meeting) is "then what is the role of the rules engine now?"...

            Mark Norton

            CEO and Founder, Idiom Ltd

            Hi Paul, this is a timely question.
            Most rules requirements are intended to execute within commercial transactions.
            The Rete algorithm was never intended for use in a transactional context, as per the final comments of Forgy’s original Rete paper:
            “Certainly the algorithm should not be used for all match problems . . . Since the algorithm maintains state between cycles, it is inefficient in situations where most of the data changes on each cycle.”
            Transactions use a new set of data for each transaction (by definition), so Rete was always the wrong solution for transactional problems. The target use cases for Rete and procedural implementations are mutually exclusive.
            This topic is discussed more fully here: http://www.modernanalyst.com/Resources/Articles/tabid/115/ID/2713/Decisioning-the-next-generation-of-Business-Rules.aspx
            Regards, Mark


              A conversation on the LinkedIn 'Decision Model and Notation (DMN)' Group: Sub-decisions, are they really?

              Mark Norton  19 March 2015 01:15:10 PM
              A conversation on the LinkedIn 'Decision Model and Notation (DMN)' Group.

              https://www.linkedin.com/groups/Subdecisions-are-they-really-4225568.S.5983144355412668417?view=&item=5983144355412668417&type=member&gid=4225568&trk=eml-b2_anet_digest-hero-4-hero-disc-disc-0&midToken=AQEACoJKdbs46g&fromEmail=fromEmail&ut=33KOOCJ1GNRmE1

              Sub-decisions, are they really?

              Paul Konnersman Diagnostician and Designer, organizational work processes

              [Opening statement]

              I see repeated references to "sub-decisions" which strike me as misleading. Doesn't "sub-x" imply a subordinate echelon of a hierarchy in which "X" is superior? While I understand the attractiveness of hierarchy for modeling and understanding, I don't think it is an accurate characterization of the relationship between decisions that are being termed "sub-decisions." It is deceptive because while the simplest examples that we use could be seen as hierarchical, a decision which is a sub-decision of two or more other decisions would be participating in two or more hierarchies. While I suppose this is possible, it seems more appropriate to see what have been called "sub-decisions" as decisions that are simply logically, and therefore temporally, prior to the decision to which they are allegedly subordinate. Therefore, I would refer to them as prior, or earlier, decisions rather than sub-decisions.

              [other parties debate the meaning of decision, sub decision, and context]

              Mark Norton CEO and Founder, Idiom Ltd

              This thread appears to be raising questions about what a decision is or is not (what is a sub-decision?), and about the relationship between decisions (is the relationship governed by context or by sequence or by both?). Is this correct? If so, what does this thread imply for the DMN standard itself?

              Paul Konnersman Diagnostician and Designer, organizational work processes

              Mark,
              I speak only for myself, but I think it is clear that a decision is simply a choice between two or more alternatives. I hope that is not in dispute. I raised a minor concern about the use of the term “sub-decision” to refer to a decision (say “A”) that was required by another decision (say “B”). By extension I also dislike using the term “decision decomposition” to describe the process of eliciting so-called “sub-decisions.”

              My reasons in both cases are that this terminology suggests a compositional hierarchy in which decision “B” is an aggregation of one or more other decisions. It seems to me that this is a misleading and unhelpful representation because the relationship between the various decisions is one of logical priority rather than aggregation. They don’t decompose, they are arrayed in time; they are sequenced, like it or not. I do understand that this is very much disputed and that I may be a minority of one on this point. Paul V. has sent me off to learn “goal-directed reasoning and the like,” which are alleged to be outside of any domain of logic with which I am familiar. I’m working on it and will report back. Any suggestions of tutorials will be appreciated.

              I don’t think I understand the use of the term “context” as it’s being used here. Of course a decision is being called a “sub-decision” in the context of a related decision which requires that “sub-decision.” But that just restates the usage that I’m questioning. I would say that the relationship is not governed by either the context or the sequence, but rather that the sequence is governed by the relationship, which is one of logical priority. If the task is to determine the quantity of carpeting required, we cannot make the area decision without first making both the length and width decisions; and we can’t make either the length or the width decision without first making the rooms-to-be-carpeted decision.

              As for what this thread implies for the DMN standard itself, I’m not sure that it implies anything, but I’m not the person to ask. I believe that the descriptions we use, including notations, are of the greatest importance. But I’m afraid OMG went off into the deep woods a long time ago. How many pages of notations do we need? …how many decorative icons?

              I offer as food for thought, the following words of Nobel laureate and polymath Herb Simon:

              “How complex or simple a structure is depends critically upon the way in which we describe it. Most of the complex structures found in the world are enormously redundant and we can use this redundancy to simplify their description. But to use it, to achieve the simplification, we must find the right representation.” – H. A. Simon, The Sciences of the Artificial, 2nd ed. (Cambridge, MA: MIT Press, 1981), p. 215.


              Mark Norton CEO and Founder, Idiom Ltd

              Hi Paul,
              Your reference to Herb Simon is very appropriate. And it begs the question: Do we (and perhaps does DMN) have the right representation?

              Your definition of decision as ‘simply a choice’ might not be in dispute within DMN, but it is quite limiting nonetheless. This definition presumes that a finite set of alternatives is already known to the decision making process, so that determination of a price or any other continuous (i.e. infinite) variable would seem to be outside of the scope of DMN. Similarly so with transformations, where there is no choice as such. Both calculations and transformations are important to any decisioning process and in our view are peers with ‘choice’ or ‘constraints’ as decisioning concepts. It is our experience that most real-world decisioning problems require transformations, calculations, and choices to achieve the ultimate decision outcome. The need for transformation as a core part of the decisioning repertoire is discussed in this paper on Modern Analyst [http://bit.ly/1B1XyrU] (and this on the Business Rules Journal [http://bit.ly/1BWSREo])

              Regarding your second para, perhaps the concepts of sequence and context are not as much in conflict as you seem to suggest. If, in order to determine decision A, I need to derive decisions B, C, and D, and if I only need B, C, and D to derive decision A, then the sequence issue is clear. However, this is also a decomposition: B, C, and D are all inherent and indivisible parts of A, as in ‘what decisions do I need to make in order to decide A’. So a hierarchical structure of decisions could be defined by its dependency on both context and sequence simultaneously.

              Context is an interesting issue, which has been recently discussed in this article on Modern Analyst [http://bit.ly/19ACJ0W], and this on the ‘Business Rules Journal’ [http://bit.ly/1AGXxdD]. Context frames the data and the decisions, and we believe is a necessary part of any decision definition. My definition of ‘context’ is that it is all those characteristics of the actual ‘instance transaction’ that define the boundaries and execution pathways for both the data acquisition, and for the decision-making that uses that data, for that transaction execution.

              For instance, in our example above, decision B might only be required when determining A if the customer is in Europe, because B implements an EU policy. Note that in this case, context is not simply the key of the data; the location of the customer is not part of its key, but it is part of the context for determining decision B, and by extension, A.
              Another almost universal example of context is effective dating. If the effective date of the transaction is 25th December 2014, then the data must be selected according to this constraint, and all decisions must execute as at that date, even if the transaction is executing today.

              Thanks for bringing up the issue of context in particular. I am sure there is a lot more to come on this topic – we appreciate the conversation, no matter where it leads.

              Paul Konnersman Diagnostician and Designer, organizational work processes

              Mark N.,

              Thanks for the detailed critique and references.

              I intended to raise the question with my Simon quote rather than begging it. I do indeed think that BPMN has adopted a very poor representation, which is why it keeps expanding notations seemingly without limit, and that DMN is problematic in having attached itself to BPMN.

              My definition of “decision” does not presume a finite set of alternatives. The phrase “two or more” encompasses infinite sets, both denumerable and non-denumerable. Actually, I see now that I can make it more elegant by saying simply that a decision is “a choice from among alternatives,” since zero and one do not provide alternatives.

              I’ve not yet had time to read the four papers you cite, but aren’t transformations and calculations ways of automating a decision — that is, of specifying how a choice is to be made from among alternatives?

              I don’t see “context” as being in conflict with sequence. I simply don’t see why the reference to context is necessary or what it implies beyond what is already there.

              I would say that, to the contrary, B, C and D are patently divisible. Every so-called sub-decision can be separately identified and named. Each can be separately considered and made without reference to, or even knowledge of, decision A. I acknowledge that some simple decisions, including most of those used as examples, can be seen as hierarchical without problem. However, more complex networks (known mathematically as directed acyclic graphs) cannot be viewed as hierarchies or as decompositions. They are better regarded as being associated by relations of logical precedence than by aggregation. I am saying that so-called sub-decisions are not components of the decisions that depend upon them; they are logical (and therefore temporal) predecessors of the decisions that depend upon them.

              I have no problem with dependency relationships being contingent, if that is what you mean by “context,” but I think that it is far less clear to invoke “context” than to simply indicate the contingency.

              Finis

                Message to Novopay

                Mark Norton  3 August 2014 11:58:20 AM
                Message to Novopay!!

                Audit and remediation of catastrophic failure in core applications is causing headlines on both sides of the Tasman, with payroll a particularly popular subject. Yesterday’s generation of systems failed to manage the underlying business complexity as it changed over time. So why would we think that using these same approaches to audit and remediate their own failure is going to achieve a better outcome today?

                Perhaps it is time to change the approach. In fact, it is time to completely change how we think about systems and complexity.

                This blog introduces two recent IDIOM projects that expose the real complexity involved in calculating and maintaining the entire life-time history for complex entities. The first entity is a pension account, with some account lifetimes exceeding 30 years; the second is an employee payroll account limited to the last 10 years of transactions, but extending to 40 years of payroll data in its calculations.

                In both cases the underlying business entities (pension account, employee) have survived millions of events throughout decades of change. Equally importantly, they were transformed through multiple systems replacements and dozens of changes in their event handling processes. The result: loss of data integrity, and an inability to recover it from within the system itself.

                IDIOM was requested to assist using its tools and approaches to recalculate the correct state of the entities from scratch (i.e. over decades), and to generate any entries required for remediation to reset the current state. For the sake of clarity:

                • to recreate every version of every process throughout the history of these entities (we can’t say systems – the entities survived multiple systems);

                • to reprocess up to 30+ years of data to arrive at the correct state as at today;

                • to difference the calculated and actual data and generate remediating transactions (sketched below).
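
                A highly simplified sketch of that recalculate-and-difference loop follows; the types are invented for illustration, and the real models additionally select the rules version in force at each event's effective date.

                ```java
                import java.math.BigDecimal;
                import java.time.LocalDate;
                import java.util.List;

                // Illustrative only: replay every historical event through the rules that were in force
                // at the time, then difference the recalculated state against the system of record.
                public class LifetimeRemediation {

                    public record AccountEvent(LocalDate effectiveDate, String type, BigDecimal amount) {}

                    public interface RulesAsAt {
                        // Returns the new running balance after applying an event under the
                        // rules in force at its effective date.
                        BigDecimal apply(AccountEvent event, BigDecimal runningBalance);
                    }

                    public static BigDecimal recalculateFromScratch(List<AccountEvent> lifetimeEvents, RulesAsAt rules) {
                        BigDecimal balance = BigDecimal.ZERO;
                        for (AccountEvent event : lifetimeEvents) {   // events ordered by effectiveDate
                            balance = rules.apply(event, balance);
                        }
                        return balance;
                    }

                    public static BigDecimal remediationRequired(BigDecimal recalculated, BigDecimal actualOnSystem) {
                        // A non-zero result becomes a remediating transaction; zero means the account is clean.
                        return recalculated.subtract(actualOnSystem);
                    }
                }
                ```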

                This ‘time-spanning’ view of processes is not easily achieved in traditional systems. One underlying reason is the relational database - how do you version a relational database? Answer: you don’t – you migrate the data from version to version. In which case, how do you keep alive the full complexity of multiple, concurrent generations of event processing versions? Answer: you can’t!

                The IDIOM toolset is able to acquire and absorb the full extent of entity data for the life of each entity; to refactor it into a ‘whole of life’ data schema (complete with accommodation of even such subtle changes as precision over time); to re-apply the full lifetime series of events to the entity and recalculate all prior and current event outcomes; and to match these recalculated outcomes with actuals to calculate the required remediation (if any). The examples follow:

                Pension (Defined Benefit) Scheme

                The aim of this project was to build a decision model to:

                • Initially, test the changes made to the legacy system to support the changes made to the Trust Deed of a Defined Benefits scheme
                • Eventually, replace the benefit calculation component of the legacy system that supports the Trust Deed of the Defined Benefits scheme.
                To achieve these aims it was necessary to build a model that:
                • Retrieved the required base source data from the legacy system (e.g. all working hours and salary history for all members for the lifetime of their membership in the Defined Benefits scheme – sometimes in excess of 30 years)
                • Calculated all intermediate components that make up the benefit calculations (approx. 40)
                • Calculated all the specific benefit calculations (approx. 30).
                The complexities involved in this project included:
                • ~100,000 members in the Defined Benefits scheme, with total payouts due of circa $30 billion
                • Poor source data quality where minimal verification was undertaken at source data entry, requiring management of data quality issues in the model when retrieving base source data (e.g. an employment terminated with no record of that employment ever commencing)
                • A data migration from a prior system to the current legacy system that created a number of data integrity issues
                • Calculations based on a Trust Deed that has been modified and added to over time, resulting in inconsistent definition of some apparently similar benefits that therefore need to be modelled differently to meet the interpretation of the exact wording of the Trust Deed
                • Only 1 person with a strong (but still incomplete) level of knowledge of the calculations and supporting data required
                • Working from 4 documents (combined total of approx. 600 pages) that provided only a high level view of approx. 75% of the calculations.  These documents provided approximately 20-25% of the detailed information required, and in some cases provided conflicting information
                • Recalculating all member accounts “from scratch” – that is, from the commencement of their membership of the scheme, which for long standing members is in excess of 30 years – whereas the legacy system just applies an update from the last review (a period measured in months)
                • A long standing member can have 30 to 40 separate periods of employment service that need to be worked through for the “from scratch” calculation, as a new employment service period commences if the member changes role, employer, payroll, service fraction (part time work), leave without pay, temporary incapacity, permanent disability, deferral from the scheme, leaving the scheme, re-joining the scheme, etc.
                • Reconciling model benefit calculations for members with the legacy system benefit calculations, and identifying potential issues with the legacy system calculations – compounded by the legacy system not adjusting for past “errors” as it always works from the last review
                • Incorporating many historical changes in the scheme over the past 30 plus years – e.g. prior to a particular date a calculation is undertaken in a certain manner with a certain set of rates applicable, which is then changed a few years later (with potentially a different set of applicable rates), and changed again at a further later date.  Some benefit calculations have up to 4 or 5 of these historical changes in calculation method that need to be catered for
                • Incorporating regulatory changes that the scheme had to support over the past 30 plus years – e.g. changes in required or permitted employer and employee contribution rates and the flexibility of those, such as ability to voluntarily reduce contribution levels in certain circumstances
                • Incorporating the acquisition and merging in of members from other schemes over the past 30 plus years – where in most cases members of those schemes have the rights to maintain the benefit structures and rules inherited from those schemes
                • Special treatment for periods of employment service commencing or ending during a leap year
                • Special treatment for periods of leave without pay, temporary incapacity, and permanent disability during the employment service of a member – each of these three treated differently to each other, and in some cases differently for the same item in different circumstances
                • The indexation by either daily or quarterly CPI of certain calculation components and benefit calculations in some different circumstances – e.g. just for a certain period, or from the date of a certain event forward or backward in time, or for a certain event only when other specific conditions apply
                • The splitting of many component calculations into “Pre Scheme Date” and “Post Scheme Date” components based on different formulas and sub components that apply differently to all underlying factors (e.g. employment service) prior to the Scheme Date change, and post the Scheme Date change (a simplified sketch of this split follows the list)
                • The modification of many benefit calculations to utilise both the new “Pre Scheme Date” and “Post Scheme Date” components, multiplied by other components that differ based on whether they are “Pre” or “Post”
                • Projecting certain benefits forward to a member’s retirement birthday as well as current or historical calculations
                • In some calculations managing and dividing base data (e.g. employment service) into separate periods based on up to 4 variable dates (e.g. Scheme Date, Calculation “as at” Date, service threshold date, retirement birthday) with different calculation rules applying for employment service in each “Pre” or “Post” period for each potential date period
                • Management and alignment of rounding precision that changed over time, with calculations often multiplying 6 figure components based on salary and using small percentages (~ 1%) as a part of a calculation that multiplies 5 or 6 numbers
                • Several calculations having 5 or 6 high level components, with each of those components having 4 or 5 levels of sub calculations, with each of these sub calculations also having up to 5 or 6 components, giving rise to individual formulas that include more than 100 separate calculations
                • Each member to be processed as above in less than 2 seconds.
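
                As a hedged illustration of just one of the complexities listed above (rates and rules invented for the example), splitting a single period of employment service at the Scheme Date so that different accrual rules apply to each portion looks roughly like this:

                ```java
                import java.time.LocalDate;
                import java.time.temporal.ChronoUnit;

                // Illustrative sketch only: a service period that straddles the Scheme Date is split,
                // with different accrual rates applied to the 'Pre' and 'Post' portions.
                public class SchemeDateSplit {

                    public record ServicePeriod(LocalDate start, LocalDate end) {}

                    public static double accruedBenefit(ServicePeriod service, LocalDate schemeDate,
                                                        double preRatePerDay, double postRatePerDay) {
                        long preDays = 0, postDays = 0;
                        if (service.end().isBefore(schemeDate)) {
                            preDays = ChronoUnit.DAYS.between(service.start(), service.end());
                        } else if (!service.start().isBefore(schemeDate)) {
                            postDays = ChronoUnit.DAYS.between(service.start(), service.end());
                        } else {
                            preDays = ChronoUnit.DAYS.between(service.start(), schemeDate);
                            postDays = ChronoUnit.DAYS.between(schemeDate, service.end());
                        }
                        // In the real model each portion is further adjusted for service fraction,
                        // leave without pay, indexation, rounding precision, and so on.
                        return preDays * preRatePerDay + postDays * postRatePerDay;
                    }
                }
                ```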

                Payroll - Termination Reprocessing

                The aim of this project was to build a model to:

                • Reprocess all termination payments for people terminated between 1/4/2004 and 27/10/2011 to verify that they had been paid in accordance with the NZ Holidays Act 2003
                To achieve these aims it was necessary to build a model that:
                • Retrieved the required base source data from the Payroll System Database (e.g. Employment Details, Allowances, Payments Made, Leave Taken, LWOP Days etc.)
                • Constructed an intermediate data structure suitable for the analysis
                • Calculated what the payments should have been and compared these with payments made
                • Produced a report with references to the individual inputs and the intermediate calculated structures to provide a detailed audit trail to support remediation payments (or the lack thereof).
                The complexities involved in this project included:
                • ~10,000 members terminated in the period, with total remediation payouts of several million dollars
                • Poor discipline and procedures in the underlying payroll system resulted in varying data quality, including inconsistent and irregular data (e.g. leave taken following termination)
                • Data structures and database keys changing over time as systems migrated
                • Running different Calendars for different parts of the country
                • Data to be analysed reached as far back as 40 years
                • All nuances in the Holidays Act over the full term including definitions of Base Rate, Ordinary Rate and Average Weekly Earnings
                • Creating an intermediate data structure representing each day the person was employed, to be marked with the amount the employee was paid that day in aggregate, or whether leave was taken (sketched after this list)
                • Calculated TOIL and Special Care Cash Ups
                • Calculated Annual Leave, Long Service Leave, Shift Workers Leave and Statutory Holiday pay outs
                • Compared the Calculated Amount to the Amount Paid
                • The Amount Paid was not uniformly found and needed to be located from different places depending on time period
                • Tuning and parallel processing brought the final run down to four hours, using data extracted from an Oracle database and processed by six parallel run-time processes
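
                A skeletal illustration of the per-day intermediate structure and the final comparison follows (types, rates and the simplified 'average daily pay' stand-in are invented; the real model applies the Holidays Act definitions of Base Rate, Ordinary Rate and Average Weekly Earnings):

                ```java
                import java.math.BigDecimal;
                import java.math.RoundingMode;
                import java.time.LocalDate;
                import java.util.Map;
                import java.util.TreeMap;

                // Illustrative only: one entry per day of employment, marked with pay received and
                // leave taken, from which the correct termination payment is recalculated and compared.
                public class TerminationRecheck {

                    public record DayRecord(BigDecimal grossPaid, boolean leaveTaken) {}

                    private final Map<LocalDate, DayRecord> employmentDays = new TreeMap<>();

                    public void recordDay(LocalDate day, BigDecimal grossPaid, boolean leaveTaken) {
                        employmentDays.put(day, new DayRecord(grossPaid, leaveTaken));
                    }

                    // Simplified stand-in for the Holidays Act calculation; assumes at least one day recorded.
                    public BigDecimal recalculatedTerminationPay(BigDecimal annualLeaveDaysOwed) {
                        BigDecimal total = employmentDays.values().stream()
                                .map(DayRecord::grossPaid)
                                .reduce(BigDecimal.ZERO, BigDecimal::add);
                        BigDecimal averageDailyPay =
                                total.divide(BigDecimal.valueOf(employmentDays.size()), 2, RoundingMode.HALF_UP);
                        return averageDailyPay.multiply(annualLeaveDaysOwed);
                    }

                    public BigDecimal shortfall(BigDecimal recalculated, BigDecimal amountActuallyPaid) {
                        // A positive result indicates a remediation payment is due.
                        return recalculated.subtract(amountActuallyPaid);
                    }
                }
                ```
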
                Change is endemic and perpetual; it is time we built systems that reflect this fundamental truth.

                IDIOM is a member of the NZ Government 'Open Door to Innovation' program.

                  Rules Based Applications

                  Mark Norton  28 April 2014 05:15:28 PM
                  Rules and Applications – where is the separation and does it matter?

                  Most people think of ‘business rules’ as a modest amount of precisely targeted business logic (implicitly Boolean in nature) that is injected into an application. This is a narrow view of rules that can be very limiting in terms of the value that can be harvested from implementing a rules approach.

                  While we can agree that this view does define rules as practiced by many – in fact it is the essence of the Business Rules Manifesto – the concept of rules can be productively expanded to obtain much more value from any effort being invested into business rules. In fact, we can extend the scope of ‘rules’ to embrace the entire business policy that defines how the business responds to events throughout the life cycle of its core transaction entities, regardless of the domain. In doing so we redefine the boundary between applications and rules, with significant benefits to both.

                  If we start with the limited scope of rules as implied in the preamble, the first and obvious extension that can help us to obtain more value is to expand the definition of rules beyond the boolean ‘constraints’ view espoused by the Manifesto, and embrace all forms of business defined derived-data, especially numbers, dates, and text. Examples of these additional business owned and defined values include costs, prices, and other values; trigger dates and related meta-data for workflow management; and dynamically compiled text and other messages for communication and agreement. The people who are targeted for the limited scope business rules as described in the preamble above are almost certainly the same people who ‘own’ these other values – any separation between these classes of ‘rules’ would be artificial in their eyes.

                  A second scope extension is perhaps less obvious, but every bit as important. Business rules are usually defined in an idiom (Defn: “A specialized vocabulary used by a group of people; jargon; a style or manner of expression peculiar to a given people.”). The language that forms the basis of the idiom is essentially proprietary because the behavior of the organisation is proprietary. However, the raw data in any domain typically describes the domain’s real-world entities in real world terms (by definition – the entities exist irrespective of the organisation that records them). There is almost always a language gap between the language of the ‘real world’ data and the idiomatic language of the business rules, and this gap must be addressed before the rules can be applied. This requires a step to validate the real world data as collected by the business, and to then transform it to align with the idiomatic phrasing used by the rules. Only then can the rules be applied and the business respond appropriately to the events described by the data. The idiom is the language used by the business to define its strategies, its policies, and lowest in this hierarchy, the rules, which ultimately govern its behavior (for those seeking more background on this topic, please see ‘Decisioning – the next generation of business rules’ as recently published in both the Business Rules Journal and Modern Analyst).

                  The ‘rules’ are the ‘sharp end’ of applied business strategy – they are the means by which business policies actually get applied to real-world entities and their events. These rules are applied within discrete, atomic units that we refer to as ‘decisions’. A decision in this context is a unique, business defined outcome that is derived by a specific set of rules (all of which must be applied as a single unit-of-work). We can draw an analogy with molecules and atoms – a decision is like a molecule; the smallest tangible outcome from a rules process that can exist in its own right. Whereas the rule is like an atom – essential as part of a decision, but cannot be used in isolation to achieve any particular purpose.

                  In order to apply the policy defined business rules we need to bridge the gap between the raw data and the business idiom. Ergo, an effective rules engine must also be a data validation and transformation engine. In fact, our experience suggests that the vast majority of rules development effort is in the validation and transformation area. But unless this transformation is fully and completely achieved, the policy defined business rules cannot be applied.

                  Again, the validation and transformation rules are business defined (by definition) and belong in the same scope as the core business rules we started with – they are co-dependent, so that one cannot be useful without the other. There is no useful boundary between a business rule that derives an intermediate value from one which derives the ultimate business outcome – they are all just rules aggregated into decisions and ultimately into complete sets of decisions – a.k.a. ‘Decision Model’ in IDIOM parlance.

                  This is a useful point to recap – the above discussion has (re)defined the business rules scope as including everything involved in the transition of the raw business data describing an entity in one state, through to its return in a new (and valid) state. This is an important observation – it means that we can define all of the bespoke business elements of an application within the ‘business rules’ concept.

                  So what is left of the traditional application? At least a platform infrastructure that includes resource management, authentication and authorization, etc.; and a series of generic, event-driven features and functions like searching, selecting, opening, and saving entities; applying business rules; and receiving and sending messages to other devices or locations (e.g. printers, webservices, queues, legacy systems etc.). The latter generic capabilities can be controlled by meta-data that is defined by the real-world capability of the feature or function itself – their meta-data is externally and statically defined, just like the real-world business data. And like that real-world business data, we can use rules to derive the system meta-data and so explicitly control the system’s features and functions – rules become the business owner’s ‘remote control’ for the system.

                  The above now implicitly defines a generic application that can service virtually any commercial need provided that the appropriate rules are supplied to define and apply the relevant business policies, and to then drive the supporting application features and functions (via their respective meta-data).

                  Such an application is not hypothetical! IDIOM has now built many such applications for organisations large and small, all very similar in concept and architecture to the application described in our paper Architecture for Agile Provisioning of Financial Products and Services.

                  Most of these applications have used a common underlying pattern, which includes the following generic concepts:

                  • Business policy is developed, defined, and implemented using a mix of rules and configuration documents, all of which are stored in the database, and which are themselves controlled by rules.

                  • The context data (meaning the entity and all of its related transaction and event data) that describes the current state of the entity in its entirety is stored extant as XML in an XML column in the database. The context data can be large and complex – often hundreds of thousands of nodes with many nested levels of hierarchy – and only needs to be understood by the rules that govern it.

                  • Most other traditional database column values are generated by the rules, and only exist to provide database indexes or for the special needs of other (less agile) applications.

                  • Rules-generated messages are used to align any and all other systems, including legacy and accounting systems, with the latest entity state. This data is extracted automatically when saving, and then routed to meta-data defined end-points.

                  • Workflow is fully managed by rules that create forward looking actions and events – for instance ‘Bring-Ups’ (automated future events) and ‘Actions’ (future events for manual resolution). These workflow triggers are declared in the database for each entity, and are fully replaced for the entity every time the rules are run, so that the entire life cycle of the entity is inherently managed (a condensed sketch of this save cycle follows the list).
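
                  A condensed sketch of the save cycle implied by this pattern, using assumed class and method names rather than the IDIOM API: run the rules over the context XML, then let the database handler strip out the rules-generated financial entries and workflow triggers and persist them alongside the entity.

                  ```java
                  import java.util.List;

                  // Illustrative only: every save re-runs the rules, replaces the entity's workflow
                  // triggers and generated rows, and stores the full context XML in one XML column.
                  public class EntitySaveHandler {

                      public interface DecisionEngine {
                          // Returns the context XML decorated with generated financial entries,
                          // bring-ups (future automated events) and actions (future manual events).
                          String execute(String contextXml, String entityConfigXml);
                      }

                      public interface EntityStore {
                          void saveContext(String entityId, String contextXml);
                          void replaceFinancialEntries(String entityId, List<String> entryXml);
                          void replaceWorkflowTriggers(String entityId, List<String> bringUps, List<String> actions);
                      }

                      private final DecisionEngine engine;
                      private final EntityStore store;

                      public EntitySaveHandler(DecisionEngine engine, EntityStore store) {
                          this.engine = engine;
                          this.store = store;
                      }

                      public void save(String entityId, String contextXml, String entityConfigXml) {
                          String decorated = engine.execute(contextXml, entityConfigXml);
                          // The extract steps are stand-ins for XPath extraction of generated nodes.
                          store.replaceFinancialEntries(entityId, extract(decorated, "FinancialEntry"));
                          store.replaceWorkflowTriggers(entityId, extract(decorated, "BringUp"), extract(decorated, "Action"));
                          store.saveContext(entityId, decorated);
                      }

                      private List<String> extract(String xml, String elementName) {
                          return List.of(); // placeholder for XML node extraction in the sketch
                      }
                  }
                  ```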

                  By building an application using the above design pattern, development time, cost and risk can be reduced by an order-of-magnitude. And the business will love you for an application that empowers them while still hiding the technical complexities.

                  For those familiar with data design, the following sample database schema provides some insight into the underlying architecture of the design pattern:

                  Image:Rules Based Applications

                  In the above diagram, we can see the context data (the transaction XML document) stored as ‘ContextDataXML’ in the Core Business Entity table [blue]. For commercial domains of even modest complexity, this simple design concept can save us from having to build and provide application support for perhaps hundreds of database tables; nor are we constrained by the incredible development inertia that is implied by large, fully normalized database designs. As a bonus, query performance against this structure can be better than with standard normalized database schemas because joins are rarely required. Also, full text search on the XML column can provide instant, Google-like search over the full extent of the database.

                  The rules for managing the full, business policy defined life-cycle of the ContextDataXML can be found in the ‘DecisionModelBinary’ column of the Decision Model table [green]; IDIOM now routinely class-loads the decision model dlls or JARs directly from the database (at least in those environments capable of it). Various business policy configuration parameters that relate to the particular type of entity that is in focus are in the EntityConfigXML in the Entity Class table [green], and are automatically delivered to the rules each time they execute.

                  The Financial Entry, Bring-Up, and Action table rows are generated by the rules within the ContextDataXML and are automatically stripped off by the database handler when it is saved.

                  The following two diagrams provide a high level view of the overall IDIOM approach, centered on business policy development and testing using the IDIOM Decision Manager in the first diagram. The second diagram outlines how the business policy is deployed and eventually executed using the Decision Engine (and a sample set of decision models by way of example only). The green arrow linking the diagrams is used to indicate the flow of decision models, which are exported in a manifest that includes generated source code for each model (a JAR or dotNET assembly). This code is compiled and executed behind the ‘Decision Engine’ interface, which is also provided in source code form (Java and/or C#) for complete peace of mind.

                  Diagram 1:
                  Image:Rules Based Applications

                  Diagram 2:

                  Image:Rules Based Applications


                  Finis

                    Decision Models and Normalisation

                    Mark Norton  24 June 2013 04:23:19 PM
                    Can normalisation be applied to business rules?

                    In a recent article in Modern Analyst, New Opportunities for Business Analysts: Decision Modeling and Normalization, Larry Goldberg and Barbara Van Halle (principals of Knowledge Partners International [KPI]) posit that normalisation as it applies to data can be refactored and applied to decisions, essentially by using the decision term ‘conclusion’ as a substitute for the data term ‘key’, and the decision term ‘condition’ for the data term ‘attribute’.

                    This transforms the colloquial definition of normalisation from ‘every attribute must describe the key, the whole key, and nothing but the key’ into ‘every condition must participate in the evaluation of the conclusion, the whole conclusion, and nothing but the conclusion’. Sounds good!

                    Larry and Barbara claim that normalisation is science, not art, and assert that this science can be applied to their Decision Model concept – in fact, it is a fundamental premise of it. IDIOM commented on the opportunity to ‘normalise’ decision design in its original article on Decisioning (Business Rules Journal, Jan 2007). Our view at the time, and one that we continue to hold, is that normalisation is more art than science, although we did and do agree that it provides useful lessons for decision designers.

                    This post discusses normalisation as it applies to decision models. Unfortunately, ‘decision model’ has different meanings in the KPI and IDIOM vocabularies, a difference that needs to be clarified first.

                    The KPI concept of a decision model is a relatively recent innovation that provides a formal structure for collating decision tables. It is intended to present the normalised structure of the ‘business logic’ as defined by collections of condition statements, which collectively derive a single ‘decision’. A decision in this case is a special class of ‘conclusion’ (one that is deemed to be of special interest to the business), so that the KPI Decision Model is a model of the internal structure of a decision.

                    The IDIOM Decision Model, on the other hand, is a structure containing a collection of decisions, and is defined as follows:

                    Defn: An ordered assembly of decisions that creates new and proprietary information to definitively determine an optimal course of action for the business.

                    The IDIOM Decision Model is a model of a collection of decisions and the relationships between them, including the logical groupings of decisions that are achieved by the eponymous ‘decision groups’ within the model. The IDIOM Decision Model is therefore a larger concept, with an external existence that is independent of any particular decision. It is the collection of decisions working together that delivers value to the business in the form of a single, significant, orchestrated and inherently valued change of state. By definition, all decisions involved in the state change must operate as a single ‘unit of work’; hence the larger concept called a decision model.

                    To our knowledge, there is no equivalent construct described in the KPI published works.

                    In IDIOM, the internal structure of a decision is described by a formula. In KPI terms, a formula can implement anything from a single condition, to a single row in a ‘rule family’, to the rule family itself, to a complete KPI Decision Model. In fact, it can do substantially more, because the formula can use the full range of data and logic operators rather than just the logic (i.e. Boolean) operators that are available to KPI conditions.
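
                    The contrast can be illustrated in plain Java (this is not IDIOM formula notation, which is assembled graphically in the Decision Manager, and the figures are invented): a KPI-style condition concludes true or false, whereas a formula-style derivation can also use arithmetic and data operators to create a new datum.

                    public class OperatorScopeExample {

                        // KPI-style condition: Boolean logic only, concluding a classification.
                        static boolean isLargeClaim(double claimAmount) {
                            return claimAmount > 10000;
                        }

                        // Formula-style derivation: arithmetic and data operators, producing a new
                        // datum (the excess payable) rather than a true/false conclusion.
                        static double excessPayable(double claimAmount, double excessRate, double minimumExcess) {
                            return Math.max(claimAmount * excessRate, minimumExcess);
                        }
                    }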

                    Because any formula can be implemented via a decision, it is possible that a KPI Decision Model could map to one or more IDIOM decision groups, decisions and/or formulas.

                    The definitions of ‘decision’ in the KPI and IDIOM worlds have a surface likeness.

                    A ‘decision’ in KPI terms is a synonym for a conclusion as described above, albeit with the qualifier that it must be ‘of interest to the business’, viz:

                    KPI Defn: A business decision is a conclusion that a business arrives at through business logic and which the business is interested in managing.

                    IDIOM defines a Decision as:

                    IDIOM Defn: A proprietary datum derived by interpreting relevant facts according to structured business knowledge.

                    While we think the word ‘proprietary’ conveys the business linkage in the IDIOM decision definition, the business importance of any individual decision is not highlighted in the IDIOM methodology, since this is subservient to the overarching intent and purpose of the decision model.

                    While these terms appear to be similar, in that they both describe decisions that are derived by the application of business knowledge, they also harbour a distinct difference.

                    The KPI emphasis on ‘business logic’ (their emphasis) as the means of derivation highlights this difference. Business logic is but a small subset of the operators available to the IDIOM formula, and this substantially limits the scope of ‘decisions’ that can be defined by the KPI Decision Model. When this significant limitation is coupled with the limited ‘single decision’ scope of the KPI Decision Model itself (when compared with the bigger IDIOM Decision Model concept that could include hundreds of decisions), the effect is pronounced in terms of the type and scope of decisioning tasks that can be modelled. But the question remains, can normalisation as a concept be validly applied to these models?

                    The normalisation of business rules in the KPI article is achieved by applying the traditional normal forms (first, second, third are described by KPI) to the set of conditions that are associated with a conclusion, aka a decision. IDIOM agrees with the KPI assessment that the relationship between conditions and conclusions as described benefits from applying these normal forms, which provide a useful guide to organising the underlying decision tables. This approach might be of value in building an IDIOM decision table, although these are not often used natively in IDIOM formulas – the equivalent results are usually easier to achieve using other means.

                    We have agreed that normalisation can be of value in organising the decision tables that are the essence of KPI Decision Models. We now look at the concepts that KPI does not implement: the IDIOM Decision Models and the Formulas.

                    The datum that results from an IDIOM decision is always output to a defined context node, and the context is always assumed to be normalised (in its data sense), therefore the structure of each decision’s derived data is normalised by definition.

                    However, in the IDIOM Decision Model, each decision also has a distinct location in the Decision Model itself, which is unrelated to the location of its context data.

                    Does normalisation have a role in this model?

                    Probably not. Normalisation is defined as a function of the dependency between an item and its key(s) – or ‘conclusion(s)’ in the KPI view. The existence of a decision or decision group in an IDIOM Decision Model is always unique by definition, because its location in the model has meaning – this is emphasised by the word ‘ordered’ in the decision model definition (the same decision made immediately before or after another could deliver a different result). This means that the model is normalised by definition, just as a data model would be if we defined each datum simply by its position relative to the key: if the definition of the first attribute is simply “the first attribute for the entity”, then it is correct by definition provided it is first. While this sounds simplistic, it does occur in practice, for example with “address line 1” of an Address entity.

                    And within a formula? Here the answer is probably yes. To normalise the logic within a formula means breaking the formula up into its component parts, each of which is unique across the entire model (and beyond, in fact, across the entire Repository, which is an IDIOM concept that can equate to an application or even a whole enterprise). The only qualifier is that this involves a small effort; if the redundancy is entirely local, there is always the pragmatic option of simply copying the logic within the formula itself.
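
                    As a loose analogy only, expressed in Java rather than in IDIOM formula terms: normalising formula logic is akin to extracting a shared helper that is defined once and reused wherever it is needed, while the pragmatic alternative is to repeat the small expression locally.

                    public class SharedFormulaLogic {

                        // Defined once; any formula or decision that needs it reuses it.
                        static double annualise(double monthlyAmount) {
                            return monthlyAmount * 12;
                        }

                        static double annualPremium(double monthlyPremium) { return annualise(monthlyPremium); }
                        static double annualBenefit(double monthlyBenefit) { return annualise(monthlyBenefit); }

                        // The pragmatic alternative: if the redundancy is purely local,
                        // repeating 'monthlyAmount * 12' inside one formula may be good enough.
                    }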

                    As a final general comment, we tend to avoid reliance on normalisation in its more prescriptive forms. Notwithstanding its ‘scientific’ dependency notation, it is more art than science.

                    Normalisation is a set of very useful guidelines for structuring data to reduce the potential for update anomalies. While normalisation is a powerful concept and is fundamental to good data design, it does not of itself provide a ‘scientific’ solution to any problem.

                    Several very loosely defined, subjective assessments are inputs to the ‘scientific’ normalisation process:

                    ·        Determining what data is relevant to the problem domain (what is your point of view?);

                    ·        Choosing the level at which to define it (would you use 28 tables to define how to render a single name from its name components, as was done for the failed billion-dollar CS90 project?);

                    ·        Choosing the balance of metadata and data (is describing everything as name/value pairs normalised?);

                    ·        Creating the actual definitions for each datum (where you place an attribute in the model will prescribe its definition, and vice versa).

                    These subjective inputs gave rise to the statement made in our earlier published articles:

                     “Data normalization is a semi-rigorous method for modelling the relationships between atomic data elements.  It is 'semi-rigorous' because normalization is a rigorous process that is dependent on several very non-rigorous inputs, including:

                    ·        Scope:  determines what is relevant -- we don't normalize what is not in scope.

                    ·        Context:  determines how we view (and therefore how we define) each datum.

                    ·        Definition of each datum:  this is highly subjective yet drives the normalization process.”

                    In summary, we think normalisation has an important role in structuring the data used by and produced by decisions. It may also be useful to help understand and structure decision tables. Its overall usefulness in this regard would depend on how widely used decision tables are within the methodology; in the case of an IDIOM Decision Model, this is limited.

                      BPTrends - BPM and Data

                      Mark Norton  29 April 2013 09:03:56 AM


                      Paul Harmon, writing for the Business Process Trends Advisor on the subject of BPM and Data, made the following comment:

                      "If a given activity used a set of rules, then all of the data required for those rules would need to be provided for the activity to make appropriate decisions. When focused on redesigning the mid-level process the team would focus on capturing the business rules involved in making decisions. If they used a software tool to capture the rules—by far the best practice if many rules are involved—then they would have a way of tracking the data required to drive the rules."

                      IDIOM agrees strongly with this approach, which is described in greater detail in the IDIOM-authored article on Modern Analyst here: http://www.modernanalyst.com/Resources/Articles/tabid/115/articleType/ArticleView/articleId/1354/Requirements-and-the-Beast-of-Complexity.aspx