Welcome to the IDIOM Software Blog.

    Discussion on the relationship between decision models and data models

    Mark Norton  18 September 2012 04:30:37 PM
    In response to a partner query, we have made some notes here regarding the relationship between decision models and data models.

    Question: “We can’t build final decision models until the data model is stable, as the decision engine schema must match the data model?”

    For a green-fields development, the relationship between the data and the decisioning is that they will BOTH become more correct and more stable as understanding of the domain increases. Both disciplines are very useful analysis techniques, and both are rigorous, in that they are disciplined approaches that derive an end point which can be verified as correct before traditional systems development even begins. In this regard, decision/data modelling can be used as a form of provable requirements analysis preceding the traditional SDLC.

    There is a loose and cyclic coupling between data and decision modelling as follows:

    ·        The data needs of the decision models define the scope and context of the data models (defined as XSDs in IDIOM).
    ·        Data modelling then creates the vocabulary that we will use to build the decision models. Each context implied by the decision models becomes synonymous with a schema and its root element. All required data that is in scope is then normalised into the context schemas.
    ·        As the decision definitions become more complete, they will create explicit demands for new data elements that may need to be added to the schemas. This process of adding to the data models and decision models iterates until the decision models execute all required use cases correctly (these use cases are supplied as XML documents that comply with the schemas; a validation sketch follows this list).
    ·        Successfully executing all use cases verifies both the data and decision models.
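
    A minimal sketch of this verification step is below, assuming a context schema (policy.xsd) and a folder of use-case XML documents; the file and element names are illustrative rather than drawn from IDIOM itself.

        # Verify that every use-case document complies with the context schema.
        from pathlib import Path

        import xmlschema  # third-party package: pip install xmlschema

        # One schema (and one root element) per decision context.
        schema = xmlschema.XMLSchema("policy.xsd")

        # Each use case is an XML document that must comply with the schema.
        for use_case in Path("use_cases").glob("*.xml"):
            if schema.is_valid(str(use_case)):
                print(f"{use_case.name}: valid")
            else:
                # An invalid document signals a gap in either the data model
                # or the use case itself, feeding the next iteration.
                print(f"{use_case.name}: INVALID")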

    The resulting schema-defined, context-specific data models must align with the enterprise data model; that is, the enterprise data model must encompass the synthetic superset of all of the schema models.

    The enterprise data model will also collate other data elements that are required by the processes but not necessarily by the decision models (for instance, un-enumerated text, identities and keys, audit data, etc.).

    If it is not a green-fields development and the application already exists, then we can start with the assumption that the current data model is correct (at least we know it is working, so it is correct with respect to its current usage). If any decision cannot be defined against it, then the business cannot make that decision unless it first addresses the data anomaly.

    So for a decision model development in support of an existing application (for instance, for validations, supplementary calculations, or complete add-in products), the data model can be assumed to be correct and the schemas should be drawn from it. In this case the existence or absence of data is a constraint on the scope of decision modelling, and not the other way around. IDIOM would argue that a constraint on decision making implies a failure on the part of the existing application, and is a strategic issue whose importance derives directly from the importance of the constrained decisions.

    Question: “If so, what is the role of the IDIOM Database Mapper? For example, could we build generic decision engines using aliases in the schema and map the aliases to the database?”

    The mapper can map the global data structures in the enterprise data model onto the transaction-specific schemas required for the decision models, using aliases if required. This can also be achieved easily by any competent development tool (Visual Studio, for instance). The specific benefit of the mapper is its tight integration with the decision model execution cycle, and very high performance while doing so.
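
    For illustration only, a hand-rolled version of this mapping might look like the following sketch, assuming a simple enterprise table (policy) and a transaction schema whose element names (the aliases) differ from the column names; all table, column, and element names here are hypothetical.

        # Map enterprise columns onto the aliased elements of a
        # transaction-specific schema document.
        import sqlite3
        import xml.etree.ElementTree as ET

        # Alias map: decision-model element name -> enterprise column name.
        ALIASES = {"PolicyId": "policy_no", "AnnualPremium": "prem_amt"}

        conn = sqlite3.connect("enterprise.db")
        conn.row_factory = sqlite3.Row  # access columns by name
        row = conn.execute(
            "SELECT policy_no, prem_amt FROM policy WHERE policy_no = ?", (42,)
        ).fetchone()

        # Build the transaction-specific document expected by the decision model.
        root = ET.Element("Policy")  # the context schema's root element
        for alias, column in ALIASES.items():
            ET.SubElement(root, alias).text = str(row[column])

        print(ET.tostring(root, encoding="unicode"))

    The mapper earns its keep by doing this work inside the decision model execution cycle at high speed; the sketch above shows only the logical shape of the mapping.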

    Question: “If not, does this mean that if I build a portfolio of standard decision models, then they have to be partially rewritten for each new site?”

    That is an option. On the assumption that your sites run a common application, you could build a standard set of decision models based on the common elements of the application, to be run in series with possibly bespoke decision models that address any customised portions of the application. Alternatively, write a standard series of decision models and include a transformation step as a precursor. The transformation can include tailored mappings, themselves managed via a decision model, that accommodate non-standard data representations across dissimilar databases by mapping them to the common schema used by the standardised decision models. However, this presumes that the required schema-defined data structures are logically contained within, and can be distilled from, the application database.
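
    As a sketch of that precursor transformation, the following maps a site-specific document onto the common schema consumed by the standardised decision models; the per-site mapping tables and element names are hypothetical, and in practice the mappings themselves could be managed via a decision model, as described above.

        # Transform site-specific documents into the common schema.
        import xml.etree.ElementTree as ET

        # Per-site mapping tables: site element name -> common schema element name.
        SITE_MAPPINGS = {
            "siteA": {"PolNo": "PolicyId", "Premium": "AnnualPremium"},
            "siteB": {"ContractRef": "PolicyId", "YearlyPrem": "AnnualPremium"},
        }

        def to_common_schema(site: str, source: ET.Element) -> ET.Element:
            """Rename site-specific elements into the common vocabulary."""
            mapping = SITE_MAPPINGS[site]
            common = ET.Element("Policy")  # common schema root element
            for child in source:
                if child.tag in mapping:  # drop anything outside the common scope
                    ET.SubElement(common, mapping[child.tag]).text = child.text
            return common

        site_doc = ET.fromstring("<Contract><PolNo>42</PolNo><Premium>990</Premium></Contract>")
        print(ET.tostring(to_common_schema("siteA", site_doc), encoding="unicode"))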