'Bindedness' in metamodels

A long post-TOGAF conversation on Friday with enterprise architect and academic Erik Proper brought up the question of what I’ve been calling ‘bindedness’ in metamodels and compliance reference models for enterprise architecture.

In conventional models and metamodels, links between items are kind of binary: if they exist, they apply every time, and if they aren’t in the model, they’re deemed not to apply at all. It fits well with the standard rule-based world of IT, which is why we see them so often in all manner of IT-type models, from data-models to process-models to ArchiMate layered models, and so on.

The catch is that this isn’t how things work in the real world. The reality is described well in Volere requirements modelling: rather than a simple true/false or ‘is-required’ / ‘is-not-required’, the template allows for at least five levels of ‘bindedness’, in two different directions (satisfaction if present, and dissatisfaction if absent). Even the old-style requirements techniques allowed for at least two levels, described via ‘shall’ (mandatory) and ‘should’ (desirable or highly-desirable). Yet another form of variable bindedness is the ‘MoSCoW’ set applied to requirements in some Agile software-development styles:

  • Must – mandatory
  • Should – highly desirable, strongly recommended
  • Could – desirable, recommended, an option known to work in this kind of context
  • can Wait – a ‘nice-to-have’ that can wait until a later iteration (known as ‘Waiting Room’ in Volere)
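As a rough sketch of the idea – all names here are mine, for illustration only – the MoSCoW set can be treated as an ordered categorisation rather than a boolean, which is the essential shift away from true/false links:

```python
from enum import IntEnum

class Bindedness(IntEnum):
    """One possible encoding of the MoSCoW set as ordered link-strengths;
    a higher value means a stronger binding."""
    CAN_WAIT = 1  # 'nice-to-have', deferred ('Waiting Room' in Volere)
    COULD = 2     # desirable, recommended, known to work in this context
    SHOULD = 3    # highly desirable, strongly recommended
    MUST = 4      # mandatory

# Because the levels are ordered, link-strengths can be compared directly,
# rather than collapsing to true/false:
assert Bindedness.MUST > Bindedness.SHOULD > Bindedness.COULD
```

The point of using an ordered type is that ‘how strongly does this link bind?’ becomes an answerable question in the metamodel itself.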

Much the same applies to reference-models: these too need similar contextual bindedness. Compliance to some parts of the reference model may be mandatory in law; we don’t have any choice about that, and these are the same straightforward true/false links as in standard software models and the like. But for the rest, it’s really about risk-management and levels of risk: we can move away from the standards specified in the reference-model if we must – perhaps because of practical constraints, or because what we need simply isn’t available and can’t be bought or built within reasonable cost at the present time – but every deviation from the standard represents increased risk. High bindedness indicates high risk from any deviation from the standard; low bindedness indicates low risk. In enterprise architecture, we document the deviation by a formal ‘dispensation’ or some other equivalent risk management mechanism.
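To make the risk-management reading concrete, here is a minimal sketch – entirely illustrative names, not from any real toolset – of how a formal ‘dispensation’ might tie a deviation back to the bindedness of the standard it deviates from, with the legally-mandatory case kept as the one true hard boundary:

```python
from dataclasses import dataclass
from enum import IntEnum

class Bindedness(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    MANDATORY = 4   # required in law: no dispensation possible

@dataclass
class Dispensation:
    """A formally documented deviation from a reference-model standard."""
    standard: str
    reason: str
    risk: Bindedness   # risk of the deviation tracks the link's bindedness

def request_dispensation(standard: str, bindedness: Bindedness,
                         reason: str) -> Dispensation:
    if bindedness is Bindedness.MANDATORY:
        raise ValueError(f"{standard}: mandatory in law, no deviation allowed")
    return Dispensation(standard, reason, bindedness)

# A high-bindedness standard can still be deviated from, but the
# dispensation record carries the correspondingly high risk:
d = request_dispensation("TLS-everywhere", Bindedness.HIGH,
                         "legacy device cannot be upgraded this year")
assert d.risk is Bindedness.HIGH
```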

Which brings us back to models and metamodels, because in the standard, rather IT-oriented approaches to modelling, there’s no way to describe this variability of bindedness: we’re stuck with true/false, mandatory or not-required-at-all. The OMG’s Semantics of Business Vocabulary and Business Rules (SBVR) specification is one existing definition that goes part of the way there, but it uses horrible formal-logic terminology such as ‘modality’ (meaning bindedness, but a word which has many entirely different meanings in other contexts) and ‘alethic’ versus ‘deontic’ (true/false versus slightly more variable than true/false), and I haven’t seen any modelling technique – with the possible exception of ORM, or Object Role Modelling – which actually applies it. Even then, it still doesn’t carry through either the simplicity or richness of the bindedness of a reference model, but instead again drowns us in the impenetrable rigidity of formal-logic.

To me it’s just laziness that we’ve stuck with the crude true/false links for so darn long, because they don’t fit what we actually need, and they’re often dangerously misleading in practice. It shouldn’t be hard to define and implement variable bindedness in a metamodel: at its simplest it’s just a straightforward categorisation of link-strength, and could be displayed in a modelling notation by thickness of line or by colour-coding or the like. I’ll have a go at building it into the metamodel for enterprise architecture that I’m working on at present, but it would be good to avoid reinventing the wheel if someone else has done it already.
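To show how little machinery the simplest version needs – again a sketch under my own illustrative names, not a worked-out metamodel – a link carrying a bindedness category can be mapped straight onto a display convention such as line-weight:

```python
from dataclasses import dataclass
from enum import IntEnum

class Bindedness(IntEnum):
    CAN_WAIT = 1
    COULD = 2
    SHOULD = 3
    MUST = 4

@dataclass
class Link:
    """A metamodel link whose strength is a categorisation, not a boolean."""
    source: str
    target: str
    bindedness: Bindedness

# One possible display convention: heavier line-weight for stronger binding
# (colour-coding would work the same way, keyed off the same attribute).
LINE_WEIGHT = {
    Bindedness.CAN_WAIT: 0.5,
    Bindedness.COULD: 1.0,
    Bindedness.SHOULD: 2.0,
    Bindedness.MUST: 4.0,
}

link = Link("Payment-process", "Compliance-standard", Bindedness.MUST)
assert LINE_WEIGHT[link.bindedness] == 4.0
```

Note that the notation question (thickness, colour) stays entirely separate from the content question (the categorisation itself) – the lookup table is the only place the two meet.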

Advice / suggestions, please?

10 comments on “'Bindedness' in metamodels”
  1. Erik Proper says:

    Things become even more fun if we realise that we apply a “closed world semantics” to our models. If something is not stated in the model, we generally hold it to be false.

    If we regard a model as a set of logic-based statements (which SBVR does in my opinion, as does ORM), then we can import a lot from logic. In the field of logic there are logics other than the first-order predicate logic that is dominant in IT thinking, and we can add relevant modal operators (must, should, usually, etc) to our model statements. Even more, adding forms of “default reasoning” would allow us to do more useful things with reference models as a “knowledge theory”.

  2. Tom G says:

    Erik – thanks very much for this – is much appreciated (as also our Friday conversation).

    Modal logics would be a huge advance on binary logic, and will definitely help (though I do wish they weren’t so damn impenetrable apparently for the sake of impenetrability! 🙂 ). As we get a bit further down the track, though, I’m wary of falling into the old ‘analysis trap’ of trying to use logic where logic itself is misleading. I know it’s only a crude analogy, but the ‘quantum point’ analogue for the moment of sale in a sales process indicates a point where any conventional logic is probably out of scope – and I suspect that would apply to all principle-based decision-making in the chaotic ‘market-of-one’ domains.

    In Cynefin terms, binary logics apply in the two order-based domains – rule-based and analytic – but fail in the non-ordered domains – complex and chaotic. Modal-logics would probably work quite well in the complex-domain, but I suspect – as above – that they would fail in the chaotic-domain. (They would fail in wicked-problems for much the same reasons.) For EA models, we probably can’t model as such in the chaotic-domain – precisely because we can’t get a logic to work there – but we should at least be able to indicate that the standard logic-rules do not and cannot apply, hence a warning within the model that we shouldn’t attempt to use them in that specific context. That alone would help to cut down on the number of foolish attempts to apply IT-based ‘solutions’ in contexts where, by definition, their binary logic cannot succeed.

  3. Colin Wheeler says:

    Hi Tom,

    I have a metamodel that we already use for this purpose: we rate it along with a pyramid completeness model, as well as a benefit model, to define the nature of the relationships on the MoSCoW model that you were talking about above. The basis for each relationship must be set by the stakeholder that needs that relationship formed, and they must justify those decisions. The metamodel itself is recursive, so that the level of detail achieved is always the minimum practical level that is useful.

    Cheers
    Colin Wheeler

  4. Tom G says:

    Colin – great! – would love to hear more on this.

    The aim is to use the metamodel as a base for an Open Source enterprise-architecture toolset, drawing on the work that Charles Edwards has done at the Agile EA website – would you be interested in helping in this, or in contributing some of the metamodel work that you’ve done?

  5. Just an out of the blue thought.

    I understand the true/false logic problem. To me I always saw it as a black/white comparison where the world could always be somewhere in between (the print color codes come to mind… the Pantone color-code scheme). Would the accuracy or bindedness of the relationship be tied to a scale, a probability, or a measure of stickiness or commonality?

  6. Tom G says:

    Pat – once again, many thanks.

    I don’t have a simple answer: that’s why I asked the question. 🙂 As Erik says above, modal-logics are probably the best way to tackle it in the formal sense, though I would prefer some much simpler mechanism such as the MoSCoW set (must, should, could, can wait). For now (or rather, for when we get closer to actual implementation of the core-metamodel) I’ll probably start with something like the MoSCoW set, and ask Erik and others to help with the formal modal-logics if/when we get that far.

    I’ve just had a more in-depth look at SBVR: was disappointed to discover that it’s still essentially first-order (true/false) logic, which the commentator then blithely said “would cover most if not all business contexts”, which it doesn’t. Bah. ORM is closer, but I’m still struggling with the pedantry of its modal logic when something simpler would fit the need much better. My head hurts… I ain’t no mathematician, and that lack is really starting to show here! 🙁 🙂

  7. What is the metamodel for? How do you expect it to be used?

    (Sometimes people building metamodels don’t seem to be able to answer these questions, or look at you as if the question doesn’t make sense to them. But I think they are important questions.)

    If you are building or customizing a tool or repository, to be used within the context of some methodology, you may actually need two metamodels. One metamodel tells you what incomplete and possibly inconsistent stuff you are able to put into the tool for further refinement. And a second (probably more rigorous) metamodel tells you what consistency and completeness rules a given model must satisfy in order to proceed to the next stage. Alternatively, you can deal with these rules by defining quality policies and integrity rules against the first metamodel.

    If the tool builders or methodologists don’t understand the difference between these two, then the tool or methodology will either be too inflexible or insufficiently rigorous.

    I am suspicious of optional rules (should/could), because this may lead simply to greater complexity and lower quality. I prefer to see context-driven rules. For example, your data model can be like this, but if you want to generate a database schema your data model must be like this.
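[Richard’s “context-driven rules” idea can be sketched in a few lines – all names here are illustrative, just to pin down the distinction between what a tool will *accept* and what a given context *requires*:]

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Attribute:
    name: str
    datatype: Optional[str] = None   # may be left blank while sketching

def can_capture(attr: Attribute) -> bool:
    """Permissive 'first metamodel': anything named can go into the tool
    for further refinement, however incomplete."""
    return bool(attr.name)

def can_generate_schema(attr: Attribute) -> bool:
    """Context-driven rule: to generate a database schema, every
    attribute must also carry a datatype."""
    return can_capture(attr) and attr.datatype is not None

draft = Attribute("customer_id")           # fine for early modelling...
assert can_capture(draft)
assert not can_generate_schema(draft)      # ...but not yet fit for this context
```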

  8. Tom G says:

    Hi Richard – The aim for the metamodel is to use as the base for an Open Source enterprise architecture toolset.

    I take your point about “two metamodels”, but this is arguably one step below both of those examples: it’s the core metamodel (metametamodel?) which defines all possibilities within the toolset.

    The standard first-order logic of true/false or exists/does-not-exist – which, as you say, we would need for a conventional data-model – is merely one type of logic that could be implemented in a modal-logic schema. We can specify that all links used in the respective model-types would be fixed as first-order logic – because first-order logic _is_ a mode within the modal-schema. If we have separate metamodels for different modal schemas, but which also share many other entity-types and attributes, we’ll end up with a horrible mess of special-cases. Far simpler to build the modal-logic in at the root, and constrain it as required as we build model-types on top of that schema.

    Incidentally, this is one reason why the mechanism I’ve suggested for model-types – namely that model-types contain lists of link-types which in turn point to entity-types – is a heck of a lot simpler to drive at the metamodel level: constraining modality becomes part of the model-type definition, rather than buried somewhere in amongst a morass of proliferating link-types. To answer your last comment, in effect the model-type in this approach _is_ the “context-driven rules”.
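    [As a thought-experiment sketch of the mechanism described above – names are illustrative only – a model-type holds link-types, each link-type points to entity-types, and the modality constraint lives in the model-type definition rather than in proliferating link-types:]

```python
from dataclasses import dataclass, field
from enum import IntEnum
from typing import List

class Modality(IntEnum):
    CAN_WAIT = 1
    COULD = 2
    SHOULD = 3
    MUST = 4

@dataclass
class LinkType:
    name: str
    source_entity: str
    target_entity: str
    allowed_modalities: List[Modality]   # constrained per model-type

@dataclass
class ModelType:
    """A model-type is a list of link-types; constraining modality is
    part of the model-type definition, not buried in the link-types."""
    name: str
    link_types: List[LinkType] = field(default_factory=list)

# A conventional data-model fixes every link as first-order (MUST only),
# because first-order logic is just one mode within the modal schema:
erd = ModelType("data-model", [
    LinkType("has-attribute", "Entity", "Attribute", [Modality.MUST]),
])

# A reference-model leaves the full modal range open:
ref = ModelType("reference-model", [
    LinkType("complies-with", "Capability", "Standard", list(Modality)),
])
```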

    Still a long way to go in working on this, obviously, but it is coming together at the thought-experiment level at least.

  9. I agree with Tom that bindedness is important. I think this means that the critical dichotomy we have to deal with in the metamodel isn’t true/false but overdetermined/underdetermined. Flexibility in the model (and in the artefacts built from the model) comes from the bits that are underdetermined.

    But this involves a lot more than just changing the colour or thickness of a line on a diagram.

  10. Tom G says:

    Richard – yup, very definitely agree that “it involves a lot more than just changing the colour or thickness of a line on a model”. But content and display of that content are two separate issues: the colour/line comment was about possible means to display modal difference within graphical models, not about the content or structure of modal-logic links.

    Not quite sure what you mean by overdetermined / underdetermined: sounds a bit like what in the books I’ve called ‘completeness’ of patterns or composites – but expand, perhaps, if you would? Many thanks.
