Microsoft's Jezz Santos is hitting some interesting questions about how to
generate code when the DSM solution scales past toy examples to real-world size:
multiple code files from multiple models, with no one-to-one mapping, and
possibly spread across multiple organizations. He asked me for help back in January:
"I am exploring aspects of artefact generation from multiple views of the same model, that I feel must have been already covered in the DSM space. I am really looking at the patterns for this, and techniques that can be used to solve these challenges. ... Would you be able to offer some guidance/reference here?"
To answer the basic question of how to make this work, you first need two things:
- A tool that supports multiple integrated models, made in multiple
integrated modeling languages.
- Generators that go beyond simple templating, able to navigate freely through multiple
models and output freely to multiple file streams (sketched in code below).
In our experience those are minimal requirements for any kind of real, working
DSM / software factory. Since they are currently missing in DSL Tools, any serious user will
hit these problems and have to roll their own solution (or hack!), e.g. model
integration in Ordina
or multiple model-to-file mappings in Clarius.
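To make those two requirements concrete, here is a minimal sketch in plain Python rather than any real generator language: two integrated models (a data model and a UI model) whose elements reference each other directly, and one generator that navigates both and writes to several output files. All the class, model and file names here are invented for illustration, not taken from any particular tool.

```python
# Hypothetical sketch: integrated models that reference each other directly,
# and a generator that navigates across them and writes several output files.

from dataclasses import dataclass

@dataclass
class Entity:                 # element of the data model
    name: str
    fields: list[str]

@dataclass
class Screen:                 # element of the UI model
    name: str
    shows: list[Entity]       # cross-model reference: the same Entity objects, not copies

def generate(screens: list[Screen], entities: list[Entity]) -> None:
    # One generated file per data model element...
    for e in entities:
        with open(f"{e.name}.generated.cs", "w") as out:
            out.write(f"// data class for {e.name}\n")
            for fld in e.fields:
                out.write(f"//   property {fld}\n")
    # ...plus one file assembled from both models: "a bit from here and a bit from there".
    with open("Navigation.generated.cs", "w") as out:
        for s in screens:
            shown = ", ".join(e.name for e in s.shows)
            out.write(f"// screen {s.name} binds to {shown}\n")

customer = Entity("Customer", ["Name", "Address"])
order = Entity("Order", ["Date", "Total"])
generate([Screen("OrderEntry", shows=[customer, order])], [customer, order])
```

The point is only the shape of the solution, not any particular tool's syntax: the models share their elements rather than duplicating them, and the generator is free to open as many output streams as the architecture needs.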
It will be interesting to see what happens as people try to combine several such solutions, each evolving separately and at a fair pace, with DSL Tools and its evolution, and with the evolution of their own DSM solution. A change in any one of these is likely to break things downstream, especially at this early stage: it's better for MS, Clarius etc. to bite the bullet and change APIs if experience shows there were poor choices, rather than keep backwards compatibility but be forever locked into a bad solution.
Once the features for multi-model and multi-file generation are in place, as they are in MetaEdit+ and presumably will be at some future time in DSL Tools, it's possible to look at how best to use them on a macro scale. In most cases it's a piece of cake to generate multiple files from multiple models, a bit from here and a bit from there. The only provisos are that the modeling language is designed reasonably sensibly, and that the processes surrounding the models and files are designed sensibly too.
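At the macro scale the mapping itself is the interesting part. A rough sketch, again with invented model and file names, of how a many-to-many mapping between models and generated files might be recorded, and how to tell which files a change to one model affects:

```python
# Hypothetical sketch of the macro-scale mapping, not the generators themselves:
# no one-to-one relation between models and files; each output draws on several
# models and each model feeds several outputs. All names are invented.

OUTPUTS = {
    # generated file   ->  models it pulls content from
    "Persistence.cs":      ["DataModel"],
    "OrderWorkflow.cs":    ["ProcessModel", "DataModel"],
    "MainWindow.xaml":     ["UIModel", "DataModel"],
    "deployment.config":   ["ProcessModel", "UIModel"],
}

def files_to_regenerate(changed_model: str) -> list[str]:
    """Which generated files need refreshing when one model changes."""
    return [f for f, sources in OUTPUTS.items() if changed_model in sources]

print(files_to_regenerate("DataModel"))
# ['Persistence.cs', 'OrderWorkflow.cs', 'MainWindow.xaml']
```

Whether such a mapping lives in a build script, in the generators, or only in people's heads, making it explicit is what keeps "a bit from here and a bit from there" from turning into a maintenance puzzle.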
Jezz's blog post seems to me to reveal two clear mis-steps there: the separation into PIM and PSM, and the separation into an ISV and end-user company. Both of these are IMHO bad ideas, one from OMG/IBM and the other from Microsoft. In fact, they're so hard to justify from a real world or technical perspective that I'm inclined to assume they were invented because of particular marketing or business pressures in those organizations.
Jezz touched on some of these issues in an earlier
post, and in particular a comment
there by Tom Hollander points in
the right direction. Our experience is that in any particular case, with good
tool support it is easy enough to come up with a solution you can be pleased
with. Trying to come up with general advice takes rather a lot more experience.
With enough brainpower you can generate reasonable guesses, but it's been
interesting to see over the last decade how many of those guesses have had
to be re-evaluated in the light of the surprises that turn up in some
corners of DSM. It's also fun to see that the newcomers to this space are tending
to come up with some of the same bad guesses :-).