Hi CCG,
Is the model they used "open source" so to speak? Or is there at least a detailed breakdown of the factors they considered to build it?
I don't know if the software is open source...
But here are some links to a couple of books that may contain some of the details you're looking for (I haven't read either of them but I know that the updated version of LTG contains information on what "components" of "the world" are built into the model).
http://www.amazon.com/Limits-Growth-Revisited-SpringerBriefs-Analysis/dp/1441994157
http://www.chelseagreen.com/bookstore/item/limitspaper
As far as I understand the model, it is not limited to any particular set of components and can be adjusted.
As for black swans - I think one is forced to ignore them, at least initially, since by definition a black swan is something you have no framework to predict. However, I question whether they make a model useless - mostly I would argue they mean a model may tend to be optimistic and should be taken to represent a more optimistic scenario.
If results are important (i.e., "safety" is a concern), I would argue that a model of a complex system should never be used alone as justification for intervention within that system - models can provide a false sense of security and be a recipe for unintended consequences.
Modelling something complex (i.e., something with generally non-linear dynamics) will produce predictions that are almost certainly "out of touch" with empirical reality, thanks to the uncertainty involved.
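To make that concrete, here is a minimal sketch in Python - not the World3 model, just the textbook logistic map standing in for a non-linear system - showing how a measurement error of one part in a million grows until two runs of the "same" model no longer agree at all:

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x) and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two initial conditions that differ by a tiny "measurement error".
a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-6)

# Early on the runs track each other; eventually they decouple.
first_step_gap = abs(a[1] - b[1])
max_gap = max(abs(x - y) for x, y in zip(a, b))
print(first_step_gap, max_gap)
```

The error roughly doubles each iteration, so after a few dozen steps the two trajectories are effectively unrelated - no amount of refining the model structure fixes this, because the amplification comes from the non-linearity itself.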
Interventions within complex systems are "naive interventions" in the sense that the actual risks involved with intervention are impossible to know, making "risk management" an inherently incomplete defense against unwanted consequences.
If a model of a complex system is only being used as "food for thought" (sorry, "for fun" was not the best way to put it) then there is really no harm in its use.
However, using a model of a complex system as, say, the basis for a national economic policy is a recipe for disaster - and a failure to realize this is a set-up for a Black Swan event at some point (possibly "good" but usually "bad").
If this sounds like it should be obvious, it apparently isn't: history is littered with examples of people underestimating the complexity of the system they intended to intervene in and causing all manner of unintended consequences as a result.
A good "rule of thumb" ought to be that anyone proposing some kind of intervention in any complex system (whether that be a human body, a banking system, an economy or an ecosystem) should have to bear the burden of proof that the proposed intervention will do no harm - and model results don't count as actual proof ;-)
With respect to making such a model for fun or to justify risk-taking, I would argue it must surely be better to have some model at all to justify what one does, rather than operating from a basis of individual analysis and (to some extent) gut feel?
Again, I think it depends on what's at stake.
Imagine the following situation:
You are on a ship with other people in a very thick fog.
The people who are piloting the ship want to accelerate into the fog along their proposed route and they are confident despite the fog because they have a "map".
But what about floating logs, other ships and maybe icebergs - hidden risks that do not show up on a "map"?
Would you be ok with this operation or would you like to get off the ship before it heads out?