Scientific American has an interesting article on how we should shape our policies. Rather than aiming for optimal solutions, we should find robust ones that give us the most flexibility and the best chance of good outcomes.
I wonder how this simple idea applies to software design, user interfaces, and so on. In research, we tend to pursue solutions that are fully optimal but only for narrow cases, rather than robust ones that get us 80% of the way there.
Making Policies Robust
Traditional tools such as cost-benefit analysis rely on a "predict then act" paradigm. They require a prediction of the future before they can determine the policy that will work best under the expected circumstances. Because these analyses demand that everyone agree on the models and assumptions, they cannot resolve many of the most crucial debates that our society faces. They force people to select one among many plausible, competing views of the future. Whichever choice emerges is vulnerable to blunders and surprises.
Our approach is to look not for optimal strategies but for robust ones. A robust strategy performs well when compared with the alternatives across a wide range of plausible futures. It need not be the optimal strategy in any future; it will, however, yield satisfactory outcomes in both easy-to-envision futures and hard-to-anticipate contingencies.
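One common way to formalize this idea is minimax regret: rather than maximizing payoff under a single predicted future, pick the strategy whose worst-case shortfall across many plausible futures is smallest. Here is a minimal sketch in Python, with entirely hypothetical strategy names, futures, and payoff numbers, just to illustrate the selection criterion:

```python
# Hypothetical payoffs for each strategy under three plausible futures.
payoffs = {
    "optimized_for_boom": {"boom": 100, "steady": 40, "bust": 5},
    "optimized_for_bust": {"boom": 20, "steady": 35, "bust": 60},
    "robust_hedge":       {"boom": 70, "steady": 60, "bust": 45},
}

futures = ["boom", "steady", "bust"]

# Best achievable payoff in each future, across all strategies.
best = {f: max(p[f] for p in payoffs.values()) for f in futures}

def max_regret(strategy):
    """Worst-case regret: the largest gap between this strategy's payoff
    and the best achievable payoff, taken over all plausible futures."""
    return max(best[f] - payoffs[strategy][f] for f in futures)

# The robust choice minimizes worst-case regret. It is optimal in no
# single future, but never falls far behind in any of them.
robust = min(payoffs, key=max_regret)
print(robust, max_regret(robust))  # → robust_hedge 30
```

Note that `robust_hedge` wins even though it is the best strategy in none of the three futures taken individually, which is exactly the property the article describes.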
This approach replicates the way people often reason about complicated and uncertain decisions in everyday life. The late Herbert A. Simon, a cognitive scientist and Nobel laureate who pioneered in the 1950s the study of how people make real-world decisions, observed that they seldom optimize. Rather they seek strategies that will work well enough, that include hedges against various potential outcomes and that are adaptive. Tomorrow will bring information unavailable today; therefore, people plan on revising their plans.