Reviewed by
Frank Dignum
Utrecht University
Basically, the accuracy-effort trade-off does hold in "small worlds", where (in principle) all information can be retrieved if enough effort is put into it. Intuitively this means that if we have more information about a situation, we can make better decisions. This sounds very logical. However, in the real world this is not always the case. In fact, this is illustrated (also in the book) by the fact that stockbrokers and analysts do not make better predictions about the value of stocks than people without any knowledge of the stock market. (In the Netherlands it has been shown that a monkey can consistently do as well as the analysts; the stocks it chose over the last 10 years made far more money than the stocks chosen by the analysts.) Thus, this is a very concrete example where having more knowledge and information does not lead to better decision making.
Just giving some examples of situations in which heuristics perform better than sophisticated optimization techniques would not be very convincing, though. The argument is certainly not that some simple heuristic like "take-the-best" (which orders the cues by validity and then decides on the basis of the first cue that discriminates between the alternatives) will always be as good as any other method, but rather that in some environments this heuristic works better. The interesting question is which characteristics of the environment make the heuristic perform well.
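To make concrete how little computation such a heuristic needs, here is a minimal sketch of a take-the-best-style comparison in Python; the cue names, validity ordering and city data are hypothetical, invented purely for illustration, and the code is not taken from the book.

```python
# Minimal sketch of a "take-the-best"-style decision between two options.
# Cues are checked in order of (assumed) validity; the first cue that
# discriminates between the options decides, and all remaining cues are ignored.

def take_the_best(option_a, option_b, cues_by_validity):
    """Return the option favoured by the first discriminating cue.

    option_a, option_b: dicts mapping cue name -> 0/1 cue value.
    cues_by_validity: cue names ordered from most to least valid.
    """
    for cue in cues_by_validity:
        a, b = option_a.get(cue, 0), option_b.get(cue, 0)
        if a != b:                      # cue discriminates: decide and stop
            return "A" if a > b else "B"
    return "guess"                      # no cue discriminates: guess

# Hypothetical example: which of two cities is larger?
cues = ["has_major_airport", "is_capital", "has_university"]  # assumed validity order
city_a = {"has_major_airport": 1, "is_capital": 0, "has_university": 1}
city_b = {"has_major_airport": 1, "is_capital": 1, "has_university": 1}
print(take_the_best(city_a, city_b, cues))  # -> "B" (decided by the second cue)
```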
There is not a simple answer to this question. However, the book gives us some clues. One of them can be illustrated by trying to fit a function to a number of data points. If the data are accurate, then a higher-order function will fit the data better, and this function will thus give a better prediction of where new data points will lie. However, if the data points contain noise, fitting the data exactly also means incorporating that noise into the function; the higher-order function will therefore contain more noise than a lower-order one and does not make better predictions about new data points. This is similar to the phenomenon observed when training neural networks: if they are overfitted to a training set, they may perform worse on new data. Thus, in an environment where data are noisy (like the real world) a simpler function may give better predictions, even though it fits the data points less well.
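The point can be illustrated with a small numerical sketch. The code below, using NumPy, fits a straight line and a ninth-degree polynomial to a handful of noisy points drawn from a linear relation; the underlying function, noise level and sample sizes are made up for illustration only.

```python
# Sketch of the curve-fitting point: on noisy data, a high-degree polynomial
# fits the observed points better but tends to predict new points worse than
# a simple one, because the extra flexibility absorbs the noise.
import numpy as np

rng = np.random.default_rng(0)

def noisy_samples(n):
    x = np.linspace(0, 1, n)
    return x, 2 * x + 0.5 + rng.normal(0, 0.2, n)   # true relation is linear

x_train, y_train = noisy_samples(10)   # points used for fitting
x_test, y_test = noisy_samples(50)     # new points to predict

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)    # least-squares polynomial fit
    fit_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    pred_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: training error {fit_err:.4f}, prediction error {pred_err:.4f}")

# Typically the degree-9 polynomial shows a near-zero training error but a
# higher prediction error than the simple linear fit.
```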
The book contains many (40) articles (coming mainly from the ABC research program of the Max Planck Institute in Berlin) showing that humans often use simple heuristics with great success in particular environments, often doing better than sophisticated optimization methods. I found the articles quite interesting, although they contain some overlap, because they often start by arguing the case for heuristics all over again. However, the range of domains covered also gives a good idea of the types of heuristics that are available and of their characteristics.
The main thing I missed in the book is a concluding chapter that summarizes what has been achieved so far and what the challenges are for the research area. For instance, many different heuristics are described, but no attempt is made to classify them yet (although the editors do talk about the building blocks of heuristics).
For the simulation community it seems at first sight that the book supports the KISS approach to social simulation, in which simple rules are applied. However, this would oversimplify the message of the book. Using the heuristics is not always simple, and in particular it is not clear beforehand which heuristic should be used for a given decision. This makes me wonder what kind of agent architectures we would need for agent-based social simulation if we assume that agents make their decisions based on a set of heuristics (rather than on some logical inference or optimization method based on utility). Although I cannot give an answer to this question yet, I think that any book that sets you rethinking some principles of decision making is worth reading!