W. Brian Arthur
Good morning everybody, it's early morning here in California and late at night in China. I'm delighted to be here. Thank you, Simon, for a wonderful talk. I want to emphasize something that Simon said. A complex system is really a system with multiple elements that create some pattern. They react to the pattern they're creating, maybe they change their behavior, and then the pattern itself reacts. There's a recursive loop. A very simple example I'm fond of is thinking of cars as the elements. They create a pattern that we call traffic. The cars are adjusting to the traffic all the time, but the traffic is the outcome of the cars adjusting. What if we looked at the economy this way? Is the economy an evolving complex system of that sort? My answer would be yes. Physicists are always telling us economists that we borrowed these ideas from them. I like to point out that since the time of Adam Smith, economists have been looking at individual players in the economy, be they firms or producers, creating some market pattern that they're reacting to and changing, and thereby the market itself is changing.
Quite a long time ago, in 1987, the economist Kenneth Arrow and his counterpart in physics, Philip Anderson, a Nobel Prize winner in physics, convened a meeting at the Santa Fe Institute on this topic: Is the economy an evolving complex system? By the way, present at that meeting were Doyne Farmer, who will speak next, Tom Sargent, who's with the Academy, Larry Summers, John Holland, and so on. I was there as well. This became Santa Fe's first research program, and I was asked to lead it. When we convened at Santa Fe to do long-term research on this, we discovered it was not at all easy to say how the economy might be an evolving complex system.
But we began to realize that, in reality, firms in any market differ. They're not all the same. I happen to live in Silicon Valley. Say you're launching into some new industry, fleets of driverless trucks, some new activity coming up, and you're trying to start a firm to do that. You simply don't know what resources the other players have. You don't know what technologies they're using. You don't know their intentions. It's not just a matter of putting probabilities on these things. You simply don't know what the other players are going to do. There's a genuine region of fundamental uncertainty.
So this means that problems, if you want to be rigorous in economics, are in general not well defined. There's no optimization you can perform if you don't quite know what the problem is. We decided economics itself had gotten stymied on that, and our group got stymied as well. John Holland was a member, and John quite rapidly pointed out that people do act in situations that are not well defined. That's research he had done all his life. John was a computer scientist, interested in what we would now call AI, teaching computers to get smart playing chess or checkers. John told us that individuals, people in a situation that is not well defined, tend to try to make sense of it. They form hypotheses. They test these out, they continually change their hypotheses, drop ones that aren't working, adopt new ones. So we began to ask, how can we do economics in that spirit? We decided that we would construct models where each agent could use a range of rules. The rules and hypotheses might differ from one agent to another in our models. The rules might not be very smart to begin with; they might be randomly chosen. And over time in our models, agents could learn which hypotheses and rules worked for them, which were accurate or which were getting better results. They would generate new ideas or rules from time to time. They would throw out the ones that weren't working. And over time, an intelligence would emerge. I want to point out that looking at problems this way is very much the ancestral strategy to the current one that trains computers to play Go very well. So this is a kind of ancestor of AlphaGo Zero.
But looking at problems that way immediately lands us in a world where forecasts, strategies, and actions are getting tested within a situation, I call it an ecology, where the forecasts, strategies, or actions are mutually creating one another. So immediately, we're in a more biological-looking situation. Evolution, survival, and ecologies appear out of this logic. Also, you can't quite keep track of all of this in your mind. Maybe you can; most people can't. And so you have to track all this using computation. It turns out that what I've just described is the backbone, the logic, of agent-based modeling.
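The learning loop just described, agents maintaining a pool of candidate rules, scoring them against results, discarding the weak ones, and occasionally generating new ones, can be sketched in a few lines. This is a minimal illustration of that logic only; the class, names, and parameters here are invented for the example, not taken from the Santa Fe models.

```python
import random

class Agent:
    """A toy sketch of a hypothesis-testing agent (illustrative only)."""

    def __init__(self, n_rules=10):
        # Each "rule" is just a random parameter here; in a real model it
        # would be a hypothesis mapping observations to actions.
        self.rules = [random.uniform(-1, 1) for _ in range(n_rules)]
        self.scores = [0.0] * n_rules

    def act(self, signal):
        # Act on the currently best-scoring rule.
        best = max(range(len(self.rules)), key=lambda i: self.scores[i])
        return self.rules[best] * signal

    def learn(self, payoffs, explore_rate=0.1):
        # Move each rule's score toward its observed payoff.
        for i, p in enumerate(payoffs):
            self.scores[i] += 0.5 * (p - self.scores[i])
        # Occasionally throw out the worst rule and try a fresh random one.
        if random.random() < explore_rate:
            worst = min(range(len(self.rules)), key=lambda i: self.scores[i])
            self.rules[worst] = random.uniform(-1, 1)
            self.scores[worst] = 0.0
```

The design point is that no intelligence is programmed in: whatever competence emerges comes from selection pressure on an evolving pool of rules.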
One thing we wanted to do was to see, if we carried this out on some real economic problem, whether it would give us any different results. So by 1988, very much with the help of John Holland and others, and in particular with visits from Tom Sargent, we began to think we might be able to model a stock market, or what's called asset pricing. Robert Lucas had a wonderful mathematical model of asset pricing in 1978. It's a classic paper in standard economics, beautifully done, with a very elegant solution. It looks at identical investors using as an equilibrium strategy forecasts that on average are correct. That produces bids and offers for a single stock where the forecasting method is in equilibrium, naturally called a rational expectations equilibrium. The problem with Lucas's model is that a lot of phenomena in real markets get ruled out if all investors are the same: much of the trading disappears, and there are no bubbles and crashes like those seen in real markets, no periods of high volatility and low volatility.
Lucas had a beautiful model, which I think is still a classic, but it left out phenomena you see in real markets. What we decided to do was to see if we could replace Lucas's very mathematical, identical investors with differing, what we call, artificial investors. These would be small, intelligent computer programs that could differ from each other, each one representing an individual investor in our artificial computerized world. And rather than start them with forecasting methods that, on average, were the same and always correct, we set them the task of discovering forecasts that would work, very similarly to what I was just describing. Agents might start off with random forecasts. For example: prices have been falling; I forecast that tomorrow prices will rise by 1.2%. This might be a totally random forecast, but the agents quickly figure out which forecasts in their suite of forecasts, which hypotheses, are working.
We created this artificial market. Remember, this is 1988, 30 years ago. All of this sounds very familiar now, but it was an early model then. Feeding in a random dividend series, we managed to actually solve mathematically for the Lucas solution; that's the top graph. Our model was producing the bottom graph for the same random series. You can see they're almost identical. At first, I was very disappointed that there was no difference between our complexity approach and the standard approach. And then we started to notice things. If you look at this little yellow bubble here, it's a little crash that appeared. As we looked at differences between the neoclassical solution and our solution, we managed to see several things happening. There was a phase change, very much like what Simon was talking about. We found that if the rate of people trying out new ideas, new strategies, new prediction methods, if the rate of trying out new things was low, the market would indeed hover around the rational expectations solution. So Lucas's solution was an attractor. But if we dialed up the adjustment rate, the rate of exploration, not very much, suddenly there was a phase change. Technical trading would emerge, that is, investors would start looking at the past pattern of prices and base their forecasts on those. Bubbles and little crashes would appear. Random periods of high and low volatility would appear. There would be times when the market wasn't doing very much, and other times when it was going crazy.
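The role of the exploration rate can be illustrated with a toy model in the same spirit, though far simpler than the actual artificial market: agents hold forecasting rules that adapt toward realized returns, and an exploration rate controls how often an agent abandons its rule for a brand-new one. Everything here, function names, parameters, functional forms, is invented for illustration, not taken from the 1988 model.

```python
import random
import statistics

def simulate(explore_rate, steps=500, n_agents=50, seed=1):
    """Toy market sketch (illustrative only): returns the volatility
    (standard deviation) of per-step returns for a given exploration rate."""
    rng = random.Random(seed)
    # Each agent's rule is simply its forecast of the next-period return.
    rules = [rng.gauss(0, 0.01) for _ in range(n_agents)]
    price = 100.0
    prices = [price]
    for _ in range(steps):
        dividend = rng.gauss(1.0, 0.1)  # random dividend series fed in
        avg_forecast = sum(rules) / n_agents
        # Price moves with the average forecast plus a small dividend shock.
        price = max(1.0, price * (1 + avg_forecast) + 0.1 * (dividend - 1.0))
        prices.append(price)
        realized = prices[-1] / prices[-2] - 1
        for i in range(n_agents):
            # Adapt the rule toward the realized return...
            rules[i] += 0.1 * (realized - rules[i])
            # ...and occasionally explore: try a brand-new forecasting rule.
            if rng.random() < explore_rate:
                rules[i] = rng.gauss(realized, 0.05)
    returns = [prices[t + 1] / prices[t] - 1 for t in range(len(prices) - 1)]
    return statistics.stdev(returns)
```

In this toy, dialing up the exploration rate typically raises the volatility of returns, echoing the effect described above: with little exploration the market hovers near its attractor, while frequent exploration keeps injecting new forecasts that the price then has to absorb.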
My explanation for that last phenomenon (you can explain all these phenomena, by the way) is that perhaps one of the investors, or a few, discover some highly effective new forecasting method. They will then load up and invest in the market more than they had previously. That would change the market very suddenly. That might outdate other investors' prediction methods. There might be a ripple or avalanche of change in the investment methods being used right across the economy. So sudden success or gross failure would cause a lot of readjustments that could ripple across our little investor economy.
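The avalanche mechanism can also be sketched: one investor's discovery moves the price, each remaining agent tolerates only so much forecast error before switching rules, and every switch nudges the price further, possibly pushing other agents past their tolerance. Again, this is an invented illustration of the cascade logic, not a model from the talk.

```python
import random

def avalanche_size(n_agents=100, shock=0.3, seed=2):
    """Toy cascade sketch (illustrative only): count how many agents
    switch rules after an initial price shock."""
    rng = random.Random(seed)
    rules = [rng.gauss(0, 0.1) for _ in range(n_agents)]       # forecast of price move
    tolerance = [rng.uniform(0.1, 0.4) for _ in range(n_agents)]
    price_move = shock       # one investor's discovery moves the market
    switched = set()
    changed = True
    while changed:
        changed = False
        for i in range(n_agents):
            if i in switched:
                continue
            if abs(price_move - rules[i]) > tolerance[i]:
                # The rule is outdated: the agent adopts a new forecast...
                rules[i] = price_move + rng.gauss(0, 0.05)
                switched.add(i)
                # ...and its reallocation nudges the price further,
                # possibly pushing other agents past their tolerance.
                price_move += rng.gauss(0, 0.02)
                changed = True
    return len(switched)
```

A small shock leaves most agents' methods intact, while a large one can sweep through the whole population, the readjustment ripple described above.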
I want to make a remark on this. A lot of people looked at us and said, these are departures from rationality. No, they're not. Let me remind you that, rigorously speaking, the agents are all in a state of fundamental uncertainty. They don't know what other agents are doing in this particular model. Each agent is discovering behavior that works temporarily, in the context of other agents discovering behavior that works temporarily. The bottom line here is that there are what we would call, in complexity circles, emergent phenomena: bubbles, crashes, ARCH behavior, that is, periods of high price volatility randomly giving way to low price volatility.
Let me summarize at this stage. I believe complexity economics is a form of non-equilibrium economics. Nobody is quite clear whether non-equilibrium economics goes beyond complexity economics, but think of the two as roughly parallel. In general, in complexity economics, we don't assume that problems are well defined. Agents explore and hypothesize, bringing in new methods, new decision rules, et cetera. That creates an ever-changing ecology in which beliefs, strategies, or behaviors are trying to survive given other agents' beliefs, strategies, or behaviors. Outcomes are not necessarily an equilibrium; our little stock market never, ever settled down. Grandmaster play at chess, John Holland used to assure me, has never settled down; people are always discovering new strategies. So in general this is what we would call perpetual novelty, though sometimes an equilibrium does emerge, and there is an attractor, although there might be random behavior around it. And computation is useful: things have become complicated enough in these models that you need computation to track what's happening. It's not fundamental to the method; computation, be it agent-based modeling or some other form, is not always necessary, but it tracks the behavior. Often, as I was pointing out, novel phenomena and novel patterns that you haven't seen before emerge.

Perennially, there's this question: does this negate standard economics? Is it simply an addition to standard economics? I would say neither. It widens standard economics by relaxing some of its assumptions. Sometime around the 1850s and 1860s, mathematicians started to experiment with geometries that didn't fulfill all the Euclidean conditions. Those are called non-Euclidean geometries. That gave a new branch of interest to mathematics and different types of non-Euclidean geometry.
So basically, what we're trying to do here is relax some of the standard assumptions. Agents may differ, and they don't have common knowledge. Problems may not be well defined and may be fundamentally indeterminate. Equilibrium is not assumed, and if it is present it has to emerge. I see this as a widening of standard economics, certainly not in competition with standard neoclassical economics; rather, it can widen it and lead into new areas. Thank you.
For more information, please visit Luohan Academy's YouTube channel.