Episode 3 Trailer: The Neuroscience of Stock Markets
Introduction (00:03): Great science and engineering often begin with a singular hypothesis. But how does a lone spark of innovation become popular science? From Caltech, this is The Lonely Idea.
Rich Wolf, host (00:16): Welcome to The Lonely Idea. I'm your host, Rich Wolf. Today, I have the pleasure of speaking with Colin Camerer. Colin is a leading neuroeconomist who is mapping the structure and functions of the human brain to study how people make economic decisions. Thank you for joining me today, Colin.
Colin Camerer, guest (00:33): Very glad to be here.
Wolf (00:34): So, Colin, behavioral economics is something that is throughout the news today. You can pick up a newspaper and learn about it, yet it's for many of us kind of an opaque field in some ways. If you had to define behavioral economics, what the field means to you and the history of the field, what are some of the things you'd like people to think about or know about the field?
Camerer (00:56): The first thing is that behavioral economics was a contrast or an add-on [i.e., extension] to plain economics, something that's called neoclassical. And a lot of the previous economics, "pre-behavioral" I sometimes call it, was really built to use simple math and kind of a caricature of human nature.
In the standard economics that's still taught as the foundation, including here at Caltech, people know exactly what they want, they can plan for the future, they don't have any problems of willpower, there's no emotions, and they're quite self-centered. So, it's like if Martha Stewart and Mr. Spock [from "Star Trek"] had a baby. That's the model of human nature. And behavioral economics came in and said, "What do we know—evidence and also methods for understanding people better—about natural limits on how much people can figure out, on willpower, and on how selfish they are?"
The first wave really took a lot of ideas from Danny Kahneman and Amos Tversky and other social psychologists, cognitive psychologists, who were interested in computational limits. A lot of it was influenced by a computer metaphor: The brain is like a computer with limited memory and limited perception. More recently, behavioral economics includes quite a few other things, like the influence of cultural norms on how much people share and whether they obey the law and evade taxes and so forth. That's the aerial tour of behavioral economics.
Wolf (02:27): So, we're going to come back to how behavior affects decision-making, and obviously that's at the core of all of this. But before we do that, I want our listeners to get to know a little bit about you and how you got into the field. You're obviously one of the pioneers in the field. You're a young student in college, and you don't wake up one day and decide, "I'm going to study behavioral economics." How did you end up getting to where you are today?
Camerer (02:47): At that time, in the late 1970s, finance was very obsessed with the efficient markets hypothesis and the idea that a lot of smart people try to make money in the markets, and if there's some information you could use to easily beat the market that's widely available, a lot of people would get rich being in the market. So, those opportunities can't be too persistent. They have to disappear quickly. And after two years of learning the basic math and the evidence and so on, I just thought, "There's got to be something more." So, I went to talk to Gene Fama, who was later a Nobel Laureate, and said, "I want to write a thesis on market psychology." And he said: "Oh, yeah, market psychology. What's that?" And I said, "Well, I thought you could tell me." And he said: "We don't really understand that. It's not in our model." It was just too vague, and it was kind of too early.
So, I tilted to the psychology part and began to study judgment and decision-making—how people generally make decisions. And the math, which is at the core of pre-behavioral economics, is still there, except we put more things in. So, if you think people procrastinate, maybe it's because they don't just discount the future in a certain way—which is an old, very simple idea—but they have a special value for things they can get right away, and everything beyond that is kind of devalued a little bit. That's called present bias. And so, the history of behavioral economics has been adding in ingredients, in a parsimonious way, where you use a combination of lab experiments and field data to say, "This is really the best way to expand the model."
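[Editor's note: The present-bias idea Camerer describes is often written as the beta-delta (quasi-hyperbolic) discounting model. A minimal sketch, with illustrative parameter values that are not from any study discussed in this episode:]

```python
# A minimal sketch of present bias via beta-delta (quasi-hyperbolic)
# discounting. The parameter values are illustrative assumptions.

def discounted_value(reward, delay, beta=0.7, delta=0.95):
    """Subjective value of `reward` received after `delay` periods.

    Standard exponential discounting is the special case beta == 1;
    beta < 1 adds an extra penalty to ALL delayed rewards, which is
    the "special value for things you can get right away."
    """
    if delay == 0:
        return reward  # immediate rewards keep their full value
    return beta * (delta ** delay) * reward

# A present-biased agent prefers $100 now over $110 tomorrow...
assert discounted_value(100, 0) > discounted_value(110, 1)
# ...yet prefers $110 in 31 days over $100 in 30 days: the preference
# reversal that looks like procrastination.
assert discounted_value(110, 31) > discounted_value(100, 30)
```

The reversal arises because the extra `beta` penalty applies equally to both options once they are delayed, so only the ordinary `delta` trade-off remains.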
And so, I had this wake-up experience basically halfway through graduate school, when I realized I wasn't going to be a finance professor and I had to do something else. And behavioral science happened to be [flourishing], even at that time at [the University of] Chicago, which is very hard-core—people are rational, markets are good, government is bad. There was a lively behavioral science presence. And people at the University of Chicago like to argue. So, it was actually a good crucible [for learning]. I had to be constantly defending, "How are you going to do behavioral finance [model irrationality]?" and so forth and so on.
Wolf (04:47): We're constantly trying to take very complex systems and put mathematical equations on them to make them less complex. You just said, actually we want to do the opposite. We want to understand the most complex aspect of all, which is the piece we can't understand. When a market fails in a heavy tail, there's something else that's causing that tail. Where did that lead you in your research?
Camerer (05:07): We really value parsimony and empirical discipline, too. It's sort of like decorating your house. You could have a very minimalist house, or you could have a slightly less minimalist house, or you could have a very baroque house that's too cluttered. We don't want a baroque house that's cluttered with a list of a hundred different biases and effects. So we're constantly trying to dimension-reduce. What is the one thing that's been left out that we can agree is psychologically realistic? If we add one parameter, it's going to help us understand something, and we can explain the January effect or size premiums or distressed firms or something like that in the case of finance, or lots of other things for consumer behavior, like why people make mistakes in picking mortgages.
Wolf (05:51): You mentioned the January effect. I think that's a great example. Would you mind giving that example and how that plays into behavioral economics?
Camerer (05:57): Sure. The January effect refers to the idea that people often fail to expect regression toward the mean. That means when something unusually high or low has happened, or good or bad, statistically, if you have a typical statistical process, the next thing is going to be less extreme. It's going to regress toward the mean. So, if you go to a football stadium and you find the tallest dad there, their child is probably taller than average, but not as tall as dad. The "Sports Illustrated" curse is another one. If "Sports Illustrated" puts teams that have just had a big success on the cover, they tend to do worse [in the months] after that. That's because you can only be great two months in a row, and that's very, very rare. And so the January effect was essentially that applied to companies. It turns out that, in the tails of corporate performance, there is mean reversion.
Wolf (06:45): What were some of the early experiments that you proposed as you got started in the field?
Camerer (06:50): One of the early things we did was what we call mirages in asset markets. Suppose there may be inside information, but no one's really sure if there is. This was an experiment that followed from work that Charlie Plott had done, which was very pioneering, on how does information—if there's a subset of people who know something about an asset price—how quickly does it get into the price, and does everyone else figure it out? Which is actually at the absolute core of the efficient markets hypothesis and a lot of [other] things in finance.
The difference was that Charlie and we did this in a simple experiment where we had kind of a God's-eye view. We knew exactly who knew what, and we could see when they came into the market. It's like what the SEC would love to be able to do with actual insider trading. But we did it because we created our own market. And what we found was [that] sometimes there [were] no informed people, but they didn't know that nobody was informed. So, the price would move, and people would think, "Wow, he must know something," and then they would buy. So, Rich is [thinking], "Colin must know something; I'm going to buy." And then I was [thinking], "Wow, Rich must know something." And I didn't realize that Rich thought I knew, and so you would get these essentially transitory bubbles.
They were very short-lived, so we called them mirages. It's sort of like a visual illusion—you're in a desert and it's hot and you see a shimmering and you think there's water. The nice thing is that, if we tried to spot these in naturally occurring markets, it would be very hard to even know what to look for—but in the lab, you could control these things and kind of trap them and then see if you can make them go away.
Wolf (08:26): One of the words that we often use in the investment business is confirmation bias. And there's two ways that I think about confirmation bias. One is I'm hoping for a certain outcome, so suddenly I see all the hallmarks in the road signs that lead me to that outcome. That's the simple form of confirmation bias. But you're also talking about another form of confirmation bias, which is just the mere fact that I'm doing the experiment that I'm doing may actually cause a confirmation bias itself. Where did that lead you in terms of research? Because suddenly, now, how do you set up experiments? Doing the experiment itself may create a confirmation bias. How do you set up experiments to try to solve that?
Camerer (09:04): That's a tricky one. Like a politician, I'm going to answer the question I wish you'd asked.
Wolf (09:13): That's fine. That's probably better.
Camerer (09:15): One of the things that was a big help in behavioral economics was we almost always had two competing theories. One is Bayesian rational people, very standard mathematical theory. If people have figured everything out, then you should not see a mirage or a price bubble or confirmation bias. And the other was sort of a behavioral alternative hypothesis. And so, up to the constraints of statistical power and sample size and all that sort of wholesome stuff in science, we tried to design these so there was going to be a winner and a loser. And then you either see interesting behavioral effects, or maybe you don't, and you figure out it's because there are some forces, some assumptions going on that are not part of the psychology, or that economics is kind of trumping the psychology because people can make more money or something like that. That was useful in a lot of things we did.
Wolf (10:06): How would you pick out—going back to your confirmation bias of Rich knows something, and then he talks to Dave, and Dave knows something, and he talks to Fred, and Fred knows something—how do you design an experiment to figure out that it's that versus someone actually in that chain knows something?
Camerer (10:22): Well, usually the way we would do that—to figure out who knows what in the chain—is that we control what each person is told and what they're told about what other people know. There's actually a substantial body of research now that's called herd behavior, or cascades, that are exactly this kind of thing. You go by a new restaurant, and there's a line outside, and you think, "I heard it was kind of crummy, but those people are in line." Do you get in line, or do you trust your instinct or your private information, we would call it?
And quite a bit is known about that herding—how persistent it is. And, by the way, I think the best examples of herding and these kinds of confirmation cascades often come from the animal kingdom. You see it in things like flamingos. One flamingo hears a pebble drop and thinks there's a predator and flies away, and all the flamingos follow. You can see these kinds of herding cascades in nonhuman animals, too.
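[Editor's note: The restaurant-line story has a standard formalization in the information-cascade literature. A toy simulation, under assumptions of my own choosing (a binary state, private signals correct 60% of the time, and agents who follow the majority of earlier actions once it outweighs one private signal):]

```python
import random

# Toy information-cascade simulation in the spirit of the herding
# literature. All parameter choices here are illustrative assumptions.

def run_cascade(n_agents=50, signal_accuracy=0.6, true_state=1, seed=42):
    rng = random.Random(seed)
    actions = []
    for _ in range(n_agents):
        # Private signal: matches the true state with prob `signal_accuracy`.
        signal = true_state if rng.random() < signal_accuracy else 1 - true_state
        ups = sum(actions)
        downs = len(actions) - ups
        # Counting shortcut for the Bayesian calculation: a lead of two or
        # more public actions on one side outweighs one private signal.
        if ups - downs >= 2:
            actions.append(1)        # herd up, ignoring own signal
        elif downs - ups >= 2:
            actions.append(0)        # herd down, ignoring own signal
        else:
            actions.append(signal)   # trust private information
    return actions

actions = run_cascade()
# By construction, once one side gains a lead of two, every later agent
# copies it regardless of their private signal -- the "get in line" effect.
```

Note that the cascade can lock in on the wrong state: if the first two signals happen to be wrong, everyone afterwards imitates the mistake, which is the "mistaken cascade" discussed next.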
Wolf (11:16): And are there predictions that you can make about the duration that these things sit there? Let's use a stock. It's something that everybody can kind of wrap their head around. You have a company, people want to invest in the company, but it's down a lot. It's not doing well, and it's been down a lot for a while, and suddenly you have this confirmation bias where everyone thinks, "Well, the reason it's down a lot is because it's bad, and everybody seems to know something is bad." What recatalyzes things to change? Is there another study that says how does the mirage effect end? How does that work?
Camerer (11:47): I think the key, if you have what some have called a reverse herd or a mistaken cascade or mirage, has to do with the publicness of information.
Wolf (11:59): Let's use 2000 as an example. Not all of us had cell phones in 2000. Today, there [are] more cell phones on the planet than there are people—and they're not just cell phones, they're smartphones. So, the ubiquity of information is not anything we need to repeat here. Everyone knows about it and lives with it every day. But that has to play into what you're talking about. Have your models changed as a function of cellular telephony, smartphones, and data accessed on the smartphones? Have you seen a change in the way you run—in the results of these same experiments?
Camerer (12:28): Interestingly, a lot of people have been studying that in things like political discourse since polarization [inaudible]. It hasn't really taken hold in behavioral economics. I mean, we're dealing with basic fundamental questions from 10 or 20 years ago that are still—we're still collecting interesting data. But I think there's an intuition, which I think is important, that comes from a lot of the orthodox pre-behavioral finance: the basic idea was that if there's more information out there and it's easy for people to get, prices should be really accurate.
One of the things we do very well at Caltech is economic history. If you go back and look at, say, the price spread between stocks that were traded in London and in New York after the telegraph was invented and you have faster shipping routes, prices come into line. That's the way it should work—you know, low-cost information, market price discovery worked great, we really know what companies are worth. But I think a lot of the amplification of social media could easily go in the opposite direction. Rumors can travel much more quickly now. And confirmation bias may be stronger and stronger because people who are bullish read bullish websites, and people who are bearish read bearish websites.
Wolf (13:39): So, ironically, the mirage effect here could actually be amplified as a function of the ubiquity of information?
Camerer (13:42): Absolutely. Because it's not just the sheer amount of information that someone looking down sees. It's the fact that people have less time. Information has gone up, but human attention has not gone up.
Wolf (13:55): Well, it's the ability to perpetuate, sorry for using the phrase, fake news, whatever the definition of fake is in this case. I want to bring this back to you, Colin. So, now you've graduated from graduate school. You've really embarked on the career in behavioral economics. What was your next move? We're now into the 1980s. What was your next move?
Camerer (14:15): So, 1987 was the first time I visited Caltech. I came in the winter from Philadelphia, where I was a professor at Penn, at the Wharton business school. And winter in Pasadena is very nice.
Wolf (14:28): Nicer than Philadelphia.
Camerer (14:30): Yes. And one of the first reactions I had to Caltech was, "How do people get any work done?" Like, I would be sitting outside having coffee all day, which the European grad students do, and some of the others. And the answer is, "It's always like this, and there's a kind of work hard, play hard sort of intensity about it."
Anyway, so a few years passed, I went back to the University of Chicago business school, got a call from John Ledyard in social sciences, [who] said: "We're looking for an economics experimental behavioral kind of special person. Why don't you come? And we have a chair for you." So, I came to Caltech in 1994, and what I was really drawn to was basically the idea that I thought the frontier in behavioral economics was getting much more mathematical than I was interested in, and dealing with field data and empirical data sets when I was kind of more of an experimentalist. I wanted to really understand the more basic mechanism. And Caltech was a really good place to do the kind of science of human nature as well as—there were a lot of mathematicians around who were kind of explaining to me how things work and collaborating on different kinds of stuff.
Wolf (15:44): So, when you go from a place like Penn, Wharton, or the University of Chicago, [where] you've got lots and lots of folks that are doing similar stuff—and even not just behavioral economics, but are just doing finance. Caltech is a relatively small institution. You were the only person doing behavioral economics when you came here. How did that impact your research, both positively and negatively?
Camerer (16:08): Yeah, I think of Caltech as like living in a small town, but it's a small town where everyone's friendly and willing to lend you their farm instruments. And, given the size of the faculty, we have a tremendous wingspan in terms of the range of stuff people do as well as the willingness to participate. You can knock on literally any door. So, when I got interested in neuroscience around 2002, I went and knocked on John Allman's door, because, it's like someone said, "John Allman knows a lot about the brain." I mean, literally I realize it was like taking up golf, and you should, yeah, you should ask your neighbor …
Wolf (16:43): Jack Nicklaus.
Camerer (16:45): Jack Nicklaus. "Jack knows a little bit about golf. He'll help you with your putting." And John was very gracious, and he's an educator, so he, like, "Ooh, great, I'll talk to you all day long about comparative human brains." And so there was a lot of tutoring and coaching, kind of neighborly, in a neighborly way, which was very, very helpful to get into neuroeconomics. And then what happened with that particularly was [that] the Broad imaging center [Caltech Brain Imaging Center] was built. The social scientists are in charge of human neuroscience at Caltech to a large extent. And when the imaging center was built, there were three different scanners for different species. And it was like, well, I guess we should throw in a human scanner, even though we don't know who's actually going to use it.
There were a couple of people in biology, like John [Allman], Richard Andersen, and Shin Shimojo, who had been collaborators over the years. There's a very unusual pattern of links across divisions, which you just don't see at most institutions, at least those with a specific enterprise and money to bring people together. And so we were able to do some very early neuroeconomics with Steve Quartz, Peter Bossaerts, who has since left. John O'Doherty and Ralph Adolphs then came. And so we hired a few people to study, you know, what's going on in human brains when people are trading assets or buying stuff or tempted to eat junk food.
Wolf (18:05): What's so fascinating to me is that, as someone who's been an armchair observer of behavioral economics, there's sort of, in my mind, the last couple of decades have two sorts of schools. One are the people that go out and they make statistical observations. Steven Levitt comes to mind. All CEOs are over six feet tall. Ergo, your probability of being a CEO is higher if you're over six feet tall. And then they go and they look at, "Why is that? Why, why do tall people become CEOs?" But the other is this area that you helped pioneer, which is what's actually going on inside someone's brain when they make this decision.
So, as I think about your career, I think about that period from the time you finished graduate school until the time you came to Caltech as being an early pioneer in doing very, very fundamental experiments around behavioral economics. And then, suddenly you pivot a little bit, and now we're thinking, "What goes on inside the brain functionally when someone's making the decision?" Could you walk us through what was happening in the field at the time, and what were some of the key ideas that you came up with? Because, in my mind, this is truly a lonely idea in 1994.
Camerer (19:09): Yeah, I had a few lonely ideas. This was the loneliest. Although again, there were a lot of pieces in place, and the Broad imaging center. The fact they didn't have much business—it was like the lonely Maytag repairman [from the commercials]. A lot of [academic] imaging centers were really packed, and you could have two hours a week to do your scanning because there's 30 groups. And here, we went over and Mike Tyszka and others who were staffers were very helpful. It was like, "Oh, you want to use the human magnet? Great, great, let me blow off the dust!" And they were incredibly helpful, with fingertip knowledge about things that [helped us avoid] a lot of mistakes, and they really improved the quality of the science.
Again, part of the community [and] collegiality you see throughout Caltech. But I think an important way to think about this is that, in biology, there's a three-level framework due to a vision scientist called David Marr. He developed it with Tomaso Poggio, who was the thesis advisor of Christof Koch—an important figure here, too, who has since moved on to the Allen Institute. Marr's idea was that, look, you don't really understand something, and you can't engineer it better, unless you understand the algorithmic level of it. What are the equations, or what's the blueprint to build a house? Then there's the functionality above that: What is it for? What is the house for, what are wings for? What is the brain trying to do? What evolutionary problem did it solve? And then, underneath that, in the basement, is the mechanism. If you're going to build the house, what materials are you going to use? If you're going to build a brain to adapt to certain computational biophysical equations or to make certain computations about Bayesian updating or food value, how does the brain actually do that? The neurons firing, the neurocircuitry, and so on. This sort of three-part understanding is taken for granted in biology.
But to an economist—in economics, mostly we don't ask, "What is the evolutionary function of Bayesian updating?" In some ways, it was just my personality, and also that Caltech attracts and helps people like me succeed, because you get a lot of help from your neighbors on the things you're not an expert on. And I like doing new things, and, you know, there was a period of time where people [thought]: "Oh, this is such a shame. You were doing such good economics. Now you're wasting your time." A person at a famous West Coast university whom I will not name said: "Whenever you decide to quit this foolish neuroeconomics stuff and come back and do some interesting stuff, you know, let me know. You should come up and give a seminar." It was like you're banned from campus if you're going to talk about the brain. And, for us, that's just rocket fuel, right? That's great.
And we had some early successes. We were able to produce good-quality data. My goal was always to publish in the early days the neuroeconomics, not in economics journals—although that would be great, and we'd done a little bit—but in "Science" and "Nature" and "PNAS" and the flagship journals across [neuroscience]. I wanted people in biology here and elsewhere to know that we were bringing ideas from economics and challenging decision problems as part of neuroscience, and that we could pass the highest kind of test.
Wolf (22:22): Let's go back to the very first experiment that you did or [the] first couple of experiments where you said, "I'm really onto something." What was that? Bring it back to that day.
Camerer (22:30): The first one we did was about ambiguity aversion. It's a little jargony, but I'll try to help people understand. Ambiguity is a term of art for unknown unknowns. It's when you don't know the probability, like you would know about a roulette wheel or you'd have a histogram of the last 20 years of returns of Intel stock. You still don't know what's going to happen in the future, but you have some information to work off of. But the unknown unknowns are a new tech start-up or climate change, which [is] forecasting out very far. If you make a graph, the confidence interval flares out until I don't really know what's going to happen in 2075.
So, the suspicion [the idea that unknown unknowns are different] was first articulated by Daniel Ellsberg of Pentagon Papers fame. He was a RAND economist before he was an activist, and he said, "I think what's going on is when people aren't sure whether stocks are going to go up or down, they're not willing to bet that it's going to go up and they're not willing to bet it's going to go down"—which is very illogical from a probabilistic point of view. If you think it won't go up, you must think it will go down, because probabilities have to add to one. And Ellsberg produced some examples suggesting there was an extra ingredient, which was this fear of choosing when I'm really not sure what's going on. And other people like Warren Buffett and Charlie Munger I think have said, "You know, if you don't understand the business, don't invest in it." That's sort of the mantra of the ambiguity-averse person. And probably good advice. It's certainly worked for them.
Wolf (24:01): It's okay not to swing the bat.
Camerer (24:03): Yeah, exactly. The first study we did was we gave people choices where they could bet on a known known—like you have five red cards and five black, and if you bet on red, you're going to win—and an unknown—like there's 10 cards, and you don't know what the colors are. And what we found was very interesting—[an activation in] a region of the brain called the amygdala, which is involved in [a very fast] kind of vigilance and threat, like an early-warning system. It's sort of like a dumb guard dog. It goes off a lot, then alerts the cortex and the rest of the brain to say, "Is this something really bad or not?"
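[Editor's note: The Ellsberg pattern in the card choices can be illustrated with the maxmin expected utility rule, one standard model of ambiguity aversion. The probability ranges below are illustrative assumptions, not parameters from the study Camerer describes:]

```python
# Minimal sketch of Ellsberg-style ambiguity aversion via the maxmin
# expected utility rule: evaluate each bet at its worst-case probability.

def maxmin_value(payoff, prob_range):
    """Worst-case expected value of a bet paying `payoff`, where the
    win probability is known only to lie within `prob_range`."""
    return min(prob_range) * payoff

PAYOFF = 10

# Known deck: 5 red, 5 black, so P(red) is exactly 0.5.
risky_bet = maxmin_value(PAYOFF, [0.5])

# Ambiguous deck: 10 cards of unknown colors, so P(red) could be anywhere
# from 0.0 to 1.0 -- and likewise for black.
ambiguous_red = maxmin_value(PAYOFF, [0.0, 1.0])
ambiguous_black = maxmin_value(PAYOFF, [0.0, 1.0])

# The Ellsberg pattern: the agent prefers the known bet to betting on red
# AND to betting on black, even though P(red) + P(black) = 1.
assert risky_bet > ambiguous_red and risky_bet > ambiguous_black
```

Evaluating both ambiguous bets at their worst case is what lets the model refuse to bet either way without violating any laws of probability; it is one way to formalize the "fear of choosing" Ellsberg identified.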
Wolf (24:38): So, it's not the analytical piece. It's the fight-or-flight piece.
Camerer (24:40): Exactly.
Wolf (24:41): And it sends a signal to the analytical piece, the cortex that says, "What should we do?"
Camerer (24:46): Correct. What's going on here? And that was great, because that fit into this picture that people who are ambiguity-averse have a fear of action in the face of the unknown. That's not exactly what the amygdala does, but it's something for which the amygdala would be a good candidate region. And we also found activity in an area connected to the amygdala, in the sense of information passing, in prefrontal cortex.
And what really made this paper a success was [that] Ralph Adolphs worked on this, and he had just come from the University of Iowa, where they have a big registry of people with lesions in different parts of the brain. So, you can see if somebody has damage in prefrontal cortex, and our theory says that area is used to shy away from things that are ambiguous, then people who have damage there will not shy away from things that are ambiguous. They'll behave kind of abnormally.
Wolf (25:35): So, back to your original example of, "Is the thing going to go up, or is it going to go down?" They will make a choice. They won't sit in the middle.
Camerer (25:41): Correct. Exactly. So, [we found] people with this special kind of brain damage—and, ironically, they're actually acting in a way that many decision theorists might think of as more rational, because they don't have the normal human reaction, which is to be afraid of action when you don't know up or down. They don't have that natural fear reaction, so they act almost hyper-logically [i.e., hyper-rationally], even though they have brain damage, and it doesn't help in other areas of their life—they have trouble making decisions in general [but ambiguity does not inhibit them].
So, we sent it to "Science," which is a leading general science journal, and they said, "Well, if you had some lesion patients and you found that they behaved abnormally, that would corroborate what you found in the brain area." And Ralph Adolphs said, "Well, I have lesion patients." And I was terrified, because if it doesn't work, we have two papers that conflict with each other. Ralph was less terrified. And also we had a pretty good sample of lesion patients, because often these lesions are quite specialized, and you'd like to have not just one. You can learn a lot sometimes from one person, but you can't learn very much statistically.
Wolf (26:44): Sure. You need statistical significance from a certain number of patients.
Camerer (26:46): Exactly. A little bit of power. So, we found [around] 10, which was pretty good. And it turned out that, indeed, they are ambiguity neutral. They would bet up or down [made similar decisions for risk and ambiguity].
Wolf (26:54): All 10?
Camerer (26:54): Well, on average. They're abnormal decision makers, because what we call neuro-typical signals about being afraid of ambiguity aren't being processed in their brains in the same way.
Wolf (27:08): And is it because of the connection between the amygdala and the frontal cortex?
Camerer (27:11): Correct. Exactly.
Wolf (27:11): So, we know there are people that are born without a corpus callosum that connects the left and right sides of their brain, and they're very impulsive. So, literally, the fact that these folks had had this damage that caused that reactionary portion of the brain, the fight-or-flight portion of their brain, not to connect properly to the analytical portion of the brain completely changed the way that they made decisions.
Camerer (27:33): Exactly, exactly.
Wolf (27:34): So, were there other pivotal moments? Because neuroeconomics has taken so many twists and turns since then, and now we have a much more nuanced view. What were the other very key things that happened in your research or in research across the board in neuroeconomics that changed the field?
Camerer (27:50): I think there were two big things, [in] both of which Caltech had a big role to play. One, obviously, was that Peter Bossaerts and I were working on this most actively, and [Peter was working with] Steve Quartz, who's a philosopher. Actually—he's what we call a "wet philosopher," because he's interested in how the brain actually works. Then we hired Ralph Adolphs, John O'Doherty after that—I'll come back to him in a minute—and Antonio Rangel, and then, a couple of years ago, Dean Mobbs.
And this is our dream team in terms of just sheer talent, but also the [coverage of different] methods. Ralph is sort of Mr. Lesion—it's like "The Avengers," you know, Mr. Lesion. Antonio has a PhD in economics, so he's very technical. Dean studies how afraid you are when there's a tarantula near your foot—I mean literally. And I'm kind of in the middle of wrangling and working at finance and whatever crosses my path.
Wolf (28:46): Well, I hate to say it, you're the old man of that group. You're the senior man.
Camerer (28:49): I'm happy to be. Yes, I spend a lot more time reminiscing and old-man storytelling. And we'd like to hire more. We're a little bit faculty-constrained, as is always the case at Caltech. You're always kind of told, "You can't hire lots more people like you, but you can attract postdocs who will be the new you, or you can find someone in bio [the Division of Biology and Biological Engineering] who's actually interested in things you're interested in." So, we grow kind of tall, you know, but not wide, in terms of the faculty.
Wolf (29:22): It sounds like it's not that hard to induce an engineer or another biologist or a chemist or someone to come and be part of this, because they want to.
Camerer (29:32): Yeah. And also, economics is a very small-scale discipline, and finance as well. Economics and finance people—often one or two people write a paper together, one paper per year. It's really hard-core. And in the lab sciences, it's like if you bring some money, particularly, and an idea that's unique, and it's, like: "Are other people doing this? No? If we collaborate, will that be something cool and exciting the NIH is excited about? Does it have some translational things?" Usually you can at least start a conversation about potential collaborations across really far boundaries here.
Wolf (30:08): So, let's go back. You entice these people to now come and be a part of this. You've got this great core group. You said there were two fundamental things that happened. What were the two?
Camerer (30:15): Correct. So, the second thing was John O'Doherty—I mean, all of the folks we have assembled I think are absolutely great, and we are constantly fending off job offers to people to go elsewhere—the price of success. John was very pivotal in what's called computational and event-related fMRI [functional magnetic resonance imaging], which basically means, so, we did a study on stock prices, for example, to see if there's price bubbles. The computation might be, "Is the stock going to go up a lot?" And we infer what people are thinking from how they trade. But then we actually know how they're thinking, because not only do we infer it, we say, "Well, their expected return seems to be this." Or sometimes we just ask them directly, "How optimistic are you about the stock?" So we infer a series: it's going to go up, it's going to go down, it's going to go up. And then we look in the brain for areas that have a blood flow into regions that have that same pattern. And we find, for example, there's activity in the nucleus accumbens, which is an old part of the brain that's involved in reward prediction error and anticipated reward. And so, like Alan Greenspan, following Bob Shiller, talked about irrational exuberance—this is where irrational exuberance is expressed.
Wolf (31:24): It's where it sits in the brain.
Camerer (31:26): Exactly. But we wouldn't have found that unless we knew, almost with a [numerical] spotlight, right where to look, because we had these numbers that we thought people were computing, and we were trying to find, "Where's that series of numbers?" So, we were going way beyond the first wave of fMRI, where it's like: "I do X, and I do Y."—"Where is the X portion of the brain? I look at X versus Y." We were doing something much more nuanced [computational fMRI]. The brain is like this computer, or like an office building, but not every office in the building is computing the same thing. So, we're going to try to find that little section that's computing expected return.
And then we found another section [i.e., region] that's essentially computing "be afraid," called insula cortex, which is an area that's involved in discomfort, pain, empathy, financial uncertainty. And in the experiments on price bubbles, people who had earlier activity in insula cortex tended to sell before the bubble peaked and made a lot more money. So, the simple two-sentence story about what the brain tells us about artificial price bubbles in lab experiments was [that] the nucleus accumbens [activity] is the fuel for the bubble, the insula kind of puts the brakes on, and if you listen to these insula warning signals and sell, you make more money.
And also, like the Heisenberg uncertainty principle, it's the selling that also precipitates the crash, because once the prices start to go down, they go down sharply. We could never have done that without a grant from the NSF and a collaboration with Virginia Tech, where they have three scanners. We just had one human scanner. And then the computational model tells us how to exactly find what we're looking for in there.
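The computational-fMRI approach Camerer describes can be sketched roughly as follows: derive a model-predicted time series (here, an illustrative "expected return" per trading period) and correlate it against each voxel's BOLD time course to locate regions whose activity tracks that series. The function name, the threshold, and the toy data below are all invented for illustration; they are not from the actual study.

```python
import numpy as np

def find_tracking_voxels(bold, model_series, threshold=0.5):
    """Correlate a model-derived series (e.g., expected return per period)
    with each voxel's BOLD time course; return indices of voxels whose
    Pearson correlation exceeds the threshold.

    bold: array of shape (n_voxels, n_timepoints)
    model_series: array of shape (n_timepoints,)
    """
    m = (model_series - model_series.mean()) / model_series.std()
    b = (bold - bold.mean(axis=1, keepdims=True)) / bold.std(axis=1, keepdims=True)
    corr = b @ m / len(m)  # Pearson r per voxel
    return np.flatnonzero(corr > threshold), corr

# Toy example: voxel 0 tracks the model series (plus noise); voxel 1 is pure noise.
rng = np.random.default_rng(0)
series = rng.normal(size=50)
bold = np.vstack([
    series + 0.1 * rng.normal(size=50),  # "expected return" voxel
    rng.normal(size=50),                 # unrelated voxel
])
hits, corr = find_tracking_voxels(bold, series)
```

The point of the sketch is the "numerical spotlight" idea: the model supplies a specific number series, and the search is for brain regions whose signal follows that series, rather than a simple X-versus-Y contrast.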
Wolf (33:04): And how many subjects did you go through to come to this?
Camerer (33:07): That was a really hard study, because first we wanted to use no confederates and no deception. We wanted everyone there to have skin in the game, and they're actually trading with each other. There's no fake prices. So, there were 16 groups of 20 people each. Each group basically trades at whatever price they want. As an experimenter, it's very hair-raising. It's like they could form a bubble [or] they could not form a bubble.
Wolf (33:32): There were literally no boundary conditions. You have these different groups, and they're trading assets or whatever it is. You had them trade bananas.
Camerer (33:37): Correct. Artificial assets that then cash out for money.
Wolf (33:40): Yes. And, ultimately, what was the most important conclusion that then drove the field?
Camerer (33:45): I think the main thing we've learned about decision neuroscience—the price bubbles [are] a little bit more esoteric, and it's right on the frontier—involved nucleus accumbens and the basal ganglia. Particularly, we take very seriously the idea that some of the decisions that people make are called model free, which means, "I did this once, and it worked great; I'm going to do it again." And others are model based, which means, "I have an idea like a GPS map or like a decision tree or a spreadsheet that tells me if I do A what's going to happen, if I do B what's going to happen, if I do C what's going to happen." In a way, a lot of economic theory says people are probably using this model-based spreadsheet. But humans are animals. We're basically an ape with a thinking cap, a neocortex, and we use a combination of model-free and model-based things.
Model free is related to something like habit, which is kind of like neural autopilot. You know, the brain does a lot of work for a three-pound machine that works on a small amount of electricity. And so, whenever we can offload things to do things habitually, that's going to be great, that's something we're going to do.
And that distinction between model free and model based hasn't entered into economics and other social sciences too much, although it obviously has in psychology and other fields. And I think that's something that a lot of people here have developed. Me a little bit, John mostly, Antonio, and others. And I think that's what started as a lonely idea and is going to be a big idea in terms of human nature and behavior change.
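The model-free vs. model-based distinction Camerer draws can be made concrete with a toy choice problem: a model-free agent caches values from actions it has actually tried ("it worked, do it again"), while a model-based agent plans over an explicit map of consequences. Everything below (the actions, payoffs, and learning rate) is invented purely for illustration.

```python
# Toy contrast: model-free (cached values, habit) vs. model-based (planning over a map).
REWARDS = {"A": 1.0, "B": 5.0}  # true payoff of each action (the "world map")

# Model-free: update a cached value only for the action actually tried.
q = {"A": 0.0, "B": 0.0}
alpha = 0.5  # learning rate (illustrative)
for _ in range(3):  # the agent happens to try A repeatedly
    q["A"] += alpha * (REWARDS["A"] - q["A"])
model_free_choice = max(q, key=q.get)  # habit: picks A, never having sampled B

# Model-based: consult the map of consequences before acting.
model_based_choice = max(REWARDS, key=REWARDS.get)  # planning: picks B
```

The habit-like agent sticks with the action it has reinforced, even though its map (if it consulted one) would point to the better option; that divergence is the behavioral signature of the two systems.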
Wolf (35:16): And I can't help but think that there must be other imaging modalities, whether it be PET or maybe EEG plus something where you're monitoring someone 24 hours a day, seven days a week. There must be other things that haven't been introduced from an engineering standpoint that can help advance the field. If you had a wish list, what would be on it? Maybe they're biochemical markers. If you could wave a magic wand and tomorrow have a new modality, say, a biochemical marker that measures activity in the amygdala, or a new imaging modality that allows you to look at someone 24/7, what is the next frontier for that?
Camerer (35:59): fMRI will probably never really be portable, although there may be some version for neuroscience. I don't like to say never, but maybe some baby version. But EEG is something which is really neat because it's fast. So, if you want to study thinking fast and slow, that's how to do it. The great advantage of fMRI is [that] it shows the whole brain. If you really care about the whole circuit or you think something somebody is doing, like conforming to a social norm or deciding whether to have a baby, is activating lots of really different parts of the brain that are linked, like emotion and planning and mentalizing about humans—you know, "Is my spouse going to be a good parent?" Big decisions are going to involve the whole brain, so you'd like to see the whole brain. And EEG really just covers the cortical surface. But that's what's human-specific.
So, if you want to study game theory, you need to get at mentalizing circuitry, which is the medial prefrontal cortex on the top, the TPJ [temporoparietal junction], which is kind of above your ears—and that's much more likely to be very portable. I mean, there are actually versions, as I understand it. They're not quite the same as the one we use in our lab, which has 128 different nodes to pick up these very weak pulses, almost like earthquake tremors, in a typical brain. But you may get a scaled-down baby version, and then people could be sitting and watching a movie, they could be on a trading floor, they could be cheering at a game.
Wolf (37:33): Just the freedom, the physical freedom that the subject then has obviously opens up a whole new type of experimentation. Are there bioethical considerations that are now starting to creep in, because you've laid out a framework for how people make decisions? Is there bioethics attached to this, or should there be?
Camerer (37:50): Yeah, I think so. I think that there's a natural constraint in the short run. For example, with some exceptions, like political situations and others, it would be hard to force somebody into an fMRI against their will because you're going to then look at their brain and decide to sell them a bunch of crappy products. And so, whether, as you're going into the Rose Bowl, there's some kind of sensor reading your thoughts that might do something exploitative that you hadn't agreed to, we're not quite there yet. But, at the same time, I think the bioethicists, who have got to be a mixture of ethicists and biologists talking, had better get going because ...
Wolf (38:35): This train is moving ...
Camerer (38:36): ... the potential to demonize could be coming along very, very quickly. And then there are things that are much more easily already available technologically, like face recognition and crowd matching for surveillance, and things like that. I will say, in another area which is maybe even more dramatic, like gene editing, my sense is that bioethics has done very well at fairly quickly coming to consensus. David Baltimore was involved in one effort of this type about CRISPR and gene editing.
Wolf (39:08): And he was previously with stem cells as well.
Camerer (39:10): Correct. Exactly. And so, we have some successful use cases, which is really helpful. I mean, not that you want to replicate exactly, and do something like intrusive fMRI or whatever dystopian thing you kind of imagine, but the combination of scientists and ethicists and social institutions deciding, "Okay, we're in a good place here, and here's what we're going to try to actually rein in and not allow"—I think that's very, very encouraging.
Wolf (39:36): Wow. What a great way to end. Colin, thank you so much for joining us on The Lonely Idea.
Camerer (39:39): My pleasure.
Conclusion (39:42): The Lonely Idea is produced at Caltech. Learn more about Caltech innovators and their research on our website at www.Caltech.edu or connect with Caltech on Facebook, Twitter, Instagram, and YouTube.