Camillo Padoa-Schioppa is a Professor of Neuroscience at the Washington University School of Medicine in St. Louis. In this conversation, we talk about Camillo's work on economic values in the brain, whether it is causally involved in choice, Camillo's career, working with different species, and much more.
BJKS Podcast is a podcast about neuroscience, psychology, and anything vaguely related, hosted by Benjamin James Kuper-Smith.
Support the show: https://geni.us/bjks-patreon
0:00:00: The historic background of economic value
0:12:31: How Camillo became a neuroeconomist
0:38:50: What does neuroscience add to our understanding of behaviour?
0:47:52: Value in the brain / discussing Camillo's 2006 Nature paper
1:05:47: Does the brain even need to compute value?
1:11:59: Causality in neuroscience / discussing Camillo's 2020 Nature paper
1:27:19: Trivial decisions
1:31:26: Is it wise to do neuroscience in humans and in animals, or should I focus on one approach?
1:40:15: A book or paper more people should read
1:43:19: Something Camillo wishes he'd learnt sooner
1:45:53: Advice for PhD students/postdocs
Ballesta ... & Padoa-Schioppa (2020). Values encoded in orbitofrontal cortex are causally related to economic choices. Nature.
Bentham (1780). An introduction to the principles of morals and legislation.
Gigerenzer & Gaissmaier (2011). Heuristic decision making. Annual review of psychology.
Hayden & Niv (2021). The case against economic values in the orbitofrontal cortex (or anywhere else in the brain). Behavioral Neuroscience.
Padoa-Schioppa (2009). Range-adapting representation of economic value in the orbitofrontal cortex. Journal of Neuroscience.
Padoa-Schioppa (2011). Neurobiology of economic choice: a good-based model. Annual review of neuroscience.
Padoa-Schioppa & Assad (2006). Neurons in the orbitofrontal cortex encode economic value. Nature.
Padoa-Schioppa & Conen (2017). Orbitofrontal cortex: a neural circuit for economic decisions. Neuron.
Padoa-Schioppa ... & Visalberghi (2006). Multi-stage mental process for economic choice in capuchins. Cognition.
Padoa-Schioppa, Li & Bizzi (2002). Neuronal correlates of kinematics-to-dynamics transformation in the supplementary motor area. Neuron.
Salzman ... & Newsome (1990). Cortical microstimulation influences perceptual judgements of motion direction. Nature.
Salzman ... & Newsome (1992). Microstimulation in visual area MT: effects on direction discrimination performance. Journal of Neuroscience.
Smith (1759). The theory of moral sentiments.
Visalberghi & Trinca (1989). Tool use in capuchin monkeys: Distinguishing between performing and understanding. Primates.
Episode w/ Smaldino: https://geni.us/bjks-smaldino_2
[This is an automated transcript that contains many errors]
Benjamin James Kuper-Smith: [00:00:00] Yeah, you know, it's weird. This is, I think, actually my first proper neuroeconomics episode. I've wanted to do them all along, but somehow I never got around to them. I want to talk about neuroeconomics, your work in particular, and your career, but I thought we could maybe start with the historic background of economic choice, up until you started your career, roughly.
So maybe let's start with something I think you mentioned in a talk I saw: the link between the Iliad, the Odyssey, and economic choice. What's the relationship there?
Camillo Padoa-Schioppa: Well, in that talk, I was pointing out that humanity, and in particular Western thinking, has been reflecting on economic choices for a long time, and the way we've been thinking about them has evolved over time. One step of that evolution is already evident in comparing the Iliad and the Odyssey, [00:01:00] which are nominally both books by Homer, but in reality were almost certainly written by different authors. And one of the reasons Greek scholars believe that's the case is that the conception of decisions, of how decisions happen or how the choice process takes place, is very different in the two books. In the Iliad, heroes' actions are essentially guided by gods.
So Achilles decides to enter the battlefield because he's guided by some of the gods, and so on. The whole book is like this. Whereas in the Odyssey, it's very clear that it is Ulysses who makes decisions for himself. For example, in the episode of the Sirens, he's not only making the choice of tying himself to the mast, but he can also foresee himself being impaired in his decision process. He [00:02:00] knows that if he hears the Sirens, he'll be drawn towards throwing himself into the sea. So he's very aware of his own decision process. And this different conception highlights a more evolved way of thinking about our own mental processes. And then from there, in that particular talk, and I can revisit it now, I pointed out that the way we, especially as Western civilization, have been thinking about choices has continued to evolve over time.
And there has been a lot of thinking in Western philosophy about free will: are humans really free to pursue virtue and avoid sin, or are our actions not really free and always determined by some factor? That factor can be a physical one, or sometimes, in some versions of the non-free-will position, our actions are guided by societal constraints or [00:03:00] societal factors. So anyway, it's a longer story about how people think about choice, and I was just revisiting that.
Benjamin James Kuper-Smith: Yes. It's interesting that, as you just mentioned, these two ways of how decisions come about, one by the gods and one by people themselves, are pretty much the two extremes of the free will debate, right? Some people saying there's no free will, it's all predetermined, et cetera, and the others saying, no, of course there's free will. It's kind of interesting that the two extreme positions in the free will debate have existed, you know, since we've had people writing stuff, basically.
Yeah, so, I mean, one of the main topics in economic choice and neuroeconomics is the debate about value, economic value, and that kind of stuff.
When did that whole thing start?
Camillo Padoa-Schioppa: Yeah. So as I was saying, the way we've been thinking as a society or [00:04:00] as a civilization about choices has evolved, and a turning point was between the 1600s and 1700s, when different thinkers came up with the concept of value. Arguably, one of the first was Blaise Pascal. He famously argued that as we decide, for example, whether or not to believe in God, we have to make calculations about what the advantages of believing in God are, whether or not God exists, and what the disadvantages are, essentially assigning value to believing in God versus not believing in God. And he concluded from those calculations that it was worthwhile believing in God, whether or not that was actually true, meaning whether or not the existence of God was actually true. Economics as a discipline emerged in the 18th century, essentially in England and Scotland, and among the early economists the two main figures were Adam Smith [00:05:00] and Jeremy Bentham.
Adam Smith is famous for having written The Wealth of Nations, but he also wrote a second book that was essentially a treatise of psychology, The Theory of Moral Sentiments. In that book he elaborates on the fact that choices are driven by pleasure and pain, by values that people assign to the different options, and Jeremy Bentham had very similar thoughts.
So the idea was that the choices we make are based on the expectation we have about the joy we will derive from, let's say, choosing one path versus another, or perhaps the pain we might expect to experience if we choose one path versus another. So that was the concept: the idea that choices are based on values. And that was a very powerful idea, and it has remained in the economic [00:06:00] literature and way of thinking for a long time. But at the same time, economists gradually came to realize that there is a bit of a problem with that idea, or some degree of circularity in it.
And the reason there is a circularity is this: the concept is that we assign value to the options, and then our decision is taken so as to maximize the expected value. The problem is that there is no way for anyone to measure values other than by observing choices themselves. For example, if I offered you a hundred dollars versus a glass of water, I suspect you'd choose the hundred dollars, and most people would. But if one day I come to you and offer you a hundred dollars versus a glass of water and you choose the water, I will never be able to tell whether you chose the water because there's something wrong in the way you maximize utility in your decision process, or perhaps because you're already very wealthy and in fact you're very [00:07:00] thirsty, so it turns out that the glass of water is more valuable to you at that time than a hundred dollars. This circularity, the fact that we cannot measure values other than by looking at choices themselves, made economists realize that the concept of value was not so easy to build on.
And so gradually they came to build theories that don't rely on that concept anymore. The apex of that process was in the 1930s, when standard economics was essentially formalized. And the idea that was formalized is that the entire theory can be built on the concept of revealed preferences. As opposed to values, revealed preferences essentially means that the theory builds on observing choices for what they are, without investigating the motives of the choices. That was also the moment where, effectively, economics and psychology became independent of each [00:08:00] other. Or, if you want, economics no longer needed to have much of a psychological theory underlying it. It could just be self-standing, building a theory such that once the choices are known, everything can follow from them. And so, you know, this is standard economics, or neoclassical economics.
And when the field of neuroeconomics came about in the early 2000s, about 20 years ago, I would say that for the first generation of studies, the question was whether values are a real thing. Sorry, taking a step back: standard economics relies on revealed preferences, and it is often said that it is as if choices were dictated by values, but there is no strong commitment to the fact that values are truly computed during the choice process. And as we, as a field of neuroeconomics, [00:09:00] started investigating the brain mechanisms underlying choices, the first question was: are these values a real thing, are values actually computed during the choice process, or is that not the case? That was really the central question for the first generation of studies in this field. And if there is one thing that I believe is very clear and undisputed in the field, and that I would point to as a success story, it is to show that indeed values are computed and they are explicitly represented at the neural level, in fact in numerous brain areas. Actually, at a certain point it became clear that seeing a value signal is not even enough to really say that that value signal is part of the decision process.
That's a longer story, but there is no doubt that values are indeed computed and explicitly represented at the neural level, so [00:10:00] I would point to that as a success story for the field.
Benjamin James Kuper-Smith: Yeah, I want to ask a bit more about that later. Before that, a slightly random question. It's kind of interesting, as you mentioned, that economics tried, in some sense, not to rely on something like value. When that happened, around the 1930s, it occurred to me that this was also roughly the time that behaviorism really became popular in psychology,
a way for psychology to also get rid of psychology, basically. I'm curious: this non-reliance on internal states, do you think it's a coincidence that it happened in these two fields at the same time? Or do you know whether there's any connection between the two?
Camillo Padoa-Schioppa: There is a connection. In fact, you put your finger on something very real. When I say that economics sort of liberated itself from psychology, that it didn't need to have a psychological theory, that's actually an oversimplification. In reality, what happened is that economics [00:11:00] embraced one particular psychological theory, and that is behaviorism. That means a theory where everything can be explained by stimulus-response associations, and there is no need to open the black box. And someone like Paul Samuelson, who was probably the greatest economist of the 20th century and probably the greatest figure in this formulation of standard neoclassical economics, so central to that effort, when asked, explicitly embraced behaviorism as an economist and was very aware of that. And from the other side, if you read Skinner, he writes that choice is an illusion. Choice is an illusion, because again, everything is a stimulus-reward association. So yeah, these two things, standard economics and behaviorism, particularly this kind of strong version of behaviorism [00:12:00] proposed by Skinner, are very much two pieces of a larger story, of a larger scientific viewpoint.
Benjamin James Kuper-Smith: Okay. Yeah, it seemed a little too similar to be a coincidence, but I guess these kinds of coincidences happen. So, that's a little bit about the history of economic choice. I want to start on a different history, the history of your education, so to speak, and when the two meet.
First, maybe: why did you study physics? I mean, you studied physics originally, I don't know whether it was a bachelor's or what exactly the system is in Italy, or was back then, but yeah, why physics, and what drew you to that field?
Camillo Padoa-Schioppa: In some ways, I'm still asking myself that question. For me, the story starts a few years before. When I was in middle school, I liked science and I thought [00:13:00] maybe I would go to a high school that would teach me some science. In Italy, the two types of school that were considered the better ones were one called classical, where there is a lot of humanities, and one called scientific, where there is also a lot of humanities but a little more math and science, and I was thinking of doing that. But then my parents kind of told me to go into the classical, and so I ended up doing the classical. So I studied a lot of humanities; you know, the backbone of the Italian system is history. So I studied a lot of history, philosophy, Latin, ancient Greek, history of literature, history of art, and very little in the way of math and science. But then, to compensate for that a little bit, as a teenager I had the opportunity to volunteer in a lab that studied ethology, so animal behavior; in particular, they studied tool use in capuchins. The person I was working with was Elisabetta [00:14:00] Visalberghi, who was at the time a young researcher at the National Research Council in Rome.
And so that was a very fulfilling experience for me. I would spend a couple of afternoons there every week for several years. It really shaped my interests and the way I've been thinking. In any case, when I finished high school, in some ways it would have been logical to continue to pursue this interest in the mind and behavior that I had developed aside from school. But in reality I also felt ready for something a little different, something new. I knew I liked math, but I ended up studying physics because I thought it would combine with the math; I felt math on its own was maybe going to be a very lonely enterprise, so I chose to study physics. In reality, I also realized after not too long [00:15:00] that physics was not really my passion. I was not particularly good at it; I enjoyed some aspects of it, but it was not going to be my future. But the Italian system was such that when you entered a program, there was not even really a bachelor's at the time.
It would take you all the way to a master's, so it would take you typically five years or something like that. There was not really any way out other than backtracking and starting something different again. Otherwise, you had to go through and finish. And so I did that. I went on with physics.
I studied statistical physics. Again, I enjoyed aspects of it, but it was not really a passion. So that answers the question of how I chose to study physics. If you were going to ask me what led me from physics to the next steps, I would say that, in a sense, going to neuroscience was a [00:16:00] little bit resuming my early interest in the mind, although in a different way.
So that's a little bit of my early path.
Benjamin James Kuper-Smith: Yeah. I do want to ask how you then ended up in neuroscience, but before that, there were two questions I thought were kind of interesting. First: is it common to do an internship in a lab whilst you're in school in Italy? That's something I haven't heard of.
Camillo Padoa-Schioppa: No, very unusual, very unusual.
Benjamin James Kuper-Smith: Yeah. And how did that kind of happen?
Camillo Padoa-Schioppa: I think, again, you know, when I started I was 15, and my dad, talking with me, knew I was interested in science, and he said, you know, maybe you can find a lab. He didn't know it was going to be an animal behavior lab; he thought maybe it could be a chemistry lab, you know, some lab, to have some experience.
So it was really like, I had not thought of it; he had this thought. And then a family [00:17:00] friend was a neurobiologist, and so my father asked him for, I guess, some advice, and this family friend put me in touch with this young researcher, Elisabetta. And that's how I started. I remember arriving there. It was at the time called the Institute of Psychology of the CNR, the National Research Council. It has changed name since, but it's basically the same institute. There were a few researchers there, and they told me a little bit about what they were doing, and then I started. It was very unusual; to this day, I don't think it's very common at all.
And I really have to thank, on one side, the family connections that put me in touch, and on the other side this researcher, who at the time was in her early thirties and had just started as an independent investigator. In fact, semi-independent; she was not completely independent.
And I guess maybe because she was young and didn't have a group yet (since then she has [00:18:00] grown a much larger group), but at the time she was just getting started herself as an independent investigator, she took me in. And you know, we're still close friends many years later. Anyway, so yeah, it was unusual and very fortunate.
Benjamin James Kuper-Smith: Yeah, I mean, it sounds incredible, especially as you'd imagine that someone who's just starting a lab has better things to do than to try and teach a 15-year-old. Were you doing anything useful or interesting? Like, what were you doing?
Camillo Padoa-Schioppa: Right? That's a good question. Yeah, I was trying to do something useful. At the beginning, I was doing very menial work, like inserting data into one of these old Macintosh computers that had floppy disks. You probably never even saw those. So I was doing that kind of stuff.
But then, at a certain point, you know, I kept going there regularly for several years, so at certain points, when she was running experiments, she would take me with her. In her case, experiments were [00:19:00] often something like putting one animal, or sometimes two animals, depending on the kind of experiment, in an apparatus or in a situation where they had the opportunity, maybe, to use a tool to do some task. It could be termiting, meaning, for example, using a tool to recover some food that was at the bottom of a little well, or using a stick to push a nut through a tube, and stuff like that. The experiments would be putting the monkey in a certain situation and then recording manually what was happening.
And so I did some of that. I was essentially collecting data as Elisabetta was doing her experiments, and at some point I think I could run the experiments myself. So yeah, I think I was being moderately useful. It's not like I had my own project or anything like that.
I was a kid. And then, in addition, [00:20:00] so that was my contribution to the lab; as for the way things came back to me, in multiple ways, participating in these experiments was fantastic. Just watching the animals do stuff really developed my thinking and my intuition about what an animal is thinking or doing when they're behaving. It was also an opportunity to read stuff: Elisabetta would suggest some books that I would read, and so I read several interesting things in animal behavior, in anthropology. So, I mean, I would assume I got more out of this than Elisabetta got out of me, but hopefully I was also somewhat useful.
Benjamin James Kuper-Smith: Yeah, I'd imagine that a researcher is aware that this is more [00:21:00] about helping someone out rather than actually having a skilled worker. But in a way, I guess you can do useful stuff at almost any age, considering that when I was, whatever, 16, I worked in a shop, you know, stacking shelves, and that's an actual thing with economics behind it, and you shouldn't mess it up.
So why not do basic stuff in a lab, if you're interested in it? Yeah.
Camillo Padoa-Schioppa: Yeah, yeah, yeah. No, I agree. I agree.
Benjamin James Kuper-Smith: Yeah. So the other question, briefly: you mentioned your background in the classics and history and that kind of stuff. Do you find it useful to have that background? I mean, it's obviously difficult to know what your thoughts would be if you had a different background, but I'm just curious whether you think it adds something to your way of looking at your work that you'd otherwise lack.
Camillo Padoa-Schioppa: Yeah, I think you already put your finger on both [00:22:00] my answers. The first answer is: I don't really know what I would be if I had not had that experience as a teenager. And the other is: yeah, I think it does shape, to some extent, the way I'm thinking about everything. So I guess for me it is useful. You know, I didn't need to study ancient Greek the way I did for several years; I probably would not be that different if I had not. But at the same time, for example, the Italian high school has a pretty serious program in philosophy. We studied three years of philosophy, and that, I would say, almost certainly informs the way I juggle ideas and think about problems in a broad way. So yeah, I have definitely been shaped by that high school. I'm not sure I would recommend to my [00:23:00] teenage self to do the same path, but I certainly drew quite a bit out of it.
Benjamin James Kuper-Smith: Yeah, yeah. I didn't do Greek, but I did Latin in school. And the main reason is just that, like, I went to school in Germany, but I was born in England, so I didn't want to do English as a foreign language, because it's just boring, you know, listening to people learn how to say their name.
So I did Latin as my first foreign language. And yeah, it's weird. I guess it does give you a very different perspective, but I'm also not sure it really... I don't know. The good thing is that now, when I see something Italian, it's vaguely familiar; I can understand roughly half of the words. But that's basically most of the benefit, I think.
Camillo Padoa-Schioppa: Well, Latin for an Italian is very close, right? So it makes a lot of sense for an Italian to study Latin. I grew up in Rome; you know, Latinity, [00:24:00] ancient Rome, is everywhere. It's in the language, it's in the culture. So Latin makes a lot of sense for an Italian. Ancient Greek, in my opinion, a bit less, meaning that it would have been better, I think, if my experience in high school had been heavier on math and science, possibly at the cost of some other things, and maybe Greek would have been the one I could have done without. Not everyone agrees on that, but that's definitely the way I think back about it.
Benjamin James Kuper-Smith: Yeah. Anyway, so you did all of that. You were mildly useful in the lab, you did your physics, and then somehow you ended up doing a PhD in the neuroscience of movement, at MIT I believe, is that correct? Yeah. First: why the US, how did you decide on that choice?
And then, how did you end up going from [00:25:00] movement to economic choice thereafter?
Camillo Padoa-Schioppa: Okay, so at the end of my time in physics, again, I knew I didn't want to study physics going forward, so I thought maybe I should stop school and find a job, but part of me was still kind of excited about studying more. I think I had realized that I was interested in the brain; I think I had got that far.
And we're talking here about around 1996. The Internet was in existence, but it was a lot newer than it is now. You know, Chrome didn't exist, Google didn't exist; we used software like Mosaic or Netscape. In any case, using that kind of software, I realized that there are departments of neuroscience around the world. In Italy, mind you, there actually wasn't any; there was one in Trieste that I learned about later. But there were neuroscience departments, and so I thought that maybe I could apply and go [00:26:00] there. And I took a trip to the US. I visited a few schools and met a few scientists I had contacted by email at the time.
Email, again, existed, but it was kind of a new technology. And then I came back home, and I sent, I forget if 10 or 12, applications to PhD programs in the US. I didn't get into any of them, with the exception of MIT. I got into MIT, and BU also made me an offer, but in that case I was only accepted to their master's program. And the fact that I didn't get into any was actually not surprising. First of all, I spoke terrible English. Seriously, my English was not good; that's number one. Number two, I had no background in neuroscience, or in biology for that matter. And I also was not really familiar with how the American system worked, with [00:27:00] universities in the US.
So I was a very unlikely candidate, and it's much more surprising that MIT accepted me than that everyone else didn't. At MIT, they accepted me, I'm almost certain, because when I had taken that trip to the US I had visited MIT in particular and met Emilio Bizzi, who later became my PhD mentor and advisor. I had spent an hour with him, or half an hour, I forget, talking in his office. First of all, we spoke Italian, because he's Italian. And Emilio probably was able to see in that conversation, because, being Italian, he understands how things work there, some qualities that would not have been as apparent to someone not familiar with the Italian system. So that, I'm sure, was the main thing. And the other thing is that I was coming from physics. Even though I didn't have a background in neuroscience, physics is usually considered [00:28:00] a good school of science.
It is. And a place like MIT was probably particularly open-minded to that way of thinking. In fact, I was not the first student who joined the PhD program at MIT with a background in physics and practically nothing else. There was at least one other student a couple of years ahead of me.
And so it was a tradition. MIT is both very open-minded and very interdisciplinary-minded, and also in general interested in people with a strong computational background. So MIT was a particularly good place, or a particularly receptive place, for someone like me. But that said, it was very lucky for me that they took me into the program. And so, when I arrived at MIT: at the time, you know, PhD programs now, for the most part, have rotations, so students join the program, in the first year they do rotations in two, three, four labs, and then they choose a lab. But that was not the case at the time in the [00:29:00] PhD program I was in.
So when I joined the PhD program, I joined Emilio's lab, and Emilio studied motor control. His research has always been on that. And so essentially that's what got me into motor control. I was more interested in neuroscience than in motor control per se; motor control in some sense happened to me. But, you know, I got excited about what I was doing. I worked on motor learning, I started working with monkeys, doing neurophysiology. And there is a lot to learn during your PhD, in a sense independently of what precise question you work on. And I took a bunch of classes. I learned quite a bit of cocci, for example; I mean, of course, I learned systems neuroscience. For me, arriving in the PhD program at MIT was an epiphany. Again, I didn't have essentially any background in neuroscience, and in the first year I really immersed myself. I subsequently estimated that in my first 12 months in neuroscience, I read about 500 [00:30:00] papers, many of them through classes and many of them just like that. So I went essentially from zero to being sort of in the field. I took classes; it totally opened my mind. So I loved it. I worked pretty hard, and I did research in Emilio's lab. And then towards the end of my PhD, three or four years later, I started thinking about what to do next. I wanted to stay in neuroscience, and I realized I was interested in decisions. So I started thinking about decisions, and I took a couple of classes in economics because I thought that would be a good starting point. And then, I'll circle back to a couple of things that we already brought up in this conversation.
One is that I came up with the behavioral paradigm that I've used heavily ever since. And the idea is this: you know, let's say you have an [00:31:00] animal, a monkey, that chooses between, let's say, pieces of apple and raisins. And if you offer one piece of apple against one raisin, maybe the animal chooses the raisin.
But if you offer the raisin against maybe two or three pieces of apple, then the animal eventually will flip its choices. So maybe the animal could be indifferent between one raisin and two apples, and then choose the apples consistently if there are three or four against one raisin. So there will be a trade-off here. And I started asking the following question to economist friends I knew. I said, okay, let's say the animal is indifferent between one raisin and two apples. Can I say that the value the animal assigns to the one raisin is equal to the value it assigns to two apples? I asked that question first to my parents, who are economists. And their answer was like, you know, that's not usually how economists think about this. Usually people don't think about values so [00:32:00] much. But the person who has really been thinking about these kinds of issues the most is Paul Samuelson, the economist I mentioned earlier. And Paul Samuelson was an old professor at MIT.
So, you know, the building next to mine. But I was not going to go bother Paul Samuelson with my task, my Gedanken task. And so I asked another friend of mine, who was a student in economics at MIT, basically the same question. And I got essentially the same answer: you know, that's not really how people usually think in economics, and actually Paul Samuelson gave a lot of thought to these kinds of issues early on.
I wonder what he would say. And then one night in Cambridge, I was sitting at dinner next to Franco Modigliani, who was also an old professor at MIT. He was someone I knew. And Franco asked me, you know, how are you doing? And I said, I'm good. You know, I'm thinking about this task. Let's say I have an animal, a monkey.
I offer raisins versus apples. Let's say that the animal is indifferent between one raisin and two apples. Can I say that the animal assigns the same [00:33:00] value to one raisin as it assigns to two apples? And Franco Modigliani said, Yeah, that's interesting. I wonder what Paul thinks about this — Paul being Paul Samuelson.
And at that point, I said, okay, that's it. I'm going to write to Paul Samuelson and ask him that question. And so I did. I sent a letter. Paul Samuelson didn't use email, because again, it was the relatively early days of email. So I wrote him a letter on paper, and eventually I met with him. First we had a conversation on the phone, then we met in his office and had a longer conversation. But the key question was, you know: is it fair to say that, in the situation described, the animal assigns to one raisin the same value as it assigns to two apples? And his answer was: No. You should not be thinking like this. You should be thinking about this behavior the way Skinner did. Okay. You should be thinking about this behavior the way Skinner did. He also [00:34:00] discouraged me from pursuing this line of work. He said, I would not recommend a student to pursue this kind of work. But then he also added: remember that science makes progress one funeral at a time, and I am already 87. So with that, I thought that maybe he was not completely discouraging. And so I decided to continue on that. And here I'm going to circle back to another story, which is that I didn't know whether this behavior would work at all — you know, this apples-versus-raisins thing. And so I got in touch with Elisabetta Visalberghi, my old friend and mentor in Rome. I went back to Rome, I spent the summer there, and I did on her capuchins this first behavioral experiment: letting the monkey choose between different foods, offering different quantities and observing trade-offs. And that behavior [00:35:00] worked very well in that setting. In fact, the very first paper I published on the topic of economic decisions was on those capuchins, with Elisabetta and student co-authors.
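The trade-off logic Camillo describes — varying the offers until the animal's choices flip, and reading the relative value off the indifference point — can be sketched computationally. Here is a minimal illustration in Python, with hypothetical data and a brute-force logistic fit of my own; this is not the analysis code actually used in those studies:

```python
import math

def fit_relative_value(offers_b, frac_chose_b):
    """Estimate the indifference point: the quantity of good B (e.g. pieces
    of apple) at which the animal chooses B half the time against one unit
    of good A (e.g. one raisin). A logistic choice curve is fit by a simple
    grid search over the indifference point (rho) and steepness (k)."""
    best_rho, best_err = None, float("inf")
    for i in range(451):                 # candidate indifference points 0.5 .. 5.0
        rho = 0.5 + i * 0.01
        for j in range(50):              # candidate steepness values 0.2 .. 10.0
            k = 0.2 + j * 0.2
            err = sum(
                (1.0 / (1.0 + math.exp(-k * (b - rho))) - f) ** 2
                for b, f in zip(offers_b, frac_chose_b)
            )
            if err < best_err:
                best_rho, best_err = rho, err
    return best_rho

# Hypothetical session: one raisin offered against 0..4 pieces of apple,
# with the fraction of trials on which the animal chose the apples.
offers = [0, 1, 2, 3, 4]
frac_chose_apples = [0.0, 0.1, 0.5, 0.9, 1.0]
rho = fit_relative_value(offers, frac_chose_apples)
```

With choices splitting 50/50 at two apples, the fitted indifference point comes out near 2 — i.e., one raisin is worth about two apples, which is the "relative value" Camillo refers to.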
And then once that worked, I decided to go ahead and do the neurophysiology of that.
And so then I pursued a postdoc on that. And I can tell you more about this.
Benjamin James Kuper-Smith: Yeah, that's fascinating. I'm curious — I guess this is a kind of general question about seeking the advice of very esteemed people in the field, because it seems like, you know, a very good thing to do. But then if they're dismissive of it, you know, there's also the risk that you just don't do it.
I mean, for example, let's imagine he hadn't added that thing at the end. He just said, no, it's a stupid idea. I don't know whether it would have discouraged you necessarily. But I'm curious: is it just to take [00:36:00] people's advice with a grain of salt and say, this is someone's opinion and they might be seeing it from a different perspective than me? Or how would you...
Camillo Padoa-Schioppa: Well, I think hearing the advice and the opinion is useful, but you don't have to listen to what they tell you. I mean, I told the story the way I did, but honestly, I probably would have done it anyway, even if he had not added the bit about the funerals. But to hear his perspective on that task, on the way I was thinking — that was helpful.
And in fact, in that paper with the capuchins, one of the experiments I designed was trying to address some of the issues that Samuelson had raised to me. This is just to say that, for me, talking with him was helpful. Of course, it took me a [00:37:00] while before I went there and talked to him, because, I mean, I honestly thought that the question I had was a simple question that any student in economics would have been able to answer. And it's only when I realized that there was more to it than I even understood that it seemed worthwhile asking a luminary, you know, a giant of the field, to spend time with me discussing this issue.
And even though he was discouraging, he didn't seem to think the question was stupid. It's more that he seemed to think that going down that path wouldn't be worthwhile for someone starting out. At the same time, it was clear to me that he was thinking the way an economist does, where there is behavior and only behavior, whereas I was coming with the mindset of a neuroscientist, meaning that behavior is what you're interested in, but ultimately [00:38:00] a lot of the research is going to be about brain processes. So he wasn't thinking about brain processes. And so having his perspective was very valuable, but I knew from the beginning that I was coming from somewhere a little bit different than he would embrace. And in general, I think asking the advice of senior people is a good idea. You don't want to waste people's time, but confronting yourself with big ideas and big thinkers is worthwhile doing sometimes. Sometimes you don't need to talk to them in person.
You can kind of see what they think from their writings, for example. But why not talk to people?
Benjamin James Kuper-Smith: Yeah. I thought I'd ask this a little bit later, but I guess I can ask it now, which is about the relationship between the brain and economics, or neuroscience and economics, and kind of the [00:39:00] question: what exactly does studying the brain allow you to understand about people's behavior and their decisions that you can't get at without it?
I mean, I remember, for example — I talked to Paul Smaldino, I had an episode about his book on formal modeling and social behavior. And he also said he was briefly very interested in neuroeconomics, and — I'm going to tell the story incorrectly, but I guess the brief summary of it is that he was really excited by a finding, because it said, hey, look, people behave this way.
And now we find a neural correlate of that. And then his advisor, or someone, said: I mean, we know that people do this, and we know that behavior comes from the brain, so it's kind of obvious that you would find these correlates in the brain if you look for them, if the behavior actually exists.
That's roughly the story. So I wanted to use that, and what you mentioned about Samuelson, to [00:40:00] ask: why add the brain when trying to understand how people make decisions?
Camillo Padoa-Schioppa: Yeah, that's a fair question. I mean, there are two answers to that. One answer is that, you know, I am a professor in neuroscience, I'm in the school of medicine, my research is funded by the National Institutes of Health, and the idea is to understand brain mechanisms. So for me, in part, the answer is that what we neuroscientists are doing to understand the brain mechanisms underlying choices is worthwhile whether or not, at the end of the day, we understand choices at the behavioral level better for the work we're doing. In other words, there is a payoff in just understanding brain processes, because that's our business: we're trying to understand brain processes.
So part of my answer is not really an answer to your question; it just says, look, [00:41:00] it's worthwhile anyway. Okay. But your question, for me, is central, in the sense that, first of all, between behavior and neural processes there is something a little bit harder to pinpoint, which is cognitive processes, or cognition. And at some level, that's what interests me most. And I think that when systems neuroscience, or neuroscience more generally, is at its best, in addition to revealing truths about neural activity, it also reveals something about cognitive processes. And I think that the case of economic choice as a behavior, then as a [00:42:00] mental function, and then as a neural process is one that can actually highlight these different epistemological levels, and how information flows in both directions. And I'll make this case like this. A simple argument is one that I have already highlighted: if you look exclusively at behavior, the way a standard economist would, at the end of the day you can't really say whether values are a real thing or not. The reason standard economics shied away from a strong concept of value is not that they liked the simpler math.
In fact, it was not even that much simpler; arguably it was more complicated. It's because they realized that it was not really feasible to nail values based on behavior alone, and that they could just as well [00:43:00] build a theory that doesn't rely on values. But the concept of values is still potentially a valid concept in a cognitive sense.
Is it true that the decision process entails the computation and comparison of values? That's a question that makes sense in psychological terms, whether or not economics will be affected by the answer to it. And neuroscience comes in and says, look, there is neural activity, in fact in different brain regions, that really, really looks like value signals. And so here is a case where evidence from neural activity informs the way we think about the cognitive process that is the decision process. If a standard economist feels like, you know, this is not my business, I don't learn anything from this, I don't really care about this, because [00:44:00] I'm not in the business of understanding cognition or brain activity, I'm just in the business of understanding choices at the behavioral level — I think that's a legitimate point of view. In fact, it's a point of view that some economists have made explicit; there was a debate, about 10 or 15 years ago, between economists, largely between those who held the position I just described and those who were in fact more interested in, or open to, input from neuroscience. So it's legitimate for economists to say, I don't care about this. But I think neuroscience has informed our understanding of cognitive processes quite a bit in this case. And there are other examples I could point to where the behavior is compatible with, let's say, at least two different possible cognitive mechanisms, and evidence from neuroscience can constrain or resolve that ambiguity. So in that sense, I think [00:45:00] there is a use. Now, whether economic theory will ever be affected by evidence from neuroscience: early on, when the field emerged, some economists expressed a lot of enthusiasm for that. I remember being part of a debate with David Laibson at Harvard.
This was in the early days of neuroeconomics, and the question we were supposed to debate was: does economics need neuroscience? And David, who is a professor in economics at Harvard, was absolutely positive.
He said, you know, neuroscience is going to be a game changer for economics. And at the time I was much more cautious. I would say, you know, I think the jury is still out; we'll see if there really is an impact of neuroscience on economics. And I think, 20 years later, largely the jury is still out. There is a cultural interest that economists may have in the neuroscience of decision making, but whether that interest will really [00:46:00] percolate and enter the theory and generate fundamentally new concepts or theories in economics is a complicated story. I mean, there are definitely people thinking that way, but I think by and large the jury's still out.
Benjamin James Kuper-Smith: Yeah, I mean, I guess it depends also on your purpose, right? Like what you're trying to do. If you're an economist and you want a descriptive model of behavior, you're just trying to describe it as precisely as possible, and maybe a mechanistic account might help you describe it more precisely, but I guess that's not really exactly what you're going for. Versus if you're actually trying to understand the mechanism, then yeah, obviously the brain is going to be a lot more useful.
Camillo Padoa-Schioppa: Right, yeah. Even within economics, there is a lot of variety. I mean, many people are interested in political economics or development or things that have nothing to do with individual choices. So you have to be a particular kind of economist for this to be of any interest. And even so, there is no doubt that there's a [00:47:00] good number of economists who are trying to shape theories and models, to inform them or build them in a way that is guided or informed by concepts in neuroscience; there are multiple examples of that. Also, not being an economist, I don't really know, but my sense is that, in spite of this interest, these kinds of models are not quite mainstream yet in economics. You should ask an economist to know better, but my sense is they're not quite mainstream, and it's not clear whether they will become mainstream or not. That would be more a question for an economist than for someone like me.
Benjamin James Kuper-Smith: Okay, I'll write that down for when I actually interview actual economists. Maybe to be a bit more specific: so you switched to neuroeconomics (I don't know whether it was already called that when you switched) — to [00:48:00] studying decisions in the brain.
Um, you already described the task that you used, roughly. What did you find? I mean, I guess the first clear big paper would probably be your 2006 Nature paper, you know, talking about value and all this stuff. What did you actually find?
Camillo Padoa-Schioppa: Yeah, so I did my postdoc at Harvard in the lab of John Assad. And when I arrived there, I proposed to him that we use this task, and John agreed. John was actually studying different things, but he got curious about this. And so I set out to do a neurophysiology version of this task. The animals, instead of choosing between, you know, apples and raisins, were choosing between different kinds of juices: it could be apple juice or grape juice or tea or other things like that, in different amounts.
And so that would generate trade-offs similar to the ones I described. [00:49:00] The field was not called neuroeconomics when I started; the word neuroeconomics came about around that time. But in any case, the first question was: where am I going to record from in the brain? Because truthfully, there was practically nothing known about the brain mechanisms, the regions that would be involved in this kind of behavior.
And in fact, I seriously considered, instead of doing a neurophysiology postdoc, doing an imaging postdoc, going to work in an fMRI lab, because that seemed like the reasonable thing to do. We had no idea at that point where in the brain there was neural activity related to this process, so fMRI seemed like the right way to go, instead of sticking single electrodes in somewhat arbitrarily chosen brain regions. And the reason I didn't do that, in the end, is that I thought it would be a very long path before I got an answer, and that, with a leap of faith, maybe I could identify two or three brain [00:50:00] regions that would be good starting points. And so I did that. I studied the literature that existed: a lot of anatomy, some lesion studies, some clinical work on patients who had frontotemporal dementia, for example. There were some neurophysiology studies that seemed relevant. And at the end of the day, I identified three regions that seemed like places where maybe I would find something interesting. One was the orbitofrontal cortex, one was the anterior cingulate cortex, and the other was the ventral striatum. And I put myself in the condition where I could record from all three of them, not necessarily at the same time, but in the same animal. And then I decided to start from orbitofrontal cortex, or OFC. And so I started recording from there. No one I knew had ever recorded from OFC, so I had to find my way. You're not a neurophysiologist, but, you [00:51:00] know, recording from a brain region few people had recorded from is not straightforward. Nowadays there's a lot of work done in OFC, but at the time that was not the case. There had been one special issue, if I remember correctly of the journal Cerebral Cortex, dedicated to OFC, and the title of that special issue was "The mysterious orbitofrontal cortex."
So there had been some work, enough to put together a special issue, but really not that much. And people really didn't understand much about what was going on in OFC. In any case, I set myself up to record from OFC with the help of John Assad, whose lab I was in, and he was very helpful in many ways.
And then I started recording from there. And, to make a long story short, I was recording individual neurons, many of them, and I found that neurons in OFC have different kinds of responses. I later understood there were different groups of cells doing different things.
In essence, for some [00:52:00] neurons, the activity was linearly related to the value of one of the two options. So let's say in one session the animal was choosing between grape juice and apple juice. Some neurons encoded — meaning their activity was linearly related to — the quantity of, say, grape juice, independent of the quantity of apple juice offered to the monkey in that trial.
And other neurons represented what we call the offer value of apple juice, meaning that their activity was related in a linear way to the quantity of apple juice. These two groups of neurons can be conceptually thought of as one class: neurons encoding the value of individual offers, or offer values, as I named them. Other neurons represented, in a binary way, the juice type that the animal chose. So the activity maybe was high when the animal chose apple juice and low when the animal chose [00:53:00] grape juice, or the other way around: high when the animal chose grape juice and low when the animal chose apple juice. And so these neurons seem to capture the binary choice outcome.
Or, as I call them, the chosen juice neurons. And then there is the third group of neurons — so some neurons encode the offer value, some the chosen juice, and then a third group — which at the time seemed the most interesting, and to this day are very interesting, the ones that really captured everyone's imagination, I guess: neurons that encoded the value of the juice chosen by the animal, independent of whether that was apple juice or grape juice. Let's say in one session the animal is indifferent between one drop of grape juice and two drops of apple juice. Then that equation, one grape equals two apples, essentially puts quantities of apple juice and quantities of grape juice on the same value scale. And using that [00:54:00] equation, we can express, in units of, say, the value of apple juice, the values of quantities of either grape or apple juice, if we assume linear value functions. And the activity of these neurons, the chosen value neurons, is linearly related to the value chosen by the animal, independent of whether that's apple juice or grape juice. And for that to be true, these neurons also have to reflect the preferences; they have to reflect the subjective nature of value.
And what I mean by that is this. If in one session the animal is indifferent between one drop of grape juice and two drops of apple juice, maybe the next day... So, sorry: that the animal is indifferent between one drop of grape and two drops of apple means that the relative value of grape to apple is two.
So one grape equals two apples. But then maybe the next day the animal is [00:55:00] less thirsty, and so the relative value could be equal to three, meaning that the animal could be indifferent between one grape and three apples. Or maybe the animal is now indifferent between one grape and one apple if it is more thirsty. And it turns out that, for neurons that encode the chosen value, variability in their firing rate matches the variability in the relative value of the two juices. And the variability in the relative value is the quintessence of subjective preferences: that preferences are subjective means that you and I may prefer different things, but it also means that what I prefer today and what I prefer tomorrow — the values I assign today and tomorrow — may not be the same, depending on my internal state, the context, and so on. And these neurons capture, or reflect, that variability. So, just to conclude here: I found these different groups of neurons encoding the offer value, [00:56:00] the chosen juice, and the chosen value. And in particular for the chosen value neurons, there was clear evidence that they reflected the subjective nature of value. And part of that study was also showing that the activity of these neurons did not depend on the spatial contingencies of the task.
And by that I mean the following. Animals doing this task were sitting in front of a computer, and there were two sets of squares appearing on the two sides of the fixation point, which was in the center of the monitor. The color of the squares indicated the juice type, and the number of squares indicated the juice amount. And then the animal would indicate its choice with an eye movement — with a saccade, as it is called. And the activity of these neurons did not depend on whether, let's say, grape juice was offered on the left and apple juice was offered on the right, or vice versa. It also didn't depend on whether the animal indicated its choice [00:57:00] with a saccade to the right or a saccade to the left. And so the representation of values and goods and options was abstract from the spatial contingencies of the task, and more abstract than activity found in many brain regions; in many brain regions, the activity is in fact spatially biased or spatially shaped. And so those were the main findings of that study.
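As a toy illustration of the three variables described here — a sketch of my own under the stated assumptions (linear value functions, values in units of apple juice), not the paper's actual analysis code — given the two offers and the session's relative value, each trial defines an offer value for each juice, a chosen juice, and a chosen value, and the three OFC cell groups fire in proportion to one of these:

```python
def trial_variables(offer_grape, offer_apple, relative_value):
    """Compute, for one trial, the variables that the different OFC cell
    groups were found to encode. Values are expressed in units of apple
    juice, assuming linear value functions. relative_value is how many
    drops of apple juice one drop of grape juice is worth this session."""
    offer_value_grape = offer_grape * relative_value  # offer value cells (grape)
    offer_value_apple = offer_apple * 1.0             # offer value cells (apple)
    # Toy decision rule: take the higher-valued offer.
    if offer_value_grape >= offer_value_apple:
        chosen_juice = "grape"                        # chosen juice cells (binary)
        chosen_value = offer_value_grape              # chosen value cells
    else:
        chosen_juice = "apple"
        chosen_value = offer_value_apple
    return offer_value_grape, offer_value_apple, chosen_juice, chosen_value

# One drop of grape vs. three drops of apple, with a relative value of 2:
# grape is worth 2 apple-units, so the apple offer wins.
ov_g, ov_a, juice, cv = trial_variables(1, 3, relative_value=2.0)
```

Note how the chosen value (here 3, in apple units) depends on the session's relative value: if the animal were thirstier and the relative value were 3 instead, the very same offers would put the animal at indifference — the subjective variability the chosen value neurons were found to track.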
Benjamin James Kuper-Smith: Yeah, I guess my question is: you find these correlations between these different values and juices and that kind of stuff and brain activity. Maybe a very simple question is, what does that mean, in a sense? Because I don't know how many neurons you record from and all that kind of stuff, but I guess I'm trying to get at...
You know, is it possible that if you record from enough neurons, you're randomly going to get [00:58:00] some that will indicate more or less anything you try to correlate them with? Could you maybe first answer that from a neurophysiological point of view, because I don't know how many recordings people do in these kinds of settings, or whether you have control sites and all that kind of stuff.
So I'm just curious whether you have a control site where you record from a completely different brain area and then find none of these. So yeah.
Camillo Padoa-Schioppa: Okay, so the answer to your question that I give today is not the same that I would have given a few years ago — not so much because what I think is different, but because we have learned so many more things now than we knew then. Okay, so at the time... That paper was published in Nature.
The reason it was published in Nature, a top journal — likely, I mean, there is always [00:59:00] more than one reason, but part of the reason — is that it was the first clear evidence of neural activity encoding value. And it was also maybe the first, or one of the very first, experiments where animals were not instructed what to do. They were left free to choose anything; there was no correct answer, right? If you offer one drop of grape juice versus three drops of apple juice, there is no correct answer. And so the animals could do whatever they wanted, could choose whatever they wanted, and then value would be inferred from their behavior.
So that was a very new concept, a very new way of running experiments. It's totally mainstream now, but we're talking about 20 years ago — not quite, but almost. And thanks to that, because the choice was completely free, we could infer values from the behavior, and then we would have a measure of value that we could compare to neural activity, or use to define [01:00:00] candidate variables. So that study provided one of the very first, maybe the first, pieces of evidence of neurons representing value, by which I mean also the subjective nature of value. At the time, that seemed like an answer to the question of whether values are a real thing, which I mentioned before. When the field of neuroeconomics came together, for the first generation of studies, the question was: is value a real thing? So that was the answer. Your question, to simplify, is: okay, you found this — so what?
Benjamin James Kuper-Smith: I mean, maybe just to slightly clarify: in a way you're right, the question is "so what", but it's more like: okay, we find the correlations, but whether that actually is evidence for subjective value in the brain is what I'm questioning a little bit, and whether it couldn't just be that, because you record a bunch of neurons, you find a few [01:01:00] correlations by recording enough.
Camillo Padoa-Schioppa: Okay. So I recorded in that study some 900 neurons. That was a large number by the standards of the time; the typical neurophysiology study at the time probably had 200 to 300 neurons. So this was a large data set. I had a large data set because I was able to collect that much data; I had a good system to do that. The technology has advanced since then, but in any case, the point is that a large fraction of neurons in OFC were doing something interesting. These three different groups of neurons I described account for about 50 percent of cells in OFC. There is still the question of what the other 50 percent of cells are doing. But it's not like, you know, you record a million neurons and find five of them that do anything interesting. It was more like 50 percent of neurons in this area doing something interesting. The other question you may be asking is: okay, sure, you [01:02:00] record from OFC, but there are a lot of brain regions.
Are you going to find the same elsewhere? I was getting at this second question when I said that when I published that study, it was one of the first, maybe the first, evidence of neurons encoding value. Since then, it became clear that a lot of other areas have value signals: ventromedial prefrontal cortex, mostly from fMRI studies, lateral PFC, anterior cingulate. If I listed them, there are probably 20 different areas, cortical and subcortical, that have value signals. This is why I said the answer I would have given then and the answer I'm giving now are a little bit different: now that I know that so many areas have value signals, I think what is interesting in that study is not just that there are value signals. At the time, the answer was: it's interesting because the fact that value signals exist at all already tells us something, because they didn't have to [01:03:00] be there. Now that we know so many different areas have value signals, what remains interesting in that study is not just that there were value signals; it's also the kind of value signals that we found. That many different areas have value signals doesn't mean that all these value signals are the same. In particular, in OFC, I said there are these three groups of cells, capturing, among other things, the value of individual offers. Let's step back for a moment. If you have to choose between an apple and an orange, presumably you assign a value to the apple, you assign a value to the orange, and then the decision is made by comparing these two values. So neurons encoding the offer value of, say, apple juice or grape juice look like the input of that comparison, the input of a decision process. Neurons encoding in a binary way the juice chosen by the animal — let's say apple juice or grape juice — capture the output of that decision process.
And then the chosen value may be something like an [01:04:00] internal variable of the decision. Where I'm going with this is that what remains interesting in that study, aside from things that have been stepping stones to many other things, is that there are these different groups of neurons capturing both the input and the output of a decision process. And two things are interesting. First, the fact that in a particular area there are neurons capturing both the input and the output of a decision process suggests that these neurons are organized as a circuit; they're the building blocks of a circuit where that decision may take place. And second, I said there are value signals in many brain regions, but not all brain regions have this richness. For example, in many of the brain regions where you see value signals, you see things like the chosen value, so something that looks like the output of the decision, but not both the input and the output. And other brain regions have other things; for example, dopamine [01:05:00] cells have reward prediction errors, or value prediction errors, a more complicated signal: essentially the difference between what is expected and what is observed at that time, and that could be useful for a learning process.
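The reward prediction error Camillo mentions is commonly formalized as the difference between observed and expected reward, which then updates the expectation; a minimal sketch with hypothetical numbers:

```python
# Reward prediction error (RPE): delta = observed - expected.
# The expectation is nudged toward the observed reward by a learning rate.

def rpe_update(expected, observed, lr=0.1):
    delta = observed - expected            # prediction error
    return expected + lr * delta, delta    # new expectation, RPE

v = 0.0
for _ in range(3):           # the same reward of 1.0 delivered three times
    v, delta = rpe_update(v, 1.0)
# As the reward becomes expected, the RPE shrinks: 1.0, 0.9, 0.81
```

This is the sense in which a dopamine-style signal is useful for learning rather than for comparing two concurrent offers.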
So in other words, even though many different brain regions have value signals, only a few of these brain regions have just the right value signals to suggest that a decision process could be happening right there. And so the fact that OFC has these different groups of cells remains interesting, and remains a source of inspiration for the kind of work we are doing and have been doing. And then, related to that, there is an issue of causality that we can discuss if you're interested.
Benjamin James Kuper-Smith: Yeah, I mean, I guess we can start moving towards the causality stuff now. Maybe to get there, and to introduce your more recent study about the causality of economic [01:06:00] value in OFC, I wanted to go a little bit broader again: why would the brain actually even need to calculate this value in many decision situations? In particular, there was one paper; unfortunately I read it a few months ago and didn't read it again today for this, but I'm sure you've read it: "The case against economic values in the orbitofrontal cortex" by Hayden and Niv. I'd also just generally be interested in your opinion on that paper, because it came out, I think, at exactly the same time as your study, so they didn't cite you. But one of the points they make there is that often options are so different that you just don't need to compute value anyway; it's very clear which option you want. And also that you can often get to decisions via very simple heuristics rather than calculating those values. So maybe the first question is: [01:07:00] what is your opinion on this? Do you think the brain needs to compute value all the time, or only in certain situations?
Camillo Padoa-Schioppa: Okay. So this is not about causality. It's about,
Benjamin James Kuper-Smith: Yeah, we'll get to
Camillo Padoa-Schioppa: values. Yeah, yeah. I mean, there is a stream of literature that predates Hayden and Niv's commentary. For example, in behavioral economics, Gigerenzer and his group in Berlin have long argued that a lot of decisions that seem like the kind of decisions that might require the computation of value are in reality processed by people using heuristics that don't rely on a value computation. And that might be true. I mean, I don't question that. I'm not saying everything they suggest is right, but first of all, there are plenty of behaviors that don't [01:08:00] require valuation. Okay. And there are also behaviors that could be construed as a choice but that in reality work differently.
For example, habits work very differently from the kind of choices I studied. In fact, in the literature there is a distinction often made between goal-directed behavior and habitual behavior. Habitual behaviors are those that result from a process of learning through trial and error that eventually forms a habit; once the habit is formed, you don't need to rethink every option every time, and presumably at that point you don't assign a value, or at least not a value in the sense we're discussing here. Goal-directed behavior, in contrast, would require that kind of calculation. That's a classic distinction in the reinforcement learning literature. So it's one thing to say that there are plenty of behaviors that don't require valuation. It's another [01:09:00] thing to say, A, that valuation is never required, and B, that the evidence for neural representations of value is unconvincing. These are two points in the review by Hayden and Niv, and I think they're mistaken. I think their argumentation is flawed. I think they ignore evidence in the literature, and I think they invoke concepts that they don't elaborate.
For example, they said that choices could be based on preference signals without even explaining what a preference signal would look like as opposed to a value signal. I think there are problems in their story. I mean, it was a provocative pamphlet, but I think their arguments are flawed.
That's, you know, I cannot say it otherwise. Conversely, and this goes already a little bit into the causation story, there is the fact that [01:10:00] values can drive multiple cognitive functions and behaviors, not just choice. For example, perceptual attention: value can be very relevant to perceptual attention.
It can be very relevant to emotion. It can be very relevant to associative learning. So values can drive not just choices. And noticing that has two implications. First, if it is true that values can drive many different cognitive functions, then you should expect to see values in many different brain regions, because different brain regions do different things. And that's indeed the case. That's implication number one. Implication number two is that if you see a value signal in some brain region, that does not imply that that brain region participates in or contributes to a choice process, because maybe it has a value signal for [01:11:00] some other purpose. So given that there are value signals in lots of different brain areas: one, as I already pointed out, you should keep in mind that these value signals are not all the same. And two, even once you have a small number of candidate areas and can say, okay, maybe these areas have something to do with choice, you want to be careful. It's not because you see a value signal that that value signal contributes to a choice process.
You would like to show that that's the case, and that requires some causal experiment. Okay, and this is where causality experiments come in handy. They're not always easy to conduct, but you want to show that there is a causal chain linking that value signal to a choice. You want to show that in order to conclude that that value signal contributes to the decision.
Benjamin James Kuper-Smith: Yeah, exactly, [01:12:00] that was kind of my slightly weird way of getting to causality. Maybe, I mean, do you want to introduce the paper that we've been leading up to for a while now?
Camillo Padoa-Schioppa: Yeah, I can introduce that. I'm going to say something about causality in systems neuroscience in general. First of all, causality is a concept that exists in many scientific disciplines, not just neuroscience. In neuroscience, particularly systems neuroscience,
causality usually means something along these lines: we have some brain activity and some behavior, and we would like to know that there is a causal chain linking that activity to that behavior. There are usually two ways of demonstrating the existence of that causal chain. One is [01:13:00] to say, okay, if I disrupt that neural activity, for example by silencing it using some pharmacological agent, does that disrupt the behavior? And if silencing or disrupting the neural activity disrupts the behavior, then there is a hint of causality there. This way of proving causality is sometimes subject to criticism, though. For example, let's say that you are doing math, and as you're doing math, I come and pull your hair; that distracts you, and then you're slower at your math problem, or maybe you get it wrong. By the logic I was describing, one might say that there is a causal chain linking your hair to your ability to do [01:14:00] math. And that's almost certainly not true. So sometimes disrupting behavior by disrupting neural activity may not really demonstrate a causal link; it could be inducing some other form of noise or distraction that induces deficits in behavior. So in reality, the best way to show causality is not to disrupt the behavior, but maybe to enhance the behavior, perhaps to bias the behavior. And that kind of experiment, in the systems neuroscience tradition, is done using electrical stimulation. Neurons speak to each other through spikes.
So, electrical activity. If you inject current in the proper conditions, with the proper amplitude and in the proper way, let's say in cortex, you can induce higher firing rates in the neurons surrounding the electrode you use [01:15:00] to inject the current. And if by doing that you enhance or bias the behavior instead of disrupting it, that is usually taken as stronger evidence linking, in a causal way, the activity of those neurons and the behavior. One of the classic and most elegant examples of this is work done by Daniel Salzman when he was a graduate student in Bill Newsome's lab in the early 1990s. In that lab, they were studying the perception of motion.
So, visual perception. And they knew that there is an area called the middle temporal area, or MT, that has neurons that respond essentially to the direction of motion. In those experiments, animals were looking at what is called a random-dot [01:16:00] kinematogram: a situation where on the monitor there is a stimulus that moves ever so slightly to the left or ever so slightly to the right.
In the proper conditions, that task can actually be very hard; the perception is hard. And in MT there are neurons that respond to the direction of motion. These neurons are organized in mini-columns, meaning that there is a mini-column, of maybe 100 microns radius, orthogonal to the cortex, and all the neurons in that mini-column have the same preferred direction of motion, say to the right.
And other mini-columns will have a preferred direction of motion to the left. What Daniel Salzman did was insert an electrode in MT; let's say he inserted the electrode in a mini-column of neurons that respond to motion to the right. If he injected just a little bit of current, the effect on behavior was [01:17:00] equivalent to making the dots move a little bit faster to the right.
So that current injection biased the percept according to the mini-column in which the electrode was inserted. If instead he inserted an electrode in a mini-column of neurons preferring motion to the left, then injecting current would enhance or bias the percept in the direction of leftward motion.
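The logic of the Salzman result can be sketched with a toy psychometric function; the shape and all parameters here are hypothetical, chosen only to illustrate how stimulation reads out as an equivalent motion signal:

```python
import math

# Toy psychometric function for the random-dot task: probability of a
# "rightward" choice as a function of signed motion coherence.
# Stimulating a rightward-preferring MT mini-column acts like adding
# extra rightward coherence (stim_equiv). All parameters are made up.

def p_right(coherence, stim_equiv=0.0, slope=10.0):
    return 1.0 / (1.0 + math.exp(-slope * (coherence + stim_equiv)))

print(p_right(0.0))                   # 0.5: at zero coherence, chance
print(p_right(0.0, stim_equiv=0.05))  # > 0.5: stimulation biases rightward
```

Fitting the horizontal shift of such a curve is how one expresses the injected current in units of motion coherence.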
And so that sort of set a gold standard, so to speak. We were hoping to be able to do something like this with our task. Ideally, we would like to insert an electrode in OFC, stimulate, inject some current, and by doing so bias choices in the direction of, let's say, grape juice or apple juice. The challenge in doing this is that while in MT there are mini-columns of neurons that all do the same thing, in OFC that's not the case. In OFC, [01:18:00] neurons that represent the offer value of grape juice and neurons that represent the offer value of apple juice are physically intermixed with each other; they're like salt and pepper.
So if we insert an electrode in OFC and stimulate, we cannot selectively stimulate only neurons that respond to the value of apple juice and not also those that represent the value of grape juice. And so that was a challenge. For that reason, these microstimulation studies, the causality studies that we had wanted to do for years, we had in reality not done. Until a student in my lab had what I think is a genius idea. Her name is Katie Conen; she was a graduate student at the time, and her idea was to do this experiment relying on a phenomenon called range adaptation. Range adaptation means the following. Neurons encoding the offer values have a finite firing-rate [01:19:00] range. If you take a neuron, for example, its activity might vary between, let's say, five spikes per second and 20 spikes per second. And the activity of these neurons encodes the offer values in a linear way. So, for example, in one session the animal is offered quantities of apple juice.
The quantities vary between zero drops and five drops. Then there is a linear relationship between the amount of apple juice offered in any given trial, varying between zero and five drops, and the firing rate of the neuron, varying between, let's say, five spikes per second and 20 spikes per second. Range adaptation means that if I then run a second session where I offer apple juice in a range no longer between 0 and 5 drops but between 0 and 10 drops, the firing rate of these neurons adapts: there is still a linear relationship between the firing rate and the quantity offered to the animal, but the slope [01:20:00] of the encoding, the slope of that linear relationship, is now half of what it was before, such that the range of firing rates remains constant. So the slope of the encoding is inversely proportional to the range of values offered. That's range adaptation, a phenomenon I had discovered a few years before. And the idea is that if we now insert an electrode in OFC while the animal is doing this choice task and inject a little bit of current, we are going to increase the firing rate of all these neurons, offer value neurons for grape juice and for apple juice alike. If you take a neuron encoding, let's say, the value of apple juice, this increase in firing rate will, by the same logic as the Salzman studies, effectively be equivalent to increasing the value of apple juice. How much more [01:21:00] apple juice that increase in firing rate is equivalent to depends on the slope of the encoding. In fact, the equivalent increase in value is inversely proportional to the slope of the encoding. So the shallower the slope, the larger the increase in value for a given increase in firing rate. And so, to make a long story short, the prediction is this: if we insert an electrode and stimulate in OFC, we are going to increase the firing rate of offer value neurons encoding the value of apple juice and also of offer value neurons encoding the value of grape juice. The effect of the stimulation will be equivalent to increasing both values. However, the increase in the two values is not the same [01:22:00] for each juice. The value increase is proportional to the range of values offered in that session, due to range adaptation. So if we make the two ranges different, for example a session where we offer a small range of values for apple juice and a large range of values for grape juice, then when we inject current and increase both values, we actually increase the value of grape juice more. And so the net effect on choices will be to bias choices in favor of grape juice. If instead we have a session where the range of apple juice is large and the range of grape juice is small, and we inject current, then the net effect of the current will be to bias choices in favor of apple juice. That was the prediction, and we did the experiment that Katie had designed. Interestingly, she did not actually conduct the experiment herself. She came up with the idea, but the experiment was conducted by a postdoc named Sébastien Ballesta and a graduate student named Weikang Shi, who [01:23:00] were co-first authors on that paper.
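With hypothetical numbers, the range-adaptation arithmetic behind this prediction can be written out; none of these values come from the paper:

```python
# Range adaptation: the encoding slope adapts so the firing-rate range
# stays fixed, slope = fr_range / value_range. An injected firing-rate
# increase delta_fr is then worth delta_fr / slope in units of value,
# so the juice offered over the LARGER value range gains more.

FR_RANGE = 15.0  # spikes/s spanned by the neuron (e.g. 5 to 20 sp/s)

def equivalent_value_boost(delta_fr, value_range):
    slope = FR_RANGE / value_range   # range-adapted slope (sp/s per drop)
    return delta_fr / slope          # value equivalent of the stimulation

# Session: apple offered over 0-2 drops, grape over 0-10 drops.
apple = equivalent_value_boost(3.0, value_range=2.0)   # 0.4 drops-equivalent
grape = equivalent_value_boost(3.0, value_range=10.0)  # 2.0 drops-equivalent
# Same current, larger boost for grape -> choices biased toward grape.
```

Swapping the two ranges flips the sign of the predicted bias, which is exactly the signature the experiment looked for.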
And so they conducted the experiment, and the results were exactly as Katie had predicted. That essentially provided causal evidence for a relationship between the activity of these neurons, these offer value signals, and choices.
Benjamin James Kuper-Smith: Just to take a slight step back: as we've established, I'm not in electrophysiology, and I was a little surprised that you can't just stimulate the activity of an individual neuron. I would have assumed it would be possible to, let's say, record as you did in your original paper, find the neuron that encodes the value of one offer, or any of those three groups you mentioned, and then specifically increase the firing rate of that neuron in particular. I was a little bit surprised that that's not possible.
Camillo Padoa-Schioppa: Well, actually it is possible, but not in a monkey, or not with the [01:24:00] kind of preparation we've used in these experiments. Everything I told you about today is pretty standard neurophysiology: there is an electrode in the brain, the tip of the electrode captures the local voltage, essentially, and so we record the spiking activity of cells. There are other ways to record from the brain, and one way that in principle can be used in any species, but in practice is mostly used in rodents, is so-called calcium imaging. You can genetically modify neurons in the brain, by injecting viruses or by using transgenic mice, in such a way that when a neuron activates, you can see that activation through a microscope [01:25:00] or a lens, so you can actually image the activity of many neurons at once.
Okay, so that's one technique to record activity. Another technique, also in principle usable in any species but in practice mostly used in rodents, is so-called optogenetic stimulation. You can genetically modify neurons, either through viral injections or using
a transgenic animal, and you can essentially force a neuron to spike by shining light on that neuron. And what you can do also, and this is a little more cutting-edge, but it can be done: while you image a bunch of cells using two-photon calcium imaging, you can optogenetically stimulate one or two or [01:26:00] however many neurons you want. So in that sense, you can actually stimulate one neuron at a time, or a small number of neurons at a time, and not just a small number of neurons, but picking exactly what kind of neurons you want to stimulate. So we're actually attempting that kind of experiment. We also have a line of work in mice. They also make economic choices; they also choose between different juices. We do use calcium imaging, we do use optogenetics, and we're gearing up to do this kind of experiment where we stimulate a specific subset of neurons that we can look at and record from as we do the stimulation. These experiments are feasible in principle.
I don't know whether that will work out, because we're just starting, but they're also technically not trivial. For all the difficulty of running the other experiments I already told you about, these are technically more challenging, because they require not just a behavioral manipulation but [01:27:00] all these techniques, which are now relatively standard, combined together. It's feasible in principle, but we'll see if we can actually make it happen.
Benjamin James Kuper-Smith: I guess that's the beauty of not working in the field: you can be like, hey, why don't you just use this and this technique? But I guess in reality it's a bit more complicated. Yeah, I had one slightly random question that I'll add here at the end. In the experimental paradigm you have, and that most people use in these contexts, you have some sort of trade-off: some amount of something and some amount of something else, and you like both, so there's a trade-off and you can't have everything. I'm curious: is there any worth in studying trivial decisions, where it's clear what the best option is? It's a very open-ended question. Not even as a control question or something, but I'm just curious whether there's any worth in asking, you know, monkeys: do you want one or two drops [01:28:00] of juice, when you know they want one more than the other? My studies were always about trade-offs too, but I guess I'm just questioning whether something can be gained from non-trade-off decisions.
Camillo Padoa-Schioppa: I mean, if you're interested in the decision process, then I think you're better off having a task that involves a decision process, and if it's a little bit harder, there's probably more to learn. It's one thing to say that within one session you can have many trials, and some of the decisions can be pretty straightforward.
Some can even be trivial. Some can even be forced choices. In fact, we often, in fact always, have forced choices in our sessions. But it's another thing to say that you're only going to have forced choices or trivial choices. Then I don't see the advantage of doing that, unless you're [01:29:00] interested in, for example, contrasting two situations:
how different is the activity in OFC if there is a real choice or if there isn't? I mean, another way to say this is that, to my mind, everything in my field, in my work, starts from the fact that we study economic choices. And one would like to define what exactly you mean by an economic choice. Is me picking up a cup of water and drinking a sip an economic choice, or is it not? I think there are many behaviors that can be construed as choices, and we don't necessarily study all of them. That was the example I gave you earlier about habits. So at the end of the day, when we talk about economic choices, we typically start by appealing to intuition. [01:30:00] And the intuition is that the trade-off almost defines an economic choice. If there is no trade-off, as I often joke, it's a no-brainer. That doesn't mean you literally don't need a brain; you do need the brain, but it doesn't require the same mental processes, the same cognitive function. And the cognitive function I believe I'm interested in, that I believe I'm studying, is one where you do have some sort of trade-off, and that's where you need the values. In the long run, it would be nicer to have a better definition of exactly what an economic choice is. Maybe that definition would rely on a neural trait: an economic choice is the kind of choice that requires OFC, for example, something like that. I don't think we're there yet, but when we talk about economic choices, we typically appeal to intuition, and it would be nice to have a definition that [01:31:00] is a little more rigorous than just appealing to intuition. At the end of the day, maybe that will require some definition based on neuroscience. And going back to your question: yes, you can study many different behaviors, but if you're really interested in this kind of choice, I think a trade-off is quintessential to it.
Benjamin James Kuper-Smith: Yeah, okay, that's roughly what I assumed, but I was curious what you were going to say. Another quite general question, a different kind of question, is about working with different species and different kinds of brain signals. For example, I've always worked with humans so far, but obviously there are lots of advantages to working with animals, so there's always this pull: maybe I should also do a postdoc in electrophysiology and animals, something like that. [01:32:00] Well, I've actually thought about it, but the reason, for example, I haven't contacted you for a postdoc right now is that I wonder: is it a good idea? Shouldn't I just focus on, for example, doing neuroimaging in humans, rather than doing two things and doing neither correctly? So I'm curious what you think of that; I mean, we talked before, someone in your lab joined from the lab I'm potentially going to join. I'm just curious what you think about using very different kinds of brain signals, with different species that require different setups, that kind of stuff.
Camillo Padoa-Schioppa: Ultimately, we're all interested in humans, right? That's what we're all interested in. But there are limits to the kinds of experiments one can do in humans. I think a very good proxy for a lot of these experiments is the rhesus macaque, and so I've historically done most of my research in rhesus macaques. [01:33:00] Monkeys are a good proxy for certain things. If you study vision, they're an excellent proxy. If you study relatively simple choices like I'm studying, I think they're still a pretty good proxy. If you're studying language, there's no point, right? So it depends on what you're interested in. But for the kind of behavior I'm studying, I believe they're a very good proxy. Even monkeys, however, have limitations. For example, the kind of experiments I was describing a moment ago, where you can pick exactly which neurons you stimulate, are not easy to do in monkeys.
In first approximation, they cannot be done. In principle, yes; in practice, they cannot really be done. So to do those kinds of experiments, or even experiments where you don't just record from neurons but also want to know whether the neurons you're recording from are in a particular layer of cortex, or are pyramidal cells or interneurons, or what kind of interneurons they are, or whether they project to area X or receive projections from area Y: all those are really important questions if you're interested in the circuit [01:34:00] that generates decisions, but they are not questions that are easily addressed in monkeys. And that's the reason why, a few years ago, I decided to also start a line of work in mice. Mice come with trade-offs too. I mean, life is a trade-off, okay? When I study monkeys instead of humans, I gain a lot of opportunities to do things I cannot do in humans, but I also lose opportunities. For example, if I were interested in more complicated choice tasks, I would probably be limited. And certain phenomena are different; for example, it turns out that while most humans are risk averse, [01:35:00] most monkeys are risk seeking. I actually don't know why that's the case, but it bugs me a little bit. If monkeys were more like humans in that dimension, I feel it would be better for someone who does my experiments. And as I go from monkeys to mice, I pay another price.
I mean, mice can do the behavior we've been talking about, so that's good. But they have less of a frontal lobe. Their representation in OFC, unlike that in monkeys, is actually partly spatial, so there is a qualitative difference there. The groups of neurons we find are similar, but not exactly the same. So there is a cost in going to that species, but then there is the possibility of doing experiments I couldn't dream of in monkeys, let alone in humans. In general, this is true across neuroscience: [01:36:00] different species have different characteristics, and for certain questions it's perfectly reasonable to study not humans, not monkeys, not mice, but maybe fruit flies, or maybe worms, even simpler organisms. And for other questions, again, if you study language, you pretty much have no choice: you only have humans, which limits the kind of stuff you can do, but it's the only species that has real language. So that's where you are.
Benjamin James Kuper-Smith: And from a career perspective, as a scientist?
Camillo Padoa-Schioppa: From a what?
Benjamin James Kuper-Smith: A career perspective. You've answered from a scientific perspective, but I'm also curious whether you think it's best to say: everything has trade-offs, and I'm going to focus on really understanding one method and how it works, and all that kind of stuff. Or do you [01:37:00] think it's also fine to say: I'm going to follow the question wherever it takes me methodologically? I'm just curious, for example, whether that leads to someone who doesn't really understand five methods.
Camillo Padoa-Schioppa: Yeah, okay. You're right, that's a different question. It's not easy to switch from one scientific approach to a different one. Okay, for example: my whole career, I've done, essentially, neurophysiology in non-human primates. In the past few years, we've done experiments in mice: optogenetics, two-photon calcium imaging, and that kind of stuff. It was not trivial. And to this day, my students and postdocs who do those experiments understand a lot more than I do about every technical aspect of that work, in a way that is not true for the experiments in my lab [01:38:00] using monkey neurophysiology. I can still walk into my lab and run my own experiments on monkeys.
In fact, I do sometimes. I would not be able to do the same in mice. And that is, you know, a limitation, or if you want, something to keep in mind when you want to make a change. So, thinking about your case: you're finishing, or have just finished, a PhD, and you're starting a postdoc. Presumably you're thinking about the next step as one in a path of scientific inquiry, maybe looking for an academic job sometime down the road. Switching techniques, approach, preparation, or the problem you work on between your PhD and your postdoc: that's actually a good time to switch, because it's a new start. In fact, that's typically what I tell my students: if you want to make a switch, this is a perfect time to do it. But there are not [01:39:00] that many times when it's a good time.
For example, between your postdoc and your first faculty position, if you get one, that's a terrible time to make a switch. I very much discourage anyone from doing it. That's the moment to build on your strengths, to build on the stuff you have constructed. Then another time where you can make changes is later, if you become a professor: you start in academia, you get an academic job, and there's a point where you get stability. In the US you get tenure; in Germany it's a bit different, but there's a similar point. And at that point you can take risks again, to some extent, like I did starting a mouse line of work. It would have been a terrible mistake for me to do that when I started my lab. But when I started my postdoc, it was different; that could have been a good time to change completely, and to some extent I did change. I [01:40:00] started studying decision making instead of something else. I kept working with monkeys, but that is a good time to make changes, if you're so inclined. Not every time is a good time.
Benjamin James Kuper-Smith: Yeah, yeah. At the end of every episode I ask three recurring questions. The first is a book or paper you think more people should read. It can be old or new, famous or completely unknown, just something that you think people would benefit from reading.
Camillo Padoa-Schioppa: Yeah, so the paper that comes to mind when you ask that question is a paper published by Phil Anderson in 1972 in Science. The title is "More Is Different." Phil Anderson was a physicist, a statistical physicist, at Princeton. At the time, molecular biology was emerging as a hot field in science. But physicists used to look down a little bit on molecular [01:41:00] biologists, because molecular biologists seemed to be less principled and lawful than physicists were used to. And the sense of Phil Anderson's paper was to tell physicists: look, keep in mind that when systems become more numerous (so, "more is different," right?), they also become more complex, and the way you need to think about them is not the same. There are different epistemological levels of scientific thinking, and you should be respectful of this and realize that if, as a physicist, you become interested in molecular biology, let's say, you have to adapt to that way of thinking.
You cannot think that molecular biology will become, let's say, a department of physics or a subdiscipline of physics. That will not be the case. And in the same way that he made that argument for physics and molecular biology, at different times [01:42:00] that argument could be made for molecular biology and other parts of biology, for example neuroscience. There was a time when molecular biology entered neuroscience, about 20 years ago, I would say, and for a while it seemed like every problem in neuroscience was going to be, ultimately, a problem of molecular biology.
I think that was naive thinking, and most people recognize that. But in general, one has to keep in mind that different questions often correspond to different ways of scientific thinking, and they can be equally rigorous, even though they work at different levels. And that paper is also very well written. I think that thought remains valid 50 years later.
Benjamin James Kuper-Smith: Yeah, and I guess between every discipline to some degree, right? Because it's the same between [01:43:00] different types of neuroscience. You know, it's all only...
Camillo Padoa-Schioppa: Exactly: neuroscience and cognitive science; cognitive science and, I don't know, other things, political science, economics. Of course. Yeah.
Benjamin James Kuper-Smith: Yeah. Second question: something you wish you'd learned sooner? This can be from your work, from your private life, whatever. Just something that maybe would have improved your life if you'd learned it a little bit sooner.
Camillo Padoa-Schioppa: I wish I'd been exposed to computers earlier than I was. Okay, this just shows how old I am, but I was born in 1970; I was growing up in the 80s. In the 80s, computers were not as widely available as they are now. Obviously, that goes without saying. But, you know, they existed. Some people were playing with them, some people learned to program then, and that was absolutely [01:44:00] not on my cognitive map. The first time I really worked on computers for anything meaningful was when I was in my mid-twenties, in physics, writing my thesis. In fact, even my thesis was mostly paper and pen, frankly. It was, you know, theoretical work.
But I didn't even program; I used programs that existed. So then, when I did my PhD, yes, I learned to program in MATLAB. Nothing particularly sophisticated. I mean, it's a language, obviously, but it's not... I think it would have been interesting, and I would have learned something interesting, if I'd been exposed to computers and programming earlier.
Benjamin James Kuper-Smith: Would the lesson today be, then, not to be exposed to computers sooner, but to programming sooner in particular?
Camillo Padoa-Schioppa: I think so. I think... no, there are a lot of things. I mean, as I discussed at the beginning of this [01:45:00] interview, I was exposed to plenty of humanities and classics, to a fair amount of science, to languages: I learned French as a kid, and I'm speaking Italian, obviously. So, I mean, different people have different opportunities and gaps. I had a lot of opportunities; I was a lucky kid. And so, given all the opportunities that I did have, I think computer programming would have been a cool thing to learn and to be exposed to. That doesn't mean I put it at the top of the list of things people should know about.
I'm just saying that, for me, that was not at all part of my education, and it would have enriched it.
Benjamin James Kuper-Smith: Okay. Yep. I guess you already gave me a little bit of postdoc and PhD student advice, but yeah, the last question is: any advice for people [01:46:00] in that kind of transition period? Or is it just what we just talked about?
Camillo Padoa-Schioppa: Well, actually, it is pretty much what I just said. My main advice for students who finish their PhD and are thinking about a postdoc is to think broadly. Don't limit yourself to something that follows or relates closely to what you've done so far. If you want to do that, there's nothing wrong with it.
It's not that you have to change, but realize that there are not that many opportunities to make big changes in a scientific career. I mean, long ago you already chose psychology or something of that flavor, neuroscience instead of, you know, going to law school or something. So you've already made big choices. But this is a time to redirect if you need to.
[01:47:00] You have the field at your fingertips, so to speak. You've been in neuroscience or cognitive psychology for a few years. You know a lot of different things, you have read a lot of papers, you have heard talks. I don't know your personal circumstances, but many people at your stage are still relatively free.
They don't necessarily have to provide for a family, so it's a moment where you can take some risks. And even if you are not equally free, even if you do have to provide for a family, you know, as a postdoc you're paid. So you can still think in a fairly broad way. That's usually what I tell students.
And you should also know that the job market is on your side right now, meaning that every lab, or many labs, is looking to hire good postdocs. And so if you move [01:48:00] with the right timing, sufficiently in advance, you pretty much get to choose what you do, or the lab you go to, more than the other way around. If you send a letter to anyone and say, look, I'm interested in working with you, I'll be finishing (I know that that's not the case for you, I'm just saying), I'll be finishing my PhD in a year or six months or something like that, and I would like to come to your lab in the next 12 months or so: if you have a convincing case for why you find the research interesting, and if you have had a productive PhD, I would say the majority of labs are going to be open to that. So in that sense, the market is on your side. In four or five years, when you get on the job market looking for a faculty position, it's the opposite. Then there are a lot of applicants and very few jobs, and the [01:49:00] market is on the other side. Okay, the market is on the side of the departments or institutions that want to hire, and they can choose between many candidates. So where you are now is a moment for taking advantage of the liberty you have, both in terms of what to do and where and how to do it. You have a lot of freedom. And again, that doesn't always happen; there are stages or moments of your career where that's not equally true.
Benjamin James Kuper-Smith: Okay, that's nice to hear, particularly because I'd probably want to do two different things, so I can, you know, really learn different techniques. But it's funny: I was a little bit surprised at first when you said that, but then I remembered that most of the labs I know who try to find good postdocs really struggle.
Camillo Padoa-Schioppa: Yeah, that's always the case. It's historically always been the case. And it's even more so after COVID, because many fewer students [01:50:00] choose an academic path. So it's even harder to find postdocs now.
Benjamin James Kuper-Smith: Yeah. Well, I'm sorry for the PIs out there, but I'm happy for people like me.
Camillo Padoa-Schioppa: No, no, it's good. It's fine. The other thing to keep in mind is that if you did take a big risk (again, I don't know your circumstances, I don't even know exactly your age), let's say you start a postdoc in a lab that is a bit of a leap of faith. You kind of thought, okay, I'm going to try this.
Sounds pretty cool. But then, a year in, you realize it's not really working. You can always change. You cannot do that too many times, okay, but you can have a false start; it's not the end of the world. So if you do take some risks (and even if you don't), keep the lucidity it takes to realize whether things are going well or not. Because keep in mind that if you're interested in a career path going forward, your bottleneck is going to be between the postdoc and the next step, not [01:51:00] now. So as a postdoc you have a lot of freedom; it's an exciting time, but it's also a little stressful, because there is a bottleneck coming up in a way there is at no other time in a scientific career. Getting into a PhD program is not that hard. I mean, I failed all my applications except one, so, you know...
So, as I said, it's not a given to make it in. But if you really want to do it and you prepare for it, chances are you're going to get into a PhD program. Getting a postdoc: for the reasons I said, the market is on your side. Once you have a faculty job, getting tenure is not trivial.
It's a lot of work, but the main bottleneck is not getting tenure once you have the job, at least not in neuroscience. The main bottleneck is getting a job after the postdoc. So the postdoc is a time of liberty, a time of opportunity, a time of excitement, but it's also a time where, at the end, you have to have something to show. [01:52:00] So there is also some pressure that way.
Benjamin James Kuper-Smith: Yeah. It's a kind of freedom, assuming that you're going to do something cool that's going to allow you to get a...
Camillo Padoa-Schioppa: No, it's freedom. Then, you know, you have to use it in a way to do something interesting.
Benjamin James Kuper-Smith: Okay, well, thank you very much. This has been really interesting.
Camillo Padoa-Schioppa: Yeah. Thank you. It was a pleasure.