On this episode of the Utilitarian Podcast, I talk with Joseph Carlsmith. Joseph is a research analyst at Open Philanthropy and a doctoral student in philosophy at the University of Oxford. His views and opinions in this podcast are his own, and not necessarily those of Open Philanthropy.

Our conversation has three main themes. We talk about the long-term future, including the possibility of actually creating utopia. We talk about Joseph’s work on the computational power of the brain. And we talk about meta-ethics and consciousness, including discussions of illusionism and the effects of meditation.

The Utilitarian Podcast now has a dedicated website, at utilitarianpodcast.com. At the site, you’ll find full transcripts of selected episodes, including this one. These transcripts have been generously funded by James Evans. I’ve also set up an email, which is utilitarianpodcast@gmail.com where you can send criticism, questions, suggestions and so on.

TRANSCRIPT - funded by James Evans

Gus:
Thank you for coming on Utilitarian Podcast, Joseph. It's great to have you here.

Joseph:
Great to be here.

Gus:
Let's start with longtermism. Just in a basic sense, what is this?

Joseph:
So, there are different variants of longtermism. The one I'll focus on is the view that positively influencing the long-term future is a key moral priority of our time. And we can distinguish maybe two sub-claims that go into that. One is that what happens in the long-term future is extremely important, often because, in some sense, the long-term future is really big: a lot of stuff can happen, and because a lot of stuff can happen in the long-term future, it can be very important what happens there. And then a second claim is that we are in a position to have some sort of sufficiently foreseeable and positive influence on that future. And those two claims together make it a priority.

Gus:
Yeah. And is a concern with extinction risk a part of longtermism, or is that not necessarily baked in?

Joseph:
I think it's not necessarily baked in. So, there are ways of caring a lot about extinction risk that don't involve longtermism, and there are ways of being a longtermist that don't involve caring about extinction risk. And I'm happy to go into those two if you'd like. But they are empirically associated quite strongly, partly because extinction is one of the most salient candidates for a sufficiently long-lasting change. So, if we went extinct, you cancel the entire future, or at least you cancel our influence over the entire future. And because that event is so permanent, it answers a very serious objection to longtermism, which is that a lot of attempts to influence the future are going to wash out very fast. You do something now, but in a century or two, it's in the noise what influence it's having over the long-term future. But an extinction event is sufficiently locked in that it has that problem less. And there are a few other candidates for events like that.

Gus:
Would another candidate be something like spreading into the universe?

Joseph:
Well, I think spreading into the universe on its own... It's definitely a big deal, once you start spreading into the universe. I don't think it has the locked-in quality of extinction just on its own. So, for example, you could imagine, we start spreading into the universe and I take some action. I improve some policy in some country that's spreading into the universe, but within a few centuries amidst all the chaos of the intergalactic process, it's just in the noise what I did.

Gus:
Yeah. Okay. Why do you think longtermism emerged now, at this point in time? I wrote down some possible answers: I was thinking maybe it's the awareness of how big our universe is. Maybe it's our sense that we can better predict what's happening now than we could in the past. Or maybe it's just that we have made moral progress, so that we now care about people and other beings in the future. What do you think?

Joseph:
I think it's a great question and, basically, I don't know the answer. The first possibility you raised, that something about our empirical understanding of the future plays a role, I think that's a reasonable hypothesis or speculation. Longtermism does depend on a certain basic empirical picture, or at least it's associated with one in practice. So, you generally talk about how long earth may be habitable. Often people are interested in the possibility of settling beyond earth, and the possibility of doing that depends on a certain kind of understanding of the earth and of our history and of the future and stuff like that.

Joseph:
That said, if you really dig down, that's not strictly necessary. You could well imagine being someone with a very different cosmology or a very different understanding of what the future of humanity might look like and still conclude, well, maybe the future is very large and maybe we can influence it. So, I don't actually think there's one empirical picture that's sort of strictly necessary for being a longtermist, but I still think it's a really [inaudible 00:04:28].

Gus:
Yeah. Although it would be difficult if, say, you're living 2,000 years ago and your view of the world is that history is cyclical, or maybe that's-

Joseph:
Right.

Gus:
Yeah. Then it would be difficult. And say you have no idea how old the earth actually is. So, there seem to be certain things that you need to have in order to even think about the long-term future in the way that we're doing now.

Joseph:
I agree with that. I think it's also worth flagging that there are examples of many types of future-oriented ethics throughout history and throughout different cultures. Exactly where we draw the boundaries around longtermism in particular is a further question; the current variant that I'm most associated with has a particular philosophical and aesthetic flavor, and there are various associations and stuff like that. But broadly, concern for the future is a very serious part of many human cultures throughout time. And so, I think it's worth having that in mind.

Gus:
Oh, yeah. Many theories or many traditions in history have been future-oriented, have had that concern for the future. And I know you've worked with Toby Ord on his book The Precipice, and in this book he lays out many different ways to get to longtermism, or many different ways to care about the future. Of course, I think it follows quite straightforwardly from utilitarianism: we are just thinking about how many people, or how many beings, will exist in the future, and then we're calculating how important the future could be. So, that seems to be quite a straightforward path, but there are others. You might talk about this idea of carrying civilization from ancestor to descendant.

Joseph:
Yeah. So, I think there are a number of ethical views and orientations that suggest some sort of concern for the future and/or some sort of concern for human extinction and related events, which is the more direct focus of Toby's book. I think it's a further question whether those theories support longtermism as I stated it at the beginning, which specifically involves a claim about it being a key moral priority of our time, so one that's competitive with the other great causes of our era. For example, if you were interested in duties to the future that stem from the history of our ancestors, and how they've passed the baton to us and how we therefore have a duty not to drop it, that seems like a candidate consideration. It's a further question how much weight that consideration should have relative to other priorities. So, I think there are a lot of considerations of that type. There are virtues involved in caring about the future. There are various types of significance that we might have as a species in the universe. There are things you can say, but whether they support longtermism in particular, I think, is a further question.

Gus:
So, there might be moral theories that are able to support a stronger form of longtermism. Would a concern for future sentient beings, for example, be able to support a strong form of longtermism?

Joseph:
Yeah. So, I should say, I don't think you have to be a utilitarian to think that. Nor, actually, depending on how you define utilitarianism, does utilitarianism necessarily imply longtermism. So, often the way people will break this up is you have some notion of the good, where the good is translated into a ranking over worlds. And the project of ranking worlds from an impartial perspective is often understood as population ethics, that's the term people use. The consequentialist thinks that what you should do is always pick the action that leads to the best world according to that ranking, or the best expected world, or something like that. And utilitarianism is a sort of narrower view: that that's true and also that you should rank worlds, in some sense, only according to people's wellbeing, or at least that's how I understand utilitarianism.

Joseph:
So, the inputs into your calculation of how good a world is are just the wellbeing levels of the people involved. But you can still think that and have a wide variety of population-ethical views. So, for example, my understanding is you're a totalist utilitarian, someone who just sort of adds up the wellbeing levels in order to decide how good a world is. And that view, totalist utilitarianism, is going to provide very strong support for longtermism. But you could be an average utilitarian, or you could be a prioritarian, or, I mean, some people have very funky population-ethical views, and I think some of those views in population ethics could support longtermism too, but it's a slightly more complicated and empirical question.

Joseph:
So, prioritarianism, which is the view that you should give some priority to the people who are worse off, I think that could end up supporting longtermism too. Average views could end up supporting longtermism, or at least certain types of long-term action, if the people in the future would be sufficiently better off on average. But it just gets more complicated. I think it's right that totalist utilitarianism is going to give the simplest argument.

Gus:
Yeah. A further question is kind of the marketing of longtermism. So, if we want longtermism to be something that a lot of people can get behind, we might try to find out, in an honest way of course, whether there are religious ethical views that could support it, and virtue ethics, and whether all of these different ethical views might in some sense converge on, or share, a concern at least for the future.

Joseph:
I think it's very plausible that they will end up sharing a concern. And I think in a lot of cases, that's enough. The specific project of wrangling over exactly how to prioritize one cause versus another is one that I think is extremely important in certain contexts, but much less important in others. And I think often just recognizing that something is deeply important goes a good portion of the way. And I think a lot of different ethical views are in a position to do that with the future.

Gus:
We should mention why thinking in longtermist terms can be so revolutionary, how it kind of changes your ethical outlook. And this has to do with just how many beings could exist in the future. There are various attempts at calculating this, some more serious than others, but it is orders of magnitude above the current human population.

Joseph:
That's right. So, I think there's a reorientation of one's empirical perspective on the world and our place in the human story that grounds and supports longtermism very strongly, which is something like the realization of just how early we might be in the history of the human species. Humans have been around for, depending on how you define it, maybe 200,000 years. We've had agriculture for 10,000, written language for 5,000, the industrial revolution for a couple of centuries. And earth could be habitable for hundreds of millions of years, and if we settled beyond earth, it could be trillions or a hundred trillion or more. That's a very different scale of time. And like you say, we can ask how many people could live during that time on earth, and if we go beyond earth, the number balloons. I think it's important to see the question of population in that period.

Gus:
I'm sorry, did you say the number of balloons?

Joseph:
Sorry, the number balloons in the sense of gets bigger.

Gus:
Ah, the number balloons, I'm with you. Okay, yeah.

Joseph:
You could also have a lot of balloons in the long-term future if you wanted, which goes to my point, which is that I think the number of people is in some sense a lower bound, or a way of getting some sort of grip on what type of value might be at stake in what happens in the future, in terms that we're at least somewhat familiar with, but terms that also raise their own issues and connotations. It can sound like you're saying, "Oh, the world is going to be so populated. Isn't that awesome?" and people are like, "I don't know, it's going to be crowded." I think talking about it in terms of people is one specific way of talking about something that is in fact much more general, which is that there will be time and scope and resources in the future to do extraordinary, unimaginable things, on scales that dwarf in the extreme the ones we're familiar with.

Joseph:
And so, seeing our current time as a kind of precursor or seed for that future, I think, does change your orientation in a lot of ways. I should say, I don't think longtermism should involve ignoring the present or, in some sense, turning away from a lot of other more familiar ethical and practical concerns. I think there's a dance involved in integrating this kind of reorientation of perspective into a full and holistic and grounded and commonsensical life. And sometimes, in just emphasizing the "Oh my God, we might be at the very beginning of history, the future could be so big," I think you can lose the "and we need to integrate that with a lot of other aspects of ourselves and our communities and our lives."

Gus:
Yes, absolutely. And I think that longtermism as a philosophy has evolved somewhat away from a beginning in which you start out with some calculation of the number of possible beings in the future, and then you say that if we can just reduce extinction risk by .00000-whatever, then that trumps all other concerns, if these calculations are right. But as you say, it's very important to be able to integrate this ethical outlook into a common-sense life or a common-sense view of the world.

Joseph:
Yeah, I agree. And I think those sorts of calculations do, I think rightly, prompt resistance and suspicion, and people saying, "Ah, this looks like the type of idea that is totalizing and extreme, and it feels inhuman." There are a lot of, I think, worthy forms of suspicion that that type of argument gives rise to. And I encourage people encountering the idea of longtermism to sit with it. I mean, I don't think we should dismiss the big numbers; it really matters, it's an important piece, that the numbers are so big. And I think it's an important lesson of our experience with modern cosmology that sometimes the numbers are just really big. The universe is just in fact really big, and for closely related reasons, our future might be very big.

Joseph:
And that's a very important fact. But I think we should sit with that and kind of ingest it, and act on it once it feels like a real part of the world, something that's substantive and not a trick; something that has less of the flavor of being clever and more of the flavor of something that's just real. And once it starts to feel real in the way that other deep and important considerations feel real in our lives, then I think that's where it's starting to become more trustworthy. If it just feels clever, then I would be more suspicious.

Gus:
And I think, or at least for me, this has happened over time. I can't give you a good story about what my thoughts were when I first encountered longtermism. But I can say truthfully that it does seem much more normal and common-sense to me now than it did in the past. So, I guess it happens over time. Okay.

Joseph:
Yeah. I'd be interested, well, I'm happy to move on, but I'm curious at some point what engendered that transition, if you have a sense.

Gus:
Yeah. Why did this happen? It's difficult to say. I think it has something to do with me thinking about people in different times of history. So, thinking about a single person in the year 500, in the year 1,000, in the year 1,500, in 2,000, and then into the future also, and then noticing how this person has experiences that are as real as mine. And that sounds simple, but I think it has something to do with the change. So, again, noticing that the numbers represent actual people, or at least actual beings that feel a certain way. For me, that is the connection between the cold, raw numbers and caring in a human way about the future.

Joseph:
That makes sense to me. I mean, I don't think that shift is actually unique to longtermism. I think, in some sense, the recognition that other people are as real as I am, the kind of vividness and substance of people's lives independent of what you can see or what you have access to, is fundamental to a huge amount of moral life, and certainly to many types of consequentialism or other orientations towards helping others. And to some extent, longtermism's claim is that the natural extension of that is to look to the future. That is where a lot of the thing that you care about when you care about other people is most at stake, or at least tremendously at stake. And so, that's worthy of attention.

Gus:
Yeah. And you could say the same thing happened to me in terms of caring about what's called the expansion of the moral circle. So, it happened to me personally, and it has happened to our culture, where we gradually care about more and more people, and beings in general. And the edges of this currently, I would say, are caring about animals and caring about future people.

Joseph:
I somewhat agree with that. I think sometimes the moral-circle-expansion discourse is applied to longtermism too simplistically, in the sense that longtermism raises issues that make the relevant expansion be not actually to future people, but to possible people. And I think that's an important difference. So, in particular, there's this issue called the non-identity problem, which I'm guessing you've engaged with, which is basically that the actions we take today that affect the future also affect the identities of the people in the future, such that when you help someone in the future, you're not actually helping them: if Bob was going to exist if you did action A, and then you do action B instead, which in some sense creates a better future, it's not better for Bob.

Joseph:
What actually happens is Bob doesn't exist, and instead it's Sally, and Sally's life is better than Bob's. So, there's a different question, which is: suppose Bob is in the future and you could make Bob's life better by doing A or B. That's one thing. And this surprises me, because very few philosophers, if any, think this, but many people do think: if you're in the future, I just don't care as much, regardless of non-identity issues. I can help Bob, but if he lives a million years from now, he just gets some massive discount, or he gets ignored. But it's an additional question if it's Bob or Sally. That gets even more complicated, and it's an additional step of moral circle expansion that raises its own philosophical issues. So, I think it's important to have that on the radar, because the argument is easier if it's just future people, but I think it's not just future people.

Gus:
Okay. But for me, the identities don't matter at all. I gave the example of talking about a single person because it's relatable to me and it's relatable to all humans, but what I really care about is the experiences this person is having. So, maybe this distinction is not as important to me as it would be for others, I'm guessing.

Joseph:
That would be my guess. One way of seeing it is that I think many people work with a kind of harm-avoidant moral orientation. So, when I think about what makes an action right or wrong, it's sort of, well, that would harm Bob if I did it. And so, often, for example, when we think about climate change, and we think that if we mess up the planet in a way that leads to a lower standard of life for future generations, we sort of imagine those future generations being mad at us: they're sitting there in this polluted, climate-changed world, shaking their fists at previous generations. But I think, importantly, on the non-identity stuff, depending on how you think about it, that's actually not what they're doing.

Joseph:
If they're selfish, and we can talk about whether they're selfish, but if they're selfish, they're actually going: well, it's bad, but at least I got to exist. And they're kind of glad that we changed the climate in the present generation. So, I think that's a substantively different moral situation, and I think often we don't imagine that aspect. I agree that for someone like you it won't matter, because you're just not interested in identities, but I think it is a shift.

Gus:
Yeah. Okay. Let's talk about neutrality about creating positive lives. This is something I have trouble understanding. So, could you lay out, first of all, what it is, and why do people hold this position?

Joseph:
Sure. So, neutrality about creating positive lives. Positive lives means something like a life that is worth living. Exactly how we capture that is somewhat of an open question, but there's some notion of a life that's sufficiently bad that you actually would prefer not to exist rather than have that life, and a positive life is a life that's above that level. And neutrality about creating positive or happy lives is the view that, and it's a little unclear, but often you might say there's no moral reason, or no other-regarding reason, to create lives that will be happy; the fact that a life will be happy does not give you such a reason to create it.

Joseph:
That's the view. So, why would you think that? Well, I think there are a few types of justifications. One is that some people just have that intuition very directly, and in fact this has been codified in what's called the neutrality intuition. And you can see that intuition in play in some contexts, though maybe inconsistently. So, for example, when we think about having children, often people will talk about how it will affect their finances and whether it will hurt them at their job. They won't often say, "and we're giving life to a new person, and that in itself is incredibly significant." That's sort of less on the table, I think. So, that's one: you can look to our practices, how we treat the prospect of creating new people and how much we prioritize that relative to other things.

Joseph:
And then I think you can also raise metaphysical issues. Like, if the person wouldn't have existed otherwise, could it be better for them to exist? Can existence be better than non-existence for someone? And some people think that metaphysical issues in that vein make it hard to say that you're helping someone, or doing a good thing for someone, by creating them with a happy life. And then there's a further question: well, if you're not helping them or doing something good for them, then how can you be doing something good in general? So, those are maybe two categories of justification.

Gus:
Yeah. Okay. With regards to the thinking about having children, would it be enough, to be against neutrality, to think that it counts somewhat if the child has a positive life? So, if I expect my future child to have a positive life, that gives me some reason to have that child. Would that be enough to defeat neutrality?

Joseph:
Yes, it would. And that's, I think, part of why neutrality seems to me implausible, namely that it's really a very strong thesis: that there's no reason. It becomes much more controversial to see it as the same strength of reason that you have to, say, save someone's life. That, I think, is a much more substantive thesis, and there are a bunch of complex questions about how you want to weigh these considerations in actual concrete human contexts; it gets very knotty and, I think, difficult. But we don't actually have to get into that complexity to talk about neutrality itself. To reject neutrality itself, all you have to think is that it's somehow good to create a wonderful, joyful life for a person. And I agree, I feel like that claim is sufficiently weak that it's much less clear that our practices encode some sort of rejection of it.

Gus:
We could also take some sci-fi case in which I could just press a button and create a tropical island filled with happy people. That seems great to me. So, we can kind of shift intuitions around. In general, I'm very skeptical of intuition-based reasoning, exactly because we can quite easily shift intuitions around at will by presenting different thought experiments: then we have one intuition, and another intuition, and so on. So, how does this even ground our reasoning, in a sense?

Joseph:
Yeah. So, there are maybe two questions there. One is: are there intuitions to support non-neutrality about creating happy lives? And a separate question: how do we think about intuitions in general in the context of moral philosophy? On the first point, I agree. The intuition that moves me most, which I sketched out in a blog post recently, is when I imagine someone deciding whether to create me. I love being alive. I love my life. There's a lot that seems to me incredibly precious about my projects and my relationships and all this stuff. And then imagine someone in a position to create me, who can look ahead and see what my life would be and what it would mean to me, and they're choosing between that and taking a walk or having a nice afternoon.

Joseph:
And then I imagine learning that they chose to create me. I just have this feeling of, "Wow, that was an incredibly significant act." And personally I feel immensely grateful, and it seems very clear that they did something of profound significance for me. So, then I extrapolate that to other cases where I'm in a position to create someone else, and I know that they will feel about their life the same way I feel about mine. It connects with the type of thing that you were talking about, that motivated some of your relationship to longtermism.

Joseph:
Then I feel like, wow, this is a very big deal, to create someone's life. My life is a huge deal to me, and the person I'd create is just as real as I am; their life would be just as big of a deal for them. And so, there's a kind of golden-rule energy about it, almost: I don't want people to be neutral about creating me, and that seems suggestive about whether I should be neutral about creating others.

Gus:
Yeah. I can see that. Okay, utopias. You've written about utopias, and this connects with longtermism if we think that the long-term future might become a utopia. There are different ways in which we can fail to think about utopias in a productive way. And the first failure mode is being too limited in our thinking. This comes up in traditional accounts of utopia, in which you have unlimited milk and honey and beautiful harp music and so on. But from our perspective, that is a very limited account of how good life could become. Nick Bostrom has this example about apes thinking that utopia is just unlimited fruit. From our perspective, we can see that this is very limited, and our current visions of utopia, like the earlier visions of earlier humans, might similarly be limited. Why is this a problem, or is it a problem?

Joseph:
So, I think it is a problem, though I should flag that I don't think it's the biggest problem with utopian thinking. If we were cataloging failure modes of utopian thinking, I wouldn't start with underestimating how good utopia can be. If we look at history, there are very salient, sometimes horrific failures of a utopian orientation. In milder cases, you have failed cities, failed communes. In extreme cases, you have totalitarian horror on a terrifying scale. And then there are also subtler ways in which I think utopian thinking can distort or take us away from the lived present. So, I think there are a lot of failure modes of utopian thinking.

Joseph:
The one you're talking about, underestimating utopia, is I think important, partly because it points to the non-underestimating of utopia, which I think is a very big deal. But yeah. I think basically we are used to imagining a better world that's incrementally better than our own. The example of that that's sort of seared in my mind is from a friend of mine, which I mentioned in this blog post: he imagines that when he gets to utopia, he's going to sit on a giant pile of pizza and play video games all day, eating the pizza as he goes. So, this is an extreme case.

Joseph:
I think many people's vision of utopia is somewhat more lofty, and it's a little unclear whether his is tongue in cheek. But if you look at, say, the literature on utopia and the literary depictions of different kinds of quote-unquote utopian situations, many of which are actually meant as straightforward political commentary on the present world, as opposed to an actual attempt to imagine what utopia could be.

Joseph:
You have humans in kind of slightly altered material and political circumstances. So, there's a lot of abundance, maybe property is owned communally, maybe people have different types of sexual relationships, or something like that, but it looks a lot like our present world. And I think that might be helpful as a way of emotionally accessing different things that we want or care about, out of utopia or in our present world. But as a vision of what utopia could be in the best case, I think it might really drastically underestimate just how good and how different the future could be. And I think the apes example you just gave is a nice intuition pump in that respect.

Gus:
Yeah. You also mentioned this thought about utopias being actually possible, which is important, because we can be fooled into thinking otherwise. It's almost baked into the word itself: utopia as something that will never be part of the real world. But if we think of utopias scientifically, as a possible feature of the future, they could become more salient to us.

Joseph:
Yeah, I think it is a very important fact that if we play our cards right, if we are patient and mature as a civilization, we could create something of profound and incomprehensible value, and on cosmic scales, as I said. And I think the usual function of utopian thought and discourse can obscure that, partly because it's often so detached from our lived material circumstances.

Joseph:
Utopian thought often serves as a funnel for various political aspirations or religious aspirations or all sorts of things. And I think it does to some extent for me too, when I think about it. It is a distant ideal in some sense. But it's also important to remember that any utopia we actually build will not be a sort of infinite heaven. It will still be limited and gritty and contingent and resource-constrained, and there will be very specific arrangements and limitations.

Joseph:
So, it is different from a vision of heaven in that respect. But as far as I can tell, it's just totally possible to have radically different and better modes of being, in consciousness and community life and relationship. Everything we care about, as far as I can tell, it's physically possible to take the direction our minds and our lives move when they become better and extrapolate that to a depth and profundity that's really beyond what we normally experience. As far as I can tell, that's just a mundane physical possibility, and something that, if we act maturely in light of it, we might actually create.

Gus:
We might go back to the ancestor example and think about bringing a person from the year 500 to Switzerland or Norway today. Switzerland and Norway today are limited. They are resource-constrained. They are gritty in a sense. But they are just so much better than life in the year 500 in many ways.

Joseph:
I think that's true, but I would be inclined to put that at the more limited, more concrete end of utopian transitions. If you took someone from the year 500, they would still recognize a lot of this stuff. We're still sitting around and eating food and all this. I don't know exactly how different a long-term-future utopia might be, but I expect it would be much more different from our world than Switzerland today is from the year 500.

Gus:
The problem we're stumbling into now is this other failure mode of utopias, which is that they become incomprehensible. When we try to describe how good things could become, it becomes psychologically distant from us. One way to try to describe a possible utopia is to ask a person to pick out the three best experiences of her life and then say something like, "Well, life could be like that all the time, and even better." But it's just very difficult to connect to, because life isn't like that. And I think we're wary of being let down, and we are naturally skeptical, which is healthy but also limiting. I've noted that psychedelics and meditative bliss might give us some kind of insight into what utopia could be like. Do you have any insight here?

Joseph:
I think many, many types of amazing experiences can point in the direction that one hopes a utopian trajectory will travel. Putting it in terms like "it could be like that all the time" can feel like, "Oh, I don't know. Do I really want to be..." Say you were in the throes of some incredible music, or some wonderful experience of love or something like that. It doesn't need to be meditation or psychedelics. It doesn't even need to be mystical or otherworldly. It's just true happiness or joy or love or beauty, energy, immensity. We have this whole panoply of amazing experiences. And they're not just personal. We have communal experiences of being together with other people. It doesn't need to be limited to your own head or anything like that.

Joseph:
We can watch our personal and communal lives move in the direction, sometimes, of really profound goodness. And then we can say, "What is that direction? That's an interesting direction. That's really important evidence. We just saw a shift along a trajectory that we think matters." And then you look down that trajectory, look in that direction, and you go, "Oh, wait. That goes much farther. There's no limit to that."

Joseph:
Or if not literally no limit, it looks like we can go a lot further in that direction. And I think you can do that with a lot of directions. But I do think peak experiences of many kinds are an especially vivid way of doing it, partly because the best experiences in our lives often have a quality of almost surprise: "Whoa, this is very different. I didn't know. This is the real thing," or "This is a more real thing."

Joseph:
And they can be hard to remember, like you said. So, people are skeptical for a variety of reasons when you say, "Ah, remember that amazing experience. Doesn't that point to something important?" It can feel like, "I guess that was good. But right now my back hurts, I'm sitting here, I'm stressed about my job, and it's not cognitively accessible what that was really like."

Gus:
Yeah. And the same might be said if we think again in sci-fi terms. We could think of a brain the size of a planet, with just extreme levels of intelligence and consciousness. What this brain could do or feel would be incomprehensible to us. But when I say these words and we picture this planet in our heads, we see something stale and boring, and it doesn't connect. At least it doesn't for me, even though I accept this way of thinking. So yeah, there are some limits there.

Joseph:
I agree. I think there is a real risk of falling into a degree of incomprehensibility, or alien types of scale and other things, that just cuts off the sense that this is a utopia for us, or a place where it's clear to me that what I value is at stake. And definitely that applies to things like brains the size of planets. Again, I think the best way to think about that is as a pointer: "this will be very different." It involves, in some sense, alien degrees of scale and sophistication and all sorts of stuff. But I do think it's important to remember that utopia is by definition the one that does have the stuff that you really care about.

Joseph:
It has it to a much deeper and more intense degree, but you don't need to go any further than you actually want to. If you don't like planet-sized whatevers, there's no obligation. You can just stop where what you actually care about stops. But I think what we actually care about goes very far. It goes further than beach vacations. It goes further than just a fixed-up version of our world. I think it goes to something radical.

Gus:
And we could also say that if we stepped into the perspective of this planet-sized brain, and say it has amazing experiences as well, then those experiences would be our experiences. So there's this kind of breaking of the barrier of perspective, which could also be unnecessary. But I was thinking maybe the comparison between a person from the year 500 and a person in Switzerland today isn't appropriate here. Maybe a better comparison would be a sea creature from, say, a hundred million years ago, and a person from Switzerland or Norway today, where there's just no way for the sea creature to even understand what's going on for the person in Norway.

Joseph:
I think that is a productive reframe, but also one that raises some of these issues about alienation. And obviously, I should say, here again I'm speaking totally for myself and not for my employer on all this stuff. These questions about really changing ourselves are very complicated, and there's a lot of room and need for caution: there are ethical and practical and safety questions, all sorts of things implicated by the idea of, in some sense, transitioning from a sea creature to something else. And as you say, it's not totally clear that if you told the sea creature, "Hey, you could be a human," they'd say yes. The worry doesn't just need to be, "Oh, would I have algae as a human?"

Joseph:
It could just be, "That wouldn't even be me as a human." But broadly, yes, I think we should move more in the direction of recognizing just how far away and how different a utopia might be. But we should remember it will be our utopia; the one we should be shooting for is still recognizable in some sense. And I mentioned in the post that, at least how I imagine it, if you went to this utopia, you would recognize in some sense the same thing that made you sit up straight, that shocked you or compelled you, about everything that you love and care about in your own real life. You would recognize: it's the same spark for the bonfire. It's still the good. I think that's an important piece. There would be that connection and that thread between them, whatever their other differences.

Gus:
Yeah. Okay. Let's shift gears and talk about your report. You've written a long report called How Much Computational Power Does It Take to Match the Human Brain?

Gus:
This was a project for Open Philanthropy. Yeah. Okay. To understand what you conclude here, we need to understand what FLOPS are. So could you tell us about FLOPS?

Joseph:
Sure. So, FLOPS are floating-point operations, where a floating-point operation is basically an arithmetic operation, like addition or multiplication or division, performed on numbers represented in a format akin to scientific notation. Floating-point operations are a very common type of operation. They're not the only type of operation computers can do, and there's some complexity here, but a common metric of computational power is: how many of these operations can you do in a second?
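To make the term concrete, here's a minimal sketch in Python. The numbers are arbitrary illustrations, not figures from the conversation:

```python
import math

# A floating-point number is stored as a significand (mantissa) times a
# power of two -- a binary cousin of scientific notation like 6.02e23.
mantissa, exponent = math.frexp(0.15625)
print(mantissa, exponent)        # 0.625 -2, since 0.625 * 2**-2 == 0.15625

# Each arithmetic step on such numbers is one floating-point operation.
# A dot product of two length-n vectors takes n multiplies and n - 1
# additions, so roughly 2n FLOPs in total.
def dot(a, b):
    total = 0.0
    for x, y in zip(a, b):
        total += x * y           # one multiply and one add per element
    return total

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))   # 32.0
```

Counting how many such operations a machine can do per second gives the FLOPS-per-second figures discussed below.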

Gus:
Yeah. And we need some kind of perspective here. So when we talk about FLOPS, could we measure it in terms of smartphones or laptops running in parallel? How would you present it in a more understandable way?

Joseph:
Yeah, that's a good question. I don't have numbers off the top of my head for how many FLOPS a laptop can do. So I maybe just don't have a good way of making it more understandable; to give someone a sense of how many FLOPS is what, I'm actually not sure. I can speak to supercomputers.

Gus:
You do-

Joseph:
Go ahead.

Gus:
You mention the $10,000 supercomputer at some point, as a point of comparison. We might use that.

Joseph:
That's right. So I can give you numbers for different high-performance computers from the past couple of years. A roughly $10,000 computer from two years ago or so could, at maximum or ideal performance, do something like 10 to the 14 floating-point operations per second. Newer computers for $200,000 can do, I think, four times 10 to the 15 floating-point operations per second. The best, or one of the best, supercomputers we have, which cost something like a billion dollars, is somewhere between 10 to the 17 and 10 to the 18. And there are other machine-learning-oriented computers being built, or that have been built, that are in comparable ranges. So that's a lot. 10 to the 17 operations per second is a big number.

Gus:
Yeah. And 10 to the 17 per second is also way more than 10 to the 14 per second. That is also something to notice right there, which I tend to forget.

Joseph:
Yeah. Though it's also way more expensive.

Gus:
Yeah.

Joseph:
The top computer, I think, is like a billion dollars, and the 10 to the 14 computer is $10,000.
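Putting those figures on a common footing as FLOPS per dollar makes the trade-off explicit. Prices and performance are as quoted above; the supercomputer is taken at a round 5 × 10^17, an assumption within the stated 10^17-10^18 range:

```python
# Peak performance (FLOP/s) and rough price (USD) as quoted above.
machines = {
    "$10k computer": (1e14, 1e4),
    "$200k computer": (4e15, 2e5),
    "$1B supercomputer": (5e17, 1e9),  # "between 10^17 and 10^18"
}
for name, (flops, dollars) in machines.items():
    print(f"{name}: {flops / dollars:.0e} FLOP/s per dollar")
```

On these numbers the billion-dollar machine delivers the *fewest* operations per dollar (about 5e8, versus 1e10 for the cheap machine), which is the point being made: the top end buys scale, not efficiency.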

Gus:
Okay. So, what is the answer to the basic question of the report then? Do we have a good idea of how much computational power is required to match the human brain?

Joseph:
One thing I'll add on the FLOPS point, which is important to keep in mind, is that FLOPS are very far from the only thing you need for a given computer application. You also need things like memory, memory bandwidth, and the ability to move information around in the system. FLOPS by itself is a limited metric of computational power, and it's only one dimension; a lot of other things matter. So even if one of these estimates is right, that's not itself enough to have the hardware you need, let alone the software.

Gus:
But I assume you chose FLOPS for a reason. It must be central. There might be correlations between FLOPS and the ability to move information around, and so on. Why FLOPS as the central metric?

Joseph:
It's a good question. Part of this connects with the broader question of why I did this report and what it feeds into. My colleague at Open Philanthropy, Ajeya Cotra, has a broader model that she's been using to think about when we might see AI capabilities that match human capabilities in various domains.

Joseph:
That model uses various assumptions about the costs of training in machine learning, and about how that training relates to the size and computational burden of running the system being trained. The methodology she's using takes certain biological data points as anchors, related, for example, to the brain and to the human lifetime, and then extrapolates from those to the costs of training machine learning models of different sizes.

Joseph:
And those costs are measured in floating-point operations. In order to train a system, you have to run it a really large number of times, and running it that number of times requires a very large number of floating-point operations. That's, in some sense, the central reason we care about that metric, or I do, and Ajeya does. But it's just not the only thing; it doesn't answer the whole question. So it's a limitation of the project that it's only this one metric.
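As an illustration of how training costs get denominated in floating-point operations, here is a hypothetical back-of-the-envelope. It uses a common rule of thumb from the scaling literature, roughly 6 FLOPs per parameter per training token; both the rule and the model/data sizes are my assumptions for illustration, not figures from Ajeya's report:

```python
# Rough training cost via the ~6 * N * D rule of thumb (an assumption:
# about 6 FLOPs per parameter per token seen during training).
def training_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

cost = training_flops(n_params=1e11, n_tokens=3e11)   # made-up sizes
seconds = cost / 1e17    # on a 10^17 FLOP/s machine at full utilization
print(f"{cost:.1e} FLOPs, about {seconds / 86400:.0f} days")
```

The point is just the shape of the calculation: training cost scales with how big the model is and how many times you effectively run it, which is why the per-second FLOP figures for hardware feed directly into training-cost estimates.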

Gus:
Remind me, what is transformative AI?

Joseph:
Transformative AI, as defined in a blog post that Open Philanthropy put out, I think back in 2016... there are a few different ways of defining it, but the one I have in mind is AI that precipitates a transition of comparable significance to the agricultural or industrial revolutions. Just something that changes society in as big a way as those did. It's still pretty loose, but that's the general idea. And then we can try to operationalize it in different ways. Ajeya, as I recall, is particularly interested in certain metrics of growth rates in the economy.

Gus:
Yeah. We might say this is AI that is a really, really big deal.

Joseph:
Yeah. I think that's what it's aimed at pointing out. Yeah.

Gus:
Yeah. Was this project also motivated by the recent successes in machine learning from DeepMind and OpenAI, with training models that are growing very fast?

Joseph:
I think to some extent. I should say, again, I'm just speaking for myself here; various people at Open Phil have very different views about all these things. Speaking for myself, I do think the recent results we've seen in machine learning have been suggestive of the potential for progress, and in particular suggestive of hypotheses about the role of scaling up the sizes of the models and of the training runs, and what we might see just from that. Ajeya's report proceeds via an interest in that particular method of extrapolation. And there are some interesting results from people like Jared Kaplan and others that look at how the performance of these systems scales just with their size and with the data used to train them.

Joseph:
I think there's a lot of uncertainty about all of that; I feel very uncertain about it. But I do think it's interesting to see the progress we've made. I'm very interested in the results from GPT-3, this system that can generate a lot of different human-like language. But I also think we should hold all that with caution and a grain of salt. It's easy to get caught up in just extrapolating your sense of being impressed: "How impressed am I feeling right now, and what does that tell me about AI?" I just don't think that's a very reliable methodology, and that's partly why trying to get a little more quantitative can be helpful, though obviously that has a lot of its own issues.

Gus:
Okay. So, for this report, you contacted a lot of experts in neuroscience and computer science. And it's not like you deliver a definitive result; you sketch out a range of uncertainty about what computation might be needed to match the human brain. But what is the central result?

Joseph:
I think the central result is something like this: there's a range that a number of different ways of estimating this weakly point to, which is around 10 to the 13 to 10 to the 17 floating-point operations per second, and the distribution spreads out around that. I considered four ways of estimating this quantity, one of which attempts to find something more like a limit, an upper bound, by looking at the energy constraints the brain is subject to and how those relate to the computation it could be performing. And I ultimately have a distribution, for running a certain type of model, that centers around 10 to the 15. I hold this really lightly. But I think there's evidence we can look at, and here's what I found.
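A quick way to place that range against the hardware figures from earlier in the conversation (a rough comparison, treating the $10,000 machine as 10^14 FLOP/s):

```python
import math

brain_low, brain_center, brain_high = 1e13, 1e15, 1e17  # report's rough range
cheap_machine = 1e14       # ~$10k machine quoted earlier

# The range spans four orders of magnitude...
print(math.log10(brain_high / brain_low))   # 4.0
# ...and the central estimate is ten of the cheap machines,
# well within reach of a top supercomputer.
print(brain_center / cheap_machine)         # 10.0
```

So under the central estimate, the raw FLOP/s are already available; as Joseph goes on to say, the harder questions are memory, bandwidth, and above all software.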

Gus:
So you would be pretty confident that a present-day billion-dollar supercomputer could match the computational abilities of the human brain?

Joseph:
Yes, in principle; pretty confident. But I think it's important to remember that the software piece here is incredibly important, and that actual brain models, or the actual systems we run that first perform a given task, need not be the ones that are in some sense most efficient. Nor need they resemble "the brain's way of doing it."

Joseph:
And the brain's way itself, importantly, need not be the most efficient way. So the question here is really: could you in principle do it with this much compute? And again, you also need the memory and other things, and there are a number of further steps before you start talking about actual timelines. That's partly why I think Ajeya's report is the key thing to look at if you're interested in moving from the estimates in my report to any concrete predictions about the world.

Gus:
Could you tell us the conclusions from that report, or sum them up for us?

Joseph:
I'll let Ajeya speak for herself on that. Very broadly, and speaking for myself, I think it's plausible that we see very, very powerful and advanced human-like capabilities from AI of all sorts of kinds this century; I put that above 50%. I get some of that from Ajeya's report, some from experts, some from other outside-view considerations. Open Philanthropy has been trying to get at this question from a number of different angles, and there have been different reports in the past year or so that all inform my own sense of what's plausible here.

Gus:
Okay. So, back to your report. One might ask why it's even interesting to talk about the computational power of the human brain, or why specifically that target. Because it's not like there's any implication that if we have a supercomputer with the computational power of the human brain, then it can do what a human brain can do. So why this target?

Joseph:
I think it's a good question. As I say, I actually don't think this data point on its own is that much information. I think it does matter, though. If we were in a regime where our computers were many, many orders of magnitude away from the vicinity of what the brain is doing, in terms of the number of operations or general computational sophistication, you might think, "Ah, that's evidence that we're not in a very good position to automate these cognitive tasks." And that sort of thought gets more compelling the more interested you are in evolution as an "engineer."

Joseph:
In some sense, you're wondering how our own engineering capacities will compare to evolution's in this respect. If there were a way to do the types of things the human brain does in a radically more computationally efficient manner, would we expect evolution to have found it? So you can start asking these questions about how human AI-development capabilities might compare, in some loose sense, with evolution's.

Joseph:
I think there are a lot of reasons to be skeptical about that, and there are conceptual problems with thinking about the brain in this way. So I hold that particular line of inquiry pretty lightly. But I think it can give you something, in the sense that if our computers were just nowhere in sight of the brain's level of sophistication, that would be a reason to think we're further away from human-like AI.

Gus:
And if we reverse that, we can say that if we don't find these hardware limits, if we're not orders of magnitude away from the human brain, then it might be more of a software problem than a hardware problem to get to human-level or transformative AI. So there might be a hardware overhang, and the software has to catch up.

Joseph:
Yeah. I think that's one possibility. With these questions about hardware and software, you want to look empirically at how software and hardware have each been the bottleneck in different ways. I think this data about the brain is most interesting if you think hardware is really the driver of a lot of this progress.

Joseph:
Because then you think that once you have hardware of the right scale, software is maybe not that far behind; but if you don't have enough hardware, then you're stuck. And if you have that type of view, which I think has informed some estimates in this vein in the past, then you get more interested in these hardware statistics.

Joseph:
The distinctive angle that Ajeya takes in her report is to view the software problem as, in some sense, at least partly translatable into a hardware estimate, because machine learning effectively allows you to find software that you don't otherwise know how to program directly. If you know how to train a system, which is again an important open question, and you have the hardware and the data and the resources, then that might get you your software.

Joseph:
So I think there's a way in which bringing in this machine learning angle can start to bridge the gap between the hardware question and the software question. Obviously, tons of questions remain, and it's not at all open-and-shut that it works.

Gus:
No. Yeah. So, there's definitely this question of whether we can "just" keep scaling up our hardware and keep throwing more data at our machine learning models, and whether they will continue to deliver more and more interesting results, or whether some theoretical or software-level breakthrough will be needed before we can create more advanced systems.

Joseph:
I agree. I think there are questions in that vein. There are questions about whether we get the data we need. There are questions about whether... There are tons of questions.

Gus:
Yeah. Okay. When you were working on this report, did you learn anything about the rate of progress? Because we've been used to thinking about the computer industry as a place of fast progress. So even if the fastest supercomputer does 10 to the 17 FLOPS per second now, it might be very different in just five years or something. What did you learn about the rate of progress?

Joseph:
I didn't spend a lot of time extrapolating where hardware might go, so I don't have a lot to add there. My understanding is that Moore's law has slowed down somewhat. In particular, the aspect to do with energy usage per unit volume has slowed down a lot, and we've moved to more parallel computation. But we're still getting reductions in FLOPS per dollar, which is the metric that matters most in the context of Ajeya's report. So I think there's room for better hardware. Exactly how much, I'm not sure.

Gus:
I was thinking about the difference in energy requirements between the human brain and a billion-dollar supercomputer. You mentioned before this idea of evolution as an engineer, comparing the output of evolution, the human brain, with the output of human engineers, which would be this supercomputer. I'm guessing it's true that the supercomputer uses much, much more energy than the human brain. So is this a limit? Is there an energy limit, or is that not a problem?

Joseph:
Could you say a little bit more about what you mean by energy limit?

Gus:
Might it be a limiting factor that our supercomputers require this much energy to run? For example, if they require as much energy as this billion-dollar supercomputer we keep mentioning, is there some limit to how much we can do, coming from energy requirements?

Joseph:
I think there could be. Absent suitable improvements in the energy efficiency of computation, there are just limits on how many operations you can perform on a given energy budget. And if you want to perform a very large number of operations, then at present you're going to use a huge amount of energy. So yes, depending on what you want to do, energy is a constraint. I think there's a separate question, which is how much the brain's energy efficiency tells us about the comparative engineering skill of evolution versus humans. It's true the brain uses 20 watts or so of energy, which is very, very little relative to a supercomputer.

Joseph:
And there's a reason for that, which is that evolution in general is very constrained by metabolic costs. The brain is already a very energy-intensive organ relative to the body as a whole; I think the body is something like 100 watts, I forget exactly what it is. So there was a lot of pressure on evolution to keep energy costs low. And that, importantly, is an advantage we have in trying to build machines that do what the brain does: we don't have to solve quite the problem that evolution had to solve. That's true of a lot of different dimensions of the brain. The brain has to have its own blueprint stored inside of it. It needs to repair itself from the inside. It needs to fit within a skull that can go through the birth canal. There are a lot of constraints that evolution faced that we don't face. That said, I want to emphasize that the engineering challenge here is not just building the computers; it's the software too. Finding the program, in some sense, that will perform these tasks is itself a substantial additional challenge, and one where I think a lot of the uncertainty lies. So the hardware challenge isn't the only dimension.
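The 20-watt figure can be turned into a rough efficiency comparison. Here the brain's FLOP/s-equivalent is the report's central estimate, and the supercomputer's performance and power draw are assumed round numbers of roughly the right scale for a top machine, so treat the ratio as an order-of-magnitude sketch only:

```python
# Operations per joule (i.e. per watt-second) -- an order-of-magnitude sketch.
brain_flops, brain_watts = 1e15, 20       # central estimate; ~20 W metabolic budget
super_flops, super_watts = 2e17, 1.5e7    # assumed: ~2e17 FLOP/s at ~15 MW
brain_eff = brain_flops / brain_watts     # 5e13 ops per joule
super_eff = super_flops / super_watts     # ~1.3e10 ops per joule
print(f"brain is roughly {brain_eff / super_eff:.0f}x more energy-efficient")
```

On these assumed numbers the brain comes out thousands of times more energy-efficient per operation, which is the gap that the metabolic-constraint point above is explaining.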

Gus:
Yeah, definitely. One way we might bypass this software challenge is with mind uploading, which is itself a very, very speculative technology. But if we could somehow scan the brain and create a functional duplicate of it in a computer, that might be a way to get to human-level software without actually understanding what the brain is doing. I don't know whether this figured in at all, but what do you think of this in general?

Joseph:
One update I made from the report is that I've become more skeptical about the feasibility of mind uploading without pretty radical changes in our neuroscientific understanding. A big lesson for me, and there's a section of the report devoted just to this, is how far we have to go in neuroscience. I had heard coming in, "Oh, there's a lot we don't understand about the brain," but talking with the experts made it a lot more visceral. Just as one example, we cannot at this point record from, or stimulate, all of the synapses on a neuron, especially in vivo. And that makes it very difficult to actually test the input/output behavior of neurons, because you can't present arbitrary patterns of inputs and observe the outputs and their effect on the downstream neurons.

Joseph:
That's one example of a broader pattern: we're really data-constrained, and it's just hard to get the amount of data we need out of the brain. And then, additionally, it's hard to understand how a lot of these circuits actually work, because it's very hard to tell which parts of a system matter if you don't know what the system is doing. This is a general problem neuroscientists have. They look at a glob of brain and see a bunch of mechanisms, but if you don't know which features of its behavior are ultimately important to the functional role it's playing, it can be very hard to isolate which parts can be simplified. So I think how little of the brain we understand is a general barrier to mind uploading. There's a lot to do there.

Joseph:
A second thing is this bottom-up approach, where you try, without understanding how the brain works, to build in as much of the underlying biophysical complexity as you can. In the extreme case, which I don't think is what proponents of mind uploading are actually imagining, and which is unrealistic to the point of parody, you just build the most detailed model you can, turn it on, and hope it works, without understanding how the thing actually goes. For one thing, I think the computational requirements for that are likely to be much more extreme than the requirements for running a model you actually understood, one where you've taken advantage of all the available simplifications. Because the mind uploading project, as just described, purposely doesn't do that. It tries to go low-level enough that you don't need to understand what the thing is doing well enough to simplify it.

Joseph:
But it also just seems really hard to build a system at that level without understanding it and get it to work. So broadly, I think mind uploading is possible eventually, but I don't expect it to be the central route via which we end up automating these tasks. And I also think that along the way to mind uploading, we will sort of learn stuff that allows us to not do mind uploading in the full sense. Yeah.

Gus:
Yeah. So should we care how the brain does anything at all? So is it even worth... Of course there could be other reasons, but if our project is to reach human-level performance on certain cognitive tasks, should we care about how the brain is doing it? Or should we just develop, say, machine learning models from the ground up?

Joseph:
I think, ultimately, what we care about, if we are doing something like trying to automate certain cognitive tasks, is the task performance itself. That isn't the only thing you can do in this space. As you say, with mind uploading, for example, you might be interested in continuity of personality or personal identity, or you might be interested in consciousness or moral status, or all sorts of things that aren't just about task performance. But if you're just talking about task performance, then yeah, roughly speaking, I think you can do it whatever way you want, and getting too interested in how the brain does it, I think, can be a barrier. And sometimes there's this, I think, unfortunate back and forth at the intersection of neuroscience and AI, where people try to do things in AI, and then someone maybe says, but that's not how the brain does it. And there's sort of a confusion about whether what you're trying to do is understand the brain or trying to replicate these tasks.

Joseph:
I do think how the brain does it is an important clue. And if you're struggling to replicate the task via any other way, and you have an example of a system that does it, it can be worth trying to understand at different levels of abstraction, how the existence proof works. But that's just, it's an instrumental argument. It might be helpful and it might not be helpful. And it might be that we have evidence that other routes are preferable.

Gus:
I've also heard it the other way around, in which people claim that our improved understanding of machine learning allows us to understand how the brain achieves certain tasks. And the claims are that the brain does something that's analogous to what our machine learning models do in certain scenarios. So the brain might do backpropagation, or the brain might be a Bayesian updater, and so on. Is this plausible to you?

Joseph:
I think it's certainly the case that we can learn stuff about... That there are interesting similarities between certain types of machine learning systems and certain types of brain circuits. I think the examples most salient to me come in the visual cortex. So, if you look at how... You can find interesting ways in which you can use trained neural networks. Neural networks that are trained to classify images can be used to predict the activity of neurons in the visual cortex without sort of being directly trained on the neural data itself. And that's true in even fine-grained ways, in that different layers in the network will be predictive of different parts of the visual cortex to different degrees. There's some very interesting work from Daniel Yamins and James DiCarlo that explores this, and there's a bunch of other efforts in that vein that are ongoing in the auditory cortex and in motor cortex comparisons. There are sort of interesting examples of what are called grid cells that might show up in machine learning systems.

Joseph:
So I think it's a very interesting project to look at these systems that we train, not on neural data, we just train them to do tasks, and then look at: are they doing things that are sort of similar to what we see in the brain? I think there's a lot to be learned there and a lot of fruitful intersections. I also think we should be cautious with things like backpropagation. There's a big debate about, does the brain do backpropagation, Bayesian updating, stuff like that? I think one thing that at least one expert made salient to me, which seems somewhat plausible, is that because the brain is so mysterious, we're very tempted to import the things that we do understand, or the newest, hottest thing that seems like it works and matters. We're tempted to be like, maybe that's what the brain is doing. And it might well be. At a certain point that might actually be true, but there's also a danger of doing that overeagerly or without taking the brain on its own terms. I'm not in a good position to evaluate particular cases there, but I do think that's a salient failure mode.

Gus:
So you talked to a bunch of experts, both in computer science and neuroscience, as I mentioned. Did any of them object to comparing the brain to a computer or was this accepted because you're doing a functional comparison? Did anyone tell you something like, well, we simply can't compare it because the brain is very different from a digital computer?

Joseph:
So the brain is definitely very different from a digital computer in tons of ways. And the report that I... The framing of the report doesn't actually assume that the brain is a computer, or is analogous to a computer. The assumption of the report is that a sufficient number of floating point operations sort of suitably arranged and in the context of the other computational resources you need can in principle replicate the brain's task performance. So in the extreme, that could be a very, very detailed biophysical simulation of the brain. And no one that's coming to mind of the experts I've talked to explicitly rejected the kind of in principle possibility of replicating the brain's task performance with sufficient computational resources, which is distinct from saying, the brain is a computer.

Joseph:
I think that type of talk can actually be pretty loose. And I have a long appendix in the report talking about, especially in the context of floating point operations, what are the different ways of understanding the notion of the brain's computational capacity and how does that relate to the project of the report? And I actually think there's a lot of conceptual ambiguity there, and people conflate lots of things pretty rampantly. But no one rejected that sort of, you can do it in principle with computers claim.

Gus:
In an overall sense, after having done this report, do you think it changed your AI timelines? When we're talking about AI timelines, it's your predictions for when we might see AI that's truly transformative, or artificial general intelligence. There are many ways to describe this phenomenon. Did it change your predictions, or was it too, let's say, in-depth and limited in scope to say anything broad?

Joseph:
I think the report itself didn't very directly change my AI timelines. In the context of thinking about some of the considerations in Ajeya's report and some of the other work, I did at least solidify or become more thoughtful about my AI timelines during the period that I was working on the report and afterward, but I think that was centrally via (a) how the report feeds into Ajeya's model, but also just learning more about the field and thinking about the other considerations that Open Phil and others have been investigating.

Gus:
Yeah. So we have now talked about two things, or two considerations, that might really change your ethical outlook: longtermism is one of them, and the prospect of truly transformative AI is another. Both of these things can feel alienating in a sense, which we also touched upon, where the world suddenly seems very overwhelming. And it might feel like you have no part to play, or that things will change so fast that you cannot be expected to keep up. Do you share these feelings, or how do you feel about these two considerations?

Joseph:
I think it can feel overwhelming. Yeah, I agree. Yeah, I think that's true. It's not totally clear to me how different that is from the general sense in which the world is overwhelming. It's sort of not clear, even if transformative AI wasn't coming soon and even if the future wasn't very big or didn't matter, I think it can readily feel like it's hard to make a difference in this world. It's hard to grapple with all the things that are happening at once and to identify what levers are available to you to make what sort of difference. And-

Gus:
The thing I was-

Joseph:
Yeah, go ahead.

Gus:
... trying to point out was... So maybe in a more traditional sense, maybe my future goals would be that I want a house and I will plant some trees and we can set up a swing so my kids can swing from the tree. I have longterm plans that assume a world that is unchanging, but thinking about longterm, I might decide, well, maybe I shouldn't spend my time planting trees. Maybe I should try to reduce the risk of nuclear extinction. And again, thinking about AI timelines, maybe this future suddenly seems much less certain in a sense. Of course, there are many, many aspects of the world that are more uncertain now, but especially transformative AI, now it sounds trivial, but it might be especially transformative. So, yeah.

Joseph:
Yeah. I think there are maybe two different dimensions there. So with the longtermism piece, whether you should plant trees and build swings, I guess I would see that as still in the same vein as the questions you might ask if you were not a longtermist, but nevertheless were aware of pressing ethical demands and problems in the world. The question of whether you should plant trees is also at stake when you ask what sort of difference you can make to poverty or global health or animal suffering, or just the whole panoply of issues and injustices that this world hosts. I think there's a general question of how do we balance the sorts of rich personal forms of meaning and flourishing that we care a lot about with the kind of opportunities to change the world or to make a difference that are available to us? I think it can feel like the stakes for longtermists are especially high if you feel like the world is going to end or something like that. But I actually think a lot of the dilemmas there are pretty similar.

Joseph:
The AI piece, I think, is somewhat different, in that it might affect what happens to the trees that you plant, or just your planning of your life. Depending on what you think will happen in the future, you might have a significant probability on pretty radical changes, and that can just feed into your own personal planning and into your higher-level understanding of what you're trying to do in the world. And I think in ways that we don't have a lot of practice thinking about. To some extent we do have practice: we've seen important, big technological changes throughout the 20th century, and probably in the past too, but at a much slower pace than at least some predictions of transformative AI would imply. So yeah, I think that does suggest a disorientation, or it can.

Gus:
Another example might be thinking about doing scholarly research, thinking about writing philosophy papers, for example, which is, as you know-

Joseph:
Right.

Gus:
... a very slow process and a longterm thing. Your plan could be to explore a topic and this can quite easily become a decade-long project. So if you think about the prospect of getting transformative AI, then it might sort of undermine your, well, for one thing, motivation, but also undermine your plans because, well, I guess we could get these models that will just perform far better than me at writing philosophy papers, or solving theoretical problems, and so on. Is this something you worry about? Is this something you think about?

Joseph:
I do think about that in ways. I think Nick Bostrom has written about this, where he views a lot of intellectual progress as centrally about making the insight or the contribution available earlier than it would have otherwise been, which I think is sometimes not how it's thought about. Sometimes it's sort of like, well, I discovered this, therefore I sort of contributed the whole value of that discovery. But if you think it would have been discovered by someone else soon afterward, then really, what you contributed is the difference in time. And in the context of-

Gus:
Yeah. It can often feel magical, as if the world had kind of planned for a certain discovery to arise at a certain time. You see this with simultaneous discoveries in both practical domains and theoretical domains. Yeah, go ahead. I'm sorry I interrupted.

Joseph:
Yeah. Yeah. I agree with that. And it's interesting. It's surprising in some sense. I wonder how true that is across domains. The examples I'm most aware of are in physics and engineering and stuff like that. I'm less clear on philosophy, whether people write the same philosophy paper at the same time. Maybe there's some of that, but I think in general-

Gus:
For example-

Joseph:
Yeah. Go ahead.

Gus:
I'm sorry, go ahead.

Joseph:
No, please.

Gus:
Okay. Okay. For example, take Peter Singer's paper, Famine, Affluence, and Morality. It is from '72, I think. There are certain factors that... So before then, it was much more difficult to even contribute to solving global problems as an individual. And so that might have contributed to the paper arising at the point at which it did, or at that specific time. And of course, Singer contributed greatly, but maybe another person could have written that paper.

Joseph:
Yeah, I'm not sure. I'm not sure how much of a difference there was in the '70s versus earlier times in terms of how much difference you could make. And I'm also not sure who would have written... It's not obvious to me that that paper was sort of on the tip of anyone else's tongue, as it were. And particular framings matter: Singer's writing has a type of power, and demonstrably has made a very big difference to people, and it's not clear that comparable articulations would have had comparable effects. So I think, in some sense, Singer is a good case for it actually. Even if you think, oh, someone will make this point, it's obvious, that's not always true in philosophy, and it can matter how you make the point too.

Gus:
Yeah, I actually agree. I actually agree.

Joseph:
That said, I do think, if you take an extreme case, suppose you thought that by 2060 all intellectual problems would be solved, that we would invent some perfect science and philosophy and math AI, and in the humanities all the novels would be written. I mean, that doesn't make the same sort of sense. But if you think a lot of human intellectual endeavor will be sort of superseded or completed or something at some relatively near-term interval, then I think that does make a very important difference to what type of contribution you think you're making as you work on these issues. A lot of it then becomes, what do we need to know before then? How can we best orient towards... And to be clear, I think that's a cartoon picture of what kind of contribution AI might make to human intellectual endeavor.

Joseph:
But in general, I think it is true that it sort of focuses your attention more on what does that transition start to look like? What are the questions that matter most to our going into that with the right type of understanding and the right type of maturity? And just in general, I think the more you think sort of really big, important things are going to happen over the next century that make a sort of really big difference to the future of our civilization and our species, it does seem to me to kind of move one's focus towards those things and away from kind of, oh, we will just... There's a kind of timeless intellectual project of making progress on these issues. I think that's sort of one framing. And there's another thing where it's like, something big is about to happen. How do we prepare for that? And I think the more you think that something big is going to happen, the more your focus goes there.

Gus:
Yeah. Okay. So one thing, for example, that I am worried about is that we, I mean, depending on who you ask, we've made either no progress or very little progress on understanding consciousness. And as I see it, consciousness is very central to ethics and what we want in life and what we should want and so on. And it worries me: how do we communicate this to an AI, or to a superintelligent AI, for example, that might not be conscious itself? So would such a being be able to understand what it is we care about from, say, reading all of our texts or consuming all human-made media and so on? I don't know, because it might seem very mysterious that we keep talking about, oh, I feel bad, or I feel good, or I fell in love and so on, if you're not yourself conscious. I don't know where the question is there, but how do you react?

Joseph:
Was there an assumption there that the AI itself is not conscious as well?

Gus:
Yeah. That is an assumption. Yeah.

Joseph:
Right. So you're kind of worried, all right, so we build an AI. It's not conscious, but we want it to, in some sense, respect the types of values that we take to be at stake in consciousness, but it's hard to communicate about them.

Gus:
Precisely. Yeah.

Joseph:
Yeah. Okay. I think there's sort of a number of specific things that go into that particular scenario. I guess I would put that as an example of like one of the many, many types of sort of problematic confusions that can arise in a world where we're building machines that are comparable in sophistication and cognitive complexity to humans. Yeah. There's this general problem where, how do we talk about consciousness and communicate about consciousness, given the degree of confusion we have about it, the degree of sort of non-observability that our confusion sometimes implies about it?

Joseph:
I think it's possible that... It seems like an open question to me whether an AI system could learn human consciousness discourse, even if it wasn't conscious. I think it could, in some sense, ask the humans: which beings do you think are conscious, and which things that I can observe correlate with what you call consciousness? If we were, say, dualists about our consciousness, we think it's hidden, we think the AI system doesn't have it, then we could say, look, AI, there's a special, hidden thing. You can't see it, but trust us. And here's when we think it happens. So whenever those things happen, we want you to... If a bunny is wincing in pain, then stop stepping on it, or whatever. So I think there might be workarounds there, but it does feel like we're getting into the morass of, oh, man, I guess we're talking about this special, mysterious, separate property that the AI can't see, that correlates with physical events. Yikes. It seems like dicey territory.

Gus:
Yeah. Yeah. And I'm worried about it specifically because I think it's so central to all of ethics. In fact, I think it is the... So it's not a coincidence that we are both confused about consciousness and about ethics. I think uniting consciousness with the physical world is the solution to both problems. So there's a kind of, or there might be, a united solution. Yeah. And I think our confusion about-

Joseph:
What [crosstalk 01:31:24]? Go ahead.

Gus:
Yeah. Sorry. Our confusion about ethics stems from our confusion about consciousness actually.

Joseph:
Could you say more about that? I'm curious what you think the confusion is.

Gus:
Yeah. So the problem is: how do we place intrinsic value in the physical world? Where is it? How does it interact with other physical things, right? These sorts of questions often lead people to an antirealism, because, well, the physical world is all there is, some assume, and there's no place for intrinsic value in the physical universe. And so therefore we say that morality, or we say that ethics is kind of... We take an antirealist stance towards it. I think that if we place intrinsic value in the brain, and I know this talk sounds confused or weird, but if we say that our good and bad experiences are what matters intrinsically, and furthermore that these experiences, like all experiences, are ultimately physical brain processes, then we have a story about how we can interact with intrinsic value in a physical universe. And I think it's a very unique story that solves many of the metaphysical and epistemological problems that moral realism traditionally grapples with.

Joseph:
Would you see that as different... So suppose I said... Here's how I'm understanding what you're saying. It's something like, there's this general problem that both value and consciousness seem kind of different from physical stuff. And so we have a hard time figuring out how to make their existence compatible with physicalism or naturalism or something sciency. And so that tempts us to maybe deny their existence or to maybe reduce them to physical things or something. And you want to reduce both to kind of brain states.

Gus:
Yep.

Joseph:
And you want to say consciousness is a brain state, value is a brain state. And in virtue of both of those things being brain states, we don't have the problems that you might've thought we do.

Gus:
Well, I would identify value with experiences and identify experiences with brain states. So that's the reduction, or that's the identification, I would make. Of course, it's not something I made up. It's a combination of established views in philosophy: hedonism and the mind-brain identity theory. But I do think that the combination of these views in particular offers a quite compelling story about where intrinsic value is located and how it is compatible with our understanding of the natural world.

Joseph:
I'm curious what makes brain states special here. So if we think that the central problems, both for consciousness and for value, are, A, a kind of epistemic question, how do we get epistemic access to these things, and B, a kind of ontological question, how do we fit these things with physicalism? I guess, maybe focusing on the epistemic piece, suppose I said, instead of brain states: consciousness is a brain state, but value is chairs. Chairs are the things that are valuable. Why would that be different epistemically? Why is my epistemic access to the value of brain states different from my epistemic access to the value of chairs?

Gus:
Well, you know something... So if you experience pain, say, right? You know that there is some negative quality to this pain. And if we then, furthermore, accept that pain is ultimately a brain process, well, then you have some direct epistemic access to some characteristic of your brain process, in a way that you do not have to something that is outside of your conscious experience, like a chair. So if you're telling me that intrinsic value is chair-ness, and that we should maximize chair-ness in the universe, and so on, then I say that we do not have access to a negative quality of being a chair, simply because, yeah, it is the pain itself, the experience of pain itself, that is bad. It is not the content of the experience that is bad. So I might stub my toe on a chair, and then I might curse the chair and come to believe that chairs are bad, but that would, in my view, be a mistake, because it is rather the pain itself that is bad.

Joseph:
I think maybe what I'm trying to push on here is how much... So it sounds to me like you're working with, A, a kind of physicalist ontology of consciousness, and then, B, a story about a kind of privileged epistemic access to conscious states, where this privileged epistemic access, which is in some sense more direct or more transparent than the type of access we have to other things in the world, is such as to solve epistemic problems that wouldn't be solved otherwise. And I guess I would see these as separate moves. So I think, whether or not we are physicalists about consciousness is, in some sense, separate from whether or not we think that there's this privileged type of epistemic access to conscious states that somehow solves problems about epistemic access to value, or to consciousness itself-

Gus:
Yeah, I agree. I agree with this.

Joseph:
Okay.

Gus:
Yeah. Yeah.

Joseph:
Cool.

Gus:
They are separable. Yeah.

Joseph:
Cool.

Gus:
Go ahead.

Joseph:
So maybe just setting aside the physicalist piece. I mean, I think part of the reason it can get muddy there is people, because consciousness is this funky thing, it can feel like, well, because consciousness is funky, maybe the funkiness of moral epistemology is sort of mixed up in the funkiness of consciousness epistemology. Our funkiness is unified. We just need one sort of funkiness. I'm skeptical of that type of move for...

Joseph:
Well, it's a little hard to explain, but I guess a way of putting it is, what is the epistemic issue with value and with consciousness? As I see it, it's basically that we think of good epistemic access as involving a certain sort of explanation. It's sort of: the fact that you have epistemic access to is what explains your belief; your epistemic processes are in some sense explained by that fact. Exactly how to understand that explanation is sort of debatable in epistemology. People talk about sensitivity, which is sort of like, these things co-vary. They'll talk about safety, which is sort of like, you couldn't have easily been wrong. There's a lot of ways of trying to cash this out.

Joseph:
In two-

Gus:
I'll jump in here because you're referencing a lot of philosophical ideas here, and we might just think that... So, what you're talking about is kind of... The epistemic problem is how... When I believe something to be bad, do I believe that for good reason? Is that a fair summary of what you're talking about? Is there a good explanation for why I would hold some belief about intrinsic value, for example?

Joseph:
Yeah. I think a way of putting it is both consciousness... Suppose we use a very kind of naively dualistic notion, both of consciousness and of value, and treat them as kind of floating properties that correlate. They kind of float outside of the physical world but they correlate with the physical world. So, I've got a cat. Say cats are good, right? So, there's this goodness property floating above the cat. Everywhere the cat goes, the goodness property follows in the non-physical realm, say. This is a kind of cartoon but hopefully it'll illustrate the problem.

Joseph:
And so now, suppose we say you believe the cat is good but the only thing that you ever interact with is the physical cat. Your eyes receive the kind of light that bounces, that is reflected off the cat's kind of physical presence. At no point is there any sort of interaction between the cat's goodness, which is floating in an opposite realm, and your epistemic faculty.

Joseph:
The worry with that is now we can ask, well... I mean, this is one way of putting it, though it gets a little dicey: suppose cats weren't good, right? Suppose we took that goodness property and moved it to something else. We moved it over to dogs. But because your epistemic faculties were only ever in interaction with the physical cat, and we've held the physical cat fixed, we've just moved the non-physical goodness over to dogs, your epistemic faculties are going to be the same. You're still going to go, "Oh, cats are good." So, we've lost... So, the intuition here, though exactly how to spell it out gets very complicated epistemically, is that you've lost a certain type of connection with the thing that you were forming beliefs about, such that you're no longer a reliable kind of detector of whether that thing is there. Like, you're not a good goodness detector, because your goodness detection process isn't interacting with goodness at all. We can move the goodness-

Gus:
Yeah, I see that.

Joseph:
... around however we want.

Gus:
So, God [crosstalk 01:41:32] moving... God could be moving goodness from cats to dogs or chairs, but I would continue to believe that it is cats that are intrinsically good, say.

Joseph:
Exactly. And so, importantly, this arises with consciousness too, where the dualist about consciousness thinks consciousness is kind of epiphenomenal. It's floating in the other realm beyond your head. But that gets especially hard to swallow, because we think of ourselves as really having a kind of causal interaction with our consciousness that explains, for example, the movements of my mouth, which is a physical event. When I say, all right, I'm going to check if I'm conscious... All right. Am I conscious right now? Here I am. I'm introspecting. I'm seeing my computer screen. And I'm also... A physical event is happening. Namely, I'm producing these sound waves. They're going into the microphone. So, presumably, that physical event has to be in a kind of causal interaction with the fact that I'm conscious, such that you have this notion, when you do that type of introspective exercise, that if you took my consciousness away, right? If you moved it over to my chair and made me not conscious, then in some sense I would get a negative. I would look into my mental theater and be like, "Oh, I'm not conscious." And then we'd have different mouth events.

Joseph:
So, there's this intuitive notion that consciousness is in the appropriate type of causal relationship such that it can be detected by the other epistemic processes we use, where those are physical. But that... So, this is, I think, just the fundamental problem for dualism about consciousness, which is the zombies: if you took away the consciousness, we would just keep saying all the same stuff. And so, suddenly, it looks like, "Wait. How do you have the relevant type of access to the consciousness?"

Gus:
Yeah, I fully agree. And an analogous problem holds for value, or moral value, in the world. But I might ask you, what do you think of my solution then? Is it compelling? Why is it not compelling?

Joseph:
Yeah. So, let's just focus on how the solution handles this epistemic problem. I guess I basically see it as, not... As I'm understanding it, what you're saying is, "Well, we have a special direct type of access to our consciousness, and that's also the type of access we have to value." And I guess I just want to say kind of what is this special type of access, and how does it solve the problem that we just laid out? We sort of laid out this problem of how does this... How do you get the right type of epistemic connection with the thing? And I guess, naively, all I hear is people say, "Well, we have a special type of connection with our consciousness, and also with value." And I guess I'm kind of like this seems like a non-solution. I can go into a little more detail there, but maybe I'll start with that, and you can respond.

Gus:
Yeah, yeah. Interesting. Special type of access. I mean, we are conscious. We have experiences. So, one experience might be an experience of seeing red. And in my terms, let's say, another experience would be an experience that is bad, that is characterized by painfulness or badness. And I think we learn about the redness and the painfulness/badness in the same way. And that is by being conscious.

Gus:
So, I have spent a lot of time wondering about this myself, whether this is actually worth something, right? And I do think that to deny that we have some access to the color red, for example, would be to deny that we're conscious. It does require kind of the full move towards a strong illusionism, saying that consciousness merely seems to exist, but does not.

Gus:
But you asked, or you were wondering, what speaks more specifically for or against this special access thing. What is the problem with saying that, when we have an experience of red, we know something about red?

Joseph:
So, I don't think there's necessarily any problem with that in particular. It's more just that we set up, both with respect to consciousness and with respect to value, this sort of physicalist epistemic dilemma, which is: if your epistemic faculties are physical in some sense, but the properties you're trying to detect are kind of not physical, and your epistemic faculties only interact with physical properties, then how do they detect the non-physical properties? That's kind of the dilemma.

Joseph:
And I guess, at a high level, I think maybe I just don't hear the connection between talking about introspection or our access to phenomenal red or anything like that and the solution to that problem. So, I think you could say redness is a physical event, and our epistemic faculties are also physical, and so they interact with redness in this way, which I'm guessing is what you want to say.

Joseph:
And then, you want to say-

Gus:
Yes, yeah.

Joseph:
... "Okay, so I guess value is also a physical property."

Gus:
Yeah.

Joseph:
That's... I think you can say that, but I don't think that's sort of a solution to the problem. It's more like... You want to be a physicalist or a naturalist about value and about consciousness, but you still have to sort of deal with all the problems that that gives rise to: in the context of consciousness, questions about the hard problem; in the context of value, questions about open question arguments, and doesn't it seem like value is different from a natural property, and stuff like that. So, in some sense, I guess, I just see it as sort of an orthogonal move. It's sort of like, you want to be a naturalist and a physicalist about consciousness, and so you have to solve all those problems.

Joseph:
And then, there's a separate question about... Yeah, or I guess I just see that move as: I want to be a physicalist and a naturalist about consciousness. Once you do that, I think a question... The reason I brought up the chair thing is, it's just not clear to me why introspection is uniquely privileged. Suppose you have solved the problem. You've said, "Suppose we're going to be physicalists and naturalists about consciousness." Cool. So, red is a physical event. Goodness is a physical event. And we detect it with our physical brain detectors. But then, why is that uniquely applicable to brain-type physical events, right? So, if there's something else that's happening in the world... Like, it's just an arrangement of neurons that we detect, but we can detect other things. We can detect non-arrangements of neurons too.

Joseph:
So, I feel like, often, the people I'm thinking of as having your view, and I would guess you'd say this too... You'd want to say that, in some sense, phenomenal properties are privileged as the type of thing whose value properties we can tell. We don't have the right type of access to chairs, but we do have the right type of access to the badness of pain. But if pain is a physical event, and chairs are physical events, loosely, I guess I don't know why... It doesn't feel like the physicalism piece is doing any work there. We could just as easily have access to properties of chairs as we could to properties of brain states, assuming they're physical. And so I don't see the privilegedness of the mental states.

Gus:
Okay. I mean, what does it mean to say that we could have access to chairs in the same way that we have access to our own brain states? Because we don't in this world, right? And I think that is quite clear from our experience. So, what do you even mean by saying that we could have the same kind of access to chairs?

Joseph:
Well, so chairs is probably a bad example because it's not a very serious candidate for some other form of value. Maybe an example is, say, you see someone torturing a cat, right? This is a kind of classic example from Gilbert Harman. There are these people and you see them, they're kind of burning a cat alive. And you perceive their action, the action of burning the cat. And this action has certain properties, like causing the cat to be a certain way and stuff like that. And you might want to say that action is wrong. And we're sort of... In some sense, you're kind of detecting the wrongness of this action, a certain high-level property that you're detecting. And on naturalism, that property is physical. And so, you might wonder: to the extent that that's a physical property that we detect with our physical epistemic apparatus, in what sense is that detection process any less direct or reliable? Or maybe not reliable... Let's just say direct... than our introspective access?

Gus:
Yeah.

Joseph:
I guess a way of putting it is, suppose you had two modules in your brain. You had a little experience module and a separate epistemic access module. And that epistemic access module can kind of reach over to the experience module and check what's going on there physically. And it can also reach over to the cat situation outside of your head and check what's going on there physically. Sort of why is one type of checking or detection privileged relative to the other, would be the question.

Gus:
Yeah. So, in my story about these things, when you see the cat, what happens is that... I mean, what we know is bad is not what the experience is about. So it is not the intentional content of the experience. It is rather the phenomenal characteristic. And the reason why we can't know anything about the possible badness or goodness of the act of killing a cat is precisely because we are limited to... This might just be repeating the point, but from our experiences themselves, we know something about the experiences, right? From there, we can theorize about what caused the experiences. We can theorize about what might be going on outside of our brains. Now we're getting into kind of classical philosophy of mind thought experiments. But say that I am a brain in a vat. My brain has been stimulated and so on. And I experience a simulated sight of people torturing a cat. Now... So, this might make clear the distinction there is between things outside of our heads and things inside of our heads.

Gus:
So, in that situation, if we then claim that we can see the cat-torturing situation, and see that it's bad, I think our mistake would be clearer because what is bad is the pain that we feel when we see another creature that we assume is in pain. Does that-

Joseph:
Cool. That's helpful.

Gus:
... story kind of clear something up? Yeah, yeah. Maybe.

Joseph:
I think it helps me understand where you're coming from.

Gus:
Okay.

Joseph:
So, what I'm understanding you're saying is that there's a kind of non-representational access to our consciousness that we don't have to other things. Where, with other things, we represent them and kind of point at them via the content of our representations. Whereas, with consciousness, we don't. And-

Gus:
Yeah.

Joseph:
... So, that's the kind of important difference. Is that a good summary?

Gus:
That is a good summary. And we might get into even more. I mean, yeah. There are questions about what it even means when we say pain or red. How do these descriptions refer, and what do they refer to? There are interesting issues surrounding what's called phenomenal concepts, where I think... When we refer to pain, we're referring... Pain has a special way of being referred to that situations outside of our heads cannot have.

Joseph:
Yeah. So, I think we do... I think, at a high level, the main point I want to make is I think this discourse about whether we have non-representational access to our conscious states is separable from the question of whether we are physicalists about consciousness or about value.

Joseph:
So, I don't think the following is enough to... Well, yeah. I think there's one question: are we physicalists about consciousness and value? There's a separate question: in virtue of some sort of special access, special non-representational access, to our conscious states, do we have better evidence about the value of conscious states than about anything else?

Gus:
Yeah, yeah. And I-

Joseph:
And... Yeah.

Gus:
... completely agree. These two points are separable, but I attempted to give you kind of the whole package, which is what I consider possible solutions to all of our worries about the reality or unreality of morality. For example, the person who has most inspired my metaethical position is Sharon Hewitt Rawlette. And she's just been on this podcast, and she is, in fact, a non-physicalist. So, it's possible-

Joseph:
There you go.

Gus:
... to hold this view of... Yeah, yeah. So, it's possible to hold this view about intrinsic value and not combine it with my ideas about physicalism. Yeah, definitely.

Joseph:
I will say... I mean, I think my experience... So, my impression is that some pattern of views in this vicinity is especially popular amongst hedonistic utilitarian types. And one thing I feel is not often highlighted enough is that, as stated, the view about non-representational access is essentially a kind of... It's a view about the amount of evidence we have, where there's some notion of kind of an inside... Like, with consciousness, you get to kind of see the inside directly, but with everything else, you kind of have to see the outside.

Joseph:
And, naively, the view sounds like if you can see the inside of an experience, and you look and you see it's bad, you're like, "Okay, cool. We know that one's bad on the inside." With the cat thing, we're clueless about the inside, and so... But that actually doesn't imply hedonism. That just implies that you have better evidence about hedonism than you have about kind of non-experiential things. But sometimes-

Gus:
Yeah, except that-

Joseph:
... It feels like it's conflated. Yeah.

Gus:
... Yeah. So, the point is that we could leave it open, whether, okay, could other things be intrinsically bad? Could other things be intrinsically good? But, on the view I take, it's very difficult to see how we could ever get this epistemic access to the goodness or badness of things that are not experiences. We will never... It's difficult, when we get to this level of discussion, to describe things without getting trivial, right? But we will never experience something that is not our experience itself. So, it is very difficult for me to even understand what it would mean to say that another property could be good.

Gus:
I've heard knowledge could be good, or symmetry, or complexity, and so on. But how would we know, right? The point is, I very much agree with this problem about epistemic access and the ontological status of goodness and badness. I just think there is, or I hope maybe, that there is a unique solution that could solve it. But this-

Joseph:
Cool. I-

Gus:
... solution -

Joseph:
Yeah.

Gus:
... Yeah. This solution is compatible with kind of a taking a debunking stance towards a lot of what we talk about in morality. Yeah.

Joseph:
Cool. I think probably the main question I would have about this view is just what's going on with this non-representational access thing? And, yeah. So, we can talk about maybe phenomenal concepts and other things like that.

Joseph:
My own feeling is: wow, this sounds pretty mysterious, especially if we're physicalists, and it's just like, look, ultimately what we're talking about is detecting properties of a physical system. So, what's kind of direct and non-representational about that, that sort of can't be applied to other physical systems? And, in some sense, what is this notion of directness in general? Because if you're a physicalist, everything you need is kind of public. It's all there in the physicalness. So, why can't other people see it, or why can't you represent it? My feeling is that this view often brings in a lot of elements of dualism about consciousness that sometimes aren't kind of... Yeah. Because it's hard to conceive of physicalism about consciousness, because it just feels like consciousness is not publicly accessible. It's not a set of neurons firing or whatever. It's this experience, and it's red, so that you can say, "Ah, the redness is hidden, and I have my special access to it." But once you're a physicalist, I think it gets a lot harder to make sense of that kind of thing. I mean, I think it's hard to make sense of that thing if you're a non-physicalist too. But especially as a physicalist, then I start to wonder: what's going on with this non-representational access?

Gus:
Yeah. So, say that we're looking at a brain from the outside, all right? And this brain is currently experiencing red or experiencing pain. Again, when I say these words, we simulate, or at least I simulate, a kind of a brain on the table in front of me, and me looking at the brain, this doesn't feel red or this doesn't feel bad. So, it's definitely confusing in that sense.

Gus:
But I think if we ourselves have had the experience of red, and if we have 100 or more years of neuroscience connecting the experience of red to different processes in the brain, then we might begin to talk about, "Okay. Well, I know that this brain is experiencing red because I see certain modules lighting up," or whatever the final story ends up being, right?

Gus:
But yeah, I do think that the experience itself is initially necessary to understand what experiences even are. But after we translate our experiences into physical facts, then we can talk about them publicly. And this somewhat dissolves the mysteriousness for me, when we begin to see that maybe pain is neural activity in certain hotspots in the brain, maybe red is a certain thing in the visual cortex. And so, in that sense, they might become publicly accessible. But only after we learn the right correlations.

Joseph:
Yeah. I guess the... These issues get tricky. The question that comes up for me with that is kind of what's going on with this initial step where you need some sort of internal introspective access first? Or kind of what does the physicalist understand to be going on with that? And, yeah. But these things get hard.

Gus:
I'm wondering whether or why you find it mysterious, because I myself have found that if we keep underlining the mysteriousness of this kind of access, or of having experience in itself, it might be confusing in a sense. Because I think where this leads you, or what you must accept... The question kind of comes down to whether you believe that we are conscious in a sense. And if you believe that we are conscious, well then, we somehow do have access to certain phenomenal properties, right?

Gus:
Okay, so one frustration that some listeners might have is that all of this in-depth discussion about morality and so on might be unnecessary, because whether or not we agree that certain things have intrinsic value and other things don't, we can all somewhat agree to a common project.

Gus:
For example, if you take the longtermist view, you might say something like, "Well, whatever it is you think is good, maybe it's growth, maybe it's knowledge, maybe it's civilization, whatever. Maybe it's happiness, and so on." What do you say to that move, where specific ethical debates suddenly become less important, or are seen as less important?

Joseph:
I am very sympathetic to that sort of move. I think a lot of these nitty-gritty debates are often less relevant to specific kind of lived issues in our lives than people assume. That's not always true. Some issues really do bring up substantive questions in normative ethics. Often, though, a lot of it is about empirical questions. And often... This is something I wrote about in a recent post. I think it has more to do with how much weight we give to different values than with whether we value them at all.

Joseph:
And especially, I think in the context of the longterm future, I think we should be, as I said, I think we should be pretty agnostic about what the best types of futures look like. And I think our overall orientation should be towards protecting the possibility of making a very good future. I don't think we should be debating now what that future looks like, especially not in any detail. And so, protecting the possibility of that future, and also the possibility of future people who are wiser and better informed than we are, being able to choose for themselves what type of world to create. And I think that that type of consideration cuts across a very wide variety of ethical views.

Gus:
Yeah. And I somewhat agree with this point, although I feel, especially in population ethics, the conclusion you reach there can have a massive impact on what you think is good. Or when we talk about factory farming or wild animal suffering, it is also extremely consequential what your conclusion is.

Gus:
So, I do think that there are some of these kind of nitty-gritty debates that suddenly reach into real-world considerations and become very important.

Joseph:
I totally agree. I think I do agree with that, though I think if you have various types of uncertainty about them, sometimes the uncertainty itself, if you just put kind of sufficient credence on one position versus another, then depending on how you work with moral uncertainty, you can end up not needing to resolve the issue definitively in order to make a judgment.

Joseph:
And I tend to think we should at least be interested in that type of situation with respect to things like animal welfare and the longterm future, and other things as well.

Gus:
Although I have been hearing kind of complaints that this move, when we fall back on normative or ethical uncertainty, it kind of tilts our moral considerations in favor of the total view. Because, on the total view, there is just so much more at stake in almost every situation. And so, it might be somewhat... Also because many of the people who study and care about moral uncertainty are also somewhat, at least somewhat sympathetic, to utilitarianism, and to the total view.

Joseph:
I think that's a worthy critique, depending on how you're doing the moral uncertainty calculation. So, there are these issues in the context of moral and normative uncertainty about how, for example, do you compare between different theories?

Joseph:
So, in the context of, say, totalism versus... Say you don't value future people, or you do. It's not necessary that your understanding of the difference between valuing future people versus existing people is such that the future-oriented view takes a single existing person and kind of scales them up, treating everyone in the future as worth as much as the present-oriented person thinks the present person is worth.

Joseph:
It could also be the opposite. It could be that the future-oriented person thinks the present people are worth much less, in which case the notion that there's sort of more at stake on one view versus the other is less clear.

Joseph:
So, these normalization issues get very difficult in the context of moral uncertainty. And I think there's more work to be done there. And you also don't need to do it in that way. There is sort of... I think it's true that we should be wary of kind of very simplistic applications of the kind of moral uncertainty view, especially if it looks like the person is kind of using it as a way to sneak their view in kind of under the rug. But I do think it's an important fact. It's important that we are uncertain about these things. It's important that we grapple with that. And I think it's reasonable to... Yeah, to expect practical upshots from that sort of grappling.

Gus:
Yeah, yeah. Okay. I also want to touch upon, which we have already begun talking about, consciousness, and in connection with that, meditation. And just so our listeners know, you've done a total of one year on silent retreat, is that correct?

Joseph:
Or, yeah. I guess more than that now.

Gus:
Okay, more than that. One question that I might ask is: in light of everything we've talked about, how can it be the right move to go on silent retreat? Shouldn't you be somewhere reducing the risk of nuclear extinction instead?

Joseph:
So, I should say that I chose to do the bulk of mine... When I was choosing to do a lot of intensive retreat practice, I was less oriented towards existential risk and longtermism as a kind of focal point of my energies than I am now. So, I don't... My choices-

Gus:
Okay, it's definitely tongue in cheek. I'm not actually attacking your dispositions or your choices here.

Joseph:
Yeah. I mean, I think it's a reasonable question. That's a very big chunk of time, and devoting that amount of time to meditation or to other things... That's a serious choice. And especially if you're thinking about it in a kind of cause prioritization framework, I wouldn't see meditation as sort of a cause competitive with... I mean, it wouldn't be a cause, but... Yeah, I guess I would see that as a very non-straightforward choice from an effective altruist perspective.

Gus:
Yeah.

Joseph:
Yeah.

Gus:
But, on the other hand, you might have learned something about how our thoughts influence our experiences. And you might have gained some clarity about what you should do with the rest of your life. So, I don't know if this is actually the case but it could be the case that a person goes on retreats and reaches a certain sense of clarity about what they should do. And, in that case, it's comparable to spending a lot of time figuring out what to do with your career, for example, which is something that's recommended in effective altruist circles.

Joseph:
I think that's an interesting analogy. I hadn't made that connection. I think I was thinking about it in a somewhat similar vein, where there were mental skills and ways of relating to the world, and ways of being that I took to be very important and valuable. And in choosing to invest in developing those, I think I did have some sense, this is better to do early. (A) It's better to do early, partly because the opportunity to take long periods of time away from kind of the everyday world gets harder, I think, as you get more embedded in your career and your relationships, and your responsibilities, and stuff like that. But also because it gives more time for these sorts of dispositions in ways of being to affect the rest of what you do, and the rest of what you choose, and how you relate to people, and stuff like that.

Joseph:
So, in some sense, it's analogous to it being good upfront to think a lot about what to do with your career. I think it's also you can make a case for it being good upfront to kind of invest in developing the kind of mental habits, and skills, and ways of relating to the world that seem to you most valuable.

Gus:
Yeah. I can tell you that I meditate a little bit. So, I meditate sometimes 10 minutes in the morning. But it feels very different to spend 10 minutes meditating in the morning sometimes, versus going on retreat for maybe three months in silence, and so on.

Gus:
So, do you think there are kind of threshold effects for meditation where you have to get serious about it in order to get anywhere or get any benefits out of it?

Joseph:
I don't think you have to get serious in the sense of doing a lot of retreat in order to get benefit from it. And mostly, I generally think that the best evidence about how much meditation is helping you is sort of very just direct and empirical. You've got to look and see. Does it seem like this is making a difference in my life? Do I like where it feels like this practice moves my mind and my way of relating to the world?

Joseph:
That said, I do think retreat gives you a chance to get acquainted with what your mind does when it goes deep and kind of further in the direction that this...

Joseph:
Goes deep and kind of further in the direction that this practice kind of moves it. So, I mean, one way of thinking about this, which I think is an imperfect metaphor, but I think it has some value: if you think about rolling a ball up a hill... if it rolls down a little bit when you let go of it, or maybe a lot, then it can be helpful to do a sustained period of kind of pushing the ball up the hill without letting go, to kind of see how high you can get it, or maybe even push it over the lip of the hill to the other side or something like that. Where I think there is a dimension where, doing just practice in daily life, you spend 10 minutes and then you go and have a big chaotic day, and then the next day, 10 minutes, big chaotic day.

Joseph:
And sort of, there can be a sense in which the chaos and the complexity of your life can kind of roll the ball back down, and it can be harder to kind of really settle in and go deep with the practice. That's that. I think there's also a lot of benefit to learning how to integrate various mindfulness and meditation related practices, aligned with other practices into the mess and chaos of everyday life. And so it's a balance, but I do think retreat is a kind of unique space for seeing what your mind does with this type of technique.

Gus:
Have you learned anything on retreat that is comparable to learning how to ride a bike? So, is there anything that feels permanent, where you can't unlearn it, or you've rolled the ball over the top of the hill? Is there any kind of permanent insight?

Joseph:
I think I would compare it more to something like playing piano than riding a bike or something where if you've done something a lot, your mind kind of has practiced it a lot and so there's sort of deeper grooves and it's a stronger type of ... and more accessible way of doing things. So I think there are ways in which it feels like my mind is different and has been different in kind of a sustained way since doing this retreat practice. But I think something like piano or kind of other sorts of skills, feel like a better analogy than kind of biking, you have it or you don't type thing.

Gus:
Can you tell me what you've learned? Tell us what you've learned. What are these skills? How is your experience different now than it was before?

Joseph:
I mean, so to be clear, I don't have any hard data on this or anything like that. This is all very anecdotal and introspective, and you can't really look at counterfactuals for what type of person you would have been if you'd gone and done something else. It feels to me like there are certain basic movements of your mind that mindfulness practice aims to cultivate. So, a certain kind of awareness of what's going on internally and externally, a certain kind of motion of letting go or non-clinging, a kind of non-grabbing or non-tension in relation to what's happening, various types of kind of mental discipline and willpower, moving your mind from place to place and controlling where your attention is going at a given time. All of these are skills that I think meditation practice cultivates and that I think I've seen some changes in my own life from my practice in that regard.

Gus:
Okay. Interesting. Is it something that you recommend people look into? I know it's increasingly popular. I think that in earlier decades, you had to be very into meditation, but now it's kind of spreading. Yeah.

Joseph:
So I think it's good to try it, in the same sense that I think it's good to try lots of things. But I'm actually less of a meditation booster than my kind of extensive investment in meditation practice might suggest. I think in particular, a lot of people... in my experience, it seems to me like people relate to meditation with a little bit of an "I should be doing that" type of mentality, the same way they might with exercise or something. It's sort of this unpleasant thing that apparently all the evidence says is really good, but honestly, when I do it, I hate it, I hate every minute of it, and I just can't.

Joseph:
And a) on the evidence: there are a lot of studies, but there are a lot of studies about a lot of stuff. And I think we've learned a lot about how reliable many of the studies in social science and neuroscience and cognitive science are. And I think studies of mindfulness are certainly no exception to this type of skepticism that we should be bringing to bear in a lot of those contexts. So I wouldn't let yourself be... I think you should trust your experience with the practice, and I think there's enough evidence there, both anecdotal and non-anecdotal, that it's worth trying, to give it a shot and see what your mind does. And some people, when they do that, find themselves very compelled, and they're like, wow, this is really making a difference for me. And when they have that type of reaction, I tend to think, cool, that suggests that this is something that might be a good fit for you, or that it might be worth exploring in more depth or making a more regular part of your life.

Joseph:
Often my recommendation is if someone feels like they're enjoying the practice or finding it useful to go on a retreat, a short retreat, I generally recommend five days, which is sort of not too long, but it's enough time to settle in. I think if you do just like a weekend retreat, that's really only going to be Saturday and it's just not enough time to kind of get used to the environment and really drop in. So I tend to recommend a sort of five day retreat for people who are interested, they've done a little bit of the practice by themselves, they're interested in it and they want to see kind of where their mind goes when they do it in a more sustained way.

Joseph:
But I also think, if you find yourself not enjoying it after having given it a good shot... and it's always a bit of a question what a good shot means. There's a lot of other stuff in the world: exercise, it's really great; other forms of more social interaction; other spiritual practices; all sorts of other things you can do. And I don't think people should stick with meditation from a sense of obligation, or a sense that this is some sort of privileged mode of improving your life or your mental health or anything like that.

Gus:
You hear the same kinds of claims about what you can experience in meditation as you hear about what you can experience on psychedelics, for example. I've heard people describe it as incredibly blissful or tranquil and so on. Is there a sense in which meditation reveals to us that instead of improving the world, instead of taking part in the world, we can just sit down and focus on the breath or whatever it is we do, and feel good? So it's in that sense that it reveals that we can focus directly on the brain as opposed to improving the entire social system we have going on.

Joseph:
"Reveals" seems to me too strong, maybe in two dimensions. One is, it feels like... to the extent that we should focus on the brain, which is an open ethical question, I think. My sense is that the thing that would reveal that, or that you have in mind, is something like: oh, it looks like you can generate really pleasurable or kind of satisfying or otherwise desirable mental states without intervening very dramatically on your environment. You can just intervene on your brain. I think we knew that without meditation and without psychedelics. I think the difference that various of these interventions make is maybe that it seems somehow more realistic or practical... not just that it's kind of in theory possible, with the completed neuroscience, to stimulate your brain into very desirable states, but that it's actually something you can do with various types of practices or other interventions.

Joseph:
But I also think a) I think it's an open question, exactly what level of that is practically available to different people via different interventions and b) whether that's really desirable. I know you're sort of sympathetic to hedonism and views in that vein, but even if you're sympathetic to hedonism, kind of in principle, in practice, we live in a kind of political and social reality where there's a lot at stake beyond what's going on in our heads and there are I think [crosstalk 02:21:10]-

Gus:
Yeah, and it could be very dangerous actually, to decide ... again, it's kind of a cartoon way to put it, but decide that, okay, meditation is pleasant, so let's all sit and meditate. And then the next thing you know, an asteroid is on its way and so on. It could be dangerous to become too inwardly focused, or to hunker down on feeling good too early, in a sense. Yeah.

Joseph:
Yeah, I agree. And I also think for me at least, the kind of hedonistic aspects of meditation are not very central. I think that sort of the draw of meditation for me, has always been closer to something about being awake and being alive and being in some sense, more present or more in the world and more able to encounter and engage with reality as it really is, as opposed to a sort of mode of kind of dropping out or stepping back, or kind of retreating to your own kind of invulnerable mental space. So-

Gus:
But isn't it in itself pleasurable to be a part of the world and be present and kind of experience things fully around you? So isn't there some form of wellbeing in that?

Joseph:
Certainly. Yes. But I think the sort of ... but with less, I think in my head of a kind of escapist flavor.

Gus:
Yeah. Definitely. Definitely.

Joseph:
Yeah.

Gus:
Okay. So on these retreats, have they changed your mind about consciousness itself? Do you think that you've been impacted by the retreats in your philosophy of mind?

Joseph:
Probably in different ways, but I am less excited than some about the kind of metaphysical insight that meditative practice makes available. So people often approach ... basically I am less oriented than some, towards a lot of these practices as modes of kind of seeing that P is the case, like for some metaphysical or philosophical proposition P. I think a lot of people treat meditation like, and then I saw that there is no self or that the mind is impermanent or that something, something, and my attitude towards that is that, philosophy is hard. We have a whole discipline of philosophy and it's really quite rigorous and difficult and confusing. And if you look at some of the arguments, I mean, if you actually look at the philosophical arguments people use in meditative contexts, they're gappy to say the least, in terms of what the actual philosophical structure is, as sort of like, well, there's no self because things change and the self ... various things like that.

Joseph:
So for me, I think the most prominent ... I think meditation is better as a mode of coming into a kind of visceral, preconceptual relationship with your mind and your existence, and realizing more deeply the kind of gritty thing that you are philosophizing about as a real thing. And I mean, some of the most dramatic meditation experiences I've had have been ones that have been kind of, whoa, what the F is going on with this, or, what is this? And that kind of more visceral sense of the raw thing, I take a lot more seriously than the insights, quote unquote, that people take away and quote in their minds, like, and then I saw that X or Y. Yeah, so for what that's worth.

Gus:
Yeah, it makes sense. Although I would make the distinction again, as we talked about with ethics and so on, between knowing something about your mind or your experiences, such as there being no self, and then ... I mean, some spiritual people will claim all sorts of things about karma and rebirth and so on, which are kind of external claims that are even less plausible. So would you accept that distinction, that we might in meditation learn something about our experiences or the dispositions of our minds and so on?

Joseph:
Yeah. So I mean, I think meditation is going to be a better ... as an epistemic tool, I think meditation is going to be better for kind of introspective epistemology than for learning about minimum wage or whatever. And I do think you learn a lot. I mean, I think people learn a huge amount about themselves and about how their minds function and what sort of patterns and loops and flavors and all sorts of things arise in their mind. And I'm open to there being forms of more metaphysical or philosophical insight that come from that type of introspection. I guess I'm just, ... I mean, it might be related to some of these questions about direct access or non-representational, I guess I just think often the things that people claim to have seen or realized via some sort of direct introspection are conceptually freighted and they may have had some sort of dramatic experience and they may have in some sense reoriented in their perspective, but to articulate that in metaphysical terms or via the kind of language of philosophy or metaphysics or whatever, is a substantially further step.

Joseph:
And I think it's easy to conflate the two and think that you've done philosophy by looking at your mind. And I think the projects are more distinct than people sometimes suppose. That's not to say that they're not productively in dialogue; I think they are. But it's a dialogue, it's not like the introspection just reports back to the philosophy and that's the end of the story.

Gus:
Okay. So I don't know where you are with time, but I would love to run a thought experiment by you if you have the time.

Joseph:
Yeah. I've got time. I know we also ... I mean, there was some stuff about illusionism that we got a little cut off on, that I'm happy to-

Gus:
Exactly.

Joseph:
I'm happy to go back to, yeah.

Gus:
Yeah. So let's talk about illusionism. You have a post in which you try to understand this view, and we should say that it is the view that phenomenal consciousness is an illusion. It seems that we are conscious, but we are not conscious. How do we ... traditionally, or at least in philosophical circles, and I would definitely assume in the kind of common-sense view, the idea is that if we know anything, it is that we're conscious, right? We can be in a simulation or we can be a brain in a vat and so on, but we're definitely conscious. So where do we start with understanding the view that we're actually not?

Joseph:
Yeah. So I'll try to put on my illusionism hat here and see if I can ... it is a difficult view to make sense of. So from the illusionist part of myself, I guess I would say we know that we are something, so there's something going on. And people often articulate that as we know that we are conscious, where it's sort of a pre-theoretical, commonsensical notion of consciousness. And then we can try to drill down on that in the kind of traditional philosophical ways by talking about what effectively amounts to other ways of pointing at a somewhat similar idea, where you say, what it's like, or we talk about the raw feels or whatever.

Joseph:
All of which often get bundled under this term "phenomenal." So the term phenomenal is sort of the distinctively, "what it's like-y" type of thing. One issue is that when you try to say more about what that is, it can be difficult. So often ... to the extent that we add more properties in, like we say, ah, well, what's up with this phenomenalness? What makes consciousness a thing as opposed to other things? And then people will add in more substantive properties, like they'll say, well, it's intrinsically private, so no one else can see it, only kind of you can see it as it were, or it is ineffable, or it is subject to a certain type of direct epistemic access, and then you can add other things. And those are kind of substantive, metaphysical claims, which go beyond just kind of saying this raw thing kind of phenomenalness exists.

Joseph:
And so there's these two types of illusionism. One of them, the sort of weak illusionism, goes through those additional properties that I just mentioned, like intrinsically something or private or ineffable or whatever, and denies that there's anything that has those properties. So it's given a certain kind of characterization of what consciousness is, and then the weak illusionist says, that thing doesn't exist. And it's still a further question whether, when we said we know that we're conscious, we knew that a thing with all those properties existed; that was a sort of substantive, metaphysically freighted conception of the thing that we know, which we might deny. Sorry, just a sort of long-winded way of talking about illusionism.

Joseph:
But basically what I want to point at is, I think it's somewhat mysterious: when we say there's no such thing as phenomenal consciousness, it's quite difficult to say what the thing is we're talking about there, which we're denying. And in fact, the illusionists themselves have a hard time talking about what it is. Sorry. I don't know if that really got at it. Let me try that one more time. So the question is something like: what's up with this, Joe? Come on, man. Obviously we're conscious, what is this view?

Gus:
I actually-

Joseph:
Is that?

Gus:
In a sense, or that could be the question. I actually find illusionism, at least somewhat plausible, but I just think it's an especially difficult view to understand. And I do worry that illusionists make grand pronouncements such as, consciousness does not exist or nothing is conscious. And then kind of when pressed it's revealed that well ... or consciousness sneaks back in, in a sense.

Joseph:
Is it sort of a motte-and-bailey type thing, or?

Gus:
Yeah.

Joseph:
So there's this distinction in the philosophy of mind between the mental properties and functions that create what David Chalmers calls the easy problems, and the hard problem. And the easy problems are sort of: how do you explain basically the input-output properties of the brain, the functional properties of different mental states, how we have various forms of introspective access to kind of functionally characterized mental states, all sorts of stuff like that. And then the hard problem is how you explain why there's something it's like, quote-unquote, to experience any of this, or to be a mind, or why this kind of phenomenal frosting arises from the brain cake; there's this other dimension of what's going on, and what explains that, and how do we understand the epistemology of that and the metaphysics and all sorts of things.

Joseph:
So what illusionism says is that part is not a real problem, that doesn't exist. The frosting, the shimmering ethereal other thing that is in the other dimension, that's not there, but all the other stuff is there. So you can still tell that you're in pain for example, where we understand pain as a functionally characterized state. You can still look and go like, oh, you can have reliable beliefs about whether you're in pain, if we're understanding pain in that way. If you import into the definition of pain, some notion of phenomenal consciousness, then it looks like the illusionist is denying that we ever experience pain. And so I think this can lead to this sort of back and forth, where people use concepts that sometimes include in their definition for that person, a phenomenal dimension.

Joseph:
And so then it can look like the illusionist is saying nothing of that kind exists. And then the person is like, so no pain and no beliefs and no desires? And then it's like, no, no, desires exist, it's just not with the phenomenal consciousness baked in, and then they're like, but I think desires are intrinsically phenomenally conscious, but that's a kind of separate debate. And anyway, so I guess I just thought I'd flag: I think that can be some of the source of the confusion with illusionism, that a lot of people tag a lot of things as necessarily phenomenally conscious, and you have to kind of get into the idea that there's a functional characterization that doesn't imply that.

Gus:
Yeah. And so I definitely agree that using different concepts, or referring to a lot of different concepts by the same words, such as phenomenal or pain or consciousness and so on, creates a lot of confusion.

Gus:
And I have this, I don't know if it's original to me, but it's definitely inspired by Nozick. So I call it the torture experience machine, in which you have a bunch of brain scientists and they are realists about consciousness. They're not necessarily dualists or anything, they could be physicalists about consciousness, but they are realists about it. And they are also sadists, so they want to offer people the chance to walk into a torture experience machine, in which they claim that people will feel extreme pain and suffering. And so if you're a committed illusionist and you're offered enough money, would it be the case that you should enter this machine, being self-interested? That's my thought. And I assume that a lot of illusionists feel that of course they shouldn't, and that it's obvious that they shouldn't enter, but it's not so much that I want to draw on an intuition that we don't want to enter the machine. It's more that I want to use this thought experiment to clarify what we actually mean by pain. Yeah.

Joseph:
Yeah. So I do think that clarification is necessary in thinking about the thought experiment. So I think it's very clear that an illusionist, if you say, Hey, illusionists, these guys are going to remove your fingernails and drill into your ear canal or what ... sorry, I don't mean to describe torture, but if they're going to just do-

Gus:
No, no, but no, you're going to be in an experience machine, and you're going to experience something that would feel as bad as what you just described.

Joseph:
Yeah. So I think illusionists don't like stubbing their toes either, or any of this other thing. I mean, we can get into these questions about illusionism and nihilism and stuff like that, but broadly speaking, the illusionist is not going to think that the thing that you disprefer when you try to get away from torture, or at least that they personally disprefer when they are trying to avoid torture is a phenomenal thing, it's something else. They don't believe in the phenomenal thing, but they still believe in something that is worth avoiding when torture is at stake. And so if the machine implicates that, then they'll avoid it. I think you can set it up where it's almost like a philosophical bet, where you say, okay, here's a machine that will only torture you if there's such a thing as a phenomenal consciousness. Where that is-

Gus:
That's exactly what I meant to point at. Yeah, exactly. Exactly. So you're kind of betting on which one of you has the right theory of consciousness. But sorry, I interrupted. Go on.

Joseph:
I mean, just as you said, I would interpret that as just a kind of way of thinking about a bet, where you could have a similar bet about any other philosophical position, this machine will torture you if nihilism about mereology is true or whatever. And it's just like a bet about your credence in that position and what the stakes are and how much you disvalue torture and stuff like that. But I wouldn't see it as sort of getting at something distinctive about illusionism.

Gus:
Yeah, well, it is at least easier to imagine the experience machine relating to theories of consciousness than to theories about mere... what is it, mereology? Is it about wholes and parts and so on?

Joseph:
Yeah, right. There are no composite objects. It's just parts.

Gus:
Yeah. Okay. Yeah. Okay. So it seems to be more closely related to something about experience than to parts and wholes and so on.

Joseph:
Actually, I hear you. Yeah. So in some sense, it's more of a direct test where they could say, look, what this machine is going to do is it's not going to intervene on any of your physical processes, it's just going to insert into the phenomenal realm some pain. And we think there is a phenomenal realm and you don't. And so you should be willing to take money because you don't think anything's going to happen when you go into this machine but we think something will. It's like a really tough thought experiment because most of the way we cause people to get phenomenal pain is via physical processes that the illusionist disprefers. So yeah, it's a hard set up.

Gus:
Although I would say that, again, the brain scientists could be physicalists and realists about consciousness. So they could say: we will change your brain in this way, and we believe that you will feel phenomenal pain, and you do not. And so, why don't you want to enter the machine, dear Mr. Illusionist, if you don't believe in the existence of phenomenal pain?

Joseph:
I guess basically it would just be that you disprefer stuff other than phenomenal pain; you disprefer non-phenomenal pain.

Gus:
Yeah. But then what is in need of explanation is why do you disprefer certain movements of neurons or atoms?

Joseph:
I mean, I think it's a good question in some sense. I mean, it sounds like it's a question that the physicalist about consciousness would face as well. It sounds like you're a physicalist and so if you thought that conscious states were just certain movements of atoms, you would face the same question. Why do you disprefer that?

Gus:
Yeah, this is where I claim that we can experience at least some aspect of our brain states directly as bad.

Joseph:
Yeah. Yeah. So we've talked about some of those issues. But broadly it doesn't seem to me like the illusionist faces any problem ... well, many people do have the intuition, which maybe we're about to talk about, that if there are no phenomenal states, then there's just no value. Like, it doesn't matter what happens to a thing that is unconscious in the phenomenal sense. And so maybe that's part of the intuition you're trying to pull at, sort of: why do you care about what happens to you? If you have this nihilist intuition about worlds without consciousness, then why do you care about what happens to you? I guess I just want to say: whatever the illusionist's answer to that is will be at stake in the torture experience machine thought experiment. It'll be like, well, it turns out I just actually really disprefer being tortured, irrespective of the metaphysics of phenomenal states, and so that's why I'm not going in this machine.

Gus:
Yeah, yeah. Yeah, I do accept this kind of consciousness as a necessary condition for value. And I think that if consciousness does not exist, then nihilism is true. And that is a much weaker claim than saying something like hedonistic utilitarianism is true. It's just that it's quite plausible to me that either the world is devoid of value, or else value has something very intimately to do with consciousness. How plausible is this to you?

Joseph:
I think I don't find that all that plausible. I think there are some intuitions that support that, but I think my attitude towards it is pretty similar to my attitude towards nihilism in general, which we haven't talked about very much. But very broadly, both with pure metaethical nihilism and with nihilism conditional on illusionism, I guess my reaction would be that when I imagine learning the truth of something like ... I mean, it's a little hard to say what exactly you're learning when you learn that. Certainly it's easiest with something like naturalism. Like, I can imagine learning that there's no such thing as a kind of non-natural moral property, and that there's no such thing as some sort of extra, non-physical dimension of consciousness. I think that's the easiest thing to imagine learning. When you're talking about consciousness not existing, where you were going to be a reductionist about consciousness anyway, it's kind of hard to say then what the claim is.

Joseph:
But when I imagine learning those things, I guess I notice that I just don't stop caring about stuff. I don't stop trying to avoid being run over by buses, or trying to help dogs that are trapped in a pit. I just don't notice my normative world collapsing. It's more like I just imagine it getting rebuilt. If you had assumed that normativity was resting on these metaphysical views, I guess what I imagine happening is that when you let go of the thing you thought was crucial, your normative life will re-arise. And that suggests to me that your normative life does not in fact rest on these kinds of highfalutin metaphysical presumptions; or at least mine doesn't, I think.

Gus:
I can see where you're coming from, and I have tried to have days in a week where I kind of drop all the theory and then see what happens. But for my view in particular, it's very difficult to separate out, because I continue to walk around, and when I eat an apple, it tastes great. And then I stub my toe and I feel pain. And so these things continue to happen no matter what my view is. So maybe it's because I've kind of pushed myself into a corner with these thoughts, but it just seems to reaffirm me in my view.

Joseph:
Yeah. I mean, I guess, I think a difficulty with some of these "imagine learning that the view is false" thought experiments is that it can be unclear whether you are successfully learning that, really internalizing the fact that it's false, right? So, in the case of your stubbed toe, you might think, "Well, okay." You go, "All right, I'm going to be an illusionist. There's no phenomenal consciousness." And then you stub your toe and you notice, "Well, something normatively relevant happened just now." But there's two ways of interpreting that. One way is, we preserve your illusionism, and you've discovered that actually you care about stuff even if illusionism is true.

Joseph:
Another interpretation is that your false belief in phenomenal consciousness has rearisen and activated your normative theory, and it can be hard to adjudicate those. I guess my feeling is something like, I think I placed just generally less weight on some of these assumptions about how theoretically freighted our mental life is than other people. I think nihilists are often very eager to see in our lives a kind of undergirding type of metaphysical commitment that can be undermined in a way that leads to a kind of wholesale disorientation. It's sort of like, your moral practices or some other practice itself is committed to the existence of a certain kind of metaphysical thing, and I think I'm just generally less sympathetic to that.

Joseph:
I guess I see theory as more malleable, and see there being lots of competing interpretations of what we're doing when we want to avoid stubbing our toes. And so I'm less inclined to think that the only way to want to avoid stubbing your toe is to have some kind of metaphysical view about phenomenal properties. I mean, in an explicit sense, people don't even start with these concepts. They have to get kind of elicited via exposure to David Chalmers. So you can say, "Well, they're baked in underneath," and we can talk about that. But overall it just feels kind of theory-laden to me, whereas wanting to avoid stubbing your toe, that's bedrock: that's stubbing your toe, ick. Anyway, maybe that gives you a sense of where I'm coming from.

Gus:
It does, except for the fact that I feel like I'm appealing to exactly the bedrock avoidance that you just characterized. I do feel that my position is almost stripped of theory, in a sense, because, again, it's getting a little cartoonish, but I do think of it as, and this has actually happened: I'm wandering around my apartment and I'm wondering, "Well, is nihilism true? Is morality illusory?" And so on. And then I stub my toe. And this seems to confirm me in moral realism, at least of the flavor that I want to promote here.

Joseph:
I mean, I think that I can certainly make sense of that affirming your normative commitment. You're sort of like, "Wow. Things..." I mean, I guess, I'm often tempted to try to be as non theory-laden as possible. So, let's say we're not a nihilist yet. We're fine with sort of things mattering, but we're unsure whether it has to be phenomenal consciousness or it could be something else. I'd be inclined to say, "Okay, you stubbed your toe. The lesson you learn is, whatever that was..." We'll just use the word "that." Who knows what I'm pointing at with the word "that"? It could be phenomenal. It could be non-phenomenal. "That" is worth avoiding. I am not into "that," you know? And so then someone's like, "Hey, you're about to kick this log." And you're like, "Is that going to involve 'that'?" And they're like, "Yes, it is." You're like, "All right, I'm out." And you don't need to have a theory of what "that" is.

Gus:
But isn't it interesting to have a theory? Aren't we looking for... We want to know the moral truth and so on, so why not describe it as pain, and theorize about pain, and so on? What I'm looking for is: why does it lead us astray to overtheorize about these things? You want to avoid overtheorizing. Why is it possibly misleading?

Joseph:
Well, let's take the example of stubbing one's toe. So, I think if we use my "that" approach to picking out the event of concern that occurred when you stubbed your toe, then we are in less danger of, for example, denying that the thing happened. Whereas if, instead, I said, "Ah, I stubbed my toe, and had an experience of an intrinsic ineffable private property that is sort of fundamentally different from all physical things," which isn't necessarily your view, but that's like an example of a theory-laden interpretation.

Joseph:
And then someone comes along and they say, "Well, that didn't happen." And now I fear that I will get confused, because there's two questions. One is, "Did an event of concern occur?", which I want to say is on much better ground than "Did an intrinsic private ineffable non-physical experience occur?". And I worry that if we interpret too quickly what the thing was that we care about, then we're at more risk of realizing that it didn't exist and getting disoriented, where it feels very implausible that an event of concern didn't occur when you stubbed your toe.

Gus:
Yeah. I agree with what you just said, in a sense, because I don't want to inflate consciousness in the way that we... Or attach these specific... How do we characterize them? By making consciousness special, right? But we seem to all share a concept of pain, and of pain as something bad. So what is the danger of using that concept to characterize stubbing your toe, and then later on you have somewhat the same feeling when you eat spicy food, for example? What is the danger in using these concepts?

Joseph:
Basically just that if they start to become sufficiently theoretically freighted, then we might, I think, lose track of whether they... I mean, so, this happens. So I tend to think pain is probably... That's not that theoretically freighted. I think pain is a pretty pre-theoretical term, where it seems very, very implausible, for example, to say that people don't feel pain. And people are-

Gus:
But when you say pain right now, do you mean pain as an experience? Or phenomenal pain?

Joseph:
Exactly. This is the kind of thing I mean. I want to say, let's say the first step was we stubbed our toe and we said, "That." Let's call that A. And A is just a name for whatever happened that I want to avoid. And then we might say, "Oh, look, there's other people stubbing their toes, and there's people bonking their heads, and there are people getting paper cuts, all sorts of things, and much, much worse." And if we look, maybe we get into the idea that there's something interestingly similar about these events of concern. And so, let's call all of those Bs. So A is an instance of B, but we have not yet brought in any kind of theory about whether this is phenomenal. Is B phenomenal? Does B have to be phenomenal?

Joseph:
We could make that claim. We could say that it is essential to all of these things that they are phenomenal. And this is often, I think, the step people want to make, and that they want to kind of sneak in, such that if you think that, then you have yourself a very strong objection to illusionism, right? And this is an objection that Dave Chalmers presses. He just says, "Well, cool. Illusionism says there's no pain. There's no such thing as pain." And I guess I feel like that... I think it's a non-trivial objection. I think there is force there, because pain does feel conceptually quite tied up with our notions of experience and what it's like to feel things and stuff like that.

Joseph:
So, as theoretically freighted concepts go, I think this is a decent one, but it seems to me much, much more plausible that, in some sense, our notion of the phenomenalness of our experience is false or a misrepresentation, than that, in the full connotations of the term, pain does not exist or ever occurs. I think if we learned that there was no such thing as the phenomenal stuff, we would quickly reinvent a notion of non-phenomenal pain, and we would go around avoiding it very hard. And given that we would so quickly reinvent this thing, I'm inclined to start with that too, sort of like, "All right, whatever it is that would cause us to reinvent this concept, maybe that's what invented it." Or maybe we should think that that's very relevant to the thing we're talking about. And so maybe that gives you a flavor, right?

Gus:
It does. Except what's in the back of my mind is what we're using to pick out these different happenings in the world. I mean, again, it might be baked in that what we're using to understand these happenings is just that they are phenomenal, because otherwise we wouldn't be able to pick them out and find similarities between them in the way that we do. And I also sometimes worry that worries about phenomenal consciousness being special and so on often relate to worries about consciousness being non-physical. So it's difficult... Or, I don't know whether that relates to my position.

Gus:
If we begin talking about pain not existing and I claim that pain is a brain process, is there now suddenly a hole in my brain where the pain was before? And so on. So, yeah, it is somewhat confusing territory, but I see your point in the sense that we should keep an open mind about how we characterize these things. And there could be a danger in baking in a theory from the beginning and then just claiming victory, because, well, you've baked something in, and now you can say that, yeah, now your theory is vindicated because you've baked it into bedrock in a sense. Yeah.

Joseph:
Yeah. I mean, I think one way of characterizing that type of worry is, it quickly becomes unclear whether the dispute at stake is terminological or metaphysical. And this happens in the context of nihilism about metaethics, and it happens in the context of kind of illusionism about consciousness, where someone says, "Here's this kind of possibly mysterious thing, morality or consciousness. And I claim, in fact, this mysterious thing is this non-mysterious thing, the physical world." And then someone else says, "Well, no, consciousness or ethics has to be non-physical."

Joseph:
And so then that person, because they've defined consciousness or ethics as non-physical, then physicalism equals nihilism. Whereas someone else could say, "Okay, well, if we don't define consciousness, or if we don't take for granted that it has to be non-physical, then we don't have to say that," as it were. And so it becomes unclear whether... So, the nihilist and the physicalist, often they agree on the metaphysics, right? They agree that it's just a physical world, and what they disagree with is whether that "counts" as pain or as morality or whatever, but sort of disagree with-

Gus:
Wait, wait, wait. The nihilist and the physicalist, who is it that agrees? You said the nihilist and the physicalist agree.

Joseph:
Oh, sorry. Let's say, the physicalist who thinks morality exists and consciousness exists, but that they're nothing over and above the physical. And the nihilist who says, "There's nothing over and above the physical, but that means that morality and consciousness don't exist."

Gus:
Oh, yeah. Okay.

Joseph:
They agree on the physical being all there is, and they disagree about whether that counts as these other things.

Gus:
Yeah, yeah.

Joseph:
But disagreeing about whether something counts, you can get into a frame of mind where that feels kind of lame. And this is what I meant about reinventing the notion of non-phenomenal pain. If someone's like, "Well, that doesn't count as pain if it's non-phenomenal," you can be like, "Okay, well, it doesn't count as pain-sub-phenomenal. Great. But it still counts as pain-sub-non-phenomenal, the thing I really want to avoid, which is why I'm not getting tortured right now." Anyway, I guess that's a way in: once you start to sense that this is getting a little terminological, you wonder whether you've gone astray.

Gus:
Yeah. Yeah. That's definitely a worry, whether it's just a verbal disagreement between the physicalist anti-realist and the physicalist realist, because, as you said, they agree about what exists. How would the world be different if anti-realism were true? The world wouldn't be different, right? So what's the difference? Well, the problem for me here is that, again, it all comes down to whether or not we experience things. And it does feel somewhat pre-theoretical in a sense, right? When I claim that we experience something, I can only repeatedly appeal to, "Well, did you drink some coffee this morning? Did it taste a certain way?" That's an experience.

Gus:
So, in claiming that certain physical phenomena are conscious phenomena, I can only repeatedly appeal to the experiences of my fellow conscious beings. And this might be a limiting factor. And here we can kind of model you as simulating an AI who is repeatedly asking, "Well, what is it that you're talking about?" Or, "What's the difference?" And so you can say: hence my worry about communicating experiences to an AI. Yeah.

Joseph:
Yeah. I think my inclination there is, again, to try to be somewhat less theory-laden about it. And part of what I was gesturing at in this initial point, about whether we're talking about, in some sense, the bare property of being phenomenal, versus trying to characterize it in some more substantive way by saying something like intrinsically ineffable, private, et cetera, is what happens once we've gotten rid of these extra, more substantive properties. I think if we have a notion of phenomenalness that builds in more, where we can say, "What is it to be phenomenal? What makes something phenomenal?" and someone can answer, "Well, it has to have X and Y and Z further properties, which are not just repeating 'phenomenal,'" then, I think, it's easier to have a debate about whether that thing exists.

Gus:
I agree.

Joseph:
If we just say, "Phenomenal. You know what I mean, man. It's that special feeling, that special way that things are. Can't you just tell? You had your coffee, it was that special way. You know what I mean?" I want to say, there's definitely a thing that you are pointing at. You had some coffee, right? And it tasted bad, or it tasted good, or whatever, in some sense. Some event occurred where you spat it out, or you smiled, or whatever. So there's almost a thing... There was a "that" there. You can say there's that, but I guess I'm trying to sort of channel my illusionist side here, I should say.

Gus:
Yeah, yeah.

Joseph:
I think there is another way. I could be channeling a different side. From my illusionist side, I guess I would say, "Well, this almost is an illusion." I think the illusionist would be almost more in your camp. The illusionist would say, "Yep. We kind of know what we mean when we talk about phenomenalness. We have some sort of distinctive sense of things being phenomenal, and it's wrong. We misrepresent things as phenomenal." But the illusionist, too, doesn't really say what that representation consists in. If you ask them, "Oh, so what is it to represent something as phenomenal?" They're like, "I don't know, whatever the heck we're talking about. I thought we were talking about something, and all I'm saying is, whatever that is, it's illusory. It's an illusion." But no one has a very good characterization of the thing.

Joseph:
And so, because of that, I think it gets a little hard when you say, "I know that phenomenal consciousness exists," but you can't say anything more about what it is that you're pointing at there, especially if you're a physicalist, right? Because one thing you can say, if you're not a physicalist, is, "And it's something distinctively different from physical stuff," which I think is often what people have in mind. It's something that just seems so different. But you wouldn't even want to say that; you want to say, ultimately, this is a physical thing. And so now we're like, "Well, cool. I agree something happened when you had coffee. I had coffee." But it's unclear what we're disagreeing about at a certain point. Or, at least I think I can channel a part of me that says that. I think another part will be more in your camp. So-

Gus:
Yeah, yeah. That's fine. It's great. But there's another sense in which this debate is getting overly theoretical, because if I communicate with, say, my grandmother, she will instantly understand what I'm talking about if I tell her, "Oh, you like drinking this coffee," right? She knows that she will then have a good experience drinking the coffee. So that's kind of the reverse: overtheorizing could also be a danger here, in that people who've never thought about consciousness actually understand, or claim to understand, the concept we're talking about better.

Joseph:
I agree with that.

Gus:
Yeah, but... Yeah. Go ahead.

Joseph:
I hear you on it. I think that's a good point, in some sense. I guess, though... Imagine it were true that in some sense there are no phenomenal properties. I was not totally sure.

Gus:
I'm not sure I can actually do what you're asking me, but I can try. Yeah.

Joseph:
Yeah. I mean, I guess in some sense, I'm not sure what that would be either, because I don't know what it means to characterize something as phenomenal. So it's not obvious what to imagine when you deprive the world of these things. You have your notion of a philosophical zombie world. There's sort of nothing; it's like there's no one home. But I think that can be problematic. But do your best. There are no phenomenal properties. Is it the case in that world that no one likes coffee? Or that you're misleading your grandmother when you say you like this coffee? No, it's like, she loved the coffee. It was great. She was like, "Mm, it tastes great." Anyway, I guess that's my point. It's not totally clear that in an illusionist world you've misled your grandma any more, or that she's any more mistaken. It... Yeah.

Gus:
Yeah, except for the fact that the meaning of liking has now changed between the worlds, I would say. So in the world that I think we're in, liking refers to having a good experience and in the zombie world, liking is more like having a positive attitude or a preference towards coffee. So there might be something going on there.

Joseph:
Yeah. I think that's right. And now I'm realizing I've been channeling something that's, I think, less like an illusionist and more like a physicalist reductionist skeptic of the phenomenal consciousness discourse, which I think is actually not the ethos that underlies illusionism. The ethos that underlies illusionism is more like your ethos, plus a kind of skepticism that there is a thing that answers to our concept. So I think that the illusionist, now that I think about it, will be more like, "You're totally right. Your grandma is just totally wrong about... It turns out she doesn't like coffee," or, "She didn't have a good experience," [crosstalk 03:07:04].

Joseph:
"It's all totally misguided" would be one way of putting it. I think the illusionist will be more tempted by that type of view, in the same sense that a nihilist about normativity is more tempted by the view that, yes, our moral discourse is systematically wrong. I mean, this is the view on which it's not bad if there's a nuclear war, and all sorts of things.

Gus:
Yes.

Joseph:
So, yeah. But I've probably been representing illusionism, in particular the kind of overall view, less faithfully than maybe some of my comments suggested. I think what I've represented is the metaphysics of illusionism.

Gus:
No, but it's great, because there are definitely... I mean, you've pointed out some very interesting things, like problems with our concepts and how they refer and how we can share them and so on. So it's all great. It's all great. And what is your position? I've heard you channel the illusionist or the physicalist skeptic and so on. What do you believe? Or do you want to give a credence distribution, or do you want to say you believe some position, or...

Joseph:
Yeah. I guess I would say I have a lot of credence on something like physicalism as a metaphysic. And the reason for that is just that I think these causation and epistemic worries are really deep. So I think consciousness has to be the type of thing that is causing me to say, "I'm conscious." If it's not, then why do I think I'm in some sort of reliable epistemic relationship to my consciousness? So, I'm very sympathetic to the view that... Talking about that sort of epistemology, and the kind of causation-at-different-levels-of-abstraction stuff, is complicated, but broadly I'm sympathetic to the idea that you need a physicalist metaphysic of some kind to make sense of our epistemic access to consciousness. And then I guess I feel like, once I've said that, once I've said that in some sense we need to be physicalists about this, I actually do have the feeling that some of this discourse is a little bit terminological.

Joseph:
So, now the question is just, does a physicalist metaphysic answer adequately to our concepts of phenomenal consciousness or not? And there I'm a little, like, "I don't know, man. Maybe our concepts are kind of fuzzy or confused, or maybe people have different concepts." It somehow doesn't feel like a very substantive debate, I guess. So I think maybe that's part of why I've been a little bit loose about whether I'm talking about illusionism or about physicalism here, because I see them as almost sort of a terminological dispute at that point, once you become a physicalist. At least, yeah.

Gus:
I can see what you're-

Joseph:
At least... So I would put a decent amount on that position. And then I would have a decent amount of leftover credence on something like: we're very confused, at a deeper level than what I just said expressed. So, we are confused about epistemology and about metaphysics and about all this stuff. And that confusion could extend to some sort of fundamental reorientation, some sort of substantive confusion or misunderstanding of the physical or metaphysical situation. Or it could be more epistemic: our concepts are confused, or we're thinking about this in the wrong way. I have a big chunk on that, too, and I think they kind of blur together. So, yeah.

Gus:
Yeah. It is an interesting thought. Take, say... I actually can't think of a specific physicalist realist about consciousness, but take some physicalist who is also a realist about consciousness, and then Keith Frankish. They will both agree on the metaphysics, and they will also both agree that we shouldn't torture random animals, for example. So there might be something going on where, exactly as you said, the debate becomes verbal or superfluous in some sense. But one thing we should definitely hold onto is thinking about whether we're using the same concepts and being precise with them. This was my initial worry with illusionism: that we're kind of, in a sense, playing fast and loose with the concept, or changing the definitions along the way. But, yeah, this is the-

Joseph:
I've always been more worried... right.

Gus:
No, go ahead.

Joseph:
I think I'd be more worried about that with physicalism than with illusionism, actually. If you start out with a kind of Chalmers-style dualism about consciousness, you think, "Ah, the concept of consciousness is just so different from the physical." And the physicalist comes along and says, "Nope. Actually they're compatible." Whereas the illusionist says, "You're right. Consciousness is just totally different from the physical. I totally agree. That's why it doesn't exist." And so, if anything, I want to say the kind of worry about confusing or weakening or, in some sense, changing the concepts applies more to the physicalist than the illusionist, insofar as you think there's a kind of pre-theoretical possibility or commonsensical plausibility to a lot of dualist intuitions.

Gus:
Yeah. I was thinking more in terms of the... So, say the illusionist says that pain does not exist, but then he won't take the bet of going into the torture experience machine. What explains this, if he can be paid a million dollars to go into this machine? Isn't it because the notion of pain has somehow arisen again as phenomenal for the illusionist?

Joseph:
Yeah. I mean, as I said, my feeling is that the illusionist... The notion of pain can have rearisen as non-phenomenal for the illusionist, and that's why they-

Gus:
Yeah. Yeah.

Joseph:
... that's why they don't want to get tortured.

Gus:
This has been very interesting for me. I've enjoyed it a lot. And also it has reaffirmed my view that the longer the podcast, the better, basically. I don't know if you've noticed it, but the longer the podcast goes, the more people kind of get to know each other or the better the conversation becomes in a sense.

Joseph:
Yeah, it was a pleasure.