Quotes from David Chalmers on the 80k Podcast
Published February 4, 02020, Page Last Modified April 6, 02020
The 80,000 hours podcast episode, “David Chalmers on the nature and ethics of consciousness (a),” is 4 hours, 41 minutes long. Here are the quotes that stood out most.
The biggest problem for dualism is the interaction problem. How do mind and body interact? And in particular, how could this non-physical consciousness play any role in the physical world? I mean, you could say that it doesn’t. That’s what philosophers call epiphenomenalism. That leads to weird things like, “My consciousness is playing no role in my talking about consciousness right now.” That seems bizarre. Or you can go for an interactionist view where consciousness affects the physical world, and then the question is, “How can you reconcile that with physics?” The one place where you might want to look, which looks at least consistent with known physics, is quantum mechanics and the collapse of the wave function.
So lately actually, with a coauthor, Kelvin McQueen, who used to be a student of mine at ANU and is now at Chapman University in California, we’ve been exploring views where consciousness actually plays a role in collapsing the quantum wave function. That’s an old idea that goes back to Wigner and further, but interestingly, no one has really tried to work out formally the details and the dynamics. We’ve just been trying to see if we can make that work. That’s been one aspect of my work on this on the dualist side. Do you want to jump in, Rob?
Panpsychism + Physics
So this is another place where you might want to appeal to consciousness and say maybe, “Physics or matter somehow intrinsically involves consciousness, whose structure has been characterized by the physics.” But every time there’s an interaction, say between two particles, it’s actually some bit of consciousness doing the work.
It’s a way-out, speculative worldview that sounds like the kind of thing people are going to be into after taking some psychedelics and so on, but it does have some very attractive features. It’s also got a big cost, which is: how do you get from consciousness at the fundamental level to the consciousness that we have?
That’s the combination problem for panpsychism, which was introduced by William James. In recent years, there’s been a huge resurgence of work on panpsychism; as Arden mentioned, a lot of young people these days have been pursuing panpsychist ideas. The big challenge is the combination problem. If you can make panpsychism solve the combination problem, it becomes a lead contender. I think it’s fair to say no one has solved that one yet. So I guess I’d say for both panpsychism and for dualism: big attractions, but big problems.
Problems for Panpsychism/Dualism
I think the big cost is the combination problem. How do the little bits of consciousness yield our consciousness? That’s, in a way, a bit like the original hard problem for panpsychists. It’s not clear there’s a good solution. For the dualist, the problem is the interaction problem. How does consciousness play a role in the physical world? How do we reconcile that with physics?
For Berkeley, one of the great idealists, it might’ve been the mind of God. But there’s also a version of this which says, “Take the world as described by quantum mechanics as a giant wave function of the universe,” and now go panpsychist about that and say that the wave function of the universe is actually fundamentally something mental. So it’s a mental state in a single cosmic entity. That leads to what I call cosmic idealism, and I think that’s an extremely way-out view. It’s also one worth taking seriously, but it suffers from its own version of the combination problem, what some people have called the decombination problem.
How do you get from that cosmic mind to our minds? Why should the existence of a giant mind necessitate the existence of our minds? That seems to be at least as hard as the original combination problem.
Cultural Incredulity of Panpsychism
We’ve all absorbed this scientific view of things being built up from a very simple reductionist level of physics, with minds only coming in fairly late in the day. But I think the thing to keep in mind is that, of course, many people associate panpsychism with the idea that very simple systems might be intelligent or thinking, which seems like a crazy view. But once you start thinking of consciousness as something very simple, rather than something so complicated, then this becomes more attractive and more tenable. I guess we’re going to talk about consciousness in nonhuman animals sometime before too long, but I think there’s been a gradual trend towards expanding the circle of conscious creatures and conscious systems as we come to see consciousness as more and more simple. So it wouldn’t entirely surprise me if at some point in the coming years, panpsychism came to seem culturally more intuitive.
But it is certainly true that to the average person, the average science-based person, it sounds like something pretty wacko the moment you hear it, and that’s certainly an initial obstacle to overcome. Philosophers call this the incredulous stare. David Lewis said this about his theory that every possible world actually exists.
He said, “I can refute all the objections to this theory, but I cannot refute an incredulous stare”. You’ve got to find some other way to get past the incredulous stare.
Effective Altruism + Consciousness
I would love to see a whole bunch of people motivated by effective altruism come to the study of consciousness to help us figure out those questions, and likewise, people moving in the other direction. Specifically on the question of the distribution of consciousness, we had a conference on animal consciousness at NYU a couple of years ago where these questions were being discussed, both by people interested in the theoretical side, the theoretical philosophy and the science, and by people interested in the ethical and activist issues about the treatment of animals. Yeah, there’s certainly not any consensus about exactly which animals are conscious, but there very clearly is a trend towards being more liberal in ascribing consciousness to animals, both among scientists, among philosophers, and among people generally.
Higher Order Consciousness + Templeton Funding
Some people think there’s gradually accumulating empirical evidence in humans against these theories, showing that people are conscious despite not having activity in the brain areas where the higher order stuff is going on, but actually that’s still very much up in the air. Actually, in two weeks at NYU, we’re hosting a workshop to try to see if anyone can devise an experiment to decide the issue between first order and higher order theories of consciousness. The Templeton Foundation has recently allocated 20 million dollars to seeing if people can do this in general: take the leading theories of consciousness and see if people can come up with experiments to decide the issue between them. The first one was a project that came out of a meeting in Seattle I went to last year, to try to decide between global workspace and integrated information theories of consciousness.
Degrees of Consciousness
Yeah, I think I’d want to be cautious about saying consciousness comes in degrees, because I don’t think there’s a single uni-dimensional scale which is the canonical scale of consciousness on which we can order every possible total state of consciousness or every possible conscious creature at a time. Rather, I think consciousness is multi-dimensional, and there are many different ways to sort states of consciousness along scales, that is, to project these multi-dimensional scales onto single scales. I think there are really probably a bunch of different scales here. The amount of attention you’re paying, attentiveness: at least within human consciousness, that’s a relevant dimension. The amount of information contained within an individual state of consciousness: there’s some sense that maybe you could measure the information, say in a visual state, in terms of a measure like bits. Maybe the number of kinds of consciousness, where kinds of consciousness will have to be individuated a certain way. For example, we have visual consciousness and auditory consciousness and tactile consciousness and so on. We can certainly imagine beings with more sensory modalities than us and fewer sensory modalities than us. And we have cognitive consciousness, whereas it seems that some creatures have merely sensory consciousness but not cognitive consciousness.
Moral Status of Vulcans (Conscious Beings With No Valence)
One way to bring this out is to think of your paradigmatic extreme case of the Vulcan from Star Trek who, let’s say, has no affective states at all, but nonetheless has very rich sensory and cognitive consciousness. So they’re constantly thinking about the world, maybe thinking about mathematics and science and the world around them in various ways, without any valence states of pleasure and suffering. They also have rich sensory consciousness. Then the question is, does that being lack moral status entirely? To me, it just seems bizarre and almost crazy to say that that being lacks moral status and it’s okay just to kill them. And we’re talking about a conscious being here, who senses, who thinks, who reflects.
Sure. Yeah, I can see all those reasons. I’d like to think I can factor out the intuitions about instrumental value and maybe about the practices of violence and so on. Maybe the first point runs, potentially runs deepest here, that maybe we can’t imagine beings without affect, or it’s a lot harder than we think. And maybe, in fact, when I imagine these Vulcans, maybe I’m tacitly imagining some affect in there. For example, I am imagining them reasoning about the world and choosing what to do. And at least on certain views of reasoning and action, maybe doing that always involves some kind of valence. Trying to do something and achieving it might somehow always involve some positive valence. And arriving at contradictions in one’s reasoning might somehow always have some negative valence. So I do take views like that seriously. But nonetheless, it does seem to me that I can coherently … I’m inclined to think I can imagine a being without significant affective consciousness. If it turns out that those kinds of affects matter a lot, even that would reconfigure a view from say a pleasure and suffering based view. But no, I mean prima facie, I’m inclined to think that I can imagine a being without much in the way of affective consciousness. And still, it just actually seems monstrous to me to think about killing such a being.
But then within those realms, affect may make it more positive and more negative, but somehow at least relative to baseline. I guess my attitude is, even if you went from the Vulcan case to a being who suffers occasionally, so that on an atomistic view it would all add up to negative valence, it would still be monstrous to kill them, even though you’d be reducing the amount of suffering in the world. So then yeah, I guess I would just be inclined to think there’s some baseline value contributed by being conscious at all, and maybe contributed by having consciousness of certain kinds.
I think on the face of it, for example, something like visual consciousness can exist without affect. That’s not to say that we don’t very frequently attach positive valence, and sometimes negative valence, to what we see or to experiencing it. But I’m inclined to think the baseline case in perception is neutral affect in some sense, and this happens relatively frequently. Now maybe that’s getting perception all wrong, and there are theories of perception all about affordances and action and getting signals to reach for things, where you get positive affect when something goes right in perception and negative affect when it doesn’t. But those are, at the very least, controversial and speculative theories of perception. I’d be inclined to think the neutral case is affect-free.
Robert Wiblin: So imagine a computer that we create that has perceptive consciousness: it has a camera and it can see things and hear things, but it doesn’t feel anything positive or negative. It perceives things, but has no affective states, and maybe doesn’t do anything, so it’s just a passive computer that sits there perceiving things. It seems like in that case, I lose the intuition that it’s bad to turn it off. And it seems like maybe that’s because it no longer has this agent quality where I feel like I need to interact with it and cooperate with it in the way that I do with beings that have preferences and do things.
Yeah. What I’m picturing here is a computer that has mere sensory consciousness. Maybe it has states of visual experience, but no cognition, no action, no affect. I guess I share your intuition to some extent there, at least that the moral status would be relatively minimal. That, I think, goes along with the idea that sensory consciousness may bring along only a relatively minimal moral status, and things like cognition, the consciousness associated with thinking, reasoning, reflection, and action, as well as affect, may well convey much more serious moral status.
On Entering or Exiting the Experience Machine
Yeah, exactly. I think you’re right about the status quo bias. Felipe De Brigard, a philosopher who’s done some experimental philosophy on this, takes the case where you’re initially in the experience machine and asks, “Would you choose to exit the experience machine?” People are reluctant to do that, maybe almost to the extent they’re reluctant to enter it. He diagnoses all this as: you want to be where you’ve been already. You want to have relationships. You want to be in that world.