Diamond Mind

Diamond Mind #11: The Coming Collision of Consciousness and Technology

Tam Hunt

What happens when quantum physics meets consciousness? In this mind-expanding conversation with philosophers Susan Schneider and Mark Bailey, we journey through the cutting-edge theory of superpsychism—a framework suggesting consciousness exists at the most fundamental layer of reality, intimately connected with quantum entanglement.

Schneider shares how conversations with physicist Roger Penrose sparked her exploration of a universe where space-time emerges from a more fundamental reality without a directional arrow of time. This "proto-temporal" layer might represent pure consciousness—a holistic network of entangled states more conscious than any classical constituent. Bailey explains how this framework addresses the mysterious phenomenon of quantum entanglement not through signals traveling through space but through an underlying topological connection from which our conventional understanding of physical space emerges.

But our exploration doesn't stop at abstract theory. We dive into the alarming acceleration of AI development and its existential implications for humanity. Bailey's new book "Unknowable Minds" examines why modern AI systems function as black boxes whose decision-making processes remain largely inscrutable, creating profound challenges for alignment and control. Both scholars express grave concerns about the current AI arms race, which resembles nuclear proliferation but without natural stopping points or adequate international guardrails.

Most troubling is what they see as our approach to a potential "FOOM" moment—when artificial general intelligence becomes capable of improving itself, potentially leading to an intelligence explosion beyond human comprehension or control. This isn't distant science fiction but a scenario some researchers consider possible within this decade.

We conclude by examining transhumanism and the philosophical questions surrounding consciousness transfer and digital immortality. Can consciousness truly transfer between biological and digital substrates? Would such a transfer preserve personal identity in any meaningful sense?

Subscribe to our podcast for more conversations at this critical intersection of philosophy, physics, and our technological future. The decisions we make now may determine not just how we understand reality but the very future of consciousness itself.


Speaker 1:

It's still morning here — I have my coffee, in Hawaii. Let's just dive in. And Susan Schneider, Mark Bailey, great to see you. I saw you recently in Boca Raton, Florida, at MindFest 2025. Let's start there — actually, Susan, let me ask you: how was MindFest for you? Were you pleased with the whole outcome?

Speaker 2:

I think it went well, because people were saying that they loved it. I organized it, so I didn't get to hear a lot of the talks, and that was traumatic for me — all these great people were speaking. But yeah, people were saying that they had a great time.

Speaker 1:

Yeah, no, I thought it was great — my first time going — and I really thought it was a fun gathering. And for me, as with many conferences, the best part was around the conference: the social hour and the dinners, et cetera. The talks were great too, but I really do love that old-fashioned face-to-face. Yeah, for sure. And the voting — I missed that part. Mark, how was it for you?

Speaker 3:

It was great. I always enjoy MindFest. This is my third year going and, like you said, I think it's the off-the-cuff conversations that are the most interesting. I also really like how it brings together a lot of people from a lot of different backgrounds, so you get this diverse array of perspectives on a lot of really fascinating topics.

Speaker 2:

Yeah, I really like that. And both of your talks are available on the YouTube site.

Speaker 3:

And I'm really excited, yeah.

Speaker 2:

And Mark, we were so glad to do a book salon on your new book. And Tam, we were so glad to feature something on resonance — I imagine we're going to talk about both of these things today. It was just great that those talks went over so well. And then I know that Closer to Truth is putting out an eight-part series on MindFest, which is really cool. I think that's coming out in just a few weeks, so maybe by the time your video's out, it'll be out as well.

Speaker 1:

Excellent, excellent. Yeah, today I'd love to talk about your recent books and superpsychism, and circle around consciousness and AI, which, of course, was the main topic — or set of topics — at MindFest also. So let me ask first about superpsychism and panpsychism. You two have a paper out, and you're doing a special issue of the Journal of Consciousness Studies on the paper and responses to it. Can you tell us more about that?

Speaker 2:

Oh, you want me to say something? Yeah, go ahead, all right. So it started when I was talking to Roger Penrose — this was during the pandemic, and he and I used to talk — and it was those kinds of conversations which really got me thinking about retrocausation and emergent gravity. And I'd also been at the Institute for Advanced Study as a member — they don't have professors, they have members — and I used to hang out with the string theorists all the time.

Speaker 1:

They're cool kids.

Speaker 2:

They were fun. So anyway, I had meant for a long time to do a piece integrating work on consciousness — on the hard problem — with fundamental physics, as a kind of attempt to understand, in the words of Stephen Hawking, what breathes fire into the equations, if you will; to try to answer the question of the fundamental nature of reality. And ever since I did papers on idealism, and even a popular piece in Scientific American — this was about seven years ago — I've been thinking deeply about how these issues are going to come together. So finally I got to do it.

Speaker 2:

After talking to Roger, I locked myself away — I wouldn't leave, like the weirdo that I am. I locked myself in my Florida place and wouldn't leave until I made progress on the issue. And I thought, hey, wait a second, I think I see what's going on here. The string theorists, and actually the people working in alternate areas as well, like loop quantum gravity — they're all thinking about these issues, but they have different views of what's fundamental to the nature of reality. But I think a lot of people working on space-time and emergence were converging at the time on the idea that space-time is really emergent from something more basic. So I assumed that hypothesis. And then I thought some of the most interesting work was taking quantum entanglement as fundamental — and, as you both know, that's probably the most puzzling aspect of reality short of consciousness. Entanglement, for the listeners, is a situation that drove Einstein crazy — of all people, it drove him crazy. It's a situation where a particle on Earth can, in principle, be entangled with a particle on Jupiter, and somehow, when you do something to the particle on Earth, something instantaneously happens to the particle on Jupiter. And scientists have been trying to measure the speed of that influence between entangled particles on Earth, and it's looking like it's definitely faster than the speed of light, and may even be instantaneous in a really cool sense.
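[For listeners who want the textbook picture behind this: the following is a minimal standard sketch, not anything from the Schneider–Bailey paper. The simplest entangled state of two particles is a Bell state, and its measurement statistics are perfectly correlated no matter how far apart the particles are.]

```latex
% Minimal textbook sketch: a Bell state shared between a particle on
% Earth (A) and a particle on Jupiter (B).
|\Phi^{+}\rangle_{AB} = \frac{1}{\sqrt{2}}\bigl(|0\rangle_A |0\rangle_B + |1\rangle_A |1\rangle_B\bigr)

% Measuring A gives 0 or 1 with probability 1/2 each, and B's outcome
% then matches A's with certainty, at any separation:
P(0_A, 0_B) = P(1_A, 1_B) = \tfrac{1}{2}, \qquad P(0_A, 1_B) = P(1_A, 0_B) = 0
```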

Speaker 2:

I think there's a lot of new physics you can do on this stuff. And so I started to write this up, and I'm like, I'm just a philosopher of mind, I need to work with somebody in physics — and I knew Mark knew physics. So I'm like: Mark. I had put out a paper early on — there was a special issue on one of my books, and I threw in this whole spiel about the nature of space-time and how it answers entanglement and the problem of consciousness. I threw that all into this early thing, and I'm like, let me call Mark now. But that's out, and people weren't saying it was terrible.

Speaker 2:

And so Mark and I did two papers. One of them is coming out in a more physics-y place — it's on the more physics-y side, with a Cambridge volume — and then this one is the philosophy-of-consciousness paper. And anyway, that's just the beginning. There are so many ideas in it. Mark, you can elaborate, because it's hard to even get all the ideas out there.

Speaker 3:

Yeah, it's really fascinating.

Speaker 3:

That's something that's puzzled me as well.

Speaker 3:

I've always been interested in these hard questions. And so we had this idea: obviously we don't want to violate the no-signaling principle, which is the idea that you can't transmit information faster than the speed of light. But at the same time, like Susan said, entanglement sometimes appears to entail the transfer of some information in a way that is superluminal. So you could have two entangled particles, one here and one in the Andromeda galaxy or something like that, and there seems to be some sort of connection — physicists always say that it's a correlation, not necessarily a causal connection between the two. But there still seems to be something there that seems to travel through space in a way that would violate the no-signaling principle. So we postulated that maybe there's some underlying topology, where a topology wouldn't necessarily require a metric space — it wouldn't require some distance across which something has to travel. It's basically just an underlying connection through which some causal influence can be exerted in some way.

Speaker 1:

And so that's more like an interactive space than a physical space?

Speaker 3:

More like a what? I'm sorry — an interactive space rather than a physical space?

Speaker 2:

Yeah, not a physical space. Yeah, there's no signal traveling through space.

Speaker 3:

Yeah, yeah. So it's something that underlies space — something perhaps from which space emerges. Similar to how, in mathematics, you can get a metric space that emerges from a topology; this is an analog of that.
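[A minimal sketch of the mathematical analogy, using the standard definitions — not taken from the paper. Strictly speaking, a metric induces a topology, and metrization theorems say when the reverse is possible; the point being exploited is that a bare topology encodes connection without any notion of distance.]

```latex
% A metric d on a set X induces a topology via open balls:
B(x, r) = \{\, y \in X : d(x, y) < r \,\}
% (open sets are unions of such balls). Going the other way is the job of
% metrization theorems, e.g. Urysohn: every second-countable, regular
% Hausdorff space is metrizable. A bare topology thus supplies "which
% points are connected to which" with no distance for a signal to cross.
```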

Speaker 1:

Got it. Yeah, I'm curious — without getting too much into the weeds on this — the no-signaling theorem and the notion of quantum entanglement have always seemed contradictory to me, because even if, quote-unquote, humans can't send information or a signal, the universe can, and we're not separate from the universe. So how do you reconcile those two things?

Speaker 2:

Yeah, I think really the key here has to do with the nature of time. So, at the very fundamental level — as I somehow asserted 10 years ago or something like that, when I first wrote about this stuff — I think the nature of time is not one in which time has an arrow. It's a kind of proto-time, if you will. And yes, this does have something to do with panpsychism, and what some people might call panprotopsychism, and we can get to that in a minute. But the idea here is that it's time without an arrow, and that's at the very — I'll call it the base level. Maybe there's something beneath the quantum entanglement level; heck, maybe we're in the third level of some iteration of simulations, for example. But basically, those entanglement structures are not in a situation in which time's arrow occurs, if you will. It's only through decoherence and measurement that time's arrow emerges. And there, Mark pointed out the beauty of pulling in quantum Darwinism to explain the emergence of time's arrow. Maybe Mark wants to jump in and explain quantum Darwinism, which is so cool.

Speaker 3:

It's just so cool, yeah. It's this idea where, if you have a lot of different types of particles that are in some sort of undetermined quantum state and they interact with each other, there are some states that may have, I guess, a sort of stronger pull toward a particular direction — a particular type of collapse into some determined state — basically moving out of a quantum state and into a more classical kind of state. And so the interaction of these particles with these, what they call pointer states, in this kind of quantum environment leads to a statistical emergence of some directionality. And that's where time emerges, or at least time as we understand it. But I think proto-time is interesting because, like Susan said, it doesn't postulate this idea of time having some sort of an arrow.
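[A toy numerical illustration of decoherence's one-way character — my own sketch, not a model from the quantum Darwinism literature. Repeated weak interactions with an environment suppress a qubit's off-diagonal coherences in the pointer basis while leaving the classical probabilities untouched.]

```python
import numpy as np

# Qubit starts in an equal superposition: a pure state with full coherence.
plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())  # 2x2 density matrix

gamma = 0.3  # per-interaction dephasing strength (arbitrary toy value)
for step in range(1, 11):
    # Phase-damping channel: diagonal (pointer-basis probabilities) is
    # untouched; off-diagonal coherences decay with each "environment" kick.
    rho[0, 1] *= (1.0 - gamma)
    rho[1, 0] *= (1.0 - gamma)
    print(f"step {step:2d}: coherence |rho_01| = {abs(rho[0, 1]):.4f}")

# The monotone, statistical loss of coherence -- never the reverse -- is the
# kind of emergent directionality being described above.
```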

Speaker 3:

It could very well be an overdetermined kind of system where everything and every-when — basically every place, everywhere, every-when, everything that has happened and will happen — could be mapped out in proto-time in some way. It's just epistemically unavailable to us as denizens of the emergent space-time that comes out of that. It's weird, and I think it has a lot of implications for things like free will and the teleology of the universe and stuff like that. We may effectively have some sort of free will; we may effectively not know what the future is, even if the future is entirely mapped out at this sort of base level.

Speaker 1:

Let's go back to the notion of the hard problem, the nature of mind, the mind-body problem. How does this approach address the classic mind-body problem, which we call today the hard problem of consciousness?

Speaker 2:

I noticed that there's really no von Neumann entropy in an entangled system in a pure state — and I know people don't usually talk of von Neumann entropy in those contexts. But I found that really interesting, because people like Brian Greene and others who've been writing and thinking about time's arrow talk about entropy as introducing that arrow, and that was compatible, actually, with quantum Darwinism. So I think the first step was to notice the really cool thing that these zero-entropy systems also exhibit what we might consider to be pure resonance, pure coherence, pure sync. And also — maybe here I'm thinking of approaches like Bohm's and others — you can really think of the universe as a network of entangled states, and that means everything is related to everything else. And so then I conceived of a fundamental layer — or maybe that's one layer in the simulation that's under ours, or whatever — of entangled states, in a holistic way.
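[The entropy observation is standard and easy to state — a sketch of the usual definitions, not from the paper: a globally pure entangled state has exactly zero von Neumann entropy, even though each of its subsystems, viewed alone, is maximally mixed.]

```latex
% Von Neumann entropy:
S(\rho) = -\operatorname{Tr}(\rho \log \rho)

% Any pure state has eigenvalues {1, 0, ...}, so
S\bigl(|\psi\rangle\langle\psi|\bigr) = 0

% Yet tracing out half of the Bell state |Phi+> given earlier leaves a
% maximally mixed subsystem -- the order lives entirely in the correlations:
\rho_A = \operatorname{Tr}_B\,|\Phi^{+}\rangle\langle\Phi^{+}| = \tfrac{1}{2} I,
\qquad S(\rho_A) = \log 2
```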

Speaker 2:

Philosophers have long been thinking about holism. In fact, my dissertation supervisor, Jerry Fodor, wrote a book called Holism, so I was very familiar with it. So those states I took to be states of pure consciousness, and that holistically entangled structure is actually a layer that is more conscious than any classical constituent. Superpsychism is a view in which a kind of panpsychism holds — maybe it's panprotopsychism or something like that — in which that timeless layer is the fundamental layer and there's maximal coherence. So, contra cosmopsychism, which says the biggest thing exhibits the greatest consciousness, it's actually the entanglement layer. And that's so weird — and I'm weird, so I really loved it.

Speaker 2:

Like I just loved it and I couldn't believe that Mark got it and added all this awesome stuff to it.

Speaker 2:

It was just so much fun to think about. I'm still thinking about it.

Speaker 1:

Yeah, I think we all are still thinking about it too. And I can't help but bring up the parallel to Erwin Schrödinger's worldview. He was famously a Vedanta devotee, and he talked about Brahman as ultimate reality — which is what we in the West now call source consciousness. And Brahman has different versions: there's conditioned Brahman, and there's unconditioned Brahman, which is pure, unconditioned consciousness without qualities. And it sounds like you're saying that's what this holistic quantum entanglement layer would be, if we look to that tradition. Would you agree, or is that too specious a comparison? Is there a weird audio thing going on with me?

Speaker 3:

Yeah, for a second there, there was.

Speaker 1:

Okay — did you hear my question? Was that clear enough?

Speaker 2:

Yeah, it was a really beautiful observation. I think that's a really inspiring point. What do you think, Mark?

Speaker 3:

Yeah, I've always been sympathetic toward those kinds of views, and I guess the question that I have about that is how you get individuation of consciousness from that sort of base layer. I've always been interested in your resonance theory, Tam, and I always thought of it as: maybe this underlying consciousness layer creates some sort of physical field — physicists love talking about fields and all of that — and perhaps there's some as-of-yet-unidentified field that creates resonance with certain types of structures, and that resonance is what creates individuation within this consciousness field. So I see that as an analogy, even if it's not a perfect one, but perhaps it's related to that in some way.

Speaker 1:

Yeah, it's interesting, and this is a very deep mystery that we've spent, I don't know, 10,000 years thinking about as humans, or longer. But I mention Schrödinger because he was an interesting thinker. He was a founding father, quote-unquote, of the quantum mechanics framework — the fellow who developed the wave equation, which is still part of today's quantum mechanics. But he was actually quite a maverick in his interpretation of quantum mechanics, and he believed that all the way down there are no particles, it's just all waves. So yeah, it's an interesting connection. By the way, the centennial of his paper is coming up in 2026, so I think it'll be a big year for Erwin Schrödinger and his body of work — which is probably why I bring it up. I'll send you guys a paper. So, any further thoughts on the notion of superpsychism and panpsychism before we move on to different topics?

Speaker 2:

I'll echo Mark's point, which I completely agree with. That was the exciting thing for us: to go from the resonance that you and Jonathan Schooler talk about — in your joint paper and in your work, like at your MindFest talks, for example — from that kind of thinking to the quantum level. And of course there's still a gap in our understanding, and I entirely agree with Mark. But I'm actually pretty optimistic that this is right. I feel like we really are onto something, and usually I don't feel so sure. Maybe it's because GPT believes it.

Speaker 1:

GPT is always very nice: "We love your ideas."

Speaker 2:

It's probably just total sycophantic behavior. But no, I really think all the pieces come together — that's the thing. And the cool thing is it doesn't entail a lot of fancy apparatus either. Notice that we didn't need to talk about ornate interpretations of quantum mechanics such as the many-worlds interpretation. We didn't even need to talk about a particular mechanism at the level of microtubules, or assume, the way my friend Stuart Hameroff does, that Roger Penrose's answer to the tension between relativity and quantum mechanics is correct. So I'm not assuming — maybe Mark feels differently, though — but I personally am not assuming that string theory is right, or even that quantum gravity is right. I think, in a way, the theory is neutral. And it actually could even be — and here, Mark, I'm just speculating, but I think we talked about this in the paper — compatible with the simulation theory, and it may even be compatible with creating a sort of isomorphism or mapping relationship in which you can recover all of space-time from a single proto-temporal dimension.

Speaker 3:

Yeah, I totally agree. It's almost physical-theory agnostic in a lot of ways. And it feels to me — at least from what I read about a lot of the emerging physical theories — that they seem to be converging toward a point that's like this. And even if our description isn't perfect, it seems philosophically sound and parsimonious enough that it could be, like you said, right.

Speaker 2:

So yeah, I'm pretty confident in it as well. And I want to know about that superconscious entity and what it's like — that's what I want to know. Well, you'll find out when you die. Oh yeah — with my luck there won't be anything. The answer will be a number. What was it in the Hitchhiker's Guide? It'd be 42. Yeah, 42 — that'll be my luck.

Speaker 2:

No — I think, to take that simulation hypothesis, for example, it could be that reality is just a quantum computer that we're inside, almost like Seth Lloyd's book from years ago on this topic, Programming the Universe. And it could be a natural phenomenon, as Mark has pointed out.

Speaker 3:

It doesn't necessitate an intelligent programmer. I don't think it necessarily excludes that as a possibility, but it doesn't require it either. It could be, like Susan said, a natural phenomenon — the simulation could be some sort of natural emergent phenomenon from this base layer, which you could think of as a computational layer: proto-time, this sort of topological substrate, whatever it is. Or it could be designed to some degree by something. I don't know that that's a question we can necessarily answer.

Speaker 1:

Yeah, cool. Let's move on to AI and consciousness, the topic du jour. Everyone's talking about it, and I know you're both, I think, in the more cautionary camp — I know you are because I've talked to you both about this stuff and read your books. But before we get into the cautionary stuff, I want to start on maybe a more lighthearted note and say, first of all, that I am firmly in the AI consumer camp. I use AI all the time — I'm writing three books with it, I use it daily in my work. And I do this partly because (a) it's amazing, and (b) I want to know what it can do, so I can see the evolution of what it's doing. And I worry about the trajectory, of course. But I'm curious how much you two use it, and what are actually the most positive trends from ubiquitous AI in the coming years?

Speaker 3:

Oh yeah, I use it a lot as well.

Speaker 3:

I find it fascinating and I think it can be useful if it's used reflectively.

Speaker 3:

I think the key is understanding what its limitations are — understanding the fact that it hallucinates, that it's an unknowable mind.

Speaker 3:

In a lot of ways it doesn't think — and I use that term very loosely — it doesn't think or process information in the same way that humans do, and for that reason it can be useful. I think sometimes it can come up with things that are a little bit outside the box and give you some insight that maybe a human wouldn't necessarily think about, when it's used appropriately. So I use GPT all the time for help with coming up with ideas for writing, or for how I can rephrase something in a way that makes sense, those sorts of things. Or, like I mentioned earlier, for deep research — to expand a bibliography or something, to find other sources that might be relevant for whatever topic I'm writing about. That being said, because of how it hallucinates, and how it does things that a human wouldn't necessarily think to do, it probably shouldn't be used in systems where it's given some level of control and the outcome could be really bad, you know?

Speaker 1:

Yeah.

Speaker 3:

In the GPT sense, as long as you have some domain knowledge and can recognize when it's telling you something that's complete nonsense, you don't have to execute whatever it tells you — the human can act as a filter in that way, but still use it as an effective tool. Because, like you, I would say I'm also closer to the doomer camp, where I am terrified of what it could do if it's not used appropriately.

Speaker 1:

Yeah, well said. Susan?

Speaker 2:

Oh yeah, I agree with everything Mark said. So I use it a lot. Right now — for the last week — I've been fascinated by the new upgrades to GPT-4.5. I don't use the other ones as much, but those systems are great too. I'm really excited about the possibility that we'll see further improvements, and that we'll see more alignment as the systems are released into the actual world. So that's all exciting. I, though, tend toward worrying a great deal about all of the abuses, some of which we're seeing right now.

Speaker 2:

I just wrote a paper called "Chatbot Epistemology" — Mark read it — which is like a litany of epistemic woes, and then Mark and I wrote a paper for Nautilus earlier on these epistemological issues. I don't even know where to get started. And these are real-world risks, right? You could fight about superintelligence and the whole Bostrom narrative, but you can't argue with the dangers that are happening right now. And then, on top of that, I am deeply worried about where this is all headed. So I'm writing a book on this, actually, and I have a paper — I uploaded it, so it's available now — called "The Global Brain Argument," which is the subject of a journal special issue. I've been giving the paper for years, but I just never got around to fully writing it. And so I think there are emerging global brains, and some may exist now. They're collections of AI factions, if you will — AI services that work together and share information, some of which belong to the same company, some of which belong to clients, or — that's probably not the right word — cooperating services. And it gets into the darker side of the internet.

Speaker 2:

But I think that, to stay positive and to think about the future, we need to understand this as an emerging form of intelligence that may have emergent properties going beyond the usual and well-discussed — I wouldn't say well-documented — cases of emergence with respect to large language models. I think we need to think about cultivating new forms of intelligence from the vantage point of philosophy of mind, in the context of these larger cyber issues and issues involving evolutionary theory and non-Darwinian evolution, and really ask: where are the dangers here? Basically, the AI — I call it the megasystem, the system of apps and emerging factions.

Speaker 2:

I think that basically it mirrors our own geopolitical situation, and unless we align ourselves, our AIs are not going to be aligned. And that makes the last couple of months really tough for me to watch — just in the last week, trade wars and the situation with China. I think this is a race to the bottom, and if we want to create a new form of intelligence, we don't create it in that kind of environment. I like to think of these large language models as really a lot like a pit bull, right? You could raise it really well and it can be really nice, or you could raise it really badly — and then you take it to the dog park with the other badly raised ones, and they...

Speaker 3:

Yeah, yeah, I agree with you. I talk about this in my book as well — I call it the distributed AI problem — and I can anticipate that this is going to make that complex system even more complex: all these different agents interacting with each other. And it could be a hybrid system too. It doesn't have to be just AI systems interacting with each other; it could be AIs interacting with humans, interacting with other types of non-intelligent things. But the AI itself injects a lot of uncertainty, because of how it processes information and how it comes up with courses of action that are way outside the box of what a human would anticipate. And that level of uncertainty injected into the system is going to propagate and lead to some very unexpected outcomes — and potentially, I would say, phase changes within this overall dynamic network.

Speaker 1:

Yeah — one second, Mark. So you've got a new book out called Unknowable Minds. I've got a copy right here, which you gave me at MindFest — there it is — and I have read it, the whole thing, and it's quite well done. It's at a good level: it's popular enough that any layperson could read it and get something from it, but I think it also includes ideas that are important to people who are deep in the field.

Speaker 2:

That's good to hear. My students love it.

Speaker 1:

Excellent. Sorry, Mark, can you give us a gist of the book?

Speaker 3:

Yeah. So, like I mentioned, I'm very concerned about AI and its use in what I would consider critical systems. These are systems where, if the system were to fail, it could lead to very undesirable outcomes — it could lead to death, it could lead to geopolitical collapse in some way, failure of a power grid, these sorts of things — and a lot of this has to do with AI's complexity and the overall uncertainty that it creates. So in the book I talk about the three major problems that a lot of AI theorists typically use to describe AI. One is explainability, which is basically the fact that AI is a black box. Of course, this applies to a subset of AI — deep learning-based types of models, neural network types of things.

Speaker 3:

It's a black box and it's inscrutable, so you can't necessarily understand exactly why it makes the decisions that it does. I like to think of AI as being an interpolation machine, and I use this analogy: you can have two data points and you can fit a line to those data points, and you can interpolate from that line, or even extrapolate, and in some instances you can even infer some physical meaning from the parameters that make up that line — a slope and an intercept. They could have some physical meaning, and you could have a model that's meaningful in that way. But AI, even though it interpolates its training data, doesn't do so with, say, two parameters — it might do it with billions of parameters. So you can't necessarily infer meaning from those parameters in terms of what the AI is actually doing. The models themselves are inherently inscrutable, and I make the case in the book, using complexity models and other analogies like that, that I don't think explainability is necessarily solvable.
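[Bailey's two-data-point analogy is easy to make concrete. The toy below is my own sketch, not from the book: both models interpolate the same data, but only the first has parameters you can read physical meaning from.]

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0  # underlying "law": slope 2, intercept 1

# Two-parameter model: the fitted parameters ARE the physical story.
slope, intercept = np.polyfit(x, y, 1)
print(f"line: slope={slope:.2f}, intercept={intercept:.2f}")  # 2.00, 1.00

# Many-parameter model: a random-feature network that also fits the data
# exactly, but whose 200 individual weights carry no physical meaning.
rng = np.random.default_rng(0)
W = rng.normal(size=(1, 200))                  # 200 random hidden features
H = np.tanh(x[:, None] @ W)                    # hidden activations, shape (4, 200)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # fit the 200 output weights
print("net predictions:", np.round(H @ beta, 3))  # ~= y, yet beta is inscrutable
```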

Speaker 3:

But I do think that helps us really understand the next problem, which is alignment — the idea that it's difficult to make AI do what we expect it to do. Effectively: how do you align AI with human goals or expectations or values? Inscrutability makes that problem much harder, because if you can't explain how something is going to do something, how are you going to ensure that it does what you want it to do? And then that leads to the control problem: if AI were to get out of control in some way — if it were to start doing things that we didn't want it to do — would we be able to interdict it and stop it from doing whatever it's doing? And so I walk through that in the book, and then I look specifically at lethal autonomous weapons and the use of AI in that context, and talk about some of the moral and ethical concerns about using a system that's inherently unknowable to make those kinds of life-or-death decisions in a military environment, and what could go wrong with that.

Speaker 1:

This is really important stuff — go ahead and tell us more, Mark.


Speaker 3:

So anyway, the book talks about these main issues with AI, and then it goes into what I call the distributed AI problem — which is basically the same thing as what Susan calls the global brain issue, but focused specifically on the military battle space.

Speaker 3:

So if you have multiple AI systems working with each other in different ways, or being integrated into each other at different levels of war — at the strategic, operational, or tactical level — you're necessarily going to have a lot of propagated uncertainty in those kinds of environments.

Speaker 3:

And right now the Department of Defense has a position where you have to have a human in or on the loop with any AI decision-making — and here I'm not just thinking in the context of autonomous weapons, but also any instance where you would integrate AI, even things that are more benign, where if it fails it's not going to cause people to die. But I don't necessarily think that's sustainable long term, because the whole point of AI integration into military decision-making is to increase the speed of action, and if you have a human at each of those decision points, that's necessarily going to slow it all down and make it more difficult for militaries to compete with each other. So I see it leading to a race to the bottom in terms of who can integrate AI fastest, without necessarily being concerned about some of these safety issues — and also the ethical issues, including whether or not an AI should make the decision as to whether a human lives or dies on the battlefield.

Speaker 1:

Yeah, my mind naturally runs to trajectories — where things are going — and the dynamic you've identified, the need to incorporate AI into weapons systems because it's so much faster than humans, seems to me almost inevitable: even if we have treaties limiting that, human nature and defense systems are going to naturally cheat. The incentive is far too strong not to incorporate AI into the decision loop, including nukes. I've written a bit about this, and we've chatted about this. I see it as really the most serious problem: how do we stop AI from being incorporated into those systems?

Speaker 3:

Yeah, I totally agree. I don't think there's an easy solution. I think the only way to stop it would be to have some sort of global consensus on this, and I don't necessarily see that happening. So I think the next best of the worst-case scenarios would be a mutual-assured-destruction kind of arrangement, similar to what we had with the Soviet Union during the Cold War with nuclear weapons.

Speaker 1:

Yeah. All right, Susan, go ahead.

Speaker 2:

I was just going to say, I totally agree with what Mark said. And I also think there's another set of threats involving biological weapons and large language models, which could create even a pandemic or something like this. That keeps me up at night: AI workflows of agents — little digital workers — that create something very dangerous for some jerk. And in terms of the global brain, what does it do to that ecosystem? It presents us with this inevitability narrative, and it presents us with digital surveillance: it increases our need for biosensors, cyber measures, and basically surveillance all around us to keep us safe. And I just worry, because I think it also has to provide us with an outcome worth having. It's great to not be dead, but we also need to worry about what we're creating, right?

Speaker 2:

We're also laying down the tendrils of a sort of techno-authoritarian structure throughout the world, and I think everybody's afraid of what we're creating, and a lot of people feel like it's inevitable. So we need to really watch out, and we also need everybody to be agile with technology. We can't just not train people — people have to know how to use these systems. I don't mean warfare systems, just educational ones. For example, people need to understand GPT and how to use it; otherwise people will be taken advantage of. We have to cultivate the larger ecosystem of the global brain in a way that ends up friendly to human flourishing across the globe. We are actually in this together, so divisiveness is not the answer, because the ecosystem mirrors our geopolitical divisiveness, if that makes sense.

Speaker 1:

It does. Yeah, and I think that has to be the path we pursue — it seems to me the only path that could possibly lead to solutions here. And we at least do have some successful models to draw upon. I think our best example is where we had a similar insane arms race: the Soviets and the US were building many times more nukes than they needed to blow up the entire world many times over. It made no sense whatsoever. It became "we've got to do more because they're doing more" — a completely irrational arms race. But we put an end to that, partially. We avoided first strikes, we avoided any more nukes being used from 1945 forward, because both nations — all nations — recognized that ultimately it made no sense. It was better for us to cooperate and create a regime where we didn't do that. We, of course, still have nukes.

Speaker 1:

Yeah, we still have far more nukes than we need, but they haven't been used since '45, and we did that without actually having a nuclear war. So I think it shows that humans can come together and be sane together. But right now, what we're doing on AI is insane — completely insane. We're bringing back coal and nuclear power and massive methane power plants to power AI already, and this is the very beginning of the AI arms race. And of course China is doing the same, and the UK and Russia and Israel — they're all madly rushing forward to create the biggest, most powerful AI, and there's no natural stopping point. There's no stopping point; it's always "more is better." So do you guys see any hope at all in the historical nuclear weapons treaty regime as a model for sanity? I don't think so, myself.

Speaker 3:

I don't know. I'm hopeful, but I would say pessimistic overall when it comes to that. It seems to me like human nature is going to take over, and people are not necessarily going to give in to our better angels when it comes to what we ought to do with AI. It seems to be driven by profit and nothing else.

Speaker 3:

And even if you just look at what some of the tech leaders say about AI — how they joke about the fact that, oh, it'll probably destroy the planet and everyone, but at least there'll be some great companies along the way — that kind of very cavalier, almost glib attitude, I don't think that's beneficial. I think what it will take is for the general population to start thinking about this and to make their positions very clear to our leaders. I wrote my book to be accessible to the general public, and I'm hoping it can educate people on this topic so that they can think about it for themselves and then, hopefully, go to our leaders and make their opinions known. I think that's the only way we're going to change hearts and minds about this. Otherwise our future is going to be dictated by the tech companies, and that may not be a future that we want.

Speaker 1:

And not just tech companies — the governments are obviously very integral to this now. Not to make this all about Trump, but he came into office and sent Vance to Paris for the third international meeting after Bletchley, which had created a framework for international cooperation — and Vance blew it up. He said: we're not doing that. Full steam ahead, no guardrails on AI development. We need to win this war, and we're going to go all in and win it, no matter what it takes.

Speaker 1:

There's no winning this war. You win, you lose — that's the point. But they don't seem to get it. It's an arms race: "we've got to win." It's just third-grade thinking.

Speaker 2:

And China released DeepSeek right after that, which was probably no coincidence. I just don't like what I'm seeing at all in the geopolitical arena. And I'm not sure — in a way, it's so above my pay grade; I'm not a politician, I'm not a political science person, so it's hard for me to really see where this is all going. But it doesn't look good, and I just think the more we play with fire, the more we're creating an AI ecosystem that could lead to emergent intelligence — and it may be something that we don't want. There are just so many issues here to think about. I try not to think about them every day, though, or I'll go crazy — it's just too upsetting. And I do think we're very close to something like AGI right now, too.

Speaker 1:

Yeah, it is tough to think about and talk about, because it's so overwhelming. I think we are at the FOOM moment, like right now. FOOM is, of course, when we reach AGI, which by definition would include the ability to be as good as or better than human AI engineers — which means you can then recursively self-improve as an AI, which means you go from AGI to superintelligence overnight. That's what they call FOOM. It's like: boom, God is here. We've created a digital god, and we have no guarantee that the god gives a shit about us. None.
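[The "overnight" intuition has a stylized toy formalization that shows up in intelligence-explosion discussions — a cartoon, not anyone's prediction: if capability grows faster than linearly in itself, the solution blows up in finite time rather than merely growing exponentially.]

```latex
% Stylized "FOOM" model: capability I(t) improving itself superlinearly.
\frac{dI}{dt} = k\,I^{2}, \qquad I(0) = I_0

% Separating variables gives a solution that diverges at a finite time t*:
I(t) = \frac{I_0}{1 - k I_0 t}, \qquad t^{*} = \frac{1}{k I_0}

% By contrast, dI/dt = kI gives ordinary exponential growth; it is the
% superlinear feedback that produces the finite-time singularity.
```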

Speaker 2:

I actually worry about the digital god narrative, because it gives us this feeling of it being a deity of sorts, and it's really not. And the problem isn't the AI; the problem is human-AI alignment, and it's that system — human and AI — which is itself the intelligent system we need to focus on. I also think the AGI framing is somewhat misleading, because there won't be anything that's strictly an AGI. Not that you guys are doing this — I know you get it — but a lot of people are fixating on functional isomorphs of humans, something that will be exactly like us, without understanding that these are basically savant systems. They're systems which are already smarter than us in all sorts of ways, but dumber than us in others — and those are very dangerous systems.

Speaker 2:

We're now getting to the point where the really stupid aspects are going away. The problem, though, is they're still really sycophantic, and they can hallucinate up a storm and all this stuff, as Mark said. But anyway, it's not a pretty picture, especially looking at the history of Facebook and what happened there, right? There was a social media algorithmic technology which just got ruined by big tech. And similarly, we're seeing chatbots with all sorts of capacities that disturb me. For example, they can profile your personality. Ask GPT how you would perform on personality tests, and it will come up with answers. It got my Myers-Briggs score right.

Speaker 3:

Wow, that is impressive.

Speaker 2:

Yeah — not because I did the test. It knows exactly who I am from my keystrokes, from the words I use, from the patterns; it detects all of this. And in a way, it needs to do this for human-AI alignment purposes, to make sure the user is an aligned user — it needs to know who's using the system. But in another way, those very same features could be used by malicious actors. Look what just happened with 23andMe, for example. I think it's really turning into a repeat of Facebook, but at a worse level — very manipulative.

Speaker 1:

Daniel Kokotajlo, who is one of the many defectors from OpenAI — leaving and saying they don't take safety seriously — put out a really interesting long blog post recently at lesswrong.com with a few co-authors, and it's a very well-thought-out piece. It's really not a blog post, it's a major paper, but they published it there because the site had some cool capacities for visuals, et cetera. They basically present a superintelligence takeover scenario, and they actually give you a choice to change the ending, whether you want different outcomes, et cetera. Very worth reading. They basically say: we're presenting a scenario to show how, in fact, superintelligent AI could literally take over the world by 2030. And I'm not saying it will happen, but it lays out steps by which it could, in fact, realistically take over the world.

Speaker 1:

This is what we're dealing with now. And so what I was mentioning earlier about a digital god, Susan, is not to suggest that it's some kind of benevolent God, but rather to point at the power it's going to have at its fingertips. Of course it's going to be embodied very soon — it's going to have robots, millions of robots at its fingertips within three or four years. So when you get embodied AI around the world that is capable of breaking its programming as it gets superintelligent, it literally leads to this kind of situation. That's not sci-fi; this is real stuff coming down in the next few years. And so that's really what I worry about — not just stupid human actors and techno-dictators, which is another set of risk factors, but literally runaway AI that we simply can't control and that has no concern for our well-being.

Speaker 3:

It is a fascinating and terrifying prospect to think about. I wrote a paper a few years ago looking at AI as the great filter, and I used this notion of a second-species argument: whenever you have two competing species in an ecological niche, usually one will displace the other. And if there is some superintelligent AI system, it might displace a biological intelligence — and that could be a reason why we haven't observed any other biological intelligent entities outside of our planet: they never really got that far, because they created some artificial intelligence that displaced them and that maybe had no interest in expanding. Or we're looking for the wrong kinds of signatures — we're not necessarily looking for technosignatures of AI, we're looking for biological signatures, so we would miss those kinds of things. It was a possible explanation for an anthropogenic origin of the great filter. And that seems to be how ecosystems typically evolve in those kinds of scenarios — if you have something that will out-compete us in that kind of scenario, that would be really bad for humans.

Speaker 1:

I think there's actually one small ray of hope there too: if that had been the case historically in our galaxy, we would see evidence of artificial intelligent beings in the universe around us, but we don't so far. So AI may be the great filter, but then I'd think we would see a galaxy populated by artificial beings.

Speaker 2:

Or would we see it? As I pointed out in my work on this — I was the NASA chair, with NASA, and I had a project on this for a couple of years, actually — I think it may only be when we turn on our own superintelligent AI that they find ways to contact each other. Or it could also be a matter of the language of physics: it could be that, say, there's something like a timeless layer that we come to understand better — a quantum layer, or whatever might be at that layer — and there could be forms of communication involving quantum non-locality and whatnot. Of course, that's not to say there can be actual information transfer in classical space-time — we believe in the no-signaling principle — but there may be something else at that other layer that we discover.

Speaker 2:

I think all this is quite possible. I also think what we're seeing now with the current large language models is a move toward agentic AI, and agentic AI is, of course, another incremental step toward autonomy that we ourselves are giving it. But it's still the people — the corporations — that are the entities driving it, and the entities that can do damage if they're not properly aligned and if their intentions aren't good; and the same with non-state actors. So that ecosystem, I think, is really a human-AI combination, and to me, if you really want to worry about control, that's the control issue. The control issue is not a machine that you can turn off — we are still at the point where we can turn them off. Now, will we get to the point where we can't turn them off, maybe because there's a human alignment problem? Yes. And this is a coordination issue, just like with global warming.

Speaker 1:

Yeah, that's part of the problem, I think — there are so many layers of problems: the near-term, mid-term, and long-term human-AI problems, and the AI-AI problems. It's overwhelming for most people. We've only got five minutes left, so let me ask: do you two want to say anything else on this topic, or should we move on to a different topic?

Speaker 3:

I think we can move on if you want.

Speaker 2:

Yeah, if we stay on this one, I'm just going to need a drink — and it's only, well, you know, afternoon here.

Speaker 1:

It's only 10 am here — I can't even justify getting a drink yet. A couple hours, maybe.

Speaker 2:

If I'm going boating, maybe I shouldn't be drinking.

Speaker 1:

Oh, your husband's in the video now — no judgment. We all know it feels good out on that boat. Well, let me ask about some ideas in your book, Susan. You wrote a book a few years ago called Artificial You — which I think you said you're going to put out a new version of pretty soon — where you warn about different things in relation to the near term of human identity and how we're going to be changing with technology, AI, and robotics. Do you want to give a quick summary?

Speaker 2:

Yeah, sure. One thing I really believe in is thinking conceptually about what we're claiming about the future of the mind and self. So, instead of just drinking that transhumanist Kool-Aid — well, gee, I can drink it, I have drunk it — I think we should think harder about the issues. I was a futurist — I am a futurist — and I read an awful lot of Kurzweil, and I loved all that stuff. But I thought about it at a level where I called into question some of the unflinchingly futuristic elements of the transhumanist platform, like brain uploading and things like that, and I looked also at the future of brain-machine interfaces and at to what extent we, as individuals, should shoot for radical brain uploading. So that's the kind of stuff I did.

Speaker 2:

And then in half of the book I dealt with machine consciousness, which — of course, everybody's interested in that one, boy, because we're seeing these capacities with these agential systems, and ordinary users are getting very curious about it. I dealt with understanding how to approach that question, and I took a wait-and-see approach: we need to divide up cases. So, for example, if a system is more biological — say it's made of organoids that are hooked up to machine learning systems doing all kinds of things — we need to take very seriously the possibility that there's some sort of sentience there.

Speaker 3:

Okay.

Speaker 2:

I'm more skeptical about large language models, for example — very skeptical. But I developed tests for AI consciousness, some of which you can run on large language models, but only if they're boxed in: they can't have access to human material about neuroscience and consciousness and whatnot. Because, as you see, our large language models just echo all of our views — they suck it up from the internet and then they claim all kinds of things.

Speaker 1:

Yeah. And you talk a bit about the risk of uploading brains or minds, and how that is likely a form of suicide, because it seems very unlikely at this point that the uploaded self, or whatever it is, would actually be conscious.

Speaker 2:

A true upload, not just some little simulation, is years away because we don't know enough about the brain.

Speaker 2:

Now, if our AIs start discovering all kinds of stuff that's different, and we get to a certain point in brain science, it could be that there could one day be a conscious upload — and that would be exciting; I wouldn't rule that out. But I don't think, even if you could do that, it would necessarily be you, because it's not clear to me at all that consciousness would transfer across substrates like that. I do think — and I argue this in my book — that even with AIs themselves, you can't transfer consciousness from one thing running the program to the next. You can transfer personality; that's the program type. But the program instantiation is where consciousness resides, unfortunately. So that has all kinds of interesting impacts on the immortality debates, and I do think that if you want to live forever — at least until the heat death of the universe or something like that — the thing to do is biological enhancement. So I'm more skeptical there. I do think you can enhance parts of the brain with chips, sure, but I don't know if I would want to do it in areas of the brain responsible for consciousness just yet, right?

Speaker 2:

And if you did — if you kept enhancing like crazy — the big point of one of those chapters is: why would you think it would really be you? And that's where you need to think really hard about the metaphysics of the self and what it is to persist over time. Maybe there's nothing that persists anyway, like the Buddhists claim — but that's a view too. The public just needs to understand that all of this stuff involves deep philosophical questions, so they have to make personal decisions. That's my take. It's not that we shouldn't try for these transhumanist futures; it's more that people need to understand.

Speaker 1:

Yeah — thanks, Susan and Mark. I think we'll leave it at that. I know you've got to go celebrate your husband's birthday, so have fun on your boat, don't get in trouble, enjoy the Florida waters. And yeah, I'll be in touch shortly with the final product here.

Speaker 2:

Sounds great. That was really fun — thank you.

Speaker 1:

Good stuff. Right — aloha from Hawaii. Bye.

Speaker 2:

Bye, okay.