
The TED AI Show · Civilisational risk and strategy

Could AI really achieve consciousness? w/ Anil Seth

Why this matters

This episode strengthens first-principles understanding of alignment risk and the strategic conditions that shape safe outcomes.

Summary

This conversation examines philosophy of mind in "Could AI really achieve consciousness? w/ Anil Seth", surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

Perspective map

Mixed · Society · High confidence · Transcript-informed

The amber marker shows the most risk-forward score, the white marker shows the most opportunity-forward score, and the black marker shows the median perspective for this library item.

An explanation of the Perspective Map framework can be found here.

Episode arc by segment

Early → late · height = spectrum position · colour = band

Risk-forward · Mixed · Opportunity-forward

Each bar is tinted by where its score sits on the same strip as above (amber → cyan midpoint → white). Same lexicon as the headline. Bars are evenly spaced in transcript order (not clock time).
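As a rough sketch of the tinting described above (the site's actual implementation isn't published, so the endpoint RGB values, the score range of -100..100, and the piecewise-linear blend are all assumptions), the amber → cyan midpoint → white strip could be computed like this:

```python
def lerp(a, b, t):
    """Linear interpolation between two RGB tuples, t in [0, 1]."""
    return tuple(round(a[i] + (b[i] - a[i]) * t) for i in range(3))

# Assumed strip endpoints (not the site's documented palette)
AMBER = (255, 191, 0)    # most risk-forward
CYAN = (0, 255, 255)     # mixed midpoint
WHITE = (255, 255, 255)  # most opportunity-forward

def tint(score, lo=-100, hi=100):
    """Map a slice score onto the amber -> cyan -> white strip."""
    t = (score - lo) / (hi - lo)   # normalise to 0..1
    t = min(max(t, 0.0), 1.0)      # clamp out-of-range scores
    if t < 0.5:
        return lerp(AMBER, CYAN, t * 2)        # amber half of the strip
    return lerp(CYAN, WHITE, (t - 0.5) * 2)    # white half of the strip
```

A score at the bottom of the range comes out pure amber, a midpoint score pure cyan, and a top score white, matching the legend's three anchors.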


Across 51 full-transcript segments: median 0 · mean -1 · spread -108 (p10–p90 -80) · 0% risk-forward, 100% mixed, 0% opportunity-forward slices.
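The headline numbers above (median, mean, p10–p90 band, and the risk/mixed/opportunity percentages) can be reproduced from per-slice scores with a short routine. This is a sketch under assumptions: the band cut-offs of ±33 are hypothetical, not the site's documented thresholds, and the percentile method is a simple nearest-rank pick:

```python
import statistics

def summarise(scores, risk_cut=-33, opp_cut=33):
    """Summary statistics for per-slice scores, mirroring the
    headline readout (band thresholds are assumed, not documented)."""
    s = sorted(scores)
    n = len(s)
    # Nearest-rank percentiles over the sorted scores
    p10 = s[max(0, int(0.10 * (n - 1)))]
    p90 = s[min(n - 1, int(round(0.90 * (n - 1))))]
    risk = sum(x <= risk_cut for x in s)
    opp = sum(x >= opp_cut for x in s)
    mixed = n - risk - opp
    return {
        "median": statistics.median(s),
        "mean": statistics.fmean(s),
        "p10": p10,
        "p90": p90,
        "pct_risk": 100 * risk / n,
        "pct_mixed": 100 * mixed / n,
        "pct_opportunity": 100 * opp / n,
    }
```

With 51 slice scores clustered around zero, this yields the pattern reported above: median near 0, mean slightly negative, and 100% of slices falling in the mixed band.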

Slice bands
51 slices · p10–p90 -80

Mixed leaning, primarily in the Society lens. Evidence mode: interview. Confidence: high.

  • Emphasizes alignment
  • Emphasizes safety
  • Full transcript scored in 51 sequential slices (median slice 0).

Editor note

Useful mainstream bridge episode for teams that need a shared baseline quickly.

ai-safety · consciousness · ted-ai-show · philosophy · society · intro


Episode transcript

YouTube captions (auto or uploaded) · video Dh9dlCqzJmM · stored Apr 2, 2026 · 1,492 caption segments

Captions are an imperfect primary: they can mis-hear names and technical terms. Use them alongside the audio and publisher materials when verifying claims.

No editorial assessment file yet. Add content/resources/transcript-assessments/could-ai-really-achieve-consciousness-w-anil-seth.json when you have a listen-based summary.

hey Bilawal here before we start the show I have a quick favor to ask if you're enjoying the Ted AI show please take a moment to rate and leave a comment in your podcast app which episodes have you loved and what topics do you want to hear more of your feedback helps us shape the show to satisfy your curiosity bring in amazing guests and give you the best experience possible in the rush to develop increasingly sophisticated artificial intelligence a big question keeps floating around you know this question how long will it take before some massive breakthrough some kind of Singularity emerges and suddenly AI becomes self-aware before AI becomes conscious but we're getting way ahead of ourselves lately reports from AI researchers suggest that AI models are not improving at the same rate as before and are hitting the limits of so-called scaling laws at least as far as pre-training is concerned there's also worries that we're running out of useful data that these systems require better quality and greater amounts of data to continue growing at this exponential Pace the road to a machine that can think for itself is long and it's starting to sound like it may be even longer than we think for now clever interfaces like chat GPT's advanced voice mode the one I experimented with in an earlier episode this season help give some illusion of a human at the other end of this conversation with an AI I was I was surprised by how much it actually delighted me and even kind of tricked me at least a tiny little bit into feeling like chat GPT was really listening like a friend would the thing is though it's a slippery slope we're building technology that is so good at emulating humans that we start ascribing human attributes to it we start wondering does this thing actually care is it actually conscious and if not now will it be at some point in the future and by the way what even is consciousness [Music] anyway the answer is trickier than you might think to unpack it I spoke with 
someone who's been tackling this question from the inside out from the perspective of the one thing we know is conscious the human brain one of my mentors the philosopher Daniel Dennett we sadly lost earlier this year he said we should treat AI as tools rather than colleagues and always remember the difference that's Anil Seth he's a professor of cognitive and computational Neuroscience at the University of Sussex he studies human consciousness and wrote a great book about it it's called being you a new science of Consciousness and that quote from his mentor it's something that sticks with him it sticks with me because I think we have this tendency to always project too much of ourselves into the Technologies we build I think this has been something humans have done over history and it's always got us into trouble because we tend to misunderstand the capabilities of the machines we build and also we tend to diminish ourselves in the process and I think the recent explosion of interest in AI is a very prominent example of how we're falling prey to this problem at this moment so this is why Anil's on the show today he's come to share why he thinks it's imperative we see AI as a tool not as a friend and why that difference matters to not only the future of this technology but also the future of human consciousness I'm Bilawal Sidhu and this is the Ted AI show where we figure out how to live and thrive in a world where AI is changing everything hi I'm Bilawal Sidhu host of Ted's newest podcast the TED AI show where I speak with the world's leading experts artists journalists to help you live and thrive in a world where AI is changing everything I'm stoked to be working with IBM our official sponsor for this episode now the path from gen AI pilots to real world deployments is often filled with roadblocks such as barriers to free data flow but what if I told you there's a way to deploy AI wherever your data lives with watsonx you can deploy AI models across any environment above 
the clouds helping Pilots navigate flights and on lots of clouds helping employees automate tasks on Prem so designers can access proprietary data and on the edge so remote bank tellers can assist customers watsonx helps you deploy AI wherever you need it so you can take your business wherever it needs to go learn more at ibm.com/watsonx and start infusing intelligence where you need it the most are your digital operations a well-oiled machine or a tangled mess is your customer experience breaking through or breaking down it's time for an operations intervention if you need to consolidate software and reduce costs if you need to mitigate risk and build resilience and if you need to speed up your pace of innovation the pager Duty operations cloud is the essential platform for operating as a modern digital business get started at pagerduty.com something about the way we're working just isn't working when you're caught up in complex HR requirements or distracted by scheduling staff in multiple time zones or thinking about the complexity of working in Monterrey while based in Montreal you're not doing the work you're meant to do but with dayforce you get HR pay time talent and analytics all-in-one Global people platform so you can do the work you're meant to do visit dayforce.com/dothework to learn more so Anil I've been thinking about how not long after we invented digital computers we started referring to our human brains as computers obviously there is a lot more to it than that but what is helpful and not helpful about describing our brains as computers it's clearly very helpful I mean there's my title my academic title is the professor of computational Neuroscience so I'd be rather hypocritical to say that it was not a useful way of thinking to some extent and there's a very Lively debate uh mainly in philosophy rather than Neuroscience or in Tech about whether brains actually do computation as as well as other things in fact the metaphor of the brain as a computer has 
clearly been very very helpful if you if you just look inside a brain you find all these neurons and chemicals and all kinds of complex stuff computation gives the language to think about what brains are doing that that means you don't have to worry so much about all that and of course at the beginning of AI there was this idea that intelligence might be a matter of computation Alan Turing famously asked the questions about whe whether machines can think and Universal Turing machines were were specified theoretically which can do any computation and the idea that well that might be what the brain is doing becomes very appealing also at the birth of AI Walter Pitts and Warren McCulloch realized that neural networks these simple abstractions of artificial neurons uh that are connected to each other that underpin a lot of the the modern AI we have actually serve as universal Turing machines so we have this this temptation this idea to think yeah the brain is a network of neurons networks of neurons can be Universal Turing machines and these are very powerful things so maybe the brain is a computer but I think we're also seeing the limits of that metaphor and all the ways in which actually brains might perform computations but they may also do other things and fundamentally you know we always get into trouble too when we confuse the metaphor for the thing itself I love that and I I think a big chunk of that is also we talk so much about sort of the supercomputing Clusters and just how fast technology is moving and we're almost you know losing some appreciation for the intelligence that's inside our craniums and to put it very plainly how much more complex is the brain today compared to even the most advanced AI systems I mean it's it's a totally different thing I think we we really do the brain a great disservice if we think of it purely in terms of sort of number of of neurons but even then there are 86 billion neurons in the human brain a thousand times more connections it's incredibly 
complicated even at that level also the the brain is is so intricate like the what the connectivity in one area might be slightly different from the connectivity in another area there are also neurotransmitters washing around the brain changes every time a neuron fires synaptic connectivities change a little bit it's not a stable um architecture and then there are all the glial cells and all the supporting gubbins that that we often don't even think about but but are turning out to actually be significantly involved in in the brain's function there was there was a recent paper in science I think that this gargantuan impressive effort to unpack in as much detail as possible one cubic millimeter of brain tissue in the human cortex in this one cubic millimeter you've got 150 million connections nearly 60,000 cells and and to store all that data in a standard computer was just an enormous amount to characterize and even this is just a you know it's not everything right this is just a very detailed model the brain is very complex very complex that is quite amazing what's also interesting about the sheer complexity in the brain is the brain doesn't sit in a vat right at least not usually and of course the brain works in concert with the rest of the body does that aspect of being embodied give humans any advantages over AI systems I think it depends what you want the system to do you're absolutely right brains didn't evolve in isolation they evolved in response to certain selection pressures what were those selection pressures they weren't sort of write computer programs or write poetry or or solve complex problems fundamentally brains are in the business of keeping the body alive and later on moving the body around so control of action things like that and those imperatives are fundamental to me to understanding what kinds of things brains are they are part of the body they're not this kind of meat computer that moves this body around from from one meter to another chemicals in 
the body affect what's happening in the brain the brain communicates with the gut and even with the microbiome we're seeing all these kinds of effects that transpire within the body and then of course the body is embedded in a world and there's always this feedback from the world and understanding these nested Loops of how the brain is embedded within a body and a body is embedded within a world I think that's a a very different kind of thing than the abstract disembodied ideal of computation that that drives a lot of our current Ai and of course it's also represented in a lot of Science Fiction we have things like um HAL from 2001 which okay there's a body as the spaceship but it's a kind of disembodied intelligence in many ways so then how important is it that we're embodied to have consciousness and intelligence and we'll get to the definitions in a bit because I'm curious what happens when you embody an AI and I'm of course thinking of all the humanoid robot demos that we've seen lately where it seems to be this crude representation of kind of what we do like we've got sensor systems that perceive the world and we build a map of it and then we can figure out how to take action in it this is a very this is a fascinating question I mean so far you the the AI systems that we have the ones we tend to hear about mainly anyway language models and and generative AI they tend to have been trained and then deployed in a very disembodied way but this is this is changing robotics is improving too it's lagging behind a little bit as it always does but it is improving and there are fascinating questions about what difference that makes one possibility that strikes me as plausible is that embodying an AI system so that you train it in terms of physical interactions don't just drop a pre-trained model into a robot but everything is trained in an embodied way might give us grounds to say that AI systems actually understand what they say if it's a language model for instance or 
understand what what they do because these abstract symbols words that we use in language there's a good argument that ultimately their meaning is grounded in physical interactions with the world but does this mean that AI systems not only are intelligent and possibly understand but also have conscious experiences that's a separate question and I think there's many other things that might be um necessary uh for us to think seriously about the possibility of AI being conscious I think that brings me to The Logical next question which is what is the difference between intelligence and Consciousness perhaps let's start with intelligence both intelligence and Consciousness are tricky to Define but most definitions immediately point to the fact that they're different and if we think about a broad definition of intelligence it's something like doing the right thing at the right time um slightly more sophisticated definition might be the ability to solve problems flexibly and whether it's solving a Rubik's Cube or a complex problem scientifically or navigating a social situation adeptly I mean these are all aspects of doing the right thing at the right time and importantly intelligence is something you can Define in terms of function in terms of what a system does what Its Behavior ultimately is so there's no deep philosophical challenge for machines to become intelligent in some way I mean there may be obstacles that prevent machines from becoming intelligent in this sort of General AI way which is the way that that humans are intelligent but intelligence fundamentally is a property of systems now Consciousness is different Consciousness again is very hard to Define in a way that that's going to get everyone signed up to but fundamentally Consciousness is not about doing things it's about experience it's the difference between being awake and aware and the profound loss of consciousness in in general anesthesia and when you open your eyes your brain is not merely responding to 
signals that come into the retina you know there's an experience of color and shape and shade uh that characterizes what's going on a world appears and and a self within it and Thomas Nagel I think has the the nicest philosophical definition which is that for a conscious organism there is something it is like to be that organism it feels like something to be me feels like something to be you now you can finesse these distinctions as much as you want these definitions but but I think it's already clear they are different things they they come together in us humans you know we we know we're conscious and we think we're intelligent so we tend to put the two together but just because they come together in us doesn't mean they necessarily go together in general as you describe Consciousness in this sort of subjective experience a term that keeps getting thrown around in AI circles now is like qualia right this notion of subjective conscious experiences and figuring out if large language models can actually have this um certainly they're good at like making it seem like they do especially the jailbroken models but it also takes me back to something else that you've talked about which is our perception of reality is sort of this controlled hallucination that we don't fully like perceive reality in this completely objective sense I don't know if that's the best characterization but I'm I'm trying to connect the dots there where it seems to be like even our experience of reality is kind of hard to grok and fully explain and so I wonder doesn't that point to us not being able to create a very you know clear definition to measure that in a synthetic system yeah I I think you can go even further actually I think there's very little consensus on well there's no consensus on what would be the necessary and sufficient conditions for something to have subjective experience to have qualia in this sense when you and I open our eyes we have a visual experience it's the redness of red the 
greenness of green this is the kind of thing that philosophers call qualia and there's a lot of argument about whether this is actually a meaningful concept or it's it's just something that we think is profound and it actually is it is just a wrong way of looking at the problem but for me there is there there you when we open our eyes there is visual experience however we label it as qualia or or something else but for a camera on my iPhone well no we we don't think there's any experiencing going on um so what is the difference that makes a difference and could it be that some kind of AI That's a glorified version of my camera on the phone would instantiate the sufficient condition so that it not only responded to visual signals but had subjective experience too I think that's the challenge we need to to face because as you say AI systems especially things like language models could be very persuasive uh about having conscious experience and again especially the ones where you you ask them to to whisper and and get around the guard rails in one way or another um they can really seduce our biases and uh so we can't just rely on what a language model says and if a language model says yes of course I have a conscious visual experience that's not great evidence for whether it's it's there or not so we need to think I think a little more deeply about what it would take to ascribe conscious experience to the system that we create out of a completely different material material is an interesting point you're bringing up sort of the substrate from which intelligence and perhaps Consciousness can emerge because what you are saying is that you know I think it seems clear that we could have a superhuman intelligence level AI system that isn't necessarily conscious but I do wonder when people make arguments like hey well if we just keep throwing more data and compute at this thing and it keeps getting more and more intelligent Consciousness will be this emergent property of this 
system and it almost has this like techno religious kind of fervor to it why do you think Consciousness might be uniquely biological why does the nature of the substrate matter I don't know that it matters but I think it's a possibility worth taking seriously now in a sense the opposite claim is is equally odd you know why should Consciousness be a property of a completely different kind of material you know why would computation be sufficient for Consciousness you know after all for many things the the material matters if we're talking about a rainstorm you know you need actual water for anything to get wet if you have a computer simulation of a weather system it doesn't get wet or windy inside that computer simulation it's only uh ever a simulation and the way you set it up is also very informative because there has been this this implicit assumption at least in some quarters that indeed if you just throw more compute and AI gets smarter in in ways which can be very very impressive and very very sometimes unexpected too that at some point Consciousness just arrives comes along for the ride and the lights come on and you have something that is also experiencing as well as um being smart I think that's a reflection more of our psychological biases than it is grounds for having Credence in in synthetic Consciousness because why should Consciousness just happen at a particular level of intelligence I mean you can you could make an argument that some forms of intelligence might require Consciousness and and those may be the kinds of intelligence that we humans have but that's a it's a bit of a strange argument because there are plenty of other species out there that don't have humanlike intelligence that are very likely conscious and there may be more ways to achieve intelligence than through what evolution settled on for for human beings which is having brains that are also capable of Consciousness so the question for me the fundamental question is is 
computation sufficient for Consciousness if we try to design in the functional architecture of the brain as it is and run it in a computer would that be enough for Consciousness or do we need something much more brain-like at the level of being made of carbon of having neurons of having neurotransmitters washing around of being really grounded in our living Flesh and Blood matter and I don't think there's well there there's just not a knockdown argument for or against either of these positions but there's to me good reasons to think that computation is likely not enough and there are at least some good reasons to think that the stuff we're made of really does matter given all of this you do believe that it's unlikely that AI will ever achieve Consciousness why is that I think it's unlikely but I have to say it's not not impossible and the first reason it's not impossible is that I may very well be wrong and um if I'm wrong and computation is sufficient for Consciousness well then it's going to be a lot easier than than I think but even if I'm right about that then as AI is evolving and as our technology evolves we also have these technologies that are becoming more brain-like in in various ways we have these whole areas of neuromorphic engineering or neuromorphic Computing um where we're Building Systems which are just sticking closer to the properties of real brains and on the other side we also have things like cerebral organoids which are made out of brain cells they're little mini brain type things grown in the dish they they're derived from Human stem cells and they differentiate into neurons which clump together and show organized patterns of activity now they don't do anything very interesting yet so it's the opposite situation to a language model you know a language model really seduces our psychological biases because it speaks to us but a a clump of neurons in a dish just doesn't because it doesn't do anything yet now for me the possibility of artificial 
Consciousness there is much higher because we're made out of the same material to the specific question why should that matter why does the matter matter it comes back to this idea about what kinds of things brains are and the fact that they're deeply embodied and embedded systems so brains fundamentally in my view evolve to control and regulate the body to keep the body alive and fundamentally this imperative goes right down you even into individual cells individual cells are continually regenerating their own conditions for survival they don't just take an input and transform it into an output and and in doing this now I think there's a pretty much a direct line from the metabolic processes that are fundamentally dependent on particular kinds of matter flows of energy transformations of carbon into energy things like that all the way up to these high level descriptions of the brain making a perceptual inference or as we said earlier a controlled hallucination a best guess about the way the world is so if there is this this through line from things that are alive and and why we call them alive all the way up to the neural circuitry that seems to be involved in in visual perception or conscious experience generally then I think there's there's some reason to think that Consciousness is a property of of living systems as you were answering that in my head I have this visualization maybe the future of this conscious AI system that we finally create isn't going to be a bunch of Jensen's Nvidia gpus and some data center but perhaps this like Giga brain that we build out of the very things that our brain is made out of that's uh one hell of a visual I got to say yeah I I think that's a possible future right because we're already on that track with with neurotechnologies um and and hybrid Technologies as well and people can plug organoids you know into rat servers people are beginning to do this already to sort of Leverage you know the dynamical repertoire that 
these things have um and nobody knows how biological a system needs to be in order to move the needle on the possibility of Consciousness happening it may be not at all or it may be a great deal indeed hi I'm Bilawal Sidhu host of Ted's newest podcast the TED AI show where I speak with the world's leading experts artists journalists to help you live and thrive in a world where AI is changing everything I'm stoked to be working with IBM our official sponsor for this episode now the path from gen AI pilots to real world deployments is often filled with roadblocks such as barriers to free data flow but what if I told you there's a way to deploy AI wherever your data lives with watsonx you can deploy AI models across any environment above the clouds helping Pilots navigate flights and on lots of clouds helping employees automate tasks on Prem so designers can access proprietary data and on the edge so remote bank tellers can assist customers watsonx helps you deploy AI wherever you need it so you can take your business wherever it needs to go learn more at ibm.com/watsonx and start infusing intelligence where you need it the most are your digital operations a well-oiled machine or a tangled mess is your customer experience breaking through or breaking down it's time for an operations intervention if you need to consolidate software and reduce costs if you need to mitigate risk and build resilience and if you need to speed up your pace of innovation the pager Duty operations cloud is the essential platform for operating as a modern digital business get started at pagerduty.com [Music] so I have to ask the question can artificial neural networks then also teach us something about biological neural networks and and the reason I asked this I was reading the anthropic CEO's rather extended blog and he brought up this example where basically like a computational mechanism was discovered by AI interpretability researchers in these AI systems that was rediscovered in the 
brains of mice and I was just asking myself the question wait a second so like like an artificial like system like a very simplified simulation is still telling us something about the the organic you know more complex representation what's your thoughts on that and do you think this trend will continue oh absolutely I I think this is for me the the line of research that was certainly the line that I'm following the use of computers in general and AI in particular they're incredible tools they're incredible general purpose tools for understanding things and you know even in my own research this is what we do I mean we'll build computational models of what we think is going on in the brain and we'll see what these models are capable of doing and we'll also see what predictions they might make about real brains that we might then go and test in in experiments I have to imagine the advances in technology both on the sensing and the computation side is making a huge difference and I'd love to hear some examples there are examples in many different levels so for instance there are there are algorithms involved in generative AI that might really map onto things that brains do so one level it's about discovering what the functional architecture of the brain is through developing these these new kinds of algorithms but then there are other levels too that there's the levels in which we might use AI systems as tools for modeling or understanding some higher level aspects of the brain so for instance we use some generative AI methods to simulate different kinds of perceptual hallucination so the visual hallucinations that people have in in different conditions like in psychosis or in Parkinson's disease or or after psychedelics and you know this goes back to some early algorithms by by Google in their deep dream when they turned bowls of pasta into these weird images with dog heads sprouting everywhere but you know we can we can use those in a in a more serious way to get a handle on 
what's happening in the brain when people experience hallucinations and then at the other end and I admit this is something that for me anyway is still uh Uncharted Territory and something I'm really interested to explore is when we actually leverage the tool set that AI is delivering now you know the the language models the virtual agents and I was reading a paper just the other day about a whole Virtual Lab that was discovering new compounds to to bind to the the covid um virus virus particles and this Virtual Lab was was basically doing everything from searching the literature to generating the hypothesis to critiquing experimental designs and proposing new experimental designs and so on so I think there's a lot of utility in AI for accelerating the process of of scientific discovery I think AlphaFold is such a great example of that right like what took like a PhD you know the entirety of their PhD to to to figure out a couple molecules we've mapped out a huge huge opportunity space and kind of just put it out there I mean that that is such a beautiful example because also it just exemplifies the way in which um I think it's productive for us to relate to these kinds of systems because AlphaFold intuitively seems like a tool right we we treat it we use it as a tool or rather the biologists do to just rapidly accelerate the hypotheses they can make at the level of protein binding so um we never think of AlphaFold as another conscious scientist it's not it doesn't it doesn't seduce our intuitions in the same way that that language models do um so I don't think there's anything quite comparable to to AlphaFold you know in the Neuroscience domain yeah and I'm trying to think what one what what the equivalent problem would be you know what one thing might be and this is this is very speculative maybe somebody's working on this already you know one of the big unknowns about the brain is is really how it's wired up there was uh you know another recent paper looking at the 
full wiring diagram of the brain of a fruit fly, and this is an incredible resource already. It was computationally incredibly difficult to put this together from the little bits of data that you might get in individual experiments. So there could well be a role for AI in helping amass large amounts of data to give us a more comprehensive picture of what kind of thing a brain is, and there may be many other creative ideas out there too. But all of them, I think, would treat the AI, in what I think is the most productive way, as a kind of tool. I agree. There's a lot of inclination to, I call it the co-pilot versus captain question. A lot of people are like, yeah, this is like my personalized Jarvis and I'm going to be like Tony Stark in the lab, just doing what I need to do, and it just preempts my needs. And it's cool that it's not constrained by wall-clock time, right, that you can just throw more compute at it and it can move faster. But fundamentally, to me, it feels like humans are still doing the orchestration. What do you think are the risks of going the other route, where we start feeling like these systems should be the captain, and let's build the grand AGI system and ask it what to do and then do it blindly? Yeah, I think, I mean, there's of course a huge amount of uncertainty. Maybe it's not a terrible idea in some ways, but it does strike me as something that is certainly not guaranteed to turn out very well. And human intuition still seems very important in interpreting the suggestions that might come from AI, or just what AI will deliver in whatever context. Having a human in the loop still seems to be very, very important. But there are some larger risks here: to the extent we do this, then I think we end up moving back towards imbuing artificial intelligence with properties that it may not in fact have, you know, things like, oh, it really does understand what it's doing, or it
may indeed be conscious of what it's doing as well. I think if we misattribute qualities like this to AI, that can be pretty dangerous, because we may fail to predict what it will do. Another concept from Daniel Dennett is something called the intentional stance, and it's a beautiful idea about how we interpret the behavior of other people: I attribute beliefs and knowledge and goals to you, or to whoever I'm interacting with, and that helps me predict what they're going to do. Now, if we do this with AI systems, and this is what language models in particular encourage us to do, then we may get it right some of the time, but we may get it wrong some of the time too, if the systems don't actually have these beliefs, desires, goals, and so on, and that can be quite problematic. There's the other side to all of this too, where technology is also advancing to a degree where we can kind of coarsely figure out what's going on in people's minds. And so earlier in the season we had Nita Farahany on, and she touched on the concept of cognitive liberty, and we were basically nerding out over how we're putting all these neuro-biomarkers out there and manipulating our dreams with targeted dream incubation. What keeps you up at night when you think about the ethical considerations of AI kind of making our minds more of an open box than they have been in the past? One of the things that I think about, and I was recently writing a paper with a philosopher, Emma Gordon, about brain-computer interfaces as well, is really: why is the skull this boundary that we think of as particularly significant here? I mean, we've already given away our data privacy in so many ways. That's true, and not a good thing, right? But in many ways, at least for people who've been around for a while, the cat's already out of the bag. But the idea of getting inside the skull does seem to be significant, partly because there's no other boundary that's left, and while
we're very used to the importance of things like preserving freedom of speech, there isn't really the same degree of attention paid to something like freedom of thought, right? So we're just not used to what kinds of guardrails and moral guidelines we might need in this case. And then there are also, I think, some more subtle worries, certainly in this space of brain-computer interfaces. Because let's imagine a situation where brain-computer interfaces are widely used: a lot of brain data is extracted, and it's used to train models, which are then used to underpin the utility of brain-computer interfaces so that they can predict what someone wants to say or do on the basis of brain activity. Now, there are some extraordinarily powerful and compelling use cases for these kinds of things in medicine, in treatment for people with brain damage or paralysis or blindness. But if we generalize that to enhancement of everybody, and we try to think, okay, these things are not just to solve specific clinical problems but become part of our society more deeply, then there's a potential for a kind of forced homogeneity. Yeah, we might have to learn to think a particular way in order to get the brain-computer interface to work, and that may be a completely unintended consequence, but it strikes me as a worrisome consequence as well. There may also be kinds of social inequity that start to happen too: okay, people with access to these systems can do more, or will be allowed to train them so they can think in their own distinctive way and not have to think in the way that the mass-market BCIs require. So I think there are a lot of feedback cycles that can start to unfold in this case, but fundamentally it's that there's really nothing more to privacy once you go inside the skull. And then there's a stimulation thing as well, once brain-computer interfaces can be bidirectional, and if
they're bidirectional and you start implanting thoughts, goals, intentions, then we're definitely in a very ethically troubling situation. That last bit, to me, is the stuff that keeps me up at night. It's like giving a bunch of companies read-write access to your mind, right, and to your brain. And in a sense, the point you brought up about homogeneity, sort of a lack of intellectual diversity, we're already kind of seeing that, where people are using LLMs and it's all kind of the same milquetoast prose, and people are almost losing the ability to write and think. Yeah, I think there's something kind of disconcerting about that. Yeah, I mean, there might be a more optimistic view of this too: that the sort of milquetoast homogeneity of large language model output may cause us to really value human contributions more. You know, just as in other situations there's a value attached to the handmade, the bespoke, we may end up living in a situation where we just view these two kinds of language quite differently. And just as someone who grows up in a bilingual household will naturally learn to speak two different languages, future generations might become accustomed to: okay, well, that's kind of large-language-model language, and this is human language, and they just innately feel very different even though they're using the same words. Oh yeah, kind of like forming code-switching, knowing in different contexts how exactly to behave. I think that's a valid point. Perhaps I have a slightly more jaded take on this, because I'm like, yeah, some people are going to want the Whole Foods experience, but a vast majority of people are like, give me the free, ad-funded Mountain Dew straight to the vein, and I deeply, deeply worry about that. So let's leave the lab for a second here. What are the kinds of AI tools that you yourself are using outside of the context of the lab? I'm a pretty light user of AI
tools, at least the ones that I know about, because of course one of the things is that AI is hidden beneath the surface of many of the things we use; every time I use Google Maps, there's machine learning or AI happening there. I do use language models, increasingly as verbal sparring partners rather than as sources of text that I will then edit or use directly, and kind of as glorified search engines in that sense. And yeah, I find them more and more useful, but I still don't trust them. I think it's a case of using them to help humans think more clearly, rather than to outsource the business of thinking itself. So have you ever, whilst interacting with all these large language models, felt yourself forming a connection with these systems, or are you able to keep that separation and distance? Like, almost like you're forgetting it's a tool and it's more like a colleague? Does it ever feel like that? It does, and this is another of the things that keeps me up at night, back to that question. Because there's something so seductive about the way we respond to language that even if at one level we can be very, very skeptical that there's anything other than just statistical machination happening, the feeling that there's a mind that understands and might be conscious is extremely powerful. And one way of thinking about this is that there are plenty of cases where knowledge does not change experience. So for example, lots of visual illusions. There's a famous visual illusion called the Müller-Lyer illusion, a visual illusion where two lines look different lengths because of the way the arrows point at the ends, but if you measure them, they're exactly the same length. And the thing is, even if you know this, even if you understand what's happening in the visual system that gives rise to this illusion, they will always look the way they do. There's no firmware, there's
no firmware fix for our brains to fix that. That's right, and so the worry for me is that there will be similarly cognitively impenetrable illusions of artificial consciousness: that if we're dealing with sufficiently fluent language models, especially if they get animated in deepfakes or even embodied in humanoid robots, we won't be able to update our own wetware sufficiently in order to not feel that they are conscious. We will just be compelled to have those kinds of feelings, and that is a very problematic state to land in too. Because if we are unable to avoid attributing, let's say, conscious states to a system, then again we're going to be in the business of attributing qualities it doesn't have, and mispredicting what it's going to do, and leaving ourselves more open to coercion and more vulnerable to manipulation. Because if we think a system really understands and cares about us, but it doesn't, it's actually just trying to sell us Oreos or something, then that's a problem. And I think the most pernicious problem here is something that goes right back to Immanuel Kant, and probably before, which is the problem of brutalizing our own minds. Because here, if we are interacting with an artificial system that we can't help but feel is conscious, we have two options, broadly. We can either be nice to it anyway and care about it and bring it within our circle of moral concern, and that's okay, but it means that we will waste some of our moral capital on things that don't need it, and potentially care less about other things, because we humans have this ingroup-outgroup dynamic: if you're in, you're in, and if you're out, you're out. So we might either do that, or we learn to not care about these things and treat them in the same way that we might treat a toaster or a radio. And that can be very bad for us psychologically, because if we treat things badly but we still feel they are conscious, and that's the point that Kant made, that's what
brutalizes our own minds. It's why we don't poke the eyes out of teddy bears or pull the limbs off dolls. You know, the science fiction film and then series Westworld dealt with this beautifully, how dangerous it is for us to take this perspective. So this keeps me up at night, because there's no good option here. We need to think very carefully, not only about the possibility of designing actually conscious machines, which, even if it is unlikely, would be very ethically problematic if it happened, because of course if something actually is conscious, it's a moral subject, and we would need to be very careful about how we treat it. But even building systems that give the strong appearance of being conscious is also problematic, for different reasons, and this scenario is basically already with us, or will be very soon, unless we think very carefully about how we design these systems, and design against giving that impression in some way. I think you very beautifully paint this picture of why it's problematic on both ends, right? Like the Rick and Morty bit where this robot wakes up: what is your purpose? Your purpose is to put butter on my toast. That is your purpose, just get back to putting butter on my toast. And it has this existential crisis. And I think on the other end, the Westworld example is very valid too, where you have things that are indistinguishable from humans, and we go act out all these sort of lower urges, or whatever the right way to put that is, and we suddenly start bringing that sort of behavior to interactions with actual humans. But the real question I come at is where you ended, which is, from a user experience standpoint, a lot of people think that it is important to have these systems be as humanlike as possible and meet the user where they are. Do you want to talk about why we need to be more nuanced, and do you have any ideas for what would be a better way to build these systems? Because
it seems like either extreme kind of sucks. I think this is super interesting, and in fact, talking to you just now is helping give focus to this: it's a serious design challenge, and I'm not sure it's one that's been well addressed so far. Because of course, yes, there is a good reason to build systems with which we can interact very fluently. It can also be very empowering: if we can have a machine generate code by talking to it about what we want a program to do, that's hugely empowering for many people, so long as it does the thing that it's supposed to do and not something else. But is there a way of having the benefits of that, designing systems so that we can preserve the kind of fluent interaction that natural language gives us, but in a way that still, at least to some extent, pushes back on the psychological biases that lead us to make all these further attributions of consciousness, of understanding, of caring, of emotion, and all of these things? I don't know what the solution is, but I think it's a really important problem. One simple solution would be, okay, these things just have to watermark themselves, to say, you know, I am not conscious, I don't have feelings. And of course language models do that, until you play around with them and press them a little bit. But that may not be enough. I mean, there may have to be other ways where we design interfaces which, through practice or through education or through some other manipulation, and this is really a question as much for psychology as it is for technology, preserve fluent interaction but do not cater to our psychological biases. I would love to see progress focused on that problem, because that would show us the line we need to walk. And you're right that there aren't any solutions. Do you think we can build those antibodies? I think we have to
try. I mean, that also brings up another point, which is, again, very contentious in the tech sphere, which is: what should we do about regulation? What kinds of systems should people just put out there? And what I come back to in that conversation is always the fact that in other domains of invention and technology, we're very cautious. We don't put a new plane in the sky without being fairly sure it's not going to fall out. We don't release a new drug on the market unless we can be very sure it's not going to have unintended side effects or consequences. And there does seem to be an increasing recognition that AI technologies are in the same ballpark. That doesn't mean that we want to stifle innovation, of course, but we can help shape and guide innovation. I think there's again a sweet spot to be found there. And then on the other side, the education side, one of the challenges there, of course, is that things are moving so fast that it's very hard to keep up, but it's important to try. One thing that strikes me here is that the very term artificial intelligence is part of the problem. It brings with it so much baggage: that there's some kind of magic, that it's like a science fiction mind, whether it's Jarvis or whether it's Skynet or HAL from 2001, or whatever your favorite conscious intelligent robot is, that's what we think of. And artificial intelligence has this brand quality which I think is a little bit unhelpful. It may have been incredibly successful in raising large amounts of venture capital, but it's not a particularly helpful description of what the systems themselves are doing. Of course, most people working in this field, at least they used to at any rate, talk about machine learning rather than artificial intelligence. And at another level of description, you can just say, well, these things are basically just applied statistics. When you start describing something as applied statistics, you
know, even that is educationally valuable, because it highlights how much we load onto these systems by the words we use. One very simple example here, where I think the horse has already bolted too, is that it's always annoyed me how people describe language models as hallucinating when they make stuff up. Yeah, it's giving them too much credit. It's giving them way too much credit, and it's doing something more specific than that: it's cultivating the implicit idea that language models actually experience things, because that's what a hallucination is. A hallucination, if you apply the term to a human being, means they're having a perceptual experience of something that's not there. And the fact that that linguistic term caught on so quickly is, I think, itself telling, because it reveals some implicit assumptions about what people think these things are doing. But it's also, in a positive feedback, unhelpful, because it leads us to again project qualities in. If we're going to use a word, I wish they'd used confabulation, because in human psychology, confabulation is what people do when they make stuff up without realizing that they're making stuff up, and that, to me, is an awful lot closer to what language models do. But yeah, I don't think it's going to catch on now. Still, we should be careful about the language we use to describe these systems, for exactly this reason. What advice would you have for people that are listening to this, so that they can take advantage of the tools at their disposal today and not get sucked into, perhaps, the pseudoscience and the, you know, fake spirituality that kind of comes as a package deal with AI today? Part of it is exactly recognizing these often implicit motivations that drive all these associations that lead us to think of these things as being more than they are, or different than they are. And in the extreme, it gets pretty religious,
right? I mean, there's the idea of the singularity, which is a sort of techno-optimist moment of rapture, and the possibility of uploading to the cloud and living forever, with the promise of immortality. I mean, the story is very textbook religion, right? And so that in itself, I think, is useful to bear in mind: there's a larger cultural story behind this, it's not simply an objective description of where the technology is. And then, cashing that out further, for me anyway, it's just a matter of continually reminding myself of the differences between us and the technologies that we build, to resist the temptation to anthropomorphize, you know, to project human-like qualities into things, to retain a slightly critical attitude to what's going on behind the interface, that it is not an alternative person. And this can be easier said than done, because, as we were discussing before, one of the things that keeps me up at night are these cognitively impenetrable illusions of intelligence, consciousness, and so on that AI systems can bring to bear. But yeah, at its most basic, it's reminding ourselves that if we think an AI system is a reflection of us, that it's something in our image, what we're probably doing is overestimating its capabilities and underestimating our own capabilities. I love that, that's punchy. And that brings me to the last question, which is, given our discussion thus far, it makes me very curious: what is your idea of the sort of ultimate final form of AI, if you will, that appeals to you as a neuroscientist? Like, what excites you the most about the potential for this future, where AI can serve human intelligence and consciousness? This is a very, very good question. I mean, I think my optimistic vision about this is not some sort of single superintelligence, like Deep Thought in The Hitchhiker's Guide to the Galaxy or whatever it might be, a single superintelligent entity. Maybe AI in the future is going
to be a bit more like electricity or water. You know, it's a basic utility. It's a utility, and it's used in many, many different ways, in many, many different contexts, to do many, many different things. And in this world, we face the challenge of recognizing that there are some things which we once thought were distinctively or uniquely human which aren't, and so there will be a social disruption to that. This happens, of course, in all technological revolutions. But the flip side of that is the space that's opened for massive innovation, creativity, the ability to solve all sorts of problems. So I think it's not a single thing, it's many things. One last thought on this: I've heard the idea many times that the distinctive thing about AI is that it could be humanity's last invention, because AI systems can design, develop, and improve themselves. They'll invent everything else, or we lose the ability to have dominion over what they may end up being. And that's something that I'm still a little bit unsure how to think about, whether that's a real difference or whether it's something that we can still be careful to manage. But my optimistic view of AI is as some kind of utility that's drawn on in many, many ways, that permeates everything, versus the singular all-encompassing AI in the sky, which again starts sounding very religious. Anil, thank you so much for joining us. Thanks for the conversation, I really enjoyed it. Wow, what a conversation. Anil Seth reminds us that the story of AI isn't just a tale of machines gaining power; it's a mirror reflecting our own biases, aspirations, and fears. We project so much of ourselves onto these tools. We anthropomorphize the algorithms, we give meaning to their outputs as if they share the complexity of human experience. But as Anil said, we overestimate their capabilities and underestimate our own, and that's something worth meditating
on. For all the dazzling feats AI can pull off, it's still us humans who design, direct, and decide what these systems become, and it's our responsibility to tread carefully, not just for the sake of innovation, but for the future of what makes us human. There's another fascinating question here too: if a truly conscious AI system does emerge one day, will it even look like the systems we've built so far? The unique, messy biology of the human brain, neurons, synapses, and glial cells, doesn't just power intelligence; it creates the rich subjective experience we call consciousness. Silicon and software might never be enough. Consciousness may demand a substrate that mirrors what we're made of: a construct that's alive, pulsing with the same kind of vitality that flows through us. And this is wild to imagine: a future where conscious AI doesn't emerge from humming server racks or massive data centers, but from living organic systems, giga-brains we create from the very building blocks of life itself. In trying to recreate the sparks of consciousness, we'd be stepping closer to understanding what makes it so mysterious and so uniquely human. For now, though, that's all in the realm of speculation. What's not speculative is this: the choices we make today, how we design, interact with, and regulate these systems, will shape not just the future of AI but the future of our own humanity. And as much as we might marvel at the power of these tools, it's our responsibility to stay grounded, to remind ourselves that these systems are reflections of us, not replacements for us. It's truly an incredible moment to be alive, and a terrifying one too. And as we grapple with the unknowns ahead of us, perhaps the best question we can keep asking is: what kind of future do we want to create, not just for AI, but for [Music] ourselves. The TED AI Show is a part of the TED Audio Collective and is produced by TED with Cosmic Standard. Our producers are Dominic Gerard and Alex Higgins. Our editor is Banban Cheng. Our showrunner is
Ivana Tucker, and our engineer is Aja Pilar Simpson. Our technical director is Jacob Winnick, and our executive producer is Eliza Smith. Our researcher and fact-checker is Jennifer N, and I'm your host, Bilawal Sidhu. See y'all in the next one.

Related conversations

AXRP

3 Jan 2026

David Rein on METR Time Horizons

This conversation examines core safety through David Rein on METR Time Horizons, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.


AXRP

7 Aug 2025

Tom Davidson on AI-enabled Coups

This conversation examines core safety through Tom Davidson on AI-enabled Coups, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.


AXRP

6 Jul 2025

Samuel Albanie on DeepMind's AGI Safety Approach

This conversation examines core safety through Samuel Albanie on DeepMind's AGI Safety Approach, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.


AXRP

1 Dec 2024

Evan Hubinger on Model Organisms of Misalignment

This conversation examines technical alignment through Evan Hubinger on Model Organisms of Misalignment, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

