
Nita Farahany: Neurotech Poses An Immediate Threat To Our ‘Last Bastion Of Freedom’

Image credit: Screenshot, Battle for Your Brain website

Meta has ‘technology that can embed sensors’ into a device like a watch. It picks up brain activity and may tell your body what to do.


The following is a rush transcript of my interview with Nita Farahany, author of “The Battle for your Brain: Defending the Right to Think Freely in the Age of Neurotechnology.” (To read a transcript of my interview with Tristan Harris of the Center for Humane Technology, which covered related ground, click here.)

You can listen to the full conversation via the link below as well.

Emily Jashinsky: We’re back with another edition of The Federalist Radio Hour. I’m Emily Jashinsky, culture editor here at The Federalist. As always, you can email the show at radio@thefederalist.com. Follow us on Twitter @FDRLST. Make sure to subscribe wherever you download your podcasts as well.

I’m so excited to be joined today by Nita Farahany. She’s the author of the new book, “The Battle for your Brain: Defending the Right to Think Freely in the Age of Neurotechnology.” She’s also a professor of law and philosophy over at Duke. Thank you, Nita, for joining the show.

Nita Farahany: Thank you for having me.

EJ: Of course. There’s so much to dig into with this book, with your career in general. You have been studying bioethics for a really long time. So I wanted to start there. What got you interested in studying bioethics, especially at a time when what you’re writing about now, it feels really new, it feels like a lot of this is starting, but I’m sure you saw the writing on the wall long before this present moment.

NF: Yeah, great question. So I, you know, I would say, started with a passion for science. From high school, I was really into genetics, and went to college thinking I was going to be pre-med because I sort of didn’t know what you could do with science other than be pre-med. So I went to college [for] pre-med, but quickly found myself drawn to everything that was policy-related to science.

And so I did the pre-med curriculum, but all the internships I would do, all of the opportunities that I would seek out, all of them had a policy bent to them. And I was a policy debater as well, both in high school and in college. And so I was debating some of the policy aspects of it. And all of my, you know, fellow debate friends were going into areas like philosophy, and, you know, I thought, no, no, I’m going down the hard-science route.

But I just saw myself increasingly drawn to those questions. And so it took me a while to find myself actually practicing bioethics. You know, I went into business, doing strategy consulting focused on biotech and healthcare. I went to law school and, you know, did my Ph.D. program at the same time, looking at the intersection between science and law, thinking maybe I’d practice intellectual property, you know, like patent law, or be a patent attorney or something like that. Because, again, I didn’t know about the field; I didn’t know what you could do with your interests other than these kinds of tried-and-true pathways.

But, you know, I just kind of kept following one opportunity after another, which then finally landed me in what I think is just the greatest job in the world: getting to actually be at the leading edge of looking at scientific and technological developments, thinking about what their ethical and legal ramifications are, and how we can find a better pathway forward for humanity.

EJ: Well, that raises a really big question, and it’s one you think about a lot. I think you were on a presidential commission for bioethics research in the Obama years. And there’s so much attention, so much more attention, I guess, paid to some of these issues now in the era of the metaverse and Meta. But I wanted to ask: It seems to me that our policy leaders are doing a woefully inadequate job of addressing bioethics and its rapid evolution. What have you seen over the last decade or so, you know, maybe from the Obama administration to now, in terms of Washington’s ability, interest, and willingness to address some of these really big questions?

NF: Yeah, so I was really fortunate to be appointed by President Obama to the Presidential Commission for the Study of Bioethical Issues. It was really a highlight of my career to get to serve in that capacity. And what I didn’t know at the time was that there wouldn’t be a successor. Our commission followed in a long line of presidential commissions on bioethics; they had different names, but all the way back to the Carter administration, every administration had some form of a bioethics commission.

And we just assumed that our successors would be named under the Trump administration. When the Trump administration didn’t create a council or bioethics commission, I thought, okay, well, that’s, you know, tragic and unfortunate, because there are so many critical issues, including issues that arose during the pandemic, that could have and should have been worked on by a presidential commission.

But I was incredibly surprised that President Biden, who our commission met with under the Obama administration, didn’t, after he took office, create his own council or commission, despite the fact that there had been this gap. So there’s been a real vacuum…

There was a special place for an executive-level bioethics commission. We had really significant convening power; we were able to take on very large issues and bring the world’s leading experts to the table to talk about and weigh in on those issues. We created a number of reports, one of which was a two-volume report on neurotechnologies and neuroscience called “Gray Matters.” And those have a lot of influential power in helping to shape the conversation and in helping to shape policy.

I’m surprised we haven’t had one. I’ve been discouraged that we haven’t had one. There’s been great thought leadership from other organizations. But I think it’s so important for the executive branch to have that kind of leadership and convening power and eyes on these issues, and the ability for a president to turn to their commission and say, you know, here’s a brand new issue that has come up; can you please weigh in and provide guidance, perspectives, and policy recommendations on how to move forward?

EJ: …It seems to me right now that there’s an integration and a development of a lot of this tech within very consolidated, monopolistic companies in Silicon Valley, you know, your Meta, your Amazon, your Google, that a lot of this technology is being developed and then integrated into these massive platforms with so much power. What is it that prevents Washington or Brussels from taking these issues really seriously?

Is it the industry lobbying, the money from the industry pouring into those places, to preserve the sort of wild, wild west frontier mentality that they had in the aughts that allowed them to push some of this tech through very, very quickly? Is it a combination of that, and Washington being led by, you know, people who are not necessarily the central demographic for some of this technology, and are maybe a little bit old in some cases to understand it? What’s the block here?

NF: That last possibility is even more terrifying, I would just throw that out there. You know, the idea that they’re just out of touch with what’s happening is not a comforting thought, given how rapidly all this technology is developing.

So, I mean, first, I’d say it’s helpful that there are actually leading experts on AI who have been brought into the administration. So that’s at least good. And there’s been some really important executive leadership around that kind of idea of an AI Bill of Rights and thinking about, you know, what governance in that space should look like. So I think that shows promise and foresight and leadership in ways that are meaningful.

There are some really talented people who’ve been on the case and thinking about some of these issues. Tim Wu, for example, is one of them, who just recently left the administration, who I think is terrific and has been giving a lot of thought leadership to the administration. On neurotech, though, like, where are they? And why are they not taking on the issues?

As you said, Meta, you know, has been very vocal about their investment in neural interface technology, right? I mean, they made what I think was the pivotal acquisition in this field; it was the thing that really led me to write the book. They acquired a company called CTRL-labs that has technology that can embed sensors into basically something like a watch. And that picks up brain activity as it goes through your motor neurons. So your brain isn’t just in your brain; it sends messages out to the rest of your body and receives messages from the rest of your body. And part of how it does so is through motor neurons. And those motor neurons, you know, tell your hand to move or to type.

And if you can pick up and decode that activity at the wrist, it can be a very effective way to interact with other technology. It also gives a whole bunch of new data to Meta that they never had before, from the facial biometric data they’re picking up from their Oculus headsets to neural interface data. And we’re doing nothing about it at the executive level. These companies, from Meta, to LG, which just announced earbuds that pick up brain activity, to Apple, which has patents on sleep masks and undoubtedly AI integration with electrical activity from the brain, to Microsoft, to Google: all of them are making huge investments into the brain.
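As an aside for technically inclined readers: here is a minimal sketch, on synthetic data, of the kind of signal path such a wrist-worn interface could use. Rectify the electrical activity, smooth it into an activation envelope, and register an "intent" event when it crosses a threshold. Every name, rate, and threshold here is an illustrative assumption, not a description of Meta's actual system.

```python
# Minimal illustrative sketch of wrist-level neuromotor decoding.
# Synthetic data; all names and thresholds are assumptions.
import numpy as np

FS = 1000  # samples per second (assumed)

def envelope(signal, win=50):
    """Rectify, then smooth with a moving average to estimate muscle activation."""
    rectified = np.abs(signal)
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

def detect_intent(signal, threshold=3.0):
    """Return sample indices where the activation envelope first crosses threshold."""
    env = envelope(signal)
    above = env > threshold
    # Rising edges only: quiet -> active transitions.
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

# Synthetic demo: baseline noise plus one burst of motor activity.
rng = np.random.default_rng(3)
emg = rng.normal(scale=1.0, size=FS * 2)
emg[800:900] += rng.normal(scale=10.0, size=100)  # simulated "click" intent
print("intent detected at samples:", detect_intent(emg))
```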

And that can be really promising, because we have not treated our brain health as seriously as we have treated the rest of our physical health, and real investment is needed in this space. But real investment that is actually overseen, where the perils are recognized and the ethics are being debated and the appropriate incentives are being put into place, so as not to commodify our brains and turn them into, you know, the true panopticon that corporations are controlling.

And so, you know, why is it not happening in D.C.? I think, in part... part of the reason I wrote The Battle for Your Brain is because I have found that most people do not know what’s happening in this space. And most people don’t understand how profound it is to unlock the brain, to take the black box of the brain and hand it over to corporations to commodify however they want, to hand it over to governments to use for whatever purposes they want, without being thoughtful in advance about changing the terms of service and putting the power into the hands of individuals to have governance over their own brains and mental experiences.

And clearly, we haven’t done that when it comes to algorithms, when it comes to predictive profiling of individuals when it comes to all of the different ways that we’ve allowed addictive technologies to, you know, manipulate and change our brains, and to leave people worse off, not better off in so many ways.

But this is it. This is, like, our last shot at getting this right. And so, you know, I think people don’t know what’s happening. They’re distracted by a whole bunch of other things. There are very powerful lobbies. There isn’t a bioethics commission anymore to sort of raise and sound the alarm on these issues. And I think part of the reason we don’t have that is because the increased fracturing of society makes it seem as if we can’t find consensus on anything, let alone some of the trickiest and knottiest philosophical issues. But I have more optimism than that; I think everybody can agree that our brains deserve special protection.

EJ: And that’s being fueled, of course, by the addictive technologies themselves in a vicious cycle… I mean, we’re addicted to our phones, so that we’re distracted from what’s really going on. This is all why I was actually fairly surprised by the treatment your talk at Davos this year got in some of the press…

So this is what The Guardian writer wrote about this. I’m sure you’re familiar with this particular piece. They say your “professional milieu may have led you into falsely believing that corporations will not do the most unspeakable acts imaginable in order to make an extra dollar of profit.” But of course, you were at the time preparing to release this book that made exactly that point.

NF: Well, I mean, in the talk I did too, but unfortunately... So right in the introduction to my book, there’s a scenario that sort of says, let me just paint the full picture for you in a fictional scenario. And I had that animated by an animator, and I played it at the beginning of my talk at Davos as a way to help people understand the full gravity of the problem before I focused on one narrow aspect of it, right? Because I wanted to focus on the use of it in the workplace, even though it’s a much bigger picture in the book; there’s a much bigger treatment of that.

So I was like, okay, here’s a scenario that gives people a full snapshot, and then I’m going to do a deep dive into the brain at work. And people thought I was advocating for that scenario, rather than saying, like, this is the dystopian thing that we need to be worried about. And that’s why I’m here sounding the alarm and proposing a right to cognitive liberty.

So, you know, I think it’s one of those problems of people watching tiny clips rather than actually doing a deep dive, the addictive technology that favors clips over substance. They have a two-minute attention span, and that’s being generous, right? Most people have a 30-second span. They hear some clip of me saying something about neurotechnology, they think that I must be advocating for it, and therefore they vilify me. But, you know, here’s the positive on that.

That talk and the clips from it went so crazy viral that, in many ways, it served one of my big goals for the book, which is to get people talking about the issues, right? I don’t need them to be talking about me. I need them to be talking about the issues, to be at the table deliberating about it, to wake up to what’s happening, and to be part of the conversation.

And so if that’s what it takes, right, taking something I said completely out of context to get people riled up, well, they’re riled up and they’re at the table. At least that’s good, right? I mean, that’s something. I guess I’m a silver-lining kind of person.

EJ: So your optimism keeps getting you in trouble.

NF: It does.

EJ: Well, let’s start with your concept of cognitive liberty, which, again, is not on a lot of people’s radars, but is actually at stake in a lot of the technologies that already exist. Or if you’re, you know, engaged in different types of work, this is already something that is at risk and at stake for you on a daily basis. But how do you define cognitive liberty? And then maybe add a little bit about what you’re drawing from John Stuart Mill, as you write in your introduction?

NF: Yeah, thanks. So cognitive liberty, the simplest definition, is just self-determination over our brains and mental experiences. And in the way I’ve proposed it, I lay out what I hope is a kind of practical guide to how to adopt it, which is that I see it as an international human right. And what it requires, I think, is recognizing the right to cognitive liberty; this could happen through the Human Rights Committee that oversees the ICCPR, the treaty that implements the Universal Declaration of Human Rights. And what it would do is update three existing rights. So we already have a right to privacy as a human right. It’s implicit, but not explicit, within that right that we have a right to mental privacy, and I think it’s really important we make that explicit. The second is, there’s a right to freedom of thought. But that right has been pretty narrowly interpreted over the years to apply to freedom of religion. And there was a really terrific report on freedom of thought by the previous Special Rapporteur, Ahmed Shaheed, who presented an updated view of what that right should look like to the UN General Assembly in October of 2021.

And along those lines, I really build out in the book the kind of normative foundation for updating freedom of thought. And the third is, we have a collective right to sort of self-determination; this has really been interpreted as a political or group right. And I think we need to be very explicit that all of the rights within the Universal Declaration of Human Rights include within them a right to individual self-determination, and that includes self-access rights: the right to access information about our own brains, to receive information, to have that informational self-access, as well as the ability to change our own brains and mental experiences; us changing them, not being changed by other people, right? So that’s the kind of legal framework of it.

But the idea is really an update to John Stuart Mill, who proposed, you know, in his book On Liberty, this kind of fundamental need for basic liberty over oneself as it exists within society. Right, all rights are relative, they’re not absolute; there are societal interests we have to balance them against.

But the world in which he wrote that, even though he has a really robust discussion of freedom of thought, didn’t contemplate a world where you can actually breach real freedom of thought, right? Where you can actually probe brains, you can actually decode them, you can actually hack them and track them in real time; where others, where society, can make real demands on doing so. And I introduce it in the book through the lens of neurotechnology. I do that both because I think it is a real and present threat to cognitive liberty, but also because it puts the finest point on the issue for people, right? Because if we’re talking about it theoretically, if we’re talking about it as, you know, that marketing practice is manipulative, that algorithm and the way it works, that facial recognition, all of it feels a little bit more remote to people.

But if I literally put sensors on your head and decode what is happening inside your brain, or change what’s happening in your brain through sensors, people get it, right? I mean, they get that their self-determination over their brains and mental experiences is now at risk. And so I use neurotechnology to really help us understand that our last bastion of privacy, our last bastion of freedom, is at risk. And if we can get it in that case, right, if we can get that at least when it comes to neurotechnology we ought to have a right to cognitive liberty, then we have the foundation to figure out the bigger conversation: in what other circumstances does cognitive liberty apply? How does it apply vis-à-vis us and addictive algorithms? How does it apply vis-à-vis us and social media? How does it apply in the broader context of all of the other attempts to really track and decode our brains? That’s the starting place for me: to make it really direct and concrete and salient so people get it. I mean, if it’s decoding what’s happening in your literal brain, your brain is at risk.

EJ: Yeah, I was reading, I think it was Douglas Murray recently, who was writing about poetry. And I think it was somebody who had been incarcerated in a gulag who was emphasizing, you know, the importance of memorizing poetry, because the one thing the authorities, whether in government or business, cannot hack is your brain; it’s the one place that is your own. So I wanted to ask, maybe going back to the Davos question: in what ways, increasingly, do we have technology that disrupts that, in a sort of dystopian sense? What does the immediate future, or maybe the present, look like in terms of actually being able to have that neural interface, to actually access people’s thoughts and manipulate people’s thoughts?

NF: Yeah. I would love that quote that you said; I want to write that down. Because you see that kind of echo in so many other places as well, where people say, like, okay, but you have this place of solace, so all is well, right? But you don’t anymore. So first, let me be clear: neurotechnology, neural interface, can’t decode your inner monologue; it can’t decode complex thoughts. And I don’t see it doing so anytime soon, if ever, right? I mean, the true inner monologue, I think, is still yours. That’s not going to change.

But that doesn’t mean that you can’t decode a lot of brain states. And those brain states can tell you a lot about a person. So right now, already, let’s take the simplest case: there are companies selling technology, which at least 5,000 companies worldwide are using, where employees have to wear helmets, or baseball caps, or sweatbands with EEG sensors, electroencephalography sensors, that pick up brainwave activity from an individual. And the purpose of those sensors, at the simplest, is just to track whether a person is tired or awake, their fatigue levels. And you can see how, if a person is mining and you want to know if they’re getting carbon monoxide exposure, or if a person is a commercial driver and you want to figure out if they’re tired or awake, or if they’re a commercial pilot and starting to nod off, you want to know that. And we have a lot of other technology that does that, too.

Most people probably have something in their cars now that gives them an alert when they start to drive in a way that suggests they’re tired. And that’s pretty accurate, but this is more accurate, much more precise. And it can also give you an earlier warning: you don’t have to wait until you’re driving dangerously; you can start to pick up those fatigue levels earlier. So that’s already in place. There are companies that are already selling technology to help employers and employees track their focus and their attention.
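For a sense of how little machinery a crude fatigue score requires, here is a minimal sketch using the theta/alpha band-power ratio, one drowsiness proxy that appears in the EEG research literature. The sampling rate and synthetic data are assumptions for illustration; real systems are certainly more sophisticated.

```python
# Illustrative sketch only: a crude drowsiness index from one EEG channel.
import numpy as np
from scipy.signal import welch

FS = 256  # sampling rate in Hz (assumed)

def band_power(freqs, psd, lo, hi):
    """Integrate the power spectral density over [lo, hi] Hz."""
    mask = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[mask], freqs[mask])

def drowsiness_index(eeg_window):
    """Return theta (4-8 Hz) over alpha (8-12 Hz) band power."""
    freqs, psd = welch(eeg_window, fs=FS, nperseg=FS * 2)
    theta = band_power(freqs, psd, 4.0, 8.0)
    alpha = band_power(freqs, psd, 8.0, 12.0)
    return theta / alpha

# Usage with synthetic data: 10 seconds of fake EEG.
rng = np.random.default_rng(0)
window = rng.standard_normal(FS * 10)
print("drowsiness index:", round(drowsiness_index(window), 2))
```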

So, as we were talking about, one of the demands on our brains is this attention economy, right, that kind of constant clamoring for our attention. And there’s a very distracted workplace as a result of it, as a lot of people come into the workplace and can’t focus on any one thing at one time. And so focus tools can be really simple. Like, I use something called a Pomodoro cube; it’s just a little timer that helps me say, okay, for the next 20 minutes I’m just doing deep writing and I’m not going to allow any distractions. It’s just a mental exercise. But you can actually track your focus and attention levels with brain sensors too, and you can have that tracked by others. And there are companies that are already selling earbuds that have brain sensors integrated into them, or headphones that have sensors integrated into the soft cups around the ears.

There’s one major company that has entered into a major partnership with a major headphone manufacturer that is launching later this spring, with focus being the kind of key metric to be tracked. And so that can be something that employees use themselves, where they have the brain data for themselves. It also can be something that employers, if they demand it, could be tracking as another productivity measure. There are all kinds of ways in which employers are doing really creepy things to track employees in the workplace. And I think it’s really chilling to think about tracking attention levels and focus levels, or boredom, engagement, you know, emotional levels. And that’s happening. There are reports out of China that it is happening at scale, that it’s happening in educational settings in China, where students are required to wear headsets that monitor their attention and fatigue levels. And then it can be used to probe the brain, too, right? So while you can’t literally decode thoughts, you can start to try to figure out what a person is thinking. Like, show them images and figure out if they have positive or negative responses to them.

You want to know what their political preferences are? Show them a bunch of pictures of Democrats and a bunch of pictures of Republicans and see how their brain reacts. And you can do that over and over with a whole lot of different information when you can do it—

EJ: Unconscious bias tests

NF: Exactly. But unlike the unconscious bias test, where people who self-report at least govern some of what they’re saying, you’re just trying to probe their brain in creepy and icky ways to figure out what their unconscious reaction, which they can’t control, is to a whole bunch of information. And researchers have even used that to figure out if you could pick up information like: could you figure out a person’s PIN? Could you figure out their address? Right. And these can even be probes that a person doesn’t consciously see. It could be in a gaming environment; there are a whole bunch of neural interface headsets that are used in gaming. So you could put it into the gaming environment so that it’s within their perception, but they’re not consciously aware of it, and then probe their brain for information as well. And that’s been successfully demonstrated as a risk through research studies. So it’s brain states that are being decoded. But people shouldn’t take solace in that, because you can use brain states to decode a whole bunch of information about a person, even if you can’t decode complex thoughts.
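A rough sketch of the idea behind those probe studies: average the brain responses that follow "probe" stimuli, average those that follow neutral controls, and compare them in the window where a recognition response such as the P300 would appear. The data, timings, and names below are synthetic illustrations, not the methodology of any cited study.

```python
# Illustrative sketch only: comparing responses to probe vs. control stimuli.
import numpy as np

FS = 256        # samples per second (assumed)
EPOCH = FS      # one-second epoch after each stimulus

def average_epoch(eeg, stim_samples):
    """Average the EEG in the second following each stimulus onset."""
    epochs = [eeg[s:s + EPOCH] for s in stim_samples if s + EPOCH <= len(eeg)]
    return np.mean(epochs, axis=0)

def familiarity_score(eeg, probe_onsets, control_onsets):
    """Peak probe-minus-control difference in a rough P300 window."""
    diff = average_epoch(eeg, probe_onsets) - average_epoch(eeg, control_onsets)
    window = diff[int(0.25 * FS):int(0.5 * FS)]  # 250-500 ms post-stimulus
    return float(window.max())

# Synthetic demo: recognized probes get a small added deflection.
rng = np.random.default_rng(1)
eeg = rng.standard_normal(FS * 60)
probes = list(range(FS, FS * 50, FS * 5))
controls = list(range(FS * 3, FS * 50, FS * 5))
for s in probes:                                 # fake a P300-like bump
    eeg[s + int(0.3 * FS): s + int(0.4 * FS)] += 1.0
print("familiarity score:", round(familiarity_score(eeg, probes, controls), 2))
```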

EJ: Right, that’s what’s so scary: you can make assumptions that may not be accurate, but that may then be used as punishment.

NF: Or accurate, right? I mean, either way, right? Either they’re inaccurate, and that’s really problematic, or they’re accurate, and that’s really problematic. And I think there’s just a really significant chilling effect that it has as well, right? Anytime you implement surveillance it can have a chilling effect, and people become normalized to it, I think, in bad ways, right? Being normalized to surveillance is not a helpful thing. It just means that we become less aware of and focused on the risk. And I think when it comes to neurotechnology, you know, requiring students to wear headsets that are monitoring their brain activity: they don’t know whether their complex thoughts are being decoded or not; you assume the worst-case scenario; you try to self-censor. And the chilling effect, the silencing, that occurs when people are under surveillance has been really well studied and documented. And I worry about that silencing of even inner monologue and how profoundly detrimental that’ll be for humanity, to have surveillance of the brain and silencing of one’s inner monologue…

EJ: Right, reconditioning your sort of thought patterns?… It doesn’t matter if it works or if it doesn’t, because the perception is that your brain is being surveilled.

NF: Right, and at that point, the effects and especially with children can be really, really problematic.

EJ: That is a new terrifying layer that I have in my mind.

NF: Right?

EJ: Yes, especially when the brain is like elastic and developing. Great.

NF: I’ll tell you, you know, in every conversation I have, there’s this exchange of terrifying one another. I was talking with Alan Alda, who I have worked with in the past and over the years, who I just think is so wonderful. And he put this chilling and terrifying possibility into my head that had not been there before. Which is, you know, I’m talking about people knowing that they have sensors on their heads, right? And he was like, well, what’s to keep manufacturers from embedding sensors without people knowing about it? I mean, you’re assuming that they know. What if all of them come with it, and it’s just waiting to be activated, or you’re just not aware of it? So that freaked me out: that neural sensors could come to be integrated in technology and you wouldn’t even know about it.

My husband, who is a technologist, reassured me a little bit about that: hackers take apart every new technology. Like, there are all these YouTube videos of people taking devices apart and trying to figure out what things are. And he said, don’t worry, as soon as they get shipped, you know, there will be people who are disassembling them and analyzing them. That gave me a little bit of comfort over that dystopian scenario.

EJ: …That would be my next question: you can embed some of this into the legal system, an international framework and a national framework; you can take those steps, and that’s hugely important. But as we increasingly transfer so much of this intimate personal data onto the cloud, onto digital platforms in general, the risk of hacking is basically impossible to get around. You write about this in the book. What are some of the threats posed by hackers when it comes to this data?

NF: Yeah, you know, I struggle so much with this question, not because of what the risks are (we’ll talk about what the risks are) but because of what the solutions ought to be as a result. So let me tell you my optimism for a moment, about why I say that; I talk about this in the book as well. One thing I think we can’t lose sight of is the fact that neurological disease and disorder, mental illness, takes an increasing toll on us and on society, right? We have drastically increasing levels of degenerative neurological disease and mental illness that have really taken over humanity. Physical health is improving, while mental health, which is also physical health, but we have separated it and treated it differently, is declining across the world.

And part of the problem is that, even though we’re talking here about neurotechnology, there’s a lot we don’t know about the brain. And it’s because we don’t have a lot of great brain data; we have a lot of studies of people going into a laboratory environment with brain-scanning technology for limited periods of time under artificial conditions. And the possibility of brain sensors becoming part of our everyday lives means that we may suddenly have a whole lot of real-world data, real-world data over time that could tell you, you know: this is what periods of depression look like in the brain. This is what neurological disease looks like. This is what the earliest stages of it look like. Things that we don’t have information about, which could really mean that we could solve some of those problems within our lifetime with that data, especially with the power of AI.

But of course, that’s the data-used-for-good utopia, as opposed to the dystopia in which all that brain data can be mined and hacked and misused, right? So rather than putting it toward the good of solving neurological disease and disorder and trying to collectively improve brain health and well-being, you know, it’s commodified to figure out how to addict us to technology, or it’s commodified to try to figure out what our PINs are and hack into our bank accounts, or to figure out who’s an adherent of the Communist Party and who isn’t, you know, to try to identify neurodivergent thinkers and isolate them, to try to discriminate against them in the workplace, or find people who are suffering from early stages of cognitive decline and fire them rather than, you know, getting them the help they need, or find people who are suffering from mental illness and, you know, detain them or segregate them from society.

So, you know, the utopian universe that I would love for us to live in is the world in which we have all this brain data and we use it for good. As opposed to all of the ways that, now that that data is data, it can really easily be re-identified, because neural signatures can be a way to identify people; it’s a functional biometric. So just like facial recognition can be used to identify you, the way you sing a song in your head and the way I sing the same song in my head could be used to figure out who you are and who I am, just based on the brain data. We’re going to have a whole bunch of brain data in the cloud; it’s going to be easy to individually identify people from it. And then it can be used to make all kinds of discriminatory choices against people or to hack information from their brains. So I don’t know how we get to my utopia and don’t just end up in the world in which all that data is used by hackers, by employers, by governments, by bad actors. But it’d be great if we could find a way to that utopia.
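To see why re-identification is plausible, consider a toy sketch of brain data as a functional biometric: if each person's recordings reduce to a reasonably stable feature vector, simple nearest-neighbor matching can pick them out of an enrolled gallery. The feature vectors here are random stand-ins, an assumption purely for illustration.

```python
# Illustrative sketch only: re-identification from "neural signature" features.
import numpy as np

rng = np.random.default_rng(2)

# Enrolled "neural signatures": one 8-dim feature vector per person (synthetic).
enrolled = {name: rng.normal(size=8) for name in ["alice", "bob", "carol"]}

def identify(sample, gallery):
    """Return the enrolled identity whose signature is closest to the sample."""
    return min(gallery, key=lambda name: np.linalg.norm(sample - gallery[name]))

# A new, noisy recording from "bob" still matches bob.
probe = enrolled["bob"] + rng.normal(scale=0.1, size=8)
print(identify(probe, enrolled))  # -> "bob"
```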

EJ: I mean, I think your optimism is well placed, because you can easily envision a world where it’s used for more good than bad if there’s a movement to adopt a legal and moral framework. You know, where is the Ralph Nader who’s running just on tech?

NF: …And that’s why I try to propose this pathway forward, right? So I think we are standing at a moment in time. I mean, literally this spring, there are a bunch of companies launching neurotech products that are embedded in everyday devices. Instead of just listening to your music, you’re listening to your music while your brain activity is being monitored; instead of just, you know, taking a conference call, your brain activity can be decoded at the same time.

And so we’re at the stage where this is about to go mainstream. There are already, you know, millions of neural headsets in use, but it’s about to go mainstream; it’s becoming part of our everyday devices. So we have this nanosecond where, if we make a movement together to adopt a right to cognitive liberty, we change the default rules, right? So the default rule is: you have a right over your brain data. And if you choose to share it, say with a researcher or scientist studying the public good, you can make that choice.

But it would literally flip the default rules, so that a company seeking your brain data is the exception. Rather than the default being that you have to opt out of them using your data, you have to opt in; that would be required by an international human right and a global norm, which puts the default in favor of individuals.
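That flipped default can be pictured as a toy data-access policy: sharing is denied unless the person has affirmatively opted in to a specific purpose, rather than allowed unless they opted out. The class and method names here are hypothetical.

```python
# Illustrative sketch only: an opt-in-by-default policy for brain data.
from dataclasses import dataclass, field

@dataclass
class BrainDataPolicy:
    # The set of uses the person has affirmatively allowed (empty by default).
    allowed_uses: set[str] = field(default_factory=set)

    def may_share(self, purpose: str) -> bool:
        """Deny unless the individual explicitly consented to this purpose."""
        return purpose in self.allowed_uses

policy = BrainDataPolicy()
print(policy.may_share("advertising"))        # False: the default is no
policy.allowed_uses.add("medical_research")   # explicit, purpose-limited opt-in
print(policy.may_share("medical_research"))   # True
```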

So I think there’s a moment where we can get this right. I don’t think we actually need a tremendous amount of political will to do it, based on the mechanism that I’ve laid out; it doesn’t require, you know, all the governments in the world coming together and adopting a new human right. It just requires the updating of those three existing rights under the direction of recognizing a right to cognitive liberty. And then we change the default rule, and at least we’d be on a pathway. There’s a whole bunch of other stuff that would have to happen: there has to be individual, country-level enforcement, and context- and sector-level enforcement of these different rights. But the norm would shift, and that’s it: we have to shift the norms.

EJ: Yeah, this is another thing I want to ask you about. The NLRB made a decision that a lot of people didn’t pay attention to, I think it was back in October, when they issued a press release saying they were investigating the ways in which companies, it sounded like they were talking about Amazon, are tracking workers, and this sort of integration of surveillance-capitalist technologies into people’s day-to-day lives. And from the conservative perspective, one thing I’m constantly emphasizing is that you don’t necessarily need new laws; you need to reinterpret some laws that we’ve had on the books since the Gilded Age. And as a law professor, that’s another question I wanted to pose to you. What does this look like, legally, in terms of laws that may already exist that should already protect workers?

NF: Well, on the worker side, there are not a whole lot of protections. There are a few states that have more robust protections for workers, and there are a few states that have laws about biometric privacy. And so on the case you mentioned, I think you may be referring to one out of Illinois, where there was a class-action lawsuit. Is that the one you’re talking about?

EJ: I think that’s what it was.

NF: So there was a big class-action lawsuit for the collection of what was, I think, thumbprint data in that case. But I’d have to look back at the specifics; I write about it in my Harvard Business Review article about neurotech in the workplace, which picks up on chapter two of The Battle for Your Brain. But in any event, Illinois, Texas, Washington State, California, they have specific biometric protection laws, and those also apply to workers: you can’t collect biometric data without, like, you know, upfront disclosure that is in writing, in a handbook for employees, that gives the specifics about what data is being collected and for what purpose; it has to be purpose-limited. And I think that’s a step in the right direction.

You know, part of what I propose happen in the workplace, if, for example, employers want to collect fatigue-level data, is that they need to have it clearly and transparently in writing in their workplace handbooks: what brain data they’re collecting, for what purpose, and what happens to the other data. Like, if you’re collecting a whole bunch of brain data to extract information about fatigue level, you should be overwriting and discarding all the rest of that data. There’s no reason for an employer to have that stored in the cloud or anything else like that.

So those laws go in the right direction. They fall short, though, in that really they just require disclosure. As long as, you know, there’s adequate disclosure, and it’s narrowly tailored to whatever the bona fide legal purpose is, they can collect the biometric data. And I think, you know, it may not be that hard to make a case for: we need to monitor attention, we need to monitor boredom and engagement, like, we need to do it in order to figure out if our work-from-home policy makes sense, or if people are more engaged when they’re in the office instead.

And so I think most cases tend to favor employers over employees. And when it comes to brain data, which I think is special and unique, and which we need stronger protections around, I worry that if it just falls under those few states that have biometric protections, disclosure is all it’s going to take, and the data could be collected anyway. So I think we need stronger protections, which includes a right to mental privacy as part of that right to cognitive liberty.

But I mean, on your bigger point of updating existing rights versus new rights, I really struggled with this in my book. Not on the three pieces that make up cognitive liberty, which are mental privacy, self-determination, and freedom of thought; those just require updated interpretations of those rights. They need to be explicit, and that doesn’t have to happen case by case; there are mechanisms for evolving human rights law, from opinions issued by the Human Rights Committee to things called general comments, which are the kind of interpretive guides to these different rights. Where I struggled was whether to deem cognitive liberty a right that is a derivative of those, or a new right. And I erred on the side of deciding it really needs to be a new right. I go into this in the book, but I have these wonderful colleagues, human rights law professor colleagues here at Duke, Larry Helfer, Jayne Huckerbee, Brandon Garrett; they wrote this really important article about the right to actual innocence and why that should be a new right rather than just one that’s derived from other existing rights. And Larry Helfer was nominated by the United States and elected to the Human Rights Committee. So he was really helpful; we went back and forth thinking about cognitive liberty and how to frame it. And I came down on believing strongly, both for legal reasons and for expressive-norm reasons, that we need to recognize a new right to cognitive liberty that requires updating all the rest of these rights.

EJ: That is actually really interesting. That’s what I liked about drawing on Mill, because you see how it is sort of embedded in the foundation of Enlightenment modernity. And before we go, I wanted to ask about case studies of two men, Elon Musk and Mark Zuckerberg, both of whom have visions for integrating some of this technology. Elon Musk obviously owns Neuralink and Tesla and now Twitter, and has talked about transforming Twitter into something that’s more like Weibo, something that’s kind of all-encompassing. And Zuckerberg, obviously, is now no longer the head of Facebook but the head of Meta, and Oculus headsets are in living rooms around the country already, including the one that I’m talking to you from right now. So the broader question is: to what extent have we already sacrificed a lot of data that, without legal protections, can be used as what you’re talking about as that neural signature? You know, are we already forfeiting a lot of information in this battle for our brain, as you write in the book, that could be built into a mountain of information that’s weaponized against people as individuals?

NF: Yeah. So, I mean, it’s important to realize that all of this is additive, right? We’re not talking about brain data as if it’s not going to be integrated with all of the rest of your personal data that has already been commodified. And the question is, like, what additional risk is posed? What’s the additional power that the brain data gives that the rest of the data that’s already being used doesn’t provide?

And in some cases, the answer will be: not much. In some cases, the power of all of the rest of the data that has already been collected about you, from your cell phone tracking, to your social media accounts, to the like buttons and hearts and emojis, to how much time you spend on a particular video, to what Google search terms you enter, what Bing search terms you enter, right? There’s already a very compelling picture that helps companies know what you’re thinking and feeling.

But there is an undisclosed part; there is a piece of you that you hold back; there is unexpressed information; there is still a space of emotion where the gap in inference that all of that predictive work is making hasn’t closed. And the brain data can close that gap. And so when Meta, you know, is investing huge amounts of money into neural interface to become the way in which you interact with all of the rest of your technology, from your Oculus to your, you know, AR glasses to your social media accounts, that means that last piece of information falls into the hands of Meta. And, you know, Elon Musk is not focused on Neuralink just to help people overcome, you know, the loss of speech or loss of movement. I think those are important and laudable goals, right? I mean, neurotechnology, especially implanted neurotechnology, can give so many people who’ve lost the ability to communicate or move or speak the ability to regain some of those functions and independence.

But he has very clearly said he wants Neuralink to be the Fitbit for our brains, right? It is the way in which he thinks we can compete against the kind of super general artificial intelligence that we’re creating. It’s the way in which we all work the hours that he works,

EJ: Extremely hardcore.

NF: Yes, I mean, the way in which we all become extremely hardcore is by jacking our brains into implanted neurotechnology. And, you know, that’s a vision of the future that not everybody shares. And it’s a version of the future that is rapidly developing without people being at the table talking about it. And when you see tech titans like Zuckerberg and Elon Musk putting huge investments into neural interface, I mean, what I have to say is: wake up, right? Wake up and join the conversation and be part of the process of democratic deliberation to figure out how to make this technology empower, not oppress, humanity. That’s what’s at stake here: our last bastion of freedom. And it’s one that we can set the terms of if we’re at the table now. Five years from now, two years from now, the story may be different.

EJ: Yeah, that actually reminds me of one final quick question, because it’s based on what you just said. You took this message to Davos, you’ve written this book; what response have you gotten, if any, from industry? People who are obviously increasingly aware, you know, they got hit so hard in 2016 for some of this tech, and they’ve gotten hit hard for some of this tech since 2016. TikTok is under fire from the left and the right here for all kinds of different things. But what response do you get from the industry? Are there people who are saying, yes, we want this, we want the government to tell us what’s illegal or unethical, or we want norms to shift? Or is it sort of, hey, we want to make money and we are blind?

NF: So, I would say, let me segment that in two ways. One, the implanted neurotechnology world. You know, these are folks who care. Every person I’ve encountered who is working on implanted neurotechnology cares deeply about how to do it right: how to, you know, restore movement, treat epilepsy, treat depression, enable communication. And they’ve recognized the profound risks that it poses. They’re aware, they care deeply, they want to get it right. With the people who are more crossover, who are looking for wider-scale commercial adoption, it’s been variable.

I mean, some, I think, are much more commercially driven, and I think a little bit less sincere about the ethical concerns. They recognize that people are going to care more about their brain privacy, and so many of them are saying the right things, but their privacy policies aren’t doing the right things. And then there are others who have been at the forefront, leaders in trying to get ethics and government oversight on the table from the very beginning, leaders in bringing ethicists and regulators and technologists together to have it embedded into the design. And so, you know, I’d say there is a general awareness in the industry and by industry players, and some of them are more responsible from the get-go and some less so. But I think no matter where they are, the technology and the startups and the neurotech companies are increasingly getting acquired by big tech, who, you know, maybe recognize the risks, but have a very long history of commodification and misuse of data.

EJ: Yeah, it’s almost like a reflex to them, the blind optimism of the aughts. Nita Farahany, her book is called “The Battle for Your Brain.” I cannot recommend it enough. And I can’t thank you enough for being so generous with your time. Really appreciate you joining the show today.

NF: Thank you for the wonderful conversation, really enjoyed it.

EJ: Of course. Again, go buy this book. It’s called “The Battle for Your Brain.” Buy it, talk about it, and maybe even talk to your congressperson about it. I’m Emily Jashinsky, culture editor at The Federalist. We’ll be back soon with more Federalist Radio Hour. Until then, be lovers of freedom and anxious for the fray.

