“Empire of AI”: Karen Hao on How AI Is Threatening Democracy & Creating a New Colonial World
Democracy Now!
343,919 views Jan 2, 2026 Latest Shows
In a New Year holiday special, we revisit our interview with longtime technology reporter Karen Hao, author of the book Empire of AI, which unveils the accruing political and economic power of AI companies — especially Sam Altman’s OpenAI. Her reporting uncovered the exploitation of workers in Kenya, attempts to take massive amounts of freshwater from communities in Chile, along with numerous accounts of the technology’s detrimental impact on the environment. “This is an extraordinary type of AI development that is causing a lot of social, labor and environmental harms,” says Hao in an extended interview.
===
Transcript
This is Democracy Now!, democracynow.org, The War and Peace Report. I'm Amy Goodman. Empire of AI, that's the name of a new book by journalist Karen Hao, who's been closely reporting on the rise of the artificial intelligence industry with a focus on Sam Altman's OpenAI. That's the company behind ChatGPT. Karen Hao compares the actions of the AI industry to those of colonial powers of the past. She writes, quote, "The empires of AI are not engaged in the same overt violence and brutality that marked this history, but they too seize and extract precious resources to feed their vision of artificial intelligence: the work of artists and writers, the data of countless individuals posting about their experiences and observations online, the land, energy, and water required to house and run massive data centers and supercomputers." Over the past year, the Trump administration has
increasingly embraced the AI industry. In December, Trump signed an executive
order to bar states and local governments from enacting their own AI regulations. Soon after he signed the
order, his family's company, Trump Media & Technology Group, announced a $6 billion
merger with a firm aiming to build the world's first viable nuclear fusion
plant to power AI projects. Karen Hao is a former reporter at The Wall Street Journal and MIT Technology Review, where she became the first journalist to profile OpenAI. Democracy Now!'s Juan González and I spoke to her in May. The National Book Critics Circle recently named her book Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI a finalist for best nonfiction book of 2025. I began by asking Karen Hao to explain
just what artificial intelligence is.
So AI is a collection of many different technologies, but most people were introduced to it through ChatGPT. And what I argue in the book, and what the title Empire of AI refers to, is actually a critique of the specific trajectory of AI development that led us to ChatGPT and has continued since ChatGPT, and that is specifically Silicon Valley's scale-at-all-costs approach to AI development. AI models in the modern day are trained on data, and they need computers to train them on that data. But what Silicon Valley did, and what OpenAI did in the last few years, is they started blowing up the amount of data and the size of the computers that need to do this training. So we are talking about the full English-language internet being fed into these models: books, scientific articles, all of the intellectual property that is being created. And also massive supercomputers that run tens of thousands, even hundreds of thousands of computer chips, that are the size of dozens, maybe hundreds of football fields, and use practically the entire energy demands of cities now. So this is an extraordinary type of AI development that is causing a lot of social, labor and environmental harms, and that is ultimately why I evoke this analogy to empire.
And, Karen, could you talk some more about not only the energy requirements but the water requirements of these huge data centers that are, in essence, the backbone of this widening industry?
Absolutely. I'll give you two stats on both the energy and the water. When talking about the energy demand, McKinsey recently came out with a report that said, in the next five years, based on the current pace of AI computational infrastructure expansion, we would need to put as much energy on the global grid as two to six times the energy consumed annually by the state of California. And that will mostly be serviced by fossil fuels. We're already seeing reporting of coal plants having their lives extended; they were supposed to retire, but now they cannot, to support this data center development. We are seeing methane gas turbines, unlicensed ones, popping up to service these data centers as well. From a freshwater perspective, these data centers need to be cooled with freshwater. They cannot be cooled with any other type of water, because it can corrode the equipment, it can lead to bacterial growth. And most of the time, it actually taps directly into a public drinking water supply, because that is the infrastructure that has been laid to deliver this clean fresh water to different businesses, to different homes. And Bloomberg recently had an analysis where they looked at the expansion of these data centers around the world, and two-thirds of them are being placed in water-scarce areas. So they're being placed in communities that do not have access to fresh water. So it's not just the total amount of fresh water that we need to be concerned about, but actually the distribution of this infrastructure around the world.
And most people are familiar with ChatGPT, the consumer aspect of AI, but what about the military aspect of AI, where, in essence, we're finding Silicon Valley companies becoming the next generation of defense contractors?
One of the reasons why OpenAI and many other companies are turning to the defense industry is because they have spent an extraordinary amount of money in developing these technologies. They're spending hundreds of billions to train these models, and they need to recoup those costs, and there are only so many industries and so many places that have that size of a paycheck to pay. And so that's why we're seeing a cozying up to the defense industry. We're also seeing Silicon Valley use the US government in their empire-building ambitions. You could argue that the US government is also trying to use Silicon Valley, vice versa, in their empire-building ambitions. But certainly these technologies are not designed to be used in a sensitive military context. And so the aggressive push of these companies to try and get those defense contracts and integrate their technologies more and more into the infrastructure of the military is really alarming.
I wanted to go to the countries you went to, or the stories you covered, because, I mean, this is amazing, the depth of your reporting, from Kenya to Uruguay to Chile. You were talking about the use of water, and I also want to ask you about nuclear power. But in Chile, what is happening there around these data centers and the water they would use, and the resistance to that?
Yeah. So Chile has an interesting history, in that it was under a dictatorship for a very long time. And so, during that time, most public resources were privatized, including water. But because of an anomaly, there's one community in the greater Santiago metropolitan region that actually still has access to a public freshwater resource that services both that community as well as the rest of the country in emergency situations. That is the exact community that Google chose to try to put a data center in, and the water would be free. And, you know, I have no idea; that is a great question. But what the community told me was they weren't even paying taxes for this, because they believed, based on reading the documentation, that the taxes that Google was paying were in fact going to where they had registered their administrative offices, not where they were putting down the data center. So they were not seeing any benefit from this data center directly to that community, and they were seeing no checks placed on the fresh water that this data center would have been allowed to extract. And so these activists said, "Wait a minute, absolutely not. We're not going to allow this data center to come in unless they give us a legitimate reason for why it benefits us." And so they started doing boots-on-the-ground activism, pushing back, knocking on every single one of their neighbors' doors, handing out flyers to the community, telling them this company is taking our freshwater resources without giving us anything in return. And they escalated so dramatically that it escalated to Google Chile. It escalated to Google Mountain View, which, by the way, then sent representatives to Chile that only spoke English. But then it eventually escalated to the Chilean government. And the Chilean government now has roundtables where they ask these community residents and the company representatives and representatives from the government to come together to actually discuss how to make data center development more beneficial to the community. The activists say the fight is not over. Just because they've been invited to the table doesn't mean that everything is suddenly better. They need to stay vigilant. They need to continue scrutinizing these projects. But thus far, they've been able to block this project for four to five years and have gained that seat at the table.
And how is it that these Western companies, in essence, are exploiting labor in the Global South? You go into something called data annotation firms. What are those?
Yeah. So, because modern-day AI
systems are trained on massive amounts of data that's scraped from the internet, you can't actually pump that data directly into your AI model, because there are a lot of things within that data. It's heavily polluted. It needs to be cleaned. It needs to be annotated. So this is where data annotation firms come in. These are middleman firms that hire contract labor to provide to these AI companies to do that kind of data preparation. And OpenAI, when it was starting to think about commercializing its products, and thinking about, let's put text generation machines that can spew any kind of text into the hands of millions of users, they realized they needed to have some kind of content moderation. They needed to develop a filter that would wrap around these models and prevent these models from actually spewing racist, hateful and harmful speech to users, which would not make a very good commercially viable product. And so they contracted these middleman firms in Kenya, where the Kenyan workers had to read through reams of the worst text on the internet, as well as AI-generated text, where OpenAI was prompting its own AI models to imagine the worst text on the internet, and then telling these Kenyan workers to categorize it in detailed taxonomies: Is this sexual content? Is this violent content? How graphic is that violent content? In order to teach its filter all the different categories of content it had to block. And this is an incredibly common form of labor. There are lots of other different types of contract labor that they use. But these workers, they're paid a few bucks an hour, if at all. And just like in the era of social media, these content moderators are left very deeply psychologically traumatized. And ultimately, there is no real philosophy behind why these workers are paid a couple bucks an hour and have their lives destroyed, and why AI researchers, who also contribute to these models, are paid million-dollar compensation packages, simply because they sit in Silicon Valley in OpenAI's offices. That is the logic of empire. And that hearkens back to my title, Empire of AI.
So, let's go back to your title, Empire of AI, the subtitle, Dreams and Nightmares in Sam Altman's OpenAI. So tell us the story of Sam Altman and what OpenAI is all about, right through to the deal he just made in the Gulf, when President Trump, Sam Altman and Elon Musk were there.
Altman is very much a product of Silicon Valley. His career
was first as a founder of a startup, and then as the president of Y Combinator, which is one of the most famous startup accelerators in Silicon Valley, and then the CEO of OpenAI. And it's no coincidence that OpenAI ended up introducing the world to the scale-at-all-costs approach to AI development, because that is the way that Silicon Valley has operated the entire time that Altman came up in it. And so, he is a very strategic person. He is incredibly good at telling stories about the future and painting these sweeping visions that investors and employees want to be a part of. And so, early on at YC, he identified that AI would be one of the trends that could take off. And he was trying to build a portfolio of different investments and different initiatives to place himself in the center of various different trends, depending on which one took off. He was investing in quantum computing. He was investing in nuclear fusion. He was investing in self-driving cars. And he was developing a fundamental AI research lab. Ultimately, the AI research lab was the one that started accelerating really quickly. So he makes himself the CEO of that company. And originally he started it as a nonprofit, to try and position it as a counter to for-profit-driven incentives in Silicon Valley. But within one and a half years, OpenAI's executives identified that if they wanted to be the lead in this space, they had to go for this scale-at-all-costs approach. And "had to" should be in quotes: they thought that they had to do this. There are actually many other ways to develop AI, and to have progress in AI, that do not take this approach. But once they decided that, they realized the bottleneck was capital. It just so happens Sam Altman is a once-in-a-generation fundraising talent. He created this new structure, nesting a for-profit arm within the nonprofit, to become this fundraising vehicle for the tens of billions, and ultimately hundreds of billions, that they needed to pursue the approach that they decided on. And that is how we ultimately get to present-day OpenAI, which is one of the most capitalistic companies in the history of Silicon Valley, continuing to raise hundreds of billions, and, Altman has joked, even trillions, to produce a technology that ultimately has had a middling economic impact thus far.
We'll return to our conversation in a minute with Karen Hao, author of the new book Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. Stay with us.
This is Democracy Now!, democracynow.org. I'm Amy Goodman. In this holiday special, we continue with the journalist Karen Hao, author of the new book Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. Karen came into our studio in May, when she discussed how AI will impact workers.
One of the things that we have seen is this technology is already having a huge impact on jobs. Not necessarily because the technology itself is really capable of replacing jobs, but it is perceived as capable enough that executives are laying off workers. And we need some kind of guardrails to actually prevent these companies from continuing to try and develop labor-automating technologies, and try to shift them to producing labor-assistive technologies.
What do you mean?
So, OpenAI's definition of what they call artificial general intelligence is highly autonomous systems that outperform humans in most economically valuable work. So, they explicitly state that they are trying to automate jobs away. I mean, what is economically valuable work but the things that people do to get paid? But there's this really great book called Power and Progress by the MIT economists Daron Acemoglu and Simon Johnson, who argue that technology revolutions take a labor-automating approach not because of inevitability, but because the people at the top choose to automate those jobs away. They choose to design the technology so that they can sell it to executives and say, you can shrink your costs by laying off all these workers and using our AI services instead. But in the past, we've seen studies that, for example, suggest that if you develop an AI tool that a doctor uses, rather than replacing the doctor, you will actually get better health care for patients. You will get better cancer diagnoses. If you develop an AI tool that teachers can use, rather than just an AI tutor that replaces the teacher, your kids will get better educational outcomes. And so, that's what I mean by labor-assistive rather than labor-automating.
And explain what you mean, because I think a lot of people don't even understand artificial intelligence. And when you say "replace the doctor," what are you talking about?
Right. So, these companies, they try to
develop a technology that they position as an everything machine that can do anything. And so they will try to say, you can talk to ChatGPT for therapy. No, you cannot. ChatGPT is not a licensed therapist. And in fact, these models actually spew lots of medical misinformation. And there have been lots of examples of users actually being psychologically harmed by the model, because the model will continue to reinforce self-harming behaviors. And we've even had cases where children who speak to chatbots, and develop huge emotional relationships with these chatbots, have actually killed themselves after using these chatbot systems. But that's what I mean when these companies are trying to develop labor-automating tools: they're positioning it as, you can now hire this tool instead of hiring a worker. I mean, most recently, Sam Altman was speaking at a conference and said, "We originally said that these models were junior-level partners at a law firm, and now we think that they can really be more senior colleagues at a law firm." What he's saying is, don't hire the junior-level partners, don't hire the senior colleagues, and just use our AI models. And we are already seeing the career ladder breaking, because many different white-collar service industries, as well as other industries, are becoming convinced that they do not need to hire interns, they do not need to hire entry-level positions, that they just need these AI models. And new college graduates are struggling now to find job opportunities to help them get a foothold into these industries.
So, you've talked about Sam Altman, and in part one, we touched on who he is, but I'd like you to go more deeply into who Sam Altman is, how he exploded onto the US scene, testifying before Congress, actually warning about the dangers of AI. So, that really protected him, in a way, people seeing him as a prophet. That's p-r-o-p-h-e-t. But now we can talk about the other kind of profit, p-r-o-f-i-t. And how OpenAI was formed. How is OpenAI different from AI?
OpenAI, I mean, it was originally
founded as a nonprofit, as I mentioned. And Altman specifically, when he was thinking about, how do I make a fundamental AI research lab that is going to make a big splash, he chose to make it a nonprofit, because he identified that he could not compete on capital, and he was relatively late to the game. Google already had a monopoly on a lot of top AI research talent at the time. If he could not compete on capital, and he could not compete in terms of being a first mover, he needed some other kind of ingredient there to really recruit talent, recruit public goodwill and establish a name for OpenAI. So he identified a mission. He identified, let me make this a nonprofit, and let me give it a really compelling mission. So the mission of OpenAI is to ensure artificial general intelligence benefits all of humanity. And one of the quotes that I open my book with is this quote that Sam Altman cited himself in 2013 on his blog. He was an avid blogger back in the day, talking about his learnings on business and strategy and Silicon Valley startup life. And the quote is: "Successful people build companies. More successful people build countries. The most successful people build religions." And then he reflects on that quote in his blog, saying, "It appears to me that the best way to build a religion is actually to build a company."
And so, talk about how Altman was then forced out of the company and then came
back. And also, I just found it so fascinating that you were able to speak with so many OpenAI workers. You thought there was a kind of total ban on you.
Yes. Yeah. Exactly. So, I was the first journalist to profile OpenAI. I embedded within the company for three days in 2019, and then my profile published in 2020 for MIT Technology Review. And at the time, I identified in the profile this tension that I was seeing, where it was a nonprofit by name, but behind the scenes, a lot of the public values that they espoused were actually the opposite of how they operated. So, they espoused transparency, but they were highly secretive. They espoused collaborativeness; they were highly competitive. And they espoused that they had no commercial intent, but in fact it seemed like they had just gotten a $1 billion investment from Microsoft. It seemed like they were rapidly going to develop commercial intent. And so I wrote that into the profile, and OpenAI was deeply unhappy about it, and they refused to talk to me for three years. But when I started working on the book, when I started reaching out to employees, current and former, I discovered that many employees actually really liked the profile, and they specifically wanted to talk to me, because they thought that I would do justice to the truth of what had actually happened within the company, and be able to get behind what the executives mythologized and narrativized about this technology and about the course of this company. I would be able to actually get beneath that to the real heart of the matter. And so, one of
the things that you really have to understand about AI development today is that there are what I call quasi-religious movements that have developed within Silicon Valley. The concept of artificial general intelligence is not one that's scientifically grounded. It is this idea that we can fundamentally recreate human intelligence in computers. And this idea has been around for actually a really long time. The field of AI was founded all the way back in the 1950s, and that was the original intent of the field: How do we recreate intelligence in computers? Can machines think? That was the famous question that the British mathematician Alan Turing asked. But we, to this day, do not have scientific consensus around even what human intelligence is. And so, to peg an entire research field and a technology to the basis of human intelligence is a very tricky endeavor, because there are no good metrics to assess, have we actually gotten there yet, and there's no blueprint to say what should AI look like, and how should it work, and ultimately, who should it serve. And so, when OpenAI took up this mission of artificial general intelligence, they were able to essentially shape and mold what they wanted this technology to be, based on what is most convenient for them. But when they identified it, it was at a time when scientists really looked down on this term, AGI. And so they absorbed just a small group of self-identified AGI believers. This is why I call it quasi-religious, because there's no scientific evidence that we can actually develop AGI. The people who have this strong conviction that they will do it, and that it's going to happen soon, it is just purely based on belief, and they talk about it as a belief, too. But there are two factions within this belief system of the AGI religion. There are people who think AGI is going to bring us to utopia, and there are people who think AGI is going to destroy all of humanity. Both of them believe that it is possible, it's coming soon, and therefore they conclude that they need to be the ones to control the technology and not democratize it. And this is ultimately what leads to your question
of what happened when Sam Altman was fired and rehired. Through the history of OpenAI, there's been a lot of clashing between the boomers and doomers about who should actually control the technology.
The boomers and doomers?
The boomers and the doomers: the boomers, those that say it'll bring us to utopia, and the doomers, those that say it'll destroy humanity. And they have clashed relentlessly and aggressively about how quickly to build the technology, how quickly to release the technology. And ultimately, Altman is one that, he is really good at saying to people what they need to hear, and he will say different things to different people if he thinks they need to hear different things. So, when I asked boomers, is Altman a boomer, they said yes. When I asked doomers, is Altman a doomer, they said yes.
And I want to take this up until today. In January, the Trump administration announced the Stargate project, a $500 billion project to boost AI infrastructure in the United States. This is OpenAI's Sam Altman speaking alongside President Trump.
I think this will be the most important project of this era. And as Masa said, for AGI to get built here, to create hundreds of thousands of jobs, to create a new industry centered here, we wouldn't be able to do this without you, Mr. President.
He also there referred to AGI, artificial general intelligence. Explain what happened here, and what this is, and has it actually happened?
So, Altman, before Trump was elected, he already was sensing, through observation, that it was possible that the administration would shift, and that he would need to start politicking quite heavily to ingratiate himself to a new administration.
Altman is very strategic. He was under a lot of pressure at the time as well, because his original co-founder Elon Musk now has great beef with him. Musk feels like Altman used his name and his money to set up OpenAI, and then he got nothing in return. So Musk had been suing him, is still suing him, and suddenly became first buddy of the Trump administration. So Altman basically cleverly orchestrated this announcement, where, by the way, the announcement is quite strange, because it's not the US government giving $500 billion. It's private investment coming into the US, from places like SoftBank, which is one of the largest investment funds, run by Masayoshi Son, a Japanese businessman who made a lot of his wealth from the previous tech era. So it's not even the US government that's providing this money.
And take that right through to now, that Gulf trip that Elon Musk was on, but so was Sam Altman, to the fury of Elon Musk. And then a deal was sealed in Abu Dhabi. Yeah. It didn't include Elon Musk, but was about OpenAI.
Exactly. So Altman has continued to try and use the US government as a way to get access to more places, and more powerful spaces, to build out this empire. And one of the things, because OpenAI's computational infrastructure needs are so aggressive. You know, I had an OpenAI employee tell me, "We're running out of land and power." So they are running out of resources in the US, which is why they're trying to get access to land and energy in other places. The Middle East has a lot of land and has a lot of energy, and they're willing to strike deals, and that is why Altman was part of that trip, looking to strike a deal. And the deal that they struck was to build a massive data center, or multiple data centers, in the Middle East, using their land and their energy. But one of the things that OpenAI has recently rolled out, they call it the OpenAI for Countries program, and it is this idea that they want to install OpenAI hardware and software in places around the world. And it explicitly says, we want to build democratic AI rails. We want to install our hardware and software as a foundation of democratic AI globally, so that we can stop China from installing authoritarian AI globally. But the thing that he does not acknowledge is that there is nothing democratic about what he's doing. You know, The Atlantic's executive editor says we need to call these companies what they are: they are techno-authoritarians. They do not ask the public for any perspective on how they develop the technology, what data they train the technology on, where they develop these data centers. In fact, these data centers are often developed under the cover of night, under shell companies. Meta recently entered New Mexico under the shell company named Greater Kudu LLC.
Greater Kudu.
Greater Kudu LLC. And once the deal was actually closed, and the residents couldn't do anything about it anymore, that's when it was revealed: surprise, we're Meta, and you're going to get a data center that drinks all of your fresh water.
And then there was this whole
controversy in Memphis around a data center. Yes. So that is the data center that
Elon Musk is building. So, meanwhile, Musk is saying Altman is terrible, everyone should use my AI. And of course, his AI is also being developed with the same environmental and public health costs. So he built this massive supercomputer called Colossus in Memphis, Tennessee, that's training Grok, the chatbot that people can access through X. And that is being powered by around 35 unlicensed methane gas turbines that are pumping thousands of tons of toxic air pollutants into the greater Memphis community. And that community has long suffered a lack of access to clean air, a fundamental human right.
So, I want to go to, interestingly, Sam Altman testifying in front of Congress
about solutions to the high energy consumption of artificial intelligence.
In the short term, I think this probably looks like more natural gas, although there are some applications where I think solar can really help. In the medium term, I hope it's advanced nuclear fission and fusion. More energy is important well beyond AI.
So, that's OpenAI's Sam Altman testifying before the Senate, talking about everything from solar to nuclear power, something that was fought in the United States by environmental activists for decades. So you have these huge old nuclear power plants, but many say you can't make them safe, no matter how small and smart you make them.
This is one of the many things that I'm concerned about with the current trajectory of AI development.
This is a second-order, tertiary-order effect: because these companies are trying to claim that the AI development approach they took doesn't have climate harms, they are explicitly invoking nuclear again and again and again, as though nuclear will solve the problem. And it has been effective. I have talked with certain AI researchers who thought the problem was solved because of nuclear. And in order to try and actually build more and more nuclear plants, they are lobbying governments to try and unwind the regulatory structure around nuclear power plant construction. I mean, this is crazy on so many levels: they're not just trying to develop the AI technology recklessly, they are also trying to lay down nuclear infrastructure with this move-fast-break-things ideology.
But for those who are environmentalists and have long opposed nuclear, will they be sucked in by the solar alternative?
So, data centers have to run 24/7, so they cannot actually run on just renewables. That is why the companies keep trying to invoke nuclear as the cure-all. But solar does not actually work when we do not have sufficient energy storage solutions for that 24/7 operation. We'll return to our conversation in a minute with Karen Hao, author of the new book Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. Stay with us.
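Hao's point about 24/7 operation can be made concrete with back-of-the-envelope arithmetic. A minimal sketch, assuming an illustrative 100 MW facility, a 25% solar capacity factor, and six useful sun-hours a day (none of these figures are from the interview):

```python
# Back-of-the-envelope sketch of why a 24/7 data center cannot run on
# solar alone without large-scale storage. All numbers are illustrative
# assumptions, not figures from the interview.

DEMAND_MW = 100               # assumed constant data-center load
SOLAR_CAPACITY_FACTOR = 0.25  # assumed typical utility-scale solar
SUN_HOURS_PER_DAY = 6         # assumed hours of useful generation

# To deliver 100 MW around the clock, average solar output must match
# demand, so nameplate capacity must be overbuilt by 1 / capacity factor.
nameplate_mw = DEMAND_MW / SOLAR_CAPACITY_FACTOR

# Everything consumed outside the sunny hours must come from storage.
storage_mwh = DEMAND_MW * (24 - SUN_HOURS_PER_DAY)

print(f"Nameplate solar needed: {nameplate_mw:.0f} MW")
print(f"Daily storage needed:   {storage_mwh:.0f} MWh")
```

Under these assumptions, matching a constant 100 MW load takes roughly 400 MW of panels plus about 1,800 MWh of storage cycled every day, which is the gap the companies invoke nuclear to fill.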
This is Democracy Now!, democracynow.org. I'm Amy Goodman. In this holiday special, we're speaking with the journalist Karen Hao, author of the new book Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. She came into our studio in May. She lives in Hong Kong. I asked her to talk about what's happening in China around artificial intelligence.
China and the US are the largest hubs for AI research. They have the largest concentrations of AI research talent globally. Other than Silicon Valley, China really is the only rival in terms of talent density and the amount of capital investment and infrastructure that is going into AI development. In the last few years, what we have seen is the US
government has been aggressively trying to stay number one, and one of the mechanisms it has used is export controls. A key input into these AI models is the computational infrastructure, the computer chips installed in the data centers for training these models. In order to develop the AI models, companies are using the most bleeding-edge computer chip technology. Every two years a new chip comes out, and they immediately start using it to train the next generation of AI models. Those computer chips are designed by American companies, the most prominent one being Nvidia in California. And so the US government has been trying to use export controls to prevent Chinese companies from getting access to the most cutting-edge computer chips. That has all been under the recommendation of Silicon Valley, saying this is the way to prevent China from being number one: put export controls on them, and don't regulate us at all, so we can stay number one and they will fall behind. What has happened instead
is that, because there is a strong base of AI research talent in China, under the constraints of fewer computational resources, Chinese companies have actually been able to innovate and develop the same level of AI model capabilities as American companies with two orders of magnitude less computational resources, less energy, less data. I'm talking specifically about the Chinese company High-Flyer, which developed this model called DeepSeek earlier this year that briefly tanked global markets, because the company said that training this one AI model cost around $6 million, when OpenAI was training models that cost hundreds of millions, if not billions, of dollars. And that delta demonstrated to people that what Silicon Valley has tried to convince everyone of for the last few years, that this is the only path to getting more AI capabilities, is totally false. The techniques the Chinese company was using were ones that already existed in the literature and just had to be assembled. They used a lot of engineering sophistication to do that, but they weren't using fundamentally new techniques.
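The delta Hao points to can be put in rough numbers. A minimal sketch, assuming a $500 million frontier training run as a stand-in for "hundreds of millions of dollars" (only the $6 million DeepSeek figure comes from the interview):

```python
# Rough sketch of the cost gap Hao describes. The DeepSeek figure is the
# one cited in the interview; the frontier-lab figure is an illustrative
# assumption standing in for "hundreds of millions of dollars".
import math

DEEPSEEK_TRAINING_COST = 6e6    # ~$6 million, per the interview
FRONTIER_TRAINING_COST = 500e6  # assumed ~$500 million frontier run

ratio = FRONTIER_TRAINING_COST / DEEPSEEK_TRAINING_COST
orders_of_magnitude = math.log10(ratio)

print(f"Cost ratio: {ratio:.0f}x")
print(f"Orders of magnitude: {orders_of_magnitude:.1f}")
```

Under this assumption the gap works out to roughly 80x in dollars, close to the "two orders of magnitude" Hao cites for computational resources.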
So explain it further, because I think a lot of people just can't get their minds around this. How do you do this training?
So, there's software called neural networks, which is essentially a massive statistical engine. It does lots and lots of sophisticated statistical computation to try and ascertain what kinds of patterns exist in data sets. Typically, in the past, before we got to large language models, it would be doing something like looking at MRI scans and checking the patterns of what cancer looks like in an MRI scan. Now, with GPT, what it's looking at is: what are the patterns of the English language? What is the syntax, the structure, the figures of speech that are typically used? And then it uses those patterns to construct new sentences. That's how generative AI works. And the reason why it's so computationally expensive is that it's crunching the numbers for those patterns. And the more data you feed in, the more it has to crunch. We used to train these AI models on, you know, a powerful laptop, maybe one computer chip. The richest academic labs, like MIT, would be training on a couple or a dozen computer chips, and companies like Google would be training on maybe a couple hundred computer chips. We are now talking about hundreds of thousands of computer chips training a single model.
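The pattern-crunching Hao describes can be illustrated with a toy model, far simpler than a neural network but built on the same idea: extract statistical patterns from text, then reuse them to generate new sentences. A minimal sketch (the corpus and every name here are invented for illustration, and GPT-scale systems work very differently in detail):

```python
import random
from collections import defaultdict

# A toy "statistical engine": count which word follows which in a corpus,
# then sample new sentences from those counts. Real large language models
# use neural networks and vastly more data, but the core idea of learning
# patterns from text and reusing them to generate is the same.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Learn bigram patterns: for each word, which words were seen to follow it?
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start, length=6, seed=0):
    """Generate text by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        candidates = follows.get(words[-1])
        if not candidates:  # dead end: no observed continuation
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate("the"))  # prints a sentence stitched from observed bigrams
```

Every sentence this produces is built entirely from patterns already present in the training text, which is the property Hao is pointing at: more data means more patterns to crunch.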
And that is, you know, what OpenAI says is necessary to build these technologies, and that is what DeepSeek proved wrong. So, let me ask you
something, Karen: the latest news, as you're traveling in the United States before you go back to Hong Kong, of Trump's attack on academia, and how this fits in. How could Trump's attack on international students, specifically targeting the more than 250,000, a quarter of a million, Chinese students, and revoking their visas, impact the future of the AI industry? And not just Chinese students, because what's going on here now is terrifying students around the world, and because labs are shutting down in all kinds of ways here, US students as well are deciding to go abroad.
This is just the latest action that the
US government has taken over the last few years to really alienate a key
talent pool for US innovation. Originally, there were more Chinese
researchers working in the US contributing to US AI than there were in
China because just a few years ago, Chinese researchers aspired to work for
American companies. They wanted to move to the US. They wanted to contribute to
the US economy. They didn't want to go back to their home country. But because of what was called the China Initiative, a first-Trump-era initiative to criminalize Chinese academics, or ethnically Chinese academics, some of whom were actually Americans, based on just paperwork errors, they would be accused of being spies. That was one of the first actions. Then, of course, the pandemic happened, and the US-China trade escalations started amplifying anti-Chinese rhetoric. All of these, and now the potential ban on international students, have led more and more Chinese researchers to just opt for staying at home and contributing to the Chinese AI ecosystem. And this was a prerequisite to High-Flyer pulling off DeepSeek. If
there had not been that concentration and buildup of AI talent in China, they probably would have had a much harder time circumventing the export controls that the US government was imposing on them. But because they now have a high concentration of top talent, some of the top talent globally,
when those restrictions were imposed, they were able to innovate around them. So DeepSeek is literally a product of that continued alienation, and with the US continuing to take this stance, it is just going to get worse. And as you mentioned, it's not just Chinese researchers. I just talked to a friend in academia who said she's considering going to Europe now, because she just cannot survive without that public funding. And European countries are seeing a critical opportunity, offering million-dollar packages: come here, we'll give you a lab, we'll give you millions of dollars of funding. I mean, this is the fastest way to brain drain this country.
What many are saying is that the US's brain drain is their brain gain.
Yes.
And this also reminds us of history. You have the Chinese rocket scientist Qian Xuesen, who in the 1950s was inexplicably held under house arrest for years, and then Eisenhower has him deported to China. He becomes the father of Chinese rocket science and of China's entry into space. And he said he would never again set foot in the United States, even though originally that was the only place he wanted to live.
Yes. And there was, I believe, a US government official who said that was the dumbest mistake the US ever made.
We talk about the brain drain and the brain gain. Okay, again, some more rhyming: the doomers and the boomers. I want to talk about what an AI apocalypse looks like, meaning how it brings us to apocalypse, but also how people say it could lead us to a utopia. What are the two trajectories?
It's a great question, and I ask boomers and doomers this all the time: Can you
articulate to me exactly how we get there? And the issue is that they cannot. And this is why I call it quasi-religious. It really is based on belief. I was talking with one researcher who identified as a boomer. His eyes were wide, and he really lit up, saying, you know, once we get to AGI, game over, everything becomes perfect. And I asked him, can you explain to me how AGI feeds people who don't have food on the table right now? And he was like, "Oh, you're talking about the floor, and how to elevate their quality of life." And I was like, "Yes, because they are also part of all of humanity." And he was like, "I'm not really sure how that would happen, but I think it could help the middle class get more economic opportunity." And I was like, "Okay, but how does that happen?" And he was like, "Well, once we have AGI and it can just create trillions of dollars of economic value, we can just give them cash payouts." And I was like, who's giving them cash payouts? What institutions are giving them? When you actually test their logic, it doesn't really hold. And with the doomers, it's
the same thing. What I realized when reporting the book is that their belief is ultimately based on how they believe the human brain works. They believe human intelligence is inherently fully computational. So if you have enough data and enough computational resources, you will inevitably be able to recreate human intelligence. It's just a matter of time. And to them, the reason that would lead to an apocalyptic scenario is that humans, we
learn and improve our intelligence through communication. And communication is inefficient. We miscommunicate all
the time. And so AI intelligences would be able to rapidly get smarter and smarter and smarter by having perfect communication with one another as digital intelligences. And so many of these people who self-identify as doomers say there has never been, in the history of the universe, a species that was able to rule over a more intelligent species. So they think that ultimately AI will evolve into a higher species and then start ruling us, and then maybe decide to get rid of us altogether.
I'm wondering if you can talk about any model of a country, not a
company, that is pioneering a way of
democratically controlled artificial intelligence.
I don't think it's actively happening right now. The EU has had the EU AI Act, which is its major piece of legislation trying to develop a risk-based, rights-based framework for governing AI deployment. But to me, one of the keys of democratic AI governance is also democratically developing AI. And I don't think any country is really doing that. What I mean by that is: AI has a supply chain. It needs data. It needs land. It needs energy. It needs water. And it also needs spaces these companies need access to in order to deploy their technology: schools, hospitals, government agencies. Silicon Valley has
done a really good job over the last decade of making people feel that their collectively owned resources are Silicon Valley's. You know, I talk with friends all the time who say, "We don't have data privacy anymore. So what is more data to these companies? I'm fine just giving them all of my data." But that data is yours. That intellectual property is the writers' and artists' intellectual property. That land is a community's land. Those schools are the students' and teachers' schools. The hospitals are the doctors' and nurses' and patients' hospitals. These are all sites of democratic contestation in the development and the deployment of AI. And just like those Chilean water activists we talked about, who understood that that fresh water was theirs and were not willing to give it up unless they got some kind of mutually beneficial agreement for it, we need to have that spirit in protecting our data, our land, our water, and our schools, so that companies inevitably will have to adjust their approach, because they will no longer get access to the resources they need or the spaces they need to deploy in.
In 2022,
Karen, you wrote a piece for MIT Technology Review headlined "A new vision of artificial intelligence for the people," about a remote rural town in New Zealand where an Indigenous couple is challenging what AI could be and who it should serve. Who are they?
This was a wonderful story that I did, where the couple run Te Hiku Media, a nonprofit Māori radio station in New Zealand. And the Māori people have suffered a lot of the same challenges as many Indigenous peoples around the world. The history of colonization led them to rapidly lose their language, and there are very few fluent Māori speakers in the world anymore. So in the last few years there's been an attempt to revive the language, and the New Zealand government has tried to repent by encouraging that revival. But this nonprofit radio station had all this wonderful archival material, archival audio of their ancestors speaking the Māori language, that they wanted to provide to Māori learners around the world as an educational resource. The problem is, in order to do that, they needed to transcribe the audio, so that Māori learners could actually listen, see what was being said, click on the words, understand the translation, and actually turn it into an active learning tool. But there were so few Māori speakers who could speak at that advanced level that they realized they had to turn to AI. And this is a key part of my book's
argument: I'm not critiquing all AI development. I'm specifically critiquing the scale-at-all-costs approach that Silicon Valley has taken. But there are many different kinds of beneficial AI models, including what they ended up
doing. So they took a fundamentally different approach. First and foremost, they asked their community, do we want
this AI tool? Once the community said yes, then they moved to the next step of
asking people to fully consent to donating data for the training of this
tool. They explained to the community what this data was for, how it would be used, how they would then guard that
data and make sure that it wasn't used for other purposes. They collected around a couple hundred hours of audio data in just a few days, because the community rallied support around this project. And only a couple hundred hours was enough to create a performant speech recognition model, which is crazy when you think about the scales of data that these Silicon Valley companies require. And that is once again a lesson that can be learned: there's plenty of research showing that with highly curated small data sets, you can actually create very powerful AI models. And once they had that tool, they were able to do exactly what they wanted: open-source this educational resource to their community. And so my vision for AI development in the future is to have more small, task-specific AI models that are not trained on vast polluted data sets but on small curated data sets, and therefore only need small amounts of computational power, and can be deployed on challenges that we actually need to tackle for humanity: mitigating climate change by integrating more renewable energy into the grid, improving health care by doing more drug discovery.
So, as we finally do wrap up: you've been doing this journalism, this research, for years. What were you most shocked by in writing Empire of AI?
I originally thought that I was going to
write a book focused on the vertical harms of the AI supply chain: here's how labor exploitation happens in the AI industry; here's how the environmental harms arise out of the AI industry. And at the end of my reporting, I realized that there is a horizontal harm happening here. Every single community that I spoke to, whether it was artists having their intellectual property taken or Chilean water activists having their fresh water taken, they all said that when they encountered the empire, they initially felt exactly the same way: a complete loss of agency to self-determine their future. And that is when I realized the horizontal harm here is that AI is threatening democracy. If the majority of the world is going to feel this loss of agency over self-determining their future, democracy cannot survive. And again, specifically, it's Silicon Valley's scale-at-all-costs approach to AI development.
But you also chronicle the resistance.
You talk about how the Chilean water activists felt at first, how the artists feel at first. So talk about the strategies that these people have employed, and whether they've been effective.
The amazing thing is that there has since been so much pushback. The artists have said, wait a minute, we can sue these companies. The Chilean water activists said, wait a minute, we can fight back and protect these water resources. The Kenyan workers that I spoke to, who were contracted by OpenAI, said, we can unionize and escalate our story to international media attention. And so even these communities, which you could argue are the most vulnerable in the world and have the least amount of agency, they were the ones that remembered that they do have agency, and that they can seize that agency and fight back. And it was remarkably heartening to encounter those people, to be reminded that actually the first step to reclaiming democracy is remembering that no one can take your agency away.
Karen Hao, author of the new book Empire
of AI: Dreams and Nightmares in Sam Altman's OpenAI. Karen Hao is a former reporter at the Wall Street Journal and MIT Technology Review. And that does it for this special broadcast.
====
Democracy Now! is produced with Mike Burke, Renée Feltz, Deena Guzder, Messiah Rhodes, Nermeen Shaikh, María Taracena, Nicole Salazar, Sarin Nasser, Charina Nadura, Sam Alcoff, Tey-Marie Astudillo, John Hamilton, Robby Karran, Hany Massoud, and Safwat Nazzal. Our executive director is Julie Crosby. Special thanks to Becca Staley, John Randolph, Paul Powell, Mike Di Filippo, Miguel Nogueira, Hugh Gran, Carl Marxer, Denis Moynihan, David Prude, Dennis McCormick, Matt Ealy, Anna Özbek, Emily Andersen, Dante Torrieri, Buffy Saint Marie Hernandez. With Juan González, I'm Amy Goodman. Happy New Year. Thanks for watching Democracy Now! on
YouTube. Subscribe to the channel and turn on notifications to make sure you never miss a video. And for more of our audience-supported journalism, go to democracynow.org, where you can download our news app, sign up for our newsletter, subscribe to the daily podcast, and so much more.