
Somehow, we were not prepared for this. Artificial intelligence had been in development for decades, during which we fantasized about all the wonderful things it was going to do for us. And then the bots launched, almost fully formed, like Athena springing from the forehead of Zeus with her sword in hand, and only then did we have our epiphany: Oh man, this is not going to go well.
What happened to the AI utopia? We were expecting self-driving cars that would let us drink too much on nights out while eliminating highway fatalities. We anticipated the seamless integration of all our devices and appliances, maybe even without cords! We imagined an unlocking of efficiencies at home and at work; medical breakthroughs; scientific innovation on steroids. We’d have three-day workweeks and go hiking on the weekends while the robots cooked and cleaned.
Maybe these things are still in our future, along with world peace, but so far what we’ve got is a new way for kids to cheat on homework, a lot of derivative art, pernicious deepfakes and raging arguments over intellectual property theft. Oh, and an unprecedented increase in the demand for electricity that threatens to overwhelm the grid and make it impossible for us to stop burning fossil fuels before global warming destabilizes societies worldwide.
The wonder is why we thought this would go well. Shouldn’t we have known ourselves better?
In my view, the biggest problem with AI is that either humans are in charge, or the robots are. If it’s the robots, there is a good chance they will decide to kill us all, and we won’t see it coming. So we need to root for the humans, who could use the powerful new tools of AI to address hunger and climate change but so far have mostly used them for financial fraud, child pornography and adding to the absurd percentage of the internet devoted to cat memes.
And instead of helping to lower CO2 emissions, right now the effect of AI is to increase the burning of fossil fuels. U.S. electricity consumption had flatlined after the mid-2000s, but AI is pushing it up again, and sharply. Data centers, where AI “lives,” could consume as much as 9% of U.S. electricity generation by 2030, double today’s share.
We have a close-up view of this in Virginia, the data center capital of the world. In 2022, when I first tried to quantify Virginia’s data center problem, industry sources put the state’s data center demand at 1,688 megawatts (MW) — equivalent to about 1.6 million homes. With the advent of AI and its enormous appetite for power, the industry added 4,000 MW of new data centers in 2023. By the end of last year, data centers commanded fully 24% of the total electricity generated by Dominion Energy Virginia, the state’s largest utility. Over the next 15 years, Virginia’s data center demand is expected to quadruple.
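For readers who want to check that homes comparison, here is the back-of-envelope arithmetic, as a minimal sketch: the roughly 1 kW average household load it assumes is a commonly cited ballpark, not a figure from the article.

```python
# Sanity check of "1,688 MW is equivalent to about 1.6 million homes."
# Assumes an average U.S. household draws roughly 1.05 kW around the clock
# (about 9,200 kWh per year) -- an assumed ballpark, not from the article.

data_center_demand_mw = 1_688      # Virginia data center demand, 2022
avg_home_load_kw = 1.05            # assumed continuous household load

homes = data_center_demand_mw * 1_000 / avg_home_load_kw
print(f"Roughly {homes / 1e6:.1f} million homes")   # -> Roughly 1.6 million homes
```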
Citing the need to supply data centers with power, Dominion did an about-face on its plan to achieve net zero carbon emissions by 2050. It now proposes to keep coal plants running past their previous retirement dates, and to build new gas-powered generation.
The problem is not confined to Virginia. Across the country, utilities are struggling to meet AI’s increased energy demand, and looking to fossil fuels to fill the gap.
And while tech companies talk a good game about meeting their power demand sustainably, the evidence says otherwise. Tech companies conspicuously did not push back on Dominion Energy’s plan, and their own efforts fall woefully short. Even Google, which has taken its carbon-cutting obligations more seriously than most companies, just reported a 13% rise in its greenhouse gas emissions in 2023, thanks to its investments in AI and data centers.
Apparently, Google and its competitors in the race to dominate AI think meeting climate goals is like getting a loan from a bank: you emit more today, grow your business, and use the profits to clear the debt by emitting a lot less tomorrow.
But Mother Earth is not a bank. She is a loan shark, and she has started breaking fingers.
If we can’t rely on the inventors of AI to restrain their energy appetites, we have to turn to our politicians (sigh). Our leaders have to make and enforce limits on the growth of AI commensurate with the world’s ability to provide the resources without baking the planet. Admittedly, mustering that kind of willpower is hard to do in a country that has elevated corporations to personhood and defines the First Amendment to include both spreading lies and spending money to influence elections.
And that gets us to the second-biggest concern I have about AI, but the one that might upend society soonest: the unleashing of deepfakes in this fall’s elections, and the threat that the reins of government will go not to those most dedicated to tackling hard problems, but to those who prove themselves the biggest scoundrels.
The American Bar Association (ABA) defines deepfakes as “hoax images, sounds and videos that convincingly depict people saying or doing things that they did not actually say or do.” Noting that they have already been used in election campaigns in the U.S. and abroad, the ABA is promoting model state legislation to criminalize the creation of malicious deepfakes. Meanwhile, tech companies including Google and Meta have adopted advertising policies to require disclosures of altered content.
Both approaches are good as far as they go; websites should police content, and states should act swiftly to outlaw the deepfakes (though the ABA lists very few that have done so yet). But in a high-stakes situation like an election, punishing violators after the fact – if you can catch them at all – is very much a case of closing the barn door after the horses are out. Once voters have been exposed to “evidence” of a candidate’s unfitness for office, especially when media coverage has primed them to believe the lies, the damage is done.
Many voters, especially younger ones, are savvy enough to be wary of campaign-related materials generally, and of unattributed images that float around the internet in particular. But older people who came of age in the pre-internet-memes era are vulnerable to believing what they see and hear, and a lot of us won’t put ourselves to the trouble of questioning what feels true. A deepfake only has to fool some of the people some of the time to alter the results of an election.
But maybe I’m being needlessly alarmist about the dangers of AI, even if I have a lot of company. So I did the obvious thing: I asked a bot if AI would save humanity or kill us all.
ChatGPT responded with a list of pros and cons of AI, including the familiar benefits and concerns that have spawned a thousand op-eds. You can try this at home, so I won’t reiterate them here. But I will note the curious fact that the bot didn’t mention either carbon emissions or election-altering deepfakes.
Maybe that’s an oversight, or maybe it means my fears are unwarranted. But maybe it shows something even scarier: AI pretending it isn’t trying to take over.
We urgently need action from U.S. and corporate leaders. Stiff new taxes on data center energy use would lead to greater efficiencies and nudge companies to price data storage and AI use appropriately. New laws should put the onus on internet platforms to stop deepfakes before they can spread. Tech companies should prioritize what is good for human beings over what is good for corporate profit. If they can’t ensure AI is used only for good, they should pull the plug until they can.
If all this doesn’t happen, and soon – well, let’s just hope the robots are kind.
This article first appeared in the Virginia Mercury on July 11, 2024.
If you’d like to hear a deeper discussion about the climate challenge posed by data centers and AI, I’ll be addressing this topic tonight at a meeting of the IEEE Society on Social Implications of Technology (SSIT) Chapter of Northern Virginia/Washington/Baltimore in Oakton, Virginia, which you can also attend remotely. The presentation will be recorded: https://events.vtools.ieee.org/m/424609
Thanks for all the work you do to educate about environmental issues.
Re your piece on “AI could usher in a golden age of technology…”
Dominion needs money and so does BWX, so let’s build SMRs to make them money, so we can power ARTIFICIAL intelligence instead of using our own intelligence. I hope that ‘we’ are able to eventually understand that we need to live within our ‘means,’ which means respecting and protecting the environment that sustains us. ‘We’ are part of the natural world, and our success and happiness as individuals and as a species depend on a healthy environment.
There isn’t enough clean water for the huge human population, agriculture, and AI servers, and likewise not enough installed electrical generation. So the news is chock full of plans to bring back retired coal and nuclear plants. In pursuing results that will lead to a wide die-off of humans and other species, humans are acting as malevolently as you imagine robots could. In the words of the 1970 Earth Day poster, “We have met the enemy and he is us.” Corporations are running the planet for “profitability.” We’ve been past 1.5 degrees Celsius for a solid year. Bad things are happening worldwide, as predicted.
I love this! It’s the clearest analysis of AI that I have seen. Keep thinking ahead; we need your insight.
An excellent analysis of the power consumption of AI, along with the bloviated promises about what it can do for our society.
My question for the scrambling electric industry is: why not first figure out how to process the mounting dilemma of nuclear waste? Perhaps AI can help decipher a problem unspoken by all involved, rather than passing it on to the next 6,250 generations at the first half-life. We only get superficial solutions that create more energy while the future is mortgaged, and these aren’t penny-wise 30-year real estate notes.
“… if it doesn’t kill us first”
Good point, made by a senior AI engineer fired from OpenAI after he tried to explain the risks, now detailed at https://situational-awareness.ai/
“I asked a bot if AI would save humanity or kill us all. … You can try this at home”
I asked a related though different question for a client (I very rarely interact with generative models, as there is a high energy expense for each answer):
Q. What can humanity do to mitigate the risks of climate change, mineral resource peaking, and oil resource peaking?
ChatGPT: Mitigating the risks associated with climate change, mineral resource peaking, and oil resource peaking involves comprehensive and interconnected strategies. Here’s a set of actions that can help address these challenges simultaneously:
- Climate Change Mitigation
- Mineral Resource Peaking
- Oil Resource Peaking
- Cross-Cutting Strategies
Addressing these challenges requires a coordinated effort from governments, businesses, communities, and individuals. It’s crucial to adopt a holistic and integrated approach that considers the interdependencies among climate change, mineral resource peaking, and oil resource peaking.