"Last week, as New Scientist reported that leading A.I. models kept recommending nuclear strikes during war-game exercises, the Department of War tried to strong-arm Anthropic, its leading A.I. vendor, into backing down from its demand that its tools not be used for domestic surveillance or totally autonomous warfare. In general, I’m not an A.I. doomer who thinks existential risk or even thoroughgoing social disruption is right around the corner. But these developments didn’t seem great.
"Then the Pentagon proceeded to launch an attack on Iran, reportedly with the help of Claude, even though President Trump had banned its use just hours before. It’s possible that one of the first targets was an elementary school in which at least 175 people were killed. (Neither Israel nor the United States has claimed responsibility for the strike.) ...
"... The major A.I. companies quickly grew so large and so important to the near future of the American economy that they began to seem not only too big to fail but perhaps so big that the government was scared to interfere with them. And now, partly in response, a genuinely democratic backlash is brewing ... The country is hugely anxious about what’s to come while at the same time seeming to lack real faith in anyone, or in any institution, to actually manage it. One common analogue for artificial intelligence is nuclear weapons...."
David Wallace-Wells [gift article] argues that a combination of opportunity, NIMBYism, and local targeting has channeled vehement popular opposition to unregulated A.I. into preventing the build-out of massive data centers.
"... A.I. arrives in that landscape like an all-encompassing symbol of people’s powerlessness, which is already here but is bound to grow worse, heralding a vision of the future in which much of the ordering of society has been handed over to robots operating in black boxes controlled by a small number of immensely wealthy people.
"Increasingly, voters seem to be trying to take things into their own hands, rising up in opposition to the intrusion of A.I. infrastructure into their local communities.... Younger voters especially hated the building of data centers ... an interesting inversion of the conventional pattern in which younger people are both more tech-friendly and less opposed to change than their parents. ..."
People can be roused and organized in opposition; A.I. is politically potent because what we know and what we are told echo our felt fears.
"... I don’t know how it will address fears that a small group of tech oligarchs are working feverishly to design a future in which many of the rest of us might be rendered functionally obsolete. “The cultural and economic impact of A.I. is going to be the biggest issue in politics over the next decade,” Senator Chris Murphy of Connecticut said in December, expressing what has become an even more common refrain in the couple of months since. “I think we have not a clue,” said [Senator Bernie] Sanders in announcing his data center [regulation] bill. “We are totally unprepared for what is coming,” he added, predicting “massive job loss” and widespread “cognitive decline.” ...
"... In the past three [years], we’ve gone from casual users freaking out about their first encounters with ChatGPT to the Pentagon staging industrial-strategy-level fights over whether fully autonomous A.I.s can be deployed in war zones without any human oversight. In the next three? Those exponential curves may not bring us to a new godhead, but the genie doesn’t exactly look like it’s going back in the bottle, either. And those hoping to play a more active role in shaping the future that it’s conjuring will probably have to do more than stop ground from being broken for a few new data centers.
Technological progress can't be stopped, but can it be regulated? Maybe.
I grew up in the shadow of the atom bomb, raised with New York Governor Nelson Rockefeller's requirement that school children become practiced in ducking under their desks should the great mass incineration seem imminent. (Really! Since nuclear proliferation is pretty much sure to recur in the law-free world we have slipped into in 2025, that anxiety is likely to be reinvigorated.) And then, with the end of the Cold War, awareness of the nuclear threat became a second-order terror, superseded by global pandemics and, for some, the imminent arrival of aliens, extraterrestrial or not.
Can human beings survive ourselves? We always have, though often at terrible cost in lives and suffering. The Orange Toddler's perfect little war should force all this onto the agenda of a free people. But will it? Do explore David Wallace-Wells' musings. He's smart, measured, and thought-provoking.
