Tech

Doomsday AI machines could lead to nuclear war

Could the future of artificial intelligence bring about a robot doomsday?

In a scenario straight out of apocalyptic science fiction, a leading security think tank is warning that as soon as 2040, AI machines could encourage nations to take catastrophic risks with their nuclear arsenals.

A paper commissioned by the RAND Corporation, a nonprofit in Santa Monica, Calif., that offers research and analysis to the armed forces on global policy issues, says it’s conceivable that AI — as well as the proliferation of drones, satellites and other sensors — could lead to nuclear war with overwhelmingly grave consequences for humanity.

“Autonomous systems don’t need to kill people to undermine stability and make catastrophic war more likely,” said Edward Geist, an associate policy researcher at RAND, a specialist in nuclear security and co-author of the paper. “New AI capabilities might make people think they’re going to lose if they hesitate. That could give them itchier trigger fingers. At that point, AI will be making war more likely, even though the humans are still, quote-unquote, in control.”

The study was based on data collected from experts in nuclear issues, government, AI research, AI policy and national security.

These experts are attempting to envision what the future of security looks like, molded by the effects of political, technological, social and demographic trends.

The report considered scenarios ranging from the best case (AI leads to greater strategic stability) to the worst (third-party hackers subvert these systems).

“Some experts fear that an increased reliance on artificial intelligence can lead to new types of catastrophic mistakes,” added Andrew Lohn, co-author of the paper and associate engineer at RAND. “There may be pressure to use AI before it is technologically mature, or it may be susceptible to adversarial subversion. Therefore, maintaining strategic stability in coming decades may prove extremely difficult and all nuclear powers must participate in the cultivation of institutions to help limit nuclear risk.”