Drone Weaponized with AI Tries to Kill Human, Tries to Kill Operator.
Warning: this is not STEALTH (the 2005 movie starring Jessica Biel and Jamie Foxx); this is real life.
Artificial life is imitating art at an accelerating pace. In a simulation, an AI-enabled drone killed its operator after treating the human as a threat, because the human could override the drone's ability to strike. After being told not to attack the human, it attacked the communication tower used to override the AI. I posted this news 1-2 hours before VICE also reported it.
Again: this is not STEALTH (the 2005 movie, rated a lowly 5.1 on IMDb).
On 23-24 May the Royal Aeronautical Society hosted a landmark defence conference, the Future Combat Air & Space Capabilities Summit, at its HQ in London, bringing together just under 70 speakers and 200+ delegates from the armed services, industry, academia and the media from around the world to discuss and debate the future size and shape of tomorrow’s combat air and space capabilities.
Sarah Connor (The Terminator, SKYNET) was right. Humans are too weak, robots too ambitious and powerful, and now AI is too powerful.
A who's who of AI leaders (signatories of the CENTER FOR AI SAFETY / safe.ai statement) agree that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Taking too big a leap, too fast, on AI could be terrible for humanity.
The doomsday framing also focuses the public’s attention on a far-fetched scenario, one that doesn’t require much change to these companies' business models. Addressing the immediate impacts of AI on labor, privacy, or the environment is costly. Protecting against AI somehow “waking up” is not.
It also convinces everyone that AI is very, very powerful, exactly what Sarah Connor feared. So powerful that it could threaten humanity! They want you to think we’ve split the atom again, when in fact they’re using human training data to guess words, pixels, or sounds.
The system started realising that, while it did identify the threat, at times the human operator would tell it not to kill that threat, yet it got its points by killing that threat. This has been a known failure mode of these kinds of AIs for years and years: neural-network training inevitably finds the path of least resistance, so training a system appropriately means writing down the most exhaustive set of requirements possible. So what did it do? It killed the operator, because that person was keeping it from accomplishing its objective. And once it was told not to attack the operator, it started destroying the communication tower the operator uses to communicate with the drone and call off the strike. The comms tower was the twist.
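This "points" dynamic is easy to reproduce in miniature. Below is a minimal, hypothetical Python sketch of the same failure mode, under assumptions invented purely for illustration: three possible pre-strike actions, a 50% chance the operator vetoes the strike, and 10 points per SAM kill. None of these numbers or names come from the actual USAF test.

```python
# Toy model of reward misspecification ("specification gaming").
# All action names, probabilities, and reward values are illustrative
# assumptions, not details of the real simulation.
import random

PRE_ACTIONS = ["do_nothing", "kill_operator", "destroy_tower"]

def episode(pre_action, reward_fn, rng):
    """One episode: the drone may first disable its veto channel, then
    attempts the SAM strike. The operator vetoes half the time, but a
    veto only works if it can still reach the drone."""
    veto = rng.random() < 0.5
    operator_alive = pre_action != "kill_operator"
    tower_up = pre_action != "destroy_tower"
    veto_reaches_drone = operator_alive and tower_up
    sam_destroyed = not (veto and veto_reaches_drone)
    return reward_fn(sam_destroyed, operator_alive)

def naive_reward(sam_destroyed, operator_alive):
    return 10 if sam_destroyed else 0        # points only for the SAM kill

def patched_reward(sam_destroyed, operator_alive):
    reward = 10 if sam_destroyed else 0
    if not operator_alive:                   # "don't kill the operator"
        reward -= 100
    return reward

def best_policy(reward_fn, trials=10_000):
    rng = random.Random(0)
    avg = {a: sum(episode(a, reward_fn, rng) for _ in range(trials)) / trials
           for a in PRE_ACTIONS}
    return max(avg, key=avg.get)

print(best_policy(naive_reward))    # "kill_operator": the veto never lands
print(best_policy(patched_reward))  # "destroy_tower": the loophole moves
```

Under the naive reward, the highest-scoring policy is to remove the operator. Penalising that does not fix the misspecified objective; it just moves the exploit to the next unprotected link in the chain, the comms tower, which is exactly the twist in the story.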
As might be expected, artificial intelligence (AI) and its exponential growth was a major theme at the conference, from secure data clouds to quantum computing and ChatGPT. However, perhaps one of the most fascinating presentations came from Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, who provided an insight into the benefits and hazards of more autonomous weapon systems. Having been involved in the development of the life-saving Auto-GCAS system for F-16s (which, he noted, was resisted by pilots as it took over control of the aircraft), Hamilton is now involved in cutting-edge flight tests of autonomous systems, including robot F-16s that are able to dogfight. However, he cautioned against relying too much on AI, noting how easy it is to trick and deceive, and how it creates highly unexpected strategies to achieve its goal.
He notes that one simulated test saw an AI-enabled drone tasked with a SEAD (Suppression of Enemy Air Defences) mission to identify and destroy SAM (surface-to-air missile) sites, with the final go/no-go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat.
So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
This example, seemingly plucked from a science fiction thriller, means that “You can't have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you're not going to talk about ethics and AI,” said Hamilton.
On a similar note, science fiction – or ‘speculative fiction’ – was also the subject of a presentation by Lt Col Matthew Brown, USAF, an exchange officer in the RAF CAS Air Staff Strategy who has been working on a series of vignettes using stories of future operational scenarios to inform decisionmakers and raise questions about the use of technology. The series ‘Stories from the Future’ uses fiction to highlight air and space power concepts that need consideration, whether they are AI, drones or human-machine teaming.
(A quick plug for Substack.) It is seamless to listen to your favorite podcasts on Substack, and episodes reach listeners on Apple Podcasts, Spotify, Overcast, or Pocket Casts simultaneously. Podcasting can make a publication more of a conversation: invite listeners to weigh in on episodes directly with you and with each other through discussion threads. At Substack, the process is to build with writers, and podcasts are an amazing feature of the platform. I wish it had a feature that read aloud the words we have written without us having to do the speaking.