DoD Should Consider Truly Autonomous Weapons: Bipartisan AI Commission Reports
https://breakingdefense.com/2019/11/bipartisan-ai-commission-dod-should-consider-truly-autonomous-weapons/
WASHINGTON: The US military should adopt artificial intelligence urgently without letting debates over ethics and human control “paralyze AI development,” a congressionally mandated panel says. “In light of the choices being made by our strategic competitors, the United States must also examine AI through a military lens, including concepts for AI-enabled autonomous operations.”
The interim report released yesterday by the bipartisan National Security Commission on Artificial Intelligence is full-throated in its defense of the pursuit of autonomous, AI-driven military systems as not only ethical but essential for future US military operations. Even in the military, some commanders have been publicly reluctant to trust AI — especially for anything related to nuclear weapons.
... It also posits the benefits of AI for homeland defense, the Intelligence Community (IC) and the military:
- For homeland defense, the report says, AI-enabled tools can assist with border protection, cybersecurity, protection of critical infrastructure and natural disaster response.
- For the Intelligence Community, “AI algorithms can sift through vast amounts of data to find patterns, detect threats, and identify correlations. AI tools can make satellite imagery, communications signals, economic indicators, social media data, and other large sources of information more intelligible. AI-enabled analysis can provide faster and more precise situational awareness that supports higher quality decision-making.”
- On future battlefields, the military “could use AI-enabled machines, systems, and weapons to understand the battlespace more quickly; develop a common joint operating picture more rapidly; make relevant decisions faster; mount more complex multi-domain operations in contested environments; put fewer U.S. service members at risk; and protect innocent lives and reduce collateral damage.”
Of course, if those applications, however appealing, require relinquishing or even reducing human control, the controversy will be intense.
Notably, nowhere does the commission use the phrase ‘human in the loop,’ the language currently favored by the Pentagon to assert that a human would always have ultimate control over any autonomous system.
--------------------------------------
SecDef: China Is Exporting Killer Robots to the Mideast
https://www.defenseone.com/technology/2019/11/secdef-china-exporting-killer-robots-mideast/161100/
https://breakingdefense.com/2019/11/china-seeks-ai-without-limits-ethics-secdef-esper/
China is exporting drones that it advertises as having lethal autonomy to the Middle East, Defense Secretary Mark Esper said Tuesday. It’s the first time that a senior Defense official has acknowledged that China is selling drones capable of taking life with little or no human oversight.
“As we speak, the Chinese government is already exporting some of its most advanced military aerial drones to the Middle East, as it prepares to export its next-generation stealth UAVs when those come online,” Esper said at the National Security Commission on Artificial Intelligence conference. “In addition, Chinese weapons manufacturers are selling drones advertised as capable of full autonomy, including the ability to conduct lethal targeted strikes.”
The Chinese company Ziyan, for instance, markets the Blowfish A3, essentially a helicopter drone outfitted with a machine gun. Ziyan says it “autonomously performs more complex combat missions, including fixed-point timing detection, fixed-range reconnaissance, and targeted precision strikes.”
Last year, Zeng Yi, a senior executive at NORINCO, China’s third-largest defense company, forecast that “in future battlegrounds, there will be no people fighting” as early as 2025.
... It’s much easier to create an indiscriminate weapon than one that avoids collateral damage. An AI that can detect, for example, a roughly human-sized object with a roughly human-like body temperature is much easier to program than one that can tell a civilian from an armed combatant. And an unmanned system that opens fire on its own, without seeking approval from a human overseer, doesn’t need the kind of secure, jam-proof, long-range communications networks that are required for human oversight.
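The asymmetry described above can be made concrete: the “indiscriminate” case amounts to a couple of threshold checks, while civilian/combatant discrimination has no comparably simple test. A minimal sketch, in which every name and threshold value is invented for illustration and does not describe any real system:

```python
# Invented illustration: a crude "human-sized and human-warm" trigger.
# Two threshold checks suffice, which is exactly why the indiscriminate
# case is the easy one the article warns about.
HUMAN_HEIGHT_M = (1.0, 2.2)   # rough standing-height band (assumed)
HUMAN_TEMP_C = (30.0, 40.0)   # rough surface-temperature band (assumed)

def crude_trigger(height_m: float, temp_c: float) -> bool:
    """Fires on anything roughly human-sized and human-warm."""
    return (HUMAN_HEIGHT_M[0] <= height_m <= HUMAN_HEIGHT_M[1]
            and HUMAN_TEMP_C[0] <= temp_c <= HUMAN_TEMP_C[1])

def is_combatant(height_m: float, temp_c: float) -> bool:
    """The hard problem: no simple feature test answers this."""
    raise NotImplementedError(
        "civilian/combatant discrimination requires perception, context, "
        "and judgment far beyond threshold checks")
```

The contrast between the two functions, one trivially complete and one necessarily a stub, is the article’s point in code form.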
Esper also said Chinese surveillance software and hardware networks could help China develop AI. “
All signs point to the construction of a 21st-century surveillance state designed to censor speech and deny basic human rights on an unprecedented scale. Look no further than its use of surveillance to systematically repress more than a million Muslim Uighurs,” he said. “
Beijing has all the power and tools it needs to coerce Chinese industry and academia into supporting its government-led efforts.”
He said: “Equally troubling are the outside firms or multinational corporations that are inadvertently or tacitly providing the technology or research behind China’s unethical use of AI.”
“Let me be clear: The question is not whether AI will be used by militaries around the world – it will be,” Esper concluded. ...
------------------------------------
Navy, Marines Moving Ahead with Unmanned Vessel Programs
https://news.usni.org/2019/10/31/navy-marines-moving-ahead-with-unmanned-vessel-programs
The Navy is gaining enough experience with unmanned vehicles on and below the water’s surface that it’s becoming easier to kick off new programs, as each can build on previous programs’ lessons learned, service officials said last week.
On larger unmanned surface vessels, the Navy and Pentagon’s
Ghost Fleet Overlord program transitioned from Phase 1 to Phase 2 at the beginning of October, further building upon the base of knowledge that will inform two future programs of record, the
Large Unmanned Surface Vehicle (LUSV) and Medium USV (MUSV).
“We’ve completed over 600 hours of autonomous testing, we’ve demonstrated autonomy, we’ve demonstrated navigation, and we’ve launched into the Phase 2 with our two vendors now,” Berkof said last week at the National Defense Industrial Association’s annual Expeditionary Warfare Conference.
After his speech, he told USNI News that “Phase 2, we integrate a government supplied
[command, control, communications, computers and intelligence] system into the vessel, we do more complex autonomy, more complex navigation, and we start also some payload work. And then so we get through Phase 2, and then we will have a number of demonstrations out there where we really ramp up the capability on that front.”
Once the integration of a government-supplied C4I system onto a hull rigged to operate autonomously is complete, the only real remaining work will be to add a vertical launching system to give the USV a strike capability.
“For the Navy’s LUSV program, we envision integrating vertical launch into that vessel. … That’s really the biggest difference” between Ghost Fleet Overlord Phase 2 and LUSV, Berkof said.
... And if you’re the adversary, well I guess you’re going to need to target everything,” the general said.
“So I hope you bought enough DF-XXs [Chinese anti-ship weapons] because I’m going to try to spread you out. That I don’t mind saying publicly. So long-range unmanned surface vessels, for us, vitally important because they’re lethal. They’re not just connectors; they’re sniffers, they’re out there telling me what’s going on, they’re passing that information back to me, and they’re spreading out the enemy because at some point you’ve got to target everything that moves because the one thing that does get through is carrying the lethal package.”
---------------------------------
Brain-Like Computer Chips Developed
https://techxplore.com/news/2019-11-brain-like-chips-privacy-greenhouse-emissions.html
A team led by Professor Simon Brown at the University of Canterbury (UC) has developed computer chips with brain-like functionality that could significantly reduce global carbon emissions from computer energy consumption.
Published this week in the prestigious peer-reviewed journal Science Advances, the paper shows that signals on the chips are remarkably like those that pass through the network of neurons in the brain. This is important for building new kinds of computers because the brain is incredibly good at processing information using very small amounts of energy. Brain-like computing could enable "edge computing" and address the ever-increasing energy consumption of computers.
The chips are based on self-organisation of nanoparticles – taking advantage of physical principles at unimaginably small scales, a hundred thousand times smaller than the thickness of a human hair, to make brain-like networks.
The components of this new chip are at the atomic level and are so small they cannot be seen with the naked eye or conventional microscopes, only with electron microscopes.
"The research shows that this type of chip really does mimic the signalling behaviour of the brain. We were surprised at the extent to which the avalanches or cascades of voltage pulses on our chips replicate the avalanches of 'action potentials' that are observed in the brain.
"These chips might provide a different kind of artificial intelligence. By understanding the underlying fundamental physical processes, we believe we can design these chips and control their behaviour to do things like pattern or image recognition," he says. "The key is that processing on-chip and with low power consumption opens up new applications that are not currently possible."
J. B. Mallinson et al., "Avalanches and criticality in self-organized nanoscale networks," Science Advances (2019).
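The "avalanches" in the paper's title are the signature of a system near criticality. As a rough illustration only (a standard toy model of neuronal avalanches, not the paper's actual nanoparticle device physics), a branching process in which each firing element triggers on average one successor produces cascades spanning many scales, while a subcritical process does not; all names and parameters below are invented:

```python
import random

def avalanche_size(branching_ratio, max_size=100_000, rng=random):
    """Size of one cascade in a toy branching process.

    Each active element triggers 2 successors with probability
    branching_ratio / 2, so the mean offspring count equals
    branching_ratio. At 1.0 the process is critical and cascade
    sizes become heavy-tailed, loosely echoing the voltage-pulse
    avalanches described above.
    """
    active, size = 1, 0
    while active and size < max_size:
        active -= 1          # one element fires...
        size += 1
        if rng.random() < branching_ratio / 2:
            active += 2      # ...and sometimes triggers two more
    return size

rng = random.Random(0)  # seeded for repeatability
subcritical = [avalanche_size(0.5, rng=rng) for _ in range(2000)]
critical = [avalanche_size(1.0, rng=rng) for _ in range(2000)]
# Subcritical cascades stay small; critical ones span many scales.
```

Comparing the two size distributions shows why criticality matters: only at the critical point do rare, very large cascades coexist with many small ones.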