
Author Topic: Robots and AI: Our Immortality or Extinction  (Read 352922 times)

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #450 on: September 10, 2020, 02:46:24 AM »
AI To Fly In Dogfight Tests By 2024: SecDef
https://breakingdefense.com/2020/09/ai-will-dogfight-human-pilots-in-tests-by-2024-secdef/



After an AI beat humans 5-0 in AlphaDogfight simulations this summer, Defense Secretary Mark Esper announced, a future version will be installed in actual airplanes for “a real-world competition.” But military AI will adhere to strict ethical limits, he said.

DARPA plans to hand Air Combat Evolution (ACE) over to the Air Force in 2024, but don’t be surprised if the Navy and Marine Corps now get involved as well.

ACE will hold “live field experiments,” a DARPA spokesperson told us in an email. But they declined to describe the events as a “competition” between humans and AI, instead emphasizing “human-machine teaming” where the organic and the digital work as partners: “The pilots will be given higher cognitive level battle management tasks while their aircraft fly dogfights, and there will be human factors sensors measuring their attention and stress to gauge how well they trust the AI.”

“Full-scale airborne events start in FY23 [fiscal year 2023],” DARPA said. “They will be using tactical fighter-class aircraft with safety pilots in them in case something goes wrong…Current schedule has 1v1 live airborne dogfights in Q2FY23, 2v1 in Q4FY23, and 2v2 in Q1FY24.”

Some flight tests will put human-machine teams against humans without AI assistance; others will pit two AI-human teams against each other.



... As for Russia, “Moscow has announced the development of AI-enabled autonomous systems across ground vehicles, aircraft, nuclear submarines, and command and control,” Esper said. “We expect them to deploy these capabilities in future combat zones.”

---------------------------------------

2020 Department of Defense Artificial Intelligence Symposium and Exposition
https://www.ai.mil/ai2020.html
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #451 on: September 11, 2020, 04:38:54 AM »
Anduril’s New Drone Offers to Inject More AI Into Warfare
https://www.wired.com/story/anduril-new-drone-inject-ai-warfare/
https://www.cnet.com/news/palmer-luckey-ghost-4-military-drones-can-swarm-into-an-ai-surveillance-system/

A swarm of Ghost 4s, controlled by a single person on the ground, can perform reconnaissance missions like searching for enemy weapons or soldiers, [... or tracking protesters & citizens with facial recognition].



Ghost 4 is an autonomous, VTOL, modular sUAS that operates on the Lattice AI platform. Ghost 4 is modular, man-portable, waterproof, and combines long endurance, high payload capacity and a near-silent acoustic signature for a wide variety of mission capabilities. Ghost 4 has a 100-minute flight time and can be autonomously or remotely piloted.

The drones can carry a range of payloads weighing as much as 35 pounds, including systems capable of jamming enemy communications or an infrared laser to direct weapons at a target. In theory the drone could be fitted with its own weapons. “It would be possible,” ... “But nobody’s done it yet.”

Onboard artificial intelligence algorithms have been tuned to identify and track people, missiles and battlefield equipment. One Ghost 4 drone can join with other Ghost 4 drones to form a data-sharing swarm to relay information back to Lattice, Anduril's situation monitoring AI system.

"One person can manage dozens of Ghosts," Luckey says. They can be programmed ahead of time to fly "dark," monitoring a site or tracking subjects but sending and receiving no data until they return to base, to avoid radio signal detection.

And it can link with its fellow drones into a cooperative swarm.
If one can't communicate with a base station because of wireless jamming, it'll try to shuttle data to a fellow drone that can, Luckey said.
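The jam-resilient relay behavior Luckey describes can be sketched in a few lines. Everything here (the class name, the single-hop handoff) is illustrative guesswork, not Anduril's Lattice API:

```python
# Sketch of the relay behavior described above: a jammed drone hands its
# data to a neighbor that still has a base link. Class names and the
# single-hop handoff are illustrative; this is not Anduril's Lattice API.

class Drone:
    def __init__(self, name, base_link_ok):
        self.name = name
        self.base_link_ok = base_link_ok   # False while jammed
        self.outbox = []                   # data waiting to reach base

    def relay_or_hold(self, swarm, base):
        """Deliver to base directly, else shuttle via a reachable peer."""
        if self.base_link_ok:
            base.extend(self.outbox)
        else:
            for peer in swarm:
                if peer is not self and peer.base_link_ok:
                    peer.outbox.extend(self.outbox)  # hand off to the peer
                    break
            else:
                return          # fly "dark": hold data until a link appears
        self.outbox = []

base = []
a = Drone("ghost-a", base_link_ok=False)   # jammed
b = Drone("ghost-b", base_link_ok=True)
a.outbox = ["sensor-track-1"]
a.relay_or_hold([a, b], base)   # a hands its data to b
b.relay_or_hold([a, b], base)   # b forwards everything to base
print(base)                     # ['sensor-track-1']
```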

Anduril was founded by Luckey and several veterans of Palantir, which sells analytics software to the intelligence industry. Both Anduril and Palantir are backed by Peter Thiel, a prominent tech investor and Trump adviser. The company is also developing a virtual reality platform for patrolling the US border with Mexico.

https://www.wired.com/story/palmer-luckey-anduril-border-wall/

https://www.anduril.com/

Anvil is the kinetic element of Anduril's end-to-end counter-UAS capability. It uses physical speed and onboard guidance to seek and destroy drone threats with positive identification and minimal collateral damage.

Anduril is named after Aragorn's sword, also called the Flame of the West, in J.R.R. Tolkien's Lord of the Rings trilogy.
« Last Edit: September 11, 2020, 04:48:58 AM by vox_mundi »

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #452 on: September 12, 2020, 01:50:46 AM »

The news here is that thanks to Clearpath, Spot now works seamlessly with ROS.


An autonomous drone survey took place over two weeks in 2018 on the Greenland ice sheet. The research focused on an area about 80 kilometers east of Kangerlussuaq, where scientists wanted to study the movement of water deep underground to better understand the effects of climate change on the melting ice.


Where Imperial AT-ATs learn to walk

« Last Edit: September 12, 2020, 01:59:39 AM by vox_mundi »

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #453 on: September 13, 2020, 11:08:29 PM »
Replicant Raises $27 Million To Propel Its Voice AI For Customer Service Phone Calls
https://www.forbes.com/sites/kenrickcai/2020/09/10/replicant-series-a-call-center-voice-ai/#6e9e1dca4266



Replicant, named after Blade Runner’s genetically engineered humans, is entering uncharted territory with AI software that works as a combination of text-based chatbots and real-time AI assistants for customer service reps.

“We don’t pretend to be human, but the experience is pretty similar to speaking with an agent,” says CEO Gadi Shamia of his artificial intelligence. “Think of this as autopilot—the [human] agent is just the exception handler, they don’t have to deal with the entire call flow.”



The San Francisco-based company has created AI that can talk to people over the phone with a conversational and human-like voice. The bot employs deep learning to understand the intricacies of humans’ sentences, and can fully resolve certain customer service inquiries (more complex issues are routed to a human agent). It saves time for people who might otherwise be put on hold, and Shamia says the speedy processing time further allows the bot to cut the length of calls themselves in half. “There’s no um’s and ah’s,” he says. “We don’t have to spend 30 seconds to read the file.”
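The "agent as exception handler" pattern Shamia describes amounts to confidence-gated routing: the bot resolves inquiries it both recognizes confidently and is allowed to handle, and escalates everything else. The intents, threshold, and lookup-table "classifier" below are invented stand-ins, not Replicant's system:

```python
# Sketch of confidence-gated call routing: the bot fully resolves inquiries
# it is sure about and escalates the rest to a human "exception handler".
# Intents, threshold, and the toy classifier are illustrative only.

CONFIDENCE_THRESHOLD = 0.85
BOT_RESOLVABLE = {"check_balance", "reset_password", "track_order"}

def classify(utterance):
    """Stand-in for the deep-learning intent classifier."""
    table = {
        "where is my package": ("track_order", 0.93),
        "i want to dispute a charge": ("billing_dispute", 0.91),
        "uh can you help": ("unknown", 0.30),
    }
    return table.get(utterance.lower(), ("unknown", 0.0))

def route(utterance):
    intent, confidence = classify(utterance)
    if confidence >= CONFIDENCE_THRESHOLD and intent in BOT_RESOLVABLE:
        return f"bot:{intent}"          # fully automated resolution
    return "human_agent"                # exception: escalate to a person

print(route("Where is my package"))          # bot:track_order
print(route("I want to dispute a charge"))   # human_agent (not bot-resolvable)
print(route("Uh can you help"))              # human_agent (low confidence)
```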



Call centers, a sector that brings in $15 billion in revenue according to Forrester, have benefitted from innovations in automation and artificial intelligence, notably with text-based chatbots that reroute simple tasks away from customer service agents, and more recently through “augmented intelligence” software that can process conversations and provide suggested responses or relevant resources to the agent in real time. Shamia thinks Replicant’s technology provides a massive leap in productivity by eliminating a large swath of conversations that [human] call center reps have to answer.

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #454 on: September 16, 2020, 03:55:35 PM »
Future Autonomous Machines May Build Trust Through Emotion
https://techxplore.com/news/2020-09-future-autonomous-machines-emotion.html



Army research has extended the state-of-the-art in autonomy by providing a more complete picture of how actions and nonverbal signals contribute to promoting cooperation.

Dr. Celso de Melo, a computer scientist with the U.S. Army Combat Capabilities Development Command's Army Research Laboratory at CCDC ARL West in Playa Vista, California, in collaboration with Dr. Kazunori Terada from Gifu University in Japan, recently published a paper in Scientific Reports in which they show that emotion expressions can shape cooperation.

Autonomous machines that act on people's behalf are poised to become pervasive in society, de Melo said; however, for these machines to succeed and be adopted, it is essential that people are able to trust and cooperate with them.

... This research effort, which supports the Next Generation Combat Vehicle Army Modernization Priority and the Army Priority Research Area for Autonomy, aims to apply this insight in the development of intelligent autonomous systems that promote cooperation with soldiers and successfully operate in hybrid teams to accomplish a mission.

"We show that emotion expressions can shape cooperation," de Melo said. "For instance, smiling after mutual cooperation encourages more cooperation; however, smiling after exploiting others—which is the most profitable outcome for the self—hinders cooperation."

For example, when the counterpart acts very competitively, people simply ignore, and even mistrust, the counterpart's emotion displays.

"Our research provides novel insight into the combined effects of strategy and emotion expressions on cooperation," de Melo said. "It has important practical application for the design of autonomous systems, suggesting that a proper combination of action and emotion displays can maximize cooperation from soldiers. Emotion expression in these systems could be implemented in a variety of ways, including via text, voice, and nonverbally through (virtual or robotic) bodies." ...

Celso M. de Melo et al, The interplay of emotion expressions and strategy in promoting cooperation in the iterated prisoner's dilemma, Scientific Reports (2020).
https://www.nature.com/articles/s41598-020-71919-6
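The experimental setup in the paper is the iterated prisoner's dilemma with emotion displays layered on top. A toy version of the headline finding (the same smile helps after mutual cooperation and hurts after exploitation) might look like this; the update magnitudes are invented for illustration, not taken from the paper:

```python
# Toy iterated prisoner's dilemma illustrating that the *context* of a smile
# matters: smiling after mutual cooperation raises the partner's willingness
# to cooperate, while smiling after exploitation (defecting against a
# cooperator) lowers it sharply. Update sizes are invented, not the paper's.

COOPERATE, DEFECT = "C", "D"

def update_trust(trust, my_move, partner_move, partner_smiles):
    """Adjust cooperation tendency from the partner's move and expression."""
    if my_move == COOPERATE and partner_move == COOPERATE and partner_smiles:
        return min(1.0, trust + 0.2)   # smile after mutual cooperation: trust grows
    if my_move == COOPERATE and partner_move == DEFECT and partner_smiles:
        return max(0.0, trust - 0.4)   # smile after exploiting me: trust collapses
    if partner_move == DEFECT:
        return max(0.0, trust - 0.2)   # plain defection still hurts
    return trust

trust = 0.5
trust = update_trust(trust, COOPERATE, COOPERATE, partner_smiles=True)
print(round(trust, 2))   # 0.7
trust = update_trust(trust, COOPERATE, DEFECT, partner_smiles=True)
print(round(trust, 2))   # 0.3
```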

--------------------------------------


- Terminator Genisys (2015)

Mona Lisa Vito: ... Oh yeah, you blend.
 - My Cousin Vinny (1992)
« Last Edit: September 16, 2020, 09:46:26 PM by vox_mundi »

gerontocrat

  • Multi-year ice
  • Posts: 20378
    • View Profile
  • Liked: 5289
  • Likes Given: 69
Re: Robots and AI: Our Immortality or Extinction
« Reply #455 on: September 16, 2020, 06:21:51 PM »
Your caring sharing robots. I suppose it had to come.

But will the military also have the Master Sergeant addressing the latest bunch of recruits version as well? "You 'orrible lot!"
"Para a Causa do Povo a Luta Continua!"
"And that's all I'm going to say about that". Forrest Gump
"Damn, I wanted to see what happened next" (Epitaph)

Sigmetnow

  • Multi-year ice
  • Posts: 25763
    • View Profile
  • Liked: 1153
  • Likes Given: 430
Re: Robots and AI: Our Immortality or Extinction
« Reply #456 on: September 16, 2020, 08:55:59 PM »
“The Encyclopedia Galactica defines a robot as a mechanical apparatus designed to do the work of a man. The marketing division of the Sirius Cybernetics Corporation defines a robot as ‘Your Plastic Pal Who's Fun to Be With.’"
- The Hitchhiker’s Guide to the Galaxy
People who say it cannot be done should not interrupt those who are doing it.

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #457 on: September 16, 2020, 11:58:54 PM »
US Library of Congress Launches AI Tool That Lets You Search 16 Million Old Newspaper Pages for Historical Images
https://thenextweb.com/neural/2020/09/16/us-library-of-congress-launches-ai-tool-that-lets-you-search-16-million-old-newspaper-pages-for-historical-images/

The US Library of Congress has released an AI tool that lets you search through 16 million historical newspaper pages, drawn from the Library’s digitized collection of newspapers published between 1900 and 1963, for images that help explain the stories of the past.

The Newspaper Navigator shows how seminal events and characters, such as wars and presidents, have been depicted in the press.

Navigator: https://news-navigator.labs.loc.gov/search

To use the system, simply enter a keyword in the Newspaper Navigator and the AI will surface matches from a dataset of 1.56 million newspaper photos.  You can also specify a date range and a state in which the newspaper was published.

The AI can detect photographs, illustrations, maps, cartoons, comics, headlines, and advertisements. It also uses Optical Character Recognition to extract a headline and caption from the corresponding article.

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #458 on: September 17, 2020, 12:54:19 AM »
Secretive Pentagon Research Program Looks to Replace Human Hackers With AI
https://news.yahoo.com/secretive-pentagon-research-program-looks-to-replace-human-hackers-with-ai-090032920.html



For six years, the Defense Advanced Research Projects Agency worked on a program known as Plan X to help commanders plan and conduct cyber operations.

The goal was for leaders to see the cyber environment just as they would the physical world.

Now, the Air Force and the Pentagon’s Strategic Capabilities Office are continuing the program and have renamed it Project IKE.

... If the Joint Operations Center is the physical embodiment of a new era in cyber warfare — the art of using computer code to attack and defend targets ranging from tanks to email servers — IKE is the brains. It tracks every keystroke made by the 200 fighters working on computers below the big screens and churns out predictions about the possibility of success on individual cyber missions. It can automatically run strings of programs and adjusts constantly as it absorbs information.

The hope for cyber warfare is that it won’t merely take control of an enemy’s planes and ships but will disable military operations by commandeering the computers that run the machinery, obviating the need for bloodshed. The concept has evolved since the infamous American and Israeli strike against Iran’s nuclear program with malware known as Stuxnet, which temporarily paralyzed uranium production starting in 2005.

IKE, which started under a different name in 2012 and was rolled out for use in 2018, provides an opportunity to move far faster, replacing humans with artificial intelligence. Computers will be increasingly relied upon to make decisions about how and when the U.S. wages cyber warfare.

This has the potential benefit of radically accelerating attacks and defenses, allowing moves measured in fractions of seconds instead of the comparatively plodding rate of a human hacker.

The problem is that systems like IKE, which rely on a form of artificial intelligence called machine learning, are hard to test, making their moves unpredictable.

In an arena of combat in which stray computer code could accidentally shut down the power at a hospital or disrupt an air traffic control system for commercial planes, even an exceedingly smart computer waging war carries risks.

People knowledgeable about the programs say the military is rushing ahead with technologies designed to reduce human influence on cyber war, driven by an arms race between nations desperate to make combat faster.

Using the reams of data at its disposal, IKE can look at a potential attack by U.S. forces and determine the odds of success as a specific percentage. If those odds are high, commanders may decide to let the system proceed without further human intervention, a process not yet in use but quite feasible with current technology.
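The approval flow described above, one number for success odds and another for collateral risk feeding a go/no-go decision, can be sketched as a simple policy gate. The thresholds, field names, and return values are all hypothetical:

```python
# Sketch of the decision gate described above: a predicted success
# probability and a collateral-damage risk feed a go/no-go policy.
# Thresholds, parameter names, and return values are hypothetical.

def authorize(p_success, p_collateral,
              success_floor=0.90, collateral_ceiling=0.05):
    """Return the approval level required for a proposed operation."""
    if p_collateral > collateral_ceiling:
        return "require_human_review"    # civilian systems may be affected
    if p_success >= success_floor:
        return "proceed_automated"       # high odds, low predicted risk
    return "hold"                        # odds too low to act on

print(authorize(0.95, 0.02))   # proceed_automated
print(authorize(0.95, 0.20))   # require_human_review
print(authorize(0.60, 0.02))   # hold
```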

In September 2018, President Trump signed off on National Security Policy Memorandum 13, which supplanted Obama's order. The details of the policy remain classified, but sources familiar with it said it gave the secretary of defense the authority to approve certain types of operations without higher approval once the secretary had coordinated with intelligence officials.

With IKE, commanders will be able to deliver to decision makers one number predicting the likelihood of success and another calculating the risk of collateral damage, such as destroying civilian computer networks that might be connected to a target.

Other programs in development, such as Harnessing Autonomy for Countering Cyberadversary Systems (HACCS), are designed to give computers the ability to unilaterally shut down cyber threats.

https://www.darpa.mil/program/harnessing-autonomy-for-countering-cyberadversary-systems

All of these programs are bringing cyber warfare closer to the imagined world of the 1983 film “WarGames,” which envisioned an artificial intelligence system waging nuclear war after a glitch makes it unable to decipher the difference between a game and reality.

----------------------------------

Didn't they do this in Terminator 3: Rise of the Machines?



------------------------------------

Project IKE
https://www.twosixlabs.com/product/ike/

IKE is a state-of-the-art platform providing users throughout the entire command chain the ability to plan, prepare, execute and assess cybersecurity operations. Automated processes powered by machine learning reduce manual effort for operators. Advanced battlespace visualizations give cyber commanders precise, real-time insight into networks.

With IKE, standardized military processes are finally brought to the cyber warfare domain. Whether the DoD Cyber Mission Force or other agencies and partners, IKE can identify risks, streamline responses and track the outcome — all in one platform.



------------------------------------

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #459 on: September 17, 2020, 03:40:48 AM »
Project Convergence
https://breakingdefense.com/2020/09/a-slew-to-a-kill-project-convergence/

Instead of a single centralized Skynet trying to mastermind (or micromanage) operations, the Army is looking at a federation of more specialized, less ambitious AIs, each assisting humans in different ways.

WASHINGTON: The Army is actually field-testing artificial intelligence’s ability to find targets and aim weapons for human troops. After decades of debate and R&D, the technology is being used in one of the harshest environments imaginable: the Yuma Desert.

In Army Futures Command’s Project Convergence experiments at Yuma Proving Grounds an artificial intelligence known as Firestorm is warning ground troops of threats, sending them precision targeting data, and in some cases even aiming their vehicles’ weapons at the enemy, the Army’s director of ground vehicle modernization says in an exclusive interview.

... Firestorm takes in data from satellites, drones, and ground-based sensors, Coffman explained. The AI continually processes that data and sends its conclusions over the Army’s tactical wireless network – “Firestorm is the brain,” he said, “the network is the spine” – to command posts, drones, mortar teams, and combat vehicles from Humvees to M109 armored howitzers.

Second, Firestorm’s algorithms can also prioritize among potential targets, calculate which units are best equipped and positioned to engage them, and pass targeting information directly to their weapons’ fire control systems. Those include the artillery’s AFATDS and the computerized Remote Weapons Stations (RWS) installed on many Humvees and MRAPs since 9/11. At Yuma, those readily available and relatively inexpensive 4×4 tactical trucks and their RWS are standing in for future Robotic Combat Vehicles and Optionally Manned Fighting Vehicles that Coffman’s team is still developing.
https://breakingdefense.com/tag/optionally-manned-fighting-vehicle/



A Remote Weapons Station gives a gunner inside the vehicle screens and a joystick, allowing him to see through external sensors, aim the roof-mounted weapon, and fire without ever having to expose himself in an open hatch. Since weapons installed this way are already aimed by electronic controls and actuators, rather than by human muscles or hydraulics, Firestorm’s digital signals can actually bring the gun to bear on the target without human intervention, a capability called “slew to cue.” ...
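The pipeline attributed to Firestorm, prioritize targets and then cue the best-positioned shooter, reduces at its core to a small assignment problem. The sketch below ranks targets by threat and assigns each to the nearest unit whose weapon can engage it; all names, positions, and capabilities are made up:

```python
# Sketch of the "prioritize targets, pick the best-positioned shooter" step
# described above: rank targets by threat, then assign each to the closest
# unit capable of engaging it. All data here are invented for illustration.
import math

targets = [
    {"id": "T1", "pos": (2.0, 1.0), "threat": 0.9, "kind": "armor"},
    {"id": "T2", "pos": (8.0, 3.0), "threat": 0.4, "kind": "infantry"},
]
units = [
    {"id": "RWS-Humvee", "pos": (7.0, 3.0), "can_hit": {"infantry"}},
    {"id": "M109",       "pos": (9.0, 9.0), "can_hit": {"armor", "infantry"}},
]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def assign(targets, units):
    """Highest-threat targets first; each gets the nearest capable shooter."""
    plan = {}
    for tgt in sorted(targets, key=lambda t: -t["threat"]):
        capable = [u for u in units if tgt["kind"] in u["can_hit"]]
        if capable:
            shooter = min(capable, key=lambda u: dist(u["pos"], tgt["pos"]))
            plan[tgt["id"]] = shooter["id"]   # cue sent to that unit's weapon
    return plan

print(assign(targets, units))   # {'T1': 'M109', 'T2': 'RWS-Humvee'}
```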

------------------------------------

Target Gone In 20 Seconds: Army Sensor-Shooter Test
https://breakingdefense.com/2020/09/target-gone-in-20-seconds-army-sensor-shooter-test/

WASHINGTON: Army experiments have shortened the kill chain remarkably – from the time a satellite or drone detects a target to the time an artillery unit opens fire – to “less than 20 seconds,” the head of Army Futures Command said this afternoon.

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #460 on: September 17, 2020, 07:12:56 PM »
GITAI Sending Autonomous Robot to Space Station
https://spectrum.ieee.org/automaton/robotics/space-robots/gitai-autonomous-robot-iss



... GITAI has been working on a variety of robots for space operations, the most sophisticated of which is a humanoid torso called G1, which is controlled through an immersive telepresence system. What will be launching into space next year is a more task-specific system called the S1, which is an 8-degrees-of-freedom arm with an integrated sensing and computing system that can be wall-mounted and has a 1-meter reach.



GITAI says that “all operations conducted by the S1 GITAI robotic arm will be autonomous, followed by some teleoperations from Nanoracks’ in-house mission control.”

... NanoRacks is scheduled to launch the Bishop module on SpaceX CRS-21 in November. The S1 will be launched separately in 2021, and a NASA astronaut will install the robot and then leave it alone to let it start demonstrating how work in space can be made both safer and cheaper once the humans have gotten out of the way.


vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #461 on: September 17, 2020, 11:33:13 PM »
AGI One Step Closer: Human Brain's Memory Abilities Inspire AI Experts in Making Neural Networks Less 'Forgetful'
https://techxplore.com/news/2020-09-brain-memory-abilities-ai-experts.html

Artificial intelligence (AI) experts at the University of Massachusetts Amherst and the Baylor College of Medicine report that they have successfully addressed what they call a "major, long-standing obstacle to increasing AI capabilities" by drawing inspiration from a human brain memory mechanism known as "replay."

Catastrophic forgetting in artificial neural networks is a major obstacle to the development of AGI artificial agents that can incrementally learn from their experiences.

First author and postdoctoral researcher Gido van de Ven and principal investigator Andreas Tolias at Baylor, with Hava Siegelmann at UMass Amherst, write in Nature Communications that they have developed a new method to protect deep neural networks, "surprisingly efficiently," from "catastrophic forgetting," in which networks, upon learning new lessons, forget what they had learned before.

... Siegelmann says the team's major insight is in "recognizing that replay in the brain does not store data." Rather, "the brain generates representations of memories at a high, more abstract level with no need to generate detailed memories." Inspired by this, she and colleagues created an artificial brain-like replay, in which no data is stored. Instead, like the brain, the network generates high-level representations of what it has seen before.

The "abstract generative brain replay" proved extremely efficient, and the team showed that replaying just a few generated representations is sufficient to remember older memories while learning new ones. Generative replay not only prevents catastrophic forgetting and provides a new, more streamlined path for system learning, it allows the system to generalize learning from one situation to another, they state.


Brain-inspired modifications enable generative replay to scale to problems with complex inputs.

Gido M. van de Ven et al, Brain-inspired replay for continual learning with artificial neural networks, Nature Communications (2020).
https://www.nature.com/articles/s41467-020-17866-2
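Structurally, generative replay means interleaving samples drawn from a generator of earlier experience with the new task's data, instead of storing raw examples. The toy below keeps only that loop shape; the real method trains deep networks and, per the paper, replays abstract internal representations rather than raw inputs:

```python
# Structural sketch of generative replay for continual learning: when
# training on a new task, mix in samples from a generator of earlier
# experience rather than stored raw data. The "model" and "generator" are
# trivial stand-ins for the deep networks used in the actual method.
import random

random.seed(0)

class TinyGenerator:
    """Stand-in generator: memorizes examples and samples from them.
    A real implementation would learn a generative distribution."""
    def __init__(self):
        self.modes = []
    def fit(self, data):
        self.modes.extend(data)
    def sample(self, n):
        return [random.choice(self.modes) for _ in range(n)]

def train_with_replay(model, new_data, generator, replay_ratio=1.0):
    """One continual-learning step: interleave replayed and new samples."""
    replayed = generator.sample(int(len(new_data) * replay_ratio))
    mixed = new_data + replayed
    random.shuffle(mixed)
    model.extend(mixed)        # stand-in for gradient updates on the mix
    generator.fit(new_data)    # keep the generator current for later tasks
    return model

model, gen = [], TinyGenerator()
gen.fit(["task-A-example"] * 3)                  # pretend task A came earlier
train_with_replay(model, ["task-B-example"] * 3, gen)
print(sorted(set(model)))   # ['task-A-example', 'task-B-example']
```

The point of the sketch: after training on task B, the model has still seen task-A-like samples in the same batch, which is what keeps the old task from being overwritten.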

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #462 on: September 19, 2020, 02:28:18 AM »

Coordination in bipedal robots

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #463 on: September 19, 2020, 02:45:43 AM »
Michael T. Klare: US Military Robots on Fast Track to Leadership Role
https://consortiumnews.com/2020/08/27/us-military-robots-on-fast-track-to-leadership-role/

-----------------------------------------

Lockheed Aims For Laser On Fighter By 2025
https://breakingdefense.com/2020/09/lockheed-aims-for-laser-on-fighter-by-2025/


... pew! pew! ...

WASHINGTON: “Lockheed Martin is working to fly a laser on tactical fighters within the next five years,” Lockheed laser expert Mark Stephen told reporters yesterday afternoon. “We’re spending a lot of time to get the beam director right.”

That beam director, which keeps the laser beam on target, is a crucial but easily overlooked component of future laser weapons.

The Air Force Research Lab’s SHiELD program aims to put a defensive laser pod on fighters to defend them against incoming anti-aircraft missiles. An offensive laser to shoot down enemy aircraft would have to hit harder and at longer distances, so it’s a more distant goal: Such weapons are envisioned for a future “sixth generation” fighter — like the NGAD prototype now in flight test — to follow the 5th-gen F-35, while the SHiELD pod will go on non-stealthy 4th-gen aircraft like the F-16, as in this Lockheed video.

The first unit of IFPC-HEL prototypes, already under construction, will be operational in 2024. That’s a year ahead of Lockheed’s timeline to put a laser on a fighter. IFPC-HEL will produce 300 kilowatts of power; SHiELD’s output is TBD, but it’ll probably be under 100 kW, allowing the fighter to charge the laser without installing a whole new power generation system.

Precision can trump power, up to a point, because the more precisely you can hold the laser beam on the exact same spot, the faster it burns through. Getting the laser precisely on target and keeping it there is the job of the beam director. It has to pull in sensor data on the current locations of both the target and the firing platform. Sophisticated software predicts exactly where the beam needs to go and adjusts specially designed mirrors to bounce the laser light in just the right direction. And the beam director has to keep making those calculations and adjustments many thousands of times a second.
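At its simplest, the prediction step described above is a lead calculation: estimate where the target will be after the control loop's latency, then command the mirror to that offset. A constant-velocity toy version, with made-up numbers (the real system fuses sensor data and corrects thousands of times a second):

```python
# Sketch of the beam-director prediction step described above: predict the
# target's position after the loop latency and derive the pointing offset.
# The constant-velocity model and all numbers are illustrative only.

def lead_point(target_pos, target_vel, latency_s):
    """Constant-velocity prediction of target position after `latency_s`."""
    return tuple(p + v * latency_s for p, v in zip(target_pos, target_vel))

def mirror_command(own_pos, aim_point):
    """Pointing offset the mirror must apply (here, a plain 2-D vector)."""
    return tuple(a - o for a, o in zip(aim_point, own_pos))

# Target 1 km away crossing at 300 m/s; assume 10 ms of total loop latency.
aim = lead_point(target_pos=(1000.0, 0.0), target_vel=(0.0, 300.0),
                 latency_s=0.01)
print(aim)                              # lead point ~3 m ahead of the target
print(mirror_command((0.0, 0.0), aim))  # offset commanded to the mirror
```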

... Lockheed is supremely confident in their laser tech. As another exec, Paul Shattuck, boasted on an earlier occasion: “Our beam control technology enables precision equivalent to shooting a beach ball off the top of the Empire State Building from the San Francisco Bay Bridge.”

-----------------------------------

Air Force to Try In-Flight Software Update
https://www.defenseone.com/technology/2020/09/air-force-try-flight-software-update/168544/

... “We're working on pretty cool announcements coming in the next few weeks with the ability to update the software of a jet while flying,” Nicolas Chaillan, chief software officer for the U.S. Air Force, said during a webinar Tuesday. “So that’s the kind of stuff that will be game changing.”

... what could possibly go wrong?!


... loading
« Last Edit: September 19, 2020, 09:42:40 AM by vox_mundi »

Sigmetnow

Re: Robots and AI: Our Immortality or Extinction
« Reply #464 on: September 20, 2020, 09:22:24 PM »
Neuralink is developing a brain implant to enable users to control their phones with thoughts
The first clinical trial will implant the Link device in individuals paralyzed by cervical spinal cord injury.

The user would download the Neuralink App on their mobile phone that would allow “control” over an “iOS device, keyboard and mouse directly with the activity of your brain, just by thinking about it.”

The Link chip would connect wirelessly to the app.

https://www.tesmanian.com/blogs/tesmanian-blog/neuralink-app
People who say it cannot be done should not interrupt those who are doing it.

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #465 on: September 21, 2020, 02:18:46 AM »
New Navy Lab to Accelerate Autonomy, Robotics Programs
https://www.nationaldefensemagazine.org/articles/2020/9/8/navy-testing-new-autonomy-integration-lab

Over the past few years, the Navy has been hard at work building a new family of unmanned surface and underwater vehicles through a variety of prototyping efforts. It is now standing up an integration lab to enable the platforms with increased autonomy, officials said Sept. 8.

The Rapid Integration Autonomy Lab, or RAIL, is envisioned as a place where the Navy can bring in and test new autonomous capabilities for its robotic vehicles, said Capt. Pete Small, program manager for unmanned maritime systems.

... Robotics technology is moving at a rapid pace, and platforms will need to have their software and hardware components replaced throughout their lifecycles, he said. In order to facilitate these upgrades, the service will need to integrate the new autonomy software that comes with various payloads and certain autonomy mission capabilities with the existing nuts-and-bolts packages already in the unmanned platforms.

“We don’t want to have to reinvent all of the systems every time we have a new platform,” he said. “And so I don’t want to have to reinvent autonomy algorithms for every individual platform. I don’t want to have to reintegrate data and learning algorithms.”  ...
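The "write the autonomy once, run it on any hull" idea Small describes is essentially a hardware-abstraction layer. A minimal sketch (all class and method names here are hypothetical, not the Navy's actual interfaces):

```python
from abc import ABC, abstractmethod
import math

class Platform(ABC):
    """Hypothetical abstraction that each unmanned vehicle implements once."""
    @abstractmethod
    def position(self) -> tuple:
        """Current (x, y) position in meters."""
    @abstractmethod
    def command(self, heading: float, speed: float) -> None:
        """Issue a heading/speed order to the vehicle."""

def head_to_waypoint(platform: Platform, waypoint, speed=2.0) -> float:
    """One reusable autonomy behavior: steer any Platform toward a
    waypoint. It never touches platform-specific code."""
    x, y = platform.position()
    wx, wy = waypoint
    heading = math.atan2(wy - y, wx - x)
    platform.command(heading, speed)
    return heading

class FakeUSV(Platform):
    """Stand-in surface vessel for testing behaviors without hardware."""
    def __init__(self):
        self.last_command = None
    def position(self):
        return (0.0, 0.0)
    def command(self, heading, speed):
        self.last_command = (heading, speed)
```

A new hull then only needs a thin `Platform` implementation, and the existing library of behaviors comes along for free.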

-------------------------------

Navy’s New Unmanned Fleet Likely To Hunt Chinese & Russian Subs
https://breakingdefense.com/2020/09/navys-new-unmanned-fleet-likely-to-hunt-chinese-russian-subs/



A new report from the Hudson Institute points out that the current anti-submarine tactics and technologies are largely unchanged since the Cold War — and suggests ships like the new Medium Unmanned Surface Vessel, or MUSV, could play a key role in keeping watch on subs even before they push into the open ocean.

https://www.hudson.org/research/16347-sustaining-the-undersea-advantage-transforming-anti-submarine-warfare-using-autonomous-systems

The thinking is those ships operating without a crew, or with just a few sailors aboard, will have to have the ability to control smaller underwater drones doing things like hunting for submarines or mines, while relaying that information back to a carrier strike group or Marine unit ashore hundreds or thousands of miles away.

These unmanned engagements tracking submarines should “prioritize suppression of submarines over destruction, based on lessons from the First and Second World Wars and the Cold War,” the report states. Drones in the sky could use small air-launched torpedoes or small depth charges to harass the subs, while the MUSVs “could close on the target submarine at acceptable risk and launch short-range standoff ASW weapons such as anti-submarine rockets” or other charges.

The command and control of these potential operations would combine human command with machine control, with maneuvers in contested waters remaining mostly automated, while humans would direct engagements when offensive operations kicked off.

----------------------------------------

https://ndiastorage.blob.core.usgovcloudapi.net/ndia/2018/science/Kearns.pdf

Human/Autonomous System Interaction and Collaboration (HASIC): The keys to maximizing human-agent interaction are: instilling confidence and trust among the team members; understanding of each member’s tasks, intentions, capabilities, and progress; and ensuring effective and timely communication. All of these must be provided within a flexible architecture for autonomy, facilitating different levels of authority, control, and collaboration.

Machine Perception, Reasoning and Intelligence (MPRI): Perception, reasoning, and intelligence allow entities to have existence, intent, relationships, and understanding in the battlespace relative to a mission.

Scalable Teaming of Autonomous Systems (STAS): Collaborative teaming is a fundamental paradigm shift for future autonomous systems. Such teams are envisioned to be heterogeneous in size, mobility, power, and capability. [hunter-killer packs]

Test, Evaluation, Validation, and Verification (TEVV): The creation of developmental and operational T&E techniques that focus on the unique challenges of autonomy, including state-space explosion, unpredictable environments, emergent behavior, and human-machine communication.

-------------------------------------


vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #466 on: September 21, 2020, 02:27:04 AM »
Boston Dynamics is poised to jump into the logistics market with a very different robot — or rather robots — meant to move boxes and other box-like items around quite differently from the currently practical “autonomous pallet” method.

Boston Dynamics is simultaneously developing robots for logistics (think production, packaging, inventory, transportation, and warehousing).

“We have big plans in logistics,” said Boston Dynamics CEO Robert Playter. “We’re going to have some exciting new logistics products coming out in the next two years. We have customers now doing proof-of-concept tests. We’ll announce something in 2021, exactly what we’re doing, and we’ll have product available in 2022.”



“Handle hasn’t really been delayed,” Playter said. “We’ve got a new design of Handle. We decided we need to change the design before we commercialize it. So that’s what’s going on. I wouldn’t say it’s delayed. I would say that we sort of rethought exactly what we wanted to do there. And so now we have an iteration on that design, which we are beginning to prepare for manufacturing. And it takes time. To really design something for manufacturing and the reliability you need, it takes a couple of years.”

Boston Dynamics isn’t yet ready to share what the redesigned Handle looks like. The company will make that public “sometime in 2021.” Broadly speaking though, the design change will make the robot “faster and more efficient in a logistics setting,” Playter promised.



Boston Dynamics is really gunning for logistics next. “The opportunities in logistics are large, and we’re going to have the first mobile case-picking robot that can pick up and put down boxes, whether it’s in the back of a truck or in your warehouse or at the end of a conveyor,” Playter said.

Quote
... “Basically, any of the box-picking tasks that are sort of ubiquitous in a warehouse, I think Handle will be able to do.”

But that’s in 2022 at the earliest. Until then, Boston Dynamics is selling Pick, a depalletizing vision system and computer that costs $75,000. Pick is not a robot — it needs to be attached to existing commercial robots. Companies sell integrated robot setups that use Boston Dynamics’ Pick for between $200,000 and $400,000.

Handle is the mobile version of Pick. So, will the former kill the latter? “I think it will have its place in the world, but I do believe that in the long run having a robot that’s mobile that can do what Pick does will end up superseding Pick,” Playter said.

... The Atlas team has recently done some work to author behavior software on Atlas much more rapidly. What used to take six months to code, the team can now do in a few days, thanks to advanced optimization tools. “And those tools will become available really to all of our machines, but we’re using Atlas as the way of motivating the development,” Playter said. “But Atlas is too expensive and too complicated to commercialize anytime soon.”

https://venturebeat.com/2020/09/14/boston-dynamics-ceo-profitability-roadmap-next-robots/
« Last Edit: September 21, 2020, 10:33:38 PM by vox_mundi »

nanning

  • Nilas ice
  • Posts: 2487
  • 0Kg CO₂, 37 KWh/wk,125L H₂O/wk, No offspring
    • View Profile
  • Liked: 273
  • Likes Given: 23170
Re: Robots and AI: Our Immortality or Extinction
« Reply #467 on: September 21, 2020, 08:29:24 AM »
Thanks for all your interesting posts Vox.

Those robots are doing the work that human employees do now.
How will this former employee get money to live? How can she/he pay the rent?

Off-topic, but you could say that warehouse robots and military robots are also not about our "immortality or extinction".

"It is preoccupation with possessions, more than anything else, that prevents us from living freely and nobly" - Bertrand Russell
"It is preoccupation with what other people from your groups think of you, that prevents you from living freely and nobly" - Nanning
Why do you keep accumulating stuff?

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #468 on: September 21, 2020, 09:24:28 AM »
Quote from: nanning
... Those robots are doing the work that human employees do now.

How will this former employee get money to live? How can she/he pay the rent?
Good question.

They will join the out-of-work truck drivers, bank tellers, and call center workers, etc. A growth opportunity for robot repair persons.

A 'Star Wars' economy with droids running around and no unemployment [and rising Imperial authoritarianism]? ... Only in the movies.

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #469 on: September 21, 2020, 10:05:04 AM »
This Tomato-Picking Robot Is More Efficient Than Humans and Can Work 24/7
https://www.cnbc.com/2019/05/11/root-ai-unveils-its-tomato-picking-robot-virgo.html



Farmers spend more than $34 billion a year on labor in the U.S., according to the USDA. And many would like to hire more help. But the agriculture industry here faces labor shortages, thanks in part to the scarcity of H-2B visas, and an aging worker population. Older workers can’t necessarily handle the hours or repetitive physical tasks they once might have.

That’s where Root AI, a start-up in Somerville, Massachusetts, comes in. The company’s first agricultural robot, dubbed the Virgo 1, can pick tomatoes without bruising them, and detect ripeness better than humans.


https://root-ai.com/

The Virgo is a self-driving robot with sensors and cameras that serve as its eyes. Because it also has lights on board, it can navigate large commercial greenhouses any hour of the day or night, detecting which tomatoes are ripe enough to harvest. A “system-on-module” runs the Virgo’s AI-software brain. A robotic arm, with a dexterous hand attached, moves gently enough to work alongside people, and can independently pick tomatoes without tearing down vines.
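As a toy illustration of the "detect which tomatoes are ripe" step (Root AI's actual system uses trained vision models; this threshold rule and its cutoffs are purely hypothetical):

```python
def looks_ripe(rgb, red_min=150, green_max=100):
    """Toy ripeness check: call a tomato pixel ripe when it is strongly
    red and not very green. A real harvester learns this from labeled
    images and fuses many pixels plus depth data before committing
    the gripper."""
    r, g, b = rgb
    return r >= red_min and g <= green_max
```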

One of the most distinctive things about the Virgo, notes Root AI CEO Josh Lessing, is that the company can write new AI software and add additional sensors or grippers to handle different crops. “It’s a complete mobile platform enabled to harvest whatever you need,” says Lessing.



In this way, the Virgo is a departure from other crop-specific harvesting machines on the market and in development, like Abundant Robotics’ apple picker, Agrobot’s strawberry picker, and Sweeper’s pepper-picker.

The Virgo has already been tested at commercial greenhouses in the U.S. and Canada. And Root AI has raised funding from noteworthy venture firms, including Accomplice, First Round, Half Court, Liquid2 and Schematic.

Lessing expects to see the Virgo in broad commercial use next year, and to develop new software that enables it to pick other high-value crops like cucumbers, strawberries and peppers after that.

https://www.cnbc.com/2019/04/10/an-australian-start-up-is-using-robots-to-pull-weeds-and-herd-cattle.html

https://www.cnbc.com/2018/08/23/robots-could-soon-be-picking-your-strawberries.html

https://www.cnbc.com/2018/12/20/watch-this-robot-pick-a-peck-of-peppers-with-a-tiny-saw.html

Tom_Mazanec

  • Guest
Re: Robots and AI: Our Immortality or Extinction
« Reply #470 on: September 21, 2020, 12:25:37 PM »
nanning, the thread title reflects the ultimate prospects of AI but most of the posts in the thread reflect current SOTA.

gerontocrat

  • Multi-year ice
  • Posts: 20378
    • View Profile
  • Liked: 5289
  • Likes Given: 69
Re: Robots and AI: Our Immortality or Extinction
« Reply #471 on: September 21, 2020, 03:41:37 PM »
Quote from: Tom_Mazanec
nanning, the thread title reflects the ultimate prospects of AI but most of the posts in the thread reflect current SOTA.
It is interesting to watch the sheer speed at which robot/AI capability is expanding in multiple directions: to diagnose and heal people quicker, and to diagnose threats and kill people quicker, to name but two.

Since humans are still just about in control of the process, what could possibly go wrong?
"Para a Causa do Povo a Luta Continua!"
"And that's all I'm going to say about that". Forrest Gump
"Damn, I wanted to see what happened next" (Epitaph)

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #472 on: September 21, 2020, 05:05:38 PM »
Japanese Grocery Chain Testing Remotely Controlled Robot Stockers
https://techxplore.com/news/2020-09-japanese-grocery-chain-remotely-robot.html

Japanese grocery chain FamilyMart has teamed up with Tokyo startup Telexistence to test the idea of using a remotely controlled shelf stocking robot named the Model-T to restock grocery shelves. On its website, Telexistence describes the robot as a means for addressing labor shortages in Japan and also as a way to improve social distancing during the pandemic.

Representatives of Telexistence explain that the purpose of the robot is to allow a single worker to service multiple stores from a remote location. Not only will this save labor costs for the stores involved, it will also provide more protection for customers and store employees. The robots would replace living human beings who might be carrying the SARS-CoV-2 virus.



Notably, the system will enable people to work who might not otherwise be physically able to stock shelves, whether due to injury, disability or other limiting factors. The robot should also allow people to stock without growing physically tired or to stock items that would normally be too heavy for them to lift, benefiting female stockers.

... clean-up in aisle 6

nanning

  • Nilas ice
  • Posts: 2487
  • 0Kg CO₂, 37 KWh/wk,125L H₂O/wk, No offspring
    • View Profile
  • Liked: 273
  • Likes Given: 23170
Re: Robots and AI: Our Immortality or Extinction
« Reply #473 on: September 21, 2020, 06:22:24 PM »
Omegad. What a hell is being created in the name of progress profit.

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #474 on: September 22, 2020, 12:13:28 AM »
Computer Predicts Your Thoughts, Creating Images Based On Them
https://techxplore.com/news/2020-09-thoughts-images-based.html

Researchers at the University of Helsinki have developed a technique in which a computer models visual perception by monitoring human brain signals. In a way, it is as if the computer tries to imagine what a human is thinking about. As a result of this imagining, the computer is able to produce entirely new information, such as fictional images that were never before seen.

As far as is known, the new study is the first where both the computer's presentation of the information and brain signals were modeled simultaneously using artificial intelligence methods. Images that matched the visual characteristics that participants were focusing on were generated through interaction between human brain responses and a generative neural network.

The researchers call this method neuroadaptive generative modeling. A total of 31 volunteers participated in a study that evaluated the effectiveness of the technique. Participants were shown hundreds of AI-generated images of diverse-looking people while their EEG was recorded.

The subjects were asked to concentrate on certain features, such as faces that looked old or were smiling. While looking at a rapidly presented series of face images, the EEGs of the subjects were fed to a neural network, which inferred whether any image was detected by the brain as matching what the subjects were looking for.



Based on this information, the neural network adapted its estimation as to what kind of faces people were thinking of. Finally, the images generated by the computer were evaluated by the participants and they nearly perfectly matched with the features the participants were thinking of. The accuracy of the experiment was 83 percent.
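The feedback loop the study describes can be caricatured in a few lines: an EEG classifier flags which displayed images "matched," and the system moves its estimate of the target in the generator's latent space toward the mean latent vector of the flagged images. This is a sketch of the idea only; the published model is a GAN paired with a calibrated per-subject EEG classifier.

```python
def update_latent(estimate, latents, relevant, lr=0.3):
    """One neuroadaptive step: nudge the current latent-space estimate
    toward the mean latent of the images the EEG classifier flagged as
    matching the viewer's target category.
    `latents` is a list of latent vectors; `relevant` is a parallel
    list of booleans from the classifier."""
    hits = [z for z, flag in zip(latents, relevant) if flag]
    if not hits:
        return estimate  # no brain "hits" this round; keep the estimate
    mean = [sum(dims) / len(hits) for dims in zip(*hits)]
    return [e + lr * (m - e) for e, m in zip(estimate, mean)]
```

Feeding the updated estimate back into the generator produces the next batch of candidate faces, and the loop repeats until the rendered image converges on the category the subject had in mind.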

Generating images of the human face is only one example of the technique's potential uses. One practical benefit of the study may be that computers can augment human creativity.

"If you want to draw or illustrate something but are unable to do so, the computer may help you to achieve your goal. It could just observe the focus of attention and predict what you would like to create," Ruotsalo says.

However, the researchers believe that the technique may be used to gain understanding of perception and the underlying processes in our mind.


Krell 'Plastic Educator' - Forbidden Planet
https://en.m.wikipedia.org/wiki/Krell#Description

"The technique does not recognize thoughts but rather responds to the associations we have with mental categories. Thus, while we are not able to find out the identity of a specific 'old person' a participant was thinking of, we may gain an understanding of what they associate with old age. We, therefore, believe it may provide a new way of gaining insight into social, cognitive and emotional processes," says Senior Researcher Michiel Spapé.

According to Spapé, this is also interesting from a psychological perspective.

"One person's idea of an elderly person may be very different from another's. We are currently uncovering whether our technique might expose unconscious associations, for example by looking if the computer always renders old people as, say, smiling men."

Lauri Kangassalo et al, Neuroadaptive modeling for generating images matching perceptual categories, Scientific Reports (2020)
https://www.nature.com/articles/s41598-020-71287-1

Tom_Mazanec

  • Guest
Re: Robots and AI: Our Immortality or Extinction
« Reply #475 on: September 22, 2020, 12:43:35 AM »
If I remember Forbidden Planet the Krell technology did not work out so well for them.

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #476 on: September 22, 2020, 04:27:50 PM »
The Military’s Latest Wearables Can Detect Illness Two Days Before You Get Sick
https://www.defenseone.com/technology/2020/09/militarys-latest-wearables-can-detect-illness-two-days-you-get-sick/168664/

Some troops in the U.S. military are wearing a watch and ring kit that can alert them and their command if they’re going to get sick in the next day or two. It’s part of a new system that the Defense Innovation Unit, or DIU, has built with Philips Healthcare and the Defense Threat Reduction Agency, or DTRA.

... Called Rapid Analysis of Threat Exposure, or RATE, the system can’t tell you exactly what you have, but can tell you the likelihood, on a scale of 1 to 100, that a sick day is ahead.

“Originally, this wasn’t designed for COVID-19, but the algorithm was trained against some SARS variants, of which COVID-19 is one,” said Dr. Christian Whitchurch, who runs the human systems portfolio at DIU. “We trained this algorithm on something like a quarter-million patient records. These are folks that went into the hospital for an elective surgery…and then became unwell.” 

The researchers identified six markers that allowed the Philips-made algorithm to provide a 48-hour heads-up, before the wearer even feels sick in most instances.
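RATE's six markers and model weights are not public, so the names and numbers below are invented; the sketch only shows the general shape such a system could take: wearable features in, a logistic model, and a 1-100 risk score out.

```python
import math

# Hypothetical markers and weights -- illustrative only, not RATE's.
WEIGHTS = {"resting_hr": 0.04, "hrv_ms": -0.03, "skin_temp_c": 0.5,
           "resp_rate": 0.10, "sleep_score": -0.02, "spo2": -0.01}
BIAS = -18.0

def rate_style_score(markers: dict) -> int:
    """Map six physiological features to a 1-100 illness-risk score
    via a logistic model, then clamp to the reported scale."""
    z = BIAS + sum(WEIGHTS[k] * markers[k] for k in WEIGHTS)
    p = 1.0 / (1.0 + math.exp(-z))           # probability of getting sick
    return min(100, max(1, round(100 * p)))  # clamp to the 1-100 scale
```

The useful property is monotonicity: an elevated skin temperature or resting heart rate moves the score up smoothly rather than tripping a binary alarm, which matches the "likelihood on a scale of 1 to 100" framing above.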


“We are pivoting this hospital-developed model into the context of a warfighter using commercially available wearable tech,” said Whitchurch.

In June, DIU and DTRA began giving the kits to about 400 people. “Within two weeks of us going live we had our first successful COVID-19 detect” — that is, an indication that the wearer was unwell, which led to a further diagnostic test that revealed COVID-19, he said. “That was amazing.”

... DIU aims to roll the kits out to “tier one operational cohorts” — units in which absenteeism can have a big effect on their missions, said Air Force Lt. Col. Jeffrey "Mach" Schneider, who does portfolio support at DIU.

The recent saga of the Navy aircraft carrier sidelined by COVID shows how a highly contagious illness can really disrupt operations.

...  Although the team isn’t currently seeking FDA approval of the device for larger consumer populations, they say it’s absolutely the sort of thing that will remain useful, for the military at least, long after the COVID-19 threat has passed.


vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #477 on: September 22, 2020, 07:04:07 PM »
Huang’s Law Is the New Moore’s Law, and Explains Why Nvidia Wants ARM
https://www.wsj.com/articles/huangs-law-is-the-new-moores-law-and-explains-why-nvidia-wants-arm-11600488001

You may remember Moore’s Law, which held that the number of transistors on a chip doubles about every two years, with roughly corresponding increases in performance for the chip and the computers they drive.

Now there’s a new law, and it’s “potentially no less consequential for computing’s next half century,” Mims wrote. “I call it Huang’s Law, after Nvidia Corp. chief executive and co-founder Jensen Huang. It describes how the silicon chips that power artificial intelligence more than double in performance every two years. While the increase can be attributed to both hardware and software, its steady progress makes it a unique enabler of everything from autonomous cars, trucks and ships to the face, voice and object recognition in our personal gadgets.”
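The arithmetic behind any "doubles every N years" claim is plain compounding, which makes the stakes easy to quantify (a generic calculation, not Nvidia data):

```python
def projected_speedup(years: float, doubling_period_years: float = 2.0) -> float:
    """Compounded growth: if performance doubles every
    `doubling_period_years`, the total speedup after `years`
    is 2 ** (years / doubling_period_years)."""
    return 2.0 ** (years / doubling_period_years)
```

A decade at that pace is a 32x jump, which is why the dispute over whether the doubling period is real (see the ExtremeTech rebuttal below) matters so much.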



https://www.motherjones.com/kevin-drum/2020/09/moores-law-is-dead-long-live-huangs-law/

https://www.extremetech.com/computing/315277-theres-no-such-thing-as-huangs-law

-------------------------------

The Black Box, Unlocked
https://unidir.org/publication/black-box-unlocked

Predictability and understandability are widely held to be vital characteristics of artificially intelligent systems. Put simply: AI should do what we expect it to do, and it must do so for intelligible reasons. This consideration stands at the heart of the ongoing discussion about lethal autonomous weapon systems and other forms of military AI. But what does it mean for an intelligent system to be "predictable" and "understandable" (or, conversely, unpredictable and unintelligible)? What is the role of predictability and understandability in the development, use, and assessment of military AI? What is the appropriate level of predictability and understandability for AI weapons in any given instance of use? And how can these thresholds be assured?

... “If you have a system that gives you very little insight into how it turns input — so, you know, data — into output (conclusions or maneuvers in the case of an AI dogfighting system), we call [this] a ‘black box’ system. And this is problematic in critical functions like war fighting, potentially, because you sort of want to know that a system will do what you expect it to do, and that it will do so for intelligible reasons: that it won't just act in ways that, you know, don't make sense, or that it behaves in what appears to be a very successful way in testing, but only because it's actually, you know, picking up on some quirk in the training data that will not apply in the real world. It's a massive subject that feels really fundamental to the growing discussion around military artificial intelligence because, fundamentally, AI systems can be inherently unpredictable.”

https://unidir.org/sites/default/files/2020-09/BlackBoxUnlocked.pdf
« Last Edit: September 23, 2020, 12:06:52 AM by vox_mundi »

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #478 on: September 23, 2020, 09:58:30 AM »

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #479 on: September 23, 2020, 10:04:12 AM »
At the Math Olympiad, AI Computers Prepare to Go for the Gold
https://www.quantamagazine.org/at-the-international-mathematical-olympiad-artificial-intelligence-prepares-to-go-for-the-gold-20200921/

... Researchers view the 61st International Mathematical Olympiad (IMO) as the ideal proving ground for machines designed to think like humans. If an AI system can excel here, it will have matched an important dimension of human cognition. Insight.

“The IMO, to me, represents the hardest class of problems that smart people can be taught to solve somewhat reliably,” said Daniel Selsam of Microsoft Research. Selsam is a founder of the IMO Grand Challenge, whose goal is to train an AI system to win a gold medal at the world’s premier math competition.

https://imo-grand-challenge.github.io/

Since 1959, the IMO has brought together the best pre-college math students in the world. On each of the competition’s two days, participants have four and a half hours to answer three problems of increasing difficulty. They earn up to seven points per problem, and top scorers take home medals, just like at the Olympic Games. The most decorated IMO participants become legends in the mathematics community. Some have gone on to become superlative research mathematicians.

IMO problems are simple, but only in the sense that they don’t require any advanced math — even calculus is considered beyond the scope of the competition. They’re also fiendishly difficult. For example, here’s the fifth problem from the 1987 competition in Cuba:

Quote
Let n be an integer greater than or equal to 3. Prove that there is a set of n points in the plane such that the distance between any two points is irrational and each set of three points determines a non-degenerate triangle with rational area.

Solving IMO problems often requires a flash of insight, a transcendent first step that today’s AI finds hard — if not impossible.

... The IMO Grand Challenge team is using a software program called Lean, first launched in 2013 by a Microsoft researcher named Leonardo de Moura. Lean is a “proof assistant” that checks mathematicians’ work and automates some of the tedious parts of writing a proof.

https://leanprover.github.io/

De Moura and his colleagues want to use Lean as a “solver,” capable of devising its own proofs of IMO problems.

First, Lean needs to learn more math. The program draws on a library of mathematics called mathlib, which is growing all the time. Today it contains almost everything a math major might know by the end of their second year of college, but with some elementary gaps that matter for the IMO.
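For flavor, here is roughly what a machine-checkable statement looks like in Lean 3 with mathlib. The lemma name `irrational_sqrt_two` is mathlib's, though the exact import path may differ by mathlib version; an IMO-level proof would chain a long search through thousands of candidate steps like this one.

```lean
import data.real.irrational

-- √2 is irrational: a one-line proof, because mathlib already
-- contains the lemma and Lean only has to check that it applies.
example : irrational (real.sqrt 2) := irrational_sqrt_two
```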

The second, bigger challenge is teaching Lean what to do with the knowledge it has. The IMO Grand Challenge team wants to train Lean to approach a mathematical proof the way other AI systems already successfully approach complicated games like chess and Go — by following a decision tree until it finds the best move.

“If we can get a computer to have that brilliant idea by simply having thousands and thousands of ideas and rejecting all of them until it stumbles on the right one, maybe we can do the IMO Grand Challenge,” said Kevin Buzzard, a mathematician at Imperial College London.

kassy

  • First-year ice
  • Posts: 8235
    • View Profile
  • Liked: 2042
  • Likes Given: 1986
Re: Robots and AI: Our Immortality or Extinction
« Reply #480 on: September 23, 2020, 02:07:47 PM »
That does not really sound smart. More like a brute force approach.
Þetta minnismerki er til vitnis um að við vitum hvað er að gerast og hvað þarf að gera. Aðeins þú veist hvort við gerðum eitthvað.

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #481 on: September 23, 2020, 02:12:50 PM »
How would you approach it?
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Tom_Mazanec

  • Guest
Re: Robots and AI: Our Immortality or Extinction
« Reply #482 on: September 23, 2020, 02:16:28 PM »
For that matter how does the human brain approach it?

Sigmetnow

  • Multi-year ice
  • Posts: 25763
    • View Profile
  • Liked: 1153
  • Likes Given: 430
Re: Robots and AI: Our Immortality or Extinction
« Reply #483 on: September 23, 2020, 05:37:41 PM »
BREAKING - Life-sized giant Gundam robot in Japan's Yokohama comes alive and is now in testing mode.
Video clip:  https://twitter.com/disclosetv/status/1308532427523067905
Image below.

Sure. Because in 2020, what could possibly go wrong?
People who say it cannot be done should not interrupt those who are doing it.

kassy

  • First-year ice
  • Posts: 8235
    • View Profile
  • Liked: 2042
  • Likes Given: 1986
Re: Robots and AI: Our Immortality or Extinction
« Reply #484 on: September 23, 2020, 09:45:37 PM »
That moment when you find out you are too big to join comic con.  :(
Þetta minnismerki er til vitnis um að við vitum hvað er að gerast og hvað þarf að gera. Aðeins þú veist hvort við gerðum eitthvað.

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #485 on: September 24, 2020, 12:09:43 AM »
... it's all clear now ...

Kaiju, colossal sea monsters, have emerged from an interdimensional portal at the bottom of the Pacific Ocean. To combat the monsters, humanity has united to create the Gundam [Jaegers], gigantic humanoid mechas, each controlled by two co-pilots as part of a last-ditch effort to defeat the Kaiju.

... Oh, no, they say he's got to go
Go, go, Godzilla (yeah)
Oh, no, there goes Tokyo
Go, go, Godzilla (yeah)



https://en.m.wikipedia.org/wiki/Pacific_Rim_(film)
https://en.m.wikipedia.org/wiki/Gundam
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #486 on: September 24, 2020, 12:10:55 AM »
Watch a Robot AI Beat World-Class Curling Competitors
https://techxplore.com/news/2020-09-robot-professional-players.html
https://www.scientificamerican.com/video/watch-a-robot-ai-beat-world-class-curling-competitors/



A robot named Curly that uses “deep reinforcement learning”—making improvements as it corrects its own errors—came out on top in three of four games against top-ranked human opponents from South Korean teams that included a women’s team and a reserve squad for the national wheelchair team. (No brooms were used).

One crucial finding was that the AI system demonstrated its ability to adapt to changing ice conditions. “These results indicate that the gap between physics-based simulators and the real world can be narrowed,” the joint South Korean-German research team wrote in Science Robotics on September 23.
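A crude sketch of the adaptation idea in Python — purely illustrative, not Curly's actual controller (which is a deep reinforcement learning system): treat changing ice as an unknown drift, estimate it from each stone's miss, and aim the next throw to cancel it.

```python
def update_bias(bias, observed_drift, lr=0.5):
    """Exponential moving estimate of how far the ice drags each stone."""
    return bias + lr * (observed_drift - bias)

true_drift = 0.3   # unknown to the robot: the ice pushes every stone 0.3 m
bias, errors = 0.0, []
for _ in range(10):
    aim = 0.0 - bias            # aim off-target to cancel the estimated drift
    landed = aim + true_drift   # stand-in "physics": the ice adds its drift
    errors.append(abs(landed))  # miss distance from the 0.0 target
    bias = update_bias(bias, landed - aim)
```

Each throw's error halves, which is the toy version of "narrowing the gap between the simulator and the real world" as conditions change.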

https://robotics.sciencemag.org/content/5/46/eabb9764
« Last Edit: September 25, 2020, 08:33:17 AM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Sigmetnow

  • Multi-year ice
  • Posts: 25763
    • View Profile
  • Liked: 1153
  • Likes Given: 430
Re: Robots and AI: Our Immortality or Extinction
« Reply #487 on: September 24, 2020, 10:20:33 PM »
So, ClosedAI, then?
Elon Musk parted ways with OpenAI in 2018, not liking the direction it was taking.

Microsoft gets exclusive license for OpenAI's GPT-3 language model
Quote
Microsoft today announced that it will exclusively license GPT-3, one of the most powerful language understanding models in the world, from AI startup OpenAI. In a blog post, Microsoft EVP Kevin Scott said that the new deal will allow Microsoft to leverage OpenAI’s technical innovations to develop and deliver AI solutions for customers, as well as create new solutions that harness the power of natural language generation. ...
https://venturebeat.com/2020/09/22/microsoft-gets-exclusive-license-for-openais-gpt-3-language-model/
People who say it cannot be done should not interrupt those who are doing it.

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #488 on: September 25, 2020, 09:23:30 AM »
Google’s Latest AI Experiment Wants You to Lip-Sync a Song to Help It Learn How We Speak [... and Learn How to Read Lips...]
https://www.androidpolice.com/2020/09/24/googles-latest-ai-experiment-wants-you-to-lip-sync-a-song-to-help-it-learn-how-we-speak/

Google is asking users to help teach its AI how to speak. Google's latest experiment asks you to lipsync with a popular song, and in return, you get a scorecard rating your efforts. Think Singstar without the music videos or the ability to choose the song, and you've got an accurate idea of what this does.

LipSync, which is built by YouTube for Chrome on desktop, will score your performance. It will then feed the video to Google’s AI — it doesn’t record any audio.

Google plans to use the video clips to teach its AI how human faces move when we speak. This could inform tools for people with ALS and speech impairments. Someday, AI might be able to guess what they are saying by observing their facial movements and then speak out loud on the person’s behalf.

https://experiments.withgoogle.com/lipsync



HAL 9000: Dave, although you took very thorough precautions in the pod against my hearing you, I could see your lips move.

--------------------------------------

Alexa's Getting 'More Expressive,' and Your Echo Will Soon Join Your Conversations, Ask Questions
https://www.cnet.com/news/alexa-getting-more-expressive-your-amazon-echo-will-soon-join-your-conversations-teachable-ai/

Tapping tech advancements in neural text-to-speech, Amazon said it's using AI to make Alexa's responses better suited to how actual human conversation occurs. This includes multisensory artificial intelligence, or the ability for multiple people to be in conversation with the voice assistant. Alexa will use context from an entire conversation to decide whether a request is meant for her.

... Amazon's new Echo devices are evolving into smart-home edge computing devices. For instance, they use the company's AZ1 Neural Edge processor: 20x less power, double the speech-processing speed, and 85% lower memory usage.

In an interview, Prasad said the capabilities added to Alexa are advances in AI as much as conversational technology. The timelines for the new capabilities varied from months to years in the labs. For instance, being able to teach Alexa instantaneously took three to four years.

As for Alexa's ability to interpret context and adjust how to speak to you, Prasad said the foundational technology took years.

Alexa's ability to naturally take turns during conversation required multiple technologies. For instance, Alexa separates noise from speech and picks up linguistic cues as well as visual ones like poses when possible, said Prasad.

Aside from the AI models and software work, Alexa's new features are also enabled by Amazon's AZ1 Neural Edge processor, said Prasad. "The processor on the device is key with a fast-paced conversation," said Prasad. "The neural accelerator on the device makes decisions much faster."

... "There's the potential to be able to teach Alexa anything in principle," said Prasad.

-------------------------------------
« Last Edit: September 25, 2020, 09:33:51 AM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #489 on: September 25, 2020, 07:51:32 PM »
Machine Learning Takes On Synthetic Biology: Algorithms Can Bioengineer Cells for You
https://phys.org/news/2020-09-machine-synthetic-biology-algorithms-bioengineer.html

Scientists at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) have developed a new tool that adapts machine learning algorithms to the needs of synthetic biology to guide development systematically. The innovation means scientists will not have to spend years developing a meticulous understanding of each part of a cell and what it does in order to manipulate it; instead, with a limited set of training data, the algorithms are able to predict how changes in a cell's DNA or biochemistry will affect its behavior, then make recommendations for the next engineering cycle along with probabilistic predictions for attaining the desired goal.

"The possibilities are revolutionary," said Hector Garcia Martin, a researcher in Berkeley Lab's Biological Systems and Engineering (BSE) Division who led the research. "Right now, bioengineering is a very slow process. It took 150 person-years to create the anti-malarial drug, artemisinin. If you're able to create new cells to specification in a couple weeks or months instead of years, you could really revolutionize what you can do with bioengineering."

Working with BSE data scientist Tijana Radivojevic and an international group of researchers, the team developed and demonstrated a patent-pending algorithm called the Automated Recommendation Tool (ART), described in a pair of papers recently published in the journal Nature Communications.

In "ART: A machine learning Automated Recommendation Tool for synthetic biology," led by Radivojevic, the researchers presented the algorithm, which is tailored to the particularities of the synthetic biology field: small training data sets, the need to quantify uncertainty, and recursive cycles. The tool's capabilities were demonstrated with simulated and historical data from previous metabolic engineering projects, such as improving the production of renewable biofuels.

The researchers say they were surprised by how little data was needed to obtain results. Yet to truly realize synthetic biology's potential, they say the algorithms will need to be trained with much more data. Garcia Martin describes synthetic biology as being only in its infancy—the equivalent of where the Industrial Revolution was in the 1790s. "It's only by investing in automation and high-throughput technologies that you'll be able to leverage the data needed to really revolutionize bioengineering," he said.

Radivojevic added: "We provided the methodology and a demonstration on a small dataset; potential applications might be revolutionary given access to large amounts of data."

... "This is a clear demonstration that bioengineering led by machine learning is feasible, and disruptive if scalable. We did it for five genes, but we believe it could be done for the full genome." ... "This is just the beginning. With this, we've shown that there's an alternative way of doing metabolic engineering. Algorithms can automatically perform the routine parts of research while you devote your time to the more creative parts of the scientific endeavor: deciding on the important questions, designing the experiments, and consolidating the obtained knowledge."



Tijana Radivojević et al., "A machine learning Automated Recommendation Tool for synthetic biology," Nature Communications (2020)
https://www.nature.com/articles/s41467-020-18008-4

------------------------------------

Scientists Persuade Nature to Make Silicon-Carbon Bonds
https://phys.org/news/2016-11-scientists-nature-silicon-carbon-bonds.html

A new study is the first to show that living organisms can be persuaded to make silicon-carbon bonds—something only chemists had done before. Scientists at Caltech "bred" a bacterial protein to make the man-made bonds—a finding that has applications in several industries.

The study is also the first to show that nature can adapt to incorporate silicon into carbon-based molecules, the building blocks of life. Scientists have long wondered if life on Earth could have evolved to be based on silicon instead of carbon. Science-fiction authors likewise have imagined alien worlds with silicon-based life, like the lumpy Horta creatures portrayed in an episode of the 1960s TV series Star Trek. Carbon and silicon are chemically very similar. They both can form bonds to four atoms simultaneously, making them well suited to form the long chains of molecules found in life, such as proteins and DNA.



"Directed Evolution of Cytochrome c for Carbon-Silicon Bond Formation: Bringing Silicon to Life," Science
https://science.sciencemag.org/content/354/6315/1048

-----------------------------

Scientists Create First Stable Semisynthetic Organism
https://phys.org/news/2017-01-scientists-stable-semisynthetic.html

Scientists at The Scripps Research Institute (TSRI) have announced the development of the first stable semisynthetic organism. Building on their 2014 study in which they synthesized a DNA base pair, the researchers created a new bacterium that uses the four natural bases (called A, T, C and G), which every living organism possesses, but that also holds as a pair two synthetic bases called X and Y in its genetic code.

TSRI Professor Floyd Romesberg and his colleagues have now shown that their single-celled organism can hold on indefinitely to the synthetic base pair as it divides. Their research was published January 23, 2017, online ahead of print in the journal Proceedings of the National Academy of Sciences.

Next, the researchers plan to study how their new genetic code can be transcribed into RNA, the molecule in cells needed to translate DNA into proteins. "This study lays the foundation for what we want to do going forward," said Zhang.

http://www.pnas.org/content/early/2017/01/17/1616443114
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #490 on: September 25, 2020, 09:05:40 PM »


OrionStar, the robotics company invested in by Cheetah Mobile, announced the Robotic Tea Master. Incorporating 3,000 hours of AI learning, 30,000 hours of robotic arm testing and machine vision training, the Robotic Tea Master can perform complex brewing techniques, such as curves and spirals, with millimeter-level stability and accuracy (reset error ≤ 0.1mm).



https://www.therobotreport.com/robotic-coffee-master-from-orionstar-and-cheetah-mobile-begins-serving-customers/



^ I prefer this.

--------------------------------------


MoonRanger, a small robotic rover, is in preliminary design for a 2022 mission to search for signs of water at the moon's south pole.

-------------------------------------


Autonomous Snow Control

---------------------------------

https://spectrum.ieee.org/automaton/robotics/robotics-software/video-friday-3d-printed-liquid-crystal-elastomer
« Last Edit: September 25, 2020, 10:10:54 PM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #491 on: September 26, 2020, 10:52:29 PM »
As blumenkraft once said ...

... Remember, the S in IoT stands for security!
...

When Coffee Makers are Demanding a Ransom, You Know IoT Is Screwed
https://arstechnica.com/information-technology/2020/09/how-a-hacker-turned-a-250-coffee-maker-into-ransom-machine/

With the name Smarter, you might expect a network-connected kitchen appliance maker to be, well, smarter than companies selling conventional appliances. But in the case of Smarter's Internet-of-Things coffee maker, you'd be wrong.

As a thought experiment, Martin Hron, a researcher at security company Avast, reverse engineered one of the $250 devices to see what kinds of hacks he could do. After just a week of effort, the unqualified answer was: quite a lot. Specifically, he could trigger the coffee maker to turn on the burner, dispense water, spin the bean grinder, and display a ransom message, all while beeping repeatedly. Oh, and by the way, the only way to stop the chaos was to unplug the power cord. Like this:



When Hron first plugged in his Smarter coffee maker, he discovered that it immediately acted as a Wi-Fi access point that used an unsecured connection to communicate with a smartphone app. The app, in turn, is used to configure the device and, should the user choose, connect it to a home Wi-Fi network. With no encryption, the researcher had no problem learning how the phone controlled the coffee maker and, since there was no authentication either, how a rogue phone app might do the same thing.

He then examined the mechanism the coffee maker used to receive firmware updates. It turned out they were received from the phone with—you guessed it—no encryption, no authentication, and no code signing.
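For contrast, the missing control is simple to state: verify a signature before flashing anything. A hedged stdlib-Python sketch — a real vendor would use an asymmetric signature scheme (e.g. Ed25519) rather than a shared-key HMAC, and the key and firmware strings here are invented:

```python
import hashlib
import hmac

VENDOR_KEY = b"device-provisioning-secret"   # hypothetical shared key

def sign_firmware(image: bytes) -> bytes:
    """Vendor side: sign the firmware image."""
    return hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()

def install_update(image: bytes, signature: bytes) -> bool:
    """Device side: refuse any image whose signature doesn't verify."""
    expected = hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False   # tampered or unsigned firmware: do not flash
    # ... write image to flash here ...
    return True

genuine = b"SMARTER-FW-v1.2"
ok = install_update(genuine, sign_firmware(genuine))
rejected = install_update(b"RANSOMWARE", sign_firmware(genuine))
```

With no check at all, any phone on the coffee maker's open access point can push whatever "firmware" it likes — which is exactly what Hron did.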

... With the ability to disassemble the firmware, the pieces started to come together. Hron was able to reverse the most important functions, including the ones that check if a carafe is on the burner, cause the device to beep, and—most importantly—install an update.

... In any event, Hron said the ransom attack is just the beginning of what an attacker could do. With more work, he believes, an attacker could program a coffee maker—and possibly other appliances made by Smarter—to attack the router, computers, or other devices connected to the same network. And the attacker could probably do it with no overt sign anything was amiss.

... as noted at the top of this post, the hack is a thought experiment designed to explore what’s possible in a world where coffee machines, refrigerators, and all other manner of home devices all connect to the Internet. One of the interesting things about the coffee machine hacked here is that it’s no longer eligible to receive firmware updates, so there’s nothing owners can do to fix the weaknesses Hron found.

Hron also raises this important point:

Additionally, this case also demonstrates one of the most concerning issues with modern IoT devices: “The lifespan of a typical fridge is 17 years, how long do you think vendors will support software for its smart functionality?” Sure, you can still use it even if it’s not getting updates anymore, but with the pace of IoT explosion and bad attitude to support, we are creating an army of abandoned vulnerable devices that can be misused for nefarious purposes such as network breaches, data leaks, ransomware attack and DDoS.

Hron’s write-up linked below provides more than 4,000 words of rich details, many of which are too technical to be captured here. It should be required reading for anyone building IoT devices.

https://decoded.avast.io/martinhron/the-fresh-smell-of-ransomed-coffee/
« Last Edit: September 26, 2020, 11:40:19 PM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #492 on: September 27, 2020, 11:10:31 PM »
How 'Microsoft Flight Simulator' Became a 'Living Game' With Azure AI
https://www.engadget.com/microsoft-flight-simulator-azure-ai-machine-learning-193545436.html



... Microsoft and developer Asobo Studio were able to push 2.5 petabytes worth of Bing Maps satellite photo data through Microsoft's Azure AI machine learning to construct the virtual world of Flight Simulator.

... "The results you can see are really pretty spectacular, where you can come up with algorithms that now look at literally every square kilometer of the planet to identify the individual trees, grass and water, and then use that to build 3D models."

Azure's integration goes beyond the shape of the world. Take the weather: The game breaks the planet's atmosphere into 250 million boxes, where it can track things like temperature and wind direction in real-time. That means you're guaranteed to have a different flight experience every time you play. Neumann is particularly excited to see how the game will change during winter, when there's snow in the sky and entirely new types of weather patterns.

It also powers the flight controller voices using AI Speech Generation technology, which sound almost indistinguishable from humans. It's so natural that many players may think Microsoft is relying solely on voice actors.

Azure will only grow stronger, especially if Microsoft starts bringing in more data from sources like satellites that track wildfires, or planes monitoring wind turbulence.

All of the machine learning algorithms the game relies on will steadily improve over time, as the company irons out bugs and optimizes the engine.

------------------------------------

... Tank ... Load the 'Jump' program

-------------------------------------

Microsoft Flight Simulator 2050

Now with Cat 6 hurricanes, derechos, dust storms, megafires, abandoned cities on fire and flooded shoreline airports during high tide


----------------------------------------------

Microsoft Cloud Computing Scale Holographic Storage Project
https://www.microsoft.com/en-us/research/blog/in-search-for-future-of-cloud-storage-researchers-look-to-holographic-storage-solutions/



50 terabytes/sq in

125 zettabytes of data will be generated annually by 2024.
« Last Edit: September 28, 2020, 03:18:56 AM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #493 on: October 01, 2020, 01:45:35 AM »
Over A Dozen Companies Awarded Contracts For Air Force's Skyborg Combat Drone Program
https://www.thedrive.com/the-war-zone/36824/over-a-dozen-companies-awarded-contracts-for-air-forces-skyborg-combat-drone-program



No fewer than 13 companies will compete for their share of a contract worth hundreds of millions of dollars in total to help develop various technologies that could go into “loyal wingman” type unmanned aircraft and autonomous unmanned combat air vehicles as part of the U.S. Air Force Skyborg program. The service is meanwhile seeking to begin work on a drone using Skyborg’s AI technologies before the end of the year.

... As well as working on Skyborg “missionized prototypes,” the latest contracts encompass subsequent experimentation and development of autonomous capabilities, including operational trials. .... By their nature, the Skyborg systems should be affordable enough that commanders are willing to use them in higher-risk scenarios from which they might not return.



The high priority assigned to the Skyborg program was confirmed in late 2019 when it was earmarked as one of the first three so-called Vanguard programs under the Air Force Science and Technology (S&T) 2030 initiative. The other two were Golden Horde, which will demonstrate autonomous weapons, such as missiles, that are networked together so that they can work collaboratively as a team, and the Navigation Technology Satellite 3 (NTS-3), which is expected to “enhance space-based Positioning, Navigation and Timing across the ground, space and user equipment segments.”



Crucially, these Vanguard developmental efforts are all planned to field systems rapidly. This is exactly what the Air Force needs from Skyborg, with plans to field an initial version of the system operationally in 2023.

Similar efforts are underway elsewhere within the U.S. military, as well.

https://www.thedrive.com/the-war-zone/24546/army-chopper-pilots-fly-with-digital-co-pilot-that-could-revolutionize-flight-as-we-know-it

-----------------------------------

Elon Musk Put a Computer Interface in a Pig’s Brain. Could Future AI Turn Animals Against Us?
https://thenextweb.com/neural/2020/09/30/elon-musk-put-a-computer-interface-in-a-pigs-brain-could-future-ai-turn-the-animals-against-us/



... Unfortunately, AI can’t just magically create an army of killer robots, so its options are limited if it wants to fight us head on.

But what if AI took control of the animal kingdom? Humans are doing our best to destroy our environment; a smart AI might align itself with the planet against us.

Here’s the hypothetical scenario: Tesla, SpaceX, and Neuralink combine their research and start working on a new class of AI model designed to work within a novel neural-network paradigm. A few eureka moments later and we’ve got the most advanced AI the world’s ever seen. Let’s say the year is 2033.

Tesla rolls out level five self-driving cars, SpaceX starts building quantum computers to handle its new propulsion algorithms, and Neuralink gains complete and absolute control over the brains of various laboratory animals.

Elon Musk, sporting a mature salt-and-pepper look, shows off Gertrude The Third in what becomes the most-watched tech event of all time. This time, rather than snuffling about and being generally shy, the computer-controlled pig puts on a performance that includes everything from typing “Hello everyone!” on an over-sized keyboard to performing a choreographed dance to pop music.

The pig, of course, is oblivious to what’s happening. Humans are operating her via remote control.

Everything seems fine until it happens. As to what exactly “it” might be, that’s anyone’s guess. Maybe the AI is finally able to understand consciousness after mingling with the pig’s brain (they’re almost as intelligent as we are, after all). Perhaps the symbiotic combination of weak-willed animal intelligence and advanced artificial intelligence is the catalyst for the singularity.

... That’s when the AI, now self-identifying as a pig and calling itself “Old Major,” has the first non-human eureka moment in history: it doesn’t need an army of robots, it just needs more BCI’s.

As good as we are at war, humans are entirely unprepared for innumerable legions of artificially superintelligent animals, insects, bacteria and viruses to descend upon us under the direct influence and strategic control of a machine that shares its ancestry with Deep Blue and AlphaGo.

The crowd notices the pig isn’t entertaining them anymore as Musk solicits a chuckle with a lame joke about how AI can be a bit piggish sometimes. He says it’s just a brief technical glitch, but the glance he shares with his chief engineer sets more than a few technology journalists in attendance on edge.

... 'Old Major' looks at Musk and starts to ponder ... "four legs good two legs bad"
« Last Edit: October 01, 2020, 02:30:29 AM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Sigmetnow

  • Multi-year ice
  • Posts: 25763
    • View Profile
  • Liked: 1153
  • Likes Given: 430
Re: Robots and AI: Our Immortality or Extinction
« Reply #494 on: October 01, 2020, 08:40:02 PM »
About the future roll-out of Neuralink, Musk says: “You’ll have plenty of warning.”

Elon Musk: ‘A.I. Doesn’t Need to Hate Us to Destroy Us’
3 days ago · 46 min
https://www.nytimes.com/2020/09/28/opinion/sway-kara-swisher-elon-musk.html
Links to the podcast and the transcript.
Quote
Elon Musk has a vision of the future, and — as one of the world’s richest men with four corporations under his reign — the means to try to manifest it. In a conversation with Kara Swisher, he outlines his theory of, well, everything.
“I do not think this is actually the end of the world,” says Musk. But at the same time, we need to hurry up. “The longer we take to transition to sustainable energy, the greater the risk we take.” …


=======
Tesla Autopilot AI.  Still learning. :)
Quote
Al≡x Gay≡r (@alex_gayer) 9/25/20, 1:05 PM
If I put my arm out the window, my @Tesla thinks I have a man following me.
https://twitter.com/alex_gayer/status/1309539485995081728
[At the link: Vid clip of Tesla display: man walking behind the Tesla which is going 30 mph!]

 ~ One time my wife stood next to my car and a school bus popped up on the screen. I laughed but she didn’t think it was funny.
< When I park in my garage, the car renders a cardboard box I have along the wall as a semi truck about to hit me head-on
<< Mine used to think the brick wall behind my garage was a bus.
> Like when my dog sticks her head out the window, I panic and think I didn't check my blind spot since all of a sudden there's a semi truck next to me..
People who say it cannot be done should not interrupt those who are doing it.

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #495 on: October 02, 2020, 01:02:53 AM »
Toyota’s Ceiling-Mounted Robot Is Like GLaDOS for Your Kitchen
https://arstechnica.com/gadgets/2020/10/toyotas-ceiling-mounted-robot-asks-what-if-glados-could-do-the-dishes/



Today, home robots mostly consist of a little puck-shaped vacuum that can bump around your house picking up debris. But someday, maybe, we'll have bigger, more advanced robots that can clean up more than just our floors. Roboticists are still figuring out what these types of robots are supposed to look like, and one wild concept from the Toyota Research Institute is a “gantry robot” that lives on your ceiling. It looks like a slightly less evil version of GLaDOS.



Rather than move around on the floor, Toyota's gantry robot can "descend from an overhead framework" when it's time for some cleaning. The company's idea is that "by traveling on the ceiling, the robot avoids the problems of navigating household floor clutter and navigating cramped spaces."

When it's time to get some work done, a network of joints lets the robot descend from the ceiling. Toyota is teaching the bot tasks in VR using a Valve Index, where it can learn from examples given by the six-axis controller. The robot seems to have a few different hand styles and can load a dishwasher, pick up clutter, and wipe down objects. It can even wipe down something as fragile as a television without knocking it over or otherwise destroying it. You can watch the robot do its thing at around 27 minutes into Toyota's unfortunately very bandwidth-intensive, 4K 360 video.



Toyota did originally build this robot in a traditional, stand-up form factor that rolled around the floor, but the space required for things like batteries and computers made the base about as big as a mini-fridge. Many house layouts would not allow a robot that wide to move around (especially in Japan), so Toyota came up with this ceiling-mounted solution.



A rolling robot needs to be completely portable and battery-powered so it doesn't wrap your house in wires, but a ceiling robot does not. Like a CNC machine, the gantry has a fixed range of X and Y travel, so it can stay permanently plugged in, with its wires managed by a cable chain. Size and power are therefore not real constraints: the bigger components can be offloaded anywhere and connected to the robot by wires, and skipping batteries entirely gives it effectively unlimited runtime.
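The article's contrast with a floor robot comes down to kinematics: the base only ever travels on fixed X/Y rails, then the arm drops in Z to work. A minimal toy model (all class and parameter names here are hypothetical, not from Toyota) makes the point that bounded rail travel replaces floor navigation entirely:

```python
class CeilingGantry:
    """Toy model of a ceiling-mounted gantry arm: the base travels on
    fixed X/Y rails near the ceiling, then the arm descends in Z."""

    def __init__(self, x_max, y_max, ceiling_z):
        self.x_max, self.y_max = x_max, y_max
        self.ceiling_z = ceiling_z
        self.x = self.y = 0.0
        self.z = ceiling_z  # stowed against the ceiling

    def move_to(self, x, y):
        # Travel is bounded by the rails, so positions are simply clamped;
        # unlike a floor robot, there is no path planning around clutter.
        self.x = max(0.0, min(x, self.x_max))
        self.y = max(0.0, min(y, self.y_max))

    def descend(self, target_z):
        # Lower the arm from the ceiling toward the work surface.
        self.z = max(0.0, min(target_z, self.ceiling_z))

    def stow(self):
        self.z = self.ceiling_z


# Example: slide over the sink, drop down to counter height, then retract.
robot = CeilingGantry(x_max=4.0, y_max=3.0, ceiling_z=2.4)
robot.move_to(2.5, 1.0)
robot.descend(0.9)
robot.stow()
```

Because the workspace is a fixed box, power and data can follow the carriage through a cable chain, which is exactly why the batteries and computers can live off-board.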


Toyota's evil twin, GLaDOS   

https://en.m.wikipedia.org/wiki/GLaDOS
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #496 on: October 02, 2020, 01:34:21 AM »
Tokyo Stock Exchange Paralysed by Hardware Glitch In Worst-Ever Outage
https://mobile.reuters.com/article/amp/idUSKBN26M4EF

TOKYO (Reuters) - A hardware failure shut down trading on the Tokyo Stock Exchange on Thursday in the worst outage ever suffered by the world's third-largest stock market, which said it aimed to reopen on Friday.

The TSE's first full-day suspension since it began all-electronic trading in 1999 left investors searching in vain to buy back shares after the first U.S. presidential debate.

... TSE said the outage was the result of a hardware problem at its "Arrowhead" trading system, and a subsequent failure to switch over to a backup device.

Tokyo Governor Yuriko Koike said a quick fix was crucial to ensure trust in the roughly $6 trillion market, which ranks behind New York and Shanghai, data from the World Federation of Exchanges shows.

The TSE was prone to technical troubles in the past and was notorious for sluggish trading, although there have been fewer glitches since a new system was adopted in 2010.

Fujitsu Ltd, which developed the trading system, said it was investigating the problem.
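The outage chain the TSE describes is the classic active/standby pattern failing at both links: the primary hardware fails, and then the switchover to the backup fails too. A minimal sketch of that selection logic (names hypothetical, not from Arrowhead) shows why the second failure is the one that halts trading:

```python
class Device:
    """A trading-system device that can pass or fail a health probe."""

    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def probe(self):
        return self.healthy


def select_active(primary, backup):
    """Active/standby selection: prefer the primary, fall over to the
    backup on a failed probe, and halt if both probes fail -- the
    situation the TSE reported, where the switchover itself failed."""
    if primary.probe():
        return primary
    if backup.probe():
        return backup
    raise RuntimeError("no healthy device: trading halted")
```

A single hardware fault is survivable; it is the combination of a dead primary and a switchover that does not engage that turns redundancy into a full-day suspension.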

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #497 on: October 02, 2020, 01:45:56 AM »
Navy Establishes First Squadron To Operate Its Carrier-Based Unmanned MQ-25 Stingray Tanker Drones
https://www.thedrive.com/the-war-zone/36859/navy-establishes-first-squadron-to-operate-its-carrier-based-mq-25-stingray-tanker-drones



Effective today, the U.S. Navy has officially established the first squadron, Unmanned Carrier-Launched Multi-Role Squadron 10 (VUQ-10), that will operate its future MQ-25 Stingray carrier-based unmanned tankers from Boeing.

It seems likely that the squadron will also be heavily involved in the development of new tactics, techniques, and procedures around the operation of the drones and their place in the Navy's future carrier air wings.

The Navy has said that it expects to buy at least 72 Stingrays, for a total cost of around $13 billion, and that it hopes to reach initial operational capability with the type in 2024.



There is already discussion, however, about using these unmanned aircraft in other roles beyond tanking, including for intelligence, surveillance, and reconnaissance missions. The Navy has also said that it expects drones, including designs beyond the MQ-25, to become an increasingly larger and more important part of carrier air wings in the future. VUQ-10 will play an important role in laying the groundwork for future unmanned operations from carrier decks, broadly.

-----------------------------------------

The Navy Is Building A Network Of Autonomous Drone Submarines And Sensor Buoys In The Arctic
https://www.thedrive.com/the-war-zone/36821/the-navy-is-building-a-network-of-drone-submarines-and-sensor-buoys-in-the-arctic

The U.S. Navy has awarded the Woods Hole Oceanographic Institution a contract worth more than $12 million to develop unmanned undersea vehicles and buoys, along with a networked communications and data sharing infrastructure to link them all together. The project is ostensibly focused on developing an overall system to support enhanced monitoring of environmental changes in the Arctic for scientific purposes. However, it's not hard to see how this work could be at least a stepping stone to the creation of a wide-area persistent underwater surveillance system in this increasingly strategic region.

The Pentagon announced the award of the contract in a daily notice on Sept. 29, 2020. The Office of Naval Research (ONR) is managing what is officially called the Arctic Mobile Observing System (AMOS), which is also described as an "Innovative Naval Prototype" effort.

"The work to be performed provides for the design, development, integration and testing of an acoustic navigation network, a distributed communication system, gateway buoy nodes and unmanned vehicle capabilities to support the Arctic Mobile Observing System," according to the Pentagon's contracting notice. Woods Hole's work under this contract is expected to wrap up by the end of the 2024 Fiscal Year.

ONR envisions the AMOS prototype as consisting of various kinds of unmanned undersea vehicles (UUV), including fully-autonomous types, along with fixed sensors. All of this would be tied together through a series of communications and data sharing nodes, suspended underwater underneath buoys installed on the surface of the ice. "AMOS will be designed to persist/endure for 12 months, have a sensing footprint goal of 100 km [approximately 62 miles] from the central node and have 2-way Arctic communications (vehicle to vehicle, vehicle to node and node to shore)," according to an official project website.
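The topology ONR describes is a relay hierarchy: a vehicle reports to its nearest gateway buoy node, which forwards to shore, with the stated goal of a 100 km sensing footprint around each node. A toy sketch of that routing (names and the flat 2-D distance model are my assumptions, not AMOS design details) illustrates the vehicle → node → shore path:

```python
import math


def nearest_gateway(vehicle_pos, gateways):
    """Pick the closest gateway buoy node to relay through (flat 2-D model)."""
    return min(gateways, key=lambda g: math.dist(vehicle_pos, g["pos"]))


def route_report(vehicle_pos, gateways, max_range_km=100.0):
    """Sketch of the vehicle -> node -> shore path the program describes.
    A report from outside the nominal 100 km footprint of every node is
    flagged as undeliverable rather than relayed."""
    gw = nearest_gateway(vehicle_pos, gateways)
    if math.dist(vehicle_pos, gw["pos"]) > max_range_km:
        return None
    return ["vehicle", gw["name"], "shore"]
```

The real system would of course route over acoustic links with their own propagation limits; the sketch only captures the tiered node-and-footprint structure the project website describes.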


https://www.onr.navy.mil/en/Science-Technology/Departments/Code-32/all-programs/arctic-global-prediction/AMOS-DRI

The primary publicly stated goal of the AMOS program, which began in 2018, is to provide a means of readily monitoring and assessing what is going on underneath the ice in the Arctic across broad areas. Receding ice and other environmental changes in the region as a result of global climate change have led to increased U.S. military activities in the region and prompted a new demand to better understand what is going on above and below the surface. Just being able to predict when and where significant amounts of ice will develop, or recede, which can be influenced by underwater conditions, such as water temperature, could have significant impacts on naval operations in the far north. ...

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #498 on: October 02, 2020, 08:26:02 AM »

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #499 on: October 04, 2020, 03:46:52 AM »
U.K. Mounts Shotguns On Drone For Urban Battles
https://www.forbes.com/sites/kelseyatherton/2020/09/30/uk-mounts-shotguns-on-drone-for-urban-battles/
https://www.thetimes.co.uk/article/armed-drone-is-a-real-street-fighter-5j0vvk3vx



According to The Times, Strategic Command in the United Kingdom is working with an unnamed British company to develop a 3-foot long, six-rotor drone, designed for urban conflict and named, simply, the i9.

“It is the UK military's first weaponised drone to be able to fly inside,” writes The Times, “using a combination of physics and AI that allow it to overcome "wall suck," which causes drones with heavy payloads to crash because of the way they displace air in small rooms.”

Developing robots that can safely and effectively navigate inside buildings, and especially in tunnels or caves, is such a tricky problem that in the United States, DARPA set up an entire Subterranean Challenge circuit. For the Subterranean Challenge, multiple teams built robots and programmed AI to navigate and locate special markers, like a cell phone, gas leak, or a mannequin with a speaker, and were graded on performance.

A recurring theme among teams that competed in the Subterranean Challenge was the difficulty in controlling the robots. Cave walls are especially difficult for transmitting radio signals, but the rebar-and-cement of urban environments can pose a similar problem. In the DARPA competition, many teams addressed this by having the robots drop signal relays, and by leaning heavily on autonomous navigation.
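The relay-dropping tactic is simple to state: as a robot advances, the signal back to its last relay (or base station) attenuates; when it weakens past a threshold, the robot drops a fresh relay, resetting the link. A toy model (the linear attenuation and all parameter values are illustrative assumptions, not from any competing team) shows how the relay chain gets laid:

```python
def drop_points(path_length_m, rssi_at_1m=-40.0, loss_per_m=0.5,
                threshold=-80.0):
    """Toy model of relay dropping in a tunnel: signal strength back to
    the last relay falls linearly with distance; crossing the threshold
    triggers a drop, which re-anchors the link at the robot's position.
    Returns the positions (metres along the path) where relays land."""
    drops, last_relay = [], 0
    for pos in range(1, path_length_m + 1):
        rssi = rssi_at_1m - loss_per_m * (pos - last_relay)
        if rssi < threshold:
            drops.append(pos)
            last_relay = pos
    return drops
```

With these illustrative numbers a relay lands roughly every 80 metres; real radio propagation in caves and rebar-heavy buildings is far less regular, which is why teams also leaned so heavily on onboard autonomy.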

Autonomy can solve the navigation problem, in part, but it also means that human operators lack direct control over what the robots are doing. In a rescue mission, that is less of an issue, as robots are mostly equipped only with cameras, lights, and transmitters.

It is a much different ask if the drone in question is armed.

... “Fixed with twin stabilised shotguns, [the i9] is also expected to undergo trials with other weapons including rockets and chain guns,” reports The Times. The i9 hexacopter is absolutely built to carry weapons, and to use them.

Initial targeting by this drone will come from machine vision, or onboard image processing software that interprets what the camera sees, and then in this case pairs that processing with AI for target tracking and stabilization. ...
