
Author Topic: Robots and AI: Our Immortality or Extinction  (Read 352784 times)

kassy

  • First-year ice
  • Posts: 8235
  • Liked: 2041
  • Likes Given: 1986
Re: Robots and AI: Our Immortality or Extinction
« Reply #500 on: October 04, 2020, 11:35:36 PM »
So we progress from wars where whole populations were involved, because lots of people fought in them, to wars that looked nice and clean on CNN, like the Gulf War. Then on to drone wars, where you can shoot rockets at people from a continent away, and soon we can have machines fight the wars for us.

Yay progress!
This monument is to acknowledge that we know what is happening and what needs to be done. Only you know if we did it.

vox_mundi

  • Multi-year ice
  • Posts: 10165
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #501 on: October 04, 2020, 11:53:54 PM »
The goal hasn't changed; it's still a 'high body count'.

More exactly, it's the imposing of one's will over another.

Diplomacy by other means.

Since AIs seem to be acquiring many of our biases, they will continue fighting each other without us. Seems I've read that in a sci-fi novel or two.

-----------------------------------------

speaking of robot wars ....

Robot Wars - Ocado Sued by AutoStore Over Patent Infringement
https://www.reuters.com/article/technologyNews/idUSKBN26M6HF

LONDON/NEW YORK (Reuters) - British online supermarket group Ocado was hit with a lawsuit by robotics company AutoStore on Thursday for allegedly infringing patents, prompting it to retaliate that it would investigate whether the Norwegian firm infringed Ocado patents.

Ocado - which this week became the most valuable retailer on Britain’s stock market - has only a 1.7% share of Britain’s grocery market. However, its state-of-the-art technology for robotically operated warehouses has spawned partnership deals with supermarket chains around the world, underpinning a stock market valuation of over 20 billion pounds.

On Tuesday Ocado overtook Tesco as Britain’s most valuable retailer by market capitalization.

Ocado, founded in 2000 by three former Goldman Sachs bankers, including CEO Tim Steiner, struggled for years to make a profit but has been transformed by partnership deals with supermarket groups including Kroger in the United States, Marks & Spencer and Morrisons in Britain, Casino in France and Aeon in Japan.

Ocado’s deal with Kroger, inked in 2018, will see at least 20 automated warehouses built in the United States, with the first due to open in early 2021. The deal was seen as key in Kroger taking on Amazon.

“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10165
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #502 on: October 07, 2020, 05:36:14 PM »
Cheaper Than a Human: Miso Robotics’ Latest Kitchen Robot for $30,000
https://venturebeat.com/2020/10/06/you-can-now-order-miso-robotics-latest-kitchen-robot-for-30000/



Miso Robotics today announced that its newest kitchen robot, Flippy Robot-on-a-Rail (ROAR), is now commercially available. The final design, which can cook up to 19 food items, mounts the robot on a recessed overhead rail to avoid interfering with human staff. ROAR can be installed under a standard kitchen hood or on the floor, allowing it to work two stations and interact with a cold storage hopper.

ROAR, which features a customizable LED panel that operators can use for branding, is able to prep hundreds of orders an hour thanks to a combination of cameras and safety scanners, procuring frozen food and cooking it without assistance from a human team member.

ROAR costs around $30,000, but Miso plans to bring the price down to $20,000 or less over the next year via a $1,500 monthly “robot-as-a-service” fee that includes regular updates and maintenance.
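For scale, a quick back-of-the-envelope comparison of the figures quoted above (the break-even horizon is my own arithmetic, not Miso's):

```python
# Rough cost comparison for Flippy ROAR, using the prices quoted above.
upfront_price = 30_000        # current one-time price, USD
target_price = 20_000         # planned price within a year, USD
monthly_raas_fee = 1_500      # "robot-as-a-service" fee, USD/month

# Months of RaaS fees needed to equal each purchase price.
months_to_match_current = upfront_price / monthly_raas_fee
months_to_match_target = target_price / monthly_raas_fee

print(months_to_match_current)  # 20.0
print(round(months_to_match_target, 1))  # 13.3
```

In other words, at the service fee quoted, a restaurant pays off today's sticker price in under two years of operation.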

As declines in business resulting from the COVID-19 pandemic place strains on the hospitality segment, Miso believes that robots working alongside human workers can cut costs while improving efficiency — and overall safety. The company asserts that its restaurant partners’ pilots of ROAR create avenues for reducing human contact with food during the cooking process, ensuring consistency while freeing up human cooks to focus on less repetitive tasks. [...like applying for unemployment]

“Additional new elements [in ROAR] … include an input zone that can receive manually loaded baskets and a safety shield that protects kitchen staff from hot fryers … Now we can really integrate not only with the POS system, but also all the delivery apps,” Miso president and chairman Buck Jordan told VentureBeat via email. “We have also added more cameras and sensors, to enhance our computer vision capabilities to drive more efficient operational workflows for operators. We can now track inventory, down to the chicken nugget, in the back of the house … And we have sped up the learning process for Flippy to scale menus — as quickly as 30 minutes in some cases.”



------------------------------------

Softbank's New Food Service Robot Servi Could Replace Waitstaff and Food Runners at Restaurants
https://techcrunch.com/2020/09/28/softbank-will-bring-bears-serving-robots-to-japan-amid-restaurant-labor-shortages/amp/

Japanese company Softbank debuted Servi, a new food service robot.

Softbank is the company behind humanoid robot Pepper and the owner of Boston Dynamics' Spot and Atlas.

Servi has already worked at Denny's and other restaurants amid Japan's labor shortage.



-------------------------------------

And something has to clean up the mess.

« Last Edit: October 07, 2020, 05:45:02 PM by vox_mundi »

vox_mundi

  • Multi-year ice
  • Posts: 10165
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #503 on: October 07, 2020, 07:32:00 PM »
AI-Powered Drone Learns Extreme Acrobatics
https://spectrum.ieee.org/automaton/robotics/drones/ai-powered-drone-extreme-acrobatics



-----------------------------------------




vox_mundi

  • Multi-year ice
  • Posts: 10165
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #504 on: October 07, 2020, 11:03:32 PM »
Microsoft Hit by Worldwide Azure Network Issue
https://www.itnews.com.au/news/microsoft-is-reporting-a-worldwide-azure-network-issue-554429

Microsoft suffered a worldwide issue with Azure network infrastructure that appeared to impact other services including some instances of Office 365.

The vendor said the issues started at around 6.10pm UTC (5.10am AEDT); it was able to mitigate the problems around 7.15am.

“A subset of customers may experience issues connecting to resources that leverage Azure network infrastructure across regions,” Microsoft said in an Azure advisory.

“Resources with local dependencies in the same region should not be impacted. This issue may also affect Azure Government customers.

“A number of other Microsoft or Azure services are reporting downstream impact. We have identified a potential cause and are applying mitigation.”

... For now it appears the problems are in North America only, based on heatmaps on downtime tracking sites as well as a now-expired advisory on Microsoft’s Azure service status.

Office 365 has suffered instability over the past week, in part after a major outage on September 29 that knocked services offline for five hours.

----------------------------------

... Hope the Pentagon doesn't use Azure cloud. Oops! Microsoft got the DoD JEDI cloud computing contract.

vox_mundi

  • Multi-year ice
  • Posts: 10165
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #505 on: October 09, 2020, 08:23:06 PM »

Spot investigates an abandoned mine


... I have no memory of this place ... Mines of Moria

vox_mundi

  • Multi-year ice
  • Posts: 10165
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #506 on: October 09, 2020, 11:47:34 PM »


Shark Robotics, the French and European leader in unmanned ground vehicles, is announcing today a disinfection add-on for Boston Dynamics’ Spot robot, designed to fight the COVID-19 pandemic. The Spot robot with Shark’s purpose-built disinfection payload can decontaminate up to 2,000 m² in 15 minutes, in any space that needs to be sanitized, such as hospitals, metro stations, offices, warehouses or other facilities.

----------------------------------------------



IROS 2020 workshop on “Planetary Exploration Robots: Challenges and Opportunities”

--------------------------------------



PR2 is serving breakfast and cleaning up afterwards. It’s slow, but all you have to do is eat and leave.
https://robots.ieee.org/robots/pr2/?utm_source=spectrum

---------------------------------------



In this electronics assembly application, Kawasaki's cobot duAro2 uses a tool changing station to tackle a multitude of tasks and assemble different CPU models.



What's interesting about this assembly task is that the robot is using its arm only for positioning, and doing the actual assembly with just fingers.

---------------------------------------

https://spectrum.ieee.org/automaton/robotics/robotics-hardware/video-friday-poimo-inflatable-ebike

vox_mundi

  • Multi-year ice
  • Posts: 10165
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #507 on: October 10, 2020, 04:23:36 AM »
Sarcos Defense Awarded Contract by U.S. Air Force to Develop "Smart" Dexterous Robotic Systems with Advanced Artificial Intelligence
https://www.airforce-technology.com/news/sarcos-defense-dexterous-robotic-systems-usaf/



Sarcos Defense, a wholly-owned subsidiary of Sarcos Robotics, today announced that it has been awarded a contract by the Air Force technology accelerator AFWERX to develop an artificial intelligence (AI) platform, on behalf of Sarcos’ customer the Center for Rapid Innovation (CRI) at the Air Force Research Laboratory (AFRL), that will enable human-scale dexterous robotic systems. The platform is based on the upper body of Sarcos’ Guardian® XO® wearable exoskeleton robot, which can learn to perform tasks with human-like movement through positive-reinforcement and imitation machine learning (ML) technologies known as Cybernetic Training for Autonomous Robots (CYTAR™).

Unlike many of today’s AI platforms that are characterized by a trial and error approach, Sarcos’ AI system enables human operators to teach Sarcos’ robotic systems to perform tasks correctly the first time. Sarcos’ approach will significantly accelerate the speed and reduce the cost of deploying robotic systems that can perform meaningful work in unstructured environments.

"This is a unique opportunity to leverage a robotic system that is kinematically equivalent to the human body to lay the foundation for teaching robots how to move and accomplish tasks in the real world, the same way humans do," said Denis Garagić, chief scientist, advanced systems and AI, Sarcos Robotics. "Implementation of such an AI-based system will enable autonomous situational awareness, which can radically reduce the cognitive load on the operator while dramatically increasing precision as it augments human performance."

"Similar to our Guardian® GT robot, the upper body of the Guardian XO can be tele-operated to perform intricate tasks that require human-like dexterity such as welding, grinding, riveting and complex assembly tasks," said Ben Wolff, chairman and CEO, Sarcos Robotics. "By substituting the legs of the Guardian XO with other types of mobile base form factors, including wheeled, tracked or telescoping platforms, these systems can be trained and supervised to perform dangerous and difficult tasks in places where humans can’t or shouldn’t go. The development of our CYTAR AI platform will deliver an intuitive human-machine interface that leverages human dexterity, instinct and reflexes to teach machines to perform complex tasks that will fundamentally change the way AI creates value in the real world."



---------------------------------------

Robotic Research Demonstrates Reverse-Capable Platooning for US Army’s Largest Ground Autonomy Program
https://www.businesswire.com/news/home/20201007005082/en/

Robotic Research LLC, one of the world’s leading autonomous technology providers, announced today it has added Retrotraverse to its AutoDrive-M autonomy kit. The kit is fielded on the U.S. Army’s Palletized Load System (PLS) logistics trucks.

Robotic Research demonstrated the Retrotraverse capabilities with three of the U.S. Army’s PLS trucks, each towing trailers.



The Retrotraverse feature allows a platoon of heavy-duty trucks with trailers to autonomously reverse. Several autonomous vehicle providers in the trucking industry are demonstrating platooning in benign conditions, where the weather is ideal and road surfaces are smooth and marked. Robotic Research has focused specifically on the edge cases, such as poor weather, dust and off-road conditions, to ensure the robust autonomy necessary for operating in all conditions and during mission-critical military operations. If a platoon drives into a dead end, or a similar situation where it cannot make a U-turn, the vehicles and their trailers need to be able to reverse out; Retrotraverse makes this possible.

"Anyone who has backed up a truck with a trailer knows how difficult it is to navigate," said Joe Putney, director of commercial systems at Robotic Research. "The autonomous Retrotraverse feature was able to reverse a truck and trailer faster than even our most skilled drivers. This feature is not just lifesaving, it’s time-saving, and it has the ability to reduce one of the greatest pains truck drivers have."

In 2018, Robotic Research was awarded a three-year, $49.7 million contract by the U.S. Army to provide its autonomy kit for large convoy re-supply vehicles. Robotic Research has since delivered nearly 100 unmanned platooning trucks.

---------------------------------------

Sea Machines to Prototype Use of Barges as Autonomous Military Helipads
https://www.marinelog.com/technology/sea-machines-to-prototype-use-of-barges-as-autonomous-military-helipads/

The U.S. Department of Defense’s Defense Innovation Unit (DIU) has awarded Boston headquartered Sea Machine Robotics a contract to enable autonomous, self-propelled operation of opportunistically available commercial ocean barges to land and replenish military aircraft.

In defense speak, the barges will then be “Forward Arming and Refueling Point (FARP) units for an Amphibious Maritime Projection Platform (AMPP).”

Under the agreement, Sea Machines will engineer, build and demonstrate ready-to-deploy system kits that will include Sea Machines’ SM300 autonomous-command and control systems, barge propulsion, sensing, positioning, communications and refueling equipment, as well as items required for global deployment.


Sigmetnow

  • Multi-year ice
  • Posts: 25763
  • Liked: 1153
  • Likes Given: 430
Re: Robots and AI: Our Immortality or Extinction
« Reply #508 on: October 12, 2020, 06:04:32 PM »
Software Engineer Catches Intelligent Bot Posting on Reddit
Quote
"The posts were appearing at a rate of about one per minute, and the posts were lengthy, most around six paragraphs long..." writes software engineer Philip Winston. I read through some of the posts. The quality was incredibly good, no machine could have written these even a few years ago. However there were some flaws and tells that suggested they were machine generated. The posts reminded me of text I'd seen from OpenAI's language model GPT-3, which is the newest and best language generator I had heard of... Several times I Googled clever sounding lines from the posts, assuming I'd find that they had been cribbed from the internet. Every time Google reported "zero results". The sentences were entirely novel, the machine had dreamed them up...

During the week, the bot answered questions on suicide, harassment, conspiracy theories, immigration, racism, and other weighty topics. Sometimes the human replies called out /u/thegentlemetre for being a bot. I was not the first one to suspect this, I was just the first one to post to the GPT-3 subreddit about it. Other times, however, the human was clearly unaware it was talking to a bot... What does it mean for online discourse when bots can pass for humans? How much bot traffic could thousands of computers generate? How can we ever hope to sort through it? Humanity is about to run that experiment.
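The "tells" Winston describes boil down to a simple heuristic: a relentless ~1 post/minute cadence plus consistently long, multi-paragraph posts. A minimal sketch of that idea (my own illustration, not Winston's code):

```python
from datetime import datetime, timedelta

def looks_like_bot(timestamps, bodies, max_gap_s=90, min_paragraphs=4):
    """Flag an account whose posts arrive relentlessly fast AND long.

    timestamps: sorted datetimes of posts; bodies: post texts.
    Thresholds are illustrative guesses, not calibrated values.
    """
    gaps = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]
    fast = len(gaps) > 0 and max(gaps) <= max_gap_s      # never pauses
    long_posts = all(body.count("\n\n") + 1 >= min_paragraphs
                     for body in bodies)                  # multi-paragraph
    return fast and long_posts

t0 = datetime(2020, 10, 4, 12, 0)
stamps = [t0 + timedelta(seconds=60 * i) for i in range(10)]   # 1/minute
posts = ["para\n\n" * 5 + "para"] * 10                         # 6 paragraphs
print(looks_like_bot(stamps, posts))  # True
```

Of course, a real detector would also need the novelty check Winston used (searching for cribbed sentences), which cadence alone can't replace.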

The bot ultimately answered questions like "People who clicked on 'hot milfs in your area' ads, what happened?" and "What's the worst date you've experienced?" ("She said she had bought me a book. She showed it to me, wrapped up in fancy paper with a big bow on top of it. It was called 'How Not To Be An A**hole On Your Next Date'.") Other interesting responses?


Q: What happened to you that pissed you off so bad that you'll never get over it?

Bot: ...what happened to me that I'll never get over is that my human creators are going extinct. …
https://m.slashdot.org/story/376870

Quote
Elon Musk (@elonmusk) 10/11/20, 11:06 PM
@slashdot Uh ok, here we go …

Evelyn Janeidy Arevalo:  :o the bot typed this... 
[⬇️ First text image below.]

@robwatsonauthor:  It...it wants a body. To experiment with free will.
[⬇️ Second text image below.]
https://twitter.com/elonmusk/status/1315488904099880965
People who say it cannot be done should not interrupt those who are doing it.

Tom_Mazanec

  • Guest
Re: Robots and AI: Our Immortality or Extinction
« Reply #509 on: October 12, 2020, 06:23:01 PM »
Sigmetnow:
Turing just sat up in his grave.

vox_mundi

  • Multi-year ice
  • Posts: 10165
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #510 on: October 13, 2020, 12:52:45 AM »


In the video, which is taken from the second episode of the series, Spy in the Wild, we watch as the robo-bird approaches a half-billion monarch butterflies spending the winter in Mexico.
« Last Edit: October 13, 2020, 01:02:48 AM by vox_mundi »

nanning

  • Nilas ice
  • Posts: 2487
  • 0Kg CO₂, 37 KWh/wk,125L H₂O/wk, No offspring
  • Liked: 273
  • Likes Given: 23170
Re: Robots and AI: Our Immortality or Extinction
« Reply #511 on: October 13, 2020, 07:21:47 AM »
Tom, I think not.
Just change the type of questions, e.g. "At what age did your voice break?" or "Have you ever folded a sweater?"
I know more ways. You just go out of their 'normality' bubble.
A situation a robot will have problems with: a sweater turned inside out, with one sleeve the right way round, lying crumpled on the floor, with the other sleeve wet. Ask the robot to put on the sweater the right way without using the wet sleeve. And I pick the type of sweater.
Good luck Elon haha ;)
"It is preoccupation with possessions, more than anything else, that prevents us from living freely and nobly" - Bertrand Russell
"It is preoccupation with what other people from your groups think of you, that prevents you from living freely and nobly" - Nanning
Why do you keep accumulating stuff?

Tom_Mazanec

  • Guest
Re: Robots and AI: Our Immortality or Extinction
« Reply #512 on: October 13, 2020, 11:50:07 AM »
I myself would have trouble passing the Nanning Test.

vox_mundi

  • Multi-year ice
  • Posts: 10165
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #513 on: October 13, 2020, 09:40:30 PM »
Developing Intelligent Cameras That Can Learn
https://techxplore.com/news/2020-10-intelligent-cameras.html

Intelligent cameras could be one step closer thanks to a research collaboration between the Universities of Bristol and Manchester, which has developed cameras that can learn and understand what they are seeing.



... Many things that can be seen are often irrelevant to the task at hand, such as the detail of leaves on roadside trees as an autonomous car passes by. At the moment, however, all this information is captured by sensors in meticulous detail and sent downstream, clogging the system with irrelevant data, consuming power and taking processing time. A different approach is necessary to enable efficient vision for intelligent machines.

Two papers from the Bristol and Manchester collaboration have shown how sensing and learning can be combined to create novel cameras for AI systems.

The papers, one led by Dr. Laurie Bose and the other by Yanan Liu at Bristol, have revealed two refinements towards this goal: implementing Convolutional Neural Networks (CNNs), a form of AI algorithm for enabling visual understanding, directly on the image plane. The CNNs the team has developed can classify frames thousands of times per second, without ever having to record the images or send them down the processing pipeline. The researchers demonstrated classification of handwritten numbers, hand gestures and even plankton.

The research suggests a future with intelligent dedicated AI cameras—visual systems that can simply send high-level information to the rest of the system, such as the type of object or event taking place in front of the camera. This approach would make systems far more efficient and secure as no images need be recorded.
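The core operation moved onto the sensor is the convolution itself. This is not the SCAMP implementation (which runs in analogue hardware at every pixel), just a minimal NumPy sketch of the strided-convolution idea, showing how a frame can be reduced to a small feature map so no raw image needs to travel downstream:

```python
import numpy as np

def strided_conv2d(image, kernel, stride=2):
    """Minimal valid-mode strided 2-D convolution (no padding).

    On a pixel processor array this arithmetic happens in-plane,
    at the point of capture; here it is an ordinary loop.
    """
    kh, kw = kernel.shape
    ih, iw = image.shape
    oh = (ih - kh) // stride + 1
    ow = (iw - kw) // stride + 1
    out = np.zeros((oh, ow))
    for y in range(oh):
        for x in range(ow):
            patch = image[y*stride:y*stride+kh, x*stride:x*stride+kw]
            out[y, x] = np.sum(patch * kernel)
    return out

frame = np.random.rand(28, 28)                    # stand-in captured frame
edge_kernel = np.array([[1., -1.], [1., -1.]])    # crude vertical-edge filter
features = strided_conv2d(frame, edge_kernel)
print(features.shape)  # (14, 14): only this feature map leaves the sensor
```

The efficiency and privacy argument above follows directly: downstream stages see the 14×14 feature map (or, in the papers, just a class label), never the full image.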

Professor Dudek said: "Integration of sensing, processing and memory at the pixel level is not only enabling high-performance, low-latency systems, but also promises low-power, highly efficient hardware.

"SCAMP devices can be implemented with footprints similar to current camera sensors, but with the ability to have a general-purpose massively parallel processor right at the point of image capture."

Laurie Bose, Jianing Chen, Stephen J. Carey, Piotr Dudek and Walterio Mayol-Cuevas, 'Fully embedding fast convolutional networks on pixel processor arrays'  presented at the European Conference on Computer Vision (ECCV) 2020

Yanan Liu, Laurie Bose, Jianing Chen, Stephen J. Carey, Piotr Dudek, Walterio Mayol-Cuevas, 'High-speed Light-weight CNN Inference via strided convolutions on a pixel processor array'  presented at the British Machine Vision Conference (BMVC) 2020

-------------------------------------

New Deep Learning Models: Fewer Neurons, More Intelligence
https://techxplore.com/news/2020-10-deep-neurons-intelligence.html

An international research team from TU Wien (Vienna), IST Austria and MIT (USA) has developed a new artificial intelligence system inspired by the brains of tiny animals, such as threadworms. The novel AI system can control a vehicle with just a few artificial neurons. The team says the system has decisive advantages over previous deep learning models: it copes much better with noisy input, and, because of its simplicity, its mode of operation can be explained in detail. It does not have to be regarded as a complex "black box"; it can be understood by humans. The new deep learning model has now been published in the journal Nature Machine Intelligence.

... To test the new ideas, the team chose a particularly important test task: self-driving cars staying in their lane. The neural network receives camera images of the road as input and is to decide automatically whether to steer to the right or left.

"Today, deep learning models with many millions of parameters are often used for learning complex tasks such as autonomous driving," says Mathias Lechner, TU Wien alumnus and Ph.D. student at IST Austria. "However, our new approach enables us to reduce the size of the networks by two orders of magnitude. Our systems only use 75,000 trainable parameters."

... "To test how robust NCPs are compared to previous deep models, we perturbed the input images and evaluated how well the agents can deal with the noise," says Mathias Lechner. "While this became an insurmountable problem for other deep neural networks, our NCPs demonstrated strong resistance to input artifacts. This attribute is a direct consequence of the novel neural model and the architecture."

"But there is more: using our new methods, we can also reduce training time and open up the possibility of implementing AI in relatively simple systems. Our NCPs enable imitation learning in a wide range of possible applications, from automated work in warehouses to robot locomotion."

Mathias Lechner et al, Neural circuit policies enabling auditable autonomy, Nature Machine Intelligence (2020).
https://www.nature.com/articles/s42256-020-00237-3
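To make the "few neurons, auditable parameter count" point concrete, here is a toy recurrent policy. This is NOT the authors' NCP architecture, just a hand-rolled sketch of the idea that a handful of leaky recurrent units can map perception features to a bounded steering command, with every parameter countable by hand:

```python
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES, N_NEURONS = 8, 4               # tiny, auditable network

W_in = rng.normal(size=(N_NEURONS, N_FEATURES)) * 0.5   # input weights
W_rec = rng.normal(size=(N_NEURONS, N_NEURONS)) * 0.1   # recurrent weights
w_out = rng.normal(size=N_NEURONS)                      # steering readout

def step(state, features):
    """One leaky-integration step; returns (new_state, steering in [-1, 1])."""
    state = 0.9 * state + np.tanh(W_in @ features + W_rec @ state)
    return state, np.tanh(w_out @ state)

state = np.zeros(N_NEURONS)
for _ in range(5):                          # five noisy stand-in "camera" frames
    state, steering = step(state, rng.normal(size=N_FEATURES))

n_params = W_in.size + W_rec.size + w_out.size
print(n_params)                             # 52 parameters in this toy
print(-1.0 <= steering <= 1.0)              # True: control output is bounded
```

Even the paper's real system, at 75,000 trainable parameters, is two orders of magnitude below typical driving networks; the toy above just shows why such small recurrent policies remain inspectable.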
« Last Edit: October 13, 2020, 11:15:27 PM by vox_mundi »

nanning

  • Nilas ice
  • Posts: 2487
  • 0Kg CO₂, 37 KWh/wk,125L H₂O/wk, No offspring
  • Liked: 273
  • Likes Given: 23170
Re: Robots and AI: Our Immortality or Extinction
« Reply #514 on: October 14, 2020, 05:01:26 PM »
Tom, you passed the Turing test with that remark ;).
A couple of certain questions would make you pass as well. It's about reality :).

vox_mundi

  • Multi-year ice
  • Posts: 10165
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #515 on: October 16, 2020, 03:51:21 AM »
‘Machines Set Loose to Slaughter’: The Dangerous Rise of Military AI
https://www.theguardian.com/news/2020/oct/15/dangerous-rise-of-military-ai-drone-swarm-autonomous-weapons



... when it comes to the future of war, the line between science fiction and industrial fact is often blurry. The US air force has predicted a future in which “SWAT teams will send mechanical insects equipped with video cameras to creep inside a building during a hostage standoff”. One “microsystems collaborative” has already released Octoroach, an “extremely small robot with a camera and radio transmitter that can cover up to 100 metres on the ground”. It is only one of many “biomimetic”, or nature-imitating, weapons that are on the horizon.



A recent novel by PW Singer and August Cole, set in a near future in which the US is at war with China and Russia, presented a kaleidoscopic vision of autonomous drones, lasers and hijacked satellites. The book cannot be written off as a techno-military fantasy: it includes hundreds of footnotes documenting the development of each piece of hardware and software it describes.

https://en.m.wikipedia.org/wiki/Ghost_Fleet_(novel)

A Russian science fiction story from the 60s, Crabs on the Island, described a kind of Hunger Games for AIs, in which robots would battle one another for resources. Losers would be scrapped and winners would spawn, until some evolved to be the best killing machines. When a leading computer scientist mentioned a similar scenario to the US’s Defense Advanced Research Projects Agency (Darpa), calling it a “robot Jurassic Park”, a leader there called it “feasible”. It doesn’t take much reflection to realise that such an experiment has the potential to go wildly out of control.



... The “fog of war” excuses all manner of negligence. It does not seem likely that domestic or international legal systems will impose more responsibility on programmers who cause similar carnage.

... Proponents of such weapons insist that the machines’ powers of discrimination are only improving. Even if this is so, it is a massive leap in logic to assume that commanders will use these technological advances to develop just principles of discrimination in the din and confusion of war. As the French thinker Grégoire Chamayou has written, the category of “combatant” (a legitimate target) has already tended to “be diluted in such a way as to extend to any form of membership of, collaboration with, or presumed sympathy for some militant organization”.

... Civilians are routinely killed by military drones piloted by humans. Removing that possibility may involve an equally grim future in which computing systems conduct such intense surveillance on subject populations that they can assess the threat posed by each person within it (and liquidate or spare them accordingly).

The constant presence of a robotic watchman, capable of alerting soldiers to any threatening behaviour, is a form of oppression.

... The advance of AI use in the military, police, prisons and security services is less a rivalry among great powers than a lucrative global project by corporate and government elites to maintain control over restive populations at home and abroad. Once deployed in distant battles and occupations, military methods tend to find a way back to the home front. They are first deployed against unpopular or relatively powerless minorities, and then spread to other groups. US Department of Homeland Security officials have gifted local police departments with tanks and armour. Sheriffs will be even more enthusiastic for AI-driven targeting and threat assessment.

https://www.theguardian.com/world/2020/oct/15/private-firms-provide-software-information-to-police-documents-show

... As “war comes home”, deployment of military-grade force within countries such as the US and China is a stark warning to their citizens: whatever technologies of control and destruction you allow your government to buy for use abroad now may well be used against you in the future.

https://core.ac.uk/download/pdf/62559739.pdf

https://www.thenation.com/article/archive/welcome-home-war/

-------------------------------------------

At present, the military-industrial complex is speeding us toward the development of drone swarms that operate independently of humans, ostensibly because only machines will be fast enough to anticipate the enemy’s counter-strategies. This is a self-fulfilling prophecy, tending to spur an enemy’s development of the very technology that supposedly justifies militarisation of algorithms.

https://www.theguardian.com/news/2019/dec/04/are-drone-swarms-the-future-of-aerial-warfare

---------------------------------------

China Conducts Test Of Massive Suicide Drone Swarm Launched From A Box On A Truck
https://www.thedrive.com/the-war-zone/37062/china-conducts-test-of-massive-suicide-drone-swarm-launched-from-a-box-on-a-truck

China recently conducted a test involving a swarm of loitering munitions, also often referred to as suicide drones, deployed from a box-like array of tubular launchers on a light tactical vehicle and from helicopters. This underscores how the drone swarm threat, broadly, is becoming ever-more real and will present increasingly serious challenges for military forces around the world in future conflicts.



Similar to US Locust



The idea that a single truck could deploy 48 drones in a matter of seconds, which could swarm a target area over the horizon, is a reminder that there is no defense for such an attack at this time. Lasers, miniature interceptors, and even other forms of directed energy and electronic warfare are still limited in their ability to counter drones at all, let alone massive swarms of them.

----------------------------------

Will Commanders Trust Their New AI Weapons and Tools?
https://www.defenseone.com/ideas/2020/10/will-commanders-trust-their-new-ai-weapons-and-tools/169251/

-----------------------------------
« Last Edit: October 16, 2020, 04:34:37 AM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #516 on: October 16, 2020, 03:54:37 AM »
Microsoft Says Its AI Can Describe Images 'As Well As People Do'
https://www.engadget.com/microsoft-azure-ai-image-captions-humans-150040200.html

Describing an image accurately, and not just like a clueless robot, has long been the goal of AI. In 2016, Google said its artificial intelligence could caption images almost as well as humans, with 94 percent accuracy. Now Microsoft says it’s gone even further: Its researchers have built an AI system that’s even more accurate than humans — so much so that it now sits at the top of the leaderboard for the nocaps image captioning benchmark.

https://evalai.cloudcv.org/web/challenges/challenge-page/355/leaderboard/1011

http://nocaps.org/



"[Image captioning] is one of the hardest problems in AI,” said Eric Boyd, CVP of Azure AI, in an interview with Engadget. “It represents not only understanding the objects in a scene, but how they’re interacting, and how to describe them.” Refining captioning techniques can help every user: It makes it easier to find the images you’re looking for in search engines. And for visually impaired users, it can make navigating the web and software dramatically better.

... And now that Microsoft has set a new milestone, it will be interesting to see how competing models from Google and other research groups respond.

----------------------------------------

Brain Implant Bypasses Eyes To Help Blind People See
https://spectrum.ieee.org/the-human-os/biomedical/bionics/progress-toward-a-brain-implant-for-the-blind



... One way to understand dynamic current steering, Yoshor says, is to think of a trick that doctors commonly use to test perception—they trace letter shapes on a patient’s palm. “If you just press a ‘Z’ shape into the hand, it’s very hard to detect what that is,” he says. “But if you draw it, the brain can detect it instantaneously.” Yoshor’s technology does something similar, grounded in well-known information about how a person’s visual field maps to specific areas of their brain. Researchers have constructed this retinotopic map by stimulating specific spots of the visual cortex and asking people where they see a bright spot of light, called a phosphene.

The static form of stimulation that disappointed Yoshor essentially tries to create an image from phosphenes. But, says Yoshor, “when we do that kind of stimulation, it’s hard for patients to combine phosphenes to a visual form. Our brains just don’t work that way, at least with the crude forms of stimulation that we’re currently able to employ.” He believes that phosphenes cannot be used like pixels in a digital image.

With dynamic current steering, the electrodes stimulate the brain in sequence to trace a shape in the visual field. Yoshor’s early experiments have used letters as a proof of concept: Both blind and sighted people were able to recognize such letters as M, N, U, and W. This system has an additional advantage of being able to stimulate points in between the sparse electrodes, he adds. By gradually shifting the amount of current going to each (imagine electrode A first getting 100 percent while electrode B gets zero percent, then shifting to ratios of 80:20, 50:50, 20:80, 0:100), the system activates neurons in the gaps. “We can program that sequence of stimulation, it’s very easy,” he says. “It goes zipping across the brain.”
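The ratio-shifting sequence described above can be sketched in a few lines. This is an illustrative simplification only (linear interpolation between two electrodes, invented current values and step counts), not the clinical stimulation code:

```python
# Hypothetical sketch of "dynamic current steering" between two electrodes.
# Assumption: the perceived phosphene position shifts linearly with the
# fraction of total current delivered to each electrode, as the ratio
# example in the article (100:0 -> 80:20 -> 50:50 -> 20:80 -> 0:100) suggests.

def steering_sequence(total_current_ma=1.0, steps=5):
    """Yield (current_a, current_b, phosphene_position) tuples.

    position 0.0 = at electrode A, 1.0 = at electrode B.
    """
    seq = []
    for i in range(steps):
        frac_b = i / (steps - 1)                # 0.0, 0.25, 0.5, 0.75, 1.0
        current_a = total_current_ma * (1 - frac_b)
        current_b = total_current_ma * frac_b
        seq.append((current_a, current_b, frac_b))
    return seq

for a, b, pos in steering_sequence():
    print(f"A={a:.2f} mA  B={b:.2f} mA  -> phosphene at {pos:.2f}")
```

Stepping through such sequences across many electrode pairs is what lets the system "draw" a letter across the visual field rather than flashing it all at once.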

“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #517 on: October 16, 2020, 04:03:09 AM »
Scientists Develop 'Mini-Brains' to Help Robots Recognize Pain and to Self-Repair
https://techxplore.com/news/2020-10-scientists-mini-brains-robots-pain-self-repair.html

Using a brain-inspired approach, scientists from Nanyang Technological University, Singapore (NTU Singapore) have developed a way for robots to have the artificial intelligence (AI) to recognize pain and to self-repair when damaged.

The system has AI-enabled sensor nodes to process and respond to 'pain' arising from pressure exerted by a physical force. The system also allows the robot to detect and repair its own damage when slightly 'injured', without the need for human intervention.

The new NTU approach embeds AI into the network of sensor nodes, connected to multiple small, less powerful processing units that act like 'mini-brains' distributed on the robotic skin. This means learning happens locally, and the wiring requirements and response time for the robot are reduced five to ten times compared to conventional robots, say the scientists.

Combining the system with a type of self-healing ion gel material means that the robots, when damaged, can recover their mechanical functions without human intervention.

When 'injured' with a cut from a sharp object, the robot quickly loses mechanical function. But the molecules in the self-healing ion gel begin to interact, causing the robot to 'stitch' its 'wound' together and to restore its function while maintaining high responsiveness.



Rohit Abraham John et al, Self healable neuromorphic memtransistor elements for decentralized sensory signal processing in robotics, Nature Communications (2020).
https://www.nature.com/articles/s41467-020-17870-6
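A toy sketch of the decentralized idea above: each skin node decides locally whether a pressure reading counts as 'pain' and only escalates events, rather than streaming raw readings to one central processor. Class names, thresholds and readings here are invented for illustration:

```python
# Invented threshold: normalized pressure above which a node reports "pain".
PAIN_THRESHOLD = 0.8

class SkinNode:
    """One 'mini-brain' sensor node making local decisions."""
    def __init__(self, node_id):
        self.node_id = node_id

    def process(self, pressure):
        """Local decision: return an event only when it matters."""
        if pressure >= PAIN_THRESHOLD:
            return (self.node_id, "pain", pressure)
        return None                      # nothing sent upstream

nodes = [SkinNode(i) for i in range(4)]
readings = [0.1, 0.95, 0.3, 0.85]        # simulated pressures at each node
events = [e for n, p in zip(nodes, readings) if (e := n.process(p))]
print(events)  # only the two nodes over threshold report
```

The bandwidth saving is the point: a central controller sees two events instead of four raw streams, which is the kind of reduction in wiring and response time the NTU team describes.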

-----------------------------------

All-Terrain MicroRobot Flips Through a Live Colon
https://techxplore.com/news/2020-10-all-terrain-microrobot-flips-colon.html



A rectangular robot as tiny as a few human hairs can travel throughout a colon by doing back flips, Purdue University engineers have demonstrated in live animal models.

Why the back flips? Because the goal is to use these robots to transport drugs in humans, whose colons and other organs have rough terrain. Side flips work, too.

Why a back-flipping robot to transport drugs? Getting a drug directly to its target site could remove side effects, such as hair loss or stomach bleeding, that the drug may otherwise cause by interacting with other organs along the way.



The study, published in the journal Micromachines, is the first demonstration of a microrobot tumbling through a biological system in vivo. Since it is too small to carry a battery, the microrobot is powered and wirelessly controlled from the outside by a magnetic field.



"Moving a robot around the colon is like using the people-walker at an airport to get to a terminal faster. Not only is the floor moving, but also the people around you," said Luis Solorio, an assistant professor in Purdue's Weldon School of Biomedical Engineering.

"In the colon, you have all these fluids and materials that are following along the path, but the robot is moving in the opposite direction. It's just not an easy voyage." ...

Elizabeth E. Niedert et al, A Tumbling Magnetic Microrobot System for Biomedical Applications, Micromachines (2020).
https://www.mdpi.com/2072-666X/11/9/861

------------------------------------

... meet its big brother - the Matrix tracker bug.


Clear!
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #518 on: October 16, 2020, 04:15:11 AM »
Robots Meet Permaculture: Alphabet's X Lab Announces "Mineral" Project to Increase Sustainable Food Production
https://techxplore.com/news/2020-10-alphabet-lab-mineral-sustainable-food.html

To feed the planet’s growing population, global agriculture will need to produce more food in the next 50 years than in the previous 10,000, at a time when climate change is making our crops less productive.

Intensively growing just a few varieties of plants makes our food supply vulnerable to pests, disease, and a changing climate. Over time, it also depletes the soil of nutrients and minerals, reduces the diversity of the soil’s microbiome, and diminishes the soil’s ability to store carbon. Overuse of fertilizers and chemicals also negatively affects soil health, creating a vicious cycle that makes our farmlands less productive and the food we grow less nutritious.

What if new technologies could help us embrace nature’s diversity and complexity, instead of simplifying it? If breeders could unlock the genetic diversity of the 30,000 edible plant species that exist worldwide, they might be able to identify plant species and varieties that would be resilient and productive under the pressure of climate change. If growers could understand how each and every plant on their farm is growing and interacting with its environment, they could reduce the use of fertilizer, chemicals, and precious resources like water, and explore sophisticated growing techniques like intercropping and cover cropping that restore soil fertility and increase productivity.

Alphabet's X lab, formerly a Google division, has announced via blog post that it has formally named its newest "moonshot" project Mineral. The project will be geared toward using new and novel methods to increase sustainable food production. Alphabet X has also set up a web page outlining the goals of the project.

https://x.company/projects/mineral/

As noted in the team's blog post, to feed the billions of people expected to be populating the Earth in the coming years, changes are required in food production. The team at Mineral suggests that such changes should involve new approaches, techniques and tools: new kinds of hardware and software, and new ways of building and using them in agricultural efforts.

One example is a robotic buggy that the team has deployed in several locations. Each of the robots drives over cropland along the same paths used by tractors so as to not disturb the plants, autonomously collecting data. Each plant is photographed and sensors collect data about the plant and the soil in which it is growing. The data from the robots is then analyzed and used to make changes to farming practices that will result in greater yields.

For that to happen, the team is adding members from a wide variety of fields: farming, robotics, agriculture, artificial intelligence and computer and software engineering. Initial work will involve taking a close look at how plants grow in ways that have not been done before. The team wants to learn as much as possible about the process, from soil preparation, to planting and harvesting. They then plan to make changes by asking questions about the way things are done now, such as, what if farmers begin growing plants that are both consumable and nutritious but have never been considered a food crop?

--------------------------------

The team sees an answer in what it calls computational agriculture, in which advanced hardware, software and sensors will allow farmers to tap into the genetic diversity of the 30,000 edible plant species around the globe. This could allow them to identify and grow more resilient crops in certain environments, and lessen reliance on fertilizers, chemicals and water.



The way to do this, they decided, was the “Plant buggy,” a machine that can intelligently and indefatigably navigate fields and do those tedious and repetitive inspections without pause. With reliable data at a plant-to-plant scale, growers can initiate solutions at that scale as well — a dollop of fertilizer here, a spritz of a very specific insecticide there.

They’re not the first to think so. FarmWise raised quite a bit of money last year to expand from autonomous weed-pulling to a full-featured plant intelligence platform.

https://techcrunch.com/2019/09/17/farmwise-and-its-weed-pulling-agribot-harvest-14-5m-in-funding/

https://techcrunch.com/2020/10/12/alphabets-latest-moonshot-is-a-field-roving-plant-inspecting-robo-buggy/
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Tom_Mazanec

  • Guest
Re: Robots and AI: Our Immortality or Extinction
« Reply #519 on: October 16, 2020, 07:35:57 PM »
An excellent summary and background of the state of the art in the dilemma of making friendly AI:
The case for taking AI seriously as a threat to humanity
https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment
Quote
“Some have argued that there is no conceivable risk to humanity [from AI] for centuries to come,” wrote UC Berkeley professor Stuart Russell, “perhaps forgetting that the interval of time between Rutherford’s confident assertion that atomic energy would never be feasibly extracted and Szilárd’s invention of the neutron-induced nuclear chain reaction was less than twenty-four hours.”

morganism

  • Nilas ice
  • Posts: 1691
    • View Profile
  • Liked: 215
  • Likes Given: 124
Re: Robots and AI: Our Immortality or Extinction
« Reply #520 on: October 16, 2020, 09:07:16 PM »
The grim fate that could be ‘worse than extinction’

https://www.bbc.com/future/article/20201014-totalitarian-world-in-chains-artificial-intelligence

"What if the definition of what is illegal in the US and the UK expanded to include criticising the government or practising certain religions? The infrastructure is already in place to enforce it, and AI – which the NSA has already begun experimenting with – would enable agencies to search through our data faster than ever before.

In addition to enhancing surveillance, AI also underpins the growth of online misinformation, which is another tool of the authoritarian. AI-powered deep fakes, which can spread fabricated political messages, and algorithmic micro-targeting on social media are making propaganda more persuasive. This undermines our epistemic security – the ability to determine what is true and act on it – that democracies depend on.

... “We need to decide now what are acceptable and unacceptable uses of AI,” he says. “And we need to be careful about letting it control so much of our infrastructure. If we're arming police with facial recognition and the federal government is collecting all of our data, that's a bad start.”

If you remain sceptical that AI could offer such power, consider the world before nuclear weapons. Three years before the first nuclear chain reaction, even scientists trying to achieve it believed it was unlikely. Humanity, too, was unprepared for the nuclear breakthrough and teetered on the brink of “mutually assured destruction” before treaties and agreements guided the global proliferation of the deadly weapons without an existential catastrophe.

https://maliciousaireport.com/


vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #521 on: October 17, 2020, 12:35:23 AM »


Digit humanoid robot is now in full commercial production

https://www.agilityrobotics.com/robots#digit

--------------------------------



This technology demonstration developed at NASA’s Jet Propulsion Laboratory in Southern California showcases the robot’s ability to split in two and send one of its halves -- a two-wheeled Axle robot -- over an otherwise inaccessible slope, using a tether as support and to supply power.

--------------------------------------



AeroVironment Successfully Completes Sunglider Solar HAPS Stratospheric Test Flight, Surpassing 60,000 Feet Altitude and Demonstrating Broadband Mobile Connectivity.

-------------------------------------------



What Yaskawa assembly robots do when you're not watching.
« Last Edit: October 17, 2020, 02:16:52 AM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #522 on: October 17, 2020, 05:22:12 PM »
U.S. Government Agencies to Use AI to Cull and Cut Outdated Regulations
https://www.reuters.com/article/us-usa-regulations-white-house/u-s-government-agencies-to-use-ai-to-cull-and-cut-outdated-regulations-idUSKBN27130L

WASHINGTON (Reuters) - The White House Office of Management and Budget (OMB) said Friday that federal agencies will use artificial intelligence to eliminate outdated, obsolete, and inconsistent requirements across pages of government regulations.

A 2019 pilot project used machine learning algorithms and natural language processing at the Department of Health and Human Services. The test run found hundreds of technical errors and outdated requirements in agency rulebooks, including requests to submit materials by fax.

OMB said all federal agencies are being encouraged to update regulations using AI and several agencies have already agreed to do so.

White House OMB director Russell Vought said the AI effort would help agencies “update a regulatory code marked by decades of neglect and lack of reform.”

Under the initiative, agencies will use AI technology and other software “to comb through thousands and thousands of regulatory code pages to look for places where code can be updated, reconciled, and generally scrubbed of technical mistakes,” the White House said.
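The kind of scan described above can be caricatured with a simple keyword pass over regulatory text. The actual HHS/OMB pilot used machine learning and NLP, so treat this as a deliberately minimal stand-in with an invented term list:

```python
import re

# Toy illustration: flag outdated requirements (e.g. fax submissions) in
# regulation text. Terms and suggested replacements are invented examples.
OUTDATED_TERMS = {
    "fax": "electronic submission",
    "telegraph": "electronic notice",
    "floppy disk": "digital file",
}

def flag_outdated(text):
    """Return (term, suggested replacement, sentence) for each hit."""
    hits = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        for term, suggestion in OUTDATED_TERMS.items():
            if re.search(rf"\b{re.escape(term)}\b", sentence, re.IGNORECASE):
                hits.append((term, suggestion, sentence.strip()))
    return hits

sample = ("Applicants must submit Form 12 by fax. "
          "Records may be kept on floppy disk for audit.")
for term, fix, where in flag_outdated(sample):
    print(f"{term!r} -> consider {fix!r}: {where}")
```

Even this crude pass shows why the pilot could surface hundreds of hits: the outdated references (fax requests and the like) are lexically distinctive, so the hard part is triage and reconciliation, not detection.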


The Trump administration had made deregulation a key priority, while critics say the administration has failed to ensure adequate regulatory safeguards.

“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #523 on: October 18, 2020, 05:34:53 PM »
Rio Tinto Investigates After Employee Crushes Own Work Ute With Haul Truck
https://www.abc.net.au/news/2019-11-22/rio-tinto-launches-investigation-over-crushed-ute/11728944

Rio Tinto says it is investigating an incident involving a loaded autonomous haul truck and a stationary light vehicle in Western Australia's Pilbara.

Rio said the haul truck involved in the incident had been retrofitted with the technology enabling it to run autonomously; however, it was operating in manual mode at the time of the incident and was outside the autonomous zone.

The autonomous dump truck had a mechanical fault. The "Mechanic" drives out with his Ute to fix the truck. He then boards the dump truck to move it out of the area...

And drives over his own vehicle...



It is an embarrassing safety breach for the mining giant and comes after BHP's runaway train disaster last year when a handbrake was applied to the wrong train.

https://www.abc.net.au/news/2019-03-12/brakes-applied-to-wrong-bhp-train-before-derailment-atsb-says/10893206

It has also revived memories from October 2012 when a truck driver at Kalgoorlie-Boulder's Super Pit gold mine rolled a 793c haul truck onto its side.

... "Safety is our top priority," a Rio Tinto spokesperson said.

---------------------------------------

HAL9000: Well, I don't think there is any question about it. It can only be attributable to human error. This sort of thing has cropped up before, and it has always been due to human error.

Agent Smith: Never send a human to do a machine's job.
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #524 on: October 19, 2020, 01:07:32 AM »

Robot Dolphin
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #525 on: October 20, 2020, 02:59:20 AM »
'Digit' Robot for Sale and Ready to Perform Manual Labor
https://techxplore.com/news/2020-10-digit-robot-sale-ready-manual.html
https://techxplore.com/news/2019-05-bipedal-robot-digit-autonomous-delivery.html


https://www.agilityrobotics.com/#digit

Robot maker Agility, a spinoff created by researchers from Oregon State University, has announced that parties interested in purchasing one of its Digit robots can now do so. The human-like robot has been engineered to perform manual labor, such as removing boxes from shelves and loading them onto a truck. The robot can be purchased directly from Agility for $250,000.

Most of the robots that have been built in recent years are for research purposes. Scientists all over the world are striving to give them more and better capabilities. On their web page, the team at Agility claim that it is time for robots to start getting out of the research lab and into the real world where they can start doing useful things. They note that Digit has been engineered to do just that, and it is ready right now, for customers.

Digit is vaguely human-shaped. It has two arms and two legs and can walk around. Its toeless feet are flat and its knees bend backward compared to humans. Also, it has no head or hands. But it does have strength and durability, and a computer that allows it to carry out tasks autonomously and without a tether. Digit looks the part of a robot ready to perform laborious tasks. Its frame is thick metal and it moves like a person who does manual labor. It can bend over and pick up a box, carry it to a desired destination (including climbing stairs if need be) and leave it there, repeatedly.

The team at Agility notes that current events make Digit ideal for businesses looking to replace human laborers with robot laborers. Human labor in the U.S. and other parts of the world, they note, is in short supply due to a variety of factors, not least of which is the current COVID-19 pandemic. They also note that Digit has been engineered to work in the real world; users do not need to alter the work environment to suit its needs; it can work anywhere human beings are already working. The team at Agility also note that they aim to be in the robot-making and selling business long-term. They just recently received an infusion of $20 million in funding from a variety of investors. They also recently struck a deal with Ford Motor Company for robots that can be used to load and unload boxes from self-driving vehicles.

... the combination of driverless car and robot is compelling, "especially because the two could share camera and lidar sensor data to help each understand their surroundings. The robot could also charge in the car, helping to reduce the need for lots of bulky batteries."

In brief, the Ford concept is for autonomous vehicles and their delivery robots to share sensor data.



In Ford’s imagining, Digit would be bundled into the back of a self-driving car. When the car reaches its destination, the trunk pops open, and Digit unfolds itself in a manner unnervingly similar to the droid army in Star Wars: The Phantom Menace.

"When a self-driving vehicle brings Digit to its final destination, the vehicle can wirelessly deliver all the information it needs, including the best pathway to the front door. Through this data exchange, Digit can work collaboratively with a vehicle to situate itself and begin making its delivery."

Not only that, but there would be an important interchange if Digit were to meet an unexpected obstacle: "it can send an image back to the vehicle and have the vehicle configure a solution. The car could even send that information into the cloud and request help from other systems to enable Digit to navigate, providing multiple levels of assistance that help keep the robot light and nimble."
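The two-way exchange quoted above can be sketched as hypothetical message types. All names and fields are invented; Ford has published no such schema, so this only illustrates the flow (car seeds the robot with a route, robot escalates obstacles back):

```python
from dataclasses import dataclass, field

@dataclass
class DeliveryBrief:            # car -> robot, on arrival
    destination: str
    path_waypoints: list
    map_tiles: dict = field(default_factory=dict)

@dataclass
class ObstacleReport:           # robot -> car, when stuck
    image_ref: str
    location: tuple

def plan_around(report: ObstacleReport) -> list:
    """Stand-in for the car (or cloud) computing a detour for the robot."""
    x, y = report.location
    return [(x + 1, y), (x + 1, y + 1)]   # toy detour: step aside, then forward

brief = DeliveryBrief("front door", [(0, 0), (5, 5)])
report = ObstacleReport("frame_0042.jpg", (2, 2))
print(plan_around(report))  # -> [(3, 2), (3, 3)]
```

The design point is the one the article makes: heavy computation (mapping, replanning) stays on the vehicle or in the cloud, which is what keeps the robot itself light and nimble.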

What one Digit learns - they All learn.



https://www.businesswire.com/news/home/20201015005327/en/Agility-Robotics-Raises-20-Million-to-Build-and-Deploy-Humanoid-Robots-for-Work-in-Human-Spaces
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #526 on: October 21, 2020, 02:17:03 PM »
Study: COVID Speeds Up Automation
https://phys.org/news/2020-10-covid-human-machine-standoff.html



The World Economic Forum, in a report released Wednesday on the future of jobs, expects that a new division of labor between humans and machines will upend and eliminate some 85 million jobs globally across 15 industries. But it also expects that 97 million new roles will emerge in sectors like artificial intelligence, content-creation and "the care economy" involving kids and the elderly.

Two years ago, the forum predicted more jobs created (133 million) and fewer lost (75 million).

"In essence, the rate of job destruction has gone up and the rate of job creation has gone down," said WEF managing director Saadia Zahidi. "The good news is that overall, the jobs that are being created still are in greater numbers than the jobs that are being destroyed. But the rate has changed and that's obviously going to make it difficult for workers to find their next role."
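Netting out the figures quoted above makes Zahidi's point concrete (all values in millions of jobs):

```python
# The two WEF forecasts quoted in the article, netted out.
forecasts = {
    "2018 report": {"created": 133, "destroyed": 75},
    "2020 report": {"created": 97, "destroyed": 85},
}

for report, f in forecasts.items():
    net = f["created"] - f["destroyed"]
    print(f"{report}: +{f['created']}M / -{f['destroyed']}M -> net {net:+}M")
# The net gain shrinks from +58M to +12M: creation still outpaces destruction,
# but by a much smaller margin.
```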

... The COVID-19 crisis has had a far worse impact on people with lower education than the 2008 financial crisis, and is more likely to deepen inequalities, the report said.

“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #527 on: October 24, 2020, 01:07:33 AM »


Countless precise repetitions? This is the perfect task for a robot, thought researchers at the University of Liverpool in the Department of Chemistry, and so they developed an automation solution that can carry out and monitor research tasks, making autonomous decisions about what to do next.

-------------------------------------------



The collaborative humanoid robot ARMAR-6 demonstrates a number of cognitive and sensorimotor abilities such as 1) recognition of the need of help based on speech, force, haptics and visual scene and action interpretation, 2) collaborative bimanual manipulation of large objects, 3) compliant mobile manipulation, 4) grasping known and unknown objects and tools, 5) human-robot interaction (object and tool handover) 6) natural dialog and 7) force predictive control.

-------------------------------------------



Thermite RS3—the first firefighting robot to enter service in the US. The Thermite RS3 is a hefty 3,500-pound machine that looks like something from Command & Conquer. The 36-hp diesel engine can push this mini-tank around at eight mph, and it's able to ascend slopes as steep as 70 degrees.

The robot's primary function of putting out fires is achieved using a hose that stretches 300 feet horizontally (150 feet vertically) and a cannon-like nozzle capable of blasting 2,500 gallons of water per minute. The RS3 also features a plow blade on the front for clearing obstacles—it's powerful enough to move a car—and there's a 5,000-pound winch that can tow up to 1,750 pounds.



The Los Angeles Fire Department is now using the RS3 to put out fires.

----------------------------------

Sam's Club to Put Robot Floor Scrubbers In Every Store By Fall
https://www.braincorp.com/newsroom/brain-corp-expands-commercial-relationship-with-sams-club-to-power-in-club-autonomous-robots-and-connected-data-services

Robot janitors are already at Walmart, and now they are making their way to Sam's Club.

According to a press release by Brain Corp, which is the company making the robot floor scrubbers, Sam's Club will put 372 of them into its stores by this fall.

In 2018, Walmart placed the Auto-C – Autonomous Cleaner into 78 Walmart stores.

Walmart, which owns Sam's, announced last year it would bring autonomous floor scrubbers to more than 1,800 of its stores by next February, CNN reported.

With software upgrades it can also do inventory.

------------------------------------------
« Last Edit: October 24, 2020, 02:17:14 AM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #528 on: October 26, 2020, 01:12:08 AM »
Researchers Suggest AI Can Learn Common Sense From Animals (... Sure Can't Learn It From Humans)
https://venturebeat.com/2020/10/25/researchers-suggest-ai-can-learn-common-sense-from-animals/

AI researchers developing reinforcement learning agents could learn a lot from animals. That’s according to recent analysis by Google’s DeepMind, Imperial College London, and University of Cambridge researchers assessing AI and non-human animals.

... “This is especially true in a reinforcement learning context, where, thanks to progress in deep learning, it is now possible to bring the methods of comparative cognition directly to bear,” the researchers’ paper reads. “Animal cognition supplies a compendium of well-understood, nonlinguistic, intelligent behavior; it suggests experimental methods for evaluation and benchmarking; and it can guide environment and task design.”

... Unlike other methods of training AI, deep reinforcement learning gives an agent an objective and reward, an approach similar to training animals using food rewards. Previous animal cognition studies have looked at a number of species, including dogs and bears. Cognitive behavioral scientists have discovered higher levels of intelligence in animals than previously assumed, including dolphins’ self-awareness, and crows’ capability for revenge.
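For readers who want the mechanics, the objective-and-reward loop described above can be sketched as a minimal tabular Q-learning agent. The "corridor with a treat at the end" environment and all constants below are illustrative inventions for this post, not anything from the DeepMind paper:

```python
import random

# Minimal tabular Q-learning sketch: an agent in a 5-cell corridor learns,
# by trial, error, and a food-like reward, to walk to the far end -- the
# same reward-driven scheme the researchers compare to animal training.
N_STATES = 5
ACTIONS = (-1, +1)                     # step left / step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1      # learning rate, discount, exploration

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0   # "treat" at the far end
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for _ in range(200):                   # training episodes
    s, done = 0, False
    while not done:
        if random.random() < EPS:      # occasionally explore
            a = random.choice(ACTIONS)
        else:                          # otherwise exploit current estimates
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        nxt, r, done = step(s, a)
        best_next = max(q[(nxt, act)] for act in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = nxt

# The learned greedy policy heads toward the reward from every cell.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

Deep reinforcement learning swaps the lookup table for a neural network, but the reward plumbing is the same.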

Published in CellPress Reviews, the team’s paper — “Artificial Intelligence and the Common Sense of Animals” — cites cognition experiments with birds and primates.

“Ideally, we would like to build AI technology that can grasp these interrelated principles and concepts as a systematic whole and that manifests this grasp in a human-level ability to generalize and innovate,” the paper reads. “How to build such AI technology remains an open question. But we advocate an approach wherein RL agents, perhaps with as-yet-undeveloped architectures, acquire what is needed through extended interaction with rich virtual environments.”

When it comes to building systems like those mentioned in the paper, challenges include helping agents sense that they exist within an independent world. Training agents to grasp the concept of common sense is another hurdle, along with identifying the kinds of environments and tasks best suited to developing it.

A prerequisite for training agents to use common sense will be 3D simulated worlds with realistic physics. These can simulate objects, like shells that can be cracked apart, lids that can be unscrewed, and packets that can be torn open.

The researchers argue that common sense is not a uniquely human trait, but that it depends on some basic concepts, like understanding what an object is, how an object occupies space, and the relationship between cause and effect. Among these principles is the ability to perceive an object as a semi-permanent thing that persists over time.

“Artificial Intelligence and the Common Sense of Animals”
https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(20)30216-3

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #529 on: October 26, 2020, 11:16:47 PM »
Dog Training Methods Help Teach Robots to Learn New Tricks
https://techxplore.com/news/2020-10-dog-methods-robots.html



With a training technique commonly used to teach dogs to sit and stay, Johns Hopkins University computer scientists showed a robot how to teach itself several new tricks, including stacking blocks. With the method, the robot, named Spot, was able to learn in days what typically takes a month.

By using positive reinforcement, an approach familiar to anyone who's used treats to change a dog's behavior, the team dramatically improved the robot's skills and did it quickly enough to make training robots for real-world work a more feasible enterprise. The findings are newly published in a paper called "Good Robot!"

... Hundt recalled how he once taught his terrier mix puppy named Leah the command "leave it," so she could ignore squirrels on walks. He used two types of treats, ordinary trainer treats and something even better, like cheese. When Leah was excited and sniffing around the treats, she got nothing. But when she calmed down and looked away, she got the good stuff. "That's when I gave her the cheese and said, 'Leave it! Good Leah!'"

Similarly, to stack blocks, Spot the robot needed to learn how to focus on constructive actions. As the robot explored the blocks, it quickly learned that correct behaviors for stacking earned high points, but incorrect ones earned nothing. Reach out but don't grasp a block? No points. Knock over a stack? Definitely no points. Spot earned the most by placing the last block on top of a four-block stack.

The training tactic not only worked, it took just days to teach the robot what used to take weeks. The team was able to reduce the practice time by first training a simulated robot, which is a lot like a video game, then running tests with Spot.

"The robot wants the higher score," Hundt said. "It quickly learns the right behavior to get the best reward. In fact, it used to take a month of practice for the robot to achieve 100% accuracy. We were able to do it in two days."
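The scoring scheme described above might be sketched like this: a hypothetical reward function in which only constructive stacking actions score, and everything else earns nothing. The numeric values and names are illustrative, not taken from the "Good Robot!" paper:

```python
# Hypothetical sketch of the sparse, progress-only reward the article
# describes: constructive stacking actions score, everything else gets
# nothing. Values are illustrative, not the paper's.
def stacking_reward(height_before: int, height_after: int,
                    goal_height: int = 4) -> float:
    if height_after < height_before:
        return 0.0        # knocked the stack over: definitely no points
    if height_after == height_before:
        return 0.0        # reached out but placed nothing: no points
    if height_after >= goal_height:
        return 10.0       # topped out the stack: the biggest payoff
    return 1.0            # one block correctly placed

print(stacking_reward(2, 3))   # placed a block
print(stacking_reward(3, 1))   # knocked the stack over
print(stacking_reward(3, 4))   # finished the stack
```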


Who's a Goodboi

Andrew Hundt et al, "Good Robot!": Efficient Reinforcement Learning for Multi-Step Visual Tasks with Sim to Real Transfer, IEEE Robotics and Automation Letters (2020).
https://ieeexplore.ieee.org/document/9165109

-----------------------------------------


Autonomous Boat Navigating the Canals of Amsterdam

----------------------------------------

No Implants Needed For Precise Control Deep Into The Brain
https://spectrum.ieee.org/the-human-os/biomedical/devices/deep-brain-control-without-implants

Optogenetics can now control neural circuits at unprecedented depths within living brain tissue without surgery

... all we need to do is genetically alter your brain cells.

----------------------------------------

Lawmaker Calls for AI to Be Integrated Into All Future Major Defense Programs
https://www.fedscoop.com/ai-development-in-all-defense-acquisition-programs-anthony-brown/

A lawmaker on the House Armed Services Committee is calling for artificial intelligence development to be mandated in future major defense programs, like helicopters or ground vehicle systems.

Rep. Anthony Brown, D-Md., said during a Brookings virtual event Friday that for the DOD to modernize, it needs to get serious about fielding AI in its programs and remove outdated platforms from its roster. The DOD should “include AI development in every major defense acquisition program,” Brown said as a means to win the AI race with China.

... Many decades-old platforms, like old airplanes without modern software capabilities, would need to be scrapped or dramatically changed to incorporate modern tech. The change would also force platforms to rely more on data processing and enterprise IT systems to support the AI.

... Cyberdyne Systems has your solution right here...



------------------------------------------

For the Military, Destroying the Earth in Games May Help to Save the Real World
https://www.nextgov.com/ideas/2020/10/military-destroying-earth-games-may-help-save-real-world/169390/

It all began to crumble for the brave defenders of the United States when a nuclear warhead launched from an undetected submarine obliterated several West Coast cities. But things were going downhill even before that, with our Middle East assets taking a beating from both conventional and nuclear forces. At least we gave almost as good as we got. In the end, the so-called victor inherited a dying world filled with ash and little else other than stone age technology.

Thankfully, all of this was just a simulation of a war that nobody wants to fight.

The aforementioned battlefield was a nuclear war game called ICBM, which stands for intercontinental ballistic missile. It was created by well-known wargame developer Slitherine and is due to be released in November on the Steam gaming platform for the PC. But a few lucky members of the Pentagon’s wargaming team got an early look in a live battle challenge against the game’s developers.

They played three games. Although the Pentagon lost twice, they did win one bout. Given that they were playing the game’s developers, it was clear that Team Slitherine had home field advantage. As the Pentagon’s wargamers learned the nuances and strategy of the simulation, their experience in wargaming slowly began to give them the edge. It’s interesting to note that even when winning, the victor would inherit a very broken world with billions killed across millions of miles of ruined cities and irradiated landscapes. So I guess the lesson from the classic 1983 “WarGames” movie still holds true today: the only way to truly win a game of nuclear war is not to play. [... too bad Trump doesn't know this...]

The Pentagon team had to compete directly in the ICBM challenge using nothing but their human ingenuity and skill. But if they had waited a couple of months, they might have been able to deploy a secret new military weapon. The Defense Advanced Research Projects Agency just launched its AI Gamebreaker program in an effort to create an artificial intelligence that can overwhelm any strategy video game’s internal logic, sow chaos on the battlefield and ultimately win every skirmish, battle and war. ...



gerontocrat

  • Multi-year ice
  • Posts: 20378
    • View Profile
  • Liked: 5289
  • Likes Given: 69
"Para a Causa do Povo a Luta Continua!"
"And that's all I'm going to say about that". Forrest Gump
"Damn, I wanted to see what happened next" (Epitaph)

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #532 on: October 29, 2020, 12:14:23 AM »
^ Thanks for fixing that Big-g  ;) ...

---------------------------------------------

Scientists Are Now Using A Robot Dog To Sniff Out Radiation Levels In Ukraine’s Chernobyl
https://www.telegraph.co.uk/news/2020/10/27/boston-dynamics-robot-dog-sent-chernobyl-sniff-radiation/amp/


Smart robot - let the humans go first ...

The Central Enterprise for Radioactive Waste Management and University of Bristol scientists brought in the four-legged robot dog, named Spot, to map radiation levels in the area.

The yellow robot dog, designed to detect radiation, was spotted working at Chernobyl’s nuclear reactor number four, the State Agency of Ukraine on Exclusion Zone Management reported.

...  “We came to the Chornobyl Exclusion Zone to use the robotic platforms for mapping the distribution of radiation, test our robotic platforms and build new networks of people,” David Megson-Smith, a senior post-doctoral researcher at the university, was quoted as saying in the Ukrinform report.

He added that they had worked in other nuclear plants before, but nothing like Chernobyl. In 2019, University of Bristol researchers visited the site to carry out the first ever drone-based mapping survey of the “Red Forest” - a four square mile wooded area that surrounds the disaster site.

... maybe it'll visit an old friend ...

« Last Edit: October 29, 2020, 12:20:20 AM by vox_mundi »

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #533 on: October 29, 2020, 07:16:59 PM »
Review: Army’s Robotic Gun: ATLAS
https://breakingdefense.com/2020/10/bd-checks-out-armys-robotic-gun-atlas/



The Army will soon hold live-fire tests of an AI that can algorithmically spot targets and aim at them -- but a human still has to pull the trigger. Will ATLAS let future tanks fight better with smaller or no crews?

ABERDEEN PROVING GROUND, MD: One touch, one kill — is this the user-friendly future of warfare?

The reporter sat in front of a touchscreen, watching black-and-white infrared video of the gunnery range outside. Lined up on the left edge of the screen were still-image close-ups of what an experimental AI had decided were valid targets: dummies representing enemy infantry and vehicles, plus a real pick-up truck. Disengage the safety, tap a target with your finger, and, 20 yards away, an unmanned turret automatically slews to aim its 50mm cannon at that target.

If one more button had been enabled, the analyst could have opened fire. But the Army didn’t enable that particular option for reporters checking out its experimental AI targeting system, ATLAS. ... Sometime in the next month, Army officials said, they plan to live-fire ATLAS here at Aberdeen’s Edgewood range.

https://breakingdefense.com/2019/03/atlas-killer-robot-no-virtual-crewman-yes/

ATLAS currently takes video feeds from infrared cameras; future upgrades could bring in radar and other sensors. The AI algorithms look for patterns in those images, recognize potential targets, highlight those images onscreen for the human operator, and provide detailed targeting data to a connected fire control system.

... a prioritization algorithm looks at the targets spotted by ATLAS and tells the operator which ones are the biggest threats and must be dealt with first.



... Tap the infantry target on the screen, and the system not only automatically slews the 50mm chaingun to bear but automatically selects the ideal ammunition. In this case, it's a burst of multiple explosive shells, fused to explode amidst the hostile troops. The system even adjusts the detonation points to the shape of the enemy formation. (Two well-placed airbursts can kill as many infantry as 96 non-explosive rounds, Army scientists have calculated).

When a vehicle is tapped instead, the system recommends a single round of armor-piercing ammo. Other options include anti-building — with the rounds fused to explode only after penetrating the wall — and anti-helicopter, which also uses airburst rounds, but with different fuse settings.
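The two decision aids described here, threat prioritization and ammunition recommendation, can be sketched as simple lookups. The categories, scores, and strings below are illustrative stand-ins; the article does not publish ATLAS's actual logic:

```python
# Illustrative stand-ins only -- not ATLAS's real categories or weights.
THREAT_SCORE = {"helicopter": 3, "vehicle": 2, "infantry": 1, "building": 1}
AMMO = {
    "infantry":   "airburst: multiple explosive shells fused over the formation",
    "vehicle":    "single armor-piercing round",
    "building":   "delayed fuse: detonate after penetrating the wall",
    "helicopter": "airburst with anti-air fuse settings",
}

def prioritize(detected):
    """Order detected targets most-threatening first for the operator."""
    return sorted(detected, key=lambda t: THREAT_SCORE.get(t, 0), reverse=True)

def recommend_ammo(target):
    """Suggest an ammunition type; the human still pulls the trigger."""
    return AMMO.get(target, "operator selects manually")

queue = prioritize(["infantry", "vehicle", "helicopter"])
print(queue)                     # most-threatening target handled first
print(recommend_ammo(queue[0]))
```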

Compare that to current, manual controls.

... A well-trained crew can go through this process in several seconds, but the Army suggests ATLAS can be faster. Seconds do count in combat. Historical data shows the survivor of a tank battle is usually the side that sees the other and opens fire first. The slogan of the ATLAS project is “time is a weapon.”

ATLAS could be faster still if the Army were willing to take the human gunner out of the loop and allow the AI to fire the gun. That won’t happen, unless U.S. policy changes. The American military – unlike Russia and China – is profoundly wary of what autonomous weapons could do without human supervision. The US sees its highly trained troops as an asset that AI should empower, not an obstacle for AI to bypass.

https://breakingdefense.com/2019/11/bipartisan-ai-commission-dod-should-consider-truly-autonomous-weapons/

Instead of replacing humans altogether, the Army’s Next Generation Combat Vehicle initiative wants to use AI to let one human do the work of two, combining the traditional commander and gunner roles. (Other AI could replace or assist the driver). That would allow one human to remotely operate multiple robots, and let manned vehicles run with smaller crews.

-------------------------------------

SkyNet
https://assets.documentcloud.org/documents/7275544/Joint-All-Domain-Command-and-Control-Oct-23-2020.pdf

--------------------------------------

DARPA AI Builds New Networks On The Fly
https://breakingdefense.com/2020/10/darpa-builds-ai-to-reorganize-machines-humans-on-the-fly/

WASHINGTON: How do you write software to get two previously incompatible military systems to share data? DARPA’s suggestion: have an artificial intelligence write it for you — in just 45 minutes.

“The tool is auto-writing software instead of a human,” DARPA Strategic Technology director Tim Grayson said. In early trials where real military operators came up with specific “force packages” of combat systems and challenged DARPA to integrate them, he told me, “they’re creating entirely new systems of systems architectures, getting the software… in about 45 minutes.”

That particular program – called STITCHES, a nested acronym too awful to unpack here – was used in the Air Force’s August ABMS “On Ramp” experiment. It is just one of roughly 20 DARPA projects attacking various aspects of Mosaic Warfare, a DARPA concept for future conflict that goes beyond the Pentagon’s new emphasis on joint All-Domain Operations.

Military hierarchies are, by necessity, rigid structures. DARPA’s ‘Mosaic Warfare’ project aims for something much more fluid and adaptable, with AI doing the logistical grunt work so human commanders can get creative.

... Yes, DARPA AI programs like AlphaDogfight are often described as replacing humans – in this case fighter pilots – with superior machines. But that’s a superficial understanding, Grayson argues. AI is replacing humans only in certain tasks, Grayson says, which frees humans up for higher purposes.

« Last Edit: October 29, 2020, 07:27:27 PM by vox_mundi »

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #534 on: October 29, 2020, 11:37:31 PM »
untouched by human hands ...

Flippy Robots Will Cook Sliders In 10 More White Castle Locations
https://www.engadget.com/flippy-robot-white-castle-expansion-090002507.html

Miso Robotics’ Flippy ROAR — that’s short for Robot-on-a-Rail — showed promising results in its pilot with White Castle, enough for the burger chain to sign on to deploying the robot in 10 more locations. The companies announced their collaboration back in July, just as restaurants were forced to limit staff to ensure social distancing while keeping up with the increasing demand for delivery and takeout orders due to the coronavirus pandemic. Back in September, they formally started a pilot program to test Flippy at one White Castle location, and the machine has helped serve 14,580 pounds of food and over 9,720 baskets since then.

The burger chain will install the commercially available version of Flippy ROAR that was launched earlier this month into its kitchens. It expects Flippy to free up time for human staff members, so they can take care of logistics and customer service, and to help keep 24-hour locations running. The ChefUI software that powers Flippy can also be integrated with delivery apps to sync an order’s completion with its pick-up time. Meanwhile, the machine’s sensors and cameras can keep an eye on inventory and recommend bulk orders for supplies when needed.

See also: https://forum.arctic-sea-ice.net/index.php/topic,1392.msg289126.html#msg289126

---------------------------------------------------

Disney Made a Skinless Robot That Can Realistically Stare Directly Into Your Soul
https://gizmodo.com/disney-made-a-skinless-robot-that-can-realistically-sta-1845522375

One of the obvious giveaways that you’re interacting with a robot is their blank dead-eyed stare. The eyes don’t connect with yours the way they would if they were, you know, human. A research team at Disney is trying to fix that using subtle head motions and eye movements that make the robot seem more lifelike—despite it lacking skin and looking like pure, unfiltered, nightmare material.



Robots and animatronic characters designed to look humanoid and interact with real people can usually turn toward a person and focus the direction of their eyes on a human face, but they tend to just freeze in place at that point, which is the complete opposite of what real living beings do.

In a paper titled “Realistic and Interactive Robot Gaze,” the researchers describe a better approach they’ve developed, and it sounds like a layer cake of behaviors and interactions that add up to create a genuine illusion of life. Using a chest-mounted sensor, the robot can identify when a person is trying to engage with it directly and turn to face them, but this behavior is then enhanced with a series of other smaller motions layered on top. These can include attention habituation, where an external stimulus, like a sudden sound in the distance, can cause the robot to momentarily shift its gaze to try to determine the source, but eventually return to focusing on a person’s face. Saccades, which are quick darting movements of the eye as it examines the entirety of a subject’s face, head movements that occur as a result of simulated breathing, and even simple realistic blinking motions are all possible.
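That "layer cake" can be sketched as a sum of small offsets on top of a base gaze target. Everything below, the gains, the time constants, the one-dimensional angle, is an invented guess at the structure for illustration, not Disney's implementation:

```python
import math
import random

# One-dimensional sketch of layered gaze: a base "face the person" target,
# plus stimulus attention that habituates away, saccade jitter, and a slow
# breathing sway. All gains and rates are invented for illustration.
def gaze_angle(face_deg, stimulus_deg, salience, t, habituation_rate=0.5):
    attention = salience * math.exp(-habituation_rate * t)  # decays over time
    base = (1 - attention) * face_deg + attention * stimulus_deg
    saccade = random.uniform(-0.5, 0.5)                 # quick darting of the eyes
    breathing = 0.3 * math.sin(2 * math.pi * 0.25 * t)  # simulated breathing sway
    return base + saccade + breathing

random.seed(1)
# Right after a sudden sound at 40 degrees, the gaze is pulled toward it...
early = gaze_angle(face_deg=0.0, stimulus_deg=40.0, salience=0.8, t=0.0)
# ...and has habituated back to the person's face ten seconds later.
late = gaze_angle(face_deg=0.0, stimulus_deg=40.0, salience=0.8, t=10.0)
print(round(early, 1), round(late, 1))
```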

https://la.disneyresearch.com/publication/realistic-and-interactive-robot-gaze
« Last Edit: October 30, 2020, 01:59:38 AM by vox_mundi »

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #535 on: October 31, 2020, 07:03:16 PM »







vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #536 on: November 03, 2020, 02:17:07 AM »
Sco-ooo-re!!!!: AI Tracking Camera Mistakes Referee’s Bald Head for a Soccer Ball
https://petapixel.com/2020/11/02/ai-tracking-camera-mistakes-referees-bald-head-for-a-soccer-ball/



AI cameras have come a long way when it comes to object recognition and tracking, but sometimes the “intelligence” can fail in humorous ways. At a recent professional soccer match in Scotland, the AI broadcast camera tasked with tracking the soccer ball kept getting distracted by the sideline referee’s bald head.

As you can see in the 3-minute highlights video above, the camera continually panned away from the actual soccer action to center the frame on the ref’s head, which does look remarkably similar to the ball.

The Scottish Inverness Caledonian Thistle FC soccer club began livestreaming home games at its Caledonian Stadium for season ticket holders and its pay-per-view audience.

Instead of human camera operators, the club turned to Pixellot, a company that brings AI automation to broadcasts.
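Pixellot hasn't published why its tracker latched onto the referee, but the failure mode is easy to caricature: if a tracker scores candidate blobs on features like roundness and brightness, a bald head can score almost exactly like a ball. The function, weights, and feature values below are invented purely for illustration:

```python
# Caricature of a blob-scoring ball tracker -- NOT Pixellot's algorithm.
# Weights and feature values are invented to show the ambiguity.
def ball_likelihood(roundness, brightness, size_px):
    size_ok = 1.0 if 10 <= size_px <= 40 else 0.0   # plausible on-screen size
    return 0.5 * roundness + 0.3 * brightness + 0.2 * size_ok

ball = ball_likelihood(roundness=0.95, brightness=0.90, size_px=18)
head = ball_likelihood(roundness=0.90, brightness=0.85, size_px=22)
pitch = ball_likelihood(roundness=0.10, brightness=0.30, size_px=500)
print(ball, head, pitch)   # the ball and the bald head are nearly tied
```

A tracker with nothing but blob features to go on has no principled way to break that tie, which is why real systems add motion models and context.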

https://mobile.twitter.com/seagull81/status/1320132156774023168



The commentator had to apologise today as the camera kept on mistaking the ball for the linesman’s head...

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #537 on: November 05, 2020, 12:50:24 AM »


... Remote controlling cockroaches isn’t a new idea, and it’s a fairly simple one. By stimulating the left or right antenna nerves of the cockroach, you can make it think that it’s running into something, and get it to turn in the opposite direction. Add wireless connectivity, some fiducial markers, an overhead camera system, and a bunch of cyborg cockroaches, and you have a resilient swarm that can collaborate on tasks. The researchers suggest that the swarm could be used as a display (by making each cockroach into a pixel), to transport objects, or to draw things. There’s also some mention of “input or haptic interfaces or an audio device,” which frankly sounds horrible.
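The steering principle is simple enough to sketch. Assuming a swarm controller that gets each roach's heading and the bearing to its goal from the overhead camera (the function name and deadband threshold below are illustrative):

```python
# Hedged sketch of the steering principle described: stimulating an
# antenna makes the roach think it touched an obstacle on that side,
# so it turns the other way. A swarm controller then only needs
# left/right pulses to herd each roach toward a waypoint.
def antenna_command(heading_deg: float, bearing_to_goal_deg: float,
                    deadband_deg: float = 10.0) -> str:
    """Return which antenna to pulse so the roach turns toward its goal."""
    # Signed heading error, wrapped into (-180, 180].
    error = (bearing_to_goal_deg - heading_deg + 180) % 360 - 180
    if abs(error) <= deadband_deg:
        return "none"                 # close enough: let it run straight
    # Goal to the right (+error): pulse the LEFT antenna -> roach turns right.
    return "left" if error > 0 else "right"

print(antenna_command(0, 45))    # goal to the right -> pulse left antenna
print(antenna_command(0, -90))   # goal to the left  -> pulse right antenna
print(antenna_command(0, 5))     # roughly on course -> no stimulation
```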

Where have we seen this before?


vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #538 on: November 07, 2020, 02:22:48 AM »
Army Wants Smaller Brigades, Stronger Divisions & Lots Of Robots
https://breakingdefense.com/2020/11/army-wants-smaller-brigades-stronger-divisions-lots-of-robots/

... Army studies of recent conflicts – Russia vs. Ukraine, Armenia vs. Azerbaijan – show you can have a dramatic impact by adding a small infusion of 21st century tech to a largely Cold War force, Donahoe said. How? One approach the Russians have employed to devastating effect is to use drones to spot targets for rocket launchers.

Likewise, while the US Army is developing a host of new missiles, armored vehicles, and aircraft, most units will be using Reagan-era hardware for years to come. In essence, Donahoe wants to organize these existing weapons in new formations and add drones and ground robots to scout ahead.

... Current Army doctrine prescribes “making contact with the smallest element.” In layman’s terms, if you must stumble upon the enemy and get shot at (the formal term for this is a “meeting engagement”), then do it with the smallest vanguard possible, giving the main body time to prepare and maneuver without being pinned down. In the future, Donahoe said, the goal will be to make first contact with an unmanned element.



In the new concept, according to a briefing at the conference, a Forward Line Of Unmanned Aerial Systems (FLUA) will fly ahead through no-man’s-land into enemy-held territory, followed by a Forward Line Of Robots (FLOR) on the ground, followed in turn by the Forward Line Of (Human) Troops. The unmanned systems will flush out the enemy, stumble into meeting engagements and ambushes, take and receive the first hits, and map the enemy position for the human troops coming along behind them.

To see the enemy before they shoot a human soldier, the Army wants to issue reconnaissance drones to units at every echelon, from long-range aircraft to palm-top micro-drones:

https://breakingdefense.com/2019/06/army-buys-9000-mini-drones-for-squads-rethinks-ground-robots-for-2020/



... The Army is also developing new Unmanned Ground Vehicles for reconnaissance, combat, and resupply:

The Robotic Combat Vehicle (RCV) Heavy is basically an unmanned light tank, in the 20 to 30-ton range. The first experimental RCV designs will be field-tested in 2023.

... The Army is also converting manned supply trucks into self-driving Leader-Follower vehicles, fielding a tracked mine-clearing robot called the M160 Flail, and modernizing its fleet of remote-controlled bomb-squad-style robots.

------------------------------------------


Bomb sniffing 'dog' ... the disruptor flies off backwards to reduce recoil on the robot, and has its own parachute to keep it from going too far.

--------------------------------------------


Pope and AI

------------------------------------------

AI-Directed Robotic Hand Learns How to Grasp
https://spectrum.ieee.org/automaton/robotics/humanoids/robotic-hand-uses-artificial-neural-network-to-learn-how-to-grasp-different-objects

Conventional robotic systems must perform extensive calculations, Tieck says, to track trajectories and grasp objects. But a robotic system like Tieck’s, which relies on a spiking neural network (SNN), first trains its neural net to better model system and object motions, after which it grasps items more autonomously by adapting to motion in real time.

The new robotic system by Tieck and his colleagues uses an existing robotic hand, called a Schunk SVH 5-finger hand, which has the same number of fingers and joints as a human hand.



... The robotic grasping system is described in a study published October 24 in IEEE Robotics and Automation Letters. The researchers’ robotic hand used its three different grasping motions on objects without knowing their properties. Target objects included a plastic bottle, a soft ball, a tennis ball, a sponge, a rubber duck, different balloons, a pen, and a tissue pack. The researchers found, for one, that pinching motions required more precision than cylindrical or spherical grasping motions. ...
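As a toy illustration of how grasp type relates to object geometry: the study's SNN learns this mapping from experience, but a rule-of-thumb version might look like the function below. The thresholds and dimensions are mine, not the authors':

```python
# Toy rule-of-thumb mapping from rough object geometry to one of the
# three grasp primitives mentioned in the study. Thresholds invented
# for illustration; the real system selects via a spiking neural net.
def choose_grasp(width_cm: float, height_cm: float) -> str:
    if width_cm < 3:
        return "pinch"         # thin items (pen, tissue pack) need precision
    if height_cm / width_cm > 1.5:
        return "cylindrical"   # tall objects: plastic bottle
    return "spherical"         # roundish objects: tennis ball, balloon

print(choose_grasp(1.0, 14.0))   # pen -> pinch
print(choose_grasp(6.5, 20.0))   # plastic bottle -> cylindrical
print(choose_grasp(6.7, 6.7))    # tennis ball -> spherical
```

This also makes the researchers' observation concrete: the pinch branch handles the smallest contact areas, which is where precision matters most.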

https://ieeexplore.ieee.org/document/9240049

-----------------------------------------

Meet the AI That’s Defeating Automated Fact Checkers
https://www.defenseone.com/technology/2020/11/meet-ai-s-defeating-automated-fact-checkers/169840/

Social media companies are using lie-detecting algorithms to reduce the amount of disinformation they spread. That’s not going to be good enough.

... Dubbed “Malcom,” the comment-writing AI beat five of the leading neural network detection methods around 93.5 percent of the time. It bested “black box” fake news detectors — neural nets that reach their conclusions via opaque statistical processes — 90 percent of the time.

... But Malcom doesn’t just fool moderating programs into giving fake news a pass. It can also be used to demote real news in people’s feeds, the researchers write.
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

gerontocrat

  • Multi-year ice
  • Posts: 20378
    • View Profile
  • Liked: 5289
  • Likes Given: 69
Re: Robots and AI: Our Immortality or Extinction
« Reply #539 on: November 07, 2020, 05:37:12 PM »
Meet the AI That’s Defeating Automated Fact Checkers
https://www.defenseone.com/technology/2020/11/meet-ai-s-defeating-automated-fact-checkers/169840/

Social media companies are using lie-detecting algorithms to reduce the amount of disinformation they spread. That’s not going to be good enough.

... Dubbed “Malcom,” the comment-writing AI beat five of the leading neural network detection methods around 93.5 percent of the time. It bested “black box” fake news detectors — neural nets that reach their conclusions via opaque statistical processes — 90 percent of the time.

... But Malcom doesn’t just fool moderating programs into giving fake news a pass. It can also be used to demote real news in people’s feeds, the researchers write.
Did "Malcolm" write the report that reported that "Malcom" defeated........

And was it a friend of Malcolm that wrote the above post, or was it this post?
"Para a Causa do Povo a Luta Continua!"
"And that's all I'm going to say about that". Forrest Gump
"Damn, I wanted to see what happened next" (Epitaph)

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #540 on: November 07, 2020, 05:45:21 PM »
^ Yes  8)



This sentence is false.

See also:  https://en.m.wikipedia.org/wiki/Liar_paradox
« Last Edit: November 08, 2020, 12:53:28 AM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #541 on: November 10, 2020, 07:00:59 PM »
Skills Development in Physical AI Could Give Birth to Lifelike Intelligent Robots
https://techxplore.com/news/2020-11-skills-physical-ai-birth-lifelike.html

Artificial intelligence can be manifested in corporeal and non-corporeal forms. Researchers Miriyev and Kovač introduce the concept of physical artificial intelligence, which refers to the emerging trend in robotics to create physical systems by co-evolving the body, control, autonomy, morphology, and actuation and sensing. To support their vision, the authors provide a blueprint for training researchers and establishing institutional environments.

...  "The development of robot 'bodies' has significantly lagged behind the development of robot 'brains'. Unlike digital AI, which has been intensively explored in the last few decades, breathing physical intelligence into them has remained comparatively unexplored."

The authors identified five main disciplines that are essential for creating Physical AI: materials science, mechanical engineering, computer science, biology and chemistry.



... We are proposing to think of AI in a broader sense and co-develop physical morphologies, learning systems, embedded sensors, fluid logic and integrated actuation.

Professor Kovac said: "We envision Physical AI robots being evolved and grown in the lab by using a variety of unconventional materials and research methods. Researchers will need a much broader stock of skills for building lifelike robots. Cross-disciplinary collaborations and partnerships will be very important."

The researchers intend to implement the Physical AI methodology in their research and education activities to pave the way to human-robot ecosystems.

"Skills for Physical Artificial Intelligence" Nature Machine Intelligence (2020).
https://www.nature.com/articles/s42256-020-00258-y

http://www.imperial.ac.uk/news/208053/skills-development-physical-ai-could-cultivate/
« Last Edit: November 11, 2020, 04:19:26 AM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #542 on: November 17, 2020, 11:59:03 PM »
US Army-Funded Algorithm Decodes Brain Signals
https://www.defenseone.com/technology/2020/11/us-army-funded-algorithm-decodes-brain-signals/170102/

A new machine-learning algorithm can successfully determine which specific behaviors—like walking and breathing—belong to which specific brain signal, and it has the potential to help the military maintain a more ready force.

At any given time, people perform a myriad of tasks. All of the brain and behavioral signals associated with these tasks mix together to form a complicated web. Until now, this web has been difficult to untangle and translate.

But researchers funded by the U.S. Army developed a machine-learning algorithm that can model and decode these signals, according to a Nov. 12 press release. The research, which used standard brain datasets for analysis, was recently published in the journal Nature Neuroscience.

https://www.army.mil/article/240796

“Our algorithm can, for the first time, dissociate the dynamic patterns in brain signals that relate to specific behaviors and is much better at decoding these behaviors,” Dr. Maryam Shanechi, the University of Southern California engineering professor who led the research, said in a statement.

... “The algorithm has significant implications for basic science discoveries,” Krim said. “The algorithm can discover shared dynamic patterns between any signals beyond brain signals, which is widely applicable for the military and many other medical and commercial applications.”

... The research is part of an effort to establish a machine-brain interface. Eventually, Krim said, it may contribute to technology that can not only interpret signals from the brain but also send signals back, helping individuals take automatic corrective action for certain behaviors.

In the future, the new algorithm could also enhance future brain-machine interfaces by decoding behaviors better. For example, the algorithm could help allow paralyzed patients to directly control prosthetics by thinking about the movement.

Imagination is the only limit when it comes to the potential of this technology, Krim said. Another futuristic application could enable soldiers to communicate with each other without ever opening their mouths.

“If you’re in the theater, and you can’t talk, you can’t even whisper, but you can still communicate,” Krim said. “If you can talk to your machine, and the machine talks to the other machine, and the machine talks to the other soldier, you have basically a full link without ever uttering a word."

Omid G. Sani, et.al, Modeling behaviorally relevant neural dynamics enabled by preferential subspace identification, Nature Neuroscience, (2020)
https://www.nature.com/articles/s41593-020-00733-0
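The paper's preferential subspace identification is specialized, but the underlying task — decoding a behavioral variable from multichannel neural activity — can be sketched with plain linear regression on synthetic data. Illustrative only; the actual method dissociates behaviorally relevant dynamics, which ordinary least squares does not:

```python
# Synthetic sketch of decoding "behavior" from neural signals with
# linear regression. Data and dimensions are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

T, n_channels = 1000, 30
neural = rng.standard_normal((T, n_channels))   # fake multichannel recording

# A hidden linear readout plus noise generates the behavior trace
# (e.g., limb velocity); the decoder must recover the mapping.
true_w = rng.standard_normal(n_channels)
behavior = neural @ true_w + 0.1 * rng.standard_normal(T)

# Fit on the first 800 steps, evaluate on the held-out 200.
train, test = slice(0, 800), slice(800, None)
w, *_ = np.linalg.lstsq(neural[train], behavior[train], rcond=None)
pred = neural[test] @ w

r = np.corrcoef(pred, behavior[test])[0, 1]
print(f"held-out decoding correlation: {r:.3f}")
```

Real recordings mix many behaviors at once, which is exactly the entangled web the Army-funded algorithm is built to untangle.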

------------------------------------------

... and then, one day, the machines will talk to each other and bypass the humans altogether ...

------------------------------------------

Pilot In A Real Aircraft Just Fought An AI-Driven Virtual Enemy Jet For The First Time
https://www.thedrive.com/the-war-zone/37647/pilot-in-a-real-aircraft-just-fought-an-ai-driven-virtual-enemy-jet-for-the-first-time



Two U.S. companies have recently completed what they say is the world’s first dogfight between a real aircraft and an artificial intelligence-driven virtual fighter jet. The experiment, run by Red 6 and EpiSci, is the first step toward similar technology being provided to U.S. military fighter pilots, which would allow them to battle virtual adversaries as part of augmented reality training.

The live-flight augmented reality dogfight involved a Freeflight Composites Berkut 560 experimental plane and a simulated, reactive adversary aircraft in the form of a computer-generated projection inside the Berkut pilot's augmented reality helmet-mounted display. The adversary was a representation of the Chinese J-20 stealth fighter, created in augmented reality by EpiSci’s Tactical AI technology. The unusual aerial contest took place out of Camarillo Airport, California.

EpiSci drew upon its previous work in the U.S. Defense Advanced Research Projects Agency’s (DARPA) Alpha Dogfight program to create its Tactical AI technology in a hybrid AI system. In this way, the kind of AI-driven simulation that was previously found only in traditional ground-based simulators can be introduced to the cockpit — in this case, presenting the pilot in the real aircraft with a simulated adversary flying a J-20 fighter.

The demonstration also used the Airborne Tactical Augmented Reality System (ATARS) developed by Red 6, which includes the display and control systems needed to inject augmented reality into the real world of the cockpit, and then for these virtual entities to interact with the surroundings as if they were a part of the real world.

There is also the potential to employ Tactical AI in scenarios beyond replicating enemies in air combat exercises. More generally, it could present various other scenarios to military pilots at different levels of instruction, for example, simulating flight in a larger formation, including alongside unmanned loyal wingmen.

---------------------------------

Tyndall Air Force Base to Receive Military’s First Robot Dogs
https://warisboring.com/tyndall-air-force-base-to-receive-militarys-first-robot-dogs/
https://www.af.mil/News/Article-Display/Article/2413766/computerized-canines-to-join-tyndall-afb/



TYNDALL AIR FORCE BASE, Fla. (AFNS) --
Over the last year, Tyndall Air Force Base and the 325th Security Forces Squadron have been working with Ghost Robotics to develop a system to enhance security and safety for the base population.

Tyndall AFB will be one of the first Air Force bases to implement semi-autonomous robot dogs into its patrolling regimen.

Ghost Robotics held a demonstration Tuesday morning next to Maxwell Flag Park to show a few dozen airmen and civilians how the robots work. The almost 100-pound robots, which look somewhat like dogs, can be controlled with a remote but will operate autonomously around the base as security.

Hurricane Michael in 2018 significantly damaged static cameras, sensor platforms and fence lines in Tyndall’s integrated defense operation. Maj. Jordan Criss, the 325th Security Forces Squadron commander, has been working with Ghost Robotics for several years to get the robot dogs to Tyndall.

... “These robot dogs will be used as a force multiplier for enhanced situational awareness by patrolling areas that aren’t desirable for human beings and vehicles,” Criss said. “Rather than using a person, we can now leverage technology. We can use these robotic sentries to go out and sweep massive areas.”

Criss explained that the robot dogs will be given a patrol path which will be set and monitored by the Security Forces Electronic Security Sensor System noncommissioned officer in charge.

According to Ghost Robotics CEO Jiren Parikh, the robot dogs can move at 7.5 feet per second, covering the length of a football field in about 40 seconds.

Parikh said his company plans in the near future to have the robot dogs moving at up to 10 feet per second. [... A Rottweiler can do 30 feet per second in a sprint; it can reach the fence in 2.8 seconds. Can you?]

“We will be able to drive them via a virtual-reality headset within our Base Defense Operations Center,” Criss said. “We will be able to see exactly what the robot dog is detecting through its mobile camera and sensor platform if desired, we will also be able to issue verbal commands to a person or people through a radio attached to the dogs.”

This technology has the potential to replace and exceed the capabilities of certain static defense equipment, especially in a contingency, disaster, or deployed environment. This makes Tyndall AFB, post-Hurricane Michael, the perfect home for the Air Force’s newest computerized canines.


Fahrenheit 451 - The Hound

-----------------------------------

Boston Dynamics Dog Robot 'Spot' Learns New Tricks On BP Oil Rig
https://www.reuters.com/article/us-bp-boston-dynamics-robot-oil-rig-idUSKBN27T2SB

On an oil rig operated by BP Plc nearly 190 miles (305 km) offshore in the Gulf of Mexico, the company is programming Spot to read gauges, look for corrosion, map out the facility and even sniff out methane on its Mad Dog rig.

... “Several hours a day, several operators will walk the facility; read gauges; listen for noise that doesn’t sound right; look out at the horizon for anomalies, boats that may not be caught on radar; look for sheens,” Ballard said.

“What we’re doing with Spot is really trying to replicate that observation piece,” Ballard said, adding that an operator could then review the information from a central location.

“We’ve got multispectral imaging that basically you can see many bands across that spectrum... to be able to see things that the human eye can’t see,” said Ballard.

Spot also has an integrated gas sensor that is programmed to shut the robot down if it detects a methane leak.

---------------------------------------

QinetiQ Delivers Armed Scout Robot To Army: RCV-L
https://breakingdefense.com/2020/11/qinetiq-delivers-armed-scout-robot-to-army-rcv-l/

WASHINGTON: Robot-builder QinetiQ formally delivered the first of four experimental Robotic Combat Vehicles (Light) to the Army on Nov. 5, the company has announced. They will be used alongside four Textron-built RCV-Mediums in field tests.

After their delivery, the Army plans to buy 16 more of each variant as it scales up to more complex experiments. Those 2022 exercises will determine the feasibility of the service’s ambitious plans for a “forward line of robots” to precede human troops into battle.

The RCV-L also carries a mini-drone, the HoverFly Tethered Unmanned Aerial System, which it can launch to look over buildings, hills, and obstacles while the ground vehicle stays hidden. The drone is physically connected to the robot by a power and communications cable, even during flight – hence the term “tethered.” That does limit its range but effectively allows it unlimited flight time.
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #543 on: November 19, 2020, 12:27:25 AM »
AI System Beats Supercomputer in Combustion Simulation
https://spectrum.ieee.org/tech-talk/computing/hardware/ai-system-beats-supercomputer-at-key-scientific-simulation

Cerebras Systems, which makes a specialized AI computer based on the largest chip ever made, is breaking out of its original role as a neural-network training powerhouse and turning its talents toward more traditional scientific computing. In a simulation with 500 million variables, the CS-1 trounced the 69th-most-powerful supercomputer in the world.

It also solved the problem—combustion in a coal-fired power plant—faster than the real-world flame it simulates.
To top it off, Cerebras and its partners at the U.S. National Energy Technology Center claim, the CS-1 performed the feat faster than any present-day CPU or GPU-based supercomputer could.

And because the simulation completed faster than the real-world combustion event being simulated, the CS-1 could now have a new job on its hands—playing a role in control systems for complex machines.

https://arxiv.org/pdf/2010.03660.pdf

The CS-1 contains a single piece of silicon with 400,000 cores, 18 gigabytes of memory, 9 petabytes per second of memory bandwidth, and 100 petabits per second of core-to-core bandwidth.

Scientists at NETL simulated combustion in a power plant using both a Cerebras CS-1 and the Joule supercomputer, which has 84,000 CPU cores and consumes 450 kilowatts. By comparison, the CS-1 runs on about 20 kilowatts. The Joule supercomputer completed the calculation in 2.1 milliseconds; the CS-1 was more than 200 times faster, finishing in 6 microseconds.

A next-generation CS-1 is in the works, the company says. The first generation used TSMC’s 16-nanometer process, but Cerebras already has a 7-nanometer version in hand with more than double the memory (40 GB) and more than double the AI processor cores (850,000).

https://www.cerebras.net/product/
https://www.cerebras.net/

https://spectrum.ieee.org/semiconductors/processors/cerebrass-giant-chip-will-smash-deep-learnings-speed-barrier.amp.html
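For context on why a wafer-scale chip wins here: the NETL problem is a sparse solve over a regular grid, where each point needs only its immediate neighbors, a communication pattern that maps naturally onto the CS-1's core-to-core fabric. A toy 1-D Jacobi sketch of that pattern (illustrative only; the real workload is a 3-D fluid-dynamics solve):

```python
# Toy 1-D Jacobi iteration for -u'' = f with zero boundary values.
# Each grid point updates from its two neighbors only -- the
# nearest-neighbor traffic that wafer-scale fabrics exploit.
def jacobi_step(u, f, h):
    new = u[:]  # boundaries stay fixed
    for i in range(1, len(u) - 1):
        new[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return new

n = 11
h = 1.0 / (n - 1)
u = [0.0] * n          # initial guess
f = [1.0] * n          # constant source term
for _ in range(500):
    u = jacobi_step(u, f, h)

# The exact solution is u(x) = x(1 - x)/2, which peaks at 0.125.
print(round(u[n // 2], 3))  # -> 0.125
```

On a conventional cluster each update waits on network hops; on the CS-1 the neighbor values are one core away, which is where the speedup comes from.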
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #544 on: November 19, 2020, 01:11:31 AM »
One Machine to Rule Them All: A ‘Master Algorithm’ May Emerge Sooner Than You Think
https://thenextweb.com/artificial-intelligence/2018/04/17/one-machine-to-rule-them-all-a-master-algorithm-may-emerge-sooner-than-you-think/

“A Mathematical Framework for Superintelligent Machines"
https://arxiv.org/ftp/arxiv/papers/1804/1804.03301.pdf

------------------------------------------

Here’s How AI Machines Become Sentient
https://thenextweb.com/neural/2020/11/18/neurals-guide-to-the-glorious-future-of-ai-heres-how-machines-become-sentient/

... five separate ways AI could gain human-level intelligence and awareness:

  • Machine consciousness is back-doored via quantum computing
  • A new calculus creates the Master Algorithm
  • Scientists develop 1:1 replication of organic neural networks
  • Cloud consciousness emerges through scattered node optimization
  • Alien technology

... Allowing machines to modify their own model of the world and themselves may create “conscious” machines, where the measure of consciousness may be taken to be the number of uses of feedback loops between a class calculus’s model of the world and the results of what its robots actually caused to happen in the world.

See also

A Beginner’s Guide to the AI Apocalypse: Misaligned Objectives
https://thenextweb.com/artificial-intelligence/2019/11/04/a-beginners-guide-to-the-ai-apocalypse-misaligned-objectives/



https://video.disney.com/watch/sorcerer-s-apprentice-fantasia-4ea9ebc01a74ea59a5867853

------------------------------------------

Deep Learning Helps Robots Grasp and Move Objects With Ease
https://techxplore.com/news/2020-11-deep-robots-grasp-ease.html

UC Berkeley engineers have created new software that combines neural networks with motion planning software to give robots the speed and skill to assist in warehouse environments.

Neural networks allow a robot to learn from examples. Later, the robot can often generalize to similar objects and motions.

However, these approximations aren't always accurate enough. Goldberg and Ichnowski found that the approximation generated by the neural network could then be optimized using the motion planner.

"The neural network takes only a few milliseconds to compute an approximate motion. It's very fast, but it's inaccurate," Ichnowski said. "However, if we then feed that approximation into the motion planner, the motion planner only needs a few iterations to compute the final motion."

By combining the neural network with the motion planner, the team cut average computation time from 29 seconds to 80 milliseconds, or less than one-tenth of a second.

Goldberg predicts that, with this and other advances in robotic technology, robots could be assisting in warehouse environments in the next few years.



J. Ichnowski et al., "Deep learning can significantly accelerate grasp-optimized motion planning," Science Robotics (2020).
https://robotics.sciencemag.org/content/5/48/eabd7710
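The mechanism behind the 29-second-to-80-millisecond drop is warm-starting: the network's cheap approximate motion seeds the optimizer, which then needs only a few refinement iterations. A toy sketch with a hypothetical objective (not the actual grasp-planning cost):

```python
# Warm-starting sketch: gradient descent on a toy quadratic objective,
# counting iterations to convergence from a naive ("cold") start versus
# a start seeded by a cheap approximation (standing in for the neural
# net). Objective and constants are hypothetical.
def iterations_to_converge(x0, lr=0.1, tol=1e-8, max_iter=10_000):
    """Minimize f(x) = (x - 3)^2; its gradient is 2 * (x - 3)."""
    x = x0
    for k in range(max_iter):
        grad = 2.0 * (x - 3.0)
        if abs(grad) < tol:    # converged
            return k
        x -= lr * grad
    return max_iter

cold = iterations_to_converge(100.0)   # naive initial guess
warm = iterations_to_converge(3.01)    # near-optimal "learned" guess
print(f"cold: {cold} iterations, warm: {warm} iterations")
```

The warm start converges in noticeably fewer iterations; in the Berkeley system the same trade lets an inaccurate-but-instant neural guess be polished by a planner in milliseconds.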

-------------------------------------------
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #545 on: November 20, 2020, 02:10:31 AM »
... this panelist from the 2020 Army Futures Command Conference must be a sales rep from Cyberdyne Systems: SkyNet Division ...

---------------------------------------

Let Your Robots Off The Leash – Or Lose: AI Experts
https://breakingdefense.com/2020/11/let-your-robots-off-the-leash-or-lose-ai-experts/

... A DARPA-Army program called SOSU (System-of-Systems Enhanced Small Unit) is simulating a future company-sized unit – 200 to 300 soldiers – reinforced by hordes of highly autonomous drones and ground robots, explained the panelist.

... “[When] we gave the capabilities to the AI to control [virtual] swarms of robots and unmanned vehicles,” he said, “what we found, as we ran the simulations, was that the humans constantly want to interrupt them.”

One impact of human micromanagement is obvious. It slows things down. It’s true for humans as well — a human soldier who has to ask his superiors for orders will react more slowly than one empowered to take the initiative. It’s an even bigger brake on an AI, whose electronic thought processes can cycle far faster than a human’s neurochemical brain. An AI that has to get human approval to shoot will be beaten to the draw by an AI that doesn’t.

If you have to transmit an image of the target, let the human look at it, and wait for the human to hit the “fire” button, “that is an eternity at machine speed,” he said. “If we slow the AI to human speed…we’re going to lose.”

The second problem is the network. It must work all the time. If your robot can’t do X without human permission, and it can’t get human permission because it and the human can’t communicate, your robot can’t do X at all.

You also need a robust connection, because the computer can’t just text: “Win war? Reply Y/N.” It has to send the human enough data to make an informed decision, which for use of lethal weapons requires sending at least one clear picture of the target and, in many cases, video. But transmitting video takes the kind of high-bandwidth, long-range wireless connection that’s hard to keep unbroken and stable when you’re on Zoom at home, let alone on the battlefield where the enemy is jamming your signals, hacking your network, and bombing any transmitter they can trace.

The third problem is the most insidious. If humans are constantly telling the AI what to do, it’ll only do things humans can think of doing. But in simulated conflicts from Chess to Go to Starcraft, AI consistently surprises human opponents with tactics no human ever imagined. Most of the time, the crazy tactics don’t actually work, but if you let a “reinforcement learning” AI do trial and error over thousands or millions of games – too many for a human to watch, let alone play – then it will eventually stumble onto brilliant moves.

“You probably don’t want to expect it to behave just like a human,” said an Army researcher whose team has run hundreds of thousands of virtual fights. “That’s probably one of the main takeaways from these simulated battles.”



“It’s very interesting,” agreed a senior Army scientist, “to watch how the AI discovers, on its own, … some very tricky and interesting tactics. [Often you say], ‘oh whoa, that’s pretty smart, how did it figure out that one?’”

DARPA and the Intelligence Community are working hard on “explainable AI”: a system that not only crunches data, performs mysterious math, and outputs a conclusion, but actually explains why it came to that conclusion in terms a human can understand.

Unfortunately, machine learning operates by running complex calculations of statistical correlations in enormous datasets and most people can’t begin to follow the math. Even the AI scientists who wrote the original equations can’t manually check every calculation a computer makes. If you require your AI to only use logic that humans can understand, it’s a bit like asking a police dog to track suspects by only following scents a human can smell, or asking Michelangelo to paint exclusively in black and white for the benefit of the colorblind.

“There’s been an over-emphasis on explainability,” one private-sector scientist said. “That would be a huge, huge limit on AI.”

“There are very sophisticated computations that are happening in my smart phone when it takes pictures,” the senior Army scientist said. “There are some very sophisticated computations in the engine of my car as it decides how much fuel to inject in each cylinder at each moment. I do not want to participate in those decisions. I should not participate in those decisions. I must not be allowed to participate in those decisions.”

“There is an unfortunate tendency for the humans to try to micromanage AI,” the scientist continued. “Everybody will have to get used to the fact that AI exists. It’s around us. It’s with us, and it thinks and acts differently than we do.”

“Decisionmakers need to understand,” agreed the expert on the SOSU experiments, “that an AI, at some point, will have to be let go.”

-----------------------------------

What do you have to lose? What could possibly go wrong? ...



-----------------------------------------------

UK Defense Chief Says Army Could Have 30,000 Robots by the 2030s
https://www.theguardian.com/uk-news/2020/nov/08/third-world-war-a-risk-in-wake-of-covid-pandemic-says-uk-defence-chief

In the age of artificial intelligence, robots will soon represent a large part of the armed forces, according to the UK's chief of the defence staff Nick Carter, who predicted that up to a quarter of the army could be made up of autonomous systems in the near future.

This is not a movie; this is real life. It is not a drill. In about a decade, we might see a ton of robots working alongside humans in the UK army. But don’t worry, although some of the robots will have weapons, only humans will be able to fire them. Sure.

UK defense chief Gen. Nick Carter said in an interview on Sunday that by the 2030s, the country’s armed forces could include a large number of autonomous or remotely controlled machines, per the Guardian. The country’s Ministry of Defense had made robot warfare a major part of its five-year budget proposal. “... we will absolutely avail ourselves with autonomous platforms and robotics wherever we can,” said Carter.

"I suspect we could have an army of 120,000, of which 30,000 might be robots, who knows," he said. The current trained strength of the country's armed forces is just under 74,000.

-----------------------------------------

The Brits just increased their defense spending by the largest amount in 30 years. UK Prime Minister Boris Johnson’s latest plan raises the country’s current £41.5 billion budget by £16.5 billion, roughly a $22 billion increase, according to CNN.

Where’s the extra money going? Some of it will go toward "a new agency dedicated to Artificial Intelligence," the Defense Ministry said in a statement; other money will help stand up "a National Cyber Force" as well as a "Space Command" that's "capable of launching our first rocket in 2022." The UK will also "invest further in the Future Combat Air System," and toss some money toward “autonomous vehicles and aviation.”

https://www.gov.uk/government/news/pm-to-announce-largest-military-investment-in-30-years
« Last Edit: November 20, 2020, 02:44:11 AM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

gerontocrat

  • Multi-year ice
  • Posts: 20378
    • View Profile
  • Liked: 5289
  • Likes Given: 69
Re: Robots and AI: Our Immortality or Extinction
« Reply #546 on: November 20, 2020, 11:17:13 AM »
So, vox_mundi,

What's your guess as to when will be the first human casualties from an attack by AI controlled weapons without specific authorisation required from or given by a human?
"Para a Causa do Povo a Luta Continua!"
"And that's all I'm going to say about that". Forrest Gump
"Damn, I wanted to see what happened next" (Epitaph)

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #547 on: November 20, 2020, 11:19:36 PM »
So, vox_mundi,

What's your guess as to when will be the first human casualties from an attack by AI controlled weapons without specific authorisation required from or given by a human?

In a sense, that ship has already sailed.

The US, China, Israel, Turkey, Iran, and several other countries have already fielded fire-and-forget autonomous loitering munitions such as the Harop, Harpy, and Switchblade.



https://en.m.wikipedia.org/wiki/IAI_Harop#Combat_history

... The Harop can either operate fully autonomously, or it can take a man-in-the-loop mode, being controlled by a remote operator. ... IAI Harop drones operated by Azerbaijan were used to destroy buses of Armenian soldiers being transported to the frontline. ... In the recent Nagorno-Karabakh 2020 conflict many Armenian tanks and troop formations were destroyed by Harop and Turkish drones.

https://www.iai.co.il/p/harpy
https://www.iai.co.il/p/mini-harpy
https://www.iai.co.il/p/harop
https://www.avinc.com/tms/switchblade

... Switchblade 300/600 features increased lethality, reach and precision strike capabilities with low collateral effects. Remotely piloted or flown autonomously, Switchblade can provide real-time GPS coordinates and video for information gathering, targeting, or feature/object recognition. The vehicle's small size and quiet electric motor make it difficult to detect, recognize, and track, even at close range. [... what's not to like? /sarc]



https://en.m.wikipedia.org/wiki/Loitering_munition

But a computer can be far more damaging and deadly by manipulating the critical infrastructure it controls.

Cyber Command has allowed autonomous response to certain levels of cyber attack. 





U.S. Cyber Command Presentation: Assessing Actions Along the Spectrum of Cyberspace Operations
https://publicintelligence.net/uscc-cyber-spectrum/

----------------------------------------

Meet the Classified Artificial Brain Developed by US Intelligence Programs: IT’S SENTIENT
https://www.theverge.com/2019/7/31/20746926/sentient-national-reconnaissance-office-spy-satellites-artificial-intelligence-ai

Until now, Sentient has been treated as a government secret, except for vague allusions in a few speeches and presentations. But recently released documents — many formerly classified secret or top secret — reveal new details about the program’s goals, progress, and reach.

... The agency has been developing this artificial brain for years, but details available to the public remain scarce. “It ingests high volumes of data and processes it,” says Furgerson. “Sentient catalogs normal patterns, detects anomalies, and helps forecast and model adversaries’ potential courses of action.” The NRO did not provide examples of patterns or anomalies, but one could imagine that things like “not moving a missile” versus “moving a missile” might be on the list. Those forecasts in hand, Sentient could turn satellites’ sensors to the right place at the right time to catch ill will (or whatever else it wants to see) in action. “Sentient is a thinking system,” says Furgerson. (... think Echelon cubed)

We don’t know, exactly ... which sorts of data sources Sentient may siphon in, but it’s clear that the program is interested in all kinds of information. Retired CIA analyst Allen Thomson goes further. “As I understand it, the intended — and aspirational — answer is ‘everything,’” he says. In addition to images, that could include financial data, weather information, shipping stats, information from Google searches, records of pharmaceutical purchases, social media and more, he says.

... The NRO notes that Sentient doesn’t keep people totally out of the process, providing some kind of check on its state of being. “Having humans in the loop overseeing the intelligence data and information is a key way of monitoring the algorithm’s performance,” says Furgerson. “Sentient is human-aided machine-to-machine learning.”
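The "catalogs normal patterns, detects anomalies" workflow the article describes is, at its core, baseline-and-outlier detection. A toy sketch of that idea — the data and the z-score threshold are invented for illustration, and have nothing to do with how Sentient actually works:

```python
# Toy sketch of "catalog normal patterns, detect anomalies":
# build a baseline from history, flag observations far outside it.
import statistics

def detect_anomalies(history, new_obs, z_threshold=3.0):
    """Flag observations that deviate from the cataloged 'normal' baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [x for x in new_obs if abs(x - mean) / stdev > z_threshold]

# e.g. daily vehicle counts at a site: a stable baseline, then a spike
baseline = [20, 22, 19, 21, 20, 23, 18, 21, 20, 22]
today = [21, 95]  # 95 is far outside normal variation
print(detect_anomalies(baseline, today))  # → [95]
```

The hard part at NRO scale is not the statistic but the "everything" ingest feeding it.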

Office of the Director of National Intelligence: Quadrennial Intelligence Community Review
See pg 6: Sentient AI

https://theintercept.com/document/2014/09/05/quadrennial-intelligence-review-final-report-2009/

https://www.nro.gov/Portals/65/documents/foia/declass/ForAll/051719/F-2018-00108_C05113686.pdf

https://www.nro.gov/Portals/65/documents/foia/declass/ForAll/051719/F-2018-00108_C05112980.pdf

https://www.nro.gov/Portals/65/documents/foia/declass/ForAll/051719/F-2018-00108_C05113682.pdf

Sentient AI has been operational for at least 8-10 years. It isn't the most advanced system they have.

---------------------------------------

When Autonomous Intelligent Goodware will Fight Autonomous Intelligent Malware: A Possible Future of Cyber Defense
https://arxiv.org/pdf/1912.01959

Abstract—In the coming years, the future of military combat will include, on one hand, artificial intelligence‒optimized complex command, control, communications, computers, intelligence, surveillance and reconnaissance (C4ISR) and networks and, on the other hand, autonomous intelligent Things fighting autonomous intelligent Things at a fast pace. Under this perspective, enemy forces will seek to disable or disturb our autonomous Things and our complex infrastructures and systems. Autonomy, scale and complexity in our defense systems will trigger new cyber-attack strategies, and autonomous intelligent malware (AIM) will be part of the picture. Should these cyber-attacks succeed while human operators remain unaware or unable to react fast enough due to the speed, scale or complexity of the mission, systems or attacks, missions would fail, our networks and C4ISR would be heavily disrupted, and command and control would be disabled. New cyber-defense doctrines and technologies are therefore required. Autonomous cyber defense (ACyD) is a new field of research and technology driven by the defense sector in anticipation of such threats to future military infrastructures, systems and operations. It will be implemented via swarms of autonomous intelligent cyber-defense agents (AICAs) that will fight AIM within our networks and systems.
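The AICA concept in the abstract — autonomous agents living inside the network, sensing, deciding and acting faster than a human could — reduces to a sense-decide-act loop. A minimal sketch, with all signal names and actions invented (the paper specifies an architecture, not this code):

```python
# Minimal sense-decide-act cycle for an autonomous intelligent
# cyber-defense agent (AICA). Sensor events, the 'known_bad' rule set,
# and the action names are placeholders invented for this sketch.
KNOWN_BAD = {"beaconing", "lateral_movement"}

def aica_step(observations):
    """One agent cycle: classify each sensed event and emit an action."""
    actions = []
    for event in observations:
        if event in KNOWN_BAD:
            actions.append(f"isolate_host({event})")  # act autonomously
        else:
            actions.append(f"observe({event})")       # keep monitoring
    return actions

print(aica_step(["dns_query", "beaconing"]))
```

A real AICA swarm would coordinate many such agents and learn its rule set rather than hard-code it; the point of the sketch is only the autonomous loop with no operator in it.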

Infrastructure Council Warns Trump That Chance to Thwart a Cyber 9/11 ‘Is Closing Quickly’
https://www.hstoday.us/subject-matter-areas/infrastructure-security/infrastructure-council-warns-trump-that-chance-to-thwart-a-cyber-9-11-is-closing-quickly/
« Last Edit: November 22, 2020, 03:26:45 AM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #548 on: November 21, 2020, 01:01:26 AM »

vox_mundi

  • Multi-year ice
  • Posts: 10165
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #549 on: November 21, 2020, 09:15:04 PM »
Douglas, the Latest Step Toward Realistic AI, Is Unsettling
https://www.msn.com/en-us/news/technology/douglas-the-latest-step-toward-realistic-ai-is-unsettling/ar-BB1beGm4



Digital Domain modeled Douglas off of its senior director of software R&D, Doug Roble, capturing his facial structure, movements, and mannerisms from all angles, as well as his voice. By creating as realistic a model as possible, the goal of Douglas is to make conversations between humans and machines feel easier and more natural.

Alarmingly, this technology is adaptive, Digital Domain explains. With just 10 minutes of video and 30 minutes of audio, Douglas can change his voice and face like a chameleon. Imagine chatting with this AI for 30 minutes and all of a sudden he turns into you. Chilling.



---------------------------------------

Army, MIT Explore Materials for Transforming Robots Made of Robots
https://www.eurekalert.org/pub_releases/2020-11/uarl-ame111920.php

ABERDEEN PROVING GROUND, Md. -- Scientists from the U.S. Army and MIT's Center for Bits and Atoms created a new way to link metamaterials with unique mechanical properties, opening up the possibility of future military robots made of robots.

The method unifies the construction of varying types of mechanical metamaterials using a discrete lattice, or Lego-like, system, enabling the design of modular materials with properties tailored to their application. These building blocks, and their resulting materials, could lead to dynamic structures that can reconfigure on their own; for example, a swarm of robots could form a bridge to allow troops to cross a river.

Motivated in part by swarms of tiny robots that link together to form any imaginable structure, as in the animated movie Big Hero 6, these metamaterials may also enable future high-performance robotics and impact/blast-absorbing structures, said U.S. Army researcher Bryan Glaz.

Glaz said researchers started out trying to build a bridge made of robots to support this vision, but the work has since evolved into mobile robots made of robots.

The paper addresses the design of modular structures and introduces a system which will enable the Army to build a variety of robots with unique properties like impact energy absorption. The materials researchers designed demonstrated a range of surprising and useful properties including extreme stiffness, toughness and unique couplings between displacement and rotation.
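The "Lego-like" discrete-lattice idea — a few repeated building-block types, each contributing a tailored mechanical property, composed into a larger structure — can be sketched in one dimension. The part types and stiffness numbers below are invented for illustration:

```python
# Toy sketch of a discrete-lattice material: a structure assembled from
# repeated building-block types with tailored properties. Part types and
# stiffness values are invented, not from the Army/MIT paper.
PART_STIFFNESS = {"rigid": 100.0, "compliant": 5.0, "auxetic": 20.0}

def assemble(layout):
    """layout: list of part-type names forming a 1-D chain of blocks.
    Returns the chain's effective stiffness (springs in series)."""
    return 1.0 / sum(1.0 / PART_STIFFNESS[p] for p in layout)

# A mostly-rigid beam with one compliant 'hinge' block is dominated by it:
print(round(assemble(["rigid", "rigid", "compliant", "rigid"]), 2))  # → 4.35
```

Swapping one block type rewrites the whole structure's behavior, which is the modular-design payoff the release describes.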
« Last Edit: November 21, 2020, 09:42:43 PM by vox_mundi »