
Author Topic: Robots and AI: Our Immortality or Extinction  (Read 57873 times)

vox_mundi

  • Young ice
  • Posts: 4751
    • View Profile
  • Liked: 2561
  • Likes Given: 366
Re: Robots and AI: Our Immortality or Extinction
« Reply #600 on: January 11, 2021, 09:41:01 PM »
Superintelligence Cannot Be Contained
https://techxplore.com/news/2021-01-wouldnt-superintelligent-machines.html



While more progress is being made all the time in Artificial Intelligence (AI), some scientists and philosophers warn of the dangers of an uncontrollable superintelligent AI. Using theoretical calculations, an international team of researchers, including scientists from the Center for Humans and Machines at the Max Planck Institute for Human Development, shows that it would not be possible to control a superintelligent AI.

Suppose someone were to program an AI system with intelligence superior to that of humans, so it could learn independently. Connected to the Internet, the AI may have access to all the data of humanity. It could replace all existing programs and take control of all machines online worldwide. Would this produce a utopia or a dystopia? Would the AI cure cancer, bring about world peace, and prevent a climate disaster? Or would it destroy humanity and take over the Earth?

Computer scientists and philosophers have asked themselves whether we would even be able to control a superintelligent AI at all, to ensure it would not pose a threat to humanity. An international team of computer scientists used theoretical calculations to show that it would be fundamentally impossible to control a super-intelligent AI.

"A super-intelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity," says study co-author Manuel Cebrian, Leader of the Digital Mobilization Group at the Center for Humans and Machines, Max Planck Institute for Human Development.

Scientists have explored two different ideas for how a superintelligent AI could be controlled. On the one hand, the capabilities of a superintelligent AI could be specifically limited, for example by walling it off from the Internet and all other technical devices so it could have no contact with the outside world—yet this would render the superintelligent AI significantly less powerful, less able to answer humanity's quests. Absent that option, the AI could be motivated from the outset to pursue only goals that are in the best interests of humanity, for example by programming ethical principles into it. However, the researchers also show that these and other contemporary and historical ideas for controlling super-intelligent AI have their limits.

In their study, the team conceived a theoretical containment algorithm that ensures a superintelligent AI cannot harm people under any circumstances, by first simulating the behavior of the AI and halting it if considered harmful. But careful analysis shows that, in our current paradigm of computing, such an algorithm cannot be built.

"If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations. If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable," says Iyad Rahwan, Director of the Center for Humans and Machines.

Based on these calculations, the containment problem is incomputable, i.e. no single algorithm can determine whether an AI would produce harm to the world. Furthermore, the researchers demonstrate that we may not even know when superintelligent machines have arrived, because deciding whether a machine exhibits intelligence superior to humans falls in the same realm as the containment problem.
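
The diagonalization at the heart of this result can be sketched in a few lines. This is only a toy illustration, not the paper's formal proof, and all names are invented: any total "containment oracle" that predicts whether a program will act harmfully is defeated by a program that asks the oracle about itself and does the opposite, the same trick Turing used for the halting problem.

```python
# Toy model of the incomputability argument: an adversarial program
# inverts whatever verdict a containment oracle gives about it, so no
# oracle can classify it correctly -- the same diagonal trick used in
# Turing's proof that the halting problem is undecidable.

def adversarial_program(oracle):
    """'Acts harmfully' (returns True) exactly when the oracle
    predicts it will not."""
    return not oracle(adversarial_program)

def wrong_verdicts():
    # An oracle is any function program -> bool; regarding its verdict
    # on our program, there are only two possible oracles to consider.
    wrong = []
    for verdict in (True, False):
        oracle = lambda prog, v=verdict: v
        if adversarial_program(oracle) != verdict:
            wrong.append(verdict)
    return wrong

print(wrong_verdicts())  # -> [True, False]: every oracle errs here
```

Whichever answer the oracle commits to, the program's actual behavior contradicts it, so no containment algorithm of this kind can be total and correct.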

Manuel Alfonseca et al. Superintelligence Cannot be Contained: Lessons from Computability Theory, Journal of Artificial Intelligence Research (2021).
https://jair.org/index.php/jair/article/view/12202

-----------------------------------------------

Samsung is Making a Robot That Can Pour Wine and Bring You a Drink
https://www.theverge.com/2021/1/11/22224649/samsung-bot-handy-care-robots-ces-2021

“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Young ice
  • Posts: 4751
    • View Profile
  • Liked: 2561
  • Likes Given: 366
Re: Robots and AI: Our Immortality or Extinction
« Reply #601 on: January 12, 2021, 09:56:54 AM »
Tweaking AI Software to Function Like a Human Brain Improves Computer's Learning Ability
https://gumc.georgetown.edu/news-release/tweaking-ai-software-to-function-like-a-human-brain-improves-computers-learning-ability/

Computer-based artificial intelligence can function more like human intelligence when programmed to use a much faster technique for learning new objects, say two neuroscientists who designed a model to mirror human visual learning. They reported their results in the journal Frontiers in Computational Neuroscience.

"Our model provides a biologically plausible way for artificial neural networks to learn new visual concepts from a small number of examples," says Riesenhuber. "We can get computers to learn much better from few examples by leveraging prior learning in a way that we think mirrors what the brain is doing."

Humans can quickly and accurately learn new visual concepts from sparse data—sometimes just a single example. Even three- to four-month-old babies can easily learn to recognize zebras and distinguish them from cats, horses, and giraffes. But computers typically need to "see" many examples of the same object to know what it is, Riesenhuber explains.

The big change needed was in designing software to identify relationships between entire visual categories, instead of trying the more standard approach of identifying an object using only low-level and intermediate information, such as shape and color, Riesenhuber says.

"The computational power of the brain's hierarchy lies in the potential to simplify learning by leveraging previously learned representations from a databank, as it were, full of concepts about objects," he says.

Riesenhuber and Rule found that artificial neural networks, which represent objects in terms of previously learned concepts, learned new visual concepts significantly faster.


Rule explains, "Rather than learn high-level concepts in terms of low-level visual features, our approach explains them in terms of other high-level concepts. It is like saying that a platypus looks a bit like a duck, a beaver, and a sea otter."
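
The platypus analogy can be sketched as code. This is a minimal illustration of the general idea, not the authors' model; every vector and number below is invented: represent a new category by its similarity to already-learned categories, then recognize it from a single example.

```python
import numpy as np

# Pretend embeddings from a previously trained network: each known
# concept is a point in some feature space (values are made up).
known_concepts = {
    "duck":      np.array([0.9, 0.1, 0.2]),
    "beaver":    np.array([0.2, 0.8, 0.3]),
    "sea_otter": np.array([0.3, 0.4, 0.9]),
}

def cos(a, b):
    # Cosine similarity between two feature vectors.
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def high_level_code(x):
    # Describe x by how much it resembles each known concept,
    # instead of by its raw low-level features.
    return np.array([cos(x, v) for v in known_concepts.values()])

# One-shot learning: store the high-level code of a single example.
platypus_proto = high_level_code(np.array([0.6, 0.5, 0.6]))

def looks_like_platypus(x, threshold=0.99):
    return cos(high_level_code(x), platypus_proto) > threshold
```

With these toy numbers, the classifier accepts the single training example it saw but rejects a duck-like input, because the duck's profile of similarities to the known concepts differs from the platypus's.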

----------------------------------------

Research Team Demonstrates World's Fastest Optical Neuromorphic Processor
https://techxplore.com/news/2021-01-team-world-fastest-optical-neuromorphic.html

An international team of researchers led by Swinburne University of Technology has demonstrated the world's fastest and most powerful optical neuromorphic processor for artificial intelligence (AI), which operates faster than 10 trillion operations per second (TeraOPs/s) and is capable of processing ultra-large scale data. Published in the journal Nature, this breakthrough represents an enormous leap forward for neural networks and neuromorphic processing in general.

The team demonstrated an optical neuromorphic processor operating more than 1000 times faster than any previous processor, with the system also processing record-sized ultra-large scale images—enough to achieve full facial image recognition, something that other optical processors have been unable to accomplish.

"This breakthrough was achieved with 'optical micro-combs," as was our world-record internet data speed reported in May 2020," says Professor Moss, Director of Swinburne's Optical Sciences Centre.

While state-of-the-art electronic processors such as the Google TPU can operate beyond 100 TeraOPs/s, this is done with tens of thousands of parallel processors. In contrast, the optical system demonstrated by the team uses a single processor and was achieved using a new technique of simultaneously interleaving the data in time, wavelength and spatial dimensions through an integrated micro-comb source.
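
The scaling arithmetic behind the interleaving claim is simple to sketch. The operating point below is hypothetical and not taken from the paper: with N wavelength channels each carrying symbols at rate B, and the usual convention of 2 operations per multiply-accumulate, throughput grows as 2·N·B.

```python
# Back-of-envelope throughput for a wavelength-interleaved processor.
# Counting 1 multiply-accumulate = 2 operations, N parallel comb
# wavelengths at B symbols/second deliver 2*N*B operations/second.

def effective_tops(n_wavelengths, symbol_rate_hz, ops_per_mac=2):
    return n_wavelengths * symbol_rate_hz * ops_per_mac / 1e12

# Hypothetical operating point: ~90 comb lines at ~62 GBaud lands
# in the ~11 TOPS regime the accelerator's title reports.
print(round(effective_tops(90, 62e9), 1))  # -> 11.2
```

The point of the arithmetic: throughput multiplies with channel count, so adding comb wavelengths scales a single optical processor the way adding cores scales an electronic one.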

"This processor can serve as a universal ultrahigh bandwidth front end for any neuromorphic hardware —optical or electronic based—bringing massive-data machine learning for real-time ultrahigh bandwidth data within reach," says co-lead author of the study, Dr. Xu, Swinburne alum and postdoctoral fellow with the Electrical and Computer Systems Engineering Department at Monash University.

"We're currently getting a sneak-peak of how the processors of the future will look. It's really showing us how dramatically we can scale the power of our processors through the innovative use of microcombs," Dr. Xu explains.

Xingyuan Xu et al. 11 TOPS photonic convolutional accelerator for optical neural networks, Nature (2021).
https://www.nature.com/articles/s41586-020-03063-0


----------------------------------------

Machine Learning at the Speed of Light: New Paper Demonstrates Use of Photonic Structures for AI
https://techxplore.com/news/2021-01-machine-paper-photonic-ai.html

Light-based processors, called photonic processors, enable computers to complete complex calculations at incredible speeds. New research published this week in the journal Nature examines the potential of photonic processors for artificial intelligence applications. The results demonstrate for the first time that these devices can process information rapidly and in parallel, something that today's electronic chips cannot do.

The researchers combined phase-change materials—the storage material used, for example, on DVDs—and photonic structures to store data in a nonvolatile manner without requiring a continual energy supply. This study is also the first to combine these optical memory cells with a chip-based frequency comb as a light source, which is what allowed them to calculate on 16 different wavelengths simultaneously.

In the paper, the researchers used the technology to create a convolutional neural network that would recognize handwritten numbers. They found that the method granted never-before-seen data rates and computing densities.

"Exploiting light for signal transference enables the processor to perform parallel data processing through wavelength multiplexing, which leads to a higher computing density and many matrix multiplications being carried out in just one timestep. In contrast to traditional electronics, which usually work in the low GHz range, optical modulation speeds can be achieved with speeds up to the 50 to 100 GHz range."

J. Feldmann et al. Parallel convolutional processing using an integrated photonic tensor core, Nature (2021)
https://www.nature.com/articles/s41586-020-03070-1

----------------------------------------

Accelerating AI Computing to the Speed of Light
https://techxplore.com/news/2021-01-ai.html

A University of Washington-led team has come up with an optical computing core prototype that uses phase-change material. This system is fast, energy efficient and capable of accelerating the neural networks used in AI and machine learning. The technology is also scalable and directly applicable to cloud computing.

The team published these findings Jan. 4 in Nature Communications.

Changming Wu et al, Programmable phase-change metasurfaces on waveguides for multimode photonic convolutional neural network, Nature Communications (2021).
https://www.nature.com/articles/s41467-020-20365-z

Sigmetnow

Re: Robots and AI: Our Immortality or Extinction
« Reply #602 on: January 12, 2021, 09:06:41 PM »
Tesla’s other-than-human sensors and AI detect… other-than-human humans? ;)

Well this is creepy haha ;D ;D :o
➡️ https://twitter.com/tesla_master/status/1348653853042900992
Video clip at the link.
People who say it cannot be done should not interrupt those who are doing it.

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #603 on: January 12, 2021, 10:29:03 PM »
Spooky!  :o


vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #604 on: January 13, 2021, 01:31:36 PM »
‘A 20th Century Commander Will Not Survive’: Why The Military Needs AI
https://breakingdefense.com/2021/01/a-20th-century-commander-will-not-survive-why-the-military-needs-ai/

Today’s huge HQs are slow-moving “rocket magnets” that can’t keep up in 21st century combat, said the director of the Joint Artificial Intelligence Center, Lt. Gen. Mike Groen. To survive and win, the military must replace cumbersome manual processes with AI.

“Clausewitz always talks about this coup d’oeil,” Lt. Gen. Mike Groen explained, “this insight that some commanders have.”

Two hundred years ago, a general like Napoleon could stand atop a strategically located hill and survey the entire battlefield with their eyes and make snap decisions based on their read of the situation – what the French called coup d’oeil. ... The best generals developed an intuition the Prussians called fingerspitzengefühl, the “fingertip feeling” for the ever-changing shape of battle.

... “Today, you have a large command post,” Groen told me. “You have tents full of people who are on phones, who are on email, who are all reaching out and gathering information from across the force…. You might have dozens or hundreds of humans just watching all of that video [from drones and satellites] to try to detect targets.”

Then, using chat rooms, sticky notes, and old-fashioned yelling, staff officers have to share that information with each other, sort it, make sense of it, and brief the commander. If the commander asks for data the staffers don’t have, they have to go back to their phones and computers. The cycle – what the late Col. John Boyd called the OODA loop, for Observe, Orient, Decide, & Act – can take hours or even days, and by the time the commander gets answers, the data may be out of date.

Today’s manual staff processes require legions of staff officers clustered in one location so they can talk to each other face to face; diesel generators running hot to power all the electronics; and veritable “antenna farms” to receive reports, transmit orders, and download full-motion video from drones. Those visible, infrared, and radio-frequency signatures are all easy for enemy drones and satellites to detect - a target-rich environment.

... What if you could replace the humans doing the cognitive grunt work – watching surveillance video for enemy forces, tracking your own units’ locations and assignments and levels of supply – with AI? What if you could replace the constant radio chatter of humans checking up on one another with machine-generated updates, transmitted in short bursts? What if you could replace the ponderous staff briefing process with an AI-driven dashboard that showed the commander what they needed to know, right now, based on up-to-the-minute data?

Then, for the first time in 200 years, a single commander could see the whole battlefield at a glance, with a 21st century coup d’oeil. In a world of conflicts too big to see from any physical hilltop, however high, you could build a virtual hilltop for the commander to look down from. ...

------------------------------------------------------

... and, while looking down from that virtual hilltop the commander doesn't notice the Cylon in his midst ...


for fans of Battlestar Galactica
« Last Edit: January 13, 2021, 01:43:47 PM by vox_mundi »

Sigmetnow

Re: Robots and AI: Our Immortality or Extinction
« Reply #605 on: January 15, 2021, 08:42:20 PM »
Neighborhood watch?  Video clip:  Encountering a Spot robot dog on the prowl at night.

K10(@Kristennetten):
Policing?
➡️ https://twitter.com/kristennetten/status/1349660501739933696

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #606 on: January 16, 2021, 12:22:18 AM »
Toyota Research Institute (TRI) is researching how to bring together the instinctive reflexes of professional drivers and automated driving technology that uses the calculated foresight of a supercomputer.


... Fast & Furious ... or Mr Toad's Wild Ride

-----------------------------------------



... try and get somebody that's making $10.25/hr to do that

-------------------------------------------

work from home ...



---------------------------------------------


vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #607 on: January 16, 2021, 02:38:16 AM »
Evolvable Neural Units That Can Mimic the Brain's Synaptic Plasticity
https://techxplore.com/news/2021-01-evolvable-neural-mimic-brain-synaptic.html

Researchers at Korea University have recently tried to reproduce the complexity of biological neurons more effectively by approximating the function of individual neurons and synapses. Their paper, published in Nature Machine Intelligence, introduces a network of evolvable neural units (ENUs) that can adapt to mimic specific neurons and mechanisms of synaptic plasticity.

... "Current artificial neural networks used in deep learning are very powerful in many ways, but they do not really match biological neural network behavior. Our idea was to use these existing artificial neural networks not to model the entire brain, but to model each individual neuron and synapse."

The ENUs developed by Bertens and his colleague Seong-Whan Lee are based on artificial neural networks (ANNs). However, instead of reproducing the overall structure of biological neural networks, these ANNs were used to model individual neurons and synapses.

The behavior of the ENUs was programmed to change over time, using evolutionary algorithms. These are algorithms that can simulate a specific type of evolutionary process based on the notions of survival of the fittest, random mutation and reproduction.

"By using such evolutionary methods, it is possible to evolve these units to perform very complex information processing, similar to biological neurons," Bertens explained. "Most current neuron models only allow single output values (spikes or graded potentials), and in case of synapses only a single synaptic weight value. The main unique characteristics of ENUs is that they can output multiple values (vectors), which could be seen as analogous to neurotransmitters in the brain."

The ENUs developed by Bertens and Lee can output values that act in ANNs as neurotransmitters do in the brain. This characteristic allows them to learn far more complex behavior than existing, predefined mathematical models.

"I believe that the most meaningful finding and result of this study was showing that the proposed ENUs can not only perform similar mathematical operations as current neuroscience models, but they can also be evolved to essentially perform any type of behavior that is beneficial for survival," Bertens said. "This means it is possible to get much more complex functions for each neuron than the current hand-designed mathematical ones."

Network of evolvable neural units can learn synaptic learning rules and spiking dynamics. Nature Machine Intelligence (2020).
https://www.nature.com/articles/s42256-020-00267-x