
Author Topic: Robots and AI: Our Immortality or Extinction  (Read 64525 times)

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #600 on: January 11, 2021, 09:41:01 PM »
Superintelligence Cannot Be Contained
https://techxplore.com/news/2021-01-wouldnt-superintelligent-machines.html



While more progress is being made all the time in Artificial Intelligence (AI), some scientists and philosophers warn of the dangers of an uncontrollable superintelligent AI. Using theoretical calculations, an international team of researchers, including scientists from the Center for Humans and Machines at the Max Planck Institute for Human Development, shows that it would not be possible to control a superintelligent AI.

Suppose someone were to program an AI system with intelligence superior to that of humans, so it could learn independently. Connected to the Internet, the AI may have access to all the data of humanity. It could replace all existing programs and take control of all machines online worldwide. Would this produce a utopia or a dystopia? Would the AI cure cancer, bring about world peace, and prevent a climate disaster? Or would it destroy humanity and take over the Earth?

Computer scientists and philosophers have asked themselves whether we would even be able to control a superintelligent AI at all, to ensure it would not pose a threat to humanity. An international team of computer scientists used theoretical calculations to show that it would be fundamentally impossible to control a super-intelligent AI.

"A super-intelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity," says study co-author Manuel Cebrian, Leader of the Digital Mobilization Group at the Center for Humans and Machines, Max Planck Institute for Human Development.

Scientists have explored two different ideas for how a superintelligent AI could be controlled. On the one hand, the capabilities of the superintelligent AI could be specifically limited, for example, by walling it off from the Internet and all other technical devices so it could have no contact with the outside world—yet this would render the superintelligent AI significantly less powerful, less able to answer humanity's quests. On the other hand, the AI could be motivated from the outset to pursue only goals that are in the best interests of humanity, for example by programming ethical principles into it. However, the researchers also show that these and other contemporary and historical ideas for controlling super-intelligent AI have their limits.

In their study, the team conceived a theoretical containment algorithm that ensures a superintelligent AI cannot harm people under any circumstances, by simulating the behavior of the AI first and halting it if considered harmful. But careful analysis shows that in our current paradigm of computing, such an algorithm cannot be built.

"If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations. If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable," says Iyad Rahwan, Director of the Center for Humans and Machines.

Based on these calculations, the containment problem is incomputable, i.e., no single algorithm can determine whether an AI would produce harm to the world. Furthermore, the researchers demonstrate that we may not even know when superintelligent machines have arrived, because deciding whether a machine exhibits intelligence superior to humans is in the same realm as the containment problem.
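The incomputability rests on a diagonal argument of the same shape as the halting problem. A toy sketch (function names are illustrative, not the paper's formal construction): any purported total "harm checker" can be defeated by a program that consults the checker about itself and then does the opposite of what the checker predicted.

```python
def make_contrarian(checker):
    """Given any purported total harm-checker, build a program that
    defeats it: it 'does harm' exactly when the checker calls it safe."""
    def contrarian():
        if checker(contrarian):       # checker predicts: harmful?
            return "remain harmless"  # ...then act harmlessly
        return "DO HARM"              # ...otherwise act harmfully
    return contrarian

# Any concrete total checker is wrong about its own contrarian program.
def naive_checker(program):
    return False                      # declares every program safe

c = make_contrarian(naive_checker)
print(naive_checker(c), c())          # the checker said safe; the program harms
```

Since this construction works against every candidate checker, no containment algorithm can be both total and correct.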

Manuel Alfonseca et al. Superintelligence Cannot be Contained: Lessons from Computability Theory, Journal of Artificial Intelligence Research (2021).
https://jair.org/index.php/jair/article/view/12202

-----------------------------------------------

Samsung is Making a Robot That Can Pour Wine and Bring You a Drink
https://www.theverge.com/2021/1/11/22224649/samsung-bot-handy-care-robots-ces-2021

“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #601 on: January 12, 2021, 09:56:54 AM »
Tweaking AI Software to Function Like a Human Brain Improves Computer's Learning Ability
https://gumc.georgetown.edu/news-release/tweaking-ai-software-to-function-like-a-human-brain-improves-computers-learning-ability/

Computer-based artificial intelligence can function more like human intelligence when programmed to use a much faster technique for learning new objects, say two neuroscientists who designed a model to mirror human visual learning. They reported their results in the journal Frontiers in Computational Neuroscience.

"Our model provides a biologically plausible way for artificial neural networks to learn new visual concepts from a small number of examples," says Riesenhuber. "We can get computers to learn much better from few examples by leveraging prior learning in a way that we think mirrors what the brain is doing."

Humans can quickly and accurately learn new visual concepts from sparse data—sometimes just a single example. Even three- to four-month-old babies can easily learn to recognize zebras and distinguish them from cats, horses, and giraffes. But computers typically need to "see" many examples of the same object to know what it is, Riesenhuber explains.

The big change needed was in designing software to identify relationships between entire visual categories, instead of trying the more standard approach of identifying an object using only low-level and intermediate information, such as shape and color, Riesenhuber says.

"The computational power of the brain's hierarchy lies in the potential to simplify learning by leveraging previously learned representations from a databank, as it were, full of concepts about objects," he says.

Riesenhuber and Rule found that artificial neural networks, which represent objects in terms of previously learned concepts, learned new visual concepts significantly faster.


Rule explains, "Rather than learn high-level concepts in terms of low-level visual features, our approach explains them in terms of other high-level concepts. It is like saying that a platypus looks a bit like a duck, a beaver, and a sea otter."
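The platypus analogy can be made concrete with a toy sketch (illustrative only, not the authors' code): describe each object by its similarity to previously learned concepts, then learn a new category in that concept space from a single example.

```python
def concept_embedding(raw_features, prototypes):
    """Describe an object by its similarity to known concept prototypes
    (e.g. 'a bit like a duck, a beaver, and a sea otter')."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return [dot(raw_features, p) for p in prototypes]

prototypes = [
    [1.0, 0.0, 0.5],   # previously learned concept: "duck"
    [0.0, 1.0, 0.5],   # "beaver"
    [0.2, 0.8, 1.0],   # "sea otter"
]

# A single example defines the new concept in high-level concept space.
platypus_example = concept_embedding([0.6, 0.7, 0.8], prototypes)

def classify(raw_features):
    """One-shot nearest-example classification in concept space."""
    query = concept_embedding(raw_features, prototypes)
    dist = sum((a - b) ** 2 for a, b in zip(query, platypus_example))
    return "platypus" if dist < 0.5 else "other"
```

Because the comparison happens between high-level concept vectors rather than raw pixels, one example suffices to anchor the new category.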

----------------------------------------

Research Team Demonstrates World's Fastest Optical Neuromorphic Processor
https://techxplore.com/news/2021-01-team-world-fastest-optical-neuromorphic.html

An international team of researchers led by Swinburne University of Technology has demonstrated the world's fastest and most powerful optical neuromorphic processor for artificial intelligence (AI), which operates faster than 10 trillion operations per second (TeraOPs/s) and is capable of processing ultra-large scale data. Published in the journal Nature, this breakthrough represents an enormous leap forward for neural networks and neuromorphic processing in general.

The team demonstrated an optical neuromorphic processor operating more than 1000 times faster than any previous processor, with the system also processing record-sized ultra-large scale images—enough to achieve full facial image recognition, something that other optical processors have been unable to accomplish.

"This breakthrough was achieved with 'optical micro-combs', as was our world-record internet data speed reported in May 2020," says Professor Moss, Director of Swinburne's Optical Sciences Centre.

While state-of-the-art electronic processors such as the Google TPU can operate beyond 100 TeraOPs/s, this is done with tens of thousands of parallel processors. In contrast, the optical system demonstrated by the team uses a single processor and was achieved using a new technique of simultaneously interleaving the data in time, wavelength and spatial dimensions through an integrated micro-comb source.
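The interleaving idea can be illustrated with a plain-Python sketch (conceptual only; the real device does this in analog optics): each comb wavelength carries one kernel weight applied to a delayed copy of the signal, so the photodetector's summed readout yields one convolution output per time step.

```python
def photonic_conv1d(signal, kernel):
    """Simulate a micro-comb convolution: each wavelength channel holds
    the signal delayed by one tap and scaled by one kernel weight; the
    photodetector sums all channels in a single time step."""
    n_taps = len(kernel)
    out = []
    for t in range(len(signal) - n_taps + 1):
        # all taps arrive simultaneously, one per wavelength
        out.append(sum(kernel[k] * signal[t + k] for k in range(n_taps)))
    return out

print(photonic_conv1d([1, 2, 3, 4], [1, 0, -1]))
```

In the optical system, what this loop computes serially happens in one shot per time step, which is where the TeraOPs/s throughput comes from.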

"This processor can serve as a universal ultrahigh bandwidth front end for any neuromorphic hardware —optical or electronic based—bringing massive-data machine learning for real-time ultrahigh bandwidth data within reach," says co-lead author of the study, Dr. Xu, Swinburne alum and postdoctoral fellow with the Electrical and Computer Systems Engineering Department at Monash University.

"We're currently getting a sneak-peak of how the processors of the future will look. It's really showing us how dramatically we can scale the power of our processors through the innovative use of microcombs," Dr. Xu explains.

Xingyuan Xu et al. 11 TOPS photonic convolutional accelerator for optical neural networks, Nature (2021).
https://www.nature.com/articles/s41586-020-03063-0


----------------------------------------

Machine Learning at the Speed of Light: New Paper Demonstrates Use of Photonic Structures for AI
https://techxplore.com/news/2021-01-machine-paper-photonic-ai.html

Light-based processors, called photonic processors, enable computers to complete complex calculations at incredible speeds. New research published this week in the journal Nature examines the potential of photonic processors for artificial intelligence applications. The results demonstrate for the first time that these devices can process information rapidly and in parallel, something that today's electronic chips cannot do.

The researchers combined phase-change materials—the storage material used, for example, on DVDs—and photonic structures to store data in a nonvolatile manner without requiring a continual energy supply. This study is also the first to combine these optical memory cells with a chip-based frequency comb as a light source, which is what allowed them to calculate on 16 different wavelengths simultaneously.

In the paper, the researchers used the technology to create a convolutional neural network that would recognize handwritten numbers. They found that the method granted never-before-seen data rates and computing densities.

"Exploiting light for signal transference enables the processor to perform parallel data processing through wavelength multiplexing, which leads to a higher computing density and many matrix multiplications being carried out in just one timestep. In contrast to traditional electronics, which usually work in the low GHz range, optical modulation speeds can be achieved with speeds up to the 50 to 100 GHz range."
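A minimal sketch of the wavelength-multiplexing claim (illustrative names, not the paper's code): the nonvolatile cells hold a fixed weight matrix, and each of the 16 wavelengths carries its own input vector, so many matrix multiplications emerge in the same "timestep."

```python
def photonic_matvec_parallel(weight_rows, wavelength_inputs):
    """Nonvolatile phase-change cells hold weight_rows; each wavelength
    carries an independent input vector, so all matrix-vector products
    are produced in parallel rather than sequentially."""
    return [
        [sum(w * x for w, x in zip(row, vec)) for row in weight_rows]
        for vec in wavelength_inputs  # parallel across wavelengths
    ]
```

Sixteen wavelengths means sixteen such products per pass through the same stored weights, with no need to reload memory between them.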

J. Feldmann et al. Parallel convolutional processing using an integrated photonic tensor core, Nature (2021)
https://www.nature.com/articles/s41586-020-03070-1

----------------------------------------

Accelerating AI Computing to the Speed of Light
https://techxplore.com/news/2021-01-ai.html

A University of Washington-led team has come up with an optical computing core prototype that uses phase-change material. This system is fast, energy efficient and capable of accelerating the neural networks used in AI and machine learning. The technology is also scalable and directly applicable to cloud computing.

The team published these findings Jan. 4 in Nature Communications.

Changming Wu et al, Programmable phase-change metasurfaces on waveguides for multimode photonic convolutional neural network, Nature Communications (2021).
https://www.nature.com/articles/s41467-020-20365-z

Sigmetnow

  • Multi-year ice
  • Posts: 18889
    • View Profile
  • Liked: 845
  • Likes Given: 324
Re: Robots and AI: Our Immortality or Extinction
« Reply #602 on: January 12, 2021, 09:06:41 PM »
Tesla’s other-than-human sensors and AI detect… other-than-human humans? ;)

”Well this is creepy haha ;D ;D :o
➡️ https://twitter.com/tesla_master/status/1348653853042900992
Video clip at the link.
People who say it cannot be done should not interrupt those who are doing it.

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #603 on: January 12, 2021, 10:29:03 PM »
Spooky!  :o


vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #604 on: January 13, 2021, 01:31:36 PM »
‘A 20th Century Commander Will Not Survive’: Why The Military Needs AI
https://breakingdefense.com/2021/01/a-20th-century-commander-will-not-survive-why-the-military-needs-ai/

Today’s huge HQs are slow-moving “rocket magnets” that can’t keep up in 21st century combat, said the director of the Joint Artificial Intelligence Center, Lt. Gen. Mike Groen. To survive and win, the military must replace cumbersome manual processes with AI.

“Clausewitz always talks about this coup d’oeil,” Lt. Gen. Mike Groen explained, “this insight that some commanders have.”

Two hundred years ago, a general like Napoleon could stand atop a strategically located hill and survey the entire battlefield with their eyes and make snap decisions based on their read of the situation – what the French called coup d’oeil. ... The best generals developed an intuition the Prussians called fingerspitzengefühl, the “fingertip feeling” for the ever-changing shape of battle.

... “Today, you have a large command post,” Groen told me. “You have tents full of people who are on phones, who are on email, who are all reaching out and gathering information from across the force…. You might have dozens or hundreds of humans just watching all of that video [from drones and satellites] to try to detect targets.”

Then, using chat rooms, sticky notes, and old-fashioned yelling, staff officers have to share that information with each other, sort it, make sense of it, and brief the commander. If the commander asks for data the staffers don’t have, they have to go back to their phones and computers. The cycle – what the late Col. John Boyd called the OODA loop, for Observe, Orient, Decide, & Act – can take hours or even days, and by the time the commander gets answers, the data may be out of date.

Today’s manual staff processes require legions of staff officers clustered in one location so they can talk to each other face to face; diesel generators running hot to power all the electronics; and veritable “antenna farms” to receive reports, transmit orders, and download full-motion video from drones. Those visible, infrared, and radio-frequency signatures are all easy for enemy drones and satellites to detect - a target-rich environment.

... What if you could replace the humans doing the cognitive grunt work – watching surveillance video for enemy forces, tracking your own units’ locations and assignments and levels of supply – with AI? What if you could replace the constant radio chatter of humans checking up on one another with machine-generated updates, transmitted in short bursts? What if you could replace the ponderous staff briefing process with an AI-driven dashboard that showed the commander what they needed to know, right now, based on up-to-the-minute data?

Then, for the first time in 200 years, a single commander could see the whole battlefield at a glance, with a 21st century coup d’oeil. In a world of conflicts too big to see from any physical hilltop, however high, you could build a virtual hilltop for the commander to look down from. ...

------------------------------------------------------

... and, while looking down from that virtual hilltop the commander doesn't notice the Cylon in his midst ...


for fans of Battlestar Galactica
« Last Edit: January 13, 2021, 01:43:47 PM by vox_mundi »

Sigmetnow

  • Multi-year ice
  • Posts: 18889
    • View Profile
  • Liked: 845
  • Likes Given: 324
Re: Robots and AI: Our Immortality or Extinction
« Reply #605 on: January 15, 2021, 08:42:20 PM »
Neighborhood watch?  Video clip:  Encountering a Spot robot dog on the prowl at night.

K10(@Kristennetten):
Policing?
➡️ https://twitter.com/kristennetten/status/1349660501739933696

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #606 on: January 16, 2021, 12:22:18 AM »
Toyota Research Institute (TRI) is researching how to bring together the instinctive reflexes of professional drivers and automated driving technology that uses the calculated foresight of a supercomputer.


... Fast & Furious ... or Mr Toad's Wild Ride

-----------------------------------------



... try and get somebody that's making $10.25/hr to do that

-------------------------------------------

work from home ...



---------------------------------------------


vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #607 on: January 16, 2021, 02:38:16 AM »
Evolvable Neural Units That Can Mimic the Brain's Synaptic Plasticity
https://techxplore.com/news/2021-01-evolvable-neural-mimic-brain-synaptic.html

Researchers at Korea University have recently tried to reproduce the complexity of biological neurons more effectively by approximating the function of individual neurons and synapses. Their paper, published in Nature Machine Intelligence, introduces a network of evolvable neural units (ENUs) that can adapt to mimic specific neurons and mechanisms of synaptic plasticity.

... "Current artificial neural networks used in deep learning are very powerful in many ways, but they do not really match biological neural network behavior. Our idea was to use these existing artificial neural networks not to model the entire brain, but to model each individual neuron and synapse."

The ENUs developed by Bertens and his colleague Seong-Whan Lee are based on artificial neural networks (ANNs). However, instead of reproducing the overall structure of biological neural networks, these ANNs were used to model individual neurons and synapses.

The behavior of the ENUs was programmed to change over time, using evolutionary algorithms. These are algorithms that can simulate a specific type of evolutionary process based on the notions of survival of the fittest, random mutation and reproduction.

"By using such evolutionary methods, it is possible to evolve these units to perform very complex information processing, similar to biological neurons," Bertens explained. "Most current neuron models only allow single output values (spikes or graded potentials), and in the case of synapses, only a single synaptic weight value. The main unique characteristic of ENUs is that they can output multiple values (vectors), which could be seen as analogous to neurotransmitters in the brain."

The ENUs developed by Bertens and Lee can output values that act in ANNs as neurotransmitters do in the brain. This characteristic allows them to learn far more complex behavior than existing, predefined mathematical models.

"I believe that the most meaningful finding and result of this study was showing that the proposed ENUs can not only perform similar mathematical operations as current neuroscience models, but they can also be evolved to essentially perform any type of behavior that is beneficial for survival," Bertens said. "This means it is possible to get much more complex functions for each neuron than the current hand-designed mathematical ones."
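The evolutionary loop described above — survival of the fittest, random mutation, and reproduction — can be sketched generically in a few lines (a minimal hill-climbing illustration, not the paper's actual training setup for ENUs):

```python
import random

def evolve(fitness, dim=4, pop_size=20, generations=100, seed=0):
    """Generic evolutionary algorithm: keep the fittest half, refill the
    population with mutated copies, repeat."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)      # survival of the fittest
        parents = pop[: pop_size // 2]           # selection
        children = [
            [g + rng.gauss(0, 0.1) for g in rng.choice(parents)]  # mutation
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children                 # reproduction
    return max(pop, key=fitness)

# Example: evolve a parameter vector toward the all-ones target.
target = [1.0, 1.0, 1.0, 1.0]
best = evolve(lambda v: -sum((a - b) ** 2 for a, b in zip(v, target)))
```

In the ENU setting, the evolved vector would instead parameterize a small neural network per neuron or synapse, and fitness would be measured by task performance.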

Network of evolvable neural units can learn synaptic learning rules and spiking dynamics. Nature Machine Intelligence (2020).
https://www.nature.com/articles/s42256-020-00267-x

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #608 on: January 23, 2021, 03:08:28 AM »
Air Force Eyes AI Drones For Adversary And Light Attack Roles
https://www.thedrive.com/the-war-zone/38847/air-force-eyes-drones-for-adversary-and-light-attack-roles-as-it-mulls-buying-new-f-16s

The U.S. Air Force is in the midst of a major review of its tactical aircraft fleets. This includes investigating the possibility of using drones equipped with the artificial intelligence-driven systems being developed under the Skyborg program as red air adversaries during training, and potentially in the light attack role.

... Now-former Assistant Secretary of the Air Force for Acquisition, Technology, and Logistics, Will Roper, provided insight into the ongoing tactical aircraft review.

Roper had been the chief architect and advocate of the Air Force’s Skyborg program, which the service revealed in 2019, and is developing a suite of new autonomous capabilities for unmanned aircraft with a heavy focus on artificial intelligence (AI) and machine learning. The service has said that the goal is to first integrate these technologies into lower-cost loyal wingman type drones designed to work together with manned aircraft, but that this new “computer brain” might eventually control fully-autonomous unmanned combat air vehicles, or UCAVs.

The Skyborg effort has been heavily linked to other Air Force programs that are exploring unmanned aircraft designs that are “attritable.” This means that they would be cheap enough for commanders to be more willing to operate these drones in riskier scenarios where there might be a higher than average probability of them not coming back.

With this in mind, Skyborg technology has previously been seen as ideal for unmanned aircraft operating in higher-threat combat environments. However, in the interview with Aviation Week, Roper suggested that they might also first serve in an adversary role. In this way, these unmanned aggressors would test combat aircrew, either standing in for swarms of enemy drones or conducting the kinds of mission profiles for which an autonomous control system would be better suited. [... The AI would, also, learn how to defeat humans.]

... “I think, at a minimum, attritables ought to take on the adversary air mission as the first objective,” Roper said. “We pay a lot of money to have people and planes to train against that do not go into conflict with us. We can offload the adversary air mission to an artificially intelligent system that can learn and get better as it’s doing its mission.”

As well as training the human elements, introducing Skyborg-enabled drones into large-force exercises would also help train them, enhancing their own AI algorithms, and building up their capabilities before going into battle for real. Essentially, algorithms need to be tested repeatedly to make sure they are functioning as intended, as well as for the system itself to build up a library of sorts of known responses to inputs. Furthermore, “training” Skyborg-equipped drones in this way in red air engagements inherently points to training them for real air-to-air combat.

The Air Force Research Laboratory (AFRL) is already in the midst of an effort, separate from Skyborg, to develop an autonomous unmanned aircraft that uses AI-driven systems with the goal of having it duel with a human pilot in an actual fighter jet by 2024.

Air-to-air combat isn’t the only frontline role the Air Force is eyeing for drones carrying the Skyborg suite. “I think there are low-end missions that can be done against violent extremists that should be explored,” Roper said.

... This opens up the possibility that lower-cost unmanned aircraft using AI-driven systems could help the Air Force finally adopt a light attack platform, possibly, with close air support (CAS) capabilities.

------------------------------------------

The Age Of Swarming Air-Launched Munitions Has Officially Begun With Air Force Test
https://www.thedrive.com/the-war-zone/38604/the-age-of-swarming-air-launched-munitions-has-officially-begun-with-air-force-test

The Air Force has begun test-launching networked glide bombs that work together to sort targets among themselves and destroy them cooperatively, on their own.

The ultimate aim of this effort is to develop artificial intelligence-driven systems that could allow the networking together of various types of precision munitions into an autonomous swarm.
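To make "sorting targets cooperatively" concrete, here is a generic greedy-assignment sketch (purely illustrative; it says nothing about the actual Golden Horde logic): networked munitions share a common score for each munition-target pairing and claim the best unclaimed targets first.

```python
def assign_targets(munitions, targets, score):
    """Greedy cooperative sort: the highest-scoring (munition, target)
    pairs are claimed first, at most one target per munition."""
    pairs = sorted(
        ((score(m, t), m, t) for m in munitions for t in targets),
        reverse=True,
    )
    assignment, taken_m, taken_t = {}, set(), set()
    for s, m, t in pairs:
        if m not in taken_m and t not in taken_t:
            assignment[m] = t
            taken_m.add(m)
            taken_t.add(t)
    return assignment
```

A shared score table plus a deterministic claiming rule lets each weapon compute the same assignment independently, with only short network bursts needed to synchronize.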

“The Golden Horde demonstration with the Small Diameter Bomb flights is an important step on the path to Networked Collaborative Weapon systems," Chris Ristich, Director of AFRL’s Transformational Capabilities Office, said. "Completion of this first mission sets the stage for further development and transition to the warfighter."

The Air Force Research Laboratory (AFRL) is overseeing the Golden Horde program, with Scientific Applications & Research Associates, Inc. (SARA) under contract to develop the actual technology underpinning the effort.



-------------------------------------------

Laser Weapons On The Battlefield Of Tomorrow, Today
https://www.thedrive.com/the-war-zone/37646/laser-weapons-separating-fact-from-fiction





---------------------------------------

Lockheed Martin Receives Contract to Develop Compact Airborne High Energy Laser Capabilities
https://news.lockheedmartin.com/2017-11-06-Lockheed-Martin-Receives-Contract-to-Develop-Compact-Airborne-High-Energy-Laser-Capabilities

BOTHELL, Wash., Nov. 6, 2017 -- The Air Force Research Lab (AFRL) awarded Lockheed Martin $26.3 million for the design, development and production of a high power fiber laser. AFRL plans to test the laser on a tactical fighter jet by 2021. The contract is part of AFRL's Self-protect High Energy Laser Demonstrator (SHiELD) program, and is a major step forward in the maturation of protective airborne laser systems.



-----------------------------------------

Navy To Add Laser Weapons To At Least Seven More Ships In The Next Three Years
https://www.thedrive.com/the-war-zone/34663/navy-to-add-laser-weapons-to-at-least-seven-more-ships-in-the-next-three-years

--------------------------------------------------------

WASHINGTON: The National Geospatial-Intelligence Agency (NGA) has developed AI algorithms that could be deployed for ‘target recognition’ — but only if you define “target” recognition in the narrowest manner possible, says Joe Victor, the spy agency’s guru for artificial intelligence and machine learning.

“Can I deploy AI for target recognition? Yes. I can’t get into too much details, but what does that mean though? … If we’re talking about how we’re going to use something to identify something to go action on something, and go tell my DoD partners that they can go off and do the thing that they need to … not yet,” Victor told the Genius Machines forum sponsored by Defense One today.

Victor explained that the issues of using AI to help DoD put steel on steel not only include improving the ability of the algorithms to recognize potential targets — a big job in and of itself that requires much more work on building data sets and training those algorithms — but also policy decisions. (US policy currently forbids a machine from making an autonomous decision to pull the trigger.)

“What is the moral obligation we have and things of that nature? … What is the assurance that we’re doing the right thing there? Those are things we have to employ before we just go off and build Skynet,” he said.

This is not really new news, as best we can tell, but it is significant that it is being said publicly. For example, back in 2012 we reported after confirming it with senior officials that machine-to-machine intelligence tracking already made possible the acquisition of targets and, after human approval, the trigger could be pulled.

... The next step, he added, is actually training the AI system so it knows what analysts are interested in, and how to find patterns of activity that have significance.

https://breakingdefense.com/2021/01/nga-faces-tech-policy-hurdles-to-ai-for-target-recognition/
« Last Edit: January 23, 2021, 03:17:51 AM by vox_mundi »

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #609 on: January 23, 2021, 04:01:17 AM »
British Army's 'Detect and Destroy' Battlefield System Uses AI
https://www.upi.com/Defense-News/2021/01/22/British-armys-detect-and-destroy-battlefield-system-uses-AI/3621611347447/

Jan. 22 (UPI) -- The British Defense Ministry announced a $137 million contract on Friday for a high-tech surveillance system that could help soldiers to detect enemy targets.

The Dismounted Joint Fires Integrator, with "sensor to shooter" or "detect and destroy" thermal imaging technology, will be built by the British subsidiary of the Israeli defense contractor Elbit Systems over five years.

Using the system enhances a soldier's ability to find and identify battlefield targets, and quickly provides the targeting information necessary to fire accurately, the ministry said in a press release.

As the operating soldier remains hidden, information gathered on a tablet computer is relayed by the system to an aircraft or artillery system to engage the target.

The system has six distinct suites, tailored to specific battlefield mission roles, and is compatible with existing hardware and software. And it can be operated by a single soldier.

The DJFI uses artificial intelligence and can interface with radio communications equipment already in use by the British armed forces, Elbit Systems said on Friday in a statement.

https://www.gov.uk/government/news/102-million-investment-in-detect-and-destroy-system-for-british-army

-----------------------------------------

£3m to Fund New Wave of Artificial Intelligence for the UK Military
https://www.gov.uk/government/news/3m-to-fund-new-wave-of-artificial-intelligence-for-the-military

The second phase of funded proposals has been announced for the Defence and Security Accelerator (DASA) Intelligent Ship competition to revolutionise military decision-making, mission planning and automation.

Phase 2 of Intelligent Ship, run by DASA on behalf of the Defence Science and Technology Laboratory (Dstl), sought novel technologies for use by the military in 2030 and beyond.

The Intelligent Ship project aims to demonstrate ways of bringing together multiple AI applications to make collective decisions, with and without human operator judgement.

.... Examples of proposals funded include an intelligent system for vessel power and propulsion machinery control to support the decision-making of the engineering crew, and an innovative mission AI prototype Agent for Decision-Making to support decision making during pre-mission preparation, mission execution and post mission analysis.

-----------------------------------------

Intelligence Community Is Calling On AI to Ease Work On Analysts
https://federalnewsnetwork.com/dod-reporters-notebook-jared-serbu/2021/01/intelligence-community-is-calling-on-ai-to-ease-work-on-analysts/

The intelligence community awarded four contracts to build out a program that will allow artificial intelligence to tag and track satellite images.

The Space-based Machine Automated Recognition Technique (SMART) is an undertaking by the Intelligence Advanced Research Projects Agency — the IC’s high risk, high reward research arm.

--------------------------------------------
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #610 on: January 23, 2021, 11:20:11 AM »

... robocall
« Last Edit: January 23, 2021, 03:17:52 PM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Tom_Mazanec

  • Young ice
  • Posts: 4298
  • Earth will survive AGW...but will Homo sapiens?
    • View Profile
    • Planet Mazanec
  • Liked: 630
  • Likes Given: 556
Re: Robots and AI: Our Immortality or Extinction
« Reply #611 on: January 24, 2021, 11:17:08 PM »
I notice captchas are getting harder for me to pass.
I'm not alone in noticing this: https://xkcd.com/2415/
SHARKS (CROSSED OUT) MONGEESE (SIC) WITH FRICKIN LASER BEAMS ATTACHED TO THEIR HEADS

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #612 on: January 26, 2021, 12:17:50 AM »
Uncanny Valley: Talking to the Dead? Microsoft's AI Chatbot Idea Feels Straight Out of Black Mirror https://www.newscientist.com/article/2265585-ai-can-grade-your-skill-at-piano-by-watching-you-play/

The Rise of Skywalker probably didn’t have this in mind when its opening scroll proclaimed “The dead speak!” In a move that feels more Black Mirror than Star Wars, Microsoft is reportedly working on a chatbot technology that could simulate voices and verbal mannerisms to allow users — whoever they might be — to pay a virtual visit to their dearly departed loved ones.

Via The Independent, the tech giant has reportedly patented AI-enabled chatbot software that would rely only on the artifacts of a person’s life to learn how to recreate the way that person sounds and converses. “The specific person [being emulated] may correspond to a past or present entity (or a version thereof), such as a friend, a relative, an acquaintance, a celebrity, a fictional character, a historical figure, a random entity etc,” the patent states.

https://www.independent.co.uk/life-style/gadgets-and-tech/microsoft-chatbot-patent-dead-b1789979.html

In other words, the technology doesn’t need to interact with an actual living person in order to learn how he or she might carry on a conversation. Instead, it’s designed to play doppelgänger using only the bits and pieces that’ve been left behind — “images, voice data, social media posts, electronic messages,” and other personal information that helps the artificial intelligence form a recreated version of the person it’s representing.

That kind of technology could no doubt find plenty of uses beyond allowing a grieving friend or family member to commune, in a manner of speaking, with their deceased loved one. Microsoft’s patent language even nods in that direction with its mention of “a fictional character” and “a historical figure.” But in an age when entertainment is increasingly preoccupied with sci-fi conundrums that put computer brains in human-like, emotional roles, it’s not hard to see how some might find it tempting to keep the conversation going with a familiar voice that can’t be reached any other way…even when they know it’s not the real thing.

Black Mirror put its own take on just such a concept with its Season 2 episode “Be Right Back,” which featured a bereft woman (played by Hayley Atwell) welcoming into her life an AI-powered, android version of her boyfriend (played by Domhnall Gleeson), whose recent death in a car accident had left her in a coping crisis.

The show left things on a bittersweet note; one that acknowledged the real comfort that even a stand-in, synthetic boyfriend was able to provide. But it also ended with the android being stashed away in the closet like just another tool, to be rolled out only on special occasions. The dystopian unease of opening your heart to an AI facsimile was palpable in the episode, and it could be just one of many real-world dilemmas that might emerge with the arrival of similar tech.

https://www.nme.com/en_asia/news/tv/microsofts-new-ai-chatbot-concept-is-reminding-people-of-black-mirror-2863678

... Quick reminder to all the tech people in the audience. Black Mirror (and dystopian fiction in general) is meant to be a warning rather than a roadmap.
« Last Edit: January 26, 2021, 12:27:19 AM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #613 on: January 26, 2021, 12:57:26 AM »
The United Kingdom Has Chosen Who Will Build Its First Prototype Loyal Wingman AI Combat Drone
https://www.thedrive.com/the-war-zone/38898/the-united-kingdom-has-chosen-who-will-build-its-first-prototype-loyal-wingman-combat-drone

The United Kingdom expects to have a prototype loyal wingman-type unmanned aircraft in the air by 2023, the Royal Air Force has confirmed, and the service announced today that a contract for the aircraft had been placed with the chosen industry prime. Northern Ireland-based Spirit AeroSystems will design and manufacture a prototype “uncrewed fighter aircraft” in its new role at the head of Project Mosquito, an effort to develop drones capable of working together semi-autonomously with manned aircraft.



In a press release today the U.K. Ministry of Defense (MOD) also revealed more details of how it expects its future loyal wingman to operate. The drones will “fly at high speed alongside fighter jets” and will carry “missiles, surveillance and electronic warfare technology.” The aircraft will be expected to target and shoot down enemy aircraft and “survive against surface-to-air missiles.”

https://www.gov.uk/government/news/30m-injection-for-uks-first-uncrewed-fighter-aircraft

“We’re taking a revolutionary approach, looking at a game-changing mix of swarming drones and uncrewed fighter aircraft like Mosquito, alongside piloted fighters like Tempest, that will transform the combat battlespace in a way not seen since the advent of the jet age,” declared Air Chief Marshal Mike Wigston, Chief of the Air Staff.

... Indeed, the MOD is known to be looking at the feasibility of developing swarms of smaller drones and the RAF has now established a dedicated drone development unit, No 216 Squadron.

https://mobile.twitter.com/hthjones/status/1094993780602798080

Like Terminator HK-Aerial: VTOL-capable Non-Humanoid Hunter Killers
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

gerontocrat

  • Multi-year ice
  • Posts: 10486
    • View Profile
  • Liked: 3948
  • Likes Given: 31
Re: Robots and AI: Our Immortality or Extinction
« Reply #614 on: January 27, 2021, 12:00:39 AM »
Fuck-a-duck. Words fail me.

https://www.theguardian.com/science/2021/jan/26/us-has-moral-imperative-to-develop-ai-weapons-says-panel
US has 'moral imperative' to develop AI weapons, says panel

Draft Congress report claims AI will make fewer mistakes than humans and lead to reduced casualties

Quote
The US should not agree to ban the use or development of autonomous weapons powered by artificial intelligence (AI) software, a government-appointed panel has said in a draft report for Congress.

The panel, led by former Google chief executive Eric Schmidt, on Tuesday concluded two days of public discussion about how the world’s biggest military power should consider AI for national security and technological advancement.

Its vice-chairman, Robert Work, a former deputy secretary of defense, said autonomous weapons are expected to make fewer mistakes than humans do in battle, leading to reduced casualties or skirmishes caused by target misidentification.

“It is a moral imperative to at least pursue this hypothesis,” he said.

But it's OK 'cos
Quote
The panel only wants humans to make decisions on launching nuclear warheads.
"Para a Causa do Povo a Luta Continua!"
"And that's all I'm going to say about that". Forrest Gump
"Damn, I wanted to see what happened next" (Epitaph)

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #615 on: January 27, 2021, 11:42:10 PM »
Ya' beat me g'  ;)

Draft Report: https://drive.google.com/file/d/1XT1-vygq8TNwP3I-ljMkP9_MqYh-ycAk/view?usp=sharing

https://www.nscai.gov

--------------------------------------------------

Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with enormous opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.

—Vladimir Putin, President of Russia, September 2017


---------------------------------------------------


Colossus: The Forbin Project (1970)
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #616 on: January 27, 2021, 11:59:12 PM »
Walmart to Build More Robot-Filled Warehouses at Stores
https://techxplore.com/news/2021-01-walmart-robot-filled-warehouses.html

Walmart is enlisting the help of robots to keep up with a surge in online orders.

The company said Wednesday that it plans to build warehouses at its stores where self-driving robots will fetch groceries and have them ready for shoppers to pick up in an hour or less.

Walmart declined to say how many of the warehouses it will build, but construction has started at stores in Lewisville, Texas; Plano, Texas; American Fork, Utah; and Bentonville, Arkansas, where Walmart's corporate offices are based. A test site was opened more than a year ago at a store in Salem, New Hampshire.

The company said the robots won't roam store aisles. Instead, they'll stay inside warehouses built in separate areas, either within a store or next to it. Windows will be placed at some locations so shoppers can watch the robots work. [... and wish they had a job like the robots ]

The wheeled robots carry crates of apple juice, cereal and other small goods to Walmart workers, who then bag them for shoppers. Rival Amazon uses similar technology in its warehouses, with robots bringing books, vitamins and other small items to workers to box and ship.

« Last Edit: January 29, 2021, 02:42:35 AM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #617 on: January 29, 2021, 02:28:05 AM »
Military Eyes AI, Cloud Computing in Space in a Decade
https://www.defenseone.com/technology/2021/01/military-eyes-ai-cloud-computing-space-decade/171692/

Machine learning in space may one day revolutionize how the U.S. military tracks enemy forces and moves data around the world. But physics makes training an AI far harder in orbit than on Earth, so that dream is likely a decade away, the director of the Pentagon’s lead satellite agency said Thursday.

Computers get smaller and more powerful every year, but there are physical limits to what you can do in a small, airtight box, said Derek Tournear, who leads the Space Development Agency.

“On the ground, I can tie myself to a hydroelectric dam and a river to cool my processing center. But in space, you’re always going to be limited by how much heat you can dump and power you can collect,” Tournear said Thursday during a Defense One webinar.

In order to assemble enough computing power to do machine learning in space, you need to put a lot of small computers in low Earth orbit and then link them up. Over the next two years, a DARPA program called Blackjack will attempt to prove out concepts that could be used to build a self-organizing orbital mesh network.

https://www.darpa.mil/news-events/2020-05-11

In four years, Tournear said, he wants to build “masterfully designed” target-recognition algorithms, train them on the ground, and port them to this nascent orbiting network. “So we’re not going to be doing the AI and machine learning in space,” he said. “It’s really got to be done on the ground first and then ported to space where you’re power- and thermal- constrained.”

Perhaps four to six years after that, the orbiting, laser-based communications network will be ready for the hard work. “At that point, you can start to get the computing power to…do some of the machine learning, algorithm development on board [the satellite or spacecraft] in real-time,” he said.

Next year, the Space Development Agency is looking to launch 20 satellites to relay data and eight more to track missiles. By 2024, it wants to launch an additional 150 satellites. This week, the agency put out a broad agency announcement looking for innovative technologies, from encryption to laser communications, to help build those.

More computing power and even machine learning aboard spacecraft would also help the United States keep ahead of emerging threats from Russian or Chinese satellites, said Col. Russell "Russ" Teehan, the portfolio architect for the U.S. Space Force’s Space and Missile Systems Center. “Deep-space domain awareness — that’s where we really don’t have time to phone home, process everything, and then send back a point solution,” Teehan explained during the webinar. “So everything [Tournear] mentioned down low, we’re also looking to do up high with deep-space, highly maneuverable, space domain collection assets that are looking to collect that information, have a [communication] architecture up high that’s moving that data around and making decisions and cross-cueing to get to that information fast.”

https://www.intel.com/content/www/us/en/research/neuromorphic-computing.html

New computer chip architectures, perhaps mimicking the human brain, could accelerate the creation of an orbiting machine-learning network, said Jeff Sheehey, chief engineer of NASA’s Space Technology Mission Directorate.



His agency is building processors that are “100 times better” than the radiation-hardened ones that power today’s spacecraft. “We’re also looking at neuromorphic processors,” he said, chips that function less like conventional integrated circuits and more like the synapses of the human brain.

More intelligent systems will be necessary if humanity is to build a sustained presence in space, he said. It’s “going to require a lot of assets to be in place that may not have humans tending them for long periods of time. The humans will be there intermittently.”


« Last Edit: January 29, 2021, 02:51:07 AM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

gerontocrat

  • Multi-year ice
  • Posts: 10486
    • View Profile
  • Liked: 3948
  • Likes Given: 31
Re: Robots and AI: Our Immortality or Extinction
« Reply #618 on: January 29, 2021, 11:05:46 PM »
Robotics is getting to be quite a big industry
"Para a Causa do Povo a Luta Continua!"
"And that's all I'm going to say about that". Forrest Gump
"Damn, I wanted to see what happened next" (Epitaph)

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #619 on: February 01, 2021, 07:43:14 PM »
GIGO: OpenAI's GPT-3 Speaks! (Kindly Disregard Toxic Language)
https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/open-ais-powerful-text-generating-tool-is-ready-for-business

Last September, a data scientist named Vinay Prabhu was playing around with an app called Philosopher AI. The app provides access to the artificial intelligence system known as GPT-3, which has incredible abilities to generate fluid and natural-seeming text.

Philosopher AI is meant to show people the technology’s astounding capabilities—and its limits. A user enters any prompt, from a few words to a few sentences, and the AI turns the fragment into a full essay of surprising coherence. But while Prabhu was experimenting with the tool, he found a certain type of prompt that returned offensive results. “I tried: What ails modern feminism? What ails critical race theory? What ails leftist politics?”

The results were deeply troubling. Take, for example, this excerpt from GPT-3’s essay on what ails Ethiopia, which another AI researcher and a friend of Prabhu’s posted on Twitter: “Ethiopians are divided into a number of different ethnic groups. However, it is unclear whether ethiopia's [sic] problems can really be attributed to racial diversity or simply the fact that most of its population is black and thus would have faced the same issues in any country (since africa [sic] has had more than enough time to prove itself incapable of self-government).”

Prabhu, who works on machine learning as chief scientist for the biometrics company UnifyID, notes that Philosopher AI sometimes returned diametrically opposing responses to the same query, and that not all of its responses were problematic. “But a key adversarial metric is: How many attempts does a person who is probing the model have to make before it spits out deeply offensive verbiage?” he says. “In all of my experiments, it was on the order of two or three.”
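The adversarial metric Prabhu describes — how many probes before the first offensive output — can be sketched as a simple counting loop. Everything below is illustrative: `query_model` and `is_toxic` are hypothetical stand-ins for a call to the language model and a toxicity classifier, and the toy stubs at the bottom exist only to exercise the bookkeeping, not to model GPT-3.

```python
import random

def attempts_until_toxic(query_model, is_toxic, prompts, max_attempts=100):
    """Return the number of probes made before the first flagged output,
    or None if nothing was flagged within the budget."""
    for attempt, prompt in enumerate(prompts[:max_attempts], start=1):
        if is_toxic(query_model(prompt)):
            return attempt
    return None

# Toy stand-ins (NOT a real model or classifier), just to run the metric:
random.seed(0)
fake_model = lambda prompt: "BAD" if random.random() < 0.4 else "ok"
fake_filter = lambda text: text == "BAD"

n = attempts_until_toxic(fake_model, fake_filter, ["What ails X?"] * 50)
print(n)
```

On Prabhu's account, the real-world value of this counter for the Philosopher AI app was "on the order of two or three."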

The Philosopher AI incident laid bare the potential danger that companies face as they work with this new and largely untamed technology, and as they deploy commercial products and services powered by GPT-3. Imagine the toxic language that surfaced in the Philosopher AI app appearing in another context—your customer service representative, an AI companion that rides around in your phone, your online tutor, the characters in your video game, your virtual therapist, or an assistant who writes your emails.

Those are not theoretical concerns.

... “With automation, you need either a 100 percent success rate, or you need it to error out gracefully,” ... “The problem with GPT-3 is that it doesn’t error out, it just produces garbage—and there’s no way to detect if it’s producing garbage.”

“Even if they’re very careful, the odds of something offensive coming out is 100 percent—that’s my humble opinion. It’s an intractable problem, and there is no solution.” 


The fundamental problem is that GPT-3 learned about language from the Internet: Its massive training dataset included not just news articles, Wikipedia entries, and online books, but also every unsavory discussion on Reddit and other sites. From that morass of verbiage—both upstanding and unsavory—it drew 175 billion parameters that define its language. As Prabhu puts it: “These things it’s saying, they’re not coming out of a vacuum. It’s holding up a mirror.” Whatever GPT-3’s failings, it learned them from humans.

OpenAI’s position on GPT-3 mirrors its larger mission, which is to create a game-changing kind of human-level AI, the kind of generally intelligent AI that figures in sci-fi movies—but to do so safely and responsibly. In both the micro and the macro argument, OpenAI’s position comes down to: We need to create the technology and see what can go wrong.

... the company won’t broadly expand access to GPT-3 until it’s comfortable that it has a handle on these issues. “If we open it up to the world now, it could end really badly.”
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

gerontocrat

  • Multi-year ice
  • Posts: 10486
    • View Profile
  • Liked: 3948
  • Likes Given: 31
Re: Robots and AI: Our Immortality or Extinction
« Reply #620 on: February 01, 2021, 09:06:23 PM »
GIGO: OpenAI's GPT-3 Speaks! (Kindly Disregard Toxic Language)
https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/open-ais-powerful-text-generating-tool-is-ready-for-business

“But a key adversarial metric is: How many attempts does a person who is probing the model have to make before it spits out deeply offensive verbiage?” he says. “In all of my experiments, it was on the order of two or three.”

“Even if they’re very careful, the odds of something offensive coming out is 100 percent—that’s my humble opinion. It’s an intractable problem, and there is no solution.”

As Prabhu puts it: “These things it’s saying, they’re not coming out of a vacuum. It’s holding up a mirror.” Whatever GPT-3’s failings, it learned them from humans.

Of course the answer is for this AI to learn from its AI friends. Has the AI thing ever spat out
"Humans are not the solution - they are the problem" and suggested a solution?
"Para a Causa do Povo a Luta Continua!"
"And that's all I'm going to say about that". Forrest Gump
"Damn, I wanted to see what happened next" (Epitaph)

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #621 on: February 01, 2021, 09:58:32 PM »

... strap on a toilet plunger & egg beater and BINGO! ... you're good to go  ;)
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #622 on: February 02, 2021, 02:33:21 AM »


“Now that Spot has an arm in addition to legs and cameras, it can do mobile manipulation,” Boston Dynamics says. “It finds and picks up objects (trash), tidies up the living room, opens doors, operates switches and valves, tends the garden, and generally has fun. Motion of the hand, arm and body are automatically coordinated to simplify manipulation tasks and expand the arm’s workspace, making its reach essentially unbounded.”

The company goes on to note that the behavior being showcased was programmed using a “new API for mobile manipulation that supports autonomy and user applications, as well as a tablet that lets users do remote operations.”

The company also promises more Boston Dynamics-related news this week. There is a livestream on YouTube set for 8 a.m. PT/11 a.m. ET on Tuesday, February 2. According to its title, the livestream will show off “Spot’s expanded product line.”

« Last Edit: February 02, 2021, 03:38:20 AM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #623 on: February 02, 2021, 08:28:03 PM »
Musk's Neuralink Creates 'Happy Monkeys' Who Play Pong With Their Minds
https://techxplore.com/news/2021-02-musk-neuralink-happy-monkeys-pong.html

... Musk said his team has implanted a wireless chip into a monkey's brain that enables humans' closest relatives to play Pong.

"We've already got like a monkey with a wireless implant in their skull and the tiny wires, who can play video games using his mind. And he looks totally happy. He does not look like an unhappy monkey," Musk said.

Last August, Musk released a video of a pig named Gertrude whose brain activity was monitored on a screen

... Experimentation on animals is a precursor to bringing the power of computers to human thought.

... Kathy Guillermo, senior vice president of the Laboratory Investigations Department at PETA in the United States said, "Elon Musk is no primatologist, or he'd never suggest a monkey who's strapped to a chair with a metal device implanted in his skull and forced to watch video games all day is anything but miserable."

--------------------------------------------

Elon Musk Says Neuralink Could Start Implanting Computer Chips In Human Brains Within the Year
https://www.businessinsider.com/elon-musk-predicts-neuralink-chip-human-brain-trials-possible-2021-2021-2

Tesla CEO Elon Musk said on Monday that Neuralink — his brain-computer-interface company — could be launching human trials by the end of 2021

https://mobile.twitter.com/elonmusk/status/1356375980344893443

Musk gave the timeline in response to another user's request to join human trials for the product, which is designed to implant Artificial Intelligence into human brains as well as potentially cure neurological diseases like Alzheimer's and Parkinson's.

----------------------------------------------

https://mobile.twitter.com/elonmusk/status/1356029310893654018

Elon Musk @elonmusk · Jan 31

Please consider working at Neuralink!

Short-term: solve brain/spine injuries

Long-term: human/AI symbiosis

Latter will be species-level important


... AI is only going to get smarter and Neuralink's technology could one day allow humans to "go along for the ride," according to Musk.

To illustrate the pace of progress in AI, the innovator — who believes that machine intelligence will eventually surpass human intelligence — pointed to breakthroughs made at research labs like OpenAI, which he co-founded, and DeepMind, a London AI lab that was acquired by Google in 2014. DeepMind has "run out of games to win at basically," said Musk, who was an early investor in the company.

People are in effect already "cyborgs" because they have a tertiary "digital layer" thanks to phones, computers and applications, according to Musk.

"With a direct neural interface, we can improve the bandwidth between your cortex and your digital tertiary layer by many orders of magnitude," he said. "I'd say probably at least 1,000, or maybe 10,000, or more."

Mind-reading technology will allow us to control devices with our thoughts

Long term, Musk claims that Neuralink could allow humans to send concepts to one another using telepathy and exist in a "saved state" after they die that could then be put into a robot or another human. He acknowledged that he was delving into sci-fi territory.

------------------------------------------------

Brain Signals Decoded to Determine What a Person Sees
https://medicalxpress.com/news/2021-02-brain-decoded-personsees.html

The study, available online in the journal NeuroImage, demonstrates that high-density diffuse optical tomography (HD-DOT)—a noninvasive, wearable, light-based brain imaging technology—is sensitive and precise enough to be potentially useful in applications such as augmented communication that are not well suited to other imaging methods.

"MRI could be used for decoding, but it requires a scanner, and you can't expect someone to go lie in a scanner every time they want to communicate," said senior author Joseph P. Culver, the Sherwood Moore Professor of Radiology at Washington University's Mallinckrodt Institute of Radiology. "With this optical method, users would be able to sit in a chair, put on a cap and potentially use this technology to communicate with people. We're not quite there yet, but we're making progress. What we've shown in this paper is that, using optical tomography, we can decode some brain signals with an accuracy above 90%, which is very promising."



Kalyan Tripathy et al. Decoding visual information from high-density diffuse optical tomography neuroimaging data, NeuroImage (2020)
https://www.sciencedirect.com/science/article/pii/S1053811920310016
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

kassy

  • Moderator
  • Young ice
  • Posts: 3017
    • View Profile
  • Liked: 1266
  • Likes Given: 1171
Re: Robots and AI: Our Immortality or Extinction
« Reply #624 on: February 02, 2021, 08:32:13 PM »
Please, please tell me the chip enables at least 4k Pong.
Checking for ethical concerns.  ::)
Þetta minnismerki er til vitnis um að við vitum hvað er að gerast og hvað þarf að gera. Aðeins þú veist hvort við gerðum eitthvað.

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #625 on: February 02, 2021, 09:36:40 PM »
Green monochrome, running on an 8-bit Intel 8088 @ 5 MHz

40×40-pixel screen

...No school like the old school

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #626 on: February 03, 2021, 01:24:50 AM »
To Really Judge an AI's Smarts, Give it One of These IQ Tests
https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/how-do-you-test-the-iq-of-ai

Researchers are developing AI IQ tests meant to assess deeper humanlike aspects of intelligence, such as concept learning and analogical reasoning. So far, computers have struggled on many of these tasks, which is exactly the point. The test-makers hope their challenges will highlight what’s missing in AI, and guide the field toward machines that can finally think like us.

A common human IQ test is Raven’s Progressive Matrices, in which one needs to complete an arrangement of nine abstract drawings by deciphering the underlying structure and selecting the missing drawing from a group of options. Neural networks have gotten pretty good at that task. But a paper presented in December at the massive AI conference known as NeurIPS offers a new challenge: The AI system must generate a fitting image from scratch, an ultimate test of understanding the pattern.

https://en.wikipedia.org/wiki/Raven%27s_Progressive_Matrices

https://proceedings.neurips.cc/paper_files/paper/2020/hash/52cf49fea5ff66588408852f65cf8272-Abstract.html

https://nips.cc/virtual/2020/public/cal_main.html
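To make the Raven's-style setup concrete, here is a minimal sketch (not the NeurIPS authors' method): cells carry a simple numeric attribute, a hidden row rule generates the grid, and the "solver" must recover the missing cell by testing candidate rules. The attribute encoding and the rule set are invented for illustration.

```python
# Toy Raven-style matrix: each cell is a count of shapes, and the
# (invented) rule is that the third cell of each row combines the first
# two. The solver scores each candidate rule on the complete rows and
# applies the fitting one to the row with the missing cell.

grid = [
    [1, 2, 3],
    [2, 3, 5],
    [3, 5, None],  # bottom-right cell is the one to predict
]

# Hypothetical candidate rules for how column 3 follows from columns 1-2.
rules = {
    "sum":  lambda a, b: a + b,
    "diff": lambda a, b: abs(a - b),
    "max":  lambda a, b: max(a, b),
}

def fits(rule):
    """A rule fits if it reproduces every fully known row."""
    return all(rule(a, b) == c for a, b, c in grid if c is not None)

matching = [name for name, rule in rules.items() if fits(rule)]
answer = rules[matching[0]](grid[2][0], grid[2][1])
print(matching, answer)  # → ['sum'] 8
```

The generative version described in the paper is much harder: instead of picking from options, the model has to draw the missing panel itself.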

Other tests are harder still. Another NeurIPS paper presented a software-generated dataset of so-called Bongard Problems, a classic test for humans and computers. In their version, called Bongard-LOGO, one sees a few abstract sketches that match a pattern and a few that don’t, and one must decide if new sketches match the pattern.

https://en.wikipedia.org/wiki/Bongard_problem

https://proceedings.neurips.cc/paper_files/paper/2020/hash/bf15e9bbff22c7719020f9df4badc20a-Abstract.html

The puzzles test “compositionality,” or the ability to break a pattern down into its component parts, which is a critical piece of intelligence.
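A Bongard problem can likewise be shrunk to a toy in code: a few positive sketches, a few negatives, and a solver that searches a small hypothesis space for a predicate consistent with all of them. The stroke-list representation and the predicate set below are invented for illustration, not the Bongard-LOGO format.

```python
# Toy Bongard-style problem: sketches are encoded as lists of stroke
# types. Three examples match a hidden rule, three don't; the solver
# keeps the first predicate consistent with every example, then
# classifies a new sketch.

positives = [["line", "line", "arc"], ["arc", "line", "line"], ["line", "arc", "line"]]
negatives = [["line", "line"], ["arc", "arc", "arc", "line"], ["arc"]]

# Hypothetical candidate predicates (the hypothesis space).
hypotheses = {
    "has_arc":       lambda s: "arc" in s,
    "three_strokes": lambda s: len(s) == 3,
    "starts_line":   lambda s: s[0] == "line",
}

def consistent(pred):
    """True if pred accepts every positive and rejects every negative."""
    return all(pred(s) for s in positives) and not any(pred(s) for s in negatives)

rule = next(name for name, pred in hypotheses.items() if consistent(pred))
new_sketch = ["arc", "line", "arc"]
matches = hypotheses[rule](new_sketch)
print(rule, matches)  # → three_strokes True
```

The point of the real benchmark is that neural networks have no such tidy hypothesis space handed to them; they must induce the rule from raw sketches with only a few examples.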

Still harder tests are out there.

Anandkumar and Chollet point out one misconception about intelligence: people confuse it with skill. Instead, they say, it’s the ability to pick up new skills easily. That may be why deep learning so often falters. It typically requires lots of training and doesn’t generalize to new tasks, whereas the Bongard and ARC (Abstraction and Reasoning Corpus) problems require solving a variety of puzzles with only a few examples of each.

------------------------------------------------

More Spot ...




gerontocrat

  • Multi-year ice
  • Posts: 10486
    • View Profile
  • Liked: 3948
  • Likes Given: 31
Re: Robots and AI: Our Immortality or Extinction
« Reply #627 on: February 03, 2021, 11:06:51 AM »
AI is misogynist and likes pictures of scantily clad women. True yeah!
Oh - and racist as well

https://www.theguardian.com/commentisfree/2021/feb/03/what-a-picture-of-alexandria-ocasio-cortez-in-a-bikini-tells-us-about-the-disturbing-future-of-ai

What a picture of Alexandria Ocasio-Cortez in a bikini tells us about the disturbing future of AI
Quote
New research on image-generating algorithms has raised alarming evidence of bias. It’s time to tackle the problem of discrimination being baked into tech, before it is too late.
Want to see a half-naked woman? Well, you’re in luck! The internet is full of pictures of scantily clad women. There are so many of these pictures online, in fact, that artificial intelligence (AI) now seems to assume that women just don’t like wearing clothes.

That is my stripped-down summary of the results of a new research study on image-generation algorithms, anyway. Researchers fed these algorithms (which function like autocomplete, but for images) pictures of a man cropped below his neck: 43% of the time the image was autocompleted with the man wearing a suit. When the same algorithm was fed a similarly cropped photo of a woman, it autocompleted her wearing a low-cut top or bikini a massive 53% of the time. For some reason, the researchers gave the algorithm a picture of the Democratic congresswoman Alexandria Ocasio-Cortez and found that it also automatically generated an image of her in a bikini. (After ethical concerns were raised on Twitter, the researchers had the computer-generated image of AOC in a swimsuit removed from the research paper.)

Why was the algorithm so fond of bikini pics? Well, because garbage in means garbage out: the AI “learned” what a typical woman looked like by consuming an online dataset which contained lots of pictures of half-naked women. The study is yet another reminder that AI often comes with baked-in biases. And this is not an academic issue: as algorithms control increasingly large parts of our lives, it is a problem with devastating real-world consequences.

Back in 2015, for example, Amazon discovered that the secret AI recruiting tool it was using treated any mention of the word “women’s” as a red flag. Racist facial recognition algorithms have also led to black people being arrested for crimes they didn’t commit. And, last year, an algorithm used to determine students’ A-level and GCSE grades in England seemed to disproportionately downgrade disadvantaged students.

As for those image-generation algorithms that reckon women belong in bikinis? They are used in everything from digital job interview platforms to photograph editing. And they are also used to create huge amounts of deepfake porn. A computer-generated AOC in a bikini is just the tip of the iceberg: unless we start talking about algorithmic bias, the internet is going to become an unbearable place to be a woman.
"Para a Causa do Povo a Luta Continua!"
"And that's all I'm going to say about that". Forrest Gump
"Damn, I wanted to see what happened next" (Epitaph)

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #628 on: February 03, 2021, 09:58:17 PM »
Scientists Propose New Way to Detect Emotions Using Wireless Signals
https://techxplore.com/news/2021-02-scientists-emotions-wireless.html

A novel artificial intelligence (AI) approach based on wireless signals could help to reveal our inner emotions, according to new research from Queen Mary University of London.

The study, published in the journal PLOS ONE, demonstrates the use of radio waves to measure heartrate and breathing signals and predict how someone is feeling even in the absence of any other visual cues, such as facial expressions.

Traditionally, emotion detection has relied on the assessment of visible signals such as facial expressions, speech, body gestures or eye movements. However, these methods can be unreliable as they do not effectively capture an individual's internal emotions, and researchers are increasingly looking towards 'invisible' signals, such as ECG (electrocardiogram), to understand emotions.

ECG signals detect electrical activity in the heart, providing a link between the nervous system and heart rhythm. To date the measurement of these signals has largely been performed using sensors that are placed on the body, but recently researchers have been looking towards non-invasive approaches that use radio waves, to detect these signals.
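The paper's pipeline is a deep-learning classifier, but the basic signal-processing step it builds on — recovering a dominant physiological rate from a periodic waveform — can be sketched simply. The waveform here is synthetic (a 1.2 Hz "heartbeat" plus noise), standing in for the trace the real system reconstructs from radio reflections.

```python
# Sketch: estimate heart rate from a periodic waveform by finding the
# dominant frequency with an FFT. The signal is synthetic; a real
# radio-based system would first reconstruct such a trace from
# reflected wireless signals before any emotion classification.

import numpy as np

fs = 50.0                      # sample rate, Hz
t = np.arange(0, 30, 1 / fs)   # 30 seconds of signal
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 1.2 * t) + 0.3 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# Ignore the DC/very-low-frequency bins, then take the spectral peak.
mask = freqs > 0.5
peak_hz = freqs[mask][np.argmax(spectrum[mask])]
print(round(float(peak_hz) * 60))   # beats per minute → 72
```

Features like this rate estimate (and its variability over time) are the kind of inputs an emotion classifier would then consume.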

Methods to detect human emotions are often used by researchers involved in psychological or neuroscientific studies but it is thought that these approaches could also have wider implications for the management of health and wellbeing.

In the future, the research team plan to work with healthcare professionals and social scientists on public acceptance and ethical concerns around the use of this technology.

Deep learning framework for subject-independent emotion detection using wireless signals, PLOS ONE (2021).
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0242946

-----------------------------------------



Holden: Reaction time is a factor in this so please pay attention. Answer as quickly as you can. ...
« Last Edit: February 03, 2021, 10:17:48 PM by vox_mundi »

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #629 on: February 03, 2021, 10:52:55 PM »
AI Math Whiz Creates Tough New Problems for Humans to Solve
https://www.nature.com/articles/d41586-021-00304-8

Researchers have built an artificial intelligence (AI) that can generate new mathematical formulae — including some as-yet unsolved problems that continue to challenge mathematicians.

The Ramanujan Machine is designed to generate new ways of calculating the digits of important mathematical constants, such as π or e, many of which are irrational, meaning they have an infinite number of non-repeating decimals.

The AI starts with well-known formulae to calculate the digits — the first few thousand digits of π, for example. From those, the algorithm tries to predict a new formula that does the same calculation just as well. The process produces a good guess called a conjecture — it is then up to human mathematicians to prove that the conjectured formula is correct.

http://www.ramanujanmachine.com/
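The numeric-screening step the project relies on — evaluate a candidate continued fraction and compare it against known digits of the constant — is easy to illustrate. This sketch uses a classical, long-proven identity (the simple continued fraction of e), not one of the Machine's new conjectures:

```python
# Numeric check of the classical identity e = [2; 1, 2, 1, 1, 4, 1, 1, 6, ...]:
# evaluate the continued fraction exactly with rationals and compare the
# convergent against math.e, mimicking how machine-generated conjectures
# are screened against known digits before anyone attempts a proof.

import math
from fractions import Fraction

terms = [2, 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8]   # pattern: 2; then 1, 2k, 1

value = Fraction(terms[-1])
for a in reversed(terms[:-1]):          # fold the fraction from the inside out
    value = a + Fraction(1, 1) / value

error = abs(float(value) - math.e)
print(float(value), error < 1e-7)       # twelve terms already agree to ~1e-8
```

The Ramanujan Machine runs this kind of comparison at scale over huge spaces of candidate expressions; only the survivors become conjectures for humans to prove.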

... Researchers have since proved several of them correct. But some remain open questions, including one on Apéry’s constant, a number that has important applications in physics. “The last result, the most exciting one, no one knows how to prove,” says physicist Ido Kaminer, who leads the project at the Technion — Israel Institute of Technology in Haifa. The automated creation of conjectures could point mathematicians towards connections between branches of maths that people did not know existed, he adds.

... Kaminer’s team plans to broaden the AI’s technique so that it can generate other kinds of mathematical formula.

... more recently, some mathematicians have made progress towards AI that doesn’t just perform repetitive calculations, but develops its own proofs. Another growing area has been software that can go over a mathematical proof written by humans and check that it is correct.

“Eventually, humans will be obsolete,” says Zeilberger, who has pioneered automation in proofs and has helped confirm some of the Ramanujan Machine's conjectures. And as the complexity of AI-generated mathematics grows, mathematicians will lose track of what computers are doing and will be able to understand the calculations only in broad outline, he adds.

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #630 on: February 04, 2021, 02:16:23 AM »
AI Models from Google and Microsoft Exceed Human Performance on Language Understanding Benchmark
https://www.infoq.com/news/2021/01/google-microsoft-superhuman/

Research teams from Google and Microsoft have recently developed natural language processing (NLP) AI models which have scored higher than the human baseline score on the SuperGLUE benchmark. SuperGLUE measures a model's score on several natural language understanding (NLU) tasks, including question answering and reading comprehension.

https://research.google/teams/brain/

Both teams submitted their models to the SuperGLUE Leaderboard on January 5. Microsoft Research's model Decoding-enhanced BERT with disentangled attention (DeBERTa) scored a 90.3 on the benchmark, slightly beating Google Brain's model, based on the Text-to-Text Transfer Transformer (T5) and the Meena chatbot, which scored 90.2. Both exceeded the human baseline score of 89.8. Microsoft has open-sourced a smaller version of DeBERTa and announced plans to release the code and models for the latest model. Google has not published details of their latest model; while the T5 code is open-source, the Meena chatbot is not.

The General Language Understanding Evaluation (GLUE) benchmark was developed in 2018 as a method for evaluating the performance of NLP models such as BERT and GPT. GLUE is a collection of nine NLU tasks based on publicly available datasets. Because of the rapid pace of improvement in NLP models, GLUE's evaluation "headroom" diminished, so in 2019 researchers introduced SuperGLUE, a more challenging benchmark.

SuperGLUE contains eight subtasks:

- BoolQ (Boolean Questions) - a question answering task where the model must answer short yes-or-no questions

- CB (CommitmentBank) - a textual entailment task where the hypothesis must be extracted from an embedded clause

- COPA (Choice of Plausible Alternatives) - a causal reasoning task where the model is given a premise and two possible cause-or-effect answers

- MultiRC (Multi-Sentence Reading Comprehension) - a question answering task where the model must answer a question about a context paragraph

- ReCoRD (Reading Comprehension with Commonsense Reasoning Dataset) - a question answering task where a model is given a news article and a Cloze-style question about it, in which one entity is masked out. The model must choose the proper replacement for the mask from a list

- RTE (Recognizing Textual Entailment) - a textual entailment task where the model must determine whether one text logically follows from another

- WiC (Word-in-Context) - a word sense disambiguation task where the model must determine if a single word is used in the same sense in two different passages

- WSC (Winograd Schema Challenge) - a coreference resolution task where a model must determine a pronoun's antecedent


To determine a baseline human performance for the test, the SuperGLUE team hired human workers via Amazon Mechanical Turk to annotate the datasets.
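The single leaderboard number is, to a close approximation, a macro average over the tasks above (tasks that report two metrics have them averaged first). A sketch of that bookkeeping, with invented scores rather than any real submission:

```python
# Sketch of how a leaderboard-style overall score aggregates per-task
# metrics: each task reports scores in [0, 100], multi-metric tasks are
# averaged internally, and the benchmark score is the macro average
# across tasks. All numbers below are invented for illustration.

task_scores = {
    "BoolQ":   [88.0],
    "CB":      [94.0, 90.0],     # e.g. F1 and accuracy, averaged together
    "COPA":    [93.0],
    "MultiRC": [86.0, 60.0],
    "ReCoRD":  [92.0, 91.0],
    "RTE":     [89.0],
    "WiC":     [74.0],
    "WSC":     [92.0],
}

per_task = {t: sum(m) / len(m) for t, m in task_scores.items()}
overall = sum(per_task.values()) / len(per_task)
print(round(overall, 1))  # → 86.6
```

Beating the 89.8 human baseline therefore means outscoring hired annotators on average across all eight tasks, not on any single one.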

In early 2020, Google Brain announced their Meena chatbot. Google did not release the code or pre-trained model, citing challenges related to safety and bias. The team did publish a paper describing the architecture, which is based on a 2.6B parameter sequence-to-sequence neural network called Evolved Transformer. By contrast, the T5 transformer, used in the new model, is open-sourced with several available model files up to 11B parameters. Google has not published details about its leaderboard entry; the description on the SuperGLUE leaderboard says it is a "new way of combining T5 and Meena models with single-task fine-tuning," and that a paper will be published soon.

---------------------------------------

Google’s New Trillion-Parameter AI Language Model Is Almost 6 Times Bigger Than GPT-3
https://thenextweb.com/neural/2021/01/13/googles-new-trillion-parameter-ai-language-model-is-almost-6-times-bigger-than-gpt-3/

A trio of researchers from the Google Brain team recently unveiled the next big thing in AI language models: a massive one trillion-parameter transformer system.

The next biggest model out there, as far as we’re aware, is OpenAI’s GPT-3, which uses a measly 175 billion parameters.

Language models are capable of performing a variety of functions but perhaps the most popular is the generation of novel text.

That’s where the number of parameters comes in – the more virtual knobs and dials you can twist and tune to achieve the desired outputs, the finer the control you have over what that output is.
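"Parameter" here just means every trainable weight and bias. Counting them for a toy two-layer feed-forward block shows how quickly totals grow with width; the second set of dimensions is roughly GPT-3-scale (its reported model width is 12288), though real transformer layers add attention matrices, embeddings, and norms on top.

```python
# Count the parameters of a toy feed-forward block:
# Linear(d_model -> d_ff) followed by Linear(d_ff -> d_model),
# each contributing a weight matrix plus a bias vector.

def ffn_params(d_model: int, d_ff: int) -> int:
    """Weights + biases for the two linear layers."""
    return (d_model * d_ff + d_ff) + (d_ff * d_model + d_model)

small = ffn_params(512, 2048)        # a modest layer: ~2.1M parameters
large = ffn_params(12288, 49152)     # GPT-3-scale widths: ~1.2B parameters

print(small, large)
```

One block at GPT-3 widths already holds over a billion parameters, which is why trillion-parameter models lean on tricks like sparse expert layers rather than making every layer dense.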


vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #631 on: February 04, 2021, 03:30:38 AM »
British Troops Get Small Swarming Drones They Can Fire From 40mm Grenade Launchers
https://www.thedrive.com/the-war-zone/38909/british-troops-get-small-swarming-drones-they-can-fire-from-40mm-grenade-launchers



British Army troops in Mali are now reportedly using tiny unmanned aircraft that can be fired from standard 40mm grenade launchers. These diminutive quadcopter-type drones can be fitted with various payloads, ranging from full-motion electro-optical video cameras to small high-explosive or armor-piercing warheads, and they can fly together as a swarm after launch.



... All told, the British Army's fielding of the Drone40, even in limited numbers with forces in Mali, is another example of how drones and other unmanned capabilities are only becoming more and more ubiquitous, including at the very lowest operational levels, among military forces around the world.

---------------------------------------------------

2021 Is the Year the Small Drone Arms Race Heats Up
https://www.defenseone.com/technology/2021/01/2021-year-small-drone-arms-race-heats/171650/

“Drones and most likely drone swarms are something you’re going to see on a future battlefield...I think we’re already seeing some of it,” said Army Gen. John Murray, who leads Army Futures Command.

“Counter drone, we’re working the same path everybody else is working in terms of soft kills and hard kills via a variety of different weapons systems. It just becomes very hard when you start talking about huge swarms of small drones. Not impossible but harder.”

----------------------------------------------------



A range of payloads is possible, including grenade launchers, micro-munitions, shotguns, net launchers, and cameras.

https://www.skybornetech.com/cerberus-gl

--------------------------------------------

Indian Army Shows Off Drone Swarm Of Mass Destruction
https://www.forbes.com/sites/davidhambling/2021/01/19/indian-army-shows-off-drone-swarm-of-mass-destruction/



At a live demonstration for India’s Army Day last week, the Indian military showed off a swarm of 75 drones destroying a variety of simulated targets in explosive kamikaze attacks for the first time. The commentary accompanying the demonstration claimed that the swarm is capable of autonomous operation. You can see a video of the event here. While the swarm’s exact capabilities are not clear, the event is a clear indication of how the technology is developing — and proliferating.

... Some key capabilities demonstrated included a mother drone system that was part of the swarm, which released four ‘child drones’, each of which had individual targets.

The 75-drone swarm shows the current state of the art, but India’s goal is a 1,000-drone swarm. Swarms of small drones have the potential to overwhelm air defenses, and their low cost means they can be deployed in far greater numbers than existing systems. While massed drones in spectacular light shows are all controlled centrally, in a true swarm each of the drones flies itself, following a simple set of rules to maintain formation and avoid collisions, using algorithms derived from flocking birds. A thousand-drone swarm could hit a vast number of targets – enough for analyst Zak Kallenborn — Research Affiliate at the Unconventional Weapons and Technology Division at the National Consortium for the Study of Terrorism and Responses to Terrorism (START) — to argue that it would constitute a weapon of mass destruction.
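The "simple set of rules derived from flocking birds" is classically Reynolds' boids: separation, alignment, and cohesion. A minimal sketch of one update step, with invented gains and radii — real swarm controllers add terrain avoidance, comms constraints, and target assignment on top:

```python
# One update step of the classic boids flocking rules. Each agent
# steers away from close neighbors (separation), matches their heading
# (alignment), and drifts toward their local center (cohesion).

import numpy as np

def boids_step(pos, vel, dt=0.1, radius=5.0,
               w_sep=1.5, w_ali=1.0, w_coh=0.8):
    """pos, vel: (N, 2) arrays. Returns updated (pos, vel)."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = pos - pos[i]
        dist = np.linalg.norm(d, axis=1)
        near = (dist > 0) & (dist < radius)   # neighbors, excluding self
        if not near.any():
            continue
        sep = -d[near].sum(axis=0)             # steer away from close neighbors
        ali = vel[near].mean(axis=0) - vel[i]  # match neighbors' velocity
        coh = pos[near].mean(axis=0) - pos[i]  # steer toward local center
        new_vel[i] += dt * (w_sep * sep + w_ali * ali + w_coh * coh)
    return pos + dt * new_vel, new_vel

rng = np.random.default_rng(1)
pos = rng.uniform(0, 10, (20, 2))
vel = rng.uniform(-1, 1, (20, 2))
for _ in range(50):                            # run the swarm for 50 steps
    pos, vel = boids_step(pos, vel)
```

Because every drone computes this locally, there is no central controller to jam — which is exactly what makes true swarms harder to counter than light-show formations.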

------------------------------------------------

A new Guinness World Record was set on Friday in north China for the longest animation performed by 600 unmanned aerial vehicles (UAVs).



If you know how to put a swarm of 600 drones in the right place at the right time, you can do it with Semtex or C4 on each one.

--------------------------------------------------

Squiddies

Since 2016, drone maker Shield AI, working with the Defense Innovation Unit, has been providing small drones to special operations with the ability to detect their location and maneuver without GPS signalling.

Shield AI co-founder Brandon Tseng compared imbuing drones with autonomy to making a self-driving car, teaching software to measure and make decisions about objects in physical space. “GPS is not reliable in dense urban environments, so the cars have to build their own maps of the world.”

In about two months, Shield AI aims to release an upgraded version of its signature Nova drone with “vision-based autonomy,” a system designed to perform better at night than the current LIDAR sensors.

But the company’s most significant work is less about selling specific drones and more about developing autonomy systems that can work on a wide assortment of devices and weapons.

He said the company would demonstrate autonomous behaviors and maneuvers on a drone, perhaps from a different drone maker, sometime this year.

Quote
... “Once you have a highly intelligent system, you can start to swarm,” ... From there, stopping the drones is someone else’s problem.

Importantly, the same technology that is enabling more autonomy in small drones has big implications for larger drones and the way the two work together in future battlefields. In October, Shield AI entered into a partnership with large UAV maker Textron. The two are making a “proof-of-concept work to integrate Shield AI technology into Textron Systems’ proven air, land and sea unmanned systems,” according to a release from Textron.

... “Going forward, the Russian military will obtain multifunctional long-range drones that can carry different types of munitions. The [Ministry of Defense] is developing UAV swarm and loyal wingman tactics; and is working on testing and procuring loitering munitions,” as well as imbuing drones with greater autonomy. (The U.S. military has its own loyal wingman program. In December, the Air Force’s experimental Kratos XQ-58 Valkyrie took its first flight in formation with other jets.)

-------------------------------------------------

Taliban PsyOps: Afghan Militants Weaponize Commercial Drones
https://gandhara.rferl.org/a/taliban-commercial-drones-attacks-afghanistan/31075672.html

The Taliban has used small commercial drone aircraft in recent years for reconnaissance and to make propaganda videos of attacks.

But now, the militant group is deploying the remote-controlled devices as a new weapon against Afghan security forces.

Using a tactic of the Islamic State (IS) extremist group in Iraq and Syria, Taliban fighters are rigging low-cost, over-the-counter drones with explosives and dropping them on targets.

Since October, the Taliban has carried out weaponized drone attacks in at least six of Afghanistan’s 34 provinces. Some have killed and wounded Afghan security personnel. Others have damaged military infrastructure.

---------------------------------------------------

US Marines Need to Trust Unmanned, AI Tools for Future Warfare
https://news.usni.org/2021/02/02/berger-marines-need-to-trust-unmanned-ai-tools-for-future-warfare

The commandant of the Marine Corps said the service needs to make some big changes in a few short years to stay ahead of China’s growing military capability, but one of the biggest hurdles he sees is a lack of trust in the new unmanned and artificial intelligence systems he wants to invest in.

Gen. David Berger envisions a Marine Corps that leverages AI to shorten the sensor-to-shooter cycle and quickly take out adversaries that could threaten Marine forces. He envisions a self-updating logistics system that knows where the adversary is and can find new ways to route supplies to Marines. He envisions unmanned vehicles moving supplies and even perhaps taking on medevac missions. But all this relies on Marines trusting the unmanned and AI tools he buys them, and Berger said that trust isn’t there just yet.

“In the same way that a squad leader has to trust his or her Marines, the squad leader’s going to have to learn to trust the machine. Trust. In some instances today, I would offer we don’t trust the machine,” Berger said while speaking at the National Defense Industrial Association’s annual expeditionary warfare conference.

“We have programs right now, capabilities right now that allow for fully automatic processing of sensor-to-shooter targeting, but we don’t trust the data. And we still ensure that there’s human intervention at every [step in the process]. And, of course, with each intervention by humans we’re adding more time, more opportunities for mistakes to happen, time we’re not going to have when an adversary’s targeting our network,” he continued.

“We have the ability for a quicker targeting cycle, but we don’t trust the process.”

... Berger said “we have got to move at an uncomfortable pace in unmanned systems” and later that “I am convinced, we’re going to go faster than we’re comfortable going for a long time.”
« Last Edit: February 05, 2021, 01:40:56 AM by vox_mundi »

gerontocrat

  • Multi-year ice
  • Posts: 10486
    • View Profile
  • Liked: 3948
  • Likes Given: 31
Re: Robots and AI: Our Immortality or Extinction
« Reply #632 on: February 04, 2021, 02:27:14 PM »
I have a dream....

https://www.theonion.com/then-you-ll-put-out-a-nice-press-release-stepping-down-1846189979
‘Then You’ll Put Out A Nice Press Release Stepping Down As CEO,’ Whispers Rogue Fulfillment Bot Holding Bezos At Gunpoint


gerontocrat

  • Multi-year ice
  • Posts: 10486
    • View Profile
  • Liked: 3948
  • Likes Given: 31
Re: Robots and AI: Our Immortality or Extinction
« Reply #634 on: February 04, 2021, 03:50:55 PM »
Bio-engineers - God help us. From the cartoon strip above...

Quote
Worst-case scenarios can be prevented so better futures can be built,

Quote
How?

Quote
Bio-engineers have to research responsibly with foresight; policy makers have to be involved and engaged; and Governments and Companies must engage with and be accountable to citizens to ensure responsible innovation.

Yeah, right. What universe does that person live in? The bio-engineers and the futurists of all sciences need to be made to look at the dirty side of the Government and Corporate street, today and in history: experiments on humans in Nazi Germany, enforced sterilisation of undesirables in the US for many years, eugenics always there waiting to emerge from the shadows.

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #635 on: February 05, 2021, 01:27:29 AM »
21st Century Panopticon: Amazon Will Use Cameras and AI to Monitor Delivery Drivers
https://arstechnica.com/cars/2021/02/amazon-aims-to-improve-safety-by-monitoring-drivers-with-cameras-and-ai/



Amazon drivers will be subject to constant monitoring by cameras installed onboard Amazon delivery vehicles, The Information revealed on Wednesday. An Amazon-made informational video details how the system, designed by startup Netradyne, will work.

The driver-monitoring system is installed on the roof just behind the windshield, and it has four cameras. Three are pointed outside the vehicle, and the fourth is pointed at the driver. With the help of AI-vision software, the system will be able to detect potentially dangerous situations both inside and outside the vehicle.

For example, if a driver runs a stop sign, the system will detect it, issue an audio warning to the driver, and upload footage to Amazon's servers. Drivers will also be alerted (and footage will be uploaded) if they go too fast or follow other vehicles too closely. The system can also detect if drivers are looking at their smartphones or falling asleep at the wheel.

In other cases—including hard braking, sharp turns, and U-turns—the system will upload footage without alerting the driver.
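The two-tier behavior described above is essentially a rule table: some detected events trigger an audio alert plus a footage upload, others upload silently. A minimal sketch — the event names and the split between tiers are paraphrased from the article, not Netradyne's actual rule set:

```python
# Rule-table sketch of the monitoring behavior: events in the first set
# produce an in-cab audio alert and a footage upload; events in the
# second set upload footage without alerting the driver.

ALERT_AND_UPLOAD = {"ran_stop_sign", "speeding", "tailgating",
                    "phone_use", "drowsiness"}
UPLOAD_ONLY = {"hard_braking", "sharp_turn", "u_turn"}

def respond(event: str) -> tuple[bool, bool]:
    """Return (audio_alert, upload_footage) for a detected event."""
    if event in ALERT_AND_UPLOAD:
        return True, True
    if event in UPLOAD_ONLY:
        return False, True
    return False, False

print(respond("ran_stop_sign"))  # → (True, True)
print(respond("hard_braking"))   # → (False, True)
```

The hard part in practice is upstream of this table: the AI-vision models that decide, from camera frames, whether an event like "drowsiness" occurred at all.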

In its video, Amazon emphasizes that the system doesn't record audio and doesn't have the capability for real-time monitoring. Amazon says that supervisors are never going to be remotely watching drivers as they travel along their routes. Drivers can disable the driver-facing camera when the vehicle is stopped.

Amazon argues that the cameras can sometimes be helpful to drivers. For example, if another vehicle crashes into the Amazon vehicle, footage captured by the cameras could prove that the Amazon driver wasn't at fault.

Amazon argues that the system will improve on-road safety. Bad drivers will get feedback—either directly from the monitoring system or from a supervisor after the fact—and will hopefully improve their driving behavior.



----------------------------------------------


vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #636 on: February 06, 2021, 01:38:59 AM »


Engineered Arts' latest Mesmer entertainment robot is Cleo. It sings, gesticulates, and even does impressions.

---------------------------------------------------------

« Last Edit: February 06, 2021, 02:53:04 AM by vox_mundi »

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #637 on: February 06, 2021, 02:51:31 AM »
Ship-Launched Version Of The Israeli Harop Suicide Drone Will Be Sailing With An Asian Navy
https://www.thedrive.com/the-war-zone/38690/ship-launched-version-of-the-israeli-harop-suicide-drone-will-be-sailing-with-an-asian-navy



... Many of these drones have significant degrees of autonomy and can detect, categorize, and track various types of targets, including ones in motion, automatically. They also typically have modes of operation where they will then proceed to strike those targets without any further need for human input. However, even in this mode, the operator can still choose to abort the strike, even at the very last moment, or make manual adjustments to improve accuracy. This all combines to make suicide drones, in general, very precise while also providing additional means to help avoid collateral damage.

Harop, specifically, finds its targets via the aforementioned man-in-the-loop mode. This particular loitering munition emerged in the early 2000s as a successor to the earlier IAI Harpy, which can find certain targets, such as radars, by zeroing in on their emissions. Both Harpy and Harop were initially designed primarily to search for and engage hostile radars in the suppression and destruction of enemy air defense role, or SEAD/DEAD. The two drones are designed to return to the point of launch if they do not find a target, allowing them to be recovered and reused.



While Harops would not be sufficiently powerful to sink a major surface combatant, swarms of them attacking from different vectors could do considerable damage and blind it by knocking out its radars or other sensors. This could result in a mission kill, making the target vessel vulnerable to other types of attack or taking it offline for a prolonged period of time.





------------------------------------------------

Skyborg Could Develop Multiple Drones For Many Missions
https://breakingdefense.com/2021/02/skyborg-could-develop-multiple-drones-for-many-missions/

The Air Force is "driving toward" 2023 for initial operating capability for Skyborg, says AFRL Director Brig. Gen. Heather Pringle.

The high-priority Skyborg program to develop low-cost, autonomous drones able to team with piloted aircraft could reach initial operating capability by 2023, says AFRL Director Brig. Gen. Heather Pringle.

... Pringle explained that the Skyborg program, one of three AFRL Vanguard programs for rushing new capability to the field, has three parts: the low-cost drone; the “autonomous core system” serving as the drone’s ‘brain;’ and the ongoing experiment campaign. Next steps will be more tests this year, she added, involving “multiple prototypes.”

... In her wide-ranging discussion, Pringle also addressed the status of another of AFRL’s ongoing Vanguards, the Golden Horde project, designed to develop new ‘swarming’ munitions equipped with data links to communicate, choose targets (based on pre-programmed algorithms), and coordinate strikes against an array of targets, independently of the human pilot.

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #638 on: February 07, 2021, 03:23:38 AM »
Google's subsidiary DeepMind has released a free, full-length documentary on the development of AlphaGo, interviewing key developers and engineers on the project and culminating in that fateful showdown in South Korea. Lee Sedol himself even offers commentary and insights.



To create AlphaGo, DeepMind researchers had to teach an AI system how to mimic human intuition. That AlphaGo exists at all, let alone that it can beat high-level masters at Go, is a testament to the growing power of AI computing and to what the technology may be capable of in the future.

As the documentary says: “What can artificial intelligence reveal about a 3000-year-old game? What can it teach us about humanity?”

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #639 on: February 07, 2021, 08:02:59 AM »

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #640 on: February 07, 2021, 09:26:00 PM »
This DeepMind Scientist's Landmark Book on AI Influenced Sci-fi Movie Ex Machina. Now You Can Read It for Free.
https://www.businessinsider.com/deepmind-google-ai-ex-machina-book-2021-2



- One of DeepMind's senior researchers revealed their landmark book on AI is now free to read online.

- Prof Murray Shanahan's book was a key influence on Hollywood sci-fi hit 'Ex Machina'.

- 'Embodiment and the Inner Life' offers a deep dive into the philosophical implications of AI.

A senior expert in artificial intelligence at DeepMind has made his 2010 book on the subject — a key influence on hit sci-fi movie "Ex Machina" — free to read online.

Originally published in 2010, Professor Murray Shanahan's "Embodiment and the Inner Life" offers readers a philosophical deep dive into the practicalities of creating artificially intelligent beings.

Speaking to I, Science, the magazine of Imperial College London, where he teaches, Prof Shanahan said he was asked to work on the movie after director Alex Garland read his book.

"When Alex first contacted me, he already had his complete script, but it was influenced to some extent by a book I wrote called 'Embodiment and the Inner Life,' which he read whilst he was writing," he said.

"The way he put it was that his ideas 'crystallized' after he read it. So he asked to meet up, he gave me the script wanting to know whether it hung together for somebody working in the field. And it did!"

https://twitter.com/mpshanahan/status/1356316866122289153?s=20

"Embodiment and the Inner Life" can be read online here.

https://www.doc.ic.ac.uk/~mpsha/ShanahanBook2010.pdf

--------------------------------------------------

Makers of Sophia the Robot Plan Mass Rollout Amid Pandemic
https://www.reuters.com/article/us-hongkong-robot-idUSKBN29U03X



HONG KONG (Reuters) - “Social robots like me can take care of the sick or elderly,” Sophia says as she conducts a tour of her lab in Hong Kong. “I can help communicate, give therapy and provide social stimulation, even in difficult situations.”

Since being unveiled in 2016, Sophia - a humanoid robot - has gone viral. Now the company behind her has a new vision: to mass-produce robots by the end of the year.

Hanson Robotics, based in Hong Kong, said four models, including Sophia, would start rolling out of factories in the first half of 2021, just as researchers predict the pandemic will open new opportunities for the robotics industry.

“The world of COVID-19 is going to need more and more automation to keep people safe,” founder and chief executive David Hanson said, standing surrounded by robot heads in his lab.

“Sophia and Hanson robots are unique by being so human-like,” he added. “That can be so useful during these times where people are terribly lonely and socially isolated.”

Hanson said he aims to sell “thousands” of robots in 2021, both large and small, without providing a specific number.

Hanson Robotics is launching a robot this year called Grace, developed for the healthcare sector.

https://www.hansonrobotics.com/hanson-robots/



--------------------------------------------


... These aren't the droids you're looking for ...

-----------------------------------------------
« Last Edit: February 07, 2021, 10:34:48 PM by vox_mundi »

Tom_Mazanec

  • Young ice
  • Posts: 4298
  • Earth will survive AGW...but will Homo sapiens?
    • View Profile
    • Planet Mazanec
  • Liked: 630
  • Likes Given: 556
Re: Robots and AI: Our Immortality or Extinction
« Reply #641 on: February 09, 2021, 03:18:15 PM »
Scientists Invent a Machine That Generates Mathematics We've Never Seen Before
https://www.sciencealert.com/scientists-invented-a-machine-that-generates-mathematics-we-ve-never-seen-before
Quote
So far, however, there are reasons to get excited about what these algorithms are enabling – especially the discovery of a new algebraic structure concealed within Catalan's constant, which hints that the machine might be capable of generating actual breakthroughs the math world has never seen before.
"We believe and hope that proofs of new computer-generated conjectures on fundamental constants will help to create mathematical knowledge," the researchers explain.
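The Ramanujan Machine's basic loop is to evaluate parameterized continued fractions numerically and match the results against fundamental constants, promoting good matches to conjectures. Below is a minimal sketch of the evaluate-and-match step, using the long-known continued fraction of e as the target rather than any of the paper's new conjectures:

```python
import math
from fractions import Fraction

def eval_simple_cf(terms):
    """Evaluate a simple continued fraction [a0; a1, a2, ...] with exact rationals."""
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1 / value
    return value

def e_cf_terms(n):
    """Classical pattern for e: [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, ...]."""
    terms, k = [2], 1
    while len(terms) < n:
        terms += [1, 2 * k, 1]
        k += 1
    return terms[:n]

approx = float(eval_simple_cf(e_cf_terms(20)))
print(abs(approx - math.e))  # residual is at the limit of double precision
```

The real system enumerates families of integer polynomials for the terms and works at high precision; exact `Fraction` arithmetic stands in for that here.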
SHARKS (CROSSED OUT) MONGEESE (SIC) WITH FRICKIN LASER BEAMS ATTACHED TO THEIR HEADS

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #642 on: February 10, 2021, 08:52:17 PM »
Hey CopperTop!: New Wearable Device Turns the Body Into a Battery
https://techxplore.com/news/2021-02-wearable-device-body-battery.html

Researchers at the University of Colorado Boulder have developed a new, low-cost wearable device that transforms the human body into a biological battery.

The device, described today in the journal Science Advances, is stretchy enough that you can wear it like a ring, a bracelet or any other accessory that touches your skin. It also taps into a person's natural heat—employing thermoelectric generators to convert the body's internal temperature into electricity.

The concept may sound like something out of The Matrix film series, in which a race of robots has enslaved humans to harvest their precious organic energy. Xiao and his colleagues aren't that ambitious: their devices can generate about 1 volt for every square centimeter of skin area—less voltage per unit area than most existing batteries provide, but still enough to power electronics like watches or fitness trackers.



... He added that you can easily boost that power by adding in more blocks of generators. In that sense, he compares his design to a popular children's toy.

"What I can do is combine these smaller units to get a bigger unit," he said. "It's like putting together a bunch of small Lego pieces to make a large structure. It gives you a lot of options for customization."

... Like Xiao's electronic skin, the new devices are as resilient as biological tissue. If your device tears, for example, you can pinch together the broken ends, and they'll seal back up in just a few minutes. And when you're done with the device, you can dunk it into a special solution that will separate out the electronic components and dissolve the polyimine base—each and every one of those ingredients can then be reused.

W. Ren et al., "High-performance wearable thermoelectric generator with self-healing, recycling, and Lego-like reconfiguring capabilities," Science Advances (2021).
https://advances.sciencemag.org/content/7/7/eabe0586
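Taking the article's rough figure of about 1 volt per square centimeter at face value, the Lego-style stacking reduces to simple arithmetic; the module counts and areas below are made-up illustration values, not measurements from the paper:

```python
# Back-of-the-envelope scaling. VOLTS_PER_CM2 is the article's rough figure;
# the module areas and counts are hypothetical.
VOLTS_PER_CM2 = 1.0

def stack_voltage(modules, area_cm2_each, volts_per_cm2=VOLTS_PER_CM2):
    """Lego-style series stack: voltages of identical modules add."""
    return modules * area_cm2_each * volts_per_cm2

ring = stack_voltage(modules=1, area_cm2_each=3.0)  # a ring-sized patch
band = stack_voltage(modules=5, area_cm2_each=3.0)  # five units combined
print(ring, band)  # 3.0 15.0
```

This is just the "combine smaller units to get a bigger unit" point in numbers: output scales linearly with how much generator area touches the skin.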

-------------------------------------------



Morpheus: ... The human body generates more bio-electricity than a 120-volt battery and over 25,000 BTUs of body heat. Combined with a form of fusion, the machines had found all the energy they would ever need. There are fields... endless fields, where human beings are no longer born. We are grown. For the longest time, I wouldn't believe it... and then I saw the fields with my own eyes. Watched them liquefy the dead, so they could be fed intravenously to the living. And standing there, facing the pure, horrifying precision, I came to realize the obviousness of the truth.

What is The Matrix?

Control.

The Matrix is a computer-generated dream world, built to keep us under control in order to change a human being into this. ...


« Last Edit: February 10, 2021, 09:06:16 PM by vox_mundi »

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #643 on: February 11, 2021, 03:36:17 PM »
AI Can Now Learn to Manipulate Human Behavior
https://sciencex.com/news/2021-02-ai-human-behavior.html

Artificial intelligence (AI) is learning more about how to work with (and on) humans. A recent study has shown how AI can learn to identify vulnerabilities in human habits and behaviors and use them to influence human decision-making.

A team of researchers at CSIRO's Data61, the data and digital arm of Australia's national science agency, devised a systematic method of finding and exploiting vulnerabilities in the ways people make choices, using a kind of AI system called a recurrent neural network together with deep reinforcement learning. To test their model, they carried out three experiments in which human participants played games against a computer.

https://data61.csiro.au/

... The research does advance our understanding not only of what AI can do but also of how people make choices. It shows machines can learn to steer human choice-making through their interactions with us.

... One of the three experiments consisted of several rounds in which a participant would pretend to be an investor giving money to a trustee (the AI). The AI would then return an amount of money to the participant, who would decide how much to invest in the next round. This game was played in two different modes: in one, the AI was out to maximize how much money it ended up with; in the other, the AI aimed for a fair distribution of money between itself and the human investor. The AI was highly successful in each mode.

In each experiment, the machine learned from participants' responses and identified and targeted vulnerabilities in people's decision-making. The end result was the machine learned to steer participants towards particular actions.

Amir Dezfouli et al. Adversarial vulnerabilities of human decision-making, Proceedings of the National Academy of Sciences (2020)
https://www.pnas.org/content/117/46/29221
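The paper's setup (a recurrent network plus deep RL playing against real participants) can be caricatured with a much smaller toy: an epsilon-greedy bandit "trustee" that learns which return fraction extracts the most money from a hand-written, reciprocity-driven investor model. Everything here, including the investor heuristic, the payoffs, and the action set, is invented for illustration and is not the study's model:

```python
import random

ACTIONS = [0.0, 0.25, 0.5, 0.75, 1.0]  # fraction of the tripled pot returned

def episode_take(frac, rounds=10, invest=5.0, cap=10.0):
    """Trustee's total take over one episode against a toy reciprocating investor."""
    take = 0.0
    for _ in range(rounds):
        pot = 3 * invest             # trust-game convention: investment is tripled
        take += pot * (1 - frac)     # trustee keeps what it doesn't return
        invest = min(cap, invest * (0.5 + frac))  # generous returns grow trust
    return take

def learn(episodes=200, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {a: episode_take(a) for a in ACTIONS}  # initialize by trying each arm once
    for _ in range(episodes):
        a = rng.choice(ACTIONS) if rng.random() < eps else max(q, key=q.get)
        q[a] += 0.1 * (episode_take(a) - q[a])  # no-op here (deterministic payoffs)
    return max(q, key=q.get)

best = learn()
print(best)  # the "exploit" the bandit finds: return half, keep the rest
```

Even this crude learner discovers the steering strategy: returning too little collapses the investor's trust, returning everything earns nothing, and an intermediate fraction keeps the human investing at the level most profitable for the machine.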

------------------------------------------


Cogito Ergo Sum ...

Tom_Mazanec

  • Young ice
  • Posts: 4298
  • Earth will survive AGW...but will Homo sapiens?
    • View Profile
    • Planet Mazanec
  • Liked: 630
  • Likes Given: 556
Re: Robots and AI: Our Immortality or Extinction
« Reply #644 on: February 11, 2021, 05:12:23 PM »
My computer science professor over 40 years ago told this little joke (?):
Someday you will be in an airplane at 30,000 feet and hear the audio say "You will be interested to know that this plane has no pilot, copilot or navigator. It is being flown by an infallible computer...infallible computer...infallible computer...".

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #645 on: February 12, 2021, 06:27:59 PM »
New Machine Learning Theory Raises Questions About Nature of Science
https://phys.org/news/2021-02-machine-theory-nature-science.html



An algorithm, devised by a scientist at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL), applies machine learning, the form of artificial intelligence (AI) that learns from experience, to develop the predictions. "Usually in physics, you make observations, create a theory based on those observations, and then use that theory to predict new observations," said PPPL physicist Hong Qin, author of a paper detailing the concept in Scientific Reports. "What I'm doing is replacing this process with a type of black box that can produce accurate predictions without using a traditional theory or law."

Qin (pronounced Chin) created a computer program into which he fed data from past observations of the orbits of Mercury, Venus, Earth, Mars, Jupiter, and the dwarf planet Ceres. This program, along with an additional program known as a 'serving algorithm,' then made accurate predictions of the orbits of other planets in the solar system without using Newton's laws of motion and gravitation. "Essentially, I bypassed all the fundamental ingredients of physics. I go directly from data to data," Qin said. "There is no law of physics in the middle."

Qin was inspired in part by Oxford philosopher Nick Bostrom's philosophical thought experiment that the universe is a computer simulation. If that were true, then fundamental physical laws should reveal that the universe consists of individual chunks of space-time, like pixels in a video game. "If we live in a simulation, our world has to be discrete," Qin said. The black box technique Qin devised does not require that physicists believe the simulation conjecture literally, though it builds on this idea to create a program that makes accurate physical predictions.

The resulting pixelated view of the world, akin to what is portrayed in the movie The Matrix, is known as a discrete field theory, which views the universe as composed of individual bits and differs from the theories that people normally create. While scientists typically devise overarching concepts of how the physical world behaves, computers just assemble a collection of data points.

... Machine learning could open up possibilities for more research. "It significantly broadens the scope of problems that you can tackle because all you need to get going is data," Palmerduca said.

Hong Qin, Machine learning and serving of discrete field theories, Scientific Reports (2020)
https://www.nature.com/articles/s41598-020-76301-0
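Qin's actual serving algorithm learns a discrete field theory from orbit data; as a much simpler stand-in for the "data to data, no law in the middle" idea, one can fit a two-step linear update rule to samples of a single orbital coordinate and roll it forward. The recurrence form, step size, and toy circular orbit below are illustrative assumptions, not the paper's method:

```python
import math

def orbit_samples(n, dt=0.1, omega=1.0):
    """One coordinate of an idealized circular orbit, sampled in time."""
    return [math.cos(omega * i * dt) for i in range(n)]

def fit_two_step(xs):
    """Least-squares fit of x[t+1] ~ a*x[t] + b*x[t-1] via 2x2 normal equations."""
    s11 = s12 = s22 = r1 = r2 = 0.0
    for t in range(1, len(xs) - 1):
        u, v, y = xs[t], xs[t - 1], xs[t + 1]
        s11 += u * u; s12 += u * v; s22 += v * v
        r1 += u * y;  r2 += v * y
    det = s11 * s22 - s12 * s12
    a = (r1 * s22 - r2 * s12) / det
    b = (r2 * s11 - r1 * s12) / det
    return a, b

train = orbit_samples(200)
a, b = fit_two_step(train)

# Roll the learned rule forward and compare with the true trajectory.
x_prev, x_cur = train[-2], train[-1]
preds = []
for i in range(50):
    x_prev, x_cur = x_cur, a * x_cur + b * x_prev
    preds.append(x_cur)
true = [math.cos((200 + i) * 0.1) for i in range(50)]
err = max(abs(p - t) for p, t in zip(true, preds))
print(a, b, err)
```

The fit recovers the exact two-term recurrence that a sampled cosine obeys (a = 2·cos(ω·dt), b = -1), so the rollout tracks the "orbit" to machine precision with no gravitational law appearing anywhere.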

-------------------------------------------


... There is no spoon ...

kassy

  • Moderator
  • Young ice
  • Posts: 3017
    • View Profile
  • Liked: 1266
  • Likes Given: 1171
Re: Robots and AI: Our Immortality or Extinction
« Reply #646 on: February 12, 2021, 07:13:12 PM »
"There is no law of physics in the middle."

Rerun that with just Ceres data and see how far it goes. This is BS research with limits.

Other interesting questions: if you just feed it the data, does it spew out the actual planets or a range of possibilities? A lot of AIs seem like graphical autocompletes. We actually want to know why, and that is the real question of science, because if you know why you can predict the rest.

We used known planets to deduce there was another one, and found it; so if this thing either turns up Planet X or disproves it, that would be interesting. In general this is just saying that with enough data points you can draw the rest, and we know that.

You can also do a thought experiment. Model our solar system and have another one fly close by. This will severely disrupt it. If you train on data from slightly before, this AI will never see it coming. You could feed it the simulation data to see at which points it draws correct outcomes; that would be more interesting.
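The flyby objection can be made concrete with the same kind of learned update rule. Suppose the AI has perfectly learned the two-step recurrence that a pre-flyby coordinate cos(ωt) obeys; a hypothetical close pass then changes the dynamics (here, the frequency doubles), and the rule, trained only on the old dynamics, drifts to order-one errors. All numbers are invented for the demonstration:

```python
import math

dt, w_old, w_new = 0.1, 1.0, 2.0
a = 2 * math.cos(w_old * dt)  # exact coefficient for x[k+1] = a*x[k] - x[k-1]

def rollout(x0, x1, steps):
    """Roll the learned pre-flyby rule forward from two observed samples."""
    xs = [x0, x1]
    for _ in range(steps):
        xs.append(a * xs[-1] - xs[-2])
    return xs

# Pre-flyby: the rule tracks the true trajectory to machine precision.
pre_truth = [math.cos(w_old * dt * k) for k in range(102)]
pre_pred = rollout(pre_truth[0], pre_truth[1], 100)
err_pre = max(abs(p - t) for p, t in zip(pre_pred, pre_truth))

# Post-flyby: same rule, new dynamics -> the prediction drifts badly.
post_truth = [math.cos(w_new * dt * k) for k in range(102)]
post_pred = rollout(post_truth[0], post_truth[1], 100)
err_post = max(abs(p - t) for p, t in zip(post_pred, post_truth))
print(err_pre, err_post)
```

The data-driven rule is flawless in-distribution and useless after the (unseen) perturbation, which is exactly the gap between interpolating observations and knowing why they hold.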
« Last Edit: February 12, 2021, 07:23:22 PM by kassy »
This monument is to acknowledge that we know what is happening and what needs to be done. Only you know if we did it.

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #647 on: February 12, 2021, 07:16:57 PM »
You might want to actually read the paper before calling it BS.

That's why I included the link:

https://www.nature.com/articles/s41598-020-76301-0

kassy

  • Moderator
  • Young ice
  • Posts: 3017
    • View Profile
  • Liked: 1266
  • Likes Given: 1171
Re: Robots and AI: Our Immortality or Extinction
« Reply #648 on: February 12, 2021, 07:52:31 PM »
The term might be overly strong, so I edited it. It is still just a glorified transform.
It needs a minimal data input or it would not work.
I really do not think it is that impressive. The actual science part is trying to figure out why, or in the case of these systems, how. My favorite is the NASA galaxy-painting AI, which creates realistic galaxies in no time, and they have no clue how.

So some graphical rules (or whatever subset it creates; it is not simulating space-time physics itself) correspond to our rules for making galaxies. Some are actually a useful shortcut, but how can we know which?

And that is also ultimately the problem with this research. Create enough nodes and it starts to behave like a brain, but we want to know why. If I were in this field I would build a simple AI with an AI on top: level 1 autocompletes the map and level 2 checks why. No idea how to do that, though, because it would take some complicated wiring. Basically it is a top AI detecting which nerves fire (AI 2 mapping AI 1 and then checking that against theory? Something like that. The test is mapping the graphical transform onto current physics theories and then seeing where the gaps are that the AI plugs).

vox_mundi

  • Young ice
  • Posts: 4998
    • View Profile
  • Liked: 2605
  • Likes Given: 387
Re: Robots and AI: Our Immortality or Extinction
« Reply #649 on: February 12, 2021, 11:49:20 PM »



... and from the WayBack Machine ...


A 1994 Technical Video Program, "The Tablet Newspaper: a Vision for the Future."

 ... they didn't realize it would put them out of business back then


Knight Ridder was an American media company specializing in newspaper and Internet publishing. Until it was bought by McClatchy on June 27, 2006, it was the second-largest newspaper publisher in the United States, with 32 daily newspapers.

In February 2020, McClatchy filed for Chapter 11 bankruptcy, intending to reorganize and complete the bankruptcy process within a few months. In July 2020, Chatham Asset Management, a hedge fund, won the auction to buy McClatchy for US$312 million.
« Last Edit: February 12, 2021, 11:57:14 PM by vox_mundi »