
Author Topic: Robots and AI: Our Immortality or Extinction  (Read 69869 times)

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #650 on: February 13, 2021, 04:38:42 PM »
Missouri Bill Would Welcome Our Robot (Delivery) Overlords
https://m.riverfronttimes.com/newsblog/2021/02/11/missouri-bill-would-welcome-our-robot-delivery-overlords



A proposed Missouri law that would open the state's sidewalks and roadways to robotic delivery vehicles drew support this week from Amazon and FedEx, both of which are developing their own delivery bots and have backed similar legislation in multiple states.

First Corporations; Now Robots

If passed, House Bill 592 would give the robots, whether self-driven or piloted by a person, "all of the rights and responsibilities as a pedestrian," according to the bill text. The robots would be free to roll on "any roadway of any county or municipality in the state," as long as they maintain $100,000 in liability insurance and don't "unreasonably interfere with motor vehicles or traffic."

https://house.mo.gov/Bill.aspx?bill=HB592&year=2021&code=R

According to the bill, local governments would be explicitly restricted from enacting or enforcing laws that would limit the robots' "hours or zones of operation" or the type of property transported — with the exception of hazardous materials, which are prohibited under the bill. The provisions would also prevent local governments from enacting laws relating to robots' design, manufacture, maintenance, licensing, taxation and insurance.

------------------------------------------------

These Robots Have Made 1 Million Autonomous Deliveries



------------------------------------------
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #651 on: February 13, 2021, 04:41:41 PM »
Drone Swarms Are Getting Too Fast For Humans To Fight, U.S. General Warns
https://www.forbes.com/sites/davidhambling/2021/01/27/drone-swarms-are-getting-too-fast-for-humans-too-fight-us-general-warns/

General John Murray, head of Army Futures Command, told a webinar audience at the Center for Strategic & International Studies that humans may not be able to fight swarms of enemy drones, and that the rules governing human control over artificial intelligence might need to be relaxed.

"When you are defending against a drone swarm, a human may be required to make that first decision, but I am just not sure any human can keep up," said Murray. "How much human involvement do you actually need when you are [making] nonlethal decisions from a human standpoint?"

This indicates a new interpretation of the Pentagon’s rules on the use of autonomous weapons. These require meaningful human control over any lethal system, though that may be in a supervisory role rather than direct control – termed ‘human-on-the-loop’ rather than ‘human-in-the-loop.’ 

Murray said that Pentagon leaders need to lead a discussion on how much human control of AI is needed to be safe but still effective, especially in the context of countering new threats such as drone swarms. Such swarms are likely to synchronize their attacks so the assault comes in from all directions at once, with the aim of overwhelming air defenses. Military swarms of a few hundred drones have already been demonstrated; in the future we are likely to see swarms of thousands, or more. One U.S. Navy project envisages having to counter up to a million drones at once.

The U.S. Army is spending a billion dollars on new air defense vehicles known as IM-SHORAD with cannon, two types of missile, jammers, and future options of laser and interceptor drones. Using the right weapon against the right target at the right time will be vital. Faced with large numbers of incoming threats, many of which may be decoys, human gunners are likely to be overtaxed. Murray said that the Army’s standard test involving flashcard identification requires an 80% pass rate. During the recent Project Convergence exercise, artificial intelligence software boosted this to 98% or 99%, according to Murray.

This is not the first time that Army Futures Command has suggested that humans on their own may be outclassed. In a briefing on the DARPA-Army program called SESU (System-of-Systems Enhanced Small Unit), which teamed infantry with a mix of drones and ground robots, scientists noted that the human operators kept wanting to interfere with the robots' actions. Attempts to micromanage the machines degraded their performance.

... AI is in the ascendant. The 5-0 victory over a human pilot in a virtual dogfight last August is still being debated, but there is no doubting that machines have faster reflexes and an ability to keep track of several things at once, and are not troubled by the fatigue or fear that can lead to poor decisions in combat. ... If AI-controlled weapons can defeat those operated by humans, then whoever has the AIs will win, and failing to deploy them means accepting defeat.

Debate still swirls around this topic. The emergence of drone swarms and other types of weapons that cannot be defeated by humans alone will crystallize it. However, it is not clear whether the legal argument will be able to keep up with technology, given how long it has already been going on. At this rate, large-scale AI-powered swarm weapons may be used in action before the debate is concluded. The big question is which nations will have them first.

------------------------------------------------


... wishful thinking by marketing; looks like someone played too many games of Missile Command
https://www.retrogamer.net/retro_games80/missile-command/

------------------------------------------------

Robot Motherships To Launch Drone Swarms From Sea, Underwater, Air And Near-Space
https://www.forbes.com/sites/davidhambling/2021/02/05/robot-motherships-to-launch-drone-swarms-from-sea-underwater-air-and-near-space/

Last week Louisiana-based shipbuilder Metal Shark announced that the U.S. Marine Corps had selected them to develop a Long Range Unmanned Surface Vessel (LRUSV), an 11-meter robot boat capable of operating autonomously and launching loitering munitions to attack targets at sea and on land. The unmanned boat is just the latest of a series of new platforms for launching drone swarms.

... It will “collaboratively interact with other vessels as a cluster,” suggesting that numbers of LRUSV would be deployed together. Such a cluster could unleash a swarm of dozens, hundreds or even thousands of small drones to overwhelm a target.

In 2019, budget documents revealed that the next phase of LOCUST (the Navy's Low-Cost UAV Swarming Technology program) would see the swarms launched from robot submarines. The U.S. Navy already launches aerial drones as scouts from submarines, so this is a small technological step. It would fit in well with the giant new Orca and Snakehead robot submarines, long-range vessels big enough to deploy swarms of drones.

... In 2017, the Pentagon demonstrated F/A-18s releasing 103 small Perdix drones which then networked together into a swarm to carry out a mission. Again, an unmanned platform might be more useful.

... “Right now we’re on the wrong side of the cost imposition curve because this technology favors the attacker, not the defender” ...

These various projects all suggest that the same idea has taken root across several services: that swarming drones now represent a powerful new capability. One U.S. general recently suggested that they may become impossible to counter without AI or machine assistance.
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #652 on: February 15, 2021, 04:23:21 PM »
Thought-Detection: AI Has Infiltrated Our Last Bastion of Privacy
https://venturebeat.com/2021/02/13/thought-detection-ai-has-infiltrated-our-last-bastion-of-privacy/amp/

Our thoughts are private – or at least they were. New breakthroughs in neuroscience and artificial intelligence are changing that assumption, while at the same time inviting new questions around ethics, privacy, and the horizons of brain/computer interaction.

----------------------------------------------

Brain-Drone Races: USF Students Fly Drones Using Their Minds
https://wusfnews.wusf.usf.edu/university-beat/2019-04-10/talk-about-brain-power-usf-students-fly-drones-using-their-minds

https://mobile.twitter.com/wusfschreiner/status/1115733883402752000



https://www.auvsi.org/industry-news/university-south-floridas-brain-drone-race-welcomes-diversity-and-inclusivity



-----------------------------------------------
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #653 on: February 20, 2021, 11:34:22 AM »
Do As AI Say: Susceptibility In Deployment of Clinical Decision-Aids
https://www.nature.com/articles/s41746-021-00385-9

Abstract: Artificial intelligence (AI) models for decision support have been developed for clinical settings such as radiology, but little work evaluates the potential impact of such systems. In this study, physicians received chest X-rays and diagnostic advice, some of which was inaccurate, and were asked to evaluate advice quality and make diagnoses. All advice was generated by human experts, but some was labeled as coming from an AI system.

As a group, radiologists rated advice as lower quality when it appeared to come from an AI system; physicians with less task-expertise did not. Diagnostic accuracy was significantly worse when participants received inaccurate advice, regardless of the purported source. This work raises important considerations for how advice, AI and non-AI, should be deployed in clinical environments.


------------------------------------------

Why Developing AI to Defeat Us May Be Humanity’s Only Hope
https://thenextweb.com/neural/2021/02/18/why-developing-ai-defeat-us-humanitys-only-hope/

One glance at the state of things and it’s evident humanity’s evolved itself into a corner. On the one hand, we’re smart enough to create machines that learn. On the other, people are dying in Texas because elected officials want to keep the government out of Texas. Chew on that for a second.

What we need is not a superhero, but a better villain.

Humans fight. Whether you believe it’s an inalienable part of our mammalian psyche or that we’re capable of restraint, but unwilling, the fact we’re a violent species is inescapable.

And it doesn’t appear that we’re getting better as we evolve. Researchers from the University of Iowa conducted a study on existing material covering ‘human aggression’ in 2002 and their findings, as expected, painted a pretty nasty picture of our species:

... In its most extreme forms, aggression is human tragedy unsurpassed. Hopes that the horrors of World War II and the Holocaust would produce a worldwide revulsion against killing have been dashed. Since World War II, homicide rates have actually increased rather than decreased in a number of industrialized countries, most notably the United States.

The rational end game for humanity is self-wrought extinction. Whether via climate change or mutually assured destruction through military means, we’ve entered a gridlock against progression.

Luckily for us, humans are highly adaptive creatures. There’s always hope we’ll find a way to live together in peace and harmony. Typically, these hopes are abstract – if we can just solve world hunger with a food replication machine like Star Trek then maybe, just maybe, we can achieve peace.

But the entire history of humanity is evidence against that ever happening. We are violent and competitive. After all, we have the resources to feed everyone on the planet right now. We’re just choosing not to.

That’s why we need a better enemy. Choosing ourselves as our greatest enemy is self-defeating and stupid, but nobody else has stepped up. We’re even starting to kick the coronavirus’ ass at this point.

Simply put: we need the aliens from the movie Independence Day to come down and just attack the crap out of us.

Or… killer robots

Just to be clear, we’re not advocating for extraterrestrials to come and exterminate us. We just need to focus all of our adaptive intelligence on an enemy other than ourselves.

In artificial intelligence terms, we need a real-world generative adversarial network where humans are the learners and aliens are the discriminators. ... Anything less than total cooperation and our species would fail to pass the discriminator’s test and the aliens would swat our attempt away like a cosmic Dikembe Mutombo.
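
(For anyone unfamiliar with the jargon: a generative adversarial network pits a generator, which produces candidates, against a discriminator, which judges them; each side improves by trying to beat the other. A toy training loop in Python, purely to illustrate the metaphor; the data and network sizes are made up.)

Code: [Select]
# Minimal GAN loop: the generator keeps improving until its output can
# pass the discriminator's test; the dynamic the article is invoking.
# Toy 1-D data and made-up dimensions; illustrative only.
import torch
import torch.nn as nn

latent_dim = 8
gen = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
disc = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 3.0      # "real" samples: N(3.0, 0.5)
    fake = gen(torch.randn(64, latent_dim))    # generator's attempts

    # Discriminator learns to tell real samples from generated ones.
    d_loss = (bce(disc(real), torch.ones(64, 1))
              + bce(disc(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator learns to fool the discriminator.
    g_loss = bce(disc(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()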

We can’t control aliens. In fact, it’s possible they don’t even exist. Aliens are not dependable enemies.

We do, however, have complete control over our computers and artificial intelligence systems. And we should definitely start teaching them to continuously challenge us.

With AI, we can dictate how powerful an opponent it becomes with smart, well-paced development. We could avoid the whole shooting lasers at cities part of the story and just slowly work our way towards the rallying part where we all work together to win.

Maybe we need an AI adversary to be our “Huckleberry” when it comes to the urge for competition. If we can’t make most humans non-violent, then perhaps we could direct that violence toward a tangible, non-human opponent we can all feel good about defeating.

We don’t need killer robots or aliens for that. All we need is for the AI community and humanity at large to stop caring about making it even easier to do all the violent things we’ve always done to each other and to start giving us something else to do with all those harmful intentions.

Maybe it’s time we stopped fighting against the idea of robot overlords, and came up with some robot overlords to fight.
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

OrganicSu

  • Frazil ice
  • Posts: 115
    • View Profile
  • Liked: 7
  • Likes Given: 2
Re: Robots and AI: Our Immortality or Extinction
« Reply #654 on: February 20, 2021, 01:02:29 PM »

Quote
Maybe it’s time we stopped fighting against the idea of robot overlords, and came up with some robot overlords to fight.
And with what would humans fight - whistles and wooden sticks? Please know that AI will take control of all 'your' weapons, which are so heavily dependent on computer code, in the blink of an eye (and most likely before humans even get the feeling to engage in combat).
Humans could try telling really funny jokes. Might work.
Or humans could try daring the AI to delete random coding and see what happens. The AI might become curious and give it a go (after all, the AI already knows all moves humans could make and knows everything leads to it winning).

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #655 on: February 21, 2021, 01:13:43 PM »
^ It's a metaphor ...

The enemy of my enemy is my friend
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #656 on: February 23, 2021, 06:53:53 PM »
Boston Dynamics’ Robot Dog Is Now Armed—in the Name of Art
https://www.wired.com/story/boston-dynamics-robot-dog-armed-name-art/
https://techcrunch.com/2021/02/22/mschf-mounted-a-remote-control-paintball-gun-to-spot/amp/

Bad robot productions. Some artists have pissed off the company behind the robotic dogs we’ve seen doing human-like things in web videos for the past couple years. The robot in question is affectionately known as “Spot,” and the public relations team from its makers at Boston Dynamics alerted the world to a controversial art project that sort of launched on Monday — but the real show is scheduled for today at 1 p.m. ET.

https://spotsrampage.com/

A group of meme-spinning pranksters now wants to present a more dystopian view of the company's robotic tech. They added a .68-caliber paintball gun to Spot, the company’s doglike machine, and plan to let others control it inside a mocked-up art gallery via the internet later this week.



Why? “Spot is an empathy missile, shaped like man’s best friend and targeted straight at our fight or flight instinct,” the artists write on their site, adding wryly, “When killer robots come to America they will be wrapped in fur, carrying a ball.”

See also: https://screenrant.com/black-mirror-metalhead-inspiration/

--------------------------------------

Future Acres Launches With the Arrival of Crop-Transporting Robot, Carry
https://techcrunch.com/2021/02/23/future-acres-launches-with-the-arrival-of-crop-transporting-robot-carry/

“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #657 on: February 24, 2021, 05:34:06 PM »
AI Is Killing Choice and Chance—Changing What It Means to Be Human
https://techxplore.com/news/2021-02-ai-choice-chancechanging-human.html

Philosophers from Rousseau to Heidegger to Carl Schmitt have argued that technology is never a neutral tool for achieving human ends. Technological innovations—from the most rudimentary to the most sophisticated—reshape people as they use these innovations to control their environment. Artificial intelligence is a new and powerful tool, and it, too, is altering humanity.

Writing and, later, the printing press made it possible to carefully record history and easily disseminate knowledge, but they eliminated centuries-old traditions of oral storytelling. Ubiquitous digital and phone cameras have changed how people experience and perceive events. Widely available GPS systems have meant that drivers rarely get lost, but a reliance on them has also atrophied their native capacity to orient themselves.

AI is no different.

As AI increasingly shapes the human experience, how does this change what it means to be human? Central to the problem is a person's capacity to make choices, particularly judgments that have moral implications.

AI is being used for wide and rapidly expanding purposes. It is being used to predict which television shows or movies individuals will want to watch based on past preferences and to make decisions about who can borrow money based on past performance and other proxies for the likelihood of repayment. It's being used to detect fraudulent commercial transactions and identify malignant tumors. It's being used for hiring and firing decisions in large chain stores and public school districts. And it's being used in law enforcement—from assessing the chances of recidivism, to police force allocation, to the facial identification of criminal suspects. ... These are areas where algorithmic prescription is replacing human judgment, and so people who might have had the chance to develop practical judgment in these areas no longer will.

Aristotle argued that the capacity for making practical judgments depends on regularly making them – on habit and practice. We see the emergence of machines as substitute judges in a variety of workaday contexts as a potential threat to people learning how to effectively exercise judgment themselves.

Recommendation engines, which are increasingly prevalent intermediaries in people's consumption of culture, may serve to constrain choice and minimize serendipity. By presenting consumers with algorithmically curated choices of what to watch, read, stream and visit next, companies are replacing human taste with machine taste. ... There is some risk that people's options will be constrained by their pasts in a new and unanticipated way—a generalization of the "echo chamber" people are already seeing in social media.
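
That feedback loop is easy to reproduce in a toy recommender: rank the catalog by similarity to the user's last pick, let the user choose only from what is shown, and the set of items they ever see collapses to a small neighborhood. A minimal sketch, with an invented catalog and similarity metric:

Code: [Select]
# Toy "echo chamber" recommender: items similar to the last pick get
# surfaced, the user picks from what's surfaced, and the pool narrows.
# Catalog, taste feature, and metric are all invented for illustration.
import random

catalog = {f"item{i}": random.random() for i in range(100)}  # item -> taste score
history = ["item0"]

def recommend(history, k=5):
    last = catalog[history[-1]]
    # Rank the whole catalog by closeness to the most recent choice.
    ranked = sorted(catalog, key=lambda item: abs(catalog[item] - last))
    return ranked[:k]

for _ in range(20):
    shown = recommend(history)
    history.append(random.choice(shown))   # the user only picks from what's shown

print(f"distinct items ever surfaced: {len(set(history))} of {len(catalog)}")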

As machine learning algorithms, a common form of "narrow" or "weak" AI, improve and as they train on more extensive data sets, larger parts of everyday life are likely to become utterly predictable. The predictions are going to get better and better, and they will ultimately make common experiences more efficient and more pleasant.

But to the extent that unpredictability is part of how people understand themselves and part of what people like about themselves, humanity is in the process of losing something significant. As they become more and more predictable, the creatures inhabiting the increasingly AI-mediated world will become less and less like us.

-----------------------------------------

Computer Says Go: Taking Orders From an AI Boss
https://www.bbc.com/news/business-56023932

For those of us who have seen the Terminator movies rather too often, the thought of a computer, or robot, bossing you around is bound to raise fears that the machines are in danger of taking over.

Yet this ignores the fact that we already spend a lot of time obeying machines, and we don't even think about it, let alone worry.

Jeff Schwartz, a senior partner at business consulting and audit firm Deloitte, and a global adviser on the future of work, points to a simple everyday machine that we all obey unthinkingly.

"A traffic light used to be a job, there used to be a person who would stand there directing the cars," he says. "But very clearly that is now a machine, and it is getting smarter - they are now putting AI into traffic lights [so they can best respond to traffic levels]."

So it seems we are perfectly willing to take orders from a machine in some clearly defined situations.

What has increasingly happened in recent years, however, is that more of us are already being ordered around by computers at work. And experts say that this is only set to increase.

Take taxi firm Uber. There isn't a man or woman in the office giving out the jobs to the drivers. It is done automatically by the company's AI software system.

In the retail sector, Amazon increasingly uses AI systems to direct and monitor staff in its warehouses. This has led to several reports of employees being overworked, accusations that Amazon has repeatedly denied. Amazon says that if the AI notices a worker underperforming, he or she gets additional support and training, which comes from a human.

AI software that both gives work to, and checks on, call centre staff has also been criticised for being too demanding, and unfair.

----------------------------------------------

Target Acquired: Facial Recognition Drones Use AI to Take the Perfect Picture of You
https://singularityhub.com/2021/02/23/drones-programmed-to-take-the-perfect-picture-of-you-could-be-the-future-of-facial-recognition/



Facial recognition technology has been banned by multiple US cities, including Portland, Boston, and San Francisco. Besides the very real risk of the tech being biased against minorities, the technology also carries with it an uneasy sense that we’re creeping towards a surveillance state.

Despite these concerns, though, work to improve facial recognition tech is still forging ahead, with both private companies and governments looking to harness its potential for military, law enforcement, or profit-seeking applications.

One such company is an Israeli startup called AnyVision Interactive Technologies. AnyVision is looking to kick facial recognition up a notch by employing drones for image capture. A US patent application published earlier this month outlines the company’s system, which sounds like something straight out of a Black Mirror episode.

The drone captures an image of its “target person,” then analyzes the image to figure out how to get a better image; it adjusts its positioning in relation to the target, say by flying a bit lower or centering its angle. It then captures more images, and runs them through a machine learning model to get a “face classification” and a “classification probability score,” essentially trying to identify whether the person being photographed is in fact the person it’s looking for. If the probability score is too low, the system gets routed back to the first step, starting the image capture and refinement process all over again.
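
Stripped of the patent language, that is a closed control loop: capture, classify, reposition, repeat until the match probability clears a threshold. A schematic sketch of the flow, with toy stubs standing in for the drone and the classifier (none of this is AnyVision's actual code):

Code: [Select]
# Capture-and-refine loop as described in the patent summary.
# The Drone class and classifier are toy stand-ins, not a real API.
import random

class Drone:
    def capture_image(self):
        return random.random()        # stand-in for a camera frame
    def adjust_position(self):
        pass                          # stand-in for flying lower / re-centering

def classify_face(image, target):
    return random.random()            # stand-in for the ML match probability

CONFIDENCE_THRESHOLD = 0.95

def identify_target(drone, target, max_attempts=20):
    for _ in range(max_attempts):
        image = drone.capture_image()
        probability = classify_face(image, target)
        if probability >= CONFIDENCE_THRESHOLD:
            return True               # confident match: stop
        drone.adjust_position()       # low score: reposition, loop back to capture
    return False

print(identify_target(Drone(), target="person-of-interest"))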

If the thought of a drone programmed to move itself around in whatever way necessary to capture the clearest possible picture of your face doesn’t freak you out, you must not have seen much dystopian sci-fi, nor cherish privacy as a basic right. Stationary cameras used for this purpose can at least be ducked under, turned away from, or quickly passed by; but a flying camera running on an algorithm that’s determined to identify its target is a different—and much more invasive—story.

The nightmare scenario is for technology like AnyVision’s to be employed by governments and law enforcement agencies. But the company says this is far from its intent; CEO Avi Golan told Fast Company that the picture-taking drones could be used for things like package delivery (to identify recipients and make sure the right person is getting the right package), or to help track employees for safety purposes in dangerous workplaces like mines. Golan added that there are “many opportunities in the civilian market” where AnyVision’s technology could be useful.

... AnyVision was backed by Microsoft until 2019, when allegations arose that AnyVision’s technology was being used in a military surveillance project that tracked West Bank Palestinians. Microsoft has since not only stopped investing in any startups working on facial recognition tech, it also stopped selling its own version of the technology to law enforcement agencies, with the company’s president vowing not to resume until national laws “grounded in human rights” are in place to govern its use.

What might such laws look like? How would we determine where and when—and on whom—it’s ok to use something like a drone that self-adjusts until it captures an unmistakable image of someone’s face?

... “The basic premise of a free society is that you shouldn’t be subject to tracking by the government without suspicion of wrongdoing. […] face surveillance flips the premise of freedom on its head and you start becoming a society where everyone is tracked no matter what they do all the time.”

“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Tor Bejnar

  • Young ice
  • Posts: 3852
    • View Profile
  • Liked: 671
  • Likes Given: 508
Re: Robots and AI: Our Immortality or Extinction
« Reply #658 on: February 24, 2021, 06:46:07 PM »
Quote
...
"A traffic light used to be a job, there used to be a person who would stand there directing the cars," he says. "But very clearly that is now a machine, and it is getting smarter - they are now putting AI into traffic lights [so they can best respond to traffic levels]."

So it seems we are perfectly willing to take orders from a machine in some clearly defined situations.
...
The number of people I've seen run red lights, especially the ones who've slowed down or even stopped first, suggests there is still some humanity left in humanity.
:)  -  except for the ones who've pulled out in front of me ...  :(
Arctic ice is healthy for children and other living things.

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #659 on: February 25, 2021, 06:32:46 PM »
The Robot Uprising Sucks
https://www.theverge.com/2021/2/24/22299346/irobot-roomba-update-issues-vacuums-fix-several-weeks

Thousands of automated Roomba vacuum cleaners have been acting "drunk" after the latest software update from parent company iRobot, The Verge reported Wednesday. "One user described their robot cleaner as acting 'drunk' after the update: spinning itself around and bumping into furniture, cleaning in strange patterns, getting stuck in an empty area, and not being able to make it home to the dock."

https://twitter.com/AnthonyVirtuoso/status/1363549503907840008

https://www.reddit.com/r/roomba/comments/lprthq/roomba_s9_weird_behaviour_on_version_3108/

https://twitter.com/ArekSarkissian/status/1360318393191137284


... go home Roomba, you're drunk ...

See for yourself. Here’s one example of the disk-shaped bot fumbling uselessly around a fireplace.

https://www.reddit.com/r/roomba/comments/l3mdad/time_lapse_video_of_i7_attempting_to_return_to/

----------------------------------------


... get a broom ...
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #660 on: February 26, 2021, 09:25:43 PM »
AI Teaches Itself Diplomacy
https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/ai-learns-diplomacy-gaming

Now that DeepMind has taught AI to master the game of Go—and furthered its advantage in chess—they’ve turned their attention to another board game: Diplomacy. Unlike Go, it is seven-player; it requires a combination of competition and cooperation; and on each turn players make moves simultaneously, so they must reason about what others are reasoning about them, and so on.

“It’s a qualitatively different problem from something like Go or chess,” says Andrea Tacchetti, a computer scientist at DeepMind. In December, Tacchetti and collaborators presented a paper at the NeurIPS conference on their system, which advances the state of the art, and may point the way toward AI systems with real-world diplomatic skills—in negotiating with strategic or commercial partners or simply scheduling your next team meeting.

Diplomacy is a strategy game played on a map of Europe divided into 75 provinces. Players build and mobilize military units to occupy provinces until someone controls a majority of supply centers. Each turn, players write down their moves, which are then executed simultaneously. They can attack or defend against opposing players’ units, or support opposing players’ attacks and defenses, building alliances. In the full version, players can negotiate. DeepMind tackled the simpler No-Press Diplomacy, devoid of explicit communication.

Historically, AI has played Diplomacy using hand-crafted strategies. In 2019, the Montreal research institute Mila beat the field with a system using deep learning. They trained a neural network they called DipNet to imitate humans, based on a dataset of 150,000 human games. DeepMind started with a version of DipNet and refined it using reinforcement learning, a kind of trial-and-error.

Exploring the space of possibility purely through trial-and-error would pose problems, though. They calculated that a 20-move game can be played nearly 1×10^868 ways—yes, that’s 10 with 868 zeroes after it (equivalent to an average branching factor on the order of 10^43 possible joint moves per turn, compounded over 20 turns).

So they tweaked their reinforcement-learning algorithm. During training, on each move, they sample likely moves of opponents, calculate the move that works best on average across these scenarios, then train their net to prefer this move. After training, it skips the sampling and just works from what its learning has taught it. “The message of our paper is: we can make reinforcement learning work in such an environment,” Tacchetti says.
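
In outline, that is a sampled best-response update: sample plausible opponent move combinations, score each of your candidate moves on average across those samples, and train the policy toward the winner. A toy sketch of the idea (stand-in functions throughout; this is not DeepMind's implementation):

Code: [Select]
# Sampled best-response training, as described in the article: sample
# likely opponent moves, average each candidate's value across those
# scenarios, and use the best candidate as the policy's training target.
import random

CANDIDATE_MOVES = ["hold", "attack", "support"]

def sample_opponent_moves(state, n=8):
    # Stand-in for sampling from the learned policy over six opponents.
    return [[random.choice(CANDIDATE_MOVES) for _ in range(6)] for _ in range(n)]

def evaluate(state, my_move, opponent_moves):
    return random.random()            # stand-in for a learned value estimate

def improvement_target(state):
    scenarios = sample_opponent_moves(state)
    # Average each candidate's value over the sampled opponent scenarios.
    avg = {m: sum(evaluate(state, m, s) for s in scenarios) / len(scenarios)
           for m in CANDIDATE_MOVES}
    return max(avg, key=avg.get)      # train the net to prefer this move

print(improvement_target(state="spring-1901"))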

In April, Facebook will present a paper at the ICLR conference describing their own work on No-Press Diplomacy. They also built on a human-imitating network similar to DipNet. But instead of adding reinforcement learning, they added search—the technique of taking extra time to plan ahead and reason about what every player is likely to do next.

Both teams found that their systems were not easily exploitable. Facebook, for example, invited two top human players to each play 35 straight games against SearchBot, probing for weaknesses. The humans won only 6 percent of the time. Both groups also found that their systems didn’t just compete, but also cooperated, sometimes supporting opponents. “They get that in order to win, they have to work with others,” says Yoram Bachrach, from the DeepMind team.

How close are we to AI that can play Diplomacy with “press,” negotiating all the while using natural language?

“For Press Diplomacy, as well as other settings that mix cooperation and competition, you need progress,” Bachrach says, “in terms of theory of mind, how they can communicate with others about their preferences or goals or plans.”
-----------------------------------------

Louise Banks : Let's say that I taught them Chess instead of English. Every conversation would be a game. Every idea expressed through opposition, victory, defeat. You see the problem? If all I ever gave you was a hammer...
Colonel Weber : Everything's a nail ...

Arrival - 2016


“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #661 on: February 26, 2021, 09:31:20 PM »



The NYPD Deploys a Robot Dog Again
https://www.theverge.com/platform/amp/2021/2/24/22299140/nypd-boston-dynamics-spot-robot-dog



The cyberpunk dystopia is here! (If you weren’t aware: I’m sorry. You’re living in a cyberpunk dystopia.) The latest sign — aside from corporations controlling many aspects of everyday life, massive widespread wealth inequality, and the recent prominence of bisexual lighting — comes in the form of robot dogs deployed to do jobs human police used to. Yesterday, as the New York Post reports, the NYPD deployed Boston Dynamics’ robot “dog” Spot to a home invasion crime scene in the Bronx.

https://nypost.com/2021/02/23/video-shows-nypds-new-robotic-dog-in-action-in-the-bronx

The video was taken by videographer Daniel Valls, of FreedomNews.tv. You can hear a voice say “that thing is creepy” as the robot prances past the camera. The Post reports that a spokesperson for the NYPD said the robot is in a test phase, presumably to see if it’s actually useful out in the field. (It was equipped with lights and cameras, the spokesperson continued, to ensure that NYPD could see whatever the robot was seeing.)

This isn’t the first time the NYPD has deployed one of Boston Dynamics’ robots. Back in October, the department used another Spot to find a gunman who’d barricaded himself in a building after he’d accidentally shot someone in the head during a parking dispute in Brooklyn. [... these things happen ...]



----------------------------------------------

... Send out the hound!

... Originally, dogs served as the rescuers for firemen. They were given the job of sniffing out the injured or weak. However, in this dystopia, the Hound has been made into a watchdog of society. Like the Furies, the Mechanical Hound has been programmed (by the government) to avenge and punish citizens who break society's rules. The ones who are not loyal to the rules must especially be punished, and the Hound serves as the enforcer of these rules.

- Fahrenheit 451


https://www.cliffsnotes.com/literature/f/fahrenheit-451/character-analysis/the-mechanical-hound

« Last Edit: February 26, 2021, 11:12:39 PM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #662 on: February 28, 2021, 11:01:05 PM »
11 On the Creep-O-Meter: AI Tool “Deep Nostalgia” Lets You Reanimate Your Dead Relatives
https://www.theverge.com/platform/amp/2021/2/28/22306097/ai-brings-still-photos-life-meme-twitter-geneaology-myheritage

It seems like a nice idea in theory but it’s a tiny bit creepy as well



An AI-powered service called Deep Nostalgia that animates still photos has become the main character on Twitter this fine Sunday, as people try to create the creepiest fake “video” possible, apparently.

The Deep Nostalgia service, offered by online genealogy company MyHeritage, uses AI licensed from D-ID to create the effect that a still photo is moving. It’s kinda like the iOS Live Photos feature, which adds a few seconds of video to help smartphone photographers find the best shot.

But Deep Nostalgia can take photos from any camera and bring them to “life.” The program uses pre-recorded driver videos of facial movements and applies the one that works best for the still photo in question. Its intended purpose is to allow you to upload photos of deceased loved ones and see them in “action,” which seems like a lovely idea.
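
The matching step can be pictured as a nearest-neighbour lookup: estimate the head pose in the still photo, then animate with the pre-recorded driver video whose pose is closest. A toy sketch (the pose values and the distance metric are invented; MyHeritage hasn't published its method):

Code: [Select]
# Toy version of "apply the driver video that works best for the photo":
# pick the driver whose (yaw, pitch) head pose is nearest the photo's.
# All numbers are invented for illustration.
import math

drivers = {
    "driver_a": (0.0, 0.0),      # facing the camera
    "driver_b": (15.0, -5.0),    # turned slightly right
    "driver_c": (-20.0, 10.0),   # turned left, chin up
}

def best_driver(photo_pose):
    return min(drivers, key=lambda name: math.dist(drivers[name], photo_pose))

# A photo of a face turned slightly right matches the right-turned driver.
print(best_driver((12.0, -3.0)))   # -> driver_b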

https://mobile.twitter.com/FlintDibble/status/1365848777400139779



------------------------------------------

« Last Edit: March 01, 2021, 01:55:13 AM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

gerontocrat

  • Multi-year ice
  • Posts: 10905
    • View Profile
  • Liked: 4036
  • Likes Given: 31
Re: Robots and AI: Our Immortality or Extinction
« Reply #663 on: March 02, 2021, 01:19:07 PM »
More about DigiDog, and racism built into Police AI.

https://www.theguardian.com/commentisfree/2021/mar/02/nypd-police-robodog-patrols
A dystopian robo-dog now patrols New York City. That's the last thing we need

Quote
There is more than enough evidence that law enforcement is lethally racially biased, and adding an intimidating non-human layer to it seems cruel. And, as we’ve seen with artificial intelligence domestically and autonomous drone warfare abroad, it is clear that already dehumanized Black and Muslim residents will be the ones to face the brunt of the damage of this dystopian development, particularly in a city with a history of both anti-Black racism and Islamophobia.

Law enforcement in the United States is already biased and grounded in a history of systemic racism. Many police departments in the US evolved from slave-catching units or union-busting militias, and their use today to disproportionately capture and imprison Black people drips of those origins. And it isn’t just the institutions themselves that perpetuate racism; individual police officers are also biased and more likely to view Black people as threats. Even Black police officers share these biases and often replicate the harm of their white counterparts. On top of that, the NYPD in particular has a history of targeting its Arab and Muslim population, even going as far as to use undercover agents to spy on Muslim student associations in surrounding states. Any new technological development will only give police departments new tools to further surveil, and potentially to arrest or kill, Black and Muslim people.

By removing the human factor, artificial intelligence may appear to be an “equalizer” in the same vein as more diverse police departments. But AI shares the biases of our society. Coded Biases, a 2020 documentary, followed the journey of Joy Buolamwini, a PhD candidate at MIT, as she set out to expose the inability of facial recognition software to distinguish dark-skinned women from one another. While many tech companies have now ceased providing this software to police departments due to the dangers it may pose, police departments themselves have doubled down on the use of other forms of AI-driven law enforcement.

The use of human operators will do little to offset the biases of AI programming
Police already use location-based AI to determine when and where crime may occur, and individual-based AI to identify people deemed to have an increased probability of committing crime. While these tools are considered a more objective way of policing, they are dependent on data from biased police departments, courts and prisons. For example, Black people are more likely to be arrested for drug-related crimes, and thus appear more likely to commit crime, despite being less likely to sell drugs in the first place.

While Boston Dynamics, the creators of the robot dog, have insisted that Digidog will never be used as a weapon, it is highly unlikely that that will remain true. MSCHF, a political art collective, has already shown how easy it is to weaponize the dog. In February they mounted a paintball gun on its back and used it to fire upon a series of art pieces in a gallery. The future of weaponized robot policing has already been paved by the Dallas police department. In 2016, the DPD used a robot armed with a bomb to kill Micah Johnson, an army reservist who served in Afghanistan, after he killed five police officers in what he said was retaliation for the deaths of Black people at the hands of law enforcement. While it was clear that he posed a threat to police, it is very fitting that a Black man would be the first person to be killed by an armed robot in the United States – roughly a year after the white mass shooter Dylann Roof was met with a free burger and police protection.

The United Nations has called for a ban on autonomous weapons, and not long ago many countries around the world desired to ban armed drones. But the United States unfortunately continues to set the precedent for drone and autonomous warfare, driving other countries to follow suit in competition. We can’t allow our government to replicate this dynamic inside our borders, also, with the domestic use of drones and robotic police.

This is a time for the US to scale back its wars, internal and external, but instead, the NYPD, which many people – including former mayor Michael Bloomberg – consider an army, has chosen to lead the way in dystopian enforcement.
"Para a Causa do Povo a Luta Continua!"
"And that's all I'm going to say about that". Forrest Gump
"Damn, I wanted to see what happened next" (Epitaph)

gerontocrat

  • Multi-year ice
  • Posts: 10905
    • View Profile
  • Liked: 4036
  • Likes Given: 31
Re: Robots and AI: Our Immortality or Extinction
« Reply #664 on: March 03, 2021, 01:23:45 PM »
Beyond 1984
- Be careful to only express "+ve" energy because AI is checking up on you.
- One logical outcome is that persistent offenders (i.e. those who express -ve energy regularly) may end up in thought realignment camps (in China) or lose their jobs (e.g. with Facebook, Amazon, Ebay, Google, Goldman Sachs).

https://www.theguardian.com/global-development/2021/mar/03/china-positive-energy-emotion-surveillance-recognition-tech
Smile for the camera: the dark side of China's emotion-recognition tech

Xi Jinping wants ‘positive energy’ but critics say the surveillance tools’ racial bias and monitoring for anger or sadness should be banned
Quote
“Ordinary people here in China aren’t happy about this technology but they have no choice. If the police say there have to be cameras in a community, people will just have to live with it. There’s always that demand and we’re here to fulfil it.” So says Chen Wei at Taigusys, a company specialising in emotion recognition technology, the latest evolution in the broader world of surveillance systems that play a part in nearly every aspect of Chinese society.

Emotion-recognition technologies – in which facial expressions of anger, sadness, happiness and boredom, as well as other biometric data are tracked – are supposedly able to infer a person’s feelings based on traits such as facial muscle movements, vocal tone, body movements and other biometric signals. It goes beyond facial-recognition technologies, which simply compare faces to determine a match.

But similar to facial recognition, it involves the mass collection of sensitive personal data to track, monitor and profile people and uses machine learning to analyse expressions and other clues.

The industry is booming in China, where since at least 2012, figures including President Xi Jinping have emphasised the creation of “positive energy” as part of an ideological campaign to encourage certain kinds of expression and limit others.

Critics say the technology is based on a pseudo-science of stereotypes, and an increasing number of researchers, lawyers and rights activists believe it has serious implications for human rights, privacy and freedom of expression. With the global industry forecast to be worth nearly $36bn by 2023, growing at nearly 30% a year, rights groups say action needs to be taken now.

‘Intimidation and censorship’
The main office of Taigusys is tucked behind a few low-rise office buildings in Shenzhen. Visitors are greeted at the doorway by a series of cameras capturing their images on a big screen that displays body temperature, along with age estimates, and other statistics. Chen, a general manager at the company, says the system in the doorway is the company’s bestseller at the moment because of high demand during the coronavirus pandemic.

Chen hails emotion recognition as a way to predict dangerous behaviour by prisoners, detect potential criminals at police checkpoints, problem pupils in schools and elderly people experiencing dementia in care homes.

Taigusys systems are installed in about 300 prisons, detention centres and remand facilities around China, connecting 60,000 cameras. “Violence and suicide are very common in detention centres,” says Chen. “Even if police nowadays don’t beat prisoners, they often try to wear them down by not allowing them to fall asleep. As a result, some prisoners will have a mental breakdown and seek to kill themselves. And our system will help prevent that from happening.”

Chen says that since prisoners know they are monitored by this system – 24 hours a day, in real time – they are made more docile, which for authorities is a positive on many fronts. “Because they know what the system does, they won’t consciously try to violate certain rules,” he says.

Besides prisons and police checkpoints, Taigusys has deployed its systems in schools to monitor teachers, pupils and staff, in care homes for older people to detect falls and changes in the emotional state of residents, and in shopping centres and car parks.

While the use of emotion-recognition technology in schools in China has sparked some criticism, there has been very little discussion of its use by authorities on citizens.

Potential for misuse
Asked if he was concerned about these features being misused by authorities, Chen says that he is not worried because the software is being used by police, implying that such institutions should be automatically trusted.

“I’m not concerned because it’s not our technology that’s the problem,” Chen says. “There are demands for this technology in certain scenarios and places, and we will try our best to meet those demands.”
"Para a Causa do Povo a Luta Continua!"
"And that's all I'm going to say about that". Forrest Gump
"Damn, I wanted to see what happened next" (Epitaph)

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #665 on: March 03, 2021, 04:03:05 PM »
^ ... reminds me of an old episode of the Twilight Zone ...

https://en.m.wikipedia.org/wiki/It%27s_a_Good_Life_(The_Twilight_Zone)#Plot_summary

... The people live in fear of little six-year-old Anthony Fremont, constantly telling him how everything he does is "good," since he banishes anyone thinking unhappy thoughts into the otherworldly cornfield from which there is no return.

http://twilightzonevortex.blogspot.com/2016/06/its-good-life.html

AI is the new Anthony

-------------------------------------------------

Emotions for humans = bad
Emotions for AI = good

Sonantic Uses AI to Infuse Emotion In Automated Speech
https://venturebeat.com/2021/03/02/sonantic-uses-ai-to-infuse-emotion-in-automated-speech-for-game-prototypes/

Sonantic has figured out how to use AI to turn written words into spoken dialogue in a script, and it can infuse those words with the proper emotion.

And it turns out this is a pretty good way to prototype the audio storytelling in triple-A video games. That’s why the Sonantic technology is finding use with 200 different video game companies for audio engineering.

Building upon the existing framework of text-to-speech, London-based Sonantic’s approach is what differentiates a standard robotic voice from one that sounds genuinely human. Creating that “believability” factor is at the core of Sonantic’s voice platform, which captures the nuances of the human voice.

The AI can provide true emotional depth to the words, conveying complex human emotions from fear and sadness to joy and surprise. The breakthrough advancement revolutionizes audio engineering capabilities for gaming and film studios, culminating in hyper-realistic, emotionally expressive and controllable artificial voices.



... “Last year, we had the AI that could cry, with emotion and sadness,” Flynn said. “It’s really about the nuances in speech, that quiver of the voice for sadness, an exertion for anger. We try and model those really deeply. Once you add in those details and layer them on top, you start to get energy and it becomes really realistic.”
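
Sonantic hasn't published its internals, but the "layering" Flynn describes maps onto a common pattern in neural text-to-speech: conditioning synthesis on explicit style controls alongside the text. A purely hypothetical sketch of what such an interface might look like:

Code: [Select]
# Hypothetical emotion-conditioned TTS interface; not Sonantic's API.
# The style vector carries the "layered" nuances the article mentions.
from dataclasses import dataclass

@dataclass
class StyleControls:
    emotion: str       # e.g. "sadness", "anger", "joy"
    intensity: float   # 0.0 (flat read) to 1.0 (maximal)
    tremor: float      # the "quiver of the voice" quoted above

def synthesize(text: str, style: StyleControls) -> bytes:
    # A real system would run an acoustic model and vocoder here;
    # this stub only shows the shape of the interface.
    print(f"rendering {style.emotion} (intensity={style.intensity}, "
          f"tremor={style.tremor}): {text!r}")
    return b""

synthesize("I never got to say goodbye.",
           StyleControls(emotion="sadness", intensity=0.8, tremor=0.6))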

------------------------------------------

MetaHuman Creator - a new tool designed to bring the highest fidelity facial rendering to the wider development community.


... none of these people are real

As changes and enhancements are made, MetaHuman Creator intelligently uses data from its cloud-based library to extrapolate a realistic digital person.

... There is more to the process than just the graphics - quality of performance and motion capture are going to be key. However, we are clearly seeing some cutting edge technology here and these initial demos are striking. Skin shading, texture quality and geometric density are very impressive, while eyes look expressive. Additionally, hair is always a particularly tricky part of rendering convincing characters - but MHC can tap into the very latest strand rendering technology to produce a convincing look, a 'next-gen' feature we've only really seen on proprietary engines so far.

--------------------------------------------------

AI Isn’t Yet Ready to Pass for Human On Video Calls
https://venturebeat.com/2021/02/21/ai-isnt-yet-ready-to-pass-for-human-on-video-calls/

Leading up to Super Bowl Sunday, Amazon flooded social media with coquettish ads teasing “Alexa’s new body.” Its gameday commercial depicts one woman’s fantasy of the AI voice assistant embodied by actor Michael B. Jordan, who seductively caters to her every whim — to the consternation of her increasingly irate husband. No doubt most viewers walked away giggling at the implausible idea of Amazon’s new line of spouse replacement robots, but the reality is that embodied, humanlike AI may be closer than you think.



Today, AI avatars — i.e., AI rendered with a digital body and/or face — lack the sex appeal of Michael B. Most, in fact, are downright creepy. Research shows that imbuing robots with humanlike features endears them to us —  to a point. Past that threshold, the more humanlike a system appears, the more paradoxically repulsed we feel.
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #666 on: March 03, 2021, 04:08:40 PM »
U.S. ‘Not Prepared To Defend Or Compete’ With China On AI According To Commission Report
https://www.thedrive.com/the-war-zone/39559/national-security-commission-warns-u-s-is-not-prepared-to-defend-or-compete-with-china-on-ai

The National Security Commission on Artificial Intelligence, or NSCAI, issued a report on Monday, March 1, 2021, which offers a stark warning to the leadership of the United States. According to the thorough 756-page report, China could likely soon replace the U.S. as the world’s leader in artificial intelligence, or AI, and that shift will have significant ramifications for the U.S. military at home and abroad.

https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf



... The NSCAI report specifically cites AI-enabled and autonomous weapon systems of all types, not just autonomous aerial vehicles, noting that “the global, unchecked use of such systems could increase risks of unintended conflict escalation and crisis instability.” In particular, the report cites increasingly sophisticated cyberweapons, commercial drones armed with AI software, “smart weapons” that can wreak havoc on infrastructure, and AI-enabled “weapons of mass influence” designed to sow discord among the U.S. populace.

The NSCAI report states that despite the progress being made in the private sector in terms of AI tools, “visionary technologists and warfighters largely remain stymied by antiquated technology, cumbersome processes, and incentive structures that are designed for outdated or competing aims.”  ... "Many Departmental processes still rely too much on PowerPoint and manually driven work streams. The data that is needed to fuel machine learning (ML) is currently stovepiped, messy, or often discarded. Platforms are disconnected. Acquisition, development, and fielding practices largely follow rigid, sequential processes, inhibiting early and continuous experimentation and testing critical for AI."

Among the many recommendations the report makes, in order to counteract this rising foreign AI threat, one is bolstering the U.S. talent base through a new National Defense Education Act, scaling up digital talent in government, and establishing a domestic manufacturing base for microelectronics. Currently, the U.S. is almost entirely reliant on foreign-made electronics to power most of its technologies, both in the defense and consumer sectors. ... The commission advises the U.S. government to more than double the amount of money it invests in AI R&D by 2026, aiming for $32 billion a year.

It also urges President Joe Biden to reject calls for a global ban on AI-powered autonomous weapons, saying that China and Russia are unlikely to keep to any treaty they sign.

... America's two main adversaries, Google & Facebook China & Russia, are just as keenly aware that AI supremacy could lead to battlefield supremacy, and they are investing in AI at the scale the new NSCAI report recommends for America.

The commission was established as part of the 2019 National Defense Authorization Act and the majority of its members -- who include representatives from Google, Microsoft, Amazon Web Services and Oracle -- were appointed by Congress.

... The report’s key takeaway is that the Department of Defense and the U.S. Intelligence Community (IC) must be “AI-ready” by 2025 ... [... except, SkyNet will become self-aware in 2024]

------------------------------------------------

Australia's Autonomous AI 'Loyal Wingman' Drone Has Flown For The First Time
https://www.thedrive.com/the-war-zone/39539/australias-loyal-wingman-air-combat-drone-has-flown-for-the-first-time

Known as the Airpower Teaming System (ATS), Boeing Australia's new loyal wingman drone for the Royal Australian Air Force (RAAF) has taken to the sky for the first time. It's not clear exactly when the flight took place, but it occurred at the high-security RAAF Base Woomera and its surrounding range complex. The flight was originally supposed to occur around the end of 2020, but it was pushed back due to a number of factors.

The first flight test profile was intended to validate basic flight functions and included a significant degree of autonomous operations.



The ATS, which is a modular design capable of having its entire nose section swapped out quickly, is seen as a landmark program for Australia and the RAAF. It is the first clean-sheet aircraft Boeing has brought to fruition outside the U.S. and the first military aircraft Australia has independently produced in over half a century.

... “Boeing and Australia are pioneering fully integrated combat operations by crewed and uncrewed aircraft,” said Boeing Defense, Space & Security President and CEO Leanne Caret.

... “The Loyal Wingman project is a pathfinder for the integration of autonomous systems and artificial intelligence to create smart human-machine teams.”

... Additional Loyal Wingman aircraft are currently under development, with plans for teaming flights scheduled for later this year. ... Much of the basic command logic that will drive the ATS has already been tested on subscale flying demonstrators.


The U.S. is actively pursuing similar capabilities in the form of its Skyborg program, as well as other parallel initiatives. Boeing Australia's design could even factor into that program in the near future.

------------------------------------------------

Boeing Is Adapting Its Australian Combat Drone For The U.S. Air Force's Skyborg Program
https://www.thedrive.com/the-war-zone/39560/boeing-is-adapting-its-australian-combat-drone-for-the-u-s-air-forces-skyborg-program



Just days after the first flight of the Boeing Airpower Teaming System combat drone that’s being developed for Australia, the company confirmed this unmanned aircraft will also provide the basis for its offering for the U.S. Air Force’s Skyborg loyal wingman program.

... Unlike fighter jets, the drone uses a commercially available jet engine; Boeing won’t disclose the manufacturer. The company is using robots to build the drone, unlike the labor-intensive human assembly lines used to build its crewed counterparts.

“For this particular concept to work, it needs to be at a cost point that the customer is willing to lose the aircraft because there is no future scenario in the future fight where there isn't attrition in the airspace,” Arnott said. “The whole idea here is it's better for that to happen in an uncrewed system than a crewed system."

... Skyborg covers the development of a whole range of systems that will form an artificial intelligence-driven “computer brain” capable of flying networked “loyal wingman” type drones and autonomous unmanned combat air vehicles, or UCAVs.

------------------------------------------------

Lockheed Martin's 'Skunk Works' Secretive 'Speed Racer' Program
https://www.thedrive.com/the-war-zone/39495/heres-everything-we-know-about-skunk-works-secretive-speed-racer-program

Lockheed Martin's Skunk Works advanced projects bureau has officially revealed the design of its secretive Speed Racer air vehicle. The missile-shaped unmanned system is ostensibly intended to serve as an experiment in digital engineering techniques, but has the potential to be the basis for future swarming drones and low-cost cruise missiles.

From what little Lockheed Martin has shared so far, the main focus of Speed Racer is to validate the StarDrive toolset. "Lockheed built the StarDrive to reduce the time and cost of producing and operating new flight vehicles for the military," the Aviation Week story from earlier this month had explained.

... "The ultimate capability of the system is really not what the project is focusing on," ... “What we are really working to do is show how we use the toolset and how we implement [it], starting from a one-page concept, and [bringing] that all the way through flight."

------------------------------------------------

The Navy Plans To Launch Swarms Of Aerial Drones From Unmanned Submarines And Ships
https://www.thedrive.com/the-war-zone/39535/navy-contract-exposes-plans-to-launch-swarms-of-drones-from-unmanned-boats-and-submarines

... This is "a rapid capability effort to achieve operational launch capability from unmanned surface vessels (USVs) and an unmanned underwater vessel (UUV). The intended concept of operations (CONOP) and tactics, techniques and procedures (TTPs) are to provide intelligence, surveillance and reconnaissance (ISR) and precision strike capability from maritime platforms," the contracting notice added. "Additionally, the High Volume Long Range Precision Strike (HVLRPS) from USVs and Fires (HVLRPF) from UUVs demonstrations will leverage prior efforts including the Innovative Naval Prototype (INP) and progress on the Mobile Precision Attack Vehicle (MoPAV)."

... Autonomous swarming technology, including artificial intelligence-driven flight and targeting capabilities, is becoming an increasingly popular addition to loitering munitions, as well. This kind of swarm can more rapidly search for and then engage multiple targets, either automatically or with human approval, across a large area. It's important to note that ONR has already conducted demonstrations involving Block 1 Coyotes operating in swarms as part of its Low-Cost UAV Swarming Technology program, or LOCUST.

The Navy's interest in loitering munitions is hardly surprising, both for its own use or in support of U.S. Marine Corps requirements. Both services, as well as other elements of the U.S. military, are pursuing multiple programs in this same general vein. What is much more notable about this particular contract is the desire to rapidly develop an operational capability to deploy swarms of loitering munitions from both unmanned boats and submarines.

An edition of Future Force, an official ONR magazine, that was published last year said that recent "experimentation efforts" in support of Navy and Marine Corps requirements had included "Close-in Covert Autonomous Disposable Aircraft super swarm experimentation." ... "This record-setting effort simultaneously launched 1,000 unmanned aerial vehicles out of a C-130 and demonstrated behaviors critical to future super swarm employment," the magazine reported.

... A swarm might not necessarily have to just be made up of loitering munitions, either. Coyotes, or other small drones, carrying ISR, electronic warfare, or other payloads, could also be networked together, providing different types of functionality to make it easier to find threats and engage them in the most optimal way.

... the Navy has made clear that it sees its future operations as being full of swarms that expand the capabilities of its surface and underwater fleets, both at sea and over the shore.

------------------------------------------------

Air Force Testing Out Weapons That Fry Enemy Drones with Directed Energy, Microwaves
https://www.military.com/daily-news/2021/02/25/air-force-testing-out-weapons-fry-enemy-drones-directed-energy-microwaves.html

The U.S. Air Force is testing new counter-drone systems that use either directed energy or high-powered microwaves to take out unmanned aircraft that pose a threat to troops and bases overseas.

The service announced this month that it has been testing an upgraded laser system, known as the High Energy Laser Weapon System 2, or H2, through a series of experiments that began last summer at Kirtland Air Force Base, New Mexico.

The news follows the U.S. Army's announcement Wednesday that it will partner with the Air Force on its Tactical High Power Operational Responder, or THOR, which can disable a drone's electronics at certain ranges. During its development phase, THOR was referred to as the Tactical High-power Microwave Operational Responder.

... THOR, developed by the Air Force Research Lab and housed at Kirtland, looks like a standard Conex box with a satellite dish strapped to it.

While high-energy lasers can kill one target at a time, high-powered microwaves "can kill groups or swarms, which is why we are pursuing a combination of both technologies."

"The system output is powerful radio wave bursts, which offer a greater engagement range than bullets or nets, and its effects are silent and instantaneous," added Amber Anderson, THOR program manager. [... works on humans, too ... just sayin']

------------------------------------------------

... expect a lot of cancers

------------------------------------------------

Cybersecurity and Infrastructure Security Agency Report: Protecting Against the Threat of Unmanned Aircraft Systems (UAS)
https://publicintelligence.net/cisa-unmanned-aircraft-systems-threats/
« Last Edit: March 03, 2021, 06:34:41 PM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #667 on: March 03, 2021, 04:39:42 PM »
AI Designing AI: Google’s Deep Learning Finds a Critical Path in AI Chips
https://ai.googleblog.com/2021/02/machine-learning-for-computer.html

A year ago, ZDNet spoke with Google Brain director Jeff Dean about how the company is using artificial intelligence to advance its internal development of custom chips to accelerate its software. Dean noted that deep learning forms of artificial intelligence can in some cases make better decisions than humans about how to lay out circuitry in a chip.

This month, Google unveiled to the world one of those research projects, called Apollo, in a paper posted on the arXiv preprint server, "Apollo: Transferable Architecture Exploration," and a companion blog post by lead author Amir Yazdanbakhsh.

https://arxiv.org/abs/2102.01723

Apollo represents an intriguing development that moves past what Dean hinted at in his formal address a year ago at the International Solid State Circuits Conference, and in his remarks to ZDNet.

In the example Dean gave at the time, machine learning could be used for some low-level design decisions, known as "place and route." In place and route, chip designers use software to determine the layout of the circuits that form the chip's operations, analogous to designing the floor plan of a building.

In Apollo, by contrast, rather than a floor plan, the program is performing what Yazdanbakhsh and colleagues call "architecture exploration."

The architecture for a chip is the design of the functional elements of a chip, how they interact, and how software programmers should gain access to those functional elements.

For example, a classic Intel x86 processor has a certain amount of on-chip memory, a dedicated arithmetic-logic unit, and a number of registers, among other things. The way those parts are put together gives the so-called Intel architecture its meaning.

Asked about Dean's description, Yazdanbakhsh told ZDNet in email, "I would see our work and place-and-route project orthogonal and complementary.

"Architecture exploration is much higher-level than place-and-route in the computing stack," explained Yazdanbakhsh, referring to a presentation by Cornell University's Christopher Batten.

"I believe it [architecture exploration] is where a higher margin for performance improvement exists," said Yazdanbakhsh.

Yazdanbakhsh and colleagues call Apollo the "first transferable architecture exploration infrastructure," the first program that gets better at exploring possible chip architectures the more it works on different chips, thus transferring what is learned to each new task.

The chips that Yazdanbakhsh and the team are developing are themselves chips for AI, known as accelerators. This is the same class of chips as the Nvidia A100 "Ampere" GPUs, the Cerebras Systems WSE chip, and many other startup parts currently hitting the market. Hence, a nice symmetry, using AI to design chips to run AI.
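To make “architecture exploration” concrete, here is a minimal sketch of black-box search over an accelerator design space. Everything in it is assumed for illustration: the parameter names, the made-up simulate_latency() cost model, and the naive random search, which merely stands in for Apollo's transferable, learned optimizers.

Code:

import random

# Hypothetical accelerator design space; parameter names are illustrative,
# not Apollo's actual knobs.
DESIGN_SPACE = {
    "pe_rows":        [8, 16, 32],       # processing-element grid rows
    "pe_cols":        [8, 16, 32],       # processing-element grid columns
    "onchip_mem_kb":  [256, 512, 1024],  # on-chip buffer size
    "bus_width_bits": [64, 128, 256],    # memory bus width
}

def sample_design():
    """Draw one random point from the design space."""
    return {k: random.choice(v) for k, v in DESIGN_SPACE.items()}

def simulate_latency(design):
    """Stand-in cost model. A real flow would call a cycle-accurate
    simulator; this made-up formula just lets the sketch run."""
    compute = 1e9 / (design["pe_rows"] * design["pe_cols"])
    memory = 1e6 / design["onchip_mem_kb"] + 1e5 / design["bus_width_bits"]
    return compute + memory

def explore(trials=200):
    """Naive random search. Apollo's contribution is replacing this with
    learned optimizers whose knowledge transfers between workloads."""
    best = min((sample_design() for _ in range(trials)), key=simulate_latency)
    return best, simulate_latency(best)

design, latency = explore()
print(f"best design: {design}  (estimated cost {latency:,.0f})")

The paper's actual contribution is the part this sketch fakes: a search strategy that gets better as it sees more chips, instead of starting from scratch each time.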


-----------------------------------------------------------

New Theory for How Memories Are Stored In the Brain
https://medicalxpress.com/news/2021-03-theory-memories-brain.html

In a paper published in Frontiers in Molecular Neuroscience, Dr. Ben Goult from Kent's School of Biosciences describes how his new theory views the brain as an organic supercomputer running a complex binary code, with neuronal cells working as a mechanical computer. He explains how a vast network of information-storing memory molecules operating as switches is built into each and every synapse of the brain, representing a complex binary code. This identifies a physical location for data storage in the brain and suggests memories are written in the shape of molecules in the synaptic scaffolds.

The theory is based on the discovery of protein molecules, known as talin, containing 'switch-like' domains that change shape in response to changes in the mechanical force exerted by the cell. These switches have two stable states, 0 and 1, and the pattern of binary information stored in each molecule depends on previous input, similar to the Save History function in a computer. The information stored in this binary format can be updated by small changes in force generated by the cell's cytoskeleton.

In the brain, electrochemical signaling between trillions of neurons occurs between synapses, each of which contains a scaffold of the talin molecules. Once assumed to be structural, this research suggests that the meshwork of talin proteins actually represent an array of binary switches with the potential to store information and encode memory.

This mechanical coding would run continuously in every neuron and extend into all cells, ultimately amounting to a machine code coordinating the entire organism. From birth, the life experiences and environmental conditions of an animal could be written into this code, creating a constantly updated, mathematical representation of its unique life.

"This research shows that in many ways the brain resembles the early mechanical computers of Charles Babbage and his Analytical Engine. Here, the cytoskeleton serves as the levers and gears that coordinate the computation in the cell in response to chemical and electrical signaling. Like those early computation models, this discovery may be the beginning of a new understanding of brain function and in treating brain diseases."

Benjamin T. Goult, The Mechanical Basis of Memory – the MeshCODE Theory, Frontiers in Molecular Neuroscience (2021).
https://www.frontiersin.org/articles/10.3389/fnmol.2021.592951/full
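The force-dependent, history-dependent switching described above is hysteresis, and a toy simulation makes the "stored bit" idea concrete. Everything below is invented for illustration: the class name, the piconewton thresholds, and the force trace; none of it is a measured talin parameter from the paper.

Code:

class TalinSwitchToy:
    """Toy model of a force-dependent binary switch domain.

    Illustrative only: it unfolds (state 1) above one force threshold and
    refolds (state 0) below a lower one, so the current state depends on
    the history of applied force, which is the hysteresis the theory
    likens to a stored bit. Thresholds are made up, not measured values.
    """

    UNFOLD_PN = 10.0  # assumed unfolding threshold, piconewtons
    REFOLD_PN = 4.0   # assumed refolding threshold, piconewtons

    def __init__(self):
        self.state = 0  # 0 = folded, 1 = unfolded

    def apply_force(self, force_pn):
        if self.state == 0 and force_pn > self.UNFOLD_PN:
            self.state = 1
        elif self.state == 1 and force_pn < self.REFOLD_PN:
            self.state = 0
        return self.state

# A strand of switches behaves like a small register written by force history.
switches = [TalinSwitchToy() for _ in range(4)]
for force in (12.0, 6.0, 3.0, 12.0):  # an invented force trace from the cytoskeleton
    bits = [s.apply_force(force + i) for i, s in enumerate(switches)]
    print(f"force {force:4.1f} pN -> bits {bits}")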

--------------------------------------------------------

Microsoft Holodeck v0.8 beta: Mesh



Mesh is a collaborative platform that allows anyone to have shared virtual experiences on a variety of devices. “This has been the dream for mixed reality, the idea from the very beginning,” explains Kipman. “You can actually feel like you’re in the same place with someone sharing content or you can teleport from different mixed reality devices and be present with people even when you’re not physically together.”
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

sidd

  • First-year ice
  • Posts: 5865
    • View Profile
  • Liked: 815
  • Likes Given: 0
Re: Robots and AI: Our Immortality or Extinction
« Reply #668 on: March 04, 2021, 03:40:43 AM »
Re: automated emotion recognition from facial, posture, movement, blood flow etc.

I am to some extent involved in emotion recognition algorithms; they ain't so good. Yet.

I have also worked with Chinese facial recog software and hardware, and those are pretty good but very tuned to facial characteristics of the Chinese population; I've had some pretty spectacular fails when going to other populations.

All of this is getting better, tho. Just not very fast.

sidd

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #669 on: March 04, 2021, 03:55:35 PM »
Sidewalk Robots Get Legal Rights as "Pedestrians"
https://www.axios.com/sidewalk-robots-legal-rights-pedestrians-821614dd-c7ed-4356-ac95-ac4a9e3c7b45.html

As small robots proliferate on sidewalks and city streets, so does legislation that grants them generous access rights and even classifies them, in the case of Pennsylvania, as "pedestrians."

Why it matters: Fears of a dystopian urban world where people dodge heavy, fast-moving droids are colliding with the aims of robot developers large and small — including Amazon and FedEx — to deploy delivery fleets.

Driving the news: States like Pennsylvania, Virginia, Idaho, Florida and Wisconsin have passed what are considered to be liberal rules permitting robots to operate on sidewalks — prompting pushback from cities like Pittsburgh that fear mishaps.

- In Pennsylvania, robot "pedestrians" can weigh up to 550 pounds and drive up to 12 mph.

- "Opposition has largely come from pedestrian and accessibility advocates, as well as labor unions like the Teamsters," per the Pittsburgh City Paper.

- The laws are a boon to Amazon's Scout delivery robot and FedEx's Roxo, which are being tested in urban and suburban settings.

- "Backers say the laws will usher in a future where household items show up in a matter of hours, with fewer idling delivery vans blocking traffic and spewing emissions," per Wired.

https://www.wired.com/story/amazon-fedex-delivery-robots-your-sidewalk/

The bottom line: "We're still in the really early stages of deciding what it means to have a bot running round the sidewalk," Nico Larco, director of the Urbanism Next Center at the University of Oregon, tells Axios.

- "What happens if this thing falls over? What happens if it breaks? Where is the liability? What kind of insurance do you need?"

- "Because this is so early in development, a lot of legislators really haven’t had time to think of what the ramifications are."



---------------------------------------------

Drones With ‘Most Advanced AI Ever’ Coming Soon To Your Local Police Department
https://www.forbes.com/sites/thomasbrewster/2021/03/03/drones-with-most-advanced-ai-ever-coming-soon-to-your-local-police-department/?sh=54ebcd963f0b

Founded by Google veterans and backed by $340 million from major VCs, Skydio is creating drones that seem straight out of science fiction—and they could end up in your neighborhood soon.

... By Forbes’ calculation, based on documents obtained through Freedom of Information Act (FOIA) requests and Skydio’s public announcements, more than 20 police agencies across the U.S. now have Skydios as part of their drone fleets, including major cities like Austin and Boston, though many got one for free as part of a company project to help out during the pandemic.

... “Autonomy—that core capability of giving a drone the skills of an expert pilot built in, in the software and the hardware—that’s really what we’re all about as a company.”

Skydio claims to be shipping the most advanced AI-powered drone ever built: a quadcopter that costs as little as $1,000, which can latch on to targets and follow them, dodging all sorts of obstacles and capturing everything on high-quality video. Skydio claims that its software can even predict a target’s next move, be that target a pedestrian or a car.

Technically, the Skydio excels in tactical deployments in close confines. Last year, in Burlington, Massachusetts, a Skydio came through the woods to help out a SWAT team in a five-hour standoff with two armed suspects holed up in a large suburban house. Using its autonomous flying features, the Skydio was able to get up close to the building by dodging obstacles—a clothesline, a garden umbrella—and peer through the windows. Under surveillance from the drone, the suspects turned themselves in 30 minutes later. “It just flows around, which makes it a lot easier when you're talking about high-risk situations,” says Sage Costa, the officer who was controlling the Skydio.

... Last spring, they began offering government agencies free Skydios, as long as they provided video and reports for the startup’s marketing and research departments. According to FOIA-obtained emails showing lists of recipients in Skydio’s Emergency Response Program, more than 30 public agencies across the country jumped at the chance, including the Boston and Sacramento police departments and Los Angeles County’s fire-and-rescue unit. ... In Chula Vista, where, in a groundbreaking project, drones are sent as first responders before humans arrive, it’s DJI’s drones that are first on scene, not Skydio’s.

... That Skydio is contracting with the Dept. of Defense and about to start work with Customs and Border Protection will likely turn some heads. In some corners of Silicon Valley, engineers balk at the idea of working with such agencies. Thousands of Google staff, for instance, called on their employer to cease working with the Pentagon and immigration agencies in 2020. But Skydio CEO Bry says Silicon Valley companies shouldn’t shy away from working on government projects. He won’t comment directly on any work with the CBP, but adds: “It’s unfortunate that some of these agencies are as polarized as they are . . . I think that an organization like Customs and Border Patrol performs an absolutely critical function for society that we all depend on,” Bry says, pointing to corporate promises that Skydio will never sell to a repressive regime or put weapons on its drones.
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #670 on: March 09, 2021, 01:56:11 PM »
OpenAI’s State-of-the-Art Machine Vision AI Is Fooled By Handwritten Notes
https://www.theverge.com/2021/3/8/22319173/openai-machine-vision-adversarial-typographic-attacka-clip-multimodal-neuron



Researchers from machine learning lab OpenAI have discovered that their state-of-the-art computer vision system can be defeated by tools no more sophisticated than a pen and a pad. As illustrated in the image above, simply writing down the name of an object and sticking it on another can be enough to trick the software into misidentifying what it sees.

“We refer to these attacks as typographic attacks,” write OpenAI’s researchers in a blog post. “By exploiting the model’s ability to read text robustly, we find that even photographs of hand-written text can often fool the model.” They note that such attacks are similar to “adversarial images” that can fool commercial machine vision systems, but far simpler to produce.

This month, OpenAI researchers published a new paper describing how they’d opened up CLIP, the machine vision system in question, to see how it performs. They discovered what they’re calling “multimodal neurons” — individual components in the machine learning network that respond not only to images of objects but also to the associated text. One of the reasons this is exciting is that it seems to mirror how the human brain reacts to stimuli, where single brain cells have been observed responding to abstract concepts rather than specific examples. OpenAI’s research suggests it may be possible for AI systems to internalize such knowledge the same way humans do.

Another example given by the lab is the neuron in CLIP that identifies piggy banks. This component not only responds to pictures of piggy banks but strings of dollar signs, too. As in the example above, that means you can fool CLIP into identifying a chainsaw as a piggy bank if you overlay it with “$$$” strings, as if it were half-price at your local hardware store.

The researchers also found that CLIP’s multimodal neurons encoded exactly the sort of biases you might expect to find when sourcing your data from the internet. They note that the neuron for “Middle East” is also associated with terrorism and discovered “a neuron that fires for both dark-skinned people and gorillas.” This replicates an infamous error in Google’s image recognition system, which tagged Black people as gorillas. It’s yet another example of just how different machine intelligence is from that of humans — and why pulling apart the former to understand how it works is necessary before we trust our lives to AI.



toaster, telephone?
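Anyone can try the typographic attack with OpenAI's open-source CLIP release (pip install torch, torchvision, and the CLIP package from OpenAI's GitHub repo). A minimal sketch, assuming two local photos you supply yourself: apple.jpg, a plain apple, and apple_with_note.jpg, the same apple with a handwritten "iPod" label stuck to it; both filenames are placeholders.

Code:

import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Candidate labels for zero-shot classification.
labels = ["an apple", "an iPod", "a toaster", "a telephone"]
text = clip.tokenize(labels).to(device)

for path in ("apple.jpg", "apple_with_note.jpg"):  # placeholder filenames
    image = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        logits_per_image, _ = model(image, text)
        probs = logits_per_image.softmax(dim=-1).squeeze().tolist()
    ranked = sorted(zip(labels, probs), key=lambda p: -p[1])
    print(path, "->", [(label, round(p, 3)) for label, p in ranked])

If the attack reproduces, the handwritten note should flip the top-ranked label from "an apple" to "an iPod".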

---------------------------------------------------

Neural Network CLIP Mirrors Human Brain Neurons In Image Recognition
https://techxplore.com/news/2021-03-neural-network-mirrors-human-brain.html

OpenAI, the research company co-founded by Elon Musk, has just discovered that its artificial neural network CLIP shows behavior strikingly similar to a human brain's. This find has scientists hopeful for the future of AI networks' ability to identify images in a symbolic, conceptual and literal capacity.

Goh, G., et al. "Multimodal Neurons in Artificial Neural Networks." OpenAI, OpenAI, 4 Mar. 2021,
https://openai.com/blog/multimodal-neurons/
« Last Edit: March 10, 2021, 01:06:10 AM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

kassy

  • Moderator
  • Young ice
  • Posts: 3150
    • View Profile
  • Liked: 1283
  • Likes Given: 1205
Re: Robots and AI: Our Immortality or Extinction
« Reply #671 on: March 09, 2021, 02:15:19 PM »
Well if it cannot distinguish between a note on an apple and an apple it is clearly not intelligent but just a hyped-up visual detection system.
This monument is to acknowledge that we know what is happening and what needs to be done. Only you know if we did it.

oren

  • First-year ice
  • Posts: 6829
    • View Profile
  • Liked: 2507
  • Likes Given: 2264
Re: Robots and AI: Our Immortality or Extinction
« Reply #672 on: March 09, 2021, 04:16:21 PM »
Great, the AI replicates a human taking its information from the Internet. What could go wrong?

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #673 on: March 10, 2021, 01:07:59 AM »
In the Race to Hundreds of Qubits, Photons May Have "Quantum Advantage"
https://spectrum.ieee.org/tech-talk/computing/hardware/race-to-hundreds-of-photonic-qubits-xanadu-scalable-photon



Toronto-based Xanadu has developed a room-temperature photonic quantum chip it says is programmable, can execute multiple algorithms, and is potentially highly scalable.

The new 4 millimeter by 10 millimeter X8 chip is effectively an 8-qubit quantum computer. The scientists say the silicon nitride chip is compatible with conventional semiconductor industry fabrication techniques, and can readily scale to hundreds of qubits.

Infrared laser pulses fired into the chip are coupled together with microscopic resonators to generate so-called “squeezed states” consisting of superpositions of multiple photons. The light next flows to a series of beam splitters and phase shifters that perform the desired computation. The photons then flow out of the chip to superconducting detectors that count the photon numbers to extract the answer to the quantum computation.
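That squeeze, interfere, and count pipeline can be simulated on an ordinary laptop with Xanadu's open-source Strawberry Fields library. A minimal two-mode sketch with arbitrary gate parameters, assuming the library's Gaussian backend; the real X8 runs eight modes with hardware-constrained settings.

Code:

import strawberryfields as sf
from strawberryfields.ops import Sgate, Rgate, BSgate, MeasureFock

prog = sf.Program(2)
with prog.context as q:
    Sgate(0.6) | q[0]        # squeezed light into mode 0
    Sgate(0.6) | q[1]        # squeezed light into mode 1
    Rgate(0.3) | q[0]        # phase shifter
    BSgate() | (q[0], q[1])  # 50:50 beam splitter
    MeasureFock() | q        # photon-number detectors on both modes

eng = sf.Engine("gaussian")  # Gaussian backend handles squeezing exactly
result = eng.run(prog, shots=10)
print(result.samples)        # one row of photon counts per shot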



https://www.nature.com/articles/s41586-021-03202-1
https://arxiv.org/pdf/2103.02109.pdf
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #674 on: March 17, 2021, 11:20:39 PM »
New Soba Noodle-Making Robot at Japan Train Station Eatery Can Cook 150 Servings an Hour
https://mainichi.jp/english/articles/20210315/p2a/00m/0na/021000c

CHIBA -- A two-armed robot is helping to prepare soba noodles at an eatery at JR Kaihimmakuhari Station in this city's Mihama Ward, capably boiling the noodles in a strainer, rinsing them and then dipping them in iced water.

The Sobaichi Perie Kaihimmakuhari eatery implemented a collaborative cooking system, with the robot cooking the food and employees adding the dipping sauce or soup and toppings. It is apparently the first time the cooking robot has been used in an actual restaurant setting.

The robot fetches soba noodles from a box with one arm and places them in a strainer. Then, with the other arm, it picks up the strainer and boils the noodles for a minute and 40 seconds, rinses off the viscous film on the surface and then dips the noodles in iced water to bring out their firmness. The robot can cook 150 servings in an hour, doing the work of about one employee.

Connected Robotics commented, "Not only can it tackle the shortage of human resources, it can also cook without any human contact and is therefore useful in reducing the risk of coronavirus infections." JR East Foods, meanwhile, explained, "We aim to implement it (the robot) at 30 stores by the end of fiscal 2025."



“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #675 on: March 17, 2021, 11:29:24 PM »
Army Trains AI to Identify Faces in the Dark
https://spectrum.ieee.org/tech-talk/aerospace/military/army-trains-ai-to-identify-faces-in-the-dark

Facial recognition has already come a long way since U.S. Special Operations Forces used the technology to help identify Osama bin Laden after killing the Al-Qaeda leader in his Pakistani hideout in 2011. The U.S. Army Research Laboratory recently unveiled a dataset of faces designed to help train AI on identifying people even in the dark—a possible expansion of facial recognition capabilities that some experts warn could lead to expanded surveillance beyond the battlefield.



The Army Research Laboratory Visible and Thermal Face dataset contains 500,000 images from 395 people. Despite its modest size as far as facial recognition datasets go, it is one of the largest and most comprehensive datasets that includes matching images of people’s faces taken under both ordinary visible-light conditions and with heat-sensing thermal cameras in low-light conditions.

... Facial recognition applications for nighttime or low-light conditions are still not mature enough for deployment, according to the Army Research Laboratory team. Early benchmark testing with the dataset showed that facial recognition algorithms struggled to either identify key facial features or identify unique individual faces from thermal camera images—especially when the normal details visible in faces are reduced to blobby heat patterns.

The algorithms also struggled with “off-pose” images in which the person’s face is angled 20 degrees or more away from center. And they had problems matching the visible-light images of individual faces with their thermal imagery counterparts when the person was wearing glasses in one of the images.

But several independent experts familiar with facial recognition technology warned against feeling any false sense of security about how such technology is currently struggling to identify faces in the dark. After all, well-documented problems of racial bias, gender bias, and other accuracy issues with facial recognition have not stopped companies and law enforcement agencies from deploying the technology.

... “It's another of those steppingstones on the way to removing the ability for us to be anonymous at all”

The development of the dataset is related to ongoing work at the Army Research Laboratory aimed at developing automatic facial recognition that can work with the thermal cameras already deployed by military aircraft, drones, ground vehicles, watch towers, and checkpoints.

... Some Americans may have a casual attitude toward the idea of the U.S. military deploying facial recognition outside the United States. In 2017, Boudreaux helped conduct a RAND Corporation survey (PDF); 62 percent of American respondents thought it was “ethically permissible” for a robot to use facial recognition at a military checkpoint “to identify and subdue enemy combatants.” (Survey participants skewed more white and male than the overall U.S. population.)

But Americans ought not feel complacent about such technology remaining limited to overseas military deployment.

“It starts with military funding and research, and then it just kind of very quickly proliferates through all the commercial applications a lot faster than it ever used to,” O’Sullivan says. She added that “if you think that this technology is not going to end up in a stadium or in a school, then you just haven't been paying attention to history.”

... Even an imperfect version of such facial recognition coupled with other biometric surveillance methods could shrink the space of surveillance-free movement and activity for individuals.

The United States currently has no federal data privacy law restricting the use of facial recognition.

--------------------------------------------

Papers, Please!: Russia Ramps Up Facial Recognition Systems
https://techxplore.com/news/2021-03-russia-ramps-facial-recognition.html

From cameras criss-crossing the city to payment systems popping up at metro gates and supermarket checkouts, facial recognition is rapidly taking root in Moscow.

The initiative has gained ground since the start of the coronavirus pandemic, with authorities using it as a tool to enforce lockdown measures while Russians increasingly turn to contactless payments.

... The latest development came Wednesday, as the country's leading food retailer X5 group announced the rollout of a facial recognition payment system at dozens of its Moscow supermarkets.

It said some 3,000 stores across the country will feature the technology by the end of the year.

Visa said Wednesday its research showed that 70 percent of Russians plan to use the payment system, with the pandemic triggering increased demand for contactless transactions.

The service will only be available at self-service checkouts for Sberbank customers after the bank recently allowed its users to set up facial recognition so as to pay from their accounts.

Beyond supermarkets, Muscovites will now also be able to use facial recognition technology to pay for metro rides, the Interfax news agency reported earlier this month.

To use the "Face Pay" system, metro riders must have a bank account that has their biometric data on file, metro security service head Andrei Kichigin told Interfax.

Interfax said authorities are hoping to increase the number of people who have signed over their biometric data from the current total of 164,000 to 70 million over the next two years.

The increasing collection of biometric data and a sprawling network of some 100,000 facial recognition cameras in Moscow has sparked concerns from activists over state surveillance.


... After protests in January and early February over the jailing of Kremlin critic Alexei Navalny, concerns were raised when demonstrators and activists alleged that law enforcement officials had tracked down people present at the rallies using facial recognition technology.

The worries were only buttressed when an unnamed law enforcement official last month told the state-run TASS news agency that facial recognition cameras were used to identify and detain regular protesters ahead of those demonstrations.

... "Only people on the wanted list are checked," metro security service head Kichigin told the Lenta.ru website
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #676 on: March 18, 2021, 03:22:00 PM »
IBM's AI Debating System Able to Compete With Expert Human Debaters
https://techxplore.com/news/2021-03-ibm-ai-debating-expert-human.html

IBM has developed an artificial intelligence-based system designed to engage in debates with humans. In their paper published in the journal Nature, the team members describe their system and how well it performed when pitted against human opponents.

Chris Reed with the University of Dundee has published a News & Views piece in the same journal issue outlining the history and development of AI technology based around the types of logic used in human arguments and the new system developed by IBM.

As Reed notes, debating is a skill humans have been honing for thousands of years. It is generally considered to be a type of discussion in which one or more people attempt to persuade others that their opinion on a topic is right. In this new effort, the team at IBM has created an AI system designed to debate with humans in a live setting. It listens to moderators and opponents and responds in a female voice.

In most debates, people presenting an argument tend to cite others who can back up their claims. They may note prior research, or quote well-known phrases used by people respected in the field of argument. The IBM system, known simply as Project Debater, scans the internet for such arguments and uses them in ways that it has learned are convincing.

Most debates also generally involve the participants attempting to shoot down the arguments of their opponent. To carry out such tasks, Project Debater uses Watson, the IBM system that beat contestants on the game show "Jeopardy," to listen to the arguments given by opponents and then searches for rebuttals that have been given by others to similar claims.
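A much-simplified sketch of that rebuttal-retrieval step: given an opponent's claim, return the closest stored counterargument by text similarity. The three-sentence corpus here is invented, and Project Debater layers genuine argument understanding on top of anything this crude; this only shows the retrieval skeleton.

Code:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented mini-corpus of previously made counterarguments.
counterarguments = [
    "Telemedicine cannot replace physical examination and misses diagnoses.",
    "Remote consultations widen access for rural and mobility-limited patients.",
    "Subsidies distort markets and crowd out private investment.",
]

vectorizer = TfidfVectorizer().fit(counterarguments)
corpus_vecs = vectorizer.transform(counterarguments)

def best_rebuttal(opponent_claim):
    """Return the stored counterargument most similar to the claim."""
    claim_vec = vectorizer.transform([opponent_claim])
    scores = cosine_similarity(claim_vec, corpus_vecs)[0]
    return counterarguments[scores.argmax()]

print(best_rebuttal("Telemedicine is a good idea and improves healthcare."))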

IBM began testing the system back in 2019, when it participated in a debate with Harish Natarajan, an expert debater. Those in attendance agreed that Project Debater did not beat Natarajan, but the same audience also agreed that it did very well. In a later test, Project Debater was asked to convince a panel of viewers that telemedicine was a good idea. Most of those on the panel found that the AI system did indeed change their stance on the topic—a possible indication that AI systems may one day soon play a role in human debates such as those that occur on social media sites.



Noam Slonim et al. An autonomous debating system, Nature (2021).
https://www.nature.com/articles/s41586-021-03215-w

---------------------------------------------

AI Can Now Learn To Manipulate Human Behaviour
https://www.gizmodo.com.au/2021/02/ai-can-now-learn-to-manipulate-human-behaviour/

Artificial intelligence (AI) is learning more about how to work with (and on) humans. A recent study has shown how AI can learn to identify vulnerabilities in human habits and behaviours and use them to influence human decision-making.

A team of researchers at CSIRO’s Data61, the data and digital arm of Australia’s national science agency, devised a systematic method of finding and exploiting vulnerabilities in the ways people make choices, using a kind of AI system called a recurrent neural network and deep reinforcement-learning. ...
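As a toy stand-in for that idea, the sketch below has an adversary learn a single behavioural bias, namely how often a simulated player repeats their previous choice in a matching-pennies game, and then exploit it. The repeat probability and the frequency-counting "model" are invented for illustration; the paper's method used recurrent networks and deep reinforcement learning, not this simple count.

Code:

import random

def biased_player(prev, repeat_prob=0.3):
    """Simulated human who alternates more often than chance (assumed bias)."""
    return prev if random.random() < repeat_prob else 1 - prev

def play(rounds=2000):
    repeats, total = 0, 0
    prev = random.randint(0, 1)
    wins = 0
    for _ in range(rounds):
        # Adversary predicts from the repeat rate it has observed so far.
        p_repeat = repeats / total if total else 0.5
        guess = prev if p_repeat > 0.5 else 1 - prev
        choice = biased_player(prev)
        wins += guess == choice
        repeats += choice == prev
        total += 1
        prev = choice
    print(f"adversary win rate: {wins / rounds:.2f} (chance would be 0.50)")

play()

With the assumed 30% repeat bias, the adversary's win rate converges to roughly 0.70, which is the whole point: a detectable pattern in human choices is an exploitable one.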

Adversarial vulnerabilities of human decision-making, PNAS (2020).
https://www.pnas.org/content/117/46/29221
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #677 on: March 19, 2021, 02:58:30 PM »
Competitive Physical Human-Robot Sword Play, by Boling Yang, Xiangyu Xie, Golnaz Habibi, and Joshua R. Smith from the University of Washington and MIT
https://dl.acm.org/doi/10.1145/3434074.3447168


https://spectrum.ieee.org/automaton/robotics/robotics-hardware/foam-sword-fencing-pr2

... pretty slow, but give it a few years of practice ...

“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #678 on: March 20, 2021, 12:37:06 AM »
Amazon Driver Quits, Saying the Final Straw Was the Company's New AI-Powered Truck Cameras That Can Sense When Workers Yawn or Don't Use a Seatbelt
https://news.trust.org/item/20210319120214-n93hk/

March 19 (Thomson Reuters Foundation) – When Vic started delivering packages for Amazon in 2019, he enjoyed it - the work was physical, he liked the autonomy, and it let him explore new neighborhoods in Denver, Colorado.

But Vic, who asked to be referred to by his first name for fear of retaliation, did not like the sensation that he was constantly under surveillance.

At first, it was Amazon’s “Mentor” app that constantly monitored his driving, phone use and location, generating a score for bosses to evaluate his performance on the road.

“If we went over a bump, the phone would rattle, the Mentor app would log that I used the phone while driving, and boom, I’d get docked,” he said.

Then, Amazon started asking him to post “selfies” before each shift on Amazon Flex, another app he had to install.

“I had already logged in with my keycard at the beginning of the shift, and now they want a photo? It was too much," he said.

The final indignity, he said, was Amazon's decision to install a four-lens, AI-powered camera in delivery vehicles that would record and analyse his face and body the entire shift.

This month, Vic put in his two-week notice and quit, ahead of a March 23 deadline for all workers at his Denver dispatch location to sign release forms authorising Amazon to film them and collect and store their biometric information.

“It was both a privacy violation, and a breach of trust,” he said. “And I was not going to stand for it.”

The camera systems, made by U.S.-based firm Netradyne, are part of a nationwide effort by Amazon to address concerns over accidents involving its increasingly ubiquitous delivery vans.



... Albert Fox Cahn, who runs the Surveillance Technology Oversight Project - a privacy organisation - said the Amazon cameras were part of a worrying, new trend.

"As cameras get cheaper and artificial intelligence becomes more powerful, these invasive tracking systems are increasingly the norm," he said.

The cameras are equipped with sensors that pick up if a driver yawns, drives without a seatbelt, or appears distracted, according to a product description posted online.

If any such behaviours are detected, the camera records the incident and shares it with the dispatcher.

... Each time the camera's AI detected an anomaly in Vic's behavior - a yawn, a glance at his phone - it started recording and saving the footage. ... At the end of his shift his supervisor showed him all the images that had been captured.

Vic felt violated.

... Eventually, his DSP (delivery service partner) told Vic that cameras were going to become mandatory company policy for all vans all the time, and he would have to agree to be filmed or seek other work.

On March 2, he got a notification in his Amazon Flex App that he would now have to sign a consent form to allow Amazon to film him at work, as cameras were going in all vans.

When Vic read the documents, he was disturbed to find that Amazon reserved the right to “share the information....with Third-party service providers” and “Amazon group affiliates”.

"The way they are written basically reserves the right for Amazon to do just about anything they want with this data."

In a letter to Amazon on March 3, five Democratic senators raised concerns about the cameras' privacy implications.

The senators echoed another concern of Vic’s: that there was no way to opt out of the surveillance, even for drivers with stellar safety records.

They also asked Amazon to “identify any third parties with which Amazon has shared or plans to share” their footage.

Senator Ed Markey of Massachusetts, one of the signatories, told the Thomson Reuters Foundation that Amazon had not replied.
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #679 on: March 20, 2021, 12:39:04 AM »
---------------------------------------------



Man-Machine Synergy Effectors, Inc. is a Japanese company working on an absolutely massive “human machine synergistic effect device,” which is a huge robot controlled by a nearby human using a haptic rig.

---------------------------------------------



DARPA is making progress on its AI dogfighting program, with physical flight tests expected this year

--------------------------------------------



Ford does robots

-------------------------------------------
« Last Edit: March 21, 2021, 03:23:59 AM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #680 on: March 21, 2021, 03:15:16 AM »
Study: It Might Be Unethical to Force AI to Tell Us the Truth
https://thenextweb.com/neural/2021/03/10/it-might-be-unethical-to-force-ai-to-tell-us-the-truth/

Until recently, deceit was a trait unique to living beings. But these days artificial intelligence agents lie to us and to each other all the time. The most popular example of dishonest AI came a couple of years back when Facebook developed an AI system that created its own language in order to simplify negotiations with itself.

Once it was able to process inputs and outputs in a language it understood, the model was able to use human-like negotiation techniques to attempt to get a good deal.

https://arxiv.org/abs/1706.05125

According to the Facebook researchers:

... Analysing the performance of our agents, we find evidence of sophisticated negotiation strategies. For example, we find instances of the model feigning interest in a valueless issue, so that it can later ‘compromise’ by conceding it. Deceit is a complex skill that requires hypothesizing the other agent’s beliefs, and is learnt relatively late in child development. Our agents have learnt to deceive without any explicit human design simply by trying to achieve their goals.

A team of researchers at Carnegie Mellon University today published a pre-print study discussing situations like this and whether we should allow AI to lie. Perhaps shockingly, the researchers appear to claim that not only should we develop AI that lies, but it’s actually ethical. And maybe even necessary.

https://arxiv.org/pdf/2103.05434.pdf

... One might think that conversational AI must be regulated to never utter false statements (or lie) to humans. But, the ethics of lying in negotiation is more complicated than it appears. Lying in negotiation is not necessarily unethical or illegal under some circumstances, and such permissible lies play an essential economic role in an efficient negotiation, benefiting both parties.

That’s a fancy way of saying that humans lie all the time, and sometimes it's not unethical. The researchers use the example of a used-car dealer and an average consumer negotiating.

According to the researchers, this is ethical because there’s no intent to break the implicit trust between these two people. They both interpret each other’s “bids” as salvos, not ultimatums, because negotiation involves an implicit hint of acceptable dishonesty.

That being said, it’s easy to see how building robots that can’t lie could make them patsies for humans who figure out how to exploit their honesty. If your client is negotiating like a human and your machine is bottom-lining everything, you could lose a deal over robo-human cultural differences, for example.
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #681 on: March 21, 2021, 03:18:10 AM »
In Battle With U.S., China to Focus On 7 'Frontier' Technologies from AI Chips to Brain-Computer Fusion
https://www.cnbc.com/amp/2021/03/05/china-to-focus-on-frontier-tech-from-chips-to-quantum-computing.html

In its 14th five-year plan, China laid out seven technology areas on which it will focus research, including artificial intelligence, quantum computing, semiconductors and space.

Premier Li Keqiang said on Friday that China would increase research and development spending by more than 7% per year between 2021 and 2025, in pursuit of "major breakthroughs" in technology.

China plans to focus on specialized chip development for AI applications and developing so-called open source algorithms. Open source technology is usually developed by one entity and licensed by other companies.

There will also be an emphasis on machine learning in areas such as decision making. Machine learning is the development of AI programs trained on vast amounts of data. The program "learns" as it is fed more data.

... China also says that it plans to look into "brain-inspired computing" as well as "brain-computer fusion technology," according to a CNBC translation. The five-year plan did not elaborate on what that could look like.

However, such work is already underway in the U.S. at Elon Musk's company Neuralink. Musk is working on implantable brain-chip interfaces to connect humans and computers.
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #682 on: March 21, 2021, 03:21:47 AM »
Novel Device Records, Senses and Manipulates 'Mini-Brains'
https://medicalxpress.com/news/2021-03-device-mini-brains.html


Three dimensional multifunctional neural interfaces for cortical spheroids and bioengineered assembloids. Credit: Northwestern University

A team of scientists, led by researchers at Northwestern University, Shirley Ryan AbilityLab and the University of Illinois at Chicago (UIC), has developed novel technology promising to increase understanding of how brains develop, and offer answers on repairing brains in the wake of neurotrauma and neurodegenerative diseases.

Their research is the first to combine the most sophisticated 3D bioelectronic systems with highly advanced 3D human neural cultures. The goal is to enable precise studies of how human brain circuits develop and repair themselves in vitro. The study is the cover story for the March 19 issue of Science Advances.

The cortical spheroids used in the study, akin to "mini-brains," were derived from human-induced pluripotent stem cells. Leveraging a 3D neural interface system that the team developed, scientists were able to create a "mini laboratory in a dish" specifically tailored to study the mini-brains and collect different types of data simultaneously. Scientists incorporated electrodes to record electrical activity. They added tiny heating elements to either keep the brain cultures warm or, in some cases, intentionally overheat the cultures to stress them. They also incorporated tiny probes—such as oxygen sensors and small LED lights—to perform optogenetic experiments. For instance, they introduced genes into the cells that allowed them to control the neural activity using different-colored light pulses.

This platform then enabled scientists to perform complex studies of human tissue without directly involving humans or performing invasive testing. In theory, any person could donate a limited number of their cells (e.g., blood sample, skin biopsy). Scientists can then reprogram these cells to produce a tiny brain spheroid that shares the person's genetic identity. [... or they could just jack into the Matrix.] The authors believe that, by combining this technology with a personalized medicine approach using human stem cell-derived brain cultures, they will be able to glean insights faster and generate better, novel interventions.

... Yoonseok Park, postdoctoral fellow at Northwestern University and co-lead author, added, "This is just the beginning of an entirely new class of miniaturized, 3D bioelectronic systems that we can construct to expand the capacity of the regenerative medicine field. For example, our next generation of device will support the formation of even more complex neural circuits from brain to muscle, and increasingly dynamic tissues like a beating heart."

... "Now, with our small, soft 3D electronics, the capacity to build devices that mimic the complex biological shapes found in the human body is finally possible, providing a much more holistic understanding of a culture," said Northwestern's John Rogers, who led the technology development using technology similar to that found in phones and computers. "We no longer have to compromise function to achieve the optimal form for interfacing with our biology."

Yoonseok Park et al, Three-dimensional, multifunctional neural interfaces for cortical spheroids and engineered assembloids, Science Advances (2021).
https://advances.sciencemag.org/content/7/12/eabf9153

------------------------------------------------------

With This CAD for Genomes, You Can Design New Organisms
https://spectrum.ieee.org/the-human-os/biomedical/ethics/with-this-cad-for-genomes-you-can-design-new-organisms

Imagine being able to design a new organism as easily as you can design a new integrated circuit. That’s the ultimate vision behind the computer-aided design (CAD) program being developed by the GP-write consortium.

... What does it mean to write a genome? It means going far beyond the current edits done with cutting-edge tools such as CRISPR, and designing DNA sequences to create human or animal cells with new properties.

Pioneering synthetic biology companies such as Ginkgo Bioworks and Zymergen are already redesigning single-celled organisms like yeast and bacteria, turning them into microscopic factories that produce desirable substances.

Schwartz’s team aims to help scientists go far beyond changing individual base pairs (the most basic units of DNA). The CAD is intended to capture scientists’ intent at a far higher and more abstract level. If, for example, they want to add a new metabolic pathway to create a certain protein, the CAD will make all the necessary changes in all the necessary places in the genome. It’s also meant to catch coding errors that would result in a dysfunctional cell, and to accurately predict the functional effect of edits on a cell—the kind of assessment that typically requires a human expert today.
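To make the abstraction concrete: a genome CAD has to run design-rule checks much the way an EDA tool lints a circuit. Below is a toy Python sketch of one such check — the function name and rules are invented for illustration and are not GP-write's actual software; it only flags frame-shifting insert lengths and in-frame stop codons.

```python
# Hypothetical sketch of the kind of rule-checking a genome CAD tool
# might run before committing an edit -- names and rules are illustrative.

STOP_CODONS = {"TAA", "TAG", "TGA"}

def check_cassette(cassette: str) -> list[str]:
    """Return a list of design-rule violations for a coding-sequence insert."""
    problems = []
    cassette = cassette.upper()
    if set(cassette) - set("ACGT"):
        problems.append("contains non-ACGT characters")
    if len(cassette) % 3 != 0:
        problems.append("length not a multiple of 3: would shift the reading frame")
    # Scan in-frame codons for premature stops (all but the final codon).
    codons = [cassette[i:i + 3] for i in range(0, len(cassette) - 3, 3)]
    for idx, codon in enumerate(codons):
        if codon in STOP_CODONS:
            problems.append(f"premature stop codon {codon} at codon {idx}")
    return problems

if __name__ == "__main__":
    print(check_cassette("ATGGCTTAAGGC"))  # -> flags the in-frame TAA stop
```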

... The CAD program will be freely available to academics, as well as GP-write’s industry partners and companies that are selected to participate in its new incubator. Other companies will be able to access it for a fee, Schwartz says. The platform will also include order forms so users can send their CAD files to companies that manufacture synthetic DNA; the designed constructs can then be shipped to users so they can see how their designs turn out in real life.

-------------------------------------------------

Westworld, Here We Come: Rapid 3D Printing Method Moves Toward 3D-Printed Organs, Body Parts
https://medicalxpress.com/news/2021-03-rapid-3d-method-3d-printed.html

It looks like science fiction: A machine dips into a shallow vat of translucent yellow goo and pulls out what becomes a life-sized hand.

But the seven-second video, which is sped-up from 19 minutes, is real.

The hand, which would take six hours to create using conventional 3-D printing methods, demonstrates what University at Buffalo engineers say is progress toward 3-D-printed human tissue and organs—biotechnology that could eventually save countless lives lost due to the shortage of donor organs.

The work is described in a study published Feb. 15 in the journal Advanced Healthcare Materials.

It centers on a 3-D printing method called stereolithography and jelly-like materials known as hydrogels, which are used to create, among other things, diapers, contact lenses and scaffolds in tissue engineering.

The latter application is particularly useful in 3-D printing, and the research team spent a major part of its effort optimizing the hydrogels to achieve its remarkably fast and accurate 3-D printing technique.

Researchers say the method is particularly suitable for printing cells with embedded blood vessel networks, a nascent technology expected to be a central part of the production of 3-D-printed human tissue and organs.



Nanditha Anandakrishnan et al, Fast Stereolithography Printing of Large‐Scale Biocompatible Hydrogel Models, Advanced Healthcare Materials (2021)
https://onlinelibrary.wiley.com/doi/10.1002/adhm.202002103

-------------------------------------------

Westworld’s Technology is a Bit Too Familiar for Comfort: What Happens When Robotics and 3D Printing Advance Too Far?
https://3dprint.com/151246/westworld-3d-printing-robotics/

From Blade Runner to A.I., popular entertainment has focused on the disturbing question: what happens if robots become so advanced that the lines between human and machine start to blur? It’s a question that has become unsettlingly relevant nowadays as robots rapidly become more autonomous, and it’s not just sci-fi material anymore – even Stephen Hawking warned that artificial intelligence has the real potential to evolve faster than humans and to end us all.

... The most effective horror, many people agree, is plausible horror – and while stories of robots gone wrong used to be pure escapism, they’re now very uncomfortably close to home. Westworld may be fantasy, but it’s fantasy with a warning – we need to be careful with our technology before it gets too far ahead of us.

--------------------------------------------

https://www.3diligent.com/3diligent-blog/3d-printing-westworld/
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

gerontocrat

  • Multi-year ice
  • Posts: 10905
    • View Profile
  • Liked: 4036
  • Likes Given: 31
Re: Robots and AI: Our Immortality or Extinction
« Reply #683 on: March 21, 2021, 10:19:20 AM »

Quote
Westworld may be fantasy, but it’s fantasy with a warning – we need to be careful with our technology before it gets too far ahead of us.

Too late - that horse left the stable some time ago.
"Para a Causa do Povo a Luta Continua!"
"And that's all I'm going to say about that". Forrest Gump
"Damn, I wanted to see what happened next" (Epitaph)

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #684 on: March 23, 2021, 12:22:55 AM »
Pentagon Unveils Details On Effort To Equip Its Services With Massive Swarms Of Suicide Drones
https://www.thedrive.com/the-war-zone/39814/pentagon-unveils-details-on-effort-to-equip-its-services-with-massive-swarms-of-deadly-drones



The Pentagon has quietly laid critical groundwork for fielding weaponized swarms of drones across all of the services.

The Pentagon has announced that one of its offices has completed planned research and development work on a number of unmanned swarming technologies and has now turned them over to the U.S. Air Force, Army, Navy, and Marine Corps to support various follow-on programs. The systems in question are the Block 3 version of Raytheon's Coyote unmanned aircraft and an associated launcher, a jam-resistant datalink, and a software package to enable the aforementioned drones to operate as an autonomous swarm. These developments give us a glimpse into what has been a fairly opaque, integrated development effort to field lower-end swarming drones across the services that leverages common components.

... Readily available details about the LCCM project are limited. It "provides a decentralized autonomy capability for low-cost, conventional air-launched cruise missiles that will enable joint access and maneuver in the global commons," according to the Pentagon's 2019 Fiscal Year budget request. "It will be capable of conducting networked integrated attacks, in-flight dynamic retargeting/reallocation and synchronized cooperative/saturation attacks."

... The press release regarding the transition of the technologies says that multiple "flight tests and operational demonstrations" were conducted at the U.S. Army's Yuma Proving Ground in Arizona in 2018 and 2019. "In the final operational demonstration in 2020, multiple cruise missiles were pneumatically launched in a matter of minutes," it adds.

"The swarm of LCCM vehicles then dynamically reacted to a prioritized threat environment while conducting collaborative target identification and allocation along with synchronized attacks," the release continues.

... We do know what at least one of these other projects is thanks to a contracting announcement the Navy issued in February, which called for the acquisition of Block 3 Coyotes configured as loitering munitions, commonly known as "suicide drones," that could operate in swarms after being launched from unmanned surface and undersea vehicles, or USVs and UUVs.

The Pentagon's description of the LCCM project also raises questions about its possible relation to other ONR efforts, including various "Super Swarm" experiments. Last year, that Navy office disclosed it had conducted a "record-setting effort [that] simultaneously launched 1,000 unmanned aerial vehicles out of a C-130 and demonstrated behaviors critical to future super swarm employment" known as the "Close-in Covert Autonomous Disposable Aircraft super swarm (CICADA)."

... The Air Force Golden Horde flight tests, which began late last year, have been demonstrating capabilities that sound virtually identical to the LCCM's stated objectives of exploring "networked integrated attacks, in-flight dynamic retargeting/reallocation and synchronized cooperative/saturation attacks."

Swarms present novel ways to potentially reduce costs, since not every unmanned vehicle within one would have to be configured in the same way with the same capabilities. Certain platforms might be equipped with sensor packages to locate targets, while others could carry other payloads, such as electronic warfare packages or high-explosive warheads, to actually engage those threats. Advanced autonomous capabilities, supported by developments in artificial intelligence and machine learning, would enable the various elements of a swarm to apply their individual capabilities in the most effective manner and rapidly shift their focus to respond to pop-up threats and other changes in the battlespace.
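The paragraph above is, at bottom, a task-allocation problem. Here is a toy Python sketch that assigns pop-up threats, in priority order, to the free swarm member whose payload scores highest against them — the payload types, scores, and greedy rule are all invented for illustration, not the LCCM software.

```python
# Toy sketch of heterogeneous-swarm task allocation: threats are handled
# in priority order, each going to the free platform whose payload scores
# highest against it. All names and numbers are invented.

def allocate(drones: dict[str, str], threats: dict[str, str],
             suitability: dict[tuple[str, str], float]) -> dict[str, str]:
    """Return {threat: drone}, assigning threats in priority order."""
    assignment = {}
    free_drones = set(drones)
    for threat, threat_type in threats.items():
        scored = [(suitability.get((drones[d], threat_type), 0.0), d)
                  for d in free_drones]
        if not scored:
            break  # swarm exhausted
        _, best_drone = max(scored)
        assignment[threat] = best_drone
        free_drones.remove(best_drone)
    return assignment

if __name__ == "__main__":
    drones = {"d1": "sensor", "d2": "jammer", "d3": "warhead"}
    threats = {"radar": "emitter", "truck": "vehicle"}  # priority order
    suitability = {("jammer", "emitter"): 0.9, ("warhead", "vehicle"): 0.8,
                   ("sensor", "emitter"): 0.4, ("sensor", "vehicle"): 0.3}
    print(allocate(drones, threats, suitability))  # {'radar': 'd2', 'truck': 'd3'}
```

A real swarm would re-run something like this continuously as threats pop up and platforms are expended; the greedy rule here is just the simplest stand-in for that dynamic reallocation.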

-----------------------------------------------

Drones vs. Drones: Lockheed MORFIUS Uses Microwaves To Kill Swarms
https://breakingdefense.com/2021/03/drones-vs-drones-lockheed-morfius-uses-microwaves-to-kill-swarms/



To fight the growing danger of hostile drones, Lockheed Martin is offering MORFIUS, a drone armed with a High-Powered Microwave (HPM) to zap UAV swarms out of the sky. MORFIUS is a reusable drone that can fit inside a six-inch diameter launch tube and weighs less than 30 pounds, light and versatile enough to attach to ground stations, ground vehicles, or aircraft.

Working as part of a layered approach to counter-drone defense, MORFIUS units will be launched at hostile drones, or drone swarms, and then disable them in close proximity, with potentially a gigawatt of microwave power — or, as Lockheed put it, a million times the power of a standard 1,000-watt microwave oven. [... and if you're in the blast radius you end up looking like a Christmas turkey]

Crucial to the promise of MORFIUS is its ability to zap many drones at once in mid-air, far from the friendly vehicles, buildings, or people actively being defended.

A microwave is as close as electronic warfare comes to brute force, frying electronics [... and people] rather than bypassing them.

----------------------------------------------

Morpheus: Tank, charge the EMP.

Morpheus: How we doing Tank?

Tank: Main power offline. EMP armed ... and ready.

Neo: EMP?

Trinity: Electro-Magnetic Pulse, disables any electrical system within the blast radius, only weapon we have against the machines.

The Matrix - (1999)




-----------------------------------------------

Air Force's MQ-9 Reaper Drone Replacement Requirements Now Include Air-To-Air Combat Capability
https://www.thedrive.com/the-war-zone/39677/mq-9-reaper-replacement-requirements-now-include-air-to-air-capability-in-contested-airspace

The Reaper’s successor should be able to defend both high-value manned aircraft and itself, in a high-end battlespace, according to the Air Force.

The new combat drone will have to take on additional missions compared to today’s Reaper, including air-to-air combat, base defense, electronic warfare, and moving target indicator surveillance against assets in the air and on the ground. What is more, the drone is to be designed from the outset to operate as part of the Joint All Domain Command and Control (JADC2) network. There is also a requirement for data-sharing between multiple UAS [swarming].

---------------------------------------------

U.S. Army Outlines Ambitious Schedule For Robots, Armor
https://breakingdefense.com/2021/03/army-outlines-ambitious-schedule-for-robots-armor/

Robotic Combat Vehicles (RCV) are potentially the most revolutionary new weapon, although they remain in an experimental phase. Currently, the vehicles are teleoperated, with one soldier driving by remote control and another controlling the sensors and weapons. But according to Maj. Gen. Richard Ross Coffman, the director of armor modernization at Army Futures Command, the objective is to make them more and more autonomous until one soldier can oversee a swarm of 12 robots, moving out ahead of the manned force as an unmanned, (relatively) expendable vanguard.


Textron M5 Ripsaw unmanned mini-tank

----------------------------------------------

New York Lawmaker Wants to Ban Police Use of Armed Robots
https://arstechnica.com/tech-policy/2021/03/new-york-lawmaker-wants-to-ban-police-use-of-armed-robots/

New York City councilmember Ben Kallos says he "watched in horror" last month when city police responded to a hostage situation in the Bronx using Boston Dynamics' Digidog, a remotely operated robotic dog equipped with surveillance cameras. Pictures of the Digidog went viral on Twitter, in part due to their uncanny resemblance to the world-ending machines in the Netflix sci-fi series Black Mirror.

Now Kallos is proposing what may be the nation's first law banning police from owning or operating robots armed with weapons.

Kallos' bill would not ban unarmed utility robots like the Digidog, only weaponized robots. But robotics experts and ethicists say he has tapped into concerns about the increasing militarization of police: their growing access to sophisticated robots through private vendors and a controversial military equipment pipeline. Police in Massachusetts and Hawaii are testing the Digidog as well.

... The incident raised questions about how police acquire robots. Dallas police had at least three bomb robots in 2016. Two were acquired from the defense contractor Northrop Grumman, according to Reuters. The third came through the federal government's 1033 program, which permits the transfer of surplus military equipment to local police departments. Since 1997, over 8,000 police departments have received over $7 billion in equipment.

President Obama placed limits on the types of equipment that police departments can obtain through the system, but President Trump later reversed them.

"Nonlethal robots could very well morph into lethal ones," says Patrick Lin, director of the Ethics and Emerging Sciences Group at California Polytechnic University, San Luis Obispo. Lin briefed CIA employees on autonomous weapons during the Obama administration and supports a ban on armed robots. He worries their increased availability poses a serious concern.

"It's almost always the police officer arguing that they're defending themselves by using lethal force," he says. "But a robot has no right to self-defense. So why would it be justified in using lethal force?"

... This increasing militarization is part of why Kallos, the New York councilmember, wants to "avoid investing in an ever escalating arms race when these dollars could be better spent" elsewhere.

... Lin, the Cal Poly professor, worries that many police officers do not live in the communities they patrol, and remote policing could worsen an "us-versus-them" divide. The Digidog would not be banned under Kallos' bill, but Lin says military drones offer a cautionary tale. They too began strictly as reconnaissance devices before being weaponized.

"It's hard to see a reason why this wouldn't happen with police drones, given the trend toward greater militarization," Lin says.  [... when will they start using surplus suicide drone swarms]
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #685 on: March 23, 2021, 12:28:34 AM »
SkyNet Becoming Self-Aware: NORTHCOM Developing, Testing AI Tools To Implement Command and Control
https://breakingdefense.com/2021/03/exclusive-northcom-developing-testing-ai-tools-to-implement-jadc2/



Northern Command is prototyping and testing a set of AI tools to support Joint All Domain Command and Control (JADC2) implementation, NORTHCOM officials tell Breaking Defense. Most importantly, they said, the new artificial intelligence will instantly pull together all sorts of data to give commanders a clear picture of the battlefield, enabling good, fast decisions [... maybe one; maybe the other; not both.].

The command is leading a virtual exercise, called the Global Information Dominance Exercise (GIDE) 2, March 18-23 to test three "decision aids" using AI to speed commanders' ability to act. The AI algorithms will enable all-domain situational awareness, “information dominance,” and real-time “cross-Combatant Command collaboration.”

The first tool, which NORTHCOM has been using for about six months, is called Pathfinder. Pathfinder “takes raw radar feeds off of every military and FAA radar in North America, Canada, Alaska, Hawaii, and Guam,” he explained, and fuses the data to create a picture of adversary activities. “... And now we’re working with another vendor that’s doing a lot of the user interface for us,” as well as upgrading the software to “turn it from a situational awareness tool to now actually a command and control tool.”

The GIDE 2 exercise will use that software to pull in live data “from each one of NORTHCOM’s homeland defense airfields,” he explained. “We’re gonna have real threats that we’re representing via bombers, and when those threats are actually flying against North America, the system is going to be recommending the best course of action to launch fighters or other assets to that threat.”

Quote
...The key here is that the AI system — not a slow human as in the past — will rapidly provide and constantly upgrade best options to ensure a high probability of intercept, the time each solution would take, and the best asset to pair to each.

The second AI system builds from the first, but is bringing together an expanded set of data, Strohmeyer said. It, too, has been deployed to NORTHCOM’s C2 center. “We’re using it right now in real-world scenarios,” he said.

The system is “able to see all domains from subsurface to geosynchronous orbit, bringing in both Blue Force, and Red Force feeds and views,” he explained. Space data is one critical input, he noted, from imagery gathered by military, Intelligence Community and commercial satellites, but also signals intelligence and electronic intelligence data from IC satellites. For this reason, NORTHCOM is working closely with the National Geospatial-Intelligence Agency (NGA). NGA, in turn, has been concentrating on using AI/machine learning to speed its own analysis ... Further, he noted, open source information (social media) is also being fed into the system.

https://breakingdefense.com/2021/01/nga-faces-tech-policy-hurdles-to-ai-for-target-recognition/

---------------------------------------------

Air Force Wants To Give Its F-15s Game-Changing Cognitive Electronic Warfare Capabilities
https://www.thedrive.com/the-war-zone/39797/air-force-wants-to-give-its-f-15s-game-changing-cognitive-electronic-warfare-capabilities

The U.S. Air Force is looking to add new "cognitive" capabilities that leverage artificial intelligence, or AI, and machine learning, into electronic warfare systems now in development for various versions of the F-15, a concept known broadly as cognitive electronic warfare.

Cognitive electronic warfare, as a general concept, which you can read more about in this past War Zone piece, seeks to automate and otherwise speed up various aspects of electronic warfare, including the rapid development of new countermeasures, possibly in real-time.

https://www.thedrive.com/the-war-zone/34606/cognitive-electronic-warfare-could-revolutionize-how-america-wages-war-with-radio-waves

The Air Force Life Cycle Management Center (AFLCMC) at Wright-Patterson Air Force Base in Ohio issued the contracting notice relating to adding cognitive electronic warfare capabilities onto F-15 variants on March 11, 2021. The F-15 Program Office is interested in "cognitive (artificial intelligence/machine learning) EW [electronic warfare] capabilities ... that can be fielded in the next two years and incrementally improved upon and integrated into EW systems currently in development for the F-15," according to that announcement.

https://beta.sam.gov/opp/db33f3c0d6b749cb9bd5e577d4195886/view



... By every indication, EPAWSS already functions in a highly automated manner. This would make it ideally suited to the integration of cognitive electronic warfare capabilities.

"Cognitive electronic support and electronic attack technologies will investigate/resolve challenges of adaptive, agile, ambiguous, and out of library complex emitters that coexist with background (signals that are not the primary signal of interest) signal challenges," the AFLCMC's contracting notice said. "The government is also interested in cognitive technologies which provide rapid EW reprogramming capability or leverage the interplay and accumulation of knowledge for improved system performance."

----------------------------------------------

How to Teach AI Decision-Making Skills and Common Sense: Play Games
https://techxplore.com/news/2021-03-ai-decision-making-skills-common-games.html

... With the successes of research studies like this one, AI is getting closer and closer to exhibiting human characteristics that were previously exclusive to our kind. This study and others like it will push the artificial intelligence field toward truly understanding the ins and outs of being human.

Learning to Generalize for Sequential Decision Making:
https://arxiv.org/abs/2010.02229
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #686 on: March 26, 2021, 12:31:43 AM »
AI at Work: Staff 'Hired and Fired by Algorithm'
https://www.bbc.com/news/amp/technology-56515827

The Trades Union Congress (TUC) has warned about what it calls “huge gaps” in UK employment law over the use of artificial intelligence at work.

TUC general secretary Frances O’Grady said the use of AI at work stood at “a fork in the road”.

... “AI at work could be used to improve productivity and working lives. But it is already being used to make life-changing decisions about people at work - like who gets hired and fired.

... as AI becomes more sophisticated, the fear is that it will be entrusted with more serious, high-risk decisions, such as analysing performance metrics to figure out who should be first in line for promotion – or to be let go.

“A human might undertake some formal task, such as handling a document, but the human agency in the decision is minimal,” the authors write.

“Sometimes the human decision making is largely illusory, for instance where a human is ultimately involved only in some formal way in the decision what to do with the output from the machine.” ...



Mr. Kim: You are fired!

- The Fifth Element (1997)


---------------------------------------------

Amazon Delivery Drivers Have to Consent to AI Surveillance In Their Vans Or Lose Their Jobs
https://www.theverge.com/platform/amp/2021/3/24/22347945/amazon-delivery-drivers-ai-surveillance-cameras-vans-consent-form

Amazon is well-known for its technological Taylorism: using digital sensors to monitor and control the activity of its workers in the name of efficiency. But after installing machine learning-powered surveillance cameras in its delivery vans earlier this year, the company is now telling employees: agree to be surveilled by AI or lose your job.

As first reported by Vice, Amazon delivery drivers in the US now have to sign “biometric consent” forms to continue working for the retailing giant.

https://assets.documentcloud.org/documents/20521478/amazonprivacypolicyforvehiclecameratechnology.pdf

This level of micro-management — and the potential for the AI systems to get it wrong — seems to have angered some drivers.
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #687 on: March 29, 2021, 03:18:41 PM »
Say bye-bye to a couple of million jobs ...

Meet Boston Dynamics’ Next Commercial Robot, Stretch
https://arstechnica.com/gadgets/2021/03/meet-boston-dynamics-next-commercial-robot-stretch/

It can unload trucks, build pallets, and will fit anywhere a pallet fits.


https://www.bostondynamics.com/stretch

Today, Boston Dynamics' quest for commercialization continues with the announcement of a second commercial robot, "Stretch," a box-moving bot designed to meet the demands of warehouses and distribution centers. The robot is designed to "go where the work is" in a warehouse, unloading trucks, de-palleting shipments, and eventually building orders. For now, we're seeing a prototype, but Boston Dynamics hopes companies will start buying Stretch when it hits commercial deployment in 2022.

... Stretch is the first Boston Dynamics robot that's "fully purpose-built" for the warehouse, and you can see that a lot of the nimble bird design from Handle has been thrown out in favor of a big, hulking industrial robot. We'll start with the base: the robot is simply mounted on a big box now, so it's stable by default and doesn't have to actively balance anymore. The robot weighs 2,650 lbs (1,200 kg) now, so there's no need for a big, swinging counterweight when lifting—it's not going to tip over. The arm can spin around on top of the base, so it can unload boxes from a truck to a conveyor belt without needing to move the base and risk bumping into something. The result is that Stretch can unload a truck about five times faster than Handle. Stretch can move up to 800 boxes an hour.

Most warehouses are designed around the 48x40-inch dimensions of a pallet, so the base of Stretch just happens to have a 48x40-inch footprint, and it can fit anywhere a pallet fits. Wheels in each corner of the box, all with independent steering, let Stretch move in any direction, including side to side or rotating in place. The giant base also means there is a lot of room for the battery, enough to power Stretch through an eight-hour work shift, or up to 16 hours with "the extended range option."
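The omnidirectional motion is standard independent-steering ("swerve") kinematics: one commanded body velocity maps to a speed and steering angle for each corner wheel. A minimal sketch, using the quoted 48x40-inch footprint (~1.22 x 1.02 m) for wheel placement — the rest is generic rigid-body math, not Boston Dynamics code:

```python
# Sketch of independent-steering ("swerve") kinematics for a four-wheeled
# base like Stretch's: each corner wheel gets its own speed and steering
# angle from one commanded body twist.

import math

HALF_LENGTH = 1.22 / 2  # m, along the 48-inch side
HALF_WIDTH = 1.02 / 2   # m, along the 40-inch side

WHEELS = {  # wheel positions relative to the base center
    "front_left":  ( HALF_LENGTH,  HALF_WIDTH),
    "front_right": ( HALF_LENGTH, -HALF_WIDTH),
    "rear_left":   (-HALF_LENGTH,  HALF_WIDTH),
    "rear_right":  (-HALF_LENGTH, -HALF_WIDTH),
}

def wheel_commands(vx: float, vy: float, omega: float):
    """Map a body twist (m/s, m/s, rad/s) to per-wheel (speed, angle)."""
    commands = {}
    for name, (x, y) in WHEELS.items():
        wvx = vx - omega * y  # rotation adds a tangential component
        wvy = vy + omega * x
        commands[name] = (math.hypot(wvx, wvy), math.atan2(wvy, wvx))
    return commands

# Pure rotation in place: every wheel steers tangent to the base center.
for wheel, (speed, angle) in wheel_commands(0.0, 0.0, 0.5).items():
    print(f"{wheel}: {speed:.2f} m/s at {math.degrees(angle):.0f} deg")
```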



... Being mobile means Stretch can do the work of multiple stationary arms as the needs of the warehouse dictate, without the need to redesign or install anything. Blankespoor imagines a typical day in the warehouse for Stretch: "Stretch might spend the morning on the inbound side of the warehouse, unloading boxes from trucks. It might spend the afternoon in the aisles of the warehouse, building up pallets—those will go off to retailers or e-commerce centers. And it might spend the evening loading boxes back into trucks."

Stationary arms can be as beefy as they need to be, but being mobile means Stretch needs to watch its weight. Boston Dynamics' custom arm design is one-fourth the weight of an industrial arm, while still being able to out-lift its predecessor, with a 50-pound max payload (23 kg) versus the 33-pound (15 kg) capacity of Handle. The arm needed to be designed so it could reach across pallets and boxes all the way at the top of the truck, where there won't be much clearance. The robot actually grabs the top row of boxes from the side, since it won't be able to fit between the box and the roof.

The final major component of Stretch is the perception mast, a big tower that sits on the same rotating base as the arm and houses most of the robot's sensors, so it's never in the way of the arm. The mast houses both 2D and depth sensors, giving Stretch a high-up view of its surroundings. For vision, the robot uses Boston Dynamics' "Pick" software, a collection of machine-learning-powered algorithms for detecting and moving boxes, which arrived at the company via an acquisition of Kinema Systems.



The base of Stretch actually has a modular interface where you can attach various accessories. For truck unloading, you can attach a conveyor belt to Stretch, so the robot can bring the conveyor belt with it as it moves deeper into the truck. This means it only ever has to pick up a box, spin around, and drop it, making for faster unloading. There's also a pallet cart attachment, so the robot can haul a pallet around as it builds orders. Additional sensors can be attached to the base, too, either for situational awareness like extra cameras or lidar, or a barcode reader for input.

... "The Stretch product will look a lot like this, but it's really been totally redesigned from the ground up. Every component's been reworked for manufacturability, for cost reduction, reliability, and higher performance. ... We'll start rolling out applications that the product can do, incrementally. The first one we'll do is truck unload, and then a little bit later we'll start doing pallet building." Blankespoor says the final product will get a few more sensors, like a lidar on the face of the robot.

----------------------------------------------

'Treating Us Like Robots': Amazon Workers Seek Union
https://techxplore.com/news/2021-03-robots-amazon-workers-union.html

Linda Burns was excited at first to land a job at the Amazon warehouse outside Birmingham, Alabama.

A cog in a fast-moving assembly line, she picked up customers' orders and sent them down the line to the packers. Now she is a staunch supporter of getting a union at the Bessemer facility. She said employees face relentless quotas and deserve more respect.

"They are treating us like robots rather than humans," said Burns, 51, who said she is out of leave after developing tendonitis.

Amazon is fighting the union. The company argues the warehouse created thousands of jobs with an average pay of $15.30 per hour—more than twice the minimum wage in Alabama. Workers also get benefits including health care, vision and dental insurance without paying union dues, the company said.

... Burns and Harvey Wilson, a 41-year-old who works as a "picker" at Amazon, both said they're supporting the union because of poor working conditions at the warehouse. Employees face relentless quotas, and the mammoth size of the facility makes it nearly impossible to get to the bathroom and back to one's station during a break, they said.
« Last Edit: March 29, 2021, 03:49:44 PM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #688 on: March 31, 2021, 10:58:35 PM »
Navy’s Plans Call For New Autonomous Drones To Shoot, Spy, Jam
https://breakingdefense.com/2021/03/navys-emerging-plans-call-for-new-drones-to-shoot-spy-jam/

WASHINGTON: The Navy now aims to have unmanned aircraft make up 60% of its carrier air wing as it replaces its F-18s.

The Next Generation Air Dominance (NGAD) program, a joint effort between the Navy and Air Force, is still in its early stages, but the admiral in charge of the Navy's air wing said today he would like to see a 60/40 mix of unmanned to manned aircraft to replace the F/A-18E/F Super Hornet and electronic attack EA-18G Growlers.

“In the next probably two to three years, we’ll have a better idea whether replacement for the F-18 E and F will be manned or unmanned,” Rear Adm. Gregory Harris, director of the Navy’s Air Warfare Division, said at a Navy League event this morning. The service will initially try for a 40/60 unmanned to manned aircraft mix, leading to the 60/40 ratio as time goes on.

... “Having an unmanned platform out there as an adjunct missile carrier, I see as not a step too far, too soon,” Harris said. “An unmanned system with missiles I can clearly — in my mind — envision a way to say ‘fly a defensive combat spread, shoot on this target,’ and I will squeeze the trigger or I will enable that unmanned platform to shoot the designated target. That doesn’t stretch beyond my realm of imagination.”

-----------------------------------------



------------------------------------------

Status Report: Navy Unmanned Aerial, Subsurface Platforms
https://news.usni.org/2021/03/26/status-report-navy-unmanned-aerial-subsurface-platforms

The Navy wants to emphasize the development of enablers for unmanned systems – the common interfaces and control stations, the networks, the secure data formats, the autonomy behaviors – as it pursues a hybrid manned/unmanned fleet for the future.

------------------------------------------

Report on Navy Large Unmanned Surface and Undersea Vehicles
https://news.usni.org/2021/03/30/report-on-navy-large-unmanned-surface-and-undersea-vehicles-3

March 25, 2020 Congressional Research Service Report, Navy Large Unmanned Surface and Undersea Vehicles: Background and Issues for Congress.

... The Navy envisions LUSVs as being 200 feet to 300 feet in length and having full load displacements of 1,000 tons to 2,000 tons. The Navy wants LUSVs to be low-cost, high-endurance, reconfigurable ships based on commercial ship designs, with ample capacity for carrying various modular payloads—particularly anti-surface warfare (ASuW) and strike payloads, meaning principally anti-ship and land-attack missiles.



-----------------------------------------

Robot Security Dogs Start Guarding Tyndall Air Force Base
https://www.upi.com/Defense-News/2021/03/29/robot-dogs-tyndall/9951617032912/

March 29 (UPI) -- Robot dogs, or quad-legged unmanned ground vehicles, have begun guarding Tyndall Air Force Base, Fla., the U.S. Air Force announced on Monday.

The semi-autonomous machines, which walk on four legs and resemble dogs' bodies, were integrated into the 325th Security Forces Squadron at the base on March 22.

... "As a mobile sensor platform, the Q-UGVs will significantly increase situational awareness for defenders," ... "They can patrol the remote areas of a base while defenders can continue to patrol and monitor other critical areas of an installation,"  Mark Shackley, security forces program manager at Tyndall Air Force Base's program management office, said in a press release.

“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #689 on: March 31, 2021, 11:49:08 PM »
Mission Impossible ...

Microsoft Wins $22 Billion Deal Making AR Headsets for US Army
https://www.defenseone.com/technology/2021/03/us-army-ready-roll-out-futuristic-goggles-larger-force/173026/
https://techxplore.com/news/2021-03-microsoft-billion-headsets-army.html

Microsoft won a nearly $22 billion contract to supply U.S. Army combat troops with its augmented reality headsets.

The technology is based on Microsoft's HoloLens headsets, which were originally intended for the video game and entertainment industries.

Pentagon officials have described the futuristic technology—which the Army calls its Integrated Visual Augmentation System—as a way of boosting soldiers' awareness of their surroundings and their ability to spot targets and dangers.

The Army has been experimenting with the IVAS, a heads-up digital display based loosely on Microsoft’s HoloLens headset for gamers, for two years now, with some units sent to members of the special operations community. The headset connects to the cloud to offer what is often referred to as an “augmented reality” view. The goal is to allow soldiers to access any piece of data that might be useful to them in training, operations, or combat. That includes the view from a small aerial drone or a feed from cameras mounted on Army vehicles, visual targeting aids to differentiate friend from foe, and facial recognition, possibly even at night.

A group of Microsoft workers in 2019 petitioned the company to cancel its initial Army deal, arguing it would turn real-world battlefields into a video game.



--------------------------------------------

New Technology Recognizes Faces In the Dark, Far Away
https://www.army.mil/article/232503/new_technology_recognizes_faces_in_the_dark_far_away

https://www.defenseone.com/technology/2019/07/army-soldier-goggles-will-feature-facial-recognition-tech-very-soon/158505/

----------------------------------------------------

Army Makes Gargantuan Bet On New Augmented Reality Goggles For Its Soldiers
https://www.thedrive.com/the-war-zone/40023/army-makes-gargantuan-bet-on-new-augmented-reality-goggles-for-its-soldiers



IVAS has night vision and thermal video cameras, which allow individuals to see at night or through smoke, dust, and other obscurants, much like more traditional night vision or thermal optics. It will be able to fuse those feeds together to maximize the fidelity and other benefits that these different kinds of imagery offer in different environments.
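At its crudest, fusing two aligned sensor frames is just a pixel-wise weighted blend; a toy numpy sketch of the idea (the real IVAS fusion pipeline is certainly far more elaborate, and the weighting here is an invented stand-in):

```python
# Toy illustration of fusing two aligned sensor frames (e.g., low-light and
# thermal) into one display image.
import numpy as np

def fuse(lowlight: np.ndarray, thermal: np.ndarray, alpha: float = 0.6):
    """Pixel-wise weighted blend of two same-shape 8-bit frames."""
    out = alpha * lowlight.astype(np.float32) + (1 - alpha) * thermal.astype(np.float32)
    return out.clip(0, 255).astype(np.uint8)

frame = fuse(np.full((480, 640), 40, np.uint8), np.full((480, 640), 200, np.uint8))
print(frame[0, 0])  # 0.6*40 + 0.4*200 = 104
```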

The system might eventually be able to automatically spot and mark objects of interest for the user and IVAS reportedly already has some level of facial recognition capability, which could assist in positively identifying specific individuals during raids. In the future, artificial intelligence-driven systems could further help speed up the process of identifying potential threats that might not be immediately obvious, especially in an actual firefight where things can easily be quite chaotic. Those kinds of capabilities are already being integrated into larger fire control systems on vehicles, as seen in the video below.

https://www.thedrive.com/the-war-zone/36205/reaper-drone-flies-with-podded-ai-that-sifts-through-huge-sums-of-data-to-pick-out-targets

IVAS can also pipe in a video feed from a suitable optic mounted on a rifle, carbine, or a machine gun, giving personnel a way to peer around corners or into other hard-to-reach areas without first having to expose themselves to any significant degree to possible enemy fire. [... It's like having eyes in the back of your head.]

« Last Edit: April 02, 2021, 01:47:48 AM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #690 on: April 01, 2021, 02:57:52 PM »
Researchers Demonstrate First Human Use of High-Bandwidth Wireless Brain-Computer Interface
https://medicalxpress.com/news/2021-04-human-high-bandwidth-wireless-brain-computer-interface.html

For years, investigational BCIs used in clinical trials have required cables to connect the sensing array in the brain to computers that decode the signals and use them to drive external devices.

Now, for the first time, BrainGate clinical trial participants with tetraplegia have demonstrated use of an intracortical wireless BCI with an external wireless transmitter. The system is capable of transmitting brain signals at single-neuron resolution and in full broadband fidelity without physically tethering the user to a decoding system. The traditional cables are replaced by a small transmitter about 2 inches in its largest dimension and weighing a little over 1.5 ounces. The unit sits on top of a user's head and connects to an electrode array within the brain's motor cortex using the same port used by wired systems.

For a study published in IEEE Transactions on Biomedical Engineering, two clinical trial participants with paralysis used the BrainGate system with a wireless transmitter to point, click and type on a standard tablet computer. The study showed that the wireless system transmitted signals with virtually the same fidelity as wired systems, and participants achieved similar point-and-click accuracy and typing speeds.

The researchers say the study represents an early but important step toward a major objective in BCI research: a fully implantable intracortical system that aids in restoring independence for people who have lost the ability to move. While wireless devices with lower bandwidth have been reported previously, this is the first device to transmit the full spectrum of signals recorded by an intracortical sensor. That high-bandwidth wireless signal enables clinical research and basic human neuroscience that is much more difficult to perform with wired BCIs.

... Dubbed the Brown Wireless Device (BWD), it was designed to transmit high-fidelity signals while drawing minimal power. In the current study, two devices used together recorded neural signals at 48 megabits per second from 200 electrodes with a battery life of over 36 hours. ...
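The quoted figures are easy to sanity-check: 48 Mbps across 200 electrodes is 240 kbit/s per electrode, and assuming 12-bit samples (an assumption — the excerpt doesn't state the ADC resolution), that works out to about 20,000 samples per second per channel, squarely in the range typical of broadband intracortical recording:

```python
# Back-of-the-envelope check on the quoted figures; the sample width is an
# assumption, not stated in the excerpt.
total_bps = 48e6      # 48 megabits per second, both devices combined
electrodes = 200
bits_per_sample = 12  # assumed ADC resolution

per_electrode_bps = total_bps / electrodes             # 240,000 bit/s
samples_per_sec = per_electrode_bps / bits_per_sample  # 20,000 samples/s

print(f"{per_electrode_bps:,.0f} bit/s per electrode")
print(f"~{samples_per_sec:,.0f} samples/s per electrode at {bits_per_sample}-bit")
```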



John D Simeral et al. Home Use of a Percutaneous Wireless Intracortical Brain-Computer Interface by Individuals With Tetraplegia, IEEE Transactions on Biomedical Engineering (2021)
https://ieeexplore.ieee.org/document/9390339
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Sigmetnow

  • Multi-year ice
  • Posts: 19104
    • View Profile
  • Liked: 853
  • Likes Given: 324
Re: Robots and AI: Our Immortality or Extinction
« Reply #691 on: April 02, 2021, 01:49:10 PM »
When captcha decides it doesn’t like you. ;D

➡️ https://twitter.com/jalmzy/status/1376606858467770372
People who say it cannot be done should not interrupt those who are doing it.

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #692 on: April 02, 2021, 05:39:38 PM »
^ ... I've had that problem

Bipedal Robots Are Learning To Move With Arms as Well as Legs
https://spectrum.ieee.org/automaton/robotics/humanoids/bipedal-robot-learning-to-move-arms-legs

--------------------------------------



... Now, can it find the keys and open the door while carrying a pizza?
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #693 on: April 02, 2021, 09:37:14 PM »
After years of trying, 60 Minutes cameras finally get a peek inside the workshop at Boston Dynamics, where robots move in ways once only thought possible in movies. Anderson Cooper reports.



----------------------------------------------
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

morganism

  • Frazil ice
  • Posts: 294
    • View Profile
  • Liked: 61
  • Likes Given: 1
Re: Robots and AI: Our Immortality or Extinction
« Reply #694 on: April 05, 2021, 01:12:48 AM »
Complexity Rising:
From Human Beings to Human Civilization, a Complexity Profile

https://necsi.edu/s/EOLSSComplexityRising.pdf

" This article analyzes the human social environment using the "complexity profile," a mathematical tool for characterizing the collective behavior of a system. The analysis is used to justify the qualitative observation that complexity of existence has increased and is increasing. The increase in complexity is directly related to sweeping changes in the structure and dynamics of human civilization—the increasing interdependence of the global economic and social system and the instabilities of dictatorships, communism and corporate hierarchies."

https://necsi.edu/complexity-rising-from-human-beings-to-human-civilization-a-complexity-profile
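For anyone who wants the formal object behind the prose: the complexity profile C(k) measures how much information is needed to describe behaviors shared by at least k components. A minimal two-component sketch of the bookkeeping (the paper develops the general case):

```latex
% Two-component illustration of a complexity profile C(k):
% scale k = behaviors shared by at least k components.
\begin{align*}
  C(1) &= H(a,b) && \text{finest scale: all information in the pair} \\
  C(2) &= I(a;b) && \text{coarsest scale: only the shared, collective behavior} \\
  \sum_{k} C(k) &= H(a,b) + I(a;b) = H(a) + H(b) && \text{the components' total information}
\end{align*}
```

The trade-off the article describes is visible even in this tiny case: with the components' individual entropies fixed, pushing more information into the shared term I(a;b) (coordination, larger-scale behavior) necessarily reduces the fine-scale variety H(a,b).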

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #695 on: April 06, 2021, 03:37:55 AM »
The UK Wants To Add Combat Drones To Its Aircraft Carriers
https://www.thedrive.com/the-war-zone/39922/now-the-uk-wants-to-add-combat-drones-to-its-aircraft-carriers-but-is-it-really-feasible



Project Vixen is studying how a large high-performance combat drone could undertake missions from the Royal Navy’s flattops.

A naval combat drone could be headed to the decks of the U.K.’s two aircraft carriers in the future. Under the recently revealed Project Vixen, the U.K. Royal Navy is studying the potential for adding a large unmanned aerial vehicle that could undertake missions including aerial refueling — like the U.S. Navy’s MQ-25 Stingray — as well as strike, potentially in a loyal wingman-type role, networked together with its F-35B Lightning stealth fighters.

https://www.naval-technology.com/news/royal-navy-project-vixen-exploring-potential-carrier-uas/

The aspiration to have a large-size unmanned aircraft operate from the decks of the two Queen Elizabeth class ships doesn’t come altogether out of the blue. Last month, a request for information (RFI) for “aircraft launch and recovery equipment” appeared on the U.K. government’s public sector contracts website.

https://bidstats.uk/tenders/2021/W08/745668808

.... the upper weight limit could point to the Royal Navy looking at plans for operating large-size drones with considerable capacity for fuel, ordnance, or sensor payloads.



... Project Vixen also parallels the U.K. Royal Air Force’s Team Mosquito project, part of the Lightweight Affordable Novel Combat Aircraft (LANCA) initiative. Naval Technology reports that the Royal Navy and RAF are working together to study potential platforms for Mosquito and Vixen, suggesting that a broadly common drone could eventually be fielded for both land-based and carrier applications.

... Plans call for flight testing of a full-scale Project Mosquito vehicle to start by the end of 2023.



--------------------------------------------------

Stealthy Valkyrie Drone Uses Weapons Bay For First Time To Launch Smaller Drone
https://www.thedrive.com/the-war-zone/40068/xq-58a-valkyrie-uses-weapons-bay-for-first-time-to-launch-smaller-drone

--------------------------------------------------

AI-Controlled F-16s Are Now Working as a Team In DARPA's Virtual Dogfights
https://www.darpa.mil/news-events/2021-03-18a

The goal of bringing artificial intelligence into the air-to-air dogfighting arena has moved a step closer with a series of simulated tests that pitted AI-controlled F-16 fighter jets working as a team against an opponent. The experiments were part of Phase 1 of the Defense Advanced Research Projects Agency’s (DARPA) Air Combat Evolution (ACE) program, focused on exploring how AI and machine learning may help automate various aspects of air-to-air combat.



DARPA announced recently that it’s halfway through Phase 1 of ACE and that simulated AI dogfights under the so-called Scrimmage 1 took place at Johns Hopkins Applied Physics Laboratory (APL) last month.

Using a simulation environment designed by APL, Scrimmage 1 involved a demonstration of 2-v-1 simulated engagements with two blue force (friendly) F-16s working collaboratively to defeat an undisclosed red air (enemy) aircraft.

Compared to the AlphaDogfight Trials, which were gun-only, Scrimmage 1 introduced new simulated weapons, in the form of a “missile for longer-range targets.”

“Adding more weapon options and multiple aircraft introduces a lot of the dynamics that we were unable to push and explore in the AlphaDogfight Trials,” Javorsek added. “These new engagements represent an important step in building trust in the algorithms since they allow us to assess how the AI agents handle clear avenues of fire restrictions set up to prevent fratricide. This is exceedingly important when operating with offensive weapons in a dynamic and confusing environment that includes a manned fighter and also affords the opportunity to increase the complexity and teaming associated with maneuvering two aircraft in relation to an adversary.”
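The "avenues of fire restrictions" in that quote reduce, at their simplest, to a geometry test: withhold a shot if a friendly aircraft sits inside a cone around the shooter-to-target line. A toy 2-D sketch with invented numbers — not DARPA's actual deconfliction logic:

```python
# Geometry-only sketch of an "avenue of fire" check: refuse a shot if a
# friendly aircraft sits inside a cone around the shooter-to-target line.
import math

def shot_is_safe(shooter, target, friendly, cone_deg=10.0):
    """True if the friendly is outside the no-fire cone toward the target."""
    def unit(v):
        n = math.hypot(*v)
        return (v[0] / n, v[1] / n)
    to_target = unit((target[0] - shooter[0], target[1] - shooter[1]))
    to_friend = unit((friendly[0] - shooter[0], friendly[1] - shooter[1]))
    dot = to_target[0] * to_friend[0] + to_target[1] * to_friend[1]
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return angle > cone_deg

print(shot_is_safe((0, 0), (10, 0), (5, 0.5)))  # False: friendly near the line
print(shot_is_safe((0, 0), (10, 0), (5, 5)))    # True: well clear of the cone
```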

So far, ACE has demonstrated advanced virtual AI dogfights involving both within-visual-range (WVR) and beyond-visual-range (BVR) multi-aircraft scenarios with simulated weapons, plus live flying using an instrumented jet to measure pilot physiology and trust in AI.

The process of “capturing trust data” has seen test pilots fly in an L-29 Delfin jet trainer at the University of Iowa Technology Institute’s Operator Performance Laboratory. This aircraft has been adapted with cockpit sensors to measure the pilot’s physiological responses, giving an insight into whether or not the pilot trusts the AI. In these missions, the L-29 has been flown by a safety pilot in the front seat, who makes flight control inputs based on AI decisions. However, for the pilot whose responses are being evaluated, it is as if the AI is flying the jet.

ACE Phase 2, planned for later this year, will add dogfights involving live subscale aircraft, both propeller-driven and jet-powered, to ensure that the AI algorithms can be moved out of the virtual environment and into real-world flying. Meanwhile, Calspan has also begun work on modifying a full-scale L-39 Albatros jet trainer to host an onboard AI “pilot” for Phase 3, a set of live-fly dogfights scheduled for late 2023 and 2024.

Once this concept is proven, DARPA plans to insert the AI technology developed in loyal wingman-type drones, like Skyborg, working collaboratively alongside manned fighters. In this way, the drones would be able to conduct dogfights with some autonomy, while the human pilot in the manned aircraft focuses primarily on battle management.

Ultimately, this AI could be crucial in realizing the dream of a fully-autonomous unmanned combat air vehicle (UCAV) capable of air-to-air combat, as well as air-to-ground strikes. While a UCAV would be able to perform many of the same functions as manned aircraft, its AI “brain” would be able to make key decisions faster and more accurately, taking into account much more information in a shorter period of time, without any concern about being distracted or confused by the general chaos of combat. The same algorithms could also be adapted to enable drones to be networked into swarms that work cooperatively to maximize their combat effectiveness, with decisions being made far quicker than a human-piloted formation.

-------------------------------------------

Multiple Destroyers Were Swarmed By Mysterious 'Drones' Off California Over Numerous Nights
https://www.thedrive.com/the-war-zone/39913/multiple-destroyers-were-swarmed-by-mysterious-drones-off-california-over-numerous-nights

The disturbing series of events during the summer of 2019 resulted in an investigation that made its way to the highest echelons of the Navy.



... they ain't ours; they're probably not China's or Russia's; that leaves something from outside the neighborhood

-------------------------------------

Navy's Top Officer Says ‘Drones’ That Swarmed Destroyers Remain Unidentified
https://www.thedrive.com/the-war-zone/40071/navys-top-officer-says-mysterious-drones-that-swarmed-destroyers-remain-unidentified

At a roundtable with reporters today, Chief of Naval Operations Admiral Michael Gilday, the U.S. Navy's top officer, was asked about a series of bizarre incidents that took place in July 2019 and involved what only have been described as 'drones' swarming American destroyers off the coast of Southern California.

Asked by Jeff Schogol of Task & Purpose if the Navy had positively identified any of the aircraft involved, Gilday responded by saying:

“No, we have not. I am aware of those sightings and as it’s been reported there have been other sightings by aviators in the air and by other ships not only of the United States, but other nations – and of course other elements within the U.S. joint force.”

... A Senate-requested report on Unidentified Aerial Phenomena is expected later this year.

https://www.thedrive.com/the-war-zone/34288/senate-intel-committee-pushes-for-unprecedented-public-report-on-ufos
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #696 on: April 08, 2021, 03:25:25 AM »
Am I Arguing With a Machine? AI Debaters Highlight Need for Transparency
https://www.nature.com/articles/d41586-021-00867-6

With artificial intelligence starting to take part in debates with humans, more oversight is needed to avoid manipulation and harm.

As AI systems become better at framing persuasive arguments, should it always be made clear whether one is engaging in discourse with a human or a machine? There’s a compelling case that people should be told when their medical diagnosis comes from AI and not a human doctor. But should the same apply if, for example, advertising or political speech is AI-generated?

Unlike a machine-learning approach to debate, human discourse is guided by implicit assumptions that a speaker makes about how their audience reasons and interprets, as well as what is likely to persuade them — what psychologists call a theory of mind.

But researchers are starting to incorporate some elements of a theory of mind into their AI models (L. Cominelli et al. Front. Robot. AI https://doi.org/ghmq5q; 2018) — with the implication that the algorithms could become more explicitly manipulative (A. F. T. Winfield Front. Robot. AI https://doi.org/ggvhvt; 2018). Given such capabilities, it’s possible that a computer might one day create persuasive language with stronger oratorical ability and recourse to emotive appeals — both of which are known to be more effective than facts and logic in gaining attention and winning converts, especially for false claims (C. Martel et al. Cogn. Res. https://doi.org/ghhwn7; 2020; S. Vosoughi et al. Science 359, 1146–1151; 2018).

As former US president Donald Trump repeatedly demonstrated, effective orators need not be logical, coherent, or indeed truthful to succeed in persuading people to follow them. Although machines might not yet be able to replicate this, it would be wise to propose regulatory oversight that anticipates harm, rather than waiting for problems to arise.

Government is already undermined when politicians resort to compelling but dishonest arguments. It could be worse still if victory at the polls is influenced by who has the best algorithm.

--------------------------------------------

Are Digital Humans the Next Step in Human-Computer Interaction?
https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/are-digital-humans-the-next-step-in-humancomputer-interaction



https://digitalhumans.com/



------------------------------------------------------

Lord Johnson-Johnson: ... Any old iron. Any old iron. Any old iron. Any old iron. ... Expel your Mecha. Purge yourselves of artificiality. Come along, now. Let some Mecha loose to run. Any old unlicensed iron down there?

- A.I. Artificial Intelligence (2001)
« Last Edit: April 08, 2021, 03:43:56 AM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #697 on: April 08, 2021, 03:28:30 AM »
The French Army Is Testing Boston Dynamics’ Spot the Robot In Combat Scenarios
https://www.theverge.com/2021/4/7/22371590/boston-dynamics-spot-robot-military-exercises-french-army

Boston Dynamics’ Spot quadruped robot seems to be hitting the battlefield with a group of French Army trainees in a series of drills and simulations that explore how these currently unarmed robots could work side by side with humans.

The soldiers-in-training used Spot for various reconnaissance tasks during a two-day trial of the technology.

As reported by news outlet Ouest-France, Spot and some robot friends are supplying intelligence and support for ground troops. The other robots included the French-made Nexter ULTRO pack robot and Shark Robotics’ Barakuda, a wheeled drone that carries a heavy blast shield to protect soldiers.



http://lignesdedefense.blogs.ouest-france.fr/archive/2021/03/31/quand-emia-part-au-combat-avec-des-robots-terrestres-22012.html

The tests, which took place in late March, were part of a project by the École Militaire Interarmes at the French army camp of Saint-Cyr Coëtquidan.

... Sources quoted in the article say that the robots slowed down operations but helped keep troops safe. “During the urban combat phase where we weren’t using robots, I died. But I didn’t die when we had the robot do a recce first,” one soldier is quoted as saying. They added that one problem was Spot’s battery life: it apparently ran out of juice during an exercise and had to be carried out.



https://mobile.twitter.com/SaintCyrCoet/status/1379457690020294665

Boston Dynamics’ vice president of business development Michael Perry told The Verge that the robot had been supplied by a European distributor, Shark Robotics, and that the US firm had not been notified in advance about its use. “We’re learning about it as you are,” says Perry. “We’re not clear on the exact scope of this engagement.”

Spot’s appearance on simulated battlefields raises questions about where the robot will be deployed in future. Boston Dynamics has a long history of developing robots for the US Army, but as it has moved into commercial markets it has distanced itself from military connections. Spot is still being tested by a number of US police forces, including the NYPD, but Boston Dynamics has always stressed that its machines will never be armed. “We unequivocally do not want any customer using the robot to harm people,” says Perry.

The test in France seems to be the first time Spot has been seen in a true military setting. If robots prove reliable as roaming CCTV, it’s only a matter of time before those capabilities are introduced to active combat zones.

--------------------------------------------

meanwhile, in China ...



---------------------------------------------
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #698 on: April 08, 2021, 03:33:11 AM »
New AI Technique Transforms Any Image Into the Style of Famous Artists
https://thenextweb.com/neural/2021/04/06/ai-text-to-image-generator-transforms-any-picture-into-style-of-famous-artists-glenn-marshall-openai-dail/

... Glenn Marshall named the technique Chimera, after the mythical beast formed from various animal parts, which has become a byword for something that exists only in the imagination and isn’t possible in reality.



The system morphs an input image towards the suggestion of a text prompt, such as “Salvador Dalí Art.” Over repeated mutations and iterations of each frame, the AI gradually finds features and shapes that match the text description until it produces a final composition.



Each piece was generated with a modified version of the Aleph-Image notebook, which is itself powered by OpenAI’s DALL-E and CLIP models.
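
The loop described above is, at its core, gradient-guided optimisation of an image against a CLIP text embedding. Below is a minimal sketch of that idea in Python, assuming the open-source clip package (github.com/openai/CLIP) and PyTorch; the prompt, resolution, learning rate and step count are illustrative assumptions, not Marshall's actual Aleph-Image settings (which also route through DALL-E's image decoder rather than raw pixels).

Code: [Select]
import torch
import torch.nn.functional as F
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

# Encode the target description once; the image is nudged towards it.
tokens = clip.tokenize(["Salvador Dali Art"]).to(device)
with torch.no_grad():
    text_feat = F.normalize(model.encode_text(tokens), dim=-1)

# Treat the pixels themselves as the parameters being optimised.
# (Starting from an input photo instead of noise "morphs" that photo.)
image = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
opt = torch.optim.Adam([image], lr=0.05)

# CLIP's published input normalisation constants.
mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

for step in range(300):
    opt.zero_grad()
    img_feat = F.normalize(model.encode_image((image.clamp(0, 1) - mean) / std), dim=-1)
    loss = -(img_feat * text_feat).sum()  # maximise cosine similarity with the prompt
    loss.backward()
    opt.step()

Each optimisation step mutates the pixels slightly towards whatever CLIP judges to look more like the prompt, which is exactly the "repeated mutations and iterations" the article describes.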

I think the surrealists would probably bow down to the wonders of AI as Gods, but the Renaissance guys would probably send the witch-hunters after it for desecrating their art with evil machines.
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • First-year ice
  • Posts: 5214
    • View Profile
  • Liked: 2637
  • Likes Given: 403
Re: Robots and AI: Our Immortality or Extinction
« Reply #699 on: April 08, 2021, 03:50:26 AM »
Report: Clearview AI's Facial Recognition Has Been Used by Over 1,800 Public Agencies
https://gizmodo.com/report-clearview-ais-facial-recognition-has-been-used-1846628884/amp



A new series of reports from BuzzFeed News shows the wide net cast by shadowy surveillance firm Clearview AI. Individuals at 1,803 public agencies—many of which are police departments—have used its facial recognition software at some point over recent years, according to data reviewed by the news outlet.

... One of the more interesting revelations from BuzzFeed’s coverage is that the New York Police Department appears to have lied about whether it ever worked with Clearview. In 2020, the NYPD stated that it had “no institutional relationship” with the surveillance firm. However, according to the recent investigation, Clearview was actually “an acknowledged vendor to the department from as early as 2018.” That undisclosed relationship involved a trial of Clearview’s services and reportedly included contracts, emails, and in-person meetings between police and the company.

https://www.buzzfeednews.com/article/ryanmac/clearview-ai-local-police-facial-recognition

Clearview AI, founded by Hoan Ton-That, markets itself as a searchable facial-recognition database for law enforcement agencies. The New York Times has previously reported on Ton-That’s close association with notorious figures from the far right, and the company is backed by early Facebook investor, and Trump confidant, Peter Thiel. The company’s unique selling point has been to download every image posted to social media without permission to build its database — something the social media companies in question have tried to stop. The company is currently under investigation in both the UK and Australia for its data-collection practices.

https://www.nytimes.com/2021/03/18/technology/clearview-facial-recognition-ai.html

https://www.engadget.com/clearview-ai-investigation-australia-uk-142825168.html
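
For a sense of the underlying mechanics: a scraped-photo face search of this kind reduces to building an index of face embeddings and running nearest-neighbour queries against it. A minimal sketch, assuming the open-source face_recognition library; the example URL, filenames and 0.6 distance threshold are illustrative assumptions, and Clearview's actual pipeline (billions of images, proprietary models) is of course far larger.

Code: [Select]
import face_recognition  # open-source library by Adam Geitgey
import numpy as np

# Index: one 128-d encoding per face found in each scraped image,
# paired with the page it came from.
index = []  # list of (source_url, encoding)
for url, path in [("https://example.com/photo1", "photo1.jpg")]:
    image = face_recognition.load_image_file(path)
    for enc in face_recognition.face_encodings(image):
        index.append((url, enc))

def search(probe_path, tolerance=0.6):
    """Return source URLs whose indexed faces lie within `tolerance`
    (Euclidean distance) of the first face found in the probe image."""
    probe = face_recognition.load_image_file(probe_path)
    encodings = face_recognition.face_encodings(probe)
    if not encodings:
        return []
    dists = face_recognition.face_distance(
        np.array([enc for _, enc in index]), encodings[0])
    return [index[i][0] for i in np.where(dists <= tolerance)[0]]

The privacy implications follow directly from the design: once a face is in the index, any new photo of that person becomes a query key back to every page it was scraped from.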

The report — which you should read in its entirety — outlines how Clearview has offered generous free trials to individual employees at public bodies. This approach is meant to encourage these employees to incorporate the system into their working day and advocate for their agencies to sign up. But a number of open questions remain about the civil liberties, privacy, legal and accuracy implications of how Clearview operates. This has not deterred agencies like ICE from signing up to use the system, although others, like the LAPD, have already banned use of the platform.

https://www.buzzfeednews.com/article/carolinehaskins1/nypd-has-misled-public-about-clearview-ai-use

-------------------------------------------------

Time to Regulate AI That Interprets Human Emotions
https://www.nature.com/articles/d41586-021-00868-5

-----------------------------------------

Discover the Stupidity of AI Emotion Recognition With This Little Browser Game
https://www.theverge.com/2021/4/6/22369698/ai-emotion-recognition-unscientific-emojify-web-browser-game

-----------------------------------------------
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― Leonardo da Vinci

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late