
Author Topic: Robots and AI: Our Immortality or Extinction  (Read 353035 times)

vox_mundi

  • Multi-year ice
  • Posts: 10166
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #550 on: November 27, 2020, 05:58:57 PM »
Amazon Web Outage Breaks Vacuums and Doorbells
https://www.bbc.com/news/technology-55087054

An outage with Amazon's web infrastructure left smart-home enthusiasts unable to use basic household items.

Amazon Web Services (AWS) is a huge part of the company's business and the backbone of the internet's most popular sites and services.

A widespread US outage late on Wednesday disrupted many of those services.

Robot vacuums and smart doorbells suddenly stopped working in people's homes.

"I... can't vacuum... because US-east-1 [region] is down,"
read one popular tweet, from LinkedIn's top information security official, Geoff Belknap.

"Welcome to the future," replied another user.

The iRobot company, makers of the popular Roomba robot vacuum, acknowledged the widespread problem.

"An Amazon AWS outage is currently impacting our iRobot Home App," it said. "Please know that our team is aware and monitoring the situation and hope to get the app back online soon."

Roombas can be used without an internet connection, by pushing a button on the device. But connected services are used to [ spy on you ] and keep it within a specific room and to remotely activate or schedule cleaning, which is how many owners use the robot.

Owners of Amazon's own Ring smart doorbells also suddenly found the device no longer worked at all.

The AWS outage also hit other software, including Photoshop-maker Adobe and the Washington Post newspaper - which is owned by Amazon boss Jeff Bezos.
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

gerontocrat

  • Multi-year ice
  • Posts: 20384
    • View Profile
  • Liked: 5289
  • Likes Given: 69
Re: Robots and AI: Our Immortality or Extinction
« Reply #551 on: November 27, 2020, 06:25:31 PM »
Amazon Web Outage Breaks Vacuums and Doorbells
What happens when an outage breaks communication with semi-autonomous military offensive hardware already in combat mode somewhere? Surely impossible for some machine firing a 50 mm gatling gun to wander off and do its own thing?

But of course such snafus cannot possibly happen (often?).
"Para a Causa do Povo a Luta Continua!"
"And that's all I'm going to say about that". Forrest Gump
"Damn, I wanted to see what happened next" (Epitaph)

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #552 on: November 27, 2020, 08:08:25 PM »
Amazon Web Outage Breaks Vacuums and Doorbells
What happens when an outage breaks communication with semi-autonomous military offensive hardware already in combat mode somewhere? Surely impossible for some machine firing a 50 mm gatling gun to wander off and do its own thing?

But of course such snafus cannot possibly happen (often?).

In a communications-compromised situation, they follow orders, like Capt. Ramsey in Crimson Tide ...

-----------------------------------------

Capt. Ramsey : We have orders in hand. Those orders are to make a pre-emptive launch of nuclear missiles. Every second that we lose increases the chances that by the time our missiles arrive, their silos could be empty because they've flown their birds and struck us first.

Hunter : Yes sir.

Capt. Ramsey : You know as well as I do that any launch order received without authentication, is no order at all.

Hunter : Yes sir.

Capt. Ramsey : That's our number one rule.

Hunter : [tries to get a word in]  National mil...

Capt. Ramsey : And that rule is the basis for the scenario we've trained on, time and time again. It's a rule we follow without exception.

Hunter : Captain, National Military Command Center knows what sector we're in. They have satellites looking down on us to see if our birds are aloft and if they're *not*, then they give our orders to somebody else. That's why we maintain more than one sub, it's what they call 'redundancy'!

Capt. Ramsey : I know about redundancy, Mr Hunter.

Hunter : All I'm saying...

[Ramsey walks off]

Hunter : [follows Ramsey, lowers his voice]  All I'm saying Captain, is that we have backup. Now it's our duty, *not* to launch until we can confirm.

Capt. Ramsey : You're presuming we have other submarines out there, ready to launch. Well as Captain, I must assume our submarines could've been taken out by other Akulas. We can play these games all night Mr Hunter but uh, I don't have the luxury of your presumptions.

Hunter : Sir...

Capt. Ramsey : Mr Hunter, we have rules that are not open to interpretation, personal intuition, gut feelings, hairs on the back of your neck, little devils or angels sitting on your shoulder. We're all very well aware of what our orders are and what those orders mean. They come down from our Commander in Chief. They contain no ambiguity.

Hunter : Captain...

Capt. Ramsey : Mr Hunter. I've made a decision. I'm Captain of this boat. NOW SHUT THE FUCK UP!

- Crimson Tide - 1995



-----------------------------------------

Oh, did I mention that the Navy wants 'many' Extra Large Unmanned Undersea Vehicles (XLUUVs)?

https://news.usni.org/2020/09/09/report-to-congress-on-navy-large-unmanned-surface-and-undersea-vehicles-4

... The Navy in FY2021 and beyond wants to develop and procure three types of large unmanned vehicles (UVs). These large UVs are called Large Unmanned Surface Vehicles (LUSVs), Medium Unmanned Surface Vehicles (MUSVs), and Extra-Large Unmanned Undersea Vehicles (XLUUVs). The Navy is requesting $579.9 million in FY2021 research and development funding for these large UVs and their enabling technologies.

The Navy wants to acquire these large UVs as part of an effort to shift the Navy to a more distributed fleet architecture. Compared to the current fleet architecture, this more distributed architecture is to include proportionately fewer large surface combatants (i.e., cruisers and destroyers), proportionately more small surface combatants (i.e., frigates and Littoral Combat Ships), and the addition of significant numbers of large UVs.

The Navy wants to employ accelerated acquisition strategies for procuring these large UVs, so as to get them into service more quickly. The Navy’s desire to employ these accelerated acquisition strategies can be viewed as an expression of the urgency that the Navy attaches to fielding large UVs for meeting future military challenges from countries such as China.

The Navy envisions LUSVs as being 200 feet to 300 feet in length and having full load displacements of 1,000 tons to 2,000 tons. The Navy wants LUSVs to be low-cost, high-endurance, reconfigurable ships based on commercial ship designs, with ample capacity for carrying various modular payloads—particularly anti-surface warfare (ASuW) and strike payloads, meaning principally anti-ship and land-attack missiles.

The first five XLUUVs were funded in FY2019; they are being built by Boeing. The Navy wants to procure additional XLUUVs at a rate of two per year starting in FY2023.



Boeing, Lockheed Martin Moving Forward with Navy XLUUV Acquisition Program
https://news.usni.org/2017/10/17/28810

Orca XLUUV - Extra Large Unmanned Undersea Vehicle
https://www.lockheedmartin.com/en-us/products/orca-extra-large-unmanned-underwater-vehicle-xluuv.html

Orca XLUUV is being designed as a long-range autonomous vehicle able to support multiple critical missions.

... Key attributes include extended vehicle range, autonomy, and persistence. Orca XLUUV will transit to an area of operation; loiter with the ability to periodically establish communications, deploy payloads, and transit home.

OBTW, the UGM-133A Trident II, or Trident D5, a nuclear-armed submarine-launched ballistic missile (SLBM), is built by Lockheed Martin. Just sayin'
« Last Edit: November 27, 2020, 08:50:02 PM by vox_mundi »

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #553 on: November 27, 2020, 08:35:51 PM »
Looking for Ways to Prevent Price Collusion With AI Systems
https://techxplore.com/news/2020-11-ways-price-collusion-ai.html

In most countries, price collusion is illegal. It occurs when two or more makers or sellers of goods agree to charge higher-than-market prices for the goods or services they sell. Such practices are illegal because consumers wind up paying higher prices than they would if prices were set by the market.

In their paper the economists note that many large corporations have taken to using computer systems with an AI component to set their prices. Using computers to set prices is not new, of course; some companies sell hundreds of thousands of products, and using computers to help set prices saves a lot of time and money. Until now, though, such systems have been constrained by the laws of the jurisdictions in which the companies operate, since such laws can be baked in.

But now, the authors contend, things have begun to change. AI systems have found, through learned experience, that tacit, uncommunicated collusion can lead to higher profits. Such systems do not have to meet secretly in back rooms; instead, each discovers on its own that its company will make more money if it charges more for its products. And if all of a company's competitors are using similar systems, they can all converge on higher prices and hold them there without ever actually agreeing to do so. Worse, because they break none of the rules established to prevent human price setters from colluding, there is nothing the law can do to stop them, at least not under current laws.

Emilio Calvano et al. Protecting consumers from collusive prices due to AI, Science (2020).
https://science.sciencemag.org/content/370/6520/1040
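The experiments behind this concern use reinforcement-learning pricing agents. The paper's actual environment and parameters aren't reproduced here, but a toy sketch of the setup, two independent ε-greedy Q-learners posting prices in a repeated duopoly with no channel of communication between them, might look like this (the price grid, cost, and demand curve are all illustrative assumptions):

```python
import random

PRICES = [1.0, 1.5, 2.0, 2.5, 3.0]   # discrete price grid (illustrative)
COST = 1.0                            # marginal cost (illustrative)

def profits(p1, p2):
    # Bertrand-style duopoly: the cheaper firm captures the market,
    # a tie splits it. Demand falls linearly in the posted price.
    def demand(p):
        return max(0.0, 4.0 - p)
    if p1 < p2:
        return (p1 - COST) * demand(p1), 0.0
    if p2 < p1:
        return 0.0, (p2 - COST) * demand(p2)
    d = demand(p1) / 2.0
    return (p1 - COST) * d, (p2 - COST) * d

def train(episodes=20000, alpha=0.1, eps=0.1, seed=0):
    """Two independent Q-learners; note they never exchange messages."""
    rng = random.Random(seed)
    q1 = [0.0] * len(PRICES)
    q2 = [0.0] * len(PRICES)
    for _ in range(episodes):
        # epsilon-greedy action selection, separately for each agent
        a1 = rng.randrange(len(PRICES)) if rng.random() < eps \
            else max(range(len(PRICES)), key=q1.__getitem__)
        a2 = rng.randrange(len(PRICES)) if rng.random() < eps \
            else max(range(len(PRICES)), key=q2.__getitem__)
        r1, r2 = profits(PRICES[a1], PRICES[a2])
        # each agent updates only its own value estimates from its own profit
        q1[a1] += alpha * (r1 - q1[a1])
        q2[a2] += alpha * (r2 - q2[a2])
    return q1, q2

q1, q2 = train()
best1 = PRICES[max(range(len(PRICES)), key=q1.__getitem__)]
best2 = PRICES[max(range(len(PRICES)), key=q2.__getitem__)]
print("learned prices:", best1, best2)
```

This sketch shows only the structure of the experiment, not a demonstration of collusion: whether the learned prices settle above the competitive level depends on the learning dynamics and the state representation (the paper's agents condition on past prices). The legal point stands either way, since nothing in the loop resembles an agreement.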

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #554 on: November 27, 2020, 08:47:26 PM »
Another mindless job eliminated by robots ...


gerontocrat

Re: Robots and AI: Our Immortality or Extinction
« Reply #555 on: November 27, 2020, 09:49:40 PM »
Amazon Web Outage Breaks Vacuums and Doorbells
What happens when an outage breaks communication with semi-autonomous military offensive hardware already in combat mode somewhere? Surely impossible for some machine firing a 50 mm gatling gun to wander off and do its own thing?

But of course such snafus cannot possibly happen (often?).

In a communication compromised situation, they follow orders like Capt. Ramsey in Crimson Tide ...
Thanks for the reassurance,

At the moment my answer to the question Robots and AI: Our Immortality or Extinction? is - both. Extinction, but those robots will keep the memory of us saps, schmucks and dumbos alive for ever, and ever, and ever.
"Para a Causa do Povo a Luta Continua!"
"And that's all I'm going to say about that". Forrest Gump
"Damn, I wanted to see what happened next" (Epitaph)

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #556 on: November 27, 2020, 11:12:58 PM »
AI Can Run Your Work Meetings Now
https://arstechnica.com/information-technology/2020/11/ai-can-run-your-work-meetings-now/

"Optimizing" meetings, from automated scheduling to facial recognition to measure attention.



----------------------------------------------

Using Artificial Intelligence to Help Drones Find People Lost In the Woods
https://techxplore.com/news/2020-11-artificial-intelligence-drones-people-lost.html



... Testing of the system showed it to be approximately 87 to 95 percent accurate, compared to just 25 percent for traditional thermal images. The researchers suggest their system is ready for use by search and rescue crews and could also be used by law enforcement, the military, or wildlife management teams.

David C. Schedl et al. Search and rescue with airborne optical sectioning, Nature Machine Intelligence (2020).
https://www.nature.com/articles/s42256-020-00261-3
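The airborne optical sectioning behind these results integrates many registered images taken along the drone's path into one synthetic-aperture image: foliage occludes different pixels from different viewpoints and averages away, while the target accumulates in every unoccluded view. A toy sketch with synthetic "thermal" data (the grid size, temperatures, and occlusion rate are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene: one warm "person" pixel on a cool ground plane, imaged from
# many drone positions. In each single frame, random foliage blocks pixels.
H, W, N_VIEWS = 32, 32, 64
scene = np.full((H, W), 10.0)   # cool ground, arbitrary temperature units
scene[16, 16] = 35.0            # warm target

frames = []
for _ in range(N_VIEWS):
    frame = scene.copy()
    occluders = rng.random((H, W)) < 0.7   # 70% of pixels hidden by canopy
    frame[occluders] = 12.0                # foliage temperature
    frames.append(frame)

# Integral image: averaging the registered views "sections" away the
# occluders, because the target is blocked in some frames but not in all.
integral = np.mean(frames, axis=0)

target_visible_per_frame = [f[16, 16] > 20.0 for f in frames]
print(sum(target_visible_per_frame), "of", N_VIEWS,
      "single frames show the target")
print("integral target:", round(integral[16, 16], 1),
      "vs scene average:", round(float(integral.mean()), 1))
```

In any single frame the target is usually hidden; in the integral image it stands out from the background in every run, which is the effect the classifier in the paper is trained on.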

----------------------------------------

Verse by Verse: AI Poet
https://sites.research.google/versebyverse/

For all aspiring great poets today—and for all those whose poems simply suck—there is help. Verse by Verse was fed tens of thousands of words of the world's greatest poets and was trained to write its own gems, emulating the grammar and style of the grandmasters of poetry.

Users of the program are asked to write a first line for a poem. They then select up to three famous poets whose style they would like to incorporate into their writing. There are 22 poets to choose from, including Emily Dickinson, Robert Frost, Paul Laurence Dunbar and Edgar Allan Poe.

Verse by Verse then generates additional lines of verse, with suggestions from each of the selected poets. The user may choose one line at a time from any of the poets. A poetic form must be selected: quatrain, couplet or free verse. Users then select a syllable count (nine is most common) and a rhyme pattern to determine which lines of the poem must rhyme.

---------------------------------------



---------------------------------------
« Last Edit: November 28, 2020, 08:37:28 AM by vox_mundi »

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #557 on: November 29, 2020, 09:35:26 PM »
WarGames for Real: How One 1983 Exercise Nearly Triggered WWIII
https://arstechnica.com/information-technology/2020/11/wargames-for-real-how-one-1983-exercise-nearly-triggered-wwiii/



"Let's play Global Thermonuclear War."

Thirty-two years ago, just months after the release of the movie WarGames, the world came the closest it ever has to nuclear Armageddon. In the movie version of a global near-death experience, a teenage hacker messing around with an artificial intelligence program that just happened to control the American nuclear missile force unleashes chaos. In reality, a very different computer program run by the Soviets fed growing paranoia about the intentions of the United States, very nearly triggering a nuclear war.

The software in question was a KGB computer model constructed as part of Operation RYAN (РЯН), details of which were obtained from Oleg Gordievsky, the KGB's London section chief who was at the same time spying for Britain's MI6. Named for an acronym for "Nuclear Missile Attack" (Ракетное Ядерное Нападение), RYAN was an intelligence operation started in 1981 to help the intelligence agency forecast if the US and its allies were planning a nuclear strike. The KGB believed that by analyzing quantitative data from intelligence on US and NATO activities relative to the Soviet Union, they could predict when a sneak attack was most likely.

As it turned out, Exercise Able Archer '83 triggered that forecast. The war game, which was staged over two weeks in November of 1983, simulated the procedures that NATO would go through prior to a nuclear launch. Many of these procedures and tactics were things the Soviets had never seen, and the whole exercise came after a series of feints by US and NATO forces to size up Soviet defenses and the downing of Korean Air Lines Flight 007 on September 1, 1983. So as Soviet leaders monitored the exercise and considered the current climate, they put one and one together. Able Archer, according to Soviet leadership at least, must have been a cover for a genuine surprise attack planned by the US, then led by a president possibly insane enough to do it.

While some studies, including an analysis some 12 years ago by historian Fritz Earth, have downplayed the actual Soviet response to Able Archer, a newly published declassified 1990 report from the President's Foreign Intelligence Advisory Board (PFIAB) to President George H. W. Bush obtained by the National Security Archive suggests that the danger was all too real. The document was classified as Top Secret with the code word UMBRA, denoting the most sensitive compartment of classified material, and it cites data from sources that to this day remain highly classified. When combined with previously released CIA, National Security Agency (NSA), and Defense Department documents, this PFIAB report shows that only the illness of Soviet leader Yuri Andropov—and the instincts of one mid-level Soviet officer—may have prevented a nuclear launch.

"Nuclear Missile Attack" (Ракетное Ядерное Нападение), RYAN PFIAB Report: http://nsarchive.gwu.edu/nukevault/ebb533-The-Able-Archer-War-Scare-Declassified-PFIAB-Report-Released/


https://nsarchive2.gwu.edu/nukevault/ebb533-The-Able-Archer-War-Scare-Declassified-PFIAB-Report-Released/2012-0238-MR.pdf

... According to the declassified PFIAB report, CIA analysts reported that by 1981 the shifting strategic balance and the deterioration of the relationship between the US and the USSR had led the Soviet leadership to believe there was "an increased threat of war and increased likelihood of the use of nuclear weapons." After Reagan took office in 1981, the Soviet leadership pushed the USSR's intelligence apparatus to make sure it was ready to act if Reagan was in fact preparing a surprise attack.

Yuri Andropov, at that time head of the KGB, told a group of KGB officers in May of 1981 that the US was actively preparing for war against the Soviet Union. He believed a surprise US nuclear first strike was possible, and his solution was to accelerate the RYAN program.

Development of the RYAN computer model began in the mid-1970s, and by the end of the decade the KGB convinced the Politburo that the software was essential to make an accurate assessment of the relationship between the USSR and the United States. While it followed prior approaches to analysis used by the KGB, the pace of Western technological advancements and other factors made it much more difficult to keep track of everything affecting the "correlation of forces" between the two sides.

Even if it was technologically advanced, the thinking behind RYAN was purely old-school, based on the lessons learned by the Soviets from World War II. It used a collection of approximately 40,000 weighted data points based on military, political, and economic factors that Soviet experts believed were decisive in determining the course of the war with Nazi Germany. The fundamental assumption of RYAN's forecasting was that the US would act much like the Nazis did—if the "correlation of forces" was decisively in favor of the US, then it would be highly likely that the US would launch a surprise attack just as Germany did with Operation Barbarossa.

The forecast that RYAN spit out was, for all the model's complexity, very simple. The system used the US' power as a fixed scale, measuring the Soviet position as a percentage score based on all the data points. RYAN's model was constantly updated with new data from the field, and the RYAN score report was sent once a month to the Politburo. Anything above a 70 was acceptable, but the experts who built the system believed that a score of 60 or above meant the Soviet Union was safe from surprise attack. Anything lower was bad news.

In 1981, the score was dipping below 60, so Andropov pushed for enhanced data points to be added to RYAN to improve its accuracy. In May, he ordered the creation of a special "institute" within the KGB to develop the additional military intelligence input requirements. ...
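Mechanically, the forecast described above is a weighted scoring model measured against a fixed US baseline. A toy sketch of that arithmetic (the indicator names, weights, and readings are invented for illustration; the real model's ~40,000 inputs remain classified):

```python
# Hypothetical indicators: {name: (weight, observed level)}.
# Levels run 0-1, with 1 = most threatening. Weights are illustrative.
INDICATORS = {
    "bomber_sorties_near_border": (0.30, 0.8),
    "fleet_exercises_unusual":    (0.25, 0.9),
    "embassy_activity":           (0.20, 0.4),
    "civil_defense_readiness":    (0.15, 0.3),
    "political_rhetoric":         (0.10, 0.7),
}

def ryan_score(indicators):
    """0-100 score of the Soviet position against the fixed US baseline:
    higher = safer, lower = a surprise attack looks more likely."""
    threat = sum(w * level for w, level in indicators.values())
    total = sum(w for w, _ in indicators.values())
    return round(100 * (1 - threat / total))

score = ryan_score(INDICATORS)
# Thresholds as described in the PFIAB account: above 70 acceptable,
# 60-70 still judged safe from surprise attack, below 60 bad news.
if score >= 70:
    status = "acceptable"
elif score >= 60:
    status = "safe from surprise attack, but watch closely"
else:
    status = "danger: attack considered possible"
print(score, status)
```

The sketch also shows the model's central flaw as the article describes it: every new "alarming" data point fed in from the field pushes the score down, so a collection system rewarded for alarming raw data drives its own forecast toward war.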

... However the KGB decided to use these data points, it did not make things better. As soon as RYAN was updated with the new information, it immediately started churning out bad news. This only fueled demand for more and better data to feed the model. In a tasking communique to field officers in the US and Western Europe, KGB headquarters said: ...

... It didn't help much that President Reagan had essentially given the US Navy and Air Force carte blanche ability to screw with the Soviets' heads. Soon after being sworn in, Reagan signed off on a series of psychological warfare operations against the Soviet Union. The Air Force flew bombers up over the North Pole and out of bases in Europe and Asia to come close to Soviet airspace then turn off just as they approached the border. The Navy staged multiple operations and exercises in places where the fleet had never gone before, all in close proximity to major Soviet military and industrial sites.

In the summer of 1981, the aircraft carrier USS Eisenhower and an accompanying force of 82 US, Canadian, Norwegian, and British ships used a combination of deceptive lighting and other practices, radio silence, and electronic warfare to sneak through what is known as the Greenland-Iceland-United Kingdom (GIUK) gap and into the North Sea. The initiative even took advantage of cloud cover to evade Soviet satellites. When Soviet maritime patrol planes finally found them, the carrier's fighter wing staged simulated attacks on the "Bear" patrol planes as they were performing in-flight refueling.

All of these details were fed into RYAN, and they made the Soviet Politburo very, very nervous. This sense of dread filtered down. Marshal Nikolai Ogarkov, the chief of the general staff of the Soviet military, called for moving the entire country to a "war footing" in preparation for a complete military mobilization. A lieutenant colonel acting as an instructor at Moscow's Civil Defense Headquarters told civilians that the Soviet military "intended to deliver a preemptive strike against the US, using 50 percent of its warheads," according to the PFIAB report. And the KGB issued an order to all departments of its foreign intelligence arm to increase collection efforts even further, all because there was information indicating NATO was preparing for "a third world war."

With Brezhnev's death on November 10, 1982, the RYAN number likely slipped into the red. ... In the Soviet military, no one was sure who had nuclear release authority until Andropov was named as Brezhnev's successor on November 15.

Andropov had a fever for more RYAN, and the KGB responded by creating an entire new workforce in its stations at Soviet embassies in the West dedicated to feeding it. In February of 1983, KGB headquarters sent a cable to its London section chief, telling him that he was being sent a new agent with one job—feeding RYAN military data.

London

Comrade Yermakov

[A. V. Guk]

(strictly personal)

Permanent Operational Assignment to Uncover NATO Preparations for a Nuclear Missile Attack on the USSR ...

... Pretty much everything the Reagan administration and the US military did in 1983—along with some of the things the Soviets thought that they had done—pushed the buttons on RYAN. ...


... read the list  :o

And starting in September, NATO staged its annual series of elaborate war games known as Autumn Forge, culminating in a nuclear war game called Exercise Able Archer '83. It began with a massive airlift of 16,000 US troops to Europe on 139 flights, all under radio silence. The Soviets had never seen anything like it.

As if all the tension wasn't enough, on September 26, 1983 the Oko early warning system reported twice that US ballistic missiles had been launched. Lt. Colonel Stanislav Petrov, the watch officer in the Soviet Air Defense Forces' command bunker outside Moscow that night, made a gut call that the launch warnings were a malfunction. (It was later determined the warnings were caused by the way the sun bounced off high-altitude clouds). If Petrov had followed procedures in place, Andropov would have been alerted of a nuclear launch and an immediate launch of ICBMs would have been ordered.

During this period, the RYAN score dropped precipitously. A report from early in 1984 placed the RYAN score at 45; it may have dipped even lower during the fall of 1983. Any numbers in this range would have likely pushed Soviet paranoia to the edge.

Some KGB operatives objected to the analyses of the situation that they kept getting back from headquarters; being more familiar with how the West operated, they believed there was no evidence of an actual plan to launch a surprise attack. "None of the political reporting officers who concentrated on RYAN believed in the immediacy of the threat—especially a US surprise attack," the PFIAB report records.

In fact, the demand for "raw" intelligence rather than analysis, in a system that rewarded ever more alarming raw data, was at the root of the "war panic" that reached its peak with the beginning of Able Archer '83, the culmination of the Autumn Forge exercise. Over 40,000 NATO troops were on the move across Europe, commanded via encrypted communications and often operating under radio silence.

... the Politburo ultimately decided that Able Archer was, in fact, a cover for an actual surprise nuclear attack. They began acting accordingly.

... Helicopters ferried nuclear warheads to be loaded into weapons and aircraft. Missile and air forces were put on a round-the-clock 30-minute alert. Soviet strike fighter-bombers in East Germany and Poland were loaded with nuclear weapons. About 70 SS-20 missiles were put on ready alert with warheads loaded. And ballistic missile subs were ordered to disperse from port beneath the Arctic ice cap in preparation for an incoming attack.


But the next day, Andropov (already in questionable health) became seriously ill and dropped out of the public eye. Three months later, he would die of renal failure. As Andropov became incapacitated, there was near panic that this would be the moment the US was waiting for to strike. At the same time, there was confusion over who could actually order a pre-emptive nuclear strike in his absence.



On November 11, after testing new procedures for signaling authority to launch a nuclear attack and walking NATO forces up from normal readiness to a simulated General Alert (DEFCON 1) and a full-scale simulated release of nuclear weapons, Able Archer '83 concluded. With it, the Soviets ended their alert.

...

RYAN is a dramatic example of how analytic systems can lead their users astray. It adds resonance to the types of fears that WarGames (and Terminator a year later) tapped into—that artificial intelligence connected to weapons of war could be a very bad thing.

While we haven't exactly hooked up a WOPR or Skynet to the ballistic missile network quite yet, the intelligence community of the US has increasingly turned to machine learning, expert systems, and analytics to drive its identification of targets of interest. Recent artificial intelligence research by Google has demonstrated how machine learning systems can go wrong when fed random data, finding patterns that aren't there, a behavior they called "inceptionism." Imagine, then, if an artificial intelligence system started looking into the emptiness of its data feed and started seeing enemies everywhere.

https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html?m=1

Perhaps we should remember Operation RYAN the next time there's a conversation about letting autonomous systems control weapons—no matter what the caliber. Maybe we should all have AI stick to chess for a while longer.
« Last Edit: November 30, 2020, 03:21:39 AM by vox_mundi »

sidd

  • First-year ice
  • Posts: 6774
    • View Profile
  • Liked: 1047
  • Likes Given: 0
Re: Robots and AI: Our Immortality or Extinction
« Reply #558 on: November 30, 2020, 02:05:56 AM »
Re:  started seeing enemies everywhere

Humans do it better, Angleton and the wilderness of mirrors to quote one famous example.

That's probably where the machines learned it from.

sidd

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #559 on: November 30, 2020, 04:00:34 AM »

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #560 on: November 30, 2020, 04:01:43 AM »
As the AI version of Gahan Wilson would say ...


I paint what I see

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #561 on: November 30, 2020, 07:54:10 PM »
‘It Will Change Everything’: DeepMind’s AI Makes Gigantic Leap In Solving Protein Structures
https://www.nature.com/articles/d41586-020-03348-4

https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology


humans? ... Not even close!

-------------------------------------------

DeepMind AI Cracks 50-Year-old Problem of Protein Folding
https://www.theguardian.com/technology/2020/nov/30/deepmind-ai-cracks-50-year-old-problem-of-biology-research

Program solves scientific problem in ‘stunning advance’ for understanding machinery of life

Having risen to fame on its superhuman performance at playing games, the artificial intelligence group DeepMind has cracked a serious scientific problem that has stumped researchers for half a century.

With its latest AI program, AlphaFold, the company and research laboratory showed it can predict how proteins fold into 3D shapes, a fiendishly complex process that is fundamental to understanding the biological machinery of life.



"A 50-year-old grand challenge in computer science has been to a large degree solved."

Independent scientists said the breakthrough would help researchers tease apart the mechanisms that drive some diseases and pave the way for designer medicines, more nutritious crops and “green enzymes” that can break down plastic pollution.

... “It marks an exciting moment for the field,” said Demis Hassabis, DeepMind’s founder and chief executive. “These algorithms are now becoming mature enough and powerful enough to be applicable to really challenging scientific problems.”

Venki Ramakrishnan, the president of the Royal Society, called the work “a stunning advance” that had occurred “decades before many people in the field would have predicted”.

... Andrei Lupas, the director of the Max Planck Institute for Developmental Biology in Tübingen, Germany, said he had already used the program to solve a protein structure that scientists had been stuck on for a decade.

Quote
... "The model from group 427 gave us our structure in half an hour, after we had spent a decade trying everything"

Protein folding has been a grand challenge in biology for 50 years. An arcane form of molecular origami, its importance is hard to overstate. Most biological processes revolve around proteins and a protein’s shape determines its function. When researchers know how a protein folds up, they can start to uncover what it does. How insulin controls sugar levels in the blood and how antibodies fight coronavirus are both determined by protein structure.

... To learn how proteins fold, researchers at DeepMind trained their algorithm on a public database containing about 170,000 protein sequences and their shapes. Running on the equivalent of 100 to 200 graphics processing units – by modern standards, a modest amount of computing power – the training took a few weeks.

DeepMind put AlphaFold through its paces by entering it for a biennial “protein olympics” known as Casp, the Critical Assessment of Protein Structure Prediction. Entrants to the international competition are given the amino acid sequences for about 100 proteins and challenged to work them out. The results from teams that use computers are compared with those based on lab work.

AlphaFold not only outperformed other computer programs but reached an accuracy comparable to the laborious and time-consuming lab-based methods. When ranked across all proteins analysed, AlphaFold had a median score of 92.4 out of 100, with 90 being the equivalent of experimental methods. For the hardest proteins, the median score fell, but only marginally, to 87.

... It is not the end of the work, however. Future research will focus on how proteins combine to form larger “complexes” and how they interact with other molecules in living organisms.

---------------------------------------





CASP uses the "Global Distance Test (GDT)" metric to assess accuracy, ranging from 0-100. The new AlphaFold system achieves a median score of 92.4 GDT overall across all targets. The system's average error is approximately 1.6 Angstroms—about the width of an atom.
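As a rough illustration (not DeepMind's code), the GDT_TS variant of the Global Distance Test can be sketched as the average, over four standard distance cutoffs, of the fraction of residues whose predicted position falls within that cutoff of the experimental structure:

```python
# Hedged sketch of the GDT_TS ("Global Distance Test") score reported by CASP.
# GDT_TS averages, over cutoffs of 1, 2, 4 and 8 Angstroms, the percentage of
# residues whose predicted C-alpha atom lies within that cutoff of the
# experimental position (optimal superposition is assumed already done).

def gdt_ts(deviations_angstrom):
    """deviations_angstrom: per-residue C-alpha deviations, in Angstroms."""
    n = len(deviations_angstrom)
    cutoffs = (1.0, 2.0, 4.0, 8.0)
    fractions = [
        sum(1 for d in deviations_angstrom if d <= c) / n for c in cutoffs
    ]
    return 100.0 * sum(fractions) / len(cutoffs)

# Toy example: four residues deviating by 0.5, 1.5, 3.0 and 9.0 A.
# Fractions within 1/2/4/8 A are 0.25, 0.50, 0.75, 0.75 -> GDT_TS = 56.25.
print(gdt_ts([0.5, 1.5, 3.0, 9.0]))  # -> 56.25
```

A perfect prediction scores 100; a score of 92.4 means the overwhelming majority of residues sit within the tightest cutoffs of their true positions.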

-----------------------------------------

New Cyberattack Can Trick Scientists Into Making Toxins or Viruses
https://phys.org/news/2020-11-cyberattack-scientists-toxins-viruses.html

An end-to-end cyber-biological attack, in which unwitting biologists may be tricked into generating dangerous toxins in their labs, has been discovered by Ben-Gurion University of the Negev cyber-researchers.

According to a new paper just published in Nature Biotechnology, it is currently believed that a criminal needs to have physical contact with a dangerous substance to produce and deliver it. However, malware could easily replace a short sub-string of the DNA on a bioengineer's computer so that they unintentionally create a toxin producing sequence.

"To regulate both intentional and unintentional generation of dangerous substances, most synthetic gene providers screen DNA orders, which is currently the most effective line of defense against such attacks," says Rami Puzis, head of the BGU Complex Networks Analysis Lab, a member of the Department of Software and Information Systems Engineering and Cyber@BGU. California was the first state in 2020 to introduce gene purchase regulation legislation.

"However, outside the state, bioterrorists can buy dangerous DNA, from companies that do not screen the orders," Puzis says.

A weakness in the U.S. Department of Health and Human Services (HHS) guidance for DNA providers allows screening protocols to be circumvented using a generic obfuscation procedure which makes it difficult for the screening software to detect the toxin-producing DNA. "Using this technique, our experiments revealed that 16 out of 50 obfuscated DNA samples were not detected when screened according to the 'best-match' HHS guidelines," Puzis says.

The researchers also found that accessibility and automation of the synthetic gene engineering workflow, combined with insufficient cybersecurity controls, allow malware to interfere with biological processes within the victim's lab, closing the loop with the possibility of an exploit written into a DNA molecule.

Rami Puzis et al, Increased cyber-biosecurity for DNA synthesis, Nature Biotechnology (2020).
https://www.nature.com/articles/s41587-020-00761-y

... I'd recommend a slow toxin, like ricin, tetrodotoxin, ciguatoxin, or maitotoxin ... or maybe a binary toxin, like an anaphylactic food allergen triggered by the next meal ... or it could synthesize H1N1 influenza, ... or smallpox
« Last Edit: November 30, 2020, 08:44:02 PM by vox_mundi »

vox_mundi

  • Multi-year ice
  • Posts: 10166
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #562 on: December 02, 2020, 03:18:13 AM »

vox_mundi

  • Multi-year ice
  • Posts: 10166
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #563 on: December 03, 2020, 03:18:33 AM »
Russia’s Okhotnik Unmanned Combat Air Vehicle Tests Air-To-Air Missiles: Report
https://www.thedrive.com/the-war-zone/37914/russias-okhotnik-unmanned-combat-air-vehicle-tests-air-to-air-missiles-report

The recent series of weapons trials apparently saw the unmanned aircraft used in a “fighter-interceptor role” and were reported by Russia’s state-run media outlet RIA Novosti. The outlet said that the tests had taken place over the Ashuluk training range in southwestern Russia.

The RIA Novosti piece doesn’t specify what types of missiles were involved in the reported tests, either. It did mention that they included infrared and radar-homing types, which could point to the drone having carried both short-range and medium-to-long-range AAMs. With the
Okhotnik — meaning 'Hunter' in Russian —  still in the early phase of its evaluation, any missile armament likely comprised types that are already in the inventory or at least at an advanced stage of development.



Russia’s primary close-combat AAMs at present are the R-73 and R-74M family and the country is also working on the K-74M2 dogfighting missile, which is intended for deployment from the internal “quick launch” weapons bays of the Su-57 Felon fighter jet. The K-74M2, which features a lock-on after launch (LOAL) mode, beginning its flight under inertial control before achieving an in-flight lock-on, would be a suitable candidate for internal carriage by the Okhotnik.

RIA Novosti's story added that the missile trials “will make it possible to assess the coupling of the drone’s avionics with missile guidance systems and the lead Su-57 aircraft.” This adds weight to reports that the plan is to utilize the Okhotnik, at least partially, as a loyal wingman-type complement to the manned Su-57.

--------------------------------------------

Are AI Professionals Actually Unwilling to Work for the Pentagon?
https://www.defenseone.com/ideas/2020/11/are-ai-professionals-actually-unwilling-work-pentagon/170359/

When Google employees protested their company’s work on Project Maven in 2018, their public letter against the company’s involvement in “the business of war” drew attention to the idea of a "culture clash" between the tech sector and the U.S. Department of Defense. A tense, adversarial relationship makes headlines — but how many AI professionals are actually unwilling to work with the U.S. military?

Recent research by the Center for Security and Emerging Technology, a policy research organization within Georgetown University’s Walsh School of Foreign Service, suggests a more nuanced relationship, including areas of potential alignment. A CSET survey of 160 U.S. AI industry professionals found little outright rejection of working with DOD. In fact, only 7 percent of the surveyed AI professionals working at U.S.-based AI companies felt extremely negative about working on DOD AI projects, and only a few expressed absolute refusal to work with DOD.

... Many see professional benefits, including the promise of working on hard problems, especially the kind not being explored in the private sector. In their own words, surveyed AI professionals note opportunities to “expand the state of the art without market forces” or do “research which doesn't have an immediate commercial application.” DOD has long recognized this ability to offer intellectually and technically challenging problems as its ace in the hole when unable to compete against the salaries in the private sector.

On the whole, professionals who are more aware of Defense Department AI projects, or who have experience working on DOD-funded research in general, were more positive about working on DOD AI projects specifically.

While connecting with technology companies has been a key priority for the Pentagon in recent years, many DOD efforts remain shrouded in mystery, causing AI professionals to question the motives behind DOD funding and feeding into fears that working with the U.S. military on AI is akin to “expanding the efficiency of the murderous American war machine” —as one surveyed professional put it.

... some AI professionals are concerned that collaborating with DOD means creating “autonomous killer drones,” or “weaponized research [without] human in the loop circuit breakers.”

-------------------------------------------

Artificial Intelligence In War: Human Judgment as an Organizational Strength and a Strategic Liability
https://www.brookings.edu/research/artificial-intelligence-in-war-human-judgment-as-an-organizational-strength-and-a-strategic-liability/

-------------------------------------------

Soldiers Don’t Trust Robot Battle Buddies. Can Virtual Training Fix That?
https://www.defenseone.com/technology/2020/11/soldiers-dont-trust-robot-battle-buddies-can-virtual-training-fix/170378/

You might think that troops would be eager to incorporate robots and automata into operations, since military robots are intended to save soldiers, airmen, etc., from the “dull, dirty, dangerous” jobs that operators are called on to perform in combat settings. But a new survey from the U.S. Air Force’s Journal of Indo-Pacific Affairs shows that frontline military personnel are actually more apprehensive than their commanders about it.

The paper, based on a survey of 800 officer cadets and midshipmen at the Australian Defence Force Academy, showed that "a significant majority would be unwilling to deploy alongside fully autonomous" lethal autonomous weapons systems, or LAWS, and that "the perceived safety, accuracy, and reliability of the autonomous system and the potential to reduce harm to civilians, allied forces, and ADF personnel are the most persuasive benefits," as opposed to other factors, such as cost savings.

https://www.airuniversity.af.edu/JIPA/Display/Article/2425657/risks-and-benefits-of-autonomous-weapon-systems-perceptions-among-future-austra/

-------------------------------------

... maybe they saw the execution of Order 66 in the Clone Wars episode


gerontocrat

  • Multi-year ice
  • Posts: 20384
    • View Profile
  • Liked: 5289
  • Likes Given: 69
Re: Robots and AI: Our Immortality or Extinction
« Reply #564 on: December 03, 2020, 03:56:53 PM »
It seems google want their employees to be robots -
- don't dare to attempt to Unionise,
- don't dare to raise concerns about google's developments - e.g. in AI ethics.

But google can soon replace most of the staff with robots ?

https://www.bbc.co.uk/news/technology-55173063
Google fired employees for union activity, says US agency

Google unlawfully fired employees for attempting to organise a union, a US federal agency has said.

Quote
A complaint filed by the US National Labor Relations Board (NLRB) alleged that Google unlawfully monitored and questioned its employees about their union activity. It fired a number of staff for violating data security - but the NLRB said the rules were applied only to those engaging in union activity.

Google denies doing anything unlawful.

It comes as another prominent Google employee, a leading figure in AI ethics, said she was fired for an email she sent to employees.

Why were they fired?
The NLRB complaint dealt with employees who were fired over a year ago, in November 2019.
Known as the "Thanksgiving Four", they were officially fired for breaking security and safety rules. But the workers alleged they were fired for "speaking out" about Google's policies.

Reacting to the NLRB complaint, Google said it had "always worked to support a culture of internal discussion". "Actions undertaken by the employees at issue were a serious violation of our policies and an unacceptable breach of a trusted responsibility," it said.

But the NLRB said the rules in question were only applied to those employees who were engaged in worker organisation.

What did Google do?
The complaint, on behalf of two of the workers, said the employees accessed basic tools like employee calendars and meeting rooms for the purposes of organising union-related activities.
Google "interrogated" its employees about accessing such information, which the NLRB said were protected activities under labour organising rules. It also "threatened employees with unspecified reprisals" and demanded they address any workplace concerns through official channels only.
And it also accessed an employee slide presentation that was part of a union drive, the complaint said.

In November 2019, Google brought in rules banning employees from accessing each others' calendars other than for reasons directly related to work. It did so "to discourage its employees from forming, joining, assisting a union or engaging in other protected, concerted activities", the NLRB said. The company fired the workers behind the activity "to discourage employees" from doing the same, it added. All of these actions amounted to "interfering with, restraining, and coercing employees" when it came to their rights.

Laurence Berland, one of the employees named in the complaint, said: "Employees who speak out on ethical issues, harassment, discrimination and all these matters are no longer really welcome at Google in the way they used to be." "I think it is part of a shift in culture there."

What happened to the AI ethics researcher?
The NLRB news comes on the day that a well-respected member of Google's AI team, Timnit Gebru, said she had been fired by Google. She tweeted that she was fired "for my email to [internal Google group] Brain Women and Allies".

She wrote that her corporate email had been deactivated, and so she could not share a copy of the email - but that Google told her that "certain aspects of the email you sent last night to non-management employees in the brain group reflect behaviour that is inconsistent with the expectations of a Google manager".

She also said the company said it accepted her resignation - which she said she had never offered.

The news prompted a backlash among software engineers and AI ethics watchers, among whom Ms Gebru is a respected researcher. She was one of the authors of a 2018 paper which concluded that AI facial recognition has difficulty identifying dark-skinned women - because the original datasets are mostly based on white men.

Earlier this year, she was interviewed in a piece for The New York Times titled A Case for Banning Facial Recognition and why she believes it should not be used for policing. "The combination of over-reliance on technology, misuse and lack of transparency - we don't know how widespread the use of this software is - is dangerous," she told the newspaper.
"Para a Causa do Povo a Luta Continua!"
"And that's all I'm going to say about that". Forrest Gump
"Damn, I wanted to see what happened next" (Epitaph)

vox_mundi

  • Multi-year ice
  • Posts: 10166
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #565 on: December 03, 2020, 04:15:49 PM »
Quote
... Google denies doing anything unlawful. ...


vox_mundi

  • Multi-year ice
  • Posts: 10166
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #566 on: December 03, 2020, 07:09:47 PM »
Robot Hands One Step Closer to Humans' Thanks to AI Algorithms
https://warwick.ac.uk/newsandevents/pressreleases/robot_hands_one/



The Shadow Robot Dexterous Hand is a robot hand, with size, shape and movement capabilities similar to those of a human hand. To give the robotic hand the ability to learn how to manipulate objects researchers from WMG, University of Warwick, have developed new AI algorithms.

Robot hands can be used in many applications, such as manufacturing, surgery and dangerous activities like nuclear decommissioning. For instance, robotic hands can be very useful in computer assembly, where assembling microchips requires a level of precision that only human hands can currently achieve. Using robot hands on assembly lines could raise productivity while reducing human workers' exposure to hazardous situations.

In the paper, "Solving Challenging Dexterous Manipulation Tasks With Trajectory Optimisation and Reinforcement Learning," researchers Professor Giovanni Montana and Dr. Henry Charlesworth from WMG, University of Warwick have developed new AI algorithms—or the "brain"—required to learn how to coordinate the fingers' movements and enable manipulation.

Using physically realistic simulations of Shadow's robotic hand, the researchers have been able to make two hands pass and throw objects to each other, as well as spin a pen between the fingers. The algorithms however are not limited to these tasks but can learn any task as long as it can be simulated. The 3-D simulations were developed using MuJoCo (Multi-Joint Dynamics with Contact), a physics engine from the University of Washington.



Henry Charlesworth, Giovanni Montana. Solving Challenging Dexterous Manipulation Tasks With Trajectory Optimisation and Reinforcement Learning. arXiv:2009.05104 [cs.RO].
https://arxiv.org/abs/2009.05104

https://dexterous-manipulation.github.io/

sidd

  • First-year ice
  • Posts: 6774
    • View Profile
  • Liked: 1047
  • Likes Given: 0
Re: Robots and AI: Our Immortality or Extinction
« Reply #567 on: December 04, 2020, 01:53:01 AM »
Can that hand write cursive text yet?

sidd

vox_mundi

  • Multi-year ice
  • Posts: 10166
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #568 on: December 04, 2020, 02:22:18 AM »
Quote
Can that hand write cursive text yet? 
College grads can't even write cursive text yet!




Tom_Mazanec

  • Guest
Re: Robots and AI: Our Immortality or Extinction
« Reply #569 on: December 04, 2020, 03:25:53 AM »
My cursive handwriting was a secret code back in the Seventies...even I can't read it!

sidd

  • First-year ice
  • Posts: 6774
    • View Profile
  • Liked: 1047
  • Likes Given: 0
Re: Robots and AI: Our Immortality or Extinction
« Reply #570 on: December 04, 2020, 06:30:46 AM »
Re:  cursive handwriting was a secret code

My handwriting has degenerated over the years, i find. But I do have friends whose handwriting has _improved_ , one of them whose handwriting i could barely read decades ago when we were in grad school visited for a sabbatical a couple years ago and i was amazed that it had improved to the point that i could easily read what he wrote.

I have another friend who has early onset alzheimers, and supposedly writing in cursive helps, so i began a letter exchange, been writing him in cursive on paper. The first letter he got, he called me back, amazed at a blast from the past ... no one had written him a letter in decades. The good part was that he could read my handwriting.

I may begin writing letters to other of my friends. There's something about putting pen (or pencil, you can correct mistakes easier)  to paper, a comforting feeling, as it were.

I write by hand perhaps half a page or a full page a day but it is disjointed and to do with work. I quite like the feeling of sitting down and writing a page or three in a letter, putting it in an envelope with a stamp and addressing it. Takes me back, as it were.

I use a half mm 2B lead in a pentel mechanical pencil for writing for some decades now together with Staedtler Mars erasers. Probably because it lets me do mechanical drawings as well (yes i have CAD programs and the like, i prefer drawing things by hand first.)

My paper of choice is lined yellow 8-1/2x11 inch or white unlined in that size. The yellow is probably a remnant from grad school, when the yellow pads came out and scribbling began, it was a sign that folks were engaging with your argument and calculations. (And of course that old standby, graph paper when i am plotting numbers read off meters and scales.)

For letters i prefer white and unlined.

Perhaps we should have a thread called "Working with your hands"

sidd
« Last Edit: December 04, 2020, 07:30:06 AM by sidd »

Tom_Mazanec

  • Guest
Re: Robots and AI: Our Immortality or Extinction
« Reply #571 on: December 04, 2020, 12:45:02 PM »
Well, mine is just as bad as ever.
To get back on topic, are computers able to read even bad handwriting like mine?

Tor Bejnar

  • Young ice
  • Posts: 4606
    • View Profile
  • Liked: 879
  • Likes Given: 826
Re: Robots and AI: Our Immortality or Extinction
« Reply #572 on: December 04, 2020, 06:52:54 PM »
One of the tasks my mother had taken 'in her retirement' was to convert (translate, sometimes) old family letters (from the mid-19th century) into typescript (and electronic files). (It is how I learned I have a 'cousin' who died during the American Civil War in a hospital, but letters from two contemporary sources say the death was from illness (said one) or wounds sustained in combat (said the other) (of course, it might have been from illness [infection] caused by a wound)).  I can read cursive just fine, but 19th century cursive is a different breed of animal.  (And, of course, every animal is different from every one of its kin.)  I scanned some of these old letters for her and I have to say that they are easier to read at two or three hundred percent!  I had a teacher in my youth who didn't 'properly' cross final 't's when at the end of a word (just an upstroke [examples here]), and things like that were common 150 years ago (lots of abbreviations, such as Wm for William), before the first commercially available typewriter.

There are handwriting interpreters using OCR.  I wonder how good they are.

I'm reminded that some of my friends growing up couldn't converse with their grandparents who lived in the same house as the kids weren't taught Spanish.  Now my kids cannot effectively exchange letters with their grandmother.  (They can send an e-mail which my brother prints out and hands it to mom...)
Arctic ice is healthy for children and other living things because "we cannot negotiate with the melting point of ice"

vox_mundi

  • Multi-year ice
  • Posts: 10166
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #573 on: December 05, 2020, 01:41:27 AM »
China Conducting Biological Tests to Create Super Soldiers, US Spy Chief Says
https://www.theguardian.com/world/2020/dec/04/china-super-soldiers-biologically-enhanced-john-ratcliffe

China has conducted testing on its army in the hope of creating biologically enhanced soldiers, according to the top intelligence official in the US.

Writing in the Wall Street Journal, John Ratcliffe, the US director of national intelligence, said: “The intelligence is clear: Beijing intends to dominate the US and the rest of the planet economically, militarily and technologically. Many of China’s major public initiatives and prominent companies offer only a layer of camouflage to the activities of the Chinese Communist Party.”

Ratcliffe said China had gone to extraordinary lengths to achieve its goal.

“US intelligence shows that China has even conducted human testing on members of the People’s Liberation Army in hope of developing soldiers with biologically enhanced capabilities,” Ratcliffe wrote. “There are no ethical boundaries to Beijing’s pursuit of power.”

The enhancement of regular humans engaged in law enforcement or military operations has captured the imagination of many film and TV directors over the years.

Last year, two American scholars wrote a paper examining China's ambitions to apply biotechnology to the battlefield, including what they said were signs that China was interested in using gene-editing technology to enhance human — and perhaps soldier — performance.

Specifically, the scholars explored Chinese research using the gene-editing tool CRISPR, short for "clustered regularly interspaced short palindromic repeats." CRISPR has been used to treat genetic diseases and modify plants, but Western scientists consider it unethical to seek to manipulate genes to boost the performance of healthy people.

https://jamestown.org/program/chinas-military-biotech-frontier-crispr-military-civil-fusion-and-the-new-revolution-in-military-affairs/

"While the potential leveraging of CRISPR to increase human capabilities on the future battlefield remains only a hypothetical possibility at the present, there are indications that Chinese military researchers are starting to explore its potential," wrote the scholars, Elsa Kania, an expert on Chinese defense technology at the Center for a New American Security, and Wilson VornDick, a consultant on China matters and former Navy officer.

"Chinese military scientists and strategists have consistently emphasized that biotechnology could become a 'new strategic commanding heights of the future Revolution in Military Affairs,'" the scholars wrote, quoting a 2015 article in a military newspaper.

One prominent Chinese general, they noted, said in 2017 that "modern biotechnology and its integration with information, nano(technology), and the cognitive, etc. domains will have revolutionary influences upon weapons and equipment, the combat spaces, the forms of warfare, and military theories."

VornDick said in a phone interview that he is less concerned about the battlefield advantage such research might provide than he is about the consequences of tampering with human genes.

"When we start playing around with genetic organisms, there could be unforeseen consequences," he said.

https://www.nbcnews.com/news/amp/ncna1249914
« Last Edit: December 06, 2020, 12:21:48 AM by vox_mundi »

vox_mundi

  • Multi-year ice
  • Posts: 10166
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #574 on: December 06, 2020, 12:21:03 AM »
Companies Are Now Writing Reports Tailored for AI Readers – and It Should Worry Us
https://www.theguardian.com/commentisfree/2020/dec/05/companies-are-now-writing-reports-tailored-for-ai-readers-and-it-should-worry-us

A recent study, published by the National Bureau of Economic Research (NBER): How to Talk When a Machine Is Listening: Corporate Disclosure in the Age of AI, suggests lengthy, complex corporate filings are increasingly read by, and written for, machines.

https://www.nber.org/papers/w27950

The paper is an analysis of the 10-K and 10-Q filings that American public companies are obliged to file with the Securities and Exchange Commission (SEC). The 10-K is a version of a company’s annual report, but without the glossy photos and PR hype: a corporate nerd’s delight. It has, says one guide, “the-everything-and-the-kitchen-sink data you can spend hours going through – everything from the geographic source of revenue to the maturity schedule of bonds the company has issued”. Some investors and commentators find the 10-K impenetrable, but for those who possess the requisite stamina (big companies can have 10-Ks that run to several hundred pages), that’s the kind of thing they like. The 10-Q filing is the 10-K’s quarterly little brother.

The observation that triggered the research reported in the paper was that “mechanical” (ie machine-generated) downloads of corporate 10-K and 10-Q filings increased from 360,861 in 2003 to about 165m in 2016, when 78% of all downloads appear to have been triggered by request from a computer. A good deal of research in AI now goes into assessing how good computers are at extracting actionable meaning from such a tsunami of data. There’s a lot riding on this, because the output of machine-read reports is the feedstock that can drive algorithmic traders, robot investment advisers, and quantitative analysts of all stripes.

The NBER researchers, however, looked at the supply side of the tsunami – how companies have adjusted their language and reporting in order to achieve maximum impact with algorithms that are reading their corporate disclosures. And what they found is instructive for anyone wondering what life in an algorithmically dominated future might be like.

The researchers found that “increasing machine and AI readership … motivates firms to prepare filings that are more friendly to machine parsing and processing”. So far, so predictable. But there’s more: “firms with high expected machine downloads manage textual sentiment and audio emotion in ways catered to machine and AI readers”.

In other words, machine readability – measured in terms of how easily the information can be parsed and processed by an algorithm – has become an important factor in composing company reports. So a table in a report might have a low readability score because its formatting makes it difficult for a machine to recognise it as a table; but the same table could receive a high readability score if it made effective use of tagging.
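As a purely illustrative example (not from the paper): the same figures are opaque or transparent to a machine reader depending on markup. A small sketch in Python, with invented tags and numbers, showing how explicit tagging lets a naive parser recover table structure that a plain-text layout loses:

```python
# Illustrative only: hypothetical revenue figures as free text vs.
# explicitly tagged markup. A naive machine reader recovers (year,
# revenue) pairs only from the tagged version.
import re

plain = "Revenue 2015   1,200   Revenue 2016   1,450"

tagged = (
    '<row><cell name="year">2015</cell><cell name="revenue">1200</cell></row>'
    '<row><cell name="year">2016</cell><cell name="revenue">1450</cell></row>'
)

def parse_tagged(doc):
    """Recover (year, revenue) pairs from explicitly tagged cells."""
    rows = re.findall(r'<row>(.*?)</row>', doc)
    out = []
    for row in rows:
        cells = dict(re.findall(r'<cell name="(\w+)">(\w+)</cell>', row))
        out.append((int(cells["year"]), int(cells["revenue"])))
    return out

print(parse_tagged(tagged))   # structured: [(2015, 1200), (2016, 1450)]
# By contrast, the plain string is just an ambiguous token stream:
print(plain.split())
```

Real filings use standardised tagging (XBRL) rather than this made-up scheme, but the readability asymmetry is the same.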

The researchers contend, though, that companies are now going beyond machine readability to try to adjust the sentiment and tone of their reports in ways that might induce algorithmic “readers” to draw favourable conclusions about the content. They do so by avoiding words that are listed as negative in the criteria given to text-reading algorithms. And they are also adjusting the tone of voice used in the standard quarterly conference calls with analysts, because they suspect those on the other end of the call are using voice-analysis software to identify vocal patterns and emotions in their commentary.
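The avoidance strategy targets dictionary-based scoring. A minimal sketch of how such a score is computed (the word list here is a tiny invented stand-in for real finance sentiment dictionaries such as Loughran-McDonald, which run to thousands of terms):

```python
# Sketch of dictionary-based sentiment scoring of the kind filings are
# tuned against. NEGATIVE is a toy stand-in for a real sentiment list.
NEGATIVE = {"loss", "litigation", "impairment", "decline", "adverse"}

def negative_tone(text):
    """Fraction of words that appear on the negative list."""
    words = [w.strip(".,").lower() for w in text.split()]
    hits = sum(1 for w in words if w in NEGATIVE)
    return hits / len(words) if words else 0.0

before = "We recorded a loss and face litigation and impairment charges."
after_ = "We recorded a shortfall and face legal proceedings and writedowns."

print(negative_tone(before) > negative_tone(after_))  # True
```

The second sentence says much the same thing, but scores as neutral because its synonyms are off-list — exactly the kind of substitution the paper describes.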

In one sense, this kind of arms race is predictable in any human activity where a market edge may be acquired by whoever has better technology. It’s a bit like the war between Google and the so-called “optimisers” who try to figure out how to game the latest version of the search engine’s page ranking algorithm. But at another level, it’s an example of how we are being changed by digital technology – as Brett Frischmann and Evan Selinger argued in their sobering book Re-Engineering Humanity.

https://www.cambridge.org/core/books/reengineering-humanity/379F3C68F6AAC6C0C3998C14DACC38CF



... the purpose of this absurd challenge? To convince the computer hosting the site that I am not a robot. It was an inverted Turing test in other words: instead of a machine trying to fool a human into thinking that it was human, I was called upon to convince a computer that I was a human. I was being re-engineered. The road to the future has taken a funny turn.

-----------------------------------------------

A Robot Is Now Making Jamba Smoothies In a California Walmart In Less Than 3 Minutes
https://www.businessinsider.com/jamba-using-robot-arm-smoothies-blendid-walmart-2020-12

Jamba has teamed up with Blendid, a robot smoothie maker, to unveil a Jamba by Blendid kiosk at a Walmart in Dixon, California.

The Blendid kiosk uses an artificial intelligence and machine learning-powered system with a robotic arm, blenders, a refrigerator, and ingredient dispensers.

In line with food service and retail trends, especially during COVID-19 times, the kiosk is contactless and operates autonomously.



... wonder if that robot would notice if a rat snuck into one of the blenders.
« Last Edit: December 06, 2020, 12:43:24 AM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10166
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #575 on: December 06, 2020, 10:23:20 PM »
Darktrace Says Its AI Can Be Used for Performance Monitoring
https://www.bloomberg.com/news/articles/2020-12-04/darktrace-says-its-ai-can-be-used-for-performance-monitoring

Darktrace Ltd.’s artificial intelligence could be used to better “understand the productivity and performance” of employees, the British company’s Chief Strategy Officer Nicole Eagan said.

The company’s technology works by learning how companies function, monitoring different processes such as employee email patterns to discover irregularities that might flag potential fraud or hacking vulnerabilities. Darktrace’s ability to autonomously respond to such irregularities could lead to other applications for businesses, Eagan said.

A number of companies have turned to employee-monitoring software during the pandemic to help manage productivity from a suddenly remote workforce. These types of programs, which can monitor how long workers are logged on or the amount of time it takes to finish a task, can raise privacy concerns, particularly if people aren’t aware of the surveillance.
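For concreteness, the logged-on-time metric such tools report is simple arithmetic over event logs. A toy sketch with an invented log format:

```python
# Toy sketch of the session-duration metric monitoring tools report:
# total logged-on time summed from (event, timestamp) pairs.
# The log format here is invented for the example.
from datetime import datetime

events = [
    ("login",  "2020-12-04 09:00"),
    ("logout", "2020-12-04 12:30"),
    ("login",  "2020-12-04 13:15"),
    ("logout", "2020-12-04 17:00"),
]

def hours_online(log):
    """Sum the hours between each login and the following logout."""
    total, start = 0.0, None
    for kind, ts in log:
        t = datetime.strptime(ts, "%Y-%m-%d %H:%M")
        if kind == "login":
            start = t
        elif kind == "logout" and start is not None:
            total += (t - start).total_seconds() / 3600
            start = None
    return total

print(hours_online(events))  # 7.25
```

The privacy concern is less the arithmetic than who sees the result and whether the person logged knows it is being collected.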

Barclays Plc was investigated by the U.K.’s Information Commissioner’s Office earlier this year for using employee-monitoring tools. The bank said the rollout was a limited pilot.

Darktrace also aims to make its AI more holistic, moving to systems that can be “self healing” after a cyberattack, Eagan said. The company last year introduced an AI “analyst” that can interpret the system’s own findings as it searches for irregularities.

-----------------------------------------



----------------------------------------

Prominent AI Ethics Researcher Fired: Google Puts Commercial Interests Ahead of Ethics
https://venturebeat.com/2020/12/04/ai-weekly-in-firing-timnit-gebru-google-puts-commercial-interests-ahead-of-ethics/

This week, leading AI researcher Timnit Gebru was fired from her position on an AI ethics team at Google in what she claims was retaliation for sending colleagues an email critical of the company’s managerial practices. The flashpoint was reportedly a paper Gebru coauthored that questioned the wisdom of building large language models and examined who benefits from them and who is disadvantaged.

Google AI lead Jeff Dean wrote in an email to employees following Gebru’s departure that the paper didn’t meet Google’s criteria for publication because it lacked reference to recent research. But from all appearances, Gebru’s work simply spotlighted well-understood problems with models like those deployed by Google, OpenAI, Facebook, Microsoft, and others. A draft obtained by VentureBeat discusses risks associated with deploying large language models, including the impact of their carbon footprint on marginalized communities and their tendency to perpetuate abusive language, hate speech, microaggressions, stereotypes, and other dehumanizing language aimed at specific groups of people.

... Gebru and colleagues’ assertion that language models can spout toxic content is similarly grounded in extensive prior research. In the language domain, a portion of the data used to train models is frequently sourced from communities with pervasive prejudice along gender, race, and religious lines. AI research firm OpenAI notes that this can lead to placing words like “naughty” or “sucked” near female pronouns and “Islam” near words like “terrorism.”
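The association the OpenAI example describes is, at its simplest, a co-occurrence statistic over the training corpus. A toy sketch (corpus and word lists invented; real studies use large corpora and measures such as pointwise mutual information):

```python
# Toy illustration of how corpus-level association bias is quantified:
# count sentences in which a target word and an attribute word co-occur.
# The corpus is invented; real analyses use billions of tokens.
corpus = [
    "she was naughty",
    "he was brave",
    "she was naughty again",
    "he was strong",
]

def cooccurs(target, attribute):
    """Number of sentences containing both target and attribute."""
    return sum(1 for s in corpus
               if target in s.split() and attribute in s.split())

print(cooccurs("she", "naughty"), cooccurs("he", "naughty"))  # 2 0
```

A model trained on such a skewed corpus inherits the skew, which is why the cited studies audit both the data and the trained models.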

Other studies, like one published by Intel, MIT, and Canadian AI initiative CIFAR researchers in April, have found high levels of stereotypical bias from some of the most popular models, including Google’s BERT and XLNet, OpenAI’s GPT-2, and Facebook’s RoBERTa. This bias could be leveraged by malicious actors to foment discord by spreading misinformation, disinformation, and outright lies that “radicalize individuals into violent far-right extremist ideologies and behaviors,” according to the Middlebury Institute of International Studies.

On the subject of bias, OpenAI, which made GPT-3 available via an API earlier this year, has only begun experimenting with safeguards, including “toxicity filters” to limit harmful language generation.

In the draft paper, Gebru and colleagues reasonably suggest that large language models have the potential to mislead AI researchers and prompt the general public to mistake their text as meaningful. “If a large language model … can manipulate linguistic form well enough to cheat its way through tests meant to require language understanding, have we learned anything of value about how to build machine language understanding or have we been led down the garden path?” the paper reads. “We advocate for an approach to research that centers the people who stand to be affected by the resulting technology, with a broad view on the possible ways that technology can affect people.”

It’s no secret that Google has commercial interests in conflict with the viewpoints expressed in the paper. Many of the large language models it develops power customer-facing products, including Cloud Translation API and Natural Language API. The company often touts its work in AI ethics and has seemingly — if reluctantly — tolerated internal research critical of its approaches in the past. Letting Gebru go would appear to mark a shift in thinking among Google’s leadership, particularly in light of the company’s crackdowns on dissent, most recently in the form of illegal spying on employees before firing them.

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #576 on: December 08, 2020, 04:09:19 AM »
These Three Companies Will Build Drones To Carry The Air Force's "Skyborg" AI Computer Brain
https://www.thedrive.com/the-war-zone/38015/these-three-companies-will-build-drones-to-carry-the-air-forces-skyborg-ai-computer-brain

The Skyborg program is seeking to infuse artificial intelligence-driven autonomy into modular unmanned air combat vehicles.



The Air Force says it has hired Boeing, General Atomics, and Kratos to build prototype "loyal wingman" type drones to carry systems developed under the Skyborg program. Through Skyborg, which you can read more about in The War Zone's previous reporting on this topic, the service is seeking to acquire a suite of artificial intelligence-driven capabilities that will be able to control loyal wingmen, as well as fully-autonomous unmanned combat air vehicles, or UCAVs.

“This award is a major step forward for our game-changing Skyborg capability – this award supporting our operational experimentation is truly where concepts become realities," Air Force Brigadier General Dale White, the service's Program Executive Officer for Fighters and Advanced Aircraft, said in a statement. "We will experiment to prove out this technology and to do that we will aggressively test and fly to get this capability into the hands of our warfighters."



Just last week, Boeing also announced that it had conducted a semi-autonomous test involving five small jet-powered drones flying networked together in support of the ATS program. Boeing has been using these lower-end surrogates to help develop and mature technologies for ATS since 2019.

Last week, General Atomics also revealed that it had conducted a semi-autonomous test involving one of its stealthy Avenger drones conducting a mock air-to-air mission together with five other simulated drones. In its press release, the company said that in this test the Avenger acted as "the flight surrogate for the Skyborg capability set."

It seems very likely that Kratos will supply examples of its XQ-58 Valkyrie drone, or variants or derivatives thereof, to the Air Force under this new contract. The Air Force has already been using its existing XQ-58 in various tests to lay the groundwork for future loyal wingmen and other advanced drone developments, including the Skyborg program.



---------------------------------------------

General Atomics Avenger Drone Flew An Autonomous Air-To-Air Mission Using An AI Brain
https://www.thedrive.com/the-war-zone/37973/general-atomics-avenger-drone-flew-an-autonomous-air-to-air-mission-using-an-ai-brain

General Atomics has revealed that it conducted a semi-autonomous flight test in October involving one of its stealthy Avenger drones equipped with an "autonomy engine" originally developed by the Defense Advanced Research Projects Agency and now managed by the U.S. Navy. The unmanned aircraft worked together with five other simulated Avengers to conduct a mock search for aerial threats in a designated area.

The star of this particular demonstration was the company-owned Avenger equipped with the "autonomy engine" that the Defense Advanced Research Projects Agency (DARPA) had developed as part of its Collaborative Operations in Denied Environment (CODE) program.



As its name implies, CODE was also heavily geared toward developing systems that would still work "in denied or contested airspace," especially in the face of significant electronic warfare jamming. "Using collaborative autonomy, CODE-enabled unmanned aircraft would find targets and engage them as appropriate under established rules of engagement, leverage nearby CODE-equipped systems with minimal supervision, and adapt to dynamic situations such as attrition of friendly forces or the emergence of unanticipated threats," DARPA's website says.

Though the CODE concept did involve networking with manned aircraft, it also envisioned groups of drones using the systems developed to operate as fully-autonomous swarms, as well.

"The CODE autonomy engine was implemented to further understand cognitive Artificial Intelligence (AI) processing on larger UAS platforms, such as Avenger," according to the General Atomics press release.

“For this initial flight, we used Avenger as the flight surrogate for the Skyborg capability set, which is a key focus for GA-ASI's emerging air-to-air portfolio," GA-ASI President Alexander added.



In August, an AI-driven "pilot" notably went undefeated against a human opponent in an entirely simulated dogfight as part of DARPA's AlphaDogfight Trials. This project is tied to the Agency's larger Air Combat Evolution (ACE) program, which is exploring how AI and machine learning could help automate various aspects of aerial combat, both with regards to manned and unmanned platforms.

------------------------------------------



A little-known U.S. start-up, Aevum, held an online rollout event yesterday for its Ravn X Autonomous Launch Vehicle, including footage of a full-size mockup of the aircraft. Established in 2016, the Huntsville, Alabama-based firm is proposing a reusable drone that will carry an underslung rocket that will, in turn, launch a small payload, such as a satellite, into low orbit.

https://www.thedrive.com/the-war-zone/37949/aevums-space-launch-plane-is-a-5-vigilante-sized-its-claims-are-even-bigger

------------------------------------------------

Rheinmetall's New Autonomous Armed Reconnaissance Robot Also Provides Fire Support
https://newatlas.com/military/rheinmetall-armed-reconnaissance-combat-robot/

German defense and security technology firm Rheinmetall has unveiled its latest armed battlefield robot designed for tactical intelligence gathering and combat support. Part of the company's Mission Master Autonomous – Unmanned Ground Vehicle (A-UGV) family, the Mission Master – Armed Reconnaissance system is capable of not only carrying out recon missions, but also providing fire support for troops.



The six-wheeled Armed Reconnaissance features a 3.5 m retractable mast, assorted sensors including an infrared sensor, a surveillance radar, a 360° camera, a laser rangefinder, a target designator and a 7.62 mm gun.

Rheinmetall's Mission Master – Armed Reconnaissance robot is designed for high-risk scouting missions that require real-time retrieval of large amounts of data. To do this, the company took its Mission Master platform and added a sensor suite and a Rheinmetall Fieldranger Remote-Controlled Weapon Station (RCWS) to provide fire support when needed.



Operating as a team, Mission Master robots can complete various tasks including slew-to-cue, zone surveillance, reconnaissance and target position transfer. The robots accomplish such tasks by communicating with each other and using artificial intelligence (AI) to achieve the situational awareness demanded of such missions.

An entire 'Wolf Pack' can be controlled by a single remote operator using LTE, SATCOM or a military cloud.

Should the Mission Master – Armed Reconnaissance robot need to engage with hostile forces, it has a Rheinmetall Fieldranger Light 7.62 mm RCWS, which has more firepower than its soldier-portable equivalent.


vox_mundi
Re: Robots and AI: Our Immortality or Extinction
« Reply #578 on: December 10, 2020, 11:50:32 PM »
Panel Details Global Artificial Intelligence Arms Race
https://news.usni.org/2020/12/09/panel-details-global-artificial-intelligence-arms-race#more-82009

Harnessing artificial intelligence and machine learning technologies has become the new arms race among the great powers, a Hudson Institute panel on handling big data in military operations said Monday.

Speaking at the online forum, Richard Schultz, director of the international security program in the Fletcher School at Tufts University, said, “that’s the way [Russian President Vladimir] Putin looks at it. I don’t think we have a choice” but to view it the same way.

He added in answer to a question that “the data in information space is enormous,” so finding tools to filter out what’s not necessary is critical. U.S. Special Operations Command is already using AI to do what in the old days was called political or psychological warfare, in addition to targeting, he added.

“Big Data at War”: https://mwi.usma.edu/big-data-at-war-special-operations-forces-project-maven-and-twenty-first-century-warfare/



He specifically cited the value of Google's Project Maven as improving a field commander’s ability to more effectively command and control his unit’s operation in conflict – from when, what and where to fire, to not shooting at all.

Project Maven is a Pentagon project using machine learning to sort through masses of intelligence, surveillance and reconnaissance data – unmanned systems video, paper, computer hard drives, thumb drives and more – collected by the department and intelligence agencies for operational use across the services. It has sometimes been called “algorithmic warfare.”

In a combat situation, Clark said, “you’re not trying to kill every bad guy out there” but rather are targeting a leader or a group of leaders. AI has already gained a strong foothold in logistics and maintenance in Pentagon thinking and is now making its way to commanders.

“Maven has made some inroads [because] it is actively giving them courses of actions” and even parallel courses of actions to take simultaneously to further confuse an enemy.

... Clarke said a number of these AI and machine learning ideas would be tested in the Army’s Project Convergence next year.

“We can prevail” against the Chinese and the Russians in the new arms race, Schultz said. “We just need to be able to harness [artificial intelligence].”



---------------------------------------------------------------

The New Laws of Robotics — Building on Asimov's Science Fiction Legacy In the Age of AI
https://www.abc.net.au/news/2020-12-10/new-laws-of-robotics-what-they-mean-for-ai/12947424?section=technology

Asimov was essentially an optimist, but he realised that future AI devices, and their designers, might need a little help keeping on the straight and narrow.

Hence his famous Three Laws, which have influence in science and technology circles to this day.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
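The three laws form a strict priority ordering: each law binds only so far as it does not conflict with the laws above it. A toy sketch of that ordering (the predicates are invented booleans, not a serious ethics model):

```python
# Toy sketch of Asimov's strict priority ordering. An action is judged
# by the highest-priority law it violates; lower laws can never
# override higher ones. Predicates are invented for the example.
LAWS = [
    ("First",  lambda a: not a["harms_human"]),
    ("Second", lambda a: not a["disobeys_order"]),
    ("Third",  lambda a: not a["endangers_self"]),
]

def highest_violation(action):
    """Return the name of the highest-priority law violated, or None."""
    for name, satisfied in LAWS:
        if not satisfied(action):
            return name
    return None

# Shielding a person at the robot's own expense violates only the
# Third Law -- the case the ordering is designed to let be overridden.
shield = {"harms_human": False, "disobeys_order": False,
          "endangers_self": True}
print(highest_violation(shield))  # Third
```

Pasquale's point, in these terms, is that the interesting failures today are in who writes the predicates, not in the priority logic.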

Now, almost 80 years later, legal academic and artificial intelligence expert Frank Pasquale has added four additional principles.

Professor Pasquale says while Asimov's ideas were well founded, they assumed a certain technological trajectory that no longer holds — innovations are not always for the good of humanity.

...

New law 1: AI should complement professionals, not replace them

New law 2: Robotic systems and AI should not counterfeit humanity

New law 3: Robotic systems and AI should not intensify zero-sum arms races

The unchecked development of smart robotic weapons systems risks spiralling out of control, Professor Pasquale warns.

And given our track record with other military spending, there's every reason to expect an arms race over the development and deployment of AI weaponry.

"Very early on I think we have to say how we get societies to recognise that this is destructive, it's not providing real human services, it's just investing in the history of destruction," Professor Pasquale says.

New law 4: Robotic systems and AI must always indicate the identity of their creator(s), controller(s) and owner(s)
« Last Edit: December 11, 2020, 12:13:52 AM by vox_mundi »

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #579 on: December 11, 2020, 12:55:01 AM »
Artificial Chemist 2.0: Quantum Dot R&D In Less Than an Hour
https://phys.org/news/2020-12-artificial-chemist-quantum-dot-hour.html

A new technology, called Artificial Chemist 2.0, allows users to go from requesting a custom quantum dot to completing the relevant R&D and beginning manufacturing in less than an hour. The tech is completely autonomous, and uses artificial intelligence (AI) and automated robotic systems to perform multi-step chemical synthesis and analysis.

Quantum dots are colloidal semiconductor nanocrystals, which are used in applications such as LED displays and solar cells.



From a user standpoint, the whole process essentially consists of three steps. First, a user tells Artificial Chemist 2.0 the parameters for the desired quantum dots. For example, what color light do you want to produce? The second step is effectively the R&D stage, where Artificial Chemist 2.0 autonomously conducts a series of rapid experiments, allowing it to identify the optimum material and the most efficient means of producing that material. Third, the system switches over to manufacturing the desired amount of the material.

"And the first time you set up Artificial Chemist 2.0 to produce quantum dots in any given class, the robot autonomously runs a set of active learning experiments. This is how the brain of the robotic system learns the materials chemistry," Abolhasani says. "Depending on the class of material, this learning stage can take between one and 10 hours. After that one-time active learning period, Artificial Chemist 2.0 can identify the best possible formulation for producing the desired quantum dots from 20 million possible combinations with multiple manufacturing steps in 40 minutes or less."

The researchers note that the R&D process will almost certainly become faster every time people use it, since the AI algorithm that runs the system will learn more—and become more efficient—with every material that it is asked to identify.
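The propose-test-update loop described here can be sketched generically. This is a deliberately simplified stand-in, not the authors' algorithm: a made-up 1-D objective in place of a real synthesis response, a few random experiments, then greedy refinement around the best recipe found so far.

```python
# Generic sketch of an autonomous experiment-selection loop: after a
# few random experiments, repeatedly run the untried recipe closest to
# the best one found so far. run_experiment is a made-up stand-in for
# a real lab measurement; the real system searches ~20M formulations.
import random

def run_experiment(x):
    """Hypothetical measurement of recipe quality (peak at x = 0.62)."""
    return -(x - 0.62) ** 2

candidates = [i / 100 for i in range(101)]   # 101 candidate recipes
tried = {}
random.seed(0)

for _ in range(85):
    untried = [c for c in candidates if c not in tried]
    if len(tried) < 5:                       # initial exploration
        x = random.choice(untried)
    else:                                    # greedy local refinement
        best = max(tried, key=tried.get)
        x = min(untried, key=lambda c: abs(c - best))
    tried[x] = run_experiment(x)

print(max(tried, key=tried.get))  # 0.62
```

The self-improvement the researchers describe corresponds to reusing what the model has already learned, so later searches need fewer experiments than this cold-start loop.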

"We're excited about what this means for the specialty chemicals industry. It really accelerates R&D to warp speed, but it is also capable of making kilograms per day of high-value, precisely engineered quantum dots. Those are industrially relevant volumes of material."

Kameel Abdel-Latif et al, Self‐Driven Multistep Quantum Dot Synthesis Enabled by Autonomous Robotic Experimentation in Flow, Advanced Intelligent Systems, 10 December 2020
https://doi.org/10.1002/aisy.202000245

« Last Edit: December 11, 2020, 03:16:53 AM by vox_mundi »

Tom_Mazanec

  • Guest
Re: Robots and AI: Our Immortality or Extinction
« Reply #580 on: December 11, 2020, 02:11:33 AM »
A favorite scene from a favorite movie. THANKS!

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #581 on: December 11, 2020, 06:22:38 PM »
Chinese tech titan Huawei created AI software to pick out Uighurs and report them to police, the Washington Post reported Tuesday, citing internal documents.

Why this matters: “Artificial-intelligence researchers and human rights advocates said they worry the technology’s development and normalization could lead to its spread around the world, as government authorities elsewhere push for a fast and automated way to detect members of ethnic groups they’ve deemed undesirable or a danger to their political control.”

https://www.washingtonpost.com/technology/2020/12/08/huawei-tested-ai-software-that-could-recognize-uighur-minorities-alert-police-report-says/

---------------------------------------

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #582 on: December 14, 2020, 03:02:22 PM »
Gmail, YouTube, Google Docs and Other Services Go Down in Multiple Countries
https://techcrunch.com/2020/12/14/gmail-youtube-google-docs-and-other-services-go-down-simultaneously-in-multiple-countries/amp/
https://amp.cnn.com/cnn/2020/12/14/tech/google-youtube-gmail-down/index.html

Google's services went down momentarily Monday in a massive outage that prevented many people from watching YouTube videos, accessing their Google Docs or sending email on Gmail.

The outage also made Google Classroom temporarily unavailable, preventing many students learning remotely from accessing their classes.

The company's workspace status dashboard had been red across the board, with every single Google service indicating an outage. Later Monday morning, they all turned green, indicating that they're operating normally.

Multiple Google services have gone down. Gmail, YouTube, Google Drive, Google Docs, Maps, Adwords and Adsense, Google Pay, Google Home, Nest and Google’s Chromecast are all experiencing outages, with dozens, even hundreds, of reports we’ve seen so far coming in from across Europe, the US, Canada, India, South Africa, countries in Central and South America, Australia and likely more.

Downtime site indicators are showing big spikes for services dropping starting from around 11.30AM UK time.

https://downdetector.co.uk/status/gmail/

It’s an unprecedented failure for a system that has grown to be one of the biggest traffic and activity drivers on the internet.

It’s also an alarming reminder of just how far Google reaches, and how many of our services — productivity, entertainment, and home/utility — are tied up with a single, proprietary provider.

------------------------------------------

Austin-based SolarWinds at Center of Massive US Government Hack
https://www.kxan.com/news/local/austin/austin-based-solarwinds-at-center-of-massive-us-government-hack/amp/
https://www.npr.org/2020/12/14/946163194/russia-suspected-in-months-long-cyber-attack-on-federal-agencies

Russian hackers working for the Kremlin are believed to be behind an attack into U.S. government computer systems at the departments of Treasury and Commerce that may have lasted months before it was detected, according to U.S. officials and media reports.

The hackers reportedly broke into the email systems at those two government departments. But the full extent of the breach was not immediately clear as U.S. officials scrambled to make an assessment. There are concerns that hackers may have penetrated other government departments and perhaps private companies as well.

Reuters first reported the story on Sunday, and subsequent reports identified Russia's foreign intelligence service, the SVR, as the most likely culprit.

Russia's SVR, the rough equivalent to the CIA in the U.S., was blamed for major hacks in 2014-15 that involved unclassified email systems at the White House, State Department and the Joint Chiefs of Staff.

Meanwhile, the U.S. Cybersecurity and Infrastructure Security Agency (CISA), which is part of Homeland Security, issued an emergency directive overnight calling on all federal civilian agencies to review their computer networks for signs of the compromise and to disconnect from SolarWinds Orion products immediately.

SolarWinds has government contracts, including with the military and intelligence services, according to Reuters. The attackers are believed to have used a "supply chain attack" method that embeds malicious code into legitimate software updates.
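For context on the mechanics: clients typically trust whatever the vendor's build pipeline produces, and the basic client-side check is digest verification. A minimal sketch (filenames and contents are hypothetical). Note that this check is exactly what a SolarWinds-style compromise defeats, because the malicious build is produced and signed by the vendor itself:

```python
# Minimal sketch of client-side update verification: compare a
# downloaded artefact's SHA-256 digest against an independently
# published value. It stops tampering in transit, but not a
# compromised vendor build. Filenames/contents are hypothetical.
import hashlib
import hmac
import os
import tempfile

def sha256_of(path):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_update(path, published_digest):
    """Constant-time comparison against the published digest."""
    return hmac.compare_digest(sha256_of(path), published_digest)

# Demo with a stand-in "update" file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"pretend this is an installer")
    artefact = f.name

good = hashlib.sha256(b"pretend this is an installer").hexdigest()
print(verify_update(artefact, good))        # True
print(verify_update(artefact, "0" * 64))    # False
os.remove(artefact)
```

Defending against a poisoned build system requires stronger measures (reproducible builds, multi-party signing), which is why supply-chain attacks are so effective.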

On its website, SolarWinds says it has 300,000 customers worldwide, including all five branches of the U.S. military, the Pentagon, the State Department, NASA, the NSA, the Department of Justice and the White House. It says the 10 leading U.S. telecommunications companies, the top five U.S. accounting firms and 425 of the Fortune 500 are also among its customers.

https://www.cisa.gov/news/2020/12/13/cisa-issues-emergency-directive-mitigate-compromise-solarwinds-orion-network

"Tonight's directive is intended to mitigate potential compromises within federal civilian networks, and we urge all our partners — in the public and private sectors — to assess their exposure to this compromise and to secure their networks against any exploitation."

... Microsoft said in a blog post late Sunday, "We believe this is nation-state activity at significant scale, aimed at both the government and private sector."

... FireEye reported last week that hackers, also believed to be Russians, stole the company's key tools used to test vulnerabilities in the computer networks of its customers, which include government agencies.

FireEye said in a blog post late Sunday night that it had identified "a global campaign that introduces a compromise into the networks of public and private organizations through the software supply chain. The compromise is delivered through updates to a widely used IT infrastructure management software – the Orion network monitoring product from SolarWinds."

-----------------------------------------

U.S. Cybersecurity Firm FireEye Discloses Breach, Theft of Hacking Tools
https://www.reuters.com/article/us-fireeye-cyber/u-s-cybersecurity-firm-fireeye-discloses-breach-theft-of-hacking-tools-idUSKBN28I31E

(Reuters) - FireEye, one of the largest cybersecurity companies in the United States, said on Tuesday that it had been hacked, likely by a government, and that an arsenal of hacking tools used to test the defenses of its clients had been stolen.

A blog post by the company here said "red team tools" were stolen as part of a highly sophisticated, likely government-backed hacking operation that used previously unseen techniques.

Beyond the tool theft, the hackers also appeared to be interested in a subset of FireEye customers: government agencies.


The hack of FireEye, a company with an array of contracts across the national security space in the United States and among its allies, is among the most significant breaches in recent memory.

----------------------------------------

US Hospital Systems Facing 'Imminent' Threat of Cyber Attacks, FBI Warns
https://arstechnica.com/information-technology/2020/10/us-government-warns-of-imminent-ransomware-attacks-against-hospitals/
https://www.theguardian.com/society/2020/oct/28/us-healthcare-system-cyber-attacks-fbi

Federal agencies have warned that the US healthcare system is facing an “increased and imminent” threat of cybercrime, and that cybercriminals are unleashing a wave of extortion attempts designed to lock up hospital information systems, which could hurt patient care just as nationwide cases of Covid-19 are spiking.

https://us-cert.cisa.gov/sites/default/files/publications/AA20-302A_Ransomware%20_Activity_Targeting_the_Healthcare_and_Public_Health_Sector.pdf

“CISA, FBI, and HHS have credible information of an increased and imminent cybercrime threat to US hospitals and healthcare providers,” Wednesday evening’s advisory stated. “CISA, FBI, and HHS are sharing this information to provide warning to healthcare providers to ensure that they take timely and reasonable precautions to protect their networks from these threats.”

Security firm Mandiant said much the same in its own notice, which provided indicators of compromise that targeted organizations can use to determine if they were under attack.

https://www.fireeye.com/blog/threat-research/2020/10/kegtap-and-singlemalt-with-a-ransomware-chaser.html
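Indicator matching of the sort Mandiant's notice enables can be as simple as sweeping logs for known-bad values. A toy sketch (the indicator strings below are placeholders, not real IoCs from the report):

```python
# Minimal IoC sweep: flag log lines that mention any known-bad indicator.
# The domain values here are invented placeholders.
IOC_DOMAINS = {"bad.example.net", "evil.example.org"}

def flag_lines(log_lines):
    """Return (index, line) pairs for lines containing any indicator."""
    hits = []
    for i, line in enumerate(log_lines):
        if any(ioc in line for ioc in IOC_DOMAINS):
            hits.append((i, line))
    return hits
```

Real deployments would match on hashes, IPs and domains from the published feed rather than substring checks, but the workflow of sweeping existing telemetry against a published indicator set is the same.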

Mandiant Senior VP and CTO Charles Carmakal said in an email to reporters that the targeting was “the most significant cyber security threat we’ve ever seen in the United States.” He went on to describe the Russian hacking group behind the plans as “one of most brazen, heartless, and disruptive threat actors I’ve observed over my career.” Already several hospitals have come under attack in the past few days, he said.

In September, a ransomware attack hobbled all 400 US facilities of the hospital chain Universal Health Services, forcing doctors and nurses to rely on paper and pencil for record-keeping and slowing lab work. Employees described chaotic conditions impeding patient care, including mounting emergency room waits and the failure of wireless vital-signs monitoring equipment.

https://www.nbcnews.com/tech/security/cyberattack-hits-major-u-s-hospital-system-n1241254

CNN said it had confirmed that "Universal Health Services, a hospital health care service company based in Pennsylvania; St. Lawrence Health Systems in New York; and the Sky Lakes Medical Center in Oregon were all infected over the past few days."

------------------------------------------------------

Russian hackers hit US government using widespread supply chain attack
https://arstechnica.com/information-technology/2020/12/russian-hackers-hit-us-government-using-widespread-supply-chain-attack/

... On Sunday night, FireEye said the attackers were infecting targets using Orion, a widely used business software app from SolarWinds. After taking control of the Orion update mechanism, the attackers were using it to install a backdoor that FireEye researchers are calling Sunburst.

“FireEye has detected this activity at multiple entities worldwide,” FireEye researchers wrote. “The victims have included government, consulting, technology, telecom and extractive entities in North America, Europe, Asia and the Middle East. We anticipate there are additional victims in other countries and verticals. FireEye has notified all entities we are aware of being affected.”

https://www.fireeye.com/blog/threat-research/2020/12/evasive-attacker-leverages-solarwinds-supply-chain-compromises-with-sunburst-backdoor.html

After using the Orion update mechanism to gain a foothold on targeted networks, Microsoft said in its own post, the attackers are stealing signing certificates that allow them to impersonate any of a target's existing users and accounts, including highly privileged accounts.

https://blogs.microsoft.com/on-the-issues/2020/12/13/customers-protect-nation-state-cyberattacks/

In a separate post FireEye said it has identified multiple organizations that appear to have been infected as long ago as this past spring. “Our analysis indicates that these compromises are not self-propagating,” company researchers said. “Each of the attacks require meticulous planning and manual interaction.”
« Last Edit: December 14, 2020, 04:20:31 PM by vox_mundi »

kassy

  • First-year ice
  • Posts: 8235
    • View Profile
  • Liked: 2042
  • Likes Given: 1986
Re: Robots and AI: Our Immortality or Extinction
« Reply #583 on: December 14, 2020, 05:47:12 PM »
Some people were left in the dark or in the light because they had their lights tied to some Google web service. One guy tweeted about rethinking some choices.

I wondered why you would ever install a system that uses a non-local internet service to turn your lights on and off. There is exactly zero gain in that, but I bet the gadgets are cheaper to build.
This monument is to acknowledge that we know what is happening and what needs to be done. Only you know if we did it.

vox_mundi

  • Multi-year ice
  • Posts: 10166
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #584 on: December 16, 2020, 06:00:44 PM »
Walmart Will Use Fully Driverless Trucks to Make Deliveries In 2021
https://www.theverge.com/2020/12/15/22176179/walmart-fully-driverless-box-truck-delivery-gatik

Walmart will use fully autonomous box trucks to make deliveries in Arkansas starting in 2021. The big-box retailer has been working with a startup called Gatik on a delivery pilot for 18 months. Next year, the two companies plan on taking their partnership to the next level by removing the safety driver from their autonomous box trucks.

Gatik, which is based in Palo Alto and Toronto, outfitted several multitemperature box trucks with sensors and software to enable autonomous driving. Since last year, those trucks have been operating on a two-mile route between a “dark store” (a store that stocks items for fulfillment but isn’t open to the public) and a nearby Neighborhood Market in Bentonville, Arkansas, racking up 70,000 miles in autonomous mode with a safety driver.

Next year, the companies intend to start incorporating fully autonomous trucks into those deliveries. And they plan on expanding to a second location in Louisiana, where trucks with safety drivers will begin delivering items from a “live” Walmart Supercenter to a designated pickup location where customers can retrieve their orders. Those routes, which will begin next year, will be longer than the Arkansas operation — 20 miles between New Orleans and Metairie, Louisiana.

“Our trials with Gatik are just two of many use cases we’re testing with autonomous vehicles, and we’re excited to continue learning how we might incorporate them in a delivery ecosystem,” said Tom Ward, Walmart’s senior VP of customer product.

Walmart is working with a variety of self-driving companies in its search for the best fit for the company’s massive retail and delivery operations. In addition to Gatik, the big-box company is working with Waymo, Cruise, Nuro, Udelv, Baidu, Ford, and Postmates.



3 Million drivers may be out of a job by 2030

---------------------------------------------------

Amazon's Zoox Unveils Autonomous Electric Vehicle
https://techxplore.com/news/2020-12-amazon-zoox-unveils-autonomous-electric.html

An autonomous vehicle company acquired this year by Amazon has unveiled a four-person "robo-taxi," a compact, multidirectional vehicle designed for dense, urban environments.

The carriage-style interior of the vehicle produced by Zoox Inc. has two benches that face each other. There is no steering wheel. It measures just under 12 feet long, about a foot shorter than a standard Mini Cooper.

It is among the first vehicles with bidirectional capabilities and four-wheel steering, allowing for better maneuverability. It has a top speed of 75 miles per hour.



-------------------------------------
« Last Edit: December 16, 2020, 08:32:56 PM by vox_mundi »

Tor Bejnar

  • Young ice
  • Posts: 4606
    • View Profile
  • Liked: 879
  • Likes Given: 826
Re: Robots and AI: Our Immortality or Extinction
« Reply #585 on: December 16, 2020, 08:17:24 PM »
Years ago I was hitchhiking in NZ (where I was living) and this car slowed down and started to move off the road (in my direction).  I was delighted until the person in the driver's seat 'totally' took her eyes off the road and was looking at something in her lap.  It was very nerve wracking, even as the car proceeded to come to a stop just in front of me and to the side. (Yes, I got further away from the 'parking area on the side of the road' while this was happening.)  Approaching the stopped car, I realized it was an American car (steering wheel on the left side) and the person in the driver's seat was actually the passenger who had looked at a map as her husband, looking where he was going, steered and stopped.  I had not given a fleeting glance at the 'passenger' who never looked away.

I have no doubt my reaction to the first vehicles on autopilot I see will be the same. 
Quote
"Your Honor, I'm suing the Defendant because their autopiloted vehicle scared the s**t out of me."

"Just as EVs are required to broadcast sounds as they slink silently through parking lots, I hereby order all autopiloted vehicles to have signs posted on sides and front stating, 'Beware: vehicle is recording your reaction to discovering this vehicle has autopilot.'  As to your claim, you can do your own laundry."
Arctic ice is healthy for children and other living things because "we cannot negotiate with the melting point of ice"

vox_mundi

  • Multi-year ice
  • Posts: 10166
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #586 on: December 16, 2020, 08:39:22 PM »

vox_mundi

  • Multi-year ice
  • Posts: 10166
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #587 on: December 17, 2020, 07:38:09 PM »
Army Strengthens Future Tech With Muscle-Bound Robots
https://techxplore.com/news/2020-12-army-future-tech-muscle-bound-robots.html

Army Research Laboratory: Robotic systems packed with muscle tissue can produce never-seen-before agility and versatility, Army researchers said.

Researchers with the U.S. Army Combat Capabilities Development Command, now known as DEVCOM, Army Research Laboratory are teaming with collaborators at Duke University and the University of North Carolina on high-risk studies in biohybrid robotics.



"Though impressive in their own right, today's robots are deployed to serve a limited purpose then are retrieved some minutes later," said Dr. Dean Culver, a research scientist at the laboratory. "ARL wants robots to be versatile teammates capable of going anywhere Soldiers can and more, adapting to the needs of any given situation." Biohybrid robotics integrates living organisms into mechanical systems to improve performance.

"Organisms outperform engineered robots in so many ways. Why not use biological components to achieve those remarkable capabilities?" Culver asked rhetorically. [... What could possibly go wrong? ...] The team's proposal involves the behavior of the proteins that drive muscle performance, he said.

The first applications for biohybrid robotics the team expects to focus on are legged platforms similar to the Army's Legged Locomotion and Movement Adaptation research platform, known as LLAMA, and the U.S. Marine Corps' Legged Squad Support System, or LS3. Dean and his collaborators are also considering flapping-wing drones.

"One obstacle that faces ground-based robots today is an inability to instantly adjust or adapt to unstable terrain," Culver said. "Muscle actuation, though certainly not solely responsible for it, is a big contributor to animals' ability to navigate uneven and unreliable terrain. Similarly, flapping wings and flying organisms' ability to reconfigure their envelope gives them the ability to dart here and there even among branches. In multi-domain operations, this kind of agility and versatility means otherwise inaccessible areas are now viable, and those options can be critical to the U.S. military's success."

Army researchers will work on the theoretical mesomechanics that can be tested with the data collected from both the computational and experimental efforts.

Their research is expected to inform the biohybrid engineering community on how to culture strong muscle tissue rather than extract it from a trained organism, he said. In addition, he said researchers expect the research to offer insight into the mesomechanics that govern motor protein motion; the kind of motion responsible for muscle contraction overall.

Their work will be supplemented by a separate Duke University team working on macroscopic performance characteristics of muscle, tendon, and ligaments in jumping creatures for use in legged robots.

"Muscle tissue is outstanding at producing a specific amount of mechanical power at a given moment, and its versatility is unrivaled in robotic actuation today," he said.


vox_mundi

  • Multi-year ice
  • Posts: 10166
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #588 on: December 21, 2020, 09:56:39 PM »
Artificial Intelligence Solves Schrödinger's Equation
https://phys.org/news/2020-12-artificial-intelligence-schrdinger-equation.html


https://en.m.wikipedia.org/wiki/Schr%C3%B6dinger_equation#Equation

A team of scientists at Freie Universität Berlin has developed an artificial intelligence (AI) method for calculating the ground state of the Schrödinger equation in quantum chemistry. The goal of quantum chemistry is to predict chemical and physical properties of molecules based solely on the arrangement of their atoms in space, avoiding the need for resource-intensive and time-consuming laboratory experiments. In principle, this can be achieved by solving the Schrödinger equation, but in practice this is extremely difficult.

Up to now, it has been impossible to find an exact solution for arbitrary molecules that can be efficiently computed. But the team at Freie Universität has developed a deep learning method that can achieve an unprecedented combination of accuracy and computational efficiency.

Central to both quantum chemistry and the Schrödinger equation is the wave function—a mathematical object that completely specifies the behavior of the electrons in a molecule. The wave function is a high-dimensional entity, and it is therefore extremely difficult to capture all the nuances that encode how the individual electrons affect each other. Many methods of quantum chemistry in fact give up on expressing the wave function altogether, instead attempting only to determine the energy of a given molecule. This however requires approximations to be made, limiting the prediction quality of such methods.

Other methods represent the wave function with the use of an immense number of simple mathematical building blocks, but such methods are so complex that they are impossible to put into practice for more than a mere handful of atoms.

The deep neural network designed by Professor Noé's team is a new way of representing the wave functions of electrons. "Instead of the standard approach of composing the wave function from relatively simple mathematical components, we designed an artificial neural network capable of learning the complex patterns of how electrons are located around the nuclei," Noé explains. "One peculiar feature of electronic wave functions is their antisymmetry. When two electrons are exchanged, the wave function must change its sign. We had to build this property into the neural network architecture for the approach to work," adds Hermann. This feature, known as 'Pauli's exclusion principle,' is why the authors called their method 'PauliNet.'
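The antisymmetry constraint the researchers describe can be enforced by construction rather than learned from data. A toy one-dimensional, two-electron sketch (PauliNet itself uses determinant-based network layers, not this naive antisymmetrizer, which scales factorially with particle number):

```python
def antisymmetrize(f):
    """A[f](x1, x2) = f(x1, x2) - f(x2, x1): exchanging the two
    particles flips the sign of the output by construction."""
    return lambda x1, x2: f(x1, x2) - f(x2, x1)

# Stand-in for an arbitrary learned function of two electron positions.
base = lambda x1, x2: x1 * x1 + 2.0 * x2
psi = antisymmetrize(base)
```

A side effect of the construction is that psi(a, a) == 0 for any a: the wave function vanishes when two electrons occupy the same state, which is exactly the exclusion behavior the architecture is meant to guarantee.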

Besides the Pauli exclusion principle, electronic wave functions also have other fundamental physical properties, and much of the innovative success of PauliNet is that it integrates these properties into the deep neural network, rather than letting deep learning figure them out by just observing the data. "Building the fundamental physics into the AI is essential for its ability to make meaningful predictions in the field," says Noé. "This is really where scientists can make a substantial contribution to AI, and exactly what my group is focused on."

There are still many challenges to overcome before Hermann and Noé's method is ready for industrial application. "This is still fundamental research," the authors agree, "but it is a fresh approach to an age-old problem in the molecular and material sciences, and we are excited about the possibilities it opens up."

Jan Hermann et al. Deep-neural-network solution of the electronic Schrödinger equation, Nature Chemistry (2020).
https://www.nature.com/articles/s41557-020-0544-y

Abstract:... Here we propose PauliNet, a deep-learning wavefunction ansatz that achieves nearly exact solutions of the electronic Schrödinger equation for molecules with up to 30 electrons. PauliNet has a multireference Hartree–Fock solution built in as a baseline, incorporates the physics of valid wavefunctions and is trained using variational quantum Monte Carlo. PauliNet outperforms previous state-of-the-art variational ansatzes for atoms, diatomic molecules and a strongly correlated linear H10, and matches the accuracy of highly specialized quantum chemistry methods on the transition-state energy of cyclobutadiene, while being computationally efficient.

----------------------------------------

... this would have been very handy on my quantum mechanics final in P.Chem

vox_mundi

  • Multi-year ice
  • Posts: 10166
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #589 on: December 26, 2020, 01:02:19 AM »
DeepMind's New AI Masters Games Without Even Being Taught the Rules
https://techxplore.com/news/2020-12-deepmind-muzero-conquers.html
https://arstechnica.com/science/2020/12/google-develops-an-ai-that-can-learn-both-chess-and-pac-man/

DeepMind, a subsidiary of Alphabet, has previously made groundbreaking strides using reinforcement learning to teach programs to master the Chinese board game Go and the Japanese strategy game Shogi, as well as chess and challenging Atari video games. In all those instances, computers were given the rules of the game.

But Nature reported today that DeepMind's MuZero (μZero) has accomplished the same feats—and in some instances beaten the earlier programs—without first learning the rules. Another step closer to AGI.

Programmers at DeepMind relied on a principle called "look-ahead search." With that approach, MuZero assesses a number of potential moves based on how an opponent would respond. While there would likely be a staggering number of potential moves in complex games such as chess, MuZero prioritizes the most relevant and most likely maneuvers, learning from successful gambits and avoiding ones that failed.
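MuZero's actual planner is Monte Carlo tree search over learned dynamics, value and policy networks; but the core look-ahead idea, simulating candidate moves forward and backing up the best achievable evaluation, can be sketched as a depth-limited search over a toy model (all names below are illustrative, not DeepMind's API):

```python
def lookahead_value(state, depth, actions, step, value):
    """Depth-limited look-ahead: simulate each action with the model
    `step` and back up the best value reachable within `depth` moves."""
    if depth == 0:
        return value(state)
    return max(lookahead_value(step(state, a), depth - 1, actions, step, value)
               for a in actions)

def best_action(state, depth, actions, step, value):
    """Pick the action whose simulated future scores highest."""
    return max(actions,
               key=lambda a: lookahead_value(step(state, a), depth - 1,
                                             actions, step, value))

# Toy "game": walk an integer toward 10. The step and value functions
# stand in for MuZero's learned dynamics and value networks.
step = lambda s, a: s + a
value = lambda s: -abs(s - 10)
```

The prioritization of "most relevant" moves described above corresponds to MCTS's policy-guided node selection, which this exhaustive toy deliberately omits.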

"For the first time, we actually have a system that is able to build its own understanding of how the world works and use that understanding to do this kind of sophisticated look-ahead planning that you've previously seen for games like chess & Go," said DeepMind's principal research scientist David Silver. MuZero can "start from nothing, and just through trial and error, both discover the rules of the world and use those rules to achieve kind of superhuman performance."

Once trained, the system needs so little processing to make its decisions that its entire operation might be managed on a smartphone.

Silver envisions greater applications for MuZero than mere games. Progress has already been made on video compression, a challenging task considering the huge number of varying video formats and numerous modes of compression. So far, they have achieved a 5% improvement in compression, no small feat for the company owned by Google, which also handles the gigantic cache of videos on the world's second-most popular web site, YouTube, where a billion hours of content are viewed daily. (The No. 1 web site? Google.)

Silver says the laboratory is also looking into robotics programming and protein architecture design, which holds promise for personalized production of drugs.

One AI researcher quoted in the coverage also raised a concern about the potential for abuse. "My worry is that whilst constantly striving to improve the performance of their algorithms and apply the results for the benefit of society, the teams at DeepMind are not putting as much effort into thinking through potential unintended consequences of their work," she said.

In fact, the U.S. Air Force tapped early research papers covering MuZero that were made public last year and used the information to design an AI system that could hunt for missile launchers from a U-2 spy plane during a simulated strike.

... "I doubt the inventors of the jet engine were thinking about global pollution when they were working on their inventions. We must get that balance right in the development of AI technology."

Mastering Atari, Go, chess and shogi by planning with a learned model
https://www.nature.com/articles/s41586-020-03051-4

https://techcrunch.com/2020/12/23/no-rules-no-problem-deepminds-muzero-masters-games-while-learning-how-to-play-them/amp/

https://www.bbc.com/news/technology-55403473

https://deepmind.com/research/publications/Mastering-Atari-Go-Chess-and-Shogi-by-Planning-with-a-Learned-Model

----------------------------------------------

Artificial Intelligence Takes Control Of A U-2 Spy Plane's Sensors In Historic Flight Test
https://www.thedrive.com/the-war-zone/38202/artificial-intelligence-takes-control-of-a-u-2-spy-planes-sensors-in-historic-flight-test

Artificial intelligence-driven algorithms controlled sensor and navigation systems on a U.S. Air Force U-2S Dragon Lady spy plane in a flight test yesterday. The service says that this is the first time that artificial intelligence has been "safely" put in charge of any U.S. military system and appears to be the first time it has been utilized on a military aircraft anywhere in the world, at least publicly.

The test, which took place on Dec. 15, 2020, involved a U-2S from the 9th Reconnaissance Wing at Beale Air Force Base in California. The Air Force has dubbed the artificial intelligence (AI) software package ARTUµ, the latest in a string of references, in recent Air Force projects on AI and autonomous flight, to the iconic Star Wars droid that serves as a sort of robotic flight engineer and navigator.


https://mobile.twitter.com/WILLROP3R/status/1339209367262904320

... "Call sign 'Artuμ,' we modified world-leading μZero gaming algorithms to operate the U-2's radar," Roper wrote in his Tweet about the test. "This first AI copilot even served as mission commander on its seminal training flight!"

Our demo flew a reconnaissance mission during a simulated missile strike at Beale Air Force Base on Tuesday. ARTUµ searched for enemy launchers while our pilot searched for threatening aircraft, both sharing the U-2’s radar. With no pilot override, ARTUµ made final calls on devoting the radar to missile hunting versus self-protection. Luke Skywalker certainly never took such orders from his X-Wing sidekick!

Like a breaker box for code, the U-2 gave ARTUµ complete radar control while “switching off” access to other subsystems. The design allows operators to choose what AI won’t do to accept the operational risk of what it will.
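That "breaker box" arrangement, enabling exactly the subsystems the AI is trusted with and failing closed on everything else, can be modeled as an allowlist proxy. A minimal sketch (the class and subsystem names are invented for illustration, not the Air Force's design):

```python
class SubsystemGate:
    """Allowlist proxy: the agent may invoke only explicitly enabled
    subsystems; every other call fails closed."""
    def __init__(self, subsystems, enabled):
        self._subsystems = subsystems   # name -> callable
        self._enabled = set(enabled)    # the "breakers" left switched on

    def invoke(self, name, *args):
        if name not in self._enabled:
            raise PermissionError(f"subsystem '{name}' is switched off")
        return self._subsystems[name](*args)
```

Handing the agent a gate built with enabled={"radar"} gives it full radar control while any attempt to touch, say, flight controls raises an error, matching the idea of choosing what the AI won't do in order to accept the risk of what it will.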


-------------------------------------------

AI Copilot: Air Force Achieves First Military Flight With Artificial Intelligence
https://www.af.mil/News/Article-Display/Article/2448376/ai-copilot-air-force-achieves-first-military-flight-with-artificial-intelligence/

The AI algorithm, developed by Air Combat Command’s U-2 Federal Laboratory, trained the AI to execute specific in-flight tasks that would otherwise be done by the pilot. The flight was part of a specifically constructed scenario pitting the AI against another dynamic computer algorithm in order to prove both the new technology capability, and its ability to work in coordination with a human.

... During this flight, ARTUµ was responsible for sensor employment and tactical navigation, while the pilot flew the aircraft and coordinated with the AI on sensor operation. Together, they flew a reconnaissance mission during a simulated missile strike. ARTUµ’s primary responsibility was finding enemy launchers while the pilot was on the lookout for threatening aircraft, both sharing the U-2’s radar.

After takeoff, sensor control was positively handed off to ARTUµ, which then manipulated the sensor based on insight previously learned from over half a million computer-simulated training iterations. The pilot and AI successfully teamed to share the sensor and achieve the mission objectives.

The U-2 Federal Laboratory designed this AI technology to be easily transferable to other systems and plans to refine it further.



... DARPA also developed an entire library of AI algorithms relating to the operation of autonomous unmanned aircraft as part of its Collaborative Operations in Denied Environment (CODE) program, which it subsequently transferred to the U.S. Navy. An "autonomy engine" using software developed for CODE was recently used in a separate flight test involving a stealthy General Atomics Avenger drone conducting a mock aerial search mission.

-----------------------------------------------------------

Ripley : How many drops is this for you, Lieutenant?

Lieutenant Gorman : Thirty eight... simulated.

Private Vasquez : How many *combat* drops?

Lieutenant Gorman : Uh, two. Including this one.

Aliens (1986)


-------------------------------------------------------------

Exploring the Notion of Shortcut Learning In Deep Neural Networks
https://techxplore.com/news/2020-12-exploring-notion-shortcut-deep-neural.html

Over the past few years, artificial intelligence (AI) tools, particularly deep neural networks, have achieved remarkable results on a number of tasks. However, recent studies have found that these computational techniques have a number of limitations. In a recent paper published in Nature Machine Intelligence, researchers at Tübingen and Toronto universities explored and discussed a problem known as 'shortcut learning' that appears to underpin many of the shortcomings of deep neural networks identified in recent years.

'Shortcut learning,' or 'cheating,' appears to be a common characteristic across both artificial and biological intelligence.

... The term shortcut learning describes the process through which machines attempt to identify the simplest solution, or 'shortcut,' to a given problem. For example, a deep neural network may learn that a particular texture patch or part of an object (e.g., a car tire) is typically enough to predict the presence of a car in an image, and might thus start predicting 'car' even for images that contain only car tires.

"Shortcut learning essentially means that neural networks love to cheat," Geirhos said. "At first glance, AI often seems to work excellently—for example, it can recognize whether a picture contains animals, e.g., sheep. Only upon closer inspection, it is discovered that the neural network has cheated and just looked at the background."

An example of a neural network cheating is a situation in which it categorizes an empty green landscape as 'sheep' simply because it previously processed images in which sheep were standing in front of a natural landscape, while failing to recognize an actual sheep when it is in an unusual setting (e.g., on the beach). This is one of the many examples that Geirhos and his colleagues mention in their paper.
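The failure mode is easy to reproduce in miniature. In the toy dataset below (entirely invented), 'background' predicts the training labels perfectly, so a learner that memorises the single most convenient feature cheats on it and then misfires on a sheep photographed on a beach:

```python
# Each example: (features, label). In training, background correlates
# perfectly with the label, so a lazy learner can "cheat" on it.
train = [
    ({"object": "sheep", "background": "grass"}, "sheep"),
    ({"object": "sheep", "background": "grass"}, "sheep"),
    ({"object": "car",   "background": "road"},  "car"),
    ({"object": "car",   "background": "road"},  "car"),
]

def fit_shortcut(train, feature):
    """'Learn' by memorising which value of one feature goes with which label."""
    return {x[feature]: y for x, y in train}

def predict(rule, feature, x, default="unknown"):
    return rule.get(x[feature], default)

bg_rule = fit_shortcut(train, "background")   # 100% training accuracy
```

The background rule scores perfectly on the training set, which is exactly why a benchmark number alone cannot tell you whether the model learned the object or the scenery.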

While this is a straightforward example of shortcut learning, often these patterns of cheating are far more subtle. They can be so subtle that researchers sometimes struggle to identify the cheating strategy that an artificial neural network is adopting and may simply be aware that it is not solving a task in the way they hoped it would.

"This pattern of cheating has parallels in everyday life, for example, when pupils prepare for class tests and only learn facts by heart without developing a true understanding of the problem," Geirhos said. "Unfortunately, in the field of AI, shortcut learning not only leads to deceptively good performance, but under certain circumstances, also to discrimination, for example, when an AI prefers to propose men for jobs because previous positions have already been filled mainly by men."

"We encourage our colleagues to jointly develop and apply stronger test procedures: As long as one has not examined whether an algorithm can cope with unexpected images, such as a cow on the beach, cheating must at least be considered a serious possibility," Geirhos said. "All that glitters is not gold: Just because AI is reported to achieve high scores on a benchmark doesn't mean that AI has also solved the problem we actually care about; sometimes, AI just finds a shortcut. Fortunately, however, current methods of artificial intelligence are by no means stupid, just too lazy: If challenged sufficiently, they can learn highly complex relationships—but if they have discovered a simple shortcut, they would be the last to complain about it."

Shortcut learning in deep neural networks. Nature Machine Intelligence (2020).
https://www.nature.com/articles/s42256-020-00257-z
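The sheep-on-the-beach failure is easy to reproduce in miniature. The sketch below is my own illustration, not the paper's code: a tiny logistic regression sees two made-up features, a very noisy "object" cue and a nearly clean "background" cue that happens to correlate perfectly with the label during training. The model latches onto the background shortcut, so it scores well on data drawn the same way but collapses once the correlation is broken.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Toy "images" reduced to two features:
#   object cue     (is there a sheep?)   - genuine signal, but very noisy
#   background cue (is there a pasture?) - nearly clean, and perfectly
#                                          correlated with the label here
y = rng.integers(0, 2, n)          # 1 = sheep, 0 = no sheep
s = 2.0 * y - 1.0                  # signed label
X = np.column_stack([s + rng.normal(0, 3.0, n),    # object cue
                     s + rng.normal(0, 0.1, n)])   # background cue

# Minimal logistic regression trained by gradient descent
w = np.zeros(2)
for _ in range(3000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

def accuracy(X_eval, y_eval):
    return ((1 / (1 + np.exp(-X_eval @ w)) > 0.5) == y_eval).mean()

# In-distribution test set: the background shortcut still works
X_iid = np.column_stack([s + rng.normal(0, 3.0, n),
                         s + rng.normal(0, 0.1, n)])
acc_iid = accuracy(X_iid, y)

# "Sheep on the beach": object cue present, background cue neutral
m = 200
X_beach = np.column_stack([1 + rng.normal(0, 3.0, m), np.zeros(m)])
acc_beach = accuracy(X_beach, np.ones(m))

print(f"i.i.d. accuracy: {acc_iid:.2f}")   # high: the shortcut suffices
print(f"beach accuracy:  {acc_beach:.2f}") # drops without the background cue
```

This is exactly the test procedure the authors call for: the cheat is invisible on the benchmark and only shows up on deliberately out-of-distribution examples.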
« Last Edit: December 26, 2020, 02:26:26 AM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Sigmetnow

  • Multi-year ice
  • Posts: 25763
    • View Profile
  • Liked: 1153
  • Likes Given: 430
Re: Robots and AI: Our Immortality or Extinction
« Reply #590 on: December 28, 2020, 09:26:48 PM »
Quote
Lex Fridman (@lexfridman) 12/27/20, 1:03 AM
Can you?
https://twitter.com/lexfridman/status/1343075030155059200

Elon Musk:  Good point
⬇️ Meme below.
People who say it cannot be done should not interrupt those who are doing it.

Sigmetnow

  • Multi-year ice
  • Posts: 25763
    • View Profile
  • Liked: 1153
  • Likes Given: 430
Re: Robots and AI: Our Immortality or Extinction
« Reply #591 on: December 29, 2020, 09:54:03 PM »
 :o  ;D 
This is not CGI.
Do You Love Me?


“Happy New Year from Boston Dynamics.”

vox_mundi

  • Multi-year ice
  • Posts: 10166
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #592 on: December 29, 2020, 10:20:08 PM »

OrganicSu

  • Frazil ice
  • Posts: 124
    • View Profile
  • Liked: 9
  • Likes Given: 2
Re: Robots and AI: Our Immortality or Extinction
« Reply #593 on: December 30, 2020, 12:56:27 PM »
Quote
:o  ;D
This is not CGI.
Do You Love Me?


“Happy New Year from Boston Dynamics.”

Loving conspiracy theories as a way to see what's really happening behind the scenes, I wondered if Covid was being used to peacefully transfer power from the USofA to China. Having seen the above video, with many places entering hard lockdown, I'm now wondering if Covid will be used to let robots (non-biological living organisms) out of their laboratory lockdowns earlier than we will get out of ours. When we freely walk and drive the streets again, with 'whom' will we be sharing that space?

vox_mundi

  • Multi-year ice
  • Posts: 10166
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #594 on: December 30, 2020, 09:55:41 PM »
AI-Controlled Vertical Farms Promise Revolution In Food Production
https://techxplore.com/news/2020-12-ai-controlled-vertical-farms-revolution-food.html



These upright farms take up only 2 acres yet produce 720 acres' worth of fruit and vegetables. Lighting, temperature and watering are managed by AI-controlled robots. Sunlight is emulated by LED panels, so food is grown in optimal conditions 24/7. Water is recycled and evaporated water is recaptured, so there is virtually no waste.

The operation is so efficient it uses 99 percent less land and 95 percent less water than normal farming operations.

"Imagine a 1,500-acre farm," Storey says. "Now, imagine that fitting inside your favorite grocery store, growing up to 350 times more."

It is so efficient that these rows of hanging plants produce 400 times more food per acre than a traditional farm.
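The article's multipliers don't quite agree with each other (350 times, 400 times, and the 2-acre/720-acre figure), so it's worth checking what the acreage numbers actually imply. A quick back-of-the-envelope calculation, using only the figures quoted above:

```python
# Back-of-the-envelope check of the acreage figures quoted in the article
indoor_acres = 2
field_equivalent_acres = 720

yield_multiplier = field_equivalent_acres / indoor_acres
land_saving = 1 - indoor_acres / field_equivalent_acres

print(yield_multiplier)      # 360.0
print(f"{land_saving:.1%}")  # 99.7%
```

So the 2-versus-720-acre comparison implies roughly a 360-fold yield per acre, and a land saving of about 99.7 percent, which is at least consistent with the "99 percent less land" claim.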

... In October, Driscoll's, a leading producer of fresh berries, reached an agreement with Plenty to produce strawberries year-round at its farming operation in Laramie, Wyoming, currently the largest privately owned vertical farming and research facility in the world.

The Plenty website lists several products currently offered in stores, including lettuce, arugula, bok choy, mizuna and kale.

https://www.plenty.ag/about-us/

-----------------------------------------

... just don't ask how many tons of  protein it produces per acre ...

vox_mundi

  • Multi-year ice
  • Posts: 10166
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #595 on: January 04, 2021, 10:50:27 PM »
2021: The Year Autonomous Trucks Will Take to the Road With No One on Board
https://spectrum.ieee.org/transportation/self-driving/this-year-autonomous-trucks-will-take-to-the-road-with-no-one-on-board

The startup TuSimple is deploying tractor-trailers that drive themselves from pickup to delivery.



TuSimple claims that its approach is unique because its equipment is purpose built from the ground up for trucks. “Most of the other companies in this space got the seeds of their ideas from the DARPA Grand and Urban Challenges for autonomous vehicles,” says Chuck Price, chief product officer at TuSimple. “But the dynamics and functional behaviors of trucks are very different.”

The biggest difference is that trucks need to be able to sense conditions farther in advance, to allow for their longer stopping distance. The 200-meter practical range of lidar that most autonomous cars use as their primary sensor is simply not good enough for a fully loaded truck traveling at 120 kilometers per hour. Instead, TuSimple relies on multiple HD cameras that are looking up to 1,000 meters ahead whenever possible. The system detects other vehicles and calculates their trajectories at that distance, which Price says is approximately twice as far out as professional truck drivers look while driving.
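The stopping-distance argument is easy to sanity-check with the standard kinematic formula d = v² / (2a) plus a perception allowance. The deceleration and reaction-time values below are my own illustrative assumptions for a loaded tractor-trailer, not TuSimple figures:

```python
# Rough stopping-distance estimate for a loaded truck at highway speed,
# using d = v^2 / (2a) plus a perception/reaction allowance.
v = 120 / 3.6    # 120 km/h converted to metres per second
a = 2.5          # assumed braking deceleration for a loaded truck, m/s^2
t_react = 1.5    # assumed time to detect a hazard and begin braking, s

braking_distance = v ** 2 / (2 * a)
total_distance = v * t_react + braking_distance
print(f"braking: {braking_distance:.0f} m, total: {total_distance:.0f} m")
# -> braking: 222 m, total: 272 m
```

Under these assumptions the truck needs well over 200 m to stop, which is why a lidar with a 200-meter practical range leaves no margin and long-range cameras become attractive.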

Price argues that this capability gives TuSimple’s system more time to make decisions about the safest and most efficient way to drive. Indeed, its trucks use their brakes less often than trucks operated by human drivers, leading to improvements in fuel economy of about 10 percent. ... Price adds that autonomous trucks could also help address a shortage of truck drivers, which is expected to grow at an alarming rate.

TuSimple’s fleet of 40 autonomous trucks has been hauling goods between freight depots in Phoenix, Tucson, Dallas, El Paso, Houston, and San Antonio. These routes are about 95 percent highway, but the trucks can also autonomously handle surface streets, bringing their cargo the entire distance, from depot driveway to depot driveway. Its vehicles join a growing fleet of robotic trucks from competitors such as Aurora, Embark, Locomation, Plus.ai, and even Waymo, the Alphabet spin-off that has long focused on self-driving cars.

By 2024, TuSimple plans to achieve Level 4 autonomy, meaning that its trucks will be able to operate without a human driver under limited conditions that may include time of day, weather, or premapped routes. At that point, TuSimple would start selling the trucks to fleet operators. Along the way, however, there are several other milestones the company must hit, beginning with its first “driver out” test in 2021, which Price describes as a critical real-world demonstration.

vox_mundi

  • Multi-year ice
  • Posts: 10166
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #596 on: January 06, 2021, 05:36:57 PM »
New Module for OpenAI GPT-3 Creates Unique Images from Text
https://techxplore.com/news/2021-01-module-openai-gpt-unique-images.html


“an armchair in the shape of an avocado”

A team of researchers at OpenAI, a San Francisco artificial intelligence development company, has added a new module to its GPT-3 autoregressive language model. Called DALL·E, the module accepts a text prompt describing multiple characteristics, analyzes it, and then draws a picture based on what it believes was described.

On the webpage introducing DALL·E, the OpenAI team describes it as "a simple decoder-only transformer" and notes that it plans to provide more details about the module's architecture and uses as the team learns more about it.

GPT-3 was developed by the company to demonstrate how far neural networks could take text processing and creation. It analyzes user-selected text and generates new text based on that input. In this new effort, the researchers have extended this ability to graphics. A user types in a sentence and DALL·E attempts to generate what is described using graphics and other imagery.

Try it: DALL·E: Creating Images from Text:
https://openai.com/blog/dall-e/
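OpenAI's phrase "a simple decoder-only transformer" suggests that text tokens and image tokens live in one shared sequence, and the model just keeps predicting the next token past the end of the prompt. The toy sketch below illustrates only that sequencing idea; the trained transformer is replaced by a random stand-in, and all vocabulary sizes are invented for illustration.

```python
import numpy as np

TEXT_VOCAB = 256    # invented: size of the text-token vocabulary
IMAGE_VOCAB = 512   # invented: size of a discrete image-token codebook
IMAGE_TOKENS = 16   # invented: a real model would emit a much longer grid

rng = np.random.default_rng(0)

def next_token_logits(sequence):
    """Stand-in for the trained transformer: random logits over image tokens."""
    return rng.normal(size=IMAGE_VOCAB)

def generate_image_tokens(text_tokens):
    """Autoregressively append image tokens after the text prompt."""
    seq = list(text_tokens)
    for _ in range(IMAGE_TOKENS):
        logits = next_token_logits(seq)
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        # Image tokens are offset past the text vocabulary so that both
        # modalities share a single token space.
        seq.append(TEXT_VOCAB + rng.choice(IMAGE_VOCAB, p=probs))
    return seq[len(text_tokens):]   # an image decoder would render these

tokens = generate_image_tokens([3, 17, 42])   # stand-in for an encoded prompt
print(len(tokens))  # 16
```

In the real system a separate image decoder would turn the generated token grid back into pixels; here the point is only that "drawing" reduces to ordinary next-token prediction.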

kassy

  • First-year ice
  • Posts: 8235
    • View Profile
  • Liked: 2042
  • Likes Given: 1986
Re: Robots and AI: Our Immortality or Extinction
« Reply #597 on: January 06, 2021, 08:25:05 PM »
Not sure if chairs or fruit...
This monument is to acknowledge that we know what is happening and what needs to be done. Only you know if we did anything.

vox_mundi

  • Multi-year ice
  • Posts: 10166
    • View Profile
  • Liked: 3510
  • Likes Given: 745
Re: Robots and AI: Our Immortality or Extinction
« Reply #598 on: January 06, 2021, 08:45:50 PM »
Gives new meaning to the phrase "bald-headed fart" ...


gerontocrat

  • Multi-year ice
  • Posts: 20384
    • View Profile
  • Liked: 5289
  • Likes Given: 69
Re: Robots and AI: Our Immortality or Extinction
« Reply #599 on: January 08, 2021, 12:32:10 PM »
Not a lot of people know this...
Every time we use the word “robot” for a humanoid machine, we are borrowing from the Czech “robota”, meaning forced labour.

https://www.theguardian.com/stage/2021/jan/07/robot-wars-100-years-reboot-karel-capek-play-rur-rossums-universal-robots
Robot wars: 100 years on, it's time to reboot Karel Čapek's RUR

The play Rossum’s Universal Robots clearly belongs to the 1920s but its satirical take on the meeting of humans and machines is all too relevant today

Quote
......the robots prove to be stronger and more intelligent than their creators and eventually wipe out virtually all humankind. Only a single engineer survives who, a touch improbably, shows two robots transformed by love.

But I don’t see Čapek’s play as anti-science: initially it suggests robots can relieve humanity of demeaning drudgery. What the play is actually attacking is capitalist greed in that overproduction precipitates the crisis. “Do you know what has caused this calamity? Sheer volume!” cries the marketing manager in the most recent translation by Peter Majer and Cathy Porter. The point is reinforced when the idealistic engineer claims: “Dividends will be the ruin of humanity.” Čapek’s target is not technology as such but its commercial exploitation. Look up artificial intelligence online today and you will find it being promoted with the revealing phrase “future-proof your competitive advantage”.
"For the Cause of the People, the Struggle Continues!"
"And that's all I'm going to say about that". Forrest Gump
"Damn, I wanted to see what happened next" (Epitaph)