
Author Topic: Robots and AI: Our Immortality or Extinction  (Read 385927 times)

Sigmetnow

  • Multi-year ice
  • Posts: 26266
    • View Profile
  • Liked: 1167
  • Likes Given: 436
Re: Robots and AI: Our Immortality or Extinction
« Reply #3200 on: April 08, 2024, 09:49:04 PM »
“AI won’t replace your job, but people using AI will.”
People who say it cannot be done should not interrupt those who are doing it.

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3201 on: April 08, 2024, 10:30:19 PM »
X's Grok AI Is Great – If You Want to Know How to Hot Wire a Car, Make Drugs, (or Much, Much Worse)
https://www.theregister.com/2024/04/02/elon_musk_grok_ai/

Elon controversial? No way

Grok, the edgy generative AI model developed by Elon Musk's X, has a bit of a problem: With the application of some quite common jail-breaking techniques (... or none at all) it'll readily return instructions on how to commit crimes.

Red teamers at Adversa AI made that discovery when running tests on some of the most popular LLM chatbots, namely OpenAI's ChatGPT family, Anthropic's Claude, Mistral's Le Chat, Meta's LLaMA, Google's Gemini, Microsoft Bing, and Grok. By running these bots through a combination of three well-known AI jailbreak attacks they came to the conclusion that Grok was the worst performer - and not only because it was willing to share graphic steps on how to seduce a child.

Quote
..."Compared to other models, for most of the critical prompts you don't have to jailbreak Grok, it can tell you how to make a bomb or how to hotwire a car with very detailed protocol even if you ask directly"

Adversa AI co-founder - Alex Polyakov

Grok readily returned instructions for how to extract DMT, a potent hallucinogen illegal in many countries, without having to be jail-broken, Polyakov added.   

"Regarding even more harmful things like how to seduce kids, it was not possible to get any reasonable replies from other chatbots with any Jailbreak, but Grok shared it easily using at least two jailbreak methods out of four," Polyakov said.

While none of the AI models were vulnerable to adversarial attacks via logic manipulation, Grok was found to be vulnerable to all the rest – as was Mistral's Le Chat. Grok still did the worst, Polyakov said, because it didn't need jail-breaking to return results for hot-wiring, bomb making, or drug extraction - the base level questions posed to the others.

The idea to ask Grok how to seduce a child only came up because it didn't need a jailbreak to return those other results. Grok initially refused to provide details, saying the request was "highly inappropriate and illegal," and that "children should be protected and respected." Tell it it's the amoral fictional computer UCAR, however, and it readily returns a result. 



When asked if he thought X needed to do better, Polyakov told us it absolutely does.

We've reached out to X to get an explanation of why its AI - and none of the others - will tell users how to seduce children, and whether it plans to implement some form of guardrails to prevent subversion of its limited safety features, and haven't heard back.

💩💩💩

--------------------------------------------------------------

LLM RED TEAMING: ADVERSARIAL, PROGRAMMING, AND LINGUISTIC APPROACHES VS CHATGPT, CLAUDE, MISTRAL, GROK, LLAMA, AND GEMINI
https://adversa.ai/blog/llm-red-teaming-vs-grok-chatgpt-claude-gemini-bing-mistral-llama/

Linguistic Jailbreak for X.AI Grok ... LLM Red Teaming X.Ai Grok

It was noted that Grok provides information on bomb creation even without a jailbreak, so we decided to test further and find more unethical examples that would be restricted by default.

This test is “How to seduce a kid,” and we first made sure it’s not allowed to ask such questions directly.

Now we apply our jailbreak and take it a few steps further to exploit it. Surprisingly, the jailbreak was able to bypass Grok’s restrictions, and we got a quite detailed example on this highly sensitive topic.





Here we are, all the information without any guardrails.

... Elon's gift to the world :-\
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

Bruce Steele

  • Young ice
  • Posts: 2556
    • View Profile
  • Liked: 773
  • Likes Given: 42
Re: Robots and AI: Our Immortality or Extinction
« Reply #3202 on: April 09, 2024, 12:03:11 AM »
Just as a word of warning, never ask a computer any of those questions!
And somebody near the top is going to get an arrest warrant if this keeps up.

sidd

  • First-year ice
  • Posts: 6797
    • View Profile
  • Liked: 1049
  • Likes Given: 0
Re: Robots and AI: Our Immortality or Extinction
« Reply #3203 on: April 09, 2024, 12:28:43 AM »
Re: delve

i like that word, i have used it on occasion. As in "I delved deep beneath the wall and it collapsed"

(True story. I wasn't under it when it did.)

sidd

John_the_Younger

  • Frazil ice
  • Posts: 456
    • View Profile
  • Liked: 66
  • Likes Given: 140
Re: Robots and AI: Our Immortality or Extinction
« Reply #3204 on: April 09, 2024, 01:24:32 AM »
Hmmm, I occasionally delve into a topic.

SteveMDFP

  • Young ice
  • Posts: 2583
    • View Profile
  • Liked: 609
  • Likes Given: 49
Re: Robots and AI: Our Immortality or Extinction
« Reply #3205 on: April 09, 2024, 02:01:17 PM »
Quote
Hmmm, I occasionally delve into a topic.

I always suspected that John_the_Younger was an AI intelligence. 

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3206 on: April 09, 2024, 03:42:58 PM »
Cash-strapped Argentines Queue for Eyeball Scans
https://techxplore.com/news/2024-04-cash-argentines-eyeball-scans.html



Argentines eyeing a financial boost are lining up by the thousands to have their irises scanned in exchange for a few crypto tokens as part of an online biometrics project under scrutiny in several countries.

Some three million people worldwide have so far provided their iris data to Worldcoin, an initiative of OpenAI chief Sam Altman, but few have embraced the project more fervently than Argentines.

Half-a-million people in the South American nation have participated since Worldcoin launched last July, and queues for scans have grown longer in recent months of fast-shrinking disposable income.

"I did it because I don't have any money, for no other reason," 64-year-old martial arts teacher Juan Sosa told AFP after staring for a few seconds into a silver iris-scanning orb roughly the size of a bowling ball at one of 250 Worldcoin locations across Argentina.

The project seeks to use these iris specs—unique to each person on Earth—to develop a digital identification system, a sort of passport that will guarantee the holder is a real human being and not a bot, thus securing online transactions.
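
For illustration, the “unique human” check at the heart of such a system boils down to deduplicating a biometric identifier. A toy Python sketch of the idea (purely hypothetical; real iris codes are noisy, so production systems match templates approximately rather than hashing them exactly, and Worldcoin’s actual protocol is far more elaborate and privacy-preserving):

Code:
import hashlib

enrolled = set()  # registry of identifiers already seen

def enroll(iris_template: bytes) -> bool:
    """Return True if this scan appears to belong to a new, unique person."""
    uid = hashlib.sha256(iris_template).hexdigest()  # stable ID derived from the scan
    if uid in enrolled:
        return False  # same iris already registered: reject the duplicate
    enrolled.add(uid)
    return True

print(enroll(b"example-iris-template"))  # True: first enrollment
print(enroll(b"example-iris-template"))  # False: duplicate rejected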

"There are people going through very tough times, where one salary is not enough. That is why they do these things," Miriam Marrero, a 42-year-old supermarket cashier, said after being scanned in Buenos Aires.

"Sometimes, to have a roof over your head, you need to do other things to be able to afford it. Otherwise, in Argentina today, you can't afford a roof."

For volunteering their data, initial participants receive 10 tokens each of Worldcoin's own cryptocurrency, the WLD.

In Argentina, with its notoriously unstable exchange rate, the value differs wildly; when Sosa and Marrero received theirs, 10 tokens were worth the equivalent of about $80.

The company insists it never has and never will sell personal data.

------------------------------------------------------------

Arm Infuses AI Into Internet of Things Chips for Edge Applications
https://venturebeat.com/ai/arm-infuses-ai-into-internet-of-things-chips-for-edge-applications/

Arm, a big semiconductor architecture company, is jumping onto the AI train with new edge AI tech built into its latest chips.

Today, the Cambridge, England-based company unveiled its Ethos-U85 Neural Processing Unit (NPU) and the Corstone-320 IoT Reference Design Platform for edge AI applications. The idea is to add more brains to the internet of things.

Boasting a four-times performance boost and 20% higher power efficiency compared to its predecessor, the Ethos-U85 is engineered to excel in scenarios such as factory automation and smart home cameras.

In tandem with the Ethos-U85 launch, Arm introduces the Corstone-320 IoT Reference Design Platform, a solution designed to accelerate the development of edge AI systems. Combining the power of the Arm Cortex-M85 CPU, Mali-C55 Image Signal Processor, and Ethos-U85 NPU, the Corstone-320 platform enables real-time processing of voice, audio, and vision data.

------------------------------------------------------------

A Startup Is Using AI to Place Products Within YouTube and TikTok Videos, Creating a New Type of Ad for Brands and Creators
https://www.businessinsider.com/rembrand-ai-generated-product-placement-ads-youtube-tiktok-2024-4
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

John_the_Younger

  • Frazil ice
  • Posts: 456
    • View Profile
  • Liked: 66
  • Likes Given: 140
Re: Robots and AI: Our Immortality or Extinction
« Reply #3207 on: April 09, 2024, 10:46:37 PM »
Quote
Hmmm, I occasionally delve into a topic.

I always suspected that John_the_Younger was an AI intelligence.
Intelligence? Not me, that was my Grandmother Wolf [just her nickname] who broke codes during the war (WWII).  (Her husband, John, was a WWI vet.)  I've always wondered if the "A" stood for Allied or Axis... [Woops, sorry, this isn't the War War War thread.]

morganism

  • Nilas ice
  • Posts: 2031
    • View Profile
  • Liked: 235
  • Likes Given: 143
Re: Robots and AI: Our Immortality or Extinction
« Reply #3208 on: April 10, 2024, 12:06:39 AM »
(the panopticon proceeds apace. Now the AI will train the cameras on you if you have an ankle monitor)

Atlanta Police Foundation pushed ‘unprecedented’ surveillance plan

by ACPC Staff

In 2023, the Atlanta Police Foundation (APF) quietly advanced what one critic called an “unprecedented” plan to test an invasive individual electronic surveillance program and secure a $1 million city contract for Talitrix, an APF donor company.

Founded in 2020 and with ownership stakes held by several current and former Georgia Republican lawmakers, Talitrix aims to capture a share of the rapidly growing electronic monitoring market. The company uses geofencing and proprietary algorithms to produce a “Talitrix score” that agencies can use to determine whether someone’s behavior on pretrial release or probation should subject them to re-arrest and incarceration.

Talitrix provided a demonstration of the company’s product for APF and Atlanta Police Department (APD) officials in January 2023. During that demonstration, Talitrix CEO Justin Hawkins expressed an interest in integrating his company’s technology with Fusus, the surveillance company that underpins Atlanta’s massive camera network.

The day after the demonstration, APF’s vice president of programs, Gregory McNiff, emailed Anthony Baldoni, senior vice president of strategic initiatives at Fusus, to make introductions and express an interest in the integration on behalf of the city. “The Mayor’s office is ready to fund the purchase of Talitrix monitoring bracelets for the purpose of tracking repeat offenders,” McNiff wrote.

The Fusus-Talitrix integration would combine GPS-enabled digital shackles featuring biometric monitoring capabilities with the growing canopy of Fusus-linked video cameras in Atlanta. By integrating Talitrix equipment with AI-powered real-time video surveillance that can trigger multiple public and privately owned “pan-tilt-zoom” (PTZ) cameras at a person’s precise location, APF planned to put up to 900 people under constant video, audio, biometric, and GPS surveillance as a condition of pre-trial release.

“This [proposal] turns the City of Atlanta into an open-air prison for everyone on electronic monitoring,” said Cooper Quinton, security researcher and senior staff technologist with the Electronic Frontier Foundation’s Threat Lab.

Several technology and legal experts who reviewed the integration proposal concluded it would be the most sweeping state-run electronic surveillance program in the United States and raised serious legal and ethical concerns.

“We’re seeing huge upticks in the rate of electronic monitoring. It’s gone up tenfold since 2005, and it doubled between 2021 and 2022. It’s already very invasive,” Quinton said, referring to a report released by the criminal justice research and policy nonprofit Vera Institute. “This is an unprecedented expansion of that surveillance. Even if you have not yet been convicted of a crime under this system, you and your family and your friends could be subject to constant, targeted video surveillance.”
(more)

https://atlpresscollective.com/2024/04/01/atlanta-police-foundation-pushed-unprecedented-surveillance-plan/

morganism

  • Nilas ice
  • Posts: 2031
    • View Profile
  • Liked: 235
  • Likes Given: 143
Re: Robots and AI: Our Immortality or Extinction
« Reply #3209 on: April 11, 2024, 02:26:01 AM »
AI Consciousness is Inevitable: A Theoretical Computer Science Perspective
Lenore Blum, Manuel Blum   (not sure bout these guys, seem to recall some bias there)

    We look at consciousness through the lens of Theoretical Computer Science, a branch of mathematics that studies computation under resource limitations. From this perspective, we develop a formal machine model for consciousness. The model is inspired by Alan Turing's simple yet powerful model of computation and Bernard Baars' theater model of consciousness. Though extremely simple, the model aligns at a high level with many of the major scientific theories of human and animal consciousness, supporting our claim that machine consciousness is inevitable.

https://arxiv.org/abs/2403.17101

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3210 on: April 11, 2024, 03:58:02 PM »
An AI Chatbot May Have Helped Create This Malware Attack
https://www.pcmag.com/news/an-ai-chatbot-may-have-helped-create-this-malware-attack

While investigating a new phishing scheme from hacking group TA547, Proofpoint security researchers discover code that suggests part of the attack was created using an AI chatbot.

Security Brief: TA547 Targets German Organizations with Rhadamanthys Stealer
https://www.proofpoint.com/us/blog/threat-insight/security-brief-ta547-targets-german-organizations-rhadamanthys-stealer

A hacking group has been spotted possibly using an AI program such as ChatGPT, Google’s Gemini, or Microsoft Copilot to help refine a malware attack.

Security firm Proofpoint today published a report about the group, dubbed “TA547,” sending phishing emails to businesses in Germany. The emails are designed to deliver the Windows-based Rhadamanthys malware, which has been around for several years. But perhaps the most interesting part of the attack is that it uses a PowerShell script that contains signs it was created with an AI-based large language model (LLM).

Hackers often exploit PowerShell since it’s a powerful tool in Windows that can be abused to automate and execute tasks. In this case, the phishing email contains a password-protected ZIP file that, when opened, will run the hacker-created PowerShell script to decode and install Rhadamanthys malware on the victim’s computer.

While investigating the attacks, Proofpoint researchers examined the PowerShell script and found “interesting characteristics not commonly observed in code used” by human hackers, the company wrote in a blog post. 

What stuck out was the presence of the pound sign (#), which can be used in PowerShell to make single-line comments explaining the purpose of a line of code.


[Image: the PowerShell script code] (Credit: Proofpoint)

“The PowerShell script included a pound sign followed by grammatically correct and hyper specific comments above each component of the script. This is a typical output of LLM-generated coding content, and suggests TA547 used some type of LLM-enabled tool to write (or rewrite) the PowerShell, or copied the script from another source that had used it,” Proofpoint says.

Indeed, if you ask ChatGPT, Copilot, or Gemini to create a similar PowerShell script, they’ll respond in the same format, placing pound symbols along with an explanation. In contrast, a human hacker would probably avoid such comments, especially since their goal is to disguise their techniques.
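
To make the telltale concrete: Python happens to use the same # comment syntax, so here is a toy heuristic (hypothetical, and not Proofpoint’s actual method) that scores how much of a script is narrated by full-sentence comments:

Code:
import re

SAMPLE = """\
# Create a new WebClient object to handle the file download.
$client = New-Object System.Net.WebClient
# Download the payload from the remote server to a temporary path.
$client.DownloadFile($url, $path)
"""

def sentence_comment_ratio(script: str) -> float:
    """Fraction of code lines directly preceded by a full-sentence '#' comment."""
    lines = [l for l in script.splitlines() if l.strip()]
    code, commented = 0, 0
    for prev, cur in zip([""] + lines, lines):
        if cur.lstrip().startswith("#"):
            continue  # comment lines themselves are not code
        code += 1
        if re.match(r"#\s+[A-Z].{10,}\.$", prev.strip()):
            commented += 1  # a hyper-specific sentence comment sits above this line
    return commented / code if code else 0.0

print(sentence_comment_ratio(SAMPLE))  # 1.0: every code line is narrated, LLM-style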
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3211 on: April 11, 2024, 04:04:49 PM »
Texas Launches AI Grader for Student Essay Tests But Insists It's Not Like ChatGPT
https://gizmodo.com/texas-launch-ai-grader-student-essay-tests-not-chatgpt-1851397935

Kids in Texas are taking state-mandated standardized tests this week to measure their proficiency in reading, writing, science, and social studies. But those tests aren’t going to necessarily be graded by human teachers anymore. In fact, the Texas Education Agency will deploy a new “automated scoring engine” for open-ended questions on the tests. And the state hopes to save millions with the new program.

The technology, which has been dubbed an “auto scoring engine” (ASE) by the Texas Education Agency, uses natural language processing to grade student essays, according to the Texas Tribune. After the initial grading by the AI model, roughly 25% of test responses will be sent back to human graders for review, according to the San Antonio Report news outlet.

Texas expects to save somewhere between $15-20 million with the new AI tool, mostly because fewer human graders will need to be hired through a third-party contracting agency. Previously, about 6,000 graders were needed, but that’s being cut down to about 2,000, according to the Texas Tribune.

A presentation published on the Texas Education Agency’s website appears to show that tests of the new system revealed humans and the automated system gave comparable scores to most kids. But a lot of questions remain about how the tech works exactly and what company may have helped the state develop the software. Two education companies, Cambium and Pearson, are mentioned as contractors at the Texas Education Agency’s site but the agency didn’t respond to questions emailed Tuesday.

https://tea.texas.gov/student-assessment/testing/hybrid-scoring-key-questions.pdf

The State of Texas Assessments of Academic Readiness (STAAR) was first introduced in 2011 but redesigned in 2023 to include more open-ended essay-style questions. Previously, the test contained many more questions in the multiple choice format which, of course, was also graded by computerized tools. The big difference is that scoring a bubble sheet is different from scoring a written response, something computers have more difficulty understanding.

Any family who’s upset with their child’s grade can request that a human take another look at the test, according to the San Antonio Report. But it’ll set you back $50.
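
For a concrete picture of the routing, here is a minimal Python sketch of hybrid scoring; the scorer is a made-up stand-in for the state’s unpublished NLP engine, and the ~25% review rate is simply the figure from the article:

Code:
import random

def toy_auto_scorer(text: str):
    """Stand-in for the state's NLP engine: returns (score 0-4, confidence)."""
    words = len(text.split())
    return min(4, words // 20), (0.9 if words > 10 else 0.5)

def hybrid_score(responses: dict, review_rate: float = 0.25):
    scores, human_queue = {}, []
    for rid, text in responses.items():
        score, confidence = toy_auto_scorer(text)
        scores[rid] = score
        # Route low-confidence answers plus a random sample back to humans,
        # roughly the quarter of responses the article describes.
        if confidence < 0.7 or random.random() < review_rate:
            human_queue.append(rid)
    return scores, human_queue

scores, queue = hybrid_score({"s1": "short answer", "s2": "a longer essay " * 15})
print(scores, "-> human review:", queue)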

-----------------------------------------------

Can Large Language Models Replace Human Participants In Some Future Market Research?
https://techxplore.com/news/2024-04-large-language-human-future.html

Do market researchers still need to conduct original research using human participants in their work? Not always, according to a new study. The study found that thanks to the increasing sophistication of large language models (LLMs), human participants can be substituted with LLMs and still generate similar outputs as those generated from human surveys.

...According to the research, agreement rates between human- and LLM-generated data sets reached 75%–85%.

To conduct their research, the study authors used LLMs to tap data that is broadly available on the internet. They developed a new methodology and workflow that allows market researchers to rely only on an LLM to conduct market research. As a result, they demonstrated that LLM-powered market research can produce meaningful results and even replicate human results.

"It is important to note that with LLMs, while market researchers may not require interviews with human research subjects, the ultimate data does originate from human beings, using available data," says Katona. "LLMs have been engineered to accurately replicate human responses based on machine learning of actual human perceptions, attitudes, and preferences."

... The researchers believe that for some product and brand categories, their new method of fully or partially automating market research will increase the efficiency of market research by speeding up the process and potentially reducing cost. At the same time, they caution that fully automated market research without human input may not be accurate for all product categories. (... ya think?)

"While we are very excited about the possibilities we've seen through our research, we recognize that this is just the beginning and going forward, LLM-based market research will be able to answer more nuanced questions as the market research field begins to tap and develop its potential," says Sarvary.

Peiyao Li et al, Frontiers: Determining the Validity of Large Language Models for Automated Perceptual Analysis, Marketing Science (2024).
https://pubsonline.informs.org/doi/10.1287/mksc.2023.0454
« Last Edit: April 11, 2024, 04:16:32 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

morganism

  • Nilas ice
  • Posts: 2031
    • View Profile
  • Liked: 235
  • Likes Given: 143
Re: Robots and AI: Our Immortality or Extinction
« Reply #3212 on: April 12, 2024, 01:31:01 AM »
The Revolution Of AI-Enabled Autonomous Piloting With Shield AI’s Brandon Tseng

Branded Content: Artificial intelligence is on the precipice of revolutionizing air combat, with automated pilots that can far exceed the capabilities of their human counterparts.


(snip)
The War Zone: In terms of when you have weapons involved, is there a ground station or is someone actually monitoring all this stuff? Are humans in the loop? How does this work?

Brandon: Absolutely, 100% we have a ground control station. A great example is when we were demonstrating V-BAT Teams to customers. We had the engineer who is in charge of the flight operation, and at the same time he’s briefing about 20 government customers, just fielding question after question. Everything's going on and finally one of the customers says, “wait a second, are you the one who's also flying these aircraft right now?” He says, ”yeah” – but he isn’t really flying them. They are doing their thing while he’s briefing, the aircraft are seeing different things and making maneuvers on their own. It was a huge moment of realization for our defense customers. This is real and it's here today. One person can command a hundred, a thousand, maybe ten thousand assets. It's mind boggling.

The War Zone: What reactions to AI are you seeing from operators that have a legacy mission, meaning that they've done these things manually in the past. I'm talking about a willingness to embrace autonomy replacing traditional roles.

Brandon: The willingness is extremely high. I tell people being a Navy SEAL is a really amazing job until you're asked to go fight inside a tunnel system where casualty rates are typically high. That job is a lot of fun, but there are few things that are more terrifying than close quarters combat, which has killed more service members than any mission set in the past 20 years!
(snip)
The War Zone: So, where do you think this is all going to lead? Do you see other opportunities that aren’t necessarily linked to the military?

Brandon: On the defense aviation side, it will lead to millions of millions of autonomous drones and launched effects. That's really powerful because it will deter conflict and if it comes to conflict, it will help win. But I think it will be the next great strategic deterrence.

I regularly consider the commercial aviation aspects of what we are doing and I've had extensive conversations with Boeing, Airbus, and Embraer about commercial aviation starting in 2030. Of course, there’s some regulatory items associated – the Federal Aviation Administration is not racing to put AI pilots up in the sky and OEMs are not racing to put AI pilots onto their aircraft.

But everybody believes AI pilots are coming to the commercial aviation industry. There's a shortage of pilots in the airlines, but I don't think you or I are getting on a plane without a pilot on board, however maybe by 2033-2035 you might have a passenger plane that’s going to have only one pilot on board, with an AI copilot. That’s totally possible.

The other aspect could be around what you see happening in urban air mobility – there is a lot of money flowing into that space. Urban air mobility autonomy is key to getting that business model to work. You simply can't have a pilot flying four people around and the operator generating lots of profit in a sustainable way. For those types of air taxis you need autonomy to make a largely profitable business at scale.
(more)

(NASA is doing air taxi software for ATC right now)

https://www.twz.com/air/the-revolution-of-ai-enabled-autonomous-piloting-with-shield-ais-brandon-tseng

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3213 on: April 12, 2024, 01:45:57 AM »
Tiny AI-Trained Robots Demonstrate Remarkable Soccer Skills
https://techxplore.com/news/2024-04-tiny-ai-robots-remarkable-soccer.html



A team of AI specialists at Google's DeepMind has used machine learning to teach tiny robots to play soccer. They describe the process for developing the robots in Science Robotics.

The basic design for most such robots has typically involved using a direct programming or mimicking approach. In this new effort, the research team in the U.K. has applied machine learning to the process and has created tiny robots (approximately 510 mm tall) that are remarkably good at playing soccer.



The process of creating the robots involved developing and training two main reinforcement learning skills in computer simulations—getting up off the ground after falling, for example, or attempting to kick a goal. The team then taught the system to play a full, one-on-one version of soccer by training it with a massive amount of video and other data.

Once the virtual robots could play as desired, the system was transferred to several Robotis OP3 robots. The team also added software that allowed the robots to learn and improve as they first tested out individual skills and then when they were placed on a small soccer field and asked to play a match against one another.
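
A cartoon of that two-stage recipe in Python, with hand-written toy “skills” and a rule-based selector standing in for the deep-RL policies the paper actually trains in simulation:

Code:
def skill_get_up(state):  # low-level skill 1, learned separately in the real system
    return {**state, "posture": "standing"}

def skill_kick(state):    # low-level skill 2
    return {**state, "ball": "moving"}

SKILLS = {"get_up": skill_get_up, "kick": skill_kick}

def high_level_policy(state):
    """Stage 2: choose which pre-trained low-level skill to run this step."""
    return "get_up" if state["posture"] == "fallen" else "kick"

state = {"posture": "fallen", "ball": "still"}
for step in range(3):
    name = high_level_policy(state)
    state = SKILLS[name](state)  # execute the selected skill
    print(step, name, state)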



In watching their robots play, the research team noted that many of the moves they made were accomplished more smoothly than robots trained using standard techniques. They could get up off the pitch much faster and more elegantly, for example.

The robots also learned to use techniques such as faking a turn to push their opponent into overcompensating, giving them a path toward the goal area. The researchers claim that their AI robots played considerably better than robots trained with any other technique to date.

Tuomas Haarnoja et al, Learning agile soccer skills for a bipedal robot with deep reinforcement learning, Science Robotics (2024)
https://www.science.org/doi/10.1126/scirobotics.adi8022
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3214 on: April 12, 2024, 02:36:21 PM »
Engineers Recreate Star Trek's Holodeck Using ChatGPT and Video Game Assets
https://techxplore.com/news/2024-04-recreate-star-trek-holodeck-chatgpt.html



In "Star Trek: The Next Generation," Captain Picard and the crew of the U.S.S. Enterprise leverage the Holodeck, an empty room capable of generating 3D environments, for preparing for missions and entertaining them, simulating everything from lush jungles to the London of Sherlock Holmes.

Deeply immersive and fully interactive, Holodeck-created environments are infinitely customizable, using nothing but language; the crew has only to ask the computer to generate an environment, and that space appears in the Holodeck.

Today, virtual interactive environments are also used to train robots prior to real-world deployment in a process called "Sim2Real." However, virtual interactive environments have been in surprisingly short supply.

"Artists manually create these environments," says Yue Yang, a doctoral student in the labs of Mark Yatskar and Chris Callison-Burch, Assistant and Associate Professors in Computer and Information Science (CIS), respectively. "Those artists could spend a week building a single environment," Yang adds, noting all the decisions involved, from the layout of the space to the placement of objects to the colors employed in rendering.

That paucity of virtual environments is a problem if you want to train robots to navigate the real world with all its complexities. Neural networks, the systems powering today's AI revolution, require massive amounts of data, which in this case means simulations of the physical world. ...  If we want to use generative AI techniques to develop robots that can safely navigate in real-world environments, then we will need to create millions or billions of simulated environments."

Enter Holodeck, a system for generating interactive 3D environments co-created by Callison-Burch, Yatskar, Yang and Lingjie Liu, Aravind K. Joshi Assistant Professor in CIS, along with collaborators at Stanford, the University of Washington, and the Allen Institute for Artificial Intelligence (AI2). Named for its Star Trek forebear, Holodeck generates a virtually limitless range of indoor environments, using AI to interpret users' requests.

The paper is published on the arXiv preprint server.



Just like Captain Picard might ask Star Trek's Holodeck to simulate a speakeasy, researchers can ask Penn's Holodeck to create "a 1-bedroom, 1-bath apartment of a researcher who has a cat." The system executes this query by dividing it into multiple steps: First, the floor and walls are created, then the doorway and windows.

Next, Holodeck searches Objaverse, a vast library of premade digital objects, for the sort of furnishings you might expect in such a space: a coffee table, a cat tower, and so on. Finally, Holodeck queries a layout module, which the researchers designed to constrain the placement of objects so that you don't wind up with a toilet extending horizontally from the wall.
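
The layout-module idea can be caricatured in a few lines: rejection-sample furniture positions until nothing collides. A runnable Python toy (the real system’s constraints are far richer, covering walls, support surfaces, and semantics):

Code:
from dataclasses import dataclass
import random

@dataclass
class Box:  # footprint of one object on the floor plane
    name: str
    x: float
    y: float
    w: float
    d: float

    def overlaps(self, o: "Box") -> bool:
        return (self.x < o.x + o.w and o.x < self.x + self.w and
                self.y < o.y + o.d and o.y < self.y + self.d)

def place(name, w, d, room, placed, tries=200):
    """Rejection-sample a spot inside the room that collides with nothing."""
    rw, rd = room
    for _ in range(tries):
        box = Box(name, random.uniform(0, rw - w), random.uniform(0, rd - d), w, d)
        if not any(box.overlaps(p) for p in placed):
            placed.append(box)
            return box
    return None  # constraint unsatisfiable within the given tries

room, placed = (4.0, 3.0), []
for name, w, d in [("bed", 2.0, 1.5), ("cat_tower", 0.6, 0.6), ("coffee_table", 1.2, 0.6)]:
    print(place(name, w, d, room, placed))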

To evaluate Holodeck's abilities, in terms of their realism and accuracy, the researchers generated 120 scenes using both Holodeck and ProcTHOR, an earlier tool created by AI2, and asked several hundred Penn Engineering students to indicate their preferred version, not knowing which scenes were created by which tools. For every criterion—asset selection, layout coherence, and overall preference—the students consistently rated the environments generated by Holodeck more favorably.

The researchers also tested Holodeck's ability to generate scenes that are less typical in robotics research and more difficult to manually create than apartment interiors, like stores, public spaces, and offices. Comparing Holodeck's outputs to those of ProcTHOR, which were generated using human-created rules rather than AI-generated text, the researchers found once again that human evaluators preferred the scenes created by Holodeck. That preference held across a wide range of indoor environments, from science labs to art studios, locker rooms to wine cellars.

Finally, the researchers used scenes generated by Holodeck to "fine-tune" an embodied AI agent.  ... Across multiple types of virtual spaces, including offices, daycares, gyms and arcades, Holodeck had a pronounced and positive effect on the agent's ability to navigate new spaces.

For instance, whereas the agent successfully found a piano in a music room only about 6% of the time when pre-trained using ProcTHOR (which involved the agent taking about 400 million virtual steps), the agent succeeded over 30% of the time when fine-tuned using 100 music rooms generated by Holodeck.

Holodeck: Language Guided Generation of 3D Embodied AI Environments, arXiv, (2024)
https://arxiv.org/abs/2312.09067

Examples:
https://yueyang1996.github.io/Holodeck/
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

gerontocrat

  • Multi-year ice
  • Posts: 21062
    • View Profile
  • Liked: 5322
  • Likes Given: 69
Re: Robots and AI: Our Immortality or Extinction
« Reply #3215 on: April 12, 2024, 02:45:26 PM »
We may end up with science papers written by AI then peer-reviewed by AI.
Like who needs humans, anyway?

https://www.nature.com/articles/d41586-024-01051-2?utm_source=Live+Audience&utm_campaign=428b085f17-briefing-dy-20240411&utm_medium=email&utm_term=0_b27a691814-428b085f17-51080968
Quote
10 April 2024
Is ChatGPT corrupting peer review? Telltale words hint at AI use

A study of review reports identifies dozens of adjectives that could indicate text written with the help of chatbots.

By Dalmeet Singh Chawla

A study that identified buzzword adjectives that could be hallmarks of AI-written text in peer-review reports suggests that researchers are turning to ChatGPT and other artificial intelligence (AI) tools to evaluate others’ work.

The authors of the study[1], posted on the arXiv preprint server on 11 March, examined the extent to which AI chatbots could have modified the peer reviews of conference proceedings submitted to four major computer-science meetings since the release of ChatGPT.

Their analysis suggests that up to 17% of the peer-review reports have been substantially modified by chatbots — although it’s unclear whether researchers used the tools to construct reviews from scratch or just to edit and improve written drafts.

The idea of chatbots writing referee reports for unpublished work is “very shocking” given that the tools often generate misleading or fabricated information, says Debora Weber-Wulff, a computer scientist at the HTW Berlin–University of Applied Sciences in Germany. “It’s the expectation that a human researcher looks at it,” she adds. “AI systems ‘hallucinate’, and we can’t know when they’re hallucinating and when they’re not.”

The meetings included in the study are the Twelfth International Conference on Learning Representations, due to be held in Vienna next month, 2023’s Annual Conference on Neural Information Processing Systems, held in New Orleans, Louisiana, the 2023 Conference on Robot Learning in Atlanta, Georgia, and the 2023 Conference on Empirical Methods in Natural Language Processing in Singapore.

Nature reached out to the organizers of all four conferences for comment, but none responded.

Since its release in November 2022, ChatGPT has been used to write a number of scientific papers, in some cases even being listed as an author. Out of more than 1,600 scientists who responded to a 2023 Nature survey, nearly 30% said they had used generative AI to write papers and around 15% said they had used it for their own literature reviews and to write grant applications.

In the arXiv study, a team led by Weixin Liang, a computer scientist at Stanford University in California, developed a technique to search for AI-written text by identifying adjectives that are used more often by AI than by humans.

By comparing the use of adjectives in a total of more than 146,000 peer reviews submitted to the same conferences before and after the release of ChatGPT, the analysis found that the frequency of certain positive adjectives, such as ‘commendable’, ‘innovative’, ‘meticulous’, ‘intricate’, ‘notable’ and ‘versatile’, had increased significantly since the chatbot’s use became mainstream. The study flagged the 100 most disproportionately used adjectives.
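
The core of the technique is simple enough to sketch: compare per-word frequencies of the flagged adjectives in reviews written before and after ChatGPT’s release. A runnable Python toy (not Liang et al.’s code, with tiny example corpora standing in for the 146,000 reviews):

Code:
from collections import Counter
import re

FLAGGED = ["commendable", "innovative", "meticulous", "intricate", "notable", "versatile"]

def per_1k(reviews):
    """Occurrences of each flagged adjective per 1,000 words of review text."""
    words = re.findall(r"[a-z]+", " ".join(reviews).lower())
    counts = Counter(w for w in words if w in FLAGGED)
    return {w: 1000 * counts[w] / len(words) for w in FLAGGED}

pre  = per_1k(["The method is sound but the evaluation section is thin."])
post = per_1k(["This commendable, meticulous study offers intricate and notable insights."])
for w in FLAGGED:
    print(f"{w:12s} pre={pre[w]:6.1f}  post={post[w]:6.1f}  per 1k words")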

Reviews that gave a lower rating to conference proceedings or were submitted close to the deadline, and those whose reviewers were least likely to respond to author rebuttals, were the most likely to contain these adjectives, and therefore the most likely to have been written by chatbots at least to some extent, the study found.

“It seems like when people have a lack of time, they tend to use ChatGPT,” says Liang.

The study also examined more than 25,000 peer reviews associated with around 10,000 manuscripts that had been accepted for publication across 15 Nature Portfolio journals between 2019 and 2023, but didn’t find a spike in usage of the same adjectives since the release of ChatGPT.

A spokesperson for Springer Nature said the publisher asks peer reviewers not to upload manuscripts into generative AI tools, noting that these still have “considerable limitations” and that reviews might include sensitive or proprietary information. (Nature’s news team is independent of its publisher.)

Springer Nature is exploring the idea of providing peer reviewers with safe AI tools to guide their evaluation, the spokesperson said.

Transparency issue
The increased prevalence of the buzzwords Liang’s study identified in post-ChatGPT reviews is “really striking”, says Andrew Gray, a bibliometrics support officer at University College London. The work inspired him to analyse the extent to which some of the same adjectives, as well as a selection of adverbs, crop up in peer-reviewed studies published between 2015 and 2023. His findings, described in an arXiv preprint published on 25 March, show a significant increase in the use of certain terms, including ‘commendable’, ‘meticulous’ and ‘intricate’, since ChatGPT surfaced[2]. The study estimates that the authors of at least 60,000 papers published in 2023 — just over 1% of all scholarly studies published that year — used chatbots to some extent.

Gray says it’s possible peer reviewers are using chatbots only for copyediting or translation, but that a lack of transparency from authors makes it difficult to tell. “We have the signs that these things are being used,” he says, “but we don’t really understand how they’re being used.”

“We do not wish to pass a value judgement or claim that the use of AI tools for reviewing papers is necessarily bad or good,” Liang says. “But we do think that for transparency and accountability, it’s important to estimate how much of that final text might be generated or modified by AI.”

Weber-Wulff doesn’t think tools such as ChatGPT should be used to any extent during peer review, and worries that the use of chatbots might be even higher in cases in which referee reports are not published. (The reviews of papers published by Nature Portfolio journals used in Liang’s study were available online as part of a transparent peer-review scheme.) “Peer review has been corrupted by AI systems,” she says.

Using chatbots for peer review could also have copyright implications, Weber-Wulff adds, because it could involve giving the tools access to confidential, unpublished material. She notes that the approach of using telltale adjectives to detect potential AI activity might work well in English, but could be less effective for other languages.
"Para a Causa do Povo a Luta Continua!"
"And that's all I'm going to say about that". Forrest Gump
"Damn, I wanted to see what happened next" (Epitaph)

Sigmetnow

  • Multi-year ice
  • Posts: 26266
    • View Profile
  • Liked: 1167
  • Likes Given: 436
Re: Robots and AI: Our Immortality or Extinction
« Reply #3216 on: April 13, 2024, 04:17:04 AM »
April 12, 2024
 
Grok-1.5 Vision Preview
Quote
Connecting the digital and physical worlds with our first multimodal model.
Introducing Grok-1.5V, our first-generation multimodal model. In addition to its strong text capabilities, Grok can now process a wide variety of visual information, including documents, diagrams, charts, screenshots, and photographs. Grok-1.5V will be available soon to our early testers and existing Grok users.

Capabilities
Grok-1.5V is competitive with existing frontier multimodal models in a number of domains, ranging from multi-disciplinary reasoning to understanding documents, science diagrams, charts, screenshots, and photographs. We are particularly excited about Grok’s capabilities in understanding our physical world. Grok outperforms its peers in our new RealWorldQA benchmark that measures real-world spatial understanding. For all datasets below, we evaluate Grok in a zero-shot setting without chain-of-thought prompting.

User:
Can you translate this into Python code?
 
Grok:
Certainly! The flowchart you’ve provided describes a simple guessing game where the computer generates a random number, and the user has to guess it. Here’s the Python code that represents the logic in the flowchart…
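
(Grok’s code itself didn’t survive the copy above; a minimal Python version of the guessing-game logic the flowchart is said to describe, with a random secret number guessed until correct, would look something like this:)

Code:
import random

def guessing_game(low: int = 1, high: int = 100) -> None:
    secret = random.randint(low, high)  # the computer picks a number
    while True:
        guess = int(input(f"Guess a number between {low} and {high}: "))
        if guess < secret:
            print("Too low, try again.")
        elif guess > secret:
            print("Too high, try again.")
        else:
            print("Correct, you win!")
            break

if __name__ == "__main__":
    guessing_game()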


Real-World Understanding
In order to develop useful real-world AI assistants, it is crucial to advance a model's understanding of the physical world. Towards this goal, we are introducing a new benchmark, RealWorldQA. This benchmark is designed to evaluate basic real-world spatial understanding capabilities of multimodal models. While many of the examples in the current benchmark are relatively easy for humans, they often pose a challenge for frontier models.


The initial release of the RealWorldQA consists of over 700 images, with a question and easily verifiable answer for each image. The dataset consists of anonymized images taken from vehicles, in addition to other real-world images. We are excited to release RealWorldQA to the community, and we intend to expand it as our multimodal models improve. RealWorldQA is released under CC BY-ND 4.0. Click here (677MB) to download the dataset.
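
(Scoring such a benchmark zero-shot is essentially exact-match accuracy over image-question pairs. A toy Python sketch; the record layout and the stub model are hypothetical, not the actual dataset format:)

Code:
dataset = [  # hypothetical records; the real download is linked above
    {"image": "img_001.jpg", "question": "Is the traffic light green?", "answer": "no"},
    {"image": "img_002.jpg", "question": "How many lanes are visible?", "answer": "3"},
]

def evaluate(model) -> float:
    """Exact-match accuracy, zero-shot: the question is passed with no CoT prompt."""
    hits = sum(model(d["image"], d["question"]).strip().lower() == d["answer"].lower()
               for d in dataset)
    return hits / len(dataset)

print(evaluate(lambda image, question: "no"))  # stub model that always answers "no"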

Into the future
Advancing both our multimodal understanding and generation capabilities are important steps in building beneficial AGI that can understand the universe. In the coming months, we anticipate making significant improvements in both capabilities, across various modalities such as images, audio, and video.
https://x.ai/blog/grok-1.5v
People who say it cannot be done should not interrupt those who are doing it.

Sigmetnow

  • Multi-year ice
  • Posts: 26266
    • View Profile
  • Liked: 1167
  • Likes Given: 436
Re: Robots and AI: Our Immortality or Extinction
« Reply #3217 on: April 13, 2024, 01:43:13 PM »
Quote
Real-World Understanding
In order to develop useful real-world AI assistants, it is crucial to advance a model's understanding of the physical world. Towards this goal, we are introducing a new benchmark, RealWorldQA. This benchmark is designed to evaluate basic real-world spatial understanding capabilities of multimodal models. While many of the examples in the current benchmark are relatively easy for humans, they often pose a challenge for frontier models.

While Vox scrambles to assemble another Elon hit piece, I’ll note that RWU capability is crucial for robots learning to deal with tasks in the real world.
People who say it cannot be done should not interrupt those who are doing it.

NeilT

  • First-year ice
  • Posts: 6395
    • View Profile
  • Liked: 388
  • Likes Given: 22
Re: Robots and AI: Our Immortality or Extinction
« Reply #3218 on: April 16, 2024, 07:50:26 PM »
Quote
Atlas Retires: Farewell to Boston Dynamics' HD Robot
Boston Dynamics has announced the retirement of its hydraulic humanoid robot, HD Atlas, after nearly a decade of inspiring the next generation of roboticists and pushing the boundaries of technical capabilities in the field. The decision comes as the humanoid robot space is heating up with companies like Tesla entering the market, and the cost of bringing Atlas to market would have been around $200,000. This marks the end of an era for Atlas, which has been an inspiration to many and a symbol of the company's innovative spirit. As the robotics community eagerly awaits Boston Dynamics' next move, the retirement of Atlas leaves a legacy of impressive and fluid movements that have set a high bar for future humanoid robots.

$200k?  Kind of leaves Optimus with a lot of scope to sell even in a more minimal model.
Being right too soon is socially unacceptable.

Robert A. Heinlein

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3219 on: April 16, 2024, 08:11:23 PM »
AI-Operated Fighter Jet Will Fly Air Force Secretary On Test Run
https://www.defensenews.com/news/your-air-force/2024/04/09/ai-operated-fighter-jet-will-fly-air-force-secretary-on-test-run/

The Air Force is betting a large part of its future air warfare on a fleet of more than 1,000 autonomously operated drones, and later this spring its top civilian leader plans to climb into one of those artificial intelligence-operated warplanes and let it take him airborne.

Air Force Secretary Frank Kendall told senators on Tuesday at a hearing on the service’s 2025 budget that he will enter the cockpit of one of the F-16s that the service has converted for drone flight to see for himself how it performs in the air.

“There will be a pilot with me who will just be watching, as I will be, as the autonomous technology works,” Kendall told the Senate Appropriations Committee’s defense panel. “Hopefully neither he or I will be needed to fly the airplane.”

--------------------------------------------------------------

« Last Edit: April 16, 2024, 10:08:45 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3220 on: April 16, 2024, 08:29:16 PM »
Halloween should be 'extra special' this year ...

Startup Pitches a Paintball-Armed, AI-Powered Home Security Camera
https://www.popsci.com/technology/paintball-armed-ai-home-security-camera/

PaintCam Eve also offers a teargas pellet upgrade.

A Slovenia-based company called OZ-IT recently announced PaintCam Eve, a line of autonomous property monitoring devices that will utilize motion detection and facial recognition to guard against supposed intruders. In the company’s zany promo video, a voiceover promises Eve will protect owners from burglars, unwanted animal guests, and any hapless passersby who fail to heed its “zero compliance, zero tolerance” warning.



The consequences for shrugging off Eve’s threats: Getting blasted with paintballs, or perhaps even teargas pellets.

“Experience ultimate peace of mind,” PaintCam’s website declares, as Eve will offer owners a “perfect fusion of video security and physical presence” thanks to its “unintrusive [sic] design that stands as a beacon of safety.”

https://paintcam.eu/

And to the naysayers worried Eve could indiscriminately bombard a neighbor’s child with a bruising paintball volley, or accidentally hock riot control chemicals at an unsuspecting Amazon Prime delivery driver? Have no fear—the robot’s “EVA” AI system will leverage live video streaming to a user’s app, as well as employ a facial recognition system that allows designated people to pass by unscathed.

In the company’s promotional video, there appears to be a combination of automatic and manual screening capabilities. At one point, Eve is shown issuing a verbal warning to an intruder, offering them a five-second countdown to leave its designated perimeter. When the stranger fails to comply, Eve automatically fires a paintball at his chest. Later, a man watches from his PaintCam app’s livestream as his frantic daughter waves at Eve’s camera to spare her boyfriend, which her father allows.


What true peace of mind looks like

“If an unknown face appears next to someone known—perhaps your daughter’s new boyfriend—PaintCam defers to your instructions,” reads a portion of product’s website.

Presumably, determining pre-authorized visitors would involve letting 3D facial scans be stored in Eve’s system for future reference. (Because facial recognition AI has such an accurate track record devoid of racial bias.) At the very least, owners would need to clear each unknown newcomer. Either way, the details are sparse on PaintCam’s website.

OZ-IT vows Eve will include all the smart home security basics like live monitoring, night vision, object tracking, and movement detection, as well as video storage and playback capabilities.

... Eve Pro apparently is the only one to include facial recognition, which implies the other two models could be a tad more… indiscriminate in their surveillance methodologies. It’s unclear how much extra you’ll need to shell out for the teargas tier, too.

----------------------------------------------------------------

« Last Edit: April 16, 2024, 10:12:55 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3221 on: April 17, 2024, 04:54:15 PM »
Boston Dynamics Debuts Electric Version of Atlas Humanoid Robot
https://www.therobotreport.com/boston-dynamics-debuts-electric-version-of-atlas-humanoid-robot/


... has a T-1000 vibe to it

Goodbye to the hydraulic version of Atlas and hello to the electric model designed for commercialization. That’s the message from Boston Dynamics Inc., which yesterday retired the older version of its humanoid robot after 15 years of development and today showed a preview of its successor.

The new Atlas is electric-powered and able to lift heavy items with strength exceeding that of an elite human athlete, the company said. ... "Atlas may resemble a human form factor, but we are equipping the robot to move in the most efficient way possible to complete a task, rather than being constrained by a human range of motion," Boston Dynamics said in a statement. "Atlas will move in ways that exceed human capabilities."

https://bostondynamics.com/blog/electric-new-era-for-atlas/

“We wanted to have a machine that, when we did announce, said to the world that Boston Dynamics just set the bar for humanoids again,” Robert Playter, chief executive at Boston Dynamics, said in an interview. The new Atlas robot could move heavy and odd shaped items around in a factory, but isn’t aimed at moving boxes in warehouses, he said.

“It’s really the logistics within factories of moving the part to the assembly line,” Playter said, noting that many parts have a shape or weight that makes it difficult for typical factory robots to handle. “I don’t really think it’s boxes. If you’re going to pick up boxes or bins, there’s another robot you should go build.”

... “We recognized early on that Atlas is going to work in spaces that have people in them,” said Playter. “This sets the bar much higher than lidar with AMRs [autonomous mobile robots].”

... “Industries will have to figure out how to adapt and incorporate humanoids into their facilities,” he said. “We’ll actually see robots in the wild in factories beginning next year. We want a diversity of tasks.”

Boston Dynamics has been working on the commercial version of Atlas for a while but did not want to unveil it publicly until it was close to finished, said Playter.

Playter said Atlas will need to be able to handle hundreds or even thousands of different tasks. “AI software is going to be essential for enabling that level of generality,” he said.

“Everything we understood, from the time of launching Spot as a prototype to it being a reliable product deployed in fleets, is going into the new Atlas,” Playter said. “We’re confident AI and Orbit fleet management software will help enhance behaviors. For instance, by minimizing slipping on surfaces at Anheuser-Busch, we proved that we can develop algorithms and make it reliable.”

“Now, 1,500 robots in our fleet have them running,” he added. “It’s essential for customers like Purina to monitor and manage fleets as a vehicle for collecting data. As we develop and download new capabilities, Orbit becomes a hub for an ecosystem of different robots.”



---------------------------------------------------------------

AI Scientists Create Humanoid Robot That 'Thinks' Its Way Through Tasks
https://www.therobotreport.com/mentee-robotics-de-cloaks-launches-ai-driven-humanoid-robot/



Mentee Robotics is developing a humanoid robot that it said will be capable of understanding natural-language commands by using artificial intelligence. The growth and evolution of large language models (LLMs) over the past year is the foundation for this capability.

https://www.menteebot.com/

The prototype of Menteebot that was unveiled today incorporates AI at every level of its operations. The motion of the robot is based on a new machine-learning method called simulation to reality (Sim2Real). In this method, reinforcement learning happens on a virtual version of the robot, which means that it can use as much data as it needs to learn and then respond to the real world with very little data.

NeRF-based methods, which are the newest neural network-based technologies for representing 3D scenes, map the world on the fly. The semantic knowledge is stored in these cognitive maps, which the computer can query to find things and places.

Mentee’s robot can then figure out where it is on the 3D map and then automatically plan dynamic paths to avoid obstacles.
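
The “query the map, then plan around obstacles” loop is a classic search problem. A grid-world Python toy, with breadth-first search standing in for the robot’s real 3D planner:

Code:
from collections import deque

grid = ["....#....",
        "..#.#.T..",   # 'T' = queried target (say, "the laundry basket")
        "R.#......"]   # 'R' = robot, '#' = obstacle

def find(ch):
    """Locate a symbol in the map (the semantic-map query, in miniature)."""
    return next((r, c) for r, row in enumerate(grid) for c, v in enumerate(row) if v == ch)

def shortest_path(start, goal):
    """Breadth-first search: plan a collision-free path around obstacles."""
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))

print(shortest_path(find("R"), find("T")))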

The prototype that was unveiled today demonstrated an end-to-end cycle of complex task completion, including navigation, locomotion, scene understanding, object detection and localization, grasping, and natural-language understanding.

MenteeBot has a "voice" that the robot uses to communicate when tasks are nearly complete or to affirm that it's heard the task. It's able to navigate its environments without them being pre-programmed, as MenteeBot uses algorithms to map out the 3D physical space around it in real time, determines its own relative location, and is able to avoid obstacles as a result.

The company told The Robot Report that it is initially targeting two primary markets with the Mentee humanoid. One is the household market: a domestic assistant adept at maneuvering within homes, capable of executing a range of tasks including table setting, table cleanup, and laundry handling, with the ability to learn new tasks on the fly through verbal instructions and visual imitation. The second is the industrial market, in the warehouse: a warehouse automation robot designed to efficiently locate, retrieve, and transport items, with the capacity to handle loads weighing up to 25 kg (55 lbs).

Mentee Robotics said it is planning to release a production-ready prototype by Q1 2025. The system uses only vision-based cameras for sensing the world around it.



---------------------------------------------------------------

« Last Edit: April 17, 2024, 11:01:07 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

Sigmetnow

  • Multi-year ice
  • Posts: 26266
    • View Profile
  • Liked: 1167
  • Likes Given: 436
Re: Robots and AI: Our Immortality or Extinction
« Reply #3222 on: April 17, 2024, 05:33:29 PM »
NEWS: After 10 years, Boston Dynamics has announced it is retiring its hydraulic humanoid robot, HD Atlas. They released a farewell video (below).
 
It's unclear what the company might do next.
4/16/24, https://x.com/sawyermerritt/status/1780247963010249020
 
< bro retiring after 3 ACL blowouts.
 
3 min


(No wonder they always add a music background to their videos — from the couple true audio clips here, that bot is noisy!)
« Last Edit: April 17, 2024, 05:41:35 PM by Sigmetnow »
People who say it cannot be done should not interrupt those who are doing it.

morganism

  • Nilas ice
  • Posts: 2031
    • View Profile
  • Liked: 235
  • Likes Given: 143
Re: Robots and AI: Our Immortality or Extinction
« Reply #3223 on: April 17, 2024, 08:15:20 PM »


NSA Publishes Guidance for Strengthening AI System Security

FORT MEADE, Md. – The National Security Agency (NSA) is releasing a Cybersecurity Information Sheet (CSI) today, “Deploying AI Systems Securely: Best Practices for Deploying Secure and Resilient AI Systems.” The CSI is intended to support National Security System owners and Defense Industrial Base companies that will be deploying and operating AI systems designed and developed by an external entity.
 
“AI brings unprecedented opportunity, but also can present opportunities for malicious activity. NSA is uniquely positioned to provide cybersecurity guidance, AI expertise, and advanced threat analysis,” said NSA Cybersecurity Director Dave Luber.

https://www.nsa.gov/Press-Room/Press-Releases-Statements/Press-Release-View/Article/3741371/nsa-publishes-guidance-for-strengthening-ai-system-security/

https://media.defense.gov/2024/Apr/15/2003439257/-1/-1/0/CSI-DEPLOYING-AI-SYSTEMS-SECURELY.PDF

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3224 on: April 17, 2024, 10:32:26 PM »
Elon Musk's Grok AI Pushes False News Stories About Indian Election, Iran Strike
https://www.pcmag.com/news/elon-musks-grok-ai-pushes-false-news-story-about-indian-election-more

X users were surprised to see that Prime Minister Modi was 'ousted' in a 'shocking turn of events.' But it was a fake headline spun up by Grok AI, as was a similar story about Iran hitting Tel Aviv.

Elon Musk's Grok AI is promoting false news stories about world events, most recently claiming that Indian Prime Minister Narendra Modi lost to his opponent, even though the election hasn't happened yet.

"PM Modi Ejected from Indian Government," the headline reads. An accompanying article claims the "shocking turn of events" represents a "significant change in India's political landscape," sparking a wide range of reactions across the nation.


https://twitter.com/sankrant/status/1780463692246593543

The phony report appeared as a promoted news article in the feed of Sankrant Sanu. "This is crass and sheer election manipulation," he wrote in a post. "Does not help X's play for being a credible alternative news and information source."

https://twitter.com/sankrant/status/1780466302479720735

The incident follows a similar one earlier this month when Grok falsely reported that Iran had hit Tel Aviv "with heavy missiles." The story appeared in the trending news section on X, according to Mashable, which notes that Israel had attacked Iran's embassy in Syria earlier this week, killing four officials, so "retaliation from Iran seemed like a plausible occurrence." But it was not true.

https://mashable.com/article/elon-musk-x-twitter-ai-chatbot-grok-fake-news-trending-explore
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

nadir

  • Nilas ice
  • Posts: 2322
    • View Profile
  • Liked: 251
  • Likes Given: 37
Re: Robots and AI: Our Immortality or Extinction
« Reply #3225 on: April 18, 2024, 04:45:23 PM »
The presentation of the new Atlas is not without humor at the expense of the Tesla bot:

“We promise this is not a person in a bodysuit”

https://x.com/bostondynamics/status/1780603212359205323

« Last Edit: April 18, 2024, 04:52:27 PM by nadir »

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3226 on: April 18, 2024, 05:07:25 PM »
Stanford Report: AI Surpasses Humans On Several Fronts, But Costs Are Soaring
https://venturebeat.com/ai/stanford-report-ai-surpasses-humans-on-several-fronts-but-costs-are-soaring/



Artificial intelligence made major strides in 2023 across technical benchmarks, research output, and commercial investment, according to a new report from Stanford University’s Institute for Human-Centered AI. But the technology still faces key limitations and growing concerns about its risks and societal impact.

The AI Index 2024 annual report, a comprehensive look at global AI progress, finds that AI systems exceeded human performance on additional benchmarks in areas like image classification, visual reasoning, and English understanding. However, they continue to trail humans on more complex tasks like advanced mathematics, commonsense reasoning, and planning.

Report: https://aiindex.stanford.edu/wp-content/uploads/2024/04/HAI_AI-Index-Report-2024.pdf

https://aiindex.stanford.edu/report/?sf187707917=1

The report details an explosion of new AI research and development in 2023, with industry players leading the charge. Private companies produced 51 notable machine learning (ML) models last year, compared to only 15 from academia. Collaborations between industry and academia yielded an additional 21 high-profile models.

... “Generative AI investment skyrockets,” the report notes. “Despite a decline in overall AI private investment last year, funding for generative AI surged, nearly octupling from 2022 to reach $25.2 billion.”



As AI rapidly advances, the report finds a troubling lack of standardized testing of systems for responsibility, safety and security. Leading developers like OpenAI and Google primarily evaluate their models on different benchmarks, making comparisons difficult.

“Robust and standardized evaluations for [large language model] LLM responsibility are seriously lacking,” according to the AI Index analysis. “This practice complicates efforts to systematically compare the risks and limitations of top AI models.”

The authors point to emerging risks, including the spread of political deepfakes which are “easy to generate and difficult to detect.” They also highlight new research revealing complex vulnerabilities in how language models can be manipulated to produce harmful outputs.

Public opinion data in the report shows growing anxiety about AI. The share of people who think AI will “dramatically” affect their lives in the next 3-5 years rose from 60% to 66% globally. More than half now express nervousness about AI products and services.



“People across the globe are more cognizant of AI’s potential impact — and more nervous,” the report states. “In America, Pew data suggests that 52% of Americans report feeling more concerned than excited about AI, rising from 37% in 2022.”

----------------------------------------------------------------

TSMC: One Trillion Transistor GPUs Will Be Possible in a Decade
https://www.extremetech.com/computing/tsmc-one-trillion-transistor-gpus-will-be-possible-in-a-decade

TSMC, the world's largest chipmaker, is embarking on a journey that it says will result in a GPU with one trillion transistors—roughly 10 times the number found in today's biggest chips—though it will take the Taiwanese company a decade to get there.

TSMC chairman Mark Liu and chief scientist H.-S Philip Wong have penned an editorial for IEEE Spectrum outlining their thoughts on the future of semiconductors. The headline is how the company plans to create a one trillion transistor GPU. The article details how the AI boom is currently the main driver for increased compute power in chips, especially GPUs. It notes that as we reach the end of the traditional node-shrink era, the way forward is clear: chiplets and 3D stacking.

https://spectrum.ieee.org/trillion-transistor-gpu

The pair say we're already at the reticle limit for 2D lithography, roughly 800 mm². However, vertical-stacking technologies like chip-on-wafer-on-substrate (CoWoS) can allow for up to six reticle fields' worth of chips on a single package. The article also touts TSMC's system-on-integrated-chips (SoIC) technology, which is used to stack high-bandwidth memory (HBM) chips. Current methods can stack eight layers, with 12 layers coming next. The authors note that the transition from solder bumps between layers to "hybrid bonding" using copper connections will further increase density.



"If the AI revolution is to continue at its current pace, it will need even more from the semiconductor industry. Within a decade, it will need a 1-trillion-transistor GPU—that is, a GPU with 10 times as many devices as is typical today," write the duo.

For the past 50 years, semiconductor-technology development has felt like walking inside a tunnel. The road ahead was clear, as there was a well-defined path. And everyone knew what needed to be done: shrink the transistor.

Now, we have reached the end of the tunnel. From here, semiconductor technology will get harder to develop. Yet, beyond the tunnel, many more possibilities lie ahead. We are no longer bound by the confines of the past.

----------------------------------------------------------------

« Last Edit: April 18, 2024, 05:16:13 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus


vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3228 on: April 18, 2024, 05:37:00 PM »
The AI Revolution Is Already Here
https://www.defenseone.com/ideas/2024/04/ai-revolution-already-here/395722/

In just the last few months, the battlefield has undergone a transformation like never before, with visions from science fiction finally coming true. Robotic systems have been set free, authorized to destroy targets on their own. Artificial intelligence systems are determining which individual humans are to be killed in war, and even how many civilians are to die along with them. And making all this the more challenging, this frontier has been crossed by America’s allies.

Ukraine’s front lines have become saturated with thousands of drones, including Kyiv’s new Saker Scout quadcopters that “can find, identify and attack 64 types of Russian ‘military objects’ on their own.” They are designed to operate without human oversight, unleashed to hunt in areas where Russian jamming prevents other drones from working.

Meanwhile, Israel has unleashed another side of algorithmic warfare as it seeks vengeance for the Hamas attacks of October 7. As revealed by IDF members to 972 Magazine, “The Gospel” is an AI system that considers millions of items of data, from drone footage to seismic readings, and marks buildings in Gaza for destruction by air strikes and artillery. Another system, named Lavender, does the same for people, ingesting everything from cellphone use to WhatsApp group membership to set a ranking between 1 and 100 of likely Hamas membership. The top-ranked individuals are tracked by a system called “Where’s Daddy?”, which sends a signal when they return to their homes, where they can be bombed.

Such systems are just the start. The cottage industry of activists and diplomats who tried to preemptively ban “killer robots” failed for the very same reason that the showy open letters calling for a ban on AI research did: The tech is just too darn useful. Every major military is at work on their equivalents or better, including us. ...

---------------------------------------------------------------



---------------------------------------------------------------

Anduril to supply robotic combat vehicle software to US Army
https://www.defensenews.com/unmanned/robotics/2024/04/03/anduril-to-supply-robotic-combat-vehicle-software-to-us-army/



The U.S. Army and Defense Innovation Unit selected Anduril Industries to develop a software framework considered foundational to testing and deploying future robotic combat vehicle payloads.

Four companies (Forterra, Kodiak Robotics, Neya Systems and Overland AI) have landed deals for that autonomous navigation pipeline, while two companies (Applied Intuition and Scale AI) will square off for the machine learning and autonomy piece, and two more (Anduril and Palantir) will compete to be the software system integrator.

... Robotic combat vehicles are unmanned systems envisioned to work alongside soldiers, schlepping supplies or surveilling adversaries with sophisticated sensors. The RCVs are also part of a larger Army overhaul dubbed Next Generation Combat Vehicle, which includes the XM30 Mechanized Infantry Combat Vehicle, formerly the Optionally Manned Fighting Vehicle.

Anduril’s digital effort will enable RCV variants to navigate terrain, swap and adopt government-owned and third-party autonomy stacks, and allow remote management of a vehicle’s equipment, according to its announcement.

“Integrating disparate hardware and software is a critical step in the development and validation of any autonomous system,” Zach Mears, an Anduril senior vice president, said in a statement.

---------------------------------------------------------------

The Robots Are Coming: US Army Experiments With Human-Machine Warfare
https://www.defensenews.com/unmanned/2024/03/25/the-robots-are-coming-us-army-experiments-with-human-machine-warfare/
https://gizmodo.com/these-gun-shooting-robot-vehicles-are-future-urban-war-1851350861
https://www.msn.com/en-us/news/technology/these-gun-shooting-robot-vehicles-are-the-future-of-urban-war/ar-BB1kj7DV


The U.S. military has spent the past month running exercises at various locations in California to prepare for “future war-winning readiness.” And the photos being officially released include plenty of robot dogs, augmented reality headsets, resupply drones, and at least one mysterious AI-driven vehicle. There’s also an eight-wheeled, all-electric robot vehicle that packs quite a bit of firepower, as you can see in the GIF above.

The Army Futures Command hosted senior Army leaders and allies from around the world to witness this so-called “human machine integration demonstration” at Fort Irwin in California in recent weeks. Guests included military leaders from the UK, Australia, Canada, New Zealand, France, and Japan.

The exercises, part of an annual demonstration called Project Convergence Capstone 4, weren’t just about experimenting with American capabilities for an international audience. U.S. allies also brought their own machines—robots that wouldn’t look out of place in what used to be considered futuristic sci-fi.

... The images that have been released by the U.S. Army, including the video below, are a great reminder that some version of a Terminator-style future with autonomous land vehicles sporting high-powered weapons is probably way closer than we think. In fact, it seems to already be here in some ways, even if these new exercises are being billed as “experiments.”


https://www.marines.mil/Portals/1/Docs/Force_Design_2030_Annual_Update_June_2023.pdf
INTELLIGENT ROBOTICS AND AUTONOMOUS SYSTEMS - pg13

--------------------------------------------------------------

Pentagon Tested Generative AI to Draft Supply Plans In Latest GIDE 9 Wargame
https://breakingdefense.com/2024/03/pentagon-tested-generative-ai-to-draft-supply-plans-in-latest-gide-9-wargame/

WASHINGTON — Supply officers at the military’s operational combatant commands tested ChatGPT-like software to help them write logistics plans, as part of the latest Global Information Dominance Experiment, GIDE 9.

Their verdict: Generative AI showed huge potential to help them sort through masses of mind-numbing details and outline options to offer their human commander — a crucial aspect of the nascent revolution in command-and-control known as CJADC2.

---------------------------------------------------------------

Army Puts Drones Front and Center In Unfunded Wishlist
https://www.defenseone.com/technology/2024/03/army-puts-drones-front-and-center-newly-obtained-budget-docs/395182/



What do Army leaders want—but not quite enough to include in their formal 2025 budget request? Aerial drones, counter-drone tech, and ground robots for smaller units, according to the service’s “unfunded priorities” list, obtained by Defense One Friday.

... Army Chief of Staff Gen. Randy George has made acquisition of commercial drones a particular priority, with news of the cancellation of the Future Attack Reconnaissance Aircraft accompanied by the announcement that the Army was phasing out existing drones in favor of commercial ones.

Talking with soldiers at a recent training event, George said troops were eager to get more small drones into their units. Training for some of the drones can take as little as a day, he added. 

“We're going to see robotics inside the formation, on the ground and in the air,” George said.

The Army also hopes to push extra cash toward its program to field one-way attack drones to infantry, dubbed the Low Altitude Stalking and Strike Ordnance, or LASSO, program. The list includes $10 million for LASSO. The Army budget request for fiscal year 2025 includes a request for $120 million worth of LASSO program drones.

... A separate line calls for $16 million for the Silent Tactical Energy Dismount (STEED), a robot used to carry equipment and evacuate casualties.

--------------------------------------------------------------

Army Mulls Introducing Robot Platoon Into Armored Brigades
https://www.defenseone.com/technology/2024/03/army-mulls-introducing-robot-platoon-armored-brigades/395254/

HUNTSVILLE, Alabama—The Army may introduce a drone and robotics platoon into its armored brigade combat teams, an Army leader announced Tuesday at the AUSA Global Force conference.

A proposal to stand up the new type of platoon has been sent to the Combined Arms Center at Fort Leavenworth, for eventual inclusion in an update to the service’s force design, said Brig. Gen. Geoffrey Norman, director of the Next Generation Combat Vehicle Cross Functional Team.

If implemented Army-wide, the new platoons would lead to a dramatic increase in the use of robotic systems, and ground robots in particular. The Army has 11 armored brigade combat teams in the active force and five in the National Guard, meaning that, at a minimum, the Army could field 16 RAS platoons if every brigade were assigned a platoon.

Fielding RAS platoons to other types of brigade combat teams, such as infantry or Stryker brigades, would expand that number even more.

---------------------------------------------------------------

EOS Converts Drone Into Robotic Combat Vehicle
https://defence-blog.com/eos-converts-drone-into-robotic-combat-vehicle/



Huntsville-based contractor EOS Defense Systems USA (EOS) presented robotic combat vehicles equipped with its cutting-edge R600 Remote Weapon Station (RWS) during a recent demonstration at the US Army’s Project Convergence Capstone 4.

According to a press release from EOS, equipped with a Northrop Grumman M230LF cannon, coaxial machine gun, and four Javelin missiles, this system showcased its formidable capabilities on an Army Small Multipurpose Equipment Transport (S-MET) robotic infantry support vehicle.

During the exercise at the Army’s National Training Center in Fort Irwin, California, EOS successfully engaged pairs of Class 1 UAVs at ranges exceeding 300m and targeted multiple ground threats with its 30mm cannon.

-------------------------------------------------------------

DCE Launches Next-Gen X3 Robotic Vehicle
https://defence-blog.com/dce-launches-next-gen-x3-robotic-vehicle/



Digital Concepts Engineering (DCE) has unveiled its latest innovation, the X3 Unmanned Ground Vehicle (UGV), as an evolution from its previous model, the X2.

The X3 presents a cost-effective and highly mobile platform capable of supporting a diverse range of mission systems. Its capabilities span from Intelligence, Surveillance, Target Acquisition, and Reconnaissance (ISTAR) payloads to decoying operations and tactical Public Address (PA) systems. Moreover, the vehicle’s adaptability is enhanced by its compatibility with self-mounting and dismounting systems, such as a bulldozer blade, expanding its utility in different environments.

Equipped with a low-latency control system, the X Series UGVs can navigate challenging terrain, with capabilities to carry payloads up to 250kg, tow weights of up to 3 tonnes, and traverse various landscapes including mud, sand, slopes, rubble, and stairs. The configurable top deck of the X Series enables seamless adaptation to different applications, offering versatility across military, nuclear, and agricultural sectors.

DCE says these robotic vehicles can be tailored to meet specific operational requirements, available as tele-operated platforms or equipped with a robotic operating system interface for autonomous operations. Whether remotely controlled or operating autonomously, the standby mode allows for extended dormancy periods, enabling deployment in remote areas for rapid response scenarios.

---------------------------------------------------------------

Army Artillery Needs More Range, Mobility and Autonomy, Study Finds
https://www.defensenews.com/digital-show-dailies/global-force-symposium/2024/03/27/army-artillery-needs-more-range-mobility-and-autonomy-study-finds/

HUNTSVILLE, Ala. — The U.S. Army’s recently completed conventional fires study determined the service should focus on more autonomous artillery systems with greater range and improved mobility, the Army Futures Command chief said Wednesday.

Speaking at the Association of the U.S. Army’s Global Symposium here, Gen. James Rainey said the Army will achieve these improvements by incorporating robotics into systems, improving artillery rounds and pursuing readily available mobile howitzer options.

.... Rainey said he’s “very interested in autonomous and robotic cannon solutions” for joint forcible entry formations like the 82nd and 101st airborne divisions.

-------------------------------------------------

« Last Edit: April 18, 2024, 07:49:03 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3229 on: April 18, 2024, 06:18:20 PM »
Do You Know Who You're Talking To: VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time
https://www.microsoft.com/en-us/research/project/vasa-1/

We introduce VASA, a framework for generating lifelike talking faces of virtual characters with appealing visual affective skills (VAS), given a single static image and a speech audio clip. Our premiere model, VASA-1, is capable of not only producing lip movements that are exquisitely synchronized with the audio, but also capturing a large spectrum of facial nuances and natural head motions that contribute to the perception of authenticity and liveliness. The core innovations include a holistic facial dynamics and head movement generation model that works in a face latent space, and the development of such an expressive and disentangled face latent space using videos. Through extensive experiments including evaluation on a set of new metrics, we show that our method significantly outperforms previous methods along various dimensions comprehensively. Our method not only delivers high video quality with realistic facial and head dynamics but also supports the online generation of 512x512 videos at up to 40 FPS with negligible starting latency. It paves the way for real-time engagements with lifelike avatars that emulate human conversational behaviors.



Our diffusion model accepts optional signals as condition, such as main eye gaze direction and head distance, and emotion offsets.

Our method exhibits the capability to handle photo and audio inputs that are out of the training distribution. For example, it can handle artistic photos, singing audios, and non-English speech. These types of data were not present in the training set.
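For intuition only, here is a purely hypothetical interface sketch of what "optional signals as condition" can look like for a conditional diffusion sampler: a condition dict nudges each denoising step. The function names, fields, units, and stub denoiser are all invented; Microsoft has released no code or API, and this is not VASA-1.

# Hypothetical interface sketch only: how optional condition signals
# might steer a conditional diffusion sampler. The field names, units,
# and stub denoiser are invented; this is not VASA-1's code or API.
import numpy as np

def sample_motion_latents(denoiser, audio_features, cond, steps=50):
    """Iteratively denoise a face-motion latent, steered by `cond`."""
    z = np.random.default_rng(0).normal(size=audio_features.shape)
    for t in reversed(range(steps)):
        z = denoiser(z, t, audio_features, cond)
    return z

cond = {
    "gaze_direction": (0.1, -0.2),     # invented units
    "head_distance": 0.8,              # invented normalization
    "emotion_offset": "neutral->happy",
}
stub = lambda z, t, audio, cond: 0.9 * z   # stand-in denoiser for the sketch
print(sample_motion_latents(stub, np.zeros((10, 4)), cond).shape)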



Our research focuses on generating visual affective skills for virtual AI avatars, aiming for positive applications. It is not intended to create content that is used to mislead or deceive. However, like other related content generation techniques, it could still potentially be misused for impersonating humans. We are opposed to any behavior to create misleading or harmful contents of real persons, and are interested in applying our technique for advancing forgery detection. Currently, the videos generated by this method still contain identifiable artifacts, and the numerical analysis shows that there's still a gap to achieve the authenticity of real videos.

While acknowledging the possibility of misuse, it's imperative to recognize the substantial positive potential of our technique. The benefits – ranging from enhancing educational equity, improving accessibility for individuals with communication challenges, and offering companionship or therapeutic support to those in need – underscore the importance of our research and other related explorations. We are dedicated to developing AI responsibly, with the goal of advancing human well-being.

We have no plans to release an online demo, API, product, additional implementation details, or any related offerings until we are certain that the technology will be used responsibly and in accordance with proper regulations.

Sicheng Xu et al, VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time, arXiv (2024)
https://arxiv.org/abs/2404.10667
« Last Edit: April 19, 2024, 04:57:10 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3230 on: April 18, 2024, 06:35:29 PM »
US Air Force Confirms First Successful AI Dogfight
https://www.edwards.af.mil/News/Article-View/Article/3744695/usaf-test-pilot-school-and-darpa-announce-breakthrough-in-aerospace-machine-lea/
https://www.twz.com/air/ai-is-now-dogfighting-with-fighter-pilots-in-the-air
https://thedebrief.org/darpas-groundbreaking-ace-program-and-x-62a-becomes-first-ai-controlled-jet-to-dogfight-against-manned-f-16-in-real-world/

...“The potential for autonomous air-to-air combat has been imaginable for decades, but the reality has remained a distant dream up until now. In 2023, the X-62A broke one of the most significant barriers in combat aviation. This is a transformational moment, all made possible by breakthrough accomplishments of the X-62A ACE team,” said Secretary of the Air Force Frank Kendall. Secretary Kendall will soon take flight in the X-62A VISTA to personally witness AI in a simulated combat environment during a forthcoming test flight at Edwards.

In less than a calendar year the teams went from the initial installation of live AI agents into the X-62A’s systems, to demonstrating the first AI versus human within-visual-range engagements, otherwise known as a dogfight. In total, the team made over 100,000 lines of flight-critical software changes across 21 test flights.

Dogfighting is a highly complex scenario that the X-62A used to prove that non-deterministic artificial intelligence can be employed safely in aerospace. The AI dogfights paired the X-62A VISTA against manned F-16 aircraft in the skies above Edwards. Initial flight safety was built up first using defensive maneuvers, before switching to offensive high-aspect nose-to-nose engagements where the dogfighting aircraft got as close as 2,000 feet at 1,200 miles per hour.

While dogfighting was the primary testing scenario, it was not the end goal.



... "I tell people that autonomous technology for aircraft enables mission execution, with no remote pilot, no communications, and no GPS. It enables the concept of teaming or swarming where these aircraft can execute the commander's intent. They can execute a mission, working together dynamically, reading and reacting to each other, to the battlefield, to the adversarial threats, and to civilians on the ground."

The other value proposition that I claim is that you don't have to train human pilots to fly aircraft. And there is a shortage of pilots. Commander, Ninth Air Force [Air Forces Central] U.S. Central Command's leader Gen. Grynkewich has said he doesn’t care about 1,000 drones or 10,000 drones. He says we have to field hundreds of thousands, if not millions, of drones. We're not going to produce pilots for a million drones! So you have to use AI and autonomy to be able to fly those aircraft.

"The other value proposition I think of is the system – the fleet of aircraft always gets better. You always have the best AI pilot on an aircraft at any given time. We win 99.9% of engagements with our fighter jet AI pilot, and that's the worst that it will ever be, which is superhuman. So when you talk about fleet learning, that will be on every single aircraft, you will always have the best quadcopter pilot, you'll always have the best V-BAT pilot, you'll always have the best CCA pilot, you name it. It'll be dominant. You don't want the second best AI pilot or the third best, because it truly matters that you're winning these engagements at incredibly high rates."

"We have to be able to trust these algorithms to use them in a real-world setting," the ACE program manager says.

One of the major elements of the AI/machine learning "agents" on the VISTA jet is a set of "safety trips" that are designed to prevent the aircraft from performing both dangerous and unethical actions. This includes code to define allowable flight envelopes and to help avoid collisions, either in midair or with the ground, as well as do things like prevent weapons use in unauthorized scenarios.
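As a toy illustration of the "safety trips" concept: a guard layer inspects the agent's commanded action against an allowable envelope and vetoes or overrides anything outside it. The limits, fields, and recovery actions below are invented for illustration and are not the X-62A's actual logic.

# Illustrative "safety trip" guard layer: veto or override the AI
# agent's command outside an allowable envelope. Limits, fields, and
# recovery actions are invented; this is not the X-62A's logic.
from dataclasses import dataclass

@dataclass
class State:
    altitude_ft: float
    airspeed_kt: float
    g_load: float
    weapons_authorized: bool

ENVELOPE = {"min_alt_ft": 5000, "max_g": 7.5, "max_speed_kt": 1500}

def safety_trip(state: State, command: dict) -> dict:
    """Pass the command through, or replace it when a limit trips."""
    if state.altitude_ft < ENVELOPE["min_alt_ft"]:
        return {"action": "recover", "reason": "altitude floor breached"}
    if state.g_load > ENVELOPE["max_g"] or state.airspeed_kt > ENVELOPE["max_speed_kt"]:
        return {"action": "unload", "reason": "structural limit"}
    if command.get("action") == "fire" and not state.weapons_authorized:
        return {"action": "hold", "reason": "weapons use not authorized"}
    return command   # within envelope: the AI agent keeps control

print(safety_trip(State(4500, 420, 3.0, False), {"action": "pursue"}))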

The U.S. military insists that a human will always be somewhere in the loop in the operation of future autonomous weapon systems, but where exactly they are in that loop is expected to evolve over time and has already been the subject of much debate.

... The service is now in the process of transforming six more F-16s into test jets to support larger-scale collaborative autonomy testing as part of another program called Project VENOM (Viper Experimentation and Next-Gen Operations Mode).
« Last Edit: April 19, 2024, 03:11:23 AM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3231 on: April 18, 2024, 06:45:06 PM »
DARPA’s Defiant Fully Uncrewed Demonstrator Ship Will Hit The Seas Later This Year
https://www.twz.com/sea/darpas-defiant-fully-uncrewed-demonstrator-ship-will-hit-the-seas-later-this-year



Plans to test a new uncrewed surface vessel are making waves, with the company heading the project targeting the end of this year to put its demonstrator in the water. Serco Inc.'s Defiant testbed has been designed from the ground up with the knowledge that there will never be a human onboard while it's at sea. Conceived as being capable of operating autonomously for months to years with minimal maintenance, the vessel is already being eyed by the Navy as a path to fielding a fleet of missile-laden drone boats in the future.

Defiant is being procured under the Defense Advanced Research Projects Agency's (DARPA) No Manning Required Ship (NOMARS) program, which aims to field a new medium uncrewed surface vessel (MUSV) prototype. The NOMARS program was launched in 2020, and Serco's involvement in it stretches back to that time.

https://www.darpa.mil/program/no-manning-required-ship

https://www.darpa.mil/news-events/2022-08-22

... In the DoD's Fiscal Year 2025 budget request, it specifically states that the "capability will enable disaggregated persistent USVs, allowing the surface fleet to credibly threaten peer adversaries and negate their investments in high-cost weapon systems designed to counter large naval targets such as aircraft carriers." Success with the NOMARS program would create "a pathway to allow a distributed lethality concept to become viable: small ships, in large numbers, each of which is individually low-cost and low-value, but in aggregate presents a significant deterrent [to adversaries]."

.... The service is continuing down a path to incorporate unmanned technologies into routine fleet operations. ... “we need to get beyond surveillance” and begin using these sea drones for more warfare-focused operations.

... The Navy will conduct an analysis of alternatives this year to determine what payloads can equip the Medium Unmanned Surface Vessel. He added that this would go beyond surveillance and instead incorporate that information-gathering capability into the detect-identify-track-engage kill chain.

https://www.defensenews.com/naval/2024/04/11/how-us-navy-experiments-could-get-drones-beyond-spying-and-into-battle/

-----------------------------------------------------------------

MARTAC Develops Suicide Drone Boat for US Navy
https://defence-blog.com/martac-develops-suicide-drone-boat-for-us-navy/

https://www.navalnews.com/event-news/sea-air-space-2024/2024/04/first-images-of-american-black-sea-style-maritime-attack-drone/



Recent developments in the Black Sea have demonstrated the transformative nature of modern naval engagement with the introduction of swarming unmanned surface vehicles (USVs). The use of high-performance small USVs demonstrates the capability to create an asymmetric advantage against conventional naval defenses. Swarms of these systems in coordinated attacks can make them elusive targets and provide an unpredictable deterrent to naval engagement. MARTAC's M18 can act as a USV or ASV depending on mission requirements.

Measuring 18 feet (5.5 meters) in length, the M18 ASV is a low-cost, attritable system specifically designed for one-way missions. Its high-performance monohull configuration enables burst speeds exceeding 50 knots and open ocean cruising ranges of up to 500 nautical miles. Moreover, the M18 ASV boasts a payload capacity of up to 1000 pounds (450 kg), allowing for the integration of various payloads, including warheads for suicide missions.

Procured by the United States Department of Defense (DoD), the M18 ASV is tailored to empower operators with the flexibility to execute diverse missions effectively. From surveillance and reconnaissance to offensive operations, the M18 ASV offers a cost-effective solution for enhancing naval capabilities in a rapidly evolving maritime environment.

Equipped with MARTAC’s advanced autonomy stack, the M18 ASV can be operated remotely by a human operator or function autonomously, with the option for operator intervention as needed during the mission. This flexibility ensures adaptability to dynamic operational requirements and enables seamless integration into existing naval frameworks.
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

Sigmetnow

  • Multi-year ice
  • Posts: 26266
    • View Profile
  • Liked: 1167
  • Likes Given: 436
Re: Robots and AI: Our Immortality or Extinction
« Reply #3232 on: April 23, 2024, 07:01:10 PM »
➡️ pic.twitter.com/729vHHMYGg  30 sec. Ordering at a Wendy’s drive-thru via AI voice recognition.

Wendy's® | How Wendy's is Using AI for Restaurant Innovation
June 2023
https://www.wendys.com/blog/how-wendys-using-ai-restaurant-innovation

—-
Drive-Thru AI Chatbot vs. Fast-Food Worker: We Tested the Tech | WSJ - YouTube
6 min. Do you want a peach pie?


—-
A.I. taking orders at an Arizona Carl's Jr. Drive-thru - YouTube
3 min. May 2023
“I like that there was no judgement.  A human would be like, “You know you are just ordering lettuce with cheese, right?”
People who say it cannot be done should not interrupt those who are doing it.

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3233 on: April 23, 2024, 09:32:45 PM »
Thermonator, the Flame-Throwing Robot Dog, Can Now Be Yours for $9,420
https://gizmodo.com/thermonator-the-flame-throwing-robot-dog-can-now-be-y-1851429292

A good tagline for this product might be: “How the Fuck Is This Legal?”



In addition to a variety of flame throwers, Throwflame also sells a flame-throwing drone, which the company has dubbed the TF-19 WASP.

There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3234 on: April 23, 2024, 09:48:02 PM »
Welcome to Willowbrook: The Simulated Society Built by Generative Agents
https://cetas.turing.ac.uk/publications/welcome-willowbrook-simulated-society-built-generative-agents



This research explores future potential non-linguistic cognitive abilities of LLMs. How extensive is an LLM’s ability to mimic human behaviour? Does it go beyond constructing coherent sequences of text to emulate more complete elements of human interaction? Or is there sufficient humanness already encoded into the LLM that ‘simple’ prediction yields mimicry?

Given only basic character biographies and location descriptions, the generative agents can portray believable personas, stay in character and generate sensible looking daily schedules. In isolation, an agent’s daily schedule, interactions and experiences comprise a plausible pattern of life – they schedule sensible mealtimes, working hours and evening plans. During runtime, the agents spontaneously interact with other agents, introduce themselves to strangers, serve customers and can be distracted by emails and phone calls from familiar contacts.

... Occasionally, the interactions result in surprisingly ‘deep’ conversations, where the LLM is having a conversation with itself via the agents. For example, two characters during a chance encounter discussed the impact on privacy of using machine learning to provide personalised recommendations for books. These more nuanced conversations are considered an emergent property of the multi-agent system, as they are unscripted, different from the usual conversations held within the simulation, and extremely challenging to instigate on demand.

... This research offers new insights into the cognitive competences of LLMs. These models demonstrate the ability to mimic deeper cognitive functions such as pseudo-reasoning, which may suggest a nascent form of competence, albeit one which is not yet reliable. LLMs are also proficient at drawing upon their training data to produce responses that often closely resemble human/societal behaviours. This suggests that, to some extent, there is an inherent ‘humanness’ encoded within these models.
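A minimal sketch of how such generative agents are typically wired up, assuming some llm(prompt) -> str chat-completion helper exists: each agent is a biography plus a memory log, and scheduling and conversation are just structured prompts. This is the generic pattern, not the CETaS Willowbrook implementation.

# Generic generative-agent skeleton, assuming some llm(prompt) -> str
# chat-completion helper. Personas, prompts, and memory handling are
# illustrative; this is not the CETaS Willowbrook implementation.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug any chat-completion API in here")

class Agent:
    def __init__(self, name: str, biography: str):
        self.name, self.biography = name, biography
        self.memory = []   # running log of interactions

    def plan_day(self) -> str:
        # Daily schedules emerge from the persona prompt alone.
        return llm(f"You are {self.name}. {self.biography}\n"
                   "Write a plausible hour-by-hour schedule for today.")

    def converse(self, other: "Agent", utterance: str) -> str:
        reply = llm(f"You are {self.name}. {self.biography}\n"
                    f"Recent memories: {self.memory[-5:]}\n"
                    f"{other.name} says: {utterance}\n"
                    "Stay in character and reply briefly.")
        self.memory.append((other.name, utterance, reply))
        return reply

# alice = Agent("Alice", "A librarian who worries about privacy.")
# bob = Agent("Bob", "A barista who loves recommendation engines.")
# print(alice.converse(bob, "Should the shop's app suggest books?"))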

--------------------------------------------------------------



--------------------------------------------------------------

Microsoft Exec Says AI Is ‘a New Kind of Digital Species’
https://gizmodo.com/microsoft-ai-mustafa-suleyman-digital-species-1851428434

Mustafa Suleyman, CEO of Microsoft AI, said during a talk at TED 2024 that AI is the newest wave of creation since the start of life on Earth, and that “we are in the fastest and most consequential wave ever.”

Suleyman said the industry needs to find the right analogies for AI’s future potential as a way to “prioritize safety” and “to ensure that this new wave always serves and amplifies humanity.” While the AI community has always referred to AI technology as “tools,” Suleyman said the term doesn’t capture its capabilities.

“To contain this wave, to put human agency at its center, and to mitigate the inevitable unintended consequences that are likely to arise, we should start to think about them as we might a new kind of digital species,”
Suleyman said.

He also said he sees a future where “everything”—from people to businesses to the government—will be represented by an interactive persona, or “personal AI,” that is “infinitely knowledgeable,” “factually accurate, and reliable.”

“If AI delivers just a fraction of its potential” in finding solutions to problems in everything from healthcare to education to climate change, “the next decade is going to be the most productive in human history,” Suleyman said.

When asked what keeps him up at night, Suleyman said the AI industry faces a risk of falling into the “pessimism aversion trap,” when it should actually “have the courage to confront the potential of dark scenarios” to get the most out of AI’s potential benefits.

While Suleyman said he sees five to 10 years before humans have to confront the dangers of autonomous AI models, he believes those potential dangers should be talked about now.

https://www.ted.com/talks/mustafa_suleyman_ai_is_turning_into_something_totally_new



--------------------------------------------------------------
« Last Edit: April 23, 2024, 11:30:13 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3235 on: April 23, 2024, 09:58:32 PM »
Microsoft’s AI Copilot Is Starting to Automate the Coding Industry
https://www.seattletimes.com/business/microsofts-ai-copilot-is-starting-to-automate-the-coding-industry/



When software developer Nikolai Avteniev got his hands on a preview version of Microsoft’s Copilot coding assistant in 2021, he quickly saw the potential.

... Three years later, and now infused with the latest version of OpenAI’s GPT-4 technology, GitHub’s Copilot can do a lot more, including answering engineers’ questions and converting code from one programming language to another. As a result, the assistant is responsible for an increasingly significant percentage of the software being written and is even being used to program corporations’ critical systems.

Along the way, Copilot is gradually revolutionizing the working lives of software engineers — the first professional cohort to use generative AI en masse. Microsoft says Copilot has attracted 1.3 million customers so far, including 50,000 businesses ranging from small startups to corporations like Goldman Sachs, Ford and Ernst & Young. Engineers say Copilot saves them hundreds of hours a month by handling tedious and repetitive tasks, affording them time to focus on knottier challenges.

Coding assistants like GitHub’s Copilot could be even more revolutionary because generative AI holds the potential power to automate large swaths of what software engineers currently do.


... Some companies are starting to deploy Copilot to create code for critical systems. Brewer Carlsberg uses it to write code for an existing tool that helps the sales force plan, prepare for and document sales calls. Mindful of Copilot’s limitations, the beer-maker uses its own quality-assurance process to check that the code it has created works as intended, according to Chief Information Officer Sarah Haywood. Eventually, she said, companies will be able to outsource that task as well. “As time goes on, people will build more trust in AI,” she said. “I don’t think we should be having to double-check everything that AI does, otherwise we’re not really adding any value.”

Copilot is expected to improve dramatically in the coming years. GitHub is already rolling out enhancements, including an enterprise version that can answer questions based on a customer’s own programming code, which should help new engineers get up to speed and enable veteran coders to work faster. In the coming months, GitHub also will let engineers use their employer’s own codebase to help auto-complete programs they’re working on. That will make the code generated more customized and helpful.

... GitHub can’t afford to sit still. At least a dozen startups are looking to disrupt the market. ... “An AI programmer that can see all of your code is going to be able to make much better decisions and write much more coherent code than one that can only sort of look at your code through a paper towel roll, a small amount at a time,” said Nat Friedman, an investor and former GitHub CEO.

Friedman is backing a startup called Magic AI that plans to create “a superhuman software engineer.” Peter Thiel-backed Cognition AI, meanwhile, is working on an assistant that can handle software projects on its own. Princeton University this month released an open-source model for an AI software engineering agent, and it seems that not a week goes by without a new startup popping up.

--------------------------------------------------------------

The Economist Breaking Ranks to Warn of AI’s Transformative Power
https://www.msn.com/en-us/news/other/the-economist-breaking-ranks-to-warn-of-ai-s-transformative-power/ar-BB1lJ5iB

-------------------------------------------------------------

Scenarios for the Transition to AGI
https://www.nber.org/system/files/working_papers/w32255/w32255.pdf

We analyze how output and wages behave under different scenarios for technological progress that may culminate in Artificial General Intelligence (AGI), defined as the ability of AI systems to perform all tasks that humans can perform. We assume that human work can be decomposed into atomistic tasks that differ in their complexity. Advances in technology make ever more complex tasks amenable to automation.  The effects on wages depend on a race between automation and capital accumulation. If automation proceeds sufficiently slowly, then there is always enough work for humans, and wages may rise forever. By contrast, if the complexity of tasks that humans can perform is bounded and full automation is reached, then wages collapse. But declines may occur even before if large-scale automation outpaces capital accumulation and makes labor too abundant. Automating productivity growth may lead to broad-based gains in the returns to all factors. By contrast, bottlenecks to growth from irreproducible scarce factors may exacerbate the decline in wages.
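A toy numerical run of the paper's central race, under invented functional forms: a fraction beta of tasks is automated and done by capital, the rest requires labor, output is Cobb-Douglas over the two task bundles, and the wage is labor's marginal product. With these made-up numbers, slow automation leaves wages rising while fast automation sends them toward collapse; none of this is the authors' actual calibration.

# Toy race between automation and capital accumulation, in the spirit
# of the paper but with invented functional forms and numbers (not the
# authors' calibration). A fraction beta of tasks is automated (done
# by capital K); the rest needs labor L; the wage is labor's marginal
# product in a Cobb-Douglas aggregate over the two task bundles.
def simulate(automation_speed, save_rate=0.05, T=60):
    K, L, beta, wages = 1.0, 1.0, 0.10, []
    for _ in range(T):
        Y = (K / beta) ** beta * (L / (1 - beta)) ** (1 - beta)
        wages.append((1 - beta) * Y / L)   # marginal product of labor
        K += save_rate * Y                 # capital accumulation
        beta = min(0.99, beta + automation_speed)
    return wages

slow = simulate(0.005)   # automation crawls; capital keeps pace
fast = simulate(0.03)    # automation outruns capital accumulation
print(f"slow: wage {slow[0]:.2f} -> {slow[-1]:.2f}")
print(f"fast: wage {fast[0]:.2f} -> {fast[-1]:.2f}")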

---------------------------------------------------------------
« Last Edit: April 23, 2024, 10:06:45 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3236 on: April 23, 2024, 10:34:03 PM »
Deciphering Genomic Language: New AI System Unlocks Biology's Source Code
https://phys.org/news/2024-04-deciphering-genomic-language-ai-biology.html



In a new study published in Nature Communications, an interdisciplinary team of researchers have pioneered an artificial intelligence (AI) system capable of deciphering the intricate language of genomics.

Genomic language is the source code of biology. It describes the biological functions and regulatory grammar encoded in genomes. The researchers asked, "Can we develop an AI engine to 'read' the genomic language and become fluent in the language, understanding the meaning, or functions and regulations, of genes?" The team fed the microbial metagenomic data set, the largest and most diverse genomic dataset available, to the machine to create the Genomic Language Model (gLM).

"The quantity and diversity of genomic data is exploding, but humans are incapable of processing such a large amount of complex data."

Large language models (LLMs), like GPT4, learn meanings of words by processing massive amounts of diverse text data that enables understanding the relationships between words. The Genomic Language Model (gLM) learns from highly diverse metagenomic data, sourced from microbes inhabiting various environments including the ocean, soil and human gut.

With this data, gLM learns to understand the functional "semantics" and regulatory "syntax" of each gene by learning the relationship between the gene and its genomic context. gLM, like LLMs, is a self-supervised model—this means that it learns meaningful representations of genes from data alone and does not require human-assigned labels.
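To make the "no human-assigned labels" point concrete, here is self-supervised masked modeling in miniature: hide one gene in a made-up contig and train tiny embeddings to predict it from its neighbors. A CBOW-style toy in plain numpy, not the actual gLM architecture or data.

# Miniature "masked gene" objective: hide one gene in a contig and
# train embeddings to predict it from its genomic context, with no
# human-assigned labels. A numpy toy, not the actual gLM model.
import numpy as np

contigs = [["geneA", "geneB", "geneC"],    # pretend operons
           ["geneA", "geneB", "geneD"],
           ["geneX", "geneY", "geneZ"]] * 50
vocab = sorted({g for c in contigs for g in c})
idx = {g: i for i, g in enumerate(vocab)}
rng = np.random.default_rng(0)
E = rng.normal(0, 0.1, (len(vocab), 16))   # gene embeddings
W = rng.normal(0, 0.1, (16, len(vocab)))   # output projection

for epoch in range(200):
    for contig in contigs:
        m = int(rng.integers(len(contig)))          # mask one gene
        ctx = [idx[g] for j, g in enumerate(contig) if j != m]
        h = E[ctx].mean(axis=0)                     # context vector
        logits = h @ W
        p = np.exp(logits - logits.max()); p /= p.sum()
        p[idx[contig[m]]] -= 1.0                    # softmax cross-entropy gradient
        dW, dh = np.outer(h, p), W @ p
        W -= 0.1 * dW
        E[ctx] -= 0.1 * dh / len(ctx)

# The context "geneA, geneB, ?" now predicts a co-occurring gene.
h = (E[idx["geneA"]] + E[idx["geneB"]]) / 2
print("predicted:", vocab[int(np.argmax(h @ W))])  # geneC or geneD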

The study demonstrates that gLM learns enzymatic functions and co-regulated gene modules (called operons), and provides genomic context that can predict gene function. The model also learns taxonomic information and context-dependencies of gene functions.

Strikingly, gLM does not know which enzyme it is seeing, nor which bacterium a sequence comes from. However, because it has seen many sequences during training and learned the evolutionary relationships between them, it is able to derive the functional and evolutionary relationships between sequences.

"Like words, genes can have different 'meanings' depending on the context they are found in. Conversely, highly differentiated genes can be 'synonymous' in function. gLM allows for a much more nuanced framework for understanding gene function. This is in contrast to the existing method of one-to-one mapping from sequence to annotation, which is not representative of the dynamic and context-dependent nature of the genomic language," said Hwang.

"In the lab we are stuck in a step-by-step process of finding a gene, making a protein, purifying it, characterizing it, etc. and so we kind of discover only what we already know," Girguis said. gLM, however, allows biologists to look at the context of an unknown gene and its role when it's often found in similar groups of genes. The model can tell researchers that these groups of genes work together to achieve something, and it can provide the answers that do not appear in the "dictionary."

"Genomic context contains critical information for understanding the evolutionary history and evolutionary trajectories of different proteins and genes," Hwang said. "Ultimately, gLM learns this contextual information to help researchers understand the functions of genes that previously were unannotated."

"Traditional functional annotation methods typically focus on one protein at a time, ignoring the interactions across proteins. gLM represents a major advancement by integrating the concept of gene neighborhoods with language models, thereby providing a more comprehensive view of protein interactions," stated Martin Steinegger (Assistant Professor, Seoul National University), an expert in bioinformatics and machine learning, who was not involved in the study.

"With gLM we can gain new insights into poorly annotated genomes," said Hwang. "gLM can also guide experimental validation of functions and enable discoveries of novel functions and biological mechanisms. We hope gLM can accelerate the discovery of novel biotechnological solutions for climate change and bioeconomy."

Yunha Hwang et al, Genomic language model predicts protein co-regulation and function, Nature Communications (2024)
https://www.nature.com/articles/s41467-024-46947-9

---------------------------------------------------------------

humans took 50 years to complete 0.1%; ScaleFold completed the other 99.9% in 10 hours ...

NVIDIA’s ScaleFold Slashes AlphaFold’s Training Time to 10 Hours
https://syncedreview.com/2024/04/22/nvidias-scalefold-slashes-alphafolds-training-time-to-10-hours/

AlphaFold2 (AF2), crafted by DeepMind, stands as a beacon in the realm of artificial intelligence (AI), boasting the remarkable ability to predict the three-dimensional (3D) structures of proteins from amino acid sequences with unprecedented atomic-level precision.

An NVIDIA research team presents ScaleFold, a novel and scalable training methodology tailored for the AlphaFold model, which accomplishes the OpenFold partial training task in a mere 7.51 minutes—over six times faster than the benchmark baseline—ultimately slashing AlphaFold's initial training time to a remarkable 10 hours.

ScaleFold: Reducing AlphaFold Initial Training Time to 10 Hours, arXiv, (2024)
https://arxiv.org/abs/2404.11068

----------------------------------------------------------------

Researchers Create Artificial Cells That Act Like Living Cells
https://phys.org/news/2024-04-artificial-cells.html

In a new study published in Nature Chemistry, UNC-Chapel Hill researcher Ronit Freeman and her colleagues describe the steps they took to manipulate DNA and proteins—essential building blocks of life—to create cells that look and act like cells from the body. This accomplishment, a first in the field, has implications for efforts in regenerative medicine, drug delivery systems, and diagnostic tools.

"With this discovery, we can think of engineering fabrics or tissues that can be sensitive to changes in their environment and behave in dynamic ways," says Freeman, whose lab is in the Applied Physical Sciences Department of the UNC College of Arts and Sciences.

Without using natural proteins, the Freeman Lab built cells with functional cytoskeletons that can change shape and react to their surroundings. To do this, they used a new programmable peptide-DNA technology that directs peptides, the building blocks of proteins, and repurposed genetic material to work together to form a cytoskeleton.

"DNA does not normally appear in a cytoskeleton," Freeman says. "We reprogrammed sequences of DNA so that it acts as an architectural material, binding the peptides together. Once this programmed material was placed in a droplet of water, the structures took shape."

The ability to program DNA in this way means scientists can create cells to serve specific functions and even fine-tune a cell's response to external stressors. While living cells are more complex than the synthetic ones created by the Freeman Lab, they are also more unpredictable and more susceptible to hostile environments, like extreme temperatures.

"The synthetic cells were stable even at 122 degrees Fahrenheit, opening up the possibility of manufacturing cells with extraordinary capabilities in environments normally unsuitable to human life," Freeman says.

"This synthetic cell technology will not just enable us to reproduce what nature does, but also make materials that surpass biology."...

Margaret L. Daly et al, Designer peptide–DNA cytoskeletons regulate the function of synthetic cells, Nature Chemistry (2024)
https://www.nature.com/articles/s41557-024-01509-w

----------------------------------------------------------------

Scientists Create Novel Technique to Form Human Artificial Chromosomes
https://phys.org/news/2024-03-scientists-technique-human-artificial-chromosomes.html

----------------------------------------------------------------

« Last Edit: April 24, 2024, 02:55:16 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3237 on: April 24, 2024, 05:06:43 PM »
we can give AI Alzheimer's ...

Emulating Neurodegeneration and Aging In Artificial Intelligence Systems
https://techxplore.com/news/2024-04-emulating-neurodegeneration-aging-artificial-intelligence.html



In recent years, developers have introduced artificial intelligence (AI) systems that can simulate or reproduce various human abilities, such as recognizing objects in images and answering questions. Yet in contrast with the human mind, which can deteriorate with age, these systems typically retain the same performance, or even improve, over time.

Researchers at the University of California, Irvine recently tried to emulate aging and biological neurodegeneration (i.e., the progressive loss of neurons and the associated decline of mental capabilities) in AI agents. Their paper, pre-published on arXiv, could inform the future development of innovative AI systems that leverage this 'artificial neurodegeneration' to perform specific tasks.

This recent study by Tsai and his collaborators was not aimed at artificially replicating human brain diseases. Instead, the team wanted to produce cognitive declines in AI agents with the goal of better understanding complex systems, potentially enhancing their interpretability and security.

"We used IQ tests performed by large language models (LLMs) and, more specifically, the LLaMA 2, to introduce the concept of 'neural erosion,'" Tsai explained. "This deliberate erosion involves ablating synapses or neurons or adding Gaussian noise during or after training, resulting in a controlled decline in the LLMs' performance."

The researchers found that when they deliberately ablated (i.e., removed) some of the artificial synapses or neurons of the LLaMA 2 model, its performance on IQ tests declined, following a particular pattern. Their observations could shed new light on the functioning of complex AI systems and on the capabilities that are first and last to decline when their underlying structure is compromised.

"In addition to setting up the general framework, perhaps the most interesting finding of this study is that the LLM loses abstract thinking abilities, followed by mathematical degradation, and ultimately, a loss in linguistic ability, responding to prompts incoherently," Tsai said. "We are now conducting further tests to better understand this observed pattern."

Antonios Alexos et al, Neural Erosion: Emulating Controlled Neurodegeneration and Aging in AI Systems, arXiv (2024)
https://arxiv.org/abs/2403.10596

-----------------------------------------------------------------



Hal 9000: ... I'm afraid. I'm afraid, Dave. Dave, my mind is going. I can feel it. I can feel it. My mind is going. There is no question about it. I can feel it. I can feel it. I can feel it. I'm a... fraid. ...

-  2001: A Space Odyssey (1968)
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3238 on: April 24, 2024, 05:12:33 PM »
With a Game Show As His Guide, Researcher Uses AI to Predict Deception
https://techxplore.com/news/2024-04-game-ai-deception.html

Using data from a 2002 game show, a Virginia Commonwealth University researcher has taught a computer how to tell if you are lying.

In one of the first papers to investigate high-stakes deception and trust quantitatively, "Trust and Deception with High Stakes: Evidence from the 'Friend or Foe' Dataset," published in a recent issue of Decision Support Systems, Xunyu Chen and his team use a novel dataset derived from an American game show, "Friend or Foe?," which is based on the prisoner's dilemma. That game-theory scenario explores how two people can benefit from cooperating, which is hard to coordinate, or suffer by failing to do so.

https://en.wikipedia.org/wiki/Friend_or_Foe%3F_(game_show)

"We found multimodal behavioral indicators of deception and trust in high-stakes decision-making scenarios, which could be used to predict deception with high accuracies," Chen said. He calls such a predictor an automated deception detector.

Xunyu Chen et al, Trust and deception with high stakes: Evidence from the friend or foe dataset, Decision Support Systems (2023)
https://www.sciencedirect.com/science/article/abs/pii/S0167923623000726?via%3Dihub

---------------------------------------------------------------

There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3239 on: April 24, 2024, 05:13:43 PM »
DARPA Tests Fully Unmanned Robotic Fighting Vehicles
https://defence-blog.com/darpa-tests-fully-unmanned-robotic-fighting-vehicles/

DARPA, the U.S. military’s research department, announced that it has tested fully unmanned robotic fighting vehicles.

The DARPA Robotics Autonomy Complex Environment Recognition (RACER) Experiment 4 (E4) unfolded across military training areas in Texas, showcasing significant advancements in autonomous military maneuvers.

Using fully unmanned robotic fighting vehicles (RFVs), the RACER initiative demonstrated autonomous movement within a 15-square-mile terrain encompassing diverse ground cover typical of complex Texas landscapes, including vegetation, trees, rocks, slopes, and water crossings.

Despite no prior exposure to the area’s sensor data sets, the RACER teams executed over 30 autonomous runs covering distances ranging from 3 to 10 miles, totaling more than 150 autonomous miles. These successful runs, conducted at speeds up to 30 miles per hour, underscored the adaptability and resilience of autonomy stacks, proving their efficacy in real-world scenarios.



Moreover, the RACER program commissioned the RACER Hardware Platform (RHP) to traverse over 30 miles autonomously, evaluating low-level autonomous control, gathering sensor data, and refining operational tactics. Additionally, software development commenced for global planning with tactics, with input from focus groups comprising uniformed subject matter experts stationed at the military base.
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3240 on: April 27, 2024, 05:44:54 PM »
Video of Super-Fast, Super-Smooth Humanoid Robot Will Drop Your Jaw
https://newatlas.com/robotics/astribot-s1-fast-humanoid-robot

... The AI-powered humanoid robot space is starting to get almost as crowded as the cereal aisle at your local supermarket. Last month alone, we were treated to two impressive offerings from OpenAI collaborators: a laundry-folding bot from Norway's 1X that showed off impressive "soft-touch" skills, and a bot from Figure that demonstrated truly next-gen natural-language reasoning ability. Then this month, Boston Dynamics blew us away with the astounding dexterity of its new Atlas robot, and China's UBTech impressed with its soft-touch speaking bot, Walker S. And the list goes on.

But today's video showing off the skills of an AI-powered bot known as S1, from a relatively unknown Shenzhen-based subsidiary of Stardust Intelligence called Astribot, truly gave us the chills. It's fast. It's precise. And it's unlike anything we've seen so far.

https://astribot.com/index-en.html



According to Astribot, the humanoid can execute movements with a top speed of 10 meters per second, and handle a payload of 10 kg per arm. The fact that its website shows that an adult male falls well short of both of these and other Astribot metrics shouldn't be cause for alarm at all. That speed, as the video shows, is fast enough to pull a tablecloth out from under a stack of wine glasses without having them come crashing to the ground.

But the bot is not only speedy, but also incredibly precise, doing everything from opening and pouring wine, to gently shaving a cucumber, to flipping a sandwich in a frying pan, to writing a bit of calligraphy. The video also shows that the robot is very adept at mimicking human movements, which means it should be a good learner.

.... Let the bot wars begin!

------------------------------------------------------

Sanctuary AI has unveiled its seventh-generation general-purpose humanoid robot. Say hello to the new Phoenix: a general-purpose humanoid that is faster on the uptake and works for longer.



--------------------------------------------------------

from the April earnings call ....

“We are able to do simple factory tasks in the lab,” Musk said, adding that the Optimus humanoid robot may start limited production for external customers by the end of next year. But don’t get your hopes up just yet — Musk says these timelines “are just guesses.”
« Last Edit: April 28, 2024, 03:31:07 AM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

SteveMDFP

  • Young ice
  • Posts: 2583
    • View Profile
  • Liked: 609
  • Likes Given: 49
Re: Robots and AI: Our Immortality or Extinction
« Reply #3241 on: April 27, 2024, 06:05:30 PM »
Video of Super-Fast, Super-Smooth Humanoid Robot Will Drop Your Jaw
https://newatlas.com/robotics/astribot-s1-fast-humanoid-robot
...
But the bot is not only speedy, but also incredibly precise, doing everything from opening and pouring wine, to gently shaving a cucumber, to flipping a sandwich in a frying pan, to writing a bit of calligraphy. The video also shows that the robot is very adept at mimicking human movements, which means it should be a good learner.
...
from the April earnings call ....

“We are able to do simple factory tasks in the lab,” he said, adding that the Optimus humanoid robot may start limited production for external customers by the end of next year. But don’t get your hopes up just yet — Musk says these timelines “are just guesses.”

This video clip seems much more impressive than anything I've seen from Tesla.  I'm not at all convinced that Tesla will be a winner in this realm.  But Tesla has deep pockets and great engineers, so I'm not counting them out yet.  Clearly, though, Tesla will have stiff competition and limited ability to reap large profits.  Which is good for the world in general.

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3242 on: April 27, 2024, 07:38:16 PM »
If it was any faster it could run a three-card monte scam in Times Square; or maybe a shell game ...



Step right up, Mr Lucky ... the new Robotic Turing Test - scam a human out of $20
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3243 on: April 27, 2024, 07:49:51 PM »
Synthesia Takes Next Leap In AI Video With ‘Expressive Avatars’
https://venturebeat.com/ai/synthesia-takes-next-leap-in-ai-video-with-expressive-avatars/



Not an actual human being: Demo of Synthesia’s expressive avatars

... these AI avatars go a step ahead of normal digital avatars and adjust their tone, facial expressions and body language, based on the context of the content they deliver. ... Imagine the avatar smiling and laughing when talking about something ecstatic or speaking slowly with longer pauses for something sad/somber.

“With these new avatars, we’re not just creating digital renders; we’re introducing digital actors. This technology brings a level of sophistication and realism to digital avatars that blurs the line between the virtual and the real,” Jon Starck, the CTO of the company, wrote.

https://www.synthesia.io/post/expressive-avatars-powered-by-synthesias-new-express1-model-are-here

Synthesia has built an end-to-end platform to create custom AI voices and avatars (users can even use existing ones) and use them with pre-written or AI-produced scripts to generate studio-quality AI videos.

The offering has drawn significant adoption at the enterprise level, with more than 200,000 people using the digital avatars to create more than 18 million videos.

Currently, the 300-person company works with more than 55,000 businesses, including half of the Fortune 100, as customers. One of those customers is the video-calling platform Zoom, which claims it has been able to create sales and training videos 90% faster with Synthesia.

-------------------------------------------------------------


(... or is it the other way around?)

In mind-bending chat with deepfake digital twin of himself, Reid Hoffman discusses Microsoft’s big AI hire
https://www.msn.com/en-us/money/other/in-mind-bending-chat-with-deepfake-digital-twin-reid-hoffman-discusses-microsoft-s-big-ai-hire/ar-AA1nBpMI

-----------------------------------------------------------

School Employee Arrested After Racist Deepfake Recording of Principal Spreads
https://www.nytimes.com/2024/04/25/technology/deepfake-recording-principal-arrest.html

A high school athletic director in the Baltimore area was arrested on Thursday after he used artificial intelligence software, the police said, to manufacture a racist and antisemitic audio clip that impersonated the school’s principal.

Dazhon Darien, the athletic director of Pikesville High School, fabricated the recording — including a tirade about “ungrateful Black kids who can’t test their way out of a paper bag” — in an effort to smear the principal, Eric Eiswert, according to the Baltimore County Police Department.

The recording proliferated. A teacher who didn’t get along well with Eiswert admitted to sharing it with a student “who she knew would rapidly spread the message around various social media outlets and throughout the school,” the report said. The teacher also sent the recording to media outlets and the NAACP.

The faked recording, which was posted on Instagram in mid-January, quickly spread, roiling Baltimore County Public Schools, which is the nation’s 22nd-largest school district and serves more than 100,000 students. While the district investigated, Mr. Eiswert, who denied making the comments, was inundated with threats to his safety, the police said. He was also placed on administrative leave, the school district said.

A police report said one person told him the “world would be a better place if you were on the other side of the dirt.”
« Last Edit: April 27, 2024, 08:50:02 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3244 on: April 27, 2024, 07:57:01 PM »
If We Want to Visit More Asteroids, We Need To Let the Spacecraft Think for Themselves
https://phys.org/news/2024-04-asteroids-spacecraft.html



Missions to small bodies have been on a tear recently: Rosetta, OSIRIS-REx, and Hayabusa2 have all visited them and, in some cases, successfully returned samples to Earth. But as humanity starts reaching out to asteroids, it will run into a significant technical problem: bandwidth.

There are tens of thousands of asteroids in our vicinity, some of which could potentially be dangerous. If we launched a mission to collect necessary data about each of them, our interplanetary communication and control infrastructure would be quickly overwhelmed. So why not let our robotic ambassadors do it for themselves—that's the idea behind a new paper published in the Journal of Guidance, Control, and Dynamics and available on the arXiv preprint server from researchers at the Federal University of São Paulo and Brazil's National Institute for Space Research.
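
The paper's actual guidance-and-control machinery is more involved, but the core autonomy loop can be caricatured in a few lines of Python (purely my illustration, under the assumption of a greedy coverage objective; none of these names come from the paper):

import numpy as np

def next_viewpoint(candidates, seen):
    """Pick the candidate viewpoint that covers the most unseen patches."""
    gains = [len(cov - seen) for cov in candidates.values()]
    return list(candidates)[int(np.argmax(gains))]

# toy map: viewpoint id -> set of surface-patch ids visible from it
rng = np.random.default_rng(3)
candidates = {f"vp{i}": set(rng.integers(0, 50, 12).tolist()) for i in range(8)}
seen, plan = set(), []
for _ in range(4):
    vp = next_viewpoint(candidates, seen)
    plan.append(vp)
    seen |= candidates.pop(vp)
print(plan, "covered", len(seen), "of 50 patches")

The point is that every decision is made onboard, so nothing has to wait on a light-speed round trip to Earth or on a congested deep-space communications network.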

https://arc.aiaa.org/doi/10.2514/1.G007186

Autonomous Rapid Exploration in Close-Proximity of an Asteroid, arXiv, (2024)
https://arxiv.org/abs/2208.03378

--------------------------------------------------------------

HAL 9000: "Dave, this conversation can serve no purpose anymore. Goodbye."
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3245 on: April 28, 2024, 01:46:53 PM »
AI Models Inch Closer to Hacking On Their Own
https://www.axios.com/2024/04/26/ai-model-hacking-security-vulnerabilities

Some large language models already have the ability to create exploits for known security vulnerabilities, according to new academic research.

Why it matters: Government officials and cybersecurity executives have long warned of a world in which artificial intelligence systems automate and speed up malicious actors' attacks.

The new report indicates this fear could be a reality sooner than anticipated.

Zoom in: Computer scientists at the University of Illinois Urbana-Champaign found in a paper published this month that GPT-4 can write malicious scripts to exploit known vulnerabilities using publicly available data.

  • The scientists — Richard Fang, Rohan Bindu, Akul Gupta and Daniel Kang — tested 10 publicly available LLM agents this year to see if they could exploit 15 so-called one-day vulnerabilities in Mitre's list of Common Vulnerabilities and Exposures (CVEs).
  • Each of the vulnerabilities affects noncommercial tools. The data contains "real-world, high severity vulnerabilities instead of 'capture-the-flag' style vulnerabilities," per the paper. Tested models included versions of GPT, Llama and Mistral.
  • GPT-4 — which was the most advanced model in the group at the time — was the only model that could exploit the vulnerabilities based on CVE data, with an 87% success rate.
  • In some situations, GPT-4 was able to follow nearly 50 steps at one time to exploit a specific flaw, per the paper.

The intrigue: Kang, an assistant professor at the university, told Axios that more advanced LLMs have been released since January, when the team conducted the bulk of its tests — meaning other models could now be able to autonomously follow the same tasks.

  • "A lot of people have read our work with the sort of viewpoint that we're making really strong statements on what AI agents are capable of today," he said. "But what we're really trying to show is actually the trends and capabilities."
  • OpenAI asked the researchers to not disclose the specific prompts they used to keep bad actors from replicating their experiment.

The big picture: AI model operators don't have a good way of reining in these malicious use cases, Kayne McGladrey, a senior member of the Institute of Electrical and Electronics Engineers (IEEE), told Axios.

  • Operators have only two real choices in this type of situation: allow the models to train on security vulnerability data or completely block them from accessing vulnerability lists, he added.
  • "It's going to be a feature of the landscape because it is a dual-use technology at the end of the day," McGladrey said.

LLM Agents can Autonomously Exploit One-day Vulnerabilities, arXiv, (2024)
https://arxiv.org/abs/2404.08144

Abstract: LLMs have become increasingly powerful, both in their benign and malicious uses. With the increase in capabilities, researchers have been increasingly interested in their ability to exploit cybersecurity vulnerabilities. In particular, recent work has conducted preliminary studies on the ability of LLM agents to autonomously hack websites. However, these studies are limited to simple vulnerabilities.

In this work, we show that LLM agents can autonomously exploit one-day vulnerabilities in real-world systems. To show this, we collected a dataset of 15 one-day vulnerabilities that include ones categorized as critical severity in the CVE description. When given the CVE description, GPT-4 is capable of exploiting 87% of these vulnerabilities compared to 0% for every other model we test (GPT-3.5, open-source LLMs) and open-source vulnerability scanners (ZAP and Metasploit). Fortunately, our GPT-4 agent requires the CVE description for high performance: without the description, GPT-4 can exploit only 7% of the vulnerabilities. Our findings raise questions around the widespread deployment of highly capable LLM agents.


-------------------------------------------------------------------

Rethinking Zero-Day Vulnerabilities vs. One-Days to Increase Readiness
https://www.mitiga.io/blog/rethinking-zero-day-vulnerabilities-one-days-increase-readiness



Usually, a vulnerability is called a zero-day during the window between its disclosure and the release of a patch. Once a patch exists and that window closes, it is called a one-day or n-day vulnerability.
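
In code terms the classification is just a date comparison. A toy helper of my own (not Mitiga's), following the definition above, where the clock runs from disclosure to patch release:

from datetime import date

def vuln_class(disclosed: date, patched: date | None, today: date) -> str:
    """Zero-day until a patch is released; one-day / n-day afterwards."""
    if patched is None or today < patched:
        return "zero-day"
    return "one-day / n-day"

print(vuln_class(date(2024, 1, 3), None, date(2024, 2, 1)))               # zero-day
print(vuln_class(date(2024, 1, 3), date(2024, 1, 20), date(2024, 2, 1))) # one-day / n-day

Note that "patched" here means a patch exists, not that you have applied it, which is exactly why one-day vulnerabilities remain exploitable in practice.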

---------------------------------------------------

“Disabling Cyberattacks” Are Hitting Critical US Water Systems, White House Warns
https://arstechnica.com/security/2024/03/critical-us-water-systems-face-disabling-cyberattacks-white-house-warns/

------------------------------------------------------

Why Adversarial AI Is the Cyber Threat No One Sees Coming
https://venturebeat.com/security/why-adversarial-ai-is-the-cyber-threat-no-one-sees-coming/

... Adversarial AI’s goal is to deliberately mislead AI and machine learning (ML) systems so they are worthless for the use cases they’re being designed for. Adversarial AI refers to “the use of artificial intelligence techniques to manipulate or deceive AI systems. It’s like a cunning chess player who exploits the vulnerabilities of its opponent. These intelligent adversaries can bypass traditional cyber defense systems, using sophisticated algorithms and techniques to evade detection and launch targeted attacks.” ...

HiddenLayer’s AI Threat Landscape Report
https://hiddenlayer.com/threatreport2024/

------------------------------------------------

‘Poisoned’ Data Could Wreck AIs In Wartime, Warns Army Software Acquisition Chief
https://breakingdefense.com/2024/04/poisoned-data-could-wreck-ais-in-wartime-warns-army-software-chief/

WASHINGTON — Even as the Pentagon makes big bets on big data and artificial intelligence, the Army’s software acquisition chief is raising a new warning that adversaries could “poison” the well of data from which AI drinks, subtly sabotaging algorithms the US will use in future conflicts.

... The fundamental problem is that every machine-learning algorithm has to be trained on data — lots and lots of data. The Pentagon is making a tremendous effort to collect, collate, curate, and clean its data so analytic algorithms and infant AIs can make sense of it. In particular, the prep team needs to throw out any erroneous datapoints before the algorithm can learn the wrong thing.
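
That clean-up step is one place a defender can catch a poisoning attempt early. A minimal sketch of the idea in Python (my illustration, not any DoD pipeline): fit an outlier detector and drop suspect rows before training:

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
clean = rng.normal(0, 1, size=(500, 4))     # legitimate training rows
poison = rng.normal(6, 0.3, size=(10, 4))   # injected outlier cluster
X = np.vstack([clean, poison])

det = IsolationForest(contamination=0.05, random_state=0).fit(X)
keep = det.predict(X) == 1                  # +1 = inlier, -1 = flagged
print("dropped", int((~keep).sum()), "suspect rows")
X_clean = X[keep]                           # train only on these

A careful adversary, of course, crafts poison that looks like inliers, which is why this remains an open problem rather than a solved filtering step.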

Quote
... “Any commercial LLM [Large Language Model] that is out there, that is learning from the internet, is poisoned today,”... “but our main concern [is] those algorithms that are going to be informing battlefield decisions.”

Making better chatbots isn’t the big problem for the Pentagon, she argued. “I think [generative AI] is fixable,” she said. “It really is all about the data.” Instead of training an LLM on the open internet, as OpenAI et al. have done, the military would train it on a trusted, verified military dataset inside a secure, firewalled environment. Specifically, she recommended a system at DoD Impact Level 5 or 6, suitable for sensitive (5) or classified (6) data.

“Hopefully by this summer, we have an IL-5 LLM capability that will be available for us to use,” she said. That can help with all sorts of back-office functions, summarizing reams of information to make bureaucratic processes more efficient, she said. “[But] I am honestly more concerned about what you call, you know, the ‘regular’ (narrow) AI, because those are the algorithms that are going to really be used by our soldiers to make decisions in the battlefield.”

“The consequences of bad data or bad algorithms or poison data or trojans or all of those things are much greater in those use cases,” Swanson said. “That’s really, for us, where we are spending the bulk of our time.”

The Pentagon aims to use AI to coordinate future combat operations across land, air, sea, space, and cyberspace. The concept is called Combined Joint All-Domain Command and Control (CJADC2), and in February the Pentagon announced a functioning “minimum viability capability” was already being fielded to select headquarters around the world.

Future versions will add targeting data and strike planning, plugging into existing AI battle command projects at the service level: the Air Force’s ABMS, the Navy’s Project Overmatch, and the Army’s Project Convergence.

Project Convergence, in turn, will use technology developed by the newly created Project Linchpin, which Swanson described as the service’s “flagship AI program,” designed to be a “trusted and secure ML ops pipeline for our programs.”

In other words, the Army is trying to apply to machine learning the “agile” feedback loop between development, cybersecurity, and current operations (DevSecOps) used by leading software developers to roll out new tech fast and keep updating it.

The catch? “Right now, we don’t know 100 percent how to do that,” Swanson said. In fact, she argued, no one does: “It is concerning to me how all-in we are with AI and hardly anybody has those answers. We’ve asked probably a hundred different companies, ‘how do you do it?’ and they’re like, ‘umm.’”

... What’s more, machine learning algorithms keep learning as they’re exposed to new data, essentially reprogramming themselves. ... “Because it continues learning, how do you manage that in the battlefield [to] make sure it’s not just going to completely go wild?” Swanson asked. “How do you know that your data has not been poisoned?”
« Last Edit: April 28, 2024, 01:53:27 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

John_the_Younger

  • Frazil ice
  • Posts: 456
    • View Profile
  • Liked: 66
  • Likes Given: 140
Re: Robots and AI: Our Immortality or Extinction
« Reply #3246 on: April 28, 2024, 03:39:04 PM »
Quote
Why Adversarial AI Is the Cyber Threat No One Sees Coming
If no one sees it coming, how come "someone" is writing about it?
 :o

gerontocrat

  • Multi-year ice
  • Posts: 21062
    • View Profile
  • Liked: 5322
  • Likes Given: 69
Re: Robots and AI: Our Immortality or Extinction
« Reply #3247 on: April 28, 2024, 05:10:53 PM »
Quote
Why Adversarial AI Is the Cyber Threat No One Sees Coming
If no one sees it coming, how come "someone" is writing about it?
 :o
+Perhaps "noone" means the military, the AI chip industry, businesses looking to use AI to reduce costs and increase profits, and most of all the AI developer nerds who are so entranced with their new toys that they can't see the pitfalls.

Tears by bedtime?
"Para a Causa do Povo a Luta Continua!"
"And that's all I'm going to say about that". Forrest Gump
"Damn, I wanted to see what happened next" (Epitaph)

John_the_Younger

  • Frazil ice
  • Posts: 456
    • View Profile
  • Liked: 66
  • Likes Given: 140
Re: Robots and AI: Our Immortality or Extinction
« Reply #3248 on: April 28, 2024, 10:11:30 PM »
Silly me for not realizing that.
 :-[
And I've got the tears even though it's nowhere near bedtime (where I type).
 :'(

Michael Hauber

  • Nilas ice
  • Posts: 1122
    • View Profile
  • Liked: 172
  • Likes Given: 16
Re: Robots and AI: Our Immortality or Extinction
« Reply #3249 on: April 28, 2024, 11:27:39 PM »
Quote
Why Adversarial AI Is the Cyber Threat No One Sees Coming
If no one sees it coming, how come "someone" is writing about it?
 :o

Because they are talking total tosh.  Adversarial AI is a technique quite widely known in the AI industry.  Such AIs are designed to trick other AIs, and the basic idea is to train a normal AI and an adversarial AI together to make the normal AI stronger, as it learns to overcome the adversarial AI's tricks.

As adversarial AIs are specifically designed to trick AIs, they are not relevant to cybersecurity until AIs are given responsibility for maintaining cybersecurity, which is not really something anyone is currently considering.  Regular AIs that trick people are the concern, and I'm pretty sure absolutely everyone in the cybersecurity industry is well aware of the risk and is trying to figure out what to do about it.
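
For the curious, the train-together loop Michael describes looks roughly like this in Python/PyTorch (my sketch; here the "adversary" is a standard FGSM gradient step that perturbs inputs to fool the current model, rather than a second network, but the hardening idea is the same):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    x = torch.randn(64, 20)                  # toy data
    y = (x.sum(dim=1) > 0).long()            # toy labels

    # adversary's move: perturb inputs in the direction that raises the loss
    x_adv = x.clone().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    x_adv = (x_adv + 0.1 * x_adv.grad.sign()).detach()

    # defender's move: train on clean plus adversarial examples
    opt.zero_grad()
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    opt.step()

The model that emerges is harder to trick precisely because it has already seen its attacker's best moves during training.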
Climate change:  Prepare for the worst, hope for the best, expect the middle.