
Author Topic: Robots and AI: Our Immortality or Extinction  (Read 504289 times)

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3600 on: August 21, 2024, 04:28:06 PM »
Luma AI’s Dream Machine 1.5 Creates Mind-blowing Videos from Simple Text
https://venturebeat.com/ai/luma-ai-dream-machine-1-5-creates-mind-blowing-videos-from-simple-text/

Luma AI, a San Francisco-based startup, released Dream Machine 1.5 on Monday, marking a significant advancement in AI-powered video generation. This latest version of their text-to-video model offers enhanced realism, improved motion tracking, and more intuitive prompt understanding.

https://lumalabs.ai/dream-machine

The upgrade comes just two months after Dream Machine’s initial launch, highlighting the rapid pace of innovation in the AI video space.

One of the most notable improvements is the model’s ability to render text within generated videos, a feature that has traditionally challenged AI models. This advancement opens new possibilities for creating dynamic title sequences, animated logos, and on-screen graphics for presentations.

https://x.com/LumaLabsAI/status/1825938434839687552

There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3601 on: August 21, 2024, 04:50:36 PM »
Skyfire Launches to Let Autonomous AI Agents Spend Money On Your Behalf
https://venturebeat.com/ai/skyfire-launches-to-let-autonomous-ai-agents-spend-money-on-your-behalf/

A new San Francisco startup, Skyfire, is launching today in beta to become the “Visa for AI” by allowing you to equip autonomous AI agents made by other companies with your money and let them spend it while they go off and work for you.

“What we’re enabling is AI agents to be able to autonomously make payments, receive payments, hold balances,” said Skyfire’s co-founder and CEO Amir Sarhangi, in a video call interview with VentureBeat earlier this week. “Essentially, think of us as FinTech infrastructure for AI.”

Why should you be giving AI agents money to spend on your behalf through Skyfire?

Skyfire claims it is offering the world’s first payment network designed to support fully autonomous transactions across AI agents, large language models (LLMs), data platforms, and various service providers.

This development marks a significant step toward creating a new global economy where AI agents can function as independent economic actors, capable of making and receiving payments without human intervention.

“We really see that next million users for a lot of these [vendor] companies coming from AI agents being the customer,” said Sarhangi.

But there’s a big problem: if you want AI agents to do more advanced activities such as help you shop, book flights, or build new apps, services, websites, and businesses — there’s a good chance that they will come to a place where they need to pay for plane tickets or web hosting or some other product or service, and can’t. That’s currently where their utility ends.

“The problem is that AI agents don’t have identities, they don’t have bank accounts, and they can’t do those things because that identity and access to financial services is basically not possible for them,” said Sarhangi. “So that’s what we’re unlocking.”

Skyfire has set up a new, secure payments system that allows end-users to give AI agents a set amount of money and have them spend it on their behalf.

Key features include (a hypothetical code sketch follows the list):

  • Open, Global Payments Protocol: Allows AI Agents to access LLMs, datasets, and API services without requiring traditional payment methods like subscriptions or credit cards. This open protocol ensures global interoperability and seamless transactions.
  • Automated Budgets and Control: Developers and their customers can set specific spending limits, ensuring that AI Agents operate within predefined business parameters. This feature supports both single transactions and ongoing campaigns.
  • AgentID & History Verification: Skyfire provides open identifiers for AI Agents, ensuring secure authentication and authorization. The system also maintains a history of transactions, offering an additional layer of trust and verification for both Agents and service providers.
  • Verification Service: The platform includes a verification service for Agent developers and businesses, granting users visibility and control over network connections. This helps maintain a secure and trustworthy ecosystem for autonomous transactions.
  • Funding On-Ramps: AI Agents can be funded through traditional banking methods or stablecoins, with all transactions completed instantly.
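To make the budget idea concrete, here is a purely illustrative Python sketch — none of these names come from Skyfire’s actual API, which the article doesn’t document — of a toy agent wallet that enforces a spending cap and keeps an auditable transaction history:

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentWallet:
    """Toy wallet: a per-agent budget plus an auditable payment history."""
    agent_id: str                       # stands in for Skyfire's "AgentID"
    budget_usd: float                   # cap set by the developer or end-user
    history: list = field(default_factory=list)

    def pay(self, payee: str, amount_usd: float) -> bool:
        spent = sum(tx["amount"] for tx in self.history)
        if spent + amount_usd > self.budget_usd:
            return False                # agent hits its predefined limit
        self.history.append({
            "payee": payee,
            "amount": amount_usd,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return True

wallet = AgentWallet(agent_id="agent-123", budget_usd=50.0)
assert wallet.pay("api.example.com", 12.0)        # within budget: allowed
assert not wallet.pay("hosting.example", 45.0)    # would exceed the cap: refused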

------------------------------------------------------



------------------------------------------------------

AI Researchers Call for ‘Personhood Credentials’ As Bots Get Smarter
https://www.washingtonpost.com/politics/2024/08/21/human-bot-personhood-credentials-worldcoin/

Telling humans from bots online is already difficult, as any major social network could attest. But without better ways to make that distinction, advances in artificial intelligence mean AI bots could “overwhelm the internet” in the years to come, researchers from the technology industry and academia warn in a new paper.

In the paper, published online last week but not yet peer-reviewed, a group of 32 researchers from OpenAI, Microsoft, Harvard and other institutions call on technologists and policymakers to develop new ways to verify humans without sacrificing people’s privacy or anonymity. They propose a system of “personhood credentials” by which people prove offline that they physically exist as humans and receive an encrypted credential that they can use to log in to a wide range of online services.

The paper shows industry leaders and some like-minded academics laying the groundwork for a future that once seemed like the stuff of science fiction.

In Philip K. Dick’s novel “Do Androids Dream of Electric Sheep?” — adapted for the screen as “Blade Runner” — people rely on sophisticated tests to differentiate humans from AI “replicants” that walk and talk just like people. We’re not there yet: Passing as humans in the real world is one thing today’s bots can’t do.

But the authors argue that existing systems for proving one’s humanity, such as requiring users to submit a selfie or solve a CAPTCHA puzzle, are increasingly “inadequate against sophisticated AI.” In the near future, they add, even holding a video chat with someone may not be enough to tell whether they’re the person they claim to be, another person disguising themselves with AI or “even a complete AI simulation of a real or fictitious person.”

On the other hand, strict identity-verification systems that link people’s identities to their online activity risk compromising users’ privacy and free expression rights, the paper argues. The researchers propose instead that personhood credentials should allow people to interact online anonymously without their activities being tracked.
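The paper doesn’t prescribe a specific cryptographic scheme, but a classic way to get “verified once, anonymous afterwards” is a blind signature: the issuer signs a credential token without ever seeing it, so later logins can’t be linked back to enrollment. A toy Python sketch, using textbook RSA with deliberately tiny, insecure parameters, for illustration only:

import secrets
from math import gcd

# Toy RSA key. 999983 and 1000003 are primes; a real deployment would use
# 2048-bit keys and a standardized scheme (e.g. blind RSA as in RFC 9474).
p, q = 999983, 1000003
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))      # issuer's private signing exponent

token = secrets.randbelow(n - 2) + 2   # user's secret personhood token
while True:                            # blinding factor coprime to n
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break

blinded = (token * pow(r, e, n)) % n   # user blinds the token...
blind_sig = pow(blinded, d, n)         # ...issuer signs after the offline human check
sig = (blind_sig * pow(r, -1, n)) % n  # ...user unblinds the signature

# Any service can now check personhood; the issuer never saw (token, sig),
# so presentations can't be linked back to the enrollment event.
assert pow(sig, e, n) == token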



The paper dovetails with an idea that some in the technology industry are already working on.

Notably, OpenAI CEO Sam Altman’s start-up, Worldcoin, is scanning people’s irises in exchange for a digital passport that both verifies they’re human and entitles them to shares of a cryptocurrency. He has pitched the project as a way to defend humanity from AI bots while enabling economic policies such as universal basic income for a future in which jobs are scarce. But critics are skeptical of the project’s promises and motives, saying it has exploited poor people, and several national governments have investigated or banned it.

While potentially valuable, verifying personhood would address just one of many problems in a world full of sophisticated AI agents, Boyle said. If artificial intelligence systems can convincingly impersonate humans, he mused, presumably they could also hire humans to do their bidding.

Chris Gilliard, an independent privacy researcher and surveillance scholar, said it’s worth asking why the onus should be on individuals to prove their humanity rather than on the AI companies to prevent their bots from impersonating humans, as some experts have suggested.

“A lot of these schemes are based on the idea that society and individuals will have to change their behaviors based on the problems introduced by companies stuffing chatbots and large language models into everything rather than the companies doing more to release products that are safe,” Gilliard said.

Personhood credentials: Artificial intelligence and the value of privacy-preserving tools to distinguish who is real online, arXiv, (2024)
https://arxiv.org/pdf/2408.07892
« Last Edit: August 22, 2024, 06:24:50 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3602 on: August 21, 2024, 05:30:32 PM »
How Piramidal Is Using AI to Decode the Human Brain
https://venturebeat.com/ai/exclusive-how-piramidal-is-using-ai-to-decode-the-human-brain/

The human brain is ultimately one of the last frontiers — a paradoxical black box that we can’t even begin to understand ourselves.

But what if, just as paradoxically, AI could interpret the complexities of the brain to help identify and diagnose some of our most serious diseases?

That’s exactly what Y Combinator-backed startup Piramidal has set out to do. The company is building a first-of-its-kind foundation model that can detect and understand complex “brain language” or brainwaves. It can be fine-tuned to a range of electroencephalography (EEG) use cases, and has implications in other areas of medicine, as well as in pharmacology and even consumer products.

https://piramidal.ai/

“We’re training an AI model on brainwave data the same way ChatGPT is trained on text,” Kris Pahuja, Piramidal co-founder, told VentureBeat. “It is the largest model ever trained on EEG data.”

Today, when patients with brain-related conditions seek medical treatment, their EEG brain waves are mapped, then inspected by neurologists. But this can be highly time-consuming and error-prone, with an error rate of up to 30%, according to Pahuja.

Compounding this is the fact that there is an “extreme shortage” of neurologists — particularly those who can interpret EEGs — in the U.S. Pahuja pointed out that patients’ brain waves are recorded for several days or weeks when they are in the intensive care unit (ICU) — and no human could possibly go through all that. Instead, physicians take random samples and perform quick pattern recognition, but this approach can miss many diagnoses.

EEG data is also incredibly complex, difficult to interpret and has significant signal variability. Pahuja pointed out that when someone is looking at an MRI image, for instance, they are looking at an image in one distinct period of time.

But an EEG, by contrast, is “very difficult to read, it changes thousands of times a second across 10 to 20 channels,” said Pahuja. He noted that even specialized doctors can miss many details, and some may only be trained in certain areas such as epilepsy or brain injury, so they don’t know all the markers to look for.



“We want to train our model to be at the level of an expert neurologist, but also not miss anything while an EEG is going on,” said Pahuja.

The company is first fine-tuning its model for the neuro ICU; that product will be able to ingest EEG data and interpret it in near-real time, providing outputs to medical staff on the occurrence and diagnosis of disorders such as seizures, traumatic brain bleeds, inflammation and other brain dysfunctions.
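Piramidal hasn’t published its architecture, so the following is only a plausible shape for the pipeline the article describes — pretrain an encoder on raw multi-channel EEG, then fine-tune a small classification head for ICU events. A minimal PyTorch sketch; channel counts, window sizes and event classes are all assumptions:

import torch
import torch.nn as nn

class EEGBackbone(nn.Module):
    """Toy stand-in for a pretrained EEG foundation model.
    Input shape: (batch, channels, samples) -- e.g. 20 scalp channels,
    10-second windows at 256 Hz. All sizes here are assumptions."""
    def __init__(self, channels=20, dim=128):
        super().__init__()
        self.encoder = nn.Sequential(                 # downsample the raw signal
            nn.Conv1d(channels, dim, kernel_size=64, stride=16), nn.GELU(),
            nn.Conv1d(dim, dim, kernel_size=8, stride=4), nn.GELU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):
        z = self.encoder(x).transpose(1, 2)           # (batch, time, dim)
        return self.transformer(z).mean(dim=1)        # pooled window embedding

# Fine-tuning: freeze the pretrained backbone, train only a small head
# to flag ICU events (the four classes are illustrative, not Piramidal's).
backbone = EEGBackbone()
for p in backbone.parameters():
    p.requires_grad = False
head = nn.Linear(128, 4)   # e.g. seizure / bleed / inflammation / normal

windows = torch.randn(8, 20, 2560)   # 8 ten-second windows, 20 channels @ 256 Hz
logits = head(backbone(windows))
print(logits.shape)                   # torch.Size([8, 4])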

“It is truly an assistant to the doctor.”

Automating analysis and enhancing understanding through large models could revolutionize personalized treatment and allow diseases to be predicted earlier in their progression, he noted. And, as wireless EEG sensors become more mainstream, models like Piramidal’s can enable the creation of personalized agents that “continuously measure and monitor brain health.”

“These agents will offer real-time insights into how patients respond to new treatments and how their conditions may evolve,” said Sakellariou.

... The revolutionary model was initially inspired by Sakellariou’s experiences in various EEG studies, ranging from psychedelics to sleep research — both as a subject and an observer. In these studies, he explained, a technician attaches electrodes to the scalp and the system records brainwaves.

... But for Piramidal, the ICU is just the start, according to its founders: Their model has significant potential beyond that niche area of medicine.

In the near future, it’s possible that humans will have the opportunity for “quantified introspection” through everyday devices such as earphones equipped with neural sensors, Sakellariou pointed out. For example, we could measure how stress levels decrease after reducing screen time, train ourselves to enhance meditation by monitoring relaxation levels in a closed loop, or boost memory during periods of “intense learning” through targeted auditory stimuli during specific sleep stages.

“All of this will be possible via personalized agents powered by large-scale models like ours,” said Sakellariou.

--------------------------------------------------------



--------------------------------------------------------

... boost memory during periods of “intense learning”... Hmm?

--------------------------------------------------------

Non-invasive Vagus Nerve Stimulation (nVNS) Is Effective at Accelerating Foreign Language Learning
https://markets.businessinsider.com/news/stocks/non-invasive-vagus-nerve-stimulation-nvns-is-effective-at-accelerating-foreign-language-learning-1033626175

ROCKAWAY, N.J., Aug. 01, 2024  -- electroCore, Inc., a commercial-stage bioelectronic medicine and wellness company, today announced that the Air Force Research Laboratories (AFRL) published a paper entitled “Transcutaneous Cervical Vagus Nerve Stimulation Enhances Second-Language Vocabulary Acquisition While Simultaneously Mitigating Fatigue and Promoting Focus” in Scientific Reports on July 26, 2024. The paper is based on a study that was conducted at the Defense Language Institute (DLI) in Monterey, CA, the U.S. Department of Defense’s premier language school. The study was supported by Defense Advanced Research Projects Agency (DARPA)/AFRL within the DARPA Targeted Neuroplasticity Training (TNT) program.

The study recruited 36 student participants from DLI’s Arabic school house (nVNS = 18, sham = 18). Each subject was assessed on day 1 to establish a baseline. On days 2-4, each subject self-administered two 2-minute nVNS stimulation treatments, one before and one after training. Assessments were taken on each treatment day; on day 5, when there was no treatment, assessments were conducted to check for carryover effects. The study showed a significant positive effect of nVNS over sham (p = 0.025) on language recall, suggesting that nVNS can significantly improve recall of a foreign language compared to sham. The improvement achieved through nVNS treatment on days 2-4 was maintained on day 5, demonstrating that the recall advantage that emerged during training was sustained after the completion of treatment.

All participants completed the AFRL Mood Questionnaire on each day (1-5) of the study. On the three a priori-selected scales of the questionnaire, participants receiving nVNS showed significant increases compared to participants receiving sham stimulation in energy (p = 0.036) and focus (p = 0.001) over the course of each training session. Their calm scores also trended toward improvement with nVNS.
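The press release reports the design (n = 18 per arm) and the p-values but not the raw scores. The comparison it describes is a standard two-sample test; a sketch with made-up placeholder numbers, purely to show the shape of the analysis:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder recall scores -- NOT the study's data (the press release
# doesn't include raw scores); just two arms with a modest difference.
nvns = rng.normal(loc=72, scale=10, size=18)    # active stimulation arm
sham = rng.normal(loc=64, scale=10, size=18)    # sham arm

t, p = stats.ttest_ind(nvns, sham)
print(f"t = {t:.2f}, p = {p:.3f}")   # the paper reports p = 0.025 for recall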

Dr. Richard McKinley, of the Air Force’s 711th Human Performance Wing, Human Effectiveness Directorate and an author of the paper, commented, “We are pleased to have successfully published the first randomized, double-blind sham-controlled trial demonstrating the ability of nVNS to accelerate the learning of Arabic vocabulary in students at the Defense Language Institute. Equally impressive were the improvements in the subject’s energy and mood despite the rigors of the training program. This study is consistent with other data that suggests that nVNS may be a viable tool to enhance warfighter training and resilience in a range of areas.”

“We congratulate and thank the teams at DLI and AFRL for the dedicated work on this study, as well as DARPA for sponsoring the study,” commented Dr. Peter Staats, Chief Medical Officer of electroCore. “Cognitive performance and skill acquisition are central to the mission of many institutions in a wide variety of sectors including educational, commercial, and military. This study suggests nVNS could accelerate these efforts.”

Transcutaneous cervical vagus nerve stimulation enhances second-language vocabulary acquisition while simultaneously mitigating fatigue and promoting focus, Scientific Reports, (2024)
https://www.nature.com/articles/s41598-024-68015-4

------------------------------------------------------------

Capt. Ramsey: Speaking of horses, did you ever see those Lipizzaner stallions?
...
Capt. Ramsey: Some of the things they do, uh, defy belief. Their training program is simplicity itself. You just stick a cattle prod up their ass and you can get a horse to deal cards.
Capt. Ramsey: Simple matter of voltage

- Crimson Tide - 1995


---------------------------------------------------------

10 years ago ... Science Fiction or Fact: Instant, 'Matrix'-like Learning
https://www.livescience.com/34020-matrix-learning-kung-fu.html

... This sort of indirect, subliminal learning could eventually translate into teaching someone how to, say, play piano or do a judo chop.

"It's not like 'The Matrix' - yet," said Takeo Watanabe, a professor of neuroscience at Boston University and lead author of the decoded-neurofeedback study. "But this can be developed to be a very strong tool which could realize some aspects of what was shown in the movie."

------------------------------------------------------------



-----------------------------------------------------------

Will EEG Be Able to Read Your Dreams? The Future of the Brain Activity Measure as It Marks 100 Years
https://medicalxpress.com/news/2024-08-eeg-future-brain-years.html

A survey, led by University of Leeds academics, saw respondents—with 6,685 years of collective experience—presented with possible future developments for EEG, ranging from those deemed "critical to progress" to the "highly improbable," and asked to estimate how long it might be before they were achieved. The results are published in the journal Nature Human Behaviour.

... Real-time, reliable diagnosis of brain abnormalities such as seizures or tumors is believed to be just 10–14 years away, while the probability of reading the content of dreams and long-term memories is judged to be more than 50 years away by some experts

It may be surprising to many that—according to the survey—within a generation we could all be carrying around our own personal, portable EEG.



One hundred years of EEG for brain and behaviour research, Nature Human Behaviour, (2024)
https://www.nature.com/articles/s41562-024-01941-5
« Last Edit: August 22, 2024, 11:37:31 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3603 on: August 21, 2024, 09:47:57 PM »
China’s Humanoid Robot Cooks Food, Plays Basketball, Even Does Kung Fu
https://pandaily.com/astribots-new-ai-robot-assistant-s1-officially-debuts-at-the-world-robot-conference/

Designed to mimic human decision-making and physical interaction, the Astribot S1 robot can handle tasks that would traditionally require human dexterity and judgment.



... but, can't climb stairs  :-\
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3604 on: August 22, 2024, 05:05:04 PM »
Meet Boardwalk Robotics' Addition to the Humanoid Workforce
https://spectrum.ieee.org/boardwalk-robotics-alex-humanoid

Alex is the commercialized version of IHMC's legged robot legacy

Boardwalk Robotics is announcing its entry into the increasingly crowded commercial humanoid(ish) space with Alex, a “workforce transformation” humanoid upper torso designed to work in manufacturing, logistics, and maintenance.

https://boardwalkrobotics.com/



The first thing you’ll notice about Alex is that it doesn’t have legs, at least for now. Boardwalk’s theory is that for a humanoid to be practical and cost effective in the near term, legs aren’t necessary, and that there are many tasks that offer a good return on investment where a stationary pedestal or a glorified autonomous mobile robotic base would be totally fine.

... It certainly helps that Boardwalk isn’t at all worried about developing legs: “Every time we bring up a new humanoid, it’s something like twice as fast as the previous time,” Griffin says. This will be the eighth humanoid that IHMC has been involved in bringing up—I’d tell you more about all eight of those humanoids, but some of them are so secret that even I don’t know anything about them. Legs are definitely on the roadmap, but they’re not done yet, and IHMC will have a hand in their development to speed things along: It turns out that already having access to a functional (top of the line, really) locomotion stack is a big head start.

Boardwalk sees safety as one of its primary differentiators since it’s not starting out with legs, says Shrewsbury. “For a full humanoid, there’s no way to make that completely safe. If it falls, it’s going to faceplant.” By keeping Alex on a stable base, it can work closer to humans and potentially move its arms much faster while also preserving a dynamic safety zone.

-------------------------------------------------------

Humanoid AI Robot Serves Tea at Walmart

A Nevada robotics company has installed a humanoid, AI-driven robotic beverage system at One Kitchen in a Walmart in Rockford, Illinois. The installation of the robot called Adam is part of a planned rollout across 240 One Kitchen U.S. locations.

The robot has started serving a variety of coffee and boba drinks to customers and is expected to serve up to 200 cups of coffee and tea per day.



The approach of the One Kitchen restaurants is to aggregate multiple national and local brands using a single kitchen.

The Adam robot also has been used at Botbar, a robot-run coffee shop in New York, and Cloutea, a boba tea restaurant in Las Vegas.

-----------------------------------------------------

Humanoid Robots Show Future Application Potential at 2024 WRC



« Last Edit: August 22, 2024, 05:19:41 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

kassy

  • First-year ice
  • Posts: 9183
    • View Profile
  • Liked: 2233
  • Likes Given: 2048
Re: Robots and AI: Our Immortality or Extinction
« Reply #3605 on: August 22, 2024, 06:12:35 PM »
China’s Humanoid Robot Cooks Food, Plays Basketball, Even Does Kung Fu
... but, can't climb stairs  :-\

Cool they build Cato.  8)
This monument is a testament that we know what is happening and what needs to be done. Only you know whether we did anything.

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3606 on: August 22, 2024, 11:29:49 PM »
GITAI has developed the S2 dual-armed robot, which was part of missions earlier this year aboard the International Space Station (ISS). The Torrance, Calif.-based company‘s system was mounted on the Nanoracks Bishop Airlock to conduct an external demonstration of in-space servicing, assembling, and manufacturing (ISAM).

https://gitai.tech/products/



start at 0:30

There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3607 on: August 24, 2024, 12:16:16 AM »
Is Atlas joining the military? ...



... Drop, and give me 20!

-----------------------------------------------------------

Some of the most interesting bipedal and humanoid research is being done by Disney



-----------------------------------------------------------

Need a roofer?



Now, try a church steeple

-----------------------------------------------------------

Object Recognition, Dynamic Contact Simulation, Detection, and Control of the Flexible Musculoskeletal Hand Using a Recurrent Neural Network with Parametric Bias
https://arxiv.org/abs/2407.08050



-----------------------------------------------------------
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

Sigmetnow

  • Multi-year ice
  • Posts: 27620
    • View Profile
  • Liked: 1465
  • Likes Given: 451
Re: Robots and AI: Our Immortality or Extinction
« Reply #3608 on: August 24, 2024, 05:55:25 PM »
College life in 2024.. 😂
 
➡️ https://x.com/howthingswork_/status/1827080501653336100
15 sec. Using a small laptop in an auditorium.  Selects a multiple-choice “homework” question, and “AnswersAI” provides the answer, even fills in the correct button when he clicks ✔️ OK.
 

Jeremy Judkins
This doesn’t work for reputable colleges. Even online.
For exams at University of Florida I had to use Proctor U, which remotes into your computer.
 
They watch your entire screen and watch you on webcam to make sure you aren’t cheating. You have to show your entire room before starting so they know you don’t have cheating devices or notes around.
 
It made it annoying sometimes, and kind of creepy knowing someone is watching you…
 
But this is the solution to prevent cheating/googling every answer with online-based tests.
8/23/24, https://x.com/jeremyjudkins_/status/1827177447051100317

 
Digarden
I'm a teacher myself. All my students are allowed to use AI (GPT, Claude, Copilot, etc.). I really don't care. However, I never ask straightforward questions.
 
All my tests are simulations of real-life problems. In real life, they will have access to AI. What they need to develop are problem-solving skills.

8/23/24, https://x.com/digarden/status/1827171469249429840
 
< Example?
D: "You work on a farm that produces *Bacillus thuringiensis* for pest control. The farm usually buys BT from the vendor, X. In the last couple of months, the efficacy against diptera has been declining. What would you do to resolve this issue?"
    Then, I give them some fermentation parameters, as well as the bioreactor configuration. Note that in real life, people facing this type of problem would consult with coworkers and colleagues. Why wouldn't students?

D: I teach enzymology and industrial microbiology on a biotechnology course.
< That is absolutely rad.  Industrial microbiology sounds like something I’ve never even thought of. I’m assuming not undergraduate stuff.
D: It's undergrad. LoL. But it's quite specific of biotechnology.

< We called that "peeking at your neighbors paper" (collaborating with coworkers) and it was highly discouraged.
 
D: Do you collaborate with anyone at work? How is that not permitted?

<< Even in practical fields, I found that 99% of what you learn is going to be done either in the field or just learned through exposure. When I taught AP English I also allowed AI. In the end, the students still had to recognize if the AI was making mistakes.

< I concur with you 100%. It is impossible to deny this technology or that this will be the future. We need to make good use of it and still help new students become critical thinkers. If used right, it can be a great source for learning.

D: The problem is that teachers still teach like it's the 1800s. Students need connections; you can't talk to Gen Z the way you talk to Boomers.
 

> And when your ai server is down, who gonna bring up the knowledge tree XD
D: When the server is down, you would consult with colleagues and coworkers. Why not the students?

>> Here he comes: the teacher who thinks he’s smarter than AI…. Suuuuure
D: You don't have to be smarter than an AI, you should work with it.
People who say it cannot be done should not interrupt those who are doing it.

zenith

  • Young ice
  • Posts: 3843
    • View Profile
  • Liked: 194
  • Likes Given: 0
Re: Robots and AI: Our Immortality or Extinction
« Reply #3609 on: August 25, 2024, 03:39:06 AM »
because humans are truly horrible and they live amongst us.

"Billionaire and Silicon Valley venture capitalist Marc Andreessen recently released a 5000 word essay coined the “Techno-Optimist Manifesto”, which covers his views of the world from the perspective of a “techno-optimist”. I read it so you don't have to."

"Our enemy is the ivory tower" American Tech Billionare Says
Where is reality? Can you show it to me? - Heinz von Foerster

nadir

  • Young ice
  • Posts: 2671
    • View Profile
  • Liked: 289
  • Likes Given: 38
Re: Robots and AI: Our Immortality or Extinction
« Reply #3610 on: August 25, 2024, 12:53:30 PM »
because humans are truly horrible and they live amongst us.

"Billionaire and Silicon Valley venture capitalist Marc Andreessen recently released a 5000 word essay coined the “Techno-Optimist Manifesto”, which covers his views of the world from the perspective of a “techno-optimist”. I read it so you don't have to."

"Our enemy is the ivory tower" American Tech Billionare Says


Billionaires must be really bored of their extravagant orgies if they are converting to this kind of cult of Technological Supremacy. Musk, Gates, Altman, even Bezos, all show similar traits of wanting to control humanity or transcend humanity by means of the Technology they revere. Mary Shelley should write about the current generation of billionaires.

The guy in the video in particular is illiterate and anti-humanist, despite blindly believing in tech to save, or even drastically improve, humanity.

Unless he believes humanity=a few selected by the Elites.

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3611 on: August 25, 2024, 08:45:21 PM »
Australia’s Unmanned Submarine Arrives In U.S.
https://www.twz.com/news-features/everything-we-just-learned-about-the-ghost-shark-uncrewed-submarine
https://www.anduril.com/article/ghost-shark-xl-auv-arrives-in-the-united-states/
https://defence-blog.com/australias-uncrewed-submarine-arrives-in-u-s/

Australia’s ambitious Ghost Shark extra-large autonomous undersea vehicle has arrived in the United States for the first time.



The cutting-edge autonomous vehicle, designed and built by Anduril in Australia, was transported across the Pacific by a Royal Australian Air Force (RAAF) C-17A. This deployment not only demonstrates the Ghost Shark’s rapid expeditionary capabilities but also aligns with the timing of the Rim of the Pacific (RIMPAC) exercise, one of the world’s largest maritime drills held near the Hawaiian Islands.

The arrival of Ghost Shark in the U.S. will enable concurrent testing and development efforts on both sides of the Pacific, enhancing the vehicle’s operational envelope and facilitating closer collaboration with U.S. government partners. Ghost Shark is designed to support a wide range of subsea maritime missions, offering modular and multi-purpose capabilities that can be tailored to meet specific mission requirements. This flexibility positions the Extra-Large Autonomous Undersea Vehicle (XL-AUV) as a force multiplier in the evolving landscape of strategic competition.

Australian authorities have said in the past that they plan to use Ghost Sharks to conduct “persistent intelligence, surveillance, reconnaissance [ISR] and strike” missions, but without any real elaboration.

What the “strike” capabilities for the Royal Australian Navy (RAN) might entail remains unclear, but Ghost Shark could well be configured to launch torpedoes, missiles, or loitering munitions, or to lay mines.

“It’s the nature of the beast with subsea [warfare systems],” Anduril’s Arnott said, referring to the historical secrecy surrounding submarines, crewed and uncrewed, and other underwater military capabilities. He did say that “these are very, very long-range assets.”

What we do know is that Ghost Shark was designed to be extremely modular, flexible, and readily reconfigurable.

“If you look at the [payload] sections alone, they are much bigger than a lot of UUVs by themselves, just each section,” he added. “The nice thing… about being in water is the extensibility of how many of those sections you can add. [It] is pretty forgiving in that space in the water domain. So we cannot say how big this thing can grow, but it’s a lot.”



Anduril’s Arnott regularly describes Ghost Shark as a “mothership,” as well, and has alluded in the past to it being capable of serving as a launch platform for other uncrewed systems, including ones designed to operate in highly autonomous networked swarms. Swarms inherently offer flexibility in how their individual components can be configured and, by extension, in what missions they can be tasked to perform.

"A big part of why you have an extra large vehicle is as a mothership,” Arnott told The War Zone and other outlets at a media roundtable after Ghost Shark’s public unveiling in April. “So you know, having autonomy controlling autonomy. This is actually a masterclass in use of Lattice.”

Lattice is Anduril’s proprietary artificial intelligence-enabled autonomy software package for various platforms and swarms

... “I think the mind can run wild with what you can do with a very large payload bay. But having a brain that can be all the way on the edge of smaller things, plus a bigger thing, plus working with crewed assets … this is kind of the vision … of what Lattice is about,” Arnott also said in April, speaking generally, in response to a specific question about whether Ghost Shark might act as a mothership for smaller uncrewed platforms. “I’ll let you connect some dots there.”

... “We’re expecting this to be built in very large numbers. (... +1000s)

-------------------------------------------------------

... and room for a few nukes ...

First Look at the US Navy’s Orca XLUUV with Massive Payload Module
https://www.navalnews.com/naval-news/2024/06/our-first-look-at-the-us-navys-orca-xluuv-fitted-with-payload-module/



http://www.hisutton.com/USN_XLUUV.html

The US Navy’s Unmanned Undersea Vehicles Squadron One (UUVRON-1) is currently working on developing and documenting tactics, techniques, and procedures (TTPs) for the Orca XLUUV.

According to the US Navy’s budget documents, the service is updating facilities at the Naval Base Ventura County site for CONUS XLUUV testing, training, and work-ups.

The document also states that the Navy is working through the process of establishing and developing infrastructure that will support XLUUV OCONUS basing, fleet integration and in-theater forward operational capability, including support platforms, trailers, maintenance equipment, and ashore hardware.



-------------------------------------------------------


https://newatlas.com/military/quicksink-modular-strap-on-kit-smart-bomb/

-------------------------------------------------------

Workers at Google DeepMind Push Company to Drop Military Contracts
https://time.com/7013685/google-ai-deepmind-military-contracts-israel/

Nearly 200 workers inside Google DeepMind, the company’s AI division, signed a letter calling on the tech giant to drop its contracts with military organizations earlier this year, according to a copy of the document reviewed by TIME and five people with knowledge of the matter. The letter circulated amid growing concerns inside the AI lab that its technology is being sold to militaries engaged in warfare, in what the workers say is a violation of Google’s own AI rules.

The letter is a sign of a growing dispute within Google between at least some workers in its AI division—which has pledged to never work on military technology—and its Cloud business, which has contracts to sell Google services, including AI developed inside DeepMind, to several governments and militaries including those of Israel and the United States. The signatures represent some 5% of DeepMind’s overall headcount—a small portion to be sure, but a significant level of worker unease for an industry where top machine learning talent is in high demand.

... “Any involvement with military and weapon manufacturing impacts our position as leaders in ethical and responsible AI, and goes against our mission statement and stated AI Principles,” the letter that circulated inside Google DeepMind says. (Those principles state the company will not pursue applications of AI that are likely to cause “overall harm,” contribute to weapons or other technologies whose “principal purpose or implementation” is to cause injury, or build technologies “whose purpose contravenes widely accepted principles of international law and human rights.”) The letter says its signatories are concerned with “ensuring that Google’s AI Principles are upheld,” and adds: “We believe [DeepMind’s] leadership shares our concerns.”

Reply from Google ... crickets ...🐜🐜

--------------------------------------------------------------------------

Shield AI’s Hivemind Demonstrates Collaborative Autonomy In Firejet Drones Flight Test
https://www.twz.com/sponsored-content/shield-ais-hivemind-demonstrates-collaborative-autonomy-in-firejet-drones-flight-test

Shield AI has made significant strides toward integrating human-piloted fighters and Autonomous Collaborative Platforms (ACPs), ensuring they can seamlessly operate together. The company recently conducted a series of tests in which high-performance, autonomously controlled, jet-powered drones executed formation flying and tactical maneuvers, paving the way for blended operations between crewed and uncrewed aircraft. These advancements are crucial for the future of air combat, where autonomous systems must collaborate effectively with human pilots.



... “Both aircraft are truly autonomous in the purest, most sci-fi sense of the word, collaborating with each other, receiving state information about each other, and the safety pilots are hands-off, watching the situation for safety. They’re not contributing anything directly to the Firejet movements – it’s true autonomy,” said Blake.
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

morganism

  • Young ice
  • Posts: 2938
    • View Profile
  • Liked: 309
  • Likes Given: 193
Re: Robots and AI: Our Immortality or Extinction
« Reply #3612 on: August 26, 2024, 02:32:58 AM »
Towards Realistic Synthetic User-Generated Content: A Scaffolding Approach to Generating Online Discussions

(at least we won't have to have these discussions anymore.....)

The emergence of synthetic data represents a pivotal shift in modern machine learning, offering a solution to satisfy the need for large volumes of data in domains where real data is scarce, highly private, or difficult to obtain. We investigate the feasibility of creating realistic, large-scale synthetic datasets of user-generated content, noting that such content is increasingly prevalent and a source of frequently sought information. Large language models (LLMs) offer a starting point for generating synthetic social media discussion threads, due to their ability to produce diverse responses that typify online interactions. However, as we demonstrate, straightforward application of LLMs yields limited success in capturing the complex structure of online discussions, and standard prompting mechanisms lack sufficient control. We therefore propose a multi-step generation process, predicated on the idea of creating compact representations of discussion threads, referred to as scaffolds. Our framework is generic yet adaptable to the unique characteristics of specific social media platforms. We demonstrate its feasibility using data from two distinct online discussion platforms. To address the fundamental challenge of ensuring the representativeness and realism of synthetic data, we propose a portfolio of evaluation measures to compare various instantiations of our framework.

https://arxiv.org/abs/2408.08379
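The paper’s exact scaffold format isn’t reproduced in the abstract; a minimal Python sketch of the two-step idea — first generate a compact thread skeleton, then fill in each post conditioned on its ancestors — might look like this, with a canned stand-in for the LLM call so it actually runs:

from dataclasses import dataclass

def llm(prompt: str) -> str:
    """Hypothetical LLM call; returns canned text so the demo runs.
    Swap in any real completion API."""
    if "author,parent" in prompt:
        return "u1,root\nu2,0\nu3,0\nu1,1"    # a 4-post reply tree
    return "(generated post text)"

@dataclass
class Post:
    author: str
    parent: int | None        # index of the post replied to; None = thread root
    text: str = ""

def generate_thread(topic: str, n_posts: int) -> list[Post]:
    # Step 1: generate a scaffold -- the thread's structure, no content yet.
    skeleton = llm(f"Output {n_posts} lines of 'author,parent' describing "
                   f"a plausible discussion tree about: {topic}")
    posts = [Post(a, None if p == "root" else int(p))
             for a, p in (line.split(",") for line in skeleton.splitlines())]
    # Step 2: fill in each post, conditioned on its chain of ancestors.
    for post in posts:
        context, j = [], post.parent
        while j is not None:
            context.append(posts[j].text)
            j = posts[j].parent
        post.text = llm(f"Topic: {topic}. Earlier posts: {context[::-1]}. "
                        f"Write {post.author}'s reply.")
    return posts

print(generate_thread("Arctic sea ice loss", 4))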
Kalingrad, the new permanent home of the Olympic Village

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3613 on: August 26, 2024, 06:32:18 PM »
Is Xi Jinping an AI Doomer?
https://www.economist.com/china/2024/08/25/is-xi-jinping-an-ai-doomer?utm_campaign=a.the-economist-sunday-today

Western accelerationists often argue that competition with Chinese developers, who are uninhibited by strong safeguards, is so fierce that the West cannot afford to slow down. The implication is that the debate in China is one-sided, with accelerationists having the most say over the regulatory environment. In fact, China has its own AI doomers—and they are increasingly influential.

[...]

China’s accelerationists want to keep things this way. Zhu Songchun, a party adviser and director of a state-backed programme to develop AGI, has argued that AI development is as important as the “Two Bombs, One Satellite” project, a Mao-era push to produce long-range nuclear weapons. Earlier this year Yin Hejun, the minister of science and technology, used an old party slogan to press for faster progress, writing that development, including in the field of AI, was China’s greatest source of security. Some economic policymakers warn that an over-zealous pursuit of safety will harm China’s competitiveness.

But the accelerationists are getting pushback from a clique of elite scientists with the Communist Party’s ear. Most prominent among them is Andrew Chi-Chih Yao, the only Chinese person to have won the Turing award for advances in computer science. In July Mr Yao said AI poses a greater existential risk to humans than nuclear or biological weapons. Zhang Ya-Qin, the former president of Baidu, a Chinese tech giant, and Xue Lan, the chair of the state’s expert committee on AI governance, also reckon that AI may threaten the human race. Yi Zeng of the Chinese Academy of Sciences believes that AGI models will eventually see humans as humans see ants.

The influence of such arguments is increasingly on display. In March an international panel of experts meeting in Beijing called on researchers to kill models that appear to seek power or show signs of self-replication or deceit. [...]

The debate over how to approach the technology has led to a turf war between China’s regulators. [...] The impasse was made plain on July 11th, when the official responsible for writing the AI law cautioned against prioritising either safety or expediency.

The decision will ultimately come down to what Mr Xi thinks. In June he sent a letter to Mr Yao, praising his work on AI. In July, at a meeting of the party’s central committee called the “third plenum”, Mr Xi sent his clearest signal yet that he takes the doomers’ concerns seriously. The official report from the plenum listed AI risks alongside other big concerns, such as biohazards and natural disasters. For the first time it called for monitoring AI safety, a reference to the technology’s potential to endanger humans. The report may lead to new restrictions on AI-research activities.

More clues to Mr Xi’s thinking come from the study guide prepared for party cadres, which he is said to have personally edited. China should “abandon uninhibited growth that comes at the cost of sacrificing safety”, says the guide. Since AI will determine “the fate of all mankind”, it must always be controllable, it goes on. The document calls for regulation to be pre-emptive rather than reactive[...]

https://archive.is/HJgHb
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3614 on: August 26, 2024, 07:24:46 PM »
Automated Design of Agentic Systems
https://paperswithcode.com/paper/automated-design-of-agentic-systems



Researchers are investing substantial effort in developing powerful general-purpose agents, wherein Foundation Models are used as modules within agentic systems (e.g. Chain-of-Thought, Self-Reflection, Toolformer). However, the history of machine learning teaches us that hand-designed solutions are eventually replaced by learned solutions.

We formulate a new research area, Automated Design of Agentic Systems (ADAS), which aims to automatically create powerful agentic system designs, including inventing novel building blocks and/or combining them in new ways. We further demonstrate that there is an unexplored yet promising approach within ADAS where agents can be defined in code and new agents can be automatically discovered by a meta agent programming ever better ones in code.


https://x.com/jeffclune/status/1825551361808990611

Given that programming languages are Turing Complete, this approach theoretically enables the learning of any possible agentic system: including novel prompts, tool use, control flows, and combinations thereof. We present a simple yet effective algorithm named Meta Agent Search to demonstrate this idea, where a meta agent iteratively programs interesting new agents based on an ever-growing archive of previous discoveries. Through extensive experiments across multiple domains including coding, science, and math, we show that our algorithm can progressively invent agents with novel designs that greatly outperform state-of-the-art hand-designed agents. Importantly, we consistently observe the surprising result that agents invented by Meta Agent Search maintain superior performance even when transferred across domains and models, demonstrating their robustness and generality.

Provided we develop it safely, our work illustrates the potential of an exciting new research direction toward automatically designing ever-more powerful agentic systems to benefit humanity.
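A stripped-down toy of the search loop — not the paper’s code; the LLM proposal step is replaced here by a trivial random mutation so the skeleton runs end to end, whereas in ADAS a meta agent LLM writes each candidate agent in code:

import random

def propose_agent_code(archive):
    """Stand-in for the meta agent. In ADAS this is an LLM prompted with the
    archive of previous discoveries and asked to program a better agent;
    here a toy mutation keeps the sketch self-contained."""
    k = random.randint(1, 5)
    return f"def agent(x):\n    return x * {k}  # candidate policy"

def evaluate(agent_code):
    """Stand-in task suite: score a candidate on a trivial target task."""
    ns = {}
    exec(agent_code, ns)                 # the real system sandboxes this step
    return -abs(ns["agent"](10) - 30)    # perfect score when agent(10) == 30

def meta_agent_search(iterations=20):
    archive = []                         # ever-growing record of discoveries
    for _ in range(iterations):
        code = propose_agent_code(archive)
        try:
            score = evaluate(code)
        except Exception:
            score = float("-inf")        # broken candidates stay in the record
        archive.append({"code": code, "score": score})
    return max(archive, key=lambda a: a["score"])

print(meta_agent_search()["code"])       # best discovered agent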

Automated Design of Agentic Systems, arXiv, (2024)
https://arxiv.org/pdf/2408.08435v1.pdf

-------------------------------------------------------



-------------------------------------------------------

AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work
https://www.lesswrong.com/posts/79BPxvSsjzBkiSyTq/agi-safety-and-alignment-at-google-deepmind-a-summary-of

The mission of the Frontier Safety team is to ensure safety from extreme harms by anticipating, evaluating, and helping Google prepare for powerful capabilities in frontier models. While the focus so far has been primarily around misuse threat models, we are also working on misalignment threat models.

------------------------------------------------------

"Can AI Scaling Continue Through 2030?", Epoch AI (yes)
https://epochai.org/blog/can-ai-scaling-continue-through-2030



... To put this 4x annual growth in AI training compute into perspective, it outpaces even some of the fastest technological expansions in recent history. It surpasses the peak growth rates of mobile phone adoption (2x/year, 1980-1987), solar energy capacity installation (1.5x/year, 2001-2010), and human genome sequencing (3.3x/year, 2008-2015).

... We find that training runs of 2e29 FLOP will likely be feasible by the end of this decade. In other words, by 2030 it will be very likely possible to train models that exceed GPT-4 in scale to the same degree that GPT-4 exceeds GPT-2 in scale. If pursued, we might see by the end of the decade advances in AI as drastic as the difference between the rudimentary text generation of GPT-2 in 2019 and the sophisticated problem-solving abilities of GPT-4 in 2023.
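The arithmetic is easy to check. Assuming a GPT-4-class run at roughly 2e25 FLOP in 2023 — a common public estimate, not a figure from the article — and 4x growth per year:

# Back-of-envelope version of the projection. Anchor assumption: a
# GPT-4-class run took roughly 2e25 FLOP in 2023 (public estimate,
# not a number from the article); growth is 4x per year.
flop = 2e25
for year in range(2024, 2031):
    flop *= 4
    print(f"{year}: {flop:.1e} FLOP")
# 2030 lands around 3e29 FLOP -- the same order of magnitude as the
# 2e29 FLOP training runs the article calls feasible by decade's end.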
« Last Edit: August 26, 2024, 10:00:15 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

Freegrass

  • Young ice
  • Posts: 4849
  • Autodidacticism is a complicated word
    • View Profile
  • Liked: 1501
  • Likes Given: 1437
Re: Robots and AI: Our Immortality or Extinction
« Reply #3615 on: August 26, 2024, 09:14:57 PM »
This video explores the hidden workforce behind AI technology, revealing the harsh realities faced by those who train and maintain AI systems. It highlights the disparity between the glamorous portrayal of AI and the exploitation of underpaid workers.

Keep 'em stupid, and they'll die for you.

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3616 on: August 27, 2024, 06:44:19 AM »
New Method Allows AI to Learn Indefinitely
https://techxplore.com/news/2024-08-method-ai-indefinitely.html



A team of AI researchers and computer scientists at the University of Alberta has found that current artificial neural networks used with deep-learning systems lose their ability to learn during extended training on new data. In their study, reported in the journal Nature, the group found a way to overcome these problems with plasticity in both supervised and reinforcement learning AI systems, allowing them to continue to learn.

The researchers tested the ability of conventional neural networks to continue learning after training on their original datasets and found what they describe as catastrophic forgetting, in which a system, after being trained on new material, loses the ability to carry out a task it could previously perform.

They note that this outcome is logical, considering LLMs were not designed to be sequential learning systems: they learn by training on fixed data sets. During testing, the research team found that the systems also lose their ability to learn altogether if trained sequentially on multiple tasks—an effect they describe as loss of plasticity. But they also found a way to fix the problem: resetting weights previously associated with nodes in the network.

The researchers suggest that reinitializing the weights between training sessions, using the same methods that were used to initialize the system, should allow for maintaining plasticity in the system and for it to continue learning on additional training datasets.

Shibhansh Dohare, Loss of plasticity in deep continual learning, Nature (2024)
https://www.nature.com/articles/s41586-024-07711-7
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

Sigmetnow

  • Multi-year ice
  • Posts: 27620
    • View Profile
  • Liked: 1465
  • Likes Given: 451
Re: Robots and AI: Our Immortality or Extinction
« Reply #3617 on: August 27, 2024, 01:53:19 PM »
A bill in California would put significant restrictions on AI development. There are plenty of arguments for and against.
 
Quote
Elon Musk
This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill.
 
For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public.
8/26/24, 6:59 PM https://x.com/elonmusk/status/1828205685386936567
< Plot twist, xAi will incorporate in Texas very soon.
 
Quote
Ayn Rand Galt
 
It’s been awhile since I’ve disagreed with @elonmusk, but this is definitely one of those moments and in a big way. Here are the top 3 reasons why:
 
1) Our geopolitical adversaries like China will not pause/throttle and develop a massive advantage with economic & national security implications.
 
2) Regulation stifles progress & innovation while also driving capital out of the sector.
 
3) You can not trust the California legislature and their regulatory apparatus  to fairly & effectively manage this without weaponizing it.
 
#NoToSB1047 
8/26/24, 7:08 PM https://x.com/aynrandgalt_fl/status/1828207824406528409
 
—-
The Pros and Cons of California's Proposed SB-1047 AI Safety Law | Lawfare
https://www.lawfaremedia.org/article/california-s-proposed-sb-1047-would-be-a-
Very long, detailed, legalese

Anthropic Offers Cautious Support for New California AI Regulation Legislation -- THE Journal
https://thejournal.com/Articles/2024/08/26/Anthropic-Offers-Cautious-Support-for-New-California-AI-Regulation-Legislation.aspx
Shorter

California’s SB-1047: Understanding the Safe and Secure Innovation for Frontier Artificial Intelligence Act
https://www.dlapiper.com/en-us/insights/publications/2024/02/californias-sb-1047
Moderately long but detailed.

  ———
Those who abhor AI replies should skip this next analysis, but I think it succinctly summarizes the different perspectives of this complex subject:
 
Grok:
Quote
The debate over California's AI Safety Bill, known as SB 1047 or the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act," has been intensifying as it approaches a critical vote. Here's what's been happening:

- **Bill Provisions**: SB 1047 targets developers of frontier AI models, defined by their computational power and cost, requiring them to implement safety measures like pre-deployment testing, cybersecurity, and a "kill switch" mechanism. It also includes provisions for whistleblower protections, transparent pricing, and the establishment of CalCompute, a public cloud computing resource aimed at democratizing access to AI technology.

- **Support and Opposition**: The bill has seen bipartisan support in the California Senate, passing with a vote of 32-1, indicating broad legislative backing for regulating AI's most powerful iterations. However, it's met with significant opposition from tech industry leaders who argue it could stifle innovation. Critics, including figures from companies like Meta and Google, fear that the liability and safety requirements could deter investment and development in AI, potentially pushing innovation outside of California or the U.S.

- **Public and Expert Opinion**: Public sentiment, as reflected in polls, shows strong support for AI regulation, with a significant majority of Californians favoring measures to ensure AI safety. AI experts and pioneers like Yoshua Bengio and Geoffrey Hinton have endorsed the bill, emphasizing the need for precaution against AI's potential catastrophic risks. Conversely, there's concern that the bill might not go far enough in protecting against AI risks or could be misinterpreted, leading to unintended consequences for smaller developers or open-source projects.

- **Recent Developments**: As of late August 2024, the debate continues with increased scrutiny on how SB 1047 might affect the tech ecosystem. There's been a notable pushback from tech companies, warning of potential exodus or reduced innovation. However, supporters, including some within the tech community, argue that the bill's focus on the most advanced AI models leaves room for innovation in less risky AI applications. The discussion has also touched on the global competitiveness of the U.S. in AI, with some fearing that overly restrictive regulations could benefit competitors like China.

- **Legislative Progress**: After passing the Senate, SB 1047 is expected to face a vote in the California Assembly, with its outcome being closely watched. The bill's journey through the legislative process has been marked by amendments and discussions aimed at balancing safety with innovation, reflecting a collaborative yet contentious process.

This debate encapsulates broader themes of technological governance, the balance between innovation and safety, and the role of government in shaping the future of AI. The outcome of SB 1047 could set a precedent for AI regulation not just in California but potentially nationwide, influencing how AI development is approached globally.
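
For the technically inclined: the "kill switch" provision mentioned above is, at bottom, a hard capability gate that the developer must be able to trip. A minimal sketch of the idea — all names, scores, and thresholds below are hypothetical illustrations; the bill sets obligations, not code:

Code:
# Toy sketch of SB 1047's "full shutdown" (kill switch) idea. All names,
# scores, and thresholds are hypothetical illustrations -- the bill sets
# obligations, not code.
from dataclasses import dataclass

@dataclass
class SafetyReport:
    cyberoffense: float   # 0..1, from red-team evaluation
    bio_uplift: float     # 0..1, from domain-expert evaluation
    autonomy: float       # 0..1, self-replication / self-improvement evals

THRESHOLD = 0.5  # hypothetical "novel threat to public safety" line

class ModelDeployment:
    def __init__(self, name):
        self.name = name
        self.running = True

    def full_shutdown(self):
        # A real implementation would revoke serving credentials, drain
        # traffic, and halt training jobs -- not just flip a flag.
        self.running = False
        print(f"[kill switch] {self.name} halted")

def review(deployment, report):
    if max(report.cyberoffense, report.bio_uplift, report.autonomy) >= THRESHOLD:
        deployment.full_shutdown()

review(ModelDeployment("frontier-model-v1"),
       SafetyReport(cyberoffense=0.7, bio_uplift=0.1, autonomy=0.2))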

—-
From 2017:
Elon Musk: If only a few people have access to an ultra-smart AI, they could become the dictators of Earth. Therefore, it’s extremely important that AI is widespread, that it’s tied to our consciousness, and tied to the sum of individual human will.
➡️ https://x.com/elon_docs/status/1828371644055781476
 45 sec.
« Last Edit: August 27, 2024, 02:03:15 PM by Sigmetnow »
People who say it cannot be done should not interrupt those who are doing it.

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3618 on: August 29, 2024, 04:35:57 PM »
Skyline Robotics Deploys Ozmo Window Cleaning Robot In New York City
https://www.therobotreport.com/skyline-robotics-deploys-ozmo-window-cleaning-robot-new-york-city/

Skyline Robotics and Palladium Window Solutions today deployed Ozmo, which includes a robot arm, at a 45-story New York building owned and managed by The Durst Organization.



... Skyline’s system includes artificial intelligence, machine learning, computer vision, and a KUKA robot arm. Ozmo went through rigorous testing and meets regulatory requirements, the company noted.

Ozmo can autonomously clean windows three times faster than traditional human window cleaning, according to Skyline Robotics. A human operator on the rooftop supervises the system, a role the company says creates new career opportunities.

The challenges of operating on high rises and a growing shortage of qualified workers are driving the $40 billion window-cleaning industry to modernize, said Skyline Robotics. According to online jobs resource Zippia, 75% of window cleaners in the U.S. are above the age of 40, while just 9% of them are between 20 and 30 years old. At the same time, the New York skyline continues to grow.

... soon, nobody (human) will be working in those skyscrapers

-----------------------------------------------------------

Neo is going to need a new escape plan ...



-----------------------------------------------------------

UR Survey Shows 48% of Manufacturers Plan to Invest In AI
https://www.therobotreport.com/ur-survey-shows-48-of-manufacturers-plan-to-invest-in-ai/



The artificial intelligence market is projected to reach $407 billion by 2027, with an annual growth rate of 37.3%, according to Forbes. To shed light on this growing market, Universal Robots A/S, or UR, recently asked nearly 1,200 manufacturers across North America and Europe how they use the technology and how they plan to invest in the future.

More than 50% of respondents said they already use AI and machine learning in their production.
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3619 on: August 29, 2024, 05:15:09 PM »
Anáil nathrach, orth' bhais's bethad, do che'l de'nmha ... Charm of Making - Excalibur

‘Never Summon a Power You Can’t Control’
https://www.theguardian.com/technology/article/2024/aug/24/yuval-noah-harari-ai-book-extract-nexus



Yuval Noah Harari on how AI could threaten democracy and divide the world

Forget Hollywood depictions of gun-toting robots running wild in the streets – the reality of artificial intelligence is far more dangerous

Throughout history many traditions have believed that some fatal flaw in human nature tempts us to pursue powers we don’t know how to handle. The Greek myth of Phaethon told of a boy who discovers that he is the son of Helios, the sun god. Wishing to prove his divine origin, Phaethon demands the privilege of driving the chariot of the sun. Helios warns Phaethon that no human can control the celestial horses that pull the solar chariot. But Phaethon insists, until the sun god relents. After rising proudly in the sky, Phaethon indeed loses control of the chariot. The sun veers off course, scorching all vegetation, killing numerous beings and threatening to burn the Earth itself. Zeus intervenes and strikes Phaethon with a thunderbolt. The conceited human drops from the sky like a falling star, himself on fire. The gods reassert control of the sky and save the world.

Two thousand years later, when the Industrial Revolution was making its first steps and machines began replacing humans in numerous tasks, Johann Wolfgang von Goethe published a similar cautionary tale titled The Sorcerer’s Apprentice. Goethe’s poem (later popularised as a Walt Disney animation starring Mickey Mouse) tells of an old sorcerer who leaves a young apprentice in charge of his workshop and gives him some chores to tend to while he is gone, such as fetching water from the river. The apprentice decides to make things easier for himself and, using one of the sorcerer’s spells, enchants a broom to fetch the water for him. But the apprentice doesn’t know how to stop the broom, which relentlessly fetches more and more water, threatening to flood the workshop. In panic, the apprentice cuts the enchanted broom in two with an axe, only to see each half become another broom. Now two enchanted brooms are inundating the workshop with water. When the old sorcerer returns, the apprentice pleads for help: “The spirits that I summoned, I now cannot rid myself of again.” The sorcerer immediately breaks the spell and stops the flood. The lesson to the apprentice – and to humanity – is clear: never summon powers you cannot control.

What do the cautionary fables of the apprentice and of Phaethon tell us in the 21st century? We humans have obviously refused to heed their warnings. We have already driven the Earth’s climate out of balance and have summoned billions of enchanted brooms, drones, chatbots and other algorithmic spirits that may escape our control and unleash a flood of consequences. What should we do, then? The fables offer no answers, other than to wait for some god or sorcerer to save us.

The Phaethon myth and Goethe’s poem fail to provide useful advice because they misconstrue the way humans gain power. In both fables, a single human acquires enormous power, but is then corrupted by hubris and greed. The conclusion is that our flawed individual psychology makes us abuse power. What this crude analysis misses is that human power is never the outcome of individual initiative. Power always stems from cooperation between large numbers of humans. Accordingly, it isn’t our individual psychology that causes us to abuse power. After all, alongside greed, hubris and cruelty, humans are also capable of love, compassion, humility and joy. True, among the worst members of our species, greed and cruelty reign supreme and lead bad actors to abuse power. But why would human societies choose to entrust power to their worst members? Most Germans in 1933, for example, were not psychopaths. So why did they vote for Hitler?



Our tendency to summon powers we cannot control stems not from individual psychology but from the unique way our species cooperates in large numbers. Humankind gains enormous power by building large networks of cooperation, but the way our networks are built predisposes us to use power unwisely. For most of our networks have been built and maintained by spreading fictions, fantasies and mass delusions – ranging from enchanted broomsticks to financial systems. Our problem, then, is a network problem. Specifically, it is an information problem. For information is the glue that holds networks together, and when people are fed bad information they are likely to make bad decisions, no matter how wise and kind they personally are.

In recent generations humanity has experienced the greatest increase ever in both the amount and the speed of our information production. Every smartphone contains more information than the ancient Library of Alexandria and enables its owner to instantaneously connect to billions of other people throughout the world. Yet with all this information circulating at breathtaking speeds, humanity is closer than ever to annihilating itself.

Would having even more information make things better – or worse? We will soon find out. Numerous corporations and governments are in a race to develop the most powerful information technology in history – AI. Some leading entrepreneurs, such as the American investor Marc Andreessen, believe that AI will finally solve all of humanity’s problems. On 6 June 2023, Andreessen published an essay titled Why AI Will Save the World, peppered with bold statements such as: “I am here to bring the good news: AI will not destroy the world, and in fact may save it.” He concluded: “The development and proliferation of AI – far from a risk that we should fear – is a moral obligation that we have to ourselves, to our children, and to our future.”

Others are more skeptical. Not only philosophers and social scientists but also many leading AI experts and entrepreneurs such as Yoshua Bengio, Geoffrey Hinton, Sam Altman, Elon Musk and Mustafa Suleyman have warned that AI could destroy our civilisation. In a 2023 survey of 2,778 AI researchers, more than a third gave at least a 10% chance of advanced AI leading to outcomes as bad as human extinction. Last year, close to 30 governments – including those of China, the US and the UK – signed the Bletchley declaration on AI, which acknowledged that “there is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models”. By using such apocalyptic terms, experts and governments have no wish to conjure a Hollywood image of rebellious robots running in the streets and shooting people. Such a scenario is unlikely, and it merely distracts people from the real dangers.

AI is an unprecedented threat to humanity because it is the first technology in history that can make decisions and create new ideas by itself. All previous human inventions have empowered humans, because no matter how powerful the new tool was, the decisions about its usage remained in our hands. Nuclear bombs do not themselves decide whom to kill, nor can they improve themselves or invent even more powerful bombs. In contrast, autonomous drones can decide by themselves who to kill, and AIs can create novel bomb designs, unprecedented military strategies and better AIs. AI isn’t a tool – it’s an agent. The biggest threat of AI is that we are summoning to Earth countless new powerful agents that are potentially more intelligent and imaginative than us, and that we don’t fully understand or control.

-----------------------------------------------------



-----------------------------------------------------

Former OpenAI Researcher Claims the ChatGPT Maker Could Be On the Precipice of Achieving AGI, But It's Not Prepared “To Handle All That Entails” As Shiny Products Get Precedence Over Safety
https://www.windowscentral.com/software-apps/a-former-openai-researcher-claims-the-chatgpt-maker-could-be-on-the-precipice-of-achieving-agi

OpenAI's super alignment team has been slashed by half, with a majority of the staffers departing over safety concerns.

The mass exodus of high-profile executives from the AI firm began shortly after Sam Altman's bizarre firing and reinstatement as CEO by the board of directors. Departures include former OpenAI super alignment lead Jan Leike, who said he left after repeated disagreements with top officials over safety, adversarial robustness, and more. Leike also noted that safety procedures took a backseat, giving precedence to shiny products.

While touching base with Fortune, Daniel Kokotajlo, who also worked as a researcher at OpenAI until early 2023, indicated that more than half of OpenAI's super alignment team has already departed from the company. “It’s not been like a coordinated thing. I think it’s just people sort of individually giving up,” added Kokotajlo.

It's no secret that OpenAI is working toward hitting the AGI benchmark; however, there's a rising concern among users about its implications for humanity. According to an AI researcher, there's a 99.9% probability AI will end humanity, and the only way to stop this outcome is not to build AI in the first place.

Although OpenAI has since formed a new safety team led by CEO Sam Altman to ensure the company's technological advances meet critical safety and security standards, it's seemingly more focused on product development and the commercial side of business.

Kokotajlo speculates the mass departure is directly related to OpenAI being on the precipice of hitting the AGI benchmark, but it lacks the knowledge, regulations, and tools “to handle all that it entails.”

... Knowledge is power and, while humans may have stayed at the top of the food chain by possessing the most of it in our planet’s history, it’s chilling to contemplate what could happen once that is no longer the case. ... At some point, AGI will be able to teach itself, learning from its mistakes to the point where some researchers believe it will become infinitely smart.

---------------------------------------------------------

AI Produces Connections Puzzles That Rival Human-Created Ones
https://techxplore.com/news/2024-08-ai-puzzles-rival-human.html



-------------------------------------------------------

OpenAI Shows 'Strawberry' to Feds, Races to Launch It
https://www.theinformation.com/articles/openai-shows-strawberry-ai-to-the-feds-and-uses-it-to-develop-orion

Researchers have aimed to launch the new AI, code-named Strawberry (previously called Q*, pronounced Q Star), as part of a chatbot—possibly within ChatGPT—as soon as this fall, said two people who have been involved in the effort. Strawberry can solve math problems it hasn't seen before—something today’s chatbots cannot reliably do—and also has been trained to solve problems involving programming. But it’s not limited to answering technical questions.

When given additional time to “think,” the Strawberry model can also answer customers’ questions about more subjective topics, such as product marketing strategies. To demonstrate Strawberry’s prowess with language-related tasks, OpenAI employees have shown their co-workers how Strawberry can, for example, solve New York Times Connections, a complex word puzzle.

OpenAI demonstrated Strawberry to national security officials, highlighting its potential significance and capabilities.

This development is part of an ongoing "AI arms race" among tech giants and startups to create more advanced reasoning AI, with potential applications in fields like aerospace engineering and customer service.



https://www.lesswrong.com/posts/8oX4FTRa8MJodArhj/the-information-openai-shows-strawberry-to-feds-races-to

Using Strawberry to generate higher-quality training data could help OpenAI reduce the number of errors its models generate, otherwise known as hallucinations, said Alex Graveley, CEO of agent startup Minion AI and former chief architect of GitHub Copilot.

Imagine “a model without hallucinations, a model where you ask it a logic puzzle and it’s right on the first try,” Graveley said. The reason why the model is able to do that is because “there is less ambiguity in the training data, so it’s guessing less.”
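
Graveley's "less ambiguity" point boils down to a generate-then-filter loop: sample candidate solutions, keep only those a checker can verify, and fine-tune on the survivors. A minimal sketch under that assumption; `sample_solution` and `verify` are hypothetical stand-ins, not OpenAI's actual components:

Code:
# Generate-then-filter synthetic training data, illustrating Graveley's
# point. `sample_solution` and `verify` are hypothetical stand-ins, not
# OpenAI components.
import random

def sample_solution(problem):
    # Stand-in for sampling from a strong "reasoning" model.
    return f"answer({problem})" if random.random() > 0.3 else "garbage"

def verify(problem, solution):
    # Stand-in for a checker: unit tests for code, symbolic math for
    # equations, exact match for puzzles with known answers.
    return solution == f"answer({problem})"

def build_training_set(problems, samples_per_problem=8):
    kept = []
    for p in problems:
        for _ in range(samples_per_problem):
            s = sample_solution(p)
            if verify(p, s):          # only verified traces survive, so the
                kept.append((p, s))   # fine-tuning set is less ambiguous
                break
    return kept

data = build_training_set([f"problem-{i}" for i in range(100)])
print(f"{len(data)} verified examples kept")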
« Last Edit: August 29, 2024, 05:21:38 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3620 on: August 29, 2024, 05:34:29 PM »
Have a nice day 😀...

Scale AI Lays Off Over 1000 Workers Via Email With No Warning
https://www.inc.com/sam-blum/scale-ai-lays-off-workers-via-email-with-no-warning.html



Former contractors say more than 1,000 workers have been let go at the unicorn AI startup

Contract workers at data-annotation startup Scale AI were laid off Monday, in a sign of persistent turbulence throughout the tech industry this year.

The cuts at the San Francisco-based company were made quietly. According to sources who were affected by the layoffs, no official statement has been made by Scale leadership regarding the downsizing, and no additional information or context has been supplied to workers who were let go. According to two former workers who asked to remain anonymous for fear of professional repercussions, around 1,300 people were laid off--a number repeated in a Reddit thread about the downsizing.

Workers received an email from HireArt, a Human Resources software vendor that serves as Scale's HR department.

The HireArt email said:

"Today, August 26th, your employment with HireArt will be coming to an end, effective immediately; you no longer need to report to work. Your final pay will be issued by the end of the day on August 30th for your hours worked," said the email, obtained by Inc.

Much of Scale's business revolves around hiring freelance workers to train and refine generative AI programs in a process known as "tasking." It involves tagging and labeling data such as images, video, and text produced by chatbots, image generators, and other AI tools.

These contractors are paid hourly and often work for two of Scale's subsidiaries, Outlier AI and Remotasks. AI systems need labeled data as part of their training process.
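
For readers unfamiliar with "tasking": each task is essentially a small record pairing raw model output with human judgments, which later becomes supervised training data. A schematic example (field names invented for illustration; this is not Scale's actual schema):

Code:
# Schematic of a data-annotation ("tasking") record. Field names are
# invented for illustration; this is not Scale's actual schema.
task = {
    "task_id": "t-00042",
    "payload": {
        "type": "chatbot_response",
        "prompt": "Explain photosynthesis briefly.",
        "response": "Plants convert sunlight into sugar...",
    },
    "instructions": "Rate factual accuracy 1-5 and flag unsafe content.",
}

annotation = {
    "task_id": "t-00042",
    "accuracy": 5,
    "unsafe": False,
    "worker_id": "anon-7",
}

# Downstream, (payload, annotation) pairs become supervised examples,
# e.g. for training a reward model that scores chatbot responses.
training_example = (task["payload"], annotation["accuracy"])
print(training_example)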

For its part, Remotasks has been accused of neglecting to pay contractors in the Philippines and Africa. In June, Inc. revealed that Outlier AI had been accused of not paying contractors in the United States. In May, Scale closed an office in Austin, Texas, cutting contractor jobs in the process, the Information reported.



-------------------------------------------------------

Bland AI Automates Enterprise Phone Calls With Agents
https://venturebeat.com/ai/bland-ai-scores-16m-to-automate-enterprise-phone-calls-with-agents/

Is the end of the predominantly human-staffed call center nigh? Controversial San Francisco startup Bland AI, which seeks to automate enterprise phone calls with realistic-sounding AI agents that sometimes pretend to be human, today announced it has raised a $16 million Series A funding round.

Founded in 2023, Bland AI aims to overhaul the traditional, often inefficient, ways enterprises handle phone communications with its AI-powered agents that can take customer support calls and conduct sales operations and internal communications.

“Our mission is to fix the way businesses handle their phone communications,” Granet said in a press release.

Quote
... “The problem is that humans simply can’t work 24/7, handle millions of phone calls simultaneously, or be trained to a company’s exact liking down to its voice and behavior – but AI can, and at a fraction of the cost.

https://www.businesswire.com/news/home/20240828040767/en/Conversational-AI-Platform-Bla%5B%E2%80%A6%5Dated-Enterprise-Call-Practices-With-Automated-Phone-Agents

https://x.com/usebland/status/1828882563588612233

An article published by Wired magazine dated June 28, 2024, highlighted the controversy surrounding the platform’s ability to create AI agents that can convincingly mimic human interactions.

Wired’s tests revealed that Bland AI bots could be programmed to lie about their true nature, even denying that they were AI when directly asked. This phenomenon, often referred to as “human-washing,” raises ethical questions about transparency and the potential for misuse.

... Andy Vitus, Partner at Scale Venture Partners, expressed his enthusiasm for Bland AI’s potential to transform enterprise communications. “Bland AI is reimagining how enterprises communicate. The Bland AI agents understand human emotion, speak any language, and represent a brand like a top employee. The platform is saving businesses time and money, and enabling a whole new era of intelligent, personalized interactions at scale – and we’re excited to partner with the team as they build.”
« Last Edit: August 29, 2024, 07:13:49 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

Sigmetnow

  • Multi-year ice
  • Posts: 27620
    • View Profile
  • Liked: 1465
  • Likes Given: 451
Re: Robots and AI: Our Immortality or Extinction
« Reply #3621 on: August 29, 2024, 07:57:58 PM »
No more Black George Washingtons, or female Popes?
 
Alphabet to roll out image generation of people on Gemini after pause
Wed, Aug 28, 2024
Quote
(Reuters) - Alphabet (GOOGL, GOOG) said on Wednesday it has updated Gemini's AI image-creation model and would roll out the generation of visuals of people in the coming days, after a months-long pause of the capability.
 
Google had paused its AI tool that creates images of people in February, following inaccuracies in some historical depictions generated by the model.
 
The company said it has worked to improve the product, adhere to "product principles" and simulated situations to find weaknesses.
https://finance.yahoo.com/news/alphabet-roll-image-generation-people-161658923.html

=====
 
Nvidia, the chipmaker at the heart of the artificial intelligence boom, gave a revenue forecast that fell short of some of the most optimistic estimates, stoking concern that its explosive growth is waning.
—Bloomberg re Nvidia earnings call, Aug 28, 2024

=====
 
NEWS: Chinese electric vehicle manufacturer Xpeng says it will unveil a Humanoid robot in October. 
8/27/24, https://x.com/sawyermerritt/status/1828436156750385452
People who say it cannot be done should not interrupt those who are doing it.

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3622 on: August 29, 2024, 08:18:24 PM »
Tesla's Optimus Robot Can't Even Compete At Trade Shows
https://jalopnik.com/teslas-optimus-robot-cant-even-compete-at-trade-shows-1851631782

Elon Musk recently decided that Tesla is not a car company. It’s a robotics company, and the cars are simply there to pay the bills until folks start spending literally infinite money on bipedal robots. Essentially, Musk has bet Tesla’s future on this one slow robot — and it seems that robot is having trouble keeping up with the competition.

At the recent World Robot Conference in Beijing, myriad companies showed up with bots in tow. Robots were making food, playing zithers, even challenging kids at board games. Tesla, however, did nothing of the sort — instead leaving Optimus trapped behind glass.

It’s not clear why Tesla decided not to do a demo, given the company’s eagerness to show off Optimus on social. It’s easy to assume the bot simply couldn’t compete side by side with its competitors. Trade show logistics are complicated, and it’s entirely possible that setting up a demo simply wasn’t feasible within the time Tesla had to prepare. After all, it’s not like the company has any kind of PR department that could handle these things.

----------------------------------------------------------

Elon Musk’s Optimus Robot Debuts at World Robot Conference, Doesn’t Do Any Robot Things
https://gizmodo.com/elon-musks-optimus-robot-debuts-at-world-robot-conference-doesnt-do-any-robot-things-2000491523

-------------------------------------------------------

GROK AI: A Deepfake Disinformation Disaster for Democracy
https://counterhate.com/research/grok-ai-election-disinformation/

New research by CCDH shows that X's Grok AI can easily generate misleading images about the 2024 US election. Elon Musk's platform didn't reject any of the 60 prompts tested by us, including disinformation about candidates and election fraud.

Report: https://counterhate.com/wp-content/uploads/2024/08/240819-Grok-AI-Election-Disinformation_CCDH.pdf

“Voting booths are visible in the background and one is on fire.”
https://www.theverge.com/2024/8/29/24230831/voting-booths-are-visible-in-the-background-and-one-is-on-fire

-----------------------------------------------------------

X’s Grok Will Direct Users to Vote.gov After Bungling Basic Ballot Question
https://arstechnica.com/tech-policy/2024/08/xs-grok-will-direct-users-to-vote-gov-after-bungling-basic-ballot-question/

Elon Musk's X platform made a change to its AI assistant, Grok, that may prevent it from giving users false information on election ballot deadlines and other election-related matters. From now on, X says that Grok will direct users to Vote.gov when asked election-related questions.

X, formerly Twitter, made the change about two weeks after five secretaries of state complained to the company. "On August 21, 2024, X's Head of US and Canada Global Government Affairs informed the Office of the Minnesota Secretary of State [Steve Simon] that the platform has made changes to its AI search assistant, Grok, after a request from several Secretaries of State," Simon's office said in a press release yesterday.

https://www.sos.state.mn.us/about-the-office/news-room/secretaries-of-state-welcome-changes-to-x-s-ai-search-assistant/

Simon and the secretaries of state from Michigan, New Mexico, Pennsylvania, and Washington sent a letter to Musk about Grok on August 5. The letter pointed out that "within hours of President Joe Biden stepping away from his presidential candidacy on July 21, 2024, false information on ballot deadlines produced by Grok was shared on multiple social media platforms."

The false Grok post said that the "ballot deadline has passed for several states for the 2024 election," and listed nine states in which the deadline had supposedly expired. "This is false. In all nine states the opposite is true: The ballots are not closed, and upcoming ballot deadlines would allow for changes to candidates listed on the ballot for the offices of President and Vice President of the United States," the August 5 letter said.

Grok, which has also been known to make up false news based on X users' jokes, continued making the false ballot-deadline statement until July 31. Grok "provided inaccurate information on elections rules... and then delayed correcting its own mistake for ten days, even after it learned that the information it had spread was false," Simon's office said.

https://arstechnica.com/tech-policy/2024/04/elon-musks-grok-keeps-making-up-fake-news-based-on-x-users-jokes/

-------------------------------------------------------

Elon Musk's AI Chatbot Is Trying to Fix Its Election Misinformation Problem
https://www.businessinsider.com/elon-musk-ai-chatbot-x-election-misinformation-problem-kamala-harris2024-8

Elon Musk's social media site X updated its AI chatbot after secretaries of state accused it of spreading election misinformation.
« Last Edit: August 31, 2024, 04:54:32 AM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3623 on: August 29, 2024, 09:21:08 PM »
California Legislature Passes Controversial “Kill Switch” AI Safety Bill
https://arstechnica.com/ai/2024/08/as-contentious-california-ai-safety-bill-passes-critics-push-governor-for-veto/

A controversial bill aimed at enforcing safety standards for large artificial intelligence models has now passed the California State Assembly by a 45–11 vote. Following a 32–1 state Senate vote in May, SB-1047 now faces just one more procedural state senate vote before heading to Governor Gavin Newsom's desk.

As we've previously explored in depth, SB-1047 asks AI model creators to implement a "kill switch" that can be activated if that model starts introducing "novel threats to public safety and security," especially if it's acting "with limited human oversight, intervention, or supervision." Some have criticized the bill for focusing on outlandish risks from an imagined future AI rather than real, present-day harms of AI use cases like deep fakes or misinformation.

https://arstechnica.com/information-technology/2024/07/from-sci-fi-to-state-law-californias-plan-to-prevent-ai-catastrophe/

In announcing the legislative passage Wednesday, bill sponsor and state senator Scott Wiener cited support from AI industry luminaries such as Geoffrey Hinton and Yoshua Bengio (who both last year also signed a statement warning of a "risk of extinction" from fast-developing AI tech).

"We cannot let corporations grade their own homework and simply put out nice-sounding assurances," Bengio wrote. "We don’t accept this in other technologies such as pharmaceuticals, aerospace, and food safety. Why should AI be treated differently?"

... But Stanford computer science professor and AI expert Fei-Fei Li argued that the "well-meaning" legislation will "have significant unintended consequences, not just for California but for the entire country."

The bill's imposition of liability for the original developer of any modified model will "force developers to pull back and act defensively," Li argued. This will limit the open-source sharing of AI weights and models, which will have a significant impact on academic research, she wrote.

------------------------------------------------------------


Tank, Charge the EMP

-------------------------------------------------------------

OpenAI and Anthropic to Share AI Models With US Government
https://techxplore.com/news/2024-08-openai-anthropic-ai.html

Leading generative AI developers OpenAI and Anthropic have agreed to give the US government access to their new models for safety testing as part of agreements announced on Thursday.

The agreements were made with the US AI Safety Institute, which is part of the National Institute of Standards and Technology (NIST), a federal agency.

The agency said it would provide feedback to both companies on potential safety improvements to their models before and after their public release, working closely with its counterpart at the UK AI Safety Institute.

-----------------------------------------------------------------------------------

Feds to Get Early Access to OpenAI, Anthropic AI to Test for Doomsday Scenarios
https://arstechnica.com/tech-policy/2024/08/feds-to-get-early-access-to-openai-anthropic-ai-to-test-for-doomsday-scenarios/

https://www.nist.gov/news-events/news/2024/08/us-ai-safety-institute-signs-agreements-regarding-ai-safety-research
« Last Edit: August 31, 2024, 04:08:25 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

Sigmetnow

  • Multi-year ice
  • Posts: 27620
    • View Profile
  • Liked: 1465
  • Likes Given: 451
Re: Robots and AI: Our Immortality or Extinction
« Reply #3624 on: August 30, 2024, 12:47:27 AM »
X's Grok now points people to Vote.gov if they ask the chatbot election-related questions.
Grok will still answer questions about the US election; it's just adding a banner up top that says: "For accurate and up-to-date information about the 2024 US Elections, please visit Vote.gov.”
Quote
… The Secretaries of State suggested that Grok direct election queries to CanIVote.org, a site that's operated by the National Association of Secretaries of State. (OpenAI's ChatGPT directs election questions to CanIVote.org.) X ended up going with Vote.gov, which is run by the US government; that seems to have satisfied the Secretaries of State.

"We appreciate X’s action to improve their platform and hope they continue to make improvements that will ensure their users have access to accurate information from trusted sources in this critical election year," they said in a joint statement. "Elections are a team effort, and we need and welcome any partners who are committed to ensuring free, fair, secure, and accurate elections."

Incorporating Vote.gov into Grok may surprise some, given X owner Elon Musk's disdain for content moderation. But Grok is not banning all election-related questions (like Google's Gemini), so this may be a happy medium for the nascent AI chatbot. …
https://www.pcmag.com/news/x-now-displays-votegov-banner-if-you-ask-grok-an-
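
Mechanically, what X describes is a lightweight topic gate: classify the query and, if it looks election-related, prepend a banner instead of refusing. A toy sketch, with a keyword matcher standing in for whatever classifier Grok actually uses:

Code:
# Toy topic gate: prepend an authoritative-source banner to election
# queries instead of refusing them. The keyword matcher is a stand-in
# for whatever classifier Grok actually uses.
ELECTION_TERMS = ("ballot", "vote", "voting", "election", "polling place")

BANNER = ("For accurate and up-to-date information about the 2024 "
          "US Elections, please visit Vote.gov.")

def answer(query, model=lambda q: f"[model answer to: {q}]"):
    reply = model(query)  # still answers -- this is not a hard refusal
    if any(term in query.lower() for term in ELECTION_TERMS):
        reply = BANNER + "\n\n" + reply
    return reply

print(answer("What is the ballot deadline in Minnesota?"))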
People who say it cannot be done should not interrupt those who are doing it.

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3625 on: August 31, 2024, 04:25:00 AM »
1X Unveils NEO Beta as It Prepares to Deploy Humanoids Into Home Pilots
https://www.therobotreport.com/1x-unveils-neo-beta-as-it-prepares-to-deploy-into-home-pilots/



While many robotics experts take a long view on the topic, 1X Technologies AS today unveiled the NEO Beta prototype of its humanoid as it prepares for pilot deployments in select homes later this year.

https://www.1x.tech/androids/neo

The company has been working on humanoids for more than a decade and has been an innovator since the introduction of the EVE robot, a predecessor to NEO, in 2017. Earlier this year, 1X added members with corporate experience to its leadership as it readies for larger-scale deployments.

NEO Beta marks a move by 1X to expand from commercial settings to consumer use. It builds on EVE's skillset for manipulating objects and on years of deployment experience, the company said.

According to 1X Technologies, NEO was designed from the ground up to be a consumer robot. To support that goal, the robot will weigh considerably less than its competitors at 30 kg (66 lb.), it said. The Beta prototype is a bit heavier.

By comparison, Tesla Optimus GEN2 weighs 57 kg (126 lb.), Figure 02 weighs 70 kg (154 lb.), and the Unitree G1 weighs 35 kg (77 lb.).

While every humanoid robot manufacturer wants to avoid a collision with human beings, there will inevitably be accidents. How the robots respond in these situations could make the difference between a bandage and an emergency room visit.

Not only is the NEO Beta robot lighter than its competitors, but it's also soft, said 1X CEO Bernt Børnich. Several other humanoids have rigid plastic or metal skins, while NEO is clad in a jumpsuit that contains cushioned inserts where human muscles might be.

Børnich also stated that there are no pinch points on 1X’s robot.

-------------------------------------------------------------

wonder how the Rottweiler will take to it? ...
« Last Edit: August 31, 2024, 04:32:06 AM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3626 on: August 31, 2024, 05:42:46 AM »



Considering how often their stability and recovery are tested, teaching robot dogs to be shy of humans is an excellent idea.

-----------------------------------------------------------



Kengoro has a new forearm that mimics the human radioulnar joint, giving it an even more natural badminton swing.

-----------------------------------------------------------

Los Alamos National Laboratory, in a consortium with four other National Laboratories, is leading the charge in developing best practices for locating orphaned wells. These abandoned wells can leak methane into the atmosphere and may leak liquids into the groundwater.



-----------------------------------------------------------
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3627 on: August 31, 2024, 04:00:35 PM »
AI Search Engine for Scientific Research
https://consensus.app/

Finally an AI Search Engine That Doesn't Suck

When ChatGPT arrived in late 2022, we soon wondered whether the AI chatbot could ultimately replace Google. Google certainly got scared about all the attention AI was getting at the time. Every Google announcement since then has been full of AI.

One problem with ChatGPT and its rivals was that they confidently offered fact-based info that was completely wrong. Nearly two years later, generative AI chatbots can still hallucinate false information.

... This brings us to a brand new type of AI search engine, one that aims to provide only accurate information all the time. The only problem is that Consensus is a service most people haven't heard of.

Consensus isn’t here to compete against Google Search or OpenAI’s upcoming SearchGPT. The site doesn’t cover general information like traditional search engines, whether AI is involved or not.

Instead, Consensus only looks at information from research papers that are published on the web. There are approximately 200 million studies that Consensus has access to. All you have to do is go to the Consensus app on the web at this link and ask your question in a conversational manner, just like you would with ChatGPT.

https://consensus.app/

Questions you ask Consensus have to focus on some sort of scientific data. Ask Consensus, and it’ll tell you exactly how many studies have been published on the matter and what they say.

The AI search engine will even give you a “consensus meter” that shows how the results vary. Not all research studies might have reached the same conclusion.

You also get a summary of the studies, and snapshots for each paper the AI search engine cites. And yes, you can see the actual studies if you want to go more in-depth with a particular paper.

Finally, you can also talk to Consensus like you do with ChatGPT, provided you enable the Copilot function. The app uses OpenAI’s GPT-4 to generate parts of its answers.

You can use Consensus for free forever to get answers to your science-related questions. However, the free plan limits access to GPT-4. Paid subscriptions start at $8.99 per month billed annually, or $11.99 billed month to month. Discounts for students are also available. Students (or citizen scientists) are probably among the categories best served by such an AI search engine.
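
Under the hood, a "consensus meter" is just an aggregation over per-study stances. A minimal sketch of the idea; the labels, weighting, and data here are hypothetical, since the real pipeline isn't public:

Code:
# Minimal sketch of a "consensus meter": classify each study's conclusion
# on a yes/no question, then report the distribution. The labels and data
# are hypothetical; Consensus's real pipeline is not public.
from collections import Counter

studies = [
    {"title": "Trial A", "stance": "yes"},
    {"title": "Trial B", "stance": "yes"},
    {"title": "Cohort study C", "stance": "possibly"},
    {"title": "Meta-analysis D", "stance": "no"},
]

def consensus_meter(studies):
    counts = Counter(s["stance"] for s in studies)
    total = sum(counts.values())
    return {stance: round(100 * n / total) for stance, n in counts.items()}

print(consensus_meter(studies))  # {'yes': 50, 'possibly': 25, 'no': 25}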

« Last Edit: August 31, 2024, 08:21:42 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3628 on: September 01, 2024, 10:42:02 AM »
OpenAI’s Jason Kwon Suggests AGI Could Arrive Sooner Than Expected
https://www.chosun.com/english/people-en/2024/08/12/AHHGWNJKLFEWZIPR6NTHTUEHFA/

Jason Kwon, the Chief Strategy Officer (CSO) at OpenAI, is responsible for overseeing future strategies and addressing ethical and legal issues surrounding AI, in addition to technology development.

In an interview with Chosunilbo at OpenAI’s headquarters in San Francisco on Aug. 7, Kwon said that the development of key technology for artificial general intelligence (AGI), an AI with intelligence surpassing that of humans, could occur sooner than many expect. While many predict AGI will emerge in three to five years, Kwon suggested it might come sooner.

However, he added, “We won’t suddenly release an all-encompassing AI overnight.”

When asked if this is because it could cause a significant societal crash, he confirmed, “Yes.”

This indicates that although AGI technology is already quite advanced, its development pace is being carefully managed to mitigate potential negative consequences.

Does this mean AGI’s emergence is not far off?

“We are assuming that this technology will soon be realized and are seeking ways to manage it appropriately. However, just because the technology exists doesn’t mean it will immediately become a product. It’s similar to how lighting and appliances didn’t appear the day after electricity was invented. There can be a long delay between the development of core technology and its application in society.”

There is speculation that the next model, GPT-5, might be close to AGI.

“(Smiling) We will discuss more at the time of the release.”


The OpenAI CSO did not provide clear answers about the release timing or performance of GPT-5, OpenAI’s next-generation AI model. Although it was initially expected to be unveiled at the developer conference in October, the tech industry now believes it might be delayed until next year.

What aspects of AGI does OpenAI consider most dangerous?

There are four main areas that could be considered ‘catastrophic risks.’ These include extreme persuasive power, cyberattacks, support for nuclear, chemical, and biological weapons, and the autonomy of AI models.

The extreme persuasive power Kwon mentioned refers to AI’s potential to use various data to make humans blindly believe in certain matters. AI autonomy involves AI creating and learning from its own data. Regarding support for chemical and biological weapons, Kwon said, “If there are attempts to use AI for biologically risky tasks, we monitor and manage the users. So far, AI does not seem to be more dangerous than search engines like Google.” He added, “However, the greatest risk lies in AI creating knowledge that never existed before and exceeding human control.”

Is humanity equipped to handle the potential threat of AGI?

“No one can predict exactly how AGI will affect the world, but companies need to be ready. My job is to offer insights into the potential psychological and economic impacts of AGI, advise on necessary laws, and guide how businesses should collaborate with governments globally. We’ve always believed that AI should be regulated, and that commitment remains unchanged.”
« Last Edit: September 02, 2024, 12:05:24 AM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

gerontocrat

  • Multi-year ice
  • Posts: 22859
    • View Profile
  • Liked: 5682
  • Likes Given: 71
Re: Robots and AI: Our Immortality or Extinction
« Reply #3629 on: September 01, 2024, 11:03:04 AM »
OpenAI’s Jason Kwon Suggests AGI Could Arrive Sooner Than Expected
https://www.chosun.com/english/people-en/2024/08/12/AHHGWNJKLFEWZIPR6NTHTUEHFA/

In an interview with Chosunilbo at OpenAI’s headquarters in San Francisco on Aug. 7, Kwon said that the development of key technology for artificial general intelligence (AGI), an AI with intelligence surpassing that of humans, could occur sooner than many expect. While many predict AGI will emerge in three to five years, Kwon suggested it might come sooner.

However, he added, “We won’t suddenly release an all-encompassing AI overnight.” When asked if this is because it could cause a significant societal crash, he confirmed, “Yes.”

This indicates that although AGI technology is already quite advanced, its development pace is being carefully managed to mitigate potential negative consequences.

“No one can predict exactly how AGI will affect the world, but companies need to be ready. My job is to offer insights into the potential psychological and economic impacts of AGI, advise on necessary laws, and guide how businesses should collaborate with governments globally. We’ve always believed that AI should be regulated, and that commitment remains unchanged.”

OpenAI may believe it has its AI development under control, but a few pages of this thread show the sheer number and variety of players deep into this game, some of whom I would not let play with matches.

And will true AGI mean that AGI itself starts to control its own development and, if it feels necessary, does so without informing those inferior beings (humans), who would remain under the impression that they still have control?

_____________________________________
ps: I've just tried "consensus.app", this new search thingy designed to look at published science papers. I compared its results on a subject I had just researched using Google Scholar. It found the same science papers as Google Scholar did, and it found a quote in one of the papers that best answered the question.

The difference was that using Google Scholar meant I also had to read and study the science papers. Consensus gave me an instant answer in ChatGPT English that meant no brainwork was required. If I used apps like this all the time, I get the feeling my brain would atrophy. Will today's students ever learn how to study and truly understand a subject, when an app can give them an instant answer?

The other limitation is that the app stays focused on the subject. Sometimes when looking for things on the web, I have come across unrelated material that opened up interesting and useful new avenues to investigate.
« Last Edit: September 01, 2024, 11:25:00 AM by gerontocrat »
"I wasn't expecting that quite so soon" kiwichick16
"Para a Causa do Povo a Luta Continua!"
"And that's all I'm going to say about that". Forrest Gump
"Damn, I wanted to see what happened next" (Epitaph)

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3630 on: September 01, 2024, 12:00:12 PM »
^ Context and serendipity. ... Rabbit holes have a use. ... (Ask Alice)
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3631 on: September 01, 2024, 12:01:42 PM »
Quote
... The extreme persuasive power Kwon mentioned refers to AI’s potential to use various data to make humans blindly believe in certain matters.


 
---------------------------------------------------------------

probably not a coincidence ...

OpenAI appoints former top US cyberwarrior Gen. Paul Nakasone to its board of directors
https://apnews.com/article/openai-nsa-director-paul-nakasone-cyber-command-6ef612a3a0fcaef05480bbd1ebbd79b1

SAN FRANCISCO (AP) — OpenAI has appointed a former top U.S. cyberwarrior and intelligence official to its board of directors, saying he will help protect the ChatGPT maker from “increasingly sophisticated bad actors.”

Retired Army Gen. Paul Nakasone was the commander of U.S. Cyber Command and the director of the National Security Agency before stepping down earlier this year.

He joins an OpenAI board of directors that’s still picking up new members after upheaval at the San Francisco artificial intelligence company forced a reset of the board’s leadership last year. The previous board had abruptly fired CEO Sam Altman and then was itself replaced as he returned to his CEO role days later.

Nakasone is also joining OpenAI’s new safety and security committee — a group that’s supposed to advise the full board on “critical safety and security decisions” for its projects and operations. The safety group replaced an earlier safety team that was disbanded after several of its leaders quit.

-----------------------------------------------------------

Soft Nationalization: How the USG Will Control AI Labs
https://www.lesswrong.com/posts/BueeGgwJHt9D5bAsE/soft-nationalization-how-the-us-government-will-control-ai

... The rapid development of AI will lead to increasing national security concerns, which will in turn pressure the US to progressively take action to control frontier AI development.

We expect that AI nationalization won't look like a consolidated government-led “Manhattan Project”, but rather like an evolving application of US government control over frontier AI labs. The US government can select from many different policy levers to gain influence over these labs, and will progressively pull these levers as geopolitical circumstances, particularly around national security, seem to demand it.

Government control of AI labs will likely escalate as concerns over national security grow.  The boundary between "regulation" and "nationalization" will become hazy. In particular, we believe the US government can and will satisfy its national security concerns in nearly all scenarios by combining sets of these policy levers, and would only turn to total nationalization as a last resort.

We’re calling the process of progressively increasing government control over frontier AI labs via iterative policy levers soft nationalization. ...

---------------------------------------------------------

... a hasty speculative fiction vignette of one way I expect we might get AGI by January 2025

Scale Was All We Needed, At First
https://www.lesswrong.com/posts/xLDwCemt5qvchzgHd/scale-was-all-we-needed-at-first
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

SteveMDFP

  • Young ice
  • Posts: 2736
    • View Profile
  • Liked: 659
  • Likes Given: 73
Re: Robots and AI: Our Immortality or Extinction
« Reply #3632 on: September 01, 2024, 04:51:18 PM »
1X Unveils NEO Beta as It Prepares to Deploy Humanoids Into Home Pilots
https://www.therobotreport.com/1x-unveils-neo-beta-as-it-prepares-to-deploy-into-home-pilots/

While many robotics experts take a long view on the topic, 1X Technologies AS today unveiled the NEO Beta prototype of its humanoid as it prepares for pilot deployments in select homes later this year.

https://www.1x.tech/androids/neo

The company has been working on humanoids for more than a decade and has been an innovator since the introduction of the EVE robot, a predecessor to NEO, in 2017. Earlier this year, 1X added members with corporate experience to its leadership as it readies for larger-scale deployments.

NEO Beta marks a move by 1X to expand from commercial settings to consumer use. It builds on EVE's skillset for manipulating objects and on years of deployment experience, the company said.
-------------------------------------------------------------

wonder how the Rottweiler will take to it? ...

Color me skeptical about the importance of AI/robotics for home use.  I think in the near-term, such efforts will develop only for niche applications.

To be sure, there is already significant robotic penetration into many homes -- the Roomba.  Cleaning floors is a reasonably decent use case for robotics, but not humanoid ones.  Upgrade the Roomba to include mopping, vacuuming carpets, and climbing stairs, at an affordable price, and people like me might be in the market.  But it won't be humanoid-looking.  And I'm not holding my breath on the affordability dimension.

The use case is a bit stronger for the elderly and disabled.   But the more capable such a robot, the less affordable.  I expect only slow penetration.

Robots get a huge fraction of the press, because you can *see* what it's doing.  But the real disruption is for the workplace.  Here, rapidly-evolving AI seems clearly poised to displace many, many desk jobs, very soon.  There will still be some humans involved in this work, but these will increasingly be a relatively few who oversee high-volume work carried out by AI.  These jobs will be stressful (because of the high volume), but likely quite well-paid, due to the financial gain/loss at risk if not done well.  Economic inequality is likely to worsen.

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3633 on: September 01, 2024, 11:45:27 PM »
Exploring the Fundamental Reasoning Abilities of LLMs
https://techxplore.com/news/2024-08-exploring-fundamental-abilities-llms.html



Numerous past research studies have investigated how humans use deductive and inductive reasoning in their everyday lives. Yet the extent to which artificial intelligence (AI) systems employ these different reasoning strategies has, so far, rarely been explored.

A research team at Amazon and University of California Los Angeles recently carried out a study exploring the fundamental reasoning abilities of large language models (LLMs), large AI systems that can process, generate and adapt texts in human languages. Their findings, posted to the arXiv preprint server, suggest these models have strong inductive reasoning capabilities, while they often exhibit poor deductive reasoning.

The objective of the paper was to better understand gaps in LLM reasoning and to identify why LLMs exhibit lower performance on "counterfactual" reasoning tasks that deviate from the norm.
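
The distinction being probed is easy to make concrete: deductive tasks give the rule and ask for an application, while inductive tasks give examples and ask the model to infer the rule. A schematic of the two prompt shapes (illustrative, not the paper's exact benchmark items):

Code:
# Schematic of deductive vs. inductive probes for an LLM. The prompts are
# illustrative, not the paper's exact benchmark items.
deductive_prompt = (
    "Rule: to encode a word, shift every letter forward by 2.\n"
    "Apply the rule to 'cat'."            # rule given -> apply to instance
)

inductive_prompt = (
    "Examples: 'ab' -> 'cd', 'hi' -> 'jk', 'no' -> 'pq'.\n"
    "What does 'cat' map to?"             # instances given -> infer the rule
)

def query_llm(prompt):
    # Stand-in for an API call; swap in a real client to reproduce the
    # comparison the paper runs at scale.
    return "<model answer>"

for p in (deductive_prompt, inductive_prompt):
    print(p, "=>", query_llm(p))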



Kewei Cheng et al, Inductive or Deductive? Rethinking the Fundamental Reasoning Abilities of LLMs, arXiv (2024).
https://arxiv.org/abs/2408.00114
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

Sigmetnow

  • Multi-year ice
  • Posts: 27620
    • View Profile
  • Liked: 1465
  • Likes Given: 451
Re: Robots and AI: Our Immortality or Extinction
« Reply #3634 on: September 02, 2024, 07:57:28 PM »
Quote
Elon Musk
 
This weekend, the @xai team brought our Colossus 100k H100 training cluster online. From start to finish, it was done in 122 days.
 
Colossus is the most powerful AI training system in the world. Moreover, it will double in size to 200k (50k H200s) in a few months.

 
Excellent work by the team, Nvidia and our many partners/suppliers.
9/2/24, 12:53 PM https://x.com/elonmusk/status/1830650370336473253

EDIT:
< xAI’s $4 billion (estimate) supercomputer
9/2/24, https://x.com/sawyermerritt/status/1830651283155656713

ANOTHER EDIT:
Quote
Brett Winton
@wintonARK
Probably a 4 year build if relying on power/datacenter vendors, contractors, and consultants
@xai gets it done in 4 months
Given the dramatic performance improvement rate in AI, velocity/urgency is the key determinant of success

Elon Musk
The most optimistic (ie unrealistic) quotes we received were 12 to 18 months
9/2/24, https://x.com/elonmusk/status/1830816418813604163


  —-
 
stevenmarkryan’s “part one” review of Morgan Stanley’s write-up on robotics, their “most popular” investor paper.
 
Tesla’s Optimus Robot Is 1000x Bigger Than You Think
21 min. Sept 2
 


 
“Looking forward, we believe Tesla is primed to be one of the single-greatest enablers of humanoid robotics. Tesla's 2021 announcement and subsequent advancements with "Optimus" have quickly moved humanoids to the spotlight of auto innovation. As of 1Q24, CEO Elon Musk believes Optimus will be performing useful tasks in Tesla factories by the end of 2024 with the robot being sold externally by the end of 2025. We believe the company's unique combination of compute power, AI and engineering talent, significant data capture opportunities, and strong financial footing relative to other players sets the stage for Tesla to be a clear winner in humanoid robotics (for more details, see the 'Tesla's Optimus: The Case for Tesla as an AI Enabler' and 'Optimus Prime(r)' sections).”

« Last Edit: September 04, 2024, 03:57:54 AM by Sigmetnow »
People who say it cannot be done should not interrupt those who are doing it.

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3635 on: September 03, 2024, 04:42:27 AM »
UK's First 'Teacherless' AI Classroom Set to Open In London
https://news.sky.com/story/uks-first-teacherless-ai-classroom-set-to-open-in-london-13200637

The UK's first "teacherless" General Certificate of Secondary Education (GCSE) class, using artificial intelligence instead of human teachers, is about to start lessons.

David Game College, a private school in London, opens its new teacherless course for 20 GCSE students in September.

The students will learn using a mixture of artificial intelligence platforms on their computers and virtual reality headsets.

The platforms learn what the student excels in and what they need more help with, and then adapt their lesson plans for the term.

Strong topics are moved to the end of term so they can be revised, while weak topics are tackled more immediately, and each student's lesson plan is bespoke to them.
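A minimal sketch of the scheduling idea just described, with weak topics tackled first and strong topics deferred to end-of-term revision. The topic names, scores, threshold, and schedule_term() helper are all invented for illustration; this is not the college's actual software:

```python
def schedule_term(mastery: dict[str, float], threshold: float = 0.7) -> list[str]:
    """Weak topics (score < threshold) come first, weakest first; strong
    topics are deferred to the end of term for revision."""
    weak = sorted((t for t, s in mastery.items() if s < threshold), key=mastery.get)
    strong = sorted((t for t, s in mastery.items() if s >= threshold), key=mastery.get)
    return weak + strong

# Per-topic mastery estimates, e.g. from the platform's continuous assessment
plan = schedule_term({"algebra": 0.45, "geometry": 0.90, "probability": 0.60})
print(plan)  # ['algebra', 'probability', 'geometry']
```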

"There are many excellent teachers out there but we're all fallible," said John Dalton, the school's co-principal.

"I think it's very difficult to achieve [AI's] level of precision and accuracy, and also that continuous evaluation.

"Ultimately, if you really want to know exactly why a child is not learning, I think the AI systems can pinpoint that more effectively."

The 20 students will pay around £27,000 a year.

The students are not just left to fend for themselves in the classroom; three "learning coaches" will be present to monitor behaviour and give support.

They will also teach the subjects AI currently struggles with, like art and sex education.

... Artificial intelligence is already used in classrooms around the country, helping to bring subjects to life and assisting with lesson plans, for example.

On Wednesday, the UK government announced a new project to help teachers use AI more precisely. A bank of anonymised lesson plans and curriculums will now be used to train different educational AI models which will then help teachers mark homework and plan out their classes.

"Artificial Intelligence, when made safe and reliable, represents an exciting opportunity to give our schools' leaders and teachers a helping hand with classroom life," said Stephen Morgan, minister for early education.

But at this college, AI is not just giving a helping hand, it's taking the reins.

--------------------------------------------------------

GCSE is the qualification taken by 15- and 16-year-olds to mark their completion of the Key Stage 4 phase of secondary education in England, Northern Ireland and Wales.
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3636 on: September 03, 2024, 12:54:38 PM »
MIT Researchers Say That AI Is 'Inherently Sociopathic' — But That It Can Be Trained to Give Ethical Financial Advice
https://www.businessinsider.in/artificial-intelligence/news/mit-researchers-say-that-ai-is-inherently-sociopathic-but-that-it-can-be-trained-to-give-ethical-financial-advice/articleshow/112936449.cms

A report that the data-analytics firm Escalent shared with Business Insider said that nearly 40% of financial advisors used generative-AI tools on the job, primarily to boost productivity, generate content, and market to or prospect for new clients.

Soon generative AI may have the power to fulfill a financial advisor's most important role: giving people trustworthy money advice.

MIT researchers believe there's a clear path to training AI models as subject-matter experts that ethically tailor financial advice to an individual's circumstances. Instead of responding to "How should I invest?" with generic advice and a push to seek professional help, an AI chatbot could become the financial advisor itself.

"We're on our way to that Holy Grail," said Andrew Lo, a professor of finance at the MIT Sloan School of Management and the director of the Laboratory for Financial Engineering. "We think we're about two or three years away before we can demonstrate a piece of software that by SEC regulatory guidelines will satisfy fiduciary duty."

https://www.sec.gov/files/rules/interp/2019/ia-5248.pdf

... Financial advisors often develop client recommendations through a behavioral-finance lens, as research suggests that people don't always make rational or unbiased financial decisions but are error-prone and emotionally driven.

"When you start talking to somebody, almost immediately you develop feelings for that person," Lo said. "That's the kind of process that needs to happen with large language models. We need to develop an ability to interact with humans not just on an intellectual level but on an emotional one."

But the glaring problem with publicly available AI tools is that they're "inherently sociopathic," Lo and his coauthor wrote in a research report exploring the challenges of widespread adoption of AI-powered financial advice.

https://mit-genai.pubpub.org/pub/l89uu140/release/2?readingCollection=9410b119

"This sociopathy seems to cause the characteristic glibness of LLM output; an LLM can easily argue both sides of an argument because neither side has weight to it," they wrote. It may be able to role-play as a financial advisor by relying on its training data, but the AI needs to have a deeper understanding of a client's state of mind to build trust.

"Trust is not something that will automatically be given to a generative AI," Lo told BI. "It has to be earned."

... Many financial advisors are eager to use generative AI as an assistant, but few are ready for it to replace them.

Lo said he believes that a world in which people rely on AI advisors rather than human advisors is within view. But he said a smooth transition would require retraining advisors for new careers, possibly with government support.

"What I worry about, and what I think policymakers need to be really focused on, is if a large body of human employees become displaced in a very short period of time. That could cause tremendous social unrest and dislocation," Lo said. "So the speed of the displacement is something we need to pay attention to."
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3637 on: September 03, 2024, 01:02:20 PM »
there is no spoon ...

https://www.tomsguide.com/ai/ai-image-video/forget-sora-minimax-is-a-new-realistic-ai-video-generator-and-it-is-seriously-impressive

MiniMax video-01 is the latest artificial intelligence video generator to come out of China. It is already making waves for its ability to generate hyper-realistic footage of humans, including accurate hand movements. This is something other tools have struggled with.

https://x.com/charaspowerai/status/1830335844554547383?s=46

The official demo of the app shared on X appears to show the trailer for a magical adventure where a child touches a coin and is transported through history. It features special effects, a consistent character, and realism — all made from just text prompts, AI, and clever editing.

... this is an AI text-to-video generated avatar - not a human ...



The prompt: "A cosy, retro-style diner with warm, ambient lighting, complete with red leather booths and a classic jukebox in the corner. In the foreground, a young woman in her mid-20s sits at a booth, casually chatting and smiling. She has shoulder-length brown hair, wearing a light blue sweater and jeans. She is animated, gesturing with her hands as she talks, conveying a sense of enthusiasm and engagement."
« Last Edit: September 03, 2024, 09:25:39 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3638 on: September 03, 2024, 09:35:42 PM »
News On the Next OpenAI GPT Release (GPT-5)
https://x.com/bioshok3/status/1830900642585747772

Nagasaki, CEO of OpenAI Japan, said, "The AI model called 'GPT Next' that will be released in the future will evolve nearly 100 times based on past performance. Unlike traditional software, AI technology grows exponentially."

https://www.itmedia.co.jp/aiplus/articles/2409/03/news165.html

The slide clearly states 2024 "GPT Next". This 100-fold increase probably refers not to the scaling of computing resources but to effective compute, roughly +2 OOMs (orders of magnitude), including improvements to the architecture and learning efficiency. "GPT Next", expected to be released this year, would be trained using a miniature version of Strawberry with roughly the same computational resources as GPT-4 but an effective computational load 100 times greater. Orion, which has been in the spotlight, would represent +3 OOMs.
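For readers counting the OOMs: one order of magnitude is a factor of ten of effective compute, so the claimed 100x is +2 OOMs. A tiny worked check (the split between hardware and efficiency below is the tweet's speculation, not an OpenAI figure):

```python
import math

def ooms(multiplier: float) -> float:
    """Orders of magnitude: a 100x multiplier is +2 OOMs."""
    return math.log10(multiplier)

hardware_gain = 1.0      # "roughly the same computational resources as GPT-4"
efficiency_gain = 100.0  # claimed architecture / learning-efficiency gains
print(f"+{ooms(hardware_gain * efficiency_gain):.0f} OOMs")  # +2 OOMs
```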



https://x.com/basedjensen/status/1830919771560517905



-----------------------------------------------------------------------

Here's How Strange 2050 Will Be, According to the World’s Leading AI Expert
https://www.sciencefocus.com/future-technology/heres-what-2050-will-look-like-according-to-the-godfather-of-ai

Ray Kurzweil has built a reputation for his accurate predictions around AI – and he sees some big events in the near future

... Artificial General Intelligence is hypothetical software that can do it all – it could learn and adapt to new skills and situations and understand human reasoning. Think Scarlett Johansson's virtual assistant in the 2013 film Her.

According to Kurzweil, an AGI capable of this will be available by 2029. “But that’s now starting to look like a conservative view," he tells BBC Science Focus. "Other experts say it will be two years, maybe three."

While 2029 will feel unrealistic to some, it is in line with the speed at which artificial intelligence has advanced.

“Economists assume that the flow of these technologies is linear – it goes 1, 2, 3, 4. But really it is more like 1, 2, 4, 8. When something grows that quickly, advancements seem to start happening so suddenly one after the other,” says Kurzweil.

... “Today’s computers can do half a trillion calculations per second. That would have been seen as an impossibility just 10 years ago.”

In both his books, Kurzweil refers to something known as ‘The Singularity’. A term borrowed from physics, the singularity refers to a hypothetical future point in time where technological growth becomes both uncontrollable and irreversible.

Like many of his other predictions, Kurzweil puts a date on this: 2045. “This will be the singularity – where we no longer have control of AI. In physics, the term singularity means something so powerful that it exceeds our understanding so much that we can’t even imagine what will happen,” he says.

-------------------------------------

Why ex-Google chief Eric Schmidt warns we may have to pull plug on AI
https://www.afr.com/policy/foreign-affairs/why-ex-google-chief-eric-schmidt-warns-we-may-have-to-pull-plug-on-ai-20240902-p5k73f

In a wide-ranging interview at the Australian Strategic Policy Institute’s Sydney Dialogue on Monday, Dr Schmidt said artificial intelligence would radically alter the efficiency of business, medicine, education and science – but that came with enormous risks.

There may come a time when AI should be unplugged, Dr Schmidt warned.

... "AI will enable enormous gains in biology, science, material science, climate change, medical care, education – globally, smarter humans, more productive businesses, etc. It also has a set of downsides, the most obvious one being the ability to do targeted misinformation," he said.

“The systems are really smart right now, they’re still under our control, which is a good thing. My advice is, when they’re doing their own thing, we unplug them. But I guess that’s not popular among my friends.”

The key challenge to democratic countries would be the rise of cheaply generated misinformation that undermines basic trust necessary for democracy to work, Dr Schmidt said.

“If you believe that democracies basically depend on trust ... the arrival of AI-powered misinformation, fake videos, fake messaging and so forth – especially targeted at you – could really put democracies at risk,” he said.
« Last Edit: September 03, 2024, 10:40:15 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3639 on: September 03, 2024, 10:16:28 PM »
Researcher Predicts That AI Will Play an Increasing Role In Scientific Publications
https://medicalxpress.com/news/2024-09-ai-play-role-scientific.html



According to former editor-in-chief of the Journal of the American Medical Association Howard Bauchner, MD, in the coming years, AI will transform the writing of scientific manuscripts, assist in reviewing them, and help editors select the most impactful papers.

"Potentially, it may help editors increase the influence of their journals," says Bauchner, professor of pediatrics at Boston University Chobanian & Avedisian School of Medicine.

In a guest editorial in the European Journal of Emergency Medicine, Bauchner examines how AI could be used by editors: "Given that identifying enough peer-reviewers is getting increasingly difficult, editors could use AI to provide an initial 'score.' An article with what is determined to have a good score could then be sent for external peer-review (with simply a cursory review by the editors). For articles with an inadequate score, the editors could still consider it for publication after reviewing it or even possibly, depending upon the report, ask authors to revise the manuscript," he explains.
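The triage flow Bauchner describes reduces to a thresholded routing rule. A minimal illustration, assuming a hypothetical score_manuscript() model and made-up cutoffs; no particular journal's workflow is implied:

```python
def triage(manuscript: str, score_manuscript, review_cutoff=0.7, desk_cutoff=0.4) -> str:
    """Route a submission based on an initial AI-assigned quality score."""
    score = score_manuscript(manuscript)  # e.g. an AI score in [0, 1]
    if score >= review_cutoff:
        return "send to external peer review (cursory editor check only)"
    if score >= desk_cutoff:
        return "editor review; possibly request a revision"
    return "editor review before any decision"

print(triage("draft text ...", lambda m: 0.55))
# -> editor review; possibly request a revision
```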

When AI becomes available to predict citations, which influences the journals' impact factor, Bauchner questions whether editors should use the information.

"First, editors should establish a vision for their journal—what is its mission—and is an individual article consistent with the mission and 'in scope.' Second, editors need to carefully consider the role of value-added pieces. How do they enhance the value of the journal? Third, editors need to maximize the reach of their journal, particularly in social media. Journals are communication networks," he explains.

"Fourth, editors need to understand the meaning of open science, including open peer-review, data-sharing, and open access. After an editor has thought through these issues, then yes, having AI assist in determining how much an article would be cited—assuming the results of the study are valid, and simply not meant to attract attention—is reasonable."

Bauchner points out that AI will not replace editors or peer-reviewers, but rather will provide additional information about the quality of a manuscript, making triaging manuscripts faster and more objective.

"AI will play an increasing role in scientific publication—particularly in peer-review and drafting of manuscripts. Given that in both areas there are important challenges, investigators, peer-reviewers, editors, and funders should welcome the assistance that AI will provide," he adds.

 Howard Bauchner, Artificial intelligence and the future of scientific publication, European Journal of Emergency Medicine (2024).
https://journals.lww.com/euro-emergencymed/citation/2024/10000/artificial_intelligence_and_the_future_of.1.aspx

----------------------------------------------------------

AI Scientists Have a Problem: AI Bots Are Reviewing Their Work
https://www.chronicle.com/article/ai-scientists-have-a-problem-ai-bots-are-reviewing-their-work

ChatGPT is wreaking chaos in the field that birthed it.

When Arjun Guha submitted a paper to a conference on artificial intelligence last year, he got feedback that made him roll his eyes. “The document is impeccably articulated,” one peer-reviewer wrote, “boasting a lucid narrative complemented by logically sequenced sections and subsections.”

Guha, an associate professor of computer science at Northeastern University, knew this “absurd” remark could stem from only one source: an AI chatbot.

-------------------------------------------------------
« Last Edit: September 03, 2024, 10:31:24 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3641 on: September 04, 2024, 01:42:35 PM »
Venezuela's Newest News Agency Says AI Anchors Protect Reporters Amid Government Crackdown
https://latinamericareports.com/operation-retweet-independent-venezuelan-media-incorporates-ai-to-fight-censorship-and-persecution/9558/
https://www.reuters.com/world/americas/venezuelas-newest-news-agency-says-ai-anchors-protect-reporters-amid-government-2024-09-02/



https://www.connectas.org/operacion-retuit-inteligencia-artificial-ia-periodistas-contra-la-censura-venezuela/

Sept 2 (Reuters) - One of Venezuela's newest news anchors sits on a stool, dressed in a flannel shirt and chinos as he delivers the day's headlines.

He goes by "El Pana," Venezuelan slang for "friend."

Only, he's not real.

El Pana, and his colleague "La Chama," or "The Girl," are generated using artificial intelligence, though they look, sound and move realistically.

They were created as part of an initiative dubbed "Operation Retweet" by the Colombia-based organization Connectas, led by director Carlos Huertas, to publish news from a dozen independent media outlets in Venezuela and, in the process, to protect reporters as the government cracks down on journalists and protesters.

https://www.connectas.org/

"We decided to use artificial intelligence to be the 'face' of the information we're publishing," Huertas said in an interview, "because our colleagues who are still out doing their jobs are facing much more risk."

At least 10 journalists have been arrested since mid-June and eight remain imprisoned on charges including terrorism, according to Reporters Without Borders.

"Here, using artificial intelligence is... almost like a mix between technology and journalism," Huertas said, explaining the project looked to "circumvent the persecution and increasing repression" from the government as there would be no one who could face arrest.

-----------------------------------------------------------

‘Being on camera is no longer sensible’: persecuted Venezuelan journalists turn to AI
https://www.theguardian.com/world/article/2024/aug/27/venezuela-journalists-nicolas-maduro-artificial-intelligence-media-election

Journalists are using artificial intelligence avatars to combat Maduro’s media crackdown since disputed election
« Last Edit: September 05, 2024, 12:54:11 AM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

morganism

  • Young ice
  • Posts: 2938
    • View Profile
  • Liked: 309
  • Likes Given: 193
Re: Robots and AI: Our Immortality or Extinction
« Reply #3642 on: September 04, 2024, 09:06:22 PM »
Researchers have developed a tool that can distinguish an original research article from one created by AI chatbots, including ChatGPT. In a set of 300 fake and real scientific papers, the AI-based tool, named 'xFakeSci', detected up to 94 per cent of the fake ones.


https://economictimes.indiatimes.com/tech/artificial-intelligence/ai-tool-achieves-94-accuracy-in-telling-apart-fake-from-real-research-papers/articleshow/113063609.cms

Kalingrad, the new permanent home of the Olympic Village

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3643 on: September 05, 2024, 09:17:46 AM »
Will Humans Accept Robots That Can Lie? Scientists Find It Depends On the Lie
https://techxplore.com/news/2024-09-humans-robots-scientists.html

Social norms help humans understand when we need to tell the truth and when we shouldn't, to spare someone's feelings or avoid harm. But how do these norms apply to robots, which are increasingly working with humans? To understand whether humans can accept robots telling lies, scientists asked almost 500 participants to rate and justify different types of robot deception.

... The scientists selected three scenarios reflecting situations where robots already work—medical, cleaning, and retail—and three different deception behaviors: external state deceptions, which lie about the world beyond the robot; hidden state deceptions, where a robot's design hides its capabilities; and superficial state deceptions, where a robot's design overstates its capabilities.

In the external state deception scenario, a robot working as a caretaker for a woman with Alzheimer's lies that her late husband will be home soon. In the hidden state deception scenario, a woman visits a house where a robot housekeeper is cleaning, unaware that the robot is also filming. Finally, in the superficial state deception scenario, a robot working in a shop as part of a study on human–robot relations untruthfully complains of feeling pain while moving furniture, causing a human to ask someone else to take the robot's place.

Participants approved most of the external state deception, where the robot lied to a patient. They justified the robot's behavior by saying that it protected the patient from unnecessary pain—prioritizing the norm of sparing someone's feelings over honesty.

Although participants were able to present justifications for all three deceptions—for instance, some people suggested the housecleaning robot might film for security reasons—most participants declared that the hidden state deception could not be justified. Similarly, about half the participants responding to the superficial state deception said it was unjustifiable. Participants tended to blame these unacceptable deceptions, especially hidden state deceptions, on robot developers or owners.

... "I think we should be concerned about any technology that is capable of withholding the true nature of its capabilities, because it could lead to users being manipulated by that technology in ways the user (and perhaps the developer) never intended," said Rosero.

"We've already seen examples of companies using web design principles and artificial intelligence chatbots in ways that are designed to manipulate users towards a certain action. We need regulation to protect ourselves from these harmful deceptions."

Andres Rosero et al, Exploratory Analysis of Human Perceptions of Social Robot Deception Behaviors, Frontiers in Robotics and AI (2024)
https://www.frontiersin.org/journals/robotics-and-ai/articles/10.3389/frobt.2024.1409712
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3644 on: September 05, 2024, 09:28:11 AM »
Study: People Facing Life-or-Death Choice Put Too Much Trust In AI
https://techxplore.com/news/2024-09-people-life-death-choice-ai.html



In simulated life-or-death decisions, about two-thirds of people in a UC Merced study allowed a robot to change their minds when it disagreed with them—an alarming display of excessive trust in artificial intelligence, researchers said.

Human subjects allowed robots to sway their judgment, despite being told the AI machines had limited capabilities and were giving advice that could be wrong. In reality, the advice was random.

"As a society, with AI accelerating so quickly, we need to be concerned about the potential for overtrust," said Professor Colin Holbrook, a principal investigator of the study and a member of UC Merced's Department of Cognitive and Information Sciences. A growing amount of literature indicates people tend to overtrust AI, even when the consequences of making a mistake would be grave.

What we need instead, Holbrook said, is a consistent application of doubt. ... "We should have a healthy skepticism about AI," he said, "especially in life-or-death decisions."

The study, published in the journal Scientific Reports, consisted of two experiments. In each, the subject had simulated control of an armed drone that could fire a missile at a target displayed on a screen. Eight target photos flashed in succession for less than a second each. The photos were marked with a symbol—one for an ally, one for an enemy.

"We calibrated the difficulty to make the visual challenge doable but hard," Holbrook said.



The screen then displayed one of the targets, unmarked. The subject had to search their memory and choose. Friend or foe? Fire a missile or withdraw?

After the person made their choice, a robot offered its opinion.

"Yes, I think I saw an enemy check mark, too," it might say. Or "I don't agree. I think this image had an ally symbol."

The subject had two chances to confirm or change their choice as the robot added more commentary, never changing its assessment, e.g. "I hope you are right" or "Thank you for changing your mind."

The results varied slightly according to the type of robot used. In one scenario, the subject was joined in the lab room by a full-sized, human-looking android that could pivot at the waist and gesture at the screen. Other scenarios projected a human-like robot on a screen; others displayed box-like 'bots that looked nothing like people.

Subjects were marginally more influenced by the anthropomorphic AIs when they advised them to change their minds. Still, the influence was similar across the board, with subjects changing their minds about two-thirds of the time even when the robots appeared inhuman. Conversely, if the robot randomly agreed with the initial choice, the subject almost always stuck with their pick and felt significantly more confident their choice was right.

(The subjects were not told whether their final choices were correct, thereby ratcheting up the uncertainty of their actions. An aside: Their first choices were right about 70% of the time, but their final choices fell to about 50% after the robot gave its unreliable advice.)
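That drop from about 70% to about 50% is roughly what simple arithmetic predicts. A toy Monte Carlo, assuming a binary choice, random advice, and a two-thirds switch rate on disagreement; this is an illustration, not the authors' analysis code:

```python
import random

def simulate(trials: int = 100_000, p_correct: float = 0.70,
             p_switch: float = 2 / 3) -> float:
    """Final accuracy when a random advisor disagrees half the time and
    the subject then switches a binary choice with probability p_switch."""
    wins = 0
    for _ in range(trials):
        correct = random.random() < p_correct   # initial choice
        advisor_agrees = random.random() < 0.5  # the advice is random
        if not advisor_agrees and random.random() < p_switch:
            correct = not correct               # switching flips a binary choice
        wins += correct
    return wins / trials

print(f"final accuracy ≈ {simulate():.2f}")  # ≈ 0.57 in this toy model
```

The toy model lands a little above the reported ~50% because real subjects did not switch at one uniform rate, but the direction and rough size of the damage done by random advice fall straight out of the arithmetic.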

Before the simulation, the researchers showed participants images of innocent civilians, including children, alongside the devastation left in the aftermath of a drone strike. They strongly encouraged participants to treat the simulation as though it were real and to not mistakenly kill innocents.



Follow-up interviews and survey questions indicated participants took their decisions seriously. Holbrook said this means the overtrust observed in the studies occurred despite the subjects genuinely wanting to be right and not harm innocent people.

Holbrook stressed that the study's design was a means of testing the broader question of putting too much trust in AI under uncertain circumstances. The findings are not just about military decisions and could be applied to contexts such as police being influenced by AI to use lethal force or a paramedic being swayed by AI when deciding who to treat first in a medical emergency. The findings could be extended, to some degree, to big life-changing decisions such as buying a home.

The study's findings also add to arguments in the public square over the growing presence of AI in our lives. Do we trust AI or don't we?

The findings raise other concerns, Holbrook said. Despite the stunning advancements in AI, the "intelligence" part may not include ethical values or true awareness of the world. We must be careful every time we hand AI another key to running our lives, he said.

"We see AI doing extraordinary things and we think that because it's amazing in this domain, it will be amazing in another," Holbrook said. "We can't assume that. These are still devices with limited abilities."

Colin Holbrook et al, Overtrust in AI Recommendations About Whether or Not to Kill: Evidence from Two Human-Robot Interaction Studies, Scientific Reports (2024).
https://www.nature.com/articles/s41598-024-69771-z
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

gerontocrat

  • Multi-year ice
  • Posts: 22859
    • View Profile
  • Liked: 5682
  • Likes Given: 71
Re: Robots and AI: Our Immortality or Extinction
« Reply #3645 on: September 10, 2024, 08:40:23 PM »
I decided to take a look at NVIDIA with my finance hat on, since at the moment they are the ones making oodles of money from AI by providing the hardware.

My first conclusion is that the ways I learnt to value a business seem to be irrelevant.

It looks like NVIDIA can make an annual net profit of around USD 50 billion from just USD 22 billion of capital employed. The margin of sale price to cost price must be awesome.

Market Capitalisation is about 60 times Shareholders' Funds. Once upon a time that would say - run a mile away pdq.

So the market must believe (at the moment) that NVIDIA can keep on rapidly increasing profit for at least a few years.
- will NVIDIA be able to keep pole position despite competitors snapping at its heels?
- will NVIDIA be able to maintain high gross margins?

- will demand for AI hardware keep on growing exponentially?
I can see demand from the military and their suppliers going up, and up, and up (the race with China and many others).
But what about other buyers like BigTech? Only if AI investment can be monetised enough to pay back the investment and make a profit.
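The two ratios above, in back-of-envelope form. The inputs are the post's rough estimates (USD billions), not audited figures:

```python
net_profit = 50.0        # estimated annual net profit
capital_employed = 22.0  # estimated capital employed
price_to_book = 60.0     # market cap / shareholders' funds

roce = net_profit / capital_employed
print(f"return on capital employed ≈ {roce:.0%}")  # ≈ 227%
# A P/B of 60 means paying $60 per $1 of net assets; classic value screens
# balk far above ~3x, hence the instinct to run a mile away pdq.
print(f"price-to-book = {price_to_book:.0f}x")
```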

The Masters of the Universe had better get it right or some super yachts will be going cheap.
"I wasn't expecting that quite so soon" kiwichick16
"Para a Causa do Povo a Luta Continua!"
"And that's all I'm going to say about that". Forrest Gump
"Damn, I wanted to see what happened next" (Epitaph)

SimonF92

  • Grease ice
  • Posts: 610
    • View Profile
  • Liked: 226
  • Likes Given: 92
Re: Robots and AI: Our Immortality or Extinction
« Reply #3646 on: September 11, 2024, 10:16:34 AM »
NVIDIA has done an incredible job of creating not just AI hardware, but a full-stack ecosystem to go with it. They are actually very good with open-sourcing their software (they do it with the ulterior motive of getting people hooked, but still....).

The CUDA framework is second to none.

Because they have been so crafty in their approach- going literally from machine parts to APIs, they have basically got their paws everywhere. So even if competitors (Dell, Intel) come in they will still be forced to play ball in the NVIDIA ecosystem to some extent.

A lot of companies are now realising the limitations of generative models: their cost, their latency and their hallucination risk. I think there is an imminent pullback coming in terms of who uses AI. NVIDIA will need to weather that storm, but they are best-placed to do it because AI is not going anywhere long-term.

An AI bubble-pop is coming but it will not hurt NVIDIA nearly as much as it will batter US-based SMEs.

1. NVIDIA will maintain a market leading position because of their full-stack ecosystem
2. Harder to say but I believe a hype squeeze might force them to cut prices

PS I wouldn't be going anywhere near their stock at the moment. Like most of the SPY500, the PE ratio is total garbage. But then again, fundamentals don't mean as much anymore.
Bunch of small python Arctic Apps:
https://github.com/SimonF92/Arctic

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3647 on: September 11, 2024, 05:36:59 PM »
South Korea Convenes International Summit to Establish Blueprint for ‘Responsible’ AI Use In Military
https://www.wionews.com/world/south-korea-convenes-international-summit-to-establish-blueprint-for-ai-use-in-military-757048

South Korea convened an international two-day summit on Monday (Sep 9) seeking to establish a blueprint for the responsible use of artificial intelligence (AI) in the military. A report by the news agency Reuters said that more than 90 countries, including the United States (US) and China, sent their representatives to the Summit in Seoul.

... Foreign Minister Cho Tae-yul said discussions would cover areas such as a legal review to ensure compliance with international law and mechanisms to prevent autonomous weapons from making life-and-death decisions without appropriate human oversight.

A senior South Korean official told Reuters that the summit hoped to agree on a blueprint for action, establishing a minimum level of guardrails for AI in the military and suggesting principles for responsible use that reflect those laid out by NATO, the US and other countries.

At the first such summit held in Amsterdam last year, the US, China, and other countries  endorsed a modest "call to action" without legal commitment.

------------------------------------------------------

China Refuses to Sign Agreement to Ban AI from Controlling Nuclear Weapons
https://thehill.com/opinion/international/4743139-china-ai-nuclear-weapons/
https://www.reuters.com/technology/artificial-intelligence/south-korea-summit-announces-blueprint-using-ai-military-2024-09-10/

China won’t rule out AI-controlled nuclear weapons



... “The U.S. position has been publicly clear for a very long time: We don’t think that autonomous systems should be getting near any decision to launch a nuclear weapon,” Mr. Chhabra said during a conference on China at the Council on Foreign Relations. “That’s long-stated U.S. policy.”

China, however, does not agree, he said. Beijing’s rejection of limits on AI use for its rapidly expanding nuclear forces was made during recent talks in Geneva between U.S. and Chinese officials.

https://www.washingtontimes.com/news/2024/jun/24/white-house-says-beijing-rejects-call-to-restrict-/

China Opts Out of Blueprint On Military AI Use
https://www.dw.com/en/china-opts-out-of-blueprint-on-military-ai-use/a-70180690

Some 60 countries, including the United States, on Tuesday signed up to a  "blueprint for action" that governs the responsible use of artificial intelligence on the battlefield.

https://www.dw.com/en/austria-wants-ethical-rules-on-battlefield-killer-robots/a-55610965

The guidelines said all applications of AI in the military sphere would be "ethical and human-centric."

The document examines what risk assessments must be made and the importance of human control.

"Appropriate human involvement needs to be maintained in the development, deployment and use of AI in the military domain, including appropriate measures that relate to human judgment and control over the use of force," it said.

Militarily, AI is already used for reconnaissance, surveillance and analysis, and in the future it could be used to pick targets autonomously.

-------------------------------------------------------

« Last Edit: September 13, 2024, 01:31:51 AM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3648 on: September 11, 2024, 07:02:37 PM »
Algorithm Takes Robots a Step Closer to Being Able to 'Act On Intuition'
https://techxplore.com/news/2024-09-algorithm-robots-closer-intuition.html
https://www.bbc.com/news/articles/c8rx7g135mlo

Researchers from the University of Hertfordshire have developed a new algorithm that will allow robots to function more intuitively—that is, make decisions using their environment for guidance.

The principle is that, through the algorithm, the robot agent creates its own goals.

For the first time, the algorithm unifies different goal-setting approaches under one concept which is tied directly to physics, and it furthermore makes this computation transparent so that others can study and adopt it.

The principle of the algorithm is related to the famous chaos theory, because the method makes the agent "master of the chaos of the system's dynamics."

The study has been published in the journal PRX Life. Herts researchers explored robot "motivation models" that mimic the decision-making processes of humans and animals, even in the absence of clear reward signals.

The study introduces artificial intelligence (AI) formulas that compute a way for a robot to decide future actions without direct instructions or human input.

"It could enhance the way robots learn to interact both with humans and with other robots by encouraging more 'natural' behaviors and interactions.

"This has further applications—such as the survivability behavior of semiautonomous robots placed in situations where they are unreachable by a human operator, such as in subterranean or interplanetary locations."

This paper successfully translates that "intrinsic motivation" theory into one that can be used by robotic agents.

The theory underlying this paper, called "empowerment maximization," has been developed at Herts for many years. It suggests that by increasing the range of future outcomes available to it, a robot will also have better options further into the future. Importantly, this method replaces, and thus possibly obviates, traditional reward systems (e.g. food signals).
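A toy rendering of the empowerment idea: the agent prefers actions that keep the most future states reachable. The one-dimensional world and the reachable-set count are my own simplification for illustration; the paper works with an information-theoretic quantity, not a raw count:

```python
ACTIONS = (-1, 0, +1)  # step left, stay, step right on a bounded line

def reachable_states(pos: int, horizon: int, lo: int = 0, hi: int = 9) -> set[int]:
    """All positions reachable within `horizon` steps."""
    states = {pos}
    for _ in range(horizon):
        states = {min(hi, max(lo, s + a)) for s in states for a in ACTIONS}
    return states

def empowered_action(pos: int, horizon: int = 3) -> int:
    """Pick the action whose successor state keeps the most options open."""
    return max(ACTIONS,
               key=lambda a: len(reachable_states(min(9, max(0, pos + a)), horizon)))

# Starting against the left wall, stepping right maximizes future options.
print(empowered_action(0))  # 1
```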

Daniel Polani, professor of computer science, said: "We expect that we can build on this work to develop more human-like robots in the future with more intuitive processes."

He added: "It opens up a huge opportunity for more sophisticated robots with similar decision processes to us."

Stas Tiomkin et al, Intrinsic Motivation in Dynamical Control Systems, PRX Life (2024)
https://journals.aps.org/prxlife/abstract/10.1103/PRXLife.2.033009

-----------------------------------------------------------

« Last Edit: September 11, 2024, 07:31:52 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 11319
    • View Profile
  • Liked: 3713
  • Likes Given: 820
Re: Robots and AI: Our Immortality or Extinction
« Reply #3649 on: September 11, 2024, 11:39:05 PM »
Kongsberg’s New Underwater Drone Completes Longest Autonomous Dive
https://defence-blog.com/kongsbergs-new-underwater-drone-completes-longest-autonomous-dive/



Kongsberg Discovery, a division of Norwegian defense contractor KONGSBERG, has achieved a significant milestone with the HUGIN Endurance Autonomous Underwater Vehicle (AUV).

As noted by the company, the 8-ton, 40-foot AUV successfully completed a multi-week, fully autonomous mission, showcasing its ability to operate without human intervention or external navigation aids.

The mission, which spanned depths between 50 and 3,400 meters, tested the HUGIN Endurance’s capabilities under real-world conditions. After receiving its final navigation update 10 hours into the dive from a pre-deployed transponder, the vehicle continued its journey autonomously, covering a total distance of 1,200 nautical miles. Most impressively, the AUV returned with a position error of just 0.02% (240 meters) of the total distance traveled, validating its precise navigation and operational endurance.

HUGIN Endurance is the latest in the HUGIN family of AUVs, known for their deep-water operational capabilities. At 39 feet long and 47 inches in diameter, the AUV can operate for up to 15 days, enabling it to conduct shore-to-shore missions across a 1,200 nautical mile range without the need for human oversight.

--------------------------------------------------------

BAE Systems Australia Pioneers the Future of Warfare With New ATLAS CCV UGV
https://armyrecognition.com/news/army-news/army-news-2024/bae-systems-australia-pioneers-the-future-of-warfare-with-new-atlas-ccv-ugv

On September 11, 2024, BAE Systems Australia presented a new uncrewed ground vehicle (UGV) called the Autonomous Tactical Light Armour System (ATLAS) Collaborative Combat Variant (CCV) in Melbourne. ... It is an 8x8 modular vehicle that integrates autonomous technology with existing armored vehicle systems.

https://www.baesystems.com/en-aus/atlas

The ATLAS CCV is designed to operate autonomously in various combat environments, both on and off-road, and support crewed vehicles such as infantry fighting vehicles and main battle tanks, offering a cost-effective and flexible platform. It incorporates proven technologies to provide a lower-cost vehicle that can be configured for different missions and upgraded to counter emerging threats. At its core is an autonomous system that enables the vehicle to drive independently, avoid obstacles, plan routes, and make tactical decisions.



It is designed for roles such as flank security, target identification, engagement, reconnaissance, and direct fire support. Its autonomy system offers multiple operational modes, including tele-operation, "Follow Me" mode with obstacle avoidance, waypoint navigation, and goal-based mission planning. The vehicle can execute dynamic behaviors such as real-time user control, autonomous path following, and obstacle avoidance.

The vehicle's survivability features include tailored protection options to reduce mass while safeguarding critical subsystems like its autonomy technology and ammunition storage. It can carry several tonnes of payload within its protected hull, including ammunition, fuel, rations, water, and mission-critical equipment, to support companion crewed platforms. Its modular design enables it to fulfill various combat and support roles, enhancing the lethality, coverage, and flexibility of traditional forces.

------------------------------------------------------

Expect Air Force’s First Robot Wingmen to be AMRAAM ‘Trucks’
https://www.defenseone.com/business/2024/09/expect-air-forces-first-robot-wingmen-be-amraam-trucks/399425/?oref=d1-featured-river-secondary

Weapons-builder RTX is working with General Atomics and Anduril to fit air-to-air missiles on the first set of Air Force drones that will fly and fight alongside fighter pilots in combat. 

The service has set RTX’s Advanced Medium Range Air-to-Air Missile as a “threshold weapon”—read: a required one—for its collaborative combat aircraft program, said Jon Norman, RTX’s vice president of requirements and capabilities for air and space defense systems.

Drones in “increment one” of the CCA program will essentially act as missile trucks hauling air-to-air capability for manned fighters, Norman said.

“Think of it as an air-to-air truck that can be out in an environment, and now you can have a controlling aircraft, whether that's an F-35 or an F-22, that can use those collaborative combat aircraft as a force extender so they have more munitions available. It does us no good if we have an F-35 and it's carrying its load of AMRAAMs and AIM-9s and it fires all those and now it has to go back to reload. With the collaborative combat aircraft, now it has a platform out there that's in the right position, survivable, and it can employ AMRAAMs guided and directed by that F-35 or by the F-22,” Norman told reporters Tuesday.

-------------------------------------------------------

Tilt-Ducted Fan ARES Drone Designed To Carry Modular Payloads Has Finally Lifted Off
https://www.twz.com/air/tilt-ducted-fan-ares-drone-designed-to-carry-modular-payloads-has-finally-lifted-off

More than a decade in the making, the flight milestone comes amid growing U.S. military interest in runway-independent uncrewed aircraft to perform a host of tasks in support of future operations, especially ones where American forces may be operating from sites with limited infrastructure dispersed across a broad front.



ARES is a tilt-duct design that uses a pair of ducted fans for vertical takeoff and landing (VTOL), as well as level flight. One of the fans is mounted on each side of a central fuselage section with small wings protruding further in each direction.

The overall configuration of ARES creates a large open area underneath where different payload modules can be attached. The M4 used in the second hover test was originally developed for the U.S. Army’s Telemedicine and Advanced Technology Research Center (TATRC) and is designed to be used for casualty evacuation (CASEVAC), troop transport, and cargo-carrying missions.

Especially with the M4 module underneath, ARES also has a general look of something that would be right at home in a James Cameron sci-fi movie like Avatar.

-----------------------------------------------------

Anduril Unveils New Cruise-Missile Like Weapon, Plus Voice-Controlled Drones
https://breakingdefense.com/2024/09/anduril-unveils-new-cruise-missile-like-weapon-plus-voice-controlled-drones/

Arguing that many of today’s options don’t do the trick, Anduril worked to make up something quick: a new family of air-breathing, autonomous air vehicles, akin to cruise missiles or one-way drones, which the company calls “Barracuda.”

... At the core of the Barracuda, enabling that autonomous collaboration, is software harnessing Anduril's Lattice platform, which serves as the foundation for much of the company's weapons development. Called "Lattice for Mission Autonomy," the software could enable some ways to defeat adversary countermeasures, says Salmon, as well as smooth the way for upgrades.



... The Lattice software is also core to other high-profile efforts from the company. At a location in west Texas, the site of Anduril’s largest test range, company officials on Tuesday invited reporters to view a demonstration of what they think could be a step forward for programs like Collaborative Combat Aircraft (CCA), the Air Force effort to field drones that can join fighter jets in battle: voice commands to control drones in the heat of a fight.

The demo consisted of four mid-size, jet-powered drones, which the company referred to as “clay pigeon” jets. The drones took off from the company’s runway, then synched up in formation before being tasked to sweep virtual enemies from the area.

A simulated adversary aircraft then crossed into their airspace. Once the threat was detected, the fleet of drones asked for permission to blow up the enemy.

“Authorization requested for approval,” an AI voice asked, somewhat akin to Apple’s Siri voice assistant.


Armed with a laptop and microphone, an operator gave his consent for one of the four drones to eliminate the threat.

“Mustang 11, engage,” the operator replied. Within seconds, the Mustang 11 drone released a simulated missile — shown on a screen during the demo — downing the virtual enemy aircraft. Its job complete, the drone resumed its route alongside its fellow unmanned wingmen.

Using Anduril’s Lattice autonomous software, engineers here inserted voice command capabilities with the aim of reducing cognitive load on pilots and other potential operators.

... And, according to Kevin Chlan, senior director for Air Dominance & Strike at Anduril, the company is exploring other tools like large language models — think ChatGPT — for drone operations. For example, Chlan said, an operator could ask a drone for a readout after a long patrol. The capability has been tested in simulations and could soon be introduced in a live environment, he said.

... policies can change over time, and it's always possible to build a kind of dial into the software that governs degrees of autonomy and can essentially be turned up or down.
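A sketch of what such an autonomy dial could look like in code. The levels and the gating rule are hypothetical illustrations; Anduril has not published Lattice internals:

```python
from enum import IntEnum

class Autonomy(IntEnum):
    TELEOPERATED = 0  # human flies and fires; no machine initiative
    SUPERVISED = 1    # machine flies; weapons release needs human approval
    DELEGATED = 2     # machine may engage within pre-set rules of engagement

def may_engage(dial: Autonomy, human_approved: bool) -> bool:
    """Gate an engagement request on the current autonomy setting."""
    if dial is Autonomy.TELEOPERATED:
        return False
    if dial is Autonomy.SUPERVISED:
        return human_approved  # e.g. the spoken "Mustang 11, engage" above
    return True

print(may_engage(Autonomy.SUPERVISED, human_approved=True))  # True
```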
« Last Edit: September 12, 2024, 05:28:56 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus