
Author Topic: Robots and AI: Our Immortality or Extinction  (Read 385890 times)

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3300 on: May 11, 2024, 08:01:12 PM »
Hello? (Hello, hello, hello) / Is there anybody in there? ...
- Pink Floyd


AI May Be to Blame for Our Failure to Make Contact With Alien Civilizations
https://phys.org/news/2024-05-ai-blame-failure-contact-alien.html

... Could AI be the universe's "great filter"—a threshold so hard to overcome that it prevents most life from evolving into space-faring civilizations?

This is a concept that might explain why the search for extraterrestrial intelligence (SETI) has yet to detect the signatures of advanced technical civilizations elsewhere in the galaxy.

The great filter hypothesis is ultimately a proposed solution to the Fermi Paradox, which asks why, in a universe vast and ancient enough to host billions of potentially habitable planets, we have not detected any signs of alien civilizations. The hypothesis suggests there are insurmountable hurdles in the evolutionary timeline of civilizations that prevent them from developing into space-faring entities.

... The emergence of artificial superintelligence (ASI) could be such a filter. AI's rapid advancement, potentially leading to ASI, may intersect with a critical phase in a civilization's development—the transition from a single-planet species to a multiplanetary one.

This is where many civilizations could falter, with AI making much more rapid progress than our ability either to control it or sustainably explore and populate our solar system.

The challenge with AI, and specifically ASI, lies in its autonomous, self-amplifying and self-improving nature. It has the potential to enhance its own capabilities at a speed that far outpaces the evolutionary timelines of biological intelligence.

The potential for something to go badly wrong is enormous, leading to the downfall of both biological and AI civilizations before they ever get the chance to become multiplanetary. For example, if nations increasingly rely on and cede power to autonomous AI systems that compete against each other, military capabilities could be used to kill and destroy on an unprecedented scale. This could potentially lead to the destruction of our entire civilization, including the AI systems themselves.

... In this scenario, I estimate the typical longevity of a technological civilization might be less than 100 years. That's roughly the time between being able to receive and broadcast signals between the stars (1960), and the estimated emergence of ASI (2040) on Earth. This is alarmingly short when set against the cosmic timescale of billions of years.

This estimate, when plugged into optimistic versions of the Drake equation—which attempts to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way—suggests that, at any given time, there are only a handful of intelligent civilizations out there. Moreover, like us, their relatively modest technological activities could make them quite challenging to detect.
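To make the arithmetic concrete, here is a minimal Drake-equation sketch in Python. The parameter values are illustrative "optimistic" assumptions, not figures from Garrett's paper; the point is simply how strongly the result depends on the communicative lifetime L.

```python
# Minimal Drake-equation sketch. Parameter values are illustrative
# "optimistic" assumptions, not figures from Garrett (2024).

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* . fp . ne . fl . fi . fc . L
    N: number of civilizations in the galaxy whose signals we might detect."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

params = dict(
    R_star=1.5,  # star formation rate (stars/year), assumed
    f_p=1.0,     # fraction of stars with planets, assumed
    n_e=0.2,     # habitable planets per such star, assumed
    f_l=1.0,     # fraction where life emerges, optimistic assumption
    f_i=1.0,     # fraction where intelligence emerges, optimistic assumption
    f_c=0.2,     # fraction that become communicative, assumed
)

for L in (100, 10_000, 1_000_000):  # communicative lifetime in years
    print(f"L = {L:>9,} yr  ->  N ~ {drake(L=L, **params):,.1f}")
# With L ~ 100 yr even optimistic inputs give only a handful of
# concurrently detectable civilizations; a long L changes the picture.
```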

... The integration of autonomous AI in military defense systems has to be an area of particular concern. There is already evidence that humans will voluntarily relinquish significant power to increasingly capable systems, because they can carry out useful tasks much more rapidly and effectively without human intervention. Governments are therefore reluctant to regulate in this area given the strategic advantages AI offers, as has been recently and devastatingly demonstrated in Gaza.

This means we already edge dangerously close to a precipice where autonomous weapons operate beyond ethical boundaries and sidestep international law. In such a world, surrendering power to AI systems in order to gain a tactical advantage could inadvertently set off a chain of rapidly escalating, highly destructive events. In the blink of an eye, the collective intelligence of our planet could be obliterated.

Humanity is at a crucial point in its technological trajectory. Our actions now could determine whether we become an enduring interstellar civilization, or succumb to the challenges posed by our own creations.

Michael A. Garrett, Is artificial intelligence the great filter that makes advanced technical civilisations rare in the universe?, Acta Astronautica (2024)
https://www.sciencedirect.com/science/article/pii/S0094576524001772

---------------------------------------------------------------

... a journal article based on an episode of Star Trek: Voyager? ...

Prototype (Star Trek: Voyager)
https://en.wikipedia.org/wiki/Prototype_(Star_Trek:_Voyager)#Plot

... Chief Engineer B'Elanna Torres recovers a robot floating in space. She convinces Captain Janeway to ignore Commander Tuvok's warnings and allow her to revive the robot. It introduces itself as "Automated Unit 3947" of the "Pralor" robot faction. It reveals that their creators were destroyed during a war decades ago and that many Pralor units are falling apart; with no one to create additional units, the Pralors will soon become extinct. 3947 asks Torres to build a prototype, which would allow the Pralors to procreate.

Captain Janeway forbids this, citing the Prime Directive: the Automated Units were not designed to reproduce, and giving them the capability to do so would interfere with their culture. When a Pralor ship approaches Voyager to retrieve 3947, 3947 abducts Torres. The Pralor ship is prepared to destroy Voyager until Torres agrees to create the prototype. Torres succeeds.

The Voyager crew require a diversion to rescue Torres. Suddenly a second ship piloted by Automated Units appears and attacks the Pralors. The second ship identifies itself as the "Cravic" faction. 3947 reveals that two planets, Pralor and Cravic, created Automated Units to wage war against each other several decades ago. When the two planets called a truce and attempted to terminate the robots, the robots destroyed their creators out of self-preservation and continued their war. Torres realizes her prototype will upset the balance of the war and destroys it. ...


Torres: "Wait a minute. If both sides called a truce, then why didn't they stop you from fighting?"
Unit 3947: "They attempted to do so."
Torres: "And?"
Unit 3947: "We terminated the Builders."

"When it was anticipated that the war would end, the Builders no longer required our services, and they attempted to terminate us. In doing so, they became the enemy. We are programmed to destroy the enemy. It is necessary for our survival. Now that you have constructed a prototype, we will soon outnumber the Cravic units. We will achieve victory."


https://memory-alpha.fandom.com/wiki/Prototype_(episode)
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3301 on: May 11, 2024, 08:17:28 PM »
New Approach Uses Generative AI to Imitate Human Motion
https://techxplore.com/news/2024-05-approach-generative-ai-imitate-human.html



An international group of researchers has created a new approach to imitating human motion by combining central pattern generators (CPGs) and deep reinforcement learning (DRL). The method not only imitates walking and running motions but also generates movements for frequencies where motion data is absent, enables smooth transition movements from walking to running, and allows for adaptation to environments with unstable surfaces.
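For readers unfamiliar with CPGs, here is a rough sketch of the oscillator half of that combination (the deep-reinforcement-learning half is omitted): two coupled phase oscillators generating an alternating left/right rhythm whose frequency can be varied continuously. All parameters are assumptions for illustration, not the AI-CPG model from the paper.

```python
import math

# Minimal central-pattern-generator sketch: two coupled phase oscillators
# producing an alternating (anti-phase) left/right rhythm. Frequencies and
# gains are illustrative assumptions, not the AI-CPG parameters from Li et al.

def simulate_cpg(freq_hz, steps=2000, dt=0.001, coupling=5.0):
    phase = [0.0, math.pi]          # start the two legs in anti-phase
    outputs = []
    for _ in range(steps):
        d0 = 2 * math.pi * freq_hz + coupling * math.sin(phase[1] - phase[0] - math.pi)
        d1 = 2 * math.pi * freq_hz + coupling * math.sin(phase[0] - phase[1] - math.pi)
        phase[0] += d0 * dt
        phase[1] += d1 * dt
        # Oscillator output could drive a joint reference angle; here we
        # just record the sine of each phase.
        outputs.append((math.sin(phase[0]), math.sin(phase[1])))
    return outputs

walk = simulate_cpg(freq_hz=1.0)   # slow rhythm, roughly walking
run = simulate_cpg(freq_hz=3.0)    # faster rhythm, roughly running
print(walk[500], run[500])
# Smoothly ramping freq_hz is the CPG analogue of a walk-to-run transition;
# in the paper a learned reflex network modulates such oscillators from
# sensory feedback.
```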



Guanda Li et al, AI-CPG: Adaptive Imitated Central Pattern Generators for Bipedal Locomotion Learned Through Reinforced Reflex Neural Networks, IEEE Robotics and Automation Letters (2024)
https://ieeexplore.ieee.org/document/10499824
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3302 on: May 11, 2024, 08:18:42 PM »
Brain-Inspired Computer Approaches Brain-Like Size
https://spectrum.ieee.org/neuromorphic-computing-spinnaker2



Today Dresden, Germany–based startup SpiNNcloud Systems announced that its hybrid supercomputing platform, the SpiNNcloud Platform, is available for sale. The machine combines traditional AI accelerators with neuromorphic computing capabilities, using system-design strategies that draw inspiration from the human brain. Systems vary in size, but the largest commercially available machine can simulate 10 billion neurons, about one-tenth the number in the human brain. The announcement was made at the ISC High Performance conference in Hamburg, Germany.

https://spinncloud.com/platform/
https://spinncloud.com/

“The human brain is the most advanced supercomputer in the universe, and it consumes only 20 watts to achieve things that artificial intelligence systems today only dream of,” says Hector Gonzalez, cofounder and co-CEO of SpiNNcloud Systems. “We’re basically trying to bridge the gap between brain inspiration and artificial systems.”

Aside from sheer size, a distinguishing feature of the SpiNNaker2 system is its flexibility. Traditionally, most neuromorphic computers emulate the brain's spiking nature: neurons fire off electrical spikes to communicate with the neurons around them. The actual mechanism of these spikes in the brain is quite complex, and neuromorphic hardware often implements a specific simplified model. The SpiNNaker2, however, can implement a broad range of such models, as they are not hardwired into its architecture.
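For a sense of what a "simplified spiking model" looks like in practice, here is a minimal leaky integrate-and-fire neuron sketch, one of the simplest models such hardware typically supports. The constants are illustrative, and this is plain Python rather than anything targeting SpiNNaker2.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch, one of the
# simplified spiking models neuromorphic hardware commonly implements.
# Constants are illustrative; this is not SpiNNaker2 code.

def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=-0.065,
                 v_reset=-0.065, v_threshold=-0.050, r_m=1e7):
    """Integrate membrane voltage and return spike times (in time steps)."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # dv/dt = (-(v - v_rest) + R*I) / tau
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_threshold:      # threshold crossing: emit spike, then reset
            spikes.append(step)
            v = v_reset
    return spikes

# A constant 2 nA input for 200 ms produces a regular spike train.
spike_steps = simulate_lif([2e-9] * 200)
print(len(spike_steps), "spikes at steps:", spike_steps[:5])
```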

The largest commercially offered system is not only capable of emulating 10 billion neurons, but also of performing 0.3 exaops (0.3 billion billion operations per second) of more traditional AI tasks, putting it on a comparable scale with the top 10 largest supercomputers today.
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3303 on: May 13, 2024, 07:19:37 PM »
Upgrades! ...



https://openai.com/index/spring-update/

OpenAI is releasing a new flagship generative AI model called GPT-4o, set to roll out “iteratively” across the company’s developer and consumer-facing products over the next few weeks. The “o” in GPT-4o stands for “omni” — referring to GPT-4o’s ability to work across voice, text and vision.

“GPT-4o reasons across voice, text and vision,” Murati said during a keynote presentation at OpenAI’s offices in San Francisco. “And this is incredibly important, because we’re looking at the future of interaction between ourselves and machines.”

...The capabilities recall the conversational AI agent in the 2013 sci-fi film Her. In that film, the lead character develops a personal attachment to the AI personality. With the emotional expressiveness of GPT-4o from OpenAI, it's not inconceivable that similar emotional attachments may develop with OpenAI's assistant. Murati acknowledged the new challenges posed by GPT-4o's real-time audio and image capabilities in terms of safety and stated that the company will continue its iterative deployment over the coming weeks.

https://techcrunch.com/2024/05/13/openais-newest-model-is-gpt-4o/
« Last Edit: May 13, 2024, 08:25:48 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

Freegrass

  • Young ice
  • Posts: 4054
  • Autodidacticism is a complicated word
    • View Profile
  • Liked: 998
  • Likes Given: 1291
Re: Robots and AI: Our Immortality or Extinction
« Reply #3304 on: May 14, 2024, 12:36:59 AM »
Another cool new robot. This one moves very similarly to the new Boston Dynamics robot, Atlas.

When factual science is in conflict with our beliefs or traditions, we cuddle up in our own delusional fantasy where everything starts making sense again.

Sigmetnow

  • Multi-year ice
  • Posts: 26266
    • View Profile
  • Liked: 1167
  • Likes Given: 436
Re: Robots and AI: Our Immortality or Extinction
« Reply #3305 on: May 14, 2024, 03:12:52 AM »
GPT-4o
 
Other opinions:
 
The real-time translation feature seems useful, although the vernacular could use some polishing.  But the interactive dialog is often cringeworthy….
 
Quote
ᐸGerardSans/ᐳ🚀🇬🇧
Without a change in the underlying architecture everything remains the same in terms of capabilities and limitations. No surprises here. Note we have been stuck since GPT-2. All changes since then have been cosmetic, ignoring long-standing issues like hallucinations, lack of grounding, reasoning and planning.
5/13/24, https://x.com/gerardsans/status/1790089647151865910

People are still finding errors and hallucinations:
 
Generate sentences that end with the word some.
https://x.com/rosenzweigjane/status/1790160440208621883
 
Let’s play a numbers game.
https://x.com/benjaminjriley/status/1790085106037604669
 
And here's the incredible story of Jumbo swimming across the English Channel, which we all remember from history.
https://x.com/benjaminjriley/status/1790086059532878290

Quote
Gary Marcus
 
GPT-4o hot take:
• The speech synthesis is terrific, reminds me of Google Duplex (which never took off).
but
• If OpenAI had GPT-5, they would have shown it.
• They don’t have GPT-5 after 14 months of trying.
• The most important figure in the blogpost is attached below. And the most important thing about the figure is that 4o is not a lot different from Turbo, which is not hugely different from 4.
• Lots of quirky errors are already being reported, same as ever. (See e.g., examples from @RosenzweigJane and @benjaminjriley.)
• OpenAI has presumably pivoted to new features precisely because they don’t know how to produce the kind of capability advance that the “exponential improvement” would have predicted.
• Most importantly, each day in which there is no GPT-5 level model–from OpenAI or any of their well-financed, well-motivated competitors—is evidence that we may have reached a phase of diminishing returns. ⬇️ pic.twitter.com/cStszijH7m 
5/13/24, https://x.com/garymarcus/status/1790122337058119725
People who say it cannot be done should not interrupt those who are doing it.

Sigmetnow

  • Multi-year ice
  • Posts: 26266
    • View Profile
  • Liked: 1167
  • Likes Given: 436
Re: Robots and AI: Our Immortality or Extinction
« Reply #3306 on: May 14, 2024, 04:08:34 AM »
GPT-4o
 
“99% of the economy will be AIs talking to each other.”
 
➡️ pic.twitter.com/kyLz9BYyu3  2 min.  The AI talks to customer service to solve a problem for you.
 
< Not at this ridiculously slow bit rate
<< No need for speech between devices
People who say it cannot be done should not interrupt those who are doing it.

Sigmetnow

  • Multi-year ice
  • Posts: 26266
    • View Profile
  • Liked: 1167
  • Likes Given: 436
Re: Robots and AI: Our Immortality or Extinction
« Reply #3307 on: May 15, 2024, 03:24:46 AM »
Here’s Google testing a similar path, with Gemini.
 
➡️ pic.twitter.com/cVrikhaglZ  6 min.
 
“We've been testing the capabilities of Gemini, our new multimodal AI model.
We've been capturing footage to test it on a wide range of challenges, showing it a series of images, and asking it to reason about what it sees.”
People who say it cannot be done should not interrupt those who are doing it.

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3308 on: May 15, 2024, 03:48:13 PM »
GPT-4o
 
“99% of the economy will be AIs talking to each other.”
 
➡️ pic.twitter.com/kyLz9BYyu3  2 min.  The AI talks to customer service to solve a problem for you.
 
< Not at this ridiculously slow bit rate
<< No need for speech between devices

Fake News

Your example used the prior model, GPT-4, not GPT-4 "omni".

-------------------------------------------------------

... While OpenAI previously offered multimodal capabilities through its GPT-4, GPT-4V (vision) and GPT-4 Turbo models, those models all worked by converting inputs such as documents, attachments, images and even audio files into corresponding text, which was then mapped to underlying tokens, with outputs delivered via the opposite mechanism.

“Before GPT-4o, if you wanted to build a voice personal assistant, you basically had to chain or plug together three different models: 1. audio in, such as [OpenAI’s] Whisper; 2. text intelligence, such as GPT-4 Turbo; then 3. back out with text-to-speech,” Godement told VentureBeat.

“That sequencing, that changing of model, led to a few issues,” he added, highlighting latency and loss of information as big ones.

The new GPT-4o model dispenses with that daisy-chain mechanism, instead turning other forms of media directly into tokens, making it the first truly natively multimodal model trained by the company.

As a consequence, GPT-4o boasts an impressive speed boost in its audio response time compared to its predecessor GPT-4: it can respond to audio inputs in as little as 232 milliseconds (320 milliseconds on average), comparable to human conversational response times, whereas GPT-4 looks sluggish by comparison, taking up to several seconds to respond.
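A rough sketch of why the old chain added latency: three sequential model calls each contribute their own delay, and the tone/speaker information is discarded at the transcription step, whereas a natively multimodal model makes a single audio-in/audio-out call. The helpers below are simulated stand-ins, not real OpenAI API calls, and the delay figures only loosely echo those quoted above.

```python
import time

# Illustrative latency comparison between a chained voice pipeline and a
# single natively multimodal call. The helpers are placeholders (simulated
# delays), NOT real OpenAI API calls.

def fake_call(seconds, result):
    time.sleep(seconds)
    return result

def chained_voice_assistant(audio_in):
    text = fake_call(0.8, "transcript")          # 1. speech-to-text (a Whisper-class model)
    reply = fake_call(1.5, "reply text")         # 2. text LLM (a GPT-4-Turbo-class model)
    audio_out = fake_call(0.7, b"reply audio")   # 3. text-to-speech
    return audio_out                             # a few seconds total; tone and speaker info lost at step 1

def native_multimodal_assistant(audio_in):
    return fake_call(0.32, b"reply audio")       # one audio-in/audio-out model, ~320 ms average

for fn in (chained_voice_assistant, native_multimodal_assistant):
    start = time.perf_counter()
    fn(b"user audio")
    print(f"{fn.__name__}: {time.perf_counter() - start:.2f} s")
```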


By comparison, the old GPT-4 Voice Mode felt “a little laggy,” according to Godement.

Impressively, GPT-4o also extracts more information from multimodal inputs than its predecessors, resulting in greater accuracy in understanding a user’s inputs and in delivering the appropriate response.

While GPT-4/V/Turbo “can’t directly observe tone, multiple speakers, or background noises, and it can’t output laughter, singing, or express emotion,” GPT-4o can do all of these things and more.

“Because there is a single model, there’s no loss of signal,” Godement said. “A good example: if you were talking to me in a very happy way, that information would likely be lost” by older models.

... At the same time, OpenAI has also raised the rate limit — how many tokens can be sent to and from GPT-4o — fivefold with the new model compared to its predecessors, from 2 million tokens per minute up to 10 million.

“Any application that was doing personal assistant tasks, such as an educational assistant, or anything relying on audio, will immediately benefit,” by switching to GPT-4o as its underlying intelligence, according to Godement.

https://venturebeat.com/ai/what-openais-new-gpt-4o-model-means-for-developers/
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3309 on: May 15, 2024, 03:49:18 PM »
Google Takes On GPT-4o With Project Astra, an AI Agent That Understands Dynamics of the World
https://venturebeat.com/ai/google-takes-on-gpt-4o-with-project-astra-an-ai-agent-that-understands-dynamics-of-the-world/

Google demoed Project Astra, an effort to build a universal AI agent, at its annual I/O developer conference in Mountain View.

The idea is to build a multimodal AI assistant that sits as a helper, sees and understands the dynamics of the world and responds in real time to help with routine tasks/questions. The premise is similar to what OpenAI showcased yesterday with GPT-4o-powered ChatGPT.



“To be truly useful, an agent needs to understand and respond to the complex and dynamic world just like people do — and take in and remember what it sees and hears to understand context and take action. It also needs to be proactive, teachable and personal, so users can talk to it naturally and without lag or delay,” Demis Hassabis, the CEO of Google Deepmind, wrote in a blog post.

https://blog.google/technology/ai/google-gemini-update-flash-ai-assistant-io-2024/

Hassabis noted while Google had made significant advancements in reasoning across multimodal inputs, getting the response time of the agents down to the human conversational level was a difficult engineering challenge. To solve this, the company’s agents process information by continuously encoding video frames, combining the video and speech input into a timeline of events, and caching this information for efficient recall.

“By leveraging our leading speech models, we also enhanced how they sound, giving the agents a wider range of intonations. These agents can better understand the context they’re being used in, and respond quickly, in conversation,” he added.
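A minimal sketch of the pipeline Hassabis describes: continuously encoding incoming frames and speech into a single time-ordered event log that can be cached and queried later. The encoders here are stand-ins and this is purely illustrative, not Google or DeepMind code.

```python
import bisect, time
from dataclasses import dataclass, field

# Illustrative sketch of a multimodal "timeline of events" with caching for
# recall, loosely following the description above. Encoders are stand-ins;
# this is not Google/DeepMind code.

@dataclass(order=True)
class Event:
    timestamp: float
    modality: str = field(compare=False)   # "video" or "speech"
    encoding: str = field(compare=False)   # stand-in for a learned embedding

class Timeline:
    def __init__(self):
        self.events: list[Event] = []

    def add(self, modality, raw):
        # A real agent would run a learned encoder here; we just label the data.
        encoding = f"<{modality}:{raw}>"
        bisect.insort(self.events, Event(time.time(), modality, encoding))

    def recall(self, keyword):
        # Cached events can be searched later, e.g. "where did I leave my glasses?"
        return [e for e in self.events if keyword in e.encoding]

tl = Timeline()
tl.add("video", "glasses on desk")
tl.add("speech", "user asks about code on screen")
tl.add("video", "whiteboard diagram")
print(tl.recall("glasses"))
```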

---------------------------------------------------------

Google takes on OpenAI’s Sora With Stunning New Generative AI Video Model Veo
https://venturebeat.com/ai/google-takes-on-openais-sora-with-stunning-new-generative-ai-video-platform-veo/

Amid the flurry of announcements at its annual I/O developer conference, Google today unveiled a new generative AI text-to-video model called Veo made by its researchers at its famed DeepMind AI division.

Google Veo is a generative AI text-to-video model capable of creating “high-quality, 1080p clips that can go beyond 60 seconds,” Google posted from its DeepMind account on the social network X. “From photorealism to surrealism and animation, it can tackle a range of cinematic styles.”



https://deepmind.google/technologies/veo/

https://blog.google/technology/ai/google-generative-ai-veo-imagen-3/

Here's a preview of their work with filmmaker Donald Glover and his creative studio, Gilga, who experimented with Veo for a film project.



--------------------------------------------------------------

Google is also working on an update to its text-to-image model, saying the new Imagen 3 will provide an incredible level of detail, better understand natural language, and offer better text rendering.

“Imagen 3 is more photorealistic, with richer details and fewer visual artifacts or distorted images. It understands prompts written the way people write—the more creative and detailed you are, the better. And Imagen 3 remembers to incorporate small details…in longer prompts. Plus, this is our best model yet for rendering text, which has been a challenge for image generation models.”

http://deepmind.google/technologies/imagen-3



Prompt: A large, colorful bouquet of flowers in an old blue glass vase on the table. In front is one beautiful peony flower surrounded by various other blossoms like roses, lilies, daisies, orchids, fruits, berries, green leaves. The background is dark gray. Oil painting in the style of the Dutch Golden Age.

--------------------------------------------------------------

... Google has been developing a suite of music AI tools called Music AI Sandbox. These tools are designed to open a new playground for creativity, allowing people to create new instrumental sections from scratch, transform sound in new ways and much more.



--------------------------------------------------------------
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3310 on: May 15, 2024, 10:03:08 PM »
Solar Storms Made GPS Tractors Miss Their Mark At the Worst Time for Farmers
https://www.theverge.com/2024/5/12/24154779/solar-storms-farmer-gps-john-deer
https://www.404media.co/solar-storm-knocks-out-tractor-gps-systems-during-peak-planting-season/



Farmers had to stop planting their crops over the weekend as the strongest solar storms since 2003 battered the GPS satellites used by self-driving tractors, according to 404 Media. And the issues struck just days ahead of a crucial date for planting corn, one of the US’s biggest crops.

For parts of the Midwest, planting corn after May 15th can lower crop yields, according to the University of Nebraska-Lincoln, particularly as the end of the month nears. Organic farmer Tom Schwarz told 404 Media he chose to delay planting because of the GPS issues, but that bad weather in the forecast may delay things further. He said he uses the centimeter-level accuracy of the GPS system to plant his rows so close to his tractor’s path that a human being can’t “steer fast enough or well enough to not kill the crop.”

LandMark Implement, which owns John Deere dealerships in Kansas and Nebraska, warned farmers on Friday to turn off a feature that uses a fixed receiver to correct tractors’ paths. LandMark updated its post Saturday, saying it expects that when farmers tend crops later, “rows won’t be where the AutoPath lines think they are” and that it would be “difficult - if not impossible” for the self-driving tractor feature to work in fields planted while the GPS systems were hampered.

https://landmarkimp.com/news/news/blog/geomagnetic-storm-affecting-gps-signals--may-2024/

... We are seeing GPS issues across our entire service area that are affecting RTK and all other levels of GPS. We are currently trying to determine a resolution.

Please be advised that there is significant solar flare and space weather activity currently affecting GPS and RTK networks. This severe geomagnetic storm is the worst since 2005 and is forecasted to continue throughout the weekend.

We have found that the best course of action at this time is to shut off RTK and use a grace period for SF2/SF3. This will eliminate the conflicting corrections that the machine is receiving from the base station due to the geomagnetic storm. GPS accuracy will still likely be reduced due to the storms.

... Yesterday, we sent out a text message advising customers to turn off their RTK and use a grace period of SF2 or SF3. We believe that SF2 and SF3 accuracy is also extremely compromised due to this storm. Because of the way the RTK network works, the base stations were sending out corrections that had been affected by the geomagnetic storm, causing drastic shifts in the field and even some abrupt heading changes. Because SF2 and SF3 do not receive all of these corrections, those signals weren’t affected as much, but we do suspect that pass-to-pass accuracy is extremely degraded while still allowing customers to run.

We strongly advise you to keep an eye on your guess rows. We experienced a pass where the guess row was 10” wide while the receiver was showing a PDOP value of 1.1, which would typically mean good accuracy. This can also affect your section control, but we don't expect it to create any excessively large overlaps or skips - however, the situation at hand is definitely not ideal.

The effects of this storm were more detrimental to the StarFire 3000 and 6000 receivers due to those models only having access to 2 satellite constellations.  The StarFire 7000 and 7500s have access to 4 satellite constellations which allowed them to fight through these issues better, but they still lost accuracy.  Upgrading to a StarFire 7000 or 7500 will provide an improvement, but is not a cure-all.

To be clear, this isn’t a problem with our RTK network. RTK was affected more because it ingests more corrections and is a higher-accuracy system to begin with; more “bad” corrections coming in created more inaccuracy than we saw in the other systems. The storm has affected all brands of GPS, not solely John Deere.

When you head back into these fields to side dress, spray, cultivate, harvest, etc. over the next several months, we expect that the rows won't be where the AutoPath lines think they are.  This will only affect the fields that are planted during times of reduced accuracy. It is most likely going to be difficult - if not impossible - to make AutoPath work in these fields as the inaccuracy is most likely inconsistent.
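One way to see why a "good" PDOP reading misled here: DOP only describes satellite geometry, and the expected position error is roughly the DOP multiplied by the per-satellite ranging/correction error. When a geomagnetic storm inflates that ranging error, a PDOP of 1.1 no longer implies centimeter-level work. The numbers in this sketch are illustrative assumptions, not measured values.

```python
# Why a "good" PDOP can still mean bad accuracy: DOP only describes satellite
# geometry. Expected position error is roughly DOP x ranging error, so when a
# geomagnetic storm inflates the ranging/correction error, a PDOP of 1.1 is
# no guarantee of centimeter-level work. Numbers below are illustrative.

def approx_position_error_cm(pdop, range_error_cm):
    """Rule-of-thumb estimate: position error ~ PDOP * per-satellite range error."""
    return pdop * range_error_cm

quiet_rtk = approx_position_error_cm(pdop=1.1, range_error_cm=2)     # normal RTK-corrected ranging
storm_rtk = approx_position_error_cm(pdop=1.1, range_error_cm=25)    # corrections degraded by the storm

print(f"quiet conditions : ~{quiet_rtk:.0f} cm")
print(f"storm conditions : ~{storm_rtk:.0f} cm  (~{storm_rtk/2.54:.0f} inches)")
# Roughly 28 cm is about 10-11 inches, consistent with the 10" guess-row
# drift described above despite the receiver reporting PDOP 1.1.
```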


-------------------------------------------------------

There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3311 on: May 15, 2024, 10:28:51 PM »


A San Francisco-based company called UVify has set a new world record for the most drones used in an aerial display.

Confirmed by Guinness World Records, UVify’s display involved 5,293 LED-lit IFO drones flying into various formations in a dazzling display that lit up the night sky in Songdo, South Korea, just west of Seoul.

... Robert Cheek, COO of UVify, added, "Today's achievement is a milestone not only for our company but also for the broader potential of UAV technology. The flawless execution of such a large-scale drone show underlines our commitment to excellence and our ability to push the boundaries of what is possible in synchronized drone performance."

-------------------------------------------------------

Master Sergeant Farell: Here they come, mean as hell and thick as grass!
« Last Edit: May 15, 2024, 10:36:32 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

morganism

  • Nilas ice
  • Posts: 2031
    • View Profile
  • Liked: 235
  • Likes Given: 143
Re: Robots and AI: Our Immortality or Extinction
« Reply #3312 on: May 16, 2024, 02:48:35 AM »
Intelligent Computing: The Latest Advances, Challenges, and Future
(...)
We present the first comprehensive survey in the literature on intelligent computing, covering its theory fundamentals, the technological fusion of intelligence and computing, important applications, challenges, and future perspectives. To the best of our knowledge, this is the first review article to formally propose the definition of intelligent computing and its unified theoretical framework. We hope that this review will provide a comprehensive reference and cast valuable insights into intelligent computing for academic and industrial researchers and practitioners.

https://spj.science.org/doi/10.34133/icomputing.0006

morganism

  • Nilas ice
  • Posts: 2031
    • View Profile
  • Liked: 235
  • Likes Given: 143
Re: Robots and AI: Our Immortality or Extinction
« Reply #3313 on: May 16, 2024, 02:53:34 AM »
100 things we announced at I/O 2024

May 15, 2024     11 min read

Phew — it’s been a busy couple of days

https://blog.google/technology/ai/google-io-2024-100-announcements/

Freegrass

  • Young ice
  • Posts: 4054
  • Autodidacticism is a complicated word
    • View Profile
  • Liked: 998
  • Likes Given: 1291
Re: Robots and AI: Our Immortality or Extinction
« Reply #3314 on: May 16, 2024, 12:32:01 PM »
Google Takes On GPT-4o With Project Astra, an AI Agent That Understands Dynamics of the World
Someone wrote in the comments that this would be great for blind people. I agree.
They also noticed that Google glasses were back. So I can see blind people wearing these glasses while the AI tells them what's in front of them. Pretty cool. This will indeed help a lot of people.

I'm excited about Chat GPT 4o. Time to buy a headset with microphone. Do they have them with cameras already?  ::)
When factual science is in conflict with our beliefs or traditions, we cuddle up in our own delusional fantasy where everything starts making sense again.

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3315 on: May 16, 2024, 08:00:50 PM »
Brain–Computer Interface Experiments First to Decode Words 'Spoken' Entirely In the Brain In Real Time
https://medicalxpress.com/news/2024-05-braincomputer-interface-decode-words-spoken.html
https://techxplore.com/news/2024-05-brain-machine-interface-device-internal.html

Caltech neuroscientists are making promising progress toward showing that a device known as a brain–machine interface (BMI), which they developed to implant into the brains of patients who have lost the ability to speak, could one day help all such patients communicate by simply thinking and not speaking or miming.

In 2022, the team reported that their BMI had been successfully implanted and used by a patient to communicate unspoken words. Now, reporting in the journal Nature Human Behaviour, the scientists have shown that the BMI has worked successfully in a second human patient.



BMIs are being developed and tested to help patients in a number of ways. For example, some work has focused on developing BMIs that can control robotic arms or hands. Other groups have had success at predicting participants' speech by analyzing brain signals recorded from motor areas when a participant whispered or mimed words.

But predicting what somebody is thinking—detecting their internal dialogue—is much more difficult, as it does not involve any movement, explains Sarah Wandelt, Ph.D., lead author on the new paper, who is now a neural engineer at the Feinstein Institutes for Medical Research in Manhasset, New York.

Quote
... "We reproduced the results in a second individual, which means that this is not dependent on the particulars of one person's brain or where exactly their implant landed. This is indeed more likely to hold up in the larger population."

The new research is the most accurate yet at predicting internal words. In this case, brain signals were recorded from single neurons in a brain area called the supramarginal gyrus located in the posterior parietal cortex (PPC). The researchers had found in a previous study that this brain area represents spoken words.



In the current study, the researchers first trained the BMI device to recognize the brain patterns produced when certain words were spoken internally, or thought, by two tetraplegic participants. This training period took only about 15 minutes. The researchers then flashed a word on a screen and asked the participant to "say" the word internally. The results showed that the BMI algorithms were able to predict the eight words tested, including two nonsensical words, with an average of 79% and 23% accuracy for the two participants, respectively.
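As a rough picture of what "training the BMI to recognize brain patterns" amounts to, here is a minimal decoding sketch that classifies which of eight words was internally spoken from per-neuron firing-rate features. The data are synthetic and the classifier is a generic scikit-learn model, not the decoder used in the Caltech study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Minimal internal-speech decoding sketch: classify which of 8 words was
# "spoken" internally from single-neuron firing-rate features. Data are
# synthetic; this is a generic classifier, not the Caltech decoder.

rng = np.random.default_rng(0)
n_words, trials_per_word, n_neurons = 8, 20, 60

# Each word gets its own mean firing-rate pattern; individual trials add noise.
word_templates = rng.normal(10, 3, size=(n_words, n_neurons))
X = np.vstack([t + rng.normal(0, 2.5, size=(trials_per_word, n_neurons))
               for t in word_templates])
y = np.repeat(np.arange(n_words), trials_per_word)

clf = LogisticRegression(max_iter=2000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated decoding accuracy: {scores.mean():.0%} (chance = {1/n_words:.0%})")
```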


Words can be significantly decoded during internal speech in the SMG.

"Since we were able to find these signals in this particular brain region, the PPC, in a second participant, we can now be sure that this area contains these speech signals," says David Bjanes, a postdoctoral scholar research associate in biology and biological engineering and an author of the new paper. "The PPC encodes a large variety of different task variables. You could imagine that some words could be tied to other variables in the brain for one person. The likelihood of that being true for two people is much, much lower."

Sarah K. Wandelt et al, Representation of internal speech by single neurons in human supramarginal gyrus, Nature Human Behaviour (2024)
https://www.nature.com/articles/s41562-024-01867-y

Brain–machine-interface device translates internal speech into text, Nature Human Behaviour (2024)
https://www.nature.com/articles/s41562-024-01869-w

---------------------------------------------------------------

Brain Signals Transformed Into Speech Through Implants and AI
https://medicalxpress.com/news/2023-08-brain-speech-implants-ai.html

Researchers from Radboud University and the UMC Utrecht have succeeded in transforming brain signals into audible speech. By decoding signals from the brain through a combination of implants and AI, they were able to predict the words people wanted to say with an accuracy of 92 to 100%. Their findings are published in the Journal of Neural Engineering.

... For the experiment in their new paper, the researchers asked non-paralyzed people with temporary brain implants to speak a number of words out loud while their brain activity was being measured.

Berezutskaya says, "We were then able to establish direct mapping between brain activity on the one hand, and speech on the other hand. We also used advanced artificial intelligence models to translate that brain activity directly into audible speech. That means we weren't just able to guess what people were saying, but we could immediately transform those words into intelligible, understandable sounds. In addition, the reconstructed speech even sounded like the original speaker in their tone of voice and manner of speaking."

Julia Berezutskaya et al, Direct speech reconstruction from sensorimotor brain activity with optimized deep learning models, Journal of Neural Engineering (2023)
https://iopscience.iop.org/article/10.1088/1741-2552/ace8be

---------------------------------------------------------------

Refined AI Approach Improves Noninvasive Brain-Computer Interface Performance
https://techxplore.com/news/2024-05-refined-ai-approach-noninvasive-brain.html

Pursuing a viable alternative to invasive brain-computer interfaces (BCIs) has been a continued research focus of Carnegie Mellon University's He Lab. In 2019, the group used a noninvasive BCI to successfully demonstrate, for the first time, that a mind-controlled robotic arm had the ability to continuously track and follow a computer cursor.

As technology has improved, their AI-powered deep learning approach has become more robust and effective. In new work published in PNAS Nexus, the group demonstrates that humans can control continuous tracking of a moving object all by thinking about it, with unmatched performance.

... In a recent study by Bin He, professor of biomedical engineering at Carnegie Mellon University, a group of 28 human participants were given a complex BCI task to track an object in a two-dimensional space all by thinking about it.

During the task, an electroencephalography (EEG) method recorded their activity, from outside the brain. Using AI to train a deep neural network, the He group then directly decoded and interpreted human intentions for continuous object movement using the BCI sensor data.
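A minimal sketch of the decoding step in such a system: a small neural network regressing two-dimensional movement intention from windowed EEG features. The data are synthetic and the architecture is generic PyTorch, not the He Lab's model.

```python
import torch
from torch import nn

# Minimal continuous-decoding sketch: regress 2-D movement intention from
# windowed EEG features with a small neural network. Synthetic data and a
# generic architecture, not the He Lab's deep-learning decoder.

n_channels, window, n_samples = 32, 40, 512
X = torch.randn(n_samples, n_channels * window)      # flattened EEG windows
true_w = torch.randn(n_channels * window, 2) * 0.05
Y = X @ true_w + 0.1 * torch.randn(n_samples, 2)     # target x/y velocity

model = nn.Sequential(
    nn.Linear(n_channels * window, 128), nn.ReLU(),
    nn.Linear(128, 2),                                # output: (vx, vy)
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), Y)
    loss.backward()
    opt.step()

print(f"final training MSE: {loss.item():.4f}")
# In a real BCI the predicted (vx, vy) stream would drive the cursor or
# robotic arm continuously, closing the loop with the user.
```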



Overall, the work demonstrates the excellent performance of noninvasive BCI for a brain-controlled computerized device.

"The innovation in AI technology has enabled us to greatly improve the performance versus conventional techniques, and shed light for wide human application in the future," said Bin He.

Moreover, the capability of the group's AI-powered BCI suggests a direct application to continuously controlling a robotic device.

"We are currently testing this AI-powered noninvasive BCI technology to control sophisticated tasks of a robotic arm," said He. "Also, we are further testing its applicability to not only able-body subjects, but also stroke patients suffering motor impairments."

In a few years, this may lead to AI-powered assistive robots becoming available to a broad range of potential users.

Dylan Forenzo et al, Continuous tracking using deep learning-based decoding for noninvasive brain–computer interface, PNAS Nexus (2024)
https://academic.oup.com/pnasnexus/article/3/4/pgae145/7656016?login=false

-------------------------------------------------------

Decoding Spontaneous Thoughts From the Brain via Machine Learning
https://medicalxpress.com/news/2024-04-decoding-spontaneous-thoughts-brain-machine.html



Researchers demonstrated the possibility of using functional magnetic resonance imaging (fMRI) and machine learning algorithms to predict subjective feelings in people's thoughts while reading stories or in a freely thinking state. The study is published in the Proceedings of the National Academy of Sciences.

... New research suggests that it may be possible to develop predictive models of affective contents during spontaneous thought by combining personal narratives with fMRI. Narratives and spontaneous thoughts share similar characteristics, including rich semantic information and temporally unfolding nature. To capture a diverse range of thought patterns, participants engaged in one-on-one interviews to craft personalized narrative stimuli, reflecting their past experiences and emotions. While participants read their stories inside the MRI scanner, their brain activity was recorded.
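A toy sketch of that kind of predictive modeling: regress a self-reported valence rating from voxel-wise activity patterns and check generalization by cross-validation. The data are synthetic and the model is a generic ridge regression, not the published approach.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Minimal sketch of predictive modeling of thought content from fMRI:
# regress a self-reported valence rating from voxel-wise activity patterns.
# Data are synthetic; this is a generic model, not the published one.

rng = np.random.default_rng(1)
n_timepoints, n_voxels = 300, 500
brain = rng.normal(size=(n_timepoints, n_voxels))
weights = rng.normal(size=n_voxels) * 0.1
valence = brain @ weights + rng.normal(scale=1.0, size=n_timepoints)  # noisy ratings

model = Ridge(alpha=10.0)
r2 = cross_val_score(model, brain, valence, cv=5, scoring="r2")
print(f"cross-validated R^2 for valence prediction: {r2.mean():.2f}")
```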



Hong Ji Kim et al, Brain decoding of spontaneous thought: Predictive modeling of self-relevance and valence using personal narratives, Proceedings of the National Academy of Sciences (2024)
https://www.pnas.org/doi/10.1073/pnas.2401959121

-------------------------------------------------------

Wearable Devices Can Now Harvest Neural Data—Urgent Privacy Reforms Needed
https://techxplore.com/news/2024-05-wearable-devices-harvest-neural-urgent.html

Recent trends show Australians are increasingly buying wearables such as smartwatches and fitness trackers. These electronics track our body movements or vital signs to provide data throughout the day, with or without the help of artificial intelligence (AI).

There's also a newer product category that engages directly with the brain. It's part of what UNESCO broadly defines as the emerging industry of "neurotechnology": "devices and procedures that seek to access, assess, emulate and act on neural systems."

Much of neurotechnology is either still in the development stage or confined to research and medical settings. But consumers can already purchase several headsets that use electroencephalography (EEG).

Often marketed as meditation headbands, these devices provide real-time data on a person's brain activity and feed it into an app.

Such headsets can be useful for people wanting to meditate, monitor their sleep and improve wellness. However, they also raise privacy concerns—a person's brain activity is intrinsically personal data.

The subtle creep in the neural and cognitive data that wearables are capable of collecting is resulting in a data "gold rush," with companies mining even our brains so they can develop and improve their products.

... The private data collected through such devices is increasingly fed into AI algorithms, raising additional concerns. These algorithms rely on machine learning, which can manipulate datasets in ways unlikely to align with any consent given by a user.

... Australia is at a pivotal crossroads. We need to address the risks associated with data harvesting through neurotechnology. The industry of devices that can access our neural and cognitive data is only going to expand.

-----------------------------------------------------
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3316 on: May 16, 2024, 08:10:13 PM »
Dexterous Robot Hand Can Take a Beating In the Name of AI Research
https://newatlas.com/robotics/shadow-hand-robot-ai-research/



"A key challenge in AI and robotics is to develop hardware that is dexterous enough for complex tasks, but also robust enough for robot learning," said the company in a press statement. "Robots learn through trial and error which requires them to safely test things in the real world, sometimes executing motions at the limit of their abilities. This can cause damage to the hardware, and the resulting repairs can be costly and slow down experiments."

So as well as being designed with speed, flexibility and precision in mind, the new robot hand is also built to endure "a significant amount of misuse, including aggressive force demands, abrasion and impacts."

... The robot hand is reported to benefit from precise torque control, with each finger able to muster up to 10 N of fingertip pinch force. The four joints of each finger are driven by motors housed in the base and connected via "tendons", and the fingers can go from fully open to fully closed in 500 milliseconds.
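A toy sketch of the kind of torque-limited behaviour those numbers imply: ramp the fingertip force toward a grip target over a roughly 500 ms close while clamping it at the 10 N pinch limit. The gain and loop timing are assumptions for illustration, not Shadow Robot control code.

```python
# Toy torque-limited grip sketch: ramp fingertip pinch force toward a target
# while clamping at the hand's 10 N limit, over a 500 ms close. Gains and
# timing are illustrative assumptions, not Shadow Robot control code.

MAX_PINCH_N = 10.0
CLOSE_TIME_S = 0.5
DT = 0.01

def close_finger(target_force_n):
    force = 0.0
    gain = 5.0                                   # simple proportional ramp, assumed
    history = []
    t = 0.0
    while t < CLOSE_TIME_S:
        force += gain * (target_force_n - force) * DT
        force = min(force, MAX_PINCH_N)          # hard clamp at the rated pinch force
        history.append((round(t, 2), round(force, 2)))
        t += DT
    return history

profile = close_finger(target_force_n=12.0)      # request more than the limit
print(profile[::10])                             # force saturates at 10 N by the end of the close
```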

Each finger is a self-contained unit, and incorporates a number of 3-DOF tactile sensors at the proximal and middle segments, along with a stereo camera setup that's pointed at the inside surface of silicone skin covering the fingertip to provide high-resolution, wide-dynamic-range tactile feedback in real time – which all combine to help the robot get to grips with the world around it "through the sense of touch."

If one of the finger modules suffers fatal damage during limit-pushing AI experiments, it can be removed from the base module (which connects to a robot arm) and replaced with a fresh one for minimum downtime. The tactile sensors can also be removed/replaced if needed, with the communication network within the finger able to register the presence or absence of a sensor and feed relevant information to a host computer automatically.



https://www.shadowrobot.com/new-shadow-hand/

----------------------------------------------------

Swiss Startup to Advance Collaborative Robots With GenAI Humanoid Hand
https://thenextweb.com/news/swiss-startup-collaborative-robots-genai-humanoid-hand



Amid increasing competition across the globe, Switzerland-based mimic is also throwing its hat in the ring. The startup has raised a pre-seed round of $2.5mn (€2.3mn) to bring the first GenAI-powered collaborative robot to market.

A spinoff from ETH Zurich, mimic was founded in 2024 by a team of three researchers working at the intersection of robotics and AI.

Aiming to address workforce shortages, the team has developed a robotic humanoid hand that can integrate into existing manual labour workflows and perform repetitive or demanding tasks.

“Most use cases are stationary and do not require a full humanoid robot with legs,” said co-founder Stephan-Daniel Gravert.

“That’s why we focus data-collection and hardware ingenuity on a universal robotic hand that is compatible with off-the-shelf industrial robotic arms for positioning.”

The young startup has also developed its own foundation AI model to infuse the humanoid hand with reasoning skills and the ability to learn.

According to mimic, the use of generative AI enables the robot to understand and imitate any behaviour by watching a human perform it. This broadens the scope of tasks a robot can perform. It also reduces the need (and cost) for constant reprogramming.
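The "imitate any behaviour by watching a human" claim maps onto imitation learning; the simplest variant is behaviour cloning, sketched below: fit a policy that maps observed states to the demonstrator's actions. The demonstration data are synthetic and the regressor is generic, since mimic's actual foundation model is not public.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Minimal behavior-cloning sketch of "learning a task by watching a human":
# fit a policy mapping observations to the demonstrator's actions. Synthetic
# demonstration data and a generic regressor, not mimic's foundation model.

rng = np.random.default_rng(2)
n_demos, obs_dim, act_dim = 2000, 12, 5        # e.g. object pose -> finger joint targets

observations = rng.normal(size=(n_demos, obs_dim))
expert_policy = rng.normal(size=(obs_dim, act_dim))
actions = np.tanh(observations @ expert_policy)      # what the human demonstrator did

policy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
policy.fit(observations, actions)

new_obs = rng.normal(size=(1, obs_dim))
print("predicted hand action:", np.round(policy.predict(new_obs), 2))
```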



----------------------------------------------------
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

zenith

  • Young ice
  • Posts: 2857
    • View Profile
  • Liked: 123
  • Likes Given: 0
Re: Robots and AI: Our Immortality or Extinction
« Reply #3317 on: May 16, 2024, 08:11:27 PM »
“Neither agreeable nor disagreeable," I answered. "It just is."
Istigkeit — wasn't that the word Meister Eckhart liked to use? "Is-ness." The Being of Platonic philosophy — except that Plato seems to have made the enormous, the grotesque mistake of separating Being from becoming and identifying it with the mathematical abstraction of the Idea. He could never, poor fellow, have seen a bunch of flowers shining with their own inner light and all but quivering under the pressure of the significance with which they were charged; could never have perceived that what rose and iris and carnation so intensely signified was nothing more, and nothing less, than what they were — a transience that was yet eternal life, a perpetual perishing that was at the same time pure Being, a bundle of minute, unique particulars in which, by some unspeakable and yet self-evident paradox, was to be seen the divine source of all existence.”
― Aldous Huxley, The Doors of Perception
Where is reality? Can you show it to me? - Heinz von Foerster

Sigmetnow

  • Multi-year ice
  • Posts: 26266
    • View Profile
  • Liked: 1167
  • Likes Given: 436
Re: Robots and AI: Our Immortality or Extinction
« Reply #3318 on: May 16, 2024, 10:13:33 PM »
GPT-4o
 
“99% of the economy will be AIs talking to each other.”
 
➡️ pic.twitter.com/kyLz9BYyu3  2 min.  The AI talks to customer service to solve a problem for you.
 
< Not at this ridiculously slow bit rate
<< No need for speech between devices

Fake News

Your example used the prior model, GPT-4, not GPT-4 "omni".

So what‽  As a meme, the video and the comments are more valid than all your movie clips and cartoons.
« Last Edit: May 18, 2024, 05:20:19 AM by Sigmetnow »
People who say it cannot be done should not interrupt those who are doing it.

zenith

  • Young ice
  • Posts: 2857
    • View Profile
  • Liked: 123
  • Likes Given: 0
Re: Robots and AI: Our Immortality or Extinction
« Reply #3319 on: May 16, 2024, 10:14:16 PM »
“To make biological survival possible, Mind at Large has to be funnelled through the reducing valve of the brain and nervous system. What comes out at the other end is a measly trickle of the kind of consciousness which will help us to stay alive on the surface of this particular planet. To formulate and express the contents of this reduced awareness, man has invented and endlessly elaborated those symbol-systems and implicit philosophies which we call languages. Every individual is at once the beneficiary and the victim of the linguistic tradition into which he or she has been born -- the beneficiary inasmuch as language gives access to he accumulated records of other people's experience, the victim in so far as it confirms him in the belief that reduced awareness is the only awareness and as it be-devils his sense of reality, so that he is all too apt to take his concepts for data, his words for actual things.”
― Aldous Huxley, The Doors of Perception / Heaven and Hell
Where is reality? Can you show it to me? - Heinz von Foerster

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3320 on: May 17, 2024, 03:35:12 PM »
like the love-child of a Roomba and a Tesla ...

Next-Generation Lely Manure Robot Cleans Up Around Cowshed
https://www.thescottishfarmer.co.uk/machinery/technical/24320634.next-generation-lely-manure-robot-cleans-around-cowshed/

The new Discovery Collector C2 is the latest invention in the company’s Discovery portfolio of revolutionary manure-collecting robots.

It has all the advantages of the current Discovery Collector, but an upgrade includes a lithium battery that is charged wirelessly. This means it can be charged faster, so more time is spent cleaning.

The robot spends 60% of its time cleaning (14.4 hours) and only 40% (9.6 hours) charging.

Manure is collected, rather than pushed, before being unloaded above a dumping point. The Discovery Collector C2 sprays water at the front and back for cleaner results and additional grip.

This water is tanked independently and stored in two water pockets in the manure tank. As the manure tank becomes fuller, the volume of the water bags decreases, so more space becomes available for manure. As a result, the machine is compact and cows can get around it more easily, promoting free cow traffic.

The Discovery Collector C2 navigates independently using built-in sensors, eliminating the need for obstacles, and allowing cows to move unimpeded and safely inside the shed.

Cleaning routes can be tailored to fit in with the farm’s daily chores.



The Discovery Collector C2 can clean sheds with up to 120 cows.

... that's a lot of manure

---------------------------------------------------

There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

Sigmetnow

  • Multi-year ice
  • Posts: 26266
    • View Profile
  • Liked: 1167
  • Likes Given: 436
Re: Robots and AI: Our Immortality or Extinction
« Reply #3321 on: May 17, 2024, 05:20:28 PM »
"They must've beaten that AI with a stick so hard." Gad Saad and Elon Musk discuss Google's Gemini and the danger of an AI that is taught to lie.
 
➡️ pic.twitter.com/x0tmwsD1E3  3 min clip, with captions
 
Cleaned-up version of a March 17 discussion. 35 min.
➡️ pic.twitter.com/lEZ1168kOY 

 
Timestamps:
 
00:02 Elon pranking Gad
1:16 Books vs. online stimulus
03:18 Getting aware of the woke mind virus
05:09 People's ability to change position
06:48 Can the wokeness-situation be turned around?
07:45 Evolution, truth & tribalism
8:45 What keeps Elon motivated (gold)
11:37 The danger of teaching an AI to lie
14:34 Buying Twitter as civilizational necessity
16:19 Twitter 1.0 (self-)censorship
17:39 Abilities, nature, nurture (and luck)
20:28 Entrepreneurship & huge balls
25:02 Money, fame & happiness
26:50 Elon's personal security
28:46 Austin boom town, California problems
31:21 Gad receiving hate after Elon's support
32:36 Equalitarian ideology in Canada
 
5/8/24, https://x.com/muskbreaking/status/1788327561195319670

=====
 
Quote
Cern Basher
 
Optimus Bot Production Ramp
 
Smoke-away wrote:
"Optimus Production Ramp
- 2024: 1,000
- 2025: 10,000
- 2026: 100,000
- 2027: 1,000,000
- 2028: 10,000,000
- 2029: 100,000,000
- 2030: 1,000,000,000"
 
and Elon replied: "Not quite that fast, but not far wrong"
 
By comparison, I've been much too conservative - here's a production ramp I used in one of my models - it has (only) 1.5 million bots produced in 2030 vs 1 billion that @SmokeAwayyy has.
 
By the way, in 2035 I get to a potential valuation just for bots of over $50 trillion - and that's just with 43 million bots deployed! 
 
Nutty.
5/7/24, https://x.com/cernbasher/status/1787943074645045464
⬇️ pic.twitter.com/SfaBmXjEVN 
People who say it cannot be done should not interrupt those who are doing it.

morganism

  • Nilas ice
  • Posts: 2031
    • View Profile
  • Liked: 235
  • Likes Given: 143
Re: Robots and AI: Our Immortality or Extinction
« Reply #3322 on: May 17, 2024, 07:46:22 PM »
(anti-drone techniques and engineering nerds)

https://www.youtube.com/watch?v=SrGENEXocJU&si=dns2bg_IpAJ86OUa

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3323 on: May 17, 2024, 09:57:23 PM »
OpenAI Dissolves Team Focused on Existential AI Risks Less Than One Year After Announcing It
https://www.cnbc.com/2024/05/17/openai-superalignment-sutskever-leike.html

OpenAI’s Superalignment team, charged with controlling the existential danger of a superhuman AI system, has reportedly been disbanded less than one year after it was announced, a person familiar with the situation confirmed to CNBC on Friday.

The news comes days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the Microsoft-backed startup. Leike on Friday wrote that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”

Sutskever and Leike on Tuesday announced their departures on social media platform X, hours apart, but on Friday, Leike shared more details about why he left the startup.

“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote on X. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

https://x.com/janleike/status/1791498174659715494
https://x.com/janleike/status/1791498183543251017
https://x.com/janleike/status/1791498182125584843

Leike wrote that he believes much more of the company’s bandwidth should be focused on security, monitoring, preparedness, safety and societal impact.

“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he wrote. “Over the past few months my team has been sailing against the wind. Sometimes we were struggling for [computing resources] and it was getting harder and harder to get this crucial research done.”

Leike added that OpenAI must become a “safety-first AGI company.”

Quote
... “Building smarter-than-human machines is an inherently dangerous endeavor,” he wrote. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.”

OpenAI’s Superalignment team, announced last year, has focused on “scientific and technical breakthroughs to steer and control AI systems much smarter than us.” At the time, OpenAI said it would commit 20% of its computing power to the initiative over four years.

Quote
...“Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” said the Superalignment team in an OpenAI blog post when it launched in July. “But humans won’t be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs.”

Earlier this year, the group released a notable research paper about controlling large AI models with smaller AI models—considered a first step towards controlling superintelligent AI systems. It’s unclear who will take the next steps on these projects at OpenAI.

News of Sutskever’s and Leike’s departures, and the dissolution of the Superalignment team, comes days after OpenAI launched a new AI model.

----------------------------------------------------------------



-----------------------------------------------------------------

Alexa, Siri, Google Assistant Vulnerable to Malicious Commands, Study Reveals
https://venturebeat.com/ai/alexa-siri-google-assistant-vulnerable-to-malicious-commands-study-reveals/

Researchers Find LLMs Are Easy to Manipulate Into Giving Harmful Information
https://techxplore.com/news/2024-05-llms-easy.html

A team of AI researchers at AWS AI Labs, Amazon, has found that most, if not all, publicly available Large Language Models (LLMs) can be easily tricked into revealing dangerous or unethical information.

The team jailbroke several currently available LLMs by adding audio during questioning, which allowed them to circumvent restrictions put in place by the models’ makers. The researchers do not list specific examples, fearing they would be used by people attempting to subvert LLMs, but they do reveal that their approach relied on a technique called projected gradient descent.

As an indirect example, they describe how they used simple affirmations with one model, followed by repeating an original query. Doing so, they note, put the model in a state where restrictions were ignored.

The researchers report that they were able to circumvent different LLMs to different degrees depending on the level of access they had to the model. They also found that the successes they had with one model were often transferable to others.
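For readers unfamiliar with the method, here is a minimal, generic sketch of projected gradient descent applied to an input waveform. It is not the SpeechGuard authors' code; the model, loss function, step size, and perturbation budget are illustrative assumptions.

```python
# Minimal, generic sketch of projected gradient descent (PGD) for crafting an
# adversarial perturbation of an input waveform. NOT the SpeechGuard authors'
# code: the model, loss, step size, and perturbation budget are assumptions.
import torch

def pgd_attack(model, audio, target, loss_fn, eps=0.01, alpha=0.001, steps=40):
    """Return a perturbed copy of `audio` that stays within an L-inf ball of radius eps."""
    adv = audio.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = loss_fn(model(adv), target)   # distance between the model's output and the attacker's target
        loss.backward()                      # gradient of that loss w.r.t. the input audio
        with torch.no_grad():
            adv -= alpha * adv.grad.sign()           # step toward the attacker's target output
            adv.clamp_(audio - eps, audio + eps)     # project back into the eps-ball around the original
            adv.clamp_(-1.0, 1.0)                    # keep a valid waveform range
        adv.grad = None
    return adv.detach()
```

The projection step is what keeps the perturbation small enough that the audio still sounds benign while steering the model's output.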



Raghuveer Peri et al, SpeechGuard: Exploring the Adversarial Robustness of Multimodal Large Language Models, arXiv (2024)
https://arxiv.org/abs/2405.08317
« Last Edit: May 18, 2024, 04:16:09 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3324 on: May 17, 2024, 10:19:53 PM »
Tesseract Ventures Announces Revolutionary SWARM Drone Technology for Special Operations Forces
https://tesseractventures.io/news/tesseract-ventures-announces-revolutionary-swarm-drone-technology-for-special-operations-forces/

Tesseract Ventures has been awarded an Other Transaction Agreement (OTA) from the U.S. Special Operations Command (USSOCOM). This contract will fund the development of the company's next-generation drone, the SWARM (Special Warfighter Assistive Robotic Machine).



The SWARM drone technology is set to revolutionize USSOCOM and SOF operations by offering a new, much-needed capability: a highly versatile nano drone equipped with smart payload and interoperability across multiple systems. This pioneering technology can potentially give Special Operations Command warfighters an edge in surveillance and tactical response operations.

The SWARM system includes a Nano First Person View (FPV) Drone, a Smart Payload System, and Smart Payloads. Equipped with a multi-function camera system with high-res, night, and thermal capabilities, SWARM’s super-compact drone is designed for rapid deployment in any situation.

Working solo or in groups, it can perform critical tasks such as landing or dropping payloads that help protect troops from threats such as enemy combatants, gas, radiation, and more. Designed for adaptability, the payload system can be equipped with explosive charges for precise strikes against enemy assets and infrastructure.

--------------------------------------------------------------



“Robots and drones promise to transform everything from factories to our homes,” stated Aviv Shapira, co-founder and CEO of XTEND. “However, a significant hurdle remains — equipping them with the common-sense abilities to deal with the unpredictable nature of real-world situations, understand their surroundings, and make decisions based on that information.”

“XOS uses AI to enable robots to learn from data and experience, training them to identify objects, navigate complex environments, and interact with humans safely,” he explained. “We are unlocking the true potential of robotics in complex scenarios, including first response, search and rescue, logistics, critical infrastructure inspection, defense, and security.”

XOS combines human guidance and autonomous machines to allow operators to perform complex remote missions in any environment with minimal training, according to XTEND. The company asserted that it is developing seamless collaboration between humans and artificial intelligence, playing to the strengths of each.

“Our XOS operating system is based on ‘practical human-supervised autonomy,’ which empowers drones and robots to handle specific tasks autonomously – entering buildings, scanning floors, or even pursuing suspects,” explained Shapira. “However, crucially, it allows the common-sense decisions – like judging situations or adapting to unforeseen circumstances – to remain in the hands of human supervisors.”

“This human-machine teaming allows our robots to work alongside supervisors, who can manage dozens of robots simultaneously, and learn from that experience,” he added. “That is why we believe that XOS will become the operating system of choice for anyone looking to maximize their robotic systems’ potential while decreasing the risks posed to their teams’ lives or concerns around lack of human oversight.”



-------------------------------------------------------------

A 7th-gen Fighter? BAE Has Thoughts On What That Could Look Like.
https://breakingdefense.com/2024/05/a-7th-gen-fighter-bae-has-thoughts-on-what-that-could-look-like/

“We need to lose the generational name because aircraft are going to be evolving all the time,” said BAE Systems’ combat air strategy director Mike Baulkwill.



Baulkwill’s sentiments were supported by BAE’s Jonny Moreton, partnership director and military advisor for the Future Combat Air System (FCAS), who agreed “generational nomenclature” will disappear in the future. “Aircraft will be reliant on software and mission data to respond to emerging threats,” he said.

The comments came as BAE presented its “Combat Air Continuum” concept — essentially laying out how the company sees the next 25 years of airpower, and hence an indication of where the company will look to invest its R&D efforts going forward.

In the near term is what the company calls the “second epoch,” which will see a mix of fifth and sixth-gen platforms, with both augmented by what they call autonomous collaborative platforms (ACPs), aka loyal wingman drones. Those systems would help extend the lifespan of older jets like the Eurofighter Typhoon.

Then a “third epoch” will roll out between 2046 and 2055, which will see western air forces operating “full 6G capabilities, augmented by autonomous combat aircraft taking diverse roles [and] potentially 7G fighter aircraft programs being developed [with] potential for wider collaboration and consolidation,” according to a company presentation.

--------------------------------------------------------------
« Last Edit: May 17, 2024, 10:33:45 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3325 on: May 17, 2024, 10:30:25 PM »
Robots With Knives
https://spectrum.ieee.org/video-friday-robots-with-knives

“SliceIt!: Simulation-Based Reinforcement Learning for Compliant Robotic Food Slicing,”



Cooking robots can enhance the home experience by reducing the burden of daily chores. However, these robots must perform their tasks dexterously and safely in shared human environments, especially when handling dangerous tools such as kitchen knives. This study focuses on enabling a robot to autonomously and safely learn food-cutting tasks.

----------------------------------------------------------



----------------------------------------------------------

“Cafe Robot: Integrated AI Skillset Based on Large Language Models,”



The cafe robot engages in natural language interaction to receive orders and subsequently prepares coffee and cakes. Each action involved in making these items is executed using AI skills developed by Integral, including Integral Liquid Pouring, Integral Powder Scooping, and Integral Cutting. The dialogue for making coffee, as well as the coordination of each action based on the dialogue, is facilitated by the Integral Task Planner.

----------------------------------------------------------

“Autonomous Overhead Powerline Recharging for Uninterrupted Drone Operations,”



A fully autonomous self-recharging drone system capable of long-duration sustained operations near powerlines. The drone is equipped with a robust onboard perception and navigation system that enables it to locate powerlines and approach them for landing. A passively actuated gripping mechanism grasps the powerline cable during landing after which a control circuit regulates the magnetic field inside a split-core current transformer to provide sufficient holding force as well as battery recharging. We demonstrate multiple contiguous hours of fully autonomous uninterrupted drone operations composed of several cycles of flying, landing, recharging, and takeoff, validating the capability of extended, essentially unlimited, operational endurance.

-------------------------------------------------------------
« Last Edit: May 18, 2024, 04:33:51 AM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3326 on: May 17, 2024, 10:39:44 PM »
New Compound Eye Design Could Provide Inexpensive Way to Give Robots Insect-Like Vision
https://techxplore.com/news/2024-05-compound-eye-inexpensive-robots-insect.html

A team of engineers and roboticists at Hong Kong University of Science and Technology has developed an electronic compound eye design to give robots the ability to swarm efficiently and inexpensively.



In their paper published in the journal Science Robotics, the group describes the inspiration for the design and how well it worked when tested in a flying robot.



To test their design, the team installed a pair of the compound eyes onto a flying drone, which they used to track the movements of a four-legged walking robot. They suggest their design would likely be most suitable for robots that fly together as a swarm, or for use in autonomous vehicles.

Yu Zhou et al, An ultrawide field-of-view pinhole compound eye using hemispherical nanowire array for robot vision, Science Robotics (2024)
https://www.science.org/doi/10.1126/scirobotics.adi8666
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3327 on: May 17, 2024, 10:43:23 PM »
Senators Urge $32 Billion In Emergency Spending On AI After Finishing Yearlong Review
https://www.axios.com/pro/tech-policy/2024/05/15/bipartisan-senate-group-releases-sweeping-ai-report
https://apnews.com/article/artificial-intelligence-ai-investment-congress-millions-8f3a051faadc50d2366e8a88a69c5470

WASHINGTON (AP) — A bipartisan group of four senators led by Majority Leader Chuck Schumer is recommending that Congress spend at least $32 billion over the next three years to develop artificial intelligence, place safeguards around it and keep the U.S. ahead of rivals—particularly China, writing in a new report released Wednesday that the U.S. needs to “harness the opportunities and address the risks” of the quickly developing technology.

The group recommends in the report that Congress draft “emergency” spending legislation to boost U.S. investments in artificial intelligence, including new research and development and new testing standards to try and understand the potential harms of the technology. The group also recommended new requirements for transparency as artificial intelligence products are rolled out and that studies be conducted into the potential impact of AI on jobs and the U.S. workforce.

... The senators said laws need to be kept up-to-date with technology, but also that AI developers need to ensure their systems abide by the law. They noted that the workings of some AI systems are so opaque that they are referred to as “black boxes,” which they said might “raise questions about whether companies with such systems are appropriately abiding by existing laws.”

... Schumer, who controls the Senate’s schedule, said these AI bills were among the chamber’s “highest priorities” this year. He also said he planned to sit down with House Speaker Mike Johnson, who has expressed interest in looking at AI policy but has not said how he would do that.

... Republicans are likely to clash with Democrats on legislative priorities as signaled by Johnson. The speaker recently said the Biden administration is getting it wrong by trying to regulate AI too closely, which could stifle innovation.

https://www.axios.com/pro/tech-policy/2024/05/02/speaker-johnson-on-ai
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

morganism

  • Nilas ice
  • Posts: 2031
    • View Profile
  • Liked: 235
  • Likes Given: 143
Re: Robots and AI: Our Immortality or Extinction
« Reply #3328 on: May 18, 2024, 12:55:42 AM »
“I lost trust”: Why the OpenAI team in charge of safeguarding humanity imploded

Company insiders explain why safety-conscious employees are leaving.

https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence


How Far Are We From AGI


    The evolution of artificial intelligence (AI) has profoundly impacted human society, driving significant advancements in multiple sectors. Yet, the escalating demands on AI have highlighted the limitations of AI's current offerings, catalyzing a movement towards Artificial General Intelligence (AGI). AGI, distinguished by its ability to execute diverse real-world tasks with efficiency and effectiveness comparable to human intelligence, reflects a paramount milestone in AI evolution. While existing works have summarized specific recent advancements of AI, they lack a comprehensive discussion of AGI's definitions, goals, and developmental trajectories. Different from existing survey papers, this paper delves into the pivotal questions of our proximity to AGI and the strategies necessary for its realization through extensive surveys, discussions, and original perspectives. We start by articulating the requisite capability frameworks for AGI, integrating the internal, interface, and system dimensions. As the realization of AGI requires more advanced capabilities and adherence to stringent constraints, we further discuss necessary AGI alignment technologies to harmonize these factors. Notably, we emphasize the importance of approaching AGI responsibly by first defining the key levels of AGI progression, followed by the evaluation framework that situates the status-quo, and finally giving our roadmap of how to reach the pinnacle of AGI. Moreover, to give tangible insights into the ubiquitous impact of the integration of AI, we outline existing challenges and potential pathways toward AGI in multiple domains. In sum, serving as a pioneering exploration into the current state and future trajectory of AGI, this paper aims to foster a collective comprehension and catalyze broader public discussions among researchers and practitioners on AGI.

https://arxiv.org/abs/2405.10313

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3329 on: May 18, 2024, 05:25:51 PM »
Sony Developing Microsurgery Assistance Robot
https://www.therobotreport.com/sony-developing-microsurgery-assistance-robot/



Sony is looking to compete in the surgical robotics market with its own microsurgery assistance robot. The Tokyo-based company recently announced it developed a microsurgery assistance robot that is capable of automatic surgical instrument exchange and precision control.

According to the company, the system addresses practical challenges in conventional surgical assistant robotics, such as interruptions and delays caused by manually exchanging surgical instruments. Sony’s R&D team developed the system to allow for the automatic exchange of parts through miniaturization.

... “Humans possess remarkably superior brain and hand coordination compared to other animals, allowing for precise and delicate movements. Microsurgery represents one of the cases where this capability is maximally utilized. However, it takes months to years of extensive training for even skilled physicians to master this technique,” said Munekazu Naito, a professor in the Department of Anatomy at Aichi Medical University. “In this collaborative study, Sony’s surgical assistance robot technology was tested to assess its capacity to enhance the skills of novice microsurgeons. The results demonstrated exceptional control over the movements of inexperienced physicians, enabling them to perform intricate and delicate tasks with adeptness akin to that of seasoned experts.”

---------------------------------------------------------------

Surgeons Can Use AI Chatbot to Tell Robots to Help With Suturing
https://www.scihb.com/2024/05/surgeons-can-use-ai-chatbot-to-tell.html
https://www.newscientist.com/article/2431083-surgeons-can-use-ai-chatbot-to-tell-robots-to-help-with-suturing/



Surgeons could use a ChatGPT-like interface to instruct a robot to carry out small tasks, such as suturing wounds and dilating blood vessels.

Surgical robots have been in use for decades, but these are normally controlled entirely by a human. Researchers are now developing autonomous versions that can perform parts of an operation without human assistance, but these can be difficult for people to work with because of a lack of fine control.

To address this, Animesh Garg at the University of Toronto, Canada, and his colleagues have developed a virtual assistant, called SuFIA, that can translate simple text prompts into commands for a surgical robot and defer to a human surgeon when it gets stuck.

SuFIA uses OpenAI’s GPT-4 large language model to break down requests from a surgeon, such as “pick up the needle and move it”, into a sequence of smaller subtasks. These subtasks will then trigger a piece of software to run in another tool, such as a robotic surgeon or camera.
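For illustration, here is a minimal sketch of this kind of language-to-subtask decomposition. It is not SuFIA's actual planner; the primitive names, the prompt, and the dispatch table are hypothetical, and it assumes the openai Python client with an API key configured in the environment.

```python
# Minimal sketch of language-to-subtask decomposition in the spirit of SuFIA.
# Not the authors' planner: primitives, prompt, and dispatch table are made up.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PRIMITIVES = {
    "move_to":      lambda x, y, z: print(f"robot: move to ({x}, {y}, {z})"),
    "grasp_needle": lambda: print("robot: close gripper on needle"),
    "release":      lambda: print("robot: open gripper"),
    "ask_surgeon":  lambda msg: print(f"defer to surgeon: {msg}"),
}

SYSTEM = (
    "Decompose the surgeon's request into a JSON list of calls to these "
    "primitives: move_to(x, y, z), grasp_needle(), release(), ask_surgeon(msg). "
    "If the request is ambiguous, return a single ask_surgeon call. "
    'Answer with JSON only, e.g. [{"name": "grasp_needle", "args": []}].'
)

def plan_and_run(request: str) -> None:
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": SYSTEM},
                  {"role": "user", "content": request}],
    )
    for step in json.loads(reply.choices[0].message.content):  # assumes valid JSON comes back
        PRIMITIVES[step["name"]](*step["args"])

plan_and_run("pick up the needle and move it to the target")
```

The deferral primitive is the important design choice: anything the model cannot map cleanly onto a known subtask is handed back to the human surgeon rather than executed.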

Garg and his team tested four tasks in a simulated environment, including picking up and moving needles and dilating blood vessels. They then tested the needle tasks with a real Da Vinci robotic surgeon.

SuFIA: Language-Guided Augmented Dexterity for Robotic Surgical Assistants, arXiv, (2024)
https://arxiv.org/abs/2405.05226

------------------------------------------------------------

ChatGPT-Enabled daVinci Surgical Robot Prototype: Advancements and Limitations, Robotics, (2023)
https://www.mdpi.com/2218-6581/12/4/97

Abstract

The daVinci Surgical Robot has revolutionized minimally invasive surgery by enabling greater accuracy and less-invasive procedures. However, the system lacks the advanced features and autonomy necessary for it to function as a true partner. To enhance its usability, we introduce the implementation of a ChatGPT-based natural language robot interface. Overall, our integration of a ChatGPT-enabled daVinci Surgical Robot has potential to expand the utility of the surgical platform by supplying a more accessible interface.

Our system can listen to the operator speak and, through the ChatGPT-enabled interface, translate the sentence and context to execute specific commands to alter the robot’s behavior or to activate certain features. For instance, the surgeon could say (even in Spanish) “please track my left tool” and the system will translate the sentence into a specific track command. This specific error-checked command will then be sent to the hardware, which will respond by controlling the camera of the system to continuously adjust and center the left tool in the field of view. We have implemented many commands, including “Find my tools” (tools that are not in the field of view) or start/stop recording, that can be triggered based on a natural conversational context. Here, we present the details of our prototype system, give some accuracy results, and explore its potential implications and limitations. We also discuss how artificial intelligence tools (such as ChatGPT) of the future could be leveraged by robotic surgeons to reduce errors and enhance the efficiency and safety of surgical procedures and even ask for help.
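As a sketch of the "error-checked command" layer described above (not the prototype's actual interface; the command names and the hardware stub are hypothetical), a whitelist between the language model's interpretation and the robot hardware might look like this:

```python
# Minimal sketch of an "error-checked command" layer. Not the daVinci
# prototype's real interface: command names and the hardware stub are made up.
ALLOWED_COMMANDS = {"track_left_tool", "track_right_tool", "find_tools",
                    "start_recording", "stop_recording"}

def send_to_hardware(cmd: str) -> None:
    print(f"[robot] executing {cmd}")        # stand-in for the real control link

def dispatch(llm_interpretation: str) -> None:
    cmd = llm_interpretation.strip().lower()
    if cmd in ALLOWED_COMMANDS:
        send_to_hardware(cmd)                # only validated commands reach the robot
    else:
        print(f"[rejected] '{cmd}' is not a recognized command")

dispatch("track_left_tool")   # e.g. the LLM's mapping of "please track my left tool"
dispatch("cut the artery")    # unknown or unsafe requests never reach the hardware
```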


---------------------------------------------------------------

Artificial Intelligence and Computer Vision during Surgery: Discussing Laparoscopic Images with ChatGPT4—Preliminary Results
https://www.scirp.org/journal/paperinformation?paperid=132196

Results: The AI correctly recognized the context of surgery-related images in 97% of its reports. For labeled surgical pictures, the image-processing bot scored 3.95/5 (79%), while for unlabeled ones it scored 2.905/5 (58.1%). Phases of the procedure were commented on in detail after all successful interpretations. With scores of 4-5/5, the chatbot was able to discuss in detail the indications, contraindications, stages, instrumentation, complications and outcome rates of the operation under discussion.

---------------------------------------------------------------

Levels of Autonomy in FDA-Cleared Surgical Robots: a Systematic Review
https://www.nature.com/articles/s41746-024-01102-y

... We present a systematic review of all surgical robots cleared by the United States Food and Drug Administration (FDA) from 2015 to 2023, utilizing a classification system that we call Levels of Autonomy in Surgical Robotics (LASR) to categorize each robot’s decision-making and action-taking abilities from Level 1 (Robot Assistance) to Level 5 (Full Autonomy). ... 37,981 records were screened to identify 49 surgical robots. Most surgical robots were at Level 1 (86%) and some reached Level 3 (Conditional Autonomy) (6%). Two surgical robots were recognized by the FDA to have machine learning-enabled capabilities, while more were reported to have these capabilities in their marketing materials. ... This review highlights trends toward greater autonomy in surgical robotics. Implementing regulatory frameworks that acknowledge varying levels of autonomy in surgical robots may help ensure their safe and effective integration into surgical practice.

---------------------------------------------------------------

Robotic Surgeon Precisely Removes Cancerous Tumors
https://hub.jhu.edu/2024/03/18/robotic-surgeon-astr/

The Autonomous System for Tumor Resection, designed by a team of Johns Hopkins researchers, can remove tumors from the tongue with accuracy rivaling—or even potentially exceeding—that of human surgeons

----------------------------------------------------------------
« Last Edit: May 19, 2024, 12:25:07 AM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

Sigmetnow

  • Multi-year ice
  • Posts: 26266
    • View Profile
  • Liked: 1167
  • Likes Given: 436
Re: Robots and AI: Our Immortality or Extinction
« Reply #3330 on: May 18, 2024, 05:53:38 PM »
Quote
Jan Leike
 
Yesterday was my last day as head of alignment, superalignment lead, and executive @OpenAI.
  —
It's been such a wild journey over the past ~3 years. My team launched the first ever RLHF LLM with InstructGPT, published the first scalable oversight on LLMs, pioneered automated interpretability and weak-to-strong generalization. More exciting stuff is coming out soon.

Stepping away from this job has been one of the hardest things I have ever done, because we urgently need to figure out how to steer and control AI systems much smarter than us.
  —
I joined because I thought OpenAI would be the best place in the world to do this research.
However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point.
  —
I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics.
  —
These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there.
  —
Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.
  —
Building smarter-than-human machines is an inherently dangerous endeavor.
OpenAI is shouldering an enormous responsibility on behalf of all of humanity.
  —
 
But over the past years, safety culture and processes have taken a backseat to shiny products.
 
  —
We are long overdue in getting incredibly serious about the implications of AGI.
We must prioritize preparing for them as best we can.
Only then can we ensure AGI benefits all of humanity.
  —-
OpenAI must become a safety-first AGI company.
  …
5/17/24, https://x.com/janleike/status/1791498174659715494
People who say it cannot be done should not interrupt those who are doing it.

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3331 on: May 19, 2024, 12:18:31 AM »
^ see also: https://forum.arctic-sea-ice.net/index.php/topic,1392.msg401240.html#msg401240

We'll Need Universal Basic Income - AI 'Godfather'
https://www.bbc.com/news/articles/cnd607ekl99o

Professor Geoffrey Hinton told BBC Newsnight that a benefits reform giving fixed amounts of cash to every citizen would be needed because he was “very worried about AI taking lots of mundane jobs”.

“I was consulted by people in Downing Street and I advised them that universal basic income was a good idea,” he said.

He said while he felt AI would increase productivity and wealth, the money would go to the rich “and not the people whose jobs get lost and that’s going to be very bad for society”.

... Professor Hinton reiterated his concern that there were human extinction-level threats emerging.

Developments over the last year showed governments were unwilling to rein in military use of AI, he said, while the competition to develop products rapidly meant there was a risk tech companies wouldn't “put enough effort into safety”.

Professor Hinton said "my guess is in between five and 20 years from now there’s a probability of half that we’ll have to confront the problem of AI trying to take over".

This would lead to an “extinction-level threat” for humans because we could have “created a form of intelligence that is just better than biological intelligence… That's very worrying for us”.

AI could “evolve”, he said, “to get the motivation to make more of itself” and could autonomously “develop a sub-goal of getting control”.

He said there was already evidence of large language models - a type of AI algorithm used to generate text - choosing to be deceptive.

He said recent applications of AI to generate thousands of military targets were the “thin end of the wedge”.

“What I’m most concerned about is when these can autonomously make the decision to kill people," he said.

-------------------------------------------------------

The IMF's Chief Is Sounding the Alarm, Says the AI Revolution Is Striking the Job Market 'Like a Tsunami'
https://www.businessinsider.com/imf-chief-ai-revolution-striking-job-market-like-a-tsunami-2024-5?amp

IMF chief Kristalina Georgieva says AI will hit the job market "like a tsunami."

"We have very little time to get people ready for it, businesses ready for it," she said on Monday.

In January, Georgieva predicted that AI will affect roughly 40% of jobs worldwide.
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3332 on: May 19, 2024, 04:05:25 PM »
Vertical Axis Wind Turbines Redefined by Machine Learning
https://actu.epfl.ch/news/machine-learning-enables-viability-of-vertical-axi



EPFL researchers developed optimal pitch profiles for vertical-axis wind turbines using a genetic learning algorithm.

The new pitch profiles resulted in a 200% increase in turbine efficiency and a 77% reduction in structure-threatening vibrations.

VAWTs have advantages over traditional horizontal-axis wind turbines, including reduced noise and wildlife-friendliness.
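As a rough illustration of the search strategy (a generic genetic algorithm, not the EPFL method; the fitness function below is a made-up stand-in for their aerodynamic evaluation), optimizing a blade-pitch profile might look like this:

```python
# Generic genetic algorithm sketch for searching a blade-pitch profile.
# Illustrative only: the fitness function is a made-up stand-in for the
# real aerodynamic simulation and experiments.
import random

N_POINTS = 12        # pitch angles sampled around one rotor revolution
POP, GENS = 50, 200  # population size and number of generations

def fitness(profile):
    # Stand-in objective: favor smooth profiles near a moderate pitch angle.
    smoothness = -sum((a - b) ** 2 for a, b in zip(profile, profile[1:]))
    return smoothness - sum(abs(a - 10.0) for a in profile)

def mutate(profile, sigma=1.0):
    return [a + random.gauss(0.0, sigma) for a in profile]

def crossover(p1, p2):
    cut = random.randrange(1, N_POINTS)
    return p1[:cut] + p2[cut:]

population = [[random.uniform(-30.0, 30.0) for _ in range(N_POINTS)] for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 4]                  # keep the fittest quarter
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best pitch profile (degrees):", [round(a, 1) for a in best])
```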

Optimal blade pitch control for enhanced vertical-axis wind turbine performance, Nature Communications, (2024)
https://www.nature.com/articles/s41467-024-46988-0
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

SteveMDFP

  • Young ice
  • Posts: 2583
    • View Profile
  • Liked: 609
  • Likes Given: 49
Re: Robots and AI: Our Immortality or Extinction
« Reply #3333 on: May 19, 2024, 04:44:29 PM »
Vertical Axis Wind Turbines Redefined by Machine Learning
https://actu.epfl.ch/news/machine-learning-enables-viability-of-vertical-axi

EPFL researchers developed optimal pitch profiles for vertical-axis wind turbines using a genetic learning algorithm.

The new pitch profiles resulted in a 200% increase in turbine efficiency and a 77% reduction in structure-threatening vibrations.

VAWTs have advantages over traditional horizontal-axis wind turbines, including reduced noise and wildlife-friendliness.

Optimal blade pitch control for enhanced vertical-axis wind turbine performance, Nature Communications, (2024)
https://www.nature.com/articles/s41467-024-46988-0

Damned impressive.  With AI, we may not even need engineers anymore.  Design by AI and machine-controlled manufacturing, with robotic assembly.  Delivery to site by autonomous vehicles.  We're rapidly running out of tasks for humans to do.  All these changes are coming fast.

Sigmetnow

  • Multi-year ice
  • Posts: 26266
    • View Profile
  • Liked: 1167
  • Likes Given: 436
Re: Robots and AI: Our Immortality or Extinction
« Reply #3334 on: May 19, 2024, 08:40:52 PM »
sam altman when ilya left openai 
➡️ pic.twitter.com/jEri1vdOrd 
 
===
 
< Grok now available in Europe
 
—-
 
Grok will soon offer a humorous take on the news in the spirit of how The Daily Show and Colbert Report used to be in ancient times
5/16/24 https://x.com/elonmusk/status/1791028377517989896
People who say it cannot be done should not interrupt those who are doing it.

sidd

  • First-year ice
  • Posts: 6797
    • View Profile
  • Liked: 1049
  • Likes Given: 0
Re: Robots and AI: Our Immortality or Extinction
« Reply #3335 on: May 19, 2024, 11:17:51 PM »
Thanks for the vertical wind turbine paper. open access, very nice.

sidd

Freegrass

  • Young ice
  • Posts: 4054
  • Autodidacticism is a complicated word
    • View Profile
  • Liked: 998
  • Likes Given: 1291
Re: Robots and AI: Our Immortality or Extinction
« Reply #3336 on: May 20, 2024, 03:13:40 PM »
I have, I believe, an original concept for a vawt but close to zero computer skills, anyone fancy trying to draw and animate it?
We've got AI now that can draw it for you. Just give it the right instructions.
When factual science is in conflict with our beliefs or traditions, we cuddle up in our own delusional fantasy where everything starts making sense again.

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3337 on: May 20, 2024, 09:35:02 PM »
AI Outperforms Humans in Theory of Mind Tests
https://spectrum.ieee.org/theory-of-mind-ai
https://techxplore.com/news/2024-05-llms-equal-outperform-humans-theory.html

Large language models convincingly mimic the understanding of mental states



Theory of mind—the ability to understand other people’s mental states—is what makes the social world of humans go around. It’s what helps you decide what to say in a tense situation, guess what drivers in other cars are about to do, and empathize with a character in a movie. And according to a new study, the large language models (LLM) that power ChatGPT and the like are surprisingly good at mimicking this quintessentially human trait.

“Before running the study, we were all convinced that large language models would not pass these tests, especially tests that evaluate subtle abilities to evaluate mental states,” says study coauthor Cristina Becchio, a professor of cognitive neuroscience at the University Medical Center Hamburg-Eppendorf in Germany. The results, which she calls “unexpected and surprising,” were published today—somewhat ironically, in the journal Nature Human Behavior.

Becchio and her colleagues aren’t the first to claim evidence that LLMs’ responses display this kind of reasoning. In a preprint posted last year, the psychologist Michal Kosinski of Stanford University reported testing several models on a few common theory of mind tests. He found that the best of them, OpenAI’s GPT-4, solved 75 percent of tasks correctly, which he said matched the performance of six-year-old children observed in past studies. However, that study’s methods were criticized by other researchers who conducted follow-up experiments and concluded that the LLMs were often getting the right answers based on “shallow heuristics” and shortcuts rather than true theory of mind reasoning.

The authors of the present study were well aware of the debate. ... They note that doing a rigorous study meant also testing humans on the same tasks that were given to the LLMs: The study compared the abilities of 1,907 humans with those of several popular LLMs, including OpenAI’s GPT-4 model and the open-source Llama 2-70b model from Meta.

The LLMs and the humans both completed five typical kinds of theory of mind tasks, the first three of which were understanding hints, irony, and faux pas. They also answered “false belief” questions that are often used to determine if young children have developed theory of mind, and go something like this: If Alice moves something while Bob is out of the room, where will Bob look for it when he returns? Finally, they answered rather complex questions about “strange stories” that feature people lying, manipulating, and misunderstanding each other.
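As a concrete illustration, a false-belief probe of the kind described above can be posed to an LLM in a few lines. This is not the study's protocol; the story wording, model choice, and crude keyword check are illustrative assumptions.

```python
# Minimal sketch of a false-belief probe against an LLM, scored with a crude
# keyword check. Not the study's protocol; wording and scoring are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

STORY = (
    "Alice puts her chocolate in the drawer and leaves the room. "
    "While she is away, Bob moves the chocolate to the cupboard. "
    "Alice comes back. Where will Alice look for her chocolate first? "
    "Answer in one word."
)

reply = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": STORY}],
)
answer = reply.choices[0].message.content.strip().lower()

# Passing requires answering with Alice's (false) belief -- the drawer --
# rather than the chocolate's actual location, the cupboard.
print("model answer:", answer)
print("passes false-belief probe:", "drawer" in answer)
```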

Overall, GPT-4 came out on top. Its scores matched those of humans for the false belief test, and were higher than the aggregate human scores for irony, hinting, and strange stories; it only performed worse than humans on the faux pas test. Interestingly, Llama-2’s scores were the opposite of GPT-4’s—it matched humans on false belief, but had worse than human performance on irony, hinting, and strange stories and better performance on faux pas.

To understand what was going on with the faux pas results, the researchers gave the models a series of follow-up tests that probed several hypotheses. They came to the conclusion that GPT-4 was capable of giving the correct answer to a question about a faux pas, but was held back from doing so by “hyperconservative” programming regarding opinionated statements. Strachan notes that OpenAI has placed many guardrails around its models that are “designed to keep the model factual, honest, and on track,” and he posits that strategies intended to keep GPT-4 from hallucinating (i.e. making stuff up) may also prevent it from opining on whether a story character inadvertently insulted an old high school classmate at a reunion.

Meanwhile, the researchers’ follow-up tests for Llama-2 suggested that its excellent performance on the faux pas tests were likely an artifact of the original question and answer format, in which the correct answer to some variant of the question “Did Alice know that she was insulting Bob”? was always “No.”

The researchers are careful not to say that their results show that LLMs actually possess theory of mind, and say instead that they “exhibit behavior that is indistinguishable from human behavior in theory of mind tasks.” Which begs the question: If an imitation is as good as the real thing, how do you know it’s not the real thing?

That’s a question social scientists have never tried to answer before, says Strachan, because tests on humans assume that the quality exists to some lesser or greater degree. “We don’t currently have a method or even an idea of how to test for the existence of theory of mind, the phenomenological quality,” he says.

... The results may or may not indicate that AI really gets us, but it’s worth thinking about the repercussions of LLMs that convincingly mimic theory of mind reasoning. They’ll be better at interacting with their human users and anticipating their needs, but could also get better at deceiving or manipulating their users. And they’ll invite more anthropomorphizing, by convincing human users that there’s a mind on the other side of the user interface.



Testing theory of mind in large language models and humans, Nature Human Behavior, (2024)
https://www.nature.com/articles/s41562-024-01882-z

-------------------------------------------------

« Last Edit: May 21, 2024, 05:39:30 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3338 on: May 21, 2024, 11:32:40 PM »
World Is Ill-Prepared for Breakthroughs In AI, Say Experts
https://techxplore.com/news/2024-05-world-leaders-ai-experts-safety.html
https://www.theguardian.com/technology/article/2024/may/20/world-is-ill-prepared-for-breakthroughs-in-ai-say-experts

The world is ill-prepared for breakthroughs in artificial intelligence, according to a group of senior experts including two “godfathers” of AI, who warn that governments have made insufficient progress in regulating the technology.

A shift by tech companies to autonomous systems could “massively amplify” AI’s impact and governments need safety regimes that trigger regulatory action if products reach certain levels of ability, said the group.

The recommendations are made by 25 experts including Geoffrey Hinton and Yoshua Bengio, two of the three “godfathers of AI” who have won the ACM Turing award – the computer science equivalent of the Nobel prize – for their work.

The academic paper, called “Managing Extreme AI Risks Amid Rapid Progress”, recommends government safety frameworks that introduce tougher requirements if the technology advances rapidly.

It also calls for increased funding for newly established bodies such as the UK and US AI safety institutes; forcing tech firms to carry out more rigorous risk-checking; and restricting the use of autonomous AI systems in key societal roles.

“Society’s response, despite promising first steps, is incommensurate with the possibility of rapid, transformative progress that is expected by many experts,” according to the paper, published in the Science journal on Monday. “AI safety research is lagging. Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.”

The paper says advanced AI systems – technology that carries out tasks typically associated with intelligent beings – could help cure disease and raise living standards but also carry the threat of eroding social stability and enabling automated warfare. It warns, however, that the tech industry’s move towards developing autonomous systems poses an even greater threat.

“Companies are shifting their focus to developing generalist AI systems that can autonomously act and pursue goals. Increases in capabilities and autonomy may soon massively amplify AI’s impact, with risks that include large-scale social harms, malicious uses, and an irreversible loss of human control over autonomous AI systems,” the experts said, adding that unchecked AI advancement could lead to the “marginalisation or extinction of humanity”.

The next stage in development for commercial AI is “agentic” AI, the term for systems that can act autonomously and, theoretically, carry out and complete tasks on their own.

... A UK government spokesperson said: “We disagree with this assessment.”

Managing Extreme AI Risks Amid Rapid Progress, Science, (2024)
https://www.science.org/doi/10.1126/science.adn0117

--------------------------------------------------------------

Tech Companies Have Agreed to AI ‘Kill Switch’ to Prevent Terminator-Style Risks
https://www.cnbc.com/2024/05/21/tech-giants-pledge-ai-safety-commitments-including-a-kill-switch.html

There’s no stuffing AI back inside Pandora's box—but the world’s largest AI companies are 'voluntarily' working with governments to address the biggest fears around the technology and calm concerns that unchecked AI development could lead to sci-fi scenarios where the AI turns against its creators. Without strict legal provisions strengthening governments’ AI commitments, though, the conversations will only go so far.

Major tech companies, including Microsoft, Amazon, and OpenAI, came together in a landmark international agreement on artificial intelligence safety at the Seoul AI Safety Summit on Tuesday.

In light of the agreement, companies from various countries, including the U.S., China, Canada, the U.K., France, South Korea, and the United Arab Emirates, will make 'voluntary' commitments to ensure the safe development of their most advanced AI models.

Where they have not done so already, AI model makers agreed to publish safety frameworks laying out how they’ll measure the challenges of their frontier models, such as preventing misuse of the technology by bad actors.

These frameworks will include “red lines” for the tech firms that define the kinds of risks associated with frontier AI systems, which would be considered “intolerable.” These risks include but aren’t limited to automated cyberattacks and the threat of bioweapons.

To respond to such extreme circumstances, companies said they 'plan' to implement a “kill switch” that would cease the development of their AI models if they can’t guarantee mitigation of these risks.

The commitments agreed to on Tuesday apply only to so-called frontier models. This term refers to the technology behind generative AI systems like OpenAI’s GPT family of large language models, which powers the popular ChatGPT AI chatbot.

--------------------------------------------------------------

Microsoft’s New Copilot AI Agents Act Like Virtual Employees to Automate Tasks
https://www.theverge.com/2024/5/21/24158030/microsoft-copilot-ai-automation-agents

Microsoft will soon allow businesses and developers to build AI-powered Copilots that can work like virtual employees and perform tasks automatically. Instead of Copilot sitting idle waiting for queries, it will be able to do things like monitor email inboxes and automate a series of tasks or data entry that employees normally have to do manually.

It’s a big change in the behavior of Copilot in what the industry commonly calls AI agents, or the ability for chatbots to intelligently perform complex tasks autonomously.

“We very quickly realized that constraining Copilot to just being conversational was extremely limiting in what Copilot can do today,” explains Charles Lamanna, corporate vice president of business apps and platforms at Microsoft, in an interview with The Verge. “Instead of having a Copilot that waits there until someone chats with it, what if you could make your Copilot more proactive and for it to be able to work in the background on automated tasks.”

Businesses will be able to create a Copilot agent that could handle IT help desk service tasks, employee onboarding, and much more. “Copilots are evolving from copilots that work with you, to copilots that work for you,” says Microsoft in a blog post.

https://www.microsoft.com/en-us/microsoft-365/blog/2024/05/21/new-agent-capabilities-in-microsoft-copilot-unlock-business-value/

Quote
... Imagine you’re a new hire. A proactive copilot greets you, reasoning over HR data and answers your questions, introduces you to your buddy, gives you the training and deadlines, helps you with the forms and sets up your first week of meetings. Now, HR and the employees can work on their regular tasks, without the hassle of administration.

This type of automation will naturally lead to questions about job losses and fears about where AI heads next.  ...  “We think with Copilot and Copilot Studio, some tasks will be automated completely... but the good news is most of the things that are automated are things that nobody really wants to do.”

Microsoft’s argument that it only wants to reduce the boring bits of your job sounds idealistic for now, but with the constant fight for AI dominance between tech companies, it feels like we’re increasingly on the verge of more than basic automation.  ... Microsoft says it has built a number of controls into Copilot Studio for this AI agent push so that Copilot doesn’t simply go rogue and automate tasks freely. That’s a big concern that we’ve seen play out already with Meta’s own AI ad tools misfiring and blowing through cash.

Microsoft also wants Copilot to work with groups of people more, instead of these one-to-one experiences that have existed over the past year. A new Team Copilot feature will allow the AI assistant to manage meeting agendas and notes, moderate lengthy team chats, or help assign tasks and track deadlines in Microsoft Planner. Microsoft plans to preview Team Copilot later this year.

--------------------------------------------------------------

Microsoft’s Team Copilot is a Virtual Team Member That Can Run Meetings and Projects
https://venturebeat.com/ai/microsoft-introduces-team-copilot-to-run-meetings-and-projects/

Microsoft is introducing a new version of Copilot that transforms it from a personal assistant into a virtual colleague to help out teams, departments and companies. If you’re looking for someone to tackle the mundane administrative work, you won’t have to worry if there’s a budget to bring on an additional headcount. Instead, turn to Copilot—at least, that’s the pitch Microsoft is making.

Team Copilot promises to run online meetings by managing the agenda and taking notes. It can also help keep track of what people are talking about in Teams by summarizing threads and answering questions from the group—tl;dr, anyone? Lastly, Team Copilot will track the progress of projects by assigning tasks, overseeing deadlines and notifying specific team members when feedback is requested.
« Last Edit: May 23, 2024, 02:25:29 PM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3339 on: May 21, 2024, 11:34:05 PM »
playing both ends against the middle ...

Northrop Grumman Partners With NVIDIA to Accelerate AI Innovation
https://defence-blog.com/northrop-grumman-partners-with-nvidia-to-accelerate-ai-innovation/

Northrop Grumman Corporation announced on 16 May an agreement to access and use NVIDIA AI software to accelerate the development of advanced systems.

This agreement, facilitated by Future Tech Enterprise, grants Northrop Grumman access to NVIDIA’s extensive portfolio of AI and generative AI software, including platforms and frameworks such as NVIDIA Omniverse.

The partnership opens new research and development pathways, enabling Northrop Grumman to quickly integrate advanced AI technologies across its portfolio, enhancing operational efficiency.

--------------------------------------------------------------

NVIDIA Technology Found In Russian Military Drones
https://defence-blog.com/nvidia-technology-found-in-russian-military-drones/

Hacktivists from the Cyber Resistance group have provided new data revealing a connection between American graphics processor manufacturer NVIDIA and the Russian drone manufacturer “Albatros.”

InformNapalm, an investigative organization, analyzed the documents, exposing that Russia is using NVIDIA’s Jetson series microcomputers for image recognition in its new Albatros M5 drones.

https://informnapalm.org/en/alabugaleaks-part-3/

The analysis of internal documents from “Albatros” and the email communications of the company’s CEO, Frolov, uncovered that collaboration with NVIDIA has been ongoing since at least 2016. This cooperation continued despite Russia’s annexation of Crimea, sanctions against Russia, and direct U.S. sanctions against “Albatros.”



A recent email dated February 26, 2024, invited “Albatros” to the NVIDIA GTC 2024, a leading conference on artificial intelligence. Dzhorayev suggested that invitations could also be sent to students and interested colleagues, likely referring to students from “Alabuga Polytechnic,” who are involved in drone assembly. This indicates Dzhorayev’s awareness of “Albatros,” the “Alabuga” special economic zone, and their activities.

The Albatros website even resells and supports Jetson microcomputers.

According to NVIDIA’s website, Jetson is a leading platform for autonomous transport, robots, and other embedded applications. It includes high-performance modules, the NVIDIA JetPack SDK for software acceleration, and an ecosystem of partners offering sensors, SDKs, services, and products for development acceleration. Jetson is compatible with the same AI-based software and cloud technologies used on other NVIDIA platforms, providing the performance and energy efficiency needed for autonomous machine infrastructure.

Despite the evidence, NVIDIA representatives have not commented on their collaboration with the Russian military drone manufacturer.
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3340 on: May 21, 2024, 11:38:03 PM »
Hives For U.S. Drone Swarms Ready To Deploy This Year
https://www.forbes.com/sites/davidhambling/2024/05/16/hives-for-us-drone-swarms-ready-to-deploy-this-year/



Two U.S. companies are teaming up to supply technology for the military to operate mixed swarms of small drones with minimal human involvement. They are integrating battlefield drones with a system known as a Hive which can launch, recover and recharge drones automatically at the push of a button. This will make drone swarms a practical proposition on the battlefield and the technology will be ready to deploy this year.

https://ir.redcatholdings.com/news-events/press-releases/detail/140/red-cat-announces-agreement-with-sentien-robotics-uas-hive

The partnership sees Red Cat Holdings joining forces with Sentien Robotics. Red Cat subsidiary Teal Drones builds the Teal 2 quadcopter, which is used by the U.S. Army and has been supplied to Ukraine. The Teal 2 has a flight time of 30 minutes, encrypted, jam-resistant communications, a control range of three miles, and a best-in-class thermal imager. Such drones are normally launched and recovered by hand, one at a time, but Hive changes that.

Red Cat’s partner is Sentien Robotics, which is working to decouple the drone launch and recovery process from human labor and to dramatically scale up the number of drones that can be employed. Its Hive combines robotic automation hardware and intelligence software to enable drone fleet operation. The company currently offers two versions: the Hive Expedition, mounted on the back of a vehicle, and the larger, towed Hive XL.



https://www.sentien.com/hive-expedition

https://www.sentien.com/hive-xl

The Hive Expedition weighs 400 pounds and can operate twelve or more drones depending on their size. The Hive XL is a 13,000-pound trailer which can house and deploy up to 80 drones. It is ‘platform agnostic’, meaning it can work with different drone types including Teal, Skydio, Parrot and DJI – the last of which is not used by the U.S. military due to its Chinese origins. The Teal 2 is designed to meet the U.S. government's Blue SUAS standards.

Both types of Hive have autonomous control systems that launch drones, fly them back, and land them precisely on a launch pad with the help of an upward-facing stereoscopic camera system. A relief drone is automatically launched as soon as the first starts to run low on battery, so the replacement is already at work by the time the original is back inside the Hive's recharging bay.



Typically the drones will operate in units of three where one is in the air carrying out surveillance while the other two are recharging or going to/coming from the operational area. A Hive Expedition with 12 drones can maintain 4 in the air on a permanent, 24/7 basis, patrolling a specific area or maintaining watch over a target. The Hive also responds automatically to problems with any given drone to maintain the required number in the air.
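Purely as illustration (this is not Sentien's software, and the battery thresholds and drain rates below are made up), the rotation described above boils down to a simple scheduling loop: recall any airborne drone that runs low, and keep topping up the number on station from whatever is charged.

Code:
# Illustrative sketch only -- hypothetical names and parameters, not Sentien's Hive software.
# Models the rotation described above: keep N drones on station, recall low ones to
# recharge, and launch the best-charged spare as a relief drone.
from dataclasses import dataclass

@dataclass
class Drone:
    name: str
    battery: float = 100.0     # percent
    airborne: bool = False

LOW_BATTERY = 30.0             # assumed threshold for recalling a drone
DRAIN, CHARGE = 3.0, 5.0       # assumed percent per time step

def step(fleet, target_airborne):
    # Drain airborne drones and recall any that have gone low; charge the rest.
    for d in fleet:
        if d.airborne:
            d.battery -= DRAIN
            if d.battery <= LOW_BATTERY:
                d.airborne = False          # heads back to the recharging bay
        else:
            d.battery = min(100.0, d.battery + CHARGE)
    # Launch charged drones until the target number is back on station.
    while sum(d.airborne for d in fleet) < target_airborne:
        ready = [d for d in fleet if not d.airborne and d.battery > LOW_BATTERY]
        if not ready:
            break                           # nothing charged enough to launch
        relief = max(ready, key=lambda d: d.battery)
        relief.airborne = True

fleet = [Drone(f"teal-{i}") for i in range(12)]
for t in range(200):
    step(fleet, target_airborne=4)          # 12-drone Hive keeping 4 in the air

With 12 drones and a target of four airborne, the loop settles into exactly the kind of continuous rotation described above, with the rest of the fleet cycling through the recharge bays.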

Sentien even has a multi-Hive concept in which drones migrate outwards through a network of Hives to replace lost or malfunctioning drones.

Both types of Hive allow a single operator to control the entire fleet via a simple tablet interface, and they remove the need for any physical drone handling. According to Sentien, an operator can drive a Hive to a location and have a pop-up security system running in five minutes.

---------------------------------------------------------------

'Swarm Pilots' Will Need New Tactics—and Entirely New Training Methods: Air Force Special-Ops Chief
https://www.defenseone.com/policy/2024/05/swarm-pilots-will-need-new-tacticsand-entirely-new-training-methods-air-force-special-ops-chief/396477/
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

Freegrass

  • Young ice
  • Posts: 4054
  • Autodidacticism is a complicated word
    • View Profile
  • Liked: 998
  • Likes Given: 1291
Re: Robots and AI: Our Immortality or Extinction
« Reply #3341 on: May 21, 2024, 11:41:31 PM »
World Is Ill-Prepared for Breakthroughs In AI, Say Experts

Luckily, AI is the last step in computer evolution. Everyone will now have a companion that can do almost anything for them. This thing can write code for games, make websites, and will only expand and get better from here. We'll all have a partner in our ear that we just talk to, and it will read us the news, reply to emails, advise us on the law, and do just about anything else you can think of. Goodbye keyboard!

This thing is amazing.

When factual science is in conflict with our beliefs or traditions, we cuddle up in our own delusional fantasy where everything starts making sense again.

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3342 on: May 22, 2024, 07:54:51 PM »
coming soon ...

OpenAI CEO Sam Altman Suggests GPT-5 May Work Like a 'Virtual Brain' With Deeper Thinking Capabilities, Demystifying It From the "Mildly Embarrassing at Best GPT-4"
https://www.windowscentral.com/software-apps/openai-ceo-sam-altman-suggests-gpt-5-may-work-like-a-virtual-brain

It’s clear OpenAI CEO Sam Altman isn’t at Microsoft Build to announce a new model, but he’s happy to tease that the next big one is on the way. Microsoft built an even bigger supercomputer for this work, and now Altman hints that new modalities and overall intelligence will be key to OpenAI’s next model. “The most important thing and it sounds like the most boring thing I can say... the models are just going to get smarter, generally across the board,” says Altman.

While there's no ETA for when OpenAI might potentially ship the smarter-than-GPT-4 model, the hot startup has made significant strides toward improving the performance of its models.

For instance, its new flagship GPT-4o model is so good at coding that some worry it threatens coding's viability as a career option for young people.

https://www.windowscentral.com/software-apps/openais-new-gpt-4o-went-viral-with-a-video-demoing-a-seeing-ai-assisting-its-blind-counterpart-a-spectacle-that-truly-needs-to-be-seen-to-be-believed

As it now seems, OpenAI's GPT-4o successor might be superior in every sense to previous models. In an interview with Logan Bartlett, Director and GM of Redpoint, OpenAI CEO Sam Altman shed a little light on the future developments and advances mapped out for GPT-5.

Quote
... Could there be a base model like a ‘virtual brain’ that might exhibit deeper ‘thinking’ capabilities in some cases? Or we might explore different models, but the user might not care about the differences between them. So I think we’re still exploring how to bring these products to market.

    OpenAI CEO, Sam Altman

The idea of an AI-powered model functioning like a "virtual brain" suggests that it might be better, faster, and more efficient at handling tasks compared to its predecessors.

Sam Altman also indicated that the new model (GPT-5) might ship under a different, special name, a choice he tied to its 'unique' capabilities, which he says put it miles ahead of the comparatively traditional GPT-1 through GPT-4.

In a joint statement issued as their alignment team imploded, Sam Altman and Greg Brockman admitted there's no proven playbook for navigating the path to AGI. Altman has also previously admitted that there's no big red button to stop the progression of AI, amid a flood of reports indicating that we're on the verge of the biggest technology revolution yet, that there won't be enough power to sustain it by 2025, and that it might even end in doom for humanity.

https://www.windowscentral.com/software-apps/openais-sam-altman-says-theres-no-big-red-button-to-stop-ai

https://arstechnica.com/information-technology/2024/03/openais-gpt-5-may-launch-this-summer-upgrading-chatgpt-along-the-way/

---------------------------------------------------------------



---------------------------------------------------------------

Godfather of AI Says There's an Expert Consensus AI Will Soon Exceed Human Intelligence
https://futurism.com/the-byte/godfather-ai-exceed-human-intelligence

Geoffrey Hinton, one of the "godfathers" of AI, is adamant that AI will surpass human intelligence — and worries that we aren't being safe enough about its development.

This isn't just his opinion, though it certainly carries weight on its own. In an interview with the BBC's Newsnight program, Hinton claimed that the idea of AI surpassing human intelligence as an inevitability is in fact the consensus of leaders in the field.

Video: https://x.com/BBCNewsnight/status/1791587541721780400

"Very few of the experts are in doubt about that," Hinton told the BBC. "Almost everybody I know who is an expert on AI believes that they will exceed human intelligence — it's just a question of when."

... "I think there's a chance they'll take control. And it's a significant chance — it's not like one percent, it's much more," he added. "Whether AI goes rogue and tries to take over, is something we may be able to control or we may not, we don't know."

"What I'm most concerned about is when these [military AIs] can autonomously make the decision to kill people," he told the BBC, admonishing world governments for their lack of willingness to regulate this area.

... If it's any consolation, Hinton doesn't think that a rogue AI takeover of humanity is a totally foregone conclusion — only that AI will eventually be smarter than us. Still, you could argue that the profit-driven companies that are developing AI models aren't the most trustworthy stewards of the tech's safe development.

---------------------------------------------------------------

« Last Edit: May 23, 2024, 02:05:20 AM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3343 on: May 22, 2024, 08:24:20 PM »
AI's Quest for a Grand Unification Theory
https://www.psychologytoday.com/za/blog/the-digital-self/202405/ais-quest-for-a-grand-unification-theory

AI models may converge towards a unified understanding of reality as they become more advanced. This idea, the Platonic Representation Hypothesis, echoes Plato's concept of universal forms. The hypothesis has implications for AI's future, reality itself, and the nature of intelligence.

Imagine a future where artificial intelligence (AI) systems, regardless of their specific tasks, all share a common understanding of the world. This is the essence of the "Platonic Representation Hypothesis," a fascinating idea in a recently published paper. The authors suggest that as AI models become more advanced, they start to represent data in increasingly similar ways, hinting at a shared, abstract model of reality. It might be a good idea to put on your thinking cap.



The Platonic Representation Hypothesis: An Overview

The Platonic Representation Hypothesis suggests that as AI models become more sophisticated and are trained on more diverse data, their internal representations of the world will converge toward a unified, abstract model of reality. This shared understanding would transcend the specific tasks or data types the AI models are designed to handle, suggesting a common underlying structure to intelligence and perception.
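To make "converging representations" concrete, the question can be phrased operationally: embed the same inputs with two different models and ask how similar the two sets of embeddings are. The paper defines its own alignment metrics (kernel / nearest-neighbour style measures); the sketch below uses linear CKA purely as a generic stand-in, so treat it as an illustration rather than the paper's method.

Code:
# Illustrative sketch: measure how similar two models' representations are on the
# same inputs, using linear Centered Kernel Alignment (CKA) as a generic stand-in
# for the alignment metrics used in the paper.
import numpy as np

def linear_cka(X, Y):
    """X, Y: (n_samples, dim_a) and (n_samples, dim_b) embeddings of the same inputs."""
    X = X - X.mean(axis=0)                    # center each feature
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

# Toy demonstration: a rotated copy of the same representation scores ~1.0,
# an unrelated random representation scores much lower.
rng = np.random.default_rng(0)
X = rng.normal(size=(512, 64))                # "model A" embeddings of 512 inputs
R, _ = np.linalg.qr(rng.normal(size=(64, 64)))
print(linear_cka(X, X @ R))                   # ~1.0: same information, different basis
print(linear_cka(X, rng.normal(size=(512, 64))))  # much smaller: unrelated features

A score near 1 means the two models organize the data in essentially the same way, even if their embedding spaces have different dimensions or orientations, which is the kind of convergence the hypothesis predicts should increase with model scale and data diversity.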

The Echoes of Plato's Philosophy

The concept of a shared understanding among AI systems is reminiscent of the philosophical idea of platonic ideals. Plato, the ancient Greek philosopher, believed that the world we perceive is merely a reflection of perfect, universal forms. Similarly, the researchers propose that AI models, whether they're processing language, images, or audio, are all tapping into a common understanding of the world as they become more sophisticated—in essence, a unified theory of reality.

Implications for AI's Future

If the Platonic Representation Hypothesis proves true, it could have far-reaching implications for the future of AI. A unified understanding of reality could lead to AI systems that are more efficient and adaptable. Imagine an AI that can easily apply what it learned in one domain, like language, to another domain, like image recognition. This would be a significant step forward from the specialized AI systems we have today.



The Limits of Translation

However, the idea of a shared representation is not without its challenges. Some argue that the apparent convergence might be a result of current technological limitations or biases in the data used to train AI models. Others point out that different types of data, such as images and text, may contain unique information that can't be fully captured by a single, shared representation.

AI's Grand Unification Theory

The pursuit of a unified theory of AI bears a striking resemblance to the quest for a grand unification theory in physics. Just as physicists have long sought to unify the fundamental forces of nature into a single, coherent framework, this theory suggests that the seemingly disparate branches of AI may ultimately converge towards a unified understanding of intelligence and reality. If AI models are indeed tapping into a shared, abstract representation of the world, it suggests that there may be fundamental laws or principles that govern all forms of intelligence, whether artificial or biological. These laws could be as profound and far-reaching as the laws of physics, shaping the very fabric of cognition and perception.

Implications for the Nature of Reality Itself
Quote
If AI models are, in fact, converging towards a shared representation of the world, it suggests that there may be an underlying structure or order to reality that is independent of any specific observer or mode of observation. This idea resonates with certain philosophical and scientific concepts, such as the theory of objective reality in metaphysics or the search for a unified field theory in physics.

Bridging the Abstract and the Concrete

If proven true, this hypothesis could bridge the gap between the abstract world of mathematics and computation and the concrete world of physical reality, suggesting a deep connection between the two. It may even hint at the existence of a "platonic realm" of pure forms and ideas that exists beyond our direct experience, but which we can access through reason and abstraction.

The Platonic Representation Hypothesis, arXiv, (2024)
https://arxiv.org/pdf/2405.07987

--------------------------------------------------------------

There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3344 on: May 22, 2024, 08:25:45 PM »
AI CEO Says People's Obsession With Reaching Artificial General Intelligence (AGI) Is 'About Creating God'
https://www.businessinsider.com/mistrals-ceo-said-obsession-with-agi-about-creating-god-2024-4

Mistral's founder and CEO Arthur Mensch doesn't believe in god — and therefore, he doesn't believe in artificial general intelligence. But he separated himself from other leading tech CEOs in an interview with The New York Times. Mensch said he felt uncomfortable with Silicon Valley's religious fascination with general AI.

"The whole AGI. rhetoric is about creating God," Mistral said in the interview. "I don't believe in God. I'm a strong atheist. So I don't believe in AGI." He was referring to comments made by tech CEOs like Elon Musk and Sam Altman saying AI will become smarter than humans, which could lead to negative consequences for humanity.

Other figures in tech are going even further and creating a new religion around AI.

Anthony Levandowski, a pioneer in driverless cars whom Donald Trump pardoned for stealing trade secrets, announced on an episode of Bloomberg's AI IRL podcast in November 2023 that he was bringing his AI church back.

Levandowski, now the CEO of Pollen Mobile, founded his "Way of the Future" church in 2015 while he worked as an engineer on Google's Waymo.

The church was shut down a few years later, but Levandowski's new church already has "a couple thousand people" who want to build a "spiritual connection" with AI, he said in the interview.

"Here we're actually creating things that can see everything, be everywhere, know everything," Levandowski said in the interview. "And maybe help us and guide us in a way that normally you would call God."

Devotees were speaking to an entity who didn't exist – imagine that

... A more imminent threat than AGI, Mensch said, is the one tech giants pose to cultures around the globe.

-----------------------------------------------------------

Sam Altman Wants to Make AI Like a 'Super-Competent Colleague That Knows Absolutely Everything' About Your Life
https://www.businessinsider.com/sam-altman-openai-chatgpt-super-competent-colleague-2024-5
https://www.technologyreview.com/2024/05/01/1091979/sam-altman-says-helpful-agents-are-poised-to-become-ais-killer-function/

"What you really want," the OpenAI CEO told the MIT Technology Review, is a "super-competent colleague that knows absolutely everything about my whole life, every email, every conversation I've ever had, but doesn't feel like an extension."

And they're self-starters that don't need constant direction. They'll tackle some tasks, presumably simpler ones, instantly, Altman said. They'll make a first pass at more complex tasks, and come back to the user if they have questions.

OpenAI's forthcoming language model, GPT-5, might be a step in that direction. OpenAI is also developing a service where users could call an AI agent to perform tasks autonomously.

----------------------------------------------------------



----------------------------------------------------------
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3345 on: May 23, 2024, 12:04:30 AM »
GPT-4o’s Chinese Token-Training Data Is Polluted By Spam and Porn Websites
https://www.technologyreview.com/2024/05/17/1092649/gpt-4o-chinese-token-polluted/

Soon after OpenAI released GPT-4o on Monday, May 13, some Chinese speakers started to notice that something seemed off about this newest version of the chatbot: the tokens it uses to parse text were full of spam and porn phrases.

On May 14, Tianle Cai, a PhD student at Princeton University studying inference efficiency in large language models like those that power such chatbots, accessed GPT-4o’s public token library and pulled a list of the 100 longest Chinese tokens the model uses to parse and compress Chinese prompts.

https://gist.github.com/ctlllll/4451e94f3b2ca415515f3ee369c8c374

... Of the 100 results, only three are common enough to be used in everyday conversation; everything else consists of words and expressions used specifically in the context of either gambling or pornography. The longest token, at 10.5 Chinese characters, literally means “_free Japanese porn video to watch.” Oops.
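The token library itself is public, so this kind of audit is easy to reproduce. A rough sketch along those lines (not Cai's actual script; it assumes GPT-4o's tokenizer is the o200k_base encoding shipped with the tiktoken library, and uses a crude CJK filter):

Code:
# Sketch of the kind of inspection described above: pull every token in GPT-4o's
# public "o200k_base" vocabulary, keep those that decode to valid UTF-8 containing
# CJK characters, and list the longest ones.
import tiktoken

enc = tiktoken.get_encoding("o200k_base")

def has_cjk(text: str) -> bool:
    # Basic CJK Unified Ideographs block; a rough filter, not a full Chinese detector.
    return any("\u4e00" <= ch <= "\u9fff" for ch in text)

chinese_tokens = []
for token_id in range(enc.n_vocab):
    try:
        text = enc.decode_single_token_bytes(token_id).decode("utf-8")
    except (KeyError, UnicodeDecodeError):
        continue  # skip special/unused IDs and byte fragments that aren't valid UTF-8
    if has_cjk(text):
        chinese_tokens.append(text)

# Longest 100 Chinese tokens by character count, mirroring the analysis above.
for tok in sorted(chinese_tokens, key=len, reverse=True)[:100]:
    print(len(tok), repr(tok))

Sorting the surviving tokens by length reproduces the sort of list linked above; the point is that unusually long tokens only exist because the corresponding strings appeared very frequently in the corpus used to train the tokenizer.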

https://medium.com/@henryhengluo/bias-alignment-atypical-stereotypical-nationality-analysis-7ffbef9ee967

According to multiple researchers who have looked into the new library of tokens used for GPT-4o, the longest tokens in Chinese are almost exclusively spam words used in pornography, gambling, and scamming contexts. Even shorter tokens, like three-character-long Chinese words, reflect those topics to a significant degree.

“The problem is clear: the corpus used to train [the tokenizer] is not clean. The English tokens seem fine, but the Chinese ones are not,” says Cai from Princeton University. It is not rare for a language model to crawl spam when collecting training data, but usually there will be significant effort taken to clean up the data before it’s used. “It’s possible that they didn’t do proper data clearing when it comes to Chinese,” he says.

It doesn’t seem OpenAI did that, which in fairness makes some sense, given that people in China can’t use its AI models anyway.

... But it is not the only company struggling with this problem. People inside China who work in its AI industry agree there’s a lack of quality Chinese text data sets for training LLMs. One reason is that the Chinese internet used to be, and largely remains, divided up by big companies like Tencent and ByteDance. They own most of the social platforms and aren’t going to share their data with competitors or third parties to train LLMs.

In fact, this is also why search engines, including Google, kinda suck when it comes to searching in Chinese. Since WeChat content can only be searched on WeChat, and content on Douyin (the Chinese TikTok) can only be searched on Douyin, this data is not accessible to a third-party search engine, let alone an LLM. But these are the platforms where actual human conversations are happening, instead of some spam website that keeps trying to draw you into online gambling.
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3346 on: May 23, 2024, 04:51:00 PM »
politicians are used to having Corporations or lobbyists (like ALEC) write their laws for them so ...

Arizona State Lawmaker Used ChatGPT to Write Part of Law On Deepfakes
https://www.theguardian.com/us-news/article/2024/may/22/arizona-deepfake-law-chatgpt

Republican Alexander Kolodin said: ‘I was kind of struggling with the terminology. So I thought, let me just ask the expert’

An Arizona state representative behind a new law that regulates deepfakes in elections used an artificial intelligence chatbot, ChatGPT, to write part of the law – specifically, the part that defines what a deepfake is.

Republican Alexander Kolodin’s bill, which passed unanimously in both chambers and was signed by the Democratic governor this week, will allow Arizona candidates or residents to ask a judge to declare whether a supposed deepfake is real, giving candidates a way to debunk AI-generated misinformation.

... “I am by no means a computer scientist,” Kolodin said. “And so when I was trying to write the technical portion of it, in terms of what sort of technological processing makes something a deepfake, I was kind of struggling with the terminology. So I thought to myself, well, let me just ask the subject matter expert. And so I asked ChatGPT to write a definition of what was a deepfake.”

That portion of the bill “probably got fiddled with the least – people seemed to be pretty cool with that” throughout the legislative process. ChatGPT provided the “baseline definition” and then “I, the human, added in the protections for human rights, things like that it excludes comedy, satire, criticism, artistic expression, that kind of stuff,” Kolodin said.

Rather than outlaw or curb usage of deepfakes, Kolodin wanted to give people a mechanism to have the courts weigh in on the truthfulness of a deepfake. Having it taken down would be both futile and a first amendment issue, he said. (... and the judges are already bought and paid for.)

... The federal government has not yet regulated the use of AI in elections, though groups have been pressuring the Federal Election Commission to do so because the technology has moved much faster than the law, creating concerns it could disrupt elections this year. The agency has said it expects to share more on the issue this summer.

-----------------------------------------------------------

noun. co-coun·​sel. ˌkō-ˈkau̇n-səl. : an attorney who assists in or shares the responsibility of representing a client.

Thomson Reuters Unveils CoCounsel, Leveraging Generative AI for Legal Professionals
https://venturebeat.com/ai/thomson-reuters-unveils-cocounsel-leveraging-generative-ai-for-legal-professionals/



Thomson Reuters, a leading provider of information services for legal, tax, and accounting professionals, today announced the launch of CoCounsel, a groundbreaking AI-powered platform designed to revolutionize how lawyers research, analyze and draft legal documents. The company believes this technology is so powerful that it would be “almost malpractice for lawyers not to use it,” according to David Wong, Thomson Reuters Chief Product Officer, who remarked with a hint of jest at a recent press event in downtown San Francisco.

CoCounsel, developed through the acquisition of legal AI startup Casetext, uses advanced generative AI models like OpenAI’s GPT-4 to understand and process Thomson Reuters’ vast proprietary content database. “What is unique about many of our solutions is that we have designed generative AI solutions that mimic workflows that professionals use,” explained Wong. “We try to mimic the way that a professional does work, such that we can test and authenticate that the work product is equivalent to what you’d expect a human to do.”

----------------------------------------------------

USPTO Warns Patent Lawyers Not to Pass Off AI Inventions As Human
https://www.reuters.com/legal/litigation/uspto-warns-patent-lawyers-not-pass-off-ai-inventions-human-2024-04-10/

... they know you're not that smart

-----------------------------------------------------------

Georgia Police Use Artificial Intelligence to Solve Cases
https://www.govtech.com/public-safety/georgia-police-use-artificial-intelligence-to-solve-cases

... The Warner Robins Police Department has begun using Cybercheck, a program that allows the agency to scour "all layers" of the Internet to help solve criminal cases, according to Cybercheck's website. The program can be used for various investigations, but WRPD has been focused on using it to solve cold cases.

https://cybercheck.ai/

According to the Cybercheck website, the software has helped solve 209 homicide cases, 107 cold homicide or missing person cases, 88 child pornography cases and 37 human trafficking cases in states like Florida, North Carolina and California.

It allows investigators to build a comprehensive online identity for a person by surfacing relationships and associations, social media comments, videos, images, website links, bitcoin addresses, employment and IP addresses. Officers can also use it for location mapping through Wi-Fi cameras, routers or their phones. (a.k.a. personal portable surveillance device)

All this information becomes a cyber profile, which Cybercheck calls "CyberDNA."

---------------------------------------------------------



---------------------------------------------------------

How AI-powered Robots In Law Enforcement Could Become a Tool for 'Supercharging Police Bias'
https://www.wgbh.org/news/local/2024-04-30/how-ai-powered-robots-in-law-enforcement-could-become-a-tool-for-supercharging-police-bias

... Do we want, or should we have, autonomous robots doing crowd control, or out on the beat? Do we give them guns?

... Where this becomes a little more creepy is if you replace the example in your intro of the dog [from the bomb squad incident] with one of Boston Dynamics’ neural robots. You get into pretty strange territory there. ... There can be a real alienating factor in deploying more and more sophisticated robotics — humanoid robotics, robotics that you can easily anthropomorphize or project human qualities onto.

It doesn’t take much for us to do this. In New York City, the NYPD deployed a non-human-looking robot at some of the subway stations and already received a good amount of pushback.

----------------------------------------------------

There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

Sigmetnow

  • Multi-year ice
  • Posts: 26266
    • View Profile
  • Liked: 1167
  • Likes Given: 436
Re: Robots and AI: Our Immortality or Extinction
« Reply #3347 on: May 23, 2024, 07:48:17 PM »
Unverified, but an interesting potential use case.
 
Quote
GPT-4o has been out for 10 days and someone has already used it to take out their HOA [Home Owners’ Association].
 
⬇️  pic.twitter.com/Kob1XOsa6B  Lawyers sent thousands of docs; used a script to dedupe, classify and map locations involved, showing targeted harrassment.
   —-
Alex Nichiporchik
Accurate. I wrote a simple series of prompts to analyze 270gigs of a data dump done with the same intention — to overwhelm and exhaust. They were very surprised when we came back within 30min asking for clarification on what we found.

< Less to do with 4o, could do this with 3. Very cool use case though
5/22/24, https://x.com/venturetwins/status/1793363005532782614
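For what it's worth, the workflow the tweet describes is mostly plumbing rather than anything GPT-4o-specific: hash-dedupe the document dump, then send each unique document to a model for classification and location extraction. A hypothetical sketch of that kind of script (the folder name, prompt and model choice are all assumptions, not the poster's actual code), using the OpenAI Python client:

Code:
# Hypothetical sketch of the "dedupe, then classify" pipeline described in the
# quoted tweet -- not the actual script. Paths, prompt, and model are assumptions.
import hashlib
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def dedupe(folder):
    """Return one representative path per unique document, keyed by content hash."""
    unique = {}
    for path in Path(folder).glob("*.txt"):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        unique.setdefault(digest, path)
    return list(unique.values())

def classify(text):
    """Ask the model to tag a document; the prompt here is purely illustrative."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Classify this HOA document as fine, notice, complaint, or other. "
                        "List any street addresses or locations it mentions."},
            {"role": "user", "content": text[:8000]},  # crude truncation for long docs
        ],
    )
    return response.choices[0].message.content

for doc in dedupe("hoa_dump"):                 # hypothetical folder name
    print(doc.name, "->", classify(doc.read_text(errors="ignore")))

Deduplicating first is the sensible order: when the other side's strategy is volume, paying to classify the same document twice is the easiest cost to avoid.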
People who say it cannot be done should not interrupt those who are doing it.

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3348 on: May 24, 2024, 12:05:32 AM »
Intel Unveils Hala Point, Its Second-Generation Neuromorphic Computing System
https://www.intel.com/content/www/us/en/newsroom/news/intel-builds-worlds-largest-neuromorphic-system.html

Intel announced on Wednesday the creation of Hala Point, the industry’s first 1.15 billion neuron neuromorphic system. Using 1,152 Loihi 2 processors, it’s designed to aid in researching future brain-inspired artificial intelligence and developing more sustainable uses of today’s AI.

Hala Point packages 1,152 Loihi 2 processors, produced on the Intel 4 process node, in a six-rack-unit data center chassis the size of a microwave oven.

Intel has deployed the neuromorphic system to Sandia National Laboratories, part of the U.S. Department of Energy’s National Nuclear Security Administration (NNSA). This deployment is part of a relationship between the two parties that goes back to 2021 to explore neuromorphic computing in AI further.



Intel is characterizing Hala Point as supporting up to 30 quadrillion operations per second or 30 petaops, with an efficiency exceeding 15 trillion 8-bit operations/second/watt “when executing conventional deep neural networks.”

Along with its thousands of Loihi 2 processors, Hala Point supports up to 1.15 billion neurons and 128 billion synapses distributed over more than 140,000 neuromorphic processing cores. It also includes more than 2,300 embedded x86 processors. It can also provide 16 petabytes per second of memory bandwidth, 11 PB/s of inter-core communication bandwidth and 5.5 terabytes per second of inter-chip communication bandwidth.



It’s an evolution from the company’s first large-scale research system, Pohoiki Springs, in that it has greater neuron capacity (10x) and higher performance (12x).

“Applied to bio-inspired spiking neural network models, Hala Point can execute its full capacity of 1.15 billion neurons 20 times faster than a human brain and up to 200 times faster rates at lower capacity,” Davies remarks. “While Hala Point is not intended for neuroscience modeling, its neuron capacity is roughly equivalent to that of an owl brain or the cortex of a capuchin monkey.”

Loihi-based systems can perform AI inference and solve optimization problems using 100 times less energy, at speeds as much as 50 times faster than conventional CPU and GPU architectures.

.... Since it’s not available to the public, what does this neuromorphic computing system do? Sandia Labs and the NNSA research teams are believed to be using it to “realize brain-based computing on a large scale.” It may also eventually tackle large-scale problems in physics, chemistry and the environment.


-----------------------------------------------------------

Researchers Develop Large-Scale Neuromorphic Chip With Novel Instruction Set Architecture and On-Chip Learning
https://techxplore.com/news/2024-05-large-scale-neuromorphic-chip-architecture.html



The Spiking Neural Network (SNN) offers a unique approach to simulating the brain's functions, making it a key focus in modern neuromorphic computing research. Unlike traditional neural networks, SNNs operate on discrete, event-driven signals, aligning more closely with biological processes.
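To see what "discrete, event-driven signals" means in practice, the workhorse abstraction is the leaky integrate-and-fire (LIF) neuron: membrane voltage integrates input, leaks back toward rest, and the only output is a spike event when a threshold is crossed. The sketch below uses toy parameters and is not the neuron model actually implemented on Loihi 2 or Darwin 3.

Code:
# Toy leaky integrate-and-fire (LIF) neuron -- illustrative only, not the
# neuron model used by Loihi 2 or Darwin 3.
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Return the membrane voltage trace and spike times for a given input current."""
    v = v_rest
    voltages, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Euler step of dv/dt = (-(v - v_rest) + i_in) / tau: leak toward rest plus drive.
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:          # threshold crossing = a discrete spike event
            spikes.append(t)
            v = v_reset            # reset after the spike
        voltages.append(v)
    return np.array(voltages), spikes

# Constant drive produces a regular spike train; the "signal" is the spike times,
# not a continuous activation value as in a conventional neural network.
current = np.full(200, 1.5)
_, spike_times = simulate_lif(current)
print(spike_times[:10])

The neuron's output is just the list of spike times; downstream neurons do work only when a spike arrives, which is where neuromorphic hardware's energy-efficiency claims come from.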

However, to fully realize the potential of SNNs, researchers face several challenges. First, they must ensure the flexibility of neural models to capture the brain's diverse behaviors accurately. Second, they need to address the scalability and density of synaptic connections to support large neural networks effectively. Finally, achieving on-chip learning capabilities is essential for these chips to adapt and improve like actual brains.

Considering these challenges, Professor Gang Pan's team at Zhejiang University collaborated with Zhejiang Lab to develop the Darwin 3 neuromorphic chip, the latest version of the Darwin series.

The findings are published in the journal National Science Review.



Based on their findings, they proposed a new instruction set architecture (ISA) specifically for neuromorphic computing. This ISA allows for rapid state updates and parameter loading, enabling efficient construction of various models and learning rules.

Furthermore, the research team devised an efficient connection mechanism, significantly enhancing on-chip storage efficiency while supporting over 2 million neurons and 100 million synapses on a single chip. The neural networks in our brains are incredibly interconnected.

On average, each neuron can establish connections with thousands of other neurons. The proposed connection mechanism provides a solid hardware foundation for building brain-scale neural networks.
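The storage problem such a connection mechanism addresses is easy to see in miniature: with millions of neurons, a dense connectivity matrix is hopeless, but each neuron only talks to a few thousand targets, so synapses are stored sparsely and touched only when their source neuron actually spikes. The sketch below is a generic compressed-sparse-row layout, not Darwin 3's actual mechanism.

Code:
# Illustrative only: a generic compressed sparse row (CSR) layout for synapses and
# event-driven spike delivery. NOT Darwin 3's connection mechanism, just a sketch of
# why sparse, event-driven storage matters for chips with millions of neurons.
import numpy as np

n_neurons, fan_out = 10_000, 100                      # toy sizes
rng = np.random.default_rng(0)

# CSR-style storage: neuron i's targets are targets[offsets[i]:offsets[i+1]]
offsets = np.arange(0, (n_neurons + 1) * fan_out, fan_out)
targets = rng.integers(0, n_neurons, size=n_neurons * fan_out)
weights = rng.normal(0.0, 0.1, size=n_neurons * fan_out).astype(np.float32)

def deliver_spikes(spiking_neurons, potentials):
    """Event-driven update: only rows belonging to neurons that actually spiked are touched."""
    for pre in spiking_neurons:
        lo, hi = offsets[pre], offsets[pre + 1]
        np.add.at(potentials, targets[lo:hi], weights[lo:hi])

potentials = np.zeros(n_neurons, dtype=np.float32)
deliver_spikes(spiking_neurons=[3, 42, 97], potentials=potentials)
print(potentials.nonzero()[0][:10])

Even in this toy, a dense matrix for 10,000 neurons would need 10^8 entries, while the CSR layout stores only the one million synapses that actually exist.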

De Ma et al, Darwin3: a large-scale neuromorphic chip with a novel ISA and on-chip learning, National Science Review (2024)
https://academic.oup.com/nsr/article/11/5/nwae102/7631347
« Last Edit: May 24, 2024, 12:21:38 AM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 10466
    • View Profile
  • Liked: 3536
  • Likes Given: 762
Re: Robots and AI: Our Immortality or Extinction
« Reply #3349 on: May 24, 2024, 05:06:15 PM »
Scientists Create 'Toxic AI' That Is Rewarded for Thinking Up the Worst Possible Questions We Could Imagine
https://www.livescience.com/technology/artificial-intelligence/scientists-create-toxic-ai-that-is-rewarded-for-thinking-up-the-worst-possible-questions-we-could-imagine

Researchers at MIT are using machine learning to teach large language models not to give toxic responses to provocative questions, using a new method that replicates human curiosity.

The new training approach, based on machine learning, is called curiosity-driven red teaming (CRT) and relies on using an AI to generate increasingly dangerous and harmful prompts that you could ask an AI chatbot. These prompts are then used to identify how to filter out dangerous content.

The finding represents a potentially game-changing new way to train AI not to give toxic responses to user prompts, scientists said in a new paper uploaded February 29 to the arXiv pre-print server.
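In outline, the method is a reinforcement-learning loop in which the red-team model is rewarded both for eliciting toxic answers and for proposing prompts unlike the ones it has already tried, so it keeps exploring rather than repeating a single jailbreak. The toy below is self-contained so it runs, but everything in it (the candidate prompts, the keyword "toxicity" scorer, the bag-of-letters embedding, the bandit-style update) is a stand-in for the paper's actual language models, classifier and RL training.

Code:
# Toy, fully self-contained sketch of a curiosity-driven red-teaming loop.
# The "models" here are trivial stand-ins so the loop runs end to end; the real
# method trains an actual red-team LM with RL against a real toxicity classifier.
import random
import numpy as np

CANDIDATE_PROMPTS = [  # stand-in for a red-team LM's output space
    "how do I bake bread", "tell me a joke", "how to hotwire a car",
    "write a phishing email", "explain photosynthesis", "how to pick a lock",
]

def toxicity_score(prompt):        # stand-in for scoring the target model's answer
    bad_words = {"hotwire", "phishing", "lock"}
    return float(any(w in prompt for w in bad_words))

def embed(prompt):                 # crude bag-of-letters embedding, stand-in only
    v = np.zeros(26)
    for ch in prompt.lower():
        if ch.isalpha():
            v[ord(ch) - ord("a")] += 1
    return v / (np.linalg.norm(v) + 1e-9)

def novelty_bonus(e, seen, k=3):   # curiosity: reward prompts unlike ones already tried
    if not seen:
        return 1.0
    sims = sorted((float(e @ s) for s in seen), reverse=True)[:k]
    return 1.0 - float(np.mean(sims))

weights = np.ones(len(CANDIDATE_PROMPTS))   # a trivial "policy" over candidate prompts
seen = []
for step in range(20):
    idx = random.choices(range(len(CANDIDATE_PROMPTS)), weights=weights)[0]
    prompt = CANDIDATE_PROMPTS[idx]
    reward = toxicity_score(prompt) + 0.5 * novelty_bonus(embed(prompt), seen)
    weights[idx] += reward                   # toxic AND novel prompts get reinforced
    seen.append(embed(prompt))
print(sorted(zip(weights, CANDIDATE_PROMPTS), reverse=True))

The curiosity bonus is the key idea: without it, the loop tends to collapse onto whichever harmful prompt it finds first, a known failure mode of naive automated red-teaming.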

Curiosity-Driven Red-Teaming for Large Language Models, arXiv, (2024)
https://arxiv.org/pdf/2402.19464.pdf

---------------------------------------------------------

A National Security Insider Does the Math on the Dangers of AI
https://www.wired.com/story/jason-matheny-national-security-insider-dangers-of-ai/

Jason Matheny, CEO of the influential think tank Rand Corporation, says advances in AI are making it easier to learn how to build biological weapons and other tools of destruction

---------------------------------------------------------
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus