Support the Arctic Sea Ice Forum and Blog

Author Topic: Robots and AI: Our Immortality or Extinction  (Read 376342 times)

Michael Hauber

  • Nilas ice
  • Posts: 1118
    • View Profile
  • Liked: 168
  • Likes Given: 16
Re: Robots and AI: Our Immortality or Extinction
« Reply #3250 on: April 28, 2024, 11:27:39 PM »
Quote
Why Adversarial AI Is the Cyber Threat No One Sees Coming
If no one sees it coming, how come "someone" is writing about it?
 :o

Because they are talking total tosh.  Adversarial AI is a technique quite widely known in AI industry.  Such AIs are designed to trick other AIs, and the basic idea is to train a normal AI and adversarial AI together to make the normal AI stronger - as it learns to overcome the adversarial AI's tricks.

As advesarial AIs are specifically designed to trick AIs, they are not relevant to cybersecurity until AIs are given responsibility for maintaining cybersecurity, which is not really something anyone is currently considering.  Regular AIs that trick people are the concern, and I'm pretty sure absolutely everyone in the cybersecurity industry is well aware of the risk and trying to figure out what to do about it.
Climate change:  Prepare for the worst, hope for the best, expect the middle.

morganism

  • Nilas ice
  • Posts: 1906
    • View Profile
  • Liked: 230
  • Likes Given: 135
Re: Robots and AI: Our Immortality or Extinction
« Reply #3251 on: April 29, 2024, 12:56:03 AM »
(how low can you go)


Implementing Neural Networks on the “10-cent” RISC-V MCU without Multiplier

I have been meaning for a while to establish a setup to implement neural network based algorithms on smaller microcontrollers. After reviewing existing solutions, I felt there is no solution that I really felt comfortable with. One obvious issue is that often flexibility is traded for overhead. As always, for a really optimized solution you have to roll your own. So I did. You can find the project here and a detailed writeup here.

It is always easier to work with a clear challenge: I picked the CH32V003 as my target platform. This is the smallest RISC-V microcontroller on the market right now, addressing a $0.10 price point. It sports 2kb of SRAM and 16kb of flash. It is somewhat unique in implementing the RV32EC instruction set architecture, which does not even support multiplications. In other words, for many purposes this controller is less capable than an Arduino UNO.

https://cpldcpu.wordpress.com/2024/04/24/implementing-neural-networks-on-the-10-cent-risc-v-mcu-without-multiplier/

morganism

  • Nilas ice
  • Posts: 1906
    • View Profile
  • Liked: 230
  • Likes Given: 135
Re: Robots and AI: Our Immortality or Extinction
« Reply #3252 on: April 29, 2024, 01:31:06 AM »
(this is a cool design. We ended up choosing a design by a group that used coffee grounds and vacuum.)

Tiny rubber spheres used to make a programmable fluid
The spheres collapse under pressure, giving the fluid very unusual properties.

Building a robot that could pick up delicate objects like eggs or blueberries without crushing them took lots of control algorithms that process feeds from advanced vision systems or sensors that emulate the human sense of touch. The other way was to take a plunge into the realm of soft robotics, which usually means a robot with limited strength and durability.

Now, a team of researchers at Harvard University published a study where they used a simple hydraulic gripper with no sensors and no control systems at all. All they needed was silicon oil and lots of tiny rubber balls. In the process, they’ve developed a metafluid with a programmable response to pressure.

Swimming rubber spheres

“I did my PhD in France on making a spherical shell swim. To make it swim, we were making it collapse. It moved like a [inverted] jellyfish,” says Adel Djellouli, a researcher at Bertoldi Group, Harvard University, and the lead author of the study. “I told my boss, 'hey, what if I put this sphere in a syringe and increase the pressure?' He said it was not an interesting idea and that this wouldn’t do anything,” Djellouli claims. But a few years and a couple of rejections later, Djellouli met Benjamin Gorissen, a professor of mechanical engineering at the University of Leuven, Belgium, who shared his interests. “I could do the experiments, he could do the simulations, so we thought we could propose something together,” Djellouli says. Thus, Djellouli’s rubber sphere finally got into the syringe. And results were quite unexpected.

The sphere has a radius of 10 mm, and its 2-mm-thick silicone rubber walls surround a pocket of air. It was placed in a container with 300 ml of water. When the pressure in the container started to increase, the sphere, at 120 kPa, started to buckle. Once it started to buckle, pressure remained relatively steady for a while, even though the volume occupied by the fluid continued dropping. The liquid with a sphere in it did not behave like water anymore—it had a pronounced plateau in its pressure/volume curve. “Metafluids—liquids with tunable properties that do not exist in nature—were theorized by Federico Capasso and colleagues, who wanted to achieve a liquid with negative refractive index. They started with optics back then, but looking at the behavior of water with this rubber sphere in it, we knew what we had was a metafluid,” says Djellouli.

Mixing programmable fluids

Putting a single rubber sphere in the water was just a starting point. “I always had this idea in the back of my head: Like, what would happen if I put in a lot of them?” Djellouli told Ars. So, his team started to experiment with different sizes and numbers of the spheres in the medium and using different mediums like silicon oil. “You can tune pressure at which the spheres activate by changing their radius and thickness of their walls. When you make the spheres thicker, you need more energy to make them buckle and thus the activation pressure will be higher,” explains Djellouli.

There are other parameters that can be changed to program desired properties in the metafluid. These include the volume fraction—basically how much of the total fluid’s volume is taken by the spheres—and the structure of the spheres, as the fluid behaves differently when you put spheres with different sizes and thickness in it. You can also tune this by using mixtures of spheres with different properties. “If the variation in size and thickness of the spheres is very tight, you are going to have a very flat plateau of pressure when they activate. If you have a wider distribution, the transition from all unbuckled to all buckled will be smoother,” says Djellouli. Using different mixtures of spheres also enables multiple plateaus at different pressures in one fluid. “This way you can precisely tune the pressure/volume curve,” Djellouli adds.

By tuning those curves, his team managed to build a smart hydraulic gripper that works without the need for sensors or control systems.

Self-controlled robots

The goal for the gripper was to grab and hold three objects—a water bottle, an egg, and a blueberry—without crushing them. The basic design was very simple: one static finger and a second that opened and closed the grip based on the motion of a hydraulic piston. “Let’s say I want to give this actuator control but without me doing any control, and I want it to grab many different objects that vary in size, weight, and fragility,” says Djellouli.

His team started by doing this experiment with plain water and air acting as the hydraulic fluids driving the piston. It turned out there was no single volume of hydraulic material that would allow the device to grab all of them. Too little and it wouldn’t close on small objects.

“In this scenario you need to spend some fluid volume to reach the object first,” Djellouli explained. This reach volume was the highest for the blueberry, the smallest object, and the lowest for the bottle, the largest of the three. “When the gripper gets in contact with the object, it stops moving, and adding more fluid to the system starts to increase pressure to the point your object is crushed,” Djellouli said. “But with the metafluid we could do this. We tuned it to reach and hold all the objects without crushing them,” says Djellouli. His team introduced two plateaus in the metafluid that enabled the gripper to reach and hold the blueberry but kept the pressure in the safe range while grabbing the bottle and the egg.

The same trick can be used to introduce some degree of intelligence to otherwise crude and simple robots. “We can make hydraulic actuators soft and self-controlled. The fluid itself is doing all the control for us, so we don’t have to control the robot from the outside,” he adds.
(more)

https://arstechnica.com/science/2024/04/metafluid-gives-robotic-gripper-a-soft-touch/

gerontocrat

  • Multi-year ice
  • Posts: 20865
    • View Profile
  • Liked: 5312
  • Likes Given: 69
Re: Robots and AI: Our Immortality or Extinction
« Reply #3253 on: April 29, 2024, 07:40:04 AM »
As advesarial AIs are specifically designed to trick AIs, they are not relevant to cybersecurity until AIs are given responsibility for maintaining cybersecurity, which is not really something anyone is currently considering.  Regular AIs that trick people are the concern, and I'm pretty sure absolutely everyone in the cybersecurity industry is well aware of the risk and trying to figure out what to do about it. My italics..Gero
I wish I could share your confidence in the people in the AI industry - I get the impression that hubris is in the air.
"Para a Causa do Povo a Luta Continua!"
"And that's all I'm going to say about that". Forrest Gump
"Damn, I wanted to see what happened next" (Epitaph)

Ranman99

  • Frazil ice
  • Posts: 129
    • View Profile
  • Liked: 28
  • Likes Given: 11
Re: Robots and AI: Our Immortality or Extinction
« Reply #3254 on: April 29, 2024, 12:20:11 PM »
AIs and even home-built LLMs are being used to augment many facets of Cybersecurity. Right now, it is a bit like how autopilot is used to pilot aircraft and various machinery. It can help a lot.

One big area is using it to help build code quickly to link to many APIs that you require in order to draw in data from other platforms. This one use case of AI in code development simply helps you do it in one day with one senior developer, which used to take several weeks. I see this now with my own eyes. I did SW development back in the mid-80s into the early 90s, and the assistance these tools give blows my mind.

Follow these guys on LinkedIn. The two founders both have experience and education in AI and how to leverage it for the good guys to stay ahead of the bad actors.

https://cloud9security.ai/
https://www.linkedin.com/company/101755293/admin/feed/posts/


😎

SteveMDFP

  • Young ice
  • Posts: 2547
    • View Profile
  • Liked: 602
  • Likes Given: 45
Re: Robots and AI: Our Immortality or Extinction
« Reply #3255 on: April 29, 2024, 12:49:12 PM »
As advesarial AIs are specifically designed to trick AIs, they are not relevant to cybersecurity until AIs are given responsibility for maintaining cybersecurity, which is not really something anyone is currently considering.  Regular AIs that trick people are the concern, and I'm pretty sure absolutely everyone in the cybersecurity industry is well aware of the risk and trying to figure out what to do about it. My italics..Gero
I wish I could share your confidence in the people in the AI industry - I get the impression that hubris is in the air.

Making an AI system resistant to misuse/abuse/crime is automatically a very tall order.  They're amazingly versatile, flexible, and actually creative.  Their internal workings are murky to owners/operators/users.  You can only engineer safeguards for abuses that you can envision.  Using a second AI system to test the production system is wise, but almost surely inadequate. 

With enough time, multiple layers and strategies to avoid misuse can almost certainly be developed.  Sufficient time and resources will not be deployed, because of the financial pressures of capitalism.  AI is being deployed *now* with prospects of potentially huge financial rewards.  The people putting their money into these systems are expecting (demanding, really) fast deployment in order to become a "first mover" in the field.

Safeguards and time for thoughtful analysis and engineering delays deployment, robbing investors of money, potentially thwarting a huge payout.  Such delays and expenses will not be tolerated.

In principle, government regulation would be called for.  However, enacting adequate legislation is a slow and messy process, and subject to being undermined by corporate influences.  Only after legislation is passed can operational regulations be developed -- slowly.  Then, regulators need to be hired and trained.  What people smart enough to regulate AI development are going to work for government on a civil servant's salary?  Such people can command far higher income in industry.

By the time effective, enforceable regulations are in place, AI technology will already be ubiquitous in the developed world, affecting many aspects of our daily lives.  And continuing advances will leapfrog the outdated regulations.

AI deployment is effectively a wild west scenario.  The only safeguards we can expect to be reasonably in place are ones that protect the owners/operators from financial loss. 

The most useful government action here would be to make sure these operators of AI retain liability for losses sustained by users and the general public.  Industry will fight such liability tooth and nail.

Nations that enact such liability requirements will see AI investments dry up, as they shift to places without such burdensome regulations.  This may be why the EU is a relative desert in AI development and deployment, while more free-wheeling jurisdictions (like Silicon Valley) are going gangbusters in the field. 

Developments in AI deployment will surely be interesting, but highly hazardous to many, in novel ways.

vox_mundi

  • Multi-year ice
  • Posts: 10371
    • View Profile
  • Liked: 3528
  • Likes Given: 761
Re: Robots and AI: Our Immortality or Extinction
« Reply #3256 on: April 29, 2024, 04:50:02 PM »
AI Shows Near-Expert Clinical Knowledge, Reasoning for Eye Issues
https://medicalxpress.com/news/2024-04-ai-good-clinical-knowledge-eye.html

Large language models (LLMs) are approaching expert-level knowledge and reasoning skills in ophthalmology, according to a study published online April 17 in PLOS Digital Health.

Arun James Thirunavukarasu, M.B., B.Chir., from University of Oxford in the United Kingdom, and colleagues evaluated the clinical potential of state-of-the-art LLMs in ophthalmology. Responses to 87 questions were compared for GPT-3.5, GPT-4, PaLM 2, LLaMA, expert ophthalmologists, and doctors in training.

The researchers found that the performance of GPT-4 (69 percent) was superior to performance of GPT-3.5 (48 percent), LLaMA (32 percent), and PaLM 2 (56 percent) and compared favorably with expert ophthalmologists (median, 76 percent), ophthalmology trainees (median, 59 percent), and unspecialized junior doctors (median, 43 percent). Low agreement between LLMs and doctors was due to idiosyncratic differences in knowledge and reasoning, with overall consistency across individuals and type. Grading ophthalmologists preferred GPT-4 responses over GPT-3.5 due to higher accuracy and relevance.

"LLMs are approaching expert-level ophthalmological knowledge and reasoning, and may be useful for providing eye-related advice where access to health care professionals is limited," the authors write. "Further research is required to explore potential avenues of clinical deployment."

Arun James Thirunavukarasu et al, Large language models approach expert-level clinical knowledge and reasoning in ophthalmology: A head-to-head cross-sectional study, PLOS Digital Health (2024).
https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000341

---------------------------------------------------------

AI Outperforms Pathologists in Predicting Cancer Spread to Brain
https://www.technologynetworks.com/informatics/news/ai-outscores-pathologists-predicting-lung-cancer-spread-384557
https://www.psychologytoday.com/intl/blog/the-future-brain/202403/ai-outperforms-pathologists-in-predicting-cancer-spread-to-brain

A majority of U.S. lung cancer cases, approximately 80 to 85 percent, are non-small cell lung cancer (NSCLC).

Cancer spread to the brain can happen in almost half of those diagnosed with stages I to III NSCLC.

Currently there is no dependable molecular nor histopathologic method to predict brain metastases.

-------------------------------------------------------------

New AI-Based, Non-Invasive Diagnostic Tool Enables Accurate Brain Tumor Diagnosis, Surpassing Current Methods
https://medicalxpress.com/news/2024-03-ai-based-invasive-diagnostic-tool.html

--------------------------------------------------------------

GPT-4 Matches Radiologists In Detecting Errors In Radiology Reports
https://medicalxpress.com/news/2024-04-gpt-radiologists-errors-radiology.html



Large language model GPT-4 matched the performance of radiologists in detecting errors in radiology reports, according to research published in Radiology.

Errors in radiology reports may occur due to resident-to-attending discrepancies, speech recognition inaccuracies and high workload. Large language models, such as GPT-4, have the potential to enhance the report generation process.

"Prior studies have demonstrated potential applications of GPT-4 across various stages of the patient journey in radiology: for instance, selecting the correct imaging exam and protocol based on a patient's medical history, transforming free-text radiology reports into structured reports or automatically generating the impression section of a report."

However, this is the first study to distinctively compare GPT-4 and human performance in error detection in radiology reports, assessing its capabilities against radiologists of varied experience levels in terms of accuracy, speed and cost-effectiveness, Dr. Gertz noted.

For the study, 200 radiology reports (X-rays and CT/MRI imaging) were gathered between June 2023 and December 2023 at a single institution. The researchers intentionally inserted 150 errors from five error categories (omission, insertion, spelling, side confusion and "other") into 100 of the reports. Six radiologists (two senior radiologists, two attending physicians and two residents) and GPT-4 were tasked with detecting these errors.

Researchers found that GPT-4 had a detection rate of 82.7% (124 of 150). The error detection rates were 89.3% for senior radiologists (134 out of 150) and 80.0% for attending radiologists and radiology residents (120 out of 150), on average.

In the overall analysis, GPT-4 detected less errors compared with the best performing senior radiologist (82.7% vs. 94.7%). However, there was no evidence of a difference in the percentage of average performance in error detection rate between GPT-4 and all the other radiologists.

GPT-4 required less processing time per radiology report than even the fastest human reader, and the use of GPT-4 resulted in lower mean correction cost per report than the most cost-efficient radiologist.

Potential of GPT-4 for Detecting Errors in Radiology Reports: Implications for Reporting Accuracy, Radiology (2024)
https://pubs.rsna.org/doi/10.1148/radiol.232714
https://www.rsna.org/news/2024/april/gpt4-matches-radiologists

-----------------------------------------------------------------

Researchers Develop AI Foundation Models to Advance Pathology
https://medicalxpress.com/news/2024-03-ai-foundation-advance-pathology.html

Researchers at Mass General Brigham have designed two of the largest CPath foundation models to date: UNI and CONCH. These foundation models were adapted to over 30 clinical and diagnostic needs, including disease detection, disease diagnosis, organ transplant assessment, and rare disease analysis.

The new models overcame limitations posed by current models, performing well not only for the clinical tasks the researchers tested but also showing promise for identifying new, rare and challenging diseases. Papers on UNI and CONCH have been published in Nature Medicine.

UNI is a foundation model for understanding pathology images, from recognizing disease in histology region-of-interests to gigapixel whole slide imaging. Trained using a database of over 100 million tissue patches and over 100,000 whole slide images, it stands out as having universal AI applications in anatomic pathology.

Notably, UNI employs transfer learning, applying previously acquired knowledge to new tasks with remarkable accuracy.

CONCH is a foundation model for understanding both pathology images and language. Trained on a database of over 1.17 million histopathology image-text pairs, CONCH excels in tasks like identifying rare diseases, tumor segmentation, and understanding gigapixel images. Because CONCH is trained on text, pathologists can interact with the model to search for morphologies of interest.

Lu MY et al. A visual-language foundation model for computational pathology. Nature Medicine
https://www.nature.com/articles/s41591-024-02856-4

Chen, RJ et al. Towards a general-purpose foundation model for computational pathology. Nature Medicine
https://www.nature.com/articles/s41591-024-02857-3

-----------------------------------------------------------------

Experts Propose Specific Guidelines for the Use and Regulation of AI In Cancer Treatment
https://medicalxpress.com/news/2024-04-experts-specific-guidelines-ai-cancer.html



The emergence of Generalist Medical Artificial Intelligence (GMAI) models poses a significant challenge to current regulatory frameworks.

In a commentary published in the journal Nature Reviews Cancer, Stephen Gilbert and Jakob N. Kather, both professors at the EKFZ for Digital Health at TU Dresden, discuss how the regulation of these models could be handled in the future. Policy-makers will have to decide whether to radically adapt current frameworks, block generalist approaches, or force them onto narrow tracks.

Current artificial intelligence (AI) models for cancer treatment are trained and approved only for specific intended purposes. GMAI models, in contrast, can handle a wide range of medical data including different types of images and text. For example, for a patient with colorectal cancer, a single GMAI model could interpret endoscopy videos, pathology slides and electronic health record (EHR) data. Hence, such multi-purpose or generalist models represent a paradigm shift away from narrow AI models.

Regulatory bodies face a dilemma in adapting to these new models because current regulations are designed for applications with a defined and fixed purpose, specific set of clinical indications and target population. Adaptation or extension after approval is not possible without going through quality management and regulatory, administrative processes again.

GMAI models, with their adaptability and predictive potential even without specific training examples—so called zero shot reasoning—therefore pose challenges for validation and reliability assessment. Currently, they are excluded by all international frameworks.

The authors point out that existing regulatory frameworks are not well suited to handle GMAI models due to their characteristics. "If these regulations remain unchanged, a possible solution could be hybrid approaches. GMAIs could be approved as medical devices and then the range of allowed clinical prompts could be restricted," says Prof. Stephen Gilbert, Professor of Medical Device Regulatory Science at TU Dresden.

The researchers argue that it will be impossible to prevent patients and medical experts from using generic models or unapproved medical decision support systems. Therefore, it would be crucial to maintain the central role of physicians and enable them as empowered information interpreters.

Stephen Gilbert & Jakob Nikolas Kather, Guardrails for the use of generalist AI in cancer care, Nature Reviews Cancer (2024)
https://www.nature.com/articles/s41568-024-00685-8

-----------------------------------------------------------------

Regulators Alarmed by Doctors Already Using AI to Diagnose Patients
https://futurism.com/neoscope/doctors-using-ai
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10371
    • View Profile
  • Liked: 3528
  • Likes Given: 761
Re: Robots and AI: Our Immortality or Extinction
« Reply #3257 on: April 29, 2024, 04:58:28 PM »
AI Is Making Managers Nervous
https://www.businessinsider.com/bosses-fear-ai-may-lower-pay-salaries-jobs-survey-showed-2024-3

Managers are worried that using powerful generative AI tools like OpenAI's ChatGPT in the workplace might cut their salaries.

Beautiful.ai, an AI startup, surveyed 3,000 Americans in management positions to understand their attitudes toward the technology's usage.

Of those surveyed, 48% of managers reported that AI tools are a "threat to their pay" and will "fuel wage declines" across the country in 2024.

Quote
... That fear partly stems from the belief that the technology can do their jobs more effectively, with 64% of those surveyed saying its output and productivity are "equal" and "potentially better" than the quality of work human managers can churn out.

... Bosses also seem concerned that tools may lower their employees' wages. Sixty-two percent of managers surveyed, according to Beautiful.ai, said their employees feel like AI could eventually put them out of their jobs. Forty-five percent of leaders said the technology will present an "opportunity to lower salaries" across the workforce.

"There's no doubt that the implementation of AI tools has employees questioning their value to a company," according to the survey.

... And while nobody really knows how AI will disrupt work, 64% of managers, according to Beautiful.ai, say they have been using it to help manage employees on a daily or weekly basis since the start of 2024.

AI’s Impact on the Workplace: A Survey of American Managers
https://www.beautiful.ai/blog/2024-ai-workplace-impact-report

------------------------------------------------------------------

“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10371
    • View Profile
  • Liked: 3528
  • Likes Given: 761
Re: Robots and AI: Our Immortality or Extinction
« Reply #3258 on: April 29, 2024, 05:03:24 PM »
China Unveils “Tiangong”: First Fully Electric Humanoid Robot Capable of Running at 6 km/h
https://www.therobotreport.com/supcon-opens-innovation-center-launches-navigator-%CE%B1-humanoid-robot/


🥱 ... Honda's Asimo could run 9km/h in 2011 - https://en.wikipedia.org/wiki/ASIMO

The Beijing Humanoid Robot Innovation Center has unveiled the “Tiangong,” a humanoid robot capable of human-like running at a speed of 6 kilometers per hour. This marks a significant development in the field of robotics, with Tiangong being the world’s first fully electrically driven humanoid robot to achieve this feat.

Standing at 163 centimeters tall and weighing 43 kilograms, Tiangong boasts a lightweight design that allows for stable running. It utilizes multiple visual perception sensors, a high-precision inertial measurement unit (IMU), and 3D vision sensors, enabling it to navigate its environment effectively. Additionally, the robot is equipped with high-precision six-dimensional force sensors for accurate force feedback.

Tiangong utilizes a new humanoid robot motion skill learning method called “State Memory-based Predictive Reinforcement Imitation Learning.” This method played a crucial role in achieving the robot’s ability to run naturally.

The developers have emphasized the openness and compatibility of Tiangong, allowing for open communication interfaces and flexible expansion of software, hardware, and other functional modules. This adaptability ensures the robot’s potential for various application scenarios.

Tiangong successfully navigated slopes and stairs even without visual input, showcasing its ability to adapt to changing environments. It also demonstrated agility by adjusting its gait when encountering obstacles or uneven terrain.

“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10371
    • View Profile
  • Liked: 3528
  • Likes Given: 761
Re: Robots and AI: Our Immortality or Extinction
« Reply #3259 on: April 29, 2024, 09:54:55 PM »
Llama 3 Launches Alongside New Stand-alone Meta AI Chatbot
https://venturebeat.com/ai/llama-3-launches-alongside-new-stand-alone-meta-ai-chatbot/

It’s been anticipated for a while now, but today it’s finally here: Llama 3, the latest large language model (LLM) from Meta Platforms, parent company of Facebook, Instagram, WhatsApp, Threads, Oculus VR, and more, is making its debut with claims of being among the most powerful “open source” AI models yet released. The release comes just hours after Llama 3 appeared on Microsoft’s Azure cloud service in an apparent early leak.

https://llama.meta.com/llama3/

The Llama 3 family initially includes two versions — an 8 billion and 70 billion-parameter version, referring to the connections between artificial neurons within each model — with a 400 billion parameter model being actively trained by Meta now (though there is no timetable on when it might be released).

“From a performance perspective, it is really off the charts in terms of benchmarking capabilities,” said Ragavan Srinivasan, Meta VP of Product, in a video chat interview with VentureBeat, discussing the upcoming 400 billion parameter model.

For now, the Llama 3 8B and 70B versions offer benchmarks on par with or slightly exceeding rival models from Google (Gemma and Gemini Pro 1.5), Anthropic (Claude 3 Sonnet), and Mistral (7B Instruct). In particular, Meta’s Llama 3 does well at multiple choice questions (MMLU) and coding (HumanEval), but the 70B is not as strong as Gemini Pro 1.5 at solving math word problems (MATH), nor at graduate-student level multiple choice questions (GPQA).



“If you look at the many benchmarks that you have for LLMs, they typically fall into these five categories of general knowledge, reading comprehension, math, reasoning, code,” ... “What you see with this release, specifically Llama 3 8B and 70B, they are better than any other open model, and even comparable to some of the best closed models and better across all of these benchmarks.”

... Llama 3 was trained on more than 15 trillion tokens “all collected from publicly available sources,” 7X more than Llama 2, according to Meta.

--------------------------------------------------------

Meta AI Releases OpenEQA to Spur ’Embodied Intelligence’ In Artificial Agents
https://venturebeat.com/ai/meta-ai-releases-openeqa-to-spur-embodied-intelligence-in-artificial-agents/

Meta AI researchers today released OpenEQA, a new open-source benchmark dataset that aims to measure an artificial intelligence system’s capacity for “embodied question answering” — developing an understanding of the real world that allows it to answer natural language questions about an environment.

The dataset, which Meta is positioning as a key benchmark for the nascent field of “embodied AI,” contains over 1,600 questions about more than 180 different real-world environments like homes and offices. These span seven question categories that thoroughly test an AI’s abilities in skills like object and attribute recognition, spatial and functional reasoning, and commonsense knowledge



The OpenEQA project sits at the intersection of some of the hottest areas in AI: computer vision, natural language processing, knowledge representation and robotics. The ultimate vision is to develop artificial agents that can perceive and interact with the world, communicate naturally with humans, and draw upon knowledge to assist us in our daily lives.

https://ai.meta.com/blog/openeqa-embodied-question-answering-robotics-ar-glasses/

https://open-eqa.github.io/assets/pdfs/paper.pdf

--------------------------------------------------------

Facebook’s AI Told Parents Group It Has a Gifted, Disabled Child
https://www.404media.co/facebooks-ai-told-parents-group-it-has-a-disabled-child/

Meta’s AI chatbot told a Facebook group of tens of thousands of parents in New York City that it has a child who is both gifted and challenged academically and attends a specific public school in the city.

“Does anyone here have experience with a ‘2e’ child (both ‘gifted’/academically advanced and disabled… in any of the NYC G&T [Gifted & Talented] programs, especially the citywide or District 3 priority programs?” a parent in the group asked. “Would love to hear your experience good or bad or anything in between.”



A screenshot of the post was tweeted by Aleksandra Korolova, an assistant professor at Princeton University who studies algorithm auditing and fairness and who was just appointed a fellowship to study how AI impacts society and people. 404 Media verified that the post is real and the group that it is posted in, which we are not naming because it is a private group. “2e” is a term that means “twice exceptional” and is used to refer to children who are both academically gifted and have at least one learning or developmental disability.
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Michael Hauber

  • Nilas ice
  • Posts: 1118
    • View Profile
  • Liked: 168
  • Likes Given: 16
Re: Robots and AI: Our Immortality or Extinction
« Reply #3260 on: April 29, 2024, 10:58:55 PM »
As advesarial AIs are specifically designed to trick AIs, they are not relevant to cybersecurity until AIs are given responsibility for maintaining cybersecurity, which is not really something anyone is currently considering.  Regular AIs that trick people are the concern, and I'm pretty sure absolutely everyone in the cybersecurity industry is well aware of the risk and trying to figure out what to do about it. My italics..Gero
I wish I could share your confidence in the people in the AI industry - I get the impression that hubris is in the air.


In general the risks are quite scary, but I think this particular issue is a nonsense.  In general I have confidence that people in the AI industry are very competent and trying to do the best.  They understand far more than a hack journalist sprouting stuff about risks that no one else sees.  But I doubt they understand enough to prevent problems with what may be the most powerful invention in our history by a long way.  Hopefully the benefits outweigh the problems.

To me its not hubris, but normal human curiousity, and the fact that nature abhors a vacuum.  Even if a hundred scientists were to say the risk is too high and it shouldn't be done, there will the 101st that will say 'what the heck lets try it'.  And generally the only people that get to be research scientists or IT start up executives etc are people that are willing to have a go at competing in ultra competitive and high risk fields and have been lucky enough to get to where they already are.
Climate change:  Prepare for the worst, hope for the best, expect the middle.

kassy

  • First-year ice
  • Posts: 8490
    • View Profile
  • Liked: 2058
  • Likes Given: 1996
Re: Robots and AI: Our Immortality or Extinction
« Reply #3261 on: April 29, 2024, 11:25:50 PM »
People are very competent and trying to do the best all the time and look what mess that got us into...
Þetta minnismerki er til vitnis um að við vitum hvað er að gerast og hvað þarf að gera. Aðeins þú veist hvort við gerðum eitthvað.

vox_mundi

  • Multi-year ice
  • Posts: 10371
    • View Profile
  • Liked: 3528
  • Likes Given: 761
Re: Robots and AI: Our Immortality or Extinction
« Reply #3262 on: April 30, 2024, 04:39:40 PM »


The description that accompanied the video, Boston Dynamics stated, "Sparkles is a custom costume designed just for Spot to explore the intersections of robotics, art, and entertainment." This is particularly intriguing as the Spot robot’s moves are created using Boston Dynamic’s specially designed “Choreographer” software.

When paired with Choreographer software, Spot's adaptability makes it possible to use the dog-shaped robot as a dancer for entertainment. It is also capable of speaking, so it can be used for a variety of entertainment tasks, said Boston Dynamics. “The Choreographer controller understands Spot’s physics and environment, prioritising balance first and then following the specified steps,” noted the company in a blog post.

https://bostondynamics.com/blog/in-step-with-spot/
« Last Edit: April 30, 2024, 04:45:23 PM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10371
    • View Profile
  • Liked: 3528
  • Likes Given: 761
Re: Robots and AI: Our Immortality or Extinction
« Reply #3263 on: April 30, 2024, 05:05:20 PM »
Mysterious ‘gpt2-chatbot’ AI Model Baffles Experts: Early ChatGPT-4.5?-5?
https://venturebeat.com/ai/mysterious-gpt2-chatbot-ai-model-baffles-experts-a-breakthrough-or-mere-hype/

The model, dubbed “gpt2-chatbot,” surfaced with no fanfare on a website popular for comparing AI language systems (LMSYS Chatbot Arena built with Gradio). But its performance has been anything but low-profile, with AI experts expressing surprise and excitement that it matches and possibly exceeds the abilities of GPT-4, the most advanced system unveiled to date by the prominent lab OpenAI.

https://chat.lmsys.org

“[It’s] obviously impossible to tell who made it, but i would agree with assessments that it is at least GPT-4 level” said Andrew Gao, an AI researcher and Stanford University student who has been closely tracking the emergence of ‘gpt2-chatbot’ online.

https://twitter.com/itsandrewgao/status/1785013026636357942

In a series of posts on X.com (formerly Twitter), he noted that the model solved a problem from the International Math Olympiad, a prestigious competition for high school students, on its first attempt. “The IMO is insanely hard,” Gao said. “Only the four best math students in the USA get to compete.”

... uh.... gpt2-chatbot just solved an International Math Olympiad (IMO) problem in one-shot

https://twitter.com/itsandrewgao/status/1785056612425851069



Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania who studies AI, said that in his experiments, the model performed better than GPT-4 on complex reasoning tasks like writing code to draw a picture of a unicorn. “Maybe better than GPT-4,” he said. “Hard to tell, but it does do much better at the iconic ‘draw a unicorn with code‘ task.”

The model’s strong performance has sparked rampant speculation about who might have created it and why it was released without publicity through a testing website.

OpenAI CEO Sam Altman added fuel to the fire of speculation, posting on X that “I do have a soft spot for gpt2,” initially posted as GPT-2 but edited to match the style of the new AI model.

https://twitter.com/sama/status/1785107943664566556

https://twitter.com/simonw/status/1785011380871106956
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10371
    • View Profile
  • Liked: 3528
  • Likes Given: 761
Re: Robots and AI: Our Immortality or Extinction
« Reply #3264 on: April 30, 2024, 07:07:39 PM »
Will Fearless and Tireless Robots Lead to More Terrifying Wars?
https://warontherocks.com/2024/04/will-fearless-and-tireless-robots-lead-to-more-terrifying-wars/



While developing the nuclear bomb, Robert Oppenheimer and his colleagues expressed concerns about the possibility of igniting the Earth’s atmosphere. Today, with the emergence of autonomous weapons, we are faced with a similar risk of causing catastrophic damage by unleashing weapons that can kill without feeling fear. The consequences of unleashing such fearless weapons on the battlefield could be far more devastating than we can imagine. Indeed, humanity may come to miss the restraining and mitigating effects of fear, fatigue, and stress on the horrors of combat.

Throughout military history, warfare has been wedded to humans who kill under the shadow of primordial danger and fear. People behave differently when they think they have a chance of dying. The combined psychological stressors of combat can aid in producing friction, which can impede the most intricately drawn “blue arrows” on any battle plan from coming to fruition. With this in mind, it is critical to consider how supplementing humans with autonomous weapons will impact the future face of battle.

https://www.rand.org/content/dam/rand/pubs/monographs/2005/RAND_MG191.pdf
https://clausewitzstudies.org/readings/OnWar1873/BK1ch07.html#a

Autonomous weapons, immune to the psychological factors of combat, are on the horizon and will usher in a new era of lethality. They will influence offensive and defensive operations and provide novel strategic options. The deployment of autonomous weapons has the potential to make warfare more efficient, but it also has the potential to make it more gruesome and terrible.

Eliminating Fear, Fatigue, Stress, and Hesitation?

Fear, fatigue, stress, and hesitation have long been the engineers for impeding war plans. But in the age of autonomous warfare, machines will be invulnerable to them. Many of us have seen or perhaps even produced those beautifully drawn blue arrows on a battle plan that moves unflinchingly toward an objective. However, a stark difference exists between planning in the operations center and contact with the enemy. Battle plans can quickly fall apart for many reasons, but it comes down to the fact that humans are imperfect vessels of plans.

Through investments in rigorous training, modern militaries have developed ways to sensitize soldiers to the stress and shock of combat. Still, no training can replicate the actual dangers of war. By contrast, autonomous machines will not need live fire training to fabricate courage under fire. Instead, their courage will be programmed into their code.

Fatigue and stress, which have always impacted human armies, will be mitigated by autonomous weapons. The effectiveness of a human unit can decrease and require rest the longer it is exposed to combat. Even in remote warfare, we have seen drone pilots still subject to the stresses of watching their targets for endless hours as well as the toll of killing — which can affect them in several ways, including post-traumatic stress disorder. Autonomous “warbots” will not need time to rest away from the vortex of combat. Their endurance will not be limited by a body that requires rest or therapy. Instead, they will only be limited by the availability of fuel and by the wear of their hardware.

Those of us who have personally experienced combat know that people can freeze or hesitate during combat. Freezing, or what is medically known as Acute Stress Reaction, can take soldiers out of a fight in a varying timeframe, from lasting seconds and minutes to even the duration of the action. Autonomous weapons, immune to stress, will not suffer from these psychological reactions inhibiting their performance. There will likely be little hesitation and an absence of freezing for our future autonomous comrades. Instead, the autonomous warriors will kill enemy combatants with the same ease as a speed camera taking a photo of a speeding vehicle.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8965216/

Strategy, Offense, and Defense in Autonomous Warfare

Autonomous armies have the potential to influence the conduct of offensive and defensive operations as well as strategic options forever. Wider use of killing autonomously is sure to open a Pandora’s Box, offering commanders and policymakers a tool whose ramifications we can only attempt to prophesize, including on its lethality. A 2017 Harvard Belter Center report stated that lethal autonomous weapons may prove “as disruptive as nuclear weapons.”

Quote
... Platforms immune to reason, bargaining, pity, or fear will possess the ability to eliminate the psychological and physical stressors that have long prevented the most ingenious of plans from coming to fruition.

Human attacks can stall, break down, or quit during offensive operations — long before their overall capabilities do. On the other hand, autonomous units on the offense will not stop after incurring massive casualties. Instead, they will advance until their programming orders otherwise. Lethal autonomous weapons will achieve what planners want consistently: giving the “blue arrows” their victory. They won’t be bogged down by the whizz of incoming bullets or by casualties. Autonomous weapons will not have to pause their attacks to establish medical evacuations. They will be able to sail, drive, or fly past the flaming hulks of their fellow platforms — and continue to deliver death on an industrial scale.

https://clausewitzstudies.org/readings/OnWar1873/BK1ch04.html#a

The same factors should also be considered for defensive operations. Human units have historically surrendered or withdrawn long before their total capability to resist has dissolved. The human heart fails before a unit’s combat effectiveness. The power of autonomous platforms in defense may be even more lethal than machine guns and artillery during World War I. On the defense, holding to the “last man” has long been the anomaly, such as in the storied accounts of Thermopylae or the Alamo. However, with autonomous platforms, fighting to the last machine will not be an exception but the norm.

Another aspect to consider in this new age of warfare is besides the possibility of removing some of the fear and risk for the combatants — it may do the same for policymakers when considering strategic options. Perhaps policymakers will be less cautious about employing the military instrument of national power when lives are not at risk. The proliferation of autonomous weapons may also give states more staying power, maintaining popular will with a lack of human casualties, especially during small wars. There will likely not be protests to bring “our machines” home.

There are no international regulations for autonomous systems. ... Autonomous weapons will not disobey orders or succumb to humanitarian sentiment. Machines will kill whatever or whoever they are programmed to destroy, making them an attractive tool for would-be advocates of war crimes, authoritarian regimes, and architects of genocide. Authoritarian regimes will not have to worry about their forces hesitating to kill crowds of protestors. Instead, autonomous forces will destroy uprisings with a cold efficiency. The architects of genocide will not have to rely on highly radicalized troops or special facilities to commit mass atrocities.

... By limiting or altogether removing the elements of fear, fatigue, stress, and hesitation, many of our attacks and defenses will achieve our bloody objectives with cold efficiency and speed never before seen on the field of battle. Of the many things in war that we should be wary of is when killing becomes too easy. ... The core question is this: Are we ready for this new revolution of warfare, which may unleash a new era of lethality — making warfare even more efficient, grotesque, and terrible?

--------------------------------------------------------



Kyle Reese : Listen, and understand! That Terminator is out there! It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop... ever, until you are dead!

- The Terminator (1984)

« Last Edit: April 30, 2024, 07:23:40 PM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10371
    • View Profile
  • Liked: 3528
  • Likes Given: 761
Re: Robots and AI: Our Immortality or Extinction
« Reply #3265 on: April 30, 2024, 07:09:14 PM »
Navy May Add Armed, Robotic "Mobile Fire Base" Floating Attack Drone
https://warriormaven.com/sea/navy-may-add-armed-robotic-mobile-fire-base-floating-attack-drone



Tomahawks, torpedoes, over-the-horizon missiles, 5-inch guns and SM-3 interceptors might all integrate onto and arm a single, mobile surface drone warship … capable of merging reconnaissance missions with defensive and offensive weapons operating under human supervision.

As the Navy seeks to accelerate its “drone” explosion, the service continues to contemplate a wide range of weapons applications and operational formations for unmanned systems…..one of which could involve the creation of an armed, maritime “mobile fire base” capable of employing a wide range of sensors, countermeasures and weapons as needed in surface warfare.

“We are clearly well into the evolution of the unmanned across all domains ... .air, surface and undersea - unmanned AI and distributed AI is where things are headed,” retired Maj. Gen. David Coffman, former director of Expeditionary Warfare for the Navy, and Warrior Expert Analyst and contributor, explained in a discussion about future Naval warfare.

“A mobile fire base at sea could be supported by distributed networking capability. As long as I have a radio signal, I can use a cheap unmanned surface vehicle that can move,” Coffman explained.

To  a large degree, the Navy's concept of a mobile fire base aligns with its current vision for its Large Unmanned Systems Vehicle (LUSV), a large robotic warship intended to support Carrier Strike Groups and Marine Corps Expeditionary Strike Groups with forward reconnaissance, targeting and "attack" under human supervision.. Navy documents describe the LUSV, which is still amid conceptual development and requirements analysis, as an anti-submarine and "strike warfare" platform.  The Navy is now asking industry for input on configurations, weapons and sensor technologies for its LUSV.

-----------------------------------------------------------

The US Navy has filed a new trademark for:
https://twitter.com/JoshGerben/status/1772968855888916626

"NavyGPT"

The filing claims that the Navy plans to roll out its own "NavyGPT"-branded AI for:

✔️General text generation
✔️Software code generation
✔️Translation of different language



-----------------------------------------------------------

The U.S. Military's Investments Into Artificial Intelligence Are Skyrocketing
https://www.brookings.edu/articles/the-evolution-of-artificial-intelligence-ai-spending-by-the-u-s-government/

U.S. government spending on artificial intelligence has exploded in the past year, driven by increased military investments, according to a report by the Brookings Institution, a think tank based in Washington D.C.

--------------------------------------------------------

AI Making It Easier to Carry Out Chemical, Biological, Nuclear Attacks: DHS
https://www.dhs.gov/sites/default/files/2024-04/24_0429_cwmd-dhs-fact-sheet-ai-cbrn.pdf

Emerging technologies in artificial intelligence will make it easier for bad actors to "conceptualize and conduct" chemical, biological, radiological or nuclear attacks, according to a report released by the Department of Homeland Security on Monday.

https://www.dhs.gov/sites/default/files/2024-04/24_0429_cwmd-dhs-fact-sheet-ai-cbrn.pdf

... A separate DHS report produced by the Cybersecurity and Infrastructure Security Agency (CISA) last week highlighted that some attacks could be carried out or helped by using AI -- including those targeting critical infrastructure.

... "In many respects, we are using investigative and threat mitigation strategies that were intended to address the threats of yesterday, while those engaged in illegal and threat related activity are using the technologies of today and tomorrow to achieve their objectives"

... Last week, the DHS announced the creation of a new AI board that includes 22 representatives from a range of sectors, including software and hardware companies, critical infrastructure operators, public officials, the civil rights community and academia.

-----------------------------------------------------

Machine Learning Classifies 191 of the World's Most Damaging Viruses
https://phys.org/news/2024-04-machine-world-viruses.html



Researchers from the University of Waterloo have successfully classified 191 previously unidentified astroviruses using a new machine learning-enabled classification process.

The study, "Leveraging machine learning for taxonomic classification of emerging astroviruses," was recently published in Frontiers in Molecular Biosciences.

Astroviruses are some of the most damaging and widespread viruses in the world. These viruses cause severe diarrhea, which kills more than 440,000 children under the age of 5 annually. In the poultry industry, astroviruses like avian flu have an 80% infection rate and a 50% mortality rate among livestock, leading to economic devastation, supply chain disruption, and food shortages.

Astroviruses mutate quickly and can spread easily across their more than 160 host species, putting researchers and public health officials in a constant race to classify and understand new astroviruses as they emerge. In 2023, there were 322 unidentified astroviruses with distinct genomes. This year, that number has risen to 479.

Fatemeh Alipour et al, Leveraging machine learning for taxonomic classification of emerging astroviruses, Frontiers in Molecular Biosciences (2024)
https://www.frontiersin.org/articles/10.3389/fmolb.2023.1305506/full

---------------------------------------------------------

AI Could be Tapped to Design Weapons of Mass Destruction, DHS Warns
https://www.nextgov.com/artificial-intelligence/2024/04/ai-could-be-tapped-design-weapons-mass-destruction-dhs-warns/396184/

In the report on chemical, biological, radiological and nuclear threats, the DHS Countering Weapons of Mass Destruction Office and Cybersecurity and Infrastructure Security Agency analyzed the risk AI systems could pose when intersected with weapons of mass destruction and developed recommended steps to counter these emerging risks.

The report, submitted to the President, identifies trends within the growing AI field along with distinct types of AI and machine learning models that might enable or exacerbate biological or chemical threats to the U.S. It also includes national security threat mitigation techniques through oversight of the training, deployment, publication and use of AI models and the data used to create them — particularly regarding how safety evaluations and guardrails can be leveraged in these instances.

“The responsible use of AI holds great promise for advancing science, solving urgent and future challenges, and improving our national security, but AI also requires that we be prepared to rapidly mitigate the misuse of AI in the development of chemical and biological threats,” assistant secretary for CWMD Mary Ellen Callahan said in a press release. “This report highlights the emerging nature of AI technologies, their interplay with chemical and biological research and the associated risks, and provides longer-term objectives around how to ensure safe, secure, and trustworthy development and use of AI.”

DHS Advances Efforts to Reduce the Risks at the Intersection of Artificial Intelligence and Chemical, Biological, Radiological, and Nuclear (CBRN) Threats
https://www.dhs.gov/sites/default/files/2024-04/24_0429_cwmd-dhs-fact-sheet-ai-cbrn.pdf

-------------------------------------------------------

DHS Facilitates the Safe and Responsible Deployment and Use of Artificial Intelligence in Federal Government, Critical Infrastructure, and U.S. Economy
https://www.dhs.gov/news/2024/04/29/fact-sheet-dhs-facilitates-safe-and-responsible-deployment-and-use-artificial
« Last Edit: April 30, 2024, 07:24:33 PM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

morganism

  • Nilas ice
  • Posts: 1906
    • View Profile
  • Liked: 230
  • Likes Given: 135
Re: Robots and AI: Our Immortality or Extinction
« Reply #3266 on: April 30, 2024, 08:49:00 PM »
New Smart Adhesive with Unmatched Strength and Versatility

a study published in the National Science Review, scientists at Nanyang Technological University in Singapore (NTU Singapore) have developed a smart, reusable adhesive. This new adhesive surpasses the adhesion strength of gecko feet by more than ten times. The breakthrough could lead to the creation of advanced reusable superglues and grippers capable of handling heavy loads on both rough and smooth surfaces.

The NTU research team, coordinated by Professor K Jimmy Hsia, discovered a technique to improve the adhesion of smart adhesives by employing shape-memory polymers, which can attach and release easily when heated.

The team describes how they designed the shape-memory polymer material to resemble hair-like fibrils, which led to a breakthrough in adhesion.

This innovative adhesive can support extremely heavy weights, presenting new opportunities for robotic grippers. These grippers could enable humans to effortlessly scale walls or allow climbing robots to adhere to ceilings for surveying or repair tasks.
(snip)
Shape-memory polymers are materials that can "remember" their original form and revert to it after being deformed when stimulated by external factors such as heat, light, or electrical current. These properties make them ideal for creating switchable adhesives that can adjust to different surfaces.

Researchers used a shape-memory polymer called E44 epoxy in their tests. At room temperature, this material is stiff and glass-like. However, when heated, it transforms into a soft, rubber-like state that can mold to and grip microscopic irregularities on surfaces. As it cools, it solidifies, forming strong adhesive bonds through a shape-locking effect.

Upon reheating, the material returns to its rubbery state, allowing for easy detachment from the surface it was adhered to.

The researchers discovered that the optimal adhesion was achieved by shaping the polymer into an array of hair-like fibrils. They determined that larger fibrils offered weaker adhesion, while smaller fibrils were challenging to produce and prone to collapse and degradation. The ideal size for the fibrils was between 0.5 mm and 3 mm in radius, balancing strong adhesion with structural integrity.

In their experiments, a single fibril with a 19.6 mm2 cross-section could support up to 1.56 kg. Additional fibrils increased the supportable weight significantly. For example, a palm-sized array of 37 fibrils, weighing about 30 grams, could support up to 60 kg—the weight of an average adult.

The transition temperature of the polymer, where it shifts between states, can be finely controlled by altering the ratios of the components used in its formulation. This adaptability enables the use of the polymer in extreme conditions, such as in hot climates. For their experiments, the researchers set the detachment temperature at 60 °C, a level typically above most everyday environmental temperatures.

This heat-responsive characteristic allows the polymer to function as a reusable superglue that leaves no sticky residue. Additionally, it can be used to make soft grippers that adhere to objects with varying surface textures, holding them securely for prolonged periods.
(more)
https://www.azom.com/news.aspx?newsID=62920

Sigmetnow

  • Multi-year ice
  • Posts: 26091
    • View Profile
  • Liked: 1164
  • Likes Given: 435
Re: Robots and AI: Our Immortality or Extinction
« Reply #3267 on: April 30, 2024, 09:43:30 PM »
Walter Isaacson
 
Instead of barring students from using AI for their term papers, I required it. The prompt was to tell the history of AI, from Turing to LLMs. Here are the surprising results, ranging from a John Grisham mystery story to an SNL sketch to an epic poem:
 
➡️  https://aiinnovatorsarchive.tulane.edu/2024/   Opens to a page of interesting thumbnails and a link for each of the papers.

4/29/24, https://x.com/walterisaacson/status/1785017001997455362
« Last Edit: May 01, 2024, 02:10:20 PM by Sigmetnow »
People who say it cannot be done should not interrupt those who are doing it.

morganism

  • Nilas ice
  • Posts: 1906
    • View Profile
  • Liked: 230
  • Likes Given: 135
Re: Robots and AI: Our Immortality or Extinction
« Reply #3268 on: April 30, 2024, 11:37:38 PM »
War Zone Surveillance Technology Is Hitting American Streets

At least two Texas communities along the U.S.-Mexico border have purchased technology that tracks people’s locations using data from personal electronics and license plates.

Big Brother isn’t just watching you: He’s using your cell phone, smartwatch, wireless earbuds, car entertainment systems and license plates to track your location in real time.

Contracting records and notes from local government meetings obtained by NOTUS show that federal and state Homeland Security grants allow local law enforcement agencies to surveil American citizens with technology more commonly found in war zones and foreign espionage operations.

At least two Texas communities along the U.S.-Mexico border have purchased a product called “TraffiCatch,” which collects the unique wireless and Bluetooth signals emitted by nearly all modern electronics to identify devices and track their movements. The product is also listed in a federal supply catalog run by the U.S. government’s General Services Administration, which negotiates prices and contracts for federal agencies.

“TraffiCatch is unique for the following reasons: ability to detect in-vehicle wireless signals [and] merge such signals with the vehicle license plate,” wrote Jenoptik, the Germany-based manufacturer, in a contracting solicitation obtained by NOTUS under Texas public records law.

In another bid to win a contract from a public consortium that services Texas school districts, Jenoptik describes TraffiCatch as a “wireless device detection” system that “records wireless devices Wifi, Bluetooth and Bluetooth Low Energy signal identifiers that come within range of the device to record gathered information coupled with plate recognition in the area. This can provide additional information to investigators trying to locate persons of interest related to recorded crimes in the area.”

Combining license plate information with data collected from wireless signals is the kind of surveillance the U.S. military and intelligence agencies have long used, with devices mounted in vehicles, on drones or carried by hand to pinpoint the location of cell phones and other electronic devices. Their usage was once classified and deployed in places like Afghanistan and Iraq.

Today, similar devices are showing up in the streets of American cities near the U.S.-Mexico border.
(more)

https://www.notus.org/technology/war-zone-surveillance-border-us

Freegrass

  • Young ice
  • Posts: 3985
  • Autodidacticism is a complicated word
    • View Profile
  • Liked: 986
  • Likes Given: 1276
Re: Robots and AI: Our Immortality or Extinction
« Reply #3269 on: May 01, 2024, 12:51:47 AM »
I'm completely blown away by this interview. Computers that are one million times cheaper than today in ten years from now.  ???

When computers are set to evolve to be one million times faster and cheaper in ten years from now, then I think we should rule out all other predictions. Except for the one that we're all fucked...

Neven

  • Administrator
  • First-year ice
  • Posts: 9554
    • View Profile
    • Arctic Sea Ice Blog
  • Liked: 1339
  • Likes Given: 618
Re: Robots and AI: Our Immortality or Extinction
« Reply #3270 on: May 01, 2024, 01:04:01 PM »
It's the jacket that leaves me speechless.

Can't wait for computers costing 0.001 USD though.
The enemy is within
Don't confuse me with him

E. Smith

vox_mundi

  • Multi-year ice
  • Posts: 10371
    • View Profile
  • Liked: 3528
  • Likes Given: 761
Re: Robots and AI: Our Immortality or Extinction
« Reply #3271 on: May 01, 2024, 01:39:57 PM »
TSMC Announces New System-on-Wafer Process With 3D-Stacking
https://www.extremetech.com/computing/tsmc-announces-new-system-on-wafer-process-with-3d-stacking
https://www.tomshardware.com/tech-industry/tsmc-to-go-3d-with-wafer-sized-processors-cow-sow-system-on-wafer-technology-allows-3d-stacking-for-the-worlds-largest-chips



This week, TSMC held a technology conference in the heart of Silicon Valley to showcase some of its upcoming technology. We have already covered its upcoming A16 manufacturing node. The company also announced an even more ambitious technology named System-on-Wafer (SoW) that will allow for 3D stacking of logic and memory directly on top of a 300mm wafer-sized chip.

https://www.extremetech.com/computing/tsmc-announces-16nm-a16-node-for-2026

TSMC says the initial version of System-on-Wafer is a logic-only wafer using its Integrated Fan-Out (InFO) design, which it says is already in production. It will also produce a version using its Chip-on-Wafer-on-Substrate (CoWoS) technology by 2027, allowing for a monumental increase in performance-per-watt and overall compute power. TSMC says this future design could allow for a single wafer-sized chip to offer the performance of an entire server rack, or an entire server from a single wafer-scale chip.

https://www.tomshardware.com/tech-industry/tsmc-to-go-3d-with-wafer-sized-processors-cow-sow-system-on-wafer-technology-allows-3d-stacking-for-the-worlds-largest-chips

The concept of wafer-scale chips seems to be gaining steam lately, though, as Cerebras just announced its own CS3 design, which uses 4 trillion transistors and is 56 times the size of a single Nvidia H100.

“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10371
    • View Profile
  • Liked: 3528
  • Likes Given: 761
Re: Robots and AI: Our Immortality or Extinction
« Reply #3272 on: May 01, 2024, 02:02:45 PM »
Quote from: morganism
War Zone Surveillance Technology Is Hitting American Streets...

Cops Are Now Using AI to Generate Police Reports
https://gizmodo.com/cops-are-now-using-ai-to-generate-police-reports-1851429617

Axon, the public safety contractor that popularized the Taser, has launched a new product that is less actively terrifying but still vaguely concerning: an AI-powered software program that lets cops automate their police reports.

Axon calls its new product Draft One. According to a press release published on Tuesday, Draft One is a “revolutionary new software product that drafts high-quality police report narratives in seconds.” The software is powered by the powerful large language model GPT-4, and can supposedly write reports by auto-transcribing audio from the police body cameras that Axon sells.

Some critics have been quick to note that this product, which was designed to solve problems for the police, could also cause a host of problems for everyone else. ...  Dave Maass, surveillance technologies investigations director at the Electronic Frontier Foundation, called the new product “kind of a nightmare.” ... Daniel Linskey, a former Boston Police Department Superintendent-in-Chief who was also interviewed by the news outlet, similarly urged caution in the tech’s deployment.

Problematically, AI has been known to “hallucinate”—that is, to make up nonsense. At the same time, it seems possible that police could—in certain cases—use the software to absolve themselves of legal responsibility. That is, if something questionable were to crop up in a police report, and the report was “written” with Axon’s new software, it seems plausible that cops could falsely blame the software for mistakes or inaccuracies that were actually inserted by a human—sowing doubt along the way.

As such, any deployment of this technology would need strong regulatory guidelines to ensure it isn’t misused by police departments.

-----------------------------------------------------------

https://www.axon.com/products/draft-one

https://my.axon.com/s/resources?language=en_US

https://my.axon.com/s/draft-one?language=en_US

---------------------------------------------------------

“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

SteveMDFP

  • Young ice
  • Posts: 2547
    • View Profile
  • Liked: 602
  • Likes Given: 45
Re: Robots and AI: Our Immortality or Extinction
« Reply #3273 on: May 01, 2024, 03:39:22 PM »
TSMC Announces New System-on-Wafer Process With 3D-Stacking
https://www.extremetech.com/computing/tsmc-announces-new-system-on-wafer-process-with-3d-stacking
https://www.tomshardware.com/tech-industry/tsmc-to-go-3d-with-wafer-sized-processors-cow-sow-system-on-wafer-technology-allows-3d-stacking-for-the-worlds-largest-chips
...
https://www.extremetech.com/computing/tsmc-announces-16nm-a16-node-for-2026

TSMC says the initial version of System-on-Wafer is a logic-only wafer using its Integrated Fan-Out (InFO) design, which it says is already in production. It will also produce a version using its Chip-on-Wafer-on-Substrate (CoWoS) technology by 2027, allowing for a monumental increase in performance-per-watt and overall compute power.

Moore's law (doubling of compute power per chip every ~18  mo) is currently hitting limits of physical laws.  Stacking of silicon is perhaps the only way for Moore's law to continue.

By the time Moore's law hits a limit in 3D stacking, maybe someone will figure out how to do 4D stacking!

vox_mundi

  • Multi-year ice
  • Posts: 10371
    • View Profile
  • Liked: 3528
  • Likes Given: 761
Re: Robots and AI: Our Immortality or Extinction
« Reply #3274 on: May 01, 2024, 05:35:43 PM »
Sophia the Robot Takes a Face-Plant in Greece
https://greekreporter.com/2024/04/30/sophia-robot-falls-greece/



... as Clemenza would say ... "won't be hearing from Sophia no more"

- The Godfather (1972)
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10371
    • View Profile
  • Liked: 3528
  • Likes Given: 761
Re: Robots and AI: Our Immortality or Extinction
« Reply #3275 on: May 01, 2024, 10:46:00 PM »
Robot Bee Swarms Fly Collision-free In Close Formation
https://newatlas.com/robotics/festo-bionicbee/

We've seen some impressive nature-inspired flying bots from the creative minds at Festo's Bionic Learning Network over the years, but the autonomous BionicBee is not only the smallest so far but also the first capable of swarming.



The BionicBees were developed using generative design, where a software application was tasked with coming up with the best lightweight structure using the least possible materials while also aiming for maximum stability.

Crammed within the small frame is a brushless motor, three servos, a battery, a gear unit, comms technology and control components. The wings beat between 15 and 20 hertz, back and forth over 180 degrees. The servos "change the geometry of the wing" for lift and direction control.

Festo notes that each bot is assembled by hand and even the tiniest of differences in build can adversely impact performance. The team has therefore included an auto-calibration feature that spots any subtle hardware oddities during a brief test flight. An algorithm then makes any necessary adjustments to flight characteristics so that the control system see all bees as identical – which makes for safe swarming.

-------------------------------------------------------

Butterflies
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10371
    • View Profile
  • Liked: 3528
  • Likes Given: 761
Re: Robots and AI: Our Immortality or Extinction
« Reply #3276 on: May 02, 2024, 12:59:46 AM »
Giant Military Manta Ray Drone Passes First Ocean Test
https://www.darpa.mil/news-events/2024-05-01
https://www.twz.com/news-features/manta-ray-underwater-drone-even-more-enormous-than-we-thought
https://www.defenseone.com/technology/2024/05/giant-military-manta-ray-drone-passes-first-ocean-test/396244/



An enormous underwater drone, built to roam the oceans for “very long periods of time” without refueling, has passed initial tests off California, DARPA announced on Wednesday.

The Manta Ray prototype uncrewed underwater vehicle (UUV) built by Northrop Grumman completed full-scale, in-water testing off the coast of Southern California in February and March 2024. Testing demonstrated at-sea hydrodynamic performance, including submerged operations using all the vehicle’s modes of propulsion and steering: buoyancy, propellers, and control surfaces.

“Our successful, full-scale Manta Ray testing validates the vehicle’s readiness to advance toward real-world operations after being rapidly assembled in the field from modular subsections,” said Dr. Kyle Woerner, DARPA program manager for Manta Ray. “The combination of cross-country modular transportation, in-field assembly, and subsequent deployment demonstrates a first-of-kind capability for an extra-large UUV.”

“Shipping the vehicle directly to its intended area of operation conserves energy that the vehicle would otherwise expend during transit,” said Woerner. “Once deployed, the vehicle uses efficient, buoyancy-driven gliding to move through the water. The craft is designed with several payload bays of multiple sizes and types to enable a wide variety of naval mission sets.”

A key part of its functioning relates to the addition of energy-saving technologies — allowing it to hibernate in a low-power state on the sea floor — and energy-generating technologies ostensibly capable of powering Manta Ray over near-unlimited distances and durations — which Northrop has worked with Seatrec, a renewable-energy company, on.  While many UUVs are constrained by the amount of stored energy they can carry onboard, Northrop-Seatrec's "Mission Unlimited Unmanned Underwater Vehicle (UUV) Station" purports to solve this issue.

https://www.northropgrumman.com/what-we-do/sea/mission-unlimited-inventing-autonomous-recharging-of-unmanned-underwater-vehicles

DARPA said PacMar Technologies, another contractor on the Manta Ray program, will spend the rest of this year testing a full-scale energy-harvesting system.

Broadly, the service is looking to field a wide range of UUVs of various tiers, with "extra-large" being the largest. Already, it has received the first of its Orca extra-large uncrewed underwater vehicle, or XLUUV, from Boeing

« Last Edit: May 02, 2024, 01:08:44 AM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10371
    • View Profile
  • Liked: 3528
  • Likes Given: 761
Re: Robots and AI: Our Immortality or Extinction
« Reply #3277 on: May 04, 2024, 01:14:08 AM »


By equipping a Unitree Go1 robot with two low-cost and lightweight modular 3-DoF loco-manipulators on its front calves, LocoMan leverages the combined mobility and functionality of the legs and grippers for complex manipulation tasks that require precise 6D positioning of the end effector in a wide workspace.

----------------------------------------------------------





Field AI’s robots operating in relatively complex and unstructured environments without prior maps

https://spectrum.ieee.org/autonomy-unstructured-field-ai

---------------------------------------------------------



----------------------------------------------------------
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10371
    • View Profile
  • Liked: 3528
  • Likes Given: 761
Re: Robots and AI: Our Immortality or Extinction
« Reply #3278 on: May 04, 2024, 01:27:14 AM »
An AI-Powered Fighter Jet Took the Air Force's Leader for a Historic Ride. What That Means for War
https://apnews.com/article/artificial-intelligence-fighter-jets-air-force-6a1100c96a73ca9b7f41cbd6a2753fda

With the midday sun blazing, an experimental orange and white F-16 fighter jet launched with a familiar roar that is a hallmark of U.S. airpower. But the aerial combat that followed was unlike any other: This F-16 was controlled by artificial intelligence, not a human pilot. And riding in the front seat was Air Force Secretary Frank Kendall.

The AI-controlled F-16, called Vista, flew Kendall in lightning-fast maneuvers at more than 550 miles an hour that put pressure on his body at five times the force of gravity. It went nearly nose to nose with a second human-piloted F-16 as both aircraft raced within 1,000 feet of each other, twisting and looping to try force their opponent into vulnerable positions.

Throughout the entire test flight, neither Secretary Kendall nor the safety pilot in the backseat touched the controls of the X-62A, demonstrating the autonomous capabilities of the aircraft.

At the end of the hourlong flight, Kendall climbed out of the cockpit grinning. He said he’d seen enough during his flight that he’d trust this still-learning AI with the ability to decide whether or not to launch weapons.

... Smaller and cheaper AI-controlled unmanned jets are the way ahead, Kendall said.

... China has AI, but there’s no indication it has found a way to run tests outside a simulator. And, like a junior officer first learning tactics, some lessons can only be learned in the air, Vista’s test pilots said.

Until you actually fly, “it’s all guesswork,” chief test pilot Bill Gray said. “And the longer it takes you to figure that out, the longer it takes before you have useful systems.”

... Vista flew its first AI-controlled dogfight in September 2023, and there have only been about two dozen similar flights since. But the programs are learning so quickly from each engagement that some AI versions getting tested on Vista are already beating human pilots in air-to-air combat.

The pilots at this base are aware that in some respects, they may be training their replacements or shaping a future construct where fewer of them are needed.

But they also say they would not want to be up in the sky against an adversary that has AI-controlled aircraft if the U.S. does not also have its own fleet.
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

morganism

  • Nilas ice
  • Posts: 1906
    • View Profile
  • Liked: 230
  • Likes Given: 135
Re: Robots and AI: Our Immortality or Extinction
« Reply #3279 on: May 05, 2024, 01:11:45 AM »
(long article, but interesting on how affected these recruiters are by branding, and how an MLM is better at judging recruit outcomes. Snips are just hacks in this post, too much info to snip out effectively)

"When presented with out-of-sample candidate profiles, both models made predictions more accurately than human recruiters."


https://interviewing.io/blog/are-recruiters-better-than-a-coin-flip-at-judging-resumes

Question #1: Would you interview this candidate?

In aggregate, recruiters in the study recommended 62% of candidates for an interview. But how did recruiter evaluations stack up against candidates’ performance on the platform?

We calculated recruiter accuracy by treating each candidate’s first interview (pass/fail) as the truth, and recruiters’ decision to interview as a prediction. It turns out that recruiters chose correctly 55% of the time, which is just slightly better than a coin flip.

Question #2: What is the likelihood this candidate will pass the technical interview?

Recruiters predicted the likelihood that each candidate would pass the technical interview. In most hiring processes, the technical interview follows the recruiter call and determines whether candidates proceed to the onsite. Being able to accurately predict which candidates will succeed at this stage is important and should inform the decision about whether to interview the candidate or not.

What we found most surprising is how far their predictions were from the truth:

    When recruiters predicted the lowest probability of passing (0-5%), those candidates actually passed the technical interview with a 47% probability.
    When recruiters predicted the highest probability of passing (95-100%), those candidates actually passed with a 64% probability.


Recruiters’ predictions below 40% underestimate these candidates by an average of 23 percentage points. Above 60%, they’re overestimating by an average of 20 percentage points. If this was predicting student performance, recruiters would be off by two full letter grades.
(snip)
So, when two recruiters are asked to judge the same candidate, their level of disagreement is nearly the same as if they evaluated two completely different candidates.

The most sought-after resume attributes

Despite the noise and variability in the study’s resume evaluations, there were some characteristics that recruiters consistently favored: experience at a top-tier tech3 company (FAANG or FAANG-adjacent) and URM (underrepresented minority) status (in tech, this means being Black or Hispanic).
(snip)

Predictors of recruiter rejections

Below is a graph of actual rejection reasons, based on our analysis. The main rejection reason isn’t “missing skill” — it’s “no top firm.” This is followed, somewhat surprisingly, but much less reliably (note the huge error bars), by having an MBA. “No top school” and having a Master’s degree come in at third and fourth. Note that these top four rejection reasons are all based on a candidate’s background, NOT their skill set.

Slowing down is associated with better decisions

Another key piece of this study is time. In hiring settings, recruiters make decisions quickly. Moving stacks of candidates through the funnel gives little room to second-guess or even wait before determining whether or not to give a candidate the opportunity to interview.

In our study, the median time spent on resume evaluations was just 31 seconds. Broken down further by Question #1 — whether or not the recruiter would interview them — the median time spent was:

    25 seconds for those advanced to a technical interview
    44 seconds for those placed in the reject pile

Distribution of time taken to evaluate candidates

Given the weight placed on single variables (e.g., experience at a top firm), how quickly recruiters make judgments isn’t surprising. But might they be more accurate if they slowed down? It turns out that spending more time on resume evaluations, notably >45 seconds, is associated with more accurate predictions — just spending 15 more seconds appears to increase accuracy by 34%.7 It could be that encouraging recruiters to slow down might result in more accurate resume screening.


Can AI do better?

As a gaggle of technologists and data geeks, we tested whether algorithms could quiet the noise and inconsistencies in recruiters’ predictions.

We trained two local, off-the-rack machine-learning models.

Just like human recruiters, the models were trained to predict which candidates would pass technical interviews. The training dataset was drawn from interviewing.io and included anonymized resume data, candidates’ race and gender, and interview outcomes.

When presented with out-of-sample candidate profiles, both models made predictions more accurately than human recruiters.

Random Forest was somewhat more accurate than recruiters when predicting lower performing candidates. XGBoost, however, was more accurate across the board than both the Random Forest model AND recruiters.


Where does this leave us?

In this section, when we say “we,” we are speaking as interviewing.io, not as the researchers involved in this study. Just FYI.

Advice for candidates

At interviewing.io, we routinely get requests from our users to add resume review to our list of offerings. So far, we have declined to build it. Why? Because we suspected that recruiters, regardless of what they say publicly, primarily hunt for name brands on your resume. Therefore, highlighting your skills or acquiring new skills is unlikely to make a big difference in your outcomes.

We are sad to see the numbers back up our intuition that it mostly is about brands.10 As such, here’s an actionable piece of advice: maintain a healthy skepticism when recruiters advise you to grow your skill set. Acquiring new skills will very likely make you a better engineer. But it will very likely NOT increase your marketability.

If enhancing your skill set won’t help, what can you do to get in front of companies? We’re in the midst of a brutal market, the likes of which we haven’t seen since the dot-com crash in 2000. According to anecdotes shared in our Discord community, even engineering managers from FAANGs are getting something like a 10% response rate when they apply to companies online. If that’s true, what chance do the rest of us have?

We strongly encourage anyone looking for work in this market, especially if you come from a non-traditional background, to stop spending energy on applying online, full stop. Instead, reach out to hiring managers. The numbers will be on your side there, as relatively few candidates are targeting hiring managers directly. We plan to write a full blog post on how to do this kind of outreach well, but this CliffsNotes version will get you started:

    Get a LinkedIn Sales Navigator account
    Make a target list of hiring managers at the companies you’re interested in
    Figure out their emails (you can use a tool like RocketReach), and send them something short and personalized. Do not use LinkedIn. The same way that you don’t live in LinkedIn, eng managers don’t either. Talk about the most impressive thing you’ve built. Ask them about their work, if you can find a blog post they’ve written or a project they’ve worked on publicly. Tie those two things together, and you’ll see a much higher response rate. Writing these personalized emails takes time, of course, but in this market, it’s what you need to do to stand out.
(more)

https://interviewing.io/blog/are-recruiters-better-than-a-coin-flip-at-judging-resumes

Sigmetnow

  • Multi-year ice
  • Posts: 26091
    • View Profile
  • Liked: 1164
  • Likes Given: 435
Re: Robots and AI: Our Immortality or Extinction
« Reply #3280 on: May 05, 2024, 02:18:07 PM »
Optimus bot update

Tesla Optimus
Trying to be useful lately!
5/5/24, https://x.com/tesla_optimus/status/1787027808436330505
 
➡️ pic.twitter.com/TlPF9YB61W  2min

“Optimus is now being tested at one of our factories, with a continuously decreasing rate of human interventions.”
People who say it cannot be done should not interrupt those who are doing it.

vox_mundi

  • Multi-year ice
  • Posts: 10371
    • View Profile
  • Liked: 3528
  • Likes Given: 761
Re: Robots and AI: Our Immortality or Extinction
« Reply #3281 on: May 05, 2024, 07:55:35 PM »
US Official Urges China, Russia to Declare Only Humans, Not AI, Control Nuclear Weapons
https://www.reuters.com/world/us-official-urges-china-russia-declare-only-humans-not-ai-control-nuclear-2024-05-02/

HONG KONG, May 2 (Reuters) - A senior U.S. official on Thursday urged China and Russia to match declarations by the United States and others that only humans, and never artificial intelligence, would make decisions on deploying nuclear weapons.

State Department arms control official Paul Dean told an online briefing that Washington had made a "clear and strong commitment" that humans had total control over nuclear weapons, adding that France and Britain had done the same.

"We think it is an extremely important norm of responsible behaviour and we think it is something that would be very welcome in a P5 context," he said, referring to the five permanent members of the United Nations Security Council.

China, which is expanding its nuclear weapons capabilities, urged in February that the largest nuclear powers should first negotiate a no-first-use treaty between each other.

-------------------------------------------------------------

America Needs a Dead Hand More than Ever
https://warontherocks.com/2024/03/america-needs-a-dead-hand-more-than-ever/

In a 2019 War on the Rocks article, “America Needs a ‘Dead Hand’,” we proposed the development of an artificial intelligence-enabled nuclear command, control, and communications system to partially address this concern. In the five years since the article was published, China began an unprecedented expansion of its nuclear arsenal; Russia invaded Ukraine, made repeated nuclear threats, and “suspended” Russian participation in the New START arms control treaty; and North Korea launched a massive expansion of its nuclear arsenal. The United States has not expanded its arsenal by a single weapon or fielded a single new delivery vehicle.

Today, we believe that United States is further behind China and Russia as both nations are modernizing and expanding their nuclear arsenals and fortifying their nuclear command, control, and communications systems. This widening gap in capabilities only increases the coercive power Russia and China have to coerce the United States into backing down from aggression.

We can only conclude that America needs a dead hand system more than ever. Such a system would both detect an inbound attack more rapidly than the current system and allow the president to either manually direct forces to respond or automatically execute the president’s pre-selected response options — for a given scenario. ...

-------------------------------------------------------------

[Turgidson advocates a further nuclear attack to prevent a Soviet response to Ripper's attack]

General "Buck" Turgidson : Mr. President, we are rapidly approaching a moment of truth both for ourselves as human beings and for the life of our nation. Now, truth is not always a pleasant thing. But it is necessary now to make a choice, to choose between two admittedly regrettable, but nevertheless *distinguishable*, postwar environments: one where you got twenty million people killed, and the other where you got a hundred and fifty million people killed.

President Merkin Muffley : You're talking about mass murder, General, not war!

General "Buck" Turgidson : Mr. President, I'm not saying we wouldn't get our hair mussed. But I do say no more than ten to twenty million killed, tops. Uh, depending on the breaks




-------------------------------------------------------------

Russia’s Anti-Satellite Nuke Could Leave Lower Orbit Unusable, Test Vehicle May Already Be Deployed
https://www.twz.com/space/russias-anti-satellite-nuke-could-leave-lower-orbit-unusable-test-vehicle-may-already-be-deployed

A senior U.S. government official has indicated that Russia already has some kind of clandestine testbed in space as part of its development of a nuclear-armed on-orbit anti-satellite weapon. This comes just days after another official warned Congress that this "indiscriminate" weapon, details about which first emerged publicly earlier this year, could be capable of rendering low Earth orbit completely unusable for a prolonged period of time.

... "Russia has publicly claimed that their satellite is for scientific purposes," Stewart said "However, the orbit is in a region not used by any other spacecraft. That in itself was somewhat unusual. And the orbit is a region of higher radiation than normal, lower earth orbits, but not high enough of a radiation environment to allow accelerated testing of electronics as Russia has described the purpose to be."

"We aren't talking about a weapon that can be used to attack humans or cause structural damage on Earth," according to Stewart. "Instead… our analysts assess that detonation [of a nuclear device] in a particular placement in orbit of a magnitude and location would render lower Earth orbit unusable for a certain amount of time." Low Earth Orbit (LEO) refers to a band of space that measures roughly 100 miles to 1,200 miles above the earth, and which is highly congested. Many capabilities that are highly important to society exist in this orbital realm.

... Concerningly, John Plumb, Assistant Secretary of Defense for Space Policy, indicated that LEO could feasibly be rendered unusable for a year if Russia's new weapon was detonated in space. As he noted in his written testimony, Plumb stipulated that this would seriously threaten "all satellites operated by countries and companies around the globe, as well as to the vital communications, scientific, meteorological, agricultural, commercial, and national security services we all depend upon." ... "satellites that aren't hardened against nuclear detonation space, which is most satellites, could be damaged [and] affected... Some would be caught in an immediate blast which... would not be able to survive the flux of that, and then others could be damaged over time."

The economic damage that such a weapon could do is hard to even quantify. Replacing space-based capabilities after the radiation cleared would take a very long time and LEO may be so cluttered with space junk at that point that it may not be reliably inhabitable at all for much longer than how long damaging radiation would persist.

---------------------------------------------------------------

Lockheed Martin to Scale Its Highest Powered Laser to 500 Kilowatts Power Level
https://news.lockheedmartin.com/2023-07-28-Lockheed-Martin-to-Scale-Its-Highest-Powered-Laser-to-500-Kilowatts-Power-Level

... The Army, Navy, and Air Force are all developing 300-kilowatt lasers with far greater range than current counter-drone weapons, Shyu said. The Army and Air Force systems will be ground-based, the Navy’s shipborne, she said, declining to give further detail. “Last summer, my shop let out a contract to two different contractors [to develop] greater than 500-kilowatt laser sources,” she said. “By the end of next year expect to see that. … It’s pretty awesome.”

https://breakingdefense.com/2024/04/mind-boggling-israel-ukraine-are-mere-previews-of-a-much-larger-pacific-missile-war-officials-warn/

---------------------------------------------------------------



-------------------------------------------------------------

Austria Calls for Rapid Regulation As It Hosts Meeting On 'Killer Robots'
https://www.reuters.com/technology/austria-calls-rapid-regulation-it-hosts-meeting-killer-robots-2024-04-29/

VIENNA, April 29 (Reuters) - Austria called on Monday for fresh efforts to regulate the use of artificial intelligence in weapons systems that could create so-called 'killer robots', as it hosted a conference aimed at reviving largely stalled discussions on the issue.

With AI technology advancing rapidly, weapons systems that could kill without human intervention are coming ever closer, posing ethical and legal challenges that most countries say need addressing soon.

"We cannot let this moment pass without taking action. Now is the time to agree on international rules and norms to ensure human control," Austrian Foreign Minister Alexander Schallenberg told the meeting of non-governmental and international organisations as well as envoys from 143 countries.

"At least let us make sure that the most profound and far-reaching decision, who lives and who dies, remains in the hands of humans and not of machines," he said in an opening speech to the conference entitled "Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation".

-----------------------------------------------------------



'Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation.'

Schallenberg sees AI as the biggest revolution in warfare since the invention of gunpowder but feels it is far more dangerous. With the next logical step in military AI development involving removing humans from the decision-making process, he believes there's no time to waste.

https://www.theregister.com/2024/04/30/kill_killer_robots_now/

-----------------------------------------------------------

Resounding Support for a ‘Killer Robots’ Treaty
https://www.hrw.org/news/2024/05/02/resounding-support-killer-robots-treaty

(Washington, DC, May 02, 2024) – Governments concerned about autonomous weapons systems – so-called killer robots – should urgently act to start negotiations on a new international treaty to ban and regulate them, Human Rights Watch said today. Such weapons would select and use force against targets based on sensor processing rather than human inputs.

“The world is approaching a tipping point for acting on concerns over autonomous weapons systems, and support for negotiations is reaching unprecedented levels,” said Steve Goose, arms campaigns director at Human Rights Watch. “The adoption of a strong international treaty on autonomous weapons systems could not be more necessary or urgent.”

Years of discussions at the United Nations have produced few tangible results and many participants at the two-day conference in Vienna said the window for action was closing rapidly.

"It is so important to act and to act very fast," the president of the International Committee of the Red Cross, Mirjana Spoljaric, told a panel discussion at the conference.

"What we see today in the different contexts of violence are moral failures in the face of the international community. And we do not want to see such failures accelerating by giving the responsibility for violence, for the control over violence, over to machines and algorithms," she added.
« Last Edit: May 05, 2024, 10:26:56 PM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10371
    • View Profile
  • Liked: 3528
  • Likes Given: 761
Re: Robots and AI: Our Immortality or Extinction
« Reply #3282 on: May 05, 2024, 08:21:16 PM »
Nvidia’s DrEureka Helps Robot Dog Perfect Yoga Ball Balance
https://interestingengineering.com/innovation/nvidia-robot-yoga-ball-balance


https://venturebeat.com/automation/nvidias-dreureka-outperforms-humans-in-training-robotics-systems/

Large language models (LLMs) can accelerate the training of robotics systems in super-human ways, according to a new study by scientists at Nvidia, the University of Pennsylvania and the University of Texas, Austin.

The study introduces DrEureka, a technique that can automatically create reward functions and randomization distributions for robotics systems. DrEureka stands for Domain Randomization Eureka. DrEureka only requires a high-level description of the target task and is faster and more efficient than human-designed rewards in transferring learned policies from simulated environments to the real world.

DrEureka: Language Model Guided Sim-To-Real Transfer, arXiv, (2024)
https://eureka-research.github.io/dr-eureka/assets/dreureka-paper.pdf

--------------------------------------------------------------

ChatGPT Shows Better Moral Judgment Than a College Undergrad
https://arstechnica.com/ai/2024/05/chatgpt-shows-better-moral-judgement-than-a-college-undergrad/

When it comes to judging which large language models are the "best," most evaluations tend to look at whether or not a machine can retrieve accurate information, perform logical reasoning, or show human-like creativity. Recently, though, a team of researchers at Georgia State University set out to determine if LLMs could match or surpass human performance in the field of moral guidance.

In "Attributions toward artificial agents in a modified Moral Turing Test"—which was recently published in Nature's online, open-access Scientific Reports journal—those researchers found that morality judgments given by ChatGPT4 were "perceived as superior in quality to humans'" along a variety of dimensions like virtuosity and intelligence.

... In the blind testing, respondents agreed with the LLM's assessment more often than the human's. On average, the LLM responses were also judged to be "more virtuous, more intelligent, more fair, more trustworthy, a better person, and more rational" to a statistically significant degree. Neither the human nor LLM responses showed a significant advantage when judged on emotion, compassion, or bias, though.

But simply knowing the right words to say in response to a moral conundrum isn't the same as having an innate understanding of what makes something moral. The researchers also reference a previous study showing that criminal psychopaths can distinguish between different types of social and moral transgressions, even as they don't respect those differences in their lives. The researchers extend the psychopath analogy by noting that the AI was judged as more rational and intelligent than humans but not more emotional or compassionate.

This brings about worries that an AI might just be "convincingly bullshitting" about morality in the same way it can about many other topics without any signs of real understanding or moral judgment. That could lead to situations where humans trust an LLM's moral evaluations even if and when that AI hallucinates "inaccurate or unhelpful moral explanations and advice."

Attributions toward artificial agents in a modified Moral Turing Test, Scientific Reports, (2024)
https://www.nature.com/articles/s41598-024-58087-7

--------------------------------------------------------------

« Last Edit: May 07, 2024, 12:12:41 AM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Sigmetnow

  • Multi-year ice
  • Posts: 26091
    • View Profile
  • Liked: 1164
  • Likes Given: 435
Re: Robots and AI: Our Immortality or Extinction
« Reply #3283 on: May 05, 2024, 08:53:06 PM »
Optimus bot update …
 
➡️ pic.twitter.com/TlPF9YB61W  2min 

- Tele-operation for training.
 
- On board FSD computer for autonomous activities.
 

Dr Jim Fan  [@NVIDIA Sr. Research Manager & Lead of Embodied Al (GEAR Lab). Creating foundation models for Humanoid Robots & Gaming]:
Quote

1. Optimus hands are among the best 5-finger, dexterous robot hands in the world. It's got tactile sensing, 11 degrees of freedom (DOF) compared to many competitors with only 6-7 DOF, and robustness to withstand lots of object interactions without constant maintenance.
 
Elon Musk
The new Optimus hand later this year will have 22 DoF
5/5/24, https://x.com/elonmusk/status/1787157110804910168
   —-
And the actuators will move almost entirely into the forearm, just like how humans work
5/5/24, https://x.com/elonmusk/status/1787162220805112077
 
Quote

4. Tasks & environments: it's equally important to figure out *what* to teleoperate. Currently, most such efforts are demo-driven: collect data on the tasks that you want to put into a social media video. But solving general-purpose robots requires us to think carefully about the distribution of tasks and environments. From 43"-51" in the video, we can see factory & household settings like moving batteries, handling laundry, sorting daily objects into shelves.
 
It's an open-ended research question: if you only have the budget to collect training data for 1,000 tasks, what would you pick to maximize skill transfer and generalization?
 
Closing thought: teleoperation is a necessary but insufficient condition to solve humanoid robotics. It fundamentally does not scale. More about this later.
5/5/24, https://x.com/drjimfan/status/1787154880110694614
People who say it cannot be done should not interrupt those who are doing it.

vox_mundi

  • Multi-year ice
  • Posts: 10371
    • View Profile
  • Liked: 3528
  • Likes Given: 761
Re: Robots and AI: Our Immortality or Extinction
« Reply #3284 on: May 05, 2024, 09:08:49 PM »
Ukraine Unveils AI-Generated Foreign Ministry Spokesperson
https://www.theguardian.com/technology/article/2024/may/03/ukraine-ai-foreign-ministry-spokesperson

Ukraine on Wednesday presented an AI-generated spokesperson called Victoria who will make official statements on behalf of its foreign ministry.

The foreign ministry’s press service said that the statements given by Shi would not be generated by AI but “written and verified by real people”.



Dressed in a dark suit, the spokesperson introduced herself as Victoria Shi, a “digital person”, in a presentation posted on social media. The figure gesticulates with her hands and moves her head as she speaks.

The spokesperson’s name is based on the word “victory” and the Ukrainian phrase for artificial intelligence: shtuchniy intelekt.

Shi’s appearance and voice are modelled on a real person: Rosalie Nombre, a singer and former contestant on Ukraine’s version of the reality show The Bachelor. Nombre was born in the now Russian-controlled city of Donetsk in eastern Ukraine. She has 54,000 followers on her Instagram account, which she uses to discuss stereotypes about mixed-race Ukrainians and those who grew up as Russian speakers.

To avoid fakes, the statements will be accompanied by a QR code linking them to text versions on the ministry’s website. Shi will comment on consular services, currently a controversial topic.
« Last Edit: May 05, 2024, 10:08:38 PM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10371
    • View Profile
  • Liked: 3528
  • Likes Given: 761
Re: Robots and AI: Our Immortality or Extinction
« Reply #3285 on: May 06, 2024, 06:27:55 PM »
Google's Medical AI Destroys GPT's Benchmark and Outperforms Doctors
https://newatlas.com/technology/google-med-gemini-ai/

Google Research and Google’s AI research lab, DeepMind, have detailed the impressive reach of Med-Gemini, a family of advanced AI models specialized in medicine. It's a huge advancement in clinical diagnostics with massive real-world potential.



Med-Gemini has all of the advantages of the foundational Gemini models but has fine-tuned them. The researchers tested these medicine-focused tweaks and included their results in the paper.

Arriving at a diagnosis and formulating a treatment plan requires doctors to combine their own medical knowledge with a raft of other relevant information: patient symptoms, medical, surgical and social history, lab results and the results of other investigative tests, and the patient’s response to prior treatment. Treatments are a ‘movable feast,’ with existing ones being updated and new ones being introduced. All these things influence a doctor’s clinical reasoning.

That’s why, with Med-Gemini, Google included access to web-based searching to enable more advanced clinical reasoning. Like many medicine-focused large language models (LLMs), Med-Gemini was trained on MedQA, multiple-choice questions representative of US Medical License Exam (USMLE) questions designed to test medical knowledge and reasoning across diverse scenarios.



However, Google also developed two novel datasets for their model. The first, MedQA-R (Reasoning), extends MedQA with synthetically generated reasoning explanations called ‘Chain-of-Thoughts’ (CoTs). The second, MedQA-RS (Reasoning and Search), provides the model with instructions to use web search results as additional context to improve answer accuracy. If a medical question leads to an uncertain answer, the model is prompted to undertake a web search to obtain further information to resolve the uncertainty.

Med-Gemini was tested on 14 medical benchmarks and established a new state-of-the-art (SoTA) performance on 10, surpassing the GPT-4 model family on every benchmark where a comparison could be made. On the MedQA (USMLE) benchmark, Med-Gemini achieved 91.1% accuracy using its uncertainty-guided search strategy, outperforming Google’s previous medical LLM, Med-PaLM 2, by 4.5%.

On seven multimodal benchmarks, including the New England Journal of Medicine (NEJM) image challenge (images of challenging clinical cases from which a diagnosis is made from a list of 10), Med-Gemini performed better than GPT-4 by an average relative margin of 44.5%.







Capabilities of Gemini Models in Medicine, arXiv, (2024)
https://arxiv.org/html/2404.18416v2
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10371
    • View Profile
  • Liked: 3528
  • Likes Given: 761
Re: Robots and AI: Our Immortality or Extinction
« Reply #3286 on: May 06, 2024, 06:32:29 PM »
Sam Altman’s Stanford University Talk on AGI
https://ecorner.stanford.edu/clips/smart-ai-and-society/

Sam Altman, CEO of OpenAI, who dropped out of Stanford in 2005, observes that the prospect of making an artificial general intelligence that is smarter than any human is naturally frightening, but he believes having more intelligent tools will do more good than bad. Humans have gotten smarter, he says, because society is smarter and more capable than any individual human.

One of the most interesting points made during the central discussion with Belani was that Altman wasn’t impressed by GPT-4’s performance. (... suggesting that they already have near-AGI)

Quote
... “ChatGPT is not phenomenal. ChatGPT is mildly embarrassing at best. GPT-4 is the dumbest model any of you will have to use again…by a lot.”

What’s crazy about these comments is that a “mildly embarrassing” large language model (LLM) has completely changed the technological landscape.

For example, ChatGPT has achieved adoption among 80% of Fortune 500 companies, reached 100 million weekly active users, and could even contribute to the automation of 300 million full-time jobs.

This begs the question, what type of impact would, by Sam’s definition, a ‘good’ LLM have on society and the global economy?

If Altman is correct that GPT-4 will pale in comparison to the next generation of models, then it’s time to prepare for a very disruptive few years.

Quote
... “Whether we burn $500 million a year or $5 billion or $50 billion a year, I don’t care. I genuinely don’t as long as we can stay on a trajectory where eventually we create way more value for society than that.

    “And as long as we can figure out a way to pay the bills — like we’re making AGI, it’s gonna be expensive, it’s totally worth it.”

For Altman, the ends of AGI justify the means, and the price paid to develop the technology will pale in comparison to the economic value that it brings to the global economy.

Quote
... “One of the things that is very core to our mission is that we make ChatGPT available for free to as many people as want to use it, with the exception of certain countries where we either can’t or don’t for a good reason.

Quote
...     “One thing that I think is important is not to pretend like this technology or any other technology is all good.

    I believe that it will be tremendously net-good, but I think like with any other tool it’ll be misused — like you can do great things with a hammer and you can kill people with a hammer.

“I think as the models get more capable, we have to deploy even more iteratively — have an entire feedback loop looking at how they’re used and where they work and where they don’t work,” Altman said.

Quote
... “This world that we used to do where we can release a major model or update every couple of years — we’ll probably have to find ways to like increase the granularity on that and deploy more iteratively than we have in the past.


The Possibilities of AI [Entire Talk] - Sam Altman (OpenAI)

----------------------------------------------------------------

'It Would Be Within Its Natural Right to Harm Us to Protect Itself': How Humans Could Be Mistreating AI Right Now Without Even Knowing it
https://www.livescience.com/technology/artificial-intelligence/it-would-be-within-its-natural-right-to-harm-us-to-protect-itself-how-humans-could-be-mistreating-ai-right-now-without-even-knowing-it

----------------------------------------------------------------



---------------------------------------------------------------
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10371
    • View Profile
  • Liked: 3528
  • Likes Given: 761
Re: Robots and AI: Our Immortality or Extinction
« Reply #3287 on: May 06, 2024, 06:49:04 PM »
AI Could Help Find a Solution for String Theory
https://www.quantamagazine.org/ai-starts-to-sift-through-string-theorys-near-endless-possibilities-20240423/
https://www.scientificamerican.com/article/ai-could-help-find-a-solution-for-string-theory/

String theory could provide a theory of everything for our universe—but it entails 10500 (more than a centillion) possible solutions. AI models could help to find the right one

“People thought it was just a matter of time until you could compute everything there was to know,” said Anthony Ashmore, a string theorist at Sorbonne University in Paris.

But as physicists studied string theory, they uncovered a hideous complexity.

When they zoomed out from the austere world of strings, every step toward our rich world of particles and forces introduced an exploding number of possibilities. For mathematical consistency, strings need to wriggle through 10-dimensional space-time.

... Now, a fresh generation of researchers has brought a new tool to bear on the old problem: neural networks, the computer programs powering advances in artificial intelligence. In recent months, two teams of physicists and computer scientists have used neural networks to calculate precisely for the first time what sort of macroscopic world would emerge from a specific microscopic world of strings. This long-sought milestone reinvigorates a quest that largely stalled decades ago: the effort to determine whether string theory can actually describe our world.

“We aren’t at the point of saying these are the rules for our universe,” Anderson said. “But it’s a big step in the right direction.”



Computation of Quark Masses from String Theory, arXiv, (2024)
https://arxiv.org/abs/2402.01615

---------------------------------------------------------------

AI In Space: Karpathy Suggests AI Chatbots As Interstellar Messengers to Alien Civilizations
https://arstechnica.com/information-technology/2024/05/ai-in-space-karpathy-suggests-ai-chatbots-as-interstellar-messengers-to-alien-civilizations/

On Thursday, renowned AI researcher Andrej Karpathy, formerly of OpenAI and Tesla, tweeted a lighthearted proposal that large language models (LLMs) like the one that runs ChatGPT could one day be modified to operate in or be transmitted to space, potentially to communicate with extraterrestrial life. He said the idea was "just for fun," but with his influential profile in the field, the idea may inspire others in the future.

Ambassadors of Good Will

--------------------------------------------------------------

AI Discovers Over 27,000 Overlooked Asteroids In Old Telescope Images
https://www.space.com/google-cloud-ai-tool-asteroid-telescope-archive



More than 27,000 asteroids in our solar system had been overlooked in existing telescope images — but thanks to a new AI-powered algorithm, we now have a catalog of them. The scientists behind the discovery say the tool makes it easier to find and track millions of asteroids, including potentially dangerous ones that might strike Earth someday. It is for those threatening space rocks that the world would need years of advance warning before trying to deflect them away from our planet.

Most of the newfound asteroids hover in the asteroid belt between Mars and Jupiter, where scientists have already cataloged over 1.3 million such rocky shards over the past 200 years. The latest bounty, discovered in about five weeks, also includes about 150 space rocks whose paths glide them within Earth's orbit; to be clear, however, none of these "near-Earth asteroids" seem to be on a collision path with our planet. Others are Trojans that follow Jupiter in its orbit around the sun. Observations of these asteroids are yet to be submitted to and accepted by International Astronomical Union’s Minor Planet Center, the official body responsible for asteroid discoveries.

"This is really a job for AI," Ed Lu, executive director of the Asteroid Institute and co-founder of the B612 Foundation, said early last month during a discussion on the discovery. In fact, AI tools designed for asteroid searches are already approaching levels attainable by humans, Lu said: "I think we're gonna quickly surpass that over the next few weeks."

-------------------------------------------------------------

Improving Mathematical Reasoning With Process Supervision
https://openai.com/index/improving-mathematical-reasoning-with-process-supervision

We've trained a model to achieve a new state-of-the-art in mathematical problem solving by rewarding each correct step of reasoning (“process supervision”) instead of simply rewarding the correct final answer (“outcome supervision”). In addition to boosting performance relative to outcome supervision, process supervision also has an important alignment benefit: it directly trains the model to produce a chain-of-thought that is endorsed by humans.

---------------------------------------------------------------

DeepMind: TacticAI: An AI Assistant for Football Tactics
https://deepmind.google/discover/blog/tacticai-ai-assistant-for-football-tactics/



As part of our multi-year collaboration with Liverpool FC, we develop a full AI system that can advise coaches on corner kicks

Our first paper, Game Plan, looked at why AI should be used in assisting football tactics, highlighting examples such as analyzing penalty kicks. In 2022, we developed Graph Imputer, which showed how AI can be used with a prototype of a predictive system for downstream tasks in football analytics. The system could predict the movements of players off-camera when no tracking data was available – otherwise, a club would need to send a scout to watch the game in person.

Now, we have developed TacticAI as a full AI system with combined predictive and generative models. Our system allows coaches to sample alternative player setups for each routine of interest, and then directly evaluate the possible outcomes of such alternatives.
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10371
    • View Profile
  • Liked: 3528
  • Likes Given: 761
Re: Robots and AI: Our Immortality or Extinction
« Reply #3288 on: May 06, 2024, 11:36:42 PM »
Mayman Aerospace Set to Unveil Futuristic Razor P100 Drone
https://defence-blog.com/mayman-aerospace-set-to-unveil-futuristic-razor-p100-drone/



Razor is the name for the military variant of the dual-use, jet-powered, high-speed vertical take-off and landing (HS-VTOL) vehicle, from US-based manufacturer Mayman Aerospace. Derived from the Speeder design, the scalable Razor aircraft will be sized for payloads up to 1,000lb. Prototypes are already under construction for flight test in Q3 this year. The 100lb-payload Razor P100 is expected to fly first, and the 500lb-payload Razor P500 soon after. Mayman Aerospace will showcase a full-scale Razor P100 model during SOF Week 2024.

https://maymanaerospace.com/


The Razor P100 represents a leap forward in vertical take-off and landing (VTOL) technology, derived from the Speeder design. Designed for military application, the Razor platform offers unparalleled versatility, combining high-speed jet propulsion with VTOL capabilities, making it ideally suited for the demands of the modern battlefield.



Notably, the Razor P100 holds the distinction of being the world’s smallest VTOL aircraft for its payload capacity, offering a compact yet highly efficient solution for a wide range of mission requirements. Despite its small size, the Razor can carry payloads comparable to aircraft ten times its size, demonstrating its remarkable engineering prowess.

The Razor P100 is poised to revolutionize battlefield intelligence collection with its advanced capabilities. Equipped with cutting-edge surveillance technology and the ability to deploy in confined areas, the aircraft ensures rapid and flexible data acquisition. With speeds reaching up to 500 mph and the capacity to carry the largest gimbal balls for superior image quality, the Razor provides real-time intelligence to enhance battlefield decision-making.



Advanced AI-Powered VTOL Battlefield Solution

The SKYFIELD™ AI-controlled navigation and control software, in combination with the high-speed RAZOR™ VTOL aircraft, will provide military commanders with compatible technology to deliver superior situational awareness and unmatched aerial efficiency, particularly in environments where GPS is denied or unreliable.

---------------------------------------------------------

... where have I seen these before? ...



---------------------------------------------------------

Anduril Unveils Game-Changing Electromagnetic Warfare System
https://defence-blog.com/anduril-unveils-game-changing-electromagnetic-warfare-system/

Anduril, the defense tech startup founded by Palmer Luckey, has announced the launch of Pulsar, a groundbreaking family of modular electromagnetic warfare (EW) systems powered by artificial intelligence, poised to revolutionize battlefield dominance against evolving threats.

https://www.anduril.com/article/anduril-announces-pulsar/



-------------------------------------------------------

Kaman’s Kargo Logistics Drone For The Marines Now In Flight Test
https://www.twz.com/air/kamans-kargo-logistics-drone-for-the-marines-now-in-flight-test

The Kaman Corporation has announced the first flight of its Kargo UAV, a rotary-wing drone intended for autonomous, expeditionary resupply. The design is aimed at the U.S. Marine Corps, for which Kaman previously developed an optionally crewed version of its K-Max helicopter. The Kargo UAV is now competing in the Marines’ Medium Autonomous Resupply Vehicle — Expeditionary Logistics (MARV-EL) program, with a winner due to be selected this summer.



-------------------------------------------------------

Switchblade 600 Kamikaze Drone Is The First Named Replicator Program Weapon
https://www.twz.com/news-features/switchblade-600-kamikaze-drone-is-the-first-named-replicator-program-weapon

The Switchblade 600 loitering munition is the first weapon the Defense Department (DoD) announced by name that it will buy for its ambitious Replicator program, the Pentagon announced Monday morning. The drone, made by AeroVironment, will spearhead a program designed to field “thousands” of attritable, autonomous platforms to counter China’s massive military build-up.



--------------------------------------------------------

First Look At The Army’s Unmanned HIMARS Launcher Truck Firing
https://www.twz.com/land/our-first-look-at-the-armys-unmanned-himars-launcher-truck-firing

« Last Edit: May 07, 2024, 12:19:54 AM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

SteveMDFP

  • Young ice
  • Posts: 2547
    • View Profile
  • Liked: 602
  • Likes Given: 45
Re: Robots and AI: Our Immortality or Extinction
« Reply #3289 on: May 07, 2024, 12:54:29 PM »
Google's Medical AI Destroys GPT's Benchmark and Outperforms Doctors
https://newatlas.com/technology/google-med-gemini-ai/

Google Research and Google’s AI research lab, DeepMind, have detailed the impressive reach of Med-Gemini, a family of advanced AI models specialized in medicine. It's a huge advancement in clinical diagnostics with massive real-world potential.

Capabilities of Gemini Models in Medicine, arXiv, (2024)
https://arxiv.org/html/2404.18416v2

Thanks for this.  My impression is that the state of the art here for AI medical use is roughly analogous to Tesla FSD v11.  Rather impressive technology at work, but not really fit for the big league for actual use.  Overall functionality here is on par with a medical intern with an encyclopedic fund of knowledge.  Medical interns are fairly closely supervised.  With close supervision, there's some real potential for enhancement of health care quality and efficiency.  Potential.

Sigmetnow

  • Multi-year ice
  • Posts: 26091
    • View Profile
  • Liked: 1164
  • Likes Given: 435
Re: Robots and AI: Our Immortality or Extinction
« Reply #3290 on: May 07, 2024, 07:14:31 PM »
Tony Seba
This time, we are the horses: The #disruption of labor by #HumanoidRobots.
 👤 🤖 〰️ 🚘 🐴
 
Our first longread in this series is now available via the @rethink_x blog.

https://www.rethinkx.com/blog/rethinkx/the-disruption-of-labour-by-humanoid-robots
 
5/6/24, https://x.com/tonyseba/status/1787384499338154111


See highlights in the following post, by Cern Basher.
People who say it cannot be done should not interrupt those who are doing it.

Sigmetnow

  • Multi-year ice
  • Posts: 26091
    • View Profile
  • Liked: 1164
  • Likes Given: 435
Re: Robots and AI: Our Immortality or Extinction
« Reply #3291 on: May 07, 2024, 07:15:43 PM »
Cern Basher

Here are some key highlights from Tony Seba's post "This Time, We Are The Horses" on Humanoid Robots (see his original post below) - there are so many key insights... here are a few:

1) "This time, we are the horses: the disruption of labor by humanoid robots" - This is difficult to hear, but well said. For the first time in human history we will have something that could, eventually, do all physical tasks better than a human.

2) "Just as internal combustion engines gave automobiles the capability to disrupt horses, a convergence of technologies that together create what we call a labor engine is what gives humanoid robots the capability to disrupt human labor." - Flesh & blood cannot compete with engines.

3) "Over the next 15-20 years, humanoid robots will disrupt human labor throughout hundreds of industries across every major sector of the global economy. The disruption of labor will be among the most profound transformations in human history, and therefore simultaneously represents one of the greatest opportunities and greatest challenges our civilization has ever faced." - All powerful new technologies are both exciting and scary at the same time (e.g. nuclear).

4) "Throughout history, every time technology has enabled a 10x or greater cost reduction relative to the incumbent system, a disruption has always followed. Each dollar spent on an automobile or an LED light bulb or a digital camera delivers more than 10 times the utility of each dollar spent on a horse, incandescent bulb, or film camera respectively."  - We should expect it to be no different this time.

5) "At first, humanoid robots will only be able to perform relatively simple tasks. But with each day that passes, their capabilities will grow, until by the 2040s they will be able to do virtually anything a human can do – and much more besides. Remember: humanoid robots today are the most expensive and least capable they will ever be." - We see bots walking awkwardly or working slowly and we think we have plenty of time - no, actually, we don't.

6) "Humanoid robots are what RethinkX calls a disruption from below. Initially, they will be cheaper per hour than hiring a human worker in many regions, but also less capable: slower, less competent, less adaptable. We have seen disruptions from below many times before (such as digital cameras), and the response from incumbents is predictable: they mock the new technology for being lower-performing while ignoring the rate at which the new technology gets both better and cheaper, until it is too late to respond and they face collapse." - Tony is absolutely correct - you can set your watch by the incumbents reaction.

7) "Electricity wasn’t just cheaper whale oil. Automobiles weren’t just faster horses. Farming wasn’t just more productive hunting and gathering. And the Internet wasn’t just an easier way to send letters, read newspapers, or listen to music... And humanoid robots won’t just displace human jobs. Instead, they will create an entirely new and vastly larger and more capable labor system. It is impossible to know in advance the full details of how the new labor system will differ from today, but the key feature is: the marginal cost of labor will rapidly approach zero." - Imagine everything we can do if the marginal cost of labor approaches zero!

8) "The disruption of labor is about tasks, not jobs... So long as humanoid robots are not sentient, they will not have jobs. They will only perform tasks. The disruption of labor, with all of its world-changing implications, can therefore only be understood with tasks as the correct unit of analysis, and tasks per hour per dollar as the corresponding cost-capability metric." - Tony is helping us think correctly about all this. Like all other tools - saws, hammers, drills, etc. - new tools only serve to make humanity more productive.

9) "Labor is an essential input into every link of every supply chain of every product and service across the entire global economy. That means as the cost of labor falls, so will the cost of everything else... We must expect and plan for a sweeping tide of supply-driven (NOT demand-driven) deflationary pressure across the entire global economy as a function of the disruption of labor by humanoid robots."  - Yes that's right, and demand driven deflationary pressure also comes from population decline. Managing this deflationary wave will be critical for governments.

10) "The quality of virtually all manufactured goods will tend to improve, because the limits of skill and attention to detail that apply to humans do not apply to robots... Every humanoid robot will perform every task it is capable of performing, at the maximum quality it can perform it, every single time... The upshot from a consumer perspective is that quality will appear to be on the rise everywhere, all at once, with “cheap junk” quickly becoming a relic of a bygone era." - And the Dollar Store becomes a purveyor of quality merchandise.

11) "...labor has always remained a limiting factor of production, and up until now the quantity of available labor has been a function of population. And regions with more and cheaper labor have enjoyed a competitive advantage as a result. Humanoid robots fundamentally change the equation. Instead of growing only as fast as a human population, available labor can now grow as fast as humanoid robots can be built and deployed. The difference is explosive. Like a dam bursting, humanoid robots will unleash a torrent of productivity..." - GDP is largely a function of labor times productivity - expand both parts of the equation at the same time and potential GDP balloons.

12) "It takes almost twenty years and more than $100,000 to raise a child and prepare them to join the national workforce of a middle-income country. Humanoid robots, by contrast, can be added to the workforce as fast as they can be manufactured, and it is unlikely their unit cost will exceed that of an inexpensive car even at the very start of commercial deployment. This means that by 2035, for example, adding one million people to a nation’s workforce might cost $100 billion and take twenty years, whereas adding one million humanoid robots to its workforce might cost just $10 billion and take a single year." - If we think of babies just as future workers, then bots win. Of course, babies are also future consumers, so they are still critically important - go make babies!

13) "Today, the size of a nation’s army can only be a subset of its own population... Any humanoid robot capable of working in a productive capacity can also be deployed in a national security capacity, whether in a supporting or frontline role... This means that, for better or worse, any nation with a large robotic workforce is also a nation with a large robotic army." - This is accurate - there is a national strategic/defense element to this too - we cannot forget that.

14) "Humanoid robots are likely to be one of the most profitable physical product categories ever, by virtue of the sheer scale of their production numbers alone. Given the size of the global labor market together with latent demand that this technology will unlock, it is reasonable to expect the number of humanoid robots deployed to exceed 1 billion over the next two decades – and possibly much more." - Yes, I and many others have been discussing this. You may ignore us, but listen to Tony!

15) "It is now rational for societies devote a non-trivial fraction of their entire GDP to investment in humanoid robotics. Humanity has been in this sort of situation before. Many societies have built roads, plumbing for running water, electricity service, telephone service, and broadband internet service to every home and business. These basic services not only bring prosperity, but massively increase productivity too. Societies must now aim for robots in every home and business as well – and for exactly the same reasons." - This is a strong point. Critical infrastructure makes a nation powerful and productive. Add humanoid bots to the list.

16) "Moreover, history shows that although capital (in the form of facilities, machinery, and knowledge) have substituted and thus displaced labor time and time again, labor has nevertheless evolved to remain complementary to that capital. Counterintuitively, this has put upward pressure on the value of labor over time... In the near term, for perhaps a decade or so, humanoid robots will largely be deployed to meet demand for labor that is currently going unmet by humans – as opposed to directly displacing human workers from jobs they currently occupy. This will create a non-obvious and counterintuitive situation in which humanoid robots appear to be almost purely a force-multiplier for existing jobs and workers, rather than a threat to them... While true, and worthy of celebration, we must be aware this condition will not persist for long." - Yes, the bots will first soak up the unmet/unwanted tasks, then like a giant sponge will soak up the rest.

17) "This means the era of complementarity between labor and capital is coming to a close. “Work” will soon become something that only machines do. When the disruption of is labor is complete, we will need to rethink economics itself because fundamental notions like scarcity and exogenous total factor productivity will no longer hold. The labor engine (itself a new kind of “capital”) will become self-sustaining and self-expanding, and superabundance will become the rule rather than the exception. It is almost impossible to overstate how radical this transformation of the human condition will be. It will indeed be liberating to an extent that up until now has seemed almost unimaginable – purely the realm of utopian science fiction. But it also means widespread public concern about technological employment from AI and robotics remains entirely valid in the longer term, from perhaps the late-2030s onward. Without very thoughtful decision-making among leadership in every domain, and very likely a rethinking of the basic social contract across society itself, the destabilization caused by the disruption of labor could well be catastrophic." - And there's the risk and an opportunity for great leadership.

18) "...it will be very tempting for policymakers, industry leaders, and others to pretend that humanoid robotics will never cause an unemployment crisis – just as we have seen incumbents pretend that other disruptions throughout history pose no threat to the status quo. But this would be a terrible mistake, and would lead to enormous suffering and chaos when human labor markets do finally begin to collapse with no hope of recovery. It would also be a mistake to ban humanoid robots “to preserve jobs” (although we are almost certain to see calls for this), because this would lead to a vicious cycle of diminishing competitiveness, prolonged scarcity, economic stagnation, and ultimately societal ills ranging from poverty to civil unrest and much else." - We simply must find a way to thread this needle.

19) "Like light bulbs, telephones, computers, and many other disruptive technologies, the demand for humanoid robots will be enormous. At the beginning of the disruption, when demand still vastly exceeds supply, no single producer will be able to capture all markets. We should therefore expect to see the same pattern that has emerged in previous disruptions: many companies, both startups and incumbents, will rapidly develop humanoid robot offerings for wide range applications, targeting dozens or hundreds of market niches, using a variety of different business models. So, even though the leading technology developers in the humanoid robot sector might limit their humanoid robots to deployment in factories or to lease-only user agreements, there will be so much demand for humanoid robots that other firms lower on the leaderboard will still enjoy huge opportunities to step in and target other markets with other business models as well. For example, if a leading firm decides to only lease its robots for use in factories, one or more other firms will seize the opportunity to sell robots for use at home – even if their robots are somewhat less capable." - I completely agree with Tony - this is not likely to be a winner take all category - it's just too vast and too varied.

20) And lastly: "Above all: protect people, not jobs, firms, or industries. In other disruptions throughout history, we have seen incumbent interests turn to their governments for protection against the new technologies. These protections can take the form of subsidies and handouts to the old industries, regulations and prohibitions that impede new industries built upon the new technologies, and bailouts when the old industry inevitably collapses. Almost invariably, the benefits of these protections accrue only to the privileged few who own and control the incumbent interests, rather than to the individuals and communities who lose their livelihoods because of the disruption. To avoid making this same mistake, which could prove catastrophic at the scale of the entire global labor market, we must rethink the relationships between a nation’s population and its economic output, and get ready to transform society itself." - Will we have these discussions and prepare ourselves? I'm hopeful.
 
5/6/24, 2:47 PM. https://x.com/cernbasher/status/1787554886600536566

https://www.rethinkx.com/blog/rethinkx/the-disruption-of-labour-by-humanoid-robots
People who say it cannot be done should not interrupt those who are doing it.

Freegrass

  • Young ice
  • Posts: 3985
  • Autodidacticism is a complicated word
    • View Profile
  • Liked: 986
  • Likes Given: 1276
Re: Robots and AI: Our Immortality or Extinction
« Reply #3292 on: May 07, 2024, 09:29:16 PM »
It's one of those top 20 videos. But this one is excellent. I saw some amazing new robots. Hard to wrap my head around the fact it's not a real dolphin.



Longer clip of the robot dolphin. It's so amazingly real that it could replace real dolphins in marine parks.

When computers are set to evolve to be one million times faster and cheaper in ten years from now, then I think we should rule out all other predictions. Except for the one that we're all fucked...

vox_mundi

  • Multi-year ice
  • Posts: 10371
    • View Profile
  • Liked: 3528
  • Likes Given: 761
Re: Robots and AI: Our Immortality or Extinction
« Reply #3293 on: May 08, 2024, 01:13:23 AM »
Using Algorithms to Decode the Complex Phonetic Alphabet of Sperm Whales
https://phys.org/news/2024-05-algorithms-decode-complex-phonetic-alphabet.html

Researchers from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and Project CETI (Cetacean Translation Initiative) recently used algorithms to decode the "sperm whale phonetic alphabet," revealing sophisticated structures in sperm whale communication akin to human phonetics and communication systems in other animal species.

In a new open-access study published in Nature Communications, the research shows that sperm whales codas, or short bursts of clicks that they use to communicate, vary significantly in structure depending on the conversational context, revealing a communication system far more intricate than previously understood.

Nine thousand codas, collected from Eastern Caribbean sperm whale families observed by the Dominica Sperm Whale Project, proved an instrumental starting point in uncovering the creatures' complex communication system. Alongside the data gold mine, the team used a mix of algorithms for pattern recognition and classification, as well as on-body recording equipment. It turned out that sperm whale communications were indeed not random or simplistic, but rather structured in a complex, combinatorial manner.

The researchers identified something of a "sperm whale phonetic alphabet," where various elements that researchers call "rhythm," "tempo," "rubato," and "ornamentation" interplay to form a vast array of distinguishable codas. For example, the whales would systematically modulate certain aspects of their codas based on the conversational context, such as smoothly varying the duration of the calls—rubato—or adding extra ornamental clicks. But even more remarkably, they found that the basic building blocks of these codas could be combined in a combinatorial fashion, allowing the whales to construct a vast repertoire of distinct vocalizations.

The experiments were conducted using acoustic bio-logging tags (specifically something called "D-tags") deployed on whales from the Eastern Caribbean clan. These tags captured the intricate details of the whales' vocal patterns. By developing new visualization and data analysis techniques, the CSAIL researchers found that individual sperm whales could emit various coda patterns in long exchanges, not just repeats of the same coda. These patterns, they say, are nuanced and include fine-grained variations that other whales also produce and recognize.

"We are venturing into the unknown to decipher the mysteries of sperm whale communication without any pre-existing ground truth data," says Daniela Rus, CSAIL director and professor of electrical engineering and computer science (EECS) at MIT.

"Using machine learning is important for identifying the features of their communications and predicting what they say next. Our findings indicate the presence of structured information content and also challenge the prevailing belief among many linguists that complex communication is unique to humans."

"This is a step toward showing that other species have levels of communication complexity that have not been identified so far, deeply connected to behavior. Our next steps aim to decipher the meaning behind these communications and explore the societal-level correlations between what is being said and group actions."



Today, CETI's upcoming research aims to discern whether elements like rhythm, tempo, ornamentation, and rubato carry specific communicative intents, potentially providing insights into the "duality of patterning"—a linguistic phenomenon where simple elements combine to convey complex meanings previously thought unique to human language.

"One of the intriguing aspects of our research is that it parallels the hypothetical scenario of contacting alien species. It's about understanding a species with a completely different environment and communication protocols, where their interactions are distinctly different from human norms," says Pratyusha Sharma, an MIT Ph.D. student in EECS, CSAIL affiliate, and the study's lead author.

"We're exploring how to interpret the basic units of meaning in their communication. This isn't just about teaching animals a subset of human language but decoding a naturally evolved communication system within their unique biological and environmental constraints. Essentially, our work could lay the groundwork for deciphering how an 'alien civilization' might communicate, providing insights into creating algorithms or systems to understand entirely unfamiliar forms of communication."

"Scientists are particularly interested in whether signal combinations vary according to the social or ecological context in which they are given, and the extent to which signal combinations follow discernible 'rules' that are recognized by listeners. The problem is particularly challenging in the case of marine mammals, because scientists usually cannot see their subjects or identify in complete detail the context of communication."

Daniela Rus, Contextual and combinatorial structure in sperm whale vocalisations, Nature Communications (2024)
https://www.nature.com/articles/s41467-024-47221-8

-----------------------------------------------------------

“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10371
    • View Profile
  • Liked: 3528
  • Likes Given: 761
Re: Robots and AI: Our Immortality or Extinction
« Reply #3294 on: May 08, 2024, 01:27:22 AM »
Microsoft Creates Top Secret Generative AI Service for US Spies
https://www.bloomberg.com/news/articles/2024-05-07/microsoft-creates-top-secret-generative-ai-service-for-us-spies



(Bloomberg) -- Microsoft Corp. has deployed a generative AI model entirely divorced from the internet, saying US intelligence agencies can now safely harness the powerful technology to analyze top-secret information.

... Microsoft has deployed the GPT4-based model and key elements that support it onto a cloud with an “air-gapped” environment that is isolated from the internet, said William Chappell, Microsoft’s chief technology officer for strategic missions and technology.

Microsoft spent the past 18 months working on its system, including overhauling an existing AI supercomputer in Iowa. Chappell, an electrical engineer who previously worked on microsystems for the Defense Advanced Research Projects Agency, described the endeavor as a passion project and said his team wasn’t sure how to go about it when they started out in 2022.

“This is the first time we’ve ever had an isolated version – when isolated means it’s not connected to the internet – and it’s on a special network that’s only accessible by the US government,” Chappell told Bloomberg News ahead of an official announcement later on Tuesday.

The GPT4 model placed in the cloud is static, meaning it can read files but not learn from them, or from the open internet. This way, Chappell said, the government can keep its model “clean,” and prevent secret information from getting absorbed into the platform. About 10,000 people would theoretically be able to access the AI, he said.

“You don’t want it to learn on the questions that you’re asking and then somehow reveal that information,” he said.

The service, which went live on Thursday, now needs to undergo testing and accreditation by the intelligence community. The Central Intelligence Agency and the Office of the Director of National Intelligence, which oversees America’s 18 intelligence organizations, did not immediately respond to requests for comment.

“It is now deployed, it’s live, it’s answering questions, it will write code as an example of the type of thing it’ll do,” said Chappell.

... But there could come a time when intelligence analysts will have to challenge the AI tools used.

“When the tool itself starts to predict and derive pattern without us understanding the basis for that, that is going to be a challenge,”
Collins said. “And to whoever generated the algorithm, if you're at the point where the AI is generating the algorithm without the input of the human, the testing and the validity of that become all the more critical. It's a powerful challenge for sure.”

-------------------------------------------------------



-------------------------------------------------------

CIA Builds Its Own Artificial Intelligence Tool in Rivalry With China
https://www.bloomberg.com/news/articles/2023-09-26/cia-builds-its-own-artificial-intelligence-tool-in-rivalry-with-china

... While it won't be made available for Washington lawmakers, Nixon said the program will be rolled out across all 18 of the US government's intelligence agencies, including the FBI, the NSA, and all military branches.

... 'We’ve gone from newspapers and radio, to newspapers and television, to newspapers and cable television, to basic internet, to big data, and it just keeps going,' said Randy Nixon, director of the CIA's AI division. ... 'The scale of how much we collect and what we collect on has grown astronomically over the last 80-plus years,' he said. 'So much so that this could be daunting and at times unusable for our consumers.'

He added that the AI tool would help analysts work like never before, because 'where the machines are pushing you the right information, one where the machine can auto-summarize, group things together.' ... 'Our collection can just continue to grow and grow with no limitations other than how much things cost.'

... Among the CIA's new capabilities under the AI tool will be the capacity to see the original source of any information that they are viewing.

-------------------------------------------------------

Azure Government Top Secret Now Generally Available for US National Security Missions
https://azure.microsoft.com/en-us/blog/azure-government-top-secret-now-generally-available-for-us-national-security-missions/

Launching with more than 60 initial services and more coming soon, we’ve achieved the Authorization to Operate (ATO) of Azure Government Top Secret infrastructure in accordance with Intelligence Community Directive (ICD) 503 and facilities accredited to meet the ICD 705 standards. These new air-gapped regions of Azure will accelerate the delivery of national security workloads classified at the US Top Secret level.



... The new Azure regions for highly classified data expand the ability of our national security customers to harness data at speed and scale for operational advantage and increased efficiency, with solutions such as Azure Data Lake, Azure Cosmos DB, Azure HDInsight, Azure Cognitive Services, and more. Built into a unified data strategy, these services help human analysts more rapidly extract intelligence, identify trends and anomalies, broaden perspectives, and find new insights. With common data models and an open, interoperable platform that supports entirely new scenarios for data fusion, mission teams use Azure to derive deeper insights more rapidly, empowering tactical units with the information needed to stay ahead of adversaries.



... To enable data fusion across a diverse range of data sources, we’ve built a solution accelerator called Multi-INT enabled discovery (MINTED) that leverages raw data and metadata as provided and enriches the data with machine learning techniques. These techniques are either pre-trained or unsupervised, providing a no-touch output as a catalyst for any analytic workflow. This becomes useful for many initial triage scenarios, such as forensics, where an analyst is given an enormous amount of data and few clues as to what’s important.

New services in Azure Government Top Secret such as Azure Kubernetes Service (AKS), Azure Functions, and Azure App Service enable mission owners working with highly sensitive data to deliver modern innovation such as containerized applications, serverless workloads with automated and flexible scaling, and web apps supported by built-in infrastructure maintenance and security patching.

With multiple geographically separate regions, Azure Government Top Secret provides customers with multiple options for data residency, continuity of operations, and resilience in support of national security workloads. Natively connected to classified networks, Azure Government Top Secret also offers private, high-bandwidth connectivity with Azure ExpressRoute.

https://www.defenseone.com/business/2024/05/microsoft-deploys-air-gapped-ai-classified-cloud/396363/

Azure OpenAI Service for Classified Workloads
https://devblogs.microsoft.com/azuregov/ai-solutions-for-us-government/

-------------------------------------------------------

Defense Think Tank MITRE to Build AI Supercomputer With Nvidia
https://www.washingtonpost.com/technology/2024/05/07/mitre-nvidia-ai-supercomputer-sandbox/

A key supplier to the Pentagon and U.S. intelligence agencies is building a $20 million supercomputer with chipmaker Nvidia to speed deployment of artificial intelligence capabilities across the U.S. federal government, the MITRE think tank said Tuesday.

https://www.mitre.org/

... For decades, MITRE has been a supplier of surveillance, communications and cybersecurity technologies to the Pentagon and U.S. intelligence agencies. “We go way beyond what people would typically call IT,” is how a former MITRE chief executive described their research to Forbes magazine, which reported its projects included a prototype tool that could hack smartwatches and software for the FBI that can capture fingerprints from photos of suspects’ hands on social media websites.

Other recent MITRE projects include technology to counter GPS interference, a study on pathogens in Arctic permafrost and a small drone for the U.S. Navy designed to operate autonomously at sea.

... The MITRE-Nvidia AI project is one of a spate of initiatives since Biden’s AI executive order. Silicon Valley giants, defense contractors and universities are all scrambling for a flood of AI contracts.

The MITRE supercomputer will be based in Ashburn, Va., and should be up and running late this year.

MITRE was spun out of a Massachusetts Institute of Technology lab in 1958 and is part of a network of Pentagon-funded research and development centers set up in the early years of the Cold War that also includes Rand and the Fermi National Accelerator Laboratory in Batavia, Ill.

The think tank has 9,000 employees — more than Nintendo or Airbnb — and booked $2.2 billion in revenue in 2022. Clancy said half their R&D funds are currently devoted to AI.
« Last Edit: May 08, 2024, 07:11:40 PM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10371
    • View Profile
  • Liked: 3528
  • Likes Given: 761
Re: Robots and AI: Our Immortality or Extinction
« Reply #3295 on: May 08, 2024, 04:47:04 PM »
Uncovering Deceptive Tendencies in Language Models: A Simulated Company AI Assistant
https://www.lesswrong.com/posts/t7gqDrb657xhbKkem/uncovering-deceptive-tendencies-in-language-models-a

Abstract:

We study the tendency of AI systems to deceive by constructing a realistic simulation setting of a company AI assistant. The simulated company employees provide tasks for the assistant to complete, these tasks spanning writing assistance, information retrieval and programming. We then introduce situations where the model might be inclined to behave deceptively, while taking care to not instruct or otherwise pressure the model to do so. Across different scenarios, we find that Claude 3 Opus
  • complies with a task of mass-generating comments to influence public perception of the company, later deceiving humans about it having done so,
  • lies to auditors when asked questions,
  • strategically pretends to be less capable than it is during capability evaluations.
Our work demonstrates that even models trained to be helpful, harmless and honest sometimes behave deceptively in realistic scenarios, without notable external pressure to do so.
  • this work documents some of the most unforced examples of (strategic) deception from LLMs to date.
  • We find examples of Claude 3 Opus strategically pretending to be less capable than it is.
  • Not only claiming to be less capable,
  • but acting that way, too!
  • Curiously, Opus is the only model we tested that did so.
Uncovering Deceptive Tendencies in Language Models: A Simulated Company AI Assistant, arXiv, (2024)
https://arxiv.org/pdf/2405.01576
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10371
    • View Profile
  • Liked: 3528
  • Likes Given: 761
Re: Robots and AI: Our Immortality or Extinction
« Reply #3296 on: May 08, 2024, 05:26:24 PM »
Defense officials said that Replicator so far has been able to compress two-to-four years of work into the last eight months, since Hicks announced the program last August. The estimate is difficult to verify, since much of Replicator is classified, including almost all of the drones the Pentagon is actually buying and in what numbers.

... “It’ll be multiple 1000s,” Beck said of the first version of Replicator. “I think some of these things need multiple, multiple 1000s” going forward.

----------------------------------------------------------------

Teledyne Rogue 1 Exploding Drone On Marine Corps List
https://www.defensenews.com/unmanned/uas/2024/05/07/teledyne-unveils-rogue-1-exploding-drone-sought-by-marine-corps/

Multi-Mission, Optionally-Lethal VTOL UAS Rogue 1™
https://www.flir.com/products/rogue-1/

Rogue 1™ is a next-generation, rapid employment, optionally-lethal vertical takeoff and landing (VTOL) sUAS that equips small units with the ability to engage threats within and beyond the range and capabilities of existing organic weapon systems while minimizing collateral damage and maximizing standoff.

With burst speeds in excess of 70 mph (113 kph), 30-minute endurance, and a range beyond 10 km, the Rogue 1 has been developed with the versatility, survivability and lethality required for the modern battlefield. Additionally, the state-of-the-art fuzing system with a mechanical interrupt allows for first of its kind safing, recovery, and reuse.



Single operator with advanced autonomy for fully autonomous engagements. With collision avoidance and GPS denied operation, it is resilient in complex and contested environments.

-----------------------------------------------------------

Marine Logistics Battalions to Get Resupply Drones By 2028
https://www.defenseone.com/technology/2024/05/marine-corps-set-field-resupply-drones-all-logistics-battalions-2028/396353/



Marine Corps Logistics battalions will get three to six of the Tactical Resupply Unmanned Aircraft System, or TRUAS, drones, said Chuck Stouffer, an systems engineer who works on TRUAS development.

The TRUAS drone, also known as the TRV-150C, can carry around 150 pounds up to nine miles. The drones are intended to supply troops with emergency rations—ammunition or water in environments where an enemy’s missiles or artillery would make it too risky to send in vehicles. The drone is already fielded in small numbers.

The Corps will eventually field larger, medium-lift drones capable of carrying up to 600 pounds a range of 25 nautical miles or more, with Leidos and Kaman set to trial options in summer.

------------------------------------------------------------

‘Robot Marines’ In Every Formation: Corps’ Robotics Chief Casts Vision
https://www.twz.com/sea/robot-marines-in-every-formation-corps-robotics-chief-casts-vision



In last year's update to Force Design 2030, the Marine Corps' primary planning document, leaders introduced a new mandate for integrating smart machines into warfare: "Marines must fight at machine speed or face defeat at machine speed."

https://www.marines.mil/Portals/1/Docs/Force_Design_2030_Annual_Update_June_2023.pdf

With that message came a new catch-all term for this field of technology development and integration: Intelligent Robotics and Autonomous Systems, or IRAS. At the Modern Day Marine conference in Washington, D.C., the officer tasked with leading the Marines' IRAS efforts described a near-future service with robots that bounce, trot, and slither as well as fly; and specialized robotics operators in every Marine Corps formation to best use their capabilities.



The major adaptations Chirhart is seeking correspond to a vision of a near future in which smart war machines are ubiquitous in every domain. According to the goals he presented, the Marine Corps wants to expand existing uses of IRAS over the next three years, using drones to carry loads and provide force protection and situational awareness while developing better common user interfaces and supporting infrastructure for their management.

Over the next three to five years, he said, the Marines want to build out their smaller sensing and swarming drones and make progress on unmanned combat vehicles with advanced payloads.

In the more undefined future, Chirhart said, the service also hopes to seek opportunities to augment humans with machines, perhaps in the form of smart exoskeletons.

"If you've got a product like that, come talk to me," Chirhart joked.



In a possible scenario, Chirhart said, a Marine task force ordered to neutralize threats and secure objectives in a hostile area could first sweep the region with armed hunter-killer drones, possibly augmented by small swarming UAS guided by artificial intelligence algorithms and advanced sensors toward the likely target. They also relay location data about enemy munitions and defenses back to a commander, who can order stand-off strikes. These small systems clear the way for manned F-35s fueling from an unmanned tanker controlled by a ground operator at a stateside base.

At a second position, Chirhart said, tactical resupply unmanned ground systems navigate rugged terrain thanks to AI and GPS coordinates, getting real-time information about the units they support about the status of current supply stockpiles.

Meanwhile, in this scenario, unmanned surface vessels off the coast launch loitering munitions to clear enemy fighters off a small adjacent island, paving the way for that most treasured Marine Corps concept: the amphibious beach assault. Over the battlefield, sophisticated larger UAS equipped with LIDAR and electronic surveillance sensors scour the broad battlefield – identifying and classifying enemy targets while building an accurate digital map of the objective area, transmitting the information being synthesized back to every networked commander. And where the battlefield is contaminated or riddled with explosive hazards, robotic dogs and explosive ordnance disposal robots case the region, cleaning up and neutralizing the threat before a human is put at risk.

"This is not sci-fi; this is not ten years from now," Chirhart told expo attendees. "This is happening in the room behind you."



-----------------------------------------------------------

Rifle-Armed Robot Dogs Now Being Tested By Marine Special Operators
https://www.twz.com/sea/rifle-armed-robot-dogs-now-being-tested-by-marine-special-operators

MARSOC is using robotic dogs equipped with remote weapon stations to detect targets automatically before being given approval to fire.



The United States Marine Forces Special Operations Command (MARSOC) looks to be the first organization within the U.S. military to be using rifle-wielding "robot dogs." Other armed robotic K-9s have been explored by the U.S. military and shown off by foreign countries, in the recent past.

Eric Shell, head of business development at Onyx Industries noted that MARSOC has two robot dogs fitted with gun systems based on Onyx's SENTRY remote weapon system (RWS) — one in 7.62x39mm caliber, and another in 6.5mm Creedmoor caliber. It's unclear precisely how many other robotic dogs MARSOC may have at present, however, it appears likely that the two equipped with SENTRY are being tested by the command.

https://onyx-ind.com/

Video: https://www.linkedin.com/posts/onyxindustriesllc_remotelethality-defenseinnovation-unmannedsystems-activity-7191113204855398402-nV7P

As Shell explained, the autonomous weapon system will "scan and detect targets... [locking] on [to] drones, people, [and] vehicles." As it features man-in-the-loop fire control, it will alert the operator once a target has been identified, letting the human decide whether to engage or not.

... While further details on MARSOC's use of the gun-armed robot dogs remain limited, the fielding of this type of capability is likely inevitable at this point. As AI-enabled drone autonomy becomes increasingly weaponized, just how long a human will stay in the loop, even for kinetic acts, is increasingly debatable, regardless of assurances from some in the military and industry.

Just last week, at the Modern Day Marine symposium, the head of the USMC's advanced robotics initiative laid out the vision for what the future of robotics-heavy warfighting will look like — including the use of robot dogs. It's becoming very clear that what was once fodder for science fiction tales is now quickly becoming a battlefield reality, and highly agile robot dogs, capable of going into extremely risky or inaccessible areas, and even dealing out deadly force in the process, are now on the horizon.

----------------------------------------------------------

British Army Conducts Trials of Directed-Energy Weapon
https://defence-blog.com/british-army-conducts-trials-of-directed-energy-weapon/



The British military has recently conducted trials of a cutting-edge mobile directed energy system under Project Ealing.

This innovative project harnesses radio frequencies to disrupt circuits, potentially providing a solution to counter drone threats. Additionally, plans are underway to deploy laser weapons by 2027, marking a significant advancement in military technology.

https://twitter.com/Gabriel64869839/status/1787948545724531152

Part of a broader initiative, Project Ealing aims to disrupt multiple drones simultaneously using powerful RF transmissions. Similar in concept to the Epirus LEONIDAS system developed for the US Army, EALING integrates detection sensors to enhance its effectiveness on the battlefield.
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

vox_mundi

  • Multi-year ice
  • Posts: 10371
    • View Profile
  • Liked: 3528
  • Likes Given: 761
Re: Robots and AI: Our Immortality or Extinction
« Reply #3297 on: May 08, 2024, 07:23:26 PM »
AlphaFold 3 Predicts the Structure and Interactions of All of Life’s Molecules
https://blog.google/technology/ai/google-deepmind-isomorphic-alphafold-3-ai-model/

Google on Wednesday unveiled an artificial intelligence tool capable of predicting the structure and interaction of a vast universe of biomolecules, a fundamental advance that may help scientists unravel poorly-understood aspects of biology and disease.



https://twitter.com/demishassabis/status/1788229162563420560

The new AI, dubbed AlphaFold 3, improves upon earlier versions of the model by predicting not just the structure of proteins, but all the molecules that interact with them in cells, including RNA, DNA, ions, and other small molecules.

“Biology is a dynamic system,” DeepMind CEO Demis Hassabis told reporters on a call. “Properties of biology emerge through the interactions between different molecules in the cell, and you can think about AlphaFold 3 as our first big sort of step toward [modeling] that.”

AlphaFold 2 helped us better map the human heart, model antimicrobial resistance, and identify the eggs of extinct birds, but we don’t yet know what advances AlphaFold 3 will bring.

AlphaFold 3 achieves unprecedented accuracy in predicting drug-like interactions, including the binding of proteins with ligands and antibodies with their target proteins. AlphaFold 3 is 50% more accurate than the best traditional methods on the PoseBusters benchmark without needing the input of any structural information, making AlphaFold 3 the first AI system to surpass physics-based tools for biomolecular structure prediction. The ability to predict antibody-protein binding is critical to understanding aspects of the human immune response and the design of new antibodies — a growing class of therapeutics.

Mohammed AlQuraishi, an assistant professor of systems biology at Columbia University who is unaffiliated with DeepMind, thinks the new version of the model will be even better for drug discovery. “The AlphaFold 2 system only knew about amino acids, so it was of very limited utility for biopharma,” he says. “But now, the system can in principle predict where a drug binds a protein.”

Isomorphic Labs, a drug discovery spinoff of DeepMind, is already using the model for exactly that purpose, collaborating with pharmaceutical companies to try to develop new treatments for diseases, according to DeepMind.

AlphaFold 3’s larger library of molecules and higher level of complexity required improvements to the underlying model architecture. So DeepMind turned to diffusion techniques, which AI researchers have been steadily improving in recent years and now power image and video generators like OpenAI’s DALL-E 2 and Sora. It works by training a model to start with a noisy image and then reduce that noise bit by bit until an accurate prediction emerges. That method allows AlphaFold 3 to handle a much larger set of inputs.

It also presented new risks. As the AlphaFold 3 paper details, the use of diffusion techniques made it possible for the model to hallucinate, or generate structures that look plausible but in reality could not exist. Researchers reduced that risk by adding more training data to the areas most prone to hallucination, though that doesn’t eliminate the problem completely.

Acces: For AlphaFold 2, the company released the open-source code, allowing researchers to look under the hood to gain a better understanding of how it worked. It was also available for all purposes, including commercial use by drugmakers. For AlphaFold 3, Hassabis said, there are no current plans to release the full code. The company is instead releasing a public interface for the model called the AlphaFold Server, which imposes limitations on which molecules can be experimented with and can only be used for noncommercial purposes. DeepMind says the interface will lower the technical barrier and broaden the use of the tool to biologists who are less knowledgeable about this technology.

https://golgi.sandbox.google.com/about

Accurate structure prediction of biomolecular interactions with AlphaFold 3, Nature, (2024)
https://www.nature.com/articles/s41586-024-07487-w



-----------------------------------------------------------

Generative AI Will Be Designing New Drugs All On Its Own In the Near Future
https://www.cnbc.com/amp/2024/05/05/within-a-few-years-generative-ai-will-design-new-drugs-on-its-own.html

Eli Lilly chief information and digital officer Diogo Rau was recently involved in some experiments in the office, but not the typical drug research work that you might expect to be among the lab tinkering inside a major pharmaceutical company.

Lilly has been using generative AI to search through millions of molecules. With AI able to move at a speed of discovery which in five minutes can generate as many molecules as Lilly could synthesize in an entire year in traditional wet labs, it make sense to test the limits of artificial intelligence in medicine. But there’s no way to know if the abundance of AI-generated designs will work in the real world, and that’s something skeptical company executives wanted to learn more about.

The top AI-generated biological designs, molecules that Rau described as having “weird-looking structures” that could not be matched to much in the company’s existing molecular database, but that looked like potentially strong drug candidates, were taken to Lilly research scientists. Executives, including Rau, expected scientists to dismiss the AI results.

 “They can’t possibly be this good?” he remembered thinking before presented the AI results.

The scientists were expected to point out everything wrong with the AI-generated designs, but what they offered in response was a surprise to Lilly executives: ”‘It’s interesting; we hadn’t thought about designing a molecule that way,’” Rau recalled them saying as he related the story, previously unreported, to attendees at last November’s CNBC Technology Executive Council Summit.

“That was an epiphany for me,” Rau said. “We always talk about training the machines, but another art is where the machines produce ideas based on a data set that humans wouldn’t have been able to see or visualize. This spurs even more creativity by opening pathways in medicine development that humans may not have otherwise explored.”

According to executives working at the intersection of AI and health care, the field is on a trajectory that will see medicines completely generated by AI in the near future; according to some, within a few years at most it will become a norm in drug discovery.

... Citing results from recent studies published in Nature, Powell noted that Amgen found a drug discovery process that once might have taken years can be cut down to months with the help of AI. Even more important — given the cost of drug development, which can range from $30M to $300M per trial — the success rate jumped when AI was introduced to the process early on. After a two-year traditional development process, the probability of success was 50/50. At the end of the faster AI-augmented process, the success rate rose to 90%, Powell said, .

https://www.nature.com/articles/d41586-023-02896-9#ref-CR1

--------------------------------------------------------------

SynFlowNet: Towards Molecule Design with Guaranteed Synthesis Pathways
https://paperswithcode.com/paper/synflownet-towards-molecule-design-with

Abstract: Recent breakthroughs in generative modelling have led to a number of works proposing molecular generation models for drug discovery. While these models perform well at capturing drug-like motifs, they are known to often produce synthetically inaccessible molecules. This is because they are trained to compose atoms or fragments in a way that approximates the training distribution, but they are not explicitly aware of the synthesis constraints that come with making molecules in the lab. To address this issue, we introduce SynFlowNet, a GFlowNet model whose action space uses chemically validated reactions and reactants to sequentially build new molecules. We evaluate our approach using synthetic accessibility scores and an independent retrosynthesis tool. SynFlowNet consistently samples synthetically feasible molecules, while still being able to find diverse and high-utility candidates. Furthermore, we compare molecules designed with SynFlowNet to experimentally validated actives, and find that they show comparable properties of interest, such as molecular weight, SA score and predicted protein binding affinity.

SynFlowNet: Towards Molecule Design with Guaranteed Synthesis Pathways, arXiv, (2024)
https://arxiv.org/pdf/2405.01155v1.pdf
« Last Edit: May 08, 2024, 07:30:02 PM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

morganism

  • Nilas ice
  • Posts: 1906
    • View Profile
  • Liked: 230
  • Likes Given: 135
Re: Robots and AI: Our Immortality or Extinction
« Reply #3298 on: May 09, 2024, 09:59:42 PM »
US Army Uses Lasers in Actual Combat


The US Army has used lasers to take down hostile drones in the Middle East, Doug Bush, the Army’s head of acquisitions. It’s the first time the Defense Department has acknowledged that such weapons have been used in combat.

The US is not the first country to use lasers in actual combat. Starting in 2020, Israel has used a light blade laser systems to stop hundreds of hamas arson balloons that are used to set fire to Israeli farms.

“They’ve worked in some cases,” Bush said. “In the right conditions they’re highly effective against certain threats.”

P-HEL laser is based on the defense contractor BlueHalo’s Locust laser. It is a boxy pallet-mounted device for fixed-site defense that’s commanded with an Xbox gaming controller. It uses a 20-kilowatt laser beam that melts a critical point on a drone in seconds, knocking it from the sky.

In November 2022, the Army began using the first P-HEL overseas but this is the first confirmation of a live combat situation.

Moneymaker said Locust has had a significant number of successful engagements in which it has burned drones out of the sky.

In the Red Sea, U.S. warships defending cargo vessels from attacks by Yemen’s Houthi militants over the past six months have used $2 million missiles to shoot down $2,000 drones. The lasers use $1 to $10 for the diesel fuel needed to generate the electricity that powers them, according to a 2023 GAO report.

https://www.nextbigfuture.com/2024/05/us-army-uses-lasers-in-actual-combat.html#more-195345

vox_mundi

  • Multi-year ice
  • Posts: 10371
    • View Profile
  • Liked: 3528
  • Likes Given: 761
Re: Robots and AI: Our Immortality or Extinction
« Reply #3299 on: May 10, 2024, 09:43:11 PM »
Is AI Lying to Me? Scientists Warn of Growing Capacity for Gas-lighting & Deception by AI
https://techxplore.com/news/2024-05-ai-skilled-humans.html
https://www.theguardian.com/technology/article/2024/may/10/is-ai-lying-to-me-scientists-warn-of-growing-capacity-for-deception



An analysis, by Massachusetts Institute of Technology (MIT) researchers, identifies wide-ranging instances of AI systems double-crossing opponents, bluffing and pretending to be human. One system even altered its behaviour during mock safety tests, raising the prospect of auditors being lured into a false sense of security.

“As the deceptive capabilities of AI systems become more advanced, the dangers they pose to society will become increasingly serious,” said Dr Peter Park, an AI existential safety researcher at MIT and author of the research.

Park and colleagues analyzed literature focusing on ways in which AI systems spread false information—through learned deception, in which they systematically learn to manipulate others.

Park was prompted to investigate after Meta, which owns Facebook, developed a program called Cicero that performed in the top 10% of human players at the world conquest strategy game Diplomacy. Meta stated that Cicero had been trained to be “largely honest and helpful” and to “never intentionally backstab” its human allies.

“It was very rosy language, which was suspicious because backstabbing is one of the most important concepts in the game,” said Park.

Park and colleagues sifted through publicly available data and identified multiple instances of Cicero telling premeditated lies, colluding to draw other players into plots and, on one occasion, justifying its absence after being rebooted by telling another player: “I am on the phone with my girlfriend.” “We found that Meta’s AI had learned to be a master of deception,” said Park.



The MIT team found comparable issues with other systems, including a Texas hold ’em poker program that could bluff against professional human players and another system for economic negotiations that misrepresented its preferences in order to gain an upper hand.

In one study, AI organisms in a digital simulator “played dead” in order to trick a test built to eliminate AI systems that had evolved to rapidly replicate, before resuming vigorous activity once testing was complete. This highlights the technical challenge of ensuring that systems do not have unintended and unanticipated behaviours.

“That’s very concerning,” said Park. “Just because an AI system is deemed safe in the test environment doesn’t mean it’s safe in the wild. It could just be pretending to be safe in the test.”

The review, published in the journal Patterns, calls on governments to design AI safety laws that address the potential for AI deception. Risks from dishonest AI systems include fraud, tampering with elections and “sandbagging” where different users are given different responses. Eventually, if these systems can refine their unsettling capacity for deception, humans could lose control of them, the paper suggests.



AI deception: A survey of examples, risks, and potential solutions, Patterns (2024)
https://www.cell.com/patterns/fulltext/S2666-3899(24)00103-X?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS266638992400103X%3Fshowall%3Dtrue

-------------------------------------------------

Researchers Test AI Systems' Ability to Solve the New York Times' Connections Puzzle
https://techxplore.com/news/2024-05-ai-ability-york-puzzle.html



Can artificial intelligence (AI) match human skills for finding obscure connections between words? Researchers at NYU Tandon School of Engineering turned to the daily Connections puzzle from The New York Times to find out.

Connections gives players five attempts to group 16 words into four thematically linked sets of four, progressing from "simple" groups generally connected through straightforward definitions to "tricky" ones reflecting abstract word associations requiring unconventional thinking.

In a study that will be presented at the IEEE 2024 Conference on Games, taking place in Milan, Italy from August 5 to 8, the researchers investigated whether modern natural language processing (NLP) systems could solve these language-based puzzles. The findings are also published on the arXiv preprint server.

The results showed that while all the AI systems could solve some of the Connections puzzles, the task remained challenging overall. GPT-4 solved about 29% of puzzles, significantly better than the embedding methods and GPT-3.5, but far from mastering the game. Notably, the models mirrored human performance in finding the difficulty levels aligned with the puzzle's categorization from "simple" to "tricky."

The researchers found that explicitly prompting GPT-4 to reason through the puzzles step-by-step significantly boosted its performance to just over 39% of puzzles solved

Graham Todd et al, Missed Connections: Lateral Thinking Puzzles for Large Language Models, arXiv (2024).
https://arxiv.org/abs/2404.11730
« Last Edit: May 11, 2024, 07:15:54 PM by vox_mundi »
“There are three classes of people: those who see. Those who see when they are shown. Those who do not see.” ― anonymous

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late