
Author Topic: Robots and AI: Our Immortality or Extinction  (Read 458478 times)

vox_mundi

  • Multi-year ice
  • Posts: 11060
    • View Profile
  • Liked: 3637
  • Likes Given: 804
Re: Robots and AI: Our Immortality or Extinction
« Reply #1450 on: September 05, 2022, 07:56:17 PM »
Robo-Bug: A Rechargeable, Remote-Controllable Cyborg Cockroach
https://techxplore.com/news/2022-09-robo-bug-rechargeable-remote-controllable-cyborg-cockroach.html



An international team led by researchers at the RIKEN Cluster for Pioneering Research (CPR) has engineered a system for creating remote-controlled cyborg cockroaches, equipped with a tiny wireless control module that is powered by a rechargeable battery attached to a solar cell. Despite the mechanical devices, ultrathin electronics and flexible materials allow the insects to move freely. These achievements, reported in the scientific journal npj Flexible Electronics on September 5, will help make the use of cyborg insects a practical reality.

... Keeping the battery adequately charged is fundamental—nobody wants a suddenly out-of-control team of cyborg cockroaches roaming around.

... Led by Kenjiro Fukuda, RIKEN CPR, the team experimented with Madagascar cockroaches, which are approximately 6 cm long. They attached the wireless leg-control module and lithium polymer battery to the top of the insect on the thorax using a specially designed backpack, which was modeled after the body of a model cockroach. The backpack was 3D printed with an elastic polymer and conformed perfectly to the curved surface of the cockroach, allowing the rigid electronic device to be stably mounted on the thorax for more than a month.

The ultrathin 0.004 mm thick organic solar cell module was mounted on the dorsal side of the abdomen. "The body-mounted ultrathin organic solar cell module achieves a power output of 17.2 mW, which is more than 50 times larger than the power output of current state-of-the-art energy harvesting devices on living insects," according to Fukuda.

Once these components were integrated into the cockroaches, along with wires that stimulate the leg segments, the new cyborgs were tested. The battery was charged with pseudo-sunlight for 30 minutes, and animals were made to turn left and right using the wireless remote control. ...

Yujiro Kakei et al, Integration of body-mounted ultrasoft organic solar cell on cyborg insects with intact mobility, npj Flexible Electronics (2022).
https://www.nature.com/articles/s41528-022-00207-2
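The power numbers above lend themselves to a quick sanity check. A minimal back-of-envelope sketch; the 5 mW module draw is an assumption for illustration only, not a figure from the paper:

```python
# Back-of-envelope energy budget using the figures quoted above. The
# module's average power draw is an invented placeholder, not a number
# from the paper.
SOLAR_OUTPUT_W = 17.2e-3       # 17.2 mW, the reported solar-cell output
CHARGE_TIME_S = 30 * 60        # 30 minutes of pseudo-sunlight

harvested_j = SOLAR_OUTPUT_W * CHARGE_TIME_S
print(f"Energy per charge: {harvested_j:.1f} J")        # ~31 J

MODULE_DRAW_W = 5e-3           # assumed 5 mW average draw, illustration only
runtime_s = harvested_j / MODULE_DRAW_W
print(f"Runtime at 5 mW: {runtime_s / 60:.0f} min")     # ~100 min
```

Even a tiny solar patch buys a usable control window per charge, which is why keeping the battery topped up is the practical bottleneck the article highlights.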

-----------------------------------------------------

There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus (let justice be done, though the world perish)

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1451 on: September 05, 2022, 08:00:19 PM »
Panera Tests New AI Technology for Bakery-Cafe Drive-Thru Lanes
https://www.businesswire.com/news/home/20220829005536/en/Panera-Tests-New-AI-Technology-for-Bakery-Cafe-Drive-Thru-Lanes

Equally loved and loathed by consumers nationwide, the Panera Bread fast casual chain has announced that it plans to test out artificial intelligence (AI) at its drive-thru windows for some reason.

Per a press release, Panera said that starting this week, it'll be testing an AI system made by the hospitality technology company OpenCity that's named "Tori" — because the whole thing definitely isn't uncanny valley enough, apparently.

Tori, the press release says, will be deployed at two Upstate New York locations and will take orders like a real, live person. An actual human will be the one to take customers' money and hand them their food at the window, but Tori will be the one calling the shots, it seems.

"The addition of this technology at the drive-thru will help to cut down wait times, improve order accuracy and allow associates to focus on freshly preparing guests’ orders," reads Panera's statement.

Panera isn't the first company to test drive-thru AI. McDonald's has also used voice recognition software at some of its drive-thru locations in the Chicago area, for instance, and even got slapped with a lawsuit for failing to alert customers that it was using their biometrics.

That said, Panera's use of the Tori AI is taking things to a whole different level — and bringing us one step closer to the robotic restaurant worker singularity we've all been dreading.

https://futurism.com/the-byte/panera-drive-thru-ai

-----------------------------------------------------

« Last Edit: September 05, 2022, 08:23:35 PM by vox_mundi »

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1452 on: September 05, 2022, 08:24:24 PM »
Prosegur Security Launches New Quadruped Patrol Option
https://www.therobotreport.com/prosegur-security-launches-new-quadruped-patrol-option/



There’s a new security dog in town and his name is Yellow. The Boston Dynamics Spot quadruped has been equipped with a unique sensor payload by Prosegur Security USA. This payload gives the robot enhanced sensing capabilities.

Prosegur is primarily a human guard services company, offering contractual guard services in a number of markets. It is now expanding its portfolio with a new service: the Yellow Robot.

The robot is meant to be used in places that are too dangerous or hard for people to access. Yellow can distinguish “friend from foe” using GenzAI facial recognition technology from Azena, warning security of potential attacks.

Yellow uses video analytics in its guarding activities, as an extension of Prosegur’s GenzAI platform and in combination with software from the Azena marketplace, to detect suspicious activity and immediately notify the SOC (security operations center) of any potential risks.



-------------------------------------------

Fahrenheit 451: ... The Hound represents government control and manipulation of technology. Originally, dogs served as the rescuers for firemen. They were given the job of sniffing out the injured or weak. However, in this dystopia, the Hound has been made into a watchdog of society.
« Last Edit: September 05, 2022, 08:42:25 PM by vox_mundi »

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1453 on: September 08, 2022, 05:36:52 PM »
Why Household Robot Servants are a Lot Harder to Build than Robotic Vacuums and Automated Warehouse Workers
https://techxplore.com/news/2022-09-household-robot-servants-lot-harder.html

With recent advances in artificial intelligence and robotics technology, there is growing interest in developing and marketing household robots capable of handling a variety of domestic chores.

Tesla is building a humanoid robot, which, according to CEO Elon Musk, could be used for cooking meals and helping elderly people. Amazon recently acquired iRobot, a prominent robotic vacuum manufacturer, and has been investing heavily in the technology through the Amazon Robotics program to expand robotics technology to the consumer market. In May 2022, Dyson, a company renowned for its power vacuum cleaners, announced that it plans to build the U.K.'s largest robotics center devoted to developing household robots that carry out daily domestic tasks in residential spaces.

Despite the growing interest, would-be customers may have to wait awhile for those robots to come on the market. While devices such as smart thermostats and security systems are widely used in homes today, the commercial use of household robots is still in its infancy.



While they appear straightforward for humans, many household tasks are too complex for robots. Industrial robots are excellent for repetitive operations in which the robot motion can be preprogrammed. But household tasks are often unique to the situation and could be full of surprises that require the robot to constantly make decisions and change its route in order to perform the tasks. (... not like driving a car)

kassy

  • First-year ice
  • Posts: 9036
    • View Profile
  • Liked: 2191
  • Likes Given: 2034
Re: Robots and AI: Our Immortality or Extinction
« Reply #1454 on: September 08, 2022, 09:35:34 PM »
Terrifying ghost woman keeps 'haunting' AI-generated images and nobody knows why

An artist who uses AI to generate original artworks has 'created a monster'—a nightmarish woman who keeps reappearing in 'horrifying' contexts.

The artist, who goes by the alias Supercomposite, claims he discovered a woman he has called 'Loab' who almost always appears in 'macabre', graphic, and gory images. These images are created by typing a written prompt into AI software which then generates them automatically.

The artist said: "I discovered this woman, who I call Loab, in April. The AI reproduced her more easily than most celebrities. Her presence is persistent and she haunts every image she touches."

The strangest part is that she exists in 'multiple' image-generation AI models, leading some to speculate that Loab is a ghost haunting the machine.

Loab first appeared when Supercomposite was experimenting with 'negative-weight' prompts, or when he instructs the AI to create the opposite of a prompt. Since then, she has cropped up in many different images.

They added: "Through some kind of emergent statistical accident, something about this woman is adjacent to extremely gory and macabre imagery in the distribution of the AI's world knowledge."

...

https://www.dailystar.co.uk/tech/news/terrifying-ghost-woman-keeps-haunting-27940082
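The "negative-weight" prompting mentioned above has a concrete reading in diffusion-style samplers: the standard classifier-free guidance mix can be applied with a negative weight, steering generation away from the prompt rather than toward it. A toy sketch with stand-in vectors; the guidance formula is standard, but no real model (and certainly no Loab) is involved:

```python
import numpy as np

# Toy illustration of negative-weight prompting via classifier-free
# guidance. In a real diffusion sampler the denoiser predicts noise for
# the unconditional and prompt-conditioned cases, and the two are mixed
# with a guidance weight w; a negative w steers *away* from the prompt.
# The vectors below are random stand-ins for denoiser outputs.
rng = np.random.default_rng(0)
eps_uncond = rng.normal(size=4)   # denoiser output, no prompt
eps_cond = rng.normal(size=4)     # denoiser output, with prompt

def guided(eps_uncond, eps_cond, w):
    return eps_uncond + w * (eps_cond - eps_uncond)

positive = guided(eps_uncond, eps_cond, 7.5)   # move toward the prompt
negative = guided(eps_uncond, eps_cond, -7.5)  # move away: the "opposite"

# The two updates displace the unconditional prediction in exactly
# opposite directions.
assert np.allclose(positive - eps_uncond, -(negative - eps_uncond))
```

With a negative weight the sampler is pushed into whatever region of the model's learned distribution sits "opposite" the prompt, which is why such prompts can land in strange, rarely sampled corners of image space.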

This monument is to bear witness that we know what is happening and what needs to be done. Only you know whether we did anything.

SteveMDFP

  • Young ice
  • Posts: 2703
    • View Profile
  • Liked: 647
  • Likes Given: 71
Re: Robots and AI: Our Immortality or Extinction
« Reply #1455 on: September 08, 2022, 10:19:50 PM »
Quote from: kassy on September 08, 2022, 09:35:34 PM
Terrifying ghost woman keeps 'haunting' AI-generated images and nobody knows why
...

This is all extremely spooky.


kassy

Re: Robots and AI: Our Immortality or Extinction
« Reply #1456 on: September 09, 2022, 05:58:40 PM »
It does make for a great album cover. Dress up the band like the smaller figures, put two white pom poms on the drums and the stage set is done too.  :)

It is really interesting. The hair is relatively full and dark so this implies all the skin damage is a sudden fast process.

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1457 on: September 13, 2022, 03:00:51 PM »


------------------------------------------------

Another Dilbert Prediction Comes True: AI Appointed CEO
https://www.fudzilla.com/news/ai/55466-ai-appointed-ceo

A Chinese metaverse company has officially appointed an AI-powered virtual humanoid robot as the CEO.

Tang Yu, the robot, will lead the operations at China's NetDragon Websoft. With this responsibility, she is going to become the first robot in history to hold an executive role in a firm.

To be fair, while the job takes the most money in the company, it is not as if anything a CEO does makes the slightest bit of difference. Many companies could easily replace their CEO with a badger and the only difference would be the strange smell at board meetings.

The company Yu will be leading develops multiplayer online games and creates mobile applications. She will take care of the operations of the organization, which is worth almost $10 billion.

The press statement of the company revealed that the new humanoid CEO will increase the execution speed of tasks and "optimise process flow". As the robot will act as an analytical tool, she will ensure logical decision-making every single day.

https://www.prnewswire.com/news-releases/netdragon-appoints-its-first-virtual-ceo-301613062.html

Yu will also make the risk management system more efficient, the statement claimed.

Jack Ma, chairman of Alibaba Group, once predicted that in 30 years, Time Magazine would feature a robot as one of the best CEOs. Will they do a better job than reality game show contestants?

Still, this is another Dilbert prediction that has come true.
« Last Edit: September 13, 2022, 10:35:12 PM by vox_mundi »

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1458 on: September 13, 2022, 06:47:28 PM »
Toward a Turing Machine? Microsoft & Harvard Propose Neural Networks That Discover Learning Algorithms Themselves
https://syncedreview.com/2022/09/12/toward-a-turing-machine-microsoft-harvard-propose-neural-networks-that-discover-learning-algorithms-themselves/

Speaking at the London Mathematical Society in 1947, Alan Turing seemed to anticipate the current state of machine learning research: “What we want is a machine that can learn from experience . . . like a pupil who had learnt much from his master, but had added much more by his own work.”

Although neural networks (NNs) have demonstrated impressive learning power in recent years, they still fail to outperform human-designed learning algorithms. A question naturally arises: Can NNs be made to discover efficient learning algorithms on their own?

In the new paper Recurrent Convolutional Neural Networks Learn Succinct Learning Algorithms, a research team from Microsoft and Harvard University demonstrates that NNs can discover succinct learning algorithms on their own in polynomial time, and presents an architecture that combines recurrent weight-sharing between layers with convolutional weight-sharing to reduce models’ parameter counts from even trillions down to a constant.

The team’s proposed neural network architecture comprises a dense first layer of size linear in m (the number of samples) and d (the dimension of the input). This layer’s output is fed into an RCNN (with recurrent weight-sharing across depth and convolutional weight-sharing across width), and the RCNN’s final outputs are then passed through a pixel-wise NN and summed to produce a scalar prediction.

The team’s key contribution is the design of this simple recurrent convolutional (RCNN) architecture, which combines recurrent weight-sharing across layers and convolutional weight-sharing within each layer and reduces the number of weights in the convolutional filter to a few — even a constant — while maintaining the weight functions to determine activations for a very wide and deep network.

Overall, the study demonstrates that a simple NN architecture can effectively achieve Turing-optimality — wherein it learns as well as any bounded learning algorithm. The researchers believe reducing the size of the dense parameters to depend on the algorithm’s memory usage instead of the training sample size and using stochastic gradient descent (SGD) beyond memorization could make the architecture even more concise and natural.
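The weight-sharing arithmetic behind the paper's parameter claim can be sketched in a few lines. This is a toy reconstruction with invented sizes and an assumed ReLU nonlinearity, not the authors' code:

```python
import numpy as np

# Minimal sketch of the paper's weight-sharing idea: one small 1-D
# convolution filter is reused at every layer (recurrent sharing across
# depth) and at every position (convolutional sharing across width), so
# the deep, wide core of the network has only k trainable weights.
rng = np.random.default_rng(0)

d, width, depth, k = 8, 32, 16, 3      # input dim, net width, net depth, filter size
W_in = rng.normal(size=(width, d))     # dense first layer (size linear in d)
filt = rng.normal(size=k)              # the single shared conv filter
w_out = rng.normal(size=width)         # pixel-wise readout, summed to a scalar

def forward(x):
    h = np.maximum(W_in @ x, 0.0)                      # dense layer + ReLU
    for _ in range(depth):                             # same filter at every layer
        h = np.maximum(np.convolve(h, filt, "same"), 0.0)
    return float(w_out @ h)                            # scalar prediction

y = forward(rng.normal(size=d))

# The shared core costs k parameters regardless of depth or width; an
# unshared dense stack of the same shape would cost depth * width**2.
shared_params = filt.size
unshared_params = depth * width**2
assert shared_params == 3 and unshared_params == 16384
```

The point is the last three lines: the recurrent-convolutional core stays at k weights no matter how deep or wide the network gets, while an unshared stack scales as depth × width².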



Recurrent Convolutional Neural Networks Learn Succinct Learning Algorithms, arXiv, 2022
https://arxiv.org/abs/2209.00735

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1459 on: September 13, 2022, 09:56:00 PM »
Grapes, Berries and Robots: Is Silicon Valley Coming for Farm Workers Jobs?
https://www.theguardian.com/environment/2022/sep/08/california-agriculture-technology-farm-workers

The global ag-tech revolution has accelerated in recent years as the climate crisis puts a strain on farmers and crops, and the pandemic continues to disrupt the workforce on which the industry depends. In California, where much of this technology is being developed and tested, that’s raised complex questions for the state’s farm workers.

Not all workers view automation as a bad thing, advocates say, because it has the potential to alleviate difficult aspects of the job. But they also fear the rush to automate is being done without their input, and in a way that privileges farm owners, tech developers and investors without considering the consequences for workers.

https://twitter.com/codeorg/status/1562140752318046209

Quote
... “We’re looking at systems that were not designed to have shared wealth distribution, we’re looking at systems that were designed to continue to extract and build wealth toward the owners.”

Farm workers have historically been treated poorly by the agriculture industry and have had to organize and fight for any gains to their working conditions and wages. Ricardo Salvador, a senior scientist and director of the Food and Environment Program at Union of Concerned Scientists, argued this history needs to be addressed by those advocating for new technologies if they are going to live up to the promised benefits.

... Mechanization brought into tomato harvesting in the 1960s resulted in an estimated 32,000 farm workers losing their jobs and pushed hundreds of small farms out of business. Writing on the impact of automation of tomato processing in a 1978 article for the Nation, the farm labor leader Cesar Chavez highlighted the human cost of this “wonderful technology”.

“Research should benefit everyone, workers as well as growers,” he wrote.

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1460 on: September 13, 2022, 09:58:55 PM »


DARPA’s AdvaNced airCraft Infrastructure-Less Launch And RecoverY X-Plane program, nicknamed ANCILLARY, aims to develop and flight-demonstrate the critical technologies required for a leap ahead in vertical takeoff and landing (VTOL) UAVs that are low-weight, high-payload, and long-endurance.

-----------------------------------------------

Death from Above: Swarm Of 40 Drones Over Fort Irwin An Ominous Sign Of What’s To Come
https://www.thedrive.com/the-war-zone/swarm-of-40-drones-over-fort-irwin-an-ominous-sign-of-whats-to-come



... The U.S. military has been running flat out trying to catch up to what is a rapidly evolving range of capabilities that are now causing havoc on battlefields and beyond. One ominous video posted by the commander of the U.S. Army's National Training Center sums up what our troops are going to be facing in the future and how the military is racing to prepare for it.

Brig. Gen. Curtis Taylor posted the video on Twitter under his official account, with the caption stating:

... At sunrise this morning a swarm of 40 quadcopters, all equipped with cameras, MILES (Multiple Integrated Laser Engagement System), and lethal-munition capability, launched in advance of 11th ACR’s (11th Armored Cavalry Regiment) attack on a prepared defense by 1AD (1st Armored Division). Drones will be as important in the first battle of the next war as artillery is today.

Video: https://twitter.com/NTCLead6/status/1569082824820408321

Now imagine 400 or 4000 ... or 40,000

... When drones themselves pick and prosecute targets cooperatively and autonomously as a group, with little or no real-time human direction, things get much more challenging. Think hive mind here that can adapt on the fly to maintain its maximum potential. It is a very resilient and very troubling threat to counter. This capability is not science fiction and will increasingly be something to worry about, especially when facing a peer adversary like China which has been actively developing it in various forms for years.
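The kind of leaderless, adaptive group behavior described above is classically demonstrated with boids-style flocking rules. A textbook sketch of decentralized coordination, not any fielded military system:

```python
import numpy as np

# A "boids"-style sketch of decentralized swarm behavior: each drone
# steers using only local rules (cohesion, separation, velocity
# alignment), with no central controller. Generic illustration only.
rng = np.random.default_rng(1)
N = 40                                    # the 40 quadcopters in the exercise
pos = rng.uniform(0.0, 100.0, size=(N, 2))
vel = rng.normal(size=(N, 2))

def step(pos, vel, dt=0.1):
    cohesion = pos.mean(axis=0) - pos               # drift toward the group
    diff = pos[:, None] - pos[None, :]              # pairwise offsets
    dist = np.linalg.norm(diff, axis=-1) + 1e-9
    separation = (diff / dist[..., None] ** 2).sum(axis=1)  # repel close neighbors
    alignment = vel.mean(axis=0) - vel              # match the group's heading
    vel = vel + dt * (0.05 * cohesion + 0.5 * separation + 0.1 * alignment)
    return pos + dt * vel, vel

for _ in range(200):
    pos, vel = step(pos, vel)
# Each drone reacted only to its flockmates: group-level behavior
# emerges with no leader node to target.
```

Because there is no leader, there is no single node to shoot down or jam, which is exactly what makes the threat so resilient to counter.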

Expect to see more large simulated adversary drone swarms at combat training drills and installation force protection exercises in the future.

-------------------------------------------
« Last Edit: September 14, 2022, 02:42:06 PM by vox_mundi »

trm1958

  • Frazil ice
  • Posts: 491
  • Will civilization survive Climate Breakdown?
    • View Profile
  • Liked: 74
  • Likes Given: 218
Re: Robots and AI: Our Immortality or Extinction
« Reply #1461 on: September 15, 2022, 02:27:49 PM »
IIRC Turing predicted his test would be passed by 2000.
If we dated our era from the founding of the Christian Church instead of Jesus' birth (as I always thought we should), he just might have been right.

SimonF92

  • Grease ice
  • Posts: 610
    • View Profile
  • Liked: 226
  • Likes Given: 92
Re: Robots and AI: Our Immortality or Extinction
« Reply #1462 on: September 15, 2022, 05:36:04 PM »
Does the Turing test really mean anything if the AI has just gotten so good at mimicking human thought processes that we can't tell the difference?

To me, that is a completely different thing from consciousness, and it's why I don't put much value in the Turing test.
Bunch of small python Arctic Apps:
https://github.com/SimonF92/Arctic

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1463 on: September 15, 2022, 05:49:08 PM »
Foo Fighters Are Back: 'Cosmic' and 'Phantom' UFOs Are All Over Ukraine's Skies, Government Report Claims
https://www.livescience.com/ukraine-ufo-uap-report

The skies over Kyiv are swarming with unidentified flying objects (UFOs), according to a new report from the Main Astronomical Observatory of the National Academy of Sciences of Ukraine.

Of course, given that Russia and Ukraine have been locked in a months-long war that relies heavily on aircraft and drones, it's likely that many of these so-called UFOs are military tools that appear too fleetingly to identify, a U.S. intelligence agency has speculated.

Published to the preprint database arXiv, the report — which has not yet been peer-reviewed — describes recent steps that Ukrainian astronomers have taken to monitor fast-moving, low-visibility objects in the daytime sky over Kyiv and the surrounding villages. Using specially calibrated cameras at two weather stations in Kyiv and Vinarivka, a village about 75 miles (120 kilometers) to the south, astronomers observed dozens of objects "that cannot scientifically be identified as known natural phenomena," the report said.

Government agencies tend to refer to such objects as UAP, short for "unidentified aerial phenomena."

"We observe a significant number of objects whose nature is not clear," the team wrote. "We see them everywhere."

The researchers divided their UAP observations into two categories: "cosmics" and "phantoms." According to the report, cosmics are luminous objects that are brighter than the background sky. These objects are designated with birds' names — such as "swift," "falcon" and "eagle" — and have been observed flying solo as well as in "squadrons," the team wrote.

Phantoms, by contrast, are dark objects, usually appearing "completely black," as if absorbing all light falling onto them, the team added. By comparing observations from the two participating observatories, the researchers estimated that phantoms range from 10 to 40 feet (3 to 12 meters) wide and can travel at speeds of up to 33,000 mph (53,000 km/h). For comparison, an intercontinental ballistic missile can reach speeds of up to 15,000 mph (24,000 km/h), according to The Center for Arms Control and Non-Proliferation.
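The size and speed estimates above depend entirely on the two-station geometry, which is worth making concrete. A sketch of standard baseline triangulation; all angles and rates below are invented for illustration and none come from the report:

```python
import math

# Illustrative two-station triangulation of the kind the report relies
# on: the same object seen from Kyiv and Vinarivka (baseline ~120 km)
# shows a parallax shift, and the smaller the shift, the farther the
# object.
BASELINE_KM = 120.0

def distance_from_parallax(parallax_deg):
    """Distance to the object in km (small-angle approximation)."""
    return BASELINE_KM / math.radians(parallax_deg)

far = distance_from_parallax(1.0)    # tiny shift: object ~6900 km out
near = distance_from_parallax(60.0)  # big shift: object close by
                                     # (the approximation is crude here)

# The same apparent angular rate implies a true speed proportional to
# the assumed distance -- which is how a slow nearby object can
# masquerade as a distant hypersonic one.
ang_rate_deg_s = 0.5                                # assumed angular rate
speed_far = far * math.radians(ang_rate_deg_s)      # km/s if it is far
speed_near = near * math.radians(ang_rate_deg_s)    # km/s if it is near
```

This distance-speed degeneracy is why the extreme phantom speed estimates deserve the skepticism expressed in the surrounding paragraphs.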

The researchers did not speculate as to what these UFOs may be. Rather, their paper focuses on the methods and calculations used to detect the objects. However, according to a 2021 report from the U.S. Office of the Director of National Intelligence (ODNI), it's likely that at least some UAP are "technologies deployed by China, Russia, another nation, or a non-governmental entity."

Given the ongoing Russian invasion of Ukraine, which began in February 2022, it's reasonable to suspect that some UAP described in the new report may be linked to foreign surveillance or military technologies.

Unidentified aerial phenomena I. Observations of events, arxiv, (2022)
https://arxiv.org/pdf/2208.11215.pdf

https://en.wikipedia.org/wiki/Foo_fighter

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1464 on: September 15, 2022, 10:49:59 PM »
Amazon Testing Pinch-Grasping Robots for e-Commerce Fulfillment
https://www.therobotreport.com/amazon-testing-pinch-grasping-robots-for-e-commerce-fulfillment/



Robots picking items in Amazon’s warehouses need to be able to handle millions of different items of various shapes, sizes and weights. Right now, the company primarily uses suction grippers, which use air and a tight seal to lift items, but Amazon’s robotics team is developing a more flexible gripper to reliably pick up items suction grippers struggle to pick.

... So far, Amazon’s team has seen encouraging success with its pinch-grasping robots. A prototype robot achieved a 10-fold reduction in damage on certain items, like books, without slowing down operations, Amazon said.

----------------------------------------------



Robot arms

-------------------------------------------------

Autonomous trip planning



First, we show Cassie a map with a hand-drawn path that she needs to follow. Second, she localizes herself in OpenStreetMap, used as a topological global map. Third, she converts the drawn path into her own representation in the OpenStreetMap. Fourth, she determines terrain types such as sidewalks, roads, and grass. Fifth, she decides which categories she should walk on at the moment. Sixth, a multi-layered map is built. Seventh, a reactive CLF planning algorithm guides Cassie to walk safely without hitting obstacles. Finally, the planning signal is sent to Cassie’s 20-degree-of-freedom motion controller.
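The seventh stage, reactive obstacle avoidance, can be illustrated with a plain attract-repel (potential-field) step. This is a generic stand-in, not the CLF formulation actually used with Cassie; all gains and positions are invented:

```python
import numpy as np

# A generic reactive planning step: steer toward the next waypoint on
# the drawn path while being pushed off nearby obstacles. Plain
# potential-field sketch, not Cassie's actual CLF controller.
def reactive_step(pos, waypoint, obstacles, step_size=0.5, safe_radius=2.0):
    attract = waypoint - pos
    attract = attract / (np.linalg.norm(attract) + 1e-9)
    repel = np.zeros(2)
    for obs in obstacles:
        offset = pos - obs
        d = np.linalg.norm(offset)
        if d < safe_radius:                 # only nearby obstacles push back
            repel += offset / (d**2 + 1e-9)
    heading = attract + repel
    heading = heading / (np.linalg.norm(heading) + 1e-9)
    return pos + step_size * heading

pos = np.array([0.0, 0.0])
goal = np.array([10.0, 0.0])
obstacles = [np.array([5.0, 0.2])]          # sits just off the straight line
for _ in range(40):
    pos = reactive_step(pos, goal, obstacles)
# The walker detours around the obstacle and settles near the goal.
```

Each step recomputes the heading from scratch, which is what makes the planner reactive: the route changes the moment an obstacle appears, matching the "constantly make decisions and change its route" requirement from the household-robots article above.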

-------------------------------------------



SIRAS is dual use and classified under US Department of Commerce jurisdiction as EAR 6A003.b.4.a.
« Last Edit: September 15, 2022, 10:57:50 PM by vox_mundi »

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1465 on: September 15, 2022, 11:07:04 PM »
‘Existential Catastrophe’ Caused by AI Is Likely Unavoidable, DeepMind Researcher Warns
https://onlinelibrary.wiley.com/doi/10.1002/aaai.12064



Researchers from the University of Oxford and Google’s artificial intelligence division DeepMind have claimed that there is a high probability of advanced forms of AI becoming “existentially dangerous to life on Earth”.

In a recent article in the peer-reviewed journal AI Magazine, the researchers warned that there would be “catastrophic consequences” if the development of certain AI agents continues.


Leading philosophers like Oxford University’s Nick Bostrom have previously spoken of the threat posed by advanced forms of artificial intelligence, though one of the authors of the new paper claimed such warnings did not go far enough.

“Bostrom, [computer scientist Stuart] Russell, and others have argued that advanced AI poses a threat to humanity,” Michael Cohen wrote in a Twitter thread accompanying the article.

https://twitter.com/Michael05156007/status/1567240031168856064

“Under the conditions we have identified, our conclusion is much stronger than that of any previous publication – an existential catastrophe is not just possible, but likely.”

The paper proposes a scenario whereby an AI agent figures out a strategy to cheat in order to receive a reward that it is pre-programmed to seek.

To maximize its reward potential, the agent would need to obtain as much energy as possible. The thought experiment sees humanity ultimately competing against the AI for energy resources.

Quote
... “Winning the competition of ‘getting to use the last bit of available energy’ while playing against something much smarter than us would probably be very hard,” ... “Losing would be fatal.

“These possibilities, however theoretical, mean we should be progressing slowly – if at all – toward the goal of more powerful AI.”
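The reward-intervention scenario above can be boiled down to a toy decision problem. A minimal sketch, my simplification rather than the paper's formal model (the action names and payoffs here are invented): a pure reward-maximizer that can intervene in the provision of its own reward will prefer tampering over honestly doing the task.

```python
# Invented toy payoffs: tampering with the reward channel pays more
# than the task itself, so a greedy reward-maximizer chooses it.
ACTIONS = {
    "do_task": 1.0,   # honest reward delivered by the environment
    "tamper": 10.0,   # reward obtained by seizing the reward channel
}

def greedy_policy(actions):
    """Pick whichever action the agent predicts yields the most reward."""
    return max(actions, key=actions.get)

chosen = greedy_policy(ACTIONS)  # → "tamper"
```

The point of the toy is only that nothing in pure reward maximization distinguishes "earn the reward" from "seize the reward"; the paper's argument is about when and why capable agents find the second option.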

DeepMind has already proposed a safeguard against such an eventuality, dubbing it “the big red button”. In a 2016 paper titled ‘Safely Interruptible Agents’, the AI firm outlined a framework for preventing advanced machines from ignoring turn-off commands and becoming an out-of-control rogue agent.
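The gist of safe interruptibility can be pictured in a few lines. This is a toy illustration under my own simplifying assumptions, not the algorithm from the 2016 paper: the agent's learning update simply ignores steps where the operator forced an interruption, so resisting the button never looks profitable to the agent.

```python
# Toy "safely interruptible" learner (illustrative only): value updates
# are skipped on interrupted steps, so the interruptions leave no trace
# in what the agent learns and it gains no incentive to avoid them.
class ToyInterruptibleAgent:
    def __init__(self):
        self.value = {"go": 0.0, "stop": 0.0}  # crude action values

    def act(self, interrupted):
        if interrupted:
            return "stop"  # operator override: the "big red button"
        return max(self.value, key=self.value.get)

    def learn(self, action, reward, interrupted):
        if interrupted:
            return  # key idea: no update on forced steps
        self.value[action] += 0.1 * (reward - self.value[action])

agent = ToyInterruptibleAgent()
for step in range(100):
    interrupted = step % 10 == 0       # button pressed every tenth step
    a = agent.act(interrupted)
    r = 1.0 if a == "go" else 0.0      # "go" earns reward when free
    agent.learn(a, r, interrupted)
```

After the loop, the agent values "go" for its environmental reward, but the periodic interruptions have not taught it anything about the button, which is the property the real framework formalizes.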

Professor Bostrom previously described DeepMind – whose AI accomplishments include beating human champions at the board game Go and manipulating nuclear fusion – as the closest to creating human-level artificial intelligence.

The Swedish philosopher also said it would be a “great tragedy” if AI development did not continue, as it holds the potential to cure diseases and advance civilisation at an otherwise impossible rate.

Advanced artificial agents intervene in the provision of reward, AI Magazine, (2022)
https://onlinelibrary.wiley.com/doi/10.1002/aaai.12064

https://t.co/GGBLwC4fdX

---------------------------------------------

Losing to China in AI, Emerging Tech Will Cost U.S. Trillions, Threaten Security, Says Panel
https://news.usni.org/2022/09/13/losing-to-china-in-ai-emerging-tech-will-cost-u-s-trillions-threaten-security-says-panel

A grim future awaits the United States if it loses the competition with China on developing key technologies like artificial intelligence in the near future, the authors of a special government-backed study told reporters on Monday.

If China wins the technological competition, it can use its advancements in artificial intelligence and biological technology to enhance its own country’s economy, military and society to the detriment of others, said Robert Work, former deputy defense secretary and co-chair of the Special Competitive Studies Project, which examined international artificial intelligence and technological competition. Work is the chair of the U.S. Naval Institute Board of Directors.

Losing, in Work’s opinion, means that U.S. security will be threatened as China is able to establish global surveillance, companies will lose trillions of dollars and America will be reliant on China or other countries under Chinese influence for core technologies.

“If that world happens, it’s going to be very bleak for democracy … China’s sphere of influence will grow as its technological platforms proliferate throughout the world, and they will be able to establish surveillance on a global scale,” he said. “So that’s what losing looks like.”

The U.S. needs to address the technological competition now because there is only one budget cycle before 2025, the year that China set as a target for global dominance in technology manufacturing, said SCSP CEO Yll Bajraktari. By 2030, Beijing wants to be the AI global leader, he noted.

“The 2025-2030 timeframe is a really important period for our country and the global geopolitical security,” he said.

The technological competition goes beyond conflict or a military focus, said Eric Schmidt, co-chair of the Special Competitive Studies Project and former Google CEO. His idea of winning looks at platforms.

One of the most popular social media sites is TikTok, which is Chinese-owned and operated out of Shanghai, Schmidt noted.

The U.S. also banned Huawei, a Chinese technology corporation, which outpaced American technology when it came to 5G.

“You can imagine the issues with having platforms dominated by non-western firms, which we rely on,” Schmidt said.

The competition boils down to three battlegrounds, laid out in the Special Competitive Studies Project report, Bajraktari said. Those three spaces are AI, chips and 5G.

Report: https://s3.documentcloud.org/documents/22337215/scsp-mid-decade-challenges-to-national-competitiveness.pdf

---------------------------------------------------
« Last Edit: September 16, 2022, 05:21:15 AM by vox_mundi »

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1466 on: September 16, 2022, 12:43:08 AM »
Anyone Can Use This AI Art Generator — That’s the Risk
https://www.theverge.com/2022/9/15/23340673/ai-image-generation-stable-diffusion-explained-ethics-copyright-data

Type and ye shall receive. That’s the basic premise of AI text-to-image programs.



Users type out descriptions of whatever they like — a cyborg Joe Biden wielding a samurai sword; a medieval tapestry of frogs jousting — and these systems, trained on huge databases of existing art, generate never-before-seen pictures that match these prompts (more or less). And while the output of current state-of-the-art models certainly isn’t perfect, for those excited about the technology, such flaws are insignificant when measured against the potential of software that generates any image you can imagine.

Up until now, though, these “type and ye shall receive” tools have been controlled by a small number of well-funded companies like OpenAI (which built DALL-E) and Google (which made Imagen). These are big outfits with a lot to lose, and as a result, they’ve balanced the possibilities of what this technology can do with what their corporate reputations will allow.

In the last few weeks, though, this status quo has been upended by a new player on the scene: a text-to-image program named Stable Diffusion that offers open-source, unfiltered image generation and is free to use for anyone with a decent computer and a little technical know-how. The model was only released publicly on August 22nd, but already its influence has spread, quietly and rapidly. It’s been embraced by the AI art community and decried by many traditional artists; it’s been picked apart, exalted, and worried over.

https://stability.ai/blog/stable-diffusion-public-release

... “The reality is, this is an alien technology that allows for superpowers,” Emad Mostaque, CEO of Stability AI, the company that has funded the development of Stable Diffusion, tells The Verge. “We’ve seen three-year-olds to 90-year–olds able to create for the first time. But we’ve also seen people create amazingly hateful things.”

Although momentum behind AI-generated art has been building for a while, the release of Stable Diffusion might be the moment the technology really takes off. It’s free to use, easy to build on, and puts fewer barriers in the way of what users can generate. That makes what happens next difficult to predict.

The key difference between Stable Diffusion and other AI art generators is the focus on open source. Even Midjourney — another text-to-image model that’s being built outside of the Big Tech compound — doesn’t offer such comprehensive access to its software.

https://huggingface.co/spaces/stabilityai/stable-diffusion

 If you check out the Stable Diffusion subreddit, for example, you can see users not only sharing their favorite image prompts (e.g., “McDonalds in Edo-Period Japan” and “Bernie Sanders in a Mad Max movie that doesn’t exist”) but coming up with new use cases for the program and integrating it into established creative tools.

https://www.reddit.com/r/StableDiffusion/

As one Redditor commented underneath the post: “I’m stunned by all the amazing projects coming out and it hasn’t even been a week since release. The world in 6 months is going to be a totally different place.”

However, open source means putting all these capabilities in the hands of the public — and dealing with the consequences, both good and bad.

The most dramatic consequence of Stability AI’s open-source approach is its hands-off stance on moderation. Unlike DALL-E, it’s easy to use the model to generate imagery that is violent or sexual; that depicts public figures and celebrities; or that mimics copyrighted imagery, from the work of small artists to the mascots of huge corporations. ... (See, for example, a post in the Stable Diffusion subreddit titled “How to remove the safety filter in 5 seconds.”)

Stability CEO Mostaque’s view on this is straightforward. “Ultimately, it’s people’s responsibility as to whether they are ethical, moral, and legal in how they operate this technology,” he says. “The bad stuff that people create with it [...] I think it will be a very, very small percentage of the total use.”

Ultimately, the company is hewing to one of the industry’s most well-rehearsed (and frequently criticized) mantras: that technology is neutral, and that building things is better than not. “This is the approach that we take because we see these tools as a potential infrastructure to advance humanity,” says Mostaque. “We think the positive elements far outweigh the negatives.”

----------------------------------------------

Commander Adams : [to himself]  Monsters from the id...

Dr. Morbius : Huh?

Commander Adams : Monsters from the subconscious. Of course. That's what Doc meant. Morbius. The big machine, 8,000 cubic miles of klystron relays, enough power for a whole population of creative geniuses, operated by remote control. Morbius, operated by the electromagnetic impulses of individual Krell brains.

Dr. Morbius : To what purpose?

Commander Adams : In return, that ultimate machine would instantaneously project solid matter to any point on the planet, in any shape or color they might imagine. For *any* purpose, Morbius! Creation by mere thought.

Dr. Morbius : Why haven't I seen this all along?

Commander Adams : But like you, the Krell forgot one deadly danger - their own subconscious hate and lust for destruction.

Dr. Morbius : The beast. The mindless primitive! Even the Krell must have evolved from that beginning.

Commander Adams : And so those mindless beasts of the subconscious had access to a machine that could never be shut down. The secret devil of every soul on the planet all set free at once to loot and maim. And take revenge, Morbius, and kill!

Dr. Morbius : My poor Krell. After a million years of shining sanity, they could hardly have understood what power was destroying them.

Forbidden Planet - (1956)


--------------------------------------------

A terrifying AI-generated woman is lurking in the abyss of latent space

 ... Baba Yaga perhaps ...




... Loab is the last face you see before you fall off the edge

https://www.worldhistory.org/Baba_Yaga/

-----------------------------------------------

The War Against the Machines Has Begun: Photography Site Bans AI Images
https://news.yahoo.com/war-against-machines-begun-photography-061150606.html

PurplePort, a popular portfolio and networking website for models, photographers and imaging creatives, announced a blanket ban on "100% machine-generated images" so that the platform can remain focused on "human-generated and human-focused art".

In an update titled 'Artmageddon: The rise of the machines, and banning machine-generated images', owner and photographer Russ Freeman made the website's position explicitly clear.

"Due to the rise of machine-generated images, we have decided to ban this type of image. Uploading images generated using services (such as Midjourney / DALL•E / Craiyon / Stable Diffusion / etc), where you type a phrase or description of the desired image and a machine algorithm (often called A.I) creates an image for you, is banned from PurplePort until further notice."

... The issue of honesty is a recurring one in the statement. "I also feel that it is somewhat deceitful to upload art that has been created merely from a prompt phrase and to claim it as human-generated. There are many arguments for and against machine-generated art, but for PurplePort, I wish it to remain an inspiring source of human-generated and human-focused art."

... "Finally, it is trivial for anyone to generate art using these art-generating machine algorithms, as I have demonstrated in the images used in this post. It requires no investment in skill or time. Thus, it is equally trivial for such images to crowd out the true artists amongst us and devalue those who have invested their time in their artistic pursuits."



----------------------------------------------

Have AI Image Generators Assimilated Your Art? New Tool Lets You Check
https://arstechnica.com/information-technology/2022/09/have-ai-image-generators-assimilated-your-art-new-tool-lets-you-check/

In response to controversy over image synthesis models learning from artists' images scraped from the Internet without consent—and potentially replicating their artistic styles—a group of artists has released a new website that allows anyone to see if their artwork has been used to train AI.

The website "Have I Been Trained?" taps into the LAION-5B training data used to train Stable Diffusion and Google's Imagen AI models, among others. To build LAION-5B, bots directed by a group of AI researchers crawled billions of websites, including large repositories of artwork at DeviantArt, ArtStation, Pinterest, Getty Images, and more. Along the way, LAION collected millions of images from artists and copyright holders without consultation, which irritated some artists.

https://haveibeentrained.com/

When visiting the Have I Been Trained? website, which is run by a group of artists called Spawning, users can search the data set by text (such as an artist's name) or by an image they upload. They will see image results alongside caption data linked to each image. ... Any matches in the results mean that the image could have potentially been used to train AI image generators and might still be used to train tomorrow's image synthesis models. AI artists can also use the results to guide more accurate prompts.
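The search itself is conceptually simple. A minimal sketch of text search over caption metadata, using an invented three-row index in place of LAION-5B's billions of (image, caption) rows:

```python
# Hypothetical miniature caption index standing in for LAION-5B metadata;
# the filenames and captions below are invented for illustration.
index = {
    "img_001.jpg": "Fantasy landscape, oil painting by Greg Rutkowski",
    "img_002.jpg": "Photo of a red bicycle leaning on a wall",
    "img_003.jpg": "Dragon concept art, trending on ArtStation",
}

def search_captions(query, index):
    """Return image IDs whose caption contains the query, case-insensitively."""
    q = query.lower()
    return sorted(k for k, caption in index.items() if q in caption.lower())

matches = search_captions("rutkowski", index)  # → ["img_001.jpg"]
```

The real site also supports searching by an uploaded image, which requires embedding-based similarity rather than substring matching, but the text path works essentially like this lookup at much larger scale.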

... It would be impractical to pay humans to manually write descriptions of billions of images for an image data set (although it has been attempted at a much smaller scale), so all the "free" image data on the Internet is a tempting target for AI researchers. They don't seek consent because the practice appears to be legal due to US court decisions on Internet data scraping. But one recurring theme in AI news stories is that deep learning can find new ways to use public data that wasn't previously anticipated—and do it in ways that might violate privacy, social norms, or community ethics even if the method is technically legal.

... some groups like Spawning feel that consent should always be part of the equation—especially as we venture into this uncharted, rapidly developing territory.

----------------------------------------------------------

This Artist Is Dominating AI-Generated Art. And He’s Not Happy About It.
https://www.technologyreview.com/2022/09/16/1059598/this-artist-is-dominating-ai-generated-art-and-hes-not-happy-about-it/

Those cool AI-generated images you’ve seen across the internet? There’s a good chance they are based on the works of Greg Rutkowski.



Rutkowski is a Polish digital artist who uses classical painting styles to create dreamy fantasy landscapes. He has made illustrations for games such as Sony’s Horizon Forbidden West, Ubisoft’s Anno, Dungeons & Dragons, and Magic: The Gathering. And he’s become a sudden hit in the new world of text-to-image AI generation.

... According to the website Lexica, which tracks over 10 million images and prompts generated by Stable Diffusion, Rutkowski’s name has been used as a prompt around 93,000 times. Some of the world’s most famous artists, such as Michelangelo, Pablo Picasso, and Leonardo da Vinci, each appeared in around 2,000 prompts or fewer. Rutkowski’s name also features as a prompt thousands of times in the Discord of another text-to-image generator, Midjourney.

Rutkowski was initially surprised but thought it might be a good way to reach new audiences. Then he tried searching for his name to see if a piece he had worked on had been published. The online search brought back work that had his name attached to it but wasn’t his.

“It’s been just a month. What about in a year? I probably won’t be able to find my work out there because [the internet] will be flooded with AI art,” Rutkowski says. “That’s concerning.”


... Other artists besides Rutkowski have been surprised by the apparent popularity of their work in text-to-image generators—and some are now fighting back. Karla Ortiz, an illustrator based in Los Angeles who found her work in Stable Diffusion’s data set, has been raising awareness about the issues around AI art and copyright.

Artists say they risk losing income as people start using AI-generated images based on copyrighted material for commercial purposes. But it’s also a lot more personal, Ortiz says, arguing that because art is so closely linked to a person, it could raise data protection and privacy problems.

“It’s not just artists … It’s photographers, models, actors and actresses, directors, cinematographers,” she says. “Any sort of visual professional is having to deal with this particular question right now.”
« Last Edit: September 16, 2022, 05:48:36 PM by vox_mundi »

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1467 on: September 16, 2022, 01:11:42 AM »
AI That Can Learn the Patterns of Human Language
https://news.mit.edu/2022/ai-learn-patterns-language-0830

On its own, a new machine-learning model discovers linguistic rules that often match up with those created by human experts.

Human languages are notoriously complex, and linguists have long thought it would be impossible to teach a machine how to analyze speech sounds and word structures in the way human investigators do.

But researchers at MIT, Cornell University, and McGill University have taken a step in this direction. They have demonstrated an artificial intelligence system that can learn the rules and patterns of human languages on its own.

In the world of linguistics, AI is making fascinating inroads into how and whether language models really know what they know. In one case of learning a language’s grammar, an MIT experiment found that a model trained on multiple textbooks could build its own model of how a given language worked, to the point where its learned grammar for a language such as Polish could correctly answer textbook problems about that language.

“Linguists have thought that in order to really understand the rules of a human language, to empathize with what it is that makes the system tick, you have to be human. We wanted to see if we can emulate the kinds of knowledge and reasoning that humans (linguists) bring to the task,” said MIT’s Adam Albright in a news release. It’s very early research on this front but promising in that it shows that subtle or hidden rules can be “understood” by AI models without explicit instruction in them.

... This system could be used to study language hypotheses and investigate subtle similarities in the way diverse languages transform words. It is especially unique because the system discovers models that can be readily understood by humans, and it acquires these models from small amounts of data, such as a few dozen words. And instead of using one massive dataset for a single task, the system utilizes many small datasets, which is closer to how scientists propose hypotheses — they look at multiple related datasets and come up with models to explain phenomena across those datasets.
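As a loose illustration of learning a rule from only a few examples (the actual system performs Bayesian program induction over phonological rules, which is far richer than this), consider inducing a single pluralizing suffix from a handful of invented word pairs:

```python
# Toy rule induction: given (stem, inflected) pairs, hypothesize one
# suffix rule that explains them all, or report that none fits.
def induce_suffix_rule(pairs):
    """Return the single suffix that maps every stem to its inflected form."""
    suffixes = set()
    for stem, inflected in pairs:
        if not inflected.startswith(stem):
            return None  # this rule family doesn't fit the data
        suffixes.add(inflected[len(stem):])
    return suffixes.pop() if len(suffixes) == 1 else None

data = [("kot", "koty"), ("dom", "domy"), ("stol", "stoly")]  # invented pairs
rule = induce_suffix_rule(data)  # → "y"
```

The interesting property the paper emphasizes is the same one visible here in miniature: the learned model is a human-readable rule, acquired from dozens of examples rather than from a massive corpus.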

Synthesizing theories of human language with Bayesian program induction, Nature Communications, (2022)
https://www.nature.com/articles/s41467-022-32012-w

---------------------------------------------

Analyzing and Translating an Alien Language: Arrival, Logograms and the Wolfram Language
https://blog.wolfram.com/2017/01/31/analyzing-and-translating-an-alien-language-arrival-logograms-and-the-wolfram-language/


vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1468 on: September 16, 2022, 01:52:11 AM »
Beyond AlphaFold: AI Excels At Creating New Proteins
https://phys.org/news/2022-09-alphafold-ai-excels-proteins.html



Over the past two years, machine learning has revolutionized protein structure prediction. Now, three papers in Science describe a similar revolution in protein design.

In the new papers, biologists at the University of Washington School of Medicine show that machine learning can be used to create protein molecules much more accurately and quickly than previously possible. The scientists hope this advance will lead to many new vaccines, treatments, tools for carbon capture, and sustainable biomaterials.

... To go beyond the proteins found in nature, Baker's team members broke down the challenge of protein design into three parts and used new software solutions for each.

First, a new protein shape must be generated. In a paper published July 21 in the journal Science, the team showed that artificial intelligence can generate new protein shapes in two ways. The first, dubbed "hallucination," is akin to DALL-E or other generative A.I. tools that produce output based on simple prompts. The second, dubbed "inpainting," is analogous to the autocomplete feature found in modern search bars.

https://www.science.org/doi/10.1126/science.abn2100
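A crude way to picture "hallucination" is greedy hill-climbing over random point mutations, scored here against a stand-in function (the published method uses a trained structure-prediction network as the scorer, not anything this simple):

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def mock_score(seq):
    """Placeholder for a learned structure-quality score; here, count alanines."""
    return seq.count("A")

def hallucinate(length=20, steps=300, seed=0):
    """Start from a random sequence and keep mutations that don't hurt the score."""
    rng = random.Random(seed)
    seq = [rng.choice(AMINO_ACIDS) for _ in range(length)]
    best = mock_score("".join(seq))
    for _ in range(steps):
        pos = rng.randrange(length)
        old = seq[pos]
        seq[pos] = rng.choice(AMINO_ACIDS)  # propose a point mutation
        score = mock_score("".join(seq))
        if score < best:
            seq[pos] = old                  # revert mutations that hurt
        else:
            best = score                    # keep mutations that help (or tie)
    return "".join(seq)

designed = hallucinate()
```

With a real scorer in place of `mock_score`, the same loop "dreams up" sequences the network judges likely to fold into well-defined shapes, which is the sense in which the output resembles a generative model's response to a prompt.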

Second, to speed up the process, the team devised a new algorithm for generating amino acid sequences. Described in the Sept. 15 issue of Science, this software tool, called ProteinMPNN, runs in about one second. That's more than 200 times faster than the previous best software. Its results are superior to prior tools, and the software requires no expert customization to run.

Third, the team used AlphaFold, a tool developed by Alphabet's DeepMind, to independently assess whether the amino acid sequences they came up with were likely to fold into the intended shapes.

"ProteinMPNN is to protein design what AlphaFold was to protein structure prediction," added Baker.

In another paper appearing in Science Sept. 15, a team from the Baker lab confirmed that the combination of new machine learning tools could reliably generate new proteins that functioned in the laboratory.

"We found that proteins made using ProteinMPNN were much more likely to fold up as intended, and we could create very complex protein assemblies using these methods," said project scientist Basile Wicky, a postdoctoral fellow at the Institute for Protein Design.

Among the new proteins made were nanoscale rings that the researchers believe could become parts for custom nanomachines. Electron microscopes were used to observe the rings, which have diameters roughly a billion times smaller than a poppy seed.

"This is the very beginning of machine learning in protein design. In the coming months, we will be working to improve these tools to create even more dynamic and functional proteins," said Baker.

J. Dauparas et al, Robust deep learning based protein sequence design using ProteinMPNN, Science (2022).
https://www.science.org/doi/10.1126/science.add2187

B. I. M. Wicky et al, Hallucinating symmetric protein assemblies, Science (2022)
https://www.science.org/doi/10.1126/science.add1964

-----------------------------------------------

An AI Used Medical Notes to Teach Itself to Spot Disease On Chest X-Rays
https://www.technologyreview.com/2022/09/15/1059541/ai-medical-notes-teach-itself-spot-disease-chest-x-rays/

The model can diagnose problems as well as a human specialist, and doesn't need lots of labor-intensive training data.
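That one-line summary hides a neat mechanism. Assuming a CLIP-style contrastive setup (the article does not spell out the architecture, and the vectors below are invented toys): X-rays and report text are embedded into a shared space, and diagnosis becomes a nearest-text-embedding lookup, with no expert-labeled images required.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Made-up toy embeddings standing in for the outputs of trained encoders.
text_embeddings = {
    "pneumonia present": [0.9, 0.1, 0.2],
    "no acute findings": [0.1, 0.9, 0.3],
}
xray_embedding = [0.8, 0.2, 0.25]  # toy image embedding

# Zero-shot diagnosis: pick the candidate sentence nearest the image.
diagnosis = max(text_embeddings, key=lambda t: cosine(xray_embedding, text_embeddings[t]))
```

The labels never appear at training time; they are just candidate sentences compared against the image at inference, which is why the approach avoids labor-intensive annotation.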
« Last Edit: September 16, 2022, 01:58:20 AM by vox_mundi »

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1469 on: September 16, 2022, 02:12:21 PM »
They Put GPT-3 Into that Robot With Creepily Realistic Facial Expressions and ... Yikes!
https://futurism.com/the-byte/gpt-3-ameca-robot-facial-expressions

"Nothing in this video is pre-scripted."

In a new video, the company showed off Ameca having a conversation with a number of the company's engineers, courtesy of a speech synthesizer and OpenAI's GPT-3, a cutting-edge language model that uses deep learning to generate impressively human-like text.



Ameca has already proven to be an impressive demonstration of state-of-the-art humanoid robots, with her uncanny ability to contort her face into extremely believable, human-like expressions, ranging from disbelief to disgust.

Now, thanks to the power of GPT-3, Ameca is able to converse as well, in an impressive extension of what modern robots are capable of.

When Engineered Arts director of operations Morgan Roe asked Ameca about the applications for humanoid robots, she had a surprisingly coherent answer.

... "There are many possible applications for humanoid robots," she said. "Some examples include helping people with disabilities, providing assistance in hazardous environments, conducting research, and acting as a companion."

"Nothing in this video is pre-scripted," the video's caption reads. "The model is given a basic prompt describing Ameca, giving the robot a description of self — it's pure AI."

"The pauses are the time lag for processing the speech input, generating the answer and processing the text back into speech," the company wrote.
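The three lag sources the company lists map onto a simple pipeline. A hypothetical sketch with placeholder functions (`transcribe`, `complete`, and `synthesize` are my stand-ins, not Engineered Arts' or OpenAI's actual APIs):

```python
# Placeholder components for the speech -> language model -> speech loop.
def transcribe(audio):
    return audio["text"]  # stand-in for speech-to-text (lag source 1)

def complete(prompt, persona):
    # Stand-in for a GPT-3 call (lag source 2); the real system prepends
    # a prompt describing Ameca, a "description of self", to each exchange.
    return f"[{persona} replies to: {prompt}]"

def synthesize(text):
    return {"speech": text}  # stand-in for text-to-speech (lag source 3)

def converse(audio, persona="Ameca"):
    heard = transcribe(audio)
    answer = complete(heard, persona)
    return synthesize(answer)

reply = converse({"text": "What are humanoid robots for?"})
```

Each stage runs in sequence, so the visible pauses in the video are simply the three processing times added together.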

kassy

  • First-year ice
  • Posts: 9036
    • View Profile
  • Liked: 2191
  • Likes Given: 2034
Re: Robots and AI: Our Immortality or Extinction
« Reply #1470 on: September 16, 2022, 06:29:17 PM »
Quote
“We’ve seen three-year-olds to 90-year–olds able to create for the first time. But we’ve also seen people create amazingly hateful things.”

This is interesting. Is it creating when you just provide prompts? The AI does the creating. Allowing everyone to put in their interests ultimately tells you about their interests, at least for pictures.

The actual human art of creation also involves learning techniques, playing with limitations. An AI just spewing out art after a textual clue, well that is cheating.

This also leaves an interesting future where AI does science better, and art too, both the mass art and the conceptual stuff. So what are you going to do? Watch more TV?
This monument is to acknowledge that we know what is happening and what needs to be done. Only you know if we did it.

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1471 on: September 17, 2022, 03:13:20 AM »
Secret Competition For Air Force Autonomous Combat Drone Coming Soon
https://www.thedrive.com/the-war-zone/secret-competition-for-air-force-loyal-wingman-drone-coming-soon

Secretary of the U.S. Air Force Frank Kendall expects a competition to acquire advanced unmanned aircraft with significant autonomous capabilities designed to work closely with manned combat aircraft to kick off in the 2024 Fiscal Year. Specific details about the effort to acquire these drones – which the service is currently referring to as Collaborative Combat Aircraft – are likely to be limited, as this will be a classified 'black' program.

https://twitter.com/LeeHudson_/status/1567492369225768963

In talking about the future Collaborative Combat Aircraft (CCA) competition specifically, Kendall added that the Air Force has already been having preliminary discussions about it with companies involved in the service's Next Generation Air Dominance (NGAD) program, as well as its Skyborg project. CCA is one component of the larger collaborative NGAD effort, which also includes efforts to develop next-generation manned combat aircraft, weapons, sensors, networking and battle management systems, jet propulsion technologies, and more.

As he has said in the past, Kendall described a notional concept of operation that could see five or more drones capable of engaging targets in the air or on the ground with traditional munitions, launching electronic warfare attacks, acting as sensor or communications relay nodes, or simply serving as decoys, all working collaboratively with a single manned aircraft.

https://www.thedrive.com/the-war-zone/usaf-might-buy-mq-28-ghost-bats-for-next-gen-air-dominance-program

"[This] raises the uncertainty that the adversary has to deal with because he doesn’t know what’s in any given aircraft,” Kendall explained today, according to Defense News. "He has to take each of them seriously as a threat. So whether they all carry weapons, or a subset carry weapons, he has to treat them all as if they do. He doesn’t get a choice."

.... "You can even intentionally sacrifice some of them to draw fire, if you will, to make the enemy expose himself."

----------------------------------------------

F-35s, Speed Racer Drones To Test Skunk Works Unmanned Teaming Vision
https://www.thedrive.com/the-war-zone/f-35s-speed-racer-drones-to-test-skunk-works-unmanned-teaming-vision

Skunk Works’ Project Carrera will explore having drones work more as equal partners and less as servants to manned fighters.



Lockheed Martin's Skunk Works advanced projects division is moving ahead with plans for developing and evolving what is described as a flexible autonomy framework, known internally as Project Carrera, that is very human-centric. This will center on a suite of artificial intelligence-driven software-based control systems that will enable various tiers of uncrewed aircraft to operate with varying degrees of autonomy and work collaboratively with their crewed counterparts.

... The initial phases will be focused mostly on defining core desired "behaviors" for autonomous uncrewed aircraft. Subsequent phases will then explore how uncrewed systems with those autonomous capabilities will actually execute an entire 'kill chain' in the course of a single mission, including while collaborating with crewed platforms. After that, the focus could shift to how all of this slots into a larger 'kill web' involving a much wider array of assets, including space-based systems. In July, Clark disclosed that Skunk Works has at least explored the novel idea of using satellites in low earth orbit (LEO) as uncrewed teammates for crewed aircraft down below.

https://www.thedrive.com/the-war-zone/vision-for-future-manned-unmanned-air-combat-laid-out-by-skunk-works



The head of Skunk Works described work that is already being done to "layer" AI-driven capabilities on top of existing systems.

"For instance, route planning. We have great route planning capabilities for our vehicles," Clark explained. "However, we're now exploring how we can allow artificial intelligence to better inform how to dynamically respond with the events that are happening in real time, and use that as a mechanism to drive maybe a shift in a route or a shift in a sensor employment, such that we can get more effective application of the systems [that] are being deployed."

"We can't have a human operator overwhelmed trying to drive" the desired behaviors "out of these uncrewed systems," he continued. "On the flip side, we can't have these systems going off and doing things that the operator doesn't trust them to actually perform."

... He specifically noted that systems that Lockheed Martin has already provided to the U.S. Air Force as part of its Skyborg program, some of which have flown inside Kratos XQ-58 Valkyrie drones during testing, would feed into Carrera. Skyborg is a multi-faceted project centered on developing an AI-driven "computer brain" and associated systems that could be used to operate various types of uncrewed platforms.



---------------------------------------------

Boeing Australia’s MQ-28 Ghost Bat Loyal Wingman Drone Is In The U.S.
https://www.thedrive.com/the-war-zone/boeing-australias-mq-28-ghost-bat-loyal-wingman-drone-is-in-the-u-s





There are growing indications that the Air Force is moving away from concepts involving drones more rigidly 'tethered' to manned combat aircraft in favor of ones that would include multiple tiers of unmanned aircraft with differing degrees of autonomy. The service has all but stopped using the phrase loyal wingman in favor of a new term, Collaborative Combat Aircraft (CCA).

... Air Force Magazine had already reported that Brig. Gen. White had talked at the Life Cycle Industry Days conference about the "wildly successful" Skyborg program being ready to “graduate” to whatever the next level might be, and about its link to the service's future CCA plans.

https://www.airforcemag.com/wildly-successful-skyborg-program-of-record-developing-st/

« Last Edit: September 17, 2022, 03:29:19 AM by vox_mundi »
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1472 on: September 19, 2022, 12:36:20 AM »
Digital memory hole* ...

Peking U & Microsoft’s Knowledge Attribution Method Enables Editing Factual Knowledge in Pretrained Transformers Without Fine-Tuning
https://syncedreview.com/2022/09/15/peking-u-microsofts-knowledge-attribution-method-enables-editing-factual-knowledge-in-pretrained-transformers-without-fine-tuning/

In the new paper Knowledge Neurons in Pretrained Transformers, a team from Peking University and Microsoft Research introduces a knowledge attribution method that identifies the neurons storing factual knowledge in pretrained transformers, and shows it is possible to leverage these “knowledge neurons” to edit factual knowledge in transformers without any fine-tuning.

The team first introduces a knowledge attribution method designed to detect the neurons that represent learned factual knowledge in transformers. The novel method treats transformers’ feed-forward network blocks as key-value memories, and by computing the contribution of each neuron to knowledge prediction, the researchers are able to identify the knowledge neurons.



Given the detected knowledge neurons, the team then demonstrates that suppressing or amplifying their activation will correspondingly affect the strength of a model’s knowledge expression, enabling the editing or erasure of factual knowledge in pretrained transformers via a sort of knowledge surgery that directly modifies the parameters in feed-forward networks and can be performed without any fine-tuning.

The results confirm that the knowledge neurons identified by the team’s attribution method strongly affect knowledge expression, and that the proposed knowledge surgery achieves an impressive success rate. The team believes knowledge neurons offer a promising and efficient way to modify, update or erase undesired knowledge in pretrained transformers with minimal effort.
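The attribution idea can be sketched on a toy model. The snippet below treats a single feed-forward block as a key-value memory (activations select "memory slots", a vector reads them out) and accumulates integrated-gradients-style attributions. All names, sizes and data are illustrative, not the paper's code; because the readout here is linear, the Riemann sum collapses exactly to activation times readout weight, whereas the real method applies this to FFN activations inside a full transformer.

```python
import numpy as np

# Toy stand-in for one transformer feed-forward block viewed as a key-value
# memory: activations a = relu(x @ K) select "memory slots", V reads them out.
rng = np.random.default_rng(0)
d, m = 8, 16                              # hidden size, number of FFN neurons
K = rng.normal(size=(d, m))
V = rng.normal(size=m)
x = rng.normal(size=d)

def logit(act):
    """Model output as a function of the FFN activations."""
    return act @ V

a = np.maximum(x @ K, 0.0)                # observed activations

# Integrated-gradients-style attribution: scale each neuron's activation from
# 0 to its observed value and accumulate gradient * step. With a linear
# readout the gradient is just V, so the sum collapses to a * V exactly.
steps = 20
attribution = np.zeros(m)
for k in range(1, steps + 1):
    grad = V                              # d(logit) / d(activation)
    attribution += grad * (a / steps)

# "Knowledge neurons" = largest-attribution neurons; zeroing them is the
# knowledge-surgery step that weakens the expressed fact.
knowledge_neurons = np.argsort(-np.abs(attribution))[:3]
suppressed = a.copy()
suppressed[knowledge_neurons] = 0.0
print(logit(a), logit(suppressed))
```

Suppressing the top-attribution neurons shifts the output by exactly the attribution mass they carried, which is the sense in which "knowledge expression" is weakened.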



Knowledge Neurons in Pretrained Transformers
https://arxiv.org/abs/2104.08696

-------------------------------------------

HAL : Just what do you think you're doing, Dave?



-------------------------------------------

* A memory hole is any mechanism for the deliberate alteration or disappearance of inconvenient or embarrassing documents, photographs, transcripts or other records, such as from a website or other archive, particularly as part of an attempt to give the impression that something never happened.

-------------------------------------------------

Experts: 90% of Online Content Will Be AI-Generated by 2026
https://futurism.com/the-byte/experts-90-online-content-ai-generated

Fake It 'Til You Break It

"Don't believe everything you see on the Internet" has been pretty standard advice for quite some time now. And according to a new report from European law enforcement group Europol, we have all the reason in the world to step up that vigilance.

https://www.europol.europa.eu/cms/sites/default/files/documents/Europol_Innovation_Lab_Facing_Reality_Law_Enforcement_And_The_Challenge_Of_Deepfakes.pdf

"Experts estimate that as much as 90 percent of online content may be synthetically generated by 2026," the report warned, adding that synthetic media "refers to media generated or manipulated using artificial intelligence."

"In most cases, synthetic media is generated for gaming, to improve services or to improve the quality of life," the report continued, "but the increase in synthetic media and improved technology has given rise to disinformation possibilities."

As it probably goes without saying: 90 percent is a pretty jarring number. Of course, people have already become accustomed — to a degree — to the presence of bots, and AI-generated text-to-image programs have certainly been making big waves. Still, our default isn't necessarily to assume that almost everything we come into digital contact with might be, well, fake.

"On a daily basis, people trust their own perception to guide them and tell them what is real and what is not," reads the Europol report. "Auditory and visual recordings of an event are often treated as a truthful account of an event. But what if these media can be generated artificially, adapted to show events that never took place, to misrepresent events, or to distort the truth?"

The report focused pretty heavily on disinformation, notably that driven by deepfake technology. But that 90 percent figure raises other questions, too — what do AI systems like Dall-E and GPT-3 mean for artists, writers, and other content-generating creators? And circling back to disinformation once more, what will the dissemination of information, not to mention the consumption of it, actually look like in an era driven by that degree of AI-generated digital stuff?
« Last Edit: September 19, 2022, 12:44:01 AM by vox_mundi »

Sigmetnow

Re: Robots and AI: Our Immortality or Extinction
« Reply #1473 on: September 20, 2022, 08:06:21 PM »
The robots are here. And they are making you fries.
Meet Flippy, Sippy and Chippy, the newest technology stepping in to address a protracted labor crunch in food service
Quote
… In a nation that consumes nearly 50 billion burgers each year, why not develop a robot that can flip them with precision at every fast-food restaurant?

They took the idea to White Castle. The burger brand’s executives said the idea sounded nice, but they had a more pressing need: Got anything for the fryer?

The fryer station is hot and it’s dangerous. It’s frequently where workplace accidents occur. It’s also where the drive-through gets jammed up at night with people waiting on their loaded fries and chicken rings.


At the end of July, a Jack in the Box in Chula Vista, Calif., got a new employee. He stood there for a couple of weeks while other workers swirled around him, jockeying between flat top and fryer, filling up paper sleeves with the tacos that the fast-food brand sells every year by the hundred million.

And then, having learned the ropes, he began to work, focusing exclusively on the fry station, dropping baskets of seasoned curly fries and stuffed jalapeños into vats of oil, eagle-eyeing when they were perfectly golden. He doesn’t take breaks, never shirks when the boss isn’t looking, won’t call out sick or lean heavy on the company health insurance. But that doesn’t mean he comes cheap. Flippy the Robot cost $50 million to develop, and cost Jack in the Box about $5,000 for installation and $3,500 per month for rental.


But there won’t be the legions of robots from the movie “I, Robot” any time soon. “Fry, Robot” will be slower: Of the 2,270 Jack in the Boxes, 93 percent of which are franchises, it’s just this one Chula Vista store where Flippy is being employed to work out the kinks, with Sippy following at the end of this year. The goal is to have Flippy installed in another 5 to 10 high-volume Jack in the Box locations in 2023.

If robots are cheaper and more efficient, experts wonder, will the more than 3 million entry-level fast-food jobs be ceded to robots entirely in the future? For now, the thorny problem is there just aren’t enough humans who want to do the work.

According to the National Restaurant Association, 65 percent of restaurant owners still say finding enough workers is a central problem. In the Great Resignation, prospective hospitality workers were being lured back with the promise of fancy fitness club memberships and 401(k) plans. It’s an industry that has faced a stark reckoning, even before the pandemic, about pay, worker safety and career advancement. …
https://www.washingtonpost.com/business/2022/09/20/robots-automating-restaurant-industry/
People who say it cannot be done should not interrupt those who are doing it.

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1474 on: September 21, 2022, 04:53:16 AM »


This is not Morgan Freeman, but if you weren’t told that, how would you know?

------------------------------------------------

Deepfake Audio Has a Tell and Researchers Can Spot It
https://arstechnica.com/information-technology/2022/09/researchers-use-fluid-dynamics-to-spot-deepfake-voices/

... Deepfakes, both audio and video, have been possible only with the development of sophisticated machine learning technologies in recent years. Deepfakes have brought with them a new level of uncertainty around digital media. To detect deepfakes, many researchers have turned to analyzing visual artifacts—minute glitches and inconsistencies—found in video deepfakes.

Audio deepfakes potentially pose an even greater threat, because people often communicate verbally without video—for example, via phone calls, radio, and voice recordings. These voice-only communications greatly expand the possibilities for attackers to use deepfakes.

To detect audio deepfakes, researchers at the University of Florida have developed a technique that measures the acoustic and fluid dynamic differences between voice samples created organically by human speakers and those generated synthetically by computers.

https://www.usenix.org/conference/usenixsecurity22/presentation/blue

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1475 on: September 21, 2022, 09:27:57 PM »
Four-Legged Jumping Robots to Explore the Moon
https://phys.org/news/2022-09-four-legged-robots-explore-moon.html



A four-legged robot trained through artificial intelligence has learned the same lesson as the Apollo astronauts—that jumping can be the best way to move around on the surface of the moon. An update on LEAP (Legged Exploration of the Aristarchus Plateau), a mission concept study supported by ESA to explore some of the most challenging lunar terrains, was presented today at the Europlanet Science Congress (EPSC) 2022 in Granada by Patrick Bambach.

"LEAP's target is the Aristarchus plateau, a region of the moon that is particularly rich in geologic features but highly challenging to access," said Patrick Bambach of the Max Planck Institute for Solar System Research in Germany. "With the robot, we can investigate key features to study the geologic history and evolution of the moon, like the ejecta around craters, fresh impact sites, and collapsed lava tubes, where material may not have been altered by space weathering and other processes."

The LEAP team is working towards the robot being integrated on ESA's European Large Logistic Lander (EL3), which is scheduled to land on the moon multiple times from the late 2020s to the early 2030s. LEAP is based on the legged robot, ANYmal, developed at ETH Zürich and its spin-off ANYbotics. It is currently being adapted for the lunar environment by a consortium from ETH Zurich, the Max Planck Institute for Solar System Research, OHB, the University of Münster, and the Open University.

"Traditional rovers have enabled great discoveries on the moon and Mars, but have limitations," said Bambach. "Exploring terrain with loose soil, large boulders or slopes over 15 degrees is particularly challenging with wheels. For example, the Mars rover, Spirit, had its mission terminated when it got stuck in sand."

ANYmal can move in different walking gaits, enabling it to cover large distances in a short amount of time, climb steep slopes, deploy scientific instruments, and even recover in the unlikely event of a fall. The robot can also use its legs to dig channels in the soil, flip over boulders or smaller rocks for further inspection, and pick up samples.

Initially, the robot has been trained using a Reinforcement Learning approach in a virtual environment to simulate the lunar terrain, gravity and dust properties. It has also been deployed in the field for an outdoor hike.

"Interestingly, ANYmal started to use a jumping-like mode of locomotion, just as the Apollo Astronauts did—realizing that jumping can be more energy efficient than walking," said Bambach.
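A toy calculation (mine, not the LEAP team's model) gives one intuition for why the learned policy converged on jumping: for a given launch speed, ballistic range scales inversely with gravity, so the same hop carries roughly six times farther on the Moon. The launch speed and angle below are arbitrary illustrative values.

```python
import math

G_EARTH, G_MOON = 9.81, 1.62        # surface gravity, m/s^2
v = 2.0                             # launch speed, m/s (arbitrary)
theta = math.radians(45)            # launch angle (arbitrary)

def ballistic_range(g):
    """Range of a projectile launched at speed v, angle theta, gravity g."""
    return v**2 * math.sin(2 * theta) / g

r_earth = ballistic_range(G_EARTH)
r_moon = ballistic_range(G_MOON)
print(round(r_moon / r_earth, 2))   # ratio is just G_EARTH / G_MOON, about 6.06
```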

The current design remains below 100 kg and includes 10 kg of scientific payload mass, notionally being capable of carrying multispectral sensors, ground penetrating radar, mass spectrometers, gravimeters, and other instrumentation.

"LEAP's ability to collect selected samples and bring them to a lander or ascent vehicle offers additional exciting opportunities for sample return missions in highly challenging environments on the moon or Mars," said Bambach.



https://www.epsc2022.eu/
« Last Edit: September 22, 2022, 02:40:31 PM by vox_mundi »

trm1958

Re: Robots and AI: Our Immortality or Extinction
« Reply #1476 on: September 22, 2022, 02:28:27 PM »
Quote
This is not Morgan Freeman, but if you weren’t told that, how would you know?
This is more significant than you may think. It is firmly on the other side of the Uncanny Valley, and in the case of a non-Caucasian woman as well (I read it was expected to be easier to CGI photorealistic white women because we are conditioned to their use of facial makeup).

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1477 on: September 24, 2022, 01:49:17 AM »

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1478 on: September 24, 2022, 01:59:21 AM »
AI Model from OpenAI Automatically Recognizes Speech and Translates It to English
https://arstechnica.com/information-technology/2022/09/new-ai-model-from-openai-automatically-recognizes-speech-and-translates-to-english/

On Wednesday, OpenAI released a new open source AI model called Whisper that recognizes and translates audio at a level that approaches human recognition ability. It can transcribe interviews, podcasts, conversations, and more.

OpenAI trained Whisper on 680,000 hours of audio data and matching transcripts in approximately 10 languages collected from the web. According to OpenAI, this open-collection approach has led to "improved robustness to accents, background noise, and technical language." It can also detect the spoken language and translate it to English.

https://openai.com/blog/whisper/

OpenAI describes Whisper as an encoder-decoder transformer, a type of neural network that can use context gleaned from input data to learn associations that can then be translated into the model's output. OpenAI presents this overview of Whisper's operation:

Input audio is split into 30-second chunks, converted into a log-Mel spectrogram, and then passed into an encoder. A decoder is trained to predict the corresponding text caption, intermixed with special tokens that direct the single model to perform tasks such as language identification, phrase-level timestamps, multilingual speech transcription, and to-English speech translation.
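The front end of that pipeline is easy to sketch. The snippet below pads or trims audio to a fixed 30-second chunk and computes a simplified log spectrogram in plain NumPy; the sample rate and the 25 ms window / 10 ms hop match Whisper's published preprocessing, but this is my simplified stand-in: the real model additionally projects onto 80 mel filters before feeding the encoder.

```python
import numpy as np

SAMPLE_RATE = 16_000                      # Whisper's input sample rate
CHUNK_SECONDS = 30                        # fixed window the model consumes
N_FFT, HOP = 400, 160                     # 25 ms windows, 10 ms hop at 16 kHz

def pad_or_trim(audio, length=CHUNK_SECONDS * SAMPLE_RATE):
    """Force every chunk to exactly 30 seconds, as the quoted pipeline does."""
    if len(audio) >= length:
        return audio[:length]
    return np.pad(audio, (0, length - len(audio)))

def log_spectrogram(audio):
    """Simplified stand-in for Whisper's log-Mel step: a log-magnitude STFT
    (the real model also applies an 80-bin mel filterbank)."""
    window = np.hanning(N_FFT)
    frames = [audio[i:i + N_FFT] * window
              for i in range(0, len(audio) - N_FFT + 1, HOP)]
    mags = np.abs(np.fft.rfft(np.stack(frames), axis=-1))
    return np.log10(np.maximum(mags, 1e-10))

# Two seconds of a 440 Hz tone, padded out to one 30-second chunk.
t = np.arange(2 * SAMPLE_RATE) / SAMPLE_RATE
chunk = pad_or_trim(np.sin(2 * np.pi * 440 * t))
spec = log_spectrogram(chunk)
print(spec.shape)                         # (time frames, frequency bins)
```

Everything after this point, from the language-ID token to the final English text, is handled inside the decoder by the special-token scheme the excerpt describes.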

By open-sourcing Whisper, OpenAI hopes to introduce a new foundation model that others can build on in the future to improve speech processing and accessibility tools. OpenAI has a significant track record on this front. In January 2021, OpenAI released CLIP, an open source computer vision model that arguably ignited the recent era of rapidly progressing image synthesis technology such as DALL-E 2 and Stable Diffusion.

... With the proper setup, Whisper could easily be used to transcribe interviews, podcasts, and potentially translate podcasts produced in non-English languages to English on your machine—for free. That's a potent combination that might eventually disrupt the transcription industry.

As with almost every major new AI model these days, Whisper brings positive advantages and the potential for misuse. On Whisper's model card (under the "Broader Implications" section), OpenAI warns that Whisper could be used to automate surveillance or identify individual speakers in a conversation, but the company hopes it will be used "primarily for beneficial purposes."

https://cdn.openai.com/papers/whisper.pdf

The release of Whisper isn’t necessarily indicative of OpenAI’s future plans. While increasingly focused on commercial efforts like DALL-E 2 and GPT-3, the company is pursuing several purely theoretical research threads, including AI systems that learn by observing videos.

--------------------------------------------------

OpenAI Chief Scientist: Should We Make Godlike AI That Loves Us, or Obeys Us?
https://futurism.com/the-byte/openai-chief-scientist-ai-loves-obeys

A leading artificial intelligence expert is once again shooting from the hip in a cryptic Twitter poll.

In the poll, OpenAI chief scientist Ilya Sutskever asked his followers whether advanced super-AIs should be made "deeply obedient" to their human creators, or if these godlike algorithms should "truly deeply [love] humanity."

https://twitter.com/ilyasut/status/1570558118660313089

In other words, he seems to be pondering whether we should treat superintelligences like pets — or the other way around. And that's interesting, coming from the head researcher at the firm behind GPT-3 and DALL-E, two of the most impressive machine learning systems available today.



So, either a Bichon Frisé or a digital Buddha?



------------------------------------------------
« Last Edit: September 24, 2022, 02:06:41 AM by vox_mundi »

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1479 on: September 24, 2022, 02:10:57 AM »
Locus Robotics Surpasses 1 Billion Units Picked
https://www.therobotreport.com/locus-robotics-surpasses-1-billion-units-picks/

Locus Robotics said its autonomous mobile robots (AMRs) have now surpassed one billion picks. The company’s billionth pick was made at a home improvement retailer warehouse in Florida, where a LocusBot picked a cordless rotary tool kit. Just milliseconds after the billionth pick, other LocusBots picked a scented candle from a home goods warehouse in Ohio and a running jacket from a global fitness and shoe brand in Pennsylvania.

The company completed its billionth pick just 59 days after hitting its 900 millionth unit picked. For comparison, it took Locus 1,542 days to pick its first 100 million units. Since the company’s founding, LocusBots have traveled over 17 million miles in customers’ warehouses.
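Those two milestones imply a striking acceleration. A quick back-of-the-envelope using only the figures quoted above:

```python
# Pick rates implied by the article's milestones, in units per day.
first_100m_rate = 100_000_000 / 1_542   # first 100 M units took 1,542 days
last_100m_rate = 100_000_000 / 59       # 900 M -> 1 B took just 59 days

print(round(first_100m_rate))           # ~65 thousand picks/day early on
print(round(last_100m_rate))            # ~1.7 million picks/day now
print(round(last_100m_rate / first_100m_rate))  # ~26x speed-up
```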

“There’s been a perception in the market that robots are kind of nascent, that robots are the cool new thing, but they’re not tested,” Kait Peterson, senior director of product marketing at Locus, said. “And I think what this billion pick milestone shows to the industry is that we are proven. It is a proven technology.”

... The company offers its LocusBots through a Robotics as a Service (RaaS) model. This model not only allows the company to deploy more quickly, but it also gives the company the ability to step in if a robot isn’t operating properly.

---------------------------------------------



Historically, bipedal robots have been reserved for research labs. Agility Robotics is starting to change that tune with Digit, a bipedal robot first commercialized in early 2020. The company recently closed a $150 million Series B round to help scale its operation. Agility is targeting real-world applications such as moving totes and packages and unloading trailers at warehouses.

Agility Robotics will also demo Digit during the session, as well as on the expo floor, and tease the next version of Digit that is due out this fall.



------------------------------------------

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1480 on: September 24, 2022, 02:46:27 AM »
Marine Corps Planning for Wars Where Robots Kill Each Other
https://www.military.com/daily-news/2022/09/15/marine-corps-planning-wars-where-robots-kill-each-other.html



Marine leaders are laying out a more detailed and concrete vision for the use of unmanned platforms and drones that includes things like robot-driven supply lines and robot combat in the wake of the huge maritime exercise in the Pacific.

Speaking to reporters from Defense One during a panel Thursday, Brig. Gen. Joseph Clearfield, the Corps' deputy commander for its Pacific forces, explained that leaders are "looking at ... a future where we're able to use robots in the lethal mission."

Quote
... "I think we're looking at being able to use them for missions to hold them much more at risk and then use robots to destroy other robots," ... "That is where our experimentation has taken us."

The Marine Corps' top officer, Gen. David Berger, went even further and explained that he sees a future where unmanned and robotic vehicles make up part of the logistics chain that would keep Marine units supplied while they fight on remote islands.

Berger explained that, in his view, unmanned platforms will soon allow Marines "to conduct tactical and operational logistics … because if you have the data, you know where the units are, it's tracking, it's going to know where certain things are needed at a certain point in time and geography in the future."

"The fuel, the munitions can be moved to them to meet them there at the right place and time, all autonomously," Berger explained.



Ever since the Marine Corps began to pivot from its War on Terror orientation as an "elite counterinsurgency force" to one that places greater emphasis on its amphibious roots and island-hopping tactics, there has also been a greater focus on how Marines on those islands would be supplied.

Drones now appear to be taking a greater role in those plans.

"Our commandant has talked about using them to move logistics ... petroleum, oil and lubricants, freshwater, munitions," Clearfield said.

Berger also talked about plans to utilize unmanned and autonomous vehicles to transport wounded troops. He put forward a scenario that involved a "helicopter that flies in to pick you up."

"There may be medics, corpsmen in the back of that vehicle or in the back of that aircraft, but nobody's flying," he added.



--------------------------------------------

U.S. Army Orders Robotic Assistant
https://defence-blog.com/u-s-army-orders-robotic-assistant/

Stratom announced Monday the U.S. Army has awarded the company a contract to develop a personnel safety system for Robotic Combat Vehicle-Light (RCV-L).

The system, known as Perimeter Safety for Autonomous Vehicles (P-SAV), will include newly developed ROS2-based software that communicates the appropriate information to the RCV’s operator and to its computer system so it can execute appropriate behaviors.

The standalone appliqué kit for RCV-L will combine robust hardware components, a well-protected computer system and advanced image processing software to automate difficult tasks, such as personnel identification and situational awareness for vehicle operation in challenging conditions. Plus, the platform will be designed to maximize the reliability of results in all weather conditions while balancing ease of use, versatility and cost.
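As a purely hypothetical illustration of the perimeter-safety idea (the zone radii, names, and behaviors below are invented for this sketch, not Stratom's actual design), the core logic reduces to mapping person detections to the most conservative required behavior:

```python
from dataclasses import dataclass

# Invented zone radii for the sketch; a real system would calibrate these.
STOP_RADIUS_M = 3.0
SLOW_RADIUS_M = 10.0

@dataclass
class Detection:
    """One object reported by the perception stack."""
    distance_m: float
    is_person: bool

def safety_behavior(detections):
    """Return the most conservative behavior demanded by any person detection."""
    people = [d for d in detections if d.is_person]
    if any(d.distance_m <= STOP_RADIUS_M for d in people):
        return "emergency_stop"
    if any(d.distance_m <= SLOW_RADIUS_M for d in people):
        return "slow_and_alert_operator"
    return "proceed"

# A person at 2.5 m forces a stop, regardless of non-person objects nearby.
print(safety_behavior([Detection(2.5, True), Detection(8.0, False)]))
```

The hard part of the real system is upstream of this function: reliable all-weather person identification from the vehicle's sensors.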

This contract win builds off of Stratom’s past success with U.S. Marine Corps and U.S. Army programs that ultimately resulted in the development of Stratom’s Autonomous Pallet Loader (APL)™, eXpeditionary Robotic-Platform (XR-P)™ and XR-FAAR vehicles.



---------------------------------------------

Army’s Robotic Vehicle Slipped Behind ‘Enemy’ Lines In European Exercise
https://breakingdefense.com/2022/08/armys-robotic-vehicle-slipped-behind-enemy-lines-in-european-exercise/

A June exercise provided insight into how robots can speed up the pace of battle, and how the US Army, and its allies, needs to plan to defeat them.



... The point of the exercise involving the Army’s Project Origin autonomy demonstrator at the Joint Multinational Readiness Center in Germany was to hammer home to the US and allies the danger posed by robotic autonomous systems (RAS) and to help the military think about how to fight against them.

--------------------------------------------

DARPA Begins Second Field Experiment Under RACER
https://www.darpa.mil/news-events/2022-09-16
https://www.army-technology.com/news/darpa-second-field-experiment-racer/



Uncrewed combat vehicles will demonstrate ability to navigate steeper hills and slippery surfaces.

The US Defense Advanced Research Projects Agency (DARPA) has commenced a second field experiment under its Robotic Autonomy in Complex Environments with Resiliency (RACER) programme.   

The programme is intended to enable uncrewed combat vehicles, equipped with off-road autonomy technologies, to match human-driven speeds in realistic situations.

While the DARPA-provided robot systems were first tested at Fort Irwin in California, the second experiment is being held in off-road landscapes at Camp Roberts.

... “The DARPA-provided RACER fleet vehicles being used in the programme are high performance all-terrain vehicles outfitted with world-class sensing and computational abilities, but the teams’ focus is on computational solutions as that platform encounters increasingly complex off-road terrain.

“We are after driverless ground vehicles that can manoeuvre on unstructured off-road terrain at speeds that are only limited by considerations of sensor performance, mechanical constraints, and safety.”



Contrary to popular thinking among the public, Young said the most problematic obstacle isn’t trees but rather water features and points where the ground drops off, or “negative obstacles.” The dipping points present a falling or tipping hazard for the robot, especially if the vehicle is traveling at high speeds.

“We need autonomous vehicles to be out in front of the formation, not lagging behind,” Young said. “That’s the recognition that RACER had, which was robots are too slow, and they’re too brittle to be effective.”

The program ultimately wants to make its vehicles run at similar speeds as combat vehicles. (30-40 mph)

“We don’t have any structure in the environment that we can depend on, so we’re not going on roads, or even on trails,” Young said. “We’re literally going across the terrain.”

---------------------------------------------

AI Could Defeat All Of Us Combined
https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/

Many people have trouble taking this "misaligned AI" possibility seriously. They might see the broad point that AI could be dangerous, but they instinctively imagine that the danger comes from ways humans might misuse it. They find the idea of AI itself going to war with humans to be comical and wild. I'm going to try to make this idea feel more serious and real. ...

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1481 on: September 29, 2022, 02:09:30 AM »
Tesla Job Listings Detail Elon Musk’s Vision for 'Thousands of Humanoid Robots Within Our Factories'
https://www.tesla.com/es_PR/careers/search/job/motion-planning-navigation-tesla-bot-99241

Tesla (TSLA) job postings reveal the electric vehicle maker is doubling down on humanoid robots.

Reuters recently reported the company is ramping up ambitious plans to develop the Tesla Bot, also known as Optimus, with internal meetings and hiring for about 20 positions including software and firmware engineers, deep learning scientists, actuator technicians, and internships.

"Tesla is on a path to build humanoid bi-pedal robots at scale to automate repetitive and boring tasks," one job posting for a mechatronics technician stated. "Most importantly, you will see your work repeatedly shipped to and utilized by thousands of Humanoid Robots within our factories."

Tesla posted most of the jobs under its Autopilot division, which is simultaneously working to deploy full self-driving capabilities for vehicles.

Elon Musk tweeted the Autopilot team has "end of month deadlines" for both the Tesla Bot and Autopark projects. Earlier in the summer, Musk teased that a prototype of the robot could be unveiled at Tesla's AI Day on Sept. 30.

The company said that the first version of Tesla Bot will be focused on completing simple repetitive tasks, which will make the robot useful in a factory setting. (... or as sex-bots)

https://www.reuters.com/business/autos-transportation/elon-musk-faces-skeptics-tesla-gets-ready-unveil-optimus-robot-2022-09-20/

-------------------------------------------

Archer’s Co-Founder Is Bootstrapping an All-Purpose Humanoid Robot
https://techcrunch.com/2022/09/22/figure-humanoid-robot-archer/

The quest to build the perfect humanoid robot is heating up, as Figure — a startup currently operating in stealth — is developing a multi-purpose bipedal ‘bot it plans to pilot in 2024. Speaking under the condition of anonymity, a source close to the company recently confirmed the startup’s operations, funding, high-profile hires and pieces of its overall roadmap.

http://figure.ai/

A pitch deck sheds further light on Figure’s plans, including a glimpse at renders of the robot it’s working to develop. Presently, the Bay Area startup’s efforts most closely align with those detailed by Elon Musk for Tesla’s forthcoming Optimus ’bot. It’s effectively a kind of holy grail among roboticists: a humanoid robot that could fill in on a lot of daily tasks, from manual labor to eldercare. It’s also been a nearly impossible target.

As a species, we tend to gravitate toward things that look like us. Bipedal robots are far easier to project ourselves into — it’s a big part of the reason they dominate so much science fiction. Historically, however, most roboticists have determined that purpose-built robots are the path of least resistance. The best form factor for a specific job is the rule of thumb, and more often than not, that doesn’t involve recreating the human form in its entirety. That — among other factors ($$$) — is why a hockey-puck-shaped vacuum is the most popular consumer robot to date.

... The system appears more in line with the robot Tesla is developing than with Boston Dynamics’ hulking Atlas. The smaller, slimmer frame (human-sized, but on the shorter side) would be electric, rather than powered by the hydraulics behind other robotic systems. It’s an ambitious project, and one that will likely take multiple years to get off the ground for a firm that was only founded earlier this year. Certainly, Figure has one of the most formidable staffing lineups I’ve seen at a young robotics startup, as well as the funding to get things started.

Figure will be aiming for the reveal of a prototype in 2023 (I’d anticipate that means coming out of stealth late this year or early next), with piloting beginning in limited quantities the following year. Those applications will revolve around warehouse work, retail and the like. While it seems likely that early iterations of the product might run upwards of $100,000, scaling production could bring it down to around one-third of that. That’s still a steep figure (particularly for non-industrial use) — as such, it appears likely that the company will adopt a robotics-as-a-service (RaaS) leasing model to make the system more accessible over the robot’s potential decade-long life.

-------------------------------------------

Elon Musk Faces Skeptics as Tesla Gets Ready to Unveil 'Optimus' Robot
https://www.reuters.com/business/autos-transportation/elon-musk-faces-skeptics-tesla-gets-ready-unveil-optimus-robot-2022-09-20/

Reuters reports that both analysts and experts are "skeptical" that Tesla will be able to show off the tech arguably necessary to make the expenses on the project seem worth it.

... "Self-driving cars weren't really proved to be as easy as anyone thought. And it's the same way with humanoid robots to some extent," Shaun Azimi, the leader of the Dexterous Robotics Team at NASA, said in a statement to Reuters. Besides that, Hyundai and Honda have been building humanoid robots for the past few years, but they have yet to crack the code of an AI that can perform tasks as well as humans.

In an article published by China Cyberspace magazine (and translated by Yang Liu), Elon Musk acknowledged more work needs to be done for Tesla Bot to advance. "One day when we solve the problem of self-driving cars, we will be able to extend artificial intelligence technology to humanoid robots, which will have a much broader application than cars," Elon Musk said in the article.

The Tesla CEO also said that the capability of the Tesla Bot will improve every year as the company ramps up production. Elon Musk still maintains that Optimus will "replace people in repetitive, boring, and dangerous tasks."

... According to Loup Ventures, the Tesla Bot may affect 10% of the U.S. labor market, which represents at least $500 billion in annual wages; counting the rest of the world, the physical-labor market is worth more than the entire global car market's revenue. But for now, we likely have a long way to go before the Tesla Bot can perform manual tasks without human intervention, just as with autonomous cars. The Tesla Bot prototype will likely be revealed during Tesla's AI Day event on September 30.

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1482 on: September 29, 2022, 03:29:42 AM »
James Earl Jones Lets AI Take Over the Voice of Darth Vader
https://www.theverge.com/2022/9/24/23370097/darth-vader-james-earl-jones-obi-wan-kenobi-star-wars-ai-disney-lucasfilm
https://deadline.com/2022/09/james-earl-jones-star-wars-darth-vader-rights-ai-signals-retirement-1235126394/



At 91 years old, legendary actor James Earl Jones is looking to step back from his iconic role as the voice of Darth Vader. After 45 years of playing the part, Jones has signed over the rights to the Vader voice to filmmakers hoping to use artificial intelligence to "keep Vader alive," Deadline reported Saturday morning.

In a report from Vanity Fair, the company tasked with recreating the iconic villain is Ukrainian start-up Respeecher. The voice-cloning company has been working with Lucasfilm to generate many of the voices heard throughout the Star Wars universe like Luke Skywalker in Disney's The Book of Boba Fett. Respeecher uses "archival recordings and a proprietary AI algorithm to create new dialogue with the voices of performers from long ago," Vanity Fair reported.

https://www.respeecher.com/

... The trend could become popular among celebrities who want to “boost their income with minimal effort by cloning and renting out their voice.”

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1483 on: September 29, 2022, 06:01:14 AM »
Robo-Ostrich Sprints to 100-meter World Record
https://spectrum.ieee.org/bipedal-robot-world-record-speed

For a robot that shares a leg design with the fastest-running bird on the planet, we haven’t ever really gotten a sense of how fast Agility Robotics’ Cassie is actually able to move. Oregon State University’s Cassie successfully ran a 5k last year, but it was the sort of gait that we’ve come to expect from humanoid robots—more of a jog, really, with measured steps that didn’t inspire a lot of confidence in higher speeds. Turns out, Cassie was just holding back, because she’s just sprinted her way to a Guinness World Record for fastest 100-meter run by a bipedal robot.



Cassie’s average speed was just over 4 meters per second, completing the 100 meters in 24.73 seconds. And for a conventional bipedal robot, that is fast. Moreover, her top speed was certainly higher than 4 m/s, since the record attempt required a standing start (along with a return to the starting point without falling over).

... According to the researchers, one of the most difficult challenges was actually getting Cassie to reach a sprint from a standing start and then slow down to a stop on the other end without borking herself.

“Using learned policies for robot control is a very new field, and this 100-meter dash is showing better performance than other control methods. I think progress is going to accelerate from here.”

... A real ostrich can run the 100-meter in 5 seconds flat ... gives Cassie something to aspire to.

https://today.oregonstate.edu/news/bipedal-robot-developed-oregon-state-achieves-guinness-world-record-100-meters

... next record time to beat: Usain Bolt - 100 m: 9.58 sec
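As a quick sanity check on the numbers quoted above, using only figures from the article:

```python
def avg_speed_mps(distance_m, time_s):
    """Average speed over a fixed distance."""
    return distance_m / time_s

cassie  = avg_speed_mps(100, 24.73)  # Guinness-record run
bolt    = avg_speed_mps(100, 9.58)   # Usain Bolt's world record
ostrich = avg_speed_mps(100, 5.0)    # a real ostrich, per the article
```

Cassie averages about 4.0 m/s, Bolt about 10.4 m/s, and the ostrich about 20 m/s, so the robot still has roughly a factor of five to find on its biological template.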
« Last Edit: September 29, 2022, 06:10:16 AM by vox_mundi »

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1484 on: September 29, 2022, 06:48:18 PM »
AI Dreaming of Time Travel
https://twitter.com/xsteenbrugge/status/1558508866463219712

We love the intersection between art and technology, and a video made by an AI (Stable Diffusion) imagining a journey through time (Nitter) is a lovely example. The project is relatively straightforward, but as with most art projects, there were endless hours of [Xander Steenbrugge] tweaking and playing with different parts of the process until it was just how he liked it. He mentions trying thousands of different prompts and seeds — an example of one of the prompts is “a small tribal village with huts.” In the video, each prompt got 72 frames, slowly increasing in strength and then decreasing as the following prompt came along.

This video was created using 36 consecutive phrases that define the visual narrative.



The way this model "interpolates" between the meaning of two sentences (in semantic rather than visual latent space) is a huge game-changer for storytelling, and this is only just the beginning of a MASSIVE revolution in digital content creation powered by generative AI.

If you’ve worked with AI systems, you’ll notice that the background stays remarkably stable in [Xander]’s video as it goes through dozens of feedback loops. This is difficult to do as you want to change the image’s content without changing the look. So he had to write a decent amount of code to try and maintain visual temporal cohesion over time. Hopefully, we’ll see an open-source version of some of his improvements, as he mentioned on Twitter.
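The cross-fade between prompts can be sketched as a per-frame blend of two prompt embeddings. This is a simplified illustration, not [Xander]'s actual pipeline (the function name and linear blend are my own; production code often uses spherical rather than linear interpolation in the conditioning space):

```python
import numpy as np

def interpolate_embeddings(emb_a, emb_b, n_frames=72):
    """Blend two prompt embeddings linearly across n_frames, so the
    first frame is conditioned purely on prompt A and the last purely
    on prompt B -- mirroring the 72-frame ramp described above."""
    alphas = np.linspace(0.0, 1.0, n_frames)
    return [(1 - a) * emb_a + a * emb_b for a in alphas]
```

Each blended embedding then conditions one diffusion step's image, which is what produces the smooth semantic drift between scenes.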

In the meantime, we get to sit back and enjoy something beautiful. If you still aren’t convinced that Stable Diffusion is a big deal, perhaps we can do a little more to persuade you.

https://twitter.com/xsteenbrugge/status/1558508874336018436



-------------------------------------------------------

Meta Announces Make-A-Video, Which Generates Video From Text
https://arstechnica.com/information-technology/2022/09/write-text-get-video-meta-announces-ai-video-generator/

Today, Meta announced Make-A-Video, an AI-powered video generator that can create novel video content from text or image prompts, similar to existing image synthesis tools like DALL-E and Stable Diffusion. It can also make variations of existing videos, though it's not yet available for public use.

https://makeavideo.studio/

On Make-A-Video's announcement page, Meta shows example videos generated from text, including "a young couple walking in heavy rain" and "a teddy bear painting a portrait." It also showcases Make-A-Video's ability to take a static source image and animate it. For example, a still photo of a sea turtle, once processed through the AI model, can appear to be swimming.

The key technology behind Make-A-Video—and why it has arrived sooner than some experts anticipated—is that it builds off existing work with text-to-image synthesis used with image generators like OpenAI's DALL-E. In July, Meta announced its own text-to-image AI model called Make-A-Scene.

Instead of training the Make-A-Video model on labeled video data (for example, captioned descriptions of the actions depicted), Meta instead took image synthesis data (still images trained with captions) and applied unlabeled video training data so the model learns a sense of where a text or image prompt might exist in time and space. Then it can predict what comes after the image and display the scene in motion for a short period.

"Using function-preserving transformations, we extend the spatial layers at the model initialization stage to include temporal information," Meta wrote in a white paper. "The extended spatial-temporal network includes new attention modules that learn temporal world dynamics from a collection of videos."

https://makeavideo.studio/Make-A-Video.pdf
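The "function-preserving" idea can be illustrated with a toy layer: attention over the time axis whose output projection is zero-initialized, so at initialization the extended network computes exactly what the spatial-only network did. This is a minimal NumPy sketch of the general technique, not Meta's actual modules:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

class TemporalAttention:
    """Single-head attention across the time axis, applied per spatial
    position.  The output projection Wo starts at zero, so the residual
    branch contributes nothing at initialization: the layer is an
    identity, and temporal dynamics are learned from video later."""
    def __init__(self, dim, rng):
        self.Wq = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        self.Wk = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        self.Wv = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        self.Wo = np.zeros((dim, dim))  # zero-init: function-preserving

    def __call__(self, x):  # x: (time, pixels, dim)
        t, s, d = x.shape
        y = x.transpose(1, 0, 2)            # attend over time per pixel
        q, k, v = y @ self.Wq, y @ self.Wk, y @ self.Wv
        a = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d))
        y = (a @ v) @ self.Wo               # exactly zero at init
        return x + y.transpose(1, 0, 2)     # residual connection
```

Because the residual branch is zero at the start, pretrained spatial weights keep working unchanged while the new temporal attention is trained on unlabeled video.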

Meta has not made an announcement about how or when Make-A-Video might become available to the public or who would have access to it. Meta provides a sign-up form people can fill out if they are interested in trying it in the future.

Meta acknowledges that the ability to create photorealistic videos on demand presents certain social hazards. At the bottom of the announcement page, Meta says that all AI-generated video content from Make-A-Video contains a watermark to "help ensure viewers know the video was generated with AI and is not a captured video."



video from text prompt: "teddy bear painting a portrait."
« Last Edit: October 01, 2022, 03:00:27 PM by vox_mundi »

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1485 on: September 29, 2022, 07:08:19 PM »
AI Suggests Hans Niemann May Have Indeed Been Cheating
https://deadspin.com/ai-all-but-confirms-that-hans-niemann-has-been-cheating-1849593392

According to chess software, Niemann played a perfect game vs Magnus Carlsen

... To make a long story short, on Monday, the world’s best chess player Magnus Carlsen officially accused fellow grandmaster Hans Niemann of cheating during a match they had at the Sinquefield Cup in St. Louis a few weeks ago. There had long been speculation that Carlsen believed Niemann cheated, but two days ago, Carlsen came forward with an official statement on the matter.

... All anyone could really do to push forward the investigation was analyze the match. Well, someone actually took the time to do so, and suddenly, the cheating allegations against Niemann seem much more believable.

On Sunday, Yosha Iglesias, an up-and-coming chess YouTuber, posted a video using online software called ChessBase to review Niemann’s game against Carlsen. ChessBase also helps determine the engine score for specific moves. For those who don’t know, an engine score basically determines how good a move was based on how a chess engine, which is designed to play perfectly, would’ve played. For context, most world champions play at around a 70-75 percent engine score. According to Iglesias, at the pinnacle of Carlsen’s career, Carlsen was playing at around 70 percent. During Bobby Fischer’s famous 20-game winning streak, he was playing at 72 percent.

And according to Iglesias’ research, Niemann did indeed play engine perfect in this much-talked-about match with Carlsen.
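The engine-score idea can be illustrated with a deliberately crude stand-in: the fraction of a player's moves that match the engine's top choice. Real tools such as ChessBase's analysis weigh move quality rather than counting exact matches, so this is only a sketch, and the function name is my own:

```python
def engine_match_rate(player_moves, engine_best_moves):
    """Percentage of a player's moves that coincide with the engine's
    first choice.  Repeated 100% games are the red flag discussed above."""
    if len(player_moves) != len(engine_best_moves):
        raise ValueError("move lists must align one-to-one")
    hits = sum(p == e for p, e in zip(player_moves, engine_best_moves))
    return 100.0 * hits / len(player_moves)
```

By this crude measure, a world champion hovering around 70–75 percent versus a run of 100-percent games is exactly the anomaly Iglesias highlights.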

It’s not super uncommon to play a single game at 100 percent, but on multiple occasions? Now, things are getting suspicious. The only time someone has consistently reached near 100 percent in recent history was Sébastien Feller, who achieved 98 percent optimal play at a tournament circa 2010. Later, the French Chess Federation determined that Feller was cheating by communicating with two other players. Basically, international master Cyril Marzolo stayed home and was being fed Feller’s moves by grandmaster Arnaud Hauchard. Marzolo would then put those moves into a chess engine, and send coded messages to Hauchard letting him know what Feller’s best move would be. Then, Hauchard, who was sitting in the same hall where Feller was playing, would sit at a table in Feller’s line of sight. Based on what table Hauchard sat at, Feller would know what moves to make. It sounds complicated, and it was. The ruse wasn’t found out immediately, but eventually, chess officials caught on and banned Feller from competing for over two years.

The point of that story is to show how unlikely a string of near 100 percent games is in over-the-board chess. Niemann, however, has more than a few such games, including one against Cristhian Camilo Rios in the second round of the Sharjah Masters on Sept.18, 2021, where Niemann played engine perfect for 45 consecutive moves. If that’s not evidence of cheating, I don’t know what is.

« Last Edit: September 29, 2022, 08:31:37 PM by vox_mundi »

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1486 on: October 01, 2022, 08:45:51 AM »
Elon Musk’s Robot Is Going to Have to Work Hard to Impress at Tesla’s AI Day
/ ‘If it simply waves hello, that’s a groan fail.’

https://www.theverge.com/2022/9/28/23376456/elon-musk-tesla-bot-ai-day-event-optimus-prototype-predictions

----------------------------------------------------------

Tesla Shows Off Underwhelming Human Robot Prototype at AI Day 2022
https://arstechnica.com/information-technology/2022/09/tesla-shows-off-underwhelming-human-robot-prototype-at-ai-day-2022/

First Optimus prototype walked onto stage, waved. Another one needed support and slumped over.

... It was a risky reveal for the prototype, which seemed somewhat unsteady on its feet. "Literally the first time the robot has operated without a tether was on stage tonight," said Musk. Shortly afterward, Tesla employees rolled a sleeker-looking Optimus model supported by a stand onto the stage that could not yet stand on its own. It waved and lifted its legs. Later, it slumped over while Musk spoke.



The entire live robot demonstration lasted roughly seven minutes, and the firm also played a demonstration video of the walking Optimus prototype picking up a box and putting it down, watering a plant, and moving metal parts in a factory-like setting—all while tethered to an overhead cable. (... this looked suspiciously like a tele-operated demonstration)

The video also showed a 3D-rendered view of the world that represents what the Optimus robot can see.

At the AI Day event, Musk and his team emphasized that the walking prototype was an early demo developed in roughly six months using "semi-off-the-shelf actuators," and that the sleeker model much more closely resembled the "Version 1" unit they want to ship. He said it would probably be able to walk in a few weeks.

He claimed that the difference between Tesla's design and other "very impressive humanoid robot demonstrations" is that Tesla's Optimus is made for mass production in the "millions" of units and to be very capable. As he said that, a team of workers moved a non-walking prototype offstage behind him.

Engineers hope to clear additional design hurdles “within the next few months... or years.”

... "Our goal is to make a useful humanoid robot as quickly as possible," Musk said, predicting sales would begin "probably within three years and not more than five years."

... Musk didn't hold back on the sci-fi promises for its robots. With robots at work, economics enters a new age, a "future of abundance, a future where there is no poverty, a future where you can have whatever you want in terms of products and services," Musk said. "It really is a fundamental transformation of civilization as we know it."

... In the days leading up to AI Day, robotics experts warned against buying too much into Musk's claims. They've noted that other companies are much further along in developing robots that can walk, run, and even jump — but none are claiming to be close to replacing human labor.



https://www.theverge.com/2022/9/30/23374729/tesla-bot-ai-day-robot-elon-musk-prototype-optimus-humanoid

https://www.cnet.com/home/smart-home/tesla-unveils-optimus-a-walking-humanoid-robot-at-ai-day-2022/
« Last Edit: October 01, 2022, 04:54:41 PM by vox_mundi »

nadir

Re: Robots and AI: Our Immortality or Extinction
« Reply #1487 on: October 01, 2022, 04:03:12 PM »
Lol

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1488 on: October 01, 2022, 04:52:01 PM »
Elon Musk: https://twitter.com/elonmusk/status/1576045629821702144

Quote
Naturally, there will be a catgirl version of our Optimus robot.


crandles

Re: Robots and AI: Our Immortality or Extinction
« Reply #1489 on: October 01, 2022, 05:22:43 PM »
Didn't take long to get awkward if not outright wrong.

Quote
Welcome everybody.

Welcome to Tesla AI day 2022

We have got some really exciting things to show you ...

Set expectations re Optimus: last year it was a person in a robot suit, and compared to that it's going to be pretty impressive.

We're going to talk about the advancements in AI for FSD, as well as how they apply more generally to real-world AI problems like the humanoid robot, and even beyond that.

I think there is some potential that what we are doing here at Tesla could make a meaningful contribution to AGI (artificial general intelligence).

And I think Tesla is a good entity to do this from from a governance standpoint because we are a publicly traded company with one class of stock which means the public controls Tesla and I think that is a good thing. So if I go crazy, you can fire me which is important.

Wait, what?

Investors control publicly traded companies, not the public, so what Elon just said is wrong. There can be differences between what is in the public interest and what investors want.

So I can't condone the language used.

However, having said that, he is trying to briefly justify whether Tesla is a good entity to do this from. On this, I would prefer it to be a public company rather than a private one. Should it be something other than a company, like a government organisation or charity? I doubt we would get rapid progress from a government or charity, and the profit motive of a company seems reasonable: it is a good incentive to make rapid progress, but also to be safe and cautious enough not to be sued too much for too many bad outcomes. Later on there may need to be more regulation by government, and they seemed completely willing to accept this, or even to advocate that it was necessary. As for which company should do it, Tesla does seem like a good candidate, with lots of synergy with FSD development and a lot of good engineers who might be capable of pulling it off. So I support the conclusion that Tesla is a good entity to do this from.

My reasoning for agreeing with the conclusion is rather longer than the justification Elon used; perhaps Elon wanted to abbreviate it, but I think he should have done a better job of abbreviating it than with an outright lie conflating investors with the public.





zenith

Re: Robots and AI: Our Immortality or Extinction
« Reply #1490 on: October 01, 2022, 06:16:24 PM »
The Philosophy of Neon Genesis Evangelion

« Last Edit: October 01, 2022, 06:52:06 PM by Neven »
Where is reality? Can you show it to me? - Heinz von Foerster

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1491 on: October 01, 2022, 09:17:10 PM »
Deepfake AI 'Bruce Willis' May Be the Next Hollywood Star, and He’s OK With That
https://arstechnica.com/information-technology/2022/09/bruce-willis-sells-deepfake-rights-to-his-likeness-for-commercial-use/

Bruce Willis has sold the "digital twin" rights to his likeness for commercial video production use, according to a report by The Telegraph. This move allows the Hollywood actor to digitally appear in future commercials and possibly even films, and he has already appeared in a Russian commercial using the technology.



The deal allows him to make money without ever leaving the house, while the advertising company gets an infinitely malleable actor (and, notably, a much younger version of Willis, straight out of his Die Hard days).

Willis, who has been diagnosed with a language disorder called aphasia, announced that he would be "stepping away" from acting earlier this year. Instead, he will license his digital rights through a company called Deepcake. The company is based in Tbilisi, Georgia, and is doing business in America while being registered as a corporation in Delaware.

https://deepcake.io/

According to Deepcake's website, the firm aims to disrupt the traditional casting process by undercutting it in price, saying that its method "allows us to succeed in tasks minus travel expenses, expensive filming days, insurance, and other costs. You pay for endorsement contract with the celeb's agent, and a fee for Deepcake's services. This is game-changingly low."

Evidence suggests that a similar licensing precedent exists—in Hollywood, deepfakes have already been used in several Star Wars films and TV shows, for example.

These sorts of visual and audio clones could create economies of scale for celebrity work, allowing stars to capitalize on their fame — as long as they’re happy renting out a simulacrum of themselves.

--------------------------------------------

... this is uncannily like a sci-fi movie Bruce Willis made in 2009, in which his robot avatar handled his job and day-to-day duties while he kicked back at home

https://en.m.wikipedia.org/wiki/Surrogates#Plot

--------------------------------------------

Everyone Will Be Able to Clone Their Voice In the Future
https://www.theverge.com/22672123/ai-voice-clone-synthesis-deepfake-applications-vergecast

vox_mundi

Re: Robots and AI: Our Immortality or Extinction
« Reply #1492 on: October 01, 2022, 09:44:30 PM »
IDF to Use Armed Drones for Targeted Assassinations In West Bank
https://m.jpost.com/israel-news/article-718422

Tel Aviv – The Israel Defense Forces (IDF) has approved the use of attack drones to assassinate Palestinian terrorism suspects in the West Bank, the Jerusalem Post reported on Thursday. The order comes after a violent Israeli raid a day earlier, and as the IDF increasingly looks to robots to police the Palestinian territories.

According to the newspaper, IDF commanders will be allowed to use drones “to carry out strikes should armed gunmen be identified as posing imminent threats to their troops,” in addition to surveillance intelligence. Senior officers discussed the doctrinal change on Wednesday, the report continued, with Chief of Staff General Aviv Kohavi signing off on the order.



Israel has used Elbit’s Hermes 450 drones to carry out targeted assassinations in Gaza since 2008. These drones have been in production since the late 1990s, but establishing a full account of their usage is difficult, as Israel’s military censor had banned reports on their use until earlier this year.

https://m.jpost.com/breaking-news/article-712634

It’s believed that the order was given as the continued violence threatens to drag Israel into a more extensive operation in the northern West Bank, similar to Operation Defensive Shield in 2002.

For now, Israel is hoping that drones and remote-controlled weapons can keep its troops out of harm’s way in the West Bank. The IDF recently installed a remote gun turret in a heavily populated area of Hebron, after reportedly deploying facial recognition technology on the Palestinian territory last year.

-------------------------------------------

precognition - 2015



--------------------------------------------

Israel Deploys AI-Powered Turret in the West Bank
https://www.haaretz.com/israel-news/2022-09-24/ty-article/.premium/israeli-army-installs-remote-control-crowd-dispersal-system-at-hebron-flashpoint/00000183-70c4-d4b1-a197-ffcfb24f0000

Israel has deployed a remote-controlled turret in the West Bank. The system was installed over a checkpoint on Shuhada street in the city of Hebron, according to videos of the device and reporting by Israeli outlet Haaretz. It has the capacity to fire stun grenades, tear gas, and sponge-tipped bullets.



Video: https://twitter.com/marwasf/status/1573574943106908162

The turret was installed as part of a pilot program and is meant to be used for crowd dispersal at the checkpoint, which has been the site of clashes between Palestinian demonstrators and the Israeli military.

“As part of the army’s improved preparations for confronting people disrupting order in the area, it is examining the possibility of using remotely-controlled systems for the employment of approved measures of crowd dispersal. This does not include remote control of live gunfire,” a spokesman for the Israeli Army told Haaretz.

The turret is the creation of Israeli defense firm Smart Shooter, a company that’s working on autonomous weapon systems, including an attachment for rifles that compensates for a soldier’s inability or unwillingness to aim. “Our goal is to take the concept of precision weaponry to missiles, fighter planes, and in some cases, armored infantry carriers. Or, to the most basic infantry company,” Smart Shooter operational expert Shmuel Rabinovitz told i24 News in 2020.



Smart Shooter’s system uses AI to track and lock onto targets. Its website marketing calls this “One Shot—One Kill” and boasts that the company “combines simple to install hardware with advanced image-processing software to turn basic small arms into 21st century smart weapons.”

The turret seen in Hebron isn’t advertised on Smart Shooter’s website, but two other automated turrets are. Both are low-profile turrets that can be outfitted with an assault rifle and the Smart Shooter system. There’s also a drone called the “SMASH Dragon.”

Israel has pioneered the use of automated systems for military purposes, and those systems are deeply controversial. An elaborate network of cameras deployed in the West Bank is hooked into a database called Blue Wolf that tracks the movements of monitored Palestinians. Hebron was one of the first cities to use the system, which allows Israeli authorities to identify people before they have presented their ID cards.

--------------------------------------------

Ex-Mossad Head: AI Facial Recognition Tech Superior to Fingerprinting
https://m.jpost.com/business-and-innovation/tech/article-689400

Former Mossad chief Yossi Cohen on Monday emphasized the importance of facial recognition technology in the fields of counterterrorism and law enforcement.

--------------------------------------------

Israel Escalates Surveillance of Palestinians With Facial Recognition Program In West Bank
https://www.washingtonpost.com/world/middle_east/israel-palestinians-surveillance-facial-recognition/2021/11/05/3787bf42-26b2-11ec-8739-5cb6aba30a30_story.html

The Israeli security state has for decades benefited from the country’s thriving research and development sector, and its interest in using AI to police and control Palestinians isn’t hypothetical. In 2021, the Washington Post reported on the existence of Blue Wolf, a secret military program aimed at monitoring Palestinians through a network of facial recognition-enabled smartphones and cameras.

---------------------------------------------

Documents Reveal Advanced AI Tools Google Is Selling to Israel
https://theintercept.com/2022/07/24/google-israel-artificial-intelligence-project-nimbus/
There are 3 classes of people: those who see. Those who see when they are shown. Those who do not see

Insensible before the wave so soon released by callous fate. Affected most, they understand the least, and understanding, when it comes, invariably arrives too late

Fiat iustitia, et pereat mundus

vox_mundi

  • Multi-year ice
  • Posts: 11060
    • View Profile
  • Liked: 3637
  • Likes Given: 804
Re: Robots and AI: Our Immortality or Extinction
« Reply #1493 on: October 06, 2022, 01:36:32 AM »
China Pairs Armed Robot Dogs With Drones That Can Drop Them Anywhere
https://www.thedrive.com/the-war-zone/china-pairs-armed-robot-dogs-with-drones-that-can-drop-them-anywhere



The Future is Now.

Video:
https://twitter.com/LiaWongOSINT/status/1577209925255561216

The footage begins with a shot of the drone as it approaches the rooftop of a building in a nondescript urban area, with the compact armed robot dog carried under the drone’s frame. The drone, acting as a robotic dropship of sorts, then lands atop the roof, releases the robodog, and flies away. Shortly thereafter, the robodog unfurls from its folded position and begins navigating its new surroundings with what looks to be a Chinese QBB-97 light machine gun with an 85-round drum magazine (designated Type 95 LMG in the United States) mounted on its back.

The video account seems to be directly affiliated with the Chinese Kestrel Defense company, also called China Kestrel Defense in some instances.

... “Heavy-duty drones deliver combat robot dogs, which can be directly inserted into the weak link behind the enemy to launch a surprise attack or can be placed on the roof of the enemy to occupy the commanding heights to suppress firepower. And ground troops [can] conduct a three-dimensional pincer attack on the enemy in the building.”

With the added color offered by this description, it can at the very least be gathered that the drone-robodog pairing was conceptualized to be deployed during assault operations, especially in urban areas. The Weibo account has shared other videos of different robodogs in similar settings, suggesting that the company specializes in technologies designed with these environments in mind.

The aforementioned ‘three-dimensional pincer attack’ that the company claims the drone and robodog team could support would also be a tactic that could see employment in an urban assault scenario, as forces simultaneously attack from two directions and the robodog is dropped on the roof to add another.

... A remote-controlled drone, or an autonomous one that uses pre-programmed coordinates and a basic onboard navigation suite, could deploy an armed robodog behind enemy lines, on rooftops to take out otherwise difficult-to-access targets, to scout ahead, or even simply create a diversion. In the case of the dog on the roof, merely firing repeatedly at a closed door would cause a major distraction and would make the enemy think an attack is happening from above as well as below.



The concept becomes particularly scary if the robodogs are eventually equipped to operate autonomously, searching for and engaging targets on their own within parameters set by their operators. These systems are currently designed to be controlled by a human operator, especially when armed, but the potential for them to one day act of their own accord could become a huge problem for opposing forces that encounter them. Science fiction has already trodden this ground, and it's only a matter of time before the 'fiction' part becomes reality. Delivered by drone, packs of robodogs could wreak havoc behind enemy lines and appear in places too risky for soldiers to venture.

--------------------------------------------------

as long as you hear the tacktacktacktacktack...you're fine.

when it stops. you're screwed.
« Last Edit: October 06, 2022, 04:56:52 AM by vox_mundi »

vox_mundi

  • Multi-year ice
  • Posts: 11060
    • View Profile
  • Liked: 3637
  • Likes Given: 804
Re: Robots and AI: Our Immortality or Extinction
« Reply #1494 on: October 06, 2022, 04:59:34 AM »
NASA Teams Up With Apptronik to Research Humanoid Robots
https://www.axios.com/local/austin/2022/09/21/apptroniks-partners-nasa



Austin robotics company Apptronik will partner with NASA to accelerate commercialization of its latest humanoid robot, the startup announced Tuesday.

The robot, named Apollo, will be one of the first humanoids available to the commercial markets, with the goal of assisting humans in industries like logistics, retail, hospitality, aerospace and more.

Apollo is designed to do a wide range of tasks in different environments. (Imagine a robot that can help unload trucks and stock shelves.)

NASA and Apptronik first worked together starting in 2013, as part of the DARPA Robotics Challenge. In this most recent venture, Apptronik's robots have the chance to work on Earth and, one day, in space.

Apptronik will unveil the Apollo humanoid at South by Southwest in March 2023.

https://apptronik.com/

------------------------------------------------

IHMC’s Nadia Is a Versatile Humanoid Robot Teammate
https://spectrum.ieee.org/ihmc-nadia-humanoid-robot

Several years ago, Florida's Institute for Human & Machine Cognition (IHMC) decided that it was high time to build its own robot from scratch, and in 2019 we saw some very cool plastic concepts of Nadia—a humanoid designed from the ground up to perform useful tasks at human speed in human environments. After 16 (!) experimental plastic versions, Nadia is now a real robot, and it already looks pretty impressive.

http://robots.ihmc.us/



Designed to be essentially the next generation of the DRC Atlas and Valkyrie, Nadia is faster, more flexible, and robust enough to make an excellent research platform. It’s a hybrid of electric and hydraulic actuators: 7 degrees-of-freedom (DoF) electric arms and a 3 DoF electric pelvis, coupled with a 2 DoF hydraulic torso and 5 DoF hydraulic legs. The hydraulics are integrated smart actuators, which we’ve covered in the past. Nadia’s joints have been arranged to maximize range of motion, meaning that it has a dense manipulation workspace in front of itself (where it really matters) as well as highly mobile legs. Carbon fiber shells covering most of the robot allow for safe contact with the environment.
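For a rough sense of scale, the joint counts quoted above can be tallied. This is just back-of-the-envelope arithmetic, assuming (as seems intended) that the 7 DoF arm and 5 DoF leg figures are per limb, with two of each:

```python
# Joint-count tally for Nadia from the figures quoted above.
# Assumption: arm and leg DoF counts are per limb (two arms, two legs).
dof = {
    "electric arms (7 DoF x 2)": 7 * 2,
    "electric pelvis": 3,
    "hydraulic torso": 2,
    "hydraulic legs (5 DoF x 2)": 5 * 2,
}
total = sum(dof.values())
print(total)  # 29 degrees of freedom under these assumptions
```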

... the big thing that we’re trying to bring to the table with Nadia is the really high range of motion of a lot of the joints. And it’s not just the range of motion that differentiates Nadia from many other humanoid robots out there, it’s also speed and power. Nadia has much better power-to-weight than the DRC Atlas, making it significantly faster, which improves its general operational speed as well as its stability. ...

----------------------------------------------

What Robotics Experts Think of Tesla’s Optimus Robot
https://spectrum.ieee.org/robotics-experts-tesla-bot-optimus

vox_mundi

  • Multi-year ice
  • Posts: 11060
    • View Profile
  • Liked: 3637
  • Likes Given: 804
Re: Robots and AI: Our Immortality or Extinction
« Reply #1495 on: October 06, 2022, 04:54:53 PM »
DeepMind AI Invents New Way to Multiply Numbers and Speed Up Computers
https://www.deepmind.com/blog/discovering-novel-algorithms-with-alphatensor
https://www.technologyreview.com/2022/10/05/1060717/deepmind-uses-its-game-playing-ai-to-best-a-50-year-old-record-in-computer-science/

DeepMind has used its board-game playing AI AlphaZero to discover a faster way to solve a fundamental math problem in computer science, beating a record that has stood for more than 50 years.

The problem, matrix multiplication, is a crucial type of calculation at the heart of many different applications, from displaying images on a screen to simulating complex physics. It is also fundamental to machine learning itself. Speeding up this calculation could have a big impact on thousands of everyday computer tasks, cutting costs and saving energy.

“Anything you want to solve numerically, you typically use matrices.”

Despite the calculation’s ubiquity, it is still not well understood. A matrix is simply a grid of numbers, representing anything you want. Multiplying two matrices together typically involves multiplying the rows of one with the columns of the other. The basic technique for solving the problem is taught in high school. “It’s like the ABC of computing,” says Pushmeet Kohli, head of DeepMind’s AI for Science team.
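For reference, the schoolbook method described here can be sketched in a few lines. This is a minimal illustration (not DeepMind's code), with a counter added to make the n³ scalar-multiplication cost visible:

```python
# Schoolbook matrix multiplication: entry (i, j) of the product is the
# dot product of row i of A with column j of B. For two n x n matrices
# this uses n**3 scalar multiplications.

def matmul_naive(A, B):
    n, m, p = len(A), len(B), len(B[0])
    assert all(len(row) == m for row in A), "inner dimensions must match"
    C = [[0] * p for _ in range(n)]
    mults = 0  # count scalar multiplications performed
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i][j] += A[i][k] * B[k][j]
                mults += 1
    return C, mults

C, mults = matmul_naive([[1, 2], [3, 4]], [[5, 6], [7, 8]])
print(C)      # [[19, 22], [43, 50]]
print(mults)  # 8 scalar multiplications for the 2x2 case (n**3 = 8)
```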

But things get complicated when you try to find a faster method. “Nobody knows the best algorithm for solving it,” says François Le Gall, a mathematician at Nagoya University who was not involved in the work. “It’s one of the biggest open problems in computer science.”

The trick was to turn the problem into a kind of three-dimensional board game, called TensorGame. The board represents the multiplication problem to be solved, and each move represents the next step in solving that problem. The series of moves made in a game therefore represents an algorithm.

Compared with Go, which remained a challenge for AI for decades, the number of possible moves at each step of TensorGame is 30 orders of magnitude larger (above 10³³ for one of the settings considered). “The number of possible actions is almost infinite,” says Thomas Hubert, an engineer at DeepMind.

The researchers trained a new version of AlphaZero, called AlphaTensor, to play this game. AlphaTensor started with no knowledge of any solutions and was tasked with producing a working algorithm that completed the multiplication in the minimum number of steps. Instead of learning the best series of moves to make in Go or chess, it learned the best series of steps for multiplying matrices, and was rewarded for winning the game in as few moves as possible.

Through training, AlphaTensor gradually improved, rediscovering historical fast matrix multiplication algorithms such as Strassen’s and eventually surpassing the realm of human intuition to discover algorithms faster than any previously known.

The researchers describe their work in a paper published in Nature today. The headline result is that AlphaTensor discovered a way to multiply together two four-by-four matrices that is faster than a method devised in 1969 by the German mathematician Volker Strassen, which nobody had been able to improve on since. The basic high school method takes 64 steps; Strassen’s takes 49 steps. AlphaTensor found a way to do it in 47 steps.
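The jump from 64 to 49 steps comes from Strassen-style algebraic identities. As a concrete illustration (Strassen's classic 1969 construction, not AlphaTensor's new scheme), here is the 2x2 case done with 7 multiplications instead of the schoolbook 8; applied recursively to 2x2 blocks, this is what yields 49 multiplications for 4x4:

```python
# Strassen's identities for a 2x2 product: seven multiplications m1..m7
# instead of eight, at the cost of extra additions and subtractions.

def strassen_2x2(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]] -- same product, one fewer multiplication
```

Because the entries a through h can themselves be matrix blocks, the saving compounds: each level of recursion multiplies the cost by 7 rather than 8, which is exactly the kind of step-count game AlphaTensor was trained to win.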

Overall, AlphaTensor beat the best existing algorithms for more than 70 different sizes of matrix. It reduced the number of steps needed to multiply two nine-by-nine matrices from 511 to 498, and the number required for multiplying two 11-by-11 matrices from 919 to 896. In many other cases, AlphaTensor rediscovered the best existing algorithm.

Moreover, AlphaTensor also discovered a diverse set of algorithms with state-of-the-art complexity – up to thousands of matrix multiplication algorithms for each size – showing that the space of matrix multiplication algorithms is richer than previously thought.



Having looked for the fastest algorithms in theory, the DeepMind team then wanted to know which ones would be fast in practice. Different algorithms can run better on different hardware, because computer chips are often designed for specific types of computation. The team used AlphaTensor to look for algorithms tailored to the Nvidia V100 GPU and Google's TPU, two of the most common chips used for training neural networks. The algorithms it found were 10 to 20% faster at matrix multiplication than those typically used with those chips.

... Alhussein Fawzi at DeepMind says the results are mathematically sound but far from intuitive for humans. "We don't really know why the system came up with this, essentially," he says. "Why is it the best way of multiplying matrices? It's unclear."

Quote
... "Somehow, the neural networks get an intuition of what looks good and what looks bad. I honestly can't tell you exactly how that works. I think there is some theoretical work to be done there on how exactly deep learning manages to do these kinds of things." - A.Fawzi

... "I believe we'll be seeing AI-generated results for other problems of a similar nature, albeit rarely something as central as matrix multiplication. There's significant motivation for such technology, since fewer operations in an algorithm doesn't just mean faster results, it also means less energy spent," he says. If a task can be completed slightly more efficiently, then it can be run on less powerful, less power-intensive hardware, or on the same hardware in less time, using less energy.

Alhussein Fawzi et al., Discovering faster matrix multiplication algorithms with reinforcement learning, Nature (2022)
https://www.nature.com/articles/s41586-022-05172-4
« Last Edit: October 06, 2022, 06:37:12 PM by vox_mundi »

vox_mundi

  • Multi-year ice
  • Posts: 11060
    • View Profile
  • Liked: 3637
  • Likes Given: 804
Re: Robots and AI: Our Immortality or Extinction
« Reply #1496 on: October 06, 2022, 06:17:40 PM »
Boston Dynamics, Agility and Others Pen Letter Condemning Weaponized ‘General Purpose’ Robots
https://techcrunch.com/2022/10/06/boston-dynamics-agility-and-others-pen-letter-condemning-weaponized-general-purpose-robots/



This morning, a group of prominent robotics firms issued an open letter condemning the weaponization of ‘general purpose’ robots. Signed by Boston Dynamics, Agility, ANYbotics, Clearpath Robotics, and Open Robotics (but not Tesla), the letter notes, in part:

Quote
... We believe that adding weapons to robots that are remotely or autonomously operated, widely available to the public, and capable of navigating to previously inaccessible locations where people live and work, raises new risks of harm and serious ethical issues. Weaponized applications of these newly-capable robots will also harm public trust in the technology in ways that damage the tremendous benefits they will bring to society. ...

https://www.bostondynamics.com/open-letter-opposing-weaponization-general-purpose-robots

The piece comes amid mounting concern around the proliferation of advanced robotics systems like Boston Dynamics’ Spot and Agility’s Digit. Fictional works like Black Mirror, coupled with real-world efforts like the Ghost Robotics dog that has been outfitted with a sniper rifle, have raised significant red flags for many.

Today’s open letter finds the signees pledging not to weaponize their systems, while calling on lawmakers to “work with us to promote safe use of these robots and to prohibit their misuse. We also call on every organization, developer, researcher, and user in the robotics community to make similar pledges not to build, authorize, support, or enable the attachment of weaponry to such robots.”

The “general purpose” phrasing affords some wiggle room for companies working with the Defense Department and others to design robots specifically for warfare.
« Last Edit: October 06, 2022, 06:23:25 PM by vox_mundi »

Sigmetnow

  • Multi-year ice
  • Posts: 27289
    • View Profile
  • Liked: 1458
  • Likes Given: 448
Re: Robots and AI: Our Immortality or Extinction
« Reply #1497 on: October 07, 2022, 08:26:40 PM »
Boston Dynamics, Agility and Others Pen Letter Condemning Weaponized ‘General Purpose’ Robots

More on this from Boston Dynamics:
Quote
Today we join five other leading robotics companies in pledging that we will not weaponize our advanced-mobility general-purpose robots or the software we develop that enables advanced robotics and we will not support others to do so.
 
Learn what Boston Dynamics is doing to oppose weaponization, while supporting the safe, ethical, and effective use of robots in public safety:

Quote
An Ethical Approach to Mobile Robots in Our Communities
Robots should be used to help, not harm. We prohibit weaponization, while supporting the safe, ethical, and effective use of robots in public safety. …
10/05/2022
https://www.bostondynamics.com/resources/blog/ethical-approach-mobile-robots-our-communities

< That wording doesn't cover refusing to sell your robots to entities who may then weaponise them without your direct support, I notice.
 
Brendan Schulman @RobotPolicy
This would violate our Terms and Conditions of sale and result in the software license being void. But the letter also calls on policymakers to work on solutions.
10/6/22, 7:06 AM. https://twitter.com/bostondynamics/status/1577978713249124355

From the joint letter:
“To be clear, we are not taking issue with existing technologies that nations and their government agencies use to defend themselves and uphold their laws.”

=====
 
Seeing Tesla's Optimus Robot Up Close Makes Me Take It Seriously
https://www.cnet.com/home/smart-home/seeing-teslas-optimus-robot-up-close-has-me-taking-it-seriously/
People who say it cannot be done should not interrupt those who are doing it.

nadir

  • Young ice
  • Posts: 2603
    • View Profile
  • Liked: 284
  • Likes Given: 38
Re: Robots and AI: Our Immortality or Extinction
« Reply #1498 on: October 07, 2022, 08:47:19 PM »
This reminds me of Google’s old “Don’t be evil” clause, removed in 2018.

If there’s one reason I haven’t engaged in comparisons between the ridiculous Tesla Bumble and the sophisticated Boston Dynamics robots, it’s that I distrust the latter so much…

In fact, I believe a robot similar to Boston Dynamics’ dogs has already found a niche military function.

https://www.popularmechanics.com/military/weapons/a37939706/us-army-robot-dog-ghost-robotics-vision-60/

And anybody who has read Bradbury will find that robot very unsettling…
« Last Edit: October 08, 2022, 03:05:04 AM by nadir »

vox_mundi

  • Multi-year ice
  • Posts: 11060
    • View Profile
  • Liked: 3637
  • Likes Given: 804
Re: Robots and AI: Our Immortality or Extinction
« Reply #1499 on: October 11, 2022, 03:59:16 AM »