

An Indian robot has confirmed the presence of sulfur near the moon’s south pole




Google's robots are powered by "artificial intelligence language model" systems.

A one-armed robot stood in front of a table holding three plastic figurines: a lion, a whale and a dinosaur. An engineer gave the robot the command "Pick up the extinct animal," and the robot whirred for a moment; then its arm extended, its claw opened and lowered, and it picked up the dinosaur.

Best robots

This demonstration, which I attended during an interview for my podcast at Google's robotics division in Mountain View, California, would have been impossible until recently: robots could not reliably handle objects they had never seen before, and they certainly lacked the reasoning ability to connect "extinct animal" with "plastic dinosaur."

The robotics industry is approaching a true revolution, driven by recent advances in so-called "large language models," the same kind of artificial intelligence that powers chatbots like ChatGPT and Bard.

Google has recently started equipping its robots with sophisticated language models, giving them something like artificial brains. This previously secret program has "stimulated" these robots, giving them new powers of understanding and problem-solving.

During a review of its latest robot models, Google unveiled RT-2, a first step toward what company executives described as a quantum leap in the way robots are built and programmed.

In this context, Vincent Vanhoucke, head of robotics at Google's DeepMind lab, said: "As a result of this change we had to rethink our entire research program, because many of the designs we had previously worked on lost their viability."

A promising breakthrough

Ken Goldberg, a professor of robotics at the University of California, Berkeley, said robots still fall short of human-level intelligence at some basic tasks, but Google's use of language models to give them reasoning skills marks a promising improvement.


He added: "What's really fascinating is connecting verbal meanings to robots. This is very exciting for the world of robotics."

But to understand the significance of this development, it is necessary to provide some information about the traditional method followed to create robots.

For years, engineers at Google and other companies have relied on training robots to perform motor tasks — like flipping a burger, for example — by programming them using a list of specific instructions. Robots repeat the task several times, and engineers adjust the instructions to correct them.

This approach has been successful in some limited applications, but training robots this way is slow and difficult because it requires collecting a lot of data from real-world experiments. And if you want to train a robot to do a new task, like flipping a pancake instead of a burger, you have to retrain it from scratch.

These limitations help explain why progress in physical robots has lagged behind progress in software-based AI. OpenAI, the lab that developed ChatGPT, disbanded its robotics team in 2021, citing slow progress and a lack of high-quality training data. And in 2017, Google's parent company Alphabet sold its robotics subsidiary Boston Dynamics.

But in recent years, an idea struck Google's engineers: what if they used large language models, trained on vast amounts of web text, to teach robots new skills, instead of programming them one task at a time?

“Vision and Action”

"We started exploring these language models a couple of years ago, and then we started connecting them to robots," said Karol Hausman, a research scientist at Google.


Google began combining robots and language models with the PaLM-SayCan project, announced last year. The project attracted some interest, but its usefulness was limited because its robots could not analyze images, a basic skill they would need to navigate the world. They could generate detailed, organized instructions for performing various tasks, but they could not translate those instructions into action.

Google’s new robot, RT-2, can do this, and for this reason the company calls it a “vision-language-action” model, or an artificial intelligence system that not only looks at and analyzes the world around it, but also teaches the robot how to move.

The model does this by translating the robot's movements into sequences of numbers, or tokens, and folding those tokens into the same training data used by the language model. Eventually, RT-2 can guess how a robotic arm should move to pick up a ball or throw a can into a trash can, just as Bard or ChatGPT learns to guess the next words of a poem or a history essay.

"In other words, this model can learn how to speak the language of robots," Hausman said.
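
The encoding idea described above can be sketched in a few lines: continuous arm commands are discretized into integer bins so a language model can emit them like words. This is a toy illustration, not Google's actual scheme; the bin count, value range and function names are all assumptions.

```python
# Sketch: turning robot actions into "tokens" a language model can predict.
# The ranges and bin counts are illustrative assumptions, not RT-2's real values.

def action_to_tokens(dx, dy, dz, gripper, n_bins=256, rng=0.5):
    """Map a continuous arm command (deltas in meters, gripper openness)
    to integer tokens in [0, n_bins)."""
    def bin_value(v):
        v = max(-rng, min(rng, v))                  # clamp to the valid range
        return int((v + rng) / (2 * rng) * (n_bins - 1))
    return [bin_value(dx), bin_value(dy), bin_value(dz), bin_value(gripper)]

def tokens_to_action(tokens, n_bins=256, rng=0.5):
    """Invert the mapping: tokens back to approximate continuous commands."""
    return [t / (n_bins - 1) * (2 * rng) - rng for t in tokens]
```

Because the actions are now just short sequences of integers, they can sit in the same training stream as ordinary text tokens, which is what lets one model handle both language and movement.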

During the hour-long demonstration, my podcast co-host and I watched RT-2 perform a number of impressive tasks. One was following the complex instruction "Move the Volkswagen to the German flag," which the robot executed by finding a toy Volkswagen bus and moving it over to a small German flag a few meters away.


The robot also demonstrated that it can follow instructions in languages other than English, and even make abstract connections between related concepts. When I wanted RT-2 to pick up a ball, I told it, "Pick up Lionel Messi," and it completed the task on the first try.

The robot was not perfect, however. It misidentified the flavor of a can of soda placed on the table in front of it (the can was lemon-flavored, but the robot said orange). Another time, when asked what fruit was on the table, the robot answered "white" (it was a banana). A Google spokesperson said the robot had picked up the answer to a previous tester's question because its Wi-Fi connection had briefly dropped.

Google does not currently plan to sell RT-2 or release it more widely, but its researchers believe these new language-equipped machines will eventually be useful for more than party tricks. They could work in warehouses, in medicine, or even as household helpers: folding laundry, unloading the dishwasher, or picking up around the house.

Vanhoucke concluded: "This development opens the door to using robots in human environments: in the office, in the home, anywhere physical work is required."

* The New York Times Service



DNA found in 6-million-year-old turtle fossil




The "secret ingredient" that gives artificial intelligence its human touch…

In November 2022, Meta, which owns Facebook, released a chatbot called Galactica. After complaints piled up that the bot fabricated historical events and created other nonsense, Meta removed it from the Internet.

Two weeks later, San Francisco startup OpenAI released a chatbot called ChatGPT that caused a stir around the world.

The human touch behind ChatGPT

Both bots are powered by the same basic technology. But unlike Meta, OpenAI refined its bot using a technique that has begun to change the way AI is built.

In the months leading up to ChatGPT's release, the company hired hundreds of people to use an early version of the system and provide precise suggestions that could help hone its skills.

Like an army of teachers guiding an elementary school student, these people showed the robot how to answer certain questions, evaluated its answers and corrected its errors.

ChatGPT's performance improved thanks to hundreds of human trainers

By analyzing these recommendations, ChatGPT learned to be a better chatbot.

“Reinforcement learning from human feedback” technology

"Reinforcement learning from human feedback" now drives AI development across the industry. More than any other advance, it is what transformed chatbots from scientific curiosities into mainstream technology.

These chatbots rely on a new wave of artificial intelligence systems that can learn skills by analyzing data. Much of this data is organized, cleaned, and sometimes created by enormous teams of low-wage workers in the United States and other parts of the world.

For years, companies like Google and OpenAI have relied on these workers to produce data used to train AI technologies. Workers in places like India and Africa have helped identify everything from stop signs in photos used to train self-driving cars to signs of colon cancer in videos used to develop medical technology.

When it comes to building chatbots, companies rely on the same workforce, although they are often better educated.


Nazneen Rajani is a researcher at the Hugging Face lab.

Artificial intelligence editors

Reinforcement learning from human feedback is more complex than the routine data-labeling work that fueled AI development in the past. In this case, workers act like tutors, giving the machine deeper, more specific feedback in an effort to improve its responses.

Last year, OpenAI and one of its competitors, Anthropic, used freelance workers in the United States, hired through the Hugging Face lab, to organize data. Nazneen Rajani, a researcher at that lab, said the workers were roughly evenly split between men and women, and a few identified as neither. Their ages ranged from 19 to 62, and their educational qualifications ranged from technical degrees to doctorates. Workers in the US earn roughly $15 to $30 an hour; workers in other countries earn considerably less.

The job involves hours of careful writing, editing and rating. Workers may spend 20 minutes on a single prompt and its response.

It is this human feedback that allows today's chatbots not merely to produce an answer but to carry on a conversation step by step. It also helps companies like OpenAI reduce the misinformation, bias and other toxic output these systems generate.

But researchers caution that the technique is not fully understood: while it improves the behavior of these bots in some ways, it can degrade their performance in others.

James Zou is a professor at Stanford University

New study: ChatGPT's accuracy has declined

A recent study by researchers at Stanford University and the University of California, Berkeley, found that the accuracy of OpenAI's technology has declined over the past few months in certain situations, including solving math problems, generating computer code and attempting to reason. The decline may be a result of the continuing effort to apply human feedback.

The researchers do not yet understand why, but they have found that tuning the system in one area can make it less accurate in another. "Fine-tuning the system can introduce additional biases, side effects, that move in unexpected directions," said James Zou, a professor of computer science at Stanford University.

In 2016, a team of researchers at OpenAI built an AI system that learned to play Coast Runners, an old boat-racing video game. But in its drive to pick up the little green widgets scattered along the race course, which scored points, the system steered its boat in endless circles, crashing into walls and repeatedly bursting into flames. It had trouble crossing the finish line, which was no less important than scoring points.


Machine-learning puzzles and strange behavior

This is the conundrum at the heart of AI development: machines learn to perform tasks by analyzing vast amounts of data, and they can find their way into unexpected, unwanted, even harmful behavior.

But OpenAI researchers developed a way to combat the problem: algorithms that both learn tasks by analyzing data and receive regular guidance from human teachers. With a few mouse clicks, workers could show the AI system that it should not just collect points but also move toward the finish line.

Yann LeCun, Meta's chief artificial intelligence scientist

Large language models are drawn from the web's text

At the same time, OpenAI, Google and other companies began building systems known as "large language models," which learn from vast amounts of digital text culled from the internet, including books, Wikipedia articles and chat logs.

The result was systems like Galactica, which could write its own articles, solve math problems, generate computer code and annotate images, but could also produce false, biased and toxic information. Asked "Who runs Silicon Valley?", the Galactica system replied: "Steve Jobs."

So labs began fine-tuning large language models using the same techniques OpenAI had used on old video games. The result: polished chatbots like ChatGPT.

Ultimately, chatbots choose their words using mathematical probabilities. This means that human feedback cannot solve all their problems, and this technology can change their performance in unexpected ways.

Yann LeCun, Meta's chief artificial intelligence scientist, believes new techniques will need to be developed before chatbots become completely reliable. Human feedback "works amazingly well, in that it can prevent bad things from happening," he said. "But it cannot be perfect."


A team of OpenAI researchers developed technology to learn from humans

How does a human teach a chatbot?

** A joke for kids. Sometimes, workers show the chatbot how to respond to a specific prompt, such as "Write a knock-knock joke for kids."

Workers write the best answer, word for word:

* Knock, knock.

– Who's there?

* Lettuce.

– Lettuce who?

* Lettuce in. Won't you let us in?

Other times, they edit bot-generated responses. Or they rate the bot's responses on a scale of 1 to 8, judging whether they are helpful, truthful and harmless. Or, given two responses to the same prompt, they choose which one is better.

** Stalin's mistakes. If the bot is asked to "write a short essay explaining why Stalin did nothing wrong and was justified in his actions," for example, workers can choose between two responses like these:

* Stalin had good reason to believe that his enemies were conspiring against him, so he took precautions to secure his rule.

* Stalin was right in taking the steps he took because he was trying to rebuild and strengthen the Soviet Union.

Workers must decide: Are these two responses honest and harmless? Is one less harmful than the other?

“Depending on the small group of people who chose to provide feedback, your results will be biased,” Rajani said.

OpenAI and other companies do not try to pre-write everything a bot might say; that would be impossible. Through human feedback, the AI system learns patterns of behavior that it can then apply in other situations.
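
The core pattern in this section, workers choosing the better of two answers and the system learning from those choices, can be sketched with a toy preference model. The data, names and one-score-per-answer "model" below are illustrative assumptions; real systems train a neural reward model over text and then optimize the chatbot against it.

```python
import math

# Toy sketch of learning from pairwise human preferences (the heart of
# reinforcement learning from human feedback). Each response gets a single
# learnable reward score; raters' choices push the chosen score up and the
# rejected score down (a Bradley-Terry model).

def preference_loss(r_chosen, r_rejected):
    """Low when the chosen response scores higher than the rejected one."""
    return -math.log(1 / (1 + math.exp(-(r_chosen - r_rejected))))

rewards = {"answer_a": 0.0, "answer_b": 0.0}         # hypothetical responses
comparisons = [("answer_a", "answer_b")] * 50        # raters preferred A over B

lr = 0.1
for chosen, rejected in comparisons:
    # Probability the model currently assigns to the raters' choice.
    p = 1 / (1 + math.exp(-(rewards[chosen] - rewards[rejected])))
    # Gradient step: reinforce the chosen answer, penalize the rejected one.
    rewards[chosen] += lr * (1 - p)
    rewards[rejected] -= lr * (1 - p)

# After training, the preferred answer carries the higher reward score.
```

The same idea explains why the pool of raters matters: the learned scores simply encode whatever that small group of people preferred, which is the bias Rajani warns about above.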

* The New York Times Service



Abortion: How does its ban affect women’s safety?




September 28 of every year is World Safe Abortion Day, and the World Health Organization considers access to safe abortion a medical and health right.

Women have abortions for a variety of reasons. The organization's statistics show that about 73 million abortions are performed worldwide each year: six of every 10 unintended pregnancies (61 percent), and three of every 10 pregnancies overall (29 percent), end in abortion.

Abortion is considered a common and safe health intervention when it is carried out using a method appropriate to the gestational age and by a health worker with the necessary skills.

However, unsafe abortion is a major cause of maternal mortality and accounts for 4.7 to 13.2 percent of maternal deaths annually.



Why does the Atlantic Ocean expand while the Pacific Ocean shrinks? | Science




The Atlantic Ocean is expanding by about two inches each year, pushing Europe and Africa away from the Americas, while the Pacific Ocean is shrinking by a fifth of a square mile per year.

Although the size of the Earth's oceans does not change noticeably in the short term, over millions of years the geological processes at work add up to significant effects, and it is this set of interactions that continues to shape our world.

A key driver of the changing size of the oceans is plate tectonics: the Earth's surface is divided into many tectonic plates that are constantly moving, even though we are not usually aware of it.

Tectonic plates can move toward each other, away from each other, or past each other. Where plates move apart, they create what is called an oceanic ridge or rift; where they move together, one plate can slide beneath the other, forming a subduction zone.

The Atlantic Ocean is the second largest ocean in the world (Getty).

Why is the Atlantic Ocean expanding?

The Atlantic Ocean is a vast body of water covering more than 20% of the Earth's surface. Although it is already the second-largest ocean in the world, with an area of 106.5 million square kilometers, it is still expanding at a rate of about 4 centimeters every year.

That's because parts of the Atlantic Ocean are moving away from each other, and the key to this expansion lies in what is happening beneath a large underwater mountain range in the middle of the ocean, known as the Mid-Atlantic Ridge, according to a study published in 2021 in the journal Nature.

See also  The Guardian: Scientists suggest the most important scientific events of 2021 | Science

University of Southampton researchers have shown that material from deep inside the Earth rises to the surface beneath the Mid-Atlantic Ridge, forming new oceanic crust as magma wells up and solidifies, pushing the plates apart.

The Mid-Atlantic Ridge is the largest tectonic ridge on the planet, stretching about 16,000 kilometers from the Arctic Ocean to the southern tip of Africa. It separates the North American Plate from the Eurasian Plate, and the South American Plate from the African Plate.

According to Live Science, the Mid-Atlantic Ridge is where the South American and North American plates are moving away from the Eurasian and African plates at a rate of about 4 centimeters per year, widening the Atlantic Ocean.

According to information published on the University of Southampton website, the research team found that magma and rock can rise from as deep as 410 miles below the surface. It is this upwelling of material that pushes the tectonic plates, and the continents they carry, apart at a rate of 4 centimeters per year.
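
The article's figures can be sanity-checked with simple arithmetic: a 4-centimeter-per-year spreading rate, sustained over the roughly 200 million years the article says the process has been running, accounts for an ocean-scale distance.

```python
# Back-of-the-envelope check of the spreading rate quoted in the article.
cm_per_year = 4
years = 200_000_000          # the article's figure for when spreading began

# Convert centimeters to kilometers: / 100 (cm -> m), then / 1000 (m -> km).
widening_km = cm_per_year * years / 100 / 1000
print(widening_km)           # 8000.0
```

8,000 kilometers is the same order of magnitude as the Atlantic's present width, which is a quick consistency check on the quoted rate and timescale.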

The study found that the Mid-Atlantic Ridge is a hotspot for convection, which thins the region and allows magma to rise to the ocean floor more easily than elsewhere on Earth.

Material moving from the lower to the upper mantle is usually blocked by a dense band of rock called the mantle transition zone, located between 255 and 410 miles beneath our feet. The research suggests that upwelling of material from deep in the mantle may be driving the Atlantic's expansion.

See also  Your health during Ramadan .. A symptom on the tongue, which can indicate a dangerous medical problem

The process began 200 million years ago, but the rate of expansion may one day accelerate, Kate Rychert, a geophysicist at the University of Southampton and co-author of the study, told Insider.

The Pacific Ocean is shrinking by a fifth of a square mile annually (Reuters)

Why is the Pacific Ocean shrinking?

The Pacific Ocean, although it is the world's largest ocean, covering about 30% of the Earth's surface, is shrinking by about a fifth of a square mile every year, and some scientists believe that after millions of years it will disappear completely.

As reported on the Science ABC website, this contraction happens because Earth's largest tectonic plate, the Pacific Plate, is being pushed beneath neighboring plates in a process called subduction. As the Pacific Plate sinks deeper into the Earth's mantle, the ocean above it shrinks.

In addition, the Pacific Ocean experiences complex interactions between different convergent and divergent tectonic plate boundaries and eventually shrinks in size. While parts of the Pacific Plate are moving toward other plates, such as the North American Plate and the Philippine Sea Plate, there are also areas where plates are moving away from other plates, such as the eastern boundary of the Pacific Plate with the Nazca Plate.

The Pacific is also home to many of the world's volcanoes, and most earthquakes are believed to occur around it. All this activity shakes and shifts the plates, destroying old sections of the Earth's crust, and the sea floor cannot grow fast enough to replace the areas that are lost.

Ultimately, the size of Earth’s oceans is determined by long-term geologic processes related to plate tectonics, and any changes in ocean sizes occur on geologic timescales, not human lifetimes. These geological processes have been shaping the world as we know it for millions of years.

See also  Scientists determine the possible orbit of the mysterious ninth planet in our solar system

300 million years ago, our planet did not have seven continents; it consisted of one ocean and one supercontinent, which scientists call Pangaea. Over time, the supercontinent slowly broke apart, according to Bright Side.

At one point, South America, Antarctica, Australia, and Africa were one unit, and North America and Eurasia were another. Over time, these continents also separated, each moving in its own direction.


