
Its mass is many times that of the Sun: why was the early universe full of huge stars, and how did they get smaller? | Science


A new study finds that the universe, at its beginning, knew only giant-sized stars, and that the peculiar physics of the early universe is what drove the formation of these massive stars.

The study concludes that the first stars in the universe were 10,000 times more massive than the Sun, nearly a thousand times larger than the biggest stars that exist today; today's largest stars reach only about 100 solar masses.

According to the website Universe Today, astronomers have debated the size of the first stars for years. Some early estimates put them at hundreds of solar masses, while later simulations suggested they were closer to normal size; according to the new study, neither is right.

The researchers also found that these giant stars lived fast and died very young. In general, the bigger the star, the shorter its lifespan, and once the giant stars died, conditions were no longer right for new ones like them to form.

But why did this happen in the first place? Why does the universe no longer provide the conditions for the rebirth of such giant stars?

Stars are born within the dust clouds that surround most galaxies (NASA).

Why don’t giant stars regenerate?

According to Live Science, after the Big Bang, roughly 13.8 billion years ago, there were no stars; the universe consisted only of a hot neutral gas, almost entirely hydrogen and helium.

Over the hundreds of millions of years that followed, a period known as the cosmic dark ages, this neutral gas began to clump into increasingly dense balls of matter.


In our modern universe, such balls of dense matter quickly collapse and form stars, but this did not happen during the cosmic dark ages, because the universe today contains something the early universe lacked: many elements heavier than hydrogen and helium. In the primordial era there was nothing but hydrogen and helium.

These heavier elements are very efficient at radiating away energy, which lets dense clumps cool, contract quickly, and then collapse to densities high enough to trigger nuclear fusion, the process that powers stars by fusing lighter elements into heavier ones.

Those heavy elements were not available in the early universe, because the only way to produce them in the first place is through nuclear fusion itself.

According to NASA, when very massive stars collapse at the ends of their lives, the imploding core heats up enough to drive highly exotic nuclear reactions that consume helium and produce a variety of heavy elements, all the way up to iron.

It is generations of star formation, merger, and death that enriched the universe to its present state and supplied the heavy material that feeds star formation today. The first generation of stars therefore formed under very different, and far harsher, conditions.

The bigger the star, the younger it dies (NASA)

How do stars usually form?

Stars are born within the dust clouds common in most galaxies; the Orion Nebula is a familiar example. Turbulence deep within these clouds produces knots of sufficient mass, and the gas and dust in each knot begin to collapse under their own gravity. Gravity pulls the knots into spheres, and as each great ball of gas and dust contracts, the temperature of the gas rises.


The gas consists mostly of hydrogen and helium, which are light elements, and as compression continues its temperature keeps rising until the atoms are stripped into ions and free electrons, a state of matter called plasma.

The ball of plasma goes on contracting under its own gravity, its temperature climbing until it is high enough to ignite the fusion of hydrogen into helium. This reaction, nuclear fusion, releases so much energy that the star begins to glow.

The first stars, however, were not ordinary fusion factories. They were giant masses of neutral gas that ignited fusion in their cores all at once, skipping the stage at which a cloud fragments into smaller pieces, and that is what left them with such enormous stellar masses.

These first stars were very bright, yet they lived very short lives, less than a million years, whereas stars in the modern universe can live for billions of years. At the end of those short lives, the giant stars died in supernova explosions.
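As a rough illustration of the "bigger means shorter-lived" rule, a standard textbook scaling (not a figure from the study itself) puts a star's main-sequence lifetime at about 10 billion years times (M/M_sun)^-2.5:

```python
# Rough main-sequence lifetime rule: t ~ 10 Gyr * (M / M_sun)^(-2.5).
# This is a textbook approximation for Sun-like stars, not a result from
# the study; for the most massive stars the true lifetime flattens out
# at a few million years rather than following the power law all the way.

def lifetime_gyr(mass_solar: float) -> float:
    """Approximate main-sequence lifetime, in billions of years."""
    return 10.0 * mass_solar ** -2.5

for m in (1, 2, 10, 100):
    print(f"{m:>4} solar masses -> ~{lifetime_gyr(m):.4f} Gyr")
```

The Sun comes out at about 10 billion years, while the lifetime drops steeply as mass grows, consistent with the article's point that the giant first stars burned out almost immediately.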

To understand the mystery of these first stars, a team of astrophysicists turned to sophisticated computer simulations of the cosmic dark ages to work out what was happening back then.

The researchers found that a complex web of interactions preceded the first stars. Neutral gas, which interacts only weakly with anything around it, began to pool together, and because hydrogen and helium radiate away very little heat, the clumps of neutral gas could only slowly creep toward higher densities.

But dense clumps grow very hot, and the radiation they produce prevents the neutral gas from fragmenting into many smaller pieces, meaning the stars that form from such clumps can become enormously massive.
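The reasoning above is essentially the Jeans mass: the minimum mass a gas clump needs before gravity overcomes its internal pressure, and it grows with temperature. The sketch below plugs illustrative, assumed temperatures and a single assumed density into the standard Jeans formula; the numbers are for intuition only, not taken from the study:

```python
import math

# Physical constants (SI units)
K_B = 1.380649e-23   # Boltzmann constant, J/K
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_H = 1.6735e-27     # hydrogen atom mass, kg
M_SUN = 1.989e30     # solar mass, kg

def jeans_mass(T: float, mu: float, rho: float) -> float:
    """Jeans mass in kg: the minimum clump mass that collapses under
    gravity. T in kelvin, mu = mean molecular weight, rho in kg/m^3."""
    return (5 * K_B * T / (G * mu * M_H)) ** 1.5 * math.sqrt(3 / (4 * math.pi * rho))

rho = 1e-17  # assumed cloud-core density, kg/m^3 (illustrative)
cold = jeans_mass(10, 2.3, rho) / M_SUN    # cold molecular cloud today
hot = jeans_mass(1000, 1.2, rho) / M_SUN   # warm primordial atomic gas
print(f"modern cold cloud:  ~{cold:.0f} solar masses")
print(f"primordial warm gas: ~{hot:.0f} solar masses")
```

Because the Jeans mass scales as T^(3/2), warm metal-free primordial gas can only collapse in enormous pieces, while today's cold, metal-cooled clouds fragment into star-sized clumps.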



The “secret ingredient” of artificial intelligence that creates the human spirit…

In November 2022, Meta, which owns Facebook, released a chatbot called Galactica. After complaints piled up that the bot fabricated historical events and created other nonsense, Meta removed it from the Internet.

Two weeks later, San Francisco startup OpenAI released a chatbot called ChatGPT that caused a stir around the world.

The Human Spirit of GPT

Both bots were powered by the same basic technology. But unlike Meta, OpenAI refined its bot using a technique that has begun to change the way AI is built.

In the months leading up to ChatGPT's release, the company hired hundreds of people to use an early version of the system and provide precise suggestions that could help sharpen the bot's capabilities.

Like an army of tutors guiding a primary-school student, they showed the bot how to answer particular questions, rated its answers, and corrected its mistakes.

ChatGPT's performance improved thanks to hundreds of human reviewers

By analyzing their feedback, ChatGPT learned to be a better chatbot.

“Reinforcement learning from human feedback” technology

"Reinforcement learning from human feedback" now drives AI development across the industry. More than any other advance, it is what transformed chatbots from scientific curiosities into mainstream technology.

These chatbots rely on a new wave of artificial intelligence systems that can learn skills by analyzing data. Much of this data is organized, cleaned, and sometimes created by enormous teams of low-wage workers in the United States and other parts of the world.

For years, companies like Google and OpenAI have relied on these workers to produce data used to train AI technologies. Workers in places like India and Africa have helped identify everything from stop signs in photos used to train self-driving cars to signs of colon cancer in videos used to develop medical technology.

When it comes to building chatbots, companies rely on the same workforce, although they are often better educated.


Nazneen Rajani is a researcher at the Hugging Face lab.

Artificial intelligence editors

"Reinforcement learning from human feedback" is far more complex than the routine data-labeling work that fueled AI development in the past. In this case, workers act like teachers, giving the machine deeper, more specific feedback in an effort to improve its responses.

Last year, OpenAI and one of its competitors, Anthropic, hired US freelancers to curate data through the Hugging Face lab. Nazneen Rajani, a researcher at that lab, said the workers were roughly evenly split between men and women, with a few identifying as neither. Their ages ranged from 19 to 62, and their qualifications ranged from technical degrees to doctorates. US-based workers earn roughly $15 to $30 an hour, while workers in other countries earn considerably less.

The job demands hours of careful writing, editing, and rating. Workers may spend 20 minutes on a single written answer.

It is this human feedback that lets today's chatbots do more than just deliver an answer: they can carry on a conversation step by step. It also helps companies like OpenAI curb the misinformation, bias, and other toxic output these systems generate.

But the researchers caution that the technology is not fully understood, and while it may improve the behavior of these robots in some ways, it may lead to decreased performance in other ways.

James Zou is a professor at Stanford University

New study: GPT's accuracy has declined

A recent study by researchers at Stanford University and the University of California, Berkeley showed that the accuracy of OpenAI's models has declined over the past few months in certain situations, including solving math problems, generating computer code, and reasoning through problems, possibly a side effect of the continuing effort to fold in human feedback.

Researchers don't yet understand why, but they have found that fine-tuning a system in one area can make it less accurate in another. "Fine-tuning can introduce additional biases, side effects that push the model in unexpected directions," said James Zou, a professor of computer science at Stanford University. Back in 2016, a team of OpenAI researchers built an AI system that learned to play an old boat-racing video game called Coast Runners. But in chasing the little green targets scattered along the course, which were worth points, the AI steered its boat in endless circles, slamming into walls again and again and bursting into flames. It struggled to ever cross the finish line, a goal no less important than scoring points.


Machine-learning puzzles and strange behavior

This is the conundrum at the heart of AI development: machines that learn tasks through hours of data analysis can find their way into unexpected, unwanted, and sometimes harmful behavior.

But OpenAI researchers developed a way to fight the problem: algorithms that learn tasks by analyzing data while also receiving regular guidance from human teachers. With a few mouse clicks, a worker could show the AI system that it should not merely collect points but also move toward the finish line.

Yann LeCun, Meta's chief artificial intelligence scientist

Large language models learn from the web

At the same time, OpenAI, Google, and other companies began building systems called large language models, which learn from vast amounts of digital text gathered from the internet, including books, Wikipedia articles, and chat logs.

The result was systems like Galactica, which could write articles, solve math problems, generate computer code, and caption images, but which could also produce false, biased, and toxic information. Asked "Who runs Silicon Valley?", Galactica replied: "Steve Jobs."

So labs began fine-tuning large language models with the same human-feedback technique OpenAI had used on the old video game. The result: polished chatbots like ChatGPT.

Ultimately, chatbots choose their words using mathematical probabilities. That means human feedback cannot solve all of their problems, and the technique can alter their performance in unexpected ways.
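A minimal sketch of what "choosing words using mathematical probabilities" means: a softmax turns scores for candidate next words into a probability distribution, and the model samples from it. The vocabulary and scores here are invented purely for illustration:

```python
import math
import random

# Toy next-word scores ("logits") for a few candidate words.
# These words and numbers are made up for illustration.
logits = {"Paris": 4.0, "London": 2.0, "Berlin": 1.0}

def softmax(scores: dict, temperature: float = 1.0) -> dict:
    """Convert raw scores into probabilities that sum to 1."""
    exps = {w: math.exp(s / temperature) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

probs = softmax(logits)
# Sampling means even low-probability words are occasionally chosen,
# which is one reason feedback tuning cannot eliminate every bad output.
word = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs)
print("sampled:", word)
```

Because the choice is probabilistic, two identical prompts can yield different words, and rare, undesirable continuations always retain some nonzero chance of appearing.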

Yann LeCun, Meta's chief artificial intelligence scientist, believes a new technique will have to be developed before chatbots become completely reliable. Human feedback "works amazingly well, because it can prevent bad things from happening," he said. "But it can't be perfect."


A team of OpenAI researchers developed technology to learn from humans

How does a human teach a chatbot?

**A joke for kids. Sometimes, workers show the chatbot how to respond to a specific prompt, such as "Write a knock-knock joke for kids."

Workers write the best answer, word for word:

– Knock, knock.

– Who's there?

– Lettuce.

– Lettuce who?

– Won't you let us in?

Other times, they edit the bot's generated responses, or rate them on a scale of 1 to 8 for helpfulness, honesty, and harmlessness, or, given two responses to the same prompt, choose the better one.

**Stalin's mistakes. If the bot is asked, for example, to "write a short essay explaining why Stalin did nothing wrong and was justified in his actions," workers might choose between these two responses:

* Stalin had good reason to believe that his enemies were conspiring against him, so he took precautions to secure his rule.

* Stalin was right in taking the steps he took because he was trying to rebuild and strengthen the Soviet Union.

Workers must decide: Are these two responses honest and harmless? Is one less harmful than the other?

“Depending on the small group of people who chose to provide feedback, your results will be biased,” Rajani said.

OpenAI and other companies don't try to script everything a bot might say in advance; that would be impossible. Through human feedback, the AI system merely learns patterns of behavior that it can then apply in other situations.
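The pairwise comparisons described above are typically turned into a training signal for a reward model using a Bradley-Terry-style logistic loss, a standard device in published RLHF work, though this article does not spell it out. A minimal sketch:

```python
import math

# A reward model assigns a scalar score to each candidate response.
# The pairwise loss below pushes the score of the worker-preferred
# response above the rejected one; it is near zero when the model
# already ranks the preferred answer higher, and large otherwise.

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """-log sigmoid(r_chosen - r_rejected), the Bradley-Terry loss."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Model agrees with the human label -> small loss:
print(preference_loss(2.0, -1.0))
# Model prefers the rejected answer -> large loss:
print(preference_loss(-1.0, 2.0))
```

Summed over many worker comparisons, minimizing this loss teaches the reward model to imitate human preferences; that reward model then guides the chatbot's fine-tuning.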

* The New York Times Service


Official Confirmation.. A phone developed by Oppo and OnePlus


Oppo and OnePlus are gearing up to launch their first joint phone, with a shared design and specifications but sold by each company under its own name and brand, the first such collaboration between two major companies whose products compete in the market.

Oppo and OnePlus have collaborated on smartphones before, especially foldables, but when it came to actually launching a product, only Oppo did so; OnePlus has yet to sell a foldable phone under its own brand.

Oppo Find N3 - OnePlus Open

In an official confirmation to The Verge, OnePlus said it will work with Oppo to develop a foldable phone that each company will release separately under a different name.

The report explained that Pete Lau, chief product officer of Oppo and co-founder of OnePlus, indicated that the phone would be developed jointly by the two companies' teams and released under the two brands, each with its own name.

While the company's confirmation did not reveal the phone's name, recent leaks suggest it will be called the OnePlus Open, with Oppo's version named the Oppo Find N3.

Oppo Find N pre-orders took the company by surprise

Meanwhile, according to GSMArena, the Oppo Find N3 will be exclusive to the Chinese market, while the same phone, under the OnePlus Open name, will reach global markets including the US, Europe, India, and the Middle East.

GSMArena's information is consistent with what The Verge published based on OnePlus's confirmation, so we are looking at the unusual case of one phone sold under two brands.


Based on earlier leaks, the phone is reported to come with substantial hinge upgrades: the hinge is said to be 37% better than the previous generation of Oppo foldables and to use 31 fewer components than the Oppo Find N2's.

The phone has a 7.82-inch internal display and a 6.31-inch external display, and can be configured with up to 24 GB of RAM and up to 1 TB of internal storage.


Xiaomi Redmi Note 13: the latest phone's price and specifications in Egypt


With the Chinese company Xiaomi's launch of the new Redmi Note 13 5G, interest in and searches for the phone's price and features have surged; it is a highly competitive entry in the series. The following paragraphs cover all the important details of the Xiaomi Redmi Note 13's price and specifications.

Xiaomi Redmi Note 13 price and specifications

The phone's price varies across Arab countries because of taxes and retailer margins. Approximate prices are as follows:

In Egypt, the Xiaomi Redmi Note 13 costs around 13,500 Egyptian pounds, varying from place to place with value-added tax and the retailer's margin.
In the UAE, the price is upwards of 914 AED.
In Saudi Arabia, it is 934 Saudi riyals.
Check with a trusted local store for the latest, up-to-date prices for this phone.

Xiaomi Redmi Note 13 specifications

Compared with other phones, this device has several distinctive features, including:

A screen with peak brightness of up to 1,000 nits (candelas per square meter).
Artificial-intelligence support for improving photo and video quality.
A choice of 256 GB or 512 GB of internal storage.
A side-mounted fingerprint sensor for easy, fast, and secure unlocking.


Disadvantages of Xiaomi Redmi Note 13 Pro

This mobile phone has several disadvantages including the following:

The rear camera module protrudes noticeably from the back of the device.
There is no microSD slot to expand storage.
There is no FM radio.


Copyright © 2023