link1s.site

ChatGPT: Explained to Kids (How ChatGPT Works)

Chat means conversation, and GPT is the acronym for Generative Pre-trained Transformer.

Generative means it creates or produces something new; Pre-trained means the model has already learned from a large amount of text; and Transformer is the name of the artificial-intelligence architecture the model is built on.

Don't worry about the T for now; just focus on the G and the P.

We mainly use its generative ability to produce various kinds of content. But why can it produce such content? The reason lies in the P.

Only by first learning from a large amount of content can it then generate something new.
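This "learn first, then generate" idea can be sketched with a toy model. The sketch below is not the real ChatGPT algorithm (which uses a Transformer neural network trained on vastly more text); it is a tiny bigram model that "pre-trains" by counting which word follows which, then "generates" by repeatedly predicting a next word. The corpus and function names are made up for illustration.

```python
import random
from collections import defaultdict

def train(text):
    """'Pre-training': record which words follow each word in the text."""
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8):
    """'Generation': repeatedly pick a plausible next word."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # no known continuation, stop early
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran to the mat"
model = train(corpus)
print(generate(model, "the"))
```

Notice that the model can only produce word pairs it has already seen — which is exactly the point of the article: what it generates is limited by what it learned.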

And this kind of learning naturally has limitations. For example, even if you have learned a great deal since childhood, can you guarantee that your answer to any question is completely correct?

Almost impossible. First, knowledge is limited: ChatGPT is no exception, since it cannot master all knowledge. Second, knowledge must be accurate: how can we ensure that everything it learned is free of errors? Third, knowledge is complex: the same concept appears differently in different contexts, which even humans struggle to grasp perfectly, let alone AI.

So when we use ChatGPT, we also need to check the accuracy of its output. Most of the time it is probably fine, but if you want to rely on it for critical issues, you will need to review the output manually.

And ChatGPT has actually been upgraded twice since its launch: once to GPT-4, with more accurate answers, and more recently to GPT-4 Turbo.

The current ChatGPT is what is called a multimodal large model. Unlike the first generation, it can accept not only text but also other kinds of input, such as images, documents, and videos, and its output is more diverse too: besides text, it can also produce images, files, and so on.

World's deepest diving pool opens in Poland, 45.5 meters deep
The world's deepest diving pool, Deepspot, opened this weekend near the Polish capital Warsaw. The 45.5-meter pool contains artificial underwater caves, Mayan ruins and a small shipwreck for scuba divers and free divers to explore. Deepspot can hold 8,000 cubic meters of water, more than 20 times the capacity of a normal 25-meter swimming pool. Unlike ordinary swimming pools, Deepspot can still open despite Poland's COVID-19 epidemic prevention restrictions because it is a training center that provides courses. The operator also plans to open a hotel where guests can observe divers at a depth of 5 meters from their rooms. "This is the deepest diving pool in the world," Michael Braszczynski, 47, Deepspot's director and a diving enthusiast, told AFP at the opening yesterday. The current Guinness World Record holder is a 42-meter-deep pool in Montegrotto Terme, Italy. The 50-meter-deep Blue Abyss pool in the UK is scheduled to open in 2021. On the first day of Deepspot's opening, about a dozen people visited, including eight experienced divers who wanted to pass the instructor exam. "There are no spectacular fish or coral reefs here, so it can't replace the ocean, but it is certainly a good place to learn and train safe open water diving," said 39-year-old diving instructor Przemyslaw Kacprzak. "And it's fun! It's like a kindergarten for divers."
Former Microsoft CEO Ballmer wealth surpassed Gates, he only did one thing
On July 1, former Microsoft CEO and President Steve Ballmer surpassed Microsoft co-founder Bill Gates for the first time on the Bloomberg list of the world's richest people to become the sixth richest person in the world. According to the data, as of that day, Ballmer's net worth reached $157.2 billion, while Gates's wealth was $156.7 billion, placing him seventh. The latest figures, as of July 6, show that Ballmer's wealth has grown further to $161 billion, while Gates's stands at $159 billion. This is the first time Ballmer's net worth has surpassed Gates's, and it is one of the rare times in history that an employee's net worth has surpassed that of a company founder. Unlike Musk, Jeff Bezos and others, Ballmer's wealth was not accumulated through entrepreneurial success as a business founder, but simply because he chose to hold Microsoft stock "indefinitely." As Fortune previously reported, Ballmer is the only individual with a net worth of more than $100 billion as an employee rather than a founder.
SpaceX astronaut returns with an incredible change in his body
A provocative new study reveals the complex effects of the space environment on human health, providing insight into potential damage to blood, cell structure and the immune system. The study focused on SpaceX's Inspiration4 mission, which successfully sent two men and two women into space in 2021 to orbit the Earth for three days, and sheds some light on the effects of space travel on the human body. The research data, derived directly from the Inspiration4 mission, shows that even a brief trip to space can significantly damage the human immune system, trigger an inflammatory response, and profoundly affect cell structure. In particular, space travel triggered unprecedented changes in cytokines, which play a key role in immune response and muscle regulation but are not usually directly associated with inflammation. Notably, the study found a significant increase in muscle factors, a physiological response specific to skeletal muscle cells in microgravity rather than a simple immune response. Although non-muscular tissues did not show changes in inflammation-associated proteins, specific leg muscles such as the soleus and tibialis anterior showed significant signs of metabolic activity, especially increased interleukin in the latter, further enhancing the activation of immune cells.
Stanford AI project team apologizes for plagiarizing Chinese model
An artificial intelligence (AI) team at Stanford University apologized for plagiarizing a large language model (LLM) from a Chinese AI company, which became a trending topic on Chinese social media platforms, where it sparked concern among netizens on Tuesday. "We apologize to the authors of MiniCPM [the AI model developed by a Chinese company] for any inconvenience that we caused for not doing the full diligence to verify and peer review the novelty of this work," the multimodal AI model Llama3-V's developers wrote in a post on social platform X. The apology came after the team from Stanford University announced Llama3-V on May 29, claiming it had comparable performance to GPT4-V and other models, with the capability to train for less than $500. According to media reports, the announcement published by one of the team members quickly received more than 300,000 views. However, some netizens on X found and listed evidence of how the Llama3-V project code had been reformatted from, and was similar to, MiniCPM-Llama3-V 2.5, an LLM developed by a Chinese technology company, ModelBest, and Tsinghua University. Two team members, Aksh Garg and Siddharth Sharma, reposted a netizen's query and apologized on Monday, while claiming that their role was to promote the model on Medium and X (formerly Twitter), and that they had been unable to contact the member who wrote the code for the project. They looked at recent papers to validate the novelty of the work but had not been informed of, nor were they aware of, any of the work by the Open Lab for Big Model Base, which was founded by the Natural Language Processing Lab at Tsinghua University and ModelBest, according to their responses. They noted that they have taken all references to Llama3-V down out of respect for the original work.
In response, Liu Zhiyuan, chief scientist at ModelBest, spoke out on the Chinese social media platform Zhihu, saying that the Llama3-V team failed to comply with open-source protocols for respecting and honoring the achievements of previous researchers, thus seriously undermining the cornerstone of open-source sharing. According to a screenshot leaked online, Li Dahai, CEO of ModelBest, also made a post on his WeChat Moments, saying that the two models were verified to be highly similar in their answers, even making the same errors, and that some of the relevant data had not yet been released to the public. He said the team hopes their work will receive more attention and recognition, but not in this way. He also called for an open, cooperative and trusting community environment. Christopher Manning, director of the Stanford Artificial Intelligence Laboratory, also responded to Garg's explanation on Sunday, commenting "How not to own your mistakes!" on X. As the incident became a trending topic on Sina Weibo, Chinese netizens commented that academic research should be factual, but that the incident also proves technology development in China is progressing. Global Times
Samsung Electronics wins cutting-edge AI chip order from Japan's Preferred Networks
SEOUL, July 9 (Reuters) - Samsung Electronics (005930.KS) said on Tuesday it won an order from Japanese artificial intelligence company Preferred Networks to make chips for AI applications using the South Korean firm's 2-nanometre foundry process and advanced chip packaging service. It is the first order Samsung has revealed for its cutting-edge 2-nanometre chip contract manufacturing process. Samsung did not elaborate on the size of the order. The chips will be made using high-tech chip architecture known as gate-all-around (GAA), and multiple chips will be integrated in one package to enhance interconnection speed and reduce size, Samsung said in a statement. South Korea's Gaonchips Co (399720.KQ) designed the chips, Samsung said. The chips will go toward Preferred Networks' high-performance computing hardware for generative AI technologies such as large language models, Junichiro Makino, Preferred Networks vice president and chief technology officer of computing architecture, said in the statement.