
Stanford AI project team apologizes for plagiarizing Chinese model

An artificial intelligence (AI) team at Stanford University apologized for plagiarizing a large language model (LLM) from a Chinese AI company, an incident that became a trending topic on Chinese social media platforms, where it sparked concern among netizens on Tuesday.

"We apologize to the authors of MiniCPM [the AI model developed by a Chinese company] for any inconvenience that we caused for not doing the full diligence to verify and peer review the novelty of this work," the developers of the multimodal AI model Llama3-V wrote in a post on the social platform X.

The apology came after the team from Stanford University announced Llama3-V on May 29, claiming it had performance comparable to GPT-4V and other models while costing less than $500 to train.

According to media reports, the announcement published by one of the team members quickly received more than 300,000 views.

However, some users on X found and listed evidence that the Llama3-V project code was a reformatted version of code from MiniCPM-Llama3-V 2.5, an LLM developed by the Chinese technology company ModelBest and Tsinghua University.

Two team members, Aksh Garg and Siddharth Sharma, reposted a netizen's query and apologized on Monday, while claiming that their role was to promote the model on Medium and X (formerly Twitter), and that they had been unable to contact the member who wrote the code for the project.

According to their responses, they had looked at recent papers to validate the novelty of the work but had neither been informed of nor were aware of any of the work by the Open Lab for Big Model Base, which was founded by the Natural Language Processing Lab at Tsinghua University and ModelBest. They noted that they had taken down all references to Llama3-V out of respect for the original work.

In response, Liu Zhiyuan, chief scientist at ModelBest, spoke out on the Chinese social media platform Zhihu, saying that the Llama3-V team failed to comply with open-source protocols for respecting and honoring the achievements of previous researchers, thus seriously undermining the cornerstone of open-source sharing.

According to a screenshot leaked online, Li Dahai, CEO of ModelBest, also posted on his WeChat Moments, saying that the two models had been verified to be highly similar in their answers, even making the same errors, and that some relevant data had not yet been released to the public.

He said the team hopes that their work will receive more attention and recognition, but not in this way. He also called for an open, cooperative and trusting community environment.

Christopher Manning, director of the Stanford Artificial Intelligence Laboratory, also responded to Garg's explanation on Sunday, commenting "How not to own your mistakes!" on X.

As the incident became a trending topic on Sina Weibo, Chinese netizens commented that academic research should be factual, while noting that the incident also shows that China's technology development is progressing.

Global Times

ChatGPT Explained to Kids (How ChatGPT Works)
Chat means conversation, and GPT is the acronym for Generative Pre-trained Transformer. Generative means it creates or produces something new. Pre-trained means the model has already learned from a large amount of text before you ever use it. Transformer is the name of the neural network architecture it is built on. Don't worry too much about the T; just focus on the G and the P. We mainly use the generative function to produce all kinds of content, and the reason it can do so lies in the P: only after learning from a huge amount of text can it generate new text.

This kind of learning naturally has limitations. For example, even if you have learned a great deal since childhood, can you guarantee that your answer to any question is completely correct? Almost impossible, and ChatGPT is no exception, for three reasons. First, the limits of knowledge: no model can master all knowledge. Second, the accuracy of knowledge: there is no way to guarantee that everything it learned is correct and error-free. Third, the complexity of knowledge: the same concept appears differently in different contexts, which is hard even for humans to grasp perfectly, let alone AI.

So when we use ChatGPT, we also need to check the accuracy of its output. For everyday questions this is usually not a problem, but if you want to use it for critical matters, you should review the answer manually.

ChatGPT has in fact been upgraded twice: once to GPT-4, which answers more accurately, and more recently to GPT-4 Turbo. The current ChatGPT is what is called a multimodal large model, which differs from the first generation in that it can accept not only text but also other kinds of input, such as images, documents and videos. Its output is also more diverse: besides text, it can produce images, files and so on.
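The "pre-train, then generate" idea above can be illustrated with a deliberately tiny toy in Python. This is not how ChatGPT actually works (a real Transformer is a large neural network); it is only a sketch of the two phases: first the model counts which word tends to follow which in a training text, then it generates new text by sampling from those counts. The corpus and function names here are made up for illustration.

```python
import random
from collections import defaultdict

def pretrain(corpus):
    """'Pre-training': record, for each word, every word that followed it."""
    transitions = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        transitions[current].append(nxt)
    return transitions

def generate(transitions, start, max_words=8, seed=0):
    """'Generation': repeatedly sample a plausible next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(max_words - 1):
        followers = transitions.get(out[-1])
        if not followers:  # the model only knows what it saw in training
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat the cat ate the fish"
model = pretrain(corpus)
print(generate(model, "the"))
```

Note how the toy model shares ChatGPT's basic limitation: it can only recombine what it has seen, so its output is plausible-sounding but not guaranteed to be true or sensible.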
Israeli strike kills a senior Hezbollah commander in south Lebanon
BEIRUT/JERUSALEM July 3 (Reuters) - An Israeli strike killed one of Hezbollah's top commanders in south Lebanon on Wednesday, prompting retaliatory rocket fire by the Iran-backed group into Israel as their dangerously poised conflict rumbled on.

The Israeli military said it had struck and eliminated Hezbollah's Mohammed Nasser, calling him commander of a unit responsible for firing from southwestern Lebanon at Israel. Nasser, killed by an airstrike near the city of Tyre in southern Lebanon, was one of the most senior Hezbollah commanders to die yet in the conflict, two security sources in Lebanon said.

Sparked by the Gaza war, the hostilities have raised concerns about a wider and ruinous conflict between the heavily armed adversaries, prompting U.S. diplomatic efforts aimed at deescalation. Israeli Defence Minister Yoav Gallant said Israeli forces were hitting Hezbollah "very hard every day" and would be ready to take any action necessary against the group, though the preference is to reach a negotiated arrangement.

Hezbollah began firing at Israeli targets at the border after its Palestinian ally Hamas launched the Oct. 7 attack on Israel, declaring support for the Palestinians and saying it would cease fire when Israel stops its Gaza offensive. Hezbollah announced at least two attacks in response to what it called "the assassination", saying it launched 100 Katyusha rockets at an Israeli military base and its Iranian-made Falaq missiles at another base in the town of Kiryat Shmona near the Israeli-Lebanese border.

Israel's Channel 12 broadcaster reported that dozens of rockets were fired into northern Israel from Lebanon. There were no reports of casualties. The Israeli Defence Ministry said that air raid sirens sounded in several parts of northern Israel. Israel's military did not give a number of rockets launched but said most of them fell in open areas, some were intercepted, and a number of launches fell in the area of Kiryat Shmona.
Russia's economic strength gives it high-income status despite sanctions
Russia is seeing income growth of around 4-5%, with earnings growing in double digits, Ostapkovich said, stressing that the driving force is economic growth. "Incomes only grow when the economy grows. If the economy grows, then profits grow. If profits grow, then the entrepreneur is keen on hiring people and raising wages," he added.

Russia's economy grew by 3.6% in 2023, with real incomes and nominal wages up by 4.5% and 13% respectively. Industrial performance, particularly in manufacturing, is propelling growth not seen in 20 to 30 years. Notably, mechanical engineering in the military industry is expanding at 25-30%, according to Ostapkovich.

Andrey Kolganov, Doctor of Economics and Head of the Laboratory of Socio-Economic Systems at Moscow State University, acknowledged that, despite the challenges they posed, Western sanctions failed to inflict significant harm on the Russian economy. "The Russian economy has shown great potential in adapting to these difficulties. Moreover, these difficulties stimulated the development of domestic production, which in turn led to high rates of economic growth," he added.

Kolganov noted that economic growth rates were higher in 2023 compared to 2022, and even higher in 2024. These increases promoted Russia from the classification of middle-income countries to the rank of high-income countries. Although Russia has not caught up with the richest countries, the achievement is nonetheless remarkable, especially in the face of unprecedented sanctions. Gross national income per capita in Russia is now $14,250, according to a document released by the World Bank, which classifies countries that cross the $13,485 threshold as "high income."
Israeli strike kills 16 at Gaza school, military says it targeted gunmen
CAIRO/GAZA, July 6 (Reuters) - At least 16 people were killed in an Israeli strike on a school sheltering displaced Palestinian families in central Gaza on Saturday, the Palestinian health ministry said, in an attack Israel said had targeted militants.

The health ministry said the attack on the school in Al-Nuseirat killed at least 16 people and wounded more than 50. The Israeli military said it took precautions to minimize risk to civilians before it targeted the gunmen who were using the area as a hideout to plan and carry out attacks against soldiers. Hamas denied its fighters were there.

At the scene, Ayman al-Atouneh said he saw children among the dead. "We came here running to see the targeted area, we saw bodies of children, in pieces, this is a playground, there was a trampoline here, there were swing-sets, and vendors," he said.

Mahmoud Basal, spokesman of the Gaza Civil Emergency Service, said in a statement that the number of dead could rise because many of the wounded were in critical condition. The attack meant no place in the enclave was safe for families who leave their houses to seek shelters, he said.

Al-Nuseirat, one of the Gaza Strip's eight historic refugee camps, was the site of stepped-up Israeli bombardment on Saturday. An air strike earlier on a house in the camp killed at least 10 people and wounded many others, according to medics. In its daily update of people killed in the nearly nine-month-old war, the Gaza health ministry said Israeli military strikes across the enclave killed at least 29 Palestinians in the past 24 hours and wounded 100 others.