
Stanford AI project team apologizes for plagiarizing Chinese model

An artificial intelligence (AI) team at Stanford University apologized for plagiarizing a large language model (LLM) from a Chinese AI company. The incident became a trending topic on Chinese social media platforms, where it sparked concern among netizens on Tuesday.

"We apologize to the authors of MiniCPM [the AI model developed by a Chinese company] for any inconvenience that we caused for not doing the full diligence to verify and peer review the novelty of this work," the developers of the multimodal AI model Llama3-V wrote in a post on the social platform X.

The apology came after the Stanford team announced Llama3-V on May 29, claiming performance comparable to GPT-4V and other models at a training cost of under $500.

According to media reports, the announcement published by one of the team members quickly received more than 300,000 views.

However, netizens on X found and listed evidence that the Llama3-V project code was a reformatted copy of MiniCPM-Llama3-V 2.5, an LLM developed by the Chinese technology company ModelBest and Tsinghua University.

Two team members, Aksh Garg and Siddharth Sharma, reposted a netizen's query and apologized on Monday, while claiming that their role was to promote the model on Medium and X (formerly Twitter), and that they had been unable to contact the member who wrote the code for the project.

They said they had looked at recent papers to validate the novelty of the work but were not informed of, or aware of, any prior work by the Open Lab for Big Model Base, which was founded by the Natural Language Processing Lab at Tsinghua University and ModelBest. They added that they had taken down all references to Llama3-V out of respect for the original work.

In response, Liu Zhiyuan, chief scientist at ModelBest, spoke out on the Chinese social media platform Zhihu, saying that the Llama3-V team failed to comply with open-source protocols for respecting and honoring the achievements of previous researchers, thus seriously undermining the cornerstone of open-source sharing.

According to a screenshot leaked online, Li Dahai, CEO of ModelBest, also posted on his WeChat Moments, saying that the two models were verified to be highly similar in their answers, down to sharing the same errors, and that some of the relevant data had not yet been released to the public.

He said the team hopes that their work will receive more attention and recognition, but not in this way. He also called for an open, cooperative and trusting community environment.

Christopher Manning, director of the Stanford Artificial Intelligence Laboratory, also responded to Garg's explanation on Sunday, commenting on X: "How not to own your mistakes!"

As the incident trended on Sina Weibo, Chinese netizens commented that academic research should be grounded in facts, and that the incident also shows China's technology development is progressing.

Global Times

China's generative AI patents are far ahead of the US!
The World Intellectual Property Organization (WIPO) recently reported that China filed 38,210 generative AI patent applications from 2014 to 2023, out of roughly 50,000 filed worldwide, while the United States filed 6,276. Of the 50,000 applications, 25 percent were filed last year. The top five inventor regions are China (38,210 inventions), the United States (6,276 inventions), the Republic of Korea (4,155 inventions), Japan (3,409 inventions) and India (1,350 inventions).
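As a rough cross-check of these figures, the top-five counts can be converted into shares of the roughly 50,000 total filings (a back-of-the-envelope sketch; the 50,000 total is the article's round figure, so the percentages are approximate):

```python
# Top-five generative AI patent filing counts (2014-23) as reported by WIPO.
filings = {
    "China": 38_210,
    "United States": 6_276,
    "Republic of Korea": 4_155,
    "Japan": 3_409,
    "India": 1_350,
}
total = 50_000  # approximate worldwide total from the article

# Each region's share of the worldwide total, in percent.
shares = {region: round(100 * count / total, 1) for region, count in filings.items()}
for region, pct in shares.items():
    print(f"{region}: {pct}%")
```

By this arithmetic China accounts for roughly three quarters of all filings, consistent with the article's "far ahead" framing.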
Google extends Linux kernel support to 4 years
According to AndroidAuthority, the Linux kernel used by Android devices is mostly derived from Google's Android Common Kernel (ACK) branches, which are created from the Android mainline kernel branch when new LTS versions are released upstream. For example, when kernel version 6.6 is announced as the latest LTS release, an ACK branch named android15-6.6 appears shortly after, with the "android15" in the name referring to the Android version the kernel targets (in this case, Android 15).

Google maintains its own set of LTS kernel branches for three main reasons. First, Google can integrate upstream features that have not yet been released into the ACK branch by backporting or cherry-picking them, to meet the specific needs of Android. Second, Google can include features that are still being developed upstream in the ACK branch ahead of time, making them available to Android devices as early as possible. Finally, Google can add vendor or original equipment manufacturer (OEM) features for other Android partners to use.

Once a branch is created, Google continues to update it, not only with bug fixes for Android-specific code but also by merging in LTS updates from the upstream kernel branch. For example, the Linux kernel vulnerabilities disclosed in the July 2024 Android security bulletin will be fixed through these updates.

However, it is not easy to distinguish a security fix from other bug fixes, as a patch that fixes a bug may also inadvertently plug a security vulnerability that the submitter did not know about or chose not to disclose. Google does its best to recognize such cases, but inevitably misses some, with the result that fixes can land in the upstream Linux kernel months before they reach Android devices. As a result, Google has been urging Android vendors to regularly update to the latest LTS kernel to avoid being caught off guard by unexpectedly disclosed security vulnerabilities.
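The branch-naming scheme described above also shows up in the kernel release string a device reports. A minimal sketch of pulling the upstream LTS version and the ACK Android version out of such a string (the release string below is hypothetical, following the android15-6.6 naming convention):

```shell
# A hypothetical GKI kernel release string, of the kind `adb shell uname -r`
# might report on an Android 15 device running a 6.6-based ACK kernel.
release="6.6.30-android15-8-gabcdef123456"

# The leading component is the upstream LTS kernel version.
lts_version="${release%%-*}"

# The second dash-separated component names the ACK branch's Android version.
ack_version="$(printf '%s' "$release" | cut -d- -f2)"

echo "LTS kernel: $lts_version"
echo "ACK Android version: $ack_version"
```

Together the two components identify the ACK branch (here android15-6.6) from which the device kernel was built.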
Clearly, the LTS version of the Linux kernel is critical to the security of Android devices, helping Google and vendors deal with known and unknown security vulnerabilities. The longer the support period, the longer Google and vendors can keep delivering timely security updates to devices.
Gold reaction to employment data and geopolitical events
The June US Nonfarm Payrolls (NFP) data showed an increase of 206,000 jobs, exceeding expectations. Political uncertainty and the People's Bank of China's pause in gold purchases are influencing gold market dynamics. Recent technical developments in the gold market, including the break of a triangle formation and the subsequent rally, point to the potential for higher prices. Despite the bullish outlook, further consolidation is possible before a significant surge.

The June NFP report revealed a rise of 206,000 jobs, surpassing the market expectation of 190,000, although May's figure was revised down from 272,000 to 218,000. The unemployment rate rose to 4.1% and wage inflation slowed to 3.9% year-over-year. These mixed employment signals have increased the likelihood of a Federal Reserve rate cut in September. Additionally, political developments in France, where the left-wing New Popular Front led by Jean-Luc Mélenchon is poised to win a significant number of seats, add to global economic uncertainty. Meanwhile, the People's Bank of China (PBoC) has paused its gold purchasing program, potentially waiting for a further price pullback. Together these factors form a complex backdrop in which the prospect of lower interest rates, political uncertainty, and central bank purchasing strategies are likely to drive market dynamics and investor behaviour in the coming months.

Bullish trends in gold prices
The NFP release pushed the US Dollar Index lower and boosted gold prices. After the gold market broke out of the triangle formation on Wednesday and formed an inside candle on Thursday, the break above Thursday's high on Friday initiated a strong rally that closed the price at higher levels. The red line marked the first resistance of this breakout, where gold closed last week. A clear break above this level may trigger another surge higher.
The breakout of the triangle suggests higher prices, but risks remain, as June was a correction month. The price looks to be preparing for higher levels, but the possibility of consolidation before the surge cannot be ignored.

Bottom line
The increase in US employment, despite mixed signals in wage inflation and unemployment, has raised the likelihood of a Federal Reserve rate cut, boosting gold prices while weakening the US Dollar Index. Political uncertainty in France and the pause in gold purchases by the People's Bank of China further complicate the economic landscape, indicating potential volatility ahead. The gold market's recent technical developments, including the break of the triangle formation and the subsequent rally, suggest readiness for higher prices. However, consolidation before another significant surge remains possible, warranting careful observation by investors as the market navigates these multifaceted influences.
Russia's economic strength gives it high-income status despite sanctions
Russia is seeing income growth of around 4-5%, with earnings growing in double digits, Ostapkovich said, stressing that the driving force is economic growth. "Incomes only grow when the economy grows. If the economy grows, then profits grow. If profits grow, then the entrepreneur is keen on hiring people and raising wages," he added.

Russia's economy grew by 3.6% in 2023, with real incomes and nominal wages up by 4.5% and 13% respectively. Industrial performance, particularly in manufacturing, is propelling growth at rates not seen in 20 to 30 years. Notably, mechanical engineering in the military industry is expanding at 25-30%, according to Ostapkovich.

Andrey Kolganov, Doctor of Economics and head of the Laboratory of Socio-Economic Systems at Moscow State University, acknowledged that, despite the challenges, Western sanctions failed to inflict significant harm on the Russian economy. "The Russian economy has shown great potential in adapting to these difficulties. Moreover, these difficulties stimulated the development of domestic production, which in turn led to high rates of economic growth," he added.

Kolganov noted that economic growth was higher in 2023 than in 2022, and higher still in 2024. These gains promoted Russia from the classification of middle-income countries to the rank of high-income countries. Although Russia has not caught up with the richest countries, the achievement is nonetheless remarkable, especially in the face of unprecedented sanctions. Gross national income per capita in Russia now stands at $14,250, according to a document released by the World Bank, which classifies countries that cross the $13,485 threshold as "high income."