An artificial intelligence (AI) team at Stanford University apologized for plagiarizing a large language model (LLM) from a Chinese AI company, an incident that became a trending topic on Chinese social media platforms and sparked concern among netizens on Tuesday.
"We apologize to the authors of MiniCPM [the AI model developed by a Chinese company] for any inconvenience that we caused for not doing the full diligence to verify and peer review the novelty of this work," the developers of the multimodal AI model Llama3-V wrote in a post on the social platform X.
The apology came after the team from Stanford University announced Llama3-V on May 29, claiming that it offered performance comparable to GPT-4V and other models and could be trained for less than $500.
According to media reports, the announcement published by one of the team members quickly received more than 300,000 views.
However, some netizens on X found and listed evidence that the Llama3-V project code had been reformatted from, and was highly similar to, MiniCPM-Llama3-V 2.5, an LLM developed by the Chinese technology company ModelBest and Tsinghua University.
Two team members, Aksh Garg and Siddharth Sharma, reposted a netizen's query and apologized on Monday, saying that their role had been to promote the model on Medium and X (formerly Twitter), and that they had been unable to contact the team member who wrote the code for the project.
They said they had looked at recent papers to validate the novelty of the work but had not been informed of, nor were they aware of, any prior work by the Open Lab for Big Model Base, which was founded by the Natural Language Processing Lab at Tsinghua University and ModelBest. They noted that they had taken down all references to Llama3-V out of respect for the original work.
In response, Liu Zhiyuan, chief scientist at ModelBest, spoke out on the Chinese social media platform Zhihu, saying that the Llama3-V team failed to comply with open-source protocols for respecting and honoring the achievements of previous researchers, thus seriously undermining the cornerstone of open-source sharing.
According to a screenshot leaked online, Li Dahai, CEO of ModelBest, also posted on his WeChat Moments, saying that the two models had been verified to be highly similar in their answers, even producing the same errors, and that some of the relevant data had not yet been released to the public.
He said the team hoped that its work would receive more attention and recognition, but not in this way. He also called for an open, cooperative and trusting community environment.
Christopher Manning, director of the Stanford Artificial Intelligence Laboratory, also responded to Garg's explanation on Sunday, commenting on X: "How not to own your mistakes!"
As the incident became a trending topic on Sina Weibo, Chinese netizens commented that academic research should be factual, but that the incident also shows China's technology development is progressing.
Global Times