
Stanford AI project team apologizes for plagiarizing Chinese model

An artificial intelligence (AI) team at Stanford University has apologized for plagiarizing a large language model (LLM) from a Chinese AI company. The incident became a trending topic on Chinese social media platforms, where it sparked concern among netizens on Tuesday.

"We apologize to the authors of MiniCPM [the AI model developed by a Chinese company] for any inconvenience that we caused for not doing the full diligence to verify and peer review the novelty of this work," the developers of the multimodal AI model Llama3-V wrote in a post on the social platform X.

The apology came after the team from Stanford University announced Llama3-V on May 29, claiming it offered performance comparable to GPT-4V and other models while costing less than $500 to train.

According to media reports, the announcement published by one of the team members quickly received more than 300,000 views.

However, some netizens on X found and listed evidence that the Llama3-V project code had been reformatted from, and was strikingly similar to, that of MiniCPM-Llama3-V 2.5, an LLM developed by the Chinese technology company ModelBest and Tsinghua University.

Two team members, Aksh Garg and Siddharth Sharma, reposted a netizen's query and apologized on Monday, while claiming that their role was to promote the model on Medium and X (formerly Twitter), and that they had been unable to contact the member who wrote the code for the project.

According to their responses, they had looked at recent papers to validate the novelty of the work but had been neither informed of nor aware of any of the work by the Open Lab for Big Model Base, which was founded by the Natural Language Processing Lab at Tsinghua University and ModelBest. They noted that they had taken down all references to Llama3-V out of respect for the original work.

In response, Liu Zhiyuan, chief scientist at ModelBest, spoke out on the Chinese social media platform Zhihu, saying that the Llama3-V team failed to comply with open-source protocols for respecting and honoring the achievements of previous researchers, thus seriously undermining the cornerstone of open-source sharing.

According to a screenshot leaked online, Li Dahai, CEO of ModelBest, also made a post on his WeChat Moments feed, saying that the two models had been verified to be highly similar in their answers, even making the same errors, and that some of the relevant data had not yet been released to the public.

He said the team hopes that their work will receive more attention and recognition, but not in this way. He also called for an open, cooperative and trusting community environment.

Christopher Manning, director of the Stanford Artificial Intelligence Laboratory, also responded to Garg's explanation on Sunday, commenting "How not to own your mistakes!" on X.

As the incident became a trending topic on Sina Weibo, Chinese netizens commented that academic research should be factual, and that the incident also shows that China's technology development is progressing.

Global Times

Zuckerberg surfed and drank beer on vacation, Musk: I prefer to work
After Meta CEO Mark Zuckerberg posted a video of his free time during the Independence Day holiday to his Facebook and Instagram accounts, Elon Musk responded on the X platform: "I prefer to work." Zuckerberg's video showed him surfing on a hydrofoil in a tuxedo, waving an American flag and drinking a beer, captioned "Happy birthday America." The video quickly went viral, and after X user greg shared it on the platform, Musk replied: "I hope he continues to have fun on the yacht. I prefer to work." Musk, a workaholic, attended the 29th annual Baron Investment Conference in November 2022, where he said: "My workload went from 78 hours a week to 120 hours a week..." In 2018, he slept on the floor of Tesla's Fremont factory in an effort to ramp up production of the Model 3.
TikTok to introduce a new feature that can clone your voice with AI in just 10 seconds
Use of AI is certainly the hottest topic in the tech industry, and every major and minor player is using it in some way. Tools like ChatGPT can help you do a wide range of tasks and even generate images. Then there is voice cloning: OpenAI recently introduced a voice engine that can generate a clone of your voice from just 15 seconds of audio, and there is no shortage of voice cloning tools on the web that can do the same. The newest tech giant planning to use AI to clone your voice is TikTok. We all know TikTok: short videos with filters, effects and all kinds of other things. Now TikTok has found a way to bring voice cloning AI into its app. The feature does not seem to have a proper name yet; the app simply references it as "Create your voice with AI" and "TikTok Voice Library". In the latest version of TikTok I came across some strings indicating that TikTok is working on it. I was also able to access the initial UI that introduces the feature, and to see the terms and conditions of the "TikTok Voice Library", which users have to accept in order to use it. Here are the screenshots from the app. As you can see in the screenshot above, this is the initial screen a user will see the first time they access the feature. TikTok claims it can create an AI version of your voice in just 10 seconds, and the generated AI voice clone can be used with text-to-speech in TikTok videos. The screen also outlines how it will work: you record yourself speaking, and TikTok processes the recording and uses information about your voice to generate your AI voice. As for privacy, your AI voice will stay private and you can delete it at any time.
Tapping the "Continue" button brings up the "TikTok Voice Library Terms" screen, which a user should definitely read; you can see and read it here as well. How it will work: after agreeing to the terms and conditions, I was shown a screen where TikTok displays some text and the user has to press the record button while reading it. Unfortunately, I did not see any text. This is probably because the feature is not fully ready, or the backend it fetches the text from is not live yet. Manually pressing the record button and saying random things also produces an error, so it is not yet possible to generate a sample voice and see how it compares to other voice cloning competitors. If it starts working someday, it will process your recorded voice and generate an AI version of it. Here is a screenshot of that screen. My guess is that once the feature starts working, users will only have to clone their voice once, and the saved AI voice can then be used through text-to-speech to add a voice to your videos. You just have to type the words; the choice is yours :p
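For the curious, those strings were the giveaway. Here is a minimal sketch of this kind of teardown check, assuming you have already decoded the app's resources to a local directory (with a tool such as apktool); the function name and directory layout are my own, purely illustrative:

```python
import os

# Marker strings quoted above, as they appear in the app's resources.
MARKERS = ["Create your voice with AI", "TikTok Voice Library"]

def find_feature_strings(root, markers=tuple(MARKERS)):
    """Return {marker: [paths of files under root containing it]}."""
    hits = {m: [] for m in markers}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                # Binary files are tolerated: undecodable bytes are ignored.
                with open(path, encoding="utf-8", errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue  # unreadable file, skip it
            for m in markers:
                if m in text:
                    hits[m].append(path)
    return hits
```

Pointing this at a decoded APK's `res/` directory surfaces which resource files reference an unreleased feature, which is exactly how strings like these tend to be spotted before a feature goes live.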
Autonomous driving is not so hot
Looking at the two major markets of the United States and China, the autonomous driving industry has fallen into a low tide in recent years. Last year, for example, Cruise, one of the twin stars of Silicon Valley autonomous driving and once valued at more than $30 billion, stumbled badly: its robotaxi (driverless taxi) operating permit was revoked and its Origin autonomous vehicle was discontinued. However, as a new track that deeply integrates the digital economy with the real economy, autonomous driving is a question that must be answered. On the one hand, it will accelerate the commercialization and industrialization of the technology and become an important part of the competition among major powers; on the other, it will promote industrial transformation and upgrading by improving the travel experience for the public, providing a new engine for urban development and injecting new vitality into the urban economy.
WhatsApp's new feature will let Meta AI edit your photos for you
WhatsApp beta version 2.24.14.20 has a new feature that allows users to share photos with Meta AI. The AI chatbot will analyze uploaded images and provide information or context about their content. Users may also be able to request specific edits to their photos directly through Meta AI, though the extent of this capability is still unknown. As the battle for AI dominance heats up, Meta is adding a new trick to its AI chatbot, Meta AI, which is already part of Facebook, Instagram, and WhatsApp. While Meta AI already has impressive text capabilities, such as replying to questions, suggesting captions, and holding conversations, users cannot currently share or upload photos to the Meta AI chat. WaBetaInfo has uncovered the new feature in the WhatsApp beta for Android version 2.24.14.20: it will allow Meta AI to interact with photos shared by users, reply to them, and even edit them. As shown in the attached screenshot, WhatsApp is testing a new camera button in the Meta AI chat, designed to function like the camera button in regular chats. This addition will let users manually share photos with Meta AI, a capability that is currently unavailable. With this new functionality, users will be able to ask questions about their photos, presumably including asking the AI to identify objects or locations or provide context about a photo's content. The screenshot also suggests that Meta AI will offer the option to edit photos, letting users make changes to their images directly within the chat by sharing a prompt. The exact scope of this image editing feature remains unclear, leaving us to wonder whether it will be limited to simple tweaks or unleash a powerful AI-driven photo editing suite. Either way, this feature could be a big hit, especially if it performs as promised.
While this new image-sharing feature means Meta will analyze, and potentially run face recognition on, the photos you upload, the screenshot includes a disclaimer indicating that users will be able to delete their photos whenever they want. For now, the feature appears to still be in development, so it might be some time before it rolls out publicly. Recently, we also reported that WhatsApp is working on an "Imagine Me" feature that would allow Meta AI to generate AI avatars of you based on a set of your photos.