
Stanford AI project team apologizes for plagiarizing Chinese model

An artificial intelligence (AI) team at Stanford University apologized for plagiarizing a large language model (LLM) from a Chinese AI company, an incident that became a trending topic on Chinese social media platforms and sparked concern among netizens on Tuesday.

"We apologize to the authors of MiniCPM [the AI model developed by a Chinese company] for any inconvenience that we caused for not doing the full diligence to verify and peer review the novelty of this work," the developers of the multimodal AI model Llama3-V wrote in a post on the social platform X.

The apology came after the team from Stanford University announced Llama3-V on May 29, claiming it offered performance comparable to GPT-4V and other models and could be trained for less than $500.

According to media reports, the announcement published by one of the team members quickly received more than 300,000 views.

However, some netizens on X found and listed evidence that the Llama3-V project code was a reformatted version of, and highly similar to, MiniCPM-Llama3-V 2.5, an LLM developed by the Chinese technology company ModelBest and Tsinghua University.

Two team members, Aksh Garg and Siddharth Sharma, reposted a netizen's query and apologized on Monday, while claiming that their role was to promote the model on Medium and X (formerly Twitter), and that they had been unable to contact the member who wrote the code for the project.

According to their responses, they had looked at recent papers to validate the novelty of the work but were not informed of, or aware of, any of the work by the Open Lab for Big Model Base, which was founded by the Natural Language Processing Lab at Tsinghua University and ModelBest. They noted that they had taken down all references to Llama3-V out of respect for the original work.

In response, Liu Zhiyuan, chief scientist at ModelBest, spoke out on the Chinese social media platform Zhihu, saying that the Llama3-V team failed to comply with open-source protocols for respecting and honoring the achievements of previous researchers, thus seriously undermining the cornerstone of open-source sharing.

According to a screenshot leaked online, Li Dahai, CEO of ModelBest, also made a post on his WeChat Moments, saying that the two models had been verified to show a high degree of similarity in their answers, even making the same errors, and that some of the relevant data had not yet been released to the public.

He said the team hopes that their work will receive more attention and recognition, but not in this way. He also called for an open, cooperative and trusting community environment.

Christopher Manning, director of the Stanford Artificial Intelligence Laboratory, also responded to Garg's explanation on Sunday, commenting "How not to own your mistakes!" on X.

As the incident became a trending topic on Sina Weibo, Chinese netizens commented that academic research should be factual, but that the incident also shows that China's technology development is progressing.

Global Times

TikTok to introduce a new feature that can clone your voice with AI in just 10 seconds
The use of AI is certainly the hottest topic in the tech industry, and every major and minor player in the industry is using AI in some way. Tools like ChatGPT can help you do a wide range of tasks and even generate images. The other big thing is voice cloning. OpenAI recently introduced a voice engine that can generate a clone of your voice from just 15 seconds of audio, and there is no shortage of voice cloning tools on the web that can do the same. The newest tech giant that is going to use AI to clone your voice is TikTok.

We all know TikTok: short videos with filters, effects and all kinds of other things. TikTok has found a way to use voice cloning AI in its app. The feature does not seem to have a proper name yet; the app just references it as "Create your voice with AI" and "TikTok Voice Library". In the latest version of TikTok I came across some strings which indicate that TikTok is working on it. I was also able to access the initial UI that introduces the feature, and to see the terms and conditions of the "TikTok Voice Library", which users have to accept in order to use the feature. Here are the screenshots from the app.

As you can see in the screenshot above, this is the initial screen a user will see the first time they access the feature. TikTok claims that it can create an AI version of your voice in just 10 seconds. The generated AI voice clone can be used with text-to-speech in TikTok videos. The screen also outlines how the process works: you record yourself speaking, and TikTok processes the recording and uses information about your voice to generate your AI voice. As for privacy, your AI voice will stay private and you can delete it at any time. Tapping the "Continue" button brings up the "TikTok Voice Library Terms" screen, which users should definitely read; you can see it in the screenshots as well.

How it will work

After agreeing to the terms and conditions, I was shown a screen where TikTok displays some text and the user has to press the record button while reading it. Unfortunately, I did not see any text, probably because the feature is not fully ready or the backend it fetches the text from is not live yet. Manually pressing the record button and saying random things also shows an error, so for now it is not possible to provide a sample voice generated with the feature or to see how it compares to other voice cloning competitors. Once it starts working, it will process your recorded voice and generate an AI version of it. Here is a screenshot of that screen.

My guess is that whenever the feature goes live, users will have to clone their voice only once, and the saved AI voice can then be used through text-to-speech to add a voice to their videos. You just have to type the words; the choice is yours :p
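For a concrete picture of that enroll-once, reuse-many-times flow, here is a minimal Python sketch. It is purely illustrative: the function names, the VoiceProfile structure and the placeholder "feature extraction" are my own assumptions for the sake of the example, not TikTok's actual API or model.

```python
# Hypothetical sketch of the flow described above: record once, reuse via text-to-speech.
# Nothing here is a TikTok API; the helpers are placeholder stand-ins.

import hashlib
from dataclasses import dataclass


@dataclass
class VoiceProfile:
    """Stands in for the private 'AI version of your voice' the app would store."""
    user_id: str
    voiceprint: str  # placeholder for features extracted from the ~10-second recording


def enroll_voice(user_id: str, recording: bytes) -> VoiceProfile:
    """One-time step: turn the short prompt recording into a reusable profile."""
    # Placeholder "feature extraction"; a real system would compute a speaker embedding.
    voiceprint = hashlib.sha256(recording).hexdigest()
    return VoiceProfile(user_id=user_id, voiceprint=voiceprint)


def synthesize(profile: VoiceProfile, text: str) -> bytes:
    """Repeatable step: turn typed text into audio in the cloned voice (stubbed)."""
    # A real implementation would run text-to-speech conditioned on the voiceprint.
    return f"[audio in {profile.user_id}'s voice: {text}]".encode()


def delete_profile(store: dict, user_id: str) -> None:
    """The post notes the clone stays private and can be deleted at any time."""
    store.pop(user_id, None)


if __name__ == "__main__":
    store: dict[str, VoiceProfile] = {}
    profile = enroll_voice("creator_123", b"ten seconds of recorded speech")
    store[profile.user_id] = profile
    print(synthesize(profile, "Thanks for watching!"))  # typed text, cloned voice
    delete_profile(store, profile.user_id)
```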
Ukrainian Presidential Office: Russia's attacks on multiple locations in Ukraine have killed 36 people
The Ukrainian presidential office said on July 8 local time that Russia's large-scale attacks on many parts of Ukraine had killed 36 people and injured 140 others. According to the Ukrainian State Emergency Service, a total of 619 rescue workers and 132 pieces of equipment took part in rescue work across Ukraine that day. Ukrainian President Zelensky said on social media on the 8th that Russia launched more than 40 missiles of various types at Ukraine that day. Residential buildings and infrastructure in many Ukrainian cities were damaged to varying degrees in the attacks, and a children's hospital was destroyed. Rescue departments are currently conducting emergency rescue operations at the scene. The Russian Ministry of Defense issued a statement on the 8th local time saying that Ukrainian officials' claim that Russia used missiles to attack Ukrainian civilian facilities was untrue, and that the damage suffered by Kiev was caused by falling missiles launched by the city's own air defense system.
Exclusive: India's Paytm gets government panel nod to invest in payments arm, sources say
NEW DELHI, July 9 (Reuters) - India's beleaguered Paytm (PAYT.NS) has secured approval from a government panel that oversees investments linked to China to invest 500 million rupees ($6 million) in a key subsidiary, three sources with direct knowledge of the matter said. The approval, which still has to be vetted by the finance ministry, will remove the main stumbling block to the unit, Paytm Payment Services, resuming normal business operations. Paytm Payment Services is one of the biggest remaining parts of the fintech firm's business, accounting for a quarter of consolidated revenue in the financial year ended March 2023. A separate unit, Paytm Payments Bank, was wound down this year by order of the central bank due to persistent compliance issues, triggering a meltdown in Paytm's stock. The government panel had earlier held back approval due to concerns about the 9.88% stake in Paytm held by China's Ant Group; India has intensified scrutiny of Chinese businesses since a 2020 border clash between the two countries. All in all, Paytm has been waiting for the nod from the government panel for about two years; without it, it would also have had to wind down its payment services business, which was forbidden from taking on new customers in March 2023. Once the approval has been formalised, it will be able to seek a so-called "payment aggregator" licence from the Reserve Bank of India. The sources, two of whom are government sources, declined to be identified as the decision has not been formally announced. India's foreign, home, finance and industries ministries, whose representatives sit on the panel, did not reply to emails seeking comment. A Paytm spokesperson said the company does not comment on market speculation. "We will continue to make disclosures in compliance with our obligations under the SEBI Regulations, and will inform the exchanges when there is any new material information to share," the spokesperson said.
China's generative AI patents are far ahead of the US!
The World Intellectual Property Organization (WIPO) recently said that China filed 38,000 generative AI patent applications from 2014 to 2023, while the United States filed 6,276 of the roughly 50,000 applications filed by all countries. Of the 50,000 applications, 25 percent were filed last year. The top five inventor regions are: China (38,210 inventions), the United States (6,276 inventions), the Republic of Korea (4,155 inventions), Japan (3,409 inventions) and India (1,350 inventions).
OpenAI's internal AI details stolen in 2023 breach, NYT reports
July 4 (Reuters) - A hacker gained access to the internal messaging systems at OpenAI last year and stole details about the design of the company's artificial intelligence technologies, the New York Times reported on Thursday. The hacker lifted details from discussions in an online forum where employees talked about OpenAI's latest technologies, the report said, citing two people familiar with the incident. However, the hacker did not get into the systems where OpenAI, the firm behind chatbot sensation ChatGPT, houses and builds its AI, the report added. OpenAI executives informed both employees, at an all-hands meeting in April last year, and the company's board about the breach, according to the report, but executives decided not to share the news publicly as no information about customers or partners had been stolen. OpenAI executives did not consider the incident a national security threat, believing the hacker was a private individual with no known ties to a foreign government, the report said. The San Francisco-based company did not inform federal law enforcement agencies about the breach, it added. OpenAI said in May that it had disrupted five covert influence operations that sought to use its AI models for "deceptive activity" across the internet, the latest incident to stir safety concerns about the potential misuse of the technology. The Biden administration was poised to open a new front in its effort to safeguard U.S. AI technology from China and Russia with preliminary plans to place guardrails around the most advanced AI models, including ChatGPT, Reuters earlier reported, citing sources.