
Xinjiang scientists discover plant with potential to survive on Mars

In a groundbreaking discovery, researchers from the Xinjiang Institute of Ecology and Geography of the Chinese Academy of Sciences have found a desert moss species, known as Syntrichia caninervis, that has the potential to survive in the extreme conditions on Mars.

The Global Times learned from the institute that during the third Xinjiang scientific expedition, the research team focused on studying the desert moss and found that it not only challenges people's understanding of the tolerance of organisms in extreme environments, but also demonstrates the ability to survive and regenerate under simulated Martian conditions.

Supported by the Xinjiang scientific expedition project, researchers Li Xiaoshuang, Zhang Daoyuan and Zhang Yuanming from the Xinjiang Institute of Ecology and Geography and Kuang Tingyun, an academician from the Chinese Academy of Sciences, concentrated on studying the "pioneer species" Syntrichia caninervis in an extreme desert environment, according to the institute in an article it sent to the Global Times on Sunday.

Through scientific experiments, the researchers systematically demonstrated that the moss can tolerate more than 98 percent cellular dehydration, survive temperatures as low as -196 C, withstand over 5,000 Gy of gamma radiation, and then quickly recover, turn green, and resume growth, showcasing extraordinary resilience.

These findings push the boundaries of human knowledge on the tolerance of organisms in extreme environments.

Furthermore, the research revealed that under simulated Martian conditions with multiple adversities, Syntrichia caninervis can still survive and regenerate when returned to suitable conditions. This marks the first report of higher plants surviving under simulated Martian conditions.

The research team also identified unique characteristics of Syntrichia caninervis. Its overlapping leaves reduce water evaporation, while the white tips of the leaves reflect intense sunlight. Additionally, the innovative "top-down" water absorption mode of the white tips efficiently collects and transports water from the atmosphere. Moreover, the moss can enter a selective metabolic dormancy state in adverse environments and rapidly provide the energy needed for recovery when its surrounding environment improves.

Based on the extreme environmental tolerance of Syntrichia caninervis, the research team plans to conduct experiments on spacecraft to monitor the survival response and adaptation capabilities of the species under microgravity and various ionizing radiation adversities. They aim to unravel the physiological and molecular basis of the moss and explore the key life tolerance regulatory mechanisms, laying the foundation for future applications of Syntrichia caninervis in outer space colonization.

Hollywood's strongest supporting actor has arrived: is AI close to disrupting the movie "dream factory"?
As a hub of the US and indeed the global film industry, Hollywood is home to a large number of veteran film and television production companies, including Universal Pictures, Warner Bros., Paramount Pictures, Disney Pictures and MGM, and in recent years new streaming players such as Netflix have also moved in. As a new generation of technology represented by generative AI sweeps the world, the movie "dream factory" is experiencing a transformative moment.

In early May last year, the US film and television industry launched a series of strikes that lasted for five months. The two labor disputes, led by the Writers Guild and the Screen Actors Guild, caused the worst industry disruption since the 2020 pandemic, forcing many film projects and TV shows to halt or delay production. The strikes were costly: Kevin Klowden, chief global strategist at the Milken Institute think tank, estimated they cost the US economy more than $5 billion, affecting not only film and television production companies but also surrounding service industries such as catering, trucking and dry cleaning. One of the main conflicts between labor and management was that many actors and screenwriters feared "unemployment" caused by the "invasion" of artificial intelligence.

Luo Chenya has worked in the film and television industry for more than 10 years, as a scriptwriter, documentary photographer and assistant director. She told a Yicai reporter that after ChatGPT became popular, she tried using chatbots to assist with script creation. "I can talk to the AI about my ideas, and it will help analyze and refine them, and even make some suggestions that I find quite effective. But at the execution level, turning an idea into a very specific scene or character action, it doesn't really help me."
Luo Chenya said that AI still needs more training and evolution in script writing, but its ability to generate images is amazing. "AI can directly generate images, which can save labor to a great extent and may even replace photographers in the future. In post-production, AI can also beautify images and fix flaws."

A battleground

Earlier this year, OpenAI released its text-to-video model Sora on its website. Sora can create videos up to a minute long, generating complex scenes with multiple characters, specific types of movement, and precise details of subject and background. In addition to generating video from text, the model can also generate video from still images, animating the image content precisely. "Text-to-video models can quickly produce high-quality video content, greatly improving production efficiency, and generative AI helps improve the analysis of user preferences and personalized recommendations, enhancing the appeal of content. These technologies will disrupt traditional video production and content distribution models, and media companies will need to adapt and change their operating models," Wang Haoyu, CEO of Mairui Asset Management, said in an interview with a Yicai reporter. For this reason, Hollywood giants have long been placing big bets and stepping up their plans.
Stanford AI project team apologizes for plagiarizing Chinese model
An artificial intelligence (AI) team at Stanford University has apologized for plagiarizing a large language model (LLM) from a Chinese AI company, an incident that became a trending topic on Chinese social media platforms and sparked concern among netizens on Tuesday.

"We apologize to the authors of MiniCPM [the AI model developed by a Chinese company] for any inconvenience that we caused for not doing the full diligence to verify and peer review the novelty of this work," the developers of the multimodal AI model Llama3-V wrote in a post on the social platform X.

The apology came after the Stanford team announced Llama3-V on May 29, claiming it offered performance comparable to GPT-4V and other models and could be trained for less than $500. According to media reports, the announcement, published by one of the team members, quickly received more than 300,000 views.

However, some X users found and listed evidence that the Llama3-V project code had been reformatted from, and was similar to, MiniCPM-Llama3-V 2.5, an LLM developed by the Chinese technology company ModelBest and Tsinghua University.

Two team members, Aksh Garg and Siddharth Sharma, reposted a netizen's query and apologized on Monday, while claiming that their role was to promote the model on Medium and X (formerly Twitter), and that they had been unable to contact the member who wrote the code for the project. They said they had looked at recent papers to validate the novelty of the work but had not been informed of, or aware of, any of the work by the Open Lab for Big Model Base, which was founded by the Natural Language Processing Lab at Tsinghua University and ModelBest. They noted that they have taken down all references to Llama3-V out of respect for the original work.
In response, Liu Zhiyuan, chief scientist at ModelBest, spoke out on the Chinese social media platform Zhihu, saying that the Llama3-V team failed to comply with open-source protocols for respecting and honoring the achievements of previous researchers, seriously undermining the cornerstone of open-source sharing.

According to a screenshot leaked online, Li Dahai, CEO of ModelBest, also made a post on his WeChat Moments, saying that the two models had been verified to be highly similar in their answers, down to the same errors, even though some of the relevant data had not yet been released to the public. He said the team hopes their work will receive more attention and recognition, but not in this way, and he called for an open, cooperative and trusting community environment.

Christopher Manning, director of the Stanford Artificial Intelligence Laboratory, also responded to Garg's explanation on Sunday, commenting "How not to own your mistakes!" on X.

As the incident became a trending topic on Sina Weibo, Chinese netizens commented that academic research should be factual, but added that the incident also shows that technology development in China is progressing. Global Times
TikTok to introduce a new feature that can clone your voice with AI in just 10 seconds
AI is certainly the hottest topic in the tech industry, and every major and minor player is using it in some way. Tools like ChatGPT can help you do a wide range of tasks and even help you generate images. Then there is voice cloning. OpenAI recently introduced a voice engine that can generate a clone of your voice from just 15 seconds of audio, and there is no shortage of voice cloning tools on the web that can do the same. The newest tech giant planning to use AI to clone your voice is TikTok.

We all know TikTok: short videos with filters, effects and all kinds of other things. Now TikTok has found a way to use voice cloning AI in its app. The feature does not seem to have a proper name yet; the app just references it as "Create your voice with AI" and "TikTok Voice Library". In the latest version of TikTok I came across some strings which indicate that TikTok is working on it. I was also able to access the initial UI which introduces the feature, and to see the terms and conditions of the "TikTok Voice Library", which users have to accept in order to use the feature. Here are the screenshots from the app.

As you can see in the screenshot above, this is the initial screen a user will see the first time they access this feature. TikTok claims that it can create an AI version of your voice in just 10 seconds. The generated AI voice clone can be used with text-to-speech in TikTok videos. The screen also outlines how it will work: you record yourself speaking, and TikTok processes the recording and uses information about your voice to generate your AI voice. As for privacy, your AI voice will stay private and you can delete it at any time.
Tapping the "Continue" button brings up the "TikTok Voice Library Terms" screen, which a user should definitely read; you can see and read it here as well.

How it will work

After agreeing to the terms and conditions, I was shown a screen where TikTok displays some text and the user has to press the record button while reading it. Unfortunately, I did not see any text. This is probably because the feature is not fully ready, or because the backend it fetches the text from is not live yet. Manually pressing the record button and saying random things also shows an error, so it's not possible to provide a sample voice generated with the feature or to see how it compares to other voice cloning competitors. If it starts working someday, it will process your recorded voice and generate an AI version of it. Here is a screenshot of that screen.

My guess is that once the feature starts working, users will only have to clone their voice one time, and the saved AI voice can then be used through text-to-speech to add voice to their videos. You just have to type the words; the choice is yours :p
Morning Bid: Eyes switch to inflation vs elections, Powell up
A look at the day ahead in U.S. and global markets from Mike Dolan

After an intense month focused on election risk around the world, markets have quickly switched back to the more prosaic matter of the cost of money - and whether disinflation is resuming to the extent that it allows borrowing costs to finally fall.

Thursday's U.S. consumer price update for June is the key moment of the week for many investors - with the headline rate expected to have fallen two tenths of a percentage point to 3.1%, but with the 'core' rate still stuck at 3.4%.

With Federal Reserve Chair Jerome Powell starting his two-part semi-annual congressional testimony later on Tuesday, the consensus CPI forecast probably reflects what the central bank thinks of the situation right now - encouraging, but not there yet. But with the U.S. unemployment rate now back above 4.0% for the first time since late 2021, markets may look for a more nuanced approach from the Fed chair, one increasingly wary of a sudden weakening of the labor market as real-time quarterly GDP estimates ebb again to about 1.5%.

There were some other reasons for Fed optimism in the lead-up to the testimony. The path U.S. inflation is expected to follow over the coming years generally softened in June, amid retreating projections of price increases for a wide array of consumer goods and services, a New York Fed survey showed on Monday. Inflation a year from now was seen at 3% as of June - down from the expected rise of 3.2% in May - and five-year expectations fell to 2.8% from 3%.

Crude oil prices are better behaved this week, too, falling more than 3% from the 10-week highs hit late last week and halving the annual oil price gain to 10%. The losses on Tuesday came after a hurricane that hit a key U.S. oil-producing hub in Texas caused less damage than many in markets had expected - easing concerns over supply disruption.

Before Powell starts speaking later, there will also be an update on U.S. small business confidence for last month.
Doctors visited the White House 8 times? White House says Biden has not received treatment for Parkinson's disease
White House spokeswoman Karine Jean-Pierre on the 8th denied U.S. media reports, saying President Joe Biden has not received treatment for Parkinson's disease. Biden's poor performance in the first televised debate of the 2024 presidential election against Republican opponent Donald Trump on June 27 had triggered discussion of his physical condition, and The New York Times reported that a doctor specializing in the treatment of Parkinson's disease had "visited" the White House eight times between August last year and March this year.

Facing media questions about Biden's health, Jean-Pierre posed and answered the questions herself at a regular White House press conference on the 8th: "Has the president received treatment for Parkinson's disease? No. Is he currently receiving treatment for Parkinson's disease? No, he is not. Is he taking medication for Parkinson's disease? No."

Jean-Pierre said Biden had seen a neurologist three times, each in connection with his annual physical examination. She also produced the report issued by the doctor after Biden's most recent physical examination in February this year, which said that "an extremely detailed neurological examination was once again reassuring" because no symptoms consistent with stroke, multiple sclerosis or Parkinson's disease were found.

The doctor mentioned by The New York Times is Kevin Cannard, a neurology and movement disorder specialist at the Walter Reed National Military Medical Center in Maryland and an authority on Parkinson's disease. Jean-Pierre suggested that the doctor might have come to treat military personnel on duty at the White House.