
China proposes to establish BCI committee to strive for domestic innovation

China is mulling over establishing a Brain-Computer Interface (BCI) standardization technical committee under its Ministry of Industry and Information Technology (MIIT), aiming to guide enterprises to enhance industrial standards and boost domestic innovation.

The proposed committee, revealed on Monday, will draft a BCI standards roadmap covering the development of the entire industry, as well as standards for the research and development of the key technologies involved, according to the MIIT.

China has taken strides in developing the BCI industry over the years, providing not only abundant policy support but also generous financial investment, Li Wenyu, secretary of the Brain-Computer Interface Industrial Alliance, told the Global Times.

From 2023 to 2024, both the central and local governments successively issued policies to support the industry's development.

In 2023, the MIIT rolled out a plan to select and promote a group of units with strong innovation capabilities, aiming to achieve breakthroughs in landmark technological products and accelerate the application of new technologies and products. The Beijing municipal government also released an action plan this year to accelerate the industry in the capital over 2024-2030.

In 2023, there were no fewer than 20 publicly disclosed financing events for BCI companies in China, with a total disclosed amount exceeding 150 million yuan ($20.6 million), Li said. “The strong support from the government has injected momentum into industrial innovation.”

China's BCI industry started later than those of Western countries such as the US, which has led to gaps in technological breakthroughs, industrial synergy, and talent development, according to Li.

To further close these gaps and remove bottlenecks in BCI industrial development, Li suggested that the industry explore a variety of technological approaches suited to different application scenarios, and that BCI-related ethics frameworks be improved so that more medical institutions are encouraged to initiate BCI clinical trials.

Additionally, he highlighted that developing standards is one way to raise the overall level and competitiveness of the industry chain, which could in turn empower domestic BCI innovation.

While China's BCI technology generally lags behind leading countries like the US in terms of system integration and clinical application, this has not hindered the release of Neucyber, which stands as China's first "high-performance invasive BCI."

Neucyber, an invasive implanted BCI technology, was independently developed by Chinese scientists from the Chinese Institute for Brain Research in Beijing.

Li Yuan, Business Development Director of Beijing Xinzhida Neurotechnology, the company that co-developed the BCI system, told the Global Times that the breakthrough of Neucyber could not have been achieved without the institute's efforts to pool high-quality resources from various teams in Beijing.

The institute brought together experienced specialists from fields including electrodes, chips, algorithms, software, and materials, Li Yuan said.

Shrugging off outside attention on China's competition with the US in this field, Li Yuan said her team does not want to speculate or talk too much, but aims to build, step by step, a set of products that are genuinely useful in real applications.

Li Wenyu also attributed the emergence of Neucyber to the atmosphere of independent research and the well-established talent-nurturing mechanism at the Chinese Institute for Brain Research.

He said that to advance China's BCI industry, it is necessary not only to cultivate domestic talent but also to attract foreign talent to enhance China's research and innovation capabilities.

The proposed plan for establishing the BCI standardization technical committee under the MIIT will solicit public opinions until July 30, 2024.

Autonomous driving is not so hot
Looking at the two major markets, the United States and China, the autonomous driving industry has hit a low ebb in recent years. Last year, for example, Cruise, one of the twin stars among Silicon Valley autonomous driving companies and once valued at more than $30 billion, stumbled badly: its robotaxi (driverless taxi) operating permit was revoked and its Origin driverless model was halted. However, as a new track where the digital economy and the real economy are deeply integrated, autonomous driving remains a question that must be answered. On the one hand, it will accelerate the commercialization and industrialization of the technology and has become an important part of competition among major powers; on the other hand, it will promote industrial transformation and upgrading by improving the mass travel experience, providing a new engine for urban development and injecting new vitality into the urban economy.
Morning Bid: Eyes switch to inflation vs elections, Powell up
A look at the day ahead in U.S. and global markets from Mike Dolan

After an intense month focused on election risk around the world, markets quickly switched back to the more prosaic matter of the cost of money - and whether disinflation is resuming to the extent it allows borrowing costs to finally fall.

Thursday's U.S. consumer price update for June is the key moment of the week for many investors - with the headline rate expected to have fallen two tenths of a percentage point to 3.1% but with 'core' rates still stuck at 3.4%.

With Federal Reserve chair Jerome Powell starting his two-pronged semi-annual congressional testimony later on Tuesday, the consensus CPI forecast probably reflects what the central bank thinks of the situation right now - encouraging but not there yet.

But as the U.S. unemployment rate is now back above 4.0% for the first time since late 2021, markets may look for a more nuanced approach from the Fed chair that sees it increasingly wary of a sudden weakening of the labor market as real-time quarterly GDP estimates ebb again to about 1.5%.

There were some other reasons for Fed optimism in the lead-up to the testimony. The path U.S. inflation is expected to follow over coming years generally softened in June, amid retreating projections of price increases for a wide array of consumer goods and services, a New York Fed survey showed on Monday. Inflation a year from now was seen at 3% as of June - down from the expected rise of 3.2% in May - and five-year expectations fell to 2.8% from 3%.

Crude oil prices are better behaved this week, too, falling more than 3% from the 10-week highs hit late last week and halving the annual oil price gain to 10%. The losses on Tuesday came after a hurricane that hit a key U.S. oil-producing hub in Texas caused less damage than many in markets had expected - easing concerns over supply disruption.

Before Powell starts speaking later, there will also be an update on U.S. small business confidence for last month.
Stanford AI project team apologizes for plagiarizing Chinese model
An artificial intelligence (AI) team at Stanford University has apologized for plagiarizing a large language model (LLM) from a Chinese AI company, an incident that became a trending topic on Chinese social media platforms and sparked concern among netizens on Tuesday.

"We apologize to the authors of MiniCPM [the AI model developed by a Chinese company] for any inconvenience that we caused for not doing the full diligence to verify and peer review the novelty of this work," the developers of the multimodal AI model Llama3-V wrote in a post on the social platform X.

The apology came after the Stanford team announced Llama3-V on May 29, claiming it offered performance comparable to GPT-4V and other models and could be trained for less than $500. According to media reports, the announcement published by one of the team members quickly received more than 300,000 views.

However, some netizens on X found and listed evidence that the Llama3-V project code was a reformatted version of, and highly similar to, MiniCPM-Llama3-V 2.5, an LLM developed by the Chinese technology company ModelBest and Tsinghua University.

Two team members, Aksh Garg and Siddharth Sharma, reposted a netizen's query and apologized on Monday, while claiming that their role had been to promote the model on Medium and X (formerly Twitter) and that they had been unable to contact the member who wrote the code for the project. They said they had looked at recent papers to validate the novelty of the work but had not been informed of, or aware of, any of the work by the Open Lab for Big Model Base, which was founded by the Natural Language Processing Lab at Tsinghua University and ModelBest. They noted that they had taken down all references to Llama3-V out of respect for the original work.

In response, Liu Zhiyuan, chief scientist at ModelBest, spoke out on the Chinese social media platform Zhihu, saying that the Llama3-V team had failed to comply with open-source protocols for respecting and honoring the achievements of previous researchers, seriously undermining the cornerstone of open-source sharing.

According to a screenshot leaked online, Li Dahai, CEO of ModelBest, also posted on his WeChat Moments, saying that the two models had been verified to be highly similar in their answers, down to the same errors, and that some of the relevant data had not yet been released to the public. He said the team hopes their work will receive more attention and recognition, but not in this way, and he called for an open, cooperative and trusting community environment.

Christopher Manning, director of the Stanford Artificial Intelligence Laboratory, also responded to Garg's explanation on Sunday, commenting "How not to own your mistakes!" on X.

As the incident became a trending topic on Sina Weibo, Chinese netizens commented that academic research should be factual, but that the incident also shows that technology development in China is progressing. Global Times
"Corrupt Politicians GPT" "Fiscal Bill GPT", Kenyan protesters use AI to "protest"
During the past few weeks of anti-government activity in Kenya, protesters have creatively put AI tools to work in service of their protests. According to a July 5 report by the US "Flag" News Agency, protests in Kenya triggered by the Finance Bill 2024 are still continuing. Over the past few weeks, Kenyan protesters, mainly young people, have developed a series of AI tools to assist the anti-government movement, and the Kenyan government has expressed concern about the risks of AI tools being used in protests.

Kelvin Onkundi, a software engineer in Kenya, developed "Finance Bill GPT", which operates much like ChatGPT: it takes questions about the finance bill and generates answers (a rough sketch of how such a tool might work follows this article). Martin Siele, a reporter for the "Flag" News Agency, said: "'Finance Bill GPT' can turn specialist legislative terminology into easy-to-understand information for protesters, helping Kenyans understand the potential impact of the finance bill." Another software engineer, Marion Kavengi, developed "SHIF GPT" to give Kenyans information about the upcoming Social Health Insurance Fund (SHIF).

In addition to AI tools designed to help people understand controversial policies, protesters have developed "Corrupt Politicians GPT" to support the demonstrations. After a politician's name is entered, the platform generates a chronological list of corruption scandals involving that politician. Its developer, BenwithSon, wrote on the social platform X on June 28: "'Corrupt Politicians GPT' lets people search for any scandal related to any politician. I have seen some leaders stand at the forefront of the political arena while being corrupt behind the scenes."

Kenya's Prime Cabinet Secretary and Foreign Minister Musalia Mudavadi issued a communiqué to ambassadors in Nairobi on July 2 local time on the protests and the government's response, expressing concern about the use of AI and disinformation in the protests. Mudavadi said: "AI technology is being used by people with ulterior motives, which will fill the global information system with false narratives." The Kenya Times reported on June 30 that AI technology enables people to press the government for greater transparency and accountability, and that its role in Kenyan political activity is becoming increasingly prominent.

Siele believes AI is reshaping African political behavior in many ways. AI is a new tool for both governments and opposition movements in Africa, but Kenya is one of the African countries with the most developers, and its young protesters are particularly adept at using AI technology against the government.

The Finance Bill 2024, passed by the Kenyan National Assembly on June 25, provided for additional taxes to service the interest on the country's heavy sovereign debt, triggering large-scale demonstrations. Even after President Ruto announced the withdrawal of the tax-raising bill on the evening of June 26, demonstrations continued in many parts of Kenya. According to Reuters on July 3, Kenyan anti-government protesters are readjusting their tactics to prevent the protests from descending into violence.
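The report describes "Finance Bill GPT" only at a high level: it takes a question about the bill and returns a plain-language answer. As a rough illustration only, and not the developers' actual code (which is not public here), a minimal question-answering wrapper around a large-language-model API might look like the sketch below; the file name finance_bill_2024.txt, the model name, and the use of the OpenAI Python client are all assumptions.

```python
# Hypothetical sketch of a "Finance Bill GPT"-style helper, not the real tool.
# Assumptions: the bill text is saved locally as finance_bill_2024.txt, the
# `openai` Python package (v1+) is installed, and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment


def load_bill(path: str = "finance_bill_2024.txt") -> str:
    """Read the bill text that answers should be grounded in."""
    with open(path, encoding="utf-8") as f:
        return f.read()


def ask_bill(question: str, bill_text: str) -> str:
    """Ask a question about the bill and get a plain-language answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name, an assumption
        messages=[
            {
                "role": "system",
                "content": (
                    "You explain the Kenya Finance Bill 2024 in simple, "
                    "non-technical language. Answer only from this text:\n\n"
                    + bill_text
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    bill = load_bill()
    print(ask_bill("How would this bill affect the price of bread?", bill))
```

A production tool would more likely split the bill into sections and retrieve only the relevant passages for each question rather than sending the whole text every time, but the basic flow, a question in and a plain-language answer out, is the same.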
ChatGPT: Explained to Kids (How ChatGPT Works)
Chat means chat, and GPT is short for Generative Pre-trained Transformer. "Generative" means it creates or produces something new; "Pre-trained" means the model has already learned from a huge amount of text; and "Transformer" is the name of the kind of artificial-intelligence model it is built on. Don't worry too much about the T; focus on the G and the P. We mainly use the Generative part to produce all kinds of content, and the reason it can do that lies in the P: only by learning from a large amount of content can it generate new content (a tiny toy example at the end of this piece shows this "learn, then generate" idea in action).

This kind of learning naturally has limits. Think of it this way: you have learned a lot since you were small, but can you guarantee that every answer you give is completely correct? Almost certainly not, and ChatGPT is no exception. First, knowledge is limited, and no model can master all of it. Second, there is accuracy: how do you make sure everything learned is correct and error-free? Third, there is complexity: the same concept appears differently in different contexts, which is hard even for humans to handle perfectly, let alone AI. So when we use ChatGPT, we need to check the accuracy of what it produces. Most of the time it is fine, but for anything important you should review the output yourself.

ChatGPT has since been upgraded: GPT-4 answers more accurately, and a newer "Turbo" version followed it. The current ChatGPT is what is called a multimodal large model. Unlike the first generation, it can accept not only text but also other kinds of input, such as images, documents and videos, and its output is more varied too: besides text, it can produce images, files and so on.
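To make the "learn from text, then generate" idea concrete, here is a deliberately tiny toy sketch in Python. It is not how ChatGPT actually works (ChatGPT uses a neural-network Transformer trained on vastly more text); this toy simply counts which word tends to follow which in a few practice sentences ("pre-training"), then strings likely next words together ("generation").

```python
# Toy "pre-trained, then generative" text model: a bigram word counter.
# Real ChatGPT-style models use neural networks and enormous training sets;
# this is only meant to illustrate the two steps named by P and G.
import random
from collections import defaultdict

TRAINING_TEXT = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
)


def pretrain(text: str) -> dict:
    """'Pre-training': record which words follow which in the training text."""
    next_words = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        next_words[current].append(following)
    return next_words


def generate(model: dict, start: str = "the", length: int = 8) -> str:
    """'Generation': repeatedly pick one of the words seen after the current word."""
    word, output = start, [start]
    for _ in range(length):
        choices = model.get(word)
        if not choices:  # nothing ever followed this word, so stop
            break
        word = random.choice(choices)
        output.append(word)
    return " ".join(output)


model = pretrain(TRAINING_TEXT)
print(generate(model))  # e.g. "the dog sat on the mat . the cat"
```

Because the toy model has only ever seen three sentences, it can only recombine them, which is a miniature version of the limitation above: a model can only work with what it has learned, so its answers still need checking.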