
Insurers fret over militant attacks, AI hacks at Paris Olympics

LONDON, July 5 (Reuters) - Insurers are nervous that militant attacks or AI-generated fake images could derail the Paris Olympics, risking event cancellations and millions of dollars in claims.

Insurers faced losses after the 2020 Tokyo Olympics were postponed for a year due to the COVID-19 pandemic.

Since then, wars in Ukraine and Gaza and a spate of elections this year, including in France, have driven up fears of politically motivated violence at high-profile global events.

The Olympics take place in Paris from July 26-Aug 11 and the Paralympics from Aug 28-Sept 8.

German insurer Allianz (ALVG.DE) is the insurance partner for the Games. Other insurers, such as the Lloyd's of London (SOLYD.UL) market, are also providing cover.

"We are all aware of the geopolitical situation the world is in," said Eike Buergel, head of Allianz's Olympic and Paralympic programme.

"We are convinced that the IOC (International Olympic Committee), Paris 2024 and the national organising committees, together with the French authorities, are taking the right measures when it comes to challenges on the ground."

The largest password leak in history exposes nearly 10 billion credentials
The largest collection of stolen passwords ever has been leaked to a notorious crime marketplace, according to cybersecurity researchers at Cybernews. The leak, dubbed RockYou2024 by its original poster "ObamaCare," holds a file containing nearly 10 billion unique plaintext passwords. Allegedly gathered from a series of data breaches and hacks accumulated over several years, the passwords were posted on July 4th and hailed as the most extensive collection of stolen and leaked credentials ever seen on the forum.

"In its essence, the RockYou2024 leak is a compilation of real-world passwords used by individuals all over the world," the researchers told Cybernews. "Revealing that many passwords for threat actors substantially heightens the risk of credential stuffing attacks."

Credential stuffing attacks are among the most common methods criminals, ransomware affiliates, and state-sponsored hackers use to access services and systems. Threat actors could exploit the RockYou2024 password collection to conduct brute-force attacks against any unprotected system and "gain unauthorized access to various online accounts used by individuals whose passwords are included in the dataset," the research team said.

This could affect online services, cameras and hardware

This could affect various targets, from online services to internet-facing cameras and industrial hardware. "Moreover, combined with other leaked databases on hacker forums and marketplaces, which, for example, contain user email addresses and other credentials, RockYou2024 can contribute to a cascade of data breaches, financial frauds, and identity thefts," the team concluded.

However, despite the seriousness of the data leak, it is important to note that RockYou2024 is primarily a compilation of previous password leaks, estimated to contain entries from a total of 4,000 massive databases of stolen credentials, covering at least two decades.
This new file notably includes an earlier credentials database known as RockYou2021, which featured 8.4 billion passwords. RockYou2024 added approximately 1.5 billion passwords to the collection, spanning 2021 through 2024, which, though a massive figure, is only a fraction of the reported 9,948,575,739 passwords in the leak. Thus, users who have changed their passwords since 2021 may not have to panic about a potential breach of their information.

That said, the research team at Cybernews stressed the importance of maintaining data security. In response to the leak, they recommend immediately changing the passwords for any accounts associated with the leaked credentials, ensuring each password is strong and unique and not reused across different platforms. Additionally, they advised enabling multi-factor authentication (MFA), which requires an extra form of verification beyond the password, wherever possible, to strengthen cyber security. Lastly, tech users should utilize password manager software, which securely generates and stores complex passwords, mitigating the risk of password reuse across multiple accounts.
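Two of the defences the researchers recommend, randomly generated unique passwords and checking whether a password already appears in a known leak, can be sketched in a few lines of Python. The function names and the three-entry stand-in for the leaked dataset below are illustrative, not part of any real tool or of the RockYou2024 file itself:

```python
import hashlib
import secrets
import string

def generate_password(length=16):
    """Generate a strong random password using a cryptographically secure RNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Leaked-password lists are often distributed and compared as hashes,
# so plaintext passwords never need to be stored. This tiny set stands
# in for a real leak such as RockYou2024.
leaked_hashes = {
    hashlib.sha1(p.encode()).hexdigest()
    for p in ["123456", "password", "qwerty"]
}

def is_leaked(password):
    """Check whether a password's hash appears in the known-leak set."""
    return hashlib.sha1(password.encode()).hexdigest() in leaked_hashes

print(is_leaked("123456"))              # True: a classic leaked password
print(is_leaked(generate_password()))   # a fresh random password
```

Real breach-checking services use refinements of this idea (for example, sending only a hash prefix so the full password never leaves the device), but the core check is the same set-membership test.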
China proposes to establish BCI committee to strive for domestic innovation
China is mulling establishing a Brain-Computer Interface (BCI) standardization technical committee under its Ministry of Industry and Information Technology (MIIT), aiming to guide enterprises to enhance industrial standards and boost domestic innovation. The proposed committee, revealed by the MIIT on Monday, will work on composing a BCI standards roadmap for the development of the entire industry, as well as standards for the research and development of the key technologies involved, according to the MIIT.

China has taken strides in developing the BCI industry over the years, providing not only abundant policy support but also generous financial investment, Li Wenyu, secretary of the Brain-Computer Interface Industrial Alliance, told the Global Times. From 2023 to 2024, both the central and local governments successively issued relevant policies to support industrial development. The MIIT in 2023 rolled out a plan selecting and promoting a group of units with strong innovation capabilities to break through with landmark technological products and accelerate the application of new technologies and products. The Beijing local government also released an action plan this year to accelerate the industry in the capital (2024-2030).

In 2023, there were no fewer than 20 publicly disclosed financing events for BCI companies in China, with a total disclosed amount exceeding 150 million yuan ($20.6 million), Li said. "The strong support from the government has injected momentum into industrial innovation."

The fact that China's BCI industry started later than Western countries such as the US is a reality, leaving China with gaps in technological breakthroughs, industrial synergy, and talent development, according to Li.
To further close gaps and solve bottlenecks in BCI industrial development, Li suggested that the industry explore various technological approaches to suit different application scenarios, and encourage more BCI-equipped medical facilities to initiate clinical trials by improving the development of BCI-related ethics. Additionally, he highlighted that standards development is one way to enhance the overall level and competitiveness of the industry chain, which could, in turn, empower domestic BCI innovation.

While China's BCI technology generally lags behind leading countries like the US in terms of system integration and clinical application, this has not hindered the release of Neucyber, China's first "high-performance invasive BCI." Neucyber, an invasive implanted BCI technology, was independently developed by Chinese scientists from the Chinese Institute for Brain Research in Beijing. Li Yuan, Business Development Director of Beijing Xinzhida Neurotechnology, the company that co-developed the system, told the Global Times that the breakthrough of Neucyber could not have been achieved without the institute's efforts to gather superior resources from various teams in Beijing. A group of mature talents was assembled within the institute from fields including electrodes, chips, algorithms, software, and materials, Li Yuan said.

Shrugging off the outside world's focus on China's competition with the US in this regard, Li Yuan said her team does not want to speculate or overstate its ambitions, but strives to produce, step by step, a set of products that are useful in actual applications. Li Wenyu also attributed the emergence of Neucyber to the independent research atmosphere and the well-established talent nurturing mechanism at the Chinese Institute for Brain Research.
He said that to advance China's BCI industry, it is necessary not only to cultivate domestic talent but also to bring in foreign talent to enhance China's research and innovation capabilities. The proposed plan for establishing the BCI standardization technical committee under the MIIT will solicit public opinions until July 30, 2024.
ChatGPT: Explained to Kids (How ChatGPT Works)
Chat means chat, and GPT is the acronym for Generative Pre-trained Transformer. Generative means it creates or produces something new; Pre-trained means the model has learned in advance from a large amount of text; and Transformer is the name of the neural-network architecture the model is built on. Don't worry too much about the T; focus on the G and the P. We mainly use the Generative function to produce various kinds of content, but the reason it can do so lies in the P: only by first learning from a huge amount of content can it then produce new content.

This kind of learning naturally has limitations. For example, even if you have learned a great deal since childhood, can you guarantee that your answer to any question is completely correct? Almost certainly not. First, knowledge is limited; ChatGPT is no exception, as it is impossible to master all knowledge. Second, there is the accuracy of knowledge: how can we ensure all of it is accurate and error-free? Third, there is the complexity of knowledge: the same concept appears differently in different contexts, which is difficult even for humans to grasp perfectly, let alone AI. So when we use ChatGPT, we need to check the accuracy of its output. For everyday questions this is usually not a problem, but for critical issues you should manually review the answers.

ChatGPT has since been upgraded twice: GPT-4 answers more accurately, and more recently GPT-4 Turbo was released. The current ChatGPT is a so-called multimodal large model, which differs from the first generation in that it can accept not only text but also other types of input, such as images, documents, and videos. Its output is also more diverse: in addition to text, it can produce images or files.
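The "learn first, then generate" idea above can be shown with a toy sketch: count which word tends to follow which in a tiny text ("pre-training"), then produce new text by repeatedly predicting the most likely next word ("generation"). This is only an illustration; real GPT models use a neural Transformer trained on vast text corpora, not bigram counts, and the corpus and function names here are invented for the example:

```python
from collections import defaultdict, Counter

# A tiny "training corpus".
corpus = "the cat sat on the mat and the cat slept".split()

# "Pre-training": count which word follows which in the text.
next_word = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    next_word[a][b] += 1

def generate(start, length):
    """'Generation': repeatedly predict the most common next word."""
    words = [start]
    for _ in range(length - 1):
        followers = next_word.get(words[-1])
        if not followers:
            break  # no known continuation; stop early
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

print(generate("the", 4))  # the cat sat on
```

Even this toy shows why output needs checking: the model can only recombine patterns it has seen, so its answers are bounded by (and can inherit errors from) its training text.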
Stanford AI project team apologizes for plagiarizing Chinese model
An artificial intelligence (AI) team at Stanford University apologized for plagiarizing a large language model (LLM) from a Chinese AI company, a story that became a trending topic on Chinese social media platforms, where it sparked concern among netizens on Tuesday.

"We apologize to the authors of MiniCPM [the AI model developed by a Chinese company] for any inconvenience that we caused for not doing the full diligence to verify and peer review the novelty of this work," the developers of the multimodal AI model Llama3-V wrote in a post on the social platform X.

The apology came after the team from Stanford University announced Llama3-V on May 29, claiming it had performance comparable to GPT-4V and other models, with the capability to train for less than $500. According to media reports, the announcement published by one of the team members quickly received more than 300,000 views.

However, some netizens on X found and listed evidence of how the Llama3-V project code had been reformatted from, and was similar to, MiniCPM-Llama3-V 2.5, an LLM developed by the Chinese technology company ModelBest and Tsinghua University. Two team members, Aksh Garg and Siddharth Sharma, reposted a netizen's query and apologized on Monday, while claiming that their role was to promote the model on Medium and X (formerly Twitter), and that they had been unable to contact the member who wrote the code for the project. They said they had looked at recent papers to validate the novelty of the work but had not been informed of, nor were they aware of, any of the work by the Open Lab for Big Model Base, which was founded by the Natural Language Processing Lab at Tsinghua University and ModelBest. They noted that they have taken down all references to Llama3-V out of respect for the original work.
In response, Liu Zhiyuan, chief scientist at ModelBest, spoke out on the Chinese social media platform Zhihu, saying that the Llama3-V team failed to comply with open-source protocols for respecting and honoring the achievements of previous researchers, seriously undermining the cornerstone of open-source sharing. According to a screenshot leaked online, Li Dahai, CEO of ModelBest, also made a post on his WeChat Moments, saying that the two models were verified to have a high degree of similarity in their answers, down to identical errors, and that some of the relevant data had not yet been released to the public. He said the team hopes that their work will receive more attention and recognition, but not in this way, and he called for an open, cooperative and trusting community environment. Christopher Manning, director of the Stanford Artificial Intelligence Laboratory, also responded to Garg's explanation on Sunday, commenting "How not to own your mistakes!" on X.

As the incident became a trending topic on Sina Weibo, Chinese netizens commented that academic research should be factual, but that the incident also shows that technology development in China is progressing. Global Times
Are US development jobs falling off a cliff?
Companies are going to have fewer people and fewer layers. Ten years from now, the software development field may have fewer jobs, higher salaries, and more product-centric work. The reason is the rapid development of AI: AI is approaching human-level intelligence, so much of the work that relies on thinking ability may be handed over to AI, while emotion remains human territory. How to communicate and collaborate will be the most important ability in the near future. Indeed's chart for software development and operations jobs shows a peak in early 2022, followed by a precipitous decline.