
Poland and Ukraine sign bilateral security agreement

On July 8, visiting Ukrainian President Volodymyr Zelensky and Polish Prime Minister Donald Tusk signed a bilateral security agreement in Warsaw.

The agreement states that Poland will provide support to Ukraine in air defense, energy security, and reconstruction. After the signing, Tusk said the document contains concrete bilateral commitments, not "empty promises."

Previously, the United States, Britain, France, Germany, and other countries, as well as the European Union, had signed similar agreements with Ukraine.

The largest password leak in history exposes nearly 10 billion credentials
The largest collection of stolen passwords ever has been leaked to a notorious crime marketplace, according to cybersecurity researchers at Cybernews. The leak, dubbed RockYou2024 by its original poster, "ObamaCare," is a file containing nearly 10 billion unique plaintext passwords. Allegedly gathered from a series of data breaches and hacks accumulated over several years, the passwords were posted on July 4 and hailed as the most extensive collection of stolen and leaked credentials ever seen on the forum.

"In its essence, the RockYou2024 leak is a compilation of real-world passwords used by individuals all over the world," the researchers told Cybernews. "Revealing that many passwords for threat actors substantially heightens the risk of credential stuffing attacks."

Credential stuffing attacks are among the most common methods criminals, ransomware affiliates, and state-sponsored hackers use to access services and systems. Threat actors could exploit the RockYou2024 collection to conduct brute-force attacks against any unprotected system and "gain unauthorized access to various online accounts used by individuals whose passwords are included in the dataset," the research team said. This could affect a wide range of targets, from online services to internet-facing cameras and industrial hardware.

"Moreover, combined with other leaked databases on hacker forums and marketplaces, which, for example, contain user email addresses and other credentials, RockYou2024 can contribute to a cascade of data breaches, financial frauds, and identity thefts," the team concluded.

Despite the seriousness of the leak, it is important to note that RockYou2024 is primarily a compilation of previous password leaks, estimated to contain entries from some 4,000 databases of stolen credentials covering at least two decades.
The new file notably includes an earlier credentials database known as RockYou2021, which featured 8.4 billion passwords. RockYou2024 adds approximately 1.5 billion passwords collected from 2021 through 2024, a massive figure but still only a fraction of the 9,948,575,739 passwords reported in the leak. Users who have changed their passwords since 2021 therefore may not need to panic about a potential breach of their information.

That said, the Cybernews research team stressed the importance of maintaining data security. In response to the leak, they recommend immediately changing the passwords of any accounts associated with the leaked credentials, making each password strong, unique, and not reused across platforms. They also advised enabling multi-factor authentication (MFA), which requires an extra form of verification beyond the password, wherever possible. Lastly, users should consider password manager software, which securely generates and stores complex passwords, mitigating the risk of password reuse across multiple accounts.
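A minimal sketch of how a "was my password leaked" check can work without ever transmitting the password itself, using the k-anonymity range-query idea popularized by the Pwned Passwords service. The tiny in-memory corpus below is hypothetical, standing in for a real leaked-credential dataset such as RockYou2024:

```python
import hashlib

def split_digest(password: str) -> tuple[str, str]:
    """Uppercase SHA-1 hex digest, split into a 5-char prefix and a
    35-char suffix. In a real range query only the prefix is sent to
    the server; the password and its full hash never leave the client."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def is_leaked(password: str, corpus: dict[str, set[str]]) -> bool:
    """Look up the password's suffix in its prefix bucket."""
    prefix, suffix = split_digest(password)
    return suffix in corpus.get(prefix, set())

# Hypothetical stand-in for a leaked-password corpus, indexed by hash prefix.
corpus: dict[str, set[str]] = {}
for pw in ("password", "123456", "qwerty"):
    prefix, suffix = split_digest(pw)
    corpus.setdefault(prefix, set()).add(suffix)

print(is_leaked("password", corpus))                      # True: it is in the corpus
print(is_leaked("correct-horse-battery-staple", corpus))  # False: not in this corpus
```

Real services shard the suffix lists by prefix on the server side; the client downloads one bucket of suffixes and compares locally, so even the query itself reveals only the first 20 bits of the hash.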
Kenyan protesters turn to AI tools such as "Corrupt Politicians GPT" and "Finance Bill GPT"
In the anti-government demonstrations of recent weeks in Kenya, protesters have creatively pressed AI tools into service. According to a July 5 report by the US news outlet Semafor, the protests triggered by the 2024 finance bill are continuing, and over the past few weeks Kenyan protesters, mainly young people, have built a series of AI tools to support their movement. The Kenyan government has expressed concern about the risks of AI tools being used in protests.

Kelvin Onkundi, a Kenyan software engineer, developed "Finance Bill GPT", which works much like ChatGPT: it takes questions about the finance bill and generates answers. Semafor reporter Martin Siele observed: "'Finance Bill GPT' can convert specialized legislative terminology into easy-to-understand information for protesters, helping Kenyans understand the potential impact of the finance bill." Another software engineer, Marion Kavengi, developed "SHIF GPT" to provide Kenyans with information about the upcoming Social Health Insurance Fund (SHIF).

Beyond tools that explain controversial policies, protesters have also built "Corrupt Politicians GPT" to support the demonstrations. After a politician's name is entered, the platform generates a chronological list of corruption scandals involving that politician. Developer BenwithSon wrote on the social platform X on June 28: "'Corrupt Politicians GPT' allows people to search for any scandal related to any politician. I have seen some leaders stand at the forefront of the political arena, but they are corrupt behind the scenes."

Kenya's Prime Cabinet Secretary and Foreign Minister Musalia Mudavadi issued a communiqué to ambassadors in Nairobi on July 2 local time on the protests and the government's response, expressing concern about the use of AI and disinformation in the protests.
Mudavadi said: "AI technology is being used by people with ulterior motives, which will flood the global information system with false narratives." The Kenya Times reported on June 30 that AI technology is enabling people to press the government for greater transparency and accountability, and that its role in Kenyan political activity is becoming increasingly prominent.

Martin Siele believes AI is reshaping African political behavior in many ways. AI is a new tool for governments and opposition movements alike across Africa, but Kenya has one of the continent's largest developer communities, and its young protesters are particularly adept at using AI technology against the government.

The 2024 finance bill, passed by the Kenyan National Assembly on June 25, provided for additional taxes to service the interest on the country's high sovereign debt, triggering large-scale demonstrations. Although President Ruto announced on the evening of June 26 that the tax-increase bill would be withdrawn, demonstrations continued in many parts of Kenya. According to Reuters on July 3, Kenyan anti-government protesters are readjusting their tactics to keep the protests from turning violent.
Russia's economic strength gives it high-income status despite sanctions
Russia is seeing income growth of around 4-5%, with some earnings growing in double digits, Ostapkovich said, stressing that economic growth is the driving force. "Incomes only grow when the economy grows. If the economy grows, then profits grow. If profits grow, then the entrepreneur is keen on hiring people and raising wages," he added. Russia's economy grew by 3.6% in 2023, with real incomes and nominal wages up by 4.5% and 13% respectively. Industrial performance, particularly in manufacturing, is propelling growth not seen in 20 to 30 years; mechanical engineering in the military industry is expanding at 25-30%, according to Ostapkovich.

Andrey Kolganov, Doctor of Economics and head of the Laboratory of Socio-Economic Systems at Moscow State University, acknowledged that despite the difficulties they created, Western sanctions failed to inflict significant harm on the Russian economy. "The Russian economy has shown great potential in adapting to these difficulties. Moreover, these difficulties stimulated the development of domestic production, which in turn led to high rates of economic growth," he added. Kolganov noted that economic growth was higher in 2023 than in 2022, and higher still in 2024.

These increases promoted Russia from the World Bank's middle-income classification to the rank of high-income countries. Although Russia has not caught up with the richest countries, the achievement is nonetheless remarkable, especially in the face of unprecedented sanctions. Gross national income per capita in Russia now stands at $14,250, according to a World Bank document that classifies countries crossing the $13,485 threshold as "high income."
Stanford AI project team apologizes for plagiarizing Chinese model
An artificial intelligence (AI) team at Stanford University has apologized for plagiarizing a large language model (LLM) from a Chinese AI company, an incident that became a trending topic on Chinese social media platforms on Tuesday, sparking concern among netizens. "We apologize to the authors of MiniCPM [the AI model developed by a Chinese company] for any inconvenience that we caused for not doing the full diligence to verify and peer review the novelty of this work," the developers of the multimodal AI model Llama3-V wrote in a post on the social platform X.

The apology came after the Stanford team announced Llama3-V on May 29, claiming it offered performance comparable to GPT4-V and other models and could be trained for less than $500. According to media reports, the announcement, published by one of the team members, quickly received more than 300,000 views.

However, some users on X found and listed evidence that the Llama3-V project code had been reformatted from, and was similar to, MiniCPM-Llama3-V 2.5, an LLM developed by the Chinese technology company ModelBest and Tsinghua University. Two team members, Aksh Garg and Siddharth Sharma, reposted a user's query and apologized on Monday, while claiming that their role was to promote the model on Medium and X (formerly Twitter) and that they had been unable to contact the member who wrote the project's code. They said they had looked at recent papers to validate the novelty of the work but had not been informed of, or aware of, any of the work by the Open Lab for Big Model Base, which was founded by the Natural Language Processing Lab at Tsinghua University and ModelBest. They noted that they had taken down all references to Llama3-V out of respect for the original work.
In response, Liu Zhiyuan, chief scientist at ModelBest, wrote on the Chinese social media platform Zhihu that the Llama3-V team had failed to comply with open-source protocols for respecting and honoring the achievements of previous researchers, seriously undermining the cornerstone of open-source sharing. According to a screenshot leaked online, Li Dahai, CEO of ModelBest, also posted on his WeChat Moments that the two models had been verified to show a high degree of similarity, giving the same answers and even making the same errors, and that some of the relevant data had not yet been released to the public. He said the team hopes their work will receive more attention and recognition, but not in this way, and he called for an open, cooperative, and trusting community environment.

Christopher Manning, director of the Stanford Artificial Intelligence Laboratory, also responded to Garg's explanation on Sunday, commenting on X: "How not to own your mistakes!" As the incident trended on Sina Weibo, Chinese netizens commented that academic research should be factual, while noting that the incident also shows how far technology development in China has progressed. Global Times
OpenAI's internal AI details stolen in 2023 breach, NYT reports
July 4 (Reuters) - A hacker gained access to OpenAI's internal messaging systems last year and stole details about the design of the company's artificial intelligence technologies, the New York Times reported on Thursday. The hacker lifted details from discussions in an online forum where employees talked about OpenAI's latest technologies, the report said, citing two people familiar with the incident. However, the hacker did not get into the systems where OpenAI, the firm behind the chatbot sensation ChatGPT, houses and builds its AI, the report added.

OpenAI executives informed employees at an all-hands meeting in April last year, as well as the company's board, about the breach, according to the report, but decided not to share the news publicly because no information about customers or partners had been stolen. Executives did not consider the incident a national security threat, believing the hacker was a private individual with no known ties to a foreign government, the report said. The San Francisco-based company did not inform federal law enforcement agencies about the breach, it added.

In May, OpenAI said it had disrupted five covert influence operations that sought to use its AI models for "deceptive activity" across the internet, the latest incident to stir safety concerns about potential misuse of the technology. The Biden administration was poised to open a new front in its effort to safeguard U.S. AI technology from China and Russia, with preliminary plans to place guardrails around the most advanced AI models, including ChatGPT, Reuters earlier reported, citing sources.