
Insurers fret over militant attacks, AI hacks at Paris Olympics
LONDON, July 5 (Reuters) - Insurers are nervous that militant attacks or AI-generated fake images could derail the Paris Olympics, risking event cancellations and millions of dollars in claims.

Insurers faced losses after the 2020 Tokyo Olympics were postponed for a year due to the COVID-19 pandemic. Since then, the wars in Ukraine and Gaza and a spate of elections this year, including in France, have driven up fears of politically motivated violence at high-profile global events.

The Olympics take place in Paris from July 26 to Aug. 11 and the Paralympics from Aug. 28 to Sept. 8. German insurer Allianz (ALVG.DE) is the insurance partner for the Games; other insurers, such as the Lloyd's of London (SOLYD.UL) market, are also providing cover.

"We are all aware of the geopolitical situation the world is in," said Eike Buergel, head of Allianz's Olympic and Paralympic programme. "We are convinced that the IOC (International Olympic Committee), Paris 2024 and the national organising committees, together with the French authorities, are taking the right measures when it comes to challenges on the ground."

OpenAI's internal AI details stolen in 2023 breach, NYT reports
July 4 (Reuters) - A hacker gained access to the internal messaging systems at OpenAI last year and stole details about the design of the company's artificial intelligence technologies, the New York Times reported on Thursday.

The hacker lifted details from discussions in an online forum where employees talked about OpenAI's latest technologies, the report said, citing two people familiar with the incident. However, the hacker did not get into the systems where OpenAI, the firm behind chatbot sensation ChatGPT, houses and builds its AI, the report added.

OpenAI executives informed both employees, at an all-hands meeting in April last year, and the company's board about the breach, according to the report, but decided not to share the news publicly because no information about customers or partners had been stolen. OpenAI executives did not consider the incident a national security threat, believing the hacker was a private individual with no known ties to a foreign government, the report said. The San Francisco-based company did not inform federal law enforcement agencies about the breach, it added.

OpenAI said in May that it had disrupted five covert influence operations that sought to use its AI models for "deceptive activity" across the internet, the latest development to stir safety concerns about potential misuse of the technology. The Biden administration was poised to open a new front in its effort to safeguard U.S. AI technology from China and Russia, with preliminary plans to place guardrails around the most advanced AI models, including ChatGPT, Reuters earlier reported, citing sources.

Former Microsoft CEO Ballmer's wealth surpasses Gates': he only did one thing
On July 1, former Microsoft CEO and President Steve Ballmer surpassed Microsoft co-founder Bill Gates for the first time on the Bloomberg list of the world's richest people, becoming the sixth-richest person in the world. According to the data, as of that day, Ballmer's net worth had reached $157.2 billion, while Gates's wealth stood at $156.7 billion, dropping him to seventh place. The latest figures, as of July 6, show that Ballmer's wealth has grown further to $161 billion, while Gates's is $159 billion.

This is the first time Ballmer's net worth has surpassed Gates's, and it is also one of the rare instances in history of an employee's net worth surpassing that of a company founder. Unlike Elon Musk, Jeff Bezos and others, Ballmer did not accumulate his wealth through entrepreneurial success as a business founder, but simply because he chose to hold his Microsoft stock "indefinitely." As Fortune previously reported, Ballmer is the only individual with a net worth of more than $100 billion who earned it as an employee rather than a founder.

ChatGPT: Explained to Kids (How ChatGPT Works)
"Chat" means conversation, and GPT is the acronym for Generative Pre-trained Transformer. "Generative" means it creates or produces something new; "Pre-trained" means the model has learned beforehand from a large amount of text; and "Transformer" is the name of the underlying artificial intelligence model. Don't worry too much about the T; just focus on the G and the P. We mainly use its generative function to produce various kinds of content, but the reason it can do so lies in the P: only by first learning from a large amount of content can it then produce new content.

This kind of learning naturally has limitations. For example, even if you have studied a great deal since childhood, can you guarantee that your answer to every question is completely correct? Almost impossible, and ChatGPT is no exception, for three reasons. First, the limits of knowledge: it is impossible to master all knowledge. Second, the accuracy of knowledge: there is no way to ensure that everything it learned is accurate and error-free. Third, the complexity of knowledge: the same concept appears differently in different contexts, which even humans find hard to grasp perfectly, let alone AI. So when we use ChatGPT, we also need to check the accuracy of its output. Usually this is not a problem, but if you want to rely on it for critical matters, you will need to review the answer yourself.

ChatGPT has in fact been upgraded twice since its debut: once to GPT-4, which answers more accurately, and more recently to GPT-4 Turbo. The current ChatGPT is what is called a multimodal large model, which differs from the first generation in that it can accept not only text but also other kinds of input, such as images, documents, and video. Its output is also more diverse: besides text, it can produce images, files, and so on.
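The "learn from lots of text first, then generate" idea above can be sketched with a toy word-pair model. This is only a minimal illustration of the pre-training/generation loop, not how GPT actually works: real models use Transformer neural networks trained on vastly more data, and all the names and the tiny corpus here are made up for the example.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """'Pre-training': record, for each word, which words were seen after it."""
    words = text.split()
    following = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        following[current].append(nxt)
    return following

def generate(following, start, length=5, seed=0):
    """'Generation': repeatedly pick a word that was seen after the current one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break  # the model never saw anything following this word
        out.append(rng.choice(options))
    return " ".join(out)

# A tiny made-up training corpus.
model = train_bigrams("the cat sat on the mat and the cat ran")
print(generate(model, "the"))
```

Just as with the real thing, the toy model can only reproduce patterns it has seen, which is exactly why its output needs checking: it has no notion of whether a continuation is true, only that it is statistically familiar.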