link1s.site

Avi Bluth appointed as head of IDF Central Command

On the evening of July 8, local time, the Israel Defense Forces issued a statement saying that Major General Avi Bluth had replaced Yehuda Fox as commander of the Israeli Central Command. Earlier that day, the Israeli army held a handover ceremony presided over by Chief of Staff Herzi Halevi.

Avi Bluth joined the Israel Defense Forces in 1993 and has commanded Israeli military operations in the West Bank. In May this year, Bluth was promoted to major general and served as a military commander in the Israeli Central Command.

CCTV reporters learned that in late April this year, Yehuda Fox, then commander of the Israeli Central Command, requested to resign and retire from the army in August this year. Fox had previously stated that he should bear part of the responsibility for the military intelligence failure on October 7 last year, and "must end his term like everyone else."

According to the official website of the Israeli Defense Forces, the Central Command is one of the four major commands of the Israeli army, headquartered in Jerusalem, and its responsibility covers nearly one-third of Israel's territory.

ChatGPT: Explained to Kids (How ChatGPT works)
Chat means chat, and GPT is the acronym for Generative Pre-trained Transformer. Generative means it creates or produces something new; Pre-trained means the model has already learned from a large amount of text before you ever talk to it; and Transformer is the name of the neural network architecture it is built on. Don't worry too much about the T; focus on the G and the P. We mainly use its Generative ability to produce all kinds of content, but the reason it can do so lies in the P: only by learning from a huge amount of text can it produce something new.

This kind of learning naturally has limitations. For example, if you have learned a lot of knowledge since childhood, can you guarantee that your answer to every question is completely correct? Almost impossible, and ChatGPT is no exception. First is the limitation of knowledge: no model can master everything. Second is the accuracy of knowledge: how do you ensure everything it learned is error-free? Third is the complexity of knowledge: the same concept appears differently in different contexts, which is hard even for humans to grasp perfectly, let alone AI. So when we use ChatGPT, we still need to check the accuracy of its output. For everyday questions this is usually fine, but for critical matters you should review the answer manually.

ChatGPT has since been upgraded twice: GPT-4 brought more accurate answers, and more recently GPT-4 Turbo followed. The current ChatGPT is a so-called multimodal large model, which differs from the first generation in that it can accept not only text but also other kinds of input, such as images, documents, and videos. Its output is also more diverse: besides text, it can produce images, files, and so on.
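The "learn from lots of text, then generate" idea above can be sketched with a deliberately tiny toy. This is not the real GPT architecture (no Transformer, no neural network); it is a hypothetical illustration where "pre-training" just means counting which word follows which in a small corpus, and "generating" means repeatedly sampling a plausible next word:

```python
import random
from collections import defaultdict

# Toy "training corpus" (assumption: any small text works for the demo).
corpus = "the cat sat on the mat and the cat slept on the mat"

# "Pre-training": record every word that was ever seen after each word.
follows = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(start, length, seed=0):
    """'Generation': repeatedly pick a word that followed the current one."""
    rng = random.Random(seed)   # fixed seed keeps the demo reproducible
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:         # no known continuation: stop early
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("the", 5))
```

Every sentence this toy produces is stitched together from word pairs it saw during "training", which is also why it can only ever echo its limited corpus — a miniature version of the knowledge limitations described above.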
Audi RS e-tron GT intelligent cockpit innovation analysis
RS e-tron GT: shares the J1 platform with the Porsche Taycan. The iconic closed hexagonal "big mouth" grille gives it strong brand recognition, and the rear uses a diffuser-style decorative design. Although it differs very little from the regular e-tron GT, the "RS" badge on the tail signals that this is no ordinary car; of course, staying low-key is also the Audi Sport style.

The center console continues the Audi family design with simple, refined lines. The central control screen, the air-conditioning panel in front of it, and the function keys below are clearly angled toward the driver, echoing the car's driver-focused positioning. Sports seats and leather upholstery with red stitching appear in the configuration list, giving the interior just the right sporty atmosphere, and reviewers praised the overall look of the cabin.

Although the official model of the cockpit chip has not been announced, the car scored highly on evaluation items such as cold-start speed, core-application launch speed, and navigation search speed, showing good performance. It also received full marks for touch accuracy and screen sharpness, so the high-frequency daily interactions feel excellent. If the voice control capability were further optimized, the intelligent experience would reach an even higher level.
OpenAI's internal AI details stolen in 2023 breach, NYT reports
July 4 (Reuters) - A hacker gained access to the internal messaging systems at OpenAI last year and stole details about the design of the company's artificial intelligence technologies, the New York Times reported on Thursday.

The hacker lifted details from discussions in an online forum where employees talked about OpenAI's latest technologies, the report said, citing two people familiar with the incident. However, they did not get into the systems where OpenAI, the firm behind the chatbot sensation ChatGPT, houses and builds its AI, the report added.

OpenAI executives informed both employees, at an all-hands meeting in April last year, and the company's board about the breach, according to the report, but decided not to share the news publicly as no information about customers or partners had been stolen. Executives did not consider the incident a national security threat, believing the hacker was a private individual with no known ties to a foreign government, the report said. The San Francisco-based company did not inform federal law enforcement agencies about the breach, it added.

OpenAI said in May it had disrupted five covert influence operations that sought to use its AI models for "deceptive activity" across the internet, the latest incident to stir safety concerns about potential misuse of the technology. The Biden administration was poised to open a new front in its effort to safeguard U.S. AI technology from China and Russia, with preliminary plans to place guardrails around the most advanced AI models including ChatGPT, Reuters earlier reported, citing sources.
China's generative AI patents are far ahead of the US!
The World Intellectual Property Organization (WIPO) recently said that China filed more than 38,000 generative AI patent applications from 2014 to 2023, while the United States filed 6,276 of the roughly 50,000 applications filed worldwide. Of those 50,000 applications, 25 percent were filed last year. The top five inventor regions are: China (38,210 inventions), the United States (6,276 inventions), the Republic of Korea (4,155 inventions), Japan (3,409 inventions) and India (1,350 inventions).
Stanford AI project team apologizes for plagiarizing Chinese model
An artificial intelligence (AI) team at Stanford University apologized for plagiarizing a large language model (LLM) from a Chinese AI company, an incident that became a trending topic on Chinese social media platforms and sparked concern among netizens on Tuesday. "We apologize to the authors of MiniCPM [the AI model developed by a Chinese company] for any inconvenience that we caused for not doing the full diligence to verify and peer review the novelty of this work," the developers of the multimodal AI model Llama3-V wrote in a post on the social platform X. The apology came after the Stanford team announced Llama3-V on May 29, claiming it had performance comparable to GPT-4V and other models and could be trained for less than $500. According to media reports, the announcement published by one of the team members quickly received more than 300,000 views. However, some netizens on X found and listed evidence that the Llama3-V project code had been reformatted from, and was highly similar to, MiniCPM-Llama3-V 2.5, an LLM developed by the Chinese technology company ModelBest and Tsinghua University. Two team members, Aksh Garg and Siddharth Sharma, reposted a netizen's query and apologized on Monday, while claiming that their role had been to promote the model on Medium and X (formerly Twitter), and that they had been unable to contact the member who wrote the code for the project. They said they had looked at recent papers to validate the novelty of the work but had not been informed of, and were not aware of, any of the work by the Open Lab for Big Model Base, which was founded by the Natural Language Processing Lab at Tsinghua University and ModelBest. They noted that they had taken down all references to Llama3-V out of respect for the original work.
In response, Liu Zhiyuan, chief scientist at ModelBest, spoke out on the Chinese social media platform Zhihu, saying that the Llama3-V team had failed to comply with open-source protocols for respecting and honoring the achievements of previous researchers, seriously undermining the cornerstone of open-source sharing. According to a screenshot leaked online, Li Dahai, CEO of ModelBest, also posted on his WeChat Moments, saying that the two models had been verified to show high similarity in their answers, even down to the same errors, and that some of the relevant data had not yet been released to the public. He said the team hopes their work will receive more attention and recognition, but not in this way, and called for an open, cooperative and trusting community environment. Christopher Manning, director of the Stanford Artificial Intelligence Laboratory, also responded to Garg's explanation on Sunday, commenting "How not to own your mistakes!" on X. As the incident became a trending topic on Sina Weibo, Chinese netizens commented that academic research should be factual, but that the incident also proves technology development in China is progressing. Global Times