
The US and Australia will work to improve financial links in the Pacific region to counter China's influence

U.S. and Australian officials said on Monday (July 8) that both countries are committed to improving financial connectivity in the Pacific and strengthening banking services in the region to counter China's expanding influence.

According to Reuters, at the two-day Pacific Banking Forum co-hosted by the United States and Australia, Australian Assistant Treasurer Stephen Jones said that Canberra hopes to be the partner of choice in the Pacific region, both in banking and defense.

"If there are countries acting in this region whose main goal is to promote their own national interests rather than the interests of Pacific island countries, we will be very concerned," Jones said on the first day of the forum in Brisbane. He made the comment when asked about Chinese banks filling the vacuum in the Pacific region.

The report said that as some Western banks have severed long-standing business relationships with banks in small Pacific island nations, and others prepare to wind down their operations there, those nations face mounting challenges and limited access to US dollar-denominated banking services.

According to experts cited in the report, Western banks are taking de-risking actions to comply with financial regulations, which makes doing business in Pacific island countries more difficult and in turn weakens those island nations' financial resilience.

At the same time, Washington is also stepping up efforts to support Pacific island nations in limiting China's influence. Brian Nelson, U.S. Treasury Undersecretary for Counterterrorism and Financial Intelligence, said, "We recognize the economic and strategic importance of the Pacific region, and we are committed to deepening engagement and cooperation with our allies and partners to enhance financial connectivity, investment and integration."

The report said that neither the United States nor Australia announced detailed plans at the forum, but the officials' comments reflect growing unease among Western countries that have traditionally held sway in the Pacific about China's expanding presence in the region.

Hamas chief says latest Israeli attack on Gaza could jeopardise ceasefire talks
CAIRO, July 8 (Reuters) - A new Israeli assault on Gaza on Monday threatened ceasefire talks at a crucial moment, the head of Hamas said, as Israeli tanks pressed into the heart of Gaza City and ordered residents out after a night of massive bombardment.

Residents said the airstrikes and artillery barrages were among the heaviest in nine months of conflict between Israeli forces and Hamas militants in the enclave. Thousands fled.

The assault unfolded as senior U.S. officials were in the region pushing for a ceasefire after Hamas made major concessions last week. The militant group said the new offensive appeared intended to derail the talks and called for mediators to rein in Israel's Prime Minister Benjamin Netanyahu. The assault "could bring the negotiation process back to square one. Netanyahu and his army will bear full responsibility for the collapse of this path," Hamas quoted leader Ismail Haniyeh as saying.

Gaza City, in the north of the Palestinian enclave, was one of Israel's first targets at the start of the war in October. But clashes with militants there have persisted and civilians have sought shelter elsewhere, adding to waves of displacement. Much of the city lies in ruins.

Residents said Gaza City neighbourhoods were bombed through the night into the early morning hours of Monday. Several multi-storey buildings were destroyed, they said. The Gaza Civil Emergency Service said it believed dozens of people were killed but emergency teams were unable to reach them because of ongoing offensives. Gaza residents said tanks advanced from at least three directions on Monday and reached the heart of Gaza City, backed by heavy Israeli fire from the air and ground. That forced thousands of people out of their homes to look for safer shelter, which for many was impossible to find, and some slept on the roadside.
Samsung expects profits to jump by more than 1,400%
Samsung Electronics expects its profits for the three months to June 2024 to jump 15-fold compared with the same period last year. An artificial intelligence (AI) boom has lifted the prices of advanced chips, driving up the firm's forecast for the second quarter.

The South Korean tech giant is the world's largest maker of memory chips, smartphones and televisions. The announcement pushed Samsung shares up more than 2% during early trading in Seoul. The firm also reported a more than 10-fold jump in its profits for the first three months of this year.

For the second quarter, it expects profit to rise to 10.4tn won ($7.54bn; £5.9bn), from 670bn won a year earlier. That surpasses analysts' forecasts of 8.8tn won, according to LSEG SmartEstimate.

"Right now we are seeing skyrocketing demand for AI chips in data centers and smartphones," said Marc Einstein, chief analyst at Tokyo-based research and advisory firm ITR Corporation.

Optimism about AI is one reason for the broader market rally over the last year, which pushed the S&P 500 and the Nasdaq in the United States to new records on Wednesday. The market value of chip-making giant Nvidia surged past $3tn last month, briefly taking the top spot as the world's most valuable company. "The AI boom which massively boosted Nvidia is also boosting Samsung's earnings and indeed those of the entire sector," Mr Einstein added.

Samsung Electronics is the flagship unit of South Korean conglomerate Samsung Group. Next week, the company faces a possible three-day strike, expected to start on Monday, by a union of workers demanding a more transparent system for bonuses and time off.
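The headline's "more than 1,400%" and the body's "15-fold" describe the same jump: a 15-fold level is a roughly 1,400% increase. A quick back-of-the-envelope check using the figures in the article (this is our own arithmetic, not a calculation from the report):

```python
# Samsung's operating profit figures from the article, in trillions of won
q2_2024_forecast = 10.4   # forecast for April-June 2024
q2_2023 = 0.67            # 670bn won a year earlier

# "N-fold" compares levels; "% increase" compares the change to the base.
fold = q2_2024_forecast / q2_2023
pct_increase = (fold - 1) * 100

print(f"{fold:.1f}-fold, i.e. a {pct_increase:.0f}% increase")
# ~15.5-fold, i.e. an increase of a little over 1,450%
```

This matches both phrasings: the new figure is about 15 times the old one, which is an increase of more than 1,400%.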
ChatGPT: Explained to Kids (How ChatGPT Works)
Chat means just that: chatting. GPT is an acronym for Generative Pre-trained Transformer. "Generative" means it creates or produces something new. "Pre-trained" means the artificial intelligence model has already learned from a huge amount of text. "Transformer" is the name of the neural network design the model is built on. Don't worry too much about the T; focus on the G and the P. We mainly use the Generative part to produce all kinds of content, but the reason it can do so lies in the P: only by first learning from a large amount of content can it generate new content.

This kind of learning naturally has limits. For example, even if you have learned a lot of knowledge since childhood, can you guarantee that your answer to any question is completely correct? Almost impossible. First, knowledge is limited: ChatGPT is no exception, since it cannot master all knowledge. Second, accuracy: how can you ensure that everything learned is accurate and error-free? Third, complexity: the same concept appears differently in different contexts, which is hard even for humans to grasp perfectly, let alone AI.

So when we use ChatGPT, we also need to check the accuracy of what it outputs. Most of the time it is probably fine, but if you want to use it for critical questions, you will need to review the answers yourself.

ChatGPT has since been upgraded twice: GPT-4 brought more accurate answers, and more recently came GPT-4 Turbo. The current ChatGPT is what is called a multimodal large model, which differs from the first generation in that it can accept not only text but also other kinds of input, such as images, documents and video. Its output is also more diverse: besides text, it can produce images, files and so on.
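The "first learn from lots of text, then generate" idea can be sketched with a toy program. This is not how GPT actually works inside (real models use Transformer neural networks and vastly more data); it is just a tiny word-pair model that shows the same pre-train-then-generate pattern:

```python
import random
from collections import defaultdict

# "Pre-training": read a (tiny) corpus and remember which word follows which.
corpus = "the cat sat on the mat and the cat slept on the mat".split()
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

# "Generation": start from a word and repeatedly pick a word that was
# seen to follow it, producing new text the corpus never contained verbatim.
random.seed(0)  # fixed seed so the toy demo is repeatable
word = "the"
output = [word]
for _ in range(6):
    word = random.choice(next_words[word])
    output.append(word)
print(" ".join(output))
```

Notice the same limitation the article describes: the model can only recombine what it has seen, so the quality and coverage of its training text set a hard ceiling on what it can say.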
iPhone 16 Pro leak just confirmed a huge camera upgrade
The tetraprism lens with 5x optical zoom currently exclusive to the iPhone 15 Pro Max could be headed to both the iPhone 16 Pro and iPhone 16 Pro Max, narrowing the gap between Apple's premium flagships.

That's according to a new report from analyst Ming-Chi Kuo, who cites a recent earnings call with Apple lens supplier Largan. In the call, a spokesperson from Largan said "some flagship specifications will be extended to other models" in the second half of 2024, presumably in reference to the upcoming iPhone Pro models. "Apple is Largan's largest customer, and Largan is also Apple's largest lens supplier," Kuo said. "Therefore, the quote likely refers to the fact that the new iPhone 16 Pro and Pro Max will have a tetraprism camera in 2H24 (while only the iPhone 15 Pro Max had this camera in 2H23)."

The report goes on to say that the tetraprism camera for the iPhone 16 Pro series won't be all that different from the one in the iPhone 15 Pro Max. While the lack of an upgrade is disappointing, it's not necessarily a bad thing, as these lenses are already top-of-the-line: they represent a major increase over prior models' zoom capabilities, and they offer more depth while still fitting into super-slim smartphones. That said, Apple does appear to be revamping the main camera and ultra-wide camera on the iPhone 16 Pro Max.

Evidence continues to mount that both iPhone 16 Pro models will share the same 5x optical zoom camera. Earlier this week, DigiTimes in Asia (via 9to5Mac) reported that Apple is set to ramp up orders for tetraprism lenses as it expands their use in its upcoming iPhone series. Industry sources told the outlet that Largan and Genius Electronic Optical were tapped as the primary suppliers.

Apple would be wise to streamline its Pro-level iPhones with the same camera setup; then all customers would have to weigh when choosing a new iPhone is size and price.
Of course, this should all be taken with a grain of salt until we hear more from Apple. It's still a while yet before Apple's usual September window for iPhone launches. In the meantime, be sure to check out all the rumors so far in our iPhone 16, iPhone 16 Pro and iPhone 16 Pro Max hubs.
TikTok to introduce a new feature that can clone your voice with AI in just 10 seconds
Use of AI is certainly the hottest topic in the tech industry, and every major and minor player is using AI in some way. Tools like ChatGPT can help you with a wide range of tasks and even generate images. Then there is voice cloning: OpenAI recently introduced a voice engine that can generate a clone of your voice from just 15 seconds of audio, and there is no shortage of voice cloning tools on the web that can do the same.

The newest tech giant set to use AI to clone your voice is TikTok. We all know TikTok: short videos with filters, effects and all the rest. Now TikTok has found a way to use voice cloning AI in its app.

TikTok is working on this feature, which does not yet seem to have a proper name; the app just references it as "Create your voice with AI" and "TikTok Voice Library". In the latest version of TikTok I came across some strings indicating that TikTok is working on it. I was also able to access the initial UI that introduces the feature, and to see the terms and conditions of the "TikTok Voice Library", which users have to accept in order to use the feature. Here are the screenshots from the app.

As you can see in the screenshot above, this is the initial screen a user will see the first time they access the feature. TikTok claims it can create an AI version of your voice in just 10 seconds. The generated AI voice clone can be used with text-to-speech in TikTok videos. The screen also outlines how the process will work: you record yourself speaking, and TikTok processes the recording and uses information about your voice to generate your AI voice. As for privacy, your AI voice will stay private and you can delete it at any time.
Tapping the "Continue" button brings up the "TikTok Voice Library Terms" screen, which users should definitely read; you can see and read it here as well.

How it will work: after agreeing to the terms and conditions, I was taken to a screen where TikTok shows some text and the user has to press the record button while reading it aloud. Unfortunately, I did not see any text. This is probably because the feature is not fully ready, or the backend from which it fetches the text is not live yet. Manually pressing the record button and saying random things also shows an error. So it is also not possible yet to provide a sample voice generated with it or to see how it compares to other voice cloning competitors. If it starts working someday, it will process your recorded voice and generate an AI version of it. Here is a screenshot of that screen.

My guess is that once the feature goes live, users will only have to clone their voice once, and the saved AI voice can then be used via text-to-speech to add a voice to their videos. You just have to type the words; the voice is yours :p