
TikTok to introduce a new feature that can clone your voice with AI in just 10 seconds
Use of AI is certainly the hottest topic in the tech industry, and every major and minor player is using AI in some way. Tools like ChatGPT can help you with a wide range of tasks and even generate images. The other big thing is voice cloning. OpenAI recently introduced a voice engine that can generate a clone of your voice from just 15 seconds of audio, and there is no shortage of voice cloning tools on the web that can do the same. The newest tech giant set to use AI to clone your voice is TikTok.

We all know TikTok: short videos with filters, effects and all kinds of other things. Now TikTok has found a way to use voice cloning AI in its app. The feature does not seem to have a proper name yet; the app just references it as "Create your voice with AI" and "TikTok Voice Library". In the latest version of TikTok I came across some strings which indicate that TikTok is working on it (a sketch of how such strings are typically found is at the end of this article). I was also able to access the initial UI that introduces the feature and to see the terms and conditions of the "TikTok Voice Library", which users have to accept in order to use it. Here are the screenshots from the app.

As you can see in the screenshot above, this is the initial screen a user will see the first time they access the feature. TikTok claims that it can create an AI version of your voice in just 10 seconds. The generated AI voice clone can be used with text-to-speech in TikTok videos. The screen also outlines how it will work: you record yourself speaking, and TikTok processes the recording and uses information about your voice to generate your AI voice. When it comes to privacy, your AI voice will stay private and you can delete it at any time. Tapping the "Continue" button brings up the "TikTok Voice Library Terms" screen, which a user should definitely read; you can see and read it here as well.

How it will work

After agreeing to the terms and conditions, I was taken to a screen where TikTok shows some text and the user has to press the record button while reading it. Unfortunately, I did not see any text. This is probably because the feature is not fully ready, or the backend it fetches the text from is not live yet. Manually pressing the record button and saying random things also shows an error. So it's not yet possible to provide a sample voice generated with it and see how it compares to other voice cloning competitors. If it starts working someday, it will process your recorded voice and generate an AI version of it. Here is a screenshot of that screen.

My guess is that whenever the feature starts working, users will only have to clone their voice once, and the saved AI voice can then be used through text-to-speech to add voice to their videos. You just have to type the words; the choice is yours :p
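For readers curious how such strings are usually spotted, here is a minimal sketch of the kind of search an APK teardown involves. It assumes the APK has already been decoded with a tool like apktool; the directory name and search phrases below are illustrative placeholders, not TikTok's actual resource names.

```python
# Minimal teardown-style sketch: scan decoded string resources for phrases
# related to the feature. The directory and phrases below are hypothetical.
import xml.etree.ElementTree as ET
from pathlib import Path

DECODED_DIR = Path("tiktok_decoded")          # hypothetical apktool output dir
PHRASES = ["voice with ai", "voice library"]  # phrases seen in the feature's UI

for strings_xml in DECODED_DIR.glob("res/values*/strings.xml"):
    root = ET.parse(strings_xml).getroot()
    for entry in root.iter("string"):
        text = (entry.text or "").lower()
        if any(phrase in text for phrase in PHRASES):
            # Print the resource name and its value when a phrase matches.
            print(f"{strings_xml}: {entry.get('name')} = {entry.text}")
```

Matches like these only show that UI text exists inside the app; whether the feature actually ships still depends on the backend going live, as the recording error above suggests.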

Former British PM Sunak appoints Conservative Party shadow cabinet
On July 8, local time, former British Prime Minister Rishi Sunak announced the appointment of the Conservative Party's Shadow Cabinet, the party's first shadow cabinet in 14 years. Several members of Sunak's cabinet during his time as prime minister were appointed to it, including James Cleverly as Shadow Home Secretary and Jeremy Hunt as Shadow Chancellor of the Exchequer. Former Foreign Secretary David Cameron, however, was not appointed Shadow Foreign Secretary. In addition, a new leader of the Conservative Party could be elected as early as this week. The UK held a parliamentary election on July 4; the count showed that the Labour Party won well over half the seats in an overwhelming victory, while the Conservative Party suffered a disastrous defeat, ending its 14 years of continuous rule.

ChatGPT: Explained to Kids (How ChatGPT Works)
Chat means just that, a chat, and GPT is the acronym for Generative Pre-trained Transformer. Generative means it can create or produce something new; Pre-trained means the artificial intelligence model has first learned from a huge amount of text; and Transformer is the name of the neural network architecture the model is built on. Don't worry too much about the T; focus on the G and the P. We mainly use its Generative ability to produce all kinds of content, but the reason it can do that lies in the P: only by learning from a large amount of content can it produce new content.

This kind of learning naturally has limitations. For example, if you have learned a lot since childhood, can you guarantee that your answer to any question is completely correct? Almost impossible. The first limitation is the coverage of knowledge: ChatGPT is no exception, as it cannot master all knowledge. The second is the accuracy of knowledge: how do you ensure that everything it has learned is accurate and error-free? The third is the complexity of knowledge: the same concept shows up differently in different contexts, which is hard even for humans to grasp perfectly, let alone an AI. So when we use ChatGPT, we also need to check the accuracy of what it outputs. Most of the time it is probably fine, but if you want to use it for critical questions, you will need to review the answer manually.

ChatGPT has since been upgraded twice: GPT-4 brought more accurate answers, and more recently GPT-4 Turbo arrived. The current ChatGPT is what is called a multimodal large model, which differs from the first generation in that it can accept not only text but also other kinds of input, such as images, documents and videos. Its output is also more diverse: in addition to text, it can produce images, files and so on.
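To make the G and the P a bit more concrete, here is a deliberately tiny sketch in Python. It is not how GPT actually works inside (real models use Transformer neural networks trained on enormous datasets); it only shows the learn-from-text-then-generate loop in miniature, using a made-up training sentence.

```python
# Toy illustration of "Pre-trained" + "Generative":
# 1) learn which word tends to follow which from some training text,
# 2) then generate new text by repeatedly picking a plausible next word.
import random
from collections import defaultdict

training_text = "the cat sat on the mat the dog sat on the rug the cat chased the dog"

# "Pre-training": count which word follows which in the training text.
next_words = defaultdict(list)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_words[current].append(following)

# "Generation": start from a word and repeatedly choose a likely next word.
word = "the"
output = [word]
for _ in range(8):
    candidates = next_words.get(word)
    if not candidates:
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))  # e.g. "the dog sat on the rug the cat chased"
```

Just as this toy model can only produce words it has already seen, ChatGPT can only work with what it picked up during pre-training, which is exactly why its answers still need checking.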

iPhone 16 Pro leak just confirmed a huge camera upgrade
The tetraprism lens with 5x optical zoom, currently exclusive to the iPhone 15 Pro Max, could be headed to both the iPhone 16 Pro and iPhone 16 Pro Max, narrowing the gap between Apple's premium flagships. That's according to a new report from analyst Ming-Chi Kuo, who cites a recent earnings call with Apple lens supplier Largan. On the call, a Largan spokesperson said "some flagship specifications will be extended to other models" in the second half of 2024, presumably in reference to the upcoming iPhone Pro models.

"Apple is Largan’s largest customer, and Largan is also Apple’s largest lens supplier," Kuo said. "Therefore, the quote likely refers to the fact that the new iPhone 16 Pro and Pro Max will have a tetraprism camera in 2H24 (while only the iPhone 15 Pro Max had this camera in 2H23).”

The report goes on to say that the tetraprism camera for the iPhone 16 Pro series won't be all that different from the one in the iPhone 15 Pro Max. While the lack of an upgrade is disappointing, it's not necessarily a bad thing, as these lenses are already top-of-the-line. They represent a major increase over prior models’ zoom capabilities, and they're capable of offering more depth while still fitting into super-slim smartphones. That said, Apple does appear to be revamping the main camera and ultra-wide camera on the iPhone 16 Pro Max.

Evidence continues to mount that both iPhone 16 Pro models will share the same 5x optical zoom camera. Earlier this week, Asia-based DigiTimes (via 9to5Mac) reported that Apple is set to ramp up orders for tetraprism lenses as it expands their use in its upcoming iPhone series. Industry sources told the outlet that Largan and Genius Electronic Optical were tapped as the primary suppliers. Apple would be wise to streamline its Pro-level iPhones with the same camera setup; then all customers would have to consider when choosing a new iPhone is size and price.

Of course, this should all be taken with a grain of salt until we hear more from Apple. It's still a while yet before Apple's usual September launch window. In the meantime, be sure to check out all the rumors so far in our iPhone 16, iPhone 16 Pro and iPhone 16 Pro Max hubs.

Google may bring Google Wallet to Indian users
Google Wallet can help you store your IDs, driving license, loyalty cards, concert tickets and more. You can also store your payment cards and use tap to pay anywhere Google Pay is accepted. Google Wallet is available in various countries, but Google never launched it in India. Google has let Indian users stick with GPay, which facilitates UPI payments; tap to pay is not part of it, and the Indian version of GPay cannot store things such as IDs and passes either. This might change: Google may launch Google Wallet in India.

With recent versions of Google Wallet and Google Play Services, Google has added some flags and code which indicate that it is working on something for Indian users regarding Wallet. The first change I noticed while going through the Google Play Services APK was the addition of two new flags. Both flags are part of the com.google.android.gms.pay package in Google Play Services, which contains all the flags for GPay/Wallet features. Google flips flags server-side to enable or disable features for users (see the sketch further below), so these two flags alone don't really reveal what enabling them will bring. But the point is that Google Wallet has not launched in India, so why did Google add these flags to Play Services? The answer could be that Google is working on bringing Google Wallet to India. That could enable tap to pay, stored payment cards and various other features for Indian users which the current Indian GPay lacks.

I found similar flags in my analysis of the Google Wallet APK. These flags are also disabled by default, but they are again a clear indication that Google is working on something for Indian users. In both cases, enabling the flags doesn't surface any noticeable UI or feature, because not much has been added besides the flags themselves. Google has dogfood/testing versions internally, so the code will likely show up gradually in upcoming versions.

The last piece of code I found is also from Google Play Services. In case you don't know, Google was working on DigiLocker integration in the Google Files app, which was supposed to bring your digital documents, such as your driving license, COVID certificates and Aadhaar card, inside the app. But Google has ditched that effort and removed the "Important" tab (where DigiLocker was supposed to be integrated) from the Google Files app completely. So things are going to change, and here is how. This is the code I found in Google Play Services.

The word "PASS" alongside PAN, DRIVERS LICENCE, VACC CERTIFICATE and AADHAR CARD is a clear indication that Google may add support for these directly through Google Wallet using DigiLocker, just like Samsung Pass does. This code is not old: I checked older beta versions of Play Services and it is not present there. Here is a string which was added in a previous beta version a few weeks ago; I completely ignored it at the time because it didn't make any sense without the flags and the other code.

This addition was surprising because there was nothing regarding DigiLocker in Play Services before. In "pay_valuable", the "pay" refers to Wallet/GPay and "valuable" refers to things like passes, loyalty cards and transit cards. Since we are talking about DigiLocker, these "valuables" would be the driving license, vaccination certificate, PAN card and Aadhaar card, which could be stored in Google Wallet after DigiLocker integration. That's all about it.
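Since this article leans on the idea of server-side flag flipping, here is a minimal sketch of that general pattern. The flag names and the config-fetch function are hypothetical stand-ins, not Google's actual flags or APIs; the point is only that the client ships with the flags defaulting to off and a server decides when users see the feature.

```python
# Sketch of the server-side flag-flip pattern: the app ships default-off flags,
# a config service can turn them on remotely, and the UI checks the flag before
# showing anything. All names here are hypothetical.
DEFAULT_FLAGS = {
    "wallet_india_tap_to_pay": False,         # hypothetical flag name
    "wallet_india_digilocker_passes": False,  # hypothetical flag name
}

def fetch_remote_flags() -> dict:
    # A real app would fetch this from a config service over the network;
    # here we simply simulate the server enabling one flag for this user.
    return {"wallet_india_tap_to_pay": True}

def flag_enabled(name: str, remote_flags: dict) -> bool:
    # The remote value wins; otherwise fall back to the shipped default.
    return remote_flags.get(name, DEFAULT_FLAGS.get(name, False))

remote_flags = fetch_remote_flags()
for flag in DEFAULT_FLAGS:
    state = "enabled" if flag_enabled(flag, remote_flags) else "hidden"
    print(f"{flag}: {state}")
```

This also explains why manually enabling the flags shows nothing yet: until the actual feature code lands behind them, a flipped flag has nothing to reveal.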
We will know more in upcoming app updates, or maybe Google itself will announce something about this.