
AI Insights

Google Highlights Artificial Intelligence Instead Of Upgrading Hardware In Launching Latest Smartphones



JAKARTA: Alphabet Inc.'s Google officially launched its newest line of Pixel smartphones and companion devices on Wednesday, August 20, with a major focus on artificial intelligence (AI) integration rather than significant hardware upgrades.

The annual “Made by Google” event, held in New York, featured celebrities such as host Jimmy Fallon and the Jonas Brothers demonstrating real-world uses of AI on Google devices.

Rick Osterloh, Senior Vice President of Google’s Devices and Services division, said that the company’s latest AI model, Gemini, is central to the experience of the newest generation of Pixel users. “There is a lot of hype about AI on phones, and a lot of promises that go unfulfilled, but Gemini is real proof,” he said.

The AI features embedded in the Pixel 10 line include an intelligent assistant that surfaces relevant information unprompted, such as displaying a flight-confirmation email when the user calls an airline, and a camera “coach” that helps users take better photos. Google also introduced real-time language translation for phone calls.

In terms of design, the Pixel 10 has changed little from the previous generation, but the base model now gains a telephoto lens, bringing it in line with the premium versions. All devices ship with the latest Tensor G5 processor and Pixelsnap magnetic charging, a technology similar to Apple’s MagSafe.

The Pixel 10 starts at 799 US dollars (around Rp13,020,700), and the folding model at 1,799 US dollars (around Rp29,373,700), prices that remain unchanged despite earlier concerns about increases due to US import tariffs.

According to Bob O’Donnell, principal analyst at Technalysis Research, Google’s main focus is no longer on hardware specifications alone. “A lot of what they showed today could actually run almost the same on last year’s devices. Their message is clear: it’s no longer just a matter of hardware,” he said.

Despite its aggressive push on AI integration, Pixel’s market share has not shown a significant increase. According to IDC data, Pixel’s global market share rose only slightly, from 0.9% to 1.1%, in the second quarter of this year, while in the United States it fell from 4.5% to 4.3%. Nearly three-quarters of Pixel sales occur in the US, Japan, and the UK. This year, Google began expanding its market by selling the Pixel in Mexico for the first time.

“The opportunity to expand their market is still very large, and that is currently Google’s main hurdle,” said Carolina Milanesi, an analyst at Creative Strategies.

The Pixel 10, Pixel 10 Pro, and Pixel 10 Pro XL will be available later this month, while the Pixel 10 Pro Fold is scheduled to launch in October. Beyond smartphones, Google also introduced the Pixel Watch 4 and the budget Pixel Buds 2a wireless earbuds. The Pixel Buds Pro 2, meanwhile, received no major updates beyond new colors and software upgrades.








AI Insights

No, AI Is Not Better Than a Good Doctor



Search the internet and you will find countless testimonials of individuals using AI to get diagnoses their doctors missed. And while it is important for individuals to take ownership of their healthcare and use all available resources, it is just as important to understand the process behind an AI diagnosis.

If you ask AI to figure out what ails you based on inputting a series of symptoms, the AI will use mathematical probability to calculate the appropriate sequence of words that would generate the most valuable output given the specific prompt. The AI has no intrinsic or learned understanding of what “body,” “illness,” “pain,” or “disease” mean. Such practically meaningful concepts to humans are, to the bot, just letters encountered in the training set frequently paired with other letters.
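As a toy illustration of this point, a bare-bones bigram model does nothing but count which words follow which in its training text; it has no concept of what any word means. This is a deliberately simplified sketch (the function names are my own, not from any real system):

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each word follows each other word in the text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def most_likely_next(follows, word):
    """Return the statistically most frequent successor of `word`, if any."""
    if word not in follows:
        return None  # the model has literally nothing to say about unseen words
    return follows[word].most_common(1)[0][0]
```

Trained on a medical corpus, such a model would happily continue "the pain is" with whatever word most often followed that phrase, not with any understanding of pain. Modern language models are vastly more sophisticated, but the underlying principle of predicting likely continuations is the same.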

New research on AI’s lack of medical reasoning

Recently, a team of researchers set out to investigate whether AIs that achieved near-perfect accuracy on medical benchmarks like MedQA actually reasoned through medical problems or simply exploited statistical patterns in their training data. If doctors and patients more widely rely on AI tools for diagnosis, it becomes critical to understand the capability of AI when faced with novel clinical scenarios.

The researchers took 100 questions from MedQA, a standard dataset of multiple-choice medical questions collected from professional medical board exams, and replaced the original correct answer choice with “None of the other answers.” If the AI was simply pattern-matching to its training data, the change should prove devastating to its accuracy. On the other hand, if there was reasoning behind its answers the negative effect should be minimal.
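The perturbation the researchers describe can be sketched roughly as follows; the field names (`options`, `answer`) and the `model` callable are illustrative assumptions, not the study's actual data format or code:

```python
def perturb_question(q):
    """Return a copy of a multiple-choice question with the correct
    option's text replaced by "None of the other answers".
    The correct letter stays the same; only the surface pattern changes."""
    options = dict(q["options"])
    options[q["answer"]] = "None of the other answers"
    return {"question": q["question"], "options": options, "answer": q["answer"]}

def accuracy(model, questions):
    """Fraction of questions the model answers with the correct letter."""
    correct = sum(1 for q in questions if model(q) == q["answer"])
    return correct / len(questions)
```

A model that genuinely reasons through the clinical vignette should eliminate the distractors and still land on the right choice; a model that pattern-matches memorized answer text loses its anchor, which is exactly the accuracy drop the study reports.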

Sure enough, they found that when an AI faced a question that deviated from the familiar answer patterns it was trained on, its accuracy declined substantially, from 80% to 42%. This is because today's AIs are still just probability calculators, not artful thinkers.

Artful medical practitioners see, hear, feel, and recognize medical conditions in ways they are often not consciously aware of. While an AI would be thrown off by an unfamiliar description of symptoms, good doctors listen to the specific word choices of patients and try to understand. They appreciate how societal factors can impact health, trusting both their own intuitions and those of the patient. They pay close attention to all the presenting symptoms in an open-minded manner, as opposed to algorithmically placing the patient in a generic diagnostic box.

Healing is more than a single task

And yet, algorithmic supremacists are as confident as ever in their belief that human healthcare providers will be replaced by machines. In 2016, at the Machine Learning and Market for Intelligence Conference in my hometown of Toronto, Geoffrey Hinton took the mic to confidently assert: “If you work as a radiologist, you are like Wile E. Coyote in the cartoon. You’re already over the edge of the cliff, but you haven’t yet looked down … People should stop training radiologists now. It’s just completely obvious that in five years deep learning is going to do better than radiologists.”

Seven years later, well past the five-year deadline, Kevin Fischer, CEO of Open Souls, attacked Hinton’s erroneous AI prediction, explaining how tech boosters home in on a single behavior against some task and then extrapolate broader implications based on that single task alone. The reality is that reducing any job, especially a wildly complex job that requires a decade of training, to a handful of tasks is absurd.

As Fischer explains, radiologists have a 3D world model of the brain and its physical dynamics in their head, which they use when interpreting the results of a scan. An AI tasked with analysis is simply performing 2D pattern recognition. Furthermore, radiologists have a host of grounded models they use to make determinations, and, when they think artfully, one of the most important is whether something “feels” off. A large part of their job is communicating their findings with fellow human physicians. Further, human radiologists need to see only a single example of a rare and obscure condition to both remember it and identify it in the future, unlike algorithms, which struggle with what to do with statistical outliers.

So, by all means, use whatever tools you can access to help your wellness. But be mindful of the difference between a medical calculator and an artful thinker.






AI Insights

Escape from Tarkov is finally coming to Steam ‘soon,’ developer says



Following news that Escape from Tarkov is escaping its perpetual beta, the pioneering extraction shooter is also about to make its debut on Steam. Nikita Buyanov, head of the Battlestate Games studio that developed Escape from Tarkov, confirmed on X that the game’s Steam page “will be available soon,” only teasing that the full details will come later.

Buyanov’s confirmation comes less than a day after the developer posted a GIF on X of a man spraying steam from an iron. Earlier this month, Buyanov revealed on X that the looter shooter will get its 1.0 release on November 15, 2025, more than eight years after the beta opened up to players in July 2017, and that the studio has plans to port it to consoles. The Steam page for Escape from Tarkov isn’t live yet, and with only vague details to go off of, longtime fans already have burning questions. Most importantly, existing players are eager to know if they will have to buy the game again on Steam and how this change will affect the ongoing cheating problem.

While we don’t have any answers yet, Battlestate Games recently went into damage control mode when it revealed the Unheard Edition of the game that costs $250 and includes a new PvE mode. This move irked longstanding players who previously purchased another premium edition of the game, called the Edge of Darkness, which promised access to all future DLCs. The controversy boiled down to owners of the Edge of Darkness edition claiming they should have access to the new content, but the studio argued that it isn’t classified as DLC. In the end, Buyanov apologized for the debacle and promised the PvE mode would be available for anyone who purchased the Edge of Darkness package.




AI Insights

Soft skills to survival skills: How to prepare for the ‘job apocalypse’ due to AI



The rise of artificial intelligence is already reshaping the global workforce, with experts warning that the ability to build skills such as judgment, empathy, adaptability and digital literacy will be essential to avoid being left behind.

As the technology evolves in waves, from automation to generative AI, agentic systems and eventually artificial general intelligence, millions risk losing not only their income but also their sense of purpose and identity.

Maha Hosain Aziz, professor at New York University and a member of the World Economic Forum’s Global Foresight Network, warned that the world rarely considers the broader social consequences of this disruption.

“We rarely connect the dots to what happens next – when millions lose not just income, but the anchor that work provides,” she wrote on the World Economic Forum’s platform.

“What happens when our education or years of work experience don’t matter as much any more? Many may face a grim choice: scramble to ‘learn AI’ to stay relevant – or drift into a new class, uncertain where they can fit in the AI economy.”

Ms Aziz outlined four waves of disruption, including traditional automation replacing routine jobs and generative AI transforming content creation and knowledge work.

Agentic AI is taking on multi-step tasks in areas such as HR, market research and IT, with the potential to replace midlevel managers.

By 2030, the world could see the rise of artificial general intelligence capable of most cognitive tasks.

“Each wave will displace another segment of the global working population,” Ms Aziz said.

“The challenge isn’t just how to re-employ people, but how to help them adapt to a future where their previous skills or identities may no longer be relevant. In a way, we’ve seen this before.”