xAI releases Grok 4 amid furor over antisemitic comments


Generative AI vendors xAI and Perplexity released new models and products to challenge mainstream vendors.

Amid controversy surrounding its Grok AI chatbot making a series of antisemitic comments, xAI released Grok 4 on Wednesday night.

During a livestream on X, xAI’s founder and X owner, Elon Musk, said the model can perform at a postgraduate level in mathematics, chemistry and linguistics, based on tests such as Humanity’s Last Exam, an AI benchmark.

“With respect to academic questions, Grok 4 is better than a PhD level in every subject, with no exception,” Musk said during the livestream.

He added that while the multimodal generative AI model has not yet discovered new technologies, it could do so later this year or by 2026.

“AI is advancing faster than any human,” Musk said.

Meanwhile, upstart AI search vendor Perplexity released an AI browser.

Examining Grok 4

The new model has reasoning and problem-solving capability and uses DeepSearch to access factual information from the web, including the X platform. DeepSearch is a tool for web-based analysis and helps with complex queries that require multiple steps.

Grok 4 can process text and image inputs and has a new voice called Eve. The model can also perform multiple tasks simultaneously and is agentic, meaning it can deploy one or more agents to carry out a task. It has a 256k context window and comes in standard and Heavy versions. Standard costs $30 per month, and Heavy costs $300.

The standard version performs single-agentic tasks, while the Heavy version is multi-agentic.

The release of Grok 4 comes only a few months after Grok 3 debuted earlier this year, and just days after Grok produced a slew of antisemitic responses.

While Grok 4 shows the progress xAI is making in foundation models, the uproar over the model overshadowed the latest version’s technical capabilities, said Arun Chandrasekaran, an analyst with Gartner.

“They have solid research and technical capabilities,” Chandrasekaran said.

Also, the benchmarks that xAI cites seem accurate, but enterprises should not make their decisions about models based on benchmarks, said Bradley Shimmin, an analyst with Futurum Group.

“It is very much a guidepost, at best,” Shimmin said. “It tells us that Grok 4 aligns with other frontier-scale models.”

He added that the Grok models have been in line with other frontier models for some time, but the Grok 4 update shows that xAI has been working to push the model ahead of its rivals on Humanity’s Last Exam.

Safe and responsible AI

Despite the advancement, xAI needs to focus on responsible and safe AI, according to many tech observers.

“They need to focus more on guardrails,” Chandrasekaran said. xAI should concentrate more on safety and ensure that safety mechanisms are layered into the entire process of training and releasing a model, including the handling of prompt inputs.

“Particularly in the case of Grok, it’s more about the recency,” Chandrasekaran said. This is because Grok seems to be taking context from the content coming from the X social media platform, known for its sometimes virulent and uncensored arguments about politics and culture. “They need to have a better filtering way from the context because otherwise the model could be very easily baited and biased from the recency of the inputs that are coming from X.”

In response to the comments the Grok chatbot made about the Holocaust and false statements about “white genocide” in South Africa, xAI blamed a programming error.

But for some, the model’s offensive hallucinations go beyond an error made by a computer system.

“This is just the latest instance in which [Musk’s] work and reputation are bound up with antisemitism,” said Michael Bennett, associate vice chancellor for data science and artificial intelligence strategy at the University of Illinois Chicago. “For the industry, it’s just a clear indicator that there’s still a lot of work to be done to get these models to produce useful, unbiased and socially acceptable responses. For his enterprises, it’s a further datapoint suggesting that his antisemitism perhaps is not a one-off.”

Permissiveness in the industry

The model’s responses also signal an attitude of laxness in the AI industry that has cropped up over the last year, said Kashyap Kompella, CEO of RPA2AI Research.

“The Grok incident is a sharp reminder that unfettered AI is a bad idea,” Kompella said. “Grok’s shenanigans expose the challenges of letting out AI chatbots unsupervised. We are ignoring and underinvesting in AI governance and guardrails. If there is a silver lining, this incident should wake up the AI industry to take AI governance seriously.”

Taking AI governance seriously is especially important because these tools and technologies reach well beyond the bounds of the U.S. and its traditions of free speech, Bennett said.

“For technologies that enable speech that reaches a broader audience … the norms that we ought to be targeting to get the technology to align with, must necessarily be broad as well.”

The lack of governance could also affect xAI’s ability to attract enterprise customers.

“Model safety and responsible AI is a critical evaluation factor for a lot of enterprises; it’s an area where xAI needs to make a lot of progress if they want to be a serious enterprise contender,” Chandrasekaran said.

Perplexity AI

Meanwhile, Perplexity made good on its promise to launch an AI-powered web browser.

On Wednesday, the AI-search vendor launched Comet, a browser that Perplexity said is built for today’s internet.

The new browser is available to Perplexity Max subscribers and to other select users on an invite-only basis.

With Comet, Perplexity appears to be following a trend that has been developing among major search engines and browsers such as Google, Edge and Safari, in which the value proposition is no longer the link the user has to click on, Shimmin said. Instead, the model produces an answer, while the link might still appear as a footnote.

“These are all merging into one user experience,” Shimmin said.

He added that it’s not clear whether Perplexity will disrupt existing search engines or the user experience, but the vendor differs from traditional search vendors because it is not trying to protect the existing search browser model. While other search engines have embedded AI into search more slowly to protect their ad revenue, Perplexity started embedding AI right away.

Comet can connect with enterprise applications, including Slack, and users can ask questions with voice and text.

Esther Shittu is an Informa TechTarget news writer and podcast host covering artificial intelligence software and systems.





One in three Greek SMEs engaged with AI tools, survey shows



One in three small and medium-sized enterprises (SMEs) in Greece has engaged with artificial intelligence (AI) tools, a shift that suggests the technology is no longer the preserve of large corporations, according to a new survey by the National Bank of Greece.

The study, titled “Artificial Intelligence as a Growth Catalyst for Greek Businesses,” found that most SMEs use AI for basic applications such as text and image generation. However, one in three users has ventured into more advanced use cases, including data analysis – indicating a quiet wave of technological experimentation that has so far gone under the radar of official statistics.

Despite this progress, the report also pointed to significant untapped potential. Around half of investment-active SMEs have yet to adopt any AI tools, the bank said.







How TBM is evolving to power the AI era – cio.com








TED Radio Hour: NPR


Illustration by Luke Medina/NPR; photo by Andrey Popov/Getty Images

Futurist Ray Kurzweil’s goal is to not die at all.

A far-fetched idea, and yet those who have followed Kurzweil’s work over the decades know that many of his wild ideas and predictions come true.

Kurzweil was one of the first to forecast how AI would turbocharge human potential. His thought-provoking predictions about digital technology come from over six decades of experience inventing groundbreaking tools that we use today — tools like text-to-speech synthesis in 1976 and the first music synthesizer in 1983.

Now 77, the computer scientist is focused on another prediction: that technology will soon make it possible to extend the human lifespan indefinitely.

Extending life through “longevity escape velocity”

“Right now you go through a year and you use up a year of your longevity,” Kurzweil explained in his 2024 TED Talk. “However, scientific progress is also progressing. … It’s giving us cures for diseases, new forms of treatment. … So you lose a year, you get back four months.”

As scientific progress accelerates, Kurzweil thinks the rate of developing treatments will outpace our aging. He calls this concept “longevity escape velocity.”

“For example, I’ve had these two problems, diabetes and heart disease, which I’ve actually overcome, and I really have no concern with them today,” Kurzweil told NPR’s Manoush Zomorodi. “So today I have an artificial pancreas that’s just like a real pancreas. It’s actually external, but it detects my glucose, determines the amount of insulin that I should have, and it works just like a real pancreas.”

With these types of medical advances, a person’s health could deteriorate less and less with each passing year.

“I don’t guarantee immortality. I’m talking about longevity escape velocity, where we can keep going without getting older. We won’t be aging in the same way that we are today,” said Kurzweil.

Is it only a matter of time before your mind merges with AI?

Along with his goal of escaping death, Kurzweil has envisioned a future where AI dramatically alters the way we think and live.

In 1999, in his book The Age of Spiritual Machines, Kurzweil predicted that by 2029, artificial general intelligence would match and even exceed human intelligence. And while that may not seem so far-fetched anymore, Kurzweil says there’s one way his prediction is unique:

He claims our minds will merge with AI.

“We’re going to be able to think of things and we’re not going to be sure whether it came from our biological intelligence or our computational intelligence. It’s all going to be the same thing.”

Kurzweil calls this “the Singularity” and predicts a future where nanobots directly connect our brains to the cloud, expanding our intelligence.

“We will be funnier, sexier, smarter, more creative, free from biological limitations. We’ll be able to choose our appearance. We’ll be able to do things we can’t do today, like visualize objects in 11 dimensions … speak all languages,” Kurzweil said in his 2024 TED Talk. “We’ll be able to expand consciousness in ways we can barely imagine.”

As far as Kurzweil is concerned, our minds are already starting to merge with machines and will only continue to do so.

TED Radio Hour‘s special series: Prophets of Technology

Curious to learn more about Kurzweil’s predictions about AI and technology? On TED Radio Hour‘s three-part series, Prophets of Technology, host Manoush Zomorodi speaks with Ray Kurzweil and other scientists, entrepreneurs and experts predicting and shaping our tech future. They share what they’ve gotten right — and wrong — and where they think we’re headed next.

This episode is part one of TED Radio Hour’s three-part series: Prophets of Technology, conversations with the minds shaping our digital world. Part two will be available on Friday, July 18 and part three will be available on Friday, July 25.

This digital story was written by Harsha Nahata and edited by Katie Monteleone and Rachel Faulkner White.

This episode of TED Radio Hour was produced by James Delahoussaye and Matthew Cloutier. It was edited by Sanaz Meshkinpour and Manoush Zomorodi.

Our production staff at NPR also includes Fiona Geiran.

Our audio engineers were Maggie Luthar, Jimmy Keeley, Stacey Abbott and Josephine Nyounai.

Talk to us on Instagram @ManoushZ, and on Facebook or email us at TEDRadioHour@npr.org.




