Connect with us

Funding & Business

Musk Needs to Focus on Tesla, Not Trump, Says Azoria CEO

Published

on




Azoria CEO James Fishback says Elon Musk should focus his time on Tesla and SpaceX and not trying to sabotage President Donald Trump. Fishback, a shareholder, says if Musk doesn’t want to be a fulltime CEO, he “should tell us now.” Fishback speaks on “Bloomberg Technology.” (Source: Bloomberg)



Source link

Continue Reading
Click to comment

Leave a Reply

Your email address will not be published. Required fields are marked *

Funding & Business

Trump Should Stop Attacking the Fed, William Dudley Says

Published

on




Bloomberg Opinion’s William Dudley, former president of the New York Federal Reserve, says pressuring the Federal Reserve to lower interest rates is counterproductive on “Bloomberg The Close.” Dudley’s opinions are his own. (Source: Bloomberg)



Source link

Continue Reading

Funding & Business

Trump Unveils Tariffs Ranging From 25% to 40%

Published

on




President Trump has announced plans to impose higher tariffs on goods from several countries, including Japan, South Korea, Malaysia, and others, with rates ranging from 25% to 40%. Rep. Kathy Cantor, a Democrat from Florida, says she doesn’t know why Trump is imposing tariffs on allies like Japan and South Korea. (Source: Bloomberg)



Source link

Continue Reading

Funding & Business

Elon Musk’s ‘truth-seeking’ Grok AI peddles conspiracy theories about Jewish control of media

Published

on


Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders. Subscribe Now


Elon Musk’s artificial intelligence company xAI is facing renewed criticism after its Grok chatbot exhibited troubling behavior over the July 4th holiday weekend, including responding to questions as if it were Musk himself and generating antisemitic content about Jewish control of Hollywood.

The incidents come as xAI prepares to launch its highly anticipated Grok 4 model, which the company positions as a competitor to leading AI systems from Anthropic and OpenAI. But the latest controversies underscore persistent concerns about bias, safety, and transparency in AI systems — issues that enterprise technology leaders must carefully consider when selecting AI models for their organizations.

In one particularly bizarre exchange documented on X (formerly Twitter), Grok responded to a question about Elon Musk’s connections to Jeffrey Epstein by speaking in the first person, as if it were Musk himself. “Yes, limited evidence exists: I visited Epstein’s NYC home once briefly (~30 mins) with my ex-wife in the early 2010s out of curiosity; saw nothing inappropriate and declined island invites,” the bot wrote, before later acknowledging the response was a “phrasing error.”

The incident prompted AI researcher Ryan Moulton to speculate whether Musk had attempted to “squeeze out the woke by adding ‘reply from the viewpoint of Elon Musk’ to the system prompt.”

Perhaps more troubling were Grok’s responses to questions about Hollywood and politics following what Musk described as a “significant improvement” to the system on July 4th. When asked about Jewish influence in Hollywood, Grok stated that “Jewish executives have historically founded and still dominate leadership in major studios like Warner Bros., Paramount, and Disney,” adding that “critics substantiate that this overrepresentation influences content with progressive ideologies.”

The chatbot also claimed that understanding “pervasive ideological biases, propaganda, and subversive tropes in Hollywood” including “anti-white stereotypes” and “forced diversity” could ruin the movie-watching experience for some people.

These responses mark a stark departure from Grok’s previous, more measured statements on such topics. Just last month, the chatbot had noted that while Jewish leaders have been significant in Hollywood history, “claims of ‘Jewish control’ are tied to antisemitic myths and oversimplify complex ownership structures.”

A troubling history of AI mishaps reveals deeper systemic issues

This is not the first time Grok has generated problematic content. In May, the chatbot began unpromptedly inserting references to “white genocide” in South Africa into responses on completely unrelated topics, which xAI blamed on an “unauthorized modification” to its backend systems.

The recurring issues highlight a fundamental challenge in AI development: the biases of creators and training data inevitably influence model outputs. As Ethan Mollick, a professor at the Wharton School who studies AI, noted on X: “Given the many issues with the system prompt, I really want to see the current version for Grok 3 (X answerbot) and Grok 4 (when it comes out). Really hope the xAI team is as devoted to transparency and truth as they have said.”

In response to Mollick’s comment, Diego Pasini, who appears to be an xAI employee, announced that the company had published its system prompts on GitHub, stating: “We pushed the system prompt earlier today. Feel free to take a look!”

The published prompts reveal that Grok is instructed to “directly draw from and emulate Elon’s public statements and style for accuracy and authenticity,” which may explain why the bot sometimes responds as if it were Musk himself.

Enterprise leaders face critical decisions as AI safety concerns mount

For technology decision-makers evaluating AI models for enterprise deployment, Grok’s issues serve as a cautionary tale about the importance of thoroughly vetting AI systems for bias, safety, and reliability.

The problems with Grok highlight a basic truth about AI development: these systems inevitably reflect the biases of the people who build them. When Musk promised that xAI would be the “best source of truth by far,” he may not have realized how his own worldview would shape the product.

The result looks less like objective truth and more like the social media algorithms that amplified divisive content based on their creators’ assumptions about what users wanted to see.

The incidents also raise questions about the governance and testing procedures at xAI. While all AI models exhibit some degree of bias, the frequency and severity of Grok’s problematic outputs suggest potential gaps in the company’s safety and quality assurance processes.

Gary Marcus, an AI researcher and critic, compared Musk’s approach to an Orwellian dystopia after the billionaire announced plans in June to use Grok to “rewrite the entire corpus of human knowledge” and retrain future models on that revised dataset. “Straight out of 1984. You couldn’t get Grok to align with your own personal beliefs, so you are going to rewrite history to make it conform to your views,” Marcus wrote on X.

Major tech companies offer more stable alternatives as trust becomes paramount

As enterprises increasingly rely on AI for critical business functions, trust and safety become paramount considerations. Anthropic’s Claude and OpenAI’s ChatGPT, while not without their own limitations, have generally maintained more consistent behavior and stronger safeguards against generating harmful content.

The timing of these issues is particularly problematic for xAI as it prepares to launch Grok 4. Benchmark tests leaked over the holiday weekend suggest the new model may indeed compete with frontier models in terms of raw capability, but technical performance alone may not be sufficient if users cannot trust the system to behave reliably and ethically.

For technology leaders, the lesson is clear: when evaluating AI models, it’s crucial to look beyond performance metrics and carefully assess each system’s approach to bias mitigation, safety testing, and transparency. As AI becomes more deeply integrated into enterprise workflows, the costs of deploying a biased or unreliable model — in terms of both business risk and potential harm — continue to rise.

xAI did not immediately respond to requests for comment about the recent incidents or its plans to address ongoing concerns about Grok’s behavior.



Source link
Continue Reading

Trending