
AI Insights

Trump wants to keep ‘woke Marxist lunacy’ out of artificial intelligence models



President Donald Trump this week signed a series of executive orders and released an action plan aimed at the artificial intelligence industry.

Much of Trump’s plan gives the green light to the tech industry, according to Axios. It focuses on speeding up AI innovation, rather than addressing concerns such as model safety, environmental risks and the potential for job losses.

One of the executive orders Trump signed would speed up the permitting process for major AI infrastructure projects, and another focuses on promoting the export of U.S. AI products, according to The New York Times.

The approach signals the administration has embraced the tech industry’s arguments that it must be allowed to work with few guardrails, the Times said. That’s a different tack from the one taken by other governments that have approved AI regulations, including the European Commission.

“America is the country that started the AI race,” Trump said Wednesday, according to the Times. “And as president of the United States, I’m here today to declare that America is going to win it.”

But Trump is also taking aim at the ideology of AI models by attempting to dictate how chatbots deal with contentious political issues, Axios said.

One of his orders requires that any model bought by a federal agency be ideologically neutral. It calls for AI language models to “prioritize historical accuracy, scientific inquiry, and objectivity,” but also singles out diversity, equity and inclusion as an example of “ideological dogma,” according to Axios.

“Demanding that developers refrain from ‘ideological bias’ or be ‘neutral’ in their models is an impossible, vague standard that the Administration will be able to weaponize for its own ideological ends,” the Center for Democracy and Technology said, according to Axios.

The requirements on ideology pose technical challenges and raise questions about who decides what counts as an acceptable answer on some issues, according to Axios.

Trump decried “woke Marxist lunacy in the AI models” before signing his orders, according to The Guardian.

“Once and for all, we are getting rid of woke. Is that OK?” Trump said, drawing loud applause from the audience of AI leaders, according to The Guardian.

He also said former President Joe Biden had “established toxic diversity, equity and inclusion ideology as a guiding principle of American AI development.”

The growth of artificial intelligence is expected in the long run to boost business for chip manufacturers like Micron Technology. The company is planning to build a massive complex of chip plants in the town of Clay, north of Syracuse.





AI Insights

Robinhood CEO says just like every company became a tech company, every company will become an AI company



Earlier advances in software, cloud, and mobile capabilities forced nearly every business—from retail giants to steel manufacturers—to invest in digital transformation or risk obsolescence. Now, it’s AI’s turn.

Companies are pumping billions of dollars into AI investments to keep pace with a rapidly changing technology that’s transforming the way business is done.

Robinhood CEO Vlad Tenev told David Rubenstein this week on Bloomberg Wealth that the race to implement AI in business is a “huge platform shift” comparable to the mobile and cloud transformations in the mid-2000s, but “perhaps bigger.”

“In the same way that every company became a technology company, I think that every company will become an AI company,” he explained. “But that will happen at an even more accelerated rate.”

Tenev, who co-founded the brokerage platform in 2013, pointed out that traders trade not just to make money, but also because they love it and are “extremely passionate about it.”

“I think there will always be a human element to it,” he added. “I don’t think there’s going to be a future where AI just does all of your thinking, all of your financial planning, all the strategizing for you. It’ll be a helpful assistant to a trader and also to your broader financial life. But I think the humans will ultimately be calling the shots.”

Yet during an August appearance on the Iced Coffee Hour podcast, Tenev predicted AI will change jobs and advised people to become “AI native” quickly to avoid being left behind. He added that AI will allow businesses to scale far faster than previous tech booms did.

“My prediction over the long run is you’ll have more single-person companies,” Tenev said on the podcast. “One individual will be able to use AI as a huge accelerant to starting a business.”

Global businesses are banking on artificial intelligence technologies to move rapidly from the experimental stage to daily operations, though a recent MIT survey found that 95% of pilot programs failed to deliver.

U.S. tech giants are racing ahead, with the so-called hyperscalers planning to spend $400 billion on capital expenditures in the coming year, and most of that is going to AI.

Studies show AI has already permeated a majority of businesses. A recent McKinsey survey found 78% of organizations use AI in at least one business function, up from 72% in early 2024 and 55% in early 2023. Now the challenge for companies is keeping up with technology that is still evolving rapidly.

In the finance world, JPMorgan Chase’s Jamie Dimon believes AI will “augment virtually every job,” and described its impact as “extraordinary and possibly as transformational as some of the major technological inventions of the past several hundred years: think the printing press, the steam engine, electricity, computing, and the Internet.”





AI Insights

California Lawmakers Once Again Challenge Newsom’s Tech Ties with AI Bill



Last year, California Governor Gavin Newsom vetoed a wildly popular (among the public) and wildly controversial (among tech companies) bill that would have established robust safety guidelines for the development and operation of artificial intelligence models. Now he’ll have a second shot—this time with at least part of the tech industry giving him the green light. On Saturday, California lawmakers passed Senate Bill 53, a landmark piece of legislation that would require AI companies to submit to new safety tests.

Senate Bill 53, which now awaits the governor’s signature to become law in the state, would require companies building “frontier” AI models—systems that require massive amounts of data and computing power to operate—to provide more transparency into their processes. That would include disclosing safety incidents involving dangerous or deceptive behavior by autonomous AI systems, providing more clarity into safety and security protocols and risk evaluations, and providing protections for whistleblowers who are concerned about the potential harms that may come from models they are working on.

The bill—which would apply to the work of companies like OpenAI, Google, xAI, Anthropic, and others—has certainly been dulled from previous attempts to set up a broad safety framework for the AI industry. The bill that Newsom vetoed last year, for instance, would have established a mandatory “kill switch” for models to address the potential of them going rogue. That’s nowhere to be found here. An earlier version of SB 53 also applied the safety requirements to smaller companies, but that has changed. In the version that passed the Senate and Assembly, companies bringing in less than $500 million in annual revenue only have to disclose high-level safety details rather than more granular information, per Politico—a change made in part at the behest of the tech industry.

Whether that’s enough to satisfy Newsom (or more specifically, satisfy the tech companies from whom he would like to continue receiving campaign contributions) is yet to be seen. Anthropic recently softened on the legislation, opting to throw its support behind it just days before it officially passed. But trade groups like the Consumer Technology Association (CTA) and the Chamber of Progress, which count companies like Amazon, Google, and Meta among their members, have come out in opposition to the bill. OpenAI also signaled its opposition to the regulations California has been pursuing, without specifically naming SB 53.

After the Trump administration tried and failed to impose a 10-year moratorium on state AI regulations, California has the opportunity to lead on the issue—which makes sense, given that most of the companies at the forefront of the space operate within its borders. But that fact also seems to be part of the reason Newsom is so hesitant to pull the trigger on regulations despite all his bluster on many other issues. His political ambitions require money to run, and those companies have a whole lot of it to offer.




AI Insights

Will Smith allegedly used AI in concert footage. We’re going to see a lot more of this…



Earlier this month, footage from one of Will Smith’s gigs was released that allegedly featured AI-generated elements.

Snopes agreed that the crowd shots featured ‘some AI manipulation’.





