
Tools & Platforms

Let's Protect the Public Good Over Tech Profits



A long time ago, kings owned all the land, while serfs worked that land without owning anything. Back then, if a serf had said, “Hey, I think this little plot of land where I built my house and farm my crops should belong to me,” he would have been laughed at. 

“Oh yeah, how’s that going to work?” the king would have asked. “Is every one of you going to own your own little plot of land? Will you little people be able to buy and sell land to one another? How are you going to keep track of who owns what? Obviously, none of this is doable.”

In today’s increasingly digital world, data is becoming as valuable as land. And the lords of Silicon Valley don’t want us owning our data any more than the old kings wanted serfs owning their land.

Last week at the questionably titled “Winning the AI Race Summit” in Washington, D.C., President Donald J. Trump was talking about whether big tech companies should have to share the wealth with all the people whose skill, talent and labor contribute to the value of their extremely lucrative AI products.  

“You just can’t do it,” said Mr. Trump, “because it’s not doable.” 

I consider myself an extremely lucky artist. I've gotten to be a part of some incredible creative projects, but what I actually feel luckiest about is the people I've gotten to collaborate with. Making things together with my fellow passionate artists — whether "professional" or "unestablished" and whether "above the line" or "below" — is truly one of the great joys in my life. So you might assume I'd hate the very idea of using technology to do creative things that in the past could only be done "manually" by humans. But this isn't the case. I don't have a problem with AI as a technology; I think some of the new creative tools are inspiring. However, I believe we all have an urgent problem with today's big AI companies' unethical business practices.

The truth is that today’s GenAI couldn’t generate anything at all without its “training data” — the writing, photos, videos and other human-made things whose digital 1s and 0s get algorithmically crunched up and spit out as new. For more than half a decade now, AI companies have been scraping up massive amounts of this content without asking permission and without offering compensation to the people whose creations are so indispensable to this new technology. 

Silicon Valley’s justification for what I believe is a clear case of theft — which Mr. Trump echoed — is that a Large Language Model (LLM) is no different from a person who, for example, reads a book and takes inspiration from it. But this comparison is not only inaccurate, it’s dystopian and anti-human. These tech products are not people. And our laws should not be protecting their algorithmic data-crunching the way we protect human ingenuity and hard work. 

Enter Republican Sen. Josh Hawley and Democratic Sen. Richard Blumenthal (to thunderous applause), who introduced the AI Accountability and Personal Data Protection Act just last week as well. This new legislation would bar AI companies from training on copyrighted works without consent, and would allow people to sue over the unauthorized use of their personal data or copyrighted works. In stark contrast to Mr. Trump's Silicon Valley bootlicking summit, these two lawmakers from opposite sides of the aisle are standing up for working Americans against the giants of the tech industry. We should all hope their bill passes.

There are also glimmers of hope coming from the judiciary. In contrast to Mr. Trump's comments, the White House's official AI Action Plan doesn't address the question of training data and intellectual property, and administration officials have said it should be left up to the courts. Now, a few weeks ago, Mark Zuckerberg's Meta declared victory on the issue when a federal court ruled against a group of authors who had sued for copyright infringement. But in fact, the judge in that case said the authors probably lost only because their lawyers made the wrong argument under the legal framework of fair use.

In his ruling, Judge Vince Chhabria wrote: “No matter how transformative LLM training may be, it’s hard to imagine that it can be fair use to use copyrighted books to develop a tool to make billions or trillions of dollars while enabling the creation of a potentially endless stream of competing works that could significantly harm the market for those books.” So, if I were Zuck, I wouldn’t be celebrating too hard yet. There are plenty more lawsuits against AI companies still pending — including a recent first from major Hollywood studios — and I can only imagine the next set of plaintiffs will heed Judge Chhabria’s advice to focus on market harm.

But, what if none of this works? What if AI companies are just allowed to keep going with this unethical practice? Of course no one can predict the future, but it only stands to reason that this would eventually spell the end of any other commercial content business. Film and television, for sure. Professional journalism, as well. The new and vibrant creator economy of today’s YouTubers, podcasters, newsletter writers, all gone. I’m not saying people won’t make stuff anymore; I’m just saying they won’t be able to earn a living with what they’ve made. Because as long as an AI company can copy all of our content into their model at no cost and spit out quasi-new content for close to no cost, there’s no logical business case for paying human creators anymore. 

Don’t get me wrong — I do believe it’s possible that this new technology could propel a great leap forward in human creativity. But only if there’s a system in place that rewards people for their novel creative work as it’s incorporated into the AI models. Without such a system, and with no economic incentive for people to be creative, our media landscape and public square will become absolutely devoid of anything but algorithmically regurgitated slop optimized for attention maximization and ad revenue.  

As concerned as an artist like me may be with the future of art and creativity, this issue actually reaches far beyond the media industry. It's also about ordinary people's everyday struggle just to make ends meet.

We creators might be some of the first to feel the threat, but anyone who does their work on a computer is in the same crosshairs: people who work in marketing, or logistics, or finance, or design, just to name a few. And while white-collar jobs will be impacted earlier, blue-collar jobs will follow soon enough, especially as autonomous vehicles and robotics come into further use. Employment as a plumber is considered safe for now, but perhaps not for our kids’ generation. And how will an autonomous plumber-bot know how to do its job? The AI powering it would be trained on data that came from millions of human plumbers doing their jobs. Wouldn’t those humans deserve some compensation? Not if Silicon Valley gets its way. The decisions we make today really could commit us to a future where any valuable work done by any human being will become fair game for a tech company to hoover up into its AI model and monetize, while that human being gets nothing.

People feel this coming. In fact, in a recent poll, 77 percent of Americans said they’d rather get AI right than get it first. Of course, this sentiment is bad for business, so Big Tech responds by sounding the national security alarm. Mr. Trump echoed this common Silicon Valley refrain last week, warning that American AI companies must be allowed to continue to steal everyone’s data or else we’ll lose to China. Why do you think the summit was titled “Winning the AI Race”? Who’s going to argue with a matter of national security? 

But let’s be real. These AI businesses have no loyalty to the American people. Their only obligation is to their shareholders. Plus, if our national security would really be compromised if AI companies had to compensate people for their data, then theoretically, shouldn’t the government be willing to make up the difference? I was just corresponding with a D.C. lawyer about this, and he brought up the “Takings Clause” of our Fifth Amendment: “… nor shall private property be taken for public use, without just compensation.” To me, it still seems like the tech companies should pony up, not the government. But I think it goes to show that all this urgency to “beat China” is not really a matter of national security. It’s just competitive businessmen wanting to beat their competitors. Cash, as one great American poet said, rules everything around me.  

I didn’t vote for President Trump, but I think most of the people who did vote for him genuinely believed he would stand up against a powerful establishment and fight for working Americans. But there is no establishment more powerful in the world today than the handful of gigantic businesses building and selling AI, and none posing a greater threat to the American people’s widespread prosperity. If Mr. Trump really wanted to fight for working Americans, he would join Senators Hawley and Blumenthal in building out policy to protect the public good over Silicon Valley’s bottom line. It’s what he was elected to do. And it is, in fact, doable.

Joseph Gordon-Levitt is an actor, filmmaker, and founder of the online community HITRECORD. He recently started publishing “Joe’s Journal” on Substack and is set to direct an upcoming thriller about AI for Rian Johnson and Ram Bergman’s T-Street.





Tools & Platforms

iShares Future AI & Tech ETF (NYSEARCA:ARTY) Surges 27.6% in 2025 — Is It a Buy?




ARTY delivers strong tech exposure with 83% allocation to AI leaders, but volatility and valuations test investor conviction | That’s TradingNEWS


TradingNEWS Archive
8/30/2025 8:54:36 PM







Tools & Platforms

Emperor Musk’s AI Clothes – Will Lockett’s Newsletter



Musk has been parading around in his AI clothes for a while now. With the amount he screams and shouts about AI, you’d think he invented it. Of course, like everything else Musk peddles, he had nothing to do with its invention or development, except for underpaying and overworking his engineers and being an awful, overpromising PR man. However, people aren’t just noticing that Musk’s clothes are non-existent — they are also starting to point and laugh at his skid marks and the “I Love the Nazi Man” tattoo down his back. Why? Because he just can’t seem to get his AI up and working. And there is no little blue pill to remedy this situation.

Take, for example, Tesla’s hilariously crap Robotaxi rollout. The media at large is only just cottoning on to it being a huge PR stunt.

I have gone on ad nauseam about why Tesla’s self-driving cars are completely inadequate, so if you want to know the details, read my previous article here. But the helicopter view is that, unlike other autonomous vehicles, Tesla’s system has zero redundancy or safety nets and requires a nearly 100% accurate AI — which categorically can’t exist — to be even remotely safe.

Tesla is painfully aware of this fatal flaw, with Tesla engineers whistleblowing their concerns about it to the media (read more here) and the DOJ opening an investigation (read more here). So I, along with countless other commentators, was pretty damn relieved to find out that Tesla’s Robotaxis had safety drivers. There was even mention of remote workers being able to take control of the car and drive it safely in the case of a critical disengagement.

But this kind of system isn’t impressive enough for Musk. Any Uber or Lyft driver with a Tesla who wastes their money on FSD can do the exact same thing. There is no social or investor kudos to be gained for Tesla or Musk here. And here is a hint: Musk doesn’t make money from Tesla sales. After all, his $50 billion pay packet (which is now less, thanks to Musk tanking Tesla’s valuation) was the equivalent of him getting $10,000 for every Tesla ever sold! Tesla makes substantially less profit from every car sold than that.

So, what do you do if you have bet your entire company’s valuation on autonomous technology that you simply can’t deliver on?

Fudge it.

Tesla put the safety driver in the passenger seat! Because, look, it’s a self-driving car — there is no one in the driver’s seat!

This is a dangerous move that offers no benefit other than optics.

Rather than being able to properly take over the car and drive it to safety, the only thing these safety drivers could do was press a button to bring the vehicle to a stop. Which, as anyone with a driving licence will tell you, is not always the safest option! Particularly when you consider that Robotaxis have been spotted driving into lanes of oncoming traffic.

Yet, this bafflingly shite decision wasn't really reported on. Or at least it wasn't until a video surfaced a few days ago showing FSD failing and a safety driver being forced to exit the vehicle in the middle of traffic to take the driver's seat and regain control (watch it here).

This shows just how wildly dangerous Tesla’s Robotaxis are.

The safety driver had to take a serious risk to take control of the car. Not only that, but this incident suggests there are no remote operatives capable of taking over when things go wrong. That has been a core safety feature of developing self-driving ride-hailing services such as Waymo and Cruise since day one, and it is routinely used to keep passengers safe. The fact that it is absent from Robotaxis, which Tesla already knows have a far, far higher critical disengagement rate than any other self-driving ride-hailing service, could easily be seen as insanely negligent.

Musk is comfortable putting other people — not just the safety driver, but paying passengers and the public — in danger, all for a crappy PR stunt to cover up how bad his self-driving system actually is. And the media at large, as well as public consensus, are beginning to catch up to this horrifying fact.

However, Musk’s AI woes go far, far deeper than that.




