The best bit about ChatGPT and other AI chatbots is the tech's ability to meet your needs based on how you word your prompts.
In the world of AI, prompt engineering is the practice of formulating your inputs so the model knows what kind of response you want.
I’ve covered many of my favorite prompts over the last few months, including this excellent one that makes it easy to learn anything.
That said, there are actually a lot of small additions, let's call them 'cheat codes', that you can tack onto your usual prompts to get better answers from AI.
This article is mostly for beginners and people who are just starting out with AI, but even if you know your way around, read on as you might be surprised by one of the entries.
Here are five simple ChatGPT cheat codes to help you get better answers from your conversations with AI.
This article was inspired by a Reddit post; you can find the original source here.
1. ELI5 (explain like I’m 5)
Type ELI5 followed by your topic into ChatGPT, and you’ll get a very simple explanation of the thing you’re trying to understand.
ELI5 (explain like I'm 5) is common shorthand on internet platforms like Reddit, where one of the most-followed subreddits in the world is filled with human-written explanations of complex topics.
If you enjoy learning about things in a more streamlined fashion, adding ELI5 to the start of your prompts is a great way to make things simple to understand.
2. Step-by-step
Need help breaking tasks down? Type 'Step-by-step' before your prompt, and ChatGPT will respond with easy-to-follow steps that can make your next overwhelming task more manageable.
You can use this addition to a prompt with almost anything, and it’s a fantastic way to get quick answers to problems.
Whether you need to find a specific setting on your iPhone or want a quick carbonara recipe, try adding 'step-by-step' to the start of your next prompt and get a guide from AI.
3. TL;DR (too long; didn’t read)
Want to know what a long piece of text is about without reading the full thing? Simply add TL;DR followed by some kind of text, and ChatGPT will summarize it on the spot.
There are plenty of AI summarization tools built into your tech products, and often, they’re more convenient than using ChatGPT for the same job.
That said, there are definitely times when this quick addition to a prompt can help, so it’s worth knowing that this internet slang also works with AI.
4. Decision tree
If you type Decision tree, ChatGPT will help you make a choice based on a variety of options. This is great to add to a prompt when you’re trying to rationalize potential strategies or get a different perspective on a problem.
There are many ways to get ChatGPT to help with making decisions. In fact, I think ChatGPT's ability to help you understand different perspectives on a problem is one of the AI chatbot's greatest strengths.
That said, it’s cool to simply type “Decision tree” followed by your query like “Should I go to the cinema tonight?” and watch as ChatGPT helps you come to a conclusion.
5. Diagram
Need a diagram? ChatGPT can generate one if you just type 'diagram' into any conversation.
That decision tree I mentioned above? Combine that with the ability to make a diagram, and ChatGPT will quickly generate imagery to match your discussion.
While most people are aware of ChatGPT’s image generation prowess, it’s awesome to be able to just ask for an image related to your conversation at any time during the back-and-forth with AI.
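Because all five cheat codes are nothing more than text prefixes, they also work anywhere you can send a prompt programmatically. Below is a minimal, hypothetical sketch using the OpenAI Python SDK's chat-completions interface; the exact prefix wording, the helper function, and the model name are assumptions for illustration, not anything ChatGPT itself prescribes.

```python
# Minimal sketch: the five "cheat codes" are just prompt prefixes.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the model name below is an assumption, so swap in whichever model you use.
from openai import OpenAI

client = OpenAI()

PREFIXES = {
    "eli5": "ELI5:",               # explain like I'm 5
    "steps": "Step-by-step:",      # break a task into easy-to-follow steps
    "tldr": "TL;DR:",              # summarize the text that follows
    "decision": "Decision tree:",  # weigh options before making a choice
    "diagram": "Diagram:",         # ask for imagery to match the discussion
}

def ask(cheat_code: str, topic: str) -> str:
    """Send `topic` to the model with one of the cheat-code prefixes."""
    prompt = f"{PREFIXES[cheat_code]} {topic}"
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("eli5", "How does a large language model learn?"))
```

The point is simply that the prefix does the work: whether you type it into the chat window or prepend it in a script, the model sees the same instruction.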
In 2016, Noam Chomsky, the father of modern linguistics, published the book Who Rules the World?, referring to the United States' dominance in global affairs. Today, policymakers such as U.S. President Donald J. Trump argue that whoever wins the artificial intelligence (AI) race will rule the world, driven by a relentless, borderless competition for technological supremacy. One strategy gaining traction is open-source AI. But is it advisable? The short answer, I believe, is no.
Closed-source and open-source represent the two main paradigms in software, and AI software is no exception. While closed-source refers to proprietary software with restricted use, open-source software typically involves making the underlying source code publicly available, allowing unrestricted use, including the ability to modify the code and develop new applications.
AI is impacting virtually every industry, and AI startups have proliferated nonstop in recent years. OpenAI secured a multi-billion-dollar investment from Microsoft, while Anthropic has attracted significant investments from Amazon and Google. These companies are currently leading the AI race with closed-source models, a strategy aimed at maintaining proprietary control and addressing safety concerns.
But open-source models have consistently driven innovation and competition in software. Linux, one of the most successful open-source operating systems ever, is pivotal in the computer industry: Google Android, which is used in approximately 70 percent of smartphones worldwide, Amazon Web Services, Microsoft Azure, and all of the world's top 500 supercomputers run on Linux. The success story of open-source software naturally fuels enthusiasm for open-source AI. And behind the scenes, companies such as Meta are positioning themselves by developing open-source AI initiatives to promote the democratization and growth of AI through a joint effort.
Mark Zuckerberg, in promoting an open-source model for AI, recalled the story of the Linux open-source operating system, which became “the industry standard foundation for both cloud computing and the operating systems that run most mobile devices—and we all benefit from superior products because of it.”
But the story of Linux is quite different from Meta’s “open-source” AI project, Llama. First and foremost, no universally accepted definition of open-source AI exists. Second, Linux had no “Big Tech” corporation behind it. Its success was made possible by the free software movement, led by American activist and programmer Richard Stallman, who created the GNU General Public License (GPL) to ensure software freedom. The GPL allowed for the free distribution and collaborative development of essential software, most notably the Linux open source operating system, developed by Finnish programmer Linus Torvalds. Linux has become the foundation for numerous open-source operating systems, developed by a global community that has fostered a culture of openness, decentralization, and user control. Llama is not distributed under a GPL.
Under the Llama 4 licensing agreement, entities with more than 700 million monthly active users in the preceding calendar month must obtain a license from Meta, “which Meta may grant to you in its sole discretion” before using the model. Moreover, algorithms powering large AI models rely on vast amounts of data to function effectively. Meta, however, does not make its training data publicly available.
Thus, can we really call it open source?
Most importantly, AI presents fundamentally different and more complex challenges than traditional software, with the primary concern being safety. Traditional algorithms are predictable; we know the inputs and outputs. Consider the Euclidean algorithm, which provides an efficient way for computing the greatest common divisor of two integers. Conversely, AI algorithms are typically unpredictable because they leverage a large amount of data to build models, which are becoming increasingly sophisticated.
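For contrast, consider what that predictability looks like in practice. The short Python sketch below implements the Euclidean algorithm: given the same two integers, it always performs the same steps and returns the same, verifiable answer, which is precisely the guarantee a large trained model does not offer.

```python
def gcd(a: int, b: int) -> int:
    """Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b)
    until b is zero. Fully deterministic and easy to verify."""
    while b:
        a, b = b, a % b
    return abs(a)

# The same inputs always yield the same, provably correct output.
assert gcd(252, 105) == 21
```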
Deep learning algorithms, which underlie large language models such as ChatGPT and other well-known AI applications, rely on increasingly complex structures that make AI outputs virtually impossible to interpret or explain. Large language models are performing increasingly well, but would you trust something that you cannot fully interpret and understand? Open-source AI, rather than offering a solution, may be amplifying the problem. Although it is often seen as a tool to promote democratization and technological progress, open source in AI increasingly resembles a Ferrari engine with no brakes.
Like cars, computers, and software, AI is a powerful technology, but as with any technology, it can do harm if misused or deployed without a proper understanding of the risks. Currently, we do not know what AI can and cannot do. Competition is important, and open-source software has been a key driver of technological progress, providing the foundation for widely used technologies such as Android smartphones and web infrastructure. It has been, and continues to be, a key paradigm for competition, especially in a digital framework.
Is AI different because we do not know how to stop this technology if required? Free speech, free society, and free software are all appealing concepts, but let us do better than that. In the 18th century, French philosopher Baron de Montesquieu argued that “Liberty is the right to do everything the law permits.” Rather than promoting openness and competition at any cost to rule the world, liberty in AI seems to require a calibrated legal framework that balances innovation and safety.
The mysteries of the brain have captivated scientists for more than 100 years, most notably illustrated in the detailed drawings of neuroanatomist Santiago Ramón y Cajal. These drawings and his findings related to neuronal organization pushed neuroscience into the modern era.1
Since Ramón y Cajal, researchers have developed new approaches to answer questions about the types of cells in the brain and their functions. Neuroscientists understand how calcium allows these cells to send messages and the role of dopamine in the reward system. They can spy on neuron activity using patch clamp electrophysiology and can even watch as someone uses a specific region of the brain with functional magnetic resonance imaging.
However, the factors that determine how neurons connect and interact following a stimulus remain elusive. The task seemed so enormous that some scientists considered it impossible. Francis Crick said as much in a 1979 article in Scientific American, calling a wiring diagram of the brain “asking for the impossible.”2
Clay Reid, today a neuroscientist at the Allen Institute, read this article with Crick’s comment in 1982 when he was a recent college graduate in physics and mathematics from Yale University. “I wish I could say…from the moment I read it, that that was what I wanted to solve. That’s not true, but I think it probably lit a fire,” Reid said.
Eventually, this burning interest led Reid and other researchers to create the most comprehensive wiring diagram of a mammalian brain to date. Fueled by emerging interest in expanding the power of artificial intelligence (AI), the Machine Intelligence from Cortical Networks (MICrONS) program combined anatomical information and functional activity of a neuronal circuit on the scale of hundreds of thousands of cells to provide insights into the brain’s processes. This resource can help researchers begin to understand what guides neuronal interactions and how these connections influence their functions.
Exploring Neuronal Connections Through Structure and Function
Although he didn’t immediately dive into creating a map of the brain, Reid wasn’t too far removed from neural circuitry. After transferring from physics to neuroscience in graduate school, he found a research home exploring the inner workings of the visual cortex. Originally, he used electrophysiology to study the function of these neurons, but a new technique using calcium-sensitive fluorescent probes was lighting up neuroscientists’ computer screens.3,4
“I got tenure and decided it’s time to be brave,” Reid said about switching to calcium imaging to study neurons in the eye.5 “Rather than hearing a pop every time a neuron fired, we were able to see a flash every time a neuron fired.”
Santiago Ramón y Cajal illustrated some of the most detailed depictions of neurons in the brains of animals.
Santiago Ramón y Cajal/Public Domain
Around this time, Reid returned to the question about what neurons do in the living brain and how they do it. Answering this question, though, would require anatomical information about how neurons connect to each other, a field called connectomics, which Reid said was most accurately collected with electron microscopy. In 2004, physicist Winfried Denk at the Max Planck Institute applied electron microscopy to connectomics, demonstrating the ability to reconstruct three-dimensional features of tissues from serial sections—called volume electron microscopy—at micrometer scales using computer automation.6 “It was exactly the technique that we needed to answer the questions that we wanted to do,” Reid said.
Indeed, Reid and his team began combining volume electron microscopy with calcium imaging to explore neural circuitry in parallel with their functions.7 However, these studies looked at a few thousand neurons—barely a fraction of even a mouse brain. Scaling up to the size of a more comprehensive circuit, which includes hundreds of thousands of neurons, though, would require a far larger investment of time and resources. “When it got to that scale, it was a scale that required a much larger group and collaborators all over the country,” Reid said. “At that point, it’s definitely not an ‘I’, it’s a ‘we’.”
Luckily, around 2014, interest in just this type of project was coming online.
Advancing Neuroscience to Improve Artificial Intelligence
The draw to study the inner workings of the brain for neuroscientists stems from their interest in figuring out how creatures, including humans, learn and become individuals, as well as what goes wrong in neurological diseases. These incredible biological processing machines have also inspired developers of machine learning systems.
One funding agency, the Intelligence Advanced Research Projects Activity (IARPA), part of the Office of the Director of National Intelligence, sought to study brain circuitry to build better machine learning algorithms that could replicate the processes of neurons.
Previous studies that explored neuronal connections and functions looked either at small scales of up to a thousand neurons or at large-scale neuronal interactions across the whole brain with functional magnetic resonance imaging. Focusing on the middle scale—neuronal circuits comprising tens to hundreds of thousands of neurons—could offer more insights into how neural circuits interpret information, but it would require advances in managing petabytes of data and processing the results.
Seeking to expand the work done on parts of circuits by researchers like Reid, IARPA created the MICrONS program in 2014 to map a circuit on the millimeter scale. “It was, let’s say, an ambitious goal,” said Andreas Tolias, a neuroscientist who was at the Baylor College of Medicine at the time.
Today at Stanford University, Tolias explores the intersection of neuroscience and AI. “I want to understand intelligence,” he said. He’s also been interested in AI and, because of its similarities to the brain, the field of neuroAI, which he described as, “basically forming bridges between these two fields in a more active way and in particular, [an] experimental or data driven way. So, I found that very appealing.”
Using morphological features observed in their electron microscopy data, researchers identified connection patterns in inhibitory Martinotti cells in the mouse visual cortex. Pseudocolored synaptic outputs indicate whether the cells connect to excitatory cells in layers 2 and 3 of the cortex (red) or in layer 5 (cyan).
Clare Gamlin/Allen Institute
Previously, Tolias and his group developed new methods and approaches to study and interpret signals from neurons.8,9 They also designed new deep learning models to explore the function of the visual cortex.10,11
“In neuroscience, we’ve been data limited, and we still are in many domains. So, at the time, I was looking for opportunities where we could scale up large data collection,” Tolias said, adding that the chance to do exactly this is what attracted him to the MICrONS project. Tolias and his colleagues applied for funding through the MICrONS program to conduct functional imaging of neurons in the visual cortex and use machine learning to explore the mechanisms of this circuit.
“I always dreamed that there would be two main interactions: AI tools to help us understand the brain, and then eventually, as understanding the brain at some fundamental level should also be helpful to AI,” he said.
Automation and Algorithms Yield New Neuron Knowledge
Ultimately, IARPA awarded grants to Reid’s group, Tolias’s group, and a team led by Sebastian Seung, a neuroscientist at Princeton University, to carry out the goals of the MICrONS program. Because researchers had previously characterized the connectomics of the visual cortex, IARPA selected this circuit to focus on for the project.
The teams would collect functional data from neurons in this region while a mouse watched specific visual stimuli using calcium imaging. Then, they planned to obtain anatomical information from this same area using volume electron microscopy. Finally, they would reconstruct the images and align these with the neuron activity data, while at the same time developing digital models from this functional data.
“It sounds like, and it is, a difficult problem to take a bunch of pictures of individual neurons in a living brain, slice them up into thousands of pieces, put it back together, and then say ‘this is neuron 711 here. Here’s neuron 711 in the electron microscopy,’” Reid said about the endeavor. Even so, he added, “It’s like matching a fingerprint with itself. Once you see the match, you know it’s correct.”
However, even before they had the data, the research pushed the teams to develop better technologies to accomplish their goals. “There’s a lot of engineering all the way from hardware to software to bringing in AI tools and scaling up the imaging so it could be done efficiently,” Tolias said.
For example, Reid’s team developed automated serial sectioning processes and pipelines for electron microscopy imaging.12,13 “The electron microscopy is this beautiful, amazing, three-dimensional data,” Reid said. “But back in the day, the way to make sense of that data was to have human beings wander through the three-dimensional data and essentially lay down breadcrumbs one by one to trace out the neurons.”
Seung, he said, pioneered several advancements in tools for outlining, reconstructing, and editing this data that overcame these analysis limitations to “do the equivalent of millions of years of human labor.”14-16 In fact, Reid said that the last four years of the project were really spearheaded by data scientists including those in Seung’s team and, at the Allen Institute, Forrest Collman and Nuno Maçarico da Costa.
Eventually, though, the teams began to realize the fruits of their labor. Reid recalled seeing some of the first reconstructed images. “It was extraordinary,” he said, adding that now, someone can explore the entire 3D structure of one of the processed neurons in the MICrONS dataset.
Beyond the wiring diagrams, the researchers also revealed new insights into the functions of the visual circuit. They identified an overarching mechanism guiding cell communication, showing that inhibitory neurons target specific neurons to block their activity, and that sometimes different types of these inhibitory neurons cooperate to target the same cells through distinct mechanisms.17 In another study, the researchers revealed that excitatory neurons’ structures exist on a continuum, that these forms related to the cells’ functions, and that the projections of some cells are geographically confined to specific regions.18
Using the functional data, Tolias’s group trained an artificial neural network to create a brain model, or digital twin, of the visual circuit.19 This model would try to replicate the neural activity from actual brain data and also solve novel problems using these same neural processes.
Unlike the Human Brain Project, in which scientists tried to recreate models of the brain architecture, Tolias’s team trained their digital twin on only the neural activity from visual stimuli. Subsequently, this model successfully predicted neuronal responses to novel stimuli and features of these cell types despite not receiving anatomical information.20 “Now, it forms a bridge, or sort of a Rosetta Stone, if you want, between artificial intelligence and real brains,” he said.
These digital twins, Tolias said, can allow researchers to perform experiments in silico that would be difficult or even impossible in real animal brains. “What would have taken, let’s say 10,000 or 100,000 years, we can run it very fast on [graphics processing units], because now we can parallelize. Instead of having one digital twin of a mouse, we can have 100,000.”
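The published digital twins are deep networks trained on large-scale recordings, but the core idea, fitting a model to recorded stimulus and response pairs and then querying it on stimuli the animal never saw, can be illustrated with a deliberately tiny sketch. Everything below (the synthetic "stimuli", the linear ridge-regression readout, the random seed) is an assumption chosen for brevity; it is not the MICrONS modeling pipeline.

```python
# Toy sketch of the "digital twin" idea: fit a model on recorded
# stimulus/response pairs, then predict responses to unseen stimuli.
# The synthetic data and simple ridge-regression readout are assumptions
# for illustration only, not the deep-network models used in MICrONS.
import numpy as np

rng = np.random.default_rng(0)
n_stimuli, n_features, n_neurons = 500, 64, 20

# Stand-ins for visual-stimulus features and recorded calcium responses.
stimuli = rng.normal(size=(n_stimuli, n_features))
true_tuning = rng.normal(size=(n_features, n_neurons))
responses = stimuli @ true_tuning + 0.5 * rng.normal(size=(n_stimuli, n_neurons))

# Fit the "twin" on the first 400 trials (closed-form ridge regression).
X_train, Y_train = stimuli[:400], responses[:400]
lam = 1.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_features),
                    X_train.T @ Y_train)

# Query the twin on held-out "novel" stimuli and check its predictions.
X_test, Y_test = stimuli[400:], responses[400:]
pred = X_test @ W
mean_corr = np.mean([np.corrcoef(pred[:, i], Y_test[:, i])[0, 1]
                     for i in range(n_neurons)])
print(f"mean prediction correlation on novel stimuli: {mean_corr:.2f}")
```

Once such a model is fit, probing it with new stimuli costs only compute, which is the sense in which many thousands of in-silico experiments can be run in parallel.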
At the time of the MICrONS dataset publication, the scientists working on the project had only finalized reconstructions of a couple thousand of the tens of thousands of neurons collected in the study. Tolias said that, because of the current need for manual proofreading, the reconstructions take time, but new advances in machine learning could continue to simplify this process.
Even so, the team was excited to show that such a lofty goal was attainable. “It’s beyond our wildest dreams, frankly, that when we started in 2006 that less than 20 years later, at least the first draft of Francis Crick’s impossible experiment was done,” Reid said. Reflecting on the experiment’s completion, he said, “It’s extreme pleasure and a bit of disbelief.”
Scaling Up Brain Science of Mice and Men
The findings also stunned neuroscientists not involved with the project. Sandra Acosta, a neurodevelopmental biologist at the University of Barcelona and Barcelonaβeta Brain Research Center, referenced the drawings of Ramón y Cajal to highlight the advancement. “The level of complexity, although it was fantastic, it was 120-year-old microscopes drawing by hand by like incredible scientists with a very big mind, but that is, at some point, very subjective,” she said, contrasting it with the systematized and objective images from MICrONS.
Using machine learning models, researchers built a digital twin that learned how to respond to stimuli, such as visual information, in the same way that biological neurons do. On the left is an image showing activated neurons from the cubic millimeter of studied brain area, and on the right is a representation of a digital twin of this information.
Tyler Sloan, Quorumetrix
“For me, the most shocking [thing] was seeing the numbers,” Acosta continued. The researchers recorded calcium imaging data from more than 75,000 neurons and imaged more than 200,000 cells in the cubic millimeter of the visual cortex that they mapped. “That’s beautiful,” she added.
Cian O’Donnell, a computational neuroscientist at Ulster University, said that a major advantage of the MICrONS project over similar previous studies is that the data were both high throughput and high fidelity. “We had some information, but nowhere near the level of resolution as the MICrONS project has delivered.”
“It’s letting us ask questions, qualitatively different questions that we couldn’t address before,” O’Donnell continued. He and his team study learning using computer modeling, and he said that the paired recordings of brain activity during visual stimulation with the anatomical connectomics data would be helpful information to answer questions he’s interested in.
Similarly, Acosta is looking forward to seeing similar research that evaluates brains from animals with neurodegenerative conditions. “It will be nice to see the extension of this neurodegeneration at a very molecular level, or a synaptic level, as it is here,” she said.
Clay Reid led a team of researchers at the Allen Institute to process one cubic millimeter of a mouse brain in the visual cortex and then image this tissue using electron microscopy.
Jenny Burns/Allen Institute
Beyond the physical data and the findings themselves, the researchers developed a variety of tools and resources to facilitate data processing and use. One tool, Neural Decomposition, expedites the editing process by fixing errors introduced by automated data processing tools.21 Another tool, the Connectome Annotation Versioning Engine, allows researchers to analyze information from one part of the dataset while another part is being edited.22 This resource helped other researchers reconstruct one cubic millimeter of human cortex from electron microscopy data.23 Meanwhile, the reconstruction tools developed by Seung's group aided the development of the first whole-brain wiring diagram of the fly.24
“So, yes, we found out some things about the visual cortical circuit, but I think the influence is far stronger than that,” Reid said.
Additionally, a follow-up project, Brain CONNECTS, is underway, using data and resources developed in the MICrONS study to scale the approach up to the whole mouse brain. “It’s so unimaginable that Francis Crick wouldn’t have said this is impossible, because it’s absurd,” Reid said. MICrONS researchers Maçarico da Costa and Collman are leading one of the Brain CONNECTS projects, in which they are using volume electron microscopy to map another region of the mouse brain and combining this with existing gene expression data to create a cell atlas.
“It’s not just going to be like, if we scale it up 10 times, that doesn’t mean nine more of the same things. It means different brain regions being connected to each other,” O’Donnell said about expanding this area of research. Having a whole brain diagram, he said, “it’s going to change neuroscience forever.”
He added that this could eventually lead to studying the brains of multiple mice, allowing for exploration into variability between brains that could help researchers, like O’Donnell, study differences in brains with autism-like traits.
Eventually, researchers including Reid want to extend these advances into mapping the human brain at the same scale. “I want to be involved in [the whole mouse brain], but I really want to map the human brain, because it’s the human brain,” he said.
The FTC is investigating how companies use personal and behavioral data to set individualized prices.
Critics say surveillance pricing threatens consumer welfare and may cross constitutional lines.
A lack of robust data privacy protections is at the heart of the issue, according to experts.
They know who you are, where you live, how much money you make and where you spent your last vacation.
They’re watching what websites you visit, tracking your mouse movements while you’re there and what you’ve left behind in virtual shopping carts. Mac or PC? iPhone or Android? Your preferences have been gathered and logged.
And they’ve got the toolkit, powered by artificial intelligence software, to assemble all this information to zero in on exactly how much you’re likely willing to pay for any product or service that might strike your fancy.
The “they” is a combination of retailers and service providers, social media operators, app developers, big data brokers and a host of other entities with whom you have voluntarily and involuntarily shared personal and behavioral information. And they’ve even come up with new labels to make you feel better about the systems that are using your personal data to set a custom price.
Dynamic pricing. Personalized pricing. Even “discount pricing.”
FTC investigates surveillance pricing
But the Federal Trade Commission and others have another name for it: surveillance pricing.
In an ongoing investigation launched last year, the FTC is looking into the practice of surveillance pricing, a term for systems that use personal consumer data to set individualized prices — meaning two people may be quoted different prices for the same product or service, based on what a company predicts they are willing — or able — to pay.
Part of the FTC’s mandate includes working to prevent fraudulent, deceptive and unfair business practices along with providing information to help consumers identify and avoid scams and fraud, according to the agency.
In a preliminary report in January, the agency highlighted actions it’s already taken to quell the rise of surveillance pricing amid its effort to gather more in-depth information on the practice:
One complaint issued by the FTC included allegations that a mobile data broker was harvesting consumer information and sensitive location data, including visits to health clinics and places of worship, which it later sold to third parties.
The agency said it issued the first-ever ban on the use and sale of sensitive location data by a data broker which allegedly sold consumer location data it collected from third-party apps and by purchasing location data from other data brokers and aggregators.
Another FTC complaint alleged that the data broker InMarket used consumers’ location data to sort them into particularized audience segments — such as “parents of preschoolers,” “Christian church goers,” “wealthy and not healthy” — which it then provided to advertisers.
Last year, the FTC issued orders to eight companies that offer surveillance pricing products and services that incorporate data about consumers’ characteristics and behavior. The orders, according to the agency, seek information about the potential impact these practices have on privacy, competition and consumer protection.
“Firms that harvest Americans’ personal data can put people’s privacy at risk. Now firms could be exploiting this vast trove of personal information to charge people higher prices,” said then-FTC Chair Lina M. Khan. “Americans deserve to know whether businesses are using detailed consumer data to deploy surveillance pricing, and the FTC’s inquiry will shed light on this shadowy ecosystem of pricing middlemen.”
AI-driven consumer profiling
George Slover, general counsel and senior counsel for competition policy at the Center for Democracy and Technology, has his own term for the practice of using personal information to construct prices for individuals: “bespoke pricing.” He said it poses a fundamental threat to consumer welfare and free market principles.
Proponents of individualized pricing systems have argued that the method can send prices in both directions — higher prices for some, lower prices for others. But Slover warned that, unlike uniform pricing, bespoke pricing enabled by “big data” and artificial intelligence gives companies little incentive to offer discounts to those who can’t afford market prices.
“Theoretically, maybe,” Slover told the Deseret News. “But as a practical matter what the sellers will do is maximize their prices. There’s a lot less incentive to lower the price for someone than raise the price for someone.”
Slover characterized the FTC investigation as “very useful” and said it could reveal more about the methodologies behind bespoke pricing and potentially lead to appropriate restrictions under existing law.
For now, he said, consumers have few defenses beyond masking their data.
“Potential ways for consumers to change their profile include working through a (virtual private network) internet connection, using an anonymized intermediary or even setting up a bogus, fictional profile … looking to reduce how much they have to pay,” Slover said.
But he cautioned that vast troves of consumer data on virtually every internet user have already been harvested and repackaged by brokers.
Slover, who has worked on antitrust and competition law for more than 35 years, including stints with the U.S. Department of Justice and the U.S. House of Representatives’ Judiciary Committee, tied the debate over surveillance pricing to the broader need for comprehensive privacy protections.
“My organization, the Center for Democracy and Technology, was founded 30 years ago when the internet was getting off the ground,” he said. “One of the issues we’ve been focusing on since the beginning is the privacy of data … and getting Congress to implement a strong, comprehensive privacy law.”
Looking to protect data privacy
Utah state Rep. Tyler Clancy, R-Provo, said he’s not willing to wait for Congressional action on data privacy, which he sees as the critical underlying issue behind surveillance pricing.
“The privacy aspect is the biggest issue for me,” Clancy told the Deseret News. “Companies use that data to do business … but this is an area where we need some guardrails.
“If you’re creating a price for someone from immutable characteristics — race, faith, gender, ethnicity — that runs into constitutional concerns.”
Clancy said he is exploring “consent provisions” to ensure Utahns can know if their data is being used in pricing systems. He cited the response to recent news stories generated after a Delta Air Lines executive indicated in an earnings call that the carrier was testing out artificial intelligence technology that could set fares based on “the amount people are willing to pay for the premium products related to the base fares.” Delta issued a follow-up statement, clarifying that it was not using any manner of “individualized pricing” to set air fares based on customer data.
“There is no fare product Delta has ever used, is testing or plans to use that targets customers with individualized prices based on personal data,” said Delta chief external affairs officer Peter Carter.
But that qualification came after an uproar had already begun, Clancy said, and the issue has drawn interest from across the political spectrum.
“When this news story originally broke, it was a shock to people on the political right and the political left,” Clancy said.
Clancy said he’s working on a proposal for the 2026 session of the Utah Legislature and aims to compel transparency in how business entities use personal information in pricing systems for products and services.
“Sunshine is the best disinfectant and overall that’s the goal I’m trying to achieve here,” Clancy said. “It will lead to a better and freer market and that’s a win for everyone.”
Targeted advertising came first
When it comes to names, BYU marketing professor John Howell isn’t a fan of surveillance pricing, a term he believes is unnecessarily inflammatory. But he says the growing controversy over the practice isn’t about whether it’s possible or not, but whether consumers will tolerate it.
“This isn’t a new phenomenon,” Howell said in a Deseret News interview. “We’ve been paying attention to this at least from the early 2000s, individual level pricing, first-degree price discrimination. Charge every person a specific price based on their willingness to pay.”
Howell said economists have long predicted the advent of such models, but noted that industry made its first stop in the advertising realm.
“It’s been conjectured as coming for at least 20 years,” he said. “Industry went to targeted advertising before targeted pricing. And I’m surprised that they went for that when targeted pricing is more profitable.”
Even if the practice is logical from a business perspective, Howell said consumer reaction is also predictably negative.
“Any time customers start to see price discounts, or individual pricing, they absolutely hate it,” he said.
And that tension isn’t new either. Howell highlighted that the Sherman Antitrust Act, passed in 1890, was “largely inspired by price discrimination by the railroad industry back at the turn of the 20th century.”
From an academic perspective, Howell said, price discrimination should benefit more consumers than it harms.
“The theory is, price discrimination almost always leads to lower average prices,” Howell said. “That doesn’t mean that every person is going to pay less. Also, it generally increases the availability of goods and services to lower-income customers.
“If you have price discrimination you can charge the people that can afford to pay a lot and less for those who cannot afford it. That’s generally what happens.”
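Howell's theoretical point can be made concrete with a small worked example. The willingness-to-pay figures and unit cost below are invented for this sketch; the only purpose is to show how individualized prices can serve buyers whom a single profit-maximizing price would exclude, while lowering the average price paid and still raising the seller's profit.

```python
# Toy illustration of first-degree price discrimination vs. uniform pricing.
# The willingness-to-pay values and unit cost are hypothetical.
willingness_to_pay = [100, 80, 60, 40, 20]  # five hypothetical buyers
unit_cost = 30

# Uniform pricing: pick the single price that maximizes profit.
best_uniform = max(willingness_to_pay,
                   key=lambda p: (p - unit_cost) * sum(w >= p for w in willingness_to_pay))
uniform_buyers = [w for w in willingness_to_pay if w >= best_uniform]
uniform_profit = (best_uniform - unit_cost) * len(uniform_buyers)

# Individualized pricing: charge each buyer their willingness to pay and
# serve everyone whose willingness to pay at least covers the cost.
discrim_prices = [w for w in willingness_to_pay if w >= unit_cost]
discrim_profit = sum(p - unit_cost for p in discrim_prices)

print(f"uniform price {best_uniform}: {len(uniform_buyers)} buyers served, profit {uniform_profit}")
print(f"individualized prices: {len(discrim_prices)} buyers served, "
      f"average price {sum(discrim_prices) / len(discrim_prices):.0f}, profit {discrim_profit}")
```

With these made-up numbers, the best uniform price is 80 (two buyers served, profit 100), while individualized pricing serves four buyers at an average price of 70 and a profit of 160. That is Howell's point in miniature, and also a hint at why sellers find the practice so attractive.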
In practice, however, Howell notes that the entities with the most data, and by extension the most control over individualized pricing systems, can easily disrupt competition.
“It’s not the theory of price discrimination that’s been disrupted,” he said. “What the data has done in our current economic system is tend to reward big players with increased power and lock out all the smaller players. As soon as they’re not competitive, the theory falls apart. If I was a policy maker, that’s what I would target, keeping big tech from accumulating so much power.”