AI Research
Why we should all pay attention to how lawyers, auditors, and accountants are using AI
Hello and welcome to Eye on AI. In today’s edition…the U.S. Senate rejects a moratorium on state-level AI laws…Meta unveils its new AI organization…Microsoft says AI can out-diagnose doctors…and Anthropic shows why you shouldn’t let an AI agent run your business just yet.
AI is rapidly changing work for many of those in professional services—lawyers, accountants, auditors, compliance officers, consultants, and tax advisors. In many ways, the experience of these professionals, and of the businesses they work for, is a harbinger of what’s likely to happen to other kinds of knowledge workers in the near future.
Because of this, it was interesting to hear the discussion yesterday at a conference on the “Future of Professionals” at Oxford University’s Saïd Business School. The conference was sponsored by Thomson Reuters, in part to coincide with the publication of a report it commissioned on trends in professionals’ use of AI.
That report, based on a global survey of 2,275 professionals conducted in February and March, found that professional services firms seem to be seeing a return on their AI investment at a higher rate than companies in other sectors. Slightly more than half—53%—of the respondents said their firm had found at least one AI use case that was earning a return, which is about twice what other, broader surveys have tended to find.
Not surprisingly, Thomson Reuters found that the professional firms where AI usage was part of a well-defined strategy, and which had implemented AI governance structures, were the most likely to see gains from the technology. Interestingly, among firms where AI adoption was less structured, 64% of those surveyed still reported ROI from at least one use case, which may reflect how powerful and time-saving these tools can be even when individuals use them to improve their own workflows.
The biggest factors holding back AI use cases, the respondents said, included concerns about inaccuracy (with 50% of those surveyed noting this was a problem) and data security (42%). For more on how law firms are using AI, check out this feature from my Fortune colleague Jeff John Roberts.
Mind the gaps
Here are a few tidbits from the conference worth highlighting:
Mari Sako, the Oxford professor of management studies who helped organize the conference, talked about the three gaps that professionals needed to watch out for in trying to manage AI implementation: One was the responsibility gap between model developers, application builders, and end users of AI models. Who bears responsibility for the model’s accuracy and possible harms?
A second was the principles-to-practice gap. Businesses enact high-minded “Responsible AI” principles, but the teams building or deploying AI products then struggle to operationalize them. One reason this happens is the first gap: teams building AI applications may not have visibility into the data used to train a model they are deploying, or detailed information about how it may perform. This can make it hard to apply AI principles about transparency and mitigating bias, among other things.
Finally, she said, there is a goals gap. Is everyone in the business aligned about why AI is being used in the first place? Is it for human augmentation or automation? Is it operational efficiency or revenue growth? Is the goal to be more accurate than a human, or simply to come close to human performance at a lower cost? What role should environmental sustainability play in these decisions? All good questions.
Not a substitute for human judgment
Ian Freeman, a partner at KPMG UK, talked about his firm’s increasing use of AI tools to help auditors. In the past, auditors were forced to rely on sampling transactions, trying to apply more scrutiny to those that presented a bigger business risk. But now, with AI, it is possible to run a screen on every single transaction. Still, it is the riskiest transactions that should get the most scrutiny and AI can help identify those. Freeman said AI could also help more junior auditors understand the rationale for probing certain transactions. And he said AI models could help with a lot of routine financial analysis.
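(For the curious, here is a minimal, hypothetical sketch of what “screening every transaction” for risk might look like in practice. It is an illustrative anomaly-scoring example using made-up data, not KPMG’s actual tooling.)

```python
# Hypothetical sketch: score every transaction for anomaly risk so that human
# auditors can focus their judgment on the riskiest items. Not KPMG's system.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy transaction features: amount, days-to-posting, approver seniority level.
transactions = rng.normal(loc=[5_000, 2, 3], scale=[2_000, 1, 1], size=(1_000, 3))
transactions[:5] *= 10  # plant a few unusually large, slow-to-post entries

model = IsolationForest(random_state=0).fit(transactions)
risk = -model.score_samples(transactions)  # higher value = more anomalous

# Route the top 1% riskiest transactions to a human auditor for review.
review_queue = np.argsort(risk)[::-1][: len(transactions) // 100]
print("Transactions flagged for manual review:", review_queue)
```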
But he said KPMG had a policy of not deploying AI in situations that called for human judgment. Auditing is full of such cases: deciding on materiality thresholds, making a call about whether a client has submitted enough evidence to justify a particular accounting treatment, or deciding on appropriate warranty reserves for a new product. That sounds good, but I also wonder about the ability of AI models to act as tutors or digital mentors to junior auditors, helping them to develop their professional judgment. Surely that seems like it might be a good use case for AI too.
A senior partner from a large law firm (parts of the conference were conducted under the Chatham House Rule, so I can’t name them) noted that many corporate legal departments are embracing AI faster than law firms—something the Thomson Reuters survey also showed—and that this disparity was putting pressure on the firms. Corporate counsel are demanding that external lawyers be more transparent about their AI usage—and, critically, pushing to lower legal bills on the theory that many legal tasks can now be done in far fewer billable hours.
Changing career paths and the need for AI expertise
AI may also change how professional service firms think about career paths within their business, and even who leads these firms, several lawyers at the conference said. AI expertise is increasingly important to how these firms operate. Yet it is difficult to attract the talent these businesses need if “non-qualified” technical experts (the term simply denotes an employee who has not been admitted to the bar, but its pejorative connotations are hard to escape) know they will always be treated as second-class compared with the client-facing lawyers and will be ineligible for promotion to the highest ranks of the firm’s management.
Michael Buenger, executive vice president and chief operating officer at the National Center for State Courts in the U.S., said that if large law firms had trouble attracting and retaining AI expertise, the situation was far worse for governments. He pointed out that judges and juries were increasingly being asked to rule on evidence, particularly video evidence but also other kinds of documentary evidence, that might be AI-manipulated, without access to independent expertise to help them determine what has been altered by AI and how. If not addressed, he said, this could seriously undermine faith in the courts to deliver justice.
There were lots more insights from the conference, but that’s all we have space for today. Here’s more AI news.
Note: The essay above was written and edited by Fortune staff. The news items below were selected by the newsletter author, created using AI, and then edited and fact-checked.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
Want to know more about how to use AI to transform your business? Interested in what AI will mean for the fate of companies and countries? Then join me at the Ritz-Carlton, Millenia in Singapore on July 22 and 23 for Fortune Brainstorm AI Singapore. This year’s theme is The Age of Intelligence. We will be joined by leading executives from DBS Bank, Walmart, OpenAI, Arm, Qualcomm, Standard Chartered, Temasek, and our founding partner Accenture, plus many others, along with key government ministers from Singapore and the region, top academics, investors and analysts. We will dive deep into the latest on AI agents, examine the data center build-out in Asia, explore how to create AI systems that produce business value, and talk about how to ensure AI is deployed responsibly and safely. You can apply to attend here, and because you are loyal Eye on AI readers, I’m able to offer complimentary tickets to the event. Just use the discount code BAI100JeremyK when you check out.
AI IN THE NEWS
Senate strips 10-year moratorium on state AI laws from Trump tax bill. The U.S. Senate voted 99-1 to remove the controversial measure from President Donald Trump’s landmark “Big Beautiful Bill.” The restrictions had been supported by Silicon Valley tech companies and venture capitalists as well as their allies in the Trump administration. Bipartisan opposition to the moratorium—led by Sen. Marsha Blackburn—centered on preserving state-level protections like Tennessee’s Elvis Act, which protects citizens from unauthorized use of their voice or likeness, including in AI-generated content. Critics warned that in the absence of federal AI regulation, the ban on state-level laws would leave U.S. citizens with no protection from AI harms at all. But tech companies argue that the increasing patchwork of state-level AI regulation is unworkable, hampering AI progress. Read more from Bloomberg News here.
Meta announces new AI leadership team and key hires from rival AI labs. Meta CEO Mark Zuckerberg sent a memo to employees formally announcing the creation of Meta Superintelligence Labs, a new organization uniting the company’s foundational AI model, product, and Fundamental AI Research (FAIR) teams under a single umbrella. Scale AI founder and CEO Alexandr Wang—who is joining Meta as part of a $14.3 billion investment into Scale—will have the title “chief AI officer” and will co-lead the new Superintelligence unit along with former GitHub CEO Nat Friedman. Zuckerberg also announced the hiring of 11 prominent AI researchers from OpenAI, Google DeepMind, and Anthropic. You can read more about Meta’s AI talent raid from Wired here.
Cloudflare begins blocking AI web-crawlers by default. Internet content delivery provider Cloudflare announced it has begun blocking AI companies’ web crawlers from accessing website content by default. Owners of the websites can choose to unblock specific crawlers—such as those Google uses to build its search index—or even opt for a “pay per crawl” option that will allow them to monetize the scraping of their content. With around 16% of global internet traffic passing through Cloudflare, the change could significantly impact AI development. (Full disclosure: Fortune is one of the initial participants in the Cloudflare crawler initiative.) Read more from CNBC here.
EYE ON AI RESEARCH
Even better than House? Microsoft has unveiled an AI system for medical diagnoses that it claims can diagnose complex cases four times more accurately than individual human doctors (under certain conditions—more on that in a sec). The “Microsoft AI Diagnostic Orchestrator” (MAI-DxO—gotta love those AI acronyms) consists of five AI “agents” that each have a distinct role to play in scouring the medical literature, hypothesizing what the patient’s condition might be, ordering tests to eliminate possibilities, and even trying to optimize those tests to derive the most useful information at the least cost. These five “AI doctors” then engage in a process Microsoft is dubbing “chain of debate,” in which they collaborate and critique one another, ultimately arriving at a diagnosis.
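(Microsoft has not released MAI-DxO’s code, but as a rough, hypothetical illustration of the orchestration idea, here is a minimal sketch of a debate-style loop in which role-specialized agents take turns commenting before an arbiter issues a diagnosis. The call_llm stub is a stand-in for a real model API, not Microsoft’s implementation.)

```python
# Hypothetical sketch of a multi-agent "chain of debate" loop. This is NOT
# Microsoft's MAI-DxO code; call_llm is a stand-in for a real model API call.

ROLES = [
    "hypothesis generator",   # proposes candidate diagnoses
    "test planner",           # suggests tests to discriminate between them
    "cost checker",           # flags tests with poor information-per-dollar
    "devil's advocate",       # argues against the leading hypothesis
    "final arbiter",          # synthesizes the debate into a diagnosis
]

def call_llm(role: str, case: str, transcript: list[str]) -> str:
    # Stand-in for an actual LLM call; a real system would send the role
    # prompt, the case details, and the debate transcript to a model.
    return f"[{role}] comment on round {len(transcript)} for case: {case[:30]}..."

def chain_of_debate(case: str, rounds: int = 3) -> str:
    transcript: list[str] = []
    for _ in range(rounds):
        for role in ROLES[:-1]:               # every panelist speaks each round
            transcript.append(call_llm(role, case, transcript))
    return call_llm(ROLES[-1], case, transcript)  # arbiter issues the diagnosis

print(chain_of_debate("Patient presents with fever, rash, and joint pain."))
```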
In trials involving 304 real-world cases from the New England Journal of Medicine, MAI-DxO achieved an 85.5% success rate, compared to about 20% for human doctors. Microsoft tried powering the system with different AI models from OpenAI, Google, Meta, Anthropic, and DeepSeek, but found it worked best when using OpenAI’s o3 model (Microsoft is a major investor in OpenAI, sells OpenAI’s models through its cloud service, and depends on OpenAI for many of its own AI offerings). As for the poor performance of the human docs, it is important to note that in the test they were not allowed to consult either medical textbooks or colleagues.
Nonetheless, Microsoft AI CEO Mustafa Suleyman said the system could transform healthcare—although the company also said MAI-DxO is just a research project and is not yet being turned into a product. You can read more from the Financial Times here.
FORTUNE ON AI
Mark Zuckerberg overhauled Meta’s entire AI org in a risky, multi-billion dollar bet on ‘superintelligence’ —by Sharon Goldman
Longtime Bessemer investor Mary D’Onofrio, who backed Anthropic and Canva, leaves for Crosslink Capital —by Allie Garfinkle
Ford CEO says new technologies like AI are leaving many workers behind, and companies need a plan —by Jessica Mathews
Commentary: When your AI assistant writes your performance review: A glimpse into the future of work —by David Ferrucci
AI CALENDAR
July 8-11: AI for Good Global Summit, Geneva
July 13-19: International Conference on Machine Learning (ICML), Vancouver
July 22-23: Fortune Brainstorm AI Singapore. Apply to attend here.
July 26-28: World Artificial Intelligence Conference (WAIC), Shanghai.
Sept. 8-10: Fortune Brainstorm Tech, Park City, Utah. Apply to attend here.
Oct. 6-10: World AI Week, Amsterdam
Dec. 2-7: NeurIPS, San Diego
Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.
BRAIN FOOD
AI tries to run a vending machine business. Hilarity ensues, Part Deux. A month ago in the research section of this newsletter, I wrote about research from Andon Labs about what happens when you try to have various AI models run a simulated vending machine business. Now, Anthropic teamed up with Andon Labs to test one of its latest models, Claude 3.7 Sonnet, to see how it did running a real-life vending machine in Anthropic’s San Francisco office. The answer, as it turns out, is not well at all. As Anthropic writes in its blog on the experiment, “If Anthropic were deciding today to expand into the in-office vending market, we would not hire [Claude 3.7 Sonnet].”
The model made a lot of mistakes—like telling customers to send payments to a Venmo account that didn’t exist (it had hallucinated it)—and also a lot of poor business decisions, like offering far too many discounts (including an Anthropic employee discount in a location where 99% of the customers were Anthropic employees), failing to seize a good arbitrage opportunity, and failing to increase prices in response to high demand.
The entire Anthropic blog makes for fun reading. And the experiment makes it clear that AI agents probably are nowhere near ready for a lot of complex, multi-step tasks over long time periods.
AI Research
Positive attitudes toward AI linked to problematic social media use
People who have a more favorable view of artificial intelligence tend to spend more time on social media and may be more likely to show signs of problematic use, according to new research published in Addictive Behaviors Reports.
The new study was designed to explore a question that, until now, had been largely overlooked in the field of behavioral research. While many factors have been identified as risk factors for problematic social media use—including personality traits, emotional regulation difficulties, and prior mental health issues—no research had yet explored whether a person’s attitude toward artificial intelligence might also be linked to unhealthy social media habits.
The researchers suspected there might be a connection, since social media platforms are deeply intertwined with AI systems that drive personalized recommendations, targeted advertising, and content curation.
“For several years, I have been interested in understanding how AI shapes societies and individuals. We also recently came up with a framework called IMPACT to provide a theoretical framework to understand this. IMPACT stands for the Interplay of Modality, Person, Area, Country/Culture and Transparency variables, all of relevance to understand what kind of view people form regarding AI technologies,” said study author Christian Montag, a distinguished professor of cognitive and brain sciences at the Institute of Collaborative Innovation at the University of Macau.
Artificial intelligence plays a behind-the-scenes role in nearly every major social media platform. Algorithms learn from users’ behavior and preferences in order to maximize engagement, often by showing content that is likely to capture attention or stir emotion. These AI-powered systems are designed to increase time spent on the platform, which can benefit advertisers and the companies themselves. But they may also contribute to addictive behaviors by making it harder for users to disengage.
Drawing from established models in psychology, the researchers proposed that attitudes toward AI might influence how people interact with social media platforms. In this case, people who trust AI and believe in its benefits might be more inclined to embrace AI-powered platforms like social media—and potentially use them to excess.
To investigate these ideas, the researchers analyzed survey data from over 1,000 adults living in Germany. The participants were recruited through an online panel and represented a wide range of ages and education levels. After excluding incomplete or inconsistent responses and removing extreme outliers (such as those who reported using social media for more than 16 hours per day), the final sample included 1,048 people, with roughly equal numbers of men and women.
Participants completed a variety of self-report questionnaires. Attitudes toward artificial intelligence were measured using both multi-item scales and single-item ratings. These included questions such as “I trust artificial intelligence” and “Artificial intelligence will benefit humankind” to assess positive views, and “I fear artificial intelligence” or “Artificial intelligence will destroy humankind” to capture negative perceptions.
To assess social media behavior, participants were asked whether they used platforms like Facebook, Instagram, TikTok, YouTube, or WhatsApp, and how much time they spent on them each day, both for personal and work purposes. Those who reported using social media also completed a measure called the Social Networking Sites–Addiction Test, which includes questions about preoccupation with social media, difficulty cutting back, and using social media to escape from problems.
Overall, 956 participants said they used social media. Within this group, the researchers found that people who had more positive attitudes toward AI also tended to spend more time on social media and reported more problematic usage patterns. This relationship held for both men and women, but it was stronger among men. In contrast, negative attitudes toward AI showed only weak or inconsistent links to social media use, suggesting that it is the enthusiastic embrace of AI—not fear or skepticism—that is more closely associated with excessive use.
“It is interesting to see that the effect is driven by the male sample,” Montag told PsyPost. “On second thought, this is not such a surprise, because in several samples we saw that males reported higher positive AI attitudes than females (on average). So, we must take into account gender for research questions, such as the present one.”
“Further I would have expected that negative AI attitudes would have played a larger role in our work. At least for males we observed that fearing AI went also along with more problematic social media use, but this effect was mild at best (such a link might be explained via negative affect and escapism tendencies). I would not be surprised if such a link becomes more visible in future studies. Let’s keep in mind that AI attitudes might be volatile and change (the same of course is also true for problematic social media use).”
To better understand how these variables were related, the researchers conducted a mediation analysis. This type of analysis can help clarify whether one factor (in this case, time spent on social media) helps explain the connection between two others (positive AI attitudes and problematic use).
The results suggested that people with positive attitudes toward AI tended to spend more time on social media, and that this increased usage was associated with higher scores on the addiction measure. In other words, time spent on social media partly accounted for the link between AI attitudes and problematic behavior.
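(For readers unfamiliar with the technique, here is a minimal sketch of the regression logic behind a simple mediation analysis, using made-up data rather than the study’s: the indirect effect is the product of the attitudes-to-time path and the time-to-problematic-use path.)

```python
# Minimal sketch of a simple mediation analysis on synthetic data (not the
# study's data): does time on social media partly carry the link between
# positive AI attitudes (X) and problematic-use scores (Y)?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1_000
ai_attitude = rng.normal(size=n)                       # X
time_on_sns = 0.4 * ai_attitude + rng.normal(size=n)   # M, partly driven by X
problem_use = 0.5 * time_on_sns + 0.1 * ai_attitude + rng.normal(size=n)  # Y

# Path a: X -> M
a = sm.OLS(time_on_sns, sm.add_constant(ai_attitude)).fit().params[1]
# Path b: M -> Y, controlling for X
exog = sm.add_constant(np.column_stack([time_on_sns, ai_attitude]))
b = sm.OLS(problem_use, exog).fit().params[1]

print(f"indirect (mediated) effect a*b = {a * b:.2f}")
```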
“I personally believe that it is important to have a certain degree of positive attitude towards benevolent AI technologies,” Montag said. “AI will profoundly change our personal and business lives, so we should better prepare ourselves for active use of this technology. This said, our work shows that positive attitudes towards AI, which are known to be of relevance to predict AI technology use, might come with costs. This might be in form of over-reliance on such technology, or in our case overusing social media (where AI plays an important role in personalizing content). At least we saw this to be true for male study participants.”
Importantly, the researchers emphasized that their data cannot establish cause and effect. Because the study was cross-sectional—that is, based on a single snapshot in time—it is not possible to say whether positive attitudes toward AI lead to excessive social media use, or whether people who already use social media heavily are more likely to hold favorable views of AI. It’s also possible that a third factor, such as general interest in technology, could underlie both tendencies.
The study’s sample, while diverse in age and gender, skewed older on average, with a mean age of 45. This may limit the generalizability of the findings, especially to younger users, who are often more active on social media and may have different relationships with technology. Future research could benefit from focusing on younger populations or tracking individuals over time to see how their attitudes and behaviors change.
“In sum, our work is exploratory and should be seen as stimulating discussions. For sure, it does not deliver final insights,” Montag said.
Despite these limitations, the findings raise important questions about how people relate to artificial intelligence and how that relationship might influence their behavior. The authors suggest that positive attitudes toward AI are often seen as a good thing—encouraging people to adopt helpful tools and new innovations. But this same openness to AI might also make some individuals more vulnerable to overuse, especially when the technology is embedded in products designed to maximize engagement.
The researchers also point out that people may not always be aware of the role AI plays in their online lives. Unlike using an obvious AI system, such as a chatbot or virtual assistant, browsing a social media feed may not feel like interacting with AI. Yet behind the scenes, algorithms are constantly shaping what users see and how they engage. This invisible influence could contribute to compulsive use without users realizing how much the technology is guiding their behavior.
The authors see their findings as a starting point for further exploration. They suggest that researchers should look into whether positive attitudes toward AI are also linked to other types of problematic online behavior, such as excessive gaming, online shopping, or gambling—especially on platforms that make heavy use of AI. They also advocate for studies that examine whether people’s awareness of AI systems influences how those systems affect them.
“In a broader sense, we want to map out the positive and negative sides of AI technology use,” Montag explained. “I think it is important that we use AI in the future to lead more productive and happier lives (we investigated also AI-well-being in this context recently), but we need to be aware of potential dark sides of AI use.”
“We are happy if people are interested in our work and if they would like to support us by filling out a survey. Here we do a study on primary emotional traits and AI attitudes. Participants also get as a ‘thank you’ insights into their personality traits: https://affective-neuroscience-personality-scales.jimdosite.com/take-the-test/).”
The study, “The darker side of positive AI attitudes: Investigating associations with (problematic) social media use,” was authored by Christian Montag and Jon D. Elhai.
AI Research
How the Vatican Is Shaping the Ethics of Artificial Intelligence
As AI transforms the global landscape, institutions worldwide are racing to define its ethical boundaries. Among them, the Vatican brings a distinct theological voice, framing AI not just as a technical issue but as a moral and spiritual one. Questions about human dignity, agency, and the nature of personhood are central to its engagement—placing the Church at the heart of a growing international effort to ensure AI serves the common good.
Father Paolo Benanti is an Italian Catholic priest, theologian, and member of the Third Order Regular of St. Francis. He teaches at the Pontifical Gregorian University and has served as an advisor to both former Pope Francis and current Pope Leo on matters of artificial intelligence and technology ethics within the Vatican.
Below is a lightly edited and abridged transcript of our discussion. You can listen to this and other episodes of Explain to Shane on AEI.org and subscribe via your preferred listening platform. If you enjoyed this episode, leave us a review, and tell your friends and colleagues to tune in.
Shane Tews: When did you and the Vatican begin to seriously consider the challenges of artificial intelligence?
Father Paolo Benanti: Well, those are two different things because the Vatican and I are two different entities. I come from a technical background—I was an engineer before I joined the order in 1999. During my religious formation, which included philosophy and theology, my superior asked me to study ethics. When I pursued my PhD, I decided to focus on the ethics of technology to merge the two aspects of my life. In 2009, I began my PhD studies on different technologies that were scaffolding human beings, with AI as the core of those studies.
After I finished my PhD and started teaching at the Gregorian University, I began offering classes on these topics. Can you imagine the faces of people in 2012 when they saw “Theology and AI”—what’s that about?
But the process was so interesting, and things were already moving fast at that time. In 2016-2017, we had the first contact between Big Tech companies from the United States and the Vatican. This produced a gradual commitment within the structure to understand what was happening and what the effects could be. There was no anticipation of the AI moment, for example, when ChatGPT was released in 2022.
The Pope became personally involved in this process for the first time in 2019 when he met some tech leaders in a private audience. It’s really interesting because one of them, simply out of protocol, took some papers from his jacket. It was a speech by the Pope about youth and digital technology. He highlighted some passages and said to the Pope, “You know, we read what you say here, and we are scared too. Let’s do something together.”
This commitment, this dialogue—not about what AI is in itself, but about what the social effects of AI could be in society—was the starting point and probably the core approach that the Holy See has taken toward technology.
I understand there was an important convening of stakeholders around three years ago. Could you elaborate on that?
The first major gathering was in 2020 where we released what we call the Rome Call for AI Ethics, which contains a core set of six principles on AI.
This is interesting because we don’t call it the “Vatican Call for AI Ethics” but the “Rome Call,” because the idea from the beginning was to create something non-denominational that could be minimally acceptable to everyone. The first signature was the Catholic Church. We held the ceremony on Via della Conciliazione, in front of the Vatican but technically in Italy, for both logistical and practical reasons—accessing the Pope is easier that way. But Microsoft, IBM, FAO, and the European Parliament president were also present.
In 2023, Muslims and Jews signed the call, making it the first document that the three Abrahamic religions found agreement on. We have had very different positions for centuries. I thought, “Okay, we can stand together.” Isn’t that interesting? When the whole world is scared, religions try to stay together, asking, “What can we do in such times?”
The most recent signing was in July 2024 in Hiroshima, where 21 different global religions signed the Rome Call for AI Ethics. According to the Pew Research Center, the majority of living people on Earth are religious, and the religions that signed the Rome Call in July 2024 represent the majority of them. So we can say that this simple core list of six principles can bring together the majority of living beings on Earth.
Now, because it’s a call, it’s like a cultural movement. The real success of the call will be when you no longer need it. It’s very different to make it operational, to make it practical for different parts of the world. But the idea that you can find a common and shared platform that unites people around such challenging technology was so significant that it was unintended. We wanted to produce a cultural effect, but wow, this is big.
As an engineer, did you see this coming based on how people were using technology?
Well, this is where the ethicist side takes precedence over the engineering one, because we discovered in the late 80s that the ethics of technology is a way to look at technology that simply doesn’t judge technology. There are no such things as good or bad technology, but every kind of technology, once it impacts society, works as a form of order and displacement of power.
Think of a classical technology like a subway or metro station. Where you put it determines who can access the metro and who cannot. The idea is to move from thinking about technology in itself to how this technology will be used in a societal context. The challenge with AI is that we’re facing not a special-purpose technology. It’s not something designed to do one thing, but rather a general-purpose technology, something that will probably change the way we do everything, like electricity does.
Today it’s very difficult to find something that works without electricity. AI will probably have the same impact. Everything will be AI-touched in some way. It’s a global perspective where the new key factor is complexity. You cannot discuss such technology—let me give a real Italian example—that you can use in a coffee roastery to identify which coffee beans might have mold to avoid bad flavor in the coffee. But the same technology can be used in an emergency room to choose which people you want to treat and which ones you don’t.
It’s not a matter of the technology itself, but rather the social interface of such technology. This is challenging because it confuses tech people who usually work with standards. When you have an electrical plug, it’s an electrical plug intended for many different uses. Now it’s not just the plug, but the plug in context. That makes things much more complex.
In the Vatican document, you emphasize that AI is just a tool—an elegant one, but it shouldn’t control our thinking or replace human relationships. You mention it “requires careful ethical consideration for human dignity and common good.” How do we identify that human dignity point, and what mechanisms can alert us when we’re straying from it?
I’ll try to give a concise answer, but don’t forget that this is a complex element with many different applications, so you can’t reduce it to one answer. But the first element—one of the core elements of human dignity—is the ability to self-determine our trajectory in life. I think that’s the core element, for example, in the Declaration of Independence. All humans have rights, but you have the right to the pursuit of happiness. This could be the first description of human rights.
In that direction, we could have a problem with this kind of system because one of the first and most relevant elements of AI, from an engineering perspective, is its prediction capabilities. Every time a streaming platform suggests what you can watch next, it’s changing the number of people using the platform or the online selling system. This idea that interaction between human beings and machines can produce behavior is something that could interfere with our quality of life and pursuit of happiness. This is something that needs to be discussed.
Now, the problem is: don’t we have a cognitive right to know if we have a system acting in that way? Let me give you some numbers. When you’re 65, you’re probably taking three different drugs per day. When you reach 68 to 70, you probably have one chronic disease. Chronic diseases depend on how well you stick to therapy. Think about the debate around insulin and diabetes. If you forget to take your medication, your quality of life deteriorates significantly. Imagine using this system to help people stick to their therapy. Is that bad? No, of course not. Or think about using it in the workplace to enhance workplace safety. Is that bad? No, of course not.
But if you apply it to your life choices—your future, where you want to live, your workplace, and things like that—that becomes much more intense. Once again, the tool could become a weapon, or the weapon could become a tool. This is why we have to ask ourselves: do we need something like a cognitive right regarding this? That you are in a relationship with a machine that has the tendency to influence your behavior.
Then you can accept it: “I have diabetes, I need something that helps me stick to insulin. Let’s go.” It’s the same thing that happens with a smartwatch when you have to close the rings. The machine is pushing you to have healthy behavior, and we accept it. Well, right now we have nothing like that framework. Should we think about something in the public space? It’s not a matter of allowing or preventing some kind of technology. It’s a matter of recognizing what it means to be human in an age of such powerful technology—just to give a small example of what you asked me.
AI Research
Learn how to use AI safely for everyday tasks at Springfield training
ChatGPT, Google Gemini can help plan the perfect party
Ease some of the burden of planning a party and enlist the help of artificial intelligence.
- Free AI training sessions are being offered to the public in Springfield, starting with “AI for Everyday Life: Tiny Prompts, Big Wins” on July 30.
- The sessions aim to teach practical uses of AI tools like ChatGPT for tasks such as meal planning and errands.
- Future sessions will focus on AI for seniors and families.
The News-Leader is partnering with the library district and others in Springfield to present a series of free training sessions for the public about how to safely harness the power of artificial intelligence, or AI.
The inaugural session, “AI for Everyday Life: Tiny Prompts, Big Wins,” will be 5:30-7 p.m. Thursday, July 10, at the Library Center.
The goal is to help adults learn how to use ChatGPT to make their lives a little easier when it comes to everyday tasks such as drafting meal plans, rewriting letters or planning errand routes.
The 90-minute session is presented by the Springfield-Greene County Library District in partnership with 2oddballs Creative, Noble Business Strategies and the News-Leader.
“There is a lot of fear around AI and I get it,” said Gabriel Cassady, co-owner of 2oddballs Creative. “That is what really drew me to it. I was awestruck by the power of it.”
AI aims to mimic human intelligence and problem-solving. It is the ability of computer systems to analyze complex data, identify patterns, provide information and make predictions. Humans interact with it in various ways, whether using digital assistants — such as Amazon’s Alexa or Apple’s Siri — or chatting with bots on websites that help with navigation or answer frequently asked questions.
“AI is obviously a complicated issue — I have complicated feelings about it myself as far as some of the ethics involved and the potential consequences of relying on it too much,” said Amos Bridges, editor-in-chief of the Springfield News-Leader. “I think it’s reasonable to be wary but I don’t think it’s something any of us can ignore.”
Bridges said it made sense for the News-Leader to get involved.
“When Gabriel pitched the idea of partnering on AI sessions for the public, he said the idea came from spending the weekend helping family members and friends with a bunch of computer and technical problems and thinking, ‘AI could have handled this,’” Bridges said.
“The focus on everyday uses for AI appealed to me — I think most of us can identify with situations where we’re doing something that’s a little outside our wheelhouse and we could use some guidance or advice. Hopefully people will leave the sessions feeling comfortable dipping a toe in so they can experiment and see how to make it work for them.”
Cassady said Springfield area residents are encouraged to attend, bring their questions and electronic devices.
The training session — open to beginners and “family tech helpers” — will include guided use of AI, safety essentials, and a practical AI cheat sheet.
Cassady will explain, in plain English, how generative AI works and show attendees how to effectively chat with ChatGPT.
“I hope they leave feeling more confident in their understanding of AI and where they can find more trustworthy information as the technology advances,” he said.
Future training sessions include “AI for Seniors: Confident and Safe” in mid-August and “AI & Your Kids: What Every Parent and Teacher Should Know” in mid-September.
The training sessions are free but registration is required at thelibrary.org.