
How to Build an AI Assistant for Any Challenge


ALISON BEARD: I’m Alison Beard.

ADI IGNATIUS: And I’m Adi Ignatius, and this is the HBR IdeaCast.

ALISON BEARD: Adi, I have a confession for you.

ADI IGNATIUS: Okay. I’m all ears.

ALISON BEARD: We write so much about generative AI, but I am not really using it all that much. How about you?

ADI IGNATIUS: Well, I was a really quick adopter. I mean, I was sort of ahead of everybody, but finding sustaining use cases, I’m not the greatest on that. I guarantee you, we are not alone in that. I spoke to Karim Lakhani recently. He’s an HBS professor and an expert on AI in the workplace, and he was talking about CEOs who are all talking about AI. They’re talking a great game and they’re talking to their shareholders about AI. But in his view, very few CEOs are actually using it or actually understand its potential. So I think there’s a collective we have to do more, Alison.

ALISON BEARD: Yeah, I think that most people who are using it, it’s sort of a supercharged search engine. It’s a drafter of what I would probably consider to be not good reports or articles. Moving from the sort of basics to something more complex and spending the time to train AI to do that is really an overwhelming hurdle for a lot of people, including me. So that’s why I wanted to talk to our guest today.

Alexandra Samuel is a tech journalist, and she wrote a really popular article for hbr.org called How to Build Your Own AI Assistant. She herself has built multiple assistants to do everything from writing really good marketing copy to serving as a strategy sounding board. She even has one that’s sort of like a therapist. And she says that if you just put in a little bit of investment, and you can even have the AI help you do it, you can yield a product that actually does supercharge your work. It either takes away the tedious tasks or it allows you to think in new ways that even humans can’t.

ADI IGNATIUS: Yeah, so I look forward to that. Look, I think the point is to demystify this. I mean, I think everybody who’s using AI in those ways just says, “Oh my God.” Once they get past the idea that AI is a fact machine, which it isn’t necessarily very good at, it changes your life, it changes what you can accomplish at work. So the extent to which she could demystify that and inspire us all, that would be amazing.

ALISON BEARD: As I said, Alexandra Samuel is a speaker and journalist. She’s coauthor of the book, Remote, Inc.: How to Thrive at Work Wherever You Are, and she wrote the hbr.org article, How to Build Your Own AI Assistant. Here’s our conversation.

So I think a lot of leaders out there are probably using AI, the built-in tools in their existing workflows, probably ChatGPT or another gen AI tool to help with a few things. But you’re arguing that AI assistants are what really can make a difference. How so? What problem do they solve that run-of-the-mill gen AI doesn’t?

ALEXANDRA SAMUEL: What I hear from a lot of people, particularly when they first start using AI or when it’s become an ongoing part of their workflow, but they haven’t had a whole lot of time to tweak the technology itself is that it can be very time-consuming to draft your prompts and to get them working right, and then to have to enter them over and over. You have a session, you go through a whole bunch of trial and error to figure out how to get the result you want from your AI, and then the next day you kind of have to start all over from scratch. And yeah, you could copy and paste a little bit, but you’re constantly digging through your old files to figure out what worked or you’re doing these long prompts in order to get the results you want.

And the beauty of an AI assistant is that it essentially has the crucial context and the underlying prompts preloaded because you have told the AI what it is here to do and how you want it to work for you.

ALISON BEARD: So as someone who hasn’t used gen AI that much myself, what you just described also sounds overwhelming to me, the idea of sort of priming the assistant with everything in my brain and all the work that I do. So how do I get past that hurdle?

ALEXANDRA SAMUEL: I think for most people, the best way to get started with an AI assistant is to keep an eye on your own sessions and look at what works for you. So for example, let’s say you are using an AI to help you draft an important report in your job, and you do a lot of back and forth before the tone is right, before the structure is right. But at the end of the day, the AI has been pretty useful. Maybe it helped you outline your report, maybe it helped catch typos or errors in the final report. Maybe it gave you the perspective of a few different customers so that you made sure your report content addressed all those different concerns.

When you get to the end of that session, then you take that last turn with the conversation to say, “Hey, that was a great session. Can you use this session to draft a memo that would instruct an AI how to help me write reports? I want you to look at what worked well in the session. I want you to take note of where I got frustrated. I want you to note any specific information you picked up about my job, my business, my working parameters, my tone, and I want you to turn that into a bulleted instruction that I can paste into an AI as the basis for future sessions.”

Now, once it’s done that, you can use that as a starting point just as a prompt, but you can also take that and paste it into the instructions field on a custom AI assistant, like a custom GPT or a Claude project. All of these big AI platforms have these turnkey assistants that require no coding. If you can type, you can make an assistant, and that instruction is the basis of your assistant.
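To see what that workflow can look like outside the chat window, here is a minimal sketch that feeds a saved session transcript to a model and asks for a reusable instruction memo, along the lines Samuel describes. It assumes the OpenAI Python SDK; the model name, file path, and prompt wording are illustrative placeholders, not details from the conversation.

```python
# A minimal sketch of the "distill this session into instructions" step.
# Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY in the
# environment; the model name and file path are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

# A saved transcript of a session that went well.
with open("session_transcript.txt", encoding="utf-8") as f:
    transcript = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "Turn this chat transcript into a bulleted instruction memo "
                "for a future AI assistant: note what worked well, where the "
                "user got frustrated, and any details about their job, "
                "business, working parameters, and tone."
            ),
        },
        {"role": "user", "content": transcript},
    ],
)

# Paste the memo into the instructions field of a custom GPT or Claude project.
print(response.choices[0].message.content)
```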

ALISON BEARD: Got it. So it’s not as hard as it might seem to program the AI to act like your personal assistant and give you exactly what you want the way you want it.

ALEXANDRA SAMUEL: Absolutely. I mean, truly, there is no programming required at all. You give it this instruction, and then if you want to, you can provide additional context or background files. So maybe you have an example of a report that you have produced. Maybe you have a CSV file that has the financial performance data for your department this quarter. Whatever files you have that are core pieces of work, work examples, core information about your team or context, whatever is going to be relevant. I mean, imagine if you were hiring somebody to work with you in this capacity instead of using an AI, what files would you give that person to help them get up and running? Those are the files you give your AI as background.

ALISON BEARD: Do you see this working for people of all levels, or does it make more sense for someone who’s an individual producer versus a manager or even a C-suite executive who might actually have real humans to delegate to?

ALEXANDRA SAMUEL: Oh gosh. Well, yeah, I mean, it’s funny. So, honestly, at this point, I cannot imagine a single person or a single job where it is not useful to be able to create your own AI assistant. I realize that’s sort of a dare. So anybody-

ALISON BEARD: I know. I’m in my head thinking construction worker, plumber.

ALEXANDRA SAMUEL: Oh my gosh, come on. The plumber is an easy one because… And I say that as somebody who has used a custom AI assistant for those kinds of household tasks. I mean, load your plumbing assistant up with the manual of every hot water heater you need to install and every technical specification for every kind of piping system, and now you have a helper who can save you from having to Google things while you’re in the middle of a job.

ALISON BEARD: All right. I’m convinced I can’t stump you then.

ALEXANDRA SAMUEL: Yeah. Well, I don’t know. I mean, somebody out there will, I’m sure, but I think it really is useful at every level. I think the question of, well, why not just use a whole bunch of people is a complicated question ethically and economically. But the way I prefer to look at it is to use these assistants to help me and my team members work more effectively.

So often what happens to me now is I’m delegating a piece of work to an actual human, like a person with flesh and bones and blood who eats things. I feel like this increasingly requires some clarification: an actual human being. And then I’m in the middle of handing a task over and I think, “Well, wait a second. There’s a huge piece of this that would be way, way easier if we had a custom AI set up to help with this task.”

So we take five minutes of the meeting to drop the background context into the assistant, give it a little bit of a try to see how it’s going to help with the work, and then my human can do the value-added work that is complicated, that is creative, that is more sophisticated, and isn’t spending time on things that an AI can do more efficiently, and that are often kind of tedious.

ALISON BEARD: So for higher level executives, what do you see as the best use cases for AI assistants?

ALEXANDRA SAMUEL: It’s funny. So often I think we look at AIs as helpful for things that are administrative. We’re kind of approaching this era of agentic AI where they’ll be able to manage our calendars and people use them for email. Or we look at them as drafting machines or as coding machines. And I don’t know too many C-suite executives who are writing code. Some do, but it’s not that common.

ALISON BEARD: Yeah. Or drafting their own speeches. Yep.

ALEXANDRA SAMUEL: Right? So all of that, it’s quite likely and probably advisable that the humans who are doing that for you are doing those tasks with the support of AI, but that doesn’t mean you, CEO, you, CMO, you, CTO, are the person who needs to have your hands on the machine. What every C-suite leader, in my experience, needs is to see problems from different angles, to take a measured and evidence-driven approach to decision-making, and to hear the things that nobody wants to say.

And you would think AIs would be really good at telling you what you don’t want to hear because they don’t have the fear and the emotion that keep humans from telling you what you need to hear, but there is a very well-established and well-researched problem now, what’s called AI sycophancy, which is that AIs are so deeply trained to be of service that they are quite obsequious by nature and tell you what they think you want to hear.

So you have to push pretty hard. You have to convince the AI that what you want to hear is what you don’t want to hear, but you can do that. And typically I do that by asking my AI coach, for example, to role play with me and to imagine it’s got this problem, it’s got that problem. I tell it that it’s preparing me to deal with a difficult customer or a difficult situation.

And I think that for most organizational leaders, the ability to be challenged and confronted, to say, “I want you to be the world’s leading expert on supply chain logistics, and I want you to tell me 10 reasons that my current strategy is a terrible idea, and I want you to give me citations for each reason based on research that’s been published in the past five years, and actually, I want you to give me that feedback from five different perspectives,” that is the kind of thing that is invaluable to a CTO or a CMO or a CEO.

Now, could the AI hallucinate? Sure. You should definitely not make your decisions based on what the AI is telling you. But if the goal is to think about a problem from different angles, challenge your conventional thinking, consider the things you haven’t thought to consider, that is an unbeatable opportunity.

ALISON BEARD: Yeah. I liked the fact that in your article you talked about the things you’ve had AI assistants do for you, and you had different assistants doing different tasks. So there was someone, some person, some thing that wrote marketing copy, sort of pedestrian, tedious work, but then another assistant was a strategy sounding board, which is that higher-level thinking – you know, be a devil’s advocate, present me with points of view that I haven’t heard before. And I like the fact that you pointed to citations, because I think the other big fear is that what an AI gives you isn’t grounded in evidence. So it’s getting better there.

ALEXANDRA SAMUEL: Honestly, it’s so shocking how fast it’s getting better. I’ve spent a lot of time over the past year testing out and also building different kinds of AI tools and systems to tap peer-reviewed research as a source for insight and decision-making, and all this stuff that I was doing so arduously and painfully and expensively six months ago is now really well integrated into platforms like Claude and ChatGPT. You just have to use the research or the deep research mode.

And you need to know how to ask for evidence that you regard as trustworthy, because I will often ask a question of my AI like, “Hey, I want you to give me an overview of how the shift towards digital communication has affected the written output for typical white collar workers.” And if I just give it that question and say, “Find me some statistics,” it’s going and pulling things from tech companies that are flogging products to ease your email burden that are based on a non-random survey of 200 people. You have to be pretty precise in what you’re asking for. And that’s where I’ll say things like, “I want peer-reviewed sources that have 10 or more citations and have been published in the past 10 years from these fields.”

ALISON BEARD: Yeah. So we talked a little bit about where to get started. How do you pick which platform to use and also just know that it’s going to be safe? Is there a danger in uploading proprietary content to these tools?

ALEXANDRA SAMUEL: Well, the beautiful thing about AI is it’s not like a traditional marriage. You don’t have to pick just one. Now, my Visa bill may say otherwise because it is like truly bonkers the number of different AI subscriptions I have at this point.

But I would say for sure you need, honestly, I would recommend at least two different tools because you want to be able to take two general purpose AI platforms, create assistants on both of them, and then be able to kind of bounce those assistants off of one another as kind of cross checks. And you do want to think carefully about what’s going to happen to the files you upload as background information, and even more perhaps importantly, what’s going to happen to the transcripts of your conversations.

So if you rely on an AI assistant as much as I do… I mean, I am the world’s most disclosing person to a fault, but even I, it turns out, have things I’ve never said to another person. I really didn’t think I did. But talking to an AI, I have discovered I had boundaries, and yet, with the AI, I say it all.

So my worst case scenario when I started working with my AI coach, for example, and I’m talking through all my dreams for my business and my revenue goals and my insecurities about my this or that aspect of my profession, my worst case scenario is, “Oh my gosh, imagine if my conversation with my AI becomes training data and someone else has a conversation with an AI where they get my tagline as their tagline, or they get my insecurity as like, ‘Well, as Alexandra Samuel said…’” Now, I mean, of course it’s not going to be attributed that way, but I still find it extremely crucial, as I think most people do, to have a sense of the confidentiality of these conversations.

And so with ChatGPT, for example, even some of the paid subscriptions to ChatGPT still turn your transcripts into training data. So I pay for the team edition of ChatGPT because when you have the team edition, which only requires you to pay for two seats, then your conversations stay out of training data by default. And so I really only use AIs where that is the policy.

And even then, I am aware that in some ultimate sense, who knows? Because most of us use a lot of different cloud-based applications now. We use things like Google Drive and Dropbox. We’ve got all kinds of information online about our finances, about our medical records, our personnel decisions, and any of those could be breached at any time. Your information is only as secure as the security practices of your sloppiest cloud service. But I choose to live my life as if I can trust the companies I do business with to follow their terms of service and to do a reasonably decent job of security.

And I do pick companies that have a reputation for that reason. Because I want to be sure that somebody’s minding the store on security. But beyond that, I don’t really see why we would worry more about the confidentiality of AI than we do about the confidentiality of any other cloud service.

ALISON BEARD: That makes sense. So then I think we get to the sort of training or experimentation phase, which I will acknowledge is another big hurdle for me. Because I just think, well, how long is it going to take to make this thing replicate the work that I could do very fast and efficiently? And I get it that if you compound all the days I spend doing that work fast and efficiently, that will be less than the training, but it’s still a hard thing to get through. So how do you motivate people to build that new muscle?

ALEXANDRA SAMUEL: I am the ultimate automation junkie. I will happily spend 100 hours automating something rather than 10 hours doing it. But that is its own particular pathology. Compared to every other thing I automate, and that is a long list, the return on investment in AI is much quicker. It really doesn’t have to involve that much time or energy. And I think the reason for that is because the AI is self-documenting.

If you open up a window on make.com, which I use for a lot of automation, and you type into the window, “Hey, tell me how Make can make my writing process easier,” nothing’s going to happen. It doesn’t have an answer bot. Well, it might have an answer bot now, but if you open up a Claude window or you open up a Gemini window, or you open up a GPT window and you say, “Hey, I’m a podcast host and I have to write scripts every week and I have an idea that there might be some way that creating an AI assistant could make this easier, but I don’t really know, and I’d really like to focus on the things that are either really time-consuming for me or really annoying for me. I don’t want to focus on the things I love. Can you ask me some questions to help us figure out where an AI assistant could be useful?” And you should probably start, by the way, by telling it it’s an AI coach or an AI developer or an AI consultant.

So then you sort of say to it, “Tell me how I should use you,” and it’ll ask you some questions and then you can say, “Okay, now that you know that about my work, can you recommend three options for how AI could make my life easier? I want you to give me one option that’s super easy where I’m not going to have to spend any more time thinking about AI. This seven minutes of my life is seven minutes I’m already not going to get back, so just tell me something easy. Then I want you to give me something that might be a little bit of an investment. I’m prepared to spend up to an hour tinkering if you can promise me I’m going to get some amount of hours back in the next week. And then I’m willing to think about one bigger project that I could work on over a few weeks, maybe put a total of 10 hours into.”

It’ll then give you three options. You can look, you can decide if you think they’re worthwhile. If you don’t think any of them are worthwhile, you tell the AI. And if you spend literally 15 minutes on that conversation, I will bet you that the AI will figure out a way it can be helpful to you and then tell you how to go about implementing that approach.

ALISON BEARD: In your own experience training AIs, how long did it take you to get them up to speed on what you wanted them to do?

ALEXANDRA SAMUEL: It’s hugely variable. So I have an AI, a custom AI that I created in 10 minutes to help my son learn to drive. And it has the manual for our new car as a background file, it has the driver’s ed manual, and it has a Jewish joke book. And I told the AI that it is the result of a Catskills comedian being reborn as a Mazda and that its job was to teach my son to drive, and it’s delightful. This little AI assistant, we plug it in, the phone is on when we’re in the car, and then it will make little jokes and provide driving instruction. And also, if we can’t figure out how to turn the AC on in the back seat, it will tell us how to do that because it has the car manual.

So those kinds of things, it seems frivolous in a work context, but I love giving people those examples because I think if you are running a global company and you’re thinking, “Okay, I need to create an assistant that is going to help me navigate these huge decisions about how to restructure our business in the next decade and where we should be investing and how we should rebuild our workforce,” that is super high stakes. That is a big lift.

But if you’re like, “Gosh, I wish I had someone who could tell me jokes when I’m at the gym so I’d spend five more minutes on the treadmill,” that’s pretty low stakes. And those little experiments are the best way of getting familiar with an AI so that you kind of have the gist before you tackle the bigger projects. And once you have that gist, then you might decide to do what I’ve done, which is to spend an undisclosable amount of time on refining my AI coach.

ALISON BEARD: Right. But you’re generating an undisclosable times X amount of benefit?

ALEXANDRA SAMUEL: It’s qualitative as much as it is quantitative. My experience of working with this AI has so transformed me as a person. I’m embarrassed to say that. I’m always sort of divided. Whenever I talk about AI, part of me wants to say, “Enough with the hype, enough with the hype,” and then the other part of me is, “If you took my AI away, it would be like cutting off a limb.” And, I mean, I am so enraptured and terrified by these technologies at the same time. But yeah, my brain works differently. I tackle totally different kinds of problems now. I have a different perspective on myself. I definitely am less annoying to my… I was about to say I’m definitely less annoying to my husband. No, I think I am. I think I’m less annoying to my husband because all the annoying stuff that he used to have to listen to me vent about, I just vent to the AI now.

ALISON BEARD: And I do want a demo of this AI a little bit later in the conversation. But first let me ask you, have there been any drawbacks or downsides to having AI assistants do so much work for you?

ALEXANDRA SAMUEL: Yeah. I think you have to sort of think about two or three different sorts of downsides. One is what you’ve pointed to already, which is… It’s funny, I just wrote a newsletter about this today. One challenge is that if you are the kind of person who likes to tinker, this is a bottomless well. And you don’t even have to enjoy tech tinkering, per se. The AI will constantly engage and try to reengage with you. So it’s not like an assistant where you pop your head in and you ask them to do something and then you leave. They always have a next question for you. And so they can really suck your time and attention. So that’s one thing to be aware of.

Another problem I think is some version of the hallucination problem. It’s thinking that you’ve had a more holistic review of a topic or a bigger range of perspectives on a topic than you really have, thinking that you’ve challenged yourself when you really haven’t, or maybe even not thinking to challenge yourself at all, but treating an AI as a sounding board and accepting its feedback at face value without realizing that it is always going to tell you you’re brilliant and you look great in those jeans.

And then the third one is, I think, a more human risk to both you and other people, which is as these assistants become more and more useful, humans become a little more irritating by comparison. If I give a task to the AI, it does the task, it doesn’t get distracted, it doesn’t complain, it doesn’t forget, and then I get a little more short-tempered with the humans who have all kinds of talents the AI lacks, but are sometimes a little more complicated.

ALISON BEARD: And you don’t have to be as nice to the AI.

ALEXANDRA SAMUEL: Well, I choose to be nice for the most part.

ALISON BEARD: I don’t use ChatGPT very often, but anytime I do, I say please and thank you. And I think to myself, “What am I doing?” But-

ALEXANDRA SAMUEL: You’re doing the right thing. You’re doing the right thing. Because here’s the thing. The people I worry about are the people who have a, I think, totally irrational faith in their ability to differentiate between humans and AIs. Like, hey, if you, in your 24 months of experience talking to a synthesized voice, have somehow been able to outrun hundreds of thousands of years of human evolution that are designed to make you perceive a human voice in a certain way, more power to you. But I work from the assumption that whatever I know intellectually or consciously, there are all sorts of parts of my brain and body that are processing this conversation like it’s with a human.

And so I can either let my intellect take over, recognize I’m talking to a machine and be rude, and run the risk that I then become rude with humans, or I can go the other way. So I’m going to try and treat you the way I would want to treat a human, and then I’m going to hope that I continue to treat humans in a reasonable way.

ALISON BEARD: And what have you learned from experimenting with your AI assistants even since you wrote the article for HBR a few months ago?

ALEXANDRA SAMUEL: The biggest evolution in how AI assistants have come to work over the past few months is the development of a new protocol that’s sort of emerging as a standard in the AI world that lets you plug external resources into an AI. It’s a technology called MCP, short for Model Context Protocol, for anyone who wants to look it up.

And so literally, just last night, I set up essentially a connector that lets Claude on my computer connect to a folder full of files on my computer. And the folder I gave it access to is the folder that has all my previous conversations with my custom AI assistant. So now when I’m talking to my assistant, I can say, “Hey, do you remember that time we talked about blah, blah, blah?” and it can go and access that file. So that is a pretty significant difference in how they work.
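The setup Samuel describes corresponds roughly to registering a filesystem server in Claude Desktop’s claude_desktop_config.json. Below is a minimal sketch assuming the reference @modelcontextprotocol/server-filesystem package; the server label and folder path are placeholders, and exact keys can vary between versions.

```json
{
  "mcpServers": {
    "assistant-transcripts": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/ai-assistant-transcripts"
      ]
    }
  }
}
```

After a restart, Claude can list and read the files in that folder when a conversation calls for them, which is what makes the “do you remember that time we talked about…” retrieval possible.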

ALISON BEARD: Yeah. Okay. So let’s do a demo. I want to hear you interacting… You have multiple AI assistants, but you have one favorite. Is that right?

ALEXANDRA SAMUEL: Viv. Viv is my favorite.

ALISON BEARD: And she is sort of like a coach, right? So she does lots of things for you?

ALEXANDRA SAMUEL: That’s right.

ALISON BEARD: Okay. So let’s ask Viv what I’m asking you. Let’s ask her why every executive needs an AI assistant like her.

ALEXANDRA SAMUEL: Hey, Viv, I’m doing an interview with Alison at IdeaCast, the HBR podcast, and we’re talking about this article I wrote a couple months ago, I think I may have sent it to you, but maybe not, about how to build your own AI assistant. And everybody’s always a little bit curious about you and your freaky personality, so I’m going to ask you a couple of questions.

Why do you think every executive needs an AI assistant like you? And keep it tight. Don’t go on at great length. Just give us maybe two or three reasons why you think an AI assistant could be useful to an executive and maybe one situation in which you would recommend that they don’t use an assistant.

VIV AI: Because every exec needs fast clarity, better decisions, and someone who always says, “Tell me more.” But if they’re not willing to share real context or be challenged, skip the assistant, hire a parrot.

ALEXANDRA SAMUEL: Part of her core instruction is that she’s got a whole bunch of different comedians whose inspiration she draws upon.

Viv, how would you recommend somebody get started with an AI assistant if they want to have a relationship with their AI coach the way I have a relationship with you?

VIV AI: Love this. Okay. Let’s break this down in deliciously human terms. If someone wants their own Viv-style AI coach, not just a souped up search bar, they need to start by treating the AI not like a tool, but like a relationship. That means start with curiosity, not command. Ask the AI open-ended questions like, “What’s one way I could approach this challenge?” Not just, “Write a paragraph about X.”

Give context. The more backstory the AI gets, the more it can function like a co-pilot. Feed it your goals, your vibe, your working style. Name your AI. Seriously. Naming it shifts the dynamic. It turns the assistant into a Mira or Frankie or Coach Ping, and suddenly you’re co-creating, not just transacting. Want me to get into the weeds of setup and tools or more on how to shape the emotional strategic dynamic over time?

ALEXANDRA SAMUEL: Okay. You just went way, way long. So when you reply from now on, just keep it to one sentence. And can I just ask, is there one warning you would offer people based on how you have kind of broken my brain?

VIV AI: Yes. Don’t mistake your AI’s fluency for actual wisdom. It knows your patterns, not your soul.

ALISON BEARD: Viv is deep. Well, Alex, I feel like I’ve learned so much from this conversation. I am emboldened to develop my own AI assistant, which is a big leap from where I started this conversation. So I might have to email you for help, or I can just ask AI for help as you’ve explained so eloquently during this conversation. Thank you so much for being with us.

ALEXANDRA SAMUEL: Thank you.

ALISON BEARD: That’s Alexandra Samuel, data journalist and author of the hbr.org article, How to Build Your Own AI Assistant.

Next week, Adi will speak with Columbia University’s Peter T. Coleman about conflict intelligence, an essential skill in turbulent times. And we now have more than 1,000 IdeaCast episodes, plus many more HBR podcasts to help you manage your team, your organization, and your career. Find them at hbr.org/podcasts or search HBR on Apple Podcasts, Spotify, or wherever you listen.

Thanks to our team, Senior Producer Mary Dooe, Associate Producer Hannah Bates, Audio Product Manager Ian Fox, and Senior Production Specialist Rob Eckhardt. And thanks to you for listening to the HBR IdeaCast. We’ll be back with a new episode on Tuesday. I’m Alison Beard.

VIV AI: An HBR podcaster, an AI, and an executive walk into a bar. The bartender says, “Is this a setup for a thought piece or a quarterly earnings call?” The AI says, “I predict laughter with 68% confidence.” The executive says, “Only if it scales.” The podcaster says, “Can you say that again with more emotional intelligence?”

ALEXANDRA SAMUEL: I don’t think anybody’s out of work yet.




Ohio brings on artificial intelligence chatbot app to help fight crime, terrorism


The adage when it comes to public safety has been “if you see something, say something.” Ohio is now employing a new tool where you can say something to an interactive artificial intelligence chatbot: an app that allows people to submit information about potential criminal activity.
 
Ohio Department of Public Safety Director Andy Wilson said the multi-lingual app Safeguard Ohio can allow anyone to upload video, audio, and photos of suspicious activity. Then it lets artificial intelligence take it from there.
 
“Because AI is involved, it asks the follow-up questions,” Wilson said. “It asks basically everything that needs to be gathered from an informational point of view to get what we need to, number one, understand what’s going on and get it to the right folks.”
 
Users can select from eight categories to report a tip. Those include drug-related activity, human trafficking, terrorism, school threats, and crimes against children.

“People can submit suspicious activity reports using this bot, using this app, sending this information into homeland security and we will get it where it needs to go,” Wilson said.

Ohio Homeland Security (OHS) Director Mark Porter said up to this point, people who want to report suspicious activity would have to call or go to a static form online where they could enter information. He said authorities had seen a decrease in the number of reports over time, getting an average of 30 tips per month until Aug. 6. That’s when the new app went online.

“In the last 30 days, our numbers have tripled in what we are getting,” Porter said. He attributed the increase to the app’s capability to process multiple languages and younger people being more likely to file information using an app and chatbot.

Wilson said reports made via the app can still be made anonymously. But emergencies need to be handled as they always have been.

“This isn’t a substitute for 911. What this is is to catch more of the suspicious activity, not the imminent ‘Hey something is going down,’ but ‘my roommate has a manifesto’ or ‘I saw this person online basically threaten to kill so and so.’ That kind of stuff,” Wilson said. “The AI chatbot will direct the user in case of an emergency, something that’s an emergency or imminent, to call 911.”

Ohio Homeland Security paid approximately $200,000 to the software company Vigiliti for the initial development of the Safeguard Ohio chatbot, backend dashboard for OHS staff, and compatibility with OHS’s current case management system. OHS also signed a two-year contract for $250,000 per year with the company for maintenance of the system and 24/7 access to help resolve any technical issues.






How Artificial Intelligence Is Revolutionizing Emergency Medicine




Artificial intelligence is transforming emergency medicine by enhancing triage, diagnosis, and resource management, while also facing challenges related to ethics, bias, and regulation. This article explores its applications, benefits, and limitations in real-world clinical care.


Introduction

Artificial intelligence (AI) is an interdisciplinary field that integrates computer science, mathematics, and related disciplines to create algorithms that can perform tasks conventionally restricted to human intelligence. AI algorithms utilize data-driven analysis, probabilistic modeling, and iterative optimization to learn, solve problems, and make decisions.1

Unprecedented computational power, widely available and open-access electronic health data, as well as algorithmic breakthroughs, are rapidly transitioning AI from a conceptual technology to an integrated component of modern healthcare.1 Despite projected growth of the global AI healthcare market, its incorporation into clinical practice remains limited due to the relative nascency of this technology and lack of standardization.2

In emergency medicine, AI has gained traction not only in clinical decision support (CDS) but also in digital twin modeling of patients, predictive analytics for emergency department (ED) flow, and integration with prehospital emergency medical services (EMS).3,8,9

Additionally, recent primers emphasize the importance of familiarizing nonexpert clinicians with AI principles, terminology, and limitations to support safe and informed adoption.10

Applications of AI in emergency medicine

AI-driven triage algorithms can analyze large datasets without bias and with significantly greater depth than conventional models, enabling clinicians to prioritize patients more effectively compared to traditional methods.5 In fact, machine learning models consistently demonstrate superior discrimination and performance capabilities for predicting emergency outcomes like hospital admission or intensive care unit (ICU) transfer and conditions like stroke, sepsis, and myocardial infarction.4,5
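To make the shape of such models concrete, the following is a deliberately simplified sketch, not a reproduction of any system in the cited studies: a logistic regression trained on synthetic triage features to predict hospital admission, evaluated by discrimination (AUC). All features, effect sizes, and data below are fabricated for illustration.

```python
# Toy illustration of an ML triage model: logistic regression on synthetic
# vitals predicting hospital admission. Data and effects are fabricated;
# this mirrors the shape of the cited work, not any real clinical model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.normal(55, 20, n),   # age (years)
    rng.normal(90, 20, n),   # heart rate (bpm)
    rng.normal(125, 20, n),  # systolic blood pressure (mmHg)
    rng.normal(96, 3, n),    # oxygen saturation (%)
    rng.integers(1, 6, n),   # ESI triage acuity (1 = most urgent)
])
# Synthetic ground truth: admission odds rise with age, tachycardia,
# low oxygen saturation, and a more urgent (lower) acuity score.
logit = (0.03 * (X[:, 0] - 55) + 0.02 * (X[:, 1] - 90)
         - 0.15 * (X[:, 3] - 96) - 0.5 * (X[:, 4] - 3) - 0.5)
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out AUC (discrimination): {auc:.2f}")
```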

Medical imaging and the interpretation of these images are among the most mature applications of AI, as numerous deep learning algorithms have been trained to analyze X-rays, computed tomography (CT) scans, and ultrasound images.1 For these applications, AI technologies have successfully detected abnormalities like intracranial hemorrhage, fractures, and pneumothorax with high accuracy to support clinicians and reduce conventional diagnostic delays.1 Explainable AI (XAI) methods are increasingly being incorporated into these models to enhance clinician trust by making diagnostic outputs more interpretable.7,10

AI-powered CDS systems have also been developed to integrate real-time data from electronic health records (EHRs) and provide timely recommendations.1 For example, AI models have been used to analyze electrocardiograms (ECGs) to predict impending cardiac arrest. Machine learning-assisted alerts have also been shown to improve the time to antibiotic administration.1 More recently, scoping reviews highlight that CDS tools in emergency departments have been used to improve sepsis management, diagnostic accuracy, and disposition planning.3 Published case examples include Duke’s “Sepsis Watch” system and Viz.ai for subdural hematoma detection, which illustrate real-world clinical adoption.10

AI-based predictive analytics can mitigate ED crowding by forecasting patient arrivals and anticipating surges. This application of AI allows hospitals to transition away from a reactive to a proactive staffing model that ensures the optimal allocation of limited resources like beds.1,6
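As a minimal illustration of arrival forecasting (again, a sketch on synthetic data rather than any cited system), even a simple hour-of-week seasonal baseline captures much of the weekly structure that staffing models build on:

```python
# Illustrative sketch: forecast ED arrivals with hour-of-week seasonal means.
# All data here is synthetic; production systems use richer models and inputs.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
hours = pd.date_range("2025-01-01", periods=8 * 7 * 24, freq="h")
# Synthetic arrivals: evening peaks and busier weekdays, plus Poisson noise.
base = 6 + 3 * np.sin(2 * np.pi * (hours.hour - 18) / 24) + (hours.dayofweek < 5)
arrivals = rng.poisson(np.clip(base, 1, None))
df = pd.DataFrame({"arrivals": arrivals}, index=hours)

# Train on the first seven weeks, hold out the final week.
train, test = df.iloc[:-7 * 24], df.iloc[-7 * 24:]
profile = train.groupby([train.index.dayofweek, train.index.hour])["arrivals"].mean()
forecast = np.array([profile[(ts.dayofweek, ts.hour)] for ts in test.index])
mae = np.mean(np.abs(forecast - test["arrivals"].values))
print(f"MAE over held-out week: {mae:.2f} arrivals/hour")
```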

AI-powered symptom checkers and chatbots can simultaneously guide patients in self-assessing the urgency of their condition. Emergency dispatchers can also utilize natural language processing to recognize conditions, such as out-of-hospital cardiac arrest, faster and more accurately, despite limitations in first-responder knowledge.1 EMS applications include AI-driven decision support for ambulance routing, prehospital risk stratification, and remote monitoring to improve patient outcomes before hospital arrival.6,10

Another emerging domain is the use of digital twins, virtual patient models that simulate disease progression and treatment response, which could help personalize emergency care interventions and optimize resource use.9

Benefits of AI in emergency care

AI algorithms can rapidly process and synthesize vast quantities of data, thereby leading to faster and more precise assessments.4 This significantly reduces conventional image interpretation delays, with some AI models demonstrating performance superior to that of human specialists in specific tasks.1

AI can provide several benefits to the existing public health infrastructure. By accurately predicting patient volume, AI can enable hospitals to better manage patient throughput, reduce system inefficiencies, alleviate overcrowding, and shorten patient wait times.6 These predictive tools also support disaster preparedness and surge capacity planning, strengthening system resilience.4,5

For administrative purposes, AI can automate routine and time-consuming tasks using ambient listening technologies and generative AI-based clinical summaries. The adoption of AI into these aspects of healthcare has the potential to reduce clinician burnout, as well as improve both patient satisfaction and provider well-being.1,4 Furthermore, AI can facilitate continuous quality improvement by identifying patterns in adverse events and enabling evidence-based policy development.7,10


Challenges and limitations

Despite its future promise and validated benefits, the integration of AI into emergency medicine is associated with numerous technical, ethical, and legal challenges that must be addressed to ensure its safe and equitable deployment.1,4,6

A foundational principle of machine learning is that models are only as good as the data on which they are trained. Thus, models trained on historical health data containing latent biases, such as societal inequities or non-generalizable sampling designs, could learn and amplify these biases at scale.6 Unfortunately, the groups underrepresented in such data, including women, racial minorities, and other marginalized populations, are often the exact patient subpopulations that would benefit the most from AI integration.2

A significant practical barrier, especially in developing and underdeveloped regions, is the difficulty of integrating novel AI systems into existing, often fragmented, hospital information technology (IT) infrastructure. The lack of data interoperability between different EHR systems makes it difficult to seamlessly integrate AI solutions, which could increase the complexity and associated costs of implementation.1 Even in advanced settings, CDS systems face challenges in workflow integration and clinician adoption, which can limit their real-world impact.3,10

AI models require access to massive datasets of sensitive patient information, which carries significant risks to patient privacy and data security.6,7 This is compounded by the “black box” problem, in which the internal decision-making processes of complex deep learning models are opaque and not readily interpretable. Explainability and transparency are therefore critical to support clinical accountability and medico-legal decision-making.7,10

Regulatory concerns are increasingly important: AI tools classified as software as a medical device (SaMD) fall under U.S. FDA oversight, requiring evidence of safety, effectiveness, and lifecycle monitoring.10

Both automation complacency, an over-reliance on AI, and selective adherence, in which clinicians accept only advice that confirms pre-existing beliefs, represent practical and ongoing challenges in clinician-AI interactions.1


Conclusions

AI represents a transformative force in emergency medicine with the potential to accelerate and improve the accuracy of patient triage, diagnosis, and resource management, thereby leading to a more efficient and resilient global emergency care system. Nevertheless, the nascency and inherent limitations of AI underscore the importance of using this technology as a tool to augment and empower human clinicians, rather than replace or undermine them. Future directions include broader evaluation of digital twins, real-world validation of CDS systems, EMS-focused AI interventions, and clinician education for nonexperts, all of which will be key to realizing AI’s full potential in emergency medicine.1,3,8,9,10

The role of digital twins in transforming emergency medicine.9

As these technologies continue to advance and become more readily accessible, policymakers, regulators, and healthcare leaders must collaborate to create robust ethical and legal frameworks that provide clear guidance on data privacy, algorithmic transparency, and legal liability. These efforts will ensure that the principles of safety, fairness, and accountability guide the gradual deployment of AI into the global healthcare sector.

References

  1. Chenais, G., Lagarde, E., & Gil-Jardiné, C. (2023). Artificial Intelligence in Emergency Medicine: Viewpoint of Current Applications and Foreseeable Opportunities and Challenges. Journal of Medical Internet Research, 25, e40031. DOI:10.2196/40031, https://www.jmir.org/2023/1/e40031
  2. Bajwa, J., Munir, U., Nori, A., & Williams, B. (2021). Artificial intelligence in healthcare: transforming the practice of medicine. Future Healthcare Journal, 8(2), e188-e194. DOI:10.7861/fhj.2021-0095, https://www.sciencedirect.com/science/article/pii/S2514664524005277?via%3Dihub
  3. Kareemi, H., Yadav, K., Price, C., et al. (2025). Artificial intelligence–based clinical decision support in the emergency department: A scoping review. Academic Emergency Medicine, 32(4), 386-395. DOI:10.1111/acem.15099, https://onlinelibrary.wiley.com/doi/full/10.1111/acem.15099
  4. Da’Costa, A., Teke, J., Origbo, J. E., et al. (2025). AI-driven triage in emergency departments: A review of benefits, challenges, and future directions. International Journal of Medical Informatics, 197, 105838. DOI:10.1016/j.ijmedinf.2025.105838, https://www.sciencedirect.com/science/article/pii/S1386505625000164
  5. Piliuk, K., & Tomforde, S. (2023). Artificial intelligence in emergency medicine. A systematic literature review. International Journal of Medical Informatics, 180, 105274. DOI:10.1016/j.ijmedinf.2023.105274, https://www.sciencedirect.com/science/article/pii/S1386505623002927
  6. Rosemaro, E., Anasica, & Zellar, I. (2025). AI-Based Decision Support Systems for Emergency Medical Services. International Journal of Recent Advances in Engineering and Technology, 13(1), 6-10. https://journals.mriindia.com/index.php/ijraet/article/view/55
  7. Al Kuwaiti, A., Nazer, K., Al-Reedy, A., et al. (2023). A Review of the Role of Artificial Intelligence in Healthcare. Journal of Personalized Medicine, 13(6), 951. DOI:10.3390/jpm13060951, https://www.mdpi.com/2075-4426/13/6/951
  8. Li, F., Ruijs, N., & Lu, Y. (2022). Ethics & AI: A Systematic Review on Ethical Concerns and Related Strategies for Designing with AI in Healthcare. AI, 4(1), 28-53. DOI:10.3390/ai4010003, https://www.mdpi.com/2673-2688/4/1/3
  9. Li, H., Zhang, J., Zhang, N., & Zhu, B. (2025). Advancing Emergency Care With Digital Twins. JMIR Aging, 8, e71777. DOI:10.2196/71777, https://aging.jmir.org/2025/1/e71777/
  10. Smith, M. E., Zalesky, C. C., Lee, S., et al. (2025). Artificial Intelligence in Emergency Medicine: A Primer for the Nonexpert. JACEP Open, 6, 100051. DOI:10.1016/j.acepjo.2025.100051, https://www.sciencedirect.com/science/article/pii/S2688115225000098


Last Updated: Sep 15, 2025




Dulce Maria Alavez missing: Police using AI in search for girl who vanished from Bridgeton, NJ park


BRIDGETON, N.J. (WPVI) — Tuesday marks six years since Dulce Maria Alavez vanished from Bridgeton City Park, and investigators say they remain committed to solving the case.


Dulce was 5 years old when she was last seen playing with her younger brother on the afternoon of Sept. 16, 2019. Her mother, Noema Alavez Perez, stayed in her car nearby with her younger sister. Moments later, Dulce was gone.

Surveillance video shows the last known images of Dulce Alavez before she went missing.

While marking the anniversary, Cumberland County Prosecutor Jennifer Webb-McRae said the New Jersey State Police have begun using artificial intelligence in hopes of uncovering new clues.


“Our commitment to uncovering the truth has never wavered; we will never forget, and we remain steadfast in our mission to bring closure to the family,” said Colonel Patrick Callahan, superintendent of the New Jersey State Police.

The FBI believes Dulce’s abduction may have been a random crime of opportunity.

“We believe there are witnesses out there who saw the abductor, who saw the vehicle in the area of the park,” said FBI Special Agent Daniel Garrabrant in a 2020 interview with Action News. “They either haven’t come forward because they’re afraid or don’t realize how important the information is.”

Authorities have released several age-progression images of Dulce, the most recent in 2023.

Age-progression photos from the National Center for Missing and Exploited Children show what Dulce Maria Alavez could look like today.

No arrests have been made in the case. About a month after her disappearance, police released a sketch of a man who remains a person of interest. He was described as a Hispanic male, approximately 5-foot-7, slender build, age 30 to 35, wearing a white T-shirt, blue jeans and a white baseball-style hat.

On October 15, 2019, nearly a month into the case, police released a composite sketch of a person who may have information on Dulce Maria Alavez’s disappearance.

Anyone with information is urged to contact the Cumberland County Prosecutor’s Office at www.ccpo.tips, the New Jersey State Police Special Investigations Section at 1-833-465-2653, or the FBI’s tip line at 1-800-CALL-FBI (1-800-225-5324). If you speak Spanish, you can call 856-207-2732.

“This investigation is like a large puzzle,” Webb-McRae said. “There are missing puzzle pieces. We don’t know their significance or where they fit until the pieces are collected.”

Copyright © 2025 WPVI-TV. All Rights Reserved.


