
Artificial Intelligence And State Tax Agencies

In this episode of Tax Notes Talk, Ryan Minnick of the Federation of Tax Administrators discusses how state tax agencies are approaching artificial intelligence and shares insights from the FTA’s upcoming briefing paper on generative AI and its 2024 tax agency survey.

Tax Notes Talk is a podcast produced by Tax Notes. This transcript has been edited for clarity.

David D. Stewart: Welcome to the podcast. I’m David Stewart, editor in chief of Tax Notes Today International. This week: AI and state tax.

In the past few years, every industry has looked at implementing artificial intelligence into their workflows, and state tax agencies are no exception. But the sensitive nature of the data tax agencies must handle requires an especially careful use of the technology.

So how can state tax administrators use artificial intelligence, and how can agencies balance the adoption of new technology with the need to protect taxpayer information?

Here to talk more about this is Tax Notes reporter Emily Hollingsworth. Emily, welcome back to the podcast.

Emily Hollingsworth: Thanks, Dave. Glad to be back.

David D. Stewart: Now, there have been a lot of developments in the past year or so with state tax administrations and the adoption of AI. Could you give us some quick background on what’s been happening and where we are?

Emily Hollingsworth: Absolutely. Among states, AI policy and legislation have, in a word, exploded this year. The National Conference of State Legislatures recently said that all 50 states, the District of Columbia, Puerto Rico, and the Virgin Islands have introduced legislation about AI this year. Twenty-eight of those states and the Virgin Islands enacted AI legislation, including measures regulating AI and prohibiting its use for certain criminal activity.

For quick context, I’ll be talking about two forms of AI. There’s generative AI, which can generate text, images, or other items. ChatGPT is a well-known example of a technology that uses generative AI. Then there’s machine learning AI. This is an older technology that uses algorithms to identify patterns in data.

When it comes to examples of state tax administrations and recent developments on AI, California’s Department of Tax and Fee Administration, or the CDTFA, announced back in late April, early May, that it will be working with the firm SymSoft Solutions to deploy a generative AI solution to enhance the department’s customer services. This solution is trained on the department’s reference materials, including manuals, guides, and other documents. When a taxpayer has a question, the solution will generate potential responses that the department agent can provide to the taxpayer. The goal is to use the solution to cut down on customer wait times and help alleviate workloads for agents during peak tax filing periods. Now, while the CDTFA and SymSoft are under a year-long contract to deploy the solution, as I understand it, the solution isn’t currently being used in real time for customers’ questions.

David D. Stewart: Well, I know you’ve been following this area pretty closely. Could you give us an idea of what sort of things you’ve been working on lately?

Emily Hollingsworth: Definitely. I’m currently working on a report about this generative AI solution from the CDTFA, looking more closely into the initial testing period the solution underwent in 2024, as well as the project’s next steps.

A separate report I’m working on looks into machine learning AI tools used by tax departments for decades, for everything from tax return processing to fraud detection and prevention. As state leaders, lawmakers, and advocacy groups call for oversight and regular inspection of AI tools, I’m looking into whether these machine learning tools would also be subject to this sort of inspection and oversight.

David D. Stewart: Now, I understand you recently talked with somebody about this. Who did you talk to?

Emily Hollingsworth: I talked to Ryan Minnick, who is the chief operating officer with the Federation of Tax Administrators.

David D. Stewart: And what sort of things did you get into?

Emily Hollingsworth: Well, the Federation of Tax Administrators, or FTA, has been doing an enormous amount to provide training and resources for state tax agencies looking to pilot or implement AI solutions. We got into the FTA’s upcoming briefing paper on generative AI, as well as its 2024 tax agency survey.

We also delved into topics like data security, transparency and trust, and the roles that state tax agencies are taking when it comes to piloting or implementing AI solutions.

David D. Stewart: All right. Let’s go to that interview.

Emily Hollingsworth: Ryan Minnick, welcome to the podcast.

Ryan Minnick: Awesome. Thanks so much for having me, Emily. I’m excited to chat with you today.

Emily Hollingsworth: Likewise. So the Federation of Tax Administrators is developing a briefing paper on generative AI technology. The paper is intended to equip tax agencies with information about generative AI, particularly as states and state legislatures face increasing pressure to stay competitive with other states and pilot or deploy generative AI.

When is the briefing paper expected to be released, and what aspects of generative AI will this briefing paper cover?

Ryan Minnick: It’s a great question, and I guess I’ll back up a little bit and explain how this project started and where we are in the scope of things. As you know, FTA, working with all the states, is very focused on emerging issues. In the 10 years I’ve been here, whether it’s blockchain or a movement to the cloud, there’s always been some sort of technical innovation on the horizon. And whenever we see a big one, we tend to organize information around it to help our members understand what their peers are thinking and what’s happening in the private and academic sectors, just so they can make informed decisions for themselves. The unique thing about states is that they’re sovereign entities. They’re going to take whatever approach they feel is best, but they rely on organizations like FTA to convene them, guide them a little bit, and give them the information that’s most important to them.

So a little over a year and a half ago, we formed an AI working group, and we did it in two parts. We started with education: Over the course of a 12-month period, we hosted two rounds of briefings with experts from every corner of the technology and academic sectors. We had researchers who focused on the application of transparency and ethics in AI, researchers who focused on large language models and how they actually work, and everything at every depth, from technical to business.

We had some private sector groups graciously share with us concepts that they had built for other sectors that leverage these technologies, just so the attendees could get a feel for how you take this fantastical thing called generative AI and start to translate it into what you could actually use it for in your day-to-day work — because there’s been, I think, so much really good marketing by the companies that produce generative AI technologies that it becomes a little bit hard to conceptualize what you might actually do with it in a government agency. How is it potentially going to help us? What’s the innovation there?

So after those two rounds of briefings, we took a set of volunteers representing a good number of our members. A couple dozen people got together, and they organized themselves into three groups. The first group has been focused on background education: helping anyone who’s reading the white paper understand the terminology that’s used frequently around the technology, the ways we encounter the technology in our day-to-day lives, and other forms of AI, because people often confuse generative AI with things like machine learning and other tools that have been around for quite some time.

We have a second group that focuses on opportunities — not necessarily things in production, although some are, but different examples of where this technology might be or could be utilized. And they even went a step further and fleshed out some of those concepts just to articulate to both business and technology stakeholders what the possibilities are.

And then there’s the third group. I have to admit, my partner in crime for this particular working group is our general counsel, Brian Oliner, who formerly was the AG representing the Maryland Comptroller’s Office. Every time he and I talk about emerging technology, he’s such a good foil for a nerd like me, because he’s a seasoned attorney who has seen complex vendor contracts and understands the ways that states need to protect themselves from a legal and statutory standpoint. He and I always like to hash out the details of technology. So he worked primarily with the third group, which is our considerations and risks group.

The goal is that everyone from executive and business stakeholders in the agency through to the technology stakeholders who deal primarily in strategy will be able to consume the appropriate parts of this white paper and come out on the other side armed with information about how the technology might be applied, understand it a little bit better through the tax agency technology lens, and have at their fingertips a bunch of other resources to go out and look at.

Our teams aren’t rewriting the book on generative AI. There are so many brilliant researchers out there who are coming up with innovations and reports every day. So in some cases we’re pointing the reader to those really great resources and just giving them the tax context they need to think about it. That’s been the whole purpose of the group and how we’ve structured it so far.

And what’s really exciting, and why I’m so glad about your question, is that I finally get to say it: By the time our FTA technology conference happens the second week of August out in Tacoma, we will have published the white paper. It’s the only technology conference that serves the tax agency perspective. There are a lot of tax technology conferences out there, but they’re usually practitioner or CPA driven; this one’s hosted by us. It’s for tax agencies and people who care about what tax agencies are doing with technology.

And so that will be a really great venue, because the IT and business leadership at the agencies will have had the white paper in their hands for a little while by the time they get together and convene. They can take what they do at our technology conference every year — think about the possibilities, look at what people are doing with technology, and hear about successes and new ideas — to the next level and see how this fairly new yet very popular-to-talk-about emerging technology might fit in.

Emily Hollingsworth: That sounds really interesting, and we certainly look forward to the briefing paper. We’ve discussed the working groups, and I was curious to know: Is that program still open to FTA member agencies who may be interested in participating? And what are states learning or taking away from these work groups and sessions?

Ryan Minnick: Absolutely. It’s a great question. So the functional part of the working groups is starting to wrap up. As we finalize the white paper — at least the first edition of it — the folks who have worked in the three drafting groups I mentioned before will have an opportunity to collaborate with us on the final version that is ultimately published to our members.

And it gets to the question that you asked: What are states already getting out of this? I actually asked this question of our program co-leads when I was moderating the panel on this at the annual meeting, and the answers were really better than I had hoped. You always hope that people who participate in a working group get value out of it even before the product is finished. And one of the things they shared was that it was helpful simply to hear how their peers — who, as part of the working group, were probably already thinking about this more than the average employee in their agencies — were thinking about it, to hear the level of curiosity, and to hear the optimism for the possibilities.

And even some of the working group members have shared with me that this has actually helped make the technology feel a little bit more tangible and a little bit more real. I think the hype curve for generative AI has been really substantial. The underlying technology has been around for a couple of decades, but if, for our conversation today, we take ChatGPT launching 20 months ago or so as the inflection point of the generative AI craze, the hype curve was very scary for people in a lot of roles, particularly in government, because the initial premise was, oh my gosh, this can do everything. It can think. It’s a facsimile of a human; it can do all these wonderful things.

But of course, as time has gone on, we’ve all realized that, like any technology tool, it’s a new version of a tool that’s going to hopefully help improve productivity. It’s going to help our organizations do work better. But if you’ve played with some of these tools recently, they’re not taking anybody’s job away anytime soon; if anything, they’re probably freeing up knowledge workers to do work a little bit more efficiently, a little bit more effectively.

The bigger thing that I think is coming out of these trends, and that I think some of our members saw as they were participating in the working group, is the real opportunity for training in this space. This is a shift that is more like the shift from a typewriter to a word processor than it is a shift from a server in your server closet at the office to a server in the cloud. This is a fundamental change in how you interact with technology, which requires you to just completely rethink everything that you’re doing.

And it’s also one of those inflection points where people entering the workforce now are already receiving substantial exposure to it. And so you’re going to have a point where, much like the typewriter to the word processor, you’re going to have a big chunk of your workforce who you’re going to have to upskill and train on it, because they’re going to experience it for the first time when they’re in the middle stages of their career, but you’re at the same time going to be bringing people in at the early stages of their career who are going to be natively understanding it. And it doesn’t happen a lot with emerging technologies; it’s always a little bit more subtle than, I think, the point that we’re at now.

Emily Hollingsworth: Thank you. How transparent do you believe that state tax agencies should be when informing the public about AI use?

Ryan Minnick: It’s a tricky question to answer, not because I don’t believe in transparency, but because, like I mentioned before, every state is its own sovereign island, with its own regulations and rules to follow. And so I genuinely believe that our members are as communicative with all their stakeholders as they’re able to be.

And the only times it might take a little bit longer to share information is when sharing it might either compromise someone’s data security — any data that we want to protect — or, in the case of fraud fighting, give away to criminals how we’re working to prevent them from committing crimes. So I think those tend to be the two areas where you see maybe a lag in that sharing, because we want to make sure that we’re not oversharing.

So 10 or 15 years ago, we started developing better methods to protect individuals’ identities when they were filing their individual income tax returns — a collaboration with the IRS and the tax software industry for the Security Summit. A lot of that work is now very public; it’s been made available, and people understand what that group is and how it works.

But at the time, as we were putting those frameworks in place, we shared as much as we were able to without compromising the nature of the work, because we prioritized making sure that the criminals couldn’t figure out what we were doing as we were doing it. Unfortunately, the problem with the public internet is criminals — I know in the tax world we use “fraudster” a lot, but really criminals of any variety, whether they’re committing fraud individually, targeting someone for identity theft, or operating at scale with data from the dark web. Even in those areas, we are, in general, as transparent as we’re able to be.

But when it comes to individual members and what they do, I don’t govern what information they release and at what time. So I think we as an organization commit to keeping our members informed and, to the extent we can, keeping the broader tax community informed of trends as they happen.

Emily Hollingsworth: Thank you. We’ll move on now to the survey that the FTA had released in 2024 with EY. The survey looked at 37 state tax agencies and two city tax departments.

I thought its section on AI, in particular, was pretty interesting. For example — and I’ll read a few of the numbers from the survey — it said that 15 percent of tax administrations are “conducting pilots or are already using AI in core functions.” It also said that 9 percent of respondents said machine learning AI is being used in core functions, while 12 percent said they’re conducting pilot programs on machine learning technology — again, distinct from generative AI. So I was curious: Is the FTA planning to release a state tax agency survey this year?

Ryan Minnick: So we’ve not released the follow-up survey yet; we’re still working on determining what that survey design looks like. We see great value in continuing this effort. That was the first comprehensive survey we had issued in well over a decade. It was a priority of Sharonne Bonardi, our executive director, who formerly was the deputy comptroller for the state of Maryland. And when she joined FTA, one of the resources she wanted us to have — primarily for our members, but also for the general public — was a state of the state tax agencies, helping people understand the priorities and the ways that tax agencies were thinking about emerging issues.

And that was really the genesis of the report you mentioned — which I think is a great read; everybody should go to our website and download it. When it comes to the AI question, we actually drafted, sent out, and published that survey right as generative AI was blowing up. You wish you had a time machine, right? I wish we could have structured that question a little bit differently, because as we’ve been on a road show for the last year and a half talking about insights from the survey, I get a question about this little section all the time.

And ultimately the nonexciting answer is that the question was so broad that it was really incredibly difficult to know exactly how somebody was thinking when they answered it. We asked about AI because at the time, before generative AI came out, AI was seen as advanced machine learning — maybe some algorithmic work, maybe some natural language processing. Think of when you call into a phone tree and say naturally what you’re looking for, and the phone tree tries its best to help you find the right person. That was really the intent of the question when we were writing it, because generative AI had only just started emerging — we hadn’t quite seen the splash from ChatGPT yet.

Of course, now fast-forward, and everybody who sees that question thinks, “All these states have AI in production? Oh my goodness, this is crazy.” No. They don’t — not the AI you’re thinking of, the generative AI that’s on everybody’s brain today. It’s AI in the sense that a lot of processing in tax agencies across the country involves machine learning. It’s programmatic. It’s looking for patterns in terms of fraud. It’s looking for noncompliance — so criminal fraud versus just general tax fraud, people who either underreport or don’t accurately or correctly answer something. It’s also looking for things like accuracy; it’s monitoring trends. There are a lot of uses for those technologies.

So I think the more accurate, interesting insight from that survey is exactly what you pointed out: the machine learning piece. Even years after a lot of these machine learning trends started to hit, there are still agencies looking at machine learning and how they can use it in different ways. And I’ll say, as a technologist supporting tax agencies, I think that’s a great thing, because there are a lot of really great uses for what much of the public considers old technology now that we’ve all moved on to generative AI. We all want Siri to work better on our iPhones, and we’re not really thinking about anything else. But machine learning in some contexts is a better solution to a lot of the problems people want to solve with generative AI. And it’s not only better in terms of safety; it can be better in terms of performance and cost.

Generative AI certainly has its place: It’s powered by large language models; it handles language super well. There’s a lot you can do with it. It requires a lot of configuration and a lot of training so that you get accurate and nonhallucinatory answers. But machine learning — that’s good old-fashioned math. That’s really sophisticated math. And what generative AI can’t do really well is math. There are some exceptions, but for the most part it’s good at language. And most of tax is math. So we actually find ourselves with pretty advanced machine learning capabilities that are available, that have been for a long time, and that a lot of agencies use in production — capabilities that, unfortunately for the hype of it, fall under that same umbrella of AI.

Even when I’m talking about these things on stage somewhere, I’ll typically talk about machine learning, and I’ll talk about generative AI. I almost never use the AI term broadly, because AI is artificial intelligence, and so far nothing we’ve developed is artificially intelligent; it just has the appearance of it.

So doing math really fast seems very intelligent. So that’s machine learning. Interpreting language really fast, or drafting a country music song in the style of Garth Brooks or whatever people have asked ChatGPT to do, that seems very artificially intelligent, but in neither of those cases is that term accurate.

I digress a little bit, because I know your question was about our survey. But it’s so interesting: Going through the results and seeing what states were thinking about and how they responded, my takeaway was that I wish I could go back in time and ask separately about the different emerging forms of the technology, because I think we would’ve gotten a more representative answer. I think we got some good insights, but 12 percent of states are not actively using generative AI in production. So I hope no one reads the report and thinks that.

Emily Hollingsworth: I guess that also leads into my question. So we have the percentage — for example, 12 percent said that they’re piloting machine learning technology. Do we know how many states that translates to?

Ryan Minnick: Yeah. Based on the survey itself and the data team that put it all together, to me that would be a handful — four or five of the states that responded to the survey. But like I said, that question is a little easy to misread. It could be that four or five states at the time of the survey were actively piloting a new use of the technology; those same states could already have had that technology in place doing something else at the same time. So that’s something else I think bears explaining.

For people listening who are curious about how tax agencies work: We don’t sit still. One of the things I’ve learned in my 10 years at FTA is that agencies are always looking forward — how they can do their job better, how they can better serve the citizen, how they can take innovations and leverage them to do more with less, because unfortunately in government, that’s the situation we often find ourselves in. Budgets don’t necessarily grow as much as we’d like, or sometimes they get cut. Oftentimes legislatures ask agencies to do new things and don’t always give them money to do them. So everybody’s trying to do the most with what they have.

And innovative technologies could be used in any line of business — and tax agencies have numerous lines of business: everything from receiving data and traditional tax return processing to auditing, collections, customer experience, and the legal and tax policy group that has to interpret things that come out of [the] legislature. There are a lot of moving parts. So with this question, the way I interpret the answer is that four or five or six states at the time of the survey were looking at machine learning to potentially solve a problem somewhere in their agency, irrespective of wherever they might already have been using that technology to solve problems in a different place.

Emily Hollingsworth: This is a question that I had when we were discussing state transparency. It relates to California’s announcement in April that it had secured a contract with a company and is testing a generative AI solution. Now, this agreement is going to test the solution in a limited testing environment, so it’s not necessarily something that’s going to go out to the public immediately or be used during large periods of time like filing season.

But I was curious to know your thoughts on this development. California has been very transparent about its developments in AI and has done a lot of work to vet and test those solutions.

Ryan Minnick: Well, it’s a great question, first of all. California is certainly one of the larger states — certainly by staff, they have the largest number of people working on tax administration across the several agencies that do that work. I think even in terms of technology, they’re a great example of transparency in government.

So you look at their technology modernization plan that they’ve been doing — I think they’re in part two, and I forget what phase of part two they’re in — but they started publicly sharing that modernization strategic plan 10-plus years ago, when they were in the first phase, or the first part. So it doesn’t surprise me at all that they were incredibly transparent about piloting a technology and going about the process of securing a contract to do so.

I think it’s also helpful that they shared the scope of what they were thinking about. I know that oftentimes parts of the procurement process are public, so people can see what states are doing in different agency areas on a regular basis. But California certainly went one step further in this case. I know you all covered it, and a couple of other media outlets did as well. They shared what they were doing; they said, “We’re looking at the potential for this technology. It’s a very controlled experiment.”

I can’t comment on the project specifically because, first of all, I’m not a part of it. They’re just a member, and we don’t usually get into that level of detail. But in concept, what they did was great. They decided to do something, they shared what they were doing, and now — not being familiar with the project specifically — I presume they’re doing it. And to the extent that they make a decision, they’ll come back and take the next step.

I think that’s, generally speaking, what a lot of agencies look at in terms of a pilot. They want to be very upfront with people who are impacted by whatever they’re testing. Sometimes we’re able to test technologies in a bubble or in a vacuum, and then we don’t necessarily have that need to share what we’re testing. For example, say you want to test the potential of a technology that would leverage generative AI, but you’re not going to test it with taxpayers — you’re just going to test it internally, and only on a synthetic data set, something that’s not even real data or real information but is manufactured for the purpose of testing, just to understand how the technology works. That’s a super great, super safe experiment, because it’s not touching anything sensitive, and if it doesn’t work out, then you didn’t go through a big implementation to put it in front of everybody.

Separately, if you go down the route like California did, and you want to do a limited trial of something, and you can define that scope and you can let folks know about it in the way that makes sense within that state’s rules and how the agency operates and how the state as a whole has policies on it, I think that’s really great, too.

One of the fun things about technology in general — I guess one of the most fun things about technology — is that, especially these days, really good ideas can come from anywhere. And this is true inside of government and outside of government. My tip to nontechnologists, from a technologist, is: Read about technology; understand how people are using it. Think about how they’re using it, and think about how you might be able to use it. We’re seeing so many innovative uses for not just generative AI, but tools in general that are being made available.

And I think part of that is this increasing level of curiosity: people wanting to figure out how to be more effective, how to optimize things a little better, how to deliver on their mission in the way that best serves their stakeholders, whoever those stakeholders may be. In our context, it’s taxpayers and tax administrators, but in somebody else’s, it might be readers, if you’re a journalist. How can you leverage technologies by understanding how other sectors are using them? That’s one of the reasons our work group interviewed so many private sector professionals. Some of them weren’t even in the tax world — they were just people working on generative AI somewhere else in the private sector. We just wanted to hear what they were thinking about, how they approached their projects, and what their view of the field was.

Because most of the time, I can talk about tax all day long, and Emily, you can translate it into journalism, whether it’s tax or otherwise. Likewise, you could tell me something that you’re doing in order to maybe reach readers of Tax Notes a little bit easier, a little bit better, get into their inbox a little bit faster. I’m going to listen with my tax administrator ears and think, “Oh, how could I potentially take this really cool thing that you’re doing and help benefit my members or the taxpayers?” or, “How can I help my members benefit their stakeholders?”

You get into the question of, is transparency important? Absolutely. I think data security is also very important, and fighting crime is also really important. So you have to balance everything. But at the end of the day, there’s so much curiosity out there, and people who pay attention to these things can be helpful. And if you have great ideas for tax administration — there are some great careers in tax administration; I suppose I should put in a shameless plug — share them with tax administrators, because we want to hear them. We want to discharge those duties as faithfully and efficiently and effectively as possible.

Emily Hollingsworth: Absolutely. And Ryan, again, thank you so much for coming on the podcast.

Ryan Minnick: Oh, of course. Happy to do it anytime. It was great talking to you today, Emily.




Dolby Vision 2 bets on artificial intelligence



Dolby Vision 2 will use AI to fine-tune TV picture quality in real time, taking both the content and the viewing environment into account. 

The “Content Intelligence” system blends scene analysis, environmental sensing, and machine learning to adjust the image on the fly. Features like “Precision Black” enhance dark scenes, while “Light Sense” adapts the picture to the room’s lighting.

Hisense will be the first to feature this AI-driven technology in its RGB Mini LED TVs. The MediaTek Pentonic 800 is the first processor with Dolby Vision 2 AI built in.




How an Artificial Intelligence (AI) Software Development Company Turns Bold Ideas into Measurable Impact



Artificial intelligence is no longer confined to research labs or Silicon Valley boardrooms. It’s quietly running in the background when your bank flags a suspicious transaction, when your streaming service recommends the perfect Friday-night movie, or when a warehouse robot picks and packs your order faster than a human could.

For businesses, the challenge is not whether to adopt AI. It’s how to do it well. Turning raw data and algorithms into profitable, efficient, and scalable solutions requires more than curiosity. It calls for a dedicated artificial intelligence (AI) software development company — a partner that blends technical mastery, industry insight, and creative problem-solving into a clear path from concept to reality.

Why Businesses Lean on AI Development Experts

The AI landscape is moving at breakneck speed. A new framework, algorithm, or hardware optimization can make yesterday’s cutting-edge solution feel outdated overnight. Keeping up internally often means diverting resources from your core business. And that’s where specialists step in.

  • Navigating complexity: Modern artificial intelligence systems aren’t plug-and-play. They involve layers of machine learning models, vast datasets, and intricate integrations. A seasoned partner knows the pitfalls and how to avoid them.
  • Bespoke over “one-size-fits-all”: Off-the-shelf AI products can feel like wearing a suit that almost fits. Custom-built solutions mould perfectly to a business’s data, workflows, and goals.
  • Accelerating results: Time is money. An experienced AI team brings established workflows, pre-built tools, and domain expertise to slash development time and hit the market faster.

The right development company doesn’t just deliver code; it delivers confidence, clarity, and a competitive edge.

What an AI Software Development Company Really Does

Imagine a workshop where engineers, data scientists, and business analysts work side-by-side, not just building tools but engineering transformation. That’s the reality inside a high-performing AI development company.

Custom AI solutions

Whether it’s predictive analytics solutions that spot market trends before they peak, computer vision systems that inspect thousands of products per hour, or natural language processing (NLP) engines that handle customer queries with human-like understanding, the work is always tailored to the problem at hand.

System integration

Artificial intelligence is most powerful when it blends seamlessly into the systems you already rely on (from ERP platforms to IoT networks), creating a fluid, interconnected digital ecosystem.

Data engineering

AI feeds on data, but only clean, structured, and relevant data delivers results. Development teams collect, filter, and organize information into a form that algorithms can actually learn from.
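
To make that concrete, here is a minimal sketch of such a cleaning pass using pandas; the file name, column names, and thresholds are hypothetical stand-ins, not a prescribed schema:

```python
import pandas as pd

# Hypothetical raw export; file and column names are illustrative only.
df = pd.read_csv("transactions.csv")

# Drop exact duplicates and rows missing fields the model will need.
df = df.drop_duplicates()
df = df.dropna(subset=["customer_id", "amount", "timestamp"])

# Coerce types so downstream algorithms see consistent inputs;
# unparseable values become NaN/NaT and are dropped.
df["timestamp"] = pd.to_datetime(df["timestamp"], errors="coerce")
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
df = df.dropna(subset=["timestamp", "amount"])

# Cap extreme outliers rather than letting them dominate training.
df["amount"] = df["amount"].clip(upper=df["amount"].quantile(0.99))
```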

Continuous optimization

AI isn’t a “set it and forget it” investment. Models drift, business needs evolve, and market conditions change. Continuous monitoring and retraining ensure the system stays sharp.
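
In practice, monitoring often reduces to comparing what the model sees in production against what it was trained on. Here is a minimal sketch of one common check, a two-sample Kolmogorov-Smirnov test on a single feature; the data and the significance threshold are placeholders:

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(train_values, live_values, alpha=0.01):
    """Flag drift when the live distribution of a feature differs
    significantly from the training distribution (two-sample KS test)."""
    _, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

# Synthetic example: training-time vs. production values of one feature.
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)
live = rng.normal(loc=0.4, scale=1.0, size=5_000)  # shifted mean: drift

if feature_drifted(train, live):
    print("Drift detected: schedule retraining on recent data.")
```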

The Services That Power AI Transformation

A top-tier AI development partner wears many hats — consultant, architect, integrator, and caretaker — ensuring every stage of the AI journey is covered.

AI consulting

Before writing a single line of code, consultants assess your readiness, map potential use cases, and create a strategic roadmap to minimize risk and maximize ROI.

Model development

From supervised learning models that predict customer churn to reinforcement learning algorithms that teach autonomous systems to make decisions, this is where the real magic happens.
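
As a rough illustration, a churn predictor of the sort described might start life as the minimal scikit-learn sketch below; the features and label rule are synthetic stand-ins, not a production model:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for customer features (tenure, usage, tickets, ...).
rng = np.random.default_rng(42)
X = rng.normal(size=(2_000, 5))
# Toy label rule: short tenure plus many support tickets means churn.
y = ((X[:, 0] < 0) & (X[:, 4] > 0)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```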

LLM deployment

Implementing large language models fine-tuned for industry-specific needs, e.g., for automated report generation, advanced customer service chatbots, or multilingual content creation. LLM deployment is as much about optimization and cost control as it is about raw capability.
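
As a sketch of the cost-control side, here is one way a capped, low-temperature call could look using the OpenAI Python client; the model name, prompts, and limits are illustrative assumptions, and any provider’s API would serve equally well:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice; pick per cost/quality needs
    messages=[
        {"role": "system", "content": "You draft concise executive summaries."},
        {"role": "user", "content": "Summarize this quarterly report: ..."},
    ],
    max_tokens=300,    # hard cap on output tokens keeps spend predictable
    temperature=0.2,   # low temperature for consistent business output
)
print(response.choices[0].message.content)
```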

AI agents development

Building autonomous, task-driven agents that can plan, decide, and act with minimal human input. From scheduling complex workflows to managing dynamic, real-time data feeds, digital agents are the bridge between intelligence and action.
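
Stripped of any particular framework, the core of such an agent is an observe-decide-act loop. The toy sketch below is purely illustrative; a real agent would plan with an LLM or a dedicated planner and act through real tool or API calls:

```python
def decide(tasks, completed):
    """Pick the next pending task whose dependencies are all done."""
    for task, deps in tasks:
        if task not in completed and all(d in completed for d in deps):
            return task
    return None

def act(task):
    # Stand-in for a real tool call (API request, query, file write, ...).
    print(f"executing: {task}")

# Hypothetical workflow expressed as (task, dependencies) pairs.
tasks = [
    ("fetch data", []),
    ("clean data", ["fetch data"]),
    ("generate report", ["clean data"]),
]

completed = set()
while (task := decide(tasks, completed)) is not None:
    act(task)
    completed.add(task)
```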

AI integration

The best artificial intelligence isn’t a separate tool; it’s woven into your existing platforms. Imagine your CRM not just storing customer data but predicting which leads are most likely to convert.

Maintenance and support

AI models are like high-performance cars; they need regular tuning. Post-launch support ensures they continue to perform at peak efficiency.

The AI Implementation Process

Every successful AI project follows a deliberate and well-structured path. Following a proven AI implementation process keeps projects focused, transparent, and measurable; a minimal code sketch of the core stages appears after the list below.

  1. Discovery and goal setting: Clarify the “why” before tackling the “how.” What problem are we solving? How will success be measured?
  2. Data preparation: Gather datasets, clean them of inconsistencies, and label them so the AI understands the patterns it’s being trained on.
  3. Model selection and training: Choose algorithms suited to the challenge — whether that’s a neural network for image recognition or a gradient boosting model for risk scoring.
  4. Testing and validation: Rigorously test against real-world conditions to ensure accuracy, scalability, and fairness.
  5. Deployment and integration: Roll out AI into the live environment, integrating it with existing workflows and tools.
  6. Monitoring and continuous improvement: Keep a pulse on performance, retraining when needed, and adapting to evolving business goals.
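
As referenced above, here is a minimal end-to-end sketch of stages 2 through 5, assuming scikit-learn and synthetic placeholder data:

```python
import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# 2. Data preparation (synthetic placeholder for a labeled dataset).
rng = np.random.default_rng(1)
X = rng.normal(size=(1_000, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 3. Model selection and training.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression()),
])

# 4. Testing and validation on data the model never trained on.
print("CV accuracy:", cross_val_score(pipeline, X_train, y_train, cv=5).mean())
pipeline.fit(X_train, y_train)
print("Held-out accuracy:", pipeline.score(X_test, y_test))

# 5. Deployment: persist the trained artifact for the serving environment.
joblib.dump(pipeline, "model_pipeline.joblib")
```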

Industries Seeing the Biggest Wins from AI

While every sector can find value in AI, some industries are already reaping transformative benefits.

  • Healthcare: AI is helping radiologists detect anomalies in scans, predicting patient risks, and even accelerating the search for new treatments.
  • Finance: Beyond fraud detection, AI models are powering real-time risk analysis and automating compliance, saving both time and reputation.
  • Retail and eCommerce: Personalized product recommendations, demand forecasting, and dynamic pricing are reshaping the customer experience.
  • Manufacturing: AI-driven predictive maintenance prevents costly downtime, while computer vision ensures every product meets quality standards.
  • Logistics: From route optimization to real-time fleet tracking, AI keeps goods moving efficiently.

Choosing the Right AI Development Partner

Not all AI partners are created equal. The best ones act as an extension of your team, translating business goals into technical blueprints and technical solutions into business outcomes. Look for:

  • Proven technical mastery — experience in your industry and with the AI technologies you need.
  • Room to grow — scalable solutions that expand with your data and ambitions.
  • Security at the core — a partner who treats data protection and compliance as non-negotiable.
  • Clear communication — transparent reporting, realistic timelines, and a commitment to keeping you informed at every stage.

Artificial intelligence has become the driving force behind modern business competitiveness, but it doesn’t run on autopilot. Behind every successful deployment is a team that knows how to design, train, and fine-tune systems to meet the realities of a specific industry.

A reliable artificial intelligence software development company is more than a vendor; it’s a long-term partner. It shapes AI into a tool that fits seamlessly into daily operations, strengthens a company’s existing capabilities, and evolves in step with changing demands.

In the end, AI’s true potential comes from the interplay between human expertise and machine intelligence. The companies that invest in that partnership now won’t merely adapt to the future. They’ll set its direction.




‘World Models,’ an Old Idea in AI, Mount a Comeback



The latest ambition of artificial intelligence research — particularly within the labs seeking “artificial general intelligence,” or AGI — is something called a world model: a representation of the environment that an AI carries around inside itself like a computational snow globe. The AI system can use this simplified representation to evaluate predictions and decisions before applying them to its real-world tasks. The deep learning luminaries Yann LeCun (of Meta), Demis Hassabis (of Google DeepMind) and Yoshua Bengio (of Mila, the Quebec Artificial Intelligence Institute) all believe world models are essential for building AI systems that are truly smart, scientific and safe.

The fields of psychology, robotics and machine learning have each been using some version of the concept for decades. You likely have a world model running inside your skull right now — it’s how you know not to step in front of a moving train without needing to run the experiment first.

So does this mean that AI researchers have finally found a core concept whose meaning everyone can agree upon? As a famous physicist once wrote: Surely you’re joking. A world model may sound straightforward — but as usual, no one can agree on the details. What gets represented in the model, and to what level of fidelity? Is it innate or learned, or some combination of both? And how do you detect that it’s even there at all?

It helps to know where the whole idea started. In 1943, a dozen years before the term “artificial intelligence” was coined, a 29-year-old Scottish psychologist named Kenneth Craik published an influential monograph in which he mused that “if the organism carries a ‘small-scale model’ of external reality … within its head, it is able to try out various alternatives, conclude which is the best of them … and in every way to react in a much fuller, safer, and more competent manner.” Craik’s notion of a mental model or simulation presaged the “cognitive revolution” that transformed psychology in the 1950s and still rules the cognitive sciences today. What’s more, it directly linked cognition with computation: Craik considered the “power to parallel or model external events” to be “the fundamental feature” of both “neural machinery” and “calculating machines.”

The nascent field of artificial intelligence eagerly adopted the world-modeling approach. In the late 1960s, an AI system called SHRDLU wowed observers by using a rudimentary “block world” to answer commonsense questions about tabletop objects, like “Can a pyramid support a block?” But these handcrafted models couldn’t scale up to handle the complexity of more realistic settings. By the late 1980s, the AI and robotics pioneer Rodney Brooks had given up on world models completely, famously asserting that “the world is its own best model” and “explicit representations … simply get in the way.”

It took the rise of machine learning, especially deep learning based on artificial neural networks, to breathe life back into Craik’s brainchild. Instead of relying on brittle hand-coded rules, deep neural networks could build up internal approximations of their training environments through trial and error and then use them to accomplish narrowly specified tasks, such as driving a virtual race car. In the past few years, as the large language models behind chatbots like ChatGPT began to demonstrate emergent capabilities that they weren’t explicitly trained for — like inferring movie titles from strings of emojis, or playing the board game Othello — world models provided a convenient explanation for the mystery. To prominent AI experts such as Geoffrey Hinton, Ilya Sutskever and Chris Olah, it was obvious: Buried somewhere deep within an LLM’s thicket of virtual neurons must lie “a small-scale model of external reality,” just as Craik imagined.

The truth, at least so far as we know, is less impressive. Instead of world models, today’s generative AIs appear to learn “bags of heuristics”: scores of disconnected rules of thumb that can approximate responses to specific scenarios, but don’t cohere into a consistent whole. (Some may actually contradict each other.) It’s a lot like the parable of the blind men and the elephant, where each man only touches one part of the animal at a time and fails to apprehend its full form. One man feels the trunk and assumes the entire elephant is snakelike; another touches a leg and guesses it’s more like a tree; a third grasps the elephant’s tail and says it’s a rope. When researchers attempt to recover evidence of a world model from within an LLM — for example, a coherent computational representation of an Othello game board — they’re looking for the whole elephant. What they find instead is a bit of snake here, a chunk of tree there, and some rope.

Of course, such heuristics are hardly worthless. LLMs can encode untold sackfuls of them within their trillions of parameters — and as the old saw goes, quantity has a quality all its own. That’s what makes it possible to train a language model to generate nearly perfect directions between any two points in Manhattan without learning a coherent world model of the entire street network in the process, as researchers from Harvard University and the Massachusetts Institute of Technology recently discovered.

So if bits of snake, tree and rope can do the job, why bother with the elephant? In a word, robustness: When the researchers threw their Manhattan-navigating LLM a mild curveball by randomly blocking 1% of the streets, its performance cratered. If the AI had simply encoded a street map whose details were consistent — instead of an immensely complicated, corner-by-corner patchwork of conflicting best guesses — it could have easily rerouted around the obstructions.
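
The contrast is easy to see in miniature. The toy sketch below is purely illustrative (it is not the researchers’ setup): a memorized route breaks when one street is blocked, while a coherent map can simply be searched for a detour:

```python
from collections import deque

# Tiny street grid as an adjacency map: the coherent "world model."
grid = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
    "D": ["B", "C", "E"], "E": ["D"],
}

# "Bag of heuristics": one memorized route per query, no underlying map.
memorized = {("A", "E"): ["A", "B", "D", "E"]}

def bfs_route(graph, start, goal):
    """Shortest path found by searching the map (breadth-first search)."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

# Block the street between B and D, echoing the random road closures.
blocked = {node: [n for n in nbrs if {node, n} != {"B", "D"}]
           for node, nbrs in grid.items()}

print(memorized[("A", "E")])         # memorized route still runs through B-D
print(bfs_route(blocked, "A", "E"))  # the map reroutes: ['A', 'C', 'D', 'E']
```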

Given the benefits that even simple world models can confer, it’s easy to understand why every large AI lab is desperate to develop them — and why academic researchers are increasingly interested in scrutinizing them, too. Robust and verifiable world models could uncover, if not the El Dorado of AGI, then at least a scientifically plausible tool for extinguishing AI hallucinations, enabling reliable reasoning, and increasing the interpretability of AI systems.

That’s the “what” and “why” of world models. The “how,” though, is still anyone’s guess. Google DeepMind and OpenAI are betting that with enough “multimodal” training data — like video, 3D simulations, and other input beyond mere text — a world model will spontaneously congeal within a neural network’s statistical soup. Meta’s LeCun, meanwhile, thinks that an entirely new (and non-generative) AI architecture will provide the necessary scaffolding. In the quest to build these computational snow globes, no one has a crystal ball — but the prize, for once, may just be worth the AGI hype.


