
AI Insights

Artificial Intelligence And State Tax Agencies

In this episode of Tax Notes Talk, Ryan Minnick of the Federation of Tax Administrators discusses how state tax agencies are approaching artificial intelligence and shares insights from the FTA’s upcoming briefing paper on generative AI and its 2024 tax agency survey.

Tax Notes Talk is a podcast produced by Tax Notes. This transcript has been edited for clarity.

David D. Stewart: Welcome to the podcast. I’m David Stewart, editor in chief of Tax Notes Today International. This week: AI and state tax.

In the past few years, every industry has looked at implementing artificial intelligence into their workflows, and state tax agencies are no exception. But the sensitive nature of the data tax agencies must handle requires an especially careful use of the technology.

So how can state tax administrators use artificial intelligence, and how can agencies balance the adoption of new technology with the need to protect taxpayer information?

Here to talk more about this is Tax Notes reporter Emily Hollingsworth. Emily, welcome back to the podcast.

Emily Hollingsworth: Thanks, Dave. Glad to be back.

David D. Stewart: Now, there have been a lot of developments in the past year or so with state tax administrations and the adoption of AI. Could you give us some quick background on what’s been happening and where we are?

Emily Hollingsworth: Absolutely. Among states, AI policy and legislation have, in a word, exploded this year. The National Conference of State Legislatures recently said that all 50 states, the District of Columbia, Puerto Rico, and the Virgin Islands have introduced legislation about AI this year. Twenty-eight of those states and the Virgin Islands enacted AI legislation, including measures regulating AI and prohibiting its use for certain criminal activity.

For quick context, I’ll be talking about two forms of AI. There’s generative AI, which can generate text, images, or other items. ChatGPT is a well-known example of a technology that uses generative AI. Then there’s machine learning AI. This is an older technology that uses algorithms to identify patterns in data.
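To make that distinction concrete, here is a toy sketch of the machine-learning style of AI. This is purely hypothetical and not any agency’s actual system; the function name, the refund amounts, and the threshold are all invented for illustration. The idea is simply that a statistical model learns what “normal” looks like from historical data and flags values that deviate sharply, which is the basic shape of pattern-based anomaly screening.

```python
# Hypothetical illustration only -- not any tax agency's actual system.
# A simple statistical "model" learns what a typical value looks like
# (the mean and standard deviation of historical amounts) and flags
# values that deviate sharply from that pattern.
from statistics import mean, stdev

def flag_outliers(amounts, threshold=2.5):
    """Return values more than `threshold` standard deviations from the mean."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    return [a for a in amounts if abs(a - mu) > threshold * sigma]

# Invented refund amounts; one is wildly out of pattern.
refunds = [820, 760, 905, 840, 50000, 790, 860, 810, 875, 830]
print(flag_outliers(refunds))  # prints [50000]
```

Production fraud-detection systems use far richer models than a mean-and-deviation rule, but the contrast with generative AI holds: this kind of tool computes over numbers rather than generating text.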

When it comes to examples of state tax administrations and recent developments on AI, California’s Department of Tax and Fee Administration, or the CDTFA, announced back in late April, early May, that it will be working with the firm SymSoft Solutions to deploy a generative AI solution to enhance the department’s customer services. This solution is trained on the department’s reference materials, including manuals, guides, and other documents. When a taxpayer has a question, the solution will generate potential responses that the department agent can provide to the taxpayer. The goal is to use the solution to cut down on customer wait times and help alleviate workloads for agents during peak tax filing periods. Now, while the CDTFA and SymSoft are under a year-long contract to deploy the solution, as I understand it, the solution isn’t currently being used in real time for customers’ questions.

David D. Stewart: Well, I know you’ve been following this area pretty closely. Could you give us an idea of what sort of things you’ve been working on lately?

Emily Hollingsworth: Definitely. I’m currently working on a report about this generative AI solution from the CDTFA, looking more closely into the initial testing period the solution underwent in 2024, as well as the project’s next steps.

A separate report I’m working on looks into machine learning AI tools used by tax departments for decades, for everything from tax return processing to fraud detection and prevention. As state leaders, lawmakers, and advocacy groups call for oversight and regular inspection of AI tools, I’m looking into whether these machine learning tools would also be subject to this sort of inspection and oversight.

David D. Stewart: Now, I understand you recently talked with somebody about this. Who did you talk to?

Emily Hollingsworth: I talked to Ryan Minnick, who is the chief operating officer with the Federation of Tax Administrators.

David D. Stewart: And what sort of things did you get into?

Emily Hollingsworth: Well, the Federation of Tax Administrators, or FTA, has been doing an enormous amount to provide training and resources for state tax agencies looking to pilot or implement AI solutions. We got into the FTA’s upcoming briefing paper on generative AI, as well as its 2024 tax agency survey.

We also delved into topics like data security, transparency and trust, and the roles that state tax agencies are taking when it comes to piloting or implementing AI solutions.

David D. Stewart: All right. Let’s go to that interview.

Emily Hollingsworth: Ryan Minnick, welcome to the podcast.

Ryan Minnick: Awesome. Thanks so much for having me, Emily. I’m excited to chat with you today.

Emily Hollingsworth: Likewise. So the Federation of Tax Administrators is developing a briefing paper on generative AI technology. The paper is intended to equip tax agencies with information about generative AI, particularly as states and state legislatures face increasing pressure to stay competitive with other states and pilot or implement generative AI.

When is the briefing paper expected to be released, and what aspects of generative AI will this briefing paper cover?

Ryan Minnick: It’s a great question, and I guess I’ll back up a little bit and explain how this project started and where we are in the scope of things. So as you know, FTA, working with all the states, we’re very focused on emerging issues. So in the 10 years I’ve been here, whether it’s blockchain or whether it’s some movement to the cloud, there’s always some sort of technical innovation on the horizon. And so whenever we see a big one, we tend to organize information around it to help our members understand what their peers are thinking and what’s happening in the private and academic sectors, so they can make informed decisions for themselves. The unique thing about states is that they’re sovereign entities. They’re going to take whatever approach they feel is best, but they rely on organizations like FTA to convene them, help guide them a little bit, and give them the information that is most important to them.

So a little over a year and a half ago, we formed an AI working group, and we did it in two parts. We started with education, so we hosted, over the course of a 12-month period, two rounds of briefings with experts from every corner of the technology sector and academic sector. So we had researchers that focused on the application of transparency and ethics in AI, we had researchers who focused on large language models and how they actually work, and everything at every depth, from technical to business.

We had some private sector groups graciously share with us concepts that they had built for other sectors that leverage these technologies, just so the attendees could get a feel for understanding how you take this fantastical thing of generative AI and start to translate it into what you could actually use it for in your day-to-day work — because there’s been, I think, so much really good marketing by the companies that produce generative AI technologies that it becomes a little bit hard to conceptualize what you might actually do in a government agency with it. How is it potentially going to help us? What’s the innovation there?

So after those two rounds of briefings, we took a set of volunteers that represents a good number of our members. A couple dozen people got together and organized themselves into three groups. The first group has focused on background education: helping anyone reading the white paper understand the terminology that’s used frequently around the technology, the ways we encounter the technology in our day-to-day lives, and other forms of AI, because people often confuse generative AI with things like machine learning and other tools that have been around for quite some time.

We have a group that focuses on opportunities — so not necessarily in production, although some are, but different examples of where this technology might be utilized or where it could be utilized. And they even went a step further and fleshed out some of those concepts just to articulate to both business and technology stakeholders what the possibilities are.

And then there’s the third group. I have to admit, my partner in crime for this particular working group is our general counsel, Brian Oliner, who’s formerly the AG representing the Maryland Comptroller’s Office. Every time he and I talk about emerging technology, he’s such a good foil for a nerd like me, because he’s a seasoned attorney who has seen complex vendor contracts and understands the ways that states need to protect themselves from a legal and statutory standpoint. And so he and I always like to hash out the details of technology. He worked primarily with the third group, which is our considerations and risks group.

So the goal is that everyone from executive and business stakeholders in the agency through to the technology stakeholders who deal primarily in strategy will be able to consume the appropriate parts of this white paper and come out the other side armed with information about how the technology might be applied, understand it a little better through the tax agency technology lens, and have at their fingertips a bunch of other resources to go out and look at.

So our teams, we’re not rewriting the book on generative AI. There are so many brilliant researchers out there who are coming up with innovations and reports every day. So in some cases, we’re pointing the reader to those really great resources and just giving them the tax context they need to think about it. That’s been the whole purpose of the group and how we’ve structured it so far.

And what’s really exciting, and why I’m so glad about your question, is that I finally get to say this: by the time our FTA technology conference happens the second week of August out in Tacoma, we will have published the white paper. It’s the only technology conference that serves the tax agency perspective. There are a lot of tax technology conferences out there, but they’re usually practitioner or CPA driven; this one’s hosted by us, for tax agencies and people who care about what tax agencies are doing with technology.

And so that will be a really great venue. The IT and business leadership of the agencies will have had the white paper in their hands for a little bit by the time they get together and convene. They can take what they do at our technology conference every year, which is think about the possibilities, hear about successes and new ideas, and see what people are doing with technology, and take it to the next level by considering how this fairly new yet much-talked-about emerging technology might fit in.

Emily Hollingsworth: That sounds really interesting, and we certainly look forward as well to the briefing paper. We’ve discussed the working groups, and I was curious to know, is that program still open to FTA member agencies who may be interested in participating? We had also talked about this as well, but what are states learning or taking away from these work groups and sessions?

Ryan Minnick: Absolutely. It’s a great question. So the functional part of the working groups is starting to wrap up. As we finalize the white paper, at least the first edition of it, the folks who have worked in the three drafting groups I mentioned before will have an opportunity to collaborate with us on the final version that is ultimately published to our members.

And it gets to the question that you asked, which is, what are states already getting out of this? I actually asked this question of our program co-leads when I was moderating the panel on this at the annual meeting, and the answers were better than I had hoped. You always hope that people who participate in a working group get value out of it even before the product is finished. And some of the things they shared were that it was helpful just to hear how their peers were thinking about this, to hear the level of curiosity, to hear the optimism about the possibilities. The people in the working group were probably already the ones in their agencies thinking about this more than the average employee.

And then even some of the working group members have shared with me that this has actually helped make the technology feel a little bit more tangible and a little bit more real. I think the hype curve for generative AI has been really substantial. The underlying technology has been around for a couple of decades, but if, for our conversation today, we take ChatGPT’s launch roughly 20 months ago as the inflection point of the generative AI craze, the hype curve was very scary for people in a lot of roles, and in particular in government, because the initial premise was, oh my gosh, this can do everything. It can think. It’s a facsimile of a human; it can do all these wonderful things.

But of course, as time has gone on, we’ve all realized that, like any technology tool, it’s a new version of a tool that’s going to hopefully help improve productivity. It’s going to help our organizations do work better. But if you’ve played with some of these tools recently, they’re not taking anybody’s job away anytime soon; if anything, they’re probably freeing up knowledge workers to do work a little bit more efficiently, a little bit more effectively.

The bigger thing that I think is coming out of these trends, and that I think some of our members saw as they were participating in the working group, is the real opportunity for training in this space. This is a shift that is more like the shift from a typewriter to a word processor than it is a shift from a server in your server closet at the office to a server in the cloud. This is a fundamental change in how you interact with technology, which requires you to just completely rethink everything that you’re doing.

And it’s also one of those inflection points where people entering the workforce now are already receiving substantial exposure to it. And so you’re going to have a point where, much like the typewriter to the word processor, you’re going to have a big chunk of your workforce who you’re going to have to upskill and train on it, because they’re going to experience it for the first time when they’re in the middle stages of their career, but you’re at the same time going to be bringing people in at the early stages of their career who are going to be natively understanding it. And it doesn’t happen a lot with emerging technologies; it’s always a little bit more subtle than, I think, the point that we’re at now.

Emily Hollingsworth: Thank you. How transparent do you believe that state tax agencies should be when informing the public about AI use?

Ryan Minnick: It’s a tricky question to answer, not because I don’t believe in transparency, but because, like I mentioned before, every state is its own sovereign island, with its own regulations and rules that it goes through. And so I genuinely believe that our members are as communicative with all their stakeholders as they’re able to be.

And the only times that it might take a little bit more time to share information is when sharing it might either compromise someone’s data security, or any data that we want to protect, or, in the case of fraud fighting, give away to criminals how we’re working to prevent them from committing crimes. So I think those tend to be the two areas where you see maybe a lag in that sharing, because we want to make sure that we’re not oversharing.

So 10 or 15 years ago, when we started — in a collaboration for the security summit with the IRS and the tax software industry — developing better methods to protect individuals’ identities when they were filing their individual income tax returns, a lot of that work now is very public. There’s a lot of that work that’s been made available, and people understand what that group is and how it works.

But at the time we were putting those frameworks in place, we were sharing as much as we were able to without compromising the nature of the work, because we prioritized making sure that the criminals couldn’t figure out what we were doing as we were doing it. Unfortunately, that’s the problem with the public internet. I know in the tax world we use “fraudster” a lot, but criminals of any variety, whether they’re committing fraud individually, targeting someone for identity theft, or operating at scale with data from the dark web, are who we have to guard against. Outside of that, we are, in general, as transparent as possible.

But when it comes to individual members and what they do, I don’t govern what information they release and at what time. So I think we as an organization commit to keeping our members informed and, to the extent we can, keeping the broader tax community informed of trends as they happen.

Emily Hollingsworth: Thank you. We’ll move on now to the survey that the FTA had released in 2024 with EY. The survey looked at 37 state tax agencies and two city tax departments.

I thought that its section, in particular, on AI was pretty interesting. For example — and I’ll read a few of the numbers from the survey — but it said that 15 percent of tax administrations are “conducting pilots or are already using AI in core functions.” It also said that 9 percent of respondents said that machine learning AI is being used in core functions, while 12 percent had said that they’re conducting pilot programs on machine learning technology — again, distinct from generative AI. So I was curious, is the FTA planning to release a state tax agency survey this year?

Ryan Minnick: So we’ve not released the follow-up survey yet; we’re still working on determining what that survey design looks like. We see great value in continuing this effort. That was the first comprehensive survey we had issued in well over a decade. It was a priority of Sharonne Bonardi, our executive director, who formerly was the deputy comptroller for the state of Maryland. And when she joined FTA, one of the resources she wanted us to have — primarily for our members, but also for the general public — was a state of the state tax agencies, helping people understand the priorities and the ways that tax agencies were thinking about emerging issues.

And so that was really the genesis of the report you mentioned, which I think is a great read; everybody should go to our website and download it. When it comes to the AI question, we actually drafted, sent out, and published that survey right as generative AI was blowing up. You wish you had a time machine, right? I wish we could have structured that question a little bit differently, because as we’ve been on a road show for the last year and a half, talking about insights from the survey, I get a question about this little section all the time.

And ultimately the nonexciting answer is that the question was so broad that it was really incredibly difficult to know exactly how somebody was thinking when they answered the question. We asked about AI because at the time, before generative AI came out, AI was seen as advanced machine learning — I mean, maybe some algorithmic work, maybe some natural language processing. So things like when you call into a phone tree and you say naturally what you’re looking for, and the phone tree tries its best to help you find the right person. But that was really the intent of the question when we were writing it, because generative AI had just started emerging as this — we hadn’t quite seen the splash from ChatGPT yet.

Of course, now fast-forward, and everybody who sees that question thinks, “All these states have AI in production? Oh my goodness, this is crazy.” No. They don’t. Not the AI you’re thinking of, the generative AI that’s on everybody’s brain today. It’s AI in the sense that across tax agency processing in this country, there’s a lot of machine learning that takes place. It’s programmatic. It’s looking for patterns in terms of fraud. It’s looking for noncompliance, so criminal fraud versus just general tax fraud, people who underreport or don’t accurately or correctly answer something. It’s also looking for things like accuracy, and it’s monitoring trends. There are a lot of uses for those technologies.

So I think the more accurate, interesting insight from that survey is exactly what you pointed out, which is the machine learning piece. Even years after a lot of these machine learning trends started to hit, there’s still agencies that are looking at machine learning and how they can use it in different ways. And I’ll say as a technologist supporting tax agencies, I think that’s a great thing, because there’s a lot of really great uses for what I think a lot of people in the public think is old technology, because right now we’ve all moved on to generative AI. We all want Siri to work better on our iPhones, and we’re not really thinking about anything else. But machine learning in some contexts is a better solution to a lot of the problems people want to solve with generative AI. And it’s not only better in terms of safety, but it could be better in terms of performance, in terms of cost.

Generative AI certainly has its place: It’s powered by large language models; it handles language super well. There’s a lot you can do with it. It requires a lot of configuration and a lot of training so that you get accurate, nonhallucinatory answers. But machine learning, that’s good old-fashioned math, really sophisticated math. And what generative AI can’t do really well is math. There are some exceptions, but for the most part it’s good at language. And most of tax is math. So we actually find ourselves with pretty advanced machine learning capabilities that have been available for a long time and that a lot of agencies use in production, but that, I suppose because of the hype, get lumped under that form of AI.

But I typically, even when I’m talking about these things on stage somewhere, I’ll talk about machine learning, and I’ll talk about generative AI. I almost never use the AI term broadly because AI is artificial intelligence, and so far nothing we’ve developed is artificially intelligent; it just has the appearance of it.

So doing math really fast seems very intelligent. So that’s machine learning. Interpreting language really fast, or drafting a country music song in the style of Garth Brooks or whatever people have asked ChatGPT to do, that seems very artificially intelligent, but in neither of those cases is that term accurate.

I digress a little bit because I know your question was about our survey, but it’s so interesting. Going through the results and seeing what states were thinking about and how they were responding, my takeaway was that I wish I could go back in time and ask separately about the different emerging forms of the technology, because I think we would’ve gotten a more representative answer. I think we got some good insights, but 12 percent of states are not actively using generative AI in production. So I hope no one reads the report and thinks that.

Emily Hollingsworth: I guess that also leads into my question. So we have the percentage — for example, 12 percent said that they’re piloting machine learning technology. Do we know how many states that translates to?

Ryan Minnick: Yeah. Based on the survey itself, and on the data team that put it all together, to me that would be a handful, four or five states that responded to the survey. But like I said, that question’s a little bit ambiguous in how you interpret it. So it could be that four or five states at the time of the survey were actively piloting a new use of the technology. Those same states could probably have already had that technology in place doing something else at the same time. So that’s something else I think bears explaining.

If you’re curious about how tax agencies work, people listening, we don’t sit still. One of the things that I’ve learned in my 10 years at FTA is that agencies are always looking forward — how they can do their job better, how they can better serve the citizen, how they can take innovations and leverage them to do more with less, because unfortunately in government, I think that’s a lot of the situation we always find ourselves with. So budgets don’t necessarily grow as much as we’d like, or sometimes they get cut. Oftentimes legislatures ask for agencies to do new things, and they don’t always give them money to do that. So everybody’s trying to do the most with what they have.

And so when you get innovative technologies that could be used in a line of business — and tax agencies have numerous lines of business: They have everything from receiving data, you have your traditional tax return processing, you’ve got auditing and collections and customer experience, and you have your legal and tax policy group that has to interpret things that come out of [the] legislature. And there’s a lot of moving parts. And so with this question, the way I interpret the answer is four or five or six states at the time of the survey were looking at machine learning to potentially solve a problem somewhere in their agency, irrespective of wherever they were already using that technology, if they were, to solve problems in a different place.

Emily Hollingsworth: This is a question that I had when we were discussing state transparency. It relates to California’s announcement in April that it had secured a contract with a company and is testing a generative AI solution. Now, this agreement is going to test the solution in a limited testing environment, so it’s not necessarily something that’s going to go out to the public immediately or be used during peak periods like filing season.

But I was curious to know what your thoughts are on this development. California has also been very transparent about its developments in AI and has also done a lot of work to vet and test those solutions. So I was curious to know what your thoughts were on that particular development.

Ryan Minnick: Well, it’s a great question, first of all. California is certainly one of the larger states; by staff, they have the largest number of people working on tax administration across the several agencies that do that work. I think even in terms of technology, they’re a great example of transparency in government.

So you look at their technology modernization plan that they’ve been doing — I think they’re in part two, and I forget what phase of part two they’re in — but they started publicly sharing that modernization strategic plan 10-plus years ago, when they were in the first phase, or the first part. So it doesn’t surprise me at all that they were incredibly transparent about piloting a technology and going about the process of securing a contract to do so.

I think it’s also helpful that they shared the scope of what they were thinking about. I know that oftentimes parts of the procurement process are public, and so people can see what states are doing in different agency areas on a regular basis. But California in this case went one step further. I know you all covered it, as did a couple of other media outlets, and they said, “We’re looking at the potential for this technology. It’s a very controlled experiment.”

I can’t comment on the project specifically because, first of all, I’m not a part of it. They’re just a member, and we don’t usually get into that level of detail about it. But in concept, what they did was great. They decided to do something, they shared what they were doing, and now I presume — not being familiar with the project specifically — I presume they’re doing that. And then to the extent that they make a decision, they’ll come back and take the next step.

I think that’s, generally speaking, what a lot of agencies look at in terms of a pilot. They want to be very upfront with people who are impacted by whatever they’re testing. Sometimes we’re able to test technologies in a bubble or in a vacuum, and then we don’t necessarily have that need to share what we’re testing. For example, say you want to test the potential of a technology that leverages generative AI, but you’re not going to test it with taxpayers; you’re going to test it internally, and only on a synthetic data set, something that isn’t real data or real information but is manufactured for the purpose of testing, just to understand how the technology works. That’s a super great, super safe experiment, because it’s not touching anything sensitive, and if it doesn’t work out, then you didn’t go through a big implementation to put it in front of everybody.

Separately, if you go down the route like California did, and you want to do a limited trial of something, and you can define that scope and you can let folks know about it in the way that makes sense within that state’s rules and how the agency operates and how the state as a whole has policies on it, I think that’s really great, too.

One of the fun things about technology in general, I guess one of the most fun things, is that especially these days, really good ideas can come from anywhere. And this is true inside of government and outside of it. My tip to nontechnologists, from a technologist, is: read about technology; understand how people are using it. Think about how they’re using it, and think about how you might be able to. We’re seeing so many innovative uses, not just of generative AI but of tools in general that are being made available.

And I think part of that is this increasing level of curiosity of people wanting to figure out how to be more effective, figure out how to optimize things a little better, figure out how to deliver on their mission in the way that best serves their stakeholders, whoever their stakeholders may be. In our context, it’s taxpayers and tax administrators, but in somebody else’s, it might be readers, if you’re a journalist. How can you leverage technologies and understanding how other sectors are doing it? That’s one of the reasons why our work group interviewed so many private sector professionals. Some of them weren’t even in the tax world — they were just people working on generative AI in the private sector somewhere else. We just wanted to hear what they were thinking about and how they approached the project or what their view on the knowledge was.

Because most of the time, I can talk about tax all day long, and Emily, you can translate it into journalism, whether it’s tax or otherwise. Likewise, you could tell me something that you’re doing in order to maybe reach readers of Tax Notes a little bit easier, a little bit better, get into their inbox a little bit faster. I’m going to listen with my tax administrator ears and think, “Oh, how could I potentially take this really cool thing that you’re doing and help benefit my members or the taxpayers?” or, “How can I help my members benefit their stakeholders?”

You get into the question of, is transparency important? Absolutely. Data security is also very important, and fighting crime is really important, so you have to balance everything. But at the end of the day, there's so much curiosity out there, and people who pay attention to these things can be helpful. If you have great ideas for tax administration (and, shameless plug, there are some great careers in tax administration), share them with tax administrators, because we want to hear them. We want to discharge those duties as faithfully, efficiently, and effectively as possible.

Emily Hollingsworth: Absolutely. And Ryan, again, thank you so much for coming on the podcast.

Ryan Minnick: Oh, of course. Happy to do it anytime. It was great talking to you today, Emily.




Microsoft launches $4B artificial intelligence reskilling institute



Microsoft unveiled a new initiative Wednesday that’s intended to bring artificial intelligence skills to millions of people around the world.

Microsoft Elevate will spend $4 billion in cash and technology donations to philanthropic, educational, and labor organizations over the next four years, as it seeks to accelerate the proliferation of AI technology.

Microsoft makes the AI tool Copilot and is a key partner of OpenAI, the maker of ChatGPT. The company is investing aggressively in the infrastructure needed to power its AI push, pledging to spend $80 billion on data centers this year.

The investments come as Microsoft lays off thousands of employees in its home state, Washington, and globally.


“One of the things that has changed the most dramatically about Microsoft is we’ve moved as a company — as our industry has moved as an industry — from one that spent almost every dollar it earned on employing people to what is in fact the greatest capital and infrastructure investment in the history of global infrastructure,” Microsoft President and Vice Chair Brad Smith said at a launch event in Seattle.

In an interview with KUOW, Smith said that restructuring is “frankly something that should always be hard, but it is something that needs to be done for a company to be successful for many decades and not just a few years.”

Smith said Microsoft Elevate will employ about 300 people, and partner with organizations around the world on a variety of initiatives aimed at increasing AI literacy. The Microsoft Elevate Academy plans to help 20 million people earn AI skilling credentials to be more competitive in an uncertain job market.

“I think in many ways it gives us the opportunity to reach everybody,” Smith said, “and that includes people who will be using and designing AI in the future, say the future of what computer science education becomes, people who are designing AI systems for businesses, but consumers as well, students and teachers who can use AI to better reach and prepare for helping students.”

The initiative also includes the creation of Microsoft’s AI Economy Institute, a think tank of academics that will study the societal impacts of AI.

The effect generative AI will have on education remains a source of much speculation and debate.


While some educators are embracing the technology, others are struggling to rein in cheating and question whether the technology could undermine the very premise of education as we know it.

Regardless of the ongoing debate, Microsoft has always been at the forefront of bringing technology into the classroom, first with PCs and now AI. The company is betting that the resources it is devoting to Microsoft Elevate will help shape a path forward that allows AI to be more useful than disruptive in education and across the economy.


“There are many different skills that we’re all going to need to work together to pursue, but I think there’s also a North Star that should guide us,” Smith said. “It’s a North Star that might sound unusual coming from a tech company, but I think it’s a North Star that matters most. We need to use AI to help us think more, not less.”




Artificial Intelligence and Criminal Exploitation: A New Era of Risk




WASHINGTON, D.C. – The House Judiciary Subcommittee on Crime and Federal Government Surveillance will hold a hearing on Wednesday, July 16, 2025, at 10:00 a.m. ET. The hearing, “Artificial Intelligence and Criminal Exploitation: A New Era of Risk,” will examine the growing threat of Artificial Intelligence (AI)-enabled crime, including how criminals are leveraging AI to conduct fraud, identity theft, child exploitation, and other illicit activities. It will also explore the capabilities and limitations of law enforcement in addressing these evolving threats, as well as potential legislative and policy responses to ensure public safety in the age of AI.

WITNESSES

  • LTC Andrew Bowne, Former Counsel, Department of the Air Force Artificial Intelligence Accelerator at the Massachusetts Institute of Technology
  • Ari Redbord, Global Head of Policy, TRM Labs; former Assistant United States Attorney
  • Zara Perumal, Co-Founder, Overwatch Data; former member, Threat Analysis Department, Google




AI shapes autonomous underwater “gliders” | MIT News



Marine scientists have long marveled at how animals like fish and seals swim so efficiently despite having different shapes. Their bodies are optimized for efficient, hydrodynamic aquatic navigation so they can exert minimal energy when traveling long distances.

Autonomous vehicles can drift through the ocean in a similar way, collecting data about vast underwater environments. However, the shapes of these gliding machines are less diverse than what we find in marine life — go-to designs often resemble tubes or torpedoes, since they’re fairly hydrodynamic as well. Plus, testing new builds requires lots of real-world trial-and-error.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the University of Wisconsin at Madison propose that AI could help us explore uncharted glider designs more conveniently. Their method uses machine learning to test different 3D designs in a physics simulator, then molds them into more hydrodynamic shapes. The resulting model can be fabricated via a 3D printer using significantly less energy than hand-made ones.

The MIT scientists say that this design pipeline could create new, more efficient machines that help oceanographers measure water temperature and salt levels, gather more detailed insights about currents, and monitor the impacts of climate change. The team demonstrated this potential by producing two gliders roughly the size of a boogie board: a two-winged machine resembling an airplane, and a unique, four-winged object resembling a flat fish with four fins.

Peter Yichen Chen, MIT CSAIL postdoc and co-lead researcher on the project, notes that these designs are just a few of the novel shapes his team’s approach can generate. “We’ve developed a semi-automated process that can help us test unconventional designs that would be very taxing for humans to design,” he says. “This level of shape diversity hasn’t been explored previously, so most of these designs haven’t been tested in the real world.”

But how did AI come up with these ideas in the first place? First, the researchers found 3D models of over 20 conventional sea exploration shapes, such as submarines, whales, manta rays, and sharks. Then, they enclosed these models in “deformation cages” that map out different articulation points that the researchers pulled around to create new shapes.

The CSAIL-led team built a dataset of conventional and deformed shapes before simulating how they would perform at different “angles-of-attack” — the direction a vessel will tilt as it glides through the water. For example, a swimmer may want to dive at a -30 degree angle to retrieve an item from a pool.

These diverse shapes and angles of attack were then used as inputs for a neural network that essentially anticipates how efficiently a glider shape will perform at particular angles and optimizes it as needed.
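The loop this implies, predicting efficiency with a learned model and then nudging the shape toward higher scores, can be sketched as follows. This is a toy stand-in rather than the CSAIL pipeline: the real surrogate is a neural network over 3D deformation-cage parameters, whereas here `wingspan` and `thickness` are hypothetical shape parameters and the predictor is a hand-written formula.

```python
import random

def surrogate_ld(params, angle_deg):
    # Toy stand-in for the trained neural network: "predicts" a
    # lift-to-drag ratio from shape parameters and angle of attack.
    wingspan, thickness = params
    return wingspan / (0.5 + thickness) - 0.01 * (angle_deg - 9) ** 2

def optimize_shape(angle_deg, steps=200, seed=0):
    # Simple hill climb: perturb the shape, keep any candidate the
    # surrogate scores higher, and clamp parameters to plausible bounds.
    rng = random.Random(seed)
    best = [1.0, 0.5]  # initial wingspan, thickness
    best_score = surrogate_ld(best, angle_deg)
    for _ in range(steps):
        cand = [min(3.0, max(0.1, p + rng.uniform(-0.05, 0.05))) for p in best]
        score = surrogate_ld(cand, angle_deg)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

shape, predicted_ld = optimize_shape(angle_deg=9)
```

Because every evaluation is a cheap model call (here, a formula) rather than a full fluid simulation, thousands of candidate shapes can be screened before anything is 3D-printed.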

Giving gliding robots a lift

The team’s neural network simulates how a particular glider would react to underwater physics, aiming to capture how it moves forward and the force that drags against it. The goal: find the best lift-to-drag ratio, representing how much the glider is being held up compared to how much it’s being held back. The higher the ratio, the more efficiently the vehicle travels; the lower it is, the more the glider will slow down during its voyage.
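As a back-of-the-envelope illustration (with made-up force values, not measurements from the study), the metric itself is just a ratio of two forces:

```python
def lift_to_drag(lift_newtons, drag_newtons):
    # Higher ratio = more forward glide per unit of resistance.
    if drag_newtons <= 0:
        raise ValueError("drag must be positive")
    return lift_newtons / drag_newtons

# Comparing two hypothetical shapes at the same angle of attack:
torpedo = lift_to_drag(12.0, 4.0)  # ratio of 3.0
winged = lift_to_drag(18.0, 4.5)   # ratio of 4.0: glides more efficiently
```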

Lift-to-drag ratios are key for flying planes: At takeoff, you want to maximize lift to ensure it can glide well against wind currents, and when landing, you need sufficient force to drag it to a full stop.

Niklas Hagemann, an MIT graduate student in architecture and CSAIL affiliate, notes that this ratio is just as useful if you want a similar gliding motion in the ocean.

“Our pipeline modifies glider shapes to find the best lift-to-drag ratio, optimizing its performance underwater,” says Hagemann, who is also a co-lead author on a paper that was presented at the International Conference on Robotics and Automation in June. “You can then export the top-performing designs so they can be 3D-printed.”

Going for a quick glide

While their AI pipeline seemed realistic, the researchers needed to ensure its predictions about glider performance were accurate by experimenting in more lifelike environments.

They first fabricated their two-wing design as a scaled-down vehicle resembling a paper airplane. This glider was taken to MIT’s Wright Brothers Wind Tunnel, an indoor space with fans that simulate wind flow. Placed at different angles, the glider’s predicted lift-to-drag ratio was only about 5 percent higher on average than the ones recorded in the wind experiments — a small difference between simulation and reality.

A digital evaluation involving a visual, more complex physics simulator also supported the notion that the AI pipeline made fairly accurate predictions about how the gliders would move. It visualized how these machines would descend in 3D.

To truly evaluate these gliders in the real world, though, the team needed to see how their devices would fare underwater. They printed the two designs that performed best at specific angles-of-attack for this test: a jet-like device at 9 degrees and the four-wing vehicle at 30 degrees.

Both shapes were fabricated in a 3D printer as hollow shells with small holes that flood when fully submerged. This lightweight design makes the vehicle easier to handle outside of the water and requires less material to be fabricated. The researchers placed a tube-like device inside these shell coverings, which housed a range of hardware, including a pump to change the glider’s buoyancy, a mass shifter (a device that controls the machine’s angle-of-attack), and electronic components.

Each design outperformed a handmade torpedo-shaped glider by moving more efficiently across a pool. With higher lift-to-drag ratios than their counterpart, both AI-driven machines exerted less energy, similar to the effortless ways marine animals navigate the oceans.

As much as the project is an encouraging step forward for glider design, the researchers are looking to narrow the gap between simulation and real-world performance. They are also hoping to develop machines that can react to sudden changes in currents, making the gliders more adaptable to seas and oceans.

Chen adds that the team is looking to explore new types of shapes, particularly thinner glider designs. They intend to make their framework faster, perhaps bolstering it with new features that enable more customization, maneuverability, or even the creation of miniature vehicles.

Chen and Hagemann co-led research on this project with OpenAI researcher Pingchuan Ma SM ’23, PhD ’25. They authored the paper with Wei Wang, a University of Wisconsin at Madison assistant professor and recent CSAIL postdoc; John Romanishin ’12, SM ’18, PhD ’23; and two MIT professors and CSAIL members: lab director Daniela Rus and senior author Wojciech Matusik. Their work was supported, in part, by a Defense Advanced Research Projects Agency (DARPA) grant and the MIT-GIST Program.


