
AI Insights

A Kentucky Town Experimented With AI. The Results Were Stunning


A county in Kentucky conducted a month-long “town hall” with nearly 8,000 residents in attendance earlier this year, thanks to artificial intelligence technology.

Bowling Green, Kentucky’s third-largest city and the seat of Warren County, is expecting a major population surge by 2050. To prepare the city for that growth, county officials wanted to incorporate the community’s input.

Community outreach is tough business: town halls, though widely used, rarely draw big crowds, and those who do show up tend to be a self-selecting pool with strongly negative opinions, not a representative sample of the town at large.

On the other hand, gathering opinions from a larger share of the city through online surveys would produce a dataset so massive that officials and volunteers would struggle to comb through it and make sense of it.

Instead, county officials in Bowling Green had AI do that part. Participation was massive: in a roughly month-long online survey, about 10% of Bowling Green residents voiced their opinions on the policy changes they wanted to see in their city. The results were then synthesized by an AI tool into a policy report, which remains publicly available on the project’s website.

“If I have a town hall meeting on these topics, 23 people show up,” Warren County Judge-Executive Doug Gorman told PBS News Hour in an interview published this week. “And what we just conducted was the largest town hall in America.”

The Bowling Green Experiment

The county enlisted a local strategy firm to launch a website in February where residents could submit ideas anonymously. For the survey they used Pol.is, an open-source online polling platform used around the world for civic engagement, to particularly great success in Taiwan.

The prompt was open-ended, simply asking participants what they wanted to see in their community over the next 25 years. They could then keep participating by voting on other residents’ answers.

Over the 33 days that the website accepted answers, nearly 8,000 residents weighed in more than a million times and shared roughly 4,000 unique ideas, calling for new museums, expanded pedestrian infrastructure, green spaces and more.

The answers were then compiled into a report using Sensemaker, an AI tool from Google’s tech incubator Jigsaw that analyzes large sets of online conversations, categorizes responses into overarching topics, and measures agreement and disagreement to produce actionable insights.

In the end, Sensemaker found 2,370 ideas that at least 80% of respondents could agree on. Some of the most widely supported ideas included increasing the number of healthcare specialists in the city so that residents don’t have to rely on services an hour away in Nashville, repurposing empty retail spaces, and adding more restaurants to the north side of the city.
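Jigsaw has not published the details of how Sensemaker scores consensus, but the 80% threshold reported here can be pictured as a simple tally over votes. The Python sketch below is a hypothetical illustration with made-up ideas and votes, not the tool’s actual implementation:

```python
# Hypothetical illustration: count the share of "agree" votes per idea and
# keep only the ideas at or above an 80% agreement threshold.
from collections import defaultdict

# (idea, vote) pairs, where the vote is True for agree and False for disagree.
votes = [
    ("more pedestrian infrastructure", True),
    ("more pedestrian infrastructure", True),
    ("more pedestrian infrastructure", False),
    ("new downtown museum", True),
    ("new downtown museum", True),
]

tallies = defaultdict(lambda: [0, 0])  # idea -> [agree count, total votes]
for idea, agrees in votes:
    tallies[idea][1] += 1
    if agrees:
        tallies[idea][0] += 1

consensus = {
    idea: round(agree / total, 2)
    for idea, (agree, total) in tallies.items()
    if agree / total >= 0.80
}
print(consensus)  # only ideas meeting the 80% agreement threshold remain
```

Real civic-polling platforms such as Pol.is add more sophistication, for example clustering voters into opinion groups, but the underlying question is the same: which ideas clear a high bar of agreement across participants.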

The online survey reached people the county could not have reached otherwise, like the politically disengaged or those who could not take time away from work to attend town halls.

The format was also better at reaching immigrants, offering the survey in multiple languages and automatically translating answers. That was welcomed by people like Daniel Tarnagda, an immigrant from Burkina Faso and a local nonprofit founder who coaches a soccer team of under-18 immigrants, many of whom struggle with English.

“I knew that people want to be part of something. But if you don’t ask, you don’t know,” Tarnagda told PBS.

The volunteers for the project are now compiling the ideas from the report to make concrete policy recommendations to county leadership by the end of the year. According to a survey that Jigsaw conducted with local leaders, AI saved the county an average of 28 work days.

Agreement Beyond Party Lines

The Bowling Green experiment was Sensemaker’s first large-scale proof of concept, Jigsaw wrote in a blog post earlier this year.

One of the most striking findings in Bowling Green was that when ideas were anonymous and stripped of political identity, constituents discovered they agreed on a great deal.

“When most of us don’t participate, then the people who do are usually the ones that have the strongest opinions, maybe the least well-informed, angriest, and then you start to have a caricatured idea of what the other side thinks and believes. So one of the most consequential things we could do with AI is to figure out how to help us stay in the conversation together,” Jigsaw CEO Yasmin Green told PBS.

Jigsaw announced this week that it is now partnering with the Napolitan Institute, a research and public polling organization founded by pollster Scott Rasmussen, to compile information on how Americans in every congressional district view the country’s founding ideals, the state of the nation now, and where it’s going. Unlike the Bowling Green experiment, the aim is not policy but an understanding of where the nation stands.

AI’s Potential: The Good and The Bad

There are still concerns inherent in this experimentation with AI in local governance. Although the website for Bowling Green’s survey explicitly notes that “no personal information was captured, and no demographic data was stored,” that does not necessarily mean future applications elsewhere would follow suit.

Artificial intelligence systems raise privacy concerns because of their vulnerability to data breaches, which could become a serious problem if people were doxxed over political views they had submitted in confidence.

AI also has a bias problem: the views of its creators can get baked into its algorithms. Just last month, researchers found that Elon Musk’s Grok chatbot would consult Musk’s own (rather controversial) opinions before answering sensitive questions. For an AI expected to generate neutral policy suggestions, a flaw like that would be insurmountable.

But if these concerns are adequately addressed, AI has the potential to revolutionize civic engagement. It could offer a path past political polarization and toward tangible change, much as it created a space for a divided community in Bowling Green to find common ground.





DVIDS – News – ARMY INTELLIGENCE




by Holly Comanse

Artificial intelligence (AI) may be trending, but it’s nothing new. Alan Turing, the English mathematician often called the “father of theoretical computer science,” formalized the modern concept of the algorithm and proposed a test for machine intelligence in the mid-20th century. Not long after, the first AI chatbot, ELIZA, was released in 1966. However, it was the generative pre-trained transformer (GPT) model, first introduced in 2018, that provided the foundation for modern AI chatbots.

Both AI and chatbots continue to evolve with many unknown variables related to accuracy and ethical concerns, such as privacy and bias. Job security and the environmental impact of AI are other points of contention. While the unknown may be distressing, it’s also an opportunity to adjust and adapt to an evolving era.

The 1962 television cartoon “The Jetsons” presented the concept of a futuristic family in the year 2062, with high-tech characters that included Rosie the Robot, the family’s maid, and an intelligent robotic supervisor named Uniblab. Such a prediction isn’t much of a stretch anymore. Robotic vacuums, which can autonomously clean floors, and other personal AI assistant devices are now commonplace in the American household. A recent report valued the global smart home market at $84.5 billion in 2024 and predicted it would reach $116.4 billion by 2029.

ChatGPT, released in 2022, is a widely used AI chat-based application with 400 million active users. It can browse the internet on its own, allowing for more up-to-date results, but it is geared toward conversational topics rather than industry-specific information. The popularity and limitations of ChatGPT led some organizations to develop their own AI chatbots, able to reflect current events, protect sensitive information and deliver search results for company-specific topics.

One of those organizations is the U.S. Army, which now has an Army-specific chatbot known as CamoGPT. Currently boasting 75,000 users, CamoGPT began development in the fall of 2023 and was first deployed in the spring of 2024. Live data is important for the Army and other organizations that implement their own AI chat-based applications. CamoGPT does not currently have access to the internet because it is still a prototype, but connecting it to the internet is a goal, as is responding accurately to questions involving current statistics and high-stakes information. What’s more, CamoGPT can process classified data on SIPRNet and unclassified information on NIPRNet.

THE MORE THE MERRIER
Large language models (LLMs) are the AI models behind modern chatbots; they can understand and generate human language based on inputs. LLMs undergo extensive training, require copious amounts of data and can be tedious to create. They can process and respond to information much as a human would. Initially, information must be fed to the model individually and manually by human beings until a pattern is established, at which point the computer can take over. Keeping facts current can be a daunting task considering the breadth of data from around the world that AI is expected to process.

Aidan Doyle, a data engineer at the Army Artificial Intelligence Integration Center (AI2C), works on a team of three active-duty service members, including another data engineer and a data analyst, as well as four contracted software developers and one contracted technical team lead. “It’s a small team, roles are fluid, [and] everyone contributes code to Camo[GPT],” Doyle said.

Doyle’s team is working to transition CamoGPT into a program of record and put more focus into developing an Army-specific LLM. “An Army-specific LLM would perform much better at recognizing Army acronyms and providing recommendations founded in Army doctrine,” Doyle said. “Our team does not train LLMs; we simply host published, open-source models that have been trained by companies like Meta, Google and Mistral.”

Training an LLM starts with pre-training: the model is shown as many examples of natural language as possible from across the internet, everything from greetings to colloquialisms, so it can mimic human conversation. Supervised learning on specific information sometimes follows during the fine-tuning step. The model then generates different answers to questions, and humans evaluate and annotate those responses, flagging problems that arise. Once preferred responses are identified, developers adjust the model accordingly, a post-training step called reinforcement learning from human feedback, or alignment. Finally, in the self-play step, the model generates both the questions and the answers itself. When the model is ready, it is deployed.
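As a rough, hypothetical sketch of the supervised fine-tuning step described above (not how the AI2C team or any vendor actually trains its models), the snippet below further trains a small published open-source model on a handful of example question-and-answer strings using the open-source Hugging Face Transformers library. The model name, example data and hyperparameters are illustrative assumptions.

```python
# Minimal, illustrative supervised fine-tuning of a causal language model.
# Assumptions: "gpt2" stands in for any published open-source model; the example
# strings and hyperparameters are made up for demonstration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tiny hypothetical fine-tuning set: prompts paired with preferred answers.
examples = [
    "Q: What does LLM stand for? A: Large language model.",
    "Q: What is fine-tuning? A: Further training a pre-trained model on task-specific data.",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(2):
    for text in examples:
        batch = tokenizer(text, return_tensors="pt")
        # For causal language modeling, the labels are the input tokens themselves;
        # the library shifts them internally so the model learns next-token prediction.
        outputs = model(**batch, labels=batch["input_ids"])
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

Alignment steps such as reinforcement learning from human feedback require separate reward modeling and far larger datasets; this sketch covers only the supervised portion of the pipeline.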

A LITTLE TOO CREATIVE
The use of AI in creative fields faces significant challenges, such as the potential for plagiarism and inaccuracy. Artists spend a lot of time creating work that can easily be duplicated by AI without giving the artist any credit. AI can also repackage copyrighted material. It can be tough to track down the original source for something when it is generated with AI.

AI can, and sometimes does, introduce inaccuracies. When AI unintentionally fabricates information, it is called a hallucination: the model draws a connection between patterns it recognizes and passes it off as truth in a different set of circumstances. Facts presented by AI should always be verified, and tools such as CamoGPT often carry a disclaimer that hallucinations are possible. Journalists and content creators should be cautious with AI, as they run the risk of spreading misinformation.

Images and videos can also be intentionally manipulated. For example, AI-generated content on social media can be posted for shock value. Sometimes it’s easy to spot when something has been created by AI, and you can take it with a grain of salt. In other cases, social media trends can go viral before they are vetted.

For these reasons, the Army decided to implement CamoGPT. Not only can it process classified information on secure networks, but ongoing development also aims to minimize errors in its responses.

CONCLUSION
It’s becoming clear that analog is on the way out. Even established search engines like Google have started to prioritize AI-generated summaries in their results. Technology like AI, LLMs and other chatbots can save time and automate tedious tasks, increasing productivity and efficiency. CamoGPT is still evolving, and the team at AI2C is working hard to improve its accuracy and abilities. Other AI systems within the Army are still being developed, but the potential is limitless. While we may not be living in the future that the creator of “The Jetsons” predicted, we’re getting closer. In another 37 years, when 2062 rolls around, we may all be using flying cars, and those vehicles just might drive themselves with the help of AI.

For more information, go to https://www.camogpt.army.mil/camogpt.

HOLLY COMANSE provides contract support to the U.S. Army Acquisition Support Center from Honolulu, Hawaii, as a writer and editor for Army AL&T magazine and TMGL, LLC. She previously served as a content moderator and data specialist training artificial intelligence for a news app. She holds a B.A. in journalism from the University of Nevada, Reno.








Big tech is offering AI tools to California students. Will it save jobs?



By Adam Echelman, CalMatters

""
Students work in the library at San Bernardino Valley College on May 30, 2023. California education leaders are striking deals with tech companies to provide students with opportunities to learn AI. Photo by Lauren Justice for CalMatters

This story was originally published by CalMatters. Sign up for their newsletters.

As artificial intelligence replaces entry-level jobs, California’s universities and community colleges are offering a glimmer of hope for students: free AI training that will teach them to master the new technology. 

“You’re seeing in certain coding spaces significant declines in hiring for obvious reasons,” Gov. Gavin Newsom said Thursday during a press conference from the seventh floor of Google’s San Francisco office.

Flanked by leadership from California’s higher education systems, he called attention to the recent layoffs at Microsoft, at Google’s parent company, Alphabet, and at Salesforce, the tech company that is still the city’s largest private employer, headquartered in Salesforce Tower just a few blocks away.

Now, some of those companies — including Google and Microsoft — will offer a suite of AI resources for free to California schools and universities. In return, the companies could gain access to millions of new users.

The state’s community colleges and its California State University campuses are “the backbone of our workforce and economic development,” Newsom said, just before education leaders and tech executives signed agreements on AI.

The new deals are the latest developments in a frenzy that began in November 2022, when OpenAI publicly released the free artificial intelligence tool ChatGPT, forcing schools to adapt.

The Los Angeles Unified School District implemented an AI chatbot last year, only to cancel it three months later without disclosing why. San Diego Unified teachers started using AI software that suggested what grades to give students, CalMatters reported. Some of the district’s board members were unaware that the district had purchased the software. 

Last month, the company that oversees Canvas, a learning management system popular in California schools and universities, said it would add “interactive conversations in a ChatGPT-like environment” to its software.

To combat potential AI-related cheating, many K-12 and college districts are using a new feature from the software company Turnitin to detect plagiarism, but a CalMatters investigation found that the software falsely accused students who had done their own work.

Mixed signals?

These deals are sending mixed signals, said Stephanie Goldman, the president of the Faculty Association of California Community Colleges. “Districts were already spending lots of money on AI detection software. What do you do when it’s built into the software they’re using?”

Don Daves-Rougeaux, a senior adviser for the community college system, acknowledged the potential contradiction but said it’s part of a broader effort to keep up with the rapid pace of changes in AI. He said the community college system will frequently reevaluate the use of Turnitin along with all other AI tools. 

California’s community college system is responsible for the bulk of job training in the state, though it receives the least funding from the state per student. 

“Oftentimes when we are having these conversations, we are looked at as a smaller system,” said Daves-Rougeaux. The state’s 116 community colleges collectively educate roughly 2.1 million students.

In the deals announced Thursday, the community college system will partner with Google, Microsoft, Adobe and IBM to roll out additional AI training for teachers. Daves-Rougeaux said the system has also signed deals that will allow students to use exclusive versions of Gemini, Google’s counterpart to ChatGPT, and NotebookLM, Google’s AI research tool. Daves-Rougeaux said these tools will save community colleges “hundreds of millions of dollars,” though he could not provide an exact figure.

“It’s a tough situation for faculty,” said Goldman. “AI is super important but it has come up time and time again: How do you use AI in the classroom while still ensuring that students, who are still developing critical thinking skills, aren’t just using it as a crutch?”

One concern is that faculty could lose control over how AI is used in their classrooms, she added.

The K-12 system and the California State University system are forming their own tech deals. Amy Bentley-Smith, a spokesperson for the Cal State system, said it is working on its own AI programs with Google, Microsoft, Adobe and IBM, as well as Amazon Web Services, Intel, LinkedIn, OpenAI and others.

Angela Musallam, a spokesperson for the state government operations agency, said California high schools are part of the deal with Adobe, which aims to promote “AI literacy,” the idea that students and teachers should have basic skills to detect and use artificial intelligence.

Much like in the community college system, which is governed by local districts, individual K-12 districts would need to approve any deal, Musallam said.

Will deals make a difference to students, teachers?

Experts say it’s too early to tell how effective AI training will actually be.

Justin Reich, an associate professor at MIT, said a similar frenzy took place 20 years ago when teachers tried to teach computer literacy. “We do not know what AI literacy is, how to use it, and how to teach with it. And we probably won’t for many years,” Reich said. 

The state’s new deals with Google, Microsoft, Adobe and IBM allow these tech companies to recruit new users — a benefit for the companies — but the actual lessons aren’t time-tested, he said. 

“Tech companies say: ‘These tools can save teachers time,’ but the track record is really bad,” said Reich. “You cannot ask schools to do more right now. They are maxed out.”

Erin Mote, the CEO of an education nonprofit called InnovateEDU, said she agrees that state and education leaders need to ask critical questions about the efficacy of the tools that tech companies offer but that schools still have an imperative to act. 

“There are a lot of rungs on the career ladder that are disappearing,” she said. “The biggest mistake we could make as educators is to wait and pause.”

Last year, the California Community Colleges Chancellor’s Office signed an agreement with NVIDIA, a technology infrastructure company, to offer AI training similar to the kinds of lessons that Google, Microsoft, Adobe and IBM will deliver. 

Melissa Villarin, a spokesperson for the chancellor’s office, said the state won’t share data about how the NVIDIA program is going because the cohort of teachers involved is still too small. 

This article was originally published on CalMatters and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.






Warren County Schools hosts AI Workshop



KENTUCKY — On this week’s program, we’re keeping you up to date on happenings within Kentucky’s government, which includes ongoing work this summer with legislative committees and special task forces in Frankfort.

During this “In Focus Kentucky” segment, reporter Aaron Dickens shared how leaders in Warren County Public Schools are helping educators bring their new computer science knowledge into their classrooms.

Also in this segment, we shared details about the U.S. Department of Energy’s selection of the Paducah Gaseous Diffusion Plant site in Paducah, Kentucky, as one of four sites for developing artificial intelligence data centers and associated energy infrastructure. The initiative is reportedly part of the Trump administration’s plan to accelerate AI development by leveraging federal land to establish high-performance computing facilities and reliable energy sources for the burgeoning AI industry.

You can watch the full “In Focus Kentucky” segment in the player above.





