

Regulatory Policy and Practice on AI’s Frontier



Adaptive, expert-led regulation can unlock the promise of artificial intelligence.

Technological breakthroughs, historically, have played a distinctive role in accelerating economic growth, expanding opportunity, and enhancing standards of living. Technology enables us to get more out of existing knowledge and prior scientific discoveries, while also generating new insights that lead to new inventions. Technology is associated with new jobs, higher incomes, greater wealth, better health, educational improvements, time-saving devices, and many other concrete gains that improve people’s day-to-day lives. The benefits of technology, however, are not evenly distributed, even when an economy is more productive and growing overall. When technology is disruptive, costs and dislocations are shouldered by some more than others, and periods of transition can be difficult.

Theory and experience teach that innovative technology does not automatically improve people’s station and situation merely by virtue of its development. The way technology is deployed and the degree to which gains are shared—in other words, turning technology’s promise into reality without overlooking valid concerns—depends, in meaningful part, on the policy, regulatory, and ethical decisions we make as a society.

Today, these decisions are front and center for artificial intelligence (AI).

AI’s capabilities are remarkable, with profound implications spanning health care, agriculture, financial services, manufacturing, education, energy, and beyond. The latest research is demonstrably pushing AI’s frontier, advancing AI-based reasoning and AI’s performance of complex multistep tasks, and bringing us closer to artificial general intelligence (high-level intelligence and reasoning that allows AI systems to autonomously perform highly complex tasks at or beyond human capacity in many diverse instances and settings). Advanced AI systems, such as AI agents (AI systems that autonomously complete tasks toward identified objectives), are leading to fundamentally new opportunities and ways of doing things, which can unsettle the status quo, possibly leading to major transformations.

In our view, AI should be embraced while preparing for the change it brings. This includes recognizing that the pace and magnitude of AI breakthroughs are faster and more impactful than anticipated. A terrific indication of AI’s promise is the 2024 Nobel Prize in chemistry, whose winners used AI to “crack the code” of protein structures, “life’s ingenious chemical tools.” At the same time, as AI becomes widely used, guardrails, governance, and oversight should manage risks, safeguard values, and look out for those disadvantaged by disruption.

Government can help fuel the beneficial development and deployment of AI in the United States by shaping a regulatory environment conducive to AI that fosters the adoption of goods, services, practices, processes, and tools leveraging AI, in addition to encouraging AI research.

It starts with a pro-innovation policy agenda. Once the goal of promoting AI is set, the game plan to achieve it must be architected and implemented. Operationalizing policy into concrete progress can be difficult, and it becomes more challenging when new technology raises novel questions infused with subtleties.

Regulatory agencies that determine specific regulatory requirements and enforce compliance play a significant part in adapting and administering regulatory regimes that encourage rather than stifle technology. Pragmatic regulation that is compatible with AI, and workable as applied to AI-led innovation, is instrumental to further unlocking AI’s potential. Regulators should be willing to allow businesses flexibility to deploy AI-centered uses that challenge traditional approaches and conventions. That said, regulators’ critical mission of detecting and preventing harmful behavior should not be cast aside. Properly calibrated governance, guardrails, and oversight that prudently handle misuse and misconduct can support technological advancement and adoption over time.

Regulators can achieve core regulatory objectives, including, among other things, consumer protection, investor protection, and health and safety, without being anchored to specific regulatory requirements if the requirements—fashioned when agentic and other advanced AI was not contemplated—are inapt in the context of current and emerging AI.

We are not implying that vital governmental interests that are foundational to many regulatory regimes should be jettisoned. Rather, it is about how those interests are best achieved as technology changes, perhaps dramatically. It is about regulating in a way that allows AI to reach its promise and ensuring that essential safeguards are in place to protect persons from wrongdoing, abuses, and harms that could frustrate AI’s real-world potential by undercutting trust in—and acceptance of—AI. It is about fostering a regulatory environment that allows for constructive AI-human collaboration—including using AI agents to help monitor other AI agents while humans remain actively involved in addressing nuances, responding to an AI agent’s unanticipated performance, engaging matters of greatest agentic AI uncertainty, and resolving tough calls that people can uniquely evaluate given all that human judgment embodies.

This takes modernizing regulation—in its design, its detail, its application, and its clarity—to work, very practically, in the context of AI by accommodating AI’s capabilities.

Accomplishing this type of regulatory modernity is not easy. It benefits from combining technological expertise with regulatory expertise. When integrated, these dual perspectives assist regulatory agencies in determining how best to update regulatory frameworks and specific regulatory requirements to accommodate expected and unexpected uses of advanced AI. Even when underpinning regulatory goals do not change, certain decades-old—or newer—regulations may not fit with today’s technology, let alone future technological breakthroughs. In addition, regulatory updates may be justified in light of regulators’ own use of AI to improve regulatory processes and practices, such as using AI agents to streamline permitting, licensing, registration, and other types of approvals.

Regulatory agencies are filled with people who bring to bear valuable experience, knowledge, and skill concerning agency-specific regulatory domains, such as financial services, antitrust, food, pharmaceuticals, agriculture, land use, energy, the environment, and consumer products. That should not change.

But the commissions, boards, departments, and other agencies that regulate so much of the economy and day-to-day life—the administrative state—should have more in-house technological expertise relevant to AI. AI’s capabilities are materially increasing at a rapid clip, so staying on top of what AI can do and how it does it—including understanding leading AI system architecture and imagining how AI might be deployed as it advances toward its frontier—is difficult. Without question, there are individuals across government with impressive technological chops, and regulators have made commendable strides in keeping apprised of technological innovation. Indeed, certain parts of government are inherently technology-focused. Many regulatory agencies are not, however, and even at those agencies, an in-depth understanding of AI is increasingly important.

Regulatory agencies should bring on board more individuals with technology backgrounds from the private sector, academia, research institutions, think tanks, and elsewhere—including computer scientists, physicists, software engineers, AI researchers, cryptographers, and the like.

For example, we envision a regulatory agency’s lawyers working closely with its AI engineers to ensure that regulatory requirements contemplate and factor in AI. Lawyers with specific regulatory knowledge can prompt large language models to measure a model’s interpretation of legal and regulatory obligations. Doing this systematically, and with a large enough sample size, requires close collaboration with AI engineers to automate the analysis and benchmark a model’s results. AI engineers could also partner with an agency’s regulatory experts to assess how well the technological capabilities of frontier AI systems comport with identified regulatory objectives, and then craft regulatory requirements that account for and accommodate the use of AI in consequential contexts. AI could accelerate various regulatory functions that have typically taken regulators considerable time to perform because they have demanded significant human involvement. To illustrate, regulators could use AI agents to assist in reviewing applications for the permits, licenses, and registrations that individuals and businesses must obtain before engaging in certain activities, closing certain transactions, or marketing and selling certain products. Regulatory agencies could augment humans by using AI systems to conduct an initial assessment of applications and other requests against regulatory requirements.
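To make the lawyer-engineer collaboration concrete, here is a minimal sketch of the kind of benchmarking harness described above. It is purely illustrative and not drawn from the article: the scenarios, the stubbed ask_model call (standing in for whatever large language model an agency might use), and the exact-match scoring rule are all hypothetical choices.

```python
# Illustrative sketch only: a tiny harness for benchmarking how a language
# model interprets regulatory obligations against reference answers written
# by agency lawyers. The scenarios, the stubbed model call and the
# exact-match scoring rule are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class Scenario:
    prompt: str     # a fact pattern plus a yes/no regulatory question
    reference: str  # the lawyers' expected interpretation ("yes" or "no")


def ask_model(prompt: str) -> str:
    """Placeholder for a call to whatever LLM the agency actually uses."""
    return "yes"  # stub answer so the sketch runs end to end


def benchmark(scenarios: list[Scenario]) -> float:
    """Return the fraction of scenarios where the model matches the lawyers."""
    if not scenarios:
        return 0.0
    agree = sum(
        ask_model(s.prompt).strip().lower() == s.reference.strip().lower()
        for s in scenarios
    )
    return agree / len(scenarios)


if __name__ == "__main__":
    sample = [
        Scenario("Must a broker disclose fee X under rule Y? Answer yes or no.", "yes"),
        Scenario("Does product Z require pre-approval? Answer yes or no.", "no"),
    ]
    print(f"Agreement with reference interpretations: {benchmark(sample):.0%}")
```

In practice, most of the effort would go into assembling representative scenarios and into scoring rules more nuanced than exact matching, which is exactly where the partnership between regulatory experts and AI engineers matters.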

The more regulatory agencies have the knowledge and experience of technologists in-house, the more understanding regulatory agencies will gain of cutting-edge AI. When that enriched technological insight is combined with the breadth of subject-matter expertise agencies already possess, regulatory agencies will be well-positioned to modernize regulation that fosters innovation while preserving fundamental safeguards. Sophisticated technological know-how can help guide regulators’ decisions concerning how best to revise specific regulatory features so that they are workable with AI and conducive to technological progress. The technical elements of regulation should be informed by the technical elements of AI to ensure practicable alignment between regulation and AI, allowing AI innovation to flourish without incurring undue risks.

With more in-house technological expertise, we think regulatory agencies will grow increasingly comfortable making the regulatory changes needed to accommodate, if not accelerate, the development and adoption of advanced AI.

There is more to technological progress that propels economic growth than technological capability in and of itself. An administrative state that is responsive to the capabilities of AI—including those on AI’s expanding frontier—could make a big difference converting AI’s promise into reality, continuing the history of technological breakthroughs that have improved people’s lives for centuries.

Troy A. Paredes




Duke University pilot project examining pros and cons of using artificial intelligence in college – Independent Tribune


Experiential learning: A solution to AI-driven challenges



I was halfway into my sustainable agriculture lecture at UC Santa Barbara on an otherwise pleasant February afternoon when I heard the sound no teacher wants to hear: one of my students, in the back row, snoring. Loudly. I decided to plow ahead, even as other students turned around and erupted into giggles. Finally, someone shook the offending student awake, and class proceeded.

Later that week, a teaching assistant approached me to explain how bad the snorer felt about the incident. It wasn’t that the student was uninterested or found my lecture boring, the TA explained; they just struggled to stay awake through such a passive and sedentary experience. It wasn’t the content of my class that was the problem. It was the format.

The longer I’ve taught (this is my 11th year as a professor), the more I’ve leaned on experiential learning: hands-on activities that get students out of their seats and engaging all their senses and capacities. Even as universities in my state are signing deals with tech companies to bring free AI training to campus, I see students clamoring for something else: meaningful in-person experiences where they can make strong connections with mentors and peers.

As I’ve redesigned my classes to integrate more field trips to local farms, volunteer work with community organizations and hands-on lessons focused on building tangible skills, I’ve found that students work harder, learn more, and look forward to class. Instead of just showing slides of compost, I bring my students to our campus farm to harvest castings (nutrient-dense worm poop!) from the worm bins. Instead of just lecturing about how California farmers are adapting to water scarcity, I take students to visit a farm that operates without irrigation, where we help prune and harvest grapes and olives. Long wait lists for these types of classes indicate that demand is far greater than supply.

I’m a proponent of experiential learning in almost every educational context, but there are several reasons why it is particularly relevant and essential this school year.

For one thing, generative AI has upended most traditional assignments. We can no longer assume that writing submitted by students is indicative of what they’ve learned. As many of my colleagues have found out the hard way, students are often completely unfamiliar with the content of their own papers. In this environment, there’s a real advantage to directly supervising and assessing students’ learning, rather than relying on proxies that robots can fake.


Second, today’s young adults face an uncertain economy and job market, partly due to AI. Many employers are deploying AI instead of hiring entry-level workers, or simply pausing hiring while waiting for markets to settle. As instructors, we must admit that we aren’t 100% sure which technical skills our students will need to succeed in this rapidly evolving workplace, especially five to 10 years down the road. Experiential learning has the advantage of helping students build the timeless, translatable skills that will AI-proof their employability: teamwork, communication, emotional intelligence and project management. As a bonus, community-engaged learning approaches can introduce students to professional settings in real time, ensuring a more up-to-date and relevant experience than any pre-cooked lesson plan.

Finally, and not unrelated to the above two points, Gen Z is experiencing a mental health crisis that inhibits many students’ ability to focus, set goals and develop self-confidence. There is nothing quite like putting a shovel and some seeds in their hands (preferably out of cellphone range) and watching them build a garden with their peers. The combined effect of being outdoors, digitally detoxing, moving about, bonding with others, and feeling a sense of accomplishment and making a difference is a powerful antidote to rumination and constant online isolation.

The field of environmental studies lends itself to outdoor experiential learning, and this has long been a key component of courses in ecology and earth science. But this approach can be quite powerful across the curriculum. I’ve known political science professors who take students to city council meetings, historians who walk students through the streets of their city to witness legacies of earlier eras, and writing instructors who bring groups of students to wild spaces to develop narrative essays on site.

With support from my department, I’m grateful to be able to teach an entirely experiential field course — but I’m equally excited about integrating modest experiential elements into my 216-person lecture course. Even one experiential assignment (like attending and reflecting on a public event) or hands-on activity in the discussion section can catalyze and deepen learning. 

To be sure, effective experiential learning is an art form that requires significant investment of time and energy from the instructor — and often from community partners as well. This work needs to be appropriately valued and compensated, and off-campus experiences require transportation funding and careful planning to ensure student safety. But the payoff can be the most meaningful and memorable experience of a student’s academic career. Instead of snoozing through a lecture, they can actively develop themselves into the adult they wish to become.

•••

Liz Carlisle is an associate professor of environmental studies at UC Santa Barbara and a Public Voices Fellow of the OpEd Project.

The opinions expressed in this commentary represent those of the author. EdSource welcomes commentaries representing diverse points of view. If you would like to submit a commentary, please review our guidelines and contact us.






The Role Of AI In Enhancing Online Gaming Engagement



This content was produced in partnership with VegasSlotsOnline.

Artificial intelligence is revolutionizing the online gaming landscape by providing personalized experiences to consumers. Platforms now tailor gaming environments to individual preferences without compromising regulatory safeguards or consumer safety.

Machine learning innovations are leaving a clear mark on virtual entertainment and gaming. Personalized recommendations, dynamic user interfaces and risk management tools are commonplace in contemporary casino systems. Insights gleaned from user behavior help optimize engagement strategies, delivering safer and more enjoyable experiences.

The emergence of personalization through AI in internet gaming

Sophisticated AI models have enabled a better understanding of behavior patterns. Sites examine session lengths, game picks, stakes and the timing of interactions to create individual profiles. Those profiles inform dynamic interfaces that do not rely on blanket templates, instead presenting game offers with comparable mechanics or styling in a non-intrusive manner. Suggestions can include alternative slot themes, table game variants or even timing recommendations based on peaks of historical activity. Such subtle refinements ensure that players see experiences related to their interests, minimizing navigation friction while still providing variety. Implemented well, these models can increase user satisfaction and retention in a way consistent with playing enjoyment and regulatory obligations, clearing the ground for more innovative responsible gaming tools that adapt in sync with emerging behaviors.
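As a purely illustrative sketch of the profiling described above, the following assumes hypothetical session records tagged by game mechanics and themes, and ranks a catalog of games by how strongly their tags overlap with the player’s history; the field names, catalog and weighting scheme are invented for the example, not drawn from any real platform.

```python
# Illustrative sketch only: building a simple preference profile from past
# sessions and ranking candidate games by tag overlap. The field names,
# catalog and weighting scheme are invented for the example.

from collections import Counter


def build_profile(sessions: list[dict]) -> Counter:
    """Weight each game tag (theme or mechanic) by total minutes played."""
    profile: Counter = Counter()
    for session in sessions:
        for tag in session["tags"]:
            profile[tag] += session["minutes"]
    return profile


def recommend(profile: Counter, catalog: list[dict], top_n: int = 3) -> list[str]:
    """Rank catalog games by how strongly their tags overlap with the profile."""
    scored = [
        (sum(profile[tag] for tag in game["tags"]), game["title"])
        for game in catalog
    ]
    return [title for _, title in sorted(scored, reverse=True)[:top_n]]


if __name__ == "__main__":
    sessions = [
        {"tags": ["slots", "egypt-theme"], "minutes": 40},
        {"tags": ["blackjack"], "minutes": 10},
    ]
    catalog = [
        {"title": "Pharaoh's Reels", "tags": ["slots", "egypt-theme"]},
        {"title": "Classic Blackjack", "tags": ["blackjack"]},
        {"title": "Space Poker", "tags": ["poker", "sci-fi-theme"]},
    ]
    print(recommend(build_profile(sessions), catalog))
```

Real systems would weigh many more signals and constraints, but the basic shape, a profile built from observed behavior that reorders what the lobby shows, is the same.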

Blending affordability with individualization

Price sensitivity signals are integrated into these tailored systems. As an illustration, a site can identify frugal tendencies and accordingly emphasize low-risk titles. This is especially true with minimum deposit online casinos, where deposit thresholds are set low to facilitate access without compromising disciplined usage. AI systems can subtly steer players towards options that keep spending monitored while preserving enjoyment. Budget-sensitive groups can be offered lists of low-variance, low-volatility games, allowing for a customized experience mindful of both their wallet and their desire to play frequently. Over the long term, this generates a sustainable cycle of engagement in which entertainment becomes a frequent, low-stakes activity rather than a rare, high-stakes event, with affordability and personalization working together to build a balanced virtual gaming sphere.

Increasing engagement via intelligent content distribution

AI-driven content distribution systems respond at a granular level to the preferences users demonstrate. Recommendations refresh as behavior shifts, such as more evening time spent with live dealers or unusual weekend access to table games. Instead of static lists, casino lobbies offer individualized menus streamlined to focus on probable areas of interest. Display banners and graphic composition become less generic, focusing instead on relevance and context. Such improvements make for a better end-user experience, reducing time spent navigating and better aligning what players see with their individual taste profiles. Over time, this responsiveness also helps identify subtle changes in engagement, allowing platforms to introduce new titles or features at points of greatest interest while gradually retiring less relevant ones, producing a more dynamic relationship between platform design and user behavior.

Striking a balance with responsible play

AI systems are not crafted merely to maximize engagement; they are also designed with responsible gaming in mind. Detection algorithms track deviations from a player’s regular patterns in play frequency, session length or stake increases, raising alerts or nudges that prompt self-awareness. When thresholds are crossed, gentle messages suggest taking a break or reviewing recent activity. Adaptive deposit limits can also be activated, with caps set according to past behavior and defined safety margins. This focus on player welfare keeps personalization from turning into remorse-inducing over-engagement and, in turn, reassures regulators that safety remains paramount. Over the long term, such systems can evolve into proactive companions, offering supporting tools like play summaries, spending dashboards or voluntary pauses, so that entertainment is complemented by accountability.
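The following is a minimal, hypothetical sketch of how such a detection rule and an adaptive deposit cap might be expressed; the statistics (a simple mean and standard-deviation test over recent daily stakes) and the thresholds are assumptions for illustration, not the methods of any particular operator.

```python
# Illustrative sketch only: flagging stake increases that deviate from a
# player's own recent pattern and deriving an adaptive deposit cap. The
# thresholds and statistics are assumed for illustration.

from statistics import mean, pstdev


def stake_alert(recent_daily_stakes: list[float], today: float, k: float = 2.0) -> bool:
    """Flag today's total stakes if they exceed the recent mean by k standard deviations."""
    if len(recent_daily_stakes) < 7:
        return False  # not enough history to judge a deviation
    baseline = mean(recent_daily_stakes)
    spread = pstdev(recent_daily_stakes)
    return today > baseline + k * spread


def adaptive_deposit_cap(recent_daily_stakes: list[float], ceiling: float = 200.0) -> float:
    """Cap deposits near typical past behavior, never above a hard ceiling."""
    typical = mean(recent_daily_stakes) if recent_daily_stakes else 0.0
    return min(ceiling, round(1.5 * typical, 2))


if __name__ == "__main__":
    history = [20, 25, 18, 22, 30, 24, 21]
    print(stake_alert(history, today=95))   # True: nudge the player to take a break
    print(adaptive_deposit_cap(history))    # roughly 34.29 with this history
```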

Regulatory and privacy concerns

Personalization features must operate within a framework of data protection and compliance. Regulators examine how player data is used, calling on platforms to clarify how behavioral analytics affect what is presented. Platforms must include clear opt-out mechanisms for personalization features, and collection procedures must comply with privacy guidelines. Anonymization and data minimization are standard practices within the industry, ensuring that AI models learn patterns without holding personally identifiable information. Responsible vendors must build auditability and transparency into their personalization algorithms, demonstrating that recommendations are not exploitative and do not unfairly nudge users towards higher spending.

Artificial intelligence-powered personalization represents a paradigm shift in video gaming, transitioning from static interfaces to dynamic, context-sensitive worlds. Personalized content streaming, dynamic recommendation and risk/benefit tailoring define a new norm of player-centric experience. The deployment of such technology fosters engagement that aligns with personal preferences and boundaries, strengthening a healthier bond between users and platforms.

Although AI introduces advanced opportunities for personalization, ongoing attention must focus on ethics, fairness and regulatory alignment. Transparency about algorithmic functionality, robust safeguards against over-engagement and accessible user controls remain essential. Successful integration occurs when gaming platforms balance enjoyment, trust and safety, ensuring that personalization supports rather than undermines player well-being.

If you or anyone you know has a gambling problem, call 1-800-GAMBLER.




