
AI Insights

Crisis? Clarity! How artificial intelligence helps with self-reflection


Understanding crises – shaping the future: A quiet guide for turbulent times (© Dilok Klaisataporn @ iStock)

Crises are part of life – but how we deal with them determines our inner maturity. While many people are looking for guidance in difficult phases, it is becoming increasingly clear that artificial intelligence can be a valuable companion in self-reflection.

In his book “Crises as turning points – learning, growing, shaping”, Markus Schall describes how modern AI tools can help to remove mental blocks, gain emotional clarity and understand oneself better.

AI as a tool for sorting thoughts

The idea of talking to an artificial intelligence about personal issues sounds strange to many people at first. But if you try it out, you quickly realize that the neutral, non-judgmental “conversation partner” can help enormously in sorting thoughts, recognizing patterns and preparing decisions – without any human judgment or distraction.

Markus Schall, an experienced entrepreneur, software developer and author, takes an unusual approach in his book: he combines traditional life coaching with modern technologies. In a separate chapter, he explains how AI-based self-reflection works – and why it can be a realistic start to personal development for many people. Instead of waiting a long time for therapy or getting lost in internet forums, AI offers the opportunity to ask specific questions, observe oneself – and gain clarity step by step.

“It’s not about seeing AI as a substitute for human closeness,” says Schall. “But rather as a tool – similar to a notebook or a good guidebook – that can help us to pause and think more deeply in our everyday lives.”

The book “Crises as Turning Points” is not a crash course in crisis management, but a calm, reflective companion. It combines biographical insights with tangible impulses – from nutrition and social dynamics to practical techniques for more personal responsibility and serenity. The integration of AI is just one of many tools designed to encourage readers to find their own paths.

Finding new perspectives with artificial intelligence

But the role of artificial intelligence does not end with sorting out past thoughts. It is becoming increasingly clear that AI can also serve as a source of inspiration for new perspectives – both in terms of personal growth and professional reorientation.

Anyone who consciously engages in an interactive conversation with a well-trained AI often discovers surprising ideas, new approaches or unexpected ways out of deadlocked patterns. Whether it’s about the realignment of a project, the next career move or the desire for more inner clarity – AI is increasingly becoming the sparring partner of the future. In “Crises as turning points”, precisely this aspect is addressed in a practical way and made tangible.

More than a crisis guide – a compass for personal development

“Crises as Turning Points” is far more than a book about difficult phases in life. It is a silent companion for people who do not want to stand still. Those who take the time to delve deeper will realize that it is not just about coping with crises – but about recognizing deeper patterns and breaking down old ways of thinking.

Among other things, the book addresses early imprints from childhood and shows how much they influence our behavior today – often unconsciously. It sheds light on the family dynamics that shape us and helps us to see through unhealthy role models. Aspects of mental clarity and resilience are also addressed: for example, through practical tips on microdosing lithium orotate and vitamin D3 – both natural substances that can promote mental balance when used mindfully.

Anyone looking for guidance will not find any quick fixes in this book – but many clever questions, valuable perspectives and concrete impulses for a self-determined, clearer path. And sometimes it is precisely this that initiates the decisive turning point.

The book is available from bookshops, BoD, Amazon and other booksellers.

https://schall-verlag.de/buch-krisen.php

M. Schall Verlag

Hackenweg 97

26127 Oldenburg

Germany

https://schall-verlag.de

Mr. Markus Schall

info@schall-verlag.de

Schall-Verlag was founded in 2025 by Markus Schall – out of a desire to publish books that create clarity, stimulate reflection and consciously avoid the hectic flow of the zeitgeist. The publishing house does not see itself as a mass marketplace, but as a curated platform for content with attitude, depth and substance.

The focus is on topics such as personal development, crisis management, social dynamics, technological transformation and critical thinking. All books are the result of genuine conviction, not market analysis – and are aimed at readers who are looking for guidance, insight and new perspectives.

The publishing house is deliberately designed to be compact, independent and with high standards in terms of language, content and design. Schall-Verlag is based in Oldenburg (Lower Saxony) and plans multilingual publications in German and English.

This release was published on openPR.





America’s 2025 AI Action Plan: Deregulation and Global Leadership


Shot of Data Center With Multiple Rows of Fully Operational Server Racks. Modern Telecommunications, Cloud Computing, Artificial Intelligence, Database, Super Computer Technology Concept.
Credit: Gorodenkoff via Adobe Stock

In July 2025, the White House released America’s AI Action Plan, a sweeping policy framework asserting that “the United States is in a race to achieve global dominance in artificial intelligence,” and that whoever controls the largest AI hub “will set global AI standards and reap broad economic and military benefits” (see Introduction). The Plan, following a January 2025 executive order, underscores the Trump administration’s vision of a deregulated, innovation-driven AI ecosystem designed and optimized to accelerate technological progress, expand workforce opportunities, and assert U.S. leadership internationally.

This article outlines the Plan’s development, key pillars, associated executive orders, and the legislative and regulatory context that frames its implementation. It also situates the Plan within ongoing legal debates about state versus federal authority in regulating AI, workforce adaptation, AI literacy, and cybersecurity.

Laying the Groundwork for AI Dominance

January 2025: Executive Order Calling for Deregulation

The first major executive action of Trump’s second term was the January 23, 2025, order titled “Removing Barriers to American Leadership in Artificial Intelligence.” This Executive Order (EO) formally rescinded policies deemed obstacles to AI innovation under the prior administration, particularly regarding AI regulation. Its stated purpose was to consolidate U.S. leadership by ensuring that AI systems are “free from ideological bias or engineered social agendas,” and that federal policies actively foster innovation.

The EO emphasized three broad goals:

  1. Promoting human flourishing and economic competitiveness: AI development was framed as central to national prosperity, with the federal government creating conditions for private-sector-led growth.
  2. National security: Leadership in AI was explicitly tied to the United States’ global strategic position.
  3. Deregulation: Existing federal regulations, guidance, and directives perceived as constraining AI innovation were revoked, streamlining federal involvement and eliminating bureaucratic barriers.

The January order set the stage for the July 2025 Action Plan, signaling a decisive break from the prior administration’s cautious, regulatory stance.

April 2025: Office of Management and Budget Memoranda

Prior to the release of America’s AI Action Plan, the Trump administration issued key guidance to facilitate federal adoption and procurement of AI technologies. This guidance focused on streamlining agency operations, promoting responsible innovation, and ensuring that federal AI use aligns with broader strategic objectives.

Two memoranda issued by the Office of Management and Budget (OMB) on April 3, 2025, provided a framework for this shift:

  • “Accelerating Federal Use of AI through Innovation, Governance, and Public Trust” (M-25-21): Empowers Chief AI Officers to serve as change agents promoting agency-wide AI adoption and directs agencies to remove barriers to AI innovation. The memorandum also requires federal agencies to track AI adoption through maturity assessments and to identify high-impact use cases that warrant heightened oversight, balancing rapid deployment of AI with privacy, civil rights, and civil liberties protections.
  • “Driving Efficient Acquisition of Artificial Intelligence in Government” (M-25-22): Provides agencies with concise, effective guidance on acquiring “best-in-class” AI systems quickly and responsibly while promoting innovation across the federal government. It streamlines procurement processes, emphasizing competitive acquisition and the prioritization of American AI technologies, and reduces reporting burdens while maintaining accountability for lawful and responsible AI use.

These April memoranda laid the procedural foundation for federal AI adoption, ensuring agencies could implement emerging AI technologies responsibly while aligning with strategic U.S. objectives.

July 2025: America’s AI Action Plan

Released on July 23, 2025, the AI Action Plan builds on the April memoranda by articulating clear principles for government procurement of AI systems, particularly Large Language Models (LLMs), to ensure federal adoption aligns with American values:

  1. Truth-seeking: LLMs must respond accurately to factual inquiries, prioritize historical accuracy and scientific inquiry, and acknowledge uncertainty.
  2. Ideological neutrality: LLMs should remain neutral and nonpartisan, avoiding the encoding of ideological agendas such as DEI unless explicitly prompted by users.

The Plan emphasizes that these principles are central to federal adoption, establishing expectations that agencies procure AI systems responsibly and in accordance with national priorities. OMB guidance, to be issued by November 20, 2025, will operationalize these principles by requiring federal contracts to include compliance terms and decommissioning costs for noncompliant vendors. Unlike the April memoranda, which focused narrowly on agency adoption and contracting, the July Plan set broad national objectives designed to accelerate U.S. leadership in artificial intelligence across sectors. These foundational principles inform the broader strategic vision outlined in the Plan, which is organized into three primary pillars:

  1. Accelerating AI Innovation
  2. Building American AI Infrastructure
  3. Leading in International AI Diplomacy and Security

Across these three pillars, the Plan identifies over 90 federal policy actions. It highlights the Trump administration’s objective of achieving “unquestioned and unchallenged global technological dominance,” positioning AI as a driver of economic growth, job creation, and scientific advancement.

Pillar 1: Accelerating AI Innovation

The Plan emphasizes that the United States must have the “most powerful AI systems in the world” while ensuring these technologies create broad economic and scientific benefits. Not only should the U.S. have the most powerful systems, but also the most transformative applications.

The pillar covers topics in AI adoption, regulation, and federal investment.

  • Removing bureaucratic “red tape and onerous regulation”: The administration argues that AI innovation should not be slowed by regulation, particularly state-level rules it considers “burdensome.” Funding for AI projects is directed toward states with favorable regulatory climates, potentially pressuring states to align with federal deregulatory priorities.
  • Encouraging open-source and open-weight AI: Expanding access to AI systems for researchers and startups is intended to catalyze rapid innovation. In particular, the administration is looking to invest in breakthroughs in AI interpretability, control, and robustness to create an “AI evaluations ecosystem.”
  • Federal adoption and workforce development: Federal agencies are instructed to accelerate AI adoption, particularly in defense and national security applications.
  • Workforce development: The uses of technology should ultimately create economic growth, new jobs, and scientific advancement. Policies also support workforce retraining to ensure that American workers thrive in an AI-driven economy, including pre-apprenticeship programs and high-demand occupation initiatives. 
  • Advancing protections: Ensuring that frontier AI protects free speech and American values. Notably, the pillar includes measures to “combat synthetic media in the legal system,” including deepfakes and fake AI-generated evidence.

Consistent with the innovation pillar, the Plan emphasizes AI literacy, recognizing that training and oversight are essential to AI accountability. This aligns with analogous principles in the EU AI Act, which requires deployers to inform users of potential AI harms. The administration proposes tax-free reimbursement for private-sector AI training and skills development programs to incentivize adoption and upskilling.

Pillar 2: Building American AI Infrastructure

AI’s computational demands require unprecedented energy and infrastructure. The Plan identifies infrastructure development as critical to sustaining global leadership, demonstrating the Administration’s pursuit of large-scale industrial plans. It contains provisions for the following:

  • Data center expansion: Federal agencies are directed to expedite permitting for large-scale data centers, defined in a July 23, 2025 EO titled “Accelerating Federal Permitting Of Data Center Infrastructure” as facilities “requiring 100 megawatts (MW) of new load dedicated to AI inference, training, simulation, or synthetic data generation.” These policies ease federal regulatory burdens to facilitate the rapid and efficient buildout of infrastructure. The EO revokes the Biden Administration’s January 2025 Executive Order on “Advancing United States Leadership in Artificial Intelligence Infrastructure” but maintains an emphasis on expediting permits and leasing federal lands for AI infrastructure development.
  • Energy and workforce development: To meet AI power requirements, the Plan calls for streamlined permitting for semiconductor manufacturing facilities and energy infrastructure, for example by strengthening and growing the electric grid. The Plan also calls for the development of covered components, defined by the July 23, 2025 EO as “materials, products, and infrastructure that are required to build Data Center Projects or otherwise upon which Data Center Projects depend.” Additionally, investments will be made in workforce training to operate these high-demand systems, aligning with the new national initiative to grow high-demand occupations such as electricians and HVAC technicians.
  • Cybersecurity and secure-by-design AI: Recognizing AI systems as both defensive tools and potential security risks, the Administration directs information sharing of AI threats between public and private sectors and updates incident response plans to account for AI-specific threats.

Pillar 3: Leading in International AI Diplomacy and Security

The Plan extends beyond domestic priorities to assert U.S. leadership globally. The following measures illustrate a dual focus of fostering innovation while strategically leveraging American technological dominance:

  • Exporting American AI: The Plan reflects efforts to drive the adoption of American AI systems, computer hardware, and standards. The Commerce and State Departments are tasked with partnering with industry to deliver “secure full-stack AI export packages… to America’s friends and allies,” including hardware, software, and applications (see “White House Unveils America’s AI Action Plan”).
  • Countering foreign influence: The Plan explicitly seeks to restrict access to advanced AI technologies by adversaries, including China, while promoting the adoption of American standards abroad.
  • Global coordination: Strategic initiatives are proposed to align protection measures internationally and ensure the U.S. leads in evaluating national security risks associated with frontier AI models.

[Learn more about the pillars at ai.gov]

California’s Reception and Industry Response

The Plan addresses the interplay between federal and state authority, emphasizing that states may legislate AI provided their regulations are not “unduly restrictive to innovation.” Federal funding is explicitly conditioned on state regulatory climates, incentivizing alignment with the Plan’s deregulatory priorities. For California, this creates a favorable environment for the state’s robust tech sector, encouraging continued innovation while aligning with federal objectives. At the same time, the Federal Trade Commission (FTC) is directed to review its AI investigations to avoid burdening innovation, a policy reflected in the removal of prior AI guidance from the FTC website in March 2025 and one that further supports California’s leading role in AI development.

The White House released an article showcasing acclaim for the Plan. Among the supporters are the AI Innovation Association, Center for Data Innovation, Consumer Technology Association, and the US Chamber of Commerce. Leading tech companies—including California-based companies Meta, Anthropic, xAI, and Zoom—praised the Plan’s focus on federal adoption, infrastructure buildout, and innovation acceleration. 

California’s Anthropic published a reflection highlighting alignment with its own policy priorities, including safety testing, AI interpretability, and secure deployment. The reflection comments on how to accelerate AI infrastructure and adoption, promote secure AI development, democratize AI’s benefits, and establish a national standard by proposing a framework for frontier model transparency. The Action Plan’s recommendations to increase federal government adoption of AI include proposals aligned with recommendations Anthropic made to the White House in response to the Office of Science and Technology Policy’s “Request for Information on the Development of an AI Action Plan.” Additionally, Anthropic released a “Build AI in America” report detailing steps the Administration can take to accelerate the buildout of the nation’s AI infrastructure, and the company is looking to work with the administration on measures to expand domestic energy capacity.

California’s tech industry has not only embraced the Action Plan but positioned itself as a key partner in shaping its implementation. With companies like Anthropic, Meta, and xAI already aligning their priorities to federal policy, California has an opportunity to set a national precedent for constructive collaboration between industry and government. By fostering accountability principles grounded in truth-seeking and ideological neutrality, and by maintaining a regulatory climate favorable to innovation, the state can both strengthen its relationship with Washington and serve as a model for other states seeking to balance growth, safety, and public trust in the AI era.

As America’s AI Action Plan moves from policy articulation to implementation, coordination between federal guidance and state-level innovation will be critical. California’s tech industry is already demonstrating how strategic alignment with national priorities can accelerate adoption, build infrastructure, and set standards for responsible AI development. The Plan offers an opportunity for states to serve as models of effective governance, showing how deregulation, accountability principles, and public-private collaboration can advance technological leadership while safeguarding public trust. By continuing to harmonize innovation with ethical oversight, the United States can solidify its position as the global leader in artificial intelligence.








Job seekers, HR professionals grapple with use of artificial intelligence



RALEIGH, N.C. (WTVD) — The conversation surrounding the use of generative artificial intelligence, such as OpenAI’s ChatGPT, Microsoft Copilot, Google Gemini, and others, is rapidly evolving and continues to raise new questions.

The debate comes as North Carolina Governor Josh Stein signed an executive order geared toward artificial intelligence.

It’s a space that is transforming at a pace much quicker than many people can adapt to, and is finding its way more and more into everyday use.

One of those spaces is the job market.

“I’ll even share my experience from yesterday. I had gotten a completely generative AI-written resume, and my first reaction was, ‘Oh, I don’t love this.’ And then my second reaction was, ‘but why?’ I’m going to want them doing this at work. So why wouldn’t I want them doing it in the application process?” said human resources executive Steve O’Brien.

O’Brien’s comments caught the attention of colleagues internally and externally.

“I think what we need to do is ask ourselves, how do we interview in a world where generative AI is involved? Not how do we exclude generative AI from the interview process,” added O’Brien.

According to the 2025 Job Seeker Nation Report by Employ, 69% of applicants say they use artificial intelligence to find or match their work history with relevant job listings, up one percentage point from 2024. Conversely, Employ found that 52% of applicants write or review resumes using artificial intelligence in 2025, down from 58% in 2024.

“I think recruiters are getting very good at spotting this AI-generated content. Every resume sounds the same, every line sounds the same, and the resume is missing the stories that – I mean, humans love stories,” said resume and career coaching expert Mir Garvy.


Meanwhile, career website Zety found that 58% of HR managers believe it’s ethical for candidates to use AI during their job search.

“Now those applicant tracking systems are AI-informed. But when all of us have access to tools like ChatGPT, in a sense, we have now a more level playing field,” Garvy said.

“If you had asked me six months ago, I’d have said that I was disappointed that generative AI had made the resume. But I don’t think that I have that opinion anymore,” said O’Brien. “So I don’t fault the candidates who are being asked to write 75 resumes and reply to 100 jobs before they get an interview for trying to figure out an efficient way to engage in that marketplace.”

The pair, along with job seekers, agree that AI is a tool that is best used to aid and assist, but not replace.

“(Artificial intelligence) should tell your story. It should highlight the things that are most important and downplay or eliminate the things that aren’t,” said Garvy.

O’Brien added, “If you completely outsource the creative process to ChatGPT, that’s probably not great, right? You are sort of erasing yourself from the equation. But if there’s something in there that you need help articulating, you need a different perspective on how to visualize, I have found it to be an extraordinary partner.”

Copyright © 2025 WTVD-TV. All Rights Reserved.






North Carolina Governor Creates AI Council, State Accelerator



North Carolina Gov. Josh Stein on Tuesday signed an executive order (EO) creating the state’s Artificial Intelligence Leadership Council, tasked with advising on and supporting AI strategy, policy and training. The move comes just more than a year after the state published its AI responsible use framework.

Executive Order No. 24: Advancing Trustworthy Artificial Intelligence That Benefits All North Carolinians sets the direction for the council and creates the North Carolina AI Accelerator, which will serve as a hub within the N.C. Department of Information Technology (NCDIT). Council duties include creating a state AI road map; recommending AI policy, governance and ethics frameworks; guiding the accelerator; addressing workforce, economic and infrastructure impacts; and issuing recommendations for AI and public safety. Its first report is due June 30, 2026.

“AI has the potential to transform how we work and live, carrying with it both extraordinary opportunities and real risks,” Stein said in a news release. “Our state will be stronger if we are equipped to take on these challenges responsibly. I am looking forward to this council helping our state effectively deploy AI to enhance government operations, drive economic growth and improve North Carolinians’ lives.”


State CIO and NCDIT Secretary Teena Piccione will co-chair the council alongside state Department of Commerce Secretary Lee Lilley. The governor named 22 additional members from the public and private sectors. They include technology leaders, educators, state legislators and state agency representatives such as David Yokum, chief scientist of the Office of State Budget and Management. Vera Cubero, emerging technologies consultant for the N.C. Department of Public Instruction, and Charlotte CIO Markell Storay are among the appointees, each of whom will serve a two-year term.

“I am honored to chair this council dedicated to strategically harnessing the exponential potential of AI for the benefit of North Carolina’s people, businesses and communities,” Piccione said in the release. “The AI Accelerator, along with our other initiatives, puts us in a strong position to implement swift and transformative solutions that will not only position North Carolina at the forefront of technological innovation but also uphold the latest standards of data privacy and security.”

The AI Accelerator will serve as the hub for AI governance, research, development and training. It is housed in the NCDIT, where staff will develop an AI governance framework, risk assessment and statewide definitions for AI and generative AI, according to the EO. When it comes to AI, Piccione sees significant potential for its use in government, identifying use cases in areas including procurement, fraud detection and cybersecurity, she told Government Technology earlier this year.

The state, like others, has been accelerating its AI moves of late. NCDIT named its first AI governance and policy executive this year, the University of North Carolina has been working with faculty to address AI in classroom settings, and some state agencies are looking at ways to safely implement chat and other services. North Carolina now joins other states that have appointed councils; are working toward ethical governance; and are wrestling with data centers, AI use and how it impacts energy use, also mentioned in the EO.






