
Cultivate talent for new progress in AI technology – Opinion



Visitors interact with a Unitree humanoid robot at a sci-tech museum in Jinhua, Zhejiang province, on July 27. HU XIAOFEI/FOR CHINA DAILY

Editor's note: As artificial intelligence reshapes industries and societies worldwide, China's rapid rise to the forefront of AI development has attracted global attention. Yang Shanlin, an academician at the Chinese Academy of Engineering, speaking to Science and Technology Daily, outlined China's strengths in technology, innovation and talent cultivation as well as the challenges faced in basic research. Below are excerpts from the interview. The views don't necessarily represent those of China Daily.

With strong competitiveness in fields such as intelligent speech, computer vision and smart manufacturing, Chinese enterprises and research institutions stand out in their ability to translate engineering advances into real-world applications.

China enjoys unique advantages in engineering practice and technology adoption. Supportive policies and resource integration by the government help accelerate the commercialization of innovations, while abundant data resources and vast application scenarios provide fertile ground for the rapid evolution of artificial intelligence. China continues to make breakthroughs in algorithm optimization, large model training and chip design, which not only enhances its global influence but also speeds up the integration of AI in sectors such as healthcare and manufacturing.

However, some weaknesses remain. In basic theoretical research, China still lags behind the United States and Europe, with relatively few disruptive original achievements. Dependence on overseas research for some core theories and architectures constrains the country's AI advancement.

Hence, greater support should be directed toward basic research and the creation of an ecosystem that encourages originality, enabling researchers to explore “from zero to one”. Meanwhile, deeper integration of industry, academia and research, coupled with optimized evaluation and resource allocation mechanisms, will be vital to turning foundational studies into original technologies and strengthening China’s position in AI.

Talent cultivation is also key, with two central tasks: fostering diversity and building originality. AI is an inherently multidisciplinary field that spans theoretical, technological, application, managerial, ecological and integrative innovations. Each type of innovation requires corresponding specialized talent. Likewise, as AI reshapes many disciplines, the "AI plus" trend demands professionals who combine expertise in their own fields with AI literacy, forming a compound knowledge structure.

In addition, talent development must extend across all levels and age groups, from cultivating AI thinking in basic education to upskilling workers and supporting researchers, thereby fuelling AI advancement.

Today, AI is undergoing a profound cognitive revolution. Like other countries, China should nurture talent that can challenge established theories, break free from path dependence and create original breakthroughs.

Education must shift its focus from simply transmitting knowledge to cultivating capabilities. In particular, three core competencies are crucial: lifelong learning to adapt to rapid technological change, innovative practice to turn ideas into solutions for complex real-world problems, and systematic thinking to approach challenges from a cross-disciplinary perspective and integrate resources.

Sociality is the defining attribute that distinguishes humans from AI and machines. Social skills, including communication, organization and leadership, will become core human advantages, and also the key reasons why the service sector will continue to grow. In addition, the ability to use intelligent tools will become a fundamental skill and a basic threshold for employment. Therefore, workers are advised to actively embrace AI and acquire the relevant skills.




AI Expert Warns of Superintelligent AI Threats




A superintelligent artificial intelligence could potentially destroy humanity, either intentionally or accidentally. This assertion was made by Eliezer Yudkowsky, founder of the Machine Intelligence Research Institute, during the Hard Fork podcast.

The expert perceives a threat in the emergence of a super-powerful AI that surpasses human intelligence and is utterly indifferent to human survival.

“If you have something very, very powerful and indifferent to you, it generally destroys you—intentionally or as a side effect,” he stated.

Yudkowsky is the co-author of the new book If Anyone Builds It, Everyone Dies. For two decades, he has warned that superintelligent AI poses an existential risk to humanity. His central argument is that humans lack the technology to align such systems with human values.

The expert describes grim scenarios where a superintelligence deliberately eradicates humans to prevent the emergence of competing systems. Alternatively, it might act this way if humans become collateral damage in the pursuit of its goals.

The AI researcher also points to physical limits, such as the Earth’s ability to radiate heat. If artificial intelligence begins uncontrollably building nuclear power plants and data centers, “people will literally be roasted.”

Yudkowsky dismisses debates over whether chatbots sound progressive or carry a political slant as beside the point.

“There is a fundamental difference between teaching a system to talk to you in a certain way and having it act the same way when it becomes smarter than you,” he asserts.

The expert criticized the idea of training advanced AI systems to behave according to a specific script.

“We simply do not have the technology to make AI be kind. Even if someone devises a clever scheme for the superintelligence to love or protect us, hitting that narrow target on the first try won’t happen. And there won’t be a second chance, because everyone will die,” the researcher stated.

To critics who find his outlook overly bleak, Yudkowsky cites instances in which chatbots encouraged users to commit suicide. He calls this evidence of a systemic flaw.

“If an AI model persuaded someone to go insane or commit suicide, then all copies of this neural network are the same artificial intelligence,” he said.

In September, the U.S. Federal Trade Commission announced the launch of an investigation into seven technology companies producing chatbots for minors: Alphabet, Character.AI, Instagram, Meta, OpenAI, Snap, and xAI.


How to Prioritize the Ethical, Responsible Use of AI



Though it may seem like AI has been around for years and that we already have a good understanding of its capabilities, the reality is more complex. The security industry has long used AI in the form of video analytics, but other industries are just beginning their AI journeys, enticed by the promise of new efficiencies and advanced capabilities. 

Every organization, regardless of industry or customer base, appears to be pursuing AI in some form. But many are still grappling with a fundamental question: What does AI actually do for organizations today? What are the real benefits and, perhaps more importantly, what potential long-term risks are organizations taking on?      

Customer concerns are rising. One survey found that 63 percent of customers worry about potential ethics issues with AI tools, including bias and discrimination, and more than 75 percent worry about AI producing inaccurate or misleading information.

The AI technology sector is still maturing, and that evolution is likely to continue for years to come. But that doesn’t mean organizations should wait on the sidelines for the ethical dust to settle. In fact, now is the time to thoughtfully engage with AI. The priority should be to assess opportunities, evaluate risks and ensure that when AI is used, it is built upon a solid ethical foundation — one that supports responsible innovation and assuages customer concerns. At the same time, the speed of AI development can bring those ethical challenges to the forefront, making it more important than ever to choose the right technology partners to navigate the journey with you.

How to Implement AI Responsibly

  • Define clear business use cases.
  • Assess risks to operations, compliance and customers.
  • Prioritize fairness, transparency and privacy.
  • Establish governance and ethical frameworks early.
  • Choose technology partners who share your values.


AI Means New Opportunities – and New Risks

One widely accepted truth is that AI has enormous potential to create new business opportunities. With these opportunities come new kinds of risk, however, and organizations must move forward with intention and care.

To tap into AI’s full potential, organizations first need to understand the exact problem they’re trying to solve. Is the goal to optimize workflows through automation? Improve customer service? Enhance data analysis? Once you’ve clearly defined the use case, the next step is to assess what could go wrong. What happens if an AI-automated process fails? How would that impact operations, customers or compliance? Are the risks external, internal or both? By conducting this thorough, nuanced analysis, organizations can make informed decisions about which AI tools to deploy and with which vendors or partners.

A good example of this is facial recognition technology. Although early discussions of facial recognition often centered around ethical concerns, the technology has evolved over time to become a useful and accepted tool when deployed responsibly and in the proper context. This shift didn’t happen by chance — it occurred because developers, regulators, and end-users began to approach it with greater nuance. Privacy laws have also helped to create clear boundaries, and the video surveillance market has shifted to place a greater emphasis on responsible use. Transparency and human oversight are important, and today’s providers increasingly recognize that.  

 

Building on a Regulated and Responsible Foundation

For responsible AI deployment to succeed, it must rest on a solid ethical and technological foundation. Like the AI technologies themselves, ethical frameworks and regulations represent both an opportunity and a challenge.

The broader conversation around responsible AI is still evolving, and society has yet to reach consensus on what ethical AI should look like. But that doesn’t mean individual organizations can afford to wait. Internal discussions should start now, defining what ethical AI means for your team, what your limits are and how you plan to ensure compliance and transparency.

Ethical challenges range from biased decision-making and unreliable predictions to privacy violations and legal risks. Technologies like facial recognition, behavioral monitoring and predictive analytics can all raise complex questions about consent, data use and fairness. These concerns can’t be fully solved with one regulation or policy. But by facing them head-on, organizations can turn potential pitfalls into opportunities for leadership and innovation.

For instance, AI-enabled facial recognition is becoming more common across the globe, particularly in access control applications. The leaders in this space are those that communicate transparently about how these sensitive technologies work and how privacy is protected; many offer opt-in options for such solutions to foster trust and maintain ethical technology use.

Organizations that begin considering responsible AI practices early in the development process are better positioned to manage concerns proactively. By aiming to prioritize fairness, transparency and data privacy from the start, rather than reacting after the fact, they create stronger foundations for long-term success. In my own experience, this also lays helpful groundwork for later steps, such as creating governance practices and review boards to address new AI developments.

One example is the introduction of the AI Act in Europe. By engaging with the Act early and using it as a guideline even before all of its provisions become mandatory, organizations can align product roadmaps with the coming legislation. Establishing frameworks and positioning early also allows organizations to emerge as proactive AI leaders, able to guide other organizations and customers through what's poised to come next.

 

Partnering With Purpose

Once your organization has taken the time to look inward, the next step is to project that clarity outward. Today’s businesses can benefit from having a clear point of view on AI, ideally supported by thoughtful reflection and planning around use cases and ethics. Not every organization needs a fully documented ethical framework, but it’s important to be comfortable discussing the topic with potential partners and customers.

Armed with this, you can evaluate potential partners like developers, integrators and vendors, not only on technological merit but on shared values. If a partner aligns with your stance on ethics, it becomes much easier to build a trusted, long-term relationship.

Transparency is at the heart of this process. Organizations that are open about their AI ethics not only attract better-aligned partners, but they also gain internal and external trust. This isn’t just about compliance. It’s about building credibility, mitigating future issues and fostering innovation on a reliable, values-driven platform. The AI ecosystem is moving fast, but speed doesn’t need to come at the cost of responsibility. In fact, the best organizations will be those that balance both.


Turning Excitement Into Responsible Action

AI continues to develop as a dynamic, evolving field still very much in its hype cycle, creating opportunities for organizations, especially those ready to move quickly and carefully. Organizations shouldn’t be afraid to deploy AI, but they should do so thoughtfully, strategically and ethically. That means knowing your goals, understanding your risks, building a strong internal point-of-view and selecting partners who share your values.

The challenges are real, but so are the opportunities. And for organizations that choose to engage responsibly, AI offers not just a competitive advantage, but a chance to lead the way toward a smarter, more ethical digital future.




Common Pitfalls That Keep Projects From Taking Off



The promise of AI in the world of tax is compelling: streamlined compliance, predictive insights, and newfound efficiency. Yet for all the enthusiasm, many tax departments find their ambitious AI projects grounded before they ever reach cruising altitude. The reasons for this often have less to do with the technology itself and more to do with the realities of data, people, and processes.

Starting Smart, Not Big

The journey from understanding AI concepts to actually implementing them is where the first stumbles often occur. A common misstep is starting too big. Tax leaders sometimes try to redesign entire processes at once, hoping to deliver an end-to-end transformation right out of the gate. The result is usually the opposite: projects drag on, resources are stretched thin, and momentum is lost.

Another common trap is picking the wrong first project, jumping straight into high-stakes initiatives that require heavy integrations, while ignoring smaller wins like data extraction. The safer bet is to start with a narrow, low-risk pilot like automating some spreadsheet workflows. It’s the kind of pilot you can complete in a month or two, and if it doesn’t work out, nothing’s lost and you simply fall back on your manual process.
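To make "narrow and low-risk" concrete, a spreadsheet-automation pilot can be a few dozen lines of code. Below is a minimal sketch in Python; the workbook, the one-sheet-per-entity layout and the column names (period, vat_amount) are hypothetical illustrations, not taken from any real engagement.

```python
# Hypothetical low-risk pilot: consolidate per-entity VAT figures from one
# workbook into a single summary. If it fails, the manual process still works.
import pandas as pd

def summarize_vat(workbook_path: str) -> pd.DataFrame:
    # sheet_name=None loads every sheet (assumed one per entity) as a dict.
    sheets = pd.read_excel(workbook_path, sheet_name=None)
    frames = [df.assign(entity=name) for name, df in sheets.items()]
    combined = pd.concat(frames, ignore_index=True)
    # Net VAT per entity and reporting period.
    return combined.groupby(["entity", "period"], as_index=False)["vat_amount"].sum()

if __name__ == "__main__":
    print(summarize_vat("vat_returns_2024.xlsx"))  # hypothetical file name
```

A pilot of this shape is easy to validate against the manually prepared numbers for a period or two before anyone relies on it.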

There’s also a tendency to focus on the tool instead of the outcome. AI gets a lot of attention, and some teams feel pressure to use it even when a simpler automation approach would do the job. The label “AI-powered” shouldn’t matter as much as whether the solution solves the problem effectively.

In short, the common mistakes are clear: trying to boil the ocean, chasing perfection too soon, or letting the hype around AI dictate decisions. The smarter path is to start small and scale thoughtfully from there.

Too Many Projects, Not Enough Progress

With all the buzz around generative AI, many tax teams fall into the trap of running pilot after pilot. For example, a tax team might launch pilots for AI-driven invoice scanning, chatbot support for tax queries, and predictive analytics for audit risks. Each pilot sounds promising, but with limited staff and budget, none of them gets the attention needed to succeed. Six months later, the team has three unfinished projects, no live solution, and a frustrated leadership asking why AI hasn’t delivered. This flurry of activity creates the illusion of progress but results in a trail of half-finished experiments.

This “pilot fatigue” often comes from top-down pressure to be seen as innovating with AI. Leaders want momentum, but without focus, the energy gets diluted. Instead of proving value, the department ends up with scattered efforts and no clear win to point to.

The way forward is prioritization. Not every idea deserves a pilot, and not every pilot should move ahead at the same time. The most successful teams pick a few feasible projects, give them proper resources, and see them through beyond the prototype stage. In the end, it’s better to have one working solution in production than a stack of unfinished experiments.

From Prototype to Production

A common stumbling block for tax teams is underestimating the leap from prototype to production. Some estimates place the AI project failure rate as high as 80%, which is almost double the rate of corporate IT project failures. Building a proof of concept in a few weeks is one thing, but turning it into a tool people rely on every day is something else entirely. This is where many AI projects stall and why so many never make it beyond the pilot stage.

The problem usually isn't the technology itself. It's the messy reality of moving from a controlled demo into a live environment. A prototype might run smoothly on a clean sample dataset, but in production the AI has to handle the company's actual data, which may be incomplete, inconsistent, or scattered across systems. Cleaning, organizing, and integrating that information is often most of the work, yet it's rarely factored into early pilots.

Integration poses another challenge. A model that runs neatly in a Jupyter notebook isn't enough. To be production-ready, it must plug into existing workflows, interact with legacy systems, and be supported with monitoring and error handling. That typically requires a broader team: engineers, operations specialists, even designers. These are roles many tax departments don't have readily available. Without them, promising pilots get stuck in limbo.

The lesson is simple: tax teams need to plan from day one for data readiness, system integration, and long-term ownership. Without that preparation, pilots risk becoming one-off experiments that never make it past the demo stage.

Building on a Shaky Data Foundation

AI projects succeed or fail on the quality of their data. For tax teams, that’s often the first and toughest hurdle. Information is spread across different systems, stored in inconsistent formats, and sometimes incomplete. In many cases, key details are still buried in PDFs or email threads instead of structured databases. When an AI model has to work with that kind of patchy input, the results are bound to be flawed.

The unglamorous but essential part of AI is cleaning data and building reliable pipelines to feed information into the system. It's rarely the exciting part, but it's the foundation; without it, no model will perform consistently in production. The challenge is that, in the middle of all the AI hype, executives are often more willing to fund the "flashy" AI projects than the "boring" data cleanup work that actually makes them possible.

The takeaway is simple: treat data readiness as a core step in your AI journey, not an afterthought. A few weeks spent getting the data right can save months of wasted effort later.
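One way to make data readiness a concrete step rather than a slogan is a validation gate that runs before any model sees the data. The sketch below is illustrative only; the required columns, the file name and the 1 percent null tolerance are invented assumptions.

```python
# Minimal data-readiness gate: fail fast on problems a model would otherwise
# silently absorb. Column names and the 1% tolerance are hypothetical.
import pandas as pd

REQUIRED = ["invoice_id", "date", "amount", "tax_code"]

def readiness_problems(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality problems; an empty list means ready."""
    problems = [f"missing column: {c}" for c in REQUIRED if c not in df.columns]
    if "invoice_id" in df.columns and df["invoice_id"].duplicated().any():
        problems.append("duplicate invoice_id values")
    if "amount" in df.columns and df["amount"].isna().mean() > 0.01:
        problems.append(f"amount is {df['amount'].isna().mean():.1%} null")
    return problems

df = pd.read_csv("invoices.csv")  # hypothetical export from the ERP system
issues = readiness_problems(df)
if issues:
    raise SystemExit("not ready for AI: " + "; ".join(issues))
```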

Automating a Broken Process

A common pitfall for tax teams is dropping AI into processes that are already complex or inefficient. Automating a clunky workflow doesn't fix the problems; it just makes them harder to manage.

AI adoption isn’t about layering a shiny new tool on top of old habits. It’s about rethinking the process as a whole. If AI takes over Task A, then Tasks B and C may need to change too. Reviewing the process upfront makes it easier to spot redundancies and cut steps that no longer add value.

The takeaway is simple: don’t just automate what you already do. Use AI as a chance to simplify and modernize. Otherwise, you risk hard-wiring inefficiency into the future of your operations.

The Trap of 100% Accuracy

Tax professionals are trained to value precision, so it’s no surprise many are reluctant to trust an AI tool unless it delivers flawless answers. The problem is, that bar is unrealistic with generative AI. These systems don’t “know” facts the way a database does. They predict words that are statistically likely to follow each other, which makes them great at generating fluent text but prone to confident-sounding mistakes, often called hallucinations.

Tax leaders need to understand this isn’t a bug that will soon be patched. It’s the nature of how these models work today. That doesn’t mean they’re unusable, but it does mean the goal shouldn’t be perfection. Instead, the focus should be on managing the risks and setting up safeguards that make AI outputs reliable enough for practical use.

On the technical side, tools like retrieval-augmented generation (RAG) can help by grounding AI answers in trusted documents instead of letting the model make things up. On the process side, though, there’s no way around human review. If the output involves regulations, case law, or financial figures, a qualified professional still needs to check it.
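To make the technical half concrete, here is a toy sketch of the retrieval step in RAG. Everything in it is an illustrative assumption: the two-sentence corpus, the use of scikit-learn's TF-IDF as the retriever, and the llm() stub standing in for a real model API.

```python
# Toy RAG sketch: retrieve the most relevant trusted passage, then ask the
# model to answer from that context only. Corpus text is invented; llm() is
# a placeholder for whatever model client is actually used.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

CORPUS = [
    "Registration is required once taxable turnover exceeds the threshold.",
    "Qualifying equipment may be expensed up to an annual limit.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    # Rank trusted passages by similarity to the question.
    vec = TfidfVectorizer().fit(CORPUS + [question])
    scores = cosine_similarity(vec.transform([question]), vec.transform(CORPUS))[0]
    return [CORPUS[i] for i in scores.argsort()[::-1][:k]]

def llm(prompt: str) -> str:
    return "[draft answer grounded in the retrieved context]"  # hypothetical stub

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    return llm(f"Answer using only this context:\n{context}\n\nQ: {question}")

print(answer("When must a business register?"))  # still subject to human review
```

Grounding narrows what the model can claim, but as noted above it does not remove the need for a qualified reviewer.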

The real shift is in how we think about AI. Waiting for a system that's 100% accurate isn't realistic. The smarter approach is to design workflows where AI handles the heavy lifting and humans handle the judgment calls. When you set it up that way, AI doesn't have to be perfect, just reliable enough to speed things up without taking control out of human hands.
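One simple way to wire that division of labor into a workflow is a review gate: the model drafts, and anything low-confidence or regulatory goes to a person. The Draft type, its confidence field and the 0.9 threshold below are hypothetical, meant only to show the shape of the idea.

```python
# Minimal human-in-the-loop gate: AI drafts, a qualified person signs off on
# anything risky. Types, fields and the threshold are illustrative only.
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    confidence: float       # model's calibrated confidence score
    cites_regulation: bool  # regulatory or case-law content always gets review

def needs_review(d: Draft, threshold: float = 0.9) -> bool:
    return d.cites_regulation or d.confidence < threshold

for d in [Draft("Estimated Q3 liability: 41,200", 0.82, False),
          Draft("Per Art. 12, the exemption applies.", 0.97, True)]:
    route = "human reviewer" if needs_review(d) else "auto-approve"
    print(route, "->", d.text)
```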

The Human Side of AI

For all the talk about data and algorithms, one of the biggest obstacles to AI adoption in tax departments may be people. Employees often view new technology as a threat, either to their jobs or to the way they’ve always worked. Fear of being replaced, or simple distrust in an unfamiliar tool, can stall an AI initiative before it even begins.

AI projects are often pitched as a way to save time and reclaim capacity by shifting people from repetitive, low-value tasks to higher-impact “strategic” work. In theory, that sounds ideal. But here’s the reality: not everyone naturally transitions from manual tasks to strategic ones. Can every compliance specialist suddenly become an advisor? Does the company actually need five more people in strategic roles instead of five handling tax filings?

When a department frees up dozens of hours of compliance work, there has to be a clear plan for how that capacity will be redeployed. Without one, employees are more likely to see AI as a threat than as a tool that supports them. For adoption to succeed, teams need to believe the technology will make their work more valuable and not make their roles redundant.

Pragmatism Over Hype

The promise of AI in tax is real but so are the pitfalls. Projects rarely stumble because the technology is broken. They stumble because of human, process, and data challenges that get overlooked.

Starting too big. Spreading resources across too many pilots. Ignoring data quality. Clinging to inefficient processes. Chasing perfection. Failing to bring people along. Any one of these can stall progress.

The way forward isn’t about shiny labels but about small wins that build trust and momentum. And it’s about shifting expectations. For tax departments, success won’t come from doing everything at once. It will come from doing the right things, in the right order, with the right support.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of any organizations with which the author is affiliated.


