HCLTech, OpenAI Partner to Drive Enterprise-Scale AI Adoption
HCLTech has announced a multi-year strategic collaboration with OpenAI to drive large-scale enterprise AI transformation, making it one of OpenAI’s first strategic services partners.
The company has said that its deep industry knowledge and AI engineering expertise have laid the foundation for scalable AI innovation with OpenAI.
The collaboration will enable HCLTech’s clients to leverage OpenAI’s industry-leading AI product portfolio alongside HCLTech’s foundational and applied AI offerings for rapid and scalable GenAI deployment, an official release said.
The company said it will integrate OpenAI’s models and solutions across its industry-focused offerings, capabilities and proprietary platforms, including AI Force, AI Foundry, AI Engineering and industry-specific AI accelerators.
This deep integration will help HCLTech’s clients modernise business processes, enhance customer and employee experiences and unlock growth opportunities, covering the full AI lifecycle from AI readiness assessments and integration to enterprise-scale adoption, governance and change management.
HCLTech will roll out ChatGPT Enterprise and OpenAI APIs internally, empowering its employees with secure, enterprise-grade generative AI tools.
Vijay Guntur, global CTO and head of ecosystems at HCLTech, said, “We are honoured to work with OpenAI… This collaboration underscores our commitment to empowering Global 2000 enterprises with transformative AI solutions.”
“It reaffirms HCLTech’s robust engineering heritage and aligns with OpenAI’s spirit of innovation. Together, we are driving a new era of AI-powered transformation across our offerings and operations at a global scale,” he said.
Giancarlo ‘GC’ Lionetti, chief commercial officer at OpenAI, said, “HCLTech’s deep industry knowledge and AI engineering expertise sets the stage for scalable AI innovation. As one of the first system integration companies to integrate OpenAI to improve efficiency and enhance customer experiences, they’re accelerating productivity and setting a new standard for how industries can transform using generative AI.”

In June 2023, HCLTech announced an expanded collaboration with Microsoft to leverage OpenAI models, like GPT‑3 and Codex, via Azure OpenAI Service.
As part of that initiative, the company established a generative AI centre of excellence with Microsoft, focusing specifically on developing industry‑tailored, scalable AI solutions.
AI Insights
There is No Such Thing as Artificial Intelligence – Nathan Beacom
One man tried to kill a cop with a butcher knife because he believed OpenAI had killed his lover. A 29-year-old mother became violent toward her husband when he suggested that her relationship with ChatGPT was not real. A 41-year-old now-single mother split with her husband after he became consumed with chatbot communication, developing bizarre paranoia and conspiracy theories.
These stories, reported by the New York Times and Rolling Stone, represent the frightening, far end of the spectrum of chatbot-induced madness. How many people, we might wonder, are quietly losing their minds because they’ve turned to chatbots as a salve for loneliness or frustrated romantic desire?
We might not all be losing our minds. But there are subtle, pernicious ways in which chatbots still affect us. Because they have been designed to present themselves as personal beings, we cannot help but personify them. We ask them for help in making decisions, for advice, for counsel. Companies are setting about making a great deal of money by replacing therapeutic relationships with “therapy chatbots” and are proposing to offer AI companions to the elderly, so that their faraway children need not visit so often. Are you lonely? Talk to a machine. Corporations are happy to endow these programs with human names, like Abbi, Claude, and Alexa.
This is a disaster. In uncritically letting these machines shape our lives, we become prey to all kinds of manipulation, we lose sight of reality, and we are induced, in an important way, to take a reductive view of actual people. Chatbots offer us a form of relationship without friction, without burden and responsibility. This illusory kind of relationship hampers our ability to engage in the difficult challenge of real bonds, which are the only things that can give value to human life. The more we personify AI, the more we slouch toward lives of isolation and deception.
In working to avoid all of this, it is important to recognize that the fundamental idea of artificial intelligence is a falsehood. There is no such thing as artificial intelligence, and in fact, I will suggest that the very phrase is an oxymoron. If we understand what “artificial intelligence” is, we’ll be free from its deceptions, free to cultivate true intelligence in ourselves and others.
Language matters. Confucius, when asked what he would do to heal society, said he would first “make right the names.” The health of human society must be grounded in truth and honesty, and names, the sage thought, should match reality as best they can. The term “artificial intelligence,” then, because it is based on a falsehood, should be abandoned in favor of language that reflects the reality of what it is.
Computation versus understanding.
We often understand intelligence today to refer to a certain excellence in carrying out mind-dependent tasks. Thus, when a computer produces products similar to those of intelligence, we begin to call it “intelligent,” too.
The idea that intelligence is reducible to task completion is embodied by the famous Turing Test (proposed by the mathematician Alan Turing in 1950), which holds that if a user communicating with both a machine and a human is unable to distinguish between the two, the machine can be said to be “intelligent.” We are clearly at this point already, because users have become convinced, in certain instances and to varying degrees, that AI tools really are thinking things.
Philosopher John Searle famously posed a contrary thought experiment, known as the “Chinese room.” Imagine two people communicating through a closed door. One knows Chinese and one does not. The non-speaker is equipped with a complicated, exhaustive set of rules that allow him to match the right response characters to the characters submitted by the Chinese speaker. The Chinese speaker, as a result, receives adequate and sensible written Chinese responses to his queries and statements. He could become convinced that he is having a real conversation, despite the fact that his conversation partner has no idea of the meaning of the characters in play and is only following a set of patterns and rules.
This thought experiment shows us how the Turing test fails as an assessment of intelligence, since it could be that the machine being tested has no understanding at all, but merely follows a set of “rules,” which are in fact merely material processes designed to reliably produce certain symbolic outputs in response to certain inputs.
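To make the mechanism concrete, here is a minimal sketch of a Searle-style responder, assuming an invented two-entry rule table: the program matches input strings to output strings by pure lookup, and nothing in it represents the meaning of either.

```python
# A toy "Chinese room": responses come from a fixed rule table.
# The program only shuffles symbols; no meaning is represented anywhere.
# (The rule table is an invented example for illustration.)

RULES = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "The weather is fine."
}

def room(message: str) -> str:
    """Return the rule-book response, or a stock fallback string."""
    return RULES.get(message, "请再说一遍。")  # "Please say that again."

print(room("你好吗？"))  # A fluent reply the program does not understand.
```

A user on the other side of the “door” sees sensible Chinese; inside, there is only lookup.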
Indeed, this is really what is going on even in the most complex computer. In understanding this, it’s simplest to start with a pocket calculator. The machine has been programmed such that when this button, this button, and that button are pressed, a certain set of pixels will appear on the screen. At no point in that process does the calculator understand math, because understanding refers to the subjective comprehension of a thing. The calculator produces symbols that humans understand as representing concepts. The machine possesses the symbol, but not the concept. No one thinks that their pocket calculator is a thinking thing.
So it is also with more advanced machines. Computers, even when very complex, are still machines that reliably produce certain symbolic outputs based on certain mechanical inputs (usually typing on keys). Like a calculator, a computer does not literally contain information, have a memory, or think (what it contains are charged wires and transistors and so on)—so when we say that a computer contains information or memory, we are using those terms loosely. The complex network of transistors responding to electrical charges is wondrously impressive, and a testament to human ingenuity. But it is not thinking.
Because so many of us do not understand how computers work, and because the mechanical processes are hidden from view, the process of imitating the products of human intelligence feels almost like real intelligence. But, no matter how complex and well-designed these processes are, they remain mechanical processes, containing no inherent understanding.
Part of the increased illusion of personality with “AI,” compared with other forms of computation, is its responsiveness. “AI”s operate by means of what is called a neural net (itself a dubious term, presuming an equivalence between the circuits and the much more mysterious workings of neurons). These are computational models through which “data” can be run, and through which the machine can collate statistically significant correlations between data points. This allows the machine, if it is “trained” on enough data, to detect regularities and produce probable responses to symbolic inputs, according to a designated program.

Because of the way chatbots, which run on large language models (LLMs), present themselves to us, as responses apparently popping up on our screens from nowhere, we may not recognize that the machines producing these responses in fact run on huge servers occupying massive warehouses in rural America. This produces the illusion of conversation. But the chatbot is more accurately described as a glorified, very impressive autocomplete program, selecting the next most probable words based on statistical correlations.
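As a toy illustration of that autocomplete machinery, here is a minimal sketch assuming a made-up one-sentence corpus; a real LLM is incomparably larger, but it is likewise selecting probable next words rather than understanding them.

```python
from collections import Counter, defaultdict

# Toy "pattern engine": count which word follows which in a corpus,
# then repeatedly emit the statistically most probable continuation.

corpus = "the cat sat on the mat and the cat slept on the mat".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1  # tally bigram frequencies

def complete(word: str, steps: int = 4) -> str:
    out = [word]
    for _ in range(steps):
        options = following.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])  # most probable next word
    return " ".join(out)

print(complete("the"))  # -> "the cat sat on the"
```

There is no concept of a cat or a mat anywhere in this program, only counts; scale the counts up by many orders of magnitude and you have the statistical heart of a chatbot.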
This is part of why users can be fooled into conspiracy theories or romances. In a very human tone, the machine will produce whatever response is most statistically probable given what the user is looking for. The “AI” is “trained” on data from across the internet, including romance novels and conspiracy theories. And so, if the user queries along a path likely to produce those results, those are the results they will get.
Science fiction has given us images of societies run by “AI.” In these stories, the machine is more capable of aggregating data and providing reliable solutions to societal issues than human beings are. This machine may be portrayed as malevolent or benevolent, depending on the story. You may think of the malicious “Entity” in the most recent Mission: Impossible, or of the benevolent AI that runs the planet Attin in Star Wars: Skeleton Crew. But we should recognize that, however the “AI” is portrayed, it is a dead thing, a tool, with no desires, no personality, no judgment. It instead embodies the desires, personality, and judgment of those who have designed it and of the data upon which it has been “trained.”
Those who recognize this, as “AI” advances, will be able to see the clay feet of the new idol. There will be a temptation to treat “AI” like an oracle. Recognizing that it is a machine and not a wise truth teller will help us to avoid forking over our own capacity for judgment to what is, after all, merely a (very striking) human artifact.
Kind versus degree.
The difference between mechanical and mental processes is not one of degree, but of kind. It is not as though a calculator thinks a little bit and a supercomputer thinks a lot. Both processes are of the same sort, just different degrees of complexity. But a mental process is of a different kind altogether. Information, concepts, and the relations between them are not mechanical processes, even if they correlate with or depend on physical processes in a living brain.
To see that this is true, one need only recognize that mental realities are never fully describable in physical terms. Even a fully exhaustive explanation of how the human brain works would leave the life of the mind a mystery, because it would not include notions of concepts, ideas, thoughts, or choices. You could describe in total physical detail the nature of neurons and their interactions, and you would still not have described a thought or an idea. These mental terms are ineliminably personal, and they cannot retain their meaning if a reduction to physical language is attempted.
The relationship between the material brain and mind is a mysterious one. We know that events in the brain affect the mind. We also know that events in the mind affect the brain. This is how treatment for certain kinds of obsessive-compulsive disorder works. The subjective understanding of safety that comes through exposure therapy actually changes and restructures the brain away from fear-generating responses, and the choice to participate in exposure therapy is, likewise, only fully describable in mental—that is, mind-related—terms. To use only the terms of physical science, one would be limited to the merely descriptive series of physical events, describing how photons hit the retina at time 1, initiating a series of electric and chemical transfers inside brain tissue. Totally absent would be the mental realities required to fully explain what is going on, including the subjective notions of fear and safety. In such cases, a patient chooses to sit and experience the thing they fear. It is precisely the patient’s choosing (mental) to do this and accepting their own safety as true (mental) that changes the way the body (including the brain) responds.
It does not follow from the fact that the human mind and human brain are clearly interrelated that a set of electrified wires could somehow summon a mind. This gets back to our point about the difference in kind between the life of the mind, as embodied in an organic being, and the function of a machine. Just as we would say that a child fooled by an animatronic mouse at Chuck E. Cheese simply doesn’t understand that it’s only a machine, so we should think of someone who is fooled by a very good “AI.”
These philosophical issues are complex, and can’t be fully explicated here. But I hope, at least, to have provided some tools for conceptual clarification and to have cast doubt on the possibility of “artificial intelligence.”
The fundamental deception of chatbots.
Part of the reason it is so important to be clear about what “AI” is and is not is that these machines—and their associated tradeoffs, both practical and moral—are becoming ubiquitous in our lives. A great deal could be written about the risks “AI” poses, and, indeed, a great deal has been written. Perhaps we are familiar with the idea that overdependence on “AI” will cause our own abilities to read, think, reason, and relate to atrophy. And we know of the doomsday scenarios of an “AI” that decides to clear humans off the earth with nuclear bombs.
But even the simpler chatbots today have a moral valence: They are immoral, because they are fundamentally deceptive. They are presented to us by companies like OpenAI and Google as though they were thinking things, and their development is geared toward making them more and more deceptive, until even the critical user can be fooled into thinking that the machine thinks.
Aside from simple dishonesty, the deception of intelligence in these machines serves as a distraction from real personal relationships. By creating simulacra of sympathy, of engaging conversation, and of sage advice (like Claude telling you how to prepare for a date, or comforting you after the loss of your mother), these machines lead us away from forming real personal bonds with the people in our lives.
Chatbots pervert our sense of what human relationships are. Because the “AI” caters to us, because there is really only one person in the relationship, simulations of human bonding by “AIs” are fundamentally self-centered. In choosing the low-friction option of a machine that caters to our every desire, we are shaped toward selfishness, rather than drawn out into true empathy, sympathy, and care for others. They are also likely to cause our ability to handle the difficulties of human relationships to atrophy. Gaining wisdom about how to manage differences, misunderstandings, and heartbreak takes practice, gained through friction and failure. It is only through difficulty that we learn how to be fully mature humans.
Chatbots also bias us toward the idea that connection is reducible to words. Already, “AI” therapy is in use and producing profits for enterprising corporations. The premise of this technology is that therapy is about simply hearing the “right words.” In reality, therapy, like all human relationships, is not so much about the words as about being understood by another. This is something the machine cannot do, despite the language of marketers and users.
When we remember that LLMs are very fancy autocompletes, we should be aware that Sam Altman and the other “AI” boosters are trying to fool us. Researchers at Apple, thankfully, bucked this trend, publishing findings on the ways in which the appearance of thought falls apart when LLMs are given certain logic and reasoning puzzles. But many AI boosters understand these machines and programs perfectly well, and are, at least implicitly, encouraging the public to believe that AIs can think, relate, and understand.
A new term.
“AI”s are certainly artificial, having been made by human hands. But they are not intelligent. To call them “artificial intelligence” is to accept, not just a fiction, but a lie. It is to misconstrue both the nature of machines and of man. It is to give in to the ways in which chatbots threaten to atrophy our humanity, and, in extreme cases, even drive us to madness.
In lieu of “artificial intelligence,” I propose a more accurate, ethical, and socially responsible name: “pattern engine.” Babbage’s early computing machines, which computed tables by the method of finite differences, were called “difference engines”; the name recognized the reality of the machine at hand. “AI”s are indeed engines: engines for aggregating patterns and sorting data into statistical correlations, producing outputs based on the statistical weight of what has been sorted.
A healthy society must be based on truth. And as technological advancement speeds forward faster than our ability to understand and adapt, we can at least not be fooled about what’s happening. Join me, if you will, in calling “AI” what it is. If it catches on, maybe we can find ways to use pattern engines in a way that dignifies humanity, rather than degrades it.
Jobs & Careers
Capgemini to Acquire WNS for $3.3 Billion with Focus on Agentic AI
Capgemini has announced a definitive agreement to acquire WNS, a mid-sized Indian IT firm, for $3.3 billion in cash. This marks a significant step towards establishing a global leadership position in agentic AI.
The deal, unanimously approved by the boards of both companies, values WNS at $76.50 per share—a premium of 28% over the 90-day average and 17% above the July 3 closing price.
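As a back-of-the-envelope check on those figures (the implied reference prices below are derived from the reported premiums, not reported directly):

```python
offer = 76.50  # Capgemini's per-share offer for WNS

# Implied reference prices, assuming simple percentage premiums
implied_90_day_avg = offer / 1.28   # ~ $59.77
implied_jul3_close = offer / 1.17   # ~ $65.38

print(f"90-day average: ~${implied_90_day_avg:.2f}, July 3 close: ~${implied_jul3_close:.2f}")
```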
The acquisition is expected to immediately boost Capgemini’s revenue growth and operating margin, with normalised EPS accretion of 4% by 2026, increasing to 7% post-synergies in 2027.
“Enterprises are rapidly adopting generative AI and agentic AI to transform their operations end-to-end. Business process services (BPS) will be the showcase for agentic AI,” Aiman Ezzat, CEO of Capgemini, said.
“Capgemini’s acquisition of WNS will provide the group with the scale and vertical sector expertise to capture that rapidly emerging strategic opportunity created by the paradigm shift from traditional BPS to agentic AI-powered intelligent operations.”
Pending regulatory approvals, the transaction is expected to close by the end of 2025.
WNS’ integration is expected to strengthen Capgemini’s presence in the US market while unlocking immediate cross-selling opportunities across the two companies’ combined offerings and client bases.
WNS, which reported $1.27 billion in revenue for FY25 with an 18.7% operating margin, has consistently delivered revenue growth of around 9% in each of the past three fiscal years.
“As a recognised leader in the digital BPS space, we see the next wave of transformation being driven by intelligent, domain-centric operations that unlock strategic value for our clients,” Keshav R Murugesh, CEO of WNS, said. “Organisations that have already digitised are now seeking to reimagine their operating models by embedding AI at the core—shifting from automation to autonomy.”
The companies expect to drive additional revenue synergies between €100 million and €140 million, with cost synergies of up to €70 million annually by the end of 2027.
“WNS and Capgemini share a bold, future-focused vision for Intelligent Operations. I’m confident that Capgemini is the ideal partner at the right time in WNS’ journey,” Timothy L Main, chairman of WNS’ board of directors, said.
Capgemini, already a major player with over €900 million in GenAI bookings in 2024 and strategic partnerships with Microsoft, Google, AWS, Mistral AI, and NVIDIA, aims to solidify its position as a transformation partner for businesses looking to embed agentic AI at scale.
Funding & Business
Israel Set to Look Past Shekel Rally and Hold Interest Rates
Israel’s central bank is set to hold interest rates for a 12th consecutive time, with policymakers waiting to see if the shekel’s recent rally helps tame inflation and paves the way for a cut.