AI Research
Thinking Machines named OpenAI’s first APAC partner

Thinking Machines Data Science is joining forces with OpenAI to help more businesses across Asia Pacific turn artificial intelligence into measurable results. The collaboration makes Thinking Machines the first official Services Partner for OpenAI in the region.
The partnership comes as AI adoption in APAC continues to rise. An IBM study found that 61% of enterprises already use AI, yet many struggle to move beyond pilot projects and deliver real business impact. Thinking Machines and OpenAI aim to change that by offering executive training on ChatGPT Enterprise, support for building custom AI applications, and guidance on embedding AI into everyday operations.
Stephanie Sy, Founder and CEO of Thinking Machines, framed the partnership around capability building: “We’re not just bringing in new technology but we’re helping organisations build the skills, strategies, and support systems they need to take advantage of AI. For us, it’s about reinventing the future of work through human-AI collaboration and making AI truly work for people across the Asia Pacific region.”
Turning AI pilots into results with Thinking Machines
In an interview with AI News, Sy explained that one of the biggest hurdles for enterprises is how they frame AI adoption. Too often, organisations see it as a technology acquisition rather than a business transformation. That approach leads to pilots that stall or fail to scale.
“The main challenge is that many organisations approach AI as a technology acquisition rather than a business transformation,” she said. “This leads to pilots that never scale because three fundamentals are missing: clear leadership alignment on the value to create, redesign of workflows to embed AI into how work gets done, and investment in workforce skills to ensure adoption. Get those three right—vision, process, people—and pilots scale into impact.”
Leadership at the centre
Many executives still treat AI as a technical project rather than a strategic priority. Sy believes that boards and C-suites need to set the tone. Their role is to decide whether AI is a growth driver or just a managed risk.
“Boards and C-suites set the tone: Is AI a strategic growth driver or a managed risk? Their role is to name a few priority outcomes, define risk appetite, and assign clear ownership,” she said. Thinking Machines often begins with executive sessions where leaders can explore where tools like ChatGPT add value, how to govern them, and when to scale. “That top-down clarity is what turns AI from an experiment into an enterprise capability.”
Human-AI collaboration in practice
Sy often talks about “reinventing the future of work through human-AI collaboration.” She explained what this looks like in practice: a “human-in-command” approach where people focus on judgment, decision-making, and exceptions, while AI handles routine steps like retrieval, drafting, or summarising.
“Human-in-command means redesigning work so people focus on judgment and exceptions, while AI takes on retrieval, drafting, and routine steps, with transparency through audit trails and source links,” she said. The results are measured in time saved and quality improvements.
In workshops run by Thinking Machines, professionals using ChatGPT often free up one to two hours per day. Research supports these outcomes—Sy pointed to an MIT study showing a 14% productivity boost for contact centre agents, with the biggest gains seen among less-experienced staff. “That’s clear evidence AI can elevate human talent rather than displace it,” she added.
Agentic AI with Thinking Machines’ guardrails
Another area of focus for Thinking Machines is agentic AI, which goes beyond single queries to handle multi-step processes. Instead of just answering a question, agentic systems can manage research, fill forms, and make API calls, coordinating entire workflows with a human still in charge.
“Agentic systems can take work from ‘ask-and-answer’ to multi-step execution: coordinating research, browsing, form-filling, and API calls so teams ship faster with a human in command,” Sy said. The promise is faster execution and productivity, but the risks are real. “The principles of human-in-command and auditability remain critical to avoid the lack of proper guardrails. Our approach is to pair enterprise controls and auditability with agent capabilities to ensure actions are traceable, reversible, and policy-aligned before we scale.”
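For readers who want a concrete picture of what “traceable, reversible, and policy-aligned” can mean in code, here is a minimal sketch of an approval gate around agent actions. It is an illustration under assumptions, not Thinking Machines’ implementation: the `SENSITIVE_TOOLS` set, the `AgentAction` class, and the console approver are all hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch of a human-in-command guardrail: routine tools run
# automatically, sensitive tools pause for human approval, and every action
# lands in an audit trail. None of this is Thinking Machines' actual code.

SENSITIVE_TOOLS = {"submit_form", "call_external_api"}  # assumed policy list

@dataclass
class AgentAction:
    tool: str
    payload: dict

audit_trail: list[dict] = []

def run_with_guardrails(action: AgentAction, approver) -> str:
    """Execute an agent action, pausing for a human decision on sensitive tools."""
    if action.tool in SENSITIVE_TOOLS and not approver(action):
        outcome = "blocked: human approval denied"
    else:
        outcome = f"executed {action.tool}"  # real tool dispatch would go here
    audit_trail.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "tool": action.tool,
        "payload": action.payload,
        "outcome": outcome,
    })
    return outcome

# A console prompt stands in for a real approval UI.
action = AgentAction(tool="submit_form", payload={"form": "expense_claim"})
print(run_with_guardrails(action, approver=lambda a: input(f"Approve {a.tool}? [y/N] ") == "y"))
```

The point of the pattern is that the approval check and the audit write share one choke point, so no agent action can skip either.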
Governance that builds trust
While adoption is accelerating, governance often lags behind. Sy cautioned that governance fails when it’s treated as paperwork instead of part of daily work.
“We keep humans in command and make governance visible in daily work: use approved data sources, enforce role-based access, maintain audit trails, and require human decision points for sensitive actions,” she explained. Thinking Machines also applies what it calls “control + reliability”: restricting retrieval to trusted content and returning answers with citations. Workflows are then adapted to local rules in sectors such as finance, government, and healthcare.
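As a rough sketch of how those controls might fit together, the snippet below gates retrieval by role against a list of approved sources and logs every query. The roles, source names, and schema are invented for illustration; the article does not describe Thinking Machines’ internal tooling.

```python
# Hypothetical sketch: role-based access to approved data sources, with an
# audit entry per query. Roles and source names are invented for illustration.

APPROVED_SOURCES = {
    "analyst": {"policy_manual", "public_filings"},
    "hr": {"policy_manual", "employee_handbook"},
}

audit_log: list[dict] = []

def retrieve(role: str, source: str, query: str) -> str:
    allowed = APPROVED_SOURCES.get(role, set())
    status = "ok" if source in allowed else "denied"
    # Log before enforcing, so denied attempts are audited too.
    audit_log.append({"role": role, "source": source, "query": query, "status": status})
    if status == "denied":
        raise PermissionError(f"role {role!r} may not query {source!r}")
    return f"results from {source} for: {query}"  # real retrieval would go here
```

Writing the audit entry before the permission check fires means denied attempts leave a trace as well, which is what makes governance visible in daily work rather than merely documented.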
For Sy, success isn’t measured in the volume of policies but in auditability and exception rates. “Good governance accelerates adoption because teams trust what they ship,” she said.
Local context, regional scale
Asia Pacific’s cultural and linguistic diversity poses unique challenges for scaling AI. A one-size-fits-all model doesn’t work. Sy emphasised that the right playbook is to build locally first and then scale deliberately.
“Global templates fail when they ignore how local teams work. The playbook is build locally, scale deliberately: fit the AI to local language, forms, policies, and escalation paths; then standardise the parts that travel such as your governance pattern, data connectors, and impact metrics,” she said.
That’s the approach Thinking Machines has taken in Singapore, the Philippines, and Thailand—prove value with local teams first, then roll out region by region. The aim is not a uniform chatbot but a reliable pattern that respects local context while maintaining scalability.
Skills over tools
When asked what skills will matter most in an AI-enabled workplace, Sy pointed out that scale comes from skills, not just tools. She broke this down into three categories:
- Executive literacy: leaders’ ability to set outcomes and guardrails, and to know when and where to scale AI.
- Workflow design: the redesign of human-AI handoffs, clarifying who drafts, who approves, and how exceptions escalate.
- Hands-on skills: prompting, evaluation, and retrieval from trusted sources so answers are verifiable, not just plausible.
“When leaders and teams share that foundation, adoption moves from experimenting to repeatable, production-level results,” she said. In Thinking Machines’ programmes, many professionals report saving one to two hours per day after just a one-day workshop. To date, more than 10,000 people across roles have been trained, and Sy noted the pattern is consistent: “skills + governance unlock scale.”
Industry transformation ahead
Looking to the next five years, Sy sees AI shifting from drafting to full execution in critical business functions. She expects major gains in software development, marketing, service operations, and supply chain management.
“For the next wave, we see three concrete patterns: policy-aware assistants in finance, supply chain copilots in manufacturing, and personalised yet compliant CX in retail—each built with human checkpoints and verifiable sources so leaders can scale with confidence,” she said.
A practical example is a system Thinking Machines built with the Bank of the Philippine Islands. Called BEAi, it’s a retrieval-augmented generation (RAG) system that supports English, Filipino, and Taglish. It returns answers linked to sources with page numbers and understands policy supersession, turning complex policy documents into everyday guidance for staff. “That’s what ‘AI-native’ looks like in practice,” Sy said.
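BEAi’s internals aren’t public, but the behaviours described, answers tied to sources with page numbers and awareness of policy supersession, can be sketched in a toy retrieval-augmented generation loop like the one below. Every name, the word-overlap scorer, and the supersession field are assumptions for illustration only.

```python
from dataclasses import dataclass

# Toy RAG skeleton: drop superseded policies, rank passages by a crude
# word-overlap score (a real system would use embeddings and an LLM),
# and return an answer with page-numbered citations. Purely illustrative.

@dataclass
class Passage:
    doc: str
    page: int
    text: str
    superseded_by: str | None = None  # assumed supersession metadata

def answer(query: str, corpus: list[Passage], top_k: int = 2) -> str:
    current = [p for p in corpus if p.superseded_by is None]
    terms = set(query.lower().split())
    ranked = sorted(current,
                    key=lambda p: len(terms & set(p.text.lower().split())),
                    reverse=True)
    hits = ranked[:top_k]
    citations = "; ".join(f"{p.doc}, p. {p.page}" for p in hits)
    context = " ".join(p.text for p in hits)
    return f"{context}\n\nSources: {citations}"  # an LLM would draft from context

corpus = [
    Passage("Credit Policy v2", 14, "Approvals above 1M require committee sign-off."),
    Passage("Credit Policy v1", 12, "Approvals above 2M require committee sign-off.",
            superseded_by="Credit Policy v2"),
]
print(answer("Which approvals require committee sign-off?", corpus))
```

Filtering superseded documents before retrieval, rather than hoping the model notices version numbers, is the design choice that keeps outdated policy out of answers.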
Thinking Machines expands AI across APAC
The partnership with OpenAI will start with programmes in Singapore, the Philippines, and Thailand through Thinking Machines’ regional offices before expanding further across APAC. Future plans include tailoring services to sectors such as finance, retail, and manufacturing, where AI can address specific challenges and open new opportunities.
For Sy, the goal is clear: “AI adoption isn’t just about experimenting with new tools. It’s about building the vision, processes, and skills that let organisations move from pilots to impact. When leaders, teams, and technology come together, that’s when AI delivers lasting value.”
See also: X and xAI sue Apple and OpenAI over AI monopoly claims

AI Research
Spotlab.ai hiring AI research scientist for multimodal diagnostics and global health

In a LinkedIn post, Miguel Luengo-Oroz, co-founder and CEO of Spotlab.ai, confirmed the company is hiring an Artificial Intelligence Research Scientist. The role is aimed at early career researchers, postdoctoral candidates, and recent PhD graduates in AI.
Luengo-Oroz writes: “Are you a young independent researcher, postdoc, just finished your PhD (or on the way there) in AI and wondering what’s next? If you’re curious, ready to tackle tough scientific and technical challenges, and want to build AI for something that matters, this might be for you.”
Spotlab.ai targets diagnostics role with new hire
The position will focus on building and deploying multimodal AI solutions for diagnostics and biopharma research. Applications include blood cancers and neglected tropical diseases.
The scientist will be expected to organize and prepare biomedical datasets, train and test AI models, and deploy algorithms in real-world conditions. The job description highlights interaction with medical specialists and product managers, as well as drafting technical documentation. Scientific publications are a priority, with the candidate expected to contribute across the research cycle from experiment planning to peer review.
Spotlab.ai is looking for candidates with experience in areas such as biomedical image processing, computer vision, NLP, video processing, and large language models. Proficiency in Python and deep learning frameworks including TensorFlow, Keras, and PyTorch is required, with GPU programming experience considered an advantage.
Company positions itself in global health AI
Spotlab.ai develops multimodal AI for diagnostics and biopharma research, with projects addressing gaps in hematology, infectious diseases, and neglected tropical diseases. The Madrid-based startup team combines developers, engineers, doctors, and business managers, with an emphasis on gender parity and collaboration across disciplines.
CEO highlights global mission
Alongside the job listing, Luengo-Oroz underscored the company’s broader mission. A former Chief Data Scientist at the United Nations, he has worked on technology strategies in areas ranging from food security to epidemics and conflict prevention. He is also the inventor of MalariaSpot.org, a collective intelligence videogame for malaria diagnosis.
Luengo-Oroz writes: “Take the driver’s seat of our train (not just a minion) at key stages of the journey, designing AI systems and doing science at Champions League level from Madrid.”
AI Research
Rice University creative writing course introduces artificial intelligence
Rice is bringing generative artificial intelligence into the creative writing world with this fall’s new course, “ENGL 306: AI Fictions.” Ian Schimmel, an associate teaching professor in the English and creative writing department, said he teaches the course to help students think critically about technology and consider the ways that AI models could be used in the creative processes of fiction writing.
The course is structured for any level of writer and also includes space to both incorporate and resist the influence of AI, according to its description.
“In this class, we never sit down with ChatGPT and tell it to write us a story and that’s that,” Schimmel wrote in an email to the Thresher. “We don’t use it to speed up the artistic process, either. Instead, we think about how to incorporate it in ways that might expand our thinking.”
Schimmel said he was stunned by the capabilities of ChatGPT when it was initially released in 2022, wondering if it truly possessed the ability to write. He said he found that the topic generated more questions than answers.
The next logical step, for Schimmel, was to create a course centered on exploring the complexities of AI and fiction writing, with assigned readings ranging from New York Times opinion pieces critical of its usage to an AI-generated poetry collection.
Schimmel said both students and faculty share concerns about how AI can help or harm academic progress and potentially cripple human creativity.
“Classes that engage students with AI might be some of the best ways to learn about what these systems can and cannot do,” Schimmel wrote. “There are so many things that AI is terrible at and incapable of. Seeing that firsthand is empowering. Whenever it hallucinates, glitches or makes you frustrated, you suddenly remember: ‘Oh right — this is a machine. This is nothing like me.’”
“Fear is intrinsic to anything that shakes industry like AI is doing,” Robert Gray, a Brown College senior, wrote in an email to the Thresher. “I am taking this class so that I can immerse myself in that fear and learn how to navigate these new industrial landscapes.”
The course approaches AI from a fluid perspective that evolves as the class reads and writes more with the technology, Schimmel said. The class’s answers to the complex ethical questions surrounding AI usage evolve along with it.
“At its core, the technology is fundamentally unethical,” Schimmel wrote. “It was developed and enhanced, without permission, on copyrighted text and personal data and without regard for the environment. So in that failed historical context, the question becomes: what do we do now? Paradoxically, the best way for us to formulate and evidence arguments against this technology might be to get to know it on a deep and personal level.”
Generative AI is often criticized on ethical grounds, such as the energy and water its data centers consume, or the training of models on datasets of existing copyrighted works.
Amazon and Google-backed Anthropic recently settled a class-action lawsuit with a group of U.S. authors who accused the company of using millions of pirated books to train its Claude chatbot to respond to human prompts.
With the assistance of AI, students will be able to attempt large-scale projects that typically would not be possible within a single semester, according to the course overview. AI will accelerate the drafting of a book outline, and students can “collaborate” with AI to write the opening chapters of a novel for NaNoWriMo, a worldwide writing event held every November in which participants aimed to produce a 50,000-word first draft of a novel.
NaNoWriMo, short for National Novel Writing Month, announced its closing after more than 20 years in spring 2025. It received widespread press coverage for a statement released in 2024 that said condemnation of AI in writing “has classist and ableist undertones.” Many authors spoke out against the perceived endorsement of using generative AI for writing and the implication that disabled writers would require AI to produce work.
Each weekly class involves experimentation through dialogues and writing sessions with ChatGPT, with Schimmel and his students acknowledging how much remains unknown and unexplored about AI, especially in the visual and literary arts. One Friday session in the Wiess College classroom covered aspects of AI ranging from creative copyright to excessive water usage to its accuracy as an editor.
“We’re always better off when we pay attention to our attention. If there’s a topic (or tech) that creates worry, or upset, or raises difficult questions, then that’s a subject that we should pursue,” Schimmel wrote. “It’s in those undefined, sometimes uncomfortable places where we humans do our best, most important learning.”