Tools & Platforms

In the Face of AI, Joe Lai Is Optimistic About the Future of Creativity

Joe Lai, chief technology officer of Creativ Company, initially joined as a co-op student nearly a decade ago. The ease with which he picked up new technology stacks was instrumental in getting the company’s first generation of models off the ground in 2016.

Since then, Joe has worked on building models in Spark, constructing data pipelines with Kafka, and enabling API layers using the next-generation OpenWhisk framework, among many other technologies.

Recently, Joe has been building Creativ Insights’ HTML5/Angular responsive interface. This interface sits on top of an OpenWhisk-based API layer, which can dynamically scale to handle traffic of over 100 transactions per second (TPS). Joe is fluent in Scala, Java, Python, and HTML5/Angular/JavaScript, and builds learning modules running on the TensorFlow and Spark frameworks, using hybrid backends such as graph, document, key/value, and relational stores.

A graduate of the University of Waterloo with a degree in computer science and a minor in statistics, Joe lives in Toronto, Canada.

Joe sits down with LBB to discuss integrating AI into Creativ’s marketing-technology stack, using large language models (LLMs) to cluster data, and the challenges of collaborating with AI…

LBB> What’s the most impactful way that AI is helping you in your current role?

Joe> As the CTO, my entire job revolves around integrating AI into our marketing-technology stack. We have built large language models (LLMs) trained on various datasets, ranging from social media comments to internal customer data, which allow us to identify and deliver custom, actionable insights for our clients.

These LLMs can identify changes in consumer sentiment and brand perception, generate campaign reports and competitor analysis, and pinpoint topics of interest at a speed and scale that was previously impossible without AI.
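The sentiment-shift detection described above can be sketched as a rolling-window comparison. This is a minimal illustration, not Creativ’s actual pipeline; in practice the per-item scores would come from an upstream LLM sentiment model:

```python
from statistics import mean

def detect_shift(scores, window=3, threshold=0.3):
    """Flag indices where the rolling mean of sentiment scores moves
    by more than `threshold` relative to the preceding window."""
    flags = []
    for i in range(2 * window, len(scores) + 1):
        prev = mean(scores[i - 2 * window:i - window])
        curr = mean(scores[i - window:i])
        if abs(curr - prev) > threshold:
            flags.append(i - 1)  # index of the last score in the window
    return flags

# Daily sentiment scores in [-1, 1]; a drop begins on day 5 (index 5).
daily = [0.6, 0.5, 0.6, 0.55, 0.6, -0.2, -0.4, -0.5]
print(detect_shift(daily))
```

The window size and threshold would be tuned per channel; a real system would also separate a genuine shift from ordinary day-to-day noise.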

This not only helps our clients make more informed decisions, but also allows us to focus on providing high-value strategies where our experience and creativity shine.

LBB> We hear a lot about AI driving efficiencies and saving time. But are there any ways that you see the technology making qualitative improvements to your work, too?

Joe> Absolutely. Our use of AI and LLMs raises the overall quality of our insights by uncovering patterns that humans may overlook.

There’s a lot of data out there. Using LLMs to cluster data across multiple channels can help us identify emerging topics in conversations happening all over the world – ones that might not be obvious through traditional methods like keyword tracking. This richer level of understanding means that our reports and insights are not only faster, but also more precise, context-driven, and strategically valuable.
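The cross-channel clustering Joe describes can be sketched end to end. In this toy version, bag-of-words count vectors and a greedy cosine-similarity grouping stand in for the LLM embeddings and the clustering algorithm a production system would use:

```python
import numpy as np

# Toy stand-in for LLM embeddings: bag-of-words count vectors.
# A real pipeline would embed each post with an LLM instead.
posts = [
    "battery life on this phone is amazing",
    "the new phone battery lasts two days",
    "shipping took weeks, very slow shipping",
    "delivery was delayed again, terrible shipping",
]

vocab = sorted({word for post in posts for word in post.split()})
vecs = np.array([[post.split().count(w) for w in vocab] for post in posts], float)
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)  # unit-normalise rows

# Greedy grouping: a post joins the first cluster whose seed post it
# resembles (cosine similarity above a threshold), else it starts a
# new cluster.
clusters = []
for i, vec in enumerate(vecs):
    for cluster in clusters:
        if float(vec @ vecs[cluster[0]]) > 0.25:
            cluster.append(i)
            break
    else:
        clusters.append([i])

print(clusters)  # battery/phone posts group apart from shipping posts
```

The same shape scales up: swap in embedding vectors from a model and a proper algorithm such as k-means or HDBSCAN, and emerging topics surface as new clusters forming over time.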

LBB> What are the biggest challenges in collaborating with AI as a creative professional, and how have you overcome them?

Joe> One of our biggest challenges is ensuring that our AI outputs are not only factually correct, but also contextually and strategically correct. This is where our years of experience in the tech and creative space kick in.

We use this experience to curate our LLMs to fit the use case and context of the client, and we rely on human review to validate that our data and results are high quality. This process ensures that our AI models understand the specific nuances of the client, and that our resulting insights align with the client’s views and objectives.

LBB> How do you balance the use of AI with your own creative instincts and intuition?

Joe> I view AI as a powerful tool rather than a replacement for creative instincts. The AI models can surface patterns and ideas that I may not have known or considered, and I can then apply my own knowledge, creativity, and experience to either accept, refine, or reject these suggestions.

This allows me to utilise the powerful capabilities of AI without losing touch with my creative intuition.

LBB> And how do you ensure that the work produced with AI maintains a sense of authenticity or human touch?

Joe> The key is grounding AI in real-life experiences and perspectives. We tailor our LLMs to fit our clients’ individual needs, which means that our models understand the specific nuances behind the industry and company.

For example, gaming communities typically have their own slang and phrases, which can be game-specific or industry-wide terms. We ensure that we capture these contextual differences in our models so that our output properly reflects the real world. We also make sure that a human reviews these outputs to validate that the results are contextually relevant and credible.
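One lightweight way to capture such contextual differences is to normalise community slang to canonical terms before analysis, so downstream models see consistent vocabulary. The glossary below is entirely hypothetical, purely to illustrate the idea:

```python
# Hypothetical slang glossary for a gaming community; a real one would
# be curated per client and kept up to date as the community evolves.
SLANG = {
    "gg": "good game",
    "nerf": "weaken",
    "buff": "strengthen",
    "op": "overpowered",
}

def normalize(text: str) -> str:
    """Lowercase the text and expand known slang tokens."""
    return " ".join(SLANG.get(token, token) for token in text.lower().split())

print(normalize("That nerf made the class OP"))
```

In practice this pre-processing step would sit alongside, not replace, fine-tuning or prompting the model with domain context.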

LBB> Do you think there are any misconceptions or misunderstandings in the way we currently talk about AI in the industry?

Joe> Yes. I think a common misconception is that AI is an autonomous decision maker. At their core, LLMs are language models. They can consume and process information like no human can, but their outputs are ultimately statistical, and they do not have the same understanding that humans do.

In its current state, AI still requires high-quality inputs and human validation in order to produce the right results. This is why I treat AI as a very powerful tool in my arsenal that can influence my decision making, but does not make decisions for me.

LBB> What ethical considerations come to mind when using AI to generate or assist with creative content?

Joe> Ethical use of AI is core to the beliefs at Creativ. We are strict with data protection and data privacy, and we only use data that is either publicly available or is provided to us.

When it comes to training our models, we ensure that no sensitive data is exposed or misused. We are also transparent with how we use AI in our work, and we are cognisant of potential bias in our models.

LBB> Have you seen attitudes towards AI change in recent times? If so, how?

Joe> For sure. When ChatGPT was first released and gained global popularity, it felt like people were either dismissive of its accuracy and sceptical of its usefulness, or they were fearing for their job security, feeling that AI would overtake human thinking and creativity.

I think nowadays, as these models have continued to mature, people have become more accepting of AI. They generally view it as a legitimate tool that can be very useful in certain situations, but one that is still far from replacing people’s thinking and creativity.

LBB> Broadly speaking, does the industry’s current conversation around AI leave you feeling generally positive, or generally concerned, about creativity’s future?

Joe> I feel generally optimistic; there is an enormous potential for AI to augment creativity across all industries. However, I think it has less to do with the tech itself, and more with how it is being used. If we treat AI as a shortcut to creativity, then we risk diluting real creative work with ‘AI slop’. But if we use AI to enhance human creativity, I believe we can unlock innovation that we never thought was possible.

LBB> Do you think AI has the potential to create entirely new forms of art or media that weren’t possible before? If so, how?

Joe> I believe so. AI has the ability to digest and consume information that we’ve never seen before. This includes all forms of media, whether it is text, audio, visual, you name it. I think it won’t be long before someone nudges AI in a direction where it can produce art and media in a format that we’re just beginning to imagine.

LBB> Thinking about your own role/discipline, what kind of impact do you think AI will have in the medium-term future? To what extent will it change the way people in your role work?

Joe> In the medium term, AI will shift my role towards creating AI-driven pipelines that increase both the speed and efficiency of our systems. Instead of spending countless hours cleaning and digesting millions of data points, I can spend time generating and curating more insightful reports for my clients.

AI handles the heavy analytical lifting while I can focus more on client engagement, technical innovation, and integrating more complex and efficient pipelines for my company.





Tools & Platforms

Opinion | Governing AI – The Kathmandu Post


Tools & Platforms

AI Week in Review 25.09.06

Figure 1. It’s another robot party, with Google Androidify inviting people to create their own Android bot characters.

Switzerland’s EPFL launched the open multilingual Apertus LLM, which comes in two sizes, 8B and 70B, and targets massively multilingual coverage, with over 1,800 languages supported. The models were trained on 15 trillion tokens across more than 1,000 languages; 40% of the data is non-English. Apertus is fully open, sharing weights on HuggingFace as well as data, training script details and a Technical Report.

Nous Research released Hermes 4 14B, a hybrid-reasoning LLM positioned as the compact sibling to its 70B and 405B variants. The 14B model can run locally (available on HuggingFace) while supporting dual “think/non-think” modes and function calling. Nous Research published a technical report sharing training and benchmark details.

Nous also announced the Husky Hold’em Bench, a poker-themed benchmark designed to test long-horizon reasoning and strategy under uncertainty. The suite provides a consistent evaluation scaffold for agentic AI models.

Tencent announced the open-source release of Hunyuan-MT-7B, a multilingual translation model that supports 33 languages, boasts lightning-fast high-performance inference for translation, and can be deployed on edge devices. They also released an ensemble “Chimera” variant:

But that’s not all. We’re also open-sourcing Hunyuan-MT-Chimera-7B, the industry’s first open-source integrated translation model. It intelligently refines translations from multiple models to deliver more accurate and professional results for specialized use cases.

Both Hunyuan-MT-7B and Hunyuan-MT-Chimera-7B are available on HuggingFace.

Alibaba’s Tongyi Lab released WebWatcher, an open research agent that can browse and complete web tasks end-to-end. The HuggingFace demo and WebWatcher code and technical paper are provided for reproducibility. It is positioned as a reference for building web-capable agents.

Figure 2. WebWatcher benchmark metrics show its strong performance on visual understanding benchmarks.

Google introduced EmbeddingGemma, a small 308M-parameter multilingual embedding model for on-device RAG, search and similarity workloads. EmbeddingGemma documentation emphasizes small memory footprint with strong MMTEB performance relative to its size.
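The similarity workloads a small embedding model like this targets reduce to nearest-neighbour search over vectors. The sketch below uses stub 3-dimensional vectors so it runs offline; in practice each text would be embedded by the model, yielding vectors with hundreds of dimensions:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stub document embeddings (hypothetical values, not model output).
docs = {
    "refund policy": np.array([0.9, 0.1, 0.0]),
    "shipping times": np.array([0.1, 0.9, 0.2]),
    "gift cards": np.array([0.0, 0.2, 0.9]),
}
# Pretend this is the embedding of "how do I get my money back".
query = np.array([0.8, 0.2, 0.1])

best = max(docs, key=lambda name: cosine(query, docs[name]))
print(best)  # the document nearest the query in embedding space
```

On-device RAG follows the same pattern: embed the corpus once, embed each query at runtime, and retrieve the highest-similarity passages as context.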

Google launched Androidify, a playful generative tool for custom Android avatars. Powered by Gemini 2.5 Flash and Imagen, Androidify lets users generate Android-style characters, remix them into sticker packs, and share to social apps and Messages. Google positions it as an “AI sandbox” for creative expression.

GitHub shipped Actions for AI Labeler and AI Content Moderator, powered by GitHub Models. The Actions let repos auto-apply labels or flag content in workflows, reducing toil in triage and moderation. GitHub positions them as drop-in CI/CD components with configurable prompts and thresholds.

Mistral added enterprise connectors and a Memories capability to Le Chat to integrate data sources and persist knowledge for better context. The update expands Le Chat beyond chat to more agentic workflows. Mistral’s post outlines new features and supported integrations.

OpenAI launched Projects in ChatGPT, a feature similar to Claude Workspaces that adds structured workspaces, larger file uploads, and more granular memory controls. They also added a “Branching” feature to ChatGPT to fork a conversation into alternative directions without losing the original thread.

Apple’s FastVLM family (0.5B/1.5B/7B) is now available on Hugging Face, providing faster vision encoding via the FastViTHD encoder. The FastVLM model was presented in the FastVLM paper published last December.

Luxury membership platform Vivrelle launched Ella, an AI personal styling tool, in partnership with fashion retailers Revolve and FWRD. Ella offers personalized outfit and styling recommendations across all platforms, covering rental, retail, and pre-owned options.

Google’s latest video-generation model, Veo 3, is coming to Google Photos. This new model allows U.S. users to turn still images into higher-quality four-second video clips (without audio) via the mobile app’s Create tab.

OpenAI published a research explainer on why language models hallucinate. The team argues that current training and evaluation incentives reward “confident guessing” over calibrated uncertainty; they advocate benchmarks that penalize wrong answers more than abstentions. The paper ties hallucinations to the statistical nature of next-token prediction and calls for uncertainty-aware evaluation.
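The incentive argument is easy to make concrete with a toy expected-value calculation (an illustration, not OpenAI’s exact formulation): under accuracy-only grading, guessing always beats abstaining, while a penalty for wrong answers flips that whenever the model’s confidence is low:

```python
def expected_score(p_correct: float, wrong_penalty: float) -> float:
    """Expected score of answering: +1 for a correct answer,
    -wrong_penalty for a wrong one. Abstaining always scores 0."""
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

p = 0.3  # the model is only 30% confident in its answer

# Accuracy-only grading (no penalty): guessing has positive expected
# score, so a model trained on this benchmark learns to guess.
print(round(expected_score(p, 0.0), 2))  # 0.3 > 0, so guess

# Penalized grading: a wrong answer costs one point, so the same guess
# now has negative expected score and abstaining is the better policy.
print(round(expected_score(p, 1.0), 2))  # -0.4 < 0, so abstain
```

With penalty 1, the break-even confidence is 0.5; raising the penalty raises the confidence a model needs before answering beats saying “I don’t know.”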

Google DeepMind revealed how Deep Loop Shaping improves gravitational wave observations, helping scientists understand the universe better. In experiments at LIGO Livingston, the Deep Loop Shaping reinforcement-learning method reduced noise in a critical feedback loop by 30–100 times, enabling detection of many more gravitational-wave events. DeepMind says the technique could be applied to vibration and noise suppression in aerospace and robotics.

Anthropic raised $13B at a $183B post-money valuation, citing $5B run-rate revenue and 300k business customers. The company says Claude Code passed $500M run-rate within months of full launch; funds will support enterprise demand, safety research, and global expansion.

Anthropic agreed to a landmark $1.5B settlement with authors over training data and opt-out rights. The deal, awaiting court approval, creates a claims process, an opt-out mechanism for authors, and a remuneration program for future training use. Over 500,000 writers are eligible for payouts. This record payout is for illegally pirating books to train its Claude AI model, not for the act of training on copyrighted material itself. It’s the most sweeping U.S. copyright settlement related to AI training to date.

Microsoft announced an agreement with the Federal Government to scale AI adoption, including Microsoft Copilot and Azure OpenAI. Microsoft AI services have achieved security and compliance authorizations, including FedRAMP High and DoD provisional authorization for Microsoft 365 Copilot, to support rapid AI rollout across Federal agencies.

Broadcom’s quarterly results show accelerating AI chip revenue and a blockbuster new customer order that could be OpenAI. The company said a new customer placed over $10 billion in AI infrastructure orders, fueling speculation about OpenAI ties. OpenAI is reportedly partnering with Broadcom to launch its first in-house AI chip in 2026, as it aims to diversify compute away from off-the-shelf GPUs.

OpenAI has had a busy week of acquisitions, deals, reorganization, and launches:

CoreWeave agreed to acquire OpenPipe, a startup known for reinforcement-learning tooling for agents. The companies announced a definitive agreement. This helps build out CoreWeave’s technology stack to support AI use cases.

New York Fed survey finds AI adoption up sharply, but job impacts limited, for now. Roughly 40% of services firms and 26% of manufacturers in the New York region report using AI, with retraining more common than layoffs; firms expect both hiring and some future reductions as AI adoption deepens.

Warner Bros. is suing AI startup Midjourney for copyright infringement, alleging it allows users to generate copyrighted characters like Superman and Batman without permission. The lawsuit claims Midjourney knowingly lifted content restrictions, profiting from “piracy and copyright infringement.”

California and Delaware Attorneys General are investigating OpenAI following reports of tragic deaths potentially linked to ChatGPT interactions, demanding improved safety measures for young users. In related news, both OpenAI’s ChatGPT and Google’s Gemini AI products have raised significant child safety concerns, with Common Sense Media rating Gemini as “High Risk” due to the potential for inappropriate content and unsafe advice.

Concerns about inappropriate interactions and lack of safety features in AI for children could drive further regulatory action. The FTC is making inquiries into how AI chatbots affect children’s mental health, according to WSJ reporting. The agency may seek documents from OpenAI, Meta and Character.AI.

Tesla shareholders will vote on investing in Elon Musk’s xAI to bolster its AI, robotics, and energy ambitions. Proposed by a shareholder, this move aims to secure Tesla’s stake in advanced AI capabilities and drive value.

AI agent startup Sierra, co-founded by Bret Taylor, raised $350 million in funding, valuing the company at $10 billion. Sierra helps enterprises build customer service AI agents.

Harish Abbott’s Augment, featuring its AI assistant “Augie” for logistics, secured an $85 million Series A round led by Redpoint. Augie automates repetitive tasks like gathering bids, tracking packages, and managing invoices for freight companies.

Isotopes AI came out of stealth on Thursday with a $20 million seed round. Its AI agent, Aidnn, enables business managers to query complex data in natural language, drafting planning documents from various sources like ERP and CRM.

OpenAI lays out its case for AI’s role in expanding economic opportunity. This company essay by Applications CEO Fidji Simo is a strategic positioning piece ahead of the launch of OpenAI’s Jobs Platform, explaining the company’s push for AI literacy through certifications. She argues that AI broadens access to income-generating tools, but that AI’s disruption requires reskilling at scale.

The premise for AI reskilling is simple:

Studies show that AI-savvy workers are more valuable, more productive, and are paid more than workers without AI skills.

Yes, AI literacy is vital to being successful in the AI era, and soon most American workers will interface with AI at work. The AI technology revolution is so fast that even OpenAI’s stated goal of 10 million certifications by 2030 seems insufficient.

Our motivation for writing AI Changes Everything is similar; we want to help you stay on top of AI. If there is a topic you want me to dive further into for your own AI up-skilling, leave a comment and I’ll dig into it.


Tools & Platforms

Microlearning Offers A Flexible Approach to Gen AI Education

Microlearning has emerged as a dynamic approach to corporate education, breaking down complex topics into concise, focused lessons that are easier to digest and apply. For corporations striving to remain competitive in the age of generative artificial intelligence (Gen AI), this strategy offers a powerful way to upskill employees without disrupting daily operations.

By delivering bite-sized, actionable content tailored to specific roles, microlearning empowers employees to absorb information at their own pace, practice what they’ve learned, and quickly apply new skills. For businesses navigating the complexities of digital transformation, this approach provides the agility needed to stay ahead of the curve.

Why Corporations Need Microlearning for Gen AI Education

In today’s fast-paced business environment, corporate leaders face the challenge of equipping employees with the skills required to harness the power of technologies like Gen AI. The vast potential of Gen AI for streamlining processes, enhancing decision-making, and driving innovation makes it an essential area of focus. Yet traditional training programs, which often demand significant time and resources, are no longer practical for many companies.

Microlearning offers a solution by making education flexible, personalized, and accessible. Lessons typically last 10–15 minutes and are delivered through formats that cater to different learning styles, such as videos, interactive exercises, and quizzes. This format is ideal for employees juggling demanding workloads, as it allows them to integrate learning into their schedules seamlessly.

Furthermore, microlearning ensures relevance by offering tailored learning paths. For example, a marketing team can focus on modules that explore Gen AI-powered audience segmentation, while a customer service team might learn about automated response systems and predictive analytics. This customization ensures that training is directly applicable, increasing engagement and retention.

Client Case Study in Gen AI Education: Microlearning in Action 

To illustrate how microlearning can transform corporate training, consider the case of a multinational consumer packaged goods (CPG) firm that sought to integrate Gen AI into its operations. The company recognized the potential of AI tools to enhance productivity and innovation but faced several challenges:

  1. Time Constraints: Employees were already stretched thin, managing tight deadlines and critical projects.
  2. Skill Gaps: Teams varied widely in their familiarity with AI technologies, requiring training tailored to different levels of expertise.
  3. Scalability: With offices spread across multiple time zones, delivering consistent, high-quality training to a global workforce was a major challenge.

To address these challenges, the company asked me to help it adopt a microlearning strategy.

Designing a Microlearning Program 

We began by identifying the key areas where Gen AI could make an immediate impact, including sales forecasting, product development, and customer experience management. Working with subject matter experts, we created a series of microlearning modules tailored to specific roles and objectives.

For example:

  • Sales Teams: Modules focused on using AI tools to predict customer needs, improve lead scoring, and optimize outreach strategies.
  • Product Developers: Training covered AI-driven design tools and algorithms to accelerate prototyping and refine product features.
  • Customer Support Teams: Lessons explored AI chatbots, sentiment analysis, and personalized service recommendations.

Each module was designed to be engaging and interactive, encouraging employees to apply what they learned immediately. The content was hosted on a mobile-friendly Learning Management System (LMS), ensuring accessibility for employees regardless of location or time zone.

Making Learning Flexible and Personalized 

Flexibility was a cornerstone of the program. Employees could access the modules whenever it suited them, such as during breaks, commutes, or downtime between meetings. The LMS also included progress tracking, enabling participants to monitor their development and revisit areas where they needed additional support.

To enhance engagement, we helped the company incorporate gamification elements, such as badges and leaderboards, to motivate learners and celebrate achievements. Employees could also choose their own learning paths, selecting modules that aligned with their roles and career aspirations. This personalization ensured that training was not only relevant but also empowering, as employees felt a greater sense of ownership over their learning journey.

Support and Mentorship 

From our experience with other companies, self-paced learning works best with guidance, so we helped the company pair the microlearning program with optional mentorship opportunities. Experienced AI practitioners within the organization served as mentors, hosting weekly virtual office hours where employees could ask questions and receive advice.

For instance, a sales manager might consult a mentor about integrating AI tools into an existing CRM system, while a customer support specialist could seek tips on optimizing chatbot responses for better customer satisfaction. These interactions provided valuable context and practical insights, reinforcing the concepts covered in the microlearning modules.

Results That Speak for Themselves 

After six months, the microlearning initiative delivered measurable results across multiple metrics:

  1. Increased Efficiency: Sales teams reported a 22% reduction in time spent on lead qualification, thanks to AI-enhanced processes.
  2. Improved Innovation: Product developers cut prototyping time by 18%, enabling faster iteration and delivery of new products.
  3. Enhanced Customer Experience: Customer satisfaction scores improved by 26%, as support teams used AI tools to provide quicker, more personalized service.

These results not only demonstrated the immediate impact of microlearning but also highlighted its long-term potential to drive operational excellence and competitive advantage.

Building a Culture of Continuous Learning 

Beyond the tangible outcomes, the microlearning program had a profound effect on the company’s culture. Employees became more confident and proactive in experimenting with AI tools, sharing their learnings with colleagues, and proposing new applications for the technology.

For example, a marketing team used insights from their training to develop an AI-powered campaign that outperformed previous efforts by 30%. Similarly, a regional office implemented an AI tool for inventory management, significantly reducing waste and costs. These successes reinforced a culture of continuous learning and innovation, where employees were empowered to take initiative and explore the possibilities of emerging technologies.

Microlearning is not a one-and-done solution; it is a dynamic approach that evolves with the needs of the business. As Gen AI capabilities advance, companies can expand their training libraries to cover new applications, ensuring that employees remain at the forefront of innovation.

For example, future modules might focus on advanced AI ethics, regulatory compliance, or integrating AI into sustainability initiatives, while managing risks. By continuously updating and refining their microlearning programs, corporations can maintain a skilled and adaptable workforce ready to tackle the challenges of tomorrow.

The Strategic Advantage of Gen AI Education Through Microlearning 

For corporations, microlearning offers a strategic advantage in an increasingly competitive landscape. It allows businesses to upskill employees quickly and efficiently, driving productivity and innovation while minimizing disruption. Moreover, by tailoring training to the unique needs of different teams and roles, microlearning ensures that every employee can contribute meaningfully to the company’s success. Whether it’s a sales representative using AI to close deals faster or an operations manager leveraging AI for process optimization, the benefits of this approach extend across the organization.

By embracing microlearning, corporations not only enhance their operational capabilities but also foster a culture of growth, adaptability, and forward-thinking. In an era defined by rapid technological change, this mindset is critical for long-term success. Microlearning represents the future of corporate education. Its ability to deliver focused, engaging, and personalized training makes it the ideal approach for equipping employees with the skills they need to thrive in the age of Gen AI. By adopting this strategy, corporations can ensure that their teams are not just keeping up with change but leading it, driving innovation and setting new benchmarks for success.

