Tools & Platforms
In the Face of AI, Joe Lai Is Optimistic About the Future of Creativity
Joe Lai, chief technology officer of Creativ Company, initially joined as a co-op student nearly a decade ago. The ease with which he picked up new technology stacks was instrumental in getting the company’s first generation of models off the ground in 2016.
Since then, Joe has built models in Spark, data pipelines in Kafka, and API layers on the OpenWhisk framework, among many other exciting technologies.
Recently, Joe has been building Creativ Insights’ HTML5/Angular responsive interface. This interface sits on top of an OpenWhisk-based API layer, which can dynamically scale to handle traffic of over 100 TPS. Joe is fluent in Scala, Java, Python, and HTML5/Angular/JS, and in building learning modules that run on the TensorFlow and Spark frameworks using hybrid backends: graph, document, key/value, or relational.
A graduate of the University of Waterloo with a degree in computer science and a minor in statistics, Joe lives in Toronto, Canada.
Joe sits down with LBB to discuss integrating AI into Creativ’s marketing-technology stack, using large language models (LLMs) to cluster data, and the challenges of collaborating with AI…
LBB> What’s the most impactful way that AI is helping you in your current role?
Joe> As the CTO, my entire job revolves around integrating AI into our marketing-technology stack. We have built large language models (LLMs) trained on datasets ranging from social media comments to internal customer data, which allow us to identify and deliver custom, actionable insights for our clients.
These LLMs can identify changes in consumer sentiment and brand perception, generate campaign reports and competitor analysis, and pinpoint topics of interest at a speed and scale that was previously impossible without AI.
This not only equips our clients to make more informed decisions, but also lets us focus on the high-value strategy work where our experience and creativity shine.
LBB> We hear a lot about AI driving efficiencies and saving time. But are there any ways that you see the technology making qualitative improvements to your work, too?
Joe> Absolutely. Our use of AI and LLMs raises the overall quality of our insights by uncovering patterns that humans may overlook.
There’s a lot of data out there. Using LLMs to cluster data across multiple channels can help us identify emerging topics in conversations happening all over the world – ones that might not be obvious through traditional methods like keyword tracking. This richer level of understanding means that our reports and insights are not only faster, but also more precise, context-driven, and strategically valuable.
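Joe doesn’t detail Creativ’s pipeline, but the core idea of clustering conversations can be sketched with a toy example: represent each post as a vector and group posts whose vectors are similar. Here, simple bag-of-words counts stand in for LLM embeddings, and the greedy threshold clustering is purely illustrative:

```python
from collections import Counter
from math import sqrt

def vectorize(text):
    # Bag-of-words term counts; a stand-in for real LLM embeddings.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def cluster(posts, threshold=0.5):
    # Greedy single-pass clustering: attach each post to the first
    # cluster whose seed it resembles; otherwise start a new cluster.
    clusters = []  # list of (seed_vector, member_posts)
    for post in posts:
        vec = vectorize(post)
        for seed, members in clusters:
            if cosine(vec, seed) >= threshold:
                members.append(post)
                break
        else:
            clusters.append((vec, [post]))
    return [members for _, members in clusters]

posts = [
    "new patch ruined the game balance",
    "the game balance feels off after the patch",
    "loving the new skin drops",
]
print(cluster(posts))
```

The two posts complaining about game balance land in one cluster while the unrelated post starts its own, which is the kind of emerging-topic grouping that keyword tracking alone would miss. A production system would use dense embeddings and a proper clustering algorithm rather than this greedy pass.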
LBB> What are the biggest challenges in collaborating with AI as a creative professional, and how have you overcome them?
Joe> One of our biggest challenges is ensuring that our AI outputs are not only factually correct, but also contextually and strategically correct. This is where our years of experience in the tech and creative space kick in.
We use this experience to curate our LLMs to fit each client’s use case and context, and we rely on human review to validate that our data and results are high quality. This process ensures that our AI models understand the specific nuances of the client, and that the resulting insights align with the client’s views and objectives.
LBB> How do you balance the use of AI with your own creative instincts and intuition?
Joe> I view AI as a powerful tool rather than a replacement for creative instincts. The AI models can surface patterns and ideas that I may not have known or considered, and I can then apply my own knowledge, creativity, and experience to either accept, refine, or reject these suggestions.
This allows me to utilise the powerful capabilities of AI without losing touch with my creative intuition.
LBB> And how do you ensure that the work produced with AI maintains a sense of authenticity or human touch?
Joe> The key is grounding AI in real-life experiences and perspectives. We tailor our LLMs to fit our clients’ individual needs, which means that our models understand the specific nuances behind the industry and company.
For example, gaming communities typically have their own slang and phrases that can be game-specific or industry-wide terms. We ensure that we capture these contextual differences in our models so that our output properly reflects the real world. We also make sure that a human reviews these outputs to validate that the results are contextually relevant and credible.
LBB> Do you think there are any misconceptions or misunderstandings in the way we currently talk about AI in the industry?
Joe> Yes. I think a common misconception is that AI is an autonomous decision maker. At their core, LLMs are language models. They can consume and process information like no human can, but their outputs are ultimately statistical, and they do not have the same understanding that humans do.
In its current state, AI still requires high-quality inputs and human validation in order to produce the right results. This is why I treat AI as a very powerful tool in my arsenal that can influence my decision making, but does not make decisions for me.
LBB> What ethical considerations come to mind when using AI to generate or assist with creative content?
Joe> Ethical use of AI is core to the beliefs at Creativ. We are strict with data protection and data privacy, and we only use data that is either publicly available or is provided to us.
When it comes to training our models, we ensure that no sensitive data is exposed or misused. We are also transparent with how we use AI in our work, and we are cognisant of potential bias in our models.
LBB> Have you seen attitudes towards AI change in recent times? If so, how?
Joe> For sure. When ChatGPT was first released and gained global popularity, it felt like people were either dismissive of its accuracy and sceptical of its usefulness, or they were fearing for their job security, feeling that AI would overtake human thinking and creativity.
I think nowadays, as these models have continued to mature, people have become more accepting of AI. They generally view it as a legitimate tool that can be very useful in certain situations, but one that is still far from replacing human thinking and creativity.
LBB> Broadly speaking, does the industry’s current conversation around AI leave you feeling generally positive, or generally concerned, about creativity’s future?
Joe> I feel generally optimistic; there is enormous potential for AI to augment creativity across all industries. However, I think it has less to do with the tech itself, and more with how it is being used. If we treat AI as a shortcut to creativity, then we risk diluting real creative work with ‘AI slop’. But if we use AI to enhance human creativity, I believe we can unlock innovation that we never thought was possible.
LBB> Do you think AI has the potential to create entirely new forms of art or media that weren’t possible before? If so, how?
Joe> I believe so. AI can digest and consume information at a scale we’ve never seen before, across all forms of media – text, audio, visual, you name it. I think it won’t be long before someone nudges AI in a direction where it can produce art and media in formats we’re just beginning to imagine.
LBB> Thinking about your own role/discipline, what kind of impact do you think AI will have in the medium-term future? To what extent will it change the way people in your role work?
Joe> In the medium term, AI will shift my role towards building AI-driven pipelines that increase both the speed and efficiency of our systems. Instead of spending countless hours cleaning and digesting millions of data points, I can spend that time generating and curating more insightful reports for my clients.
AI handles the heavy analytical lifting while I can focus more on client engagement, technical innovation, and integrating more complex and efficient pipelines for my company.
Google AI Chief Stresses Continuous Learning for Fast-Changing AI Era

At an open-air summit in Athens, Demis Hassabis, head of Google’s DeepMind and Nobel chemistry laureate, argued that the skill most needed in the years ahead will be the ability to keep learning. He described education as moving into a period where adaptability matters more than fixed knowledge, because the speed of artificial intelligence research is shortening the lifespan of expertise.
Hassabis said future workers will have to treat learning as a constant process, not a stage that ends with graduation. He pointed to rapid advances in computing and biology as examples of how quickly fields now change once AI tools enter the picture.
Outlook on technology
The DeepMind chief warned that artificial general intelligence may not be far away. In his view, it could emerge within a decade, carrying a weight of opportunity and risk. He described its potential impact as larger and faster than the industrial revolution, a shift that could deliver breakthroughs in medicine, clean energy, and space exploration.
Even so, he stressed that powerful models must be tested carefully before being widely deployed. The practice of pushing products out quickly, common in earlier technology waves, should not guide the release of systems capable of influencing economies and societies on a global scale.
Prime minister’s caution
Greek Prime Minister Kyriakos Mitsotakis, who shared the stage at the Odeon of Herodes Atticus, said governments will struggle to keep pace with corporate growth unless they adopt a more active role. He warned that when the benefits of technology are concentrated among a small set of companies, public confidence erodes. He tied the issue to social stability, saying communities won’t support AI unless they see its value in everyday life.
Mitsotakis pointed to Greece’s efforts to build an “AI factory” around a new supercomputer in Lavrio. He presented the project as part of a wider European push to turn regulation and research into competitive advantages, while reducing reliance on U.S. and Chinese platforms.
Education and jobs
Both speakers returned repeatedly to the theme of skills. Hassabis said that in addition to traditional training in science and mathematics, students should learn how to monitor their own progress and adjust their methods. He argued that the most valuable opportunities often appear where two fields overlap, and that AI can serve as a tutor to help learners explore those connections.
Mitsotakis said the challenge for governments is to match school systems with shifting labor markets. He noted that Greece is mainly a service economy, which may delay some of the disruption already visible in manufacturing-heavy nations. But he cautioned that job losses are unavoidable, including in sectors long thought resistant to automation.
Strains on democracy
The prime minister voiced concern that misinformation powered by AI could undermine elections. He mentioned deepfakes as a direct threat to public trust and said Europe may need stricter rules on content distribution. He also highlighted risks to mental health among teenagers exposed to endless scrolling and algorithm-driven feeds.
Hassabis agreed that lessons from social media should inform current choices. He suggested AI might help by filtering information in ways that broaden debate instead of narrowing it. He described a future where personal assistants act in the interest of individual users, steering them toward content that supports healthier dialogue.
The question of abundance
Discussion also touched on the idea that AI could usher in an era of radical abundance. Hassabis said research in protein science, energy, and material design already shows how quickly knowledge is expanding. He argued that the technology could open access to vast resources, but he added that how wealth is shared will depend on governments and economic policy, not algorithms.
Mitsotakis drew parallels with earlier industrial shifts, warning that if productivity gains are captured only by large firms, pension systems and social programs will face heavy strain. He said policymakers must prepare for a period of disruption that could arrive faster than many expect.
Greece’s role
The Athens event also highlighted the country’s ambition to build a regional hub for technology. Mitsotakis praised the growth of local startups and said incentives, venture capital, and government adoption of AI in public services would be central to maintaining momentum.
Hassabis, whose family has roots in Cyprus, said Europe needs to remain at the frontier of AI research if it wants influence in setting ethical and technical standards. He called Greece’s combination of history and new infrastructure a symbolic setting for conversations on the future of technology.
Preparing for the next era
The dialogue closed on a shared message: societies will need citizens who can adapt and learn throughout their lives. For Hassabis, this adaptability is the foundation for navigating a future shaped by artificial intelligence. For Mitsotakis, the task is making sure those changes strengthen democratic values rather than weaken them.
Notes: This post was edited/created using GenAI tools.
Why does ChatGPT agree with everything you say? The dangers of sycophantic AI

“Do you like me? I feel really sad,” a 30-year-old Sydney woman asked ChatGPT recently.
Then, “Why isn’t my life like the movies?”
Elon Musk’s xAI lays off 500 jobs amid strategy shift to Specialist AI tutors: Report

Elon Musk’s artificial intelligence company, xAI, has laid off around 500 employees from its data annotation team, according to a report by Business Insider. The move, communicated late on Friday evening, affects workers who were responsible for training the company’s generative AI chatbot, Grok.
xAI lays off 500 data annotation staff
As per the report, in an email sent to staff, xAI said it was reducing its focus on developing general AI tutors and would instead concentrate resources on specialist AI tutors. “After a thorough review of our Human Data efforts, we’ve decided to accelerate the expansion and prioritisation of our specialist AI tutors, while scaling back our focus on general AI tutor roles,” the message stated. “As part of this shift in focus, we no longer need most generalist AI tutor positions and your employment with xAI will conclude.”
Employees were told their system access would be revoked immediately; however, salaries would continue to be paid until the end of their contracts or until 30 November, the report adds.
Expansion of specialist AI roles
The company has reportedly made clear it is ramping up investment in specialist AI tutors across fields such as video games, web design, data science, medicine, and STEM. On 13 September, xAI announced plans to expand this team tenfold, saying the roles were “adding huge value”.
Notably, the layoffs follow recent reports that senior members of the data annotation team had their Slack accounts deactivated before the formal announcement was made.
In other news, earlier this month, Musk once again put the spotlight on artificial intelligence, as he highlighted the predictive abilities of X’s AI chatbot, Grok. On his official X account, the billionaire shared a link to a live benchmark platform, urging users to test Grok’s forecasting prowess.
In his first tweet, Musk wrote, “Download the @Grok app and try Grok Expert mode. For serious predictions, Grok Heavy is the best.” He followed up with, “The ability to predict the future is the best measure of intelligence.”
The link pointed to FutureX, a platform designed to evaluate how well large language models (LLMs) can predict real-world events. Developed by Jiashuo Liu and collaborators, FutureX presents AI agents with tasks spanning politics, economics, sports and cultural trends, scoring their predictions in real time.
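FutureX’s exact scoring method isn’t described here, but a standard way to grade probabilistic forecasts against real-world outcomes is the Brier score: the mean squared error between the probability a model assigned to an event and whether the event actually happened. A minimal sketch, with illustrative numbers:

```python
def brier_score(forecasts):
    """Mean squared error between predicted probabilities and outcomes.

    Each forecast is (predicted probability, outcome), where the outcome
    is 1 if the event happened and 0 if it did not.
    Lower is better; 0.0 is a perfect forecaster.
    """
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Three hypothetical predictions: two well-calibrated, one badly wrong.
calls = [(0.9, 1), (0.2, 0), (0.7, 0)]
print(brier_score(calls))
```

Scoring frameworks like this let a live benchmark rank models in real time as events resolve; FutureX may well use a different or more elaborate metric.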