Tools & Platforms
Lumify warns AI readiness must catch up to enterprise adoption
As artificial intelligence tools move rapidly from novelty to necessity, enterprises across Australia and New Zealand are scrambling to prepare their people – not just their systems – for what comes next.
For Michael Blignaut, an IT and process instructor at Lumify Work New Zealand, this moment feels like déjà vu.
“Cybersecurity is our fastest growing area,” he said, pointing to the same kind of urgency now emerging around artificial intelligence. “Every single one of our partners – AWS, Microsoft, all of them – have got huge amounts of cybersecurity training.”
Lumify Work, formerly known as Auldhouse in New Zealand and DDLS in Australia, is Australasia’s largest provider of corporate IT training, with nearly four decades of experience. It offers education across IT, project management, cybersecurity, and now a growing portfolio in AI. As new technologies go mainstream, organisations are looking for more than just tools – they need a strategy to roll them out responsibly.
“AI has moved from that vague buzzword to a vital business tool,” Blignaut said.
“It’s really reshaping how people think and work.” But he also cautions against a simplistic approach. “It’s not a one-size-fits-all magic wand. Unless companies really think about staff and training, and how they’re going to manage their AI adoption and address ethical concerns, I think there are going to be issues.”
The enthusiasm is undeniable. With tools like Microsoft Copilot and ChatGPT entering daily workflows, demand for AI training is exploding – especially among end users.
“Just using Copilot in emails, in Outlook and in Excel seems to get people very excited,” said Blignaut. “It’s that basic end-user usage where there seems to be a lot of wow and excitement.”
But that excitement can mask new risks. “People either don’t trust it, or they’ve been given the wrong answer by whatever tool they use. But there’s also an overreliance: everything from ‘it can solve all our problems’ to ‘it’s not doing what I need’.”
This rapid adoption has elevated issues like data privacy, governance, and training fit-for-purpose. “AI governance is knowing what people are going to do with data, how companies are going to adopt AI and really use it to the potential benefit of the organisation,” Blignaut said. In regulated sectors or for firms handling sensitive data, that means rethinking internal frameworks – starting with education.
Blignaut’s advice for businesses still unsure about jumping into AI? Start smart.
“It’s about thinking through your adoption strategies—and not being slow about putting in place really great implementation pathways,” he said. “How are we going to get everybody in the organisation to use their tools while staying safe and not opening the company up to breaches in privacy and all of those ethical bits and pieces?”
Assessment tools are a useful starting point. “There are a good number of AI readiness assessments – or Lumify can also help with that,” he said.
“Before you adopt any new technology or tool, there’s that initial awareness to see where the company is at and what they’re actually going to use it for, and making sure everybody’s aware of where the business actually needs AI and how it can assist.”
As with cybersecurity, the upskilling challenge isn’t limited to technical staff. Training now spans everyone—from executives navigating governance to frontline workers learning prompting. “I like having people in class with me,” said Blignaut, “but I think that’s where we’re going to settle: a bit of a mix.”
Hybrid training delivery – once rare pre-COVID – is now standard. Lumify offers formats ranging from one-day intro workshops to five-day technical intensives, delivered in-person, online, or both.
Vendor-specific certifications remain strong, especially those from Microsoft and Amazon. But interest is also growing in tool-agnostic programs, such as AI Certs, an internationally recognised certification body. “We’ve also got a really cool set of vendor-neutral or tool-neutral tools through AI Certs,” Blignaut said. “With all things AI, it’s amazing how things are changing—and changing again. Keeping certifications current and standard is going to be a huge amount of work for them, but so far, so good.”
Blignaut said one skill will become foundational: the ability to prompt AI effectively. “To me, it’s always about the prompting,” he explained.
“Being able to ask the right question, being able to really frame your prompt. Across all of those platforms, being able to ask the right question or prompt – I think that’s where the challenge is going to be for everybody.”
He also emphasises critical thinking and iterative refinement. “AI does hallucinate. Being agile about this thinking – not being shy to iterate and double-check your answers, reframing and re-asking the question in another way and being quite specific—iterating, iterating and iterating again is absolutely important.”
Blignaut believes AI will be a net creator of jobs, but not without disruption. Lumify is already designing reskilling programs to help displaced workers transition into new roles, including non-technical tracks that focus on digital literacy and adaptability.
Ultimately, Blignaut said, the companies that thrive in an AI-enabled world will be those that treat training as a continuous, strategic function – not a one-off fix.
“Before you can lead in AI, you’ve got to understand it,” he said. “And that starts with asking the right questions – of your people, your data, and your systems.”
RACGP releases new AI guidance
A new resource guides GPs through the practicalities of using conversational AI in their consults, how the new technology works, and what risks to be aware of.
AI is an emerging space in general practice, with more than half of GPs not familiar with specific AI tools.
Artificial intelligence (AI) is becoming increasingly relevant in healthcare, but at least 80% of GPs have reported that they are not at all, or not very, familiar with specific AI tools.
To help GPs broaden their understanding of the technology, and weigh up the potential advantages and disadvantages of its use in their practice, the RACGP has unveiled a comprehensive new resource focused on conversational AI.
Unlike AI scribes, which convert a conversation with a patient into a clinical note that can be incorporated into a patient’s health record, conversational AI is technology that enables machines to interpret, process, and respond to human language in a natural way.
Examples include AI-powered chatbots and virtual assistants that can support patient interactions, streamline appointment scheduling, and automate routine administrative tasks.
The college resource offers further practical guidance on how conversational AI can be applied effectively in general practice and highlights key applications. These include:
- answering patient questions regarding their diagnosis or the potential side effects of prescribed medicines, and simplifying jargon in medical reports
- providing treatment/medication reminders and dosage instructions
- providing language translation services
- guiding patients to appropriate resources
- supporting patients to track and monitor blood pressure, blood sugar, or other health markers
- triaging patients prior to a consultation
- preparing medical documentation such as clinical letters, clinical notes and discharge summaries
- providing clinical decision support by preparing lists of differential diagnoses, supporting diagnosis, and optimising clinical decision support tools (for investigation and treatment options)
- suggesting treatment options and lifestyle recommendations.
Dr Rob Hosking, Chair of the RACGP’s Practice and Technology Management Expert Committee, told newsGP there are several potential advantages to these tools in general practice.
‘Some of the potential benefits include task automation, reduced administrative burden, improved access to care and personalised health education for patients,’ he said.
Beyond the clinical setting, conversational AI tools can also have a range of business, educational and research applications, such as automating billing and analysing billing data, summarising the medical literature and answering clinicians’ medical questions.
However, while there are a number of benefits, Dr Hosking says it is important to consider some of the potential disadvantages to its use as well.
‘Conversational AI tools can provide responses that appear authoritative but on review are vague, misleading, or even incorrect,’ he explained.
‘Biases are inherent to the data on which AI tools are trained, and as such, particular patient groups are likely to be underrepresented in the data.
‘There is a risk that conversational AI will make unsuitable and even discriminatory recommendations, rely on harmful and inaccurate stereotypes, and/or exclude or stigmatise already marginalised and vulnerable individuals.’
While some conversational AI tools are designed for medical use, such as Google’s MedPaLM and Microsoft’s BioGPT, Dr Hosking pointed out that most are designed for general applications and not trained to produce a result within a clinical context.
‘The data these general tools are trained on are not necessarily up-to-date or from high-quality sources, such as medical research,’ he said.
The college resource addresses these potential problems, as well as the other ethical and privacy considerations that come with using AI in healthcare.
For GPs deciding whether to use conversational AI, Dr Hosking notes there are a number of considerations to ensure the delivery of safe, quality care, and says patients should play a key role in deciding whether it is used in their specific consultation.
‘GPs should involve patients in the decision to use AI tools and obtain informed patient consent when using patient-facing AI tools,’ he said.
‘Also, do not input sensitive or identifying data.’
However, before conversational AI is brought into practice workflows, the RACGP recommends GPs are trained on how to use it safely, including knowledge around the risks and limitations of the tool, and how and where data is stored.
‘GPs must ensure that the use of the conversational AI tool complies with relevant legislation and regulations, as well as any practice policies and professional indemnity insurance requirements that might impact, prohibit or govern its use,’ the college resource states.
‘It is also worth considering that conversational AI tools designed specifically by, and for use by, medical practitioners are likely to provide more accurate and reliable information than that of general, open-use tools.
‘These tools should be TGA-registered as medical devices if they make diagnostic or treatment recommendations.’
While the college recognises that conversational AI could revolutionise parts of healthcare delivery, it recommends that GPs be ‘extremely careful’ in using the technology at this time.
‘Many questions remain about patient safety, patient privacy, data security, and impacts for clinical outcomes,’ the college said.
Dr Hosking, who has yet to implement conversational AI tools in his own clinical practice, shared the sentiment.
‘AI will continue to evolve and really could make a huge difference in patient outcomes and time savings for GPs,’ he said.
‘But it will never replace the important role of the doctor-patient relationship. We need to ensure AI does not create health inequities through inbuilt biases.
‘This will help GPs weigh up the potential advantages and disadvantages of using conversational AI in their practice and inform of the risks associated with these tools.’
AI Shopping Is Here. Will Retailers Get Left Behind?
AI doesn’t care about your beautiful website.
Visit any fashion brand’s homepage and you’ll see all sorts of dynamic or interactive elements, from image carousels to dropdown menus, designed to catch shoppers’ eyes and ease navigation.
To the large language models that underlie ChatGPT and other generative AI, many of these features might as well not exist. They’re often written in the programming language JavaScript, which, for the moment at least, most AI struggles to read.
This giant blind spot didn’t matter when generative AI was mostly used to write emails and cheat on homework. But a growing number of startups and tech giants are deploying this technology to help users shop — or even make the purchase themselves.
“A lot of your site might actually be invisible to an LLM from the jump,” said A.J. Ghergich, global vice president of Botify, an AI optimisation company that helps brands from Christian Louboutin to Levi’s make sure their products are visible to and shoppable by AI.
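A toy illustration of the problem, assuming a crawler that fetches raw HTML and strips scripts rather than executing them (the page markup and product name here are invented):

```python
import re

# A page whose product listing is injected client-side by JavaScript.
raw_html = """
<html><body>
  <div id="products"></div>
  <script>
    document.getElementById('products').innerHTML =
      '<ul><li>Red suede loafer - $180</li></ul>';
  </script>
</body></html>
"""

# A text extractor that does not run JavaScript typically drops <script>
# blocks entirely, then strips the remaining tags to get visible text.
no_scripts = re.sub(r"<script.*?</script>", "", raw_html, flags=re.S)
visible_text = re.sub(r"<[^>]+>", "", no_scripts).strip()

print(repr(visible_text))  # the product name never appears
```

To such a crawler the product exists only inside an unexecuted script string, so the listing is effectively invisible.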
The vast majority of visitors to brands’ websites are still human, but that’s changing fast. US retailers saw a 1,200 percent jump in visits from generative AI sources between July 2024 and February 2025, according to Adobe Analytics. Salesforce predicts AI platforms and AI agents will drive $260 billion in global online sales this holiday season.
Those agents, launched by AI players such as OpenAI and Perplexity, are capable of performing tasks on their own, including navigating to a retailer’s site, adding an item to cart and completing the checkout process on behalf of a shopper. Google’s recently introduced agent will automatically buy a product when it drops to a price the user sets.
This form of shopping is very much in its infancy; the AI shopping agents available still tend to be clumsy. Long term, however, many technologists envision a future where much of the activity online is driven by AI, whether that’s consumers discovering products or agents completing transactions.
To prepare, businesses from retail behemoth Walmart to luxury fashion labels are reconsidering everything from how they design their websites to how they handle payments and advertise online as they try to catch the eye of AI and not just humans.
“It’s in every single conversation I’m having right now,” said Caila Schwartz, director of consumer insights and strategy at Salesforce, which powers the e-commerce of a number of retailers, during a roundtable for press in June. “It is what everyone wants to talk about, and everyone’s trying to figure out and ask [about] and understand and build for.”
From SEO to GEO and AEO
As AI joins humans in shopping online, businesses are pivoting from SEO — search engine optimisation, or ensuring products show up at the top of a Google query — to generative engine optimisation (GEO) or answer engine optimisation (AEO), where catching the attention of an AI responding to a user’s request is the goal.
That’s easier said than done, particularly since it’s not always clear even to the AI companies themselves how their tools rank products, as Perplexity’s chief executive, Aravind Srinivas, admitted to Fortune last year. AI platforms ingest vast amounts of data from across the internet to produce their results.
Though there are indications of what attracts their notice. Products with rich, well-structured content attached tend to have an advantage, as do those that are the frequent subject of conversation and reviews online.
“Brands might want to invest more in developing robust customer-review programmes and using influencer marketing — even at the micro-influencer level — to generate more content and discussion that will then be picked up by the LLMs,” said Sky Canaves, a principal analyst at Emarketer focusing on fashion, beauty and luxury.
Ghergich pointed out that brands should be diligent with their product feeds into programmes such as Google’s Merchant Center, where retailers upload product data to ensure their items appear in Google’s search and shopping results. These types of feeds are full of structured data, including product names and descriptions, meant to be picked up by machines so they can direct shoppers to the right items.
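Such feeds carry machine-readable fields rather than free-form page copy. A minimal sketch of one entry, in the schema.org `Product` shape that structured product data generally follows (all values here are hypothetical, not taken from Google’s documentation):

```python
import json

# Hypothetical feed entry: name, description and offer data that an AI
# system can parse without rendering the storefront at all.
entry = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Classic Leather Tote",
    "sku": "TOTE-001",
    "description": "Full-grain leather tote with an interior zip pocket.",
    "offers": {
        "@type": "Offer",
        "price": "249.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

print(json.dumps(entry, indent=2))
```

Because price and availability live in explicit fields, a feed like this is also how retailers keep the numbers an AI quotes in sync with what the checkout page actually charges.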
Ghergich said AI will often read this data before other sources such as the HTML on a brand’s website. These feeds can also be vital for making sure the AI is pulling pricing data that’s up to date, or as close as possible.
As more consumers turn to AI and agents, however, it could change the very nature of online marketing, a scenario that would shake even Google’s advertising empire. Tactics that work on humans, like promoted posts with flashy visuals, could be ineffective for catching AI’s notice. It would force a redistribution of how retailers spend their ad budgets.
Emarketer forecasts that spending on traditional search ads in the US will see slower growth in the years ahead, while a larger share of ad budgets will go towards AI search. OpenAI, whose CEO, Sam Altman, has voiced his distaste for ads in the past, has also acknowledged exploring ads on its platform as it looks for new revenue streams.

“The big challenge for brands with advertising is then how to show up in front of consumers when traditional ad formats are being circumvented by AI agents, when consumers are not looking at advertisements because agents are playing a bigger role,” said Canaves.
Bots Are Good Now
Retailers face another set of issues if consumers start turning to agents to handle purchases. On the one hand, agents could be great for reducing the friction that often causes consumers to abandon their carts. Rather than going through the checkout process themselves and stumbling over any annoyances, they just tell the agent to do it and off it goes.
But most websites aren’t designed for bots to make purchases — exactly the opposite, in fact. Bad actors have historically used bots to snatch up products from sneakers to concert tickets before other shoppers can buy them, frequently to flip them for a profit. For many retailers, they’re a nuisance.
“A lot of time and effort has been spent to keep machines out,” said Rubail Birwadker, senior vice president and global head of growth at Visa.
If a site has reason to believe a bot is behind a transaction — say it completes forms too fast — it could block it. The retailer doesn’t make the sale, and the customer is left with a frustrating experience.
Payment players are working to create methods that will allow verified agents to check out on behalf of a consumer without compromising security. In April, Visa launched a programme focused on enabling AI-driven shopping called Intelligent Commerce. It uses a mix of credential verification (similar to setting up Apple Pay) and biometrics to ensure shoppers are able to check out while preventing opportunities for fraud.
“We are going out and working with these providers to say, ‘Hey, we would like to … make it easy for you to know what’s a good, white-list bot versus a non-whitelist bot,’” Birwadker said.
Of course the bot has to make it to checkout. AI agents can stumble over other common elements in webpages, like login fields. It may be some time before all those issues are resolved and they can seamlessly complete any purchase.
Consumers have to get on board as well. So far, few appear to be rushing to use agents for their shopping, though that could change. In March, Salesforce published the results of a global survey that polled different age groups on their interest in various use cases for AI agents. Interest in using agents to buy products rose with each subsequent generation, with 63 percent of Gen-Z respondents saying they were interested.
Canaves of Emarketer pointed out that younger generations are already using AI regularly for school and work. Shopping with AI may not be their first impulse, but because the behaviour is already ingrained in their daily lives in other ways, it’s spilling over into how they find and buy products.
More consumers are starting their shopping journeys on AI platforms, too, and Schwartz of Salesforce noted that over time this could shape their expectations of the internet more broadly, the way Google and Amazon did.
“It just feels inevitable that we are going to see a much more consistent amount of commerce transactions originate and, ultimately, natively happen on these AI agentic platforms,” said Birwadker.
Sixty-Eight Organizations Support Trump’s Pledge to Educate K-12 Students on AI
IBL News | New York
To date, sixty-eight organizations have signed the White House’s Pledge to America’s Youth: Investing in AI Education, which commits resources over the next four years and follows President Trump’s April 23 executive order on the subject.
Some companies signing the pledge include Google, Amazon, Apple, IBM, Pearson, NVIDIA, OpenAI, Microsoft, Oracle, Adobe, Cisco, Dell, Intel, McGraw-Hill, Workday, Booz Allen, and Magic School AI.
These organizations pledge “to make available, over the next four years, resources for youth and teachers through funding and grants, educational materials and curricula, technology and tools, teacher professional development programs, workforce development resources, and/or technical expertise and mentorship,” working alongside the White House Task Force on Artificial Intelligence Education.
“The Pledge will help make AI education accessible to K-12 students across the country, sparking curiosity in the technology and preparing the next generation for an AI-enabled economy. Fostering young people’s interest and expertise in artificial intelligence is crucial to maintaining American technological dominance,” the White House added.
Michael Kratsios, Director of the White House Office of Science and Technology Policy and Chair of the White House Task Force on AI Education, invited other organizations to join the pledge.
“AI is reshaping our economy and the way we live and work, and we must ensure the next generation of American workers is equipped with the skills they need to lead in this new era,” said Secretary of Labor Lori Chavez-DeRemer.
Brian Stone, performing the duties of the National Science Foundation (NSF) director, said that his institution will fund cutting-edge research, support teacher development, and expand access to STEM education.
As of June 30, 2025, these were the organizations supporting the Pledge:
Accenture, ACT | The App Association, Adobe, Alpha Schools, Amazon, AMD, Apple, AT&T, AutoDesk, Booz Allen, Brainly, Business Software Alliance, Cengage Group, Charter Communications, Cisco, ClassLink, Clever, Code.org, Cognizant, Comprendo.dev, Consumer Technology Association, Cyber Innovation Center, Dell Technologies, Ed Technology Specialists, Farm-Ed, GlobalFoundries (GF), HiddenLayer, HMH, HP, IBM, IEEE, Information Technology Industry Council (ITI), Intel, Interplay, Intuit, ISACA, MagicSchool, Mason Contractors Association of America (MCAA), McGraw Hill, Meta, Microsoft, National Children’s Museum, NVIDIA, OpenAI, Oracle, Palo Alto Networks, Pathfinder, Pearson, Prisms of Reality, Qualcomm, Roblox, Salesforce, SAP America, Inc., Scale AI, ServiceNow, SHRM, Siemens, Software & Information Industry Association, Stemuli, TeachShare, Telecommunications Industry Association (TIA), Thinkverse, Vantage Data Centers, Varsity Tutors, Winnie, Workday, and Y Combinator.