True story: I had to threaten Replit AI's brain that I would report its clever but dumb suggestions to the AI police for lying.
I also told ChatGPT's image creation department how deeply disappointed I was that, after 24 hours of iterations, it could not render the same high-quality image twice without changing an item or misspelling something. All of it learning, and part of the journey.
We need to remain flexible and open to new tools and approaches, and simultaneously be laser-focused. It's a contradiction, but once you start down this road, you will understand. Experimentation is a must. But it's also important to ignore the noise and constant hype and CAPS.
How our business's tech stack evolves
A few years ago, we started with ChatGPT and a few spreadsheets. Today, our technology arsenal spans fifteen AI platforms, from Claude and Perplexity to specialised tools like RollHQ for project management and Synthesia for AI video materials. Yet the most important lesson we’ve learned isn’t about the technology itself. It’s about the critical space between human judgment and machine capability.
The data tells a compelling story about where business stands today: McKinsey reports that 72 percent of organisations have adopted AI for at least one business function, yet only one percent believe they have reached maturity in their implementation. Meanwhile, 90 percent of professionals using AI report working faster, with 80 percent saying it improves their work quality.
This gap between widespread adoption and true excellence defines the challenge facing every service organisation today, including our own.
Our journey began like many others, experimenting with generative AI for document drafting and research. We quickly discovered that output quality was low and that simply adding tools wasn't enough. What mattered was creating a framework that put human expertise at the centre while leveraging AI's processing power. This led us to develop what we call our "human creating the loop" approach, an evolution beyond the traditional human-in-the-loop model. For us, it has become more about AI-in-the-loop than the other way round.
The distinction matters.
Human-in-the-loop suggests people checking machine outputs. Human creating the loop means professionals actively designing how AI integrates into workflows, setting boundaries, and maintaining creative control. Every client deliverable, every strategic recommendation, every customer interaction flows through experienced consultants who understand context, nuance, and the subtleties that define quality service delivery.
Our evolving tech stack
Our technology portfolio has grown strategically, with each tool selected for specific capabilities.
Each undergoes regular evaluation against key metrics, with fact-checking accuracy being paramount. We’ve found that combining multiple tools for fact checking and verification, especially Perplexity’s cited sources with Claude’s analytical capabilities, dramatically improves reliability.
The professional services landscape particularly demonstrates why human judgment remains irreplaceable. AI can analyse patterns, generate reports, and flag potential issues instantly. But understanding whether a client concern requires immediate attention or strategic patience, or whether to propose bold changes or incremental improvements: these decisions require wisdom that comes from experience, not algorithms.
That's also leaving aside AI's constant habit of generalising, making things up and, at times, blatantly lying.
For organisations beginning their AI journey, start with clear boundaries rather than broad adoption.
Investment in training will be crucial.
Research shows that 70 percent of AI implementation obstacles are people and process-related, not technical. Create internal champions who understand both the technology and your industry’s unique requirements.
Document what works and what doesn’t. Share learnings across teams. Address resistance directly by demonstrating how AI enhances rather than replaces human expertise.
The data supports this approach. Organisations with high AI maturity report three times higher return on investment than those just beginning. But maturity doesn't mean maximum automation. It means thoughtful integration that amplifies human capabilities.
Looking ahead, organisations that thrive will be those that view AI as an opportunity to elevate human creativity rather than replace it.
Alexander PR’s AI policy framework
Our approach to AI centres on human-led service delivery, as outlined in our core policy pillars:
- Oversight: Human-Led PR
We use AI selectively to improve efficiency, accuracy, and impact. Every output is reviewed, adjusted, and approved by experienced APR consultants – our approach to AI centres on AI-in-the-loop assurance and adherence to APR’s professional standards.
- Confidentiality
We treat client confidentiality and data security as paramount. No sensitive client information is ever entered into public or third-party AI platforms without explicit permission.
- Transparency
We are upfront with clients and stakeholders about when, how, and why we use AI to support our human-led services. Where appropriate, this includes clearly disclosing the role AI plays in research, content development, and our range of communications outputs.
- Objectivity
We regularly audit AI use to guard against bias and uphold fair, inclusive, and accurate communication. Outputs are verified against trusted sources to ensure factual integrity.
- Compliance
We adhere to all applicable privacy laws, industry ethical standards, and our own company values. Our approach to AI governance is continuously updated as technology and regulation evolve.
- Education
Our team stays up to date on emerging AI tools and risks. An internal working group regularly reviews best practices and ensures responsible and optimal use of evolving technologies.
This framework is a living document that adapts as technology and regulations evolve. The six pillars provide structure while allowing flexibility for innovation. We've learned that transparency builds trust. Clients appreciate knowing when AI assists in their projects, understanding that it means more human time for strategic thinking.
Most importantly, we’ve recognised our policy must balance innovation with responsibility. As new tools emerge and capabilities expand, we evaluate them against our core principle: does this enhance our ability to deliver exceptional service while maintaining the trust our clients place in us?
The answer guides every decision, ensuring our AI adoption serves our mission rather than defining it.
For more on our approach and regular updates on all things AI reputation, head to Alexander PR’s website or subscribe to the AI Rep Brief newsletter.