

What makes a good AI prompt? Here are 4 expert tips


“And do you work well with AI?”

As ChatGPT, Copilot and other generative artificial intelligence (AI) tools become part of everyday workflows, more companies are looking for employees who can answer “yes” to this question. In other words, people who can prompt effectively, think with AI, and use it to boost productivity.

In fact, in a growing number of roles, being “AI fluent” is quickly becoming as important as being proficient in office software once was.

But we’ve all had that moment when we’ve asked an AI chatbot a question and received what feels like the most generic, surface-level answer. The problem isn’t the AI – you just haven’t given it enough to work with.

Think of it this way. During training, the AI will have “read” virtually everything on the internet. But because it makes predictions, it will give you the most probable, most common response. Without specific guidance, it’s like walking into a restaurant and asking for something good. You’ll likely get the chicken.

The solution lies in understanding that AI systems excel at adapting to context – but you have to provide it. So how exactly do you do that?

Crafting better prompts

You may have heard the term “prompt engineering”. It might sound like you need to design some kind of technical script to get results.

But today’s chatbots are great at human conversation. The format of your prompt is not that important. The content is.

To get the most out of your AI conversations, it’s important that you convey a few basics about what you want, and how you want it. Our approach follows the acronym CATS – context, angle, task and style.

Context means providing the setting and background information the AI needs. Instead of asking “How do I write a proposal?” try “I’m a nonprofit director writing a grant proposal to a foundation that funds environmental education programs for urban schools”. Upload relevant documents, explain your constraints, and describe your specific situation.

Angle (or attitude) leverages AI’s strength in role-playing and perspective-taking. Rather than getting a neutral response, specify the attitude you want. For example, “Act as a critical peer reviewer and identify weaknesses in my argument” or “Take the perspective of a supportive mentor helping me improve this draft”.

Task is about what you actually want the AI to do. “Help me with my presentation” is vague. But “Give me three ways to make my opening slide more engaging for an audience of small business owners” is actionable.

Style harnesses AI’s ability to adapt to different formats and audiences. Specify whether you want a formal report, a casual email, bullet points for executives, or an explanation suitable for teenagers. Tell the AI what voice to use – for example, formal and academic, technical, engaging or conversational.
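To make the four elements concrete, here is a minimal Python sketch of how a CATS-style prompt might be assembled. The helper function and the exact wording are illustrative assumptions, not part of the authors’ method:

```python
# Hypothetical sketch only: one simple way to assemble a prompt from the four
# CATS elements (context, angle, task, style). The wording is illustrative.

def build_cats_prompt(context: str, angle: str, task: str, style: str) -> str:
    """Combine context, angle, task and style into a single prompt string."""
    return "\n".join([
        f"Context: {context}",
        f"Angle: {angle}",
        f"Task: {task}",
        f"Style: {style}",
    ])

prompt = build_cats_prompt(
    context=("I'm a nonprofit director writing a grant proposal to a foundation "
             "that funds environmental education programs for urban schools."),
    angle="Act as a critical peer reviewer and identify weaknesses in my argument.",
    task="Give me three ways to make my opening section more engaging for the funder.",
    style="A formal but plain-language memo, in short bullet points.",
)
print(prompt)
```

Whether you paste the pieces in as one message or several, the point is the same: each of the four elements gives the AI something specific to adapt to.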


Context is everything

Besides crafting a clear, effective prompt, you can also focus on managing the surrounding information – a practice known as “context engineering”, which covers everything that surrounds the prompt.

That means thinking about the environment and information the AI has access to: its memory function, instructions leading up to the task, prior conversation history, documents you upload, or examples of what good output looks like.

You should think about prompting as a conversation. If you’re not happy with the first response, push for more, ask for changes, or provide more clarifying information.

Don’t expect the AI to give a ready-made response. Instead, use it to trigger your own thinking. If you feel the AI has produced a lot of good material but you get stuck, copy the best parts into a fresh session and ask it to summarise and continue from there.
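To show what that looks like in practice, here is a rough Python sketch of a conversation history being built up and then carried into a fresh session. The message structure and field names are assumptions modelled on common chat-style APIs, not any particular tool:

```python
# Illustrative sketch of "context engineering": the prompt is only one part of
# what the model sees. Prior turns, uploaded material and follow-up requests
# all shape the response. The message format below is an assumption based on
# common chat-style APIs, not any specific product.

draft_proposal = "…your draft text here…"

conversation = [
    {"role": "system",
     "content": "You are a supportive mentor helping me improve a grant proposal."},
    {"role": "user",
     "content": "Here is my draft:\n" + draft_proposal},
]

# Treat prompting as a conversation: if the first response isn't right,
# push for more, ask for changes, or add clarifying information.
conversation.append({"role": "assistant", "content": "…the model's first response…"})
conversation.append({"role": "user",
                     "content": "Good start. Now tighten the budget section and keep it under 200 words."})

# If a long session gets stuck, carry the best parts into a fresh one and ask
# the model to summarise and continue from there.
best_parts = "…the strongest paragraphs from the session…"
fresh_session = [
    {"role": "user",
     "content": "Summarise the following and continue improving it:\n" + best_parts},
]
```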

Keeping your wits

A word of caution though. Don’t get seduced by the human-like conversation abilities of these chatbots.

Always retain your professional distance and remind yourself that you are the only thinking part in this relationship. And always make sure to check the accuracy of anything an AI produces – errors are increasingly common.

AI systems are remarkably capable, but they need you – and human intelligence – to bridge the gap between their vast generic knowledge and your particular situation. Give them enough context to work with, and they might surprise you with how helpful they can be.





Cognigy Leads in Opus Research’s 2025 Conversational AI Intelliview


Distinguished for Innovation, Enterprise Readiness, and Visionary Approach to Agentic AI

Cognigy, a global leader in AI-powered customer service solutions, has been recognized as the leader in the newly released 2025 Conversational AI Intelliview from Opus Research. The report, titled “Decision-Maker’s Guide to Self-Service & Enterprise Intelligent Assistants,” identifies Cognigy as the leading platform across critical evaluation areas including product capability, enterprise fit, GenAI maturity, and deployment performance.

This recognition underscores Cognigy’s commitment to empowering enterprises with production-ready, scalable AI solutions that go far beyond chatbot basics. The report cites Cognigy’s strengths in visual AI agent orchestration, tool and function calling, AI Ops and observability, and a deep commitment to enterprise-grade control—all delivered through a platform built to scale real-time customer interactions across voice and digital channels.

“Cognigy exemplifies the next stage of conversational AI maturity,” said Ian Jacobs, VP & Lead Analyst at Opus Research. “Their agentic approach—combining real-time reasoning, orchestration, and observability—demonstrates how GenAI can move beyond experimentation into meaningful, measurable transformation in the contact center.”

Cognigy was one of the few vendors identified in the report as a “True Believer” in the evolution of GenAI-driven self-service, with tools designed to simplify deployment while giving enterprises full control. The platform’s AI Agent Manager enables businesses to create, configure, and continuously improve intelligent agents—defining persona, memory scope, and access to tools and knowledge—all through a flexible, low-code interface. Cognigy uniquely blends deterministic logic with generative capabilities, ensuring both speed and reliability in automation.

“This recognition from Opus Research is more than a milestone—it’s validation that our strategy is working,” said Alan Ranger, Vice President at Cognigy. “We’re delivering real-world, enterprise-grade automation that’s transforming contact centers. From financial services to healthcare to global retail, our customers are scaling faster, resolving issues in real time, and delivering truly modern service experiences.”

With global Fortune 500 customers and partnerships across the CCaaS and AI ecosystem, Cognigy continues to lead the way in delivering enterprise-ready AI that combines usability, speed, and impact. This latest industry acknowledgment further solidifies its position as the go-to platform for intelligent self-service.

To download a copy of the report, visit https://www.cognigy.com/opus-research-2025-conversational-ai-intelliview.





MIT researchers say using ChatGPT can rot your brain, truth is little more complicated – The Economic Times



Frontiers broadens AI‑driven integrity checks with dual integration



Frontiers has announced that external fraud‑screening tools – Cactus Communications’ Paperpal Preflight and Clear Skies’ Papermill Alarm and Oversight – have been integrated into its own Artificial Intelligence Review Assistant (AIRA) submission-screening system.

The expansion delivers what the companies describe as “an unprecedented, multilayered defence against organised research fraud, strengthening the reliability and integrity of every manuscript submitted to Frontiers”.

AIRA was launched in 2018, making Frontiers one of the early adopters of AI in submission checking. In 2022, Frontiers added its own papermill check to its comprehensive catalogue of AIRA checks, with the aim of tackling the industry-wide problem of manufactured manuscripts. The latest version, released in 2025, uses more than 15 data points and signals to flag potentially manufactured manuscripts for investigation and validation by a human expert.

Dr Elena Vicario, Head of Research Integrity at Frontiers, said: “Maintaining trust in the scholarly record demands constant innovation. By combining the unique strengths of Clear Skies and Cactus with our own AI capabilities, we are raising the bar for integrity screening and giving editors and reviewers the confidence that every submission has been rigorously vetted.”

Commenting on the importance of the partnership, Nikesh Gosalia, President, Global Academic and Publisher Relations at Cactus Communications, said: “This partnership with Frontiers reflects the confidence leading publishers have in our AI-driven solutions. Paperpal Preflight is a vital tool that supports editorial teams and existing homegrown solutions in identifying and addressing potential issues early in the publishing workflow.

“As one of the world’s largest and most impactful research publishers, Frontiers is taking an important step in strengthening research integrity, and we are proud to collaborate with them in this mission of safeguarding research.”

Adam Day, Founder and CEO of Clear Skies, added: “Clear Skies is thrilled to be working with the innovative team at Frontiers to integrate AIRA with Oversight. This integration makes our multi-award-winning services, including the Papermill Alarm, available across the Frontiers portfolio.

“Oversight is the first index of research integrity and recipient of the inaugural EPIC Award for integrity tools from the Society for Scholarly Publishing (SSP). As well as providing strategic Oversight to publishers, our detailed article reports support human Oversight of research integrity investigations on publications as well as journal submissions.”



