Why Agentic AI Isn’t Pure Hype (And What Skeptics Aren’t Seeing Yet)
We’ve all spent the last couple of years or so building applications with large language models. From chatbots that actually understand context to code generation tools that don’t just autocomplete but build something useful, we’ve all seen the progress.
Now, as agentic AI goes mainstream, you’re likely hearing familiar refrains: “It’s just hype,” “LLMs with extra steps,” “marketing fluff for venture capital.” Healthy skepticism is warranted with any emerging technology, but dismissing agentic AI as mere hype overlooks its practical benefits and potential.
Agentic AI isn’t just the next shiny thing in our perpetual cycle of tech trends. And in this article, we’ll see why.
What Exactly Is Agentic AI?
Let’s start by pinning down what agentic AI actually is.
Agentic AI refers to systems that can autonomously pursue goals, make decisions, and take actions to achieve objectives — often across multiple steps and interactions. Unlike traditional LLMs that respond to individual prompts, agentic systems maintain context across extended workflows, plan sequences of actions, and adapt their approach based on results.
Think of the difference between asking an LLM “What’s the weather like?” versus an agentic system that can check multiple weather services, analyze your calendar for outdoor meetings, suggest rescheduling if severe weather is expected, and actually send those calendar updates with your approval.
The key characteristics that separate agentic AI from standard LLM applications include:
Autonomous goal pursuit: These systems can break down complex objectives into actionable steps and execute them independently. Rather than requiring constant human prompting, they maintain focus on long-term goals.
Multi-step reasoning and planning: Agentic systems can think several moves ahead, considering the consequences of actions and adjusting strategies based on intermediate results.
Tool integration and environment interaction: They can work with APIs, databases, file systems, and other external resources as extensions of their capabilities.
Persistent context and memory: Unlike stateless LLM interactions, agentic systems maintain awareness across extended sessions, learning from previous interactions and building on past work.
From Simple Prompts to Agentic AI Systems
My journey (and perhaps, yours, too) with LLMs began with the classic use cases we all remember: text generation, summarization, and basic question-answering. The early applications were impressive but limited. You’d craft a prompt, get a response, and start over. Each interaction was isolated, requiring careful prompt engineering to maintain any sense of continuity.
The breakthrough came when we started experimenting with multi-turn conversations and function calling. Suddenly, LLMs could not just generate text but also interact with external systems. This was our first experience with something more sophisticated than pattern matching and text completion.
But even these enhanced LLMs had limitations. They:
- Were reactive rather than proactive,
- Depended on human guidance for complex tasks, and
- Struggled with multi-step workflows that required maintaining state across interactions.
Agentic AI systems address these limitations head-on. Recently, you’ve likely seen implementations of agents that can manage entire software development workflows — from initial requirements gathering through getting scripts ready for deployment.
Understanding the Agentic AI Architecture
The technical architecture of agentic AI systems reveals why they’re fundamentally different from traditional LLM applications. While a standard LLM application follows a simple request-response pattern, agentic systems implement sophisticated control loops that enable autonomous behavior.
Standard LLM Apps vs. Agentic AI Systems | Image by Author | draw.io (diagrams.net)
At the core is what we can call the “perceive-plan-act” cycle. The agent continuously perceives its environment through various inputs (user requests, system states, external data), plans appropriate actions based on its goals and current context, and then acts by executing those plans through tool usage or direct interaction.
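To make the cycle concrete, here is a minimal sketch of such a control loop in Python. It is illustrative only: the llm_call helper, the tools dictionary, and the shape of the planner’s response are hypothetical placeholders, not any particular framework’s API.

# Minimal perceive-plan-act loop (illustrative sketch; llm_call and the
# tool functions are hypothetical placeholders, not a real framework API).
def run_agent(goal: str, tools: dict, llm_call, max_steps: int = 10):
    history = []  # working memory for this task
    for _ in range(max_steps):
        # Perceive: assemble the current state (goal plus what happened so far)
        observation = {"goal": goal, "history": history}

        # Plan: ask the model to choose the next action and its arguments
        plan = llm_call(
            f"Given {observation}, pick an action from {list(tools)} "
            "or FINISH with a final answer."
        )  # assumed to return e.g. {"action": "search", "args": {...}}

        # Act: either finish, or execute the chosen tool and record the result
        if plan["action"] == "FINISH":
            return plan["answer"]
        result = tools[plan["action"]](**plan["args"])
        history.append({"action": plan["action"], "result": result})

    return None  # step budget exhausted without reaching the goal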
The planning component is particularly important. Modern agentic systems employ techniques like tree-of-thought reasoning, where they explore multiple possible action sequences before committing to a path. This allows them to make more informed decisions and recover from errors more gracefully.
Memory and context management represent another architectural leap. While traditional LLMs are essentially stateless, agentic systems maintain both short-term working memory for immediate tasks and long-term memory for learning from past interactions. This persistent state enables them to build on previous work and provide increasingly personalized assistance.
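As a rough illustration of this two-tier design (the class below is a hypothetical sketch; a production system would back the long-term tier with a database or vector store), the idea reduces to a bounded buffer plus a durable store:

from collections import deque

# Illustrative two-tier agent memory: a bounded short-term buffer
# plus a persistent long-term list (a stand-in for a real store).
class AgentMemory:
    def __init__(self, working_size: int = 20):
        self.working = deque(maxlen=working_size)  # short-term: recent events
        self.long_term = []                        # long-term: persisted facts

    def remember(self, event: str, persist: bool = False):
        self.working.append(event)
        if persist:
            self.long_term.append(event)  # in practice: write to a DB/vector store

    def context(self) -> str:
        # Durable knowledge first, then the recent window, for the next prompt
        return "\n".join(self.long_term + list(self.working))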
Tool integration has evolved beyond simple function calling to sophisticated orchestration of multiple services.
Real-World Agentic AI Applications That Actually Work
The proof of any technology lies in its practical applications. In my experience, agentic AI works best on tasks that require sustained attention, multi-step execution, and adaptive problem-solving.
Customer support automation has evolved beyond simple chatbots to agentic systems that can research issues, coordinate with multiple internal systems, and even escalate complex problems to human agents with detailed context and suggested solutions.
Development workflow automation is another promising application. You can build an agent that can take a high-level feature request, analyze existing codebases, generate implementation plans, write code across multiple files, run tests, fix issues, and even prepare deployment scripts. The key difference from code generation tools is its ability to maintain context across the entire development lifecycle.
Intelligent data processing is yet another example where agents can be helpful. Rather than writing custom scripts for each data transformation task, you can create agents that can understand data schemas, identify quality issues, suggest and implement cleaning procedures, and generate comprehensive reports — all while adapting their approach based on the specific characteristics of each dataset.
These applications succeed because they handle the complexity that human developers would otherwise need to manage manually. They’re not replacing human judgment but augmenting our capabilities by handling the orchestration and execution of well-defined processes.
Addressing the Skepticism Around Agentic AI
I understand the skepticism. Our industry has a long history of overhyped technologies that promised to revolutionize everything but delivered marginal improvements at best. The concerns about agentic AI are legitimate and worth addressing directly.
“It’s Just LLMs with Extra Steps” is a common criticism, but it misses the emergent properties that arise from combining LLMs with autonomous control systems. The “extra steps” create qualitatively different capabilities. It’s like saying a car is just an engine with extra parts — technically true, but the combination creates something fundamentally different from its components.
Reliability and hallucination concerns are valid but manageable with proper system design. Agentic systems can implement verification loops, human approval gates for critical actions, and rollback mechanisms for errors. In my experience, the key is designing systems that fail gracefully and maintain human oversight where appropriate.
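As one illustration of such a gate (a sketch with assumed tool names, not a specific framework’s API), an agent can be forced to pause for human sign-off before any irreversible action:

# Illustrative human approval gate: risky actions wait for sign-off.
# RISKY_ACTIONS and the tools dict are hypothetical placeholders.
RISKY_ACTIONS = {"send_email", "delete_file", "deploy"}

def execute_with_oversight(action: str, args: dict, tools: dict) -> dict:
    if action in RISKY_ACTIONS:
        print(f"Agent wants to run {action} with {args}")
        if input("Approve? [y/N] ").strip().lower() != "y":
            return {"status": "rejected_by_human"}
    try:
        return {"status": "ok", "result": tools[action](**args)}
    except Exception as exc:
        # Fail gracefully: report the error rather than crashing the loop
        return {"status": "error", "detail": str(exc)}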
Cost and complexity arguments have merit, but the economics improve as these systems become more capable. An agent that can complete tasks that would require hours of human coordination often justifies its computational costs, especially when considering the total cost of ownership including human time and potential errors.
Agentic AI and Developers
What excites me most about agentic AI is how it’s changing the developer experience. These systems serve as intelligent collaborators rather than passive tools. They can understand project context, suggest improvements, and even anticipate needs based on development patterns.
The debugging experience alone has been transformative. Instead of manually tracing through logs and stack traces, you can now describe symptoms to an agent that can analyze multiple data sources, identify potential root causes, and suggest specific remediation steps. The agent maintains context about the system architecture and recent changes, providing insights that would take considerable time to gather manually.
Code review has evolved from a manual process to a collaborative effort with AI agents that can identify not just syntax issues but architectural concerns, security implications, and performance bottlenecks. These agents understand the broader context of the application and can provide feedback that considers business requirements alongside technical constraints.
Project management has benefited enormously from agents that can track progress across multiple repositories, identify blockers before they become critical, and suggest resource allocation based on historical patterns and current priorities.
Looking Forward: The Practical Path to Agentic AI
The future of agentic AI isn’t about replacing developers — it’s about amplifying our capabilities and allowing us to focus on higher-level problem-solving. The agentic AI systems we’re building today handle routine tasks, coordinate complex workflows, and provide intelligent assistance for decision-making.
The technology is mature enough for practical applications while still rapidly evolving. The frameworks and tools are becoming more accessible, allowing developers to experiment with agentic capabilities without building everything from scratch.
I recommend you start small but think big. Begin with well-defined, contained workflows where the agent can provide clear value. Focus on tasks that require sustained attention or coordination across multiple systems — areas where traditional automation falls short but human oversight remains feasible.
To sum up: the question isn’t whether agentic AI will become mainstream — it’s how quickly we can learn to work effectively with these new collaborative partners, if you will.
Conclusion
Agentic AI represents a significant step in how we build and interact with AI systems. Of course, these systems are not perfect, and they require thoughtful implementation and appropriate oversight. But they’re also not just pure hype.
For developers willing to move beyond the initial skepticism and experiment with these systems, agentic AI offers genuine opportunities to build more intelligent, capable, and autonomous applications.
The hype cycle will eventually settle, as it always does. When it does, I believe we’ll find that agentic AI has quietly become an essential part of our development toolkit — not because it was overhyped, but because it actually works.
Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she’s working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.
Piyush Goyal Announces Second Tranche of INR 10,000 Cr Deep Tech Fund
IIT Madras and its alumni association (IITMAA) held the sixth edition of their global innovation and alumni summit, ‘Sangam 2025’, in Bengaluru on 4 and 5 July. The event brought together over 500 participants, including faculty, alumni, entrepreneurs, investors and students.
Union Commerce and Industry Minister Shri Piyush Goyal, addressing the summit, announced a second tranche of ₹10,000 crore under the government’s ‘Fund of Funds’, this time focused on supporting India’s deep tech ecosystem. “This money goes to promote innovation, absorption of newer technologies and development of contemporary fields,” he said.
The Minister added that guidelines for the fund are currently being finalised and will direct capital to strengthen the entire technology lifecycle — from early-stage research through to commercial deployment, not just startups.
He also referred to the recent Cabinet decision approving $12 billion (₹1 lakh crore) for the Department of Science and Technology in the form of a zero-interest 50-year loan. “It gives us more flexibility to provide equity support, grant support, low-cost support and roll that support forward as technologies get fine-tuned,” he said.
Goyal said the government’s push for indigenous innovation stems from cost advantages as well. “When we work on new technologies in India, our cost is nearly one-sixth, one-seventh of what it would cost in Switzerland or America,” he said.
The Minister underlined the government’s focus on emerging technologies such as artificial intelligence, machine learning, and data analytics. “Today, our policies are structured around a future-ready India… an India that is at the forefront of Artificial Intelligence, Machine Learning, computing and data analytics,” he said.
He also laid out a growth trajectory for the Indian economy. “From the 11th largest GDP in the world, we are today the fifth largest. By the end of Calendar year 2025, or maybe anytime during the year, we will be the fourth-largest GDP in the world. By 2027, we will be the third largest,” Goyal said.
Sangam 2025 featured a pitch fest that saw 20 deep tech and AI startups present to over 250 investors and venture capitalists. Selected startups will also receive institutional support from the IIT Madras Innovation Ecosystem, which has incubated over 500 ventures in the last decade.
Key speakers included Aparna Chennapragada (Chief Product Officer, Microsoft), Srinivas Narayanan (VP Engineering, OpenAI), and Tarun Mehta (Co-founder and CEO, Ather Energy), all IIT Madras alumni. The summit also hosted Kris Gopalakrishnan (Axilor Ventures, Infosys), Dr S. Somanath (former ISRO Chairman) and Bengaluru South MP Tejasvi Surya.
Prof. V. Kamakoti, Director, IIT Madras, said, “IIT Madras is committed to playing a pivotal role in shaping ‘Viksit Bharat 2047’. At the forefront of its agenda are innovation and entrepreneurship, which are key drivers for National progress.”
Ms. Shyamala Rajaram, President of IITMAA, said, “Sangam 2025 is a powerful confluence of IIT Madras and its global alumni — sparking bold conversations on innovation and entrepreneurship.”
Prof. Ashwin Mahalingam, Dean (Alumni and Corporate Relations), IIT Madras, added, “None of this would be possible without the unwavering support of our alumni community. Sangam 2025 embodies the strength of that network.”
Serve Machine Learning Models via REST APIs in Under 10 Minutes
If you like building machine learning models and experimenting with new stuff, that’s really cool — but to be honest, it only becomes useful to others once you make it available to them. For that, you need to serve it — expose it through a web API so that other programs (or humans) can send data and get predictions back. That’s where REST APIs come in.
In this article, you’ll learn how to go from a simple machine learning model to a production-ready API using FastAPI, one of Python’s fastest and most developer-friendly web frameworks, in under 10 minutes. And we won’t just stop at a “make it run” demo; we’ll also add things like:
- Validating incoming data
- Logging every request
- Adding background tasks to avoid slowdowns
- Gracefully handling errors
So, let me just quickly show you how our project structure is going to look before we move to the code part:
ml-api/
│
├── model/
│   ├── train_model.py     # Script to train and save the model
│   └── iris_model.pkl     # Trained model file
│
├── app/
│   ├── main.py            # FastAPI app
│   └── schema.py          # Input data schema using Pydantic
│
├── requirements.txt       # All dependencies
└── README.md              # Optional documentation
Step 1: Install What You Need
We’ll need a few Python packages for this project: FastAPI for the API, Uvicorn to serve it, scikit-learn for the model, and a few helpers like joblib and pydantic. You can install them using pip:
pip install fastapi uvicorn scikit-learn joblib pydantic
And save your environment:
pip freeze > requirements.txt
Step 2: Train and Save a Simple Model
Let’s keep the machine learning part simple so we can focus on serving the model. We’ll use the famous Iris dataset and train a random forest classifier to predict the type of iris flower based on its petal and sepal measurements.
Here’s the training script. Create a file called train_model.py in a model/ directory:
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import joblib, os

# Load the Iris dataset
X, y = load_iris(return_X_y=True)

# Hold out 20% of the data so the model can be evaluated later
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train a random forest classifier on the training split
clf = RandomForestClassifier(random_state=42)
clf.fit(X_train, y_train)

# Save the trained model to disk
os.makedirs("model", exist_ok=True)
joblib.dump(clf, "model/iris_model.pkl")
print("✅ Model saved to model/iris_model.pkl")
This script loads the data, splits it, trains the model, and saves it using joblib. Run it once to generate the model file:
python model/train_model.py
Step 3: Define What Input Your API Should Expect
Now we need to define how users will interact with the API. What should they send, and in what format?
We’ll use Pydantic, the data-validation library FastAPI is built on, to create a schema that describes and validates incoming data. Specifically, we’ll ensure that users provide four positive float values — for sepal length/width and petal length/width.
In a new file app/schema.py, add:
from pydantic import BaseModel, Field

class IrisInput(BaseModel):
    sepal_length: float = Field(..., gt=0, lt=10)
    sepal_width: float = Field(..., gt=0, lt=10)
    petal_length: float = Field(..., gt=0, lt=10)
    petal_width: float = Field(..., gt=0, lt=10)
Here, we’ve added value constraints (greater than 0 and less than 10) to keep our inputs clean and realistic.
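You can sanity-check the schema outside FastAPI, too; constructing an IrisInput with an out-of-range value raises a ValidationError:

from pydantic import ValidationError
from app.schema import IrisInput

try:
    IrisInput(sepal_length=-1.0, sepal_width=3.0,
              petal_length=4.7, petal_width=1.2)
except ValidationError as e:
    print(e)  # reports that sepal_length must be greater than 0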
Step 4: Create the API
Now it’s time to build the actual API. We’ll use FastAPI to:
- Load the model
- Accept JSON input
- Predict the class and probabilities
- Log the request in the background
- Return a clean JSON response
Let’s write the main API code inside app/main.py:
from fastapi import FastAPI, HTTPException, BackgroundTasks
from app.schema import IrisInput
import numpy as np, joblib, logging

# Load the model once at startup
model = joblib.load("model/iris_model.pkl")

# Set up logging
logging.basicConfig(filename="api.log", level=logging.INFO,
                    format="%(asctime)s - %(message)s")

# Create the FastAPI app
app = FastAPI()

@app.post("/predict")
def predict(input_data: IrisInput, background_tasks: BackgroundTasks):
    try:
        # Format the input as a NumPy array
        data = np.array([[input_data.sepal_length,
                          input_data.sepal_width,
                          input_data.petal_length,
                          input_data.petal_width]])

        # Run prediction
        pred = model.predict(data)[0]
        proba = model.predict_proba(data)[0]
        species = ["setosa", "versicolor", "virginica"][pred]

        # Log in the background so it doesn’t block the response
        background_tasks.add_task(log_request, input_data, species)

        # Return prediction and probabilities
        return {
            "prediction": species,
            "class_index": int(pred),
            "probabilities": {
                "setosa": float(proba[0]),
                "versicolor": float(proba[1]),
                "virginica": float(proba[2])
            }
        }
    except Exception:
        logging.exception("Prediction failed")
        raise HTTPException(status_code=500, detail="Internal error")

# Background logging task
def log_request(data: IrisInput, prediction: str):
    logging.info(f"Input: {data.dict()} | Prediction: {prediction}")
Let’s pause and understand what’s happening here.
We load the model once when the app starts. When a user hits the /predict endpoint with valid JSON input, we convert that into a NumPy array, pass it through the model, and return the predicted class and probabilities. If something goes wrong, we log it and return a friendly error.
Notice the BackgroundTasks part — this is a neat FastAPI feature that lets us do work after the response is sent (like saving logs). That keeps the API responsive and avoids delays.
Step 5: Run Your API
To launch the server, use uvicorn like this:
uvicorn app.main:app --reload
Visit: http://127.0.0.1:8000/docs
You’ll see an interactive Swagger UI where you can test the API.
Try this sample input:
{
"sepal_length": 6.1,
"sepal_width": 2.8,
"petal_length": 4.7,
"petal_width": 1.2
}
Or you can use curl to make the request like this:
curl -X POST "http://127.0.0.1:8000/predict" -H "Content-Type: application/json" -d \
'{
"sepal_length": 6.1,
"sepal_width": 2.8,
"petal_length": 4.7,
"petal_width": 1.2
}'
Both of them generate the same response, which looks like this:
{"prediction":"versicolor",
"class_index":1,
"probabilities": {
"setosa":0.0,
"versicolor":1.0,
"virginica":0.0 }
}
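And if you prefer to call the endpoint from Python, the same request with the requests library (assuming the server is still running locally) looks like this:

import requests

payload = {
    "sepal_length": 6.1,
    "sepal_width": 2.8,
    "petal_length": 4.7,
    "petal_width": 1.2,
}

# POST the JSON payload to the locally running API
response = requests.post("http://127.0.0.1:8000/predict", json=payload)
print(response.json())  # {'prediction': 'versicolor', ...}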
Optional Step: Deploy Your API
You can deploy the FastAPI app on:
- Render.com (zero config deployment)
- Railway.app (for continuous integration)
- Heroku (via Docker)
You can also extend this into a production-ready service by adding authentication (such as API keys or OAuth) to protect your endpoints, monitoring requests with Prometheus and Grafana, and using Redis or Celery for background job queues. You can also refer to my article: Step-by-Step Guide to Deploying Machine Learning Models with Docker.
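For instance, a minimal API-key check can be added with a FastAPI dependency. The header name and the hard-coded key below are placeholder choices for illustration, not a prescribed setup:

from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import APIKeyHeader

app = FastAPI()

# Placeholder key store; in production, load keys from a secrets manager
API_KEYS = {"my-secret-key"}
api_key_header = APIKeyHeader(name="X-API-Key")

def verify_api_key(api_key: str = Depends(api_key_header)) -> str:
    # Reject requests whose X-API-Key header is missing or unknown
    if api_key not in API_KEYS:
        raise HTTPException(status_code=401, detail="Invalid API key")
    return api_key

# Protect any endpoint by declaring the dependency
@app.get("/health", dependencies=[Depends(verify_api_key)])
def health():
    return {"status": "ok"}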
Wrapping Up
That’s it — and it’s already better than most demos. What we’ve built is more than just a toy example. It:
- Validates input data automatically
- Returns meaningful responses with prediction confidence
- Logs every request to a file (api.log)
- Uses background tasks so the API stays fast and responsive
- Handles failures gracefully
And all of it in under 100 lines of code.
Kanwal Mehreen Kanwal is a machine learning engineer and a technical writer with a profound passion for data science and the intersection of AI with medicine. She co-authored the ebook “Maximizing Productivity with ChatGPT”. As a Google Generation Scholar 2022 for APAC, she champions diversity and academic excellence. She’s also recognized as a Teradata Diversity in Tech Scholar, Mitacs Globalink Research Scholar, and Harvard WeCode Scholar. Kanwal is an ardent advocate for change, having founded FEMCodes to empower women in STEM fields.
AI-Powered Face Authentication Hits Record 15.87 Crore in June as Aadhaar Transactions Soar
The adoption of artificial intelligence in India’s digital identity infrastructure is scaling new highs, with Aadhaar’s AI-driven face authentication technology witnessing an unprecedented 15.87 crore transactions in June 2025.
This marks a dramatic surge from 4.61 crore transactions recorded in the same month last year, showcasing the growing trust and reliance on facial biometrics for secure and convenient identity verification, according to an official statement from the electronics & IT ministry.
According to data released by the Unique Identification Authority of India (UIDAI), a total of 229.33 crore Aadhaar authentication transactions were carried out in June 2025, reflecting a 7.8% year-on-year growth.
The steady rise highlights Aadhaar’s critical role in India’s expanding digital economy and its function as an enabler for accessing welfare schemes and public services.
Since its inception, Aadhaar has facilitated over 15,452 crore authentication transactions.
The AI/ML-powered face authentication solution, developed in-house by UIDAI, operates seamlessly across Android and iOS platforms, allowing users to verify their identity with a simple face scan, the ministry informed.
“This not only enhances user convenience but also strengthens the overall security framework,” it said.
More than 100 government ministries, departments, financial institutions, oil marketing companies, and telecom service providers are actively using face authentication to ensure smoother, faster, and safer delivery of services and entitlements.
The system’s rapid expansion underscores how AI is reshaping the landscape of digital public infrastructure in India, it said.
UIDAI’s face authentication technology, with nearly 175 crore cumulative transactions so far, is increasingly becoming central to Aadhaar’s verification ecosystem.
In addition to face authentication, Aadhaar’s electronic Know Your Customer (e-KYC) service recorded over 39.47 crore transactions in June 2025 alone. E-KYC continues to streamline onboarding and compliance processes across banking, financial services, and other regulated sectors, reinforcing Aadhaar’s position as a foundation for ease of living and doing business in India, the ministry shared.