Jobs & Careers
Vibing With Amazon Kiro – KDnuggets


Products centered around large language models (LLMs), like ChatGPT or Gemini, have changed how developers work, making it easier to generate working code without a complete understanding of the underlying concepts. Tools like GitHub Copilot or Cursor show that AI can suggest effective code and boost developer productivity. However, these tools sometimes fail to understand our ultimate goal and may produce flawed code in the long run.
One of the currently thriving areas of AI research is agentic systems, where tools can generate what we need, understand our intent, and execute actions on our behalf. With an agent, we can have an intelligent assistant that comprehends our codebase, works across files, explains code behavior, and understands our ultimate goals — all capabilities that Amazon Kiro provides.
This article will explore Kiro and how it could help your work. Let’s get into it.
# Vibing With Kiro
Kiro is an integrated development environment (IDE) developed by the AWS team that embeds AI agents directly in the environment. As the Kiro tagline suggests, “Go from vibe coding to viable code,” the goal is to turn a developer’s vibe-coded prototype into a production-ready code system. It does this by generating plans, reasoning, and proposals that users can review and apply. Beyond the code itself, Kiro helps the user develop specifications, write design documentation, run tests, and set up tasks as a continuous workflow driven by AI agents.
Kiro is fundamentally different from popular tools like GitHub Copilot. While GitHub Copilot functions as an autocomplete tool that enhances individual files with limited context, Kiro emphasizes improving the entire system using an agent-structured plan for the development project.
Kiro works in two main ways:
- Vibe: Communicating with the project using natural language to explore and develop ideas. This mode suits open-ended prompts when users need a quick single-file edit or want to explore different solutions, such as “write a pagination helper” or “clean up this SQL join.”
- Spec: Plan first, then build. The spec generates and maintains requirements, design, and tasks that the agent executes step by step to produce code, documentation, and tests across multiple files. The idea is to convert a goal into a plan and let the agent implement it while keeping the documentation up to date. This approach fits when we already understand the solution we want and need to scale up the project, for example a Spec like “Add email verification and password reset.”
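To ground the Vibe example above, a request like “write a pagination helper” typically yields a small, self-contained function along these lines. This is a hypothetical sketch of what such a helper might look like, not actual Kiro output; the function name and return shape are illustrative:

```python
def paginate(items, page, page_size=10):
    """Return one page of `items` plus simple pagination metadata.

    `page` is 1-indexed; out-of-range pages yield an empty list.
    """
    if page < 1 or page_size < 1:
        raise ValueError("page and page_size must be positive")
    # Ceiling division via negation, so 25 items / 10 per page -> 3 pages.
    total_pages = max(1, -(-len(items) // page_size))
    start = (page - 1) * page_size
    return {
        "items": items[start:start + page_size],
        "page": page,
        "total_pages": total_pages,
        "has_next": page < total_pages,
    }
```

A single-file, single-purpose edit like this is exactly the scope Vibe targets; anything spanning requirements, tests, and multiple files is better expressed as a Spec.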
Kiro allocates cost and usage separately for Spec and Vibe, reflecting how developers actually work. The recommended workflow is to start in Vibe to explore solutions and draft the required specification, then switch to Spec to generate a detailed plan and ship features while documenting them.
In addition to Vibe and Spec, a few other prominent features make Kiro stand out, including agent hooks, steering, an autopilot mode, access to the MCP server, and built-in security controls.
With all these features, Kiro is clearly aimed at developers who want to accelerate their development cycle while maintaining essential engineering practices like documentation and design.
Currently, Kiro is only accessible through a free preview, and interested users must join a waitlist to try the IDE. The team has announced future tiers: Free, Pro ($20), Pro+ ($40), and Power ($200), each offering different monthly allowances for Vibe and Spec requests, with optional top-ups at $0.04 per Vibe and $0.20 per Spec request. Additionally, a two-week trial provides extra usage for users who gain access to the IDE.
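Given the quoted top-up prices ($0.04 per Vibe and $0.20 per Spec request), estimating a month's overage cost is simple arithmetic; the helper below is an illustrative back-of-the-envelope calculator, not an official Kiro tool:

```python
# Announced per-request top-up prices (USD).
VIBE_TOPUP = 0.04
SPEC_TOPUP = 0.20

def monthly_topup_cost(extra_vibes: int, extra_specs: int) -> float:
    """Cost of Vibe/Spec requests beyond a tier's monthly allowance."""
    return round(extra_vibes * VIBE_TOPUP + extra_specs * SPEC_TOPUP, 2)
```

For example, 100 extra Vibe requests and 25 extra Spec requests would add $4.00 + $5.00 = $9.00 on top of the tier fee, which makes it easy to compare topping up against moving to the next tier.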
Any interested user can download and install the IDE by following the installation guide. Once you have received your waitlist access code, you can explore the project page to start your first Amazon Kiro project.
# Conclusion
Amazon Kiro is an agentic IDE designed to boost developer productivity more effectively than standard AI IDE tools, which often function only as autocompletion tools. Kiro operates in two modes, Vibe and Spec, which support each other. Kiro also offers many features that enhance productivity, such as hooks, steering, autopilot, access to the MCP server, and robust security.
However, Kiro is currently limited to users on the waitlist, so it may be some time before it is widely available. Nevertheless, once it is open to the public, the IDE has the potential to significantly improve developer productivity.
I hope this has helped.
Cornellius Yudha Wijaya is a data science assistant manager and data writer. While working full-time at Allianz Indonesia, he loves to share Python and data tips through social media and written media. Cornellius writes on a variety of AI and machine learning topics.
NVIDIA Reveals Two Customers Accounted for 39% of Quarterly Revenue

NVIDIA disclosed on August 28, 2025, that two unnamed customers contributed 39% of its revenue in the July quarter, raising questions about the chipmaker’s dependence on a small group of clients.
The company posted record quarterly revenue of $46.7 billion, up 56% from a year ago, driven by insatiable demand for its data centre products.
In a filing with the U.S. Securities and Exchange Commission (SEC), NVIDIA said “Customer A” accounted for 23% of total revenue and “Customer B” for 16%. A year earlier, its top two customers made up 14% and 11% of revenue.
The concentration highlights the role of large buyers, many of whom are cloud service providers. “Large cloud service providers made up about 50% of the company’s data center revenue,” NVIDIA chief financial officer Colette Kress said on Wednesday. Data center sales represented 88% of NVIDIA’s overall revenue in the second quarter.
“We have experienced periods where we receive a significant amount of our revenue from a limited number of customers, and this trend may continue,” the company wrote in the filing.
One of the customers could possibly be Saudi Arabia’s AI firm Humain, which is building two data centers in Riyadh and Dammam, slated to open in early 2026. The company has secured approval to import 18,000 NVIDIA AI chips.
The second customer could be OpenAI or one of the major cloud providers — Microsoft, AWS, Google Cloud, or Oracle. Another possibility is xAI.
Previously, Elon Musk said xAI has 230,000 GPUs, including 30,000 GB200s, operational for training its Grok model in a supercluster called Colossus 1. Inference is handled by external cloud providers.
Musk added that Colossus 2, which will host an additional 550,000 GB200 and GB300 GPUs, will begin going online in the coming weeks. “As Jensen Huang has stated, xAI is unmatched in speed. It’s not even close,” Musk wrote in a post on X.

Meanwhile, OpenAI is preparing for a major expansion. Chief Financial Officer Sarah Friar said the company plans to invest in trillion-dollar-scale data centers to meet surging demand for AI computation.
The post NVIDIA Reveals Two Customers Accounted for 39% of Quarterly Revenue appeared first on Analytics India Magazine.
‘Reliance Intelligence’ is Here, In Partnership with Google and Meta

Reliance Industries chairman Mukesh Ambani has announced the launch of Reliance Intelligence, a new wholly owned subsidiary focused on artificial intelligence, marking what he described as the company’s “next transformation into a deep-tech enterprise.”
Addressing shareholders, Ambani said Reliance Intelligence had been conceived with four core missions—building gigawatt-scale AI-ready data centres powered by green energy, forging global partnerships to strengthen India’s AI ecosystem, delivering AI services for consumers and SMEs in critical sectors such as education, healthcare, and agriculture, and creating a home for world-class AI talent.
Work has already begun on gigawatt-scale AI data centres in Jamnagar, Ambani said, adding that they would be rolled out in phases in line with India’s growing needs.
These facilities, powered by Reliance’s new energy ecosystem, will be purpose-built for AI training and inference at a national scale.
Ambani also announced a “deeper, holistic partnership” with Google, aimed at accelerating AI adoption across Reliance businesses.
“We are marrying Reliance’s proven capability to build world-class assets and execute at India scale with Google’s leading cloud and AI technologies,” Ambani said.
Google CEO Sundar Pichai, in a recorded message, said the two companies would set up a new cloud region in Jamnagar dedicated to Reliance.
“It will bring world-class AI and compute from Google Cloud, powered by clean energy from Reliance and connected by Jio’s advanced network,” Pichai said.
He added that Google Cloud would remain Reliance’s largest public cloud partner, supporting mission-critical workloads and co-developing advanced AI initiatives.
Ambani further unveiled a new AI-focused joint venture with Meta.
He said the venture would combine Reliance’s domain expertise across industries with Meta’s open-source AI models and tools to deliver “sovereign, enterprise-ready AI for India.”
Meta founder and CEO Mark Zuckerberg, in his remarks, said the partnership aims to bring open-source AI to Indian businesses at scale.
“With Reliance’s reach and scale, we can bring this to every corner of India. This venture will become a model for how AI, and one day superintelligence, can be delivered,” Zuckerberg said.
Ambani also highlighted Reliance’s investments in AI-powered robotics, particularly humanoid robotics, which he said could transform manufacturing, supply chains and healthcare.
“Intelligent automation will create new industries, new jobs and new opportunities for India’s youth,” he told shareholders.
Calling AI an opportunity “as large, if not larger” than Reliance’s digital services push a decade ago, Ambani said Reliance Intelligence would work to deliver “AI everywhere and for every Indian.”
“We are building for the next decade with confidence and ambition,” he said, underscoring that the company’s partnerships, green infrastructure and India-first governance approach would be central to this strategy.
The post ‘Reliance Intelligence’ is Here, In Partnership with Google and Meta appeared first on Analytics India Magazine.
Cognizant, Workfabric AI to Train 1,000 Context Engineers

Cognizant has announced that it would deploy 1,000 context engineers over the next year to industrialise agentic AI across enterprises.
According to an official release, the company claimed that the move marks a “pivotal investment” in the emerging discipline of context engineering.
As part of this initiative, Cognizant said it is partnering with Workfabric AI, the company building the context engine for enterprise AI.
Cognizant’s context engineers will be powered by Workfabric AI’s ContextFabric platform, the statement said, adding that the platform transforms the organisational DNA of enterprises (how their teams work, including their workflows, data, rules, and processes) into actionable context for AI agents. Context engineering, the release added, is essential to enabling AI agents.