Books, Courses & Certifications

AI certification mandatory for 10k GTU faculty members


University warns of ineligibility for future endorsements over non-compliance with the new AICTE-aligned directive

Ahmedabad Mirror

Aug 05, 2025 06:00 AM | UPDATED: Aug 05, 2025 03:03 AM | 9 min read

In a major move to upgrade faculty skills in artificial intelligence, Gujarat Technological University (GTU) has mandated that all faculty members in its affiliated colleges must complete a certified online AI course. GTU has warned in a circular that non-compliance with the directive may lead to ineligibility for faculty endorsement in future. The decision, which follows guidelines from the All India Council for Technical Education (AICTE), will affect nearly 10,000 faculty members, GTU registrar KN Kher said.

Aligning with national policy

The university circular states that the new mandate is crucial in the current technological landscape. It aligns with the AICTE’s declaration of 2025 as the “Year of AI” and the vision of the National Education Policy (NEP) 2020. 

The objective, the circular notes, is “to build Artificial Intelligence (AI) competency among faculty across disciplines” and to support the national mission of fostering a comprehensive ecosystem for AI innovation. The directive aims to “build the future-ready Artificial Intelligence (AI) workforce and skill development.” Faculty members must complete a domain-relevant course from a list of approved platforms, which includes SWAYAM, NPTEL, Coursera, IITs, IIITs, and IISc, among others. Training programmes organised by the Department of Technical Education will be considered valid for government faculty. All existing and newly appointed faculty members are required to submit their completion certificates to their respective institutes and upload a copy to the GTU Affiliation portal by June 30.

52.6% of seats in diploma engineering colleges vacant

The Admission Committee for Professional Diploma Courses (ACPDC) on Monday said that 52.56 per cent of seats in diploma engineering colleges remained vacant at the end of the second admission round. Of the total 60,804 seats in diploma institutes, only 28,842 found takers, leaving 31,962 vacant. ACPDC officials said that the majority of vacant seats are in self-financed institutes offering diploma courses: of their 38,591 seats, only 9,887 found takers, leaving 28,704 (74.38 per cent) vacant.





How to Use Python’s dataclass to Write Less Code


 

Introduction

 
Writing classes in Python can get repetitive really fast. You’ve probably had moments where you’re defining an __init__ method, a __repr__ method, maybe even __eq__, just to make your class usable — and you’re like, “Why am I writing the same boilerplate again and again?”

That’s where Python’s dataclass comes in. It’s part of the standard library and helps you write cleaner, more readable classes with way less code. If you’re working with data objects — anything like configs, models, or even just bundling a few fields together — dataclass is a game-changer. Trust me, this isn’t just another overhyped feature — it actually works. Let’s break it down step by step.

 

What Is a dataclass?

 
A dataclass is a Python decorator that automatically generates boilerplate code for classes, like __init__, __repr__, __eq__, and more. It’s part of the dataclasses module and is perfect for classes that primarily store data (think: objects representing employees, products, or coordinates). Instead of manually writing repetitive methods, you define your fields, slap on the @dataclass decorator, and Python does the heavy lifting. Why should you care? Because it saves you time, reduces errors, and makes your code easier to maintain.

 

The Old Way: Writing Classes Manually

 
Here’s what you might be doing today if you’re not using dataclass:

class User:
    def __init__(self, name, age, is_active):
        self.name = name
        self.age = age
        self.is_active = is_active

    def __repr__(self):
        return f"User(name={self.name}, age={self.age}, is_active={self.is_active})"

 
It’s not terrible, but it’s verbose. Even for a simple class, you’re already writing the constructor and string representation manually. And if you need comparisons (==), you’ll have to write __eq__ too. Imagine adding more fields or writing ten similar classes — your fingers would hate you.

 

The Dataclass Way (a.k.a. The Better Way)

 
Now, here’s the same thing using dataclass:

from dataclasses import dataclass

@dataclass
class User:
    name: str
    age: int
    is_active: bool

 

That’s it. Python automatically adds the __init__, __repr__, and __eq__ methods for you under the hood. Let’s test it:

# Create three users
u1 = User(name="Ali", age=25, is_active=True)
u2 = User(name="Ahmed", age=25, is_active=True)
u3 = User(name="Ali", age=25, is_active=True)

# Print them
print(u1) 

# Compare them
print(u1 == u2) 
print(u1 == u3)

 

Output:

User(name='Ali', age=25, is_active=True)
False
True

 

Additional Features Offered by dataclass

 

// 1. Adding Default Values

You can set default values just like in function arguments:

@dataclass
class User:
    name: str
    age: int = 25
    is_active: bool = True

 

u = User(name="Alice")
print(u)

 

Output:

User(name='Alice', age=25, is_active=True)

 

Pro Tip: If you use default values, put those fields after non-default fields in the class definition. Python enforces this to avoid confusion (just like function arguments).
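A minimal sketch of that ordering rule in practice (the class names here are illustrative, not from the article):

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str              # required field with no default comes first
    age: int = 25          # fields with defaults come last
    is_active: bool = True

print(User("Alice"))       # User(name='Alice', age=25, is_active=True)

# Reversing the order is rejected as soon as the class is defined:
try:
    @dataclass
    class Broken:
        age: int = 25
        name: str          # non-default field after a default one
except TypeError as err:
    print(err)             # TypeError raised at class-definition time
```

Note that the error fires when the class is defined, not when you instantiate it, so you catch the mistake immediately.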

 

// 2. Making Fields Optional (Using field())

If you want more control — say you don’t want a field to be included in __repr__, or you want to set a default after initialization — you can use field():

from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    password: str = field(repr=False)  # Hide from __repr__

 
Now:

print(User("Alice", "supersecret"))

 

Output:

User(name='Alice')

Your password isn’t exposed in the printed output. Clean and secure.
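field() also solves the mutable-default problem. A short sketch (the Team class is my own example): a bare list default would be shared by every instance, so dataclasses refuses it and asks for default_factory instead.

```python
from dataclasses import dataclass, field

@dataclass
class Team:
    name: str
    # "members: list = []" would raise a ValueError: dataclasses
    # rejects mutable defaults because one shared list would leak
    # between instances. default_factory builds a fresh list each time.
    members: list = field(default_factory=list)

t1 = Team("Alpha")
t2 = Team("Beta")
t1.members.append("Kanwal")
print(t1.members)  # ['Kanwal']
print(t2.members)  # [] -- each instance got its own list
```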

 

// 3. Immutable Dataclasses (Like namedtuple, but Better)

If you want your class to be read-only (i.e., its values can’t be changed after creation), just add frozen=True:

@dataclass(frozen=True)
class Config:
    version: str
    debug: bool

 
Trying to modify an object of Config like config.debug = False will now raise an error: FrozenInstanceError: cannot assign to field 'debug'. This is useful for constants or app settings where immutability matters.
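A quick sketch of that failure mode (FrozenInstanceError is importable from the dataclasses module):

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class Config:
    version: str
    debug: bool

config = Config(version="1.0", debug=True)

try:
    config.debug = False            # any assignment is blocked
except FrozenInstanceError as err:
    print(f"blocked: {err}")        # cannot assign to field 'debug'

# A side benefit: frozen (plus eq=True, the default) makes instances
# hashable, so they can be used as dict keys or set members.
cache = {config: "loaded"}
print(cache[Config("1.0", True)])   # loaded -- equal configs hash alike
```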

 

// 4. Nesting Dataclasses

Yes, you can nest them too:

@dataclass
class Address:
    city: str
    zip_code: int

@dataclass
class Customer:
    name: str
    address: Address

 
Example Usage:

addr = Address("Islamabad", 46511)
cust = Customer("Qasim", addr)
print(cust)

Output:

Customer(name='Qasim', address=Address(city='Islamabad', zip_code=46511))

 

Pro Tip: Using asdict() for Serialization

 
You can convert a dataclass into a dictionary easily:

from dataclasses import asdict

u = User(name="Kanwal", age=10, is_active=True)
print(asdict(u))

 

Output:

{'name': 'Kanwal', 'age': 10, 'is_active': True}

 

This is useful when working with APIs or storing data in databases.
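For instance, asdict() recurses into nested dataclasses, so even the Customer/Address pair from earlier flattens into plain dicts that json.dumps() accepts directly:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Address:
    city: str
    zip_code: int

@dataclass
class Customer:
    name: str
    address: Address

cust = Customer("Qasim", Address("Islamabad", 46511))

# asdict() recurses into nested dataclasses (and lists/tuples/dicts
# of them), producing plain dicts that are JSON-serializable:
print(json.dumps(asdict(cust)))
# {"name": "Qasim", "address": {"city": "Islamabad", "zip_code": 46511}}
```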

 

When Not to Use dataclass

 
While dataclass is amazing, it’s not always the right tool for the job. Here are a few scenarios where you might want to skip it:

  1. If your class is more behavior-heavy (i.e., filled with methods and not just attributes), then dataclass might not add much value. It’s primarily built for data containers, not service classes or complex business logic.
  2. You can override the auto-generated dunder methods like __init__, __eq__, __repr__, etc., but if you’re doing it often, maybe you don’t need a dataclass at all. Especially if you’re doing validations, custom setup, or tricky dependency injection.
  3. For performance-critical code (think: games, compilers, high-frequency trading), every byte and cycle matters. dataclass adds a small overhead for all the auto-generated magic. In those edge cases, go with manual class definitions and fine-tuned methods.

 

Final Thoughts

 
Python’s dataclass isn’t just syntactic sugar — it actually makes your code more readable, testable, and maintainable. If you’re dealing with objects that mostly store and pass around data, there’s almost no reason not to use it. If you want to study deeper, check out the official Python docs or experiment with advanced features. And since it’s part of the standard library, there are zero extra dependencies. You can just import it and go.
 
 

Kanwal Mehreen is a machine learning engineer and a technical writer with a profound passion for data science and the intersection of AI with medicine. She co-authored the ebook “Maximizing Productivity with ChatGPT”. As a Google Generation Scholar 2022 for APAC, she champions diversity and academic excellence. She’s also recognized as a Teradata Diversity in Tech Scholar, Mitacs Globalink Research Scholar, and Harvard WeCode Scholar. Kanwal is an ardent advocate for change, having founded FEMCodes to empower women in STEM fields.





AI Security Takes Center Stage at Black Hat USA 2025 – O’Reilly


The security landscape is undergoing yet another major shift, and nowhere was this more evident than at Black Hat USA 2025. As artificial intelligence (especially the agentic variety) becomes deeply embedded in enterprise systems, it’s creating both security challenges and opportunities. Here’s what security professionals need to know about this rapidly evolving landscape.

AI systems—and particularly the AI assistants that have become integral to enterprise workflows—are emerging as prime targets for attackers. In one of the most interesting and scariest presentations, Michael Bargury of Zenity demonstrated previously unknown “0click” exploit methods affecting major AI platforms including ChatGPT, Gemini, and Microsoft Copilot. These findings underscore how AI assistants, despite their robust security measures, can become vectors for system compromise.

AI security presents a paradox: As organizations expand AI capabilities to enhance productivity, they must necessarily increase these tools’ access to sensitive data and systems. This expansion creates new attack surfaces and more complex supply chains to defend. NVIDIA’s AI red team highlighted this vulnerability, revealing how large language models (LLMs) are uniquely susceptible to malicious inputs, and demonstrated several novel exploit techniques that take advantage of these inherent weaknesses.

However, it’s not all new territory. Many traditional security principles remain relevant and are, in fact, more crucial than ever. Nathan Hamiel and Nils Amiet of Kudelski Security showed how AI-powered development tools are inadvertently reintroducing well-known vulnerabilities into modern applications. Their findings suggest that basic application security practices remain fundamental to AI security.

Looking forward, threat modeling becomes increasingly critical but also more complex. The security community is responding with new frameworks designed specifically for AI systems, such as MAESTRO and NIST’s AI Risk Management Framework. The OWASP Agentic Security Top 10 project, launched during this year’s conference, provides a structured approach to understanding and addressing AI-specific security risks.

For security professionals, the path forward requires a balanced approach: maintaining strong fundamentals while developing new expertise in AI-specific security challenges. Organizations must reassess their security posture through this new lens, considering both traditional vulnerabilities and emerging AI-specific threats.

The discussions at Black Hat USA 2025 made it clear that while AI presents new security challenges, it also offers opportunities for innovation in defense strategies. Mikko Hypponen’s opening keynote presented a historical perspective on the last 30 years of cybersecurity advancements and concluded that security is not only better than it’s ever been but poised to leverage a head start in AI usage. Black Hat has a way of underscoring the reasons for concern, but taken as a whole, this year’s presentations show us that there are also many reasons to be optimistic. Individual success will depend on how well security teams can adapt their existing practices while embracing new approaches specifically designed for AI systems.





Looking Forward to AI Codecon – O’Reilly


I’m really looking forward to our second O’Reilly AI Codecon, Coding for the Agentic World, which is happening online on September 9 from 8 a.m. to noon Pacific time, with a follow-on day of additional demos on September 16. But I’m also looking forward to how the AI market itself unfolds: the surprising twists and turns ahead as users and developers apply AI to real-world problems.

The pages linked above give details on the program for the events. What I want to give here is a bit of the why behind the program, with a bit more detail on some of the fireside chats I will be leading.

From Invention to Application

There has been so much focus in the past on the big AI labs, the model developers, and their razzle dazzle about AGI, or even ASI. That narrative implied that we were heading towards something unprecedented. But if this is a “normal technology” (albeit one as transformational as electricity, the internal combustion engine, or the internet), we know that LLMs themselves are just the beginning of a long process of discovery, product invention, business adoption, and societal adaptation.

That process of collaborative discovery of the real uses for AI and reinvention of the businesses that use it is happening most clearly in the software industry. It is where AI is being pushed to the limits, where new products beyond the chatbot are being introduced, where new workflows are being developed, and where we understand what works and what doesn’t.

This work is often being pushed forward by individuals, who are “learning by doing.” Some of these individuals work for large companies, others for startups, others for enterprises, and others as independent hackers.

Our focus in these AI Codecon events is to smooth adoption of AI by helping our customers cut through the hype and understand what is working. O’Reilly’s mission has always been changing the world by sharing the knowledge of innovators. In our events, we always look for people who are at the forefront of invention. As outlined in the call to action for the first event, I was concerned about the chatter that AI would make developers obsolete. I argued instead that it would profoundly change the process of software development and the jobs that developers do, but that it would make them more important than ever.

It looks like I was right. There is a huge ferment, with so much new to learn and do that it’s a really exciting time to be a software developer. I’m really excited about the practicality of the conversation. We’re not just talking about the “what if.” We’re seeing new AI-powered services meeting real business needs. We are witnessing the shift from human-centric workflows to agent-centric workflows, and it’s happening faster than you think.

We’re also seeing widespread adoption of the protocols that will power it all. If you’ve followed my work from open source to web 2.0 to the present, you know that I believe strongly that the most dynamic systems have “an architecture of participation.” That is, they aren’t monolithic. The barriers to entry need to be low and business models fluid (at least in the early stages) for innovation to flourish.

When AI was framed as a race for superintelligence, there was a strong expectation that it would be winner takes all. The first company to get to ASI (or even just to AGI) would soon be so far ahead that it would inevitably become a dominant monopoly. Developers would all use its APIs, making it into the single dominant platform for AI development.

Protocols like MCP and A2A are instead enabling a decentralized AI future. The explosion of entrepreneurial activity around agentic AI reminds me of the best kind of open innovation, much like I saw in the early days of the personal computer and the internet.

I was going to use my opening remarks to sound that theme, and then I read Alex Komoroske’s marvelous essay, “Why Centralized AI Is Not Our Inevitable Future.” So I asked him to do it instead. He’s going to give an updated, developer-focused version of that as our kickoff talk.

Then we’re going into a section on agentic interfaces. We’ve lived for decades with the GUI (either on computers or mobile applications) and the web as the dominant ways we use computers. AI is changing all that.

It’s not just agentic interfaces, though. It’s really developing true AI-native products, searching out the possibilities of this new computing fabric.

The Great Interface Rethink

In the “normal technology” framing, a fundamental technology innovation is distinct from products based on it. Think of the invention of the LLM itself as electricity, and ChatGPT as the equivalent of Edison’s incandescent light bulb and the development of the distribution network to power it.

There’s a bit of a lesson in the fact that the telegraph was the first large-scale practical application of electricity, over 40 years before Edison’s lightbulb. The telephone was another killer app that used electricity to power it. But despite their scale, these were specialized devices. It was the infrastructure for incandescent lighting that turned electricity into a general purpose technology.

The world soon saw electrical resistance products like irons and toasters, and electric motors powering not just factories but household appliances such as washing machines and eventually refrigerators and air conditioning. Many of these household products were plugged into light sockets, since the pronged plug as we know it today wasn’t introduced until 30 years after the first light bulb.

Found on Facebook: “Any ideas what this would have been used for? I found it after pulling up carpet – it’s in the corner of a closet in my 1920s ‘fixer-upper’ that I’m slowly bringing back to life. It appears to be for a light bulb and the little flip top is just like floor outlets you see today, but can’t figure out why it would be directly on the floor.”

The lesson is that at some point in the development of a general purpose technology, product innovation takes over from pure technology innovation. That’s the phase we’re entering now.

Look at the evolution of LLM-based products: GitHub Copilot embedded AI into Visual Studio Code; the interface was an extension to VS Code, a ten-year-old GUI-based program. Google’s AI efforts were tied into its web-based search products. ChatGPT broke the mold and introduced the first radically new interface since the web browser. Suddenly, chat was the preferred new interface for everything. But Claude took things further with Artifacts and then Claude Code, and once coding assistants gained more complex interfaces, that kicked off today’s fierce competition between coding tools. The next revolution is the construction of a new computing paradigm where software is composed of intelligent, autonomous agents.

I’m really looking forward to Rachel-Lee Nabors’ talk on how, with an agentic interface, we might transcend the traditional browser: AI agents can adapt content directly to users, offering privacy, accessibility, and flexibility that legacy web interfaces cannot match.

But it seems to me that there will be two kinds of agents, which I call “demand side” and “supply side” agents. What’s a “demand side” agent? Instead of navigating complex apps, you’ll simply state your goal. The agent will understand the context, access the necessary tools, and present you with the result. The vision is still science fiction. The reality is often a kludge powered by browser use or API calls, with MCP servers increasingly offering an AI-friendlier interface for those demand side agents to interact with. But why should it stop there? MCP servers are static interfaces. What if there were agents on both sides of the conversation, in a dynamic negotiation? I suspect that while demand side agents will be developed by venture funded startups, most server side agents will be developed by enterprises as a kind of conversational interface for both humans and AI agents that want access to their complex workflows, data, and business models. And those enterprises will often be using agentic platforms tailored for their use. That’s part of the “supply side agent” vision of companies like Sierra. I’ll be talking with Sierra co-founder Clay Bavor about this next step in agentic development.

We’ve grown accustomed to thinking about agents as lonely consumers—“tell me the weather,” “scan my code,” “summarize my inbox.” But that’s only half the story. If we build supply-side agent infrastructure—autonomous, discoverable, governed, negotiated—we unlock agility, resilience, security, and collaboration.

My interest in product innovation, not just advances in the underlying technology, is also why I’m excited about my fireside chat with Josh Woodward, who co-led the team that developed NotebookLM at Google. I’m a huge fan of NotebookLM, which in many ways brought the power of RAG (retrieval-augmented generation) to end users, allowing them to collect a set of documents into Google Drive, and then use that collection to drive chat, audio overviews of documents, study guides, mind maps, and much more.

NotebookLM is also a lovely way to build on the deep collaborative infrastructure provided by Google Drive. We need to think more deeply about collaborative interfaces for AI. Right now, AI interaction is mostly a solitary sport. You can share the outputs with others, but not the generative process. I wrote about this recently in “People Work in Teams, AI Assistants in Silos.” I think that’s a big miss, and I’m hoping to probe Josh about Google’s plans in this area, and I’m eager to see other innovations in AI-mediated human collaboration.

GitHub is another existing tool for collaboration that has become central to the AI ecosystem. I’m really looking forward to talking with outgoing CEO Thomas Dohmke about the ways that GitHub already provides a kind of exoskeleton for collaboration when using AI code generation tools. It seems to me that one of the frontiers of AI-human interfaces will be those that enable not just small teams but eventually large groups to collaborate. I suspect that GitHub may have more to teach us about that future than we now suspect.

And finally, we are now learning that managing context is a critical part of designing effective AI applications. My co-chair Addy Osmani will be talking about the emergence of context engineering as a real discipline, and its relevance to agentic AI development.

Tool-Chaining Agents and Real Workflows

Today’s AI tools are largely solo performers—a Copilot suggesting code or a ChatGPT answering a query. The next leap is from single agents to interconnected systems. The program is filled with sessions on “tool-to-tool workflows” and multi-agent systems.

Ken Kousen will showcase the new generation of coding agents, including Claude Code, Codex CLI, Gemini CLI, and Junie, that help developers navigate codebases, automate tasks, and even refactor intelligently. In her talk, Angie Jones takes it further: agents that go beyond code generation to manage PRs, write tests, and update documentation—stepping “out of the IDE” and into real-world workflows.

Even more exciting is the idea of agents collaborating with each other. The Demo Day will showcase a multi-agent coding system where agents share, correct, and evolve code together. This isn’t science fiction; Amit Rustagi’s talk on decentralized AI agent infrastructure using technologies like WebAssembly and IPFS provides a practical architectural framework for making these agent swarms a reality.

The Crucial Ingredient: Common Protocols

How do all these agents talk to each other? How do they discover new tools and use them safely? The answer that echoes throughout the agenda is the Model Context Protocol (MCP).

Much as the distribution network for electricity was the enabler for all of the product innovation of the electrical revolution, MCP is the foundational plumbing, the universal language that will allow this new ecosystem to flourish. Multiple sessions and an entire Demo Day are dedicated to it. We’ll see how Google is using it for agent-to-agent communication, how it can be used to control complex software like Blender with natural language, and even how it can power novel SaaS product demos.

The heavy focus on a standardized protocol signals that the industry is maturing past cool demos and is now building the robust, interoperable infrastructure needed for a true agentic economy.

If the development of the internet is any guide, though, MCP is a beginning, not the end. TCP/IP became the foundation of a layered protocol stack. It is likely that MCP will be followed by many more specialized protocols.

Why This Matters

Autonomous, Distributed AI: Agents that chain tasks and operate behind the scenes can unlock entirely new ways of building software.

Human Empowerment & Privacy: The push against centralized AI systems is a reminder that tools should serve users, not control them.

Context as Architecture: Elevating input design to first-class engineering will greatly improve reliability, trust, and AI behavior over time.

New Developer Roles: We’re seeing developers transition from writing code to orchestrating agents, designing workflows, and managing systems.

MCP & Network Effects: The idea of an “AI-native web,” where agents use standardized protocols to talk, is powerful, open-ended, and full of opportunity.

I look forward to seeing you there!


AI tools are quickly moving beyond chat UX to sophisticated agent interactions. Our upcoming AI Codecon event, Coding for the Agentic World, will highlight how developers are already using agents to build innovative and effective AI-powered experiences. We hope you’ll join us on September 9 to explore the tools, workflows, and architectures defining the next era of programming. It’s free to attend. Register now to save your seat. And join us for O’Reilly Demo Day on September 16 to see how experts are shaping AI systems to work for them via MCP.


