Jobs & Careers
HCLTech Appoints Amitabh Kant as Independent Director

HCLTech has appointed Amitabh Kant, India’s former G20 Sherpa, as an independent director on its board effective September 8.
Kant played a key role in India’s G20 Presidency in 2022-23, leading negotiations that resulted in the unanimous New Delhi Leaders’ Declaration. His work pushed forward global consensus on issues such as digital public infrastructure, climate finance, technology, and geopolitical cooperation.
Over a career spanning several decades, Kant has served in top government roles, including CEO of NITI Aayog, where he oversaw initiatives like the Aspirational Districts Program aimed at improving governance and development in India’s most backward regions.
He also helmed the Department for Industrial Policy and Promotion, the Delhi-Mumbai Industrial Corridor Development Corporation, and earlier, as Secretary of Tourism in Kerala, launched the “God’s Own Country” campaign that went on to inspire the nationwide “Incredible India” initiative.
At the national level, he has been closely associated with flagship programs such as Make in India, Startup India, the Ease of Doing Business reforms, and Production Linked Incentive schemes.
“We are delighted to have Amitabh Kant join the Board. His rich experience in building public sector institutions and contribution to India’s reforms will offer immense insights towards shaping HCLTech’s growth strategy,” said Roshni Nadar Malhotra, chairperson, HCLTech.
C. Vijayakumar, CEO & managing director of HCLTech, said, “Amitabh Kant joins us at a pivotal moment in our journey as well as within the industry. His thought leadership and long-term thinking will be invaluable in shaping our strategy.”
Kant said he looked forward to contributing to the company’s growth, calling HCLTech “among the finest corporate institutions in India.”
5 Tips for Building Optimized Hugging Face Transformer Pipelines


# Introduction
Hugging Face has become the standard for many AI developers and data scientists because it drastically lowers the barrier to working with advanced AI. Rather than working with AI models from scratch, developers can access a wide range of pretrained models without hassle. Users can also adapt these models with custom datasets and deploy them quickly.
One of the Hugging Face API wrappers is Transformers Pipelines, a high-level interface that bundles a pretrained model, its tokenizer, and the pre- and post-processing steps needed to make an AI use case work. These pipelines abstract away complex code behind a simple, consistent API.
However, working with Transformers Pipelines can get messy and may not yield an optimal pipeline. That is why we will explore five different ways you can optimize your Transformers Pipelines.
Let’s get into it.
# 1. Batch Inference Requests
Often, when using Transformers Pipelines, we do not fully utilize the graphics processing unit (GPU). Batch processing of multiple inputs can significantly boost GPU utilization and enhance inference efficiency.
Instead of processing one sample at a time, you can use the pipeline’s batch_size parameter or pass a list of inputs so the model processes several inputs in one forward pass. Here is a code example:
from transformers import pipeline

pipe = pipeline(
    task="text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    device_map="auto"
)

texts = [
    "Great product and fast delivery!",
    "The UI is confusing and slow.",
    "Support resolved my issue quickly.",
    "Not worth the price."
]

results = pipe(texts, batch_size=16, truncation=True, padding=True)
for r in results:
    print(r)
By batching requests, you can achieve higher throughput with only a minimal impact on latency.
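For very large corpora, it can also help to stream inputs in chunks rather than materializing every result at once. A minimal, generic helper (not part of the Transformers API; the names below are illustrative) might look like:

```python
def batched(items, size):
    """Yield successive fixed-size chunks from a list.

    Keeps memory bounded when feeding a very large corpus
    through a pipeline chunk by chunk.
    """
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Example: process a large corpus in chunks of 256,
# letting the pipeline batch within each chunk:
# for chunk in batched(all_texts, 256):
#     results.extend(pipe(chunk, batch_size=16))
```

Each chunk stays small enough to hold in memory, while batch_size still controls how many samples share a forward pass inside the chunk.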
# 2. Use Lower Precision And Quantization
Many pretrained models are simply too large for the memory available in development and production environments, causing inference to fail. Lowering numerical precision reduces memory usage and speeds up inference without sacrificing much accuracy.
For example, here is how to use half precision on the GPU in a Transformers Pipeline:
import torch
from transformers import AutoModelForSequenceClassification

# model_id is your checkpoint, e.g. "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(
    model_id,
    torch_dtype=torch.float16
)
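As a rough sanity check on why this works, halving the precision halves the memory taken by the weights themselves. The parameter count below is an illustrative, DistilBERT-scale figure, not an exact number:

```python
def weight_memory_mb(num_params: int, bytes_per_param: int) -> float:
    """Approximate memory occupied by model weights alone."""
    return num_params * bytes_per_param / (1024 ** 2)

# ~66M parameters at fp32 (4 bytes/param) vs fp16 (2 bytes/param)
fp32_mb = weight_memory_mb(66_000_000, 4)
fp16_mb = weight_memory_mb(66_000_000, 2)
```

Activations, the KV cache, and framework overhead add more on top, but the weights usually dominate for a loaded model.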
Similarly, quantization techniques can compress model weights without noticeably degrading performance:
# Requires bitsandbytes for 8-bit quantization
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    load_in_8bit=True,  # newer transformers versions prefer quantization_config=BitsAndBytesConfig(load_in_8bit=True)
    device_map="auto"
)
Using lower precision and quantization in production usually speeds up pipelines and reduces memory use without significantly impacting model accuracy.
# 3. Select Efficient Model Architectures
In many applications, you do not need the largest model to solve the task. Selecting a lighter transformer architecture, such as a distilled model, often yields better latency and throughput with an acceptable accuracy trade-off.
Compact models or distilled versions, such as DistilBERT, retain most of the original model’s accuracy but with far fewer parameters, resulting in faster inference.
Choose a model whose architecture is optimized for inference and suits your task’s accuracy requirements.
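When comparing candidate architectures, a quick parameter count is a useful first proxy for inference cost. Here is a small helper, assuming a PyTorch-style model that exposes a parameters() method (the commented checkpoint name is just an example):

```python
def count_parameters(model) -> int:
    """Total trainable parameters; a rough proxy for model size and latency."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Example with a real checkpoint (commented out to avoid a download here):
# from transformers import AutoModel
# print(count_parameters(AutoModel.from_pretrained("distilbert-base-uncased")))
```

Parameter count is not the whole story, since architecture and sequence length also matter, but it is a fast first filter when shortlisting models.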
# 4. Leverage Caching
Many systems waste compute by repeating expensive work. Caching can significantly enhance performance by reusing the results of costly computations. During text generation, the key-value (KV) cache stores the attention states of earlier tokens so each new token does not recompute them; in Transformers, generate enables this via use_cache:
import torch

# model and inputs come from a causal LM setup,
# e.g. AutoModelForCausalLM plus a tokenized prompt
with torch.inference_mode():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=120,
        do_sample=False,
        use_cache=True  # reuse attention key/value states across decoding steps
    )
Efficient caching reduces computation time and improves response times, lowering latency in production systems.
# 5. Use An Accelerated Runtime Via Optimum (ONNX Runtime)
Many pipelines run in PyTorch’s default eager mode, which adds Python overhead and extra memory copies. Using Optimum with Open Neural Network Exchange (ONNX) Runtime converts the model to a static graph and fuses operations, so the runtime can use faster kernels on a central processing unit (CPU) or GPU with less overhead. The result is usually faster inference, especially on CPU or mixed hardware, without changing how you call the pipeline.
Install the required packages with:
pip install -U transformers optimum[onnxruntime] onnxruntime
Then, convert the model with code like this:
from optimum.onnxruntime import ORTModelForSequenceClassification

# export=True converts the checkpoint to ONNX on the fly
# (older optimum versions used from_transformers=True instead)
ort_model = ORTModelForSequenceClassification.from_pretrained(
    model_id,
    export=True
)
By converting the pipeline to ONNX Runtime through Optimum, you can keep your existing pipeline code while getting lower latency and more efficient inference.
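To verify the speedup on your own hardware, a tiny timing harness is enough. The one below is generic and works for both the original and the ONNX-backed pipeline; pt_pipe and ort_pipe in the usage comment are hypothetical names for your two variants:

```python
import time

def mean_latency(fn, payload, warmup=2, runs=10):
    """Average seconds per call after a short warmup."""
    for _ in range(warmup):
        fn(payload)  # warmup calls are excluded from timing
    start = time.perf_counter()
    for _ in range(runs):
        fn(payload)
    return (time.perf_counter() - start) / runs

# Example (hypothetical pipeline variants):
# baseline = mean_latency(pt_pipe, "Great product and fast delivery!")
# onnx = mean_latency(ort_pipe, "Great product and fast delivery!")
```

Warming up first matters because the initial calls pay one-time costs (graph compilation, memory allocation) that would otherwise skew the average.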
# Wrapping Up
Transformers Pipelines is an API wrapper in the Hugging Face framework that simplifies AI application development by condensing complex code into simpler interfaces. In this article, we explored five ways to optimize Hugging Face Transformers Pipelines: batching inference requests, lowering precision and quantizing, selecting efficient model architectures, leveraging caching, and using an accelerated runtime.
I hope this has helped!
Cornellius Yudha Wijaya is a data science assistant manager and data writer. While working full-time at Allianz Indonesia, he loves to share Python and data tips via social media and writing media. Cornellius writes on a variety of AI and machine learning topics.
Hyderabad-Based MIC Electronics, TOP2 Join Hands to Boost Chipmaking in India

MIC Electronics Limited has signed a memorandum of understanding (MoU) with Singapore-based VC firm TOP2 Pte Ltd to boost India’s semiconductor manufacturing capacity.
Under the agreement, TOP2 will help MIC identify and finalise a fabrication partner from Taiwan, aiming to start wafer production in India with a capacity of 25,000 to 30,000 wafers per month.
The collaboration supports India’s $10 billion semiconductor mission, which seeks to reduce reliance on imports and develop domestic chipmaking capabilities. India currently meets most of its semiconductor demand through imports, making access to established wafer lines a quicker and more cost-effective option than building new facilities.
To Reduce Imports
MIC Electronics will share technical expertise, define fabrication needs and engage in discussions with potential partners. “By partnering with TOP2, we will be able to tap into proven global expertise and move at a faster pace on our journey to reduce import dependence and strengthen India’s position in the global semiconductor landscape,” said Rakshit Mathur, CEO of MIC Electronics.
TOP2 has already assisted three Indian firms this year in securing wafer production lines from the US, Europe and Japan. The company plans to bring two to three additional lines annually, citing growing demand driven by electric vehicles, 5G, AI and consumer electronics.
“This partnership with MIC allows us to bring together international expertise and local strength to make sure there is faster execution and greater resilience,” said Rao Panidapu, founding partner of TOP2.
The agreement underscores a shared commitment to technology transfer and sustainable growth, aligning with India’s semiconductor vision. Both companies see the MoU as a step towards making India more self-reliant and globally competitive in the semiconductor sector.
Adobe Launches AI Agents for Enterprise Customer Experience

Adobe has announced the general availability of its AI agents designed to help businesses build, deliver, and optimise customer experiences. The launch is anchored in the Adobe Experience Platform (AEP) and its new Agent Orchestrator, which enables companies to manage, customise, and connect AI agents across Adobe and third-party ecosystems.
The company said the agents are capable of understanding context, planning multi-step actions, and refining responses with human oversight.
“Adobe’s agentic AI innovations are redefining customer experience orchestration in the era of AI, enabling businesses to unlock productivity with agent orchestration, reimagine longstanding processes and deliver personalised experiences at scale to drive business growth,” said Anjul Bhambhri, senior vice president of engineering at Adobe Experience Cloud in a statement.
Over 70% of AEP customers are already using Adobe’s AI Assistant, the conversational interface for interacting with agents. Brands including The Hershey Company, Lenovo, Merkle, Wegmans Food Markets, and Wilson Company have adopted the technology to enhance capabilities across marketing and customer engagement.
The release includes out-of-the-box AI agents across Adobe’s enterprise applications.
These range from Audience Agent for audience creation and optimisation, to Journey Agent for campaign orchestration, Experimentation Agent for analysing performance, Data Insights Agent for customer analytics, Site Optimisation Agent for website performance, and Product Support Agent for troubleshooting.
Adobe is also preparing to roll out Experience Platform Agent Composer, which will let businesses customise AI agents with brand guidelines and policy controls. New developer tools such as an Agent SDK and Agent Registry are also in the pipeline.
Additionally, Adobe announced partnerships with Cognizant, Google Cloud, Havas, Medallia, Omnicom, PwC, and VML to expand industry-specific use cases.