
TCS, CEA Collaborate to Advance Physical AI Research in France



Tata Consultancy Services (TCS) partnered with the French Alternative Energies and Atomic Energy Commission (CEA) to accelerate innovation and industrialisation of physical AI solutions. 

Physical AI brings together robotics, artificial intelligence and intelligent systems so that machines can perceive, interpret and interact with the physical world, advancing the digital transformation and modernisation of industrial processes. 

TCS and CEA-List, the commission's leading French research institute for intelligent digital systems, will drive the design, development and deployment of cutting-edge physical AI-powered systems for real-world applications. 

By combining CEA’s deep expertise in digital transformation and scientific research with TCS’ domain knowledge and global scale, the partnership will deliver scalable AI-driven solutions tailored to industrial use cases, from manufacturing and logistics to automation, ultimately transforming efficiency and resilience across sectors.

Alexandre Bounouh, director of CEA-List, said, “This partnership will enable us to connect cutting-edge research with the concrete needs of businesses and to jointly invent the intelligent systems of tomorrow.” 

“By transforming collaboration between humans and machines, AI solutions applied to physical systems will optimise the production chain, thereby contributing to one of our core missions: boosting the resilience and competitiveness of French and European businesses,” he said.    

Together, TCS and CEA intend to offer organisations concrete physical AI solutions and proofs of concept, along with training and technology support programmes.

Some of the key areas of collaboration include versatile robots, advanced human-robot collaboration and socially assistive robots.

The partnership also draws on CEA's unique expertise and leading-edge research into the convergence between physical AI and humans, TCS said. 

Rammohan Gourneni, managing director of TCS in France, said, “Physical AI is a key technology for the future of industry, as it combines the power of AI with the intelligence of physical systems. This partnership marks an important step in supporting our clients in their industrial transformation.”

TCS said that the partnership leverages the TCS Pace Port Paris research and innovation centre at the heart of the French technology ecosystem. The hub brings together experts, startups, researchers and large companies to accelerate the development of next-gen solutions on a large scale.






Tendulkar-Backed RRP Electronics Gets 100 Acres in Maharashtra for Semiconductor Fab



The Maharashtra government has allocated 100 acres in Navi Mumbai to RRP Electronics for the establishment of a semiconductor fabrication facility. CM Devendra Fadnavis handed over a letter of comfort to the company, which plans to relocate a fab from Sherman, Texas, with a production capacity of 1.25 lakh wafers per month.

The project is backed by former cricketer Sachin Tendulkar and marks a significant step for India’s semiconductor mission. The new fab is expected to boost industrial growth, generate employment opportunities and enhance supply chains in the state.

“This allotment of land firmly positions Maharashtra at the heart of the India Semiconductor Mission roadmap. Our government is fully committed to extending all necessary support, be it in infrastructure, policy facilitation or skill development, to ensure the success of this initiative,” Fadnavis said.

He added that the facility would accelerate industrial growth and reinforce Maharashtra’s role as a hub for high-technology manufacturing.

Rajendra Chodankar, chairman of RRP Electronics, said, “We are thankful to the Maharashtra government, the honourable chief minister and his team for the continued encouragement and support towards enabling the state to take pioneering initiatives for the semiconductor ecosystem. This acquisition is a landmark step in our journey to make India self-reliant in semiconductors.”

The move comes a year after Maharashtra launched its first outsourced semiconductor assembly and test (OSAT) facility in Navi Mumbai, also established by RRP. With the new fab, the state strengthens its position in the global semiconductor value chain.

Earlier in May, HorngCom Technology of Taiwan entered into a strategic collaboration with RRP to expand its OSAT capabilities in India. The agreement followed a successful technical assessment of RRP’s semiconductor facility in Mahape, Navi Mumbai, and marked HorngCom’s latest move to scale its operations globally.






5 Tips for Building Optimized Hugging Face Transformer Pipelines




 

Introduction

 
Hugging Face has become the standard for many AI developers and data scientists because it drastically lowers the barrier to working with advanced AI. Rather than building models from scratch, developers can access a wide range of pretrained models with little effort, adapt them to custom datasets and deploy them quickly.

One of the Hugging Face API wrappers is Transformers Pipelines: each pipeline bundles a pretrained model, its tokenizer, and the pre- and post-processing needed to make an AI use case work. Pipelines abstract away complex code behind a simple, consistent API.
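As a quick illustration, a working pipeline takes only a couple of lines. A minimal sketch (when no model is named, the library falls back to a default checkpoint for the task):

from transformers import pipeline

# Model selection, tokenization and post-processing all happen behind one call;
# the default checkpoint is downloaded on first use.
classifier = pipeline("sentiment-analysis")
print(classifier("Hugging Face pipelines make this easy!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]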

However, an out-of-the-box pipeline is not always an optimal one. Below are five ways to optimize your Transformers Pipelines.

Let’s get into it.

 

1. Batch Inference Requests

 
Often, when using Transformers Pipelines, we do not fully utilize the graphics processing unit (GPU). Batch processing of multiple inputs can significantly boost GPU utilization and enhance inference efficiency.

Instead of processing one sample at a time, you can use the pipeline’s batch_size parameter or pass a list of inputs so the model processes several inputs in one forward pass. Here is a code example:

from transformers import pipeline

# device_map="auto" places the model on a GPU when one is available
pipe = pipeline(
    task="text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    device_map="auto"
)

texts = [
    "Great product and fast delivery!",
    "The UI is confusing and slow.",
    "Support resolved my issue quickly.",
    "Not worth the price."
]

# A list input plus batch_size groups several texts into one forward pass;
# padding and truncation keep each batch a uniform shape.
results = pipe(texts, batch_size=16, truncation=True, padding=True)
for r in results:
    print(r)

 

By batching requests, you can achieve higher throughput with only a minimal impact on latency.

 

2. Use Lower Precision And Quantization

 

Many pretrained models cannot be served simply because development and production environments lack the memory. Lowering numerical precision reduces memory usage and speeds up inference without sacrificing much accuracy.

For example, here is how to use half precision on the GPU in a Transformers Pipeline:

import torch
from transformers import AutoModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english"

# float16 weights take half the memory of the default float32
model = AutoModelForSequenceClassification.from_pretrained(
    model_id,
    torch_dtype=torch.float16
)

 

Similarly, quantization techniques can compress model weights without noticeably degrading performance:

# Requires bitsandbytes for 8-bit quantization
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_id = "facebook/opt-1.3b"  # example checkpoint; any causal LM works

# quantization_config is the current API; older Transformers versions
# accepted load_in_8bit=True directly in from_pretrained
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto"
)

 

Using lower precision and quantization in production usually speeds up pipelines and reduces memory use without significantly impacting model accuracy.

 

3. Select Efficient Model Architectures

 
In many applications, you do not need the largest model to solve the task. Selecting a lighter transformer architecture, such as a distilled model, often yields better latency and throughput with an acceptable accuracy trade-off.

Compact models or distilled versions, such as DistilBERT, retain most of the original model’s accuracy but with far fewer parameters, resulting in faster inference.

Choose a model whose architecture is optimized for inference and suits your task’s accuracy requirements.
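In practice, switching to a distilled checkpoint is usually a one-line change to the pipeline call. A minimal sketch (the model name is a standard Hub checkpoint):

from transformers import pipeline

# DistilBERT has roughly 66M parameters versus about 110M for BERT-base,
# retaining most of its accuracy while running noticeably faster
pipe = pipeline(
    task="text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english"
)
print(pipe("The distilled model kept latency low."))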

 

4. Leverage Caching

 
Many systems waste compute by repeating expensive work. In autoregressive text generation, for example, the attention keys and values computed for earlier tokens can be cached and reused at each decoding step instead of being recomputed. Transformers exposes this through the use_cache flag of generate():

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # example checkpoint; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
inputs = tokenizer("Caching matters because", return_tensors="pt")
with torch.inference_mode():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=120,
        do_sample=False,
        use_cache=True  # reuse past key/value states at each step
    )

 

Efficient caching cuts redundant computation and lowers response latency in production systems.

 

5. Use An Accelerated Runtime Via Optimum (ONNX Runtime)

 
Many pipelines run in PyTorch eager mode, which adds Python overhead and extra memory copies. Hugging Face Optimum with Open Neural Network Exchange (ONNX) Runtime converts the model to a static graph and fuses operations, so the runtime can use faster kernels on a central processing unit (CPU) or GPU with less overhead. The result is usually faster inference, especially on CPU or mixed hardware, without changing how you call the pipeline.

Install the required packages with:

pip install -U transformers optimum[onnxruntime] onnxruntime

 

Then, convert the model with code like this:

from optimum.onnxruntime import ORTModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english"

# export=True converts the PyTorch checkpoint to ONNX on the fly
# (older Optimum releases used from_transformers=True for the same thing)
ort_model = ORTModelForSequenceClassification.from_pretrained(
    model_id,
    export=True
)

 

By converting the pipeline to ONNX Runtime through Optimum, you can keep your existing pipeline code while getting lower latency and more efficient inference.
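The exported model then drops into the familiar pipeline API. A short usage sketch, reusing ort_model and model_id from the snippet above:

from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained(model_id)

# The ONNX-backed model is a drop-in replacement in the pipeline call
onnx_pipe = pipeline("text-classification", model=ort_model, tokenizer=tokenizer)
print(onnx_pipe("Latency improved after the ONNX export."))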

 

Wrapping Up

 
Transformers Pipelines is an API wrapper in the Hugging Face framework that simplifies AI application development by condensing complex code into a simple interface. In this article, we explored five ways to optimize these pipelines: batching inference requests, lowering precision and quantizing weights, selecting efficient model architectures, leveraging caching, and running on an accelerated runtime via Optimum.

I hope this has helped!
 
 

Cornellius Yudha Wijaya is a data science assistant manager and data writer. While working full-time at Allianz Indonesia, he loves to share Python and data tips via social media and writing media. Cornellius writes on a variety of AI and machine learning topics.






Hyderabad-Based MIC Electronics, TOP2 Join Hands to Boost Chipmaking In India



MIC Electronics Limited has signed a memorandum of understanding (MoU) with Singapore-based VC firm TOP2 Pte Ltd to boost India’s semiconductor manufacturing capacity. 

Under the agreement, TOP2 will help MIC identify and finalise a fabrication partner from Taiwan, aiming to start wafer production in India with a capacity of 25,000 to 30,000 wafers per month.

The collaboration supports India’s $10 billion semiconductor mission, which seeks to reduce reliance on imports and develop domestic chipmaking capabilities. India currently meets most of its semiconductor demand through imports, making access to established wafer lines a quicker and more cost-effective option than building new facilities.

To Reduce Imports

MIC Electronics will share technical expertise, define fabrication needs and engage in discussions with potential partners. “By partnering with TOP2, we will be able to tap into proven global expertise and move at a faster pace on our journey to reduce import dependence and strengthen India’s position in the global semiconductor landscape,” said Rakshit Mathur, CEO of MIC Electronics.

TOP2 has already assisted three Indian firms this year in securing wafer production lines from the US, Europe and Japan. The company plans to bring two to three additional lines annually, citing growing demand driven by electric vehicles, 5G, AI and consumer electronics. 

“This partnership with MIC allows us to bring together international expertise and local strength to make sure there is faster execution and greater resilience,” said Rao Panidapu, founding partner of TOP2.

The agreement underscores a shared commitment to technology transfer and sustainable growth, aligning with India’s semiconductor vision. Both companies see the MoU as a step towards making India more self-reliant and globally competitive in the semiconductor sector.




