

ANSR Signs MoU with Andhra Govt to Build GCC Hub in Vizag Creating 10,000 Jobs


ANSR, a global company known for setting up and managing global capability centres (GCCs), has signed an MoU with the Government of Andhra Pradesh on July 9, 2025, to build a major innovation campus for GCCs in Visakhapatnam. 

The agreement was signed in the presence of Nara Lokesh, minister for information technology, electronics and communications, real time governance, and human resources development.

The upcoming campus, to be located in the Madhurawada IT Cluster, is expected to create over 10,000 jobs over the next five years, helping global companies tap into Andhra Pradesh’s growing pool of skilled professionals.

To further strengthen the ecosystem, ANSR plans to leverage its global network of partners, including Accenture and ServiceNow, to attract international interest and enhance the project’s global reach.

 “By enabling world-class infrastructure and leveraging ANSR’s deep expertise in building GCCs, we aim to attract global enterprises, create thousands of high-value jobs, and fuel the next wave of digital growth from our state,” said Katamneni Bhaskar, IAS, secretary to the government, ITE&C department.

The Andhra Pradesh Economic Development Board (APEDB) will support the project by helping identify suitable land, coordinating with government agencies like APIIC, and assisting with approvals and other policy-level support.

This partnership aligns with the state’s IT & GCC Policy 4.0, which focuses on attracting global investment and encouraging digital innovation across Andhra Pradesh.

“We’re architecting the future of global capability distribution,” said Lalit Ahuja, founder and CEO of ANSR. The new GCC innovation campus will feature modern workspaces for global companies and their partners, along with training centres, labs, innovation studios, and specialised technology zones. 

The campus will also house global delivery centres designed to offer full time zone coverage and business continuity support.





A Beginner’s Guide to AirTable for Data Analysis



Image by Author | Ideogram

 

Introduction

 
AirTable is a cloud-based, user-friendly, AI-driven platform for creating, managing, and sharing databases. It combines the best of Excel-style spreadsheets with relational database management systems. AirTable follows a freemium subscription model: a limited set of features can be used for free, making it ideal for smaller projects or beginners, while the paid plans offer advanced features and higher usage limits.

This article is a beginner-level starting point for anyone interested in AirTable and what it has to offer for data analysis. It walks you through creating a new AirTable base, importing some data into it, and using that data for some basic analysis.

 

Signing Up and Creating Your First Project

 
As a cloud-based tool, AirTable does not require downloading a desktop application: you simply visit its website and sign up. If you have a Google account, for instance, you can use it for a quick sign-up; otherwise, there is the option to register with an email address.

Once signed up, we are ready to create our first project. In AirTable, a base (also called an app) is the equivalent of a project: essentially, a container for all the data. So let’s create a new base. If you cannot see the “Create blank app” button at first glance, look for the “Create” button in the bottom-left corner or, alternatively, if there is an “x” icon in the top-right corner, click on it and you will be prompted to create a blank app.

You should then see a screen like this:

 

New base (project) in AirTable

 

Now it’s time to import some data. AirTable bases consist of one or more tables. By default, an empty table named “Table 1” appears. Next to it there is a tab called “+ Add or import”, which we will click on. AirTable offers various ways to add data to a project, for instance from spreadsheets in Google Sheets or Excel, Salesforce, Google Drive, Trello, and many more. We will use one of the simplest approaches: uploading a CSV file, specifically from a URL. To do so, select “CSV file” and, in the window that appears, choose “Link (URL)” on the left-hand side, as shown below:

 

Uploading CSV data via URL

 

Copy the following URL to a dataset I made available for you on GitHub and paste it into the text field that appears. Then click the blue button on the right-hand side and, when asked whether to create a new table or use an existing one, make sure you create a new table. Do not be tempted to use the existing default table called “Table 1”, as its schema is not compatible with that of the dataset we are importing.

That’s it! You now have a new table populated with the imported data, which contains records of customers in a shopping mall, with the following attributes:

  • Customer ID: the numerical identifier of a customer.
  • Gender: the customer’s gender, namely male or female.
  • Age: the customer’s age expressed as an integer.
  • Income: the customer’s annual income in thousands of US dollars ($).
  • Spending score: a normalized score between 1 and 100 reflecting the customer’s spending level.
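
If you would also like to peek at the same data outside AirTable, a quick look with pandas gives an equivalent view of the table and its column types. This is only a sketch: the file name mall_customers.csv is a placeholder for a local copy of the dataset, so adjust it to wherever you saved the CSV.

import pandas as pd

# Load a local copy of the shopping-mall customers dataset.
# "mall_customers.csv" is a placeholder path, not the actual download URL.
df = pd.read_csv("mall_customers.csv")

print(df.shape)    # number of records and columns
print(df.dtypes)   # "Gender" should be the only non-numerical column
print(df.head())   # first few customer records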

 

Beginning Data Analysis

 
In the imported table, all columns are of numerical type except for “Gender”, which is categorical. In AirTable, a categorical column that takes exactly one value per record from a predefined set is called a “Single Select” field. You can check or modify the properties of “Gender” or any other field by hovering over the column header, clicking the v-shaped icon that appears, and selecting “Edit field”. For this tutorial, we will leave our column types as they are and proceed to perform some analysis.

Grouping customers by gender: Grouping data records by values of a categorical attribute is as easy as clicking on the “Group” button above the table. Select “Gender” and then “Collapse all” to see aggregated summaries of your data for each gender. By default, you will see the total (sum) of values per attribute and gender, but you can also choose to see the average (or median, min, max, etc.) values of columns like income, spending score, and so on. This can be done as shown in the following screenshot:

 

Analyzing grouped customers by gender

 

We can observe that male customers have, on average, a higher income than female customers, but female customers have a higher average spending score.

 

Average income and spending score by gender

 

To remove a grouping of the data, simply click on the “Group” icon again, then click on the bin icon next to the created grouping to remove it and see your full, ungrouped table again.
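
For readers who prefer to double-check these aggregates in code, the same group-and-average summary can be reproduced with pandas. A minimal sketch, reusing the df loaded earlier; the column names “Annual Income (k$)” and “Spending Score (1-100)” are assumed from the formula used later in this article and the standard version of this dataset, so adjust them if your copy differs.

# Average income and spending score per gender, mirroring AirTable's grouped view.
summary = df.groupby("Gender")[["Annual Income (k$)", "Spending Score (1-100)"]].mean()
print(summary)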

Filtering young customers: Next, let’s try filtering. This is an easy and intuitive operation available via the “Filter” icon next to the previously used “Group” icon. In the pop-up dialog, select “+ Add condition”. A filtering condition consists of three elements: a field or column name, an operator, and a value. Examples of conditions are “Age >= 39”, “Spending Score = 10”, “Gender is not Male”, and so on. To filter young customers, we will set the condition “Age < 30”, which should leave a total of 55 customers. One interesting exercise at this point is to combine this filter with a grouping by gender (as before) to check whether the findings about income and spending score in males vs. females still hold for young customers. Once you have tried this, filters can be removed just as easily as groupings.
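
The equivalent filter is a one-liner in pandas, and combining it with the previous grouping is just as short. Again a sketch with the same assumed column names:

# Keep only customers under 30, mirroring the "Age < 30" filter condition.
young = df[df["Age"] < 30]
print(len(young))  # expected to report 55 customers for this dataset

# Re-check the income and spending pattern within the young segment only.
print(young.groupby("Gender")[["Annual Income (k$)", "Spending Score (1-100)"]].mean())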

Using formulae to define an “income class” field: AirTable lets you create new columns in several different ways, formulae being one of them. Simply click the “+” button next to the right-most column in your table to add a new column, and choose “Formula” as the column type. For instance, we can use the following formula:

IF({Annual Income (k$)} < 40, "Low",
IF({Annual Income (k$)} < 70, "Medium", "High"))

 

This creates a new column called “Income class” whose values (categories) are determined by the annual income column, following the above formula’s two nested conditionals. If you are not familiar with spreadsheet-like formula syntax, don’t panic: there is a “Create formula with AI” button, whereby AirTable’s AI assistant can help build a formula based on your specification or goal.
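
For comparison, the same two nested conditionals translate directly into code. A sketch of the equivalent derived column in pandas, again assuming the column names used in the earlier snippets:

import numpy as np

# Mirror the nested IFs: < 40 -> "Low", < 70 -> "Medium", otherwise "High".
income = df["Annual Income (k$)"]
df["Income class"] = np.where(income < 40, "Low",
                              np.where(income < 70, "Medium", "High"))
print(df["Income class"].value_counts())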

 

Using formulae to create a new column

 

Using interfaces to visualize your data: AirTable interfaces are used to build data visualizations. This feature is limited in the free tier, but it is still possible to create simple dashboards with elements like bar charts and pivot tables, that is, cross-column tables that summarize and aggregate the data based on field combinations. To try creating an interface, click on “Interfaces” in the top toolbar and follow the assistant’s steps. You may end up with something like this dashboard:

 

Interface dashboard in AirTable

 

Note that interfaces are designed to be shareable among teams, e.g., for driving business intelligence processes.
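
Outside AirTable, the building blocks of such a dashboard, a pivot table and a bar chart, can be sketched with pandas and matplotlib. The snippet below assumes the df and the “Income class” column created in the earlier sketches:

import matplotlib.pyplot as plt

# A small pivot table: average spending score by income class and gender.
pivot = df.pivot_table(
    values="Spending Score (1-100)",
    index="Income class",
    columns="Gender",
    aggfunc="mean",
)
print(pivot)

# A bar chart of the same aggregation, similar to a dashboard element.
pivot.plot(kind="bar")
plt.ylabel("Average spending score")
plt.tight_layout()
plt.show()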

 

Wrapping Up

 
This article introduced AirTable, a versatile and user-friendly cloud-based platform for data management and analysis that combines features of spreadsheets and relational databases with AI-powered functions. The guide is intended to introduce new users to AirTable and outline some basic functions for data analysis. While they have not been our main focus, the tool’s AI features are arguably one of the most worthwhile next steps to explore beyond this point.

 
 

Iván Palomares Carrascosa is a leader, writer, speaker, and adviser in AI, machine learning, deep learning & LLMs. He trains and guides others in harnessing AI in the real world.





Hugging Face Launches Reachy Mini, Open-Source Robot for AI Enthusiasts and Educators 


Hugging Face, in collaboration with Pollen Robotics, launched Reachy Mini, a desktop-sized open-source robot designed for AI experimentation and education. It is now available for pre-order globally. 

Developed for human-robot interaction and creative coding, the robot is available in two versions: the Lite at $299 and a wireless version at $449. Thomas Wolf, co-founder and chief science officer at Hugging Face, announced in a LinkedIn post that the first deliveries are expected to begin right after the summer of 2025 and continue through 2026.

Built for developers, educators and hobbyists, Reachy Mini enables users to program and deploy AI applications using Python. The robot includes multimodal sensors and offers integration with Hugging Face for real-time behaviour sharing and experimentation.

Two Versions for Flexible Use

The Reachy Mini Lite lacks onboard computing and Wi-Fi, whereas the wireless version is equipped with a Raspberry Pi 5, a battery, and four microphones. While the Lite version is compatible with Mac and Linux, it has not yet been released for Windows.

Both variants feature motorised head movement, body rotation, animated antennas, and a wide-angle camera. A speaker and audio-visual interaction capabilities come standard. The robot is currently in the early stages of development. “We’re sharing it as-is, without warranties or guarantees, to engage with early adopters and gather feedback,” the company mentioned in its announcement.

The Mini Lite version has two microphones, while the wireless Mini version has four and measures 11 in (28 cm) in height and 6.3 in (16 cm) in width, weighing 3.3 lb (1.5 kg).

Open-Source Development

Users can test and deploy behaviours in real life or through simulation. Over 15 preloaded behaviours will be available at launch through the Hugging Face hub. Future programming support will be expanded to include JavaScript and Scratch.

Reachy Mini’s open-source hardware and software allow for full transparency and community participation. With a modular kit-based assembly, it encourages hands-on learning, coding with children, and collaborative building.

Users can join the growing community of over 10 million on Hugging Face to upload, download and evolve new robot behaviours, positioning Reachy Mini as a flexible tool for AI exploration and learning.





Hugging Face’s Latest Small Language Model Adds Reasoning Capabilities


Hugging Face has released SmolLM3, a 3B parameter language model that offers long-context reasoning, multilingual capabilities, and dual-mode inference, making it one of the most competitive small-scale open models to date. The model is available under the Apache 2.0 license.

Trained on 11.2 trillion tokens, SmolLM3 outperforms other models in its class, including Llama-3.2-3B and Qwen2.5-3B, while rivalling larger 4B models such as Gemma3 and Qwen3. 

The model supports six languages: English, French, Spanish, German, Italian, and Portuguese. It can process context lengths of up to 128k tokens, enabled by the NoPE and YaRN techniques.

The release includes both a base model and an instruction-tuned model with dual reasoning modes. Users can toggle between different flags to control whether the model generates answers with or without reasoning traces.
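
As a rough illustration of this dual-mode behaviour, here is a minimal sketch using the Transformers library. The repository name and the "/think" and "/no_think" system-prompt flags are the ones described in the model's public release materials at the time of writing; treat them as assumptions and check the model card before relying on them.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceTB/SmolLM3-3B"  # assumed repository name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

def ask(question: str, reasoning: bool) -> str:
    # The system-prompt flag is assumed to switch reasoning traces on or off.
    system = "/think" if reasoning else "/no_think"
    messages = [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    )
    outputs = model.generate(inputs, max_new_tokens=512)
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

print(ask("What is 17 * 23?", reasoning=True))   # answer preceded by a reasoning trace
print(ask("What is 17 * 23?", reasoning=False))  # direct answer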

Pretraining was conducted over three stages with evolving mixes of web, code, and math datasets. A mid-training phase extended the model’s context length and added general reasoning capabilities, followed by supervised fine-tuning and preference alignment using Anchored Preference Optimisation (APO).

SmolLM3 achieved strong results across 12 benchmarks, ranking high on knowledge and reasoning tasks and demonstrating strong multilingual and coding performance. The instruct and reasoning modes yielded further gains on tasks like LiveCodeBench and AIME 2025.

The full training recipe, including data mixtures, ablations, synthetic data generation, and model alignment steps, has also been made public on its GitHub and Hugging Face pages. This open approach aims to help the research community replicate and build on SmolLM3’s performance.

A few months back, Hugging Face launched SmolLM2, an open-source small language model trained on 11 trillion tokens, including custom datasets for math, code, and instruction-following. It outperforms models like Qwen2.5-1.5B and Llama3.2-1B on several benchmarks, particularly MMLU-Pro, while achieving competitive results on others, like TriviaQA and Natural Questions.

It appears that Hugging Face is focusing on minor but consistent improvements for its small language models.


