Jobs & Careers
Meta Invests $3.5 Bn in Ray-Ban Maker EssilorLuxottica SA
Meta has acquired a minority stake of just under 3% in EssilorLuxottica SA, the world’s leading eyewear manufacturer and maker of Ray-Ban, deepening the tech giant’s financial involvement in the smart glasses sector, Bloomberg reported. The stake is currently valued at about $3.5 billion based on market prices.
According to the report, the company is also contemplating additional investments that might boost its ownership to roughly 5% in the future, although these plans could still be subject to change.
Meta’s investment strengthens the alliance between the two firms, which have collaborated in recent years to create AI-driven smart glasses. The Menlo Park, California-based tech giant currently offers a pair of Ray-Ban glasses, launched in 2021, featuring integrated cameras and an AI assistant.
The glasses come with built-in Meta AI, enabling users to interact through voice commands. Users can communicate with phrases like “Hey Meta” to ask questions, stream music or podcasts, make phone calls, and broadcast live on Instagram or Facebook.
Meta states that the product is intended to help users “stay present and connected to who and what they care about most.”
Last month, the company unveiled Oakley-branded glasses in collaboration with EssilorLuxottica. Francesco Milleri, CEO of EssilorLuxottica, said last year that Meta had shown interest in acquiring a stake in the firm, but the plan had not materialised until now.
For EssilorLuxottica, this agreement enhances its involvement in the tech sector, which could be advantageous if Meta’s forward-looking initiatives succeed. Meta is also wagering on the possibility that people will eventually engage in work and leisure activities while using headsets or glasses, Bloomberg reported.
Hugging Face’s Latest Small Language Model Adds Reasoning Capabilities
Hugging Face has released SmolLM3, a 3B parameter language model that offers long-context reasoning, multilingual capabilities, and dual-mode inference, making it one of the most competitive small-scale open models to date. The model is available under the Apache 2.0 license.
Trained on 11.2 trillion tokens, SmolLM3 outperforms other models in its class, including Llama-3.2-3B and Qwen2.5-3B, while rivalling larger 4B models such as Gemma3 and Qwen3.
The model supports six languages: English, French, Spanish, German, Italian, and Portuguese. It can process context lengths of up to 128k tokens, enabled by the NoPE and YaRN techniques.
The release includes both a base model and an instruction-tuned model with dual reasoning modes. Users can toggle a flag to control whether the model generates its answer with or without an explicit reasoning trace.
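Per the published model card, the toggle is a `/think` or `/no_think` flag placed in the system prompt. A minimal sketch of how a chat request might be assembled under that convention (the flag strings follow the model card; the helper function itself is purely illustrative):

```python
# Illustrative helper: builds a chat message list with SmolLM3's
# reasoning toggle. The "/think" and "/no_think" system-prompt flags
# follow the published model card; everything else here is a sketch.
def build_messages(user_prompt: str, reasoning: bool = True) -> list:
    flag = "/think" if reasoning else "/no_think"
    return [
        {"role": "system", "content": flag},
        {"role": "user", "content": user_prompt},
    ]

# With reasoning on, the model emits a thinking trace before the answer;
# with it off, it answers directly.
msgs = build_messages("What is 17 * 24?", reasoning=False)
```

These messages would then be passed to the tokenizer’s chat template in the usual `transformers` workflow.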
Pretraining was conducted over three stages with evolving mixes of web, code, and math datasets. A mid-training phase extended the model’s context length and added general reasoning capabilities, followed by supervised fine-tuning and preference alignment using Anchored Preference Optimisation (APO).
SmolLM3 achieved strong results across 12 benchmarks, ranking high on knowledge and reasoning tasks and demonstrating strong multilingual and coding performance. The instruction and reasoning modes yielded further gains on tasks like LiveCodeBench and AIME 2025.
The full training recipe, including data mixtures, ablations, synthetic data generation, and model alignment steps, has also been made public on its GitHub and Hugging Face pages. This open approach aims to help the research community replicate and build on SmolLM3’s performance.
A few months back, Hugging Face launched SmolLM2, an open-source small language model trained on 11 trillion tokens, including custom datasets for math, code, and instruction-following. It outperforms models like Qwen2.5-1.5B and Llama3.2-1B on several benchmarks, particularly MMLU-Pro, while achieving competitive results on others, like TriviaQA and Natural Questions.
It appears that Hugging Face is focusing on minor but consistent improvements for its small language models.
Cerebras Brings Reasoning Time Down from 60 to 0.6 Seconds
Cerebras, the AI infrastructure firm, announced on July 8 that it will deploy Alibaba’s flagship Qwen3 reasoning model, featuring 235 billion parameters, on Cerebras hardware. The model is claimed to run at 1,500 tokens per second.
“That means reasoning time goes from 60 seconds on GPUs to just 0.6 seconds,” said the company in the announcement. Cerebras added that it is enabling the model with 131k context for enterprise customers, which allows production-grade code generation.
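The claimed speedup is simple throughput arithmetic: for a fixed reasoning-trace length, generation time scales inversely with tokens per second. A quick sketch using the figures from the announcement (the 900-token trace length is an illustrative assumption chosen to match the quoted 60 s and 0.6 s figures, which imply roughly 15 tokens per second on GPUs):

```python
# Generation time = trace length / throughput. Using the announcement's
# numbers: 1,500 tok/s on Cerebras vs. an implied ~15 tok/s on GPUs.
# The 900-token trace length is an assumption for illustration only.
def generation_time(trace_tokens: int, tokens_per_sec: float) -> float:
    return trace_tokens / tokens_per_sec

trace = 900  # assumed reasoning-trace length (tokens)
gpu_seconds = generation_time(trace, 15)        # -> 60.0
cerebras_seconds = generation_time(trace, 1500)  # -> 0.6
```

The same arithmetic explains why inference-optimised hardware matters most for reasoning models, whose long intermediate traces multiply the per-token cost.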
The model will be available for anyone to try on Cerebras’ platform later this week.
The company develops wafer-scale AI chips optimised for inference, the process of running a trained model to generate outputs. Its cloud services host a range of AI models powered by its hardware, allowing users and developers to generate over 1,000 tokens per second.
In AI models, ‘reasoning’ involves using extra computation to analyse a user query step-by-step, aiming for an accurate and relevant answer. This process can be time-consuming, sometimes taking several minutes to complete.
Custom hardware systems often surpass the inference performance of traditional NVIDIA GPUs, which are frequently used for training and deploying AI models.
Along with Cerebras, companies like Groq and SambaNova have built hardware that offers superior performance for inference.
In May, Cerebras announced that its hardware has outperformed NVIDIA’s DGX B200, which consists of 8 Blackwell GPUs, in terms of output speed while deploying Meta’s Llama 4 Maverick model.
Cerebras achieved an output token speed of over 2,500 tokens per second, whereas NVIDIA demonstrated an output token speed of only 1,000 tokens per second.
However, NVIDIA outperformed systems from Groq, AMD, Google, and other vendors. “Only Cerebras stands – and we smoked Blackwell,” said Cerebras in a post on X. “We’ve tested dozens of vendors, and Cerebras is the only inference solution that outperforms Blackwell for Meta’s flagship model,” said the company.
ANSR Signs MoU with Andhra Govt to Build GCC Hub in Vizag Creating 10,000 Jobs
ANSR, a global company known for setting up and managing global capability centres (GCCs), has signed an MoU with the Government of Andhra Pradesh on July 9, 2025, to build a major innovation campus for GCCs in Visakhapatnam.
The agreement was signed in the presence of Nara Lokesh, minister for information technology, electronics and communications, real time governance, and human resources development.
The upcoming campus, to be located in the Madhurawada IT Cluster, is expected to create over 10,000 jobs over the next five years, helping global companies tap into Andhra Pradesh’s growing pool of skilled professionals.
To further strengthen the ecosystem, ANSR plans to leverage its global network of partners including Accenture and ServiceNow to attract international interest and enhance the project’s global reach.
“By enabling world-class infrastructure and leveraging ANSR’s deep expertise in building GCCs, we aim to attract global enterprises, create thousands of high-value jobs, and fuel the next wave of digital growth from our state,” said Katamneni Bhaskar, IAS, secretary to the government, ITE&C department.
The Andhra Pradesh Economic Development Board (APEDB) will support the project by helping identify suitable land, coordinating with government agencies like APIIC, and assisting with approvals and other policy-level support.
This partnership aligns with the state’s IT & GCC Policy 4.0, which focuses on attracting global investment and encouraging digital innovation across Andhra Pradesh.
“We’re architecting the future of global capability distribution,” said Lalit Ahuja, founder and CEO of ANSR. The new GCC innovation campus will feature modern workspaces for global companies and their partners, along with training centres, labs, innovation studios, and specialised technology zones.
The campus will also house global delivery centres designed to offer full time zone coverage and business continuity support.