
Oracle Lands $300 Billion OpenAI Cloud Deal, One of the Largest in History

OpenAI has struck one of the largest cloud computing deals in history with Oracle, agreeing to a contract worth about $300 billion over five years starting in 2027. 

According to the Wall Street Journal, the agreement will see Oracle supply OpenAI with roughly 4.5 gigawatts of computing capacity, an amount of power comparable to two Hoover Dams or enough to run around four million homes.

The scale of the deal is staggering compared to OpenAI’s current business. The AI company generates around $10 billion in annual revenue, making the long-term commitment a major financial bet on the future of its technology. 

For Oracle, the agreement represents a transformative moment. Shares of the company surged by nearly 43% following the announcement, the biggest single-day gain since 1992. The stock jump added more than $100 billion to Chairman Larry Ellison’s wealth, briefly pushing him past Elon Musk and Jeff Bezos on global wealth rankings.

Still, both sides face significant risks. For OpenAI, the multi-hundred-billion-dollar commitment locks the company into infrastructure spending that far outpaces its current revenue.

For Oracle, delivering on such a colossal supply of cloud and AI infrastructure may require heavy investment and debt financing, while tying its fortunes closely to one customer.

The deal also carries major strategic implications. OpenAI has long relied on Microsoft’s Azure cloud, but working with Oracle shows it is diversifying its computing needs. 

The partnership also ties into “Stargate,” a massive AI infrastructure project backed by Oracle, SoftBank and others, aimed at powering the next generation of large-scale AI systems.

Ultimately, the agreement highlights how the race to lead in AI is now driven by huge bets on infrastructure. For Oracle, the win boosts its position against Microsoft, Amazon and Google in the cloud market.





SG Teams Up with Salesforce to Digitise India’s Cricket Equipment Industry

Sanspareils Greenlands (SG), a leading Indian cricket and sports equipment manufacturer, announced on September 15 a strategic collaboration with Salesforce to digitise its trade channel sales operations and dealer management system across more than 850 locations in India.

The partnership marks Salesforce’s first collaboration within India’s cricket equipment ecosystem. SG will deploy Salesforce Service Cloud, automation and a unified Dealer Management System to consolidate data, improve dealer interactions and enable real-time insights for decision-making.

“As customer expectations continue to evolve in the sports industry, we believe the future of sports equipment manufacturing lies in data agility, real-time responsiveness, and intelligent decision-making. This collaboration with Salesforce is more than just a technology upgrade; it’s about future-proofing our organisation by embedding intelligence into every dealer interaction and sales touchpoint,” Paras Anand, CEO of Sanspareils Greenlands, said.

“In today’s rapidly evolving industrial landscape, the power of AI is transforming legacy companies like SG at their core. In a country where the passion for cricket runs deep, we are delighted to be a part of this journey, harnessing our AI-powered platform to create new levels of operational efficiency and growth,” Aditi Sharma, regional vice president for sales, Salesforce India, said.

According to the companies, the unified Salesforce platform will automate workflows, improve field productivity, streamline customer support and strengthen partner relationships. SG’s product portfolio includes cricket bats, protective gear, footwear and sportswear, which will all be integrated into the digitised dealer network.

Salesforce added that the collaboration will also support SG’s market expansion plans through AI-driven insights. Salesforce is further advancing its enterprise AI portfolio with Agentforce, a platform designed to build and deploy AI agents that can act autonomously across business functions.




Databricks Names Kamalkanth Tummala as India Country Manager, Strengthens $250 Mn Investment Plan

On September 15, Databricks appointed Kamalkanth Tummala as its new country manager for India, part of the company’s $250 million investment to expand operations, research and go-to-market resources in the country.

Tummala, who previously served as vice president at Salesforce, will lead Databricks’ growth strategy in India. He will take charge of scaling the company’s local business and strengthening its presence across industries.

“We’re excited to welcome Kamal to our leadership team as we expand in India—one of our most dynamic and strategic growth markets,” said Ed Lenta, senior vice president and general manager, Asia Pacific and Japan at Databricks. “Leading organisations such as CommerceIQ, Freshworks, HDFC Bank, Swiggy, TVS Motors and Zepto already rely on Databricks to innovate with data and AI. Now, with Kamal on board and our continued investment in India, we are well-positioned to build on this momentum.”

Tummala said his focus will be on accelerating enterprise adoption of Databricks’ data and AI platforms. “In India’s dynamic landscape, data and AI are reshaping every organisation and Databricks is uniquely positioned to help enterprises build a lasting competitive advantage,” he said. “I look forward to working with our customers and partners to accelerate their AI journeys.”

Tummala brings over 20 years of experience in enterprise technology and leadership. At Salesforce, he led go-to-market initiatives across multiple industries. Before that, he spent nearly a decade at Mindtree Ltd as regional director in India, where he expanded strategic accounts.

Databricks has expanded significantly in India in recent years. The company has increased regional hiring, opened a 1,05,000-square-foot R&D office in Bengaluru and launched the India Data + AI Academy.

The company will also host its Data + AI World Tour 2025 in Mumbai on September 19, featuring industry leaders, customers and partners showcasing data and AI solutions.




Cursor is Using Real Time Reinforcement Learning to Improve Suggestions for Developers

Cursor, an AI-powered coding platform, has announced an upgrade for its Tab model—the autocomplete system that provides suggestions for developers. 

The company stated that this upgrade reduces low-quality suggestions while boosting accuracy, resulting in “21% fewer suggestions than the previous model while having a 28% higher acceptance rate”.

“Achieving a high accept rate isn’t just about making the model smarter, but also knowing when to suggest and when not to,” Cursor said in a blog post.

To solve the problem, Cursor considered training a separate model to predict whether a suggestion would be accepted or not. Cursor referenced a 2022 research study in which this method was used with GitHub Copilot. 

That study employed a logistic regression filter on features such as the programming language, recent acceptance history and trailing characters, with low-scoring suggestions hidden.
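As a rough illustration of that filtering approach, the sketch below trains a logistic regression on simple suggestion features and hides suggestions whose predicted acceptance probability falls below a cutoff. The feature names, data and threshold are illustrative assumptions, not taken from the cited study or from Cursor.

```python
# Hypothetical acceptance filter: logistic regression over suggestion features,
# with low-scoring suggestions hidden. Features and threshold are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [language_id, recent_accept_rate, num_trailing_chars]
X_train = np.array([
    [0, 0.80, 12],
    [1, 0.10, 3],
    [0, 0.55, 40],
    [1, 0.90, 25],
])
y_train = np.array([1, 0, 1, 1])  # 1 = suggestion was accepted, 0 = rejected

clf = LogisticRegression().fit(X_train, y_train)

def should_show(features, threshold=0.5):
    """Show the suggestion only if its predicted acceptance probability is high enough."""
    p_accept = clf.predict_proba([features])[0, 1]
    return p_accept >= threshold

print(should_show([0, 0.70, 15]))
```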

While Cursor stated that the solution was viable in terms of predicting whether a user would accept a suggestion or not, the AI coding platform noted, “We wanted a more general mechanism that reused the powerful representation of the code learned by the Tab model.” 

“Instead of filtering out bad suggestions, we wanted to alter the Tab model to avoid producing bad suggestions in the first place,” added Cursor. 

Thus, Cursor used policy gradient methods, a reinforcement learning (RL) approach, to solve the problem. The model receives a reward when suggestions are accepted, a penalty when they are rejected and nothing when it chooses to stay silent. 
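To make that reward scheme concrete, here is a minimal REINFORCE-style sketch in which a tiny policy decides whether to suggest or stay silent, earning +1 for an accepted suggestion, -1 for a rejected one and 0 when silent. The policy, reward values and simulated user feedback are all assumptions for illustration; Cursor’s actual Tab model and training setup are not public.

```python
# Toy policy-gradient loop mirroring the reward scheme described above.
import torch

policy = torch.nn.Sequential(torch.nn.Linear(8, 2))  # logits: [stay_silent, suggest]
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def simulated_user_accepts(state):
    return torch.rand(()) < 0.5  # stand-in for real accept/reject feedback

for step in range(1000):
    state = torch.randn(8)                      # stand-in for code-context features
    dist = torch.distributions.Categorical(logits=policy(state))
    action = dist.sample()                      # 0 = stay silent, 1 = suggest

    if action.item() == 1:
        reward = 1.0 if simulated_user_accepts(state) else -1.0
    else:
        reward = 0.0                            # no suggestion shown, no feedback

    # Policy gradient: scale the action's log-probability by its reward.
    loss = -dist.log_prob(action) * reward
    opt.zero_grad()
    loss.backward()
    opt.step()
```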

This method requires ‘on-policy’ data, which is feedback collected from the model that is currently being used. Cursor addressed this by deploying new checkpoints to users multiple times a day and retraining the model quickly on fresh interactions. 

“Currently, it takes us 1.5 to 2 hours to roll out a checkpoint and collect the data for the next step. While this is fast relative to what is typical in the AI industry, there is still room to make it much faster,” Cursor stated. 
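That rollout cadence can be pictured as a simple loop: serve the current checkpoint, collect feedback produced by that checkpoint alone, update the model on it, and repeat. The toy sketch below only illustrates that shape; every function and number in it is a stand-in, since Cursor’s deployment pipeline is not public.

```python
# Toy on-policy cycle: each update uses only feedback from the checkpoint just served.
import random

model_weights = 0.0  # stand-in for the Tab model's parameters

def serve_and_collect(weights, n_requests=100):
    """Pretend to serve a checkpoint and record +1 (accepted) / -1 (rejected) feedback."""
    return [random.choice([1, -1]) for _ in range(n_requests)]

def train_step(weights, rewards, lr=0.01):
    """Pretend policy update: nudge weights toward the average reward."""
    return weights + lr * sum(rewards) / len(rewards)

# Each iteration mirrors one rollout cycle (roughly 1.5 to 2 hours in Cursor's case).
for rollout in range(5):
    rewards = serve_and_collect(model_weights)
    model_weights = train_step(model_weights, rewards)
```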

Cursor said the Tab model runs on every user action on the platform, handling over 400 million requests per day. “We hope this improves your coding experience and plan to develop these methods further in the future,” it said.

“Online RL is one of the most exciting directions for the field, and I’ve been incredibly impressed with Cursor being seemingly the first to implement it successfully at scale with a frontier capability,” an engineer who works on post-training at OpenAI wrote on X.

In June, Cursor’s parent company Anysphere announced that it had raised $900 million at a $9.9 billion valuation led by Thrive Capital, Accel, Andreessen Horowitz (a16z) and DST. 

The company also launched a $200 monthly ‘Ultra’ plan, which promises 20 times the usage of the $20-a-month Pro tier.

In the same month, Cursor also received a platform update that added automatic code review capabilities, memory features and one-click setup of Model Context Protocol (MCP) servers.




