
Cursor is Using Real-Time Reinforcement Learning to Improve Suggestions for Developers



Cursor, an AI-powered coding platform, has announced an upgrade for its Tab model—the autocomplete system that provides suggestions for developers. 

The company stated that this upgrade reduces low-quality suggestions while boosting accuracy, resulting in “21% fewer suggestions than the previous model while having a 28% higher acceptance rate”.

“Achieving a high accept rate isn’t just about making the model smarter, but also knowing when to suggest and when not to,” Cursor said in a blog post.

To solve the problem, Cursor considered training a separate model to predict whether a suggestion would be accepted or not. Cursor referenced a 2022 research study in which this method was used with GitHub Copilot. 

The study employed a logistic regression filter over features such as the programming language, recent acceptance history and trailing characters, hiding suggestions that scored below a threshold.
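
For intuition, here is a minimal sketch of that kind of acceptance filter. The feature set, toy data and 0.5 threshold below are illustrative assumptions, not the exact setup from the 2022 study.

```python
# Minimal sketch of a logistic-regression acceptance filter.
# Features and threshold are illustrative, not the study's exact setup.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [language_id, recent_accept_rate, trailing_char_count]
X = np.array([
    [0, 0.80, 12],
    [1, 0.10,  3],
    [0, 0.55,  7],
    [1, 0.90, 20],
])
y = np.array([1, 0, 1, 1])  # 1 = the suggestion was accepted

filter_model = LogisticRegression().fit(X, y)

def should_show(features, threshold=0.5):
    """Hide suggestions whose predicted accept probability is low."""
    p_accept = filter_model.predict_proba([features])[0, 1]
    return p_accept >= threshold

print(should_show([1, 0.15, 4]))  # low predicted acceptance -> likely hidden
```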

While Cursor stated that the solution was viable in terms of predicting whether a user would accept a suggestion or not, the AI coding platform noted, “We wanted a more general mechanism that reused the powerful representation of the code learned by the Tab model.” 

“Instead of filtering out bad suggestions, we wanted to alter the Tab model to avoid producing bad suggestions in the first place,” added Cursor. 

Cursor therefore turned to policy gradient methods, a reinforcement learning (RL) approach. The model receives a reward when a suggestion is accepted, a penalty when it is rejected, and nothing when it chooses to stay silent.
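
A toy REINFORCE-style sketch of that reward scheme follows. The tiny policy network and simulated user are stand-ins for illustration only, not Cursor’s actual Tab model or training stack; in the real loop, the feedback would come from live users running the latest checkpoint (the on-policy data discussed next).

```python
# Toy policy-gradient (REINFORCE) loop with Cursor-style rewards:
# +1 if a shown suggestion is accepted, -1 if rejected, 0 for silence.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def user_feedback(context):
    # Stand-in for a real user: accepts when the context "looks good".
    return 1.0 if context.sum() > 0 else -1.0

for step in range(1000):
    context = torch.randn(8)                    # editor-state features
    dist = torch.distributions.Categorical(logits=policy(context))
    action = dist.sample()                      # 0 = suggest, 1 = stay silent
    reward = user_feedback(context) if action.item() == 0 else 0.0
    loss = -dist.log_prob(action) * reward      # REINFORCE estimator
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```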

This method requires ‘on-policy’ data: feedback gathered from the version of the model that is currently deployed. Cursor addressed this by rolling out new checkpoints to users multiple times a day and quickly retraining the model on the fresh interactions.

“Currently, it takes us 1.5 to 2 hours to roll out a checkpoint and collect the data for the next step. While this is fast relative to what is typical in the AI industry, there is still room to make it much faster,” Cursor stated. 

Cursor said the Tab model runs on every user action on the platform, handling over 400 million requests per day. “We hope this improves your coding experience and plan to develop these methods further in the future,” it said.

“Online RL is one of the most exciting directions for the field, and I’ve been incredibly impressed with Cursor being seemingly the first to implement it successfully at scale with a frontier capability,” an engineer who works on post-training at OpenAI wrote on X.

In June, Cursor’s parent company Anysphere announced that it had raised $900 million at a $9.9 billion valuation led by Thrive Capital, Accel, Andreessen Horowitz (a16z) and DST. 

The company also launched a $200 monthly ‘Ultra’ plan, which promises 20x more usage than the $20-a-month Pro tier.

In the same month, Cursor also received a platform update that added automatic code review capabilities, memory features and one-click setup of Model Context Protocol (MCP) servers.






OpenAI Introduces GPT-5-Codex, Expands Codex Integration Across Developer Tools



OpenAI has launched GPT-5-Codex, an upgraded version of GPT-5 designed for software engineering tasks, alongside major updates to its Codex platform. The new model is now the default for cloud tasks and code reviews and can also be used locally via Codex CLI and IDE extensions.

“Codex moves closer to what we’ve been building toward all along—a teammate that understands your context, works alongside you and reliably takes on work for your team,” OpenAI said in its announcement.

GPT-5-Codex was trained on complex engineering tasks such as debugging, feature development, large-scale refactoring and code reviews. In tests, it demonstrated a 51.3% accuracy rate on code refactoring tasks compared to GPT-5’s 33.9%. 

The model can dynamically adjust how much time it spends on tasks, ranging from short sessions to working independently for more than seven hours.

OpenAI reported that GPT-5-Codex generates fewer incorrect review comments than GPT-5 (4.4% vs. 13.7%) and produces more high-impact comments (52.4% vs. 39.4%). It has also been deployed internally at OpenAI, where the company said it reviews “the vast majority of its PRs, catching hundreds of issues every day—often before a human review begins”.

Developers can now use Codex across multiple environments, including the terminal, IDEs like VS Code, GitHub, the web and even mobile. Updates include a rebuilt Codex CLI with support for screenshots and diagrams, a task to-do list, improved terminal UI and three new approval modes for managing access. The new IDE extension integrates cloud tasks directly into editors, allowing seamless context switching.

Codex also now offers code review capabilities directly on GitHub. Developers can trigger reviews by mentioning “@codex review” in a pull request and can specify focus areas such as security vulnerabilities.
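
For example, a developer could comment “@codex review for security vulnerabilities” on a pull request to steer the review toward security issues; the exact wording of the focus instruction here is illustrative.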

Early adopters include Cisco Meraki, Duolingo, Ramp, Vanta, Virgin Atlantic and Gap. 

“With Codex, I offloaded the refactoring and test generation while focusing on other priorities,” said Tres Wong-Godfrey, tech lead at Cisco Meraki. “It produced high-quality, fully tested code that I could quickly hand back—keeping the feature on schedule without adding risk.”

To improve security, Codex runs in sandboxed environments by default, with network access disabled unless explicitly approved. Developers can configure access levels depending on risk tolerance. OpenAI emphasised that Codex should serve as an additional reviewer, not a replacement for human review.

Codex is available under ChatGPT Plus, Pro, Business, Edu and Enterprise plans. Pro supports full workweeks of usage, while Enterprise plans provide pooled credits for teams. API availability for GPT-5-Codex is planned.




Why Cybersecurity is More Important for Data Science Today Than Ever




 

Data science has evolved from academic curiosity to business necessity. Machine learning models now approve loans, diagnose diseases, and guide autonomous vehicles. But with this widespread adoption comes a sobering reality: these systems have become prime targets for cybercriminals.

As organizations accelerate their AI investments, attackers are developing sophisticated techniques to exploit vulnerabilities in data pipelines and machine learning models. The result is clear: cybersecurity has become inseparable from data science success.

 

The New Ways You Can Get Hit

 
Traditional security focused on protecting servers and networks. Now? The attack surface is far more complex. AI systems create vulnerabilities that did not exist before.

Data poisoning attacks are subtle. Attackers corrupt training data in ways that often go unnoticed for months. Unlike obvious hacks that trigger alarms, these attacks quietly undermine models—for example, teaching a fraud detection system to ignore certain patterns, effectively turning the AI against its own purpose.
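
To see how quietly this can happen, here is a toy label-flipping sketch; the dataset, the attacker’s trigger pattern and the linear model are all invented for illustration.

```python
# Illustrative label-flipping poisoning of a toy fraud classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + X[:, 1] > 1).astype(int)   # 1 = fraudulent transaction

poisoned = y.copy()
target = X[:, 0] > 1.5                    # the attacker's chosen pattern
poisoned[target & (y == 1)] = 0           # flip fraud labels on that pattern

clean_model = LogisticRegression().fit(X, y)
dirty_model = LogisticRegression().fit(X, poisoned)

probe = np.array([[2.0, 0.5, 0.0, 0.0, 0.0]])  # matches the attacker's pattern
print(clean_model.predict(probe), dirty_model.predict(probe))  # often [1] vs [0]
```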

Then there are adversarial attacks during real-time use. Researchers have shown how small stickers on road signs can trick Tesla’s systems into misreading stop signs. These attacks exploit the way neural networks process information, exposing critical weaknesses.
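
The canonical technique behind such attacks is the fast gradient sign method (FGSM). Below is a minimal sketch, assuming a placeholder linear model rather than a real traffic-sign classifier.

```python
# Minimal FGSM sketch: a tiny perturbation that can flip a prediction.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in "stop sign"
label = torch.tensor([0])                              # its true class

loss = nn.functional.cross_entropy(model(image), label)
loss.backward()                                        # gradient w.r.t. the input

epsilon = 0.03  # perturbation budget: small enough to look unchanged
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print(model(image).argmax(1), model(adversarial.detach()).argmax(1))
```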

Model theft is a new form of corporate espionage. Valuable machine learning models that cost millions to develop are being reverse-engineered through systematic queries. Once stolen, competitors can deploy them or use them to identify weak spots for future attacks.
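
A sketch of the extraction idea, with toy models standing in for a victim worth millions: the attacker never sees the training data, only input/output pairs.

```python
# Model extraction via systematic queries: train a surrogate on the
# victim's predictions. Both models here are toys for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X_private = rng.normal(size=(500, 4))
y_private = (X_private[:, 0] > 0).astype(int)
victim = RandomForestClassifier(random_state=0).fit(X_private, y_private)

# The attacker only has query access: input -> prediction pairs.
queries = rng.normal(size=(2000, 4))
stolen_labels = victim.predict(queries)
surrogate = DecisionTreeClassifier().fit(queries, stolen_labels)

test = rng.normal(size=(200, 4))
agreement = (surrogate.predict(test) == victim.predict(test)).mean()
print(f"surrogate matches victim on {agreement:.0%} of inputs")
```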

 

Real Stakes, Real Consequences

 
The consequences of compromised AI systems extend far beyond data breaches. In healthcare, a poisoned diagnostic model could miss critical symptoms. In finance, manipulated trading algorithms could trigger market instability. In transportation, compromised autonomous systems could endanger lives.

We’ve already seen troubling incidents. Flawed training data forced Tesla to recall vehicles when their AI systems misclassified obstacles. Prompt injection attacks have tricked AI chatbots into revealing confidential information or generating inappropriate content. These are not distant threats—they are happening today.

Perhaps most concerning is how accessible these attacks have become. Once researchers publish attack techniques, they can often be automated and deployed at scale with modest resources.

Here is the problem: traditional security measures were not designed for AI systems. Firewalls and antivirus software cannot detect a subtly poisoned dataset or identify an adversarial input that looks normal to human eyes. AI systems learn and make autonomous decisions, which creates attack vectors that do not exist in conventional software. This means data scientists need a new playbook.

 

How to Actually Protect Yourself

 
The good news is you don’t need a PhD in cybersecurity to improve your security posture significantly. Here’s what works:
Lock down your data pipelines first. Treat datasets as valuable assets. Use encryption, verify data sources, and implement integrity checks to detect tampering. A compromised dataset will always produce a compromised model, regardless of architecture.

Test like an attacker. Beyond measuring accuracy on test sets, probe your models with unexpected inputs and adversarial examples. Leading security platforms provide tools to identify vulnerabilities before deployment.

Control access ruthlessly. Apply least privilege principles to both data and models. Use authentication, rate limiting, and monitoring to manage model access. Watch for unusual usage patterns that may indicate abuse.

Monitor continuously. Deploy systems that detect anomalous behavior in real time. Sudden performance drops, data distribution shifts, or unusual query patterns can all signal potential attacks. A short sketch of the integrity-check and monitoring ideas follows this list.
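
As promised, a minimal sketch of two of the practices above: a hash-based dataset integrity check and a crude distribution-shift monitor. The file handling and the z-score threshold are illustrative assumptions.

```python
import hashlib
import numpy as np

def dataset_fingerprint(path):
    """SHA-256 of the raw file; record it when the data is first approved."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def verify_dataset(path, expected_hash):
    """Refuse to train if the file changed since it was approved."""
    if dataset_fingerprint(path) != expected_hash:
        raise RuntimeError(f"{path} has been modified since approval")

def drift_alert(train_features, live_features, threshold=3.0):
    """Flag live traffic whose per-feature mean drifts far from training."""
    mu, sigma = train_features.mean(0), train_features.std(0) + 1e-9
    z = np.abs((live_features.mean(0) - mu) / sigma)
    return bool((z > threshold).any())

train = np.random.default_rng(2).normal(size=(1000, 8))
live = train + 5.0                 # simulated shifted traffic
print(drift_alert(train, live))    # True: the distribution has moved
```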

 

Building Security Into Your Culture

 
The most important shift is cultural. Security cannot be bolted on after the fact — it must be integrated throughout the entire machine learning lifecycle.

This requires breaking down silos between data science and security teams. Data scientists need basic security awareness, while security professionals must understand AI system vulnerabilities. Some organizations are even creating hybrid roles that bridge both domains.

You don’t need every data scientist to be a security expert, but you do need security-conscious practitioners who account for potential threats when building and deploying models.

 

Looking Forward

 
As AI becomes more pervasive, cybersecurity challenges will intensify. Attackers are investing heavily in AI-specific techniques, and the potential rewards from successful attacks continue to grow.

The data science community is responding. New defensive techniques such as adversarial training, differential privacy, and federated learning are emerging. Take adversarial training, for example — it works like inoculation by deliberately exposing a model to attack examples during training, enabling it to resist them in practice. Industry initiatives are developing security frameworks specifically for AI systems, while academic researchers are exploring new approaches to robustness and verification.
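
A compact sketch of that inoculation loop on a toy model and synthetic data; the batch shapes and 0.1 perturbation budget are illustrative.

```python
# Adversarial training sketch: augment each batch with FGSM-perturbed
# copies so the model learns to classify them correctly too.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    x = torch.randn(64, 20)
    y = (x[:, 0] > 0).long()

    # Craft adversarial copies of this batch with one FGSM step.
    x_adv = x.clone().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    x_adv = (x_adv + 0.1 * x_adv.grad.sign()).detach()

    # Train on clean and adversarial examples together.
    optimizer.zero_grad()
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
```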

Security is not a constraint on innovation — it enables it. Secure AI systems earn greater trust from users and regulators, opening the door for broader adoption and more ambitious applications.

 

Wrapping Up

 
Cybersecurity has become a core competency for data science, not an optional add-on. As models grow more powerful and widespread, the risks of insecure implementations expand exponentially. The question is not whether your AI systems will face attacks, but whether they will be ready when those attacks occur.

By embedding security into data science workflows from day one, we can ensure that AI innovations remain both effective and trustworthy. The future of data science depends on getting this balance right.
 
 

Vinod Chugani was born in India and raised in Japan, and brings a global perspective to data science and machine learning education. He bridges the gap between emerging AI technologies and practical implementation for working professionals, creating accessible learning pathways for complex topics like agentic AI, performance optimization, and AI engineering. He focuses on practical machine learning implementations and mentors the next generation of data professionals through live sessions and personalized guidance.




NextEra, ServiceNow Partner to Boost Digital Transformation Across Middle East



NextEra, a joint venture between LTIMindtree and Aramco Digital, has entered a partnership with ServiceNow, the AI platform for digital transformation, to drive large-scale digital transformation across the Kingdom of Saudi Arabia and the broader Middle East and North Africa (MENA) region. 

As part of this partnership, NextEra will set up a proximity centre at the Imam Abdulrahman Bin Faisal University in Dammam, and ServiceNow will set up a centre of excellence to digitally train and equip the local workforce, according to an official statement.

This partnership brings together ServiceNow’s intelligent workflow orchestration and NextEra’s robust services capability.

A key advantage is the availability of Agentic Central offerings built on ServiceNow and its ecosystem, the announcement said.

The partnership will accelerate the deployment of generative AI and agentic automation across enterprise environments by leveraging core ServiceNow functionality and NextEra’s strong services capability. 

It will also address some of the region’s most critical requirements: an Arabic user interface and strongly localised solutions that meet unique regional needs and business challenges.

The partnership will enhance ServiceNow’s presence across the MENA region. 

A key focus of the partnership is the development of industry-specific platforms that address real-world challenges in sectors like energy, BFSI and giga projects. 

These platforms are intended to deliver measurable impact, enabling organisations to modernise legacy systems, enhance agility, upgrade CRM and employee experience platforms, and consequently unlock new levels of operational excellence.

“This partnership represents a bold step forward in our mission to deliver transformative outcomes for enterprises in the Kingdom. This also reflects our commitment to creating new job opportunities and nurturing technology talent in the region,” Dina Abo-Onoq, CEO of NextEra and executive VP of LTIMindtree, stated. 

Saif M Mashat, vice president of Middle East and Africa at ServiceNow, said, “Together, we’re enabling customers with intelligent workflows, scalable platforms and AI-powered solutions that unlock new levels of agility and growth.” 


