AI Research

OpenAI’s new ChatGPT Agent can control an entire computer and do tasks for you

OpenAI is going all in on the most-hyped trend in AI right now: AI agents, or tools that go a step beyond chatbots to complete complex, multi-step tasks on a user’s behalf. The company on Thursday debuted ChatGPT Agent, which it bills as a tool that can complete work on your behalf using its own “virtual computer.”

In a briefing and demo with The Verge, Yash Kumar and Isa Fulford — product lead and research lead on ChatGPT Agent, respectively — said it’s powered by a new model that OpenAI developed specifically for the product. The company said the new tool can perform tasks like looking at a user’s calendar to brief them on upcoming client meetings, planning and purchasing ingredients to make a family breakfast, and creating a slide deck based on its analysis of competing companies.

The model behind ChatGPT Agent, which has no specific name, was trained on complex tasks that require multiple tools — like a text browser, visual browser, and terminal where users can import their own data — via reinforcement learning, the same technique used for all of OpenAI’s reasoning models. OpenAI said that ChatGPT Agent combines the capabilities of both Operator and Deep Research, two of its existing AI tools.

To develop the new tool, the company combined the teams behind both Operator and Deep Research into one unified team. Kumar and Fulford told The Verge that the new team is made up of between 20 and 35 people across product and research.

In the demo, Kumar and Fulford demonstrated potential use cases for ChatGPT Agent, like asking it to plan a date night by connecting to Google Calendar to see when the user has a free evening, and then cross-referencing OpenTable to find openings at certain types of restaurants. They also showed how a user could interrupt the process by adding, say, another restaurant category to search for. Another demonstration showed how ChatGPT Agent could generate a research report on the rise of Labubus versus Beanie Babies.

Fulford said she enjoyed using it for online shopping because the combination of tech behind Deep Research and Operator worked better and was more thorough than trying the process solely using Operator. And Kumar said he had begun using ChatGPT Agent to automate small parts of his life, like requesting new office parking at OpenAI every Thursday instead of showing up Monday having forgotten to request it with nowhere to park.

Kumar said that since ChatGPT Agent has access to “an entire computer” instead of just a browser, they’ve “enhanced the toolset quite a bit.”

According to the demo, though, the tool can be a bit slow. When asked about latency, Kumar said their team is more focused on “optimizing for hard tasks” and that users aren’t meant to sit and watch ChatGPT Agent work.

“Even if it takes 15 minutes, half an hour, it’s quite a big speed-up compared to how long it would take you to do it,” Fulford said, adding that OpenAI’s search team is more focused on low-latency use cases. “It’s one of those things where you can kick something off in the background and then come back to it.”

Before ChatGPT Agent does anything “irreversible,” like sending an email or making a booking, it asks for permission first, Fulford said.

Since the model behind the tool has increased capabilities, OpenAI said it has activated the safeguards it created for “high biological and chemical capabilities,” even though the company said it does not have “direct evidence that the model could meaningfully help a novice create severe biological or chemical harm” in the form of weapons. Anthropic in May activated similar safeguards for its launch of one of its Claude models, Opus 4.

When asked whether the tool is permitted to perform financial transactions, Kumar said those actions are restricted “for now.” There is also an additional protection called Watch Mode: on certain categories of webpages, such as financial sites, the user must stay on the tab ChatGPT Agent is operating in, or the tool stops working.

OpenAI will start rolling out the tool today to Pro, Plus, and Team users — pick “agent mode” in the tools menu or type “/agent” to access it — and the company said it will make it available to ChatGPT Enterprise and Education users later this summer. There’s no rollout timeline yet for the European Economic Area and Switzerland.

The concept of AI agents has been a buzzworthy trend in the industry for years. The ideal developers are working toward is something like Iron Man’s J.A.R.V.I.S., a tool that can perform specific job functions, check people’s calendars for the best time to schedule an event, purchase a gift based on a friend’s preferences, and more. At the moment, though, agents remain largely limited to assisting with coding and compiling research reports.

The term “AI agent” became more common to investors and tech executives in 2023 and quickly picked up speed, especially after fintech company Klarna announced in February 2024 that in just one month of operation, its own AI agent had handled two-thirds of its customer service chats — the equivalent of 700 full-time human workers. From there, executives at Amazon, Meta, Google, and more started mentioning their AI agent goals on earnings call after earnings call. And since then, AI companies have been strategically hiring to reach those goals: Google, for instance, last week hired Windsurf’s CEO, co-founder, and some R&D team members to help further its agentic AI projects.

OpenAI’s debut of ChatGPT Agent follows its January release of Operator, which the company billed as “an agent that can go to the web to perform tasks for you” since it was trained to be able to handle the internet’s buttons, text fields, and more. It’s also part of a larger trend in AI, as companies large and small chase AI agents that will capture the attention of consumers and ideally become habits. Last October, Anthropic, the Amazon-backed AI startup behind Claude, released a similar tool called “Computer Use,” which it billed as a tool that could use a computer the same way a human can in order to complete tasks on a user’s behalf. Multiple AI companies, including OpenAI, Google, and Perplexity, also offer an AI tool that all three have dubbed Deep Research, denoting an AI agent that can write sizable analyses and research reports on anything a user wants.




School Cheating: Research Shows AI Has Not Increased Its Scale

Changes in Learning: Cheating and Artificial Intelligence

Reading the news, one gets the impression that every student is using artificial intelligence to cheat. Headlines in newspapers such as The Wall Street Journal and The New York Times frequently pair “cheating” with “AI.” Many stories, like a recent piece in New York Magazine, feature students who openly describe using generative AI to complete their assignments.

Amid the rise of such headlines, it can seem that education itself is under threat: that traditional exams, readings, and essays are riddled with AI-enabled cheating, and that in the worst cases students use tools like ChatGPT to write entire assignments.

That picture is alarming, but it is only part of the story.

Cheating has always existed. As an educational researcher studying cheating with AI, I can say that our early data suggest AI has changed the methods of cheating, but not necessarily the scale of cheating that was already taking place.

This does not mean that cheating with AI is not a serious problem. It raises important questions: Will AI cause cheating to increase in the future? Does using AI in schoolwork count as cheating? And how should parents and schools respond to prepare children for a life significantly different from our own experience?

The Pervasiveness of Cheating

Cheating has existed for a very long time, probably for as long as educational institutions themselves. In the 1990s and 2000s, Don McCabe, a business school professor at Rutgers University, documented high levels of cheating among students. One of his studies found that up to 96% of business students admitted to engaging in ‘cheating behavior’.

McCabe used anonymous surveys in which students reported how often they engaged in cheating. These surveys consistently found high cheating rates, ranging from 61.3% to 82.7% before the pandemic.

Cheating in the AI Era

Has cheating increased with AI? Analyzing data from more than 1,900 students at three schools before and after the introduction of ChatGPT, we found no significant change in cheating behavior. Specifically, 11% of students reported using AI to write their papers.

Our work shows that AI is becoming a popular tool for cheating, but many questions remain. In follow-up surveys in 2024 and 2025 covering between 28,000 and 39,000 students, 15% admitted to using AI to produce their work.

Challenges of Using AI

Students are accustomed to using AI and understand that there is a line between acceptable and unacceptable use. Many report using AI to get out of doing homework, but also to generate ideas for creative work.

Students also notice that their teachers use AI themselves, and many consider it unfair to be punished for doing the same.

What Will AI Use Mean for Schools?

The modern education system was not designed with generative AI in mind. Educational assignments have traditionally been treated as evidence of a student’s own intensive work, but that assumption is increasingly blurred.

It is important to understand the main reasons students cheat and how cheating relates to stress, time management, and the curriculum. Guarding against cheating matters, but teaching methods and the use of AI in classrooms also need to be rethought.

Four Future Questions

AI has not created cheating in educational institutions; it has only opened up new avenues for it. Here are questions worth considering:

  • Why do students resort to cheating? The stress of schoolwork may push them toward easier shortcuts.
  • Do teachers follow their own rules? Holding students to standards that teachers themselves ignore can distort perceptions of acceptable AI use.
  • Are the rules about AI clearly stated? Policies on when AI use is acceptable are often vague.
  • What do students need to know for an AI-rich future? Teaching methods must adapt to the new reality in good time.

The future of education in the age of AI requires an open dialogue between teachers and students. This will allow for the development of new skills and knowledge necessary for successful learning.




Artificial intelligence helps break barriers for Hispanic homeownership | National News | ottumwacourier.com






Artificial intelligence helps break barriers for Hispanic homeownership – Temple Daily Telegram



