Tools & Platforms
Coastal Bend teachers learn to incorporate AI and technology in classrooms

ROBSTOWN, Texas — Local educators gathered at Robstown ISD for “Tech Connect,” a conference focused on integrating technology and artificial intelligence tools into their teaching methods.
“ChatGPT is very popular now,” said Janet Corder, a presenter at the Tech Connect conference.
The reality is that AI isn’t going anywhere anytime soon. That’s why local teachers came together for the conference.
“So a lot of our sessions are actually circled around AI and it’s to give the chance for our educators, everyone from our administrators to our teachers to our paraprofessionals to know what is out there,” Rachel Medrano, Robstown ISD coordinator of instructional technology, said.
Medrano said it’s time for teachers to embrace this new digital age.
“Oh, I can use it for this, I can use it for that, I can use it to help me lesson plan, I can help it design my flyer, and it does it within seconds,” Medrano said.
Medrano said there are tools that educators can incorporate to lessen their workload while still helping their students.
“If I leave this tool open for my student, it’s like having my personal assistant help that student,” Melissa Summerford, a presenter at Tech Connect, said while demonstrating how to use the program Kami.
Isabel Lopez is about to start her first year teaching at Agua Dulce. She said she’s trying to learn all she can ahead of the school year.
“New students, they’re accustomed to it, like that’s all that they’ve ever known, and so I think it’s better to know how to work with that and know like the limitations,” Lopez said.
Even though she didn’t have AI growing up, Lopez said she’s keeping an open mind as she enters her teaching career.
“AI technology, all that stuff is eventually gonna take over like the traditional way, like how I did it, so I just think it’s better to just embrace it and just like learn as much as I can about it,” Lopez said.
The next Tech Connect conference will be held next July.
This story was reported on-air by a journalist and has been converted to this platform with the assistance of AI. Our editorial team verifies all reporting on all platforms for fairness and accuracy.
Tools & Platforms
Anthropic settles, Apple sued: Tech giant faces lawsuit over AI copyright dispute

Apple has been drawn into the growing legal battle over the use of copyrighted works in artificial intelligence training, after two authors filed a lawsuit in the United States accusing the technology giant of misusing their books.
Claims of pirated dataset use
The proposed class action, lodged on Friday in federal court in Northern California, alleges that Apple copied protected works without permission, acknowledgement or compensation. Authors Grady Hendrix and Jennifer Roberson claim their books were included in a dataset of pirated material that Apple allegedly used to train its “OpenELM” large language models.
The filing argues that Apple has failed to seek consent or provide remuneration, despite the commercial potential of its AI systems. Both the company and the authors’ legal representatives declined to comment when approached.
Part of a wider copyright battle?
The case adds to a mounting wave of litigation targeting technology firms over intellectual property in the AI age. Earlier this week, AI start-up Anthropic disclosed it had reached a $1.5 billion settlement with a group of authors who accused the company of using their books to develop its Claude chatbot without authorisation. The payout, which Anthropic agreed to without admitting liability, has been described by lawyers as the largest publicly reported copyright settlement to date.
Lawyers who represented the authors against Anthropic described the accord as unprecedented. “This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from pirate websites is wrong,” said Justin Nelson of Susman Godfrey.
The settlement is among the first to be reached in a wave of copyright lawsuits filed against AI firms, including Microsoft, OpenAI, Meta and Midjourney, over their use of proprietary online content. Some competitors have pre-emptively struck licensing deals with publishers to avoid litigation; Anthropic has not disclosed any such agreements.
(With inputs from Reuters)
Tools & Platforms
‘Your Work Now Shapes Your Life Decades Ahead’: Anna Gagarina, Career Expert, on Using AI to Land the Right Professional Path

By the end of 2025, the AI market in HR is set to reach nearly $7 billion—almost a billion more than in 2024. Global corporations are ready to invest heavily in technology to attract top talent. But amid the surge of AI adoption, recruitment is going through turbulent times. What was once a predominantly manual process has transformed into a high-tech operation—both for candidates and employers—so much so that the market sometimes finds AI competing not with humans, but with other AI systems.
Understanding these new trends has become crucial both for career consultants and job seekers. How to avoid missing out on promising positions in an algorithm-driven world, where to find your place in the evolving tech landscape, and how to turn AI to your advantage—all of this is explained by Anna Gagarina, career development expert and founder of Job Mentor, an AI platform for career guidance.
Anna, before you began guiding others in effective job searching, you went through an extensive personal journey exploring different countries and careers. When and how did that journey begin?
I often joke that I’m the classic millennial from the memes—the one who had no clue what she wanted to be when she grew up. My career choice was entirely spontaneous: I didn’t go into business or technology; I studied history. I dedicated seven years of my life to it, winning competitions, publishing academic papers, and presenting at conferences—but even during university, I realised that teaching probably wasn’t for me. Then I wondered: who needs all this?
This personal crisis coincided with my first encounter with the business world. I had no prior experience—neither personal nor family-related. Everything started from scratch: I studied the profession, explored case studies, and observed people and companies. I took short courses in sales and marketing, and my head was bursting with information so different from the academic world I was used to. But it was during an internship at an educational company that I truly discovered a new world. I think that was my first real breakthrough in mindset—a step onto the career path I’ve been following for over 11 years.
The relocations were truly pivotal moments for me. In 2020, I was invited to work in Ireland, where I first met families who had been running their businesses for generations. That experience gave me a key insight: “It’s possible to build a business for the long term, creating a legacy and a community around it.”
Another breakthrough came to me after moving to the U.S. Here, I saw what real competition for talent looks like, how to plan a career strategically, and how to consciously and methodically build a path to success, brick by brick. In Russia, decisions are often made with a “just don’t miss out” mindset; here, people follow the principle: “Your career today is an investment in your wealth 50 years from now.”
And so much depends on your ability to build relationships—and how you do it. You can be extremely talented, but if you can’t forge connections, your talent alone won’t take you far.
At what point did you decide to focus on AI when it comes to attracting top talent?
Honestly, I’m not an early adopter or a tech evangelist.
By nature—both personally and professionally—I’ve always been a bit of a conservative. The real turning point came when I started working with companies from Silicon Valley. At first, I didn’t even handle ChatGPT very well—but I quickly learned, and then it hit me: “If even a total tech novice like me can master this, it’s clear this technology is going to change the world.”
From there, client demand pushed me further. I began to see exactly which talents the market needed, where investments were flowing, and what new opportunities and roles were emerging as these technologies advanced. That’s when I realised: as a career consultant, I simply had to move toward AI and emerging tech—because they are shaping the near future of careers.
My first large-scale experiments with AI tools started in the corporate space. As a recruiter, for example, AI-driven sourcing lets me identify more than 100 candidates a day for leading startups and draft over 50 personalized professional emails—even in a language that isn’t my own. This proactive approach helps companies hire high-quality, high-potential talent, where the balance of time and quality is impossible to overstate: every day without a qualified employee can cost a company tens of thousands of dollars. AI also delivers another critical edge—speed. With it, I can create extensive training programs, learning materials, and simulations in just hours instead of months.
You’ve developed over 40 corporate programs and advised more than 1,000 HR professionals. Which project was a true breakthrough for you?
I’d say it was a project tied to an employee career management course, where I worked with HR specialists from large companies. A single 20-minute consultation with me could evolve into a full-scale project that was later implemented across companies with 10,000 or even 20,000 employees. This fundamentally changed my mindset: as an individual consultant, you work one-on-one with a client. But when your idea scales within a company of thousands, you’re genuinely influencing the system.
One outcome of this realization was the creation of my project, Job Mentor. The idea stemmed from a very personal challenge: I ran into the classic consulting problem—my resources were limited by my time and expertise. Gradually, I began automating processes, starting with reports, content, and analytics.
Over the past two years, I’ve conducted more than 200 career consultations, integrating AI into every step—from defining career paths to refining résumés and identifying the right opportunities. What started as an experiment has grown into a structured system: I begin by introducing clients to AI tools, then provide customized agents that help automate job searches and self-reflection. The result is tangible: I save hours of work, while clients gain something even more valuable—time they can spend with family instead of navigating endless applications. And this was only the first stage of the transition.
At some point, I asked myself: “Could I replace myself entirely?” That’s how the idea was born for a service that doesn’t require my calendar or 15 hours of individual work. Instead, users get a ready-made solution in just 30 minutes of interacting with the system. This drastically lowers the cost of the service and makes it accessible to a much wider audience.
Traditional career coaching doesn’t come cheap. In the U.S., an hour with a consultant averages around $400, while full-service packages often range from $2,000 to $3,000.
By contrast, an AI-powered consultation costs about $100. Of course, no algorithm can fully replace the human connection—but what if you need urgent career support and can’t afford traditional fees? For some, a job is a matter of survival; for others, it’s the chance to unlock potential and achieve a breakthrough in their field. Ultimately, expanding access to career guidance means creating a labor market that is fairer and more transparent for everyone.
How exactly do you replace your involvement? Which AI technologies and tools do you integrate into your work with clients and companies?
We’re a fully AI-based agency, so you could say that almost all of our core technologies fall under the AI umbrella. This includes agents that help automate routine tasks, notetakers that analyse and organise information, as well as tools like Perplexity for deep research and handling large volumes of data.
Such automation brings measurable business results. For example, by reducing the need for manual data processing and admin tasks, we save around $8,000–10,000 per month in operational costs (which would otherwise require 1–2 full‑time specialists). It also significantly reduces classic risks associated with consulting and recruiting businesses, such as knowledge gaps when team members leave, or over-dependency on individual experts.
Additionally, AI allows us to continuously collect and structure career data from 100+ client interactions. Most of this is unstructured information — recorded consultations that often include tens of thousands of words with low repetition and very few clear patterns. Instead of requiring consultants to manually revisit and decode these conversations, our AI instantly analyzes the material, extracts actionable insights, and organizes it for further use.
Thanks to this capability, our consultant can conduct 20–30% more career sessions per month, raising both the speed and the depth of expertise at each stage of the client journey.
Can you share an example of when AI completely transformed recruitment outcomes?
Absolutely—but like any story, there are two sides to the coin: impressive wins and some unexpected headaches.
On the positive side, today’s notetakers do much more than just record interviews like in the old Zoom days—they gather data, take smart notes, and, crucially, learn from company-specific information. This supercharges the recruitment process: from crafting job posts and writing emails to analysing candidates and enhancing communication. For example, emails and text content can now be generated automatically, slashing the recruiter’s time on routine tasks. Natural language sourcing tools allow you to describe your ideal candidate and instantly get relevant profiles—something that used to require complex Boolean searches.
But there’s a flip side. AI has dramatically increased the number of low-quality applications, including spam, making it easy for strong candidates to get lost in the noise. Candidates are using AI to whip up instant responses and cover letters, which only amplifies the flood. Recruiters who used to handle dozens of applications now face hundreds—or even thousands—forcing them to sift, filter, and compete in a whole new landscape. In a sense, AI-driven candidates and AI-powered recruiters are now battling it out on the same field.
How will these changes—AI and automation—impact the global job market in the coming years?
You can roughly divide today’s professions into three groups. The first is non-human: jobs that are already automated or will be soon. Think heavy, dangerous, or repetitive mechanical work—tasks based on algorithmic, repeatable movements. Robots are already taking over these roles, from automated warehouse workers and bartenders to delivery drivers, taxi drivers, and even nail technicians.
The second group is human + AI copilot. These are roles where systems and platforms collect, organise, and analyse data—in medicine, logistics, sales, finance, and education—but the final decision is still made by a human.
Finally, the third group is purely human: top management roles—tasks only a person can perform, overseeing both non-human and human + AI copilot teams.
Technological changes are reshaping all three groups. Essentially, a profession is becoming a platform reflecting your education and professional expertise—the foundation on which everything else, including technology, is built. For example, being an engineer is a profession, but acquiring a more specialised skill set and integrating certain technologies can turn you into, say, a machine learning operations (MLOps) engineer.
Training and learning methods are bound to change. We need to find ways to quickly acquire in-demand skills, specialise, and understand the realities of the modern workplace. AI copilots already help accelerate the junior phase, enabling professionals to move faster into human + AI copilot roles—and eventually reach purely human-level positions.
Yes, lately, we’ve seen waves of layoffs, especially among junior employees, as AI takes over. How can companies balance automation with keeping jobs?
It’s a tricky question. In practice, companies will automate wherever it makes economic sense. If robots or AI can produce a product cheaper than a human, businesses will naturally go for the cost cut. Without regulations or mandatory limits, there’s very little to slow automation down.
Yet the reality is more nuanced. Jobs stick around as long as automating a role costs more than keeping a human. Take giants like Amazon—they pour billions into warehouse and logistics automation, yet remain among the biggest employers in the sector because some tasks are simply cheaper to do by hand.
The future of work is all about reskilling and the shift to gig and project-based careers. Lifelong employment at a single company is disappearing—and the traditional idea of permanent work is fading too. More people will become small independent “businesses,” managing careers like a portfolio of projects and tasks. It’s a world of opportunity, but it demands flexibility, constant upskilling, and the ability to pivot quickly.
What’s the number-one piece of advice you’d give to someone worried that AI will take over their job?
Every time a person frees up time, it gets filled with something new. That’s how new sectors, fields of knowledge, and professions emerge. My advice would be to ask yourself: where do you want to go next? In a year, two, five, ten—and strategically, throughout your life? Where will your mind, creativity, skills, and resources create value and help solve real problems for people?
Which skills will define “new literacy” in the next 10 years?
Even sooner than a decade from now, being literate will mean knowing how to work effectively with AI as a tool. Everyone will need to grasp how these technologies function, what they can—and can’t—do, and how to craft the right prompts—a skill surprisingly few people master today.
Equally crucial will be the ability to design systems and automate routine tasks, orchestrating different agents and tools to maximise efficiency. Basic programming will no longer be just for coders—it will become a must-have for anyone building AI-powered products and services.
The ability to structure information visually and create clear, compelling designs will also be essential, helping ideas and results cut through the noise.
And, of course, critical thinking and the capacity to filter massive streams of information will be indispensable. We’ll need to digest enormous amounts of data and make smart decisions in a constantly shifting landscape. Curiosity, experimentation, and adaptability—being ready to try new approaches and pivot quickly—will become the hallmarks of both professional and personal growth.
Tools & Platforms
Duke University pilot project examining pros and cons of using artificial intelligence in college

DURHAM, N.C. — As generative artificial intelligence tools like ChatGPT have become increasingly prevalent in academic settings, faculty and students have been forced to adapt.
The debut of OpenAI’s ChatGPT in 2022 spread uncertainty across the higher education landscape. Many educators scrambled to create new guidelines to prevent academic dishonesty from becoming the norm in academia, while some emphasized the strengths of AI as a learning aid.
As part of a new pilot with OpenAI, all Duke undergraduate students, as well as staff, faculty, and students across the University’s professional schools, gained free, unlimited access to ChatGPT-4o beginning June 2. The University also announced DukeGPT, a University-managed AI interface that connects users to resources for learning and research and ensures “maximum privacy and robust data protection.”
On May 23, Duke launched a new Provost’s Initiative to examine the opportunities and challenges AI brings to student life. The initiative will foster campus discourse on the use of AI tools and present recommendations in a report by the end of the fall 2025 semester.
The Chronicle spoke to faculty members and students to understand how generative AI is changing the classroom.
Embraced or banned
Although some professors are embracing AI as a learning aid, others have implemented blanket bans and expressed caution regarding the implications of AI on problem-solving and critical thinking.
David Carlson, associate professor of civil and environmental engineering, took a “lenient” approach to AI usage in the classroom. In his machine learning course, the primary learning objective is to utilize these tools to understand and analyze data.
Carlson permits his students to use generative AI as long as they are transparent about their purpose for using the technology.
“You take credit for all of (ChatGPT’s) mistakes, and you can use it to support whatever you do,” Carlson said.
He added that although AI tools are “not flawless,” they can help provide useful secondary explanations of lectures and readings.
Matthew Engelhard, assistant professor of biostatistics and bioinformatics, said he also adopted “a pretty hands-off approach” by encouraging the use of AI tools in his classroom.
“My approach is not to say you can’t use these different tools,” Engelhard said. “It’s actually to encourage it, but to make sure that you’re working with these tools interactively, such that you understand the content.”
Engelhard emphasized that the use of these tools should not prevent students from learning the fundamental principles “from the ground up.” Engelhard noted that students, under the pressure to perform, have incentives to rely on AI as a shortcut. However, he said using such tools might be “short-circuiting the learning process for yourself.” He likened generative AI tools to calculators, highlighting that relying on a calculator hinders one from learning how addition works.
Like Engelhard, Thomas Pfau, Alice Mary Baldwin distinguished professor of English, believes that delegating learning to generative AI means students may lose the ability to evaluate how they receive information and whether it is valid.
“If you want to be a good athlete, you would surely not try to have someone else do the working out for you,” Pfau said.
Pfau recognized the role of generative AI in the STEM fields, but he believes that such technologies have no place in the humanities, where “questions of interpretation … are really at stake.” When students rely on AI to complete a sentence or finish an essay for them, they risk “losing (their) voice.” He added that AI use defeats the purpose of a university education, which is predicated on cultivating one’s personhood.
Henry Pickford, professor of German studies and philosophy, said that writing in the humanities serves the dual function of fostering “self-discovery” and “self-expression” for students. But with increased access to AI tools, Pickford believes students will treat writing as “discharging a duty” rather than working through intellectual challenges.
“(Students) don’t go through any kind of self-transformation in terms of what they believe or why they believe it,” Pickford said.
Additionally, the use of ChatGPT has broadened opportunities for plagiarism in his classes, leading him to adopt a stringent AI policy.
Faculty echoed similar concerns at an Aug. 4 Academic Council meeting, including Professor of History Jocelyn Olcott, who said that students who learn to use AI without personally exploring more “humanistic questions” risk being “replaced” by the technology in the future.
How faculty are adapting to generative AI
Many of the professors The Chronicle interviewed expressed difficulty in discerning whether students have used AI on standard assignments. Some are resorting to a range of alternative assessment methods to mitigate potential AI usage.
Carlson, who shared that he has trouble detecting student AI use in written or coding assignments, has introduced oral presentations to class projects, which he described as “very hard to fake.”
Pickford has also incorporated oral assignments into his class, including having students present arguments through spoken defense. He has also added in-class exams to courses that previously relied solely on papers for grading.
“I have deemphasized the use of the kind of writing assignments that invite using ChatGPT because I don’t want to spend my time policing,” Pickford said.
However, he recognized that ChatGPT can prove useful in generating feedback throughout the writing process, such as when evaluating whether one’s outline is well-constructed.
A ‘tutor that’s next to you every single second’
Students noted that AI chatbots can serve as a supplemental tool to learning, but they also cautioned against over-relying on such technologies.
Junior Keshav Varadarajan said he uses ChatGPT to outline and structure his writing, as well as generate code and algorithms.
“It’s very helpful in that it can explain concepts that are filled with jargon in a way that you can understand very well,” Varadarajan said.
Varadarajan has found it difficult at times to internalize concepts when using ChatGPT because “you just go straight from the problem to the answer” without giving much thought to the problem. Varadarajan acknowledged that while AI can provide shortcuts at times, students should ultimately bear the responsibility for learning and performing critical thinking tasks.
For junior Conrad Qu, ChatGPT is like a “tutor that’s next to you every single second.” He said that generative AI has improved his productivity and helped him better understand course materials.
Both Varadarajan and Qu agreed that AI chatbots come in handy during time crunches or when trying to complete tasks with little effort. However, they said they avoid using AI when it comes to content they are genuinely interested in exploring deeper.
“If it is something I care about, I will go back and really try to understand everything (and) relearn myself,” Qu said.
The future of generative AI in the classroom
As generative AI technologies continue evolving, faculty members have yet to reach consensus on AI’s role in higher education and whether its benefits for students outweigh the costs.
“To me, it’s very clear that it’s a net positive,” Carlson said. “Students are able to do more. Students are able to get support for things like debugging … It makes a lot of things like coding and writing less frustrating.”
Pfau is less optimistic about generative AI’s development, raising concerns that the next generation of high school graduates will come into the college classroom already accustomed to chatbots. He added that many students find themselves at a “competitive disadvantage” when the majority of their peers are utilizing such tools.
Pfau placed the responsibility on students to decide whether the use of generative AI will contribute to their intellectual growth.
“My hope remains that students will have enough self-respect and enough curiosity about discovering who they are, what their gifts are, what their aptitudes are,” Pfau said. “… something we can only discover if we apply ourselves and not some AI system to the tasks that are given to us.”
___
This story was originally published by The Chronicle and distributed through a partnership with The Associated Press.
Copyright © 2025 by The Associated Press. All Rights Reserved.