

Nasa astronauts Butch and Suni finally back on Earth



Rebecca Morelle, Alison Francis and Greg Brosnan

BBC Science


After nine months in space, Nasa astronauts Butch Wilmore and Suni Williams have finally arrived back on Earth.

Their SpaceX capsule made a fast and fiery re-entry through the Earth’s atmosphere, before four parachutes opened to take them to a gentle splashdown off the coast of Florida.

A pod of dolphins circled the craft.

After a recovery ship lifted it out of the water, the astronauts beamed and waved as they were helped out of the hatch, along with fellow crew members astronaut Nick Hague and cosmonaut Aleksandr Gorbunov.

“The crew’s doing great,” Steve Stich, manager, Nasa’s Commercial Crew Program, said at a news conference.

It brings to an end a mission that was supposed to last for just eight days.

It was dramatically extended after the spacecraft Butch and Suni had used to travel to the International Space Station suffered technical problems.

“It is awesome to have crew 9 home, just a beautiful landing,” said Joel Montalbano, deputy associate administrator, Nasa’s Space Operations Mission Directorate.

Thanking the astronauts for their resilience and flexibility, he said SpaceX had been a “great partner”.

The journey home took 17 hours.

The astronauts were helped on to a stretcher, which is standard practice after spending so long in the weightless environment.

They will be checked over by a medical team, and then reunited with their families.

Triumphant – Suni Williams exits the capsule (Image: NASA)

“The big thing will be seeing friends and family and the people who they were expecting to spend Christmas with,” said Helen Sharman, Britain’s first astronaut.

“All of those family celebrations, the birthdays and the other events that they thought they were going to be part of – now, suddenly they can perhaps catch up on a bit of lost time.”

The saga of Butch and Suni began in June 2024.

They were taking part in the first crewed test flight of the Starliner spacecraft, developed by aerospace company Boeing.

But the capsule suffered several technical problems during its journey to the space station, and it was deemed too risky to take the astronauts home.

Starliner returned safely to Earth empty in early September, but it meant the pair needed a new ride for their return.

So Nasa opted for the next scheduled flight: a SpaceX capsule that arrived at the ISS in late September.

It flew with two astronauts instead of four, leaving two seats spare for Butch and Suni’s return.

The only catch was that this flight had a planned six-month mission, extending the astronauts’ stay until now.

The Nasa pair embraced their longer-than-expected stay in space.

Butch Wilmore and Suni Williams have been on the ISS since June 2024 (Image: NASA)

They carried out an array of experiments on board the orbiting lab and conducted spacewalks, with Suni breaking the record for the most time spent spacewalking by a woman. And at Christmas the team dressed in Santa hats and reindeer antlers, sending a festive message for a holiday they had originally planned to spend at home.

And despite the astronauts being described as “stranded”, they never really were.

Throughout their mission there have always been spacecraft attached to the space station to get them – and the rest of those onboard – home if there was an emergency.

Now the astronauts have arrived home, they will soon be taken to the Johnson Space Centre in Houston, Texas, where they will be checked over by medical experts.

Long-duration missions in space take a toll on the body: astronauts lose bone density and suffer muscle loss. Blood circulation is also affected, and fluid shifts can impact eyesight.

It can take a long time for the body to return to normal, so the pair will be given an extensive exercise regime as their bodies re-adapt to living with gravity.

British astronaut Tim Peake said it could take a while to re-adjust.

“Your body feels great, it feels like a holiday,” he told the BBC.

“Your heart is having an easy time, your muscles and bones are having an easy time. You’re floating around the space station in this wonderful zero gravity environment.

“But you must keep up the exercise regime. Because you’re staying fit in space, not for space itself, but for when you return back to the punishing gravity environment of Earth. Those first two or three days back on Earth can be really punishing.”

In interviews while onboard, Butch and Suni have said they were well prepared for their longer-than-expected stay – but there were things they were looking forward to when they got home.

Speaking to CBS last month, Suni Williams said: “I’m looking forward to seeing my family, my dogs and jumping in the ocean. That will be really nice – to be back on Earth and feel Earth.”





How Oakland teachers use — or avoid — AI in the classroom



When Calupe Kaufusi was a freshman at McClymonds High School in West Oakland, he’d use platforms like ChatGPT or Google Gemini for written assignments in his history class. But he quickly learned they weren’t infallible. 

“It became kind of inconvenient,” Kaufusi said. “As I learned more about AI, I learned it wouldn’t give you correct information and we’d have to fact check it.”

Like many students, Kaufusi used generative AI platforms — where users can input a prompt and receive answers in various formats, be it an email text, an essay, or the answers to a test — to get his work done quickly and without much effort. Now a junior, Kaufusi said he’s dialed down his AI use.

Already rampant in college and university settings, artificial intelligence software is also reshaping the K-12 education landscape. Absent a detailed policy in the Oakland Unified School District, individual teachers and schools have been left to navigate how to integrate the technology in their classrooms — or how to try to keep it out. 

McClymonds High School in West Oakland. Credit: Jungho Kim for The Oaklandside

Some teachers told The Oaklandside they are choosing to embrace AI by incorporating it into student projects or using it to assist with their own lesson planning, while others have said they’ve rejected it for its environmental impacts and how it enables students to cut corners. Some teachers are returning to old forms of assessment, such as essays handwritten during class that can’t be outsmarted by the platforms. 

What’s clear to many is that AI platforms are already ubiquitous on the internet and many students are going to use them whether their teachers advise them to or not.

Kaufusi, who is in McClymonds’ engineering pathway, is interested in studying machine learning or software engineering, so he wants to see more of his teachers discuss responsible uses for AI. “They know there’s no way to stop us” from using it, he said, “so they can try to teach us how to use it properly.” 

A new policy in the works

Under current OUSD guidance, published in March, teachers and principals are left to determine whether students are allowed to use AI in their work; if they do, students are required to cite it. The guidance also outlines procedures for teachers to follow if they suspect a student is misusing AI (for example, by representing AI-generated work as their own): first a private conversation with the student, then the collection of evidence, and finally a consultation with colleagues about proper discipline.

Work is underway in Oakland Unified to develop a more comprehensive AI policy for the district, said Kelleth Chinn, the district’s instructional technology coordinator. In his role, he’s been thinking about how to address student use of AI. A former classroom teacher, Chinn can imagine beneficial uses for both students and teachers in the classroom, but he knows teaching students responsible uses for AI doesn’t preclude them from using it in dishonest ways.

“The reason that we need to talk about AI to students is because a lot of students are already using it,” Chinn told The Oaklandside. “In the absence of having any kind of conversations, you’re just leaving this vacuum without guidance for students.”

Any new draft policy would first be evaluated by the school board’s teaching and learning committee before being considered by the full board of directors. VanCedric Williams, chair of that committee, has met with Chinn and his team to discuss potential approaches. Williams, a veteran teacher, said he is hesitant to recommend a policy that would encourage educators to use AI. 

“I do not want to put any expectations for teachers or students to use it or not,” Williams told The Oaklandside. “We’re looking at best practices around the state, what other districts are doing and what pitfalls they’ve incurred.” 

Chinn added that he’s been looking at how colleges and universities are addressing AI. What he’s found is that some professors are turning away from papers and written homework assignments and toward methods like blue book exams and oral presentations that preclude the use of AI.   

‘We just want our kids to be able to critically think’

Some teachers are hesitant to fully embrace the technology, concerned that it could hamper student learning and critical thinking. At Oakland Technical High School, a group of history and English teachers have formed a professional learning community to study AI in education and come up with potential guidance. 

Amanda Laberge and Shannon Carey, who both teach juniors at Oakland Tech, joined the group as AI skeptics. Carey, who has been teaching in OUSD since 1992, sees AI differently than she does other advances in technology that have taken place over the course of her career. 

“A computer is a tool: You can draft your essay and I can put comments on it,” Carey, a history teacher, told The Oaklandside. “Whereas AI, the way many students are using it, is to do their thinking for them.”

Carey noted that after years of a drive to incorporate more tech in the classroom, the tide is turning on cell phones — many schools now have “no smartphone” policies and last year Governor Gavin Newsom signed a law, which goes into effect in 2026, requiring all school districts to prohibit cell phone use during the school day. 

Neither Carey nor Laberge plans to use AI herself, the way some educators use it for grading or lesson planning.

Oakland Technical High School. Credit: Amir Aziz/The Oaklandside

Laberge, who teaches English in Oakland Tech’s race, policy, and law pathway, assigned her students a project encouraging them to think critically about AI. They’ll survey other students on how they use AI, research the cognitive impacts of relying on AI, gain an understanding of how exactly the algorithms and platforms operate, and examine wider societal implications. 

“Our job is to help them develop skills and thinking so as adults they can do whatever they want,” Laberge said. 

Laberge and Carey said they want to see OUSD put together an evidence-based policy around AI use. They mentioned a 2025 MIT study that monitored brain function for groups writing an essay. The authors found that those using a large language model to assist in writing the essay had lower brain activity than those who didn’t, and they had more trouble quoting their own work.

“We just want our kids to be able to critically think and read and write fluently and with grace,” Carey said. “We do not see a way in which AI is going to make that happen.”

Using AI strategically

At Latitude High School in Fruitvale, educators are taking a different approach. Computer science students at the charter school, which emphasizes project-based learning, are incorporating AI into math video games they’re creating for local fourth graders. This is the first year that classes have introduced AI as part of the curriculum, according to Regina Kruglyak, the school’s dean of instruction. 

Students first write out code on their own, then run it through ChatGPT to test their ideas and find errors. The school uses GoGuardian, a software that can block websites, to restrict access to ChatGPT when students aren’t actively using it for an assignment, Kruglyak said. 

“We were nervous about the possibility that students will forget how to do certain things, or they’ll never learn how to do it in the first place because they’ll just fall back on having ChatGPT do it for them,” Kruglyak said. “That’s where we use GoGuardian. Making sure that students are using their own brains and learning the skills in the first place feels very crucial.” 

Kruglyak coaches Latitude’s science teachers and has held professional development sessions on new AI platforms. She recently introduced Notebook LM, a Google platform that can summarize documents and organize notes into various media. Kruglyak tested it by uploading a grant application and having the software turn it into a podcast. Her goal, she said, is to “change teachers’ minds about what AI can do, and how to help students learn from it rather than be scared of it as a teacher.”

It’s not only high school educators who are confronting students using AI. Joel Hamburger, a fifth grade teacher at Redwood Heights Elementary School, said with students using Google on their Chromebooks, AI results come up every time they type in a Google search. Hamburger, who has been teaching for four years, said this calendar year is when he first started noticing how unavoidable AI is in the classroom. 

“Google AI culls the information from the internet and immediately gives you a response,” Hamburger told The Oaklandside. “Whereas a year or two ago, it gave you websites to go to.”

For now, he allows his students to use Google’s AI for filling out simple worksheets in class. At this time of year, Hamburger’s focus is teaching his students how to craft the right inputs to get the answers they’re looking for. During a spring unit on research projects, he’ll lay out the foundations for evaluating information and fact-checking what Google serves up.

Any kind of AI policy should include tiered guidance for various grade levels, Hamburger said. While fifth graders may not be using ChatGPT, he said, they’re surrounded by AI on their devices and guidance for them may not look the same as instructions for a high schooler. 

“The genie’s just about to be brought out of the bottle for these 10-year-olds,” he said. “They need to know appropriate uses.”






A Realistic Direction for Artificial General Intelligence Today



In November 2024, OpenAI’s Sam Altman said that ChatGPT would achieve the holy grail of artificial general intelligence (AGI) in 2025.

AGI is admittedly a fuzzy goal. Most agree that it involves an ability to perform any intellectual task as well as or better than humans. But which humans? An average human? A Nobel laureate? All Nobel laureates put together?

When GPT-5 was released a few weeks ago, it seemed that a version of AGI had arrived on schedule: Altman boasted that “it really feels like talking to an expert in any topic, like a PhD-level expert.” Alas, Altman had again overpromised and underdelivered.

It immediately became clear that GPT-5 is not a PhD-level expert (see, for example, here, here, and here). The fundamental problem remains: large language models (LLMs) like GPT-5 are hobbled by their inherent inability to relate the text they input and output to the real world. No amount of scaling will change that. Tens of thousands of human trainers can patch billions of blunders, but (a) an LLM following instructions it does not understand is not intelligent; (b) the trainers cannot anticipate every future blunder that LLMs will make; and (c) many important decisions should elicit subjective probabilities that depend on the specific details of the situation and are beyond the capabilities of LLMs.

The reality is that we are not going to pay big bucks for LLMs that simply recite facts, like the capital of Chile or the population of Portugal. Boring! We can use Wikipedia to look up facts, and the answers are more trustworthy than those generated by LLMs.

For example, I gave GPT-5 this prompt four times, “How many siblings did George Washington have?” and GPT-5 gave four different answers: 7, 8, 9, and 12:

GPT-5:   George Washington, the first President of the United States, had five full siblings and seven half-siblings, for a total of 12 siblings.

               Here’s the breakdown:

Full siblings (same mother and father):

  1. Samuel Washington
  2. John Augustine Washington
  3. Charles Washington
  4. Betty Washington Lewis
  5. Mildred Washington

Half-siblings (from his father Augustine Washington’s first marriage to Jane Butler):

  1. Butler Washington
  2. Lawrence Washington
  3. Augustine Washington Jr.
  4. Jane Washington
  5. Mildred Washington
  6. Richard Washington
  7. John Washington

               So, George grew up in a fairly large blended family.

               If you want, I can also make a family tree diagram showing all his siblings—it makes it much clearer. Do you want me to do that?

I responded yes, and GPT-5 drew me a confusing family tree with ten siblings.

Wikipedia gives the correct answer (nine siblings — five full siblings and four half-siblings) and, even though GPT-5 trained on Wikipedia, something in its training set had led it astray.
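
The inconsistency is easy to reproduce programmatically. Below is a minimal sketch of such a repeatability check, assuming the OpenAI Python SDK and an illustrative model name; it simply sends the same factual prompt several times and tallies the distinct answers.

```python
# Sketch of a repeatability probe: ask the same factual question several times
# and count how many distinct answers come back.
# Assumes the OpenAI Python SDK (`pip install openai`) and an API key in the
# OPENAI_API_KEY environment variable; the model name is illustrative.
from collections import Counter
from openai import OpenAI

client = OpenAI()
PROMPT = "How many siblings did George Washington have? Answer with a single number."

answers = []
for _ in range(4):
    response = client.chat.completions.create(
        model="gpt-5",  # illustrative model name, not a guaranteed identifier
        messages=[{"role": "user", "content": PROMPT}],
    )
    answers.append(response.choices[0].message.content.strip())

# A reliable source of facts would return the same count every time;
# the informal test described above produced four different totals.
print(Counter(answers))
```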

What should Sam Altman and other developers do now?

Instead of admitting defeat (or continuing to make increasingly embarrassing claims), Altman and his colleagues might heed some timeless advice by declaring victory and getting the hell out. Instead of chasing a goal they cannot achieve, they could redefine AGI as something that has already been achieved.

I have been thinking about this for several years now. A realistic and easily understood goal is for a computer to be as intelligent as a friend I will call Brock. Everyone knows someone like Brock, so we can all relate to what Brock Intelligence means.

Brock is a prototypical mansplainer. Ask him (or anyone within his earshot) any question and he immediately responds with a long-winded, confident answer — sometimes at 200 words a minute with gusts up to 600. Kudos to those who can listen to half of his answer. Condolences to those who live or work with Brock and have to endure his seemingly endless blather.

Instead of trying to compete with Wikipedia, Altman and his competitors might instead pivot to a focus on Brock Intelligence, something LLMs excel at by being relentlessly cheerful and eager to offer facts-be-damned advice on most any topic.

Brock Intelligence vs. GPT Intelligence

The most substantive difference between Brock and GPT is that GPT likes to organize its output in bullet points. Oddly, Brock prefers a less-organized, more rambling style that allows him to demonstrate his far-reaching intelligence. Brock is the chatty one, while ChatGPT is more like a canned slide show.

They don’t always agree with each other (or with themselves). When I recently asked Brock and GPT-5, “What’s the best state to retire to?,” they both had lengthy, persuasive reasons for their choices. Brock chose Arizona, Texas, and Washington. GPT-5 said that the “Best All-Around States for Retirement” are New Hampshire and Florida. A few days later, GPT-5 chose Florida, Arizona, North Carolina, and Tennessee. A few minutes after that, GPT-5 went with Florida, New Hampshire, Alaska, Wyoming, and New England states (Maine/Vermont/Massachusetts).

Consistency is hardly the point. What most people seek with advice about money, careers, retirement, and romance is a straightforward answer. As Harry Truman famously complained, “Give me a one-handed economist. All my economists say ‘on the one hand…,’ then ‘but on the other….’” People ask for advice precisely because they want someone else to make the decision for them. They are not looking for accuracy or consistency, only confidence.

Sam Altman says that GPT can already be used as an AI buddy that offers advice (and companionship), and it is reported that OpenAI is working on a portable, screen-free “personal life advisor.” Kind of like hanging out with Brock 24/7. I humbly suggest that they name this personal life advisor Brock Says (design generated by GPT-5).





[Next-Generation Communications Leadership Interview ③] Shaping Tomorrow’s Networks With AI-RAN

Published

on


Part three of the interview series covers Samsung’s progress in AI-RAN network efficiency, sustainability and the user experience

Samsung Newsroom interviews Charlie Zhang, Senior Vice President of Samsung Electronics’ 6G Research Team

With global competition intensifying around 5G evolution and preparations for 6G, AI is emerging as a defining force in next-generation communications. In particular, AI-based radio access network (AI-RAN) technology, which brings AI to base stations (a key element of the network), stands out as a breakthrough for driving new levels of efficiency and intelligence in network architecture.

 

At the forefront of research into next-generation network architectures, Samsung Electronics embeds AI throughout communications systems while leading technology development and standardization efforts in AI-RAN.

 

▲ Charlie Zhang, Senior Vice President, 6G Research Team at Samsung Electronics

 

In part three of the series, Samsung Newsroom spoke with Charlie Zhang, Senior Vice President of 6G Research Team at Samsung Electronics, about the evolution of AI-RAN and how Samsung’s research is preparing for the 6G era. This follows parts one and two of the series exploring Samsung’s efforts in 6G standardization and global industry leadership.

 

 

Reimagining 6G for a Dynamic Environment

In today’s mobile communications landscape, sustainability and user experience innovation are more important than ever.

 

“End users now prioritize reliable connectivity and longer battery life over raw performance metrics such as data rates and latency,” said Zhang. “The focus has shifted beyond technical specifications to overall user experience.”

 

In line with this shift, Samsung has been conducting 6G research since 2020. The company published its “AI-Native & Sustainable Communication” white paper in February 2025, outlining the key challenges and technology vision for 6G commercialization. The paper highlights four directions — AI-Native, Sustainable Network, Ubiquitous Coverage and Secure and Resilient Network. This represents a comprehensive network strategy that goes beyond improving performance to encompass both sustainability and future readiness.

 

▲ The four key technological directions in “AI-Native & Sustainable Communication”

 

“AI is not only a core technology of 5G but is also expected to be the cornerstone of 6G — enhancing overall performance, boosting operational efficiency and cutting costs,” he emphasized. “Deeply embedding AI from the initial design stage to create autonomous and intelligent networks is exactly what we mean by ‘AI-Native.’”

 

 

How AI-RAN Transforms Next-Gen Network Architecture

To realize the evolution toward next-generation networks and the vision for 6G, network architecture must evolve to the next level. At the center of this transformation is innovation in the RAN, the core of mobile communications.

 

Traditional RAN has relied on dedicated hardware systems for base stations and antennas. However, as data traffic and service demands have surged, this approach has revealed limitations in transmission capacity, latency and energy efficiency — while requiring significant manpower and time for resource management. To address these challenges, virtualized RAN (vRAN) was introduced.

 

vRAN implements network functions in software, significantly enhancing flexibility and scalability. By leveraging cloud-native technologies, network functions can run seamlessly on general-purpose servers — enabling operators to reduce capital costs and dynamically allocate computing resources in response to traffic fluctuations. vRAN is a key platform for modernization, efficiency and the integration of future technologies without requiring a full infrastructure rebuild. Samsung has already successfully deployed its vRAN at scale in the U.S. and worldwide.
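
As a rough, non-authoritative illustration of the elasticity described here (not Samsung’s implementation), the sketch below scales a pool of software baseband workers up or down as offered traffic changes; the per-worker capacity, thresholds and traffic trace are invented for the example.

```python
# Illustrative only: a toy autoscaler for virtualized baseband workers.
# Capacities, thresholds and the traffic trace are invented for this sketch.
CAPACITY_PER_WORKER_MBPS = 400      # assumed throughput one software worker handles
MIN_WORKERS, MAX_WORKERS = 2, 16    # assumed floor and ceiling for the server pool

def next_worker_count(traffic_mbps: float, current: int) -> int:
    """Scale out when utilization is high, scale in when it is low."""
    utilization = traffic_mbps / (current * CAPACITY_PER_WORKER_MBPS)
    if utilization > 0.8:
        current += 1
    elif utilization < 0.4:
        current -= 1
    return max(MIN_WORKERS, min(MAX_WORKERS, current))

workers = 4
for hour, traffic in enumerate([600, 1200, 2400, 3800, 2900, 900, 400]):
    workers = next_worker_count(traffic, workers)
    print(f"hour {hour}: {traffic} Mbps -> {workers} workers")
```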

 

▲ Network Evolution towards AI-RAN

 

AI-RAN ushers in a new era of network evolution, embedding AI to create an intelligent RAN that learns, predicts and optimizes on its own. Not only does AI integration advance 4G and 5G networks that are based on vRAN, but it also serves as the breakthrough and engine for 6G. Real-time optimization sets the platform apart, boosting performance while reducing energy consumption to improve efficiency and stability.

 

In addition, AI-RAN enables networks to autonomously assess conditions and maintain optimal connectivity. “For instance, the system can predict a user’s movement path or radio environment in advance to determine the best transmission method, while AI-driven processing manages complex signal operations to minimize latency,” Zhang explained. “By analyzing usage patterns, AI-RAN can allocate tailored network resources and deliver more personalized user experiences.”
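
A deliberately simplified sketch of the predict-then-decide loop Zhang describes: fit a linear trend to a user’s recent signal-quality samples, forecast the next value, and pick a transmission mode from it. The thresholds and mode names are invented for illustration and are not Samsung’s algorithms.

```python
# Illustrative only: forecast the next SNR sample from a short history
# (least-squares linear trend) and choose a transmission mode from it.
import numpy as np

def predict_next_snr(history_db: list[float]) -> float:
    """One-step-ahead forecast from a linear trend fitted to recent samples."""
    t = np.arange(len(history_db))
    slope, intercept = np.polyfit(t, history_db, 1)
    return float(slope * len(history_db) + intercept)

def choose_mode(snr_db: float) -> str:
    # Invented thresholds standing in for a real link-adaptation / beam-selection policy.
    if snr_db > 20:
        return "multi-stream MIMO, high-order modulation"
    if snr_db > 10:
        return "single stream, mid-order modulation"
    return "robust fallback (low rate, repetition)"

recent = [22.0, 19.5, 17.0, 14.8, 12.4]  # user moving away from the cell
forecast = predict_next_snr(recent)
print(f"predicted next SNR {forecast:.1f} dB -> {choose_mode(forecast)}")
```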

 

 

Proven Potential Through Research

Samsung is advancing network performance and stability through research in AI-based channel estimation, signal processing and system automation. The company has verified the feasibility of these technologies through proof-of-concept (PoC) trials, and at MWC 2025 it demonstrated AI-RAN’s ability to improve resource utilization even in noisy, interference-prone environments.

 

“With AI-based channel estimation, we can accurately predict and estimate dynamic channel characteristics that are corrupted by noise and interference. This higher accuracy leads to more efficient resource utilization and overall network performance gains,” said Zhang. “AI also enhances signal processing. AI-driven enhancements in modem capabilities enable more precise modulation and demodulation, resulting in higher data throughput and lower latency.”
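
The idea behind AI-based channel estimation, using a statistical or learned prior to clean up noisy pilot measurements, can be illustrated with a textbook example (not Samsung’s estimator): a plain least-squares estimate versus an LMMSE-style shrinkage estimate that assumes the channel and noise statistics are known.

```python
# Illustrative only: pilot-based channel estimation on a toy flat-fading link.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
snr_db = 5.0
noise_var = 10 ** (-snr_db / 10)
pilot = 1.0 + 0.0j  # unit-power pilot symbol

# Rayleigh channel taps: zero-mean, unit-variance complex Gaussian
h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
noise = np.sqrt(noise_var / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
y = h * pilot + noise  # received pilot observations

h_ls = y / pilot                             # least-squares estimate: undoes the pilot only
h_lmmse = (1.0 / (1.0 + noise_var)) * h_ls   # shrinkage using known channel/noise power

mse = lambda est: float(np.mean(np.abs(est - h) ** 2))
print(f"LS MSE:    {mse(h_ls):.4f}")     # roughly equal to the noise variance
print(f"LMMSE MSE: {mse(h_lmmse):.4f}")  # lower: the prior suppresses estimation noise
```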

 

System automation for RAN optimization further analyzes user-specific communication quality and real-time changes in the network environment, dynamically adjusting modulation, coding schemes and resource allocation. This allows the network to predict and mitigate potential failures in advance, easing operational burdens while improving reliability and efficiency.
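
As a concrete, simplified picture of “dynamically adjusting modulation and coding schemes”: the sketch below maps reported channel quality to a modulation-and-coding scheme (MCS) and nudges an outer-loop margin based on whether recent transmissions succeeded. The table and step sizes are invented for illustration; they are not Samsung’s parameters.

```python
# Illustrative only: outer-loop link adaptation. Reported SNR minus a
# feedback-driven margin selects a modulation-and-coding scheme (MCS).
MCS_TABLE = [            # (minimum SNR in dB, scheme) - invented values
    (0,  "QPSK 1/3"),
    (8,  "16QAM 1/2"),
    (14, "64QAM 2/3"),
    (20, "256QAM 3/4"),
]

def select_mcs(reported_snr_db: float, margin_db: float) -> str:
    """Pick the highest-rate scheme whose SNR requirement the adjusted SNR meets."""
    effective = reported_snr_db - margin_db
    chosen = MCS_TABLE[0][1]
    for threshold, scheme in MCS_TABLE:
        if effective >= threshold:
            chosen = scheme
    return chosen

def update_margin(margin_db: float, ack: bool) -> float:
    # Outer-loop rule: small step down on success, larger step up on failure,
    # so the long-run error rate settles near a target (step sizes illustrative).
    return margin_db - 0.1 if ack else margin_db + 1.0

margin = 0.0
for snr, ack in [(15.2, True), (15.0, True), (14.7, False), (14.9, True)]:
    print(select_mcs(snr, margin), f"(margin {margin:+.1f} dB)")
    margin = update_margin(margin, ack)
```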

 

“These advancements enhance network performance, stability and user satisfaction, driving innovation in next-generation communication systems,” he added.

 

 

Global Collaboration Fuels AI-RAN Progress

International collaboration in research and standardization — such as the AI-RAN Alliance — is central to advancing AI-RAN technology and expanding the global ecosystem.

 

“Global collaboration enables knowledge sharing and joint research, accelerating the industry’s adoption of AI-RAN,” said Zhang. “Samsung is a founding member of the AI-RAN Alliance and currently holds leadership positions as vice chair of the board and chair of the AI-on-RAN Working Group.”

 

▲ Organizational structure and roles of the AI-RAN Alliance

 

Building on its expertise in communications and AI, Samsung is advancing R&D in areas such as real-time optimization through edge computing and adaptability to dynamic environments.

 

“Samsung’s involvement accelerates AI‑RAN adoption by bridging technology gaps, promoting open innovation and ensuring that advances in AI‑driven networks are both commercially viable and technically sound — thereby advancing the ecosystem’s maturity and global impact,” he explained.

 

Through this commitment to collaboration and investment, AI-RAN technology is expected to progress rapidly worldwide and become a core competitive advantage in next-generation communications.

 

 

Leading the Way Into the 6G Era

Samsung is strengthening its edge in AI-RAN with a distinctive approach that combines innovation, collaboration and end-to-end solutions in preparation for the 6G era.

 

Through an integrated design that develops RAN hardware and AI-based software in parallel, the company is enabling optimization across the entire network stack. Samsung has boosted performance with its deep expertise in communications, while partnerships with global telecom operators and standardization bodies are helping accelerate industry adoption of its research.

 

Continued research in areas such as radio frequency (RF), antennas, ultra-massive multiple-input multiple-output (MIMO)1 and security is playing a critical role in transforming 6G from vision to market-ready technology. With the establishment of its AI-RAN Lab, Samsung is accelerating prototyping and testing, shortening the R&D cycle and paving the way for faster commercialization.

 

“Beyond ecosystem development, Samsung is positioning itself as a leader in AI-RAN through a blend of innovation, strategic collaboration and end-to-end solutions,” Zhang emphasized. “Together, these elements cement Samsung’s role at the forefront of AI-RAN.”

 

 

AI-RAN is redefining next-generation communications. By integrating AI across networks, Samsung is leading the way — and expectations are growing for the company’s role in shaping the future.

 

In the final part of this series, Samsung Newsroom will explore the latest trends in the convergence of communications and AI, along with Samsung’s future network strategies in collaboration with global partners.

 

 

1 Multiple-input multiple-output (MIMO) transmission improves communication performance by utilizing multiple antennas at both the transmitter and receiver.


