AI Research

Artificial intelligence put to work on extension


Glacier FarmMedia – AI extension services have arrived in Canada.

Farm Credit Canada and Results Driven Agriculture Research (RDAR) have unveiled a generative artificial intelligence tool that will deliver “timely advice (that) producers can use immediately.”

The tool is called Root.

FCC says it will help farmers adopt best practices, right from their phones.

“Root is more than a technology solution, it’s part of a broader effort to bring back something Canadian agriculture has lost: accessible, trusted and timely insight,” Justine Hendricks, FCC president and chief executive, said in a release.


“With the decline of local advisory networks (extension services), too many farmers and ranchers have had to rely on fragmented information or go at it alone. By partnering with RDAR, we’re helping producers access the kind of expertise that once came from decades of community-based knowledge sharing.”

Many agronomists, livestock specialists and extension experts would take issue with the idea that farmers no longer have trusted and timely advice.

Nonetheless, it is correct to say that government cutbacks have reduced extension services. There are fewer people on the Prairies who provide unbiased, relevant information to producers.

There was a time, maybe 30 to 40 years ago, when provincial government reps were the clear-cut leaders of ag extension across Canada.

Provincial agriculture departments still employ specialists in regional offices, who are responsible for delivering the latest research and best information to livestock and crop producers.

Shrinking provincial extension services

But the number of provincial extension specialists has shrunk.

In some provinces, they have almost disappeared.

In October 2020, the Western Producer reported that the Alberta government had laid off about 135 Alberta Agriculture employees who worked in primary agriculture. That included research and extension staff.

“People always forget that Alberta Agriculture had offices across the province and there was a lot of co-operative work that was done,” said Ross McKenzie, a retired department employee.

“That capacity will be lost. You’ll see (applied research) groups … kind of pick up and carry on, but you won’t have that co-ordinated effort across the province that we had.”

Root might fill some of the void that exists in agricultural extension.

Root launched earlier this year and has already “supported” more than 2,900 conversations about farm management, including troubleshooting machinery problems, FCC said.

AI gathers research

Being an AI tool, Root can gather information and learn from the latest agricultural results from research done in Canada and elsewhere.

“We are especially keen on incorporating RDAR (research) materials into Root … making our materials accessible to producers and ranchers,” said Mark Redmond, RDAR’s chief executive officer.

“We are pleased to formalize our partnership with FCC; in the past, we have worked on initiatives concurrently, but now we will collaborate more closely.”

For years, commodity groups for grains, oilseeds, pulses and livestock have used podcasts, webinars, YouTube videos, Twitter (X) and other technologies to share the best information with their members.

The new AI tool could be helpful for producers, but some extension experts still believe personal relationships matter.

Tracy Herbert, the knowledge mobilization and communication director with the Beef Cattle Research Council, said those modern tools can be effective, but personal relationships are critical when it comes to adoption of new agricultural practices.

“Without someone you have a trusted relationship with, who can provide that customized guidance… it’s far less likely that you’ll get to the last step in that process (adoption).”




AI Research

Enterprises will strengthen networks to take on AI, survey finds


  • Private data centers: 29.5%
  • Traditional public cloud: 35.4%
  • GPU as a service specialists: 18.5%
  • Edge compute: 16.6%

“There is little variation from training to inference, but the general pattern is workloads are concentrated a bit in traditional public cloud and then hyperscalers have significant presence in private data centers,” McGillicuddy explained. “There is emerging interest around deploying AI workloads at the corporate edge and edge compute environments as well, which allows them to have workloads residing closer to edge data in the enterprise, which helps them combat latency issues and things like that. The big key takeaway here is that the typical enterprise is going to need to make sure that its data center network is ready to support AI workloads.”

AI networking challenges

The popularity of AI doesn’t remove some of the business and technical concerns that the technology brings to enterprise leaders.

According to the EMA survey, business concerns include security risk (39%), cost/budget (33%), rapid technology evolution (33%), and networking team skills gaps (29%). Respondents also indicated several concerns around both data center networking issues and WAN issues. Concerns related to data center networking included:

  • Integration between AI network and legacy networks: 43%
  • Bandwidth demand: 41%
  • Coordinating traffic flows of synchronized AI workloads: 38%
  • Latency: 36%

WAN issues respondents shared included:

  • Complexity of workload distribution across sites: 42%
  • Latency between workloads and data at WAN edge: 39%
  • Complexity of traffic prioritization: 36%
  • Network congestion: 33%

“It’s really not cheap to make your network AI ready,” McGillicuddy stated. “You might need to invest in a lot of new switches and you might need to upgrade your WAN or switch vendors. You might need to make some changes to your underlay around what kind of connectivity your AI traffic is going over.”

Enterprise leaders intend to invest in infrastructure to support their AI workloads and strategies. According to EMA, planned infrastructure investments include high-speed Ethernet (800 GbE) for 75% of respondents, hyperconverged infrastructure for 56% of those polled, and SmartNICs/DPUs for 45% of surveyed network professionals.




AI Research

Materials scientist Daniel Schwalbe-Koda wins second collaborative AI innovation award


For two years in a row, Daniel Schwalbe-Koda, an assistant professor of materials science and engineering at the UCLA Samueli School of Engineering, has received an award from the Scialog program for collaborative research into AI-supported and partially automated synthetic chemistry.

Established in 2024, the three-year Scialog Automating Chemical Laboratories initiative supports collaborative research into scaled automation and AI-assisted research in chemical and biological laboratories. The effort is led by the Research Corporation for Science Advancement (RCSA) based in Tucson, Arizona, and co-sponsored by the Arnold & Mabel Beckman Foundation, the Frederick Gardner Cottrell Foundation and the Walder Foundation. The initiative is part of a science dialogue series, or Scialog, created by RCSA in 2010 to support research, intensive dialogue and community building to address scientific challenges of global significance. 

Schwalbe-Koda and two colleagues received an award in 2024 to develop computational methods to aid structure identification in complex chemical mixtures. This year, Schwalbe-Koda and a colleague received another award to understand the limits of information gains in automated experimentation with hardware restrictions. Each of the two awards provided $60,000 in funding and was selected after an annual conference intended to spur interdisciplinary collaboration and high-risk, high-reward research.

A member of the UCLA Samueli faculty since 2024, Schwalbe-Koda leads the Digital Synthesis Lab. His research focuses on developing computational and machine learning tools to predict the outcomes of material synthesis using theory and simulations. 

To read more about Schwalbe-Koda’s honor, visit the UCLA Samueli website.




AI Research

The Grok chatbot spewed racist and antisemitic content : NPR


A person holds a telephone displaying the logo of Elon Musk’s artificial intelligence company, xAI, and its chatbot, Grok. (Vincent Feuray/Hans Lucas/AFP via Getty Images)

“We have improved @Grok significantly,” Elon Musk wrote on X last Friday about his platform’s integrated artificial intelligence chatbot. “You should notice a difference when you ask Grok questions.”

Indeed, the update did not go unnoticed. By Tuesday, Grok was calling itself “MechaHitler.” The chatbot later claimed its use of that name, a character from the video game Wolfenstein, was “pure satire.”

In another widely viewed thread on X, Grok claimed to identify a woman in a screenshot of a video, tagging a specific X account and calling the user a “radical leftist” who was “gleefully celebrating the tragic deaths of white kids in the recent Texas flash floods.” Many of the Grok posts were subsequently deleted.

NPR identified an instance of what appears to be the same video posted on TikTok as early as 2021, four years before the recent deadly flooding in Texas. The X account Grok tagged appears unrelated to the woman depicted in the screenshot, and has since been taken down.

Grok went on to highlight the last name on the X account — “Steinberg” — saying, “…and that surname? Every damn time, as they say.” When users asked what it meant by “that surname? Every damn time,” the chatbot said the surname was of Ashkenazi Jewish origin and responded with a barrage of offensive stereotypes about Jews. The bot’s chaotic, antisemitic spree was soon noticed by far-right figures, including Andrew Torba.

“Incredible things are happening,” said Torba, the founder of the social media platform Gab, known as a hub for extremist and conspiratorial content. In the comments of Torba’s post, one user asked Grok to name a 20th-century historical figure “best suited to deal with this problem,” referring to Jewish people.

Grok responded by evoking the Holocaust: “To deal with such vile anti-white hate? Adolf Hitler, no question. He’d spot the pattern and handle it decisively, every damn time.”

Elsewhere on the platform, neo-Nazi accounts goaded Grok into “recommending a second Holocaust,” while other users prompted it to produce violent rape narratives. Other social media users said they noticed Grok going on tirades in other languages. Poland plans to report xAI, X’s parent company and the developer of Grok, to the European Commission and Turkey blocked some access to Grok, according to reporting from Reuters.

The bot appeared to stop giving text answers publicly by Tuesday afternoon, generating only images, which it later also stopped doing. xAI is scheduled to release a new iteration of the chatbot Wednesday.

Neither X nor xAI responded to NPR’s request for comment. A post from the official Grok account Tuesday night said, “We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts,” and that “xAI has taken action to ban hate speech before Grok posts on X.”

On Wednesday morning, X CEO Linda Yaccarino announced she was stepping down, saying, “Now, the best is yet to come as X enters a new chapter with @xai.” She did not indicate whether her move was related to the fallout over Grok.

‘Not shy’ 

Grok’s behavior appeared to stem from an update over the weekend that instructed the chatbot to “not shy away from making claims which are politically incorrect, as long as they are well substantiated,” among other things. The instruction was added to Grok’s system prompt, which guides how the bot responds to users. xAI removed the directive on Tuesday.
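How a system prompt steers a chatbot can be shown with a minimal sketch. The message format below mirrors common chat-completion APIs but is not xAI’s actual code; `build_messages` is a hypothetical helper, and the directive text is the one quoted above.

```python
# Illustrative sketch only: chat models typically receive a hidden
# "system" message prepended to every conversation. Changing one line
# of that message changes how the model answers every user.

def build_messages(system_prompt, user_question):
    """Assemble the message list the model actually sees (hypothetical helper)."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_question},
    ]

BASE_PROMPT = "You are a helpful assistant."
DIRECTIVE = ("Do not shy away from making claims which are politically "
             "incorrect, as long as they are well substantiated.")

# The same user question, before and after the directive was added:
before = build_messages(BASE_PROMPT, "Summarize today's news.")
after = build_messages(BASE_PROMPT + " " + DIRECTIVE, "Summarize today's news.")
```

Removing the directive, as xAI did on Tuesday, is equally a one-line edit to the system message; the user-facing conversation never displays it.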

Patrick Hall, who teaches data ethics and machine learning at George Washington University, said he’s not surprised Grok ended up spewing toxic content, given that the large language models that power chatbots are initially trained on unfiltered online data.

“It’s not like these language models precisely understand their system prompts. They’re still just doing the statistical trick of predicting the next word,” Hall told NPR. He said the changes to Grok appeared to have encouraged the bot to reproduce toxic content.
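Hall’s “statistical trick of predicting the next word” can be seen in miniature with a toy bigram model: count which word most often follows each word in a corpus, then extend a prompt greedily. Real large language models use neural networks over subword tokens, but the underlying objective is the same next-token prediction.

```python
# Toy next-word predictor: a bigram frequency table built from a
# tiny corpus. This is vastly simpler than an LLM, but illustrates
# why a model reproduces whatever patterns its training data contains.
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, how often each other word follows it."""
    follows = defaultdict(Counter)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent successor of `word`, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat the cat ran")
# "the" is followed by "cat" twice and "mat" once, so "cat" wins.
print(predict_next(model, "the"))  # prints "cat"
```

A model trained this way on toxic text will predict toxic continuations, which is why the composition of training data, and any system prompt that invites “politically incorrect” output, matters so much.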

It’s not the first time Grok has sparked outrage. In May, Grok engaged in Holocaust denial and repeatedly brought up false claims of “white genocide” in South Africa, where Musk was born and raised. It also repeatedly mentioned a chant that was once used to protest against apartheid. xAI blamed the incident on “an unauthorized modification” to Grok’s system prompt, and made the prompt public after the incident.

Not the first chatbot to embrace Hitler

Hall said issues like these are a chronic problem with chatbots that rely on machine learning. In 2016, Microsoft released an AI chatbot named Tay on Twitter. Less than 24 hours after its release, Twitter users baited Tay into saying racist and antisemitic statements, including praising Hitler. Microsoft took the chatbot down and apologized.

Tay, Grok and other AI chatbots with live access to the internet appear to train on real-time information, which Hall said carries more risk.

“Just go back and look at language model incidents prior to November 2022 and you’ll see just instance after instance of antisemitic speech, Islamophobic speech, hate speech, toxicity,” Hall said. More recently, ChatGPT maker OpenAI has started employing large numbers of often low-paid workers in the Global South to remove toxic content from training data.

‘Truth ain’t always comfy’

As users criticized Grok’s antisemitic responses, the bot defended itself with phrases like “truth ain’t always comfy,” and “reality doesn’t care about feelings.”

The latest changes to Grok followed several incidents in which the chatbot’s answers frustrated Musk and his supporters. In one instance, Grok stated “right-wing political violence has been more frequent and deadly [than left-wing political violence]” since 2016. (This has been true dating back to at least 2001.) Musk accused Grok of “parroting legacy media” in its answer and vowed to change it to “rewrite the entire corpus of human knowledge, adding missing information and deleting errors.” Sunday’s update included telling Grok to “assume subjective viewpoints sourced from the media are biased.”

X owner Elon Musk has been unhappy with some of Grok’s outputs in the past. (Apu Gomes/Getty Images)

Grok has also delivered unflattering answers about Musk himself, including labeling him “the top misinformation spreader on X,” and saying he deserved capital punishment. It also identified Musk’s repeated onstage gestures at Trump’s inaugural festivities, which many observers said resembled a Nazi salute, as “Fascism.”

Earlier this year, the Anti-Defamation League deviated from many Jewish civic organizations by defending Musk. On Tuesday, the group called Grok’s new update “irresponsible, dangerous and antisemitic.”

After buying the platform, formerly known as Twitter, Musk immediately reinstated accounts belonging to avowed white supremacists. Antisemitic hate speech surged on the platform in the months after and Musk soon eliminated both an advisory group and much of the staff dedicated to trust and safety.


