Nvidia AI challenger Groq announces European expansion — Helsinki data center targets burgeoning AI market

American AI hardware and software firm Groq (not to be confused with Elon Musk’s AI venture, Grok) has announced it is establishing its first data center in Europe as part of its push into the rapidly expanding EU AI market, CNBC reports. The company is looking to capture a sizeable share of the inference market with its Language Processing Unit (LPU), an application-specific integrated circuit (ASIC) that it claims delivers faster, more efficient inference than GPU-based alternatives.

“We decided about four weeks ago to build a data center in Helsinki, and we’re actually unloading racks into it right now,” Groq CEO Jonathan Ross told CNBC. “We expect to be serving traffic to it by the end of this week. That’s built fast, and it’s a very different proposition than what you see in the rest of the market.”



Cal State LA secures funding for two artificial intelligence projects from CSU

Cal State LA has won funding for two faculty-led artificial intelligence projects through the California State University’s (CSU) Artificial Intelligence Educational Innovations Challenge (AIEIC).

The CSU launched the initiative to ensure that faculty from its 23 campuses are key drivers of innovative AI adoption and deployment across the system. In April, the AIEIC invited faculty to develop new instructional strategies that leverage AI tools.

The response was overwhelming, with more than 400 proposals submitted by over 750 faculty members across the state. The Chancellor’s Office will award a total of $3 million to fund the 63 winning proposals, which were chosen for their potential to enable transformative teaching methods, foster groundbreaking research, and address key concerns about AI adoption within academia.

“CSU faculty and staff aren’t just adopting AI—they are reimagining what it means to teach, learn, and prepare students for an AI-infused world,” said Nathan Evans, CSU deputy vice chancellor of Academic and Student Affairs and chief academic officer. “The number of funded projects underscores the CSU’s strong commitment to innovation and academic excellence. These initiatives will explore and demonstrate effective AI integration in student learning, with findings shared systemwide to maximize impact. Our goal is to prepare students to engage with AI strategically, ethically, and successfully in California’s fast-changing workforce.”

Cal State LA’s winning projects are titled “Teaching with Integrity in the Age of AI” and “AI-Enhanced STEM Supplemental Instruction Workshops.”

For “Teaching with Integrity in the Age of AI,” the university’s Center for Effective Teaching and Learning will form a Faculty Learning Community (FLC) to address faculty concerns about AI and academic integrity. From September 2025 to April 2026, the FLC will support eight to 15 cross-disciplinary faculty members in developing AI-informed, ethics-focused pedagogy. Participants will explore ways to minimize AI-facilitated cheating, apply ethical decision-making frameworks, and create assignments aligned with AI literacy standards.

The “AI-Enhanced STEM Supplemental Instruction Workshops” project aims to improve student success in challenging first-year Science, Technology, Engineering, and Math courses by integrating generative AI tools, specifically ChatGPT, into Supplemental Instruction workshops. By leveraging AI, the project addresses the limitations of collaborative learning environments, providing personalized, real-time feedback and guidance.

The AIEIC is a key component of the CSU’s broader AI Strategy, which was launched in February 2025 to establish the CSU as the first AI-empowered university system in the nation. It was designed with three goals: to encourage faculty to explore AI literacies and competencies, focusing on how to help students build a fluent relationship with the technologies; to address the need for meaningful engagement with AI, emphasizing strategies that ensure students actively participate in learning alongside AI; and to examine the ethics of AI use in higher education, promoting approaches that embed academic integrity.

Awarded projects span a broad range of academic areas, including business, engineering, ethnic studies, history, health sciences, teacher preparation, scholarly writing, journalism, and theatre arts. Several projects are collaborative efforts across multiple disciplines or focus on faculty development—equipping instructors with the tools to navigate course design, policy development, and classroom practices in an AI-enabled environment. 




Will we ever feel comfortable with AIs taking on important tasks?

Imagine a map of the world, divided by national borders. How many colours do you need to fill each country, plus the sea, without any identical colours touching?

The answer is four – indeed, no matter what your map looks like, four colours will always be enough. But proving this sparked a schism in mathematics. The four colour theorem, as it is known, was the first major result to be proved using a computer. The 1976 proof reduced the problem to a few thousand map arrangements, each of which was then checked by software.
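
To make concrete what “checked by software” means, here is a minimal sketch in Python of the brute-force idea: given a handful of regions and their adjacencies, try every assignment of four colours until one works. The five-region map and the function name are invented for illustration; the actual 1976 proof checked a far subtler set of configurations rather than colourings of whole maps.

```python
from itertools import product

# A toy "map": five hypothetical regions, listing each region's neighbours.
# (Illustrative only; not one of the configurations from the 1976 proof.)
NEIGHBOURS = {
    0: {1, 2, 3},
    1: {0, 2, 4},
    2: {0, 1, 3, 4},
    3: {0, 2, 4},
    4: {1, 2, 3},
}

def four_colouring(adj):
    """Exhaustively try every assignment of four colours to the regions,
    returning the first one in which no two neighbours share a colour."""
    nodes = sorted(adj)
    for candidate in product(range(4), repeat=len(nodes)):
        colour = dict(zip(nodes, candidate))
        if all(colour[u] != colour[v] for u in adj for v in adj[u]):
            return colour
    return None  # would mean this map needs more than four colours

print(four_colouring(NEIGHBOURS))  # {0: 0, 1: 1, 2: 2, 3: 1, 4: 0}
```

An exhaustive search like this scales as 4^n in the number of regions, which is why the real proof first had to reduce infinitely many possible maps to the few thousand arrangements mentioned above before a machine could finish the job.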

Many mathematicians at the time were up in arms. How could something be called proven, they argued, if the core of the proof hides behind an unknowable machine? Perhaps because of this pushback, computer-aided proofs have remained a minority pursuit.

But that may be starting to change. As we report in “AI could be about to completely change the way we do mathematics”, the latest generation of artificial intelligence is turning this argument on its head. Why, ask its proponents, should we trust the mathematics of flawed humans, with their assumptions and shortcuts, when we can turn the verification of a proof over to a machine?

Naturally, not everyone agrees with this suggestion. And the argument raging over AI’s use in mathematics is a microcosm of a larger question facing society: just when is it appropriate to let a machine take over? Tech firms are increasingly promising that AI agents will remove drudgery by taking on mundane tasks from processing invoices to booking holidays. However, when we tried letting them run our day (see “‘Flashes of brilliance and frustration’: I let an AI agent run my day”), we found that these agents aren’t yet fully up to the job.

Relinquishing control by handing your credit cards or your password to an opaque AI creates the same sense of unease as with the four colour proof. Only now, we are no longer colouring in a map, but trying to find its edges as we probe new territory. Does evidence that we can rely on machines await us over the horizon, or merely a digital version of “here be dragons”?


New ‘Centaur’ AI model can predict how we behave with unprecedented accuracy

A new artificial intelligence (AI) model can predict and simulate human thought and behavior with a surprising degree of accuracy. The language model, called Centaur, could help researchers improve our understanding of human cognition.

The model was trained on more than 10 million real decisions made by participants in psychological experiments. Using this dataset, Centaur predicted and simulated how people would think and behave with 64% accuracy, researchers reported July 2 in the journal Nature.


