
Google releases pint-size Gemma open AI model



Big tech has spent the last few years creating ever-larger AI models, leveraging rack after rack of expensive GPUs to provide generative AI as a cloud service. But tiny AI matters, too. Google has announced a tiny version of its Gemma open model designed to run on local devices. Google says the new Gemma 3 270M can be tuned in a snap and maintains robust performance despite its small footprint.

Google released its first Gemma 3 open models earlier this year, featuring between 1 billion and 27 billion parameters. In generative AI, the parameters are the learned variables that control how the model processes inputs to estimate output tokens. Generally, the more parameters in a model, the better it performs. With just 270 million parameters, the new Gemma 3 can run on devices like smartphones or even entirely inside a web browser.

Running an AI model locally has numerous benefits, including enhanced privacy and lower latency. Gemma 3 270M was designed with these kinds of use cases in mind. In testing with a Pixel 9 Pro, the new Gemma was able to run 25 conversations on the Tensor G4 chip and use just 0.75 percent of the device’s battery. That makes it by far the most efficient Gemma model.
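For readers who want to try a model this small on their own hardware, a minimal sketch using the Hugging Face transformers library is shown below. The repository id and generation settings are assumptions for illustration, not a workflow documented in the article; consult the official Gemma model card for the exact identifiers and license terms.

```python
# Minimal local-inference sketch with the Hugging Face transformers library.
# The repo id "google/gemma-3-270m-it" is an assumption for illustration;
# check the official Gemma model card for exact identifiers and licensing.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-3-270m-it"  # assumed instruction-tuned variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "List three benefits of running an AI model on-device."
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding keeps the example deterministic and cheap to run locally.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```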



Gemma 3 270M shows strong instruction-following for its small size. Credit: Google

Developers shouldn’t expect the same level of performance as a multi-billion-parameter model, but Gemma 3 270M has its uses. Google used the IFEval benchmark, which tests a model’s ability to follow instructions, to show that its new model punches above its weight. Gemma 3 270M scores 51.2 percent on this test, higher than other lightweight models with more parameters. The new Gemma falls predictably short of 1 billion-plus models like Llama 3.2, but it gets closer than you might think given that it has just a fraction of the parameters.






Mistral AI, Europe’s answer to OpenAI, sees enterprise value jump to €12 billion


Mistral AI (hereinafter Mistral), a leading artificial intelligence (AI) startup in France, is accelerating its fundraising.

According to Bloomberg News on the 4th (local time), Mistral is nearing the end of negotiations on new financing worth 2 billion euros (about 3.24 trillion won), a deal that values the company at about 12 billion euros (about 19.5 trillion won).

Mistral was founded in 2023 by Arthur Mensch and other alumni of Google DeepMind, and is regarded as Europe’s AI alternative to OpenAI and Anthropic in the United States.

To date, Mistral has grown its presence by releasing an open-source language model and “Le Chat,” a chatbot aimed at European users.

The company secured an investment of about 600 million euros from Samsung and Nvidia in June last year, at an enterprise value of 5.8 billion euros. The new round would be its first fundraising since then.

Bloomberg said the deal “solidifies Mistral’s position as one of the most valuable technology startups in Europe.”

Beyond Mistral, major AI companies have recently been raising funds aggressively despite the AI bubble controversy, heating up investment.

OpenAI secured an investment of $40 billion in March this year, and Anthropic, OpenAI’s closest rival, recently raised $13 billion, lifting its corporate value to $183 billion, nearly triple what it was just five months earlier.

Meanwhile, OpenAI is also arranging a sale of shares held by current and former employees. According to CNBC, the employee share sale has expanded from $6 billion to $10.3 billion, and OpenAI is expected to be valued at about $500 billion at the end of October, when the transaction is completed. At the time of OpenAI’s funding round in March, its enterprise value was about $300 billion.





Switzerland developed its own artificial intelligence model Apertus – Telegraph



A new player has emerged in the artificial intelligence race, as Switzerland unveiled Apertus, its national open-source Large Language Model (LLM), which it hopes will be an alternative to models offered by companies like OpenAI.

Apertus, a Latin word meaning “open,” was developed by the Swiss Federal Institute of Technology in Lausanne (EPFL), ETH Zurich and the Swiss National Supercomputing Center (CSCS), all of which are public institutions.

“Currently, Apertus is the leading public AI model, developed by public institutions in the public interest. It is our best proof yet that AI can be a form of public infrastructure like highways, water or electricity,” said Joshua Tan, a leading proponent of turning AI into public infrastructure.

The Swiss institutions designed Apertus to be completely open, allowing users to review every part of its training process. In addition to the model itself, they have published comprehensive documentation and source code of its training process, as well as the datasets they used.

Apertus was developed in compliance with Swiss data protection and copyright laws, making it perhaps one of the best choices for companies that want to comply with European regulations.

Anyone can use the new model. Researchers, hobbyists, and even companies are welcome to use it and adapt it to their needs, for example to build chatbots, translators, or educational and training tools. Apertus was trained on 15 trillion tokens spanning more than 1,000 languages, with about 40 percent of the data in languages other than English, including Swiss German and Romansh.

It should be noted that artificial intelligence companies like Perplexity have previously been accused of downloading content from websites and bypassing protocols intended to block their crawlers.

Several artificial intelligence companies have also been sued by news organizations and creators for using their content to train models without permission.

Apertus is available in two sizes, with 8 billion and 70 billion parameters. It can be accessed through Swisscom, a Swiss information and communications technology company, or through Hugging Face. /Telegraph/
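For the Hugging Face route, a rough sketch of loading one of the checkpoints with the transformers library is shown below; the repository id is an assumed placeholder rather than a name confirmed by the article, so check the official Apertus model cards before use.

```python
# Rough sketch of loading an Apertus checkpoint via the Hugging Face
# transformers pipeline. The repo id below is an assumed placeholder;
# the real identifiers are listed on the official Apertus model cards.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="swiss-ai/Apertus-8B-Instruct",  # assumed repo id, verify before use
)

result = generator(
    "Explain in one sentence why open training data matters.",
    max_new_tokens=48,
)
print(result[0]["generated_text"])
```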







‘Just blame AI’: Trump hints at using artificial intelligence as shield for controversies



US President Donald Trump has suggested that artificial intelligence could become a convenient scapegoat for political controversies, raising concerns about how the technology might be used to deflect accountability.

Speaking at the White House this week, Trump was asked about a viral video that appeared to show a bag being tossed out of a window at the presidential residence. Although officials had already explained it was routine maintenance, Trump dismissed the clip by saying: “That’s probably AI-generated.” He added that the White House windows are sealed and bulletproof, joking that even First Lady Melania Trump had complained about not being able to open them for fresh air.

But Trump went further, framing AI as both a threat and an excuse. “One of the problems we have with AI, it’s both good and bad. If something happens really bad, just blame AI,” he remarked, hinting that future scandals could be brushed aside as artificial fabrications.

This casual dismissal reflects a growing trend in Trump’s relationship with AI. In July, he reposted a fabricated video that falsely depicted former President Barack Obama being arrested in the Oval Office. He also admitted to being fooled by an AI-generated video montage of his life, from childhood to the present day.

Experts warn that as deepfake technology becomes increasingly sophisticated, it could destabilise politics by eroding public trust in what is real. If leaders begin to label inconvenient evidence as AI-generated, whether true or not, the result could be a dangerous precedent where accountability becomes optional and facts are endlessly disputed.

For Trump, AI appears to represent both risk and opportunity. While he acknowledges its ability to create “phony things,” he also seems to see it as a ready-made shield against future controversies. In his own words, the solution may be simple: “just blame AI.”


