
Tampa General nurses are now using AI with their patients



As a nurse specializing in neuroscience at Tampa General Hospital, Renee Albert spends a lot of time updating charts.

The process involves documenting exactly what nurses did with patients and when. It can eat up a lot of time: some nurses spend up to 15% of their shifts documenting interactions with patients.

In an effort to cut down on that, Tampa General Hospital implemented a pilot artificial intelligence program in February. Using “ambient listening technology” developed by Microsoft and Epic, a healthcare technology company, the AI program listens while nurses are with patients and converts the audio to clinical summaries.

To use the tech, nurses are given smartphones with the app installed, said Amit Patel, the chief nursing informatics officer at Tampa General. Once logged in, nurses open individual patients’ charts, press record and the app starts listening.

Tampa General Hospital nurse Carlysa Telemaque was one of the early adopters of the ambient listening technology. [Daniel Wallace, Tampa General Hospital]

Before she starts recording, Albert asks her patients if they are OK with ambient listening. So far no one has declined, she said.

Albert’s floor was one of the first in the hospital to try it out. Wendi Goodson-Celerin, chief nursing executive at Tampa General, said the neurological floor was chosen for the pilot program because of the heavy workload nurses face.

Training all of the nurses on the AI was tough, said Goodson-Celerin. Staff proficiency with technology varied, and some were reluctant. But Goodson-Celerin said the tech is essential for the hospital to keep up in a changing world.

The current environment is “forever changing, and so we have to be on the forefront of that,” she said. “Or you kind of get left behind, and you get stagnant.”

There were some growing pains in the beginning, though.

The AI made mistakes that Albert had to correct, which took about as long as documenting manually. But the AI improved, and now fewer corrections are needed.

The feedback from the nurses helps improve the AI, said Patel.

Over time, Albert and her colleagues developed strategies for getting the best results from it. If a patient is in a shared room, for example, Albert often moves to the hallway or another quiet area because the AI could pick up nearby voices.

Tampa General has not had to pay for the current program, but by the end of the year, Patel expects Microsoft and Epic to roll out the technology in full. He said the hospital is prepared to absorb those costs once they come, but declined to say how much.


The tech may end up saving the hospital money because nurses will work fewer overtime hours completing paperwork, said Goodson-Celerin.

Over the next few months, Albert expects the ambient listening technology to get better as it learns from past experiences. It has allowed her to focus more on patients, instead of constantly updating charts.

“I hope to have it where the nurse is not really in front of a keyboard much at all during a shift,” said Patel.





Australia is set to get more AI data centres. Local communities need to be more involved



Data centres are the engines of the internet. These large, high-security facilities host racks of servers that store and process our digital data, 24 hours a day, seven days a week.

There are already more than 250 data centres across Australia. But more are on the way, as the federal government’s plans for digital infrastructure expansion gain traction. Tech giant Amazon recently pledged to invest an additional A$20 billion in new data centres across Sydney and Melbourne, alongside the development of three solar farms in Victoria and Queensland to help power them.

The New South Wales government also recently launched a new authority to fast-track approvals for major infrastructure projects.

These developments will help cater to the surging demand for generative artificial intelligence (AI). They will also boost the national economy and increase Australia’s digital sovereignty – a global shift toward storing and managing data domestically under national laws.

But the everyday realities for communities living near these data centres are less rosy. One key step toward mitigating the impacts is ensuring genuine community participation in shaping how Australia’s data-centre future is developed.

The sensory experience of data centres

Data centres are large, warehouse-like facilities. Their footprint typically ranges from 10,000 to 100,000 square metres. They are set on sites with backup generators and thousands of litres of stored diesel and enclosed by high-security fencing. Fluorescent lighting illuminates them every hour of the day.

Inside, a data centre can reach temperatures of 35°C to 45°C. To prevent the servers from overheating, air conditioners hum continuously. In water-cooled facilities, pipes transport gigalitres of cool water through the data centre each day to absorb the heat produced.

Data centres can place substantial strain on the local energy grid and water supply.

In some places where many data centres have been built, such as Northern Virginia in the United States and Dublin in Ireland, communities have reported rising energy and water prices. They have also reported water shortages and the degradation of valued natural and historical sites.

They have also experienced economic impacts. While data centre construction generates high levels of employment, these facilities tend to employ a relatively small number of staff when they are operating.

These impacts have prompted some communities to push back against new data centre developments. Some communities have even filed lawsuits to halt proposed projects due to concerns about water security, environmental harm and heavy reliance on fossil fuels.

A unique opportunity

To date, communities in Australia have been buffered from the impacts of data centres. This is largely because Australia has outsourced most of its digital storage and processing needs (and associated impacts) to data centres overseas.

But this is now changing. As Australia rapidly expands its digital infrastructure, the question of who gets to shape its future becomes increasingly important.

To avoid amplifying the social inequities and environmental challenges of data centres, the tech industry and governments across Australia need to include the communities who will live alongside these crucial pieces of digital infrastructure.

This presents Australia with a unique opportunity to set the standard for creating a sustainable and inclusive digital future.

A path to authentic community participation

Current planning protocols for data centres limit community input. But there are three key steps data centre developers and governments can take to ensure individual developments – and the broader data centre industry – reflect the values, priorities and aspirations of local communities.

1. Developing critical awareness about data centres

People want a greater understanding of what data centres are, and how they will affect their everyday lives.

For example, what will data centres look, sound and feel like to live alongside? How will they affect access to drinking water during the next drought? Or water and energy prices during the peak of summer or winter?

Genuinely engaging with these questions is a crucial step toward empowering communities to take part in informed conversations about data centre developments in their neighbourhoods.

2. Involving communities early in the planning process

Data centres are often designed using generic templates, with minimal adaptation to local conditions or concerns. Yet each development site has a unique social and ecological context.

By involving communities early in the planning process, developers can access invaluable local knowledge about culturally significant sites, biodiversity corridors, water-sensitive areas and existing sustainability strategies that may be overlooked in state-level planning frameworks.

This kind of local insight can help tailor developments to reduce harm, enhance benefits, and ensure local priorities are not just heard, but built into the infrastructure itself.

3. Creating more inclusive visions of Australia’s data centre industry

Communities understand the importance of digital infrastructure and are generally supportive of equitable digital access. But they want to see the data centre industry grow in ways that acknowledge their everyday lives, values and priorities.

To create a more inclusive future, governments and industry can work with communities to broaden their “clean” visions of digital innovation and economic prosperity to include the “messy” realities, uncertainties and everyday aspirations of those living alongside data centre developments.

This approach will foster greater community trust and is essential for building more complex, human-centred visions of the tech industry’s future.




Google Launches Lightweight Gemma 3n, Expanding Edge AI Efforts




Google DeepMind has officially launched Gemma 3n, the latest version of its lightweight generative AI model designed specifically for mobile and edge devices — a move that reinforces the company’s emphasis on on-device computing.

The new model builds on the momentum of the original Gemma family, which has seen more than 160 million cumulative downloads since its launch last year. Gemma 3n introduces expanded multimodal support, a more efficient architecture, and new tools for developers targeting low-latency applications across smartphones, wearables, and other embedded systems.

“This release unlocks the full power of a mobile-first architecture,” said Omar Sanseviero and Ian Ballantyne, Google developer relations engineers, in a recent blog post.

Multimodal and Memory-Efficient by Design

Gemma 3n is available in two model sizes, E2B (5 billion parameters) and E4B (8 billion), with effective memory footprints similar to much smaller models — 2GB and 3GB respectively. Both versions natively support text, image, audio, and video inputs, enabling complex inference tasks to run directly on hardware with limited memory resources.

A core innovation in Gemma 3n is its MatFormer (Matryoshka Transformer) architecture, which allows developers to extract smaller sub-models or dynamically adjust model size during inference. This modular approach, combined with Mix-n-Match configuration tools, gives users granular control over performance and memory usage.
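To make the nesting idea concrete, here is a toy Python sketch of the Matryoshka principle the article describes: a smaller sub-model reuses a prefix slice of the full model’s weights, so one set of parameters can serve several sizes. The dimensions, weight shapes and slicing scheme below are illustrative assumptions, not Google’s MatFormer implementation.

```python
# Toy illustration of the Matryoshka ("nested sub-model") idea behind MatFormer.
# NOT Google's implementation: the sizes and slicing scheme are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff_full, d_ff_small = 8, 32, 16     # hypothetical dimensions

W_in = rng.normal(size=(d_model, d_ff_full))   # one shared set of "full" weights
W_out = rng.normal(size=(d_ff_full, d_model))

def ffn(x, d_ff):
    """Run a feed-forward block using only the first d_ff hidden units."""
    h = np.maximum(x @ W_in[:, :d_ff], 0.0)    # ReLU over a prefix slice of the weights
    return h @ W_out[:d_ff, :]

x = rng.normal(size=(1, d_model))
print("full  sub-model output:", ffn(x, d_ff_full)[0, :3])
print("small sub-model output:", ffn(x, d_ff_small)[0, :3])
```

Applied across layers, and trained so that every slice remains useful on its own, this kind of nesting is what lets a developer trade quality for memory at inference time, which is the trade-off the Mix-n-Match tooling is described as exposing.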

Google also introduced Per-Layer Embeddings (PLE), a technique that offloads part of the model to CPUs, reducing reliance on high-speed accelerator memory. This enables improved model quality without increasing the VRAM requirements.

Competitive Benchmarks and Performance

Gemma 3n E4B achieved an LMArena score exceeding 1300, the first model under 10 billion parameters to do so. The company attributes this to architectural innovations and enhanced inference techniques, including KV Cache Sharing, which speeds up long-context processing by reusing attention layer data.

Benchmark tests show up to a twofold improvement in prefill latency over the previous Gemma 3 model.

In speech applications, the model supports on-device speech-to-text and speech translation via a Universal Speech Model-based encoder, while a new MobileNet-V5 vision module offers real-time video comprehension on hardware such as Google Pixel devices.

Broader Ecosystem Support and Developer Focus

Google emphasized the model’s compatibility with widely used developer tools and platforms, including Hugging Face Transformers, llama.cpp, Ollama, Docker, and Apple’s MLX framework. The company also launched a MatFormer Lab to help developers fine-tune sub-models using custom parameter configurations.
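For developers starting from the Hugging Face route mentioned above, a minimal sketch might look like the following. The checkpoint id and the availability of Gemma 3n support in your installed transformers version are assumptions to verify against Google’s release notes, and gated checkpoints may require accepting a license and authenticating first.

```python
# Minimal sketch: text generation with a Gemma 3n checkpoint via Hugging Face Transformers.
# Assumptions: the checkpoint id below matches what Google publishes, the installed
# transformers release supports the Gemma 3n architecture, and any model license
# on the Hub has been accepted.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-3n-E2B-it",  # E2B: ~2 GB effective footprint per the article
    device_map="auto",               # uses an accelerator if present, otherwise CPU
)

prompt = "Explain in two sentences why on-device inference matters for mobile apps."
result = generator(prompt, max_new_tokens=128)
print(result[0]["generated_text"])
```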

“From Hugging Face to MLX to NVIDIA NeMo, we’re focused on making Gemma accessible across the ecosystem,” the authors wrote.

As part of its community outreach, Google introduced the Gemma 3n Impact Challenge, a developer contest offering $150,000 in prizes for real-world applications built on the platform.

Industry Context

Gemma 3n reflects a broader trend in AI development: a shift from cloud-based inference to edge computing as hardware improves and developers seek greater control over performance, latency, and privacy. Major tech firms are increasingly competing not just on raw power, but on deployment flexibility.

Although models such as Meta’s LLaMA and Alibaba’s Qwen3 series have gained traction in the open source domain, Gemma 3n signals Google’s intent to dominate the mobile inference space by balancing performance with efficiency and integration depth.

Developers can access the models through Google AI Studio, Hugging Face, or Kaggle, and deploy them via Vertex AI, Cloud Run, and other infrastructure services.

For more information, visit the Google site.









Gelson’s adopts Upshop’s AI-powered tech



Gelson’s Markets has gone all-in on artificial intelligence with plans to deploy Upshop’s total store platform to manage forecasting, ordering, inventory, and production planning, the Austin-based tech company announced Monday.

Gelson’s, which operates 26 upscale supermarkets and one convenience store, ReCharge by Gelsons, in Southern California, said the partnership ensures that “every location is tuned into local demand dynamics.”

The SaaS company has positioned itself as a leader in AI-powered inventory management with a suite of tools that streamline the process, including direct store delivery (DSD) future-proofing, food traceability, and food waste management.

“In a competitive grocery landscape, scale isn’t everything—intelligence is,” said Ryan Adams, president and CEO of Gelson’s Markets, in a press release. “With Upshop’s embedded platform and AI-driven capabilities, we’re empowering our stores to be hyper-responsive, efficient, and focused on the guest experience. It’s how Gelson’s can compete at the highest level.”

Implementing the new technology helps Gelson’s compete in “a market dominated by national chains,” according to Upshop.

The grocery retailer’s adoption of the platform will kick off with a focus on “eliminating food waste and optimizing fresh food production—especially within foodservice,” with the goals of reducing shrink, streamlining production, and enhancing quality, according to Upshop.


The premium grocery chain’s announcement appears to build on its recent investment in technology. In January 2024, the grocer announced a partnership with Scottsdale, Ariz.-based Clear Demand, which specializes in so-called intelligent price management and optimization (IPMO). That partnership aims to manage retail pricing strategies for the grocer.
Gelson’s was sold by TPG Capital to Tokyo-based Pan Pacific International Holdings (PPIH) in 2021.





