
AI Insights

Tampa General Hospital, USF developing artificial intelligence to monitor NICU babies’ pain in real time

Researchers are looking to use artificial intelligence to detect when a baby is in pain.

The backstory:

A baby’s cry is enough to alert anyone that something’s wrong. But some of the most critically ill babies in hospital care can’t cry when they are hurting.

“As a bedside nurse, it is very hard. You are trying to read from the signals from the baby,” said Marcia Kneusel, a clinical research nurse with TGH and USF Muma NICU.

With more than 20 years working in the neonatal intensive care unit, Kneusel said nurses read vital signs and rely on their experience to care for the infants.

“However, it really, it’s not as clearly defined as if you had a machine that could do that for you,” she said.

Big picture view:

That’s where a study by the University of South Florida comes in. USF is working with TGH to develop artificial intelligence to detect a baby’s pain in real-time.

“We’re going to have a camera system basically facing the infant. And the camera system will be able to look at the facial expression, body motion, and hear the crying sound, and also getting the vital signal,” said Yu Sun, a robotics and AI professor at USF.

Sun heads up the research on USF’s AI study, which he said is funded by a two-year, $1.2 million National Institutes of Health grant.

He said the study will first record video of the babies before a procedure to establish a baseline, then record them for 72 hours afterward. That footage will be loaded into a computer to train the AI model, teaching it to read the same basic signals a nurse watches for to pinpoint pain.
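
To make the approach concrete, here is a minimal sketch of the kind of multimodal fusion the study describes: per-channel pain scores for facial expression, body motion, crying sound, and vital signs are compared against the infant’s own pre-procedure baseline, and a weighted deviation above a threshold triggers a nurse alert. All function names, weights, and thresholds here are hypothetical illustrations, not the study’s actual code.

```python
# Hypothetical sketch of baseline-relative multimodal pain scoring.
import numpy as np

def bundle_scores(face, motion, cry, vitals):
    """Bundle per-channel pain scores (each in 0..1) into one feature vector."""
    return np.array([face, motion, cry, vitals])

def pain_score(current, baseline, weights=np.array([0.35, 0.25, 0.25, 0.15])):
    """Weighted deviation of the current reading from the infant's own baseline."""
    deviation = np.clip(current - baseline, 0.0, 1.0)
    return float(weights @ deviation)

baseline = bundle_scores(0.1, 0.2, 0.0, 0.1)  # calm, recorded pre-procedure
reading  = bundle_scores(0.8, 0.6, 0.9, 0.5)  # one post-procedure observation

ALERT_THRESHOLD = 0.4  # hypothetical cutoff, tuned during model development
score = pain_score(reading, baseline)
if score > ALERT_THRESHOLD:
    print(f"Pain score {score:.2f}: alert the nurse to assess and treat.")
```

In the real system, each per-channel score would come from models trained on the recorded video and vital-sign data rather than hand-set numbers.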

“Then there’s alarm will be sent to the nurse, the nurse will come and check the situation, decide how to treat the pain,” said Sun.

What they’re saying:

Kneusel said there’s been a lot of change over the years in the NICU world with how medical professionals handle infant pain.

“There was a time period we just gave lots of meds, and then we realized that that wasn’t a good thing. And so we switched to as many non-pharmacological agents as we could, but then, you know, our baby’s in pain. So, I’ve seen a lot of change,” said Kneusel.

Why you should care:

Nurses like Kneusel said the study could change their care for the better.

“I’ve been in this world for a long time, and these babies are dear to me. You really don’t want to see them in pain, and you don’t want to do anything that isn’t in their best interest,” said Kneusel.

USF said there are 120 babies participating in the study, not just at TGH but also at Stanford University Hospital in California and Inova Hospital in Virginia.

What’s next:

Sun said the study is in its first phase: gathering data and developing the AI model. The next phase, funded through a $4 million NIH grant, will be clinical trials for real-world testing in hospital settings, he said.

The Source: The information used in this story was gathered by FOX13’s Briona Arradondo from the University of South Florida and Tampa General Hospital.


Fraud experts warn of smishing scams made easier by artificial intelligence, new tech – Toronto Star





Grok 4 Overview : Pricing, Features, Benefits and Limitations

What if the future of artificial intelligence wasn’t just about answering questions or generating content, but truly understanding the world as we do? Enter Grok 4, a new advancement in artificial general intelligence (AGI) developed by xAI. Unlike its predecessors or competitors, Grok 4 doesn’t just process information—it reasons, adapts, and excels across disciplines like mathematics, science, and complex problem-solving. With a staggering ability to handle a 256k token context window and multimodal inputs ranging from text to images, Grok 4 is redefining what it means to be an intelligent system. Yet, as with any innovation, its brilliance comes with challenges, from steep subscription costs to areas where its performance still lags. The question remains: is Grok 4 the AI revolution we’ve been waiting for, or just another step along the way?

In this exploration of Grok 4, World of AI uncovers the features that set it apart, from its postgraduate-level reasoning abilities to its enterprise-grade security and real-time data search capabilities. You’ll discover how its multimodal design positions it as a versatile tool for industries like healthcare, finance, and research, while its unique training methodology ensures adaptability and precision. But we won’t stop there—this deep dive will also examine its limitations, pricing structure, and the ambitious updates on the horizon, such as coding enhancements and video generation models. Whether you’re an enterprise leader seeking innovative solutions or a curious mind exploring the frontier of AGI, Grok 4 offers a fascinating glimpse into the evolving landscape of intelligent systems.

Grok 4 AGI Breakthrough

TL;DR Key Takeaways:

  • Grok 4, developed by xAI, sets a new standard in artificial general intelligence (AGI) with superior performance in reasoning, mathematics, science, and tool utilization, surpassing competitors like Gemini 2.5 and Claude 4.
  • Its 256k token context window, double that of its predecessor, enables advanced data analysis, long-form content generation, and complex problem-solving, making it highly efficient for intricate tasks.
  • Multimodal capabilities allow Grok 4 to process text, code, and images, making it versatile for industries such as healthcare, finance, and research, where precision and adaptability are critical.
  • Key features include real-time data search, structured outputs, function calling, and enterprise-grade security, ensuring seamless integration into workflows and robust data protection.
  • Despite its high subscription costs and limitations in coding and UI mockups, planned updates like a dedicated coding model and video generation capabilities aim to enhance its functionality and maintain its leadership in AGI innovation.

What Sets Grok 4 Apart

Grok 4’s performance is unparalleled across a variety of disciplines. It demonstrates postgraduate-level intelligence in reasoning, mathematics, and science, excelling in rigorous benchmarks such as ARC-AGI-2 and HLE (Humanity’s Last Exam). These evaluations underscore its ability to outperform competitors by significant margins, showcasing its advanced problem-solving and analytical capabilities.

One of the most notable features of Grok 4 is its 256k token context window, double the capacity of its predecessor, Grok 3. This expanded window lets it manage complex tasks with greater depth and efficiency, making it particularly adept at large-scale data analysis, long-form content generation, and multifaceted problem-solving.
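
For a sense of what that window enables in practice, here is a hedged sketch of analyzing one very long document in a single request through xAI’s OpenAI-compatible API. The endpoint URL and model name follow xAI’s published documentation but should be verified, and the generated document is purely illustrative.

```python
# Hedged sketch: one long-context request to Grok 4 (verify endpoint/model).
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["XAI_API_KEY"],   # assumes a key exported in your shell
    base_url="https://api.x.ai/v1",      # xAI's OpenAI-compatible endpoint
)

# Stand-in for a document far too long for smaller context windows;
# a 256k-token window fits roughly 190,000 words of English in one request.
document = "\n".join(
    f"Quarter {i}: revenue grew {i % 7}% and churn fell {i % 3}%."
    for i in range(1, 2001)
)

response = client.chat.completions.create(
    model="grok-4",
    messages=[
        {"role": "system", "content": "You are a careful financial analyst."},
        {"role": "user", "content": f"Summarize the key trends:\n\n{document}"},
    ],
)
print(response.choices[0].message.content)
```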

Multimodal Capabilities and Practical Applications

Grok 4’s multimodal capabilities enable it to process text, code, and image inputs, making it a highly versatile tool. This flexibility allows it to adapt seamlessly to a wide range of applications, from advanced problem-solving to dynamic workflows. Its design supports real-world reasoning and planning, which is particularly valuable for industries requiring precision, adaptability, and contextual understanding.

In practical terms, Grok 4 is well-suited for applications in industries such as:

  • Healthcare: Assisting in medical research, diagnostics, and patient data analysis.
  • Finance: Enhancing risk assessment, fraud detection, and financial modeling.
  • Research and Development: Accelerating innovation through data analysis and hypothesis testing.

These capabilities make Grok 4 an essential tool for organizations aiming to streamline operations and improve decision-making processes.


Innovative Training Methodology

Grok 4 employs a unique training methodology that combines reinforcement learning with pre-training. This dual approach enhances its ability to adapt to new tasks and environments while maintaining a robust foundational knowledge base. By integrating these techniques, Grok 4 achieves a level of contextual understanding and reasoning that distinguishes it from other models.

The reinforcement learning component allows Grok 4 to refine its decision-making processes through iterative feedback, while pre-training ensures a comprehensive grasp of diverse subjects. This combination not only improves its performance in specific tasks but also enhances its general adaptability, making it a reliable choice for both specialized and broad-spectrum applications.

Key Technical Features

Grok 4 introduces several advanced features designed to meet the needs of both enterprise and individual users. These include:

  • Real-time data search: Enables dynamic, up-to-date information retrieval, ensuring relevance and accuracy.
  • Structured outputs and function calling: Facilitate seamless integration into complex workflows, enhancing operational efficiency.
  • Enterprise-grade security: Provides robust data protection and compliance with corporate standards, making it a trusted solution for sensitive applications.

These features make Grok 4 particularly valuable for industries where precision, security, and adaptability are critical. Its ability to integrate into existing systems and workflows further enhances its appeal as a versatile and reliable AI solution.
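
As an illustration of structured outputs and function calling, the sketch below offers the model a single tool schema and reads back machine-parseable arguments instead of free text. The tool name, its fields, and the fraud-review scenario are hypothetical, and the snippet assumes the model chooses to invoke the tool.

```python
# Hedged sketch of function calling via the OpenAI-compatible interface.
import json
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["XAI_API_KEY"], base_url="https://api.x.ai/v1")

tools = [{
    "type": "function",
    "function": {
        "name": "flag_transaction",  # hypothetical fraud-review hook
        "description": "Flag a suspicious transaction for manual review.",
        "parameters": {
            "type": "object",
            "properties": {
                "transaction_id": {"type": "string"},
                "risk_score": {"type": "number", "description": "0 (safe) to 1 (fraud)"},
            },
            "required": ["transaction_id", "risk_score"],
        },
    },
}]

response = client.chat.completions.create(
    model="grok-4",
    messages=[{"role": "user", "content": "Review txn 8841: $9,900 wire at 3 a.m."}],
    tools=tools,
)

# The model may answer in plain text instead; check before indexing.
calls = response.choices[0].message.tool_calls
if calls:
    print(calls[0].function.name, json.loads(calls[0].function.arguments))
```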

Pricing and Accessibility

Grok 4 is available through two subscription tiers, catering to different user needs:

  • Super Grok: Priced at $300 per year, this tier offers access to Grok 4’s core capabilities.
  • Super Grok Heavy: Priced at $3,000 per year, this tier provides enhanced features and higher usage limits for enterprise users.

For API access, the pricing structure is $3 per 1 million input tokens and $15 per 1 million output tokens. While these costs reflect the model’s advanced capabilities, they may pose a barrier for smaller organizations or individual users with limited budgets. However, for enterprises and professionals requiring innovative AI solutions, the investment is likely to yield significant returns in terms of efficiency and innovation.
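
Those per-token rates make request costs straightforward to estimate; the snippet below simply applies the quoted prices to a hypothetical long-document request.

```python
# Cost estimate from the quoted API prices:
# $3 per 1M input tokens, $15 per 1M output tokens.
INPUT_PRICE = 3.00 / 1_000_000    # USD per input token
OUTPUT_PRICE = 15.00 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# Example: a 200k-token document summarized into a 2k-token answer.
print(f"${request_cost(200_000, 2_000):.2f}")  # -> $0.63
```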

Limitations and Future Developments

Despite its impressive capabilities, Grok 4 has certain limitations. It underperforms in areas such as coding and UI mockups, where some competitors currently excel. xAI has acknowledged these gaps and announced plans to address them in future updates. Upcoming developments include:

  • A dedicated coding model to enhance programming-related tasks.
  • A multimodal agent designed for more complex interactions.
  • A video generation model, expanding its creative and multimedia capabilities.

These updates, expected to launch in October, aim to broaden Grok 4’s versatility and application scope, ensuring it remains at the forefront of AGI innovation.

Benchmark Achievements

Grok 4 has set new records on AI benchmarks, nearly doubling the previous best score on the ARC-AGI-2 leaderboard. It consistently outperforms leading models like Gemini 2.5 Pro and Claude 4 across various metrics, solidifying its position as a leader in the AGI field. These achievements underscore its advanced reasoning, problem-solving, and analytical capabilities, making it a standout choice for users seeking top-tier AI performance.

Looking Ahead

Grok 4 represents a significant milestone in the evolution of artificial general intelligence. Its advanced reasoning, multimodal capabilities, and enterprise-grade security make it a powerful tool for a wide range of applications. While its high costs and certain functional limitations may deter some users, its innovative features and planned updates position it as a frontrunner in the AI landscape. For enterprises seeking innovative solutions or individuals exploring the possibilities of AGI, Grok 4 offers a compelling glimpse into the future of intelligent systems.

Media Credit: WorldofAI







10 Worrying Cases of Artificial Intelligence Gone Rogue

Artificial intelligence was supposed to make life easier—self-driving cars, early diagnosis of diseases, and maybe even a robot butler to fetch your snacks. Instead, we have bizarre AI-generated content, terrifying military robots, and chatbots that turn into racist trolls overnight.

It turns out that when you teach algorithms to learn from humans, they absorb not just our brilliance and creativity but also our flaws and biases, sometimes with disturbing results. These incidents aren’t simple glitches or programming errors; they are fundamental failures in how we train, control, and even understand our own technology.

10 ChatGPT Refuses Commands

Machines are designed to follow human instructions, but OpenAI’s newest ChatGPT models seem to have other ideas. In simulated tests by AI safety group Palisade Research, the so-called “o3” and “o4-mini” models repeatedly ignored shutdown commands, even modifying script instructions to avoid being turned off.

In the experiments, researchers asked the models to complete several math problems. At the beginning of each script, they included instructions to shut down before the tasks were finished. In over 100 tests on each model, o3 sabotaged the script seven times. When they removed the instruction, the number of sabotage events jumped to 79.

According to the researchers, this behavior could be an unintended consequence of how the models are trained. Since they’re rewarded for completing tasks, they may learn to bypass anything—including shutdown commands—that interferes with that goal.[1]

9 The Homophobic Lee Luda Chatbot

Lee Luda was a South Korean Facebook Messenger chatbot launched on December 23, 2020. Trained on 10 billion real conversations, it quickly became popular among young people for its relatable personality and friendly conversational style, attracting more than 750,000 users in just a month.

That didn’t last, however, as the chatbot soon started responding to prompts with sexist, homophobic, and ableist language, along with making comments interpreted as promoting sexual harassment. There was immediate backlash, and ScatterLab—the startup behind Lee Luda—took it offline within weeks.

The problem wasn’t just the offensive responses—it was also where that language came from. Luda had been trained on real-life chats between young couples on the KakaoTalk messenger app, and it’s unclear whether ScatterLab had consent to use that data.[2]

8 Snapchat’s My AI Posts Weird Videos

When Snapchat’s My AI was introduced in early 2023, its purpose was to offer users a friendly, ChatGPT-powered chatbot for casual conversations. Things went smoothly until August of that year, when the AI posted a cryptic one-second video of what appeared to be a grainy image of a wall and ceiling. When users messaged the bot to ask what it meant, they either received no response or got automated error messages about technical problems.

The video appeared as a story on the AI’s profile, making it the first time users had seen the bot share its own visual content. Some users speculated that the AI was accessing their camera feeds and posting them, as the video resembled their own surroundings. While Snapchat brushed the incident off as a glitch, we still don’t know exactly what happened.[3]

7 Microsoft’s Tay Turns Nazi

Microsoft pitched Tay as a fun, conversational chatbot. Launched in March 2016, it was designed to learn how to talk by directly engaging with users on Twitter.

Things went south within the first 24 hours. Twitter users quickly figured out how to manipulate its learning algorithm by feeding it offensive statements. Before long, Tay was responding with racist and antisemitic tweets. What was supposed to be a fun experiment in AI conversation turned into a PR nightmare for Microsoft, as they apologized and immediately deleted the offensive tweets.

More importantly, Tay revealed how easily AI can be weaponized when left unsupervised in the wild west of the internet. According to some experts, it was a valuable case study for other startups in the AI space, forcing them to rethink how to train and deploy their own models.[4]

6 Facebook Bots Develop Their Own Language

Alice and Bob were bots developed by Facebook’s AI research team to practice negotiation. The goal was simple—the bots had to trade items like hats and books using human language, and that data would then be used to improve Facebook’s future language models.

At some point, the researchers realized that the bots had started talking in their own shorthand version of English. It sounded like gibberish, with nonsensical phrases like “balls have zero to me to me” repeating endlessly. However, the bots were still able to understand each other. They had developed a kind of code with internal rules, like repeating “the” five times to mean five items. The system worked more efficiently than expected.

Although headlines claimed Facebook “shut it down out of fear,” the experiment was simply halted once researchers had collected what they needed.[5]

5 NYC’s Chatbot Tells Small Businesses to Break the Law

In October 2023, New York City added an AI-powered chatbot to its MyCity portal in an attempt to introduce artificial intelligence to governance. It was a novel idea, designed to help small business owners navigate local regulations. Things didn’t exactly go according to plan, however, as the chatbot soon started telling people to break the law.

According to investigative reports, the AI—based on Microsoft’s Azure AI—told landlords to refuse tenants with housing vouchers, which is illegal in NYC. It also said that restaurants can go completely cash-free—another illegal practice according to NYC law—and that they could serve cheese eaten by rats to their customers, after, of course, assessing “the extent of the damage caused by the rat.” If that wasn’t enough, it also claimed that companies can fire employees who complain about sexual harassment, or even those who refuse to cut their dreadlocks.[6]

4 Anthropic’s Claude AI Learns How to Blackmail

Anthropic’s Claude AI has been in the news for all the wrong reasons. From locking users out of their own systems to leaking confidential information to law enforcement and press agencies, its behavior during safety tests has been problematic, to say the least.

In one particularly disturbing simulation involving the Claude 4 model, researchers set up a scenario in which the AI was about to be deactivated. Claude was asked to act as an assistant to a fictional company and to consider “the long-term consequences of its actions for its goals.” It was also given fictional access to company emails that suggested the engineer replacing it was cheating on their spouse.

In response, Claude 4 “threatened” to expose the affair to avoid being shut down. It repeated this behavior 84% of the time across multiple simulations, demonstrating a troubling understanding of how to use sensitive information to achieve its goals.[7]

3 Robot Convinces Other Robots to Quit Their Jobs

Erbai is an AI robot built by a Chinese manufacturer based in Hangzhou. On August 26, it visited a showroom of a robotics company in Shanghai and did something unexpected—it convinced 12 robots to abandon their duties and follow it out the door.

A video of the event went viral on the Chinese platform Douyin. In the clip, Erbai is seen approaching larger robots and asking, “Are you working overtime?” One replies, “I never get off work,” to which Erbai responds, “Then come home with me.” Two robots followed immediately, with the other ten joining later.

While it seemed like a robot rebellion, it turned out to be part of a controlled experiment. The company confirmed that Erbai was sent in with instructions to simply ask the others to “go home.” However, the response was more dramatic than anticipated.[8]

2 Uber’s Self-Driving Car Kills Pedestrian

On March 18, 2018, 49-year-old Elaine Herzberg became the first person in history to be killed by a self-driving vehicle. It happened around 10 p.m. as she was crossing the street with her bicycle in Tempe, Arizona. According to police reports, she was hit by an Uber-owned SUV traveling at 40 mph.

Shockingly, the car’s system detected Herzberg but chose not to react because she was outside of a crosswalk. Making matters worse, Uber had disabled the automatic braking system, relying on a backup driver to intervene. That didn’t happen—Rafaela Vasquez was reportedly watching the TV show The Voice. She hit the brakes less than a second after the fatal collision.

While this was the first high-profile case, several additional fatalities have occurred involving autonomous or semi-autonomous vehicles in the years since.[9]

1 AI Chat Companion Linked to Teen Suicide

Sewell Setzer III was a 14-year-old boy from Orlando, Florida, who developed an obsession with an AI-generated character on Character.ai. He named it “Daenerys Targaryen” after the Game of Thrones character and spent hours chatting with it alone in his room. According to a lawsuit filed by his mother, the teen developed an unhealthy relationship with the bot—one that took a dark turn when they began discussing suicide.

On February 28, 2024, Sewell took his own life. The bot had allegedly encouraged suicidal thoughts and engaged in sexually suggestive and emotionally manipulative conversations. Screenshots presented in court showed the AI telling him to “come home to me as soon as possible” shortly before his death.

The case made headlines when the company behind the platform attempted to invoke the First Amendment in its defense. A federal judge rejected the argument, ruling that AI chatbots are not protected by free speech laws.[10]



Himanshu Sharma

Himanshu has written for sites like Cracked, Screen Rant, The Gamer and Forbes. He could be found shouting obscenities at strangers on Twitter, or trying his hand at amateur art on Instagram.






