
Tools & Platforms

Box’s new AI features help unlock dormant data – Computerworld



AI provides a technique to extract value from this untapped resource, said Ben Kus, chief technology officer at Box. Using the widely scattered data properly requires preparation, organization, and interpretation to ensure it is applied accurately, Kus said.

Box Extract uses reasoning to dig deep and extract relevant information. The AI technology ingests the data, reasons and extracts context, matches patterns, reorganizes the information by placing it in fields, and then draws correlations from the new structure. In effect, it uses smarter AI-driven analysis to restructure unstructured data.
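The ingest-extract-reorganize flow described above can be sketched in a few lines. This is an illustrative toy, not Box Extract's actual implementation: the field names and patterns are hypothetical, standing in for the kind of structured fields an AI system would populate from free text.

```python
import re

# Hypothetical sketch: pull loosely structured facts out of free text
# and place them into named fields, mirroring the ingest -> extract ->
# reorganize flow described above. Patterns and field names are invented.
PATTERNS = {
    "invoice_id": re.compile(r"invoice\s+#?(\w+)", re.IGNORECASE),
    "amount": re.compile(r"\$([\d,]+\.\d{2})"),
    "due_date": re.compile(r"due\s+(?:by|on)\s+(\d{4}-\d{2}-\d{2})", re.IGNORECASE),
}

def extract_fields(text: str) -> dict:
    """Match each pattern against the raw text and fill a field when found."""
    record = {}
    for field, pattern in PATTERNS.items():
        match = pattern.search(text)
        if match:
            record[field] = match.group(1)
    return record

doc = "Please pay invoice #A1073 ($1,250.00) due by 2025-07-31."
print(extract_fields(doc))
# → {'invoice_id': 'A1073', 'amount': '1,250.00', 'due_date': '2025-07-31'}
```

A real system would replace the fixed regexes with model-driven extraction, but the output shape is the same: unstructured text reorganized into fields that downstream tools can correlate.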

“Unstructured data is cool again. All of a sudden it’s not just about making it available in the cloud, securing it, or collaboration, but it’s about doing all that and AI,” Kus said.





Evertrace acquires Whisper AI to build the leading VC sourcing tool



Evertrace – the founder detection engine for data-driven VCs – today announced the acquisition of Whisper AI. Whisper AI brings deep expertise in company data, trade registry integrations, and a strong foothold in the DACH market – a key step in Evertrace’s wider European and global expansion.

The acquisition accelerates Evertrace’s mission to give investors the earliest and most precise signals on emerging founders and companies. By combining Whisper AI’s registry and company data with Evertrace’s detection engine, the company moves closer to executing on this mission and to becoming the key player in the market.

“Whisper AI’s expertise in company registries and their position in the DACH region give us access to a unique set of data sources and a crucial market. Together, we can strengthen our ability to surface the founders and companies investors need to know about – earlier than anyone else,” said Jacob Graubæk Houlberg, Co-founder at Evertrace.

“We founded Whisper AI to make company and registry data more accessible and actionable for VC investors. Becoming part of Evertrace allows us to scale that mission significantly – and directly contribute to building the leading sourcing engine for early-stage investors,” said Nikolai Niklaus, founder of Whisper AI.

Whisper AI’s technology will be fully integrated into the Evertrace platform, giving customers richer signals, faster updates, and broader geographic coverage.

About Evertrace

Evertrace is the founder detection engine for data-driven venture capital investors. Using machine learning and unique data signals, Evertrace helps funds identify founders earlier than anybody else.

About Whisper AI

Whisper AI specializes in advanced company data and registry integrations, with a particular focus on the DACH market. Its technology enables the early detection of new companies and founders for European early-stage investors by turning complex data pipelines into actionable insights.




AI will be the most transformative force in human history




‘My advice to young people is to study STEM and experiment with AI because you will always be better off understanding how new technologies work and how they can be used,’ Demis Hassabis, CEO of DeepMind Technologies, tells Kathimerini’s Executive Editor Alexis Papachelas. [Nikos Kokkalias]

From the age of 4, he had already demonstrated remarkable talent in chess. By 17, he had created his first video game. In 2024, at 48, Demis Hassabis was awarded the Nobel Prize in Chemistry, alongside his colleague John M. Jumper, for their groundbreaking AI research in protein structure prediction.

Born in London to a Greek-Cypriot father and a mother of Singaporean heritage, Hassabis is now regarded as one of the leading pioneers in artificial intelligence. He is the chief executive officer and co-founder of DeepMind, acquired by Google in 2014.

Last week, Hassabis visited Athens to meet with Prime Minister Kyriakos Mitsotakis and discuss AI, ethics and democracy within the framework of the Athens Innovation 2025 conference. On the occasion of his discussion at the Odeon of Herodes Atticus on Friday, he spoke to Kathimerini about the hopes and concerns surrounding “intelligent” technologies. For the first time, he also revealed how the trauma of displacement from Famagusta in 1974 has shaped both his family and himself.

You have an amazing personal story. Based on your experience, what advice would you give to a young person growing up today?

The only certainty is that there will be incredible change over the next 10 years. My advice to young people is to study STEM [science, technology, engineering and mathematics] and experiment with AI because you will always be better off understanding how new technologies work and how they can be used. But don’t neglect meta-skills, like learning how to learn, creativity, adaptability and resilience. They will all be critical for the next generation.

You often talk about the AI revolution being much bigger than the Industrial Revolution. You are one of the few people that can describe our future 10 years down the road. What will the fundamental changes be?

There will be profound change as AI advances. Universal assistants will perform mundane tasks, freeing us up for more creative pursuits. AI tools will help personalize education and curate information for us, allowing us to protect our attention and mindspace from the bombardment of the digital world. AI will also help us design new medicines and materials faster, giving us better batteries and new sources of clean energy. All of this could lead to an era of radical abundance by eliminating the scarcity of water, food, energy and other resources, allowing for maximum human flourishing. But this amazing future depends on society stewarding AI safely and responsibly. Just as with industrialization, the transition will come with challenges. The Industrial Revolution was a net good for society and propelled the world forward. I’m hopeful AI can deliver a similar leap for humanity.

Will an AI “creature” be able to hold a Socratic dialogue on abstract ideas with a real philosopher during our lifetime?

It’s plausible and perhaps even likely. Today’s AI systems are impressive, but they lack some key capabilities for a true Socratic dialogue. They don’t have a deep, conceptual understanding to explain their reasoning, and they can’t pose their own novel questions to explore ideas. Right now, we question and they answer. In the future, they will have to be able to do both in a way that doesn’t mimic us but pushes us to be creative and takes us down new avenues of thought.

There hasn’t been a commercially available AI app that has been proven really profitable… Do you share the concern that AI expectations have created a stock market bubble similar to the dot-com one?

In the short term, there is a lot of hype around AI – probably too much – because even though today’s systems are extremely impressive, they also have lots of flaws. Many near-term promises are being made that aren’t really scientifically grounded. But in the medium to longer term, the monumental impact AI is going to have is still underappreciated. It will be the most transformative moment in human history, so I think the investments we’re making are well-justified.

Is China really ahead of the US in terms of AI development? Do you see a new cold war emerging with competing AI platforms? Can Europe become a serious player on this frontier?

The US and the West are ahead of China on AI development currently but China’s domestic AI ecosystem is strong and catching up fast, as shown by recent model releases. Europe (including the UK) can be a serious player. It has real strengths in AI through its history of scientific discovery, incredible academic institutions and strong startup environment. There’s an important role for Europe in working with close allies like the US to shape the responsible development and governance of AI globally. But this will require it to remain innovative, dynamic and at the technical forefront.

You often paint an almost utopian picture of the future, with AI providing solutions to almost every challenge. Does your prediction run the risk of being too optimistic since AI will also create huge disruptions because of massive unemployment and energy depletion?

I’m a cautious optimist. I think artificial general intelligence (AGI) will be one of the most beneficial technologies ever invented. But there are significant risks that have to be managed and there is a high degree of uncertainty. There are technical ways to anticipate and mitigate these risks, but as a society, we should be trying to better understand and prepare for them. We need economists, social scientists, philosophers and other experts to be thinking about the implications of a post-AGI world. A technology with the potential for such profound societal impact must be handled with exceptional care and foresight.

AI is dramatically changing our business, the media business. Any thoughts on how solid news reporting and analysis can survive in the AI era? And do we run the risk of generations of “lazy minds” who will just look for ready, fully digestible answers to everything on their smartphone? Will AI-provided information be controlled by a few info-bosses?

AI can be a powerful tool for journalists, helping them handle more mundane information-gathering tasks so they can spend more time on valuable reporting. Misinformation and deepfakes are real risks but technical solutions exist, like invisible watermarking, to help people distinguish between real and fake information. Universal digital assistants will help us be more productive at work and in our personal lives, freeing up time for creativity and deep thinking. By helping to synthesize and understand information, they could enable us to learn faster. They could also enrich our lives by making better, more personalized recommendations for books, music and other ways we like to spend our time. Ensuring fair and equal access to AI requires careful management and cooperation between governments, academia, civil society, philosophers and the public.

Your father’s family had to abandon their home in Cyprus in 1974. Was this a traumatic moment and an important part of your growing up?

It was a devastating moment for my grandparents because they lost literally everything. They were working in the UK at the time but were sending all their money back to Cyprus to try to build their family home in Famagusta and then eventually go back to live there. They lost everything and I don’t think they ever really fully recovered from it. Obviously, it loomed large as a big part of my upbringing, a sort of unspoken thing always in the background.






Best Practices for Responsible Innovation



Dr. Heather Bassett, Chief Medical Officer at Xsolis

Our patients can’t afford to wait on officials in Washington, DC, to offer guidance around responsible applications of AI in the healthcare industry. The healthcare community needs to stand up and continue to put guardrails in place so we can roll out AI responsibly and maximize its evolving potential.

Responsible AI, for example, should include reducing bias in access to and authorization of care, protecting patient data, and making sure that outputs are continually monitored. 

With the heightened need for industry-specific regulations to come from the bottom up — as opposed to top-down — let’s take a closer look at the AI best practices currently dominating conversations among the key stakeholders in healthcare. 

Responsible AI without squashing innovation

How can healthcare institutions and their tech industry partners continue innovating for the benefit of patients? That must be the question guiding the innovators moving AI forward. On a basic level of security and legal compliance, that means companies developing AI technologies for payers and providers should be aware of HIPAA requirements. De-identifying any data that can be linked back to patients is an essential component to any protocol whenever data-sharing is involved.
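The de-identification step mentioned above can be sketched simply. This is an illustrative toy only, with hypothetical field names; full HIPAA de-identification (Safe Harbor or expert determination) covers many more identifiers than this shows.

```python
import hashlib

# Illustrative sketch: replace direct identifiers with salted hashes before
# any data-sharing step, so records can still be linked across datasets
# without exposing who the patient is. Field names are hypothetical, and a
# real pipeline would handle far more identifier types than these three.
SALT = b"rotate-me-per-dataset"

def deidentify(record: dict, identifier_fields=("name", "mrn", "dob")) -> dict:
    out = dict(record)
    for field in identifier_fields:
        if field in out:
            digest = hashlib.sha256(SALT + str(out[field]).encode()).hexdigest()
            out[field] = digest[:16]  # stable token in place of the identifier
    return out

patient = {"name": "Jane Doe", "mrn": "000123", "dob": "1980-01-01", "lab": "A1C 7.2"}
safe = deidentify(patient)
print(safe["lab"])  # clinical content survives; identifiers do not
```

The salted-hash approach keeps tokens consistent within a dataset (so records still join) while a rotated salt prevents linkage across unrelated releases.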

Beyond the many regulations that already apply to the healthcare industry, innovators must be sensitive to the consensus forming around the definition of “responsible AI use.” Too many rules around which technologies to pursue, and how, could potentially slow innovation. Too few rules can yield ethical nightmares. 

Stakeholders on both the tech industry and healthcare industry sides will offer different perspectives on how to balance risks and benefits. Each can contribute a valuable perspective on how to reduce bias within the populations they serve, being careful to listen to concerns from any populations not represented in high-level discussions.

The most pervasive pain point being targeted by AI innovators

Rampant clinician burnout has persisted as an issue within hospitals and health systems for years. In 2024, a national survey revealed the physician burnout rate dipped below 50 percent for the first time since the COVID-19 pandemic. The American Medical Association’s “Joy of Medicine” program, now in its sixth year, is one of many efforts to combat the reasons for physician burnout — lack of work/life balance, the burden of bureaucratic tasks, etc. — by providing guidelines for health system leaders interested in implementing programs and policies that actively support well-being.

To that end, ambient-listening AI tools in the office are helping save time by transforming conversations between the provider and patient into clinical notes that can be added to electronic health records. Previously, manual note-taking had to be done either during the appointment, reducing the quality of face-to-face time between provider and patient, or after appointments during a physician’s “free time,” when the information gleaned from the patient was no longer front of mind.

Other AI tools can help combat the second-order effects of burnout. Even with the critical information needed to recommend a diagnostic test available in the patient’s electronic health record (EHR), a doctor still might not think to order it. AI tools can scan an EHR — prior visit information, lab results — to analyze potentially large volumes of information and make recommendations based on the available data. In this way the AI reader acts like a second pair of eyes, interpreting a lab result, or a year’s worth of lab results, for something the physician might have missed.
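The "second pair of eyes" idea above can be sketched as a simple scan over prior results. This is a minimal illustration, not any vendor's algorithm: the test names, reference ranges, and EHR record shape are all assumed for the example.

```python
# A minimal "second pair of eyes" sketch: scan lab results pulled from an
# EHR export and flag any value outside its reference range so a clinician
# can review it. Test names and ranges here are hypothetical examples.
REFERENCE_RANGES = {"a1c": (4.0, 5.6), "ldl": (0.0, 100.0)}

def flag_out_of_range(results):
    """results: list of (test_name, value) tuples from prior visits."""
    flags = []
    for test, value in results:
        low, high = REFERENCE_RANGES.get(test, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            flags.append((test, value))
    return flags

# A year's worth of results: the rising A1C and LDL values get surfaced.
history = [("a1c", 5.4), ("a1c", 6.1), ("ldl", 95.0), ("ldl", 132.0)]
print(flag_out_of_range(history))
# → [('a1c', 6.1), ('ldl', 132.0)]
```

A production system would model trends rather than fixed thresholds, but the role is the same: surface what a busy physician might have missed, leaving the decision to the human.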

AI applied to administrative tasks outside the clinical setting can save burned-out healthcare workers (namely, revenue cycle managers) time and bandwidth as well.

Private-sector vs. public-sector transparency 

How can we trust whether an institution is disclosing how it uses AI when the federal government doesn’t require it to? This is where organizations like CHAI (the Coalition for Health AI) come in. Its membership is composed of a variety of healthcare industry stakeholders who are promoting transparency and open-source documentation of actual AI use-cases in healthcare settings.

Healthcare is not the only industry facing the question of how to foster public trust in how it uses AI. In general, the key question is whether there’s a human in the loop when an AI-influenced process affects a human. It ought to be easy for consumers to interrogate that to their own satisfaction. For its part, CHAI has developed an “applied model card” — like a fact sheet that acts as a nutrition label for an AI model. Making these facts more readily available can only further the goal of fostering both clinician and patient trust.
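A nutrition-label-style fact sheet of the kind described above is, at heart, a small structured record. The sketch below is a hypothetical shape only; the field names are illustrative and do not reflect CHAI's actual applied model card schema.

```python
from dataclasses import dataclass, field, asdict

# Hypothetical "applied model card" shape, in the nutrition-label spirit
# described above. Field names are invented for illustration; CHAI's real
# schema differs.
@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_data_summary: str
    known_limitations: list = field(default_factory=list)
    human_in_the_loop: bool = True

card = ModelCard(
    name="readmission-risk-v2",
    intended_use="Flag patients for follow-up outreach; not for coverage denials",
    training_data_summary="De-identified claims, 2019-2023, single health system",
    known_limitations=["Under-represents rural populations"],
)
print(asdict(card)["human_in_the_loop"])  # → True
```

Publishing even this minimal record alongside a deployed model lets clinicians and patients answer the key question the text raises: is a human in the loop, and what was the model built to do?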

Individual states have their own AI regulations. Most exist to curb profiling, the use of the technology to sort people into categories to make it easier to sell them products or services or to make hiring, insurance coverage and other business decisions about them. In December, California passed a law that prohibits insurance companies from using AI to deny healthcare coverage. It effectively requires a human in the loop (“a licensed physician or qualified health care provider with expertise in the specific clinical issues at hand”) whenever denial decisions are made.

When vendors and health systems make their AI use transparent, following evolving recommendations on how transparency is defined and communicated, and promote how data is protected to end users and patients alike, hospitals and health systems have nothing to lose and plenty to gain.


About Dr. Heather Bassett 

Dr. Heather Bassett is the Chief Medical Officer with Xsolis, the AI-driven health technology company with a human-centered approach. With more than 20 years’ experience in healthcare, Dr. Bassett provides oversight of Xsolis’ data science team, denials management team and its physician advisor program. She is board-certified in internal medicine.


