What every CEO can learn from tech’s approach to AI disruption


  • Tech companies fully integrate AI across their operations, investing substantially rather than superficially experimenting.
  • Clear communication from leadership about AI’s role in the company’s future and its impact on employees is crucial for success.

The tech industry hasn’t always gotten AI right, but it has learned lessons the rest of the business world would do well to adopt.

For companies navigating the disruption of artificial intelligence, experts say the way tech firms approach innovation, risk, and organizational change holds key insights.

The most successful tech companies don’t just dabble in AI, but integrate it throughout operations, said Tom Davenport, professor of IT and management at Babson College in Wellesley, Massachusetts. That’s a theme he explored in “All-in On AI: How Smart Companies Win Big with Artificial Intelligence.”

“Companies like Google and Microsoft have been very aggressive in their use of AI,” he said. “To get value from it, you don’t want to just tinker around. You have to embed it into various parts of your business and spend a substantial amount of money on it.”

CEOs need to articulate how AI fits into their company’s future, said Amy Webb, CEO of the Future Today Strategy Group, a New York consulting firm specializing in strategic foresight. That means going beyond how it will save money, into how it will transform what the company does – and what that means for employees.

End-to-end ownership of AI deployment is critical.

Tech companies often assign product managers to manage the entire lifecycle of AI initiatives, Davenport said, recommending that other companies follow suit. “Have somebody overseeing the entire process from conception through to implementation and ongoing monitoring.”

Tech companies have a willingness to experiment on their side.

Cultural alignment, Webb noted, is just as important as technological readiness. Transparency, effective communication, and upskilling employees are crucial to alleviating fear and fostering buy-in.

And even the best tools can fail without endorsement at the top.

Webb recounted a real-life example: one skeptical executive on a leadership team created so much downstream confusion and fear that employees sabotaged AI efforts, fearful that their jobs were at risk.

That shows why CEOs must clearly communicate their vision for AI, Webb said.

AI isn’t just a technical challenge – it’s a human challenge. Tech has an edge because its workforce tends to be, well, tech-savvy.

“A lot of tech companies today have a lot of (workers) who are more receptive to using technology … so they’re experimenting with AI on their own,” said Thomas Malone, Patrick J. McGovern professor of management at the MIT Sloan School of Management and founding director of the MIT Center for Collective Intelligence.

“If you’re in an old-line company and nobody knows how to spell AI, then it’s harder to get them to use it,” he joked.

Non-tech companies can work to hone their workplace culture by hiring people at multiple levels with an understanding of AI, said Malone, author of “Superminds: The Surprising Power of People and Computers Thinking Together.”

That understanding can also be built from within, since using ChatGPT and other generative AI tools doesn’t require programming skills.

“Lots of people can try lots of things relatively inexpensively that way, and many of them will probably turn out to be useful on a small scale,” Malone said. “Some of them are likely to turn out to be useful on a very large scale.”

Another tip from the tech world: Protect your AI innovation teams from bureaucracy.

Sometimes you want structural protection for AI innovation, often with senior leadership backing, Malone said. Large business units may be too preoccupied with the present to focus on the future.

Even if these teams are small, placing them high in the org chart gives them the visibility and support they need to explore bold new ideas.

Cultural barriers are real – especially fear of job loss or concerns about proprietary data leaking into public AI systems. But Davenport said there are straightforward solutions, like private cloud deployments, open-source models that companies can fully control, or having vendors commit not to reuse your data or prompts.

Experts warn against becoming enchanted with the latest flashy tool.

“The technology changes all the time,” Davenport said. “Whatever you buy will soon be obsolete.”

That means leaders should focus on building core capabilities – data, people, strategy – that can adapt.

To navigate change successfully, leaders need both an experimental and a production mindset, Davenport said: try new things, but also commit to putting successful pilots into widespread use.

Ultimately, the AI era is marked by uncertainty – regulatory, ethical and even technological. But experts agree: that’s no excuse for inaction.

“This is not the time to throw up our hands and say, ‘The future will be what it is and we can’t influence it in any way,’” Webb said. “Now is the time to do the long-term planning, to challenge our cherished beliefs and to make better decisions.”




Physicians Lose Cancer Detection Skills After Using Artificial Intelligence



Artificial intelligence shows great promise in helping physicians improve their diagnostic accuracy for important patient conditions. In the realm of gastroenterology, AI has been shown to help physicians better detect small polyps (adenomas) during colonoscopy. Although adenomas are not yet cancerous, they are at risk of turning into cancer. Thus, early detection and removal of adenomas during routine colonoscopy can reduce a patient’s risk of developing colon cancer in the future.

But as physicians become more accustomed to AI assistance, what happens when they no longer have access to AI support? A recent European study has shown that physicians’ skills in detecting adenomas can deteriorate significantly after they become reliant on AI.

The European researchers tracked the results of more than 1,400 colonoscopies performed at four medical centers. They measured the adenoma detection rate (ADR) for physicians working without AI versus those who used AI to help them detect adenomas during the procedure. They also tracked the ADR of physicians who had used AI regularly for three months and then resumed performing colonoscopies without AI assistance.

The researchers found that the ADR before AI assistance was 28% and with AI assistance was 28.4%. (This was a slight increase, but not statistically significant.) However, when physicians accustomed to AI assistance ceased using AI, their ADR fell significantly to 22.4%. Assuming the patients in the various study groups were medically similar, that suggests that physicians accustomed to AI support might miss over a fifth of adenomas without computer assistance!
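
For readers who want to check the arithmetic behind “over a fifth,” here is a minimal back-of-the-envelope sketch using the rates quoted above; it is an illustration of the claim, not part of the study’s own analysis.

```python
# Detection rates as quoted in the article (not the study's raw data).
adr_with_ai = 0.284            # ADR while physicians had AI assistance
adr_after_withdrawal = 0.224   # ADR after AI assistance was withdrawn

# Relative drop: how much lower the post-AI rate is than the with-AI rate.
relative_drop = (adr_with_ai - adr_after_withdrawal) / adr_with_ai
print(f"Relative drop in detection: {relative_drop:.1%}")  # roughly 21%, i.e. over a fifth
```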

This is the first published example of so-called medical “deskilling” caused by routine use of AI. The study authors summarized their findings as follows: “We assume that continuous exposure to decision support systems such as AI might lead to the natural human tendency to over-rely on their recommendations, leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance.”

Consider the following non-medical analogy: Suppose self-driving car technology advanced to the point that cars could safely decide when to accelerate, brake, turn, change lanes, and avoid sudden unexpected obstacles. If you relied on self-driving technology for several months, then suddenly had to drive without AI assistance, would you lose some of your driving skills?

Although this particular study took place in the field of gastroenterology, I would not be surprised if we eventually learn of similar AI-related deskilling in other branches of medicine, such as radiology. At present, radiologists do not routinely use AI while reading mammograms to detect early breast cancers. But once AI is approved for routine use, I can imagine that human radiologists could suffer a similar performance loss if they were suddenly required to work without AI support.

I anticipate more studies will be performed to investigate the issue of deskilling across multiple medical specialties. Physicians, policymakers, and the general public will want to ask the following questions:

1) As AI becomes more routinely adopted, how are we tracking patient outcomes (and physician error rates) before AI, after routine AI use, and whenever AI is discontinued?

2) How long does the deskilling effect last? What methods can help physicians minimize deskilling, and/or recover lost skills most quickly?

3) Can AI be implemented in medical practice in a way that augments physician capabilities without deskilling?

Deskilling is not always bad. My 6th grade schoolteacher kept telling us that we needed to learn long division because we wouldn’t always have a calculator with us. But because of the ubiquity of smartphones and spreadsheets, I haven’t done long division with pencil and paper in decades!

I do not see AI completely replacing human physicians, at least not for several years. Thus, it will be incumbent on the technology and medical communities to discover and develop best practices that optimize patient outcomes without endangering patients through deskilling. This will be one of the many interesting and important challenges facing physicians in the era of AI.




AI exposes 1,000+ fake science journals



A team of computer scientists led by the University of Colorado Boulder has developed a new artificial intelligence platform that automatically seeks out “questionable” scientific journals.

The study, published Aug. 27 in the journal “Science Advances,” tackles an alarming trend in the world of research.

Daniel Acuña, lead author of the study and associate professor in the Department of Computer Science, gets a reminder of that several times a week in his email inbox: These spam messages come from people who purport to be editors at scientific journals, usually ones Acuña has never heard of, and offer to publish his papers — for a hefty fee.

Such publications are sometimes referred to as “predatory” journals. They target scientists, convincing them to pay hundreds or even thousands of dollars to publish their research without proper vetting.

“There has been a growing effort among scientists and organizations to vet these journals,” Acuña said. “But it’s like whack-a-mole. You catch one, and then another appears, usually from the same company. They just create a new website and come up with a new name.”

His group’s new AI tool automatically screens scientific journals, evaluating their websites and other online data for certain criteria: Do the journals have an editorial board featuring established researchers? Do their websites contain a lot of grammatical errors?

Acuña emphasizes that the tool isn’t perfect. Ultimately, he thinks human experts, not machines, should make the final call on whether a journal is reputable.

But in an era when prominent figures are questioning the legitimacy of science, stopping the spread of questionable publications has become more important than ever before, he said.

“In science, you don’t start from scratch. You build on top of the research of others,” Acuña said. “So if the foundation of that tower crumbles, then the entire thing collapses.”

The shakedown

When scientists submit a new study to a reputable publication, that study usually undergoes a practice called peer review. Outside experts read the study and evaluate it for quality — or, at least, that’s the goal.

A growing number of companies have sought to circumvent that process to turn a profit. In 2009, Jeffrey Beall, a librarian at CU Denver, coined the phrase “predatory” journals to describe these publications.

Often, they target researchers outside of the United States and Europe, such as in China, India and Iran — countries where scientific institutions may be young, and the pressure and incentives for researchers to publish are high.

“They will say, ‘If you pay $500 or $1,000, we will review your paper,'” Acuña said. “In reality, they don’t provide any service. They just take the PDF and post it on their website.”

A few different groups have sought to curb the practice. Among them is a nonprofit organization called the Directory of Open Access Journals (DOAJ). Since 2003, volunteers at the DOAJ have flagged thousands of journals as suspicious based on six criteria. (Reputable publications, for example, tend to include a detailed description of their peer review policies on their websites.)

But keeping pace with the spread of those publications has been daunting for humans.

To speed up the process, Acuña and his colleagues turned to AI. The team trained its system using the DOAJ’s data, then asked the AI to sift through a list of nearly 15,200 open-access journals on the internet.

Among those journals, the AI initially flagged more than 1,400 as potentially problematic.

Acuña and his colleagues asked human experts to review a subset of the suspicious journals. The AI made mistakes, according to the humans, flagging an estimated 350 publications as questionable when they were likely legitimate. That still left more than 1,000 journals that the researchers identified as questionable.
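
To put those figures in rough perspective, here is a quick sketch of the implied hit rate among flagged journals, using the article’s approximate counts rather than the study’s exact numbers.

```python
# Approximate counts quoted above (the study's exact figures may differ).
flagged = 1400           # journals the AI initially flagged as potentially problematic
likely_legitimate = 350  # flagged journals the human reviewers judged likely legitimate

held_up = flagged - likely_legitimate
print(f"Journals still considered questionable: {held_up}")      # over 1,000
print(f"Share of flags that held up: {held_up / flagged:.0%}")   # roughly 75%
```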

“I think this should be used as a helper to prescreen large numbers of journals,” he said. “But human professionals should do the final analysis.”

A firewall for science

Acuña added that the researchers didn’t want their system to be a “black box” like some other AI platforms.

“With ChatGPT, for example, you often don’t understand why it’s suggesting something,” Acuña said. “We tried to make ours as interpretable as possible.”

The team discovered, for example, that questionable journals published an unusually high number of articles. Their authors also tended to list more institutional affiliations than authors in legitimate journals, and to cite their own research at unusually high rates.
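
To make the idea concrete, here is a minimal, heavily simplified sketch of an interpretable screening model built on features like the ones described above. The feature names, numbers, and model choice are hypothetical illustrations; they are not the features, data, or code from the CU Boulder system.

```python
# Hypothetical sketch of an interpretable journal-screening classifier.
# The features, numbers, and labels below are invented for illustration;
# they are not drawn from the CU Boulder study or the DOAJ data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Per-journal features (all hypothetical): articles published per year,
# mean author affiliations per paper, author self-citation rate, and
# grammatical-error rate on the journal's website.
X_train = np.array([
    [120,  1.4, 0.05, 0.01],   # established journal
    [900,  3.8, 0.40, 0.12],   # questionable journal
    [200,  1.6, 0.08, 0.02],
    [1500, 4.2, 0.55, 0.20],
])
y_train = np.array([0, 1, 0, 1])  # 1 = flagged as questionable

# A linear model keeps the decision interpretable: each coefficient shows
# how strongly a feature pushes a journal toward being flagged.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

feature_names = ["articles/year", "affiliations/paper",
                 "self-citation rate", "site error rate"]
for name, coef in zip(feature_names, model.named_steps["logisticregression"].coef_[0]):
    print(f"{name:>20}: {coef:+.2f}")

# Screening a new journal (hypothetical numbers); a human expert would
# still make the final call, as Acuña recommends.
candidate = np.array([[800, 3.5, 0.35, 0.10]])
print("Probability questionable:", round(model.predict_proba(candidate)[0, 1], 2))
```

In the real system, the inputs would presumably come from scraping journal websites and bibliometric data, with the DOAJ’s flagged list supplying the training labels, as described above.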

The new AI system isn’t publicly accessible, but the researchers hope to make it available to universities and publishing companies soon. Acuña sees the tool as one way that researchers can protect their fields from bad data — what he calls a “firewall for science.”

“As a computer scientist, I often give the example of when a new smartphone comes out,” he said. “We know the phone’s software will have flaws, and we expect bug fixes to come in the future. We should probably do the same with science.”

Co-authors on the study included Han Zhuang at the Eastern Institute of Technology in China and Lizheng Liang at Syracuse University in the United States.


