
Pre-law student survey unmasks fears of artificial intelligence taking over legal roles



“We’re no longer talking about AI just writing contracts or breaking down legalese. It is reshaping the fundamental structure of legal work. Our future lawyers are smart enough to see that coming. We want to provide them this data so they can start thinking about how to adapt their skills for a profession that will look very different by the time they enter it,” said Arush Chandna, Juris Education founder, in a statement.

Juris Education noted that law schools are already integrating legal tech, ethics, and prompt engineering into curricula. The American Bar Association’s 2024 AI and Legal Education Survey revealed that 55 percent of US law schools were teaching AI-specific classes and 83 percent enabled students to learn effective AI tool use through clinics.

Juris Education’s director of advising, Victoria Inoyo, pointed out that AI cannot replicate human communication skills.

“While AI is reshaping the legal industry, the rise of AI is less about replacement and more about evolution. It won’t replace the empathy, judgment, and personal connection that law students and lawyers bring to complex issues,” she said. “Future law students should focus on building strong communication and interpersonal skills that set them apart in a tech-enhanced legal landscape. These are qualities AI cannot replace.”

Juris Education’s survey drew responses from 220 pre-law students. Maintaining work-life balance was the top career concern, cited by 21.8 percent of respondents, while rising student debt paired with low job security ranked third, named by 17.3 percent as their biggest career fear.





The best AI grant writers for charities



Much has been made of the ability of artificial intelligence (AI) to make short work of writing large swathes of text. It can provide lengthy answers to questions in mere moments, generating copy with little time or effort. For the time-poor charity sector, it is perhaps not surprising that such technology has been touted as game-changing, not least in the area of grant writing.

Generally speaking, there are a few key reasons why grant writing is a prime target for AI. Models typically produce a decent first draft, complete with basic research, structure, and analysis, which saves real time. They can also help with literature reviews and citations, and some even identify potential grant opportunities.
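For charities curious about what these tools do under the hood, the common core is a structured prompt sent to a large language model. Below is a minimal sketch in Python, using OpenAI’s chat API purely as an illustration; the model name, prompt wording, and charity details are assumptions, not the workings of any platform reviewed here.

```python
# Minimal sketch of the "first draft" pattern that grant-writing tools automate.
# Assumes the openai package is installed and OPENAI_API_KEY is set in the
# environment; the model name and prompt structure are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

# Hypothetical charity and grant details, supplied by the applicant.
charity = {
    "name": "Example Community Kitchen",
    "mission": "reducing food poverty in Leeds",
    "funder": "a local community foundation",
    "request": "£10,000 for a year of weekend meal services",
}

prompt = (
    "You are an experienced UK charity grant writer. "
    f"Draft a one-page funding proposal for {charity['name']}, "
    f"whose mission is {charity['mission']}. The application is to "
    f"{charity['funder']} for {charity['request']}. Include a needs "
    "statement, planned activities, and expected outcomes."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model would do
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)  # a first draft, not a final one
```

As every tool below emphasises in its own way, the output is a starting point: a human still needs to verify facts, figures, and tone before anything is submitted.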

To unlock these benefits, we’ll explore some of the AI grant writers available and how charities can make the most of them to boost their funds and support vital services.

 

 

AI tools for grant writing

Charity Excellence’s grant writer takes aim at levelling the playing field. Targeted at smaller charities, it uses AI to coach organisations through their applications. Users log into the dashboard and work through a questionnaire; the AI Bunny then processes the request and produces generated text based on the responses. Applicants can download the draft or email it to themselves.

Recently launched by nonprofit technology expert Kindsight, Grant Writer is a comprehensive solution for proposal drafting. The AI draws on a proprietary database of fully awarded grants, meaning charities are learning from a bank of known winning proposals. The data has been tested and vetted by professional grant writers.

Other features of the platform also save time. Authors can drop in an executive summary, needs, research, budget, and capacity statements while leaving out filler sections, so each proposal can be tailored exactly to the request.

Pro tip: Check out the free trial.  

Plinth takes a slightly different approach. The AI-driven platform helps small and large organisations manage their applications, with AI-powered features for grant management, service delivery, case management, and fundraising. Plinth says it saves time by vetting applications against Charity Commission data, then generating feedback customised to the applicant.

Plinth’s main benefits are twofold. The platform uses previous applications to pre-populate questionnaires and can build evidence from your existing work. The AI can then do some professional editing, adjusting tone and language to suit the application.

Pro tip: Consider using the entire platform to maximise benefits.  

GrantWrite AI is a dedicated platform that scans the internet for possible funding opportunities and makes recommendations, then eases the application process itself. Its editing tool sharpens the writing so the proposal is tailored to the grant criteria. The platform also acknowledges that grant writing isn’t done by a single person: the Collaborative Workflow function brings other contributors and reviewers into the process. Finally, GrantWrite AI tracks the grant process itself, monitoring progress and sharing updates.

Pro tip: The best feature here is the Snippet Library – lift your best work and phrases into new applications.  

Another specialised platform, Autogen AI works in two ways: it identifies funding opportunities, and it helps with the request-for-proposal (RFP) process.

The process Autogen AI uses is intuitive. First, the AI ‘reads’ the RFP and extracts the requirements relevant to your organisation. The platform then breaks the document into sections so users can assign responsibilities and due dates. In addition to streamlining the process, the AI can help adjust form and language to meet RFP expectations.
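The extraction step is the recognisable LLM pattern here. As a rough sketch of how requirement extraction from an RFP might work, here is an illustrative Python example; the prompt wording, JSON field names, and file name are assumptions, not Autogen AI’s actual implementation.

```python
# Sketch of RFP requirement extraction, not Autogen AI's actual code.
# The prompt wording and JSON field names are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()

with open("rfp.txt") as f:  # the RFP document as plain text
    rfp_text = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    response_format={"type": "json_object"},  # request machine-readable output
    messages=[{
        "role": "user",
        "content": (
            'Extract every requirement from this RFP as JSON with a '
            '"requirements" list; each item needs "section", "requirement", '
            'and "deadline" fields (null if absent):\n\n' + rfp_text
        ),
    }],
)

# Each extracted requirement can then be assigned an owner and a due date.
for item in json.loads(response.choices[0].message.content)["requirements"]:
    print(f"{item['section']}: {item['requirement']} (deadline: {item['deadline']})")
```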

Pro tip: This platform works best for complicated, professional RFP processes – ideally government bids and other major projects.  

For many charities, testing out new tech ahead of a larger investment is the way to go. Grant Finder Pro helps by dedicating most of its services to smaller organisations. Once registered, charities receive alerts about grants that might be suitable. When a grant is identified, Grant Finder Pro uses AI to help draft the proposal from registration information, website, grant, and project details. For an added cost, applicants can bring a human editor into the process.

Pro tip: For UK charities, Grant Finder Pro works well on a shoestring budget without any added extras.

Another pared-back service, Grant Boost requires users to share information about the charity and the grant. AI is used across three processes: it checks the charity information, parses it to produce a draft, and then edits the responses, tightening the writing into more compelling answers.

Pro tip: Use for basic applications.  





Perplexity Valuation Hits $20 Billion Following New Funding Round



Artificial intelligence (AI) search startup Perplexity AI has reportedly secured $200 million in new funding.

The new funding values the company at $20 billion, according to multiple media accounts late Wednesday (Sept 10). The company’s financing was initially reported by The Information, which cited sources familiar with the matter.

That report noted that Perplexity has raised funds approximately once every two months in the last year, with its total funding exceeding $1 billion.

Perplexity was valued at $14 billion following a funding round in March, with its valuation jumping to $18 billion after it raised another $100 million in July.

This latest funding happened in the wake of Perplexity’s bid last month to purchase Google’s Chrome browser for $34.5 billion, a move that would have allowed its Comet browser to better compete with the likes of OpenAI.

The company’s offer came after the Justice Department proposed that Google sell Chrome as a remedy in its antitrust case. A federal judge recently ruled that Google did not need to break up its search business, meaning it will keep Chrome.

The rise of AI-driven search tools like Perplexity’s, OpenAI’s ChatGPT, and Google’s AI Overviews has given rise to generative engine optimization, or GEO. As PYMNTS wrote last week, this is the emerging discipline of ensuring a brand remains visible in searches.

“Businesses now face a two-front battle: keep their place in traditional search while ensuring AI systems recognize and cite them as authoritative answers,” that report said. “Whether one calls it SEO, GEO, or simply good content, the playbook for staying visible is changing fast, and the cost of sitting out is invisibility.”

As companies watch their click-through rates decline, they have no choice but to embrace an era where AI offers up complete answers to user queries, the report added.

“AI search isn’t coming, it’s already reshaping the web,” Rich Pleeth, a former Google marketing executive who is now co-founder and CEO of Finmile, said in an interview with PYMNTS.

“Traditional SEO was about keywords and backlinks. But with AI search engines like ChatGPT and Gemini, discoverability is now about authority, clarity and context. It’s not just about ranking, it’s about being the answer.”

He added that this means online businesses must “rethink their entire content strategy: Speak like a human, show domain expertise, and design for machine readability.”





Patients turn to AI to interpret lab tests, with mixed results



People are turning to chatbots like Claude to get help interpreting their lab test results. (Smith Collection/Gado/Archive Photos/Getty Images)

When Judith Miller had routine blood work done in July, she got a phone alert the same day that her lab results were posted online. So, when her doctor messaged her the next day that overall her tests were fine, Miller wrote back to ask about the elevated carbon dioxide and something called “low anion gap” listed in the report.

While the 76-year-old Milwaukee resident waited to hear back, Miller did something patients increasingly do when they can’t reach their health care team. She put her test results into Claude and asked the AI assistant to evaluate the data.

“Claude helped give me a clear understanding of the abnormalities,” Miller said. The generative AI model didn’t report anything alarming, so she wasn’t anxious while waiting to hear back from her doctor, she said.

Patients have unprecedented access to their medical records, often through online patient portals such as MyChart, because federal law requires health organizations to immediately release electronic health information, such as notes on doctor visits and test results.

And many patients are using large language models, or LLMs, like OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini, to interpret their records. That help comes with some risk, though. Physicians and patient advocates warn that AI chatbots can produce wrong answers and that sensitive medical information might not remain private.

But does AI know what it’s talking about?

Most adults, though, are cautious about AI and health. Fifty-six percent of those who use or interact with AI are not confident that information provided by AI chatbots is accurate, according to a 2024 KFF poll. (KFF is a health information nonprofit that includes KFF Health News.)

That instinct is borne out in research.

“LLMs are theoretically very powerful and they can give great advice, but they can also give truly terrible advice depending on how they’re prompted,” said Adam Rodman, an internist at Beth Israel Deaconess Medical Center in Massachusetts and chair of a steering group on generative AI at Harvard Medical School.

Justin Honce, a neuroradiologist at UCHealth in Colorado, said it can be very difficult for patients who are not medically trained to know whether AI chatbots make mistakes.

“Ultimately, it’s just the need for caution overall with LLMs. With the latest models, these concerns are continuing to get less and less of an issue but have not been entirely resolved,” Honce said.

Rodman has seen a surge in AI use among his patients in the past six months. In one case, a patient took a screenshot of his hospital lab results on MyChart then uploaded them to ChatGPT to prepare questions ahead of his appointment. Rodman said he welcomes patients’ showing him how they use AI, and that their research creates an opportunity for discussion.

Roughly 1 in 7 adults over 50 use AI to receive health information, according to a recent poll from the University of Michigan, while 1 in 4 adults under age 30 do so, according to the KFF poll.

Using the internet to advocate for better care for oneself isn’t new. Patients have traditionally used websites such as WebMD, PubMed, or Google to search for the latest research and have sought advice from other patients on social media platforms like Facebook or Reddit. But AI chatbots’ ability to generate personalized recommendations or second opinions in seconds is novel.

What to know: Watch out for “hallucinations” and privacy issues

Liz Salmi, communications and patient initiatives director at OpenNotes, an academic lab at Beth Israel Deaconess that advocates for transparency in health care, had wondered how good AI is at interpretation, specifically for patients.

In a proof-of-concept study published this year, Salmi and colleagues analyzed the accuracy of ChatGPT, Claude, and Gemini responses to patients’ questions about a clinical note. All three AI models performed well, but how patients framed their questions mattered, Salmi said. For example, telling the AI chatbot to take on the persona of a clinician and asking it one question at a time improved the accuracy of its responses.
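That finding translates into a simple prompting pattern. Here is a hedged sketch, again using OpenAI’s Python client purely for illustration; the system-prompt wording and the example lab value are assumptions, not the study’s materials.

```python
# Sketch of the persona-plus-one-question pattern the OpenNotes study found helpful.
# The prompt wording and the example lab value are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # Persona: ask the model to answer the way a clinician would.
        {"role": "system",
         "content": "You are a physician explaining lab results to a patient in plain language."},
        # One question at a time, rather than pasting a whole report at once.
        {"role": "user",
         "content": "My anion gap is flagged as low. What does a low anion gap usually mean?"},
    ],
)
print(response.choices[0].message.content)
```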

Privacy is a concern, Salmi said, so it’s critical to remove personal information like your name or Social Security number from prompts. Data goes directly to tech companies that have developed AI models, Rodman said, adding that he is not aware of any that comply with federal privacy law or consider patient safety. Sam Altman, CEO of OpenAI, warned on a podcast last month about putting personal information into ChatGPT.
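Some of that scrubbing can be automated before a prompt ever leaves your machine. Below is a minimal sketch, assuming simple pattern matching is enough for obvious identifiers such as Social Security numbers, phone numbers, and email addresses; names slip through patterns like these, so it is no substitute for reading the prompt yourself.

```python
# Minimal sketch: redact obvious identifiers before pasting text into a chatbot.
# These patterns are illustrative assumptions and will not catch everything
# (names, for instance, slip through); always review the prompt yourself.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),              # US Social Security numbers
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),  # US phone numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),      # email addresses
]

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tags."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

note = "Patient (SSN 123-45-6789, cell 414-555-0123): CO2 slightly elevated."
print(redact(note))
# -> Patient (SSN [SSN], cell [PHONE]): CO2 slightly elevated.
```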

“Many people who are new to using large language models might not know about hallucinations,” Salmi said, referring to a response that may appear sensible but is inaccurate. For example, OpenAI’s Whisper, an AI-assisted transcription tool used in hospitals, introduced an imaginary medical treatment into a transcript, according to a report by The Associated Press.

Using generative AI demands a new type of digital health literacy that includes asking questions in a particular way, verifying responses with other AI models, talking to your health care team, and protecting your privacy online, said Salmi and Dave deBronkart, a cancer survivor and patient advocate who writes a blog devoted to patients’ use of AI.

Physicians must be cautious with AI too

Patients aren’t the only ones using AI to explain test results. Stanford Health Care has launched an AI assistant that helps its physicians draft interpretations of clinical tests and lab results to send to patients.

Colorado researchers studied the accuracy of ChatGPT-generated summaries of 30 radiology reports, along with patients’ satisfaction with them. Of the 118 valid responses from patients, 108 indicated the ChatGPT summaries clarified details about the original report.

But ChatGPT sometimes overemphasized or underemphasized findings, and a small but significant number of responses indicated patients were more confused after reading the summaries, said Honce, who participated in the preprint study.

Meanwhile, after four weeks and a couple of follow-up messages in MyChart, Miller’s doctor ordered a repeat of her blood work and an additional test that Miller had suggested. The results came back normal. Miller was relieved and said she was better informed because of her AI inquiries.

“It’s a very important tool in that regard,” Miller said. “It helps me organize my questions and do my research and level the playing field.”

KFF Health News is a national newsroom that produces in-depth journalism about health issues and is one of the core operating programs at KFF.


