

Susan B. Anthony House to offer artificial intelligence ASL interpreting for guests

The story is known to many.

“News hit, you know?” Hughes begins, recounting the history recited by many a docent over the years. “The 15 women in Rochester who had voted in the election in November of 1872 were all indicted. They had gathered together here, at this house, on that morning, election morning, and they’d walked out West Main Street.”

But while the National Susan B. Anthony Museum and House is something of a mecca for some visitors, and a regular field trip for area schoolchildren, all this time a segment of the population has been left out. Rochester is home to one of the largest — if not the largest — deaf populations per capita in the world.

That is why the museum is turning to Sign-Speak, an emerging, locally developed artificial intelligence platform providing on-demand speech-to-sign, sign-to-text, and speech-to-text interpretation.

By scanning a QR code with their phones, deaf visitors will soon be able to access the platform and take in Hughes’ words: “They had registered before using the 14th Amendment, which said everyone who is born in this country is a citizen. And the women were turning that around to say, ‘therefore, women are citizens,’ and demanding a right of citizenship,” she concludes, “which is voting.”

Adopting the new technology will help make the museum more accessible. But it also fits with the National Susan B. Anthony Museum and House’s mission to open up conversations, Hughes said, and move those conversations forward from 1872 to the present day. Ideally, with as many perspectives as possible. And this is a particularly important time to do so, she added.


“We’re in a cycle now where the same questions have come back at us again (from Anthony’s time) about humanity, about people within the borders of the United States: What civil rights do they have? Who gets to participate in systems of justice? Does the government have freedom to limit your civil rights or your actions?” Hughes said. “What’s fair and what’s just, and how do we determine that?”

Sign-Speak’s on-demand American Sign Language interpretation may expand that opportunity to more visitors who are deaf and hard of hearing.

Nikolas Kelly, the deaf owner and chief product officer of Sign-Speak, said the partnership marks the company’s first collaboration with a museum, and that it’s just one of many possible uses for the tool.

“We do have voice recognition and closed captioning, but we also have sign language recognition, where you can sign into it to reply back, and then we also have an avatar, and that’s sort of like a signing person,” Kelly said. “And lastly, we do also have VRI, video remote interpreting, and that’s if a deaf person feels like they need a human for whatever situation to interpret for them.”

Kelly said he got the idea to create the digital tool when he was a student at the National Technical Institute for the Deaf at Rochester Institute of Technology.

“I had noticed at that time that voice recognition technology had propagated, become ubiquitous, but no sign language technology existed that allowed me to express myself in my preferred language of sign language, at any time, to communicate with hearing people,” he said.

The vision is to expand accessibility and independence, particularly when last-minute interpretation is needed but not available. Throughout development so far, he said, the Deaf community has been involved.

“Anywhere where AI is used, for example, for application development for disabled communities, I think that it’s important to emphasize inclusion of members of that community, of that disabled community, in the development process,” Kelly said.

Museum staff are scheduled to be trained on Sign-Speak next Wednesday, with the rollout of the platform at the museum taking place shortly after.

Below is a transcript of the story as broadcast on the radio:

HOST: “A local museum and startup are partnering to provide on-demand artificial intelligence American Sign Language interpretation for visitors. From WXXI’s Inclusion Desk, reporter Noelle Evans caught up with a leader at the company Sign-Speak about it.”

NIKOLAS KELLY: “There’s some situations where you can’t get an interpreter right away. So as deaf people, we’re often stuck.”

NOELLE EVANS: “That’s Niko Kelly speaking through an interpreter. He is the Chief Product Officer at Sign-Speak in Rochester. The company is teaming up with the National Susan B. Anthony Museum and House to provide on-demand interpretation using an AI platform he and his team have been developing.”

KELLY: “We do have voice recognition and closed captioning, but we also have sign language recognition, where you can sign into it to reply back, and then we also have an avatar, and that’s sort of like a signing person. … And lastly, we do also have VRI, video remote interpreting, and that’s if a deaf person feels like they need a human for whatever situation to interpret for them.”

EVANS: “Kelly says he got the idea when he was a student at the National Technical Institute for the Deaf.”

KELLY: “I had noticed at that time that voice recognition technology had propagated, become ubiquitous, but no sign language technology existed that allowed me to express myself in my preferred language of sign language, at any time, to communicate with hearing people.”

EVANS: “He says this tool is a way to bridge that gap. The Susan B. Anthony House is expected to roll out Sign-Speak for guests in the coming weeks. Noelle Evans, WXXI News.”







China claims its new LLM is up to 100x faster than ChatGPT

In the mid-20th century we had the Space Race; in the mid-2020s, we’re very much in the middle of the AI race. Nobody is sitting still, with parties all around the globe pushing for the next big advancement.

Chinese scientists are now making a big claim to have made one of their own. As reported by The Independent, SpikingBrain1.0 is a new large language model (LLM) out of China, which ordinarily might not be so exciting. But this isn’t supposed to be any normal LLM. SpikingBrain1.0 is reported to be as much as 100x faster than current models such as those behind ChatGPT and Copilot.

It all comes down to the way the model operates, which is something completely new. It’s being touted as the first “brain-like” LLM, but what does that actually mean? First, a little background on how the current crop of LLMs works.

[Image: The current crop of LLMs work very differently to what’s claimed of SpikingBrain1.0. Credit: Windows Central]

Bear with me on this one; hopefully I can make it make sense as simply as possible. Essentially, the current crop of LLMs looks at all of the words in a sentence at once, searching for patterns and relationships between words, whatever their positions in the sentence.
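For the technically curious, here is a minimal sketch of that all-at-once step: scaled dot-product self-attention, the core operation in today’s transformer LLMs. This is my illustration rather than anything from the article; real models add learned projections, multiple attention heads, and many stacked layers.

import numpy as np

def self_attention(x):
    # x: (seq_len, d) array of token embeddings.
    # Every token is compared with every other token at once,
    # so the cost grows quadratically with sequence length.
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                    # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ x                               # weighted mix of all tokens

tokens = np.random.randn(5, 8)        # 5 "words", each an 8-dimensional embedding
print(self_attention(tokens).shape)   # -> (5, 8)

That quadratic everything-to-everything comparison is presumably the kind of overhead SpikingBrain1.0’s event-driven design aims to avoid, though the article does not spell out the mechanism.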





How AI is helping one lawyer get kids out of jail faster – Computerworld

In what ways has using AI helped you improve service delivery for underserved communities, particularly families who can’t afford traditional legal fees? “My practice is intentionally small, focused on defending kids charged with crimes. The cost savings from Rev help me keep my practice accessible to families who need help most. When assessing a case, I don’t have to think about the hours — and thus additional expense — it’s going to take to go through evidence, then get that evidence into a form that’s easily usable in court. Rev does that for me. 

“More importantly, the ability to make same-day bail arguments and secure my client’s release up to a week faster is a huge improvement in service delivery. For a traumatized youth already struggling with an arrest, every hour of freedom matters. Rev allows me to provide a fierce defense and potentially get kids out of detention much faster.”

How do you evaluate the accuracy and reliability of Rev’s AI transcriptions in high-stakes legal settings? “In high-stakes legal settings, I don’t just rely on the paper transcript. The transcript is my initial reference, but I use the hyperlinks to the audio for the actual words. This feature is incredibly powerful. For example, if an officer testifies in court that ‘the sky was black,’ but on their body cam footage they said ‘the sky is blue,’ I can use the hyperlinked transcript to find that exact point. I just click a button on my laptop, and it plays the audio for them. When they hear their own voice, they’re often forced to admit they were wrong. This makes me an unstoppable force for truth in the courtroom.”
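To make that workflow concrete, here is a minimal sketch of the idea behind a hyperlinked transcript: word-level timestamps let a quoted phrase be mapped back to its exact position in the audio. The data format and function names here are hypothetical illustrations, not Rev’s actual API.

from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float  # seconds into the recording where the word is spoken

# Hypothetical transcript fragment with per-word timestamps
transcript = [
    Word("the", 12.0), Word("sky", 12.2), Word("is", 12.4), Word("blue", 12.6),
]

def seek_to(phrase: str, words: list[Word]) -> float | None:
    # Return the audio offset where the phrase begins, or None if absent.
    target = phrase.lower().split()
    texts = [w.text.lower() for w in words]
    for i in range(len(texts) - len(target) + 1):
        if texts[i:i + len(target)] == target:
            return words[i].start
    return None

print(seek_to("sky is blue", transcript))  # -> 12.2, jump playback to this second

Clicking a phrase in the courtroom would then amount to seeking the audio player to the returned offset.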


