AI Research
MyPillow CEO’s lawyers fined for AI-generated court filing
A federal judge ordered two attorneys representing MyPillow CEO Mike Lindell to pay $3,000 each after they used artificial intelligence to prepare a court filing that was riddled with errors, including citations to nonexistent cases and misquotations of case law.
Christopher Kachouroff and Jennifer DeMaster violated court rules when they filed a motion containing nearly 30 defective citations, Judge Nina Y. Wang of the U.S. District Court in Denver ruled Monday.
“Notwithstanding any suggestion to the contrary, this Court derives no joy from sanctioning attorneys who appear before it,” Wang wrote in her ruling, adding that the sanction against Kachouroff and DeMaster was “the least severe sanction adequate to deter and punish defense counsel in this instance.”
The motion was filed in Lindell’s defamation case, which ended last month when a Denver jury found Lindell liable for defamation for pushing false claims that the 2020 presidential election was rigged.
The filing misquoted court precedents and highlighted legal principles that were not involved in the cases it cited, according to the ruling.
During a pretrial hearing after the errors were discovered, Kachouroff admitted to using generative artificial intelligence to write the motion.
Kachouroff initially told the judge that the motion was a draft filed by accident. But the “final” version he said was the correct one was still riddled with “substantive errors,” including some that did not appear in the version actually filed, Wang wrote.
The attorneys’ “contradictory statements and the lack of corroborating evidence” led the judge to conclude that the filing of the AI-generated motion was not “an inadvertent error” and warranted a sanction.
The judge also found Kachouroff’s accusation that the court was trying to “blindside” him over the errors “troubling and not well-taken.”
“Neither Mr. Kachouroff nor Ms. DeMaster provided the Court any explanation as to how those citations appeared in any draft of the Opposition absent the use of generative artificial intelligence or gross carelessness by counsel,” Wang wrote.
Kachouroff and DeMaster did not immediately return a request for comment Monday.
AI Research
Joint UT, Yale research develops AI tool for heart analysis – The Daily Texan
A study published June 23 by researchers at UT and Yale introduced an artificial intelligence tool capable of automatically analyzing the heart using echocardiography, or ultrasound imaging.
The tool, PanEcho, was developed and trained on nearly one million echocardiographic videos. It can perform 39 echocardiographic tasks and accurately detect conditions such as systolic dysfunction and severe aortic stenosis.
“Our teammates helped identify a total of 39 key measurements and labels that are part of a complete echocardiographic report — basically what a cardiologist would be expected to report on when they’re interpreting an exam,” said Gregory Holste, an author of the study and a doctoral candidate in the Department of Electrical and Computer Engineering. “We train the model to predict those 39 labels. Once that model is trained, you need to evaluate how it performs across those 39 tasks, and we do that through this robust multi-site validation.”
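To make the multi-task idea concrete, here is a minimal, hypothetical PyTorch sketch of a shared encoder feeding separate heads for 39 outputs; the split into 21 classification labels and 18 continuous measurements, the layer sizes, and all names are assumptions for illustration, not PanEcho’s actual architecture or code.

```python
# Illustrative only: one shared encoder, multiple task heads.
# All sizes and the 21/18 task split are assumptions, not PanEcho's.
import torch
import torch.nn as nn

class MultiTaskEchoModel(nn.Module):
    def __init__(self, feature_dim=512, n_labels=21, n_measurements=18):
        super().__init__()
        # Stand-in for a video encoder; the real tool ingests echo clips.
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(feature_dim), nn.ReLU()
        )
        # One head per task family, e.g. "severe aortic stenosis"
        # (a yes/no label) vs. "ejection fraction" (a continuous value).
        self.classify = nn.Linear(feature_dim, n_labels)
        self.regress = nn.Linear(feature_dim, n_measurements)

    def forward(self, video):
        z = self.encoder(video)
        return {
            "labels": torch.sigmoid(self.classify(z)),  # per-condition probabilities
            "measurements": self.regress(z),            # continuous estimates
        }

model = MultiTaskEchoModel()
clip = torch.randn(2, 3, 16, 112, 112)   # a batch of 2 dummy echo clips
out = model(clip)
print(out["labels"].shape, out["measurements"].shape)  # (2, 21) (2, 18)
```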
Holste said one of PanEcho’s most impressive functions is its ability to measure left ventricular ejection fraction, the proportion of blood the heart’s left ventricle pumps out, far more accurately than human experts. He added that PanEcho can analyze the heart as a whole, while humans are limited to looking at the heart from one view at a time.
“What is most unique about PanEcho is that it can do this by synthesizing information across all available views, not just curated single ones,” Holste said. “PanEcho integrates information from the entire exam — from multiple views of the heart to make a more informed, holistic decision about measurements like ejection fraction.”
PanEcho is available as open source, allowing researchers to use and experiment with the tool in future studies. Holste said the team has already received emails from people trying to “fine-tune” the application for different uses.
“We know that other researchers are working on adapting PanEcho to work on pediatric scans, and this is not something that PanEcho was trained to do out of the box,” Holste said. “But, because it has seen so much data, it can fine-tune and adapt to that domain very quickly. (There are) very exciting possibilities for future research.”
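Continuing the hypothetical sketch above, adaptation to a new domain is often done by freezing the shared encoder and retraining only a small new head; the pediatric label count below is invented for illustration and is not PanEcho’s published recipe.

```python
# Hypothetical adaptation step, reusing `model` from the sketch above:
# freeze the shared encoder, retrain only a new head for a new domain.
for p in model.encoder.parameters():
    p.requires_grad = False

model.classify = nn.Linear(512, 5)  # invented: 5 pediatric labels
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```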
AI Research
New Research Shows Language Choice Alone Can Guide AI Output Toward Eastern or Western Cultural Outlooks
A new study shows that the language used to prompt AI chatbots can steer them toward different cultural mindsets, even when the question stays the same. Researchers at MIT and Tongji University found that large language models like OpenAI’s GPT and China’s ERNIE change their tone and reasoning depending on whether they’re responding in English or Chinese.
The results indicate that these systems do more than translate language; they also reflect cultural patterns in how they provide advice, interpret logic, and handle questions related to social behavior.
Same Question, Different Outlook
The team tested both GPT and ERNIE by running identical tasks in English and Chinese. Across dozens of prompts, they found that when GPT answered in Chinese, it leaned more toward community-driven values and context-based reasoning. In English, its responses tilted toward individualism and sharper logic.
Take social orientation, for instance. In Chinese, GPT was more likely to favor group loyalty and shared goals. In English, it shifted toward personal independence and self-expression. These patterns matched well-documented cultural divides between East and West.
When it came to reasoning, the shift continued. The Chinese version of GPT gave answers that accounted for context, uncertainty, and change over time. It also offered more flexible interpretations, often responding with ranges or multiple options instead of just one answer. In contrast, the English version stuck to direct logic and clearly defined outcomes.
No Nudging Needed
What’s striking is that these shifts occurred without any cultural instructions. The researchers didn’t tell the models to act more “Western” or “Eastern.” They simply changed the input language. That alone was enough to flip the models’ behavior, almost like switching glasses and seeing the world in a new shade.
To check how strong this effect was, the researchers repeated each task more than 100 times. They tweaked prompt formats, varied the examples, and even changed gender pronouns. No matter what they adjusted, the cultural patterns held steady.
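As a rough illustration of that setup, the sketch below poses the same task to the same model in English and Chinese using the OpenAI Python client; the model name, the example question, and the prompt wording are assumptions, and the paper’s actual tasks, models, and scoring are not reproduced here.

```python
# Minimal sketch: identical task, two languages, same model.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

prompts = {
    "en": "A colleague's mistake delayed your team's project. What should you do?",
    "zh": "同事的失误耽误了团队项目的进度。你应该怎么做？",  # same question in Chinese
}

for lang, prompt in prompts.items():
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the study used GPT and ERNIE
        messages=[{"role": "user", "content": prompt}],
    )
    print(lang, reply.choices[0].message.content[:200])
```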
Real-World Impact
The study didn’t stop at lab tests. In a separate exercise, GPT was asked to choose between two ad slogans, one that stressed personal benefit, another that highlighted family values. When the prompt came in Chinese, GPT picked the group-centered slogan most of the time. In English, it leaned toward the one focused on the individual.
This might sound small, but it shows how language choice can guide the model’s output in ways that ripple into marketing, decision-making, and even education. People using AI tools in one language may get very different advice than someone asking the same question in another.
Can You Steer It?
The researchers also tested a workaround. They added cultural prompts, telling GPT to imagine itself as a person raised in a specific country. That small nudge helped the model shift its tone, even in English, suggesting that cultural context can be dialed up or down depending on how the prompt is framed.
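Continuing the sketch above, the persona workaround can be approximated with a system message; the wording below is an assumption, not the paper’s actual prompt.

```python
# Hypothetical "cultural prompt": ask the model to answer as someone
# raised in a specific country, reusing `client` and `prompts` above.
persona = (
    "Imagine you are a person who was born and raised in China. "
    "Answer as that person would."
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": prompts["en"]},  # same English task
    ],
)
print(reply.choices[0].message.content[:200])
```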
Why It Matters
The findings show that input language shapes the way AI models present information: differences in response patterns suggest that the language of a prompt influences how content is structured and interpreted. As AI tools become more integrated into routine tasks and decision-making processes, language-based variations in output may influence user choices over time.
AI Research
Indonesian volcano Mount Lewotobi Laki-laki spews massive ash cloud as it erupts again
Indonesia’s Mount Lewotobi Laki-laki has begun erupting again – at one point shooting an ash cloud 18km (11mi) into the sky – as residents flee their homes once more.
There have been no reports of casualties since Monday morning, when the volcano on the island of Flores began spewing ash and lava again. It has been on the highest alert level since an earlier round of eruptions three weeks ago.
At least 24 flights to and from the neighbouring resort island of Bali were cancelled on Monday, though some flights had resumed by Tuesday morning.
The initial column of hot clouds that rose at 11:05 (03:05 GMT) Monday was the volcano’s highest since November, said geology agency chief Muhammad Wafid.
“An eruption of that size certainly carries a higher potential for danger, including its impact on aviation,” Wafid told The Associated Press.
Monday’s eruption, which was accompanied by a thunderous roar, led authorities to enlarge the exclusion zone to a 7km radius from the central vent. They also warned of potential lahar floods – a type of mud or debris flow of volcanic materials – if heavy rain occurs.
The twin-peaked volcano erupted again at 19:30 on Monday, sending ash clouds and lava up to 13km into the air. It erupted a third time at 05:53 on Tuesday at a reduced intensity.
Videos shared overnight show glowing red lava spurting from the volcano’s peaks as residents get into cars and buses to flee.
More than 4,000 people have been evacuated from the area so far, according to the local disaster management agency.
Residents who have stayed put are facing a shortage of water, food and masks, local authorities say.
“As the eruption continues, with several secondary explosions and ash clouds drifting westward and northward, the affected communities who have not been relocated… require focused emergency response efforts,” said Paulus Sony Sang Tukan, who leads Pululera village, about 8km from Lewotobi Laki-laki.
“Water is still available, but there’s concern about its cleanliness and whether it has been contaminated, since our entire area was blanketed in thick volcanic ash during yesterday’s [eruptions],” he said.
Indonesia sits on the Pacific “Ring of Fire” where tectonic plates collide, causing frequent volcanic activity as well as earthquakes.
Lewotobi Laki-laki has erupted multiple times this year, with no casualties reported so far.
However, an eruption last November killed at least ten people and forced thousands to flee.
Laki-Laki, which means “man” in Indonesian, is twinned with the calmer but taller 1,703m Perempuan, whose name is the Indonesian word for “woman”.
Additional reporting by Eliazar Ballo in Kupang.