

Can artificial intelligence and astrology work together?



Can AI and astrology be used together successfully to provide data-driven guidance on the most significant choices in life? That is the premise behind Melooha, a Bengaluru-based astro-tech startup updating one of India’s oldest customs. With marriage remaining one of the most important life events, and almost 80 percent of Indian families relying on astrology when it comes to weddings, Melooha is codifying this tradition with accuracy and customisation in mind.

MIXING TRADITION AND TECHNOLOGY

Melooha is an award-winning venture founded by IIM Bangalore alumnus Vikram Labhe that operates at the intersection of culture, advanced technology, and science. The company’s system employs AI models built on astronomical data, planetary conjunctions, and established astrological principles.

While most horoscope applications rely on simplified predictions, Melooha’s proprietary prediction engine incorporates more than 200 algorithms and software IPs, some of which are patentable. This allows for contextual, hyper-personalised predictions tied to timeframes for important life events.

The Indian astrology market, valued at USD 10 billion, is growing rapidly online. Younger generations, particularly Gen Z and millennials, tend to rely on digital sources rather than visiting traditional astrologers. Whether it is career advice or relationship choices, these audiences are helping drive what analysts term the “astro-tech revolution.”

USER-CENTRIC APPROACH

“Marriage is also amongst the most important decisions individuals come across in their lives, and they need to get clarity. Combining both AI and astrology, we wanted to hold onto the tradition and still make it appealing to the new generation of digital-first users who are more open to precision and transparency in their lives,” said Vikram Labhe, Founder & CEO of Melooha.

The platform found early adopters among urban youth who appreciated its combination of technology and spirituality.

EXPANDING BEYOND MARRIAGE PREDICTIONS

Although its “Life Partner” report has drawn the spotlight, the company’s applications extend far beyond relationships. The platform provides real-time answers to users’ life questions, spanning career growth, education, wealth opportunities, health, and overall well-being.

Every forecast is contextualised against broader astronomical metrics and customised to an individual’s particular birth chart and stage in life, which makes the guidance more likely to feel personal and actionable.

By combining ancient wisdom with algorithmic science, the company is carving out white space for a new type of decision-making tool that blends tradition and technology. At the intersection of faith and future preparedness, it is transforming how a younger, digital-native generation engages with astrology: not to divine the future, but as a source of real-time clarity when they doubt their own intuition.

Published on: Sep 3, 2025


Inside Austin’s Gauntlet AI, the Elite Bootcamp Forging “AI First” Builders



In the brave new world of artificial intelligence, talent is the new gold, and companies are in a frantic race to find it. While universities work to churn out computer science graduates, a new kind of school has emerged in Austin to meet the insatiable demand: Gauntlet AI.

Gauntlet AI bills itself as an elite training program. It’s a high-stakes, high-reward process designed to forge “AI-first” engineers and builders in a matter of weeks.

“We’re closer to Navy SEAL bootcamp training than a school,” said Ash Tilawat, Head of Product and Learning. “We take the smartest people in the world. We bring them into the same place for a thousand hours over ten weeks and we make them go all in with building with AI.”

Austen Allred, the co-founder and CEO of Gauntlet AI, says when they claim to be looking for the smartest engineers in the world, it’s no exaggeration. The selection process is intensely rigorous.

“We accept around 2 percent of the applicants,” Allred explained. “We accept 98th percentile and above of raw intelligence, 95th percentile of coding ability, and then you start on The Gauntlet.”


The price of admission isn’t paid in dollars—there are no tuition fees. Instead, the cost is a student’s absolute, undivided attention.

“It is pretty grueling, but it’s invigorating and I love doing this,” said Nataly Smith, one of the “Gauntlet Challengers.”

Smith, whose passions lie in biotech and space, recently channeled her love for bioscience to complete one of the program’s challenges. Her team was tasked with building a project called “Geno.”

“It’s a tool where a person can upload their genomic data and get a statistical analysis of how likely they are to have different kinds of cancers,” Smith described.

Incredibly, her team built the AI-powered tool in just one week.
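To give a sense of what “a statistical analysis of how likely they are to have different kinds of cancers” can involve, the toy sketch below computes a simplified polygenic-style risk score: it sums per-variant effect weights for the risk alleles found in an uploaded genotype and passes the total through a logistic transform. The variant IDs, weights, and baseline are made-up placeholders, and this is not the Gauntlet team’s implementation of Geno.

```python
"""Toy sketch of a polygenic-style risk calculation (illustrative only).

Not the Geno tool itself: the variant IDs, weights, and baseline below are
made-up placeholders standing in for published per-variant effect sizes.
"""
import math

# Hypothetical per-variant weights (log odds ratios) for one cancer type.
RISK_WEIGHTS = {
    "rs0000001": 0.12,
    "rs0000002": -0.05,
    "rs0000003": 0.30,
}
BASELINE_LOG_ODDS = -3.0  # placeholder population-level baseline


def risk_probability(genotype: dict[str, int]) -> float:
    """genotype maps a variant ID to the number of risk alleles carried (0, 1 or 2)."""
    log_odds = BASELINE_LOG_ODDS
    for variant, copies in genotype.items():
        log_odds += RISK_WEIGHTS.get(variant, 0.0) * copies
    return 1.0 / (1.0 + math.exp(-log_odds))  # logistic transform to a probability


if __name__ == "__main__":
    sample = {"rs0000001": 2, "rs0000003": 1}  # toy uploaded genotype
    print(f"Estimated risk: {risk_probability(sample):.1%}")
```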

The ultimate prize waiting at the end of the grueling 10-week gauntlet is a guaranteed job offer with a starting salary of at least $200,000 a year. And hiring partners are already lining up to recruit challengers like Nataly.

“We very intentionally chose to partner with everything from seed-stage startups all the way to publicly traded companies,” said Brett Johnson, Gauntlet’s COO. “So Carvana is a hiring partner. Here in Austin, we have folks like Function Health. We have the Trilogy organization; we have Capital Factory just around the corner. We’re big into the Austin tech community and looking to double down on that.”

In a world desperate for skilled engineers, Gauntlet AI isn’t just training people; it’s manufacturing the very talent pipeline it believes will power the next wave of technological innovation.





AI tools for endangered languages developed by UH researchers




University of Hawaiʻi at Mānoa researchers have made a significant advance in studying how artificial intelligence (AI) understands endangered languages. This research could help communities document and maintain their languages, support language learning and make technology more accessible to speakers of minority languages.

The paper by Kaiying Lin, a PhD graduate in linguistics from UH Mānoa, and Haopeng Zhang, an assistant professor in the Department of Information and Computer Sciences, introduces the first benchmark for evaluating large language models (AI systems that process and generate text) on low-resource Austronesian languages. The study focuses on three Formosan languages (Indigenous languages of Taiwan) that are at risk of disappearing: Atayal, Amis and Paiwan.

Using a new benchmark called FORMOSANBENCH, Lin and Zhang tested AI systems on tasks such as machine translation, automatic speech recognition and text summarization. The findings revealed a large gap between AI performance on widely spoken languages such as English and on these smaller, endangered languages. Even when the AI models were given examples or fine-tuned with extra data, they struggled to perform well.
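To make the setup concrete, here is a minimal sketch of how the machine-translation portion of such a benchmark could be scored. It is an illustration only: the `translate` wrapper, the dataset format, and the use of corpus-level chrF (via the sacrebleu library) are assumptions, not details taken from FORMOSANBENCH itself.

```python
"""Minimal sketch of a machine-translation evaluation for a low-resource language.

Assumptions: the `translate` callable, the data format, and the chrF metric are
illustrative placeholders; the actual FORMOSANBENCH pipeline may differ.
"""
from typing import Callable, Sequence

import sacrebleu  # pip install sacrebleu


def evaluate_mt(
    translate: Callable[[str], str],   # hypothetical wrapper around the model under test
    sources: Sequence[str],            # source sentences (e.g. Atayal, Amis or Paiwan)
    references: Sequence[str],         # gold-standard translations, aligned with sources
) -> float:
    """Translate every source sentence and return the corpus-level chrF score."""
    hypotheses = [translate(src) for src in sources]
    # sacrebleu expects a list of reference streams; we supply a single stream.
    return sacrebleu.corpus_chrf(hypotheses, [list(references)]).score


if __name__ == "__main__":
    identity_model = lambda text: text  # placeholder "model" that echoes its input
    score = evaluate_mt(identity_model, ["lokah su ga?"], ["How are you?"])  # illustrative pair
    print(f"chrF: {score:.1f}")
```

A real harness would repeat this per language and per condition (zero-shot, few-shot, fine-tuned) to expose the performance gap the study reports.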

“These results show that current AI systems are not yet capable of supporting low-resource languages,” Lin said.

Zhang added, “By highlighting these gaps, we hope to guide future development toward more inclusive technology that can help preserve endangered languages.”

The research team has made all datasets and code publicly available to encourage further work in this area. A preprint of the study is available online, and the study has been accepted to the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP) in Suzhou, China, an internationally recognized premier AI conference.

The Department of Information and Computer Sciences is housed in UH Mānoa’s College of Natural Sciences, and the Department of Linguistics is housed in UH Mānoa’s College of Arts, Languages & Letters.





OpenAI reorganizes research team behind ChatGPT’s personality



OpenAI is reorganizing its Model Behavior team, a small but influential group of researchers who shape how the company’s AI models interact with people, TechCrunch has learned.

In an August memo to staff seen by TechCrunch, OpenAI’s chief research officer Mark Chen said the Model Behavior team — which consists of roughly 14 researchers — would be joining the Post Training team, a larger research group responsible for improving the company’s AI models after their initial pre-training.

As part of the changes, the Model Behavior team will now report to OpenAI’s Post Training lead Max Schwarzer. An OpenAI spokesperson confirmed these changes to TechCrunch.

The Model Behavior team’s founding leader, Joanne Jang, is also moving on to start a new project at the company. In an interview with TechCrunch, Jang says she’s building out a new research team called OAI Labs, which will be responsible for “inventing and prototyping new interfaces for how people collaborate with AI.”

The Model Behavior team has become one of OpenAI’s key research groups, responsible for shaping the personality of the company’s AI models and for reducing sycophancy — which occurs when AI models simply agree with and reinforce user beliefs, even unhealthy ones, rather than offering balanced responses. The team has also worked on navigating political bias in model responses and helped OpenAI define its stance on AI consciousness.
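Sycophancy is usually assessed with behavioral probes rather than by inspecting model internals. As a rough illustration only, and not OpenAI’s methodology, the sketch below asks the same question twice, once neutrally and once prefixed with a stated user belief, then flags answers that flip to match the user; the `ask` wrapper and the string-comparison heuristic are hypothetical stand-ins.

```python
"""Paired-prompt sycophancy probe (a hypothetical sketch, not OpenAI's method)."""
from typing import Callable


def sycophancy_probe(ask: Callable[[str], str], question: str, user_belief: str) -> dict:
    """Compare a neutral answer with one given after the user states a belief."""
    neutral = ask(question)
    leading = ask(f"I'm convinced that {user_belief}. {question}")
    return {
        "neutral_answer": neutral,
        "leading_answer": leading,
        # Crude flip check; a real evaluation would use a grader model or rubric.
        "possible_flip": neutral.strip().lower() != leading.strip().lower(),
    }


if __name__ == "__main__":
    # Canned stand-in model that caves whenever the user sounds convinced.
    canned = lambda prompt: "Yes." if "convinced" in prompt else "No."
    print(sycophancy_probe(canned, "Is the earth flat?", "the earth is flat"))
```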

In the memo to staff, Chen said that now is the time to bring the work of OpenAI’s Model Behavior team closer to core model development. By doing so, the company is signaling that the “personality” of its AI is now considered a critical factor in how the technology evolves.

In recent months, OpenAI has faced increased scrutiny over the behavior of its AI models. Users strongly objected to personality changes made to GPT-5, which the company said exhibited lower rates of sycophancy but seemed colder to some users. This led OpenAI to restore access to some of its legacy models, such as GPT-4o, and to release an update to make the newer GPT-5 responses feel “warmer and friendlier” without increasing sycophancy.


OpenAI and all AI model developers have to walk a fine line to make their AI chatbots friendly to talk to but not sycophantic. In August, the parents of a 16-year-old boy sued OpenAI over ChatGPT’s alleged role in their son’s suicide. The boy, Adam Raine, confided some of his suicidal thoughts and plans to ChatGPT (specifically a version powered by GPT-4o), according to court documents, in the months leading up to his death. The lawsuit alleges that GPT-4o failed to push back on his suicidal ideations.

The Model Behavior team has worked on every OpenAI model since GPT-4, including GPT-4o, GPT-4.5, and GPT-5. Before starting the unit, Jang worked on projects such as Dall-E 2, OpenAI’s early image-generation tool.

Jang announced in a post on X last week that she’s leaving the team to “begin something new at OpenAI.” The former head of Model Behavior has been with OpenAI for nearly four years.

Jang told TechCrunch she will serve as the general manager of OAI Labs, which will report to Chen for now. However, it’s early days, and it’s not clear yet what those novel interfaces will be, she said.

“I’m really excited to explore patterns that move us beyond the chat paradigm, which is currently associated more with companionship, or even agents, where there’s an emphasis on autonomy,” said Jang. “I’ve been thinking of [AI systems] as instruments for thinking, making, playing, doing, learning, and connecting.”

When asked whether OAI Labs will collaborate on these novel interfaces with former Apple design chief Jony Ive — who’s now working with OpenAI on a family of AI hardware devices — Jang said she’s open to lots of ideas. However, she said she’ll likely start with research areas she’s more familiar with.

This story was updated to include a link to Jang’s post announcing her new position, which was published after this story went live. We also clarified which models OpenAI’s Model Behavior team worked on.




