
AI Insights

Data is destiny! | Opinion


In our era of accelerating digitalization, humanity is confronted with a new paradigm shaped by the transition from analog to digital. This transformation has not been limited to technological infrastructures alone; it has reshaped every aspect of life, from the functioning of social structures to individual life practices, from modes of cultural production to political decision-making processes. In this emerging order, where mathematical models, algorithms, and big data analytics increasingly assume a central role, the very nature of truth is undergoing a transformation. Reality is now understood and defined less through direct experience and more through numerical indicators and data-driven representations. Elements that cannot be measured, digitized, or represented through algorithms are rapidly becoming invisible, gradually being pushed into a position of diminished value.

This process has not only been a revolution that facilitates individuals’ access to information but has also created a complex system that deepens social inequalities. The processes of producing, processing, and interpreting numerical data, although often presented under the guise of technological neutrality, have in fact become one of the primary arenas where power relations are most intensely reproduced. The ownership of data, the capacity to access it, and the ability to process and interpret it are now directly tied to economic, cultural, and political power. The gap between those who can benefit from the opportunities offered by digitalization and those who remain excluded from this process is steadily widening, giving rise to a new form of social hierarchy.

At this point, Ibn Khaldun’s famous assertion that “geography is destiny” offers a striking analogy for understanding the new dynamics of the digital age. Ibn Khaldun emphasized that the lifestyles of individuals and societies, the institutions they build, and the economic systems they establish are shaped by their geographical environment. Today, however, we must reinterpret this proposition: in the digital era, “data is destiny.” Individuals, institutions, and societies are now defined by the data produced about them, while the scores, rankings, and predictions generated by algorithms increasingly shape the direction of the future. In this context, a kind of “data geography” emerges, functioning as a new map that determines the destiny of individuals and societies. Which data can be accessed, which algorithms produce which outputs, and where individuals or groups are positioned within various rankings have become increasingly decisive factors.

In this new order, artificial intelligence technologies play a dominant role. Evolving at an exponential pace, AI is not merely a technical innovation but a transformative force that fundamentally reshapes social, cultural, and economic structures. From decision-making processes to education policies, from labor markets to the public sphere, numerous domains are being reconstructed based on new parameters defined by AI. However, this transformation also brings with it a growing tension between human autonomy and technological autonomy. Decision-making mechanisms driven by algorithms have become the primary tools shaping individual choices, giving rise to a new form of dependency that quietly undermines the concept of free will. People are increasingly being compelled – often unconsciously – to make decisions within the frameworks set by algorithms.

The most dangerous dimension of this dependency emerges in the impact of AI on human cognitive capacity. Digital systems facilitate access to information, providing speed and efficiency; however, they simultaneously weaken individuals’ ability to generate knowledge and construct meaning through their own mental processes. Information is increasingly consumed in prepackaged and filtered forms rather than being processed through intellectual effort. This leads to the erosion of critical thinking skills, a decline in intellectual depth, and a weakening of individuals’ capacity for independent decision-making. On a societal scale, this cognitive transformation contributes to the superficiality of public debates, a reduction in dialogue between different perspectives, and the erosion of a shared sense of reality.

The pervasive integration of mathematical models into every aspect of social life facilitates the invisible reproduction of inequalities. While these models aim to predict the future based on historical data, they simultaneously carry existing biases and disadvantages into the future, reinforcing them within datasets. For example, in education, the allocation of fewer resources to lower-ranked schools exacerbates their disadvantages, while increased police presence in areas with higher recorded crime rates leads to more incidents being documented, creating self-reinforcing cycles. Similarly, recruitment algorithms that prioritize criteria favoring historically advantaged groups concentrate future opportunities within those same groups. This mechanism effectively produces a self-fulfilling prophecy, generating a system where advantage perpetually begets advantage, while disadvantage solidifies into persistent structural inequality.
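The policing example above can be sketched as a toy simulation. This is an illustrative model only: the two districts, the incident rate, and the proportional allocation rule are assumptions invented for the sketch, not data from any real policing system. Both districts have an identical underlying rate, yet because recorded incidents depend on how many patrols are present to record them, small sampling fluctuations feed back into the next year's allocation and the split tends to drift away from an even one.

```python
import random

random.seed(0)

TRUE_RATE = 0.1              # identical underlying incident rate in both districts
patrols = {"A": 50, "B": 50} # 100 patrol units, split evenly at the start

for year in range(20):
    # Recorded incidents scale with how many patrols are there to record them.
    recorded = {d: sum(random.random() < TRUE_RATE for _ in range(n))
                for d, n in patrols.items()}
    total = sum(recorded.values())
    if total == 0:
        continue  # nothing recorded anywhere this year; keep the old allocation
    # Next year's 100 units are reallocated in proportion to recorded incidents.
    share_a = round(100 * recorded["A"] / total)
    patrols = {"A": share_a, "B": 100 - share_a}

print(patrols)
```

Note the absorbing dynamic: if a district's allocation ever falls to zero, it records nothing and can never regain patrols — the model's version of disadvantage solidifying into structure.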

Another domain transformed by digitalization is the structure of the relationship between success, performance, and reward. In traditional social systems, success was largely associated with individual effort, talent, and discipline. However, in today’s hyper-connected world, this relationship has been radically redefined. The network theory of Albert-László Barabási and Peter Érdi’s analyses of the “success game” reveal that the dynamics of achievement in the digital age have been fundamentally reshaped. Individuals or institutions that gain small initial advantages rapidly accumulate visibility and resources through the mechanism of preferential attachment. While performance often follows a normal distribution, the distribution of success and rewards aligns with the power law: a very small number of actors capture nearly all the returns, while the vast majority fade into obscurity.

In this new order, concentration occurs not only in the economic sphere but also across cultural and cognitive domains. In scientific production, a small number of universities capture the majority of citations; in the art world, visibility revolves around a handful of major galleries and museums; and in the music industry, a limited number of artists control nearly all streams and revenue shares. In the realm of technology, only a few global corporations dominate almost the entirety of data, algorithms, and user behavior. This creates a winner-takes-all system, fundamentally undermining the perception of social equality of opportunity and eroding collective notions of fairness and justice.

The structure of the public sphere has also been profoundly transformed by this shift. Traditionally, the public sphere served as a space where individuals could engage in discussions on shared concerns, freely express their ideas, and collectively construct social consciousness. Today, however, this space has largely fallen under the control of digital platforms. Access to information, visibility, and interaction are now dictated by algorithmic priorities, while the quality of social dialogue has been subordinated to platform logics driven by commercial interests and user data. This marks a critical rupture that poses a deep threat to democratic processes. As decision-making mechanisms become increasingly dependent on data monopolies, political debates are trapped within echo chambers, misinformation spreads rapidly, and polarization intensifies – all of which gradually erode the foundations for democratic consensus.

This reality underscores the urgent need for new institutional, ethical, and political frameworks to redirect the course of digitalization in favor of humanity. Understanding technology alone is not sufficient; it is essential to build a collective will that places it in the service of human well-being. The development of artificial intelligence and algorithms should not be left solely to technical experts; instead, it must include the participation of civil society, academia, labor unions, independent oversight mechanisms, and the communities directly affected by these technologies. Education systems should be restructured around an approach that prioritizes digital literacy, data ethics, and critical thinking, enabling individuals to move beyond being passive consumers of technology and instead become conscious, questioning actors. Furthermore, algorithmic decision-making must be made transparent, AI applications should be subjected to strict accountability standards, and individuals’ digital rights must be safeguarded through robust legal protections.

In conclusion, while digitalization and artificial intelligence present one of the greatest potentials in human history, they also carry profound risks. In the coming period, the decisions made by individuals, institutions, and governments will shape not only the trajectory of technology but also the future of humanity itself. Uncontrolled technological growth carries the danger of reducing human beings to passive objects within systems of their own creation; yet, when guided by human will, it offers unique opportunities to lay the foundations for a more just, inclusive, and meaningful social order. The direction the future will take depends on the values and principles by which we choose to govern technology. Humanity still holds the chance to remain the subject – not the object – of this transformation. Seizing this opportunity requires building new ethical frameworks, establishing transparency standards and creating mechanisms that strengthen the public sphere.




How to talk to your teen about AI : NPR



Parents should broach the AI conversation with their children when they are elementary school-age, before they encounter AI through their friends at school or in other spaces, says Marc Watkins, a lecturer at the University of Mississippi who researches AI and its impact on education.

Eva Redamonti for NPR



Nicholas Munkbhatter started using ChatGPT shortly after the artificial intelligence chatbot was released in late 2022. He was 14 at the time, and he says, “I would use it for almost everything, like math problems.”

At first, Munkbhatter, who is from Sacramento, Calif., thought it was amazing. But then, he says, he started to see downsides: “I realized it was just giving me an answer without helping me go through the actual process of learning.”

Many kids and teens use ChatGPT and other generative artificial intelligence models like Claude or Google Gemini for everything from dealing with math homework to coping with a mental health crisis, often with little to no guidance from adults. Education and child development experts say parents must take the lead in helping children understand this new technology. 

“Having conversations now about what is ethical, responsible usage of AI is important, and you need to be a part of that if you are a parent,” says Marc Watkins, a lecturer at the University of Mississippi who researches AI and its impact on education.

While early evidence suggests the technology could bolster student learning if deployed correctly, ongoing research and stories about teenagers who died by suicide after talking to AI chatbots indicate significant risks to young users.

Experts share advice on how to talk to kids about AI, including its potential benefits and harms.

Start the conversation early 

Broach the conversation when children are elementary-school age, Watkins says, before they encounter AI through their friends at school or in other spaces.

To guide these discussions, Watkins says to budget time each week to learn about AI and try the tools for yourself. That might mean listening to a podcast, reading a newsletter or experimenting with platforms like ChatGPT.

To explain how AI works to your kids, Watkins recommends playing a Google game called Quick, Draw!. Players receive a drawing prompt, and the game’s neural network tries to guess what you’re drawing by recognizing patterns in doodles from thousands of other players.

Watkins says it’s a way to show kids that AI is only as good as the data it’s trained on. It mimics how humans write and create content, but it doesn’t think or understand things the way people do.

Use AI together 

Since the technology is still evolving, parents are often learning about it alongside their children. Ying Xu, an assistant professor at the Harvard Graduate School of Education, who researches AI, says parents can use this as an opportunity to explore it together.

For example, the next time your child asks you a question, type it into an AI chatbot and discuss the response, Xu says. “Is it helpful? What felt off? How do you think this response was generated?”

Parents should also reinforce that AI can make mistakes. Xu says parents can teach kids to fact-check information that AI chatbots provide by using other sources.

Explore its possibilities 

If your kid is using AI for homework help, keep an open mind.

Research has shown that some AI tools can have a positive impact on learning. Xu worked with PBS Kids to design interactive, AI-powered digital versions of popular kids' shows. She found that children who watched the AI versions were more engaged and learned more compared with children who watched the traditional broadcast versions of the shows.

Meanwhile, Munkbhatter, the teenager from Sacramento, says AI has been a helpful learning aid and brainstorm partner — so long as he doesn’t use it to do all the work for him.

Now, if he gets stuck on a challenging math problem, he says he asks ChatGPT: “What’s the first step I should take when looking at a problem like this? How should I think about it?”

Munkbhatter also says he provides his class notes to ChatGPT and asks it to quiz him on the subject matter. “I make sure that it only gives me the question itself rather than the question and the answer at the same time.”

Understand the risks

We don’t yet know how generative AI will impact child development in the long term, but there are some present dangers.

Dr. Darja Djordjevic, a faculty fellow at Stanford University’s Brainstorm: The Stanford Lab for Mental Health Innovation, is working with the group Common Sense Media to study how popular AI models respond to users who show symptoms of psychiatric disorders that affect teens. The research hasn’t been released yet, but Djordjevic shared some of her findings with NPR.

“What we found was that the AI chatbots could provide good general mental health information, but they demonstrated concerning gaps in recognizing serious conditions,” Djordjevic says.

At times, she says, AI chatbots provided unsafe responses to questions and statements about self-harm, substance use, body image or eating disorders and risk-taking behaviors. She says they also generated sexually explicit content.

NPR reached out to OpenAI, the company behind ChatGPT, about these concerns. We were directed to a recent post on the company’s website that says OpenAI is “continuing to improve how our models recognize and respond to signs of mental and emotional distress and connect people with care, guided by expert input.”

The post says ChatGPT is also trained to direct users expressing suicidal intent to professional help.

Warning signs a child is spending too much time with AI include increased time alone with devices or talking about an AI chatbot as if it were a real friend.

“That’s a warning sign that the conversation about these being AI tools and not people needs to be nurtured again,” Djordjevic says.

Set reasonable household rules around AI 

You may be wondering how to enforce these boundaries at home. Experts share their tips.

Co-write the AI rules with your kids, Djordjevic says. Identify safe uses of AI together — like for homework help with a parent’s supervision or as a creative outlet — and limit the amount of time your child uses it. And check in regularly on how the use of AI is making your child feel.

Don’t prohibit your kids from using AI — but do set limits. “Bans don’t generally work, especially with teens,” Watkins says. “What works is having conversations with them, putting clear guidelines and structure around these things and understanding the do’s and don’ts.” Parents should feel empowered to ban clearly dangerous uses, like if a child is harming themselves and an AI chatbot encourages the behavior, Djordjevic says. 

Make time for real life. Prioritize time spent outside with real people, away from devices, Djordjevic says. That could include joining a sports team and scheduling regular family activities.

Trust that your conversations will make a difference. As overwhelmed as parents might feel navigating AI, Watkins emphasizes that taking time to talk with kids can have real impact: “They’re not going to remember an ad from an AI chatbot. They’re going to remember a conversation you had with them. And that gives you a lot of agency, a lot of power in this.”

This episode of Life Kit was produced by Clare Marie Schneider. It was edited by Malaka Gharib. The visual editor is Beck Harlan.

Want more Life Kit? Subscribe to our weekly newsletter and get expert advice on topics like money, relationships, health and more.




5 Top Artificial Intelligence Stocks to Buy in September



The opportunity in AI remains massive.

Artificial intelligence (AI) has been the driving force behind the stock market’s biggest winners in recent years, and that trend looks far from finished. The opportunity ahead is still massive, so this is an area that most investors will want some exposure to.

Here are five top AI stocks to buy as September rolls on.

Image source: Getty Images

1. Nvidia

No company has benefited more from the buildout of AI infrastructure than Nvidia (NVDA 0.43%). Its graphics processing units (GPUs) remain the gold standard for powering the training of large language models (LLMs), and its popular CUDA software platform helped give it a moat that competitors have yet to crack. Meanwhile, its networking revenue is also soaring, with demand for its NVLink, InfiniBand, and Spectrum-X products leading to a 98% year-over-year surge in Q2 data center networking revenue to $7.3 billion.

While its Blackwell chips are already the leading hardware for providing processing power for training, Nvidia said that those GPUs also set the standard for inference, which could eventually become a much bigger market than training. With AI infrastructure projected to be a multitrillion-dollar market in the coming years, Nvidia has more than enough room to keep growing.

The stock has had a massive run, but the momentum behind AI spending means Nvidia remains a top pick for long-term investors.

2. Broadcom

Broadcom (AVGO 0.19%) has emerged as the go-to name for custom AI chips, which are becoming critical as hyperscalers (operators of massive data centers) look to lower their inference costs and reduce their reliance on Nvidia. Broadcom already counts Alphabet (GOOGL 0.22%) (GOOG 0.27%), Meta Platforms (META 0.70%), and ByteDance among its customers, and management projects that these relationships alone could be worth $60 billion to $90 billion by its fiscal 2027 (which ends October 2027).

However, things got even better for shareholders after Broadcom revealed that a fourth customer, presumably OpenAI, had placed a massive $10 billion order for next year. The pace at which Broadcom can design custom AI chips appears to be accelerating, which bodes well for its growth, especially since Apple was earlier revealed as a fifth major customer.

Throw in Broadcom’s strong networking business and its VMware arm, which positions it as a software player in AI infrastructure, and this is a company with a lot of growth potential. Investors looking for diversified AI winners beyond Nvidia should have this stock near the top of their lists.

3. Advanced Micro Devices

The next battleground in the AI chip wars looks like it will be for inference. While Nvidia and Broadcom are both well positioned for this fight, don’t count Advanced Micro Devices (AMD 1.91%) out. AMD has already been carving out a role in this space. Seven of the 10 biggest AI operators already use its GPUs, with one major AI company running a significant amount of inference on AMD chips.

AMD, along with Broadcom and others, also helped form the UALink Consortium to create an open-source interconnect standard. This could loosen the grip that Nvidia has established on that score with its NVLink offering, and allow companies to more easily mix and match AI chips from different vendors. That would be hugely beneficial for AMD.

On top of that, AMD’s central processing units (CPUs) continue to gain traction in data centers. The revenue gap between Nvidia and AMD is still massive, which is why even modest market share gains in the GPU segment could drive AMD’s numbers significantly higher. That opportunity makes the stock a compelling buy.

4. Alphabet

Alphabet just dodged what could have been a big problem when the judge in its antitrust case opted not to require it to sell its Chrome browser. That preserved one of the company’s most important advantages in search: distribution. That foundation, with Chrome and Android, gives Google an advantage that will be difficult for AI upstarts to overcome.

The company is now layering AI on top of search. Its AI Overviews are already being used by more than 2 billion people each month, and it's rolling out AI Mode around the world in different languages. Meanwhile, its Gemini large language models are among the best in the industry, and giving users the option to toggle between AI Mode and traditional search inside Google is another edge. Importantly, Alphabet knows how to monetize users, whether through traditional search or AI.

Beyond search, Google Cloud has been a powerful growth engine for Alphabet as companies rush to cloud computing providers to help build out their own AI models and apps and run them on cloud infrastructure. Meanwhile, its custom chips — made with the help of Broadcom — have given it a cost advantage. Add in other bets like its Waymo robotaxi business and its quantum computing efforts, and Alphabet is well-positioned for the future.

5. Meta Platforms

Meta Platforms has reinvented itself with AI, turning what many thought was a fading social media company into one of the best growth stories out there. Its Llama models are improving user experiences by serving up more engaging content, while advertisers get access to better targeting and ad campaign tools. That combination led to a 22% year-over-year jump in ad revenue last quarter, with both impressions and ad prices moving higher. Meta is also just starting to run ads on WhatsApp and Threads, opening up new growth avenues. 

Yet CEO Mark Zuckerberg’s ambitions go well beyond ads. He has talked openly about building “personal superintelligence” and has been recruiting aggressively to make it happen.

With its huge operating cash flow, Meta can afford to chase big opportunities in AI, and it already benefits from AI-driven gains in its core business. That makes Meta an AI stock to own.

Geoffrey Seiler has positions in Alphabet. The Motley Fool has positions in and recommends Advanced Micro Devices, Alphabet, Apple, Meta Platforms, and Nvidia. The Motley Fool recommends Broadcom. The Motley Fool has a disclosure policy.




Albania’s prime minister appoints an AI-generated ‘minister’ to tackle corruption | World News



Albania’s prime minister has appointed an artificial intelligence-generated “minister” to tackle corruption and promote innovation in his new cabinet.

The new AI minister, officially named Diella – the female form of the word for sun in the Albanian language – was appointed on Friday and is a virtual entity.

Diella will be a “member of the cabinet who is not present physically but has been created virtually,” Prime Minister Edi Rama said in a post on Facebook.

Mr Rama said the AI-generated bot would help ensure that “public tenders are completely free of corruption” and assist the government in operating more efficiently and transparently.

Image: Albania's AI "minister" Diella. Pic: AP/Vlasov Sulaj

Diella uses the latest AI models and methods to ensure accuracy in carrying out its assigned responsibilities, according to the website of Albania’s National Agency for Information Society.

Diella, portrayed wearing a traditional Albanian folk costume, was developed earlier this year in partnership with Microsoft. She serves as a virtual assistant on the e-Albania public service platform, helping users navigate the site and access around one million digital inquiries and documents.

Mr Rama’s Socialist Party won a fourth straight term by securing 83 out of 140 seats in the parliamentary elections in May.

With this majority, the party can govern independently and pass most laws, though it falls short of the 93-seat threshold required to amend the Constitution.

The Socialists have pledged to secure European Union membership for Albania within five years, aiming to complete negotiations by 2027 – a claim met with scepticism by the Democratic opposition, who argue the country is not ready.

Read more from Sky News:
All we know about suspect in Charlie Kirk’s shooting
UK joins NATO operation to bolster Europe’s eastern flank

The Western Balkan country began full EU membership negotiations a year ago. The incoming government now faces key challenges, including tackling organized crime and long-standing corruption – issues that have persisted since the end of communist rule in 1990.

Diella is also expected to support local authorities in accelerating reforms and aligning with EU standards.

President Bajram Begaj has tasked Prime Minister Rama with forming the new government, a move analysts say grants him the authority to establish and implement the AI-powered assistant Diella.


