
Ethics & Policy

Aachi Global School (AGS)

A holistic school that encourages students to explore, innovate and achieve

In the realm of education, Aachi Global School (AGS) stands out as a beacon for holistic development, leadership, entrepreneurship, and enduring values. Founded by Mrs. Rebekah Abishek, AGS is not just a school; it is a foundation dedicated to preparing children for life.

The cornerstone of AGS’s philosophy lies in the belief that education should equip learners with life skills, values, integrity, self-confidence, and the ability to form their own opinions. Mrs. Rebekah Abishek, Founder and Trustee, embarked on this educational journey driven by the desire to provide a sensible curriculum that imparts a wholesome education.

Rebekah Abishek, Founder, Aachi Group of Schools

AGS goes beyond conventional teaching methods, fostering an environment where young minds actively engage in activities that nurture creativity and artistic potential. The school offers both Cambridge and CBSE curricula, catering to children from Play Group to Grade X. Mrs. Rebekah Abishek emphasises the importance of hands-on experiences and real-life learning, with classroom activities and educational trips geared towards developing inquisitive and principled individuals.


At AGS, early experiences are viewed as the foundation for future learning and behavior. Educators act as facilitators, partnering with children to encourage exploration, critical thinking, and reflection. The school regularly invites foreign faculty for seminars, promoting cultural understanding and appreciation.

Educational trips are a staple at AGS, providing students with a tactile understanding of the environment they study. This hands-on approach extends to co-curricular activities such as theatre, vocals, dance, arts, STEM and robotics, starting from Grade I. The emphasis on social responsibility is evident as students actively participate in activities that involve sharing food and clothes with those in need.

Aachi Global School envisions its students as lifelong learners, creative thinkers, and responsible global citizens. In addition to its commitment to academic quality, the curriculum focuses on instilling crucial life skills such as good manners, etiquette, moral ethics, cognitive thinking, empathy, and systematic learning strategies.

AGS boasts two campuses, strategically located in Anna Nagar and Ayanambakkam.

These campuses are designed to seamlessly integrate learning spaces with well-equipped classrooms and state-of-the-art facilities for both indoor and outdoor sports, ensuring a comprehensive educational experience for every child.

Aachi Global School believes, “When you show children the importance of Corporate Social Responsibility you are planting a seed and setting values in place that will have a big impact on who they become as adults.” Students are therefore regularly encouraged to participate in activities through which they can be of service to society.

Enroll your child at Aachi Global School for a transformative journey towards a future of knowledge, values, and success.

For more information, contact: 7338856789, 7708856789

Email: Admissions@aachiglobalschool.com

Disclaimer: This article is a sponsored article and does not have journalistic or editorial involvement of Times Now.







Ethics & Policy

‘Humans in the Loop’ Film Explores AI, Ethics & Adivasi Labor



Humans in the Loop, director Aranya Sahay’s 2024 film about an Adivasi woman working as a data annotator, was screened at the UNESCO House in New Delhi on September 6. The film explores the hidden biases behind Artificial Intelligence (AI) and the ethical dilemmas associated with the technology, while offering a glimpse of the human labour that powers the “artificial” intelligence.

During the post-screening discussion, executive producer Kiran Rao pointed out that many contemporary conversations about AI centred on the economics of the business. “Our film talks about equitability, representation and data colonialism,” she added.

The movie’s protagonist, Nehma, lives in a remote village in Jharkhand, taking care of a rebellious school-going daughter and an infant son. She also finds work as a data annotator, labelling objects in images and videos accurately to create datasets for training AI models. Nehma begins the work with some hesitation and confusion, not entirely understanding the technology or her role in it. However, she soon grows fond of it, likening AI to a child that needs to be “taught” the right things.
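To make the job concrete: annotation work of this kind typically produces simple labelled records, such as bounding boxes drawn around objects in an image. The sketch below is purely illustrative; the field names and format are assumptions for illustration, not taken from the film or from any particular annotation tool.

```python
# Illustrative sketch of a single image-annotation record of the kind a data
# annotator might produce: each labelled object gets a class name and a
# bounding box. All field names here are invented for illustration.
annotation = {
    "image_id": "field_0001.jpg",
    "annotator": "nehma",  # hypothetical annotator ID
    "objects": [
        {"label": "crop",        "bbox": [34, 50, 210, 310]},  # x, y, width, height in pixels
        {"label": "caterpillar", "bbox": [120, 88, 18, 12]},   # the judgement call: pest or not?
    ],
}

# Training datasets are simply large collections of such records, used to teach
# a model to reproduce the annotators' labelling decisions.
print(len(annotation["objects"]), "labelled objects in", annotation["image_id"])
```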

And what is the right thing for an AI to be taught? That question forms the central premise of the film, following Nehma’s inner turmoil as she navigates the conflict between her manager’s expectations and what she knows to be true and right.

Initially, Nehma’s job is simple enough – label and outline body parts accurately so she can get a human-like computer model to walk. She completes her task successfully, and the model stumbles, falls, and finally walks upright, much to her joy.

Her second task is a lot more morally complex. Nehma and her team are working for an agritech firm that wants them to go through millions of images and accurately label crops and pests, like small insects or critters. The result? The so-called ‘pests’ are violently eradicated with a precision laser, with the crops left to thrive.

This is where the ‘Humans in the Loop’ film raises a crucial point – what makes something a pest? Nehma is visibly uncomfortable with the display of the laser’s awesome power as it scorches a small caterpillar. Acting on a whim, she decides not to label that particular critter as a pest.

The results pour in, and Nehma’s American clients are incensed. They demand that Nehma’s supervisor either fix the mistakes or risk losing the contract. The matter eventually lands with Nehma, who explains her reasoning: that particular caterpillar isn’t a pest, she argues, but a harmless critter that only eats the dead parts of leaves without damaging the crop as a whole.

Nehma’s supervisor won’t have it – “Client ne bola hai pest hai, to hai,” she exclaims in Hindi, meaning that if the client considers something to be a pest, then that is a pest.

So who is in the right here? Nehma’s knowledge comes from her own lived experience as an Adivasi woman, drawn from close proximity to Jharkhand’s dense forests. In fact, director Aranya Sahay argued that India’s Adivasis would see life in AI, which is why Nehma wanted to impart what she knew to the model.

‘Humans in the Loop’ also raises an important point: terminologies and classifiers like ‘pests’, ‘weeds’ or ‘crops’ depend on function. What is a ‘weed’ to one party is a ‘herb’ to another, just as a caterpillar that is harmless to one may be a pest to another.

For industrial agricultural operations, only the plants which bring in a profit are useful; everything else can be thrown to the laser.

“Will this industrial consumption economy dictate our knowledge?” asked Kiran Rao.


The other important problem the movie touches upon is the question of adequate representation. In one telling scene, Nehma prompts an image generator to create an image of a tribal woman, only to receive images of women wearing vaguely Native American headdresses. When she tries the prompt “beautiful woman”, she is greeted with images of white-skinned, blonde and blue-eyed women.

As many commentators have pointed out, AI models are heavily dependent on large volumes of data for accuracy, most of which comes from the Global North. On top of that, these datasets mostly feature images of white, Western European people, meaning that any resulting AI systems are much more inaccurate when dealing with people of colour or non-Western cultures in general.
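To see how under-representation translates into uneven accuracy, consider the following toy simulation. It is a hedged sketch, not a real evaluation: a single decision threshold is fitted to pooled data dominated by one group, and accuracy is then measured per group. All numbers are invented.

```python
# Minimal sketch: a global decision rule tuned on data dominated by group A
# performs noticeably worse on under-represented group B, whose data differs slightly.
import numpy as np

rng = np.random.default_rng(0)

train_a = rng.normal(loc=0.0, scale=1.0, size=950)   # majority group (95% of data)
train_b = rng.normal(loc=1.0, scale=1.0, size=50)    # under-represented group (5%)
labels_a = (train_a > 0.0).astype(int)                # ground-truth rule for group A
labels_b = (train_b > 1.0).astype(int)                # ground-truth rule for group B

# Fit a single threshold to the pooled data; it is dominated by group A.
pooled = np.concatenate([train_a, train_b])
pooled_labels = np.concatenate([labels_a, labels_b])
thresholds = np.linspace(pooled.min(), pooled.max(), 200)
accs = [((pooled > t).astype(int) == pooled_labels).mean() for t in thresholds]
best_t = thresholds[int(np.argmax(accs))]

# Per-group accuracy with the single learned threshold.
acc_a = ((train_a > best_t).astype(int) == labels_a).mean()
acc_b = ((train_b > best_t).astype(int) == labels_b).mean()
print(f"threshold={best_t:.2f}  accuracy A={acc_a:.2f}  accuracy B={acc_b:.2f}")
```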

Sahay gave an example from personal experience – during a special screening of the film for Adivasi scholars, a young tribal boy was attempting to generate an image of himself sitting on a crocodile. What he instead got was a white boy sitting atop an alligator.

“We’ve all tried to create images of ourselves, and there’s a definite slant towards European images,” said Rao.

‘Humans in the Loop’ suggests a solution to this problem by having Nehma take pictures of herself and other members of her community and feed them to the model. It soon starts generating images of brown-skinned tribal women. While this would most likely not occur this way in real life, as the model would have to undergo a fresh training cycle with the new data, culturally representative datasets are, in fact, something AI companies are increasingly hungry for.

It is in this context that fresh concerns arise, ones that do not get addressed through the movie. These representative datasets, containing images of indigenous people, their languages, culture and knowledge systems, are a product of the labour of the masses. The AI image generator is arguably better off with Nehma’s additions, but what does she or her community get out of it? If foreign AI companies are dependent on the value generated by India’s tribals, trained on their images and utilising their knowledge of nature, what is their stake in the multi-billion-dollar valuations commanded by Silicon Valley giants?


Ethics & Policy

AI in healthcare: legal and ethical considerations at the new frontier



Whilst the EU Commission’s guidelines [5], published in July 2025, offer some insight as to the compute threshold at which downstream modification constitutes the creation of a new model (with that downstream modifier then becoming a “provider” of the GPAIM and therefore subject to extensive compliance requirements), simple numerical thresholds do not necessarily tell the whole story. There are many different techniques for customizing general purpose AI models, and a simple compute threshold will not capture some customization techniques that are likely to have a more significant impact on model behavior, such as system prompts. Careful case-by-case consideration of the modification in practice will be necessary.

Organizations at risk of falling within scope of the EU AI Act GPAI requirements should consider the relevance of the General Purpose AI Code of Practice (the GPAI Code) [6]. The GPAI Code, while non-binding, has been developed collaboratively under the leadership of the European AI Office and is intended to be a practical tool to support organizations in complying with the AI Act for GPAI models, addressing transparency, copyright, and safety and security in particular. The drafting process sparked significant debate among stakeholders, with some arguing that the GPAI Code is overly restrictive and calling for greater flexibility, particularly regarding the training of LLMs. However, the European Commission asserts that signatories will benefit from a “simple and transparent way to demonstrate compliance with the AI Act”, with enforcement expected to focus on monitoring their adherence to the GPAI Code. It remains to be seen how organizations manage that adherence, particularly in the face of technical challenges (such as output filtering), legal complexities (not least the interplay with ongoing court action) and the allocation of liability between provider and deployer.

  • Unlike the EU, the UK has, to date, chosen not to pass any AI-specific laws. Instead, it encourages regulators to first determine how existing technology-neutral legislation, such as the Medical Device Regulations, the UK GDPR and the Data Protection Act, can be applied to AI uses. For example, the Medicines & Healthcare products Regulatory Agency (MHRA) is actively working to extend existing software regulations to encompass “AI as a Medical Device” (AIaMD). The MHRA’s new program focuses on ensuring both explainability and interpretability of AI systems, as well as managing the retraining of AI models to maintain their effectiveness and safety over time.
  • In China, the National Health Commission and the National Medical Products Administration recently published several guidelines on the registration of AI-driven medical devices and the permissible use cases for applying AI in diagnosis, treatment, public health, medical education, and administration. The guidelines all emphasize AI’s assistive role in drug and medical device development and monitoring, under human supervision.

Leading AI developers are also setting up in-house AI ethics policies and processes, including independent ethics boards and review committees, to ensure safe and ethical AI research. These frameworks are crucial while the international landscape of legally binding regulations continues to mature.

Recommendations: scenario-based assessments for AI tools

Healthcare companies face a delicate balancing act. On one hand, their license to operate depends on maintaining the trust of patients, which requires prioritizing safety above all else. Ensuring that patients feel secure is non-negotiable in a sector where lives are at stake. On the other hand, being overly risk-averse can stifle the very innovations that have the potential to transform lives and deliver better outcomes for patients and society as a whole. Striking this balance is critical: rigorous testing and review processes must coexist with a commitment to fostering innovation, ensuring progress without compromising safety. 

In this regard, a risk-based framework is recommended for regulating AI in healthcare. This approach involves varying the approval processes based on the risk level of each application. Essentially, the higher the risks associated with the AI tools, the more controls and safeguards should be required by authorities. For instance, AI tools that conduct medical training, promote disease awareness, and perform medical automation should generally be considered low risk. Conversely, AI tools that perform autonomous surgery and critical monitoring may be regarded as higher risk and require greater transparency and scrutiny. By tailoring the regulatory requirements to the specific risks, we can foster innovation while ensuring that safety is adequately protected.
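As an illustration of how such tiering might be expressed operationally, the sketch below maps the article’s example use cases to risk tiers and associated controls. The tier names, control lists and default behaviour are illustrative assumptions, not drawn from any regulation.

```python
# Minimal sketch of a risk-based framework: use cases map to tiers, and tiers
# determine the controls required before deployment. Illustrative only.
from dataclasses import dataclass


@dataclass
class RiskTier:
    name: str
    required_controls: list


TIERS = {
    "low": RiskTier("low", ["basic documentation", "post-market monitoring"]),
    "high": RiskTier("high", ["pre-approval review", "clinical validation",
                              "transparency reporting", "human oversight plan"]),
}

# Mapping of example use cases to tiers, following the article's examples.
USE_CASE_TIER = {
    "medical training": "low",
    "disease awareness": "low",
    "medical automation": "low",
    "autonomous surgery": "high",
    "critical monitoring": "high",
}


def controls_for(use_case: str) -> list:
    # Unknown use cases default to the stricter tier (a cautious assumption).
    tier = USE_CASE_TIER.get(use_case, "high")
    return TIERS[tier].required_controls


print(controls_for("autonomous surgery"))
```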

Moreover, teams reviewing AI systems should consist of stakeholders representing a broad range of expertise and disciplines to ensure comprehensive oversight. For example, this may include professionals with backgrounds in healthcare, medical technology, legal and compliance, cybersecurity, ethics and other relevant fields as well as patient interest groups. By bringing together diverse perspectives, the complexities and ethical considerations of AI in healthcare can be better addressed, fostering trust and accountability.

Data protection and privacy

Data privacy requirements are a key consideration when using AI in healthcare contexts, especially given that many jurisdictions’ laws broadly define “personal data”, potentially capturing a wide range of data. Further, privacy regulators have been the forerunners in bringing AI-related enforcement actions. For example, AI tools such as OpenAI’s ChatGPT have encountered extensive regulatory scrutiny at EU level through the European Data Protection Board (EDPB) taskforce, and NOYB (None of Your Business) / the European Center for Digital Rights, the data privacy campaign group founded by the well-known privacy activist Max Schrems, has initiated a complaint against the company in Austria alleging GDPR breaches. DeepSeek has also attracted immediate attention from EU and other international regulators, with investigations initiated and the EDPB taskforce extended to cover its offerings.

Privacy considerations in AI

There are several privacy considerations to navigate when using AI. This can raise challenges as developers, often U.S.-based, look to navigate highly regulated jurisdictions such as those in the EU, where regulators are scrutinizing approaches taken to data protection compliance. This includes the issue of identifying a lawful basis for the processing activity. Many jurisdictions’ data privacy laws contain a legitimate interests basis or similar provisions which, when applicable, permit the data controller to process personal data without first requiring individuals’ explicit consent. However, there are diverging views on whether this basis can be used for AI-related processing.

The European Data Protection Board (EDPB) issued Opinion 28/2024 [7] in December 2024, which provides detailed guidance on the use of legitimate interest as a legal basis for processing personal data in the development and deployment of AI models, including LLMs (the EDPB AI Opinion). The EDPB AI Opinion, although indicating that legitimate interest may be a possible legal basis, highlights the three-step test that should be applied when assessing the use of legitimate interest as a legal basis, i.e. (1) identify the legitimate interest pursued by the controller or a third party; (2) analyse the necessity of the processing for the purposes of the legitimate interest pursued (the “necessity test”); and (3) assess that the legitimate interest is not overridden by the interests or fundamental rights and freedoms of the data subjects (the “balancing test”). It also highlights the need for robust safeguards to protect data subjects’ rights. The examples where legitimate interests could be a suitable lawful basis in the EDPB AI Opinion are relatively limited, including examples such as a conversational agent, fraud detection and threat analysis in an information system.

An EDPB Opinion adopted a few months earlier, in October 2024, addresses the legitimate interests basis for processing personal data more generally (the EDPB LI Opinion). While helpful in referencing scientific research as a potential legitimate interest, it is cautious about establishing a legitimate interest on the basis of societal benefit, emphasizing that the legitimate interest should tie to the interest of the controller or a third party and that the processing should be “strictly” necessary to achieve the legitimate interest (i.e. there is no other reasonable and equally effective method that is less privacy-intrusive).

The EDPB AI Opinion clarifies that the unlawful processing of personal data during the development phase may not automatically render subsequent processing in the deployment phase unlawful, but controllers must be able to demonstrate compliance and accountability throughout the lifecycle of the AI system.  

Individual consent

As an alternative, businesses may need to obtain individual consent for AI-related processing activities. This can be a difficult basis to rely on given the high bar for valid consent, and it is particularly challenging in an AI healthcare context: the heightened compliance obligations that apply to special category data (which includes health data) raise the bar to “explicit consent”, and this is compounded by the potential for public distrust and misunderstanding around AI technologies.

Further, in some jurisdictions it is common for individuals to place stringent conditions, including time restrictions, on what their personal data can be used for. This could prevent their personal data being used in connection with AI, given it is not always possible to delete or amend personal data once it has been ingested into an AI system.

Professional accountability

Determining fault when an AI system makes an error is a particularly complex issue, especially given the number of parties that may be involved throughout the value chain. The challenge is heightened by the fact that different regulations may apply at different stages, and the legal landscape is still developing in response to these new technologies.

In the case of fully autonomous AI decision-making, one possible approach is that liability could fall on the AI developer, as it may be difficult to hold a human user responsible for outcomes they do not control. However, the allocation of responsibility could vary depending on the specific circumstances and regulatory frameworks in place.

Where AI systems operate with human involvement, another potential approach is for regulators to introduce a strict liability standard for consequences arising from the use of AI tools. While this could offer greater protection for patients, it may also have implications for the pace of technological innovation. Alternatively, some have suggested that requiring AI developers and commercial users to carry insurance against product liability claims could help address these risks. The WHO, for example, has recommended the establishment of no-fault, no-liability compensation funds as a way to ensure that patients are compensated for harm without the need to prove fault [8].

In July 2025, a study commissioned by the European Parliament’s Policy Department for Justice, Civil Liberties and Institutional Affairs was published [9]. Its aim was to critically analyze the EU’s evolving approach to regulating civil liability for AI systems; four policy proposals are discussed, and the report advocates a strict liability regime targeting high-risk AI systems.

Ultimately, the question of legal responsibility for AI in healthcare remains unsettled and is likely to require ongoing adaptation as technology and regulation evolve. Accountability will be a particular challenge given the complexity of the value chain and the interplay of different regulatory regimes. It will be important for all stakeholders to engage in continued dialogue to ensure that legal frameworks keep pace with technological developments and that patient safety remains a central focus.

Ethical concerns

There are multiple ethical considerations that developers and deployers may need to address when using AI systems in healthcare. Three prominent examples are explored below. 

Bias causing unjust discrimination

Bias in AI systems can lead to unjustified discriminatory treatment of certain protected groups. There are two primary types of bias that may arise in healthcare:

  • Disparate impact risk: This occurs when people are treated differently when they should be treated the same. For example, a study [10] found that Black patients in the U.S. healthcare system were assigned significantly lower “risk scores” than White patients with similar medical conditions. This discrepancy arose because the algorithm used each patient’s annual cost of care as a proxy for the complexity of their medical condition(s). However, less money is spent on Black patients due to various factors, including systemic racism, lower rates of insurance, and poorer access to care [11]. Consequently, using care costs created unjustified discrepancies for Black patients (a small numerical sketch of this proxy effect follows this list).
  • Improper treatment risk: Bias in AI systems can arise when training data fails to account for the diversity of patient populations, leading to suboptimal or harmful outcomes. For example, one study [12] demonstrated that facial recognition algorithms often exhibit higher error rates when identifying individuals with darker skin tones. While this study focused on facial recognition, the same principle applies in healthcare, where AI systems used for dermatological diagnoses have been found to perform less accurately on patients with darker skin [13]. This occurs because the datasets used to train these systems often contain a disproportionate number of images from lighter-skinned individuals. Such biases can lead to misdiagnoses or delays in treatment, illustrating the critical need for diverse and representative training data in healthcare AI applications.
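The proxy problem in the first bullet can be illustrated numerically. The simulation below is a minimal sketch with invented numbers: both groups have the same distribution of true health need, but systematically lower spending on one group pulls down its cost-based risk scores.

```python
# Minimal sketch of the "cost as a proxy for need" problem: identical need,
# lower spend for group B, so a cost-based risk score under-ranks group B.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

need = rng.gamma(shape=2.0, scale=1.0, size=n)   # true health need, same for both groups
group_b = rng.random(n) < 0.5                     # group membership
spend = need * np.where(group_b, 0.7, 1.0)        # systematically lower spend for group B

# "Risk score" = spend percentile (cost used as a proxy for need).
risk_score = spend.argsort().argsort() / n

# Among patients with the same (high) true need, group B receives lower scores.
high_need = need > np.quantile(need, 0.9)
print(f"mean risk score, high-need group A: {risk_score[high_need & ~group_b].mean():.3f}")
print(f"mean risk score, high-need group B: {risk_score[high_need & group_b].mean():.3f}")
```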

Transparency and explainability

Providing individuals with information about how healthcare decisions are made, the process used to reach them, and the factors considered is crucial for maintaining trust between medical professionals and their patients. Understanding the reasoning behind certain decisions is not only important for ensuring high-quality healthcare and patient safety, but also helps facilitate patients’ medical and bodily autonomy over their treatment. However, explainability can be particularly challenging for AI systems, especially generative AI, as their “black box” nature means deployers may not always be able to identify exactly how an AI system produced its output. It is hoped that technological advances, including recent work on neural network interpretability [14], will assist with practical solutions to this challenge.

Human review

To facilitate fair, high-quality outcomes, it is important for end-users—often healthcare professionals—to understand the AI system’s intended role in their clinical workflow and whether the AI system is intended to replace user decision-making or augment it. 

However, it may not always be appropriate for the human to override the AI system’s output; their involvement in the workflow will likely vary depending on what the AI tool is being used for. For example, if an AI system has been trained to detect potentially cancerous cells in skin cell samples, and the AI system flags the sample as being potentially cancerous but the healthcare professional disagrees, it may be more appropriate to escalate the test to a second-level review than to permit the healthcare professional to simply override the AI system’s decision. A false positive here is likely to be less risky than a false negative. It is therefore important to take a considered, nuanced approach when determining how any human-in-the-loop process flow should operate.
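The escalation logic described above can be written down as a tiny decision rule. The sketch below is illustrative only; the names and default behaviours are assumptions, not a clinical protocol.

```python
# Minimal sketch of a human-in-the-loop escalation rule: disagreement with a
# positive AI flag triggers second-level review rather than a silent override,
# because a false negative is costlier than a false positive in this setting.
from enum import Enum


class Decision(Enum):
    PROCEED_AS_NEGATIVE = "proceed as negative"
    PROCEED_AS_POSITIVE = "proceed as positive"
    SECOND_LEVEL_REVIEW = "escalate to second-level review"


def resolve(ai_flags_positive: bool, clinician_agrees: bool) -> Decision:
    if ai_flags_positive and not clinician_agrees:
        return Decision.SECOND_LEVEL_REVIEW
    if ai_flags_positive:
        return Decision.PROCEED_AS_POSITIVE
    return Decision.PROCEED_AS_NEGATIVE


print(resolve(ai_flags_positive=True, clinician_agrees=False))
```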

Conclusion

AI offers significant benefits in healthcare but also presents legal and ethical challenges that must be navigated. Collaborative efforts among policymakers, healthcare professionals, AI developers, and legal experts are essential to establish robust frameworks that safeguard patient rights and promote equitable access to advanced healthcare technologies.

This article was written by Jieni Ji and David Egan, assistant general counsel, global digital and privacy at GSK in London. 




Ethics & Policy

Ethical AI: Investing in a Responsible Future



Risks to Investors and Regulatory Momentum

Despite its potential, AI carries its own unique and significant risks. It can amplify subjectivity, compromise privacy, and make opaque, unaccountable decisions, which could prove especially detrimental in high-stakes sectors such as finance, law enforcement, and healthcare. Key concerns include inaccuracy, discrimination arising from biased data, and privacy breaches due to cyber vulnerabilities. Additionally, the environmental footprint of AI is expanding rapidly: inference from models like ChatGPT already consumes over 124 GWh annually, and with compute demand doubling every 100 days, the trajectory points toward tens of terawatt-hours annually over the next few years. Water usage is heading in a similar direction, with up to 6.6 billion cubic meters of water projected to be consumed by 2027 – enough to meet Denmark’s yearly water needs.
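Taking the article’s own figures at face value, a back-of-the-envelope projection shows how quickly a 100-day doubling compounds. This is a hedged arithmetic sketch of the stated trajectory, not a forecast; whether compute-demand doubling translates directly into energy use is itself an assumption.

```python
# Compounding check: ~124 GWh/year today, doubling every 100 days (the article's
# figures). Roughly 1.6 TWh after one year and ~20 TWh after two, consistent with
# "tens of terawatt-hours annually over the next few years".
baseline_gwh = 124.0
doubling_period_days = 100.0

for years in (1, 2, 3):
    doublings = years * 365 / doubling_period_days
    projected_gwh = baseline_gwh * 2 ** doublings
    print(f"after {years} year(s): ~{projected_gwh / 1000:.1f} TWh/year")
```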

“Greenwashing”, which can arise when businesses overstate their “green” credentials (which could include situations where businesses underestimate or fail to fully understand the environmental impact of their AI use), is increasingly coming into focus. This can be particularly pertinent to AI, as AI providers’ claims on their model’s energy and water usage are often opaque.  In the UK, under new powers introduced in the Digital Markets, Competition and Consumers Act 2024, the Competition and Markets Authority can impose fines of up to 10% of a company’s global turnover where companies engage in unfair commercial practices, including for misleading environmental claims. As ESG becomes more important in supply chains, scrutiny of AI usage and its underlying environmental impact is only likely to increase.

To consider another ethical angle; Getty’s claim against Stability AI for copyright and trademark infringement in respect of the data that Stability AI has used to train its AI model has drawn into sharp focus the ethics of the way in which AI developers acquire their training data. Investors may want reassurance that AI businesses in which they invest will not face the threat of litigation as a result of “stealing” data to develop their models. 

Encouragingly, investor awareness of these issues is growing. The World Benchmarking Alliance’s Collective Impact Coalition for Digital Inclusion brings together 34 institutional investors representing over $6.9 trillion in assets, alongside 12 civil society groups. Their collective engagement has reportedly prompted 19 companies to adopt ethical AI principles since 2022; however, the work is far from over, with a recent report revealing that only 52 of 200 major tech firms disclose their ethical AI principles.  

Regulatory momentum is building globally. The EU AI Act is the most comprehensive AI regulatory framework implemented so far and, much as the GDPR set privacy standards globally, looks set to become the “gold standard” in AI regulation. The Act introduces a risk-based framework which prohibits certain unacceptable-risk uses of AI, imposes strict requirements on high-risk applications, mandates transparency and requires those developing and deploying AI to be AI literate. As noted above, other countries are also increasingly regulating, although in its recently published Digital and Technologies Sector Plan the UK Government has stated its aim to take a more pro-innovation, sector-specific approach with a lighter administrative burden, rather than implement a single piece of overarching regulation.

As AI becomes more accessible, thanks to a 280-fold drop in inference costs between November 2022 and October 2024, deployment is accelerating, making inclusive and ethical AI governance more urgent than ever. Businesses and investors alike would be wise to stay alert to the risks, particularly if ethical applications are key to their business plans or investment strategies. 

To help mitigate risks to investors, the Responsible Investment Association Australasia recommends stewardship and integration strategies, including human rights due diligence aligned with the UN Guiding Principles on Business and Human Rights (UNGPs). It also advocates prioritising engagement based on the severity and likelihood of impacts, and pushing for greater transparency, contestability, and accountability in AI governance.


