Who Is Soham Parekh? The Indian Techie Accused of Holding 5 Jobs and Fooling US Startups

Soham Parekh, an Indian engineer, is accused of secretly working for multiple US startups simultaneously (Representative image)

Soham Parekh, an Indian engineer, is at the centre of a controversy in Silicon Valley following accusations from several US startups regarding his alleged moonlighting practices. The situation escalated after Suhail Doshi, founder of Playground AI, publicly warned others about Parekh’s purported simultaneous employment at multiple companies without proper disclosure.

Allegations of Dual Employment

According to reports, Parekh is believed to have worked for up to four or five startups, many of which are backed by Y Combinator. Doshi’s warning on social media highlighted that Parekh was dismissed from Playground AI within a week of his hiring after his dual employment was uncovered. Doshi also shared what he claimed was Parekh’s CV, which included positions at companies such as Dynamo AI and Synthesia, raising doubts about the authenticity of his credentials.

“PSA: there’s a guy named Soham Parekh (in India) who works at 3-4 startups at the same time. He’s been preying on YC companies and more. Beware. I fired this guy in his first week and told him to stop lying / scamming people. He hasn’t stopped a year later. No more excuses,” read his post.

Other founders corroborated Doshi’s claims, indicating similar experiences with Parekh. Flo Crivello, founder of Lindy, stated that he had to terminate Parekh’s contract shortly after hiring him. Nicolai Ouporov, CEO of Fleet AI, confirmed that Parekh had worked with them, noting that he had been engaging in this practice for years. Matthew Parkhurst, CEO of Antimetal, remarked on Parekh’s intelligence but indicated that the firm had to let him go once they discovered his multiple commitments.

The unfolding events have sparked broader discussions regarding remote hiring practices and the ethics surrounding moonlighting in the tech industry. Many are questioning how an engineer could manage multiple roles simultaneously and the adequacy of background checks in the hiring process. Parekh’s case has become a cautionary tale for startups navigating the complexities of remote work.

Despite the controversy, Parekh has not publicly commented on the allegations. However, he reportedly reached out to Doshi privately, expressing regret and seeking advice on how to rectify his situation. His educational background, which includes a degree from the University of Mumbai and a master’s from Georgia Institute of Technology, is now under scrutiny as well.

A Lesson for Silicon Valley Startups

The allegations against Soham Parekh serve as a stark reminder for many in Silicon Valley about the importance of rigorous hiring standards. As discussions around the implications of moonlighting continue, the incident highlights the need for better oversight in the rapidly evolving tech space.





Experts gather to discuss ethics, AI and the future of publishing

Representatives of the founding members sign the memorandum of cooperation at the launch of the Association for International Publishing Education during the 3rd International Conference on Publishing Education in Beijing. CHINA DAILY

Publishing stands at a pivotal juncture, said Jeremy North, president of Global Book Business at Taylor & Francis Group, addressing delegates at the 3rd International Conference on Publishing Education in Beijing. Digital intelligence is fundamentally transforming the sector — and this revolution will inevitably create “AI winners and losers”.

True winners, he argued, will be those who embrace AI not as a replacement for human insight but as a tool that strengthens publishing’s core mission: connecting people through knowledge. The key is balance, North said, using AI to enhance creativity without diminishing human judgment or critical thinking.

This vision set the tone for the event where the Association for International Publishing Education was officially launched — the world’s first global alliance dedicated to advancing publishing education through international collaboration.

Unveiled at the conference cohosted by the Beijing Institute of Graphic Communication and the Publishers Association of China, the AIPE brings together nearly 50 member organizations with a mission to foster joint research, training, and innovation in publishing education.

Tian Zhongli, president of BIGC, stressed the need to anchor publishing education in ethics and humanistic values and reaffirmed BIGC’s commitment to building a global talent platform through AIPE.

BIGC will deepen academic-industry collaboration through AIPE to provide a premium platform for nurturing high-level, holistic, and internationally competent publishing talent, he added.

Zhang Xin, secretary of the CPC Committee at BIGC, emphasized that AIPE is expected to help globalize Chinese publishing scholarship, contribute new ideas to the industry, and cultivate a new generation of publishing professionals for the digital era.

Themed “Mutual Learning and Cooperation: New Ecology of International Publishing Education in the Digital Intelligence Era”, the conference also tackled a wide range of challenges and opportunities brought on by AI — from ethical concerns and content ownership to protecting human creativity and rethinking publishing values in higher education.

Wu Shulin, president of the Publishers Association of China, cautioned that while AI brings major opportunities, “we must not overlook the ethical and security problems it introduces”.

Catriona Stevenson, deputy CEO of the UK Publishers Association, echoed this sentiment. She highlighted how British publishers are adopting AI to amplify human creativity and productivity, while calling for global cooperation to protect intellectual property and combat AI tool infringement.

The conference aims to explore innovative pathways for the publishing industry and education reform, discuss emerging technological trends, advance higher education philosophies and talent development models, promote global academic exchange and collaboration, and empower knowledge production and dissemination through publishing education in the digital intelligence era.

Lavender’s Role in Targeting Civilians in Gaza

The world today is war-torn, from Russia’s attacks on Ukraine to Israel’s devastation of Palestine and now Iran, putting all of West Asia in jeopardy.

The geometry of war has changed completely, from the Blitzkrieg (lightning war) of World War II to the sophisticated, technologically driven missiles of the latest armed conflicts. The most recent wars are being driven by the use of artificial intelligence (AI) to narrow down potential targets.

Multiple pieces of evidence indicate that Israeli forces have deployed novel AI-driven targeting tools in Gaza. One system, nicknamed “Lavender”, is an AI-enabled database that assigns risk scores to Gazans based on patterns in their personal data (communications, social connections) in order to identify “suspected Hamas or Islamic Jihad operatives”. Lavender reportedly flagged up to 37,000 Palestinians as potential targets early in the war.

A second system, “Where is Daddy?”, uses mobile phone location tracking to notify operators when a marked individual is at home. The initial strikes guided by these automated systems targeted individuals in their private homes on the pretext of striking terrorists, but innocent women and young children also lost their lives in these attacks. The technology was developed as a replacement for human acumen and strategy in identifying and targeting suspects.

According to a Human Rights Watch report (2024), around 70 per cent of those who lost their lives were women and children. A United Nations agency has also verified the details of 8,119 victims killed in Gaza from November 2023 to April 2024; that report showed that 44 per cent of the victims were children and 26 per cent were women. Humans are left at the mercy of sophisticated technology that identifies suspected militants and targets them.

The use of AI-based tools like “Lavender” and “Where’s Daddy?” by Israel in its war on Palestine raises serious questions about countries’ commitment to the international legal framework and the ethics of war. The use of such sophisticated AI targeting tools leaves weaker nations at the mercy of powerful ones, which can use these technologies to inflict suffering on non-combatants.

International humanitarian law (IHL) and international human rights law (IHRL) play a critical yet complex role in the context of AI use during conflicts such as the Israel-Palestine conflict. AI-based warfare of this kind violates the international legal framework’s principles of distinction, proportionality and precaution.

The AI systems do not inherently know who is a combatant. Investigations report that Lavender had an error rate on the order of 10 per cent and routinely flagged non-combatants (police, aid workers, people who merely shared a name with militants). The reported practice of pre-authorising dozens of civilian deaths per strike grossly violates the proportionality rule.

An attack is illegal if incidental civilian loss is “excessive” in relation to military gain. For example, one source noted that each kill-list target came with an allowed “collateral damage degree” (often 15–20) regardless of the specific context. Allowing such broad civilian loss per target contradicts IHL’s core balancing test (ICRC Rule 14).

The AI-driven process has eliminated normal safeguards (verification, warnings, retargeting). IHRL continues to apply alongside IHL in armed conflict contexts. In particular, the right to life (ICCPR Article 6) obliges states to prevent arbitrary killing.

The International Court of Justice has held that while the right to life remains in force during war, an “arbitrary deprivation of life” must be assessed by reference to the laws of war. In practice, this means that IHL’s rules become the benchmark for whether killings are lawful.

However, even accepting lex specialis (the principle that specific law overrides general law), the reported AI strikes raise grave human rights concerns, especially regarding the right to life (ICCPR Art. 6) and the right to privacy (ICCPR Art. 17).

The ethics of war, termed ‘jus in bello’ in legal parlance and based on the principles of proportionality (the anticipated moral cost of war) and distinction (between combatants and non-combatants), has also been violated. Article 51(5) of Additional Protocol I (1977) to the Geneva Conventions provides that an attack is disproportionate, and thus indiscriminate, if it “may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated”.

The Israel Defense Forces have been using AI indiscriminately to select targets. Although nominally aimed at militants, these strikes have extended to non-military targets as well, causing casualties among civilians and non-combatants. The methods used in a war are like a trigger: once pulled, they are extremely difficult to retract and reconcile. Such unethical action creates more fault lines and makes any subsequent attempt at peace resolution and mediation extremely difficult.

The documented features of systems like Lavender and Where’s Daddy (automated kill lists, minimal human oversight, fixed civilian casualty “quotas” and the use of imprecise munitions against suspects in their homes) appear to contravene these legal and ethical principles.

Unless rigorously constrained, such tools risk turning warfare into the arbitrary slaughter of civilians, undermining the core humanitarian goals of IHL and the ethics of war. It is therefore extremely important to rein in the unregulated use of AI in warfare, which risks perpetuating war crimes and undermines the legal and ethical considerations of humanity at large.



Building a responsible AI future: How the G7 Hiroshima AI Process is enhancing responsible AI around the globe

The AI landscape is evolving rapidly, and with the rise of agentic AI, trust has never been more critical. As businesses continue to integrate AI into their operations and customer experiences, leaders must ensure that these technologies are developed and deployed in a responsible manner. Leading with trust and responsibility is not optional. Enterprise customers require this as part of their AI adoption journey, and trust is essential to a future in which AI creates opportunities for everyone. 

Salesforce is proud to be one of the first companies to contribute to the reporting framework developed by the OECD under the G7 Hiroshima AI Process (HAIP). Voluntary frameworks like this empower organisations to prioritise ethical practices, transparency, and governance at every stage of AI development and deployment, fostering more trustworthy AI ecosystems and enhancing global alignment on best practices. 

Risk identification: Laying the foundation for trustworthy AI

An effective, responsible AI approach begins with a comprehensive strategy for risk identification and evaluation. Organisations should define and classify different types of AI-related risks, particularly those that could cause serious harm. This is especially important in enterprise settings, where AI systems are often tailored and used in various contexts.

At Salesforce, the Responsible AI and Tech (RAIT) product managers within our Office of Ethical and Humane Use (OEHU) are central to this effort. During Trusted AI reviews, RAIT product managers work closely with product teams to understand use cases, technology stacks, and intended audiences. The process involves identifying and categorising potential risks into subtypes of sociotechnical harm, as well as assessing both inherent and residual risks to provide a holistic view of potential impacts, enabling informed decision-making and effective mitigation strategies.
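
As an illustrative sketch only (hypothetical names and categories, not Salesforce's internal tooling), a review of this kind might capture each use case in a small record that distinguishes inherent from residual risk:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class RiskLevel(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class UseCaseReview:
    """Hypothetical record produced by an AI use-case risk review."""
    use_case: str
    audience: str
    harm_subtypes: List[str] = field(default_factory=list)  # sociotechnical harm categories
    inherent_risk: RiskLevel = RiskLevel.MEDIUM              # risk before mitigations
    mitigations: List[str] = field(default_factory=list)
    residual_risk: RiskLevel = RiskLevel.MEDIUM              # risk remaining after mitigations

    def needs_escalation(self) -> bool:
        # Simple illustrative rule: escalate anything still high risk after mitigation.
        return self.residual_risk is RiskLevel.HIGH


review = UseCaseReview(
    use_case="Generative email drafting for sales teams",
    audience="Enterprise end users",
    harm_subtypes=["misinformation", "privacy leakage"],
    inherent_risk=RiskLevel.HIGH,
    mitigations=["toxicity filter", "human review before send"],
    residual_risk=RiskLevel.MEDIUM,
)
print(review.needs_escalation())  # False
```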

Our AI Acceptable Use Policy provides clear guidance on the purposes for which customers are prohibited from using our AI tools. These include automated decision-making with legal consequences, predictions of an individual’s protected characteristics, and high-risk scenarios that could result in serious harm or injury.

Ongoing risk management: Protecting AI systems in real-time

Responsible AI experts must collaborate closely with product teams at all stages of the innovation process to devise effective mitigation strategies. Standardised guardrails, such as Salesforce’s “trust patterns”, can include features like mindful friction, which introduces checkpoints for thoughtful decision-making, or transparency notifications that inform users when they are interacting with AI systems.
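
As a rough illustration of these two patterns (hypothetical function names, not Salesforce's actual trust patterns), a guardrail layer might insert a confirmation checkpoint before a consequential AI-driven action and prepend an AI-disclosure notice to generated content:

```python
from typing import Callable, Optional, TypeVar

T = TypeVar("T")


def with_mindful_friction(action: Callable[[], T], description: str) -> Optional[T]:
    """Mindful friction: require explicit confirmation before a consequential AI action."""
    answer = input(f"The AI suggests: {description}. Proceed? [y/N] ")
    if answer.strip().lower() == "y":
        return action()
    return None  # action skipped; the human declined


def with_ai_disclosure(message: str) -> str:
    """Transparency notification: tell users the content was generated by AI."""
    return "[AI-generated response] " + message


# Example usage with a hypothetical refund helper:
# with_mindful_friction(lambda: issue_refund(order_id=42), "issue a USD 50 refund")
print(with_ai_disclosure("Your order has shipped."))
```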

Organisations should also establish comprehensive frameworks that protect data privacy and security throughout every stage of the product development process. Salesforce’s Trust Layer includes functionalities such as secure data handling, zero data retention, ethics by design, an audit trail, and real-time toxicity detection.
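
A minimal sketch of how such a layer could be composed, assuming placeholder call_llm and toxicity_score helpers rather than Salesforce's actual Trust Layer APIs: the wrapper screens output for toxicity and writes only hashes and metadata to the audit trail, so neither the prompt nor the response is retained.

```python
import hashlib
import logging
import time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")


def toxicity_score(text: str) -> float:
    """Placeholder scorer; a real system would call a trained toxicity classifier."""
    return 1.0 if "hateful" in text.lower() else 0.0


def call_llm(prompt: str) -> str:
    """Placeholder model call standing in for a hosted or external LLM."""
    return f"Draft reply: {prompt}"


def guarded_generate(prompt: str, toxicity_threshold: float = 0.5) -> str:
    response = call_llm(prompt)
    if toxicity_score(response) >= toxicity_threshold:
        response = "[Response withheld: flagged by toxicity detection]"
    # Audit trail records hashes and metadata only, so the raw prompt and
    # output are not retained anywhere (zero data retention).
    audit_log.info(
        "ts=%d prompt_sha256=%s response_len=%d flagged=%s",
        int(time.time()),
        hashlib.sha256(prompt.encode()).hexdigest()[:12],
        len(response),
        response.startswith("[Response withheld"),
    )
    return response


print(guarded_generate("Summarise this account's open support cases."))
```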

Finally, Salesforce has clear evidence from enterprise customers that testing products against trust and safety metrics, such as bias, privacy, and truthfulness, is an important business strategy and benefit. At Salesforce, we regularly conduct red teaming exercises, which simulate potential risks in controlled environments, to identify vulnerabilities and risks within products. Tactics like these are particularly important as autonomous agents become increasingly widespread.

Transparency reporting: Building trust through honest communication and knowledge-sharing

Transparency and honesty are core tenets of our trusted AI principles, which we augmented with our guidelines for trusted generative AI, and remain applicable to the agentic AI era. Organisations should ensure that users and stakeholders are informed about how and when AI is used. At Salesforce, we regularly share information about our product capabilities through our newsroom, blogs, and Trailhead, our free online learning platform. 

Salesforce also regularly reports on our progress in responsible AI efforts. Most recently, our Trusted AI and Agents Report explained our approach to designing and deploying AI agents. 

Furthermore, we aim to be transparent about the use of personal data. Salesforce enables customers to control how their data is used for AI. Whether using our own Salesforce-hosted models or external models within our shared trust boundary, no context is stored. The large language model forgets both the prompt and the output immediately after processing.

Organisational governance: Embedding responsible AI practices across the company

Gaining buy-in from all parts of the organisation to deliver a truly effective responsible AI approach is critical. Salesforce embeds AI risk management within its organisational governance framework through various structures and practices. The company’s trusted AI principles, first developed in 2018 and augmented for generative AI in 2023, guide responsible development and deployment, focusing on intentional design and system-level controls.

Our governance infrastructure includes:

  • The Office of Ethical and Humane Use (OEHU), which regularly interacts with the executive leadership team for policy and product review and approval. The OEHU also leads the Trusted AI Review process to identify, mitigate, and track potential risks early in development.
  • The AI Trust Council, comprising executives across various departments, aligns and speeds up decision-making for AI products.
  • The Ethical Use Advisory Council, established in 2018 with external experts and internal executives, provides strategic guidance on product and policy recommendations.
  • The Cybersecurity and Privacy Committee of the Board of Directors, which meets quarterly with the Chief Ethical and Humane Use Officer to review AI priorities.
  • The Human Rights Steering Committee, meeting quarterly, oversees the human rights program, including identifying and mitigating salient risks.

A shared commitment to responsible AI: Aligning with global standards

The future of responsible AI depends on a collective commitment to developing systems that are innovative, trustworthy, ethical, and secure. Emphasising transparency and robust governance will unlock AI’s full potential while ensuring the safety of customers and stakeholders.

The G7 HAIP reporting framework offers an effective global benchmark for responsible AI initiatives, providing a structured approach for organisations to manage the risks and benefits of AI technologies. As these frameworks gain widespread adoption, they will promote consistency in responsible AI practices, building greater trust among users and society. Salesforce is committed to working with all stakeholders and navigating this transformative AI era with trust, responsibility, and ethics guiding the way.
