Regulation Should Build Trust, Not Control, says Dr. Amar Patnaik at 6th Global AI Leadership Meet
Regulation Should Build Trust, Not Control, says Dr. Amar Patnaik at 6th Global AI Leadership Meet is a Varindia event report published on 21 February 2024. The article documents proceedings from the 6th Global AI Leadership Meet organised by ASSOCHAM in New Delhi, focusing on the theme “AI for India: Pushing Boundaries for Innovation.” The report features Sunil Abraham’s analysis of multi-layered AI governance frameworks and advocacy for open standards to protect digital rights.
Article Details
- 📰 Published in: Varindia
- 📅 Date: 21 February 2024
- 👤 Authors: Varindia
- 📄 Type: Event Report
- 🔗 Article Link: Read Online
Full Text
"The most important thing about regulation is not to control, but to build trust in the system", stated Dr. Amar Patnaik, Hon'ble Member of Parliament & Member of Parliament Standing Committee on Finance, public undertaking and subordinate legislation at the 6th Global AI Leadership Meet 2024, organised by the Associated Chambers of Commerce and Industry of India (ASSOCHAM). In an interactive session titled "AI for India: Pushing Boundaries for Innovation," leaders discussed the transformative potential of artificial intelligence as well as its potential societal impacts.
Dr. Patnaik emphasised that trust is essential for all stakeholders and for ensuring that the benefits of AI are shared fairly. He said, "AI has the potential to do a lot of good in society on different levels. Responsible AI means making sure people trust it. Regulations should find a balance between preventing harm and encouraging innovation, all with the goal of building trust. Making AI available to everyone means making sure everyone has access to the data they need. It's important to test new AI ideas in safe environments and to check how they'll affect society. India has a chance to be a leader in AI by using its unique data."
Dr. Patnaik highlighted the concentration of data in the hands of big technology companies and its impact on competition, stressing the importance of giving smaller businesses and startups access to data as well. He added that governments should encourage open data sharing and create an environment where new ideas can thrive.
He also underscored the importance of adapting AI to the specific contexts in which it is deployed, arguing that India's diversity of data and languages positions it well to do so. He urged companies to use AI responsibly and to work together towards important long-term goals.
Dr. Patnaik expressed his support for Prime Minister Narendra Modi's vision of harnessing AI for India's sustainable development. He highlighted the opportunity in aligning with this vision, emphasising the potential for India to lead globally through responsible AI practices. Dr. Patnaik pointed out that, as with its achievements in combating climate change, India could drive positive change through AI innovation and a diverse workforce.
Dr. Lovneesh Chanana, Senior Vice President & Regional Head of Government Affairs (Asia Pacific and Japan) at SAP, highlighted the transition from exploring AI possibilities to implementing practical applications, citing examples such as automated visual inspections. He also discussed the uneven global distribution of AI patents, emphasising the need for fair representation. Looking ahead, he stated that "The next phase involves integrating AI into fundamental business functions," stressing the importance of embedding AI in essential business operations and supporting initiatives to enhance AI skills. Dr. Chanana concluded by urging collaborative efforts to navigate the evolving AI landscape responsibly.
Mr. Ashutosh Chadha, Director & Country Head of Corporate Affairs & Public Policy at Microsoft India, spoke about how AI can transform daily life by making work faster and more impactful, noting that its accessibility to everyone marks a significant shift. Mr. Chadha also raised concerns about biases and risks in AI, such as racial bias and biases in creative outputs. Because AI is not confined to any one geography or sector, he argued, global rules are needed to govern it, and countries and industries should work together to build strong frameworks. He stressed that a clear plan is essential to ensure AI does more good than harm.
Mr. Xavier Kuriyan, Director of Pre-Sales at Dell Technologies, spoke about AI's potential to add a trillion dollars to India's economy by 2035. He explained Dell's strategy, which spans integration, innovation, process improvement, and collaboration with partners, and stressed the importance of deploying existing models quickly while developing local solutions step by step. He expressed optimism about India's ability to harness AI, citing the country's skilled workforce and AI's growing role in how people work, and predicted that AI will become more pervasive and accelerate innovation.
Mr. Sunil Abraham, Co-Chair of the ASSOCHAM National Council on IT/ITeS and eCommerce & Public Policy Director at Meta, observed that while AI carries risks, exaggerating them is unhelpful, and called for balanced views on regulating AI, using the example of animated drawings. Abraham outlined four levels of AI governance: corporate responsibility, industry self-regulation, government guidelines, and statutory law. He noted the continued relevance of existing laws and pointed to Meta's Casual Conversations dataset as a tool for reducing biases. Abraham advocated open standards and open science, especially for Indian government bodies, to ensure AI is used fairly and to protect digital rights.
Context and Background
This event occurred during a critical phase in India’s development of AI governance frameworks. Throughout 2023-2024, the government oscillated between promotional policies encouraging AI innovation and reactive attempts to regulate perceived harms, creating regulatory uncertainty for industry stakeholders.
ASSOCHAM’s choice of theme, “AI for India: Pushing Boundaries for Innovation”, reflected tensions between maximising AI’s economic potential and addressing societal risks. With projections suggesting AI could add a trillion dollars to India’s economy by 2035, business interests favoured light-touch regulation that would not stifle innovation. Civil society organisations countered that inadequate safeguards risked entrenching algorithmic discrimination and undermining democratic processes.
Dr. Patnaik’s emphasis on “trust-based” rather than “control-based” regulation offered a conceptual middle path, though translating this principle into concrete policy remained contentious. His identification of data concentration amongst large technology companies touched on a fundamental challenge: India’s vast data generation capabilities, stemming from over 700 million internet users, primarily benefited multinational corporations rather than domestic enterprises or public institutions.
The data access problem Patnaik highlighted had regulatory dimensions. Existing competition frameworks inadequately addressed digital platform dominance, whilst privacy protections remained fragmented across sectoral regulations. Startups and smaller businesses lacked the technical infrastructure to process large datasets even when these were theoretically available, creating structural barriers beyond legal access rights.
Sunil Abraham’s four-tiered governance model—corporate responsibility, industry self-regulation, government guidelines, and statutory law—acknowledged that different AI applications required calibrated responses. His caution against “exaggerating” AI risks likely referenced alarmist narratives about artificial general intelligence that distracted from immediate concerns like algorithmic bias in credit scoring or employment decisions.
The advocacy for open standards and open science particularly resonated with government technology initiatives. India’s Digital Public Infrastructure approach—exemplified by unified payment systems and identity frameworks—demonstrated how open architectures could enable innovation whilst maintaining public oversight. Applying similar principles to AI datasets and model development could theoretically democratise access beyond proprietary corporate ecosystems.
The “Casual Conversations” dataset from Meta that Abraham referenced aimed to train and evaluate models on diverse speakers and speech patterns, addressing concerns that AI systems predominantly reflected Western linguistic norms and cultural assumptions. For India’s multilingual context, with 22 scheduled languages and hundreds of dialects, ensuring AI systems understood regional variations presented both technical challenges and equity imperatives.
In raising concerns about racial and creative biases, Microsoft’s Chadha acknowledged that AI systems trained on historical data often perpetuated existing prejudices. Without deliberate interventions during development and deployment, these tools risked automating discrimination at scale across hiring, lending, policing, and public services.
The regulatory landscape remained fluid following this event. In March 2024, the government issued an advisory requiring explicit permission before deploying “unreliable” AI models—a directive so vaguely worded that it prompted immediate industry backlash. The subsequent withdrawal and replacement with softer guidance illustrated policymakers’ struggle to balance innovation promotion with risk mitigation, precisely the challenge Patnaik’s trust-based framework sought to address.