By Devank Maheshwari, Sangeeth Narayanan and Nikhil Anand
Any issue pertaining to the legal regulation of technology must necessarily contend with the Collingridge Dilemma. First articulated by David Collingridge in his 1980 book The Social Control of Technology, it postulates that any regulation of technology must deal with a double bind:
- the Information Problem: the impacts of a technology cannot be reliably predicted until it has been extensively developed and widely used; and
- the Power Problem: by the time those impacts become apparent, the technology is so entrenched that controlling or changing it is difficult, expensive and slow.
In the context of technology platforms such as social media in India, it can safely be assumed that we are well past the Information Problem. Social media companies have been operating in India for well over two decades, and the harms of social media are no longer abstract risks; they are lived realities – exposure to graphic content, cyberbullying, exploitative grooming, algorithmic manipulation, digital addiction, declining attention spans, excessive social comparison, and the apparent erosion of mental wellbeing.
These issues are partly caused by the very design of social media platforms, which is centred on ‘engagement’. Social media algorithms are optimized to maximize time spent, reactions, shares, and interactions through features such as infinite scroll, engagement-driven feeds, and gamification mechanics, all of which encourage compulsive use.
As far as the Power Problem of the Collingridge Dilemma is concerned, it is now a well-accepted position that social media is ever more deeply entrenched in the social and psychological aspects of our lives. We live in an era where digital engagement starts in early childhood, and online interactions are central to adolescent socialization.
The omnipresence of social media reinforces the constitutional duty on the part of the state to intervene to protect children from any potential harm caused by social media.1 Any social media regulation should endeavour to course-correct and rebalance the patent asymmetry between powerful social media platforms and their vulnerable users, while ensuring that the measures adopted are reasonable, proportionate and deferential to fundamental rights.
Children’s online safety and age restrictions for social media in India remain at a nascent and transitional stage. Currently, there is no law in force in India that either regulates minors’ access to social media platforms or provides enforceable safeguards for their personal data. The Digital Personal Data Protection Act, 2023 (“DPDP Act”), though enacted, has not yet been notified, and the corresponding Digital Personal Data Protection Rules, 2025 (“Draft DPDP Rules”) are still at the draft stage, yet to be notified by the Central Government.
Nevertheless, it is important to consider the proposed regulatory framework on cyber safety for children under the DPDP Act and the Draft DPDP Rules (“DPDP Framework”).
The DPDP Act provides the overarching legal framework for digital personal data protection, including data pertaining to children. In terms of Section 2(f) of the DPDP Act, ‘child’ is defined as “an individual who has not completed the age of eighteen years”, thereby effectively designating 18 years as the age of digital majority. Section 9 of the DPDP Act deals with processing of personal data of children.
In terms of Section 9(1), Data Fiduciaries (that would include social media intermediaries)2 must obtain verifiable consent from a parent or lawful guardian before processing a child’s personal data. Further, Section 9(2) prohibits Data Fiduciaries from processing personal data in a manner likely to cause detrimental effects on a child’s well-being. Similarly, Section 9(3) explicitly forbids Data Fiduciaries from engaging in tracking, behavioural monitoring, or targeted advertising directed at children.
While the DPDP Act prescribes no express minimum age for social media access, it achieves a functional equivalence by indirectly establishing an age threshold: verifiable parental consent is required before the personal data of any individual under 18 years can be processed. Since social media intermediaries typically determine the purpose and means of processing personal data, such as names, pictures, geolocation or behavioural information, Section 9 of the DPDP Act de facto bars minors from using such platforms without parental approval.
The DPDP Act, however, stops short of prescribing how platforms should implement age assurance measures. Guidance on obtaining consent from a parent or guardian can be found under Rule 10 of the Draft DPDP Rules.
The Draft DPDP Rules require Data Fiduciaries, including social media intermediaries, to verify that the individual providing consent is the child’s parent or lawful guardian and is an adult. Draft Rule 10 provides two specific ways to verify the parent’s identity and age:
- reliance on identity and age details of the parent or guardian already available with the Data Fiduciary; or
- acceptance of voluntarily provided identity and age details, or a virtual token mapped to them (for instance, Aadhaar-based credentials issued through DigiLocker).
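Since the Rules prescribe no technical implementation, the following is only a minimal sketch of how a Data Fiduciary might operationalise this check; the data model and function names are assumptions for illustration, not anything mandated by the DPDP framework:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class GuardianDetails:
    # The two routes contemplated by draft Rule 10: details already held
    # by the platform, or voluntarily furnished details / a virtual token.
    date_of_birth: date
    verified_via: str  # "existing_records" or "virtual_token"

def is_adult(dob: date, as_of: date) -> bool:
    """True if the individual has completed 18 years as of `as_of`."""
    years = as_of.year - dob.year - ((as_of.month, as_of.day) < (dob.month, dob.day))
    return years >= 18

def guardian_consent_is_verifiable(guardian: GuardianDetails, today: date) -> bool:
    """Check the two conditions draft Rule 10 points to: the identity and
    age details come through one of the two permitted routes, and the
    consenting individual is an adult."""
    if guardian.verified_via not in ("existing_records", "virtual_token"):
        return False
    return is_adult(guardian.date_of_birth, today)

# Example: a guardian born in 1990, verified via a virtual token.
guardian = GuardianDetails(date_of_birth=date(1990, 5, 1), verified_via="virtual_token")
print(guardian_consent_is_verifiable(guardian, date.today()))  # True
```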
As noted, the specific technological methods for implementing this are left to the discretion of the Data Fiduciaries. The DPDP Act prescribes a stringent penalty for non-compliance with Section 9: as per Section 33 read with Schedule 1, Entry 3 of the DPDP Act, a Data Fiduciary found in breach of its obligations under Section 9 may be liable to a penalty of up to Rs. 200 crore. Such a penalty can be imposed by the Data Protection Board of India, established by the Central Government under Section 18 of the DPDP Act.
To sum up, the DPDP framework provides the following:
- 18 years as the age of digital majority;
- mandatory verifiable consent of a parent or lawful guardian before a child’s personal data is processed;
- a prohibition on processing likely to cause detrimental effects on a child’s well-being;
- a prohibition on tracking, behavioural monitoring and targeted advertising directed at children;
- two broad methods under draft Rule 10 for verifying the consenting parent’s identity and age, with the technical measures left to the Data Fiduciary; and
- penalties of up to Rs. 200 crore, imposable by the Data Protection Board of India, for breach of these obligations.
Some of the major social media companies, by virtue of being based in the US, follow a minimum age of 13 years under their corporate terms of service, a threshold originating from the Children’s Online Privacy Protection Act, 1998 (“COPPA”). Since this obligation flows from a foreign law, ensuring compliance in India is impractical. Moreover, these platforms rely on users self-declaring their age, as COPPA contains no mandate requiring proactive verification of the age of all users.
Perhaps the biggest implication of the parental consent requirement under the DPDP Act is the potential for broad-based age-gating of the platforms operated by the Data Fiduciaries, and by extension, a significant portion of the internet. In practice, if platforms are required to ensure that users below 18 years have obtained verifiable parental consent, they may resort to requiring all users to affirmatively prove adulthood, likely through identity documents or linkage with government IDs for general access.
This could result not only in widespread age-gating of the internet, but also in the gradual erosion of online anonymity. This outcome, whether intended or incidental, risks pitting the issue of child safety against privacy concerns. If this consequence is allowed, it would be a textbook instance of ‘function creep’ whereby mechanisms originally and purportedly designed for protecting children might evolve into a wider digital identity verification or surveillance architecture.
The DPDP Act does not set a specific minimum age for children to access social media or digital services. In this context, it becomes important to juxtapose this ‘all-or-nothing’ approach against the recent social media ban for children in Australia. Australia’s Online Safety Act, 2021, as amended by the Online Safety Amendment (Social Media Minimum Age) Act, 2024 (“OSA”), carves out a separate category of ‘age-restricted social media platforms’3 that are required to take ‘reasonable steps’ to prevent users under 16 years of age from holding accounts.4
Unlike the Australian OSA, which provides for a categorical age ban on accessing social media, the Indian position of not setting any fixed minimum age (aside from requiring parental consent for all minors) may thus be seen as a policy gap. Effectively, children (below 18) of all ages, whether a toddler or a 17-year-old, are treated alike with respect to digital consent. This implies that once a parent or guardian consents to the processing of a child’s personal data, or if the consent is fraudulently obtained or manipulated, a child of any age can access social media.
This uniform approach effectively shifts the burden of gatekeeping to parents, despite the well-documented limitations of parental awareness and digital literacy. Tethering access to parental consent, without differentiating on the basis of age, may lead to practical limitations in the protection of children.
It has been widely suggested that India could adopt a more nuanced, tiered regulatory approach: a categorical prohibition on social media access for children below, say, the age of 16, akin to the recent Australian model, while simultaneously mandating verifiable parental consent for users in the 16 to 18 age bracket.
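Expressed as a simple decision rule, the proposed tiered model might look like the sketch below. This illustrates the policy proposal, not any enacted law; the thresholds merely follow the suggestion above:

```python
def access_decision(age: int, has_parental_consent: bool) -> str:
    """Tiered model: under-16s barred outright (Australian-style),
    16- and 17-year-olds admitted only with verifiable parental
    consent, adults unrestricted."""
    if age < 16:
        return "denied"
    if age < 18:
        return "allowed" if has_parental_consent else "denied"
    return "allowed"

print(access_decision(14, True))   # denied: categorical prohibition
print(access_decision(17, False))  # denied: consent missing
print(access_decision(17, True))   # allowed
print(access_decision(25, False))  # allowed
```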
Section 9 of the DPDP Act read with Rule 10 of the Draft DPDP Rules requires obtaining ‘verifiable consent’ from the parent or guardian. Further, Rule 10 provides two methods of obtaining verifiable consent – reliance on identity details of the guardian already held by the social media intermediary, or acceptance of voluntarily provided identity and age details or a virtual token. The finer technical and organizational measures for complying with the above have been left to the Data Fiduciaries.
Despite the above provisions, significant weaknesses have been pointed out in the methodology for verifiable consent. Firstly, the DPDP framework does not prescribe a standard procedure for age verification of users. Notably, Rule 10 of the Draft DPDP Rules only calls for appropriate measures and stops short of specifying whether every user’s age must be verified or how social media intermediaries should flag accounts created by children. This ambiguity may pose a practical challenge, as platforms may simply resort to self-declaration of age by users.
On the other hand, if the platforms demand age verification (for instance by requiring government issued ID cards or linkage with DigiLocker), it could lead to excessive collection of personal data from adults and create new privacy risks. Data Fiduciaries should not be forced to choose between weak verification that undermines age-gating and overly burdensome requirements that raise privacy concerns.
The second issue is one of equity. The verification burden placed on parents and guardians under the DPDP Act presupposes ‘a level playing field of digital literacy’. This is far from the reality. Given the gaps in digital literacy in India, children from underprivileged backgrounds may effectively be locked out of social media because their parents are unable to complete the digital consent process.
Thirdly, even when parents are willing and competent to give digital consent, there is no guidance in the DPDP framework on verifying the parent-child or guardian-ward relationship. The Draft DPDP Rules are confined to confirming the identity and age of the parent. Platforms may therefore have no reliable method to ascertain such relationships unless thorough checks are carried out, presumably using government-issued ID cards or other documents (such as birth certificates or passports). This again leads to a difficult dilemma: social media intermediaries must either carry out extensive verification / KYC processes that raise privacy concerns for parents and children alike, or adopt a light-touch verification process that invites false consents and weak implementation.
In this context, it is also important to refer to the recent amendment to the Australian OSA that explicitly prohibits social media platforms from collecting government-issued identification materials for age verification purposes5. This has been widely hailed as a progressive move that respects privacy concerns. However, Australian policymakers are still exploring technological options to verify age without recourse to government-issued ID cards, and age verification technology trials are currently ongoing. Some of the potential technological solutions suggested are:
- facial age estimation, which infers an age range from a live image without identifying the user;
- age inference from behavioural and account signals already available to the platform; and
- tokenised or ‘double-blind’ verification through accredited third parties, under which the platform receives only an age attestation rather than any identity document.
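To make the tokenised route concrete, here is a minimal sketch of how a platform might verify a signed age attestation issued by an accredited third party, learning only a boolean age claim and never seeing an identity document. The token format, the issuer and the shared secret are assumptions for illustration; a production scheme would typically use public-key signatures and an accredited issuer:

```python
import base64
import hashlib
import hmac
import json

# Hypothetical secret shared with an accredited age-verification provider.
ISSUER_SECRET = b"example-shared-secret"

def verify_age_token(token: str, claim: str = "over_16") -> bool:
    """Verify a token of the form '<base64 payload>.<base64 HMAC>'.
    The platform learns only the boolean age claim, never the user's
    name, date of birth or ID number."""
    try:
        payload_b64, sig_b64 = token.split(".")
        payload = base64.urlsafe_b64decode(payload_b64)
        expected = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, base64.urlsafe_b64decode(sig_b64)):
            return False  # signature mismatch: token forged or tampered with
        return json.loads(payload).get(claim) is True
    except (ValueError, json.JSONDecodeError):
        return False  # malformed token

# Example issuance by the (hypothetical) verifier, then verification:
payload = json.dumps({"over_16": True}).encode()
token = (base64.urlsafe_b64encode(payload).decode() + "." +
         base64.urlsafe_b64encode(
             hmac.new(ISSUER_SECRET, payload, hashlib.sha256).digest()).decode())
print(verify_age_token(token))  # True
```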
India should invest in and test privacy-preserving age verification technologies. Considering that the data protection framework in India is still at a formative stage, there is a need for a long-term, deliberative and tech-informed policymaking process to ensure that child safety measures are not only effective but also constitutionally compliant. Major social media companies should be engaged in this process, since the technological verification measures will ultimately be integrated into their platforms.
India has millions of minors on different social media platforms, and the scale of enforcement therefore assumes significance: ensuring parental consent at this scale is globally unprecedented. It would be a herculean task for the Data Protection Board to effectively monitor all Data Fiduciaries and ensure that age-screening is implemented.
Although the DPDP Act provides for penalties of up to Rs. 200 crore for breach of obligations pertaining to children’s data, strong enforcement requires a liability framework linked to well-defined standards of implementation. This will also go a long way in preventing a situation wherein Data Fiduciaries play a ‘wait and see’ game to gauge the level of compliance expected of them.
It would be incumbent upon the yet-to-be-constituted Data Protection Board of India to devise clear, unambiguous and verifiable standards of performance for Data Fiduciaries. As noted, Rule 10 of the Draft DPDP Rules requires ‘appropriate’ measures but leaves the choice of method largely to the Data Fiduciary. Similarly, Section 9 of the DPDP Act requires that the processing of children’s data not be detrimental to their ‘well-being’. For effective implementation, the Data Protection Board of India must resolve questions such as what constitutes the ‘well-being’ of a child and what ‘appropriate’ measures for the cyber safety of children look like.
It is also essential that implementation is carried out not just through post-facto penalties but also through continuous supervision and guidance. It has been widely recommended that the DPDP framework include provisions mandating a compliance audit mechanism specific to Section 9 of the DPDP Act and Rule 10 of the Draft DPDP Rules.
The enactment of the DPDP Act and the accompanying Draft Rules marks a critical turning point in India’s approach to online safety for children. For too long, India has operated in a regulatory vacuum, leaving millions of minors exposed to the rapidly evolving harms of unregulated digital engagement. The DPDP framework, though still in its nascent and transitional phase, signals a shift from regulatory silence to active statutory recognition of the need to safeguard children in the digital ecosystem.
However, it must also be recognized that gaps remain. The absence of a categorical minimum age for social media access, lack of classification by age group, and ambiguities in implementing verifiable parental consent create significant challenges in enforcement. Equally important is the need to complement the age-gating obligations with second-order legal safeguards. Instruments such as the Information Technology Act, 2000, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, and the DPDP Act must be read and implemented in harmony. In particular, the 2021 Intermediary Guidelines provide complementary obligations by requiring intermediaries to prevent the hosting of content harmful to children and ensure effective grievance redressal mechanisms.
Finally, the insights of the Collingridge Dilemma must serve as a cautionary guide. India must not fall into the trap of regulatory latency. While social media platforms are now deeply entrenched, even amongst younger age groups, this should not preclude meaningful intervention. Timely, well-calibrated and constitutionally sound regulatory interventions will go a long way in ensuring that the digital world is not a place of risk, but one of opportunity, especially for children.
There is currently no fixed minimum age specified under Indian law for accessing social media platforms. However, the Digital Personal Data Protection Act, 2023 indirectly creates an age barrier by requiring Data Fiduciaries, including social media platforms, to obtain verifiable parental consent before processing the personal data of individuals under the age of 18. As a result, access to social media for minors is effectively dependent on the consent of parents or lawful guardians.
While the DPDP Act does not explicitly prohibit minors from using social media platforms, it imposes a mandatory requirement for obtaining parental consent before processing their personal data. This consent requirement serves as a functional restriction, placing the control of children’s access to digital services in the hands of their parents or guardians.
Under Rule 10 of the Draft DPDP Rules, social media platforms can verify a parent’s identity and age through either existing identity data already stored with the platform or voluntarily provided documents or digital tokens, such as Aadhaar-based credentials via DigiLocker. These verification methods apply whether or not the parent is a registered user. However, there is no legal requirement for platforms to confirm the parental relationship through supporting documentation, such as birth certificates, which may raise implementation challenges.
The DPDP Act prescribes a significant penalty for failure to comply with child data protection provisions. A platform that fails to obtain verifiable parental consent or violates any obligation related to processing a child’s personal data may face a financial penalty of up to Rs. 200 crore, which can be imposed by the Data Protection Board of India.
At present, neither the DPDP Act nor the Draft DPDP Rules explicitly propose the use of advanced age verification technologies for social media platforms. The law leaves the choice of verification measures to the discretion of the platforms, and there is no indication of a government-led push towards mandatory technology-based age assurance.
India’s approach allows children of any age to access social media, provided that parental consent is obtained for data processing. This is broadly similar to the US Children’s Online Privacy Protection Act (COPPA), which requires parental consent for children under 13, and to the European Union’s GDPR, which sets the default age at 16 but allows member states to lower it to 13. In contrast, Australia’s Online Safety Act adopts a more restrictive model by enforcing a clear minimum age of 16 for accessing certain social media platforms.
Implementing strict age-gating mechanisms may require platforms to collect identity proofs from users, which could lead to overcollection of personal data and significantly erode online anonymity. This raises privacy concerns not just for children but also for adults who may be required to verify their age to access general services.
Since Section 9 of the DPDP Act requires platforms to distinguish between adults and minors to trigger parental consent, platforms may choose to collect identity or age verification from all users at the login stage. This practice could undermine user anonymity and increase the risk of surveillance, thereby affecting the broader privacy landscape for all internet users.
The current legal framework does not specify how a platform should confirm that the individual giving consent is indeed the parent or lawful guardian of the child. Without requiring documents such as birth certificates or other official records, platforms may find it difficult to reliably verify parental relationships, weakening the intended protection mechanism.
The DPDP Act explicitly prohibits Data Fiduciaries from engaging in tracking, behavioural monitoring, or targeted advertising aimed at children. These restrictions are designed to safeguard minors from manipulative or harmful online practices that exploit their data.