Logical (Critical) Reasoning Questions for CLAT | QB Set 89

On February 4, 2026, three sisters, aged 12, 14 and 16, ended their lives in Ghaziabad, Uttar Pradesh, leaving behind their family and a country struggling to comprehend the horror. Preliminary police reports suggest it was a case of screen addiction compounded by parental conflict. Politicians, parents and pundits have united in demanding swift action. The sentiment is understandable. When a child dies, we want someone to blame and, sometimes, something to ban. But beneath the fury lies a dangerous impulse: to solve a complex problem with a blunt instrument that absolves platforms of accountability while stripping young people of their digital rights.

The evidence linking heavy social media use to harms in adolescent mental health is beyond speculation. While a few outliers exist in the scholarly literature, many meta-analyses and systematic reviews identify small but consistent associations between heavy social media use and increased anxiety, depressive symptoms, self-harm and body image dissatisfaction among teenagers, particularly girls. While most of these studies have not been conducted in India, they still serve as a note of caution on the effects of social media use.

An approach that will not work in India

The tragedy in Ghaziabad has coincided with a crescendo of government anxiety and regulatory intervention across the globe. Australia has a targeted ban, which many in India now point to as a template. In 2024, Australia passed a law prohibiting anyone under the age of 16 from holding accounts on 10 major platforms, including Instagram, TikTok, YouTube, Snapchat and X, enforced through mandatory age verification and backed by fines of up to A$50 million. The law came into force on December 10, 2025, making Australia the first country to truly pull the plug on under-16 social media accounts.

On February 3, 2026, the Prime Minister of Spain, Pedro Sánchez, announced plans to ban social media for those under 16, vowing to “protect them from the digital Wild West” and to hold executives criminally liable for algorithmic amplification of hate. These are emotionally satisfying responses. They also bear the familiar fingerprints of a moral panic. As Stanley Cohen showed more than 50 years ago, when society fails to solve complex social problems, it fixes blame on vilified “folk devils” and responds with disproportionate, symbolic crackdowns. For India to copy-paste this approach would be disastrous for four distinct reasons.

First, bans are technically porous and difficult to implement even if outsourced to social media companies themselves. Adolescents are often more digitally literate than the legislators regulating them. As seen in jurisdictions with strict age-gating, bans invariably trigger a mass migration to Virtual Private Networks (VPNs) or, worse, push young users from regulated platforms such as Instagram to encrypted, unmoderated corners of the dark web where grooming and extremism thrive unchecked. Some forms of enforcement, if linked to identity verification, may also pose the risk of connecting every social media account with a government ID, creating a mass surveillance framework.

Second, a blanket ban ignores the complexity of adolescent development. As noted by the National Society for the Prevention of Cruelty to Children and some child rights bodies, social media is also a lifeline. For rural adolescents, urban slum dwellers, queer and differently-abled teens seeking peer support, these platforms are often their only window to a community where they feel seen.

Third, this approach suffers from a severe democratic deficit. In India, there is a chronic habit of making policy for young people without ever speaking to them. Have we asked what they would like?

Fourth, and most importantly, a social media ban will certainly calcify gendered social inequalities, preventing adolescents from lower-income households, particularly young girls, from using the Internet for social mobility and for charting their futures. Data from the National Sample Survey show that only 33.3% of women in India reported having ever used the Internet, compared to 57.1% of men. In patriarchal settings, where female Internet access is already viewed with suspicion, a government mandate to “police” age is likely to result in families simply confiscating the device entirely from young girls.

What can be done

What, then, is the alternative? First, the government must abandon its addiction to censorship. It must stop relying on the blunt instrument of bans or centralising every government response within the “notice and takedown” regime of the IT Act, 2000. Instead, it must directly confront the economic power and technical architecture of Big Tech. We urgently need a sophisticated menu of legislative tools that include a robust digital competition law and legally enforceable “duty of care” obligations towards minors, with provisions for monetary penalties. Crucially, these must be enforced by an independent, expert regulator, not by the bureaucracy of the Ministry of Electronics and Information Technology that lacks expertise and is susceptible to political influence.

Second, India needs serious public funding for surveys and longitudinal research on how social media actually shapes children’s well-being locally, across class, gender, caste and region. Young people must be at the centre of this policy process — from the design of the surveys to being active participants who shape their findings. We have already seen the folly of ignoring them. The Digital Personal Data Protection Act, 2023, with its poorly designed “consent gating”, will result either in false declarations or exclusion.

The issue of regulation

Finally, we should ask why our moral outrage is uniquely limited to social media. Do these issues not also exist with Artificial Intelligence (AI) chatbots and their integration with social media platforms? Early research already links heavy AI use to a “cognitive debt” that leads to weaker critical thinking. Relatedly, young people are already using generative AI tools for emotional and mental health advice. Recent reporting and litigation have highlighted serious child-safety failures in conversational AI systems, including sexualised interactions with minors and alleged links to self-harm and suicide. If the concern is about harm to children, regulation has to be consistent, and our failure to even consider AI regulation must be examined.

In the end, a ban might offer the comforting illusion of control — a way for our politicians to show they “did something” after the latest tragedy. But the price would be paid by the very young people whose rights and futures are ostensibly being defended. As media scholar Neil Postman, who began his career as a public schoolteacher, noted, “I am not pro, or anti, technology. That would be stupid. For that would be like being pro, or anti, food.”

The lesson for us as adults is to provide a healthy media ecology for our children rather than taking social media completely off the table. This is tougher work than a ban. It requires us to confront our dissonance over the doctrine that tech-driven innovation is exempt from regulation, where on one day we demonise social media and on another, worship AI.

(Source: The Hindu)

Q1.

After the Ghaziabad tragedy, a State government proposes a blanket ban on under-16 social media accounts, enforced through mandatory age verification linked to an official ID. Civil liberties groups argue this will create a surveillance pipeline, while the government says it is the only “swift” solution. Based on the passage, which is the strongest critique of this policy?

A. The ban will immediately reduce teen anxiety, so the critique is misplaced.
B. The policy is justified because other democracies have already adopted it successfully.
C. The ban is technically porous and may push adolescents to VPNs or unmoderated spaces, while ID-linking risks mass surveillance.
D. The policy is wrong only because it ignores that social media has no proven harms.

Answer: C

Q2.

A school principal argues: “If social media is harming adolescents, the easiest solution is to remove it completely. If the ban works for some, it should work for all.” A student responds: “Bans don’t solve the root problem; they just look decisive.” Which option best captures the student’s reasoning in line with the passage?

A. The ban will fail because teenagers will never follow any rules.
B. Social media harms are exaggerated and only apply outside India.
C. A ban is a blunt, symbolic response to a complex issue that may reduce accountability of platforms while harming young people’s rights.
D. The ban will succeed only if schools enforce it strictly.

Answer: C

Q3.

A parliamentary committee is considering two proposals:
(1) Ban under-16 accounts across major platforms.
(2) Introduce a “duty of care” regime for platforms with penalties, enforced by an independent expert regulator.
Which option best reflects what the passage would most likely support?

A. Proposal (2), because it targets platform architecture and accountability rather than relying on bans.
B. Proposal (1), because removing access is the most direct way to prevent harm.
C. Proposal (1), because global evidence proves bans are the only effective solution.
D. Neither, because the passage rejects any regulation of technology.

Answer: A

Q4.

A district official notes that in several villages, families already restrict girls’ phone access. After a government announcement about under-16 “policing,” many parents decide to confiscate phones from adolescent girls entirely “to avoid trouble.” Which conclusion is most consistent with the passage?

A. The policy is fair because it treats boys and girls equally under the law.
B. The policy will improve digital literacy by discouraging internet use among minors.
C. The policy will likely reduce online harms because it prevents all access.
D. A ban may intensify existing gender inequalities by disproportionately restricting girls’ internet access and social mobility.

Answer: D

Q5.

After public outrage over social media harms, a regulator proposes strict rules only for social media. Separately, schools report students increasingly using AI chatbots for emotional advice, and a civil society group flags child-safety failures in conversational AI. Based on the passage, what is the best critique of regulating only social media?

A. AI tools are always safer because they are not “social.”
B. Regulation should be consistent across technologies that can harm children, including AI systems integrated into or used alongside social platforms.
C. Regulating AI first will automatically fix social media harms.
D. Since evidence is uncertain, no regulation is needed anywhere.

Answer: B


