
Ofcom Investigates Telegram, Teen Chat, and Chat Avenue Under UK Online Safety Act
Tags: Ofcom investigation · Online Safety Act · Telegram compliance · child safety regulations UK · CSAM · content moderation


AIGovHub Editorial · April 26, 2026

What Happened

On [date of article], the UK's communications regulator Ofcom announced investigations into three platforms — Telegram, Teen Chat, and Chat Avenue — for potential violations of the Online Safety Act. The Telegram probe focuses on allegations that the platform facilitated the sharing of child sexual abuse material (CSAM), based on evidence from the Canadian Centre for Child Protection. Ofcom's assessment determined an investigation was warranted to examine whether Telegram is meeting its legal obligations to monitor and address risks of child sexual exploitation.

Separately, Ofcom is investigating Teen Chat and Chat Avenue over concerns that these platforms are not adequately protecting children from grooming and exposure to harmful content like pornography. The regulator cited feedback from child protection agencies and expressed dissatisfaction with the platforms' responses about their safety measures.

Telegram denies the allegations, saying it has virtually eliminated the public spread of CSAM through detection algorithms and cooperation with NGOs. Chat Avenue defended its safety measures and disputed that grooming is prevalent on its platform.

Why It Matters

The investigations mark a significant step in enforcement of the Online Safety Act, which imposes strict duties on user-to-user services to protect children from illegal content and activity. Under the Act, platforms must conduct risk assessments, implement proportionate safety measures, and provide clear reporting mechanisms. Ofcom has enforcement powers including fines of up to £18 million or 10% of qualifying worldwide revenue, whichever is greater, and in serious cases can seek court orders to disrupt services by requiring third parties to withdraw payment, advertising, or internet access in the UK.

These cases highlight that regulators are scrutinizing not only major social media platforms but also smaller niche services. The broader implication for all tech platforms is clear: robust content moderation, age verification, and reporting mechanisms are no longer optional. The Telegram compliance case also raises questions about how platforms balance content moderation with privacy and free expression claims.

Ofcom is also investigating X (formerly Twitter) regarding nonconsensual sexually explicit content generated using its Grok AI chatbot, signaling that AI-generated content is firmly in regulators' sights.

What Organizations Should Do

Platforms operating in the UK — especially those with user-generated content or direct messaging — should take immediate steps to align with the Online Safety Act:

  • Conduct thorough risk assessments for illegal content, particularly CSAM and child sexual exploitation.
  • Implement robust content moderation using automated detection tools and human review, with clear escalation paths for flagged content.
  • Deploy age verification to prevent children from accessing age-inappropriate content or being exposed to grooming risks.
  • Establish transparent reporting mechanisms for users and child protection agencies, with timely responses.
  • Monitor regulatory developments across jurisdictions to anticipate enforcement trends.
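The detection-plus-escalation pattern in the second bullet can be sketched in a few lines. This is a hypothetical illustration only: production systems match against vetted industry hash lists using perceptual hashing (e.g. PhotoDNA), not the plain SHA-256 stand-in used here, and `looks_suspicious` is a placeholder for a classifier score or user-report signal.

```python
import hashlib
from dataclasses import dataclass, field

# Illustrative stand-in for a vetted hash list of known illegal content.
KNOWN_BAD_HASHES = {hashlib.sha256(b"known-bad-sample").hexdigest()}

def looks_suspicious(content: bytes) -> bool:
    # Placeholder for an ML classifier or user-report signal.
    return b"report-flag" in content

@dataclass
class ModerationQueue:
    escalated: list = field(default_factory=list)  # items awaiting human review

    def process(self, content: bytes) -> str:
        digest = hashlib.sha256(content).hexdigest()
        if digest in KNOWN_BAD_HASHES:
            return "block"          # exact hash match: remove immediately
        if looks_suspicious(content):
            self.escalated.append(digest)
            return "escalate"       # ambiguous: route to trained human reviewers
        return "allow"

queue = ModerationQueue()
queue.process(b"known-bad-sample")              # blocked on hash match
queue.process(b"report-flag: user complaint")   # escalated for human review
queue.process(b"benign message")                # allowed
```

The key design point, mirrored in Ofcom's expectations, is that automated detection never operates alone: anything the automation cannot decide with confidence lands in a human-review queue with a clear escalation path.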

To stay ahead of regulatory risks, compliance teams can leverage tools like AIGovHub's SENTINEL module, which provides real-time geopolitical and regulatory intelligence across 435+ sources, including Ofcom announcements and child safety regulations. SENTINEL's financial crime and sanctions screening capabilities also help platforms vet users and content against global watchlists.

Related Resources

  • TikTok DSA Breach: AI Governance Lessons for Social Platforms
  • AI Safety Incidents 2026: Governance Gaps Exposed
  • EU AI Act Compliance Roadmap: Implementation Guide

This content is for informational purposes only and does not constitute legal advice.