Google’s 2024 Ads Safety Report has unveiled the scale of its AI-powered crackdown on harmful online content: blocking 5.1 billion ads, restricting 9.1 billion more, and suspending over 39 million advertiser accounts. The report underscores how artificial intelligence has become Google’s first line of defense in the escalating battle against digital scams.
Online fraudsters are growing more sophisticated, with many now using AI-generated content and impersonating public figures to mislead users. Google’s Gemini-powered detection models are identifying these threats faster than ever, flagging fake businesses, stolen payment data, and coordinated scam campaigns before they reach the public.
Africa, particularly Nigeria, remains a high-risk zone for impersonation scams and misleading political ads. In response, Google updated its Misrepresentation Policy and deployed over 100 global experts. The result? A 90% drop in reported impersonation scams, with more than 700,000 scam-linked advertiser accounts removed from the platform.
With nearly half the world heading to the polls in 2024, Google also removed over 10 million election-related ads that violated its transparency standards, including requirements for verified advertiser identities and clear sponsor disclosures.
Alex Rodriguez, Google’s General Manager for Ads Safety, said the results highlight the power of AI when used responsibly. “We rolled out more than 50 AI model upgrades in 2024, helping us act faster and smarter—stopping threats before users even saw them,” he stated.
While AI handles broad-scale enforcement, Google says human reviewers are now focused on more complex cases. The tech giant continues collaborating with regulators, industry leaders, and organizations like the Global Anti-Scam Alliance to stay ahead of ever-evolving digital threats.