Google has filed a lawsuit against SerpApi for evading security measures and taking protected content that appears in search results. The action, announced on December 19, 2025, aims to stop bots that, according to Google, ignore site rules and resell information that doesn’t belong to them.
What happened
Google accuses SerpApi of using techniques to bypass controls and mass-extract content from web pages that appear in Search. The complaint says those bots disguise themselves, constantly change their identity, and rely on massive networks to access content despite restrictions set by sites and by Google.
You might ask: isn't this just web crawling? Not exactly. Google draws a line between legitimate crawlers that follow protocols and respect site directives, and stealthy scrapers that ignore those rules and commercialize information without permission.
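To make that distinction concrete, here is a minimal Python sketch of what a well-behaved crawler does before fetching a page: it reads the site's robots.txt and honors the answer. The site URL and user-agent name are hypothetical placeholders, not tied to any party in the case.

```python
# A sketch of the check a compliant crawler performs before fetching a page.
# The site and user-agent below are hypothetical placeholders.
from urllib.robotparser import RobotFileParser

SITE = "https://example.com"
USER_AGENT = "ExampleBot"

robots = RobotFileParser()
robots.set_url(f"{SITE}/robots.txt")
robots.read()  # download and parse the site's robots.txt

url = f"{SITE}/some-page"
if robots.can_fetch(USER_AGENT, url):
    print(f"Allowed to fetch {url}")      # a compliant crawler proceeds
else:
    print(f"Disallowed: skipping {url}")  # ...and backs off when told no
```

The scrapers described in the complaint, by contrast, are accused of ignoring or circumventing exactly this kind of signal.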
Why it matters
When a service takes images, real-time data, or snippets that Google or other providers license and then resells them, many parties are affected: creators, publishers, data providers, and end users. It's about rights, about sites choosing who can use their content, and about the sustainability of businesses that depend on that revenue.
Also, this kind of abuse can degrade user experience: overloaded servers, duplicated results across third-party services, and loss of control over how information is presented. You may not notice it immediately, but it changes who benefits from the content you rely on every day.
How the scraping works, according to Google
- Cloaking: the scrapers pretend to be normal users or benign crawlers.
- Identifier rotation: they constantly change the identifiers they present in order to evade blocks.
- Request bombardment: massive bot networks that consume resources and extract data at scale.
According to Google, SerpApi takes content that Google licenses and resells it, disregarding websites' rights and directives.
It's worth noting that Google says it took other steps before filing the lawsuit, and it frames this action as part of a longer history of litigation to curb web abuse.
What Google is seeking with the lawsuit
Primarily to stop the activity: a court order requiring the scraping operation to cease, because, according to the company, it violates site rules and content owners' rights. It's not just a technical fight; the lawsuit is a legal measure of last resort, used when technical protections fail.
What this means for sites and users
For content owners: it's a signal that companies that index and commercialize others' content could face legal consequences. For users: the immediate impact may be small, but in the medium term the outcome will shape who controls and monetizes the content they rely on.
If you run a site, it's worth reviewing your robots.txt, headers, and other directives, and considering extra controls like IP rate limits, scraping-pattern detection, and data-provider agreements.
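As a rough illustration of one such control, here is a minimal per-IP rate-limiting sketch for a Python (Flask) app. The window length and request cap are arbitrary illustrative values, and in production this kind of throttling is usually enforced at the reverse proxy or CDN rather than in application code.

```python
# A minimal sketch of per-IP rate limiting with a sliding window.
# Values are illustrative; real limits depend on your traffic profile.
import time
from collections import defaultdict, deque

from flask import Flask, abort, request

app = Flask(__name__)

WINDOW_SECONDS = 60         # length of the sliding window
MAX_REQUESTS = 120          # max requests per IP within the window
_hits = defaultdict(deque)  # client IP -> timestamps of recent requests

@app.before_request
def throttle():
    now = time.time()
    hits = _hits[request.remote_addr]
    # Discard timestamps that have fallen out of the window.
    while hits and now - hits[0] > WINDOW_SECONDS:
        hits.popleft()
    if len(hits) >= MAX_REQUESTS:
        abort(429)  # 429 Too Many Requests
    hits.append(now)

@app.route("/")
def index():
    return "ok"
```

Keep in mind that robots.txt only states your policy; controls like this are one way to actually enforce it.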
Final thoughts
The fight isn't only technical; it's about who decides how content is used and how the digital ecosystem is protected. Should technical measures be enough, or do we need clearer legal frameworks for today's web? This lawsuit is an example of companies turning to the law when technical defenses are no longer sufficient.
Original source
https://blog.google/technology/safety-security/serpapi-lawsuit
