Could Big Tech be liable for generative AI output? Hypothetically ‘yes,’ says Supreme Court justice


The future of tech liability is uncertain, and the Supreme Court’s recent hearing on the Gonzalez v. Google case is proof. In this case, the family of an American killed in a 2015 ISIS terrorist attack in Paris argued that Google and its subsidiary YouTube did not do enough to remove, or stop promoting, ISIS recruitment videos. Google has argued that Section 230 of the Communications Decency Act shields it from liability for content posted by its users.

Now, Justice Gorsuch has raised the question of whether generative AI output is also protected by Section 230. In this blog post, we’ll explore the legal battles brewing around generative AI, the implications of the Gonzalez v. Google case, and what this could mean for tech platforms in the future.

Is Generative AI Protected by Section 230?

As search engines begin answering some questions from users directly, using their own artificial intelligence software, it’s an open question whether they could be sued as the publisher or speaker of what their chatbots say. In the course of Tuesday’s questioning, Gorsuch used generative AI as a hypothetical example of when a tech platform would not be protected by Section 230.

Legal Battles Have Been Brewing for Months

As generative AI tools such as ChatGPT and DALL-E 2 exploded into the public consciousness over the past year, legal battles have been brewing all along the way. For example, in November a proposed class-action complaint was announced against GitHub, Microsoft and OpenAI for allegedly infringing protected software code via GitHub Copilot, a generative AI tool intended to assist software developers.

And in mid-January, the first class-action copyright infringement lawsuit around AI art was filed against two companies focused on open-source generative AI art — Stability AI (which developed Stable Diffusion) and Midjourney — as well as DeviantArt, an online art community.

What Does This Mean for Big Tech?

It’s clear that legal battles around generative AI are here to stay, and the implications of the Gonzalez v. Google case could be far-reaching. Now, tech platforms are wondering if generative AI output will be protected by Section 230 and what this means for potential liability.


The Supreme Court’s hearing in Gonzalez v. Google is only the beginning. Stay tuned to find out what this could mean for Big Tech and generative AI in the future.
