Saturday, February 14, 2026

In the early days of the internet, Congress passed Section 230 as part of the Communications Decency Act of 1996 to shield online platforms from liability for content posted by their users.

With the massive growth of social media and AI, however, the landscape has changed dramatically since the law’s passage. Now, many are asking whether Section 230 should be reformed or repealed entirely because of the harms it can enable, especially to our children.

Brad Carson, a former congressman and former Under Secretary of the Army who now serves as president of the University of Tulsa, sits down with Washington Times Commentary Editor Kelly Sadler on Politically Unstable to take a deep dive into Section 230 and what could be done to hold tech companies accountable.



[SADLER] We’re going to talk Section 230. It was the 30th anniversary of that law’s passage this week, and we’ll discuss the implications it has in basically shielding big tech from any liability for the harms it does to our youth and children. Brad, could you tell our audience what Section 230 is and what its implications are?

[CARSON] Back in the early days of the Internet, Congress passed the Communications Decency Act. And most of that law was actually struck down by the Supreme Court. Section 230 survived that judicial review. And Section 230 does two things that are very important even today.

One is, it defined the big companies, like Facebook and now Google and others, as not being publishers. If you’re a publisher, say The Washington Times or The New York Times, and someone were to say or write something defamatory in your pages, you could be sued for libel, as an example. But Section 230 protected the platforms from those kinds of claims. The notion was that these were platforms where lots of people were just posting things, and the internet was hopefully going to grow. If you required platforms to moderate all of that content, they would never actually expand; it would be very expensive, and litigation could ensue. So it immunized them from the kinds of claims that most other publishers, whether newspapers or other media outlets, have to face.

The second thing it did that’s also important: it defined content moderation as something you could do without being liable for some kind of discriminatory activity or for privileging one viewpoint over another. So you could engage in good-faith content moderation without worrying that you were in some way prejudicing one particular group or another. You could, for example, get rid of all conservative speech or all liberal speech if you wanted to. Those are the two big parts of it.

[SADLER] Back during COVID, when a lot of conservatives were being censored, monitored or cut out of different chats, conservatives argued for getting rid of Section 230 because of the censorship implications. But there are greater implications today when we look at our youth, social media and AI, and kids becoming addicted to these platforms in ways that basically lead to suicide or depression. Meta is currently going through a court proceeding in which a young girl describes how the algorithm basically sucked her in and led to suicidal thoughts. How are companies like Meta using Section 230 to shield themselves from that accountability?

[CARSON] So you used a key word there: algorithm. Things have changed in 30 years. When Congress passed the Communications Decency Act, they were envisioning hobbyist chat rooms where people might go in, you’re a ham radio operator and you’re responding to one another. If you’re old enough, you can kind of remember the mid-1990s. They didn’t want people to be liable if some person posted something that was defamatory or that otherwise got them in trouble.

Today, though, social media works very, very differently. It’s an algorithmically curated environment, and what we see is often driven by artificial intelligence now. A couple of years ago, courts held that the algorithm itself was also protected by Section 230, and so Meta will raise that in its defense.

I think the criticism, from really both the right and the left (no one is satisfied with Section 230 today), is that in this algorithmically curated environment the companies should be liable, right? They’re weighing in heavily. It’s no longer the hobbyist chat board it once was, where you just wanted to protect yourself from hobbyists who might post something.

[SADLER] Yeah, and they have an economic incentive baked into that algorithm. They want to keep our youth addicted, to keep them scrolling and devoting more time to their sites, because then they can sell more ads. The algorithms become very valuable for identifying people and targeting them with specific products or specific content to sell to other vendors. Where do we go from here? Because there have been efforts to repeal Section 230. What can Congress do?

As a parent of three young boys, I’m concerned that big tech is getting into their minds. There are safeguards out there, but parental controls can only do so much. Ultimately, these companies should bear some responsibility for the content they’re putting out there.

[CARSON] That’s exactly right. As I mentioned, both the left and the right despise Section 230 these days, but they get kind of bedeviled trying to figure out what a new Section 230 could look like. At a minimum, we should make sure Section 230 is not extended into the realm of artificial intelligence. Because just as Facebook is engagement farming, tuning its algorithms and responses to maximize engagement, the AI companies can do the same. We know this from OpenAI, which has publicly reported that some models drive more engagement than others, and that putting certain safeguards on ChatGPT leads to less engagement.

And so it’s kind of terrifying in some ways to think about, but there’s almost a dial in Sam Altman’s hands that can determine how much engagement you’re going to have and what the model should look like.

Watch the video for the full conversation.



