
Do we need protection from Algorithms?

This is an interesting think piece about what should or should not be done about “algorithmic harms,” a term I hadn’t heard before, so perhaps the author of this article coined it.

He writes, “Let’s compare two types of algorithmic harms. In the first case, a bank uses an algorithm designed to deny all loan applications from black women. Here, there’s an existing law being broken that can be prosecuted. Now compare that example with “filter bubbles,” the kind of ideological echo chambers on social media that, in the worst cases, can radicalize people to violence. While also potentially dangerous, this problem isn’t covered by any current laws or related “legal analogs”—and Smith isn’t sure it should.”

Of course, those who support free speech understand that the only speech that really needs protection is the kind people don’t want to hear. No one is protesting or demanding laws to keep people from saying the things everyone wants to hear. It’s incendiary speech that requires protection. This right is perennially important because today’s blessed speech might be tomorrow’s curse.

“Is it the responsibility of the technology platform to analyze your data and say, ‘Well, this person has these views. We want to ensure that 30 percent of the news that they see comes from a different political perspective’? I’m not sure that’s a position that consumers want them to be in.”

Indeed, it’s not. Or perhaps it could be. Take, as an example, the video platform VidAngel. It filters movies for people: perhaps you want to watch a movie like Deadpool but don’t want all the blood, violence, and language, and VidAngel will filter those things out. The big difference here, though, is that the customer picks what to filter. They can watch it with all the gore or filter it out, all the naked butts they want, or none at all. There’s a great market for this. If that’s what a company wants to do, I say more power to it. However, I do not believe a platform should do this without the customer wanting it.

Facebook is a great example. I want to know all the bias and hate and gloriously stupid things going on. I don’t want all of that filtered out. Filtering it out doesn’t mean that no one is thinking it or saying it elsewhere. Take China, for example. It is well-known for filtering and blocking the Internet. It’s interesting to note that it doesn’t filter complaints, mockery of the government, or other such speech. What it filters is content that encourages people to meet up in the real world. Facebook can filter all the fake news it wants, but that will not stop people from creating meetups to discuss their point of view. It will, however, stop everyone else from being exposed to the fact that that point of view exists in the first place. We cannot protect our minds against the false if we don’t even know it’s out there.

Source: Why the Law Doesn’t (and Sometimes Shouldn’t) Protect Us From the Algorithms That Rule Our Lives
