Sunday, April 28, 2024

Tech Publication Calls For Regulation Of AI


A tech publication tells us that all this ChatGPT stuff must be regulated before it gets out of hand. This is an argument with several problems, the most important of which is the one that isn't being generally appreciated.

Others – such as Bloomberg – understand some part of the problem, which is that we don't know what the end state is going to be. Therefore, how can we regulate? We don't know what's going to happen, so how can a bureaucracy tell us what to do? There's also the point that journalists – like those at Gizmodo and other outlets – are going to be the most worried about ChatGPT and other large language models, because it's their jobs that are feeling the hot, metallic breath on their necks.

You know, it's fine when robots weld cars because that makes better cars for journalists, but when robots make better writing for car welders at the cost of journalism jobs, the media is outraged. That's just being human, however hypocritical it is.

We could even worry about the usual mistake of allowing non-economists to use economic concepts – here, free rider, public goods and so on – because they never do get the implications of the phrases right. Here a philosopher seems to think that a public good is something that's a good made for the public. No, entirely wrong – and we'll not bother going into those weeds, other than to note that the economists' concept (something non-excludable and non-rivalrous) is entirely different. This particular worry is so absurd as to be ridiculous: “autonomous AI systems that match or exceed humans at most economically valuable tasks.” To economists that's the point. Why would we use a machine that is worse at doing something than humans are? We do not just desire, we positively lust after, the machines doing stuff better than we can.

But the underlying argument here is that we must have regulation first. Here's the real danger of that regulation-first insistence.

“However, it is clear that current AI systems already pose plenty of dangers, from racial bias in facial recognition technology to the increased threat of misinformation.”

Well, racial bias (they mean not being able to distinguish individuals of darker skin colors) in facial recognition is simply AI that works less well than humans do. So that's hardly a problem – who is going to use something worse? It's that “misinformation” that is truly worrying the media classes. Or, “conducted carefully, safely and with proper oversight and transparency.” Or even: “misinformation and fake news already pose serious threats to democracies, but AI has the potential to exacerbate the problem by spreading “fake news” faster and more effectively than people can.”

The current media setup – well, no, go back a few steps. The old media setup used to control the information we received. Things like the insistence upon media balance and the exclusion of odd ideas from the general conversation meant that ideas the media classes didn't want us to consider never got that oxygen of publicity. The success of Fox News wasn't because it persuaded people; it was because it reflected ideas that didn't gain airtime elsewhere. Then along came the internet and the explosion of anyone getting to say anything.

The new limitation is the attempt to define ideas unpalatable to the progressive as misinformation. The regulation of what may be said has taken a step back into the shadows – as we've pointed out more than once. It's now those “fact-checking” groups who determine what may be publicly said.

What really worries those new gatekeepers is that ChatGPT, LLMs and AI are going to throw open the information gates once again. The stranglehold on what we might learn – therefore how we might differ from progressive orthodoxy – will be broken once again. Which is why they want control over what can be said and done before it is.

After all, they've spent decades trying to control what is taught in schools and what can be said on the airwaves; why would they be happy with losing all that power now?

The point of controlling AI is not to save us from misinformation, it's to ensure the constant supply of disinformation.

This article originally appeared in Accuracy in Media. The opinions expressed in this article are those of the author and do not necessarily reflect the positions of American Liberty News. Republished with permission.
