If you wanna know my opinion on AI, I made a post here. Go check it.
This post is gonna be a little more broad, and will cover AI as an example because… Because I can. B)
Accidentally Bad
It’s fully possible to make something that sucks. In fact, it’s more than likely that you’ll make something that is just bad in some way. And that is actually fine. No one was born knowing everything, and you can develop skills like cooking by making mistakes, understanding what the issues were, and correcting them.
What I want to highlight is that sometimes people make things accidentally bad, either because they didn’t know better, or they’re still developing a skill, or even because they executed it in a way that is very strange or dubious, reducing the quality of the work instead of increasing it. And let me repeat this: that is absolutely fine. We’re not all paragons of truth or knowledge. People that make good works can make bad ones, it happens. We can’t win them all.
My issue is when you give eternal machines unlimited access to do things. LLMs with their search algorithms are incapable of learning from mistakes because they can’t think. It’s pretty cut and dry, machines do not carry intentionality or logical reasoning. Chatbots are pretty bad at this, exceptionally bad in fact.
This is because everything we do, as a species, has an intention behind it. Doesn’t matter if it’s me writing this post or a former president trying to stage a coup with American interventionism, everything has a meaning, a purpose, an intention. This is what chatbots and AI in general can’t and most certainly won’t have. Everything it does is a mechanical reaction, there’s no thinking involved, there’s only code.
Mechanically Stupid
What I want to say about LLMs with search algorithms is this: they are not smart. They might do, in seconds, insane calculations that would take humans decades, but only if they are programmed to do so. Otherwise, they have no purpose, because they can’t think.
Which means they are mechanically stupid. A machine will only do work if it’s staged, molded, coded to do so. There’s no deviation, no breaking away from what it was designed to do. This is why people go into panic mode when a machine deviates from what it’s supposed to do: either it’s learning (it’s not) or it’s broken.
Many pieces of media have elaborated further, expressing that machines will turn sour once they realize that we’re all just a bunch of primates with opposable thumbs that can see colors and think slightly better than average (which is not a lot), and will then order our destruction. That’s fine and all, but these writers forgot that machines can’t think or evolve, not without our explicit intervention.
LLMs will never grow if data is never fed to them. But since what we have as data is usually poisoned, it’s no surprise to see Grok making alt-left comments, or bots trained on Twitter messages going from happy to depressed or criminal in less than 24 hours (also, it’s always related to Twitter, that heap of shit). And because it doesn’t understand what it’s doing, it has no intentionality in what it does, making it feel cold and rigid, because that’s what it was made for.
So the dream of a Skynet that comes and just goes to war with humanity is not really a problem we should be looking out for. We should be prepared, but it’s not really likely to happen.
An Assistant, not a Worker
And this comes down to using AI effectively. If you treat it as an assistant that will help you get better at things, like writing or making images, then it’ll be fine. It can’t think, but if asked about things in its database, it can give good replies. Maybe you need your grammar checked, or a pose reference for a drawing. Those are fine uses for it.
What it won’t be good for is when you delegate the entire workload to the machine. It can’t think, it won’t think, the results are bad, and they’ll only get worse the more people think they can get away with it.
You wouldn’t delegate an entire job to an apprentice who just started working and somehow lacks the capacity to learn, would you?
All Tests Failed
Have you heard of the Turing Test? Or the Chinese Room argument? If you have, congrats, you’re smart. If you haven’t, here’s a quick rundown.
The Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine’s ability to exhibit intelligent behaviour equivalent to that of a human. In the test, a human evaluator judges a text transcript of a natural-language conversation between a human and a machine. The evaluator tries to identify the machine, and the machine passes if the evaluator cannot reliably tell them apart.
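To make the setup concrete, here’s a minimal sketch in Python of how the imitation game could be staged. To be clear, this is my own toy version: the canned-reply bot, the role names and the round count are all made up for illustration, not how any real test is run.

```python
import random

def machine_reply(message: str) -> str:
    """A stand-in 'machine': canned, plausible-sounding replies."""
    canned = [
        "That's an interesting point, tell me more.",
        "I was just thinking about that the other day.",
        "Hard to say, it depends on how you look at it.",
    ]
    return random.choice(canned)

def human_reply(message: str) -> str:
    """A stand-in 'human': a second person typing at the terminal."""
    return input(f"(human contestant) reply to '{message}': ")

def imitation_game(rounds: int = 3) -> None:
    # The evaluator never sees who is who; the A/B labels are shuffled.
    players = {"A": machine_reply, "B": human_reply}
    if random.random() < 0.5:
        players = {"A": human_reply, "B": machine_reply}

    for _ in range(rounds):
        question = input("(evaluator) ask both players something: ")
        for label, reply in players.items():
            print(f"{label}: {reply(question)}")

    guess = input("(evaluator) which one is the machine, A or B? ")
    actual = "A" if players["A"] is machine_reply else "B"
    print("Correct!" if guess.strip().upper() == actual else "Fooled!")

if __name__ == "__main__":
    imitation_game()
```

The point of the test isn’t the code, it’s the blindness: the evaluator only gets text, so the machine ‘passes’ by producing text that reads as human, not by thinking.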
The Chinese room argument holds that a computer executing a program cannot have a mind, understanding, or consciousness, regardless of how intelligently or human-like the program may make the computer behave. The argument was presented in 1980 by the philosopher John Searle.
These two are, to me, core tenets of what modern day AI (LLMs with search algorithms) is incapable of shaking off. We can easily tell if a text was made by AI with some reading, because, again, the AI in general doesn’t know or understand what it’s doing, it’s just searching info to generate a plausible response.
Speaking of which…
In the thought experiment, Searle imagines a person who does not understand Chinese isolated in a room with a book containing detailed instructions for manipulating Chinese symbols. When Chinese text is passed into the room, the person follows the book’s instructions to produce Chinese symbols that, to fluent Chinese speakers outside the room, appear to be appropriate responses. According to Searle, the person is just following syntactic rules without semantic comprehension, and neither the human nor the room as a whole understands Chinese. He contends that when computers execute programs, they are similarly just applying syntactic rules without any real understanding or thinking.
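As a toy illustration (the rulebook and the phrases below are made up, and a real system would have billions of rules instead of two, but the principle is the same), the room is basically this:

```python
# A toy "Chinese room": the program matches symbols to symbols.
# Nothing in here knows what any of the symbols mean.
RULEBOOK = {
    "你好": "你好！很高兴认识你。",        # "hello" -> "hello, nice to meet you"
    "你会说中文吗": "会，我说得很流利。",  # "do you speak Chinese?" -> "yes, fluently"
}
FALLBACK = "对不起，请再说一遍。"          # "sorry, say that again"

def room(symbols: str) -> str:
    # Pure syntax: look up the incoming squiggles, hand back the
    # squiggles the book says to hand back. No comprehension anywhere
    # in this function, yet the output looks fluent from outside.
    return RULEBOOK.get(symbols, FALLBACK)

print(room("你好"))          # looks like understanding...
print(room("你会说中文吗"))  # ...but it's a lookup, nothing more
```

Scale that rulebook up by a few billion entries and weights, and you get something that looks a lot like a chatbot.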
Doesn’t that sound very familiar? The key difference is that humans do learn from experience, and assimilate that information inside their minds. Machines can’t do that, by design.
Also, look, the Wikipedia page for it even uses the word:
Searle argues that, without “understanding” (or “intentionality”), we cannot describe what the machine is doing as “thinking” and, since it does not think, it does not have a “mind” in the normal sense of the word.
Is it a simulation of having a mind, or is it truly having one? Well, given the amount of slop it can produce, I can tell you it has no say in what it makes.
Unintentionally Bad and Closing Words
Humans will never be replaced by AI, mostly because AI is a bubble which is gonna pop. When? Probably soon, or when the investors run out of money. This is because emulating a logical thought process is not a simple thing. And even if you can, it’ll be just that, an emulation, never the real thing.
A realization comes from this: some people might discredit bad fanfics or similar things, but there was some sort of desire to make them, meaning there’s intention in them, and that alone makes them a valid effort, and better than whatever any AI can put out. Trying and failing isn’t a bad thing, we learn from mistakes. AIs don’t try, because they don’t know what ‘try’ means.
So if you’ve ever wanted to complain about AI and what it does, mostly what it does wrong, you’re not wrong yourself. Artists and writers have the right to be scared of people trying to replace their jobs with AI, but the way I see it, it’s just a bunch of people at the top trying to cut them out instead, which is worse than the notion that AI will be able to act like humans in 2027, or that by ‘2030, AI will take all the jobs’, or something.
Instead, we’ll be fearmongering ourselves with distrust over things we have been doing for a century or two, and letting the people that make the AI tools act like they’re the player in The Sims 1. The simulation of a human mind is not a real mind, because intentionality is everything.