I'm Not Saying "Thank you" to ChatGPT: My AI Skepticism

Earlier this year, my little brother started showing what could be read as anger management issues, despite his life as a 4-year-old being quite peaceful and not stressful at all. Well, he was biting other kids and pulling their hair, so we got the message that he was dealing with some kind of pain, okay. He has always been really into books, so my mother decided to write a story teaching him how to deal with his frustrations. Or to have that story written, at least. I guess she judged herself too busy to really take care of it at the moment, so she asked ChatGPT to "create" something. I couldn't accept it, of course, so I ended up coming up with a little poem myself – made by a human, for humans. It was way better than what ChatGPT had come up with, but that is not to say I'm a good writer: AI just makes the competition really easy.
Outsourcing human imaginative faculties – one of the potential defining traits of our species¹ – to a machine is ingenuous at best and dangerous at worst. I didn't want to see my brother reading something nobody bothered to write. Most people, ignorant of how genAI works, are using its generative properties with the amusement of someone watching a dog give you its paw. But just because your dog knows this trick doesn't mean you'll have it shake hands with your dinner guests in your place: allowing a machine to produce a text for your kids is one more instance of the techno-solutionist fallacy – that is, believing the solution to ancient problems that have already survived centuries of innovation is simply... more innovation.
So, just to warn any other busy mom out there, I'd like to briefly explain in this text how genAI works, what the state of the AI market is nowadays, and why it's naïve to take part in it.
I.
AI chatbots, as the technology stands today, are language models trained on a huge amount of data – that's where their technical name comes from, by the way: LLMs, or Large Language Models. Since they are fed such copious amounts of data, these models are able to perform complex natural language processing tasks, such as generating new "content" from prompts.
Thus it must be noted that, unlike the common notion of artificial intelligence preached by the most speculative works of Hollywood fiction – an ancient source of misinformation² –, these models operate as "stochastic parrots": parrots because, just like the animal, they reproduce speech without really being conscious of the content behind the words used; and stochastic because they chain one word after the other through a probabilistic analysis of what human discourse should look like, aided by the (really gigantic pile of) data they have been trained on. That made clear, believing such a machine could ever achieve rationality is as absurd as seeing a dirty car on whose window someone has written "WASH ME" and believing the car itself to be the conscious issuer of the message. Current AI models, therefore, are not only insufficient for but also indifferent to the TechBro reverie of AGI (Artificial General Intelligence, a theoretical model that would be capable of learning, thinking and reasoning just like a human being).
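Just to make the "stochastic parrot" idea concrete, here is a minimal sketch in Python of how next-word sampling works. The vocabulary, the probability table and the function are invented for the example – a real LLM learns billions of such statistics from terabytes of text and works over sub-word tokens, not whole words – but the principle is the same: pick the next word by rolling weighted dice, with no understanding involved.

```python
import random

# Toy "language model": for each two-word context, a probability
# distribution over which word comes next. Real LLMs learn these
# statistics from terabytes of text; here they are hard-coded.
NEXT_WORD_PROBS = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
    ("cat", "sat"): {"on": 0.9, "quietly": 0.1},
    ("sat", "on"): {"the": 1.0},
    ("on", "the"): {"mat": 0.7, "sofa": 0.3},
}

def generate(prompt, max_words=6):
    """Chain one word after the other by sampling from the
    probability table -- pure statistics, no comprehension."""
    words = prompt.split()
    for _ in range(max_words):
        context = tuple(words[-2:])
        probs = NEXT_WORD_PROBS.get(context)
        if probs is None:  # the "model" has nothing more to say
            break
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the cat"))  # e.g. "the cat sat on the mat"
```

The parrot lives in that `random.choices` call: the output looks like language because the probabilities were distilled from language, not because anything in there knows what a cat or a mat is.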
So LLMs aren't really what people are sold when they buy into genAI. Not that anyone believes AGI is already here and that it's good ol' ChatGPT. But people are being bombarded every day with nonsensical doomsday predictions – mostly from the very people doing the selling, how curious! – such as "AI will take your job!" or "AI will destroy modern societies!", which undoubtedly lead them to believe LLMs are way more powerful than they really are. Is tech companies once again making fools of us the worst thing that happens when we swallow the bait? Oh boy, I wish! Just think, for example, about the high energy and environmental costs of training and using AIs; the use of copyrighted data to do that training; the crowdworkers in underdeveloped countries being paid a miserable salary to make machine learning possible; and the training of these chatbots on data generated by other LLMs.
This last issue is particularly problematic because, as explained above, AI can only generate "content" by deriving it from pre-existing data. Thus, anything created by LLMs can never surpass human content in quality, since it must necessarily originate from it, with no room for creativity or inspiration (things a machine can't entertain). These models, however, need such a huge amount of data to work properly (or even to barely work at all, honestly) that free-to-use content from all over the internet, in its three decades of public existence, wasn't enough to sell users the illusion of a sentient algorithm. So companies started using information protected under copyright law, such as newspaper articles or YouTube videos, to train their models. Sometimes, these AI companies will even make deals with the platforms that own the data to retroactively change their terms of use, making the theft of content somewhat "legal". One may argue that a video, if readily available on YouTube or another internet platform, has already been made public and can thus be put to any use. That, however, isn't true: streets are also public, but parking your private car in one doesn't mean that now anyone can use it.
These generative properties have been put to "good" use all over the internet, which means that a lot of AI-generated content is currently flooding the web, which in turn means that AI models are no longer being trained only on fine-tuned, human-made natural language but also on a lot of slop, in a true dementia feedback loop. And being strictly derivative machines, the more a model is fed AI slop, the worse its average output quality becomes³.
Thus, until this structural modelling problem is solved, LLMs may get less accurate and less reliable the more data they are trained on. How can you call such a system "intelligent"?
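For the sake of illustration, here is a tiny simulation of that feedback loop, built on the classic toy example used to demonstrate model collapse: repeatedly fit a simple statistical "model" to data, then train the next generation only on that model's own outputs. The numbers and generation counts are made up for the sketch; the point is that, with no fresh human data coming in, the distribution tends to drift and lose its tails over successive generations.

```python
import random
import statistics

# Generation 0: "human data" -- samples from a rich distribution.
data = [random.gauss(0.0, 1.0) for _ in range(30)]

for generation in range(1, 21):
    # "Train" a model: estimate the mean and spread of the current data.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    # The next generation is trained on the model's own outputs,
    # not on fresh human-made data.
    data = [random.gauss(mu, sigma) for _ in range(30)]
    print(f"generation {generation:2d}: spread = {sigma:.3f}")

# Run it a few times: the "spread" performs a drifting random walk
# and, over enough generations, tends to collapse -- the model
# gradually forgets the variety present in the original human data.
```

It's a caricature, of course, but the mechanism is the same one note ³ refers to for actual LLMs: each generation can only redistribute what the previous one produced, and errors and blind spots compound.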
Such predatory consumption of pre-existing content also reveals another dark aspect of AI technology: these chatbots become pretty much plagiarism machines⁴. Plagiarizing is often the entire service a chatbot is providing you, for it frequently generates responses following earlier human-made sources word for word – which is an inevitable feature of the model rather than an accident, since these machines are trained to imitate human discourse in mathematically probable ways. By promptly presenting as its own an answer derived from the research and hard work of someone else, who took the time to study and analyze the topic before uploading it to the internet, AI reduces traffic to the original content – which is usually better organized and better written, since it was created by a human brain more finely tuned to natural language. This decreases advertisement and commission revenue from the original post, which can force small content creators to give up their careers – just search for "Google Zero" and check out what Google's automated AI responses are already doing even to huge media outlets like Business Insider and The Washington Post, which reportedly lost about 50% of their organic traffic each. Less human-made content on the internet also means lower-quality outputs for AI models, as explained above. Our desire for easily available answers may one day prevent us from finding any answer at all.
The whole point I'm trying to make in this first part of the text is pretty simple: AIs, as they exist today, don't really work the way you would want them to and aren't even applicable to most of the uses they have been put to. The service being delivered simply isn't what is being promised – or at least not what is being marketed, since one of the favorite tactics of AI developers to keep interest and investment in their products alive is scaring the population with obviously fake news about machines becoming sentient and jobs being wiped out. Also, nothing indicates that the scary promises being made will one day come true with the current technologies – "Woah, they could invent something in the future that...", well, sure! But these unabashed CEOs are trying to spook you with the cards they hold in their hands today! They have no reason to sell a technology they can't even build yet, especially not when someone else might be the one to build it first!
And, despite all this lack of practical benefit right now⁵, these LLMs are still being put to use everywhere, regardless of the environmental damage, the creation of a new digital precariat through crowdworking or the unsustainable prospects of training data for future models. Beyond that, there is a series of dilemmas I won't really touch upon due to the brevity of this text: AI chatbots endanger job markets of all kinds – not because a Chinese room is better than a human, but merely because the people responsible for hiring teams believe so –, compromise the mental health of users, harm interpersonal relationships, promote cognitive sedentarism and decrease the capacity for critical thinking.
II.
Then why are people using them?
Google, Meta, Amazon and Microsoft together will spend 330 billion dollars on AI investments in 2025 alone. All that in pursuit of a technology that never had real market demand and will never be the AGI of science fiction – and that is costing them more money than they are making out of it⁶. Do you really think these companies would spend one third of a trillion dollars in a single year on a new product if they didn't expect you to use it? They monopolize almost all the platforms we inhabit on the internet, so it's pretty easy for them to reach us with propaganda. What seems to be quite hard is making people notice this obvious conflict of interest: the same people who put so much money into a new technology are the ones trying to convince you that there is no future outside of it! Just think about how many times you've heard this same story – and also how many times you've seen these tech promises quietly wither away: cryptocurrencies, NFTs (the whole Web 3.0 idea, thankfully), Kinect (and the market of motion-tracking games), social networks like MySpace, Orkut and Messenger (now probably also Facebook, and eventually even the current idea of social media, with the advent of protocols like ActivityPub), etc... Even some innovations that never came into being were promised as the inescapable future, only to be found dead in the crib later: remember how Tesla enjoyed years of good investment by promising the "self-driving car" in just a few more quarters?
The signs of a predatory and self-cannibalizing market don't end there: the investment needed to run AI models is so aggressive that companies can't simply depend on end consumers; they need any excuse to put their beloved overpriced machine to "good" use. That's why some, like Google, are applying AI "enhancements" to third-party content without consent or prior warning. In the absence of real market demand for this technology – which nobody, except maybe NVIDIA, asked for –, companies are forcibly creating demand to meet their supply, in a clear inversion of free market good practice. That's why YouTube is automatically dubbing the hell out of every bird's chirping and Google's search engine is placing "AI Mode" as the first option in the navigation bar, tricking users accustomed to having the general "All" results tab in that spot.
The current state of the AI market is that of a bubble – and there aren't even 20 years between 2008 and us... This supposed innovation is returning neither revenue for investors nor better results for end consumers: the market is based entirely on ignorance and fear. We have users fearing being drowned in an avalanche of misinformation and orchestrated "hype" from the BigTechs; and CEOs fearing being left behind in tech's newest great promise. There is this common myth that early adopters are fated to succeed⁷. History has proven time after time that there is no reason to give this idea any credit: as just written above, the first social networks are all being overcome by more recent ones; Apple released the first iPhone in 2007, even though smartphone technology had been in development since the 90s; similarly, Tesla was founded in 2003, when electric vehicles had already existed for at least a century and a half. The abovementioned Apple, by the way, famously doesn't jump on market trends before carefully analyzing them: that's precisely why they are being judged as "behind" in the AI race when they are actually just taking the technology for what it really is – that is, not really much.
III.
Okay, after all these considerations, you may think: AI is the apple of the tech market's eye nowadays (just not yet for Apple – but jokes aside), even without a proper technological substratum for growth, a chance of sustainable use or a sign of benefit in most of its current applications. But surely it can't be bad to take part in the bubble as a mere end consumer, right? If you, using your individual agency (good luck finding that in the age of propaganda), decide to make restricted and conscious use of AI, you aren't promoting any harm, are you?
Well, for one, there is the structural harm of the model's own functioning: copyright infringement and plagiarism, environmental damage, disincentives to human-made content, low-quality slop flooding the internet, etc. These things follow from the use of any consumer. You may think that makes no difference. "If it isn't me, someone else will adopt the technology and do just as much harm." Teleologically speaking, that's correct. But the truth is that, more often than not, it's wiser to judge our actions by their principles and not by their consequences. You wouldn't think of killing someone in the street just because that person would eventually die anyway. The idea is clearly absurd! Yet these two examples, in spite of their contextual differences, represent the same thing: actions taken with indifference, justified by the notion that "equal harm" would be produced in any case.
And for two (if you will allow me to purposely play my broken English for laughs), even setting aside the damage done by the technology itself, there surely is harm in taking part in the bubble even as an end consumer. The leading companies in the field of AI are all billionaire megacorporations in the BigTech oligopoly – it couldn't be otherwise: the costs of developing and training models are too high to allow anything even remotely similar to free market competition (that is, if any free market can truly exist⁸). An oligopoly – especially in a field so restricted by the level of investment and technical expertise needed – basically means that companies can profit not only from gaining clients but also (and sometimes mainly) from exploiting them. Tech megacorporations know they pretty much hold their users hostage to their products: there simply aren't appropriate alternatives if someone refuses the option made available by BigTech (or, even when there are, the alternatives aren't known or aren't allowed). I want to emphasize the word "appropriate" here because, even though you can refuse to use Netflix or Amazon Prime, you can't really find all the same movies on any other streaming service; even if you decline to use Twitter or Facebook at all, that doesn't mean you will find another social network where all your friends have profiles; etc. This puts companies in a pretty comfortable position to create adversarial relationships with consumers: they feel free to extend their powers even to the detriment of users' rights or the final quality of their products – just think about the increase in YouTube ads or the avalanche of bots promoting adult content on Twitter. Accepting the bubble means accepting these abuses – and thus conceding BigTech companies more power over you.
To add more somber colours to the nightmare of oligarchic domination painted by the BigTechs, we still have the displeasure of finding out what these companies' leaders are doing with the wealth they amassed thanks to their users. Spotify CEO Daniel Ek used Prima Materia, his venture capital firm, to raise 600 million euros in a funding round dedicated to Helsing, a German company that produces AI for military drones. Palantir, another company responsible for the development of top-notch military tech – including military AI – multiplied its market value by about 10 over the last 2 years. Nowadays it's worth more than the combined value of 3 old geezers in the field: Lockheed Martin, General Dynamics and Northrop Grumman. One of its co-founders and its current chairman is Peter Thiel, also a co-founder of PayPal and the first external investor in Facebook. Some of the biggest companies in American tech recently made multimillion-dollar deals with defense departments across America and Europe. Military services are Silicon Valley's latest affair. The money that made those investors rich was of course taken right out of the wallets of innocent consumers like you and me, and now we are inadvertently injecting investment into technologies built to fuel conflicts all over the globe. Palmer Luckey, the creator of the Oculus Rift, used the wealth he amassed through VR – another "inescapable future" that nobody cares about anymore, by the way – to found Anduril, another start-up that aims to offer the worst of robotics and AI to America's Department of War (they don't even pretend it's for defense anymore...). Even worse, this same Anduril closed a deal with OpenAI, the developers of ChatGPT, to use their models in military products – a treacherous change of company policy, since until 2023 OpenAI explicitly vowed never to use its technologies for military purposes. That prohibition, however, was quietly lifted in 2024, a few months before the deal with Anduril was announced. ChatGPT and all the AI models developed by OpenAI, needless to say, are financed and tested on a daily basis by well-intentioned users who don't even suspect what the corporation is up to. Regardless of how you feel about the massacre in Palestine or the war between Ukraine and Russia, it must still be uncomfortable to imagine you are indirectly financing the fabrication of weapons designed to kill.
In Conclusion:
Above, I tried to briefly explain my disinterest in AI technology and my opposition to the AI market. Disinterest because I believe LLMs are neither great facilitators nor even huge innovations: the models were developed for use cases that actually seem very limited and restricted when you consider all the uses they are actually being put to. And opposition because their irresponsible and unconscious use – stimulated by sellers of the products spreading misinformation – promises mainly harm in the long run. I also tried to explain how this innovation bubble is a recurring trick of the BigTechs and how falling for this tale just concedes them the power to keep abusing consumers and squandering profit on lousy investments. Above all, I tried to convince you, the reader, that we may be getting into this AI business with more optimism than life advises. If my words weren't enough for now, I believe the future will soon corroborate my thesis with plenty of examples. I honestly hope this text will be useful at least in sparking interest in the topic. May the knowledge acquired here, if any, protect you from the cult of innovation and all its naivety.
Notes:
¹At least according to Harari, Y.N. (2014). Sapiens: A Brief History of Humankind, Chapter 2: "The Tree of Knowledge". Random House UK.
²I'm thinking of such stuff as, for example, the 1939 film Ninotchka, directed by Ernst Lubitsch and written, among other people, by Billy Wilder. With such people in the production, it is of course a great movie, but the entire film is filled with biased and misleading representations of the Soviet Union. At a certain point, Melvyn Douglas's character tells Greta Garbo – who plays the titular "Ninotchka", a Soviet official on a mission in Paris – that he has been a great admirer of the USSR's Five-Year Plans for the last 15 years. The Five-Year Plans, of course, were supposed to be updated every five years. Lubitsch, however, bets that his audience is ignorant and uses the line as a provocative joke implying that the plans were unsuccessful for lasting so long.
³Section "LLM Self-Cannibalization"
⁴Section "Retrieval-Augmented Generation (RAG)"
⁵Notice how the text of the linked article says "The core issue? Not the quality of the AI models, but the 'learning gap' for both tools and organizations", only to go on to say, literally in the same paragraph: "Generic tools like ChatGPT excel for individuals because of their flexibility, but they stall in enterprise use since they don’t learn from or adapt to workflows, Challapally explained". So... in the end it's pretty much about the models, right?
⁶The headline of the linked article talks about 13 billion dollars of revenue for Microsoft from their AI products, but a paragraph buried in the text brings the information: "For the quarter, Microsoft reported capital expenditures of $22.6 billion, a new record high, citing the need to continue increasing capacity to meet demand for its cloud and AI offerings". In a single quarter, Microsoft is spending more on AI than what they are making in an entire year!
⁷Section "The LLM Train is Not Leaving the Station"
⁸Galeano, Eduardo (1971). Open Veins of Latin America. Monthly Review Press. The section "BURN THE CROPS? GET MARRIED? THE PRICE OF COFFEE DICTATES ALL" tells us: "The rich countries that preach free trade apply stern protectionist policies against the poor countries: they turn everything they touch--including the underdeveloped countries' own production--into gold for themselves and rubbish for others. The international coffee market operates so exactly like a funnel that Brazil recently agreed to impose high taxes on its soluble coffee exports, a reverse protectionism designed to protect the interests of competing U.S. manufacturers. Instant coffee made in Brazil is cheaper and better than that made by the flourishing U.S. industry; but then, of course, in a system of free competition some are freer than others."