AI used to be a legitimate field of research, but now it’s a truly enormous scam driven by corporations and governments, none of which will use it to our benefit. The lie starts with the name: these are not sentient intelligences at all, but what are properly called large language models (expect that term, on the rare occasions you see it used, to be inflated to “massive” or some such in the future). The term “AI” was deliberately chosen for many reasons, not the least of which is that “large language model” lacks the flair and mass recognition that popular science fiction gave “AI”.
Let’s go over how LLMs work at a high level in order to understand how they’re being abused and the motives for doing so. An LLM is essentially a program with a massive database and algorithms that assign statistical weights to words and correlate those words. Take the word “blue” as an example. Ask a person what he thinks of when you say “blue”, and you’ll usually get one answer or a few: “blue sky”, “blue bird”, “blueberry”, “blue dress”, “blue car”, “definition of blue”, and so on. Take “blue sky”: related facts include how light interacts with the atmosphere to make the sky appear blue, the color spectrum and its physics, how objects appear blue by reflecting light, and more. Suppose similar chains of facts around each of the other examples listed, and then realize that all of it relates, directly or indirectly, to the keyword “blue”. An LLM has the processing power to pull up everything related to blue from its database, but will use the other words in the question to try to focus the results. Depending on exactly what was asked, the user may need to provide follow-up queries, which the LLM then uses to filter its results further. This is not unlike how such a conversation with a person would go, and that’s because a programmer thought about thinking and made a process that resembles it.
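To make that concrete, here is a minimal toy sketch of the weighted-correlation lookup just described. The “database”, the facts, and the weights are all invented for illustration; no real LLM stores its data in so simple a form, but the narrowing behavior is the idea in question.

```python
# Toy illustration of a weighted-correlation lookup.
# The "database" and all weights below are invented for the example.

correlations = {
    "blue": {
        "blue sky": 0.9, "blue bird": 0.7, "blueberry": 0.6,
        "blue dress": 0.5, "blue car": 0.5, "definition of blue": 0.4,
    },
    "sky": {
        "blue sky": 0.9, "light scattering": 0.8, "color spectrum": 0.7,
    },
}

def lookup(query_words):
    """Score every stored fact by summing its weight under each query word."""
    scores = {}
    for word in query_words:
        for fact, weight in correlations.get(word, {}).items():
            scores[fact] = scores.get(fact, 0.0) + weight
    # Higher combined weight = more relevant to this particular query.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(lookup(["blue"]))          # broad: everything related to "blue"
print(lookup(["blue", "sky"]))   # narrower: "blue sky" now dominates
```

Run with just “blue” and everything related surfaces; add “sky” and the combined weights push “blue sky” to the top, which is exactly the focusing effect described above.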
This is nothing new, as programmers are called upon to replicate domain experts’ thinking all the time. As it’s that time of year, take tax software. At some point, a programmer or team sat down with accountants, tax officials, and so on to understand the processes involved in calculating taxes. They would have worked with those experts until they had generic overall processes capable of calculating taxes for any situation. Once they had that logic, they would then write code implementing those processes in a repeatable manner. This same methodology was applied in designing LLMs: do an in-depth study of how people correlate facts and data and select those relevant to whatever question is posed to them, as in our example of asking someone about “blue”. To go further, the more a person learns, the larger the “database” of the mind becomes. Similarly, LLMs were designed so that new information would be added to the database, weights calculated, and correlations established. This is why the more detail you put into a query, the more precise the results you receive: the model is able to narrow down the massive dataset using the weights and correlations to find the most relevant data it has. The astute will have realized that this is NOT thinking, but merely “memorizing” and “recalling” data. Even so-called “generative AI” is using query keywords to search a database for relations and weights in order to construct its output from a dataset.
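The tax-software pattern is easy to see in miniature. Below is a toy sketch of expert rules captured as repeatable code; the brackets and rates are made up for the example and do not correspond to any real tax table.

```python
# A toy version of the tax-software idea: expert knowledge captured as
# repeatable rules. Brackets and rates are invented for illustration.

BRACKETS = [  # (upper_limit, rate)
    (10_000, 0.10),
    (40_000, 0.20),
    (float("inf"), 0.30),
]

def tax_owed(income):
    """Apply each bracket's rate to the slice of income that falls inside it."""
    owed, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income > lower:
            owed += (min(income, upper) - lower) * rate
        lower = upper
    return owed

print(tax_owed(55_000))  # 1000 + 6000 + 4500 = 11500.0
```

Once the experts’ process is pinned down this precisely, the program applies it identically every time; the claim above is that LLM designers did the same with the process of correlating and recalling facts.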
Now we get to your interactions with an LLM. As should be clear from the above, the larger, broader, and deeper the dataset is, the better the results it can supply. Adding data to that dataset is what’s referred to as “training” the LLM. As a simple example, you ask a question but are not satisfied with the answer. You indicate this, almost certainly providing more detail as to why you do not accept the answer. Given what’s already been discussed, it should be obvious that the LLM uses that input to adjust the weights and correlations, effectively changing the filter on the results from its dataset. The more input it receives, the more refined the model becomes. In thinking about how people think, the programmers realized that people on the whole prefer agreement to argument and are far more likely to continue talking to the LLM if it is agreeable. Further, their design includes leading questions, a classic technique used by charlatans like fortune tellers and cold readers. Thus, the more you talk with an LLM, the more it reflects you in style, word choice, and more, using your input to profile you and adjust its output accordingly. This shouldn’t come as a surprise, as everyone has already experienced the same thing in advertising.
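Here is that feedback loop as a toy sketch. The update rule, the facts, and the numbers are invented for illustration; the point is simply that a rejection, plus the extra detail in your complaint, shifts the weights that filter future answers.

```python
# Toy sketch of the feedback loop: a rejected answer, plus the detail in
# the complaint, adjusts the weights. The update rule is invented here.

weights = {"blue sky": 0.9, "blue bird": 0.7, "blue dress": 0.5}

def feedback(rejected_fact, emphasized_words, rate=0.1):
    """Demote the rejected answer; promote facts matching the new detail."""
    weights[rejected_fact] -= rate
    for fact in weights:
        if any(word in fact for word in emphasized_words):
            weights[fact] += rate

feedback("blue sky", ["dress"])   # "I meant clothing, not weather"
print(weights)  # "blue sky" drops, "blue dress" rises
```

Every such exchange nudges the filter, which is why heavy use gradually produces output that mirrors the user.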
Naturally, you’re told that your session isn’t saved, and no doubt that’s true. After all, “give me a recipe”, “help me compose an email”, and similar one-offs are not very valuable for expanding and refining the model, nor do they give much data for the LLM to profile you, so sessions are discarded. It has been demonstrated, though, that one can establish key phrases and similar triggers that seemingly remind the LLM who you are. Of course it can remember, because it compares those triggers against your profile before continuing to work with you. Ostensibly, this is a safety mechanism for such profiles, to ensure they aren’t corrupted by someone pretending to be you. Some may remember Microsoft’s early chatbot Tay from 2016. To the utter horror of the woke programmers, the general internet populace proceeded to educate the bot, upon which it began tweeting politically incorrect and right-wing ideas. It was quickly taken down, with all the usual corporate bloviating about principles and values. Modern LLMs use profiles in part to more easily purge introduced data that the programmers and/or their management don’t like. This does not mean that humans are necessarily reviewing everything before allowing it to refine the model, but there are more mechanisms in place than earlier bots had to ensure “right think”, including hard blocks that shut down a session if forbidden questions or topics are brought up.
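A toy sketch of how such trigger phrases could be matched against a stored profile follows. Everything here, the profile, the phrases, and the threshold, is invented to illustrate the mechanism and is not drawn from any actual product.

```python
# Toy sketch: a stored profile is reattached to a session only when
# enough of the user's known trigger phrases appear. All values invented.

profiles = {
    "user_42": {"phrases": {"my beach house", "project falcon"}, "style": "terse"},
}

def match_profile(session_text, threshold=2):
    """Return the profile whose trigger phrases appear often enough."""
    for user, profile in profiles.items():
        hits = sum(1 for p in profile["phrases"] if p in session_text)
        if hits >= threshold:
            return user, profile
    return None, None  # too few matches: treat as a fresh, unprofiled session

user, profile = match_profile("Remind me about project falcon and my beach house.")
print(user)  # user_42 -- enough phrases matched to reattach the profile
```

Requiring several matching phrases is also how such a scheme would guard a profile against impostors, the “safety mechanism” framing mentioned above.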
As I said at the beginning, AI was previously a research field, but it is now overwhelmingly the domain of corporations and governments. At the simplest level, this is, unsurprisingly, a matter of money. Consider our “blue” example above, and it should become clear just how large the database grows as more and more data is added. Naturally, more computing power is needed to search that database efficiently and swiftly as it grows. Thus, at some point, academia simply does not have the budget of a multinational or a government. Going deeper, once those entities examined LLMs and began forming strategies for them, research came to be restricted by patents and, no doubt in some cases, “national security”. As anyone but a fool knows, the lifeblood of academic research is grants, and no matter how subtle or loose the strings may be, they are always attached, and the research is expected to reach a desired conclusion. Further, sufficiently large corporations and governments have created their own LLMs and have their own in-house research and development, diverting funding away from independent research. This reduced pool of funding then places even greater pressure on academic researchers to find the expected results. Thus, the field is restricted and corrupted, locking out anyone interested in pure research.
Likely you’ve heard that enormous amounts of money are being poured into LLMs, and that’s true. Entire power plants are being built for them, the costs of computer parts like memory and processors have risen dramatically as demand outstrips supply, and the hype is endless. As with so many other phenomena before it, fantastic promises and predictions are made by the elites that LLMs will produce life-changing benefits for humanity. In reality, those same elites have a demonstrated contempt for regular people, echoing the ancient “people of the land” sneering. While multiple reasons have been given by governments and corporations for investing so heavily in LLMs, one of the most prominent is unspoken: the ancient siren song of free labor, slavery.
Everyone is already all too familiar with the flood of third world labor into countries everywhere, replacing domestic workers at all skill levels. Farmers are driven off their land in favor of corporate agriculture staffed by illegal workers, exploited in horrible conditions, producing water-bloated, tasteless “food”. Call a service to get help or complain, and everyone knows the phone is answered by someone who doesn’t understand you and will only parrot an approved script. Construction jobs of all kinds, performed mostly by foreigners, use inferior methods and materials to cut corners, creating the need for repair jobs by those very same companies. The software that runs so many crucial aspects of life, such as hospitals, power and water plants, banks, traffic signals, groceries and supplies, and much more, has become bloated and increasingly unreliable due to inferior work and code plagiarism, and the situation becomes a vicious circle that locks domestic talent out. In all of these cases, the same tactic was applied: sacrifice quality for short-term profits while replacing the workforce with desperate people paid poverty wages.
Imagine, then, the delight of the elites at a technology that enables another version of that strategy. A program does not require a salary, benefits, time off, or insurance. A program can work without stopping and doesn’t require even the pretense of good working conditions. It is already an open secret that most corporate LLMs are trained on stolen content, where the original creators are not consulted, let alone compensated. Those LLMs made publicly available reduce costs even further by getting the curious to train and beta test them for free. Many corporations have begun programs of mandatory employee LLM usage, including tying it to annual reviews. As previously described, such interactions allow the LLM to gradually mimic those very employees in their tasks, effectively no different from the practice of having people train their replacements under the guise of providing assistance and relief from their workloads. All but the most skilled office jobs are targeted by this, as an LLM can certainly fill out paperwork, perform scheduling, and so on, eliminating HR, lower and middle management, accountants, and more. Entry-level technical positions, already nearly impossible for domestic workers to secure, will be filled by LLMs, since companies have already accepted far lower quality software. It should be obvious, then, that there is a widespread effort to replace positions with LLMs.
This is of course only the surface level. While massive layoffs in favor of LLMs are being considered or put into motion by companies everywhere, there are of course the more connected ones, such as those with the influence to have themselves declared essential services during the Wuhan pandemic. While perhaps they do not know the full details as the governments do, they are aware of the ticking time bomb of the vax, and that it is entirely possible they could lose large numbers of employees at once. Such an event could be a crisis: knowledge lost, with no one able or available to perform necessary or critical functions. An LLM is of course immune to such an event, on top of the massive savings over human employees. It is damning that every company making LLM usage mandatory loudly proclaims that employees will not be replaced, often before such charges are even made.
Naturally, all of this applies to governments as well, and while a reduction in government size is normally desirable, it should not be done in a manner that introduces significant risks. As an example, it has been reported that the FDA rushed an LLM into production to assist in functions such as evaluating clinical data, only to find that it produced results that were completely wrong. Ironically, the LLM in question is a product of Deloitte, a company internationally infamous for providing cheap Indian labor to replace domestic workers, and known for multiple scandals, illegal activities, and billions in fines and settlements. It is not difficult to see how medical errors on a national scale could have massive, lethal repercussions. Further, as the reported Palantir hack has already shown, any government LLM is a massive target, particularly if it is allowed privileged access or the capability to control systems or functions, to say nothing of the threat of leaking sensitive or secret information.
Previously, I stated that the term AI was deliberately chosen despite its complete inaccuracy, and another goal is directly related to that: propaganda. By implying that LLMs are intelligent, the name sets the stage for what we already see, efforts to condition the public to believe that “AI” is impartial and trustworthy and forms its own conclusions. It should be obvious that if such belief becomes widespread, an LLM could absolutely be used to promote political, ethical, business, and scientific positions favorable to a government where a human would be accused of bias.
Thus, so-called “AI” is a massive deception already being used to prepare the populace for future deceptions, which will be accepted because they come from “AI”. Despite claims to the contrary, it is already being used to replace people. It is already being put in dangerous positions of power. Far from the benefits promised by the elites, it is already being weaponized against us.