A balanced left-wing perspective on AI
Almost all casual AI views are split into two opposite positions: that it will overtake humans and take over the world ("singularity"), and that it has no intelligence at all and is a scam ("talking parrots"). Neither is correct.
First, we will discuss the source of intelligence in both humans and AI, then AI capabilities. After that, we will discuss the AI bubble and some common left-wing mistakes.
Intelligence and capabilities
Where does intelligence come from?
We are Marxists, and Marxism already gives us an answer about where intelligence and knowledge come from: it comes from real world practice and experience, and the usage of that experience to develop theories that explain the world.
See Mao's "On Practice":
Whoever wants to know a thing has no way of doing so except by coming into contact with it, that is, by living (practicing) in its environment.
If you want to know a certain thing or a certain class of things directly, you must personally participate in the practical struggle to change reality, to change that thing or class of things, for only thus can you come into contact with them as phenomena; only through personal participation in the practical struggle to change reality can you uncover the essence of that thing or class of things and comprehend them.
If you want knowledge, you must take part in the practice of changing reality. If you want to know the taste of a pear, you must change the pear by eating it yourself. If you want to know the structure and properties of the atom, you must make physical and chemical experiments to change the state of the atom. If you want to know the theory and methods of revolution, you must take part in revolution. All genuine knowledge originates in direct experience.
This is the theory-practice cycle: the scientific method, the "iteration" talked about in agile software development, materialism and its capitalist equivalent "realism", and so on.
Now, AI has human data, so it obviously does not need to learn everything via its own practice:
All genuine knowledge originates in direct experience. But one cannot have direct experience of everything; as a matter of fact, most of our knowledge comes from indirect experience, for example, all knowledge from past times and foreign lands.
But still, a human had to practice to collect this information:
To our ancestors and to foreigners, such knowledge was--or is--a matter of direct experience, and this knowledge is reliable if in the course of their direct experience the requirement of "scientific abstraction", spoken of by Lenin, was--or is--fulfilled and objective reality scientifically reflected, otherwise it is not reliable. Hence a man's knowledge consists only of two parts, that which comes from direct experience and that which comes from indirect experience. Moreover, what is indirect experience for me is direct experience for other people. Consequently, considered as a whole, knowledge of any kind is inseparable from direct experience... That is why the "know-all" is ridiculous... There can be no knowledge apart from practice.
There are two main points here:
- Building knowledge requires interacting with the world, not just observing it. A simplistic way to think about this is the difference between recognizing correlation and establishing causation. AI's current knowledge mostly comes from interactions done by humans in the past ("indirect experience").
- AI's existing knowledge comes from past human experience; to truly go beyond this, it must collect its own experience in the real world.
Now, some discussion and caveats.
This is not a validation of the "talking parrot" idea, where people claim that AIs simply parrot words with no understanding of what they say. It does not refute the parrot idea on its own, but it does not support it either. If learning from indirect experience made something a parrot, then a human student in school would be a parrot as well, which is nonsense.
Also, AI can almost surely identify new patterns or connections in existing data that humans did not notice or study thoroughly. This could mean that it can reach or go slightly beyond human intelligence. Of course, not being able to interact (apply changes to a system and view the outcome) will make things difficult.
But once the AI has "maximized" the knowledge (theory) from existing data (practice), it will need to collect more data itself in the real world. This puts the idea of an AI "singularity" into question.
Knowledge is not found by looking inward. The idea of a "know-all" AI that purely iterates internally is invalid. If it must experiment in the real world, then it will be limited by real-world laws. It cannot simply update its weights in a high-speed compute cluster: it must physically interact with the world. That means robots, the components for the robots, the materials, the energy, the production.
Again, Mao:
Man's knowledge depends mainly on his activity in material production, through which he comes gradually to understand the phenomena, the properties and the laws of nature, and the relations between himself and nature; and through his activity in production he also gradually comes to understand, in varying degrees, certain relations that exist between man and man. None of this knowledge can be acquired apart from activity in production.
Now, if existing human experience is enough to go far beyond human intelligence, this limitation will not matter much: whether the result is super-intelligent or super-super-intelligent makes little practical difference. But my understanding of how theory and practice work makes me believe it will be difficult for AI to go very far on human data alone.
The idea of duplicate vs new information
I have not seen Marxists say this, but I think it is worth considering: as more knowledge is collected, more and more of the available information in the world will simply be duplicate information. This means that it will become harder and harder to find new information. Finding it could require more searching, and it could also require more advanced experimentation, i.e. more advanced technology and production.
Ignoring the production aspect, this implies that progress will be logarithmic rather than exponential. The funny thing is that current LLM progress has been shown to be roughly logarithmic with respect to training compute. But in my opinion, it would be a bit pseudoscientific to treat this as direct evidence, since architecture improvements and longer context lengths can drive progress instead.
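For intuition, here is a toy comparison of a logarithmic capability curve and an exponential one. The functional forms and constants are illustrative assumptions, not fits to any real benchmark:

```python
import math

# Toy capability curves as a function of training compute C (arbitrary units).
# The functional forms and constants are illustrative assumptions only.
def capability_log(compute, a=10.0, b=1.0):
    # Each 10x of compute adds a roughly constant amount of capability.
    return a * math.log10(compute) + b

def capability_exp(compute, a=1.0, k=0.2):
    # Capability compounds with compute (the "singularity" intuition).
    return a * math.exp(k * math.log10(compute))

for compute in [1e21, 1e22, 1e23]:
    log_gain = capability_log(10 * compute) - capability_log(compute)
    exp_gain = capability_exp(10 * compute) - capability_exp(compute)
    print(f"C={compute:.0e}: +{log_gain:.1f} (logarithmic) vs +{exp_gain:.1f} (exponential) per 10x compute")
```

On the logarithmic curve, every 10x of compute buys the same fixed gain; on the exponential curve, each 10x buys more than the last. The difference between these two worlds is the whole argument.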
So that is where intelligence comes from. This is true regardless of whether we are talking about humans or AI.
Let's talk about humans in particular.
Where does the vast majority of human intelligence come from?
Humans develop a basic understanding of the physical world around them within the first few years of life, and reach "adult"-level intelligence within 18-25 years.
From this, people incorrectly conclude that AI should be able to reach human intelligence from a single lifetime of human data. This is completely false because it fails to recognize where the vast majority of human intelligence comes from.
Evolution is the true source of human intelligence. An 18-year-old human has 18 years of human experience, and 1 billion years of evolutionary experience.
Evolution, a genetic algorithm, is orders of magnitude more complex and advanced than any human-developed learning algorithm. It exhibits all kinds of complex behaviors: genes turning on and off based on living conditions, self-modification of the evolutionary learning process itself, symbiotic relationships with foreign bacteria, recombination, and more.
The scale is dramatically different as well. This algorithm did not run in an imperfect and oversimplified simulation, in a single data center, for a few months. It ran in real life, on a planetary scale, for a billion years.
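To make the "genetic algorithm" framing concrete, here is a minimal selection-recombination-mutation loop. This is only a toy sketch; real evolution layers gene regulation, epigenetics, symbiosis, and much more on top of this skeleton, and the fitness function here is a placeholder:

```python
import random

POP_SIZE, GENOME_LEN, GENERATIONS = 50, 20, 100

def fitness(genome):
    # Toy stand-in for survival/reproduction: count of 1s in the genome.
    return sum(genome)

def mutate(genome, rate=0.01):
    return [1 - g if random.random() < rate else g for g in genome]

def recombine(parent_a, parent_b):
    # Single-point crossover.
    point = random.randrange(1, GENOME_LEN)
    return parent_a[:point] + parent_b[point:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # Selection: only the fitter half survives to reproduce.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    # Reproduction: recombination plus mutation refill the population.
    children = [mutate(recombine(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print("best fitness:", fitness(max(population, key=fitness)))
```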
There are animals that are born knowing how to swim, how to follow their parents, how to hunt, and how to reproduce. This intelligence can not be obtained in a few minutes or hours of learning; it comes from evolution.
In comparison, humans take years to even be able to comprehend the world around them. But make no mistake: this is not evidence that humans receive no knowledge at birth. The human brain's architecture is far more advanced than any human-developed algorithm. Not only is it advanced in general, it comes prepared to learn both vision and language. It is no coincidence that these are the areas where most AI progress has been made: vision is how we perceive the world and language is how we communicate what we perceive to others. Evolution designed the brain for these tasks and many others, and that is why the brain can learn so much in such a short time.
It is extremely naive to assume that we can simply bypass 1 billion years of evolution and easily surpass this level.
Tech workers have a habit of going into new domains and incorrectly assuming that their understanding is superior to that of the experts within those domains. This habit frequently causes tech projects to fail. In my opinion, that is what is happening in AI, but at an absurd scale. Although I am not religious, trying to create intelligence is effectively "playing god". It is a bit delusional to think that this would be an easy task.
It does not make sense to compare human lifetime learning to pretraining. It is more accurate to compare it to finetuning, calibration, decompression, and transfer learning. Pretraining, i.e. starting from scratch, is more similar to evolution than to lifetime learning. It is completely predictable for it to be a slow and tedious process.
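In code terms, the analogy looks like transfer learning: keep the expensively trained backbone and only adapt a small part to the new task. A minimal PyTorch-style sketch, where the 10-class head and the use of an ImageNet-pretrained ResNet are placeholder assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models

# The backbone's weights come from the long, expensive "outer loop"
# (ImageNet pretraining standing in for evolution in this analogy).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the inherited representation...
for param in backbone.parameters():
    param.requires_grad = False

# ...and only train a small task-specific head ("lifetime learning").
backbone.fc = nn.Linear(backbone.fc.in_features, 10)  # 10 classes is a placeholder

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def finetune_step(images, labels):
    # One cheap adaptation step on top of the pretrained initialization.
    optimizer.zero_grad()
    loss = loss_fn(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```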
5-10 years ago, it almost felt like I was the only person in the world who believed that evolution was the dominant source of intelligence. But this view is becoming more common. See this tweet from Andrej Karpathy:
Animal brains are nowhere near the blank slate they appear to be at birth. First, a lot of what is commonly attributed to "learning" is imo a lot more "maturation". And second, even that which clearly is "learning" and not maturation is a lot more "finetuning" on top of something clearly powerful and preexisting. Example. A baby zebra is born and within a few dozen minutes it can run around the savannah and follow its mother. This is a highly complex sensory-motor task and there is no way in my mind that this is achieved from scratch, tabula rasa. The brains of animals and the billions of parameters within have a powerful initialization encoded in the ATCGs of their DNA, trained via the "outer loop" optimization in the course of evolution. If the baby zebra spasmed its muscles around at random as a reinforcement learning policy would have you do at initialization, it wouldn't get very far at all. Similarly, our AIs now also have neural networks with billions of parameters... TLDR: Pretraining is our crappy evolution.
When people think about the "singularity", they often think of an instantaneous process. But we are already in the singularity: evolution was the first step, and human-created AI will be the second. That first step has been running for a billion years, and humans have existed for over 100k years. Even if the scaling of AI intelligence were truly exponential (which is not guaranteed), that does not mean instantaneous. The second step after evolution still has not happened.
There are definitely limitations to comparing evolution with AI:
- Although DNA has excellent storage density and stores an absurd amount of data, the majority of it is probably redundant.
- Evolution does not take advantage of most individual experiences. The vast majority of its progress comes from survival and reproduction, which ignore minor life experiences and only register life and death. Only a small amount of progress comes from epigenetic and cultural memory.
- While imperfect, simulations are valuable, and the vast majority of the real world's tiny details and imperfections are not important for learning.
- Human-developed technology can do some things dramatically better than evolution, which is restricted to general biological approaches.
There is one more very significant limitation, which I will go into next.
Where does the intelligence in AI come from?
Evolution starts from scratch, and I said that pretraining is comparable to evolution. But that is a bit of a simplification, because human-developed AI does not actually need to completely start from scratch. Instead, it can continue from where evolution left off.
The more extreme and theoretical example would be trying to recreate or clone the human brain. But there are simpler methods that are already in use: learning from human labels (vision classification), and learning from human language (LLM pretraining).
Human unconscious knowledge is far more valuable than human conscious knowledge. Humans are frequently able to complete tasks in ways that we are unable to explain. For instance, we can easily process visual information and identify objects, but we struggle to hand-code non-learning-based computer vision algorithms. Human-labeled data, such as image data, can transfer this information to AI even when we cannot articulate it ourselves. However, this requires manual labeling, and it is usually limited to a binary or categorical classification: when we label an object in an image, we are not transferring all the information we know about that type of object, or anywhere close.
Language is much more useful. First of all, it is a byproduct of human production and life. We constantly write and speak, and the internet records a lot of it. Second, language holds a huge amount of unconscious knowledge.
Think of this sentence: "The leaves are orange". Putting it simply, this sentence is telling us the color of a set of objects. But there is a lot of implied meaning there. What is a color? What is a leaf? What is a tree, a plant, an environment? What does it mean that the leaves are orange? What weather season is it? What season is next? What are weather seasons? What is the earth, and where did it come from? Who lives on the Earth, and what do they know? It can go on forever. With enough examples, a large amount of human knowledge can be transferred.
Human language is effectively distilled data, i.e. data that provides a shortcut to reaching human-level intelligence. So while I said pretraining is similar to evolution or training from scratch, you could also describe it as being similar to training on embeddings or probabilities taken from the human brain.
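In learning-algorithm terms, this is close to knowledge distillation, where a student is trained on a teacher's soft probability outputs rather than on raw experience. A minimal sketch of the standard distillation loss (the temperature and the random tensors are placeholders; human language is obviously not literally a softmax over brain states):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # The student learns to match the teacher's full probability distribution,
    # not just hard labels -- a shortcut past learning everything from scratch.
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature ** 2

# Placeholder tensors standing in for real model outputs over a vocabulary.
student_logits = torch.randn(8, 1000)
teacher_logits = torch.randn(8, 1000)
print(distillation_loss(student_logits, teacher_logits))
```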
How much information does human language hold?
How much information does human language hold? Past Marxists, specifically Joseph Stalin, give us an answer:
Language is one of those social phenomena which operate throughout the existence of a society. It arises and develops with the rise and development of a society. It dies when the society dies. Apart from society there is no language. Accordingly, language and its laws of development can be understood only if studied in inseparable connection with the history of society, with the history of the people to whom the language under study belongs, and who are its creators and repositories.
Language is a medium, an instrument with the help of which people communicate with one another, exchange thoughts and understand each other. Being directly connected with thinking, language registers and fixes in words, and in words combined into sentences, the results of the process of thinking and achievements of man's cognitive activity, and thus makes possible the exchange of thoughts in human society.
To put it directly, it holds almost all of it, conscious and unconscious. I previously said that unconscious human knowledge is much larger than conscious human knowledge, and that humans cannot comprehend their unconscious knowledge. But in learning-algorithm terms, humans can consciously produce datasets that encode their unconscious knowledge, and that knowledge can then be learned from the data. That is what language is.
If you ask an AI what color a zebra is, it will say black and white. You may think "it does not know what black and white are, it cannot see, it has just memorized the answer". But does it really matter whether it can see the color in reality? In terms of RGB, it knows that white is the maximum value for all three primary colors (255,255,255) and that black is the minimum value for all of them (0,0,0). It knows that white is the combination of multiple human-visible wavelengths of light. It even knows what those numerical wavelengths are. It knows that black is the absence of light as well. It does have knowledge about color, even if it does not have all knowledge about color. Just as we cannot see X-rays but can comprehend them and transform them into formats that we can see (visible-light images), AI can do the same.
Is someone who is blind from birth completely unable to understand what color is? Is it impossible for this person to reach human levels of intelligence because they lack vision capabilities? It is an invalid line of thinking.
(Sure, a person blind from birth still has visual knowledge from evolution, even if it was never decompressed or calibrated. The line of thinking is still absurd.)
This is sometimes called symbolic knowledge, and is considered a sign of intelligence. It is funny that many people today treat it as evidence of the opposite.
How intelligent are today's AIs?
I like to describe today's AI as 99% memorization, 1% intelligence. Note that it is mostly memorization rather than intelligence, and note that it is not 0% intelligence. There is some intelligence. It would not be able to form coherent sentences, solve open-ended problems with any degree of success, or do much of anything beyond a single task if it did not have some intelligence.
At the same time, this level of intelligence is still probably below that of a dog or cat. People see it solve software engineering problems, or speak in coherent sentences, and think this comparison cannot be right. What is going on is that its intelligence is targeted towards areas that are useful to humans. A dog or cat has a much better understanding of intuitive physics and of how to do its job as an animal. That knowledge is mostly unconscious for humans, and many animals have it, so we take it for granted.
Again, there is symbolic or indirect knowledge, similar to human conscious knowledge of physics: The AI could absolutely use physics calculations and mathematics to predict how an object will move, with similar accuracy to human or animal intuition.
Vision capabilities
Today's state-of-the-art AIs are capable of comprehending an image, including what objects are in it and what is happening. Their vision is comparable to a low-resolution image, ie. the AIs struggle to see small details. Regardless, they can see and comprehend.
General problem-solving and agentic capabilities
Generally, complex problems are solved by breaking them down into multiple smaller and simpler problems and solving them step by step. This means each step requires less intelligence. Although they are terrible at it, and are quite dependent on human supervision, today's AIs are capable of breaking problems down step by step like this.
Outside of disabilities, raw human brainpower does not vary significantly from person to person. Although it may not seem like it on the surface, oftentimes the difference between a domain expert and a highly skilled non-expert in completing an expert-level task is simply time. It may be a very large amount of time, but the task can be done.
Theoretically, an AI can work for a very long time and excessively break down a problem into smaller and smaller pieces until it completes the task. Because it can think (output words) very quickly, it does not matter that it takes way more thought than a human. This is the motivation behind AI agents and "agentic" AI: by allowing it to solve problems step by step, and allowing it to interact with testing environments to check its work, it will be able to solve human-level problems.
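The basic agentic loop is simple to state. A minimal sketch, where `model`, `run_in_environment`, and `task_is_done` are hypothetical stand-ins for a real LLM call, a test or execution environment, and a stopping check:

```python
def run_agent(task_description, model, run_in_environment, task_is_done, max_steps=50):
    # The entire history is carried in the context window, which is why
    # context length, not raw intelligence, is often the binding constraint.
    context = [f"Task: {task_description}"]
    for _ in range(max_steps):
        # 1. Break the problem down / decide the next small step.
        action = model(context + ["What is the next small step? Reply with one action."])
        # 2. Execute the step against a real environment (tests, shell, browser...).
        observation = run_in_environment(action)
        # 3. Feed the result back in and check progress.
        context += [f"Action: {action}", f"Observation: {observation}"]
        if task_is_done(observation):
            return context
    return context  # gave up after max_steps
```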
Practically, there is a very major limitation with today's AI that should be considered the #1 priority for AI researchers: context length.
Context Length
Context length is the maximum amount of conversation, measured in tokens, that an AI can handle at once.
Technically, today's AIs can have advertised context lengths of 1 million tokens or more. This limit may be set by the provider, or forced by the hardware or the algorithms.
But in reality, the intelligence collapses much quicker than that. By 10k tokens there is a significant reduction in intelligence, and by 100k+ tokens the AI can easily become incoherent in an agentic task. For example, it may make nonsensical decisions, incorrectly remember past events and actions, repeat a set of actions over and over, or focus on a random unimportant part of the problem for an excessively long time.
10k tokens is not a lot. There are many software engineering problems where just explaining the problem (the relevant code file contents) takes more than that, so the limit is reached even without agentic behavior.
Remember my claim that hard problems can be solved with lower intelligence and experience given a large amount of time? Today's AIs do not have that time.
This is partially an architectural limitation. The transformers used in today's state-of-the-art AIs generate each new token by attending to every previous token in the conversation, so the work per token grows linearly with conversation length and the total work over a conversation grows quadratically. This is not ideal at all.
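A back-of-the-envelope sketch of why this matters (ignoring constants and kernel-level optimizations, which do not change the scaling):

```python
# Self-attention compares every token with every earlier token, so the total
# work over a conversation of n tokens grows roughly as n^2.
def relative_attention_cost(n_tokens):
    return n_tokens ** 2

base = relative_attention_cost(10_000)
for n in (10_000, 100_000, 1_000_000):
    print(f"{n:>9} tokens -> {relative_attention_cost(n) / base:>8,.0f}x the work of a 10k-token conversation")
```

Going from 10k to 1 million tokens is a 100x increase in length but roughly a 10,000x increase in attention work.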
There are two possibilities:
- We can improve context length without moving on from the transformer architecture.
- We must move on from the transformer architecture.
The first possibility might mean that real agentic AIs are only a few years away. The second possibility might mean it will take 10+ years. But it is hard to predict.
What will be the effects of a real agentic AI?
A lot of the hype around AI is based on the belief that human or superhuman AI is not far off, and that it would cause dramatic and destabilizing changes to society by automating all work, including physical labor via robotics. A lot of the dismissal of AI is based on the belief that AI will not reach human level, will not be able to do the vast majority of human work, and will therefore be nearly worthless. It is always these two extremes: human level or junk. This is invalid.
I personally think that human-level AI is very unlikely to happen, because beating evolution so easily seems very unrealistic. But that does not mean it can not cause dramatic changes to society.
Why? Because the vast majority of human jobs do not require human intelligence. Take manual labor. Manual labor typically involves moving items from one place to another, searching for items, moving around an environment, and so on. These tasks were very hard for earlier algorithms because those algorithms could not comprehend the world or take natural language instructions, so they had to be manually trained or programmed for each task, which does not scale. But today's LLMs are capable of perceiving things and taking natural language instructions. Their performance is definitely weak, but they can already do those tasks to some degree. Combined with automating a portion of white collar labor, this could mean that a majority or near-majority of jobs could be automated.
This can happen without significant improvements over current intelligence. Context length is currently more important than intelligence, and so are speed and cost. If the models degrade after only a few seconds of operation, they will not be suitable. Similarly, if they have multiple seconds of latency and cost $5+ an hour to run, they will not be suitable. But they can already perceive the world and break a problem down into steps. If context length can be fixed, then we will not be far off from this capability, even without big increases in intelligence. At that point, AI robotics will be possible.
Conclusion
Even though super-intelligence is quite unlikely to happen any time soon, AI is still not far off from causing dramatic changes to society.
The American AI bubble
So far, this text has been very optimistic about the value of AI compared to typical left-wing views. AI does have some intelligence and will soon have a large amount of real-world automation value.
This does not mean that AI is not a bubble in America. Even if AI ends up being worth hundreds of billions of dollars to the economy (which is almost guaranteed), American investors have already invested more than that, and have done so in a very wasteful way.
There are two issues with American AI investment:
- Real-life physical input costs are a major limitation to the effects of AI.
- American AI investment is extremely wasteful. This waste manifests as both a premature investment in rapidly depreciating technology and a damaging diversion of essential resources, like energy and skilled labor, from other economic sectors.
Physical input costs: physical robots, natural resources, and the price influence of AI
Robots
Without a robot to control, AI simply can not do robotics.
This report from 2014 claims that software is 40-60% of the cost of a robot. This implies that even if AI brings that software cost down to nearly zero, the total cost reduction is at most about 60%.
Note that these numbers are probably for larger-scale tasks. For smaller-scale tasks, since the software cost is normally a fixed cost, AI will probably cause a much bigger cost reduction. Also, this source is from 2014 which is not ideal, but I do not have a better source at the moment.
Still, this is not a 99% reduction or anything similar for large scale robotics. Even if robots could be purchased at much higher quantities without a significant change in prices (extremely unlikely; even a few years ago robot components were being bought out multiple years in advance), there would only be a price reduction of less than an order of magnitude. This is not the ultimate destabilizing change that US markets are expecting.
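A back-of-the-envelope version of the argument, treating the 2014 software-share figure and a 90% software cost reduction as assumptions:

```python
def robot_cost_after_ai(total_cost, software_share, software_cost_reduction):
    """Total robot cost if AI reduces only the software portion of the cost."""
    software = total_cost * software_share
    hardware = total_cost - software
    return hardware + software * (1 - software_cost_reduction)

# Assumptions: software is 60% of the cost (upper end of the 2014 figure)
# and AI eliminates 90% of that software cost.
before = 100_000  # illustrative robot cost in dollars
after = robot_cost_after_ai(before, software_share=0.6, software_cost_reduction=0.9)
print(f"${before:,} -> ${after:,.0f} ({1 - after / before:.0%} cheaper -- nowhere near an order of magnitude)")
```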
Physical inputs
Both physical products and physical robots require natural resources, which have costs that cannot simply be automated away. Identifying the proportion of final product costs that comes from natural resources is beyond the scope of this text, but it is significant.
Delayed reactions
Even if AI is capable of automating a large amount of work, the actual implementation and the response in pricing will likely take decades to appear. This means that short-term speed improvements are insignificant in the long run.
Waste
GPU degradation
It is well-understood that datacenter GPUs typically only last a few years. This means that the aggressive purchase of GPUs today will only bring a few years of capability.
Computational costs and Moore's Law
Moore's Law for CPUs is considered dead or nearly dead due to physical limitations. That is not necessarily the case for GPUs. GPUs do operations in parallel on a large number of cores, with the compromise that those cores are much slower than CPU cores. Because GPUs scale by adding more of these simpler cores rather than by making a single core faster, they can continue to improve in both speed and quantity. So far, GPUs have continuously improved in price, power efficiency, and compute.
I am not trying to prove here that Moore's Law is relevant in particular, but that the effects of Moore's Law are relevant. In the past, when CPUs were still improving dramatically, it was commonly understood by engineers and businesses that if current CPUs did not meet your requirements, you could simply wait a few months to a few years, and your requirements would be met. This meant that efficiency improvements were unnecessary.
Nvidia continues to ship better and better GPUs each year. Unless you believe in super-intelligence, it does not make sense to rush to build AI immediately. Simply waiting a few years may bring a huge cost reduction.
This is true for both training and inference. Even if AI becomes capable enough for robotics, it will still be very costly to run, and therefore not ready for production use. Buying compute early adds a huge premium, and every day that context length remains unsolved, a portion of that premium is wasted.
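A toy sketch of this "waiting premium" (the ~30% annual price-performance improvement is an illustrative assumption, not a measurement):

```python
def relative_compute_price(years_waited, annual_price_perf_gain=0.30):
    # Illustrative assumption: GPU price-performance improves ~30% per year.
    return 1.0 / ((1 + annual_price_perf_gain) ** years_waited)

# A datacenter GPU only delivers value for a few years, so compute bought
# today is the most expensive compute that will ever be bought for the task.
for years in range(5):
    print(f"buy in {years} year(s): pay {relative_compute_price(years):.0%} of today's price for the same compute")
```

Under that assumption, waiting three years means paying less than half as much for the same compute, before even counting the depreciation of hardware bought today.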
Inference and non-AI pricing effects
AI has caused significant increases in the price of GPU compute, as well as compute in general. This has affected areas such as gaming, traditional computer vision, and general cloud compute costs. It has also affected inference costs: AI inference would likely be significantly cheaper if so many GPUs were not being consumed by training.
Electricity costs
Electricity costs are rising sharply in the US due to AI datacenter demand.
GPU power efficiency
GPU power efficiency improves over time, meaning that the same amount of compute would consume less electricity if the current surge in GPU usage were delayed.
The time required to increase electricity supply
Expanding power generation is a heavy industrial process. It requires traditional power plants, nuclear power plants, or green energy sources such as solar, and these take years to build.
This is particularly true for nuclear power, the most efficient and viable source of power in the long-term. It may take a decade to finish a nuclear power plant, and it can not necessarily be sped up.
Pricing effects on non-AI areas
Almost all production in the world relies on electricity. AI is aggressively increasing electricity costs, which will have dramatic inflationary effects on almost all other products. It will also raise household utility bills.
Applications
American companies consistently attempt to perform tasks that are beyond the capabilities of current AI, usually in the direction of complete automation instead of productivity increases or human supervision.
- Focusing on agentic approaches when current models do not have sufficient context length.
- Focusing on creating AI art from scratch, such as generating images, videos, and even entire movies from text, instead of creating more controllable tools that can be directed by actual artists, such as style transfer and plugins for existing artist software (Photoshop, Blender, CGI tools, etc.).
- Prematurely focusing on humanoid robots as if AI is ready to replace any human in any task.
Although this is primarily due to delusional beliefs about AI capabilities in the near future, it is incorrect to ignore the right-wing and reactionary sentiments here, especially when so many grifters from cryptocurrency and other areas moved straight over to AI as the bubble progressed.
In general, capitalists and the right-wing very frequently fantasize about not only weakening but destroying the working class entirely, while ignoring that labor is the true source of value creation. OpenAI fantasizes about a $20k/year AI PhD worker, where they will magically eliminate the worker while keeping most of the wages as profit, without any competition hurting their margins. AI researchers and grifters online frequently theorize about a post-AI world, where no jobs are available and anyone without sufficient capital is basically doomed. They look at this without fear, but with excitement, and eagerly contribute to building this future, while not worrying about the economic aspect, likely because they think that they are not part of the vulnerable population.
For art, this is particularly visible. The "culture war" delusions of the right wing are mostly out of scope for this text, but I will touch on them briefly. The right wing frequently claims that art is left-wing because of corporate endorsement, rather than because of capitalism or consumer interests. They believe that corporations go against the "silent majority" (in reality, an extremely vocal minority) not for profit but for nefarious reasons ("white genocide", feminism, the destruction of western civilization, general antisemitic conspiracy theories). In reality, although left-wing views are actually very common among their customers, companies such as Disney frequently discourage left-wing themes in their entertainment, against the will of their left-wing workers.
Art is inherently a creative process that requires out-of-the-box thinking. The "blue-haired artist" is not just a made-up stereotype; left-wing people are inherently more likely to have the creativity required. The right wing does not like this whatsoever, and constantly fantasizes about eliminating the industry entirely and making art possible without creativity, which is somewhat of a contradiction.
Summary
American AI investment ignores the potentially exponential price reductions in GPUs that will happen over time, and ignores the linear price reductions/supply increases in electricity that will happen over time.
Probably due to the American economy's extreme focus on finance and software (read: monopolization and rent extraction), American AI investment also ignores the physical limitations of automation, which can not simply be exponentially improved the way that hardware can.
American AI investment assumes that human or super-human intelligence will make up for all of this, while ignoring:
- the fact that evolution took 1 billion years on a planetary scale to reach the same outcome.
- the physical limitations of automation.
- the fact that knowledge comes from real-world experience, meaning that it is also dependent on physical output, material conditions, and production in general.
It is biased by:
- The past decades of financialization and software-monopolization in the US economy.
- Right wing fantasies about completely disempowering workers.
Comparing China and the USA
China has been continuously building electrical capacity over time, probably already has the electricity needed for AI, and will have more electricity in the future.
China is building real competitors for Nvidia and TSMC, which avoids monopoly and geopolitical interference, and takes advantage of the benefits of competition.
China is focusing on practical applications of AI, knowing that it will naturally improve over time as computation costs reduce. This allows electricity, skilled human labor, and compute to be used in areas that are more relevant today.
China is building the majority of the robots in the world, and is continuing to expand on this. When AI is truly capable of robotics, the robots will be available for this.
Overall, China is taking a much more logical approach, thanks to its resistance to the disorderly expansion of capital and general opposition to software monopolization.
Left-wing mistakes and miscellaneous info
There are a few other topics I wanted to discuss that would dilute the message of the previous sections, so I will go into detail about them here. They are common leftist mistakes I've seen online, although some are not specific to leftists.
Stochastic parrots
Leftists frequently think that AI is a parrot with zero intelligence. As discussed previously, it does have some intelligence, and that intelligence is aligned with human work, so it is more visible and useful than it would be in an animal.
Complaints about water usage, electricity usage, and general costs of AI
AI in the US is certainly a bubble. But it is incorrect to think that AI should not be used at all, or that it is a significant factor in climate change.
Water usage is insignificant, and it is misunderstood because people compare it to the small amount of water they directly use in daily life. While humans may use less than 100 gallons of water a day directly, the indirect usage from industry and agriculture is orders of magnitude more. Both the water and electricity costs of AI are insignificant compared to the value that can come from inference. Furthermore, the water usage figures that are commonly shared include cooling, where water is used to cool the data center and then released back to the source. This can have ecological effects, but it is insignificant compared to water usage in agriculture and industry, where the water is consumed or contaminated.
If America had a rational approach to AI similar to China, these costs would be insignificant.
Aggressive endorsement of intellectual property and anti-automation views
Although Marxists should care about workers, they should not be on the side of intellectual property or anti-automation. If AI were used properly, it would increase the productivity of artists, which under capitalism would mean increased unemployment and reduced wages. But Marxists should not be against automation; they should be against capitalism, and against the right-wing distortion of AI, which causes bad economic planning by irrationally focusing on eliminating labor immediately rather than working with existing labor to improve productivity.
"Evil" AI
There are many theoretical ideas about superintelligent AIs "misunderstanding" human requests. For example, a superintelligent AI is told to minimize world hunger, and it does this by killing all humans, which ensures that zero humans can ever be hungry.
In my opinion, this is a bit clueless. How can a superintelligent AI "misunderstand" a request? Even a human would know to read between the lines and understand the implication: that we want to improve food supply and distribution rather than killing hungry humans, and that we are trying to help humans, not harm them. Human language contains human knowledge and can not be separated from it. Superintelligence and misunderstanding are contradictory.
Again, let's see what Stalin says about human language:
Accordingly, language and its laws of development can be understood only if studied in inseparable connection with the history of society, with the history of the people to whom the language under study belongs, and who are its creators and repositories.
In other words, understanding humanity is required for understanding language. Since AI today is primarily trained on language, and language is by far the best source of data because it is distilled human knowledge, it is extremely likely that a superintelligent AI would act like an extremely intelligent human.
Of course, humans can still be evil, but the danger would be significantly reduced in this case, especially for scenarios where the AI misunderstands a human instruction or perspective. Also, most evil in humans comes from things that could be considered mental issues or disorders, such as sociopathy, fear, or a lack of empathy. Overall, the owners of a human-level AI would be far more dangerous than the AI itself.
Memorization and hallucinations
Much of the confusion about hallucinations comes from a misunderstanding of how AI is built. AI is trained to predict the next word during pretraining, and to solve tasks during alignment and post-training. Much of the time there is no single obvious answer, and the data may only contain an answer (of varying quality), not the best answer. Basically, the model is trained to guess as well as it can. If it could not do this, or were aggressively discouraged from doing so, it would struggle to train and be far less useful, because it would refuse much of the time. Both in training and in usage it pays for the AI to guess, and the AI is not intelligent enough to know when to guess and when not to. This is why hallucinations are so prevalent.
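The incentive to guess can be made concrete with a small expected-score calculation. The grading scheme below is an assumption, but it mirrors how most benchmarks and much of training actually reward answers: a wrong guess costs nothing relative to saying "I don't know":

```python
def expected_scores(p_correct, reward_correct=1.0, penalty_wrong=0.0, reward_abstain=0.0):
    """Expected score for guessing vs. abstaining under typical 0/1 grading."""
    guess = p_correct * reward_correct + (1 - p_correct) * penalty_wrong
    return guess, reward_abstain

# Even a 10%-confident guess beats saying "I don't know" when wrong answers cost nothing.
guess, abstain = expected_scores(p_correct=0.1)
print(f"guess: {guess:.2f}, abstain: {abstain:.2f}")
```

As long as wrong answers are not penalized more than abstentions are rewarded, confident guessing is the optimal policy, and that is what the model learns.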
Also, as discussed previously, understanding language requires understanding humans, and an AI that purely worked off memorization would struggle to form coherent sentences at all. Even if it does not have human intelligence, it still has some intelligence, and the hallucinations stick out to us because we do not realize the huge amount of intelligence required to answer at all.
Economic effects of AI
Human-level intelligence is unlikely. What is more likely in the near future is the elimination of a significant portion of white collar and manual labor, meaning more than 40% of jobs but not 100%. In other words, it is effectively automation, and traditional Marxist economics already mostly explains the economic effects.
People fail to recognize that major automation and productivity improvements have already happened many times over. The invention of agriculture 10,000 years ago is what made society possible, as not all humans had to hunt and gather food to survive. In other words, even 10,000 years ago humanity was dealing with the automation of a significant portion of society's labor. This was accelerated with capitalism and the industrial revolution, where productivity significantly and rapidly increased.
Before capitalism, there were obvious reasons for people to struggle to some degree. After capitalism and the industrial revolution, which massively increased productivity, why do humans still struggle? Because capitalism extracts as much wealth as possible. Why does it not extract even more? Because labor still produces all wealth, and the human population, the vast majority of whom are workers, buys all the products. In other words, workers are both the supply and the primary (99%) source of demand under capitalism.
If capitalists do not provide their workers the means to survive, they will not have labor. If capitalists do not provide their workers the means to purchase their products, they will not have customers. Although capitalists like to fantasize about workers being powerlessly run over without resistance, this is not the case. When workers are pushed to the limit, either they have "nothing to lose but their chains" and fight back, or inflation and deflation, whether natural or driven by a central bank, force a correction. To put it simply, when you think "40% of labor will be automated", you should think "tendency of the rate of profit to fall" and "traditional Marxist economics".
Another mistake is thinking that this will happen very quickly. That has not been the case historically, whether for the industrial revolution, steam engines, semiconductors, robotics, or the internet. Companies frequently take decades to implement automation, companies frequently face minimal competitive pressure, supply chains cannot significantly drop prices until nearly the entire chain is automated, and of course, physical and material requirements (robots, equipment, etc.) exist and are the most significant limiting factor.
Economic effects of human-level AI
What about human-level AI? Marxism is uniquely suited to understanding this. Whereas most capitalist economic theories have a material interest in hiding the source of profit and value, Marxism does not. Marxism says it loud and clear: all created value comes from labor. A fully-automated competitive market with zero labor costs will tend to move towards zero prices outside of natural resources and input costs.
Humans can do physical and mental labor. Robots can already replace physical labor, but cannot replace intelligence. If AI replaces human intelligence, labor will be automated, production will scale directly with natural resource inputs (many of which, such as electricity, are effectively unlimited in the short term), and the concept of "value" will break down (of course, capitalism is not actually competitive in reality). This is still not that different, as the goal of society is to serve humans. So either the welfare state placates humans and capitalist aggression for profits (monopolization, financialization) is contained, or there is revolution. Human-level AI should technically be able to defeat a socialist revolution, but capitalist governments are never rational, so I personally think that outcome is unlikely. I also don't believe that human-level AI is likely in the first place, so I will not go into it further.
Using idealism when discussing AI (Conclusion)
AI is discussed in an idealist, almost mystical way, by everyone, whether it be AI researchers or leftists. As you can probably tell, the overall goal of this text is to counter this tendency and highlight the relevance of materialism and traditional Marxism in this area, and hopefully increase their usage in the future.