[–] fr33th3p30pl3@lemmygrad.ml 9 points 4 days ago (1 children)

It seemed more like a rhetorical question, tbh. I thought you were either joking or insulting me, so I didn't answer. You probably were, but I'll give you an answer anyway.

AI is nearly useless for Marxism. It can only provide basic definitions and, occasionally, very narrow applications in well-understood contexts, usually contexts that were already discussed by famous Marxists in the training data. If it weren't useless, I'd use it to filter recent Marxist texts for high-quality ones, I'd translate Chinese texts to English so that I could read them, etc., but I do not trust it to do this reliably.

Anyway, because of that, for writing I never really use it beyond proofreading/verification; I've probably taken text from it (individual sentences and rewordings) fewer than 5 times ever for Marxism. Any Marxism it outputs is just a bunch of cargo-cult dramatic prose that doesn't really say anything in the end. In the case of this text, it repeatedly claims that a weakness of the text is its overdependence on "practice", meaning that it doesn't truly understand Mao's On Practice, the theory-practice cycle, or dialectical materialism in general. And it won't dare say anything positive about China in most cases lol. That tiny comparison of America and China makes it say that the text is idolizing China.

People in the replies are complaining about the fact that there are multiple conclusions and sections, which is wild. I'd call them delusional (and honestly I think it flows clearly if you actually read it), but there are actual reasons for this besides neurodivergence. It was originally in different formats:

  • It started off as a Q&A-style format where I corrected common leftist mistakes relating to AI, similar to the 3rd miscellaneous section. It was supposed to be a short post but got very long, so I ended up covering the majority of it in an intelligence section and an economics section, basically making it a real text. The remaining material was either unrelated or would have diluted those sections, so I kept it in a 3rd miscellaneous section and summarized the whole text with the same original goal: bring materialism into discussions of AI.
  • The intelligence section was originally two parts, with the second part being about AI technical capabilities. That second part was way too detailed and technical, explaining each capability (vision, speed, price, short-term vs. long-term vs. working memory) and how AI compares to humans on each one.
  • The American bubble part was originally meant to be combined with the parts about venture capital, monopoly, and reserve currency, but was excluded for being too detailed and too tangential. I also meant to discuss it in another propaganda text about the international reserve currency and US collapse, but I abandoned that text entirely.

Basically, it was massively shortened and was not originally a single post.

Since it was a combination of a bunch of different material that was reorganized multiple times, I did use AI to find where the flow "breaks" (sudden jumps, sentences cut off during copy-pasting, etc.). But as I said, AI is awful at actually writing about Marxism. Although I tried to get assistance from it, it did not suggest any of those rewrite strategies I mentioned, because it doesn't comprehend how the topics are related.

But afterward, I asked it for help with the thesis. It was successful there, so there is one sentence in the text that actually was adapted from AI output. I will show you the process for transparency.

Here is the original:

So far, this text has been very optimistic about the value of AI compared to typical left-wing views. AI does have some intelligence and will soon have a large amount of real-world automation value. This does not mean that AI is not a bubble in America. Even if AI will be worth hundreds of billions of dollars to the economy (which I think is almost guaranteed), American investors have invested more than this, and have done so in a very wasteful way. There is a major limitation to the value of AI: real-life physical input costs. American AI investment is also extremely wasteful because it invests too early and because it aggressively redirects resources from other industries.

I tried to get AI to improve on this, and it gave me this:

While the previous sections established AI's significant practical potential, this value must not be confused with its current financial valuation in the United States. The American approach to AI is a classic speculative bubble, characterized by investment that far exceeds realistic economic returns. This bubble is driven by two core fallacies: a disregard for the hard physical limitations of automation—such as input costs and resource availability—and a profoundly wasteful strategy of premature investment that aggressively redirects capital and energy from other vital sectors of the economy.

THIS is how AI writes. It sees "Marxism" and starts writing super dramatically. Was it mostly logically correct? Yes, but that is because I provided it an almost-finished version of the text from the post, and it didn't need to add anything (I wanted it to write thesis-like statements for the transition to the next chapter). If I hadn't provided the whole text, it would obviously have produced something much worse, because its comprehension is worse than its writing ability. This is one of the best outputs I've gotten from AI for Marxism, and it's still mediocre, even if it managed to not say anything incorrect.

But the last sentence is salvageable there. So I took it and made this:

This waste manifests as both a premature investment in rapidly depreciating technology and a damaging diversion of essential resources, like energy and skilled labor, from other economic sectors

As you can see, I shortened the sentence a bit, and removed the surrounding/previous parts, but kept a lot of the wording from that sentence. So to answer your question, that sentence's wording was significantly assisted by AI. The rest was not.

As for your concerns:

I think it’s fair to expect an author to have spent at least as much time writing as they expect a reader to spend reading.

I started recognizing that AI was becoming able to generalize faster before GPT-3 came out, probably 7 years ago or so. When combined with strong synthetic data strategies, large pretrained vision networks were capable of learning to classify an image from only a few examples (it used to take hundreds or thousands). Later on, similarity learning methods (for example, triplet loss) were capable of identifying objects from a single example without additional training, although with mediocre accuracy. This shows that when pretrained on a large amount of general data, AI models are able to generalize to narrower datasets more quickly, and at much higher quality than if they had trained on that dataset alone.
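For context, here's roughly what that similarity-learning setup looks like; a minimal PyTorch sketch with dummy tensors (the ResNet-18 backbone and sizes are just illustrative stand-ins, not what any specific system used):

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Embedding network: a pretrained backbone with the classifier head
# removed, so it maps images to 512-dim embedding vectors.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()

loss_fn = nn.TripletMarginLoss(margin=1.0)

# anchor/positive are two images of the same object, negative is a
# different object. Dummy tensors here (batch of 8 RGB images).
anchor   = torch.randn(8, 3, 224, 224)
positive = torch.randn(8, 3, 224, 224)
negative = torch.randn(8, 3, 224, 224)

# Training pulls same-object embeddings together, pushes others apart.
loss = loss_fn(backbone(anchor), backbone(positive), backbone(negative))
loss.backward()

# At inference, a brand-new object is identified from a single example
# by nearest-neighbor lookup in embedding space -- no retraining needed.
backbone.eval()
with torch.no_grad():
    query = backbone(torch.randn(1, 3, 224, 224))
    gallery = backbone(torch.randn(100, 3, 224, 224))  # known objects
    match = torch.cdist(query, gallery).argmin()
```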

I'll spare you the details, but I started comparing evolution to machine learning once I saw GPT-3. And I was a lot more extreme about it back then, thinking that it would take hundreds of years at minimum for AI to catch up. I thought that the only way to "bypass" evolution would be to train on brain-scan/MRI-like data or similar (basically cloning the brain, or something close to it). In reality, human language is effectively distilled data/embeddings of human knowledge, so it is very effective for bypassing evolution, at least partially.

Last year, I was looking at AI coding approaches and realized that the traditional conversation format is not ideal. It is effectively a whiteboard interview: the agent cannot use IDE features such as syntax checking, it cannot run tests, etc. I started making an AI agent, thinking that if I gave the AI the ability to run tests and experiment the way a human would, it would perform much better. Instead of a human pointing out the mistakes, the AI could find the mistakes itself, which would dramatically improve the final result. But performance was awful. I realized that AI quality drops dramatically once the context length exceeds ~10k tokens. This matches other people's experience online.
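The loop itself was nothing special; roughly something like this (call_model and apply_patch are hypothetical stubs, not any real API):

```python
import subprocess

def call_model(prompt: str) -> str:
    """Hypothetical stub for whatever chat-completion API is in use."""
    raise NotImplementedError

def apply_patch(patch: str) -> None:
    """Hypothetical stub that writes the model's proposed edits to disk."""
    raise NotImplementedError

def run_tests() -> tuple[bool, str]:
    # Run the project's test suite and capture its output.
    result = subprocess.run(["pytest", "-x"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def agent_loop(task: str, max_iters: int = 5) -> None:
    history = task
    for _ in range(max_iters):
        apply_patch(call_model(history))
        ok, output = run_tests()
        if ok:
            return
        # Feed failures back so the model can find its own mistakes...
        # ...but every iteration grows the context, and past ~10k tokens
        # output quality collapses, which is where this fell apart.
        history += f"\n\nYour patch failed these tests:\n{output}\nTry again."
```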

I started portions of this text late last year (for the other texts I mentioned) and started this specific text a few months ago. I abandoned it and came back to it repeatedly, and then finally finished it when Karpathy started talking about the AI bubble a few weeks ago, since his arguments (evolution) matched mine.

So overall, this post came from 7 years or so of observing and working with AI, took a few months to make, and probably involved ~100 hours of actual direct writing.

To be honest, the complete unwillingness to read is nothing new among leftists whatsoever. People are like "it's so long, it has to be AI", "it's badly written, it has to be AI". Are y'all actually Marxists LOL? The group of people known for writing an absolute fuckton about everything and getting into detailed arguments about minor events from 100 years ago? Hello?

Refusing to read is nothing new. People blame AI now, saying that they can't know whether what they are reading is just slop, because slop is so easily made. But in my experience, it is usually possible to tell whether something is AI within the first 10 seconds of reading it, regardless of length, ESPECIALLY if you know the subject area. It really seems like the same old cope I saw in online discussions 5+ years ago. But I do AI labeling as a job, so I'm probably better at detecting it than most, I guess.

Overall, this comment section makes me wonder if AI is better at Marxism than most human Marxists. Years ago, I noticed that maybe 5% of self-described Marxists actually understood dialectical materialism, while the rest just blindly believed it. I see that hasn't changed.

[–] fr33th3p30pl3@lemmygrad.ml -2 points 5 days ago (6 children)

One is mostly talking possible potential, the other about currently existing systems.

However, I criticize all of those positions, both in the present and for the future.

Knowledge comes from real world practice and experience, whether it is direct (real-world experimentation) or indirect (human language or some other form).

In the short term, AI is limited by human language, since language is what allows bypassing evolution and is the most efficient approach. In the long term, AI is limited by real-world resources and production. In both cases, this invalidates the typical idea of the singularity, where AI becomes super-intelligent in days, weeks, months, or years.

For "talking parrot" views, they are delusional, and by that I mean I honestly can't convince people otherwise, only exposure can. They are pretty common among the left too, as we can see in this comment section.

Take one of the simplest tasks for AI, summarization: how could it possibly summarize a text without comprehending it and its main points to at least a small degree? How could it form the sentences, identify what is unnecessary, or do anything at all? Are we going to pretend that it is simply counting words by frequency, or generating tags? That is delusional. Sure, it messes up at times, but to pretend that it is equivalent to random chance or a simple algorithm is insane, regardless of how many people believe it.
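For reference, "counting words by frequency" is a real classic baseline (extractive summarization), and a minimal sketch of it looks like this. Anyone who has compared output like this to an LLM summary can see the difference immediately:

```python
import re
from collections import Counter

def naive_summary(text: str, n_sentences: int = 3) -> str:
    """Score each sentence by the frequency of its words across the
    whole text, then return the top-scoring sentences in original order.
    No comprehension involved (and it blindly favors long sentences)."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    freq = Counter(re.findall(r"\w+", text.lower()))
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r"\w+", s.lower())),
        reverse=True,
    )
    top = set(scored[:n_sentences])
    return " ".join(s for s in sentences if s in top)
```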

In terms of a more recent capability: if we have a series of images as a camera moves through an environment (for example, a building or a city), SOTA vision-language models such as Gemini 2.5 Pro and GPT-5 are capable of comprehending that we are moving through an environment and of providing instructions for reaching a location from the current position. This means that they can perceive the world, hold spatial memory of it, explain it in human language, and know how to navigate it. Are we going to pretend that this is just an implementation of SLAM and not a (low) level of intelligence?
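This is easy to try yourself. A rough sketch against the OpenAI chat completions API (the file names and model string are placeholders; any SOTA VLM works):

```python
import base64
from openai import OpenAI

client = OpenAI()

def encode(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

# Sequential frames from a camera moving through a building
# (placeholder file names).
frames = [f"walkthrough_{i:03d}.jpg" for i in range(8)]

content = [{"type": "text", "text": (
    "These images are sequential frames from a camera moving through a "
    "building. Describe the layout you have seen so far, then give "
    "step-by-step directions from the current position back to the entrance."
)}]
for path in frames:
    content.append({
        "type": "image_url",
        "image_url": {"url": f"data:image/jpeg;base64,{encode(path)}"},
    })

response = client.chat.completions.create(
    model="gpt-5",  # placeholder; any SOTA vision-language model
    messages=[{"role": "user", "content": content}],
)
print(response.choices[0].message.content)
```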

Considering that most warehouse logistics jobs are effectively "move this object from its current location to a different location", AI is already not far off from this in terms of intelligence: it is able to remember an environment and how to navigate it, and it is able to identify objects. Can it clean up a mess, re-wrap a package, or similar? Not necessarily, but even in warehouses those tasks are delegated to specialized roles with much lower manpower. I'm not really saying that AI has a lot of intelligence; really I'm saying that humans are overkill for the role. In the case of warehouses, there are already manually programmed robots doing something close to this without AI, so it really isn't that hard. The biggest limitation is not the intelligence itself but the context length: if you give it too many images, too many instructions, or a task that takes too many steps, it will fall apart. With a real-world task, that will happen very quickly.

I used to believe that AI intelligence was fake, and I only changed my mind because I do data labeling as my job and also use AI for software engineering. Since I use it constantly, I know that it is both unbelievably stupid and also has non-zero intelligence, and that the ideas of human-level intelligence in the near future, superintelligence in the next 100 years, and zero intelligence today are all wrong.

[–] fr33th3p30pl3@lemmygrad.ml 4 points 5 days ago

If anyone knows a better place to post this type of stuff, let me know.
