rollin@piefed.social 1 point, 1 day ago (last edited 1 day ago)

I'm going to repeat myself, as your last paragraph suggests you missed it: I'm *not* of the view that LLMs are capable of AGI, and I think it's clear to every objective observer paying attention that no LLM has yet reached AGI. All I said is that, like cats and rabbits and lizards and birds, LLMs do exhibit some degree of intelligence.

I have been enjoying talking with you, as it's genuinely refreshing to discuss this with someone who doesn't confuse consciousness with intelligence; the two are clearly distinct. One of the things LLMs give us, for the first time, is a system that has intelligence - it has some kind of model of the universe, however primitive, to which it can apply logical rules - yet clearly has zero consciousness.

You are making some big assumptions, though - in particular, when you said an AGI would "have a subjective sense of self" as soon as it can "move, learn, predict, and update". That's a huge leap, and it feels to me like you are close to making that schoolboy error of mixing up intelligence and consciousness.

I'm less mentally organised than I was yesterday, so for that I apologise. I suspect the problem is that we're working from different ideas of the word "intelligence". It's not a word with a single definition on solid scientific footing; perhaps the biggest problem in neuroscience is that we have no grand unified theory of what makes the mind do "intelligence", whatever that is. I did mistake your position somewhat, but I think it comes down to the fact that neither of us has a fully viable theory of intelligence, and there is too much we cannot be certain of.

I admit that I overreached when I conflated intelligence and consciousness. We are not at that point of theoretical surety; it is a strong hunch, and I will admit to having it. I do feel I ought to point out that LLMs do not create a model - they merely work from one, and a model of nothing but word associations at that. But I do not want to make this a confrontation; I am only explaining a book or two I have read as best I can, in light of my own observations of LLMs.
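
To make concrete what I mean by "working from a model of word associations", here's a toy sketch of my own - a bigram model that only records which word tends to follow which. This is emphatically *not* how a real LLM is built (those are transformers with vastly richer learned statistics), but the shape of the claim is the same: the model is frozen after training, and generation just samples from learned associations.

```python
import random
from collections import defaultdict

# "Training": count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat and the cat slept".split()
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# "Generation": the model is fixed; we only ever sample from it.
def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:  # dead end: no known continuation
            break
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat and the cat"
```

The point of the toy is only this: nothing in the loop ever updates `follows`. The system works *from* its associations; it never builds new ones while it runs.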

From your earlier comments about different degrees of intelligence (animals and such), I have tried to factor that into how I describe what intelligence is and how degrees of it differ. Rats also have a neocortex, and therefore likely use the same pattern of repeating units that we do (cortical columns); theirs is simply smaller, with fewer columns. From what I recall reading, the complexity of behaviour does seem to vary in direct proportion to the number of cortical columns in a neocortex. Importantly, though, complexity of behaviour is only an outward symptom of intelligence, not likely its source. I put forward the "number of cortical columns" hypothesis because it is the best one I know, but I also have to allow that brains without a neocortex can display complex behaviours too, and we would need to make sense of that once we have a workable theory of how intelligence works in ourselves. It is too much to hope for all at once, I think.

So complex behaviour can be expressed by systems that do not closely mimic the mammalian neocortical pattern. Still, I can't imagine anyone would dispute that ours is the dominant paradigm (whether in terms of evolution or technology, for now), so in the interest of keeping a theoretically firm footing, I will confine my remarks about theories of intelligence to the mammalian neocortex until someone can provide a compelling theory that explains at least that type of intelligence. I have not devoted my career to understanding these things, so all I can do is await the verdict and speculate idly with people inclined to do so. I only hope the conversation continues to be enjoyable, because I know better than anyone that I am not the final word on much of anything!