ByteJunk

joined 2 years ago
[–] ByteJunk@lemmy.world 5 points 2 days ago

It was also a push for less pollution/environmental benefits.

But the fucking automakers in Europe were sleeping on their asses, and didn't believe that electric cars could work or that people would buy them.

Then came Tesla and bitchslapped the whole lot of them, proving that it does work. By the time they started realising they needed to actually do something, you had the Chinese pumping out awesome cars at literally half the price the EU can make them for, and now they're scared shitless.

I get that letting the EU car industry die is the beginning of the end for Europe, but the answer isn't prolonging gas cars, it's subsidising the industry in a smart way, so they can compete.

[–] ByteJunk@lemmy.world 12 points 2 weeks ago (2 children)

Partition the internet... Like during the Morris worm of '88, when admins had to cut off regional networks to stop machines from being reinfected?

The good old days were, maybe, not that good. :)

[–] ByteJunk@lemmy.world 3 points 2 weeks ago (2 children)

Yep. When you have two aircraft carriers in their airspace, you make the laws.

[–] ByteJunk@lemmy.world 18 points 2 weeks ago

Time for some pipes to go boom boom.

At some point, you need to tilt the table a bit, because if you're the only one playing by the rules, you're getting fucked.

[–] ByteJunk@lemmy.world 3 points 2 weeks ago (1 children)

But then how can you tell that it's not an actual conscious being?

This is the whole plot of so many sci-fi novels.

[–] ByteJunk@lemmy.world 3 points 2 weeks ago (4 children)

I'll bite.

How would you distinguish a sufficiently advanced word calculator from an actual intelligent, conscious agent?

[–] ByteJunk@lemmy.world 19 points 2 weeks ago* (last edited 2 weeks ago)

Let me grab all your downvotes by making counterpoints to this article.

I'm not saying it's not right to bash the fake hype that the likes of altman and alienberg are pushing with their outlandish claims that AGI is around the corner and that LLMs are its precursor. I think that's 100% spot on.

But the news article is trying to offer an opinion as if it's a scientific truth, and this is not acceptable either.

The basis for the article is the supposed "cutting-edge research" that shows language is not the same as intelligence. The problem is that they're referring to a publication from last year that is basically an op-ed, where the authors go over existing literature and theories to cement their view that language is a communication tool and not the foundation of thought.

The original authors do acknowledge that the growth in human intelligence is tightly related to language, yet assert that language is overall a manifestation of intelligence and not a prerequisite.

The nature of human intelligence is a much debated topic, and this doesn't particularly add to the existing theories.

Even if we accept the authors' views, then one might question if LLMs are the path to AGI. Obviously many lead researchers in AI have the same question - most notably, Prof LeCun is leaving Meta precisely because he has the same doubts and wants to progress his research through a different path.

But the problem is that the Verge article then goes on to conclude the following:

an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

This conclusion is a non sequitur. It generalizes a specific point about whether LLMs can evolve into true AGI into an "AI dumb" catchall that ignores even the most basic evidence they themselves give - like AI being able to "solve" Go, or play chess in a way that no human can even comprehend - and, to top it off, concludes that "it will never be able to" in the future.

Looking back at the last 2 years, I don't think anyone can predict what AI research breakthroughs might happen in the next 2, let alone "forever".

[–] ByteJunk@lemmy.world 4 points 1 month ago (1 children)

I wonder why this isn't a class action. Maybe they thought they could win more this way?

I wonder what's gonna happen if other courts reach different verdicts, or different compensation amounts...

[–] ByteJunk@lemmy.world 35 points 1 month ago (3 children)

This is astonishing.

He's hoarding the profits of automation for himself, while socializing the lost wages and poverty that will come from this.

But the issue is that we know full well that they'll escape paying out their massive profits as taxes, which is what MUST happen for the model to work. Shareholder payouts need to be taxed at like 50+% rate or even more.

[–] ByteJunk@lemmy.world 12 points 1 month ago (6 children)

Boots be made for walking. You'll be hard-pressed to find a place that's not struggling with fascism nowadays, or free from patriarchal tendencies, but I assume you're in the US, in which case many countries are indeed better.

[–] ByteJunk@lemmy.world 3 points 1 month ago (1 children)

Yes, and as awful as that sounds, this is the right way of doing things.

What must happen now is that the minister opens a public consultation, and then you go out and bring along every single ally you can muster and you make yourself heard, and you convince everyone and their mothers that these guys can never be elected again.

And if the rest of the country doesn't agree with you, then they don't.

The only issue is when money comes into play, and it's used to amplify the message from one side and drown out the other. Then the country is fucked.

[–] ByteJunk@lemmy.world 1 points 1 month ago (1 children)

I'm not sure neat is the word I'd use to describe that hellish landscape, but it might be a language issue... :)
