XLE

joined 10 months ago
[–] XLE@piefed.social 106 points 1 month ago (4 children)

This is technically job loss caused by AI...

[–] XLE@piefed.social 6 points 1 month ago

Chatbots aren't giving you advice, they're giving you information.

They aren't giving you information either. They're just compiling tokens.

[–] XLE@piefed.social 8 points 1 month ago* (last edited 1 month ago) (1 children)

General-purpose LLMs' failure to do a task like translation must be very funny for their investors. Even the more translation-focused ones seem to have issues.

[DeepL] translations are said to be generated by a supercomputer that reaches 5.1 petaflops, operated in Iceland on hydropower.

In general, [convolutional neural network]s are somewhat better suited to long, coherent word sequences, but competitors have so far not used them because of their weaknesses relative to recurrent neural networks.

The weaknesses of DeepL are compensated for by supplemental techniques, some of which are publicly known.

(ETA I need to edit my comments to federate them?)

[–] XLE@piefed.social 8 points 1 month ago* (last edited 1 month ago)

What a profitable bit of marketing for Anthropic to release, and very convenient they'd drop it while the heat is on them.

[–] XLE@piefed.social 6 points 1 month ago (1 children)

The better company, despite their efforts to the contrary

[–] XLE@piefed.social 7 points 1 month ago (3 children)

Anthropic is approximately ~~90% as bad~~ as OpenAI

~~Closer to 99% as bad~~

~~Maybe 99.9% as bad~~

~~...But no worse.~~

Okay, fine, maybe worse. There they go, crawling back to the government.

[–] XLE@piefed.social 21 points 1 month ago* (last edited 1 month ago)

Losers fighting other losers, you love to see it:

Anthropic has a strong internal culture that has broadly EA views and values

They mean "effective altruism" in a positive way, but their movement is better defined by two-faced ghouls. I guess Dario Amodei is the latest example, but Sam Bankman-Fried will always be #1 in my book.

I registered the domain name "anthropic.ml" in reference to the philosophical concept of anthropic reasoning, not Anthropic PBC.

Mikhail Samin doth protest too much.

The whole website is self-promo for a suspiciously rich, creepily public person. It's two clicks to his sad dating profile, but only one click to discover he wants to disseminate a weird book by a man accused of sexual abuse in his cultlike compound.

What do the other Yudkowsky Rationalists think? Mostly they sniff their own farts, but one of them breaks out the racism too:

“Anthropic is untrustworthy” is an extremely low-resolution claim. Someone who was trying to help me figure out what’s up with Anthropic should e.g. help me calibrate what they mean...

The other person I’ve drawn the most similar conclusion about was Alexey Guzey... I notice that he and Mikhail are both Russian.

Dr Calipers also starts sharing DMs, which is also funny... Did I mention this is the top response on the website for people who pride themselves on being Rational with a capital R?

[–] XLE@piefed.social 34 points 1 month ago (6 children)

The AI "pushed [Jonathan Gavalas] to acquire illegal firearms and... marked Google CEO Sundar Pichai as an active target".

Somehow, I bet that if he survived and killed the CEO instead, Google wouldn't be so flippant about the "mistake."

[–] XLE@piefed.social 7 points 1 month ago (1 children)

Maybe the technical term is "bullshit" because it returns something meant to appease the user regardless of truth value

But "lie" is definitely a less inaccurate term than "hallucinate": a "hallucination" implies perceiving something that isn't there, even though the underlying data is equally present for outputs deemed non-hallucinations.

[–] XLE@piefed.social 4 points 1 month ago

Touché. I apologize for responding to the argument I've seen elsewhere, not the one you were making.
