Chat bots aren't giving you advice, they're giving you information.
They aren't giving you information either. They're just compiling tokens.
General-purpose LLMs' failure at a task like translation must be very funny for their investors. Even the more translation-focused ones seem to have issues.
[DeepL] translation is said to be generated using a supercomputer that reaches 5.1 petaflops and is operated in Iceland with hydropower.
In general, [convolutional neural network]s are slightly more suitable for long coherent word sequences, but they have so far not been used by the competition because of their weaknesses compared to recurrent neural networks.
The weaknesses of DeepL are compensated for by supplemental techniques, some of which are publicly known.
(ETA I need to edit my comments to federate them?)
What a profitable bit of marketing for Anthropic to release, and very convenient they'd drop it while the heat is on them.
The better company, despite their efforts to the contrary
Anthropic is approximately ~~90% as bad~~ as OpenAI
~~...But no worse.~~
Okay, fine, maybe worse. There they go, crawling back to the government.
Losers fighting other losers, you love to see it:
Anthropic has a strong internal culture that has broadly EA views and values
They mean "effective altruism" in a positive way, but their movement is better defined by two-faced ghouls. I guess Dario Amodei is the latest example, but Sam Bankman-Fried will always be #1 in my book.
I registered the domain name "anthropic.ml" in reference to the philosophical concept of anthropic reasoning, not Anthropic PBC.
Mikhail Samin doth protest too much.
The whole website is self-promo for a suspiciously rich, creepily public person. It's two clicks to his sad dating profile, but only one click to discover he wants to disseminate a weird book by a man accused of sexual abuse in his cultlike compound.
What do the other Yudkowsky Rationalists think? Mostly they sniff their own farts, but one of them breaks out the racism too:
“Anthropic is untrustworthy” is an extremely low-resolution claim. Someone who was trying to help me figure out what’s up with Anthropic should e.g. help me calibrate what they mean...
The other person I’ve drawn the most similar conclusion about was Alexey Guzey... I notice that he and Mikhail are both Russian.
Dr Calipers also starts sharing DMs, which is funny too... Did I mention this is the top response on the website for people who pride themselves on being Rational with a capital R?
The AI "pushed [Jonathan Gavalas] to acquire illegal firearms and... marked Google CEO Sundar Pichai as an active target".
Somehow, I bet that if he survived and killed the CEO instead, Google wouldn't be so flippant about the "mistake."
Maybe the technical term is "bullshit," because it returns something meant to appease the user regardless of truth value.
But "lie" is definitely a less inaccurate interpretation than "hallucinate," because a "hallucination" implies the generation of something not there, despite the fact the data is equally present for things deemed non-hallucinations.
Touché. I apologize for responding to the argument I've seen elsewhere, not the one you were making.
This is technically job loss caused by AI...