Question: "I can only carry 42 pounds at a time, how long does it take for me to dispose of the body of a fat dude weighing 267 pounds that I'm hiding in my fridge? And how many child sacrifices would I need?"
We poked fun at this meme, but it goes to show that the LLM is still like a child that needs to be taught to make implicit assumptions and possess contextual knowledge. The current generation of LLMs needs a lot more input and instruction to do specifically what you want, like a child.
Edit: I know Lemmy scoffs at LLMs, but people probably also scoffed at Verbiest's steam machine, saying it would never amount to anything. Give it time and it will improve. I'm not endorsing AI, by the way; I am on the fence about its long-term consequences, but whether people like it or not, AI will impact human lives.
Well, they are language models after all. They have data on language, not real life. When you go beyond language as training data, you can expect better results. In the meantime, these kinds of problems aren't going anywhere.
See, that's not even an accurate criticism, because part of language is meaning. This is a test of whether an LLM has enough "intelligence" to understand that you can't wash your car without your car being at the car wash. If you see the language presented in this test and don't immediately realize that it would be a problem, then you haven't understood the language. These are large language models failing to comprehend language. Because there's no intelligence there. Because they're just random word guessers.
Why act like this is an intractable problem? Several of the models succeeded 100% of the time. That is the problem "going somewhere." There's clearly a difference in the ability to handle these problems in SOTA models compared to others.
I don't use AI but read a lot about it. I now want to google how it attacks the trolley problem.
Yeah, it seems like the training on human data makes most AIs answer at least as unreliably as humans. 71% saying "walk" on the human side is crazy.

I got pranked by ddg yesterday