This is the perfect time for LLM-based AI. We are already dealing with a significant population that accepts provable lies as facts, doesn't believe in science, and has no concept of what hypocrisy means. The gross factual errors and invented facts of current AI couldn't possibly fit in better.
✅ Colorado
✅ Connedicut
✅ Delaware
❌ District of Columbia (on a technicality)
✅ Florida
But not
❌ I'aho
❌ Iniana
❌ Marylan
❌ Nevaa
❌ North Akota
❌ Rhoe Islan
❌ South Akota
Everyone knows it's properly spelled "I, the ho" not Idaho. That’s why it didn’t make the list.
Nothing will stop them, they are so crazy that they can turn nonsense into reality, believe me.
Or to put it more simply: they need power for the sake of power itself; there is nothing higher.
Lol @ these fucking losers who think AI is the current answer to any problems
AI will most likely create new problems in the future as it eats up electricity like a world eater, so I fear that these non-humans will soon only turn on electricity for normal people for a few hours a day, instead of the whole day, to save energy for the AI.
I'm not sure about this of course, but it's quite possible.
Third time's the charm! They have to keep the grift going after blockchain and NFTs failed with the general public.
@arararagi@ani.social Don't forget Metaverse, they took a fuckin bath on that.
Funny thing is, the metaverse as they pictured it failed, but VRChat itself had its biggest spike this year.
As long as there's something to sell for untalented morons to feel intelligent & talented, they'll take the bait.
I don't think this gets nearly enough visibility: https://www.academ-ai.info/
Papers in peer-reviewed journals with (extremely strong) evidence of AI shenanigans.
Thanks for sharing! I clicked on it cynical about how confidently we could detect AI usage without risking false allegations, but every single example on their homepage is super clear and I have no doubts - I'm impressed! (and disappointed)
Yup. I had exactly the same trepidation, and then it was all like “As an AI model, I don’t have access to the data you requested, however here are some examples of…”
I have more contempt for the peer reviewers who let those slide into major journals than for the authors. It's like the Brown M&M test; if you didn't spot that blatant howler, then no fucking way did you properly check the rest of the paper before waving it through. The biggest scandal in all this isn't that it happened, it's that the journals involved seem to almost never retract them upon being reported.
They took money away from cancer research programs to fund this.
After we pump another hundred trillion dollars and half the electricity generated globally into AI you're going to feel pretty foolish for this comment.
Just a couple billion more parameters, bro, I swear, it will replace all the workers
- CEOs
Well, it's almost correct. It's just one letter off. Maybe if we invest millions more it will be right next time.
Or maybe it is just not accurate and never will be... I will not ever fully trust AI. I'm sure there are use cases for it, I just don't have any.
Cases where you want something googled quickly to get an answer, and it's low consequence when the answer is wrong.
E.g., a bar argument over whether that guy was in that movie. Or you need a customer service agent but don't actually care about your customers and don't want to pay someone, or you're coding a feature for Windows.
You joke, but I bet you didn't know that Connecticut contained a "d"
I wonder what other words contain letters we don't know about.
The famous 'invisible D' of Connecticut, my favorite SCP.
That actually sounds like a fun SCP - a word that doesn't seem to contain a letter, but when testing for the presence of that letter using an algorithm that exclusively checks for that presence, it reports the letter is indeed present. Any attempt to check where in the word the letter is, or to get a list of all letters in that word, spuriously fails. Containment could be fun, probably involving amnestics and widespread societal influence. I also wonder if they could create an algorithm for checking letter presence that can be performed by hand without leaking any other information to the person performing it, reproducing the anomaly without computers.
The letters that make up words are a common blind spot for AIs: since they are trained on strings of tokens (roughly words), they don't have a good concept of which letters are inside those words or what order they are in.
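To make that concrete, here's a minimal sketch (assuming the tiktoken library is installed; the exact split depends on which encoding you pick):

```python
# Minimal sketch, assuming the `tiktoken` tokenizer library is installed.
# An LLM never "sees" the letters of a word, only opaque sub-word token IDs,
# while the character-level question it keeps getting wrong is trivial in code.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")     # a common BPE encoding

word = "Connecticut"
token_ids = enc.encode(word)
print(token_ids)                               # a short list of integers, not letters
print([enc.decode([t]) for t in token_ids])    # the sub-word chunks the model works with

print("d" in word.lower())                     # False - no "d" in Connecticut
```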
Well, for anyone who knows a bit about how LLMs work, it's pretty obvious why they struggle with identifying the letters in words.
GitLab Enterprise somewhat recently added support for Amazon Q (based on Claude) through an interface they call “GitLab Duo”. I needed to look up something in the GitLab docs, but thought I’d ask Duo/Q instead (the UI has this big button in the top left of every screen to bring up Duo to chat with Q):
(Paraphrasing…)
ME: How do I do X with Amazon Q in GitLab?
Q: Open the Amazon Q menu in the GitLab UI and select the appropriate option.
ME: [:looks for the non-existent menu:]
ME: Where in the UI do I find this menu?
Q: My last response was incorrect. There is no Amazon Q button in GitLab. In fact, there is no integration between GitLab and Amazon Q at all.
ME: [:facepalm:]
Yesterday I asked Claude Sonnet what was on my calendar (since they just sent a pop-up announcing that feature).
It listed my work meetings on Sunday, so I tried to correct it…
You’re absolutely right - I made an error! September 15th is a Sunday, not a weekend day as I implied. Let me correct that: This Week’s Remaining Schedule: Sunday, September 15
Just today when I asked what’s on my calendar, it gave me today and my meetings on the next two Thursdays. Not the meetings in between, just Thursdays.
Something is off in AI land.
Edit: I asked again: it gave me meetings for Thursdays again. Plus it might think I’m driving in F1.
A few weeks ago my Pixel wished me a Happy Birthday when I woke up, and it definitely was not my birthday. Google is definitely letting a shitty LLM write code for it now, but the important thing is they're bypassing human validation.
Stupid. Just stupid.
No, this is Google throwing the cheapest possible shit at you that is barely capable of stringing together 5 coherent sentences and has the reasoning capability of a tapeworm.
Here is the output of the minimalist open Chinese model Qwen3, which runs locally on my six-year-old mid-range PC:
The US states that contain the letter "d" (case-insensitive, as state names are typically capitalized) are as follows. I've verified each state name for the presence of the letter "d" in its standard spelling:
- Colorado (contains "d" in "Colorado")
- Delaware (starts with "D")
- Florida (contains "d" in "Florida")
- Idaho (contains "d" in "Idaho")
- Indiana (contains "d" in "Indiana")
- Maryland (ends with "d" in "Maryland")
- Nevada (contains "d" in "Nevada")
- North Dakota (contains "d" in "Dakota")
- Rhode Island (contains "d" in "Rhode")
- South Dakota (contains "d" in "Dakota")
Total: 10 states.
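If anyone wants to reproduce a local run like that, here's a rough sketch. The commenter didn't say which local runtime they used; this assumes the Ollama runtime with its Python client, and that a qwen3 model tag has been pulled.

```python
# Rough sketch of querying a locally running Qwen3, assuming the Ollama runtime
# and its Python client are installed and a "qwen3" tag has been pulled.
# (The commenter didn't say which local runtime they actually used.)
import ollama

response = ollama.chat(
    model="qwen3",  # assumed tag; pick whatever size fits your hardware
    messages=[{"role": "user", "content": "Which US states contain the letter d?"}],
)
print(response["message"]["content"])
```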
Hey look, the Markov chain showed its biggest weakness (the Markov chain)!
Judging by the output, in the training data Connecticut presumably often follows Colorado in lists of two or more states that include Colorado. There is no other reason for this to occur as far as I know.
Markov-chain-based LLMs (I think that's all of them?) are dice-roll systems constrained to probability maps.
Edit: just to add, because I don't want anyone crawling up my butt about the oversimplification. Yes. I know. That's not how they work. But when simplified to words so simple a child could understand them, it's pretty close.
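A toy sketch of that "dice roll constrained to a probability map" picture, i.e. a word-level Markov chain (real LLMs are transformers, but the sampling step is likewise a weighted roll over next-token probabilities; the tiny corpus here is made up for illustration):

```python
# Toy word-level Markov chain: a dice roll constrained to a probability map.
# Not how transformer LLMs actually work, but it illustrates the simplification above.
import random
from collections import Counter, defaultdict

# Tiny made-up "training data" in which Connecticut keeps showing up right
# after Colorado, as the comment above speculates.
corpus = ("colorado connecticut delaware "
          "colorado connecticut florida "
          "colorado delaware florida").split()

# Probability map: how often each word follows each other word.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def next_word(prev):
    """Weighted dice roll over everything that ever followed `prev`."""
    options = transitions[prev]
    return random.choices(list(options), weights=list(options.values()))[0]

# "connecticut" gets rolled after "colorado" most of the time, regardless of
# whether it actually contains the letter "d".
print([next_word("colorado") for _ in range(5)])
```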
Connedicut.
I wondered if this had been fixed. Not only has it not been, the AI has added Nebraska.
"This is the technology worth trillions of dollars"
You can make anything fly high in the sky with enough helium, just not for long.
(Welcome to the present day Tech Stock Market)
Listen, we just have to boil the ocean five more times.
Then it will hallucinate slightly less.
Or more. There’s no way to be sure since it’s probabilistic.
We're turfing out students by the tens for academic misconduct. They are handing in papers with references that clearly state "generated by ChatGPT". Lazy idiots.
This is why invisible watermarking of AI-generated content is likely to be so effective. Even primitive watermarks like file metadata. It's not hard for anyone with technical knowledge to remove, but the thing with AI-generated content is that anyone who dishonestly uses it when they are not supposed to is probably also too lazy to go through the motions of removing the watermarking.
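For illustration, a minimal sketch of the "primitive metadata watermark" case, assuming the Pillow imaging library; the ai_generated key is a made-up example, not any real tool's field:

```python
# Minimal sketch of a metadata-level "watermark", assuming the Pillow library.
# The "ai_generated" key is a hypothetical example, not any real tool's field.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Tag an image the way a generator might.
img = Image.new("RGB", (64, 64), "white")
tag = PngInfo()
tag.add_text("ai_generated", "yes")            # hypothetical watermark field
img.save("output.png", pnginfo=tag)

# Anyone checking submissions can read the tag straight back out.
print(Image.open("output.png").info)           # includes {'ai_generated': 'yes'}

# Stripping it is just re-saving without the metadata - exactly the step the
# lazy user described above is assumed never to bother with.
Image.open("output.png").save("clean.png")
print(Image.open("clean.png").info)            # tag is gone
```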
Blows my mind people pay money for wrong answers.