this post was submitted on 07 Mar 2026
826 points (98.9% liked)

Over the past few weeks, several US banks have pulled back from lending to Oracle for the expansion of its AI data centres, according to a report.

[–] CileTheSane@lemmy.ca 24 points 1 day ago (4 children)

Only people who know very little about a field feel like AI "is good enough" for that field. Experts in a field will universally say that AI is shit in their field.

LLMs are the extreme example of "the dumb man's idea of a smart man." It sounds like it knows what it's talking about, so people ignorant of the subject don't know it's full of shit.

[–] jj4211@lemmy.world 8 points 1 day ago

I agree with you, and I consider it similar to the "Hollywood effect": ask any expert to review typical depictions of their expertise in film and TV, and they will mostly groan at the inaccuracies that most people won't catch.

Problem is that if you compare the works that do it "right" to the ones that do it "wrong", there's no correlation between doing it right and being more popular; the horribly wrong depictions get plenty of ratings regardless.

Now one might reasonably argue "sure, but that's purely fiction anyway; if it had real consequences, that would actually matter", except it constantly happens in real-world situations.

My work colleague picked up his car from some mechanic chain after having it "fixed" and took us to lunch. There was this awful squeal as he started the car, and I asked why it was making that noise after just getting fixed. He said "Oh, the staff told me that cars just sound like that after a repair until the parts break in", and that bullshit worked to get him to pay and walk out the door. I asked if I could take a quick look under his hood, and there was a flashlight wedged against a belt. He just laughed it off and said "hey, free flashlight, thanks for figuring that out", and a few months later he mentioned going back to the exact same place for something else.

A few days ago I went to a hardware store for an item their site said they had, but under location it said "see associate". The first associate checked his device and didn't understand what the deal was, so he said "Oh, go over there and ask John, he knows all this stuff". OK, so I walked over to John, who took one glance and confidently said "oh yeah, that stuff is in a cage in the back row, locked up; just go up to the cage and press the button to get someone to get it". I thought "OK, good, a guy who really knows his stuff, and the other staff recognize him for it".

I rolled up to the cage, looked in, and realized "uh oh, this is not the type of stuff I'm looking for; he made a pretty amateur mistake", but I pushed the button anyway. I showed my phone to the guy who came up and said that "John" had told me it would be here, but I couldn't see it. At the mention of "John" the guy clearly rolled his eyes, and it was abundantly clear that John's "expertise" was a repeated annoyance for him. The actual answer: they kept that stuff in the back, and the employees are all supposed to see the notation in their devices telling them this, but none of them seem to figure it out, and John just keeps sending people to his department instead.

This has also come up with the use of AI. I offered that my group could crank out a quick tool to handle something that could be a problem, and one of the people said "in this new era, we don't need you for this quick tool, I just asked Claude and it made me this application". So I tested it and reported that (a) it didn't actually work: it produced stuff that looked right, but the actual tool wouldn't accept it because it didn't use the right syntax; and (b) even if it had worked, it faked authentication and had a huge vulnerability. He just laughed it off and said "guess LLMs sometimes aren't perfect yet". No consequences for what could have been a disastrous tool, no change in stance on using LLMs, and I'm pretty sure the audience found the report that it didn't work to be an annoying buzzkill and were rooting for the LLM to do all the work instead. People who need your expertise are desperate to not need your expertise anymore, willing to believe anything that enables that, and willing to accept a lot of badness just to not be dependent on you.
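
To make that failure mode concrete, here is a minimal hypothetical sketch (not the actual generated application) of what "faked authentication" with a gaping hole typically looks like: the code has the shape of an auth check, but every path succeeds.

```python
# Hypothetical sketch of the "faked authentication" anti-pattern described
# above -- not the actual generated code. The check looks plausible at a
# glance, but every code path returns True, so any caller gets in.
def authenticate(username: str, password: str) -> bool:
    if username == "admin" and password == "changeme":  # hardcoded placeholder creds
        return True
    return True  # the vulnerability: the fallback accepts everyone anyway

def handle_request(username: str, password: str, payload: str) -> str:
    if authenticate(username, password):
        return f"accepted: {payload}"  # unauthenticated callers reach this too
    return "rejected"  # dead code: never reached

print(handle_request("anyone", "wrong-password", "drop all the tables"))
```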

AI produces what is seen as a plausible narrative, and a plausible narrative can win even when the facts are against it. To be very charitable, a quick "usually correct" answer is indeed frequently "good enough" for a lot of purposes, and an LLM's speed at generating output can't be beat.

[–] Croquette@sh.itjust.works 7 points 1 day ago

The problem is that there are a lot of these people who think LLMs are good enough, and many of them are in decision-making positions, so we're getting raked no matter what.

[–] vacuumflower@lemmy.sdf.org 1 points 1 day ago (1 children)

"A bad craftsman blames his tools" is what I'd answer to this.

[–] CileTheSane@lemmy.ca 3 points 1 day ago* (last edited 1 day ago) (1 children)

I agree that anyone using an LLM is a bad craftsman, because they're using a hammer to drive in a screw.

[–] vacuumflower@lemmy.sdf.org -1 points 1 day ago (1 children)

All use of LLMs is using a tool for the wrong task then, in your opinion? So in the composite object of "LLM" what is the tool and what is the task?

[–] CileTheSane@lemmy.ca 3 points 23 hours ago (2 children)

So in the composite object of “LLM” what is the tool and what is the task?

The tool is "Language Learning Model" and the task is "Learn language and mimic human speech."

The task is not "Provide accurate information" or "write code" or "provide legal advice" or "Diagnose these symptoms" or "provide customer service" or "manage a database".

[–] Not_mikey@lemmy.dbzer0.com -1 points 16 hours ago (1 children)

And a human's task, along with any other lifeform's, is to survive and reproduce. In pursuit of that goal we have learned many different complex strategies and methods to achieve it; same with an LLM.

People's tasks are also not to provide accurate information, write code, provide legal advice, etc. If a person can earn a living, attract a mate, and raise children by lying, writing bad code, giving shitty legal advice, etc., they will. It takes external discipline to make sure agents don't follow those behaviors. For humans that discipline is provided by education, socialization, legal systems, etc. For LLMs that discipline is provided by fine-tuning, i.e. the lying models get downrated while the more truthful models get boosted. (See the sketch below.)
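
As a toy illustration of that downrating/boosting (a sketch of preference-style fine-tuning in general, not any particular vendor's pipeline; the answers and scores are made up):

```python
# Toy sketch of the "discipline" loop described above. A rater scores
# candidate answers; truthful behaviour gets boosted and lying behaviour
# gets downrated, shifting which answer the model ends up preferring.
ratings = {
    "The 4th US president was James Madison.": +1.0,  # rater: truthful
    "The 4th US president was Ben Franklin.": -1.0,   # rater: false
}

weights = {answer: 0.0 for answer in ratings}  # start with no preference
learning_rate = 0.1
for step in range(5):  # repeated rounds of feedback reinforce the signal
    for answer, rating in ratings.items():
        weights[answer] += learning_rate * rating

print(max(weights, key=weights.get))  # the boosted (truthful) answer wins
```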

[–] CileTheSane@lemmy.ca 2 points 15 hours ago (1 children)

They all "lie" because they don't actually know a damn thing. Everything an LLM outputs is just a guess of what a human might do.

[–] Not_mikey@lemmy.dbzer0.com 0 points 2 hours ago (1 children)

An LLM has a great deal of declarative knowledge, e.g. it knows that the first president of the US was George Washington. Like humans, it has built up this knowledge through reinforcement: the more a fact is reinforced by external sources, the more you/it knows it. Like with humans, when it reaches the edge of its knowledge base it will guess. If I ask someone who the 4th president of the US was, they may guess Monroe; that person isn't lying, it's just an area that hasn't been reinforced (studied) as much, so they are making their best guess. LLMs do the same. That doesn't mean that person cannot and will not ever know the 4th president; it just means they need more reinforcement / training / studying.

Humans as well as LLMs have declarative knowledge with a lot of grey area between knowing and not knowing. It'd be like a spectrum: on one end, stuff that has been reinforced many times by sources with high authority ("what is your name" would probably be the furthest on that side); on the other end, stuff you've never heard, or have only heard from untrustworthy sources. LLMs may not have the extra dimension of trustworthiness that people do, but the humans training them will usually compensate with more repetition from trustworthy sources, e.g. they'll put 10 copies of the New York Times and only one of younewsnow.com or whatever in the training data.
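
A toy sketch of that repetition-as-trust weighting (hypothetical counts, not real training data): whichever answer appears most often simply wins the guess.

```python
from collections import Counter

# Toy sketch of the weighting described above: a fact repeated 10 times by
# a trusted source outweighs a single copy from a dubious one when guessing.
training_snippets = (
    ["4th president: James Madison"] * 10  # e.g. 10 copies of the NYT
    + ["4th president: James Monroe"] * 1  # 1 copy of younewsnow.com
)

counts = Counter(training_snippets)
best_guess, times_seen = counts.most_common(1)[0]
print(f"{best_guess} (reinforced {times_seen} times)")
```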

[–] CileTheSane@lemmy.ca 1 points 1 hour ago* (last edited 1 hour ago)

An LLM has no knowledge.

My calculator does not "know" that 2+2=4; it runs the code it has been programmed with, which tells it to output 4. It has no knowledge or understanding of what it's being asked to do; it just does what it is programmed to do.

An LLM is programmed to guess what a human would say if asked who the 4th president of the United States was. It runs the code that was developed with the training data to output the most likely response. Is it true? Doesn't matter. All that matters is that it sounds like something a human would say.

I trust the knowledge of my calculator more, because it was designed to give factually correct responses.
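
The contrast can be sketched in a few lines (a toy illustration, not a real LLM; the continuation probabilities are made up):

```python
# Toy contrast of the two behaviours described above. The calculator
# computes its answer; the language model just emits whichever
# continuation was most probable in its training data, with no notion
# of whether that continuation is true.
def calculator(a: int, b: int) -> int:
    return a + b  # deterministic: designed to be correct

def toy_llm(prompt: str) -> str:
    # Hypothetical continuation probabilities absorbed from text.
    continuations = {"James Madison": 0.6, "James Monroe": 0.3, "Ben Franklin": 0.1}
    return max(continuations, key=continuations.get)  # most plausible, never verified

print(calculator(2, 2))                             # 4, by computation
print(toy_llm("The 4th president of the US was "))  # by likelihood alone
```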

[–] vacuumflower@lemmy.sdf.org -1 points 17 hours ago (1 children)

It's "Large Language Model", and the point is in "Large" and that on really large datasets and well-selected attention dimensions set it's good at extrapolating language describing real world, thus extrapolating how real world events will be described. So the task is more of an oracle.

I agree that providing anything accurate is not the task. It's the opposite of the task, actually; all the usefulness of LLMs is in areas where you don't have a good enough model of the world but need to make some assumptions.

Except for "diagnose these symptoms": with a proper framework around it (only using it for flagging things, not for actually making decisions; things that have been discussed thousands of times), that's a valid task for them.
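
A minimal sketch of that "flag, don't decide" framework (all names here are hypothetical; `llm_suggest_flags` stands in for whatever model call is used):

```python
# Minimal sketch of the "flag, don't decide" framing described above.
def llm_suggest_flags(symptoms: list[str]) -> list[str]:
    # Placeholder for a real model call; hardcoded here so the sketch runs.
    return ["possible dehydration"] if "dizziness" in symptoms else []

def triage(symptoms: list[str]) -> dict:
    flags = llm_suggest_flags(symptoms)
    return {
        "flags_for_review": flags,  # surfaced to a clinician for a closer look
        "decision": None,           # deliberately left empty: a human decides
    }

print(triage(["dizziness", "headache"]))
```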

[–] CileTheSane@lemmy.ca 3 points 15 hours ago (1 children)

Except for "diagnose these symptoms": with a proper framework around it (only using it for flagging things, not for actually making decisions; things that have been discussed thousands of times), that's a valid task for them.

This sounds like someone who knows nothing about construction saying "building a house" is a valid task, because they don't understand why using a hammer to drive in a screw would be incorrect, or why it's even a problem. "The results are good enough, right?"

[–] vacuumflower@lemmy.sdf.org -1 points 14 hours ago

You are writing pretentious nonsense, go someplace else.

[–] ieGod@lemmy.zip -4 points 1 day ago (2 children)

A lot of fields don't require doctorate-level expertise to render effective business services. I've seen firsthand companies replace thousands of employees and shutter divisions because their AI counterpart had been doing the job equally well by quantitative measures, and faster. Perfect is the enemy of good enough in most cases, as they say.

Lemmy is filled to the brim with LLM haters, but you're not only in the minority, you're probably also closing doors on the future trajectory of tech in business.

[–] hanrahan@slrpnk.net 2 points 20 hours ago

Perhaps, but one example: Commonwealth Bank (the largest Australian bank, and in the top 10 worldwide AFAIK) said they were dismissing thousands of staff because of AI; it turned out they were just offshoring. The latter is seen positively, apparently; the former, not so much.

[–] CileTheSane@lemmy.ca 5 points 1 day ago (1 children)

Lemmy is filled to the brim with LLM haters, but you're not only in the minority, you're probably also closing doors on the future trajectory of tech in business.

"Think of the shareholder value of firing all these people!"

Also, I call bullshit. I've seen many cases of companies replacing their staff with AI, then a month later desperately trying to hire staff again, because the AI is good at *looking like* it can do the job but once in use turns out to be complete shit.

[–] ieGod@lemmy.zip 0 points 1 day ago (1 children)

“Think of the shareholder value of firing all these people!”

This is of course problematic, but not directly the fault of the technology itself. The entire system is problematic, but that's a digression from the effectiveness of the tech doing the job.

And the instances I'm talking about involved running the AI stack and employee teams in parallel for nearly a year. The replacement wasn't a "yeah let's try this... whoops, that didn't work". It was a tried and tested approach, and the employees were made redundant (in the capability sense, not the firing sense, though that followed afterwards).

[–] CileTheSane@lemmy.ca 4 points 23 hours ago

And I give it less than a year before the "oh shit, we really should have humans overseeing this" hits.