this post was submitted on 20 Oct 2025
442 points (96.6% liked)
Technology
you are viewing a single comment's thread
view the rest of the comments
AI is a tool. It's not a person, and it's not the be-all-end-all of anything. Just like a person can use Excel and come up with the wrong numbers, people can use AI and come up with the wrong answer.
Just like with every tool, there are people who can’t use them properly, there are people who are good enough to get modest results, and there are people who are experts at their craft who can do amazing things with them. AI is no different.
If you want a calculator, use a calculator - not AI. Use the right tool for the job and you’ll get the best result.
Studies can be made to say anything, and I know the ones you are talking about - they’re bogus.
Except that anyone who can use it properly can also just do the job without it, and the amount of damage it is doing because it’s freely available to everyone is insane.
You're completely ignoring all my arguments. This sorta makes sense, since your original reply was very "just ignore the bad stuff and it's good!", but you're going to have to address those things. I mean, you did say "they're bogus" and then not elaborate at all, but I'm assuming that if you have the energy to keep writing comments then you would also have the energy to do the far more efficient thing and show me why those studies are bogus, right?
No I'm not, I addressed them. LLMs not being able to do maths/spelling is a known shortcoming; anyone using them for that is literally using them wrong. The studies you talk about were ridiculous - I know the ones you mean. Of course people who don't learn something won't know how to do it, for example - but the fact that they can do it with AI is a positive. Obviously getting AI to write an essay means that the person will feel less "proud" of their work, as one of the studies said - but that's not a "bad" thing. It's just like how people no longer need to learn to hunt and gather: the world as it is, and as it always will be from here on out, means we don't need to know that unless we want to.
Again - AI is a tool, and idiots being able to use it to great effect doesn't mean the tool is bad. If anything, that shows how good the tool is.
Those studies aren’t about them feeling less proud, they’re about the degradation of critical thinking skills.
I have repeatedly said that it isn't worth much, largely because it doesn't do anything I can't do with relative ease. Why do you think it's so great? What do you honestly use it for?
As one example, I built an MCP server that lets LLMs access a reporting database, then made a Copilot Agent and integrated it into Teams, so now the entire business can ask a chat bot questions about business data in Teams, using natural language. It can run reports for them on demand, pulling in new columns/tables, and it can identify when something might be wrong, as it also reads from our logs.
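The pattern described above can be sketched in miniature. This is a hypothetical illustration, not the commenter's actual implementation: the real setup uses an MCP server and a Copilot Agent, neither of which is shown in the thread, so here an MCP-style read-only "tool" is stood in by a plain function over an in-memory SQLite database, with an invented `orders` schema.

```python
import sqlite3

def run_query(conn: sqlite3.Connection, sql: str) -> list[tuple]:
    """The tool an agent would call: SELECT-only, so the LLM can't mutate data."""
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("read-only tool: SELECT statements only")
    return conn.execute(sql).fetchall()

# Tiny demo database standing in for the reporting database (schema invented).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, product TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "widget", 9.99), (2, "gadget", 24.50), (3, "widget", 9.99)])

# Given a natural-language question, the agent would emit SQL like this:
rows = run_query(conn,
                 "SELECT product, COUNT(*) FROM orders GROUP BY product ORDER BY product")
print(rows)  # [('gadget', 1), ('widget', 2)]
```

The guard clause is the interesting design point: the agent gets the flexibility to write arbitrary queries while the tool boundary keeps it from writing to the database.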
These people don’t know databases. They don’t know how to read debug/error logs.
I also use GitHub Copilot.
But sure, it can’t be of any help to anyone ever lol
I'll take your word for it rather than just saying "no", but I still have to wonder why it needs "AI", and whether people are going to build up a reliance on it to the point where they can no longer find that info on their own. I mean, hell, like you say, they already can't handle the databases - so why are they even fucking around in there anyway, and why aren't they learning how to use them if they're so important for their jobs?
Because in Teams you could type (or say) "how many customers are still awaiting their refunds for the services that were cancelled last week?" and it will go and do its little AI magic and respond with the answer.
But they can never find it on their own - it's in a database, they have to use some tool to get it. Why can't that tool be AI?
They're not! That's the point. This way it gives them access to information that they would usually have to put in a support ticket for, or run multiple reports and try to compile together, to get. Now they can just ask a bot in Teams a question and get the answer.
Because their job isn't to access the production database.
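For concreteness, the refunds question above reduces to a single query. Everything here is invented for illustration (the thread never shows the real schema): a `refunds` table, a `pending` status, and a hard-coded "last week" date range stand in for whatever the agent would actually generate.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE refunds (customer TEXT, status TEXT, cancelled_on DATE)")
conn.executemany("INSERT INTO refunds VALUES (?, ?, ?)", [
    ("alice", "pending", "2025-10-14"),
    ("bob",   "paid",    "2025-10-15"),
    ("carol", "pending", "2025-10-16"),
    ("dave",  "pending", "2025-09-01"),  # cancelled long before last week
])

# "How many customers are still awaiting refunds for services cancelled last week?"
sql = """
SELECT COUNT(*) FROM refunds
WHERE status = 'pending'
  AND cancelled_on BETWEEN '2025-10-13' AND '2025-10-19'
"""
answer = conn.execute(sql).fetchone()[0]
print(answer)  # 2
```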
So you can’t have a foolproof spreadsheet that just has an option for “refund given” with a date range? Why go through all this AI nonsense? All it’s doing is adding points of failure and giving people the ability to fuck up their prompts.
A spreadsheet? No, sales go through the database. That was also just an example. You could ask it to see which state has the most sales of product X between dates Y and Z for customers between age 18 and 25, as another example. You can ask it anything you can think of to do with the data.
It’s basically a reporting engine that can create ad-hoc reports at will.
It's a lot easier to write a prompt for a report than it is to query the database, especially when you don't know SQL, etc. - or don't even have access to the database.
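The "state with the most sales" example above likewise collapses into one GROUP BY query. This is a sketch under invented assumptions - a flat `sales` table with the state, product, date, and customer age all in one place, which a real reporting database would spread across several tables.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sales (
    state TEXT, product TEXT, sold_on DATE, customer_age INTEGER)""")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?, ?)", [
    ("TX", "X", "2025-06-01", 21),
    ("TX", "X", "2025-06-10", 24),
    ("CA", "X", "2025-06-05", 19),
    ("CA", "X", "2025-06-06", 40),  # outside the 18-25 age band
    ("TX", "Y", "2025-06-07", 22),  # different product
])

# "Which state has the most sales of product X between dates Y and Z
#  for customers between age 18 and 25?"
top = conn.execute("""
    SELECT state, COUNT(*) AS n FROM sales
    WHERE product = 'X'
      AND sold_on BETWEEN '2025-06-01' AND '2025-06-30'
      AND customer_age BETWEEN 18 AND 25
    GROUP BY state
    ORDER BY n DESC
    LIMIT 1
""").fetchone()
print(top)  # ('TX', 2)
```

The value the agent adds is writing this query from the English prompt; the query itself is ordinary SQL.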
Product X > filter by state > date range. Why is this difficult? Gimme another, it’s mildly entertaining even if it’s not exactly difficult.
What product are you using to get that data from a live Azure database?
You literally told me you built something which would allow an LLM to access the data. In order for it to be reliable enough, the data would have to be appropriately sorted already, and there would need to be an interface the LLM could use. So you built all this stuff to make the LLM thing work, and now you're looking at me like building an extremely simple filter is some sorta crazy thing and we need a product to do it.
What the hell were people doing before you built your little chatbot? Just neatly sorting information into a black box and throwing it into the ocean?
Ah ok, so you have no idea what you're talking about then lol. In a nutshell you go "here are your database connection details, now be a good little AI and answer my questions about the database".
"An extremely simple filter", lol. It could be pulling data from 30 different tables, views, stored procedure results, etc. from the database, making insanely complex queries and reports, then cross-referencing those with external logs from a third-party logging service to provide even more data. You seem to think that you pretty much have to build all the queries and reports and services yourself, and then the LLM just calls them with some parameters lol.
You very clearly have zero experience in this area, and have not done even the most basic of research.
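The cross-referencing step described above can be sketched like this. It is illustrative only: a plain Python list stands in for the third-party logging service, and the `jobs` table and log fields are invented, since none of the commenter's real schema or log format appears in the thread.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (job_id TEXT, status TEXT)")
conn.executemany("INSERT INTO jobs VALUES (?, ?)",
                 [("j1", "failed"), ("j2", "ok"), ("j3", "failed")])

# Stand-in for entries fetched from an external logging service.
external_logs = [
    {"job_id": "j1", "message": "timeout contacting payment gateway"},
    {"job_id": "j3", "message": "schema mismatch in import file"},
]

# Query the database, then cross-reference each failed job with its log entry.
failed = [row[0] for row in
          conn.execute("SELECT job_id FROM jobs WHERE status = 'failed'")]
log_by_job = {entry["job_id"]: entry["message"] for entry in external_logs}
report = {job: log_by_job.get(job, "no log entry") for job in failed}
print(report)
# {'j1': 'timeout contacting payment gateway', 'j3': 'schema mismatch in import file'}
```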
Hey dude, I was responding to your incredibly shitty examples. You give me no information and then blame me for not having information - well, that's a you problem. But I suppose if you understood that concept you'd also understand the problems I'm talking about.
Now, again, if the AI can have access to all that information and identify it correctly, then why is it impossible to do what I'm asking? It has to be able to tell the difference somehow, right? And with LLMs being known to have hallucinations and serious misunderstandings, it seems rather ridiculous to rely on one for something that you say is so complex that a person cannot do it. You also haven't answered me, I don't think, on the topic of what people were doing before the LLM.
There are a lot of key elements you’re dodging here and before you start talking shit maybe start addressing them.