
Technology
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech related news or articles.
- Be excellent to each other!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
- Check for duplicates before posting; duplicates may be removed.
- Accounts 7 days and younger will have their posts automatically removed.
Approved Bots
you never had it to begin with. Goddamn leeches.
Maybe they should look into selling AI CP since it seems to be great at generating that shit
The AI industry needs to encourage job seekers to pick up AI skills, in the same way people master Excel to make themselves more employable.
Has anyone in the last 15 years willingly learned Excel? It seems like one of those things you have to learn on the job as your boomer managers insist on using it.
Excel depends on the usage. Way too many people want to use it for what it's bad at, but technically can do, instead of using it for what it's good at.
I'm fairly decent at using Excel, and have automated some database dependent tasks for my coworkers through it, which saves us a lot of time doing menial tasks no one actually wants to do.
I willingly learned excel in the past 15 years!
I have since moved on to open source replacements.
I did and it's awesome. People like to shit on Excel, but there is a reason why every business on earth runs on Excel. It's a great tool and if you really learn it, you can do great things with it.
I love excel, personally. I'm a big ol' nerd and love putting shit in a spreadsheet.
Funny thing about "AI skills" that I've noticed so far is that they are actually just skills in the thing you're trying to get AI to help with. If you're good at that, you can often (though not always) get an effective result. Mostly because you can talk about it at a deeper level and catch mistakes the AI makes.
If you have no idea about the thing, it might look competent to you, but you just won't be catching the mistakes.
In that context, I would call them thought amplifiers. They're pretty effective at the whole "talking about something can help debug the problem" thing: even if the other person doesn't contribute anything of value, you have to look at the problem differently to explain it, and that different perspective might make the solution more visible. On top of that, they can contribute some valuable pieces themselves.
how else are you going to perform, document, and communicate engineering calculations in a format that is simple, intuitive, flexible, and easy to iterate upon?
I did take a few courses on Excel over the last 25 years. I don't use it that much, but most features will never be used by most people anyway.
Yeah, very good analogy actually...
I remember back in the day people putting stuff like 'Microsoft Word' under 'skills'. Instead of thinking 'oh good, they will be able to use Word competently', the impression was 'my god, they think Word is a skill worth bragging about, I'm inclined to believe they have no useful skills'.
'Excel skills' on a resume is just so vague. Some people put it down when they've just figured out they can click and put things into a table; others can quickly roll out a complicated formula, which is at least more of a skill (though I'd rather program the normal way than try to wrangle some of the abominations I've seen in Excel sheets).
Using an LLM is not a skill with a significant acquisition cost. To the extent that it does or does not work, it doesn't really need learning. If anything people who overthink the 'skill' of writing a prompt just end up with stupid superstitions that don't work, and when they first find out that it doesn't work, they just grow new prompt superstitions to add to it to 'fix' the problem.
‘Microsoft Word’ under ‘skills’.
Way back in the day a bunch of people endorsed me on linkedin for a bunch of nonsense like that and I manually hid all of it lol
"Microsoft thinks it has social permission to burn the planet for profit" is all I'm hearing.
Well, they at least have investor permission... and investors are the only people they care about anyway.
Probably in the Hobbes sense that they're not actively revolting
Take away:
- MS is well aware AI is useless.
- Nadella admits they invested billions in something without having the slightest clue what its use case would be ("something something rEpLaCe HuMaNs")
- Nadella is blissfully unaware of the "social" image MS already has in the eyes of the public. You don't have our social permission to still exist as a company!
Well, you already lost that, or rather never actually had it. You all pushed a broken and incomplete product; you need to find a use for it, not us...
"We have to find a compelling use case so we can keep tragedying the commons!"
CEOs aren't people. That's why they lobbied to have companies recognized as people. Stop giving them a stage.
I have a use for it. Put it in the recycle bin.
How can you lose social permission that you never had in the first place?
The peasants might light their torches
"Torching" the gas turbines that sit at AI companies' datacenters would be highly effective, especially since they are outside and only a fence protects them.
It is so dumb that they gas our environment for "AI". It was evil when it was done in WW1 and WW2, and it is still evil today. See:
- https://www.theguardian.com/technology/2026/jan/15/elon-musk-xai-datacenter-memphis
- https://capitalbnews.org/musk-xai-memphis-black-neighborhood-pollution/
It is insane.
This guy knows how to translate billionaire dipshit speak.
Do something useful
What do you mean, that using ChatGPT for a recipe for eggs, sunny side up without any seasoning or toppings and burning up the electricity of a moderate household for a week with my query isn’t useful?
Allrecipes has you covered.
It's not the query that burns through electricity like crazy, it's training the models.
You can run a query yourself at home on a desktop computer, as long as it has enough RAM and compute to support the model you're using (think a few high-end GPUs).
Training a model requires a huge pile of compute, though, and the AI companies are constantly scraping the internet to ~~steal~~find more training material.
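A rough back-of-envelope sketch of that asymmetry. Every energy figure here is an assumed, illustrative number, not a measurement; the point is the ratio between a one-time training run and an individual query, not the absolute values:

```python
# Back-of-envelope: one-time training energy vs. per-query inference energy.
# All figures below are illustrative assumptions, NOT measured numbers.

TRAINING_ENERGY_KWH = 50_000_000   # assumed energy for one large training run (50 GWh)
QUERY_ENERGY_KWH = 0.0003          # assumed energy for a single chat query (~0.3 Wh)
HOUSEHOLD_KWH_PER_WEEK = 200       # assumed weekly usage of a moderate household

# How many individual queries would it take to match one training run?
queries_per_training_run = TRAINING_ENERGY_KWH / QUERY_ENERGY_KWH

# How many queries equal one household-week of electricity?
queries_per_household_week = HOUSEHOLD_KWH_PER_WEEK / QUERY_ENERGY_KWH

print(f"{queries_per_training_run:,.0f} queries ~= one training run")
print(f"{queries_per_household_week:,.0f} queries ~= one household-week")
```

Even if the per-query estimate were off by an order of magnitude, a single query is still nowhere near a household-week of electricity, while the training run dwarfs both.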
- Denial
- Anger
- Bargaining <- They're here
- Depression
- Acceptance
The five stages of corporate grief:
- lies
- venture capital
- marketing
- circular monetization
- private equity sale
Correct, but needs clarification:
Depression refers to the whole economy once the bubble bursts.
Acceptance is when the government agrees to bail them out, because they're too big and the government is too dependent on them to let them die.
I will try to have a balanced take here:
The positives:
- there are some uses for this "AI"
- like an IDE, it can help speed up development, especially for menial but important tasks such as unit test coverage.
- it can be useful for rewording things into the corpo slang that will make you puke, if you have to use that.
- it is useful as a sort of better Google: for things that are documented, but where reading the documentation makes your head hurt, you can ask it to dumb things down, get the core concept, and go from there
The negatives:
- the positives don't justify the environmental externalities of all these AI companies
- the positives don't justify the PC hardware/silicon price hikes
- shoehorning this into everything is capital R retarded.
- AI is a fucking bubble keeping the US economy inflated instead of letting it crash like it should have a while ago
- other than a paid product like Copilot, there is simply very little commercially viable use-case for all this public cloud infrastructure, other than targeting you with more ads, which you can't block because they're in the text output itself.
Overall, I wish the AI bubble would burst already.
menial tasks that are important such as unit test coverage
This is one of the cases where AI is worse. LLMs will generate the tests based on how the code works and not how it is supposed to work. Granted lots of mediocre engineers also use the "freeze the results" method for meaningless test coverage, but at least human beings have ability to reflect on what the hell they are doing at some point.
Granted lots of mediocre engineers also use the "freeze the results" method for meaningless test coverage,
I'd be interested in what you mean by this. Aren't all unit tests just freezing the result? A method is an algorithm: for certain inputs you expect certain outputs, so you unit test those inputs and matching outputs, and add coverage for edge cases because it's cheap to do with unit tests. These tests "freeze the results", or rather lock them in, so you know that piece of code always works as expected.
LLMs will generate the tests based on how the code works and not how it is supposed to work.
You can tell it to generate tests based on how it's supposed to work, you know.
You could have it write unit tests as black box tests, where you only give it access to the function signature. Though even then, it still needs to understand what the test results should be, which will vary from case to case.
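A minimal sketch of the two styles being contrasted here. The function and its "spec" are made up for illustration; the point is where the expected values come from:

```python
# Hypothetical function under test; the name and behaviour are made up for
# this example.
def apply_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

# "Freeze the results" (characterization) style: the expected value is simply
# whatever the current code returns. If apply_discount had a bug, this test
# would lock the bug in rather than catch it.
def test_characterization():
    assert apply_discount(100.0, 10) == 90.0

# Black-box (specification) style: expectations are derived from the
# requirements, without looking at or trusting the implementation.
def test_discount_never_increases_price():
    assert apply_discount(80.0, 25) <= 80.0

def test_zero_discount_is_identity():
    assert apply_discount(49.99, 0) == 49.99
```

Both styles produce "coverage", but only the specification-based tests can disagree with the code, which is the property the comment above is pointing at.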
Dear CEOs. I have revoked my permission. In fact it was never given.