Technology
Which posts fit here?
Anything that is at least tangentially connected to technology, social media platforms, information technologies, or tech policy.
Post guidelines
[Opinion] prefix
Opinion (op-ed) articles must use the [Opinion] prefix before the title.
Rules
1. English only
The title and associated content have to be in English.
2. Use original link
The post URL should be the original link to the article (even if paywalled), with archived copies left in the body. This helps avoid duplicate posts when cross-posting.
3. Respectful communication
All communication has to be respectful of differing opinions, viewpoints, and experiences.
4. Inclusivity
Everyone is welcome here regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.
5. Ad hominem attacks
Personal attacks of any kind are expressly forbidden. If you can't argue your position without attacking a person's character, you have already lost the argument.
6. Off-topic tangents
Stay on topic. Keep it relevant.
7. Instance rules may apply
If something is not covered by the community rules but violates the lemmy.zip instance rules, the instance rules will be enforced.
Companion communities
!globalnews@lemmy.zip
!interestingshare@lemmy.zip
If someone is interested in moderating this community, message @brikox@lemmy.zip.
Using a standalone LLM for personal use doesn’t seem like an ethical dilemma to me: the model has already been trained on the data, and if that data was accessible on the web or via a library, I don’t see the harm.
Getting small amounts of medium-trust information on a subject is a good way to get someone interested enough to read a book, watch a YouTube video, or find a website for more information and validate the AI response.
What is the ethical dilemma, exactly, and why/how is this different?
Again, how is this different? At least the web-based ones actually link to where the information came from...
We’re talking about home-use AI searches… you said it was unethical, so maybe you should define exactly why you think that?
Today I wanted to know what the tyre pressures should be for my 2002 Corolla, and AI gave me the answer. I would not have bought a book or gone past the first page of Google for that information.
The possible ethical dilemma is depriving someone of compensation because I used their research and cost them potential revenue. In reality, I would never have bought a book on tyre pressures or car maintenance, and it’s unlikely I would ever have visited a site where adverts would have paid the contributors.
Another dilemma is power consumption. Since the model is already made, that power has already been used, and my tiny local LLM query is going to use far less power than a web-based search.
For a company that might make money, or achieve cost savings, by using AI trained on data that some intended only for human use, I can see how this is not always ethical.
It's very simple: copyright. You're benefitting from someone else's work without providing them any compensation for said work. That doesn't suddenly change because the compute happens on your personal computer.
If you had actually looked it up, you might have actually gotten the correct answer, as well as learned that it's printed on the driver's door jamb of every car.
Why would you think your local LLM would be any more efficient than a web-based one?
This was exactly my point: when it’s for home use, the chance of my depriving anyone of revenue is negligible.
If I’m running a home assistant anyway, not having that assistant constantly connected to the web, relaying my audio for processing and sending the results back, will use less power.
Finally, thanks to the solar panels on my roof, I can guarantee my searches are powered by 100% sunshine.