this post was submitted on 12 Mar 2025
22 points (100.0% liked)

Technology


A survey of more than 2,000 smartphone users by second-hand smartphone marketplace SellCell found that 73% of iPhone users and a whopping 87% of Samsung Galaxy users felt that AI adds little to no value to their smartphone experience.

SellCell only surveyed users with an AI-enabled phone – that's an iPhone 15 Pro or newer, or a Galaxy S22 or newer. The survey doesn't give an exact sample size, but more than 1,000 iPhone users and more than 1,000 Galaxy users were involved.

Further findings show that most users of either platform would not pay for an AI subscription: 86.5% of iPhone users and 94.5% of Galaxy users would refuse to pay for continued access to AI features.

From the data listed so far, it seems that people just aren't using AI. On both platforms, only about two-fifths of those surveyed have even tried the AI features – 41.6% of iPhone users and 46.9% of Galaxy users.

So, that's a majority of users not even bothering with AI in the first place, and a general disinterest in AI features from the user base overall, despite both Apple and Samsung making such a big deal out of AI.

top 50 comments
[–] 9488fcea02a9@sh.itjust.works 5 points 1 month ago (3 children)

I hate that I can no longer trust what comes out of my phone camera to be an accurate representation of reality. I turn off all the AI enhancement stuff, but who knows what kind of fuckery is baked into the firmware.

NO, I don't want fake AI depth of field. NO, I do not want fake AI "makeup" fixing my ugly face. NO, I do not want AI deleting tourists in the background of my picture of the Eiffel Tower.

NO, I do not want AI curating my memories and reality. Sure, my vacation photos have shitty lighting and bad composition. But they are MY photos and MY memories of something I experienced personally. AI should not be "fixing" that for me.

[–] arakhis_@feddit.org 3 points 1 month ago

classic techbro overhype

Add new features into everything without separating them out or offering a choice to opt out.

[–] owl@infosec.pub 2 points 1 month ago

Is there a Black Mirror episode for that? A technology that automatically edits your memories to be inaccurate, but "better".

[–] Flisty@mstdn.social 2 points 1 month ago

@9488fcea02a9 @ForgottenFlux I remember reading a whole article about how Samsung now just shoves a hi-res picture of the moon on top of photos you take that have the moon in them, so it looks like the phone takes impressive photos. Not sure if the scandal meant they removed that "feature" or not

[–] ZeroGravitas@lemm.ee 4 points 1 month ago (4 children)

A 100% accurate AI would be useful. A 99.999% accurate AI is in fact useless, because of the damage that one miss might do.

It's like the French say: Add one drop of wine in a barrel of sewage and you get sewage. Add one drop of sewage in a barrel of wine and you get sewage.

[–] Imacat@lemmy.dbzer0.com 2 points 1 month ago (1 children)

99.999% accurate would be pretty useful. There's plenty of misinformation without AI. Nothing and nobody will be perfect.

Trouble is, they range from 0–95% accurate depending on the topic and the context given, while being very confident when they're wrong.

[–] merc@sh.itjust.works 1 points 1 month ago

The problem really isn't the exact percentage, it's the way it behaves.

It's trained to never say no. It's trained to never be unsure. In many cases an answer of "You can't do that" or "I don't know how to do that" would be extremely useful. But, instead, it's like an improv performer always saying "yes, and" then maybe just inventing some bullshit.

I don't know about you guys, but I frequently end up going down rabbit holes where there are literally zero Google results matching what I need. What I'm looking for is so specialized that nobody has taken the time to write up an indexable web page on how to do it. And that's fine. So, I have to take a step back and figure it out for myself. No big deal. But Google's "helpful" AI will helpfully generate some completely believable bullshit. It's able to take what I'm searching for, match it to something similar, and do a search-and-replace to make it seem like it would work for me.

I'm knowledgeable enough to know that I can just ignore that AI-generated bullshit, but I'm sure there are a lot of other more ~~gullible~~ optimistic people who will take that AI garbage at face value and waste all kinds of time trying to get it working.

To me, the best way to explain LLMs is to say that they're these absolutely amazing devices that can be used to generate movie props. You're directing a movie and you want the hero to pull up a legal document submitted to a US federal court? It can generate one in seconds that would take your writers hours. It's so realistic that you could even have your actors look at it and read from it and it will come across as authentic. It can generate extremely realistic code if you want a hacking scene. It can generate something that looks like a lost Shakespeare play, or an intercept from an alien broadcast, or medical charts that look like exactly what you'd see in a hospital.

But, just like you'd never take a movie prop and try to use it in real life, you should never actually take LLM output at face value. And that's hard, because it's so convincing.

[–] dojan@lemmy.world 0 points 1 month ago (1 children)

I think it largely depends on what kind of AI we're talking about. iOS has had models that let you extract subjects from images for a while now, and that's pretty nifty. Affinity Photo recently got the same feature. Noise cancellation can also be quite useful.

As for LLMs? Fuck off, honestly. My company apparently pays for MS CoPilot, something I only discovered when the garbage popped up the other day. I wrote a few random sentences for it to fix, and the only thing it managed to consistently do was screw the entire text up. Maybe it doesn't handle Swedish? I don't know.

One of the examples I sent to a friend is as follows, though the original was in Swedish:

Microsoft CoPilot is an incredibly poor product. It has a tendency to make up entirely new, nonsensical words, as well as completely mangle the grammar. I really don't understand why we pay for this. It's very disappointing.

And CoPilot was like "yeah, let me fix this for you!"

Microsoft CoPilot is a comedy show without a manuscript. It makes up new nonsense words as though were a word-juggler on circus, and the grammar becomes mang like a bulldzer over a lawn. Why do we pay for this? It is buy a ticket to a show where actosorgets their lines. Entredibly disappointing.

[–] KSPAtlas@sopuli.xyz 0 points 1 month ago (3 children)

Most AIs struggle with languages other than English, unfortunately. I hate how it reinforces the "defaultness" of English.

[–] NuXCOM_90Percent@lemmy.zip -1 points 1 month ago (3 children)

People love to make these claims.

Nothing is "100% accurate" to begin with. Humans spew constant FUD and outright malicious misinformation. Just do some googling for anything medical, for example.

So either we acknowledge that everything is already "sewage" and this changes nothing, or we acknowledge that people can already find value in searching for answers to questions – they just need to apply critical thought toward whether I_Fucked_your_mom_416 on gamefaqs is a valid source or not.

Which gets to my big issue with most of the "AI Assistant" features: they don't source their information. I am all for not needing to remember the magic incantations to restrict my searches to a single site, or to use boolean operators, when I can instead "ask jeeves" as it were. But I still want a citation for where the information was pulled from so I can at least skim it.
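For what it's worth, a citing assistant is simple in principle: retrieve pages first, then force the model to answer only from them with numbered sources. A minimal sketch in Python – the `search_web` helper, the model name, and the prompt wording are all assumptions, not any vendor's actual pipeline:

```python
# Sketch of a citation-first assistant: answer only from retrieved pages
# and show where each claim came from. `search_web` is a hypothetical
# stand-in for whatever search backend you have.
from openai import OpenAI

client = OpenAI()

def search_web(query: str, k: int = 3) -> list[dict]:
    """Hypothetical retriever returning [{'url': ..., 'text': ...}, ...]."""
    raise NotImplementedError("plug in your own search backend")

def answer_with_citations(question: str) -> str:
    docs = search_web(question)
    # Number each source so the model can cite it as [1], [2], ...
    context = "\n\n".join(
        f"[{i + 1}] {d['url']}\n{d['text']}" for i, d in enumerate(docs)
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat model would do
        messages=[
            {"role": "system", "content":
                "Answer ONLY from the numbered sources below. Cite every "
                "claim as [n]. If the sources don't answer the question, say so."},
            {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```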

[–] AnAmericanPotato@programming.dev 0 points 1 month ago (1 children)

99.999% would be fantastic.

90% is not good enough to be a primary feature that discourages inspection (like a naive chatbot).

What we have now is like...I dunno, anywhere from <1% to maybe 80% depending on your use case and definition of accuracy, I guess?

I haven't used Samsung's stuff specifically. Some web search engines do cite their sources, and I find that to be a nice little time-saver. With the prevalence of SEO spam, most results have like one meaningful sentence buried in 10 paragraphs of nonsense. When the AI can effectively extract that tiny morsel of information, it's great.

Ideally, I don't ever want to hear an AI's opinion, and I don't ever want information that's baked into the model from training. I want it to process text with an awareness of complex grammar, syntax, and vocabulary. That's what LLMs are actually good at.

[–] NuXCOM_90Percent@lemmy.zip -1 points 1 month ago

Again: What is the percent "accurate" of an SEO-infested blog about why ivermectin will cure all your problems? What is the percent "accurate" of some kid on gamefaqs insisting that you totally can see Lara's tatas if you do this 90-button command? Or of the people who insist that Jimi was talking about wanting to kiss some dude in Purple Haze?

Everyone is hellbent on insisting that AI hallucinates and... it does. You know who else hallucinates? Dumbfucks. And the internet is chock full of them. And guess what LLMs are training on? It's the same reason I always laugh when people talk about how AI can't do feet or hands while ignoring the existence of Rob Liefeld, or WHY so many cartoon characters only have four fingers.

Like I said: I don't like the AI Assistants that won't tell me where they got information from and it is why I pay for Kagi (they are also AI infested but they put that at higher tiers so I get a better search experience at the tier I pay for). But I 100% use stuff like chatgpt to sift through the ninety bazillion blogs to find me a snippet of a helm chart that I can then deep dive on whether a given function even exists.

But the reality is that people are still benchmarking LLMs against a reality that has never existed. The question shouldn't be "we need this to be 100% accurate and never hallucinate" but rather "what web pages or resources were used to create this answer", followed by doing what we should always be doing: checking the sources to see if they at least seem trustworthy.

[–] ZeroGravitas@lemm.ee 0 points 1 month ago (2 children)

I think you nailed it. In the grand scheme of things, critical thinking is always required.

The problem is that, when it comes to LLMs, people seem to use magical thinking instead. I'm not an artist, so I oohed and aahed at some of the AI art I got to see, especially in the early days, when we weren't flooded with all this AI slop. But when I saw the coding shit it spewed? Thanks, I'll pass.

The only legit use of AI in my field that I know of is a unit test generator, where tests were measured for stability and code coverage increase before being submitted for dev approval. But actual non-trivial production-grade code? Hell no.

[–] PugEnjoyer@lemmy.blahaj.zone -1 points 1 month ago* (last edited 1 month ago)

We're not talking about an AI running a nuclear reactor; this article is about AI assistants on a personal phone. A 0.001% failure rate for apps on your phone isn't that insane, and generally the only consequence of those failures would be that you need to try a slightly different query. Tools like Alexa or Siri mishear user commands probably more than 0.001% of the time, and yet those tools have absolutely caught on for a significant number of people.

The issue is that the failure rate of AI is high enough that you have to vet the outputs, which typically requires about as much work as doing whatever you wanted the AI to do yourself. And using AI for creative things like art or videos is a fun novelty, but not something you do regularly, so your phone pushing apps that you only want to use once in a blue moon is annoying. If AI were actually so reliable that you could query it with anything and 99.999% of the time get back exactly what you wanted, it would absolutely become much more useful.

[–] nuko147@lemm.ee 1 points 1 month ago (1 children)

This is what happens when companies prioritize hype over privacy and try to monetize every innovation. Why pay €1,500 for a phone only to have basic AI features? AI should solve real problems, not be a cash grab.

Imagine if AI actually worked for users:

  • Show me all settings to block data sharing and maximize privacy.
  • Explain how you optimized my battery last week and how much time it saved.
  • Automatically silence spam calls without selling my data to third parties.
  • Detect and block apps that secretly drain data or access my microphone.
  • Automatically organize my photos by topic without uploading them to the cloud (a rough offline sketch of this one follows below).
  • Do everything I could do with Tasker just by saying it in plain words.
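The photo-sorting wish in that list, at least, is already doable fully offline. A toy sketch using a local CLIP model via sentence-transformers – the topic labels and file name are illustrative, and nothing leaves the machine:

```python
# Toy offline photo organizer: zero-shot topic matching with a local
# CLIP model. The model downloads once, then runs entirely on-device.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")

topics = ["beach", "food", "pets", "documents", "screenshots"]
topic_embs = model.encode(topics)  # embed the topic labels as text

def classify_photo(path: str) -> str:
    img_emb = model.encode(Image.open(path))  # embed the image
    scores = util.cos_sim(img_emb, topic_embs)[0]
    return topics[int(scores.argmax())]

print(classify_photo("IMG_1234.jpg"))  # hypothetical photo
```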
[–] arakhis_@feddit.org 0 points 1 month ago (1 children)

How could you ensure AI sorts your pictures privately, if the requests to analyze your sensitive imagery need to be made on a server? (One that built its knowledge by disrespecting others' copyright anyway, lol.)

[–] nuko147@lemm.ee 3 points 1 month ago (1 children)

Why must it connect to a server to do it? Why can't it work offline? Deepseek showed us that it's possible. The companies want everyone to think that AI only works online. For example, the AI image enhancements on my mid-range Samsung phone work offline.

[–] arakhis_@feddit.org 0 points 1 month ago (1 children)

Oh, my bad, sorry, I'm not well versed.

That's why I asked :p

[–] nuko147@lemm.ee 2 points 1 month ago

A lot of people assume that AI necessarily means a permanent server connection. I don't mind if it's a bit slower, as long as it's part of my device.
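That already works today with local runtimes. A minimal sketch using the `ollama` Python client – it assumes Ollama is installed and a small model (the name here is illustrative) was pulled once beforehand; after that, no network connection is needed:

```python
# Fully offline chat: the model runs on the local machine, no cloud.
# Assumes `ollama pull llama3.2` was run once while online.
import ollama

resp = ollama.chat(
    model="llama3.2",  # assumption: any locally pulled model works
    messages=[{"role": "user", "content": "Why does on-device AI help privacy?"}],
)
print(resp["message"]["content"])
```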

[–] WrenFeathers@lemmy.world 1 points 1 month ago (1 children)

I wonder if it has anything to do with the fact that it’s useless.

[–] Sixtyforce@sh.itjust.works -1 points 1 month ago* (last edited 1 month ago) (1 children)

I don't think it's meant to be useful... for us, that is. Just another tool to control and brainwash people. I already see a segment of the population trusting corporate AI as an authority figure in their lives. Now imagine kids growing up with AI and never knowing a world without it. Having memories of times before the internet is a good way to relate/empathize, at least I think so.

How could it not be this way? Algorithms trained people: they're trained to be fed info from the rich and never to seek anything out on their own. I'm not really sure if the corps did it on purpose or not, at least at first; just money pursuit until powerful realizations were made. I look at the declining quality of Google/YouTube search results, as if they're discouraging seeking out information on your own, subtly pushing the path of least resistance back to the algorithm, or now perhaps to a potentially much more sinister "AI" LLM chatbot. Or I'm fucking crazy, you tell me.

Like, we say dead internet. Except... nothing is actually stopping us from ditching corporate internet websites and just going back to smaller, privately owned or donation-run forums.

A big part of why I'm happy to be here on the newfangled fediverse: even if it hasn't exploded in popularity, at least it has like-minded people, or you wouldn't be here.

Check out debate boards. Full of morons using ChatGPT to speak for them and they'll both openly admit it and get mad at you for calling it dehumanizing and disrespectful.

/tinfoil hat

Edit to add more old-man-yells-at-clouds(ervers) detail, apologies. Kinda chewing through these complex ideas on the fly.

[–] prole@lemmy.blahaj.zone 0 points 1 month ago* (last edited 1 month ago) (1 children)

> nothing is actually stopping us from ditching corporate internet websites and just going back to smaller, privately owned or donation-run forums

I didn't even realize until you said this that I already do that lol


"PLEASE use our hilariously power inefficient wrongness machine."

[–] Underwaterbob@lemm.ee 0 points 1 month ago (1 children)

Not only that, but Google Assistant is getting consistently less reliable. Like half the time now I ask it a question and it just does an image search, or completely misunderstands me in some other manner. They deserted working, decent tech for unreliable, unwanted tech because ???

[–] LeninOnAPrayer@lemm.ee 0 points 1 month ago* (last edited 1 month ago) (1 children)

Profit potential. Think of AI as one big data collector to sell you shit. It is significantly better at learning things about you than any metadata or cookies ever could.

If you think of this AI push as "trying to make a better product" it will not make much sense. If you think of the AI push as "how do I collect more data on all my users and better directly influence their choices" it makes a lot more sense.

[–] Underwaterbob@lemm.ee 0 points 1 month ago (1 children)

Well, that's depressing. Where's my Star Trek future?

[–] LeninOnAPrayer@lemm.ee 1 points 1 month ago

Star Trek was space communism. So we'd have to kill the capitalists first.

We're heading more towards Star Wars and the Empire. See you in the resistance.

[–] fritobugger2017@lemmy.world 0 points 1 month ago (1 children)

My kids' school just did a survey, and part of it included questions about teaching technology, with a big focus on the use of AI. My response was "No", full stop. They need to learn how to do traditional research first so that they can spot-check the error-ridden results generated by AI. Damn it, school, get off the bandwagon.

[–] Akito@lemm.ee 0 points 1 month ago (2 children)

And what exactly is the difference between researching shit sources on the plain internet and getting the same shit via an AI, except that doing it manually takes 6 hours and with AI it takes 2 minutes?

[–] TylerBourbon@lemmy.world 0 points 1 month ago (4 children)

I do not need it, and I hate how it's constantly forced upon me.

Current AI feels like the Metaverse. There's no demand or need for it, yet they're trying their damndest to shove it into anything and everything, like it's a miracle answer to every problem, including problems that don't exist yet.

And all I see it doing is making things worse. People use it to write essays in school; that just makes them dumber, because they don't have to show they understand the topic they're writing about. And considering AI doesn't exactly have a flawless record when it comes to accuracy, relying on it for anything is just not a good idea currently.

[–] Obelix@feddit.org 0 points 1 month ago (1 children)

People here like to shit on AI, but it has its use cases. It's nice that I can search for "horse" in Google Photos and get back all pictures of horses, and it is also really great for creating small scripts. I, however, do not need an LLM chatbot on my phone, and I really don't want it everywhere in every fucking app with a subscription model.

[–] MattTheProgrammer@lemmy.world 0 points 1 month ago (1 children)

People wouldn't shit on AI if it wasn't needlessly crammed down our throats.

[–] Guns0rWeD13@lemmy.world 1 points 1 month ago

People wouldn't shit on AI if it were actually replacing our jobs without taking our pay, and creating a system of resource management free from human greed and error.

[–] Zak@lemmy.world 0 points 1 month ago (1 children)

The AI thing I'd really like is an on-device classifier that decides with reasonably high reliability whether I would want my phone to interrupt me with a given notification or not. I already don't allow useless notifications, but a message from a friend might be a question about something urgent, or a cat picture.

What I don't want is:

  • Ways to make fake photographs
  • Summaries of messages I could just skim the old-fashioned way
  • Easier access to LLM chatbots

It seems like those are the main AI features bundled on phones now, and I have no use for any of them.
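A triage model like the one described above doesn't even need an LLM. A toy sketch with scikit-learn, trained on the user's own open/dismiss history – the features and example data are purely illustrative assumptions:

```python
# Toy on-device notification triage: learn which notifications the user
# actually opens vs. swipes away, then only buzz for the likely keepers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# (notification text, 1 = user opened it, 0 = user dismissed it)
history = [
    ("Your package was delivered", 1),
    ("50% off shoes today only!", 0),
    ("Mom: call me when you can", 1),
    ("New login to your account", 1),
    ("Play now and win free coins!", 0),
]
texts, opened = zip(*history)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, opened)

def should_interrupt(notification: str, threshold: float = 0.5) -> bool:
    """Interrupt only if the user would probably care."""
    return model.predict_proba([notification])[0][1] >= threshold

print(should_interrupt("Dad: are you coming for dinner?"))
print(should_interrupt("Flash sale ends tonight!"))
```

In a real phone this would presumably retrain on-device as the open/dismiss history grows, so the model stays personal without anything leaving the device.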

[–] drthunder@midwest.social 0 points 1 month ago* (last edited 1 month ago) (1 children)

That's useful AI that doesn't take billions of dollars to train, though. (it's also a great idea and I'd be down for it)

[–] dustyData@lemmy.world 1 points 1 month ago

You mean paying money to people to actually program, in fair exchange for their labor and expertise, instead of stealing it from the internet? What are you, a socialist?

/s

[–] spankmonkey@lemmy.world 0 points 1 month ago* (last edited 1 month ago) (1 children)

"Stop trying to make ~~fetch~~ AI happen. It's not going to happen."

AI is worse than adding no value – it is an actual detriment.

[–] octopus_ink@slrpnk.net 1 points 1 month ago

I feel like I'm back in those years of "You really want a 3D TV, right? Right? 3D is what you've been waiting for, right?" all over again, but with a different technology.

It will be VR's turn again next.

I admit I'm really rooting for affordable, real-world, daily-use AR though.

[–] PeteWheeler@lemmy.world 0 points 1 month ago (1 children)

AI is useless for most people because it does not solve any problems they have day to day. The most common use is to make their emails sound less angry and frustrated.

AI is useful for tech people; it makes reading documentation or learning anything new a million times better. And when the AI does get something wrong, you'll know eventually, because what you learned from the AI won't work in real life – which is part of the normal learning process anyway.

It is great as a custom tutor, but other than that it really doesn't make anything of substance by itself.

[–] nickwitha_k@lemmy.sdf.org 0 points 1 month ago (1 children)

The fact that I can't trust the AI message to be remotely factual makes that sort of use case pointless to me. If I grep and sift through docs, I'll have better comprehension of what I'm trying to figure out. With AI slop, I just end up having to hunt for what it messed up, without any context, wasting my time and patience.

[–] affenlehrer@feddit.org 1 points 1 month ago

I really recommend watching this introduction by Andrej Karpathy https://www.youtube.com/watch?v=7xTGNNLPyMI

One part that really stuck with me is that the data baked into the model is more like a fading memory, while the stuff in the context window is more like working memory. Since I learned that, I tend to put as much information as possible into the context window before asking questions about it. This improved the results drastically and reduced hallucinations.
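In practice that just means pasting the reference text into the prompt before the question. A minimal sketch against the OpenAI chat API – the model name and the file are assumptions; any large-context chat model would do:

```python
# Ground the model in your own material: put the full document into the
# context window, then ask the question against it.
from openai import OpenAI

client = OpenAI()

with open("manual.txt") as f:  # assumption: whatever docs you care about
    document = f.read()

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any large-context chat model
    messages=[
        {"role": "system", "content":
            "Answer using only the document below. If it isn't in the "
            "document, say so instead of guessing."},
        {"role": "user", "content": f"{document}\n\nQuestion: How do I reset it?"},
    ],
)
print(resp.choices[0].message.content)
```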

[–] lack@lemmy.world 0 points 1 month ago (1 children)

Apple Intelligence is trash and only lasted 2 days on my 16 Pro. Not turning it back on either.

[–] Daelsky@lemmy.ca 1 points 1 month ago

I've been on my iPhone 12 since it came out in September 2020 (I bought it on Halloween 2020 lol), and apart from battery health being at 77%, I have NO reasons to upgrade. Even then, I'll change the battery when it gets to 70% and… that's it.

Phones just aren't exciting anymore. I used to watch so many phone reviews on YouTube, and now they are all just… the same. Folding phones aren't that interesting to me. I saw that there is a new battery technology, but that's like the only new fun feature I'm interested in.

Most performance upgrades aren’t used in the real world and AI suuuuucks

[–] UltraMagnus0001@lemmy.world -1 points 1 month ago

It actually made my Google speakers' assistant dumber, I think because they're trying to merge the two.
