[–] NuXCOM_90Percent@lemmy.zip 0 points 8 months ago* (last edited 8 months ago) (3 children)

For low-contrast greyscale security cameras? Sure.

For any modern color camera, even SD, in a decently lit scenario? Bullshit. It is just that most of this tech is usually trained/debugged on the developers and their friends and families and... yeah.

I always love to tell the story of evaluating various facial recognition software, maybe a decade and a half ago. White people never had any problems. Even the various AAPI folks in the group were hit or miss (except for one project out of Taiwan that was ridiculously accurate). And we weren't able to find a single package that consistently identified even the same black person.
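A minimal sketch of the tally that kind of evaluation boils down to, purely to make "consistently identified" concrete (the data layout here is hypothetical, not from any real benchmark):

```python
from collections import defaultdict

def per_group_match_rate(results):
    # `results` is a list of (group, matched) pairs, where `matched` is True
    # when the package correctly re-identified the same person. The layout
    # is an illustrative assumption, just to make the comparison concrete.
    totals = defaultdict(int)
    hits = defaultdict(int)
    for group, matched in results:
        totals[group] += 1
        hits[group] += int(matched)
    return {g: hits[g] / totals[g] for g in totals}

# e.g. per_group_match_rate([("A", True), ("A", True), ("B", False)])
# -> {"A": 1.0, "B": 0.0}
```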

And even professional shills like MKBHD will talk around this problem in their review ads (the Apple Vision video being particularly funny).

[–] fartsparkles@sh.itjust.works 0 points 8 months ago (2 children)

You’re not wrong. Research into models trained on racially balanced datasets has shown better recognition performance with reduced bias. This was with limited, GAN-generated faces, so it still needs to be replicated with real-world data, but it shows promise that balancing training data should reduce bias.
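As a rough illustration of what balancing training data can mean in practice, here is a minimal sketch that oversamples underrepresented groups up to the size of the largest one. Everything here (the function name, the record layout, the `group_of` callback) is an assumption for illustration, not taken from the research above:

```python
import random
from collections import defaultdict

def balance_by_group(samples, group_of, seed=0):
    # `samples` is any list of records; `group_of` maps a record to its
    # demographic group label. Both are illustrative stand-ins.
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for s in samples:
        buckets[group_of(s)].append(s)
    target = max(len(items) for items in buckets.values())
    balanced = []
    for items in buckets.values():
        balanced.extend(items)
        # Oversample smaller groups until every group matches the largest.
        balanced.extend(rng.choices(items, k=target - len(items)))
    rng.shuffle(balanced)
    return balanced
```

Resampling is only one option; reweighting the loss per group is another common choice under the same assumption that group labels are available.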

[–] NuXCOM_90Percent@lemmy.zip 0 points 8 months ago (1 children)

Yeah, but this is (basically) Reddit, and clearly it isn't racism; it's just a problem of multi-megapixel cameras not being sufficient to properly handle the needs of phrenology.

There is definitely some truth to needing to tweak how feature points (?) are computed and the like. But yeah, training data goes a long way, and this is why there was a really big push to get better training datasets out there... until we all realized those would predominantly be used by corporations, and that people don't really want to be the next Lenna because they let some kid take a picture of them for extra credit during an undergrad course.
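For a rough sense of where those tuning knobs live: modern pipelines mostly compare learned face embeddings rather than hand-picked feature points. A minimal sketch of that comparison follows; the function, the embedding inputs, and the 0.6 threshold are all assumptions for illustration, not any particular library's API:

```python
import numpy as np

def same_person(emb_a, emb_b, threshold=0.6):
    # Cosine similarity between two face embeddings; the 0.6 threshold is
    # purely illustrative. Thresholds are tuned per model, and tuning on a
    # skewed dataset is one way per-group error rates end up uneven.
    a = np.asarray(emb_a, dtype=float)
    b = np.asarray(emb_b, dtype=float)
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)
    return float(a @ b) >= threshold
```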