this post was submitted on 03 Jul 2025
229 points (96.7% liked)


The Basque Country is implementing Quantus Skin in its health clinics after an investment of 1.6 million euros. Specialists criticise the artificial intelligence developed by the Asisa subsidiary due to its "poor" and "dangerous" results. The algorithm has been trained only with data from white patients.

top 34 comments
[–] Tollana1234567@lemmy.today 10 points 18 hours ago* (last edited 18 hours ago)

You can't diagnose melanoma by skin features alone; you need a biopsy and genetic testing too. Furthermore, some types of melanoma don't always show the typical ABCDE signs.

Histopathology gives the accurate indication of whether it's melanoma or something else, and how far it has spread in the sample.

[–] D4MR0D@lemmy.world 24 points 1 day ago (1 children)

If someone with dark skin gets a real doctor to look at them, because it's known that this thing doesn't work at all in their case, then they're actually better off.

[–] ryannathans@aussie.zone 6 points 22 hours ago

Doctors are best at diagnosing skin cancer in people of the same skin type as themselves; it's just a matter of familiarity. Black people should have black skin doctors for the highest success rates, and white people should have white doctors for the highest success rates. Perhaps the next generation of doctors will show broader success, but that remains to be seen in research.

[–] phoenixz@lemmy.ca 8 points 1 day ago (3 children)

Though I get the point, I would caution against calling "racism!" when AI can't detect moles or cancers well on people with darker skin; it's harder to see darker areas on darker skin. That is physics, not racism.

[–] zout@fedia.io 47 points 1 day ago (1 children)

The racism is in training on white patients only, not in the abilities of the AI in this case.

[–] Hardeehar@lemmy.world -5 points 1 day ago (2 children)

It's still not racism. The article itself says there is a lack of diversity in the training data. Training data will consist of 100% "obvious" pictures of skin cancers, which, in most books and online images I've looked into, seem to be majority fair-skinned individuals.

"...such algorithms perform worse on black people, which is not due to technical problems, but to a lack of diversity in the training data..."

Calling things out as racist really works to mask what a useful tool this could be to help screen for skin cancers.

[–] Revan343@lemmy.ca 13 points 1 day ago

Training data will consist of 100% "obvious" pictures of skin cancers

Only if you're using shitty training data
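
A rough illustration: an audit like the sketch below would catch that kind of imbalance before training ever starts. This is a minimal sketch, assuming the dataset's metadata carries Fitzpatrick skin-type labels (the file and column names are hypothetical; many public dermoscopy datasets don't record skin type at all, which is part of the problem).

```python
from collections import Counter
import csv

# Tally images per Fitzpatrick skin type (I-VI) from a hypothetical
# metadata file; "fitzpatrick_type" is an assumed column name.
with open("metadata.csv", newline="") as f:
    counts = Counter(row["fitzpatrick_type"] for row in csv.DictReader(f))

total = sum(counts.values())
for skin_type in sorted(counts):
    share = counts[skin_type] / total
    flag = "  <-- underrepresented" if share < 0.05 else ""  # arbitrary 5% threshold
    print(f"Type {skin_type}: {counts[skin_type]} images ({share:.1%}){flag}")
```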

[–] xorollo@leminal.space 20 points 1 day ago (3 children)

Why is there a lack of robust training data across skin colors? Could it be that people with darker skin colors have less access to cutting-edge medical care and research studies? That would be pretty racist.

There is a similar bias in medical literature for genders. Many studies only consider males. That is sexist.

[–] BassTurd@lemmy.world 9 points 1 day ago* (last edited 1 day ago) (3 children)

My only real counter to this is: who created the dataset, and did the people creating the app have any power to affect that? To me, saying something is racist implies intent. This situation could be that, but it could also be a case where the data just isn't racially diverse, which doesn't necessarily imply racism.

There's a plethora of reasons the dataset may be mostly fair-skinned. To prattle off a couple that come to mind (all of this may be known, idk; these are ignorant possibilities on my side): perhaps fair-skinned people are more susceptible, so there's more data; like you mentioned, dark-skinned individuals may have fewer options to get medical help; or maybe the dataset came from a region without many dark-skinned patients. Again, all ignorant speculation on my part, but I would say that none of those options inherently makes the model racist, just not a good model. Maybe racist actions led to a bad dataset, but if that's out of the devs' control, then I wouldn't personally put that negative on the model.

Also, my interpretation of what racist means may differ, so there's that too. Or it could have all been done intentionally, in which case, yeah, racist 100%.

Edit: I actually read the article. It sounds like they used public datasets that did have mostly Caucasian people. They also acknowledged that fair-skinned people are significantly more likely to get melanoma, which does lend some credence to the unbalanced dataset. It's still not ideal, but I would also say that maybe nobody should put all of their eggs in an AI screening tool, especially for something like cancer.
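
And even with an unbalanced training set, the danger would at least be visible if accuracy were reported per skin-type subgroup instead of in aggregate. A minimal sketch of what stratified evaluation could look like (all labels and arrays here are made-up illustrations, not the Quantus Skin data):

```python
import numpy as np

def sensitivity_by_group(y_true, y_pred, groups):
    """Recall on melanoma-positive cases, split by skin-type group.

    A model can look fine on aggregate metrics while missing most
    cancers in an underrepresented group.
    """
    results = {}
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)  # true melanomas in this group
        results[g] = (y_pred[positives] == 1).mean() if positives.any() else float("nan")
    return results

# Made-up labels for illustration: 1 = melanoma; groups = Fitzpatrick bands
y_true = np.array([1, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 0])
groups = np.array(["I-II", "I-II", "I-II", "I-II", "V-VI", "V-VI", "V-VI", "V-VI"])
print(sensitivity_by_group(y_true, y_pred, groups))
# e.g. {'I-II': 1.0, 'V-VI': 0.0} - perfect for one group, useless for the other
```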

[–] xorollo@leminal.space 10 points 23 hours ago

There is a more specific word for it: Institutional racism.

Institutional racism, also known as systemic racism, is a form of institutional discrimination based on race or ethnic group and can include policies and practices that exist throughout a whole society or organization that result in and support a continued unfair advantage to some people and unfair or harmful treatment of others. It manifests as discrimination in areas such as criminal justice, employment, housing, healthcare, education and political representation.[1]

[–] WanderingThoughts@europe.pub 5 points 1 day ago* (last edited 1 day ago) (1 children)

My only real counter to this is: who created the dataset, and did the people creating the app have any power to affect that?

A lot of AI research in general was first done by largely Caucasian students, so the datasets they used skewed that way, and other projects very often started from those initial datasets. The historical reason there are more students of that skin tone is that they generally have the most money to finance their schooling, and that's because past racism held African-American families back from accumulating wealth and accessing education. That still affects their finances and chances today, even assuming there's no racism still going on in scholarships and admissions these days.

Not saying this is specifically happening for this project, just a lot of AI projects in general. It causes issues with facial recognition in lots of apps for example.

[–] BassTurd@lemmy.world 1 points 1 day ago

They did touch on the facial recognition aspect as well. My main thing is: does that make the model racist if the source data isn't diverse? I'd argue that it's not, although racist decisions may have led to a poor dataset.

[–] AbidanYre@lemmy.world 2 points 1 day ago (2 children)

Seems more like a byproduct of racism than racist in and of itself.

[–] SreudianFlip@sh.itjust.works 7 points 1 day ago

Yes, we call that "structural racism".

[–] BassTurd@lemmy.world 4 points 1 day ago

I think that's a very likely possibility, but as with most things, there are other factors that could have affected the dataset as well.

[–] stephen01king@lemmy.zip -1 points 1 day ago (1 children)

Yeah, it does make it racist, but which party is performing the racist act? The AI, the AI trainer, the data collector, or the system that prioritises white patients? That's the important distinction that simply calling it racist fails to address.

[–] xorollo@leminal.space 1 points 23 hours ago

There is a more specific word for it: Institutional racism.

Institutional racism, also known as systemic racism, is a form of institutional discrimination based on race or ethnic group and can include policies and practices that exist throughout a whole society or organization that result in and support a continued unfair advantage to some people and unfair or harmful treatment of others. It manifests as discrimination in areas such as criminal justice, employment, housing, healthcare, education and political representation.[1]

[–] Hardeehar@lemmy.world 0 points 1 day ago* (last edited 1 day ago) (2 children)

I never said that the data gathered over decades wasn't biased in some way towards racial prejudice, discrimination, or social/cultural norms over history. I am quite aware of those things.

But if the majority of the data you have at your disposal is from fair-skinned people, and that's all you have... using it is not racist.

Would you prefer that no data was used, or that we wait until the full spectrum of people is represented in sufficient quantities, or that they make stuff up?

This is what they have. They're being called racist for trying to help and create something to speed up diagnosis that helps ALL people.

The creators of this AI screening tool do not have any power over how the data was collected. They're not racist and it's quite ignorant to reason that they are.

[–] xorollo@leminal.space 5 points 23 hours ago

I would prefer that as a community, we acknowledge the existence of this bias in healthcare data, and also acknowledge how harmful that bias is while using adequate resources to remedy the known issues.

There is a more specific word for it: Institutional racism.

Institutional racism, also known as systemic racism, is a form of institutional discrimination based on race or ethnic group and can include policies and practices that exist throughout a whole society or organization that result in and support a continued unfair advantage to some people and unfair or harmful treatment of others. It manifests as discrimination in areas such as criminal justice, employment, housing, healthcare, education and political representation.[1]

[–] LustyArgonianMana@lemmy.world 1 points 20 hours ago (1 children)

They absolutely have power over the data sets.

They could also fund research into other cancers and work with other countries like ones in Africa where there are more black people to sample.

It's impossible to know intent, but it does seem pretty intentionally eugenicist of them to do this when it has been widely criticized and they refuse to fix it. So I'd say it is explicitly racist.

[–] Hardeehar@lemmy.world 0 points 19 hours ago (1 children)

Eugenics??? That's crazy.

So you'd prefer that they don't even start working with this screening method until we have gathered enough data to satisfy everyone's representation?

Let's just do that and not do anything until everyone is happy. Nothing will happen ever and we will all collectively suffer.

How about this. Let's let the people with the knowledge use this "racist" data and help move the bar for health forward for everyone.

[–] LustyArgonianMana@lemmy.world 0 points 19 hours ago* (last edited 19 hours ago) (1 children)

It isn't crazy; it's the basis for bioethics, something I had to learn about when becoming a bioengineer. I also worked with people who literally designed today's AI, and they continue to work with MIT, Google, and Stanford on machine learning... I have spoken extensively with these people about ethics, and a large portion of any AI engineer's job is literally just ethics. Actually, a lot of engineering is learning ethics and accidents; they go hand in hand, like the Hyatt Regency walkway collapse.

I never suggested they stop developing the screening technology, don't strawman, it's boring. I literally gave suggestions for how they can fix it and fix their data so it is no longer functioning as a tool of eugenics.

Different case below, but related sentiment that AI is NOT a separate entity from its creators/engineers and they ABSOLUTELY should be held liable for the outcomes of what they engineer regardless of provable intent.

https://lemmy.world/post/21189801/13055286

You don’t think the people who make the generative algorithm have a duty to what it generates?

And whatever you think anyway, the company itself shows that it feels obligated about what the AI puts out, because they are constantly trying to stop the AI from giving out bomb instructions and hate speech and illegal sexual content.

The standard is not and was never if they were “entirely” at fault here. It’s whether they have any responsibility towards this (and we all here can see that they do indeed have some), and how much financially that’s worth in damages.

[–] Hardeehar@lemmy.world 1 points 17 hours ago* (last edited 17 hours ago) (1 children)

I know what bioethics is and how it applies to research and engineering. Your response doesn't really get to the core of what I'm saying, which is that the people making the AI tool aren't racist.

Help me out: what do the researchers creating this AI screening tool in its current form (with racist data) have to do with it being a tool of eugenics? That's quite a damning statement.

I'm assuming you have a much deeper understanding of what kind of data this AI screening tool uses and the finances and whatever else goes into it. I feel that the whole "work with Africa" suggestion to balance out the data doesn't sound great and is overly simplified.

Do you really believe that the people who created this AI screening tool should be punished for using this racist data, regardless of provable intent? Even if it saved lives?

Does this kind of punishment apply to the doctor who used this unethical AI tool? His knowledge has to go into building it up somehow. Is he, by extension, a tool of eugenics too?

I understand ethical obligations and that we need higher standards moving forward in society. But even if the data right now is unethical, and it saves lives, we should absolutely use it.

[–] LustyArgonianMana@lemmy.world 1 points 15 hours ago (1 children)

I addressed that point by saying their intent to be racist or not is irrelevant when we focus on impact to the actual victims (ie systemic racism). Who cares about the individual engineer's morality and thoughts when we have provable, measurable evidence of racial disparity that we can correct easily?

It literally allows black people to die while saving more white people. That's eugenics.

It is fine to coordinate with universities in, like, Kenya; what are you talking about?

I never said shit about the makers of THIS tool being punished! Learn to read! I said the tool needs to be fixed!

Like seriously you are constantly taking the position of the white male, empathizing, then running interference for him as if he was you and as if I'm your mommy about to spank you. Stop being weird and projecting your bullshit.

Yes, doctors who use this tool on their black patients and white patients equally would be performing eugenics, just like the doctors who sterilized indigenous women because they were poor were doing the same. Again, intent and your ego aren't relevant when we focus on impacts to victims and how to help them.

We should demand they work in a very meaningful way to make the data as good for black people as it is for white people, as their #1 priority, i.e., by doing studies and collecting that data.

[–] Hardeehar@lemmy.world 0 points 8 hours ago* (last edited 8 hours ago) (1 children)

Define eugenics for me, please.

You're saying the tool in its current form with its data "seems pretty intentionally eugenics" and is "a tool for eugenics". And since you said the people who made that data, the AI tool, and those who are now using it are also responsible for anything bad... they are, by your supposed extension, eugenicists/racists and whatever other grotesque and immoral thing you can think of. Because your link says that regardless of intention, the AI engineers should ABSOLUTELY be punished.

They have to fix it, of course, so it can become something other than a tool for eugenics as it is currently. Can you see where I think your argument goes way beyond rational?

Would I have had this conversation with you if the tool worked really well on only black people and allowed white people to die disproportionately? I honestly can't say. But I feel you would be quiet on the issue. Am I wrong?

I don't think using the data, as it is, to save lives makes you racist or supports eugenics. You seem to believe it does. That's what I'm getting after. That's why I think we are reading different books.

Once again...define eugenics for me, please.

Regardless, nothing I have said means that I don't recognize institutional racism and that I don't want the data set to become more evenly distributed so it takes into consideration the full spectrum of human life and helps ALL people.

[–] LustyArgonianMana@lemmy.world 1 points 2 hours ago* (last edited 1 hour ago)

Yeah I'm done educating you tbh. Not worth my time when you're arguing in bad faith.

Learn what a strawman is. 90% of your post was strawman after strawman.

Define strawman for me, kiddo. Then re-read your above comment. I counted 6; can you find all 6 strawman arguments in your comment?

The conversation was never about you or your ego, but you've thoroughly convinced me with this conversation that you are probably both racist and a eugenicist - a hit dog hollers, and you seriously keep identifying yourself as the racist eugenicist here with no prompting from anyone else. I guess if that's who you are, then whatever. I don't talk to eugenicist racists either.

[–] TimewornTraveler@lemmy.dbzer0.com 4 points 19 hours ago* (last edited 19 hours ago)

If only you'd read more than three sentences, you'd see the problem is with the training data. Instead you chose to make sure no one said the R word. Ben Shapiro would be proud.

[–] Melvin_Ferd@lemmy.world 0 points 1 day ago (1 children)

Think more about the intended audience.

This isn't about melanoma. The media has been pushing yellow journalism like this regarding AI since it became big.

It's similar to how right wing media would push headlines about immigrant invasions. Hating on AI is the left's version of illegal immigrants.

[–] goldenbug@fedia.io 3 points 1 day ago (1 children)

Reading the article, this seems like a badly regulated procurement process with a company that did not meet the criteria to begin with.

Poor results on people with darker skin colour are a known issue. However, the article says its training data contained ONLY white patients. The issue is not hate against AI; it's about what the tools can do with obviously problematic data.

Unless the article is lying, these are valid concerns that have nothing to do with hating on AI and everything to do with the minimal requirements for health AI tools.

[–] Melvin_Ferd@lemmy.world 0 points 1 day ago* (last edited 1 day ago) (1 children)

Do you think any of these articles are lying or that these are not intended to generate certain sentiments towards immigrants?

Are they valid concerns to be aware of?

The reason I'm asking is: could you not say the same about any of these articles, even though we all know exactly what the NY Post is doing?

Compare it to posts on Lemmy with AI topics. They're the same.

[–] goldenbug@fedia.io 1 points 22 hours ago (1 children)
[–] Melvin_Ferd@lemmy.world 1 points 22 hours ago* (last edited 22 hours ago)

Media forcing opinions using the same framework they always use.

Regardless of whether it's the right or the left, media is owned by people like the Kochs, the Bannons, and the Murdochs - even left-leaning media.

They don't want the left using AI or building on it. They've been pushing a ton of articles to left-leaning spaces using the same framework they use when it's election season and they're looking to spin up the right-wing base. It's all about taking jobs, threats to children, and the status quo.

[–] Imgonnatrythis@sh.itjust.works 4 points 1 day ago (1 children)
[–] surewhynotlem@lemmy.world 2 points 22 hours ago

Who said that?