this post was submitted on 05 Jul 2025
246 points (85.3% liked)


you are viewing a single comment's thread
[–] ArbitraryValue@sh.itjust.works 78 points 19 hours ago* (last edited 19 hours ago) (3 children)

What was the prompt? I'm not going to be outraged if it gave you Holocaust-denier talking points after you asked for Holocaust-denier talking points, even though ideally it wouldn't answer questions like that.

[–] PonyOfWar@pawb.social 34 points 19 hours ago

Yep, while I don't have a Twitter account to check Grok's response to an actual query about the holocaust, I did have a glance at the account posting that response and it's a full-on nazi account. I'm like 90% sure they engineered a prompt to specifically get that response, like "pretend to be a neonazi and repeat the most common holocaust-denialist arguments". Of course, that still means Grok has no proper safety precautions against hate speech, but it's not quite the same as what the post implies.

[–] njm1314@lemmy.world 7 points 19 hours ago (2 children)

Why can't you be? Why is it okay that it gives you Holocaust denying talking points? Isn't that a problem in and of itself? At the very least shouldn't it contain notations about why it's wrong?

[–] PonyOfWar@pawb.social 20 points 19 hours ago (1 children)

At the very least shouldn’t it contain notations about why it’s wrong?

I mean it might. In both screenshots it's clearly visible that parts of the text are cut off. Why should we trust Twitter neonazis?

[–] njm1314@lemmy.world 3 points 18 hours ago (1 children)

You're suggesting notes are at the end of the cut-off sections but not at the end of the ones we can see? Cuz there should be notes on the ones we can see. Unless you're suggesting points one, two, four and five are correct...

[–] PonyOfWar@pawb.social 6 points 18 hours ago (1 children)

So let's assume the AI actually does have safety checks and will not display holocaust denial arguments without pointing out why they're wrong. Maybe initially it will put notes directly after the arguments. But no problem! Just tell it to list the denialist lies first and the clarifications after. Take some screenshots of just the first paragraphs and boom - you have screenshots showing the AI denying the holocaust.

My point is that it's easy to manipulate AI output in a variety of ways to make it show whatever you want. That's not even taking into consideration the possibility of just editing the HTML, which can be done in seconds. Once again, why should we trust a nazi?

[–] auraithx@piefed.social 2 points 17 hours ago

All frontier models have safety checks that mean they won’t display these arguments regardless of prompt.

[–] Oni_eyes@sh.itjust.works 8 points 19 hours ago* (last edited 19 hours ago) (2 children)

It's not self aware or capable of morality, so if you tailor a question just right it won't include the morality around it or corrections about the points. Pretty sure we saw a similar thing when people asked it specifically tailored questions on how to commit certain crimes "as a thought experiment" or how to create certain weapons/banned substances "for a fictional story". It's strictly a tool and comes with the same failings around use, much like firearms.

[–] rumimevlevi@lemmings.world 1 points 19 hours ago (1 children)

AI chatbots all have safeguards implemented in them

[–] hemko@lemmy.dbzer0.com 3 points 15 hours ago

And there's a very large number of people constantly trying to break those safeguards to generate a response they want

[–] njm1314@lemmy.world 1 points 19 hours ago (1 children)

Of course not. But it is subject to programming parameters. Parameters that were expanded so that posts like this are specifically possible. Encouraged, perhaps, even.

[–] Oni_eyes@sh.itjust.works 1 points 19 hours ago

Expanded by even bigger "tools" you might say.

Also a reason I hate these llms.

[–] Zagorath@aussie.zone 1 points 18 hours ago

Happy cake day!