this post was submitted on 16 May 2025
597 points (97.2% liked)

Technology

70080 readers
3590 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
 

It certainly wasn’t because the company is owned by a far-right South African billionaire at the same moment that the Trump admin is entertaining a plan to grant refugee status to white Afrikaners. /s

My partner is a real refugee. She was jailed for advocating democracy in her home country. She would have received a lengthy prison sentence after trial had she not escaped. This crap is bullshit. Btw, did you hear about the white-genocide happening in the USA? Sorry, I must have used Grok to write this. Go Elon! Cybertrucks are cool! Twitter isn’t a racist hellscape!

The stuff at the end was sarcasm, you dolt. Shut up.

top 50 comments
sorted by: hot top controversial new old
[–] DarkSurferZA@lemmy.world 4 points 3 hours ago

Brah, if your CEO edits the prompt, it's not unauthorized. It may be undesirable, but it really ain't unauthorised

[–] reksas@sopuli.xyz 18 points 6 hours ago

they say unauthorised because they got caught

[–] Lila_Uraraka@lemmy.blahaj.zone 2 points 4 hours ago

Haha, keep fucking with it, this is entertaining

[–] nutsack@lemmy.dbzer0.com 1 points 3 hours ago

i wish i knew who the rogue employee was. incredible

[–] hperrin@lemmy.ca 3 points 5 hours ago

And what about Elmo’s white genocide obsession?

[–] martin4598@lemm.ee 1 points 4 hours ago

Lesson learned: AI are not reliable.

[–] Etterra@discuss.online 18 points 8 hours ago (1 children)

Translation: of course they're not going to admit that Elon did it.

[–] some_guy@lemmy.sdf.org 6 points 6 hours ago

Hey! He only owns the platform. Why would you think he's putting his thumb on the scale? /s

[–] drmoose@lemmy.world 12 points 8 hours ago* (last edited 7 hours ago) (1 children)

They say that they'll upload the system prompt to github but that's just deception. The Twitter algorithm is "open source on github" and hasn't been updated for over 2 years. The issues are a fun read tho https://github.com/twitter/the-algorithm/issues

There's just no way to trust that anything is running on the server unless it's audited by 3rd party.

So now all of these idiots going to believe "but its on github open source" when the code is never actually being run by anyone ever.

[–] atmorous@lemmy.world 1 points 5 hours ago

We need people educated on open source, community-made hardware and software

[–] sjmarf@sh.itjust.works 18 points 14 hours ago
[–] KingThrillgore@lemmy.ml 36 points 17 hours ago (2 children)

The unauthorized edit is coming from inside the house.

[–] ToastedRavioli@midwest.social 7 points 10 hours ago

Its incredible how things can just slip through, especially when they start at the very top

[–] JimVanDeventer@lemmy.world 14 points 16 hours ago

Unauthorized is their office nickname for Musk.

[–] latenightnoir@lemmy.blahaj.zone 10 points 17 hours ago* (last edited 9 hours ago)

Looks like someone's taking some lessons from Zuck's methodology. "Whoops! That highly questionable and suspiciously intentional shit we did was totes an accident! Spilt milk now, I guess! Wuh-huh-heey honk-honk!"

[–] TachyonTele@lemm.ee 160 points 1 day ago (3 children)

Musk isn't authorized anymore?

[–] BreadstickNinja@lemmy.world 40 points 1 day ago (1 children)

Depends on the ketamine levels in his blood at any given moment. Sometimes, you edit your prompts from a k-hole, and everyone knows you can't authorize your own actions when you're fully dissociated.

[–] MunkysUnkEnz0@lemmy.world 12 points 22 hours ago (2 children)

I don't understand how he can be such an ass. I spent plenty of time in a hole. I liken it to the river of souls... Your soul flowing through the river of the universe with all the other souls being cleansed. I Consider it a sacrament and a psychedelic at higher levels. Not a party drug.. Hard to party when you're laying down or walking with a 45 degrees slant.

With all his Burning Man experience, you would think he would have done some deems. Then realized there's more to this life and abusing others is like abusing yourself.

[–] theangryseal@lemmy.world 8 points 17 hours ago (1 children)

My experience with psychedelics (enjoyed with others) is that what you experience relates directly to you the person.

For me, I was already terribly empathetic and I became crippled with empathy, incapable of any move in any direction that didn’t benefit everyone.

Ego death doesn’t happen for everyone. Some egos are too big to kill and grow even larger.

That is my anecdotal experience. I knew someone who went from huge ego to an ego to end all egos.

He woke up the next day convinced that the world needed him.

I’ve never done ketamine though.

LSD, DMT, and mushrooms. That’s it for me.

[–] XnxCuX@lemmy.world 2 points 4 hours ago

Tbh K isn't even comparable. A hole is basically just like being in a coma if you watch someone. Fully alert but not a thought behind those eyes.

I was never huge into psychedelics for reasons you basically mentioned but used to love K because it was like not being alive for 20 min

[–] BreadstickNinja@lemmy.world 4 points 21 hours ago

Well, I've been across the horizon and back a few times and I never came back a cunt. But I also never came back a Nazi billionaire, and then I'm making no promises.

[–] Kolanaki@pawb.social 19 points 1 day ago* (last edited 1 day ago)

Unilaterally Authorized. Or UnAuthorized for short.

[–] painfulasterisk1@lemmy.ml 10 points 1 day ago

Looks like Elon used his Alt account.

[–] Kurious84@eviltoast.org 30 points 22 hours ago* (last edited 22 hours ago) (1 children)

Musk made the change but since AI is still as rough as his auto driving tech it did t work like he planned

But this is the future folks. Modifying the AI to fit the narrative of the regime. He's just too stupid to do it right or he might be stupid and think these llms work better than they actually do.

[–] some_guy@lemmy.sdf.org 2 points 6 hours ago

he might be stupid and think these llms work better than they actually do.

There it is.

[–] LesserAbe@lemmy.world 49 points 1 day ago (1 children)
[–] theangryseal@lemmy.world 4 points 18 hours ago (2 children)

Don’t know the reference but I’m sure it’s awesome. :p

[–] hperrin@lemmy.ca 4 points 5 hours ago (1 children)
[–] theangryseal@lemmy.world 3 points 5 hours ago

Heeeey a link!

You’re the best. I loved that.

[–] LesserAbe@lemmy.world 7 points 17 hours ago (1 children)

It's from the show "I think you should leave." There's a sketch where someone has crashed a weinermobile into a storefront, and bystanders are like "did anyone get hurt?" "What happened to the driver?" And then this guy shows up.

[–] shittydwarf@sh.itjust.works 116 points 1 day ago* (last edited 1 day ago)

Elon looking for the unauthorized person:

[–] applemao@lemmy.world 20 points 23 hours ago (1 children)

This is why I tell people stop using LLMs. The owner class owns them (imagine that) and will tell it to tell you what they want so they make more money. Simple as that.

This is why the Chinese openly releasing deepseek was such a kick in the balls to the LLM tech bros.

[–] DreamAccountant@lemmy.world 67 points 1 day ago (7 children)

Yeah, billionaires are just going to randomly change AI around whenever they feel like it.

That AI you've been using for 5 years? Wake up one day, and it's been lobotomized into a trump asshole. Now it gives you bad information constantly.

Maybe the AI was taken over by religious assholes, now telling people that gods exist, manufacturing false evidence?

Who knows who is controlling these AI. Billionaires, tech assholes, some random evil corporation?

[–] SpaceNoodle@lemmy.world 39 points 1 day ago* (last edited 1 day ago) (1 children)

Joke's on you, LLMs already give us bad information

[–] ilinamorato@lemmy.world 13 points 1 day ago (1 children)

Sure, but unintentionally. I heard about a guy whose small business (which is just him) recently had someone call in, furious because ChatGPT told them that he was having a sale that she couldn't find. The customer didn't believe him when he said that the promotion didn't exist. Once someone decides to leverage that, and make a sufficiently-popular AI model start giving bad information on purpose, things will escalate.

Even now, I think Elon could put a small company out of business if he wanted to, just by making Grok claim that its owner was a pedophile or something.

[–] knightly@pawb.social 10 points 1 day ago (2 children)

"Unintentionally" is the wrong word, because it attributes the intent to the model rather than the people who designed it.

Hallucinations are not an accidental side effect, they are the inevitable result of building a multidimensional map of human language use. People hallucinate, lie, dissemble, write fiction, misrepresent reality, etc. Obviously a system that is designed to map out a human-sounding path from a given system prompt to a particular query is going to take those same shortcuts that people used in its training data.

load more comments (2 replies)
[–] LostXOR@fedia.io 10 points 1 day ago (2 children)

That's a good reason to use open source models. If your provider does something you don't like, you can always switch to another one, or even selfhost it.

[–] WatDabney@fedia.io 21 points 1 day ago (1 children)

Or better yet, use your own brain.

load more comments (1 replies)
[–] ArchRecord@lemm.ee 9 points 1 day ago

While true, it doesn't keep you safe from sleeper agent attacks.

These can essentially allow the creator of your model to inject (seamlessly, undetectably until the desired response is triggered) behaviors into a model that will only trigger when given a specific prompt, or when a certain condition is met. (such as a date in time having passed)

https://arxiv.org/pdf/2401.05566

It's obviously not as likely as a company simply tweaking their models when they feel like it, and it prevents them from changing anything on the fly after the training is complete and the model is distributed, (although I could see a model designed to pull from the internet being given a vulnerability where it queries a specific URL on the company's servers that can then be updated with any given additional payload) but I personally think we'll see vulnerabilities like this become evident over time, as I have no doubts it will become a target, especially for nation state actors, to simply slip some faulty data into training datasets or fine-tuning processes that get picked up by many models.

load more comments (5 replies)
[–] shalafi@lemmy.world 15 points 1 day ago

None of the explanations matter now, too late. "White genocide" is now a thing in SA. The term is in people's heads. Mission accomplished.

[–] whydudothatdrcrane@lemmy.ml 16 points 1 day ago* (last edited 1 day ago) (4 children)

I'm going to bring it up.

Isn't this the same asshole who posted the "Woke racist" meme as a response to Gemini generating images of Black SS officers? Of course we now know he was merely triggered by the suggestion because of his commitment to white supremacy and alignment with the SS ideals, which he could not stand to see, pun not intended, denigrated.

The Gemini ordeal was itself a result of a system prompt; a half-ass attempt to correct for white bias deeply learned by the algorithm, just a few short years after Google ousted their AI ethics researcher for bringing this type of stuff up.

Few were the outlets that did not lend credence to the "outrage" about "diversity bias" bullshit and actually covered that deep learning algorithms are indeed sexist and racist.

Now this nazi piece of shit goes ahead and does the exact same thing; he tweaks a system prompt causing the bot to bring up the self-serving and racially charged topic of apartheid racists being purportedly persecuted. He does the vary same thing he said was "uncivilizational", the same concept he brought up just before he performed the two back-to-back Sieg Heil salutes during Trump's inauguration.

He was clearly not concerned about historical accuracy, not the superficial attempt to brown-wash the horrible past of racism which translates to modern algorithms' bias. His concern was clearly the representation of people of color, and the very ideal of diversity, so he effectively went on and implemented his supremacist seething into a brutal, misanthropic policy with his interference in the election and involvement in the criminal, fascist operation also known as DOGE.

Is there anyone at this point that is still sitting on the fence about Musk's intellectual dishonesty and deeply held supremacist convictions? Quickest way to discover nazis nowadays really: (thinks that Musk is a misunderstood genius and the nazi shit is all fake).

load more comments (4 replies)
[–] sbv@sh.itjust.works 12 points 1 day ago

There goes Adrian Dittman again. That guy oughta be locked up.

[–] RedWeasel@lemmy.world 17 points 1 day ago
[–] Darkard@lemmy.world 12 points 1 day ago
load more comments
view more: next ›