XLE

joined 8 months ago
[–] XLE@piefed.social 1 points 1 day ago

AI-generated revenge porn of adults is already sexual abuse. Hopefully you, Iconoclast, agree that such a thing is already reprehensible. Now hopefully you understand why it's bad when it's done to real children.

The AI sphere is full of people who hate consent: Sam Altman the sister rapist, Eliezer Yudkowsky the serial abuser, Elon Musk who I don't even know where to start with, etc. I know you love AI an unhealthy amount, but this is not a hill you have to die on.

[–] XLE@piefed.social -4 points 2 days ago* (last edited 2 days ago)

It's linked in his public profile.

Edit: Speaking of blind rage, I hope you downvoted this accidentally...

[–] XLE@piefed.social 24 points 2 days ago

Finding out about this raises some extra questions, though.

  • Was this data summarized when the window was enabled, or before?
  • Did it download a new model, or re-use one that someone may have already downloaded for a different feature?
  • Is this data going anywhere else, like Mozilla's recent "privacy-preserving" advertising?
  • When this does release, what will the default be?

[–] XLE@piefed.social 8 points 2 days ago (1 children)

Between this and the Chat Control rollback, Europe has been on a roll with the good choices for a change.

The companies generating this stuff should have been in the crosshairs from the beginning.

[–] XLE@piefed.social 1 points 2 days ago

You beat me to the punch on slop. I would also like to opt out of all the ghost bands Spotify assembled so they wouldn't have to pay royalties to artists who joined the site.

[–] XLE@piefed.social 10 points 2 days ago

For us, sure. For the average Joe who doesn't know about the side effects of fingerprinting, not so much.

[–] XLE@piefed.social 1 points 2 days ago (1 children)

The Wikipedia article is yours to peruse and fix if you think it's wrong. It has examples. I just quoted something that was particularly funny given your insistence that AI is literally the PC and clones.

[–] XLE@piefed.social 22 points 2 days ago

"When it comes to privacy, defaults matter."

- Mozilla

Why not remove the AI features and offer them as a separate extension? That way you're happy, and everybody else doesn't have crap shoved down their throats.

[–] XLE@piefed.social 21 points 2 days ago (8 children)

It might be easier to soften LibreWolf than harden Firefox, but fair point.

If you're a relatively normal user and you still want to use LibreWolf, I would recommend:

  • disabling fingerprinting resistance
  • not clearing history on exit

Most of this is easy to find, especially thanks to the LibreWolf menu.
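If you'd rather set those two things in one place, they correspond to a handful of about:config prefs that can go in a `user.js` in your profile folder (or in LibreWolf's `librewolf.overrides.cfg`, using `defaultPref` instead of `user_pref`). This is just a sketch of the standard Firefox pref names, not an official LibreWolf config; double-check them against your version:

```javascript
// user.js — soften LibreWolf's defaults for everyday use

// Turn off fingerprinting resistance (fixes wrong timezones,
// canvas prompts, and letterboxed windows)
user_pref("privacy.resistFingerprinting", false);

// Keep history between sessions instead of wiping it on exit
user_pref("privacy.sanitize.sanitizeOnShutdown", false);
user_pref("privacy.clearOnShutdown.history", false);
user_pref("privacy.clearOnShutdown.cookies", false);
```

Changes take effect the next time the browser restarts.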

[–] XLE@piefed.social 11 points 2 days ago

Hey, I'm not excited about more stuff getting added into an already overflowing Firefox (why not an extension?!), but if they must promote AI choice, I'm with you: actually allow user choice.

(Based on how Mozilla has added two unrequested search engines while ignoring a request to add StartPage, the "choice" thing seems to boil down to backroom deals.)

 

AI-translated articles swapped sources or added unsourced sentences with no explanation, while others added paragraphs sourced from completely unrelated material.

The issue in this case starts with the Open Knowledge Association (OKA), a non-profit dedicated to improving Wikipedia and other open platforms.

Wikipedia editors investigated how OKA was operating and found that it was mostly relying on cheap labor from contractors in the Global South, and that these contractors were instructed to copy/paste articles to popular LLMs to produce translations.

For example, a public spreadsheet used by OKA translators to keep track of what articles they’re translating instructs them to “pick an article, copy the lead section into Gemini or chatGPT, then review if some of the suggestions are an improvement to readability. Make edits to the Wiki articles only if the suggestions are an improvement and don't change the meaning of the lead. Do not change the content unless you have checked that what Gemini says is correct!”

Lebleu told me, and other editors have noted in their public on-site discussion of the issue, that these same instructions previously told OKA translators to use Grok, Elon Musk’s LLM, for the same purpose. Grok, which also produces an entirely automated alternative to Wikipedia called Grokipedia, is prone to errors precisely because it does not use humans to vet its output.

“Following the recent discussion, we have strengthened our safeguards,” [OKA's] Zimmerman told me. “We are now rolling out a second, independent LLM review step. Translators must run the completed draft through a separate model using a dedicated comparison prompt designed to identify potential discrepancies, omissions, or inaccuracies relative to the source text. Initial findings suggest this is highly effective at detecting potential issues.”

Zimmerman added that if this method proves insufficient, OKA is considering introducing formal peer review mechanisms.

Using AI to check the output of AI for errors is a method that is historically prone to errors. For example, we recently reported on an AI-powered private school that used AI to check AI-generated questions for students. Internal testing found it had at least a 10 percent failure rate.

 

Original Reddit post, which the article almost exclusively pulls from: https://old.reddit.com/r/googlecloud/comments/1reqtvi/82000_in_48_hours_from_stolen_gemini_api_key_my/

 

Sam Altman says "the DoW displayed a deep respect for safety."

Not 24 hours ago, he seemed to back Anthropic "supporting our warfighters" as long as two "red lines" weren't crossed, though his tepid support was laden with five instances of "I think" and one "mostly."

The two "red lines" in question:

  • Domestic mass surveillance
    (presumably, foreign mass surveillance is ok)
  • Autonomous weapons
    (likely because they would be held legally liable for misfires)
599
submitted 1 month ago* (last edited 1 month ago) by XLE@piefed.social to c/technology@lemmy.world
 

Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn't ready to take on the role of the physician.”

“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

 

cross-posted from: https://discuss.online/post/29265892

Concerns over AI surveillance in schools are intensifying after armed officers swarmed a 16-year-old student outside Kenwood High School in Baltimore when an AI gun detection system falsely flagged a Doritos bag as a firearm.

Allen was handcuffed at gunpoint. Police later showed him the AI-captured image that triggered the alert. The crumpled Doritos bag in his pocket had been mistaken for a gun.

 

r/privacy moderators removed a New York Times Wirecutter journalist's post and comments, accusing them of "promoting a site or blog" (a rule that doesn't apply here).

The journalist was in the comments talking with users, and the top comment was censored too.

Original post (gone now)

The same post on a different subreddit where it was not censored
