Technology

2910 readers
245 users here now

Which posts fit here?

Anything that is at least tangentially connected to technology, social media platforms, information technologies, and tech policy.


Post guidelines

[Opinion] prefix: Opinion (op-ed) articles must use the [Opinion] prefix before the title.


Rules

1. English only: Title and associated content must be in English.
2. Use original link: The post URL should be the original link to the article (even if paywalled), with archived copies left in the body. This helps avoid duplicate posts when cross-posting.
3. Respectful communication: All communication has to be respectful of differing opinions, viewpoints, and experiences.
4. Inclusivity: Everyone is welcome here regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation.
5. Ad hominem attacks: Personal attacks of any kind are expressly forbidden. If you can't argue your position without attacking a person's character, you have already lost the argument.
6. Off-topic tangents: Stay on topic. Keep it relevant.
7. Instance rules may apply: If something is not covered by community rules but is against lemmy.zip instance rules, it will be enforced.


Companion communities

!globalnews@lemmy.zip
!interestingshare@lemmy.zip


Icon attribution | Banner attribution


If someone is interested in moderating this community, message @brikox@lemmy.zip.

founded 2 years ago
MODERATORS

AI companies claim their tools couldn't exist without training on copyrighted material. It turns out they can; it just takes more work. To prove it, AI researchers trained a model on a dataset that uses only public domain and openly licensed material.

What makes it difficult is curating the data, but once the data has been curated, in principle everyone can use it without going through the painful part again. So the whole "we have to violate copyright and steal intellectual property" line is (as everybody already knew) total BS.
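The curation step the comment describes amounts to filtering a corpus by license metadata. A minimal sketch, assuming a simple record layout and license identifiers (neither is the researchers' actual pipeline):

```python
# Hypothetical sketch: keep only public-domain or openly licensed
# documents. The allowlist and record layout are assumptions, not
# the researchers' real pipeline.

OPEN_LICENSES = {"public-domain", "cc0", "cc-by", "cc-by-sa", "mit"}

def is_openly_licensed(record: dict) -> bool:
    """Keep a document only if its license metadata is on the allowlist."""
    license_id = record.get("license", "").strip().lower()
    return license_id in OPEN_LICENSES

def curate(corpus):
    """The one-time, expensive curation pass; its output is reusable."""
    return [doc for doc in corpus if is_openly_licensed(doc)]

corpus = [
    {"text": "An 1890s novel", "license": "public-domain"},
    {"text": "A news article", "license": "all-rights-reserved"},
    {"text": "A wiki page", "license": "CC-BY-SA"},
]
print([doc["text"] for doc in curate(corpus)])
```

The point of the comment is that this pass only has to be run once; the curated output can then be shared and reused by everyone.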


"this morning, as I was finishing up work on a video about a new mini Pi cluster, I got a cheerful email from YouTube saying my video on LibreELEC on the Pi 5 was removed because it promoted:

Dangerous or Harmful Content: Content that describes how to get unauthorized or free access to audio or audiovisual content, software, subscription services, or games that usually require payment isn't allowed on YouTube.

I never described any of that stuff, only how to self-host your own media library.

This wasn't my first rodeo—in October last year, I got a strike for showing people how to install Jellyfin!

In that case, I was happy to see my appeal granted within an hour of the strike being placed on the channel. (Nevermind the fact the video had been live for over two years at that point, with nary a problem!)

So I thought, this case will be similar:

  • The video's been up for over a year, without issue
  • The video's had over half a million views
  • The video doesn't promote or highlight any tools used to circumvent copyright, get around paid subscriptions, or reproduce any content illegally

Slam-dunk, right? Well, not according to whoever reviewed my appeal. Apparently self-hosted open source media library management is harmful.

Who knew open source software could be so subversive?"


I asked teachers to tell me how AI has changed how they teach.

The response from teachers and university professors was overwhelming. In my entire career, I’ve rarely gotten so many email responses to a single article, and I have never gotten so many thoughtful and comprehensive responses.

One thing is clear: teachers are not OK.

They describe trying to grade “hybrid essays half written by students and half written by robots,” trying to teach Spanish to kids who don’t know the meaning of the words they’re trying to teach them in English, and students who use AI in the middle of conversation. They describe spending hours grading papers that took their students seconds to generate: “I've been thinking more and more about how much time I am almost certainly spending grading and writing feedback for papers that were not even written by the student,” one teacher told me. “That sure feels like bullshit.”


Hmmm we'll see, time will reveal all


Push notification data can sometimes include the unencrypted content of notifications. Requests have come from the U.S., U.K., Germany, and Israel.

Archived version: https://archive.is/IVBiQ


Encourages a move to Linux, but, for goodness' sake, RTFM first.


What if you could increase conversions without collecting more data? You've probably been told you need to collect more data to get better results.


cross-posted from: https://lemmy.sdf.org/post/35955744

Archived

Hundreds of pages of classified documents leaked to the ABC have offered an unprecedented glimpse into China's infamous censorship regime.

It has grown faster, smarter and increasingly invisible, quietly erasing the memory of the 1989 Tiananmen Square massacre from public view.

Thirty-six years on, Beijing still has not disclosed the official death toll of the bloody crackdown on a pro-democracy gathering on June 4, when more than 1 million protesters were in the square.

Historians estimate that the People's Liberation Army (PLA) killed anywhere from 200 to several thousand people that day.

[...]

More than 230 pages of censorship instructions prepared by Chinese social media platforms were shared by industry insiders with the ABC.

They were intended to be circulated among multi-channel networks or MCNs — companies that manage the accounts of content creators across multiple social and video platforms like Douyin, the Chinese version of TikTok.

The files reveal deep anxiety among Chinese authorities about the spread of any reference to the most violently suppressed pro-democracy movement in the country's history.

The documents instruct MCNs to remove any content that depicts state violence and include compilations of text, images and video content for reference.

The reference material includes graphic scenes of the People's Liberation Army opening fire on civilians, while other clips claim students attacked the soldiers.

[...]

The leaked documents also shed light on the lives of censors, who work under close oversight from the Cyberspace Administration.

All censors are required to pass multiple exams to ensure they are vigilant and can respond swiftly to remove potentially risky content — a crucial safeguard to prevent platforms from being suspended or shut down by authorities.

Everything visible online needs to be checked: videos, images, captions, live streams, comments and text.

Algorithms are trained to detect visual cues, while human censors are on alert for coded language, disguised symbols and unusual emoji combinations that may signal dissent.
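The coded-language layer described above is, at its simplest, pattern matching that escalates suspicious posts to a human reviewer. A minimal sketch, with an illustrative term list (the euphemisms shown, like "May 35th" for June 4, are well documented, but the actual systems are far more sophisticated):

```python
# Hypothetical sketch of the pattern-matching layer: flag posts
# containing coded references for human review. The pattern list
# is illustrative only, not taken from the leaked documents.

import re

CODED_PATTERNS = [
    r"may\s*35(th)?",   # "May 35th", a well-known stand-in for June 4
    r"8\s*squared",     # 64, i.e. 6/4
]

def flag_for_review(text: str) -> bool:
    """Return True if a post matches any coded-language pattern
    and should be escalated to a human censor."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in CODED_PATTERNS)

assert flag_for_review("see you on May 35th")
assert not flag_for_review("lunch at noon")
```

Simple rules like these explain the article's cat-and-mouse dynamic: each new pattern added to the list only catches a euphemism after it has already spread.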

Documents also show censors must meet strict productivity targets — some are expected to review hundreds of posts per hour.

Their behaviour, accuracy and speed are tracked by internal monitoring software. Mistakes can result in formal warnings or termination.

[...]

[Censors] said their colleagues suffered from burnout, depression and anxiety due to constant exposure to disturbing, violent or politically sensitive content.

One said working as a censor was like "reliving the darkest pages of history every day, while being watched by software that records every keystroke".

They are normally paid a modest salary, often less than $1,500 a month, though the psychological toll is severe.

[...]

In some cases, platforms in China [such as Douyin, which is available only in China] allow low-risk content to remain online, but under a shadow ban.

This means the content is visible only to the user who posted it and a limited pool of users.
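The shadow-ban behaviour described here is essentially a per-viewer visibility check. A minimal sketch, with names that are purely illustrative:

```python
# Hypothetical sketch of a shadow ban: the post stays up, but the
# visibility check passes only for its author and a limited pool
# of viewers. Field names are illustrative, not any platform's API.

def is_visible(post: dict, viewer_id: str) -> bool:
    """A shadow-banned post looks normal to its author and a small
    pool of users; everyone else simply never sees it."""
    if not post["shadow_banned"]:
        return True
    return viewer_id == post["author_id"] or viewer_id in post["limited_pool"]

post = {"author_id": "u1", "shadow_banned": True, "limited_pool": {"u2"}}
assert is_visible(post, "u1")      # the author still sees their own post
assert is_visible(post, "u2")      # the limited pool sees it
assert not is_visible(post, "u3")  # everyone else does not
```

Because the author's own view is unchanged, they have no signal that their reach has been cut, which is what makes the technique "increasingly invisible".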

[...]

[One expert] warns that the implications of AI censorship extend beyond China.

"If misleading data continues to flow outward, it could influence the AI models the rest of the world relies on," he said.

"We need to think hard about how to maintain databases that are neutral, uncensored and accurate — because if the data is fake, the future will be fake too."

Despite China's increasing use of AI to automate censorship, [one expert says] Chinese people's intelligence will continue to outsmart the technology.

While he worries future generations may struggle to access truthful information, he believes people will find new ways to express dissent — even under an airtight system.

"After working as a censor for years, I found human creativity can still crush AI censors many times over," he said.

[...]


Plus, Europeans will find it easier to sideline Bing and uninstall the Windows Store


Archive

The key reason is that we just don’t have enough people on the admin team to keep the place running. Most of the admin team has stepped down, mostly due to burnout, and finding replacements hasn’t worked out.


In April, Palantir co-founder Joe Lonsdale got into a brawl with former Coinbase chief technology officer and Network State advocate Balaji Srinivasan. It wasn’t on a prominent stage or even Twitter/X; it happened in a Signal group chat that’s become a virtual gathering place for influential tech figures. Srinivasan wasn’t going along with the tech right’s aggressive anti-China rhetoric, so Lonsdale accused him of “insane CCP thinking.” “Not sure what leaders hang out w you in Singapore but on this you have been taken over by a crazy China mind virus,” he wrote.

Before Semafor published its story on the Signal chats that led with the billionaire spat, both Lonsdale and Srinivasan dismissed any notion their exchange was anything but a friendly disagreement. Surely, such wealthy people have far more in common than they have dividing them. But the exchange does expose an ideological rift that will likely only grow in the coming years as more of the tech industry openly aligns itself with the security state to pursue lucrative military contracts.

Lonsdale and Srinivasan are arguably on either side of that divide. Palantir is part of the vanguard of defense tech companies openly championing collaboration with the US government. It claims to want to defend American power in the twenty-first century, positioning China as a civilizational threat — in part to mask the commercial threat Shenzhen poses to Silicon Valley. Lonsdale was even helping staff the Trump administration. The Network State movement, on the other hand, wants to escape the authority of the United States — or any other government — entirely, and doesn’t feel it’s part of that fight.


A wired article for !meshtastic@mander.xyz ! In the big leagues now! Lmao


Mozilla has developed a new security feature for its add-on portal that helps block malicious Firefox extensions that drain cryptocurrency wallets.
