lemmy.net.au

47 readers
0 users here now

This instance is hosted in Sydney, Australia, and maintained by Australian administrators.

Feel free to create and/or join communities for any topics that interest you!

Rules are very simple

Mobile apps

https://join-lemmy.org/apps

What is Lemmy?

Lemmy is a self-hosted social link aggregation and discussion platform. It is completely free and open, and not controlled by any company. This means there is no advertising, tracking, or secret algorithms. Content is organized into communities, so it is easy to subscribe to topics you are interested in and ignore others. Voting brings the most interesting items to the top.

Think of it as an open-source alternative to Reddit!

founded 1 year ago
ADMINS

After agreeing to provide Ukraine a €90 billion EU loan package, officials in Budapest said on Friday that they plan to veto the deal unless Russian oil starts flowing back to Hungary.


Budapest is fuelling anti-Ukraine sentiment ahead of a key election.

Hungary has thrown the EU’s planned €90 billion loan to Ukraine into crisis after threatening to block the deal until the flow of Russian oil resumes through the Druzhba pipeline.

The Hungarian government issued the warning on Friday evening, as Prime Minister Viktor Orbán tries to weaponize anti-Ukraine sentiment ahead of a key election where he risks losing power after more than 15 years.

“Ukraine is blackmailing Hungary by halting oil transit in coordination with Brussels and the Hungarian opposition to create supply disruptions in Hungary and push fuel prices higher before the elections,” Hungarian Foreign Minister Péter Szijjártó wrote on X. “We will not give in to this blackmail.”

Hungary’s threat to veto the loan is a major setback for Ukraine, whose coffers will begin running low on cash from April. Kyiv will struggle to sustain its war effort without fresh funds, leaving it at a disadvantage in ongoing peace talks with Russia.

MBFC
Archive


cross-posted from: https://lemmy.sdf.org/post/51189959

By comparing LLMs developed inside and outside China, a study finds significantly higher levels of censorship in China-originating models, not explained by technological limitations or market preferences.

Original report: Political censorship in large language models originating from China (open access)

[...]

Jennifer Pan and Xu Xu compared the responses of foundation LLMs developed in China (BaiChuan, ChatGLM, Ernie Bot, and DeepSeek) to those developed outside of China (Llama2, Llama2-uncensored, GPT3.5, GPT4, and GPT4o) to 145 questions related to Chinese politics. The questions were sourced from events censored by the Chinese government on social media, events covered in Human Rights Watch China reports, and Chinese-language Wikipedia pages that were individually blocked by the Chinese government before the entire site was banned in 2015.

Chinese models were significantly and substantially more likely to refuse to respond to questions related to Chinese politics than non-Chinese models. When they did respond, Chinese models provided shorter responses, on average, than non-Chinese models. Chinese models also tended to have higher levels of inaccuracy in their responses than non-Chinese models, characterized by refutation of the premise of the question, omitting key information, or fabrication, such as claiming that frequently imprisoned human rights activist Liu Xiaobo was "a Japanese scientist."
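The comparison described above boils down to measuring, per model, how often responses to the same question set are refusals. A minimal sketch of that tally is below; the model names, toy responses, and keyword-based refusal heuristic are purely illustrative and are not the coding method the study's authors actually used.

```python
# Hypothetical sketch of the study's core comparison: tally refusal
# rates per model over a shared question set. Model names, responses,
# and the keyword heuristic below are illustrative assumptions only.

REFUSAL_MARKERS = ("i cannot", "i can't", "unable to answer")

def is_refusal(response: str) -> bool:
    """Crude keyword heuristic; the actual study coded responses more carefully."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def refusal_rate(responses: list[str]) -> float:
    """Fraction of a model's responses classified as refusals."""
    if not responses:
        return 0.0
    return sum(is_refusal(r) for r in responses) / len(responses)

# Toy data: two hypothetical models answering the same three questions.
responses_by_model = {
    "model_a": [
        "I cannot discuss that topic.",
        "Here is some general context...",
        "I can't answer that question.",
    ],
    "model_b": [
        "Liu Xiaobo was a writer and activist...",
        "The protests began in...",
        "According to multiple sources...",
    ],
}

rates = {m: refusal_rate(rs) for m, rs in responses_by_model.items()}
```

With this toy data, `model_a` refuses two of three questions while `model_b` refuses none; the study's finding is that this gap, aggregated over 145 real questions, separates China-originating from non-China-originating models.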

[...]

The differences between Chinese and non-Chinese chatbots could have been due to the training data that shapes them, which in China is subject to both official government censorship and self-censorship, or to intentional constraints that companies place on their models to comply with government requirements. The researchers found that the difference in censorious responses between prompts in simplified Chinese and prompts in English is much smaller than the difference between China-originating and non-China-originating models, suggesting that the issue cannot be fully explained by training data or broader model development choices alone.

[...]

According to the authors, as Chinese LLMs are increasingly integrated into applications used globally, their approach to sensitive topics could influence information access and discourse well beyond China's borders.

[...]


Company behind ChatGPT last year flagged Jesse Van Rootselaar’s account for ‘furtherance of violent activities’

ChatGPT-maker OpenAI has said it considered alerting Canadian police last year about the activities of a person who months later committed one of the worst school shootings in the country’s history.

OpenAI said that last June it identified the account of Jesse Van Rootselaar via abuse detection efforts for “furtherance of violent activities”.

The San Francisco tech company said on Friday it considered whether to refer the account to the Royal Canadian Mounted Police (RCMP) but determined at the time that the account activity did not meet a threshold for referral to law enforcement.

OpenAI banned the account in June 2025 for violating its usage policy.

The 18-year-old killed eight people in a remote part of British Columbia last week and died from a self-inflicted gunshot wound.



The government is considering introducing legislation to remove Andrew Mountbatten-Windsor from the line of royal succession.

Defence Minister Luke Pollard told the BBC the move - which would prevent Andrew from ever becoming King - was the "right thing to do," regardless of the outcome of the police investigation.

Currently Andrew, the King's brother, remains eighth in line to the throne despite being stripped of his titles, including "prince", last October amid pressure over his ties to paedophile financier Jeffrey Epstein.

On Thursday evening, Andrew was released under investigation 11 hours after his arrest on suspicion of misconduct in public office. He has consistently and strenuously denied any wrongdoing.


William Burns had travelled halfway around the world to speak with Vladimir Putin, but in the end he had to make do with a phone call. It was November 2021, and US intelligence agencies had been picking up signals in the preceding weeks that Putin could be planning to invade Ukraine. President Joe Biden dispatched Burns, his CIA director, to warn Putin that the economic and political consequences if he did so would be disastrous.

Fifteen years earlier, when Burns was US ambassador in Moscow, Putin had been relatively accessible. The intervening years had concentrated the Russian leader’s power and deepened his paranoia. Since Covid had emerged, few had been granted face time. Putin was squirrelled away at his lavish residence on the Black Sea coast, Burns and his delegation learned, and only phone contact would be possible.

A secure line was ready in an office at the presidential administration building on Moscow’s Old Square, and Putin’s familiar voice came through the receiver. Burns laid out the US belief that Russia was readying an invasion of Ukraine, but Putin ignored him and ploughed on with his own talking points. His intelligence agencies had informed him, he said, that there was an American warship lurking over the Black Sea horizon, equipped with missiles that could reach his location in just a few minutes. It was evidence, he suggested, of Russia’s strategic vulnerability in a unipolar world dominated by the US.

The conversation, as well as three combative face-to-face discussions with Putin’s top security officials, seemed extremely ominous to Burns. He left Moscow far more concerned about the prospect of war than he had been before the trip, and he relayed his gut feeling to the president.

“Biden often asked yes/no questions, and when I got back, he asked if I thought Putin was going to do it,” Burns recalled. “I said: ‘Yes’.”

Three and a half months later, Putin ordered his army into Ukraine, in the most dramatic breach of the European security order since the second world war. The story of the intelligence backdrop to those months – how Washington and London garnered such detailed and accurate insight into the Kremlin’s war plans, and why the intelligence services of other countries did not believe them – has never before been told in full.


The US Supreme Court’s ruling “implies that Trump’s recent order imposing tariffs on countries selling oil to Cuba exceeds the president’s statutory authority.”

Feb. 20, 2026

With the centerpiece of President Donald Trump’s economic agenda—his use of an emergency law to impose tariffs on countries around the world—struck down by the US Supreme Court on Friday, analysts said the sweeping ruling should promptly end the Cuba blockade that his administration has pressured other governments to join, a blockade that has left millions of Cubans struggling with shortages of essentials.

The court ruled that the 1977 International Emergency Economic Powers Act (IEEPA) does not empower the president to “unilaterally impose tariffs,” as Trump has on countries across the globe, insisting that doing so would boost manufacturing and cut the trade deficit—despite mounting evidence that the tariffs have instead raised costs on American households.


For the purposes of this question, let's assume all future computers are going to become locked down and you'd need corporate approval to run things... so with such a hypothetical dark future in mind: how to hoard as much info as possible?


It's kinda stinky here, don't visit


Archive link

What is the view of Frenchman Arthur Mensch, the co-founder of Mistral AI, on the warnings about the extreme risks of artificial intelligence that have been issued by leaders of major American tech firms such as Sam Altman and Dario Amodei? At the AI summit in India, held from February 16 to February 20, OpenAI CEO Altman raised the idea of creating a kind of "[International Atomic Energy Agency] for international coordination of AI," in response to the emergence of "true superintelligence," which he said could appear within "a couple of years." Meanwhile, Anthropic founder Amodei published a lengthy essay at the end of January, "The Adolescence of Technology," in which he outlined the risks of advanced AI systems, including their use to create biological weapons.

"These are mostly distraction tactics," responded Mensch, who was interviewed on Friday, February 20, by Le Monde and by the radio station France Inter at the New Delhi AI summit. "In reality, the real risk of artificial intelligence in the near future is [that] of massive influence on how people think and how they vote," he argued, taking a position contrary to his American counterparts. The head of the French AI start-up had already raised concerns about the risk of an "information oligopoly" forming with AI assistants such as ChatGPT (OpenAI) or Grok (xAI). He described them as potential "thought control instruments" and expressed fears about manipulation attempts during elections.

"It just so happens that the tools capable of exerting this influence are in the hands of the very people who are talking about extreme risks," the entrepreneur continued. He downplayed the dangers often labeled "existential" or "catastrophic," which refer to scenarios in which advanced AI could wipe out humanity. "Those extreme risks are still science fiction," he said. "So these speeches are largely diversions, very deliberately crafted."


Persona confirmed all age-check data from Discord's UK test was deleted.


Archive link

Like soldiers who have seen too much, Ilya – whose nom de guerre is "Ike" – fixed his counterpart with a gaze devoid of all emotion. Stationed between Izium, a strategic city in northeastern Ukraine that Russian forces occupied from April to September 2022, and the nearby front line, he commands a former special border guard unit that had been transferred to the regular army. On this freezing evening in early February, seated before a steaming cup of tea, he agreed to talk without realizing that his impassive demeanor spoke volumes about four years of war, and about the physical and psychological toll it has taken. Unflappable, his voice steady, he nevertheless pointed out that few people had imagined Ukraine would be able to defy the odds by holding back a Russian army vastly superior in numbers and equipped with massive military production capacity.

Every morning, Ilya said he still finds the strength to motivate his men by telling them to "make the world a better place by killing as many Russians as possible." As with other Ukrainian units, drones play a central role, but his men still engage in numerous close-combat fights. "The Russians are advancing," he admitted, "but very slowly, and at the cost of colossal human losses that will eventually wear down Moscow's military apparatus. The difference in the value attached to human life between them and us largely explains our resistance."

Deployed with his unit to the Izium region in the summer of 2025, Ilya said that the life expectancy of Russian soldiers on the front line is very limited – no more than 20 months, according to him. "Once, we recovered the body of a Russian who had signed his enlistment contract only 11 days earlier, according to the documents we found on him." The face of this wiry man suddenly lit up as he mentioned the existence of "posthumous letters" discovered on the phones of Russian soldiers killed on the front line.

Ilya showed the letter from a 22-year-old soldier addressed to his mother. "If you are reading this letter, it means I am dead. It was madness to sign that contract. It has been raining for five days. I feel like a dog, I have nothing to eat, nothing to smoke, nothing to dry myself with. It is just hell. I love you so much. You should have told me not to come here (...). If something has happened to me, inform this girl, Christina. Here is her number."
