lemmy.net.au

49 readers
1 users here now

This instance is hosted in Sydney, Australia, and maintained by Australian administrators.

Feel free to create and/or join communities for any topics that interest you!

Rules are very simple

Mobile apps

https://join-lemmy.org/apps

What is Lemmy?

Lemmy is a self-hosted social link aggregation and discussion platform. It is completely free and open, and not controlled by any company. This means that there is no advertising, tracking, or secret algorithms. Content is organized into communities, so it is easy to subscribe to topics that you are interested in, and ignore others. Voting is used to bring the most interesting items to the top.

Think of it as an open-source alternative to reddit!

founded 1 year ago
In a sprawling aquarium complex in south-eastern France that once drew half a million visitors a year, only a few dozen people now move between pools that contain the last remaining marine mammals of Marineland Antibes. Weeds grow on walkways, the stands are empty and algae grows in the pools, giving the water a greenish hue.

It is here that Wikie and Keijo, a mother and son pair of orcas, are floating. They were born in these pools, and for decades they performed in shows for crowds. But since the park’s closure in January 2025, they no longer have an audience. When they are alone, they “log”, or float at the water’s surface, according to a court-ordered report released last April.

Marineland has long acknowledged that there is an urgent need to transfer the orcas. In a statement to the Guardian, it reiterates this: “Marineland has been saying for some time that the park cannot wait any longer.”

In December 2025 the French minister delegate for ecological transition, Mathieu Lefèvre, announced that Wikie and Keijo would be sent to the Whale Sanctuary Project in Nova Scotia, Canada, calling it the “only ethical, credible, and legally compliant solution”. The 40-hectare (100-acre) outdoor site aims to recreate a seaside environment as close as possible to the natural habitat of whales and dolphins.

Lori Marino, a neuroscientist and founder of the Whale Sanctuary Project, says: “They [the orca pair] will have depth to dive, an interesting and vibrant underwater environment to explore, and conditioning and exercise routines with the trainers.”

On Monday, Marino will present her plan for the orcas at the meeting – but getting it through will not be straightforward. The decision by the French government to opt for the Whale Sanctuary Project has met strong resistance from other animal welfare organisations and Marineland’s owner.

“Nobody is actually working together, that is the problem,” says Marino.


cross-posted from: https://lemmy.dbzer0.com/post/63638729

An AI safety researcher has quit US firm Anthropic with a cryptic warning that the "world is in peril".

In his resignation letter shared on X, Mrinank Sharma told the firm he was leaving amid concerns about AI, bioweapons and the state of the wider world.

He said he would instead look to pursue writing and studying poetry, and move back to the UK to "become invisible".

It comes in the same week that an OpenAI researcher said she had resigned, sharing concerns about the ChatGPT maker's decision to deploy adverts in its chatbot.

Anthropic, best known for its Claude chatbot, had released a series of commercials aimed at OpenAI, criticising the company's move to include adverts for some users.

The company, which was formed in 2021 by a breakaway team of early OpenAI employees, has positioned itself as having a more safety-orientated approach to AI research compared with its rivals.

Sharma led a team there which researched AI safeguards.

He said in his resignation letter his contributions included investigating why generative AI systems suck up to users, combatting AI-assisted bioterrorism risks and researching "how AI assistants could make us less human".

But he said despite enjoying his time at the company, it was clear "the time has come to move on".

"The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment," Sharma wrote.

He said he had "repeatedly seen how hard it is to truly let our values govern our actions" - including at Anthropic which he said "constantly face pressures to set aside what matters most".

Sharma said he would instead look to pursue a poetry degree and writing.

He added in a reply: "I'll be moving back to the UK and letting myself become invisible for a period of time."

Those departing AI firms, which have loomed large in the latest generative AI boom and have sought to retain talent with huge salaries or compensation offers, often do so with plenty of shares and benefits intact.

Eroding principles

Anthropic calls itself a "public benefit corporation dedicated to securing [AI's] benefits and mitigating its risks".

In particular, it has focused on preventing the risks it believes are posed by more advanced frontier systems, such as their becoming misaligned with human values, being misused in areas such as conflict, or becoming too powerful.

It has released reports on the safety of its own products, including when it said its technology had been "weaponised" by hackers to carry out sophisticated cyber attacks.

But it has also come under scrutiny over its practices. In 2025, it agreed to pay $1.5bn (£1.1bn) to settle a class action lawsuit filed by authors who said the company stole their work to train its AI models.

Like OpenAI, the firm also seeks to seize on the technology's benefits, including through its own AI products such as its ChatGPT rival Claude.

It recently released a commercial that criticised OpenAI's move to start running ads in ChatGPT.

OpenAI boss Sam Altman had previously said he hated ads and would use them as a "last resort".

Last week, he hit back at the advert's description of this as a "betrayal" - but was mocked for his lengthy post criticising Anthropic.

Writing in the New York Times on Wednesday, former OpenAI researcher Zoe Hitzig said she had "deep reservations about OpenAI's strategy".

"People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife," she wrote.

"Advertising built on that archive creates a potential for manipulating users in ways we don't have the tools to understand, let alone prevent."

Hitzig said a potential "erosion of OpenAI's own principles to maximise engagement" might already be underway at the firm.

She said she feared this may accelerate if the company's approach to advertising does not reflect its values to benefit humanity.

BBC News has approached OpenAI for a response.


A district court in New York set the Trump administration off on Wednesday by naming the replacement for one of the DOJ's several "not lawfully serving" acting U.S. attorneys, leading to a swift "you are fired" announcement on social media and a near replay of a standoff from the summer.

John Sarcone had clung to supervision of the U.S. Attorney's Office for the Northern District, first under his claimed title of acting top prosecutor, then through the office of first assistant U.S. attorney, and finally as a "special attorney", even after a judge quashed his grand jury subpoenas of New York Attorney General Letitia James' office, finding that Sarcone "used authority he did not lawfully possess to direct the issuance of the subpoenas[.]"

In a brief announcement, the court cited 28 U.S. Code § 546(d) to name [Donald] Kinsella the U.S. attorney, pointing to his "more than 50 years of experience in complex criminal and civil litigation" and his time as the criminal chief of the office.

Under the statute, when a U.S. attorney's stint has expired, the "district court for such district may appoint a United States attorney to serve until the vacancy is filled." And under Article II, Congress "may by Law vest the Appointment of such inferior Officers, as they think proper, in the President alone, in the Courts of Law, or in the Heads of Departments."


We must dismantle all production. Production has caused all of society's ills. The return to true nature, to the roots of humanity, is the only way to cure humanity of its problems. The first human to pick up a stone to start producing things put humanity on a deadly path that can only end in the complete eradication of humanity. It is for this reason that the phrase "death to production" is spoken, to save humanity.


This user is suspected of being a cat. Please report any suspicious behavior.


New York City Mayor Zohran Mamdani has appointed the Zionist leader of a liberal Jewish community group to lead his Office to Combat Antisemitism.

Phylisa Wisdom, who will be leading City Hall’s fight against Jew-hatred, has led the New York Jewish Agenda since 2023.

The group opposed the Boycott, Divestment and Sanctions (BDS) campaign, an anti-Israel initiative Mamdani has previously expressed his support for.

The group has been instrumental in calling out antisemitism over the years. Last month, after a protest outside a synagogue saw activists chant “we support Hamas”, NYJA condemned it as “unambiguous and unacceptable antisemitism”.


Using the title that the ABC has used, but the important part, I think, is the announcement of $87 million to support survivors through redress, family tracing, and trauma-informed care.


With Windows Baseline Security Mode, Windows will move toward operating with runtime integrity safeguards enabled by default. These safeguards ensure that only properly signed apps, services and drivers are allowed to run, helping to protect the system from tampering or unauthorized changes.

https://www.windowslatest.com/2026/02/12/microsoft-wants-windows-11-secure-by-default-could-allow-only-properly-signed-apps-and-drivers-by-default/


After 26 years of negotiations, the EU’s Partnership Agreement with the South American bloc Mercosur (Argentina, Brazil, Paraguay, and Uruguay) was expected to be a geopolitical win for the European Commission. Instead, it turned into a test of the EU’s capacity to overcome its own institutional complexity, exposing deep political divisions and hindering the EU’s trade agenda.


ANTWERP, Belgium, Feb 11 (Reuters) - French President Emmanuel Macron on Wednesday advocated for a single European energy market and for the construction of an integrated electricity grid.



Can someone recommend a tool, self-hosted or not, that I could schedule for periodic scans of everything I host that is exposed to the public internet?

I think I have done everything by the book, including CrowdSec and/or fail2ban, but recently, for example, I got an email from the German CERT that my n8n was out of date and had some CVEs. None of them were exploitable in my case, but it got me thinking: if CERT can do it, maybe there are services or tools I could use to get alerts sooner when something in my infrastructure is vulnerable.

Any recommendations welcome! Ideally self-hosted and FOSS, of course.
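Not from the post itself, but one minimal sketch of the "scheduled scan with alerts" pattern being asked about: a cron.d entry that runs nmap's `vuln` NSE script category against the public-facing host and mails the report. The hostname, schedule, log path, and mail address are all placeholders; this assumes nmap and a working MTA are installed, and a dedicated scanner would give better coverage.

```shell
# /etc/cron.d/vuln-scan -- hypothetical weekly scan (fields: min hour dom mon dow user command)
# Runs nmap's "vuln" script category against the public host every Monday at 03:00,
# saves the report, and mails it to the admin address.
0 3 * * 1  root  nmap -sV --script vuln example.my-host.tld -oN /var/log/vuln-scan.txt && mail -s "Weekly vuln scan" admin@example.com < /var/log/vuln-scan.txt
```

Scanning your own infrastructure from an external vantage point (a cheap VPS, say) is closer to what CERT sees than scanning from inside the LAN.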


Invest in my cumpany give me all ur money


Removed/flagged from Hacker News / Y Combinator news.

Tan, the CEO of the vaunted startup incubator Y Combinator, announced Wednesday in a press release that he had spun up a dark-money group called “Garry’s List”, which he described as a “voter education group” “dedicated to civic engagement, voter education and support for common-sense policies and candidates”. Such groups give donors a way to anonymously support causes without giving directly to a candidate or a measure.
