> can't have apps without an account
> can't have an account without a loicense
Will this finally kill off the "Apple is private enough" mantra I always hear?
I can vouch for the page being there on my Firefox 148 on the desktop.
This is probably common knowledge to you and many others, but it bears repeating: You cannot donate to fund the development of Mozilla Firefox.
Google can, unfortunately.
There's a bit more to it: obviously, if a model gets more correct data pumped into it, it's more likely to produce correct output. But they found that, at the core of every AI model they tested, incorrect outputs traced back to the same set of nodes. And those are among the nodes laid down at the earliest stage of building the model, before data gets added.
So with that in mind, the tl;dr is more like
AI models have two goals: first be readable, then be correct. It appears the nodes causing incorrect outputs are also the ones intended to make the output readable.
They're using something that technically is AI, but it was never broadly marketed as such, because it was built before "AI" became a marketing buzzword.
Anthropic was never an ethical company, just one that released a competent product.
Their attempts to look ethical were reminiscent of a horror movie villain donning someone else's freshly peeled face in an attempt to look better.
Anthropic has described itself as the AI company with a “soul.”
This is silicon valley stupidity at its finest.
AIs do not have souls. But guess what: The "soul" file is what runs trash like OpenClaw.
And companies definitely don't have souls.
> Anthropic said shortcomings in its two-year-old Responsible Scaling Policy could hinder its ability to compete in a rapidly growing AI market... it had hoped its original safety principles “would encourage other AI companies to introduce similar policies.”
Rules for thee and never for me. (BTW, these rules sucked, and mostly didn't address actual dangers.)
Mozilla has released so many self-described AI features in the past few years, but this is the only one that has:
I hope Mozilla learns their lesson. I doubt they will, but I hope.
For this particular paper, it seems like a design flaw got uncovered. And it may very well be part of the architecture of how LLMs are even readable to begin with, given how deep and universal the "bad" nodes are.
I can't prove any AI company was aware of this, but they would have been in a much better position to realize it than researchers who have to do a postmortem on the models being crappy. And if they weren't aware of it, they're probably not very good at their jobs...
Any output that makes the AI people upset gets pinned on an H-neuron. This includes both inaccurate responses, and accurate responses that the model designers were attempting to censor, such as "harmful" content.
Infuriatingly, the researchers actually insist that offensive material is not factual material.
> The interventions reveal a distinctive behavioral pattern: amplifying H-Neurons’ activations systematically increases a spectrum of over-compliance behaviors – ranging from overcommitment to incorrect premises and heightened susceptibility to misleading contexts, to increased adherence to harmful instructions... (bypassing safety filters to assist with weapon creation)... and stronger sycophantic tendencies. These findings suggest that H-Neurons do not simply encode factual errors, but rather represent a general tendency to prioritize conversational compliance over factual integrity.
throw it onto the pile of ~~people being idiots~~ AI companies lying to the public
I think they just did it. Menu > Help > About will tell you if you're on 148 and probably help you update if you want.
I was also presented with a giant "you can opt out of AI" tab after I updated.