rysiek

joined 5 years ago
[–] rysiek@szmer.info 1 points 20 hours ago

So I suppose I’m trying to understand what the differences are in how these two types of technology compromise cyber security.

Again, it does not make sense to me to make that kind of comparison.

[–] rysiek@szmer.info 8 points 20 hours ago (1 children)

Ooof, difficult. Triceratops, I think.

[–] rysiek@szmer.info 2 points 22 hours ago (2 children)

Is the stance here that AI is more dangerous than those because of its black box nature, it’s poor guardrails, the fact that it’s a developing technology, or it’s unfettered access?

All of the above I guess. Although I am not keen on making a comparison to these previous things. I have previously written about how IoT/"Smart" devices are a massive security issue, for example. This is not a competition, the point is not whether or not these tools are worse by some degree from some other problematic technologies, the point is that the AI hype would have you believe they are some end-all demiurgs when the real threat is coming from inside the house.

Also, do you think that the “popularity” of Google Gemini is because people were already indoctrinated into the Assistant ecosystem before it became Gemini, and Google already had a stranglehold on the search market so the integration of Gemini into those services isn’t seen as dangerous because people are already reliant and Google is a known brand rather than a new “startup”.

I don't know about Gemini's actual popularity. What I do know is that it is being shoved down people's throats in every possible way.

My feeling is that a lot of people would prefer to use their tools and devices the way they had before this crap came down the pipeline but they simply don't know how to turn it off reliably (partially because Google makes it really hard to do so), and so Google gets to make bullish claims on line-going-up as far as "people using Gemini" are concerned.

[–] rysiek@szmer.info 6 points 1 day ago

To me it's a form of automation. I will not use it as I see no use of it in my workflows. I am also keenly aware of the trap of thinking it improves one's effectiveness when it often very much does the opposite. Not to mention environmental costs etc.

If somebody wants to use these tools for that, whatever, have at it. But it's pretty difficult to claim one's an ethical hacker if one uses tools that have serious ethical issues. And genAI has plenty of those. So that's what I'd bear in mind.

[–] rysiek@szmer.info 11 points 1 day ago* (last edited 1 day ago)

I am not opposed to machine learning as a technology. I use Firefox's built-in translation as a way to access information online I otherwise would not be able to access, and I think it's great that small, local model can provide this kind of functionality.

I am opposed to marketing terms like "AI" – "AI" is a marketing term, there are now toothbrushes with "AI" – and I am opposed to religious pseudo-sciencey bullshit like AGI (here's Marc Andreessen talking about how "AGI is a search for God").

I also see very little use for LLMs. This has been pointed out before, by researchers who got fired for doing so from Google: smaller, more tailored models are going to be better suited for specific tasks than ever-bigger humongous behemoths. The only reason Big Tech is desperately pushing for huge models is because these cannot be run locally, which means they can monopolize them. Firefox's translation models show what we could have if we went in a different direction.

I cannot wait for the bubble to burst so that we can start getting the actually interesting tools out of ML-adjacent technologies, instead of the insufferable investor-targeted hype we are getting today. Just as we started getting actually interesting Internet stuff once the Internet bubble popped.

[–] rysiek@szmer.info 13 points 1 day ago (11 children)

Author here, AMA.

 

The way “AI” is going to compromise your cybersecurity is not through some magical autonomous exploitation by a singularity from the outside, but by being the poorly engineered, shoddily integrated, exploitable weak point you would not have otherwise had on the inside.

LLM-based systems are insanely complex. And complexity has real cost and introduces very real risk.

 

Text: ICE agents are complaining that every time they go out wearing masks in unmasked cars with no uniforms or identification, protesters keep dumping pounds of glitter on them so that everyone can tell they're ICE for days afterwards.

Image below the text: a man in white shirt and black tie and glasses, with a raised hand, as if trying to get someone's attention.

Text on that image: who had "Glitter bombing the Gestapo" on their bingo card?