Ooof, difficult. Triceratops, I think.
Is the stance here that AI is more dangerous than those because of its black-box nature, its poor guardrails, the fact that it's a developing technology, or its unfettered access?
All of the above, I guess. Although I am not keen on making a comparison to these previous things. I have previously written about how IoT/"Smart" devices are a massive security issue, for example. This is not a competition; the point is not whether these tools are worse by some degree than other problematic technologies. The point is that the AI hype would have you believe they are some end-all demiurges, when the real threat is coming from inside the house.
Also, do you think the "popularity" of Google Gemini comes from people already being indoctrinated into the Assistant ecosystem before it became Gemini? Google already had a stranglehold on the search market, so the integration of Gemini into those services isn't seen as dangerous, because people are already reliant on it and Google is a known brand rather than a new "startup".
I don't know about Gemini's actual popularity. What I do know is that it is being shoved down people's throats in every possible way.
My feeling is that a lot of people would prefer to use their tools and devices the way they had before this crap came down the pipeline, but they simply don't know how to turn it off reliably (partially because Google makes it really hard to do so). And so Google gets to make bullish line-going-up claims as far as "people using Gemini" are concerned.
To me it's a form of automation. I will not use it, as I see no use for it in my workflows. I am also keenly aware of the trap of thinking it improves one's effectiveness when it often does very much the opposite. Not to mention the environmental costs, etc.
If somebody wants to use these tools for that, whatever, have at it. But it's pretty difficult to claim one's an ethical hacker if one uses tools that have serious ethical issues. And genAI has plenty of those. So that's what I'd bear in mind.
I am not opposed to machine learning as a technology. I use Firefox's built-in translation as a way to access information online I otherwise would not be able to access, and I think it's great that a small, local model can provide this kind of functionality.
I am opposed to "AI" as a marketing term – and it is a marketing term; there are now toothbrushes with "AI" – and I am opposed to religious, pseudo-sciencey bullshit like AGI (here's Marc Andreessen talking about how "AGI is a search for God").
I also see very little use for LLMs. This has been pointed out before, by researchers who got fired from Google for doing so: smaller, more tailored models are going to be better suited for specific tasks than ever-bigger behemoths. The only reason Big Tech is desperately pushing huge models is that these cannot be run locally, which means Big Tech can monopolize them. Firefox's translation models show what we could have if we went in a different direction.
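To make "small, local model" concrete: Firefox's translation feature runs Marian machine-translation models on your own machine (via the Bergamot project) rather than calling a cloud API. Here's a rough sketch of the same idea using a publicly available Marian checkpoint through the transformers library; the library and model name are my illustrative choices, not what Firefox actually ships:

```python
# Sketch: a small translation model running entirely on the local machine.
# Assumes the `transformers` library is installed; the Helsinki-NLP Marian
# checkpoint below is a real, publicly available model (a few hundred MB),
# picked purely for illustration.
from transformers import pipeline

# The model is downloaded once and cached; after that, this works offline,
# with no API key and no cloud endpoint involved.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")

result = translator("Kleine lokale Modelle sind für viele Aufgaben ausreichend.")
print(result[0]["translation_text"])
# Prints something like: "Small local models are sufficient for many tasks."
```

The relevant property is that the whole thing runs on your own hardware, which is precisely what the huge hosted models cannot do – and, per the above, why they are the ones being pushed so hard.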
I cannot wait for the bubble to burst so that we can start getting the actually interesting tools out of ML-adjacent technologies, instead of the insufferable, investor-targeted hype we are getting today. Just as we started getting actually interesting Internet stuff once the dot-com bubble popped.
Author here, AMA.
Again, it does not make sense to me to make that kind of comparison.