In promoting its developer registration program, Google claims:
“Our recent analysis found over 50 times more malware from internet-sideloaded sources than on apps available through Google Play.”
We haven’t seen this recent analysis, or any other supporting evidence, but a “50 times” multiple certainly sounds like great cause for distress (even if it is a suspiciously round number). Yet given the recent news that “224 malicious apps removed from the Google Play Store after ad fraud campaign discovered”, we are left to wonder whether Google’s energies might be better spent assessing and improving its own safeguards rather than casting vague aspersions on the software development communities that thrive outside its walled garden.
AI companies are certainly aware of the real risks. It’s the imaginary ones (“what happens if AI becomes sentient and takes over the world?”) that I suspect they’ll direct their resources toward.
Meanwhile, they (intentionally) fail to implement even a simple cutoff switch for a child expressing suicidal ideation. Anyone with basic programming knowledge could build a serviceable interception tool, which makes all the talk about guardrails seem almost as fanciful.
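To make that claim concrete, here is a minimal sketch of the kind of naive interception being alluded to: a keyword check that cuts off the normal reply and surfaces a help resource instead. The phrase list and response text are illustrative placeholders; a production safety system would need far more nuance (context, paraphrase detection, human review), but the point is that even this crude baseline is trivial to build.

```python
# Illustrative sketch only: a naive keyword-based cutoff switch.
# CRISIS_PHRASES and the replies are hypothetical placeholders,
# not a real safety system.
CRISIS_PHRASES = [
    "want to die",
    "kill myself",
    "end my life",
    "suicide",
]

def should_intercept(message: str) -> bool:
    """Return True if the message contains any crisis phrase."""
    lowered = message.lower()
    return any(phrase in lowered for phrase in CRISIS_PHRASES)

def respond(message: str) -> str:
    """Cut off the normal reply when a crisis phrase is detected."""
    if should_intercept(message):
        return "I'm concerned about you. Please contact a crisis helpline."
    return "(normal model reply)"
```

A real deployment would obviously need to handle misspellings, paraphrase, and context, which is precisely why the absence of even this floor-level check is so striking.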