JealousJail

joined 8 months ago
[–] JealousJail@feddit.org 0 points 4 months ago* (last edited 4 months ago)

I believe that we are not yet in the end stage of AI. LLMs are certainly useful, but they cannot solve the most important problems of mankind.

More research is required to solve, e.g., a) sustainable energy supply, b) the imbalanced demographics of industrialized countries, and c) the treatment of several diseases.

Like it or not, AI that can do research for us, or at least increase the efficiency of human researchers, is the most promising trajectory for accelerating progress on these important problems.

Right now, AI has not moved beyond this scope. Yes, AI can generate quite realistic fake videos, but propaganda was possible long before (look at China, Russia, or Nazi Germany; even TikTok, without any AI, is dangerous enough to severely threaten democracies).

As a researcher in this domain, let me tell you that no one who seriously understands video generation and related fields is afraid of the current state of AI.

[–] JealousJail@feddit.org 4 points 4 months ago* (last edited 4 months ago)

It has been more than just hyperscaling. First of all, the invention of transformers would likely have been significantly delayed without the hype around CNNs in the first AI wave around 2014. OpenAI wouldn't have been founded, and its early contributions (like Proximal Policy Optimization in RL) could have taken longer to be explored.

While I agree that the transformer architecture itself hasn't advanced far since 2018 apart from scaling, its success has significantly contributed to progress on self-learning policies.

RLHF, Direct Preference Optimization (DPO), and in particular DeepSeek's GRPO are huge milestones for reinforcement learning, which is arguably the most promising trajectory toward actual intelligence. They are a direct consequence of the money pumped into AI and of the appeal it has to many smart and talented people around the world.
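To make the DPO milestone concrete, here is a minimal sketch of its per-pair loss in plain Python (my own illustration, not code from any of the papers; variable names like `logp_w`/`logp_l` are hypothetical). DPO skips RLHF's explicit reward model and directly pushes the policy to prefer the human-chosen response over the rejected one, relative to a frozen reference model:

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Direct Preference Optimization loss for a single preference pair.

    logp_w / logp_l: policy log-probs of the preferred ("winner") and
    rejected ("loser") responses; ref_* are the same under the frozen
    reference policy. beta scales the implicit KL penalty.
    """
    # Margin between how much the policy (vs. the reference) favors
    # the winner over the loser.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # -log(sigmoid(margin)): small when the policy already prefers
    # the winner more strongly than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy and reference agree exactly, the margin is zero and the loss sits at log 2; as the policy learns to favor preferred responses, the loss drops toward zero. GRPO keeps the reward-driven RL loop but replaces the value network with group-relative advantage estimates, which is part of why it is so much cheaper to run.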

[–] JealousJail@feddit.org 5 points 4 months ago* (last edited 4 months ago) (2 children)

At least they've wasted their money on researching what doesn't work, instead of just building silly products as in the .com bubble.

Humanity will gain insights into which AI approaches won't work much faster than it would have without all the money. It's just an allocation of human effort.

[–] JealousJail@feddit.org 11 points 4 months ago (3 children)

I disagree a bit. Any money the ultra-rich invest in research is better spent than on their next mega-yacht, even if AI cannot meet expectations like AGI.

[–] JealousJail@feddit.org 2 points 8 months ago

Agreed. Being financially independent can also enable investment in ads and lead to a gain in popularity.