No matter what you ask, an LLM will give you an answer. It will never say "I don't know."
There is a reason for this. During training, LLMs are "rewarded" (just an internal scoring mechanism) for producing an answer, so the model tries to maximize that score by generating a plausible-sounding answer even if it has to hallucinate one. There is typically no reward for saying "I don't know" to a difficult question.
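To make the incentive concrete, here is a toy sketch (my own illustration, not how any real model is actually trained or graded): if the scorer gives 1 point for a correct answer and 0 for anything else, "I don't know" can never earn points, so even a low-probability guess beats abstaining on average.

```python
# Toy illustration only: a binary grader with no credit for abstaining.
# All names and numbers here are made up for the example.
import random

def grade(answer: str, correct: str) -> int:
    # 1 point if the answer matches, 0 otherwise.
    # Note: honestly saying "I don't know" scores the same as being wrong.
    return 1 if answer == correct else 0

def expected_score(strategy: str, p_correct: float, trials: int = 100_000) -> float:
    total = 0
    for _ in range(trials):
        correct = "42"
        if strategy == "guess":
            # The model guesses and happens to be right with probability p_correct.
            answer = correct if random.random() < p_correct else "wrong"
        else:
            answer = "I don't know"
        total += grade(answer, correct)
    return total / trials

if __name__ == "__main__":
    # Even a 10%-accurate guess out-scores abstaining, which always scores 0.
    print("guess:  ", expected_score("guess", p_correct=0.10))
    print("abstain:", expected_score("abstain", p_correct=0.10))
```

Under that kind of scoring, the "rational" policy is to always answer, which is the behavior the comment above describes.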
I am not into LLM research, but I think this is being worked on.
Thank you for writing exactly what I was thinking.
I heard that Japan is starting to roll out a government-sponsored matchmaking app. The core advantage is that the platform's actual intent is to match people and encourage them to have children. Plus, if someone is being naughty, the penalties can be much harsher than a simple account ban.