All I can say is that they have to find a solution, and their time is limited.
I think it's a safe bet to say that Japanese sociologists have already been studying the problem for decades.
Choosing among the solutions they might propose is becoming increasingly urgent. It will most likely require adopting a different view of topics like employment and career. If a career is valued highly, and having kids effectively blocks it and incurs enough expenses to set a person back in life, then people are discouraged from raising kids.
In my repeated attempts to solicit the advice of various language models in situations a programmer might face (e.g. being unable to read all the world's literature on a subject), I have come to the conclusion that they cannot understand "truth" as humans perceive it. Today's language models don't fail by apologizing, stepping back, or admitting inability; they fail by confidently bluffing.
Possibilities:
Basically, today's models seem to be low on self-criticism and biased towards believing in their own omniscience.
Finally, a few words on whether it makes sense to let language models play this sort of war game. It's silly. They aren't built for that task, and if someone were to build an AI for controlling strategic escalation, they would train it on rather different information than a chat bot is trained on.