this post was submitted on 30 Jan 2026
The immediate catalyst, it seems, is an intensifying focus on capex, or capital expenditures. Microsoft revealed that its spending surged 66% to $37.5 billion in the latest quarter, even as growth in its Azure cloud business cooled slightly. Even more concerning to analysts, however, was a new disclosure that approximately 45% of the company’s $625 billion in remaining performance obligations (RPO)—a key measure of future cloud contracts—is tied directly to OpenAI, the company revealed after reporting earnings Wednesday afternoon. (Microsoft is both a major investor in and a provider of cloud-computing services to OpenAI.)

[–] Wispy2891@lemmy.world 4 points 4 hours ago (1 children)

You'd need to write a custom program to do that. I mean a traditional program, where state is actually stored in variables.

The models have no memory at all; every question starts from scratch. The clients just "pretend" there is a memory by including all previous questions and answers in each new query. You reply "ok", but the model actually receives thousands of words containing the entire history.
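A minimal sketch of what that looks like from the client's side (all names here are invented for illustration; no real API is called):

```python
# Hypothetical sketch: what a chat client actually sends each turn.
# The model never "remembers" anything; the client replays the whole
# transcript every time, so a one-word reply like "ok" still ships
# the entire conversation back to the server.

history = []

def build_request(user_message):
    """Append the new message and return the full payload the model sees."""
    history.append({"role": "user", "content": user_message})
    return list(history)  # the ENTIRE conversation, not just the new turn

def record_reply(text):
    history.append({"role": "assistant", "content": text})

build_request("Explain context windows")
record_reply("A context window is the maximum amount of text ...")
payload = build_request("ok")

# Your reply was 2 characters, but the request contains every prior turn:
print(len(payload))  # 3 messages, and growing on every exchange
print(sum(len(m["content"]) for m in payload))
```

The cost grows with every turn because the provider bills for everything in `payload`, not just the new message.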

Because each question gets more and more expensive, at some point the client starts pruning old stuff. It either truncates the content (for example, the mostly useless Meta AI chatbot that WhatsApp forced on everyone loses context after 2-3 questions) or it uses the model itself to produce a condensed summary of past interactions, which is where the hallucinations creep in.
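The two pruning strategies can be sketched like this (a toy illustration: `MAX_CHARS` stands in for a real context limit, which is measured in tokens, and the "summary" here is a fake stand-in for what a real client would ask the model itself to write):

```python
# Hypothetical sketch of the two pruning strategies: truncation vs.
# summarization. Both lose information; they just lose it differently.

MAX_CHARS = 200  # stand-in for the model's real context limit

def truncate(history):
    """Drop the oldest turns until the transcript fits (cheap but lossy)."""
    pruned = list(history)
    while sum(len(m) for m in pruned) > MAX_CHARS and len(pruned) > 1:
        pruned.pop(0)  # oldest context is silently discarded
    return pruned

def summarize(history):
    """Condense old turns into one blob. A real client would ask the
    model itself to write this summary, and errors in that summary are
    exactly where hallucinated 'memories' come from."""
    old, recent = history[:-2], history[-2:]
    summary = "Summary of earlier chat: " + " | ".join(m[:20] for m in old)
    return [summary] + recent

chat = [f"turn {i}: " + "x" * 40 for i in range(8)]
print(len(truncate(chat)))   # only the newest turns survive
print(len(summarize(chat)))  # 1 summary blob + 2 recent turns
```

Truncation simply forgets; summarization remembers a paraphrase, and the model treats that paraphrase as if it were the original conversation.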

Otherwise it would cost something like $1 per question, or more.

[–] Earthman_Jim@lemmy.zip 3 points 4 hours ago* (last edited 4 hours ago) (1 children)

Which kind of illustrates the fundamental flaw, right? Video game companies have spent decades creating replayable D&D-esque experiences that are far more memory-efficient and cost-effective. They already do it more or less the best way. AI can assist, and things like the machine learning behind the NPC behaviors in Arc Raiders, for example, are very cool, but as you said, you need a custom program... which is what a video game is. So I guess my point is I don't see the appeal of reinventing it through a sort of automated reverse engineering.

[–] postscarce@lemmy.dbzer0.com 1 points 2 hours ago

LLMs could theoretically give a game a lot more flexibility, by responding dynamically to player actions, generating custom dialogue, etc., but, as you say, they would work best as a module within an existing framework.

I bet some of the big game dev companies are already experimenting with this, and in a few years (maybe a decade considering how long it takes to develop a AAA title these days) we will see RPGs with NPCs you can actually chat with, which remain in-character, and respond to what you do. Of course that would probably mean API calls to the publisher’s server where the custom models are run, with all of the downsides that entails.