The good news, for me at least, is that the computer thinks I have a nice personality. According to an app called MorphCast, I was, in a recent meeting with my boss, generally “amused,” “determined,” and “interested,” though—sue me—occasionally “impatient.” MorphCast, you see, purports to glean insights into the depths and vagaries of human emotion using AI. It found that my affect was “positive” and “active,” as opposed to negative and/or passive. My attention was reasonably high. Also, the AI informed me that I wear glasses—revelatory!
The bad news is that software now purports to glean insights into the depths and vagaries of human emotion using AI, and it is coming to watch you. If it isn’t already: MorphCast, for example, has licensed its technology to a mental-health app, a program that monitors schoolchildren’s attention, and McDonald’s, which launched a promotional campaign in Portugal that scanned app users’ faces and offered them personalized coupons based on their (supposed) mood. It is one of many, many such companies doing similar work—the industry term is emotion AI or sometimes affective computing.
Some products analyze video of meetings or job interviews or focus groups; others listen to audio for pitch, tone, and word choice; still others can scan chat transcripts or emails and spit out a report about worker sentiment. Sometimes, the emotion AI is baked in as a feature in multiuse software, or sold as part of an expensive analytics package marketed to businesses. But it’s also available as a stand-alone product, and the barrier to entry is shin-high: I used MorphCast at no cost, taking advantage of a free trial, and with no special software. At no point was I compelled to ask my interlocutors if they consented to being analyzed in this way (though I did ask, because of my good personality).
this post was submitted on 04 May 2026
Technology
Please, people, don't discuss the new polygraph as if it were more real than the old one. They are both tools, but, first and foremost, they are both lies. It's just a guess, but I believe there's probably someone at every stage of the implementation saying it doesn't quite work like that, or that it doesn't work at all. But who cares? The product they are selling is justification for exploitation.
Thank you. So, so much of AI is designed to manufacture a justification for what the entity using it already wanted to do.
Will AI replace actors and scriptwriters? No, but it gives studios an excuse to fire expensive, unionized workers and quietly replace them with cheap overseas labor.
Will AI replace coders? No, but it gives big tech an excuse to fire expensive, experienced, American coders earning American salaries, and replace them with a dozen new graduates out of Hyderabad.
Will AI replace police work? No, but it'll send bad facial recognition results falsely labeling innocent black men as criminals, and give racist police more excuses to arrest black men for being black in public.
Will AI replace military intelligence? No, but it'll rubber-stamp an "assessment" saying any village or church or hospital or fishing boat or girls' school we want to destroy is a legitimate military target, and give the US military that fig leaf of justification that lets the people actually pulling the trigger sleep at night.
Finally, will AI replace sound judgment about the human being in front of you? No, but it sure as hell will provide "objective" support for whatever prejudices you already have.