a few weeks ago I replayed Eliza, a visual novel about someone who worked at a tech company developing a "next-generation AI" product, got burned out, disappeared for a few years, and then returned
in the game, Eliza is a supposedly-advanced therapy system where a human "proxy" reads lines off a headset that are generated by the software; sometimes patients do feel better after their sessions, thanks to a kind of placebo effect of talking thru their problems with someone who appears to listen, but for anything nontrivial its advice is unhelpful and patronizing
the debug logs scroll by in your headset while you facilitate sessions, and one subtle gag is that you can see its "sentiment analysis" is just "did they say one of these positive/negative words?", ignoring context entirely
today some code from Claude was leaked, and it turns out that despite having a very advanced language model, they are just ... basically doing the exact same thing: does the input contain one of these phrases? if so, it's negative!
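for a sense of what that kind of check looks like, here's a minimal sketch of phrase-matching "sentiment analysis" (the phrase lists and function name are invented for illustration; this is not the actual leaked code):

```python
# hypothetical phrase lists -- illustrative only
NEGATIVE_PHRASES = ["i hate", "this is terrible", "i can't do this"]
POSITIVE_PHRASES = ["i love", "this is great", "thank you"]

def classify_sentiment(text: str) -> str:
    """Naive keyword matching: no parsing, no negation, no context."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in NEGATIVE_PHRASES):
        return "negative"
    if any(phrase in lowered for phrase in POSITIVE_PHRASES):
        return "positive"
    return "neutral"

# context-free substring checks get negation exactly backwards:
print(classify_sentiment("it's not like I hate it, I love it"))  # -> "negative"
```

the joke writes itself: the check fires on "i hate" no matter what surrounds it, so the one utterance that should read as positive comes back negative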
— technomancy (@technomancy@hey.hagelb.org), Mar 31, 2026