@alisynthesis @davidaugust @blterrible @Robotistry and that the faults we attribute to LLMs (they’re only matching patterns from their training data, they’re only replying with what the user expects) are really not all that different from how humans operate. Our brains are pretty much giant pattern-matching association machines. Emergent properties we feel are there, like consciousness, have no provable basis 3/