thinkercharmercoderfarmer
@thinkercharmercoderfarmer@slrpnk.net
lemmy
0.19.17
0 Followers · 0 Following
Joined October 14, 2025
Posts
In reply to
thinkercharmercoderfarmer
@thinkercharmercoderfarmer@slrpnk.net
slrpnk.net
in technology · Mar 24, 2026
Right, I mean if you made the context window enormous, such that you could include the entire set of embeddings plus a set of memories (or maybe an index of memories that can be “recalled” with keywords), you’d have a self-observing loop that can learn and remember facts about itself. I’m not saying that’s AGI, but I find it somewhat unsettling that we don’t have an agreed-upon definition. If a for-profit corporation made an AI that could be considered a person with rights, I imagine they’d be reluctant to be convincing about it.
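The keyword-recalled memory index described above can be sketched in a few lines. This is a hypothetical toy, not any real system: memories are stored with keywords, and a query “recalls” every memory sharing a keyword, which could then be prepended to a model’s context window.

```python
# Toy sketch of a keyword-indexed memory store. All names are
# illustrative; a real system would use embeddings, not exact keywords.
from collections import defaultdict


class MemoryIndex:
    def __init__(self):
        self.memories = []              # full memory texts, by id
        self.index = defaultdict(set)   # keyword -> set of memory ids

    def remember(self, text, keywords):
        mid = len(self.memories)
        self.memories.append(text)
        for kw in keywords:
            self.index[kw.lower()].add(mid)
        return mid

    def recall(self, query):
        # Gather every memory whose keywords overlap the query's words.
        hits = set()
        for word in query.lower().split():
            hits |= self.index.get(word, set())
        return [self.memories[i] for i in sorted(hits)]


store = MemoryIndex()
store.remember("I prefer concise answers.", ["style", "concise"])
store.remember("My name is Alex.", ["name", "identity"])
print(store.recall("what is my name"))  # ['My name is Alex.']
```

The “self-observing loop” would then feed the model’s own outputs back through `remember`, so later queries surface facts the system previously stated about itself.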
In reply to
thinkercharmercoderfarmer
@thinkercharmercoderfarmer@slrpnk.net
slrpnk.net
in technology · Feb 27, 2026
Why not? If LLMs are good at predicting mean outcomes for the next symbol in a string, and humans have idiosyncrasies that deviate from that mean in a predictable way, I don’t see why you couldn’t detect and correlate certain language features that map to a specific user. You could use things like word choice, punctuation, slang, common misspellings, sentence structure… For example, I started with a contradicting question, I used “idiosyncrasies”, I wrote “LLMs” without an apostrophe, “language features” is a term of art, as is “map” as a verb, etc. None of these is indicative on its own, but unless people take exceptional care to either hyper-normalize their style or explicitly spike their language with confounding elements, I don’t see why an LLM wouldn’t be useful for this kind of espionage.
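The stylometric features listed above (word choice, punctuation habits, sentence structure) can be turned into a crude fingerprint vector. This is an illustrative sketch only; the function name and feature set are made up for this example, and real authorship attribution uses far richer features and a trained classifier.

```python
# Hypothetical stylometric feature extractor: reduces a text to a few
# of the style signals mentioned above. Purely illustrative.
import re
from collections import Counter


def style_features(text):
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        # word-choice signal: mean word length and favorite words
        "avg_word_len": sum(map(len, words)) / max(len(words), 1),
        "top_words": [w for w, _ in Counter(words).most_common(3)],
        # sentence-structure signal: words per sentence
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        # punctuation habit: ellipsis use, a common idiosyncrasy
        "ellipses": text.count("...") + text.count("\u2026"),
    }


sample = "Why not? Idiosyncrasies deviate from the mean... predictably."
print(style_features(sample))
```

Comparing such vectors across accounts (e.g. by cosine similarity) is the correlation step the comment describes: no single feature identifies a user, but the combination narrows the candidate pool.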