Hackworth
@Hackworth@piefed.ca
0 Followers · 0 Following
Joined October 09, 2025
Posts

In reply to @Hackworth@piefed.ca
I often think of the old man that lost his horse.

In reply to @Hackworth@piefed.ca
Just re-watched the '07 Turtles movie. It holds up pretty well.

In reply to @Hackworth@piefed.ca
They then proceeded with the enslaving, sans computer.

In reply to @Hackworth@piefed.ca
Oompa Loompa doompety doo.

In reply to @Hackworth@piefed.ca · in technology · Dec 17, 2025
Here’s a metaphor/framework I’ve found useful but am trying to refine, so feedback welcome.
Visualize the deforming rubber sheet model commonly used to depict masses distorting spacetime. Your goal is to roll a ball onto the sheet from one side such that it rolls into a stable or slowly decaying orbit around a specific mass. You begin by aiming for a mass on the outer perimeter of the sheet, but with each roll you must aim for a mass further toward the center. The longer you play, the more masses sit between you and your goal, to be rolled past or slingshotted around. As soon as you fail to hit a goal, you lose; until then, you can keep playing indefinitely.
The model’s latent space is the sheet. The prompt is your roll of the ball. The response is the path the ball takes. And the good (useful, correct, original, whatever your goal was) response is the orbit of the mass you’re aiming for. As the context window grows, there are more pitfalls the model can fall into. When you finally lose, there’s a phase transition, and the model starts going way off the rails. This phase transition was formalized mathematically in this paper from August.
The masses are attractors that have been studied at different levels of abstraction. And the metaphor/framework seems to work at different levels as well, as if the deformed rubber sheet is a fractal with self-similarity across scale.
One level up: the sheet becomes the trained alignment, the masses become potential roles the LLM can play, and the rolled ball is the RLHF or fine-tuning. So we see the same kind of phase transition in both prompting (from useful to hallucinatory) and in training.
Two levels down: the sheet becomes the neuron architecture, the masses become potential next words, and the rolled ball is the transformer process.
In reality, the rubber sheet has like 40,000 dimensions, and I’m sure a ton is lost in the reduction.
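The basin-of-attraction picture can be sketched numerically. Below is my own toy, hypothetical 1-D version of the rubber sheet (not taken from any of the cited papers): three Gaussian wells stand in for the masses, and a "roll" is just repeated small steps downhill. Nearby starting points can settle into different wells, which is the sensitivity the metaphor is pointing at.

```python
import math

# Toy 1-D "rubber sheet": three wells (attractors) at these positions.
# Purely illustrative; the well positions and dynamics are made up.
WELLS = [-2.0, 0.0, 3.0]

def grad(x):
    """Gradient of a potential built from inverted Gaussians at the wells."""
    g = 0.0
    for w in WELLS:
        # d/dx of -exp(-(x-w)^2) is 2*(x-w)*exp(-(x-w)^2)
        g += 2.0 * (x - w) * math.exp(-(x - w) ** 2)
    return g

def roll(x0, steps=5000, lr=0.05):
    """Overdamped 'roll': repeated small steps downhill from x0."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def nearest_well(x):
    return min(WELLS, key=lambda w: abs(w - x))

# Nearby starting points ("prompts") can end up in different basins.
for x0 in (-1.2, -0.9, 2.0):
    print(x0, "->", nearest_well(roll(x0)))
```

The interesting behavior is between -1.2 and -0.9: a small change in where you release the ball flips which attractor it ends up in, the toy analogue of a small prompt change flipping the response into a different basin.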

In reply to @Hackworth@piefed.ca · in technology · Dec 15, 2025
There’s a lot of research around this. LLMs go through phase transitions when they reach the thresholds described in Multispin Physics of AI Tipping Points and Hallucinations. That paper is more about predicting the transitions between helpfulness and hallucination within regular prompting contexts, but we see similar phase transitions between roles and behaviors in fine-tuning, presented in Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs.
This may be related to attractor states that we’re starting to catalog in the LLM’s latent/semantic space. It seems like the underlying topology contains semi-stable “roles” (attractors) that LLM generations fall into (or are pushed into, in the case of the previous papers).
Unveiling Attractor Cycles in Large Language Models
Mapping Claude’s Spiritual Bliss Attractor
The math is all beyond me, but as I understand it, some of these attractors are stable across models and languages. We do, at least, know that there are some shared dynamics that arise from the nature of compressing and communicating information.
Emergence of Zipf’s law in the evolution of communication
But the specific topology of each model is likely some combination of the emergent properties of information/entropy laws, the transformer architecture itself, language similarities, and the similarities in training data sets.
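Zipf’s law itself is easy to sanity-check on any word-count data: if frequency scales as 1/rank, then rank × frequency should stay roughly constant. A minimal sketch, using a made-up corpus rather than anything from the cited paper:

```python
from collections import Counter

def rank_frequency(words):
    """Return (rank, frequency) pairs, most frequent word first."""
    counts = Counter(words)
    ranked = sorted(counts.values(), reverse=True)
    return [(rank, freq) for rank, freq in enumerate(ranked, start=1)]

# Synthetic corpus whose counts follow 1/rank exactly: 12, 6, 4, 3.
corpus = ["the"] * 12 + ["of"] * 6 + ["and"] * 4 + ["to"] * 3
for rank, freq in rank_frequency(corpus):
    print(rank, freq, rank * freq)  # rank * freq is constant (12) here
```

On real text the product drifts rather than staying exactly constant, but it stays within a narrow band across a surprising range of ranks, which is the empirical regularity the paper tries to explain.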

In reply to @Hackworth@piefed.ca · in technology · Dec 13, 2025
Copilot is just an implementation of GPT. Claude’s the other main one.

In reply to @Hackworth@piefed.ca · in lemmyshitpost · Dec 12, 2025

In reply to @Hackworth@piefed.ca · in lemmyshitpost · Dec 07, 2025
Æsahættr has entered the chat.

In reply to @Hackworth@piefed.ca · in technology · Dec 06, 2025
Westworld

In reply to @Hackworth@piefed.ca · in lemmyshitpost · Dec 03, 2025
I was told ignorance would be bliss. I would like a refund.