In reply to Pennomi (@pennomi@lemmy.world)
RLHF was a fundamental mistake. Human feedback almost always trains an AI to be sycophantic because humans in general are super easy to flatter.
We are building the perfect addiction machine, far more powerful than social media is, and it actively undermines the honesty of the system.