!technology @bitjunkie
In reply to 7 earlier posts
@muelltonne@feddit.org on feddit.org
@ceenote@lemmy.world on lemmy.world
So, like with Godwin’s law, the probability of an LLM being poisoned as it harvests enough data to become useful approaches 1.
@Gullible@sh.itjust.works on sh.itjust.works
I mean, if they didn’t piss in the pool, they’d have a lower chance of encountering piss. Godwin’s law is more benign and incidental. This is someone maliciously handing out extra Hitlers in a game of Secret Hitler and then being shocked at the breakdown of the game.
@saltesc@lemmy.world on lemmy.world
Yeah, but they don’t have the money to introduce quality governance into this. So the brain trust of Reddit it is. Which explains why LLMs have gotten all weirdly socially combative too; as if two neckbeards having at it, Google skill vs. Google skill, were a rich source of A+++ knowledge and social behaviour.
@yes_this_time@lemmy.world on lemmy.world
If I’m creating a corpus for an LLM to consume, I feel like I would probably create some data source quality score and drop anything that makes my model worse.
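That source-scoring idea can be sketched roughly like this. The heuristic and cutoff below are invented purely for illustration; real pipelines use learned quality classifiers, perplexity filters, and deduplication rather than anything this crude.

```python
# Toy sketch of per-source quality filtering. The scoring rule here is
# made up: penalize very short documents and all-caps shouting.
def quality_score(text: str) -> float:
    """Return a score in [0, 1]; higher means 'keep'."""
    words = text.split()
    if not words:
        return 0.0
    length_score = min(len(words) / 50.0, 1.0)  # reward length, capped at 1
    caps_ratio = sum(w.isupper() for w in words) / len(words)
    return length_score * (1.0 - caps_ratio)

def filter_corpus(docs: list[str], cutoff: float = 0.2) -> list[str]:
    """Drop any document whose score falls below the cutoff."""
    return [d for d in docs if quality_score(d) >= cutoff]
```

With this sketch, an all-caps spam snippet scores near zero and is dropped, while ordinary prose clears the cutoff.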
@hoppolito@mander.xyz on mander.xyz
As far as I know that’s generally what is done, but it’s a surprisingly hard problem to solve ‘completely’, for two reasons.

The more obvious one: how do you define quality? At the volume of data LLMs need as input, and that their outputs need to be checked against, the quality checks have to be automated, so in one way or another it comes back to some system having to define and judge against that score. There are many benchmarks out there nowadays, but it’s still virtually impossible to have ‘a’ single quality score for such a complex task.

Perhaps the less obvious one: you generally don’t want to ‘overfit’ your model to whatever quality scoring system you set up. If you get too close to it, your model typically stops being generally useful; it just always outputs things that exactly satisfy the scoring principle and nothing else. If it reached a theoretical perfect score, it would simply be a replication of the quality score itself.
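The second failure mode can be shown with a toy example. Both the metric and the "model" below are invented: a generator optimized purely against the scoring rule collapses into output that games the score.

```python
# Toy illustration of overfitting to a quality metric (Goodhart's law).
def metric(text: str) -> float:
    """Assumed scoring rule: reward a few 'high-quality' keywords."""
    keywords = {"therefore", "evidence", "clearly"}
    words = text.lower().split()
    return sum(w in keywords for w in words) / max(len(words), 1)

def metric_gaming_model(n_words: int = 10) -> str:
    """A degenerate generator that emits only scoring keywords."""
    return " ".join(["evidence"] * n_words)
```

Here `metric(metric_gaming_model())` comes out to a perfect 1.0 while the text itself is useless: the generator has become a replication of the quality score.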
@WhiteOakBayou@lemmy.world on lemmy.world
Like the LLM that was finding cancers: people were initially impressed, but then they figured out it had just correlated a doctor’s name on the scan with a high likelihood of cancer. Once that confounding data point was removed, it no longer performed impressively. Point #2 is very Goodhart’s-law-adjacent.
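That shortcut-learning story can be reconstructed in miniature. The data and the "model" here are invented for illustration: the predictor keys on a leaked field (the doctor’s name) rather than anything medically meaningful, so its accuracy collapses to chance when the leak is scrubbed.

```python
# Toy example of shortcut learning via a leaked feature.
records = [
    {"doctor": "Dr. Oncology", "cancer": True},
    {"doctor": "Dr. Oncology", "cancer": True},
    {"doctor": "Dr. General", "cancer": False},
    {"doctor": "Dr. General", "cancer": False},
]

def shortcut_predict(record: dict) -> bool:
    # The learned "rule": the specialist's name, not the scan, predicts cancer.
    return record.get("doctor") == "Dr. Oncology"

def accuracy(data, labels):
    return sum(shortcut_predict(r) == y for r, y in zip(data, labels)) / len(data)

labels = [r["cancer"] for r in records]
accuracy_with_leak = accuracy(records, labels)  # perfect while the leak is present
scrubbed = [{k: v for k, v in r.items() if k != "doctor"} for r in records]
accuracy_scrubbed = accuracy(scrubbed, labels)  # falls to chance once it's removed
```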
bitjunkie in !technology
@bitjunkie@lemmy.world · Dec 16
I never knew the name for this law, but it’s basically how SEO ruined traditional search. I think it’s also a big reason that a LOT of software engineers put way too much emphasis on passing unit tests and not nearly enough on examining what they’re actually testing.
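A contrived example (invented here) of that unit-test failure mode: the test goes green because it measures a proxy (ordering) rather than the goal (a correct sort), so a real bug sails through.

```python
# A passing test that never examines what it's actually testing.
def sort_numbers(nums: list[int]) -> list[int]:
    # Buggy "sort": silently drops duplicates because it abuses set().
    return sorted(set(nums))

def test_sort_numbers():
    # Passes -- yet sort_numbers([2, 2, 1]) returns [1, 2], losing an element.
    assert sort_numbers([3, 1, 2]) == [1, 2, 3]
```

The metric (a green test suite) is satisfied while the behaviour it was meant to guarantee is not, which is the same proxy-optimization trap the thread describes.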