Elektrine
hoppolito

@hoppolito@mander.xyz
lemmy 0.19.17
0 Followers
0 Following
Joined November 23, 2024

Posts

In reply to @hoppolito@mander.xyz
@hoppolito@mander.xyz · Apr 09, 2026
I may be misunderstanding your argument, but just to make sure, I want to point out that

desperate people will do desperate things to survive

does not run counter to

if you can’t afford to live, then you certainly can’t afford to move to another country

Boosted by Technology @technology@lemmy.world
In reply to @hoppolito@mander.xyz
@hoppolito@mander.xyz in technology · Dec 15, 2025
As far as I know that's generally what is often done, but it's a surprisingly hard problem to solve 'completely', for two reasons.

The more obvious one: how do you define quality? When you're working with the amount of data LLMs require as input, and that needs to be checked on output, you're going to have to automate these quality checks, and in one way or another it comes back around to some system having to define and judge against this score. There are many different benchmarks out there nowadays, but it's still virtually impossible to have just 'a' quality score for such a complex task.

Perhaps the less obvious one: you generally don't want to 'overfit' your model to whatever quality scoring system you set up. If you get too close to it, your model typically won't be generally useful anymore; it will just always output things which exactly satisfy the scoring principle, nothing else. If it reached a theoretical perfect score, it would just end up being a replication of the quality score itself.
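The overfitting concern can be shown with a deliberately silly toy: a hypothetical automated "quality score" (not any real benchmark) that counts favored keywords, and an answer that games it. Everything here is made up for illustration.

```python
# Toy illustration of overfitting to a quality metric (Goodhart's law).
# The scorer and both answers are hypothetical, invented for this sketch.

def quality_score(text: str) -> int:
    """Naive automated 'quality' metric: count occurrences of favored keywords."""
    keywords = ("rigorous", "comprehensive", "insightful")
    return sum(text.lower().count(k) for k in keywords)

honest_answer = "The experiment failed; the sample size was too small."
gamed_answer = "Rigorous rigorous comprehensive insightful insightful insightful."

# The gamed answer maximizes the metric while carrying no information:
assert quality_score(gamed_answer) > quality_score(honest_answer)
```

A model trained to maximize `quality_score` converges on text like `gamed_answer`: a perfect score that is just a replication of the scoring rule itself.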
Boosted by Technology @technology@lemmy.world
In reply to @hoppolito@mander.xyz
@hoppolito@mander.xyz in technology · Dec 05, 2025
I think you really nailed the crux of the matter. With the 'autocomplete-like' nature of current LLMs, the issue is precisely that you can never be sure of any answer's validity. Some approaches try by giving 'sources' next to the answer, but that doesn't mean those sources' findings actually match the text output, and it's not a given that the sources themselves are reputable, so you're back to perusing them yourself anyway.

If there were a meter of certainty next to the answers, this would be much more meaningful for serious use-cases, but of course by design such a thing seems impossible to implement with the current approaches.

I will say that in my personal (hobby) projects I have found a few good use cases for letting the models spit out some guesses, e.g. for the causes of a programming bug, or for proposing directions to research, but I am just not sold that the heaviness of all the costs (cognitive, social, and of course environmental) is worth it for that alone.
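The closest thing to a 'certainty meter' that some model APIs allow today is aggregating per-token probabilities. A minimal sketch, assuming you have a list of per-token log-probabilities (the numbers below are invented); note that this measures fluency, not factual correctness, which is exactly the limitation described above:

```python
import math

# Hypothetical per-token log-probabilities for one generated answer.
# Real APIs that expose these differ; treat the values as illustrative.
token_logprobs = [-0.05, -0.10, -2.30, -0.02, -1.60]

def mean_confidence(logprobs: list[float]) -> float:
    """Geometric-mean token probability: a crude, imperfect 'certainty' signal."""
    return math.exp(sum(logprobs) / len(logprobs))

conf = mean_confidence(token_logprobs)
# Low values can flag answers worth double-checking against sources,
# but a high value only means the text was 'likely', not that it is true.
```

A confidently wrong answer scores high on this meter, which is why it cannot substitute for verifying the sources.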
Boosted by Lemmy Shitpost @lemmyshitpost@lemmy.world
In reply to @hoppolito@mander.xyz
@hoppolito@mander.xyz in lemmyshitpost · Dec 04, 2025
Holyy, thanks for this. I can finally put a name to it. My partner and I have wondered for ages what it is that sometimes suddenly befalls us, especially if we're lying in a weird position.