Elektrine

19-84

@19_84@lemmy.dbzer0.com
lemmy 0.19.15

“Who controls the past controls the future: who controls the present controls the past.”

0 Followers
0 Following
Joined January 07, 2026

Posts

@19_84@lemmy.dbzer0.com in selfhosted · Jan 13, 2026

Self-host Reddit – 2.38B posts, works offline, yours forever

Reddit’s API is effectively dead for archival. Third-party apps are gone. Reddit has threatened to cut off access to the Pushshift dataset multiple times. But 3.28TB of Reddit history exists as a torrent right now, and I built a tool to turn it into something you can browse on your own hardware.

The key point: this doesn’t touch Reddit’s servers. Ever. Download the Pushshift dataset, run my tool locally, and you get a fully browsable archive. It works on an air-gapped machine, on a Raspberry Pi serving your LAN, or on a USB drive you hand to someone.

What it does: takes compressed data dumps from Reddit (.zst), Voat (SQL), and Ruqqus (.7z) and generates static HTML. No JavaScript, no external requests, no tracking. Open index.html and browse. Want search? Run the optional Docker stack with PostgreSQL; everything still stays on your machine.

API & AI integration: a full REST API with 30+ endpoints covering posts, comments, users, subreddits, full-text search, and aggregations. It also ships with an MCP server (29 tools) so you can query your archive directly from AI tools.

Self-hosting options:

  • USB drive or local folder (just open the HTML files)
  • Home server on your LAN
  • Tor hidden service (two commands, no port forwarding needed)
  • VPS with HTTPS
  • GitHub Pages for small archives

Why this matters: once you have the data, you own it. No API keys, rate limits, or ToS changes can take it away.

Scale: tens of millions of posts per instance. The PostgreSQL backend keeps memory use constant regardless of dataset size. For the full 2.38B-post dataset, run multiple instances split by topic.

How I built it: Python, PostgreSQL, Jinja2 templates, Docker. I used Claude Code throughout as an experiment in AI-assisted development, and learned that the workflow is “trust but verify”: it accelerates the boring parts, but you still own the architecture.
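The core pipeline described above can be sketched in a few lines. This is a simplified illustration, not the tool’s actual code: Pushshift dumps are zstd-compressed newline-delimited JSON, so this sketch assumes already-decompressed input, and `string.Template` from the standard library stands in for the Jinja2 templates the real tool uses. The function and template names are hypothetical.

```python
# Sketch: group NDJSON post records by subreddit and emit one static
# HTML page per subreddit. No JavaScript, no external requests.
import json
from collections import defaultdict
from string import Template

# Minimal stand-ins for real Jinja2 templates.
PAGE = Template("<html><body><h1>r/$sub</h1><ul>$items</ul></body></html>")
ITEM = Template("<li>$title (score $score)</li>")

def render_archive(ndjson_lines):
    """Return {subreddit: html_page} for an iterable of JSON lines."""
    by_sub = defaultdict(list)
    for line in ndjson_lines:      # one record per line
        post = json.loads(line)
        by_sub[post["subreddit"]].append(post)
    pages = {}
    for sub, posts in by_sub.items():
        items = "".join(ITEM.substitute(title=p["title"], score=p["score"])
                        for p in posts)
        pages[sub] = PAGE.substitute(sub=sub, items=items)
    return pages

# Tiny demo input in the Pushshift-style shape this sketch assumes.
demo = [
    '{"subreddit": "selfhosted", "title": "Archive tool", "score": 723}',
    '{"subreddit": "datahoarder", "title": "Pushshift torrent", "score": 114}',
]
pages = render_archive(demo)
```

Writing every page as static HTML up front is what makes the USB-drive and air-gapped deployment options work: serving the archive is just serving files.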
  • Live demo: online-archives.github.io/redd-archiver-example/
  • GitHub: github.com/19-84/redd-archiver (Public Domain)
  • Pushshift torrent: academictorrents.com/…/1614740ac8c94505e4ecb9d88b…
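The “memory stays constant regardless of dataset size” claim above comes down to a streaming-ingest pattern: read one record at a time, buffer a fixed-size batch, flush it to the database, repeat. A minimal sketch, using the standard library’s `sqlite3` as a stand-in for the tool’s PostgreSQL backend; the schema, batch size, and function name are illustrative assumptions, not the project’s actual ones.

```python
# Sketch: constant-memory ingest of NDJSON records via fixed-size batches.
import json
import sqlite3

BATCH = 1000  # flush every N records; memory is bounded by the batch size

def ingest(lines, db):
    db.execute("CREATE TABLE IF NOT EXISTS posts (id TEXT, title TEXT, score INT)")
    batch = []
    for line in lines:                 # never holds the whole dataset in RAM
        p = json.loads(line)
        batch.append((p["id"], p["title"], p["score"]))
        if len(batch) >= BATCH:
            db.executemany("INSERT INTO posts VALUES (?, ?, ?)", batch)
            batch.clear()
    if batch:                          # flush the final partial batch
        db.executemany("INSERT INTO posts VALUES (?, ?, ?)", batch)
    db.commit()

db = sqlite3.connect(":memory:")
ingest(('{"id": "%d", "title": "post %d", "score": %d}' % (i, i, i)
        for i in range(2500)), db)
count = db.execute("SELECT COUNT(*) FROM posts").fetchone()[0]
```

The same shape scales from a demo generator to a multi-terabyte dump, since peak memory depends only on the batch size, not the input length.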
