Tonight's fun was trying to speed up #perplexica on a local homelab install with ollama. I run it on a Mac mini M4 with 16GB, so I'm limited in the models I can use, but three minutes to tell me how to make a pizza, across multiple models, seemed slow. It was: the same question, same #ollama, same models on #AnythingLLM took 15 seconds. I've ended up keeping the #searxng install, which was quick and works well.

Tomorrow I want to work out more about AnythingLLM and RAG, plus the any cli it has. If I can do what I need using that, I might move over from warp.dev to tabby with any cli. Let's see...

Also, it seems #lens has added more agentic features to its #kubernetes app; interested to see what that might do.

I might also move from #docker to #podman and spin up a local 3-node k8s cluster on the laptop. That would be good for testing my latest vibe app.

Love a weekend... #daveknowstech #selfhosted #homelab
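For anyone curious about the 3-node cluster idea: one way to do it on a laptop is kind with its podman provider. A minimal sketch of the config (the filename `kind-config.yaml` is just my assumption, call it whatever you like):

```yaml
# kind-config.yaml — hypothetical filename for this sketch
# One control-plane node plus two workers = a local 3-node cluster
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

Then, since kind defaults to docker, you tell it to use podman via its experimental provider flag:

```shell
KIND_EXPERIMENTAL_PROVIDER=podman kind create cluster --config kind-config.yaml
kubectl get nodes   # should list all three nodes once they're Ready
```

Worth noting the podman provider is still flagged experimental by kind, so your mileage may vary compared to docker.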