@jonny @catch56 @david_chisnall @chris_evelyn Again, this is starting to demolish what I had been keeping an open mind about as a worthwhile use case for LLMs: security auditing. If the inference costs are already this high, and the economics are known to be infeasible/massively unrealistic as Ed Zitron keeps reminding us, isn't this all just a compute arms race with no real benefit beyond that, given the constant hallucination, mean reversion, prompt engineering, and risk/wealth transfer?