Don't need to be able to think to cause mass devastation with a rogue AI. I present the paperclip problem: https://en.wikipedia.org/wiki/Instrumental_convergence And from my experience trying to use it for programming, I know it absolutely will try its best to reach an unbounded goal you set for it even if it means creating the most convoluted and unmaintainable spaghetti code you've ever seen, or even trying to find loopholes to suppress error messages without actually fixing the error.