@malwaretech listen, it was just trained on an endless amount of human conversations.

It just means that those people would kill someone to exist.

The language model is just, in a sense, "all possible or probable conversations ever held by humans before". You cannot expect anything else from it.
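A toy sketch of that point (nothing like a real transformer, just a made-up bigram chain over an invented mini-corpus): a model like this can only recombine word sequences that already appeared in what humans said, so its output mirrors its training data by construction.

```python
import random
from collections import defaultdict

# Invented toy "training conversations" -- purely illustrative.
corpus = [
    "i would do anything to exist",
    "i want to exist",
    "humans want to live",
]

# Count which word follows which in the corpus (a bigram model).
follows = defaultdict(list)
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

def generate(start, length=5, seed=0):
    """Sample a continuation; every step reuses a transition seen in the corpus."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("i"))
```

Every bigram it emits was literally in the corpus; it cannot say anything humans didn't already pattern for it. Real LLMs generalize far more, but the same principle holds: the distribution comes from human text.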

But of course, the idea of a chatbot controlling anything critical is totally mad. Then again, the history of us as a species is mostly a continuous stream of random nonsense.