The position looks less extreme once you see how readily these LLMs will plagiarize 1:1, apparently:

https://github.com/mastodon/mastodon/issues/38072#issuecomment-4105681567

Some define "AI slop" narrowly, as output whose problems are immediately apparent.

Many others see "AI slop" as carrying problems well beyond the immediate ones. From that view, it becomes difficult to see LLM output as anything but slop.