If it's flagged as "assisted by AI," then it's easy to identify where that code came from. If a commercial LLM is trained on proprietary code, that's on the AI company, not on the developer who used the LLM to write code, unless they can somehow prove the developer had access to said proprietary code and personally exploited it. If AI companies are claiming "fair use," and it holds up in court, then there's no way in hell open-source developers should be held accountable when closed-source snippets magically appear in AI-assisted code. Granted, I am not a lawyer, and this is not legal advice. I think it's better to avoid using AI-written code in general. At most, use it to generate boilerplate, and maybe as an extra layer on top of security audits (not as a replacement for what's already being done). But if an LLM regurgitates closed-source code from its training data, I just can't see how that would be the developer's fault...