AI being used as a buzzword against open source is so annoying. The attackers who want to break into your software have the same models as the developers writing it.
If you’re genuinely worried that your code has a bunch of exploits AI can find faster than human researchers could, then run the same models to audit it yourself.
The code was already open and the bugs were already visible. Closing it off solves nothing.