My company is working on one such product. So far AI has essentially no guardrails, so it can too easily go from potentially helpful to harmful. There’s nothing keeping it from sharing confidential data, like customer data, nothing inspecting whatever dreck it’s pulling in from the internet, and nothing keeping it from doing harm internally. We can do that.
But most of the more immediate harm comes from poor practices we’ve known about for decades. Maybe consultants can make bank pushing best practices we should have followed all along. That article that keeps coming around Lemmy about losing a production database and its backup in 9 seconds is a great example. Sure, that LLM needed guardrails to stop it, but any normal best practices would also have prevented that.
I’ve seen some baby steps:
LLMs with a “planning” mode that shouldn’t make changes until approved … except when they ignore that
IDEs with a sandbox where you need to whitelist allowed LLM invocations, except there’s so much noise that too many people just allow everything
MCP servers with a read-only mode, or credentials that can be configured as read-only
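The whitelist idea above boils down to deny-by-default: block any tool invocation not explicitly approved. Here’s a toy sketch of that gate in Python; every name in it is hypothetical, not any real product’s API, and a real implementation would also need audit logging and human-approval flows:

```python
# Toy deny-by-default gate for LLM tool invocations.
# ALLOWED_COMMANDS is a hypothetical allowlist, not a real product config.
ALLOWED_COMMANDS = {"git status", "git diff", "ls"}

def gate(command: str) -> bool:
    """Return True only if the exact command is on the allowlist.

    Anything not explicitly whitelisted is blocked -- the opposite
    of the 'allow all' habit that defeats these sandboxes.
    """
    return command in ALLOWED_COMMANDS

assert gate("git status")      # read-only command: allowed
assert not gate("rm -rf /")    # destructive command: blocked by default
```

The catch, as noted above, is prompt fatigue: if the allowlist is too narrow, users get so many approval prompts that they switch it off entirely.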