PROTIP for DoD GenAI system approval: Isolate your prompts to buttons instead of using free text fields…


There was much gnashing of teeth recently at an early look into the upcoming DAF GenAI system approval process, but instead of complaining, there are some system-design tricks you can use to improve your odds of approval. First, one of the biggest issues with any GenAI system is the ethics considerations. Drawing the line on what is and isn't allowed into or out of the model is very tricky. Ultimately someone must draw that line, but therein lies the issue: how does the system owner enforce the limitation?

If you have a free text field as part of a chatbot interface, only some very extreme levels of realtime auditing and direct intervention can detect and stop a user from typing anything they want into the model. There's a simple system-design fix to this problem: use buttons tied to fixed prompts for approved use cases. Do your prompt engineering for each use case in a separate RDT&E system, and have your production/operations system apply those optimized prompts, against ethics-approved use cases only, via single-button activation in the user interface. This is a very simple solution to an otherwise very complicated, or intractable, security problem.
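The button-driven pattern can be sketched in a few lines. This is a minimal illustration, not any official reference design; all names here (`APPROVED_PROMPTS`, `run_use_case`, `query_model`) are hypothetical, and the prompts are placeholder examples.

```python
# Illustrative sketch: the UI exposes only buttons, each mapped to a
# pre-engineered, ethics-approved prompt. The model can never receive
# arbitrary user text as a prompt.

APPROVED_PROMPTS = {
    "summarize_doc": "Summarize the attached document in five bullet points.",
    "extract_dates": "List every date mentioned in the attached document.",
}

def query_model(prompt: str, attachment: str) -> str:
    # Stand-in for the real model call in the production system.
    return f"[model response to: {prompt[:40]}]"

def run_use_case(button_id: str, attachment: str = "") -> str:
    # Only keys present in APPROVED_PROMPTS are reachable from the UI;
    # anything else is rejected outright.
    if button_id not in APPROVED_PROMPTS:
        raise ValueError(f"Use case {button_id!r} is not approved")
    return query_model(APPROVED_PROMPTS[button_id], attachment)
```

The key property is that the prompt set is closed at deployment time: auditing reduces to reviewing a short, static list rather than monitoring free text in real time.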

I'm not saying you have to do this, but I am saying that taking advantage of guidance is a better path to approval in a security-conscious environment. For bonus points, fixed prompts also bound the number of possible input tokens (varying only with any supplemental attachments), allowing for much more accurate operating-cost calculations.
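To show how a bounded token count tightens cost estimates, here is a rough worst-case calculation. All figures (token counts, attachment limit, per-token rates) are made-up example numbers, not actual pricing.

```python
# With a fixed prompt, input size is known exactly except for the
# attachment, which can be capped by an upload limit, so cost per call
# has a hard upper bound. All constants below are illustrative.

FIXED_PROMPT_TOKENS = 42         # known exactly; the prompt never changes
MAX_ATTACHMENT_TOKENS = 4_000    # enforced upload limit (assumed)
MAX_OUTPUT_TOKENS = 1_000        # model max-output setting (assumed)

RATE_IN = 0.0005   # example $ per 1K input tokens
RATE_OUT = 0.0015  # example $ per 1K output tokens

def max_cost_per_call() -> float:
    """Worst-case dollar cost of a single button press."""
    input_tokens = FIXED_PROMPT_TOKENS + MAX_ATTACHMENT_TOKENS
    return (input_tokens / 1000) * RATE_IN + (MAX_OUTPUT_TOKENS / 1000) * RATE_OUT

def max_monthly_cost(calls_per_day: int, days: int = 30) -> float:
    """Worst-case monthly spend for a given usage rate."""
    return max_cost_per_call() * calls_per_day * days
```

With a free-text field, none of these bounds hold, so the same estimate would need speculative assumptions about user behavior.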
