A fun experiment with GenAI and mobile devices for the DoD…

Can you use the vision capabilities of multimodal LLMs to detect uniform wear errors? Anyone who's been through some form of professional military education knows the "joys" of uniform inspection. The only thing worse than having an issue found is having one found after you, your friend, and someone else already checked your uniform. Because there's a written standard for all uniform wear, the uniforms themselves are standardized in production, and only personal mobile devices would be required, this provides an opportunity to test GPT-4o and other vision-capable LLMs with an interesting challenge.
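As a rough sketch of what that experiment could look like, here's a minimal Python example that packages a uniform photo and an excerpt of a written standard into a chat-completions-style vision prompt. The standard text and the `build_inspection_request` helper are illustrative assumptions, and the actual API call is deliberately left out — the point is the shape of the experiment, not a deployment.

```python
import base64
import json

# Hypothetical excerpt of a written uniform standard; the real text would
# come from the relevant service regulation.
UNIFORM_STANDARD = (
    "Nametape centered above right breast pocket; "
    "rank insignia centered on chest; no loose threads."
)

def build_inspection_request(image_bytes: bytes, model: str = "gpt-4o") -> dict:
    """Build a chat-completions-style request asking a vision LLM to
    flag uniform wear discrepancies against the written standard."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": (
                            "Inspect this uniform photo against the standard "
                            "below. List any discrepancies, or reply "
                            "'no issues found'.\n\n" + UNIFORM_STANDARD
                        ),
                    },
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
                    },
                ],
            }
        ],
    }

# Sending the request (e.g. via a vendor SDK) is intentionally omitted;
# placeholder bytes stand in for a real photo taken on a personal device.
request = build_inspection_request(b"\xff\xd8\xff")
print(json.dumps(request)[:80])
```

A real trial would then send this payload to the model and compare its findings against a human inspector's — which is exactly where I'd expect the reliability gaps to show up.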

If I were a motivated service officer looking to dabble in GenAI use cases that are completely harmless, use no DoD equipment, but offer both military application and utility…this would be a good start. I have a sneaking suspicion this would be a sufficiently tough nut to crack in terms of producing results that are reliable, accurate, and all-encompassing. That said, tough, harmless, military-specific problems are exactly what's needed right now for experimentation. They get troops comfortable and knowledgeable with the tech, so that as service-approved solutions roll out on military networks, their newfound skills further enhance how their jobs make use of those new tools.

Time to think out of the box and play with some “shadow IT” at your next formation.
🫡
