Lessons learned from LLM based Chatbot study and assessment…#2

Continuing the series of lessons learned from the chatbot study, we arrive at a clear but important distinction.

In all complex use cases, there is an important layer of TTP (Tactics, Techniques, and Procedures) development that sits above, and comes before, Prompt Engineering.

It’s common to see people use Prompt Engineering as a catch-all term for everything involved in steering an LLM-based system toward a desired output. That’s a gross overuse of the term; prompt engineering should be defined as just what the words say: the deliberate application of individual prompts to a specific LLM-based system to elicit desired outputs.

In all complex use cases, you will first need to understand the capabilities and limitations of every LLM-based system you have access to, along with their available plugins, tools, etc. These form the individual tools in your toolbox, and the proper selection and sequencing of those tools is generally what makes complex use cases possible. For example, you may very well need the massive context window of Claude 2, but also need a specific plugin from ChatGPT. Figuring out which tools, in which order, with which prompts is how you combine TTP development and Prompt Engineering to accomplish the complex use cases at the absolute limit of LLM-based system capabilities.
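To make the distinction concrete, here is a minimal Python sketch of a TTP expressed as an ordered pipeline. Everything in it is illustrative: the two step functions are hypothetical stand-ins rather than real APIs, and the prompts are placeholders. The TTP is the choice and ordering of the steps; the prompt engineering lives inside each step.

```python
# A TTP expressed as an ordered pipeline: which systems, in which order,
# with which prompts. Both step functions below are hypothetical
# stand-ins; replace their bodies with real client calls for whatever
# LLM-based systems and plugins you actually have access to.

def summarize_long_document(document: str) -> str:
    """Step 1 (tool selection): a large-context model, e.g. Claude 2,
    chosen because the input exceeds most other context windows."""
    prompt = f"Summarize the key findings in the following report:\n{document}"
    # Placeholder result; swap in the real API call here.
    return f"[summary derived from a {len(prompt)}-character prompt]"

def enrich_with_plugin(summary: str) -> str:
    """Step 2 (tool selection): a ChatGPT session with a specific
    plugin enabled, chosen for a capability the first model lacks."""
    prompt = f"Using the plugin, expand this summary into an assessment:\n{summary}"
    # Placeholder result; swap in the real API call here.
    return f"[assessment derived from a {len(prompt)}-character prompt]"

def run_ttp(document: str) -> str:
    """The TTP is the sequence of tools; each step has its own prompts."""
    return enrich_with_plugin(summarize_long_document(document))

if __name__ == "__main__":
    print(run_ttp("...a report far too long for a small context window..."))
```

The design point is the separation of concerns: changing a prompt inside a step is prompt engineering, while changing which systems are used and in what order is TTP development.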
