“How do you do that?” is the question I get most often about my AI projects…
So here’s the simple but EXTREMELY valuable process for drastically improving anything you do with LLMs. First, never zero-shot anything unless the task is extremely simple and you have extreme confidence in the model’s ability to perform it. Always have the model review/edit its initial output. This is where all the value in the process below comes from. You want to instruct the model how to think, not just rely on the innate capability of the model to produce the best answer for you. This is especially true the lower down the performance ladder you go with lesser models.

In terms of reasoning, there are tons of frameworks and processes; my current list has 105 across 21 categories. So which ones are best for which tasks is the magic question to unlocking huge performance gains in output quality from any model.
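Here’s a minimal sketch of that review/edit pass in Python. `call_llm` is a hypothetical stand-in for whatever model client you actually use (OpenAI, Anthropic, a local model, etc.), and the prompt wording is just an illustration, not a fixed recipe:

```python
# Minimal "draft, then review/edit" pattern. call_llm is a hypothetical
# stand-in for your model client; wire it to whatever API you use.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model API of choice")

def draft_then_review(task: str) -> str:
    # Pass 1: the initial attempt most people stop at.
    draft = call_llm(f"Task: {task}\n\nWrite your best answer.")

    # Pass 2: force the model to critique and revise its own output
    # instead of trusting the first completion.
    return call_llm(
        f"Task: {task}\n\n"
        f"Draft answer:\n{draft}\n\n"
        "Review this draft for errors, omissions, and weak reasoning, "
        "then return an improved final answer."
    )
```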
To that end, after extensive testing and validation, the two frameworks that work best are “Gap Analysis” and “CARS Checklist”. Gap Analysis defines the gaps between the draft text and the “ideal” end state (the answer/response to the original user input). It’s a “build” step that identifies missing or lower-quality information in the original output. The CARS Checklist is a “refine” step that validates, trims down, or replaces information from a previous output. You can use these frameworks independently as part of a standard 3-step editing workflow for content improvement, or you can combine them in a sequential loop to continually build and refine toward an optimum end state.
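Here’s one way that sequential loop could look, reusing the hypothetical `call_llm` helper from the sketch above. The prompt wording, the iteration count, and the CARS expansion in the comments (Credibility, Accuracy, Reasonableness, Support) are my assumptions for illustration:

```python
# Sequential build/refine loop combining Gap Analysis and the CARS
# Checklist. Reuses the hypothetical call_llm helper from the sketch above.

def gap_analysis(task: str, text: str) -> str:
    # "Build" step: identify gaps between this text and an ideal answer,
    # then fill them.
    return call_llm(
        f"Original request: {task}\n\nCurrent answer:\n{text}\n\n"
        "Perform a gap analysis: list every gap between this answer and the "
        "ideal response, then rewrite the answer with those gaps filled."
    )

def cars_checklist(task: str, text: str) -> str:
    # "Refine" step: validate, trim down, or replace information using the
    # CARS criteria (Credibility, Accuracy, Reasonableness, Support).
    return call_llm(
        f"Original request: {task}\n\nCurrent answer:\n{text}\n\n"
        "Apply the CARS checklist (Credibility, Accuracy, Reasonableness, "
        "Support): cut or replace anything that fails a criterion, then "
        "return the refined answer."
    )

def build_and_refine(task: str, rounds: int = 2) -> str:
    text = call_llm(f"Task: {task}\n\nWrite your best answer.")
    for _ in range(rounds):
        text = gap_analysis(task, text)    # build toward the ideal
        text = cars_checklist(task, text)  # refine what was built
    return text
```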
There are lots of ways to use this, but that’s the secret sauce to content improvement from LLMs.