As board members, we all need to lean in and begin using generative AI large language models (LLMs). I have been using ChatGPT, Perplexity, and Gemini. A colleague has shared some insights on how to use LLMs most effectively. Whether it’s OpenAI, Anthropic, Gemini, or Grok, it’s all about how we ask the question.
There are tricks to optimize our workflows and the research we do with AI. A few major steps in the iterative workflow give us the best answers.
Here’s the key learning: follow these four big steps: prompt, refine, research, repeat.
We have to develop our iterative prompting skills and build into our process the expectation that we will fine-tune our prompt over multiple cycles after seeing the initial results.
While we are learning how to get better AI output, we are also learning to “communicate with our machine” with specificity and clarity.
We have to learn to write very specific and explicit prompts. The more detail we can add, whether it is the specific database we want researched, the links we want, the context, the tone, the formatting, or the target audience, the better the answer we will get. The key is to be very explicit and precise. OpenAI supplies an example: if you ask, “tell me about heart disease,” you will not get a good answer. A more detailed, explicit question is “what are the top three risk factors for heart disease in adults,” which will give us much better information.
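For the technically inclined, the “be explicit” advice can be thought of as a checklist of details to attach to a question. Here is a minimal Python sketch of that idea; the function and field names are illustrative only, not part of any AI vendor’s tools:

```python
# Illustrative sketch: assemble an explicit prompt from the kinds of details
# recommended above (context, tone, format, audience). The function and
# parameter names are hypothetical, chosen just for this example.

def build_prompt(question, context=None, tone=None, output_format=None, audience=None):
    """Combine a core question with optional specifics into one explicit prompt."""
    parts = [question]
    if context:
        parts.append(f"Context: {context}")
    if audience:
        parts.append(f"Audience: {audience}")
    if tone:
        parts.append(f"Tone: {tone}")
    if output_format:
        parts.append(f"Format: {output_format}")
    return "\n".join(parts)

# A vague question versus the explicit version from the heart-disease example:
vague = build_prompt("Tell me about heart disease")
explicit = build_prompt(
    "What are the top three risk factors for heart disease in adults?",
    audience="board members with no medical background",
    output_format="three bullet points with one statistic each",
)
```

The point of the sketch is simply that each added field makes the request less ambiguous, which is exactly what the heart-disease example illustrates.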
We are all in the early stages of using deep research as part of our workflow. We can use our LLM as a research assistant by guiding the AI to go deep into a topic. One of the most effective approaches is to take a big question and break it into very small sub-questions. We should ask for the output in the structured format we want: graphs, tables, or bullet points. We must remain objective and continuously critical of the answers AI gives us, because there will always be some inconsistencies. To avoid these inaccuracies and inconsistencies, we should request more data and direct the AI to where the data should come from.
In summary: we should treat working with AI as an iterative workflow in which we go through the four key steps: prompt, refine, research, repeat.
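The four-step loop can also be sketched in a few lines of Python, for those who want to picture it concretely. This is a toy illustration only: `ask_model` is a stand-in stub for whichever LLM you use (ChatGPT, Gemini, and so on), not a real API call.

```python
# Illustrative sketch of the prompt -> refine -> research -> repeat loop.
# `ask_model` is a stand-in for a real LLM call; it is stubbed out here
# so the example is self-contained and runnable.

def ask_model(prompt):
    # Stand-in for sending the prompt to your LLM of choice.
    return f"Draft answer to: {prompt}"

def iterate(prompt, refinements):
    """Treat each answer as a draft; refine the prompt and ask again each cycle."""
    answer = ask_model(prompt)                 # prompt: get the first draft
    for extra_detail in refinements:
        prompt = f"{prompt}\n{extra_detail}"   # refine: add more specificity
        answer = ask_model(prompt)             # research: get a new draft
    return answer                              # repeat until satisfied

final = iterate(
    "What are the top three risk factors for heart disease in adults?",
    ["Cite your sources.", "Format the answer as three bullet points."],
)
```

Each pass through the loop mirrors the advice above: assume the first answer is a draft, add detail, and ask again.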
Other Learnings:
Spend extra time on creating your prompt. What are the specifics and exact instructions you want it to follow? What is the context?
Assume that the first answer is only going to be a draft and that you will have to do follow-ups for better results. It’s essentially as if you are “co-writing” the content together with the AI.
Give guidance in your request describing the structure you want. For example, do you want a framework like SWOT? Do you want a pros-and-cons analysis? Do you want your steps numbered? Do you want graphs?
Always be critical: double-check facts and figures, and verify them. Ask the AI for clarification on its sources, and verify important facts independently so that you are not acting on bad answers.
Keep adapting and keep learning. For example, maybe you want to ask for very crisp bullet points rather than long, verbose paragraphs.
Think about visuals. Do you want to ask for graphs or icons? Do you want to highlight or bold specific statistics, or use larger fonts? Do you want three key bullet points and then the two important takeaway recommendations in an executive summary? The more direction you give, the more satisfied you’ll be.
As we move up the learning curve in using AI, I hope some of these pointers are helpful. My colleague Bill Lenehan generously shared many of his learnings from using GPT, and this tutorial draws on those insights. Hopefully it’s helpful to all of us on our LLM journey!