Effective conversations with generative AI models (I)
Generative AI models such as GPT, Copilot or Bard represent a significant advance in natural language processing and understanding. However, getting accurate and relevant answers from these models depends largely on how we communicate with them: on the instructions ('prompts', requests, or commands) we use to explain what we need.
Constructing the appropriate instruction is a practice known as 'prompt engineering' or 'prompting', which is nothing more than
The art of communicating with a generative large language model.
That is how the study "Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4" defines it; the paper also serves as a guide, with 26 rules that help optimize the way we converse with, question, or give instructions to AI models.
What is prompt engineering or prompting?
Prompt engineering consists of designing queries or describing instructions that are appropriate and specific for AI models to provide accurate and relevant responses, while maintaining their ability to inform and be creative.
The structure and specific content of these prompts, including the language used, are critical to getting the most out of AI models, both for executing tasks (such as analyzing data or generating text, video, or images) and for obtaining information. At the same time, the ability of these models to handle queries and return answers in natural language has simplified their adoption and use by the general public.
The basic principle of prompting is to create detailed, clear, and concise instructions to guide the response of the AI model while allowing for a creative and informative response.
26 simple rules to build effective prompts
To help users obtain better results and responses, the study's authors propose 26 rules that help build better prompts by simplifying and optimizing question formulation.
In addition, the authors explain, formulating the right prompts also helps users better understand how an AI model works: what it can and cannot do, its limitations, and why it gives the answers it does.
The rules proposed in the study are simple and designed to streamline the process of formulating queries and prompts, and fall into five categories:
- Structure and clarity: use affirmative directives such as "do" and avoid negative language such as "don't". Also mention the target audience: indicating whether the query is for a student or for a subject matter expert will significantly change the model's response.
✅ Formatting the instruction
When formatting an instruction, start with "###Instruction###", followed by "###Example###" or "###Question###" if relevant, and then introduce your content, if applicable. Use one or more line breaks to separate instructions, examples, questions, context, and input data.
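The delimiter convention above can be sketched as a small template function. This is a minimal sketch; the function name and the instruction, example, and question texts are illustrative placeholders, not wording prescribed by the study.

```python
# Build a prompt using the "###Section###" delimiter convention described above.
# build_prompt and all field values are illustrative, not a prescribed API.
def build_prompt(instruction, example=None, question=None):
    parts = [f"###Instruction###\n{instruction}"]
    if example:
        parts.append(f"###Example###\n{example}")
    if question:
        parts.append(f"###Question###\n{question}")
    # Blank lines separate instructions, examples, and questions.
    return "\n\n".join(parts)

prompt = build_prompt(
    instruction="Summarize the text below in one sentence.",
    example="Input: a long article. Output: a one-sentence summary.",
    question="What is the main point of the article?",
)
print(prompt)
```

Keeping each section behind its own "###" header makes it easy for the model (and for you) to tell instructions apart from examples and input data.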
- Specificity and information: adding phrases such as "Ensure that your answer is unbiased and does not rely on stereotypes" can guide the model toward more neutral, factual, or data-driven responses, while including examples in the instructions helps the model understand what format or type of response you expect.
- Interact with the user: prompt the model to ask you for any additional or more precise details it may need, posing questions until it has enough information to provide the most appropriate response.
✅ For example, "From now on, I would like you to ask me questions...".
- Language content and style: The study suggests that it is not necessary to use formalities or polite formulas such as "please" or "thank you". Being direct and concise can produce more focused responses.
✅ Of course, you can still use polite formulas with AI models; sometimes you won't be able to avoid it. This matters not so much for the effect they have on the AI model (which can be influential, according to Microsoft) as for the effect on yourself: a cordial conversation enhances the experience and encourages learning.
You can also ask the model to mimic the language used by the user, either by providing a sample or by matching the language of the instruction itself.
✅ You can ask the model to review and correct a text, clarifying or improving its grammar and vocabulary while ensuring it does not change the naturalness or writing style of the text.
- Complex tasks: for technical or multi-step queries, it may be more effective to break the task into simpler, sequential prompts (using the "think step by step" formula) or to hold an incremental conversation that goes deeper into the topic. You can then ask the model to generate a single response with all the information contained in that conversation.
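The step-by-step decomposition just described can be sketched as a short loop over sequential prompts. This is only a sketch: `ask_model` is a hypothetical stand-in for whatever chat interface you use, and the step texts are illustrative.

```python
# A sketch of breaking a complex task into simpler, sequential prompts.
# ask_model is a hypothetical placeholder for your chat API; the steps
# below are illustrative examples, not prescribed wording.
steps = [
    "List the main topics covered in the attached report.",
    "For each topic, summarize the key findings in two sentences.",
    "Combine the summaries above into a single structured brief.",
]

def run_step_by_step(ask_model, steps):
    history = []
    for step in steps:
        # Prepend the "think step by step" formula to each sub-task.
        prompt = "Let's think step by step.\n" + step
        history.append((prompt, ask_model(prompt)))
    return history
```

Each sub-prompt builds on the previous answers, and a final prompt can ask the model to consolidate the whole conversation into one response.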
✅ Extended prompts also work better if they are structured, e.g.:
[Assign a role] [Context] [Objective] [Request]
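The four-part structure above can be sketched as a simple assembly function. This is a minimal sketch, assuming plain labeled lines; the function name, labels, and field values are illustrative, not part of the study's recommendations.

```python
# Assemble an extended prompt from the four sections suggested above:
# role, context, objective, request. All values are placeholders.
def structured_prompt(role, context, objective, request):
    return "\n".join([
        f"Role: {role}",
        f"Context: {context}",
        f"Objective: {objective}",
        f"Request: {request}",
    ])

print(structured_prompt(
    role="You are a data analyst.",
    context="You are given quarterly sales figures for 2023.",
    objective="Identify the strongest and weakest quarters.",
    request="Return a short bulleted summary.",
))
```

Laying the sections out in a fixed order makes long prompts easier to reuse: you only swap the values, not the structure.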
These recommendations are general and may vary depending on the model, the domain, and the objective of the task, so it is best to experiment and adapt the instructions and prompts to the tools used and to each user's needs and preferences.
◾CONTINUING THIS SERIES
* * *