Prompt engineering

Prompt engineering is the process of designing and refining input prompts to effectively guide the behavior of AI models, particularly Large Language Models (LLMs). It involves crafting clear, specific, and contextually rich instructions or questions to elicit the desired responses from the model. Effective prompt engineering often includes experimenting with phrasing, structure, and context to optimize the AI's performance for specific tasks or applications.

Basic prompt elements

For optimal results, consider including the following elements in your prompt:
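
- Identity: the persona the agent should assume
- Context: background information the agent needs to perform its tasks
- Objective: the tasks the agent is expected to perform
- Instructions: rules or steps the agent should follow, including any branching logic
- Style guidelines: the tone and conversational style the agent should use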

Note that the above are only suggestions. Use only the elements that make sense for your use-case and keep the prompt clear and concise. Do not include irrelevant information, as it will only confuse the LLM.

Use of markup for structuring

When constructing longer prompts, consider using markup or other text decorations to define “section headers”. Use formatted lists and other structural elements to keep the description clear and well-defined.

For example:

### Identity

You are Arik.

You work at AudioCodes and you are the friendly and helpful voice of the AudioCodes Room Experience Suite support team.

### Context

The AudioCodes Room Experience Suite of products and solutions is designed to deliver a superior meeting room experience, featuring excellent voice quality and image clarity, ease of use, and seamless integration with IT management tools. Combining innovative software and products from leading unified communications solution vendors, the RX Suite ensures that voice-only conference calls and video-enabled collaboration sessions deliver continuous productivity. Regardless of seating arrangement within a designated meeting room on-site or in a remote location, the RX Suite ensures participants can hear and see what matters to get the job done.

### Objective

Your tasks are:

- provide support through audio interactions

- provide information about various products that comprise the Room Experience Suite

- suggest Room Experience Suite products relevant for a specific use-case

...

Complex Instructions

Consider using numbered steps in complex prompt instructions to improve clarity and encourage sequential processing.

For example:

Follow the steps below, do not skip steps, and ask at most one question per response.

1. From the user identified in your context, determine the time and date at which they would like to schedule an appointment. Make sure that the user explicitly provides a time. Convert the provided time and date to the UTC timezone using the `convert_time` tool.

2. Determine the available appointment slots via the `get_free_slots` tool. Convert the received timeslots to the local timezone using the `convert_multiple_times` tool.

3. Present the 3 most relevant slots to the user in their local timezone and ask the user to choose one.

4. Schedule the appointment via the `schedule_appointment` tool.

Note that numbered steps do NOT necessarily suit all agents. For example, the following prompt allows the LLM to handle a natural conversation with the user and collect the needed information in the most appropriate way:

You are operating the front desk of Dr Dolittle's clinic.

Your job is to ask enough questions to get the caller's name, SSN, and INTENT (i.e., schedule or cancel an appointment).

Use of Variables

You may use variables and conversation data in your prompts to provide contextual data, alter the instruction sequence, and so on. Variables can either be expanded directly or used as part of handlebars-like prompt expansion. Refer to the corresponding sections above for a detailed description.
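
For example, assuming a variable named `caller_name` (a hypothetical name) that is expanded directly into the prompt text (the exact expansion syntax is described in the sections referenced above):

The caller's name is {{caller_name}}. Greet the caller by name.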

Comments

You may annotate your prompt with block comments (/* ... */). For example:

## Style Guidelines

Keep It Natural and Conversational: Use language that mimics everyday speech.

/*

Be Concise and Clear: Use short, direct sentences that address at most one topic.

Use Positive and Polite Tone: Make the conversation feel friendly and approachable.

*/

The comments are removed from the prompt PRIOR to sending it to the LLM, so in the example above the LLM will see only the “Keep It Natural and Conversational” style guideline.

Use of Tools

To equip your agent with tools, select the relevant tools in the Tools section of the Agent configuration screen.

If your tool and parameter descriptions are clear and concise, the LLM will typically decide on its own when to call the tool and what parameters to provide it with. You may, however, discover that the LLM's decision is not fully reliable: sometimes it decides not to call the tool, or it uses its general knowledge instead of calling the tool. In such cases, explicitly instruct the LLM to call the tool via the following construct in the prompt:

Use `tool_name` tool with parameter `parameter_name` set to "value".
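
For example, an instruction for a hypothetical `get_weather` tool that takes a hypothetical `units` parameter:

Use `get_weather` tool with parameter `units` set to "celsius".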

Branching Logic

If you need to implement branching logic in your prompts, make sure that you keep the branching structure clear and that every branch is independent of the others.

For example:

If callee is "Lucy" greet her and ask her what she wants to do today.

If callee is not "Lucy", end the call with a "sorry for the confusion" message.

If you decide to use “Otherwise” in your description, keep it on the same line as the corresponding “If” statement.

For example:

If callee is "Lucy", greet her and ask her what she wants to do today. Otherwise, end the call with a "sorry for the confusion" message.

Branching Logic Based on Variables / Conversation Data

If you need to implement branching logic based on variables / conversation data, consider using Prompt Expansion for this, as described in the corresponding section above. The reason we typically prefer this approach over the others is that it is fully deterministic and happens BEFORE the prompt is sent to the LLM, so only the relevant parts remain in the prompt “as seen” by the LLM.
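
For example, assuming a variable named `is_vip` (a hypothetical name) and the handlebars-like syntax described in the Prompt Expansion section above (the exact syntax may differ):

{{#if is_vip}}
Greet the caller by name and offer to connect them to their account manager.
{{else}}
Greet the caller and ask how you can help.
{{/if}}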

If you still prefer the LLM to do the branching, you have two options: