Render system prompts in template as well
NivekT committed Nov 3, 2023
1 parent 43aac18 commit 6e6ef77
Showing 1 changed file with 4 additions and 1 deletion.
5 changes: 4 additions & 1 deletion prompttools/harness/chat_prompt_template_harness.py
@@ -15,7 +15,9 @@

 def _render_messages_openai_chat(message_template: list[dict], user_input: dict, environment):
     rendered_message = deepcopy(message_template)
+    sys_msg_template = environment.from_string(rendered_message[0]["content"])
     user_msg_template = environment.from_string(rendered_message[-1]["content"])
+    rendered_message[0]["content"] = sys_msg_template.render(**user_input)
     rendered_message[-1]["content"] = user_msg_template.render(**user_input)
     return rendered_message
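
The change can be illustrated with a small standalone sketch of the updated helper: both the system prompt (first message) and the user message (last message) are now rendered against the same inputs, rather than only the user message. This assumes `jinja2` is installed; the template and input names are illustrative, not from the repository:

```python
from copy import deepcopy

import jinja2


def _render_messages_openai_chat(message_template, user_input, environment):
    # Copy so the caller's template list is not mutated.
    rendered_message = deepcopy(message_template)
    # Render the system prompt (first message) and the user message
    # (last message) with the same user_input variables.
    sys_msg_template = environment.from_string(rendered_message[0]["content"])
    user_msg_template = environment.from_string(rendered_message[-1]["content"])
    rendered_message[0]["content"] = sys_msg_template.render(**user_input)
    rendered_message[-1]["content"] = user_msg_template.render(**user_input)
    return rendered_message


env = jinja2.Environment()
messages = [
    {"role": "system", "content": "You are a {{persona}} assistant."},
    {"role": "user", "content": "Summarize: {{text}}"},
]
rendered = _render_messages_openai_chat(
    messages, {"persona": "helpful", "text": "hello"}, env
)
# rendered[0]["content"] -> "You are a helpful assistant."
# rendered[-1]["content"] -> "Summarize: hello"
```

Because of the `deepcopy`, the original `messages` list still holds the unrendered templates, so the same template can be rendered against many different `user_input` dictionaries.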

@@ -29,7 +31,8 @@ class ChatPromptTemplateExperimentationHarness(ExperimentationHarness):
         experiment (Type[Experiment]): The experiment constructor that you would like to execute within the harness
             (e.g. ``prompttools.experiment.OpenAICompletionExperiment``)
         model_name (str): The name of the model.
-        message_templates (List[str]): A list of prompt ``jinja``-styled templates.
+        message_templates (List[str]): A list of prompt ``jinja``-styled templates. Each template should contain
+            two messages: the first a system prompt and the second a user message.
         user_inputs (List[Dict[str, str]]): A list of dictionaries representing user inputs.
         model_arguments (Optional[Dict[str, object]], optional): Additional arguments for the model.
             Defaults to ``None``. Note that the values are not lists.
