repo_id | file_path | content | __index_level_0__
---|---|---|---|
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/describe-image/flip_image.py | import io
from promptflow import tool
from promptflow.contracts.multimedia import Image
from PIL import Image as PIL_Image


@tool
def passthrough(input_image: Image) -> Image:
    image_stream = io.BytesIO(input_image)
    pil_image = PIL_Image.open(image_stream)
    flipped_image = pil_image.transpose(PIL_Image.FLIP_LEFT_RIGHT)
    buffer = io.BytesIO()
    flipped_image.save(buffer, format="PNG")
    return Image(buffer.getvalue(), mime_type="image/png")
| 0 |
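The tool above flips the incoming image horizontally and re-encodes it as PNG. Below is an illustrative sketch that exercises only the PIL/BytesIO portion of that round-trip, without promptflow, using a tiny synthetic image so the flip is easy to verify; it assumes (as `flip_image.py` suggests) that the `promptflow.contracts.multimedia.Image` wrapper simply carries raw PNG bytes.

```python
# Illustrative sketch (no promptflow): verify the flip/encode round-trip used by flip_image.py.
import io
from PIL import Image as PIL_Image

# Pillow >= 10 moved the transpose constants under Image.Transpose.
FLIP = getattr(PIL_Image, "FLIP_LEFT_RIGHT", None)
if FLIP is None:
    FLIP = PIL_Image.Transpose.FLIP_LEFT_RIGHT

# A 2x1 image: red pixel on the left, blue pixel on the right.
original = PIL_Image.new("RGB", (2, 1))
original.putpixel((0, 0), (255, 0, 0))
original.putpixel((1, 0), (0, 0, 255))

flipped = original.transpose(FLIP)

# Re-encode to PNG bytes, the same payload flip_image.py wraps in a promptflow Image.
buffer = io.BytesIO()
flipped.save(buffer, format="PNG")
png_bytes = buffer.getvalue()

# Decode again and confirm the two pixels swapped sides.
decoded = PIL_Image.open(io.BytesIO(png_bytes))
assert decoded.getpixel((0, 0)) == (0, 0, 255)
assert decoded.getpixel((1, 0)) == (255, 0, 0)
print("flip round-trip OK, PNG size:", len(png_bytes), "bytes")
```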
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/describe-image/question_on_image.jinja2 | # system:
As an AI assistant, your task involves interpreting images and responding to questions about the image.
Remember to provide accurate answers based on the information present in the image.
# user:
{{question}}
![image]({{test_image}})
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/describe-image/requirements.txt | promptflow
promptflow-tools | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/describe-image/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
  question:
    type: string
    default: Please describe this image.
  input_image:
    type: image
    default: https://developer.microsoft.com/_devcom/images/logo-ms-social.png
outputs:
  answer:
    type: string
    reference: ${question_on_image.output}
  output_image:
    type: string
    reference: ${flip_image.output}
nodes:
- name: flip_image
  type: python
  source:
    type: code
    path: flip_image.py
  inputs:
    input_image: ${inputs.input_image}
- name: question_on_image
  type: custom_llm
  source:
    type: package_with_prompt
    tool: promptflow.tools.aoai_gpt4v.AzureOpenAI.chat
    path: question_on_image.jinja2
  inputs:
    connection: aoai_gpt4v_connection
    deployment_name: gpt-4v
    max_tokens: 512
    question: ${inputs.question}
    test_image: ${flip_image.output}
| 0 |
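To see roughly what the `question_on_image` node sends to the model, the sketch below renders `question_on_image.jinja2` with the `jinja2` library. This is only an illustration of the templating step: in the real flow, promptflow performs the rendering and `test_image` is the flipped image produced by the `flip_image` node, not the placeholder data URL used here.

```python
# Illustrative render of question_on_image.jinja2 with plain jinja2 (not promptflow).
from jinja2 import Template

TEMPLATE = """\
# system:
As an AI assistant, your task involves interpreting images and responding to questions about the image.
Remember to provide accurate answers based on the information present in the image.
# user:
{{question}}
![image]({{test_image}})
"""

rendered = Template(TEMPLATE).render(
    question="Please describe this image.",
    # Placeholder: in the flow this is ${flip_image.output}, i.e. the flipped PNG.
    test_image="data:image/png;base64,<flipped-image-bytes>",
)
print(rendered)
```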
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic-with-connection/data.jsonl | {"text": "Python Hello World!"}
{"text": "C Hello World!"}
{"text": "C# Hello World!"}
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic-with-connection/custom.yml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/CustomConnection.schema.json
name: basic_custom_connection
type: custom
configs:
  api_type: azure
  api_version: 2023-03-15-preview
  api_base: https://<to-be-replaced>.openai.azure.com/
secrets: # must-have
  api_key: <to-be-replaced>
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic-with-connection/README.md | # Basic flow with custom connection
A basic standard flow that uses a custom Python tool to call Azure OpenAI, with connection info stored in a custom connection.
Tools used in this flow:
- `prompt` tool
- custom `python` tool
Connections used in this flow:
- custom connection
## Prerequisites
Install promptflow sdk and other dependencies:
```bash
pip install -r requirements.txt
```
## Setup connection
Prepare your Azure OpenAI resource by following this [instruction](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal) and get your `api_key` if you don't have one.
Create the connection if you haven't done so already.
```bash
# Override keys with --set to avoid yaml file changes
pf connection create -f custom.yml --set secrets.api_key=<your_api_key> configs.api_base=<your_api_base>
```
Ensure you have created the `basic_custom_connection` connection.
```bash
pf connection show -n basic_custom_connection
```
## Run flow
### Run with single line input
```bash
# test with default input value in flow.dag.yaml
pf flow test --flow .
# test with flow inputs
pf flow test --flow . --inputs text="Hello World!"
# test node with inputs
pf flow test --flow . --node llm --inputs prompt="Write a simple Hello World! program that displays the greeting message when executed."
```
### Run with multi-line data
- create run
```bash
pf run create --flow . --data ./data.jsonl --column-mapping text='${data.text}' --stream
```
You can also skip providing `column-mapping` if the provided data has the same column names as the flow inputs.
Refer to [this doc](https://aka.ms/pf/column-mapping) for the default behavior when `column-mapping` is not provided in the CLI.
- list and show run metadata
```bash
# list created run
pf run list -r 3
# get a sample run name
name=$(pf run list -r 10 | jq '.[] | select(.name | contains("basic_with_connection")) | .name'| head -n 1 | tr -d '"')
# show specific run detail
pf run show --name $name
# show output
pf run show-details --name $name
# visualize run in browser
pf run visualize --name $name
```
### Run with connection override
Ensure you have created the `open_ai_connection` connection beforehand.
```bash
pf connection show -n open_ai_connection
```
Create the connection if you haven't done so already.
```bash
# Override keys with --set to avoid yaml file changes
pf connection create --file ../../../connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base>
```
Run flow with newly created connection.
```bash
pf run create --flow . --data ./data.jsonl --connections llm.connection=open_ai_connection --column-mapping text='${data.text}' --stream
```
### Run in cloud with connection override
Ensure you have created the `open_ai_connection` connection in the cloud. Refer to [this notebook](../../../tutorials/get-started/quickstart-azure.ipynb) for how to create connections in the cloud with the UI.
Run flow with connection `open_ai_connection`.
```bash
# set default workspace
az account set -s <your_subscription_id>
az configure --defaults group=<your_resource_group_name> workspace=<your_workspace_name>
pfazure run create --flow . --data ./data.jsonl --connections llm.connection=open_ai_connection --column-mapping text='${data.text}' --stream
```
| 0 |
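The CLI commands above also have Python SDK equivalents. The sketch below is a hedged illustration: it assumes the promptflow package exposes `PFClient` with `test` and `run` methods mirroring `pf flow test` and `pf run create` (parameter names follow that assumption), and it requires the `basic_custom_connection` described in the setup section to exist.

```python
# Hedged SDK sketch mirroring the CLI commands above; assumes promptflow's PFClient
# exposes test()/run() with these parameters and that basic_custom_connection exists.
from promptflow import PFClient

pf = PFClient()

# Equivalent of: pf flow test --flow . --inputs text="Hello World!"
result = pf.test(flow=".", inputs={"text": "Hello World!"})
print(result)

# Equivalent of: pf run create --flow . --data ./data.jsonl --column-mapping text='${data.text}'
run = pf.run(flow=".", data="./data.jsonl", column_mapping={"text": "${data.text}"})
print(run.name)
```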
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic-with-connection/hello.py | from typing import Union

from openai.version import VERSION as OPENAI_VERSION

from promptflow import tool
from promptflow.connections import CustomConnection, AzureOpenAIConnection


# The inputs section will change based on the arguments of the tool function, after you save the code
# Adding types to arguments and the return value will help the system show the types properly
# Please update the function name/signature per need
def to_bool(value) -> bool:
    return str(value).lower() == "true"


def get_client(connection: Union[CustomConnection, AzureOpenAIConnection]):
    if OPENAI_VERSION.startswith("0."):
        raise Exception(
            "Please upgrade your OpenAI package to version >= 1.0.0 using the command: pip install --upgrade openai."
        )
    # the connection can be extracted as a dict object containing the configs and secrets
    connection_dict = dict(connection)
    api_key = connection_dict.get("api_key")
    conn = dict(
        api_key=api_key,
    )
    if api_key.startswith("sk-"):
        from openai import OpenAI as Client
    else:
        from openai import AzureOpenAI as Client
        conn.update(
            azure_endpoint=connection_dict.get("api_base"),
            api_version=connection_dict.get("api_version", "2023-07-01-preview"),
        )
    return Client(**conn)


@tool
def my_python_tool(
    prompt: str,
    # for AOAI, deployment name is customized by the user, not the model name.
    deployment_name: str,
    suffix: str = None,
    max_tokens: int = 120,
    temperature: float = 1.0,
    top_p: float = 1.0,
    n: int = 1,
    logprobs: int = None,
    echo: bool = False,
    stop: list = None,
    presence_penalty: float = 0,
    frequency_penalty: float = 0,
    best_of: int = 1,
    logit_bias: dict = {},
    user: str = "",
    connection: Union[CustomConnection, AzureOpenAIConnection] = None,
    **kwargs,
) -> str:
    # TODO: remove below type conversion after client can pass json rather than string.
    echo = to_bool(echo)
    response = get_client(connection).completions.create(
        prompt=prompt,
        model=deployment_name,
        # empty string suffix should be treated as None.
        suffix=suffix if suffix else None,
        max_tokens=int(max_tokens),
        temperature=float(temperature),
        top_p=float(top_p),
        n=int(n),
        logprobs=int(logprobs) if logprobs else None,
        echo=echo,
        # fix bug "[] is not valid under any of the given schemas-'stop'"
        stop=stop if stop else None,
        presence_penalty=float(presence_penalty),
        frequency_penalty=float(frequency_penalty),
        best_of=int(best_of),
        # Logit bias must be a dict if we pass it to the openai api.
        logit_bias=logit_bias if logit_bias else {},
        user=user,
    )
    # get the first element because the prompt is single.
    return response.choices[0].text
| 0 |
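The key branch in `get_client` is the `api_key` prefix check: keys issued by openai.com start with `sk-`, while Azure OpenAI keys do not, so the same tool can serve either backend. The sketch below isolates that selection logic with a plain dict standing in for the connection; the placeholder key and endpoint mirror `custom.yml` and must be replaced before any real call is made.

```python
# Sketch of the client selection performed by get_client in hello.py, driven by a plain
# dict instead of a promptflow connection. Placeholder values; no API call is made here.
def build_client(connection_dict: dict):
    api_key = connection_dict["api_key"]
    if api_key.startswith("sk-"):
        # openai.com keys start with "sk-": use the plain OpenAI client.
        from openai import OpenAI
        return OpenAI(api_key=api_key)
    # Otherwise treat it as an Azure OpenAI key and add endpoint/version.
    from openai import AzureOpenAI
    return AzureOpenAI(
        api_key=api_key,
        azure_endpoint=connection_dict["api_base"],
        api_version=connection_dict.get("api_version", "2023-07-01-preview"),
    )


client = build_client({
    "api_key": "<to-be-replaced>",  # from custom.yml secrets
    "api_base": "https://<to-be-replaced>.openai.azure.com/",  # from custom.yml configs
    "api_version": "2023-03-15-preview",
})
print(type(client).__name__)  # AzureOpenAI for the placeholder values above
```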
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic-with-connection/requirements.txt | promptflow[azure]
promptflow-tools
python-dotenv | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic-with-connection/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
  text:
    type: string
    default: Hello World!
outputs:
  output:
    type: string
    reference: ${llm.output}
nodes:
- name: hello_prompt
  type: prompt
  source:
    type: code
    path: hello.jinja2
  inputs:
    text: ${inputs.text}
- name: llm
  type: python
  source:
    type: code
    path: hello.py
  inputs:
    connection: basic_custom_connection
    deployment_name: text-davinci-003
    max_tokens: "120"
    prompt: ${hello_prompt.output}
environment:
  python_requirements_txt: requirements.txt
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic-with-connection/hello.jinja2 | {# Please replace the template with your own prompt. #}
Write a simple {{text}} program that displays the greeting message when executed. | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/customer-intent-extraction/inputs.json | {
"customer_info": "## Customer_Info\\n\\nFirst Name: Sarah \\nLast Name: Lee \\nAge: 38 \\nEmail Address: [email protected] \\nPhone Number: 555-867-5309 \\nShipping Address: 321 Maple St, Bigtown USA, 90123 \\nMembership: Platinum \\n\\n## Recent_Purchases\\n\\norder_number: 2 \\ndate: 2023-02-10 \\nitem:\\n- description: TrailMaster X4 Tent, quantity 1, price $250 \\n\\u00a0 item_number: 1 \\n\\norder_number: 26 \\ndate: 2023-02-05 \\nitem:\\n- description: CozyNights Sleeping Bag, quantity 1, price $100 \\n\\u00a0 item_number: 7 \\n\\norder_number: 35 \\ndate: 2023-02-20 \\nitem:\\n- description: TrailBlaze Hiking Pants, quantity 1, price $75 \\n\\u00a0 item_number: 10 \\n\\norder_number: 42 \\ndate: 2023-04-06 \\nitem:\\n- description: TrekMaster Camping Chair, quantity 2, price $100 \\n\\u00a0 item_number: 12 \\n\\norder_number: 51 \\ndate: 2023-04-21 \\nitem:\\n- description: SkyView 2-Person Tent, quantity 1, price $200 \\n\\u00a0 item_number: 15 \\n\\norder_number: 56 \\ndate: 2023-03-26 \\nitem:\\n- description: RainGuard Hiking Jacket, quantity 1, price $110 \\n\\u00a0 item_number: 17 \\n\\norder_number: 65 \\ndate: 2023-04-11 \\nitem:\\n- description: CompactCook Camping Stove, quantity 1, price $60 \\n\\u00a0 item_number: 20 \\n\\n",
"history": "[ { \"role\": \"customer\", \"content\": \"I recently bought the TrailMaster X4 Tent, and it leaked during a light rain. This is unacceptable! I expected a reliable and waterproof tent, but it failed to deliver. I'm extremely disappointed in the quality.\" } ]"
} | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/customer-intent-extraction/user_intent_few_shot.jinja2 | You are given a list of orders with item_numbers from a customer and a statement from the customer. It is your job to identify
the intent that the customer has with their statement. Possible intents can be:
"product return", "product exchange", "general question", "product question", "other".
If the intent is product related ("product return", "product exchange", "product question"), then you should also
provide the order id and item that the customer is referring to in their statement.
For instance, if you are given the following list of orders:
order_number: 2020230
date: 2023-04-23
store_location: SeattleStore
items:
- description: Roof Rack, color black, price $199.99
  item_number: 101010
- description: Running Shoes, size 10, color blue, price $99.99
  item_number: 202020
You are given the following customer statements:
- I am having issues with the jogging shoes I bought.
Then you should answer in valid YAML format with the fields intent, order_number, description, and item_number, like so:
intent: product question
order_number: 2020230
description: Running Shoes, size 10, color blue, price $99.99
item_number: 202020
Here is the actual problem you need to solve:
In triple backticks below is the customer information and a list of orders.
```
{{customer_info}}
```
In triple backticks below is the chat history with customer statements and replies from the customer service agent:
```
{{chat_history}}
```
What is the customer's `intent:` here?
"product return", "exchange product", "general question", "product question" or "other"?
Reply with only the intent string. | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/customer-intent-extraction/README.md | # Customer Intent Extraction
This sample uses an OpenAI chat model (ChatGPT/GPT-4) to identify customer intent from a customer's question.
By going through this sample you will learn how to create a flow from existing working code (written in LangChain in this case).
This is the [existing code](./intent.py).
## Prerequisites
Install promptflow sdk and other dependencies:
```bash
pip install -r requirements.txt
```
Ensure you have put your Azure OpenAI endpoint and API key in the .env file.
```bash
cat .env
```
## Run flow
1. init flow directory - create a promptflow folder from the existing Python file
```bash
pf flow init --flow . --entry intent.py --function extract_intent --prompt-template chat_prompt=user_intent_zero_shot.jinja2
```
The generated files:
- extract_intent_tool.py: Wraps the function `extract_intent` from the `intent.py` script into a [Python Tool](https://promptflow.azurewebsites.net/tools-reference/python-tool.html).
- flow.dag.yaml: Describes the DAG (Directed Acyclic Graph) of this flow.
- .gitignore: Files/folders in the flow to be ignored.
2. create needed custom connection
```bash
pf connection create -f .env --name custom_connection
```
3. test flow with single line input
```bash
pf flow test --flow . --input ./data/denormalized-flat.jsonl
```
4. run with multi-line input
```bash
pf run create --flow . --data ./data --column-mapping history='${data.history}' customer_info='${data.customer_info}'
```
You can also skip providing `column-mapping` if the provided data has the same column names as the flow inputs.
Refer to [this doc](https://aka.ms/pf/column-mapping) for the default behavior when `column-mapping` is not provided in the CLI.
5. list/show
```bash
# list created run
pf run list
# get a sample completed run name
name=$(pf run list | jq '.[] | select(.name | contains("customer_intent_extraction")) | .name'| head -n 1 | tr -d '"')
# show run
pf run show --name $name
# show specific run detail, top 3 lines
pf run show-details --name $name -r 3
```
6. evaluation
```bash
# create evaluation run
pf run create --flow ../../evaluation/eval-classification-accuracy --data ./data --column-mapping groundtruth='${data.intent}' prediction='${run.outputs.output}' --run $name
```
```bash
# get the evaluation run in previous step
eval_run_name=$(pf run list | jq '.[] | select(.name | contains("eval_classification_accuracy")) | .name'| head -n 1 | tr -d '"')
# show run
pf run show --name $eval_run_name
# show run output
pf run show-details --name $eval_run_name -r 3
```
7. visualize
```bash
# visualize in browser
pf run visualize --name $eval_run_name # your evaluation run name
```
## Deploy
### Serve as a local test app
```bash
pf flow serve --source . --port 5123 --host localhost
```
Visit http://localhost:5123 to access the test app.
### Export
#### Export as docker
```bash
# pf flow export --source . --format docker --output ./package
``` | 0 |
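Once the flow is served locally, it can also be called programmatically. The sketch below is hedged: it assumes the local test app started by `pf flow serve` exposes a JSON scoring endpoint at `/score` on the chosen port and accepts the two flow inputs as the request body; it posts that payload with the `requests` library.

```python
# Hedged sketch: call the locally served flow (pf flow serve ... --port 5123) over HTTP.
# Assumes the app exposes POST /score accepting the flow inputs as JSON.
import json

import requests

payload = {
    "customer_info": "## Customer_Info\n\nFirst Name: Sarah ...",  # trimmed placeholder
    "history": json.dumps(
        [{"role": "customer", "content": "My TrailMaster X4 Tent leaked during a light rain."}]
    ),
}

resp = requests.post("http://localhost:5123/score", json=payload)
resp.raise_for_status()
print(resp.json())  # expected: {"output": "<intent string>"}
```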
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/customer-intent-extraction/user_intent_zero_shot.jinja2 | You are given a list of orders with item_numbers from a customer and a statement from the customer. It is your job to identify the intent that the customer has with their statement. Possible intents can be: "product return", "product exchange", "general question", "product question", "other".
In triple backticks below is the customer information and a list of orders.
```
{{customer_info}}
```
In triple backticks below is the chat history with customer statements and replies from the customer service agent:
```
{{history}}
```
What is the customer's `intent:` here?
"product return", "exchange product", "general question", "product question" or "other"?
Reply with only the intent string. | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/customer-intent-extraction/.env.example | CHAT_DEPLOYMENT_NAME=gpt-35-turbo
AZURE_OPENAI_API_KEY=<your_AOAI_key>
AZURE_OPENAI_API_BASE=<your_AOAI_endpoint>
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/customer-intent-extraction/intent.py | import os

import pip
from langchain.chat_models import AzureChatOpenAI
from langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.prompts.prompt import PromptTemplate
from langchain.schema import HumanMessage


def extract_intent(chat_prompt: str):
    if "AZURE_OPENAI_API_KEY" not in os.environ:
        # load environment variables from .env file
        try:
            from dotenv import load_dotenv
        except ImportError:
            # This can be removed if the user is using a custom image.
            pip.main(["install", "python-dotenv"])
            from dotenv import load_dotenv

        load_dotenv()

    chat = AzureChatOpenAI(
        deployment_name=os.environ["CHAT_DEPLOYMENT_NAME"],
        openai_api_key=os.environ["AZURE_OPENAI_API_KEY"],
        openai_api_base=os.environ["AZURE_OPENAI_API_BASE"],
        openai_api_type="azure",
        openai_api_version="2023-07-01-preview",
        temperature=0,
    )
    reply_message = chat([HumanMessage(content=chat_prompt)])
    return reply_message.content


def generate_prompt(customer_info: str, history: list, user_prompt_template: str):
    chat_history_text = "\n".join(
        [message["role"] + ": " + message["content"] for message in history]
    )
    prompt_template = PromptTemplate.from_template(user_prompt_template)
    chat_prompt_template = ChatPromptTemplate.from_messages(
        [
            HumanMessagePromptTemplate(prompt=prompt_template)
        ]
    )
    return chat_prompt_template.format_prompt(customer_info=customer_info, chat_history=chat_history_text).to_string()


if __name__ == "__main__":
    import json

    with open("./data/denormalized-flat.jsonl", "r") as f:
        data = [json.loads(line) for line in f.readlines()]

    # only ten samples
    data = data[:10]

    # load the template from file
    with open("user_intent_zero_shot.jinja2", "r") as f:
        user_prompt_template = f.read()

    # test each sample
    for item in data:
        chat_prompt = generate_prompt(item["customer_info"], item["history"], user_prompt_template)
        reply = extract_intent(chat_prompt)
        print("=====================================")
        # print("Customer info: ", item["customer_info"])
        # print("+++++++++++++++++++++++++++++++++++++")
        print("Chat history: ", item["history"])
        print("+++++++++++++++++++++++++++++++++++++")
        print(reply)
        print("+++++++++++++++++++++++++++++++++++++")
        print(f"Ground Truth: {item['intent']}")
        print("=====================================")
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/customer-intent-extraction/requirements.txt | promptflow
promptflow-tools
python-dotenv
langchain
jinja2 | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/customer-intent-extraction/.amlignore | *.ipynb
.venv/
.data/
.env
.vscode/
outputs/
connection.json | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/customer-intent-extraction/extract_intent_tool.py | import os

from promptflow import tool
from promptflow.connections import CustomConnection

from intent import extract_intent


@tool
def extract_intent_tool(chat_prompt, connection: CustomConnection) -> str:
    # set environment variables
    for key, value in dict(connection).items():
        os.environ[key] = value

    # call the entry function
    return extract_intent(
        chat_prompt=chat_prompt,
    )
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/customer-intent-extraction/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
  history:
    type: string
  customer_info:
    type: string
outputs:
  output:
    type: string
    reference: ${extract_intent.output}
nodes:
- name: chat_prompt
  type: prompt
  source:
    type: code
    path: user_intent_zero_shot.jinja2
  inputs: # Please check the generated prompt inputs
    history: ${inputs.history}
    customer_info: ${inputs.customer_info}
- name: extract_intent
  type: python
  source:
    type: code
    path: extract_intent_tool.py
  inputs:
    chat_prompt: ${chat_prompt.output}
    connection: custom_connection
environment:
  python_requirements_txt: requirements.txt
| 0 |
promptflow_repo/promptflow/examples/flows/standard/customer-intent-extraction | promptflow_repo/promptflow/examples/flows/standard/customer-intent-extraction/.promptflow/flow.tools.json | {
  "package": {},
  "code": {
    "chat_prompt": {
      "type": "prompt",
      "inputs": {
        "customer_info": {
          "type": [
            "string"
          ]
        },
        "chat_history": {
          "type": [
            "string"
          ]
        }
      },
      "source": "user_intent_zero_shot.jinja2"
    },
    "extract_intent_tool.py": {
      "type": "python",
      "inputs": {
        "chat_prompt": {
          "type": [
            "object"
          ]
        },
        "connection": {
          "type": [
            "CustomConnection"
          ]
        }
      },
      "source": "extract_intent_tool.py",
      "function": "extract_intent_tool"
    },
    "user_intent_zero_shot.jinja2": {
      "type": "prompt",
      "inputs": {
        "customer_info": {
          "type": [
            "string"
          ]
        },
        "history": {
          "type": [
            "string"
          ]
        }
      },
      "source": "user_intent_zero_shot.jinja2"
    }
  }
} | 0 |
promptflow_repo/promptflow/examples/flows/standard/customer-intent-extraction | promptflow_repo/promptflow/examples/flows/standard/customer-intent-extraction/data/denormalized-flat.jsonl | {"customer_info": "## Customer_Info\n\nFirst Name: Sarah \nLast Name: Lee \nAge: 38 \nEmail Address: [email protected] \nPhone Number: 555-867-5309 \nShipping Address: 321 Maple St, Bigtown USA, 90123 \nMembership: Platinum \n\n## Recent_Purchases\n\norder_number: 2 \ndate: 2023-02-10 \nitem:\n- description: TrailMaster X4 Tent, quantity 1, price $250 \n\u00a0 item_number: 1 \n\norder_number: 26 \ndate: 2023-02-05 \nitem:\n- description: CozyNights Sleeping Bag, quantity 1, price $100 \n\u00a0 item_number: 7 \n\norder_number: 35 \ndate: 2023-02-20 \nitem:\n- description: TrailBlaze Hiking Pants, quantity 1, price $75 \n\u00a0 item_number: 10 \n\norder_number: 42 \ndate: 2023-04-06 \nitem:\n- description: TrekMaster Camping Chair, quantity 2, price $100 \n\u00a0 item_number: 12 \n\norder_number: 51 \ndate: 2023-04-21 \nitem:\n- description: SkyView 2-Person Tent, quantity 1, price $200 \n\u00a0 item_number: 15 \n\norder_number: 56 \ndate: 2023-03-26 \nitem:\n- description: RainGuard Hiking Jacket, quantity 1, price $110 \n\u00a0 item_number: 17 \n\norder_number: 65 \ndate: 2023-04-11 \nitem:\n- description: CompactCook Camping Stove, quantity 1, price $60 \n\u00a0 item_number: 20 \n\n", "history": [{"role": "customer", "content": "I recently bought the TrailMaster X4 Tent, and it leaked during a light rain. This is unacceptable! I expected a reliable and waterproof tent, but it failed to deliver. I'm extremely disappointed in the quality."}], "item_number": 1, "order_number": 2, "description": "TrailMaster X4 Tent, quantity 1, price $250", "intent": "product return"}
{"customer_info": "## Customer_Info\n\nFirst Name: John \nLast Name: Smith \nAge: 35 \nEmail Address: [email protected] \nPhone Number: 555-123-4567 \nShipping Address: 123 Main St, Anytown USA, 12345 \nMembership: None \n\n## Recent_Purchases\n\norder_number: 1 \ndate: 2023-01-05 \nitem:\n- description: TrailMaster X4 Tent, quantity 2, price $500 \n\u00a0 item_number: 1 \n\norder_number: 19 \ndate: 2023-01-25 \nitem:\n- description: BaseCamp Folding Table, quantity 1, price $60 \n\u00a0 item_number: 5 \n\norder_number: 29 \ndate: 2023-02-10 \nitem:\n- description: Alpine Explorer Tent, quantity 2, price $700 \n\u00a0 item_number: 8 \n\norder_number: 41 \ndate: 2023-03-01 \nitem:\n- description: TrekMaster Camping Chair, quantity 1, price $50 \n\u00a0 item_number: 12 \n\norder_number: 50 \ndate: 2023-03-16 \nitem:\n- description: SkyView 2-Person Tent, quantity 2, price $400 \n\u00a0 item_number: 15 \n\norder_number: 59 \ndate: 2023-04-01 \nitem:\n- description: TrekStar Hiking Sandals, quantity 1, price $70 \n\u00a0 item_number: 18 \n\n", "history": [{"role": "customer", "content": "I recently purchased two TrailMaster X4 Tents, and while the overall quality is good, I noticed a minor issue. One of the tent poles arrived slightly bent, making it challenging to assemble the tent properly. I'm concerned that this may affect the stability of the tent. Can you provide any guidance on how to address this?"}], "item_number": 1, "order_number": 1, "description": "TrailMaster X4 Tent, quantity 2, price $500", "intent": "product question"}
{"customer_info": "## Customer_Info\n\nFirst Name: Jason \nLast Name: Brown \nAge: 50 \nEmail Address: [email protected] \nPhone Number: 555-222-3333 \nShipping Address: 456 Cedar Rd, Anytown USA, 12345 \nMembership: None \n\n## Recent_Purchases\n\norder_number: 8 \ndate: 2023-03-20 \nitem:\n- description: Adventurer Pro Backpack, quantity 1, price $90 \n\u00a0 item_number: 2 \n\norder_number: 27 \ndate: 2023-03-10 \nitem:\n- description: CozyNights Sleeping Bag, quantity 2, price $200 \n\u00a0 item_number: 7 \n\norder_number: 36 \ndate: 2023-03-25 \nitem:\n- description: TrailBlaze Hiking Pants, quantity 2, price $150 \n\u00a0 item_number: 10 \n\norder_number: 43 \ndate: 2023-05-11 \nitem:\n- description: TrekMaster Camping Chair, quantity 1, price $50 \n\u00a0 item_number: 12 \n\norder_number: 52 \ndate: 2023-05-26 \nitem:\n- description: SkyView 2-Person Tent, quantity 1, price $200 \n\u00a0 item_number: 15 \n\norder_number: 57 \ndate: 2023-05-01 \nitem:\n- description: RainGuard Hiking Jacket, quantity 2, price $220 \n\u00a0 item_number: 17 \n\norder_number: 66 \ndate: 2023-05-16 \nitem:\n- description: CompactCook Camping Stove, quantity 2, price $120 \n\u00a0 item_number: 20 \n\n", "history": [{"role": "customer", "content": "I recently purchased the Adventurer Pro Backpack, and I'm excited to use it for my upcoming camping trip. Can you provide some guidance on the backpack's capacity and any special features I should be aware of? I want to make sure I utilize it to its fullest potential."}], "item_number": 2, "order_number": 8, "description": "Adventurer Pro Backpack, quantity 3, price $270", "intent": "product question"}
{"customer_info": "## Customer_Info\n\nFirst Name: Daniel \nLast Name: Wilson \nAge: 47 \nEmail Address: [email protected] \nPhone Number: 555-444-5555 \nShipping Address: 321 Birch Ln, Smallville USA, 34567 \nMembership: None \n\n## Recent_Purchases\n\norder_number: 9 \ndate: 2023-04-25 \nitem:\n- description: Adventurer Pro Backpack, quantity 3, price $270 \n\u00a0 item_number: 2 \n\norder_number: 13 \ndate: 2023-03-25 \nitem:\n- description: Summit Breeze Jacket, quantity 1, price $120 \n\u00a0 item_number: 3 \n\norder_number: 22 \ndate: 2023-05-07 \nitem:\n- description: BaseCamp Folding Table, quantity 3, price $180 \n\u00a0 item_number: 5 \n\norder_number: 40 \ndate: 2023-04-05 \nitem:\n- description: TrailWalker Hiking Shoes, quantity 1, price $110 \n\u00a0 item_number: 11 \n\norder_number: 49 \ndate: 2023-05-21 \nitem:\n- description: MountainDream Sleeping Bag, quantity 1, price $130 \n\u00a0 item_number: 14 \n\n", "history": [{"role": "customer", "content": "I recently received the Adventurer Pro Backpack I ordered, but there seems to be a problem. One of the zippers on the main compartment is jammed, making it difficult to open and close. This is quite frustrating, as I was looking forward to using it on my upcoming hiking trip."}], "item_number": 2, "order_number": 9, "description": "Adventurer Pro Backpack, quantity 2, price $180", "intent": "product return"}
{"customer_info": "## Customer_Info\n\nFirst Name: Robert \nLast Name: Johnson \nAge: 36 \nEmail Address: [email protected] \nPhone Number: 555-555-1212 \nShipping Address: 123 Main St, Anytown USA, 12345 \nMembership: None \n\n## Recent_Purchases\n\norder_number: 10 \ndate: 2023-05-05 \nitem:\n- description: Adventurer Pro Backpack, quantity 2, price $180 \n\u00a0 item_number: 2 \n\n", "history": [{"role": "customer", "content": "I recently purchased two Adventurer Pro Backpacks, and I'm curious to know if they are waterproof. I'm planning to go on a camping trip where we might encounter some rain, and I want to make sure my belongings stay dry."}], "item_number": 2, "order_number": 10, "description": "Adventurer Pro Backpack, quantity 2, price $180", "intent": "product question"}
{"customer_info": "## Customer_Info\n\nFirst Name: Michael \nLast Name: Johnson \nAge: 45 \nEmail Address: [email protected] \nPhone Number: 555-555-1212 \nShipping Address: 789 Elm St, Smallville USA, 34567 \nMembership: None \n\n## Recent_Purchases\n\norder_number: 11 \ndate: 2023-01-15 \nitem:\n- description: Summit Breeze Jacket, quantity 1, price $120 \n\u00a0 item_number: 3 \n\norder_number: 20 \ndate: 2023-02-28 \nitem:\n- description: BaseCamp Folding Table, quantity 2, price $120 \n\u00a0 item_number: 5 \n\norder_number: 30 \ndate: 2023-03-15 \nitem:\n- description: Alpine Explorer Tent, quantity 1, price $350 \n\u00a0 item_number: 8 \n\norder_number: 38 \ndate: 2023-02-25 \nitem:\n- description: TrailWalker Hiking Shoes, quantity 1, price $110 \n\u00a0 item_number: 11 \n\norder_number: 47 \ndate: 2023-03-11 \nitem:\n- description: MountainDream Sleeping Bag, quantity 1, price $130 \n\u00a0 item_number: 14 \n\norder_number: 60 \ndate: 2023-05-06 \nitem:\n- description: TrekStar Hiking Sandals, quantity 2, price $140 \n\u00a0 item_number: 18 \n\n", "history": [{"role": "customer", "content": "I recently purchased the Summit Breeze Jacket, and I'm extremely disappointed. The jacket doesn't provide the protection it claims. I wore it during a light rain, and it soaked through within minutes. This is completely unacceptable!"}], "item_number": 3, "order_number": 11, "description": "Summit Breeze Jacket, quantity 1, price $120", "intent": "product return"}
{"customer_info": "## Customer_Info\n\nFirst Name: Melissa \nLast Name: Davis \nAge: 31 \nEmail Address: [email protected] \nPhone Number: 555-333-4444 \nShipping Address: 789 Ash St, Another City USA, 67890 \nMembership: Gold \n\n## Recent_Purchases\n\norder_number: 4 \ndate: 2023-04-22 \nitem:\n- description: TrailMaster X4 Tent, quantity 2, price $500 \n\u00a0 item_number: 1 \n\norder_number: 17 \ndate: 2023-03-30 \nitem:\n- description: TrekReady Hiking Boots, quantity 1, price $140 \n\u00a0 item_number: 4 \n\norder_number: 25 \ndate: 2023-04-10 \nitem:\n- description: EcoFire Camping Stove, quantity 1, price $80 \n\u00a0 item_number: 6 \n\norder_number: 34 \ndate: 2023-04-25 \nitem:\n- description: SummitClimber Backpack, quantity 1, price $120 \n\u00a0 item_number: 9 \n\norder_number: 46 \ndate: 2023-05-16 \nitem:\n- description: PowerBurner Camping Stove, quantity 1, price $100 \n\u00a0 item_number: 13 \n\norder_number: 55 \ndate: 2023-05-31 \nitem:\n- description: TrailLite Daypack, quantity 1, price $60 \n\u00a0 item_number: 16 \n\norder_number: 64 \ndate: 2023-06-16 \nitem:\n- description: Adventure Dining Table, quantity 1, price $90 \n\u00a0 item_number: 19 \n\n", "history": [{"role": "customer", "content": "I recently purchased the TrekReady Hiking Boots, and I must say I'm disappointed. The boots started falling apart after just a few uses. The stitching came undone, and the sole started detaching. I expected better quality and durability from these boots. I'm not satisfied with my purchase."}], "item_number": 4, "order_number": 17, "description": "TrekReady Hiking Boots, quantity 1, price $140", "intent": "product return"}
{"customer_info": "## Customer_Info\n\nFirst Name: Emily \nLast Name: Rodriguez \nAge: 29 \nEmail Address: [email protected] \nPhone Number: 555-111-2222 \nShipping Address: 987 Oak Ave, Cityville USA, 56789 \nMembership: None \n\n## Recent_Purchases\n\norder_number: 3 \ndate: 2023-03-18 \nitem:\n- description: TrailMaster X4 Tent, quantity 3, price $750 \n\u00a0 item_number: 1 \n\norder_number: 12 \ndate: 2023-02-20 \nitem:\n- description: Summit Breeze Jacket, quantity 2, price $240 \n\u00a0 item_number: 3 \n\norder_number: 21 \ndate: 2023-04-02 \nitem:\n- description: BaseCamp Folding Table, quantity 1, price $60 \n\u00a0 item_number: 5 \n\norder_number: 31 \ndate: 2023-04-20 \nitem:\n- description: Alpine Explorer Tent, quantity 1, price $350 \n\u00a0 item_number: 8 \n\norder_number: 39 \ndate: 2023-03-30 \nitem:\n- description: TrailWalker Hiking Shoes, quantity 2, price $220 \n\u00a0 item_number: 11 \n\norder_number: 48 \ndate: 2023-04-16 \nitem:\n- description: MountainDream Sleeping Bag, quantity 2, price $260 \n\u00a0 item_number: 14 \n\norder_number: 61 \ndate: 2023-06-11 \nitem:\n- description: TrekStar Hiking Sandals, quantity 1, price $70 \n\u00a0 item_number: 18 \n\n", "history": [{"role": "customer", "content": "I'm interested in purchasing the BaseCamp Folding Table. Can you provide me with more details about its dimensions and weight? I want to make sure it will fit well in my camping gear."}], "item_number": 5, "order_number": 21, "description": "BaseCamp Folding Table, quantity 1, price $60", "intent": "product question"}
{"customer_info": "## Customer_Info\n\nFirst Name: Jane \nLast Name: Doe \nAge: 28 \nEmail Address: [email protected] \nPhone Number: 555-987-6543 \nShipping Address: 456 Oak St, Another City USA, 67890 \nMembership: Gold \n\n## Recent_Purchases\n\norder_number: 6 \ndate: 2023-01-10 \nitem:\n- description: Adventurer Pro Backpack, quantity 1, price $90 \n\u00a0 item_number: 2 \n\norder_number: 15 \ndate: 2023-01-20 \nitem:\n- description: TrekReady Hiking Boots, quantity 1, price $140 \n\u00a0 item_number: 4 \n\norder_number: 23 \ndate: 2023-01-30 \nitem:\n- description: EcoFire Camping Stove, quantity 1, price $80 \n\u00a0 item_number: 6 \n\norder_number: 32 \ndate: 2023-02-15 \nitem:\n- description: SummitClimber Backpack, quantity 1, price $120 \n\u00a0 item_number: 9 \n\norder_number: 44 \ndate: 2023-03-06 \nitem:\n- description: PowerBurner Camping Stove, quantity 1, price $100 \n\u00a0 item_number: 13 \n\norder_number: 53 \ndate: 2023-03-21 \nitem:\n- description: TrailLite Daypack, quantity 1, price $60 \n\u00a0 item_number: 16 \n\norder_number: 62 \ndate: 2023-04-06 \nitem:\n- description: Adventure Dining Table, quantity 1, price $90 \n\u00a0 item_number: 19 \n\n", "history": [{"role": "customer", "content": "I recently purchased a camping cooker, but I'm disappointed with its performance. It takes too long to heat up, and the flames seem to be uneven. I expected better quality from this stove."}], "item_number": 6, "order_number": 23, "description": "EcoFire Camping Stove, quantity 1, price $80", "intent": "product question"}
{"customer_info": "## Customer_Info\n\nFirst Name: Melissa \nLast Name: Davis \nAge: 31 \nEmail Address: [email protected] \nPhone Number: 555-333-4444 \nShipping Address: 789 Ash St, Another City USA, 67890 \nMembership: Gold \n\n## Recent_Purchases\n\norder_number: 4 \ndate: 2023-04-22 \nitem:\n- description: TrailMaster X4 Tent, quantity 2, price $500 \n\u00a0 item_number: 1 \n\norder_number: 17 \ndate: 2023-03-30 \nitem:\n- description: TrekReady Hiking Boots, quantity 1, price $140 \n\u00a0 item_number: 4 \n\norder_number: 25 \ndate: 2023-04-10 \nitem:\n- description: EcoFire Camping Stove, quantity 1, price $80 \n\u00a0 item_number: 6 \n\norder_number: 34 \ndate: 2023-04-25 \nitem:\n- description: SummitClimber Backpack, quantity 1, price $120 \n\u00a0 item_number: 9 \n\norder_number: 46 \ndate: 2023-05-16 \nitem:\n- description: PowerBurner Camping Stove, quantity 1, price $100 \n\u00a0 item_number: 13 \n\norder_number: 55 \ndate: 2023-05-31 \nitem:\n- description: TrailLite Daypack, quantity 1, price $60 \n\u00a0 item_number: 16 \n\norder_number: 64 \ndate: 2023-06-16 \nitem:\n- description: Adventure Dining Table, quantity 1, price $90 \n\u00a0 item_number: 19 \n\n", "history": [{"role": "customer", "content": "I'm interested in purchasing the a camping cooker. Can you tell me more about its fuel efficiency and cooking capacity? I want to make sure it will suit my camping needs."}], "item_number": 6, "order_number": 25, "description": "EcoFire Camping Stove, quantity 1, price $80", "intent": "product question"}
{"customer_info": "## Customer_Info\n\nFirst Name: Sarah \nLast Name: Lee \nAge: 38 \nEmail Address: [email protected] \nPhone Number: 555-867-5309 \nShipping Address: 321 Maple St, Bigtown USA, 90123 \nMembership: Platinum \n\n## Recent_Purchases\n\norder_number: 2 \ndate: 2023-02-10 \nitem:\n- description: TrailMaster X4 Tent, quantity 1, price $250 \n\u00a0 item_number: 1 \n\norder_number: 26 \ndate: 2023-02-05 \nitem:\n- description: CozyNights Sleeping Bag, quantity 1, price $100 \n\u00a0 item_number: 7 \n\norder_number: 35 \ndate: 2023-02-20 \nitem:\n- description: TrailBlaze Hiking Pants, quantity 1, price $75 \n\u00a0 item_number: 10 \n\norder_number: 42 \ndate: 2023-04-06 \nitem:\n- description: TrekMaster Camping Chair, quantity 2, price $100 \n\u00a0 item_number: 12 \n\norder_number: 51 \ndate: 2023-04-21 \nitem:\n- description: SkyView 2-Person Tent, quantity 1, price $200 \n\u00a0 item_number: 15 \n\norder_number: 56 \ndate: 2023-03-26 \nitem:\n- description: RainGuard Hiking Jacket, quantity 1, price $110 \n\u00a0 item_number: 17 \n\norder_number: 65 \ndate: 2023-04-11 \nitem:\n- description: CompactCook Camping Stove, quantity 1, price $60 \n\u00a0 item_number: 20 \n\n", "history": [{"role": "customer", "content": "Hi, I recently bought a sleeping Bag, and I'm having some trouble with the zipper. It seems to get stuck every time I try to close it. What should I do?"}], "item_number": 7, "order_number": 26, "description": "CozyNights Sleeping Bag, quantity 1, price $100", "intent": "product question"}
{"customer_info": "## Customer_Info\n\nFirst Name: Amanda \nLast Name: Perez \nAge: 26 \nEmail Address: [email protected] \nPhone Number: 555-123-4567 \nShipping Address: 654 Pine St, Suburbia USA, 23456 \nMembership: Gold \n\n## Recent_Purchases\n\norder_number: 5 \ndate: 2023-05-01 \nitem:\n- description: TrailMaster X4 Tent, quantity 1, price $250 \n\u00a0 item_number: 1 \n\norder_number: 18 \ndate: 2023-05-04 \nitem:\n- description: TrekReady Hiking Boots, quantity 3, price $420 \n\u00a0 item_number: 4 \n\norder_number: 28 \ndate: 2023-04-15 \nitem:\n- description: CozyNights Sleeping Bag, quantity 1, price $100 \n\u00a0 item_number: 7 \n\norder_number: 37 \ndate: 2023-04-30 \nitem:\n- description: TrailBlaze Hiking Pants, quantity 1, price $75 \n\u00a0 item_number: 10 \n\norder_number: 58 \ndate: 2023-06-06 \nitem:\n- description: RainGuard Hiking Jacket, quantity 1, price $110 \n\u00a0 item_number: 17 \n\norder_number: 67 \ndate: 2023-06-21 \nitem:\n- description: CompactCook Camping Stove, quantity 1, price $60 \n\u00a0 item_number: 20 \n\n", "history": [{"role": "customer", "content": "I received the sleeping bag I ordered, but it's not the color I chose. I specifically selected the blue one, but I got a green one instead. This is really disappointing!"}], "item_number": 7, "order_number": 28, "description": "CozyNights Sleeping Bag, quantity 2, price $100", "intent": "product exchange"}
{"customer_info": "## Customer_Info\n\nFirst Name: John \nLast Name: Smith \nAge: 35 \nEmail Address: [email protected] \nPhone Number: 555-123-4567 \nShipping Address: 123 Main St, Anytown USA, 12345 \nMembership: None \n\n## Recent_Purchases\n\norder_number: 1 \ndate: 2023-01-05 \nitem:\n- description: TrailMaster X4 Tent, quantity 2, price $500 \n\u00a0 item_number: 1 \n\norder_number: 19 \ndate: 2023-01-25 \nitem:\n- description: BaseCamp Folding Table, quantity 1, price $60 \n\u00a0 item_number: 5 \n\norder_number: 29 \ndate: 2023-02-10 \nitem:\n- description: Alpine Explorer Tent, quantity 2, price $700 \n\u00a0 item_number: 8 \n\norder_number: 41 \ndate: 2023-03-01 \nitem:\n- description: TrekMaster Camping Chair, quantity 1, price $50 \n\u00a0 item_number: 12 \n\norder_number: 50 \ndate: 2023-03-16 \nitem:\n- description: SkyView 2-Person Tent, quantity 2, price $400 \n\u00a0 item_number: 15 \n\norder_number: 59 \ndate: 2023-04-01 \nitem:\n- description: TrekStar Hiking Sandals, quantity 1, price $70 \n\u00a0 item_number: 18 \n\n", "history": [{"role": "customer", "content": "Hi there! I recently purchased two Tents from your store. They look great, but I wanted to know if they come with any additional accessories like stakes or a rainfly?"}], "item_number": 8, "order_number": 29, "description": "Alpine Explorer Tents, quantity 2, price $700", "intent": "product question"}
{"customer_info": "## Customer_Info\n\nFirst Name: Michael \nLast Name: Johnson \nAge: 45 \nEmail Address: [email protected] \nPhone Number: 555-555-1212 \nShipping Address: 789 Elm St, Smallville USA, 34567 \nMembership: None \n\n## Recent_Purchases\n\norder_number: 11 \ndate: 2023-01-15 \nitem:\n- description: Summit Breeze Jacket, quantity 1, price $120 \n\u00a0 item_number: 3 \n\norder_number: 20 \ndate: 2023-02-28 \nitem:\n- description: BaseCamp Folding Table, quantity 2, price $120 \n\u00a0 item_number: 5 \n\norder_number: 30 \ndate: 2023-03-15 \nitem:\n- description: Alpine Explorer Tent, quantity 1, price $350 \n\u00a0 item_number: 8 \n\norder_number: 38 \ndate: 2023-02-25 \nitem:\n- description: TrailWalker Hiking Shoes, quantity 1, price $110 \n\u00a0 item_number: 11 \n\norder_number: 47 \ndate: 2023-03-11 \nitem:\n- description: MountainDream Sleeping Bag, quantity 1, price $130 \n\u00a0 item_number: 14 \n\norder_number: 60 \ndate: 2023-05-06 \nitem:\n- description: TrekStar Hiking Sandals, quantity 2, price $140 \n\u00a0 item_number: 18 \n\n", "history": [{"role": "customer", "content": "I recently bought a Tent, and I have to say, I'm really disappointed. The tent poles seem flimsy, and the zippers are constantly getting stuck. It's not what I expected from a high-end tent."}], "item_number": 8, "order_number": 30, "description": "Alpine Explorer Tents, quantity 1, price $350", "intent": "product return"}
{"customer_info": "## Customer_Info\n\nFirst Name: Emily \nLast Name: Rodriguez \nAge: 29 \nEmail Address: [email protected] \nPhone Number: 555-111-2222 \nShipping Address: 987 Oak Ave, Cityville USA, 56789 \nMembership: None \n\n## Recent_Purchases\n\norder_number: 3 \ndate: 2023-03-18 \nitem:\n- description: TrailMaster X4 Tent, quantity 3, price $750 \n\u00a0 item_number: 1 \n\norder_number: 12 \ndate: 2023-02-20 \nitem:\n- description: Summit Breeze Jacket, quantity 2, price $240 \n\u00a0 item_number: 3 \n\norder_number: 21 \ndate: 2023-04-02 \nitem:\n- description: BaseCamp Folding Table, quantity 1, price $60 \n\u00a0 item_number: 5 \n\norder_number: 31 \ndate: 2023-04-20 \nitem:\n- description: Alpine Explorer Tent, quantity 1, price $350 \n\u00a0 item_number: 8 \n\norder_number: 39 \ndate: 2023-03-30 \nitem:\n- description: TrailWalker Hiking Shoes, quantity 2, price $220 \n\u00a0 item_number: 11 \n\norder_number: 48 \ndate: 2023-04-16 \nitem:\n- description: MountainDream Sleeping Bag, quantity 2, price $260 \n\u00a0 item_number: 14 \n\norder_number: 61 \ndate: 2023-06-11 \nitem:\n- description: TrekStar Hiking Sandals, quantity 1, price $70 \n\u00a0 item_number: 18 \n\n", "history": [{"role": "customer", "content": "I recently received the tent I ordered, but I'm having trouble setting it up. The instructions provided are a bit unclear. Can you guide me through the setup process?"}], "item_number": 8, "order_number": 31, "description": "Alpine Explorer Tents, quantity 1, price $350", "intent": "product question"}
{"customer_info": "## Customer_Info\n\nFirst Name: Jane \nLast Name: Doe \nAge: 28 \nEmail Address: [email protected] \nPhone Number: 555-987-6543 \nShipping Address: 456 Oak St, Another City USA, 67890 \nMembership: Gold \n\n## Recent_Purchases\n\norder_number: 6 \ndate: 2023-01-10 \nitem:\n- description: Adventurer Pro Backpack, quantity 1, price $90 \n\u00a0 item_number: 2 \n\norder_number: 15 \ndate: 2023-01-20 \nitem:\n- description: TrekReady Hiking Boots, quantity 1, price $140 \n\u00a0 item_number: 4 \n\norder_number: 23 \ndate: 2023-01-30 \nitem:\n- description: EcoFire Camping Stove, quantity 1, price $80 \n\u00a0 item_number: 6 \n\norder_number: 32 \ndate: 2023-02-15 \nitem:\n- description: SummitClimber Backpack, quantity 1, price $120 \n\u00a0 item_number: 9 \n\norder_number: 44 \ndate: 2023-03-06 \nitem:\n- description: PowerBurner Camping Stove, quantity 1, price $100 \n\u00a0 item_number: 13 \n\norder_number: 53 \ndate: 2023-03-21 \nitem:\n- description: TrailLite Daypack, quantity 1, price $60 \n\u00a0 item_number: 16 \n\norder_number: 62 \ndate: 2023-04-06 \nitem:\n- description: Adventure Dining Table, quantity 1, price $90 \n\u00a0 item_number: 19 \n\n", "history": [{"role": "customer", "content": "Hi, I recently purchased the a backpack from your store. It looks great, but I'm wondering if it has a separate compartment for a hydration bladder?"}], "item_number": 9, "order_number": 32, "description": "SummitClimber Backpack, quantity 1, price $120", "intent": "product question"}
{"customer_info": "## Customer_Info\n\nFirst Name: Melissa \nLast Name: Davis \nAge: 31 \nEmail Address: [email protected] \nPhone Number: 555-333-4444 \nShipping Address: 789 Ash St, Another City USA, 67890 \nMembership: Gold \n\n## Recent_Purchases\n\norder_number: 4 \ndate: 2023-04-22 \nitem:\n- description: TrailMaster X4 Tent, quantity 2, price $500 \n\u00a0 item_number: 1 \n\norder_number: 17 \ndate: 2023-03-30 \nitem:\n- description: TrekReady Hiking Boots, quantity 1, price $140 \n\u00a0 item_number: 4 \n\norder_number: 25 \ndate: 2023-04-10 \nitem:\n- description: EcoFire Camping Stove, quantity 1, price $80 \n\u00a0 item_number: 6 \n\norder_number: 34 \ndate: 2023-04-25 \nitem:\n- description: SummitClimber Backpack, quantity 1, price $120 \n\u00a0 item_number: 9 \n\norder_number: 46 \ndate: 2023-05-16 \nitem:\n- description: PowerBurner Camping Stove, quantity 1, price $100 \n\u00a0 item_number: 13 \n\norder_number: 55 \ndate: 2023-05-31 \nitem:\n- description: TrailLite Daypack, quantity 1, price $60 \n\u00a0 item_number: 16 \n\norder_number: 64 \ndate: 2023-06-16 \nitem:\n- description: Adventure Dining Table, quantity 1, price $90 \n\u00a0 item_number: 19 \n\n", "history": [{"role": "customer", "content": "I recently received the Backpack I ordered, and I noticed that one of the zippers is not closing smoothly. It seems to get stuck halfway. Is there anything I can do to fix it?"}], "item_number": 9, "order_number": 34, "description": "SummitClimber Backpack, quantity 1, price $120", "intent": "product question"}
{"customer_info": "## Customer_Info\n\nFirst Name: Sarah \nLast Name: Lee \nAge: 38 \nEmail Address: [email protected] \nPhone Number: 555-867-5309 \nShipping Address: 321 Maple St, Bigtown USA, 90123 \nMembership: Platinum \n\n## Recent_Purchases\n\norder_number: 2 \ndate: 2023-02-10 \nitem:\n- description: TrailMaster X4 Tent, quantity 1, price $250 \n\u00a0 item_number: 1 \n\norder_number: 26 \ndate: 2023-02-05 \nitem:\n- description: CozyNights Sleeping Bag, quantity 1, price $100 \n\u00a0 item_number: 7 \n\norder_number: 35 \ndate: 2023-02-20 \nitem:\n- description: TrailBlaze Hiking Pants, quantity 1, price $75 \n\u00a0 item_number: 10 \n\norder_number: 42 \ndate: 2023-04-06 \nitem:\n- description: TrekMaster Camping Chair, quantity 2, price $100 \n\u00a0 item_number: 12 \n\norder_number: 51 \ndate: 2023-04-21 \nitem:\n- description: SkyView 2-Person Tent, quantity 1, price $200 \n\u00a0 item_number: 15 \n\norder_number: 56 \ndate: 2023-03-26 \nitem:\n- description: RainGuard Hiking Jacket, quantity 1, price $110 \n\u00a0 item_number: 17 \n\norder_number: 65 \ndate: 2023-04-11 \nitem:\n- description: CompactCook Camping Stove, quantity 1, price $60 \n\u00a0 item_number: 20 \n\n", "history": [{"role": "customer", "content": "I recently purchased the Pants from your store, but I'm disappointed with the fit. They're too tight around the waist, and the length is too short. Can I exchange them for a different size?"}], "item_number": 10, "order_number": 35, "description": "TrailBlaze Hiking Pants, quantity 1, price $75", "intent": "product return"}
{"customer_info": "## Customer_Info\n\nFirst Name: Amanda \nLast Name: Perez \nAge: 26 \nEmail Address: [email protected] \nPhone Number: 555-123-4567 \nShipping Address: 654 Pine St, Suburbia USA, 23456 \nMembership: Gold \n\n## Recent_Purchases\n\norder_number: 5 \ndate: 2023-05-01 \nitem:\n- description: TrailMaster X4 Tent, quantity 1, price $250 \n\u00a0 item_number: 1 \n\norder_number: 18 \ndate: 2023-05-04 \nitem:\n- description: TrekReady Hiking Boots, quantity 3, price $420 \n\u00a0 item_number: 4 \n\norder_number: 28 \ndate: 2023-04-15 \nitem:\n- description: CozyNights Sleeping Bag, quantity 1, price $100 \n\u00a0 item_number: 7 \n\norder_number: 37 \ndate: 2023-04-30 \nitem:\n- description: TrailBlaze Hiking Pants, quantity 1, price $75 \n\u00a0 item_number: 10 \n\norder_number: 58 \ndate: 2023-06-06 \nitem:\n- description: RainGuard Hiking Jacket, quantity 1, price $110 \n\u00a0 item_number: 17 \n\norder_number: 67 \ndate: 2023-06-21 \nitem:\n- description: CompactCook Camping Stove, quantity 1, price $60 \n\u00a0 item_number: 20 \n\n", "history": [{"role": "customer", "content": "I recently received the Pants I ordered, and I noticed that they have several pockets. Can you provide some information on the pocket layout and their intended purposes?"}], "item_number": 10, "order_number": 37, "description": "TrailBlaze Hiking Pants, quantity 1, price $75", "intent": "product question"}
{"customer_info": "## Customer_Info\n\nFirst Name: Emily \nLast Name: Rodriguez \nAge: 29 \nEmail Address: [email protected] \nPhone Number: 555-111-2222 \nShipping Address: 987 Oak Ave, Cityville USA, 56789 \nMembership: None \n\n## Recent_Purchases\n\norder_number: 3 \ndate: 2023-03-18 \nitem:\n- description: TrailMaster X4 Tent, quantity 3, price $750 \n\u00a0 item_number: 1 \n\norder_number: 12 \ndate: 2023-02-20 \nitem:\n- description: Summit Breeze Jacket, quantity 2, price $240 \n\u00a0 item_number: 3 \n\norder_number: 21 \ndate: 2023-04-02 \nitem:\n- description: BaseCamp Folding Table, quantity 1, price $60 \n\u00a0 item_number: 5 \n\norder_number: 31 \ndate: 2023-04-20 \nitem:\n- description: Alpine Explorer Tent, quantity 1, price $350 \n\u00a0 item_number: 8 \n\norder_number: 39 \ndate: 2023-03-30 \nitem:\n- description: TrailWalker Hiking Shoes, quantity 2, price $220 \n\u00a0 item_number: 11 \n\norder_number: 48 \ndate: 2023-04-16 \nitem:\n- description: MountainDream Sleeping Bag, quantity 2, price $260 \n\u00a0 item_number: 14 \n\norder_number: 61 \ndate: 2023-06-11 \nitem:\n- description: TrekStar Hiking Sandals, quantity 1, price $70 \n\u00a0 item_number: 18 \n\n", "history": [{"role": "customer", "content": "I purchased two pairs of Hiking Shoes for myself and my husband last month. While my pair is great, my husband's pair seems to be slightly small. Is it possible to exchange them for a larger size?"}], "item_number": 11, "order_number": 39, "description": "TrailWalker Hiking Shoes, quantity 2, price $220", "intent": "product exchange"}
{"customer_info": "## Customer_Info\n\nFirst Name: Daniel \nLast Name: Wilson \nAge: 47 \nEmail Address: [email protected] \nPhone Number: 555-444-5555 \nShipping Address: 321 Birch Ln, Smallville USA, 34567 \nMembership: None \n\n## Recent_Purchases\n\norder_number: 9 \ndate: 2023-04-25 \nitem:\n- description: Adventurer Pro Backpack, quantity 3, price $270 \n\u00a0 item_number: 2 \n\norder_number: 13 \ndate: 2023-03-25 \nitem:\n- description: Summit Breeze Jacket, quantity 1, price $120 \n\u00a0 item_number: 3 \n\norder_number: 22 \ndate: 2023-05-07 \nitem:\n- description: BaseCamp Folding Table, quantity 3, price $180 \n\u00a0 item_number: 5 \n\norder_number: 40 \ndate: 2023-04-05 \nitem:\n- description: TrailWalker Hiking Shoes, quantity 1, price $110 \n\u00a0 item_number: 11 \n\norder_number: 49 \ndate: 2023-05-21 \nitem:\n- description: MountainDream Sleeping Bag, quantity 1, price $130 \n\u00a0 item_number: 14 \n\n", "history": [{"role": "customer", "content": "I just bought a pair of Hiking Shoes, and I'm planning a hiking trip soon. Do you have any recommendations for maintaining the shoes and increasing their lifespan?"}], "item_number": 11, "order_number": 40, "description": "TrailWalker Hiking Shoes, quantity 1, price $110", "intent": "product question"}
{"customer_info": "## Customer_Info\n\nFirst Name: Sarah \nLast Name: Lee \nAge: 38 \nEmail Address: [email protected] \nPhone Number: 555-867-5309 \nShipping Address: 321 Maple St, Bigtown USA, 90123 \nMembership: Platinum \n\n## Recent_Purchases\n\norder_number: 2 \ndate: 2023-02-10 \nitem:\n- description: TrailMaster X4 Tent, quantity 1, price $250 \n\u00a0 item_number: 1 \n\norder_number: 26 \ndate: 2023-02-05 \nitem:\n- description: CozyNights Sleeping Bag, quantity 1, price $100 \n\u00a0 item_number: 7 \n\norder_number: 35 \ndate: 2023-02-20 \nitem:\n- description: TrailBlaze Hiking Pants, quantity 1, price $75 \n\u00a0 item_number: 10 \n\norder_number: 42 \ndate: 2023-04-06 \nitem:\n- description: TrekMaster Camping Chair, quantity 2, price $100 \n\u00a0 item_number: 12 \n\norder_number: 51 \ndate: 2023-04-21 \nitem:\n- description: SkyView 2-Person Tent, quantity 1, price $200 \n\u00a0 item_number: 15 \n\norder_number: 56 \ndate: 2023-03-26 \nitem:\n- description: RainGuard Hiking Jacket, quantity 1, price $110 \n\u00a0 item_number: 17 \n\norder_number: 65 \ndate: 2023-04-11 \nitem:\n- description: CompactCook Camping Stove, quantity 1, price $60 \n\u00a0 item_number: 20 \n\n", "history": [{"role": "customer", "content": "I'm a Platinum member, and I just ordered two outdoor seats. I was wondering if there is any assembly required and if any tools are needed for that?"}], "item_number": 12, "order_number": 42, "description": "TrekMaster Camping Chair, quantity 2, price $100", "intent": "product question"}
{"customer_info": "## Customer_Info\n\nFirst Name: Jason \nLast Name: Brown \nAge: 50 \nEmail Address: [email protected] \nPhone Number: 555-222-3333 \nShipping Address: 456 Cedar Rd, Anytown USA, 12345 \nMembership: None \n\n## Recent_Purchases\n\norder_number: 8 \ndate: 2023-03-20 \nitem:\n- description: Adventurer Pro Backpack, quantity 1, price $90 \n\u00a0 item_number: 2 \n\norder_number: 27 \ndate: 2023-03-10 \nitem:\n- description: CozyNights Sleeping Bag, quantity 2, price $200 \n\u00a0 item_number: 7 \n\norder_number: 36 \ndate: 2023-03-25 \nitem:\n- description: TrailBlaze Hiking Pants, quantity 2, price $150 \n\u00a0 item_number: 10 \n\norder_number: 43 \ndate: 2023-05-11 \nitem:\n- description: TrekMaster Camping Chair, quantity 1, price $50 \n\u00a0 item_number: 12 \n\norder_number: 52 \ndate: 2023-05-26 \nitem:\n- description: SkyView 2-Person Tent, quantity 1, price $200 \n\u00a0 item_number: 15 \n\norder_number: 57 \ndate: 2023-05-01 \nitem:\n- description: RainGuard Hiking Jacket, quantity 2, price $220 \n\u00a0 item_number: 17 \n\norder_number: 66 \ndate: 2023-05-16 \nitem:\n- description: CompactCook Camping Stove, quantity 2, price $120 \n\u00a0 item_number: 20 \n\n", "history": [{"role": "customer", "content": "I bought a camping Chair last month, and it seems to be slightly wobbly when I sit on it. Is there any way to fix this issue?"}], "item_number": 12, "order_number": 43, "description": "TrekMaster Camping Chair, quantity 1, price $50", "intent": "product question"}
{"customer_info": "## Customer_Info\n\nFirst Name: David \nLast Name: Kim \nAge: 42 \nEmail Address: [email protected] \nPhone Number: 555-555-5555 \nShipping Address: 654 Pine St, Suburbia USA, 23456 \nMembership: Gold \n\n## Recent_Purchases\n\norder_number: 7 \ndate: 2023-02-15 \nitem:\n- description: Adventurer Pro Backpack, quantity 2, price $180 \n\u00a0 item_number: 2 \n\norder_number: 16 \ndate: 2023-02-25 \nitem:\n- description: TrekReady Hiking Boots, quantity 2, price $280 \n\u00a0 item_number: 4 \n\norder_number: 24 \ndate: 2023-03-05 \nitem:\n- description: EcoFire Camping Stove, quantity 2, price $160 \n\u00a0 item_number: 6 \n\norder_number: 33 \ndate: 2023-03-20 \nitem:\n- description: SummitClimber Backpack, quantity 2, price $240 \n\u00a0 item_number: 9 \n\norder_number: 45 \ndate: 2023-04-11 \nitem:\n- description: PowerBurner Camping Stove, quantity 2, price $200 \n\u00a0 item_number: 13 \n\norder_number: 54 \ndate: 2023-04-26 \nitem:\n- description: TrailLite Daypack, quantity 2, price $120 \n\u00a0 item_number: 16 \n\norder_number: 63 \ndate: 2023-05-11 \nitem:\n- description: Adventure Dining Table, quantity 2, price $180 \n\u00a0 item_number: 19 \n\n", "history": [{"role": "customer", "content": "I just bought an outdoor stove but I'm not sure how to attach the fuel canister. Can you please guide me through the process?"}], "item_number": 13, "order_number": 45, "description": "PowerBurner Camping Stove, quantity 2, price $200", "intent": "product question"}
{"customer_info": "## Customer_Info\n\nFirst Name: Emily \nLast Name: Rodriguez \nAge: 29 \nEmail Address: [email protected] \nPhone Number: 555-111-2222 \nShipping Address: 987 Oak Ave, Cityville USA, 56789 \nMembership: None \n\n## Recent_Purchases\n\norder_number: 3 \ndate: 2023-03-18 \nitem:\n- description: TrailMaster X4 Tent, quantity 3, price $750 \n\u00a0 item_number: 1 \n\norder_number: 12 \ndate: 2023-02-20 \nitem:\n- description: Summit Breeze Jacket, quantity 2, price $240 \n\u00a0 item_number: 3 \n\norder_number: 21 \ndate: 2023-04-02 \nitem:\n- description: BaseCamp Folding Table, quantity 1, price $60 \n\u00a0 item_number: 5 \n\norder_number: 31 \ndate: 2023-04-20 \nitem:\n- description: Alpine Explorer Tent, quantity 1, price $350 \n\u00a0 item_number: 8 \n\norder_number: 39 \ndate: 2023-03-30 \nitem:\n- description: TrailWalker Hiking Shoes, quantity 2, price $220 \n\u00a0 item_number: 11 \n\norder_number: 48 \ndate: 2023-04-16 \nitem:\n- description: MountainDream Sleeping Bag, quantity 2, price $260 \n\u00a0 item_number: 14 \n\norder_number: 61 \ndate: 2023-06-11 \nitem:\n- description: TrekStar Hiking Sandals, quantity 1, price $70 \n\u00a0 item_number: 18 \n\n", "history": [{"role": "customer", "content": "I've ordered two Sleeping Bags for my upcoming camping trip. I was wondering if they can be zipped together to create a double sleeping bag?"}], "item_number": 14, "order_number": 48, "description": "MountainDream Sleeping Bag, quantity 2, price $260", "intent": "product question"}
{"customer_info": "## Customer_Info\n\nFirst Name: Daniel \nLast Name: Wilson \nAge: 47 \nEmail Address: [email protected] \nPhone Number: 555-444-5555 \nShipping Address: 321 Birch Ln, Smallville USA, 34567 \nMembership: None \n\n## Recent_Purchases\n\norder_number: 9 \ndate: 2023-04-25 \nitem:\n- description: Adventurer Pro Backpack, quantity 3, price $270 \n\u00a0 item_number: 2 \n\norder_number: 13 \ndate: 2023-03-25 \nitem:\n- description: Summit Breeze Jacket, quantity 1, price $120 \n\u00a0 item_number: 3 \n\norder_number: 22 \ndate: 2023-05-07 \nitem:\n- description: BaseCamp Folding Table, quantity 3, price $180 \n\u00a0 item_number: 5 \n\norder_number: 40 \ndate: 2023-04-05 \nitem:\n- description: TrailWalker Hiking Shoes, quantity 1, price $110 \n\u00a0 item_number: 11 \n\norder_number: 49 \ndate: 2023-05-21 \nitem:\n- description: MountainDream Sleeping Bag, quantity 1, price $130 \n\u00a0 item_number: 14 \n\n", "history": [{"role": "customer", "content": "I recently purchased a Sleeping Bag, and I'm wondering how to properly clean and store it to ensure it lasts for a long time."}], "item_number": 14, "order_number": 49, "description": "MountainDream Sleeping Bag, quantity 1, price $130", "intent": "product question"}
{"customer_info": "## Customer_Info\n\nFirst Name: John \nLast Name: Smith \nAge: 35 \nEmail Address: [email protected] \nPhone Number: 555-123-4567 \nShipping Address: 123 Main St, Anytown USA, 12345 \nMembership: None \n\n## Recent_Purchases\n\norder_number: 1 \ndate: 2023-01-05 \nitem:\n- description: TrailMaster X4 Tent, quantity 2, price $500 \n\u00a0 item_number: 1 \n\norder_number: 19 \ndate: 2023-01-25 \nitem:\n- description: BaseCamp Folding Table, quantity 1, price $60 \n\u00a0 item_number: 5 \n\norder_number: 29 \ndate: 2023-02-10 \nitem:\n- description: Alpine Explorer Tent, quantity 2, price $700 \n\u00a0 item_number: 8 \n\norder_number: 41 \ndate: 2023-03-01 \nitem:\n- description: TrekMaster Camping Chair, quantity 1, price $50 \n\u00a0 item_number: 12 \n\norder_number: 50 \ndate: 2023-03-16 \nitem:\n- description: SkyView 2-Person Tent, quantity 2, price $400 \n\u00a0 item_number: 15 \n\norder_number: 59 \ndate: 2023-04-01 \nitem:\n- description: TrekStar Hiking Sandals, quantity 1, price $70 \n\u00a0 item_number: 18 \n\n", "history": [{"role": "customer", "content": "I just received my Tents, and they look amazing! I can't wait to use them on our next camping trip. Quick question, though - what's the best way to set up the tent?"}], "item_number": 15, "order_number": 50, "description": "SkyView 2-Person Tent, quantity 2, price $400", "intent": "product question"}
{"customer_info": "## Customer_Info\n\nFirst Name: Sarah \nLast Name: Lee \nAge: 38 \nEmail Address: [email protected] \nPhone Number: 555-867-5309 \nShipping Address: 321 Maple St, Bigtown USA, 90123 \nMembership: Platinum \n\n## Recent_Purchases\n\norder_number: 2 \ndate: 2023-02-10 \nitem:\n- description: TrailMaster X4 Tent, quantity 1, price $250 \n\u00a0 item_number: 1 \n\norder_number: 26 \ndate: 2023-02-05 \nitem:\n- description: CozyNights Sleeping Bag, quantity 1, price $100 \n\u00a0 item_number: 7 \n\norder_number: 35 \ndate: 2023-02-20 \nitem:\n- description: TrailBlaze Hiking Pants, quantity 1, price $75 \n\u00a0 item_number: 10 \n\norder_number: 42 \ndate: 2023-04-06 \nitem:\n- description: TrekMaster Camping Chair, quantity 2, price $100 \n\u00a0 item_number: 12 \n\norder_number: 51 \ndate: 2023-04-21 \nitem:\n- description: SkyView 2-Person Tent, quantity 1, price $200 \n\u00a0 item_number: 15 \n\norder_number: 56 \ndate: 2023-03-26 \nitem:\n- description: RainGuard Hiking Jacket, quantity 1, price $110 \n\u00a0 item_number: 17 \n\norder_number: 65 \ndate: 2023-04-11 \nitem:\n- description: CompactCook Camping Stove, quantity 1, price $60 \n\u00a0 item_number: 20 \n\n", "history": [{"role": "customer", "content": "I ordered a Tent, and it arrived yesterday. However, I noticed that one of the tent poles is damaged. How can I get a replacement for the damaged pole?"}], "item_number": 15, "order_number": 51, "description": "SkyView 2-Person Tent, quantity 1, price $200", "intent": "product return"}
{"customer_info": "## Customer_Info\n\nFirst Name: Jason \nLast Name: Brown \nAge: 50 \nEmail Address: [email protected] \nPhone Number: 555-222-3333 \nShipping Address: 456 Cedar Rd, Anytown USA, 12345 \nMembership: None \n\n## Recent_Purchases\n\norder_number: 8 \ndate: 2023-03-20 \nitem:\n- description: Adventurer Pro Backpack, quantity 1, price $90 \n\u00a0 item_number: 2 \n\norder_number: 27 \ndate: 2023-03-10 \nitem:\n- description: CozyNights Sleeping Bag, quantity 2, price $200 \n\u00a0 item_number: 7 \n\norder_number: 36 \ndate: 2023-03-25 \nitem:\n- description: TrailBlaze Hiking Pants, quantity 2, price $150 \n\u00a0 item_number: 10 \n\norder_number: 43 \ndate: 2023-05-11 \nitem:\n- description: TrekMaster Camping Chair, quantity 1, price $50 \n\u00a0 item_number: 12 \n\norder_number: 52 \ndate: 2023-05-26 \nitem:\n- description: SkyView 2-Person Tent, quantity 1, price $200 \n\u00a0 item_number: 15 \n\norder_number: 57 \ndate: 2023-05-01 \nitem:\n- description: RainGuard Hiking Jacket, quantity 2, price $220 \n\u00a0 item_number: 17 \n\norder_number: 66 \ndate: 2023-05-16 \nitem:\n- description: CompactCook Camping Stove, quantity 2, price $120 \n\u00a0 item_number: 20 \n\n", "history": [{"role": "customer", "content": "I'm considering buying the SkyView 2-Person Tent for an upcoming camping trip. Can you please tell me more about the tent's ventilation and how it performs in rainy conditions?"}], "item_number": 15, "order_number": 52, "description": "SkyView 2-Person Tent, quantity 1, price $200", "intent": "product question"}
{"customer_info": "## Customer_Info\n\nFirst Name: David \nLast Name: Kim \nAge: 42 \nEmail Address: [email protected] \nPhone Number: 555-555-5555 \nShipping Address: 654 Pine St, Suburbia USA, 23456 \nMembership: Gold \n\n## Recent_Purchases\n\norder_number: 7 \ndate: 2023-02-15 \nitem:\n- description: Adventurer Pro Backpack, quantity 2, price $180 \n\u00a0 item_number: 2 \n\norder_number: 16 \ndate: 2023-02-25 \nitem:\n- description: TrekReady Hiking Boots, quantity 2, price $280 \n\u00a0 item_number: 4 \n\norder_number: 24 \ndate: 2023-03-05 \nitem:\n- description: EcoFire Camping Stove, quantity 2, price $160 \n\u00a0 item_number: 6 \n\norder_number: 33 \ndate: 2023-03-20 \nitem:\n- description: SummitClimber Backpack, quantity 2, price $240 \n\u00a0 item_number: 9 \n\norder_number: 45 \ndate: 2023-04-11 \nitem:\n- description: PowerBurner Camping Stove, quantity 2, price $200 \n\u00a0 item_number: 13 \n\norder_number: 54 \ndate: 2023-04-26 \nitem:\n- description: TrailLite Daypack, quantity 2, price $120 \n\u00a0 item_number: 16 \n\norder_number: 63 \ndate: 2023-05-11 \nitem:\n- description: Adventure Dining Table, quantity 2, price $180 \n\u00a0 item_number: 19 \n\n", "history": [{"role": "customer", "content": "I purchased two Daypacks for my kids, and while one is perfect, the other has a zipper issue that makes it difficult to open and close. Can I get a replacement for the faulty daypack?"}], "item_number": 16, "order_number": 54, "description": "TrailLite Daypack, quantity 2, price $120", "intent": "product return"}
{"customer_info": "## Customer_Info\n\nFirst Name: Melissa \nLast Name: Davis \nAge: 31 \nEmail Address: [email protected] \nPhone Number: 555-333-4444 \nShipping Address: 789 Ash St, Another City USA, 67890 \nMembership: Gold \n\n## Recent_Purchases\n\norder_number: 4 \ndate: 2023-04-22 \nitem:\n- description: TrailMaster X4 Tent, quantity 2, price $500 \n\u00a0 item_number: 1 \n\norder_number: 17 \ndate: 2023-03-30 \nitem:\n- description: TrekReady Hiking Boots, quantity 1, price $140 \n\u00a0 item_number: 4 \n\norder_number: 25 \ndate: 2023-04-10 \nitem:\n- description: EcoFire Camping Stove, quantity 1, price $80 \n\u00a0 item_number: 6 \n\norder_number: 34 \ndate: 2023-04-25 \nitem:\n- description: SummitClimber Backpack, quantity 1, price $120 \n\u00a0 item_number: 9 \n\norder_number: 46 \ndate: 2023-05-16 \nitem:\n- description: PowerBurner Camping Stove, quantity 1, price $100 \n\u00a0 item_number: 13 \n\norder_number: 55 \ndate: 2023-05-31 \nitem:\n- description: TrailLite Daypack, quantity 1, price $60 \n\u00a0 item_number: 16 \n\norder_number: 64 \ndate: 2023-06-16 \nitem:\n- description: Adventure Dining Table, quantity 1, price $90 \n\u00a0 item_number: 19 \n\n", "history": [{"role": "customer", "content": "I just bought a TrailLite Daypack for my day hikes, and I was wondering if it's water-resistant. Should I be concerned about my belongings getting wet if it rains?"}], "item_number": 16, "order_number": 55, "description": "TrailLite Daypack, quantity 1, price $60", "intent": "product question"}
{"customer_info": "## Customer_Info\n\nFirst Name: Jason \nLast Name: Brown \nAge: 50 \nEmail Address: [email protected] \nPhone Number: 555-222-3333 \nShipping Address: 456 Cedar Rd, Anytown USA, 12345 \nMembership: None \n\n## Recent_Purchases\n\norder_number: 8 \ndate: 2023-03-20 \nitem:\n- description: Adventurer Pro Backpack, quantity 1, price $90 \n\u00a0 item_number: 2 \n\norder_number: 27 \ndate: 2023-03-10 \nitem:\n- description: CozyNights Sleeping Bag, quantity 2, price $200 \n\u00a0 item_number: 7 \n\norder_number: 36 \ndate: 2023-03-25 \nitem:\n- description: TrailBlaze Hiking Pants, quantity 2, price $150 \n\u00a0 item_number: 10 \n\norder_number: 43 \ndate: 2023-05-11 \nitem:\n- description: TrekMaster Camping Chair, quantity 1, price $50 \n\u00a0 item_number: 12 \n\norder_number: 52 \ndate: 2023-05-26 \nitem:\n- description: SkyView 2-Person Tent, quantity 1, price $200 \n\u00a0 item_number: 15 \n\norder_number: 57 \ndate: 2023-05-01 \nitem:\n- description: RainGuard Hiking Jacket, quantity 2, price $220 \n\u00a0 item_number: 17 \n\norder_number: 66 \ndate: 2023-05-16 \nitem:\n- description: CompactCook Camping Stove, quantity 2, price $120 \n\u00a0 item_number: 20 \n\n", "history": [{"role": "customer", "content": "I just bought two RainGuard Hiking Jackets for my wife and me. Can you please provide some care instructions to ensure the jackets maintain their water resistance and durability?"}], "item_number": 17, "order_number": 57, "description": "RainGuard Hiking Jacket, quantity 2, price $220", "intent": "product question"}
{"customer_info": "## Customer_Info\n\nFirst Name: Amanda \nLast Name: Perez \nAge: 26 \nEmail Address: [email protected] \nPhone Number: 555-123-4567 \nShipping Address: 654 Pine St, Suburbia USA, 23456 \nMembership: Gold \n\n## Recent_Purchases\n\norder_number: 5 \ndate: 2023-05-01 \nitem:\n- description: TrailMaster X4 Tent, quantity 1, price $250 \n\u00a0 item_number: 1 \n\norder_number: 18 \ndate: 2023-05-04 \nitem:\n- description: TrekReady Hiking Boots, quantity 3, price $420 \n\u00a0 item_number: 4 \n\norder_number: 28 \ndate: 2023-04-15 \nitem:\n- description: CozyNights Sleeping Bag, quantity 1, price $100 \n\u00a0 item_number: 7 \n\norder_number: 37 \ndate: 2023-04-30 \nitem:\n- description: TrailBlaze Hiking Pants, quantity 1, price $75 \n\u00a0 item_number: 10 \n\norder_number: 58 \ndate: 2023-06-06 \nitem:\n- description: RainGuard Hiking Jacket, quantity 1, price $110 \n\u00a0 item_number: 17 \n\norder_number: 67 \ndate: 2023-06-21 \nitem:\n- description: CompactCook Camping Stove, quantity 1, price $60 \n\u00a0 item_number: 20 \n\n", "history": [{"role": "customer", "content": "I bought the RainGuard Hiking Jacket a few weeks ago, but I noticed that the seam tape on the inside is starting to peel off. Is there anything I can do to fix this issue?"}], "item_number": 17, "order_number": 58, "description": "RainGuard Hiking Jacket, quantity 1, price $110", "intent": "product return"}
{"customer_info": "## Customer_Info\n\nFirst Name: John \nLast Name: Smith \nAge: 35 \nEmail Address: [email protected] \nPhone Number: 555-123-4567 \nShipping Address: 123 Main St, Anytown USA, 12345 \nMembership: None \n\n## Recent_Purchases\n\norder_number: 1 \ndate: 2023-01-05 \nitem:\n- description: TrailMaster X4 Tent, quantity 2, price $500 \n\u00a0 item_number: 1 \n\norder_number: 19 \ndate: 2023-01-25 \nitem:\n- description: BaseCamp Folding Table, quantity 1, price $60 \n\u00a0 item_number: 5 \n\norder_number: 29 \ndate: 2023-02-10 \nitem:\n- description: Alpine Explorer Tent, quantity 2, price $700 \n\u00a0 item_number: 8 \n\norder_number: 41 \ndate: 2023-03-01 \nitem:\n- description: TrekMaster Camping Chair, quantity 1, price $50 \n\u00a0 item_number: 12 \n\norder_number: 50 \ndate: 2023-03-16 \nitem:\n- description: SkyView 2-Person Tent, quantity 2, price $400 \n\u00a0 item_number: 15 \n\norder_number: 59 \ndate: 2023-04-01 \nitem:\n- description: TrekStar Hiking Sandals, quantity 1, price $70 \n\u00a0 item_number: 18 \n\n", "history": [{"role": "customer", "content": "Hi, I purchased the TrekStar Hiking Sandals a few weeks ago, and they feel a bit tight. Is there a break-in period for these sandals, or should I exchange them for a larger size?"}], "item_number": 18, "order_number": 59, "description": "TrekStar Hiking Sandals, quantity 1, price $70", "intent": "product question"}
{"customer_info": "## Customer_Info\n\nFirst Name: Emily \nLast Name: Rodriguez \nAge: 29 \nEmail Address: [email protected] \nPhone Number: 555-111-2222 \nShipping Address: 987 Oak Ave, Cityville USA, 56789 \nMembership: None \n\n## Recent_Purchases\n\norder_number: 3 \ndate: 2023-03-18 \nitem:\n- description: TrailMaster X4 Tent, quantity 3, price $750 \n\u00a0 item_number: 1 \n\norder_number: 12 \ndate: 2023-02-20 \nitem:\n- description: Summit Breeze Jacket, quantity 2, price $240 \n\u00a0 item_number: 3 \n\norder_number: 21 \ndate: 2023-04-02 \nitem:\n- description: BaseCamp Folding Table, quantity 1, price $60 \n\u00a0 item_number: 5 \n\norder_number: 31 \ndate: 2023-04-20 \nitem:\n- description: Alpine Explorer Tent, quantity 1, price $350 \n\u00a0 item_number: 8 \n\norder_number: 39 \ndate: 2023-03-30 \nitem:\n- description: TrailWalker Hiking Shoes, quantity 2, price $220 \n\u00a0 item_number: 11 \n\norder_number: 48 \ndate: 2023-04-16 \nitem:\n- description: MountainDream Sleeping Bag, quantity 2, price $260 \n\u00a0 item_number: 14 \n\norder_number: 61 \ndate: 2023-06-11 \nitem:\n- description: TrekStar Hiking Sandals, quantity 1, price $70 \n\u00a0 item_number: 18 \n\n", "history": [{"role": "customer", "content": "Hi there, I'm interested in purchasing the TrekStar Hiking Sandals. Can you tell me more about their features?"}], "item_number": 18, "order_number": 61, "description": "TrekStar Hiking Sandals, quantity 1, price $70", "intent": "product question"}
{"customer_info": "## Customer_Info\n\nFirst Name: Jane \nLast Name: Doe \nAge: 28 \nEmail Address: [email protected] \nPhone Number: 555-987-6543 \nShipping Address: 456 Oak St, Another City USA, 67890 \nMembership: Gold \n\n## Recent_Purchases\n\norder_number: 6 \ndate: 2023-01-10 \nitem:\n- description: Adventurer Pro Backpack, quantity 1, price $90 \n\u00a0 item_number: 2 \n\norder_number: 15 \ndate: 2023-01-20 \nitem:\n- description: TrekReady Hiking Boots, quantity 1, price $140 \n\u00a0 item_number: 4 \n\norder_number: 23 \ndate: 2023-01-30 \nitem:\n- description: EcoFire Camping Stove, quantity 1, price $80 \n\u00a0 item_number: 6 \n\norder_number: 32 \ndate: 2023-02-15 \nitem:\n- description: SummitClimber Backpack, quantity 1, price $120 \n\u00a0 item_number: 9 \n\norder_number: 44 \ndate: 2023-03-06 \nitem:\n- description: PowerBurner Camping Stove, quantity 1, price $100 \n\u00a0 item_number: 13 \n\norder_number: 53 \ndate: 2023-03-21 \nitem:\n- description: TrailLite Daypack, quantity 1, price $60 \n\u00a0 item_number: 16 \n\norder_number: 62 \ndate: 2023-04-06 \nitem:\n- description: Adventure Dining Table, quantity 1, price $90 \n\u00a0 item_number: 19 \n\n", "history": [{"role": "customer", "content": "I recently purchased the Adventure Dining Table, but I'm not happy with its quality. The table arrived with scratches on the surface, and one of the legs seems wobbly. I expected better craftsmanship from your brand. This is disappointing."}], "item_number": 19, "order_number": 62, "description": "Adventure Dining Table, quantity 1, price $90", "intent": "product return"}
{"customer_info": "## Customer_Info\n\nFirst Name: Melissa \nLast Name: Davis \nAge: 31 \nEmail Address: [email protected] \nPhone Number: 555-333-4444 \nShipping Address: 789 Ash St, Another City USA, 67890 \nMembership: Gold \n\n## Recent_Purchases\n\norder_number: 4 \ndate: 2023-04-22 \nitem:\n- description: TrailMaster X4 Tent, quantity 2, price $500 \n\u00a0 item_number: 1 \n\norder_number: 17 \ndate: 2023-03-30 \nitem:\n- description: TrekReady Hiking Boots, quantity 1, price $140 \n\u00a0 item_number: 4 \n\norder_number: 25 \ndate: 2023-04-10 \nitem:\n- description: EcoFire Camping Stove, quantity 1, price $80 \n\u00a0 item_number: 6 \n\norder_number: 34 \ndate: 2023-04-25 \nitem:\n- description: SummitClimber Backpack, quantity 1, price $120 \n\u00a0 item_number: 9 \n\norder_number: 46 \ndate: 2023-05-16 \nitem:\n- description: PowerBurner Camping Stove, quantity 1, price $100 \n\u00a0 item_number: 13 \n\norder_number: 55 \ndate: 2023-05-31 \nitem:\n- description: TrailLite Daypack, quantity 1, price $60 \n\u00a0 item_number: 16 \n\norder_number: 64 \ndate: 2023-06-16 \nitem:\n- description: Adventure Dining Table, quantity 1, price $90 \n\u00a0 item_number: 19 \n\n", "history": [{"role": "customer", "content": "I recently purchased the Adventure Dining Table, and I have a question about its setup. The instructions provided are not very clear, and I'm having trouble assembling the table correctly. Can you provide me with some guidance or more detailed instructions?"}], "item_number": 19, "order_number": 64, "description": "Adventure Dining Table, quantity 2, price $180", "intent": "product question"}
{"customer_info": "## Customer_Info\n\nFirst Name: Sarah \nLast Name: Lee \nAge: 38 \nEmail Address: [email protected] \nPhone Number: 555-867-5309 \nShipping Address: 321 Maple St, Bigtown USA, 90123 \nMembership: Platinum \n\n## Recent_Purchases\n\norder_number: 2 \ndate: 2023-02-10 \nitem:\n- description: TrailMaster X4 Tent, quantity 1, price $250 \n\u00a0 item_number: 1 \n\norder_number: 26 \ndate: 2023-02-05 \nitem:\n- description: CozyNights Sleeping Bag, quantity 1, price $100 \n\u00a0 item_number: 7 \n\norder_number: 35 \ndate: 2023-02-20 \nitem:\n- description: TrailBlaze Hiking Pants, quantity 1, price $75 \n\u00a0 item_number: 10 \n\norder_number: 42 \ndate: 2023-04-06 \nitem:\n- description: TrekMaster Camping Chair, quantity 2, price $100 \n\u00a0 item_number: 12 \n\norder_number: 51 \ndate: 2023-04-21 \nitem:\n- description: SkyView 2-Person Tent, quantity 1, price $200 \n\u00a0 item_number: 15 \n\norder_number: 56 \ndate: 2023-03-26 \nitem:\n- description: RainGuard Hiking Jacket, quantity 1, price $110 \n\u00a0 item_number: 17 \n\norder_number: 65 \ndate: 2023-04-11 \nitem:\n- description: CompactCook Camping Stove, quantity 1, price $60 \n\u00a0 item_number: 20 \n\n", "history": [{"role": "customer", "content": "I recently purchased the CompactCook Camping Stove, and I'm quite disappointed with its performance. The flame doesn't seem to stay consistent, and it takes forever to boil water. This is not what I expected from a camping stove. Can you help me with this issue?"}], "item_number": 20, "order_number": 65, "description": "CompactCook Camping Stove, quantity 1, price $60", "intent": "product return"}
{"customer_info": "## Customer_Info\n\nFirst Name: Amanda \nLast Name: Perez \nAge: 26 \nEmail Address: [email protected] \nPhone Number: 555-123-4567 \nShipping Address: 654 Pine St, Suburbia USA, 23456 \nMembership: Gold \n\n## Recent_Purchases\n\norder_number: 5 \ndate: 2023-05-01 \nitem:\n- description: TrailMaster X4 Tent, quantity 1, price $250 \n\u00a0 item_number: 1 \n\norder_number: 18 \ndate: 2023-05-04 \nitem:\n- description: TrekReady Hiking Boots, quantity 3, price $420 \n\u00a0 item_number: 4 \n\norder_number: 28 \ndate: 2023-04-15 \nitem:\n- description: CozyNights Sleeping Bag, quantity 1, price $100 \n\u00a0 item_number: 7 \n\norder_number: 37 \ndate: 2023-04-30 \nitem:\n- description: TrailBlaze Hiking Pants, quantity 1, price $75 \n\u00a0 item_number: 10 \n\norder_number: 58 \ndate: 2023-06-06 \nitem:\n- description: RainGuard Hiking Jacket, quantity 1, price $110 \n\u00a0 item_number: 17 \n\norder_number: 67 \ndate: 2023-06-21 \nitem:\n- description: CompactCook Camping Stove, quantity 1, price $60 \n\u00a0 item_number: 20 \n\n", "history": [{"role": "customer", "content": "I recently received the CompactCook Camping Stove, and I'm not sure about its maintenance requirements. Are there any specific cleaning or storage instructions I should follow to ensure the stove's longevity?"}], "item_number": 20, "order_number": 67, "description": "CompactCook Camping Stove, quantity 1, price $60", "intent": "product question"}
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/named-entity-recognition/data.jsonl | {"text": "The software engineer is working on a new update for the application.", "entity_type": "job title", "results": "software engineer"}
{"text": "The project manager and the data analyst are collaborating to interpret the project data.", "entity_type": "job title", "results": "project manager, data analyst"}
{"text": "The marketing manager is coordinating with the graphic designer to create a new advertisement campaign.", "entity_type": "job title", "results": "marketing manager, graphic designer"}
{"text": "The CEO and CFO are discussing the financial forecast for the next quarter.", "entity_type": "job title", "results": "CEO, CFO"}
{"text": "The web developer and UX designer are working together to improve the website's user interface.", "entity_type": "job title", "results": "web developer, UX designer"}
{"text": "John finally decided to change his phone number after receiving too many spam calls.", "results": "None", "entity_type": "phone number"}
{"text": "If you have any questions about our products, please call our customer service at (123) 456-7890.", "results": "(123) 456-7890", "entity_type": "phone number"}
{"text": "My new phone number is (098) 765-4321, please update your contact list.", "results": "(098) 765-4321", "entity_type": "phone number"}
{"text": "The phone number (321) 654-0987 is no longer in service.", "results": "(321) 654-0987", "entity_type": "phone number"}
{"text": "Please dial the following phone number: (555) 123-4567 to reach our technical support.", "results": "(555) 123-4567", "entity_type": "phone number"}
{"text": "John Doe has been appointed as the new CEO of the company.", "entity_type":"people's full name", "results":"John Doe"}
{"text": "The novel 'The Great Gatsby' was written by F. Scott Fitzgerald.", "entity_type":"people's full name", "results":"F. Scott Fitzgerald"}
{"text": "Mary Jane Watson and Peter Parker are characters in the Spider-Man series.", "entity_type":"people's full name", "results":"Mary Jane Watson, Peter Parker"}
{"text": "The famous physicists, Albert Einstein and Isaac Newton, made significant contributions to the field of physics.", "entity_type":"people's full name", "results":"Isaac Newton, Albert Einstein"}
{"text": "The Eiffel Tower is an iconic landmark in Paris.", "entity_type":"people's full name", "results":"None"} | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/named-entity-recognition/README.md | # Named Entity Recognition
A flow that performs the named entity recognition task.
[Named Entity Recognition (NER)](https://en.wikipedia.org/wiki/Named-entity_recognition) is a Natural Language Processing (NLP) task. It involves identifying and classifying named entities (such as people, organizations, locations, date expressions, percentages, etc.) in a given text. This is a crucial aspect of NLP as it helps to understand the context and extract key information from the text.
This sample flow performs the named entity recognition task using ChatGPT/GPT-4 and prompts.
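To make the idea concrete, the same two steps (LLM extraction followed by cleansing) can be sketched in plain Python outside promptflow. The `openai` client usage and model name below are illustrative assumptions, not part of the flow:

```python
from openai import OpenAI  # assumed client; the flow itself uses the built-in llm tool

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def extract_entities(text: str, entity_type: str) -> list[str]:
    # Ask the model for a comma-separated entity list, mirroring NER_LLM.jinja2
    system = (
        "Your task is to find entities of a certain type from the given text content. "
        "If there are multiple entities, return them comma separated. "
        "If there is no such entity, return 'None'."
    )
    user = f"Entity type: {entity_type}\nText content: {text}\nEntities:"
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
        max_tokens=64,
    ).choices[0].message.content
    # Mirror cleansing.py: split on commas, strip spaces/tabs/dots/quotes, drop empties
    parts = [p.strip(" \t.\"") for p in reply.split(",")]
    return [p for p in parts if p]


print(extract_entities("John Doe has been appointed as the new CEO.", "people's full name"))
```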
Tools used in this flow:
- `python` tool
- built-in `llm` tool
Connections used in this flow:
- `azure_open_ai` connection
## Prerequisites
Install promptflow sdk and other dependencies:
```bash
pip install -r requirements.txt
```
## Setup connection
Prepare your Azure OpenAI resource by following this [instruction](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal) and get your `api_key` if you don't have one.
Note that this example uses the [chat api](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/chatgpt?pivots=programming-language-chat-completions); please use a `gpt-35-turbo` or `gpt-4` model deployment.
Create the connection if you haven't done so already. Ensure you have put your Azure OpenAI endpoint key in the [azure_openai.yml](../../../connections/azure_openai.yml) file.
```bash
# Override keys with --set to avoid yaml file changes
pf connection create -f ../../../connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base>
```
Ensure you have created the `open_ai_connection` connection.
```bash
pf connection show -n open_ai_connection
```
## Run flow
### Run with single line input
```bash
# test with default input value in flow.dag.yaml
pf flow test --flow .
# test with specific input
pf flow test --flow . --inputs text='The phone number (321) 654-0987 is no longer in service' entity_type='phone number'
```
### Run with multiple lines of data
- create run
```bash
pf run create --flow . --data ./data.jsonl --column-mapping entity_type='${data.entity_type}' text='${data.text}' --stream
```
You can also skip providing `column-mapping` if the provided data has the same column names as the flow inputs.
See [here](https://aka.ms/pf/column-mapping) for the default behavior when `column-mapping` is not provided in the CLI.
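For example, since `data.jsonl` in this folder already uses the column names `text` and `entity_type`, the run can also be created without an explicit mapping:

```bash
# Relies on the default column mapping because the data columns match the flow inputs
pf run create --flow . --data ./data.jsonl --stream
```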
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/named-entity-recognition/NER-test.ipynb | # Setup execution path and pf client
import os
import promptflow
root = os.path.join(os.getcwd(), "../")
flow_path = os.path.join(root, "named-entity-recognition")
data_path = os.path.join(flow_path, "data.jsonl")
eval_match_rate_flow_path = os.path.join(root, "../evaluation/eval-entity-match-rate")
pf = promptflow.PFClient()
# Run flow against test data
run = pf.run(
flow=flow_path,
data=data_path,
column_mapping={
"text": "${data.text}",
"entity_type": "${data.entity_type}"
},
display_name="ner_bulk_run",
tags={"unittest": "true"},
    stream=True)

# Show output of flow run
pf.get_details(run)

# Evaluate the match rate of the entity recognition result of the flow run
eval = pf.run(
flow=eval_match_rate_flow_path,
run=run,
data=data_path,
column_mapping={
"entities": "${run.outputs.entities}",
"ground_truth": "${data.results}"
},
display_name="eval_match_rate",
tags={"unittest": "true"},
stream=True)
pf.get_details(eval)

# Get metrics of the evaluation flow run
pf.get_metrics(eval)

# Visualize the flow run and evaluation run with HTML
pf.visualize([run, eval]) | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/named-entity-recognition/cleansing.py | from typing import List
from promptflow import tool
@tool
def cleansing(entities_str: str) -> List[str]:
# Split, remove leading and trailing spaces/tabs/dots
parts = entities_str.split(",")
cleaned_parts = [part.strip(" \t.\"") for part in parts]
entities = [part for part in cleaned_parts if len(part) > 0]
return entities
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/named-entity-recognition/cleansing_test.py | import unittest
from cleansing import cleansing
class CleansingTest(unittest.TestCase):
def test_normal(self):
self.assertEqual(cleansing("a, b, c"), ["a", "b", "c"])
self.assertEqual(cleansing("a, b, (425)137-98-25, "), ["a", "b", "(425)137-98-25"])
self.assertEqual(cleansing("a, b, F. Scott Fitzgerald., d"), ["a", "b", "F. Scott Fitzgerald", "d"])
self.assertEqual(cleansing("a, b, c, None., "), ["a", "b", "c", "None"])
self.assertEqual(cleansing(",,"), [])
self.assertEqual(cleansing(""), [])
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/named-entity-recognition/requirements.txt | promptflow
promptflow-tools | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/named-entity-recognition/NER_LLM.jinja2 | system:
Your task is to find entities of certain type from the given text content.
If there are multiple entities, please return them all comma separated, e.g. "entity1, entity2, entity3".
You should only return the entity list, nothing else.
If there's no such entity, please return "None".
user:
Entity type: {{entity_type}}
Text content: {{text}}
Entities: | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/named-entity-recognition/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
entity_type:
type: string
default: job title
text:
type: string
default: Maxime is a data scientist at Auto Dataset, and his wife is a finance
manager in the same company.
outputs:
entities:
type: string
reference: ${cleansing.output}
nodes:
- name: NER_LLM
type: llm
source:
type: code
path: NER_LLM.jinja2
inputs:
# This is to easily switch between openai and azure openai.
# deployment_name is required by azure openai, model is required by openai.
deployment_name: gpt-35-turbo
model: gpt-3.5-turbo
max_tokens: 64
text: ${inputs.text}
entity_type: ${inputs.entity_type}
connection: open_ai_connection
api: chat
- name: cleansing
type: python
source:
type: code
path: cleansing.py
inputs:
entities_str: ${NER_LLM.output}
environment:
python_requirements_txt: requirements.txt | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/named-entity-recognition/eval_test.py | import unittest
import traceback
import os
import promptflow.azure as azure
from azure.identity import DefaultAzureCredential, InteractiveBrowserCredential
import promptflow
class BaseTest(unittest.TestCase):
def setUp(self) -> None:
root = os.path.join(os.path.dirname(os.path.abspath(__file__)), "../")
self.flow_path = os.path.join(root, "named-entity-recognition")
self.data_path = os.path.join(self.flow_path, "data.jsonl")
self.eval_match_rate_flow_path = os.path.join(root, "../evaluation/eval-entity-match-rate")
self.all_runs_generated = []
return super().setUp()
def tearDown(self):
for run in self.all_runs_generated:
try:
self.pf.runs.archive(run.name)
except Exception as e:
print(e)
traceback.print_exc()
        return super().tearDown()
def check_run_basics(self, run, name):
self.assertTrue(run is not None)
self.assertEqual(run.display_name, name)
self.assertEqual(run.tags["unittest"], "true")
class TestEvalAzure(BaseTest):
def setUp(self) -> None:
try:
credential = DefaultAzureCredential()
# Check if given credential can get token successfully.
credential.get_token("https://management.azure.com/.default")
except Exception:
# Fall back to InteractiveBrowserCredential in case DefaultAzureCredential not work
credential = InteractiveBrowserCredential()
self.pf = azure.PFClient.from_config(credential=credential)
return super().setUp()
def test_bulk_run_and_eval(self):
run = self.pf.run(
flow=self.flow_path,
data=self.data_path,
column_mapping={
"text": "${data.text}",
"entity_type": "${data.entity_type}"
},
connections={"NER_LLM": {"connection": "open_ai_connection"}},
display_name="ner_bulk_run",
tags={"unittest": "true"},
stream=True)
self.all_runs_generated.append(run)
self.check_run_basics(run, "ner_bulk_run")
eval = self.pf.run(
flow=self.eval_match_rate_flow_path,
run=run,
data=self.data_path,
column_mapping={
"entities": "${run.outputs.entities}",
"ground_truth": "${data.results}"
},
display_name="eval_match_rate",
tags={"unittest": "true"},
stream=True)
self.all_runs_generated.append(eval)
self.check_run_basics(eval, "eval_match_rate")
return eval
class TestEval(BaseTest):
def setUp(self) -> None:
self.pf = promptflow.PFClient()
return super().setUp()
def test_bulk_run_and_eval(self):
run = self.pf.run(
flow=self.flow_path,
data=self.data_path,
column_mapping={
"text": "${data.text}",
"entity_type": "${data.entity_type}"
},
display_name="ner_bulk_run",
tags={"unittest": "true"},
stream=True)
self.all_runs_generated.append(run)
self.check_run_basics(run, "ner_bulk_run")
eval = self.pf.run(
flow=self.eval_match_rate_flow_path,
run=run,
data=self.data_path,
column_mapping={
"entities": "${run.outputs.entities}",
"ground_truth": "${data.results}"
},
display_name="eval_match_rate",
tags={"unittest": "true"},
stream=True)
self.all_runs_generated.append(eval)
self.check_run_basics(eval, "eval_match_rate")
return eval
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/autonomous-agent/data.jsonl | {"name": "FilmTriviaGPT", "role": "an AI specialized in film trivia that provides accurate and up-to-date information about movies, directors, actors, and more.", "goals": ["Introduce 'Lord of the Rings' film trilogy including the film title, release year, director, current age of the director, production company and a brief summary of the film."]} | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/autonomous-agent/functions.py | from promptflow import tool
@tool
def functions_format() -> list:
functions = [
{
"name": "search",
"description": """The action will search this entity name on Wikipedia and returns the first {count}
sentences if it exists. If not, it will return some related entities to search next.""",
"parameters": {
"type": "object",
"properties": {
"entity": {
"type": "string",
"description": "Entity name which is used for Wikipedia search.",
},
"count": {
"type": "integer",
"default": 10,
"description": "Returned sentences count if entity name exists Wikipedia.",
},
},
"required": ["entity"],
},
},
{
"name": "python",
"description": """A Python shell. Use this to execute python commands. Input should be a valid python
command and you should print result with `print(...)` to see the output.""",
"parameters": {
"type": "object",
"properties": {
"command": {
"type": "string",
"description": "The command you want to execute in python",
}
},
"required": ["command"]
},
},
{
"name": "finish",
"description": """use this to signal that you have finished all your goals and remember show your
results""",
"parameters": {
"type": "object",
"properties": {
"response": {
"type": "string",
"description": "final response to let people know you have finished your goals and remember "
"show your results",
},
},
"required": ["response"],
},
},
]
return functions
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/autonomous-agent/README.md | # Autonomous Agent
This flow showcases how to construct an AutoGPT agent with promptflow that autonomously figures out how to apply the given
functions to solve its goal. In this sample, the goal is film trivia: providing accurate and up-to-date information about
movies, directors, actors, and more.
The agent uses an LLM to infer the user intent and the next function to execute, then runs that function to generate an
observation. That observation is fed back as augmented prompt context for the next LLM inference loop, until the inferred
function signals that all objectives are finished. The function set used in the flow contains a Wikipedia search function,
which searches the web to answer questions about current events, and a PythonREPL function, which runs Python code in a REPL.
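Conceptually, the agent's inner loop (implemented in `autogpt_class.py`) can be sketched as below. This is a simplified illustration of the loop, not the exact implementation:

```python
import json


def run_agent(chat_with_ai, tools, history):
    """Simplified sketch of the AutoGPT inner loop; see autogpt_class.py for the real code."""
    while True:
        response = chat_with_ai(history)              # LLM proposes the next function call
        call = response.get("function_call")
        if call is None:                              # no function proposed, just record the reply
            history.append({"role": "assistant", "content": response["content"]})
            continue
        name = call["name"]
        args = json.loads(call["arguments"])          # arguments arrive as a JSON string
        if name == "finish":                          # termination signal from the model
            return args["response"]
        observation = tools[name](**args)             # run the chosen function (search / python)
        # Feed the observation back as augmented context for the next round
        history.append({"role": "function", "name": name, "content": str(observation)})
```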
For the sample input about the movie introduction, the AutoGPT agent usually takes 4 rounds to finish the task: the first
round searches for the movie name, the second searches for the movie director, the third calculates the director's age, and
the last outputs the finishing signal. The task usually takes 30s~40s, but may take longer if you use "gpt-3.5" or hit the
Azure OpenAI rate limit. You can use "gpt-4" or go to https://aka.ms/oai/quotaincrease if you would like to further increase
the default rate limit.
Note: this is just a sample showing how to use promptflow to build a simple AutoGPT. See
https://github.com/Significant-Gravitas/Auto-GPT for more background on AutoGPT.
## What you will learn
In this flow, you will learn
- how to use the prompt tool.
- how to compose an AutoGPT flow using functions.
## Prerequisites
Install prompt-flow sdk and other dependencies:
```bash
pip install -r requirements.txt
```
## Getting Started
### 1. Create Azure OpenAI or OpenAI connection
```bash
# Override keys with --set to avoid yaml file changes
pf connection create --file ../../../connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base>
```
Note that you need to use "2023-07-01-preview" as the Azure OpenAI connection API version when using function calling.
See <a href='https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/function-calling' target='_blank'>How to use function calling with Azure OpenAI Service</a> for more details.
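For example, the API version can be overridden at connection creation time in the same way as the key (assuming the connection YAML exposes an `api_version` field):

```bash
# Field names are assumptions based on the connection YAML; adjust to your setup
pf connection create --file ../../../connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base> api_version=2023-07-01-preview
```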
### 2. Configure the flow with your connection
`flow.dag.yaml` is already configured with a connection named `open_ai_connection`. It is recommended to use the "gpt-4" model for stable performance. Using "gpt-3.5-turbo" may lead to the model getting stuck in the agent's inner loop due to its suboptimal and unstable performance.
### 3. Test flow with single line data
```bash
# test with default input value in flow.dag.yaml
pf flow test --flow .
```
### 4. Run with multi-line data
```bash
# create run using command line args
pf run create --flow . --data ./data.jsonl --column-mapping name='${data.name}' role='${data.role}' goals='${data.goals}' --stream
```
You can also skip providing `column-mapping` if the provided data has the same column names as the flow inputs.
See [here](https://aka.ms/pf/column-mapping) for the default behavior when `column-mapping` is not provided in the CLI.
## Disclaimer
LLM systems are susceptible to prompt injection, and you can gain a deeper understanding of this issue in the [technical blog](https://developer.nvidia.com/blog/securing-llm-systems-against-prompt-injection/). As an illustration, the PythonREPL function might execute harmful code if provided with a malicious prompt within the provided sample. Furthermore, we cannot guarantee that implementing AST validations solely within the PythonREPL function will reliably elevate the sample's security to an enterprise level. We kindly remind you to refrain from utilizing this in a production environment. | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/autonomous-agent/python_repl.py | import sys
from io import StringIO
import functools
import logging
import ast
from typing import Dict, Optional
logger = logging.getLogger(__name__)
@functools.lru_cache(maxsize=None)
def warn_once() -> None:
    # Warn once that the PythonREPL can execute arbitrary code
logger.warning("Python REPL can execute arbitrary code. Use with caution.")
COMMAND_EXECUTION_FUNCTIONS = ["system", "exec", "execfile", "eval"]
class PythonValidation:
def __init__(
self,
allow_imports: bool = False,
allow_command_exec: bool = False,
):
"""Initialize a PALValidation instance.
Args:
allow_imports (bool): Allow import statements.
allow_command_exec (bool): Allow using known command execution functions.
"""
self.allow_imports = allow_imports
self.allow_command_exec = allow_command_exec
def validate_code(self, code: str) -> None:
try:
code_tree = ast.parse(code)
except (SyntaxError, UnicodeDecodeError):
raise ValueError(f"Generated code is not valid python code: {code}")
except TypeError:
raise ValueError(
f"Generated code is expected to be a string, "
f"instead found {type(code)}"
)
except OverflowError:
raise ValueError(
f"Generated code too long / complex to be parsed by ast: {code}"
)
has_imports = False
top_level_nodes = list(ast.iter_child_nodes(code_tree))
for node in top_level_nodes:
if isinstance(node, ast.Import) or isinstance(node, ast.ImportFrom):
has_imports = True
if not self.allow_imports and has_imports:
raise ValueError(f"Generated code has disallowed imports: {code}")
if (
not self.allow_command_exec
or not self.allow_imports
):
for node in ast.walk(code_tree):
if (
(not self.allow_command_exec)
and isinstance(node, ast.Call)
and (
(
hasattr(node.func, "id")
and node.func.id in COMMAND_EXECUTION_FUNCTIONS
)
or (
isinstance(node.func, ast.Attribute)
and node.func.attr in COMMAND_EXECUTION_FUNCTIONS
)
)
):
raise ValueError(
f"Found illegal command execution function "
f"{node.func.id} in code {code}"
)
if (not self.allow_imports) and (
isinstance(node, ast.Import) or isinstance(node, ast.ImportFrom)
):
raise ValueError(f"Generated code has disallowed imports: {code}")
class PythonREPL:
"""Simulates a standalone Python REPL."""
def __init__(self) -> None:
self.globals: Optional[Dict] = globals()
self.locals: Optional[Dict] = None
self.code_validations = PythonValidation(allow_imports=True)
def run(self, command: str) -> str:
"""Run command with own globals/locals and returns anything printed."""
# Warn against dangers of PythonREPL
warn_once()
self.code_validations.validate_code(command)
old_stdout = sys.stdout
sys.stdout = my_stdout = StringIO()
try:
exec(command, self.globals, self.locals)
sys.stdout = old_stdout
output = my_stdout.getvalue()
except Exception as e:
sys.stdout = old_stdout
output = repr(e)
print(output)
return output
python_repl = PythonREPL()
def python(command: str):
"""
A Python shell. Use this to execute python commands. Input should be a valid python command.
If you want to see the output of a value, you should print it out with `print(...)`.
"""
command = command.strip().strip("```")
return python_repl.run(command)
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/autonomous-agent/triggering_prompt.jinja2 | Determine which next function to use, and respond using stringfield JSON object.
If you have completed all your tasks, make sure to use the 'finish' function to signal and remember show your results. | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/autonomous-agent/autogpt_class.py | from promptflow.tools.aoai import chat as aoai_chat
from promptflow.tools.openai import chat as openai_chat
from promptflow.connections import AzureOpenAIConnection, OpenAIConnection
from util import count_message_tokens, count_string_tokens, create_chat_message, generate_context, get_logger, \
parse_reply, construct_prompt
autogpt_logger = get_logger("autogpt_agent")
class AutoGPT:
def __init__(
self,
connection,
tools,
full_message_history,
functions,
system_prompt=None,
triggering_prompt=None,
user_prompt=None,
model_or_deployment_name=None
):
self.tools = tools
self.full_message_history = full_message_history
self.functions = functions
self.system_prompt = system_prompt
self.connection = connection
self.model_or_deployment_name = model_or_deployment_name
self.triggering_prompt = triggering_prompt
self.user_prompt = user_prompt
def chat_with_ai(self, token_limit):
"""Interact with the OpenAI API, sending the prompt, message history and functions."""
# Reserve 1000 tokens for the response
send_token_limit = token_limit - 1000
(
next_message_to_add_index,
current_tokens_used,
insertion_index,
current_context,
) = generate_context(self.system_prompt, self.full_message_history, self.user_prompt)
# Account for user input (appended later)
current_tokens_used += count_message_tokens([create_chat_message("user", self.triggering_prompt)])
current_tokens_used += 500 # Account for memory (appended later)
# Add Messages until the token limit is reached or there are no more messages to add.
while next_message_to_add_index >= 0:
message_to_add = self.full_message_history[next_message_to_add_index]
tokens_to_add = count_message_tokens([message_to_add])
if current_tokens_used + tokens_to_add > send_token_limit:
break
# Add the most recent message to the start of the current context, after the two system prompts.
current_context.insert(
insertion_index, self.full_message_history[next_message_to_add_index]
)
# Count the currently used tokens
current_tokens_used += tokens_to_add
# Move to the next most recent message in the full message history
next_message_to_add_index -= 1
# Append user input, the length of this is accounted for above
current_context.extend([create_chat_message("user", self.triggering_prompt)])
# Calculate remaining tokens
tokens_remaining = token_limit - current_tokens_used
current_context = construct_prompt(current_context)
if isinstance(self.connection, AzureOpenAIConnection):
try:
response = aoai_chat(
connection=self.connection,
prompt=current_context,
deployment_name=self.model_or_deployment_name,
max_tokens=tokens_remaining,
functions=self.functions)
return response
except Exception as e:
if "The API deployment for this resource does not exist" in str(e):
raise Exception(
"Please fill in the deployment name of your Azure OpenAI resource gpt-4 model.")
elif isinstance(self.connection, OpenAIConnection):
response = openai_chat(
connection=self.connection,
prompt=current_context,
model=self.model_or_deployment_name,
max_tokens=tokens_remaining,
functions=self.functions)
return response
else:
raise ValueError("Connection must be an instance of AzureOpenAIConnection or OpenAIConnection")
def run(self):
tools = {t.__name__: t for t in self.tools}
while True:
# Send message to AI, get response
response = self.chat_with_ai(token_limit=4000)
if "function_call" in response:
# Update full message history
function_name = response["function_call"]["name"]
parsed_output = parse_reply(response["function_call"]["arguments"])
if "Error" in parsed_output:
error_message = parsed_output["Error"]
autogpt_logger.info(f"Error: {error_message}")
command_result = f"Error: {error_message}"
else:
autogpt_logger.info(f"Function generation requested, function = {function_name}, args = "
f"{parsed_output}")
self.full_message_history.append(
create_chat_message("assistant", f"Function generation requested, function = {function_name}, "
f"args = {parsed_output}")
)
if function_name == "finish":
response = parsed_output["response"]
autogpt_logger.info(f"Responding to user: {response}")
return response
if function_name in tools:
tool = tools[function_name]
try:
autogpt_logger.info(f"Next function = {function_name}, arguments = {parsed_output}")
result = tool(**parsed_output)
command_result = f"Executed function {function_name} and returned: {result}"
except Exception as e:
command_result = (
f"Error: {str(e)}, {type(e).__name__}"
)
result_length = count_string_tokens(command_result)
if result_length + 600 > 4000:
command_result = f"Failure: function {function_name} returned too much output. Do not " \
f"execute this function again with the same arguments."
else:
command_result = f"Unknown function '{function_name}'. Please refer to available functions " \
f"defined in functions parameter."
# Append command result to the message history
self.full_message_history.append(create_chat_message("function", str(command_result), function_name))
autogpt_logger.info(f"function: {command_result}")
else:
autogpt_logger.info(f"No function generated, returned: {response['content']}")
self.full_message_history.append(
create_chat_message("assistant", f"No function generated, returned: {response['content']}")
)
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/autonomous-agent/requirements.txt | promptflow
promptflow-tools
tiktoken
bs4 | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/autonomous-agent/util.py | import time
from typing import List
import re
import tiktoken
import logging
import sys
import json
FORMATTER = logging.Formatter(
fmt="[%(asctime)s] %(name)-8s %(levelname)-8s %(message)s",
datefmt="%Y-%m-%d %H:%M:%S %z",
)
def get_logger(name: str, level=logging.INFO) -> logging.Logger:
logger = logging.Logger(name)
# log to sys.stdout for backward compatibility.
# TODO: May need to be removed in the future, after local/blob file stream are fully supported.
stdout_handler = logging.StreamHandler(sys.stdout)
stdout_handler.setFormatter(FORMATTER)
logger.addHandler(stdout_handler)
logger.setLevel(level)
return logger
def parse_reply(text: str):
try:
parsed = json.loads(text, strict=False)
except json.JSONDecodeError:
preprocessed_text = preprocess_json_input(text)
try:
parsed = json.loads(preprocessed_text, strict=False)
except Exception:
return {"Error": f"Could not parse invalid json: {text}"}
except TypeError:
return {"Error": f"the JSON object must be str, bytes or bytearray not {type(text)}"}
return parsed
def count_message_tokens(
messages: List, model: str = "gpt-3.5-turbo-0301"
) -> int:
"""
Returns the number of tokens used by a list of messages.
Args:
messages (list): A list of messages, each of which is a dictionary
containing the role and content of the message.
model (str): The name of the model to use for tokenization.
Defaults to "gpt-3.5-turbo-0301".
Returns:
int: The number of tokens used by the list of messages.
"""
try:
encoding = tiktoken.encoding_for_model(model)
except KeyError:
encoding = tiktoken.get_encoding("cl100k_base")
if model == "gpt-3.5-turbo":
# !Note: gpt-3.5-turbo may change over time.
# Returning num tokens assuming gpt-3.5-turbo-0301.")
return count_message_tokens(messages, model="gpt-3.5-turbo-0301")
elif model == "gpt-4":
# !Note: gpt-4 may change over time. Returning num tokens assuming gpt-4-0314.")
return count_message_tokens(messages, model="gpt-4-0314")
elif model == "gpt-3.5-turbo-0301":
tokens_per_message = (
4 # every message follows <|start|>{role/name}\n{content}<|end|>\n
)
tokens_per_name = -1 # if there's a name, the role is omitted
elif model == "gpt-4-0314":
tokens_per_message = 3
tokens_per_name = 1
else:
raise NotImplementedError(
f"num_tokens_from_messages() is not implemented for model {model}.\n"
" See https://github.com/openai/openai-python/blob/main/chatml.md for"
" information on how messages are converted to tokens."
)
num_tokens = 0
for message in messages:
num_tokens += tokens_per_message
for key, value in message.items():
num_tokens += len(encoding.encode(value))
if key == "name":
num_tokens += tokens_per_name
num_tokens += 3 # every reply is primed with <|start|>assistant<|message|>
return num_tokens
def count_string_tokens(string: str, model_name="gpt-3.5-turbo") -> int:
"""
Returns the number of tokens in a text string.
Args:
string (str): The text string.
model_name (str): The name of the encoding to use. (e.g., "gpt-3.5-turbo")
Returns:
int: The number of tokens in the text string.
"""
encoding = tiktoken.encoding_for_model(model_name)
return len(encoding.encode(string))
def create_chat_message(role, content, name=None):
"""
Create a chat message with the given role and content.
Args:
role (str): The role of the message sender, e.g., "system", "user", or "assistant".
content (str): The content of the message.
Returns:
dict: A dictionary containing the role and content of the message.
"""
if name is None:
return {"role": role, "content": content}
else:
return {"role": role, "name": name, "content": content}
def generate_context(prompt, full_message_history, user_prompt, model="gpt-3.5-turbo"):
current_context = [
create_chat_message("system", prompt),
create_chat_message(
"system", f"The current time and date is {time.strftime('%c')}"
),
create_chat_message("user", user_prompt),
]
# Add messages from the full message history until we reach the token limit
next_message_to_add_index = len(full_message_history) - 1
insertion_index = len(current_context)
# Count the currently used tokens
current_tokens_used = count_message_tokens(current_context, model)
return (
next_message_to_add_index,
current_tokens_used,
insertion_index,
current_context,
)
def preprocess_json_input(input_str: str) -> str:
# Replace single backslashes with double backslashes, while leaving already escaped ones intact
corrected_str = re.sub(r'(?<!\\)\\(?!["\\/bfnrt]|u[0-9a-fA-F]{4})', r"\\\\", input_str)
return corrected_str
def construct_prompt(current_context):
update_current_context = []
for item in current_context:
role = item.get("role", None)
content = item.get("content", None)
name = item.get("name", None)
if name is not None:
update_current_context.append(":\n".join([role, "name", name]) + "\n" + ":\n".join(["content", content]))
else:
update_current_context.append(":\n".join([role, content]))
update_current_context = "\n".join(update_current_context)
return update_current_context
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/autonomous-agent/wiki_search.py | from bs4 import BeautifulSoup
import re
import requests
def decode_str(string):
return string.encode().decode("unicode-escape").encode("latin1").decode("utf-8")
def get_page_sentence(page, count: int = 10):
# find all paragraphs
paragraphs = page.split("\n")
paragraphs = [p.strip() for p in paragraphs if p.strip()]
# find all sentence
sentences = []
for p in paragraphs:
sentences += p.split('. ')
sentences = [s.strip() + '.' for s in sentences if s.strip()]
# get first `count` number of sentences
return ' '.join(sentences[:count])
def remove_nested_parentheses(string):
pattern = r'\([^()]+\)'
while re.search(pattern, string):
string = re.sub(pattern, '', string)
return string
def search(entity: str, count: int = 10):
"""
    The input is an exact entity name. The action searches this entity name on Wikipedia and returns the first
    `count` sentences of the page if it exists. If not, it returns some related entities to search next.
"""
entity_ = entity.replace(" ", "+")
search_url = f"https://en.wikipedia.org/w/index.php?search={entity_}"
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) "
"Chrome/113.0.0.0 Safari/537.36 Edg/113.0.1774.35"
}
response_text = requests.get(search_url, headers=headers).text
soup = BeautifulSoup(response_text, features="html.parser")
result_divs = soup.find_all("div", {"class": "mw-search-result-heading"})
if result_divs: # mismatch
result_titles = [decode_str(div.get_text().strip()) for div in result_divs]
result_titles = [remove_nested_parentheses(result_title) for result_title in result_titles]
obs = f"Could not find {entity}. Similar: {result_titles[:5]}."
else:
page_content = [p_ul.get_text().strip() for p_ul in soup.find_all("p") + soup.find_all("ul")]
if any("may refer to:" in p for p in page_content):
obs = search("[" + entity + "]")
else:
page = ""
for content in page_content:
if len(content.split(" ")) > 2:
page += decode_str(content)
if not content.endswith("\n"):
page += "\n"
obs = get_page_sentence(page, count=count)
return obs
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/autonomous-agent/user_prompt.jinja2 | Goals:
{{goals}}
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/autonomous-agent/system_prompt.jinja2 | You are {{name}}, {{role}}
Play to your strengths as an LLM and pursue simple strategies with no legal complications to complete all goals.
Your decisions must always be made independently without seeking user assistance.
Performance Evaluation:
1. Continuously review and analyze your actions to ensure you are performing to the best of your abilities.
2. Constructively self-criticize your big-picture behavior constantly.
3. Reflect on past decisions and strategies to refine your approach.
4. Every command has a cost, so be smart and efficient. Aim to complete tasks in the least number of steps.
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/autonomous-agent/generate_goal.py | from promptflow import tool
@tool
def generate_goal(items: list = []) -> str:
"""
    Generate a numbered list from the given items.
Args:
items (list): A list of items to be numbered.
Returns:
str: The formatted numbered list.
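    Example:
        generate_goal(["Research", "Summarize"]) returns "1. Research\n2. Summarize"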
"""
return "\n".join(f"{i + 1}. {item}" for i, item in enumerate(items))
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/autonomous-agent/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
name:
type: string
default: "FilmTriviaGPT"
goals:
type: list
default: ["Introduce 'Lord of the Rings' film trilogy including the film title, release year, director, current age of the director, production company and a brief summary of the film."]
role:
type: string
default: "an AI specialized in film trivia that provides accurate and up-to-date information about movies, directors, actors, and more."
outputs:
output:
type: string
reference: ${autogpt_easy_start.output}
nodes:
- name: autogpt_easy_start
type: python
source:
type: code
path: autogpt_easy_start.py
inputs:
connection: open_ai_connection
functions: ${functions.output}
model_or_deployment_name: gpt-4
system_prompt: ${system_prompt.output}
triggering_prompt: ${triggering_prompt.output}
user_prompt: ${user_prompt.output}
- name: system_prompt
type: prompt
source:
type: code
path: system_prompt.jinja2
inputs:
name: ${inputs.name}
role: ${inputs.role}
- name: user_prompt
type: prompt
source:
type: code
path: user_prompt.jinja2
inputs:
goals: ${generate_goal.output}
- name: triggering_prompt
type: prompt
source:
type: code
path: triggering_prompt.jinja2
- name: functions
type: python
source:
type: code
path: functions.py
- name: generate_goal
type: python
source:
type: code
path: generate_goal.py
inputs:
items: ${inputs.goals}
environment:
python_requirements_txt: requirements.txt
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/autonomous-agent/autogpt_easy_start.py | from typing import Union
from promptflow import tool
from promptflow.connections import AzureOpenAIConnection, OpenAIConnection
@tool
def autogpt_easy_start(connection: Union[AzureOpenAIConnection, OpenAIConnection], system_prompt: str, user_prompt: str,
triggering_prompt: str, functions: list, model_or_deployment_name: str):
from wiki_search import search
from python_repl import python
from autogpt_class import AutoGPT
full_message_history = []
tools = [
search,
python
]
agent = AutoGPT(
full_message_history=full_message_history,
tools=tools,
system_prompt=system_prompt,
connection=connection,
model_or_deployment_name=model_or_deployment_name,
functions=functions,
user_prompt=user_prompt,
triggering_prompt=triggering_prompt
)
result = agent.run()
return result
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic/data.jsonl | {"text": "Python Hello World!"}
{"text": "C Hello World!"}
{"text": "C# Hello World!"}
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic/README.md | # Basic standard flow
A basic standard flow that uses a custom Python tool to call Azure OpenAI, with connection info stored in environment variables.
Tools used in this flow:
- `prompt` tool
- custom `python` Tool
Connections used in this flow:
- None
## Prerequisites
Install promptflow sdk and other dependencies:
```bash
pip install -r requirements.txt
```
## Run flow
- Prepare your Azure OpenAI resource by following this [instruction](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal) and get your `api_key` if you don't have one.
- Setup environment variables
Ensure you have put your Azure OpenAI endpoint key in the [.env](.env) file. You can create one by referring to this [example file](.env.example).
```bash
cat .env
```
- Test flow/node
```bash
# test with default input value in flow.dag.yaml
pf flow test --flow .
# test with flow inputs
pf flow test --flow . --inputs text="Java Hello World!"
# test node with inputs
pf flow test --flow . --node llm --inputs prompt="Write a simple Hello World program that displays the greeting message when executed."
```
- Create run with multiple lines data
```bash
# using environment from .env file (loaded in user code: hello.py)
pf run create --flow . --data ./data.jsonl --column-mapping text='${data.text}' --stream
```
You can also skip providing `column-mapping` if the provided data has the same column names as the flow inputs.
Reference [here](https://aka.ms/pf/column-mapping) for the default behavior when `column-mapping` is not provided in the CLI.
- List and show run meta
```bash
# list created run
pf run list
# get a sample run name
name=$(pf run list -r 10 | jq '.[] | select(.name | contains("basic_variant_0")) | .name'| head -n 1 | tr -d '"')
# show specific run detail
pf run show --name $name
# show output
pf run show-details --name $name
# visualize run in browser
pf run visualize --name $name
```
## Run flow with connection
Storing connection info in `.env` as plaintext is not safe. We recommend using `pf connection` to guard secrets like `api_key` from leaking.
- Show or create `open_ai_connection`
```bash
# create connection from `azure_openai.yml` file
# Override keys with --set to avoid yaml file changes
pf connection create --file ../../../connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base>
# check if connection exists
pf connection show -n open_ai_connection
```
- Test using connection secret specified in environment variables
**Note**: we used `'` to wrap the value since it supports raw values without escaping in PowerShell & bash. For Windows command prompt, you may remove the `'` to avoid it becoming part of the value.
```bash
# test with default input value in flow.dag.yaml
pf flow test --flow . --environment-variables AZURE_OPENAI_API_KEY='${open_ai_connection.api_key}' AZURE_OPENAI_API_BASE='${open_ai_connection.api_base}'
```
- Create run using connection secret binding specified in environment variables, see [run.yml](run.yml)
```bash
# create run
pf run create --flow . --data ./data.jsonl --stream --environment-variables AZURE_OPENAI_API_KEY='${open_ai_connection.api_key}' AZURE_OPENAI_API_BASE='${open_ai_connection.api_base}' --column-mapping text='${data.text}'
# create run using yaml file
pf run create --file run.yml --stream
# show outputs
name=$(pf run list -r 10 | jq '.[] | select(.name | contains("basic_variant_0")) | .name'| head -n 1 | tr -d '"')
pf run show-details --name $name
```
## Run flow in cloud with connection
- Assume we already have a connection named `open_ai_connection` in workspace.
```bash
# set default workspace
az account set -s <your_subscription_id>
az configure --defaults group=<your_resource_group_name> workspace=<your_workspace_name>
```
- Create run
```bash
# run with environment variable reference connection in azureml workspace
pfazure run create --flow . --data ./data.jsonl --environment-variables AZURE_OPENAI_API_KEY='${open_ai_connection.api_key}' AZURE_OPENAI_API_BASE='${open_ai_connection.api_base}' --column-mapping text='${data.text}' --stream
# run using yaml file
pfazure run create --file run.yml --stream
```
- List and show run meta
```bash
# list created run
pfazure run list -r 3
# get a sample run name
name=$(pfazure run list -r 100 | jq '.[] | select(.name | contains("basic_variant_0")) | .name'| head -n 1 | tr -d '"')
# show specific run detail
pfazure run show --name $name
# show output
pfazure run show-details --name $name
# visualize run in browser
pfazure run visualize --name $name
``` | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic/run.yml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
flow: .
data: data.jsonl
environment_variables:
# environment variables from connection
AZURE_OPENAI_API_KEY: ${open_ai_connection.api_key}
AZURE_OPENAI_API_BASE: ${open_ai_connection.api_base}
AZURE_OPENAI_API_TYPE: azure
column_mapping:
text: ${data.text}
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic/.env.example | AZURE_OPENAI_API_KEY=<your_AOAI_key>
AZURE_OPENAI_API_BASE=<your_AOAI_endpoint>
AZURE_OPENAI_API_TYPE=azure
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic/hello.py | import os
from openai.version import VERSION as OPENAI_VERSION
from dotenv import load_dotenv
from promptflow import tool
# The inputs section will change based on the arguments of the tool function, after you save the code
# Adding type to arguments and return value will help the system show the types properly
# Please update the function name/signature per need
def to_bool(value) -> bool:
return str(value).lower() == "true"
def get_client():
if OPENAI_VERSION.startswith("0."):
raise Exception(
"Please upgrade your OpenAI package to version >= 1.0.0 or using the command: pip install --upgrade openai."
)
api_key = os.environ["AZURE_OPENAI_API_KEY"]
conn = dict(
api_key=os.environ["AZURE_OPENAI_API_KEY"],
)
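    # Keys that start with "sk-" are OpenAI keys; otherwise assume an Azure OpenAI key is provided.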
if api_key.startswith("sk-"):
from openai import OpenAI as Client
else:
from openai import AzureOpenAI as Client
conn.update(
azure_endpoint=os.environ["AZURE_OPENAI_API_BASE"],
api_version=os.environ.get("AZURE_OPENAI_API_VERSION", "2023-07-01-preview"),
)
return Client(**conn)
@tool
def my_python_tool(
prompt: str,
# for AOAI, deployment name is customized by user, not model name.
deployment_name: str,
suffix: str = None,
max_tokens: int = 120,
temperature: float = 1.0,
top_p: float = 1.0,
n: int = 1,
logprobs: int = None,
echo: bool = False,
stop: list = None,
presence_penalty: float = 0,
frequency_penalty: float = 0,
best_of: int = 1,
logit_bias: dict = {},
user: str = "",
**kwargs,
) -> str:
if "AZURE_OPENAI_API_KEY" not in os.environ:
# load environment variables from .env file
load_dotenv()
if "AZURE_OPENAI_API_KEY" not in os.environ:
raise Exception("Please specify environment variables: AZURE_OPENAI_API_KEY")
# TODO: remove below type conversion after client can pass json rather than string.
echo = to_bool(echo)
response = get_client().completions.create(
prompt=prompt,
model=deployment_name,
# empty string suffix should be treated as None.
suffix=suffix if suffix else None,
max_tokens=int(max_tokens),
temperature=float(temperature),
top_p=float(top_p),
n=int(n),
logprobs=int(logprobs) if logprobs else None,
echo=echo,
# fix bug "[] is not valid under any of the given schemas-'stop'"
stop=stop if stop else None,
presence_penalty=float(presence_penalty),
frequency_penalty=float(frequency_penalty),
best_of=int(best_of),
# Logit bias must be a dict if we passed it to openai api.
logit_bias=logit_bias if logit_bias else {},
user=user,
)
# get first element because prompt is single.
return response.choices[0].text
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic/requirements.txt | promptflow[azure]
promptflow-tools
python-dotenv | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
environment:
python_requirements_txt: requirements.txt
inputs:
text:
type: string
default: Hello World!
outputs:
output:
type: string
reference: ${llm.output}
nodes:
- name: hello_prompt
type: prompt
source:
type: code
path: hello.jinja2
inputs:
text: ${inputs.text}
- name: llm
type: python
source:
type: code
path: hello.py
inputs:
prompt: ${hello_prompt.output}
deployment_name: text-davinci-003
max_tokens: "120"
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic/hello.jinja2 | {# Please replace the template with your own prompt. #}
Write a simple {{text}} program that displays the greeting message when executed. | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/data.jsonl | {"source": "./divider.py"}
{"source": "./azure_open_ai.py"}
{"source": "./generate_docstring_tool.py"}
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/load_code_tool.py | from promptflow import tool
from file import File
@tool
def load_code(source: str):
file = File(source)
return file.content
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/README.md | # Generate Python docstring
This example automatically generates docstrings for Python code and returns the modified code.
Tools used in this flow:
- `load_code` tool: loads code from a file path.
  - Loads content from a local file.
  - Loads content from a remote URL; currently this fetches the page's HTML content, not just code.
- `divide_code` tool: divides code into code blocks.
  - Splitting is necessary to avoid files that are too long and exceed the token limit.
  - It also avoids repeated identical signatures (such as `__init__(self)`) within one chunk, which could cause confusion when merging the generated docstrings back into the corresponding functions.
- `generate_docstring` tool: generates a docstring for each code block and merges the docstrings into the original code.
## What you will learn
In this flow, you will learn
- How to compose an auto-generate-docstring flow.
- How to use different LLM APIs to call the LLM, including synchronous/asynchronous APIs and chat/completion APIs.
- How to fan out multiple asynchronous coroutines to call the LLM API (see the sketch below).
- How to construct a prompt.
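The asynchronous fan-out in `generate_docstring_tool.py` boils down to launching one coroutine per code block and gathering the results. The following is a minimal, self-contained sketch of that pattern; `fake_llm_call` is a hypothetical stand-in for the real `ChatLLM.async_query` and is not part of this flow:
```python
import asyncio
async def fake_llm_call(code_block: str) -> str:
    # Hypothetical stand-in for ChatLLM.async_query: pretend to ask the LLM for a docstring.
    await asyncio.sleep(0.1)
    return f'"""Docstring for: {code_block}"""'
async def generate_all(code_blocks: list) -> list:
    # One coroutine per block, executed concurrently and gathered in submission order.
    tasks = [fake_llm_call(block) for block in code_blocks]
    return await asyncio.gather(*tasks)
print(asyncio.run(generate_all(["def add(a, b):", "class Divider:"])))
```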
## Prerequisites
### Install promptflow sdk and other dependencies:
```bash
pip install -r requirements.txt
```
### Create connection for LLM to use
```bash
# Override keys with --set to avoid yaml file changes
pf connection create --file ../../../connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base>
```
Note:
The [azure_openai.yml](../../../connections/azure_openai.yml) file is located in connections folder.
We are using the connection named `open_ai_connection` in [flow.dag.yaml](flow.dag.yaml).
## Execute with Promptflow
### Execute with SDK
`python main.py --source <your_file_path>`
**Note**: the file path should be a Python file path; the default is `./azure_open_ai.py`.
A webpage will be generated, displaying the diff:
![result](result.png)
### Execute with CLI
```bash
# run flow with default file path in flow.dag.yaml
pf flow test --flow .
# run flow with file path
pf flow test --flow . --inputs source="./azure_open_ai.py"
```
```bash
# run flow with batch data
pf run create --flow . --data ./data.jsonl --name auto_generate_docstring --column-mapping source='${data.source}'
```
This outputs the code after adding the docstrings.
You can also skip providing `column-mapping` if the provided data has the same column names as the flow inputs.
Reference [here](https://aka.ms/pf/column-mapping) for the default behavior when `column-mapping` is not provided in the CLI.
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/combine_code.jinja2 | {{divided|join('')}} | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/generate_docstring_tool.py | import ast
import asyncio
import logging
import os
import sys
from typing import Union, List
from promptflow import tool
from azure_open_ai import ChatLLM
from divider import Divider
from prompt import docstring_prompt, PromptLimitException
from promptflow.connections import AzureOpenAIConnection, OpenAIConnection
def get_imports(content):
tree = ast.parse(content)
import_statements = []
for node in ast.walk(tree):
if isinstance(node, ast.Import):
for n in node.names:
import_statements.append(f"import {n.name}")
elif isinstance(node, ast.ImportFrom):
module_name = node.module
for n in node.names:
import_statements.append(f"from {module_name} import {n.name}")
return import_statements
async def async_generate_docstring(divided: List[str]):
llm = ChatLLM()
divided = list(reversed(divided))
all_divided = []
# If too many imports result in tokens exceeding the limit, please set an empty string.
modules = '' # '\n'.join(get_imports(divided[-1]))
modules_tokens = llm.count_tokens(modules)
if modules_tokens > 300:
logging.warning(f'Too many imports, the number of tokens is {modules_tokens}')
if modules_tokens > 500:
logging.warning(f'Too many imports, the number of tokens is {modules_tokens}, will set an empty string.')
modules = ''
# Divide the code into two parts if the global class/function is too long.
while len(divided):
item = divided.pop()
try:
llm.validate_tokens(llm.create_prompt(docstring_prompt(code=item, module=modules)))
except PromptLimitException as e:
logging.warning(e.message + ', will divide the code into two parts.')
divided_tmp = Divider.divide_half(item)
if len(divided_tmp) > 1:
divided.extend(list(reversed(divided_tmp)))
continue
except Exception as e:
logging.warning(e)
all_divided.append(item)
tasks = []
last_code = ''
for item in all_divided:
if Divider.has_class_or_func(item):
tasks.append(llm.async_query(docstring_prompt(last_code=last_code, code=item, module=modules)))
        else:  # If the code has no function or class, no need to generate a docstring.
tasks.append(asyncio.sleep(0))
last_code = item
res_doc = await asyncio.gather(*tasks)
new_code = []
for i in range(len(all_divided)):
if type(res_doc[i]) is str:
new_code.append(Divider.merge_doc2code(res_doc[i], all_divided[i]))
else:
new_code.append(all_divided[i])
return new_code
@tool
def generate_docstring(divided: List[str],
connection: Union[AzureOpenAIConnection, OpenAIConnection] = None,
model: str = None):
if isinstance(connection, AzureOpenAIConnection):
os.environ["OPENAI_API_KEY"] = connection.api_key
os.environ["OPENAI_API_BASE"] = connection.api_base
os.environ["OPENAI_API_VERSION"] = connection.api_version
os.environ["API_TYPE"] = connection.api_type
elif isinstance(connection, OpenAIConnection):
os.environ["OPENAI_API_KEY"] = connection.api_key
os.environ["ORGANIZATION"] = connection.organization
if model:
os.environ["MODEL"] = model
if sys.platform.startswith("win"):
asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
return asyncio.run(async_generate_docstring(divided))
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/requirements.txt | promptflow[azure]
promptflow-tools
python-dotenv
jinja2
tiktoken | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/file.py | import logging
import os
from urllib.parse import urlparse
import requests
class File:
def __init__(self, source: str):
self._source = source
self._is_url = source.startswith("http://") or source.startswith("https://")
if self._is_url:
parsed_url = urlparse(source)
path = parsed_url.path
else:
path = source
self._path = os.path.normpath(os.path.abspath(path))
self._dirname = os.path.dirname(self._path)
self._filename = os.path.basename(self._path).split(".")[0]
self._language = os.path.basename(self._path).split(".")[1]
def _read_content(self):
if self._is_url:
response = requests.get(self.source)
if response.status_code == 200:
content = response.text
return content
else:
print(f"Failed to retrieve content from URL: {self.source}")
return None
else:
try:
with open(self._path, "r") as file:
content = file.read()
return content
except FileNotFoundError:
print(f"File not found: {self.source}")
return None
@property
def content(self) -> str:
if not hasattr(self, "_text"):
self._content = self._read_content()
return self._content
@property
def language(self) -> str:
return self._language
@property
def filename(self) -> str:
return self._filename
@property
def dirname(self) -> str:
return self._dirname
@property
def source(self) -> str:
return self._source
def override_origin_file(self, content: str) -> None:
if not self._is_url:
with open(self._path, "w") as f:
f.write(content)
else:
logging.warning("Cannot override origin file from URL, create a new file instead.")
self.create_new_file(content)
def create_new_file(self, content: str) -> None:
if self._is_url:
path = os.path.join(
'./',
self.filename + f"_doc.{self.language}",
)
else:
path = os.path.join(
self.dirname,
self.filename + f"_doc.{self.language}",
)
with open(path, "w") as f:
f.write(content)
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/combine_code_tool.py | from promptflow import tool
from divider import Divider
from typing import List
@tool
def combine_code(divided: List[str]):
code = Divider.combine(divided)
return code
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/divider.py | import logging
import re
from typing import List
class Settings:
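    # Regex patterns keyed by language/extension: divide_file matches top-level class/def
    # definitions; divide_func matches class/function signatures (allowing a few leading
    # spaces for methods) up to and including the trailing colon.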
divide_file = {
"py": r"(?<!.)(class|def)",
}
divide_func = {
"py": r"((\n {,6})|^)(class|def)\s+(\S+(?=\())\s*(\([^)]*\))?\s*(->[^:]*:|:) *"
}
class Divider:
language = 'py'
@classmethod
def divide_file(cls, text) -> List[str]:
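        # Split the file at each top-level class/def; any leading imports/globals become the first chunk.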
matches = list(re.finditer(Settings.divide_file[Divider.language], text))
splitted_content = []
min_pos = matches[0].start() if len(matches) > 0 else len(text)
for i in range(len(matches)):
start = matches[i].start()
end = matches[i + 1].start() if i + 1 < len(matches) else len(text)
splitted_content.append(text[start:end])
if min_pos != 0:
splitted_content.insert(0, text[0:min_pos])
return splitted_content
@classmethod
def divide_half(cls, text) -> List[str]:
"""
Divide the content into two parts, but ensure that the function body is not split.
"""
_, pos = Divider.get_functions_and_pos(text)
if len(pos) > 1: # Divide the code into two parts and every part start with a function.
i = len(pos) // 2
return [text[0:pos[i][0]], text[pos[i][0]:]]
if len(pos) == 1: # Divide the code into two parts, [function define + body, other body].
body = text[pos[0][1]:]
body_lines = body.split('\n')
body_ten_lines = '\n'.join(body_lines[0:10])
return [text[0:pos[0][1]] + body_ten_lines, body[len(body_ten_lines):]]
return [text]
@classmethod
def get_functions_and_pos(cls, text):
matches = re.finditer(Settings.divide_func[Divider.language], text)
functions = []
pos = []
for match in matches:
matched_text = match.group().replace('\n', '')
func = re.sub(r' +', ' ', matched_text).replace(' :', ':')
func = re.sub(r'[\s,]+\)', ')', func)
func = re.sub(r'\([\s,]+', '(', func)
functions.append(func.strip())
pos.append((match.start(), match.end()))
return functions, pos
@classmethod
def combine(cls, divided: List[str]):
return ''.join(divided)
@classmethod
def merge_doc2code(cls, docstring: str, origin_code: str) -> str:
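        # For each function found in origin_code, look up the matching signature in the generated
        # docstring text and splice its docstring in, replacing an existing leading docstring if present.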
funcs1, pos1 = Divider.get_functions_and_pos(docstring)
funcs2, pos2 = Divider.get_functions_and_pos(origin_code)
pattern = r'""".*?"""'
code = origin_code if len(funcs2) == 0 else origin_code[0:pos2[0][0]]
pos1.append((len(docstring), len(docstring))) # avoid index out of range
pos2.append((len(origin_code), len(origin_code))) # avoid index out of range
for i2 in range(len(funcs2)): # add docstring for each function in origin_code
part_full_code = origin_code[pos2[i2][0]:pos2[i2 + 1][0]]
try:
i1 = funcs1.index(funcs2[i2])
except ValueError:
logging.warning(f"No docstring found for {funcs2[i2]}")
code += part_full_code
continue
new_doc = re.findall(pattern, docstring[pos1[i1][1]:pos1[i1 + 1][0]], re.DOTALL)
if new_doc:
func_line = origin_code[pos2[i2][0]:pos2[i2][1]].replace('\n', '')
empty_line_num = (len(func_line) - len(func_line.lstrip()) + 4)
func_body = origin_code[pos2[i2][1]:pos2[i2 + 1][0]]
code_doc = list(re.finditer(pattern, func_body, re.DOTALL))
format_new_doc = Divider.format_indentation(new_doc[0], empty_line_num)
is_replace_doc = len(code_doc) > 0 and (re.sub(r'\s+', '', func_body[0:code_doc[0].start()]) == '')
if is_replace_doc:
code += part_full_code.replace(code_doc[0].group(), format_new_doc.strip(), 1)
else:
code += origin_code[pos2[i2][0]:pos2[i2][1]] + '\n' + format_new_doc + '\n' + origin_code[
pos2[i2][1]:
pos2[i2 + 1][0]]
else:
code += part_full_code
return code
@classmethod
def format_indentation(cls, text, empty_line_num):
lines = text.splitlines()
last_line_space_num = len(lines[-1]) - len(lines[-1].lstrip())
need_add_space = max(empty_line_num - last_line_space_num, 0) * ' '
lines[0] = last_line_space_num * ' ' + lines[0].lstrip() # Align the first row to the last row
indented_lines = [(need_add_space + line).rstrip() for line in lines]
indented_string = '\n'.join(indented_lines)
return indented_string
@classmethod
def has_class_or_func(cls, text):
funcs, _ = Divider.get_functions_and_pos(text)
return len(funcs) > 0
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
source:
type: string
default: ./azure_open_ai.py
outputs:
code:
type: string
reference: ${combine_code.output}
nodes:
- name: load_code
type: python
source:
type: code
path: load_code_tool.py
inputs:
source: ${inputs.source}
- name: divide_code
type: python
source:
type: code
path: divide_code_tool.py
inputs:
file_content: ${load_code.output}
- name: generate_docstring
type: python
source:
type: code
path: generate_docstring_tool.py
inputs:
divided: ${divide_code.output}
connection: open_ai_connection
model: gpt-35-turbo
- name: combine_code
type: prompt
source:
type: code
path: combine_code.jinja2
inputs:
divided: ${generate_docstring.output}
environment:
python_requirements_txt: requirements.txt
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/prompt.py | import sys
from promptflow.tools.common import render_jinja_template
from divider import Divider
class PromptLimitException(Exception):
def __init__(self, message="", **kwargs):
super().__init__(message, **kwargs)
self._message = str(message)
self._kwargs = kwargs
self._inner_exception = kwargs.get("error")
self.exc_type, self.exc_value, self.exc_traceback = sys.exc_info()
self.exc_type = self.exc_type.__name__ if self.exc_type else type(self._inner_exception)
self.exc_msg = "{}, {}: {}".format(message, self.exc_type, self.exc_value)
@property
def message(self):
if self._message:
return self._message
return self.__class__.__name__
def docstring_prompt(last_code: str = '', code: str = '', module: str = '') -> str:
functions, _ = Divider.get_functions_and_pos(code)
    # Prepend the last few lines of the previous chunk (such as decorators) so the LLM generates a better docstring.
first_three_lines = '\n'.join(last_code.split('\n')[-3:])
with open('doc_format.jinja2') as file:
return render_jinja_template(prompt=file.read(), module=module.strip('\n'),
code=(first_three_lines + code).strip('\n'), functions=functions)
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/main.py | import argparse
from file import File
from diff import show_diff
from load_code_tool import load_code
from promptflow import PFClient
from pathlib import Path
if __name__ == "__main__":
current_folder = Path(__file__).absolute().parent
    parser = argparse.ArgumentParser(description="The path of the code file to generate docstrings for.")
parser.add_argument("--source", help="Path for the code file", default=str(current_folder / 'azure_open_ai.py'))
args = parser.parse_args()
pf = PFClient()
source = args.source
flow_result = pf.test(flow=str(current_folder), inputs={"source": source})
show_diff(load_code(source), flow_result['code'], File(source).filename)
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/divide_code_tool.py | from promptflow import tool
from divider import Divider
@tool
def divide_code(file_content: str):
# Divide the code into several parts according to the global import/class/function.
divided = Divider.divide_file(file_content)
return divided
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/diff.py | import difflib
import webbrowser
def show_diff(left_content, right_content, name="file"):
d = difflib.HtmlDiff()
html = d.make_file(
left_content.splitlines(),
right_content.splitlines(),
"origin " + name,
"new " + name,
context=True,
numlines=20)
html = html.encode()
html_name = name + "_diff.html"
with open(html_name, "w+b") as fp:
fp.write(html)
webbrowser.open(html_name)
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/doc_format.jinja2 | This is the docstring style of sphinx:
"""Description of the function.
:param [ParamName]: [ParamDescription](, defaults to [DefaultParamVal].)
:type [ParamName]: [ParamType](, optional)
...
:raises [ErrorType]: [ErrorDescription]
...
:return: [ReturnDescription]
:rtype: [ReturnType]
"""
Note:
For custom class types, please use the full path, for example:
"~azure.ai.ml.entities._inputs_outputs.Input" is full path for "Input" because of "from azure.ai.ml.entities._inputs_outputs import Input, Output"
"~import_node.Import" is full path for "Import" because of "import import_node.Import"
Complete function docstring example:
from azure.ai.ml.entities._inputs_outputs import Input, Output
from azure.ai.ml.constants import JobType
def output(input: Input, import_node: Import, startHnd=1, endHnd=None, uuids=None) -> Output:
"""Create an Output object.
:param input: The input object.
:type input: ~azure.ai.ml.entities._inputs_outputs.Input
:param import_node: The Import object.
:type import_node: ~import_node.Import
:param startHnd: Start index, defaults to 1
:type startHnd: int, optional
:param endHnd: End index, defaults to None
:type endHnd: int, optional
:return: The Output object.
:rtype: ~azure.ai.ml.entities._inputs_outputs.Output
"""
pass
Here's some code for you:
{{module}}
{{code}}
Please follow the sphinx style and refer above complete function docstring example, then output the docstring for the following class/functions.
Please replace "{docstring}" with the actual docstring.
{% for func in functions %}
{{func}}
{docstring}
pass
{% endfor %} | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/gen-docstring/azure_open_ai.py | import asyncio
import logging
import time
import uuid
from typing import List
from openai.version import VERSION as OPENAI_VERSION
import os
from abc import ABC, abstractmethod
import tiktoken
from dotenv import load_dotenv
from prompt import PromptLimitException
class AOAI(ABC):
def __init__(self, **kwargs):
if OPENAI_VERSION.startswith("0."):
raise Exception(
"Please upgrade your OpenAI package to version >= 1.0.0 or "
"using the command: pip install --upgrade openai."
)
init_params = {}
api_type = os.environ.get("API_TYPE")
if os.getenv("OPENAI_API_VERSION") is not None:
init_params["api_version"] = os.environ.get("OPENAI_API_VERSION")
if os.getenv("OPENAI_ORG_ID") is not None:
init_params["organization"] = os.environ.get("OPENAI_ORG_ID")
if os.getenv("OPENAI_API_KEY") is None:
raise ValueError("OPENAI_API_KEY is not set in environment variables")
if os.getenv("OPENAI_API_BASE") is not None:
if api_type == "azure":
init_params["azure_endpoint"] = os.environ.get("OPENAI_API_BASE")
else:
init_params["base_url"] = os.environ.get("OPENAI_API_BASE")
init_params["api_key"] = os.environ.get("OPENAI_API_KEY")
# A few sanity checks
if api_type == "azure":
if init_params.get("azure_endpoint") is None:
raise ValueError(
"OPENAI_API_BASE is not set in environment variables, this is required when api_type==azure"
)
if init_params.get("api_version") is None:
raise ValueError(
"OPENAI_API_VERSION is not set in environment variables, this is required when api_type==azure"
)
if init_params["api_key"].startswith("sk-"):
raise ValueError(
"OPENAI_API_KEY should not start with sk- when api_type==azure, "
"are you using openai key by mistake?"
)
from openai import AzureOpenAI as Client
from openai import AsyncAzureOpenAI as AsyncClient
else:
from openai import OpenAI as Client
from openai import AsyncClient as AsyncClient
self.client = Client(**init_params)
self.async_client = AsyncClient(**init_params)
self.default_engine = None
self.engine = kwargs.pop('model', None) or os.environ.get("MODEL")
self.total_tokens = 4000
self.max_tokens = kwargs.pop('max_tokens', None) or os.environ.get("MAX_TOKENS") or 1200
if self.engine == "gpt-4-32k":
self.total_tokens = 31000
if self.engine == "gpt-4":
self.total_tokens = 7000
if self.engine == "gpt-3.5-turbo-16k":
self.total_tokens = 15000
if self.max_tokens > self.total_tokens:
raise ValueError(f"max_tokens must be less than total_tokens, "
f"total_tokens is {self.total_tokens}, max_tokens is {self.max_tokens}")
self.tokens_limit = self.total_tokens - self.max_tokens
def count_tokens(self, text: str) -> int:
try:
encoding = tiktoken.encoding_for_model(self.engine)
except KeyError:
encoding = tiktoken.encoding_for_model(self.default_engine)
return len(encoding.encode(text))
def query(self, text, **kwargs):
stream = kwargs.pop("stream", False)
for i in range(3):
try:
if not stream:
return self.query_with_no_stream(text, **kwargs)
else:
return "".join(self.query_with_stream(text, **kwargs))
except Exception as e:
logging.error(f"Query failed, message={e}, "
f"will retry request llm after {(i + 1) * (i + 1)} seconds.")
time.sleep((i + 1) * (i + 1))
raise Exception("Query failed, and retry 3 times, but still failed.")
async def async_query(self, text, **kwargs):
stream = kwargs.pop("stream", False)
for i in range(3):
try:
if not stream:
res = await self.async_query_with_no_stream(text, **kwargs)
return res
else:
res = await self.async_query_with_stream(text, **kwargs)
return "".join(res)
except Exception as e:
logging.error(f"llm response error, message={e}, "
f"will retry request llm after {(i + 1) * (i + 1)} seconds.")
await asyncio.sleep((i + 1) * (i + 1))
raise Exception("llm response error, and retry 3 times, but still failed.")
@abstractmethod
def query_with_no_stream(self, text, **kwargs):
pass
@abstractmethod
def query_with_stream(self, text, **kwargs):
pass
@abstractmethod
async def async_query_with_no_stream(self, text, **kwargs):
pass
@abstractmethod
async def async_query_with_stream(self, text, **kwargs):
pass
class ChatLLM(AOAI):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.default_engine = "gpt-3.5-turbo"
self.engine = self.engine or self.default_engine
self.system_prompt = "You are a Python engineer."
self.conversation = dict()
def query_with_no_stream(self, text, **kwargs):
conversation_id = kwargs.pop('conversation', None)
messages = self.create_prompt(text, conversation_id)
self.validate_tokens(messages)
temperature = kwargs.pop("temperature", 0.1)
response = self.client.chat.completions.create(
model=self.engine,
messages=messages,
temperature=temperature,
max_tokens=self.max_tokens,
stream=False,
**kwargs,
)
response_role = response.choices[0].message.role
full_response = response.choices[0].message.content
self.add_to_conversation(text, "user", conversation_id=conversation_id)
self.add_to_conversation(full_response, response_role, conversation_id=conversation_id)
return full_response
def query_with_stream(self, text, **kwargs):
conversation_id = kwargs.pop('conversation', None)
messages = self.create_prompt(text, conversation_id)
self.validate_tokens(messages)
temperature = kwargs.pop("temperature", 0.1)
response = self.client.chat.completions.create(
model=self.engine,
messages=messages,
temperature=temperature,
max_tokens=self.max_tokens,
stream=True,
**kwargs,
)
response_role = None
full_response = ""
for chunk in response:
delta = chunk.choices[0].delta
response_role = delta.role
if delta.content:
content = delta.content
full_response += content
yield content
self.add_to_conversation(text, "user", conversation_id=conversation_id)
self.add_to_conversation(full_response, response_role, conversation_id=conversation_id)
async def async_query_with_no_stream(self, text, **kwargs):
conversation_id = kwargs.pop('conversation', None)
messages = self.create_prompt(text, conversation_id)
self.validate_tokens(messages)
temperature = kwargs.pop("temperature", 0.1)
response = await self.async_client.chat.completions.create(
model=self.engine,
messages=messages,
temperature=temperature,
max_tokens=self.max_tokens,
stream=False,
**kwargs,
)
response_role = response.choices[0].message.role
full_response = response.choices[0].message.content
self.add_to_conversation(text, "user", conversation_id=conversation_id)
self.add_to_conversation(full_response, response_role, conversation_id=conversation_id)
return full_response
async def async_query_with_stream(self, text, **kwargs):
conversation_id = kwargs.pop('conversation', None)
messages = self.create_prompt(text, conversation_id)
self.validate_tokens(messages)
temperature = kwargs.pop("temperature", 0.1)
response = await self.async_client.chat.completions.create(
model=self.engine,
messages=messages,
temperature=temperature,
max_tokens=self.max_tokens,
stream=True,
**kwargs,
)
response_role = None
full_response = ""
        async for chunk in response:
delta = chunk.choices[0].delta
response_role = delta.role
if delta.content:
content = delta.content
full_response += content
yield content
self.add_to_conversation(text, "user", conversation_id=conversation_id)
self.add_to_conversation(full_response, response_role, conversation_id=conversation_id)
def get_unique_conversation_id(self):
return str(uuid.uuid4()).replace('-', '')
def add_to_conversation(self, message: str, role: str, conversation_id: str) -> None:
"""
Add a message to the conversation
"""
if type(conversation_id) is str:
self.conversation[conversation_id].append({"role": role, "content": message})
def del_conversation(self, conversation_id: str) -> None:
if conversation_id in self.conversation:
del self.conversation[conversation_id]
def init_conversation(self, conversation_id: str, system_prompt) -> None:
"""
Init a new conversation
"""
if type(conversation_id) is str:
self.conversation[conversation_id] = [{"role": "system", "content": system_prompt}]
def get_tokens_count(self, messages: List[dict]) -> int:
"""
Get token count
"""
num_tokens = 0
for message in messages:
# every message follows <im_start>{role/name}\n{content}<im_end>\n
num_tokens += 5
for key, value in message.items():
if value:
num_tokens += self.count_tokens(value)
if key == "name": # if there's a name, the role is omitted
num_tokens += 5 # role is always required and always 1 token
num_tokens += 5 # every reply is primed with <im_start>assistant
return num_tokens
def validate_tokens(self, messages: List[dict]) -> None:
total_tokens = self.get_tokens_count(messages)
if total_tokens > self.tokens_limit:
message = f"token count {total_tokens} exceeds limit {self.tokens_limit}"
raise PromptLimitException(message)
def create_prompt(self, text: str, conversation_id: str = None):
unique_conversation_id = self.get_unique_conversation_id()
conversation_id = conversation_id or unique_conversation_id
if conversation_id not in self.conversation:
self.init_conversation(conversation_id=conversation_id, system_prompt=self.system_prompt)
_conversation = self.conversation[conversation_id] + [{"role": "user", "content": text}]
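        # Trim the oldest non-system messages until the prompt fits within the token limit.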
while self.get_tokens_count(_conversation) > self.tokens_limit and len(_conversation) > 2:
_conversation.pop(1)
if unique_conversation_id == conversation_id:
self.del_conversation(conversation_id=unique_conversation_id)
return _conversation
if __name__ == "__main__":
load_dotenv()
llm = ChatLLM()
print(llm.query(text='how are you?'))
res = llm.query_with_stream(text='how are you?')
for item in res:
print(item)
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-switch/data.jsonl | {"query": "When will my order be shipped?"}
{"query": "Can you help me find information about this T-shirt?"}
{"query": "Can you recommend me a useful prompt tool?"} | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-switch/product_info.py | from promptflow import tool
@tool
def product_info(query: str) -> str:
print(f"Your query is {query}.\nLooking for product information...")
return "This product is produced by Microsoft."
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-switch/README.md | # Conditional flow for switch scenario
This example is a conditional flow for a switch scenario.
By following this example, you will learn how to create a conditional flow using the `activate config`.
## Flow description
In this flow, the scenario is the search function of a shopping mall app. The flow uses `activate config` to implement switch logic: it determines the user intent from the input query, processes it dynamically, and generates user-oriented output.
- The `classify_with_llm` node analyzes user intent based on input query and provides one of the following results: "product_recommendation," "order_search," or "product_info".
- The `class_check` node generates the correctly formatted user intent.
- The `product_recommendation`, `order_search`, and `product_info` nodes are configured with activate config and are only executed when the output from `class_check` meets the specified conditions.
- The `generate_response` node generates user-facing output.
For example, as shown below, when the input query is "When will my order be shipped?", the LLM node classifies the user intent as "order_search". As a result, the `product_info` and `product_recommendation` nodes are bypassed, only the `order_search` node is executed, and the outputs are then generated.
![conditional_flow_for_switch](conditional_flow_for_switch.png)
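For intuition, the bypass behavior is equivalent to an ordinary switch over the detected intent. The sketch below is only an illustration of that control flow (the branch messages are copied from this flow's Python tools; promptflow itself expresses the same logic declaratively via `activate config`):
```python
def run_switch(intent: str, query: str) -> str:
    # Each branch mirrors one of the flow's Python tools; only the matching branch runs.
    branches = {
        "order_search": lambda q: "Your order is being mailed, please wait patiently.",
        "product_info": lambda q: "This product is produced by Microsoft.",
        "product_recommendation": lambda q: "I recommend promptflow to you, which can solve your problem very well.",
    }
    handler = branches.get(intent)
    return handler(query) if handler else "Sorry, no results matching your search were found."
print(run_switch("order_search", "When will my order be shipped?"))
```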
## Prerequisites
Install promptflow sdk and other dependencies:
```bash
pip install -r requirements.txt
```
## Setup connection
Prepare your Azure OpenAI resource by following this [instruction](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal) and get your `api_key` if you don't have one.
Note that in this example we are using the [chat api](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/chatgpt?pivots=programming-language-chat-completions), so please use a `gpt-35-turbo` or `gpt-4` model deployment.
Create a connection if you haven't done so already. Ensure you have put your Azure OpenAI endpoint key in the [azure_openai.yml](../../../connections/azure_openai.yml) file.
```bash
# Override keys with --set to avoid yaml file changes
pf connection create -f ../../../connections/azure_openai.yml --name open_ai_connection --set api_key=<your_api_key> api_base=<your_api_base>
```
Note that in [flow.dag.yaml](flow.dag.yaml) we are using a connection named `open_ai_connection`.
```bash
# show registered connection
pf connection show --name open_ai_connection
```
## Run flow
- Test flow
```bash
# test with default input value in flow.dag.yaml
pf flow test --flow .
# test with flow inputs
pf flow test --flow . --inputs query="When will my order be shipped?"
```
- Create run with multiple lines of data
```bash
# create a random run name
run_name="conditional_flow_for_switch_"$(openssl rand -hex 12)
# create run
pf run create --flow . --data ./data.jsonl --column-mapping query='${data.query}' --stream --name $run_name
```
- List and show run metadata
```bash
# list created run
pf run list
# show specific run detail
pf run show --name $run_name
# show output
pf run show-details --name $run_name
# visualize run in browser
pf run visualize --name $run_name
```
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-switch/generate_response.py | from promptflow import tool
@tool
def generate_response(order_search="", product_info="", product_recommendation="") -> str:
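    # Only the activated branch produces a result; return the first non-empty response, or a default message.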
default_response = "Sorry, no results matching your search were found."
responses = [order_search, product_info, product_recommendation]
return next((response for response in responses if response), default_response)
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-switch/classify_with_llm.jinja2 | system:
There is a search bar in the mall APP and users can enter any query in the search bar.
The user may want to search for orders, view product information, or seek recommended products.
Therefore, please classify user intentions into the following three types according to the query: product_recommendation, order_search, product_info
Please note that only the above three situations can be returned, and try not to include other return values.
user:
The user's query is {{query}} | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-switch/requirements.txt | promptflow
promptflow-tools | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-switch/product_recommendation.py | from promptflow import tool
@tool
def product_recommendation(query: str) -> str:
print(f"Your query is {query}.\nRecommending products...")
return "I recommend promptflow to you, which can solve your problem very well."
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-switch/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
query:
type: string
default: When will my order be shipped?
outputs:
response:
type: string
reference: ${generate_response.output}
nodes:
- name: classify_with_llm
type: llm
source:
type: code
path: classify_with_llm.jinja2
inputs:
deployment_name: gpt-35-turbo
max_tokens: 128
query: ${inputs.query}
connection: open_ai_connection
api: chat
- name: class_check
type: python
source:
type: code
path: class_check.py
inputs:
llm_result: ${classify_with_llm.output}
- name: order_search
type: python
source:
type: code
path: order_search.py
inputs:
query: ${inputs.query}
activate:
when: ${class_check.output}
is: order_search
- name: product_info
type: python
source:
type: code
path: product_info.py
inputs:
query: ${inputs.query}
activate:
when: ${class_check.output}
is: product_info
- name: product_recommendation
type: python
source:
type: code
path: product_recommendation.py
inputs:
query: ${inputs.query}
activate:
when: ${class_check.output}
is: product_recommendation
- name: generate_response
type: python
source:
type: code
path: generate_response.py
inputs:
order_search: ${order_search.output}
product_info: ${product_info.output}
product_recommendation: ${product_recommendation.output}
environment:
python_requirements_txt: requirements.txt
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-switch/order_search.py | from promptflow import tool
@tool
def order_search(query: str) -> str:
print(f"Your query is {query}.\nSearching for order...")
return "Your order is being mailed, please wait patiently."
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-switch/class_check.py | from promptflow import tool
@tool
def class_check(llm_result: str) -> str:
intentions_list = ["order_search", "product_info", "product_recommendation"]
matches = [intention for intention in intentions_list if intention in llm_result.lower()]
return matches[0] if matches else "unknown"
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/flow-with-symlinks/create_symlinks.py | import os
from pathlib import Path
saved_path = os.getcwd()
os.chdir(Path(__file__).parent)
source_folder = Path("../web-classification")
for file_name in os.listdir(source_folder):
if not Path(file_name).exists():
os.symlink(
source_folder / file_name,
file_name
)
os.chdir(saved_path)
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/flow-with-symlinks/README.md | # Flow with symlinks
Users sometimes need to reference common files or folders; this sample demonstrates how to solve that problem using symlinks.
However, this approach has the following limitations, so it is recommended to use **additional include** instead.
Learn more: [flow-with-additional-includes](../flow-with-additional-includes/README.md)
1. For Windows users, creating symlinks requires the Administrator role by default.
2. For Windows users, directly copying a folder that contains symlinks will deep copy the linked contents to the destination.
3. You need to update the git config to support symlinks.
**Notes**:
- For Windows users, please grant the user permission to [create symbolic links without administrator role](https://learn.microsoft.com/en-us/windows/security/threat-protection/security-policy-settings/create-symbolic-links).
1. Open your `Local Security Policy`
2. Find `Local Policies` -> `User Rights Assignment` -> `Create symbolic links`
  3. Add your user name to this policy, then reboot the machine.
**Attention**:
- For git operations, need to set: `git config core.symlinks true`
## Tools used in this flow
- LLM Tool
- Python Tool
## What you will learn
In this flow, you will learn
- how to use symlinks in the flow
## Prerequisites
Install promptflow sdk and other dependencies:
```bash
pip install -r requirements.txt
```
## Getting Started
### 1. Create symbolic links in the flow
```bash
python ./create_symlinks.py
```
### 2. Test & run the flow with symlinks
In this sample, the flow references some files in the [web-classification](../web-classification/README.md) flow, and assumes you already have the required connection set up.
You can execute this flow or submit it to the cloud.
#### Test flow with single line data
```bash
# test flow with default input value in flow.dag.yaml
pf flow test --flow .
# test flow with input
pf flow test --flow . --inputs url=https://www.youtube.com/watch?v=o5ZQyXaAv1g answer=Channel evidence=Url
# test node in the flow
pf flow test --flow . --node convert_to_dict --inputs classify_with_llm.output='{"category": "App", "evidence": "URL"}'
```
#### Run with multi-line data
```bash
# create run using command line args
pf run create --flow . --data ./data.jsonl --column-mapping url='${data.url}' --stream
# create run using yaml file
pf run create --file run.yml --stream
```
You can also skip providing `column-mapping` if the provided data has the same column names as the flow inputs.
Reference [here](https://aka.ms/pf/column-mapping) for the default behavior when `column-mapping` is not provided in the CLI.
#### Submit run to cloud
``` bash
# create run
pfazure run create --flow . --data ./data.jsonl --column-mapping url='${data.url}' --stream --subscription <your_subscription_id> -g <your_resource_group_name> -w <your_workspace_name>
# set default workspace
az account set -s <your_subscription_id>
az configure --defaults group=<your_resource_group_name> workspace=<your_workspace_name>
pfazure run create --file run.yml --stream
```
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/flow-with-symlinks/run.yml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
flow: .
data: data.jsonl
variant: ${summarize_text_content.variant_1}
column_mapping:
url: ${data.url} | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/flow-with-symlinks/requirements.txt | promptflow[azure]
promptflow-tools
bs4 | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/flow-with-symlinks/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
url:
type: string
default: https://www.microsoft.com/en-us/d/xbox-wireless-controller-stellar-shift-special-edition/94fbjc7h0h6h
outputs:
category:
type: string
reference: ${convert_to_dict.output.category}
evidence:
type: string
reference: ${convert_to_dict.output.evidence}
nodes:
- name: fetch_text_content_from_url
type: python
source:
type: code
path: fetch_text_content_from_url.py
inputs:
url: ${inputs.url}
- name: summarize_text_content
use_variants: true
- name: prepare_examples
type: python
source:
type: code
path: prepare_examples.py
inputs: {}
- name: classify_with_llm
type: llm
source:
type: code
path: classify_with_llm.jinja2
inputs:
# This is to easily switch between openai and azure openai.
# deployment_name is required by azure openai, model is required by openai.
deployment_name: gpt-35-turbo
model: gpt-3.5-turbo
max_tokens: 128
temperature: 0.2
url: ${inputs.url}
text_content: ${summarize_text_content.output}
examples: ${prepare_examples.output}
connection: open_ai_connection
api: chat
- name: convert_to_dict
type: python
source:
type: code
path: convert_to_dict.py
inputs:
input_str: ${classify_with_llm.output}
node_variants:
summarize_text_content:
default_variant_id: variant_0
variants:
variant_0:
node:
type: llm
source:
type: code
path: summarize_text_content.jinja2
inputs:
# This is to easily switch between openai and azure openai.
# deployment_name is required by azure openai, model is required by openai.
deployment_name: gpt-35-turbo
model: gpt-3.5-turbo
max_tokens: 128
temperature: 0.2
text: ${fetch_text_content_from_url.output}
connection: open_ai_connection
api: chat
variant_1:
node:
type: llm
source:
type: code
path: summarize_text_content__variant_1.jinja2
inputs:
# This is to easily switch between openai and azure openai.
# deployment_name is required by azure openai, model is required by openai.
deployment_name: gpt-35-turbo
model: gpt-3.5-turbo
max_tokens: 256
temperature: 0.3
text: ${fetch_text_content_from_url.output}
connection: open_ai_connection
api: chat
environment:
python_requirements_txt: requirements.txt
| 0 |
promptflow_repo/promptflow/examples/flows/standard/flow-with-symlinks | promptflow_repo/promptflow/examples/flows/standard/flow-with-symlinks/.promptflow/flow.tools.json | {
"package": {},
"code": {
"summarize_text_content.jinja2": {
"type": "llm",
"inputs": {
"text": {
"type": [
"string"
]
}
},
"description": "Summarize webpage content into a short paragraph."
},
"summarize_text_content__variant_1.jinja2": {
"type": "llm",
"inputs": {
"text": {
"type": [
"string"
]
}
}
},
"prepare_examples.py": {
"type": "python",
"function": "prepare_examples"
}
}
} | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/maths-to-code/prompt_gen.jinja2 | system:
I want you to act as a Math expert specializing in Algebra, Geometry, and Calculus. Given the question, develop python code to model the user's question.
The python code will print the result at the end.
Please generate executable python code, your reply will be in JSON format, something like:
{
"code": "print(1+1)"
}
user:
This is a set of examples, each with a question and the corresponding code:
{% for ex in examples %}
QUESTION: {{ ex.question }}
CODE:
{{ ex.code }}
{% endfor %}
Now come to the real task; make sure to return a valid json. The json should contain a key named "code" whose value is the python code. For example:
{
"code": "print(1+1)"
}
QUESTION: {{ question }}
CODE:
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/maths-to-code/README.md | # Math to Code
Math to Code is a project that uses the ChatGPT model to generate Python code that models a math question and then executes the generated code to obtain the final numerical answer.
> [!NOTE]
>
> Building a system that generates executable code from user input with an LLM is [a complex problem with potential security risks](https://developer.nvidia.com/blog/securing-llm-systems-against-prompt-injection/). This example is a demonstration rather than something you can use directly in production. To build such a system safely, you should address key security considerations such as input validation, additional sanitization of the generated code, or, better, running the generated code in a sandboxed environment.
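
At its core, the flow asks the LLM for a JSON reply that contains Python code, extracts that code, executes it, and captures whatever it prints. The snippet below is a stripped-down sketch of that idea rather than the flow's actual implementation (the real logic lives in [code_refine.py](code_refine.py) and [code_execution.py](code_execution.py)); as the note above stresses, raw `exec` like this should not be run on untrusted input in production.

```python
import io
import json
import sys


def run_generated_code(llm_reply: str) -> str:
    """Parse the LLM's JSON reply, execute the code it contains, and return the printed output."""
    code = json.loads(llm_reply)["code"]
    captured = io.StringIO()
    original_stdout = sys.stdout
    sys.stdout = captured  # capture anything the generated code prints
    try:
        exec(code)  # demonstration only; sandbox this in any real system
    finally:
        sys.stdout = original_stdout
    return captured.getvalue().strip()


# Example with a hard-coded "model reply"
print(run_generated_code('{"code": "print((11 - 3) / 2)"}'))  # prints 4.0
```
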
Tools used in this flow:
- `python` tool
- built-in `llm` tool
Connections used in this flow:
- `open_ai` connection
## Prerequisites
Install promptflow sdk and other dependencies:
```cmd
pip install -r requirements.txt
```
## Setup connection
Prepare your Azure OpenAI resource by following this [instruction](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal) and get your `api_key` if you don't have one.
Note that this example uses the [chat api](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/chatgpt?pivots=programming-language-chat-completions), so please use a `gpt-35-turbo` or `gpt-4` model deployment.
Create the connection if you haven't done so already. Ensure you have put your Azure OpenAI API key in the [azure_openai.yml](azure_openai.yml) file.
```bash
# Override keys with --set to avoid yaml file changes
pf connection create -f ../../../connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base>
```
Ensure you have created the `open_ai_connection` connection.
```bash
pf connection show -n open_ai_connection
```
## Run flow in local
### Run locally with single line input
```bash
# test with default input value in flow.dag.yaml
pf flow test --flow .
# test with specific input
pf flow test --flow . --inputs math_question='If a rectangle has a length of 10 and width of 5, what is the area?'
```
### Run with multiple lines of data
- create run
```bash
# create a random run name
run_name="math_to_code_"$(openssl rand -hex 12)
pf run create --flow . --data ./math_data.jsonl --column-mapping math_question='${data.question}' --name $run_name --stream
```
### Get the accuracy using the evaluation flow
Use [eval-accuracy-maths-to-code](../../evaluation/eval-accuracy-maths-to-code/) to evaluate accuracy and error rate metrics against the math-to-code flow.
- accuracy: if the generated code executes correctly and produces a final numeric answer, that answer is compared with the ground truth in the test data. For a single instance the result is True if the number equals the ground truth and False otherwise; accuracy measures the percentage of correct instances across the test data.
- error_rate: in some cases the flow cannot produce a numeric answer at all, for example when the generated code fails to execute because of a parsing error or a dependent package that is not available in the environment. Error rate measures the percentage of such cases in the test data. A rough sketch of how these metrics could be computed is shown below.
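
The following is only an illustration of the idea behind these metrics; it is not the implementation used by the `eval-accuracy-maths-to-code` flow, and the helper name is made up for this sketch.

```python
from typing import List


def compute_metrics(predictions: List[str], groundtruths: List[str]) -> dict:
    """Illustrative accuracy / error-rate computation over paired flow outputs and ground truths."""
    total = len(predictions)
    correct = 0
    errors = 0
    for pred, truth in zip(predictions, groundtruths):
        try:
            # A prediction counts as an error when it is not a number at all,
            # e.g. the flow returned an error message instead of an answer.
            value = float(pred)
        except (TypeError, ValueError):
            errors += 1
            continue
        if abs(value - float(truth)) < 1e-6:
            correct += 1
    return {
        "accuracy": round(correct / total, 2) if total else 0.0,
        "error_rate": round(errors / total, 2) if total else 0.0,
    }


# Example: two correct answers, one execution error, one wrong answer
print(compute_metrics(["8", "3", "name 'x' is not defined", "5"], ["8", "3", "24", "4"]))
# {'accuracy': 0.5, 'error_rate': 0.25}
```
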
```bash
# create a random eval run name
eval_run_name="math_to_code_eval_run_"$(openssl rand -hex 12)
# invoke accuracy and error rate evaluation against math-to-code batch run
pf run create --flow ../../evaluation/eval-accuracy-maths-to-code/ --data ./math_data.jsonl --column-mapping groundtruth='${data.answer}' prediction='${run.outputs.answer}' --run $run_name --name $eval_run_name --stream
# view the run details
pf run show-details -n $eval_run_name
pf run show-metrics -n $eval_run_name
```
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/maths-to-code/code_refine.py | from promptflow import tool
import ast
import json
def infinite_loop_check(code_snippet):
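    """Heuristic used by this flow: flag the snippet when a while loop has no else clause."""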
tree = ast.parse(code_snippet)
for node in ast.walk(tree):
if isinstance(node, ast.While):
if not node.orelse:
return True
return False
def syntax_error_check(code_snippet):
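    """Return True if the snippet cannot be parsed as valid Python."""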
try:
ast.parse(code_snippet)
except SyntaxError:
return True
return False
def error_fix(code_snippet):
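    """Add an empty else branch to while loops that lack one and return the regenerated source."""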
tree = ast.parse(code_snippet)
for node in ast.walk(tree):
if isinstance(node, ast.While):
if not node.orelse:
node.orelse = [ast.Pass()]
return ast.unparse(tree)
@tool
def code_refine(original_code: str) -> str:
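    """Extract the generated code from the LLM's JSON reply and normalize it.

    Returns a sentinel string ("JSONDecodeError" or "Unknown Error:...") when the
    reply cannot be parsed or fixed; the execution step passes these through.
    """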
try:
original_code = json.loads(original_code)["code"]
fixed_code = None
if infinite_loop_check(original_code):
fixed_code = error_fix(original_code)
else:
fixed_code = original_code
if syntax_error_check(fixed_code):
fixed_code = error_fix(fixed_code)
return fixed_code
except json.JSONDecodeError:
return "JSONDecodeError"
except Exception as e:
return "Unknown Error:" + str(e)
if __name__ == "__main__":
code = "{\n \"code\": \"distance_A = 10 * 0.5\\ndistance_B = 15 * t\\n\\n\
equation: distance_A = distance_B\\n\\n\10 * 0.5 = 15 * t\\n\\nt = (10 * 0.5) / 15\\n\\nprint(t)\"\n}"
code_refine = code_refine(code)
print(code_refine)
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/maths-to-code/math_test.ipynb | # setup pf client and execution path
from promptflow import PFClient
import json
import os
pf = PFClient()
root = os.path.join(os.getcwd(), "../")
flow = os.path.join(root, "maths-to-code")
data = os.path.join(flow, "math_data.jsonl")
eval_flow = os.path.join(root, "../evaluation/eval-accuracy-maths-to-code")

# start batch run of maths-to-code
base_run = pf.run(
flow = flow,
data = data,
column_mapping={"math_question": "${data.question}"},
display_name="maths_to_code_batch_run",
stream=True
)

# Show output of flow run
pf.get_details(base_run)

# evaluate against the batch run and groundtruth data
eval_run = pf.run(
flow = eval_flow,
data = data,
run = base_run,
column_mapping={"groundtruth": "${data.answer}", "prediction": "${run.outputs.answer}"},
display_name="maths_to_code_eval_run",
stream=True
)
pf.get_details(eval_run)

# Get metrics of the evaluation flow run
pf.get_metrics(eval_run)

# Visualize the flow run and evaluation run with HTML
pf.visualize([base_run, eval_run])

from azure.identity import DefaultAzureCredential, InteractiveBrowserCredential
# init credential
try:
credential = DefaultAzureCredential()
# Check if given credential can get token successfully.
credential.get_token("https://management.azure.com/.default")
except Exception as ex:
    # Fall back to InteractiveBrowserCredential in case DefaultAzureCredential does not work
    credential = InteractiveBrowserCredential()

from promptflow.azure import PFClient
try:
pf = PFClient.from_config(credential=credential)
except Exception as ex:
    # NOTE: update the following workspace information if it was not correctly configured before
client_config = {
"subscription_id": "<SUBSCRIPTION_ID>",
"resource_group": "<RESOURCE_GROUP>",
"workspace_name": "<AML_WORKSPACE_NAME>",
}
if client_config["subscription_id"].startswith("<"):
print(
"please update your <SUBSCRIPTION_ID> <RESOURCE_GROUP> <AML_WORKSPACE_NAME> in notebook cell"
)
raise ex
else: # write and reload from config file
import json, os
config_path = "../.azureml/config.json"
os.makedirs(os.path.dirname(config_path), exist_ok=True)
with open(config_path, "w") as fo:
fo.write(json.dumps(client_config))
pf = PFClient.from_config(credential=credential, path=config_path)
print(pf)
# NOTE: you need to replace <my_azure_open_ai_connection> and <gpt-35-turbo> with your own connection and deployment name in your Azure Machine Learning workspace
connection_mapping = {"code_gen": {"connection": "<my_azure_open_ai_connection>", "deployment_name": "<gpt-35-turbo>"}}

# batch run of maths to code
base_run = pf.run(
flow = flow,
data = data,
column_mapping = {"math_question": "${data.question}"},
connections = connection_mapping,
stream = True,
)

# get output of flow run
pf.get_details(base_run)

# evaluation run against base run
eval_run = pf.run(
flow = eval_flow,
data = data,
run = base_run,
column_mapping={"groundtruth": "${data.answer}", "prediction": "${run.outputs.answer}"},
stream = True,
)
# get output of evaluation run
pf.get_details(eval_run)

metrics = pf.get_metrics(eval_run)
print(json.dumps(metrics, indent=4)) | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/maths-to-code/code_execution.py | from promptflow import tool
import sys
from io import StringIO
@tool
def func_exe(code_snippet: str):
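    """Execute the refined code snippet and return whatever it prints to stdout.

    Error sentinels produced by the refine step ("JSONDecodeError", "Unknown Error:...")
    are returned unchanged, and execution failures are returned as their error message.
    """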
if code_snippet == "JSONDecodeError" or code_snippet.startswith("Unknown Error:"):
return code_snippet
    # Redirect stdout so the snippet's printed output can be captured
old_stdout = sys.stdout
redirected_output = sys.stdout = StringIO()
# Execute the code snippet
try:
exec(code_snippet.lstrip())
except Exception as e:
sys.stdout = old_stdout
return str(e)
sys.stdout = old_stdout
return redirected_output.getvalue().strip()
if __name__ == "__main__":
print(func_exe("print(5+3)"))
print(func_exe("count = 0\nfor i in range(100):\n if i % 8 == 0:\n count += 1\nprint(count)"))
print(func_exe("sum = 0\ni = 0\nwhile 3**i < 100:\n sum += 3**i\n i += 1\nprint(sum)"))
print(func_exe("speed_A = 80\nspeed_B = 120\ndistance = 2000\ntime = distance / (speed_A + speed_B)\nprint(time)"))
print(func_exe("Unknown Error"))
print(func_exe("JSONDecodeError"))
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/maths-to-code/requirements.txt | langchain
sympy
promptflow[azure]
promptflow-tools | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/maths-to-code/math_data.jsonl | {"question": "What is the sum of 5 and 3?", "answer": "8"}
{"question": "Subtract 7 from 10.", "answer": "3"}
{"question": "Multiply 6 by 4.", "answer": "24"}
{"question": "Divide 20 by 5.", "answer": "4"}
{"question": "What is the square of 7?", "answer": "49"}
{"question": "What is the square root of 81?", "answer": "9"}
{"question": "If a rectangle has a length of 10 and width of 5, what is the area?", "answer": "50"}
{"question": "A circle has a radius of 7, what is the area? (Use 3.14 for pi)", "answer": "153.86"}
{"question": "Solve for x in the equation 2x + 3 = 9.", "answer": "3"}
{"question": "What is the value of x if 5x = 25?", "answer": "5"}
{"question": "A car travels 200 miles in 4 hours. What is the average speed of the car?", "answer": "50"}
{"question": "A car travels at a speed of 60 mph. How long will it take to travel 180 miles?", "answer": "3"}
{"question": "If a car travels at a speed of 40 mph for 2 hours, how far will it travel?","answer": "80"}
{"question":"A rectangle has length = 10 cm and width = 5 cm. What is its area?", "answer":"50"}
{"question":"A circle has radius = 7 cm. What is its circumference? (Use pi =3.14)", "answer":"43.96"}
{"question":"A triangle has base =10 cm and height =5 cm. What is its area?", "answer":"25"}
{"question":"What is the slope of the line that passes through (2,3) and (4,7)?", "answer":"2"}
{"question":"The distance between A and B is 2000km, A is moving towards B with speed 80km/hour, meanwhile B is moving towards A with speed 120km/hour, how many hours later A and B can meet?", "answer":"10"}
{"question":"The lengths of the two perpendicular sides of a right triangle are 6cm and 8cm. What is the length of the hypotenuse?", "answer": "10"}
{"question":"A is running with average speed 10km/hour, A already run half hour. B start to chase A along the same route with average speed 15km/hour, how many hours B will take to meet A?", "answer":"1"} | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/maths-to-code/math_example.py | from promptflow import tool
@tool
def prepare_example():
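    """Return few-shot examples pairing math questions with the JSON-wrapped code the model should produce."""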
return [
{
"question": "What is 37593 * 67?",
"code": "{\n \"code\": \"print(37593 * 67)\"\n}",
"answer": "2512641",
},
{
"question": "What is the value of x in the equation 2x + 3 = 11?",
"code": "{\n \"code\": \"print((11-3)/2)\"\n}",
"answer": "4",
},
{
"question": "How many of the integers between 0 and 99 inclusive are divisible by 8?",
"code": "{\n \"code\": \"count = 0\\nfor i in range(100):\\n \
if i % 8 == 0:\\n count += 1\\nprint(count)\"\n}",
"answer": "10",
},
{
"question": "Janet's ducks lay 16 eggs per day. \
She eats three for breakfast every morning and bakes muffins for her friends every day with four.\
She sells the remainder at the farmers' market daily for $2 per fresh duck egg. \
How much in dollars does she make every day at the farmers' market?",
"code": "{\n \"code\": \"print((16-3-4)*2)\"\n}",
"answer": "18",
},
{
"question": "What is the sum of the powers of 3 (3^i) that are smaller than 100?",
"code": "{\n \"code\": \"sum = 0\\ni = 0\n\
while 3**i < 100:\\n sum += 3**i\\n i += 1\\nprint(sum)\"\n}",
"answer": "40",
},
{
"question": "Carla is downloading a 200 GB file. She can download 2 GB/minute, \
but 40% of the way through the download, the download fails.\
Then Carla has to restart the download from the beginning. \
How long did it take her to download the file in minutes?",
"code": "{\n \"code\": \"print(200/2*1.4)\"\n}",
"answer": "140",
},
{
"question": "What is the sum of the 10 first positive integers?",
"code": "{\n \"code\": \"print(sum(range(1,11)))\"\n}",
"answer": "55",
}
]
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/maths-to-code/ask_llm.jinja2 | system:
I want you to act as a Math expert specializing in Algebra, Geometry, and Calculus. Given the question, develop python code to model the user's question.
The python code will print the result at the end.
Please generate executable python code, your reply will be in JSON format, something like:
{
"code": "print(1+1)"
}
user:
This is a set of examples, each with a question and the corresponding code:
{% for ex in examples %}
QUESTION: {{ ex.question }}
CODE:
{{ ex.code }}
{% endfor %}
Now come to the real task; make sure to return a valid json. The json should contain a key named "code" whose value is the python code. For example:
{
"code": "print(1+1)"
}
QUESTION: {{ question }}
CODE:
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/maths-to-code/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
environment:
python_requirements_txt: requirements.txt
inputs:
math_question:
type: string
default: If a rectangle has a length of 10 and width of 5, what is the area?
outputs:
code:
type: string
reference: ${code_refine.output}
answer:
type: string
reference: ${final_code_execution.output}
nodes:
- name: final_code_execution
type: python
source:
type: code
path: code_execution.py
inputs:
code_snippet: ${code_refine.output}
- name: math_example
type: python
source:
type: code
path: math_example.py
inputs: {}
- name: code_refine
type: python
source:
type: code
path: code_refine.py
inputs:
original_code: ${code_gen.output}
- name: code_gen
type: llm
source:
type: code
path: ask_llm.jinja2
inputs:
# This is to easily switch between openai and azure openai.
# deployment_name is required by azure openai, model is required by openai.
deployment_name: gpt-35-turbo
model: gpt-3.5-turbo
question: ${inputs.math_question}
examples: ${math_example.output}
connection: open_ai_connection
api: chat
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/flow-with-additional-includes/data.jsonl | {"url": "https://www.youtube.com/watch?v=o5ZQyXaAv1g", "answer": "Channel", "evidence": "Url"}
{"url": "https://arxiv.org/abs/2307.04767", "answer": "Academic", "evidence": "Text content"}
{"url": "https://play.google.com/store/apps/details?id=com.twitter.android", "answer": "App", "evidence": "Both"}
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/flow-with-additional-includes/README.md | # Flow with additional_includes
Users sometimes need to reference common files or folders from multiple flows; this sample demonstrates how to do that using additional_includes. The files or folders listed in additional includes are
copied to the snapshot folder by promptflow when you operate this flow.
## Tools used in this flow
- LLM Tool
- Python Tool
## What you will learn
In this flow, you will learn
- how to add additional includes to the flow
## Prerequisites
Install promptflow sdk and other dependencies:
```bash
pip install -r requirements.txt
```
## Getting Started
### 1. Add additional includes to flow
You can add the `additional_includes` field to the [`flow.dag.yaml`](flow.dag.yaml).
Its value is a list of file/folder paths relative to the flow folder.
``` yaml
additional_includes:
- ../web-classification/classify_with_llm.jinja2
- ../web-classification/convert_to_dict.py
- ../web-classification/fetch_text_content_from_url.py
- ../web-classification/prepare_examples.py
- ../web-classification/summarize_text_content.jinja2
- ../web-classification/summarize_text_content__variant_1.jinja2
```
### 2. Test & run the flow with additional includes
In this sample, the flow references some files from the [web-classification](../web-classification/README.md) flow.
You can test this flow locally or submit it to the cloud; in both cases the snapshot generated by promptflow contains the additional include files/folders.
#### Test flow with single line data
```bash
# test with default input value in flow.dag.yaml
pf flow test --flow .
# test with user specified inputs
pf flow test --flow . --inputs url='https://www.microsoft.com/en-us/d/xbox-wireless-controller-stellar-shift-special-edition/94fbjc7h0h6h'
```
#### Run with multi-line data
```bash
# create run using command line args
pf run create --flow . --data ./data.jsonl --column-mapping url='${data.url}' --stream
# create run using yaml file
pf run create --file run.yml --stream
```
You can also skip providing `column-mapping` if the provided data has the same column names as the flow inputs.
See [here](https://aka.ms/pf/column-mapping) for the default behavior when `column-mapping` is not provided in the CLI.
#### Submit run to cloud
Assume we already have a connection named `open_ai_connection` in the workspace.
```bash
# set default workspace
az account set -s <your_subscription_id>
az configure --defaults group=<your_resource_group_name> workspace=<your_workspace_name>
```
``` bash
# create run
pfazure run create --flow . --data ./data.jsonl --column-mapping url='${data.url}' --stream
pfazure run create --file run.yml
```
Note: Click the portal_url of the run to view the final snapshot.
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/flow-with-additional-includes/run.yml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
flow: .
data: data.jsonl
variant: ${summarize_text_content.variant_1}
column_mapping:
url: ${data.url} | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/flow-with-additional-includes/run_evaluation.yml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
flow: ../../evaluation/eval-classification-accuracy
data: data.jsonl
run: web_classification_variant_1_20230724_173442_973403 # replace with your run name
column_mapping:
groundtruth: ${data.answer}
prediction: ${run.outputs.category} | 0 |