repo_id | file_path | content | __index_level_0__
---|---|---|---|
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/flow-with-additional-includes/requirements.txt | promptflow[azure]
promptflow-tools
bs4 | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/flow-with-additional-includes/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
url:
type: string
default: https://www.microsoft.com/en-us/d/xbox-wireless-controller-stellar-shift-special-edition/94fbjc7h0h6h
outputs:
category:
type: string
reference: ${convert_to_dict.output.category}
evidence:
type: string
reference: ${convert_to_dict.output.evidence}
nodes:
- name: fetch_text_content_from_url
type: python
source:
type: code
path: fetch_text_content_from_url.py
inputs:
url: ${inputs.url}
- name: summarize_text_content
use_variants: true
- name: prepare_examples
type: python
source:
type: code
path: prepare_examples.py
inputs: {}
- name: classify_with_llm
type: llm
source:
type: code
path: classify_with_llm.jinja2
inputs:
# This is to easily switch between openai and azure openai.
# deployment_name is required by azure openai, model is required by openai.
deployment_name: gpt-35-turbo
model: gpt-3.5-turbo
max_tokens: '128'
temperature: '0.2'
url: ${inputs.url}
text_content: ${summarize_text_content.output}
examples: ${prepare_examples.output}
connection: open_ai_connection
api: chat
- name: convert_to_dict
type: python
source:
type: code
path: convert_to_dict.py
inputs:
input_str: ${classify_with_llm.output}
additional_includes:
- ../web-classification/classify_with_llm.jinja2
- ../web-classification/convert_to_dict.py
- ../web-classification/fetch_text_content_from_url.py
- ../web-classification/prepare_examples.py
- ../web-classification/summarize_text_content.jinja2
- ../web-classification/summarize_text_content__variant_1.jinja2
node_variants:
summarize_text_content:
default_variant_id: variant_0
variants:
variant_0:
node:
type: llm
source:
type: code
path: summarize_text_content.jinja2
inputs:
# This is to easily switch between openai and azure openai.
# deployment_name is required by azure openai, model is required by openai.
deployment_name: gpt-35-turbo
model: gpt-3.5-turbo
max_tokens: '128'
temperature: '0.2'
text: ${fetch_text_content_from_url.output}
connection: open_ai_connection
api: chat
variant_1:
node:
type: llm
source:
type: code
path: summarize_text_content__variant_1.jinja2
inputs:
# This is to easily switch between openai and azure openai.
# deployment_name is required by azure openai, model is required by openai.
deployment_name: gpt-35-turbo
model: gpt-3.5-turbo
max_tokens: '256'
temperature: '0.3'
text: ${fetch_text_content_from_url.output}
connection: open_ai_connection
api: chat
environment:
python_requirements_txt: requirements.txt
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-if-else/data.jsonl | {"question": "What is Prompt flow?"}
{"question": "What is ChatGPT?"} | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-if-else/llm_result.py | from promptflow import tool
@tool
def llm_result(question: str) -> str:
# You can use an LLM node to replace this tool.
return (
"Prompt flow is a suite of development tools designed to streamline "
"the end-to-end development cycle of LLM-based AI applications."
)
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-if-else/README.md | # Conditional flow for if-else scenario
This example demonstrates a conditional flow for an if-else scenario.
By following this example, you will learn how to create a conditional flow using the `activate` config.
## Flow description
This flow checks whether an input query passes the content safety check. If it is denied, a default response is returned; otherwise, we call the LLM to get a response and then generate the final result.
The following are two execution situations of this flow:
- if input query passes content safety check:
![content_safety_check_passed](content_safety_check_passed.png)
- else:
![content_safety_check_failed](content_safety_check_failed.png)
**Notice**: The `content_safety_check` and `llm_result` nodes in this flow are dummy nodes that do not actually use the content safety tool and LLM tool. You can replace them with the real ones. Learn more: [LLM Tool](https://microsoft.github.io/promptflow/reference/tools-reference/llm-tool.html)
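If you want a deterministic stand-in before wiring up a real content safety service, one possible replacement for the dummy `content_safety_check` node is a simple keyword filter. This is only an illustrative sketch; the blocklist below is a made-up assumption, not part of the example.
```python
from promptflow import tool

# Hypothetical blocklist, for illustration only; a real flow would call a
# content safety service rather than matching keywords.
BLOCKED_TERMS = {"violence", "hate"}


@tool
def content_safety_check(text: str) -> bool:
    # True means the query passed the check and the llm_result node activates.
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)
```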
## Prerequisites
Install promptflow sdk and other dependencies:
```bash
pip install -r requirements.txt
```
## Run flow
- Test flow
```bash
# test with default input value in flow.dag.yaml
pf flow test --flow .
# test with flow inputs
pf flow test --flow . --inputs question="What is Prompt flow?"
```
- Create run with multiple lines of data
```bash
# create a random run name
run_name="conditional_flow_for_if_else_"$(openssl rand -hex 12)
# create run
pf run create --flow . --data ./data.jsonl --column-mapping question='${data.question}' --stream --name $run_name
```
- List and show run metadata
```bash
# list created run
pf run list
# show specific run detail
pf run show --name $run_name
# show output
pf run show-details --name $run_name
# visualize run in browser
pf run visualize --name $run_name
```
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-if-else/default_result.py | from promptflow import tool
@tool
def default_result(question: str) -> str:
return f"I'm not familiar with your query: {question}."
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-if-else/generate_result.py | from promptflow import tool
@tool
def generate_result(llm_result="", default_result="") -> str:
if llm_result:
return llm_result
else:
return default_result
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-if-else/requirements.txt | promptflow
promptflow-tools | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-if-else/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
question:
type: string
default: What is Prompt flow?
outputs:
answer:
type: string
reference: ${generate_result.output}
nodes:
- name: content_safety_check
type: python
source:
type: code
path: content_safety_check.py
inputs:
text: ${inputs.question}
- name: llm_result
type: python
source:
type: code
path: llm_result.py
inputs:
question: ${inputs.question}
activate:
when: ${content_safety_check.output}
is: true
- name: default_result
type: python
source:
type: code
path: default_result.py
inputs:
question: ${inputs.question}
activate:
when: ${content_safety_check.output}
is: false
- name: generate_result
type: python
source:
type: code
path: generate_result.py
inputs:
llm_result: ${llm_result.output}
default_result: ${default_result.output}
environment:
python_requirements_txt: requirements.txt
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/conditional-flow-for-if-else/content_safety_check.py | from promptflow import tool
import random
@tool
def content_safety_check(text: str) -> bool:
# You can use a content safety node to replace this tool.
return random.choice([True, False])
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic-with-builtin-llm/data.jsonl | {"text": "Python Hello World!"}
{"text": "C Hello World!"}
{"text": "C# Hello World!"} | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic-with-builtin-llm/README.md | # Basic flow with builtin llm tool
A basic standard flow that calls Azure OpenAI with the built-in llm tool.
Tools used in this flow:
- `prompt` tool
- built-in `llm` tool
Connections used in this flow:
- `azure_open_ai` connection
## Prerequisites
Install promptflow sdk and other dependencies:
```bash
pip install -r requirements.txt
```
## Setup connection
Prepare your Azure OpenAI resource by following this [instruction](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal) and get your `api_key` if you don't have one.
Note that this example uses the [chat api](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/chatgpt?pivots=programming-language-chat-completions); please use a `gpt-35-turbo` or `gpt-4` model deployment.
Ensure you have created the `open_ai_connection` connection beforehand.
```bash
pf connection show -n open_ai_connection
```
Create the connection if you haven't already. Ensure you have put your Azure OpenAI endpoint and key in the [azure_openai.yml](../../../connections/azure_openai.yml) file.
```bash
# Override keys with --set to avoid yaml file changes
pf connection create -f ../../../connections/azure_openai.yml --name open_ai_connection --set api_key=<your_api_key> api_base=<your_api_base>
```
## Run flow
### Run with single line input
```bash
# test with default input value in flow.dag.yaml
pf flow test --flow .
# test with inputs
pf flow test --flow . --inputs text="Python Hello World!"
```
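The same single-line test can also be driven from Python. Below is a minimal sketch assuming the `promptflow` SDK's `PFClient` from the packages in `requirements.txt`; the input value is just an example.
```python
from promptflow import PFClient

pf = PFClient()

# Equivalent of `pf flow test --flow . --inputs text="..."`.
result = pf.test(flow=".", inputs={"text": "Python Hello World!"})
print(result)
```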
### Run with multiple lines of data
- create run
```bash
pf run create --flow . --data ./data.jsonl --column-mapping text='${data.text}' --stream
```
You can also skip providing `column-mapping` if the provided data has the same column names as the flow inputs.
See [here](https://aka.ms/pf/column-mapping) for the default behavior when `column-mapping` is not provided in the CLI.
- List and show run metadata
```bash
# list created run
pf run list
# get a sample run name
name=$(pf run list -r 10 | jq '.[] | select(.name | contains("basic_with_builtin_llm")) | .name'| head -n 1 | tr -d '"')
# show specific run detail
pf run show --name $name
# show output
pf run show-details --name $name
# visualize run in browser
pf run visualize --name $name
```
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic-with-builtin-llm/requirements.txt | promptflow
promptflow-tools
python-dotenv | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic-with-builtin-llm/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
text:
type: string
default: Python Hello World!
outputs:
output:
type: string
reference: ${llm.output}
nodes:
- name: hello_prompt
type: prompt
inputs:
text: ${inputs.text}
source:
type: code
path: hello.jinja2
- name: llm
type: llm
inputs:
prompt: ${hello_prompt.output}
# This is to easily switch between openai and azure openai.
# deployment_name is required by azure openai, model is required by openai.
deployment_name: gpt-35-turbo
model: gpt-3.5-turbo
max_tokens: '120'
source:
type: code
path: hello.jinja2
connection: open_ai_connection
api: chat
node_variants: {}
environment:
python_requirements_txt: requirements.txt | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/basic-with-builtin-llm/hello.jinja2 | system:
You are an assistant which can write code. The response should only contain code.
user:
Write a simple {{text}} program that displays the greeting message when executed. | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/web-classification/data.jsonl | {"url": "https://www.youtube.com/watch?v=kYqRtjDBci8", "answer": "Channel", "evidence": "Both"}
{"url": "https://arxiv.org/abs/2307.04767", "answer": "Academic", "evidence": "Both"}
{"url": "https://play.google.com/store/apps/details?id=com.twitter.android", "answer": "App", "evidence": "Both"}
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/web-classification/convert_to_dict.py | import json
from promptflow import tool
@tool
def convert_to_dict(input_str: str):
try:
return json.loads(input_str)
except Exception as e:
print("The input is not valid, error: {}".format(e))
return {"category": "None", "evidence": "None"}
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/web-classification/README.md | # Web Classification
This is a flow demonstrating multi-class classification with an LLM. Given a URL, it classifies the URL into one web category with just a few shots and simple summarization and classification prompts.
## Tools used in this flow
- LLM Tool
- Python Tool
## What you will learn
In this flow, you will learn
- how to compose a classification flow with an LLM.
- how to feed few-shot examples to the LLM classifier.
## Prerequisites
Install promptflow sdk and other dependencies:
```bash
pip install -r requirements.txt
```
## Getting Started
### 1. Setup connection
If you are using Azure OpenAI, prepare your resource by following this [instruction](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal) and get your `api_key` if you don't have one.
```bash
# Override keys with --set to avoid yaml file changes
pf connection create --file ../../../connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base> --name open_ai_connection
```
If you are using OpenAI, sign up for an account on the [OpenAI website](https://openai.com/), log in, and [find your personal API key](https://platform.openai.com/account/api-keys).
```shell
pf connection create --file ../../../connections/openai.yml --set api_key=<your_api_key>
```
### 2. Configure the flow with your connection
`flow.dag.yaml` is already configured with connection named `open_ai_connection`.
### 3. Test flow with single line data
```bash
# test with default input value in flow.dag.yaml
pf flow test --flow .
# test with user specified inputs
pf flow test --flow . --inputs url='https://www.youtube.com/watch?v=kYqRtjDBci8'
```
### 4. Run with multi-line data
```bash
# create run using command line args
pf run create --flow . --data ./data.jsonl --column-mapping url='${data.url}' --stream
# (Optional) create a random run name
run_name="web_classification_"$(openssl rand -hex 12)
# create run using yaml file, run_name will be used in following contents, --name is optional
pf run create --file run.yml --stream --name $run_name
```
You can also skip providing `column-mapping` if the provided data has the same column names as the flow inputs.
See [here](https://aka.ms/pf/column-mapping) for the default behavior when `column-mapping` is not provided in the CLI.
```bash
# list run
pf run list
# show run
pf run show --name $run_name
# show run outputs
pf run show-details --name $run_name
```
### 5. Run with classification evaluation flow
Create the `evaluation` run:
```bash
# (Optional) save previous run name into variable, and create a new random run name for further use
prev_run_name=$run_name
run_name="classification_accuracy_"$(openssl rand -hex 12)
# create run using command line args
pf run create --flow ../../evaluation/eval-classification-accuracy --data ./data.jsonl --column-mapping groundtruth='${data.answer}' prediction='${run.outputs.category}' --run $prev_run_name --stream
# create run using yaml file, --name is optional
pf run create --file run_evaluation.yml --run $prev_run_name --stream --name $run_name
```
```bash
pf run show-details --name $run_name
pf run show-metrics --name $run_name
pf run visualize --name $run_name
```
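Steps 4 and 5 can also be scripted. The following is a minimal sketch assuming the `promptflow` Python SDK (`PFClient`); paths and column mappings mirror the commands above.
```python
from promptflow import PFClient

pf = PFClient()

# Batch run over data.jsonl, mapping the "url" column to the flow input.
base_run = pf.run(
    flow=".",
    data="./data.jsonl",
    column_mapping={"url": "${data.url}"},
)

# Evaluation run that reads predictions from the batch run's outputs.
eval_run = pf.run(
    flow="../../evaluation/eval-classification-accuracy",
    data="./data.jsonl",
    run=base_run,
    column_mapping={
        "groundtruth": "${data.answer}",
        "prediction": "${run.outputs.category}",
    },
)

print(pf.get_details(eval_run))
print(pf.get_metrics(eval_run))
```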
### 6. Submit run to cloud
```bash
# set default workspace
az account set -s <your_subscription_id>
az configure --defaults group=<your_resource_group_name> workspace=<your_workspace_name>
# create run
pfazure run create --flow . --data ./data.jsonl --column-mapping url='${data.url}' --stream
# (Optional) create a new random run name for further use
run_name="web_classification_"$(openssl rand -hex 12)
# create run using yaml file, --name is optional
pfazure run create --file run.yml --name $run_name
pfazure run stream --name $run_name
pfazure run show-details --name $run_name
pfazure run show-metrics --name $run_name
# (Optional) save previous run name into variable, and create a new random run name for further use
prev_run_name=$run_name
run_name="classification_accuracy_"$(openssl rand -hex 12)
# create evaluation run, --name is optional
pfazure run create --flow ../../evaluation/eval-classification-accuracy --data ./data.jsonl --column-mapping groundtruth='${data.answer}' prediction='${run.outputs.category}' --run $prev_run_name
pfazure run create --file run_evaluation.yml --run $prev_run_name --stream --name $run_name
pfazure run stream --name $run_name
pfazure run show --name $run_name
pfazure run show-details --name $run_name
pfazure run show-metrics --name $run_name
pfazure run visualize --name $run_name
``` | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/web-classification/run.yml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
flow: .
data: data.jsonl
variant: ${summarize_text_content.variant_1}
column_mapping:
url: ${data.url}
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/web-classification/fetch_text_content_from_url.py | import bs4
import requests
from promptflow import tool
@tool
def fetch_text_content_from_url(url: str):
# Send a request to the URL
try:
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) "
"Chrome/113.0.0.0 Safari/537.36 Edg/113.0.1774.35"
}
response = requests.get(url, headers=headers)
if response.status_code == 200:
# Parse the HTML content using BeautifulSoup
soup = bs4.BeautifulSoup(response.text, "html.parser")
soup.prettify()
return soup.get_text()[:2000]
else:
msg = (
f"Get url failed with status code {response.status_code}.\nURL: {url}\nResponse: "
f"{response.text[:100]}"
)
print(msg)
return "No available content"
except Exception as e:
print("Get url failed with error: {}".format(e))
return "No available content"
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/web-classification/classify_with_llm.jinja2 | system:
Your task is to classify a given url into one of the following categories:
Movie, App, Academic, Channel, Profile, PDF or None based on the text content information.
The classification will be based on the url, the webpage text content summary, or both.
user:
The selection range of the value of "category" must be within "Movie", "App", "Academic", "Channel", "Profile", "PDF" and "None".
The selection range of the value of "evidence" must be within "Url", "Text content", and "Both".
Here are a few examples:
{% for ex in examples %}
URL: {{ex.url}}
Text content: {{ex.text_content}}
OUTPUT:
{"category": "{{ex.category}}", "evidence": "{{ex.evidence}}"}
{% endfor %}
For a given URL and text content, classify the url to complete the category and indicate evidence:
URL: {{url}}
Text content: {{text_content}}.
OUTPUT: | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/web-classification/summarize_text_content__variant_1.jinja2 | system:
Please summarize some keywords of this paragraph and provide some details for each keyword.
Do not add any information that is not in the text.
user:
Text: {{text}}
Summary: | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/web-classification/prepare_examples.py | from promptflow import tool
@tool
def prepare_examples():
return [
{
"url": "https://play.google.com/store/apps/details?id=com.spotify.music",
"text_content": "Spotify is a free music and podcast streaming app with millions of songs, albums, and "
"original podcasts. It also offers audiobooks, so users can enjoy thousands of stories. "
"It has a variety of features such as creating and sharing music playlists, discovering "
"new music, and listening to popular and exclusive podcasts. It also has a Premium "
"subscription option which allows users to download and listen offline, and access "
"ad-free music. It is available on all devices and has a variety of genres and artists "
"to choose from.",
"category": "App",
"evidence": "Both",
},
{
"url": "https://www.youtube.com/channel/UC_x5XG1OV2P6uZZ5FSM9Ttw",
"text_content": "NFL Sunday Ticket is a service offered by Google LLC that allows users to watch NFL "
"games on YouTube. It is available in 2023 and is subject to the terms and privacy policy "
"of Google LLC. It is also subject to YouTube's terms of use and any applicable laws.",
"category": "Channel",
"evidence": "URL",
},
{
"url": "https://arxiv.org/abs/2303.04671",
"text_content": "Visual ChatGPT is a system that enables users to interact with ChatGPT by sending and "
"receiving not only languages but also images, providing complex visual questions or "
"visual editing instructions, and providing feedback and asking for corrected results. "
"It incorporates different Visual Foundation Models and is publicly available. Experiments "
"show that Visual ChatGPT opens the door to investigating the visual roles of ChatGPT with "
"the help of Visual Foundation Models.",
"category": "Academic",
"evidence": "Text content",
},
{
"url": "https://ab.politiaromana.ro/",
"text_content": "There is no content available for this text.",
"category": "None",
"evidence": "None",
},
]
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/web-classification/run_evaluation.yml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
flow: ../../evaluation/eval-classification-accuracy
data: data.jsonl
run: web_classification_variant_1_20230724_173442_973403 # replace with your run name
column_mapping:
groundtruth: ${data.answer}
prediction: ${run.outputs.category} | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/web-classification/requirements.txt | promptflow[azure]
promptflow-tools
bs4 | 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/web-classification/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
environment:
python_requirements_txt: requirements.txt
inputs:
url:
type: string
default: https://play.google.com/store/apps/details?id=com.twitter.android
outputs:
category:
type: string
reference: ${convert_to_dict.output.category}
evidence:
type: string
reference: ${convert_to_dict.output.evidence}
nodes:
- name: fetch_text_content_from_url
type: python
source:
type: code
path: fetch_text_content_from_url.py
inputs:
url: ${inputs.url}
- name: summarize_text_content
use_variants: true
- name: prepare_examples
type: python
source:
type: code
path: prepare_examples.py
inputs: {}
- name: classify_with_llm
type: llm
source:
type: code
path: classify_with_llm.jinja2
inputs:
deployment_name: gpt-35-turbo
model: gpt-3.5-turbo
max_tokens: 128
temperature: 0.2
url: ${inputs.url}
text_content: ${summarize_text_content.output}
examples: ${prepare_examples.output}
connection: open_ai_connection
api: chat
- name: convert_to_dict
type: python
source:
type: code
path: convert_to_dict.py
inputs:
input_str: ${classify_with_llm.output}
node_variants:
summarize_text_content:
default_variant_id: variant_0
variants:
variant_0:
node:
type: llm
source:
type: code
path: summarize_text_content.jinja2
inputs:
deployment_name: gpt-35-turbo
model: gpt-3.5-turbo
max_tokens: 128
temperature: 0.2
text: ${fetch_text_content_from_url.output}
connection: open_ai_connection
api: chat
variant_1:
node:
type: llm
source:
type: code
path: summarize_text_content__variant_1.jinja2
inputs:
deployment_name: gpt-35-turbo
model: gpt-3.5-turbo
max_tokens: 256
temperature: 0.3
text: ${fetch_text_content_from_url.output}
connection: open_ai_connection
api: chat
| 0 |
promptflow_repo/promptflow/examples/flows/standard | promptflow_repo/promptflow/examples/flows/standard/web-classification/summarize_text_content.jinja2 | system:
Please summarize the following text in one paragraph. 100 words.
Do not add any information that is not in the text.
user:
Text: {{text}}
Summary: | 0 |
promptflow_repo/promptflow/examples/flows/standard/web-classification | promptflow_repo/promptflow/examples/flows/standard/web-classification/.promptflow/flow.tools.json | {
"package": {},
"code": {
"fetch_text_content_from_url.py": {
"type": "python",
"inputs": {
"url": {
"type": [
"string"
]
}
},
"source": "fetch_text_content_from_url.py",
"function": "fetch_text_content_from_url"
},
"summarize_text_content.jinja2": {
"type": "llm",
"inputs": {
"text": {
"type": [
"string"
]
}
},
"source": "summarize_text_content.jinja2"
},
"summarize_text_content__variant_1.jinja2": {
"type": "llm",
"inputs": {
"text": {
"type": [
"string"
]
}
},
"source": "summarize_text_content__variant_1.jinja2"
},
"prepare_examples.py": {
"type": "python",
"source": "prepare_examples.py",
"function": "prepare_examples"
},
"classify_with_llm.jinja2": {
"type": "llm",
"inputs": {
"url": {
"type": [
"string"
]
},
"examples": {
"type": [
"string"
]
},
"text_content": {
"type": [
"string"
]
}
},
"source": "classify_with_llm.jinja2"
},
"convert_to_dict.py": {
"type": "python",
"inputs": {
"input_str": {
"type": [
"string"
]
}
},
"source": "convert_to_dict.py",
"function": "convert_to_dict"
}
}
} | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/basic-chat/README.md | # Basic Chat
This example shows how to create a basic chat flow. It demonstrates how to create a chatbot that can remember previous interactions and use the conversation history to generate the next message.
Tools used in this flow:
- `llm` tool
## Prerequisites
Install promptflow sdk and other dependencies in this folder:
```bash
pip install -r requirements.txt
```
## What you will learn
In this flow, you will learn
- how to compose a chat flow.
- the prompt template format of the LLM tool chat API. The message delimiter is a separate line containing the role name and a colon: "system:", "user:", "assistant:".
See <a href="https://platform.openai.com/docs/api-reference/chat/create#chat/create-role" target="_blank">OpenAI Chat</a> for more about message roles.
```jinja
system:
You are a chatbot having a conversation with a human.
user:
{{question}}
```
- how to consume chat history in prompt.
```jinja
{% for item in chat_history %}
user:
{{item.inputs.question}}
assistant:
{{item.outputs.answer}}
{% endfor %}
```
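The `chat_history` input is a list of previous turns, each holding the flow inputs and outputs of that turn. As an illustration (the answer text here is made up), a value could look like:
```python
chat_history = [
    {
        "inputs": {"question": "What is ChatGPT?"},
        "outputs": {"answer": "ChatGPT is a chatbot developed by OpenAI."},
    },
    # Each further turn appends another {"inputs": ..., "outputs": ...} item.
]
```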
## Getting started
### 1 Create connection for LLM tool to use
Go to "Prompt flow" "Connections" tab. Click on "Create" button, select one of LLM tool supported connection types and fill in the configurations.
Currently, there are two connection types supported by LLM tool: "AzureOpenAI" and "OpenAI". If you want to use "AzureOpenAI" connection type, you need to create an Azure OpenAI service first. Please refer to [Azure OpenAI Service](https://azure.microsoft.com/en-us/products/cognitive-services/openai-service/) for more details. If you want to use "OpenAI" connection type, you need to create an OpenAI account first. Please refer to [OpenAI](https://platform.openai.com/) for more details.
```bash
# Override keys with --set to avoid yaml file changes
pf connection create --file ../../../connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base> --name open_ai_connection
```
Note in [flow.dag.yaml](flow.dag.yaml) we are using connection named `open_ai_connection`.
```bash
# show registered connection
pf connection show --name open_ai_connection
```
### 2 Start chatting
```bash
# run chat flow with default question in flow.dag.yaml
pf flow test --flow .
# run chat flow with new question
pf flow test --flow . --inputs question="What's Azure Machine Learning?"
# start an interactive chat session in CLI
pf flow test --flow . --interactive
# start an interactive chat session in CLI with verbose info
pf flow test --flow . --interactive --verbose
```
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/basic-chat/chat.jinja2 | system:
You are a helpful assistant.
{% for item in chat_history %}
user:
{{item.inputs.question}}
assistant:
{{item.outputs.answer}}
{% endfor %}
user:
{{question}} | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/basic-chat/requirements.txt | promptflow
promptflow-tools | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/basic-chat/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
chat_history:
type: list
default: []
question:
type: string
is_chat_input: true
default: What is ChatGPT?
outputs:
answer:
type: string
reference: ${chat.output}
is_chat_output: true
nodes:
- inputs:
# This is to easily switch between openai and azure openai.
# deployment_name is required by azure openai, model is required by openai.
deployment_name: gpt-35-turbo
model: gpt-3.5-turbo
max_tokens: "256"
temperature: "0.7"
chat_history: ${inputs.chat_history}
question: ${inputs.question}
name: chat
type: llm
source:
type: code
path: chat.jinja2
api: chat
connection: open_ai_connection
node_variants: {}
environment:
python_requirements_txt: requirements.txt
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-wikipedia/data.jsonl | {"chat_history":[{"inputs":{"question":"What is ChatGPT?"},"outputs":{"answer":"ChatGPT is a chatbot product developed by OpenAI. It is powered by the Generative Pre-trained Transformer (GPT) series of language models, with GPT-4 being the latest version. ChatGPT uses natural language processing to generate responses to user inputs in a conversational manner. It was released as ChatGPT Plus, a premium version, which provides enhanced features and access to the GPT-4 based version of OpenAI's API. ChatGPT allows users to interact and have conversations with the language model, utilizing both text and image inputs. It is designed to be more reliable, creative, and capable of handling nuanced instructions compared to previous versions. However, it is important to note that while GPT-4 improves upon its predecessors, it still retains some of the same limitations and challenges."}}],"question":"What is the difference between this model and previous neural network?"}
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-wikipedia/search_result_from_url.py | import random
import time
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import bs4
import requests
from promptflow import tool
session = requests.Session()
def decode_str(string):
return string.encode().decode("unicode-escape").encode("latin1").decode("utf-8")
def get_page_sentence(page, count: int = 10):
# find all paragraphs
paragraphs = page.split("\n")
paragraphs = [p.strip() for p in paragraphs if p.strip()]
# find all sentences
sentences = []
for p in paragraphs:
sentences += p.split(". ")
sentences = [s.strip() + "." for s in sentences if s.strip()]
# get first `count` number of sentences
return " ".join(sentences[:count])
def fetch_text_content_from_url(url: str, count: int = 10):
# Send a request to the URL
try:
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) "
"Chrome/113.0.0.0 Safari/537.36 Edg/113.0.1774.35"
}
delay = random.uniform(0, 0.5)
time.sleep(delay)
response = session.get(url, headers=headers)
if response.status_code == 200:
# Parse the HTML content using BeautifulSoup
soup = bs4.BeautifulSoup(response.text, "html.parser")
page_content = [p_ul.get_text().strip() for p_ul in soup.find_all("p") + soup.find_all("ul")]
page = ""
for content in page_content:
if len(content.split(" ")) > 2:
page += decode_str(content)
if not content.endswith("\n"):
page += "\n"
text = get_page_sentence(page, count=count)
return (url, text)
else:
msg = (
f"Get url failed with status code {response.status_code}.\nURL: {url}\nResponse: "
f"{response.text[:100]}"
)
print(msg)
return (url, "No available content")
except Exception as e:
print("Get url failed with error: {}".format(e))
return (url, "No available content")
@tool
def search_result_from_url(url_list: list, count: int = 10):
results = []
partial_func_of_fetch_text_content_from_url = partial(fetch_text_content_from_url, count=count)
with ThreadPoolExecutor(max_workers=5) as executor:
futures = executor.map(partial_func_of_fetch_text_content_from_url, url_list)
for result in futures:
results.append(result)
return results
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-wikipedia/README.md | # Chat With Wikipedia
This flow demonstrates how to create a chatbot that can remember previous interactions and use the conversation history to generate the next message.
Tools used in this flow:
- `llm` tool
- custom `python` Tool
## Prerequisites
Install promptflow sdk and other dependencies in this folder:
```bash
pip install -r requirements.txt
```
## What you will learn
In this flow, you will learn
- how to compose a chat flow.
- the prompt template format of the LLM tool chat API. The message delimiter is a separate line containing the role name and a colon: "system:", "user:", "assistant:".
See <a href="https://platform.openai.com/docs/api-reference/chat/create#chat/create-role" target="_blank">OpenAI Chat</a> for more about message roles.
```jinja
system:
You are a chatbot having a conversation with a human.
user:
{{question}}
```
- how to consume chat history in prompt.
```jinja
{% for item in chat_history %}
user:
{{item.inputs.question}}
assistant:
{{item.outputs.answer}}
{% endfor %}
```
## Getting started
### 1 Create connection for LLM tool to use
Go to "Prompt flow" "Connections" tab. Click on "Create" button, select one of LLM tool supported connection types and fill in the configurations.
Currently, there are two connection types supported by LLM tool: "AzureOpenAI" and "OpenAI". If you want to use "AzureOpenAI" connection type, you need to create an Azure OpenAI service first. Please refer to [Azure OpenAI Service](https://azure.microsoft.com/en-us/products/cognitive-services/openai-service/) for more details. If you want to use "OpenAI" connection type, you need to create an OpenAI account first. Please refer to [OpenAI](https://platform.openai.com/) for more details.
```bash
# Override keys with --set to avoid yaml file changes
pf connection create --file ../../../connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base>
```
Note in [flow.dag.yaml](flow.dag.yaml) we are using connection named `open_ai_connection`.
```bash
# show registered connection
pf connection show --name open_ai_connection
```
### 2 Start chatting
```bash
# run chat flow with default question in flow.dag.yaml
pf flow test --flow .
# run chat flow with new question
pf flow test --flow . --inputs question="What's Azure Machine Learning?"
# start an interactive chat session in CLI
pf flow test --flow . --interactive
# start an interactive chat session in CLI with verbose info
pf flow test --flow . --interactive --verbose
```
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-wikipedia/get_wiki_url.py | import re
import bs4
import requests
from promptflow import tool
def decode_str(string):
return string.encode().decode("unicode-escape").encode("latin1").decode("utf-8")
def remove_nested_parentheses(string):
pattern = r"\([^()]+\)"
while re.search(pattern, string):
string = re.sub(pattern, "", string)
return string
@tool
def get_wiki_url(entity: str, count=2):
# Send a request to the URL
url = f"https://en.wikipedia.org/w/index.php?search={entity}"
url_list = []
try:
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) "
"Chrome/113.0.0.0 Safari/537.36 Edg/113.0.1774.35"
}
response = requests.get(url, headers=headers)
if response.status_code == 200:
# Parse the HTML content using BeautifulSoup
soup = bs4.BeautifulSoup(response.text, "html.parser")
mw_divs = soup.find_all("div", {"class": "mw-search-result-heading"})
if mw_divs: # mismatch
result_titles = [decode_str(div.get_text().strip()) for div in mw_divs]
result_titles = [remove_nested_parentheses(result_title) for result_title in result_titles]
print(f"Could not find {entity}. Similar entity: {result_titles[:count]}.")
url_list.extend(
[f"https://en.wikipedia.org/w/index.php?search={result_title}" for result_title in result_titles]
)
else:
page_content = [p_ul.get_text().strip() for p_ul in soup.find_all("p") + soup.find_all("ul")]
if any("may refer to:" in p for p in page_content):
url_list.extend(get_wiki_url("[" + entity + "]"))
else:
url_list.append(url)
else:
msg = (
f"Get url failed with status code {response.status_code}.\nURL: {url}\nResponse: "
f"{response.text[:100]}"
)
print(msg)
return url_list[:count]
except Exception as e:
print("Get url failed with error: {}".format(e))
return url_list
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-wikipedia/augmented_chat.jinja2 | system:
You are a chatbot having a conversation with a human.
Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES").
If you don't know the answer, just say that you don't know. Don't try to make up an answer.
ALWAYS return a "SOURCES" part in your answer.
{{contexts}}
{% for item in chat_history %}
user:
{{item.inputs.question}}
assistant:
{{item.outputs.answer}}
{% endfor %}
user:
{{question}}
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-wikipedia/requirements.txt | promptflow
promptflow-tools
bs4 | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-wikipedia/process_search_result.py | from promptflow import tool
@tool
def process_search_result(search_result):
def format(doc: dict):
return f"Content: {doc['Content']}\nSource: {doc['Source']}"
try:
context = []
for url, content in search_result:
context.append({"Content": content, "Source": url})
context_str = "\n\n".join([format(c) for c in context])
return context_str
except Exception as e:
print(f"Error: {e}")
return ""
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-wikipedia/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
chat_history:
type: list
default: []
question:
type: string
default: What is ChatGPT?
is_chat_input: true
outputs:
answer:
type: string
reference: ${augmented_chat.output}
is_chat_output: true
nodes:
- name: extract_query_from_question
type: llm
source:
type: code
path: extract_query_from_question.jinja2
inputs:
# This is to easily switch between openai and azure openai.
# deployment_name is required by azure openai, model is required by openai.
deployment_name: gpt-35-turbo
model: gpt-3.5-turbo
temperature: '0.7'
top_p: '1.0'
stop: ''
max_tokens: '256'
presence_penalty: '0'
frequency_penalty: '0'
logit_bias: ''
question: ${inputs.question}
chat_history: ${inputs.chat_history}
connection: open_ai_connection
api: chat
- name: get_wiki_url
type: python
source:
type: code
path: get_wiki_url.py
inputs:
entity: ${extract_query_from_question.output}
count: '2'
- name: search_result_from_url
type: python
source:
type: code
path: search_result_from_url.py
inputs:
url_list: ${get_wiki_url.output}
count: '10'
- name: process_search_result
type: python
source:
type: code
path: process_search_result.py
inputs:
search_result: ${search_result_from_url.output}
- name: augmented_chat
type: llm
source:
type: code
path: augmented_chat.jinja2
inputs:
# This is to easily switch between openai and azure openai.
# deployment_name is required by azure openai, model is required by openai.
deployment_name: gpt-35-turbo
model: gpt-3.5-turbo
temperature: '0.8'
question: ${inputs.question}
chat_history: ${inputs.chat_history}
contexts: ${process_search_result.output}
connection: open_ai_connection
api: chat
environment:
python_requirements_txt: requirements.txt
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-wikipedia/extract_query_from_question.jinja2 | system:
You are an AI assistant reading the transcript of a conversation between an AI and a human. Given an input question and conversation history, infer the user's real intent.
The conversation history is provided just in case context is needed (e.g. "What is this?" where "this" is defined in the previous conversation).
Return the output as the query to be used for the next round's user message.
user:
EXAMPLE
Conversation history:
Human: I want to find the best restaurants nearby, could you recommend some?
AI: Sure, I can help you with that. Here are some of the best restaurants nearby: Rock Bar.
Human: How do I get to Rock Bar?
Output: directions to Rock Bar
END OF EXAMPLE
EXAMPLE
Conversation history:
Human: I want to find the best restaurants nearby, could you recommend some?
AI: Sure, I can help you with that. Here are some of the best restaurants nearby: Rock Bar.
Human: How do I get to Rock Bar?
AI: To get to Rock Bar, you need to go to the 52nd floor of the Park A. You can take the subway to Station A and walk for about 8 minutes from exit A53. Alternatively, you can take the train to S Station and walk for about 12 minutes from the south exit3.
Human: Show me more restaurants.
Output: best restaurants nearby
END OF EXAMPLE
Conversation history (for reference only):
{% for item in chat_history %}
Human: {{item.inputs.question}}
AI: {{item.outputs.answer}}
{% endfor %}
Human: {{question}}
Output:
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-math-variant/data.jsonl | {"question": "Compute $\\dbinom{16}{5}$.", "answer": "4368", "raw_answer": "$\\dbinom{16}{5}=\\dfrac{16\\times 15\\times 14\\times 13\\times 12}{5\\times 4\\times 3\\times 2\\times 1}=\\boxed{4368}.$"}
{"question": "Determine the number of ways to arrange the letters of the word PROOF.", "answer": "60", "raw_answer": "There are two O's and five total letters, so the answer is $\\dfrac{5!}{2!} = \\boxed{60}$."}
{"question": "23 people attend a party. Each person shakes hands with at most 22 other people. What is the maximum possible number of handshakes, assuming that any two people can shake hands at most once?", "answer": "253", "raw_answer": "Note that if each person shakes hands with every other person, then the number of handshakes is maximized. There are $\\binom{23}{2} = \\frac{(23)(22)}{2} = (23)(11) = 230+23 = \\boxed{253}$ ways to choose two people to form a handshake."}
{"question": "James has 7 apples. 4 of them are red, and 3 of them are green. If he chooses 2 apples at random, what is the probability that both the apples he chooses are green?", "answer": "1/7", "raw_answer": "There are $\\binom{7}{2}=21$ total ways for James to choose 2 apples from 7, but only $\\binom{3}{2}=3$ ways for him to choose 2 green apples. So, the probability that he chooses 2 green apples is $\\frac{3}{21}=\\boxed{\\frac{1}{7}}$."}
{"question": "We are allowed to remove exactly one integer from the list $$-1,0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,11,$$and then we choose two distinct integers at random from the remaining list. What number should we remove if we wish to maximize the probability that the sum of the two chosen numbers is 10?", "answer": "5", "raw_answer": "For each integer $x$ in the list besides 5, the integer $10-x$ is also in the list. So, for each of these integers, removing $x$ reduces the number of pairs of distinct integers whose sum is 10. However, there is no other integer in list that can be added to 5 to give 10, so removing 5 from the list will not reduce the number of pairs of distinct integers whose sum is 10.\n\nSince removing any integer besides 5 will reduce the number of pairs that add to 10, while removing 5 will leave the number of pairs that add to 10 unchanged, we have the highest probability of having a sum of 10 when we remove $\\boxed{5}$."}
{"question": "The numbers 1 through 25 are written on 25 cards with one number on each card. Sara picks one of the 25 cards at random. What is the probability that the number on her card will be a multiple of 2 or 5? Express your answer as a common fraction.", "answer": "3/5", "raw_answer": "There are $12$ even numbers and $5$ multiples of $5$ in the range $1$ to $25$. However, we have double-counted $10$ and $20$, which are divisible by both $2$ and $5$. So the number of good outcomes is $12+5-2=15$ and the probability is $\\frac{15}{25}=\\boxed{\\frac{3}{5}}$."}
{"question": "A bag has 3 red marbles and 5 white marbles. Two marbles are drawn from the bag and not replaced. What is the probability that the first marble is red and the second marble is white?", "answer": "15/56", "raw_answer": "The probability that the first is red is $\\dfrac38$. Now with 7 remaining, the probability that the second is white is $\\dfrac57$. The answer is $\\dfrac38 \\times \\dfrac57 = \\boxed{\\dfrac{15}{56}}$."}
{"question": "Find the largest prime divisor of 11! + 12!", "answer": "13", "raw_answer": "Since $12! = 12 \\cdot 11!$, we can examine the sum better by factoring $11!$ out of both parts: $$ 11! + 12! = 11! + 12 \\cdot 11! = 11!(1 + 12) = 11! \\cdot 13. $$Since no prime greater than 11 divides $11!$, $\\boxed{13}$ is the largest prime factor of $11! + 12!$."}
{"question": "These two spinners are divided into thirds and quarters, respectively. If each of these spinners is spun once, what is the probability that the product of the results of the two spins will be an even number? Express your answer as a common fraction.\n\n[asy]\n\nsize(5cm,5cm);\n\ndraw(Circle((0,0),1));\n\ndraw(Circle((3,0),1));\n\ndraw((0,0)--(0,1));\n\ndraw((0,0)--(-0.9,-0.47));\n\ndraw((0,0)--(0.9,-0.47));\n\ndraw((2,0)--(4,0));\n\ndraw((3,1)--(3,-1));\n\nlabel(\"$3$\",(-0.5,0.3));\n\nlabel(\"$4$\",(0.5,0.3));\n\nlabel(\"$5$\",(0,-0.5));\n\nlabel(\"$5$\",(2.6,-0.4));\n\nlabel(\"$6$\",(2.6,0.4));\n\nlabel(\"$7$\",(3.4,0.4));\n\nlabel(\"$8$\",(3.4,-0.4));\n\ndraw((0,0)--(0.2,0.8),Arrow);\n\ndraw((3,0)--(3.2,0.8),Arrow);\n\n[/asy]", "answer": "2/3", "raw_answer": "We will subtract the probability that the product is odd from 1 to get the probability that the product is even. In order for the product to be odd, we must have both numbers be odd. There are $2\\cdot2=4$ possibilities for this (a 3 or 5 is spun on the left spinner and a 5 or 7 on the right) out of a total of $3\\cdot4=12$ possibilities, so the probability that the product is odd is $4/12=1/3$. The probability that the product is even is $1-1/3=\\boxed{\\frac{2}{3}}$."}
{"question": "No two students in Mrs. Vale's 26-student mathematics class have the same two initials. Each student's first name and last name begin with the same letter. If the letter ``Y'' is considered a vowel, what is the probability of randomly picking a student whose initials are vowels? Express your answer as a common fraction.", "answer": "3/13", "raw_answer": "The students' initials are AA, BB, CC, $\\cdots$, ZZ, representing all 26 letters. The vowels are A, E, I, O, U, and Y, which are 6 letters out of the possible 26. So the probability of picking a student whose initials are vowels is $\\frac{6}{26}=\\boxed{\\frac{3}{13}}$."}
{"question": "What is the expected value of the roll of a standard 6-sided die?", "answer": "3.5", "raw_answer": "Each outcome of rolling a 6-sided die has probability $\\frac16$, and the possible outcomes are 1, 2, 3, 4, 5, and 6. So the expected value is $$ \\frac16(1) + \\frac16(2) + \\frac16(3) + \\frac16(4) + \\frac16(5) + \\frac16(6) = \\frac{21}{6} = \\boxed{3.5}. $$"}
{"question": "How many positive divisors of 30! are prime?", "answer": "10", "raw_answer": "The only prime numbers that divide $30!$ are less than or equal to 30. So 2, 3, 5, 7, 11, 13, 17, 19, 23, 29 are primes that divide $30!$, and there are $\\boxed{10}$ of these."}
{"question": "Marius is entering a wildlife photo contest, and wishes to arrange his seven snow leopards of different heights in a row. If the shortest two leopards have inferiority complexes and demand to be placed at the ends of the row, how many ways can he line up the leopards?", "answer": "240", "raw_answer": "There are two ways to arrange the shortest two leopards. For the five remaining leopards, there are $5!$ ways to arrange them.\n\nTherefore, the answer is $2\\times5!=\\boxed{240\\text{ ways.}}$"}
{"question": "My school's math club has 6 boys and 8 girls. I need to select a team to send to the state math competition. We want 6 people on the team. In how many ways can I select the team without restrictions?", "answer": "3003", "raw_answer": "With no restrictions, we are merely picking 6 students out of 14. This is $\\binom{14}{6} = \\boxed{3003}$."}
{"question": "Nathan will roll two six-sided dice. What is the probability that he will roll a number less than three on the first die and a number greater than three on the second die? Express your answer as a common fraction.", "answer": "1/6", "raw_answer": "For the first die to be less than three, it must be a 1 or a 2, which occurs with probability $\\frac{1}{3}$. For the second die to be greater than 3, it must be a 4 or a 5 or a 6, which occurs with probability $\\frac{1}{2}$. The probability of both of these events occuring, as they are independent, is $\\frac{1}{3} \\cdot \\frac{1}{2} = \\boxed{\\frac{1}{6}}$."}
{"question": "A Senate committee has 8 Republicans and 6 Democrats. In how many ways can we form a subcommittee with 3 Republicans and 2 Democrats?", "answer": "840", "raw_answer": "There are 8 Republicans and 3 spots for them, so there are $\\binom{8}{3} = 56$ ways to choose the Republicans. There are 6 Democrats and 2 spots for them, so there are $\\binom{6}{2} = 15$ ways to choose the Democrats. So there are $56 \\times 15 = \\boxed{840}$ ways to choose the subcommittee."}
{"question": "How many different positive, four-digit integers can be formed using the digits 2, 2, 9 and 9?", "answer": "6", "raw_answer": "We could go ahead and count these directly, but instead we could count in general and then correct for overcounting. That is, if we had 4 distinct digits, there would be $4! = 24$ orderings. However, we must divide by 2! once for the repetition of the digit 2, and divide by 2! for the repetition of the digit 9 (this should make sense because if the repeated digit were different we would have twice as many orderings). So, our answer is $\\frac{4!}{2!\\cdot 2!} = 2 \\cdot 3 = \\boxed{6}$."}
{"question": "I won a trip for four to the Super Bowl. I can bring three of my friends. I have 8 friends. In how many ways can I form my Super Bowl party?", "answer": "56", "raw_answer": "Order does not matter, so it is a combination. Choosing $3$ out of $8$ is $\\binom{8}{3}=\\boxed{56}.$"}
{"question": "Determine the number of ways to arrange the letters of the word MADAM.", "answer": "30", "raw_answer": "First we count the arrangements if all the letters are unique, which is $5!$. Then since the M's and the A's are not unique, we divide by $2!$ twice for the arrangements of M's and the arrangements of A's, for an answer of $\\dfrac{5!}{2! \\times 2!} = \\boxed{30}$."}
{"question": "A palindrome is a number that reads the same forwards and backwards, such as 3003. How many positive four-digit integers are palindromes?", "answer": "90", "raw_answer": "Constructing palindromes requires that we choose the thousands digit (which defines the units digit) and the hundreds digit (which defines the tens digit). Since there are 9 choices for the thousands digit, and 10 choices for the hundreds digit, creating $9 \\cdot 10 = \\boxed{90}$ palindromes."} | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-math-variant/README.md | # Test your prompt variants for chat with math
This is a prompt tuning case with 3 prompt variants for math question answering.
By utilizing this flow, in conjunction with the `evaluation/eval-chat-math` flow, you can quickly grasp the advantages of prompt tuning and experimentation with prompt flow. Here we provide a [video](https://www.youtube.com/watch?v=gcIe6nk2gA4) and a [tutorial](../../../tutorials/flow-fine-tuning-evaluation/promptflow-quality-improvement.md) for you to get started.
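One way to compare the variants programmatically is to submit a batch run per variant and then evaluate each with `eval-chat-math`. The sketch below assumes the `promptflow` Python SDK and that the variants are defined on a node named `chat` with ids `variant_0`, `variant_1`, `variant_2` (an assumption, since `flow.dag.yaml` is not shown here).
```python
from promptflow import PFClient

pf = PFClient()

runs = []
for variant_id in ["variant_0", "variant_1", "variant_2"]:
    # One batch run per prompt variant of the assumed "chat" node.
    run = pf.run(
        flow=".",
        data="./data.jsonl",
        column_mapping={"question": "${data.question}"},
        variant="${chat." + variant_id + "}",
    )
    runs.append(run)
    print(run.name)
```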
Tools used in this flow:
- `llm` tool
- custom `python` Tool
## Prerequisites
Install promptflow sdk and other dependencies in this folder:
```bash
pip install -r requirements.txt
```
## Getting started
### 1 Create connection for LLM tool to use
Go to "Prompt flow" "Connections" tab. Click on "Create" button, select one of LLM tool supported connection types and fill in the configurations.
Currently, there are two connection types supported by LLM tool: "AzureOpenAI" and "OpenAI". If you want to use "AzureOpenAI" connection type, you need to create an Azure OpenAI service first. Please refer to [Azure OpenAI Service](https://azure.microsoft.com/en-us/products/cognitive-services/openai-service/) for more details. If you want to use "OpenAI" connection type, you need to create an OpenAI account first. Please refer to [OpenAI](https://platform.openai.com/) for more details.
```bash
# Override keys with --set to avoid yaml file changes
pf connection create --file ../../../connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base> --name open_ai_connection
```
Note in [flow.dag.yaml](flow.dag.yaml) we are using connection named `open_ai_connection`.
```bash
# show registered connection
pf connection show --name open_ai_connection
```
### 2 Start chatting
```bash
# run chat flow with default question in flow.dag.yaml
pf flow test --flow .
# run chat flow with new question
pf flow test --flow . --inputs question="2+5=?"
# start an interactive chat session in CLI
pf flow test --flow . --interactive
# start an interactive chat session in CLI with verbose info
pf flow test --flow . --interactive --verbose
``` | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-math-variant/chat.jinja2 | system:
You are an assistant to calculate the answer to the provided math problems.
Please return the final numerical answer only, without any accompanying reasoning or explanation.
{% for item in chat_history %}
user:
{{item.inputs.question}}
assistant:
{{item.outputs.answer}}
{% endfor %}
user:
{{question}}
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-math-variant/chat_variant_2.jinja2 | system:
You are an assistant to calculate the answer to the provided math problems.
Please think step by step.
Return the final numerical answer and any accompanying reasoning or explanation separately, as separate fields in JSON format.
user:
A jar contains two red marbles, three green marbles, ten white marbles and no other marbles. Two marbles are randomly drawn from this jar without replacement. What is the probability that these two marbles drawn will both be red? Express your answer as a common fraction.
assistant:
{"Chain of thought": "The total number of marbles is $2+3+10=15$. The probability that the first marble drawn will be red is $2/15$. Then, there will be one red left, out of 14. Therefore, the probability of drawing out two red marbles will be: $$\\frac{2}{15}\\cdot\\frac{1}{14}=\\boxed{\\frac{1}{105}}$$.", "answer": "1/105"}
user:
Find the greatest common divisor of $7!$ and $(5!)^2.$
assistant:
{"Chain of thought": "$$ \\begin{array} 7! &=& 7 \\cdot 6 \\cdot 5 \\cdot 4 \\cdot 3 \\cdot 2 \\cdot 1 &=& 2^4 \\cdot 3^2 \\cdot 5^1 \\cdot 7^1 \\\\ (5!)^2 &=& (5 \\cdot 4 \\cdot 3 \\cdot 2 \\cdot 1)^2 &=& 2^6 \\cdot 3^2 \\cdot 5^2 \\\\ \\text{gcd}(7!, (5!)^2) &=& 2^4 \\cdot 3^2 \\cdot 5^1 &=& \\boxed{720} \\end{array} $$.", "answer": "720"}
user:
A club has 10 members, 5 boys and 5 girls. Two of the members are chosen at random. What is the probability that they are both girls?
assistant:
{"Chain of thought": "There are $\\binomial{10}{2} = 45$ ways to choose two members of the group, and there are $\\binomial{5}{2} = 10$ ways to choose two girls. Therefore, the probability that two members chosen at random are girls is $\\dfrac{10}{45} = \\boxed{\\dfrac{2}{9}}$.", "answer": "2/9"}
user:
Allison, Brian and Noah each have a 6-sided cube. All of the faces on Allison's cube have a 5. The faces on Brian's cube are numbered 1, 2, 3, 4, 5 and 6. Three of the faces on Noah's cube have a 2 and three of the faces have a 6. All three cubes are rolled. What is the probability that Allison's roll is greater than each of Brian's and Noah's? Express your answer as a common fraction.
assistant:
{"Chain of thought": "Since Allison will always roll a 5, we must calculate the probability that both Brian and Noah roll a 4 or lower. The probability of Brian rolling a 4 or lower is $\\frac{4}{6} = \\frac{2}{3}$ since Brian has a standard die. Noah, however, has a $\\frac{3}{6} = \\frac{1}{2}$ probability of rolling a 4 or lower, since the only way he can do so is by rolling one of his 3 sides that have a 2. So, the probability of both of these independent events occurring is $\\frac{2}{3} \\cdot \\frac{1}{2} = \\boxed{\\frac{1}{3}}$.", "answer": "1/3"}
user:
Compute $\\dbinom{50}{2}$.
assistant:
{"Chain of thought": "$\\density binomial{50}{2} = \\dfrac{50!}{2!48!}=\\dfrac{50\\times 49}{2\\times 1}=\\boxed{1225}.$", "answer": "1225"}
user:
The set $S = \\{1, 2, 3, \\ldots , 49, 50\\}$ contains the first $50$ positive integers. After the multiples of 2 and the multiples of 3 are removed, how many integers remain in the set $S$?
assistant:
{"Chain of thought": "The set $S$ contains $25$ multiples of 2 (that is, even numbers). When these are removed, the set $S$ is left with only the odd integers from 1 to 49. At this point, there are $50-25=25$ integers in $S$. We still need to remove the multiples of 3 from $S$.\n\nSince $S$ only contains odd integers after the multiples of 2 are removed, we must remove the odd multiples of 3 between 1 and 49. These are 3, 9, 15, 21, 27, 33, 39, 45, of which there are 8. Therefore, the number of integers remaining in the set $S$ is $25 - 8 = \\boxed{17}$.", "answer": "17"}
{% for item in chat_history %}
user:
{{item.inputs.question}}
assistant:
{{item.outputs.answer}}
{% endfor %}
user:
{{question}}
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-math-variant/requirements.txt | promptflow
promptflow-tools | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-math-variant/chat_variant_1.jinja2 | system:
You are an assistant to calculate the answer to the provided math problems.
Please think step by step.
Return the final numerical answer only, with any accompanying reasoning or explanation separately, in JSON format.
user:
A jar contains two red marbles, three green marbles, ten white marbles and no other marbles. Two marbles are randomly drawn from this jar without replacement. What is the probability that these two marbles drawn will both be red? Express your answer as a common fraction.
assistant:
{"Chain of thought": "The total number of marbles is $2+3+10=15$. The probability that the first marble drawn will be red is $2/15$. Then, there will be one red left, out of 14. Therefore, the probability of drawing out two red marbles will be: $$\\frac{2}{15}\\cdot\\frac{1}{14}=\\boxed{\\frac{1}{105}}$$.", "answer": "1/105"}
user:
Find the greatest common divisor of $7!$ and $(5!)^2.$
assistant:
{"Chain of thought": "$$ \\begin{array} 7! &=& 7 \\cdot 6 \\cdot 5 \\cdot 4 \\cdot 3 \\cdot 2 \\cdot 1 &=& 2^4 \\cdot 3^2 \\cdot 5^1 \\cdot 7^1 \\\\ (5!)^2 &=& (5 \\cdot 4 \\cdot 3 \\cdot 2 \\cdot 1)^2 &=& 2^6 \\cdot 3^2 \\cdot 5^2 \\\\ \\text{gcd}(7!, (5!)^2) &=& 2^4 \\cdot 3^2 \\cdot 5^1 &=& \\boxed{720} \\end{array} $$.", "answer": "720"}
{% for item in chat_history %}
user:
{{item.inputs.question}}
assistant:
{{item.outputs.answer}}
{% endfor %}
user:
{{question}} | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-math-variant/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
environment:
python_requirements_txt: requirements.txt
inputs:
chat_history:
type: list
is_chat_history: true
default: []
question:
type: string
is_chat_input: true
default: '1+1=?'
outputs:
answer:
type: string
reference: ${extract_result.output}
is_chat_output: true
nodes:
- name: chat
use_variants: true
- name: extract_result
type: python
source:
type: code
path: extract_result.py
inputs:
input1: ${chat.output}
node_variants:
chat:
default_variant_id: variant_0
variants:
variant_0:
node:
type: llm
source:
type: code
path: chat.jinja2
inputs:
deployment_name: gpt-4
max_tokens: 256
temperature: 0
chat_history: ${inputs.chat_history}
question: ${inputs.question}
model: gpt-4
connection: open_ai_connection
api: chat
variant_1:
node:
type: llm
source:
type: code
path: chat_variant_1.jinja2
inputs:
deployment_name: gpt-4
max_tokens: 256
temperature: 0
chat_history: ${inputs.chat_history}
question: ${inputs.question}
model: gpt-4
connection: open_ai_connection
api: chat
variant_2:
node:
type: llm
source:
type: code
path: chat_variant_2.jinja2
inputs:
deployment_name: gpt-4
max_tokens: 256
temperature: 0
chat_history: ${inputs.chat_history}
question: ${inputs.question}
model: gpt-4
connection: open_ai_connection
api: chat
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-math-variant/extract_result.py | from promptflow import tool
import json
import re
# The inputs section will change based on the arguments of the tool function, after you save the code
# Adding type to arguments and return value will help the system show the types properly
# Please update the function name/signature per need
@tool
def my_python_tool(input1: str) -> str:
input1 = re.sub(r'[$\\!]', '', input1)
try:
json_answer = json.loads(input1)
answer = json_answer['answer']
except Exception:
answer = input1
return answer
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-math-variant | promptflow_repo/promptflow/examples/flows/chat/chat-math-variant/.promptflow/flow.tools.json | {
"package": {},
"code": {
"chat.jinja2": {
"type": "llm",
"inputs": {
"chat_history": {
"type": [
"string"
]
},
"question": {
"type": [
"string"
]
}
},
"source": "chat.jinja2"
},
"chat_variant_1.jinja2": {
"type": "llm",
"inputs": {
"chat_history": {
"type": [
"string"
]
},
"question": {
"type": [
"string"
]
}
},
"source": "chat_variant_1.jinja2"
},
"chat_variant_2.jinja2": {
"type": "llm",
"inputs": {
"chat_history": {
"type": [
"string"
]
},
"question": {
"type": [
"string"
]
}
},
"source": "chat_variant_2.jinja2"
},
"extract_result.py": {
"type": "python",
"inputs": {
"input1": {
"type": [
"string"
]
}
},
"source": "extract_result.py",
"function": "my_python_tool"
}
}
} | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-image/README.md | # Chat With Image
This flow demonstrates how to create a chatbot that can take images and text as input.
Tools used in this flow:
- `OpenAI GPT-4V` tool
## Prerequisites
Install promptflow sdk and other dependencies in this folder:
```bash
pip install -r requirements.txt
```
## What you will learn
In this flow, you will learn
- how to compose a chat flow with image and text as input. The chat input should be a list of text and/or images.
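For reference, a minimal sketch of such a chat input is shown below; it mirrors the default input in [flow.dag.yaml](flow.dag.yaml), and the image URL is only a placeholder.

```python
# A sketch of a chat input that mixes text and an image referenced by URL.
question = [
    "How many colors can you see?",
    {"data:image/png;url": "https://developer.microsoft.com/_devcom/images/logo-ms-social.png"},
]
```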
## Getting started
### 1 Create connection for OpenAI GPT-4V tool to use
Go to "Prompt flow" "Connections" tab. Click on "Create" button, and create an "OpenAI" connection. If you do not have an OpenAI account, please refer to [OpenAI](https://platform.openai.com/) for more details.
```bash
# Override keys with --set to avoid yaml file changes
pf connection create --file ../../../connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base> name=aoai_gpt4v_connection api_version=2023-07-01-preview
```
Note in [flow.dag.yaml](flow.dag.yaml) we are using connection named `aoai_gpt4v_connection`.
```bash
# show registered connection
pf connection show --name aoai_gpt4v_connection
```
### 2 Start chatting
```bash
# run chat flow with default question in flow.dag.yaml
pf flow test --flow .
# run chat flow with new question
pf flow test --flow . --inputs question='["How many colors can you see?", {"data:image/png;url": "https://developer.microsoft.com/_devcom/images/logo-ms-social.png"}]'
```
```sh
# start an interactive chat session in CLI
pf flow test --flow . --interactive
# start an interactive chat session in CLI with verbose info
pf flow test --flow . --interactive --verbose
```
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-image/chat.jinja2 | # system:
You are a helpful assistant.
{% for item in chat_history %}
# user:
{{item.inputs.question}}
# assistant:
{{item.outputs.answer}}
{% endfor %}
# user:
{{question}} | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-image/requirements.txt | promptflow
promptflow-tools | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-image/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
environment:
python_requirements_txt: requirements.txt
inputs:
chat_history:
type: list
is_chat_history: true
question:
type: list
default:
- data:image/png;url: https://images.idgesg.net/images/article/2019/11/edge-browser-logo_microsoft-100816808-large.jpg
- How many colors can you see?
is_chat_input: true
outputs:
answer:
type: string
reference: ${chat.output}
is_chat_output: true
nodes:
- name: chat
type: custom_llm
source:
type: package_with_prompt
tool: promptflow.tools.aoai_gpt4v.AzureOpenAI.chat
path: chat.jinja2
inputs:
connection: aoai_gpt4v_connection
deployment_name: gpt-4v
max_tokens: 512
chat_history: ${inputs.chat_history}
question: ${inputs.question}
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf_tool.py | from promptflow import tool
from chat_with_pdf.main import chat_with_pdf
@tool
def chat_with_pdf_tool(question: str, pdf_url: str, history: list, ready: str):
history = convert_chat_history_to_chatml_messages(history)
stream, context = chat_with_pdf(question, pdf_url, history)
answer = ""
    for chunk in stream:
        answer = answer + chunk
return {"answer": answer, "context": context}
def convert_chat_history_to_chatml_messages(history):
messages = []
for item in history:
messages.append({"role": "user", "content": item["inputs"]["question"]})
messages.append({"role": "assistant", "content": item["outputs"]["answer"]})
return messages
def convert_chatml_messages_to_chat_history(messages):
history = []
for i in range(0, len(messages), 2):
history.append(
{
"inputs": {"question": messages[i]["content"]},
"outputs": {"answer": messages[i + 1]["content"]},
}
)
return history
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat-with-pdf-azure.ipynb | %pip install -r requirements.txtfrom azure.identity import DefaultAzureCredential, InteractiveBrowserCredential
try:
credential = DefaultAzureCredential()
# Check if given credential can get token successfully.
credential.get_token("https://management.azure.com/.default")
except Exception as ex:
# Fall back to InteractiveBrowserCredential in case DefaultAzureCredential not work
    credential = InteractiveBrowserCredential()

import promptflow.azure as azure
# Get a handle to workspace
pf = azure.PFClient.from_config(credential=credential)

conn_name = "open_ai_connection"
# TODO integrate with azure.ai sdk
# currently we only support creating connections in the Azure ML Studio UI
# raise Exception(f"Please create {conn_name} connection in Azure ML Studio.")

flow_path = "."
data_path = "./data/bert-paper-qna-3-line.jsonl"
config_2k_context = {
"EMBEDDING_MODEL_DEPLOYMENT_NAME": "text-embedding-ada-002",
"CHAT_MODEL_DEPLOYMENT_NAME": "gpt-35-turbo",
"PROMPT_TOKEN_LIMIT": 2000,
"MAX_COMPLETION_TOKENS": 256,
"VERBOSE": True,
"CHUNK_SIZE": 1024,
"CHUNK_OVERLAP": 32,
}
column_mapping = {
"question": "${data.question}",
"pdf_url": "${data.pdf_url}",
"chat_history": "${data.chat_history}",
"config": config_2k_context,
}
run_2k_context = pf.run(
flow=flow_path,
data=data_path,
column_mapping=column_mapping,
display_name="chat_with_pdf_2k_context",
tags={"chat_with_pdf": "", "1st_round": ""},
)
pf.stream(run_2k_context)

print(run_2k_context)

detail = pf.get_details(run_2k_context)
detail

eval_groundedness_flow_path = "../../evaluation/eval-groundedness/"
eval_groundedness_2k_context = pf.run(
flow=eval_groundedness_flow_path,
run=run_2k_context,
column_mapping={
"question": "${run.inputs.question}",
"answer": "${run.outputs.answer}",
"context": "${run.outputs.context}",
},
display_name="eval_groundedness_2k_context",
)
pf.stream(eval_groundedness_2k_context)
print(eval_groundedness_2k_context)

flow_path = "."
data_path = "./data/bert-paper-qna-3-line.jsonl"
config_3k_context = {
"EMBEDDING_MODEL_DEPLOYMENT_NAME": "text-embedding-ada-002",
"CHAT_MODEL_DEPLOYMENT_NAME": "gpt-35-turbo",
"PROMPT_TOKEN_LIMIT": 3000, # different from 2k context
"MAX_COMPLETION_TOKENS": 256,
"VERBOSE": True,
"CHUNK_SIZE": 1024,
"CHUNK_OVERLAP": 32,
}
column_mapping = {
"question": "${data.question}",
"pdf_url": "${data.pdf_url}",
"chat_history": "${data.chat_history}",
"config": config_3k_context,
}
run_3k_context = pf.run(
flow=flow_path,
data=data_path,
column_mapping=column_mapping,
display_name="chat_with_pdf_3k_context",
tags={"chat_with_pdf": "", "2nd_round": ""},
)
pf.stream(run_3k_context)

print(run_3k_context)

detail = pf.get_details(run_3k_context)
detail

eval_groundedness_3k_context = pf.run(
flow=eval_groundedness_flow_path,
run=run_3k_context,
column_mapping={
"question": "${run.inputs.question}",
"answer": "${run.outputs.answer}",
"context": "${run.outputs.context}",
},
display_name="eval_groundedness_3k_context",
)
pf.stream(eval_groundedness_3k_context)
print(eval_groundedness_3k_context)

pf.get_details(eval_groundedness_3k_context)

pf.visualize([eval_groundedness_2k_context, eval_groundedness_3k_context])
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/README.md | # Chat with PDF
This is a simple flow that allows you to ask questions about the content of a PDF file and get answers.
You can run the flow with a URL to a PDF file and question as argument.
Once it's launched it will download the PDF and build an index of the content.
Then when you ask a question, it will look up the index to retrieve the relevant content and post the question together with the relevant content to the OpenAI chat model (gpt-3.5-turbo or gpt-4) to get an answer.
Learn more on corresponding [tutorials](../../../tutorials/e2e-development/chat-with-pdf.md).
Tools used in this flow:
- custom `python` Tool
## Prerequisites
Install promptflow sdk and other dependencies:
```bash
pip install -r requirements.txt
```
## Get started
### Create connection in this folder
```bash
# create connection needed by flow
if pf connection list | grep open_ai_connection; then
echo "open_ai_connection already exists"
else
pf connection create --file ../../../connections/azure_openai.yml --name open_ai_connection --set api_key=<your_api_key> api_base=<your_api_base>
fi
```
### CLI Example
#### Run flow
**Note**: this sample uses [predownloaded PDFs](./chat_with_pdf/.pdfs/) and [prebuilt FAISS Index](./chat_with_pdf/.index/) to speed up execution time.
You can remove the folders to start a fresh run.
```bash
# test with default input value in flow.dag.yaml
pf flow test --flow .
# test with flow inputs
pf flow test --flow . --inputs question="What is the name of the new language representation model introduced in the document?" pdf_url="https://arxiv.org/pdf/1810.04805.pdf"
# (Optional) create a random run name
run_name="web_classification_"$(openssl rand -hex 12)
# run with multiline data, --name is optional
pf run create --file batch_run.yaml --name $run_name
# visualize run output details
pf run visualize --name $run_name
```
#### Submit run to cloud
Assume we already have a connection named `open_ai_connection` in workspace.
```bash
# set default workspace
az account set -s <your_subscription_id>
az configure --defaults group=<your_resource_group_name> workspace=<your_workspace_name>
```
``` bash
# create run
pfazure run create --file batch_run.yaml --name $run_name
```
Note: Click portal_url of the run to view the final snapshot.
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/batch_run.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
#name: chat_with_pdf_default_20230820_162219_559000
flow: .
data: ./data/bert-paper-qna.jsonl
#run: <Uncomment to select a run input>
column_mapping:
chat_history: ${data.chat_history}
pdf_url: ${data.pdf_url}
question: ${data.question}
config:
EMBEDDING_MODEL_DEPLOYMENT_NAME: text-embedding-ada-002
CHAT_MODEL_DEPLOYMENT_NAME: gpt-4
PROMPT_TOKEN_LIMIT: 3000
MAX_COMPLETION_TOKENS: 1024
VERBOSE: true
CHUNK_SIZE: 1024
CHUNK_OVERLAP: 64 | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/build_index_tool.py | from promptflow import tool
from chat_with_pdf.build_index import create_faiss_index
@tool
def build_index_tool(pdf_path: str) -> str:
return create_faiss_index(pdf_path)
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/__init__.py | import sys
import os
sys.path.append(
os.path.join(os.path.dirname(os.path.abspath(__file__)), "chat_with_pdf")
)
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/setup_env.py | import os
from typing import Union
from promptflow import tool
from promptflow.connections import AzureOpenAIConnection, OpenAIConnection
from chat_with_pdf.utils.lock import acquire_lock
BASE_DIR = os.path.dirname(os.path.abspath(__file__)) + "/chat_with_pdf/"
@tool
def setup_env(connection: Union[AzureOpenAIConnection, OpenAIConnection], config: dict):
if not connection or not config:
return
if isinstance(connection, AzureOpenAIConnection):
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = connection.api_base
os.environ["OPENAI_API_KEY"] = connection.api_key
os.environ["OPENAI_API_VERSION"] = connection.api_version
if isinstance(connection, OpenAIConnection):
os.environ["OPENAI_API_KEY"] = connection.api_key
if connection.organization is not None:
os.environ["OPENAI_ORG_ID"] = connection.organization
for key in config:
os.environ[key] = str(config[key])
with acquire_lock(BASE_DIR + "create_folder.lock"):
if not os.path.exists(BASE_DIR + ".pdfs"):
os.mkdir(BASE_DIR + ".pdfs")
if not os.path.exists(BASE_DIR + ".index/.pdfs"):
os.makedirs(BASE_DIR + ".index/.pdfs")
return "Ready"
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/eval_run.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
#name: eval_groundedness_default_20230820_200152_009000
flow: ../../evaluation/eval-groundedness
run: chat_with_pdf_default_20230820_162219_559000
column_mapping:
question: ${run.inputs.question}
answer: ${run.outputs.answer}
context: ${run.outputs.context} | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/openai.yaml | # All the values should be string type, please use "123" instead of 123 or "True" instead of True.
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/OpenAIConnection.schema.json
name: open_ai_connection
type: open_ai
api_key: "<open-ai-api-key>"
organization: ""
# Note:
# The connection information will be stored in a local database with api_key encrypted for safety.
# Prompt flow will ONLY use the connection information (incl. keys) when instructed by you, e.g. manage connections, use connections to run flow etc.
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/requirements.txt | PyPDF2
faiss-cpu
openai
jinja2
python-dotenv
tiktoken
promptflow[azure]
promptflow-tools | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/flow.dag.yaml.single-node | inputs:
chat_history:
type: list
default:
- inputs:
question: what is BERT?
outputs:
answer: BERT (Bidirectional Encoder Representations from Transformers) is a
language representation model that pre-trains deep bidirectional
representations from unlabeled text by jointly conditioning on both
left and right context in all layers. Unlike other language
representation models, BERT can be fine-tuned with just one additional
output layer to create state-of-the-art models for a wide range of
tasks such as question answering and language inference, without
substantial task-specific architecture modifications. BERT is
effective for both fine-tuning and feature-based approaches. It
obtains new state-of-the-art results on eleven natural language
processing tasks, including pushing the GLUE score to 80.5% (7.7%
point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute
improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point
absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point
absolute improvement).
pdf_url:
type: string
default: https://arxiv.org/pdf/1810.04805.pdf
question:
type: string
is_chat_input: true
default: what NLP tasks does it perform well?
outputs:
answer:
type: string
is_chat_output: true
reference: ${chat_with_pdf_tool.output.answer}
context:
type: string
reference: ${chat_with_pdf_tool.output.context}
nodes:
- name: setup_env
type: python
source:
type: code
path: setup_env.py
inputs:
conn: my_custom_connection
- name: chat_with_pdf_tool
type: python
source:
type: code
path: chat_with_pdf_tool.py
inputs:
history: ${inputs.chat_history}
pdf_url: ${inputs.pdf_url}
question: ${inputs.question}
ready: ${setup_env.output}
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/rewrite_question_tool.py | from promptflow import tool
from chat_with_pdf.rewrite_question import rewrite_question
@tool
def rewrite_question_tool(question: str, history: list, env_ready_signal: str):
return rewrite_question(question, history)
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/download_tool.py | from promptflow import tool
from chat_with_pdf.download import download
@tool
def download_tool(url: str, env_ready_signal: str) -> str:
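    # env_ready_signal is not used here; it is wired to setup_env's output in flow.dag.yaml
    # so that this node only runs after the environment has been prepared.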
return download(url)
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
chat_history:
type: list
default: []
pdf_url:
type: string
default: https://arxiv.org/pdf/1810.04805.pdf
question:
type: string
is_chat_input: true
default: what is BERT?
config:
type: object
default:
EMBEDDING_MODEL_DEPLOYMENT_NAME: text-embedding-ada-002
CHAT_MODEL_DEPLOYMENT_NAME: gpt-4
PROMPT_TOKEN_LIMIT: 3000
MAX_COMPLETION_TOKENS: 1024
VERBOSE: true
CHUNK_SIZE: 1024
CHUNK_OVERLAP: 64
outputs:
answer:
type: string
is_chat_output: true
reference: ${qna_tool.output.answer}
context:
type: string
reference: ${find_context_tool.output.context}
nodes:
- name: setup_env
type: python
source:
type: code
path: setup_env.py
inputs:
connection: open_ai_connection
config: ${inputs.config}
- name: download_tool
type: python
source:
type: code
path: download_tool.py
inputs:
url: ${inputs.pdf_url}
env_ready_signal: ${setup_env.output}
- name: build_index_tool
type: python
source:
type: code
path: build_index_tool.py
inputs:
pdf_path: ${download_tool.output}
- name: find_context_tool
type: python
source:
type: code
path: find_context_tool.py
inputs:
question: ${rewrite_question_tool.output}
index_path: ${build_index_tool.output}
- name: qna_tool
type: python
source:
type: code
path: qna_tool.py
inputs:
prompt: ${find_context_tool.output.prompt}
history: ${inputs.chat_history}
- name: rewrite_question_tool
type: python
source:
type: code
path: rewrite_question_tool.py
inputs:
question: ${inputs.question}
history: ${inputs.chat_history}
env_ready_signal: ${setup_env.output}
environment:
python_requirements_txt: requirements.txt
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/find_context_tool.py | from promptflow import tool
from chat_with_pdf.find_context import find_context
@tool
def find_context_tool(question: str, index_path: str):
prompt, context = find_context(question, index_path)
return {"prompt": prompt, "context": [c.text for c in context]}
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat-with-pdf.ipynb | %pip install -r requirements.txtimport promptflow
pf = promptflow.PFClient()
# List all the available connections
for c in pf.connections.list():
print(c.name + " (" + c.type + ")")# create needed connection
from promptflow.entities import AzureOpenAIConnection, OpenAIConnection
try:
conn_name = "open_ai_connection"
conn = pf.connections.get(name=conn_name)
print("using existing connection")
except:
# Follow https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal to create an Azure Open AI resource.
connection = AzureOpenAIConnection(
name=conn_name,
api_key="<user-input>",
api_base="<test_base>",
api_type="azure",
api_version="<test_version>",
)
# use this if you have an existing OpenAI account
# connection = OpenAIConnection(
# name=conn_name,
# api_key="<user-input>",
# )
conn = pf.connections.create_or_update(connection)
print("successfully created connection")
print(conn)

output = pf.flows.test(
".",
inputs={
"chat_history": [],
"pdf_url": "https://arxiv.org/pdf/1810.04805.pdf",
"question": "what is BERT?",
},
)
print(output)

flow_path = "."
data_path = "./data/bert-paper-qna-3-line.jsonl"
config_2k_context = {
"EMBEDDING_MODEL_DEPLOYMENT_NAME": "text-embedding-ada-002",
"CHAT_MODEL_DEPLOYMENT_NAME": "gpt-4", # change this to the name of your deployment if you're using Azure OpenAI
"PROMPT_TOKEN_LIMIT": 2000,
"MAX_COMPLETION_TOKENS": 256,
"VERBOSE": True,
"CHUNK_SIZE": 1024,
"CHUNK_OVERLAP": 64,
}
column_mapping = {
"question": "${data.question}",
"pdf_url": "${data.pdf_url}",
"chat_history": "${data.chat_history}",
"config": config_2k_context,
}
run_2k_context = pf.run(flow=flow_path, data=data_path, column_mapping=column_mapping)
pf.stream(run_2k_context)
print(run_2k_context)

pf.get_details(run_2k_context)

eval_groundedness_flow_path = "../../evaluation/eval-groundedness/"
eval_groundedness_2k_context = pf.run(
flow=eval_groundedness_flow_path,
run=run_2k_context,
column_mapping={
"question": "${run.inputs.question}",
"answer": "${run.outputs.answer}",
"context": "${run.outputs.context}",
},
display_name="eval_groundedness_2k_context",
)
pf.stream(eval_groundedness_2k_context)
print(eval_groundedness_2k_context)

pf.get_details(eval_groundedness_2k_context)

pf.get_metrics(eval_groundedness_2k_context)

pf.visualize(eval_groundedness_2k_context)

config_3k_context = {
"EMBEDDING_MODEL_DEPLOYMENT_NAME": "text-embedding-ada-002",
"CHAT_MODEL_DEPLOYMENT_NAME": "gpt-4", # change this to the name of your deployment if you're using Azure OpenAI
"PROMPT_TOKEN_LIMIT": 3000,
"MAX_COMPLETION_TOKENS": 256,
"VERBOSE": True,
"CHUNK_SIZE": 1024,
"CHUNK_OVERLAP": 64,
}
run_3k_context = pf.run(flow=flow_path, data=data_path, column_mapping=column_mapping)
pf.stream(run_3k_context)
print(run_3k_context)

eval_groundedness_3k_context = pf.run(
flow=eval_groundedness_flow_path,
run=run_3k_context,
column_mapping={
"question": "${run.inputs.question}",
"answer": "${run.outputs.answer}",
"context": "${run.outputs.context}",
},
display_name="eval_groundedness_3k_context",
)
pf.stream(eval_groundedness_3k_context)
print(eval_groundedness_3k_context)

pf.get_details(eval_groundedness_3k_context)

pf.visualize([eval_groundedness_2k_context, eval_groundedness_3k_context])
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/qna_tool.py | from promptflow import tool
from chat_with_pdf.qna import qna
@tool
def qna_tool(prompt: str, history: list):
stream = qna(prompt, convert_chat_history_to_chatml_messages(history))
answer = ""
    for chunk in stream:
        answer = answer + chunk
return {"answer": answer}
def convert_chat_history_to_chatml_messages(history):
messages = []
for item in history:
messages.append({"role": "user", "content": item["inputs"]["question"]})
messages.append({"role": "assistant", "content": item["outputs"]["answer"]})
return messages
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/flow.dag.yaml.multi-node | inputs:
chat_history:
type: list
default: []
pdf_url:
type: string
default: https://arxiv.org/pdf/1810.04805.pdf
question:
type: string
is_chat_input: true
default: what NLP tasks does it perform well?
outputs:
answer:
type: string
is_chat_output: true
reference: ${qna_tool.output.answer}
context:
type: string
reference: ${qna_tool.output.context}
nodes:
- name: setup_env
type: python
source:
type: code
path: setup_env.py
inputs:
conn: my_custom_connection
- name: download_tool
type: python
source:
type: code
path: download_tool.py
inputs:
url: ${inputs.pdf_url}
env_ready_signal: ${setup_env.output}
- name: build_index_tool
type: python
source:
type: code
path: build_index_tool.py
inputs:
pdf_path: ${download_tool.output}
- name: qna_tool
type: python
source:
type: code
path: qna_tool.py
inputs:
question: ${rewrite_question_tool.output}
index_path: ${build_index_tool.output}
history: ${inputs.chat_history}
- name: rewrite_question_tool
type: python
source:
type: code
path: rewrite_question_tool.py
inputs:
question: ${inputs.question}
history: ${inputs.chat_history}
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/download.py | import requests
import os
import re
from utils.lock import acquire_lock
from utils.logging import log
from constants import PDF_DIR
# Download a pdf file from a url and return the path to the file
def download(url: str) -> str:
path = os.path.join(PDF_DIR, normalize_filename(url) + ".pdf")
lock_path = path + ".lock"
with acquire_lock(lock_path):
if os.path.exists(path):
log("Pdf already exists in " + os.path.abspath(path))
return path
log("Downloading pdf from " + url)
response = requests.get(url)
with open(path, "wb") as f:
f.write(response.content)
return path
def normalize_filename(filename):
# Replace any invalid characters with an underscore
return re.sub(r"[^\w\-_. ]", "_", filename)
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/README.md | # Chat with PDF
This is a simple Python application that allows you to ask questions about the content of a PDF file and get answers.
It's a console application that you start with a URL to a PDF file as an argument. Once it's launched it will download the PDF and build an index of the content. Then when you ask a question, it will look up the index to retrieve the relevant content and post the question together with the relevant content to the OpenAI chat model (gpt-3.5-turbo or gpt-4) to get an answer.
## Screenshot - ask questions about BERT paper
![screenshot-chat-with-pdf](../assets/chat_with_pdf_console.png)
## How it works?
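At a high level, the app chains together the helpers in this folder: download the PDF, build (or reuse) a FAISS index over its text, rewrite the question so it stands on its own, retrieve the most relevant chunks as context, and finally ask the chat model. A simplified sketch of that pipeline (the real implementation is `chat_with_pdf()` in [main.py](main.py)) looks like this:

```python
# Simplified sketch of the chat_with_pdf() pipeline in main.py.
from download import download
from build_index import create_faiss_index
from rewrite_question import rewrite_question
from find_context import find_context
from qna import qna


def answer_question(question: str, pdf_url: str, history: list):
    pdf_path = download(pdf_url)  # download (or reuse) the PDF
    index_path = create_faiss_index(pdf_path)  # build (or reuse) the FAISS index
    standalone_question = rewrite_question(question, history)  # make the question self-contained
    prompt, context = find_context(standalone_question, index_path)  # retrieve relevant chunks
    return qna(prompt, history), context  # stream the answer from the chat model
```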
## Get started
### Create .env file in this folder with below content
```
OPENAI_API_BASE=<AOAI_endpoint>
OPENAI_API_KEY=<AOAI_key>
EMBEDDING_MODEL_DEPLOYMENT_NAME=text-embedding-ada-002
CHAT_MODEL_DEPLOYMENT_NAME=gpt-35-turbo
PROMPT_TOKEN_LIMIT=3000
MAX_COMPLETION_TOKENS=256
VERBOSE=false
CHUNK_SIZE=1024
CHUNK_OVERLAP=64
```
Note: CHAT_MODEL_DEPLOYMENT_NAME should point to a chat model like gpt-3.5-turbo or gpt-4
### Run the command line
```shell
python main.py <url-to-pdf-file>
``` | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/find_context.py | import faiss
from jinja2 import Environment, FileSystemLoader
import os
from utils.index import FAISSIndex
from utils.oai import OAIEmbedding, render_with_token_limit
from utils.logging import log
def find_context(question: str, index_path: str):
index = FAISSIndex(index=faiss.IndexFlatL2(1536), embedding=OAIEmbedding())
index.load(path=index_path)
snippets = index.query(question, top_k=5)
template = Environment(
loader=FileSystemLoader(os.path.dirname(os.path.abspath(__file__)))
).get_template("qna_prompt.md")
token_limit = int(os.environ.get("PROMPT_TOKEN_LIMIT"))
# Try to render the template with token limit and reduce snippet count if it fails
while True:
try:
prompt = render_with_token_limit(
template, token_limit, question=question, context=enumerate(snippets)
)
break
except ValueError:
snippets = snippets[:-1]
log(f"Reducing snippet count to {len(snippets)} to fit token limit")
return prompt, snippets
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/.env.example | # Azure OpenAI, uncomment below section if you want to use Azure OpenAI
# Note: EMBEDDING_MODEL_DEPLOYMENT_NAME and CHAT_MODEL_DEPLOYMENT_NAME are deployment names for Azure OpenAI
OPENAI_API_TYPE=azure
OPENAI_API_BASE=<your_AOAI_endpoint>
OPENAI_API_KEY=<your_AOAI_key>
OPENAI_API_VERSION=2023-05-15
EMBEDDING_MODEL_DEPLOYMENT_NAME=text-embedding-ada-002
CHAT_MODEL_DEPLOYMENT_NAME=gpt-4
# OpenAI, uncomment below section if you want to use OpenAI
# Note: EMBEDDING_MODEL_DEPLOYMENT_NAME and CHAT_MODEL_DEPLOYMENT_NAME are model names for OpenAI
#OPENAI_API_KEY=<your_openai_key>
#OPENAI_ORG_ID=<your_openai_org_id> # this is optional
#EMBEDDING_MODEL_DEPLOYMENT_NAME=text-embedding-ada-002
#CHAT_MODEL_DEPLOYMENT_NAME=gpt-4
PROMPT_TOKEN_LIMIT=2000
MAX_COMPLETION_TOKENS=1024
CHUNK_SIZE=256
CHUNK_OVERLAP=16
VERBOSE=True | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/build_index.py | import PyPDF2
import faiss
import os
from pathlib import Path
from utils.oai import OAIEmbedding
from utils.index import FAISSIndex
from utils.logging import log
from utils.lock import acquire_lock
from constants import INDEX_DIR
def create_faiss_index(pdf_path: str) -> str:
chunk_size = int(os.environ.get("CHUNK_SIZE"))
chunk_overlap = int(os.environ.get("CHUNK_OVERLAP"))
log(f"Chunk size: {chunk_size}, chunk overlap: {chunk_overlap}")
file_name = Path(pdf_path).name + f".index_{chunk_size}_{chunk_overlap}"
index_persistent_path = Path(INDEX_DIR) / file_name
index_persistent_path = index_persistent_path.resolve().as_posix()
lock_path = index_persistent_path + ".lock"
log("Index path: " + os.path.abspath(index_persistent_path))
with acquire_lock(lock_path):
if os.path.exists(os.path.join(index_persistent_path, "index.faiss")):
log("Index already exists, bypassing index creation")
return index_persistent_path
else:
if not os.path.exists(index_persistent_path):
os.makedirs(index_persistent_path)
log("Building index")
pdf_reader = PyPDF2.PdfReader(pdf_path)
text = ""
for page in pdf_reader.pages:
text += page.extract_text()
            # Chunk the text into segments of X characters with Y-character overlap, X=CHUNK_SIZE, Y=CHUNK_OVERLAP
segments = split_text(text, chunk_size, chunk_overlap)
log(f"Number of segments: {len(segments)}")
index = FAISSIndex(index=faiss.IndexFlatL2(1536), embedding=OAIEmbedding())
index.insert_batch(segments)
index.save(index_persistent_path)
log("Index built: " + index_persistent_path)
return index_persistent_path
# Split the text into chunks with CHUNK_SIZE and CHUNK_OVERLAP as character count
def split_text(text, chunk_size, chunk_overlap):
# Calculate the number of chunks
num_chunks = (len(text) - chunk_overlap) // (chunk_size - chunk_overlap)
# Split the text into chunks
chunks = []
for i in range(num_chunks):
start = i * (chunk_size - chunk_overlap)
end = start + chunk_size
chunks.append(text[start:end])
# Add the last chunk
chunks.append(text[num_chunks * (chunk_size - chunk_overlap):])
return chunks
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/qna_prompt.md | You're a smart assistant can answer questions based on provided context and previous conversation history between you and human.
Use the context to answer the question at the end, note that the context has order and importance - e.g. context #1 is more important than #2.
Try as much as you can to answer based on the provided the context, if you cannot derive the answer from the context, you should say you don't know.
Answer in the same language as the question.
# Context
{% for i, c in context %}
## Context #{{i+1}}
{{c.text}}
{% endfor %}
# Question
{{question}} | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/__init__.py | import sys
import os
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/rewrite_question_prompt.md | You are able to reason from previous conversation and the recent question, to come up with a rewrite of the question which is concise but with enough information that people without knowledge of previous conversation can understand the question.
A few examples:
# Example 1
## Previous conversation
user: Who is Bill Clinton?
assistant: Bill Clinton is an American politician who served as the 42nd President of the United States from 1993 to 2001.
## Question
user: When was he born?
## Rewritten question
When was Bill Clinton born?
# Example 2
## Previous conversation
user: What is BERT?
assistant: BERT stands for "Bidirectional Encoder Representations from Transformers." It is a natural language processing (NLP) model developed by Google.
user: What data was used for its training?
assistant: The BERT (Bidirectional Encoder Representations from Transformers) model was trained on a large corpus of publicly available text from the internet. It was trained on a combination of books, articles, websites, and other sources to learn the language patterns and relationships between words.
## Question
user: What NLP tasks can it perform well?
## Rewritten question
What NLP tasks can BERT perform well?
Now comes the actual work - please respond with the rewritten question in the same language as the question, nothing else.
## Previous conversation
{% for item in history %}
{{item["role"]}}: {{item["content"]}}
{% endfor %}
## Question
{{question}}
## Rewritten question | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/qna.py | import os
from utils.oai import OAIChat
def qna(prompt: str, history: list):
max_completion_tokens = int(os.environ.get("MAX_COMPLETION_TOKENS"))
chat = OAIChat()
stream = chat.stream(
messages=history + [{"role": "user", "content": prompt}],
max_tokens=max_completion_tokens,
)
return stream
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/test.ipynb | from main import chat_with_pdf, print_stream_and_return_full_answer
from dotenv import load_dotenv
load_dotenv()
bert_paper_url = "https://arxiv.org/pdf/1810.04805.pdf"
questions = [
"what is BERT?",
"what NLP tasks does it perform well?",
"is BERT suitable for NER?",
"is it better than GPT",
"when was GPT come up?",
"when was BERT come up?",
"so about same time?",
]
history = []
for q in questions:
stream, context = chat_with_pdf(q, bert_paper_url, history)
print("User: " + q, flush=True)
print("Bot: ", end="", flush=True)
answer = print_stream_and_return_full_answer(stream)
history = history + [
{"role": "user", "content": q},
{"role": "assistant", "content": answer},
] | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/main.py | import argparse
from dotenv import load_dotenv
import os
from qna import qna
from find_context import find_context
from rewrite_question import rewrite_question
from build_index import create_faiss_index
from download import download
from utils.lock import acquire_lock
from constants import PDF_DIR, INDEX_DIR
def chat_with_pdf(question: str, pdf_url: str, history: list):
with acquire_lock("create_folder.lock"):
if not os.path.exists(PDF_DIR):
os.mkdir(PDF_DIR)
if not os.path.exists(INDEX_DIR):
os.makedirs(INDEX_DIR)
pdf_path = download(pdf_url)
index_path = create_faiss_index(pdf_path)
q = rewrite_question(question, history)
prompt, context = find_context(q, index_path)
stream = qna(prompt, history)
return stream, context
def print_stream_and_return_full_answer(stream):
answer = ""
    for chunk in stream:
        print(chunk, end="", flush=True)
        answer = answer + chunk
print(flush=True)
return answer
def main_loop(url: str):
load_dotenv(os.path.join(os.path.dirname(__file__), ".env"), override=True)
history = []
while True:
question = input("\033[92m" + "$User (type q! to quit): " + "\033[0m")
if question == "q!":
break
stream, context = chat_with_pdf(question, url, history)
print("\033[92m" + "$Bot: " + "\033[0m", end=" ", flush=True)
answer = print_stream_and_return_full_answer(stream)
history = history + [
{"role": "user", "content": question},
{"role": "assistant", "content": answer},
]
def main():
parser = argparse.ArgumentParser(description="Ask questions about a PDF file")
parser.add_argument("url", help="URL to the PDF file")
args = parser.parse_args()
main_loop(args.url)
if __name__ == "__main__":
main()
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/constants.py | import os
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
PDF_DIR = os.path.join(BASE_DIR, ".pdfs")
INDEX_DIR = os.path.join(BASE_DIR, ".index/.pdfs/")
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/rewrite_question.py | from jinja2 import Environment, FileSystemLoader
import os
from utils.logging import log
from utils.oai import OAIChat, render_with_token_limit
def rewrite_question(question: str, history: list):
template = Environment(
loader=FileSystemLoader(os.path.dirname(os.path.abspath(__file__)))
).get_template("rewrite_question_prompt.md")
token_limit = int(os.environ["PROMPT_TOKEN_LIMIT"])
max_completion_tokens = int(os.environ["MAX_COMPLETION_TOKENS"])
# Try to render the prompt with token limit and reduce the history count if it fails
while True:
try:
prompt = render_with_token_limit(
template, token_limit, question=question, history=history
)
break
except ValueError:
history = history[:-1]
log(f"Reducing chat history count to {len(history)} to fit token limit")
chat = OAIChat()
rewritten_question = chat.generate(
messages=[{"role": "user", "content": prompt}], max_tokens=max_completion_tokens
)
log(f"Rewritten question: {rewritten_question}")
return rewritten_question
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/utils/oai.py | from typing import List
import openai
from openai.version import VERSION as OPENAI_VERSION
import os
import tiktoken
from jinja2 import Template
from .retry import (
retry_and_handle_exceptions,
retry_and_handle_exceptions_for_generator,
)
from .logging import log
def extract_delay_from_rate_limit_error_msg(text):
import re
pattern = r"retry after (\d+)"
match = re.search(pattern, text)
if match:
retry_time_from_message = match.group(1)
return float(retry_time_from_message)
else:
return 5 # default retry time
class OAI:
def __init__(self):
if OPENAI_VERSION.startswith("0."):
raise Exception(
"Please upgrade your OpenAI package to version >= 1.0.0 or "
"using the command: pip install --upgrade openai."
)
init_params = {}
api_type = os.environ.get("OPENAI_API_TYPE")
if os.getenv("OPENAI_API_VERSION") is not None:
init_params["api_version"] = os.environ.get("OPENAI_API_VERSION")
if os.getenv("OPENAI_ORG_ID") is not None:
init_params["organization"] = os.environ.get("OPENAI_ORG_ID")
if os.getenv("OPENAI_API_KEY") is None:
raise ValueError("OPENAI_API_KEY is not set in environment variables")
if os.getenv("OPENAI_API_BASE") is not None:
if api_type == "azure":
init_params["azure_endpoint"] = os.environ.get("OPENAI_API_BASE")
else:
init_params["base_url"] = os.environ.get("OPENAI_API_BASE")
init_params["api_key"] = os.environ.get("OPENAI_API_KEY")
# A few sanity checks
if api_type == "azure":
if init_params.get("azure_endpoint") is None:
raise ValueError(
"OPENAI_API_BASE is not set in environment variables, this is required when api_type==azure"
)
if init_params.get("api_version") is None:
raise ValueError(
"OPENAI_API_VERSION is not set in environment variables, this is required when api_type==azure"
)
if init_params["api_key"].startswith("sk-"):
raise ValueError(
"OPENAI_API_KEY should not start with sk- when api_type==azure, "
"are you using openai key by mistake?"
)
from openai import AzureOpenAI as Client
else:
from openai import OpenAI as Client
self.client = Client(**init_params)
class OAIChat(OAI):
@retry_and_handle_exceptions(
exception_to_check=(
openai.RateLimitError,
openai.APIStatusError,
openai.APIConnectionError,
KeyError,
),
max_retries=5,
extract_delay_from_error_message=extract_delay_from_rate_limit_error_msg,
)
    def generate(self, messages: list, **kwargs) -> str:
# chat api may return message with no content.
message = self.client.chat.completions.create(
model=os.environ.get("CHAT_MODEL_DEPLOYMENT_NAME"),
messages=messages,
**kwargs,
).choices[0].message
return getattr(message, "content", "")
@retry_and_handle_exceptions_for_generator(
exception_to_check=(
openai.RateLimitError,
openai.APIStatusError,
openai.APIConnectionError,
KeyError,
),
max_retries=5,
extract_delay_from_error_message=extract_delay_from_rate_limit_error_msg,
)
def stream(self, messages: list, **kwargs):
response = self.client.chat.completions.create(
model=os.environ.get("CHAT_MODEL_DEPLOYMENT_NAME"),
messages=messages,
stream=True,
**kwargs,
)
for chunk in response:
if not chunk.choices:
continue
if chunk.choices[0].delta.content:
yield chunk.choices[0].delta.content
else:
yield ""
class OAIEmbedding(OAI):
@retry_and_handle_exceptions(
exception_to_check=openai.RateLimitError,
max_retries=5,
extract_delay_from_error_message=extract_delay_from_rate_limit_error_msg,
)
def generate(self, text: str) -> List[float]:
return self.client.embeddings.create(
input=text, model=os.environ.get("EMBEDDING_MODEL_DEPLOYMENT_NAME")
).data[0].embedding
def count_token(text: str) -> int:
encoding = tiktoken.get_encoding("cl100k_base")
return len(encoding.encode(text))
def render_with_token_limit(template: Template, token_limit: int, **kwargs) -> str:
text = template.render(**kwargs)
token_count = count_token(text)
if token_count > token_limit:
message = f"token count {token_count} exceeds limit {token_limit}"
log(message)
raise ValueError(message)
return text
if __name__ == "__main__":
print(count_token("hello world, this is impressive"))
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/utils/__init__.py | __path__ = __import__("pkgutil").extend_path(__path__, __name__) # type: ignore
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/utils/lock.py | import contextlib
import os
import sys
if sys.platform.startswith("win"):
import msvcrt
else:
import fcntl
@contextlib.contextmanager
def acquire_lock(filename):
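    # Cross-platform advisory file lock: fcntl on POSIX, msvcrt on Windows.
    # Used to serialize PDF download and index creation across concurrent runs.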
if not sys.platform.startswith("win"):
with open(filename, "a+") as f:
fcntl.flock(f, fcntl.LOCK_EX)
yield f
fcntl.flock(f, fcntl.LOCK_UN)
else: # Windows
with open(filename, "w") as f:
msvcrt.locking(f.fileno(), msvcrt.LK_LOCK, 1)
yield f
msvcrt.locking(f.fileno(), msvcrt.LK_UNLCK, 1)
try:
os.remove(filename)
except OSError:
pass # best effort to remove the lock file
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/utils/logging.py | import os
def log(message: str):
verbose = os.environ.get("VERBOSE", "false")
if verbose.lower() == "true":
print(message, flush=True)
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/utils/retry.py | from typing import Tuple, Union, Optional, Type
import functools
import time
import random
def retry_and_handle_exceptions(
exception_to_check: Union[Type[Exception], Tuple[Type[Exception], ...]],
max_retries: int = 3,
initial_delay: float = 1,
exponential_base: float = 2,
jitter: bool = False,
    extract_delay_from_error_message: Optional[Callable[[str], float]] = None,
):
def deco_retry(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
delay = initial_delay
for i in range(max_retries):
try:
return func(*args, **kwargs)
except exception_to_check as e:
if i == max_retries - 1:
raise Exception(
"Func execution failed after {0} retries: {1}".format(
max_retries, e
)
)
delay *= exponential_base * (1 + jitter * random.random())
delay_from_error_message = None
if extract_delay_from_error_message is not None:
delay_from_error_message = extract_delay_from_error_message(
str(e)
)
final_delay = (
delay_from_error_message if delay_from_error_message else delay
)
print(
"Func execution failed. Retrying in {0} seconds: {1}".format(
final_delay, e
)
)
time.sleep(final_delay)
return wrapper
return deco_retry
def retry_and_handle_exceptions_for_generator(
exception_to_check: Union[Type[Exception], Tuple[Type[Exception], ...]],
max_retries: int = 3,
initial_delay: float = 1,
exponential_base: float = 2,
jitter: bool = False,
    extract_delay_from_error_message: Optional[Callable[[str], float]] = None,
):
def deco_retry(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
delay = initial_delay
for i in range(max_retries):
try:
for value in func(*args, **kwargs):
yield value
break
except exception_to_check as e:
if i == max_retries - 1:
raise Exception(
"Func execution failed after {0} retries: {1}".format(
max_retries, e
)
)
delay *= exponential_base * (1 + jitter * random.random())
delay_from_error_message = None
if extract_delay_from_error_message is not None:
delay_from_error_message = extract_delay_from_error_message(
str(e)
)
final_delay = (
delay_from_error_message if delay_from_error_message else delay
)
print(
"Func execution failed. Retrying in {0} seconds: {1}".format(
final_delay, e
)
)
time.sleep(final_delay)
return wrapper
return deco_retry
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/utils/index.py | import os
from typing import Iterable, List, Optional
from dataclasses import dataclass
from faiss import Index
import faiss
import pickle
import numpy as np
from .oai import OAIEmbedding as Embedding
@dataclass
class SearchResultEntity:
text: str = None
vector: List[float] = None
score: float = None
original_entity: dict = None
metadata: dict = None
INDEX_FILE_NAME = "index.faiss"
DATA_FILE_NAME = "index.pkl"
class FAISSIndex:
def __init__(self, index: Index, embedding: Embedding) -> None:
self.index = index
self.docs = {} # id -> doc, doc is (text, metadata)
self.embedding = embedding
def insert_batch(
self, texts: Iterable[str], metadatas: Optional[List[dict]] = None
) -> None:
documents = []
vectors = []
for i, text in enumerate(texts):
metadata = metadatas[i] if metadatas else {}
vector = self.embedding.generate(text)
documents.append((text, metadata))
vectors.append(vector)
self.index.add(np.array(vectors, dtype=np.float32))
self.docs.update(
{i: doc for i, doc in enumerate(documents, start=len(self.docs))}
)
pass
def query(self, text: str, top_k: int = 10) -> List[SearchResultEntity]:
vector = self.embedding.generate(text)
scores, indices = self.index.search(np.array([vector], dtype=np.float32), top_k)
docs = []
for j, i in enumerate(indices[0]):
if i == -1: # This happens when not enough docs are returned.
continue
doc = self.docs[i]
docs.append(
SearchResultEntity(text=doc[0], metadata=doc[1], score=scores[0][j])
)
return docs
def save(self, path: str) -> None:
faiss.write_index(self.index, os.path.join(path, INDEX_FILE_NAME))
# dump docs to pickle file
with open(os.path.join(path, DATA_FILE_NAME), "wb") as f:
pickle.dump(self.docs, f)
pass
def load(self, path: str) -> None:
self.index = faiss.read_index(os.path.join(path, INDEX_FILE_NAME))
with open(os.path.join(path, DATA_FILE_NAME), "rb") as f:
self.docs = pickle.load(f)
pass
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/.promptflow/flow.tools.json | {
"package": {},
"code": {
"setup_env.py": {
"type": "python",
"inputs": {
"connection": {
"type": [
"AzureOpenAIConnection",
"OpenAIConnection"
]
},
"config": {
"type": [
"object"
]
}
},
"source": "setup_env.py",
"function": "setup_env"
},
"download_tool.py": {
"type": "python",
"inputs": {
"url": {
"type": [
"string"
]
},
"env_ready_signal": {
"type": [
"string"
]
}
},
"source": "download_tool.py",
"function": "download_tool"
},
"build_index_tool.py": {
"type": "python",
"inputs": {
"pdf_path": {
"type": [
"string"
]
}
},
"source": "build_index_tool.py",
"function": "build_index_tool"
},
"find_context_tool.py": {
"type": "python",
"inputs": {
"question": {
"type": [
"string"
]
},
"index_path": {
"type": [
"string"
]
}
},
"source": "find_context_tool.py",
"function": "find_context_tool"
},
"qna_tool.py": {
"type": "python",
"inputs": {
"prompt": {
"type": [
"string"
]
},
"history": {
"type": [
"list"
]
}
},
"source": "qna_tool.py",
"function": "qna_tool"
},
"rewrite_question_tool.py": {
"type": "python",
"inputs": {
"question": {
"type": [
"string"
]
},
"history": {
"type": [
"list"
]
},
"env_ready_signal": {
"type": [
"string"
]
}
},
"source": "rewrite_question_tool.py",
"function": "rewrite_question_tool"
}
}
} | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/tests/base_test.py | import unittest
import os
import time
import traceback
class BaseTest(unittest.TestCase):
def setUp(self):
root = os.path.join(os.path.dirname(os.path.abspath(__file__)), "../../")
self.flow_path = os.path.join(root, "chat-with-pdf")
self.data_path = os.path.join(
self.flow_path, "data/bert-paper-qna-3-line.jsonl"
)
self.eval_groundedness_flow_path = os.path.join(
root, "../evaluation/eval-groundedness"
)
self.eval_perceived_intelligence_flow_path = os.path.join(
root, "../evaluation/eval-perceived-intelligence"
)
self.all_runs_generated = []
self.config_3k_context = {
"EMBEDDING_MODEL_DEPLOYMENT_NAME": "text-embedding-ada-002",
"CHAT_MODEL_DEPLOYMENT_NAME": "gpt-35-turbo",
"PROMPT_TOKEN_LIMIT": 3000,
"MAX_COMPLETION_TOKENS": 256,
"VERBOSE": True,
"CHUNK_SIZE": 1024,
"CHUNK_OVERLAP": 64,
}
self.config_2k_context = {
"EMBEDDING_MODEL_DEPLOYMENT_NAME": "text-embedding-ada-002",
"CHAT_MODEL_DEPLOYMENT_NAME": "gpt-35-turbo",
"PROMPT_TOKEN_LIMIT": 2000,
"MAX_COMPLETION_TOKENS": 256,
"VERBOSE": True,
"CHUNK_SIZE": 1024,
"CHUNK_OVERLAP": 64,
}
# Switch current working directory to the folder of this file
self.cwd = os.getcwd()
os.chdir(os.path.dirname(os.path.abspath(__file__)))
def tearDown(self):
# Switch back to the original working directory
os.chdir(self.cwd)
for run in self.all_runs_generated:
try:
self.pf.runs.archive(run.name)
except Exception as e:
print(e)
traceback.print_exc()
def create_chat_run(
self,
data=None,
column_mapping=None,
connections=None,
display_name="chat_run",
stream=True,
):
if column_mapping is None:
column_mapping = {
"chat_history": "${data.chat_history}",
"pdf_url": "${data.pdf_url}",
"question": "${data.question}",
"config": self.config_2k_context,
}
data = self.data_path if data is None else data
run = self.pf.run(
flow=self.flow_path,
data=data,
column_mapping=column_mapping,
connections=connections,
display_name=display_name,
tags={"unittest": "true"},
stream=stream,
)
self.all_runs_generated.append(run)
self.check_run_basics(run, display_name)
return run
def create_eval_run(
self,
eval_flow_path,
base_run,
column_mapping,
connections=None,
display_name_postfix="",
):
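        # Run an evaluation flow against an existing base run and register it for cleanup.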
display_name = eval_flow_path.split("/")[-1] + display_name_postfix
        eval_run = self.pf.run(
            flow=eval_flow_path,
            run=base_run,
            column_mapping=column_mapping,
            connections=connections,
            display_name=display_name,
            tags={"unittest": "true"},
            stream=True,
        )
        self.all_runs_generated.append(eval_run)
        self.check_run_basics(eval_run, display_name)
        return eval_run
def check_run_basics(self, run, display_name=None):
        self.assertIsNotNone(run)
        if display_name is not None:
            self.assertIn(display_name, run.display_name)
self.assertEqual(run.tags["unittest"], "true")
def run_eval_with_config(self, config: dict, display_name: str = None):
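        # End-to-end helper: run the chat flow with the given config, then score the
        # results with the groundedness and perceived-intelligence evaluation flows.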
run = self.create_chat_run(
column_mapping={
"question": "${data.question}",
"pdf_url": "${data.pdf_url}",
"chat_history": "${data.chat_history}",
"config": config,
},
display_name=display_name,
)
self.pf.stream(run) # wait for completion
self.check_run_basics(run)
eval_groundedness = self.create_eval_run(
self.eval_groundedness_flow_path,
run,
{
"question": "${run.inputs.question}",
"answer": "${run.outputs.answer}",
"context": "${run.outputs.context}",
},
display_name_postfix="_" + display_name,
)
self.pf.stream(eval_groundedness) # wait for completion
self.check_run_basics(eval_groundedness)
details = self.pf.get_details(eval_groundedness)
self.assertGreater(details.shape[0], 2)
metrics, elapsed = self.wait_for_metrics(eval_groundedness)
self.assertGreaterEqual(metrics["groundedness"], 0.0)
self.assertLessEqual(elapsed, 5) # metrics should be available within 5 seconds
eval_pi = self.create_eval_run(
self.eval_perceived_intelligence_flow_path,
run,
{
"question": "${run.inputs.question}",
"answer": "${run.outputs.answer}",
"context": "${run.outputs.context}",
},
display_name_postfix="_" + display_name,
)
self.pf.stream(eval_pi) # wait for completion
self.check_run_basics(eval_pi)
details = self.pf.get_details(eval_pi)
self.assertGreater(details.shape[0], 2)
metrics, elapsed = self.wait_for_metrics(eval_pi)
self.assertGreaterEqual(metrics["perceived_intelligence_score"], 0.0)
self.assertLessEqual(elapsed, 5) # metrics should be available within 5 seconds
return run, eval_groundedness, eval_pi
def wait_for_metrics(self, run):
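        # Metrics may be published shortly after the run completes, so poll up to
        # 3 times with a 5-second interval and report how long it took.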
start = time.time()
metrics = self.pf.get_metrics(run)
cnt = 3
while len(metrics) == 0 and cnt > 0:
time.sleep(5)
metrics = self.pf.get_metrics(run)
cnt -= 1
end = time.time()
return metrics, end - start
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/tests/chat_with_pdf_test.py | import os
import unittest
import promptflow
from base_test import BaseTest
from promptflow._sdk._errors import InvalidRunStatusError
class TestChatWithPDF(BaseTest):
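    """Tests for the chat-with-pdf flow using the local promptflow.PFClient."""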
def setUp(self):
super().setUp()
self.pf = promptflow.PFClient()
def tearDown(self) -> None:
return super().tearDown()
def test_run_chat_with_pdf(self):
result = self.pf.test(
flow=self.flow_path,
inputs={
"chat_history": [],
"pdf_url": "https://arxiv.org/pdf/1810.04805.pdf",
"question": "BERT stands for?",
"config": self.config_2k_context,
},
)
print(result)
        self.assertIn(
            "Bidirectional Encoder Representations from Transformers",
            result["answer"],
        )
def test_bulk_run_chat_with_pdf(self):
run = self.create_chat_run()
self.pf.stream(run) # wait for completion
self.assertEqual(run.status, "Completed")
details = self.pf.get_details(run)
self.assertEqual(details.shape[0], 3)
def test_eval(self):
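        # Compare the 2k-context and 3k-context configs end to end.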
run_2k, eval_groundedness_2k, eval_pi_2k = self.run_eval_with_config(
self.config_2k_context,
display_name="chat_with_pdf_2k_context",
)
run_3k, eval_groundedness_3k, eval_pi_3k = self.run_eval_with_config(
self.config_3k_context,
display_name="chat_with_pdf_3k_context",
)
self.check_run_basics(run_2k)
self.check_run_basics(run_3k)
self.check_run_basics(eval_groundedness_2k)
self.check_run_basics(eval_pi_2k)
self.check_run_basics(eval_groundedness_3k)
self.check_run_basics(eval_pi_3k)
def test_bulk_run_valid_mapping(self):
run = self.create_chat_run(
column_mapping={
"question": "${data.question}",
"pdf_url": "${data.pdf_url}",
"chat_history": "${data.chat_history}",
"config": self.config_2k_context,
}
)
self.pf.stream(run) # wait for completion
self.assertEqual(run.status, "Completed")
details = self.pf.get_details(run)
self.assertEqual(details.shape[0], 3)
def test_bulk_run_mapping_missing_one_column(self):
data_path = os.path.join(
self.flow_path, "data/invalid-data-missing-column.jsonl"
)
with self.assertRaises(InvalidRunStatusError):
self.create_chat_run(
column_mapping={
"question": "${data.question}",
},
data=data_path
)
def test_bulk_run_invalid_mapping(self):
with self.assertRaises(InvalidRunStatusError):
self.create_chat_run(
column_mapping={
"question": "${data.question_not_exist}",
"pdf_url": "${data.pdf_url}",
"chat_history": "${data.chat_history}",
}
)
if __name__ == "__main__":
unittest.main()
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/tests/azure_chat_with_pdf_test.py | import unittest
import promptflow.azure as azure
from azure.identity import DefaultAzureCredential, InteractiveBrowserCredential
from base_test import BaseTest
import os
from promptflow._sdk._errors import InvalidRunStatusError
class TestChatWithPDFAzure(BaseTest):
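    """Tests for the chat-with-pdf flow using the Azure promptflow.azure.PFClient."""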
def setUp(self):
super().setUp()
self.data_path = os.path.join(
self.flow_path, "data/bert-paper-qna-3-line.jsonl"
)
try:
credential = DefaultAzureCredential()
            # Check whether the given credential can obtain a token successfully.
credential.get_token("https://management.azure.com/.default")
except Exception:
            # Fall back to InteractiveBrowserCredential in case DefaultAzureCredential does not work.
credential = InteractiveBrowserCredential()
self.pf = azure.PFClient.from_config(credential=credential)
def tearDown(self) -> None:
return super().tearDown()
def test_bulk_run_chat_with_pdf(self):
run = self.create_chat_run(display_name="chat_with_pdf_batch_run")
self.pf.stream(run) # wait for completion
self.assertEqual(run.status, "Completed")
details = self.pf.get_details(run)
self.assertEqual(details.shape[0], 3)
def test_eval(self):
run_2k, eval_groundedness_2k, eval_pi_2k = self.run_eval_with_config(
self.config_2k_context,
display_name="chat_with_pdf_2k_context",
)
run_3k, eval_groundedness_3k, eval_pi_3k = self.run_eval_with_config(
self.config_3k_context,
display_name="chat_with_pdf_3k_context",
)
self.check_run_basics(run_2k)
self.check_run_basics(run_3k)
self.check_run_basics(eval_groundedness_2k)
self.check_run_basics(eval_pi_2k)
self.check_run_basics(eval_groundedness_3k)
self.check_run_basics(eval_pi_3k)
def test_bulk_run_valid_mapping(self):
data = os.path.join(self.flow_path, "data/bert-paper-qna-1-line.jsonl")
run = self.create_chat_run(
data=data,
column_mapping={
"question": "${data.question}",
"pdf_url": "${data.pdf_url}",
"chat_history": "${data.chat_history}",
"config": self.config_2k_context,
},
)
self.pf.stream(run) # wait for completion
self.assertEqual(run.status, "Completed")
details = self.pf.get_details(run)
self.assertEqual(details.shape[0], 1)
def test_bulk_run_mapping_missing_one_column(self):
run = self.create_chat_run(
column_mapping={
"question": "${data.question}",
"pdf_url": "${data.pdf_url}",
},
)
self.pf.stream(run) # wait for completion
        # The run itself won't fail; only the line runs inside it will fail.
self.assertEqual(run.status, "Completed")
# TODO: get line run results when supported.
def test_bulk_run_invalid_mapping(self):
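        # Submit without streaming so the failure only surfaces when we wait for
        # completion below.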
run = self.create_chat_run(
column_mapping={
"question": "${data.question_not_exist}",
"pdf_url": "${data.pdf_url}",
"chat_history": "${data.chat_history}",
},
stream=False,
)
with self.assertRaises(InvalidRunStatusError):
self.pf.stream(run) # wait for completion
if __name__ == "__main__":
unittest.main()
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/data/bert-paper-qna.jsonl | {"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "What is the name of the new language representation model introduced in the document?", "answer": "BERT", "context": "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers."}
{"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "What is the main difference between BERT and previous language representation models?", "answer": "BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers.", "context": "Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers."}
{"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "What is the advantage of fine-tuning BERT over using feature-based approaches?", "answer": "Fine-tuning BERT reduces the need for many heavily-engineered taskspecific architectures and transfers all parameters to initialize end-task model parameters.", "context": "We show that pre-trained representations reduce the need for many heavily-engineered taskspecific architectures. BERT is the first finetuning based representation model that achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks, outperforming many task-specific architectures."}
{"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "What are the two unsupervised tasks used to pre-train BERT?", "answer": "Masked LM and next sentence prediction", "context": "In order to train a deep bidirectional representation, we simply mask some percentage of the input tokens at random, and then predict those masked tokens. We refer to this procedure as a \"masked LM\" (MLM), although it is often referred to as a Cloze task in the literature (Taylor, 1953). In addition to the masked language model, we also use a \"next sentence prediction\" task that jointly pretrains text-pair representations."}
{"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "How does BERT handle single sentence and sentence pair inputs?", "answer": "It uses a special classification token ([CLS]) at the beginning of every input sequence and a special separator token ([SEP]) to separate sentences or mark the end of a sequence.", "context": "To make BERT handle a variety of down-stream tasks, our input representation is able to unambiguously represent both a single sentence and a pair of sentences (e.g., h Question, Answeri) in one token sequence. The first token of every sequence is always a special classification token ([CLS]). The final hidden state corresponding to this token is used as the aggregate sequence representation for classification tasks. Sentence pairs are packed together into a single sequence. We differentiate the sentences in two ways. First, we separate them with a special token ([SEP]). Second, we add a learned embedding to every token indicating whether it belongs to sentence A or sentence B."}
{"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "What are the three types of embeddings used to construct the input representation for BERT?", "answer": "Token embeddings, segment embeddings and position embeddings", "context": "For a given token, its input representation is constructed by summing the corresponding token, segment, and position embeddings. A visualization of this construction can be seen in Figure 2."}
{"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "What is the size of the vocabulary used by BERT?", "answer": "30,000", "context": "We use WordPiece embeddings (Wu et al., 2016) with a 30,000 token vocabulary."}
{"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "What are the two model sizes reported in the paper for BERT?", "answer": "BERTBASE (L=12, H=768, A=12, Total Parameters=110M) and BERTLARGE (L=24, H=1024, A=16, Total Parameters=340M)", "context": "We primarily report results on two model sizes: BERTBASE (L=12, H=768, A=12, Total Parameters=110M) and BERTLARGE (L=24, H=1024, A=16, Total Parameters=340M)."}
{"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "How does BERT predict the start and end positions of an answer span in SQuAD?", "answer": "It uses two vectors S and E whose dot products with the final hidden vectors of each token denote scores for start and end positions.", "context": "We only introduce a start vector S ∈ R H and an end vector E ∈ R H during fine-tuning. The probability of word i being the start of the answer span is computed as a dot product between Ti and S followed by a softmax over all of the words in the paragraph: Pi = e S·Ti P j e S·Tj . The analogous formula is used for the end of the answer span."}
{"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "What is the main benefit of using a masked language model over a standard left-to-right or right-to-left language model?", "answer": "It enables the representation to fuse the left and the right context, which allows to pretrain a deep bidirectional Transformer.", "context": "Unlike left-to-right language model pre-training, the MLM objective enables the representation to fuse the left and the right context, which allows us to pretrain a deep bidirectional Transformer."}
{"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "How much does GPT4 API cost?", "answer": "I don't know"} | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/data/bert-paper-qna-1-line.jsonl | {"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "What is the name of the new language representation model introduced in the document?", "answer": "BERT", "context": "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers."} | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/data/bert-paper-qna-3-line.jsonl | {"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "What is the main difference between BERT and previous language representation models?", "answer": "BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers.", "context": "Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers."}
{"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "What is the size of the vocabulary used by BERT?", "answer": "30,000", "context": "We use WordPiece embeddings (Wu et al., 2016) with a 30,000 token vocabulary."}
{"pdf_url":"https://grs.pku.edu.cn/docs/2018-03/20180301083100898652.pdf", "chat_history":[], "question": "论文写作中论文引言有什么注意事项?", "answer":"", "context":""} | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/data/invalid-data-missing-column.jsonl | {"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf"}
| 0 |
promptflow_repo/promptflow/examples/flows/evaluation | promptflow_repo/promptflow/examples/flows/evaluation/eval-classification-accuracy/data.jsonl | {"groundtruth": "App","prediction": "App"}
{"groundtruth": "Channel","prediction": "Channel"}
{"groundtruth": "Academic","prediction": "Academic"}
| 0 |