# Tune prompts using variants
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](faq.md#stable-vs-experimental).
:::
To better understand this part, please read [Quick start](./quick-start.md) and [Run and evaluate a flow](./run-and-evaluate-a-flow/index.md) first.
## What is a variant and why should we care
To help you tune prompts more efficiently, we introduce [the concept of variants](../../concepts/concept-variants.md). Variants let you test the model's behavior under different conditions, such as different wording, formatting, context, temperature, or top-k, and compare and find the prompt and configuration that maximizes the model's accuracy, diversity, or coherence.
## Create a run with a different variant node
In this example, we use the flow [web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification), whose node `summarize_text_content` has two variants: `variant_0` and `variant_1`. The difference between them is the input parameters:
```yaml
...
nodes:
- name: summarize_text_content
  use_variants: true
...
node_variants:
  summarize_text_content:
    default_variant_id: variant_0
    variants:
      variant_0:
        node:
          type: llm
          source:
            type: code
            path: summarize_text_content.jinja2
          inputs:
            deployment_name: text-davinci-003
            max_tokens: '128'
            temperature: '0.2'
            text: ${fetch_text_content_from_url.output}
          provider: AzureOpenAI
          connection: open_ai_connection
          api: completion
          module: promptflow.tools.aoai
      variant_1:
        node:
          type: llm
          source:
            type: code
            path: summarize_text_content__variant_1.jinja2
          inputs:
            deployment_name: text-davinci-003
            max_tokens: '256'
            temperature: '0.3'
            text: ${fetch_text_content_from_url.output}
          provider: AzureOpenAI
          connection: open_ai_connection
          api: completion
          module: promptflow.tools.aoai
```
You can check the whole flow definition in [flow.dag.yaml](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification/flow.dag.yaml).
Now we will create a variant run which uses node `summarize_text_content`'s variant `variant_1`.
Assuming you are in working directory `<path-to-the-sample-repo>/examples/flows/standard`
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Note that we pass `--variant` to specify which variant of the node should be run.
```sh
pf run create --flow web-classification --data web-classification/data.jsonl --variant '${summarize_text_content.variant_1}' --column-mapping url='${data.url}' --stream --name my_first_variant_run
```
:::
:::{tab-item} SDK
:sync: SDK
```python
from promptflow import PFClient
pf = PFClient() # get a promptflow client
flow = "web-classification"
data = "web-classification/data.jsonl"

# use the variant1 of the summarize_text_content node.
variant_run = pf.run(
    flow=flow,
    data=data,
    variant="${summarize_text_content.variant_1}",  # use variant 1.
    column_mapping={"url": "${data.url}"},
)
pf.stream(variant_run)
```
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
![img](../media/how-to-guides/vscode_variants_folded.png)
![img](../media/how-to-guides/vscode_variants_unfold.png)
:::
::::
After the variant run is created, you can evaluate it with an evaluation flow, just like you evaluate a standard flow run.
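For example, the SDK sketch below evaluates the variant run with the `eval-classification-accuracy` example flow; the evaluation flow path and column mapping are assumptions based on the web-classification sample, so adjust them to your own flows:
```python
from promptflow import PFClient

pf = PFClient()
# Evaluate the variant run; "run" links the evaluation to the variant run created above.
eval_run = pf.run(
    flow="../evaluation/eval-classification-accuracy",  # assumed path to the example evaluation flow
    data="web-classification/data.jsonl",
    run="my_first_variant_run",
    column_mapping={
        "groundtruth": "${data.answer}",
        "prediction": "${run.outputs.category}",
    },
)
pf.stream(eval_run)
print(pf.get_metrics(eval_run))
```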
## Next steps
Learn more about:
- [Run and evaluate a flow](./run-and-evaluate-a-flow/index.md)
- [Deploy a flow](./deploy-a-flow/index.md)
- [Prompt flow in Azure AI](../cloud/azureai/quick-start.md)
# Manage connections
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](faq.md#stable-vs-experimental).
:::
[Connection](../../concepts/concept-connections.md) helps securely store and manage secret keys or other sensitive credentials required for interacting with LLM (Large Language Models) and other external tools, for example, Azure Content Safety.
:::{note}
To use azureml workspace connection locally, refer to [this guide](../how-to-guides/set-global-configs.md#connectionprovider).
:::
## Connection types
There are multiple types of connections supported in promptflow, which can be simply categorized into **strong type connection** and **custom connection**. Strong type connections include AzureOpenAIConnection, OpenAIConnection, etc. A custom connection is a generic connection type that can be used to store custom-defined credentials.
We are going to use AzureOpenAIConnection as an example for strong type connection, and CustomConnection to show how to manage connections.
## Create a connection
:::{note}
If you are using `WSL` or other OS without default keyring storage backend, you may encounter `StoreConnectionEncryptionKeyError`, please refer to [FAQ](./faq.md#connection-creation-failed-with-storeconnectionencryptionkeyerror) for the solutions.
:::
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Each strong type connection has a corresponding YAML schema. The example below shows the AzureOpenAIConnection YAML:
```yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/AzureOpenAIConnection.schema.json
name: azure_open_ai_connection
type: azure_open_ai
api_key: "<to-be-replaced>"
api_base: "https://<name>.openai.azure.com/"
api_type: "azure"
api_version: "2023-03-15-preview"
```
The custom connection YAML has two dict fields, for secrets and configs. The example below shows the CustomConnection YAML:
```yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/CustomConnection.schema.json
name: custom_connection
type: custom
configs:
  endpoint: "<your-endpoint>"
  other_config: "other_value"
secrets: # required
  my_key: "<your-api-key>"
```
After preparing the yaml files, use the CLI commands below to create them:
```bash
# Override keys with --set to avoid yaml file changes
pf connection create -f <path-to-azure-open-ai-connection> --set api_key=<your-api-key>
# Create the custom connection
pf connection create -f <path-to-custom-connection> --set configs.endpoint=<endpoint> secrets.my_key=<your-api-key>
```
The expected result is as follows if the connection is created successfully.
![img](../media/how-to-guides/create_connection.png)
:::
:::{tab-item} SDK
:sync: SDK
Using SDK, each connection type has a corresponding class to create a connection. The following code snippet shows how to import the required class and create the connection:
```python
from promptflow import PFClient
from promptflow.entities import AzureOpenAIConnection, CustomConnection
# Get a pf client to manage connections
pf = PFClient()
# Initialize an AzureOpenAIConnection object
connection = AzureOpenAIConnection(
    name="my_azure_open_ai_connection",
    api_key="<your-api-key>",
    api_base="<your-endpoint>",
    api_version="2023-03-15-preview",
)
# Create the connection, note that api_key will be scrubbed in the returned result
result = pf.connections.create_or_update(connection)
print(result)
# Initialize a custom connection object
connection = CustomConnection(
    name="my_custom_connection",
    # Secrets is a required field for custom connection
    secrets={"my_key": "<your-api-key>"},
    configs={"endpoint": "<your-endpoint>", "other_config": "other_value"},
)
# Create the connection, note that all secret values will be scrubbed in the returned result
result = pf.connections.create_or_update(connection)
print(result)
```
:::
:::{tab-item} VS Code Extension
:sync: VSC
On the VS Code primary sidebar > prompt flow pane, you can find the connections pane to manage your local connections. Click the "+" icon on the top right of it and follow the pop-up instructions to create your new connection.
![img](../media/how-to-guides/vscode_create_connection.png)
![img](../media/how-to-guides/vscode_create_connection_1.png)
:::
::::
## Update a connection
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
The commands below show how to update existing connections with new values:
```bash
# Update an azure open ai connection with a new api base
pf connection update -n my_azure_open_ai_connection --set api_base='new_value'
# Update a custom connection
pf connection update -n my_custom_connection --set configs.other_config='new_value'
```
:::
:::{tab-item} SDK
:sync: SDK
The code snippet below shows how to update existing connections with new values:
```python
# Update an azure open ai connection with a new api base
connection = pf.connections.get(name="my_azure_open_ai_connection")
connection.api_base = "new_value"
connection.api_key = "<original-key>" # secrets are required when updating connection using sdk
result = pf.connections.create_or_update(connection)
print(connection)
# Update a custom connection
connection = pf.connections.get(name="my_custom_connection")
connection.configs["other_config"] = "new_value"
connection.secrets = {"key1": "val1"} # secrets are required when updating connection using sdk
result = pf.connections.create_or_update(connection)
print(connection)
```
:::
:::{tab-item} VS Code Extension
:sync: VSC
On the VS Code primary sidebar > prompt flow pane, you can find the connections pane to manage your local connections. Right-click an item in the connection list to update or delete your connections.
![img](../media/how-to-guides/vscode_update_delete_connection.png)
:::
::::
## List connections
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
The list connections command returns the connections in JSON list format. Note that all secrets and API keys are scrubbed:
```bash
pf connection list
```
:::
:::{tab-item} SDK
:sync: SDK
Listing connections returns a list of connection objects. Note that all secrets and API keys are scrubbed:
```python
from promptflow import PFClient
# Get a pf client to manage connections
pf = PFClient()
# List and print connections
connection_list = pf.connections.list()
for connection in connection_list:
    print(connection)
```
:::
:::{tab-item} VS Code Extension
:sync: VSC
![img](../media/how-to-guides/vscode_list_connection.png)
:::
::::
## Delete a connection
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Delete a connection with the following command:
```bash
pf connection delete -n <connection_name>
```
:::
:::{tab-item} SDK
:sync: SDK
Delete a connection with the following code snippet:
```python
from promptflow import PFClient
# Get a pf client to manage connections
pf = PFClient()
# Delete the connection with specific name
pf.connections.delete(name="my_custom_connection")
```
:::
:::{tab-item} VS Code Extension
:sync: VSC
On the VS Code primary sidebar > prompt flow pane, you can find the connections pane to manage your local connections. Right-click an item in the connection list to update or delete your connections.
![img](../media/how-to-guides/vscode_update_delete_connection.png)
:::
::::
## Next steps
- Read more details about [connection concepts](../../concepts/concept-connections.md).
- Try the [connection samples](https://github.com/microsoft/promptflow/blob/main/examples/connections/connection.ipynb).
- [Consume connections from Azure AI](../cloud/azureai/consume-connections-from-azure-ai.md).
# Add conditional control to a flow
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](faq.md#stable-vs-experimental).
:::
In prompt flow, we support control logic through activate config, such as if-else and switch. Activate config enables conditional execution of nodes within your flow, ensuring that specific actions are taken only when the specified conditions are met.
This guide will help you learn how to use activate config to add conditional control to your flow.
## Prerequisites
Please ensure that your promptflow version is greater than `0.1.0b5`.
## Usage
Each node in your flow can have an associated activate config, specifying when it should execute and when it should bypass. If a node has activate config, it will only be executed when the activate condition is met. The configuration consists of two essential components:
- `activate.when`: The condition that triggers the execution of the node. It can be based on the outputs of a previous node, or the inputs of the flow.
- `activate.is`: The condition's value, which can be a constant value of string, boolean, integer, double.
You can manually change the flow.dag.yaml in the flow folder or use the visual editor in VS Code Extension to add activate config to nodes in the flow.
::::{tab-set}
:::{tab-item} YAML
:sync: YAML
You can add activate config in the node section of flow yaml.
```yaml
activate:
  when: ${node.output}
  is: true
```
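For instance, a fuller sketch of a node carrying activate config might look like the following; the node and input names are hypothetical and only illustrate where the `activate` section sits:
```yaml
- name: generate_thank_you_note
  type: python
  source:
    type: code
    path: generate_thank_you_note.py
  inputs:
    text: ${inputs.text}
  # run only when the upstream classification output equals "positive"
  activate:
    when: ${classify_sentiment.output}
    is: positive
```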
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
- Click `Visual editor` in the flow.dag.yaml to enter the flow interface.
![visual_editor](../media/how-to-guides/conditional-flow-with-activate/visual_editor.png)
- Click on the `Activation config` section in the node you want to add and fill in the values for "when" and "is".
![activate_config](../media/how-to-guides/conditional-flow-with-activate/activate_config.png)
:::
::::
### Further details and important notes
1. If a node using the python tool has an input that references a node that may be bypassed, please provide a default value for this input whenever possible. If the input has no default value, the output of the bypassed node will be set to None.
![provide_default_value](../media/how-to-guides/conditional-flow-with-activate/provide_default_value.png)
2. It is not recommended to directly connect nodes that might be bypassed to the flow's outputs. If such a node is connected, the output will be None and a warning will be raised.
![output_bypassed](../media/how-to-guides/conditional-flow-with-activate/output_bypassed.png)
3. In a conditional flow, if a node has activate config, we will always use this config to determine whether the node should be bypassed. If a node is bypassed, its status will be marked as "Bypassed", as shown in the figure below. There are three situations in which a node is bypassed.
![bypassed_nodes](../media/how-to-guides/conditional-flow-with-activate/bypassed_nodes.png)
(1) If a node has activate config and the value of `activate.when` is not equal to `activate.is`, it will be bypassed. If you want to force a node to always be executed, you can set the activate config to `when dummy is dummy`, which always meets the activate condition.
![activate_condition_always_met](../media/how-to-guides/conditional-flow-with-activate/activate_condition_always_met.png)
(2) If a node has activate config and the node pointed to by `activate.when` is bypassed, it will be bypassed.
![activate_when_bypassed](../media/how-to-guides/conditional-flow-with-activate/activate_when_bypassed.png)
(3) If a node does not have activate config but depends on other nodes that have been bypassed, it will be bypassed.
![dependencies_bypassed](../media/how-to-guides/conditional-flow-with-activate/dependencies_bypassed.png)
## Example flow
Let's illustrate how to use activate config with practical examples.
- If-Else scenario: Learn how to develop a conditional flow for if-else scenarios. [View Example](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/conditional-flow-for-if-else)
- Switch scenario: Explore conditional flow for switch scenarios. [View Example](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/conditional-flow-for-switch)
## Next steps
- [Run and evaluate a flow](./run-and-evaluate-a-flow/index.md)
# Quick Start
This guide will walk you through your first steps with the prompt flow code-first experience.
**Prerequisite** - To make the most of this tutorial, you'll need:
- Know how to program with Python :)
- A basic understanding of Machine Learning can be beneficial, but it's not mandatory.
**Learning Objectives** - Upon completing this tutorial, you should learn how to:
- Setup your python environment to run prompt flow
- Clone a sample flow & understand what's a flow
- Understand how to edit the flow using visual editor or yaml
- Test the flow using your favorite experience: CLI, SDK or VS Code Extension.
## Set up your dev environment
1. A Python environment with version 3.9 or higher, such as 3.10. It's recommended to use the Python environment manager [miniconda](https://docs.conda.io/en/latest/miniconda.html). After you have installed miniconda, run the commands below to create a Python environment:
```bash
conda create --name pf python=3.9
conda activate pf
```
2. Install `promptflow` and `promptflow-tools`.
```sh
pip install promptflow promptflow-tools
```
3. Check the installation.
```bash
# should print promptflow version, e.g. "0.1.0b3"
pf -v
```
## Understand what's a flow
A flow, represented as a YAML file, is a DAG of functions, which are connected via input/output dependencies and executed based on the topology by the prompt flow executor. See [Flows](../../concepts/concept-flows.md) for more details.
### Get the flow sample
Clone the sample repo and check flows in folder [examples/flows](https://github.com/microsoft/promptflow/tree/main/examples/flows).
```bash
git clone https://github.com/microsoft/promptflow.git
```
### Understand flow directory
The sample used in this tutorial is the [web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification) flow, which categorizes URLs into several predefined classes. Classification is a traditional machine learning task, and this sample illustrates how to perform classification using GPT and prompts.
```bash
cd promptflow/examples/flows/standard/web-classification
```
A flow directory is a directory that contains all contents of a flow. Structure of flow folder:
- **flow.dag.yaml**: The flow definition with inputs/outputs, nodes, tools and variants for authoring purpose.
- **.promptflow/flow.tools.json**: It contains tools meta referenced in `flow.dag.yaml`.
- **Source code files (.py, .jinja2)**: User managed, the code scripts referenced by tools.
- **requirements.txt**: Python package dependencies for this flow.
![flow_dir](../media/how-to-guides/quick-start/flow_directory.png)
In order to run this specific flow, you need to install its requirements first.
```sh
pip install -r requirements.txt
```
### Understand the flow yaml
The entry file of a flow directory is [`flow.dag.yaml`](https://github.com/microsoft/promptflow/blob/main/examples/flows/standard/web-classification/flow.dag.yaml) which describes the `DAG(Directed Acyclic Graph)` of a flow. Below is a sample of flow DAG:
![flow_dag](../media/how-to-guides/quick-start/flow_dag.png)
This graph is rendered by VS Code extension which will be introduced in the next section.
### Using VS Code Extension to visualize the flow
_Note: Prompt flow VS Code Extension is highly recommended for flow development and debugging._
1. Prerequisites for VS Code extension.
- Install latest stable version of [VS Code](https://code.visualstudio.com/)
- Install [VS Code Python extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python)
2. Install [Prompt flow for VS Code extension](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow)
3. Select python interpreter
![vscode](../media/how-to-guides/quick-start/vs_code_interpreter_0.png)
![vscode](../media/how-to-guides/quick-start/vs_code_interpreter_1.png)
4. Open the DAG in VS Code. You can open the `flow.dag.yaml` as a YAML file, or you can open it in the `visual editor`.
![vscode](../media/how-to-guides/quick-start/vs_code_dag_0.png)
## Develop and test your flow
### How to edit the flow
To test your flow with varying input data, you have the option to modify the default input. If you are well-versed with the structure, you may also add or remove nodes to alter the flow's arrangement.
```yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
  url:
    type: string
    # change the default value of input url here
    default: https://play.google.com/store/apps/details?id=com.twitter.android
...
```
See more details of this topic in [Develop a flow](./develop-a-flow/index.md).
### Create necessary connections
:::{note}
If you are using `WSL` or other OS without default keyring storage backend, you may encounter `StoreConnectionEncryptionKeyError`, please refer to [FAQ](./faq.md#connection-creation-failed-with-storeconnectionencryptionkeyerror) for the solutions.
:::
The [`connection`](../../concepts/concept-connections.md) helps securely store and manage secret keys or other sensitive credentials required for interacting with LLMs and other external tools, for example, Azure Content Safety.
The sample flow [web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification) uses connection `open_ai_connection` inside, e.g. `classify_with_llm` node needs to talk to `llm` using the connection.
We need to set up the connection if we haven't added it before. Once created, the connection will be stored in local db and can be used in any flow.
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Firstly we need a connection yaml file `connection.yaml`:
If you are using Azure OpenAI, prepare your resource following this [instruction](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal) and get your `api_key` if you don't have one.
```yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/AzureOpenAIConnection.schema.json
name: open_ai_connection
type: azure_open_ai
api_key: <test_key>
api_base: <test_base>
api_type: azure
api_version: <test_version>
```
If you are using OpenAI, sign up for an account via the [OpenAI website](https://openai.com/), log in and [find your personal API key](https://platform.openai.com/account/api-keys), then use this yaml:
```yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/OpenAIConnection.schema.json
name: open_ai_connection
type: open_ai
api_key: "<user-input>"
organization: "" # optional
```
Then we can use CLI command to create the connection.
```sh
pf connection create -f connection.yaml
```
More command details can be found in [CLI reference](../reference/pf-command-reference.md).
:::
:::{tab-item} SDK
:sync: SDK
In SDK, connections can be created and managed with `PFClient`.
```python
from promptflow import PFClient
from promptflow.entities import AzureOpenAIConnection

# PFClient can help manage your runs and connections.
pf = PFClient()
try:
    conn_name = "open_ai_connection"
    conn = pf.connections.get(name=conn_name)
    print("using existing connection")
except Exception:
    connection = AzureOpenAIConnection(
        name=conn_name,
        api_key="<test_key>",
        api_base="<test_base>",
        api_type="azure",
        api_version="<test_version>",
    )
    # use this if you have an existing OpenAI account
    # from promptflow.entities import OpenAIConnection
    # connection = OpenAIConnection(
    #     name=conn_name,
    #     api_key="<user-input>",
    # )
    conn = pf.connections.create_or_update(connection)
    print("successfully created connection")
print(conn)
```
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
1. Click the promptflow icon to enter promptflow control panel
![vsc_add_connection](../media/how-to-guides/quick-start/vs_code_connection_0.png)
2. Create your connection.
![vsc_add_connection](../media/how-to-guides/quick-start/vs_code_connection_1.png)
![vsc_add_connection](../media/how-to-guides/quick-start/vs_code_connection_2.png)
![vsc_add_connection](../media/how-to-guides/quick-start/vs_code_connection_3.png)
:::
::::
Learn about more actions, like deleting a connection, in [Manage connections](./manage-connections.md).
### Test the flow
:::{admonition} Note
Testing a flow will NOT create a batch run record; therefore you can't use commands like `pf run show-details` to get the run information. If you want to persist the run record, see [Run and evaluate a flow](./run-and-evaluate-a-flow/index.md)
:::
Assuming you are in working directory `promptflow/examples/flows/standard/`
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Change the default input to the value you want to test.
![q_0](../media/how-to-guides/quick-start/flow-directory-and-dag-yaml.png)
```sh
pf flow test --flow web-classification # "web-classification" is the directory name
```
![flow-test-output-cli](../media/how-to-guides/quick-start/flow-test-output-cli.png)
:::
:::{tab-item} SDK
:sync: SDK
The return value of the `test` function is the flow/node outputs.
```python
from promptflow import PFClient
pf = PFClient()
flow_path = "web-classification" # "web-classification" is the directory name
# Test flow
flow_inputs = {"url": "https://www.youtube.com/watch?v=o5ZQyXaAv1g", "answer": "Channel", "evidence": "Url"} # The inputs of the flow.
flow_result = pf.test(flow=flow_path, inputs=flow_inputs)
print(f"Flow outputs: {flow_result}")
# Test node in the flow
node_name = "fetch_text_content_from_url" # The node name in the flow.
node_inputs = {"url": "https://www.youtube.com/watch?v=o5ZQyXaAv1g"} # The inputs of the node.
node_result = pf.test(flow=flow_path, inputs=node_inputs, node=node_name)
print(f"Node outputs: {node_result}")
```
![Flow test outputs](../media/how-to-guides/quick-start/flow_test_output.png)
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
Use the code lens action on the top of the yaml editor to trigger flow test
![dag_yaml_flow_test](../media/how-to-guides/quick-start/test_flow_dag_yaml.gif)
Click the run flow button on the top of the visual editor to trigger flow test.
![visual_editor_flow_test](../media/how-to-guides/quick-start/test_flow_dag_editor.gif)
:::
::::
See more details of this topic in [Initialize and test a flow](./init-and-test-a-flow.md).
## Next steps
Learn more on how to:
- [Develop a flow](./develop-a-flow/index.md): details on how to develop a flow by writing a flow yaml from scratch.
- [Initialize and test a flow](./init-and-test-a-flow.md): details on how to develop a flow from scratch or from existing code.
- [Add conditional control to a flow](./add-conditional-control-to-a-flow.md): how to use activate config to add conditional control to a flow.
- [Run and evaluate a flow](./run-and-evaluate-a-flow/index.md): run and evaluate the flow using multi line data file.
- [Deploy a flow](./deploy-a-flow/index.md): how to deploy the flow as a web app.
- [Manage connections](./manage-connections.md): how to manage the endpoints/secrets information to access external services including LLMs.
- [Prompt flow in Azure AI](../cloud/azureai/quick-start.md): run and evaluate flows in Azure AI where you can collaborate with your team better.
And you can also check our [examples](https://github.com/microsoft/promptflow/tree/main/examples), especially:
- [Getting started with prompt flow](https://github.com/microsoft/promptflow/blob/main/examples/tutorials/get-started/quickstart.ipynb): the notebook covering the python sdk experience for sample introduced in this doc.
- [Tutorial: Chat with PDF](https://github.com/microsoft/promptflow/blob/main/examples/tutorials/e2e-development/chat-with-pdf.md): An end-to-end tutorial on how to build a high quality chat application with prompt flow, including flow development and evaluation with metrics.
# Set global configs
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](faq.md#stable-vs-experimental).
:::
Promptflow supports setting global configs to avoid passing the same parameters to each command. The global configs are stored in a yaml file, which is located at `~/.promptflow/pf.yaml` by default.
The config file is shared between promptflow extension and sdk/cli. Promptflow extension controls each config through UI, so the following sections will show how to set global configs using promptflow cli.
## Set config
```shell
pf config set <config_name>=<config_value>
```
For example:
```shell
pf config set connection.provider="azureml://subscriptions/<your-subscription>/resourceGroups/<your-resourcegroup>/providers/Microsoft.MachineLearningServices/workspaces/<your-workspace>"
```
## Show config
The following command will get all configs and show them as json format:
```shell
pf config show
```
After running the above config set command, show command will return the following result:
```json
{
"connection": {
"provider": "azureml://subscriptions/<your-subscription>/resourceGroups/<your-resourcegroup>/providers/Microsoft.MachineLearningServices/workspaces/<your-workspace>"
}
}
```
## Supported configs
### connection.provider
The connection provider defaults to "local". There are 3 possible provider values.
#### local
Set connection provider to local with `connection.provider=local`.
Connections will be saved locally. `PFClient`(or `pf connection` commands) will [manage local connections](manage-connections.md). Consequently, the flow will be executed using these local connections.
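For example, switching back to local connections is a single command:
```shell
pf config set connection.provider=local
```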
#### full azure machine learning workspace resource id
Set connection provider to a specific workspace with:
```
connection.provider=azureml://subscriptions/<your-subscription>/resourceGroups/<your-resourcegroup>/providers/Microsoft.MachineLearningServices/workspaces/<your-workspace>
```
When getting or listing connections, `PFClient` (or `pf connection` commands) will return workspace connections, and flows will be executed using these workspace connections.
_Secrets for workspace connection will not be shown by those commands, which means you may see empty dict `{}` for custom connections._
:::{note}
Command `create`, `update` and `delete` are not supported for workspace connections, please manage it in workspace portal, az ml cli or AzureML SDK.
:::
#### azureml
In addition to the full resource id, you can designate the connection provider as "azureml" with `connection.provider=azureml`. In this case,
promptflow will attempt to retrieve the workspace configuration by searching for `.azureml/config.json` from the current directory, then progressively from its parent folders. So it's possible to set the workspace configuration for different flows by placing the config file in the project folder.
The expected format of the config file is as follows:
```json
{
"workspace_name": "<your-workspace-name>",
"resource_group": "<your-resource-group>",
"subscription_id": "<your-subscription-id>"
}
```
> 💡 Tips
> In addition to the CLI command line setting approach, we also support setting this connection provider through the VS Code extension UI. [Click here to learn more](../cloud/azureai/consume-connections-from-azure-ai.md).
# Frequently asked questions (FAQ)
## General ##
### Stable vs experimental
Prompt flow provides both stable and experimental features in the same SDK.
|Feature status | Description |
|----------------|----------------|
Stable features | **Production ready** <br/><br/> These features are recommended for most use cases and production environments. They are updated less frequently than experimental features.|
Experimental features | **Developmental** <br/><br/> These features are newly developed capabilities & updates that may not be ready or fully tested for production usage. While the features are typically functional, they can include some breaking changes. Experimental features are used to iron out SDK-breaking bugs, and will only receive updates for the duration of the testing period. Experimental features are also referred to as features that are in **preview**. <br/> As the name indicates, the experimental (preview) features are for experimentation and are **not considered bug-free or stable**. For this reason, we only recommend experimental features to advanced users who wish to try out early versions of capabilities and updates, and who intend to participate in the reporting of bugs and glitches.
### OpenAI 1.x support
Please use the following command to upgrade promptflow for openai 1.x support:
```
pip install "promptflow>=1.1.0"
pip install "promptflow-tools>=1.0.0"
```
Note that the command above will upgrade your openai package to a version later than 1.0.0,
which may introduce breaking changes to custom tool code.
Reach [OpenAI migration guide](https://github.com/openai/openai-python/discussions/742) for more details.
## Troubleshooting ##
### Connection creation failed with StoreConnectionEncryptionKeyError
```
Connection creation failed with StoreConnectionEncryptionKeyError: System keyring backend service not found in your operating system. See https://pypi.org/project/keyring/ to install requirement for different operating system, or 'pip install keyrings.alt' to use the third-party backend.
```
This error is raised because keyring can't find an available backend to store keys.
For example [macOS Keychain](https://en.wikipedia.org/wiki/Keychain_%28software%29) and [Windows Credential Locker](https://learn.microsoft.com/en-us/windows/uwp/security/credential-locker)
are valid keyring backends.
To resolve this issue, install the third-party keyring backend or write your own keyring backend, for example:
`pip install keyrings.alt`
For more detail about keyring third-party backend, please refer to 'Third-Party Backends' in [keyring](https://pypi.org/project/keyring/).
### Pf visualize show error: "tcgetpgrp failed: Not a tty"
If you are using WSL, this is a known issue for `webbrowser` under WSL; see [this issue](https://github.com/python/cpython/issues/89752) for more information. Please try to upgrade your WSL to 22.04 or later, this issue should be resolved.
If you are still facing this issue with WSL 22.04 or later, or you are not even using WSL, please open an issue to us.
### Installed tool not appearing in VSCode Extension tool list
After installing a tool package via `pip install [tool-package-name]`, the new tool may not immediately appear in the tool list within the VSCode Extension, as shown below:
![VSCode Extension tool list](../media/how-to-guides/vscode-tool-list.png)
This is often due to outdated cache. To refresh the tool list and make newly installed tools visible:
1. Open the VSCode Extension window.
2. Bring up the command palette by pressing "Ctrl+Shift+P".
3. Type and select the "Developer: Reload Webviews" command.
4. Wait a moment for the tool list to refresh.
Reloading clears the previous cache and populates the tool list with any newly installed tools, so the missing tools become visible.
### Set logging level
Promptflow uses the `logging` module to log messages. You can set the logging level via the environment variable `PF_LOGGING_LEVEL`; valid values include `CRITICAL`, `ERROR`, `WARNING`, `INFO`, `DEBUG`, defaulting to `INFO`.
Below are the serving logs after setting `PF_LOGGING_LEVEL` to `DEBUG`:
![img](../media/how-to-guides/pf_logging_level.png)
Compare to the serving logs with `WARNING` level:
![img](../media/how-to-guides/pf_logging_level_warning.png)
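For example, a sketch of enabling debug logging for a local serving session (the flow path reuses the web-classification sample and is an assumption):
```
# set debug-level logging for the current shell session
export PF_LOGGING_LEVEL=DEBUG
pf flow serve --source web-classification --port 8080
```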
### Set environment variables
Currently, promptflow supports the following environment variables:
**PF_WORKER_COUNT**
Effective for batch runs only: the count of parallel workers in batch run execution.
The default value is 4 (it was 16 when promptflow<1.4.0).
Please take the following points into consideration when changing it:
1. The concurrency should not exceed the total data rows count. Otherwise, the execution may slow down due to additional time spent on process startup and shutdown.
2. High parallelism may cause the underlying API calls to reach the rate limit of your LLM endpoint. In that case, you can decrease `PF_WORKER_COUNT` or increase the rate limit; please refer to [this doc](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/quota) on quota management. Then you can refer to this expression to set up the concurrency:
```
PF_WORKER_COUNT <= TPM * duration_seconds / token_count / 60
```
- TPM: tokens per minute, the capacity rate limit of your LLM endpoint
- duration_seconds: single flow run duration in seconds
- token_count: single flow run token count
For example, if your endpoint TPM (tokens per minute) is 50K and a single flow run consumes 10K tokens over 30s, please do not set `PF_WORKER_COUNT` above 2 (50,000 * 30 / 10,000 / 60 = 2.5). This is a rough estimation. Please also consider collaboration (teammates using the same endpoint at the same time) and tokens consumed by deployed inference endpoints, the playground, and other cases which might send requests to your LLM endpoints.
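The sketch below spells out that arithmetic for the example values above:
```python
# Rough estimate using the expression above, with the example's numbers.
tpm = 50_000           # endpoint capacity: tokens per minute
duration_seconds = 30  # duration of a single flow run
token_count = 10_000   # tokens consumed by a single flow run

max_workers = tpm * duration_seconds / token_count / 60
print(max_workers)  # 2.5 -> set PF_WORKER_COUNT to at most 2
```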
**PF_BATCH_METHOD**
Valid for batch run only. Optional values: 'spawn', 'fork'.
**spawn**
1. The child processes will not inherit resources of the parent process, therefore, each process needs to reinitialize the resources required for the flow, which may use more system memory.
2. Starting a process is slow because it will take some time to initialize the necessary resources.
**fork**
1. Use the copy-on-write mechanism, the child processes will inherit all the resources of the parent process, thereby using less system memory.
2. The process starts faster as it doesn't need to reinitialize resources.
Note: Windows only supports spawn, Linux and macOS support both spawn and fork.
#### How to configure environment variables
1. Configure environment variables in ```flow.dag.yaml```. Example:
```
inputs: []
outputs: []
nodes: []
environment_variables:
  PF_WORKER_COUNT: 2
  PF_BATCH_METHOD: "spawn"
  MY_CUSTOM_SETTING: my_custom_value
```
2. Specify environment variables when submitting runs.
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Use this parameter: ```--environment-variable``` to specify environment variables.
Example: ```--environment-variable PF_WORKER_COUNT="2" PF_BATCH_METHOD="spawn"```.
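Put together, a hypothetical full submission could look like this, reusing the web-classification sample paths:
```
pf run create --flow web-classification --data web-classification/data.jsonl --column-mapping url='${data.url}' --environment-variable PF_WORKER_COUNT="2" PF_BATCH_METHOD="spawn"
```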
:::
:::{tab-item} SDK
:sync: SDK
Specify environment variables when creating run. Example:
``` python
from azure.identity import DefaultAzureCredential
from promptflow.azure import PFClient

# The cloud PFClient targets an Azure ML workspace; the values below are placeholders.
pf = PFClient(
    credential=DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<AML_WORKSPACE_NAME>",
)
flow = "web-classification"
data = "web-classification/data.jsonl"
runtime = "example-runtime-ci"
environment_variables = {"PF_WORKER_COUNT": "2", "PF_BATCH_METHOD": "spawn"}
# create run
base_run = pf.run(
    flow=flow,
    data=data,
    runtime=runtime,
    environment_variables=environment_variables,
)
```
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
VSCode Extension supports specifying environment variables only when submitting batch runs.
Specify environment variables in ```batch_run_create.yaml```. Example:
``` yaml
name: flow_name
display_name: display_name
flow: flow_folder
data: data_file
column_mapping:
  customer_info: <Please select a data input>
  history: <Please select a data input>
environment_variables:
  PF_WORKER_COUNT: "2"
  PF_BATCH_METHOD: "spawn"
```
:::
::::
#### Priority
The environment variables specified when submitting runs always take precedence over the environment variables in the flow.dag.yaml file.
# Process image in flow
PromptFlow defines a contract to represent image data.
## Data class
`promptflow.contracts.multimedia.Image`
Image class is a subclass of `bytes`, so you can access the binary data by directly using the object. It has an extra attribute `source_url` to store the origin URL of the image, which is useful if you want to pass the URL instead of the content of the image to APIs like the GPT-4V model.
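As a quick illustration, the hypothetical tool below relies only on the two facts documented above, namely that `Image` subclasses `bytes` and carries a `source_url` attribute:
```python
from promptflow import tool
from promptflow.contracts.multimedia import Image

@tool
def describe_image(input_image: Image) -> str:
    # Image subclasses bytes, so the object itself is the binary data.
    size_kb = len(input_image) / 1024
    # source_url holds the origin url of the image, if one was set.
    origin = input_image.source_url or "unknown source"
    return f"{size_kb:.1f} KB image from {origin}"
```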
## Data type in flow input
Set the type of flow input to `image` and promptflow will treat it as an image.
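A minimal sketch of such a declaration in `flow.dag.yaml` (the input name `input_image` is an assumption for illustration):
```yaml
inputs:
  input_image:
    type: image
```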
## Reference image in prompt template
In prompt templates that support images (e.g. in the OpenAI GPT-4V tool), use markdown syntax to denote that a template input is an image: `![image]({{test_image}})`. In this case, `test_image` will be substituted with base64 or source_url (if set) before being sent to the LLM model.
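For example, a minimal GPT-4V style prompt template could look like the sketch below, where `test_image` is assumed to be an input of type image:
```jinja
system:
You are a helpful assistant that describes images.

user:
What can you see in the following image?
![image]({{test_image}})
```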
## Serialization/Deserialization
Promptflow uses a special dict to represent image.
`{"data:image/<mime-type>;<representation>": "<value>"}`
- `<mime-type>` can be a standard html [mime](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types/Common_types) image type. Setting it to a specific type helps preview the image correctly, or it can be `*` for an unknown type.
- `<representation>` is the image serialized representation, there are 3 supported types:
- url
It can point to a publicly accessible web URL. E.g.
{"data:image/png;url": "https://developer.microsoft.com/_devcom/images/logo-ms-social.png"}
- base64
It can be the base64 encoding of the image. E.g.
{"data:image/png;base64": "iVBORw0KGgoAAAANSUhEUgAAAGQAAABLAQMAAAC81rD0AAAABGdBTUEAALGPC/xhBQAAACBjSFJNAAB6JgAAgIQAAPoAAACA6AAAdTAAAOpgAAA6mAAAF3CculE8AAAABlBMVEUAAP7////DYP5JAAAAAWJLR0QB/wIt3gAAAAlwSFlzAAALEgAACxIB0t1+/AAAAAd0SU1FB+QIGBcKN7/nP/UAAAASSURBVDjLY2AYBaNgFIwCdAAABBoAAaNglfsAAAAZdEVYdGNvbW1lbnQAQ3JlYXRlZCB3aXRoIEdJTVDnr0DLAAAAJXRFWHRkYXRlOmNyZWF0ZQAyMDIwLTA4LTI0VDIzOjEwOjU1KzAzOjAwkHdeuQAAACV0RVh0ZGF0ZTptb2RpZnkAMjAyMC0wOC0yNFQyMzoxMDo1NSswMzowMOEq5gUAAAAASUVORK5CYII="}
- path
It can reference an image file on local disk. Both absolute and relative paths are supported, but when the serialized image representation is stored in a file, a path relative to the containing folder of that file is recommended, as in the case of flow IO data. E.g.
{"data:image/png;path": "./my-image.png"}
Please note that `path` representation is not supported in Deployment scenario.
## Batch Input data
Batch input data containing images can be in 2 formats:
1. The same JSONL format as regular batch input, except that some columns may be serialized image data or composite data types (dict/list) containing images. The serialized images can only be url or base64. E.g.
```json
{"question": "How many colors are there in the image?", "input_image": {"data:image/png;url": "https://developer.microsoft.com/_devcom/images/logo-ms-social.png"}}
{"question": "What's this image about?", "input_image": {"data:image/png;url": "https://developer.microsoft.com/_devcom/images/404.png"}}
```
2. A folder containing a JSONL file under the root path, which contains serialized images in file reference format. The referenced files are stored in the folder and their relative paths to the root path are used as the path in the file reference. Here is a sample batch input; note that the name of `input.jsonl` is arbitrary as long as it's a JSONL file:
```
BatchInputFolder
|----input.jsonl
|----image1.png
|----image2.png
```
Content of `input.jsonl`
```json
{"question": "How many colors are there in the image?", "input_image": {"data:image/png;path": "image1.png"}}
{"question": "What's this image about?", "input_image": {"data:image/png;path": "image2.png"}}
```
# How-to Guides
Simple and short articles grouped by topics, each introducing a core feature of prompt flow and how you can use it to address your specific use cases.
```{toctree}
:maxdepth: 1
develop-a-flow/index
init-and-test-a-flow
add-conditional-control-to-a-flow
run-and-evaluate-a-flow/index
tune-prompts-with-variants
execute-flow-as-a-function
deploy-a-flow/index
enable-streaming-mode
manage-connections
manage-runs
set-global-configs
develop-a-tool/index
process-image-in-flow
faq
```
# Initialize and test a flow
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](faq.md#stable-vs-experimental).
:::
This document shows how to initialize a flow and test it.
## Initialize flow
Initializing a flow creates a flow folder with code/prompts and the yaml definition of the flow.
### Initialize flow from scratch
Promptflow can [create three types of flow folder](https://promptflow.azurewebsites.net/concepts/concept-flows.html#flow-types):
- standard: Basic structure of flow folder.
- chat: Chat flow is designed for conversational application development, building upon the capabilities of standard flow and providing enhanced support for chat inputs/outputs and chat history management.
- evaluation: Evaluation flows are special types of flows that assess how well the outputs of a flow align with specific criteria and goals.
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
```bash
# Create a flow
pf flow init --flow <flow-name>
# Create a chat flow
pf flow init --flow <flow-name> --type chat
```
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
Use VS Code explorer pane > directory icon > right click > the "New flow in this directory" action. Follow the popped out dialog to initialize your flow in the target folder.
![img](../media/how-to-guides/init-and-test-a-flow/vscode_new_flow_1.png)
Alternatively, you can use the "Create new flow" action on the prompt flow pane > quick access section to create a new flow
![img](../media/how-to-guides/init-and-test-a-flow/vscode_new_flow_2.png)
:::
::::
Structure of flow folder:
- **flow.dag.yaml**: The flow definition with inputs/outputs, nodes, tools and variants for authoring purpose.
- **.promptflow/flow.tools.json**: It contains tools meta referenced in `flow.dag.yaml`.
- **Source code files (.py, .jinja2)**: User managed, the code scripts referenced by tools.
- **requirements.txt**: Python package dependencies for this flow.
![init_flow_folder](../media/how-to-guides/init-and-test-a-flow/flow_folder.png)
### Create from existing code
You need to pass the path of the tool script to `entry`, and also pass the prompt template dict to `prompt-template`, where the key is the input name of the tool and the value is the path to the prompt template.
Promptflow CLI can generate the yaml definitions needed for prompt flow from the existing folder, using the tools script and prompt templates.
```bash
# Create a flow in existing folder
pf flow init --flow <flow-name> --entry <tool-script-path> --function <tool-function-name> --prompt-template <prompt-param-name>=<prompt-tempate-path>
```
Take [customer-intent-extraction](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/customer-intent-extraction) for example, which demonstrates how to convert LangChain code into a prompt flow.
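A concrete invocation for that sample could look like the sketch below; the file, function, and template names are assumptions based on the sample's layout:
```bash
pf flow init --flow intent-copilot \
  --entry intent.py \
  --function extract_intent \
  --prompt-template chat_prompt=user_intent_zero_shot.jinja2
```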
![init_output](../media/how-to-guides/init-and-test-a-flow/flow_init_output.png)
In this case, the promptflow CLI generates `flow.dag.yaml`, `.promptflow/flow.tools.json` and `extract_intent_tool.py`, which is a python tool in the flow.
![init_files](../media/how-to-guides/init-and-test-a-flow/flow_init_files.png)
## Test a flow
:::{admonition} Note
Testing a flow will NOT create a batch run record; therefore you can't use commands like `pf run show-details` to get the run information. If you want to persist the run record, see [Run and evaluate a flow](./run-and-evaluate-a-flow/index.md)
:::
Promptflow also provides ways to test the initialized flow or flow node. It will help you quickly test your flow.
### Visual editor in VS Code for prompt flow
::::{tab-set}
:::{tab-item} VS Code Extension
:sync: VS Code Extension
Open the flow.dag.yaml file of your flow. On the top of the yaml editor you can find the "Visual editor" action. Use it to open the Visual editor with GUI support.
![img](../media/how-to-guides/vscode_open_visual_editor.png)
:::
::::
### Test flow
You can use the CLI, SDK, or VS Code extension to test the flow.
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
```bash
# Test flow
pf flow test --flow <flow-name>
# Test flow with specified variant
pf flow test --flow <flow-name> --variant '${<node-name>.<variant-name>}'
```
The log and result of flow test will be displayed in the terminal.
![flow test](../media/how-to-guides/init-and-test-a-flow/flow_test.png)
Promptflow CLI will generate test logs and outputs in `.promptflow`:
- **flow.detail.json**: Detailed info of the flow test, including the result of each node.
- **flow.log**: The log of flow test.
- **flow.output.json**: The result of flow test.
![flow_output_files](../media/how-to-guides/init-and-test-a-flow/flow_output_files.png)
:::
:::{tab-item} SDK
:sync: SDK
The return value of the `test` function is the flow outputs.
```python
from promptflow import PFClient
pf_client = PFClient()
# Test flow
inputs = {"<flow_input_name>": "<flow_input_value>"} # The inputs of the flow.
flow_result = pf_client.test(flow="<flow_folder_path>", inputs=inputs)
print(f"Flow outputs: {flow_result}")
```
The log and result of flow test will be displayed in the terminal.
![flow test](../media/how-to-guides/init-and-test-a-flow/flow_test.png)
Promptflow CLI will generate test logs and outputs in `.promptflow`:
- **flow.detail.json**: Detailed info of the flow test, including the result of each node.
- **flow.log**: The log of flow test.
- **flow.output.json**: The result of flow test.
![flow_output_files](../media/how-to-guides/init-and-test-a-flow/flow_output_files.png)
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
You can use the action either on the default yaml editor or the visual editor to trigger flow test. See the snapshots below:
![img](../media/how-to-guides/vscode_test_flow_yaml.png)
![img](../media/how-to-guides/vscode_test_flow_visual.png)
:::
::::
### Test a single node in the flow
You can test a single python node in the flow. It will use the data you provide, or the node's default value, as input, and execute only the specified node with that input.
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Execute this command to test a node of the flow.
```bash
# Test flow node
pf flow test --flow <flow-name> --node <node-name>
```
The log and result of the flow node test will be displayed in the terminal. And the details of the node test will be generated to `.promptflow/flow-<node-name>.node.detail.json`.
:::
:::{tab-item} SDK
:sync: SDK
Execute the code below to test a node of the flow. The return value of the `test` function is the node outputs.
```python
from promptflow import PFClient
pf_client = PFClient()
# Test a node in the flow
inputs = {"<node_input_name>": "<node_input_value>"}  # The inputs of the node.
node_result = pf_client.test(flow="<flow_folder_path>", inputs=inputs, node="<node_name>")
print(f"Node outputs: {node_result}")
```
The log and result of the flow node test will be displayed in the terminal. And the details of the node test will be generated to `.promptflow/flow-<node-name>.node.detail.json`.
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
The prompt flow extension provides inline actions in both default yaml editor and visual editor to trigger single node runs.
![img](../media/how-to-guides/vscode_single_node_test.png)
![img](../media/how-to-guides/vscode_single_node_test_visual.png)
:::
::::
### Test with interactive mode
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Promptflow CLI provides a way to start an interactive chat session for chat flows. Use the command below to start an interactive chat session:
```bash
# Chat in the flow
pf flow test --flow <flow-name> --interactive
```
After executing this command, you can interact with the chat flow in the terminal. Press **Enter** to send a message to the chat flow, and quit with **Ctrl+C**.
Promptflow CLI will distinguish the output of different roles by color, <span style="color:Green">User input</span>, <span style="color:Gold">Bot output</span>, <span style="color:Blue">Flow script output</span>, <span style="color:Cyan">Node output</span>.
We use this [chat flow](https://github.com/microsoft/promptflow/tree/main/examples/flows/chat/basic-chat) to show how interactive mode works.
![chat](../media/how-to-guides/init-and-test-a-flow/chat.png)
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
If a flow contains chat inputs or chat outputs in the flow interface, there will be a selection when triggering flow test. You can select the interactive mode if you want to.
![img](../media/how-to-guides/vscode_interactive_chat.png)
![img](../media/how-to-guides/vscode_interactive_chat_1.png)
:::
::::
When the [LLM node](https://promptflow.azurewebsites.net/tools-reference/llm-tool.html) in the chat flow is connected to the flow output, the promptflow SDK streams the results of the LLM node.
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
The flow result will be streamed in the terminal as shown below.
![streaming_output](../media/how-to-guides/init-and-test-a-flow/streaming_output.gif)
:::
:::{tab-item} SDK
:sync: SDK
When streaming, the LLM node's value in the `test` function's return is a generator; you can consume the result this way:
```python
from promptflow import PFClient
pf_client = PFClient()
# Test flow
inputs = {"<flow_input_name>": "<flow_input_value>"} # The inputs of the flow.
flow_result = pf_client.test(flow="<flow_folder_path>", inputs=inputs)
for item in flow_result["<LLM_node_output_name>"]:
    print(item)
```
:::
::::
### Debug a single node in the flow
You can debug a single python node in VS Code with the extension.
::::{tab-set}
:::{tab-item} VS Code Extension
:sync: VS Code Extension
Breakpoints and debugging functionalities are available for the Python steps in your flow. Just set the breakpoints and use the debug actions on either the default yaml editor or the visual editor.
![img](../media/how-to-guides/vscode_single_node_debug_yaml.png)
![img](../media/how-to-guides/vscode_single_node_debug_visual.png)
:::
::::
## Next steps
- [Add conditional control to a flow](./add-conditional-control-to-a-flow.md)
# Execute flow as a function
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](faq.md#stable-vs-experimental).
:::
## Overview
Promptflow allows you to load a flow and use it as a function in your code.
This feature is useful when building a service on top of a flow, reference [here](https://github.com/microsoft/promptflow/tree/main/examples/tutorials/flow-deploy/create-service-with-flow) for a simple example service with flow function consumption.
## Load and invoke the flow function
To use the flow-as-function feature, you first need to load a flow using the `load_flow` function.
Then you can consume the flow object like a function by providing key-value arguments for it.
```python
from promptflow import load_flow

f = load_flow("../../examples/flows/standard/web-classification/")
f(url="sample_url")
```
## Config the flow with context
You can overwrite some flow configs before flow function execution by setting `flow.context`.
### Load flow as a function with in-memory connection override
By providing a connection object in the flow context, the flow won't need to fetch the connection at execution time, which can save time for cases where the flow function needs to be called multiple times.
```python
from promptflow.entities import AzureOpenAIConnection, FlowContext

connection_obj = AzureOpenAIConnection(
    name=conn_name,
    api_key=api_key,
    api_base=api_base,
    api_type="azure",
    api_version=api_version,
)
# with the in-memory connection object provided in the context, the flow
# won't need to fetch the connection from local storage at execution time.
f.context = FlowContext(
    connections={"classify_with_llm": {"connection": connection_obj}}
)
```
### Load flow as a function with flow inputs override
By providing overrides, the original flow dag will be updated at execution time.
```python
f.context = FlowContext(
    # node "fetch_text_content_from_url" will take its url input from this
    # override instead of from the flow input
    overrides={"nodes.fetch_text_content_from_url.inputs.url": sample_url},
)
```
**Note**: the `overrides` only do YAML content replacement on the original `flow.dag.yaml`.
If the `flow.dag.yaml` becomes invalid after `overrides`, a validation error will be raised when executing.
### Load flow as a function with streaming output
After setting `streaming` in the flow context, the flow function will return an iterator to stream the output.
```python
f = load_flow(source="../../examples/flows/chat/basic-chat/")
f.context.streaming = True
result = f(
    chat_history=[
        {
            "inputs": {"chat_input": "Hi"},
            "outputs": {"chat_output": "Hello! How can I assist you today?"},
        }
    ],
    question="How are you?",
)
answer = ""
# the result will be a generator, iterate it to get the result
for r in result["answer"]:
    answer += r
```
Reference our [sample](https://github.com/microsoft/promptflow/blob/main/examples/tutorials/get-started/flow-as-function.ipynb) for usage.
## Next steps
Learn more about:
- [Flow as a function sample](https://github.com/microsoft/promptflow/blob/main/examples/tutorials/get-started/flow-as-function.ipynb)
- [Deploy a flow](./deploy-a-flow/index.md)
| 0 |
# Use streaming endpoints deployed from prompt flow
In prompt flow, you can [deploy flow as REST endpoint](./deploy-a-flow/index.md) for real-time inference.
When consuming the endpoint by sending a request, the default behavior is that the online endpoint will keep waiting until the whole response is ready, and then send it back to the client. This can cause a long delay for the client and a poor user experience.
To avoid this, you can use streaming when you consume the endpoints. Once streaming is enabled, you don't have to wait for the whole response to be ready. Instead, the server sends back the response in chunks as they are generated. The client can then display the response progressively, with less waiting time and more interactivity.
This article will describe the scope of streaming, how streaming works, and how to consume streaming endpoints.
## Create a streaming enabled flow
If you want to use the streaming mode, you need to create a flow that has a node that produces a string generator as the flow’s output. A string generator is an object that can return one string at a time when requested. You can use the following types of nodes to create a string generator:
- LLM node: This node uses a large language model to generate natural language responses based on the input.
```jinja
{# Sample prompt template for LLM node #}
system:
You are a helpful assistant.
user:
{{question}}
```
- Python tools node: This node allows you to write custom Python code that can yield string outputs. You can use this node to call external APIs or libraries that support streaming. For example, you can use this code to echo the input word by word:
```python
from promptflow import tool
# Sample code echo input by yield in Python tool node
@tool
def my_python_tool(paragraph: str) -> str:
yield "Echo: "
for word in paragraph.split():
yield word + " "
```
In this guide, we will use the ["Chat with Wikipedia"](https://github.com/microsoft/promptflow/tree/main/examples/flows/chat/chat-with-wikipedia) sample flow as an example. This flow processes the user’s question, searches Wikipedia for relevant articles, and answers the question with information from the articles. It uses streaming mode to show the progress of the answer generation.
![chat_wikipedia.png](../media/how-to-guides/how-to-enable-streaming-mode/chat_wikipedia_center.png)
## Deploy the flow as an online endpoint
To use the streaming mode, you need to deploy your flow as an online endpoint. This will allow you to send requests and receive responses from your flow in real time.
Follow [this guide](./deploy-a-flow/index.md) to deploy your flow as an online endpoint.
> [!NOTE]
>
> You can follow this document to deploy an [online endpoint](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/how-to-deploy-for-real-time-inference?view=azureml-api-2).
> Please deploy with runtime environment version later than version `20230816.v10`.
> You can check your runtime version and update runtime in the run time detail page.
## Understand the streaming process
When you have an online endpoint, the client and the server need to follow specific principles for [content negotiation](https://developer.mozilla.org/en-US/docs/Web/HTTP/Content_negotiation) to utilize the streaming mode:
Content negotiation is like a conversation between the client and the server about the preferred format of the data they want to send and receive. It ensures effective communication and agreement on the format of the exchanged data.
To understand the streaming process, consider the following steps:
- First, the client constructs an HTTP request with the desired media type included in the `Accept` header. The media type tells the server what kind of data format the client expects. It's like the client saying, "Hey, I'm looking for a specific format for the data you'll send me. It could be JSON, text, or something else." For example, `application/json` indicates a preference for JSON data, `text/event-stream` indicates a desire for streaming data, and `*/*` means the client accepts any data format.
> [!NOTE]
>
> If a request lacks an `Accept` header or has an empty `Accept` header, it implies that the client will accept any media type in response. The server treats it as `*/*`.
- Next, the server responds based on the media type specified in the `Accept` header. It's important to note that the client may request multiple media types in the `Accept` header, and the server must consider its capabilities and format priorities to determine the appropriate response.
- First, the server checks if `text/event-stream` is explicitly specified in the `Accept` header:
- For a stream-enabled flow, the server returns a response with a `Content-Type` of `text/event-stream`, indicating that the data is being streamed.
- For a non-stream-enabled flow, the server proceeds to check for other media types specified in the header.
- If `text/event-stream` is not specified, the server then checks if `application/json` or `*/*` is specified in the `Accept` header:
- In such cases, the server returns a response with a `Content-Type` of `application/json`, providing the data in JSON format.
- If the `Accept` header specifies other media types, such as `text/html`:
- The server returns a `424` response with a PromptFlow runtime error code `UserError` and a runtime HTTP status `406`, indicating that the server cannot fulfill the request with the requested data format.
> Note: Please refer [handle errors](#handle-errors) for details.
- Finally, the client checks the `Content-Type` response header. If it is set to `text/event-stream`, it indicates that the data is being streamed.
Let’s take a closer look at how the streaming process works. The response data in streaming mode follows the format of [server-sent events (SSE)](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events).
The overall process works as follows:
### 0. The client sends a message to the server.
```
POST https://<your-endpoint>.inference.ml.azure.com/score
Content-Type: application/json
Authorization: Bearer <key or token of your endpoint>
Accept: text/event-stream
{
"question": "Hello",
"chat_history": []
}
```
> [!NOTE]
>
> The `Accept` header is set to `text/event-stream` to request a stream response.
### 1. The server sends back the response in streaming mode.
```
HTTP/1.1 200 OK
Content-Type: text/event-stream; charset=utf-8
Connection: close
Transfer-Encoding: chunked
data: {"answer": ""}
data: {"answer": "Hello"}
data: {"answer": "!"}
data: {"answer": " How"}
data: {"answer": " can"}
data: {"answer": " I"}
data: {"answer": " assist"}
data: {"answer": " you"}
data: {"answer": " today"}
data: {"answer": " ?"}
data: {"answer": ""}
```
Note that the `Content-Type` is set to `text/event-stream; charset=utf-8`, indicating the response is an event stream.
The client should decode the response data as server-sent events and display them incrementally. The server will close the HTTP connection after all the data is sent.
Each response event is the delta to the previous event. It is recommended for the client to keep track of the merged data in memory and send them back to the server as chat history in the next request.
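A minimal sketch of that client-side bookkeeping (the names are illustrative; `client` is assumed to be an SSE client over the response, as shown later in this guide):
```python
import json

answer = ""
for event in client.events():           # each SSE event carries a JSON delta
    chunk = json.loads(event.data)
    answer += chunk.get("answer", "")   # merge the deltas into the full answer

# remember the merged turn so it can be sent as chat history in the next request
chat_history.append(
    {"inputs": {"question": question}, "outputs": {"answer": answer}}
)
```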
### 2. The client sends another chat message, along with the full chat history, to the server.
```
POST https://<your-endpoint>.inference.ml.azure.com/score
Content-Type: application/json
Authorization: Bearer <key or token of your endpoint>
Accept: text/event-stream
{
"question": "Glad to know you!",
"chat_history": [
{
"inputs": {
"question": "Hello"
},
"outputs": {
"answer": "Hello! How can I assist you today?"
}
}
]
}
```
### 3. The server sends back the answer in streaming mode.
```
HTTP/1.1 200 OK
Content-Type: text/event-stream; charset=utf-8
Connection: close
Transfer-Encoding: chunked
data: {"answer": ""}
data: {"answer": "Nice"}
data: {"answer": " to"}
data: {"answer": " know"}
data: {"answer": " you"}
data: {"answer": " too"}
data: {"answer": "!"}
data: {"answer": " Is"}
data: {"answer": " there"}
data: {"answer": " anything"}
data: {"answer": " I"}
data: {"answer": " can"}
data: {"answer": " help"}
data: {"answer": " you"}
data: {"answer": " with"}
data: {"answer": "?"}
data: {"answer": ""}
```
### 4. The chat continues in a similar way.
## Handle errors
The client should check the HTTP response code first. See [this table](https://learn.microsoft.com/azure/machine-learning/how-to-troubleshoot-online-endpoints?view=azureml-api-2&tabs=cli#http-status-codes) for common error codes returned by online endpoints.
If the response code is "424 Model Error", it means that the error is caused by the model’s code. The error response from a PromptFlow model always follows this format:
```json
{
"error": {
"code": "UserError",
"message": "Media type text/event-stream in Accept header is not acceptable. Supported media type(s) - application/json",
}
}
```
* It is always a JSON dictionary with only one key "error" defined.
* The value for "error" is a dictionary, containing "code", "message".
* "code" defines the error category. Currently, it may be "UserError" for bad user inputs and "SystemError" for errors inside the service.
* "message" is a description of the error. It can be displayed to the end user.
## How to consume the server-sent events
### Consume using Python
In this sample usage, we are using the `SSEClient` class. This class is not a built-in Python class and needs to be installed separately. You can install it via pip:
```bash
pip install sseclient-py
```
A sample usage would look like this:
```python
import requests
from requests.exceptions import HTTPError
from sseclient import SSEClient

url = "https://<your-endpoint>.inference.ml.azure.com/score"  # your endpoint URL
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer <key or token of your endpoint>",
    "Accept": "text/event-stream",
}
body = {"question": "Hello", "chat_history": []}

try:
    response = requests.post(url, json=body, headers=headers, stream=True)
    response.raise_for_status()
    content_type = response.headers.get("Content-Type", "")
    if "text/event-stream" in content_type:
        client = SSEClient(response)
        for event in client.events():
            print(event.data)  # handle each event, e.g. print to stdout
    else:
        print(response.json())  # handle the plain JSON response
except HTTPError as e:
    print(e)  # handle exceptions
```
### Consume using JavaScript
There are several libraries to consume server-sent events in JavaScript. Here is [one of them as an example](https://www.npmjs.com/package/sse.js?activeTab=code).
## A sample chat app using Python
Here is a sample chat app written in Python.
(Click [here](../media/how-to-guides/how-to-enable-streaming-mode/scripts/chat_app.py) to view the source code.)
![chat_app](../media/how-to-guides/how-to-enable-streaming-mode/chat_app.gif)
## Advanced usage - hybrid stream and non-stream flow output
Sometimes, you may want to get both stream and non-stream results from a flow output. For example, in the "Chat with Wikipedia" flow, you may want to get not only the LLM's answer, but also the list of URLs that the flow searched. To do this, you need to modify the flow to output a combination of the streamed LLM answer and the non-stream URL list.
In the sample "Chat With Wikipedia" flow, the output is connected to the LLM node `augmented_chat`. To add the URL list to the output, you need to add an output field with the name `url` and the value `${get_wiki_url.output}`.
![chat_wikipedia_dual_output_center.png](../media/how-to-guides/how-to-enable-streaming-mode/chat_wikipedia_dual_output_center.png)
The output of the flow will be a non-stream field as the base and a stream field as the delta. Here is an example of request and response.
### 0. The client sends a message to the server.
```
POST https://<your-endpoint>.inference.ml.azure.com/score
Content-Type: application/json
Authorization: Bearer <key or token of your endpoint>
Accept: text/event-stream
{
"question": "When was ChatGPT launched?",
"chat_history": []
}
```
### 1. The server sends back the answer in streaming mode.
```
HTTP/1.1 200 OK
Content-Type: text/event-stream; charset=utf-8
Connection: close
Transfer-Encoding: chunked
data: {"url": ["https://en.wikipedia.org/w/index.php?search=ChatGPT", "https://en.wikipedia.org/w/index.php?search=GPT-4"]}
data: {"answer": ""}
data: {"answer": "Chat"}
data: {"answer": "G"}
data: {"answer": "PT"}
data: {"answer": " was"}
data: {"answer": " launched"}
data: {"answer": " on"}
data: {"answer": " November"}
data: {"answer": " "}
data: {"answer": "30"}
data: {"answer": ","}
data: {"answer": " "}
data: {"answer": "202"}
data: {"answer": "2"}
data: {"answer": "."}
data: {"answer": " \n\n"}
...
data: {"answer": "PT"}
data: {"answer": ""}
```
### 2. The client sends another chat message, along with the full chat history, to the server.
```
POST https://<your-endpoint>.inference.ml.azure.com/score
Content-Type: application/json
Authorization: Bearer <key or token of your endpoint>
Accept: text/event-stream
{
"question": "When did OpenAI announce GPT-4? How long is it between these two milestones?",
"chat_history": [
{
"inputs": {
"question": "When was ChatGPT launched?"
},
"outputs": {
"url": [
"https://en.wikipedia.org/w/index.php?search=ChatGPT",
"https://en.wikipedia.org/w/index.php?search=GPT-4"
],
"answer": "ChatGPT was launched on November 30, 2022. \n\nSOURCES: https://en.wikipedia.org/w/index.php?search=ChatGPT"
}
}
]
}
```
### 3. The server sends back the answer in streaming mode.
```
HTTP/1.1 200 OK
Content-Type: text/event-stream; charset=utf-8
Connection: close
Transfer-Encoding: chunked
data: {"url": ["https://en.wikipedia.org/w/index.php?search=Generative pre-trained transformer ", "https://en.wikipedia.org/w/index.php?search=Microsoft "]}
data: {"answer": ""}
data: {"answer": "Open"}
data: {"answer": "AI"}
data: {"answer": " released"}
data: {"answer": " G"}
data: {"answer": "PT"}
data: {"answer": "-"}
data: {"answer": "4"}
data: {"answer": " in"}
data: {"answer": " March"}
data: {"answer": " "}
data: {"answer": "202"}
data: {"answer": "3"}
data: {"answer": "."}
data: {"answer": " Chat"}
data: {"answer": "G"}
data: {"answer": "PT"}
data: {"answer": " was"}
data: {"answer": " launched"}
data: {"answer": " on"}
data: {"answer": " November"}
data: {"answer": " "}
data: {"answer": "30"}
data: {"answer": ","}
data: {"answer": " "}
data: {"answer": "202"}
data: {"answer": "2"}
data: {"answer": "."}
data: {"answer": " The"}
data: {"answer": " time"}
data: {"answer": " between"}
data: {"answer": " these"}
data: {"answer": " two"}
data: {"answer": " milestones"}
data: {"answer": " is"}
data: {"answer": " approximately"}
data: {"answer": " "}
data: {"answer": "3"}
data: {"answer": " months"}
data: {"answer": ".\n\n"}
...
data: {"answer": "Chat"}
data: {"answer": "G"}
data: {"answer": "PT"}
data: {"answer": ""}
```
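Consuming such a hybrid response follows the same pattern as before: take the non-stream fields as-is and merge the streamed deltas. A minimal sketch (again assuming an SSE client over the response; the variable names are illustrative):
```python
import json

merged = {"url": None, "answer": ""}
for event in client.events():
    chunk = json.loads(event.data)
    if "url" in chunk:
        merged["url"] = chunk["url"]           # non-stream field: arrives once
    if "answer" in chunk:
        merged["answer"] += chunk["answer"]    # stream field: append each delta
```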
# Manage runs
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](faq.md#stable-vs-experimental).
:::
This documentation will walk you through how to manage your runs with CLI, SDK and VS Code Extension.
In general:
- For `CLI`, you can run `pf/pfazure run --help` in terminal to see the help messages.
- For `SDK`, you can refer to [Promptflow Python Library Reference](../reference/python-library-reference/promptflow.md) and check `PFClient.runs` for more run operations.
Let's take a look at the following topics:
- [Manage runs](#manage-runs)
- [Create a run](#create-a-run)
- [Get a run](#get-a-run)
- [Show run details](#show-run-details)
- [Show run metrics](#show-run-metrics)
- [Visualize a run](#visualize-a-run)
- [List runs](#list-runs)
- [Update a run](#update-a-run)
- [Archive a run](#archive-a-run)
- [Restore a run](#restore-a-run)
- [Delete a run](#delete-a-run)
## Create a run
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
To create a run against bulk inputs, you can write the following YAML file.
```yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
flow: ../web_classification
data: ../webClassification1.jsonl
column_mapping:
url: "${data.url}"
variant: ${summarize_text_content.variant_0}
```
To create a run against an existing run, you can write the following YAML file.
```yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
flow: ../classification_accuracy_evaluation
data: ../webClassification1.jsonl
column_mapping:
groundtruth: "${data.answer}"
prediction: "${run.outputs.category}"
run: <existing-flow-run-name>
```
Reference [here](https://aka.ms/pf/column-mapping) for detailed information for column mapping.
You can find additional information about flow yaml schema in [Run YAML Schema](../reference/run-yaml-schema-reference.md).
After preparing the yaml file, use the CLI command below to create the run:
```bash
# create the flow run
pf run create -f <path-to-flow-run>
# create the flow run and stream output
pf run create -f <path-to-flow-run> --stream
```
The expected result is as follows if the run is created successfully.
![img](../media/how-to-guides/run_create.png)
:::
:::{tab-item} SDK
:sync: SDK
Using SDK, create `Run` object and submit it with `PFClient`. The following code snippet shows how to import the required class and create the run:
```python
from promptflow import PFClient
from promptflow.entities import Run
# Get a pf client to manage runs
pf = PFClient()
# Initialize a Run object
run = Run(
flow="<path-to-local-flow>",
# run flow against local data or existing run, only one of data & run can be specified.
data="<path-to-data>",
run="<existing-run-name>",
column_mapping={"url": "${data.url}"},
variant="${summarize_text_content.variant_0}"
)
# Create the run
result = pf.runs.create_or_update(run)
print(result)
```
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
You can click on the actions on the top of the default yaml editor or the visual editor for the flow.dag.yaml files to trigger flow batch runs.
![img](../media/how-to-guides/vscode_batch_run_yaml.png)
![img](../media/how-to-guides/vscode_batch_run_visual.png)
:::
::::
## Get a run
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Get a run in JSON format.
```bash
pf run show --name <run-name>
```
![img](../media/how-to-guides/run_show.png)
:::
:::{tab-item} SDK
:sync: SDK
Show run with `PFClient`
```python
from promptflow import PFClient
# Get a pf client to manage runs
pf = PFClient()
# Get and print the run
run = pf.runs.get(name="<run-name>")
print(run)
```
:::
:::{tab-item} VS Code Extension
:sync: VSC
![img](../media/how-to-guides/vscode_run_detail.png)
:::
::::
## Show run details
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Get run details in table format.
```bash
pf run show-details --name <run-name>
```
![img](../media/how-to-guides/run_show_details.png)
:::
:::{tab-item} SDK
:sync: SDK
Show run details with `PFClient`
```python
from promptflow import PFClient
from tabulate import tabulate
# Get a pf client to manage runs
pf = PFClient()
# Get and print the run details (a pandas DataFrame)
max_results = 10
run_details = pf.runs.get_details(name="<run-name>")
print(tabulate(run_details.head(max_results), headers="keys", tablefmt="grid"))
```
:::
:::{tab-item} VS Code Extension
:sync: VSC
![img](../media/how-to-guides/vscode_run_detail.png)
:::
::::
## Show run metrics
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Get run metrics in JSON format.
```bash
pf run show-metrics --name <run-name>
```
![img](../media/how-to-guides/run_show_metrics.png)
:::
:::{tab-item} SDK
:sync: SDK
Show run metrics with `PFClient`
```python
from promptflow import PFClient
import json
# Get a pf client to manage runs
pf = PFClient()
# Get and print the run metrics
metrics = pf.runs.get_metrics(name="<run-name>")
print(json.dumps(metrics, indent=4))
```
:::
::::
## Visualize a run
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Visualize run in browser.
```bash
pf run visualize --names <run-name>
```
A browser will open and display run outputs.
![img](../media/how-to-guides/run_visualize.png)
:::
:::{tab-item} SDK
:sync: SDK
Visualize run with `PFClient`
```python
from promptflow import PFClient
# Get a pf client to manage runs
pf = PFClient()
# Visualize the run
pf.runs.visualize(runs="<run-name>")
```
:::
:::{tab-item} VS Code Extension
:sync: VSC
On the VS Code primary sidebar > the prompt flow pane, there is a run list. It will list all the runs on your machine. Select one or more items and click the "visualize" button on the top-right to visualize the local runs.
![img](../media/how-to-guides/vscode_run_actions.png)
:::
::::
## List runs
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
List runs in JSON format.
```bash
pf run list
```
![img](../media/how-to-guides/run_list.png)
:::
:::{tab-item} SDK
:sync: SDK
List with `PFClient`
```python
from promptflow import PFClient
# Get a pf client to manage runs
pf = PFClient()
# list runs
runs = pf.runs.list()
print(runs)
```
:::
:::{tab-item} VS Code Extension
:sync: VSC
On the VS Code primary sidebar > the prompt flow pane, there is a run list. It will list all the runs on your machine. Hover on it to view more details.
![img](../media/how-to-guides/vscode_list_runs.png)
:::
::::
## Update a run
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Update a run's metadata, such as its display name.
```bash
pf run update --name <run-name> --set display_name=new_display_name
```
:::
:::{tab-item} SDK
:sync: SDK
Update run with `PFClient`
```python
from promptflow import PFClient
# Get a pf client to manage runs
pf = PFClient()
# Update the run and print the result
run = pf.runs.update(name="<run-name>", display_name="new_display_name")
print(run)
```
:::
::::
## Archive a run
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Archive the run so it won't show in run list results.
```bash
pf run archive --name <run-name>
```
:::
:::{tab-item} SDK
:sync: SDK
Archive with `PFClient`
```python
from promptflow import PFClient
# Get a pf client to manage runs
pf = PFClient()
# archive a run
pf.runs.archive(name="<run-name>")
```
:::
:::{tab-item} VS Code Extension
:sync: VSC
![img](../media/how-to-guides/vscode_run_actions.png)
:::
::::
## Restore a run
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Restore an archived run so it can show in run list results.
```bash
pf run restore --name <run-name>
```
:::
:::{tab-item} SDK
:sync: SDK
Restore with `PFClient`
```python
from promptflow import PFClient
# Get a pf client to manage runs
pf = PFClient()
# restore a run
pf.runs.restore(name="<run-name>")
```
:::
::::
## Delete a run
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Caution: `pf run delete` is irreversible. This operation will delete the run permanently from your local disk. Both the run entity and the output data will be deleted.
Delete will fail if the run name is not valid.
```bash
pf run delete --name <run-name>
```
:::
:::{tab-item} SDK
:sync: SDK
Delete with `PFClient`
```python
from promptflow import PFClient
# Get a pf client to manage runs
pf = PFClient()
# delete a run
pf.runs.delete(name="<run-name>")
```
:::
::::
# Deploy a flow using development server
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../faq.md#stable-vs-experimental).
:::
Once you have created and thoroughly tested a flow, you can use it as an HTTP endpoint.
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
We are going to use the [web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification/) as
an example to show how to deploy a flow.
Please ensure you have [created the connection](../manage-connections.md#create-a-connection) required by the flow; if not, you can refer to [Setup connection for web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification).
Note: We will use the corresponding environment variable (`{CONNECTION_NAME}_{KEY_NAME}`) to override connection configurations in serving mode; white space in the connection name is removed from the environment variable name. For instance, if there is a custom connection named 'custom_connection' with a configuration key called 'chat_deployment_name', the serving function will attempt to retrieve 'chat_deployment_name' from the environment variable 'CUSTOM_CONNECTION_CHAT_DEPLOYMENT_NAME' by default. If the environment variable is not set, it will use the original value as a fallback.
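A small sketch of how that variable name is derived (a hypothetical helper, for illustration only):
```python
def override_env_var_name(connection_name: str, key_name: str) -> str:
    # white space is removed from the connection name, then the whole name is upper-cased
    return f"{connection_name.replace(' ', '')}_{key_name}".upper()

assert override_env_var_name("custom_connection", "chat_deployment_name") == \
    "CUSTOM_CONNECTION_CHAT_DEPLOYMENT_NAME"
```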
The following CLI command allows you to serve a flow folder as an endpoint. By running this command, a [flask](https://flask.palletsprojects.com/en/) app will start in the environment where the command is executed; please ensure all prerequisites required by the flow have been installed.
```bash
# Serve the flow at localhost:8080
pf flow serve --source <path-to-your-flow-folder> --port 8080 --host localhost
```
The expected result is as follows if the flow is served successfully; the process will keep alive until it is killed manually.
![img](../../media/how-to-guides/deploy_flow.png)
:::
:::{tab-item} VS Code Extension
:sync: VSC
In visual editor, choose:
![img](../../media/how-to-guides/vscode_export.png)
then choose format:
![img](../../media/how-to-guides/vscode_export_as_local_app.png)
then in yaml editor:
![img](../../media/how-to-guides/vscode_start_local_app.png)
:::
::::
## Test endpoint
::::{tab-set}
:::{tab-item} Bash
You could open another terminal to test the endpoint with the following command:
```bash
curl http://localhost:8080/score --data '{"url":"https://play.google.com/store/apps/details?id=com.twitter.android"}' -X POST -H "Content-Type: application/json"
```
:::
:::{tab-item} PowerShell
You could open another terminal to test the endpoint with the following command:
```powershell
Invoke-WebRequest -URI http://localhost:8080/score -Body '{"url":"https://play.google.com/store/apps/details?id=com.twitter.android"}' -Method POST -ContentType "application/json"
```
:::
:::{tab-item} Test Page
The development server has a built-in web page you can use to test the flow. Open 'http://localhost:8080' in your browser.
![img](../../media/how-to-guides/deploy_flow_test_page.png)
:::
::::
## Next steps
- Try the example [here](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification/).
- See how to [deploy a flow using docker](deploy-using-docker.md).
- See how to [deploy a flow using kubernetes](deploy-using-kubernetes.md).
# Deploy a flow
A flow can be deployed to multiple platforms, such as a local development service, Docker container, Kubernetes cluster, etc.
```{gallery-grid}
:grid-columns: 1 2 2 3
- image: ../../media/how-to-guides/local.png
content: "<center><b>Development server</b></center>"
website: deploy-using-dev-server.html
- image: ../../media/how-to-guides/docker.png
content: "<center><b>Docker</b></center>"
website: deploy-using-docker.html
- image: ../../media/how-to-guides/kubernetes.png
content: "<center><b>Kubernetes</b></center>"
website: deploy-using-kubernetes.html
```
We also provide guides to deploy to the cloud, such as Azure App Service:
```{gallery-grid}
:grid-columns: 1 2 2 3
- image: ../../media/how-to-guides/appservice.png
content: "<center><b>Azure App Service</b></center>"
website: ../../cloud/azureai/deploy-to-azure-appservice.html
```
We are working on more official deployment guides for other hosting providers, and we welcome user-submitted guides.
```{toctree}
:maxdepth: 1
:hidden:
deploy-using-dev-server
deploy-using-docker
deploy-using-kubernetes
distribute-flow-as-executable-app
```
# Deploy a flow using Kubernetes
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../faq.md#stable-vs-experimental).
:::
There are four steps to deploy a flow using Kubernetes:
1. Build the flow as docker format.
2. Build the docker image.
3. Create Kubernetes deployment yaml.
4. Apply the deployment.
## Build a flow as docker format
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Note that all dependent connections must be created before building as docker.
```bash
# create connection if not created before
pf connection create --file ../../../examples/connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base> --name open_ai_connection
```
Use the command below to build a flow as docker format:
```bash
pf flow build --source <path-to-your-flow-folder> --output <your-output-dir> --format docker
```
:::
:::{tab-item} VS Code Extension
:sync: VSC
Click the button below to build a flow as docker format:
![img](../../media/how-to-guides/vscode_export_as_docker.png)
:::
::::
Note that all dependent connections must be created before exporting as docker.
### Docker format folder structure
Exported Dockerfile & its dependencies are located in the same folder. The structure is as below:
- flow: the folder contains all the flow files
- ...
- connections: the folder contains yaml files to create all related connections
- ...
- Dockerfile: the dockerfile to build the image
- start.sh: the script used in `CMD` of `Dockerfile` to start the service
- runit: the folder contains all the runit scripts
- ...
- settings.json: a json file to store the settings of the docker image
- README.md: Simple introduction of the files
## Deploy with Kubernetes
We are going to use the [web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification/) as
an example to show how to deploy with Kubernetes.
Please ensure you have [created the connection](../manage-connections.md#create-a-connection) required by the flow; if not, you can refer to [Setup connection for web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification).
Additionally, please ensure that you have installed all the required dependencies. You can refer to the "Prerequisites" section in the README of the [web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification/) for a comprehensive list of prerequisites and installation instructions.
### Build Docker image
Like other Dockerfile, you need to build the image first. You can tag the image with any name you want. In this example, we use `web-classification-serve`.
Then run the command below:
```bash
cd <your-output-dir>
docker build . -t web-classification-serve
```
### Create Kubernetes deployment yaml.
The Kubernetes deployment yaml file acts as a guide for managing your docker container in a Kubernetes pod. It clearly specifies important information like the container image, port configurations, environment variables, and various settings. Below, you'll find a simple deployment template that you can easily customize to meet your needs.
**Note**: You need to encode the secret using base64 first and use the <encoded_secret> as 'open-ai-connection-api-key' in the deployment configuration. For example, you can run the command below on Linux:
```bash
encoded_secret=$(echo -n <your_api_key> | base64)
```
```yaml
---
kind: Namespace
apiVersion: v1
metadata:
name: <your-namespace>
---
apiVersion: v1
kind: Secret
metadata:
name: open-ai-connection-api-key
namespace: <your-namespace>
type: Opaque
data:
open-ai-connection-api-key: <encoded_secret>
---
apiVersion: v1
kind: Service
metadata:
name: web-classification-service
namespace: <your-namespace>
spec:
type: NodePort
ports:
- name: http
port: 8080
targetPort: 8080
nodePort: 30123
selector:
app: web-classification-serve-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: web-classification-serve-app
namespace: <your-namespace>
spec:
selector:
matchLabels:
app: web-classification-serve-app
template:
metadata:
labels:
app: web-classification-serve-app
spec:
containers:
- name: web-classification-serve-container
image: <your-docker-image>
imagePullPolicy: Never
ports:
- containerPort: 8080
env:
- name: OPEN_AI_CONNECTION_API_KEY
valueFrom:
secretKeyRef:
name: open-ai-connection-api-key
key: open-ai-connection-api-key
```
### Apply the deployment.
Before you can deploy your application, ensure that you have set up a Kubernetes cluster and installed [kubectl](https://kubernetes.io/docs/reference/kubectl/) if it's not already installed. In this documentation, we will use [Minikube](https://minikube.sigs.k8s.io/docs/) as an example. To start the cluster, execute the following command:
```bash
minikube start
```
Once your Kubernetes cluster is up and running, you can proceed to deploy your application by using the following command:
```bash
kubectl apply -f deployment.yaml
```
This command will create the necessary pods to run your application within the cluster.
**Note**: You need to replace <pod_name> below with your specific pod name. You can retrieve it by running `kubectl get pods -n <your-namespace>`.
### Retrieve flow service logs of the container
The kubectl logs command is used to retrieve the logs of a container running within a pod, which can be useful for debugging, monitoring, and troubleshooting applications deployed in a Kubernetes cluster.
```bash
kubectl -n <your-namespace> logs <pod-name>
```
#### Connections
If the service involves connections, all related connections will be exported as yaml files and recreated in containers.
Secrets in connections won't be exported directly. Instead, we will export them as a reference to environment variables:
```yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/OpenAIConnection.schema.json
type: open_ai
name: open_ai_connection
module: promptflow.connections
api_key: ${env:OPEN_AI_CONNECTION_API_KEY} # env reference
```
You'll need to set up the environment variables in the container to make the connections work.
### Test the endpoint
- Option1:
Once you've started the service, you can establish a connection between a local port and a port on the pod. This allows you to conveniently test the endpoint from your local terminal.
To achieve this, execute the following command:
```bash
kubectl port-forward <pod_name> <local_port>:<container_port> -n <your-namespace>
```
With the port forwarding in place, you can use the curl command to initiate the endpoint test:
```bash
curl http://localhost:<local_port>/score --data '{"url":"https://play.google.com/store/apps/details?id=com.twitter.android"}' -X POST -H "Content-Type: application/json"
```
- Option2:
`minikube service web-classification-service --url -n <your-namespace>` runs as a process, creating a tunnel to the cluster. The command exposes the service directly to any program running on the host operating system.
The command above will retrieve the URL of a service running within a Minikube Kubernetes cluster (e.g. http://<ip>:<assigned_port>), which you can click to interact with the flow service in your web browser. Alternatively, you can use the following command to test the endpoint:
**Note**: Minikube will use its own external port instead of nodePort to listen to the service. So please substitute <assigned_port> with the port obtained above.
```bash
curl http://localhost:<assigned_port>/score --data '{"url":"https://play.google.com/store/apps/details?id=com.twitter.android"}' -X POST -H "Content-Type: application/json"
```
## Next steps
- Try the example [here](https://github.com/microsoft/promptflow/tree/main/examples/tutorials/flow-deploy/kubernetes).
# Distribute flow as executable app
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../faq.md#stable-vs-experimental).
:::
We are going to use the [web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification/) as
an example to show how to distribute flow as executable app with [Pyinstaller](https://pyinstaller.org/en/stable/requirements.html#).
Please ensure that you have installed all the required dependencies. You can refer to the "Prerequisites" section in the README of the [web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification/) for a comprehensive list of prerequisites and installation instructions. We also recommend adding a `requirements.txt` to indicate all the required dependencies for each flow.
[Pyinstaller](https://pyinstaller.org/en/stable/installation.html) is a popular tool used for converting Python applications into standalone executables. It allows you to package your Python scripts into a single executable file, which can be run on a target machine without requiring the Python interpreter to be installed.
[Streamlit](https://docs.streamlit.io/library/get-started) is an open-source Python library used for creating web applications quickly and easily. It's designed for data scientists and engineers who want to turn data scripts into shareable web apps with minimal effort.
We use PyInstaller to package the flow and Streamlit to create a custom web app. Prior to distributing the flow, please ensure that you have installed them (e.g. `pip install pyinstaller streamlit`).
## Build a flow as executable format
Note that all dependent connections must be created before building as executable.
```bash
# create connection if not created before
pf connection create --file ../../../examples/connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base> --name open_ai_connection
```
Use the command below to build a flow as executable format:
```bash
pf flow build --source <path-to-your-flow-folder> --output <your-output-dir> --format executable
```
## Executable format folder structure
Exported files & its dependencies are located in the same folder. The structure is as below:
- flow: the folder contains all the flow files.
- connections: the folder contains yaml files to create all related connections.
- app.py: the entry file is included as the entry point for the bundled application.
- app.spec: the spec file tells PyInstaller how to process your script.
- main.py: it will start streamlit service and be called by the entry file.
- settings.json: a json file to store the settings of the executable application.
- build: a folder contains various log and working files.
- dist: a folder contains the executable application.
- README.md: Simple introduction of the files.
### A template script of the entry file
PyInstaller reads a spec file or Python script written by you. It analyzes your code to discover every other module and library your script needs in order to execute. Then it collects copies of all those files, including the active Python interpreter, and puts them with your script in a single folder, or optionally in a single executable file.
We provide a Python entry script named `app.py` as the entry point for the bundled app, which enables you to serve a flow folder as an endpoint.
```python
import os
import sys
from promptflow._cli._pf._connection import create_connection
from streamlit.web import cli as st_cli
from streamlit.runtime import exists
from main import start
def is_yaml_file(file_path):
_, file_extension = os.path.splitext(file_path)
return file_extension.lower() in ('.yaml', '.yml')
def create_connections(directory_path) -> None:
for root, dirs, files in os.walk(directory_path):
for file in files:
file_path = os.path.join(root, file)
if is_yaml_file(file_path):
create_connection(file_path)
if __name__ == "__main__":
create_connections(os.path.join(os.path.dirname(__file__), "connections"))
if exists():
start()
else:
main_script = os.path.join(os.path.dirname(__file__), "main.py")
sys.argv = ["streamlit", "run", main_script, "--global.developmentMode=false"]
st_cli.main(prog_name="streamlit")
```
### A template script of the spec file
The spec file tells PyInstaller how to process your script. It encodes the script names and most of the options you give to the pyinstaller command. The spec file is actually executable Python code. PyInstaller builds the app by executing the contents of the spec file.
To streamline this process, we offer a `app.spec` spec file that bundles the application into a single file. For additional information on spec files, you can refer to the [Using Spec Files](https://pyinstaller.org/en/stable/spec-files.html). Please replace `streamlit_runtime_interpreter_path` with the path of streamlit runtime interpreter in your environment.
```spec
# -*- mode: python ; coding: utf-8 -*-
from PyInstaller.utils.hooks import collect_data_files
from PyInstaller.utils.hooks import copy_metadata
datas = [('connections', 'connections'), ('flow', 'flow'), ('settings.json', '.'), ('main.py', '.'), ('{{streamlit_runtime_interpreter_path}}', './streamlit/runtime')]
datas += collect_data_files('streamlit')
datas += copy_metadata('streamlit')
datas += collect_data_files('keyrings.alt', include_py_files=True)
datas += copy_metadata('keyrings.alt')
datas += collect_data_files('streamlit_quill')
block_cipher = None
a = Analysis(
['app.py', 'main.py'],
pathex=[],
binaries=[],
datas=datas,
hiddenimports=['bs4'],
hookspath=[],
hooksconfig={},
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher,
noarchive=False,
)
pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)
exe = EXE(
pyz,
a.scripts,
a.binaries,
a.zipfiles,
a.datas,
[],
name='app',
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
upx_exclude=[],
runtime_tmpdir=None,
console=True,
disable_windowed_traceback=False,
argv_emulation=False,
target_arch=None,
codesign_identity=None,
entitlements_file=None,
)
```
### The bundled application using Pyinstaller
Once you've built a flow in executable format following [Build a flow as executable format](#build-a-flow-as-executable-format), two folders named `build` and `dist` will be created within your specified output directory, denoted as <your-output-dir>. The `build` folder houses various log and working files, while the `dist` folder contains the `app` executable application.
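If you later modify the entry script or the spec file and need to rebuild the bundle yourself, PyInstaller can be invoked on the generated spec file, e.g. via its Python entry point (a sketch; run it from within <your-output-dir>):
```python
import PyInstaller.__main__

# Equivalent to running "pyinstaller app.spec" in <your-output-dir>.
PyInstaller.__main__.run(["app.spec"])
```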
### Connections
If the service involves connections, all related connections will be exported as yaml files and recreated in the executable package.
Secrets in connections won't be exported directly. Instead, we will export them as a reference to environment variables:
```yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/OpenAIConnection.schema.json
type: open_ai
name: open_ai_connection
module: promptflow.connections
api_key: ${env:OPEN_AI_CONNECTION_API_KEY} # env reference
```
## Test the endpoint
Finally, you can distribute the bundled application `app` to other people. They can execute your program by double-clicking the executable file, e.g. `app.exe` on Windows, or by running the binary file, e.g. `app` on Linux.
The bundled app provides a built-in web page for testing the flow: open 'http://localhost:8501' in the browser. If the flow is served successfully, the process will keep alive until it is killed manually.
To your users, the app is self-contained. They do not need to install any particular version of Python or any modules. They do not need to have Python installed at all.
**Note**: The executable generated is not cross-platform. One platform (e.g. Windows) packaged executable can't run on others (Mac, Linux).
## Known issues
1. Note that Python 3.10.0 contains a bug making it unsupportable by PyInstaller. PyInstaller will also not work with beta releases of Python 3.13.
## Next steps
- Try the example [here](https://github.com/microsoft/promptflow/blob/main/examples/tutorials/flow-deploy)
# Deploy a flow using Docker
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../faq.md#stable-vs-experimental).
:::
There are two steps to deploy a flow using docker:
1. Build the flow as docker format.
2. Build and run the docker image.
## Build a flow as docker format
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Use the command below to build a flow as docker format:
```bash
pf flow build --source <path-to-your-flow-folder> --output <your-output-dir> --format docker
```
:::
:::{tab-item} VS Code Extension
:sync: VSC
In visual editor, choose:
![img](../../media/how-to-guides/vscode_export.png)
Click the button below to build a flow as docker format:
![img](../../media/how-to-guides/vscode_export_as_docker.png)
:::
::::
Note that all dependent connections must be created before exporting as docker.
### Docker format folder structure
Exported Dockerfile & its dependencies are located in the same folder. The structure is as below:
- flow: the folder contains all the flow files
- ...
- connections: the folder contains yaml files to create all related connections
- ...
- Dockerfile: the dockerfile to build the image
- start.sh: the script used in `CMD` of `Dockerfile` to start the service
- runit: the folder contains all the runit scripts
- ...
- settings.json: a json file to store the settings of the docker image
- README.md: Simple introduction of the files
## Deploy with Docker
We are going to use the [web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification/) as
an example to show how to deploy with docker.
Please ensure you have [created the connection](../manage-connections.md#create-a-connection) required by the flow; if not, you can refer to [Setup connection for web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification).
## Build a flow as docker format app
Use the command below to build a flow as docker format app:
```bash
pf flow build --source ../../flows/standard/web-classification --output dist --format docker
```
Note that all dependent connections must be created before exporting as docker.
### Build Docker image
Like any other Dockerfile, you need to build the image first. You can tag the image with any name you want. In this example, we use `web-classification-serve`.
Run the command below to build image:
```bash
docker build dist -t web-classification-serve
```
### Run Docker image
Running the docker image will start a service to serve the flow inside the container.
#### Connections
If the service involves connections, all related connections will be exported as yaml files and recreated in containers.
Secrets in connections won't be exported directly. Instead, we will export them as a reference to environment variables:
```yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/OpenAIConnection.schema.json
type: open_ai
name: open_ai_connection
module: promptflow.connections
api_key: ${env:OPEN_AI_CONNECTION_API_KEY} # env reference
```
You'll need to set up the environment variables in the container to make the connections work.
### Run with `docker run`
You can run the docker image directly via the command below:
```bash
# The started service will listen on port 8080. You can map the port to any port on the host machine as you want.
docker run -p 8080:8080 -e OPEN_AI_CONNECTION_API_KEY=<secret-value> web-classification-serve
```
### Test the endpoint
After start the service, you can use curl to test it:
```bash
curl http://localhost:8080/score --data '{"url":"https://play.google.com/store/apps/details?id=com.twitter.android"}' -X POST -H "Content-Type: application/json"
```
## Next steps
- Try the example [here](https://github.com/microsoft/promptflow/blob/main/examples/tutorials/flow-deploy/docker).
- See how to [deploy a flow using kubernetes](deploy-using-kubernetes.md).
# Creating Cascading Tool Inputs
Cascading input settings are useful when the value of one input field determines which subsequent inputs are shown. This makes the input process more streamlined, user-friendly, and error-free. This guide will walk through how to create cascading inputs for your tools.
## Prerequisites
Please make sure you have the latest version of [Prompt flow for VS Code](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow) installed (v1.2.0+).
## Create a tool with cascading inputs
We'll build out an example tool to show how cascading inputs work. The `student_id` and `teacher_id` inputs will be controlled by the value selected for the `user_type` input. Here's how to configure this in the tool code and YAML.
1. Develop the tool function, following the [cascading inputs example](https://github.com/microsoft/promptflow/blob/main/examples/tools/tool-package-quickstart/my_tool_package/tools/tool_with_cascading_inputs.py). Key points:
* Use the `@tool` decorator to mark the function as a tool.
* Define `UserType` as an Enum class, as it accepts only a specific set of fixed values in this example.
* Conditionally use inputs in the tool logic based on `user_type`.
```python
from enum import Enum
from promptflow import tool
class UserType(str, Enum):
STUDENT = "student"
TEACHER = "teacher"
@tool
def my_tool(user_type: Enum, student_id: str = "", teacher_id: str = "") -> str:
"""This is a dummy function to support cascading inputs.
:param user_type: user type, student or teacher.
:param student_id: student id.
:param teacher_id: teacher id.
:return: id of the user.
If user_type is student, return student_id.
If user_type is teacher, return teacher_id.
"""
if user_type == UserType.STUDENT:
return student_id
elif user_type == UserType.TEACHER:
return teacher_id
else:
raise Exception("Invalid user.")
```
2. Generate a starting YAML for your tool per the [tool package guide](create-and-use-tool-package.md), then update it to enable cascading:
Add `enabled_by` and `enabled_by_value` to control visibility of dependent inputs. See the [example YAML](https://github.com/microsoft/promptflow/blob/main/examples/tools/tool-package-quickstart/my_tool_package/yamls/tool_with_cascading_inputs.yaml) for reference.
* The `enabled_by` attribute specifies the input field, which must be an enum type, that controls the visibility of the dependent input field.
* The `enabled_by_value` attribute defines the accepted enum values from the `enabled_by` field that will make this dependent input field visible.
> Note: `enabled_by_value` takes a list, allowing multiple values to enable an input.
```yaml
my_tool_package.tools.tool_with_cascading_inputs.my_tool:
function: my_tool
inputs:
user_type:
type:
- string
enum:
- student
- teacher
student_id:
type:
- string
# This input is enabled by the input "user_type".
enabled_by: user_type
# This input is enabled when "user_type" is "student".
enabled_by_value: [student]
teacher_id:
type:
- string
enabled_by: user_type
enabled_by_value: [teacher]
module: my_tool_package.tools.tool_with_cascading_inputs
name: My Tool with Cascading Inputs
description: This is my tool with cascading inputs
type: python
```
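With the function and YAML in place, you can sanity-check the tool logic directly before packaging (a sketch; it assumes the package layout given in the YAML's `module` field):
```python
from my_tool_package.tools.tool_with_cascading_inputs import UserType, my_tool

# Hypothetical direct invocation, outside of any flow.
assert my_tool(UserType.STUDENT, student_id="s-001") == "s-001"
assert my_tool(UserType.TEACHER, teacher_id="t-042") == "t-042"
```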
## Use the tool in VS Code
Once you package and share your tool, you can use it in VS Code per the [tool package guide](create-and-use-tool-package.md). We have a [demo flow](https://github.com/microsoft/promptflow/tree/main/examples/tools/use-cases/cascading-inputs-tool-showcase) you can try.
Before selecting a `user_type`, the `student_id` and `teacher_id` inputs are hidden. Once you pick the `user_type`, the corresponding input appears.
![before_user_type_selected.png](../../media/how-to-guides/develop-a-tool/before_user_type_selected.png)
![after_user_type_selected_with_student.png](../../media/how-to-guides/develop-a-tool/after_user_type_selected_with_student.png)
![after_user_type_selected_with_teacher.png](../../media/how-to-guides/develop-a-tool/after_user_type_selected_with_teacher.png)
## FAQs
### How do I create multi-layer cascading inputs?
If you are dealing with multiple levels of cascading inputs, you can effectively manage the dependencies between them by using the `enabled_by` and `enabled_by_value` attributes. For example:
```yaml
my_tool_package.tools.tool_with_multi_layer_cascading_inputs.my_tool:
function: my_tool
inputs:
event_type:
type:
- string
enum:
- corporate
- private
corporate_theme:
type:
- string
# This input is enabled by the input "event_type".
enabled_by: event_type
# This input is enabled when "event_type" is "corporate".
enabled_by_value: [corporate]
enum:
- seminar
- team_building
seminar_location:
type:
- string
# This input is enabled by the input "corporate_theme".
enabled_by: corporate_theme
# This input is enabled when "corporate_theme" is "seminar".
enabled_by_value: [seminar]
private_theme:
type:
- string
# This input is enabled by the input "event_type".
enabled_by: event_type
# This input is enabled when "event_type" is "private".
enabled_by_value: [private]
module: my_tool_package.tools.tool_with_multi_layer_cascading_inputs
name: My Tool with Multi-Layer Cascading Inputs
description: This is my tool with multi-layer cascading inputs
type: python
```
Inputs will be enabled in a cascading way based on selections.
# Creating a Dynamic List Tool Input
Tool input options can be generated on the fly using a dynamic list. Instead of having predefined static options, the tool author defines a request function that queries backends like APIs to retrieve real-time options. This enables flexible integration with various data sources to populate dynamic options. For instance, the function could call a storage API to list current files. Rather than a hardcoded list, the user sees up-to-date options when running the tool.
## Prerequisites
- Please make sure you have the latest version of [Prompt flow for VS Code](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow) installed (v1.3.1+).
- Please install the promptflow package and ensure that its version is 1.0.0 or later.
```
pip install promptflow>=1.0.0
```
## Create a tool input with dynamic listing
### Create a list function
To enable dynamic listing, the tool author defines a request function with the following structure:
- Type: Regular Python function, can be in tool file or separate file
- Input: Accepts parameters needed to fetch options
- Output: Returns a list of option objects as `List[Dict[str, Union[str, int, float, list, Dict]]]`:
- Required key:
- `value`: Internal option value passed to tool function
- Optional keys:
- `display_value`: Display text shown in dropdown (defaults to `value`)
- `hyperlink`: URL to open when option clicked
- `description`: Tooltip text on hover
This function can make backend calls to retrieve the latest options, returning them in a standardized dictionary structure for the dynamic list. The required and optional keys enable configuring how each option appears and behaves in the tool input dropdown. See [my_list_func](https://github.com/microsoft/promptflow/blob/main/examples/tools/tool-package-quickstart/my_tool_package/tools/tool_with_dynamic_list_input.py) as an example.
```python
from typing import Dict, List, Union


def my_list_func(prefix: str = "", size: int = 10, **kwargs) -> List[Dict[str, Union[str, int, float, list, Dict]]]:
"""This is a dummy function to generate a list of items.
:param prefix: prefix to add to each item.
:param size: number of items to generate.
:param kwargs: other parameters.
:return: a list of items. Each item is a dict with the following keys:
- value: for backend use. Required.
- display_value: for UI display. Optional.
- hyperlink: external link. Optional.
- description: information icon tip. Optional.
"""
import random
words = ["apple", "banana", "cherry", "date", "elderberry", "fig", "grape", "honeydew", "kiwi", "lemon"]
result = []
for i in range(size):
random_word = f"{random.choice(words)}{i}"
cur_item = {
"value": random_word,
"display_value": f"{prefix}_{random_word}",
"hyperlink": f'https://www.bing.com/search?q={random_word}',
"description": f"this is {i} item",
}
result.append(cur_item)
return result
```
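You can also exercise the list function directly, e.g. as a quick unit test (a sketch):
```python
# Hypothetical direct call to preview the generated options.
options = my_list_func(prefix="fruit", size=3)
for opt in options:
    print(opt["value"], "->", opt["display_value"])
```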
### Configure a tool input with the list function
In the `inputs` section of the tool YAML, add the following properties to the input that you want to make dynamic:
- `dynamic_list`:
- `func_path`: Path to the list function (module_name.function_name).
- `func_kwargs`: Parameters to pass to the function, can reference other input values.
- `allow_manual_entry`: Allow user to enter input value manually. Default to false.
- `is_multi_select`: Allow user to select multiple values. Default to false.
See [tool_with_dynamic_list_input.yaml](https://github.com/microsoft/promptflow/blob/main/examples/tools/tool-package-quickstart/my_tool_package/yamls/tool_with_dynamic_list_input.yaml) as an example.
```yaml
my_tool_package.tools.tool_with_dynamic_list_input.my_tool:
function: my_tool
inputs:
input_text:
type:
- list
dynamic_list:
func_path: my_tool_package.tools.tool_with_dynamic_list_input.my_list_func
func_kwargs:
- name: prefix # argument name to be passed to the function
type:
- string
# if optional is not specified, default to false.
          # this is for UX pre-validation: if optional is false but no input is provided, the UX can raise an error in advance.
optional: true
reference: ${inputs.input_prefix} # dynamic reference to another input parameter
- name: size # another argument name to be passed to the function
type:
- int
optional: true
default: 10
# enum and dynamic list may need below setting.
# allow user to enter input value manually, default false.
allow_manual_entry: true
# allow user to select multiple values, default false.
is_multi_select: true
# used to filter
input_prefix:
type:
- string
module: my_tool_package.tools.tool_with_dynamic_list_input
name: My Tool with Dynamic List Input
description: This is my tool with dynamic list input
type: python
```
## Use the tool in VS Code
Once you package and share your tool, you can use it in VS Code per the [tool package guide](create-and-use-tool-package.md#use-your-tool-from-vscode-extension). You could try `my-tools-package` for a quick test.
```sh
pip install "my-tools-package>=0.0.8"
```
![dynamic list tool input options](../../media/how-to-guides/develop-a-tool/dynamic-list-options.png)
![dynamic list tool input selected](../../media/how-to-guides/develop-a-tool/dynamic-list-selected.png)
> Note: If your dynamic list function calls Azure APIs, you need to log in to Azure and set a default workspace. Otherwise, the tool input will be empty and you can't select anything. See [FAQs](#im-a-tool-author-and-want-to-dynamically-list-azure-resources-in-my-tool-input-what-should-i-pay-attention-to) for more details.
## FAQs
### I'm a tool author, and want to dynamically list Azure resources in my tool input. What should I pay attention to?
1. Declare the Azure workspace triple "subscription_id", "resource_group_name", and "workspace_name" in the list function signature. The system automatically appends the workspace triple to the function input parameters when they appear in the signature. See [list_endpoint_names](https://github.com/microsoft/promptflow/blob/main/examples/tools/tool-package-quickstart/my_tool_package/tools/tool_with_dynamic_list_input.py) as an example.
```python
from typing import Dict, List


def list_endpoint_names(subscription_id, resource_group_name, workspace_name, prefix: str = "") -> List[Dict[str, str]]:
"""This is an example to show how to get Azure ML resource in tool input list function.
:param subscription_id: Azure subscription id.
:param resource_group_name: Azure resource group name.
:param workspace_name: Azure ML workspace name.
:param prefix: prefix to add to each item.
"""
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential
credential = DefaultAzureCredential()
credential.get_token("https://management.azure.com/.default")
ml_client = MLClient(
credential=credential,
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name)
result = []
for ep in ml_client.online_endpoints.list():
hyperlink = (
f"https://ml.azure.com/endpoints/realtime/{ep.name}/detail?wsid=/subscriptions/"
f"{subscription_id}/resourceGroups/{resource_group_name}/providers/Microsoft."
f"MachineLearningServices/workspaces/{workspace_name}"
)
cur_item = {
"value": ep.name,
"display_value": f"{prefix}_{ep.name}",
# external link to jump to the endpoint page.
"hyperlink": hyperlink,
"description": f"this is endpoint: {ep.name}",
}
result.append(cur_item)
return result
```
2. Note in your tool documentation that if tool users want to use the tool locally, they should log in to Azure and set the workspace triple as defaults; otherwise the tool input will be empty and they can't select anything.
```sh
az login
az account set --subscription <subscription_id>
az configure --defaults group=<resource_group_name> workspace=<workspace_name>
```
Install the Azure dependencies.
```sh
pip install azure-ai-ml
```
```sh
pip install "my-tools-package[azure]>=0.0.8"
```
![dynamic list function azure](../../media/how-to-guides/develop-a-tool/dynamic-list-azure.png)
### I'm a tool user, and cannot see any options in dynamic list tool input. What should I do?
If you are unable to see any options in a dynamic list tool input, you may see an error message below the input field stating:
"Unable to display list of items due to XXX. Please contact the tool author/support team for troubleshooting assistance."
If this occurs, follow these troubleshooting steps:
- Note the exact error message shown. This provides details on why the dynamic list failed to populate.
- Contact the tool author/support team and report the issue. Provide the error message so they can investigate the root cause.
# Develop a tool
We provide guides on how to develop a tool and use it.
```{toctree}
:maxdepth: 1
:hidden:
create-and-use-tool-package
add-a-tool-icon
add-category-and-tags-for-tool
use-file-path-as-tool-input
customize_an_llm_tool
create-cascading-tool-inputs
create-your-own-custom-strong-type-connection
create-dynamic-list-tool-input
```
# Using File Path as Tool Input
Users sometimes need to reference local files within a tool to implement specific logic. To simplify this, we've introduced the `FilePath` input type. This input type enables users to either select an existing file or create a new one, then pass it to a tool, allowing the tool to access the file's content.
In this guide, we will provide a detailed walkthrough on how to use `FilePath` as a tool input. We will also demonstrate the user experience when utilizing this type of tool within a flow.
## Prerequisites
- Please install promptflow package and ensure that its version is 0.1.0b8 or later.
```
pip install "promptflow>=0.1.0b8"
```
- Please ensure that your [Prompt flow for VS Code](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow) is updated to version 1.1.0 or later.
## Using File Path as Package Tool Input
### How to create a package tool with file path input
Here we use [an existing tool package](https://github.com/microsoft/promptflow/tree/main/examples/tools/tool-package-quickstart/my_tool_package) as an example. If you want to create your own tool, please refer to [create and use tool package](create-and-use-tool-package.md#create-custom-tool-package).
1. Add a `FilePath` input for your tool, like in [this example](https://github.com/microsoft/promptflow/blob/main/examples/tools/tool-package-quickstart/my_tool_package/tools/tool_with_file_path_input.py).
```python
import importlib
from pathlib import Path
from promptflow import tool
# 1. import the FilePath type
from promptflow.contracts.types import FilePath
# 2. add a FilePath input for your tool method
@tool
def my_tool(input_file: FilePath, input_text: str) -> str:
# 3. customise your own code to handle and use the input_file here
new_module = importlib.import_module(Path(input_file).stem)
return new_module.hello(input_text)
```
2. `FilePath` input format in a tool YAML, like in [this example](https://github.com/microsoft/promptflow/blob/main/examples/tools/tool-package-quickstart/my_tool_package/yamls/tool_with_file_path_input.yaml).
```yaml
my_tool_package.tools.tool_with_file_path_input.my_tool:
function: my_tool
inputs:
# yaml format for FilePath input
input_file:
type:
- file_path
input_text:
type:
- string
module: my_tool_package.tools.tool_with_file_path_input
name: Tool with FilePath Input
description: This is a tool to demonstrate the usage of FilePath input
type: python
```
> [!Note] The tool YAML file can be generated using a python script. For further details, please refer to [create custom tool package](create-and-use-tool-package.md#create-custom-tool-package).
### Use tool with a file path input in VS Code extension
Follow steps to [build and install your tool package](create-and-use-tool-package.md#build-and-share-the-tool-package) and [use your tool from VS Code extension](create-and-use-tool-package.md#use-your-tool-from-vscode-extension).
Here we use an existing flow to demonstrate the experience, open [this flow](https://github.com/microsoft/promptflow/blob/main/examples/tools/use-cases/filepath-input-tool-showcase/flow.dag.yaml) in VS Code extension:
- There is a node named "Tool_with_FilePath_Input" with a `file_path` type input called `input_file`.
- Click the picker icon to open the UI for selecting an existing file or creating a new file to use as input.
![use file path in flow](../../media/how-to-guides/develop-a-tool/use_file_path_in_flow.png)
## Using File Path as Script Tool Input
We can also utilize the `FilePath` input type directly in a script tool, eliminating the need to create a package tool.
1. Initiate an empty flow in the VS Code extension and add a python node titled 'python_node_with_filepath' to it on the Visual Editor page.
2. Select the link `python_node_with_filepath.py` in the node to modify the python method to include a `FilePath` input as shown below, and save the code change.
```python
import importlib
from pathlib import Path
from promptflow import tool
# 1. import the FilePath type
from promptflow.contracts.types import FilePath
# 2. add a FilePath input for your tool method
@tool
def my_tool(input_file: FilePath, input_text: str) -> str:
# 3. customise your own code to handle and use the input_file here
new_module = importlib.import_module(Path(input_file).stem)
return new_module.hello(input_text)
```
3. Return to the flow Visual Editor page, click the picker icon to launch the UI for selecting an existing file or creating a new file to use as input, here we select [this file](https://github.com/microsoft/promptflow/blob/main/examples/tools/use-cases/filepath-input-tool-showcase/hello_method.py) as an example.
![use file path in script tool](../../media/how-to-guides/develop-a-tool/use_file_path_in_script_tool.png)
## FAQ
### What are some practical use cases for this feature?
The `FilePath` input enables several useful workflows:
1. **Dynamically load modules** - As shown in the demo, you can load a Python module from a specific script file selected by the user. This allows flexible custom logic.
2. **Load arbitrary data files** - The tool can load data from files like .csv, .txt, .json, etc. This provides an easy way to inject external data into a tool.
So in summary, `FilePath` input gives tools flexible access to external files provided by users at runtime. This unlocks many useful scenarios like the ones above.
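As a quick illustration of the second scenario, here is a minimal sketch of a tool that reads a user-selected JSON file; the key lookup logic is illustrative, not part of the original sample:
```python
import json

from promptflow import tool
from promptflow.contracts.types import FilePath


@tool
def read_json_value(input_file: FilePath, key: str) -> str:
    # Load the user-selected JSON file and return the value for the given key.
    with open(input_file, "r", encoding="utf-8") as f:
        data = json.load(f)
    return str(data.get(key, ""))
```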
# Create and Use Your Own Custom Strong Type Connection
Connections provide a secure method for managing credentials for external APIs and data sources in prompt flow. This guide explains how to create and use a custom strong type connection.
## What is a Custom Strong Type Connection?
A custom strong type connection in prompt flow allows you to define a custom connection class with strongly typed keys. This provides the following benefits:
* Enhanced user experience - no need to manually enter connection keys.
* Rich intellisense experience - defining key types enables real-time suggestions and auto-completion of available keys as you work in VS Code.
* Central location to view available keys and data types.
For other connection types, please refer to [Connections](https://microsoft.github.io/promptflow/concepts/concept-connections.html).
## Prerequisites
- Please ensure that your [Prompt flow for VS Code](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow) is updated to at least version 1.2.1.
- Please install promptflow package and ensure that its version is 0.1.0b8 or later.
```
pip install "promptflow>=0.1.0b8"
```
## Create a custom strong type connection
Follow these steps to create a custom strong type connection:
1. Define a Python class inheriting from `CustomStrongTypeConnection`.
> [!Note] Please avoid using the `CustomStrongTypeConnection` class directly.
2. Use the Secret type to indicate secure keys. This enhances security by scrubbing secret keys.
3. Document with docstrings explaining each key.
For example:
```python
from promptflow.connections import CustomStrongTypeConnection
from promptflow.contracts.types import Secret
class MyCustomConnection(CustomStrongTypeConnection):
"""My custom strong type connection.
:param api_key: The api key.
:type api_key: Secret
:param api_base: The api base.
:type api_base: String
"""
api_key: Secret
api_base: str = "This is a fake api base."
```
See [this example](https://github.com/microsoft/promptflow/blob/main/examples/tools/tool-package-quickstart/my_tool_package/tools/tool_with_custom_strong_type_connection.py) for a complete implementation.
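Inside a tool, the strongly typed keys are then accessed as plain attributes. The sketch below shows a consuming tool; the import path and the service call are illustrative assumptions, not the repository's actual sample:
```python
from promptflow import tool
from my_tool_package.connections import MyCustomConnection  # illustrative import path


@tool
def call_external_api(connection: MyCustomConnection, query: str) -> str:
    # Typed access to the keys defined on the connection class.
    api_key = connection.api_key    # Secret: scrubbed in logs
    api_base = connection.api_base
    # ... use api_key/api_base to call your external service here ...
    return f"Would call {api_base} with query: {query}"
```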
## Use the connection in a flow
Once you create a custom strong type connection, here are two ways to use it in your flows:
### With Package Tools:
1. Refer to the [Create and Use Tool Package](create-and-use-tool-package.md#create-custom-tool-package) to build and install your tool package containing the connection.
2. Develop a flow with custom tools. Please take [this folder](https://github.com/microsoft/promptflow/tree/main/examples/tools/use-cases/custom-strong-type-connection-package-tool-showcase) as an example.
3. Create a custom strong type connection using one of the following methods:
- If the connection type hasn't been created previously, click the 'Add connection' button to create the connection.
![create_custom_strong_type_connection_in_node_interface](../../media/how-to-guides/develop-a-tool/create_custom_strong_type_connection_in_node_interface.png)
- Click the 'Create connection' plus sign in the CONNECTIONS section.
![create_custom_strong_type_connection_add_sign](../../media/how-to-guides/develop-a-tool/create_custom_strong_type_connection_add_sign.png)
- Click 'Create connection' plus sign in the Custom category.
![create_custom_strong_type_connection_in_custom_category](../../media/how-to-guides/develop-a-tool/create_custom_strong_type_connection_in_custom_category.png)
4. Fill in the `values` starting with `to-replace-with` in the connection template.
![custom_strong_type_connection_template](../../media/how-to-guides/develop-a-tool/custom_strong_type_connection_template.png)
5. Run the flow with the created custom strong type connection.
![use_custom_strong_type_connection_in_flow](../../media/how-to-guides/develop-a-tool/use_custom_strong_type_connection_in_flow.png)
### With Script Tools:
1. Develop a flow with python script tools. Please take [this folder](https://github.com/microsoft/promptflow/tree/main/examples/tools/use-cases/custom-strong-type-connection-script-tool-showcase) as an example.
2. Create a `CustomConnection`. Fill in the `keys` and `values` in the connection template.
![custom](../../media/how-to-guides/develop-a-tool/custom_connection_template.png)
3. Run the flow with the created custom connection.
![use_custom_connection_in_flow](../../media/how-to-guides/develop-a-tool/use_custom_connection_in_flow.png)
## Local to cloud
When creating the necessary connections in Azure AI, you will need to create a `CustomConnection`. In the node interface of your flow, this connection will be displayed as the `CustomConnection` type.
Please refer to [Run prompt flow in Azure AI](https://microsoft.github.io/promptflow/cloud/azureai/quick-start.html) for more details.
Here is an example command:
```
pfazure run create --subscription 96aede12-2f73-41cb-b983-6d11a904839b -g promptflow -w my-pf-eus --flow D:\proj\github\ms\promptflow\examples\flows\standard\flow-with-package-tool-using-custom-strong-type-connection --data D:\proj\github\ms\promptflow\examples\flows\standard\flow-with-package-tool-using-custom-strong-type-connection\data.jsonl --runtime test-compute
```
## FAQs
### I followed the steps to create a custom strong type connection, but it's not showing up. What could be the issue?
Once the new tool package is installed in your local environment, a window reload is necessary. This action ensures that the new tools and custom strong type connections become visible and accessible.
# Adding a Tool Icon
A tool icon serves as a graphical representation of your tool in the user interface (UI). Follow this guidance to add a custom tool icon when developing your own tool package.
Adding a custom tool icon is optional. If you do not provide one, the system uses a default icon.
## Prerequisites
- Please ensure that your [Prompt flow for VS Code](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow) is updated to version 1.4.2 or later.
- Create a tool package as described in [Create and Use Tool Package](create-and-use-tool-package.md).
- Prepare custom icon image that meets these requirements:
- Use PNG, JPG or BMP format.
- 16x16 pixels to prevent distortion when resizing.
- Avoid complex images with lots of detail or contrast, as they may not resize well.
See [this example](https://github.com/microsoft/promptflow/blob/main/examples/tools/tool-package-quickstart/my_tool_package/icons/custom-tool-icon.png) as a reference.
- Install dependencies to generate icon data URI:
```
pip install pillow
```
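The repository scripts described below handle this conversion for you. For intuition, the underlying step is just a resize plus base64 encoding; a rough sketch (not the actual script) might look like:
```python
import base64
from io import BytesIO

from PIL import Image


def image_to_data_uri(image_path: str, size: int = 16) -> str:
    # Resize the icon and encode it as a base64 PNG data URI.
    img = Image.open(image_path).convert("RGBA").resize((size, size))
    buffer = BytesIO()
    img.save(buffer, format="PNG")
    encoded = base64.b64encode(buffer.getvalue()).decode("utf-8")
    return f"data:image/png;base64,{encoded}"
```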
## Add tool icon with _icon_ parameter
Run the command below in your tool project directory to automatically generate your tool YAML; use the _-i_ or _--icon_ parameter to add a custom tool icon:
```
python <promptflow github repo>\scripts\tool\generate_package_tool_meta.py -m <tool_module> -o <tool_yaml_path> -i <tool-icon-path>
```
Here we use [an existing tool project](https://github.com/microsoft/promptflow/tree/main/examples/tools/tool-package-quickstart) as an example.
```
cd D:\proj\github\promptflow\examples\tools\tool-package-quickstart
python D:\proj\github\promptflow\scripts\tool\generate_package_tool_meta.py -m my_tool_package.tools.my_tool_1 -o my_tool_package\yamls\my_tool_1.yaml -i my_tool_package\icons\custom-tool-icon.png
```
In the auto-generated tool YAML file, the custom tool icon data URI is added in the `icon` field:
```yaml
my_tool_package.tools.my_tool_1.my_tool:
function: my_tool
icon: data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAACR0lEQVR4nKWS3UuTcRTHP79nm9ujM+fccqFGI5viRRpjJgkJ3hiCENVN/QMWdBHUVRdBNwX9ARHd2FVEWFLRjaS9XPmSC/EFTNOWc3Pi48y9PHNzz68L7UXTCvreHM65+PA953uElFLyHzLvHMwsJrnzfJqFeAan3cKV9mr8XseeAOXX5vqjSS53jdF+tIz1nIFAMDCzwpvJ5b87+LSYYHw+gcWkEAwluXnOR2Q1R+9YjJ7BKJG4zoXmqr0ddL3+QnV5EeUOK821LsJammcjEeZiafJScrd3bm8H6zkDd4mVztZKAK49/Mj8is4Z/35GPq9R5VJ5GYztDtB1HT1vovGQSiqVAqDugI3I6jpP3i9x9VQVfu8+1N/OvbWCqqqoBSa6h1fQNA1N0xiYTWJSBCZF8HgwSjQapbRQ2RUg5NYj3O6ZochmYkFL03S4mImIzjFvCf2xS5gtCRYXWvBUvKXjyEVeTN/DXuDgxsnuzSMK4HTAw1Q0hZba4NXEKp0tbpq9VkxCwTAETrsVwxBIBIYhMPI7YqyrtONQzSznJXrO4H5/GJ9LUGg0YFYydJxoYnwpj1s9SEN5KzZz4fYYAW6dr+VsowdFgamlPE/Hs8SzQZYzg0S+zjIc6iOWDDEc6uND+N12B9/VVu+mrd79o38wFCCdTeBSK6hxBii1eahxBlAtRbsDdmoiHGRNj1NZ7GM0NISvzM9oaIhiqwOO/wMgl4FsRpLf2KxGXpLNSLLInzH+CWBIA6RECIGUEiEUpDRACBSh8A3pXfGWdXfMgAAAAABJRU5ErkJggg==
inputs:
connection:
type:
- CustomConnection
input_text:
type:
- string
module: my_tool_package.tools.my_tool_1
name: my_tool
type: python
```
## Verify the tool icon in VS Code extension
Follow [steps](create-and-use-tool-package.md#use-your-tool-from-vscode-extension) to use your tool from VS Code extension. Your tool displays with the custom icon:
![custom-tool-with-icon-in-extension](../../media/how-to-guides/develop-a-tool/custom-tool-with-icon-in-extension.png)
## FAQ
### Can I preview the tool icon image before adding it to a tool?
Yes, you can run the command below under the root folder to generate a data URI for your custom tool icon. Make sure the output file has an `.html` extension.
```
python <path-to-scripts>\tool\convert_image_to_data_url.py --image-path <image_input_path> -o <html_output_path>
```
For example:
```
python D:\proj\github\promptflow\scripts\tool\convert_image_to_data_url.py --image-path D:\proj\github\promptflow\examples\tools\tool-package-quickstart\my_tool_package\icons\custom-tool-icon.png -o output.html
```
The content of `output.html` looks like the following, open it in a web browser to preview the icon.
```html
<html>
<body>
<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAACR0lEQVR4nKWS3UuTcRTHP79nm9ujM+fccqFGI5viRRpjJgkJ3hiCENVN/QMWdBHUVRdBNwX9ARHd2FVEWFLRjaS9XPmSC/EFTNOWc3Pi48y9PHNzz68L7UXTCvreHM65+PA953uElFLyHzLvHMwsJrnzfJqFeAan3cKV9mr8XseeAOXX5vqjSS53jdF+tIz1nIFAMDCzwpvJ5b87+LSYYHw+gcWkEAwluXnOR2Q1R+9YjJ7BKJG4zoXmqr0ddL3+QnV5EeUOK821LsJammcjEeZiafJScrd3bm8H6zkDd4mVztZKAK49/Mj8is4Z/35GPq9R5VJ5GYztDtB1HT1vovGQSiqVAqDugI3I6jpP3i9x9VQVfu8+1N/OvbWCqqqoBSa6h1fQNA1N0xiYTWJSBCZF8HgwSjQapbRQ2RUg5NYj3O6ZochmYkFL03S4mImIzjFvCf2xS5gtCRYXWvBUvKXjyEVeTN/DXuDgxsnuzSMK4HTAw1Q0hZba4NXEKp0tbpq9VkxCwTAETrsVwxBIBIYhMPI7YqyrtONQzSznJXrO4H5/GJ9LUGg0YFYydJxoYnwpj1s9SEN5KzZz4fYYAW6dr+VsowdFgamlPE/Hs8SzQZYzg0S+zjIc6iOWDDEc6uND+N12B9/VVu+mrd79o38wFCCdTeBSK6hxBii1eahxBlAtRbsDdmoiHGRNj1NZ7GM0NISvzM9oaIhiqwOO/wMgl4FsRpLf2KxGXpLNSLLInzH+CWBIA6RECIGUEiEUpDRACBSh8A3pXfGWdXfMgAAAAABJRU5ErkJggg==" alt="My Image">
</body>
</html>
```
### Can I add a tool icon to an existing tool package?
Yes, you can refer to the [preview icon](add-a-tool-icon.md#can-i-preview-the-tool-icon-image-before-adding-it-to-a-tool) section to generate the data URI and manually add the data URI to the tool's YAML file.
### Can I add tool icons for dark and light mode separately?
Yes, you can add the tool icon data URIs manually, or run the command below in your tool project directory to automatically generate your tool YAML; use _--icon-light_ to add a custom tool icon for the light mode and _--icon-dark_ to add a custom tool icon for the dark mode:
```
python <promptflow github repo>\scripts\tool\generate_package_tool_meta.py -m <tool_module> -o <tool_yaml_path> --icon-light <light-tool-icon-path> --icon-dark <dark-tool-icon-path>
```
Here we use [an existing tool project](https://github.com/microsoft/promptflow/tree/main/examples/tools/tool-package-quickstart) as an example.
```
cd D:\proj\github\promptflow\examples\tools\tool-package-quickstart
python D:\proj\github\promptflow\scripts\tool\generate_package_tool_meta.py -m my_tool_package.tools.my_tool_1 -o my_tool_package\yamls\my_tool_1.yaml --icon-light my_tool_package\icons\custom-tool-icon-light.png --icon-dark my_tool_package\icons\custom-tool-icon-dark.png
```
In the auto-generated tool YAML file, the light and dark tool icon data URIs are added in the `icon` field:
```yaml
my_tool_package.tools.my_tool_1.my_tool:
function: my_tool
icon:
dark: data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAIAAACQkWg2AAAB00lEQVR4nI1SO2iTURT+7iNNb16a+Cg6iJWqRKwVRIrWV6GVUkrFdqiVShBaxIIi4iY4iouDoy4ODkKn4uQkDs5FfEzFYjEtJYQo5P/z35j/3uNw7Z80iHqHC/ec8z3OuQeMMcYYAHenU8n84YMAABw7mo93dEQpAIyBAyAiF1Kq8/Wrl5fHR1x6tjC9uPBcSrlZD4BxIgIgBCei+bnC6cGxSuWHEEIIUa58H7l0dWZqwlqSUjhq7oDWEoAL584Y6ymljDHGmM543BhvaPAsAKLfEjIyB6BeryPw796+EWidUInr16b5z6rWAYCmKXeEEADGRy+SLgXlFfLWbbWoyytULZ4f6Hee2yDgnAG4OVsoff20try08eX92vLSzJVJAJw3q7dISSnDMFx48UypeCa97cPHz7fu3Y/FYo1Go8nbCiAiIUStVus/eaKvN691IAQnsltI24wZY9Kp1Ju373K5bDKZNMa6gf5ZIWrG9/0g0K3W/wYIw3Dvnq6dO7KNMPwvgOf5x3uPHOrp9n3/HwBrLYCu3bv6Tg0PjU0d2L8PAEWfDKCtac6YIVrfKN2Zn8tkUqvfigBaR88Ya66uezMgl93+9Mmjxw8fJBIqWv7NAvwCHeuq7gEPU/QAAAAASUVORK5CYII=
light: data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAIAAACQkWg2AAAB2UlEQVR4nH1SO2hUQRQ9c18K33u72cXs7jOL8UeQCCJoJaIgKAiCWKilaGNlYREFDRGNjayVWKiFFmITECFKJKIokQRRsDFENoooUchHU5qdWZ2512KymxcNOcUwc5nDuefeA2FhZpGFU0S0Mf5S0zpdF2FhISgopUREKfXj59yhoycmPn4GAKDncuXa9VtKKWYGACgowHOdc9a6g0eOA7mx8apzzlp76vRZoGXw6XMRsdb6nwSAmYnoQ3Xi5fBIdk2SiSMiCoKgNZslteruvX4ASikvSwAEAGDqdYhAXO+VypevkwODQ4+HnlGcq2mDNLwtZq5pvWP3AYRJ0Lq2uG5rWNgYFjaBVt+8c19E/jRaWvQgImPj1e279ufaN8elzly5K1/u6r7QZ51zrjmoBqHJ+TU/39ax5cy5i53bdnb39KXtLpr28OMLgiCfz78YHpmemi0W2piZWdIWaMmDCIDWet/ePUlS0toQUWM8yxG8jrVuw/qOTBw19rUiQUQoCGZm50z9txf8By3/K0Rh+PDRk8lv3+MoWklBBACmpmdKxcKn96O3b1SqC6FSyxOUgohk4pjZ9T8YeDX6ptye+PoSpNIrfkGv3747fOzk+UtXjTE+BM14M8tfl7BQR9VzUXEAAAAASUVORK5CYII=
inputs:
connection:
type:
- CustomConnection
input_text:
type:
- string
module: my_tool_package.tools.my_tool_1
name: my_tool
type: python
```
Note: Both light and dark icons are optional. If you set either a light or dark icon, it will be used in its respective mode, and the system default icon will be used in the other mode.
# Create and Use Tool Package
In this document, we will guide you through the process of developing your own tool package, offering detailed steps and advice on how to utilize your creation.
The custom tool is the prompt flow tool developed by yourself. If you find it useful, you can follow this guidance to make it a tool package. This will enable you to conveniently reuse it, share it with your team, or distribute it to anyone in the world.
After successful installation of the package, your custom "tool" will show up in VSCode extension as below:
![custom-tool-list](../../media/how-to-guides/develop-a-tool/custom-tool-list-in-extension.png)
## Create your own tool package
Your tool package should be a python package. To try it quickly, just use [my-tools-package 0.0.1](https://pypi.org/project/my-tools-package/) and skip this section.
### Prerequisites
Create a new conda environment using Python 3.9 or 3.10. Run the command below to install the PromptFlow dependencies:
```
pip install promptflow
```
Install Pytest packages for running tests:
```
pip install pytest pytest-mock
```
Clone the PromptFlow repository from GitHub using the following command:
```
git clone https://github.com/microsoft/promptflow.git
```
### Create custom tool package
Run the command below under the root folder to create your tool project quickly:
```
python <promptflow github repo>\scripts\tool\generate_tool_package_template.py --destination <your-tool-project> --package-name <your-package-name> --tool-name <your-tool-name> --function-name <your-tool-function-name>
```
For example:
```
python D:\proj\github\promptflow\scripts\tool\generate_tool_package_template.py --destination hello-world-proj --package-name hello-world --tool-name hello_world_tool --function-name get_greeting_message
```
This script will auto-generate one tool for you. The parameters _destination_ and _package-name_ are mandatory. The parameters _tool-name_ and _function-name_ are optional. If left unfilled, the _tool-name_ will default to _hello_world_tool_, and the _function-name_ will default to the _tool-name_.
The command will generate the tool project as follows with one tool `hello_world_tool.py` in it:
```
hello-world-proj/
│
├── hello_world/
│ ├── tools/
│ │ ├── __init__.py
│ │ ├── hello_world_tool.py
│ │ └── utils.py
│ ├── yamls/
│ │ └── hello_world_tool.yaml
│ └── __init__.py
│
├── tests/
│ ├── __init__.py
│ └── test_hello_world_tool.py
│
├── MANIFEST.in
│
└── setup.py
```
The points outlined below explain the purpose of each folder/file in the package. If your aim is to develop multiple tools within your package, please make sure to closely examine points 2 and 5.
1. **hello-world-proj**: This is the source directory. All of your project's source code should be placed in this directory.
2. **hello_world/tools**: This directory contains the individual tools for your project. Your tool package can contain either one tool or many tools. When adding a new tool, you should create another *_tool.py under the `tools` folder.
3. **hello_world/tools/hello_world_tool.py**: Develop your tool within the def function. Use the `@tool` decorator to identify the function as a tool.
> [!Note] There are two ways to write a tool. The default and recommended way is the function implemented way. You can also use the class implementation way, referring to [my_tool_2.py](https://github.com/microsoft/promptflow/blob/main/examples/tools/tool-package-quickstart/my_tool_package/tools/my_tool_2.py) as an example.
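For orientation, a function-style tool is just a decorated Python function; a minimal sketch might look like:
```python
from promptflow import tool


@tool
def get_greeting_message(name: str) -> str:
    # The @tool decorator marks this function as the tool's entry point.
    return f"Hello, {name}!"
```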
4. **hello_world/tools/utils.py**: This file implements the tool list method, which collects all the tools defined. It is required to have this tool list method, as it allows the User Interface (UI) to retrieve your tools and display them within the UI.
> [!Note] There's no need to create your own list method if you maintain the existing folder structure. You can simply use the auto-generated list method provided in the `utils.py` file.
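Conceptually, the auto-generated list method loads every tool YAML shipped under the `yamls` folder and returns the entries keyed by tool identifier. A simplified sketch (the real generated code may differ in detail):
```python
from pathlib import Path

import yaml


def list_package_tools():
    # Collect tool metadata from every YAML file shipped in the yamls folder.
    yaml_dir = Path(__file__).parents[1] / "yamls"
    tools = {}
    for yaml_file in yaml_dir.glob("*.yaml"):
        with open(yaml_file, "r") as f:
            tools.update(yaml.safe_load(f))
    return tools
```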
5. **hello_world/yamls/hello_world_tool.yaml**: Tool YAMLs defines the metadata of the tool. The tool list method, as outlined in the `utils.py`, fetches these tool YAMLs.
> [!Note] If you create a new tool, don't forget to also create the corresponding tool YAML. You can run the command below under your tool project to auto-generate your tool YAML. You may want to specify `-n` for `name` and `-d` for `description`, which would be displayed as the tool name and tooltip in the prompt flow UI.
```
python <promptflow github repo>\scripts\tool\generate_package_tool_meta.py -m <tool_module> -o <tool_yaml_path> -n <tool_name> -d <tool_description>
```
For example:
```
python D:\proj\github\promptflow\scripts\tool\generate_package_tool_meta.py -m hello_world.tools.hello_world_tool -o hello_world\yamls\hello_world_tool.yaml -n "Hello World Tool" -d "This is my hello world tool."
```
To populate your tool module, adhere to the pattern \<package_name\>.tools.\<tool_name\>, which represents the folder path to your tool within the package.
6. **tests**: This directory contains all your tests, though they are not required for creating your custom tool package. When adding a new tool, you can also create corresponding tests and place them in this directory. Run the command below under your tool project:
```
pytest tests
```
7. **MANIFEST.in**: This file is used to determine which files to include in the distribution of the project. Tool YAML files should be included in MANIFEST.in so that your tool YAMLs are packaged and your tools show up in the UI.
> [!Note] There's no need to update this file if you maintain the existing folder structure.
8. **setup.py**: This file contains metadata about your project like the name, version, author, and more. Additionally, the entry point is automatically configured for you in the `generate_tool_package_template.py` script. In Python, configuring the entry point in `setup.py` helps establish the primary execution point for a package, streamlining its integration with other software.
The `package_tools` entry point together with the tool list method are used to retrieve all the tools and display them in the UI.
```python
entry_points={
"package_tools": ["<your_tool_name> = <list_module>:<list_method>"],
},
```
> [!Note] There's no need to update this file if you maintain the existing folder structure.
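Filled in for the hello-world example above (assuming the generated list method is named `list_package_tools`, as in the quickstart template), the entry would read:
```python
entry_points={
    "package_tools": ["hello_world = hello_world.tools.utils:list_package_tools"],
},
```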
## Build and share the tool package
Execute the following command in the tool package root directory to build your tool package:
```
python setup.py sdist bdist_wheel
```
This will generate a tool package `<your-package>-0.0.1.tar.gz` and corresponding `whl file` inside the `dist` folder.
Create an account on PyPI if you don't already have one, and install `twine` package by running `pip install twine`.
Upload your package to PyPI by running `twine upload dist/*`. This will prompt you for your PyPI username and password, and then upload your package to PyPI. Once your package is uploaded to PyPI, others can install it using pip by running `pip install your-package-name`. Make sure to replace `your-package-name` with the name of your package as it appears on PyPI.
If you only want to put it on Test PyPI, upload your package by running `twine upload --repository-url https://test.pypi.org/legacy/ dist/*`. Once your package is uploaded to Test PyPI, others can install it using pip by running `pip install --index-url https://test.pypi.org/simple/ your-package-name`.
## Use your tool from VSCode Extension
* Step1: Install [Prompt flow for VS Code extension](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow).
* Step2: Go to terminal and install your tool package in conda environment of the extension. Assume your conda env name is `prompt-flow`.
```
(local_test) PS D:\projects\promptflow\tool-package-quickstart> conda activate prompt-flow
(prompt-flow) PS D:\projects\promptflow\tool-package-quickstart> pip install .\dist\my_tools_package-0.0.1-py3-none-any.whl
```
* Step3: Go to the extension and open one flow folder. Click 'flow.dag.yaml' and preview the flow. Next, click the `+` button and you will see your tools. You may need to reload the window to clear the previous cache if you don't see your tool in the list.
![auto-list-tool-in-extension](../../media/how-to-guides/develop-a-tool/auto-list-tool-in-extension.png)
## FAQs
### Why is my custom tool not showing up in the UI?
Confirm that the tool YAML files are included in your custom tool package. You can add the YAML files to [MANIFEST.in](https://github.com/microsoft/promptflow/blob/main/examples/tools/tool-package-quickstart/MANIFEST.in) and include the package data in [setup.py](https://github.com/microsoft/promptflow/blob/main/examples/tools/tool-package-quickstart/setup.py).
Alternatively, you can test your tool package using the script below to ensure that you've packaged your tool YAML files and configured the package tool entry point correctly.
1. Make sure to install the tool package in your conda environment before executing this script.
2. Create a python file anywhere and copy the content below into it.
```python
import importlib
import importlib.metadata

PACKAGE_TOOLS_ENTRY = "package_tools"
def test():
"""List all package tools information using the `package-tools` entry point.
This function iterates through all entry points registered under the group "package_tools."
For each tool, it imports the associated module to ensure its validity and then prints
information about the tool.
Note:
- Make sure your package is correctly packed to appear in the list.
- The module is imported to validate its presence and correctness.
Example of tool information printed:
----identifier
{'module': 'module_name', 'package': 'package_name', 'package_version': 'package_version', ...}
"""
    entry_points = importlib.metadata.entry_points()
    if hasattr(entry_points, "select"):  # Python 3.10+: EntryPoints object
        entry_points = entry_points.select(group=PACKAGE_TOOLS_ENTRY)
    else:  # older Python: a dict mapping group name to entry points
        entry_points = entry_points.get(PACKAGE_TOOLS_ENTRY, [])
for entry_point in entry_points:
list_tool_func = entry_point.load()
package_tools = list_tool_func()
for identifier, tool in package_tools.items():
importlib.import_module(tool["module"]) # Import the module to ensure its validity
print(f"----{identifier}\n{tool}")
if __name__ == "__main__":
test()
```
3. Run this script in your conda environment. This will return the metadata of all tools installed in your local environment, and you should verify that your tools are listed.
### Why am I unable to upload package to PyPI?
* Make sure that the entered username and password of your PyPI account are accurate.
* If you encounter a `403 Forbidden Error`, it's likely due to a naming conflict with an existing package. You will need to choose a different name. Package names must be unique on PyPI to avoid confusion and conflicts among users. Before creating a new package, it's recommended to search PyPI (https://pypi.org/) to verify that your chosen name is not already taken. If the name you want is unavailable, consider selecting an alternative name or a variation that clearly differentiates your package from the existing one.
## Advanced features
- [Add a Tool Icon](add-a-tool-icon.md)
- [Add Category and Tags for Tool](add-category-and-tags-for-tool.md)
- [Create and Use Your Own Custom Strong Type Connection](create-your-own-custom-strong-type-connection.md)
- [Customize an LLM Tool](customize_an_llm_tool.md)
- [Use File Path as Tool Input](use-file-path-as-tool-input.md)
- [Create a Dynamic List Tool Input](create-dynamic-list-tool-input.md)
- [Create Cascading Tool Inputs](create-cascading-tool-inputs.md)
# Adding Category and Tags for Tool
This document is dedicated to guiding you through the process of categorizing and tagging your tools for optimal organization and efficiency. Categories help you organize your tools into specific folders, making it much easier to find what you need. Tags, on the other hand, work like labels that offer more detailed descriptions. They enable you to quickly search and filter tools based on specific characteristics or functions. By using categories and tags, you'll not only tailor your tool library to your preferences but also save time by effortlessly finding the right tool for any task.
| Attribute | Type | Required | Description |
| --------- | ---- | -------- | ----------- |
| category | str | No | Organizes tools into folders by common features. |
| tags | dict | No | Offers detailed, searchable descriptions of tools through key-value pairs. |
**Important Notes:**
- Tools without an assigned category will be listed in the root folder.
- Tools lacking tags will display an empty tags field.
## Prerequisites
- Please ensure that your [Prompt flow for VS Code](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow) is updated to version 1.1.0 or later.
## How to add category and tags for a tool
Run the command below in your tool project directory to automatically generate your tool YAML; use _-c_ or _--category_ to add a category, and use _--tags_ to add tags for your tool:
```
python <promptflow github repo>\scripts\tool\generate_package_tool_meta.py -m <tool_module> -o <tool_yaml_path> --category <tool_category> --tags <tool_tags>
```
Here, we use [an existing tool](https://github.com/microsoft/promptflow/tree/main/examples/tools/tool-package-quickstart/my_tool_package/yamls/my_tool_1.yaml) as an example. If you wish to create your own tool, please refer to the [create and use tool package](create-and-use-tool-package.md#create-custom-tool-package) guide.
```
cd D:\proj\github\promptflow\examples\tools\tool-package-quickstart
python D:\proj\github\promptflow\scripts\tool\generate_package_tool_meta.py -m my_tool_package.tools.my_tool_1 -o my_tool_package\yamls\my_tool_1.yaml --category "test_tool" --tags "{'tag1':'value1','tag2':'value2'}"
```
In the auto-generated tool YAML file, the category and tags are shown as below:
```yaml
my_tool_package.tools.my_tool_1.my_tool:
function: my_tool
inputs:
connection:
type:
- CustomConnection
input_text:
type:
- string
module: my_tool_package.tools.my_tool_1
name: My First Tool
description: This is my first tool
type: python
# Category and tags are shown as below.
category: test_tool
tags:
tag1: value1
tag2: value2
```
## Tool with category and tags experience in VS Code extension
Follow the [steps](create-and-use-tool-package.md#use-your-tool-from-vscode-extension) to use your tool via the VS Code extension.
- Experience in the tool tree
![category_and_tags_in_tool_tree](../../media/how-to-guides/develop-a-tool/category_and_tags_in_tool_tree.png)
- Experience in the tool list
By clicking `More` in the visual editor, you can view your tools along with their category and tags:
![category_and_tags_in_tool_list](../../media/how-to-guides/develop-a-tool/category_and_tags_in_tool_list.png)
Furthermore, you have the option to search or filter tools based on tags:
![filter_tools_by_tag](../../media/how-to-guides/develop-a-tool/filter_tools_by_tag.png)
# Customizing an LLM Tool
In this document, we will guide you through the process of customizing an LLM tool, allowing users to seamlessly connect to a large language model with prompt tuning experience using a `PromptTemplate`.
## Prerequisites
- Please ensure that your [Prompt flow for VS Code](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow) is updated to version 1.2.0 or later.
## How to customize an LLM tool
Here we use [an existing tool package](https://github.com/microsoft/promptflow/tree/main/examples/tools/tool-package-quickstart/my_tool_package) as an example. If you want to create your own tool, please refer to [create and use tool package](create-and-use-tool-package.md).
1. Develop the tool code as in [this example](https://github.com/microsoft/promptflow/blob/main/examples/tools/tool-package-quickstart/my_tool_package/tools/tool_with_custom_llm_type.py).
- Add a `CustomConnection` input to the tool, which is used to authenticate and establish a connection to the large language model.
- Add a `PromptTemplate` input to the tool, which serves as an argument to be passed into the large language model.
```python
from jinja2 import Template
from promptflow import tool
from promptflow.connections import CustomConnection
from promptflow.contracts.types import PromptTemplate
@tool
def my_tool(connection: CustomConnection, prompt: PromptTemplate, **kwargs) -> str:
# Customize your own code to use the connection and prompt here.
rendered_prompt = Template(prompt, trim_blocks=True, keep_trailing_newline=True).render(**kwargs)
return rendered_prompt
```
2. Generate the custom LLM tool YAML.
Run the command below in your tool project directory to automatically generate your tool YAML; use _-t "custom_llm"_ or _--tool-type "custom_llm"_ to indicate this is a custom LLM tool:
```
python <promptflow github repo>\scripts\tool\generate_package_tool_meta.py -m <tool_module> -o <tool_yaml_path> -t "custom_llm"
```
Here we use [an existing tool](https://github.com/microsoft/promptflow/blob/main/examples/tools/tool-package-quickstart/my_tool_package/yamls/tool_with_custom_llm_type.yaml) as an example.
```
cd D:\proj\github\promptflow\examples\tools\tool-package-quickstart
python D:\proj\github\promptflow\scripts\tool\generate_package_tool_meta.py -m my_tool_package.tools.tool_with_custom_llm_type -o my_tool_package\yamls\tool_with_custom_llm_type.yaml -n "My Custom LLM Tool" -d "This is a tool to demonstrate how to customize an LLM tool with a PromptTemplate." -t "custom_llm"
```
This command will generate a YAML file as follows:
```yaml
my_tool_package.tools.tool_with_custom_llm_type.my_tool:
name: My Custom LLM Tool
description: This is a tool to demonstrate how to customize an LLM tool with a PromptTemplate.
# The type is custom_llm.
type: custom_llm
module: my_tool_package.tools.tool_with_custom_llm_type
function: my_tool
inputs:
connection:
type:
- CustomConnection
```
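The `PromptTemplate` input maps to an ordinary Jinja2 file in the flow folder. Any variables the template references surface as extra node inputs and are passed into the tool as `**kwargs`. For example, a template like the sketch below (variable names are illustrative):
```jinja
Write a {{tone}} summary of the following text:

{{text}}
```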
## Use the tool in VS Code
Follow the steps to [build and install your tool package](create-and-use-tool-package.md#build-and-share-the-tool-package) and [use your tool from VS Code extension](create-and-use-tool-package.md#use-your-tool-from-vscode-extension).
Here we use an existing flow to demonstrate the experience, open [this flow](https://github.com/microsoft/promptflow/blob/main/examples/tools/use-cases/custom_llm_tool_showcase/flow.dag.yaml) in VS Code extension.
- There is a node named "my_custom_llm_tool" with a prompt template file. You can either use an existing file or create a new one as the prompt template file.
![use_my_custom_llm_tool](../../media/how-to-guides/develop-a-tool/use_my_custom_llm_tool.png)
# Use column mapping
In this document, we will introduce how to map inputs with column mapping when running a flow.
## Column mapping introduction
Column mapping is a mapping from flow input name to specified values.
If specified, the flow will be executed with the provided values for the specified inputs.
The following types of values in column mapping are supported:
- `${data.<column_name>}` to reference from your test dataset.
- `${run.outputs.<output_name>}` to reference from the referenced run's output. **Note**: this is only supported when `--run` is provided for `pf run`.
- `STATIC_VALUE` to create a static value for all lines for the specified column.
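For example, a single mapping can mix all three kinds of values (the field names here are illustrative):
```yaml
column_mapping:
  url: ${data.url}                      # from the test dataset
  prediction: ${run.outputs.category}   # from the referenced run's output
  model: my-static-model-name           # static value applied to every line
```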
## Flow inputs override priority
Flow input values are overridden according to the following priority:
"specified in column mapping" > "default value" > "same name column in provided data".
For example, if we have a flow with following inputs:
```yaml
inputs:
input1:
type: string
default: "default_val1"
input2:
type: string
default: "default_val2"
input3:
type: string
input4:
type: string
...
```
And the flow will return each input in its outputs.
With the following data
```json
{"input3": "val3_in_data", "input4": "val4_in_data"}
```
And use the following YAML to run
```yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
flow: path/to/flow
# the flow has default value "default_val2" for input2
data: path/to/data
# the data file has columns input3 and input4
column_mapping:
input1: "val1_in_column_mapping"
input3: ${data.input3}
```
Since the flow returns each input in its output, we can get the actual inputs from the `outputs.output` field in the run details:
![column_mapping_details](../../media/column_mapping_details.png)
- Input "input1" has value "val1_in_column_mapping" since it's specified as constance in `column_mapping`.
- Input "input2" has value "default_val2" since it used default value in flow dag.
- Input "input3" has value "val3_in_data" since it's specified as data reference in `column_mapping`.
- Input "input4" has value "val4_in_data" since it has same name column in provided data.
# Run and evaluate a flow
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../faq.md#stable-vs-experimental).
:::
After you have developed and tested the flow in [init and test a flow](../init-and-test-a-flow.md), this guide will help you learn how to run a flow with a larger dataset and then evaluate the flow you have created.
## Create a batch run
Since you have run your flow successfully with a small set of data, you might want to test whether it performs well on a larger set of data; you can run a batch test and check the outputs.
A bulk test allows you to run your flow with a large dataset and generate outputs for each data row. The run results are recorded in a local database, so you can use [pf commands](../../reference/pf-command-reference.md) to view the run results at any time (e.g. `pf run list`).
Let's create a run with the flow [web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification). It is a flow demonstrating multi-class classification with an LLM. Given a URL, it will classify the URL into one web category with just a few shots, simple summarization, and classification prompts.
To begin with the guide, you need:
- Git clone the sample repository (the flow link above) and set the working directory to `<path-to-the-sample-repo>/examples/flows/`.
- Make sure you have already created the necessary connection following [Create necessary connections](../quick-start.md#create-necessary-connections).
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Create the run with flow and data; you can add `--stream` to stream the run.
```sh
pf run create --flow standard/web-classification --data standard/web-classification/data.jsonl --column-mapping url='${data.url}' --stream
```
Note `column-mapping` is a mapping from flow input name to specified values, see more details in [Use column mapping](https://aka.ms/pf/column-mapping).
You can also name the run by specifying `--name my_first_run` in above command, otherwise the run name will be generated in a certain pattern which has timestamp inside.
![q_0](../../media/how-to-guides/quick-start/flow-run-create-output-cli.png)
With a run name, you can easily view or visualize the run details using below commands:
```sh
pf run show-details -n my_first_run
```
![q_0](../../media/how-to-guides/quick-start/flow-run-show-details-output-cli.png)
```sh
pf run visualize -n my_first_run
```
![q_0](../../media/how-to-guides/quick-start/flow-run-visualize-single-run.png)
More details can be found with `pf run --help`
:::
:::{tab-item} SDK
:sync: SDK
```python
from promptflow import PFClient
# Please protect the entry point by using `if __name__ == '__main__':`,
# otherwise it would cause unintended side effect when promptflow spawn worker processes.
# Ref: https://docs.python.org/3/library/multiprocessing.html#the-spawn-and-forkserver-start-methods
if __name__ == "__main__":
# PFClient can help manage your runs and connections.
pf = PFClient()
# Set flow path and run input data
flow = "standard/web-classification" # set the flow directory
data= "standard/web-classification/data.jsonl" # set the data file
# create a run, stream it until it's finished
base_run = pf.run(
flow=flow,
data=data,
stream=True,
# map the url field from the data to the url input of the flow
column_mapping={"url": "${data.url}"},
)
```
![q_0](../../media/how-to-guides/quick-start/flow-run-create-with-stream-output-sdk.png)
```python
# get the inputs/outputs details of a finished run.
details = pf.get_details(base_run)
details.head(10)
```
![q_0](../../media/how-to-guides/quick-start/flow-run-show-details-output-sdk.png)
```python
# visualize the run in a web browser
pf.visualize(base_run)
```
![q_0](../../media/how-to-guides/quick-start/flow-run-visualize-single-run.png)
Feel free to check [Promptflow Python Library Reference](../../reference/python-library-reference/promptflow.md) for all SDK public interfaces.
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
Use the code lens action on the top of the yaml editor to trigger batch run
![dag_yaml_flow_test](../../media/how-to-guides/quick-start/batch_run_dag_yaml.png)
Click the bulk test button on the top of the visual editor to trigger a batch run.
![visual_editor_flow_test](../../media/how-to-guides/quick-start/bulk_run_visual_editor.png)
:::
::::
We also have a more detailed documentation [Manage runs](../manage-runs.md) demonstrating how to manage your finished runs with CLI, SDK and VS Code Extension.
## Evaluate your flow
You can use an evaluation method to evaluate your flow. The evaluation methods are also flows, which use Python, LLMs, etc. to calculate metrics like accuracy or relevance score. Please refer to [Develop evaluation flow](../develop-a-flow/develop-evaluation-flow.md) to learn how to develop an evaluation flow.
In this guide, we use the [eval-classification-accuracy](https://github.com/microsoft/promptflow/tree/main/examples/flows/evaluation/eval-classification-accuracy) flow to evaluate. This is a flow illustrating how to evaluate the performance of a classification system. It involves comparing each prediction to the groundtruth, assigning a `Correct` or `Incorrect` grade, and aggregating the results to produce metrics such as `accuracy`, which reflects how good the system is at classifying the data.
### Run evaluation flow against run
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
**Evaluate the finished flow run**
After the run is finished, you can evaluate the run with the command below. Compared with the normal run create command, note there are two extra arguments:
- `column-mapping`: A mapping from flow input name to specified data values. Reference [here](https://aka.ms/pf/column-mapping) for detailed information.
- `run`: The run name of the flow run to be evaluated.
More details can be found in [Use column mapping](https://aka.ms/pf/column-mapping).
```sh
pf run create --flow evaluation/eval-classification-accuracy --data standard/web-classification/data.jsonl --column-mapping groundtruth='${data.answer}' prediction='${run.outputs.category}' --run my_first_run --stream
```
Same as the previous run, you can specify the evaluation run name with `--name my_first_eval_run` in above command.
You can also stream or view the run details with:
```sh
pf run stream -n my_first_eval_run # same as "--stream" in command "run create"
pf run show-details -n my_first_eval_run
pf run show-metrics -n my_first_eval_run
```
Since now you have two different runs `my_first_run` and `my_first_eval_run`, you can visualize the two runs at the same time with below command.
```sh
pf run visualize -n "my_first_run,my_first_eval_run"
```
A web browser will be opened to show the visualization result.
![q_0](../../media/how-to-guides/run_visualize.png)
:::
:::{tab-item} SDK
:sync: SDK
**Evaluate the finished flow run**
After the run is finished, you can evaluate the run with the code below. Compared with the normal run create, note there are two extra arguments:
- `column-mapping`: A dictionary represents sources of the input data that are needed for the evaluation method. The sources can be from the flow run output or from your test dataset.
- If the data column is in your test dataset, then it is specified as `${data.<column_name>}`.
- If the data column is from your flow output, then it is specified as `${run.outputs.<output_name>}`.
- `run`: The run name or run instance of the flow run to be evaluated.
More details can be found in [Use column mapping](https://aka.ms/pf/column-mapping).
```python
import json

from promptflow import PFClient
# PFClient can help manage your runs and connections.
pf = PFClient()
# set eval flow path
eval_flow = "evaluation/eval-classification-accuracy"
data= "standard/web-classification/data.jsonl"
# run the flow with existing run
eval_run = pf.run(
flow=eval_flow,
data=data,
run=base_run,
column_mapping={ # map the url field from the data to the url input of the flow
"groundtruth": "${data.answer}",
"prediction": "${run.outputs.category}",
}
)
# stream the run until it's finished
pf.stream(eval_run)
# get the inputs/outputs details of a finished run.
details = pf.get_details(eval_run)
details.head(10)
# view the metrics of the eval run
metrics = pf.get_metrics(eval_run)
print(json.dumps(metrics, indent=4))
# visualize both the base run and the eval run
pf.visualize([base_run, eval_run])
```
A web browser will be opened to show the visualization result.
![q_0](../../media/how-to-guides/run_visualize.png)
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
There are actions to trigger local batch runs. To perform an evaluation, you can use the run against "existing runs" actions.
![img](../../media/how-to-guides/vscode_against_run.png)
![img](../../media/how-to-guides/vscode_against_run_2.png)
:::
::::
## Next steps
Learn more about:
- [Tune prompts with variants](../tune-prompts-with-variants.md)
- [Deploy a flow](../deploy-a-flow/index.md)
- [Manage runs](../manage-runs.md)
- [Python library reference](../../reference/python-library-reference/promptflow.md)
```{toctree}
:maxdepth: 1
:hidden:
use-column-mapping
```
# Referencing external files/folders in a flow
Sometimes, pre-existing code assets are essential for the flow reference. In most cases, you can accomplish this by importing a Python package into your flow. However, if a Python package is not available or creating one would be too heavyweight, you can still reference external files or folders located outside of the current flow folder by using our **additional includes** feature in your flow configuration.
This feature provides an efficient mechanism to list relative file or folder paths that are outside of the flow folder, integrating them seamlessly into your flow.dag.yaml. For example:
```yaml
additional_includes:
- ../web-classification/classify_with_llm.jinja2
- ../web-classification/convert_to_dict.py
- ../web-classification/fetch_text_content_from_url.py
- ../web-classification/prepare_examples.py
- ../web-classification/summarize_text_content.jinja2
- ../web-classification/summarize_text_content__variant_1.jinja2
```
You can add this field `additional_includes` into the flow.dag.yaml. The value of this field is a list of the **relative file/folder path** to the flow folder.
Just as with the common definition of the tool node entry, you can define the tool node entry in the flow.dag.yaml using only the file name, eliminating the need to specify the relative path again. For example:
```yaml
nodes:
- name: fetch_text_content_from_url
type: python
source:
type: code
path: fetch_text_content_from_url.py
inputs:
url: ${inputs.url}
- name: summarize_text_content
use_variants: true
- name: prepare_examples
type: python
source:
type: code
path: prepare_examples.py
inputs: {}
```
The entry file "fetch_text_content_from_url.py" of the tool node "fetch_text_content_from_url" is located in "../web-classification/fetch_text_content_from_url.py", as specified in the additional_includes field. The same applies to the "summarize_text_content" tool node.
> **Note**:
>
> 1. If you have two files with the same name located in different folders specified in the `additional_includes` field, and the file name is also specified as the entry of a tool node, the system will reference the **last one** it encounters in the `additional_includes` field.
> 2. If you have a file in the flow folder with the same name as a file specified in the `additional_includes` field, the system will prioritize the file listed in the `additional_includes` field.
Take the following YAML structure as an example:
```yaml
additional_includes:
- ../web-classification/prepare_examples.py
- ../tmp/prepare_examples.py
...
nodes:
- name: summarize_text_content
use_variants: true
- name: prepare_examples
type: python
source:
type: code
path: prepare_examples.py
inputs: {}
```
In this case, the system will use "../tmp/prepare_examples.py" as the entry file for the tool node "prepare_examples". Even if there is a file named "prepare_examples.py" in the flow folder, the system will still use the file "../tmp/prepare_examples.py" specified in the `additional_includes` field.
> Tips:
> The additional includes feature can significantly streamline your workflow by eliminating the need to manually handle these references.
> 1. To get a hands-on experience with this feature, practice with our sample [flow-with-additional-includes](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/flow-with-additional-includes).
> 1. You can learn more about [How the 'additional includes' flow operates during the transition to the cloud](../../cloud/azureai/quick-start.md#run-snapshot-of-the-flow-with-additional-includes).
`promptflow_repo/promptflow/docs/how-to-guides/develop-a-flow/index.md`

# Develop a flow
In this section, we provide guides on how to develop a flow by writing a flow YAML from scratch.
```{toctree}
:maxdepth: 1
:hidden:
develop-standard-flow
develop-chat-flow
develop-evaluation-flow
referencing-external-files-or-folders-in-a-flow
```
`promptflow_repo/promptflow/docs/how-to-guides/develop-a-flow/develop-standard-flow.md`

# Develop standard flow
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../faq.md#stable-vs-experimental).
:::
This document shows how to develop a standard flow by writing a flow YAML from scratch. You can
find additional information about the flow YAML schema in [Flow YAML Schema](../../reference/flow-yaml-schema-reference.md).
## Flow input data
The flow input data is the data that you want to process in your flow.
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
You can add a flow input in the `inputs` section of the flow YAML.
```yaml
inputs:
url:
type: string
default: https://www.microsoft.com/en-us/d/xbox-wireless-controller-stellar-shift-special-edition/94fbjc7h0h6h
```
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
When unfolding the Inputs section on the authoring page, you can set and view your flow inputs, including the input schema (name and type)
and the input value.
![flow_input](../../media/how-to-guides/develop-standard-flow/flow_input.png)
:::
::::
For the Web Classification sample shown in the screenshot above, the flow input is a URL of string type.
For more input types in a Python tool, please refer to [Input types](../../reference/tools-reference/python-tool.md#types).
## Develop the flow using different tools
In one flow, you can consume different kinds of tools. We now support built-in tools like
[LLM](../../reference/tools-reference/llm-tool.md), [Python](../../reference/tools-reference/python-tool.md) and
[Prompt](../../reference/tools-reference/prompt-tool.md), and
third-party tools like [Serp API](../../reference/tools-reference/serp-api-tool.md),
[Vector Search](../../reference/tools-reference/vector_db_lookup_tool.md), etc.
### Add tools as needed
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
You can add a tool node in the `nodes` section of the flow YAML. For example, the YAML below shows how to add a Python tool node to the flow.
```yaml
nodes:
- name: fetch_text_content_from_url
type: python
source:
type: code
path: fetch_text_content_from_url.py
inputs:
url: ${inputs.url}
```
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
By selecting the tool card at the very top, you'll add a new tool node to the flow.
![add_tool](../../media/how-to-guides/develop-standard-flow/add_tool.png)
:::
::::
### Edit tool
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
You can edit the tool by simply opening the source file and making edits. For example, we provide a simple Python tool below.
```python
from promptflow import tool
# The inputs section will change based on the arguments of the tool function, after you save the code
# Adding type to arguments and return value will help the system show the types properly
# Please update the function name/signature per need
@tool
def my_python_tool(input1: str) -> str:
return 'hello ' + input1
```
We also provide an LLM tool prompt below.
```jinja
Please summarize the following text in one paragraph. 100 words.
Do not add any information that is not in the text.
Text: {{text}}
Summary:
```
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
When a new tool node is added to the flow, it will be appended at the bottom of the flatten view with a random name by default.
At the top of each tool node card, there's a toolbar for adjusting the tool node: you can move it up or down, and you can delete or rename it too.
For a Python tool node, you can edit the tool code by clicking the code file. For an LLM tool node, you can edit the
tool prompt by clicking the prompt file and adjust input parameters like connection, API, etc.
![edit_tool](../../media/how-to-guides/develop-standard-flow/edit_tool.png)
:::
::::
### Create connection
Please refer to the [Create necessary connections](../quick-start.md#create-necessary-connections) for details.
## Chain your flow - link nodes together
Before linking nodes together, you need to define and expose an interface.
### Define LLM node interface
An LLM node has only one output: the completion given by the LLM provider.
As for inputs, we offer a templating strategy that can help you create parametric prompts that accept different input
values. Instead of fixed text, enclose your input name in `{{}}`, so it can be replaced on the fly. We use Jinja as our
templating language. For example:
```jinja
Your task is to classify a given url into one of the following types:
Movie, App, Academic, Channel, Profile, PDF or None based on the text content information.
The classification will be based on the url, the webpage text content summary, or both.
Here are a few examples:
{% for ex in examples %}
URL: {{ex.url}}
Text content: {{ex.text_content}}
OUTPUT:
{"category": "{{ex.category}}", "evidence": "{{ex.evidence}}"}
{% endfor %}
For a given URL : {{url}}, and text content: {{text_content}}.
Classify above url to complete the category and indicate evidence.
OUTPUT:
```
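To make the substitution concrete, below is a minimal sketch that renders such a template with the `jinja2` package directly. Prompt flow performs the equivalent rendering for you at runtime; the input values here are purely illustrative:

```python
from jinja2 import Template

# Illustrative values only; in a real flow these come from flow inputs
# and upstream node outputs.
examples = [
    {"url": "https://arxiv.org/abs/2303.04671", "text_content": "A research paper about ...",
     "category": "Academic", "evidence": "Both"},
]

template = Template(
    "URL: {{url}}\n"
    "{% for ex in examples %}Example category: {{ex.category}}\n{% endfor %}"
)

# The rendered string is the actual prompt text sent to the LLM.
print(template.render(url="https://www.microsoft.com", examples=examples))
```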
### Define Python node interface
A Python node might have multiple inputs and outputs. Define inputs and outputs as shown below.
If you have multiple outputs, remember to make it a dictionary so that the downstream node can call each key separately.
For example:
```python
import json
from promptflow import tool
@tool
def convert_to_dict(input_str: str, input_str2: str) -> dict:
try:
print(input_str2)
return json.loads(input_str)
except Exception as e:
print("input is not valid, error: {}".format(e))
return {"category": "None", "evidence": "None"}
```
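Because a tool is a plain Python function, you can sanity-check it by calling it directly, for example in a quick local test (the argument values below are just illustrative):

```python
# Valid JSON is parsed into a dictionary; downstream nodes can reference each key.
result = convert_to_dict('{"category": "App", "evidence": "URL"}', "debug info")
assert result == {"category": "App", "evidence": "URL"}

# Invalid JSON falls back to the default dictionary.
assert convert_to_dict("not json", "debug info") == {"category": "None", "evidence": "None"}
```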
### Link nodes together
After the interface is defined, you can use:
- ${inputs.key} to link with flow input.
- ${upstream_node_name.output} to link with single-output upstream node.
- ${upstream_node_name.output.key} to link with multi-output upstream node.
Below are common scenarios for linking nodes together.
### Scenario 1 - Link LLM node with flow input and single-output upstream node
After you add a new LLM node and edit the prompt file as described in [Define LLM node interface](#define-llm-node-interface),
three inputs called `url`, `examples` and `text_content` are created in the inputs section.
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
You can link the LLM node input with flow input by `${inputs.url}`.
And you can link `examples` to the upstream `prepare_examples` node and `text_content` to the `summarize_text_content` node
by `${prepare_examples.output}` and `${summarize_text_content.output}`.
```yaml
- name: classify_with_llm
type: llm
source:
type: code
path: classify_with_llm.jinja2
inputs:
deployment_name: text-davinci-003
suffix: ""
max_tokens: 128
temperature: 0.2
top_p: 1
echo: false
presence_penalty: 0
frequency_penalty: 0
best_of: 1
url: ${inputs.url} # Link with flow input
examples: ${prepare_examples.output} # Link LLM node with single-output upstream node
text_content: ${summarize_text_content.output} # Link LLM node with single-output upstream node
```
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
In the value drop-down, select `${inputs.url}`, `${prepare_examples.output}` and `${summarize_text_content.output}`, then
you'll see in the graph view that the newly created LLM node is linked to the flow input, upstream `prepare_examples` and `summarize_text_content` node.
![link_llm_with_flow_input_single_output_node](../../media/how-to-guides/develop-standard-flow/link_llm_with_flow_input_single_output_node.png)
:::
::::
When running the flow, the `url` input of the node will be replaced by flow input on the fly, and the `examples` and
`text_content` input of the node will be replaced by `prepare_examples` and `summarize_text_content` node output on the fly.
### Scenario 2 - Link LLM node with multi-output upstream node
Suppose we want to link the newly created LLM node with the `convert_to_dict` Python node, whose output is a dictionary with two keys: `category` and `evidence`.
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
You can link `text_content` to the `evidence` output of the upstream `convert_to_dict` node by `${convert_to_dict.output.evidence}`, like below:
```yaml
- name: classify_with_llm
type: llm
source:
type: code
path: classify_with_llm.jinja2
inputs:
deployment_name: text-davinci-003
suffix: ""
max_tokens: 128
temperature: 0.2
top_p: 1
echo: false
presence_penalty: 0
frequency_penalty: 0
best_of: 1
text_content: ${convert_to_dict.output.evidence} # Link LLM node with multi-output upstream node
```
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
In the value drop-down, select `${convert_to_dict.output}`, then manually append `evidence`. You'll then see in the graph
view that the newly created LLM node is linked to the upstream `convert_to_dict` node.
![link_llm_with_multi_output_node](../../media/how-to-guides/develop-standard-flow/link_llm_with_multi_output_node.png)
:::
::::
When running the flow, the `text_content` input of the node will be replaced by the `evidence` value from the `convert_to_dict` node's output dictionary on the fly.
### Scenario 3 - Link Python node with upstream node/flow input
After you add a new Python node and edit the code file as described in [Define Python node interface](#define-python-node-interface),
two inputs called `input_str` and `input_str2` are created in the inputs section. The linkage is the same as for the LLM node,
using `${inputs.input_name}` to link with a flow input or `${upstream_node_name.output}` to link with an upstream node.
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
```yaml
- name: prepare_examples
type: python
source:
type: code
path: prepare_examples.py
inputs:
input_str: ${inputs.url} # Link Python node with flow input
input_str2: ${fetch_text_content_from_url.output} # Link Python node with single-output upstream node
```
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
![link_python_with_flow_node_input](../../media/how-to-guides/develop-standard-flow/link_python_with_flow_node_input.png)
:::
::::
When running the flow, the `input_str` input of the node will be replaced by the flow input on the fly, and the `input_str2`
input of the node will be replaced by the `fetch_text_content_from_url` node's output on the fly.
## Set flow output
When the flow is complicated, instead of checking outputs on each node, you can set flow output and check outputs of
multiple nodes in one place. Moreover, flow output helps:
- Check bulk test results in one single table.
- Define evaluation interface mapping.
- Set deployment response schema.
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
You can add flow outputs in the `outputs` section of the flow YAML. The linkage is the same as for the LLM node,
using `${convert_to_dict.output.category}` to link the `category` flow output with the `category` value of the upstream node
`convert_to_dict`.
```yaml
outputs:
category:
type: string
reference: ${convert_to_dict.output.category}
evidence:
type: string
reference: ${convert_to_dict.output.evidence}
```
:::
:::{tab-item} VS Code Extension
:sync: VS Code Extension
First define the flow output schema, then select in the drop-down the node whose output you want to set as a flow output.
Since `convert_to_dict` has a dictionary output with two keys, `category` and `evidence`, you need to manually append
`category` and `evidence` to each reference. Then run the flow; after a while, you can check the flow output in a table.
![flow_output](../../media/how-to-guides/develop-standard-flow/flow_output.png)
:::
::::
`promptflow_repo/promptflow/docs/how-to-guides/develop-a-flow/develop-chat-flow.md`

# Develop chat flow
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../faq.md#stable-vs-experimental).
:::
This document shows how to develop a chat flow by writing a flow YAML from scratch. You can
find additional information about the flow YAML schema in [Flow YAML Schema](../../reference/flow-yaml-schema-reference.md).
## Flow input data
The most important elements that differentiate a chat flow from a standard flow are **chat input** and **chat history**. A chat flow can have multiple inputs, but **chat history** and **chat input** are required inputs in a chat flow.
- **Chat Input**: Chat input refers to the messages or queries submitted by users to the chatbot. Effectively handling chat input is crucial for a successful conversation, as it involves understanding user intentions, extracting relevant information, and triggering appropriate responses.
- **Chat History**: Chat history is the record of all interactions between the user and the chatbot, including both user inputs and AI-generated outputs. Maintaining chat history is essential for keeping track of the conversation context and ensuring the AI can generate contextually relevant responses. Chat history is a special type of chat flow input that stores chat messages in a structured format.
An example of chat history:
```python
[
{"inputs": {"question": "What types of container software there are?"}, "outputs": {"answer": "There are several types of container software available, including: Docker, Kubernetes"}},
{"inputs": {"question": "What's the different between them?"}, "outputs": {"answer": "The main difference between the various container software systems is their functionality and purpose. Here are some key differences between them..."}},
]
```
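A chat prompt typically iterates over this structure to rebuild the conversation. As a rough illustration of the shape involved, the hypothetical helper below (not part of the promptflow SDK) flattens the chat history plus the current question into role/content messages:

```python
def build_messages(chat_history: list, question: str) -> list:
    """Hypothetical helper: flatten prompt flow chat history into chat messages.

    In a chat flow you would normally express this logic inside the LLM
    node's Jinja prompt instead.
    """
    messages = [{"role": "system", "content": "You are a helpful assistant."}]
    for turn in chat_history:
        messages.append({"role": "user", "content": turn["inputs"]["question"]})
        messages.append({"role": "assistant", "content": turn["outputs"]["answer"]})
    messages.append({"role": "user", "content": question})
    return messages
```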
You can set **is_chat_input**/**is_chat_history** to **true** to add chat_input/chat_history to the chat flow.
```yaml
inputs:
chat_history:
type: list
is_chat_history: true
default: []
question:
type: string
is_chat_input: true
default: What is ChatGPT?
```
For more information see [flow input data](./develop-standard-flow.md#flow-input-data).
## Develop the flow using different tools
In one flow, you can consume different kinds of tools. We now support built-in tool like
[LLM](../../reference/tools-reference/llm-tool.md), [Python](../../reference/tools-reference/python-tool.md) and
[Prompt](../../reference/tools-reference/prompt-tool.md) and
third-party tool like [Serp API](../../reference/tools-reference/serp-api-tool.md),
[Vector Search](../../reference/tools-reference/vector_db_lookup_tool.md), etc.
For more information see [develop the flow using different tools](./develop-standard-flow.md#develop-the-flow-using-different-tools).
## Chain your flow - link nodes together
Before linking nodes together, you need to define and expose an interface.
For more information see [chain your flow](./develop-standard-flow.md#chain-your-flow---link-nodes-together).
## Set flow output
**Chat output** is a required output in the chat flow. It refers to the AI-generated messages that are sent to the user in response to their inputs. Generating contextually appropriate and engaging chat outputs is vital for a positive user experience.
You can set **is_chat_output** to **true** to add chat_output to the chat flow.
```yaml
outputs:
answer:
type: string
reference: ${chat.output}
is_chat_output: true
```
`promptflow_repo/promptflow/docs/how-to-guides/develop-a-flow/develop-evaluation-flow.md`

# Develop evaluation flow
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../faq.md#stable-vs-experimental).
:::
An evaluation flow is a flow used to test/evaluate the quality of your LLM application (standard/chat flow). It usually runs on the outputs of a standard/chat flow and computes key metrics that can be used to determine whether the standard/chat flow performs well. See [Flows](../../concepts/concept-flows.md) for more information.
Before proceeding with this document, it is important to have a good understanding of the standard flow. Please make sure you have read [Develop standard flow](./develop-standard-flow.md), since they share many common features and these features won't be repeated in this doc, such as:
- `Inputs/Outputs definition`
- `Nodes`
- `Chain nodes in a flow`
While the evaluation flow shares similarities with the standard flow, there are some important differences that set it apart. The main distinctions are as follows:
- `Inputs from an existing run`: The evaluation flow contains inputs that are derived from the outputs of the standard/chat flow. These inputs are used for evaluation purposes.
- `Aggregation node`: The evaluation flow contains one or more aggregation nodes, where the actual evaluation takes place. These nodes are responsible for computing metrics and determining the performance of the standard/chat flow.
## Evaluation flow example
In this guide, we use [eval-classification-accuracy](https://github.com/microsoft/promptflow/tree/main/examples/flows/evaluation/eval-classification-accuracy) flow as an example of the evaluation flow. This is a flow illustrating how to evaluate the performance of a classification flow. It involves comparing each prediction to the groundtruth, assigning a `Correct` or `Incorrect` grade, and aggregating the results to produce metrics such as `accuracy`, which reflects how good the system is at classifying the data.
## Flow inputs
The flow `eval-classification-accuracy` contains two inputs:
```yaml
inputs:
groundtruth:
type: string
description: Groundtruth of the original question, it's the correct label that you hope your standard flow could predict.
default: APP
prediction:
type: string
description: The actual predicted outputs that your flow produces.
default: APP
```
As evident from the inputs description, the evaluation flow requires two specific inputs:
- `groundtruth`: This input represents the actual or expected values against which the performance of the standard/chat flow will be evaluated.
- `prediction`: The prediction input is derived from the outputs of another standard/chat flow. It contains the predicted values generated by the standard/chat flow, which will be compared to the groundtruth values during the evaluation process.
From the definition perspective, there is no difference compared with adding an input/output in a `standard/chat flow`. However, when running an evaluation flow, you may need to specify data sources from both the data file and the flow run outputs. For more details please refer to [Run and evaluate a flow](../run-and-evaluate-a-flow/index.md#evaluate-your-flow).
## Aggregation node
Before introducing the aggregation node, let's see what a regular node looks like; we use the node `grade` in the example flow as an instance:
```yaml
- name: grade
type: python
source:
type: code
path: grade.py
inputs:
groundtruth: ${inputs.groundtruth}
prediction: ${inputs.prediction}
```
It takes both `groundtruth` and `prediction` from the flow inputs and compares them in the source code to see if they match:
```python
from promptflow import tool
@tool
def grade(groundtruth: str, prediction: str):
return "Correct" if groundtruth.lower() == prediction.lower() else "Incorrect"
```
When it comes to an `aggregation node`, there are two key distinctions that set it apart from a regular node:
1. It has an attribute `aggregation` set to be `true`.
```yaml
- name: calculate_accuracy
type: python
source:
type: code
path: calculate_accuracy.py
inputs:
grades: ${grade.output}
aggregation: true # Add this attribute to make it an aggregation node
```
2. Its source code accepts a `List` type parameter which is a collection of the previous regular node's outputs.
```python
from typing import List
from promptflow import log_metric, tool
@tool
def calculate_accuracy(grades: List[str]):
result = []
for index in range(len(grades)):
grade = grades[index]
result.append(grade)
# calculate accuracy for each variant
accuracy = round((result.count("Correct") / len(result)), 2)
log_metric("accuracy", accuracy)
return result
```
The parameter `grades` in the above function contains all results produced by the regular node `grade`. Assuming the referred standard flow run has 3 outputs:
```json
{"prediction": "App"}
{"prediction": "Channel"}
{"prediction": "Academic"}
```
And we provide a data file like this:
```json
{"groundtruth": "App"}
{"groundtruth": "Channel"}
{"groundtruth": "Wiki"}
```
Then the `grades` value would be `["Correct", "Correct", "Incorrect"]`, and the final accuracy is `0.67`.
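You can verify the arithmetic directly:

```python
grades = ["Correct", "Correct", "Incorrect"]
accuracy = round(grades.count("Correct") / len(grades), 2)
assert accuracy == 0.67
```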
This example provides a straightforward demonstration of how to evaluate the classification flow. Once you have a solid understanding of the evaluation mechanism, you can customize and design your own evaluation method to suit your specific needs.
### More about the list parameter
What if the number of referred standard flow run outputs does not match the provided data file? We know that a standard flow can be executed against multiple lines of data, and some of them could fail while others succeed. Consider the same standard flow run mentioned in the above example, but suppose the `2nd` line run has failed; we then have the run outputs below:
```json
{"prediction": "App"}
{"prediction": "Academic"}
```
The promptflow executor has the capability to recognize the index of the referred run's outputs and extract the corresponding data from the provided data file. This means that during the execution process, even if the same data file is provided (3 lines), only the specific data mentioned below will be processed:
```json
{"groundtruth": "App"}
{"groundtruth": "Wiki"}
```
In this case, the `grades` value would be `["Correct", "Incorrect"]` and the accuracy is `0.5`.
### How to set aggregation node in VS Code Extension
![img](../../media/how-to-guides/develop-evaluation-flow/set_aggregation_node_in_vscode.png)
## How to log metrics
:::{admonition} Limitation
You can only log metrics in an `aggregation node`, otherwise the metric will be ignored.
:::
Promptflow supports logging and tracking experiments using the `log_metric` function. A metric is a key-value pair that records a single float measure. In a Python node, you can log a metric with the code below:
```python
from typing import List
from promptflow import log_metric, tool
@tool
def example_log_metrics(grades: List[str]):
# this node is an aggregation node so it accepts a list of grades
metric_key = "accuracy"
    metric_value = round((grades.count("Correct") / len(grades)), 2)
log_metric(metric_key, metric_value)
```
After the run is completed, you can run `pf run show-metrics -n <run_name>` to see the metrics.
![img](../../media/how-to-guides/run_show_metrics.png)
`promptflow_repo/promptflow/docs/concepts/concept-flows.md`

While how LLMs work may be elusive to many developers, how LLM apps work is not - they essentially involve a series of calls to external services such as LLMs/databases/search engines, or intermediate data processing, all glued together. Thus LLM apps are merely Directed Acyclic Graphs (DAGs) of function calls. These DAGs are flows in prompt flow.
# Flows
A flow in prompt flow is a DAG of functions (we call them [tools](./concept-tools.md)). These functions/tools are connected via input/output dependencies and executed based on the topology by the prompt flow executor.
A flow is represented as a YAML file and can be visualized with our [Prompt flow for VS Code extension](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow). Here is an example:
![flow_dag](../media/how-to-guides/quick-start/flow_dag.png)
## Flow types
Prompt flow has three flow types:
- **Standard flow** and **Chat flow**: these two are for you to develop your LLM application. The primary difference between the two lies in the additional support provided by the "Chat Flow" for chat applications. For instance, you can define chat_history, chat_input, and chat_output for your flow. The prompt flow, in turn, will offer a chat-like experience (including conversation history) during the development of the flow. Moreover, it also provides a sample chat application for deployment purposes.
- **Evaluation flow** is for you to test/evaluate the quality of your LLM application (standard/chat flow). It usually runs on the outputs of a standard/chat flow and computes metrics that can be used to determine whether the standard/chat flow performs well. E.g. is the answer accurate? Is the answer fact-based?
## When to use standard flow vs. chat flow?
As a general guideline, if you are building a chatbot that needs to maintain conversation history, try chat flow. In most other cases, standard flow should serve your needs.
Our examples should also give you an idea when to use what:
- [examples/flows/standard](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard)
- [examples/flows/chat](https://github.com/microsoft/promptflow/tree/main/examples/flows/chat)
## Next steps
- [Quick start](../how-to-guides/quick-start.md)
- [Initialize and test a flow](../how-to-guides/init-and-test-a-flow.md)
- [Run and evaluate a flow](../how-to-guides/run-and-evaluate-a-flow/index.md)
- [Tune prompts using variants](../how-to-guides/tune-prompts-with-variants.md)
`promptflow_repo/promptflow/docs/concepts/design-principles.md`

# Design principles
When we started this project, [LangChain](https://www.langchain.com/) already became popular esp. after the ChatGPT launch. One of the questions we've been asked is what's the difference between prompt flow and LangChain. This article is to elucidate the reasons for building prompt flow and the deliberate design choices we have made. To put it succinctly, prompt flow is a suite of development tools for you to build LLM apps with a strong emphasis on quality through experimentation, not a framework - which LangChain is.
While LLM apps are mostly in the exploration stage, Microsoft started in this area a bit earlier, and we've had the opportunity to observe how developers are integrating LLMs into existing systems or building new applications. These invaluable insights have shaped the fundamental design principles of prompt flow.
## 1. Expose the prompts vs. hiding them
The core essence of LLM applications lies in the prompts themselves, at least for today. When developing a reasonably complex LLM application, the majority of development work should be “tuning” the prompts (note the intentional use of the term "tuning," which we will delve into further later on). Any framework or tool trying to help in this space should focus on making prompt tuning easier and more straightforward. On the other hand, prompts are very volatile: it's unlikely that a single prompt will work across different models or even different versions of the same model. To build a successful LLM-based application, you have to understand every prompt introduced, so that you can tune it when necessary. LLMs are simply not powerful or deterministic enough that you can use a prompt written by others the way you use libraries in traditional programming languages.
In this context, any design that tries to provide a smart function or agent by encapsulating a few prompts in a library is unlikely to yield favorable results in real-world scenarios. And hiding prompts inside a library's code base only makes it hard for people to improve or tailor the prompts to suit their specific needs.
Prompt flow, being positioned as a tool, refrains from wrapping any prompts within its core codebase. The only place you will see prompts are our sample flows, which are, of course, available for adoption and utilization. Every prompt should be authored and controlled by the developers themselves, rather than relying on us.
## 2. A new way of work
LLMs possess remarkable capabilities that enable developers to enhance their applications without delving deep into the intricacies of machine learning. In the meantime, LLMs make these apps more stochastic, which poses new challenges to application development. Merely asserting "no exception" or "result == x" in gated tests is no longer sufficient. Adopting a new methodology and employing new tools becomes imperative to ensure the quality of LLM applications - an entirely novel way of working is required.
At the center of this paradigm shift is evaluation, a term frequently used in the machine learning space that refers to the process of assessing the performance and quality of a trained model. It involves measuring how well the model performs on a given task or dataset, which plays a pivotal role in understanding the model's strengths, weaknesses, and overall effectiveness. Evaluation metrics and techniques vary depending on the specific task and problem domain. Some common metrics include accuracy, precision, and recall, which you are probably already familiar with. LLM apps share similarities with machine learning models: they require an evaluation-centric approach integrated into the development workflow, with a robust set of metrics and evaluations forming the foundation for ensuring the quality of LLM applications.
Prompt flow offers a range of tools to streamline the new way of work:
* Develop your evaluation program as an Evaluation flow to calculate metrics for your app/flow; learn from our sample evaluation flows.
* Iterate on your application flow and run evaluation flows via the SDK/CLI, allowing you to compare metrics and choose the optimal candidate for release. These iterations include trying different prompts, different LLM parameters like temperature, etc. - this is the “tuning” process referred to earlier, sometimes also called experimentation.
* Integrate the evaluation into your CI/CD pipeline, aligning the assertions in your gated tests with the selected metrics, as sketched below.
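For instance, a gated test in CI might be as simple as the following sketch, where the run name and the 0.9 threshold are assumptions you would choose yourself:

```python
from promptflow import PFClient

def test_flow_accuracy():
    """Fail the build if the evaluation run's accuracy regresses."""
    pf = PFClient()
    metrics = pf.get_metrics("my_eval_run")  # hypothetical run name produced by your pipeline
    assert metrics["accuracy"] >= 0.9
```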
Prompt flow introduces two conceptual components to facilitate this workflow:
* Evaluation flow: a flow type that indicates the flow is not meant to be deployed or integrated into your app; it's for evaluating the performance of an app/flow.
* Run: every time you run your flow with data, or run an evaluation on the output of a flow, a Run object is created to manage the history and allow for comparison and additional analysis.
While new concepts introduce additional cognitive load, we firmly believe they hold greater importance compared to abstracting different LLM APIs or vector database APIs.
## 3. Optimize for “visibility”
There are quite a few interesting application patterns emerging because of LLMs, like Retrieval Augmented Generation (RAG), ReAct and more. Though how LLMs work may remain enigmatic to many developers, how LLM apps work is not - they essentially involve a series of calls to external services such as LLMs, databases, and search engines, all glued together. Architecturally there isn't much new: patterns like RAG and ReAct are both straightforward to implement once a developer understands what they are - plain Python programs with API calls to external services can serve the purpose effectively.
By observing many internal use cases, we learned that deeper insight into the details of the execution is critical. Establishing a systematic method for tracking interactions with external systems is one of our design priorities. Consequently, we adopted an unconventional approach - prompt flow has a YAML file describing how function calls (we call them [Tools](../concepts/concept-tools.md)) are executed and connected into a Directed Acyclic Graph (DAG).
This approach offers several key benefits, primarily centered around **enhanced visibility**:
1) During development, your flow can be visualized in an intelligible manner, enabling clear identification of any faulty components. As a byproduct, you obtain an architecturally descriptive diagram that can be shared with others.
2) Each node in the flow has its internal details visualized in a consistent way.
3) Single nodes can be individually run or debugged without the need to rerun previous nodes.
![promptflow-dag](../media/promptflow-dag.png)
The emphasis on visibility in prompt flow's design helps developers to gain a comprehensive understanding of the intricate details of their applications. This, in turn, empowers developers to engage in effective troubleshooting and optimization.
Although there are some control flow features like "activate-when" to serve the needs of branches/switch-case, we do not intend to make the flow itself Turing-complete. If you want to develop an agent that is fully dynamic and guided by the LLM, leveraging [Semantic Kernel](https://github.com/microsoft/semantic-kernel) together with prompt flow would be a favorable option.
`promptflow_repo/promptflow/docs/concepts/concept-variants.md`

With prompt flow, you can use variants to tune your prompt. In this article, you'll learn the prompt flow variants concept.
# Variants
A variant refers to a specific version of a tool node that has distinct settings. Currently, variants are supported only in the LLM tool. For example, in the LLM tool, a new variant can represent either a different prompt content or different connection settings.
Suppose you want to generate a summary of a news article. You can set different variants of prompts and settings like this:
| Variants | Prompt | Connection settings |
| --------- | ------------------------------------------------------------ | ------------------- |
| Variant 0 | `Summary: {{input sentences}}` | Temperature = 1 |
| Variant 1 | `Summary: {{input sentences}}` | Temperature = 0.7 |
| Variant 2 | `What is the main point of this article? {{input sentences}}` | Temperature = 1 |
| Variant 3 | `What is the main point of this article? {{input sentences}}` | Temperature = 0.7 |
By utilizing different variants of prompts and settings, you can explore how the model responds to various inputs and outputs, enabling you to discover the most suitable combination for your requirements.
## Benefits of using variants
- **Enhance the quality of your LLM generation**: By creating multiple variants of the same LLM node with diverse prompts and configurations, you can identify the optimal combination that produces high-quality content aligned with your needs.
- **Save time and effort**: Even slight modifications to a prompt can yield significantly different results. It's crucial to track and compare the performance of each prompt version. With variants, you can easily manage the historical versions of your LLM nodes, facilitating updates based on any variant without the risk of forgetting previous iterations. This saves you time and effort in managing prompt tuning history.
- **Boost productivity**: Variants streamline the optimization process for LLM nodes, making it simpler to create and manage multiple variations. You can achieve improved results in less time, thereby increasing your overall productivity.
- **Facilitate easy comparison**: You can effortlessly compare the results obtained from different variants side by side, enabling you to make data-driven decisions regarding the variant that generates the best outcomes.
## Next steps
- [Tune prompts with variants](../how-to-guides/tune-prompts-with-variants.md)
`promptflow_repo/promptflow/docs/concepts/index.md`

# Concepts
In this section, you will learn the basic concepts of prompt flow.
```{toctree}
:maxdepth: 1
concept-flows
concept-tools
concept-connections
concept-variants
design-principles
```
`promptflow_repo/promptflow/docs/concepts/concept-tools.md`

Tools are the fundamental building blocks of a [flow](./concept-flows.md).
Each tool is an executable unit, basically a function that performs various tasks, including but not limited to:
- Accessing LLMs for various purposes
- Querying databases
- Getting information from search engines
- Pre/post processing of data
# Tools
Prompt flow provides 3 basic tools:
- [LLM](../reference/tools-reference/llm-tool.md): The LLM tool allows you to write custom prompts and leverage large language models to achieve specific goals, such as summarizing articles, generating customer support responses, and more.
- [Python](../reference/tools-reference/python-tool.md): The Python tool enables you to write custom Python functions to perform various tasks, such as fetching web pages, processing intermediate data, calling third-party APIs, and more.
- [Prompt](../reference/tools-reference/prompt-tool.md): The Prompt tool allows you to prepare a prompt as a string for more complex use cases or for use in conjunction with other prompt tools or python tools.
## More tools
Our partners also contribute other useful tools for advanced scenarios; here are some links:
- [Vector DB Lookup](../reference/tools-reference/vector_db_lookup_tool.md): a vector search tool that allows users to search for the top-k similar vectors in a vector database.
- [Faiss Index Lookup](../reference/tools-reference/faiss_index_lookup_tool.md): querying within a user-provided Faiss-based vector store.
## Custom tools
You can create your own tools that can be shared with your team or anyone in the world.
Learn more on [Create and Use Tool Package](../how-to-guides/develop-a-tool/create-and-use-tool-package.md)
## Next steps
For more information on the available tools and their usage, visit our [reference doc](../reference/index.md).
`promptflow_repo/promptflow/docs/concepts/concept-connections.md`

In prompt flow, you can utilize connections to securely manage credentials or secrets for external services.
# Connections
Connections are for storing information about how to access external services like LLMs: endpoint, api keys etc.
- In your local development environment, the connections are persisted in your local machine with keys encrypted.
- In Azure AI, connections can be configured to be shared across the entire workspace. Secrets associated with connections are securely persisted in the corresponding Azure Key Vault, adhering to robust security and compliance standards.
Prompt flow provides a variety of pre-built connections, including Azure Open AI, Open AI, etc. These pre-built connections enable seamless integration with these resources within the built-in tools. Additionally, you have the flexibility to create custom connection types using key-value pairs, empowering you to tailor connections to your specific requirements, particularly in Python tools.
| Connection type | Built-in tools |
| ------------------------------------------------------------ | ------------------------------- |
| [Azure Open AI](https://azure.microsoft.com/en-us/products/cognitive-services/openai-service) | LLM or Python |
| [Open AI](https://openai.com/) | LLM or Python |
| [Cognitive Search](https://azure.microsoft.com/en-us/products/search) | Vector DB Lookup or Python |
| [Serp](https://serpapi.com/) | Serp API or Python |
| Custom | Python |
By leveraging connections in prompt flow, you can easily establish and manage connections to external APIs and data sources, facilitating efficient data exchange and interaction within your AI applications.
## Next steps
- [Create connections](../how-to-guides/manage-connections.md)
`promptflow_repo/promptflow/docs/integrations/index.md`

# Integrations
The Integrations section contains documentation on custom extensions created by the community that expand prompt flow's capabilities.
These include tools that enrich flows, as well as tutorials on innovative ways to use prompt flow.
```{toctree}
:maxdepth: 1
tools/index
llms/index
```
`promptflow_repo/promptflow/docs/integrations/llms/index.md`

# Alternative LLMs
This section provides tutorials on incorporating alternative large language models into prompt flow.
```{toctree}
:maxdepth: 1
:hidden:
```
`promptflow_repo/promptflow/docs/integrations/tools/index.md`

# Custom Tools
This section contains documentation for custom tools created by the community to extend Prompt flow's capabilities for specific use cases. These tools are developed following the guide on [Creating and Using Tool Packages](../../how-to-guides/develop-a-tool/create-and-use-tool-package.md). They are not officially maintained or endorsed by the Prompt flow team. For questions or issues when using a tool, please use the support contact link in the table below.
## Tool Package Index
The table below provides an index of custom tool packages. The columns contain:
- **Package Name:** The name of the tool package. Links to the package documentation.
- **Description:** A short summary of what the tool package does.
- **Owner:** The creator/maintainer of the tool package.
- **Support Contact:** Link to contact for support and reporting new issues.
| Package Name | Description | Owner | Support Contact |
|-|-|-|-|
| promptflow-azure-ai-language | Collection of Azure AI Language Prompt flow tools. | Sean Murray | [email protected] |
```{toctree}
:maxdepth: 1
:hidden:
azure-ai-language-tool
```
`promptflow_repo/promptflow/docs/integrations/tools/azure-ai-language-tool.md`

# Azure AI Language
Azure AI Language enables users with task-oriented and optimized pre-trained language models to effectively understand documents and conversations. This Prompt flow tool is a wrapper for various Azure AI Language APIs. The current list of supported capabilities is as follows:
| Name | Description |
|-------------------------------------------|-------------------------------------------------------|
| Abstractive Summarization | Generate abstractive summaries from documents. |
| Extractive Summarization | Extract summaries from documents. |
| Conversation Summarization | Summarize conversations. |
| Entity Recognition | Recognize and categorize entities in documents. |
| Key Phrase Extraction | Extract key phrases from documents. |
| Language Detection | Detect the language of documents. |
| PII Entity Recognition | Recognize and redact PII entities in documents. |
| Sentiment Analysis | Analyze the sentiment of documents. |
| Conversational Language Understanding | Predict intents and entities from user's utterances. |
| Translator | Translate documents. |
## Requirements
- For AzureML users:
follow this [wiki](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/how-to-custom-tool-package-creation-and-usage?view=azureml-api-2#prepare-runtime), starting from `Prepare runtime`. Note that the PyPi package name is `promptflow-azure-ai-language`.
- For local users:
```
pip install promptflow-azure-ai-language
```
## Prerequisites
The tool calls APIs from Azure AI Language. To use it, you must create a connection to an [Azure AI Language resource](https://learn.microsoft.com/en-us/azure/ai-services/language-service/). Create a Language resource first, if necessary.
- In Prompt flow, add a new `CustomConnection`.
- Under the `secrets` field, specify the resource's API key: `api_key: <Azure AI Language Resource api key>`
- Under the `configs` field, specify the resource's endpoint: `endpoint: <Azure AI Language Resource endpoint>`
To use the `Translator` tool, you must set up an additional connection to an [Azure AI Translator resource](https://azure.microsoft.com/en-us/products/ai-services/ai-translator). [Create a Translator resource](https://learn.microsoft.com/en-us/azure/ai-services/translator/create-translator-resource) first, if necessary.
- In Prompt flow, add a new `CustomConnection`.
- Under the `secrets` field, specify the resource's API key: `api_key: <Azure AI Translator Resource api key>`
- Under the `configs` field, specify the resource's endpoint: `endpoint: <Azure AI Translator Resource endpoint>`
- If your Translator Resource is regional and non-global, specify its region under `configs` as well: `region: <Azure AI Translator Resource region>`
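Besides creating these connections through the UI, you can create them programmatically. Below is a sketch using the promptflow SDK's `CustomConnection` entity; the connection name is arbitrary, and the key/endpoint values are placeholders for your own resource's values:

```python
from promptflow import PFClient
from promptflow.entities import CustomConnection

pf = PFClient()

connection = CustomConnection(
    name="azure_ai_language_connection",  # arbitrary name, referenced from your flow
    secrets={"api_key": "<Azure AI Language Resource api key>"},
    configs={"endpoint": "<Azure AI Language Resource endpoint>"},
)
pf.connections.create_or_update(connection)
```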
## Inputs
The tool accepts the following inputs:
- **Abstractive Summarization**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| text | string | The input text. | Yes |
| query | string | The query used to structure summarization. | Yes |
| summary_length | string (enum) | The desired summary length. Enum values are `short`, `medium`, and `long`. | No |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
- **Extractive Summarization**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| text | string | The input text. | Yes |
| query | string | The query used to structure summarization. | Yes |
| sentence_count | int | The desired number of output summary sentences. Default value is `3`. | No |
| sort_by | string (enum) | The sorting criteria for extractive summarization results. Enum values are `Offset` to sort results in order of appearance in the text and `Rank` to sort results in order of importance (i.e. rank score) according to model. Default value is `Offset`. | No |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
- **Conversation Summarization**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| text | string | The input text. Text should be of the following form: `<speaker id>: <speaker text> \n <speaker id>: <speaker text> \n ...` | Yes |
| modality | string (enum) | The modality of the input text. Enum values are `text` for input from a text source, and `transcript` for input from a transcript source. | Yes |
| summary_aspect | string (enum) | The desired summary "aspect" to obtain. Enum values are `chapterTitle` to obtain the chapter title of any conversation, `issue` to obtain the summary of issues in transcripts of web chats and service calls between customer-service agents and customers, `narrative` to obtain the generic summary of any conversation, `resolution` to obtain the summary of resolutions in transcripts of web chats and service calls between customer-service agents and customers, `recap` to obtain a general summary, and `follow-up tasks` to obtain a summary of follow-up or action items. | Yes |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
- **Entity Recognition**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| text | string | The input text. | Yes |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
- **Key Phrase Extraction**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| text | string | The input text. | Yes |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
- **Language Detection**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| text | string | The input text. | Yes |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
- **PII Entity Recognition**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| text | string | The input text. | Yes |
| domain | string (enum) | The PII domain used for PII Entity Recognition. Enum values are `none` for no domain, or `phi` to indicate that entities in the Personal Health domain should be redacted. Default value is `none`. | No |
| categories | list[string] | Describes the PII categories to return. Default value is `[]`. | No |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
- **Sentiment Analysis**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| text | string | The input text. | Yes |
| opinion_mining | bool | Should opinion mining be enabled. Default value is `False`. | No |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
- **Conversational Language Understanding**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| utterances | string | A single user utterance or a json array of user utterances. | Yes |
| project_name | string | The Conversational Language Understanding project to be called. | Yes |
| deployment_name | string | The Conversational Language Understanding project deployment to be called. | Yes |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
- **Translator**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Translator resource. | Yes |
| text | string | The input text. | Yes |
| to | list[string] | The languages to translate the input text to. | Yes |
| source_language | string | The language of the input text. | No |
| parse_response | bool | Should the raw API json output be parsed. Default value is `False`. | No |
## Outputs
If the input parameter `parse_response` is set to `False` (default value), the raw API json output will be returned as a string. Refer to the [REST API reference](https://learn.microsoft.com/en-us/rest/api/language/) for details on API output. For Conversational Language Understanding, the output will be a list of raw API json responses, one response for each user utterance in the input.
When `parse_response` is set to `True`, the tool will parse API output as follows:
| Name | Type | Description |
|-------------------------------------------------------------|--------|---------------------|
| Abstractive Summarization | string | Abstractive summary. |
| Extractive Summarization | list[string] | Extracted summary sentence strings. |
| Conversation Summarization | string | Conversation summary based on `summary_aspect`. |
| Entity Recognition | dict[string, string] | Recognized entities, where keys are entity names and values are entity categories. |
| Key Phrase Extraction | list[string] | Extracted key phrases as strings. |
| Language Detection | string | Detected language's ISO 639-1 code. |
| PII Entity Recognition | string | Input `text` with PII entities redacted. |
| Sentiment Analysis | string | Analyzed sentiment: `positive`, `neutral`, or `negative`. |
| Conversational Language Understanding | list[dict[string, string]] | List of user utterances and associated intents. |
| Translator | dict[string, string] | Translated text, where keys are the translated languages and values are the translated texts. |
`promptflow_repo/promptflow/docs/cloud/index.md`

# Cloud
Prompt flow streamlines the process of developing AI applications based on LLMs, easing prompt engineering, prototyping, evaluating, and fine-tuning for high-quality products.

Transitioning to production, however, typically requires a comprehensive **LLMOps** process (LLMOps is short for large language model operations). This can often be a complex task, demanding high availability and security, particularly vital for large-scale team collaboration and lifecycle management when deploying to production.
To assist in this journey, we've introduced **Azure AI**, a **cloud-based platform** tailored for executing LLMOps, focusing on boosting productivity for enterprises.
* Private data access and controls
* Collaborative development
* Automating iterative experimentation and CI/CD
* Deployment and optimization
* Safe and Responsible AI
![img](../media/cloud/azureml/llmops_cloud_value.png)
## Transitioning from local to cloud (Azure AI)
In prompt flow, you can develop your flow locally and then seamlessly transition to Azure AI. Here are a few scenarios where this might be beneficial:
| Scenario | Benefit | How to|
| --- | --- |--- |
| Collaborative development | Azure AI provides a cloud-based platform for flow development and management, facilitating sharing and collaboration across multiple teams, organizations, and tenants.| [Submit a run using pfazure](./azureai/quick-start.md), based on the flow file in your code base.|
| Processing large amounts of data in parallel pipelines | Transitioning to Azure AI allows you to use your flow as a parallel component in a pipeline job, enabling you to process large amounts of data and integrate with existing pipelines. | Learn how to [Use flow in Azure ML pipeline job](./azureai/use-flow-in-azure-ml-pipeline.md).|
| Large-scale Deployment | Azure AI allows for seamless deployment and optimization when your flow is ready for production and requires high availability and security. | Use `pf flow build` to deploy your flow to [Azure App Service](./azureai/deploy-to-azure-appservice.md).|
| Data Security and Responsible AI Practices | If your flow handles sensitive data or requires ethical AI practices, Azure AI offers robust security, responsible AI services, and features for data storage, identity, and access control. | Follow the steps mentioned in the above scenarios.|
For more resources on Azure AI, visit the cloud documentation site: [Build AI solutions with prompt flow](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/get-started-prompt-flow?view=azureml-api-2).
```{toctree}
:caption: AzureAI
:maxdepth: 1
azureai/quick-start
azureai/manage-flows
azureai/consume-connections-from-azure-ai
azureai/deploy-to-azure-appservice
azureai/use-flow-in-azure-ml-pipeline
azureai/faq
azureai/runtime-change-log
```
# Consume connections from Azure AI
For a smooth development flow that transitions from cloud (Azure AI) to local environments, you can directly utilize the connection already established on the cloud by setting the connection provider to "Azure AI connections".
You can set the connection provider using the following steps:
1. Navigate to the connection list in the VS Code primary sidebar.
1. Click on the ... (more options icon) at the top and select the `Set connection provider` option.
![img](../../media/cloud/consume-cloud-connections/set-connection-provider.png)
1. Choose one of the "Azure AI connections" provider types that you wish to use. [Click to learn more about the differences between the connection providers](#different-connection-providers).
![img](../../media/cloud/consume-cloud-connections/set-connection-provider-2.png)
1. If you choose "Azure AI Connections - for current working directory", then you need to specify the cloud resources in the `config.json` file within the project folder (see the sample `config.json` below).
![img](../../media/cloud/consume-cloud-connections/set-aml-connection-provider.png)
1. If you choose "Azure AI Connections - for this machine", specify the cloud resources in the connection string. You can do this in one of two ways:
(1) Input connection string in the input box above.
For example `azureml://subscriptions/<your-subscription>/resourceGroups/<your-resourcegroup>/providers/Microsoft.MachineLearningServices/workspaces/<your-workspace>`
![img](../../media/cloud/consume-cloud-connections/set-aml-connection-provider-2.png)
(2) Follow the wizard to set up your config step by step.
![img](../../media/cloud/consume-cloud-connections/set-aml-connection-provider-2-wizard.png)
1. Once the connection provider is set, the connection list will automatically refresh, displaying the connections retrieved from the selected provider.
Note:
1. You need to have a project folder open to use the "Azure AI connections - for current working directory" option.
1. Once you change the connection provider, it will stay that way until you change it again and save the new setting.
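For reference, a `config.json` for the "for current working directory" option follows the standard Azure ML workspace config format; a typical file looks like this (values are placeholders):

```json
{
    "subscription_id": "<your-subscription-id>",
    "resource_group": "<your-resource-group>",
    "workspace_name": "<your-workspace-name>"
}
```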
## Different connection providers
Currently, we support three types of connections:
|Connection provider|Type|Description|Provider Specification|Use Case|
|---|---|---|---|---|
| Local Connections| Local| Enables consuming connections created locally and stored in a local SQLite database. |NA| Ideal when connections need to be stored and managed locally.|
|Azure AI connection - For current working directory| Cloud provider| Enables the consumption of connections from a cloud provider, such as a specific Azure Machine Learning workspace or Azure AI project.| Specify the resource ID in a `config.json` file placed in the project folder. <br> [Click here for more details](../../how-to-guides/set-global-configs.md#azureml)| A dynamic approach for consuming connections from different providers in specific projects. Allows for setting different provider configurations for different flows by updating the `config.json` in the project folder.|
|Azure AI connection - For this machine| Cloud| Enables the consumption of connections from a cloud provider, such as a specific Azure Machine Learning workspace or Azure AI project. | Use a `connection string` to specify a cloud resource as the provider on your local machine. <br> [Click here for more details](../../how-to-guides/set-global-configs.md#full-azure-machine-learning-workspace-resource-id)|A global provider setting that applies across all working directories on your machine.|
## Next steps
- Set global configs on [connection.provider](../../how-to-guides/set-global-configs.md#connectionprovider).
- [Manage connections on local](../../how-to-guides/manage-connections.md).
# Deploy to Azure App Service
[Azure App Service](https://learn.microsoft.com/azure/app-service/) is an HTTP-based service for hosting web applications, REST APIs, and mobile back ends.
The scripts (`deploy.sh` for bash and `deploy.ps1` for powershell) under [this folder](https://github.com/microsoft/promptflow/tree/main/examples/tutorials/flow-deploy/azure-app-service) are here to help deploy the docker image to Azure App Service.
This example demonstrates how to deploy the [web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification/) flow to Azure App Service.
## Build a flow as docker format app
Use the command below to build a flow as docker format app:
```bash
pf flow build --source ../../flows/standard/web-classification --output dist --format docker
```
Note that all dependent connections must be created before building the docker image.
## Deploy with Azure App Service
The two scripts will do the following things:
1. Create a resource group if not exists.
2. Build and push the image to docker registry.
3. Create an app service plan with the given SKU.
4. Create an app with specified name, set the deployment container image to the pushed docker image.
5. Set up the environment variables for the app.
::::{tab-set}
:::{tab-item} Bash
Example command to use bash script:
```shell
bash deploy.sh --path dist -i <image_tag> --name my_app_23d8m -r <docker registry> -g <resource_group>
```
See the full parameters by `bash deploy.sh -h`.
:::
:::{tab-item} PowerShell
Example command to use powershell script:
```powershell
.\deploy.ps1 -i <image_tag> -Name my_app_23d8m -r <docker registry> -g <resource_group>
```
See the full parameters by `.\deploy.ps1 -h`.
:::
::::
Note that the `name` will produce a unique FQDN of the form `<name>.azurewebsites.net`.
## View and test the web app
The web app can be found via [azure portal](https://portal.azure.com/)
![img](../../media/cloud/azureml/deploy_appservice_azure_portal_img.png)
After the app is created, you will need to go to https://portal.azure.com/, find the app, and set up the environment variables
at (Settings>Configuration) or (Settings>Environment variables), then restart the app.
![img](../../media/cloud/azureml/deploy_appservice_set_env_var.png)
The app can be tested by sending a POST request to the endpoint or browse the test page.
::::{tab-set}
:::{tab-item} Bash
```bash
curl https://<name>.azurewebsites.net/score --data '{"url":"https://play.google.com/store/apps/details?id=com.twitter.android"}' -X POST -H "Content-Type: application/json"
```
:::
:::{tab-item} PowerShell
```powershell
Invoke-WebRequest -URI https://<name>.azurewebsites.net/score -Body '{"url":"https://play.google.com/store/apps/details?id=com.twitter.android"}' -Method POST -ContentType "application/json"
```
:::
:::{tab-item} Test Page
Browse the app at Overview and see the test page:
![img](../../media/cloud/azureml/deploy_appservice_test_page.png)
:::
::::
Tips:
- Reach deployment logs at (Deployment>Deployment Center) and app logs at (Monitoring>Log stream).
- Reach advanced deployment tools at `https://<name>.scm.azurewebsites.net/`.
- Reach more details about app service at https://learn.microsoft.com/azure/app-service/.
## Next steps
- Try the example [here](https://github.com/microsoft/promptflow/blob/main/examples/tutorials/flow-deploy/azure-app-service).
# Change log of default runtime image
In Azure Machine Learning prompt flow, the execution of flows is facilitated by runtimes. Within the Azure Machine Learning workspace, a runtime serves as a computing resource that enables customers to execute flows.
A runtime includes a pre-built Docker image (users can also provide their own custom image), which contains all necessary dependency packages.
This Docker image is continuously updated, and here we record the new features and fixed bugs of each image version. The image can be pulled by specifying a runtime version and executing the following command:
```
docker pull mcr.microsoft.com/azureml/promptflow/promptflow-runtime-stable:<runtime_version>
```
You can check the runtime image version from the flow execution log:
![img](../../media/cloud/runtime-change-log/runtime-version.png)
## 20240116.v1
### New features
NA
### Bugs fixed
- Add validation for wrong connection type for LLM tool.
## 20240111.v2
### New features
- Support error log scrubbing for heron jobs.
### Bugs fixed
- Fixed the compatibility issue between runtime and promptflow package < 1.3.0
# Manage flows
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../../how-to-guides/faq.md#stable-vs-experimental).
:::
This documentation will walk you through how to manage your flow with CLI and SDK on [Azure AI](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/overview-what-is-prompt-flow?view=azureml-api-2).
The flow examples in this guide come from [examples/flows/standard](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard).
In general:
- For `CLI`, you can run `pfazure flow --help` in the terminal to see help messages.
- For `SDK`, you can refer to [Promptflow Python Library Reference](../../reference/python-library-reference/promptflow.md) and check `promptflow.azure.PFClient.flows` for more flow operations.
:::{admonition} Prerequisites
- Refer to the prerequisites in [Quick start](./quick-start.md#prerequisites).
- Use the `az login` command in the command line to log in. This enables promptflow to access your credentials.
:::
Let's take a look at the following topics:
- [Manage flows](#manage-flows)
- [Create a flow](#create-a-flow)
- [List flows](#list-flows)
## Create a flow
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
To set the target workspace, you can either specify it in the CLI command or set default values in the Azure CLI.
You can refer to [Quick start](./quick-start.md#submit-a-run-to-workspace) for more information.
To create a flow on Azure from a local flow directory, you can use
```bash
# create the flow
pfazure flow create --flow <path-to-flow-folder>
# create the flow with metadata
pfazure flow create --flow <path-to-flow-folder> --set display_name=<display-name> description=<description> tags.key1=value1
```
After the flow is created successfully, you can see the flow summary in the command line.
![img](../../media/cloud/manage-flows/flow_create_0.png)
:::
:::{tab-item} SDK
:sync: SDK
1. Import the required libraries
```python
from azure.identity import DefaultAzureCredential, InteractiveBrowserCredential
# azure version promptflow apis
from promptflow.azure import PFClient
```
2. Get credential
```python
try:
credential = DefaultAzureCredential()
# Check if given credential can get token successfully.
credential.get_token("https://management.azure.com/.default")
except Exception as ex:
# Fall back to InteractiveBrowserCredential in case DefaultAzureCredential not work
credential = InteractiveBrowserCredential()
```
3. Get a handle to the workspace
```python
# Get a handle to workspace
pf = PFClient(
credential=credential,
subscription_id="<SUBSCRIPTION_ID>", # this will look like xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
resource_group_name="<RESOURCE_GROUP>",
workspace_name="<AML_WORKSPACE_NAME>",
)
```
4. Create the flow
```python
# specify flow path
flow = "./web-classification"
# create flow to Azure
flow = pf.flows.create_or_update(
flow=flow, # path to the flow folder
display_name="my-web-classification", # it will be "web-classification-{timestamp}" if not specified
type="standard", # it will be "standard" if not specified
)
```
:::
::::
On Azure portal, you can see the created flow in the flow list.
![img](../../media/cloud/manage-flows/flow_create_1.png)
And the flow source folder on file share is `Users/<alias>/promptflow/<flow-display-name>`:
![img](../../media/cloud/manage-flows/flow_create_2.png)
Note that if the flow display name is not specified, it will default to the flow folder name + timestamp. (e.g. `web-classification-11-13-2023-14-19-10`)
## List flows
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
List flows with default json format:
```bash
pfazure flow list --max-results 1
```
![img](../../media/cloud/manage-flows/flow_list_0.png)
:::
:::{tab-item} SDK
:sync: SDK
```python
# reuse the pf client created in "create a flow" section
flows = pf.flows.list(max_results=1)
```
:::
::::
# Run prompt flow in Azure AI
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../../how-to-guides/faq.md#stable-vs-experimental).
:::
This guide assumes you have learned how to create and run a flow by following [Quick start](../../how-to-guides/quick-start.md). It will walk you through the main process of submitting a promptflow run to [Azure AI](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/overview-what-is-prompt-flow?view=azureml-api-2).
Benefits of using Azure AI compared to just running locally:
- **Designed for team collaboration**: The portal UI is a better fit for sharing and presenting your flows and runs, and the workspace can better organize team-shared resources like connections.
- **Enterprise Readiness Solutions**: prompt flow leverages Azure AI's robust enterprise readiness solutions, providing a secure, scalable, and reliable foundation for the development, experimentation, and deployment of flows.
## Prerequisites
1. An Azure account with an active subscription - [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
2. An Azure AI ML workspace - [Create workspace resources you need to get started with Azure AI](https://learn.microsoft.com/en-us/azure/machine-learning/quickstart-create-resources).
3. A Python environment; `python=3.9` or a higher version like 3.10 is recommended.
4. Install `promptflow` with extra dependencies and `promptflow-tools`.
```sh
pip install promptflow[azure] promptflow-tools
```
5. Clone the sample repo and check flows in folder [examples/flows](https://github.com/microsoft/promptflow/tree/main/examples/flows).
```sh
git clone https://github.com/microsoft/promptflow.git
```
## Create necessary connections
Connections help securely store and manage secret keys or other sensitive credentials required for interacting with LLMs and other external tools, for example Azure Content Safety.
In this guide, we will use the flow `web-classification`, which uses the connection `open_ai_connection` inside. We need to set up the connection if we haven't added it before.
Please go to the workspace portal, click `Prompt flow` -> `Connections` -> `Create`, then follow the instructions to create your own connections. Learn more on [connections](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/concept-connections?view=azureml-api-2).
## Submit a run to workspace
Assuming you are in working directory `<path-to-the-sample-repo>/examples/flows/standard/`
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Use `az login` to login so promptflow can get your credential.
```sh
az login
```
Submit a run to workspace.
```sh
pfazure run create --subscription <my_sub> -g <my_resource_group> -w <my_workspace> --flow web-classification --data web-classification/data.jsonl --stream
```
**Default subscription/resource-group/workspace**
Note `--subscription`, `-g` and `-w` can be omitted if you have installed the [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli) and [set the default configurations](https://learn.microsoft.com/en-us/cli/azure/azure-cli-configuration).
```sh
az account set --subscription <my-sub>
az configure --defaults group=<my_resource_group> workspace=<my_workspace>
```
**Serverless runtime and named runtime**
Runtimes serve as computing resources so that the flow can be executed in the workspace. The above command does not specify any runtime, which means it will run in serverless mode. In this mode the workspace will automatically create a runtime and you can use it as the default runtime for any flow run later.
Instead, you can also [create a runtime](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/how-to-create-manage-runtime?view=azureml-api-2) and use it with `--runtime <my-runtime>`:
```sh
pfazure run create --flow web-classification --data web-classification/data.jsonl --stream --runtime <my-runtime>
```
**Specify run name and view a run**
You can also name the run by specifying `--name my_first_cloud_run` in the run create command; otherwise the run name will be generated in a pattern that includes a timestamp.
With a run name, you can easily stream or view the run details using below commands:
```sh
pfazure run stream -n my_first_cloud_run # same as "--stream" in command "run create"
pfazure run show-details -n my_first_cloud_run
pfazure run visualize -n my_first_cloud_run
```
More details can be found in [CLI reference: pfazure](../../reference/pfazure-command-reference.md)
:::
:::{tab-item} SDK
:sync: SDK
1. Import the required libraries
```python
from azure.identity import DefaultAzureCredential, InteractiveBrowserCredential
# azure version promptflow apis
from promptflow.azure import PFClient
```
2. Get credential
```python
try:
credential = DefaultAzureCredential()
# Check if given credential can get token successfully.
credential.get_token("https://management.azure.com/.default")
except Exception as ex:
# Fall back to InteractiveBrowserCredential in case DefaultAzureCredential not work
credential = InteractiveBrowserCredential()
```
3. Get a handle to the workspace
```python
# Get a handle to workspace
pf = PFClient(
credential=credential,
subscription_id="<SUBSCRIPTION_ID>", # this will look like xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
resource_group_name="<RESOURCE_GROUP>",
workspace_name="<AML_WORKSPACE_NAME>",
)
```
4. Submit the flow run
```python
# load flow
flow = "web-classification"
data = "web-classification/data.jsonl"
runtime = "example-runtime-ci" # assume you have existing runtime with this name provisioned
# runtime = None # un-comment to use automatic runtime
# create run
base_run = pf.run(
flow=flow,
data=data,
runtime=runtime,
)
pf.stream(base_run)
```
5. View the run info
```python
details = pf.get_details(base_run)
details.head(10)
pf.visualize(base_run)
```
:::
::::
## View the run in workspace
At the end of stream logs, you can find the `portal_url` of the submitted run, click it to view the run in the workspace.
![c_0](../../media/cloud/azureml/local-to-cloud-run-webview.png)
### Run snapshot of the flow with additional includes
Flows with [additional includes](../../how-to-guides/develop-a-flow/referencing-external-files-or-folders-in-a-flow.md) enabled can also be submitted for execution in the workspace. Please note that the specified additional include files or folders will be uploaded and organized within the **Files** folder of the run snapshot in the cloud.
![img](../../media/cloud/azureml/run-with-additional-includes.png)
## Next steps
Learn more about:
- [CLI reference: pfazure](../../reference/pfazure-command-reference.md)
# Frequently asked questions (FAQ)
## Troubleshooting
### Token expired when running pfazure commands
If you hit the error "AADSTS700082: The refresh token has expired due to inactivity." when running a pfazure command, it is caused by an expired local cached token. Please clear the cached token under "%LOCALAPPDATA%/.IdentityService/msal.cache", then run the command below to log in again:
```sh
az login
```
# Use flow in Azure ML pipeline job
After you have developed and tested the flow in [init and test a flow](../../how-to-guides/init-and-test-a-flow.md), this guide will help you learn how to use a flow as a parallel component in a pipeline job on AzureML, so that you can integrate the created flow with existing pipelines and process a large amount of data.
:::{admonition} Prerequisites
- You need to install the extension `ml>=2.21.0` to enable this feature in the CLI and the package `azure-ai-ml>=1.11.0` to enable it in the SDK;
- You need to put `$schema` in the target `flow.dag.yaml` to enable this feature;
  - `flow.dag.yaml`: `$schema`: `https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json`
  - `run.yaml`: `$schema`: `https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json`
- You need to generate `flow.tools.json` for the target flow before the usage below. The generation can be done by `pf flow validate`.
:::
For more information about AzureML and component:
- [Install and set up the CLI(v2)](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-configure-cli?view=azureml-api-2&tabs=public)
- [Install and set up the SDK(v2)](https://learn.microsoft.com/en-us/python/api/overview/azure/ai-ml-readme?view=azure-python)
- [What is a pipeline](https://learn.microsoft.com/en-us/azure/machine-learning/concept-ml-pipelines?view=azureml-api-2)
- [What is a component](https://learn.microsoft.com/en-us/azure/machine-learning/concept-component?view=azureml-api-2)
## Register a flow as a component
You can register a flow as a component with either the CLI or the SDK.
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
```bash
# Register flow as a component
# Default component name will be the name of flow folder, which is not a valid component name, so we override it here; default version will be "1"
az ml component create --file standard/web-classification/flow.dag.yaml --set name=web_classification
# Register flow as a component with parameters override
az ml component create --file standard/web-classification/flow.dag.yaml --version 2 --set name=web_classification_updated
```
:::
:::{tab-item} SDK
:sync: SDK
```python
from azure.ai.ml import MLClient, load_component
ml_client = MLClient()
# Register flow as a component
flow_component = load_component("standard/web-classification/flow.dag.yaml")
# Default component name will be the name of flow folder, which is not a valid component name, so we override it here; default version will be "1"
flow_component.name = "web_classification"
ml_client.components.create_or_update(flow_component)
# Register flow as a component with parameters override
ml_client.components.create_or_update(
"standard/web-classification/flow.dag.yaml",
version="2",
params_override=[
{"name": "web_classification_updated"}
]
)
```
:::
::::
After registering a flow as a component, it can be referenced in a pipeline job like [regular registered components](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/pipelines-with-components/basics/1b_e2e_registered_components).
## Directly use a flow in a pipeline job
Besides explicitly registering a flow as a component, you can also directly use a flow in a pipeline job:
All connections and flow inputs will be exposed as input parameters of the component. Default values can be provided in the flow/run definition; they can also be set or overwritten on job submission:
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
```yaml
...
jobs:
flow_node:
type: parallel
component: standard/web-classification/flow.dag.yaml
inputs:
data: ${{parent.inputs.web_classification_input}}
url: "${data.url}"
connections.summarize_text_content.connection: azure_open_ai_connection
connections.summarize_text_content.deployment_name: text-davinci-003
...
```
Above is part of the pipeline job yaml, see here for [full example](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/pipelines-with-components/pipeline_job_with_flow_as_component).
:::
:::{tab-item} SDK
:sync: SDK
```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient, load_component, Input
from azure.ai.ml.dsl import pipeline
credential = DefaultAzureCredential()
ml_client = MLClient.from_config(credential=credential)
data_input = Input(path="standard/web-classification/data.jsonl", type='uri_file')
# Load flow as a component
flow_component = load_component("standard/web-classification/flow.dag.yaml")
@pipeline
def pipeline_func_with_flow(data):
flow_node = flow_component(
data=data,
url="${data.url}",
connections={
"summarize_text_content": {
"connection": "azure_open_ai_connection",
"deployment_name": "text-davinci-003",
},
},
)
flow_node.compute = "cpu-cluster"
pipeline_with_flow = pipeline_func_with_flow(data=data_input)
pipeline_job = ml_client.jobs.create_or_update(pipeline_with_flow)
ml_client.jobs.stream(pipeline_job.name)
```
Above is part of the pipeline job python code, see here for [full example](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/pipelines/1l_flow_in_pipeline).
:::
::::
## Difference across flow in prompt flow and pipeline job
In prompt flow, a flow runs on a [runtime](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/concept-runtime), which is designed for prompt flow and easy to customize; while in a pipeline job, a flow runs on different types of compute, usually a compute cluster.
Given the above, if your flow has logic relying on identity or environment variables, please be aware of this difference, as you might run into unexpected error(s) when the flow runs in a pipeline job, and you might need some extra configuration to make it work.
# Replay end-to-end tests
* This document introduces replay tests for the tests located in [sdk_cli_azure_test](../../src/promptflow/tests/sdk_cli_azure_test/e2etests/) and [sdk_cli_test](../../src/promptflow/tests/sdk_cli_test/e2etests/).
* The primary purpose of replay tests is to avoid the need for credentials, Azure workspaces, OpenAI tokens, and to directly test prompt flow behavior.
* Although there are different techniques behind recording/replaying, there are some common steps to run the tests in replay mode.
* The key switch for replay tests is the environment variable `PROMPT_FLOW_TEST_MODE`.
## How to run tests in replay mode
After cloning the full repo and setting up the proper test environment following [dev_setup.md](./dev_setup.md), follow the steps below in the root directory of the repo:
1. If you have changed/affected tests in __sdk_cli_test__: Copy or rename the file [dev-connections.json.example](../../src/promptflow/dev-connections.json.example) to `connections.json` in the same folder.
2. In your Python environment, set the environment variable `PROMPT_FLOW_TEST_MODE` to `'replay'` and run the test(s).
These tests should work properly without any real connection settings.
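For example, on a bash shell the replay run could look like this (the test path is illustrative):

```bash
export PROMPT_FLOW_TEST_MODE=replay
pytest src/promptflow/tests/sdk_cli_test/e2etests -v
```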
## Test modes
There are 3 representative values of the environment variable `PROMPT_FLOW_TEST_MODE`
- `live`: Tests run against the real backend, which is the way traditional end-to-end tests do.
- `record`: Tests run against the real backend, and network traffic will be sanitized (filter sensitive and unnecessary requests/responses) and recorded to local files (recordings).
- `replay`: There is no real network traffic between SDK/CLI and the backend, tests run against local recordings.
## Update test recordings
To record a test, don’t forget to clone the full repo and set up the proper test environment following [dev_setup.md](./dev_setup.md):
1. Prepare some data.
* If you have changed/affected tests in __sdk_cli_test__: Copy or rename the file [dev-connections.json.example](../../src/promptflow/dev-connections.json.example) to `connections.json` in the same folder.
* If you have changed/affected tests in __sdk_cli_azure_test__: prepare your Azure ML workspace, make sure your Azure CLI logged in, and set the environment variable `PROMPT_FLOW_SUBSCRIPTION_ID`, `PROMPT_FLOW_RESOURCE_GROUP_NAME`, `PROMPT_FLOW_WORKSPACE_NAME` and `PROMPT_FLOW_RUNTIME_NAME` (if needed) pointing to your workspace.
2. Record the test.
* Specify the environment variable `PROMPT_FLOW_TEST_MODE` to `'record'`. If you have a `.env` file, we recommend specifying it there. Here is an example [.env file](../../src/promptflow/.env.example). Then, just run the test that you want to record.
3. Once the test completes:
* If you have changed/affected tests in __sdk_cli_azure_test__: There should be one new YAML file located in `src/promptflow/tests/test_configs/recordings/`, containing the network traffic of the test.
* If you have changed/affected tests in __sdk_cli_test__: There may be changes in the folder `src/promptflow/tests/test_configs/node_recordings/`. Don’t worry if there are no changes, because similar LLM calls may have been recorded before.
## Techniques behind replay test
### Sdk_cli_azure_test
End-to-end tests for pfazure aim to test the behavior of the PromptFlow SDK/CLI as it interacts with the service. This process can be time-consuming, error-prone, and require credentials (which are unavailable to pull requests from forked repositories); all of these go against our intention for a smooth development experience.
Therefore, we introduce replay tests, which leverage [VCR.py](https://pypi.org/project/vcrpy/) to record all required network traffic to local files and replay during tests. In this way, we avoid the need for credentials, speed up, and stabilize the test process.
### Sdk_cli_test
sdk_cli_test often doesn’t use a real backend; it directly invokes LLM calls from localhost. Thus the key target of replay tests is to avoid the need for OpenAI tokens. If you have OpenAI / Azure OpenAI tokens yourself, you can try recording the tests. Record Storage will not record your own LLM connection, but only the inputs and outputs of the LLM calls.
There are also limitations. Currently, recorded calls are:
* AzureOpenAI calls
* OpenAI calls
* tool name "fetch_text_content_from_url" and tool name "my_python_tool"
# Promptflow Reference Documentation Guide
## Overview
This guide describes how to author Python docstrings for promptflow public interfaces. See our doc site at [Promptflow API reference documentation](https://microsoft.github.io/promptflow/reference/python-library-reference/promptflow.html).
## Principles
- **Coverage**: Every public object must have a docstring. For private objects, docstrings are encouraged but not required.
- **Style**: All docstrings should be written in [Sphinx style](https://sphinx-rtd-tutorial.readthedocs.io/en/latest/docstrings.html#the-sphinx-docstring-format) noting all types and if any exceptions are raised.
- **Relevance**: The documentation is up-to-date and relevant to the current version of the product.
- **Clarity**: The documentation is written in clear, concise language that is easy to understand.
- **Consistency**: The documentation has a consistent format and structure, making it easy to navigate and follow.
## How to write the docstring
First, please read through [Sphinx style](https://sphinx-rtd-tutorial.readthedocs.io/en/latest/docstrings.html#the-sphinx-docstring-format) to get a basic understanding of Sphinx-style docstrings.
### Write class docstring
Let's start with a class example:
```python
from typing import Dict, Optional, Union
from promptflow import PFClient
class MyClass:
"""One-line summary of the class.
More detailed explanation of the class. May include below notes, admonitions, code blocks.
.. note::
Here are some notes to show, with a nested python code block:
.. code-block:: python
from promptflow import MyClass, PFClient
obj = MyClass(PFClient())
.. admonition:: [Title of the admonition]
Here are some admonitions to show.
    :param client: Description of the client.
:type client: ~promptflow.PFClient
:param param_int: Description of the parameter.
:type param_int: Optional[int]
:param param_str: Description of the parameter.
:type param_str: Optional[str]
:param param_dict: Description of the parameter.
:type param_dict: Optional[Dict[str, str]]
"""
    def __init__(
        self,
        client: PFClient,
param_int: Optional[int] = None,
param_str: Optional[str] = None,
param_dict: Optional[Dict[str, str]] = None,
) -> None:
"""No docstring for __init__, it should be written in class definition above."""
...
```
**Notes**:
1. One-line summary is required. It should be clear and concise.
2. Detailed explanation is encouraged but not required. This part may or may not include notes, admonitions and code blocks.
- The format like `.. note::` is called `directive`. Directives are a mechanism to extend the content of [reStructuredText](https://docutils.sourceforge.io/rst.html). Every directive declares a block of content with specific role. Start a new line with `.. directive_name::` to use the directive.
- The directives used in the sample(`note/admonition/code-block`) should be enough for basic usage of docstring in our project. But you are welcomed to explore more [Directives](https://www.sphinx-doc.org/en/master/usage/restructuredtext/directives.html#specific-admonitions).
3. Parameter description and type is required.
- A pair of `:param [ParamName]:` and `:type [ParamName]:` is required.
- If the type is a promptflow public class, use the `full path to the class` and prepend it with a "~". This will create a link when the documentation is rendered on the doc site that will take the user to the class reference documentation for more information.
```text
:param client: Description of the client.
:type client: ~promptflow.PFClient
```
- Use `Union/Optional` when appropriate in the function declaration, and use the same annotation after `:type [ParamName]:`
```text
:type param_int: Optional[int]
```
4. For classes, include docstring in definition only. If you include a docstring in both the class definition and the constructor (init method) docstrings, it will show up twice in the reference docs.
5. Constructors (def `__init__`) should return `None`, per [PEP 484 standards](https://peps.python.org/pep-0484/#the-meaning-of-annotations).
6. To create a link for a promptflow class on our doc site: `~promptflow.xxx.MyClass` alone only works after `:type [ParamName]` and `:rtype:`. If you want to achieve the same effect in docstring summary, you should use it with `:class:`:
```python
"""
An example to achieve link effect in summary for :class:`~promptflow.xxx.MyClass`
For function, use :meth:`~promptflow.xxx.my_func`
"""
```
7. There are some tricks to highlight the content in your docstring:
- Single backticks (`): Single backticks are used to represent inline code elements within the text. It is typically used to highlight function names, variable names, or any other code elements within the documentation.
- Double backticks(``): Double backticks are typically used to highlight a literal value.
8. If there are any class level constants you don't want to expose to doc site, make sure to add `_` in front of the constant to hide it.
### Write function docstring
```python
from typing import Optional
def my_method(param_int: Optional[int] = None) -> int:
"""One-line summary
Detailed explanations.
:param param_int: Description of the parameter.
:type param_int: int
:raises [ErrorType1]: [ErrorDescription1]
:raises [ErrorType2]: [ErrorDescription2]
:return: Description of the return value.
:rtype: int
"""
...
```
In addition to `class docstring` notes:
1. Function docstring should include return values.
- If return type is promptflow class, we should also use `~promptflow.xxx.[ClassName]`.
2. Function docstring should include exceptions that may be raised in this function.
- If exception type is `PromptflowException`, use `~promptflow.xxx.[ExceptionName]`
- If multiple exceptions are raised, just add new lines of `:raises`, see the example above.
## How to build doc site locally
You can build the documentation site locally to preview the final effect of your docstring on the rendered site. This will provide you with a clear understanding of how your docstring will appear on our site once your changes are merged into the main branch.
1. Setup your dev environment, see [dev_setup](./dev_setup.md) for details. Sphinx will load all source code to process docstring.
- Skip this step if you just want to build the doc site without reference doc, but do remove `-WithReferenceDoc` from the command in step 3.
2. Install `langchain` package since it is used in our code but not covered in `dev_setup`.
3. Open a `powershell`, activate the conda env and navigate to `<repo-root>/scripts/docs` , run `doc_generation.ps1`:
```pwsh
cd scripts\docs
.\doc_generation.ps1 -WithReferenceDoc -WarningAsError
```
- For the first time you execute this command, it will take some time to install `sphinx` dependencies. After the initial installation, next time you can add param `-SkipInstall` to above command to save some time for dependency check.
4. Check warnings/errors in the build log, fix them if any, then build again.
5. Open `scripts/docs/_build/index.html` to preview the local doc site.
## Additional comments
- **Utilities**: The [autoDocstring](https://marketplace.visualstudio.com/items?itemName=njpwerner.autodocstring) VSCode extension or GitHub Copilot can help autocomplete in this style for you.
- **Advanced principles**
- Accuracy: The documentation accurately reflects the features and functionality of the product.
- Completeness: The documentation covers all relevant features and functionality of the product.
- Demonstration: Every docstring should include an up-to-date code snippet that demonstrates how to use the product effectively.
## References
- [AzureML v2 Reference Documentation Guide](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ml/azure-ai-ml/documentation_guidelines.md)
- [Azure SDK for Python documentation guidelines](https://azure.github.io/azure-sdk/python_documentation.html#docstrings)
- [How to document a Python API](https://review.learn.microsoft.com/en-us/help/onboard/admin/reference/python/documenting-api?branch=main)
# Dev Setup
## Set up process
- First create a new [conda](https://conda.io/projects/conda/en/latest/user-guide/getting-started.html) environment. Please specify python version as 3.9.
`conda create -n <env_name> python=3.9`.
- Activate the env you created.
- Set environment variable `PYTHONPATH` in your new conda environment.
`conda env config vars set PYTHONPATH=<path-to-src>\promptflow`.
Once you have set the environment variable, you have to reactivate your environment.
`conda activate <env_name>`.
- In root folder, run `python scripts/building/dev_setup.py --promptflow-extra-deps azure` to install the package and dependencies.
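Putting the steps together, a bash session might look like the following sketch (the env name `pf-dev` is illustrative):

```bash
conda create -n pf-dev python=3.9
conda activate pf-dev
# set PYTHONPATH so tests can import the source tree
conda env config vars set PYTHONPATH=<path-to-src>/promptflow
conda activate pf-dev  # reactivate so the variable takes effect
python scripts/building/dev_setup.py --promptflow-extra-deps azure
```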
## How to run tests
### Set up your secrets
`dev-connections.json.example` is a template for connections provided in `src/promptflow`. You can follow these steps to configure your connections for the test cases based on this template:
1. `cd ./src/promptflow`
2. Run the command `cp dev-connections.json.example connections.json`;
3. Replace the values in the json file with your connection info;
4. Set the environment variable `PROMPTFLOW_CONNECTIONS='connections.json'`.
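On a bash shell, these steps boil down to:

```bash
cd ./src/promptflow
cp dev-connections.json.example connections.json
# edit connections.json with your own connection info, then:
export PROMPTFLOW_CONNECTIONS='connections.json'
```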
After the above setup is finished, you can use the `pytest` command to run tests. For example, in the root folder you can:
### Run tests via command
- Run all tests under a folder: `pytest src/promptflow/tests -v`
- Run a single test: `pytest src/promptflow/tests/promptflow_test/e2etests/test_executor.py::TestExecutor::test_executor_basic_flow -v`
### Run tests in VSCode
1. Set up your python interpreter
- Open the Command Palette (Ctrl+Shift+P) and select `Python: Select Interpreter`.
![img0](../media/dev_setup/set_up_vscode_0.png)
- Select existing conda env which you created previously.
![img1](../media/dev_setup/set_up_vscode_1.png)
2. Set up your test framework and directory
- Open the Command Palette (Ctrl+Shift+P) and select `Python: Configure Tests`.
![img2](../media/dev_setup/set_up_vscode_2.png)
- Select `pytest` as test framework.
![img3](../media/dev_setup/set_up_vscode_3.png)
- Select `Root directory` as test directory.
![img4](../media/dev_setup/set_up_vscode_4.png)
3. Exclude specific test folders.
You can exclude specific test folders to avoid VS Code's test discovery failing if you don't have some extra dependencies installed.
For example, if you don't have azure dependency, you can exclude `sdk_cli_azure_test`.
Open `.vscode/settings.json`, write `"--ignore=src/promptflow/tests/sdk_cli_azure_test"` to `"python.testing.pytestArgs"`.
![img6](../media/dev_setup/set_up_vscode_6.png)
4. Click the `Run Test` button on the left
![img5](../media/dev_setup/set_up_vscode_5.png)
### Run tests in pycharm
1. Set up your pycharm python interpreter
![img0](../media/dev_setup/set_up_pycharm_0.png)
2. Select existing conda env which you created previously
![img1](../media/dev_setup/set_up_pycharm_1.png)
3. Run test, right-click the test name to run, or click the green arrow button on the left.
![img2](../media/dev_setup/set_up_pycharm_2.png)
### Record and replay tests
Please refer to [Replay End-to-End Tests](./replay-e2e-test.md) to learn how to record and replay tests.
## How to write docstrings
Clear and consistent API documentation is crucial for the usability and maintainability of our codebase. Please refer to [API Documentation Guidelines](./documentation_guidelines.md) to learn how to write docstrings when developing the project.
## How to write tests
- Put all test data/configs under `src/promptflow/tests/test_configs`.
- Write unit tests:
- Flow run: `src/promptflow/tests/sdk_cli_test/unittest/`
- Flow run in azure: `src/promptflow/tests/sdk_cli_azure_test/unittest/`
- Write e2e tests:
- Flow run: `src/promptflow/tests/sdk_cli_test/e2etests/`
- Flow run in azure: `src/promptflow/tests/sdk_cli_azure_test/e2etests/`
- Test file name and the test case name all start with `test_`.
- A basic test example, see [test_connection.py](../../src/promptflow/tests/sdk_cli_test/e2etests/test_connection.py).
### Test structure
Currently all tests are under `src/promptflow/tests/` folder:
- tests/
- promptflow/
- sdk_cli_test/
- e2etests/
- unittests/
- sdk_cli_azure_test/
- e2etests/
- unittests/
- test_configs/
- connections/
- datas/
- flows/
- runs/
- wrong_flows/
- wrong_tools/
When you want to add tests for a new feature, you can add a new test file, say an e2e test file `test_construction.py`,
under `tests/promptflow/**/e2etests/`.
Once the project gets more complicated, or anytime you find it necessary to add a new test folder and test configs for
a specific feature, feel free to split `promptflow` into more folders, for example:
- tests/
- (Test folder name)/
- e2etests/
- test_xxx.py
- unittests/
- test_xxx.py
- test_configs/
- (Data or config folder name)/
# Tutorials
This section contains a collection of flow samples and step-by-step tutorials.
|Area|<div style="width:250px">Sample</div>|Description|
|--|--|--|
|SDK|[Getting started with prompt flow](https://github.com/microsoft/promptflow/blob/main/examples/tutorials/get-started/quickstart.ipynb)| A step-by-step guide to invoking your first flow run.
|CLI|[Chat with PDF](https://github.com/microsoft/promptflow/blob/main/examples/tutorials/e2e-development/chat-with-pdf.md)| An end-to-end tutorial on how to build a high quality chat application with prompt flow, including flow development and evaluation with metrics.
|SDK|[Chat with PDF - test, evaluation and experimentation](https://github.com/microsoft/promptflow/blob/main/examples/flows/chat/chat-with-pdf/chat-with-pdf.ipynb)| We will walk you through how to use prompt flow Python SDK to test, evaluate and experiment with the "Chat with PDF" flow.
|SDK|[Connection management](https://github.com/microsoft/promptflow/blob/main/examples/connections/connection.ipynb)| Manage various types of connections using sdk
|CLI|[Working with connection](https://github.com/microsoft/promptflow/blob/main/examples/connections/README.md)| Manage various types of connections using cli
|SDK|[Run prompt flow in Azure AI](https://github.com/microsoft/promptflow/blob/main/examples/tutorials/get-started/quickstart-azure.ipynb)| A quick start tutorial to run a flow in Azure AI and evaluate it.
|SDK|[Flow run management in Azure AI](https://github.com/microsoft/promptflow/blob/main/examples/tutorials/run-management/cloud-run-management.ipynb)| Flow run management in azure AI
## Samples
|Area|<div style="width:250px">Sample</div>|Description|
|--|--|--|
|Standard Flow|[basic](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/basic)| a basic flow with prompt and python tool.
|Standard Flow|[basic-with-connection](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/basic-with-connection)| a basic flow using custom connection with prompt and python tool
|Standard Flow|[basic-with-builtin-llm](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/basic-with-builtin-llm)| a basic flow using builtin llm tool
|Standard Flow|[customer-intent-extraction](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/customer-intent-extraction)| a flow created from existing langchain python code
|Standard Flow|[web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification)| a flow demonstrating multi-class classification with LLM. Given an url, it will classify the url into one web category with just a few shots, simple summarization and classification prompts.
|Standard Flow|[autonomous-agent](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/autonomous-agent)| a flow showcasing how to construct an AutoGPT flow that autonomously figures out how to apply the given functions to solve the goal; in this example, the goal is film trivia that provides accurate and up-to-date information about movies, directors, actors, and more.
|Chat Flow|[chat-with-wikipedia](https://github.com/microsoft/promptflow/tree/main/examples/flows/chat/chat-with-wikipedia)| a flow demonstrating Q&A with GPT3.5 using information from Wikipedia to make the answer more grounded.
|Chat Flow|[chat-with-pdf](https://github.com/microsoft/promptflow/tree/main/examples/flows/chat/chat-with-pdf)| a flow that allows you to ask questions about the content of a PDF file and get answers.
|Evaluation Flow|[eval-classification-accuracy](https://github.com/microsoft/promptflow/tree/main/examples/flows/evaluation/eval-classification-accuracy)| a flow illustrating how to evaluate the performance of a classification system.
Learn more: [Try out more promptflow examples.](https://github.com/microsoft/promptflow/tree/main/examples)
# Flow YAML Schema
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../how-to-guides/faq.md#stable-vs-experimental).
:::
The source JSON schema can be found at [Flow.schema.json](https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json)
## YAML syntax
| Key | Type | Description |
|----------------------------|-----------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `$schema` | string | The YAML schema. If you use the prompt flow VS Code extension to author the YAML file, including `$schema` at the top of your file enables you to invoke schema and resource completions. |
| `inputs` | object | Dictionary of flow inputs. The key is a name for the input within the context of the flow and the value is the flow input definition. |
| `inputs.<input_name>` | object | The flow input definition. See [Flow input](#flow-input) for the set of configurable properties. |
| `outputs` | object | Dictionary of flow outputs. The key is a name for the output within the context of the flow and the value is the flow output definition. |
| `outputs.<output_name>` | object | The component output definition. See [Flow output](#flow-output) for the set of configurable properties. |
| `nodes` | array | List of individual nodes to run as steps within the flow, each defined as a dictionary. Nodes can use built-in tools or third-party tools. See [Nodes](#nodes) for more information. |
| `node_variants` | object | Dictionary of nodes with variants. The key is the node name and value contains variants definition and `default_variant_id`. See [Node variants](#node-variants) for more information. |
| `environment` | object | The environment to use for the flow. The key can be `image` or `python_requirements_txt` and the value can be either an image or a Python requirements text file. |
| `additional_includes` | array | Additional includes is a list of files that can be shared among flows. Users can specify additional files and folders used by flow, and prompt flow will help copy them all to the snapshot during flow creation. |
### Flow input
| Key | Type | Description | Allowed values |
|-------------------|-------------------------------------------|------------------------------------------------------|-----------------------------------------------------|
| `type` | string | The type of flow input. | `int`, `double`, `bool`, `string`, `list`, `object`, `image` |
| `description` | string | Description of the input. | |
| `default` | int, double, bool, string, list, object, image | The default value for the input. | |
| `is_chat_input` | boolean | Whether the input is the chat flow input. | |
| `is_chat_history` | boolean | Whether the input is the chat history for chat flow. | |
### Flow output
| Key | Type | Description | Allowed values |
|------------------|---------|-------------------------------------------------------------------------------|-----------------------------------------------------|
| `type` | string | The type of flow output. | `int`, `double`, `bool`, `string`, `list`, `object` |
| `description` | string | Description of the output. | |
| `reference` | string | A reference to the node output, e.g. ${<node_name>.output.<node_output_name>} | |
| `is_chat_output` | boolean | Whether the output is the chat flow output. | |
### Nodes
`nodes` is a list of node definitions, where each node is a dictionary with the following fields. Below, we only show the common fields of a single node using a built-in tool.
| Key | Type | Description | Allowed values |
|----------------|--------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------|
| `name` | string | The name of the node. | |
| `type` | string | The type of the node. | Type of built-in tool like `Python`, `Prompt`, `LLM` and third-party tool like `Vector Search`, etc. |
| `inputs` | object | Dictionary of node inputs. The key is the input name and the value can be primitive value or a reference to the flow input or the node output, e.g. `${inputs.<flow_input_name>}`, `${<node_name>.output}` or `${<node_name>.output.<node_output_name>}` | |
| `source` | object | Dictionary of tool source used by the node. The key contains `type`, `path` and `tool`. The type can be `code`, `package` and `package_with_prompt`. | |
| `provider` | string | It indicates the provider of the tool. Used when the `type` is LLM. | `AzureOpenAI` or `OpenAI` |
| `connection` | string | The connection name which has been created before. Used when the `type` is LLM. | |
| `api` | string | The api name of the provider. Used when the `type` is LLM. | |
| `module` | string | The module name of the tool using by the node. Used when the `type` is LLM. | |
| `use_variants` | bool | Whether the node has variants. | |
### Node variants
Node variants is a dictionary containing variant definitions for nodes that have variants, with the respective node names as dictionary keys.
Below, we explore the variants for a single node.
| Key | Type | Description | Allowed values |
|----------------------|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------|
| `<node_name>` | string | The name of the node. | |
| `default_variant_id` | string | Default variant id. | |
| `variants` | object | This dictionary contains all node variations, with the variant id serving as the key and a node definition dictionary as the corresponding value. Within the node definition dictionary, the key labeled 'node' should contain a variant definition similar to [Nodes](#nodes), excluding the 'name' field. | |
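Putting the pieces above together, a minimal `flow.dag.yaml` might look like the following sketch (the node name, file paths, and input/output names are illustrative):

```yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
  text:
    type: string
    default: hello world
outputs:
  answer:
    type: string
    reference: ${echo.output}
nodes:
- name: echo
  type: python
  source:
    type: code
    path: echo.py
  inputs:
    input_text: ${inputs.text}
environment:
  python_requirements_txt: requirements.txt
```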
## Examples
Flow examples are available in the [GitHub repository](https://github.com/microsoft/promptflow/tree/main/examples/flows).
- [basic](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/basic)
- [web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification)
- [basic-chat](https://github.com/microsoft/promptflow/tree/main/examples/flows/chat/basic-chat)
- [chat-with-pdf](https://github.com/microsoft/promptflow/tree/main/examples/flows/chat/chat-with-pdf)
- [eval-basic](https://github.com/microsoft/promptflow/tree/main/examples/flows/evaluation/eval-basic)
# Reference
**Current stable version:**
- [promptflow](https://pypi.org/project/promptflow):
[![PyPI version](https://badge.fury.io/py/promptflow.svg)](https://badge.fury.io/py/promptflow)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/promptflow)](https://pypi.org/project/promptflow/)
- [promptflow-tools](https://pypi.org/project/promptflow-tools/):
[![PyPI version](https://badge.fury.io/py/promptflow-tools.svg)](https://badge.fury.io/py/promptflow-tools)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/promptflow-tools)](https://pypi.org/project/promptflow-tools/)
```{toctree}
:caption: Command Line Interface
:maxdepth: 1
pf-command-reference.md
pfazure-command-reference.md
```
```{toctree}
:caption: Python Library Reference
:maxdepth: 4
python-library-reference/promptflow
```
```{toctree}
:caption: Tool Reference
:maxdepth: 1
tools-reference/llm-tool
tools-reference/prompt-tool
tools-reference/python-tool
tools-reference/serp-api-tool
tools-reference/faiss_index_lookup_tool
tools-reference/vector_db_lookup_tool
tools-reference/embedding_tool
tools-reference/open_model_llm_tool
tools-reference/openai-gpt-4v-tool
tools-reference/contentsafety_text_tool
tools-reference/aoai-gpt4-turbo-vision
```
```{toctree}
:caption: YAML Schema
:maxdepth: 1
flow-yaml-schema-reference.md
run-yaml-schema-reference.md
```
# Run YAML Schema
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../how-to-guides/faq.md#stable-vs-experimental).
:::
The source JSON schema can be found at [Run.schema.json](https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json)
## YAML syntax
| Key | Type | Description |
|-------------------------|---------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `$schema` | string | The YAML schema. If you use the prompt flow VS Code extension to author the YAML file, including $schema at the top of your file enables you to invoke schema and resource completions. |
| `name` | string | The name of the run. |
| `flow` | string | Path of the flow directory. |
| `description` | string | Description of the run. |
| `display_name` | string | Display name of the run. |
| `runtime` | string | The runtime for the run. Only supported for cloud run. |
| `data` | string | Input data for the run. A local path or a remote URI (starting with `azureml:` or a public URL) is supported. Note: remote URIs are only supported for cloud runs. |
| `run` | string | Referenced flow run name. For example, you can run an evaluation flow against an existing run. |
| `column_mapping` | object | Inputs column mapping, use `${data.xx}` to refer to data columns, use `${run.inputs.xx}` to refer to referenced run's data columns, and `${run.outputs.xx}` to refer to run outputs columns. |
| `connections` | object | Overwrite node-level connections with the provided values. Example: `--connections node1.connection=test_llm_connection node1.deployment_name=gpt-35-turbo` |
| `environment_variables` | object/string | Environment variables to set by specifying a property path and value. Example: `{"key1": "${my_connection.api_key}"}`. References to connection keys will be resolved to the actual values, and all environment variables specified will be set into `os.environ`. |
| `properties` | object | Dictionary of properties of the run. |
| `tags` | object | Dictionary of tags of the run. |
| `resources` | object | Dictionary of resources used for automatic runtime. Only supported for cloud run. See [Resources Schema](#resources-schema) for the set of configurable properties. |
| `variant` | string | The variant for the run. |
| `status` | string | The status of the run. Only available when getting an existing run; it won't take effect if set when creating a run. |
### Resources Schema
| Key | Type | Description |
|-------------------------------------|---------|-------------------------------------------------------------|
| `instance_type` | string | The instance type for automatic runtime of the run. |
| `idle_time_before_shutdown_minutes` | integer | The idle time before automatic runtime shutdown in minutes. |
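To make the schema concrete, below is a minimal run specification sketch; the flow path, data path, connection name (`my_connection`), and tag values are placeholders to adapt to your own project:
```yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
name: my_first_run
display_name: My first run
flow: ../web-classification
data: ../web-classification/data.jsonl
column_mapping:
  url: ${data.url}
environment_variables:
  EXAMPLE_ENV_VAR: ${my_connection.api_key}
tags:
  purpose: demo
```
A file like this can then be submitted with `pf run create -f <path-to-run-yaml>`.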
## Examples
Run examples are available in the [GitHub repository](https://github.com/microsoft/promptflow/tree/main/examples/flows).
- [basic](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/basic/run.yml)
- [web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification/run.yml)
- [flow-with-additional-includes](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/flow-with-additional-includes/run.yml)
<!-- docs/reference/pf-command-reference.md -->
# pf
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../how-to-guides/faq.md#stable-vs-experimental).
:::
Manage prompt flow resources with the prompt flow CLI.
| Command | Description |
|---------------------------------|---------------------------------|
| [pf flow](#pf-flow) | Manage flows. |
| [pf connection](#pf-connection) | Manage connections. |
| [pf run](#pf-run) | Manage runs. |
| [pf tool](#pf-tool) | Init or list tools. |
| [pf config](#pf-config) | Manage config for current user. |
| [pf upgrade](#pf-upgrade) | Upgrade prompt flow CLI. |
## pf flow
Manage prompt flow flows.
| Command | Description |
| --- | --- |
| [pf flow init](#pf-flow-init) | Initialize a prompt flow directory. |
| [pf flow test](#pf-flow-test) | Test the prompt flow or flow node. |
| [pf flow validate](#pf-flow-validate) | Validate a flow and generate `flow.tools.json` for it. |
| [pf flow build](#pf-flow-build) | Build a flow for further sharing or deployment. |
| [pf flow serve](#pf-flow-serve) | Serve a flow as an endpoint. |
### pf flow init
Initialize a prompt flow directory.
```bash
pf flow init [--flow]
[--entry]
[--function]
[--prompt-template]
[--type]
[--yes]
```
#### Examples
Create a flow folder with code, prompts and YAML specification of the flow.
```bash
pf flow init --flow <path-to-flow-directory>
```
Create an evaluation prompt flow
```bash
pf flow init --flow <path-to-flow-directory> --type evaluation
```
Create a flow in an existing folder
```bash
pf flow init --flow <path-to-existing-folder> --entry <entry.py> --function <function-name> --prompt-template <path-to-prompt-template.md>
```
#### Optional Parameters
`--flow`
The flow name to create.
`--entry`
The entry file name.
`--function`
The function name in entry file.
`--prompt-template`
The prompt template parameter and assignment.
`--type`
The initialized flow type.
accepted values: standard, evaluation, chat
`--yes --assume-yes -y`
Automatic yes to all prompts; assume 'yes' as answer to all prompts and run non-interactively.
### pf flow test
Test the prompt flow or flow node.
```bash
pf flow test --flow
[--inputs]
[--node]
[--variant]
[--debug]
[--interactive]
[--verbose]
```
#### Examples
Test the flow.
```bash
pf flow test --flow <path-to-flow-directory>
```
Test the flow with the specified inputs.
```bash
pf flow test --flow <path-to-flow-directory> --inputs data_key1=data_val1 data_key2=data_val2
```
Test the flow with specified variant node.
```bash
pf flow test --flow <path-to-flow-directory> --variant '${node_name.variant_name}'
```
Test the single node in the flow.
```bash
pf flow test --flow <path-to-flow-directory> --node <node_name>
```
Debug the single node in the flow.
```bash
pf flow test --flow <path-to-flow-directory> --node <node_name> --debug
```
Chat in the flow.
```bash
pf flow test --flow <path-to-flow-directory> --node <node_name> --interactive
```
#### Required Parameter
`--flow`
The flow directory to test.
#### Optional Parameters
`--inputs`
Input data for the flow. Example: --inputs data1=data1_val data2=data2_val
`--node`
The name of the node in the flow to be tested.
`--variant`
Node & variant name in format of ${node_name.variant_name}.
`--debug`
Debug the single node in the flow.
`--interactive`
Start an interactive chat session for a chat flow.
`--verbose`
Displays the output for each step in the chat flow.
### pf flow validate
Validate the prompt flow and generate a `flow.tools.json` under `.promptflow`. This file is required when using a flow as a component in an Azure ML pipeline.
```bash
pf flow validate --source
[--debug]
[--verbose]
```
#### Examples
Validate the flow.
```bash
pf flow validate --source <path-to-flow>
```
#### Required Parameter
`--source`
The flow source to validate.
### pf flow build
Build a flow for further sharing or deployment.
```bash
pf flow build --source
--output
--format
[--variant]
[--verbose]
[--debug]
```
#### Examples
Build a flow in docker format, which can be built into a Docker image via `docker build`.
```bash
pf flow build --source <path-to-flow> --output <output-path> --format docker
```
Build a flow in docker format with a specific variant.
```bash
pf flow build --source <path-to-flow> --output <output-path> --format docker --variant '${node_name.variant_name}'
```
#### Required Parameter
`--source`
The flow or run source to be used.
`--output`
The folder to output the built flow to. It must be empty or not exist.
`--format`
The format to build the flow into.
#### Optional Parameters
`--variant`
Node & variant name in format of ${node_name.variant_name}.
`--verbose`
Show more details for each step during build.
`--debug`
Show debug information during build.
### pf flow serve
Serve a flow as an endpoint.
```bash
pf flow serve --source
[--port]
[--host]
[--environment-variables]
[--verbose]
[--debug]
[--skip-open-browser]
```
#### Examples
Serve a flow as an endpoint.
```bash
pf flow serve --source <path-to-flow>
```
Serve a flow as an endpoint with a specific port and host.
```bash
pf flow serve --source <path-to-flow> --port <port> --host <host> --environment-variables key1="`${my_connection.api_key}`" key2="value2"
```
#### Required Parameter
`--source`
The flow or run source to be used.
#### Optional Parameters
`--port`
The port on which the endpoint runs.
`--host`
The host of the endpoint.
`--environment-variables`
Environment variables to set by specifying a property path and value. Example: --environment-variable key1="\`${my_connection.api_key}\`" key2="value2". The value reference to connection keys will be resolved to the actual value, and all environment variables specified will be set into `os.environ`.
`--verbose`
Show more details for each step during serve.
`--debug`
Show debug information during serve.
`--skip-open-browser`
Skip opening the browser after the endpoint starts. This is a store-true flag.
## pf connection
Manage prompt flow connections.
| Command | Description |
| --- | --- |
| [pf connection create](#pf-connection-create) | Create a connection. |
| [pf connection update](#pf-connection-update) | Update a connection. |
| [pf connection show](#pf-connection-show) | Show details of a connection. |
| [pf connection list](#pf-connection-list) | List all connections. |
| [pf connection delete](#pf-connection-delete) | Delete a connection. |
### pf connection create
Create a connection.
```bash
pf connection create --file
[--name]
[--set]
```
#### Examples
Create a connection with a YAML file.
```bash
pf connection create -f <yaml-filename>
```
Create a connection with a YAML file and an override.
```bash
pf connection create -f <yaml-filename> --set api_key="<api-key>"
```
Create a custom connection with a .env file; note that overrides specified by `--set` will be ignored.
```bash
pf connection create -f .env --name <name>
```
#### Required Parameter
`--file -f`
Local path to the YAML file containing the prompt flow connection specification.
#### Optional Parameters
`--name -n`
Name of the connection.
`--set`
Update an object by specifying a property path and value to set. Example: --set property1.property2=<value>.
### pf connection update
Update a connection.
```bash
pf connection update --name
[--set]
```
#### Example
Update a connection.
```bash
pf connection update -n <name> --set api_key="<api-key>"
```
#### Required Parameter
`--name -n`
Name of the connection.
#### Optional Parameter
`--set`
Update an object by specifying a property path and value to set. Example: --set property1.property2=<value>.
### pf connection show
Show details of a connection.
```bash
pf connection show --name
```
#### Required Parameter
`--name -n`
Name of the connection.
### pf connection list
List all connections.
```bash
pf connection list
```
### pf connection delete
Delete a connection.
```bash
pf connection delete --name
```
#### Required Parameter
`--name -n`
Name of the connection.
## pf run
Manage prompt flow runs.
| Command | Description |
| --- | --- |
| [pf run create](#pf-run-create) | Create a run. |
| [pf run update](#pf-run-update) | Update a run's metadata, including display name, description and tags. |
| [pf run stream](#pf-run-stream) | Stream run logs to the console. |
| [pf run list](#pf-run-list) | List runs. |
| [pf run show](#pf-run-show) | Show details for a run. |
| [pf run show-details](#pf-run-show-details) | Preview a run's input(s) and output(s). |
| [pf run show-metrics](#pf-run-show-metrics) | Print run metrics to the console. |
| [pf run visualize](#pf-run-visualize) | Visualize a run. |
| [pf run archive](#pf-run-archive) | Archive a run. |
| [pf run restore](#pf-run-restore) | Restore an archived run. |
### pf run create
Create a run.
```bash
pf run create [--file]
[--flow]
[--data]
[--column-mapping]
[--run]
[--variant]
[--stream]
[--environment-variables]
[--connections]
[--set]
[--source]
```
#### Examples
Create a run with a YAML file.
```bash
pf run create -f <yaml-filename>
```
Create a run with a YAML file, overriding the data path in the YAML file.
```bash
pf run create -f <yaml-filename> --data <path-to-new-data-file-relative-to-yaml-file>
```
Create a run from a flow directory and reference an existing run.
```bash
pf run create --flow <path-to-flow-directory> --data <path-to-data-file> --column-mapping groundtruth='${data.answer}' prediction='${run.outputs.category}' --run <run-name> --variant '${summarize_text_content.variant_0}' --stream
```
Create a run from an existing run record folder.
```bash
pf run create --source <path-to-run-folder>
```
#### Optional Parameters
`--file -f`
Local path to the YAML file containing the prompt flow run specification; can be overwritten by other parameters. Reference [here](https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json) for YAML schema.
`--flow`
Local path to the flow directory. If --file is provided, this path should be relative to the file.
`--data`
Local path to the data file. If --file is provided, this path should be relative to the file.
`--column-mapping`
Inputs column mapping, use `${data.xx}` to refer to data columns, use `${run.inputs.xx}` to refer to referenced run's data columns, and `${run.outputs.xx}` to refer to run outputs columns.
`--run`
Referenced flow run name. For example, you can run an evaluation flow against an existing run: `pf run create --flow evaluation_flow_dir --run existing_bulk_run`.
`--variant`
Node & variant name in format of `${node_name.variant_name}`.
`--stream -s`
Indicates whether to stream the run's logs to the console.
default value: False
`--environment-variables`
Environment variables to set by specifying a property path and value. Example:
`--environment-variable key1='${my_connection.api_key}' key2='value2'`. References to
connection keys will be resolved to the actual values, and all environment variables
specified will be set into os.environ.
`--connections`
Overwrite node level connections with provided value.
Example: `--connections node1.connection=test_llm_connection node1.deployment_name=gpt-35-turbo`
`--set`
Update an object by specifying a property path and value to set.
Example: `--set property1.property2=<value>`.
`--source`
Local path to the existing run record folder.
### pf run update
Update a run's metadata, including display name, description and tags.
```bash
pf run update --name
[--set]
```
#### Example
Update a run
```bash
pf run update -n <name> --set display_name="<display-name>" description="<description>" tags.key="value"
```
#### Required Parameter
`--name -n`
Name of the run.
#### Optional Parameter
`--set`
Update an object by specifying a property path and value to set. Example: --set property1.property2=<value>.
### pf run stream
Stream run logs to the console.
```bash
pf run stream --name
```
#### Required Parameter
`--name -n`
Name of the run.
### pf run list
List runs.
```bash
pf run list [--all-results]
[--archived-only]
[--include-archived]
[--max-results]
```
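#### Examples
List up to 10 runs, including archived ones (both flags are documented below):
```bash
pf run list --max-results 10 --include-archived
```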
#### Optional Parameters
`--all-results`
Returns all results.
default value: False
`--archived-only`
List archived runs only.
default value: False
`--include-archived`
List archived runs and active runs.
default value: False
`--max-results -r`
Max number of results to return. Default is 50.
default value: 50
### pf run show
Show details for a run.
```bash
pf run show --name
```
#### Required Parameter
`--name -n`
Name of the run.
### pf run show-details
Preview a run's input(s) and output(s).
```bash
pf run show-details --name
```
#### Required Parameter
`--name -n`
Name of the run.
### pf run show-metrics
Print run metrics to the console.
```bash
pf run show-metrics --name
```
#### Required Parameter
`--name -n`
Name of the run.
### pf run visualize
Visualize a run in the browser.
```bash
pf run visualize --names
```
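#### Examples
Visualize a single run:
```bash
pf run visualize -n <run-name>
```
Visualize multiple runs in one report by passing comma-separated names:
```bash
pf run visualize --names "<run-name-1>,<run-name-2>"
```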
#### Required Parameter
`--names -n`
Names of the runs, comma-separated.
### pf run archive
Archive a run.
```bash
pf run archive --name
```
#### Required Parameter
`--name -n`
Name of the run.
### pf run restore
Restore an archived run.
```bash
pf run restore --name
```
#### Required Parameter
`--name -n`
Name of the run.
## pf tool
Manage promptflow tools.
| Command | Description |
| --- | --- |
| [pf tool init](#pf-tool-init) | Initialize a tool directory. |
| [pf tool list](#pf-tool-list) | List all tools in the environment. |
| [pf tool validate](#pf-tool-validate) | Validate tools. |
### pf tool init
Initialize a tool directory.
```bash
pf tool init [--package]
[--tool]
[--set]
```
#### Examples
Creating a package tool from scratch.
```bash
pf tool init --package <package-name> --tool <tool-name>
```
Creating a package tool with extra info.
```bash
pf tool init --package <package-name> --tool <tool-name> --set icon=<icon-path> category=<tool-category> tags="{'<key>': '<value>'}"
```
Creating a python tool from scratch.
```bash
pf tool init --tool <tool-name>
```
#### Optional Parameters
`--package`
The package name to create.
`--tool`
The tool name to create.
`--set`
Set extra information about the tool, like category, icon and tags. Example: --set <key>=<value>.
### pf tool list
List all tools in the environment.
```bash
pf tool list [--flow]
```
#### Examples
List all package tools in the environment.
```bash
pf tool list
```
List all package tools and code tools in the flow.
```bash
pf tool list --flow <path-to-flow-directory>
```
#### Optional Parameters
`--flow`
The flow directory.
### pf tool validate
Validate tools.
```bash
pf tool validate --source
```
#### Examples
Validate single function tool.
```bash
pf tool validate --source <package-name>.<module-name>.<tool-function>
```
Validate all tools in a package tool.
```bash
pf tool validate --source <package-name>
```
Validate tools in a python script.
```bash
pf tool validate --source <path-to-tool-script>
```
#### Required Parameter
`--source`
The tool source to be used.
## pf config
Manage config for current user.
| Command | Description |
|-----------------------------------|--------------------------------------------|
| [pf config set](#pf-config-set) | Set prompt flow configs for current user. |
| [pf config show](#pf-config-show) | Show prompt flow configs for current user. |
### pf config set
Set prompt flow configs for current user, configs will be stored at ~/.promptflow/pf.yaml.
```bash
pf config set
```
#### Examples
Set the connection provider to an Azure ML workspace for the current user.
```bash
pf config set connection.provider="azureml://subscriptions/<your-subscription>/resourceGroups/<your-resourcegroup>/providers/Microsoft.MachineLearningServices/workspaces/<your-workspace>"
```
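Switch the connection provider back to local (`local` is the default provider value), assuming you no longer want the workspace provider:
```bash
pf config set connection.provider=local
```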
### pf config show
Show prompt flow configs for current user.
```bash
pf config show
```
#### Examples
Show prompt flow configs for the current user.
```bash
pf config show
```
## pf upgrade
Upgrade prompt flow CLI.
| Command | Description |
|-----------------------------|-----------------------------|
| [pf upgrade](#pf-upgrade) | Upgrade prompt flow CLI. |
### Examples
Upgrade the prompt flow CLI without prompting, running non-interactively.
```bash
pf upgrade --yes
```
<!-- docs/reference/pfazure-command-reference.md -->
# pfazure
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../how-to-guides/faq.md#stable-vs-experimental).
:::
Manage prompt flow resources on Azure with the prompt flow CLI.
| Command | Description |
| --- | --- |
| [pfazure flow](#pfazure-flow) | Manage flows. |
| [pfazure run](#pfazure-run) | Manage runs. |
## pfazure flow
Manage flows.
| Command | Description |
| --- | --- |
| [pfazure flow create](#pfazure-flow-create) | Create a flow. |
| [pfazure flow list](#pfazure-flow-list) | List flows in a workspace. |
### pfazure flow create
Create a flow in Azure AI from a local flow folder.
```bash
pfazure flow create [--flow]
[--set]
[--subscription]
[--resource-group]
[--workspace-name]
```
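#### Examples
Create a flow with a custom display name and description, using the default workspace from `az configure` (the angle-bracket values are placeholders):
```bash
pfazure flow create --flow <path-to-flow-directory> --set display_name="<display-name>" description="<description>"
```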
#### Parameters
`--flow`
Local path to the flow directory.
`--set`
Update an object by specifying a property path and value to set.
- `display_name`: Flow display name that will be created in the remote workspace. Defaults to the flow folder name + timestamp if not specified.
- `type`: Flow type. Defaults to "standard" if not specified. Available types are: "standard", "evaluation", "chat".
- `description`: Flow description. e.g. "--set description=\<description\>."
- `tags`: Flow tags. e.g. "--set tags.key1=value1 tags.key2=value2."
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure flow list
List remote flows on Azure AI.
```bash
pfazure flow list [--max-results]
[--include-others]
[--type]
[--output]
[--archived-only]
[--include-archived]
[--subscription]
[--resource-group]
[--workspace-name]
[--output]
```
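#### Examples
List up to 20 flows, including those created by other users, rendered as a table (all flags are documented below):
```bash
pfazure flow list --max-results 20 --include-others --output table
```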
#### Parameters
`--max-results -r`
Max number of results to return. Default is 50, upper bound is 100.
`--include-others`
Include flows created by other owners. By default only flows created by the current user are returned.
`--type`
Filter flows by type. Available types are: "standard", "evaluation", "chat".
`--archived-only`
List archived flows only.
`--include-archived`
List archived flows and active flows.
`--output -o`
Output format. Allowed values: `json`, `table`. Default: `json`.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
## pfazure run
Manage prompt flow runs.
| Command | Description |
| --- | --- |
| [pfazure run create](#pfazure-run-create) | Create a run. |
| [pfazure run list](#pfazure-run-list) | List runs in a workspace. |
| [pfazure run show](#pfazure-run-show) | Show details for a run. |
| [pfazure run stream](#pfazure-run-stream) | Stream run logs to the console. |
| [pfazure run show-details](#pfazure-run-show-details) | Show a run's details. |
| [pfazure run show-metrics](#pfazure-run-show-metrics) | Show run metrics. |
| [pfazure run visualize](#pfazure-run-visualize) | Visualize a run. |
| [pfazure run archive](#pfazure-run-archive) | Archive a run. |
| [pfazure run restore](#pfazure-run-restore) | Restore a run. |
| [pfazure run update](#pfazure-run-update) | Update a run. |
| [pfazure run download](#pfazure-run-download) | Download a run. |
### pfazure run create
Create a run.
```bash
pfazure run create [--file]
[--flow]
[--data]
[--column-mapping]
[--run]
[--variant]
[--stream]
[--environment-variables]
[--connections]
[--set]
[--subscription]
[--resource-group]
[--workspace-name]
```
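#### Examples
Create a run from a local flow and data file, streaming logs to the console (the column mapping assumes the data has a `url` column, as in the web-classification sample):
```bash
pfazure run create --flow <path-to-flow-directory> --data <path-to-data-file> --column-mapping url='${data.url}' --stream
```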
#### Parameters
`--file -f`
Local path to the YAML file containing the prompt flow run specification; can be overwritten by other parameters. Reference [here](https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json) for YAML schema.
`--flow`
Local path to the flow directory.
`--data`
Local path to the data file, or remote data, e.g. `azureml:name:version`.
`--column-mapping`
Inputs column mapping, use `${data.xx}` to refer to data columns, use `${run.inputs.xx}` to refer to referenced run's data columns, and `${run.outputs.xx}` to refer to run outputs columns.
`--run`
Referenced flow run name. For example, you can run an evaluation flow against an existing run: `pfazure run create --flow evaluation_flow_dir --run existing_bulk_run --column-mapping url='${data.url}'`.
`--variant`
Node & variant name in format of `${node_name.variant_name}`.
`--stream -s`
Indicates whether to stream the run's logs to the console.
default value: False
`--environment-variables`
Environment variables to set by specifying a property path and value. Example:
`--environment-variable key1='${my_connection.api_key}' key2='value2'`. References to
connection keys will be resolved to the actual values, and all environment variables
specified will be set into os.environ.
`--connections`
Overwrite node level connections with provided value.
Example: `--connections node1.connection=test_llm_connection node1.deployment_name=gpt-35-turbo`
`--set`
Update an object by specifying a property path and value to set.
Example: `--set property1.property2=<value>`.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure run list
List runs in a workspace.
```bash
pfazure run list [--archived-only]
[--include-archived]
[--max-results]
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--archived-only`
List archived runs only.
default value: False
`--include-archived`
List archived runs and active runs.
default value: False
`--max-results -r`
Max number of results to return. Default is 50, upper bound is 100.
default value: 50
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure run show
Show details for a run.
```bash
pfazure run show --name
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--name -n`
Name of the run.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure run stream
Stream run logs to the console.
```bash
pfazure run stream --name
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--name -n`
Name of the run.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure run show-details
Show a run's details.
```bash
pfazure run show-details --name
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--name -n`
Name of the run.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure run show-metrics
Show run metrics.
```bash
pfazure run show-metrics --name
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--name -n`
Name of the run.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure run visualize
Visualize a run.
```bash
pfazure run visualize --name
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--name -n`
Name of the run.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure run archive
Archive a run.
```bash
pfazure run archive --name
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--name -n`
Name of the run.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure run restore
Restore a run.
```bash
pfazure run restore --name
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--name -n`
Name of the run.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure run update
Update a run's metadata, such as `display name`, `description` and `tags`.
```bash
pfazure run update --name
[--set display_name="<value>" description="<value>" tags.key="<value>"]
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Examples
Set `display name`, `description` and `tags`:
```bash
pfazure run update --name <run_name> --set display_name="<value>" description="<value>" tags.key="<value>"
```
#### Parameters
`--name -n`
Name of the run.
`--set`
Set meta information of the run, like `display_name`, `description` or `tags`. Example: --set <key>=<value>.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure run download
Download a run's metadata, such as `input`, `output`, `snapshot` and `artifact`. After the download finishes, you can use `pf run create --source <run-info-local-folder>` to register this run as a local run record, then use commands like `pf run show/visualize` to inspect it just like a run created from a local flow.
```bash
pfazure run download --name
[--output]
[--overwrite]
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Examples
Download a run data to local:
```bash
pfazure run download --name <name> --output <output-folder-path>
```
#### Parameters
`--name -n`
Name of the run.
`--output -o`
Output folder path to store the downloaded run data. Defaults to `~/.promptflow/.runs` if not specified.
`--overwrite`
Overwrite the existing run data if the output folder already exists. Defaults to `False` if not specified.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
<!-- docs/reference/python-library-reference/promptflow.md -->
# PLACEHOLDER
<!-- docs/reference/tools-reference/llm-tool.md -->
# LLM
## Introduction
Prompt flow LLM tool enables you to leverage widely used large language models like [OpenAI](https://platform.openai.com/) or [Azure OpenAI (AOAI)](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/overview) for natural language processing.
Prompt flow provides a few different LLM APIs:
- **[Completion](https://platform.openai.com/docs/api-reference/completions)**: OpenAI's completion models generate text based on provided prompts.
- **[Chat](https://platform.openai.com/docs/api-reference/chat)**: OpenAI's chat models facilitate interactive conversations with text-based inputs and responses.
> [!NOTE]
> The `embedding` option has been removed from the LLM tool API setting. You can use the embedding API with the [Embedding tool](https://github.com/microsoft/promptflow/blob/main/docs/reference/tools-reference/embedding_tool.md).
## Prerequisite
Create OpenAI resources:
- **OpenAI**
Sign up for an account on the [OpenAI website](https://openai.com/)
Log in and [find your personal API key](https://platform.openai.com/account/api-keys)
- **Azure OpenAI (AOAI)**
Create Azure OpenAI resources with [instruction](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal)
## **Connections**
Set up connections to provisioned resources in prompt flow.
| Type | Name | API KEY | API Type | API Version |
|-------------|----------|----------|----------|-------------|
| OpenAI | Required | Required | - | - |
| AzureOpenAI | Required | Required | Required | Required |
## Inputs
### Text Completion
| Name | Type | Description | Required |
|------------------------|-------------|-----------------------------------------------------------------------------------------|----------|
| prompt | string | text prompt that the language model will complete | Yes |
| model, deployment_name | string | the language model to use | Yes |
| max\_tokens | integer | the maximum number of tokens to generate in the completion. Default is 16. | No |
| temperature | float | the randomness of the generated text. Default is 1. | No |
| stop | list | the stopping sequence for the generated text. Default is null. | No |
| suffix | string | text appended to the end of the completion | No |
| top_p | float | the probability of using the top choice from the generated tokens. Default is 1. | No |
| logprobs | integer | the number of log probabilities to generate. Default is null. | No |
| echo | boolean | value that indicates whether to echo back the prompt in the response. Default is false. | No |
| presence\_penalty | float | value that controls the model's behavior with regards to repeating phrases. Default is 0. | No |
| frequency\_penalty | float | value that controls the model's behavior with regards to generating rare phrases. Default is 0. | No |
| best\_of | integer | the number of best completions to generate. Default is 1. | No |
| logit\_bias | dictionary | the logit bias for the language model. Default is empty dictionary. | No |
### Chat
| Name | Type | Description | Required |
|------------------------|-------------|------------------------------------------------------------------------------------------------|----------|
| prompt | string | text prompt that the language model will response | Yes |
| model, deployment_name | string | the language model to use | Yes |
| max\_tokens | integer | the maximum number of tokens to generate in the response. Default is inf. | No |
| temperature | float | the randomness of the generated text. Default is 1. | No |
| stop | list | the stopping sequence for the generated text. Default is null. | No |
| top_p | float | the probability of using the top choice from the generated tokens. Default is 1. | No |
| presence\_penalty | float | value that controls the model's behavior with regards to repeating phrases. Default is 0. | No |
| frequency\_penalty | float | value that controls the model's behavior with regards to generating rare phrases. Default is 0.| No |
| logit\_bias | dictionary | the logit bias for the language model. Default is empty dictionary. | No |
| function\_call | object | value that controls which function is called by the model. Default is null. | No |
| functions | list | a list of functions the model may generate JSON inputs for. Default is null. | No |
| response_format | object | an object specifying the format that the model must output. Default is null. | No |
## Outputs
| API | Return Type | Description |
|------------|-------------|------------------------------------------|
| Completion | string | The text of one predicted completion |
| Chat | string | The text of one response of conversation |
## How to use LLM Tool?
1. Set up and select the connection to your OpenAI resource
2. Configure the LLM model API and its parameters
3. Prepare the prompt with [guidance](./prompt-tool.md#how-to-write-prompt); a minimal node sketch follows below
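For orientation, the sketch below shows how an LLM node might be wired up in `flow.dag.yaml`; the node name, prompt template path, deployment name, and connection name are placeholders, not fixed values:
```yaml
- name: answer_question
  type: llm
  source:
    type: code
    path: answer_question.jinja2   # placeholder prompt template
  inputs:
    deployment_name: gpt-35-turbo  # placeholder deployment
    temperature: 0.7
    max_tokens: 256
    question: ${inputs.question}
  connection: open_ai_connection   # placeholder connection name
  api: chat
```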
<!-- docs/reference/tools-reference/serp-api-tool.md -->
# SerpAPI
## Introduction
The SerpAPI tool is a Python tool that provides a wrapper to the [SerpAPI Google Search Engine Results API](https://serpapi.com/search-api) and the [SerpAPI Bing Search Engine Results API](https://serpapi.com/bing-search-api).
You can use the tool to retrieve search results from a number of different search engines, including Google and Bing, and specify a range of search parameters, such as the search query, location, device type, and more.
## Prerequisite
Sign up at the [SerpAPI homepage](https://serpapi.com/)
## Connection
A connection is the model used to establish the connection with SerpAPI.
| Type | Name | API KEY |
|-------------|----------|----------|
| Serp | Required | Required |
_The **API Key** can be found on the SerpAPI account dashboard._
## Inputs
The **serp api** tool supports the following parameters:
| Name | Type | Description | Required |
|----------|---------|---------------------------------------------------------------|----------|
| query | string | The search query to be executed. | Yes |
| engine | string | The search engine to use for the search. Default is 'google'. | Yes |
| num      | integer | The number of search results to return. Default is 10.        | No       |
| location | string | The geographic location to execute the search from. | No |
| safe | string | The safe search mode to use for the search. Default is 'off'. | No |
## Outputs
The JSON representation of the SerpAPI query response.
| Engine | Return Type | Output |
|----------|-------------|-------------------------------------------------------|
| google | json | [Sample](https://serpapi.com/search-api#api-examples) |
| bing | json | [Sample](https://serpapi.com/bing-search-api) |
<!-- docs/reference/tools-reference/openai-gpt-4v-tool.md -->
# OpenAI GPT-4V
## Introduction
OpenAI GPT-4V tool enables you to leverage OpenAI's GPT-4 with vision, also referred to as GPT-4V or gpt-4-vision-preview in the API, to take images as input and answer questions about them.
## Prerequisites
- Create OpenAI resources
Sign up for an account on the [OpenAI website](https://openai.com/)
Log in and [find your personal API key](https://platform.openai.com/account/api-keys)
- Get Access to GPT-4 API
To use GPT-4 with vision, you need access to the GPT-4 API. Learn more about [how to get access to the GPT-4 API](https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4)
## Connection
Set up connections to provisioned resources in prompt flow.
| Type | Name | API KEY |
|-------------|----------|----------|
| OpenAI | Required | Required |
## Inputs
| Name | Type | Description | Required |
|------------------------|-------------|------------------------------------------------------------------------------------------------|----------|
| connection | OpenAI | the OpenAI connection to be used in the tool | Yes |
| model | string | the language model to use, currently only support gpt-4-vision-preview | Yes |
| prompt                 | string      | The text prompt that the language model will use to generate its response.                      | Yes      |
| max\_tokens | integer | the maximum number of tokens to generate in the response. Default is 512. | No |
| temperature | float | the randomness of the generated text. Default is 1. | No |
| stop | list | the stopping sequence for the generated text. Default is null. | No |
| top_p | float | the probability of using the top choice from the generated tokens. Default is 1. | No |
| presence\_penalty | float | value that controls the model's behavior with regards to repeating phrases. Default is 0. | No |
| frequency\_penalty | float | value that controls the model's behavior with regards to generating rare phrases. Default is 0. | No |
## Outputs
| Return Type | Description |
|-------------|------------------------------------------|
| string | The text of one response of conversation |
<!-- docs/reference/tools-reference/embedding_tool.md -->
# Embedding
## Introduction
OpenAI's embedding models convert text into dense vector representations for various NLP tasks. See the [OpenAI Embeddings API](https://platform.openai.com/docs/api-reference/embeddings) for more information.
## Prerequisite
Create OpenAI resources:
- **OpenAI**
Sign up for an account on the [OpenAI website](https://openai.com/)
Log in and [find your personal API key](https://platform.openai.com/account/api-keys)
- **Azure OpenAI (AOAI)**
Create Azure OpenAI resources with [instruction](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal)
## **Connections**
Set up connections to provisioned resources for the embedding tool.
| Type | Name | API KEY | API Type | API Version |
|-------------|----------|----------|----------|-------------|
| OpenAI | Required | Required | - | - |
| AzureOpenAI | Required | Required | Required | Required |
## Inputs
| Name | Type | Description | Required |
|------------------------|-------------|-----------------------------------------------------------------------|----------|
| input | string | the input text to embed | Yes |
| connection | string | the connection for the embedding tool use to provide resources | Yes |
| model/deployment_name  | string      | Instance of the text-embedding engine to use. Fill in the model name if you use an OpenAI connection, or the deployment name if you use an Azure OpenAI connection. | Yes |
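As a rough sketch, an embedding node in `flow.dag.yaml` might look like the following; the packaged tool ID, connection name, and deployment name below are assumptions to adapt to your setup:
```yaml
- name: embed_question
  type: python
  source:
    type: package
    tool: promptflow.tools.embedding.embedding  # assumed packaged tool ID
  inputs:
    connection: open_ai_connection              # placeholder connection name
    input: ${inputs.question}
    deployment_name: text-embedding-ada-002     # placeholder deployment
```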
## Outputs
| Return Type | Description |
|-------------|------------------------------------------|
| list | The vector representations for inputs |
The following is an example response returned by the embedding tool:
<details>
<summary>Output</summary>
```
[-0.005744616035372019,
-0.007096089422702789,
-0.00563855143263936,
-0.005272455979138613,
-0.02355326898396015,
0.03955197334289551,
-0.014260607771575451,
-0.011810848489403725,
-0.023170066997408867,
-0.014739611186087132,
...]
```
</details>
<!-- docs/reference/tools-reference/vector_db_lookup_tool.md -->
# Vector DB Lookup
Vector DB Lookup is a vector search tool that allows users to search for the top-k similar vectors in a vector database. This tool is a wrapper for multiple third-party vector databases. The list of currently supported databases is as follows.
| Name | Description |
| --- | --- |
| Azure Cognitive Search | Microsoft's cloud search service with built-in AI capabilities that enrich all types of information to help identify and explore relevant content at scale. |
| Qdrant | Qdrant is a vector similarity search engine that provides a production-ready service with a convenient API to store, search and manage points (i.e. vectors) with an additional payload. |
| Weaviate | Weaviate is an open source vector database that stores both objects and vectors. This allows for combining vector search with structured filtering. |
Support for more vector databases will be added to this tool over time.
## Requirements
- For AzureML users, the tool is installed in the default image; you can use the tool without extra installation.
- For local users,
`pip install promptflow-vectordb`
## Prerequisites
The tool searches data from a third-party vector database. To use it, you should create the resources in advance and establish a connection between the tool and the resource.
- **Azure Cognitive Search:**
- Create resource [Azure Cognitive Search](https://learn.microsoft.com/en-us/azure/search/search-create-service-portal).
- Add "Cognitive search" connection. Fill "API key" field with "Primary admin key" from "Keys" section of created resource, and fill "API base" field with the URL, the URL format is `https://{your_serive_name}.search.windows.net`.
- **Qdrant:**
- Follow the [installation](https://qdrant.tech/documentation/quick-start/) to deploy Qdrant to a self-maintained cloud server.
- Add "Qdrant" connection. Fill "API base" with your self-maintained cloud server address and fill "API key" field.
- **Weaviate:**
- Follow the [installation](https://weaviate.io/developers/weaviate/installation) to deploy Weaviate to a self-maintained instance.
- Add "Weaviate" connection. Fill "API base" with your self-maintained instance address and fill "API key" field.
## Inputs
The tool accepts the following inputs:
- **Azure Cognitive Search:**
| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| connection | CognitiveSearchConnection | The created connection for accessing to Cognitive Search endpoint. | Yes |
| index_name | string | The index name created in Cognitive Search resource. | Yes |
| text_field | string | The text field name. The returned text field will populate the text of output. | No |
| vector_field | string | The vector field name. The target vector is searched in this vector field. | Yes |
| search_params | dict | The search parameters. It's key-value pairs. Except for parameters in the tool input list mentioned above, additional search parameters can be formed into a JSON object as search_params. For example, use `{"select": ""}` as search_params to select the returned fields, use `{"search": ""}` to perform a [hybrid search](https://learn.microsoft.com/en-us/azure/search/search-get-started-vector#hybrid-search). | No |
| search_filters | dict | The search filters. It's key-value pairs, the input format is like `{"filter": ""}` | No |
| vector | list | The target vector to be queried, which can be generated by Embedding tool. | Yes |
| top_k | int | The count of top-scored entities to return. Default value is 3 | No |
- **Qdrant:**
| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| connection | QdrantConnection | The created connection for accessing to Qdrant server. | Yes |
| collection_name | string | The collection name created in self-maintained cloud server. | Yes |
| text_field | string | The text field name. The returned text field will populate the text of output. | No |
| search_params | dict | The search parameters can be formed into a JSON object as search_params. For example, use `{"params": {"hnsw_ef": 0, "exact": false, "quantization": null}}` to set search_params. | No |
| search_filters | dict | The search filters. It's key-value pairs, the input format is like `{"filter": {"should": [{"key": "", "match": {"value": ""}}]}}` | No |
| vector | list | The target vector to be queried, which can be generated by Embedding tool. | Yes |
| top_k | int | The count of top-scored entities to return. Default value is 3 | No |
- **Weaviate:**
| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| connection | WeaviateConnection | The created connection for accessing to Weaviate. | Yes |
| class_name | string | The class name. | Yes |
| text_field | string | The text field name. The returned text field will populate the text of output. | No |
| vector | list | The target vector to be queried, which can be generated by Embedding tool. | Yes |
| top_k | int | The count of top-scored entities to return. Default value is 3 | No |
## Outputs
The following is an example JSON response returned by the tool, which includes the top-k scored entities. The entities follow a generic vector search result schema provided by the promptflow-vectordb SDK.
- **Azure Cognitive Search:**
For Azure Cognitive Search, the following fields are populated:
| Field Name | Type | Description |
| ---- | ---- | ----------- |
| original_entity | dict | the original response json from search REST API|
| score | float | @search.score from the original entity, which evaluates the similarity between the entity and the query vector |
| text | string | text of the entity|
| vector | list | vector of the entity|
<details>
<summary>Output</summary>
```json
[
{
"metadata": null,
"original_entity": {
"@search.score": 0.5099789,
"id": "",
"your_text_filed_name": "sample text1",
"your_vector_filed_name": [-0.40517663431890405, 0.5856996257406859, -0.1593078462266455, -0.9776269170785785, -0.6145604369828972],
"your_additional_field_name": ""
},
"score": 0.5099789,
"text": "sample text1",
"vector": [-0.40517663431890405, 0.5856996257406859, -0.1593078462266455, -0.9776269170785785, -0.6145604369828972]
}
]
```
</details>
- **Qdrant:**
For Qdrant, the following fields are populated:
| Field Name | Type | Description |
| ---- | ---- | ----------- |
| original_entity | dict | the original response json from search REST API|
| metadata | dict | payload from the original entity|
| score | float | score from the original entity, which evaluates the similarity between the entity and the query vector|
| text | string | text of the payload|
| vector | list | vector of the entity|
<details>
<summary>Output</summary>
```json
[
{
"metadata": {
"text": "sample text1"
},
"original_entity": {
"id": 1,
"payload": {
"text": "sample text1"
},
"score": 1,
"vector": [0.18257418, 0.36514837, 0.5477226, 0.73029673],
"version": 0
},
"score": 1,
"text": "sample text1",
"vector": [0.18257418, 0.36514837, 0.5477226, 0.73029673]
}
]
```
</details>
- **Weaviate:**
For Weaviate, the following fields are populated:
| Field Name | Type | Description |
| ---- | ---- | ----------- |
| original_entity | dict | the original response json from search REST API|
| score | float | certainty from the original entity, which evaluates the similarity between the entity and the query vector|
| text | string | text in the original entity|
| vector | list | vector of the entity|
<details>
<summary>Output</summary>
```json
[
{
"metadata": null,
"original_entity": {
"_additional": {
"certainty": 1,
"distance": 0,
"vector": [
0.58,
0.59,
0.6,
0.61,
0.62
]
},
"text": "sample text1."
},
"score": 1,
"text": "sample text1.",
"vector": [
0.58,
0.59,
0.6,
0.61,
0.62
]
}
]
```
</details>
<!-- docs/reference/tools-reference/aoai-gpt4-turbo-vision.md -->
# Azure OpenAI GPT-4 Turbo with Vision
## Introduction
Azure OpenAI GPT-4 Turbo with Vision tool enables you to leverage your Azure OpenAI GPT-4 Turbo with Vision model deployment to analyze images and provide textual responses to questions about them.
## Prerequisites
- Create AzureOpenAI resources
Create Azure OpenAI resources with [instruction](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal)
- Create a GPT-4 Turbo with Vision deployment
Browse to [Azure OpenAI Studio](https://oai.azure.com/) and sign in with the credentials associated with your Azure OpenAI resource. During or after the sign-in workflow, select the appropriate directory, Azure subscription, and Azure OpenAI resource.
Under Management, select Deployments and create a GPT-4 Turbo with Vision deployment by selecting model name `gpt-4` and model version `vision-preview`.
## Connection
Set up connections to provisioned resources in prompt flow.
| Type | Name | API KEY | API Type | API Version |
|-------------|----------|----------|----------|-------------|
| AzureOpenAI | Required | Required | Required | Required |
## Inputs
| Name | Type | Description | Required |
|------------------------|-------------|------------------------------------------------------------------------------------------------|----------|
| connection | AzureOpenAI | the AzureOpenAI connection to be used in the tool | Yes |
| deployment\_name | string | the language model to use | Yes |
| prompt                 | string      | The text prompt that the language model will use to generate its response.                      | Yes      |
| max\_tokens | integer | the maximum number of tokens to generate in the response. Default is 512. | No |
| temperature | float | the randomness of the generated text. Default is 1. | No |
| stop | list | the stopping sequence for the generated text. Default is null. | No |
| top_p | float | the probability of using the top choice from the generated tokens. Default is 1. | No |
| presence\_penalty | float | value that controls the model's behavior with regards to repeating phrases. Default is 0. | No |
| frequency\_penalty | float | value that controls the model's behavior with regards to generating rare phrases. Default is 0. | No |
## Outputs
| Return Type | Description |
|-------------|------------------------------------------|
| string | The text of one response of conversation |
<!-- docs/reference/tools-reference/python-tool.md -->
# Python
## Introduction
The Python tool empowers users to provide customized code snippets as self-contained executable nodes in PromptFlow.
Users can easily create Python tools, edit code, and verify results.
## Inputs
| Name | Type | Description | Required |
|--------|--------|------------------------------------------------------|---------|
| Code | string | Python code snippet | Yes |
| Inputs | -      | List of tool function parameters and their assignments | -       |
### Types
| Type | Python example | Description |
|-----------------------------------------------------|---------------------------------|--------------------------------------------|
| int | param: int | Integer type |
| bool | param: bool | Boolean type |
| string | param: str | String type |
| double | param: float | Double type |
| list | param: list or param: List[T] | List type |
| object | param: dict or param: Dict[K, V] | Object type |
| [Connection](../../concepts/concept-connections.md) | param: CustomConnection | Connection type, will be handled specially |
Parameters with a `Connection` type annotation will be treated as connection inputs, which means:
- The Promptflow extension will show a selector to select the connection.
- During execution, promptflow will try to find the connection whose name matches the parameter value passed in.
Note that `Union[...]` type annotation is supported **ONLY** for connection type,
for example, `param: Union[CustomConnection, OpenAIConnection]`.
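A minimal sketch of such a union-typed connection parameter (the function name and body are illustrative only, not part of the promptflow API):
```python
from typing import Union

from promptflow import tool
from promptflow.connections import CustomConnection, OpenAIConnection


@tool
def search(query: str, conn: Union[CustomConnection, OpenAIConnection]) -> str:
    # At execution time, promptflow resolves the connection name passed in
    # the inputs to a connection object of one of the annotated types.
    if isinstance(conn, OpenAIConnection):
        return f"querying '{query}' with an OpenAI connection"
    return f"querying '{query}' with a custom connection"
```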
## Outputs
The return of the python tool function.
## How to write Python Tool?
### Guidelines
1. Python Tool Code should consist of complete Python code, including any necessary module imports.
2. Python Tool Code must contain a function decorated with @tool (the tool function), serving as the entry point for execution. The @tool decorator should be applied only once within the snippet.
_The sample below defines the python tool "my_python_tool", decorated with @tool._
3. Python tool function parameters must be assigned in the 'Inputs' section.
_The sample below defines the input "message" and assigns it "world"._
4. The Python tool function must return a value.
_The sample below returns a concatenated string._
### Code
The snippet below shows the basic structure of a tool function. Promptflow will read the function and extract inputs
from function parameters and type annotations.
```python
from promptflow import tool
from promptflow.connections import CustomConnection
# The inputs section will change based on the arguments of the tool function, after you save the code
# Adding type to arguments and return value will help the system show the types properly
# Please update the function name/signature per need
@tool
def my_python_tool(message: str, my_conn: CustomConnection) -> str:
my_conn_dict = dict(my_conn)
# Do some function call with my_conn_dict...
return 'hello ' + message
```
### Inputs
| Name | Type | Sample Value in Flow Yaml | Value passed to function|
|---------|--------|-------------------------| ------------------------|
| message | string | "world" | "world" |
| my_conn | CustomConnection | "my_conn" | CustomConnection object |
Promptflow will try to find the connection named 'my_conn' during execution time.
### Outputs
```python
"hello world"
```
### Keyword Arguments Support
Starting from version 1.0.0 of PromptFlow and version 1.4.0 of [Prompt flow for VS Code](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow),
we have introduced support for keyword arguments (kwargs) in the Python tool.
```python
from promptflow import tool
@tool
def print_test(normal_input: str, **kwargs):
for key, value in kwargs.items():
print(f"Key {key}'s value is {value}")
return len(kwargs)
```
When you add `kwargs` in your python tool like the code above, you can insert a variable number of inputs by using the `+Add input` button.
![Screenshot of the kwargs on VS Code Prompt Flow extension](../../media/reference/tools-reference/python_tool_kwargs.png)
<!-- docs/reference/tools-reference/prompt-tool.md -->
# Prompt
## Introduction
The Prompt Tool in PromptFlow offers a collection of textual templates that serve as a starting point for creating prompts.
These templates, based on the Jinja2 template engine, facilitate the definition of prompts. The tool proves useful
when prompt tuning is required prior to feeding the prompts into the large language model (LLM) in PromptFlow.
## Inputs
| Name | Type | Description | Required |
|--------------------|--------|----------------------------------------------------------|----------|
| prompt | string | The prompt template in Jinja | Yes |
| Inputs | - | List of variables of the prompt template and their assignments | - |
## Outputs
The prompt text rendered from the prompt template with the assigned input values.
## How to write Prompt?
1. Prepare jinja template. Learn more about [Jinja](https://jinja.palletsprojects.com/en/3.1.x/)
_In the example below, the prompt incorporates Jinja templating syntax to dynamically generate a welcome message and personalize it based on the user's name. It also presents a menu of options for the user to choose from. Depending on whether the user_name variable is provided, it either addresses the user by name or uses a generic greeting._
```jinja
Welcome to {{ website_name }}!
{% if user_name %}
Hello, {{ user_name }}!
{% else %}
Hello there!
{% endif %}
Please select an option from the menu below:
1. View your account
2. Update personal information
3. Browse available products
4. Contact customer support
```
2. Assign value for the variables.
_In the above example, two variables are automatically detected and listed in the '**Inputs**' section. Please assign values to them._
### Sample 1
Inputs
| Variable | Type | Sample Value |
|---------------|--------|--------------|
| website_name | string | "Microsoft" |
| user_name | string | "Jane" |
Outputs
```
Welcome to Microsoft! Hello, Jane! Please select an option from the menu below: 1. View your account 2. Update personal information 3. Browse available products 4. Contact customer support
```
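For intuition, Sample 1 can be reproduced outside Promptflow with the `jinja2` package; this is only a sketch, and the tool's exact Jinja configuration (e.g., whitespace handling) may differ:
```python
from jinja2 import Template

# The same template as above, condensed to control whitespace in the output.
template = Template(
    "Welcome to {{ website_name }}!\n"
    "{% if user_name %}Hello, {{ user_name }}!{% else %}Hello there!{% endif %}"
)
print(template.render(website_name="Microsoft", user_name="Jane"))
# Welcome to Microsoft!
# Hello, Jane!
```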
### Sample 2
Inputs
| Variable | Type | Sample Value |
|--------------|--------|----------------|
| website_name | string | "Bing" |
| user_name | string | "" |
Outputs
```
Welcome to Bing! Hello there! Please select an option from the menu below: 1. View your account 2. Update personal information 3. Browse available products 4. Contact customer support
``` | 0 |
promptflow_repo/promptflow/docs/reference | promptflow_repo/promptflow/docs/reference/tools-reference/faiss_index_lookup_tool.md | # Faiss Index Lookup
Faiss Index Lookup is a tool tailored for querying within a user-provided Faiss-based vector store. In combination with our Large Language Model (LLM) tool, it empowers users to extract contextually relevant information from a domain knowledge base.
## Requirements
- For AzureML users, the tool is installed in the default image; you can use it without extra installation.
- For local users:
  - If your index is stored in a local path: `pip install promptflow-vectordb`
  - If your index is stored in Azure storage: `pip install promptflow-vectordb[azure]`
## Prerequisites
### For AzureML users,
- step 1. Prepare an accessible path on Azure Blob Storage. Here's the guide if a new storage account needs to be created: [Azure Storage Account](https://learn.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal).
- step 2. Create related Faiss-based index files on Azure Blob Storage. We support the LangChain format (index.faiss + index.pkl) for the index files, which can be prepared either by employing our promptflow-vectordb SDK or following the quick guide from [LangChain documentation](https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/faiss). Please refer to the instructions of <a href="https://aka.ms/pf-sample-build-faiss-index" target="_blank">An example code for creating Faiss index</a> for building index using promptflow-vectordb SDK.
- step 3. Based on where you put your index files, the identity used by the promptflow runtime must be granted the appropriate role. Please refer to [Steps to assign an Azure role](https://learn.microsoft.com/en-us/azure/role-based-access-control/role-assignments-steps):
| Location | Role |
| ---- | ---- |
| workspace datastores or workspace default blob | AzureML Data Scientist |
| other blobs | Storage Blob Data Reader |
### For local users,
- Create the Faiss-based index files in a local path by doing only step 2 above; a minimal build sketch follows.
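A minimal sketch of step 2, following the LangChain quick guide linked above; `OpenAIEmbeddings` is just one example embedding provider, and the sample data is hypothetical:
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

texts = ["sample text #0", "sample text #1", "sample text #2"]
metadatas = [{"link": f"http://sample_link_{i}", "title": f"title{i}"} for i in range(3)]

# Build the index and write it in LangChain format (index.faiss + index.pkl).
index = FAISS.from_texts(texts, OpenAIEmbeddings(), metadatas=metadatas)
index.save_local("my_faiss_index")
```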
## Inputs
The tool accepts the following inputs:
| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| path | string | URL or path for the vector store.<br><br>local path (for local users):<br>`<local_path_to_the_index_folder>`<br><br> Azure blob URL format (with [azure] extra installed):<br>https://`<account_name>`.blob.core.windows.net/`<container_name>`/`<path_and_folder_name>`.<br><br>AML datastore URL format (with [azure] extra installed):<br>azureml://subscriptions/`<your_subscription>`/resourcegroups/`<your_resource_group>`/workspaces/`<your_workspace>`/data/`<data_path>`<br><br>public http/https URL (for public demonstration):<br>http(s)://`<path_and_folder_name>` | Yes |
| vector | list[float] | The target vector to be queried, which can be generated by the LLM tool. | Yes |
| top_k | integer | The count of top-scored entities to return. Default value is 3. | No |
## Outputs
The following is an example of the JSON-format response returned by the tool, which includes the top-k scored entities. Each entity follows a generic vector search result schema provided by our promptflow-vectordb SDK. For the Faiss Index Lookup, the following fields are populated:
| Field Name | Type | Description |
| ---- | ---- | ----------- |
| text | string | Text of the entity |
| score | float | Distance between the entity and the query vector |
| metadata | dict | Customized key-value pairs provided by the user when creating the index |
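As an illustration of consuming this output, a hypothetical downstream Python node could flatten the returned entities (such as those in the example below) into a single context string for an LLM prompt:
```python
from promptflow import tool


@tool
def build_context(search_results: list) -> str:
    # Results arrive sorted by distance (closest first); keep only the texts.
    return "\n".join(entity["text"] for entity in search_results)
```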
<details>
<summary>Output</summary>
```json
[
{
"metadata": {
"link": "http://sample_link_0",
"title": "title0"
},
"original_entity": null,
"score": 0,
"text": "sample text #0",
"vector": null
},
{
"metadata": {
"link": "http://sample_link_1",
"title": "title1"
},
"original_entity": null,
"score": 0.05000000447034836,
"text": "sample text #1",
"vector": null
},
{
"metadata": {
"link": "http://sample_link_2",
"title": "title2"
},
"original_entity": null,
"score": 0.20000001788139343,
"text": "sample text #2",
"vector": null
}
]
```
</details> | 0 |
promptflow_repo/promptflow/docs/reference | promptflow_repo/promptflow/docs/reference/tools-reference/contentsafety_text_tool.md | # Content Safety (Text)
Azure Content Safety is a content moderation service developed by Microsoft that helps users detect harmful content across different modalities and languages. This tool is a wrapper for the Azure Content Safety text API, which allows you to analyze text content and get moderation results. See [Azure Content Safety](https://aka.ms/acs-doc) for more information.
## Requirements
- For AzureML users, the tool is installed in the default image; you can use it without extra installation.
- For local users,
`pip install promptflow-tools`
> [!NOTE]
> Content Safety (Text) tool is now incorporated into the latest `promptflow-tools` package. If you have previously installed the package `promptflow-contentsafety`, please uninstall it to avoid the duplication in your local tool list.
## Prerequisites
- Create an [Azure Content Safety](https://aka.ms/acs-create) resource.
- Add "Azure Content Safety" connection in prompt flow. Fill "API key" field with "Primary key" from "Keys and Endpoint" section of created resource.
## Inputs
You can use the following parameters as inputs for this tool:
| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| text | string | The text that needs to be moderated. | Yes |
| hate_category | string | The moderation sensitivity for Hate category. You can choose from four options: *disable*, *low_sensitivity*, *medium_sensitivity*, or *high_sensitivity*. The *disable* option means no moderation for hate category. The other three options mean different degrees of strictness in filtering out hate content. The default option is *medium_sensitivity*. | Yes |
| sexual_category | string | The moderation sensitivity for Sexual category. You can choose from four options: *disable*, *low_sensitivity*, *medium_sensitivity*, or *high_sensitivity*. The *disable* option means no moderation for sexual category. The other three options mean different degrees of strictness in filtering out sexual content. The default option is *medium_sensitivity*. | Yes |
| self_harm_category | string | The moderation sensitivity for Self-harm category. You can choose from four options: *disable*, *low_sensitivity*, *medium_sensitivity*, or *high_sensitivity*. The *disable* option means no moderation for self-harm category. The other three options mean different degrees of strictness in filtering out self_harm content. The default option is *medium_sensitivity*. | Yes |
| violence_category | string | The moderation sensitivity for Violence category. You can choose from four options: *disable*, *low_sensitivity*, *medium_sensitivity*, or *high_sensitivity*. The *disable* option means no moderation for violence category. The other three options mean different degrees of strictness in filtering out violence content. The default option is *medium_sensitivity*. | Yes |
For more information, please refer to [Azure Content Safety](https://aka.ms/acs-doc)
## Outputs
The following is an example JSON format response returned by the tool:
<details>
<summary>Output</summary>
```json
{
"action_by_category": {
"Hate": "Accept",
"SelfHarm": "Accept",
"Sexual": "Accept",
"Violence": "Accept"
},
"suggested_action": "Accept"
}
```
</details>
The `action_by_category` field gives you a binary value for each category: *Accept* or *Reject*. This value shows if the text meets the sensitivity level that you set in the request parameters for that category.
The `suggested_action` field gives you an overall recommendation based on the four categories. If any category has a *Reject* value, the `suggested_action` will be *Reject* as well.
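In other words, the overall recommendation is simply the conjunction of the per-category results; a one-line sketch of that rule:
```python
def suggested_action(action_by_category: dict) -> str:
    # Reject overall if any single category is rejected.
    return "Reject" if "Reject" in action_by_category.values() else "Accept"


print(suggested_action({"Hate": "Accept", "SelfHarm": "Accept",
                        "Sexual": "Accept", "Violence": "Reject"}))  # Reject
```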
| 0 |
promptflow_repo/promptflow/docs/reference | promptflow_repo/promptflow/docs/reference/tools-reference/open_model_llm_tool.md | # Open Model LLM
## Introduction
The Open Model LLM tool enables the utilization of a variety of Open Model and Foundational Models, such as [Falcon](https://ml.azure.com/models/tiiuae-falcon-7b/version/4/catalog/registry/azureml) and [Llama 2](https://ml.azure.com/models/Llama-2-7b-chat/version/14/catalog/registry/azureml-meta), for natural language processing in Azure ML Prompt Flow.
Here's how it looks in action in the Visual Studio Code prompt flow extension. In this example, the tool is used to call a Llama 2 chat endpoint, asking "What is CI?".
![Screenshot of the Open Model LLM On VScode Prompt Flow extension](../../media/reference/tools-reference/open_model_llm_on_vscode_promptflow.png)
This prompt flow tool supports two different LLM API types:
- **Chat**: Shown in the example above. The chat API type facilitates interactive conversations with text-based inputs and responses.
- **Completion**: The Completion API type is used to generate single response text completions based on provided prompt input.
## Quick Overview: How do I use Open Model LLM Tool?
1. Choose a Model from the AzureML Model Catalog and get it deployed.
2. Connect to the model deployment.
3. Configure the Open Model LLM tool settings.
4. Prepare the Prompt with [guidance](./prompt-tool.md#how-to-write-prompt).
5. Run the flow.
## Prerequisites: Model Deployment
1. Pick the model which matches your scenario from the [Azure Machine Learning model catalog](https://ml.azure.com/model/catalog).
2. Use the "Deploy" button to deploy the model to an AzureML Online Inference endpoint.
2.1. Use one of the pay-as-you-go deployment options.
More detailed instructions can be found here [Deploying foundation models to endpoints for inferencing.](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-use-foundation-models?view=azureml-api-2#deploying-foundation-models-to-endpoints-for-inferencing)
## Prerequisites: Connect to the Model
In order for prompt flow to use your deployed model, you will need to connect to it. There are several ways to connect.
### 1. Endpoint Connections
Once associated with an AzureML or Azure AI Studio workspace, the Open Model LLM tool can use the endpoints on that workspace.
1. **Using AzureML or Azure AI Studio workspaces**: If you are using prompt flow in one of the web-page-based workspaces, the online endpoints available on that workspace will automatically show up.
2. **Using VScode or Code First**: If you are using prompt flow in VScode or one of the Code First offerings, you will need to connect to the workspace. The Open Model LLM tool uses the azure.identity DefaultAzureCredential client for authorization. One way is through [setting environment credential values](https://learn.microsoft.com/en-us/python/api/azure-identity/azure.identity.environmentcredential?view=azure-python).
### 2. Custom Connections
The Open Model LLM tool uses the CustomConnection. Prompt flow supports two types of connections:
1. **Workspace Connections** - These are connections which are stored as secrets on an Azure Machine Learning workspace. While these can be used in many places, they are commonly created and maintained in the Studio UI.
2. **Local Connections** - These are connections which are stored locally on your machine. These connections are not available in the Studio UX, but can be used with the VS Code extension.
Instructions on how to create a workspace or local Custom Connection [can be found here.](../../how-to-guides/manage-connections.md#create-a-connection)
The required keys to set are:
1. **endpoint_url**
- This value can be found at the previously created Inferencing endpoint.
2. **endpoint_api_key**
- Ensure to set this as a secret value.
- This value can be found at the previously created Inferencing endpoint.
3. **model_family**
- Supported values: LLAMA, DOLLY, GPT2, or FALCON
- This value is dependent on the type of deployment you are targeting.
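As an illustration, a local custom connection with these keys might be created via the SDK as sketched below (all names and values are placeholders):
```python
from promptflow import PFClient
from promptflow.entities import CustomConnection

connection = CustomConnection(
    name="my_open_model_connection",
    configs={
        "endpoint_url": "https://<your-endpoint>.<region>.inference.ml.azure.com/score",
        "model_family": "LLAMA",
    },
    # endpoint_api_key is stored as a secret value.
    secrets={"endpoint_api_key": "<your-endpoint-key>"},
)
PFClient().connections.create_or_update(connection)
```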
## Running the Tool: Inputs
The Open Model LLM tool has a number of parameters, some of which are required. Please see the table below for details; you can match these to the screenshot above for visual clarity.
| Name | Type | Description | Required |
|------|------|-------------|----------|
| api | string | This is the API mode and will depend on the model used and the scenario selected. *Supported values: (Completion \| Chat)* | Yes |
| endpoint_name | string | Name of an Online Inferencing Endpoint with a supported model deployed on it. Takes priority over connection. | No |
| temperature | float | The randomness of the generated text. Default is 1. | No |
| max_new_tokens | integer | The maximum number of tokens to generate in the completion. Default is 500. | No |
| top_p | float | The probability of using the top choice from the generated tokens. Default is 1. | No |
| model_kwargs | dictionary | This input is used to provide configuration specific to the model used. For example, the Llama-02 model may use {\"temperature\":0.4}. *Default: {}* | No |
| deployment_name | string | The name of the deployment to target on the Online Inferencing endpoint. If no value is passed, the Inferencing load balancer traffic settings will be used. | No |
| prompt | string | The text prompt that the language model will use to generate its response. | Yes |
## Outputs
| API | Return Type | Description |
|------------|-------------|------------------------------------------|
| Completion | string | The text of one predicted completion |
| Chat | string | The text of one response in the conversation |
## Deploying to an Online Endpoint
When deploying a flow containing the Open Model LLM tool to an online endpoint, there is an additional step to set up permissions. During deployment through the web pages, there is a choice between System-assigned and User-assigned identity types. Either way, using the Azure Portal (or similar functionality), add the "Reader" job function role to the identity on the Azure Machine Learning workspace or AI Studio project which is hosting the endpoint. The prompt flow deployment may need to be refreshed.
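For example, with the Azure CLI, the role assignment might look roughly like this (the assignee principal ID and workspace scope below are placeholders for your endpoint identity and workspace):
```sh
az role assignment create \
  --assignee "<endpoint-identity-principal-id>" \
  --role "Reader" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.MachineLearningServices/workspaces/<workspace>"
```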
| 0 |
promptflow_repo/promptflow | promptflow_repo/promptflow/.devcontainer/devcontainer.json | {
"name": "Promptflow-Python39",
// "context" is the path that the Codespaces docker build command should be run from, relative to devcontainer.json
"context": ".",
"dockerFile": "Dockerfile",
// Set *default* container specific settings.json values on container create.
"settings": {
"terminal.integrated.shell.linux": "/bin/bash"
},
// Add the IDs of extensions you want installed when the container is created.
"extensions": [
"ms-python.python",
"ms-toolsai.vscode-ai",
"ms-toolsai.jupyter",
"redhat.vscode-yaml",
"prompt-flow.prompt-flow"
],
"runArgs": ["-v", "/var/run/docker.sock:/var/run/docker.sock"]
}
| 0 |
promptflow_repo/promptflow | promptflow_repo/promptflow/.devcontainer/Dockerfile | FROM python:3.9-slim-bullseye AS base
RUN set -x
RUN apt-get update \
&& apt-get -y install curl \
&& apt-get -y install net-tools \
&& apt-get -y install procps \
&& apt-get -y install build-essential \
&& apt-get -y install docker.io
RUN pip install ipython ipykernel
RUN ipython kernel install --user --name promptflow
# FROM base AS promptflow
COPY requirements.txt .
RUN pip install -r requirements.txt
RUN set +x
CMD bash
| 0 |
promptflow_repo/promptflow | promptflow_repo/promptflow/.devcontainer/requirements.txt | azure-cli
promptflow[azure]
promptflow-tools | 0 |
promptflow_repo/promptflow | promptflow_repo/promptflow/.devcontainer/README.md | # Devcontainer for promptflow
To help you develop promptflow projects and work on LLM projects with promptflow more effectively,
we've configured the environment needed for developing promptflow projects and utilizing flows via the dev container feature.
You can start developing promptflow projects and leveraging flows right away through the dev container feature in VS Code or Codespaces.
## Use Github Codespaces
Use Codespaces to open the promptflow repo; it will automatically build the dev container environment and open promptflow in a dev container. You can just click: [![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/microsoft/promptflow?quickstart=1)
## Use local devcontainer
Use VS Code to open the promptflow repo, install the VS Code extension "Dev Containers", and then reopen promptflow in a dev container.
![devcontainer](./devcontainers.png)
**About dev containers please refer to: [dev containers](https://code.visualstudio.com/docs/devcontainers/containers)**
| 0 |