File: promptflow_repo/promptflow/SUPPORT.md

# Support
## How to file issues and get help
This project uses GitHub Issues to track bugs and feature requests. Please search the existing
issues before filing new issues to avoid duplicates. For new issues, file your bug or
feature request as a new Issue.
## Microsoft Support Policy
Support for this **PROJECT or PRODUCT** is limited to the resources listed above.
File: promptflow_repo/promptflow/setup.cfg

[flake8]
extend-ignore = E203, E266, W503, F403, F821
max-line-length = 120
enable-extensions = E123,E133,E241,E242,E704,W505
exclude =
.git
.tox
.eggs
__pycache__
tests/fixtures/*
docs/*
venv,.pytest_cache
build
src/promptflow/promptflow/azure/_restclient
src/promptflow/tests/test_configs/*
import-order-style = google
[mypy]
ignore_missing_imports = True
disallow_untyped_defs = True
[mypy-pytest,pytest_mock]
ignore_missing_imports = True
[tool:pycln]
quiet = True
[black]
line_length = 120
[pycln]
silence = True
[isort]
# we use check for make fmt*
profile = "black"
# no need to fmt ignored
skip_gitignore = true
# needs to be the same as in black
line_length = 120
use_parentheses = true
include_trailing_comma = true
honor_noqa = true
ensure_newline_before_comments = true
skip_glob = [
docs/**,
pipelines/**,
pytest/**,
samples/**,
]
known_third_party = azure,mock,numpy,pandas,pydash,pytest,pytest_mock,requests,setuptools,six,sklearn,tqdm,urllib3,utilities,utils,yaml,jsonschema,strictyaml,jwt,pathspec,isodate,docker
known_first_party = promptflow,promptflow_test
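The sections above configure several tools at once. A minimal sketch of how they are typically invoked against this config (assuming the tools are installed; the `src/` path is an assumption, not taken from the repo's Makefile):

```shell
# Install the tools configured in setup.cfg (illustrative versions/paths):
pip install --quiet flake8 black isort mypy

flake8 --version
flake8 . || true                                   # picks up the [flake8] section automatically
black --check --line-length 120 . || true          # mirrors line_length in the [black] section
isort --check-only --settings-path setup.cfg . || true
mypy --config-file setup.cfg src/ || true          # src/ path is an assumption
```

The `|| true` guards are only there so a dirty tree doesn't abort the whole script; drop them in CI where a non-zero exit should fail the build.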
File: promptflow_repo/promptflow/CODE_OF_CONDUCT.md

# Microsoft Open Source Code of Conduct
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
Resources:
- [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/)
- [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
- Contact [[email protected]](mailto:[email protected]) with questions or concerns
File: promptflow_repo/promptflow/README.md

# Prompt flow
[![Python package](https://img.shields.io/pypi/v/promptflow)](https://pypi.org/project/promptflow/)
[![Python](https://img.shields.io/pypi/pyversions/promptflow.svg?maxAge=2592000)](https://pypi.python.org/pypi/promptflow/)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/promptflow)](https://pypi.org/project/promptflow/)
[![CLI](https://img.shields.io/badge/CLI-reference-blue)](https://microsoft.github.io/promptflow/reference/pf-command-reference.html)
[![vsc extension](https://img.shields.io/visual-studio-marketplace/i/prompt-flow.prompt-flow?logo=Visual%20Studio&label=Extension%20)](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow)
[![Doc](https://img.shields.io/badge/Doc-online-green)](https://microsoft.github.io/promptflow/index.html)
[![Issue](https://img.shields.io/github/issues/microsoft/promptflow)](https://github.com/microsoft/promptflow/issues/new/choose)
[![Discussions](https://img.shields.io/github/discussions/microsoft/promptflow)](https://github.com/microsoft/promptflow/discussions)
[![CONTRIBUTING](https://img.shields.io/badge/Contributing-8A2BE2)](https://github.com/microsoft/promptflow/blob/main/CONTRIBUTING.md)
[![License: MIT](https://img.shields.io/github/license/microsoft/promptflow)](https://github.com/microsoft/promptflow/blob/main/LICENSE)
> You are welcome to join us in making prompt flow better by
> participating in [discussions](https://github.com/microsoft/promptflow/discussions),
> opening [issues](https://github.com/microsoft/promptflow/issues/new/choose),
> or submitting [PRs](https://github.com/microsoft/promptflow/pulls).
**Prompt flow** is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring. It makes prompt engineering much easier and enables you to build LLM apps with production quality.
With prompt flow, you will be able to:
- **Create and iteratively develop flow**
- Create executable [flows](https://microsoft.github.io/promptflow/concepts/concept-flows.html) that link LLMs, prompts, Python code and other [tools](https://microsoft.github.io/promptflow/concepts/concept-tools.html) together.
- Debug and iterate your flows, especially the [interaction with LLMs](https://microsoft.github.io/promptflow/concepts/concept-connections.html) with ease.
- **Evaluate flow quality and performance**
- Evaluate your flow's quality and performance with larger datasets.
- Integrate the testing and evaluation into your CI/CD system to ensure quality of your flow.
- **Streamlined development cycle for production**
- Deploy your flow to the serving platform you choose or integrate into your app's code base easily.
- (Optional but highly recommended) Collaborate with your team by leveraging the cloud version of [Prompt flow in Azure AI](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/overview-what-is-prompt-flow?view=azureml-api-2).
------
## Installation
To get started quickly, you can use a pre-built development environment. **Click the button below** to open the repo in GitHub Codespaces, and then continue with the readme!
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/microsoft/promptflow?quickstart=1)
If you want to get started in your local environment, first ensure you have a Python environment (`python=3.9` is recommended), then install the packages:
```sh
pip install promptflow promptflow-tools
```
## Quick Start ⚡
**Create a chatbot with prompt flow**
Run the following command to initiate a prompt flow from a chat template. It creates a folder named `my_chatbot` and generates the required files within it:
```sh
pf flow init --flow ./my_chatbot --type chat
```
**Setup a connection for your API key**
For an OpenAI key, establish a connection by running the following command, using the `openai.yaml` file in the `my_chatbot` folder, which stores your OpenAI key (override keys and name with `--set` to avoid YAML file changes):
```sh
pf connection create --file ./my_chatbot/openai.yaml --set api_key=<your_api_key> --name open_ai_connection
```
For an Azure OpenAI key, establish the connection by running the following command, using the `azure_openai.yaml` file:
```sh
pf connection create --file ./my_chatbot/azure_openai.yaml --set api_key=<your_api_key> api_base=<your_api_base> --name open_ai_connection
```
**Chat with your flow**
In the `my_chatbot` folder, there's a `flow.dag.yaml` file that outlines the flow, including inputs/outputs, nodes, connections, the LLM model, etc.
> Note that in the `chat` node, we're using a connection named `open_ai_connection` (specified in the `connection` field) and the `gpt-35-turbo` model (specified in the `deployment_name` field). The `deployment_name` field specifies the OpenAI model, or the Azure OpenAI deployment resource.
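For orientation, a chat flow definition of this shape looks roughly like the sketch below. The field names follow the description above, but the exact values are illustrative assumptions, not a copy of the generated file:

```yaml
# Illustrative flow.dag.yaml sketch (values are assumptions)
inputs:
  question:
    type: string
    is_chat_input: true
outputs:
  answer:
    type: string
    reference: ${chat.output}
nodes:
- name: chat
  type: llm
  source:
    type: code
    path: chat.jinja2
  inputs:
    deployment_name: gpt-35-turbo
    question: ${inputs.question}
  connection: open_ai_connection
  api: chat
```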
Interact with your chatbot by running: (press `Ctrl + C` to end the session)
```sh
pf flow test --flow ./my_chatbot --interactive
```
**Core value: ensuring "High Quality" from prototype to production**
Explore our [**15-minute tutorial**](examples/tutorials/flow-fine-tuning-evaluation/promptflow-quality-improvement.md) that guides you through prompt tuning ➡ batch testing ➡ evaluation, all designed to ensure high quality ready for production.
Next Step! Continue with the **Tutorial** 👇 section to delve deeper into prompt flow.
## Tutorial 🏃‍♂️
Prompt flow is a tool designed to **build high quality LLM apps**. The development process in prompt flow follows these steps: develop a flow, improve the flow quality, and deploy the flow to production.
### Develop your own LLM apps
#### VS Code Extension
We also offer a VS Code extension (a flow designer) for an interactive flow development experience with UI.
<img src="examples/tutorials/quick-start/media/vsc.png" alt="vsc" width="1000"/>
You can install it from the <a href="https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow">Visual Studio Marketplace</a>.
#### Deep dive into flow development
[Getting started with prompt flow](https://microsoft.github.io/promptflow/how-to-guides/quick-start.html): A step-by-step guide to invoking your first flow run.
### Learn from use cases
[Tutorial: Chat with PDF](https://github.com/microsoft/promptflow/blob/main/examples/tutorials/e2e-development/chat-with-pdf.md): An end-to-end tutorial on how to build a high quality chat application with prompt flow, including flow development and evaluation with metrics.
> More examples can be found [here](https://microsoft.github.io/promptflow/tutorials/index.html#samples). We welcome contributions of new use cases!
### Setup for contributors
If you're interested in contributing, please start with our dev setup guide: [dev_setup.md](./docs/dev/dev_setup.md).
Next Step! Continue with the **Contributing** 👇 section to contribute to prompt flow.
## Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [[email protected]](mailto:[email protected]) with any additional questions or comments.
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft
trademarks or logos is subject to and must follow
[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
Any use of third-party trademarks or logos is subject to those third parties' policies.
## Code of Conduct
This project has adopted the
[Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the
[Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
or contact [[email protected]](mailto:[email protected])
with any additional questions or comments.
## Data Collection
The software may collect information about you and your use of the software and
send it to Microsoft if configured to enable telemetry.
Microsoft may use this information to provide services and improve our products and services.
You may turn off the telemetry as described in the repository.
There are also some features in the software that may enable you and Microsoft
to collect data from users of your applications. If you use these features, you
must comply with applicable law, including providing appropriate notices to
users of your applications together with a copy of Microsoft's privacy
statement. Our privacy statement is located at
https://go.microsoft.com/fwlink/?LinkID=824704. You can learn more about data
collection and use in the help documentation and our privacy statement. Your
use of the software operates as your consent to these practices.
### Telemetry Configuration
Telemetry collection is on by default.
To opt out, please run `pf config set telemetry.enabled=false` to turn it off.
## License
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the [MIT](LICENSE) license.
File: promptflow_repo/promptflow/LICENSE

MIT License
Copyright (c) Microsoft Corporation.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
File: promptflow_repo/promptflow/CONTRIBUTING.md

# Contributing to Prompt Flow
You can contribute to prompt flow with issues and pull requests (PRs). Simply
filing issues for problems you encounter is a great way to contribute. Contributing
code is greatly appreciated.
## Reporting Issues
We always welcome bug reports, API proposals and overall feedback. Here are a few
tips on how you can make reporting your issue as effective as possible.
### Where to Report
New issues can be reported in our [list of issues](https://github.com/microsoft/promptflow/issues).
Before filing a new issue, please search the list of issues to make sure it does
not already exist.
If you do find an existing issue for what you wanted to report, please include
your own feedback in the discussion. Do consider upvoting (👍 reaction) the original
post, as this helps us prioritize popular issues in our backlog.
### Writing a Good Bug Report
Good bug reports make it easier for maintainers to verify and root cause the
underlying problem.
The better a bug report, the faster the problem will be resolved. Ideally, a bug
report should contain the following information:
- A high-level description of the problem.
- A _minimal reproduction_, i.e. the smallest size of code/configuration required
to reproduce the wrong behavior.
- A description of the _expected behavior_, contrasted with the _actual behavior_ observed.
- Information on the environment: OS/distribution, CPU architecture, SDK version, etc.
- Additional information, e.g. Is it a regression from previous versions? Are there
any known workarounds?
## Contributing Changes
Project maintainers will merge accepted code changes from contributors.
### DOs and DON'Ts
DO's:
- **DO** follow the standard coding conventions: [Python](https://pypi.org/project/black/)
- **DO** give priority to the current style of the project or file you're changing
if it diverges from the general guidelines.
- **DO** include tests when adding new features. When fixing bugs, start with
adding a test that highlights how the current behavior is broken.
- **DO** add proper docstring for functions and classes following [API Documentation Guidelines](./docs/dev/documentation_guidelines.md).
- **DO** keep the discussions focused. When a new or related topic comes up
it's often better to create a new issue than to sidetrack the discussion.
- **DO** clearly state on an issue that you are going to take on implementing it.
- **DO** blog and tweet (or whatever) about your contributions, frequently!
DON'Ts:
- **DON'T** surprise us with big pull requests. Instead, file an issue and start
a discussion so we can agree on a direction before you invest a large amount of time.
- **DON'T** commit code that you didn't write. If you find code that you think is a good
fit to add to prompt flow, file an issue and start a discussion before proceeding.
- **DON'T** submit PRs that alter licensing related files or headers. If you believe
there's a problem with them, file an issue and we'll be happy to discuss it.
- **DON'T** make new APIs without filing an issue and discussing with us first.
### Breaking Changes
Contributions must maintain API signature and behavioral compatibility. Contributions
that include breaking changes will be rejected. Please file an issue to discuss
your idea or change if you believe that a breaking change is warranted.
### Suggested Workflow
We use and recommend the following workflow:
1. Create an issue for your work, or reuse an existing issue on the same topic.
- Get agreement from the team and the community that your proposed change is
a good one.
- Clearly state that you are going to take on implementing it, if that's the case.
You can request that the issue be assigned to you. Note: The issue filer and
the implementer don't have to be the same person.
2. Create a personal fork of the repository on GitHub (if you don't already have one).
3. In your fork, create a branch off of main (`git checkout -b my_branch`).
- Name the branch so that it clearly communicates your intentions, such as
"issue-123" or "githubhandle-issue".
4. Make and commit your changes to your branch.
5. Add new tests corresponding to your change, if applicable.
6. Run the relevant scripts in [the section below](https://github.com/microsoft/promptflow/blob/main/CONTRIBUTING.md#dev-scripts) to ensure that your build is clean and all tests are passing.
7. Create a PR against the repository's **main** branch.
- State in the description what issue or improvement your change is addressing.
- Link the PR to the issue in step 1.
- Verify that all the Continuous Integration checks are passing.
8. Wait for feedback or approval of your changes from the code maintainers.
- If there is no response for a few days, you can create a new issue to raise awareness.
The promptflow team has a triage process for issues without an assignee,
so you can then directly contact the issue owner to follow up (e.g., loop in the related internal reviewer).
9. When area owners have signed off, and all checks are green, your PR will be merged.
### Development scripts
The scripts below are used to build, test, and lint within the project.
- See [docs/dev/dev_setup.md](https://github.com/microsoft/promptflow/blob/main/docs/dev/dev_setup.md).
### PR - CI Process
The continuous integration (CI) system will automatically perform the required
builds and run tests (including the ones you are expected to run) for PRs. Builds
and test runs must be clean.
If the CI build fails for any reason, the PR issue will be updated with a link
that can be used to determine the cause of the failure.
File: promptflow_repo/promptflow/.cspell.json

{
"version": "0.2",
"language": "en",
"languageId": "python",
"dictionaries": [
"powershell",
"python",
"go",
"css",
"html",
"bash",
"npm",
"softwareTerms",
"en_us",
"en-gb"
],
"ignorePaths": [
"**/*.js",
"**/*.pyc",
"**/*.log",
"**/*.jsonl",
"**/*.xml",
"**/*.txt",
".gitignore",
"scripts/docs/_build/**",
"src/promptflow/promptflow/azure/_restclient/flow/**",
"src/promptflow/promptflow/azure/_restclient/swagger.json",
"src/promptflow/tests/**",
"src/promptflow-tools/tests/**",
"**/flow.dag.yaml",
"**/setup.py",
"scripts/installer/curl_install_pypi/**",
"scripts/installer/windows/**",
"src/promptflow/promptflow/_sdk/_service/pfsvc.py"
],
"words": [
"aoai",
"amlignore",
"mldesigner",
"faiss",
"serp",
"azureml",
"mlflow",
"vnet",
"openai",
"pfazure",
"eastus",
"azureai",
"vectordb",
"Qdrant",
"Weaviate",
"env",
"e2etests",
"e2etest",
"tablefmt",
"logprobs",
"logit",
"hnsw",
"chatml",
"UNLCK",
"KHTML",
"numlines",
"azurecr",
"centralus",
"Policheck",
"azuremlsdktestpypi",
"rediraffe",
"pydata",
"ROBOCOPY",
"undoc",
"retriable",
"pfcli",
"pfutil",
"mgmt",
"wsid",
"westus",
"msrest",
"cref",
"msal",
"pfbytes",
"Apim",
"junit",
"nunit",
"astext",
"Likert",
"pfsvc"
],
"ignoreWords": [
"openmpi",
"ipynb",
"xdist",
"pydash",
"tqdm",
"rtype",
"epocs",
"fout",
"funcs",
"todos",
"fstring",
"creds",
"zipp",
"gmtime",
"pyjwt",
"nbconvert",
"nbformat",
"pypandoc",
"dotenv",
"miniconda",
"datas",
"tcgetpgrp",
"yamls",
"fmt",
"serpapi",
"genutils",
"metadatas",
"tiktoken",
"bfnrt",
"orelse",
"thead",
"sympy",
"ghactions",
"esac",
"MSRC",
"pycln",
"strictyaml",
"psutil",
"getch",
"tcgetattr",
"TCSADRAIN",
"stringio",
"jsonify",
"werkzeug",
"continuumio",
"pydantic",
"iterrows",
"dtype",
"fillna",
"nlines",
"aggr",
"tcsetattr",
"pysqlite",
"AADSTS700082",
"Pyinstaller",
"runsvdir",
"runsv",
"levelno",
"LANCZOS",
"Mobius",
"ruamel",
"gunicorn",
"pkill",
"pgrep",
"Hwfoxydrg",
"llms",
"vcrpy",
"uionly",
"llmops",
"Abhishek",
"restx",
"httpx",
"tiiuae",
"nohup",
"metagenai",
"WBITS",
"laddr",
"nrows",
"Dumpable"
],
"flagWords": [
"Prompt Flow"
],
"allowCompoundWords": true
}
File: promptflow_repo/promptflow/SECURITY.md

<!-- BEGIN MICROSOFT SECURITY.MD V0.0.8 BLOCK -->
## Security
Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/).
If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/opensource/security/definition), please report it to us as described below.
## Reporting Security Issues
**Please do not report security vulnerabilities through public GitHub issues.**
Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report).
If you prefer to submit without logging in, send email to [[email protected]](mailto:[email protected]). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/opensource/security/pgpkey).
You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://aka.ms/opensource/security/msrc).
Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:
* Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
* Full paths of source file(s) related to the manifestation of the issue
* The location of the affected source code (tag/branch/commit or direct URL)
* Any special configuration required to reproduce the issue
* Step-by-step instructions to reproduce the issue
* Proof-of-concept or exploit code (if possible)
* Impact of the issue, including how an attacker might exploit the issue
This information will help us triage your report more quickly.
If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/opensource/security/bounty) page for more details about our active programs.
## Preferred Languages
We prefer all communications to be in English.
## Policy
Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/opensource/security/cvd).
<!-- END MICROSOFT SECURITY.MD BLOCK -->
File: promptflow_repo/promptflow/.pre-commit-config.yaml

# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
exclude: '(^docs/)|flows|scripts|src/promptflow/promptflow/azure/_restclient/|src/promptflow/tests/test_configs|src/promptflow-tools'
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v3.2.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
- id: check-yaml
- id: check-json
- id: check-merge-conflict
- repo: https://github.com/psf/black
rev: 22.3.0 # Replace by any tag/version: https://github.com/psf/black/tags
hooks:
- id: black
language_version: python3 # Should be a command that runs python3.6+
args:
- "--line-length=120"
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v2.3.0
hooks:
- id: flake8
# Temporary disable this since it gets stuck when updating env
- repo: https://github.com/streetsidesoftware/cspell-cli
rev: v7.3.0
hooks:
- id: cspell
args: ['--config', '.cspell.json', "--no-must-find-files"]
- repo: https://github.com/hadialqattan/pycln
rev: v2.1.2 # Possible releases: https://github.com/hadialqattan/pycln/tags
hooks:
- id: pycln
name: "Clean unused python imports"
args: [--config=setup.cfg]
- repo: https://github.com/pycqa/isort
rev: 5.12.0
hooks:
- id: isort
# stages: [commit]
name: isort-python
# Use black profile for isort to avoid conflicts
# see https://github.com/PyCQA/isort/issues/1518
args: ["--profile", "black", --line-length=120]
File: promptflow_repo/promptflow/.devcontainer/README.md

# Devcontainer for promptflow
To facilitate your promptflow project development and empower you to work on LLM projects using promptflow more effectively,
we've configured the necessary environment for developing promptflow projects and utilizing flows through the dev container feature.
You can seamlessly initiate your promptflow project development and start leveraging flows by simply using the dev container feature via VS Code or Codespaces.
## Use GitHub Codespaces
Use Codespaces to open the promptflow repo; it will automatically build the dev container environment and open promptflow in a dev container. You can just click: [![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/microsoft/promptflow?quickstart=1)
## Use local devcontainer
Use VS Code to open the promptflow repo, install the Dev Containers extension, and then reopen promptflow in a dev container.
![devcontainer](./devcontainers.png)
**About dev containers please refer to: [dev containers](https://code.visualstudio.com/docs/devcontainers/containers)**
File: promptflow_repo/promptflow/.devcontainer/Dockerfile

FROM python:3.9-slim-bullseye AS base
RUN set -x
RUN apt-get update \
&& apt-get -y install curl \
&& apt-get -y install net-tools \
&& apt-get -y install procps \
&& apt-get -y install build-essential \
&& apt-get -y install docker.io
RUN pip install ipython ipykernel
RUN ipython kernel install --user --name promptflow
# FROM base AS promptflow
COPY requirements.txt .
RUN pip install -r requirements.txt
RUN set +x
CMD bash
File: promptflow_repo/promptflow/.devcontainer/devcontainer.json

{
"name": "Promptflow-Python39",
// "context" is the path that the Codespaces docker build command should be run from, relative to devcontainer.json
"context": ".",
"dockerFile": "Dockerfile",
// Set *default* container specific settings.json values on container create.
"settings": {
"terminal.integrated.shell.linux": "/bin/bash"
},
// Add the IDs of extensions you want installed when the container is created.
"extensions": [
"ms-python.python",
"ms-toolsai.vscode-ai",
"ms-toolsai.jupyter",
"redhat.vscode-yaml",
"prompt-flow.prompt-flow"
],
"runArgs": ["-v", "/var/run/docker.sock:/var/run/docker.sock"]
}
File: promptflow_repo/promptflow/.devcontainer/requirements.txt

azure-cli
promptflow[azure]
promptflow-tools
File: promptflow_repo/promptflow/docs/README.md

# Promptflow documentation contribution guidelines
This folder contains the source code for [prompt flow documentation site](https://microsoft.github.io/promptflow/).
This readme file will not be included in the doc site above. It serves as a guide for promptflow documentation contributors.
## Content
Below is a table of important doc pages.
| Category | Article |
|----------------|----------------|
|Quick start|[Getting started with prompt flow](./how-to-guides/quick-start.md)|
|Concepts|[Flows](./concepts/concept-flows.md)<br> [Tools](./concepts/concept-tools.md)<br> [Connections](./concepts/concept-connections.md)<br> [Variants](./concepts/concept-variants.md)<br> |
|How-to guides|[How to initialize and test a flow](./how-to-guides/init-and-test-a-flow.md) <br>[How to run and evaluate a flow](./how-to-guides/run-and-evaluate-a-flow/index.md)<br> [How to tune prompts using variants](./how-to-guides/tune-prompts-with-variants.md)<br>[How to deploy a flow](./how-to-guides/deploy-a-flow/index.md)<br>[How to create and use your own tool package](./how-to-guides/develop-a-tool/create-and-use-tool-package.md)|
|Tools reference|[LLM tool](./reference/tools-reference/llm-tool.md)<br> [Prompt tool](./reference/tools-reference/prompt-tool.md)<br> [Python tool](./reference/tools-reference/python-tool.md)<br> [Embedding tool](./reference/tools-reference/embedding_tool.md)<br>[SERP API tool](./reference/tools-reference/serp-api-tool.md) ||
## Writing tips
0. Reach the doc source repository by clicking `Edit this page` on any page.
![Edit this page](./media/edit-this-page.png)
1. Please use `:::{admonition}` for experimental features or notes, and an admonition with dropdown for the Limitations part.
2. Please use `::::{tab-set}` to group your SDK/CLI examples, and put the CLI first. Use `:sync:` to sync selections across multiple tab sets.
3. If you are unclear about the above lines, refer to [get started](./how-to-guides/quick-start.md) to see the usage.
4. Add gif: Use [ScreenToGif](https://www.screentogif.com/) to record your screen, edit and save as a gif.
5. Reach more element style at [Sphinx Design Components](https://pydata-sphinx-theme.readthedocs.io/en/latest/user_guide/web-components.html).
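For reference, the directives mentioned in points 1 and 2 look roughly like the following MyST sketch (the commands inside the tabs are placeholders, not copied from a real page):

```markdown
:::{admonition} Experimental feature
This is an experimental feature and may change in future releases.
:::

::::{tab-set}
:::{tab-item} CLI
:sync: cli
Run `pf flow test --flow ./my_flow` from the command line.
:::
:::{tab-item} SDK
:sync: sdk
Call the equivalent test method on the Python client.
:::
::::
```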
## Preview your changes
**Local build**: We suggest using a local build at the beginning, as it's fast and efficient.
Please refer to [How to build doc site locally](./dev/documentation_guidelines.md#how-to-build-doc-site-locally).
## FAQ
### Adding image in doc
Please use the markdown syntax `![img desc](img link)` to reference images, because the relative path of an image changes after the sphinx build, and images placed in HTML tags cannot be referenced at build time.
### Draw flow chart in doc
We recommend using Mermaid; learn more from the [mermaid syntax doc](https://mermaid-js.github.io/mermaid/#/./flowchart?id=flowcharts-basic-syntax)
- Recommend to install [vscode extension](https://marketplace.visualstudio.com/items?itemName=bierner.markdown-mermaid) to preview graph in vscode.
## Reference
- [md-and-rst](https://coderefinery.github.io/sphinx-lesson/md-and-rst/)
- [sphinx-quickstart](https://www.sphinx-doc.org/en/master/usage/quickstart.html)
File: promptflow_repo/promptflow/docs/index.md

---
myst:
html_meta:
"description lang=en": "Prompt flow Doc"
"google-site-verification": "rEZN-2h5TVqEco07aaMpqNcDx4bjr2czx1Hwfoxydrg"
html_theme.sidebar_secondary.remove: true
---
# Prompt flow
[**Prompt flow**](https://github.com/microsoft/promptflow) is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring. It makes prompt engineering much easier and enables you to build LLM apps with production quality.
With prompt flow, you will be able to:
- **Create [flows](./concepts/concept-flows.md)** that link [LLMs](./reference/tools-reference/llm-tool.md), [prompts](./reference/tools-reference/prompt-tool.md), [Python](./reference/tools-reference/python-tool.md) code and other [tools](./concepts/concept-tools.md) together in an executable workflow.
- **Debug and iterate your flows**, especially the interaction with LLMs with ease.
- **Evaluate your flows**, calculate quality and performance metrics with larger datasets.
- **Integrate the testing and evaluation into your CI/CD system** to ensure the quality of your flow.
- **Deploy your flows** to the serving platform you choose or integrate into your app's code base easily.
- (Optional but highly recommended) **Collaborate with your team** by leveraging the cloud version of [Prompt flow in Azure AI](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/overview-what-is-prompt-flow?view=azureml-api-2).
> Welcome to join us to make prompt flow better by
> participating [discussions](https://github.com/microsoft/promptflow/discussions),
> opening [issues](https://github.com/microsoft/promptflow/issues/new/choose),
> submitting [PRs](https://github.com/microsoft/promptflow/pulls).
This documentation site contains guides for prompt flow [sdk, cli](https://pypi.org/project/promptflow/) and [vscode extension](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow) users.
```{gallery-grid}
:grid-columns: 1 2 2 2
- header: "🚀 Quick Start"
content: "
Quick start and end-to-end tutorials.<br/><br/>
- [Getting started with prompt flow](how-to-guides/quick-start.md)<br/>
- [E2E development tutorial: chat with PDF](https://github.com/microsoft/promptflow/blob/main/examples/tutorials/e2e-development/chat-with-pdf.md)<br/>
- Find more: [tutorials & samples](tutorials/index.md)<br/>
"
- header: "📒 How-to Guides"
content: "
Articles guide user to complete a specific task in prompt flow.<br/><br/>
- [Develop a flow](how-to-guides/develop-a-flow/index.md)<br/>
- [Initialize and test a flow](how-to-guides/init-and-test-a-flow.md)<br/>
- [Run and evaluate a flow](how-to-guides/run-and-evaluate-a-flow/index.md)<br/>
- [Tune prompts using variants](how-to-guides/tune-prompts-with-variants.md)<br/>
- [Develop custom tool](how-to-guides/develop-a-tool/create-and-use-tool-package.md)<br/>
- [Deploy a flow](how-to-guides/deploy-a-flow/index.md)<br/>
- [Process image in flow](how-to-guides/process-image-in-flow.md)
"
```
```{gallery-grid}
:grid-columns: 1 2 2 2
- header: "📑 Concepts"
content: "
Introduction of key concepts of prompt flow.<br/><br/>
- [Flows](concepts/concept-flows.md)<br/>
- [Tools](concepts/concept-tools.md)<br/>
- [Connections](concepts/concept-connections.md)<br/>
- [Design principles](concepts/design-principles.md)<br/>
"
- header: "🔍 Reference"
content: "
Reference provides technical information about prompt flow API.<br/><br/>
- Command line Interface reference: [pf](reference/pf-command-reference.md)<br/>
- Python library reference: [promptflow](reference/python-library-reference/promptflow.md)<br/>
- Tool reference: [LLM Tool](reference/tools-reference/llm-tool.md), [Python Tool](reference/tools-reference/python-tool.md), [Prompt Tool](reference/tools-reference/prompt-tool.md)<br/>
"
```
```{toctree}
:hidden:
:maxdepth: 1
how-to-guides/quick-start
```
```{toctree}
:hidden:
:maxdepth: 1
how-to-guides/index
```
```{toctree}
:hidden:
:maxdepth: 1
tutorials/index
```
```{toctree}
:hidden:
:maxdepth: 2
concepts/index
```
```{toctree}
:hidden:
:maxdepth: 1
reference/index
```
```{toctree}
:hidden:
:maxdepth: 1
cloud/index
```
```{toctree}
:hidden:
:maxdepth: 1
integrations/index
```
<!-- promptflow_repo/promptflow/docs/integrations/index.md -->
# Integrations
The Integrations section contains documentation on custom extensions created by the community that expand prompt flow's capabilities.
These include tools that enrich flows, as well as tutorials on innovative ways to use prompt flow.
```{toctree}
:maxdepth: 1
tools/index
llms/index
```
<!-- promptflow_repo/promptflow/docs/integrations/tools/azure-ai-language-tool.md -->
# Azure AI Language
Azure AI Language enables users with task-oriented and optimized pre-trained language models to effectively understand documents and conversations. This Prompt flow tool is a wrapper for various Azure AI Language APIs. The current list of supported capabilities is as follows:
| Name | Description |
|-------------------------------------------|-------------------------------------------------------|
| Abstractive Summarization | Generate abstractive summaries from documents. |
| Extractive Summarization | Extract summaries from documents. |
| Conversation Summarization | Summarize conversations. |
| Entity Recognition | Recognize and categorize entities in documents. |
| Key Phrase Extraction | Extract key phrases from documents. |
| Language Detection | Detect the language of documents. |
| PII Entity Recognition | Recognize and redact PII entities in documents. |
| Sentiment Analysis | Analyze the sentiment of documents. |
| Conversational Language Understanding | Predict intents and entities from user's utterances. |
| Translator | Translate documents. |
## Requirements
- For AzureML users:
follow this [wiki](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/how-to-custom-tool-package-creation-and-usage?view=azureml-api-2#prepare-runtime), starting from `Prepare runtime`. Note that the PyPI package name is `promptflow-azure-ai-language`.
- For local users:
```
pip install promptflow-azure-ai-language
```
## Prerequisites
The tool calls APIs from Azure AI Language. To use it, you must create a connection to an [Azure AI Language resource](https://learn.microsoft.com/en-us/azure/ai-services/language-service/). Create a Language resource first, if necessary.
- In Prompt flow, add a new `CustomConnection`.
- Under the `secrets` field, specify the resource's API key: `api_key: <Azure AI Language Resource api key>`
- Under the `configs` field, specify the resource's endpoint: `endpoint: <Azure AI Language Resource endpoint>`
To use the `Translator` tool, you must set up an additional connection to an [Azure AI Translator resource](https://azure.microsoft.com/en-us/products/ai-services/ai-translator). [Create a Translator resource](https://learn.microsoft.com/en-us/azure/ai-services/translator/create-translator-resource) first, if necessary.
- In Prompt flow, add a new `CustomConnection`.
- Under the `secrets` field, specify the resource's API key: `api_key: <Azure AI Translator Resource api key>`
- Under the `configs` field, specify the resource's endpoint: `endpoint: <Azure AI Translator Resource endpoint>`
- If your Translator Resource is regional and non-global, specify its region under `configs` as well: `region: <Azure AI Translator Resource region>`
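As a sketch, a custom connection for the Language resource defined in YAML might look like the following (the Translator connection follows the same pattern, plus an optional `region` config). The connection name and placeholder values are illustrative, and the `$schema` URL follows the same pattern as the other promptflow schemas referenced in these docs:

```yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/CustomConnection.schema.json
name: azure_ai_language_connection
type: custom
configs:
  endpoint: <azure-ai-language-resource-endpoint>
secrets:
  api_key: <azure-ai-language-resource-api-key>
```

Locally, such a file can then be registered with `pf connection create -f <file>.yml`.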
## Inputs
The tool accepts the following inputs:
- **Abstractive Summarization**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| text | string | The input text. | Yes |
| query | string | The query used to structure summarization. | Yes |
| summary_length | string (enum) | The desired summary length. Enum values are `short`, `medium`, and `long`. | No |
| parse_response     | bool             | Whether the raw API JSON output should be parsed. Default value is `False`. | No |
- **Extractive Summarization**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| text | string | The input text. | Yes |
| query | string | The query used to structure summarization. | Yes |
| sentence_count | int | The desired number of output summary sentences. Default value is `3`. | No |
| sort_by            | string (enum)    | The sorting criteria for extractive summarization results. Enum values are `Offset` to sort results in order of appearance in the text and `Rank` to sort results in order of importance (i.e. rank score) according to the model. Default value is `Offset`. | No |
| parse_response     | bool             | Whether the raw API JSON output should be parsed. Default value is `False`. | No |
- **Conversation Summarization**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| text | string | The input text. Text should be of the following form: `<speaker id>: <speaker text> \n <speaker id>: <speaker text> \n ...` | Yes |
| modality | string (enum) | The modality of the input text. Enum values are `text` for input from a text source, and `transcript` for input from a transcript source. | Yes |
| summary_aspect | string (enum) | The desired summary "aspect" to obtain. Enum values are `chapterTitle` to obtain the chapter title of any conversation, `issue` to obtain the summary of issues in transcripts of web chats and service calls between customer-service agents and customers, `narrative` to obtain the generic summary of any conversation, `resolution` to obtain the summary of resolutions in transcripts of web chats and service calls between customer-service agents and customers, `recap` to obtain a general summary, and `follow-up tasks` to obtain a summary of follow-up or action items. | Yes |
| parse_response     | bool             | Whether the raw API JSON output should be parsed. Default value is `False`. | No |
- **Entity Recognition**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| text | string | The input text. | Yes |
| parse_response     | bool             | Whether the raw API JSON output should be parsed. Default value is `False`. | No |
- **Key Phrase Extraction**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| text | string | The input text. | Yes |
| parse_response     | bool             | Whether the raw API JSON output should be parsed. Default value is `False`. | No |
- **Language Detection**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| text | string | The input text. | Yes |
| parse_response     | bool             | Whether the raw API JSON output should be parsed. Default value is `False`. | No |
- **PII Entity Recognition**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| text | string | The input text. | Yes |
| domain | string (enum) | The PII domain used for PII Entity Recognition. Enum values are `none` for no domain, or `phi` to indicate that entities in the Personal Health domain should be redacted. Default value is `none`. | No |
| categories | list[string] | Describes the PII categories to return. Default value is `[]`. | No |
| parse_response     | bool             | Whether the raw API JSON output should be parsed. Default value is `False`. | No |
- **Sentiment Analysis**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| text | string | The input text. | Yes |
| opinion_mining     | bool             | Whether opinion mining should be enabled. Default value is `False`. | No |
| parse_response     | bool             | Whether the raw API JSON output should be parsed. Default value is `False`. | No |
- **Conversational Language Understanding**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Language resource. | Yes |
| language | string | The ISO 639-1 code for the language of the input. | Yes |
| utterances         | string           | A single user utterance or a JSON array of user utterances. | Yes |
| project_name | string | The Conversational Language Understanding project to be called. | Yes |
| deployment_name | string | The Conversational Language Understanding project deployment to be called. | Yes |
| parse_response     | bool             | Whether the raw API JSON output should be parsed. Default value is `False`. | No |
- **Translator**:
| Name | Type | Description | Required |
|--------------------|------------------|-------------|----------|
| connection | CustomConnection | The created connection to an Azure AI Translator resource. | Yes |
| text | string | The input text. | Yes |
| to | list[string] | The languages to translate the input text to. | Yes |
| source_language | string | The language of the input text. | No |
| parse_response     | bool             | Whether the raw API JSON output should be parsed. Default value is `False`. | No |
## Outputs
If the input parameter `parse_response` is set to `False` (the default), the raw API JSON output is returned as a string. Refer to the [REST API reference](https://learn.microsoft.com/en-us/rest/api/language/) for details on the API output. For Conversational Language Understanding, the output is a list of raw API JSON responses, one response for each user utterance in the input.
When `parse_response` is set to `True`, the tool will parse API output as follows:
| Name | Type | Description |
|-------------------------------------------------------------|--------|---------------------|
| Abstractive Summarization | string | Abstractive summary. |
| Extractive Summarization | list[string] | Extracted summary sentence strings. |
| Conversation Summarization | string | Conversation summary based on `summary_aspect`. |
| Entity Recognition | dict[string, string] | Recognized entities, where keys are entity names and values are entity categories. |
| Key Phrase Extraction | list[string] | Extracted key phrases as strings. |
| Language Detection | string | Detected language's ISO 639-1 code. |
| PII Entity Recognition | string | Input `text` with PII entities redacted. |
| Sentiment Analysis | string | Analyzed sentiment: `positive`, `neutral`, or `negative`. |
| Conversational Language Understanding | list[dict[string, string]] | List of user utterances and associated intents. |
| Translator | dict[string, string] | Translated text, where keys are the translated languages and values are the translated texts. |
<!-- promptflow_repo/promptflow/docs/integrations/tools/index.md -->
# Custom Tools
This section contains documentation for custom tools created by the community to extend Prompt flow's capabilities for specific use cases. These tools are developed following the guide on [Creating and Using Tool Packages](../../how-to-guides/develop-a-tool/create-and-use-tool-package.md). They are not officially maintained or endorsed by the Prompt flow team. For questions or issues when using a tool, please use the support contact link in the table below.
## Tool Package Index
The table below provides an index of custom tool packages. The columns contain:
- **Package Name:** The name of the tool package. Links to the package documentation.
- **Description:** A short summary of what the tool package does.
- **Owner:** The creator/maintainer of the tool package.
- **Support Contact:** Link to contact for support and reporting new issues.
| Package Name | Description | Owner | Support Contact |
|-|-|-|-|
| promptflow-azure-ai-language | Collection of Azure AI Language Prompt flow tools. | Sean Murray | [email protected] |
```{toctree}
:maxdepth: 1
:hidden:
azure-ai-language-tool
```
<!-- promptflow_repo/promptflow/docs/integrations/llms/index.md -->
# Alternative LLMs
This section provides tutorials on incorporating alternative large language models into prompt flow.
```{toctree}
:maxdepth: 1
:hidden:
```
<!-- promptflow_repo/promptflow/docs/reference/pfazure-command-reference.md -->
# pfazure
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../how-to-guides/faq.md#stable-vs-experimental).
:::
Manage prompt flow resources on Azure with the prompt flow CLI.
| Command | Description |
| --- | --- |
| [pfazure flow](#pfazure-flow) | Manage flows. |
| [pfazure run](#pfazure-run) | Manage runs. |
## pfazure flow
Manage flows.
| Command | Description |
| --- | --- |
| [pfazure flow create](#pfazure-flow-create) | Create a flow. |
| [pfazure flow list](#pfazure-flow-list) | List flows in a workspace. |
### pfazure flow create
Create a flow in Azure AI from a local flow folder.
```bash
pfazure flow create [--flow]
[--set]
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--flow`
Local path to the flow directory.
`--set`
Update an object by specifying a property path and value to set.
- `display_name`: Flow display name that will be created in remote. Defaults to the flow folder name + timestamp if not specified.
- `type`: Flow type. Defaults to "standard" if not specified. Available types are: "standard", "evaluation", "chat".
- `description`: Flow description. e.g. "--set description=\<description\>."
- `tags`: Flow tags. e.g. "--set tags.key1=value1 tags.key2=value2."
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
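As an illustrative sketch (all resource names below are placeholders), creating a chat flow from a local folder with a custom display name might look like:

```bash
pfazure flow create --flow ./my-chat-flow \
    --set display_name="My chat flow" type=chat \
    --subscription <subscription-id> --resource-group <resource-group> --workspace-name <workspace-name>
```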
### pfazure flow list
List remote flows on Azure AI.
```bash
pfazure flow list [--max-results]
[--include-others]
[--type]
[--output]
[--archived-only]
[--include-archived]
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--max-results -r`
Max number of results to return. Default is 50, upper bound is 100.
`--include-others`
Include flows created by other owners. By default only flows created by the current user are returned.
`--type`
Filter flows by type. Available types are: "standard", "evaluation", "chat".
`--archived-only`
List archived flows only.
`--include-archived`
List archived flows and active flows.
`--output -o`
Output format. Allowed values: `json`, `table`. Default: `json`.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
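For instance, a sketch of listing the 10 most recent chat flows in table format (workspace details are placeholders):

```bash
pfazure flow list --max-results 10 --type chat --output table \
    --resource-group <resource-group> --workspace-name <workspace-name>
```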
## pfazure run
Manage prompt flow runs.
| Command | Description |
| --- | --- |
| [pfazure run create](#pfazure-run-create) | Create a run. |
| [pfazure run list](#pfazure-run-list) | List runs in a workspace. |
| [pfazure run show](#pfazure-run-show) | Show details for a run. |
| [pfazure run stream](#pfazure-run-stream) | Stream run logs to the console. |
| [pfazure run show-details](#pfazure-run-show-details) | Show a run details. |
| [pfazure run show-metrics](#pfazure-run-show-metrics) | Show run metrics. |
| [pfazure run visualize](#pfazure-run-visualize) | Visualize a run. |
| [pfazure run archive](#pfazure-run-archive) | Archive a run. |
| [pfazure run restore](#pfazure-run-restore) | Restore a run. |
| [pfazure run update](#pfazure-run-update) | Update a run. |
| [pfazure run download](#pfazure-run-download) | Download a run. |
### pfazure run create
Create a run.
```bash
pfazure run create [--file]
[--flow]
[--data]
[--column-mapping]
[--run]
[--variant]
[--stream]
[--environment-variables]
[--connections]
[--set]
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--file -f`
Local path to the YAML file containing the prompt flow run specification; can be overwritten by other parameters. Reference [here](https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json) for YAML schema.
`--flow`
Local path to the flow directory.
`--data`
Local path to the data file or remote data. e.g. azureml:name:version.
`--column-mapping`
Inputs column mapping; use `${data.xx}` to refer to data columns, `${run.inputs.xx}` to refer to the referenced run's input columns, and `${run.outputs.xx}` to refer to the referenced run's output columns.
`--run`
Referenced flow run name. For example, you can run an evaluation flow against an existing run: `pfazure run create --flow evaluation_flow_dir --run existing_bulk_run --column-mapping url='${data.url}'`.
`--variant`
Node & variant name in format of `${node_name.variant_name}`.
`--stream -s`
Indicates whether to stream the run's logs to the console.
default value: False
`--environment-variables`
Environment variables to set by specifying a property path and value. Example:
`--environment-variables key1='${my_connection.api_key}' key2='value2'`. References to
connection keys will be resolved to their actual values, and all environment variables
specified will be set into `os.environ`.
`--connections`
Overwrite node level connections with provided value.
Example: `--connections node1.connection=test_llm_connection node1.deployment_name=gpt-35-turbo`
`--set`
Update an object by specifying a property path and value to set.
Example: `--set property1.property2=<value>`.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
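Putting several of these parameters together, a minimal run specification YAML passed via `--file` might look like the following sketch (the flow path, data path, and column names are illustrative):

```yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
flow: ../flows/web-classification
data: ../data/urls.jsonl
column_mapping:
  url: ${data.url}
```

It could then be submitted with `pfazure run create --file run.yml --stream`.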
### pfazure run list
List runs in a workspace.
```bash
pfazure run list [--archived-only]
[--include-archived]
[--max-results]
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--archived-only`
List archived runs only.
default value: False
`--include-archived`
List archived runs and active runs.
default value: False
`--max-results -r`
Max number of results to return. Default is 50, upper bound is 100.
default value: 50
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure run show
Show details for a run.
```bash
pfazure run show --name
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--name -n`
Name of the run.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure run stream
Stream run logs to the console.
```bash
pfazure run stream --name
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--name -n`
Name of the run.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure run show-details
Show a run details.
```bash
pfazure run show-details --name
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--name -n`
Name of the run.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure run show-metrics
Show run metrics.
```bash
pfazure run show-metrics --name
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--name -n`
Name of the run.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure run visualize
Visualize a run.
```bash
pfazure run visualize --name
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--name -n`
Name of the run.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure run archive
Archive a run.
```bash
pfazure run archive --name
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--name -n`
Name of the run.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure run restore
Restore a run.
```bash
pfazure run restore --name
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Parameters
`--name -n`
Name of the run.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure run update
Update a run's metadata, such as `display name`, `description` and `tags`.
```bash
pfazure run update --name
[--set display_name="<value>" description="<value>" tags.key="<value>"]
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Examples
Set `display name`, `description` and `tags`:
```bash
pfazure run update --name <run_name> --set display_name="<value>" description="<value>" tags.key="<value>"
```
#### Parameters
`--name -n`
Name of the run.
`--set`
Set meta information of the run, like `display_name`, `description` or `tags`. Example: --set <key>=<value>.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
### pfazure run download
Download a run's metadata, such as `input`, `output`, `snapshot` and `artifact`. After the download is finished, you can use `pf run create --source <run-info-local-folder>` to register this run as a local run record; then you can use commands like `pf run show/visualize` to inspect the run, just like a run created from a local flow.
```bash
pfazure run download --name
[--output]
[--overwrite]
[--subscription]
[--resource-group]
[--workspace-name]
```
#### Examples
Download a run data to local:
```bash
pfazure run download --name <name> --output <output-folder-path>
```
#### Parameters
`--name -n`
Name of the run.
`--output -o`
Output folder path to store the downloaded run data. Defaults to `~/.promptflow/.runs` if not specified.
`--overwrite`
Overwrite the existing run data if the output folder already exists. Defaults to `False` if not specified.
`--subscription`
Subscription id, required when there is no default value from `az configure`.
`--resource-group -g`
Resource group name, required when there is no default value from `az configure`.
`--workspace-name -w`
Workspace name, required when there is no default value from `az configure`.
<!-- promptflow_repo/promptflow/docs/reference/flow-yaml-schema-reference.md -->
# Flow YAML Schema
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../how-to-guides/faq.md#stable-vs-experimental).
:::
The source JSON schema can be found at [Flow.schema.json](https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json)
## YAML syntax
| Key | Type | Description |
|----------------------------|-----------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `$schema` | string | The YAML schema. If you use the prompt flow VS Code extension to author the YAML file, including `$schema` at the top of your file enables you to invoke schema and resource completions. |
| `inputs` | object | Dictionary of flow inputs. The key is a name for the input within the context of the flow and the value is the flow input definition. |
| `inputs.<input_name>` | object | The flow input definition. See [Flow input](#flow-input) for the set of configurable properties. |
| `outputs` | object | Dictionary of flow outputs. The key is a name for the output within the context of the flow and the value is the flow output definition. |
| `outputs.<output_name>` | object | The component output definition. See [Flow output](#flow-output) for the set of configurable properties. |
| `nodes` | array | Sets of dictionary of individual nodes to run as steps within the flow. Node can use built-in tool or third-party tool. See [Nodes](#nodes) for more information. |
| `node_variants` | object | Dictionary of nodes with variants. The key is the node name and value contains variants definition and `default_variant_id`. See [Node variants](#node-variants) for more information. |
| `environment`              | object    | The environment to use for the flow. The key can be `image` or `python_requirements_txt` and the value can be either an image or a python requirements text file. |
| `additional_includes`      | array     | Additional includes is a list of files that can be shared among flows. Users can specify additional files and folders used by the flow, and prompt flow will copy them all into the snapshot during flow creation. |
### Flow input
| Key | Type | Description | Allowed values |
|-------------------|-------------------------------------------|------------------------------------------------------|-----------------------------------------------------|
| `type` | string | The type of flow input. | `int`, `double`, `bool`, `string`, `list`, `object`, `image` |
| `description` | string | Description of the input. | |
| `default` | int, double, bool, string, list, object, image | The default value for the input. | |
| `is_chat_input` | boolean | Whether the input is the chat flow input. | |
| `is_chat_history` | boolean | Whether the input is the chat history for chat flow. | |
### Flow output
| Key | Type | Description | Allowed values |
|------------------|---------|-------------------------------------------------------------------------------|-----------------------------------------------------|
| `type` | string | The type of flow output. | `int`, `double`, `bool`, `string`, `list`, `object` |
| `description` | string | Description of the output. | |
| `reference` | string | A reference to the node output, e.g. ${<node_name>.output.<node_output_name>} | |
| `is_chat_output` | boolean | Whether the output is the chat flow output. | |
### Nodes
Nodes is a set of nodes, each of which is a dictionary with the following fields. Below, we only show the common fields of a single node using a built-in tool.
| Key | Type | Description | Allowed values |
|----------------|--------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------|
| `name` | string | The name of the node. | |
| `type` | string | The type of the node. | Type of built-in tool like `Python`, `Prompt`, `LLM` and third-party tool like `Vector Search`, etc. |
| `inputs` | object | Dictionary of node inputs. The key is the input name and the value can be primitive value or a reference to the flow input or the node output, e.g. `${inputs.<flow_input_name>}`, `${<node_name>.output}` or `${<node_name>.output.<node_output_name>}` | |
| `source` | object | Dictionary of tool source used by the node. The key contains `type`, `path` and `tool`. The type can be `code`, `package` and `package_with_prompt`. | |
| `provider` | string | It indicates the provider of the tool. Used when the `type` is LLM. | `AzureOpenAI` or `OpenAI` |
| `connection` | string | The connection name which has been created before. Used when the `type` is LLM. | |
| `api` | string | The api name of the provider. Used when the `type` is LLM. | |
| `module` | string | The module name of the tool using by the node. Used when the `type` is LLM. | |
| `use_variants` | bool | Whether the node has variants. | |
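Putting these fields together, a `nodes` section with one LLM node and one Python node might look like the following sketch. The node names, file paths, deployment, and connection name are illustrative; note that in flow YAML files the tool type is conventionally written in lowercase (`llm`, `python`):

```yaml
nodes:
- name: classify_with_llm
  type: llm
  source:
    type: code
    path: classify_with_llm.jinja2
  inputs:
    deployment_name: gpt-35-turbo
    text: ${inputs.question}
  provider: AzureOpenAI
  connection: my_azure_open_ai_connection
  api: chat
- name: convert_to_dict
  type: python
  source:
    type: code
    path: convert_to_dict.py
  inputs:
    input_str: ${classify_with_llm.output}
```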
### Node variants
Node variants is a dictionary containing the variant definitions for nodes with variants, with the respective node names as dictionary keys.
Below, we explore the variants for a single node.
| Key | Type | Description | Allowed values |
|----------------------|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------|
| `<node_name>` | string | The name of the node. | |
| `default_variant_id` | string | Default variant id. | |
| `variants`           | object   | This dictionary contains all node variations, with the variant id serving as the key and a node definition dictionary as the corresponding value. Within the node definition dictionary, the key `node` should contain a variant definition similar to [Nodes](#nodes), excluding the `name` field.           |                |
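For example, a `node_variants` section for a node named `summarize_text_content` might look like this sketch (the variant ids, file paths, and inputs are illustrative; the node itself would set `use_variants: true`):

```yaml
node_variants:
  summarize_text_content:
    default_variant_id: variant_0
    variants:
      variant_0:
        node:
          type: llm
          source:
            type: code
            path: summarize_text_content.jinja2
          inputs:
            deployment_name: gpt-35-turbo
            temperature: 0.2
            text: ${fetch_text_content_from_url.output}
          provider: AzureOpenAI
          connection: my_connection
          api: chat
      variant_1:
        node:
          type: llm
          source:
            type: code
            path: summarize_text_content__variant_1.jinja2
          inputs:
            deployment_name: gpt-35-turbo
            temperature: 0.7
            text: ${fetch_text_content_from_url.output}
          provider: AzureOpenAI
          connection: my_connection
          api: chat
```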
## Examples
Flow examples are available in the [GitHub repository](https://github.com/microsoft/promptflow/tree/main/examples/flows).
- [basic](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/basic)
- [web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification)
- [basic-chat](https://github.com/microsoft/promptflow/tree/main/examples/flows/chat/basic-chat)
- [chat-with-pdf](https://github.com/microsoft/promptflow/tree/main/examples/flows/chat/chat-with-pdf)
- [eval-basic](https://github.com/microsoft/promptflow/tree/main/examples/flows/evaluation/eval-basic)
<!-- promptflow_repo/promptflow/docs/reference/index.md -->

# Reference
**Current stable version:**
- [promptflow](https://pypi.org/project/promptflow):
[![PyPI version](https://badge.fury.io/py/promptflow.svg)](https://badge.fury.io/py/promptflow)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/promptflow)](https://pypi.org/project/promptflow/)
- [promptflow-tools](https://pypi.org/project/promptflow-tools/):
[![PyPI version](https://badge.fury.io/py/promptflow-tools.svg)](https://badge.fury.io/py/promptflow-tools)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/promptflow-tools)](https://pypi.org/project/promptflow-tools/)
```{toctree}
:caption: Command Line Interface
:maxdepth: 1
pf-command-reference.md
pfazure-command-reference.md
```
```{toctree}
:caption: Python Library Reference
:maxdepth: 4
python-library-reference/promptflow
```
```{toctree}
:caption: Tool Reference
:maxdepth: 1
tools-reference/llm-tool
tools-reference/prompt-tool
tools-reference/python-tool
tools-reference/serp-api-tool
tools-reference/faiss_index_lookup_tool
tools-reference/vector_db_lookup_tool
tools-reference/embedding_tool
tools-reference/open_model_llm_tool
tools-reference/openai-gpt-4v-tool
tools-reference/contentsafety_text_tool
tools-reference/aoai-gpt4-turbo-vision
```
```{toctree}
:caption: YAML Schema
:maxdepth: 1
flow-yaml-schema-reference.md
run-yaml-schema-reference.md
```
<!-- promptflow_repo/promptflow/docs/reference/run-yaml-schema-reference.md -->

# Run YAML Schema
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../how-to-guides/faq.md#stable-vs-experimental).
:::
The source JSON schema can be found at [Run.schema.json](https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json)
## YAML syntax
| Key | Type | Description |
|-------------------------|---------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `$schema` | string | The YAML schema. If you use the prompt flow VS Code extension to author the YAML file, including $schema at the top of your file enables you to invoke schema and resource completions. |
| `name` | string | The name of the run. |
| `flow` | string | Path of the flow directory. |
| `description` | string | Description of the run. |
| `display_name` | string | Display name of the run. |
| `runtime` | string | The runtime for the run. Only supported for cloud run. |
| `data`                  | string        | Input data for the run. Local path or remote URI (starting with `azureml:` or a public URL) are supported. Note: remote URI is only supported for cloud runs.                                                                                                             |
| `run` | string | Referenced flow run name. For example, you can run an evaluation flow against an existing run. |
| `column_mapping` | object | Inputs column mapping, use `${data.xx}` to refer to data columns, use `${run.inputs.xx}` to refer to referenced run's data columns, and `${run.outputs.xx}` to refer to run outputs columns. |
| `connections` | object | Overwrite node level connections with provided value. Example: --connections node1.connection=test_llm_connection node1.deployment_name=gpt-35-turbo |
| `environment_variables` | object/string | Environment variables to set by specifying a property path and value. Example: `{"key1": "${my_connection.api_key}"}`. The value reference to connection keys will be resolved to the actual value, and all environment variables specified will be set into os.environ.   |
| `properties` | object | Dictionary of properties of the run. |
| `tags` | object | Dictionary of tags of the run. |
| `resources` | object | Dictionary of resources used for automatic runtime. Only supported for cloud run. See [Resources Schema](#resources-schema) for the set of configurable properties. |
| `variant` | string | The variant for the run. |
| `status`                | string        | The status of the run. Only available when getting an existing run. Won't take effect if set when creating a run.                                                                                                                                                         |
### Resources Schema
| Key | Type | Description |
|-------------------------------------|---------|-------------------------------------------------------------|
| `instance_type` | string | The instance type for automatic runtime of the run. |
| `idle_time_before_shutdown_minutes` | integer | The idle time before automatic runtime shutdown in minutes. |
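Combining the fields above, a minimal run specification might look like the following sketch (the flow path, data file, column names, and connection name are illustrative):

```yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
flow: ../web-classification
data: data.jsonl
column_mapping:
  url: ${data.url}
environment_variables:
  AZURE_OPENAI_API_KEY: ${open_ai_connection.api_key}
tags:
  owner: my-team
```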
## Examples
Run examples are available in the [GitHub repository](https://github.com/microsoft/promptflow/tree/main/examples/flows).
- [basic](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/basic/run.yml)
- [web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification/run.yml)
- [flow-with-additional-includes](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/flow-with-additional-includes/run.yml)
<!-- promptflow_repo/promptflow/docs/reference/pf-command-reference.md -->

# pf
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../how-to-guides/faq.md#stable-vs-experimental).
:::
Manage prompt flow resources with the prompt flow CLI.
| Command | Description |
|---------------------------------|---------------------------------|
| [pf flow](#pf-flow) | Manage flows. |
| [pf connection](#pf-connection) | Manage connections. |
| [pf run](#pf-run) | Manage runs. |
| [pf tool](#pf-tool) | Init or list tools. |
| [pf config](#pf-config) | Manage config for current user. |
| [pf upgrade](#pf-upgrade) | Upgrade prompt flow CLI. |
## pf flow
Manage prompt flow flows.
| Command | Description |
| --- | --- |
| [pf flow init](#pf-flow-init) | Initialize a prompt flow directory. |
| [pf flow test](#pf-flow-test) | Test the prompt flow or flow node. |
| [pf flow validate](#pf-flow-validate) | Validate a flow and generate `flow.tools.json` for it. |
| [pf flow build](#pf-flow-build) | Build a flow for further sharing or deployment. |
| [pf flow serve](#pf-flow-serve) | Serve a flow as an endpoint. |
### pf flow init
Initialize a prompt flow directory.
```bash
pf flow init [--flow]
[--entry]
[--function]
[--prompt-template]
[--type]
[--yes]
```
#### Examples
Create a flow folder with code, prompts and YAML specification of the flow.
```bash
pf flow init --flow <path-to-flow-directory>
```
Create an evaluation prompt flow
```bash
pf flow init --flow <path-to-flow-directory> --type evaluation
```
Create a flow in existing folder
```bash
pf flow init --flow <path-to-existing-folder> --entry <entry.py> --function <function-name> --prompt-template <path-to-prompt-template.md>
```
#### Optional Parameters
`--flow`
The flow name to create.
`--entry`
The entry file name.
`--function`
The function name in entry file.
`--prompt-template`
The prompt template parameter and assignment.
`--type`
The initialized flow type.
accepted values: standard, evaluation, chat
`--yes --assume-yes -y`
Automatic yes to all prompts; assume 'yes' as answer to all prompts and run non-interactively.
### pf flow test
Test the prompt flow or flow node.
```bash
pf flow test --flow
[--inputs]
[--node]
[--variant]
[--debug]
[--interactive]
[--verbose]
```
#### Examples
Test the flow.
```bash
pf flow test --flow <path-to-flow-directory>
```
Test the flow with specified inputs.
```bash
pf flow test --flow <path-to-flow-directory> --inputs data_key1=data_val1 data_key2=data_val2
```
Test the flow with the specified variant.
```bash
pf flow test --flow <path-to-flow-directory> --variant '${node_name.variant_name}'
```
Test the single node in the flow.
```bash
pf flow test --flow <path-to-flow-directory> --node <node_name>
```
Debug the single node in the flow.
```bash
pf flow test --flow <path-to-flow-directory> --node <node_name> --debug
```
Chat in the flow.
```bash
pf flow test --flow <path-to-flow-directory> --node <node_name> --interactive
```
#### Required Parameter
`--flow`
The flow directory to test.
#### Optional Parameters
`--inputs`
Input data for the flow. Example: --inputs data1=data1_val data2=data2_val
`--node`
The name of the node in the flow to be tested.
`--variant`
Node & variant name in format of ${node_name.variant_name}.
`--debug`
Debug the single node in the flow.
`--interactive`
Start an interactive chat session for a chat flow.
`--verbose`
Displays the output for each step in the chat flow.
### pf flow validate
Validate the prompt flow and generate a `flow.tools.json` under `.promptflow`. This file is required when using a flow as a component in an Azure ML pipeline.
```bash
pf flow validate --source
[--debug]
[--verbose]
```
#### Examples
Validate the flow.
```bash
pf flow validate --source <path-to-flow>
```
#### Required Parameter
`--source`
The flow source to validate.
### pf flow build
Build a flow for further sharing or deployment.
```bash
pf flow build --source
--output
--format
[--variant]
[--verbose]
[--debug]
```
#### Examples
Build a flow in docker format, which can be built into a Docker image via `docker build`.
```bash
pf flow build --source <path-to-flow> --output <output-path> --format docker
```
Build a flow as docker with specific variant.
```bash
pf flow build --source <path-to-flow> --output <output-path> --format docker --variant '${node_name.variant_name}'
```
#### Required Parameter
`--source`
The flow or run source to be used.
`--output`
The folder to output the built flow. It needs to be empty or non-existent.
`--format`
The format to build the flow into.
#### Optional Parameters
`--variant`
Node & variant name in format of ${node_name.variant_name}.
`--verbose`
Show more details for each step during build.
`--debug`
Show debug information during build.
### pf flow serve
Serve a flow as an endpoint.
```bash
pf flow serve --source
[--port]
[--host]
[--environment-variables]
[--verbose]
[--debug]
[--skip-open-browser]
```
#### Examples
Serve flow as an endpoint.
```bash
pf flow serve --source <path-to-flow>
```
Serve flow as an endpoint with specific port and host.
```bash
pf flow serve --source <path-to-flow> --port <port> --host <host> --environment-variables key1="`${my_connection.api_key}`" key2="value2"
```
#### Required Parameter
`--source`
The flow or run source to be used.
#### Optional Parameters
`--port`
The port on which endpoint to run.
`--host`
The host of endpoint.
`--environment-variables`
Environment variables to set by specifying a property path and value. Example: --environment-variable key1="\`${my_connection.api_key}\`" key2="value2". The value reference to connection keys will be resolved to the actual value, and all environment variables specified will be set into `os.environ`.
`--verbose`
Show more details for each step during serve.
`--debug`
Show debug information during serve.
`--skip-open-browser`
Skip opening the browser after serving. Store-true parameter.
## pf connection
Manage prompt flow connections.
| Command | Description |
| --- | --- |
| [pf connection create](#pf-connection-create) | Create a connection. |
| [pf connection update](#pf-connection-update) | Update a connection. |
| [pf connection show](#pf-connection-show) | Show details of a connection. |
| [pf connection list](#pf-connection-list) | List all connections. |
| [pf connection delete](#pf-connection-delete) | Delete a connection. |
### pf connection create
Create a connection.
```bash
pf connection create --file
[--name]
[--set]
```
#### Examples
Create a connection with YAML file.
```bash
pf connection create -f <yaml-filename>
```
Create a connection with YAML file with override.
```bash
pf connection create -f <yaml-filename> --set api_key="<api-key>"
```
Create a custom connection with .env file; note that overrides specified by `--set` will be ignored.
```bash
pf connection create -f .env --name <name>
```
#### Required Parameter
`--file -f`
Local path to the YAML file containing the prompt flow connection specification.
#### Optional Parameters
`--name -n`
Name of the connection.
`--set`
Update an object by specifying a property path and value to set. Example: --set property1.property2=.
### pf connection update
Update a connection.
```bash
pf connection update --name
[--set]
```
#### Example
Update a connection.
```bash
pf connection update -n <name> --set api_key="<api-key>"
```
#### Required Parameter
`--name -n`
Name of the connection.
#### Optional Parameter
`--set`
Update an object by specifying a property path and value to set. Example: --set property1.property2=.
### pf connection show
Show details of a connection.
```bash
pf connection show --name
```
#### Required Parameter
`--name -n`
Name of the connection.
### pf connection list
List all connections.
```bash
pf connection list
```
### pf connection delete
Delete a connection.
```bash
pf connection delete --name
```
#### Required Parameter
`--name -n`
Name of the connection.
## pf run
Manage prompt flow runs.
| Command | Description |
| --- | --- |
| [pf run create](#pf-run-create) | Create a run. |
| [pf run update](#pf-run-update) | Update a run metadata, including display name, description and tags. |
| [pf run stream](#pf-run-stream) | Stream run logs to the console. |
| [pf run list](#pf-run-list) | List runs. |
| [pf run show](#pf-run-show) | Show details for a run. |
| [pf run show-details](#pf-run-show-details) | Preview a run's input(s) and output(s). |
| [pf run show-metrics](#pf-run-show-metrics) | Print run metrics to the console. |
| [pf run visualize](#pf-run-visualize) | Visualize a run. |
| [pf run archive](#pf-run-archive) | Archive a run. |
| [pf run restore](#pf-run-restore) | Restore an archived run. |
### pf run create
Create a run.
```bash
pf run create [--file]
[--flow]
[--data]
[--column-mapping]
[--run]
[--variant]
[--stream]
[--environment-variables]
[--connections]
[--set]
[--source]
```
#### Examples
Create a run with YAML file.
```bash
pf run create -f <yaml-filename>
```
Create a run with YAML file and replace another data in the YAML file.
```bash
pf run create -f <yaml-filename> --data <path-to-new-data-file-relative-to-yaml-file>
```
Create a run from a flow directory and reference an existing run.
```bash
pf run create --flow <path-to-flow-directory> --data <path-to-data-file> --column-mapping groundtruth='${data.answer}' prediction='${run.outputs.category}' --run <run-name> --variant '${summarize_text_content.variant_0}' --stream
```
Create a run from an existing run record folder.
```bash
pf run create --source <path-to-run-folder>
```
#### Optional Parameters
`--file -f`
Local path to the YAML file containing the prompt flow run specification; can be overwritten by other parameters. Reference [here](https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json) for YAML schema.
`--flow`
Local path to the flow directory. If --file is provided, this path should be relative to the file.
`--data`
Local path to the data file. If --file is provided, this path should be relative to the file.
`--column-mapping`
Inputs column mapping, use `${data.xx}` to refer to data columns, use `${run.inputs.xx}` to refer to referenced run's data columns, and `${run.outputs.xx}` to refer to run outputs columns.
`--run`
Referenced flow run name. For example, you can run an evaluation flow against an existing run: `pf run create --flow evaluation_flow_dir --run existing_bulk_run`.
`--variant`
Node & variant name in format of `${node_name.variant_name}`.
`--stream -s`
Indicates whether to stream the run's logs to the console.
default value: False
`--environment-variables`
Environment variables to set by specifying a property path and value. Example:
`--environment-variable key1='${my_connection.api_key}' key2='value2'`. The value reference
to connection keys will be resolved to the actual value, and all environment variables
specified will be set into os.environ.
`--connections`
Overwrite node level connections with provided value.
Example: `--connections node1.connection=test_llm_connection node1.deployment_name=gpt-35-turbo`
`--set`
Update an object by specifying a property path and value to set.
Example: `--set property1.property2=<value>`.
`--source`
Local path to the existing run record folder.
### pf run update
Update a run metadata, including display name, description and tags.
```bash
pf run update --name
[--set]
```
#### Example
Update a run
```bash
pf run update -n <name> --set display_name="<display-name>" description="<description>" tags.key="value"
```
#### Required Parameter
`--name -n`
Name of the run.
#### Optional Parameter
`--set`
Update an object by specifying a property path and value to set. Example: --set property1.property2=.
### pf run stream
Stream run logs to the console.
```bash
pf run stream --name
```
#### Required Parameter
`--name -n`
Name of the run.
### pf run list
List runs.
```bash
pf run list [--all-results]
[--archived-only]
[--include-archived]
[--max-results]
```
#### Optional Parameters
`--all-results`
Returns all results.
default value: False
`--archived-only`
List archived runs only.
default value: False
`--include-archived`
List archived runs and active runs.
default value: False
`--max-results -r`
Max number of results to return. Default is 50.
default value: 50
### pf run show
Show details for a run.
```bash
pf run show --name
```
#### Required Parameter
`--name -n`
Name of the run.
### pf run show-details
Preview a run's input(s) and output(s).
```bash
pf run show-details --name
```
#### Required Parameter
`--name -n`
Name of the run.
### pf run show-metrics
Print run metrics to the console.
```bash
pf run show-metrics --name
```
#### Required Parameter
`--name -n`
Name of the run.
### pf run visualize
Visualize a run in the browser.
```bash
pf run visualize --names
```
#### Required Parameter
`--names -n`
Name of the runs, comma separated.
### pf run archive
Archive a run.
```bash
pf run archive --name
```
#### Required Parameter
`--name -n`
Name of the run.
### pf run restore
Restore an archived run.
```bash
pf run restore --name
```
#### Required Parameter
`--name -n`
Name of the run.
## pf tool
Manage prompt flow tools.
| Command | Description |
| --- | --- |
| [pf tool init](#pf-tool-init) | Initialize a tool directory. |
| [pf tool list](#pf-tool-list) | List all tools in the environment. |
| [pf tool validate](#pf-tool-validate) | Validate tools. |
### pf tool init
Initialize a tool directory.
```bash
pf tool init [--package]
[--tool]
[--set]
```
#### Examples
Creating a package tool from scratch.
```bash
pf tool init --package <package-name> --tool <tool-name>
```
Creating a package tool with extra info.
```bash
pf tool init --package <package-name> --tool <tool-name> --set icon=<icon-path> category=<tool-category> tags="{'<key>': '<value>'}"
```
Creating a python tool from scratch.
```bash
pf tool init --tool <tool-name>
```
#### Optional Parameters
`--package`
The package name to create.
`--tool`
The tool name to create.
`--set`
Set extra information about the tool, like category, icon and tags. Example: --set <key>=<value>.
### pf tool list
List all tools in the environment.
```bash
pf tool list [--flow]
```
#### Examples
List all package tools in the environment.
```bash
pf tool list
```
List all package tools and code tools in the flow.
```bash
pf tool list --flow <path-to-flow-directory>
```
#### Optional Parameters
`--flow`
The flow directory.
### pf tool validate
Validate tools.
```bash
pf tool validate --source
```
#### Examples
Validate a single function tool.
```bash
pf tool validate --source <package-name>.<module-name>.<tool-function>
```
Validate all tools in a package tool.
```bash
pf tool validate --source <package-name>
```
Validate tools in a python script.
```bash
pf tool validate --source <path-to-tool-script>
```
#### Required Parameter
`--source`
The tool source to be used.
## pf config
Manage config for current user.
| Command | Description |
|-----------------------------------|--------------------------------------------|
| [pf config set](#pf-config-set) | Set prompt flow configs for current user. |
| [pf config show](#pf-config-show) | Show prompt flow configs for current user. |
### pf config set
Set prompt flow configs for current user, configs will be stored at ~/.promptflow/pf.yaml.
```bash
pf config set
```
#### Examples
Config connection provider to azure workspace for current user.
```bash
pf config set connection.provider="azureml://subscriptions/<your-subscription>/resourceGroups/<your-resourcegroup>/providers/Microsoft.MachineLearningServices/workspaces/<your-workspace>"
```
### pf config show
Show prompt flow configs for current user.
```bash
pf config show
```
#### Examples
Show prompt flow configs for current user.
```bash
pf config show
```
## pf upgrade
Upgrade prompt flow CLI.
| Command | Description |
|-----------------------------|-----------------------------|
| [pf upgrade](#pf-upgrade) | Upgrade prompt flow CLI. |
### Examples
Upgrade prompt flow without prompt and run non-interactively.
```bash
pf upgrade --yes
```
<!-- promptflow_repo/promptflow/docs/reference/python-library-reference/promptflow.md -->

# PLACEHOLDER
<!-- promptflow_repo/promptflow/docs/reference/tools-reference/python-tool.md -->

# Python
## Introduction
The Python tool empowers users to offer customized code snippets as self-contained executable nodes in prompt flow.
Users can effortlessly create Python tools, edit code, and verify results with ease.
## Inputs
| Name | Type | Description | Required |
|--------|--------|------------------------------------------------------|---------|
| Code | string | Python code snippet | Yes |
| Inputs | -      | List of tool function parameters and their assignments | -       |
### Types
| Type | Python example | Description |
|-----------------------------------------------------|---------------------------------|--------------------------------------------|
| int | param: int | Integer type |
| bool | param: bool | Boolean type |
| string | param: str | String type |
| double | param: float | Double type |
| list | param: list or param: List[T] | List type |
| object | param: dict or param: Dict[K, V] | Object type |
| [Connection](../../concepts/concept-connections.md) | param: CustomConnection | Connection type, will be handled specially |
Parameters with `Connection` type annotation will be treated as connection inputs, which means:
- Promptflow extension will show a selector to select the connection.
- During execution time, promptflow will try to find the connection whose name matches the parameter value passed in.
Note that `Union[...]` type annotation is supported **ONLY** for connection type,
for example, `param: Union[CustomConnection, OpenAIConnection]`.
## Outputs
The return value of the Python tool function.
## How to write Python Tool?
### Guidelines
1. Python Tool Code should consist of complete Python code, including any necessary module imports.
2. Python Tool Code must contain a function decorated with @tool (tool function), serving as the entry point for execution. The @tool decorator should be applied only once within the snippet.
_Below sample defines python tool "my_python_tool", decorated with @tool_
3. Python tool function parameters must be assigned in the 'Inputs' section
_Below sample defines inputs "message" and assign with "world"_
4. Python tool function must have a return value
_Below sample returns a concatenated string_
### Code
The snippet below shows the basic structure of a tool function. Promptflow will read the function and extract inputs
from function parameters and type annotations.
```python
from promptflow import tool
from promptflow.connections import CustomConnection
# The inputs section will change based on the arguments of the tool function, after you save the code
# Adding type to arguments and return value will help the system show the types properly
# Please update the function name/signature per need
@tool
def my_python_tool(message: str, my_conn: CustomConnection) -> str:
my_conn_dict = dict(my_conn)
# Do some function call with my_conn_dict...
return 'hello ' + message
```
### Inputs
| Name | Type | Sample Value in Flow Yaml | Value passed to function|
|---------|--------|-------------------------| ------------------------|
| message | string | "world" | "world" |
| my_conn | CustomConnection | "my_conn" | CustomConnection object |
Promptflow will try to find the connection named 'my_conn' during execution time.
### Outputs
```python
"hello world"
```
### Keyword Arguments Support
Starting from version 1.0.0 of PromptFlow and version 1.4.0 of [Prompt flow for VS Code](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow),
we have introduced support for keyword arguments (kwargs) in the Python tool.
```python
from promptflow import tool
@tool
def print_test(normal_input: str, **kwargs):
for key, value in kwargs.items():
print(f"Key {key}'s value is {value}")
return len(kwargs)
```
When you add `kwargs` in your python tool like the above code, you can insert a variable number of inputs by the `+Add input` button.
![Screenshot of the kwargs On VScode Prompt Flow extension](../../media/reference/tools-reference/python_tool_kwargs.png)
<!-- promptflow_repo/promptflow/docs/reference/tools-reference/serp-api-tool.md -->

# SerpAPI
## Introduction
The SerpAPI API is a Python tool that provides a wrapper to the [SerpAPI Google Search Engine Results API](https://serpapi.com/search-api) and [SerpApi Bing Search Engine Results API
](https://serpapi.com/bing-search-api).
You can use the tool to retrieve search results from a number of different search engines, including Google and Bing, and specify a range of search parameters, such as the search query, location, device type, and more.
## Prerequisite
Sign up at [SERP API homepage](https://serpapi.com/)
## Connection
Connection is the model used to establish connections with Serp API.
| Type | Name | API KEY |
|-------------|----------|----------|
| Serp | Required | Required |
_**API Key** is on SerpAPI account dashboard_
## Inputs
The **serp api** tool supports following parameters:
| Name | Type | Description | Required |
|----------|---------|---------------------------------------------------------------|----------|
| query | string | The search query to be executed. | Yes |
| engine | string | The search engine to use for the search. Default is 'google'. | Yes |
| num      | integer | The number of search results to return. Default is 10.        | No       |
| location | string | The geographic location to execute the search from. | No |
| safe | string | The safe search mode to use for the search. Default is 'off'. | No |
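As a sketch, a flow node using this tool might be configured like the following. The tool identifier, connection name, and input values here are assumptions for illustration only; check your installed `promptflow-tools` package (e.g. via `pf tool list`) for the exact tool identifier:

```yaml
- name: search_result
  type: python
  source:
    type: package
    tool: promptflow.tools.serpapi.SerpAPI.search  # illustrative identifier
  inputs:
    connection: my_serp_connection  # an existing Serp connection
    query: ${inputs.question}
    engine: google
    num: 5
```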
## Outputs
The JSON representation of the SerpAPI query results.
| Engine | Return Type | Output |
|----------|-------------|-------------------------------------------------------|
| google | json | [Sample](https://serpapi.com/search-api#api-examples) |
| bing | json | [Sample](https://serpapi.com/bing-search-api) |
<!-- promptflow_repo/promptflow/docs/reference/tools-reference/contentsafety_text_tool.md -->

# Content Safety (Text)
Azure Content Safety is a content moderation service developed by Microsoft that helps users detect harmful content from different modalities and languages. This tool is a wrapper for the Azure Content Safety Text API, which allows you to detect text content and get moderation results. See the [Azure Content Safety](https://aka.ms/acs-doc) for more information.
## Requirements
- For AzureML users, the tool is installed in the default image; you can use it without extra installation.
- For local users,
`pip install promptflow-tools`
> [!NOTE]
> Content Safety (Text) tool is now incorporated into the latest `promptflow-tools` package. If you have previously installed the package `promptflow-contentsafety`, please uninstall it to avoid duplication in your local tool list.
## Prerequisites
- Create an [Azure Content Safety](https://aka.ms/acs-create) resource.
- Add "Azure Content Safety" connection in prompt flow. Fill "API key" field with "Primary key" from "Keys and Endpoint" section of created resource.
## Inputs
You can use the following parameters as inputs for this tool:
| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| text | string | The text that needs to be moderated. | Yes |
| hate_category | string | The moderation sensitivity for Hate category. You can choose from four options: *disable*, *low_sensitivity*, *medium_sensitivity*, or *high_sensitivity*. The *disable* option means no moderation for hate category. The other three options mean different degrees of strictness in filtering out hate content. The default option is *medium_sensitivity*. | Yes |
| sexual_category | string | The moderation sensitivity for Sexual category. You can choose from four options: *disable*, *low_sensitivity*, *medium_sensitivity*, or *high_sensitivity*. The *disable* option means no moderation for sexual category. The other three options mean different degrees of strictness in filtering out sexual content. The default option is *medium_sensitivity*. | Yes |
| self_harm_category | string | The moderation sensitivity for Self-harm category. You can choose from four options: *disable*, *low_sensitivity*, *medium_sensitivity*, or *high_sensitivity*. The *disable* option means no moderation for self-harm category. The other three options mean different degrees of strictness in filtering out self-harm content. The default option is *medium_sensitivity*. | Yes |
| violence_category | string | The moderation sensitivity for Violence category. You can choose from four options: *disable*, *low_sensitivity*, *medium_sensitivity*, or *high_sensitivity*. The *disable* option means no moderation for violence category. The other three options mean different degrees of strictness in filtering out violence content. The default option is *medium_sensitivity*. | Yes |
For more information, please refer to [Azure Content Safety](https://aka.ms/acs-doc).
## Outputs
The following is an example of the JSON response returned by the tool:
<details>
<summary>Output</summary>
```json
{
"action_by_category": {
"Hate": "Accept",
"SelfHarm": "Accept",
"Sexual": "Accept",
"Violence": "Accept"
},
"suggested_action": "Accept"
}
```
</details>
The `action_by_category` field gives you a binary value for each category: *Accept* or *Reject*. This value shows if the text meets the sensitivity level that you set in the request parameters for that category.
The `suggested_action` field gives you an overall recommendation based on the four categories. If any category has a *Reject* value, the `suggested_action` will be *Reject* as well.
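The aggregation rule described above can be sketched in Python (a hypothetical helper for illustration, not part of the tool itself):

```python
def suggest_action(action_by_category: dict) -> str:
    """Derive the overall recommendation from the per-category actions.

    Mirrors the rule described above: if any category is "Reject",
    the overall suggestion is "Reject"; otherwise it is "Accept".
    """
    return "Reject" if "Reject" in action_by_category.values() else "Accept"


# Example mirroring the sample response above: every category accepted.
result = {
    "action_by_category": {
        "Hate": "Accept",
        "SelfHarm": "Accept",
        "Sexual": "Accept",
        "Violence": "Accept",
    },
}
result["suggested_action"] = suggest_action(result["action_by_category"])
```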
# Open Model LLM
## Introduction
The Open Model LLM tool enables the utilization of a variety of Open Model and Foundational Models, such as [Falcon](https://ml.azure.com/models/tiiuae-falcon-7b/version/4/catalog/registry/azureml) and [Llama 2](https://ml.azure.com/models/Llama-2-7b-chat/version/14/catalog/registry/azureml-meta), for natural language processing in Azure ML Prompt Flow.
Here's how it looks in action on the Visual Studio Code prompt flow extension. In this example, the tool is being used to call a LlaMa-2 chat endpoint and asking "What is CI?".
![Screenshot of the Open Model LLM On VScode Prompt Flow extension](../../media/reference/tools-reference/open_model_llm_on_vscode_promptflow.png)
This prompt flow tool supports two different LLM API types:
- **Chat**: Shown in the example above. The chat API type facilitates interactive conversations with text-based inputs and responses.
- **Completion**: The Completion API type is used to generate single response text completions based on provided prompt input.
## Quick Overview: How do I use Open Model LLM Tool?
1. Choose a Model from the AzureML Model Catalog and get it deployed.
2. Connect to the model deployment.
3. Configure the open model llm tool settings.
4. Prepare the Prompt with [guidance](./prompt-tool.md#how-to-write-prompt).
5. Run the flow.
## Prerequisites: Model Deployment
1. Pick the model which matched your scenario from the [Azure Machine Learning model catalog](https://ml.azure.com/model/catalog).
2. Use the "Deploy" button to deploy the model to a AzureML Online Inference endpoint.
2.1. Use one of the Pay as you go deployment options.
More detailed instructions can be found here [Deploying foundation models to endpoints for inferencing.](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-use-foundation-models?view=azureml-api-2#deploying-foundation-models-to-endpoints-for-inferencing)
## Prerequisites: Connect to the Model
In order for prompt flow to use your deployed model, you will need to connect to it. There are several ways to connect.
### 1. Endpoint Connections
Once associated with an AzureML or Azure AI Studio workspace, the Open Model LLM tool can use the endpoints on that workspace.
1. **Using AzureML or Azure AI Studio workspaces**: If you are using prompt flow in one of the web-based workspaces, the online endpoints available on that workspace will automatically show up.
2. **Using VScode or Code First**: If you are using prompt flow in VScode or one of the Code First offerings, you will need to connect to the workspace. The Open Model LLM tool uses the azure.identity DefaultAzureCredential client for authorization. One way is through [setting environment credential values](https://learn.microsoft.com/en-us/python/api/azure-identity/azure.identity.environmentcredential?view=azure-python).
### 2. Custom Connections
The Open Model LLM tool uses the CustomConnection. Prompt flow supports two types of connections:
1. **Workspace Connections** - These are connections which are stored as secrets on an Azure Machine Learning workspace. While these can be used in many places, they are commonly created and maintained in the Studio UI.
2. **Local Connections** - These are connections which are stored locally on your machine. These connections are not available in the Studio UX, but can be used with the VScode extension.
Instructions on how to create a workspace or local Custom Connection [can be found here.](../../how-to-guides/manage-connections.md#create-a-connection)
The required keys to set are:
1. **endpoint_url**
- This value can be found at the previously created Inferencing endpoint.
2. **endpoint_api_key**
- Ensure to set this as a secret value.
- This value can be found at the previously created Inferencing endpoint.
3. **model_family**
- Supported values: LLAMA, DOLLY, GPT2, or FALCON
- This value is dependent on the type of deployment you are targeting.
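As an illustrative sketch (the helper function and endpoint URL below are hypothetical, not part of the promptflow SDK), the three required keys can be assembled and validated like this:

```python
# Supported model families, per the connection requirements above.
SUPPORTED_MODEL_FAMILIES = {"LLAMA", "DOLLY", "GPT2", "FALCON"}


def build_connection_config(endpoint_url: str, endpoint_api_key: str,
                            model_family: str) -> dict:
    """Assemble the three required connection keys, rejecting
    unsupported model families early."""
    if model_family not in SUPPORTED_MODEL_FAMILIES:
        raise ValueError(f"Unsupported model_family: {model_family!r}")
    return {
        "endpoint_url": endpoint_url,          # from the Inferencing endpoint page
        "endpoint_api_key": endpoint_api_key,  # remember to store as a secret value
        "model_family": model_family,
    }


config = build_connection_config(
    "https://my-endpoint.example-region.inference.ml.azure.com/score",  # hypothetical
    "<endpoint-key>",
    "LLAMA",
)
```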
## Running the Tool: Inputs
The Open Model LLM tool has a number of parameters, some of which are required. Please see the table below for details; you can match these to the screenshot above for visual clarity.
| Name | Type | Description | Required |
|------|------|-------------|----------|
| api | string | This is the API mode and will depend on the model used and the scenario selected. *Supported values: (Completion \| Chat)* | Yes |
| endpoint_name | string | Name of an Online Inferencing Endpoint with a supported model deployed on it. Takes priority over connection. | No |
| temperature | float | The randomness of the generated text. Default is 1. | No |
| max_new_tokens | integer | The maximum number of tokens to generate in the completion. Default is 500. | No |
| top_p | float | The probability of using the top choice from the generated tokens. Default is 1. | No |
| model_kwargs | dictionary | This input is used to provide configuration specific to the model used. For example, the Llama-2 model may use {\"temperature\":0.4}. *Default: {}* | No |
| deployment_name | string | The name of the deployment to target on the Online Inferencing endpoint. If no value is passed, the Inferencing load balancer traffic settings will be used. | No |
| prompt | string | The text prompt that the language model will use to generate its response. | Yes |
## Outputs
| API | Return Type | Description |
|------------|-------------|------------------------------------------|
| Completion | string | The text of one predicted completion |
| Chat       | string      | The text of one response in the conversation |
## Deploying to an Online Endpoint
When deploying a flow containing the Open Model LLM tool to an online endpoint, there is an additional step to set up permissions. During deployment through the web pages, there is a choice between System-assigned and User-assigned identity types. Either way, using the Azure Portal (or similar functionality), add the "Reader" job function role to that identity on the Azure Machine Learning workspace or AI Studio project which is hosting the endpoint. The prompt flow deployment may need to be refreshed.
# Faiss Index Lookup
Faiss Index Lookup is a tool tailored for querying within a user-provided Faiss-based vector store. In combination with our Large Language Model (LLM) tool, it empowers users to extract contextually relevant information from a domain knowledge base.
## Requirements
- For AzureML users, the tool is installed in the default image, so you can use it without extra installation.
- For local users:
  - If your index is stored in a local path:
    `pip install promptflow-vectordb`
  - If your index is stored in Azure storage:
    `pip install promptflow-vectordb[azure]`
## Prerequisites
### For AzureML users,
- step 1. Prepare an accessible path on Azure Blob Storage. Here's the guide if a new storage account needs to be created: [Azure Storage Account](https://learn.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal).
- step 2. Create related Faiss-based index files on Azure Blob Storage. We support the LangChain format (index.faiss + index.pkl) for the index files, which can be prepared either by employing our promptflow-vectordb SDK or following the quick guide from [LangChain documentation](https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/faiss). Please refer to the instructions of <a href="https://aka.ms/pf-sample-build-faiss-index" target="_blank">An example code for creating Faiss index</a> for building index using promptflow-vectordb SDK.
- step 3. Based on where you put your own index files, the identity used by the promptflow runtime should be granted certain roles. Please refer to [Steps to assign an Azure role](https://learn.microsoft.com/en-us/azure/role-based-access-control/role-assignments-steps):
| Location | Role |
| ---- | ---- |
| workspace datastores or workspace default blob | AzureML Data Scientist |
| other blobs | Storage Blob Data Reader |
### For local users,
- Create Faiss-based index files in a local path by doing only step 2 above.
## Inputs
The tool accepts the following inputs:
| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| path | string | URL or path for the vector store.<br><br>local path (for local users):<br>`<local_path_to_the_index_folder>`<br><br> Azure blob URL format (with [azure] extra installed):<br>https://`<account_name>`.blob.core.windows.net/`<container_name>`/`<path_and_folder_name>`.<br><br>AML datastore URL format (with [azure] extra installed):<br>azureml://subscriptions/`<your_subscription>`/resourcegroups/`<your_resource_group>`/workspaces/`<your_workspace>`/data/`<data_path>`<br><br>public http/https URL (for public demonstration):<br>http(s)://`<path_and_folder_name>` | Yes |
| vector | list[float] | The target vector to be queried, which can be generated by the LLM tool. | Yes |
| top_k | integer | The count of top-scored entities to return. Default value is 3. | No |
## Outputs
The following is an example of the JSON response returned by the tool, which includes the top-k scored entities. The entity follows a generic schema of the vector search result provided by our promptflow-vectordb SDK. For Faiss Index Lookup, the following fields are populated:
| Field Name | Type | Description |
| ---- | ---- | ----------- |
| text | string | Text of the entity |
| score | float | Distance between the entity and the query vector |
| metadata | dict | Customized key-value pairs provided by the user when creating the index |
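Since `score` here is a distance (lower means closer to the query vector), selecting the best match from a response can be sketched as:

```python
def closest_entity(results: list) -> dict:
    """Return the entity with the smallest distance to the query vector."""
    return min(results, key=lambda entity: entity["score"])


# Entities shaped like the sample output below.
sample_results = [
    {"text": "sample text #0", "score": 0.0,  "metadata": {"title": "title0"}},
    {"text": "sample text #1", "score": 0.05, "metadata": {"title": "title1"}},
    {"text": "sample text #2", "score": 0.2,  "metadata": {"title": "title2"}},
]
best = closest_entity(sample_results)  # the zero-distance entity
```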
<details>
<summary>Output</summary>
```json
[
{
"metadata": {
"link": "http://sample_link_0",
"title": "title0"
},
"original_entity": null,
"score": 0,
"text": "sample text #0",
"vector": null
},
{
"metadata": {
"link": "http://sample_link_1",
"title": "title1"
},
"original_entity": null,
"score": 0.05000000447034836,
"text": "sample text #1",
"vector": null
},
{
"metadata": {
"link": "http://sample_link_2",
"title": "title2"
},
"original_entity": null,
"score": 0.20000001788139343,
"text": "sample text #2",
"vector": null
}
]
```
</details>
# Vector DB Lookup
Vector DB Lookup is a vector search tool that allows users to search the top-k most similar vectors from a vector database. This tool is a wrapper for multiple third-party vector databases. The list of currently supported databases is as follows.
| Name | Description |
| --- | --- |
| Azure Cognitive Search | Microsoft's cloud search service with built-in AI capabilities that enrich all types of information to help identify and explore relevant content at scale. |
| Qdrant | Qdrant is a vector similarity search engine that provides a production-ready service with a convenient API to store, search and manage points (i.e. vectors) with an additional payload. |
| Weaviate | Weaviate is an open source vector database that stores both objects and vectors. This allows for combining vector search with structured filtering. |
This tool will support more vector databases.
## Requirements
- For AzureML users, the tool is installed in the default image, so you can use it without extra installation.
- For local users,
`pip install promptflow-vectordb`
## Prerequisites
The tool searches data from a third-party vector database. To use it, you should create resources in advance and establish a connection between the tool and the resource.
- **Azure Cognitive Search:**
- Create resource [Azure Cognitive Search](https://learn.microsoft.com/en-us/azure/search/search-create-service-portal).
- Add "Cognitive search" connection. Fill "API key" field with "Primary admin key" from "Keys" section of created resource, and fill "API base" field with the URL, the URL format is `https://{your_serive_name}.search.windows.net`.
- **Qdrant:**
  - Follow the [installation guide](https://qdrant.tech/documentation/quick-start/) to deploy Qdrant to a self-maintained cloud server.
- Add "Qdrant" connection. Fill "API base" with your self-maintained cloud server address and fill "API key" field.
- **Weaviate:**
  - Follow the [installation guide](https://weaviate.io/developers/weaviate/installation) to deploy Weaviate to a self-maintained instance.
- Add "Weaviate" connection. Fill "API base" with your self-maintained instance address and fill "API key" field.
## Inputs
The tool accepts the following inputs:
- **Azure Cognitive Search:**
| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| connection | CognitiveSearchConnection | The created connection for accessing to Cognitive Search endpoint. | Yes |
| index_name | string | The index name created in Cognitive Search resource. | Yes |
| text_field | string | The text field name. The returned text field will populate the text of output. | No |
| vector_field | string | The vector field name. The target vector is searched in this vector field. | Yes |
| search_params | dict | The search parameters. It's key-value pairs. Except for parameters in the tool input list mentioned above, additional search parameters can be formed into a JSON object as search_params. For example, use `{"select": ""}` as search_params to select the returned fields, use `{"search": ""}` to perform a [hybrid search](https://learn.microsoft.com/en-us/azure/search/search-get-started-vector#hybrid-search). | No |
| search_filters | dict | The search filters. It's key-value pairs, the input format is like `{"filter": ""}` | No |
| vector | list | The target vector to be queried, which can be generated by Embedding tool. | Yes |
| top_k | int | The count of top-scored entities to return. Default value is 3 | No |
- **Qdrant:**
| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| connection | QdrantConnection | The created connection for accessing to Qdrant server. | Yes |
| collection_name | string | The collection name created in self-maintained cloud server. | Yes |
| text_field | string | The text field name. The returned text field will populate the text of output. | No |
| search_params | dict | The search parameters can be formed into a JSON object as search_params. For example, use `{"params": {"hnsw_ef": 0, "exact": false, "quantization": null}}` to set search_params. | No |
| search_filters | dict | The search filters. It's key-value pairs, the input format is like `{"filter": {"should": [{"key": "", "match": {"value": ""}}]}}` | No |
| vector | list | The target vector to be queried, which can be generated by Embedding tool. | Yes |
| top_k | int | The count of top-scored entities to return. Default value is 3 | No |
- **Weaviate:**
| Name | Type | Description | Required |
| ---- | ---- | ----------- | -------- |
| connection | WeaviateConnection | The created connection for accessing to Weaviate. | Yes |
| class_name | string | The class name. | Yes |
| text_field | string | The text field name. The returned text field will populate the text of output. | No |
| vector | list | The target vector to be queried, which can be generated by Embedding tool. | Yes |
| top_k | int | The count of top-scored entities to return. Default value is 3 | No |
## Outputs
The following is an example of the JSON response returned by the tool, which includes the top-k scored entities. The entity follows a generic schema of the vector search result provided by the promptflow-vectordb SDK.
- **Azure Cognitive Search:**
For Azure Cognitive Search, the following fields are populated:
| Field Name | Type | Description |
| ---- | ---- | ----------- |
| original_entity | dict | the original response json from search REST API|
| score | float | @search.score from the original entity, which evaluates the similarity between the entity and the query vector |
| text | string | text of the entity|
| vector | list | vector of the entity|
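Note that unlike the Faiss distance, `@search.score` is a relevance score (higher means more similar), so ranking a response can be sketched as:

```python
def rank_by_score(results: list) -> list:
    """Order Cognitive Search entities from most to least similar,
    using the populated `score` field (@search.score)."""
    return sorted(results, key=lambda entity: entity["score"], reverse=True)


# Entities shaped like the sample output below (scores are illustrative).
entities = [
    {"text": "sample text1", "score": 0.5099789},
    {"text": "sample text2", "score": 0.81},
]
ranked = rank_by_score(entities)  # "sample text2" comes first
```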
<details>
<summary>Output</summary>
```json
[
{
"metadata": null,
"original_entity": {
"@search.score": 0.5099789,
"id": "",
"your_text_filed_name": "sample text1",
"your_vector_filed_name": [-0.40517663431890405, 0.5856996257406859, -0.1593078462266455, -0.9776269170785785, -0.6145604369828972],
"your_additional_field_name": ""
},
"score": 0.5099789,
"text": "sample text1",
"vector": [-0.40517663431890405, 0.5856996257406859, -0.1593078462266455, -0.9776269170785785, -0.6145604369828972]
}
]
```
</details>
- **Qdrant:**
For Qdrant, the following fields are populated:
| Field Name | Type | Description |
| ---- | ---- | ----------- |
| original_entity | dict | the original response json from search REST API|
| metadata | dict | payload from the original entity|
| score | float | score from the original entity, which evaluates the similarity between the entity and the query vector|
| text | string | text of the payload|
| vector | list | vector of the entity|
<details>
<summary>Output</summary>
```json
[
{
"metadata": {
"text": "sample text1"
},
"original_entity": {
"id": 1,
"payload": {
"text": "sample text1"
},
"score": 1,
"vector": [0.18257418, 0.36514837, 0.5477226, 0.73029673],
"version": 0
},
"score": 1,
"text": "sample text1",
"vector": [0.18257418, 0.36514837, 0.5477226, 0.73029673]
}
]
```
</details>
- **Weaviate:**
For Weaviate, the following fields are populated:
| Field Name | Type | Description |
| ---- | ---- | ----------- |
| original_entity | dict | the original response json from search REST API|
| score | float | certainty from the original entity, which evaluates the similarity between the entity and the query vector|
| text | string | text in the original entity|
| vector | list | vector of the entity|
<details>
<summary>Output</summary>
```json
[
{
"metadata": null,
"original_entity": {
"_additional": {
"certainty": 1,
"distance": 0,
"vector": [
0.58,
0.59,
0.6,
0.61,
0.62
]
},
"text": "sample text1."
},
"score": 1,
"text": "sample text1.",
"vector": [
0.58,
0.59,
0.6,
0.61,
0.62
]
}
]
```
</details>
# Prompt
## Introduction
The Prompt Tool in PromptFlow offers a collection of textual templates that serve as a starting point for creating prompts.
These templates, based on the Jinja2 template engine, facilitate the definition of prompts. The tool proves useful
when prompt tuning is required prior to feeding the prompts into the Language Model (LLM) model in PromptFlow.
## Inputs
| Name | Type | Description | Required |
|--------------------|--------|----------------------------------------------------------|----------|
| prompt | string | The prompt template in Jinja | Yes |
| Inputs             | -      | List of variables in the prompt template and their assignments | -        |
## Outputs
The prompt text rendered from the template and its inputs.
## How to write Prompt?
1. Prepare jinja template. Learn more about [Jinja](https://jinja.palletsprojects.com/en/3.1.x/)
_In below example, the prompt incorporates Jinja templating syntax to dynamically generate the welcome message and personalize it based on the user's name. It also presents a menu of options for the user to choose from. Depending on whether the user_name variable is provided, it either addresses the user by name or uses a generic greeting._
```jinja
Welcome to {{ website_name }}!
{% if user_name %}
Hello, {{ user_name }}!
{% else %}
Hello there!
{% endif %}
Please select an option from the menu below:
1. View your account
2. Update personal information
3. Browse available products
4. Contact customer support
```
2. Assign value for the variables.
_In above example, two variables would be automatically detected and listed in '**Inputs**' section. Please assign values._
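Assuming the `jinja2` package (the engine this tool is based on) is available, steps 1-2 can be sketched end to end:

```python
from jinja2 import Template

# The same template as above, trimmed to the greeting portion.
template_text = """Welcome to {{ website_name }}!

{% if user_name %}
Hello, {{ user_name }}!
{% else %}
Hello there!
{% endif %}
"""

# Step 2: assign values for the automatically detected variables.
prompt = Template(template_text).render(website_name="Microsoft", user_name="Jane")
```

Rendering with `user_name=""` instead would take the `else` branch and produce the generic greeting, as in Sample 2.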
### Sample 1
Inputs
| Variable | Type | Sample Value |
|---------------|--------|--------------|
| website_name | string | "Microsoft" |
| user_name | string | "Jane" |
Outputs
```
Welcome to Microsoft! Hello, Jane! Please select an option from the menu below: 1. View your account 2. Update personal information 3. Browse available products 4. Contact customer support
```
### Sample 2
Inputs
| Variable | Type | Sample Value |
|--------------|--------|----------------|
| website_name | string | "Bing" |
| user_name | string | " |
Outputs
```
Welcome to Bing! Hello there! Please select an option from the menu below: 1. View your account 2. Update personal information 3. Browse available products 4. Contact customer support
```
# Embedding
## Introduction
OpenAI's embedding models convert text into dense vector representations for various NLP tasks. See the [OpenAI Embeddings API](https://platform.openai.com/docs/api-reference/embeddings) for more information.
## Prerequisite
Create OpenAI resources:
- **OpenAI**
Sign up account [OpenAI website](https://openai.com/)
Login and [Find personal API key](https://platform.openai.com/account/api-keys)
- **Azure OpenAI (AOAI)**
Create Azure OpenAI resources with [instruction](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal)
## **Connections**
Setup connections to provide resources in embedding tool.
| Type | Name | API KEY | API Type | API Version |
|-------------|----------|----------|----------|-------------|
| OpenAI | Required | Required | - | - |
| AzureOpenAI | Required | Required | Required | Required |
## Inputs
| Name | Type | Description | Required |
|------------------------|-------------|-----------------------------------------------------------------------|----------|
| input | string | the input text to embed | Yes |
| connection | string | the connection for the embedding tool use to provide resources | Yes |
| model/deployment_name | string | instance of the text-embedding engine to use. Fill in model name if you use OpenAI connection, or deployment name if use Azure OpenAI connection. | Yes |
## Outputs
| Return Type | Description |
|-------------|------------------------------------------|
| list | The vector representations for inputs |
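Downstream of this tool, embedding vectors are commonly compared with cosine similarity; a minimal, dependency-free sketch:

```python
import math


def cosine_similarity(a: list, b: list) -> float:
    """Cosine similarity between two embedding vectors of equal length:
    the dot product divided by the product of the vector norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Vectors pointing the same way score 1.0; orthogonal vectors score 0.0.
score = cosine_similarity([0.1, 0.2, 0.3], [0.1, 0.2, 0.3])
```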
The following is an example response returned by the embedding tool:
<details>
<summary>Output</summary>
```
[-0.005744616035372019,
-0.007096089422702789,
-0.00563855143263936,
-0.005272455979138613,
-0.02355326898396015,
0.03955197334289551,
-0.014260607771575451,
-0.011810848489403725,
-0.023170066997408867,
-0.014739611186087132,
...]
```
</details>
# Azure OpenAI GPT-4 Turbo with Vision
## Introduction
The Azure OpenAI GPT-4 Turbo with Vision tool enables you to leverage your Azure OpenAI GPT-4 Turbo with Vision model deployment to analyze images and provide textual responses to questions about them.
## Prerequisites
- Create AzureOpenAI resources
Create Azure OpenAI resources with [instruction](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal)
- Create a GPT-4 Turbo with Vision deployment
Browse to [Azure OpenAI Studio](https://oai.azure.com/) and sign in with the credentials associated with your Azure OpenAI resource. During or after the sign-in workflow, select the appropriate directory, Azure subscription, and Azure OpenAI resource.
  Under Management, select Deployments and create a GPT-4 Turbo with Vision deployment by selecting model name `gpt-4` and model version `vision-preview`.
## Connection
Setup connections to provisioned resources in prompt flow.
| Type | Name | API KEY | API Type | API Version |
|-------------|----------|----------|----------|-------------|
| AzureOpenAI | Required | Required | Required | Required |
## Inputs
| Name | Type | Description | Required |
|------------------------|-------------|------------------------------------------------------------------------------------------------|----------|
| connection | AzureOpenAI | the AzureOpenAI connection to be used in the tool | Yes |
| deployment\_name | string | the language model to use | Yes |
| prompt                 | string      | The text prompt that the language model will use to generate its response.                      | Yes      |
| max\_tokens | integer | the maximum number of tokens to generate in the response. Default is 512. | No |
| temperature | float | the randomness of the generated text. Default is 1. | No |
| stop | list | the stopping sequence for the generated text. Default is null. | No |
| top_p | float | the probability of using the top choice from the generated tokens. Default is 1. | No |
| presence\_penalty | float | value that controls the model's behavior with regards to repeating phrases. Default is 0. | No |
| frequency\_penalty | float | value that controls the model's behavior with regards to generating rare phrases. Default is 0. | No |
## Outputs
| Return Type | Description |
|-------------|------------------------------------------|
| string | The text of one response of conversation |
# OpenAI GPT-4V
## Introduction
OpenAI GPT-4V tool enables you to leverage OpenAI's GPT-4 with vision, also referred to as GPT-4V or gpt-4-vision-preview in the API, to take images as input and answer questions about them.
## Prerequisites
- Create OpenAI resources
Sign up account [OpenAI website](https://openai.com/)
Login and [Find personal API key](https://platform.openai.com/account/api-keys)
- Get Access to GPT-4 API
To use GPT-4 with vision, you need access to GPT-4 API. Learn more about [How to get access to GPT-4 API](https://help.openai.com/en/articles/7102672-how-can-i-access-gpt-4)
## Connection
Setup connections to provisioned resources in prompt flow.
| Type | Name | API KEY |
|-------------|----------|----------|
| OpenAI | Required | Required |
## Inputs
| Name | Type | Description | Required |
|------------------------|-------------|------------------------------------------------------------------------------------------------|----------|
| connection | OpenAI | the OpenAI connection to be used in the tool | Yes |
| model | string | the language model to use, currently only support gpt-4-vision-preview | Yes |
| prompt                 | string      | The text prompt that the language model will use to generate its response.                      | Yes      |
| max\_tokens | integer | the maximum number of tokens to generate in the response. Default is 512. | No |
| temperature | float | the randomness of the generated text. Default is 1. | No |
| stop | list | the stopping sequence for the generated text. Default is null. | No |
| top_p | float | the probability of using the top choice from the generated tokens. Default is 1. | No |
| presence\_penalty | float | value that controls the model's behavior with regards to repeating phrases. Default is 0. | No |
| frequency\_penalty | float | value that controls the model's behavior with regards to generating rare phrases. Default is 0. | No |
## Outputs
| Return Type | Description |
|-------------|------------------------------------------|
| string | The text of one response of conversation |
# LLM
## Introduction
Prompt flow LLM tool enables you to leverage widely used large language models like [OpenAI](https://platform.openai.com/) or [Azure OpenAI (AOAI)](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/overview) for natural language processing.
Prompt flow provides a few different LLM APIs:
- **[Completion](https://platform.openai.com/docs/api-reference/completions)**: OpenAI's completion models generate text based on provided prompts.
- **[Chat](https://platform.openai.com/docs/api-reference/chat)**: OpenAI's chat models facilitate interactive conversations with text-based inputs and responses.
> [!NOTE]
> The `embedding` option has been removed from the LLM tool API settings. You can use the embedding API with the [Embedding tool](https://github.com/microsoft/promptflow/blob/main/docs/reference/tools-reference/embedding_tool.md).
## Prerequisite
Create OpenAI resources:
- **OpenAI**
Sign up account [OpenAI website](https://openai.com/)
Login and [Find personal API key](https://platform.openai.com/account/api-keys)
- **Azure OpenAI (AOAI)**
Create Azure OpenAI resources with [instruction](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal)
## **Connections**
Setup connections to provisioned resources in prompt flow.
| Type | Name | API KEY | API Type | API Version |
|-------------|----------|----------|----------|-------------|
| OpenAI | Required | Required | - | - |
| AzureOpenAI | Required | Required | Required | Required |
## Inputs
### Text Completion
| Name | Type | Description | Required |
|------------------------|-------------|-----------------------------------------------------------------------------------------|----------|
| prompt | string | text prompt that the language model will complete | Yes |
| model, deployment_name | string | the language model to use | Yes |
| max\_tokens | integer | the maximum number of tokens to generate in the completion. Default is 16. | No |
| temperature | float | the randomness of the generated text. Default is 1. | No |
| stop | list | the stopping sequence for the generated text. Default is null. | No |
| suffix | string | text appended to the end of the completion | No |
| top_p | float | the probability of using the top choice from the generated tokens. Default is 1. | No |
| logprobs | integer | the number of log probabilities to generate. Default is null. | No |
| echo | boolean | value that indicates whether to echo back the prompt in the response. Default is false. | No |
| presence\_penalty | float | value that controls the model's behavior with regards to repeating phrases. Default is 0. | No |
| frequency\_penalty | float | value that controls the model's behavior with regards to generating rare phrases. Default is 0. | No |
| best\_of | integer | the number of best completions to generate. Default is 1. | No |
| logit\_bias | dictionary | the logit bias for the language model. Default is empty dictionary. | No |
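To make the parameter semantics concrete, here is a minimal sketch of how the inputs above map onto a completion request payload. It only assembles the request dictionary (no API call is made), and the prompt text and model name are made-up examples:

```python
def build_completion_payload(prompt, model, **options):
    """Assemble a completion request from the inputs in the table above."""
    payload = {
        "prompt": prompt,
        "model": model,
        # defaults mirror the table: max_tokens=16, temperature=1, top_p=1
        "max_tokens": options.get("max_tokens", 16),
        "temperature": options.get("temperature", 1),
        "top_p": options.get("top_p", 1),
    }
    # optional inputs are only included when explicitly provided
    for key in ("stop", "suffix", "logprobs", "echo", "presence_penalty",
                "frequency_penalty", "best_of", "logit_bias"):
        if key in options:
            payload[key] = options[key]
    return payload

payload = build_completion_payload(
    "Say hello.", "gpt-3.5-turbo-instruct", temperature=0.7, stop=["\n"]
)
```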
### Chat
| Name | Type | Description | Required |
|------------------------|-------------|------------------------------------------------------------------------------------------------|----------|
| prompt                 | string      | text prompt that the language model will respond to                                            | Yes      |
| model, deployment_name | string | the language model to use | Yes |
| max\_tokens | integer | the maximum number of tokens to generate in the response. Default is inf. | No |
| temperature | float | the randomness of the generated text. Default is 1. | No |
| stop | list | the stopping sequence for the generated text. Default is null. | No |
| top_p | float | the probability of using the top choice from the generated tokens. Default is 1. | No |
| presence\_penalty | float | value that controls the model's behavior with regards to repeating phrases. Default is 0. | No |
| frequency\_penalty | float | value that controls the model's behavior with regards to generating rare phrases. Default is 0.| No |
| logit\_bias | dictionary | the logit bias for the language model. Default is empty dictionary. | No |
| function\_call | object | value that controls which function is called by the model. Default is null. | No |
| functions | list | a list of functions the model may generate JSON inputs for. Default is null. | No |
| response_format | object | an object specifying the format that the model must output. Default is null. | No |
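Likewise, a sketch of a chat request: the chat API takes a list of role-tagged messages rather than a single prompt string. Again, this only constructs the payload dictionary; the message contents and model name are illustrative:

```python
def build_chat_payload(messages, model, **options):
    """Assemble a chat request from the inputs in the table above."""
    payload = {"messages": messages, "model": model}
    # all other inputs are optional and only included when provided
    for key in ("max_tokens", "temperature", "stop", "top_p",
                "presence_penalty", "frequency_penalty", "logit_bias",
                "function_call", "functions", "response_format"):
        if key in options:
            payload[key] = options[key]
    return payload

payload = build_chat_payload(
    [{"role": "system", "content": "You are a helpful assistant."},
     {"role": "user", "content": "Summarize this article."}],
    "gpt-35-turbo",
    temperature=0.7,
    response_format={"type": "json_object"},
)
```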
## Outputs
| API | Return Type | Description |
|------------|-------------|------------------------------------------|
| Completion | string | The text of one predicted completion |
| Chat | string | The text of one response of conversation |
## How to use the LLM tool?
1. Set up and select the connections to OpenAI resources
2. Configure the LLM model API and its parameters
3. Prepare the prompt with [guidance](./prompt-tool.md#how-to-write-prompt).
# Tutorials
This section contains a collection of flow samples and step-by-step tutorials.
|Area|<div style="width:250px">Sample</div>|Description|
|--|--|--|
|SDK|[Getting started with prompt flow](https://github.com/microsoft/promptflow/blob/main/examples/tutorials/get-started/quickstart.ipynb)| A step-by-step guide to invoke your first flow run.
|CLI|[Chat with PDF](https://github.com/microsoft/promptflow/blob/main/examples/tutorials/e2e-development/chat-with-pdf.md)| An end-to-end tutorial on how to build a high quality chat application with prompt flow, including flow development and evaluation with metrics.
|SDK|[Chat with PDF - test, evaluation and experimentation](https://github.com/microsoft/promptflow/blob/main/examples/flows/chat/chat-with-pdf/chat-with-pdf.ipynb)| We will walk you through how to use prompt flow Python SDK to test, evaluate and experiment with the "Chat with PDF" flow.
|SDK|[Connection management](https://github.com/microsoft/promptflow/blob/main/examples/connections/connection.ipynb)| Manage various types of connections using the SDK
|CLI|[Working with connection](https://github.com/microsoft/promptflow/blob/main/examples/connections/README.md)| Manage various types of connections using the CLI
|SDK|[Run prompt flow in Azure AI](https://github.com/microsoft/promptflow/blob/main/examples/tutorials/get-started/quickstart-azure.ipynb)| A quick start tutorial to run a flow in Azure AI and evaluate it.
|SDK|[Flow run management in Azure AI](https://github.com/microsoft/promptflow/blob/main/examples/tutorials/run-management/cloud-run-management.ipynb)| Flow run management in Azure AI
## Samples
|Area|<div style="width:250px">Sample</div>|Description|
|--|--|--|
|Standard Flow|[basic](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/basic)| a basic flow with prompt and python tool.
|Standard Flow|[basic-with-connection](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/basic-with-connection)| a basic flow using custom connection with prompt and python tool
|Standard Flow|[basic-with-builtin-llm](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/basic-with-builtin-llm)| a basic flow using builtin llm tool
|Standard Flow|[customer-intent-extraction](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/customer-intent-extraction)| a flow created from existing langchain python code
|Standard Flow|[web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification)| a flow demonstrating multi-class classification with LLM. Given a URL, it will classify the URL into one web category with just a few shots, simple summarization and classification prompts.
|Standard Flow|[autonomous-agent](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/autonomous-agent)| a flow showcasing how to construct an AutoGPT flow that autonomously figures out how to apply the given functions to solve the goal, which is film trivia providing accurate and up-to-date information about movies, directors, actors, and more.
|Chat Flow|[chat-with-wikipedia](https://github.com/microsoft/promptflow/tree/main/examples/flows/chat/chat-with-wikipedia)| a flow demonstrating Q&A with GPT3.5 using information from Wikipedia to make the answer more grounded.
|Chat Flow|[chat-with-pdf](https://github.com/microsoft/promptflow/tree/main/examples/flows/chat/chat-with-pdf)| a flow that allows you to ask questions about the content of a PDF file and get answers.
|Evaluation Flow|[eval-classification-accuracy](https://github.com/microsoft/promptflow/tree/main/examples/flows/evaluation/eval-classification-accuracy)| a flow illustrating how to evaluate the performance of a classification system.
Learn more: [Try out more promptflow examples.](https://github.com/microsoft/promptflow/tree/main/examples)
With prompt flow, you can use variants to tune your prompt. In this article, you'll learn the prompt flow variants concept.
# Variants
A variant refers to a specific version of a tool node that has distinct settings. Currently, variants are supported only in the LLM tool. For example, in the LLM tool, a new variant can represent either a different prompt content or different connection settings.
Suppose you want to generate a summary of a news article. You can set different variants of prompts and settings like this:
| Variants | Prompt | Connection settings |
| --------- | ------------------------------------------------------------ | ------------------- |
| Variant 0 | `Summary: {{input sentences}}` | Temperature = 1 |
| Variant 1 | `Summary: {{input sentences}}` | Temperature = 0.7 |
| Variant 2 | `What is the main point of this article? {{input sentences}}` | Temperature = 1 |
| Variant 3 | `What is the main point of this article? {{input sentences}}` | Temperature = 0.7 |
By utilizing different variants of prompts and settings, you can explore how the model responds to various inputs and outputs, enabling you to discover the most suitable combination for your requirements.
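In a local flow, the table above roughly corresponds to a `node_variants` section in the flow's YAML file. The sketch below is illustrative only: the node name and prompt file paths are made up, and the exact schema may differ across promptflow versions.

```yaml
node_variants:
  summarize_article:
    default_variant_id: variant_0
    variants:
      variant_0:
        node:
          type: llm
          source:
            type: code
            path: summary_prompt.jinja2      # "Summary: {{input_sentences}}"
          inputs:
            temperature: 1
      variant_1:
        node:
          type: llm
          source:
            type: code
            path: summary_prompt.jinja2
          inputs:
            temperature: 0.7
```

Each variant overrides the node's prompt source and/or connection settings while the rest of the flow stays unchanged.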
## Benefits of using variants
- **Enhance the quality of your LLM generation**: By creating multiple variants of the same LLM node with diverse prompts and configurations, you can identify the optimal combination that produces high-quality content aligned with your needs.
- **Save time and effort**: Even slight modifications to a prompt can yield significantly different results. It's crucial to track and compare the performance of each prompt version. With variants, you can easily manage the historical versions of your LLM nodes, facilitating updates based on any variant without the risk of forgetting previous iterations. This saves you time and effort in managing prompt tuning history.
- **Boost productivity**: Variants streamline the optimization process for LLM nodes, making it simpler to create and manage multiple variations. You can achieve improved results in less time, thereby increasing your overall productivity.
- **Facilitate easy comparison**: You can effortlessly compare the results obtained from different variants side by side, enabling you to make data-driven decisions regarding the variant that generates the best outcomes.
## Next steps
- [Tune prompts with variants](../how-to-guides/tune-prompts-with-variants.md)
While how LLMs work may be elusive to many developers, how LLM apps work is not - they essentially involve a series of calls to external services such as LLMs/databases/search engines, or intermediate data processing, all glued together. Thus LLM apps are merely Directed Acyclic Graphs (DAGs) of function calls. These DAGs are flows in prompt flow.
# Flows
A flow in prompt flow is a DAG of functions (we call them [tools](./concept-tools.md)). These functions/tools are connected via input/output dependencies and executed based on the topology by the prompt flow executor.
A flow is represented as a YAML file and can be visualized with our [Prompt flow for VS Code extension](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow). Here is an example:
![flow_dag](../media/how-to-guides/quick-start/flow_dag.png)
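As a rough illustration, the YAML below sketches what such a DAG definition can look like: two nodes wired together by input/output references. The node names and file paths are placeholders, and real flow files carry additional fields.

```yaml
inputs:
  url:
    type: string
nodes:
- name: fetch_text
  type: python
  source:
    type: code
    path: fetch_text.py
  inputs:
    url: ${inputs.url}            # depends on the flow input
- name: summarize
  type: llm
  source:
    type: code
    path: summarize.jinja2
  inputs:
    text: ${fetch_text.output}    # depends on the previous node's output
outputs:
  summary:
    type: string
    reference: ${summarize.output}
```

The `${...}` references are what define the edges of the DAG: the executor runs `fetch_text` before `summarize` because of the declared dependency.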
## Flow types
Prompt flow has three flow types:
- **Standard flow** and **Chat flow**: these two are for you to develop your LLM application. The primary difference between the two lies in the additional support provided by the "Chat Flow" for chat applications. For instance, you can define chat_history, chat_input, and chat_output for your flow. The prompt flow, in turn, will offer a chat-like experience (including conversation history) during the development of the flow. Moreover, it also provides a sample chat application for deployment purposes.
- **Evaluation flow** is for you to test/evaluate the quality of your LLM application (standard/chat flow). It usually runs on the outputs of a standard/chat flow and computes metrics that can be used to determine whether the standard/chat flow performs well. E.g., is the answer accurate? Is the answer fact-based?
## When to use standard flow vs. chat flow?
As a general guideline, if you are building a chatbot that needs to maintain conversation history, try chat flow. In most other cases, standard flow should serve your needs.
Our examples should also give you an idea when to use what:
- [examples/flows/standard](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard)
- [examples/flows/chat](https://github.com/microsoft/promptflow/tree/main/examples/flows/chat)
## Next steps
- [Quick start](../how-to-guides/quick-start.md)
- [Initialize and test a flow](../how-to-guides/init-and-test-a-flow.md)
- [Run and evaluate a flow](../how-to-guides/run-and-evaluate-a-flow/index.md)
- [Tune prompts using variants](../how-to-guides/tune-prompts-with-variants.md)
Tools are the fundamental building blocks of a [flow](./concept-flows.md).
Each tool is an executable unit, basically a function that performs various tasks, including but not limited to:
- Accessing LLMs for various purposes
- Querying databases
- Getting information from search engines
- Pre/post processing of data
# Tools
Prompt flow provides 3 basic tools:
- [LLM](../reference/tools-reference/llm-tool.md): The LLM tool allows you to write custom prompts and leverage large language models to achieve specific goals, such as summarizing articles, generating customer support responses, and more.
- [Python](../reference/tools-reference/python-tool.md): The Python tool enables you to write custom Python functions to perform various tasks, such as fetching web pages, processing intermediate data, calling third-party APIs, and more.
- [Prompt](../reference/tools-reference/prompt-tool.md): The Prompt tool allows you to prepare a prompt as a string for more complex use cases or for use in conjunction with other prompt tools or python tools.
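To give a feel for what a Python tool looks like, here is a minimal sketch. The `tool` decorator registers the function with prompt flow; the try/except fallback is only there so the snippet runs standalone when promptflow is not installed, and the function itself is a made-up pre-processing example:

```python
try:
    from promptflow import tool
except ImportError:
    # fallback no-op decorator so this sketch runs without promptflow installed
    def tool(func):
        return func

@tool
def normalize_text(text: str) -> str:
    """Collapse whitespace and lowercase text before passing it to an LLM node."""
    return " ".join(text.split()).lower()
```

In a flow, a node of type `python` would point at the file containing this function, and its inputs would be wired from flow inputs or other nodes' outputs.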
## More tools
Our partners also contribute other useful tools for advanced scenarios; here are some links:
- [Vector DB Lookup](../reference/tools-reference/vector_db_lookup_tool.md): a vector search tool that allows users to search the top-k similar vectors from a vector database.
- [Faiss Index Lookup](../reference/tools-reference/faiss_index_lookup_tool.md): querying within a user-provided Faiss-based vector store.
## Custom tools
You can create your own tools that can be shared with your team or anyone in the world.
Learn more on [Create and Use Tool Package](../how-to-guides/develop-a-tool/create-and-use-tool-package.md)
## Next steps
For more information on the available tools and their usage, visit our [reference doc](../reference/index.md).
In prompt flow, you can utilize connections to securely manage credentials or secrets for external services.
# Connections
Connections are for storing information about how to access external services like LLMs: endpoint, api keys etc.
- In your local development environment, the connections are persisted in your local machine with keys encrypted.
- In Azure AI, connections can be configured to be shared across the entire workspace. Secrets associated with connections are securely persisted in the corresponding Azure Key Vault, adhering to robust security and compliance standards.
Prompt flow provides a variety of pre-built connections, including Azure Open AI, Open AI, etc. These pre-built connections enable seamless integration with these resources within the built-in tools. Additionally, you have the flexibility to create custom connection types using key-value pairs, empowering you to tailor connections to your specific requirements, particularly in Python tools.
| Connection type | Built-in tools |
| ------------------------------------------------------------ | ------------------------------- |
| [Azure Open AI](https://azure.microsoft.com/en-us/products/cognitive-services/openai-service) | LLM or Python |
| [Open AI](https://openai.com/) | LLM or Python |
| [Cognitive Search](https://azure.microsoft.com/en-us/products/search) | Vector DB Lookup or Python |
| [Serp](https://serpapi.com/) | Serp API or Python |
| Custom | Python |
By leveraging connections in prompt flow, you can easily establish and manage connections to external APIs and data sources, facilitating efficient data exchange and interaction within your AI applications.
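As an illustration of the custom connection type, the YAML sketch below stores arbitrary key-value pairs, with `secrets` encrypted when persisted locally. The name, endpoint, and key are placeholders, and the exact schema may vary by promptflow version.

```yaml
# my_custom_connection.yaml - illustrative placeholder values
name: my_custom_connection
type: custom
configs:
  endpoint: https://example.com/api    # non-secret settings
secrets:
  api_key: <replace-with-your-key>     # encrypted at rest locally
```

Inside a Python tool, the connection's configs and secrets can then be read to call the external service.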
## Next steps
- [Create connections](../how-to-guides/manage-connections.md)
# Concepts
In this section, you will learn the basic concepts of prompt flow.
```{toctree}
:maxdepth: 1
concept-flows
concept-tools
concept-connections
concept-variants
design-principles
```
# Design principles
When we started this project, [LangChain](https://www.langchain.com/) had already become popular, especially after the ChatGPT launch. One of the questions we’ve been asked is what’s the difference between prompt flow and LangChain. This article is to elucidate the reasons for building prompt flow and the deliberate design choices we have made. To put it succinctly, prompt flow is a suite of development tools for you to build LLM apps with a strong emphasis on quality through experimentation, not a framework - which LangChain is.
While LLM apps are mostly in exploration stage, Microsoft started in this area a bit earlier and we’ve had the opportunity to observe how developers are integrating LLMs into existing systems or build new applications. These invaluable insights have shaped the fundamental design principles of prompt flow.
## 1. Expose the prompts vs. hiding them
The core essence of LLM applications lies in the prompts themselves, at least for today. When developing a reasonably complex LLM application, the majority of development work should be “tuning” the prompts (note the intentional use of the term "tuning," which we will delve into further later on). Any framework or tool trying to help in this space should focus on making prompt tuning easier and more straightforward. On the other hand, prompts are very volatile; it's unlikely that a single prompt will work across different models or even different versions of the same model. To build a successful LLM-based application, you have to understand every prompt introduced, so that you can tune it when necessary. LLMs are simply not powerful or deterministic enough that you can use a prompt written by others the way you use libraries in traditional programming languages.
In this context, any design that tries to provide a smart function or agent by encapsulating a few prompts in a library is unlikely to yield favorable results in real-world scenarios. And hiding prompts inside a library’s code base only makes it hard for people to improve or tailor the prompts to suit their specific needs.
Prompt flow, being positioned as a tool, refrains from wrapping any prompts within its core codebase. The only place you will see prompts are our sample flows, which are, of course, available for adoption and utilization. Every prompt should be authored and controlled by the developers themselves, rather than relying on us.
## 2. A new way of work
LLMs possess remarkable capabilities that enable developers to enhance their applications without delving deep into the intricacies of machine learning. In the meantime, LLMs make these apps more stochastic, which poses new challenges to application development. Merely asserting "no exception" or "result == x" in gated tests is no longer sufficient. Adopting a new methodology and employing new tools becomes imperative to ensure the quality of LLM applications - an entirely novel way of working is required.
At the center of this paradigm shift is evaluation, a term frequently used in the machine learning space that refers to the process of assessing the performance and quality of a trained model. It involves measuring how well the model performs on a given task or dataset, which plays a pivotal role in understanding the model's strengths, weaknesses, and overall effectiveness. Evaluation metrics and techniques vary depending on the specific task and problem domain. Some common metrics include accuracy, precision, and recall, which you are probably already familiar with. LLM apps share similarities with machine learning models: they require an evaluation-centric approach integrated into the development workflow, with a robust set of metrics and evaluations forming the foundation for ensuring the quality of LLM applications.
Prompt flow offers a range of tools to streamline the new way of work:
* Develop your evaluation program as an Evaluation flow to calculate metrics for your app/flow; learn from our sample evaluation flows.
* Iterate on your application flow and run evaluation flows via the SDK/CLI, allowing you to compare metrics and choose the optimal candidate for release. These iterations include trying different prompts, different LLM parameters like temperature, etc. - this is what was referred to as the “tuning” process earlier, sometimes called experimentation.
* Integrate the evaluation into your CI/CD pipeline, aligning the assertions in your gated tests with the selected metrics.
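To make the evaluation idea concrete, here is a minimal sketch of the kind of metric logic an evaluation flow node can compute: a plain exact-match accuracy over flow outputs and ground truth. It is a simplified illustration, not the implementation of any shipped evaluation flow.

```python
def exact_match_accuracy(predictions, groundtruths):
    """Fraction of predictions that exactly match the ground truth labels."""
    if not predictions:
        return 0.0
    matches = sum(p == g for p, g in zip(predictions, groundtruths))
    return matches / len(predictions)

# hypothetical outputs of a classification flow vs. labeled data
score = exact_match_accuracy(["App", "News", "App"], ["App", "News", "Sports"])
```

An evaluation flow would compute such metrics row by row over a Run's outputs and aggregate them, so that gated tests can assert on the aggregate instead of on exact strings.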
Prompt flow introduces two conceptual components to facilitate this workflow:
* Evaluation flow: a flow type that indicates this flow is not for deployment or integration into your app; it’s for evaluating an app/flow's performance.
* Run: every time you run your flow with data, or run an evaluation on the output of a flow, a Run object is created to manage the history and allow for comparison and additional analysis.
While new concepts introduce additional cognitive load, we firmly believe they hold greater importance compared to abstracting different LLM APIs or vector database APIs.
## 3. Optimize for “visibility”
There are quite some interesting application patterns emerging because of LLMs, like Retrieval Augmented Generation (RAG), ReAct and more. Though how LLMs work may remain enigmatic to many developers, how LLM apps work is not - they essentially involve a series of calls to external services such as LLMs, databases, and search engines, all glued together. Architecturally there isn’t much new, patterns like RAG and ReAct are both straightforward to implement once a developer understands what they are - plain Python programs with API calls to external services can totally serve the purpose effectively.
By observing many internal use cases, we learned that deeper insight into the details of the execution is critical. Establishing a systematic method for tracking interactions with external systems is one of our design priorities. Consequently, we adopted an unconventional approach - prompt flow has a YAML file describing how function calls (we call them [Tools](../concepts/concept-tools.md)) are executed and connected into a Directed Acyclic Graph (DAG).
This approach offers several key benefits, primarily centered around **enhanced visibility**:
1) During development, your flow can be visualized in an intelligible manner, enabling clear identification of any faulty components. As a byproduct, you obtain an architecturally descriptive diagram that can be shared with others.
2) Each node in the flow has its internal details visualized in a consistent way.
3) Single nodes can be individually run or debugged without the need to rerun previous nodes.
![promptflow-dag](../media/promptflow-dag.png)
The emphasis on visibility in prompt flow's design helps developers to gain a comprehensive understanding of the intricate details of their applications. This, in turn, empowers developers to engage in effective troubleshooting and optimization.
Although there are some control-flow features like "activate-when" to serve the needs of branches/switch-case, we do not intend to make Flow itself Turing-complete. If you want to develop an agent that is fully dynamic and guided by LLM, leveraging [Semantic Kernel](https://github.com/microsoft/semantic-kernel) together with prompt flow would be a favorable option.
# Cloud
Prompt flow streamlines the process of developing AI applications based on LLM, easing prompt engineering, prototyping, evaluating, and fine-tuning for high-quality products.
Transitioning to production, however, typically requires a comprehensive **LLMOps** (large language model operations) process. This can often be a complex task, demanding high availability and security, particularly vital for large-scale team collaboration and lifecycle management when deploying to production.
To assist in this journey, we've introduced **Azure AI**, a **cloud-based platform** tailored for executing LLMOps, focusing on boosting productivity for enterprises.
* Private data access and controls
* Collaborative development
* Automating iterative experimentation and CI/CD
* Deployment and optimization
* Safe and Responsible AI
![img](../media/cloud/azureml/llmops_cloud_value.png)
## Transitioning from local to cloud (Azure AI)
In prompt flow, you can develop your flow locally and then seamlessly transition to Azure AI. Here are a few scenarios where this might be beneficial:
| Scenario | Benefit | How to|
| --- | --- |--- |
| Collaborative development | Azure AI provides a cloud-based platform for flow development and management, facilitating sharing and collaboration across multiple teams, organizations, and tenants.| [Submit a run using pfazure](./azureai/quick-start.md), based on the flow file in your code base.|
| Processing large amounts of data in parallel pipelines | Transitioning to Azure AI allows you to use your flow as a parallel component in a pipeline job, enabling you to process large amounts of data and integrate with existing pipelines. | Learn how to [Use flow in Azure ML pipeline job](./azureai/use-flow-in-azure-ml-pipeline.md).|
| Large-scale Deployment | Azure AI allows for seamless deployment and optimization when your flow is ready for production and requires high availability and security. | Use `pf flow build` to deploy your flow to [Azure App Service](./azureai/deploy-to-azure-appservice.md).|
| Data Security and Responsible AI Practices | If your flow handling sensitive data or requiring ethical AI practices, Azure AI offers robust security, responsible AI services, and features for data storage, identity, and access control. | Follow the steps mentioned in the above scenarios.|
For more resources on Azure AI, visit the cloud documentation site: [Build AI solutions with prompt flow](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/get-started-prompt-flow?view=azureml-api-2).
```{toctree}
:caption: AzureAI
:maxdepth: 1
azureai/quick-start
azureai/manage-flows
azureai/consume-connections-from-azure-ai
azureai/deploy-to-azure-appservice
azureai/use-flow-in-azure-ml-pipeline.md
azureai/faq
```
# Manage flows
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../../how-to-guides/faq.md#stable-vs-experimental).
:::
This documentation will walk you through how to manage your flow with the CLI and SDK on [Azure AI](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/overview-what-is-prompt-flow?view=azureml-api-2).
The flow examples in this guide come from [examples/flows/standard](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard).
In general:
- For `CLI`, you can run `pfazure flow --help` in the terminal to see help messages.
- For `SDK`, you can refer to [Promptflow Python Library Reference](../../reference/python-library-reference/promptflow.md) and check `promptflow.azure.PFClient.flows` for more flow operations.
:::{admonition} Prerequisites
- Refer to the prerequisites in [Quick start](./quick-start.md#prerequisites).
- Use the `az login` command in the command line to log in. This enables promptflow to access your credentials.
:::
Let's take a look at the following topics:
- [Manage flows](#manage-flows)
- [Create a flow](#create-a-flow)
- [List flows](#list-flows)
## Create a flow
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
To set the target workspace, you can either specify it in the CLI command or set default value in the Azure CLI.
You can refer to [Quick start](./quick-start.md#submit-a-run-to-workspace) for more information.
To create a flow to Azure from local flow directory, you can use
```bash
# create the flow
pfazure flow create --flow <path-to-flow-folder>
# create the flow with metadata
pfazure flow create --flow <path-to-flow-folder> --set display_name=<display-name> description=<description> tags.key1=value1
```
After the flow is created successfully, you can see the flow summary in the command line.
![img](../../media/cloud/manage-flows/flow_create_0.png)
:::
:::{tab-item} SDK
:sync: SDK
1. Import the required libraries
```python
from azure.identity import DefaultAzureCredential, InteractiveBrowserCredential
# azure version promptflow apis
from promptflow.azure import PFClient
```
2. Get credential
```python
try:
credential = DefaultAzureCredential()
# Check if given credential can get token successfully.
credential.get_token("https://management.azure.com/.default")
except Exception as ex:
# Fall back to InteractiveBrowserCredential in case DefaultAzureCredential not work
credential = InteractiveBrowserCredential()
```
3. Get a handle to the workspace
```python
# Get a handle to workspace
pf = PFClient(
credential=credential,
subscription_id="<SUBSCRIPTION_ID>", # this will look like xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
resource_group_name="<RESOURCE_GROUP>",
workspace_name="<AML_WORKSPACE_NAME>",
)
```
4. Create the flow
```python
# specify flow path
flow = "./web-classification"
# create flow to Azure
flow = pf.flows.create_or_update(
flow=flow, # path to the flow folder
display_name="my-web-classification", # it will be "web-classification-{timestamp}" if not specified
type="standard", # it will be "standard" if not specified
)
```
:::
::::
On Azure portal, you can see the created flow in the flow list.
![img](../../media/cloud/manage-flows/flow_create_1.png)
And the flow source folder on file share is `Users/<alias>/promptflow/<flow-display-name>`:
![img](../../media/cloud/manage-flows/flow_create_2.png)
Note that if the flow display name is not specified, it will default to the flow folder name + timestamp. (e.g. `web-classification-11-13-2023-14-19-10`)
## List flows
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
List flows with default json format:
```bash
pfazure flow list --max-results 1
```
![img](../../media/cloud/manage-flows/flow_list_0.png)
:::
:::{tab-item} SDK
:sync: SDK
```python
# reuse the pf client created in "create a flow" section
flows = pf.flows.list(max_results=1)
```
:::
::::
# Run prompt flow in Azure AI
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../../how-to-guides/faq.md#stable-vs-experimental).
:::
This guide assumes you have learned how to create and run a flow following the [Quick start](../../how-to-guides/quick-start.md). It will walk you through the main process of submitting a promptflow run to [Azure AI](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/overview-what-is-prompt-flow?view=azureml-api-2).
Benefits of using Azure AI compared to just running locally:
- **Designed for team collaboration**: The portal UI is a better fit for sharing and presenting your flows and runs, and the workspace can better organize team-shared resources like connections.
- **Enterprise Readiness Solutions**: prompt flow leverages Azure AI's robust enterprise readiness solutions, providing a secure, scalable, and reliable foundation for the development, experimentation, and deployment of flows.
## Prerequisites
1. An Azure account with an active subscription - [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
2. An Azure AI ML workspace - [Create workspace resources you need to get started with Azure AI](https://learn.microsoft.com/en-us/azure/machine-learning/quickstart-create-resources).
3. A Python environment; `python=3.9` or a higher version like 3.10 is recommended.
4. Install `promptflow` with extra dependencies and `promptflow-tools`.
```sh
pip install promptflow[azure] promptflow-tools
```
5. Clone the sample repo and check flows in folder [examples/flows](https://github.com/microsoft/promptflow/tree/main/examples/flows).
```sh
git clone https://github.com/microsoft/promptflow.git
```
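Before installing, you can quickly confirm that your interpreter meets the version requirement above. A minimal sketch:

```python
import sys

def check_python_version(minimum=(3, 9)):
    """Return True if the current interpreter meets the minimum version."""
    return sys.version_info[:2] >= minimum

if not check_python_version():
    raise SystemExit("promptflow[azure] needs Python 3.9 or newer")
```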
## Create necessary connections
A connection helps securely store and manage the secret keys or other sensitive credentials required for interacting with LLMs and other external tools, for example Azure Content Safety.
In this guide, we will use the flow `web-classification`, which uses the connection `open_ai_connection` inside. We need to set up this connection if we haven't added it before.
Please go to workspace portal, click `Prompt flow` -> `Connections` -> `Create`, then follow the instruction to create your own connections. Learn more on [connections](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/concept-connections?view=azureml-api-2).
## Submit a run to workspace
The following assumes your working directory is `<path-to-the-sample-repo>/examples/flows/standard/`.
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Use `az login` to log in so promptflow can get your credential.
```sh
az login
```
Submit a run to workspace.
```sh
pfazure run create --subscription <my_sub> -g <my_resource_group> -w <my_workspace> --flow web-classification --data web-classification/data.jsonl --stream
```
**Default subscription/resource-group/workspace**
Note `--subscription`, `-g` and `-w` can be omitted if you have installed the [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli) and [set the default configurations](https://learn.microsoft.com/en-us/cli/azure/azure-cli-configuration).
```sh
az account set --subscription <my-sub>
az configure --defaults group=<my_resource_group> workspace=<my_workspace>
```
**Serverless runtime and named runtime**
Runtimes serve as computing resources so that the flow can be executed in the workspace. The above command does not specify any runtime, which means it will run in serverless mode. In this mode the workspace will automatically create a runtime, and you can use it as the default runtime for any later flow run.
Instead, you can also [create a runtime](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/how-to-create-manage-runtime?view=azureml-api-2) and use it with `--runtime <my-runtime>`:
```sh
pfazure run create --flow web-classification --data web-classification/data.jsonl --stream --runtime <my-runtime>
```
**Specify run name and view a run**
You can also name the run by specifying `--name my_first_cloud_run` in the run create command; otherwise a run name will be generated automatically in a pattern that embeds a timestamp.
With a run name, you can easily stream or view the run details using below commands:
```sh
pfazure run stream -n my_first_cloud_run # same as "--stream" in command "run create"
pfazure run show-details -n my_first_cloud_run
pfazure run visualize -n my_first_cloud_run
```
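If you prefer readable, unique names over the auto-generated ones, a simple convention is to stamp a prefix with the current time and pass the result to `--name`. The pattern below is just an illustration (the exact pattern promptflow uses for auto-generated names may differ):

```python
from datetime import datetime

def make_run_name(prefix="web_classification"):
    """Build a readable, unique run name, e.g. web_classification_20240101_123045."""
    return f"{prefix}_{datetime.now().strftime('%Y%m%d_%H%M%S')}"

print(make_run_name())
```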
More details can be found in [CLI reference: pfazure](../../reference/pfazure-command-reference.md)
:::
:::{tab-item} SDK
:sync: SDK
1. Import the required libraries
```python
from azure.identity import DefaultAzureCredential, InteractiveBrowserCredential
# azure version promptflow apis
from promptflow.azure import PFClient
```
2. Get credential
```python
try:
credential = DefaultAzureCredential()
# Check if given credential can get token successfully.
credential.get_token("https://management.azure.com/.default")
except Exception as ex:
    # Fall back to InteractiveBrowserCredential in case DefaultAzureCredential does not work
credential = InteractiveBrowserCredential()
```
3. Get a handle to the workspace
```python
# Get a handle to workspace
pf = PFClient(
credential=credential,
subscription_id="<SUBSCRIPTION_ID>", # this will look like xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
resource_group_name="<RESOURCE_GROUP>",
workspace_name="<AML_WORKSPACE_NAME>",
)
```
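Rather than hard-coding the workspace identifiers, you may prefer to read them from the environment. A minimal helper for that (the variable names below are an assumed convention for this example, not something promptflow requires):

```python
import os

def pf_client_kwargs():
    """Collect PFClient keyword arguments from environment variables.

    Raises KeyError if any of the assumed variables is unset.
    """
    return {
        "subscription_id": os.environ["AZURE_SUBSCRIPTION_ID"],
        "resource_group_name": os.environ["AZURE_RESOURCE_GROUP"],
        "workspace_name": os.environ["AZURE_ML_WORKSPACE"],
    }

# usage: pf = PFClient(credential=credential, **pf_client_kwargs())
```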
4. Submit the flow run
```python
# load flow
flow = "web-classification"
data = "web-classification/data.jsonl"
runtime = "example-runtime-ci" # assume you have existing runtime with this name provisioned
# runtime = None  # uncomment to use automatic runtime
# create run
base_run = pf.run(
flow=flow,
data=data,
runtime=runtime,
)
pf.stream(base_run)
```
5. View the run info
```python
details = pf.get_details(base_run)
details.head(10)
pf.visualize(base_run)
```
:::
::::
## View the run in workspace
At the end of the streamed logs, you can find the `portal_url` of the submitted run; click it to view the run in the workspace.
![c_0](../../media/cloud/azureml/local-to-cloud-run-webview.png)
### Run snapshot of the flow with additional includes
Flows that enable [additional include](../../how-to-guides/develop-a-flow/referencing-external-files-or-folders-in-a-flow.md) files can also be submitted for execution in the workspace. Note that the additional include files or folders will be uploaded and organized within the **Files** folder of the run snapshot in the cloud.
![img](../../media/cloud/azureml/run-with-additional-includes.png)
## Next steps
Learn more about:
- [CLI reference: pfazure](../../reference/pfazure-command-reference.md)
# Deploy to Azure App Service
[Azure App Service](https://learn.microsoft.com/azure/app-service/) is an HTTP-based service for hosting web applications, REST APIs, and mobile back ends.
The scripts (`deploy.sh` for bash and `deploy.ps1` for powershell) under [this folder](https://github.com/microsoft/promptflow/tree/main/examples/tutorials/flow-deploy/azure-app-service) are here to help deploy the docker image to Azure App Service.
This example demonstrates how to deploy the [web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification/) flow using Azure App Service.
## Build a flow as docker format app
Use the command below to build a flow as docker format app:
```bash
pf flow build --source ../../flows/standard/web-classification --output dist --format docker
```
Note that all dependent connections must be created before building the docker image.
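Before deploying, it can be handy to sanity-check that the build produced the files you expect. The expected layout below is an assumption based on typical docker-format output (a Dockerfile plus flow and connections folders); the exact contents may vary between promptflow versions:

```python
from pathlib import Path

def missing_build_outputs(dist_dir):
    """List expected files/folders missing from a docker-format build output.

    The expected entries here are assumptions; adjust to what your
    promptflow version actually emits.
    """
    expected = ["Dockerfile", "flow", "connections"]
    root = Path(dist_dir)
    return [name for name in expected if not (root / name).exists()]

# e.g. missing_build_outputs("dist") -> [] when the build succeeded
```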
## Deploy with Azure App Service
The two scripts will do the following things:
1. Create a resource group if not exists.
2. Build and push the image to docker registry.
3. Create an app service plan with the given sku.
4. Create an app with the specified name and set the deployment container image to the pushed docker image.
5. Set up the environment variables for the app.
::::{tab-set}
:::{tab-item} Bash
Example command to use bash script:
```shell
bash deploy.sh --path dist -i <image_tag> --name my_app_23d8m -r <docker registry> -g <resource_group>
```
See the full parameters by `bash deploy.sh -h`.
:::
:::{tab-item} PowerShell
Example command to use powershell script:
```powershell
.\deploy.ps1 -i <image_tag> --Name my_app_23d8m -r <docker registry> -g <resource_group>
```
See the full parameters by `.\deploy.ps1 -h`.
:::
::::
Note that the `name` will produce a unique FQDN in the form `<name>.azurewebsites.net`.
## View and test the web app
The web app can be found via [azure portal](https://portal.azure.com/)
![img](../../media/cloud/azureml/deploy_appservice_azure_portal_img.png)
After the app is created, go to https://portal.azure.com/, find the app, and set up the environment variables at (Settings>Configuration) or (Settings>Environment variables), then restart the app.
![img](../../media/cloud/azureml/deploy_appservice_set_env_var.png)
The app can be tested by sending a POST request to the endpoint or by browsing the test page.
::::{tab-set}
:::{tab-item} Bash
```bash
curl https://<name>.azurewebsites.net/score --data '{"url":"https://play.google.com/store/apps/details?id=com.twitter.android"}' -X POST -H "Content-Type: application/json"
```
:::
:::{tab-item} PowerShell
```powershell
Invoke-WebRequest -URI https://<name>.azurewebsites.net/score -Body '{"url":"https://play.google.com/store/apps/details?id=com.twitter.android"}' -Method POST -ContentType "application/json"
```
:::
:::{tab-item} Test Page
Browse the app at Overview and see the test page:
![img](../../media/cloud/azureml/deploy_appservice_test_page.png)
:::
::::
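Equivalently, you can construct the same request programmatically from Python. The app name below is the example one from the deploy command, so substitute your own:

```python
import json

def build_score_request(app_name, url_to_classify):
    """Build the endpoint URL, JSON body, and headers for the flow's /score API."""
    endpoint = f"https://{app_name}.azurewebsites.net/score"
    body = json.dumps({"url": url_to_classify})
    headers = {"Content-Type": "application/json"}
    return endpoint, body, headers

endpoint, body, headers = build_score_request(
    "my_app_23d8m",
    "https://play.google.com/store/apps/details?id=com.twitter.android",
)
# Send with any HTTP client, e.g.:
#   import urllib.request
#   req = urllib.request.Request(endpoint, body.encode(), headers, method="POST")
#   print(urllib.request.urlopen(req).read())
```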
Tips:
- Reach deployment logs at (Deployment>Deployment Center) and app logs at (Monitoring>Log stream).
- Reach advanced deployment tools at `https://<name>.scm.azurewebsites.net/`.
- Reach more details about app service at https://learn.microsoft.com/azure/app-service/.
## Next steps
- Try the example [here](https://github.com/microsoft/promptflow/blob/main/examples/tutorials/flow-deploy/azure-app-service).