repo_id | file_path | content | __index_level_0__
---|---|---|---|
promptflow_repo | promptflow_repo/promptflow/.cspell.json | {
"version": "0.2",
"language": "en",
"languageId": "python",
"dictionaries": [
"powershell",
"python",
"go",
"css",
"html",
"bash",
"npm",
"softwareTerms",
"en_us",
"en-gb"
],
"ignorePaths": [
"**/*.js",
"**/*.pyc",
"**/*.log",
"**/*.jsonl",
"**/*.xml",
"**/*.txt",
".gitignore",
"scripts/docs/_build/**",
"src/promptflow/promptflow/azure/_restclient/flow/**",
"src/promptflow/promptflow/azure/_restclient/swagger.json",
"src/promptflow/tests/**",
"src/promptflow-tools/tests/**",
"**/flow.dag.yaml",
"**/setup.py",
"scripts/installer/curl_install_pypi/**",
"scripts/installer/windows/**",
"src/promptflow/promptflow/_sdk/_service/pfsvc.py"
],
"words": [
"aoai",
"amlignore",
"mldesigner",
"faiss",
"serp",
"azureml",
"mlflow",
"vnet",
"openai",
"pfazure",
"eastus",
"azureai",
"vectordb",
"Qdrant",
"Weaviate",
"env",
"e2etests",
"e2etest",
"tablefmt",
"logprobs",
"logit",
"hnsw",
"chatml",
"UNLCK",
"KHTML",
"numlines",
"azurecr",
"centralus",
"Policheck",
"azuremlsdktestpypi",
"rediraffe",
"pydata",
"ROBOCOPY",
"undoc",
"retriable",
"pfcli",
"pfutil",
"mgmt",
"wsid",
"westus",
"msrest",
"cref",
"msal",
"pfbytes",
"Apim",
"junit",
"nunit",
"astext",
"Likert",
"pfsvc"
],
"ignoreWords": [
"openmpi",
"ipynb",
"xdist",
"pydash",
"tqdm",
"rtype",
"epocs",
"fout",
"funcs",
"todos",
"fstring",
"creds",
"zipp",
"gmtime",
"pyjwt",
"nbconvert",
"nbformat",
"pypandoc",
"dotenv",
"miniconda",
"datas",
"tcgetpgrp",
"yamls",
"fmt",
"serpapi",
"genutils",
"metadatas",
"tiktoken",
"bfnrt",
"orelse",
"thead",
"sympy",
"ghactions",
"esac",
"MSRC",
"pycln",
"strictyaml",
"psutil",
"getch",
"tcgetattr",
"TCSADRAIN",
"stringio",
"jsonify",
"werkzeug",
"continuumio",
"pydantic",
"iterrows",
"dtype",
"fillna",
"nlines",
"aggr",
"tcsetattr",
"pysqlite",
"AADSTS700082",
"Pyinstaller",
"runsvdir",
"runsv",
"levelno",
"LANCZOS",
"Mobius",
"ruamel",
"gunicorn",
"pkill",
"pgrep",
"Hwfoxydrg",
"llms",
"vcrpy",
"uionly",
"llmops",
"Abhishek",
"restx",
"httpx",
"tiiuae",
"nohup",
"metagenai",
"WBITS",
"laddr",
"nrows",
"Dumpable",
"XCLASS"
],
"flagWords": [
"Prompt Flow"
],
"allowCompoundWords": true
}
| 0 |
promptflow_repo | promptflow_repo/promptflow/CODE_OF_CONDUCT.md | # Microsoft Open Source Code of Conduct
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
Resources:
- [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/)
- [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
- Contact [[email protected]](mailto:[email protected]) with questions or concerns
| 0 |
promptflow_repo | promptflow_repo/promptflow/.pre-commit-config.yaml | # See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
exclude: '(^docs/)|flows|scripts|src/promptflow/promptflow/azure/_restclient/|src/promptflow/tests/test_configs|src/promptflow-tools'
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v3.2.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
- id: check-yaml
- id: check-json
- id: check-merge-conflict
- repo: https://github.com/psf/black
rev: 22.3.0 # Replace by any tag/version: https://github.com/psf/black/tags
hooks:
- id: black
language_version: python3 # Should be a command that runs python3.6+
args:
- "--line-length=120"
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v2.3.0
hooks:
- id: flake8
# Temporarily disable this since it gets stuck when updating env
- repo: https://github.com/streetsidesoftware/cspell-cli
rev: v7.3.0
hooks:
- id: cspell
args: ['--config', '.cspell.json', "--no-must-find-files"]
- repo: https://github.com/hadialqattan/pycln
rev: v2.1.2 # Possible releases: https://github.com/hadialqattan/pycln/tags
hooks:
- id: pycln
name: "Clean unused python imports"
args: [--config=setup.cfg]
- repo: https://github.com/pycqa/isort
rev: 5.12.0
hooks:
- id: isort
# stages: [commit]
name: isort-python
# Use black profile for isort to avoid conflicts
# see https://github.com/PyCQA/isort/issues/1518
args: ["--profile", "black", --line-length=120]
| 0 |
promptflow_repo | promptflow_repo/promptflow/LICENSE | MIT License
Copyright (c) Microsoft Corporation.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
| 0 |
promptflow_repo | promptflow_repo/promptflow/SUPPORT.md | # Support
## How to file issues and get help
This project uses GitHub Issues to track bugs and feature requests. Please search the existing
issues before filing new issues to avoid duplicates. For new issues, file your bug or
feature request as a new Issue.
## Microsoft Support Policy
Support for this **PROJECT or PRODUCT** is limited to the resources listed above.
| 0 |
promptflow_repo | promptflow_repo/promptflow/SECURITY.md | <!-- BEGIN MICROSOFT SECURITY.MD V0.0.8 BLOCK -->
## Security
Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/).
If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/opensource/security/definition), please report it to us as described below.
## Reporting Security Issues
**Please do not report security vulnerabilities through public GitHub issues.**
Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report).
If you prefer to submit without logging in, send email to [[email protected]](mailto:[email protected]). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/opensource/security/pgpkey).
You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://aka.ms/opensource/security/msrc).
Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:
* Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
* Full paths of source file(s) related to the manifestation of the issue
* The location of the affected source code (tag/branch/commit or direct URL)
* Any special configuration required to reproduce the issue
* Step-by-step instructions to reproduce the issue
* Proof-of-concept or exploit code (if possible)
* Impact of the issue, including how an attacker might exploit the issue
This information will help us triage your report more quickly.
If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/opensource/security/bounty) page for more details about our active programs.
## Preferred Languages
We prefer all communications to be in English.
## Policy
Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/opensource/security/cvd).
<!-- END MICROSOFT SECURITY.MD BLOCK -->
| 0 |
promptflow_repo | promptflow_repo/promptflow/setup.cfg | [flake8]
extend-ignore = E203, E266, W503, F403, F821
max-line-length = 120
enable-extensions = E123,E133,E241,E242,E704,W505
exclude =
.git
.tox
.eggs
__pycache__
tests/fixtures/*
docs/*
venv,.pytest_cache
build
src/promptflow/promptflow/azure/_restclient
src/promptflow/tests/test_configs/*
import-order-style = google
[mypy]
ignore_missing_imports = True
disallow_untyped_defs = True
[mypy-pytest,pytest_mock]
ignore_missing_imports = True
[tool:pycln]
quiet = True
[black]
line_length = 120
[pycln]
silence = True
[isort]
# we use check for make fmt*
profile = "black"
# no need to fmt ignored
skip_gitignore = true
# needs to be the same as in black
line_length = 120
use_parentheses = true
include_trailing_comma = true
honor_noqa = true
ensure_newline_before_comments = true
skip_glob = [
docs/**,
pipelines/**,
pytest/**,
samples/**,
]
known_third_party = azure,mock,numpy,pandas,pydash,pytest,pytest_mock,requests,setuptools,six,sklearn,tqdm,urllib3,utilities,utils,yaml,jsonschema,strictyaml,jwt,pathspec,isodate,docker
known_first_party = promptflow,promptflow_test
| 0 |
promptflow_repo | promptflow_repo/promptflow/README.md | # Prompt flow
[![Python package](https://img.shields.io/pypi/v/promptflow)](https://pypi.org/project/promptflow/)
[![Python](https://img.shields.io/pypi/pyversions/promptflow.svg?maxAge=2592000)](https://pypi.python.org/pypi/promptflow/)
[![PyPI - Downloads](https://img.shields.io/pypi/dm/promptflow)](https://pypi.org/project/promptflow/)
[![CLI](https://img.shields.io/badge/CLI-reference-blue)](https://microsoft.github.io/promptflow/reference/pf-command-reference.html)
[![vsc extension](https://img.shields.io/visual-studio-marketplace/i/prompt-flow.prompt-flow?logo=Visual%20Studio&label=Extension%20)](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow)
[![Doc](https://img.shields.io/badge/Doc-online-green)](https://microsoft.github.io/promptflow/index.html)
[![Issue](https://img.shields.io/github/issues/microsoft/promptflow)](https://github.com/microsoft/promptflow/issues/new/choose)
[![Discussions](https://img.shields.io/github/discussions/microsoft/promptflow)](https://github.com/microsoft/promptflow/issues/new/choose)
[![CONTRIBUTING](https://img.shields.io/badge/Contributing-8A2BE2)](https://github.com/microsoft/promptflow/blob/main/CONTRIBUTING.md)
[![License: MIT](https://img.shields.io/github/license/microsoft/promptflow)](https://github.com/microsoft/promptflow/blob/main/LICENSE)
> We welcome you to join us in making prompt flow better by
> participating in [discussions](https://github.com/microsoft/promptflow/discussions),
> opening [issues](https://github.com/microsoft/promptflow/issues/new/choose),
> and submitting [PRs](https://github.com/microsoft/promptflow/pulls).
**Prompt flow** is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring. It makes prompt engineering much easier and enables you to build LLM apps with production quality.
With prompt flow, you will be able to:
- **Create and iteratively develop flow**
- Create executable [flows](https://microsoft.github.io/promptflow/concepts/concept-flows.html) that link LLMs, prompts, Python code and other [tools](https://microsoft.github.io/promptflow/concepts/concept-tools.html) together.
- Debug and iterate your flows, especially the [interaction with LLMs](https://microsoft.github.io/promptflow/concepts/concept-connections.html) with ease.
- **Evaluate flow quality and performance**
- Evaluate your flow's quality and performance with larger datasets.
- Integrate the testing and evaluation into your CI/CD system to ensure quality of your flow.
- **Streamlined development cycle for production**
- Deploy your flow to the serving platform you choose or integrate into your app's code base easily.
- (Optional but highly recommended) Collaborate with your team by leveraging the cloud version of [Prompt flow in Azure AI](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/overview-what-is-prompt-flow?view=azureml-api-2).
------
## Installation
To get started quickly, you can use a pre-built development environment. **Click the button below** to open the repo in GitHub Codespaces, and then continue the readme!
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/microsoft/promptflow?quickstart=1)
If you want to get started in your local environment, first install the packages:
Ensure you have a Python environment; `python=3.9` is recommended.
```sh
pip install promptflow promptflow-tools
```
## Quick Start ⚡
**Create a chatbot with prompt flow**
Run the following command to initiate a prompt flow from a chat template; it creates a folder named `my_chatbot` and generates the required files within it:
```sh
pf flow init --flow ./my_chatbot --type chat
```
**Setup a connection for your API key**
For an OpenAI key, establish a connection by running the following command, using the `openai.yaml` file in the `my_chatbot` folder, which stores your OpenAI key (override the key and name with `--set` to avoid changing the yaml file):
```sh
pf connection create --file ./my_chatbot/openai.yaml --set api_key=<your_api_key> --name open_ai_connection
```
For an Azure OpenAI key, establish the connection by running the following command, using the `azure_openai.yaml` file:
```sh
pf connection create --file ./my_chatbot/azure_openai.yaml --set api_key=<your_api_key> api_base=<your_api_base> --name open_ai_connection
```
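If you prefer the Python SDK over the CLI, a rough sketch of the same connection setup (mirroring the `connection.ipynb` example in this repo; the key, endpoint, and name below are placeholders) looks like:

```python
# Rough SDK equivalent of the CLI command above; values are placeholders.
from promptflow import PFClient
from promptflow.entities import AzureOpenAIConnection

client = PFClient()
connection = AzureOpenAIConnection(
    name="open_ai_connection",
    api_key="<your_api_key>",
    api_base="<your_api_base>",
)
# Note: api_key is scrubbed in the returned result.
client.connections.create_or_update(connection)
```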
**Chat with your flow**
In the `my_chatbot` folder, there's a `flow.dag.yaml` file that outlines the flow, including inputs/outputs, nodes, connections, the LLM model, etc.
> Note that in the `chat` node, we're using a connection named `open_ai_connection` (specified in the `connection` field) and the `gpt-35-turbo` model (specified in the `deployment_name` field). The `deployment_name` field specifies the OpenAI model, or the Azure OpenAI deployment resource.
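If you want to double-check these values programmatically, a small sketch (assuming `pyyaml` is installed; the node field names are taken from the note above) could look like:

```python
# Illustrative: print each node's connection and deployment_name from flow.dag.yaml.
from pathlib import Path
import yaml  # pyyaml

dag = yaml.safe_load(Path("./my_chatbot/flow.dag.yaml").read_text())
for node in dag.get("nodes", []):
    print(node["name"], node.get("connection"), node.get("inputs", {}).get("deployment_name"))
```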
Interact with your chatbot by running: (press `Ctrl + C` to end the session)
```sh
pf flow test --flow ./my_chatbot --interactive
```
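Alternatively, you can call the flow from Python as a function (the examples folder includes a flow-as-function tutorial); a minimal sketch, assuming the chat template's `question` and `chat_history` inputs:

```python
# Minimal sketch: invoke the flow as a function (input names assumed from the chat template).
from promptflow import load_flow

f = load_flow("./my_chatbot")
result = f(question="What is prompt flow?", chat_history=[])
print(result)
```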
**Core value: ensuring "High Quality" from prototype to production**
Explore our [**15-minute tutorial**](examples/tutorials/flow-fine-tuning-evaluation/promptflow-quality-improvement.md) that guides you through prompt tuning ➡ batch testing ➡ evaluation, all designed to ensure high quality ready for production.
Next Step! Continue with the **Tutorial** 👇 section to delve deeper into prompt flow.
## Tutorial 🏃‍♂️
Prompt flow is a tool designed to **build high-quality LLM apps**. The development process in prompt flow follows these steps: develop a flow, improve the flow quality, and deploy the flow to production.
### Develop your own LLM apps
#### VS Code Extension
We also offer a VS Code extension (a flow designer) for an interactive flow development experience with UI.
<img src="examples/tutorials/quick-start/media/vsc.png" alt="vsc" width="1000"/>
You can install it from the <a href="https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow">visualstudio marketplace</a>.
#### Dive deeper into flow development
[Getting started with prompt flow](https://microsoft.github.io/promptflow/how-to-guides/quick-start.html): a step-by-step guide to invoking your first flow run.
### Learn from use cases
[Tutorial: Chat with PDF](https://github.com/microsoft/promptflow/blob/main/examples/tutorials/e2e-development/chat-with-pdf.md): An end-to-end tutorial on how to build a high quality chat application with prompt flow, including flow development and evaluation with metrics.
> More examples can be found [here](https://microsoft.github.io/promptflow/tutorials/index.html#samples). We welcome contributions of new use cases!
### Setup for contributors
If you're interested in contributing, please start with our dev setup guide: [dev_setup.md](./docs/dev/dev_setup.md).
Next Step! Continue with the **Contributing** 👇 section to contribute to prompt flow.
## Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [[email protected]](mailto:[email protected]) with any additional questions or comments.
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft
trademarks or logos is subject to and must follow
[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
Any use of third-party trademarks or logos is subject to those third parties' policies.
## Code of Conduct
This project has adopted the
[Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the
[Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
or contact [[email protected]](mailto:[email protected])
with any additional questions or comments.
## Data Collection
The software may collect information about you and your use of the software and
send it to Microsoft if configured to enable telemetry.
Microsoft may use this information to provide services and improve our products and services.
You can turn off the telemetry as described in the repository.
There are also some features in the software that may enable you and Microsoft
to collect data from users of your applications. If you use these features, you
must comply with applicable law, including providing appropriate notices to
users of your applications together with a copy of Microsoft's privacy
statement. Our privacy statement is located at
https://go.microsoft.com/fwlink/?LinkID=824704. You can learn more about data
collection and use in the help documentation and our privacy statement. Your
use of the software operates as your consent to these practices.
### Telemetry Configuration
Telemetry collection is on by default.
To opt out, please run `pf config set telemetry.enabled=false` to turn it off.
## License
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the [MIT](LICENSE) license.
| 0 |
promptflow_repo | promptflow_repo/promptflow/CONTRIBUTING.md | # Contributing to Prompt Flow
You can contribute to prompt flow with issues and pull requests (PRs). Simply
filing issues for problems you encounter is a great way to contribute. Contributing
code is greatly appreciated.
## Reporting Issues
We always welcome bug reports, API proposals and overall feedback. Here are a few
tips on how you can make reporting your issue as effective as possible.
### Where to Report
New issues can be reported in our [list of issues](https://github.com/microsoft/promptflow/issues).
Before filing a new issue, please search the list of issues to make sure it does
not already exist.
If you do find an existing issue for what you wanted to report, please include
your own feedback in the discussion. Do consider upvoting (👍 reaction) the original
post, as this helps us prioritize popular issues in our backlog.
### Writing a Good Bug Report
Good bug reports make it easier for maintainers to verify and root cause the
underlying problem.
The better a bug report, the faster the problem will be resolved. Ideally, a bug
report should contain the following information:
- A high-level description of the problem.
- A _minimal reproduction_, i.e. the smallest size of code/configuration required
to reproduce the wrong behavior.
- A description of the _expected behavior_, contrasted with the _actual behavior_ observed.
- Information on the environment: OS/distribution, CPU architecture, SDK version, etc.
- Additional information, e.g. Is it a regression from previous versions? Are there
any known workarounds?
## Contributing Changes
Project maintainers will merge accepted code changes from contributors.
### DOs and DON'Ts
DOs:
- **DO** follow the standard coding conventions: [Python](https://pypi.org/project/black/)
- **DO** give priority to the current style of the project or file you're changing
if it diverges from the general guidelines.
- **DO** include tests when adding new features. When fixing bugs, start with
adding a test that highlights how the current behavior is broken.
- **DO** add proper docstring for functions and classes following [API Documentation Guidelines](./docs/dev/documentation_guidelines.md).
- **DO** keep the discussions focused. When a new or related topic comes up
it's often better to create a new issue than to sidetrack the discussion.
- **DO** clearly state on an issue that you are going to take on implementing it.
- **DO** blog and tweet (or whatever) about your contributions, frequently!
DON'Ts:
- **DON'T** surprise us with big pull requests. Instead, file an issue and start
a discussion so we can agree on a direction before you invest a large amount of time.
- **DON'T** commit code that you didn't write. If you find code that you think is a good
fit to add to prompt flow, file an issue and start a discussion before proceeding.
- **DON'T** submit PRs that alter licensing related files or headers. If you believe
there's a problem with them, file an issue and we'll be happy to discuss it.
- **DON'T** make new APIs without filing an issue and discussing with us first.
### Breaking Changes
Contributions must maintain API signature and behavioral compatibility. Contributions
that include breaking changes will be rejected. Please file an issue to discuss
your idea or change if you believe that a breaking change is warranted.
### Suggested Workflow
We use and recommend the following workflow:
1. Create an issue for your work, or reuse an existing issue on the same topic.
- Get agreement from the team and the community that your proposed change is
a good one.
- Clearly state that you are going to take on implementing it, if that's the case.
You can request that the issue be assigned to you. Note: The issue filer and
the implementer don't have to be the same person.
2. Create a personal fork of the repository on GitHub (if you don't already have one).
3. In your fork, create a branch off of main (`git checkout -b my_branch`).
- Name the branch so that it clearly communicates your intentions, such as
"issue-123" or "githubhandle-issue".
4. Make and commit your changes to your branch.
5. Add new tests corresponding to your change, if applicable.
6. Run the relevant scripts in [the section below](https://github.com/microsoft/promptflow/blob/main/CONTRIBUTING.md#dev-scripts) to ensure that your build is clean and all tests are passing.
7. Create a PR against the repository's **main** branch.
- State in the description what issue or improvement your change is addressing.
- Link the PR to the issue in step 1.
- Verify that all the Continuous Integration checks are passing.
8. Wait for feedback or approval of your changes from the code maintainers.
- If there is no response for a few days, you can create a new issue to raise awareness.
The promptflow team has a triage process for issues without an assignee,
after which you can directly contact the issue owner to follow up (e.g., to loop in the related internal reviewer).
9. When area owners have signed off, and all checks are green, your PR will be merged.
### Development scripts
The scripts below are used to build, test, and lint within the project.
- See [docs/dev/dev_setup.md](https://github.com/microsoft/promptflow/blob/main/docs/dev/dev_setup.md).
### PR - CI Process
The continuous integration (CI) system will automatically perform the required
builds and run tests (including the ones you are expected to run) for PRs. Builds
and test runs must be clean.
If the CI build fails for any reason, the PR issue will be updated with a link
that can be used to determine the cause of the failure.
| 0 |
promptflow_repo/promptflow | promptflow_repo/promptflow/examples/configuration.ipynb | # Import required libraries
from promptflow.azure import PFClient
from azure.identity import (
    InteractiveBrowserCredential,
    DefaultAzureCredential,
)

try:
    credential = DefaultAzureCredential()
    # Check if given credential can get token successfully.
    credential.get_token("https://management.azure.com/.default")
except Exception as ex:
    # Fall back to InteractiveBrowserCredential in case DefaultAzureCredential does not work
    credential = InteractiveBrowserCredential()

try:
    pf = PFClient.from_config(credential=credential)
except Exception as ex:
    # NOTE: Update the following workspace information if it was not correctly configured before
    client_config = {
        "subscription_id": "<SUBSCRIPTION_ID>",
        "resource_group": "<RESOURCE_GROUP>",
        "workspace_name": "<AML_WORKSPACE_NAME>",
    }
    if client_config["subscription_id"].startswith("<"):
        print(
            "please update your <SUBSCRIPTION_ID> <RESOURCE_GROUP> <AML_WORKSPACE_NAME> in notebook cell"
        )
        raise ex
    else:  # write and reload from config file
        import json, os

        config_path = "../.azureml/config.json"
        os.makedirs(os.path.dirname(config_path), exist_ok=True)
        with open(config_path, "w") as fo:
            fo.write(json.dumps(client_config))
        pf = PFClient.from_config(credential=credential, path=config_path)

print(pf)

from azure.ai.ml import MLClient
from azure.ai.ml.entities import AmlCompute

# MLClient uses the same configuration as PFClient
ml_client = MLClient.from_config(credential=credential)

# specify aml compute name.
cpu_compute_target = "cpu-cluster"

try:
    ml_client.compute.get(cpu_compute_target)
except Exception:
    print("Creating a new cpu compute target...")
    compute = AmlCompute(
        name=cpu_compute_target, size="STANDARD_D2_V2", min_instances=0, max_instances=4
    )
    ml_client.compute.begin_create_or_update(compute).result()

# TODO: set up connections | 0 |
promptflow_repo/promptflow | promptflow_repo/promptflow/examples/setup.sh | #!/bin/bash
# <promptflow_install>
pip install -r requirements.txt
# </promptflow_install>
pip list | 0 |
promptflow_repo/promptflow | promptflow_repo/promptflow/examples/dev_requirements.txt | # required for notebook sample ci
ipython_genutils
ipykernel
jinja2 # for readme generations
markdown # for readme generator
nbformat # for readme generator
papermill
keyrings.alt
black==23.7.0
black-nb
pypandoc # for markdown reader
pypandoc_binary # pypandoc pandoc backend
panflute # for pandoc filters
| 0 |
promptflow_repo/promptflow | promptflow_repo/promptflow/examples/requirements.txt | promptflow[azure]
promptflow-tools
python-dotenv
bs4
| 0 |
promptflow_repo/promptflow | promptflow_repo/promptflow/examples/README.md | # Promptflow examples
[![code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![license: MIT](https://img.shields.io/badge/License-MIT-purple.svg)](../LICENSE)
## Get started
**Install dependencies**
- Bootstrap your python environment.
- e.g., create a new [conda](https://conda.io/projects/conda/en/latest/user-guide/getting-started.html) environment: `conda create -n pf-examples python=3.9`.
- install required packages in the python environment: `pip install -r requirements.txt`
- show installed sdk: `pip show promptflow`
**Quick start**
| path | status | description |
------|--------|-------------
| [quickstart.ipynb](tutorials/get-started/quickstart.ipynb) | [![samples_getstarted_quickstart](https://github.com/microsoft/promptflow/actions/workflows/samples_getstarted_quickstart.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_getstarted_quickstart.yml) | A quickstart tutorial to run a flow and evaluate it. |
| [quickstart-azure.ipynb](tutorials/get-started/quickstart-azure.ipynb) | [![samples_getstarted_quickstartazure](https://github.com/microsoft/promptflow/actions/workflows/samples_getstarted_quickstartazure.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_getstarted_quickstartazure.yml) | A quickstart tutorial to run a flow in Azure AI and evaluate it. |
## CLI examples
### Tutorials ([tutorials](tutorials))
| path | status | description |
------|--------|-------------
| [chat-with-pdf](tutorials/e2e-development/chat-with-pdf.md) | [![samples_tutorials_e2e_development_chat_with_pdf](https://github.com/microsoft/promptflow/actions/workflows/samples_tutorials_e2e_development_chat_with_pdf.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_tutorials_e2e_development_chat_with_pdf.yml) | Retrieval Augmented Generation (or RAG) has become a prevalent pattern to build intelligent application with Large Language Models (or LLMs) since it can infuse external knowledge into the model, which is not trained with those up-to-date or proprietary information |
| [azure-app-service](tutorials/flow-deploy/azure-app-service/README.md) | [![samples_tutorials_flow_deploy_azure_app_service](https://github.com/microsoft/promptflow/actions/workflows/samples_tutorials_flow_deploy_azure_app_service.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_tutorials_flow_deploy_azure_app_service.yml) | This example demos how to deploy a flow using Azure App Service |
| [create-service-with-flow](tutorials/flow-deploy/create-service-with-flow/README.md) | [![samples_tutorials_flow_deploy_create_service_with_flow](https://github.com/microsoft/promptflow/actions/workflows/samples_tutorials_flow_deploy_create_service_with_flow.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_tutorials_flow_deploy_create_service_with_flow.yml) | This example shows how to create a simple service with flow |
| [distribute-flow-as-executable-app](tutorials/flow-deploy/distribute-flow-as-executable-app/README.md) | [![samples_tutorials_flow_deploy_distribute_flow_as_executable_app](https://github.com/microsoft/promptflow/actions/workflows/samples_tutorials_flow_deploy_distribute_flow_as_executable_app.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_tutorials_flow_deploy_distribute_flow_as_executable_app.yml) | This example demos how to package flow as a executable app |
| [docker](tutorials/flow-deploy/docker/README.md) | [![samples_tutorials_flow_deploy_docker](https://github.com/microsoft/promptflow/actions/workflows/samples_tutorials_flow_deploy_docker.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_tutorials_flow_deploy_docker.yml) | This example demos how to deploy flow as a docker app |
| [kubernetes](tutorials/flow-deploy/kubernetes/README.md) | [![samples_tutorials_flow_deploy_kubernetes](https://github.com/microsoft/promptflow/actions/workflows/samples_tutorials_flow_deploy_kubernetes.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_tutorials_flow_deploy_kubernetes.yml) | This example demos how to deploy flow as a Kubernetes app |
| [promptflow-quality-improvement](tutorials/flow-fine-tuning-evaluation/promptflow-quality-improvement.md) | [![samples_tutorials_flow_fine_tuning_evaluation_promptflow_quality_improvement](https://github.com/microsoft/promptflow/actions/workflows/samples_tutorials_flow_fine_tuning_evaluation_promptflow_quality_improvement.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_tutorials_flow_fine_tuning_evaluation_promptflow_quality_improvement.yml) | This tutorial is designed to enhance your understanding of improving flow quality through prompt tuning and evaluation |
### Flows ([flows](flows))
#### [Standard flows](flows/standard/)
| path | status | description |
------|--------|-------------
| [autonomous-agent](flows/standard/autonomous-agent/README.md) | [![samples_flows_standard_autonomous_agent](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_standard_autonomous_agent.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_standard_autonomous_agent.yml) | This is a flow showcasing how to construct a AutoGPT agent with promptflow to autonomously figures out how to apply the given functions to solve the goal, which is film trivia that provides accurate and up-to-date information about movies, directors, actors, and more in this sample |
| [basic](flows/standard/basic/README.md) | [![samples_flows_standard_basic](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_standard_basic.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_standard_basic.yml) | A basic standard flow using custom python tool that calls Azure OpenAI with connection info stored in environment variables |
| [basic-with-builtin-llm](flows/standard/basic-with-builtin-llm/README.md) | [![samples_flows_standard_basic_with_builtin_llm](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_standard_basic_with_builtin_llm.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_standard_basic_with_builtin_llm.yml) | A basic standard flow that calls Azure OpenAI with builtin llm tool |
| [basic-with-connection](flows/standard/basic-with-connection/README.md) | [![samples_flows_standard_basic_with_connection](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_standard_basic_with_connection.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_standard_basic_with_connection.yml) | A basic standard flow that using custom python tool calls Azure OpenAI with connection info stored in custom connection |
| [conditional-flow-for-if-else](flows/standard/conditional-flow-for-if-else/README.md) | [![samples_flows_standard_conditional_flow_for_if_else](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_standard_conditional_flow_for_if_else.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_standard_conditional_flow_for_if_else.yml) | This example is a conditional flow for if-else scenario |
| [conditional-flow-for-switch](flows/standard/conditional-flow-for-switch/README.md) | [![samples_flows_standard_conditional_flow_for_switch](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_standard_conditional_flow_for_switch.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_standard_conditional_flow_for_switch.yml) | This example is a conditional flow for switch scenario |
| [customer-intent-extraction](flows/standard/customer-intent-extraction/README.md) | [![samples_flows_standard_customer_intent_extraction](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_standard_customer_intent_extraction.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_standard_customer_intent_extraction.yml) | This sample is using OpenAI chat model(ChatGPT/GPT4) to identify customer intent from customer's question |
| [describe-image](flows/standard/describe-image/README.md) | [![samples_flows_standard_describe_image](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_standard_describe_image.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_standard_describe_image.yml) | A flow that take image input, flip it horizontally and uses OpenAI GPT-4V tool to describe it |
| [flow-with-additional-includes](flows/standard/flow-with-additional-includes/README.md) | [![samples_flows_standard_flow_with_additional_includes](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_standard_flow_with_additional_includes.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_standard_flow_with_additional_includes.yml) | User sometimes need to reference some common files or folders, this sample demos how to solve the problem using additional_includes |
| [flow-with-symlinks](flows/standard/flow-with-symlinks/README.md) | [![samples_flows_standard_flow_with_symlinks](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_standard_flow_with_symlinks.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_standard_flow_with_symlinks.yml) | User sometimes need to reference some common files or folders, this sample demos how to solve the problem using symlinks |
| [gen-docstring](flows/standard/gen-docstring/README.md) | [![samples_flows_standard_gen_docstring](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_standard_gen_docstring.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_standard_gen_docstring.yml) | This example can help you automatically generate Python code's docstring and return the modified code |
| [maths-to-code](flows/standard/maths-to-code/README.md) | [![samples_flows_standard_maths_to_code](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_standard_maths_to_code.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_standard_maths_to_code.yml) | Math to Code is a project that utilizes the power of the chatGPT model to generate code that models math questions and then executes the generated code to obtain the final numerical answer |
| [named-entity-recognition](flows/standard/named-entity-recognition/README.md) | [![samples_flows_standard_named_entity_recognition](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_standard_named_entity_recognition.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_standard_named_entity_recognition.yml) | A flow that perform named entity recognition task |
| [web-classification](flows/standard/web-classification/README.md) | [![samples_flows_standard_web_classification](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_standard_web_classification.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_standard_web_classification.yml) | This is a flow demonstrating multi-class classification with LLM |
#### [Evaluation flows](flows/evaluation/)
| path | status | description |
------|--------|-------------
| [eval-basic](flows/evaluation/eval-basic/README.md) | [![samples_flows_evaluation_eval_basic](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_evaluation_eval_basic.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_evaluation_eval_basic.yml) | This example shows how to create a basic evaluation flow |
| [eval-chat-math](flows/evaluation/eval-chat-math/README.md) | [![samples_flows_evaluation_eval_chat_math](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_evaluation_eval_chat_math.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_evaluation_eval_chat_math.yml) | This example shows how to evaluate the answer of math questions, which can compare the output results with the standard answers numerically |
| [eval-classification-accuracy](flows/evaluation/eval-classification-accuracy/README.md) | [![samples_flows_evaluation_eval_classification_accuracy](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_evaluation_eval_classification_accuracy.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_evaluation_eval_classification_accuracy.yml) | This is a flow illustrating how to evaluate the performance of a classification system |
| [eval-entity-match-rate](flows/evaluation/eval-entity-match-rate/README.md) | [![samples_flows_evaluation_eval_entity_match_rate](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_evaluation_eval_entity_match_rate.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_evaluation_eval_entity_match_rate.yml) | This is a flow evaluates: entity match rate |
| [eval-groundedness](flows/evaluation/eval-groundedness/README.md) | [![samples_flows_evaluation_eval_groundedness](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_evaluation_eval_groundedness.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_evaluation_eval_groundedness.yml) | This is a flow leverage llm to eval groundedness: whether answer is stating facts that are all present in the given context |
| [eval-perceived-intelligence](flows/evaluation/eval-perceived-intelligence/README.md) | [![samples_flows_evaluation_eval_perceived_intelligence](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_evaluation_eval_perceived_intelligence.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_evaluation_eval_perceived_intelligence.yml) | This is a flow leverage llm to eval perceived intelligence |
| [eval-qna-non-rag](flows/evaluation/eval-qna-non-rag/README.md) | [![samples_flows_evaluation_eval_qna_non_rag](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_evaluation_eval_qna_non_rag.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_evaluation_eval_qna_non_rag.yml) | This is a flow evaluating the Q&A systems by leveraging Large Language Models (LLM) to measure the quality and safety of responses |
| [eval-qna-rag-metrics](flows/evaluation/eval-qna-rag-metrics/README.md) | [![samples_flows_evaluation_eval_qna_rag_metrics](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_evaluation_eval_qna_rag_metrics.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_evaluation_eval_qna_rag_metrics.yml) | This is a flow evaluating the Q&A RAG (Retrieval Augmented Generation) systems by leveraging the state-of-the-art Large Language Models (LLM) to measure the quality and safety of responses |
#### [Chat flows](flows/chat/)
| path | status | description |
------|--------|-------------
| [basic-chat](flows/chat/basic-chat/README.md) | [![samples_flows_chat_basic_chat](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_chat_basic_chat.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_chat_basic_chat.yml) | This example shows how to create a basic chat flow |
| [chat-math-variant](flows/chat/chat-math-variant/README.md) | [![samples_flows_chat_chat_math_variant](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_chat_chat_math_variant.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_chat_chat_math_variant.yml) | This is a prompt tuning case with 3 prompt variants for math question answering |
| [chat-with-image](flows/chat/chat-with-image/README.md) | [![samples_flows_chat_chat_with_image](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_chat_chat_with_image.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_chat_chat_with_image.yml) | This flow demonstrates how to create a chatbot that can take image and text as input |
| [chat-with-pdf](flows/chat/chat-with-pdf/README.md) | [![samples_flows_chat_chat_with_pdf](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_chat_chat_with_pdf.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_chat_chat_with_pdf.yml) | This is a simple flow that allow you to ask questions about the content of a PDF file and get answers |
| [chat-with-wikipedia](flows/chat/chat-with-wikipedia/README.md) | [![samples_flows_chat_chat_with_wikipedia](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_chat_chat_with_wikipedia.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_chat_chat_with_wikipedia.yml) | This flow demonstrates how to create a chatbot that can remember previous interactions and use the conversation history to generate next message |
| [use_functions_with_chat_models](flows/chat/use_functions_with_chat_models/README.md) | [![samples_flows_chat_use_functions_with_chat_models](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_chat_use_functions_with_chat_models.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_chat_use_functions_with_chat_models.yml) | This flow covers how to use the LLM tool chat API in combination with external functions to extend the capabilities of GPT models |
### Tool Use Cases ([Tool Use Cases](tools/use-cases))
| path | status | description |
------|--------|-------------
| [cascading-inputs-tool-showcase](tools/use-cases/cascading-inputs-tool-showcase/README.md) | [![samples_tools_use_cases_cascading_inputs_tool_showcase](https://github.com/microsoft/promptflow/actions/workflows/samples_tools_use_cases_cascading_inputs_tool_showcase.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_tools_use_cases_cascading_inputs_tool_showcase.yml) | This is a flow demonstrating the use of a tool with cascading inputs which frequently used in situations where the selection in one input field determines what subsequent inputs should be shown, and it helps in creating a more efficient, user-friendly, and error-free input process |
| [custom-strong-type-connection-package-tool-showcase](tools/use-cases/custom-strong-type-connection-package-tool-showcase/README.md) | [![samples_tools_use_cases_custom_strong_type_connection_package_tool_showcase](https://github.com/microsoft/promptflow/actions/workflows/samples_tools_use_cases_custom_strong_type_connection_package_tool_showcase.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_tools_use_cases_custom_strong_type_connection_package_tool_showcase.yml) | This is a flow demonstrating the use of a package tool with custom string type connection which provides a secure way to manage credentials for external APIs and data sources, and it offers an improved user-friendly and intellisense experience compared to custom connections |
| [custom-strong-type-connection-script-tool-showcase](tools/use-cases/custom-strong-type-connection-script-tool-showcase/README.md) | [![samples_tools_use_cases_custom_strong_type_connection_script_tool_showcase](https://github.com/microsoft/promptflow/actions/workflows/samples_tools_use_cases_custom_strong_type_connection_script_tool_showcase.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_tools_use_cases_custom_strong_type_connection_script_tool_showcase.yml) | This is a flow demonstrating the use of a script tool with custom string type connection which provides a secure way to manage credentials for external APIs and data sources, and it offers an improved user-friendly and intellisense experience compared to custom connections |
| [custom_llm_tool_showcase](tools/use-cases/custom_llm_tool_showcase/README.md) | [![samples_tools_use_cases_custom_llm_tool_showcase](https://github.com/microsoft/promptflow/actions/workflows/samples_tools_use_cases_custom_llm_tool_showcase.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_tools_use_cases_custom_llm_tool_showcase.yml) | This is a flow demonstrating how to use a `custom_llm` tool, which enables users to seamlessly connect to a large language model with prompt tuning experience using a `PromptTemplate` |
| [dynamic-list-input-tool-showcase](tools/use-cases/dynamic-list-input-tool-showcase/README.md) | [![samples_tools_use_cases_dynamic_list_input_tool_showcase](https://github.com/microsoft/promptflow/actions/workflows/samples_tools_use_cases_dynamic_list_input_tool_showcase.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_tools_use_cases_dynamic_list_input_tool_showcase.yml) | This is a flow demonstrating how to use a tool with a dynamic list input |
### Connections ([connections](connections))
| path | status | description |
------|--------|-------------
| [connections](connections/README.md) | [![samples_connections](https://github.com/microsoft/promptflow/actions/workflows/samples_connections.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_connections.yml) | This folder contains example `YAML` files for creating `connection` using `pf` cli |
## SDK examples
| path | status | description |
------|--------|-------------
| [quickstart.ipynb](tutorials/get-started/quickstart.ipynb) | [![samples_getstarted_quickstart](https://github.com/microsoft/promptflow/actions/workflows/samples_getstarted_quickstart.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_getstarted_quickstart.yml) | A quickstart tutorial to run a flow and evaluate it. |
| [quickstart-azure.ipynb](tutorials/get-started/quickstart-azure.ipynb) | [![samples_getstarted_quickstartazure](https://github.com/microsoft/promptflow/actions/workflows/samples_getstarted_quickstartazure.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_getstarted_quickstartazure.yml) | A quickstart tutorial to run a flow in Azure AI and evaluate it. |
| [pipeline.ipynb](tutorials/flow-in-pipeline/pipeline.ipynb) | [![samples_flowinpipeline_pipeline](https://github.com/microsoft/promptflow/actions/workflows/samples_flowinpipeline_pipeline.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flowinpipeline_pipeline.yml) | Create pipeline using components to run a distributed job with tensorflow |
| [flow-as-function.ipynb](tutorials/get-started/flow-as-function.ipynb) | [![samples_getstarted_flowasfunction](https://github.com/microsoft/promptflow/actions/workflows/samples_getstarted_flowasfunction.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_getstarted_flowasfunction.yml) | This guide will walk you through the main scenarios of executing flow as a function. |
| [cloud-run-management.ipynb](tutorials/run-management/cloud-run-management.ipynb) | [![samples_runmanagement_cloudrunmanagement](https://github.com/microsoft/promptflow/actions/workflows/samples_runmanagement_cloudrunmanagement.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_runmanagement_cloudrunmanagement.yml) | Flow run management in Azure AI |
| [run-management.ipynb](tutorials/run-management/run-management.ipynb) | [![samples_runmanagement_runmanagement](https://github.com/microsoft/promptflow/actions/workflows/samples_runmanagement_runmanagement.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_runmanagement_runmanagement.yml) | Flow run management |
| [connection.ipynb](connections/connection.ipynb) | [![samples_connections_connection](https://github.com/microsoft/promptflow/actions/workflows/samples_connections_connection.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_connections_connection.yml) | Manage various types of connections using sdk |
| [chat-with-pdf-azure.ipynb](flows/chat/chat-with-pdf/chat-with-pdf-azure.ipynb) | [![samples_flows_chat_chatwithpdf_chatwithpdfazure](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_chat_chatwithpdf_chatwithpdfazure.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_chat_chatwithpdf_chatwithpdfazure.yml) | A tutorial of chat-with-pdf flow that executes in Azure AI |
| [chat-with-pdf.ipynb](flows/chat/chat-with-pdf/chat-with-pdf.ipynb) | [![samples_flows_chat_chatwithpdf_chatwithpdf](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_chat_chatwithpdf_chatwithpdf.yml/badge.svg?branch=main)](https://github.com/microsoft/promptflow/actions/workflows/samples_flows_chat_chatwithpdf_chatwithpdf.yml) | A tutorial of chat-with-pdf flow that allows user ask questions about the content of a PDF file and get answers |
## Contributing
We welcome contributions and suggestions! Please see the [contributing guidelines](../CONTRIBUTING.md) for details.
## Code of Conduct
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). Please see the [code of conduct](../CODE_OF_CONDUCT.md) for details.
## Reference
* [Promptflow documentation](https://microsoft.github.io/promptflow/) | 0 |
promptflow_repo/promptflow | promptflow_repo/promptflow/examples/CONTRIBUTING.md | # Contributing to examples folder
Thank you for your interest in contributing to the examples folder. This folder contains a collection of Python notebooks and selected markdown files that demonstrate various usage of this promptflow project. The script will automatically generate a README.md file in the root folder, listing all the notebooks and markdown files with their corresponding workflows.
## Guidelines for notebooks and markdown files in examples folder
When creating or modifying a notebook or markdown file, please follow these guidelines:
- Each notebook or markdown file should have a clear and descriptive title as the first line
- Each notebook or markdown file should have a brief introduction that explains the purpose and scope of the example. For details, please refer to the readme workflow generator manual [README.md](../scripts/readme/README.md) file.
- The first sentence of the first paragraph of the markdown file is important. The introduction should be concise and informative, and end with a period.
- Each notebook file has a metadata area, visible when the file is opened as a raw JSON file. The metadata area may contain the following fields (see the sketch after this list):
- `.metadata.description`: (Mandatory) A short description of the example that will be displayed in the README.md file. The description should be concise and informative, and end with a period.
- `.metadata.stage`: (Optional) A value that indicates whether the script should skip generating a workflow for this notebook or markdown file. If set to `development`, the script will ignore this file. If set to other values or omitted, the script will generate a workflow for this file.
- Each notebook or markdown file should have a clear and logical structure, using appropriate headings, subheadings, comments, and code cells. The code cells should be executable and produce meaningful outputs.
- Each notebook or markdown file should follow the [PEP 8](https://peps.python.org/pep-0008/) style guide for Python code, and use consistent and readable variable names, indentation, spacing, and punctuation.
- Each notebook or markdown file should include relevant references, citations, and acknowledgements.
- If you are contributing to [tutorial](./tutorials/), each notebook or markdown file should declare its dependent resources in its metadata, so that the auto generated workflow can listen to the changes of these resources to avoid unexpected breaking. Resources should be declared with relative path to the repo root, and here are examples for [notebook](./tutorials/get-started/quickstart.ipynb) and [markdown](./tutorials/e2e-development/chat-with-pdf.md).
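As a concrete illustration, here is a small sketch that uses `nbformat` (already listed in `dev_requirements.txt`) to set these metadata fields on a notebook; the file path and description below are just examples:

```python
# Illustrative sketch: set the readme-generator metadata on a notebook via nbformat.
import nbformat

path = "tutorials/get-started/quickstart.ipynb"  # example path
nb = nbformat.read(path, as_version=4)

# Short description shown in the generated README.md (must end with a period).
nb.metadata["description"] = "A quickstart tutorial to run a flow and evaluate it."
# Uncomment to mark the notebook as in-development and skip workflow generation.
# nb.metadata["stage"] = "development"

nbformat.write(nb, path)
```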
## Generate workflows, update README.md and submit pull requests
To run the readme.py script, you need to have Python 3 installed on your system. You also need to install the required packages by running:
```bash
# At this examples folder
pip install -r requirements.txt
pip install -r dev_requirements.txt
```
Then, you can run the script by:
```bash
# At the root of this repository
python scripts/readme/readme.py
```
For detailed usage of readme.py, please refer to the readme workflow generator manual: [README.md](../scripts/readme/README.md).
### Update [README.md](./README.md) in [examples](./) folder
The readme.py script will scan all the notebooks and markdown files in the examples folder, and generate a README.md file in the root folder. The README.md file will contain a table of contents with links to each notebook and markdown file, as well as their descriptions and workflows.
### Generations in the [workflows](../.github/workflows/) folder
This contains two parts:
* For notebooks, we prepare a standard workflow running environment and test the notebook end to end.
* For markdown files, the workflows are generated by extracting the `bash` cells from the markdown file (a simplified sketch of the extraction is shown below). The workflows prepare the same standard running environment and test these cells end to end.
The script also saves the workflows in the [workflows](../.github/workflows/) folder, where each notebook or markdown file has a corresponding workflow file with the `.yml` extension. The workflow files can be triggered by creating a new pull request or pushing a new commit to the repository. The workflow will run the notebook or markdown file, and you can check the outputs afterwards.
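For illustration, extracting the `bash` cells from a markdown file boils down to collecting its fenced `bash` code blocks. The snippet below is only a sketch of that idea, not the actual generator code:
```python
import re

def extract_bash_cells(markdown_path: str) -> list:
    """Collect the contents of all bash-fenced code blocks in a markdown file."""
    with open(markdown_path, encoding="utf-8") as f:
        content = f.read()
    # Non-greedy match of everything between a ```bash fence and the next closing fence.
    return re.findall(r"```bash\n(.*?)```", content, flags=re.DOTALL)
```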
## Feedback and Support
If you have any feedback or need any support regarding this folder, please submit an issue on GitHub. We appreciate your contribution and hope you enjoy using our project.
| 0 |
promptflow_repo/promptflow/examples | promptflow_repo/promptflow/examples/connections/azure_openai.yml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/AzureOpenAIConnection.schema.json
name: open_ai_connection
type: azure_open_ai
api_key: "<user-input>"
api_base: "aoai-api-endpoint"
api_type: "azure"
| 0 |
promptflow_repo/promptflow/examples | promptflow_repo/promptflow/examples/connections/serp.yml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/SerpConnection.schema.json
name: serp_connection
type: serp
api_key: "<to-be-replaced>"
| 0 |
promptflow_repo/promptflow/examples | promptflow_repo/promptflow/examples/connections/azure_content_safety.yml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/AzureContentSafetyConnection.schema.json
name: azure_content_safety_connection
type: azure_content_safety
api_key: "<to-be-replaced>"
endpoint: "endpoint"
api_version: "2023-04-30-preview"
| 0 |
promptflow_repo/promptflow/examples | promptflow_repo/promptflow/examples/connections/requirements.txt | promptflow
promptflow-tools
python-dotenv
| 0 |
promptflow_repo/promptflow/examples | promptflow_repo/promptflow/examples/connections/connection.ipynb | %pip install -r ../requirements.txtfrom promptflow import PFClient
# client can help manage your runs and connections.
client = PFClient()
from promptflow.entities import AzureOpenAIConnection
# Initialize an AzureOpenAIConnection object
connection = AzureOpenAIConnection(
name="my_azure_open_ai_connection",
api_key="<your-api-key>",
api_base="<your-endpoint>",
)
# Create the connection, note that api_key will be scrubbed in the returned result
result = client.connections.create_or_update(connection)
print(result)
from promptflow.entities import CustomConnection
# Initialize a custom connection object
connection = CustomConnection(
name="my_custom_connection",
# Secrets is a required field for custom connection
secrets={"my_key": "<your-api-key>"},
configs={"endpoint": "<your-endpoint>", "other_config": "other_value"},
)
# Create the connection, note that all secret values will be scrubbed in the returned result
result = client.connections.create_or_update(connection)
print(result)
connections = client.connections.list()
for connection in connections:
    print(connection)
connection = client.connections.get(name="my_custom_connection")
print(connection)
connection = client.connections.get(name="my_azure_open_ai_connection")
connection.api_base = "new_value"
connection.api_key = (
"<original-key>" # secrets are required again when updating connection using sdk
)
result = client.connections.create_or_update(connection)
print(connection)
connection = client.connections.get(name="my_custom_connection")
connection.configs["other_config"] = "new_value"
connection.secrets[
"my_key"
] = "new_secret_value" # ValueError: Connection 'my_custom_connection' secrets ['my_key'] must be filled again when updating it.
result = client.connections.create_or_update(connection)
print(connection)
# client.connections.delete(name="my_custom_connection")
promptflow_repo/promptflow/examples | promptflow_repo/promptflow/examples/connections/.env.example | api_key=<your_api_key> | 0 |
promptflow_repo/promptflow/examples | promptflow_repo/promptflow/examples/connections/custom.yml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/CustomConnection.schema.json
name: custom_connection
type: custom
configs:
key1: "test1"
secrets: # required
api-key: "<to-be-replaced>" | 0 |
promptflow_repo/promptflow/examples | promptflow_repo/promptflow/examples/connections/README.md | # Working with Connection
This folder contains example `YAML` files for creating connections using the `pf` CLI. Learn more about all the [connection types](https://microsoft.github.io/promptflow/concepts/concept-connections.html).
## Prerequisites
- Install promptflow sdk and other dependencies:
```bash
pip install -r requirements.txt
```
## Get started
- To create a connection using any of the sample `YAML` files provided in this directory, execute the following command:
```bash
# Override keys with --set to avoid yaml file changes
pf connection create -f custom.yml --set configs.key1='<your_api_key>'
pf connection create -f azure_openai.yml --set api_key='<your_api_key>'
```
- To create a custom connection using an `.env` file, execute the following command:
```bash
pf connection create -f .env --name custom_connection
```
- To list the created connections, execute the following command:
```bash
pf connection list
```
- To show the details of one connection, execute the following command:
```bash
pf connection show --name custom_connection
```
- To update a connection in the workspace, execute the following command. Currently only a few fields (description, display_name) support update:
```bash
# Update an existing connection with --set to override values
# Update an azure open ai connection with a new api base
pf connection update -n open_ai_connection --set api_base='<your_api_base>'
# Update a custom connection
pf connection update -n custom_connection --set configs.key1='<your_new_key>' secrets.key2='<your_another_key>'
```
- To delete a connection:
```bash
pf connection delete -n custom_connection
```
| 0 |
promptflow_repo/promptflow/examples | promptflow_repo/promptflow/examples/connections/cognitive_search.yml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/CognitiveSearchConnection.schema.json
name: cognitive_search_connection
type: cognitive_search
api_key: "<to-be-replaced>"
api_base: "endpoint"
api_version: "2023-07-01-Preview"
| 0 |
promptflow_repo/promptflow/examples | promptflow_repo/promptflow/examples/connections/openai.yml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/OpenAIConnection.schema.json
name: open_ai_connection
type: open_ai
api_key: "<user-input>"
organization: "" # optional
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/use_functions_with_chat_models/data.jsonl | {
"chat_history": [
{
"inputs": {
"question": "What is the weather like in Boston?"
},
"outputs": {
"answer": "{\"forecast\":[\"sunny\",\"windy\"],\"location\":\"Boston\",\"temperature\":\"72\",\"unit\":\"fahrenheit\"}",
"llm_output": {
"content": null,
"function_call": {
"arguments": "{\n \"location\": \"Boston\"\n}",
"name": "get_current_weather"
},
"role": "assistant"
}
}
}
],
"question": "How about London next week?"
} | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/use_functions_with_chat_models/requirements.txt | promptflow[azure]
promptflow-tools | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/use_functions_with_chat_models/run_function.py | from promptflow import tool
import json
def get_current_weather(location, unit="fahrenheit"):
"""Get the current weather in a given location"""
weather_info = {
"location": location,
"temperature": "72",
"unit": unit,
"forecast": ["sunny", "windy"],
}
return weather_info
def get_n_day_weather_forecast(location, format, num_days):
"""Get next num_days weather in a given location"""
weather_info = {
"location": location,
"temperature": "60",
"format": format,
"forecast": ["rainy"],
"num_days": num_days,
}
return weather_info
@tool
def run_function(response_message: dict) -> str:
if "function_call" in response_message:
function_name = response_message["function_call"]["name"]
function_args = json.loads(response_message["function_call"]["arguments"])
print(function_args)
result = globals()[function_name](**function_args)
else:
print("No function call")
if isinstance(response_message, dict):
result = response_message["content"]
else:
result = response_message
return result
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/use_functions_with_chat_models/use_functions_with_chat_models.jinja2 | system:
Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous.
{% for item in chat_history %}
user:
{{item.inputs.question}}
{% if 'function_call' in item.outputs.llm_output %}
assistant:
Function generation requested, function = {{item.outputs.llm_output.function_call.name}}, args = {{item.outputs.llm_output.function_call.arguments}}
function:
name:
{{item.outputs.llm_output.function_call.name}}
content:
{{item.outputs.answer}}
{% else %}
assistant:
{{item.outputs.llm_output}}
{% endif %}
{% endfor %}
user:
{{question}} | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/use_functions_with_chat_models/README.md | # Use Functions with Chat Models
This flow covers how to use the LLM tool chat API in combination with external functions to extend the
capabilities of GPT models.
`functions` is an optional parameter in the <a href='https://platform.openai.com/docs/api-reference/chat/create' target='_blank'>Chat Completion API</a> which can be used to provide function
specifications. The purpose of this is to enable models to generate function arguments which adhere to the provided
specifications. Note that the API will not actually execute any function calls. It is up to developers to execute
function calls using model outputs.
If the `functions` parameter is provided then by default the model will decide when it is appropriate to use one of the
functions. The API can be forced to use a specific function by setting the `function_call` parameter to
`{"name": "<insert-function-name>"}`. The API can also be forced to not use any function by setting the `function_call`
parameter to `"none"`. If a function is used, the output will contain `"finish_reason": "function_call"` in the
response, as well as a `function_call` object that has the name of the function and the generated function arguments.
You can refer to <a href='https://github.com/openai/openai-cookbook/blob/main/examples/How_to_call_functions_with_chat_models.ipynb' target='_blank'>openai sample</a> for more details.
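To make this concrete, below is a minimal sketch of sending a function specification and then dispatching the returned `function_call` yourself. It assumes the pre-1.0 `openai` Python package with credentials already configured through environment variables; the deployment name is a placeholder, and `get_current_weather` stands for a locally defined function such as the one in [run_function.py](run_function.py).
```python
import json
import openai

functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]

response = openai.ChatCompletion.create(
    engine="gpt-35-turbo",  # deployment name (Azure OpenAI); use model=... for OpenAI
    messages=[{"role": "user", "content": "What is the weather like in Boston?"}],
    functions=functions,
    function_call="auto",  # let the model decide whether to call a function
)

message = response["choices"][0]["message"]
if "function_call" in message:
    # The API only returns the call; executing it is up to you.
    name = message["function_call"]["name"]
    args = json.loads(message["function_call"]["arguments"])
    result = get_current_weather(**args)  # your own function, defined elsewhere
```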
## What you will learn
In this flow, you will learn how to use functions with LLM chat models and how to compose a function role message in a prompt template.
## Tools used in this flow
- LLM tool
- Python tool
## Prerequisites
Install promptflow sdk and other dependencies:
```bash
pip install -r requirements.txt
```
## Getting started
### 1 Create connection for LLM tool to use
Go to "Prompt flow" "Connections" tab. Click on "Create" button, select one of LLM tool supported connection types and fill in the configurations.
Currently, there are two connection types supported by LLM tool: "AzureOpenAI" and "OpenAI". If you want to use "AzureOpenAI" connection type, you need to create an Azure OpenAI service first. Please refer to [Azure OpenAI Service](https://azure.microsoft.com/en-us/products/cognitive-services/openai-service/) for more details. If you want to use "OpenAI" connection type, you need to create an OpenAI account first. Please refer to [OpenAI](https://platform.openai.com/) for more details.
```bash
# Override keys with --set to avoid yaml file changes
pf connection create --file ../../../connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base> --name open_ai_connection
```
Note in [flow.dag.yaml](flow.dag.yaml) we are using connection named `open_ai_connection`.
```bash
# show registered connection
pf connection show --name open_ai_connection
```
### 2 Start chatting
```bash
# run chat flow with default question in flow.dag.yaml
pf flow test --flow .
# run chat flow with new question
pf flow test --flow . --inputs question="How about London next week?"
# start an interactive chat session in CLI
pf flow test --flow . --interactive
# start an interactive chat session in CLI with verbose info
pf flow test --flow . --interactive --verbose
```
## References
- <a href='https://github.com/openai/openai-cookbook/blob/main/examples/How_to_call_functions_with_chat_models.ipynb' target='_blank'>OpenAI cookbook example</a>
- <a href='https://openai.com/blog/function-calling-and-other-api-updates?ref=upstract.com' target='_blank'>OpenAI function calling announcement</a>
- <a href='https://platform.openai.com/docs/guides/gpt/function-calling' target='_blank'>OpenAI function calling doc</a>
- <a href='https://platform.openai.com/docs/api-reference/chat/create' target='_blank'>OpenAI function calling API</a>
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/use_functions_with_chat_models/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
environment:
python_requirements_txt: requirements.txt
inputs:
chat_history:
type: list
default:
- inputs:
question: What is the weather like in Boston?
outputs:
answer: '{"forecast":["sunny","windy"],"location":"Boston","temperature":"72","unit":"fahrenheit"}'
llm_output:
content: null
function_call:
name: get_current_weather
arguments: |-
{
"location": "Boston"
}
role: assistant
is_chat_history: true
question:
type: string
default: How about London next week?
is_chat_input: true
outputs:
answer:
type: string
reference: ${run_function.output}
is_chat_output: true
llm_output:
type: object
reference: ${use_functions_with_chat_models.output}
nodes:
- name: run_function
type: python
source:
type: code
path: run_function.py
inputs:
response_message: ${use_functions_with_chat_models.output}
- name: use_functions_with_chat_models
type: llm
source:
type: code
path: use_functions_with_chat_models.jinja2
inputs:
deployment_name: gpt-35-turbo
temperature: '0.7'
top_p: '1.0'
stop: ''
max_tokens: '256'
presence_penalty: '0'
frequency_penalty: '0'
logit_bias: ''
functions: '[{"name":"get_current_weather","description":"Get the current weather
in a given location","parameters":{"type":"object","properties":{"location":{"type":"string","description":"The
city and state, e.g. San Francisco, CA"},"unit":{"type":"string","enum":["celsius","fahrenheit"]}},"required":["location"]}},{"name":"get_n_day_weather_forecast","description":"Get
an N-day weather forecast","parameters":{"type":"object","properties":{"location":{"type":"string","description":"The
city and state, e.g. San Francisco, CA"},"format":{"type":"string","enum":["celsius","fahrenheit"],"description":"The
temperature unit to use. Infer this from the users location."},"num_days":{"type":"integer","description":"The
number of days to forecast"}},"required":["location","format","num_days"]}}]'
function_call: auto
question: ${inputs.question}
chat_history: ${inputs.chat_history}
connection: open_ai_connection
api: chat | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/basic-chat/requirements.txt | promptflow
promptflow-tools | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/basic-chat/chat.jinja2 | system:
You are a helpful assistant.
{% for item in chat_history %}
user:
{{item.inputs.question}}
assistant:
{{item.outputs.answer}}
{% endfor %}
user:
{{question}} | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/basic-chat/README.md | # Basic Chat
This example shows how to create a basic chat flow. It demonstrates how to create a chatbot that can remember previous interactions and use the conversation history to generate the next message.
Tools used in this flow:
- `llm` tool
## Prerequisites
Install promptflow sdk and other dependencies in this folder:
```bash
pip install -r requirements.txt
```
## What you will learn
In this flow, you will learn
- how to compose a chat flow.
- the prompt template format of the LLM tool chat API. The message delimiter is a separate line containing the role name and a colon: "system:", "user:", "assistant:".
See <a href="https://platform.openai.com/docs/api-reference/chat/create#chat/create-role" target="_blank">OpenAI Chat</a> for more about message roles.
```jinja
system:
You are a chatbot having a conversation with a human.
user:
{{question}}
```
- how to consume chat history in the prompt (an example `chat_history` value is shown after this list).
```jinja
{% for item in chat_history %}
user:
{{item.inputs.question}}
assistant:
{{item.outputs.answer}}
{% endfor %}
```
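For example, a `chat_history` value consumed by the template above is simply a list of previous turns, each carrying the flow inputs and outputs of that turn (illustrative values):
```python
chat_history = [
    {
        "inputs": {"question": "What is ChatGPT?"},
        "outputs": {"answer": "ChatGPT is a conversational AI model developed by OpenAI."},
    },
    # each later turn is appended in the same shape
]
```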
## Getting started
### 1 Create connection for LLM tool to use
Go to "Prompt flow" "Connections" tab. Click on "Create" button, select one of LLM tool supported connection types and fill in the configurations.
Currently, there are two connection types supported by LLM tool: "AzureOpenAI" and "OpenAI". If you want to use "AzureOpenAI" connection type, you need to create an Azure OpenAI service first. Please refer to [Azure OpenAI Service](https://azure.microsoft.com/en-us/products/cognitive-services/openai-service/) for more details. If you want to use "OpenAI" connection type, you need to create an OpenAI account first. Please refer to [OpenAI](https://platform.openai.com/) for more details.
```bash
# Override keys with --set to avoid yaml file changes
pf connection create --file ../../../connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base> --name open_ai_connection
```
Note in [flow.dag.yaml](flow.dag.yaml) we are using connection named `open_ai_connection`.
```bash
# show registered connection
pf connection show --name open_ai_connection
```
### 2 Start chatting
```bash
# run chat flow with default question in flow.dag.yaml
pf flow test --flow .
# run chat flow with new question
pf flow test --flow . --inputs question="What's Azure Machine Learning?"
# start an interactive chat session in CLI
pf flow test --flow . --interactive
# start an interactive chat session in CLI with verbose info
pf flow test --flow . --interactive --verbose
```
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/basic-chat/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
chat_history:
type: list
default: []
question:
type: string
is_chat_input: true
default: What is ChatGPT?
outputs:
answer:
type: string
reference: ${chat.output}
is_chat_output: true
nodes:
- inputs:
# This is to easily switch between openai and azure openai.
# deployment_name is required by azure openai, model is required by openai.
deployment_name: gpt-35-turbo
model: gpt-3.5-turbo
max_tokens: "256"
temperature: "0.7"
chat_history: ${inputs.chat_history}
question: ${inputs.question}
name: chat
type: llm
source:
type: code
path: chat.jinja2
api: chat
connection: open_ai_connection
node_variants: {}
environment:
python_requirements_txt: requirements.txt
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-image/requirements.txt | promptflow
promptflow-tools | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-image/chat.jinja2 | # system:
You are a helpful assistant.
{% for item in chat_history %}
# user:
{{item.inputs.question}}
# assistant:
{{item.outputs.answer}}
{% endfor %}
# user:
{{question}} | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-image/README.md | # Chat With Image
This flow demonstrates how to create a chatbot that can take images and text as input.
Tools used in this flow:
- `OpenAI GPT-4V` tool
## Prerequisites
Install promptflow sdk and other dependencies in this folder:
```bash
pip install -r requirements.txt
```
## What you will learn
In this flow, you will learn
- how to compose a chat flow with image and text as input. The chat input should be a list of text and/or images.
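For example, a single chat input that mixes an image reference and text can be written as the following list (the same shape as the default value in [flow.dag.yaml](flow.dag.yaml) and the CLI example below):
```python
question = [
    {"data:image/png;url": "https://developer.microsoft.com/_devcom/images/logo-ms-social.png"},
    "How many colors can you see?",
]
```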
## Getting started
### 1 Create connection for OpenAI GPT-4V tool to use
Go to "Prompt flow" "Connections" tab. Click on "Create" button, and create an "OpenAI" connection. If you do not have an OpenAI account, please refer to [OpenAI](https://platform.openai.com/) for more details.
```bash
# Override keys with --set to avoid yaml file changes
pf connection create --file ../../../connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base> name=aoai_gpt4v_connection api_version=2023-07-01-preview
```
Note in [flow.dag.yaml](flow.dag.yaml) we are using connection named `aoai_gpt4v_connection`.
```bash
# show registered connection
pf connection show --name aoai_gpt4v_connection
```
### 2 Start chatting
```bash
# run chat flow with default question in flow.dag.yaml
pf flow test --flow .
# run chat flow with new question
pf flow test --flow . --inputs question='["How many colors can you see?", {"data:image/png;url": "https://developer.microsoft.com/_devcom/images/logo-ms-social.png"}]'
```
```sh
# start an interactive chat session in CLI
pf flow test --flow . --interactive
# start an interactive chat session in CLI with verbose info
pf flow test --flow . --interactive --verbose
```
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-image/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
environment:
python_requirements_txt: requirements.txt
inputs:
chat_history:
type: list
is_chat_history: true
question:
type: list
default:
- data:image/png;url: https://images.idgesg.net/images/article/2019/11/edge-browser-logo_microsoft-100816808-large.jpg
- How many colors can you see?
is_chat_input: true
outputs:
answer:
type: string
reference: ${chat.output}
is_chat_output: true
nodes:
- name: chat
type: custom_llm
source:
type: package_with_prompt
tool: promptflow.tools.aoai_gpt4v.AzureOpenAI.chat
path: chat.jinja2
inputs:
connection: aoai_gpt4v_connection
deployment_name: gpt-4v
max_tokens: 512
chat_history: ${inputs.chat_history}
question: ${inputs.question}
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/batch_run.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
#name: chat_with_pdf_default_20230820_162219_559000
flow: .
data: ./data/bert-paper-qna.jsonl
#run: <Uncomment to select a run input>
column_mapping:
chat_history: ${data.chat_history}
pdf_url: ${data.pdf_url}
question: ${data.question}
config:
EMBEDDING_MODEL_DEPLOYMENT_NAME: text-embedding-ada-002
CHAT_MODEL_DEPLOYMENT_NAME: gpt-4
PROMPT_TOKEN_LIMIT: 3000
MAX_COMPLETION_TOKENS: 1024
VERBOSE: true
CHUNK_SIZE: 1024
CHUNK_OVERLAP: 64 | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat-with-pdf-azure.ipynb | %pip install -r requirements.txtfrom azure.identity import DefaultAzureCredential, InteractiveBrowserCredential
try:
credential = DefaultAzureCredential()
# Check if given credential can get token successfully.
credential.get_token("https://management.azure.com/.default")
except Exception as ex:
# Fall back to InteractiveBrowserCredential in case DefaultAzureCredential not work
    credential = InteractiveBrowserCredential()
import promptflow.azure as azure
# Get a handle to workspace
pf = azure.PFClient.from_config(credential=credential)
conn_name = "open_ai_connection"
# TODO integrate with azure.ai sdk
# currently we only support create connection in Azure ML Studio UI
# raise Exception(f"Please create {conn_name} connection in Azure ML Studio.")flow_path = "."
data_path = "./data/bert-paper-qna-3-line.jsonl"
config_2k_context = {
"EMBEDDING_MODEL_DEPLOYMENT_NAME": "text-embedding-ada-002",
"CHAT_MODEL_DEPLOYMENT_NAME": "gpt-35-turbo",
"PROMPT_TOKEN_LIMIT": 2000,
"MAX_COMPLETION_TOKENS": 256,
"VERBOSE": True,
"CHUNK_SIZE": 1024,
"CHUNK_OVERLAP": 32,
}
column_mapping = {
"question": "${data.question}",
"pdf_url": "${data.pdf_url}",
"chat_history": "${data.chat_history}",
"config": config_2k_context,
}
run_2k_context = pf.run(
flow=flow_path,
data=data_path,
column_mapping=column_mapping,
display_name="chat_with_pdf_2k_context",
tags={"chat_with_pdf": "", "1st_round": ""},
)
pf.stream(run_2k_context)
print(run_2k_context)
detail = pf.get_details(run_2k_context)
detail
eval_groundedness_flow_path = "../../evaluation/eval-groundedness/"
eval_groundedness_2k_context = pf.run(
flow=eval_groundedness_flow_path,
run=run_2k_context,
column_mapping={
"question": "${run.inputs.question}",
"answer": "${run.outputs.answer}",
"context": "${run.outputs.context}",
},
display_name="eval_groundedness_2k_context",
)
pf.stream(eval_groundedness_2k_context)
print(eval_groundedness_2k_context)
flow_path = "."
data_path = "./data/bert-paper-qna-3-line.jsonl"
config_3k_context = {
"EMBEDDING_MODEL_DEPLOYMENT_NAME": "text-embedding-ada-002",
"CHAT_MODEL_DEPLOYMENT_NAME": "gpt-35-turbo",
"PROMPT_TOKEN_LIMIT": 3000, # different from 2k context
"MAX_COMPLETION_TOKENS": 256,
"VERBOSE": True,
"CHUNK_SIZE": 1024,
"CHUNK_OVERLAP": 32,
}
column_mapping = {
"question": "${data.question}",
"pdf_url": "${data.pdf_url}",
"chat_history": "${data.chat_history}",
"config": config_3k_context,
}
run_3k_context = pf.run(
flow=flow_path,
data=data_path,
column_mapping=column_mapping,
display_name="chat_with_pdf_3k_context",
tags={"chat_with_pdf": "", "2nd_round": ""},
)
pf.stream(run_3k_context)
print(run_3k_context)
detail = pf.get_details(run_3k_context)
detail
eval_groundedness_3k_context = pf.run(
flow=eval_groundedness_flow_path,
run=run_3k_context,
column_mapping={
"question": "${run.inputs.question}",
"answer": "${run.outputs.answer}",
"context": "${run.outputs.context}",
},
display_name="eval_groundedness_3k_context",
)
pf.stream(eval_groundedness_3k_context)
print(eval_groundedness_3k_context)
pf.get_details(eval_groundedness_3k_context)
pf.visualize([eval_groundedness_2k_context, eval_groundedness_3k_context])
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/requirements.txt | PyPDF2
faiss-cpu
openai
jinja2
python-dotenv
tiktoken
promptflow[azure]
promptflow-tools | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/rewrite_question_tool.py | from promptflow import tool
from chat_with_pdf.rewrite_question import rewrite_question
@tool
def rewrite_question_tool(question: str, history: list, env_ready_signal: str):
return rewrite_question(question, history)
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/setup_env.py | import os
from typing import Union
from promptflow import tool
from promptflow.connections import AzureOpenAIConnection, OpenAIConnection
from chat_with_pdf.utils.lock import acquire_lock
BASE_DIR = os.path.dirname(os.path.abspath(__file__)) + "/chat_with_pdf/"
@tool
def setup_env(connection: Union[AzureOpenAIConnection, OpenAIConnection], config: dict):
if not connection or not config:
return
if isinstance(connection, AzureOpenAIConnection):
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = connection.api_base
os.environ["OPENAI_API_KEY"] = connection.api_key
os.environ["OPENAI_API_VERSION"] = connection.api_version
if isinstance(connection, OpenAIConnection):
os.environ["OPENAI_API_KEY"] = connection.api_key
if connection.organization is not None:
os.environ["OPENAI_ORG_ID"] = connection.organization
for key in config:
os.environ[key] = str(config[key])
with acquire_lock(BASE_DIR + "create_folder.lock"):
if not os.path.exists(BASE_DIR + ".pdfs"):
os.mkdir(BASE_DIR + ".pdfs")
if not os.path.exists(BASE_DIR + ".index/.pdfs"):
os.makedirs(BASE_DIR + ".index/.pdfs")
return "Ready"
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/build_index_tool.py | from promptflow import tool
from chat_with_pdf.build_index import create_faiss_index
@tool
def build_index_tool(pdf_path: str) -> str:
return create_faiss_index(pdf_path)
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/README.md | # Chat with PDF
This is a simple flow that allows you to ask questions about the content of a PDF file and get answers.
You can run the flow with a URL to a PDF file and a question as arguments.
Once it's launched, it will download the PDF and build an index of the content.
Then, when you ask a question, it will look up the index to retrieve the relevant content and post the question together with that content to the OpenAI chat model (gpt-3.5-turbo or gpt-4) to get an answer.
Learn more in the corresponding [tutorial](../../../tutorials/e2e-development/chat-with-pdf.md).
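For orientation, these steps condense to the sketch below, using the helper functions from the [chat_with_pdf](./chat_with_pdf/) package in this folder (environment setup and caching details omitted; see [main.py](./chat_with_pdf/main.py) for the full implementation):
```python
from chat_with_pdf.download import download
from chat_with_pdf.build_index import create_faiss_index
from chat_with_pdf.rewrite_question import rewrite_question
from chat_with_pdf.find_context import find_context
from chat_with_pdf.qna import qna


def answer_question(question: str, pdf_url: str, history: list):
    pdf_path = download(pdf_url)  # download and cache the PDF
    index_path = create_faiss_index(pdf_path)  # chunk, embed and index its text
    standalone_question = rewrite_question(question, history)  # make the question self-contained
    prompt, context = find_context(standalone_question, index_path)  # retrieve relevant chunks
    return qna(prompt, history), context  # stream an answer from the chat model
```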
Tools used in this flow:
- custom `python` Tool
## Prerequisites
Install promptflow sdk and other dependencies:
```bash
pip install -r requirements.txt
```
## Get started
### Create connection in this folder
```bash
# create connection needed by flow
if pf connection list | grep open_ai_connection; then
echo "open_ai_connection already exists"
else
pf connection create --file ../../../connections/azure_openai.yml --name open_ai_connection --set api_key=<your_api_key> api_base=<your_api_base>
fi
```
### CLI Example
#### Run flow
**Note**: this sample uses [predownloaded PDFs](./chat_with_pdf/.pdfs/) and [prebuilt FAISS Index](./chat_with_pdf/.index/) to speed up execution time.
You can remove the folders to start a fresh run.
```bash
# test with default input value in flow.dag.yaml
pf flow test --flow .
# test with flow inputs
pf flow test --flow . --inputs question="What is the name of the new language representation model introduced in the document?" pdf_url="https://arxiv.org/pdf/1810.04805.pdf"
# (Optional) create a random run name
run_name="web_classification_"$(openssl rand -hex 12)
# run with multiline data, --name is optional
pf run create --file batch_run.yaml --name $run_name
# visualize run output details
pf run visualize --name $run_name
```
#### Submit run to cloud
Assume we already have a connection named `open_ai_connection` in the workspace.
```bash
# set default workspace
az account set -s <your_subscription_id>
az configure --defaults group=<your_resource_group_name> workspace=<your_workspace_name>
```
``` bash
# create run
pfazure run create --file batch_run.yaml --name $run_name
```
Note: Click portal_url of the run to view the final snapshot.
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/flow.dag.yaml.multi-node | inputs:
chat_history:
type: list
default: []
pdf_url:
type: string
default: https://arxiv.org/pdf/1810.04805.pdf
question:
type: string
is_chat_input: true
default: what NLP tasks does it perform well?
outputs:
answer:
type: string
is_chat_output: true
reference: ${qna_tool.output.answer}
context:
type: string
reference: ${qna_tool.output.context}
nodes:
- name: setup_env
type: python
source:
type: code
path: setup_env.py
inputs:
conn: my_custom_connection
- name: download_tool
type: python
source:
type: code
path: download_tool.py
inputs:
url: ${inputs.pdf_url}
env_ready_signal: ${setup_env.output}
- name: build_index_tool
type: python
source:
type: code
path: build_index_tool.py
inputs:
pdf_path: ${download_tool.output}
- name: qna_tool
type: python
source:
type: code
path: qna_tool.py
inputs:
question: ${rewrite_question_tool.output}
index_path: ${build_index_tool.output}
history: ${inputs.chat_history}
- name: rewrite_question_tool
type: python
source:
type: code
path: rewrite_question_tool.py
inputs:
question: ${inputs.question}
history: ${inputs.chat_history}
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
inputs:
chat_history:
type: list
default: []
pdf_url:
type: string
default: https://arxiv.org/pdf/1810.04805.pdf
question:
type: string
is_chat_input: true
default: what is BERT?
config:
type: object
default:
EMBEDDING_MODEL_DEPLOYMENT_NAME: text-embedding-ada-002
CHAT_MODEL_DEPLOYMENT_NAME: gpt-4
PROMPT_TOKEN_LIMIT: 3000
MAX_COMPLETION_TOKENS: 1024
VERBOSE: true
CHUNK_SIZE: 1024
CHUNK_OVERLAP: 64
outputs:
answer:
type: string
is_chat_output: true
reference: ${qna_tool.output.answer}
context:
type: string
reference: ${find_context_tool.output.context}
nodes:
- name: setup_env
type: python
source:
type: code
path: setup_env.py
inputs:
connection: open_ai_connection
config: ${inputs.config}
- name: download_tool
type: python
source:
type: code
path: download_tool.py
inputs:
url: ${inputs.pdf_url}
env_ready_signal: ${setup_env.output}
- name: build_index_tool
type: python
source:
type: code
path: build_index_tool.py
inputs:
pdf_path: ${download_tool.output}
- name: find_context_tool
type: python
source:
type: code
path: find_context_tool.py
inputs:
question: ${rewrite_question_tool.output}
index_path: ${build_index_tool.output}
- name: qna_tool
type: python
source:
type: code
path: qna_tool.py
inputs:
prompt: ${find_context_tool.output.prompt}
history: ${inputs.chat_history}
- name: rewrite_question_tool
type: python
source:
type: code
path: rewrite_question_tool.py
inputs:
question: ${inputs.question}
history: ${inputs.chat_history}
env_ready_signal: ${setup_env.output}
environment:
python_requirements_txt: requirements.txt
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/flow.dag.yaml.single-node | inputs:
chat_history:
type: list
default:
- inputs:
question: what is BERT?
outputs:
answer: BERT (Bidirectional Encoder Representations from Transformers) is a
language representation model that pre-trains deep bidirectional
representations from unlabeled text by jointly conditioning on both
left and right context in all layers. Unlike other language
representation models, BERT can be fine-tuned with just one additional
output layer to create state-of-the-art models for a wide range of
tasks such as question answering and language inference, without
substantial task-specific architecture modifications. BERT is
effective for both fine-tuning and feature-based approaches. It
obtains new state-of-the-art results on eleven natural language
processing tasks, including pushing the GLUE score to 80.5% (7.7%
point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute
improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point
absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point
absolute improvement).
pdf_url:
type: string
default: https://arxiv.org/pdf/1810.04805.pdf
question:
type: string
is_chat_input: true
default: what NLP tasks does it perform well?
outputs:
answer:
type: string
is_chat_output: true
reference: ${chat_with_pdf_tool.output.answer}
context:
type: string
reference: ${chat_with_pdf_tool.output.context}
nodes:
- name: setup_env
type: python
source:
type: code
path: setup_env.py
inputs:
conn: my_custom_connection
- name: chat_with_pdf_tool
type: python
source:
type: code
path: chat_with_pdf_tool.py
inputs:
history: ${inputs.chat_history}
pdf_url: ${inputs.pdf_url}
question: ${inputs.question}
ready: ${setup_env.output}
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf_tool.py | from promptflow import tool
from chat_with_pdf.main import chat_with_pdf
@tool
def chat_with_pdf_tool(question: str, pdf_url: str, history: list, ready: str):
history = convert_chat_history_to_chatml_messages(history)
stream, context = chat_with_pdf(question, pdf_url, history)
answer = ""
for str in stream:
answer = answer + str + ""
return {"answer": answer, "context": context}
def convert_chat_history_to_chatml_messages(history):
messages = []
for item in history:
messages.append({"role": "user", "content": item["inputs"]["question"]})
messages.append({"role": "assistant", "content": item["outputs"]["answer"]})
return messages
def convert_chatml_messages_to_chat_history(messages):
history = []
for i in range(0, len(messages), 2):
history.append(
{
"inputs": {"question": messages[i]["content"]},
"outputs": {"answer": messages[i + 1]["content"]},
}
)
return history
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/qna_tool.py | from promptflow import tool
from chat_with_pdf.qna import qna
@tool
def qna_tool(prompt: str, history: list):
stream = qna(prompt, convert_chat_history_to_chatml_messages(history))
answer = ""
for str in stream:
answer = answer + str + ""
return {"answer": answer}
def convert_chat_history_to_chatml_messages(history):
messages = []
for item in history:
messages.append({"role": "user", "content": item["inputs"]["question"]})
messages.append({"role": "assistant", "content": item["outputs"]["answer"]})
return messages
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/download_tool.py | from promptflow import tool
from chat_with_pdf.download import download
@tool
def download_tool(url: str, env_ready_signal: str) -> str:
return download(url)
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/eval_run.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Run.schema.json
#name: eval_groundedness_default_20230820_200152_009000
flow: ../../evaluation/eval-groundedness
run: chat_with_pdf_default_20230820_162219_559000
column_mapping:
question: ${run.inputs.question}
answer: ${run.outputs.answer}
context: ${run.outputs.context} | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/openai.yaml | # All the values should be string type, please use "123" instead of 123 or "True" instead of True.
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/OpenAIConnection.schema.json
name: open_ai_connection
type: open_ai
api_key: "<open-ai-api-key>"
organization: ""
# Note:
# The connection information will be stored in a local database with api_key encrypted for safety.
# Prompt flow will ONLY use the connection information (incl. keys) when instructed by you, e.g. manage connections, use connections to run flow etc.
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/find_context_tool.py | from promptflow import tool
from chat_with_pdf.find_context import find_context
@tool
def find_context_tool(question: str, index_path: str):
prompt, context = find_context(question, index_path)
return {"prompt": prompt, "context": [c.text for c in context]}
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat-with-pdf.ipynb | %pip install -r requirements.txtimport promptflow
pf = promptflow.PFClient()
# List all the available connections
for c in pf.connections.list():
print(c.name + " (" + c.type + ")")# create needed connection
from promptflow.entities import AzureOpenAIConnection, OpenAIConnection
try:
conn_name = "open_ai_connection"
conn = pf.connections.get(name=conn_name)
print("using existing connection")
except:
# Follow https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal to create an Azure Open AI resource.
connection = AzureOpenAIConnection(
name=conn_name,
api_key="<user-input>",
api_base="<test_base>",
api_type="azure",
api_version="<test_version>",
)
# use this if you have an existing OpenAI account
# connection = OpenAIConnection(
# name=conn_name,
# api_key="<user-input>",
# )
conn = pf.connections.create_or_update(connection)
print("successfully created connection")
print(conn)
output = pf.flows.test(
".",
inputs={
"chat_history": [],
"pdf_url": "https://arxiv.org/pdf/1810.04805.pdf",
"question": "what is BERT?",
},
)
print(output)
flow_path = "."
data_path = "./data/bert-paper-qna-3-line.jsonl"
config_2k_context = {
"EMBEDDING_MODEL_DEPLOYMENT_NAME": "text-embedding-ada-002",
"CHAT_MODEL_DEPLOYMENT_NAME": "gpt-4", # change this to the name of your deployment if you're using Azure OpenAI
"PROMPT_TOKEN_LIMIT": 2000,
"MAX_COMPLETION_TOKENS": 256,
"VERBOSE": True,
"CHUNK_SIZE": 1024,
"CHUNK_OVERLAP": 64,
}
column_mapping = {
"question": "${data.question}",
"pdf_url": "${data.pdf_url}",
"chat_history": "${data.chat_history}",
"config": config_2k_context,
}
run_2k_context = pf.run(flow=flow_path, data=data_path, column_mapping=column_mapping)
pf.stream(run_2k_context)
print(run_2k_context)
pf.get_details(run_2k_context)
eval_groundedness_flow_path = "../../evaluation/eval-groundedness/"
eval_groundedness_2k_context = pf.run(
flow=eval_groundedness_flow_path,
run=run_2k_context,
column_mapping={
"question": "${run.inputs.question}",
"answer": "${run.outputs.answer}",
"context": "${run.outputs.context}",
},
display_name="eval_groundedness_2k_context",
)
pf.stream(eval_groundedness_2k_context)
print(eval_groundedness_2k_context)
pf.get_details(eval_groundedness_2k_context)
pf.get_metrics(eval_groundedness_2k_context)
pf.visualize(eval_groundedness_2k_context)
config_3k_context = {
"EMBEDDING_MODEL_DEPLOYMENT_NAME": "text-embedding-ada-002",
"CHAT_MODEL_DEPLOYMENT_NAME": "gpt-4", # change this to the name of your deployment if you're using Azure OpenAI
"PROMPT_TOKEN_LIMIT": 3000,
"MAX_COMPLETION_TOKENS": 256,
"VERBOSE": True,
"CHUNK_SIZE": 1024,
"CHUNK_OVERLAP": 64,
}
run_3k_context = pf.run(flow=flow_path, data=data_path, column_mapping=column_mapping)
pf.stream(run_3k_context)
print(run_3k_context)
eval_groundedness_3k_context = pf.run(
flow=eval_groundedness_flow_path,
run=run_3k_context,
column_mapping={
"question": "${run.inputs.question}",
"answer": "${run.outputs.answer}",
"context": "${run.outputs.context}",
},
display_name="eval_groundedness_3k_context",
)
pf.stream(eval_groundedness_3k_context)
print(eval_groundedness_3k_context)
pf.get_details(eval_groundedness_3k_context)
pf.visualize([eval_groundedness_2k_context, eval_groundedness_3k_context])
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/__init__.py | import sys
import os
sys.path.append(
os.path.join(os.path.dirname(os.path.abspath(__file__)), "chat_with_pdf")
)
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/test.ipynb | from main import chat_with_pdf, print_stream_and_return_full_answer
from dotenv import load_dotenv
load_dotenv()
bert_paper_url = "https://arxiv.org/pdf/1810.04805.pdf"
questions = [
"what is BERT?",
"what NLP tasks does it perform well?",
"is BERT suitable for NER?",
"is it better than GPT",
"when was GPT come up?",
"when was BERT come up?",
"so about same time?",
]
history = []
for q in questions:
stream, context = chat_with_pdf(q, bert_paper_url, history)
print("User: " + q, flush=True)
print("Bot: ", end="", flush=True)
answer = print_stream_and_return_full_answer(stream)
history = history + [
{"role": "user", "content": q},
{"role": "assistant", "content": answer},
] | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/build_index.py | import PyPDF2
import faiss
import os
from pathlib import Path
from utils.oai import OAIEmbedding
from utils.index import FAISSIndex
from utils.logging import log
from utils.lock import acquire_lock
from constants import INDEX_DIR
def create_faiss_index(pdf_path: str) -> str:
chunk_size = int(os.environ.get("CHUNK_SIZE"))
chunk_overlap = int(os.environ.get("CHUNK_OVERLAP"))
log(f"Chunk size: {chunk_size}, chunk overlap: {chunk_overlap}")
file_name = Path(pdf_path).name + f".index_{chunk_size}_{chunk_overlap}"
index_persistent_path = Path(INDEX_DIR) / file_name
index_persistent_path = index_persistent_path.resolve().as_posix()
lock_path = index_persistent_path + ".lock"
log("Index path: " + os.path.abspath(index_persistent_path))
with acquire_lock(lock_path):
if os.path.exists(os.path.join(index_persistent_path, "index.faiss")):
log("Index already exists, bypassing index creation")
return index_persistent_path
else:
if not os.path.exists(index_persistent_path):
os.makedirs(index_persistent_path)
log("Building index")
pdf_reader = PyPDF2.PdfReader(pdf_path)
text = ""
for page in pdf_reader.pages:
text += page.extract_text()
            # Chunk the text into segments of CHUNK_SIZE characters with CHUNK_OVERLAP-character overlap
segments = split_text(text, chunk_size, chunk_overlap)
log(f"Number of segments: {len(segments)}")
index = FAISSIndex(index=faiss.IndexFlatL2(1536), embedding=OAIEmbedding())
index.insert_batch(segments)
index.save(index_persistent_path)
log("Index built: " + index_persistent_path)
return index_persistent_path
# Split the text into chunks with CHUNK_SIZE and CHUNK_OVERLAP as character count
def split_text(text, chunk_size, chunk_overlap):
# Calculate the number of chunks
num_chunks = (len(text) - chunk_overlap) // (chunk_size - chunk_overlap)
# Split the text into chunks
chunks = []
for i in range(num_chunks):
start = i * (chunk_size - chunk_overlap)
end = start + chunk_size
chunks.append(text[start:end])
# Add the last chunk
chunks.append(text[num_chunks * (chunk_size - chunk_overlap):])
return chunks
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/main.py | import argparse
from dotenv import load_dotenv
import os
from qna import qna
from find_context import find_context
from rewrite_question import rewrite_question
from build_index import create_faiss_index
from download import download
from utils.lock import acquire_lock
from constants import PDF_DIR, INDEX_DIR
def chat_with_pdf(question: str, pdf_url: str, history: list):
with acquire_lock("create_folder.lock"):
if not os.path.exists(PDF_DIR):
os.mkdir(PDF_DIR)
if not os.path.exists(INDEX_DIR):
os.makedirs(INDEX_DIR)
pdf_path = download(pdf_url)
index_path = create_faiss_index(pdf_path)
q = rewrite_question(question, history)
prompt, context = find_context(q, index_path)
stream = qna(prompt, history)
return stream, context
def print_stream_and_return_full_answer(stream):
answer = ""
for str in stream:
print(str, end="", flush=True)
answer = answer + str + ""
print(flush=True)
return answer
def main_loop(url: str):
load_dotenv(os.path.join(os.path.dirname(__file__), ".env"), override=True)
history = []
while True:
question = input("\033[92m" + "$User (type q! to quit): " + "\033[0m")
if question == "q!":
break
stream, context = chat_with_pdf(question, url, history)
print("\033[92m" + "$Bot: " + "\033[0m", end=" ", flush=True)
answer = print_stream_and_return_full_answer(stream)
history = history + [
{"role": "user", "content": question},
{"role": "assistant", "content": answer},
]
def main():
parser = argparse.ArgumentParser(description="Ask questions about a PDF file")
parser.add_argument("url", help="URL to the PDF file")
args = parser.parse_args()
main_loop(args.url)
if __name__ == "__main__":
main()
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/rewrite_question_prompt.md | You are able to reason from previous conversation and the recent question, to come up with a rewrite of the question which is concise but with enough information that people without knowledge of previous conversation can understand the question.
A few examples:
# Example 1
## Previous conversation
user: Who is Bill Clinton?
assistant: Bill Clinton is an American politician who served as the 42nd President of the United States from 1993 to 2001.
## Question
user: When was he born?
## Rewritten question
When was Bill Clinton born?
# Example 2
## Previous conversation
user: What is BERT?
assistant: BERT stands for "Bidirectional Encoder Representations from Transformers." It is a natural language processing (NLP) model developed by Google.
user: What data was used for its training?
assistant: The BERT (Bidirectional Encoder Representations from Transformers) model was trained on a large corpus of publicly available text from the internet. It was trained on a combination of books, articles, websites, and other sources to learn the language patterns and relationships between words.
## Question
user: What NLP tasks can it perform well?
## Rewritten question
What NLP tasks can BERT perform well?
Now comes the actual work - please respond with the rewritten question in the same language as the question, nothing else.
## Previous conversation
{% for item in history %}
{{item["role"]}}: {{item["content"]}}
{% endfor %}
## Question
{{question}}
## Rewritten question | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/download.py | import requests
import os
import re
from utils.lock import acquire_lock
from utils.logging import log
from constants import PDF_DIR
# Download a pdf file from a url and return the path to the file
def download(url: str) -> str:
path = os.path.join(PDF_DIR, normalize_filename(url) + ".pdf")
lock_path = path + ".lock"
with acquire_lock(lock_path):
if os.path.exists(path):
log("Pdf already exists in " + os.path.abspath(path))
return path
log("Downloading pdf from " + url)
response = requests.get(url)
with open(path, "wb") as f:
f.write(response.content)
return path
def normalize_filename(filename):
# Replace any invalid characters with an underscore
return re.sub(r"[^\w\-_. ]", "_", filename)
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/qna_prompt.md | You're a smart assistant can answer questions based on provided context and previous conversation history between you and human.
Use the context to answer the question at the end, note that the context has order and importance - e.g. context #1 is more important than #2.
Try as much as you can to answer based on the provided the context, if you cannot derive the answer from the context, you should say you don't know.
Answer in the same language as the question.
# Context
{% for i, c in context %}
## Context #{{i+1}}
{{c.text}}
{% endfor %}
# Question
{{question}} | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/qna.py | import os
from utils.oai import OAIChat
def qna(prompt: str, history: list):
max_completion_tokens = int(os.environ.get("MAX_COMPLETION_TOKENS"))
chat = OAIChat()
stream = chat.stream(
messages=history + [{"role": "user", "content": prompt}],
max_tokens=max_completion_tokens,
)
return stream
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/find_context.py | import faiss
from jinja2 import Environment, FileSystemLoader
import os
from utils.index import FAISSIndex
from utils.oai import OAIEmbedding, render_with_token_limit
from utils.logging import log
def find_context(question: str, index_path: str):
index = FAISSIndex(index=faiss.IndexFlatL2(1536), embedding=OAIEmbedding())
index.load(path=index_path)
snippets = index.query(question, top_k=5)
template = Environment(
loader=FileSystemLoader(os.path.dirname(os.path.abspath(__file__)))
).get_template("qna_prompt.md")
token_limit = int(os.environ.get("PROMPT_TOKEN_LIMIT"))
# Try to render the template with token limit and reduce snippet count if it fails
while True:
try:
prompt = render_with_token_limit(
template, token_limit, question=question, context=enumerate(snippets)
)
break
except ValueError:
snippets = snippets[:-1]
log(f"Reducing snippet count to {len(snippets)} to fit token limit")
return prompt, snippets
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/constants.py | import os
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
PDF_DIR = os.path.join(BASE_DIR, ".pdfs")
INDEX_DIR = os.path.join(BASE_DIR, ".index/.pdfs/")
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/.env.example | # Azure OpenAI, uncomment below section if you want to use Azure OpenAI
# Note: EMBEDDING_MODEL_DEPLOYMENT_NAME and CHAT_MODEL_DEPLOYMENT_NAME are deployment names for Azure OpenAI
OPENAI_API_TYPE=azure
OPENAI_API_BASE=<your_AOAI_endpoint>
OPENAI_API_KEY=<your_AOAI_key>
OPENAI_API_VERSION=2023-05-15
EMBEDDING_MODEL_DEPLOYMENT_NAME=text-embedding-ada-002
CHAT_MODEL_DEPLOYMENT_NAME=gpt-4
# OpenAI, uncomment below section if you want to use OpenAI
# Note: EMBEDDING_MODEL_DEPLOYMENT_NAME and CHAT_MODEL_DEPLOYMENT_NAME are model names for OpenAI
#OPENAI_API_KEY=<your_openai_key>
#OPENAI_ORG_ID=<your_openai_org_id> # this is optional
#EMBEDDING_MODEL_DEPLOYMENT_NAME=text-embedding-ada-002
#CHAT_MODEL_DEPLOYMENT_NAME=gpt-4
PROMPT_TOKEN_LIMIT=2000
MAX_COMPLETION_TOKENS=1024
CHUNK_SIZE=256
CHUNK_OVERLAP=16
VERBOSE=True | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/rewrite_question.py | from jinja2 import Environment, FileSystemLoader
import os
from utils.logging import log
from utils.oai import OAIChat, render_with_token_limit
def rewrite_question(question: str, history: list):
template = Environment(
loader=FileSystemLoader(os.path.dirname(os.path.abspath(__file__)))
).get_template("rewrite_question_prompt.md")
token_limit = int(os.environ["PROMPT_TOKEN_LIMIT"])
max_completion_tokens = int(os.environ["MAX_COMPLETION_TOKENS"])
# Try to render the prompt with token limit and reduce the history count if it fails
while True:
try:
prompt = render_with_token_limit(
template, token_limit, question=question, history=history
)
break
except ValueError:
history = history[:-1]
log(f"Reducing chat history count to {len(history)} to fit token limit")
chat = OAIChat()
rewritten_question = chat.generate(
messages=[{"role": "user", "content": prompt}], max_tokens=max_completion_tokens
)
log(f"Rewritten question: {rewritten_question}")
return rewritten_question
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/README.md | # Chat with PDF
This is a simple Python application that allows you to ask questions about the content of a PDF file and get answers.
It's a console application that you start with a URL to a PDF file as an argument. Once it's launched, it will download the PDF and build an index of the content. Then, when you ask a question, it will look up the index to retrieve the relevant content and post the question together with that content to the OpenAI chat model (gpt-3.5-turbo or gpt-4) to get an answer.
## Screenshot - ask questions about BERT paper
![screenshot-chat-with-pdf](../assets/chat_with_pdf_console.png)
## How it works?
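In short, the app downloads and caches the PDF, chunks and embeds its text into a FAISS index, rewrites your question into a standalone one using the chat history, retrieves the most relevant chunks from the index, and finally sends the question together with that context to the chat model to stream back an answer. See [main.py](./main.py) for the full flow.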
## Get started
### Create .env file in this folder with below content
```
OPENAI_API_BASE=<AOAI_endpoint>
OPENAI_API_KEY=<AOAI_key>
EMBEDDING_MODEL_DEPLOYMENT_NAME=text-embedding-ada-002
CHAT_MODEL_DEPLOYMENT_NAME=gpt-35-turbo
PROMPT_TOKEN_LIMIT=3000
MAX_COMPLETION_TOKENS=256
VERBOSE=false
CHUNK_SIZE=1024
CHUNK_OVERLAP=64
```
Note: CHAT_MODEL_DEPLOYMENT_NAME should point to a chat model like gpt-3.5-turbo or gpt-4
### Run the command line
```shell
python main.py <url-to-pdf-file>
``` | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/__init__.py | import sys
import os
sys.path.append(os.path.dirname(os.path.abspath(__file__)))
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/utils/index.py | import os
from typing import Iterable, List, Optional
from dataclasses import dataclass
from faiss import Index
import faiss
import pickle
import numpy as np
from .oai import OAIEmbedding as Embedding
@dataclass
class SearchResultEntity:
text: str = None
vector: List[float] = None
score: float = None
original_entity: dict = None
metadata: dict = None
INDEX_FILE_NAME = "index.faiss"
DATA_FILE_NAME = "index.pkl"
class FAISSIndex:
def __init__(self, index: Index, embedding: Embedding) -> None:
self.index = index
self.docs = {} # id -> doc, doc is (text, metadata)
self.embedding = embedding
def insert_batch(
self, texts: Iterable[str], metadatas: Optional[List[dict]] = None
) -> None:
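        # Embed each text, add all vectors to the FAISS index in one batch, and record (text, metadata) in self.docs keyed by the vector's position in the index.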
documents = []
vectors = []
for i, text in enumerate(texts):
metadata = metadatas[i] if metadatas else {}
vector = self.embedding.generate(text)
documents.append((text, metadata))
vectors.append(vector)
self.index.add(np.array(vectors, dtype=np.float32))
self.docs.update(
{i: doc for i, doc in enumerate(documents, start=len(self.docs))}
)
def query(self, text: str, top_k: int = 10) -> List[SearchResultEntity]:
vector = self.embedding.generate(text)
scores, indices = self.index.search(np.array([vector], dtype=np.float32), top_k)
docs = []
for j, i in enumerate(indices[0]):
if i == -1: # This happens when not enough docs are returned.
continue
doc = self.docs[i]
docs.append(
SearchResultEntity(text=doc[0], metadata=doc[1], score=scores[0][j])
)
return docs
def save(self, path: str) -> None:
faiss.write_index(self.index, os.path.join(path, INDEX_FILE_NAME))
# dump docs to pickle file
with open(os.path.join(path, DATA_FILE_NAME), "wb") as f:
pickle.dump(self.docs, f)
def load(self, path: str) -> None:
self.index = faiss.read_index(os.path.join(path, INDEX_FILE_NAME))
with open(os.path.join(path, DATA_FILE_NAME), "rb") as f:
self.docs = pickle.load(f)
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/utils/logging.py | import os
def log(message: str):
verbose = os.environ.get("VERBOSE", "false")
if verbose.lower() == "true":
print(message, flush=True)
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/utils/lock.py | import contextlib
import os
import sys
if sys.platform.startswith("win"):
import msvcrt
else:
import fcntl
@contextlib.contextmanager
def acquire_lock(filename):
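    # Cross-platform advisory file lock: fcntl.flock on POSIX, msvcrt.locking on Windows; on Windows the lock file is also removed on a best-effort basis after release.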
if not sys.platform.startswith("win"):
with open(filename, "a+") as f:
fcntl.flock(f, fcntl.LOCK_EX)
yield f
fcntl.flock(f, fcntl.LOCK_UN)
else: # Windows
with open(filename, "w") as f:
msvcrt.locking(f.fileno(), msvcrt.LK_LOCK, 1)
yield f
msvcrt.locking(f.fileno(), msvcrt.LK_UNLCK, 1)
try:
os.remove(filename)
except OSError:
pass # best effort to remove the lock file
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/utils/oai.py | from typing import List
import openai
from openai.version import VERSION as OPENAI_VERSION
import os
import tiktoken
from jinja2 import Template
from .retry import (
retry_and_handle_exceptions,
retry_and_handle_exceptions_for_generator,
)
from .logging import log
def extract_delay_from_rate_limit_error_msg(text):
import re
pattern = r"retry after (\d+)"
match = re.search(pattern, text)
if match:
retry_time_from_message = match.group(1)
return float(retry_time_from_message)
else:
return 5 # default retry time
class OAI:
def __init__(self):
if OPENAI_VERSION.startswith("0."):
raise Exception(
"Please upgrade your OpenAI package to version >= 1.0.0 or "
"using the command: pip install --upgrade openai."
)
init_params = {}
api_type = os.environ.get("OPENAI_API_TYPE")
if os.getenv("OPENAI_API_VERSION") is not None:
init_params["api_version"] = os.environ.get("OPENAI_API_VERSION")
if os.getenv("OPENAI_ORG_ID") is not None:
init_params["organization"] = os.environ.get("OPENAI_ORG_ID")
if os.getenv("OPENAI_API_KEY") is None:
raise ValueError("OPENAI_API_KEY is not set in environment variables")
if os.getenv("OPENAI_API_BASE") is not None:
if api_type == "azure":
init_params["azure_endpoint"] = os.environ.get("OPENAI_API_BASE")
else:
init_params["base_url"] = os.environ.get("OPENAI_API_BASE")
init_params["api_key"] = os.environ.get("OPENAI_API_KEY")
# A few sanity checks
if api_type == "azure":
if init_params.get("azure_endpoint") is None:
raise ValueError(
"OPENAI_API_BASE is not set in environment variables, this is required when api_type==azure"
)
if init_params.get("api_version") is None:
raise ValueError(
"OPENAI_API_VERSION is not set in environment variables, this is required when api_type==azure"
)
if init_params["api_key"].startswith("sk-"):
raise ValueError(
"OPENAI_API_KEY should not start with sk- when api_type==azure, "
"are you using openai key by mistake?"
)
from openai import AzureOpenAI as Client
else:
from openai import OpenAI as Client
self.client = Client(**init_params)
class OAIChat(OAI):
@retry_and_handle_exceptions(
exception_to_check=(
openai.RateLimitError,
openai.APIStatusError,
openai.APIConnectionError,
KeyError,
),
max_retries=5,
extract_delay_from_error_message=extract_delay_from_rate_limit_error_msg,
)
def generate(self, messages: list, **kwargs) -> List[float]:
# The chat API may return a message with no content.
message = self.client.chat.completions.create(
model=os.environ.get("CHAT_MODEL_DEPLOYMENT_NAME"),
messages=messages,
**kwargs,
).choices[0].message
return getattr(message, "content", "")
@retry_and_handle_exceptions_for_generator(
exception_to_check=(
openai.RateLimitError,
openai.APIStatusError,
openai.APIConnectionError,
KeyError,
),
max_retries=5,
extract_delay_from_error_message=extract_delay_from_rate_limit_error_msg,
)
def stream(self, messages: list, **kwargs):
response = self.client.chat.completions.create(
model=os.environ.get("CHAT_MODEL_DEPLOYMENT_NAME"),
messages=messages,
stream=True,
**kwargs,
)
for chunk in response:
if not chunk.choices:
continue
if chunk.choices[0].delta.content:
yield chunk.choices[0].delta.content
else:
yield ""
class OAIEmbedding(OAI):
@retry_and_handle_exceptions(
exception_to_check=openai.RateLimitError,
max_retries=5,
extract_delay_from_error_message=extract_delay_from_rate_limit_error_msg,
)
def generate(self, text: str) -> List[float]:
return self.client.embeddings.create(
input=text, model=os.environ.get("EMBEDDING_MODEL_DEPLOYMENT_NAME")
).data[0].embedding
def count_token(text: str) -> int:
encoding = tiktoken.get_encoding("cl100k_base")
return len(encoding.encode(text))
def render_with_token_limit(template: Template, token_limit: int, **kwargs) -> str:
text = template.render(**kwargs)
token_count = count_token(text)
if token_count > token_limit:
message = f"token count {token_count} exceeds limit {token_limit}"
log(message)
raise ValueError(message)
return text
if __name__ == "__main__":
print(count_token("hello world, this is impressive"))
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/utils/__init__.py | __path__ = __import__("pkgutil").extend_path(__path__, __name__) # type: ignore
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/chat_with_pdf/utils/retry.py | from typing import Tuple, Union, Optional, Type, Callable
import functools
import time
import random
def retry_and_handle_exceptions(
exception_to_check: Union[Type[Exception], Tuple[Type[Exception], ...]],
max_retries: int = 3,
initial_delay: float = 1,
exponential_base: float = 2,
jitter: bool = False,
extract_delay_from_error_message: Optional[Callable] = None,
):
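    # Retry decorator: re-invoke the wrapped function on the listed exceptions, using exponential backoff (optionally jittered); if the error message carries a server-suggested delay ("retry after N"), that delay is used instead.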
def deco_retry(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
delay = initial_delay
for i in range(max_retries):
try:
return func(*args, **kwargs)
except exception_to_check as e:
if i == max_retries - 1:
raise Exception(
"Func execution failed after {0} retries: {1}".format(
max_retries, e
)
)
delay *= exponential_base * (1 + jitter * random.random())
delay_from_error_message = None
if extract_delay_from_error_message is not None:
delay_from_error_message = extract_delay_from_error_message(
str(e)
)
final_delay = (
delay_from_error_message if delay_from_error_message else delay
)
print(
"Func execution failed. Retrying in {0} seconds: {1}".format(
final_delay, e
)
)
time.sleep(final_delay)
return wrapper
return deco_retry
def retry_and_handle_exceptions_for_generator(
exception_to_check: Union[Type[Exception], Tuple[Type[Exception], ...]],
max_retries: int = 3,
initial_delay: float = 1,
exponential_base: float = 2,
jitter: bool = False,
extract_delay_from_error_message: Optional[Callable] = None,
):
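    # Same retry/backoff behavior as above, but for generator functions: the generator is restarted from scratch on each retry attempt.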
def deco_retry(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
delay = initial_delay
for i in range(max_retries):
try:
for value in func(*args, **kwargs):
yield value
break
except exception_to_check as e:
if i == max_retries - 1:
raise Exception(
"Func execution failed after {0} retries: {1}".format(
max_retries, e
)
)
delay *= exponential_base * (1 + jitter * random.random())
delay_from_error_message = None
if extract_delay_from_error_message is not None:
delay_from_error_message = extract_delay_from_error_message(
str(e)
)
final_delay = (
delay_from_error_message if delay_from_error_message else delay
)
print(
"Func execution failed. Retrying in {0} seconds: {1}".format(
final_delay, e
)
)
time.sleep(final_delay)
return wrapper
return deco_retry
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/tests/azure_chat_with_pdf_test.py | import unittest
import promptflow.azure as azure
from azure.identity import DefaultAzureCredential, InteractiveBrowserCredential
from base_test import BaseTest
import os
from promptflow._sdk._errors import InvalidRunStatusError
class TestChatWithPDFAzure(BaseTest):
def setUp(self):
super().setUp()
self.data_path = os.path.join(
self.flow_path, "data/bert-paper-qna-3-line.jsonl"
)
try:
credential = DefaultAzureCredential()
# Check if given credential can get token successfully.
credential.get_token("https://management.azure.com/.default")
except Exception:
# Fall back to InteractiveBrowserCredential in case DefaultAzureCredential does not work
credential = InteractiveBrowserCredential()
self.pf = azure.PFClient.from_config(credential=credential)
def tearDown(self) -> None:
return super().tearDown()
def test_bulk_run_chat_with_pdf(self):
run = self.create_chat_run(display_name="chat_with_pdf_batch_run")
self.pf.stream(run) # wait for completion
self.assertEqual(run.status, "Completed")
details = self.pf.get_details(run)
self.assertEqual(details.shape[0], 3)
def test_eval(self):
run_2k, eval_groundedness_2k, eval_pi_2k = self.run_eval_with_config(
self.config_2k_context,
display_name="chat_with_pdf_2k_context",
)
run_3k, eval_groundedness_3k, eval_pi_3k = self.run_eval_with_config(
self.config_3k_context,
display_name="chat_with_pdf_3k_context",
)
self.check_run_basics(run_2k)
self.check_run_basics(run_3k)
self.check_run_basics(eval_groundedness_2k)
self.check_run_basics(eval_pi_2k)
self.check_run_basics(eval_groundedness_3k)
self.check_run_basics(eval_pi_3k)
def test_bulk_run_valid_mapping(self):
data = os.path.join(self.flow_path, "data/bert-paper-qna-1-line.jsonl")
run = self.create_chat_run(
data=data,
column_mapping={
"question": "${data.question}",
"pdf_url": "${data.pdf_url}",
"chat_history": "${data.chat_history}",
"config": self.config_2k_context,
},
)
self.pf.stream(run) # wait for completion
self.assertEqual(run.status, "Completed")
details = self.pf.get_details(run)
self.assertEqual(details.shape[0], 1)
def test_bulk_run_mapping_missing_one_column(self):
run = self.create_chat_run(
column_mapping={
"question": "${data.question}",
"pdf_url": "${data.pdf_url}",
},
)
self.pf.stream(run) # wait for completion
# The run itself won't fail; only the line runs inside it will fail.
self.assertEqual(run.status, "Completed")
# TODO: get line run results when supported.
def test_bulk_run_invalid_mapping(self):
run = self.create_chat_run(
column_mapping={
"question": "${data.question_not_exist}",
"pdf_url": "${data.pdf_url}",
"chat_history": "${data.chat_history}",
},
stream=False,
)
with self.assertRaises(InvalidRunStatusError):
self.pf.stream(run) # wait for completion
if __name__ == "__main__":
unittest.main()
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/tests/chat_with_pdf_test.py | import os
import unittest
import promptflow
from base_test import BaseTest
from promptflow._sdk._errors import InvalidRunStatusError
class TestChatWithPDF(BaseTest):
def setUp(self):
super().setUp()
self.pf = promptflow.PFClient()
def tearDown(self) -> None:
return super().tearDown()
def test_run_chat_with_pdf(self):
result = self.pf.test(
flow=self.flow_path,
inputs={
"chat_history": [],
"pdf_url": "https://arxiv.org/pdf/1810.04805.pdf",
"question": "BERT stands for?",
"config": self.config_2k_context,
},
)
print(result)
self.assertTrue(
result["answer"].find(
"Bidirectional Encoder Representations from Transformers"
)
!= -1
)
def test_bulk_run_chat_with_pdf(self):
run = self.create_chat_run()
self.pf.stream(run) # wait for completion
self.assertEqual(run.status, "Completed")
details = self.pf.get_details(run)
self.assertEqual(details.shape[0], 3)
def test_eval(self):
run_2k, eval_groundedness_2k, eval_pi_2k = self.run_eval_with_config(
self.config_2k_context,
display_name="chat_with_pdf_2k_context",
)
run_3k, eval_groundedness_3k, eval_pi_3k = self.run_eval_with_config(
self.config_3k_context,
display_name="chat_with_pdf_3k_context",
)
self.check_run_basics(run_2k)
self.check_run_basics(run_3k)
self.check_run_basics(eval_groundedness_2k)
self.check_run_basics(eval_pi_2k)
self.check_run_basics(eval_groundedness_3k)
self.check_run_basics(eval_pi_3k)
def test_bulk_run_valid_mapping(self):
run = self.create_chat_run(
column_mapping={
"question": "${data.question}",
"pdf_url": "${data.pdf_url}",
"chat_history": "${data.chat_history}",
"config": self.config_2k_context,
}
)
self.pf.stream(run) # wait for completion
self.assertEqual(run.status, "Completed")
details = self.pf.get_details(run)
self.assertEqual(details.shape[0], 3)
def test_bulk_run_mapping_missing_one_column(self):
data_path = os.path.join(
self.flow_path, "data/invalid-data-missing-column.jsonl"
)
with self.assertRaises(InvalidRunStatusError):
self.create_chat_run(
column_mapping={
"question": "${data.question}",
},
data=data_path
)
def test_bulk_run_invalid_mapping(self):
with self.assertRaises(InvalidRunStatusError):
self.create_chat_run(
column_mapping={
"question": "${data.question_not_exist}",
"pdf_url": "${data.pdf_url}",
"chat_history": "${data.chat_history}",
}
)
if __name__ == "__main__":
unittest.main()
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/tests/base_test.py | import unittest
import os
import time
import traceback
class BaseTest(unittest.TestCase):
def setUp(self):
root = os.path.join(os.path.dirname(os.path.abspath(__file__)), "../../")
self.flow_path = os.path.join(root, "chat-with-pdf")
self.data_path = os.path.join(
self.flow_path, "data/bert-paper-qna-3-line.jsonl"
)
self.eval_groundedness_flow_path = os.path.join(
root, "../evaluation/eval-groundedness"
)
self.eval_perceived_intelligence_flow_path = os.path.join(
root, "../evaluation/eval-perceived-intelligence"
)
self.all_runs_generated = []
self.config_3k_context = {
"EMBEDDING_MODEL_DEPLOYMENT_NAME": "text-embedding-ada-002",
"CHAT_MODEL_DEPLOYMENT_NAME": "gpt-35-turbo",
"PROMPT_TOKEN_LIMIT": 3000,
"MAX_COMPLETION_TOKENS": 256,
"VERBOSE": True,
"CHUNK_SIZE": 1024,
"CHUNK_OVERLAP": 64,
}
self.config_2k_context = {
"EMBEDDING_MODEL_DEPLOYMENT_NAME": "text-embedding-ada-002",
"CHAT_MODEL_DEPLOYMENT_NAME": "gpt-35-turbo",
"PROMPT_TOKEN_LIMIT": 2000,
"MAX_COMPLETION_TOKENS": 256,
"VERBOSE": True,
"CHUNK_SIZE": 1024,
"CHUNK_OVERLAP": 64,
}
# Switch current working directory to the folder of this file
self.cwd = os.getcwd()
os.chdir(os.path.dirname(os.path.abspath(__file__)))
def tearDown(self):
# Switch back to the original working directory
os.chdir(self.cwd)
for run in self.all_runs_generated:
try:
self.pf.runs.archive(run.name)
except Exception as e:
print(e)
traceback.print_exc()
def create_chat_run(
self,
data=None,
column_mapping=None,
connections=None,
display_name="chat_run",
stream=True,
):
if column_mapping is None:
column_mapping = {
"chat_history": "${data.chat_history}",
"pdf_url": "${data.pdf_url}",
"question": "${data.question}",
"config": self.config_2k_context,
}
data = self.data_path if data is None else data
run = self.pf.run(
flow=self.flow_path,
data=data,
column_mapping=column_mapping,
connections=connections,
display_name=display_name,
tags={"unittest": "true"},
stream=stream,
)
self.all_runs_generated.append(run)
self.check_run_basics(run, display_name)
return run
def create_eval_run(
self,
eval_flow_path,
base_run,
column_mapping,
connections=None,
display_name_postfix="",
):
display_name = eval_flow_path.split("/")[-1] + display_name_postfix
eval = self.pf.run(
flow=eval_flow_path,
run=base_run,
column_mapping=column_mapping,
connections=connections,
display_name=display_name,
tags={"unittest": "true"},
stream=True,
)
self.all_runs_generated.append(eval)
self.check_run_basics(eval, display_name)
return eval
def check_run_basics(self, run, display_name=None):
self.assertTrue(run is not None)
if display_name is not None:
self.assertTrue(run.display_name.find(display_name) != -1)
self.assertEqual(run.tags["unittest"], "true")
def run_eval_with_config(self, config: dict, display_name: str = None):
run = self.create_chat_run(
column_mapping={
"question": "${data.question}",
"pdf_url": "${data.pdf_url}",
"chat_history": "${data.chat_history}",
"config": config,
},
display_name=display_name,
)
self.pf.stream(run) # wait for completion
self.check_run_basics(run)
eval_groundedness = self.create_eval_run(
self.eval_groundedness_flow_path,
run,
{
"question": "${run.inputs.question}",
"answer": "${run.outputs.answer}",
"context": "${run.outputs.context}",
},
display_name_postfix="_" + display_name,
)
self.pf.stream(eval_groundedness) # wait for completion
self.check_run_basics(eval_groundedness)
details = self.pf.get_details(eval_groundedness)
self.assertGreater(details.shape[0], 2)
metrics, elapsed = self.wait_for_metrics(eval_groundedness)
self.assertGreaterEqual(metrics["groundedness"], 0.0)
self.assertLessEqual(elapsed, 5) # metrics should be available within 5 seconds
eval_pi = self.create_eval_run(
self.eval_perceived_intelligence_flow_path,
run,
{
"question": "${run.inputs.question}",
"answer": "${run.outputs.answer}",
"context": "${run.outputs.context}",
},
display_name_postfix="_" + display_name,
)
self.pf.stream(eval_pi) # wait for completion
self.check_run_basics(eval_pi)
details = self.pf.get_details(eval_pi)
self.assertGreater(details.shape[0], 2)
metrics, elapsed = self.wait_for_metrics(eval_pi)
self.assertGreaterEqual(metrics["perceived_intelligence_score"], 0.0)
self.assertLessEqual(elapsed, 5) # metrics should be available within 5 seconds
return run, eval_groundedness, eval_pi
def wait_for_metrics(self, run):
start = time.time()
metrics = self.pf.get_metrics(run)
cnt = 3
while len(metrics) == 0 and cnt > 0:
time.sleep(5)
metrics = self.pf.get_metrics(run)
cnt -= 1
end = time.time()
return metrics, end - start
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/data/bert-paper-qna.jsonl | {"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "What is the name of the new language representation model introduced in the document?", "answer": "BERT", "context": "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers."}
{"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "What is the main difference between BERT and previous language representation models?", "answer": "BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers.", "context": "Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers."}
{"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "What is the advantage of fine-tuning BERT over using feature-based approaches?", "answer": "Fine-tuning BERT reduces the need for many heavily-engineered taskspecific architectures and transfers all parameters to initialize end-task model parameters.", "context": "We show that pre-trained representations reduce the need for many heavily-engineered taskspecific architectures. BERT is the first finetuning based representation model that achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks, outperforming many task-specific architectures."}
{"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "What are the two unsupervised tasks used to pre-train BERT?", "answer": "Masked LM and next sentence prediction", "context": "In order to train a deep bidirectional representation, we simply mask some percentage of the input tokens at random, and then predict those masked tokens. We refer to this procedure as a \"masked LM\" (MLM), although it is often referred to as a Cloze task in the literature (Taylor, 1953). In addition to the masked language model, we also use a \"next sentence prediction\" task that jointly pretrains text-pair representations."}
{"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "How does BERT handle single sentence and sentence pair inputs?", "answer": "It uses a special classification token ([CLS]) at the beginning of every input sequence and a special separator token ([SEP]) to separate sentences or mark the end of a sequence.", "context": "To make BERT handle a variety of down-stream tasks, our input representation is able to unambiguously represent both a single sentence and a pair of sentences (e.g., h Question, Answeri) in one token sequence. The first token of every sequence is always a special classification token ([CLS]). The final hidden state corresponding to this token is used as the aggregate sequence representation for classification tasks. Sentence pairs are packed together into a single sequence. We differentiate the sentences in two ways. First, we separate them with a special token ([SEP]). Second, we add a learned embedding to every token indicating whether it belongs to sentence A or sentence B."}
{"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "What are the three types of embeddings used to construct the input representation for BERT?", "answer": "Token embeddings, segment embeddings and position embeddings", "context": "For a given token, its input representation is constructed by summing the corresponding token, segment, and position embeddings. A visualization of this construction can be seen in Figure 2."}
{"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "What is the size of the vocabulary used by BERT?", "answer": "30,000", "context": "We use WordPiece embeddings (Wu et al., 2016) with a 30,000 token vocabulary."}
{"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "What are the two model sizes reported in the paper for BERT?", "answer": "BERTBASE (L=12, H=768, A=12, Total Parameters=110M) and BERTLARGE (L=24, H=1024, A=16, Total Parameters=340M)", "context": "We primarily report results on two model sizes: BERTBASE (L=12, H=768, A=12, Total Parameters=110M) and BERTLARGE (L=24, H=1024, A=16, Total Parameters=340M)."}
{"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "How does BERT predict the start and end positions of an answer span in SQuAD?", "answer": "It uses two vectors S and E whose dot products with the final hidden vectors of each token denote scores for start and end positions.", "context": "We only introduce a start vector S ∈ R H and an end vector E ∈ R H during fine-tuning. The probability of word i being the start of the answer span is computed as a dot product between Ti and S followed by a softmax over all of the words in the paragraph: Pi = e S·Ti P j e S·Tj . The analogous formula is used for the end of the answer span."}
{"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "What is the main benefit of using a masked language model over a standard left-to-right or right-to-left language model?", "answer": "It enables the representation to fuse the left and the right context, which allows to pretrain a deep bidirectional Transformer.", "context": "Unlike left-to-right language model pre-training, the MLM objective enables the representation to fuse the left and the right context, which allows us to pretrain a deep bidirectional Transformer."}
{"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "How much does GPT4 API cost?", "answer": "I don't know"} | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/data/invalid-data-missing-column.jsonl | {"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf"}
| 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/data/bert-paper-qna-1-line.jsonl | {"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "What is the name of the new language representation model introduced in the document?", "answer": "BERT", "context": "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers."} | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/data/bert-paper-qna-3-line.jsonl | {"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "What is the main difference between BERT and previous language representation models?", "answer": "BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers.", "context": "Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers."}
{"pdf_url":"https://arxiv.org/pdf/1810.04805.pdf", "chat_history":[], "question": "What is the size of the vocabulary used by BERT?", "answer": "30,000", "context": "We use WordPiece embeddings (Wu et al., 2016) with a 30,000 token vocabulary."}
{"pdf_url":"https://grs.pku.edu.cn/docs/2018-03/20180301083100898652.pdf", "chat_history":[], "question": "论文写作中论文引言有什么注意事项?", "answer":"", "context":""} | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf | promptflow_repo/promptflow/examples/flows/chat/chat-with-pdf/.promptflow/flow.tools.json | {
"package": {},
"code": {
"setup_env.py": {
"type": "python",
"inputs": {
"connection": {
"type": [
"AzureOpenAIConnection",
"OpenAIConnection"
]
},
"config": {
"type": [
"object"
]
}
},
"source": "setup_env.py",
"function": "setup_env"
},
"download_tool.py": {
"type": "python",
"inputs": {
"url": {
"type": [
"string"
]
},
"env_ready_signal": {
"type": [
"string"
]
}
},
"source": "download_tool.py",
"function": "download_tool"
},
"build_index_tool.py": {
"type": "python",
"inputs": {
"pdf_path": {
"type": [
"string"
]
}
},
"source": "build_index_tool.py",
"function": "build_index_tool"
},
"find_context_tool.py": {
"type": "python",
"inputs": {
"question": {
"type": [
"string"
]
},
"index_path": {
"type": [
"string"
]
}
},
"source": "find_context_tool.py",
"function": "find_context_tool"
},
"qna_tool.py": {
"type": "python",
"inputs": {
"prompt": {
"type": [
"string"
]
},
"history": {
"type": [
"list"
]
}
},
"source": "qna_tool.py",
"function": "qna_tool"
},
"rewrite_question_tool.py": {
"type": "python",
"inputs": {
"question": {
"type": [
"string"
]
},
"history": {
"type": [
"list"
]
},
"env_ready_signal": {
"type": [
"string"
]
}
},
"source": "rewrite_question_tool.py",
"function": "rewrite_question_tool"
}
}
} | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-math-variant/data.jsonl | {"question": "Compute $\\dbinom{16}{5}$.", "answer": "4368", "raw_answer": "$\\dbinom{16}{5}=\\dfrac{16\\times 15\\times 14\\times 13\\times 12}{5\\times 4\\times 3\\times 2\\times 1}=\\boxed{4368}.$"}
{"question": "Determine the number of ways to arrange the letters of the word PROOF.", "answer": "60", "raw_answer": "There are two O's and five total letters, so the answer is $\\dfrac{5!}{2!} = \\boxed{60}$."}
{"question": "23 people attend a party. Each person shakes hands with at most 22 other people. What is the maximum possible number of handshakes, assuming that any two people can shake hands at most once?", "answer": "253", "raw_answer": "Note that if each person shakes hands with every other person, then the number of handshakes is maximized. There are $\\binom{23}{2} = \\frac{(23)(22)}{2} = (23)(11) = 230+23 = \\boxed{253}$ ways to choose two people to form a handshake."}
{"question": "James has 7 apples. 4 of them are red, and 3 of them are green. If he chooses 2 apples at random, what is the probability that both the apples he chooses are green?", "answer": "1/7", "raw_answer": "There are $\\binom{7}{2}=21$ total ways for James to choose 2 apples from 7, but only $\\binom{3}{2}=3$ ways for him to choose 2 green apples. So, the probability that he chooses 2 green apples is $\\frac{3}{21}=\\boxed{\\frac{1}{7}}$."}
{"question": "We are allowed to remove exactly one integer from the list $$-1,0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,11,$$and then we choose two distinct integers at random from the remaining list. What number should we remove if we wish to maximize the probability that the sum of the two chosen numbers is 10?", "answer": "5", "raw_answer": "For each integer $x$ in the list besides 5, the integer $10-x$ is also in the list. So, for each of these integers, removing $x$ reduces the number of pairs of distinct integers whose sum is 10. However, there is no other integer in list that can be added to 5 to give 10, so removing 5 from the list will not reduce the number of pairs of distinct integers whose sum is 10.\n\nSince removing any integer besides 5 will reduce the number of pairs that add to 10, while removing 5 will leave the number of pairs that add to 10 unchanged, we have the highest probability of having a sum of 10 when we remove $\\boxed{5}$."}
{"question": "The numbers 1 through 25 are written on 25 cards with one number on each card. Sara picks one of the 25 cards at random. What is the probability that the number on her card will be a multiple of 2 or 5? Express your answer as a common fraction.", "answer": "3/5", "raw_answer": "There are $12$ even numbers and $5$ multiples of $5$ in the range $1$ to $25$. However, we have double-counted $10$ and $20$, which are divisible by both $2$ and $5$. So the number of good outcomes is $12+5-2=15$ and the probability is $\\frac{15}{25}=\\boxed{\\frac{3}{5}}$."}
{"question": "A bag has 3 red marbles and 5 white marbles. Two marbles are drawn from the bag and not replaced. What is the probability that the first marble is red and the second marble is white?", "answer": "15/56", "raw_answer": "The probability that the first is red is $\\dfrac38$. Now with 7 remaining, the probability that the second is white is $\\dfrac57$. The answer is $\\dfrac38 \\times \\dfrac57 = \\boxed{\\dfrac{15}{56}}$."}
{"question": "Find the largest prime divisor of 11! + 12!", "answer": "13", "raw_answer": "Since $12! = 12 \\cdot 11!$, we can examine the sum better by factoring $11!$ out of both parts: $$ 11! + 12! = 11! + 12 \\cdot 11! = 11!(1 + 12) = 11! \\cdot 13. $$Since no prime greater than 11 divides $11!$, $\\boxed{13}$ is the largest prime factor of $11! + 12!$."}
{"question": "These two spinners are divided into thirds and quarters, respectively. If each of these spinners is spun once, what is the probability that the product of the results of the two spins will be an even number? Express your answer as a common fraction.\n\n[asy]\n\nsize(5cm,5cm);\n\ndraw(Circle((0,0),1));\n\ndraw(Circle((3,0),1));\n\ndraw((0,0)--(0,1));\n\ndraw((0,0)--(-0.9,-0.47));\n\ndraw((0,0)--(0.9,-0.47));\n\ndraw((2,0)--(4,0));\n\ndraw((3,1)--(3,-1));\n\nlabel(\"$3$\",(-0.5,0.3));\n\nlabel(\"$4$\",(0.5,0.3));\n\nlabel(\"$5$\",(0,-0.5));\n\nlabel(\"$5$\",(2.6,-0.4));\n\nlabel(\"$6$\",(2.6,0.4));\n\nlabel(\"$7$\",(3.4,0.4));\n\nlabel(\"$8$\",(3.4,-0.4));\n\ndraw((0,0)--(0.2,0.8),Arrow);\n\ndraw((3,0)--(3.2,0.8),Arrow);\n\n[/asy]", "answer": "2/3", "raw_answer": "We will subtract the probability that the product is odd from 1 to get the probability that the product is even. In order for the product to be odd, we must have both numbers be odd. There are $2\\cdot2=4$ possibilities for this (a 3 or 5 is spun on the left spinner and a 5 or 7 on the right) out of a total of $3\\cdot4=12$ possibilities, so the probability that the product is odd is $4/12=1/3$. The probability that the product is even is $1-1/3=\\boxed{\\frac{2}{3}}$."}
{"question": "No two students in Mrs. Vale's 26-student mathematics class have the same two initials. Each student's first name and last name begin with the same letter. If the letter ``Y'' is considered a vowel, what is the probability of randomly picking a student whose initials are vowels? Express your answer as a common fraction.", "answer": "3/13", "raw_answer": "The students' initials are AA, BB, CC, $\\cdots$, ZZ, representing all 26 letters. The vowels are A, E, I, O, U, and Y, which are 6 letters out of the possible 26. So the probability of picking a student whose initials are vowels is $\\frac{6}{26}=\\boxed{\\frac{3}{13}}$."}
{"question": "What is the expected value of the roll of a standard 6-sided die?", "answer": "3.5", "raw_answer": "Each outcome of rolling a 6-sided die has probability $\\frac16$, and the possible outcomes are 1, 2, 3, 4, 5, and 6. So the expected value is $$ \\frac16(1) + \\frac16(2) + \\frac16(3) + \\frac16(4) + \\frac16(5) + \\frac16(6) = \\frac{21}{6} = \\boxed{3.5}. $$"}
{"question": "How many positive divisors of 30! are prime?", "answer": "10", "raw_answer": "The only prime numbers that divide $30!$ are less than or equal to 30. So 2, 3, 5, 7, 11, 13, 17, 19, 23, 29 are primes that divide $30!$, and there are $\\boxed{10}$ of these."}
{"question": "Marius is entering a wildlife photo contest, and wishes to arrange his seven snow leopards of different heights in a row. If the shortest two leopards have inferiority complexes and demand to be placed at the ends of the row, how many ways can he line up the leopards?", "answer": "240", "raw_answer": "There are two ways to arrange the shortest two leopards. For the five remaining leopards, there are $5!$ ways to arrange them.\n\nTherefore, the answer is $2\\times5!=\\boxed{240\\text{ ways.}}$"}
{"question": "My school's math club has 6 boys and 8 girls. I need to select a team to send to the state math competition. We want 6 people on the team. In how many ways can I select the team without restrictions?", "answer": "3003", "raw_answer": "With no restrictions, we are merely picking 6 students out of 14. This is $\\binom{14}{6} = \\boxed{3003}$."}
{"question": "Nathan will roll two six-sided dice. What is the probability that he will roll a number less than three on the first die and a number greater than three on the second die? Express your answer as a common fraction.", "answer": "1/6", "raw_answer": "For the first die to be less than three, it must be a 1 or a 2, which occurs with probability $\\frac{1}{3}$. For the second die to be greater than 3, it must be a 4 or a 5 or a 6, which occurs with probability $\\frac{1}{2}$. The probability of both of these events occuring, as they are independent, is $\\frac{1}{3} \\cdot \\frac{1}{2} = \\boxed{\\frac{1}{6}}$."}
{"question": "A Senate committee has 8 Republicans and 6 Democrats. In how many ways can we form a subcommittee with 3 Republicans and 2 Democrats?", "answer": "840", "raw_answer": "There are 8 Republicans and 3 spots for them, so there are $\\binom{8}{3} = 56$ ways to choose the Republicans. There are 6 Democrats and 2 spots for them, so there are $\\binom{6}{2} = 15$ ways to choose the Democrats. So there are $56 \\times 15 = \\boxed{840}$ ways to choose the subcommittee."}
{"question": "How many different positive, four-digit integers can be formed using the digits 2, 2, 9 and 9?", "answer": "6", "raw_answer": "We could go ahead and count these directly, but instead we could count in general and then correct for overcounting. That is, if we had 4 distinct digits, there would be $4! = 24$ orderings. However, we must divide by 2! once for the repetition of the digit 2, and divide by 2! for the repetition of the digit 9 (this should make sense because if the repeated digit were different we would have twice as many orderings). So, our answer is $\\frac{4!}{2!\\cdot 2!} = 2 \\cdot 3 = \\boxed{6}$."}
{"question": "I won a trip for four to the Super Bowl. I can bring three of my friends. I have 8 friends. In how many ways can I form my Super Bowl party?", "answer": "56", "raw_answer": "Order does not matter, so it is a combination. Choosing $3$ out of $8$ is $\\binom{8}{3}=\\boxed{56}.$"}
{"question": "Determine the number of ways to arrange the letters of the word MADAM.", "answer": "30", "raw_answer": "First we count the arrangements if all the letters are unique, which is $5!$. Then since the M's and the A's are not unique, we divide by $2!$ twice for the arrangements of M's and the arrangements of A's, for an answer of $\\dfrac{5!}{2! \\times 2!} = \\boxed{30}$."}
{"question": "A palindrome is a number that reads the same forwards and backwards, such as 3003. How many positive four-digit integers are palindromes?", "answer": "90", "raw_answer": "Constructing palindromes requires that we choose the thousands digit (which defines the units digit) and the hundreds digit (which defines the tens digit). Since there are 9 choices for the thousands digit, and 10 choices for the hundreds digit, creating $9 \\cdot 10 = \\boxed{90}$ palindromes."} | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-math-variant/requirements.txt | promptflow
promptflow-tools | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-math-variant/chat_variant_2.jinja2 | system:
You are an assistant to calculate the answer to the provided math problems.
Please think step by step.
Return the final numerical answer and any accompanying reasoning or explanation separately, in JSON format.
user:
A jar contains two red marbles, three green marbles, ten white marbles and no other marbles. Two marbles are randomly drawn from this jar without replacement. What is the probability that these two marbles drawn will both be red? Express your answer as a common fraction.
assistant:
{"Chain of thought": "The total number of marbles is $2+3+10=15$. The probability that the first marble drawn will be red is $2/15$. Then, there will be one red left, out of 14. Therefore, the probability of drawing out two red marbles will be: $$\\frac{2}{15}\\cdot\\frac{1}{14}=\\boxed{\\frac{1}{105}}$$.", "answer": "1/105"}
user:
Find the greatest common divisor of $7!$ and $(5!)^2.$
assistant:
{"Chain of thought": "$$ \\begin{array} 7! &=& 7 \\cdot 6 \\cdot 5 \\cdot 4 \\cdot 3 \\cdot 2 \\cdot 1 &=& 2^4 \\cdot 3^2 \\cdot 5^1 \\cdot 7^1 \\\\ (5!)^2 &=& (5 \\cdot 4 \\cdot 3 \\cdot 2 \\cdot 1)^2 &=& 2^6 \\cdot 3^2 \\cdot 5^2 \\\\ \\text{gcd}(7!, (5!)^2) &=& 2^4 \\cdot 3^2 \\cdot 5^1 &=& \\boxed{720} \\end{array} $$.", "answer": "720"}
user:
A club has 10 members, 5 boys and 5 girls. Two of the members are chosen at random. What is the probability that they are both girls?
assistant:
{"Chain of thought": "There are $\\binomial{10}{2} = 45$ ways to choose two members of the group, and there are $\\binomial{5}{2} = 10$ ways to choose two girls. Therefore, the probability that two members chosen at random are girls is $\\dfrac{10}{45} = \\boxed{\\dfrac{2}{9}}$.", "answer": "2/9"}
user:
Allison, Brian and Noah each have a 6-sided cube. All of the faces on Allison's cube have a 5. The faces on Brian's cube are numbered 1, 2, 3, 4, 5 and 6. Three of the faces on Noah's cube have a 2 and three of the faces have a 6. All three cubes are rolled. What is the probability that Allison's roll is greater than each of Brian's and Noah's? Express your answer as a common fraction.
assistant:
{"Chain of thought": "Since Allison will always roll a 5, we must calculate the probability that both Brian and Noah roll a 4 or lower. The probability of Brian rolling a 4 or lower is $\\frac{4}{6} = \\frac{2}{3}$ since Brian has a standard die. Noah, however, has a $\\frac{3}{6} = \\frac{1}{2}$ probability of rolling a 4 or lower, since the only way he can do so is by rolling one of his 3 sides that have a 2. So, the probability of both of these independent events occurring is $\\frac{2}{3} \\cdot \\frac{1}{2} = \\boxed{\\frac{1}{3}}$.", "answer": "1/3"}
user:
Compute $\\dbinom{50}{2}$.
assistant:
{"Chain of thought": "$\\density binomial{50}{2} = \\dfrac{50!}{2!48!}=\\dfrac{50\\times 49}{2\\times 1}=\\boxed{1225}.$", "answer": "1225"}
user:
The set $S = \\{1, 2, 3, \\ldots , 49, 50\\}$ contains the first $50$ positive integers. After the multiples of 2 and the multiples of 3 are removed, how many integers remain in the set $S$?
assistant:
{"Chain of thought": "The set $S$ contains $25$ multiples of 2 (that is, even numbers). When these are removed, the set $S$ is left with only the odd integers from 1 to 49. At this point, there are $50-25=25$ integers in $S$. We still need to remove the multiples of 3 from $S$.\n\nSince $S$ only contains odd integers after the multiples of 2 are removed, we must remove the odd multiples of 3 between 1 and 49. These are 3, 9, 15, 21, 27, 33, 39, 45, of which there are 8. Therefore, the number of integers remaining in the set $S$ is $25 - 8 = \\boxed{17}$.", "answer": "17"}
{% for item in chat_history %}
user:
{{item.inputs.question}}
assistant:
{{item.outputs.answer}}
{% endfor %}
user:
{{question}}
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-math-variant/extract_result.py | from promptflow import tool
import json
import re
# The inputs section will change based on the arguments of the tool function, after you save the code
# Adding type to arguments and return value will help the system show the types properly
# Please update the function name/signature per need
@tool
def my_python_tool(input1: str) -> str:
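    # Strip $, backslash and ! characters, then try to parse the LLM output as JSON and return its "answer" field; fall back to the raw string if parsing fails.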
input1 = re.sub(r'[$\\!]', '', input1)
try:
json_answer = json.loads(input1)
answer = json_answer['answer']
except Exception:
answer = input1
return answer
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-math-variant/chat.jinja2 | system:
You are an assistant to calculate the answer to the provided math problems.
Please return the final numerical answer only, without any accompanying reasoning or explanation.
{% for item in chat_history %}
user:
{{item.inputs.question}}
assistant:
{{item.outputs.answer}}
{% endfor %}
user:
{{question}}
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-math-variant/README.md | # Test your prompt variants for chat with math
This is a prompt tuning case with 3 prompt variants for math question answering.
By utilizing this flow, in conjunction with the `evaluation/eval-chat-math` flow, you can quickly grasp the advantages of prompt tuning and experimentation with prompt flow. Here we provide a [video](https://www.youtube.com/watch?v=gcIe6nk2gA4) and a [tutorial](../../../tutorials/flow-fine-tuning-evaluation/promptflow-quality-improvement.md) for you to get started.
Tools used in this flow:
- `llm` tool
- custom `python` Tool
## Prerequisites
Install promptflow sdk and other dependencies in this folder:
```bash
pip install -r requirements.txt
```
## Getting started
### 1 Create connection for LLM tool to use
Go to "Prompt flow" "Connections" tab. Click on "Create" button, select one of LLM tool supported connection types and fill in the configurations.
Currently, there are two connection types supported by LLM tool: "AzureOpenAI" and "OpenAI". If you want to use "AzureOpenAI" connection type, you need to create an Azure OpenAI service first. Please refer to [Azure OpenAI Service](https://azure.microsoft.com/en-us/products/cognitive-services/openai-service/) for more details. If you want to use "OpenAI" connection type, you need to create an OpenAI account first. Please refer to [OpenAI](https://platform.openai.com/) for more details.
```bash
# Override keys with --set to avoid yaml file changes
pf connection create --file ../../../connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base> --name open_ai_connection
```
Note in [flow.dag.yaml](flow.dag.yaml) we are using connection named `open_ai_connection`.
```bash
# show registered connection
pf connection show --name open_ai_connection
```
### 2 Start chatting
```bash
# run chat flow with default question in flow.dag.yaml
pf flow test --flow .
# run chat flow with new question
pf flow test --flow . --inputs question="2+5=?"
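# test the flow against a specific prompt variant (variant ids are defined in flow.dag.yaml; assumes the pf CLI --variant option)
pf flow test --flow . --inputs question="2+5=?" --variant '${chat.variant_1}'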
# start an interactive chat session in CLI
pf flow test --flow . --interactive
# start an interactive chat session in CLI with verbose info
pf flow test --flow . --interactive --verbose | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-math-variant/flow.dag.yaml | $schema: https://azuremlschemas.azureedge.net/promptflow/latest/Flow.schema.json
environment:
python_requirements_txt: requirements.txt
inputs:
chat_history:
type: list
is_chat_history: true
default: []
question:
type: string
is_chat_input: true
default: '1+1=?'
outputs:
answer:
type: string
reference: ${extract_result.output}
is_chat_output: true
nodes:
- name: chat
use_variants: true
- name: extract_result
type: python
source:
type: code
path: extract_result.py
inputs:
input1: ${chat.output}
node_variants:
chat:
default_variant_id: variant_0
variants:
variant_0:
node:
type: llm
source:
type: code
path: chat.jinja2
inputs:
deployment_name: gpt-4
max_tokens: 256
temperature: 0
chat_history: ${inputs.chat_history}
question: ${inputs.question}
model: gpt-4
connection: open_ai_connection
api: chat
variant_1:
node:
type: llm
source:
type: code
path: chat_variant_1.jinja2
inputs:
deployment_name: gpt-4
max_tokens: 256
temperature: 0
chat_history: ${inputs.chat_history}
question: ${inputs.question}
model: gpt-4
connection: open_ai_connection
api: chat
variant_2:
node:
type: llm
source:
type: code
path: chat_variant_2.jinja2
inputs:
deployment_name: gpt-4
max_tokens: 256
temperature: 0
chat_history: ${inputs.chat_history}
question: ${inputs.question}
model: gpt-4
connection: open_ai_connection
api: chat
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-math-variant/chat_variant_1.jinja2 | system:
You are an assistant to calculate the answer to the provided math problems.
Please think step by step.
Return the final numerical answer and any accompanying reasoning or explanation separately, in JSON format.
user:
A jar contains two red marbles, three green marbles, ten white marbles and no other marbles. Two marbles are randomly drawn from this jar without replacement. What is the probability that these two marbles drawn will both be red? Express your answer as a common fraction.
assistant:
{"Chain of thought": "The total number of marbles is $2+3+10=15$. The probability that the first marble drawn will be red is $2/15$. Then, there will be one red left, out of 14. Therefore, the probability of drawing out two red marbles will be: $$\\frac{2}{15}\\cdot\\frac{1}{14}=\\boxed{\\frac{1}{105}}$$.", "answer": "1/105"}
user:
Find the greatest common divisor of $7!$ and $(5!)^2.$
assistant:
{"Chain of thought": "$$ \\begin{array} 7! &=& 7 \\cdot 6 \\cdot 5 \\cdot 4 \\cdot 3 \\cdot 2 \\cdot 1 &=& 2^4 \\cdot 3^2 \\cdot 5^1 \\cdot 7^1 \\\\ (5!)^2 &=& (5 \\cdot 4 \\cdot 3 \\cdot 2 \\cdot 1)^2 &=& 2^6 \\cdot 3^2 \\cdot 5^2 \\\\ \\text{gcd}(7!, (5!)^2) &=& 2^4 \\cdot 3^2 \\cdot 5^1 &=& \\boxed{720} \\end{array} $$.", "answer": "720"}
{% for item in chat_history %}
user:
{{item.inputs.question}}
assistant:
{{item.outputs.answer}}
{% endfor %}
user:
{{question}} | 0 |
promptflow_repo/promptflow/examples/flows/chat/chat-math-variant | promptflow_repo/promptflow/examples/flows/chat/chat-math-variant/.promptflow/flow.tools.json | {
"package": {},
"code": {
"chat.jinja2": {
"type": "llm",
"inputs": {
"chat_history": {
"type": [
"string"
]
},
"question": {
"type": [
"string"
]
}
},
"source": "chat.jinja2"
},
"chat_variant_1.jinja2": {
"type": "llm",
"inputs": {
"chat_history": {
"type": [
"string"
]
},
"question": {
"type": [
"string"
]
}
},
"source": "chat_variant_1.jinja2"
},
"chat_variant_2.jinja2": {
"type": "llm",
"inputs": {
"chat_history": {
"type": [
"string"
]
},
"question": {
"type": [
"string"
]
}
},
"source": "chat_variant_2.jinja2"
},
"extract_result.py": {
"type": "python",
"inputs": {
"input1": {
"type": [
"string"
]
}
},
"source": "extract_result.py",
"function": "my_python_tool"
}
}
} | 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-wikipedia/process_search_result.py | from promptflow import tool
@tool
def process_search_result(search_result):
def format(doc: dict):
return f"Content: {doc['Content']}\nSource: {doc['Source']}"
try:
context = []
for url, content in search_result:
context.append({"Content": content, "Source": url})
context_str = "\n\n".join([format(c) for c in context])
return context_str
except Exception as e:
print(f"Error: {e}")
return ""
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-wikipedia/data.jsonl | {"chat_history":[{"inputs":{"question":"What is ChatGPT?"},"outputs":{"answer":"ChatGPT is a chatbot product developed by OpenAI. It is powered by the Generative Pre-trained Transformer (GPT) series of language models, with GPT-4 being the latest version. ChatGPT uses natural language processing to generate responses to user inputs in a conversational manner. It was released as ChatGPT Plus, a premium version, which provides enhanced features and access to the GPT-4 based version of OpenAI's API. ChatGPT allows users to interact and have conversations with the language model, utilizing both text and image inputs. It is designed to be more reliable, creative, and capable of handling nuanced instructions compared to previous versions. However, it is important to note that while GPT-4 improves upon its predecessors, it still retains some of the same limitations and challenges."}}],"question":"What is the difference between this model and previous neural network?"}
| 0 |
promptflow_repo/promptflow/examples/flows/chat | promptflow_repo/promptflow/examples/flows/chat/chat-with-wikipedia/get_wiki_url.py | import re
import bs4
import requests
from promptflow import tool
def decode_str(string):
return string.encode().decode("unicode-escape").encode("latin1").decode("utf-8")
def remove_nested_parentheses(string):
pattern = r"\([^()]+\)"
while re.search(pattern, string):
string = re.sub(pattern, "", string)
return string
@tool
def get_wiki_url(entity: str, count=2):
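    # Return up to `count` Wikipedia URLs for the entity, falling back to the closest search-result titles when there is no exact page match.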
# Send a request to the URL
url = f"https://en.wikipedia.org/w/index.php?search={entity}"
url_list = []
try:
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) "
"Chrome/113.0.0.0 Safari/537.36 Edg/113.0.1774.35"
}
response = requests.get(url, headers=headers)
if response.status_code == 200:
# Parse the HTML content using BeautifulSoup
soup = bs4.BeautifulSoup(response.text, "html.parser")
mw_divs = soup.find_all("div", {"class": "mw-search-result-heading"})
if mw_divs: # mismatch
result_titles = [decode_str(div.get_text().strip()) for div in mw_divs]
result_titles = [remove_nested_parentheses(result_title) for result_title in result_titles]
print(f"Could not find {entity}. Similar entity: {result_titles[:count]}.")
url_list.extend(
[f"https://en.wikipedia.org/w/index.php?search={result_title}" for result_title in result_titles]
)
else:
page_content = [p_ul.get_text().strip() for p_ul in soup.find_all("p") + soup.find_all("ul")]
if any("may refer to:" in p for p in page_content):
url_list.extend(get_wiki_url("[" + entity + "]"))
else:
url_list.append(url)
else:
msg = (
f"Get url failed with status code {response.status_code}.\nURL: {url}\nResponse: "
f"{response.text[:100]}"
)
print(msg)
return url_list[:count]
except Exception as e:
print("Get url failed with error: {}".format(e))
return url_list
| 0 |