# Run prompt flow in Azure AI
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../../../how-to-guides/faq.md#stable-vs-experimental).
:::
This guide assumes you have already learned how to create and run a flow by following [Quick start](../../../how-to-guides/quick-start.md). It walks you through the main process of submitting a prompt flow run to [Azure AI](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/overview-what-is-prompt-flow?view=azureml-api-2).
Benefits of using Azure AI compared to running locally:
- **Designed for team collaboration**: The portal UI is a better fit for sharing and presenting your flows and runs, and the workspace helps organize team-shared resources such as connections.
- **Enterprise Readiness Solutions**: prompt flow leverages Azure AI's robust enterprise readiness solutions, providing a secure, scalable, and reliable foundation for the development, experimentation, and deployment of flows.
## Prerequisites
1. An Azure account with an active subscription - [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
2. An Azure AI ML workspace - [Create workspace resources you need to get started with Azure AI](https://learn.microsoft.com/en-us/azure/machine-learning/quickstart-create-resources).
3. A Python environment; Python 3.9 or a later version such as 3.10 is recommended.
4. Install `promptflow` with extra dependencies and `promptflow-tools`.
```sh
pip install "promptflow[azure]" promptflow-tools
```
5. Clone the sample repo and explore the flows in the [examples/flows](https://github.com/microsoft/promptflow/tree/main/examples/flows) folder.
```sh
git clone https://github.com/microsoft/promptflow.git
```
## Create necessary connections
A connection helps securely store and manage the secret keys or other sensitive credentials required for interacting with LLMs and other external tools, for example Azure Content Safety.
In this guide, we will use the flow `web-classification`, which uses the connection `open_ai_connection` inside. We need to set up the connection if we haven't added it before.
Please go to the workspace portal, click `Prompt flow` -> `Connections` -> `Create`, then follow the instructions to create your own connections. Learn more about [connections](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/concept-connections?view=azureml-api-2).
## Submit a run to workspace
Assume you are in the working directory `<path-to-the-sample-repo>/examples/flows/standard/`.
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Use `az login` to log in so that prompt flow can get your credentials.
```sh
az login
```
Submit a run to workspace.
```sh
pfazure run create --subscription <my_sub> -g <my_resource_group> -w <my_workspace> --flow web-classification --data web-classification/data.jsonl --stream
```
**Default subscription/resource-group/workspace**
Note `--subscription`, `-g` and `-w` can be omitted if you have installed the [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli) and [set the default configurations](https://learn.microsoft.com/en-us/cli/azure/azure-cli-configuration).
```sh
az account set --subscription <my-sub>
az configure --defaults group=<my_resource_group> workspace=<my_workspace>
```
**Serverless runtime and named runtime**
Runtimes serve as computing resources so that the flow can be executed in the workspace. The command above does not specify a runtime, which means it will run in serverless mode. In this mode, the workspace automatically creates a runtime, which you can use as the default runtime for any later flow run.
Instead, you can also [create a runtime](https://learn.microsoft.com/en-us/azure/machine-learning/prompt-flow/how-to-create-manage-runtime?view=azureml-api-2) and use it with `--runtime <my-runtime>`:
```sh
pfazure run create --flow web-classification --data web-classification/data.jsonl --stream --runtime <my-runtime>
```
**Specify run name and view a run**
You can also name the run by specifying `--name my_first_cloud_run` in the run create command; otherwise the run name is generated in a pattern that includes a timestamp.
With a run name, you can easily stream or view the run details using the commands below:
```sh
pfazure run stream -n my_first_cloud_run # same as "--stream" in command "run create"
pfazure run show-details -n my_first_cloud_run
pfazure run visualize -n my_first_cloud_run
```
More details can be found in [CLI reference: pfazure](../../../reference/pfazure-command-reference.md)
:::
:::{tab-item} SDK
:sync: SDK
1. Import the required libraries
```python
from azure.identity import DefaultAzureCredential, InteractiveBrowserCredential
# Azure version of promptflow APIs
from promptflow.azure import PFClient
```
2. Get credential
```python
try:
    credential = DefaultAzureCredential()
    # Check if the given credential can get a token successfully.
    credential.get_token("https://management.azure.com/.default")
except Exception:
    # Fall back to InteractiveBrowserCredential in case DefaultAzureCredential does not work
    credential = InteractiveBrowserCredential()
```
3. Get a handle to the workspace
```python
# Get a handle to workspace
pf = PFClient(
    credential=credential,
    subscription_id="<SUBSCRIPTION_ID>",  # this will look like xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<AML_WORKSPACE_NAME>",
)
```
4. Submit the flow run
```python
# load flow
flow = "web-classification"
data = "web-classification/data.jsonl"
runtime = "example-runtime-ci"  # assume you have an existing runtime with this name provisioned
# runtime = None  # un-comment to use automatic runtime

# create run
base_run = pf.run(
    flow=flow,
    data=data,
    runtime=runtime,
)
pf.stream(base_run)
```
5. View the run info
```python
details = pf.get_details(base_run)
details.head(10)
pf.visualize(base_run)
```
:::
::::
## View the run in workspace
At the end of the stream logs, you can find the `portal_url` of the submitted run; click it to view the run in the workspace.
![c_0](../../../media/cloud/azureml/local-to-cloud-run-webview.png)
### Run snapshot of the flow with additional includes
Flows with [additional includes](../../../how-to-guides/develop-a-flow/referencing-external-files-or-folders-in-a-flow.md) enabled can also be submitted for execution in the workspace. Please note that the additional include files or folders will be uploaded and organized within the **Files** folder of the run snapshot in the cloud.
![img](../../../media/cloud/azureml/run-with-additional-includes.png)
## Next steps
Learn more about:
- [CLI reference: pfazure](../../../reference/pfazure-command-reference.md)
```{toctree}
:maxdepth: 1
:hidden:
create-run-with-automatic-runtime
```
# Deploy a flow using Kubernetes
:::{admonition} Experimental feature
This is an experimental feature, and may change at any time. Learn [more](../faq.md#stable-vs-experimental).
:::
There are four steps to deploy a flow using Kubernetes:
1. Build the flow as docker format.
2. Build the docker image.
3. Create Kubernetes deployment yaml.
4. Apply the deployment.
## Build a flow as docker format
::::{tab-set}
:::{tab-item} CLI
:sync: CLI
Note that all dependent connections must be created before building as docker.
```bash
# create connection if not created before
pf connection create --file ../../../examples/connections/azure_openai.yml --set api_key=<your_api_key> api_base=<your_api_base> --name open_ai_connection
```
Use the command below to build a flow as docker format:
```bash
pf flow build --source <path-to-your-flow-folder> --output <your-output-dir> --format docker
```
:::
:::{tab-item} VS Code Extension
:sync: VSC
Click the button below to build a flow as docker format:
![img](../../media/how-to-guides/vscode_export_as_docker.png)
:::
::::
Note that all dependent connections must be created before exporting as docker.
### Docker format folder structure
Exported Dockerfile & its dependencies are located in the same folder. The structure is as below:
- flow: the folder contains all the flow files
- ...
- connections: the folder contains yaml files to create all related connections
- ...
- Dockerfile: the dockerfile to build the image
- start.sh: the script used in `CMD` of `Dockerfile` to start the service
- runit: the folder contains all the runit scripts
- ...
- settings.json: a json file to store the settings of the docker image
- README.md: Simple introduction of the files
## Deploy with Kubernetes
We are going to use the [web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification/) as
an example to show how to deploy with Kubernetes.
Please ensure you have [created the connection](../manage-connections.md#create-a-connection) required by the flow; if not, you can
refer to [Setup connection for web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification).
Additionally, please ensure that you have installed all the required dependencies. You can refer to the "Prerequisites" section in the README of the [web-classification](https://github.com/microsoft/promptflow/tree/main/examples/flows/standard/web-classification/) for a comprehensive list of prerequisites and installation instructions.
### Build Docker image
As with any Dockerfile, you need to build the image first. You can tag the image with any name you want; in this example, we use `web-classification-serve`.
Then run the command below:
```bash
cd <your-output-dir>
docker build . -t web-classification-serve
```
### Create Kubernetes deployment yaml
The Kubernetes deployment yaml file acts as a guide for managing your docker container in a Kubernetes pod. It clearly specifies important information like the container image, port configurations, environment variables, and various settings. Below, you'll find a simple deployment template that you can easily customize to meet your needs.
**Note**: You need to encode the secret using base64 first and use the <encoded_secret> as the value of 'open-ai-connection-api-key' in the deployment configuration. For example, you can run the command below on Linux:
```bash
encoded_secret=$(echo -n <your_api_key> | base64)
```
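If you prefer to do the encoding in Python rather than in the shell, a minimal equivalent sketch looks like this (the helper name `encode_secret` is our own, not part of promptflow):

```python
import base64


def encode_secret(api_key: str) -> str:
    # Base64-encode the key so it can be pasted into the Secret's `data` field
    # in place of <encoded_secret> in the manifest below.
    return base64.b64encode(api_key.encode("utf-8")).decode("ascii")
```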
```yaml
---
kind: Namespace
apiVersion: v1
metadata:
  name: <your-namespace>
---
apiVersion: v1
kind: Secret
metadata:
  name: open-ai-connection-api-key
  namespace: <your-namespace>
type: Opaque
data:
  open-ai-connection-api-key: <encoded_secret>
---
apiVersion: v1
kind: Service
metadata:
  name: web-classification-service
  namespace: <your-namespace>
spec:
  type: NodePort
  ports:
  - name: http
    port: 8080
    targetPort: 8080
    nodePort: 30123
  selector:
    app: web-classification-serve-app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-classification-serve-app
  namespace: <your-namespace>
spec:
  selector:
    matchLabels:
      app: web-classification-serve-app
  template:
    metadata:
      labels:
        app: web-classification-serve-app
    spec:
      containers:
      - name: web-classification-serve-container
        image: <your-docker-image>
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
        env:
        - name: OPEN_AI_CONNECTION_API_KEY
          valueFrom:
            secretKeyRef:
              name: open-ai-connection-api-key
              key: open-ai-connection-api-key
```
### Apply the deployment
Before you can deploy your application, ensure that you have set up a Kubernetes cluster and installed [kubectl](https://kubernetes.io/docs/reference/kubectl/). In this documentation, we will use [Minikube](https://minikube.sigs.k8s.io/docs/) as an example. To start the cluster, execute the following command:
```bash
minikube start
```
Once your Kubernetes cluster is up and running, you can proceed to deploy your application by using the following command:
```bash
kubectl apply -f deployment.yaml
```
This command will create the necessary pods to run your application within the cluster.
**Note**: You need to replace <pod_name> below with your specific pod name. You can retrieve it by running `kubectl get pods -n <your-namespace>`.
### Retrieve flow service logs of the container
The `kubectl logs` command retrieves the logs of a container running within a pod, which is useful for debugging, monitoring, and troubleshooting applications deployed in a Kubernetes cluster.
```bash
kubectl -n <your-namespace> logs <pod-name>
```
#### Connections
If the service involves connections, all related connections will be exported as yaml files and recreated in containers.
Secrets in connections won't be exported directly. Instead, we will export them as a reference to environment variables:
```yaml
$schema: https://azuremlschemas.azureedge.net/promptflow/latest/OpenAIConnection.schema.json
type: open_ai
name: open_ai_connection
module: promptflow.connections
api_key: ${env:OPEN_AI_CONNECTION_API_KEY} # env reference
```
You'll need to set up the environment variables in the container to make the connections work.
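To illustrate how such a `${env:...}` reference could be resolved at startup, here is a minimal sketch; the helper name and regex are ours for illustration, not promptflow's actual implementation:

```python
import os
import re

# Matches references of the form ${env:VAR_NAME}
_ENV_REF = re.compile(r"\$\{env:([A-Za-z_][A-Za-z0-9_]*)\}")


def resolve_env_refs(value: str) -> str:
    # Substitute each ${env:VAR} occurrence with the environment variable's value
    return _ENV_REF.sub(lambda m: os.environ.get(m.group(1), ""), value)
```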
### Test the endpoint
- Option 1:
Once you've started the service, you can establish a connection between a local port and a port on the pod. This allows you to conveniently test the endpoint from your local terminal.
To achieve this, execute the following command:
```bash
kubectl port-forward <pod_name> <local_port>:<container_port> -n <your-namespace>
```
With the port forwarding in place, you can use the curl command to initiate the endpoint test:
```bash
curl http://localhost:<local_port>/score --data '{"url":"https://play.google.com/store/apps/details?id=com.twitter.android"}' -X POST -H "Content-Type: application/json"
```
- Option 2:
`minikube service web-classification-service --url -n <your-namespace>` runs as a process, creating a tunnel to the cluster. The command exposes the service directly to any program running on the host operating system.
The command above retrieves the URL of the service running within the Minikube cluster (e.g. http://<ip>:<assigned_port>), which you can open to interact with the flow service in your web browser. Alternatively, you can use the following command to test the endpoint:
**Note**: Minikube uses its own external port instead of the `nodePort` to listen to the service, so please substitute <assigned_port> with the port obtained above.
```bash
curl http://<ip>:<assigned_port>/score --data '{"url":"https://play.google.com/store/apps/details?id=com.twitter.android"}' -X POST -H "Content-Type: application/json"
```
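If you'd rather script the endpoint test than use curl, a stdlib-only sketch with `urllib` could look like the following (the helper name is ours):

```python
import json
import urllib.request


def build_score_request(base_url: str, payload: dict) -> urllib.request.Request:
    # Build the same POST request the curl command sends to the /score endpoint
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/score",
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Sending it is then `urllib.request.urlopen(build_score_request(url, {"url": "..."}))` once the service is reachable.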
## Next steps
- Try the example [here](https://github.com/microsoft/promptflow/tree/main/examples/tutorials/flow-deploy/kubernetes).
# Using File Path as Tool Input
Users sometimes need to reference local files within a tool to implement specific logic. To simplify this, we've introduced the `FilePath` input type. This input type enables users to either select an existing file or create a new one, then pass it to a tool, allowing the tool to access the file's content.
In this guide, we will provide a detailed walkthrough on how to use `FilePath` as a tool input. We will also demonstrate the user experience when utilizing this type of tool within a flow.
## Prerequisites
- Please install the promptflow package and ensure that its version is 0.1.0b8 or later.
```sh
pip install "promptflow>=0.1.0b8"
```
- Please ensure that your [Prompt flow for VS Code](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow) is updated to version 1.1.0 or later.
## Using File Path as Package Tool Input
### How to create a package tool with file path input
Here we use [an existing tool package](https://github.com/microsoft/promptflow/tree/main/examples/tools/tool-package-quickstart/my_tool_package) as an example. If you want to create your own tool, please refer to [create and use tool package](create-and-use-tool-package.md#create-custom-tool-package).
1. Add a `FilePath` input for your tool, like in [this example](https://github.com/microsoft/promptflow/blob/main/examples/tools/tool-package-quickstart/my_tool_package/tools/tool_with_file_path_input.py).
```python
import importlib
from pathlib import Path

from promptflow import tool
# 1. import the FilePath type
from promptflow.contracts.types import FilePath


# 2. add a FilePath input for your tool method
@tool
def my_tool(input_file: FilePath, input_text: str) -> str:
    # 3. customise your own code to handle and use the input_file here
    new_module = importlib.import_module(Path(input_file).stem)
    return new_module.hello(input_text)
```
2. Define the `FilePath` input format in the tool YAML, as in [this example](https://github.com/microsoft/promptflow/blob/main/examples/tools/tool-package-quickstart/my_tool_package/yamls/tool_with_file_path_input.yaml).
```yaml
my_tool_package.tools.tool_with_file_path_input.my_tool:
  function: my_tool
  inputs:
    # yaml format for FilePath input
    input_file:
      type:
      - file_path
    input_text:
      type:
      - string
  module: my_tool_package.tools.tool_with_file_path_input
  name: Tool with FilePath Input
  description: This is a tool to demonstrate the usage of FilePath input
  type: python
```
> [!Note] The tool YAML file can be generated using a Python script. For further details, please refer to [create custom tool package](create-and-use-tool-package.md#create-custom-tool-package).
### Use tool with a file path input in VS Code extension
Follow steps to [build and install your tool package](create-and-use-tool-package.md#build-and-share-the-tool-package) and [use your tool from VS Code extension](create-and-use-tool-package.md#use-your-tool-from-vscode-extension).
Here we use an existing flow to demonstrate the experience, open [this flow](https://github.com/microsoft/promptflow/blob/main/examples/tools/use-cases/filepath-input-tool-showcase/flow.dag.yaml) in VS Code extension:
- There is a node named "Tool_with_FilePath_Input" with a `file_path` type input called `input_file`.
- Click the picker icon to open the UI for selecting an existing file or creating a new file to use as input.
![use file path in flow](../../media/how-to-guides/develop-a-tool/use_file_path_in_flow.png)
## Using File Path as Script Tool Input
We can also utilize the `FilePath` input type directly in a script tool, eliminating the need to create a package tool.
1. Create an empty flow in the VS Code extension and, on the visual editor page, add a Python node named 'python_node_with_filepath'.
2. Select the link `python_node_with_filepath.py` in the node to modify the python method to include a `FilePath` input as shown below, and save the code change.
```python
import importlib
from pathlib import Path

from promptflow import tool
# 1. import the FilePath type
from promptflow.contracts.types import FilePath


# 2. add a FilePath input for your tool method
@tool
def my_tool(input_file: FilePath, input_text: str) -> str:
    # 3. customise your own code to handle and use the input_file here
    new_module = importlib.import_module(Path(input_file).stem)
    return new_module.hello(input_text)
```
3. Return to the flow's visual editor page and click the picker icon to launch the UI for selecting an existing file or creating a new file to use as input. Here we select [this file](https://github.com/microsoft/promptflow/blob/main/examples/tools/use-cases/filepath-input-tool-showcase/hello_method.py) as an example.
![use file path in script tool](../../media/how-to-guides/develop-a-tool/use_file_path_in_script_tool.png)
## FAQ
### What are some practical use cases for this feature?
The `FilePath` input enables several useful workflows:
1. **Dynamically load modules** - As shown in the demo, you can load a Python module from a specific script file selected by the user. This allows flexible custom logic.
2. **Load arbitrary data files** - The tool can load data from files like .csv, .txt, .json, etc. This provides an easy way to inject external data into a tool.
So in summary, `FilePath` input gives tools flexible access to external files provided by users at runtime. This unlocks many useful scenarios like the ones above.
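As a concrete illustration of use case 2, the body of a tool could dispatch on the file's suffix. The following is a minimal stdlib-only sketch — the helper name is ours, and the promptflow `@tool` decorator and `FilePath` annotation are omitted so the dispatch logic stands alone:

```python
import json
from pathlib import Path


def load_input_file(input_file: str):
    # Inside a tool body, a FilePath input arrives as a path string;
    # parse structured formats and fall back to raw text for everything else.
    path = Path(input_file)
    if path.suffix == ".json":
        return json.loads(path.read_text(encoding="utf-8"))
    return path.read_text(encoding="utf-8")
```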
# Alternative LLMs
This section provides tutorials on incorporating alternative large language models into prompt flow.
```{toctree}
:maxdepth: 1
:hidden:
```
# Python
## Introduction
The Python tool empowers users to provide customized code snippets as self-contained executable nodes in prompt flow.
Users can easily create Python tools, edit code, and verify results.
## Inputs
| Name | Type | Description | Required |
|--------|--------|------------------------------------------------------|---------|
| Code | string | Python code snippet | Yes |
| Inputs | - | List of tool function parameters and its assignments | - |
### Types
| Type | Python example | Description |
|-----------------------------------------------------|---------------------------------|--------------------------------------------|
| int | param: int | Integer type |
| bool | param: bool | Boolean type |
| string | param: str | String type |
| double | param: float | Double type |
| list | param: list or param: List[T] | List type |
| object | param: dict or param: Dict[K, V] | Object type |
| [Connection](../../concepts/concept-connections.md) | param: CustomConnection | Connection type, will be handled specially |
Parameters with `Connection` type annotation will be treated as connection inputs, which means:
- The prompt flow extension will show a selector to select the connection.
- At execution time, prompt flow will try to find the connection whose name matches the parameter value passed in.
Note that `Union[...]` type annotation is supported **ONLY** for connection type,
for example, `param: Union[CustomConnection, OpenAIConnection]`.
## Outputs
The return of the python tool function.
## How to write a Python Tool?
### Guidelines
1. Python Tool Code should consist of a complete Python code, including any necessary module imports.
2. Python Tool Code must contain a function decorated with @tool (tool function), serving as the entry point for execution. The @tool decorator should be applied only once within the snippet.
_The sample below defines the python tool "my_python_tool", decorated with @tool._
3. Python tool function parameters must be assigned in the 'Inputs' section.
_The sample below defines the input "message" and assigns it "world"._
4. A Python tool function must return a value.
_The sample below returns a concatenated string._
### Code
The snippet below shows the basic structure of a tool function. Promptflow will read the function and extract inputs
from function parameters and type annotations.
```python
from promptflow import tool
from promptflow.connections import CustomConnection
# The inputs section will change based on the arguments of the tool function, after you save the code
# Adding type to arguments and return value will help the system show the types properly
# Please update the function name/signature per need
@tool
def my_python_tool(message: str, my_conn: CustomConnection) -> str:
    my_conn_dict = dict(my_conn)
    # Do some function call with my_conn_dict...
    return 'hello ' + message
```
### Inputs
| Name | Type | Sample Value in Flow Yaml | Value passed to function|
|---------|--------|-------------------------| ------------------------|
| message | string | "world" | "world" |
| my_conn | CustomConnection | "my_conn" | CustomConnection object |
Promptflow will try to find the connection named 'my_conn' during execution time.
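Conceptually, this lookup works like a name-to-object registry populated from your created connections. A minimal stdlib sketch — the class and method names are ours, not promptflow internals:

```python
class ConnectionStore:
    """Maps connection names (strings in flow yaml) to connection objects."""

    def __init__(self):
        self._connections = {}

    def add(self, name: str, connection: dict) -> None:
        self._connections[name] = connection

    def resolve(self, name: str) -> dict:
        # Mirrors how the string "my_conn" in the flow yaml is mapped
        # to the actual connection object at execution time.
        if name not in self._connections:
            raise KeyError(f"Connection '{name}' not found")
        return self._connections[name]
```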
### Outputs
```python
"hello world"
```
### Keyword Arguments Support
Starting from version 1.0.0 of PromptFlow and version 1.4.0 of [Prompt flow for VS Code](https://marketplace.visualstudio.com/items?itemName=prompt-flow.prompt-flow),
we have introduced support for keyword arguments (kwargs) in the Python tool.
```python
from promptflow import tool
@tool
def print_test(normal_input: str, **kwargs):
    for key, value in kwargs.items():
        print(f"Key {key}'s value is {value}")
    return len(kwargs)
```
When you add `kwargs` to your Python tool as in the code above, you can insert a variable number of inputs using the `+Add input` button.
![Screenshot of the kwargs On VScode Prompt Flow extension](../../media/reference/tools-reference/python_tool_kwargs.png)
from enum import Enum
from typing import Union
from openai import AzureOpenAI as AzureOpenAIClient, OpenAI as OpenAIClient
from promptflow.tools.common import handle_openai_error, normalize_connection_config
from promptflow.tools.exception import InvalidConnectionType
# Avoid circular dependencies: Use import 'from promptflow._internal' instead of 'from promptflow'
# since the code here is in promptflow namespace as well
from promptflow._internal import tool
from promptflow.connections import AzureOpenAIConnection, OpenAIConnection
class EmbeddingModel(str, Enum):
    TEXT_EMBEDDING_ADA_002 = "text-embedding-ada-002"
    TEXT_SEARCH_ADA_DOC_001 = "text-search-ada-doc-001"
    TEXT_SEARCH_ADA_QUERY_001 = "text-search-ada-query-001"


@tool
@handle_openai_error()
def embedding(connection: Union[AzureOpenAIConnection, OpenAIConnection], input: str, deployment_name: str = "",
              model: EmbeddingModel = EmbeddingModel.TEXT_EMBEDDING_ADA_002):
    if isinstance(connection, AzureOpenAIConnection):
        client = AzureOpenAIClient(**normalize_connection_config(connection))
        return client.embeddings.create(
            input=input,
            model=deployment_name,
            extra_headers={"ms-azure-ai-promptflow-called-from": "aoai-tool"}
        ).data[0].embedding
    elif isinstance(connection, OpenAIConnection):
        client = OpenAIClient(**normalize_connection_config(connection))
        return client.embeddings.create(
            input=input,
            model=model
        ).data[0].embedding
    else:
        error_message = f"Not Support connection type '{type(connection).__name__}' for embedding api. " \
                        f"Connection type should be in [AzureOpenAIConnection, OpenAIConnection]."
        raise InvalidConnectionType(message=error_message)
import os
import re
from io import open
from typing import Any, List, Match, cast
from setuptools import find_namespace_packages, setup
PACKAGE_NAME = "promptflow-tools"
PACKAGE_FOLDER_PATH = "promptflow"
def parse_requirements(file_name: str) -> List[str]:
    with open(file_name) as f:
        return [
            require.strip() for require in f
            if require.strip() and not require.startswith('#')
        ]


# Version extraction inspired from 'requests'
with open(os.path.join(PACKAGE_FOLDER_PATH, "version.txt"), "r") as fd:
    version_content = fd.read()
    print(version_content)
    version = cast(Match[Any], re.search(r'^VERSION\s*=\s*[\'"]([^\'"]*)[\'"]', version_content, re.MULTILINE)).group(1)
if not version:
    raise RuntimeError("Cannot find version information")

with open("README.md", encoding="utf-8") as f:
    readme = f.read()
with open("CHANGELOG.md", encoding="utf-8") as f:
    changelog = f.read()
setup(
    name=PACKAGE_NAME,
    version=version,
    description="Prompt flow built-in tools",
    long_description_content_type="text/markdown",
    long_description=readme + "\n\n" + changelog,
    author="Microsoft Corporation",
    author_email="[email protected]",
    url="https://github.com/microsoft/promptflow",
    classifiers=[
        "Programming Language :: Python",
        "Programming Language :: Python :: 3",
        "Programming Language :: Python :: 3 :: Only",
        "Programming Language :: Python :: 3.8",
        "Programming Language :: Python :: 3.9",
        "Programming Language :: Python :: 3.10",
        "Programming Language :: Python :: 3.11",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
    ],
    python_requires="<4.0,>=3.8",
    install_requires=parse_requirements('requirements.txt'),
    extras_require={
        "azure": [
            # Dependency to list deployment in aoai_gpt4v
            "azure-mgmt-cognitiveservices==13.5.0"
        ]
    },
    packages=find_namespace_packages(include=[f"{PACKAGE_FOLDER_PATH}.*"]),
    entry_points={
        "package_tools": ["builtins = promptflow.tools.list:list_package_tools"],
    },
    include_package_data=True,
    project_urls={
        "Bug Reports": "https://github.com/microsoft/promptflow/issues",
        "Source": "https://github.com/microsoft/promptflow",
    },
)
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import argparse
import json
from promptflow._cli._params import (
    add_param_all_results,
    add_param_archived_only,
    add_param_include_archived,
    add_param_max_results,
    base_params,
)
from promptflow._cli._utils import activate_action, exception_handler
from promptflow._sdk._constants import get_list_view_type
from promptflow._sdk._pf_client import PFClient
from promptflow._sdk.entities._experiment import Experiment
from promptflow._utils.logger_utils import get_cli_sdk_logger
logger = get_cli_sdk_logger()
_client = None
def _get_pf_client():
    global _client
    if _client is None:
        _client = PFClient()
    return _client
def add_param_template(parser):
    parser.add_argument("--template", type=str, required=True, help="The experiment template path.")


def add_param_name(parser):
    parser.add_argument("--name", "-n", type=str, help="The experiment name.")
def add_experiment_create(subparsers):
    epilog = """
Examples:

# Create an experiment from a template:
pf experiment create --template flow.exp.yaml
"""
    add_params = [add_param_template, add_param_name] + base_params
    create_parser = activate_action(
        name="create",
        description=None,
        epilog=epilog,
        add_params=add_params,
        subparsers=subparsers,
        help_message="Create an experiment.",
        action_param_name="sub_action",
    )
    return create_parser
def add_experiment_list(subparsers):
    epilog = """
Examples:

# List all experiments:
pf experiment list
"""
    activate_action(
        name="list",
        description="List all experiments.",
        epilog=epilog,
        add_params=[
            add_param_max_results,
            add_param_all_results,
            add_param_archived_only,
            add_param_include_archived,
        ]
        + base_params,
        subparsers=subparsers,
        help_message="List all experiments.",
        action_param_name="sub_action",
    )
def add_experiment_show(subparsers):
    epilog = """
Examples:

# Get and show an experiment:
pf experiment show -n my_experiment
"""
    activate_action(
        name="show",
        description="Show an experiment for promptflow.",
        epilog=epilog,
        add_params=[add_param_name] + base_params,
        subparsers=subparsers,
        help_message="Show an experiment for promptflow.",
        action_param_name="sub_action",
    )
def add_experiment_start(subparsers):
    epilog = """
Examples:

# Start an experiment:
pf experiment start -n my_experiment
"""
    activate_action(
        name="start",
        description="Start an experiment.",
        epilog=epilog,
        add_params=[add_param_name] + base_params,
        subparsers=subparsers,
        help_message="Start an experiment.",
        action_param_name="sub_action",
    )
def add_experiment_stop(subparsers):
    epilog = """
Examples:

# Stop an experiment:
pf experiment stop -n my_experiment
"""
    activate_action(
        name="stop",
        description="Stop an experiment.",
        epilog=epilog,
        add_params=[add_param_name] + base_params,
        subparsers=subparsers,
        help_message="Stop an experiment.",
        action_param_name="sub_action",
    )
def add_experiment_parser(subparsers):
experiment_parser = subparsers.add_parser(
"experiment",
description="[Experimental] A CLI tool to manage experiment for prompt flow.",
help="[Experimental] pf experiment. This is an experimental feature, and may change at any time.",
)
subparsers = experiment_parser.add_subparsers()
add_experiment_create(subparsers)
add_experiment_list(subparsers)
add_experiment_show(subparsers)
add_experiment_start(subparsers)
add_experiment_stop(subparsers)
experiment_parser.set_defaults(action="experiment")
def dispatch_experiment_commands(args: argparse.Namespace):
if args.sub_action == "create":
create_experiment(args)
elif args.sub_action == "list":
list_experiment(args)
elif args.sub_action == "show":
show_experiment(args)
elif args.sub_action == "start":
start_experiment(args)
elif args.sub_action == "show-status":
pass
elif args.sub_action == "update":
pass
elif args.sub_action == "delete":
pass
elif args.sub_action == "stop":
stop_experiment(args)
elif args.sub_action == "test":
pass
elif args.sub_action == "clone":
pass
@exception_handler("Create experiment")
def create_experiment(args: argparse.Namespace):
from promptflow._sdk._load_functions import _load_experiment_template
template_path = args.template
logger.debug("Loading experiment template from %s", template_path)
template = _load_experiment_template(source=template_path)
logger.debug("Creating experiment from template %s", template.dir_name)
experiment = Experiment.from_template(template, name=args.name)
logger.debug("Creating experiment %s", experiment.name)
exp = _get_pf_client()._experiments.create_or_update(experiment)
print(json.dumps(exp._to_dict(), indent=4))
@exception_handler("List experiment")
def list_experiment(args: argparse.Namespace):
list_view_type = get_list_view_type(archived_only=args.archived_only, include_archived=args.include_archived)
results = _get_pf_client()._experiments.list(args.max_results, list_view_type=list_view_type)
print(json.dumps([result._to_dict() for result in results], indent=4))
@exception_handler("Show experiment")
def show_experiment(args: argparse.Namespace):
result = _get_pf_client()._experiments.get(args.name)
print(json.dumps(result._to_dict(), indent=4))
@exception_handler("Start experiment")
def start_experiment(args: argparse.Namespace):
result = _get_pf_client()._experiments.start(args.name)
print(json.dumps(result._to_dict(), indent=4))
@exception_handler("Stop experiment")
def stop_experiment(args: argparse.Namespace):
result = _get_pf_client()._experiments.stop(args.name)
print(json.dumps(result._to_dict(), indent=4))
| Source: promptflow/src/promptflow/promptflow/_cli/_pf/_experiment.py |
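The CLI module above builds `pf experiment` subcommands with argparse subparsers and then dispatches on a `sub_action` attribute. A minimal, self-contained sketch of that pattern (names simplified; this is not promptflow's exact helper code):

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Top-level parser with an "experiment" command, which itself has subcommands.
    parser = argparse.ArgumentParser(prog="pf")
    subparsers = parser.add_subparsers(dest="action")

    exp = subparsers.add_parser("experiment", help="Manage experiments.")
    exp_sub = exp.add_subparsers(dest="sub_action")

    start = exp_sub.add_parser("start", help="Start an experiment.")
    start.add_argument("--name", "-n", type=str, help="The experiment name.")
    return parser


def dispatch(args: argparse.Namespace) -> str:
    # Mirrors dispatch_experiment_commands: branch on args.sub_action.
    if args.sub_action == "start":
        return f"starting {args.name}"
    raise ValueError(f"unknown sub action: {args.sub_action}")
```

For example, `dispatch(build_parser().parse_args(["experiment", "start", "-n", "my_experiment"]))` returns `"starting my_experiment"`.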
The directory structure in the package tool is as follows:
```
{{ package_name }}
│ setup.py # This file contains metadata about your project, such as the name and version.
│
│ MANIFEST.in # This file is used to determine which files to include in the distribution of the project.
│
└───{{ package_name }}{{" " * (24 - package_name|length)}}# This is the source directory. All of your project’s source code should be placed in this directory.
{{ tool_name }}.py{{ " " * (17 - tool_name|length)}}# The source code of tools. Using the @tool decorator to identify the function as a tool.
utils.py # Utility functions for the package. A method for listing all tools defined in the package is generated in this file.
__init__.py
```
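To illustrate the `@tool` decorator mentioned above, here is a stand-in sketch (this is not promptflow's real decorator, which records richer metadata; it only shows the idea of marking a plain function as a tool so a lister can discover it):

```python
def tool(func):
    # Stand-in for promptflow's @tool: tag the function so a tool lister can find it.
    func.__is_tool__ = True
    return func


@tool
def greet(input_text: str) -> str:
    # A tool is just a plain function marked with the decorator.
    return "Hello " + input_text
```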
Please refer to the [tool doc](https://microsoft.github.io/promptflow/how-to-guides/develop-a-tool/index.html) for more details about how to develop a tool.
| Source: promptflow/src/promptflow/promptflow/_cli/data/package_tool/README.md.jinja2 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import asyncio
import functools
import inspect
import logging
import threading
import time
import uuid
from contextvars import ContextVar
from logging import WARNING
from typing import Callable
from promptflow._core._errors import ToolExecutionError, UnexpectedError
from promptflow._core.cache_manager import AbstractCacheManager, CacheInfo, CacheResult
from promptflow._utils.logger_utils import flow_logger, logger
from promptflow._utils.thread_utils import RepeatLogTimer
from promptflow._utils.utils import generate_elapsed_time_messages
from promptflow.contracts.flow import Node
from promptflow.contracts.run_info import RunInfo
from promptflow.exceptions import PromptflowException
from .run_tracker import RunTracker
from .thread_local_singleton import ThreadLocalSingleton
from .tracer import Tracer
class FlowExecutionContext(ThreadLocalSingleton):
"""The context for a flow execution."""
CONTEXT_VAR_NAME = "Flow"
context_var = ContextVar(CONTEXT_VAR_NAME, default=None)
def __init__(
self,
name,
run_tracker: RunTracker,
cache_manager: AbstractCacheManager = None,
run_id=None,
flow_id=None,
line_number=None,
variant_id=None,
):
self._name = name
self._run_tracker = run_tracker
self._cache_manager = cache_manager or AbstractCacheManager.init_from_env()
self._run_id = run_id or str(uuid.uuid4())
self._flow_id = flow_id or self._run_id
self._line_number = line_number
self._variant_id = variant_id
def copy(self):
return FlowExecutionContext(
name=self._name,
run_tracker=self._run_tracker,
cache_manager=self._cache_manager,
run_id=self._run_id,
flow_id=self._flow_id,
line_number=self._line_number,
variant_id=self._variant_id,
)
def cancel_node_runs(self, msg):
self._run_tracker.cancel_node_runs(msg, self._run_id)
def invoke_tool(self, node: Node, f: Callable, kwargs):
run_info = self._prepare_node_run(node, f, kwargs)
node_run_id = run_info.run_id
traces = []
try:
hit_cache = False
# Get result from cache. If hit cache, no need to execute f.
cache_info: CacheInfo = self._cache_manager.calculate_cache_info(self._flow_id, f, [], kwargs)
if node.enable_cache and cache_info:
cache_result: CacheResult = self._cache_manager.get_cache_result(cache_info)
if cache_result and cache_result.hit_cache:
# Assign cached_flow_run_id and cached_run_id.
run_info.cached_flow_run_id = cache_result.cached_flow_run_id
run_info.cached_run_id = cache_result.cached_run_id
result = cache_result.result
hit_cache = True
if not hit_cache:
Tracer.start_tracing(node_run_id, node.name)
result = self._invoke_tool_with_timer(node, f, kwargs)
traces = Tracer.end_tracing(node_run_id)
self._run_tracker.end_run(node_run_id, result=result, traces=traces)
# Record result in cache so that future run might reuse its result.
if not hit_cache and node.enable_cache:
self._persist_cache(cache_info, run_info)
flow_logger.info(f"Node {node.name} completes.")
return result
except Exception as e:
logger.exception(f"Node {node.name} in line {self._line_number} failed. Exception: {e}.")
if not traces:
traces = Tracer.end_tracing(node_run_id)
self._run_tracker.end_run(node_run_id, ex=e, traces=traces)
raise
finally:
self._run_tracker.persist_node_run(run_info)
def _prepare_node_run(self, node: Node, f, kwargs={}):
node_run_id = self._generate_node_run_id(node)
flow_logger.info(f"Executing node {node.name}. node run id: {node_run_id}")
parent_run_id = f"{self._run_id}_{self._line_number}" if self._line_number is not None else self._run_id
run_info: RunInfo = self._run_tracker.start_node_run(
node=node.name,
flow_run_id=self._run_id,
parent_run_id=parent_run_id,
run_id=node_run_id,
index=self._line_number,
)
run_info.index = self._line_number
run_info.variant_id = self._variant_id
self._run_tracker.set_inputs(node_run_id, {key: value for key, value in kwargs.items() if key != "self"})
return run_info
async def invoke_tool_async(self, node: Node, f: Callable, kwargs):
if not inspect.iscoroutinefunction(f):
raise UnexpectedError(
message_format="Tool '{function}' in node '{node}' is not a coroutine function.",
function=f,
node=node.name,
)
run_info = self._prepare_node_run(node, f, kwargs=kwargs)
node_run_id = run_info.run_id
traces = []
try:
Tracer.start_tracing(node_run_id, node.name)
result = await self._invoke_tool_async_inner(node, f, kwargs)
traces = Tracer.end_tracing(node_run_id)
self._run_tracker.end_run(node_run_id, result=result, traces=traces)
flow_logger.info(f"Node {node.name} completes.")
return result
# User tool should reraise the CancelledError after its own handling logic,
# so that the error can propagate to the scheduler for handling.
# Otherwise, the node would end with Completed status.
except asyncio.CancelledError as e:
logger.info(f"Node {node.name} in line {self._line_number} is canceled.")
traces = Tracer.end_tracing(node_run_id)
self._run_tracker.end_run(node_run_id, ex=e, traces=traces)
raise
except Exception as e:
logger.exception(f"Node {node.name} in line {self._line_number} failed. Exception: {e}.")
traces = Tracer.end_tracing(node_run_id)
self._run_tracker.end_run(node_run_id, ex=e, traces=traces)
raise
finally:
self._run_tracker.persist_node_run(run_info)
async def _invoke_tool_async_inner(self, node: Node, f: Callable, kwargs):
module = f.func.__module__ if isinstance(f, functools.partial) else f.__module__
try:
return await f(**kwargs)
except PromptflowException as e:
# All the exceptions from built-in tools are PromptflowException.
# For these cases, raise the exception directly.
if module is not None:
e.module = module
raise e
except Exception as e:
# Otherwise, we assume the error comes from user's tool.
# For these cases, raise ToolExecutionError, which is classified as UserError
# and shows stack trace in the error message to make it easy for user to troubleshoot.
raise ToolExecutionError(node_name=node.name, module=module) from e
def _invoke_tool_with_timer(self, node: Node, f: Callable, kwargs):
module = f.func.__module__ if isinstance(f, functools.partial) else f.__module__
node_name = node.name
try:
logging_name = node_name
if self._line_number is not None:
logging_name = f"{node_name} in line {self._line_number}"
interval_seconds = 60
start_time = time.perf_counter()
thread_id = threading.current_thread().ident
with RepeatLogTimer(
interval_seconds=interval_seconds,
logger=logger,
level=WARNING,
log_message_function=generate_elapsed_time_messages,
args=(logging_name, start_time, interval_seconds, thread_id),
):
return f(**kwargs)
except PromptflowException as e:
# All the exceptions from built-in tools are PromptflowException.
# For these cases, raise the exception directly.
if module is not None:
e.module = module
raise e
except Exception as e:
# Otherwise, we assume the error comes from user's tool.
# For these cases, raise ToolExecutionError, which is classified as UserError
# and shows stack trace in the error message to make it easy for user to troubleshoot.
raise ToolExecutionError(node_name=node_name, module=module) from e
def bypass_node(self, node: Node):
"""Update teh bypassed node run info."""
node_run_id = self._generate_node_run_id(node)
flow_logger.info(f"Bypassing node {node.name}. node run id: {node_run_id}")
parent_run_id = f"{self._run_id}_{self._line_number}" if self._line_number is not None else self._run_id
run_info = self._run_tracker.bypass_node_run(
node=node.name,
flow_run_id=self._run_id,
parent_run_id=parent_run_id,
run_id=node_run_id,
index=self._line_number,
variant_id=self._variant_id,
)
self._run_tracker.persist_node_run(run_info)
def _persist_cache(self, cache_info: CacheInfo, run_info: RunInfo):
"""Record result in cache storage if hash_id is valid."""
if cache_info and cache_info.hash_id is not None and len(cache_info.hash_id) > 0:
try:
self._cache_manager.persist_result(run_info, cache_info, self._flow_id)
except Exception as ex:
# Not a critical path, swallow the exception.
logging.warning(f"Failed to persist cache result. run_id: {run_info.run_id}. Exception: {ex}")
def _generate_node_run_id(self, node: Node) -> str:
if node.aggregation:
# For a reduce node, the id should be constructed from the flow run id
return f"{self._run_id}_{node.name}_reduce"
if self._line_number is None:
return f"{self._run_id}_{node.name}_{uuid.uuid4()}"
return f"{self._run_id}_{node.name}_{self._line_number}"
| Source: promptflow/src/promptflow/promptflow/_core/flow_execution_context.py |
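`FlowExecutionContext` above inherits from a `ThreadLocalSingleton` backed by a `ContextVar`. A minimal sketch of that activation pattern (simplified; the real base class also handles deactivation and context switching):

```python
from contextvars import ContextVar


class Context:
    # One ContextVar per context class; each logical thread/task sees its own value.
    context_var: ContextVar = ContextVar("Flow", default=None)

    def activate(self) -> None:
        # Make this instance the "active" context for the current logical thread.
        self.context_var.set(self)

    @classmethod
    def active_instance(cls):
        # Returns the active instance, or None if nothing is activated.
        return cls.context_var.get()
```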
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import os
from enum import Enum
from pathlib import Path
LOGGER_NAME = "promptflow"
PROMPT_FLOW_HOME_DIR_ENV_VAR = "PF_HOME_DIRECTORY"
PROMPT_FLOW_DIR_NAME = ".promptflow"
def _prepare_home_dir() -> Path:
"""Prepare prompt flow home directory.
User can configure it by setting environment variable: `PF_HOME_DIRECTORY`;
if not configured, or configured value is not valid, use default value: "~/.promptflow/".
"""
from promptflow._utils.logger_utils import get_cli_sdk_logger
logger = get_cli_sdk_logger()
if PROMPT_FLOW_HOME_DIR_ENV_VAR in os.environ:
logger.debug(
f"environment variable {PROMPT_FLOW_HOME_DIR_ENV_VAR!r} is set, honor it preparing home directory."
)
try:
pf_home_dir = Path(os.getenv(PROMPT_FLOW_HOME_DIR_ENV_VAR)).resolve()
pf_home_dir.mkdir(parents=True, exist_ok=True)
return pf_home_dir
except Exception as e: # pylint: disable=broad-except
_warning_message = (
"Invalid configuration for prompt flow home directory: "
f"{os.getenv(PROMPT_FLOW_HOME_DIR_ENV_VAR)!r}: {str(e)!r}.\n"
'Fall back to use default value: "~/.promptflow/".'
)
logger.warning(_warning_message)
try:
logger.debug("preparing home directory with default value.")
pf_home_dir = (Path.home() / PROMPT_FLOW_DIR_NAME).resolve()
pf_home_dir.mkdir(parents=True, exist_ok=True)
return pf_home_dir
except Exception as e: # pylint: disable=broad-except
# Note: HOME_PROMPT_FLOW_DIR is not defined yet when this error path runs, so build the default path directly.
_error_message = (
f"Cannot create prompt flow home directory: {str(e)!r}.\n"
"Please check if you have proper permission to operate the directory "
f"{(Path.home() / PROMPT_FLOW_DIR_NAME).as_posix()!r}; or configure it via "
f"environment variable {PROMPT_FLOW_HOME_DIR_ENV_VAR!r}.\n"
)
logger.error(_error_message)
raise Exception(_error_message)
HOME_PROMPT_FLOW_DIR = _prepare_home_dir()
DAG_FILE_NAME = "flow.dag.yaml"
NODE_VARIANTS = "node_variants"
VARIANTS = "variants"
NODES = "nodes"
NODE = "node"
INPUTS = "inputs"
USE_VARIANTS = "use_variants"
DEFAULT_VAR_ID = "default_variant_id"
FLOW_TOOLS_JSON = "flow.tools.json"
FLOW_META_JSON = "flow.json"
FLOW_TOOLS_JSON_GEN_TIMEOUT = 60
PROMPT_FLOW_RUNS_DIR_NAME = ".runs"
PROMPT_FLOW_EXP_DIR_NAME = ".exps"
SERVICE_CONFIG_FILE = "pf.yaml"
PF_SERVICE_PORT_FILE = "pfs.port"
PF_SERVICE_LOG_FILE = "pfs.log"
PF_TRACE_CONTEXT = "PF_TRACE_CONTEXT"
LOCAL_MGMT_DB_PATH = (HOME_PROMPT_FLOW_DIR / "pf.sqlite").resolve()
LOCAL_MGMT_DB_SESSION_ACQUIRE_LOCK_PATH = (HOME_PROMPT_FLOW_DIR / "pf.sqlite.lock").resolve()
SCHEMA_INFO_TABLENAME = "schema_info"
RUN_INFO_TABLENAME = "run_info"
RUN_INFO_CREATED_ON_INDEX_NAME = "idx_run_info_created_on"
CONNECTION_TABLE_NAME = "connection"
EXPERIMENT_TABLE_NAME = "experiment"
ORCHESTRATOR_TABLE_NAME = "orchestrator"
EXP_NODE_RUN_TABLE_NAME = "exp_node_run"
EXPERIMENT_CREATED_ON_INDEX_NAME = "idx_experiment_created_on"
BASE_PATH_CONTEXT_KEY = "base_path"
SCHEMA_KEYS_CONTEXT_CONFIG_KEY = "schema_configs_keys"
SCHEMA_KEYS_CONTEXT_SECRET_KEY = "schema_secrets_keys"
PARAMS_OVERRIDE_KEY = "params_override"
FILE_PREFIX = "file:"
KEYRING_SYSTEM = "promptflow"
KEYRING_ENCRYPTION_KEY_NAME = "encryption_key"
KEYRING_ENCRYPTION_LOCK_PATH = (HOME_PROMPT_FLOW_DIR / "encryption_key.lock").resolve()
REFRESH_CONNECTIONS_DIR_LOCK_PATH = (HOME_PROMPT_FLOW_DIR / "refresh_connections_dir.lock").resolve()
# Note: Use this only for show. Reading input should regard all '*' string as scrubbed, no matter the length.
SCRUBBED_VALUE = "******"
SCRUBBED_VALUE_NO_CHANGE = "<no-change>"
SCRUBBED_VALUE_USER_INPUT = "<user-input>"
CHAT_HISTORY = "chat_history"
WORKSPACE_LINKED_DATASTORE_NAME = "workspaceblobstore"
LINE_NUMBER = "line_number"
AZUREML_PF_RUN_PROPERTIES_LINEAGE = "azureml.promptflow.input_run_id"
AZURE_WORKSPACE_REGEX_FORMAT = (
"^azureml:[/]{1,2}subscriptions/([^/]+)/resource(groups|Groups)/([^/]+)"
"(/providers/Microsoft.MachineLearningServices)?/workspaces/([^/]+)$"
)
DEFAULT_ENCODING = "utf-8"
LOCAL_STORAGE_BATCH_SIZE = 1
LOCAL_SERVICE_PORT = 5000
BULK_RUN_ERRORS = "BulkRunErrors"
RUN_MACRO = "${run}"
VARIANT_ID_MACRO = "${variant_id}"
TIMESTAMP_MACRO = "${timestamp}"
DEFAULT_VARIANT = "variant_0"
# run visualize constants
VIS_HTML_TMPL = Path(__file__).parent / "data" / "visualize.j2"
VIS_JS_BUNDLE_FILENAME = "bulkTestDetails.min.js"
VIS_PORTAL_URL_TMPL = (
"https://ml.azure.com/prompts/flow/bulkrun/runs/outputs"
"?wsid=/subscriptions/{subscription_id}/resourceGroups/{resource_group_name}"
"/providers/Microsoft.MachineLearningServices/workspaces/{workspace_name}&runId={names}"
)
REMOTE_URI_PREFIX = "azureml:"
REGISTRY_URI_PREFIX = "azureml://registries/"
FLOW_RESOURCE_ID_PREFIX = "azureml://locations/"
FLOW_DIRECTORY_MACRO_IN_CONFIG = "${flow_directory}"
# Tool meta info
UIONLY_HIDDEN = "uionly_hidden"
SKIP_FUNC_PARAMS = ["subscription_id", "resource_group_name", "workspace_name"]
ICON_DARK = "icon_dark"
ICON_LIGHT = "icon_light"
ICON = "icon"
TOOL_SCHEMA = Path(__file__).parent / "data" / "tool.schema.json"
# trace
TRACE_MGMT_DB_PATH = (HOME_PROMPT_FLOW_DIR / "trace.sqlite").resolve()
TRACE_MGMT_DB_SESSION_ACQUIRE_LOCK_PATH = (HOME_PROMPT_FLOW_DIR / "trace.sqlite.lock").resolve()
SPAN_TABLENAME = "span"
PFS_MODEL_DATETIME_FORMAT = "iso8601"
class CustomStrongTypeConnectionConfigs:
PREFIX = "promptflow.connection."
TYPE = "custom_type"
MODULE = "module"
PACKAGE = "package"
PACKAGE_VERSION = "package_version"
PROMPTFLOW_TYPE_KEY = PREFIX + TYPE
PROMPTFLOW_MODULE_KEY = PREFIX + MODULE
PROMPTFLOW_PACKAGE_KEY = PREFIX + PACKAGE
PROMPTFLOW_PACKAGE_VERSION_KEY = PREFIX + PACKAGE_VERSION
@staticmethod
def is_custom_key(key):
return key not in [
CustomStrongTypeConnectionConfigs.PROMPTFLOW_TYPE_KEY,
CustomStrongTypeConnectionConfigs.PROMPTFLOW_MODULE_KEY,
CustomStrongTypeConnectionConfigs.PROMPTFLOW_PACKAGE_KEY,
CustomStrongTypeConnectionConfigs.PROMPTFLOW_PACKAGE_VERSION_KEY,
]
class RunTypes:
BATCH = "batch"
EVALUATION = "evaluation"
PAIRWISE_EVALUATE = "pairwise_evaluate"
COMMAND = "command"
class AzureRunTypes:
"""Run types for run entity from index service."""
BATCH = "azureml.promptflow.FlowRun"
EVALUATION = "azureml.promptflow.EvaluationRun"
PAIRWISE_EVALUATE = "azureml.promptflow.PairwiseEvaluationRun"
class RestRunTypes:
"""Run types for run entity from MT service."""
BATCH = "FlowRun"
EVALUATION = "EvaluationRun"
PAIRWISE_EVALUATE = "PairwiseEvaluationRun"
# run document statuses
class RunStatus(object):
# Ordered by transition order
QUEUED = "Queued"
NOT_STARTED = "NotStarted"
PREPARING = "Preparing"
PROVISIONING = "Provisioning"
STARTING = "Starting"
RUNNING = "Running"
CANCEL_REQUESTED = "CancelRequested"
CANCELED = "Canceled"
FINALIZING = "Finalizing"
COMPLETED = "Completed"
FAILED = "Failed"
UNAPPROVED = "Unapproved"
NOTRESPONDING = "NotResponding"
PAUSING = "Pausing"
PAUSED = "Paused"
@classmethod
def list(cls):
"""Return the list of supported run statuses."""
return [
cls.QUEUED,
cls.PREPARING,
cls.PROVISIONING,
cls.STARTING,
cls.RUNNING,
cls.CANCEL_REQUESTED,
cls.CANCELED,
cls.FINALIZING,
cls.COMPLETED,
cls.FAILED,
cls.NOT_STARTED,
cls.UNAPPROVED,
cls.NOTRESPONDING,
cls.PAUSING,
cls.PAUSED,
]
@classmethod
def get_running_statuses(cls):
"""Return the list of running statuses."""
return [
cls.NOT_STARTED,
cls.QUEUED,
cls.PREPARING,
cls.PROVISIONING,
cls.STARTING,
cls.RUNNING,
cls.UNAPPROVED,
cls.NOTRESPONDING,
cls.PAUSING,
cls.PAUSED,
]
@classmethod
def get_post_processing_statuses(cls):
"""Return the list of running statuses."""
return [cls.CANCEL_REQUESTED, cls.FINALIZING]
class FlowRunProperties:
FLOW_PATH = "flow_path"
OUTPUT_PATH = "output_path"
NODE_VARIANT = "node_variant"
RUN = "run"
SYSTEM_METRICS = "system_metrics"
# Experiment command node fields only
COMMAND = "command"
OUTPUTS = "outputs"
class CommonYamlFields:
"""Common yaml fields.
Common yaml fields are used to define the common fields in yaml files. It can be one of the following values: type,
name, $schema.
"""
TYPE = "type"
"""Type."""
NAME = "name"
"""Name."""
SCHEMA = "$schema"
"""Schema."""
MAX_LIST_CLI_RESULTS = 50 # general list
MAX_RUN_LIST_RESULTS = 50 # run list
MAX_SHOW_DETAILS_RESULTS = 100 # show details
class CLIListOutputFormat:
JSON = "json"
TABLE = "table"
class LocalStorageFilenames:
SNAPSHOT_FOLDER = "snapshot"
DAG = DAG_FILE_NAME
FLOW_TOOLS_JSON = FLOW_TOOLS_JSON
INPUTS = "inputs.jsonl"
OUTPUTS = "outputs.jsonl"
DETAIL = "detail.json"
METRICS = "metrics.json"
LOG = "logs.txt"
EXCEPTION = "error.json"
META = "meta.json"
class ListViewType(str, Enum):
ACTIVE_ONLY = "ActiveOnly"
ARCHIVED_ONLY = "ArchivedOnly"
ALL = "All"
def get_list_view_type(archived_only: bool, include_archived: bool) -> ListViewType:
if archived_only and include_archived:
raise Exception("Cannot provide both archived-only and include-archived.")
if include_archived:
return ListViewType.ALL
elif archived_only:
return ListViewType.ARCHIVED_ONLY
else:
return ListViewType.ACTIVE_ONLY
class RunInfoSources(str, Enum):
"""Run sources."""
LOCAL = "local"
INDEX_SERVICE = "index_service"
RUN_HISTORY = "run_history"
MT_SERVICE = "mt_service"
EXISTING_RUN = "existing_run"
class ConfigValueType(str, Enum):
STRING = "String"
SECRET = "Secret"
class ConnectionType(str, Enum):
_NOT_SET = "NotSet"
AZURE_OPEN_AI = "AzureOpenAI"
OPEN_AI = "OpenAI"
QDRANT = "Qdrant"
COGNITIVE_SEARCH = "CognitiveSearch"
SERP = "Serp"
AZURE_CONTENT_SAFETY = "AzureContentSafety"
FORM_RECOGNIZER = "FormRecognizer"
WEAVIATE = "Weaviate"
CUSTOM = "Custom"
ALL_CONNECTION_TYPES = set(
map(lambda x: f"{x.value}Connection", filter(lambda x: x != ConnectionType._NOT_SET, ConnectionType))
)
class ConnectionFields(str, Enum):
CONNECTION = "connection"
DEPLOYMENT_NAME = "deployment_name"
MODEL = "model"
SUPPORTED_CONNECTION_FIELDS = {
ConnectionFields.CONNECTION.value,
ConnectionFields.DEPLOYMENT_NAME.value,
ConnectionFields.MODEL.value,
}
class RunDataKeys:
PORTAL_URL = "portal_url"
DATA = "data"
RUN = "run"
OUTPUT = "output"
class RunHistoryKeys:
RunMetaData = "runMetadata"
HIDDEN = "hidden"
class ConnectionProvider(str, Enum):
LOCAL = "local"
AZUREML = "azureml"
class FlowType:
STANDARD = "standard"
EVALUATION = "evaluation"
CHAT = "chat"
@staticmethod
def get_all_values():
values = [value for key, value in vars(FlowType).items() if isinstance(value, str) and key.isupper()]
return values
CLIENT_FLOW_TYPE_2_SERVICE_FLOW_TYPE = {
FlowType.STANDARD: "default",
FlowType.EVALUATION: "evaluation",
FlowType.CHAT: "chat",
}
SERVICE_FLOW_TYPE_2_CLIENT_FLOW_TYPE = {value: key for key, value in CLIENT_FLOW_TYPE_2_SERVICE_FLOW_TYPE.items()}
class AzureFlowSource:
LOCAL = "local"
PF_SERVICE = "pf_service"
INDEX = "index"
class DownloadedRun:
SNAPSHOT_FOLDER = LocalStorageFilenames.SNAPSHOT_FOLDER
METRICS_FILE_NAME = LocalStorageFilenames.METRICS
LOGS_FILE_NAME = LocalStorageFilenames.LOG
RUN_METADATA_FILE_NAME = "run_metadata.json"
class ExperimentNodeType(object):
FLOW = "flow"
CHAT_GROUP = "chat_group"
COMMAND = "command"
class ExperimentStatus(object):
NOT_STARTED = "NotStarted"
QUEUING = "Queuing"
IN_PROGRESS = "InProgress"
TERMINATED = "Terminated"
class ExperimentNodeRunStatus(object):
NOT_STARTED = "NotStarted"
QUEUING = "Queuing"
IN_PROGRESS = "InProgress"
COMPLETED = "Completed"
FAILED = "Failed"
CANCELED = "Canceled"
class ContextAttributeKey:
EXPERIMENT = "experiment"
# Note: referenced id not used for lineage, only for evaluation
REFERENCED_LINE_RUN_ID = "referenced.line_run_id"
REFERENCED_BATCH_RUN_ID = "referenced.batch_run_id"
class EnvironmentVariables:
"""The environment variables."""
PF_USE_AZURE_CLI_CREDENTIAL = "PF_USE_AZURE_CLI_CREDENTIAL"
| Source: promptflow/src/promptflow/promptflow/_sdk/_constants.py |
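The `AZURE_WORKSPACE_REGEX_FORMAT` constant defined in the module above can be exercised as follows (the workspace URI below is a made-up example):

```python
import re

# Copied from the constants module above.
AZURE_WORKSPACE_REGEX_FORMAT = (
    "^azureml:[/]{1,2}subscriptions/([^/]+)/resource(groups|Groups)/([^/]+)"
    "(/providers/Microsoft.MachineLearningServices)?/workspaces/([^/]+)$"
)

uri = (
    "azureml://subscriptions/my-sub/resourceGroups/my-rg"
    "/providers/Microsoft.MachineLearningServices/workspaces/my-ws"
)
match = re.match(AZURE_WORKSPACE_REGEX_FORMAT, uri)
# Groups 1, 3 and 5 carry the subscription, resource group and workspace name.
```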
# Prompt Flow Service
This document describes the usage of the pfs (prompt flow service) CLI.
### Start prompt flow service (optional)
If you don't install pfs as a service, you need to start pfs manually.
The pfs CLI provides a **start** command to start the service. You can also use this command to specify the service port.
```commandline
usage: pfs [-h] [-p PORT]
Start prompt flow service.
optional arguments:
-h, --help show this help message and exit
-p PORT, --port PORT port of the promptflow service
```
If you don't specify a port when starting the service, pfs will first use the port from the configuration file "~/.promptflow/pfs.port".
If no port configuration is found, or the configured port is already in use, pfs will use a random port to start the service.
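The "port is already in use" check can be sketched like this (a minimal illustration of how a service can probe a local port; the real pfs helper may differ in details):

```python
import socket


def is_port_in_use(port: int) -> bool:
    # connect_ex returns 0 when something is listening on localhost:port.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex(("localhost", port)) == 0
```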
### Swagger of service
After starting the service, it provides Swagger UI documentation, served from "http://localhost:your-port/v1.0/swagger.json".
For details, please refer to [swagger.json](./swagger.json).
#### Generate C# client
1. Right-click the project, Add -> Rest API Client... -> Generate with OpenAPI Generator
2. In the dialog that opens, fill in the file name and swagger URL; the client will be generated under the project.
For details, please refer to [REST API Client Code Generator](https://marketplace.visualstudio.com/items?itemName=ChristianResmaHelle.ApiClientCodeGenerator2022).
| Source: promptflow/src/promptflow/promptflow/_sdk/_service/README.md |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import getpass
import socket
import time
from dataclasses import InitVar, dataclass, field
from datetime import datetime
from functools import wraps
import psutil
import requests
from flask import abort, make_response, request
from promptflow._sdk._constants import DEFAULT_ENCODING, HOME_PROMPT_FLOW_DIR, PF_SERVICE_PORT_FILE
from promptflow._sdk._errors import ConnectionNotFoundError, RunNotFoundError
from promptflow._sdk._utils import read_write_by_user
from promptflow._utils.logger_utils import get_cli_sdk_logger
from promptflow._utils.yaml_utils import dump_yaml, load_yaml
from promptflow._version import VERSION
from promptflow.exceptions import PromptflowException, UserErrorException
logger = get_cli_sdk_logger()
def local_user_only(func):
@wraps(func)
def wrapper(*args, **kwargs):
# Get the user name from request.
user = request.environ.get("REMOTE_USER") or request.headers.get("X-Remote-User")
if user != getpass.getuser():
abort(403)
return func(*args, **kwargs)
return wrapper
def get_port_from_config(create_if_not_exists=False):
(HOME_PROMPT_FLOW_DIR / PF_SERVICE_PORT_FILE).touch(mode=read_write_by_user(), exist_ok=True)
with open(HOME_PROMPT_FLOW_DIR / PF_SERVICE_PORT_FILE, "r", encoding=DEFAULT_ENCODING) as f:
service_config = load_yaml(f) or {}
port = service_config.get("service", {}).get("port", None)
if not port and create_if_not_exists:
with open(HOME_PROMPT_FLOW_DIR / PF_SERVICE_PORT_FILE, "w", encoding=DEFAULT_ENCODING) as f:
# Write the random port to ~/.promptflow/pfs.port
port = get_random_port()
service_config["service"] = service_config.get("service", {})
service_config["service"]["port"] = port
dump_yaml(service_config, f)
return port
def dump_port_to_config(port):
# Write the port to ~/.promptflow/pfs.port; if the file already contains a port, it will be overwritten.
(HOME_PROMPT_FLOW_DIR / PF_SERVICE_PORT_FILE).touch(mode=read_write_by_user(), exist_ok=True)
with open(HOME_PROMPT_FLOW_DIR / PF_SERVICE_PORT_FILE, "r", encoding=DEFAULT_ENCODING) as f:
service_config = load_yaml(f) or {}
with open(HOME_PROMPT_FLOW_DIR / PF_SERVICE_PORT_FILE, "w", encoding=DEFAULT_ENCODING) as f:
service_config["service"] = service_config.get("service", {})
service_config["service"]["port"] = port
dump_yaml(service_config, f)
def is_port_in_use(port: int):
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
return s.connect_ex(("localhost", port)) == 0
def get_random_port():
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
s.bind(("localhost", 0))
return s.getsockname()[1]
def _get_process_by_port(port):
for proc in psutil.process_iter(["pid", "connections", "create_time"]):
try:
for connection in proc.connections():
if connection.laddr.port == port:
return proc
except psutil.AccessDenied:
pass
def kill_exist_service(port):
proc = _get_process_by_port(port)
if proc:
proc.terminate()
proc.wait(10)
def get_started_service_info(port):
service_info = {}
proc = _get_process_by_port(port)
if proc:
create_time = proc.info["create_time"]
process_uptime = datetime.now() - datetime.fromtimestamp(create_time)
service_info["create_time"] = str(datetime.fromtimestamp(create_time))
service_info["uptime"] = str(process_uptime)
service_info["port"] = port
return service_info
def make_response_no_content():
return make_response("", 204)
def is_pfs_service_healthy(pfs_port) -> bool:
"""Check if pfs service is running."""
try:
response = requests.get("http://localhost:{}/heartbeat".format(pfs_port))
if response.status_code == 200:
logger.debug(f"Pfs service is already running on port {pfs_port}.")
return True
except Exception: # pylint: disable=broad-except
pass
logger.warning(f"Pfs service can't be reached through port {pfs_port}, will try to start/force restart pfs.")
return False
def check_pfs_service_status(pfs_port, time_delay=5, time_threshold=30) -> bool:
wait_time = 0
is_healthy = False
while is_healthy is False and time_threshold > wait_time:
logger.info(
f"Pfs service is not ready. It has been waited for {wait_time}s, will wait for at most "
f"{time_threshold}s."
)
wait_time += time_delay
time.sleep(time_delay)
is_healthy = is_pfs_service_healthy(pfs_port)
return is_healthy
@dataclass
class ErrorInfo:
exception: InitVar[Exception]
code: str = field(init=False)
message: str = field(init=False)
message_format: str = field(init=False, default=None)
message_parameters: dict = field(init=False, default=None)
target: str = field(init=False, default=None)
module: str = field(init=False, default=None)
reference_code: str = field(init=False, default=None)
inner_exception: dict = field(init=False, default=None)
additional_info: dict = field(init=False, default=None)
error_codes: list = field(init=False, default=None)
def __post_init__(self, exception):
if isinstance(exception, PromptflowException):
self.code = "PromptflowError"
if isinstance(exception, (UserErrorException, ConnectionNotFoundError, RunNotFoundError)):
self.code = "UserError"
self.message = exception.message
self.message_format = exception.message_format
self.message_parameters = exception.message_parameters
self.target = exception.target
self.module = exception.module
self.reference_code = exception.reference_code
self.inner_exception = exception.inner_exception
self.additional_info = exception.additional_info
self.error_codes = exception.error_codes
else:
self.code = "ServiceError"
self.message = str(exception)
@dataclass
class FormattedException:
exception: InitVar[Exception]
status_code: InitVar[int] = 500
error: ErrorInfo = field(init=False)
time: str = field(init=False)
def __post_init__(self, exception, status_code):
self.status_code = status_code
if isinstance(exception, (UserErrorException, ConnectionNotFoundError, RunNotFoundError)):
self.status_code = 404
self.error = ErrorInfo(exception)
self.time = datetime.now().isoformat()
def build_pfs_user_agent():
extra_agent = f"local_pfs/{VERSION}"
if request.user_agent.string:
return f"{request.user_agent.string} {extra_agent}"
return extra_agent
def get_client_from_request() -> "PFClient":
from promptflow._sdk._pf_client import PFClient
return PFClient(user_agent=build_pfs_user_agent())
| promptflow/src/promptflow/promptflow/_sdk/_service/utils/utils.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/_service/utils/utils.py",
"repo_id": "promptflow",
"token_count": 2835
} | 12 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from enum import Enum
from typing import Dict, Sequence, Set, List, Any
from promptflow._utils.exception_utils import ErrorResponse
from promptflow.contracts.run_info import FlowRunInfo, RunInfo, Status
# define metrics dimension keys
FLOW_KEY = "flow"
RUN_STATUS_KEY = "run_status"
NODE_KEY = "node"
LLM_ENGINE_KEY = "llm_engine"
TOKEN_TYPE_KEY = "token_type"
RESPONSE_CODE_KEY = "response_code"
EXCEPTION_TYPE_KEY = "exception"
STREAMING_KEY = "streaming"
API_CALL_KEY = "api_call"
RESPONSE_TYPE_KEY = "response_type" # firstbyte, lastbyte, default
HISTOGRAM_BOUNDARIES: Sequence[float] = (
1.0,
5.0,
10.0,
25.0,
50.0,
75.0,
100.0,
250.0,
500.0,
750.0,
1000.0,
2500.0,
5000.0,
7500.0,
10000.0,
25000.0,
50000.0,
75000.0,
100000.0,
300000.0,
)
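The explicit boundaries above are latency values in milliseconds; under OpenTelemetry's bucket semantics a sample falls into the first bucket whose upper edge is greater than or equal to the value, with one overflow bucket past the last boundary. A quick, self-contained way to see which bucket a sample lands in is `bisect` (this is an illustration of the boundary semantics, not part of the SDK):

```python
import bisect

HISTOGRAM_BOUNDARIES = (
    1.0, 5.0, 10.0, 25.0, 50.0, 75.0, 100.0, 250.0, 500.0, 750.0,
    1000.0, 2500.0, 5000.0, 7500.0, 10000.0, 25000.0, 50000.0,
    75000.0, 100000.0, 300000.0,
)

def bucket_index(value_ms: float) -> int:
    """Index of the histogram bucket a latency sample (ms) falls into.

    Buckets have inclusive upper bounds, so bisect_left places a sample
    equal to a boundary into that boundary's bucket; values past the last
    boundary land in the overflow bucket at index len(HISTOGRAM_BOUNDARIES).
    """
    return bisect.bisect_left(HISTOGRAM_BOUNDARIES, value_ms)
```

For example, an 80 ms flow latency falls between the 75.0 and 100.0 edges, i.e. bucket index 6.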
class ResponseType(Enum):
# latency from receiving the request to sending the first byte of response, only applicable to streaming flow
FirstByte = "firstbyte"
# latency from receiving the request to sending the last byte of response, only applicable to streaming flow
LastByte = "lastbyte"
# latency from receiving the request to sending the whole response, only applicable to non-streaming flow
Default = "default"
class LLMTokenType(Enum):
PromptTokens = "prompt_tokens"
CompletionTokens = "completion_tokens"
try:
from opentelemetry import metrics
from opentelemetry.metrics import set_meter_provider
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.view import ExplicitBucketHistogramAggregation, SumAggregation, View
# define meter
meter = metrics.get_meter_provider().get_meter("Promptflow Standard Metrics")
# define metrics
token_consumption = meter.create_counter("Token_Consumption")
flow_latency = meter.create_histogram("Flow_Latency")
node_latency = meter.create_histogram("Node_Latency")
flow_request = meter.create_counter("Flow_Request")
remote_api_call_latency = meter.create_histogram("RPC_Latency")
remote_api_call_request = meter.create_counter("RPC_Request")
node_request = meter.create_counter("Node_Request")
# metrics for streaming
streaming_response_duration = meter.create_histogram("Flow_Streaming_Response_Duration")
# define metrics views
# token view
token_view = View(
instrument_name="Token_Consumption",
description="",
attribute_keys={FLOW_KEY, NODE_KEY, LLM_ENGINE_KEY, TOKEN_TYPE_KEY},
aggregation=SumAggregation(),
)
# latency view
flow_latency_view = View(
instrument_name="Flow_Latency",
description="",
attribute_keys={FLOW_KEY, RESPONSE_CODE_KEY, STREAMING_KEY, RESPONSE_TYPE_KEY},
aggregation=ExplicitBucketHistogramAggregation(boundaries=HISTOGRAM_BOUNDARIES),
)
node_latency_view = View(
instrument_name="Node_Latency",
description="",
attribute_keys={FLOW_KEY, NODE_KEY, RUN_STATUS_KEY},
aggregation=ExplicitBucketHistogramAggregation(boundaries=HISTOGRAM_BOUNDARIES),
)
flow_streaming_response_duration_view = View(
instrument_name="Flow_Streaming_Response_Duration",
description="duration between sending the first byte and the last byte of the response, only for streaming flow",
attribute_keys={FLOW_KEY},
aggregation=ExplicitBucketHistogramAggregation(boundaries=HISTOGRAM_BOUNDARIES),
)
# request view
request_view = View(
instrument_name="Flow_Request",
description="",
attribute_keys={FLOW_KEY, RESPONSE_CODE_KEY, STREAMING_KEY, EXCEPTION_TYPE_KEY},
aggregation=SumAggregation(),
)
node_request_view = View(
instrument_name="Node_Request",
description="",
attribute_keys={FLOW_KEY, NODE_KEY, RUN_STATUS_KEY, EXCEPTION_TYPE_KEY},
aggregation=SumAggregation(),
)
# Remote API call view
remote_api_call_latency_view = View(
instrument_name="RPC_Latency",
description="",
attribute_keys={FLOW_KEY, NODE_KEY, API_CALL_KEY},
aggregation=ExplicitBucketHistogramAggregation(boundaries=HISTOGRAM_BOUNDARIES),
)
remote_api_call_request_view = View(
instrument_name="RPC_Request",
description="",
attribute_keys={FLOW_KEY, NODE_KEY, API_CALL_KEY, EXCEPTION_TYPE_KEY},
aggregation=SumAggregation(),
)
metrics_enabled = True
except ImportError:
metrics_enabled = False
class MetricsRecorder(object):
"""OpenTelemetry Metrics Recorder"""
def __init__(self, logger, reader=None, common_dimensions: Dict[str, str] = None) -> None:
"""initialize metrics recorder
:param logger: logger
:type logger: Logger
:param reader: metric reader
:type reader: opentelemetry.sdk.metrics.export.MetricReader
:param common_dimensions: common dimensions for all metrics
:type common_dimensions: Dict[str, str]
"""
self.logger = logger
if not metrics_enabled:
logger.warning(
"OpenTelemetry metric is not enabled, metrics will not be recorded. "
+ "If you want to collect metrics, please enable 'azureml-serving' extra requirement "
+ "for promptflow: 'pip install promptflow[azureml-serving]'"
)
return
self.common_dimensions = common_dimensions or {}
self.reader = reader
dimension_keys = set(self.common_dimensions)  # use the None-safe copy, not the raw parameter
self._config_common_monitor(dimension_keys, reader)
logger.info("OpenTelemetry metric is enabled, metrics will be recorded.")
def record_flow_request(self, flow_id: str, response_code: int, exception: str, streaming: bool):
if not metrics_enabled:
return
try:
flow_request.add(
1,
{
FLOW_KEY: flow_id,
RESPONSE_CODE_KEY: str(response_code),
EXCEPTION_TYPE_KEY: exception,
STREAMING_KEY: str(streaming),
**self.common_dimensions,
},
)
except Exception as e:
self.logger.warning("failed to record flow request metrics: %s", e)
def record_flow_latency(
self, flow_id: str, response_code: int, streaming: bool, response_type: str, duration: float
):
if not metrics_enabled:
return
try:
flow_latency.record(
duration,
{
FLOW_KEY: flow_id,
RESPONSE_CODE_KEY: str(response_code),
STREAMING_KEY: str(streaming),
RESPONSE_TYPE_KEY: response_type,
**self.common_dimensions,
},
)
except Exception as e:
self.logger.warning("failed to record flow latency metrics: %s", e)
def record_flow_streaming_response_duration(self, flow_id: str, duration: float):
if not metrics_enabled:
return
try:
streaming_response_duration.record(duration, {FLOW_KEY: flow_id, **self.common_dimensions})
except Exception as e:
self.logger.warning("failed to record streaming duration metrics: %s", e)
def record_tracing_metrics(self, flow_run: FlowRunInfo, node_runs: Dict[str, RunInfo]):
if not metrics_enabled:
return
try:
for _, run in node_runs.items():
flow_id = flow_run.flow_id if flow_run is not None else "default"
if len(run.system_metrics) > 0:
duration = run.system_metrics.get("duration", None)
if duration is not None:
duration = duration * 1000
node_latency.record(
duration,
{
FLOW_KEY: flow_id,
NODE_KEY: run.node,
RUN_STATUS_KEY: run.status.value,
**self.common_dimensions,
},
)
# openai token metrics
inputs = run.inputs or {}
engine = inputs.get("deployment_name") or ""
for token_type in [LLMTokenType.PromptTokens.value, LLMTokenType.CompletionTokens.value]:
count = run.system_metrics.get(token_type, None)
if count:
token_consumption.add(
count,
{
FLOW_KEY: flow_id,
NODE_KEY: run.node,
LLM_ENGINE_KEY: engine,
TOKEN_TYPE_KEY: token_type,
**self.common_dimensions,
},
)
# record node request metric
err = None
if run.status != Status.Completed:
err = "unknown"
if isinstance(run.error, dict):
err = self._get_exact_error(run.error)
elif isinstance(run.error, str):
err = run.error
node_request.add(
1,
{
FLOW_KEY: flow_id,
NODE_KEY: run.node,
RUN_STATUS_KEY: run.status.value,
EXCEPTION_TYPE_KEY: err,
**self.common_dimensions,
},
)
if run.api_calls and len(run.api_calls) > 0:
for api_call in run.api_calls:
# since first layer api_call is the node call itself, we ignore them here
api_calls: List[Dict[str, Any]] = api_call.get("children", None)
if api_calls is None:
continue
self._record_api_call_metrics(flow_id, run.node, api_calls)
except Exception as e:
self.logger.warning(f"failed to record metrics: {e}, flow_run: {flow_run}, node_runs: {node_runs}")
def _record_api_call_metrics(self, flow_id, node, api_calls: List[Dict[str, Any]], prefix: str = None):
if api_calls and len(api_calls) > 0:
for api_call in api_calls:
cur_name = api_call.get("name")
api_name = f"{prefix}_{cur_name}" if prefix else cur_name
# api-call latency metrics
# sample data: {"start_time":1688462182.744916, "end_time":1688462184.280989}
start_time = api_call.get("start_time", None)
end_time = api_call.get("end_time", None)
if start_time and end_time:
api_call_latency_ms = (end_time - start_time) * 1000
remote_api_call_latency.record(
api_call_latency_ms,
{
FLOW_KEY: flow_id,
NODE_KEY: node,
API_CALL_KEY: api_name,
**self.common_dimensions,
},
)
# remote api call request metrics
err = api_call.get("error") or {}
if isinstance(err, dict):
exception_type = self._get_exact_error(err)
else:
exception_type = err
remote_api_call_request.add(
1,
{
FLOW_KEY: flow_id,
NODE_KEY: node,
API_CALL_KEY: api_name,
EXCEPTION_TYPE_KEY: exception_type,
**self.common_dimensions,
},
)
child_api_calls = api_call.get("children", None)
if child_api_calls:
self._record_api_call_metrics(flow_id, node, child_api_calls, api_name)
def _get_exact_error(self, err: Dict):
error_response = ErrorResponse.from_error_dict(err)
return error_response.innermost_error_code
# configure monitor, by default only expose prometheus metrics
def _config_common_monitor(self, common_keys: Set[str] = {}, reader=None):
metrics_views = [
token_view,
flow_latency_view,
node_latency_view,
request_view,
remote_api_call_latency_view,
remote_api_call_request_view,
]
for view in metrics_views:
view._attribute_keys.update(common_keys)
readers = []
if reader:
readers.append(reader)
meter_provider = MeterProvider(
metric_readers=readers,
views=metrics_views,
)
set_meter_provider(meter_provider)
| promptflow/src/promptflow/promptflow/_sdk/_serving/monitor/metrics.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/_serving/monitor/metrics.py",
"repo_id": "promptflow",
"token_count": 6653
} | 13 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from promptflow._version import VERSION
USER_AGENT = "{}/{}".format("promptflow-sdk", VERSION)
| promptflow/src/promptflow/promptflow/_sdk/_user_agent.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/_user_agent.py",
"repo_id": "promptflow",
"token_count": 57
} | 14 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from os import PathLike
from typing import Union
# TODO(2528165): remove this file when we deprecate Flow.run_bulk
class BaseInputs(object):
def __init__(self, data: Union[str, PathLike], inputs_mapping: dict = None, **kwargs):
self.data = data
self.inputs_mapping = inputs_mapping
class BulkInputs(BaseInputs):
"""Bulk run inputs.
data: pointer to test data for standard runs
inputs_mapping: define a data flow logic to map input data, support:
from data: data.col1:
Example:
{"question": "${data.question}", "context": "${data.context}"}
"""
# TODO: support inputs_mapping for bulk run
pass
class EvalInputs(BaseInputs):
"""Evaluation flow run inputs.
data: pointer to test data (of variant bulk runs) for eval runs
variant:
variant run id or variant run
keep lineage between current run and variant runs
variant outputs can be referenced as ${batch_run.outputs.col_name} in inputs_mapping
baseline:
baseline run id or baseline run
baseline bulk run for eval runs for pairwise comparison
inputs_mapping: define a data flow logic to map input data, support:
from data: data.col1:
from variant:
[0].col1, [1].col2: if need different col from variant run data
variant.output.col1: if all upstream runs has col1
Example:
{"ground_truth": "${data.answer}", "prediction": "${batch_run.outputs.answer}"}
"""
def __init__(
self,
data: Union[str, PathLike],
variant: Union[str, "BulkRun"] = None, # noqa: F821
baseline: Union[str, "BulkRun"] = None, # noqa: F821
inputs_mapping: dict = None,
**kwargs
):
super().__init__(data=data, inputs_mapping=inputs_mapping, **kwargs)
self.variant = variant
self.baseline = baseline
| promptflow/src/promptflow/promptflow/_sdk/entities/_run_inputs.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_sdk/entities/_run_inputs.py",
"repo_id": "promptflow",
"token_count": 771
} | 15 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import json
import os
from datetime import datetime
from enum import Enum
from traceback import TracebackException, format_tb
from types import TracebackType, FrameType
from promptflow.exceptions import PromptflowException, SystemErrorException, UserErrorException, ValidationException
ADDITIONAL_INFO_USER_EXECUTION_ERROR = "ToolExecutionErrorDetails"
ADDITIONAL_INFO_USER_CODE_STACKTRACE = "UserCodeStackTrace"
CAUSE_MESSAGE = "\nThe above exception was the direct cause of the following exception:\n\n"
CONTEXT_MESSAGE = "\nDuring handling of the above exception, another exception occurred:\n\n"
TRACEBACK_MESSAGE = "Traceback (most recent call last):\n"
class RootErrorCode:
USER_ERROR = "UserError"
SYSTEM_ERROR = "SystemError"
class ResponseCode(str, Enum):
SUCCESS = "200"
ACCEPTED = "202"
REDIRECTION = "300"
CLIENT_ERROR = "400"
SERVICE_ERROR = "500"
UNKNOWN = "0"
class ErrorResponse:
"""A class that represents the response body when an error occurs.
It follows the following specification:
https://github.com/microsoft/api-guidelines/blob/vNext/Guidelines.md#7102-error-condition-responses
"""
def __init__(self, error_dict):
self._error_dict = error_dict
@staticmethod
def from_error_dict(error_dict):
"""Create an ErrorResponse from an error dict.
The error dict which usually is generated by ExceptionPresenter.create(exception).to_dict()
"""
return ErrorResponse(error_dict)
@staticmethod
def from_exception(ex: Exception, *, include_debug_info=False):
presenter = ExceptionPresenter.create(ex)
error_dict = presenter.to_dict(include_debug_info=include_debug_info)
return ErrorResponse(error_dict)
@property
def message(self):
return self._error_dict.get("message", "")
@property
def response_code(self):
"""Given the error code, return the corresponding http response code."""
root_error_code = self._error_dict.get("code")
return ResponseCode.CLIENT_ERROR if root_error_code == RootErrorCode.USER_ERROR else ResponseCode.SERVICE_ERROR
@property
def additional_info(self):
"""Return the additional info of the error.
The additional info is defined in the error response.
It is stored as a list of dict, each of which contains a "type" and "info" field.
We change the list of dict to a dict of dict for easier access.
"""
result = {}
list_of_dict = self._error_dict.get("additionalInfo")
if not list_of_dict or not isinstance(list_of_dict, list):
return result
for item in list_of_dict:
# We just ignore the item if it is not a dict or does not contain the required fields.
if not isinstance(item, dict):
continue
name = item.get("type")
info = item.get("info")
if not name or not info:
continue
result[name] = info
return result
def get_additional_info(self, name):
"""Get the additional info by name."""
return self.additional_info.get(name)
def get_user_execution_error_info(self):
"""Get user tool execution error info from additional info."""
user_execution_error_info = self.get_additional_info(ADDITIONAL_INFO_USER_EXECUTION_ERROR)
if not user_execution_error_info or not isinstance(user_execution_error_info, dict):
return {}
return user_execution_error_info
def to_dict(self):
from promptflow._core.operation_context import OperationContext
return {
"error": self._error_dict,
"correlation": None, # TODO: to be implemented
"environment": None, # TODO: to be implemented
"location": None, # TODO: to be implemented
"componentName": OperationContext.get_instance().get_user_agent(),
"time": datetime.utcnow().isoformat(),
}
def to_simplified_dict(self):
return {
"error": {
"code": self._error_dict.get("code"),
"message": self._error_dict.get("message"),
}
}
@property
def error_codes(self):
error = self._error_dict
error_codes = []
while error is not None:
code = error.get("code")
if code is not None:
error_codes.append(code)
error = error.get("innerError")
else:
break
return error_codes
@property
def error_code_hierarchy(self):
"""Get the code hierarchy from error dict."""
return "/".join(self.error_codes)
@property
def innermost_error_code(self):
error_codes = self.error_codes
if error_codes:
return error_codes[-1]
return None
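The `error_codes` walk above flattens a nested `innerError` chain into a list, from which the hierarchy string and innermost code are derived. A self-contained sketch of the same traversal (`err` is a made-up example dict in the Microsoft API guidelines error shape used throughout this module):

```python
def error_codes(error_dict):
    """Collect the `code` fields down a nested `innerError` chain."""
    codes = []
    error = error_dict
    while error is not None:
        code = error.get("code")
        if code is None:
            break  # stop at the first level without a code
        codes.append(code)
        error = error.get("innerError")
    return codes

err = {
    "code": "UserError",
    "innerError": {"code": "ToolExecutionError", "innerError": None},
}
```

Joining the list with `/` yields the `error_code_hierarchy` form, e.g. `UserError/ToolExecutionError`, and the last element is the innermost code.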
class ExceptionPresenter:
"""A class that can extract information from the exception instance.
It is designed to work for both PromptflowException and other exceptions.
"""
def __init__(self, ex: Exception):
self._ex = ex
@staticmethod
def create(ex: Exception):
if isinstance(ex, PromptflowException):
return PromptflowExceptionPresenter(ex)
return ExceptionPresenter(ex)
@property
def formatted_traceback(self):
te = TracebackException.from_exception(self._ex)
return "".join(te.format())
@property
def debug_info(self):
return self.build_debug_info(self._ex)
def build_debug_info(self, ex: Exception):
inner_exception: dict = None
stack_trace = TRACEBACK_MESSAGE + "".join(format_tb(ex.__traceback__))
if ex.__cause__ is not None:
inner_exception = self.build_debug_info(ex.__cause__)
stack_trace = CAUSE_MESSAGE + stack_trace
elif ex.__context__ is not None and not ex.__suppress_context__:
inner_exception = self.build_debug_info(ex.__context__)
stack_trace = CONTEXT_MESSAGE + stack_trace
return {
"type": ex.__class__.__qualname__,
"message": str(ex),
"stackTrace": stack_trace,
"innerException": inner_exception,
}
@property
def error_codes(self):
"""The hierarchy of the error codes.
We follow the "Microsoft REST API Guidelines" to define error codes in a hierarchy style.
See the below link for details:
https://github.com/microsoft/api-guidelines/blob/vNext/Guidelines.md#7102-error-condition-responses
This method returns the error codes in a list. It will be converted into a nested json format by
error_code_recursed.
"""
return [infer_error_code_from_class(SystemErrorException), self._ex.__class__.__name__]
@property
def error_code_recursed(self):
"""Returns a dict of the error codes for this exception.
It is populated in a recursive manner, using the source from `error_codes` property.
i.e. For PromptflowException, such as ToolExcutionError which inherits from UserErrorException,
The result would be:
{
"code": "UserError",
"innerError": {
"code": "ToolExecutionError",
"innerError": None,
},
}
For other exception types, such as ValueError, the result would be:
{
"code": "SystemError",
"innerError": {
"code": "ValueError",
"innerError": None,
},
}
"""
current_error = None
reversed_error_codes = reversed(self.error_codes) if self.error_codes else []
for code in reversed_error_codes:
current_error = {
"code": code,
"innerError": current_error,
}
return current_error
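`error_code_recursed` is the inverse of the flat code list: it folds the codes, innermost first, back into nested `innerError` dicts. A standalone sketch of that fold:

```python
def recurse_codes(codes):
    """Fold a flat code list into the nested {code, innerError} shape."""
    current = None
    for code in reversed(codes):  # innermost code is wrapped first
        current = {"code": code, "innerError": current}
    return current
```

So `["UserError", "ToolExecutionError"]` round-trips to the nested dict shown in the docstring above, and an empty list yields `None`.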
def to_dict(self, *, include_debug_info=False):
"""Return a dict representation of the exception.
This dict specification corresponds to the specification of the Microsoft API Guidelines:
https://github.com/microsoft/api-guidelines/blob/vNext/Guidelines.md#7102-error-condition-responses
Note that this dict represents the "error" field in the response body of the API.
The whole error response is then populated in another place outside of this class.
"""
if isinstance(self._ex, JsonSerializedPromptflowException):
return self._ex.to_dict(include_debug_info=include_debug_info)
# Otherwise, return general dict representation of the exception.
result = {"message": str(self._ex), "messageFormat": "", "messageParameters": {}}
result.update(self.error_code_recursed)
if include_debug_info:
result["debugInfo"] = self.debug_info
return result
class PromptflowExceptionPresenter(ExceptionPresenter):
@property
def error_codes(self):
"""The hierarchy of the error codes.
We follow the "Microsoft REST API Guidelines" to define error codes in a hierarchy style.
See the below link for details:
https://github.com/microsoft/api-guidelines/blob/vNext/Guidelines.md#7102-error-condition-responses
For subclass of PromptflowException, use the ex.error_codes directly.
For PromptflowException (not a subclass), the ex.error_code is None.
The result should be:
["SystemError", {inner_exception type name if exist}]
"""
if self._ex.error_codes:
return self._ex.error_codes
# For PromptflowException (not a subclass), the ex.error_code is None.
# Handle this case specifically.
error_codes = [infer_error_code_from_class(SystemErrorException)]
if self._ex.inner_exception:
error_codes.append(infer_error_code_from_class(self._ex.inner_exception.__class__))
return error_codes
def to_dict(self, *, include_debug_info=False):
result = {
"message": self._ex.message,
"messageFormat": self._ex.message_format,
"messageParameters": self._ex.serializable_message_parameters,
"referenceCode": self._ex.reference_code,
}
result.update(self.error_code_recursed)
if self._ex.additional_info:
result["additionalInfo"] = [{"type": k, "info": v} for k, v in self._ex.additional_info.items()]
if include_debug_info:
result["debugInfo"] = self.debug_info
return result
class JsonSerializedPromptflowException(Exception):
"""Json serialized PromptflowException.
This exception only has one argument message to avoid the
argument missing error when load/dump with pickle in multiprocessing.
Ref: https://bugs.python.org/issue32696
:param message: A Json serialized message describing the error.
:type message: str
"""
def __init__(self, message):
self.message = message
super().__init__(self.message)
def __str__(self):
return self.message
def to_dict(self, *, include_debug_info=False):
# Return a dict representation of the inner exception.
error_dict = json.loads(self.message)
# The original serialized error might contain debugInfo.
# We pop it out if include_debug_info is set to False.
if not include_debug_info and "debugInfo" in error_dict:
error_dict.pop("debugInfo")
return error_dict
def get_tb_next(tb: TracebackType, next_cnt: int):
"""Return the nth tb_next of input tb.
If the tb does not have n tb_next, return the last tb which has a value.
n = next_cnt
"""
while tb.tb_next and next_cnt > 0:
tb = tb.tb_next
next_cnt -= 1
return tb
def last_frame_info(ex: Exception):
"""Return the line number where the error occurred."""
if ex:
tb = TracebackException.from_exception(ex)
last_frame = tb.stack[-1] if tb.stack else None
if last_frame:
return {
"filename": last_frame.filename,
"lineno": last_frame.lineno,
"name": last_frame.name,
}
return {}
def infer_error_code_from_class(cls):
# Python has a built-in SystemError
if cls == SystemErrorException:
return RootErrorCode.SYSTEM_ERROR
if cls == UserErrorException:
return RootErrorCode.USER_ERROR
if cls == ValidationException:
return "ValidationError"
return cls.__name__
def is_pf_core_frame(frame: FrameType):
"""Check if the frame is from promptflow core code."""
from promptflow import _core
folder_of_core = os.path.dirname(_core.__file__)
return folder_of_core in frame.f_code.co_filename
def remove_suffix(text: str, suffix: str = None):
"""
Given a string, removes specified suffix, if it has.
>>> remove_suffix('hello world', 'world')
'hello '
>>> remove_suffix('hello world', 'hello ')
'hello world'
>>> remove_suffix('NoColumnFoundError', 'Error')
'NoColumnFound'
:param text: string from which prefix will be removed.
:param suffix: suffix to be removed.
:return: string removed suffix.
"""
if not text or not suffix:
return text
if not text.endswith(suffix):
return text
return text[: -len(suffix)]
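Since Python 3.9 the stdlib provides `str.removesuffix` with the same semantics as the helper above (minus the `None`/empty guard), so the two should agree on any non-empty suffix; a quick check:

```python
def remove_suffix(text: str, suffix: str = None):
    """Remove `suffix` from `text` if present (None-safe copy of the helper)."""
    if not text or not suffix:
        return text
    if not text.endswith(suffix):
        return text
    return text[: -len(suffix)]

# The helper matches the stdlib method on the docstring's own examples.
for text, suffix in [
    ("hello world", "world"),
    ("hello world", "hello "),
    ("NoColumnFoundError", "Error"),
]:
    assert remove_suffix(text, suffix) == text.removesuffix(suffix)
```

The guard also makes `remove_suffix(text, None)` a no-op, where `str.removesuffix(None)` would raise `TypeError`.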
| promptflow/src/promptflow/promptflow/_utils/exception_utils.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_utils/exception_utils.py",
"repo_id": "promptflow",
"token_count": 5411
} | 16 |
from io import StringIO
from os import PathLike
from typing import IO, AnyStr, Dict, Optional, Union
from ruamel.yaml import YAML, YAMLError
from promptflow._constants import DEFAULT_ENCODING
from promptflow._utils._errors import YamlParseError
def load_yaml(source: Optional[Union[AnyStr, PathLike, IO]]) -> Dict:
# null check - just return an empty dict.
# Certain CLI commands rely on this behavior to produce a resource
# via CLI, which is then populated through CLArgs.
"""Load a local YAML file or a readable stream object.
.. note::
1. For a local file yaml
.. code-block:: python
yaml_path = "path/to/yaml"
content = load_yaml(yaml_path)
2. For a readable stream object
.. code-block:: python
with open("path/to/yaml", "r", encoding="utf-8") as f:
content = load_yaml(f)
:param source: The relative or absolute path to the local file, or a readable stream object.
:type source: str
:return: A dictionary representation of the local file's contents.
:rtype: Dict
"""
if source is None:
return {}
# pylint: disable=redefined-builtin
input = None
must_open_file = False
try: # check source type by duck-typing it as an IOBase
readable = source.readable()
if not readable: # source is misformatted stream or file
msg = "File Permissions Error: The already-open input file is not readable."
raise Exception(msg)
# source is an already-open stream or file, we can read() from it directly.
input = source
except AttributeError:
# source has no readable() method; assume it's a string or file path.
must_open_file = True
if must_open_file: # If supplied a file path, open it.
try:
input = open(source, "r", encoding=DEFAULT_ENCODING)
except OSError: # FileNotFoundError introduced in Python 3
msg = "No such file or directory: {}"
raise Exception(msg.format(source))
# input should now be a readable file or stream. Parse it.
cfg = {}
try:
yaml = YAML()
yaml.preserve_quotes = True
cfg = yaml.load(input)
except YAMLError as e:
msg = f"Error while parsing yaml file: {source} \n\n {str(e)}"
raise Exception(msg)
finally:
if must_open_file:
input.close()
return cfg
def load_yaml_string(yaml_string: str):
"""Load a yaml string.
.. code-block:: python
yaml_string = "some yaml string"
object = load_yaml_string(yaml_string)
:param yaml_string: A yaml string.
:type yaml_string: str
"""
yaml = YAML()
yaml.preserve_quotes = True
return yaml.load(yaml_string)
def dump_yaml(*args, **kwargs):
"""Dump data to a yaml string or stream.
.. note::
1. Dump to a yaml string
.. code-block:: python
data = {"key": "value"}
yaml_string = dump_yaml(data)
2. Dump to a stream
.. code-block:: python
data = {"key": "value"}
with open("path/to/yaml", "w", encoding="utf-8") as f:
dump_yaml(data, f)
"""
yaml = YAML()
yaml.default_flow_style = False
# when using with no stream parameter but just the data, dump to yaml string and return
if len(args) == 1:
string_stream = StringIO()
yaml.dump(args[0], string_stream, **kwargs)
output_string = string_stream.getvalue()
string_stream.close()
return output_string
# when using with stream parameter, dump to stream. e.g.:
# open('test.yaml', 'w', encoding='utf-8') as f:
# dump_yaml(data, f)
elif len(args) == 2:
return yaml.dump(*args, **kwargs)
else:
raise YamlParseError("Only 1 or 2 positional arguments are allowed for dump yaml util function.")
| promptflow/src/promptflow/promptflow/_utils/yaml_utils.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/_utils/yaml_utils.py",
"repo_id": "promptflow",
"token_count": 1625
} | 17 |
# coding=utf-8
# --------------------------------------------------------------------------
# Code generated by Microsoft (R) AutoRest Code Generator (autorest: 3.9.2, generator: @autorest/[email protected])
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
from typing import TYPE_CHECKING
from azure.core.configuration import Configuration
from azure.core.pipeline import policies
if TYPE_CHECKING:
# pylint: disable=unused-import,ungrouped-imports
from typing import Any, Optional
VERSION = "unknown"
class AzureMachineLearningDesignerServiceClientConfiguration(Configuration):
"""Configuration for AzureMachineLearningDesignerServiceClient.
Note that all parameters used to create this instance are saved as instance
attributes.
:param api_version: Api Version. The default value is "1.0.0".
:type api_version: str
"""
def __init__(
self,
api_version="1.0.0", # type: Optional[str]
**kwargs # type: Any
):
# type: (...) -> None
super(AzureMachineLearningDesignerServiceClientConfiguration, self).__init__(**kwargs)
self.api_version = api_version
kwargs.setdefault('sdk_moniker', 'azuremachinelearningdesignerserviceclient/{}'.format(VERSION))
self._configure(**kwargs)
def _configure(
self,
**kwargs # type: Any
):
# type: (...) -> None
self.user_agent_policy = kwargs.get('user_agent_policy') or policies.UserAgentPolicy(**kwargs)
self.headers_policy = kwargs.get('headers_policy') or policies.HeadersPolicy(**kwargs)
self.proxy_policy = kwargs.get('proxy_policy') or policies.ProxyPolicy(**kwargs)
self.logging_policy = kwargs.get('logging_policy') or policies.NetworkTraceLoggingPolicy(**kwargs)
self.http_logging_policy = kwargs.get('http_logging_policy') or policies.HttpLoggingPolicy(**kwargs)
self.retry_policy = kwargs.get('retry_policy') or policies.RetryPolicy(**kwargs)
self.custom_hook_policy = kwargs.get('custom_hook_policy') or policies.CustomHookPolicy(**kwargs)
self.redirect_policy = kwargs.get('redirect_policy') or policies.RedirectPolicy(**kwargs)
self.authentication_policy = kwargs.get('authentication_policy')
| promptflow/src/promptflow/promptflow/azure/_restclient/flow/_configuration.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/azure/_restclient/flow/_configuration.py",
"repo_id": "promptflow",
"token_count": 812
} | 18 |
# coding=utf-8
# --------------------------------------------------------------------------
# Code generated by Microsoft (R) AutoRest Code Generator (autorest: 3.9.2, generator: @autorest/[email protected])
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
import functools
from typing import TYPE_CHECKING
import warnings
from azure.core.exceptions import ClientAuthenticationError, HttpResponseError, ResourceExistsError, ResourceNotFoundError, map_error
from azure.core.pipeline import PipelineResponse
from azure.core.pipeline.transport import HttpResponse
from azure.core.rest import HttpRequest
from azure.core.tracing.decorator import distributed_trace
from msrest import Serializer
from .. import models as _models
from .._vendor import _convert_request, _format_url_section
if TYPE_CHECKING:
# pylint: disable=unused-import,ungrouped-imports
from typing import Any, Callable, Dict, Generic, List, Optional, TypeVar, Union
T = TypeVar('T')
ClsType = Optional[Callable[[PipelineResponse[HttpRequest, HttpResponse], T, Dict[str, Any]], Any]]
_SERIALIZER = Serializer()
_SERIALIZER.client_side_validation = False
# fmt: off
def build_create_flow_session_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
session_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
content_type = kwargs.pop('content_type', None) # type: Optional[str]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"sessionId": _SERIALIZER.url("session_id", session_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
if content_type is not None:
header_parameters['Content-Type'] = _SERIALIZER.header("content_type", content_type, 'str')
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="POST",
url=url,
headers=header_parameters,
**kwargs
)
def build_get_flow_session_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
session_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"sessionId": _SERIALIZER.url("session_id", session_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)
def build_delete_flow_session_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
session_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"sessionId": _SERIALIZER.url("session_id", session_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="DELETE",
url=url,
headers=header_parameters,
**kwargs
)
def build_list_flow_session_pip_packages_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
session_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}/pipPackages')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"sessionId": _SERIALIZER.url("session_id", session_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)
def build_poll_operation_status_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
session_id, # type: str
action_type, # type: Union[str, "_models.SetupFlowSessionAction"]
location, # type: str
operation_id, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
api_version = kwargs.pop('api_version', "1.0.0") # type: Optional[str]
type = kwargs.pop('type', None) # type: Optional[str]
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}/{actionType}/locations/{location}/operations/{operationId}')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
"sessionId": _SERIALIZER.url("session_id", session_id, 'str'),
"actionType": _SERIALIZER.url("action_type", action_type, 'str'),
"location": _SERIALIZER.url("location", location, 'str'),
"operationId": _SERIALIZER.url("operation_id", operation_id, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct parameters
query_parameters = kwargs.pop("params", {}) # type: Dict[str, Any]
if api_version is not None:
query_parameters['api-version'] = _SERIALIZER.query("api_version", api_version, 'str')
if type is not None:
query_parameters['type'] = _SERIALIZER.query("type", type, 'str')
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
params=query_parameters,
headers=header_parameters,
**kwargs
)
def build_get_standby_pools_request(
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
**kwargs # type: Any
):
# type: (...) -> HttpRequest
accept = "application/json"
# Construct URL
url = kwargs.pop("template_url", '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/standbypools')
path_format_arguments = {
"subscriptionId": _SERIALIZER.url("subscription_id", subscription_id, 'str'),
"resourceGroupName": _SERIALIZER.url("resource_group_name", resource_group_name, 'str'),
"workspaceName": _SERIALIZER.url("workspace_name", workspace_name, 'str'),
}
url = _format_url_section(url, **path_format_arguments)
# Construct headers
header_parameters = kwargs.pop("headers", {}) # type: Dict[str, Any]
header_parameters['Accept'] = _SERIALIZER.header("accept", accept, 'str')
return HttpRequest(
method="GET",
url=url,
headers=header_parameters,
**kwargs
)
# fmt: on
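Each request builder above follows the same pattern: pop a `template_url`, serialize the path arguments, and substitute them into the URL template. A dependency-free sketch of that substitution step, using a hypothetical `format_url_section` helper standing in for the real `_format_url_section`:

```python
def format_url_section(template: str, **path_args: str) -> str:
    """Substitute {placeholder} segments of a URL template, mimicking
    what _format_url_section does for the request builders above."""
    for name, value in path_args.items():
        template = template.replace("{" + name + "}", value)
    return template


url = format_url_section(
    "/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}"
    "/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}"
    "/FlowSessions/{sessionId}",
    subscriptionId="sub-123",
    resourceGroupName="rg",
    workspaceName="ws",
    sessionId="sess-1",
)
print(url)
```

The real helper also URL-encodes each value via the serializer before substitution; this sketch skips that step for clarity.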
class FlowSessionsOperations(object):
"""FlowSessionsOperations operations.
You should not instantiate this class directly. Instead, you should create a Client instance that
instantiates it for you and attaches it as an attribute.
:ivar models: Alias to model classes used in this operation group.
:type models: ~flow.models
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
"""
models = _models
def __init__(self, client, config, serializer, deserializer):
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self._config = config
@distributed_trace
def create_flow_session(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
session_id, # type: str
body=None, # type: Optional["_models.CreateFlowSessionRequest"]
**kwargs # type: Any
):
# type: (...) -> Any
"""create_flow_session.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param session_id:
:type session_id: str
:param body:
:type body: ~flow.models.CreateFlowSessionRequest
:keyword callable cls: A custom type or function that will be passed the direct response
:return: any, or the result of cls(response)
:rtype: any
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[Any]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
content_type = kwargs.pop('content_type', "application/json") # type: Optional[str]
if body is not None:
_json = self._serialize.body(body, 'CreateFlowSessionRequest')
else:
_json = None
request = build_create_flow_session_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
session_id=session_id,
content_type=content_type,
json=_json,
template_url=self.create_flow_session.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200, 202]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
if response.status_code == 200:
deserialized = self._deserialize('object', pipeline_response)
if response.status_code == 202:
deserialized = self._deserialize('object', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
create_flow_session.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}'} # type: ignore
@distributed_trace
def get_flow_session(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
session_id, # type: str
**kwargs # type: Any
):
# type: (...) -> "_models.GetTrainingSessionDto"
"""get_flow_session.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param session_id:
:type session_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: GetTrainingSessionDto, or the result of cls(response)
:rtype: ~flow.models.GetTrainingSessionDto
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.GetTrainingSessionDto"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_flow_session_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
session_id=session_id,
template_url=self.get_flow_session.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('GetTrainingSessionDto', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_flow_session.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}'} # type: ignore
@distributed_trace
def delete_flow_session(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
session_id, # type: str
**kwargs # type: Any
):
# type: (...) -> Any
"""delete_flow_session.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param session_id:
:type session_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: any, or the result of cls(response)
:rtype: any
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[Any]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_delete_flow_session_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
session_id=session_id,
template_url=self.delete_flow_session.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200, 202]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
if response.status_code == 200:
deserialized = self._deserialize('object', pipeline_response)
if response.status_code == 202:
deserialized = self._deserialize('object', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
delete_flow_session.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}'} # type: ignore
@distributed_trace
def list_flow_session_pip_packages(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
session_id, # type: str
**kwargs # type: Any
):
# type: (...) -> str
"""list_flow_session_pip_packages.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param session_id:
:type session_id: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: str, or the result of cls(response)
:rtype: str
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[str]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_list_flow_session_pip_packages_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
session_id=session_id,
template_url=self.list_flow_session_pip_packages.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('str', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
list_flow_session_pip_packages.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}/pipPackages'} # type: ignore
@distributed_trace
def poll_operation_status(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
session_id, # type: str
action_type, # type: Union[str, "_models.SetupFlowSessionAction"]
location, # type: str
operation_id, # type: str
api_version="1.0.0", # type: Optional[str]
type=None, # type: Optional[str]
**kwargs # type: Any
):
# type: (...) -> Any
"""poll_operation_status.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:param session_id:
:type session_id: str
:param action_type:
:type action_type: str or ~flow.models.SetupFlowSessionAction
:param location:
:type location: str
:param operation_id:
:type operation_id: str
:param api_version: Api Version. The default value is "1.0.0".
:type api_version: str
:param type:
:type type: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: any, or the result of cls(response)
:rtype: any
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[Any]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_poll_operation_status_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
session_id=session_id,
action_type=action_type,
location=location,
operation_id=operation_id,
api_version=api_version,
type=type,
template_url=self.poll_operation_status.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('object', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
poll_operation_status.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/{sessionId}/{actionType}/locations/{location}/operations/{operationId}'} # type: ignore
@distributed_trace
def get_standby_pools(
self,
subscription_id, # type: str
resource_group_name, # type: str
workspace_name, # type: str
**kwargs # type: Any
):
# type: (...) -> List["_models.StandbyPoolProperties"]
"""get_standby_pools.
:param subscription_id: The Azure Subscription ID.
:type subscription_id: str
:param resource_group_name: The Name of the resource group in which the workspace is located.
:type resource_group_name: str
:param workspace_name: The name of the workspace.
:type workspace_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: list of StandbyPoolProperties, or the result of cls(response)
:rtype: list[~flow.models.StandbyPoolProperties]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType[List["_models.StandbyPoolProperties"]]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
request = build_get_standby_pools_request(
subscription_id=subscription_id,
resource_group_name=resource_group_name,
workspace_name=workspace_name,
template_url=self.get_standby_pools.metadata['url'],
)
request = _convert_request(request)
request.url = self._client.format_url(request.url)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
error = self._deserialize.failsafe_deserialize(_models.ErrorResponse, pipeline_response)
raise HttpResponseError(response=response, model=error)
deserialized = self._deserialize('[StandbyPoolProperties]', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_standby_pools.metadata = {'url': '/flow/api/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.MachineLearningServices/workspaces/{workspaceName}/FlowSessions/standbypools'} # type: ignore
| promptflow/src/promptflow/promptflow/azure/_restclient/flow/operations/_flow_sessions_operations.py/0 | {
"file_path": "promptflow/src/promptflow/promptflow/azure/_restclient/flow/operations/_flow_sessions_operations.py",
"repo_id": "promptflow",
"token_count": 10750
} | 19 |
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
# pylint: disable=protected-access
import os
import uuid
from datetime import datetime, timedelta
from pathlib import Path
from typing import Dict, Optional, TypeVar, Union
from azure.ai.ml._artifacts._blob_storage_helper import BlobStorageClient
from azure.ai.ml._artifacts._gen2_storage_helper import Gen2StorageClient
from azure.ai.ml._azure_environments import _get_storage_endpoint_from_metadata
from azure.ai.ml._restclient.v2022_10_01.models import DatastoreType
from azure.ai.ml._scope_dependent_operations import OperationScope
from azure.ai.ml._utils._arm_id_utils import (
AMLNamedArmId,
get_resource_name_from_arm_id,
is_ARM_id_for_resource,
remove_aml_prefix,
)
from azure.ai.ml._utils._asset_utils import (
IgnoreFile,
_build_metadata_dict,
_validate_path,
get_ignore_file,
get_object_hash,
)
from azure.ai.ml._utils._storage_utils import (
AzureMLDatastorePathUri,
get_artifact_path_from_storage_url,
get_storage_client,
)
from azure.ai.ml.constants._common import SHORT_URI_FORMAT, STORAGE_ACCOUNT_URLS
from azure.ai.ml.entities import Environment
from azure.ai.ml.entities._assets._artifacts.artifact import Artifact, ArtifactStorageInfo
from azure.ai.ml.entities._credentials import AccountKeyConfiguration
from azure.ai.ml.entities._datastore._constants import WORKSPACE_BLOB_STORE
from azure.ai.ml.exceptions import ErrorTarget, ValidationException
from azure.ai.ml.operations._datastore_operations import DatastoreOperations
from azure.storage.blob import BlobSasPermissions, generate_blob_sas
from azure.storage.filedatalake import FileSasPermissions, generate_file_sas
from ..._utils.logger_utils import LoggerFactory
from ._fileshare_storeage_helper import FlowFileStorageClient
module_logger = LoggerFactory.get_logger(__name__)
def _get_datastore_name(*, datastore_name: Optional[str] = WORKSPACE_BLOB_STORE) -> str:
datastore_name = WORKSPACE_BLOB_STORE if not datastore_name else datastore_name
try:
datastore_name = get_resource_name_from_arm_id(datastore_name)
except (ValueError, AttributeError, ValidationException):
module_logger.debug("datastore_name %s is not a full arm id. Proceed with a shortened name.\n", datastore_name)
datastore_name = remove_aml_prefix(datastore_name)
if is_ARM_id_for_resource(datastore_name):
datastore_name = get_resource_name_from_arm_id(datastore_name)
return datastore_name
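`_get_datastore_name` accepts either a short datastore name or a full ARM resource id and always normalizes to the short name, falling back to the workspace blob store when nothing is given. A rough, self-contained illustration of that normalization, with a hypothetical `extract_name` helper in place of the azure-ai-ml utilities:

```python
def extract_name(datastore: str, default: str = "workspaceblobstore") -> str:
    # Fall back to the workspace default when nothing is given.
    if not datastore:
        return default
    # Strip an optional "azureml:" prefix, as remove_aml_prefix does.
    if datastore.startswith("azureml:"):
        datastore = datastore[len("azureml:"):]
    # For a full ARM id, keep only the trailing resource name,
    # as get_resource_name_from_arm_id does.
    if datastore.startswith("/subscriptions/"):
        return datastore.rsplit("/", 1)[-1]
    return datastore


arm_id = (
    "/subscriptions/000/resourceGroups/rg/providers/"
    "Microsoft.MachineLearningServices/workspaces/ws/datastores/mystore"
)
print(extract_name(arm_id))
```

The real helpers validate the ARM id shape and raise on malformed input; this sketch only shows the happy path.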
def get_datastore_info(operations: DatastoreOperations, name: str) -> Dict[str, str]:
"""Get datastore account, type, and auth information."""
datastore_info = {}
if name:
datastore = operations.get(name, include_secrets=True)
else:
datastore = operations.get_default(include_secrets=True)
storage_endpoint = _get_storage_endpoint_from_metadata()
credentials = datastore.credentials
datastore_info["storage_type"] = datastore.type
datastore_info["storage_account"] = datastore.account_name
datastore_info["account_url"] = STORAGE_ACCOUNT_URLS[datastore.type].format(
datastore.account_name, storage_endpoint
)
if isinstance(credentials, AccountKeyConfiguration):
datastore_info["credential"] = credentials.account_key
else:
try:
datastore_info["credential"] = credentials.sas_token
except Exception as e: # pylint: disable=broad-except
if not hasattr(credentials, "sas_token"):
datastore_info["credential"] = operations._credential
else:
raise e
if datastore.type == DatastoreType.AZURE_BLOB:
datastore_info["container_name"] = str(datastore.container_name)
elif datastore.type == DatastoreType.AZURE_DATA_LAKE_GEN2:
datastore_info["container_name"] = str(datastore.filesystem)
elif datastore.type == DatastoreType.AZURE_FILE:
datastore_info["container_name"] = str(datastore.file_share_name)
else:
        raise Exception(
            f"Datastore type {datastore.type} is not supported for uploads. "
            f"Supported types are {DatastoreType.AZURE_BLOB}, {DatastoreType.AZURE_DATA_LAKE_GEN2} "
            f"and {DatastoreType.AZURE_FILE}."
        )

return datastore_info
def list_logs_in_datastore(ds_info: Dict[str, str], prefix: str, legacy_log_folder_name: str) -> Dict[str, str]:
"""Returns a dictionary of file name to blob or data lake uri with SAS token, matching the structure of
RunDetails.logFiles.
legacy_log_folder_name: the name of the folder in the datastore that contains the logs
/azureml-logs/*.txt is the legacy log structure for commandJob and sweepJob
/logs/azureml/*.txt is the legacy log structure for pipeline parent Job
"""
if ds_info["storage_type"] not in [
DatastoreType.AZURE_BLOB,
DatastoreType.AZURE_DATA_LAKE_GEN2,
]:
raise Exception("Only Blob and Azure DataLake Storage Gen2 datastores are supported.")
storage_client = get_storage_client(
credential=ds_info["credential"],
container_name=ds_info["container_name"],
storage_account=ds_info["storage_account"],
storage_type=ds_info["storage_type"],
)
items = storage_client.list(starts_with=prefix + "/user_logs/")
# Append legacy log files if present
items.extend(storage_client.list(starts_with=prefix + legacy_log_folder_name))
log_dict = {}
for item_name in items:
sub_name = item_name.split(prefix + "/")[1]
if isinstance(storage_client, BlobStorageClient):
token = generate_blob_sas(
account_name=ds_info["storage_account"],
container_name=ds_info["container_name"],
blob_name=item_name,
account_key=ds_info["credential"],
permission=BlobSasPermissions(read=True),
expiry=datetime.utcnow() + timedelta(minutes=30),
)
elif isinstance(storage_client, Gen2StorageClient):
token = generate_file_sas( # pylint: disable=no-value-for-parameter
account_name=ds_info["storage_account"],
file_system_name=ds_info["container_name"],
file_name=item_name,
credential=ds_info["credential"],
permission=FileSasPermissions(read=True),
expiry=datetime.utcnow() + timedelta(minutes=30),
)
log_dict[sub_name] = "{}/{}/{}?{}".format(ds_info["account_url"], ds_info["container_name"], item_name, token)
return log_dict
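`list_logs_in_datastore` keys the result by the path relative to the run prefix and appends a read-only SAS token to each URL. Stripped of the Azure calls, the string handling reduces to the following (the token value here is a placeholder; the real one comes from `generate_blob_sas` or `generate_file_sas`):

```python
# Hypothetical values for illustration only.
prefix = "ExperimentRun/dcid.run-1"
item_name = prefix + "/user_logs/std_log.txt"
account_url = "https://myaccount.blob.core.windows.net"
container = "azureml"
sas_token = "sig=placeholder"  # would come from generate_blob_sas

# Key: path relative to the run prefix, as in RunDetails.logFiles.
sub_name = item_name.split(prefix + "/")[1]
# Value: full URL with the SAS token appended as a query string.
log_url = "{}/{}/{}?{}".format(account_url, container, item_name, sas_token)
print(sub_name)
print(log_url)
```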
def _get_default_datastore_info(datastore_operation):
return get_datastore_info(datastore_operation, None)
def upload_artifact(
local_path: str,
datastore_operation: DatastoreOperations,
operation_scope: OperationScope,
datastore_name: Optional[str],
asset_hash: Optional[str] = None,
show_progress: bool = True,
asset_name: Optional[str] = None,
asset_version: Optional[str] = None,
ignore_file: IgnoreFile = IgnoreFile(None),
sas_uri=None,
) -> ArtifactStorageInfo:
"""Upload local file or directory to datastore."""
if sas_uri:
storage_client = get_storage_client(credential=None, storage_account=None, account_url=sas_uri)
else:
datastore_name = _get_datastore_name(datastore_name=datastore_name)
datastore_info = get_datastore_info(datastore_operation, datastore_name)
storage_client = FlowFileStorageClient(
credential=datastore_info["credential"],
file_share_name=datastore_info["container_name"],
account_url=datastore_info["account_url"],
azure_cred=datastore_operation._credential,
)
artifact_info = storage_client.upload(
local_path,
asset_hash=asset_hash,
show_progress=show_progress,
name=asset_name,
version=asset_version,
ignore_file=ignore_file,
)
artifact_info["remote path"] = os.path.join(
storage_client.directory_client.directory_path, artifact_info["remote path"]
)
return artifact_info
def download_artifact(
starts_with: Union[str, os.PathLike],
destination: str,
datastore_operation: DatastoreOperations,
datastore_name: Optional[str],
datastore_info: Optional[Dict] = None,
) -> str:
"""Download datastore path to local file or directory.
:param Union[str, os.PathLike] starts_with: Prefix of blobs to download
:param str destination: Path that files will be written to
:param DatastoreOperations datastore_operation: Datastore operations
:param Optional[str] datastore_name: name of datastore
:param Dict datastore_info: the return value of invoking get_datastore_info
:return str: Path that files were written to
"""
starts_with = starts_with.as_posix() if isinstance(starts_with, Path) else starts_with
datastore_name = _get_datastore_name(datastore_name=datastore_name)
if datastore_info is None:
datastore_info = get_datastore_info(datastore_operation, datastore_name)
storage_client = get_storage_client(**datastore_info)
storage_client.download(starts_with=starts_with, destination=destination)
return destination
def download_artifact_from_storage_url(
blob_url: str,
destination: str,
datastore_operation: DatastoreOperations,
datastore_name: Optional[str],
) -> str:
"""Download datastore blob URL to local file or directory."""
datastore_name = _get_datastore_name(datastore_name=datastore_name)
datastore_info = get_datastore_info(datastore_operation, datastore_name)
starts_with = get_artifact_path_from_storage_url(
blob_url=str(blob_url), container_name=datastore_info.get("container_name")
)
return download_artifact(
starts_with=starts_with,
destination=destination,
datastore_operation=datastore_operation,
datastore_name=datastore_name,
datastore_info=datastore_info,
)
def download_artifact_from_aml_uri(uri: str, destination: str, datastore_operation: DatastoreOperations):
"""Downloads artifact pointed to by URI of the form `azureml://...` to destination.
:param str uri: AzureML uri of artifact to download
:param str destination: Path to download artifact to
:param DatastoreOperations datastore_operation: datastore operations
:return str: Path that files were downloaded to
"""
parsed_uri = AzureMLDatastorePathUri(uri)
return download_artifact(
starts_with=parsed_uri.path,
destination=destination,
datastore_operation=datastore_operation,
datastore_name=parsed_uri.datastore,
)
def aml_datastore_path_exists(
uri: str, datastore_operation: DatastoreOperations, datastore_info: Optional[dict] = None
):
"""Checks whether `uri` of the form "azureml://" points to either a directory or a file.
:param str uri: azure ml datastore uri
:param DatastoreOperations datastore_operation: Datastore operation
:param dict datastore_info: return value of get_datastore_info
"""
parsed_uri = AzureMLDatastorePathUri(uri)
datastore_info = datastore_info or get_datastore_info(datastore_operation, parsed_uri.datastore)
return get_storage_client(**datastore_info).exists(parsed_uri.path)
def _upload_to_datastore(
operation_scope: OperationScope,
datastore_operation: DatastoreOperations,
path: Union[str, Path, os.PathLike],
artifact_type: str,
datastore_name: Optional[str] = None,
show_progress: bool = True,
asset_name: Optional[str] = None,
asset_version: Optional[str] = None,
asset_hash: Optional[str] = None,
ignore_file: Optional[IgnoreFile] = None,
sas_uri: Optional[str] = None, # contains registry sas url
) -> ArtifactStorageInfo:
_validate_path(path, _type=artifact_type)
if not ignore_file:
ignore_file = get_ignore_file(path)
if not asset_hash:
asset_hash = get_object_hash(path, ignore_file)
artifact = upload_artifact(
str(path),
datastore_operation,
operation_scope,
datastore_name,
show_progress=show_progress,
asset_hash=asset_hash,
asset_name=asset_name,
asset_version=asset_version,
ignore_file=ignore_file,
sas_uri=sas_uri,
)
return artifact
def _upload_and_generate_remote_uri(
operation_scope: OperationScope,
datastore_operation: DatastoreOperations,
path: Union[str, Path, os.PathLike],
artifact_type: str = ErrorTarget.ARTIFACT,
datastore_name: str = WORKSPACE_BLOB_STORE,
show_progress: bool = True,
) -> str:
# Asset name is required for uploading to a datastore
asset_name = str(uuid.uuid4())
artifact_info = _upload_to_datastore(
operation_scope=operation_scope,
datastore_operation=datastore_operation,
path=path,
datastore_name=datastore_name,
asset_name=asset_name,
artifact_type=artifact_type,
show_progress=show_progress,
)
path = artifact_info.relative_path
datastore = AMLNamedArmId(artifact_info.datastore_arm_id).asset_name
return SHORT_URI_FORMAT.format(datastore, path)
def _update_metadata(name, version, indicator_file, datastore_info) -> None:
storage_client = get_storage_client(**datastore_info)
if isinstance(storage_client, BlobStorageClient):
_update_blob_metadata(name, version, indicator_file, storage_client)
elif isinstance(storage_client, Gen2StorageClient):
_update_gen2_metadata(name, version, indicator_file, storage_client)
def _update_blob_metadata(name, version, indicator_file, storage_client) -> None:
container_client = storage_client.container_client
if indicator_file.startswith(storage_client.container):
indicator_file = indicator_file.split(storage_client.container)[1]
blob = container_client.get_blob_client(blob=indicator_file)
blob.set_blob_metadata(_build_metadata_dict(name=name, version=version))
def _update_gen2_metadata(name, version, indicator_file, storage_client) -> None:
artifact_directory_client = storage_client.file_system_client.get_directory_client(indicator_file)
artifact_directory_client.set_metadata(_build_metadata_dict(name=name, version=version))
T = TypeVar("T", bound=Artifact)
def _check_and_upload_path(
artifact: T,
asset_operations: Union["DataOperations", "ModelOperations", "CodeOperations", "FeatureSetOperations"],
artifact_type: str,
datastore_name: Optional[str] = None,
sas_uri: Optional[str] = None,
show_progress: bool = True,
):
"""Checks whether `artifact` is a path or a uri and uploads it to the datastore if necessary.
:param T artifact: artifact to check and upload
:param Union["DataOperations", "ModelOperations", "CodeOperations"] asset_operations: the asset operations to use for uploading
:param str datastore_name: the name of the datastore to upload to
:param str sas_uri: the sas uri to use for uploading
"""
from azure.ai.ml._utils.utils import is_mlflow_uri, is_url
datastore_name = artifact.datastore
if (
hasattr(artifact, "local_path")
and artifact.local_path is not None
or (
hasattr(artifact, "path")
and artifact.path is not None
and not (is_url(artifact.path) or is_mlflow_uri(artifact.path))
)
):
path = (
Path(artifact.path)
if hasattr(artifact, "path") and artifact.path is not None
else Path(artifact.local_path)
)
if not path.is_absolute():
path = Path(artifact.base_path, path).resolve()
uploaded_artifact = _upload_to_datastore(
asset_operations._operation_scope,
asset_operations._datastore_operation,
path,
datastore_name=datastore_name,
asset_name=artifact.name,
asset_version=str(artifact.version),
asset_hash=artifact._upload_hash if hasattr(artifact, "_upload_hash") else None,
sas_uri=sas_uri,
artifact_type=artifact_type,
show_progress=show_progress,
ignore_file=getattr(artifact, "_ignore_file", None),
)
return uploaded_artifact
def _check_and_upload_env_build_context(
environment: Environment,
operations: "EnvironmentOperations",
sas_uri=None,
show_progress: bool = True,
) -> Environment:
if environment.path:
uploaded_artifact = _upload_to_datastore(
operations._operation_scope,
operations._datastore_operation,
environment.path,
asset_name=environment.name,
asset_version=str(environment.version),
asset_hash=environment._upload_hash,
sas_uri=sas_uri,
artifact_type=ErrorTarget.ENVIRONMENT,
datastore_name=environment.datastore,
show_progress=show_progress,
)
# TODO: Decide whether the trailing "/" needs to stay. EMS requires it to be present.
environment.build.path = uploaded_artifact.full_storage_path + "/"
return environment
Source: promptflow/src/promptflow/promptflow/azure/operations/_artifact_utilities.py
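The `_upload_and_generate_remote_uri` helper above composes a short datastore URI from a datastore name and the uploaded artifact's relative path. A minimal, self-contained sketch of just that composition follows; the format string is an assumption mirroring the `SHORT_URI_FORMAT` constant imported by the real helper, which also performs the actual datastore upload (omitted here):

```python
import uuid

SHORT_URI_FORMAT = "azureml://datastores/{}/paths/{}"  # assumed value of the real constant

def generate_remote_uri(datastore: str, relative_path: str) -> str:
    """Compose a short datastore URI from a datastore name and a blob path."""
    return SHORT_URI_FORMAT.format(datastore, relative_path)

# A random asset name keeps uploads from colliding, as in the helper above.
asset_name = str(uuid.uuid4())
uri = generate_remote_uri("workspaceblobstore", f"LocalUpload/{asset_name}/data.jsonl")
print(uri.startswith("azureml://datastores/workspaceblobstore/paths/"))  # True
```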
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
from dataclasses import dataclass
from datetime import datetime
from itertools import chain
from typing import Any, List, Mapping
from promptflow._utils.exception_utils import ExceptionPresenter, RootErrorCode
from promptflow._utils.openai_metrics_calculator import OpenAIMetricsCalculator
from promptflow.contracts.run_info import RunInfo, Status
from promptflow.executor._result import AggregationResult, LineResult
@dataclass
class LineError:
"""The error of a line in a batch run.
It contains the line number and the error dict of a failed line in the batch run.
The error dict is generated by ExceptionPresenter.to_dict().
"""
line_number: int
error: Mapping[str, Any]
def to_dict(self):
return {
"line_number": self.line_number,
"error": self.error,
}
@dataclass
class ErrorSummary:
"""The summary of errors in a batch run.
:param failed_user_error_lines: The number of lines that failed with user error.
:type failed_user_error_lines: int
:param failed_system_error_lines: The number of lines that failed with system error.
:type failed_system_error_lines: int
:param error_list: The line number and error dict of failed lines in the line results.
:type error_list: List[~promptflow.batch._result.LineError]
:param aggr_error_dict: The dict of node name and error dict of failed nodes in the aggregation result.
:type aggr_error_dict: Mapping[str, Any]
:param batch_error_dict: The dict of batch run error.
:type batch_error_dict: Mapping[str, Any]
"""
failed_user_error_lines: int
failed_system_error_lines: int
error_list: List[LineError]
aggr_error_dict: Mapping[str, Any]
batch_error_dict: Mapping[str, Any]
@staticmethod
def create(line_results: List[LineResult], aggr_result: AggregationResult, exception: Exception = None):
failed_user_error_lines = 0
failed_system_error_lines = 0
error_list: List[LineError] = []
for line_result in line_results:
if line_result.run_info.status != Status.Failed:
continue
flow_run = line_result.run_info
if flow_run.error.get("code", "") == RootErrorCode.USER_ERROR:
failed_user_error_lines += 1
else:
failed_system_error_lines += 1
line_error = LineError(
line_number=flow_run.index,
error=flow_run.error,
)
error_list.append(line_error)
error_summary = ErrorSummary(
failed_user_error_lines=failed_user_error_lines,
failed_system_error_lines=failed_system_error_lines,
error_list=sorted(error_list, key=lambda x: x.line_number),
aggr_error_dict={
node_name: node_run_info.error
for node_name, node_run_info in aggr_result.node_run_infos.items()
if node_run_info.status == Status.Failed
},
batch_error_dict=ExceptionPresenter.create(exception).to_dict() if exception else None,
)
return error_summary
@dataclass
class SystemMetrics:
"""The system metrics of a batch run."""
total_tokens: int
prompt_tokens: int
completion_tokens: int
duration: float # in seconds
@staticmethod
def create(
start_time: datetime, end_time: datetime, line_results: List[LineResult], aggr_results: AggregationResult
):
openai_metrics = SystemMetrics._get_openai_metrics(line_results, aggr_results)
return SystemMetrics(
total_tokens=openai_metrics.get("total_tokens", 0),
prompt_tokens=openai_metrics.get("prompt_tokens", 0),
completion_tokens=openai_metrics.get("completion_tokens", 0),
duration=(end_time - start_time).total_seconds(),
)
@staticmethod
def _get_openai_metrics(line_results: List[LineResult], aggr_results: AggregationResult):
node_run_infos = _get_node_run_infos(line_results, aggr_results)
total_metrics = {}
calculator = OpenAIMetricsCalculator()
for run_info in node_run_infos:
metrics = SystemMetrics._try_get_openai_metrics(run_info)
if metrics:
calculator.merge_metrics_dict(total_metrics, metrics)
else:
api_calls = run_info.api_calls or []
for call in api_calls:
metrics = calculator.get_openai_metrics_from_api_call(call)
calculator.merge_metrics_dict(total_metrics, metrics)
return total_metrics
@staticmethod
def _try_get_openai_metrics(run_info: RunInfo):
openai_metrics = {}
if run_info.system_metrics:
for metric in ["total_tokens", "prompt_tokens", "completion_tokens"]:
if metric not in run_info.system_metrics:
return False
openai_metrics[metric] = run_info.system_metrics[metric]
return openai_metrics
def to_dict(self):
return {
"total_tokens": self.total_tokens,
"prompt_tokens": self.prompt_tokens,
"completion_tokens": self.completion_tokens,
"duration": self.duration,
}
@dataclass
class BatchResult:
"""The result of a batch run."""
status: Status
total_lines: int
completed_lines: int
failed_lines: int
node_status: Mapping[str, int]
start_time: datetime
end_time: datetime
metrics: Mapping[str, str]
system_metrics: SystemMetrics
error_summary: ErrorSummary
@classmethod
def create(
cls,
start_time: datetime,
end_time: datetime,
line_results: List[LineResult],
aggr_result: AggregationResult,
status: Status = Status.Completed,
exception: Exception = None,
) -> "BatchResult":
total_lines = len(line_results)
completed_lines = sum(line_result.run_info.status == Status.Completed for line_result in line_results)
failed_lines = total_lines - completed_lines
if exception:
status = Status.Failed
return cls(
status=status,
total_lines=total_lines,
completed_lines=completed_lines,
failed_lines=failed_lines,
node_status=BatchResult._get_node_status(line_results, aggr_result),
start_time=start_time,
end_time=end_time,
metrics=aggr_result.metrics,
system_metrics=SystemMetrics.create(start_time, end_time, line_results, aggr_result),
error_summary=ErrorSummary.create(line_results, aggr_result, exception),
)
@staticmethod
def _get_node_status(line_results: List[LineResult], aggr_result: AggregationResult):
node_run_infos = _get_node_run_infos(line_results, aggr_result)
node_status = {}
for node_run_info in node_run_infos:
key = f"{node_run_info.node}.{node_run_info.status.value.lower()}"
node_status[key] = node_status.get(key, 0) + 1
return node_status
def _get_node_run_infos(line_results: List[LineResult], aggr_result: AggregationResult):
line_node_run_infos = (
node_run_info for line_result in line_results for node_run_info in line_result.node_run_infos.values()
)
aggr_node_run_infos = (node_run_info for node_run_info in aggr_result.node_run_infos.values())
return chain(line_node_run_infos, aggr_node_run_infos)
Source: promptflow/src/promptflow/promptflow/batch/_result.py
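`BatchResult._get_node_status` above folds per-node run infos into `"<node>.<status>"` counters. Stripped of promptflow types, the same aggregation can be sketched with plain data (the node names and statuses below are hypothetical):

```python
from collections import Counter

# Hypothetical (node_name, status) pairs standing in for node run infos.
runs = [
    ("fetch", "completed"),
    ("fetch", "completed"),
    ("summarize", "failed"),
    ("summarize", "completed"),
]

# Each node contributes one "<node>.<status>" key; Counter tallies them.
node_status = Counter(f"{name}.{status}" for name, status in runs)
print(dict(node_status))
# {'fetch.completed': 2, 'summarize.failed': 1, 'summarize.completed': 1}
```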
# ---------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# ---------------------------------------------------------
import asyncio
import contextvars
import inspect
import os
import signal
import threading
import time
import traceback
from asyncio import Task
from concurrent.futures import ThreadPoolExecutor
from typing import Any, Dict, List, Tuple
from promptflow._core.flow_execution_context import FlowExecutionContext
from promptflow._core.tools_manager import ToolsManager
from promptflow._utils.logger_utils import flow_logger
from promptflow._utils.utils import extract_user_frame_summaries, set_context
from promptflow.contracts.flow import Node
from promptflow.executor._dag_manager import DAGManager
from promptflow.executor._errors import NoNodeExecutedError
PF_ASYNC_NODE_SCHEDULER_EXECUTE_TASK_NAME = "_pf_async_nodes_scheduler.execute"
DEFAULT_TASK_LOGGING_INTERVAL = 60
ASYNC_DAG_MANAGER_COMPLETED = False
class AsyncNodesScheduler:
def __init__(
self,
tools_manager: ToolsManager,
node_concurrency: int,
) -> None:
self._tools_manager = tools_manager
self._node_concurrency = node_concurrency
self._task_start_time = {}
self._task_last_log_time = {}
self._dag_manager_completed_event = threading.Event()
async def execute(
self,
nodes: List[Node],
inputs: Dict[str, Any],
context: FlowExecutionContext,
) -> Tuple[dict, dict]:
# TODO: Provide cancel API
if threading.current_thread() is threading.main_thread():
signal.signal(signal.SIGINT, signal_handler)
signal.signal(signal.SIGTERM, signal_handler)
else:
flow_logger.info(
"Current thread is not main thread, skip signal handler registration in AsyncNodesScheduler."
)
# Semaphore should be created in the loop, otherwise it will not work.
loop = asyncio.get_running_loop()
self._semaphore = asyncio.Semaphore(self._node_concurrency)
monitor = threading.Thread(
target=monitor_long_running_coroutine,
args=(loop, self._task_start_time, self._task_last_log_time, self._dag_manager_completed_event),
daemon=True,
)
monitor.start()
# Set the name of scheduler tasks to avoid monitoring its duration
task = asyncio.current_task()
task.set_name(PF_ASYNC_NODE_SCHEDULER_EXECUTE_TASK_NAME)
parent_context = contextvars.copy_context()
executor = ThreadPoolExecutor(
max_workers=self._node_concurrency, initializer=set_context, initargs=(parent_context,)
)
# Note that we must not use `with` statement to manage the executor.
# This is because it will always call `executor.shutdown()` when exiting the `with` block.
# Then the event loop will wait for all tasks to be completed before raising the cancellation error.
# See reference: https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.Executor
outputs = await self._execute_with_thread_pool(executor, nodes, inputs, context)
executor.shutdown()
return outputs
async def _execute_with_thread_pool(
self,
executor: ThreadPoolExecutor,
nodes: List[Node],
inputs: Dict[str, Any],
context: FlowExecutionContext,
) -> Tuple[dict, dict]:
flow_logger.info(f"Start to run {len(nodes)} nodes with the current event loop.")
dag_manager = DAGManager(nodes, inputs)
task2nodes = self._execute_nodes(dag_manager, context, executor)
while not dag_manager.completed():
task2nodes = await self._wait_and_complete_nodes(task2nodes, dag_manager)
submitted_tasks2nodes = self._execute_nodes(dag_manager, context, executor)
task2nodes.update(submitted_tasks2nodes)
# Set the event to notify the monitor thread to exit
# Ref: https://docs.python.org/3/library/threading.html#event-objects
self._dag_manager_completed_event.set()
for node in dag_manager.bypassed_nodes:
dag_manager.completed_nodes_outputs[node] = None
return dag_manager.completed_nodes_outputs, dag_manager.bypassed_nodes
async def _wait_and_complete_nodes(self, task2nodes: Dict[Task, Node], dag_manager: DAGManager) -> Dict[Task, Node]:
if not task2nodes:
raise NoNodeExecutedError("No nodes are ready for execution, but the flow is not completed.")
tasks = [task for task in task2nodes]
for task in tasks:
self._task_start_time[task] = time.time()
done, _ = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
dag_manager.complete_nodes({task2nodes[task].name: task.result() for task in done})
for task in done:
del task2nodes[task]
return task2nodes
def _execute_nodes(
self,
dag_manager: DAGManager,
context: FlowExecutionContext,
executor: ThreadPoolExecutor,
) -> Dict[Task, Node]:
# Bypass nodes and update node run info until there are no nodes to bypass
nodes_to_bypass = dag_manager.pop_bypassable_nodes()
while nodes_to_bypass:
for node in nodes_to_bypass:
context.bypass_node(node)
nodes_to_bypass = dag_manager.pop_bypassable_nodes()
# Create tasks for ready nodes
return {
self._create_node_task(node, dag_manager, context, executor): node for node in dag_manager.pop_ready_nodes()
}
async def run_task_with_semaphore(self, coroutine):
async with self._semaphore:
return await coroutine
def _create_node_task(
self,
node: Node,
dag_manager: DAGManager,
context: FlowExecutionContext,
executor: ThreadPoolExecutor,
) -> Task:
f = self._tools_manager.get_tool(node.name)
kwargs = dag_manager.get_node_valid_inputs(node, f)
if inspect.iscoroutinefunction(f):
# For async task, it will not be executed before calling create_task.
task = context.invoke_tool_async(node, f, kwargs)
else:
# For sync task, convert it to async task and run it in executor thread.
# Even though the task is put to the thread pool, thread.start will only be triggered after create_task.
task = self._sync_function_to_async_task(executor, context, node, f, kwargs)
# Set the name of the task to the node name for debugging purpose
# It does not need to be unique by design.
# Wrap the coroutine in a task with asyncio.create_task to schedule it for event loop execution
# The task is created and added to the event loop, but the exact execution depends on loop's scheduling
return asyncio.create_task(self.run_task_with_semaphore(task), name=node.name)
@staticmethod
async def _sync_function_to_async_task(
executor: ThreadPoolExecutor,
context: FlowExecutionContext,
node,
f,
kwargs,
):
# The task will not be executed before calling create_task.
return await asyncio.get_running_loop().run_in_executor(executor, context.invoke_tool, node, f, kwargs)
def signal_handler(sig, frame):
"""
Start a thread to monitor coroutines after receiving signal.
"""
flow_logger.info(f"Received signal {sig}({signal.Signals(sig).name}), start coroutine monitor thread.")
loop = asyncio.get_running_loop()
monitor = threading.Thread(target=monitor_coroutine_after_cancellation, args=(loop,))
monitor.start()
raise KeyboardInterrupt
def log_stack_recursively(task: asyncio.Task, elapse_time: float):
"""Recursively log the frame of a task or coroutine.
Traditional stack traces stop at the first awaited call nested inside the coroutine.
:param task: Task to log
:type task: asyncio.Task
:param elapse_time: Seconds elapsed since the task started
:type elapse_time: float
"""
# We cannot use task.get_stack() to get the stack, because CPython's
# implementation returns only one stack frame for a suspended coroutine.
# Ref: https://github.com/python/cpython/blob/main/Lib/asyncio/tasks.py
# "only one stack frame is returned for a suspended coroutine."
task_or_coroutine = task
frame_summaries = []
# Collect frame_summaries along async call chain
while True:
if isinstance(task_or_coroutine, asyncio.Task):
# For a task, get the coroutine it's running
coroutine: asyncio.coroutine = task_or_coroutine.get_coro()
elif asyncio.iscoroutine(task_or_coroutine):
coroutine = task_or_coroutine
else:
break
frame = coroutine.cr_frame
stack_summary: traceback.StackSummary = traceback.extract_stack(frame)
frame_summaries.extend(stack_summary)
task_or_coroutine = coroutine.cr_await
# Format the frame summaries to warning message
if frame_summaries:
user_frame_summaries = extract_user_frame_summaries(frame_summaries)
stack_messages = traceback.format_list(user_frame_summaries)
all_stack_message = "".join(stack_messages)
task_msg = (
f"Task {task.get_name()} has been running for {elapse_time:.0f} seconds,"
f" stacktrace:\n{all_stack_message}"
)
flow_logger.warning(task_msg)
def monitor_long_running_coroutine(
loop: asyncio.AbstractEventLoop,
task_start_time: dict,
task_last_log_time: dict,
dag_manager_completed_event: threading.Event,
):
flow_logger.info("monitor_long_running_coroutine started")
logging_interval = DEFAULT_TASK_LOGGING_INTERVAL
logging_interval_in_env = os.environ.get("PF_TASK_PEEKING_INTERVAL")
if logging_interval_in_env:
try:
value = int(logging_interval_in_env)
if value <= 0:
raise ValueError
logging_interval = value
flow_logger.info(
f"Using value of PF_TASK_PEEKING_INTERVAL in environment variable as "
f"logging interval: {logging_interval_in_env}"
)
except ValueError:
flow_logger.warning(
f"Value of PF_TASK_PEEKING_INTERVAL in environment variable ('{logging_interval_in_env}') "
f"is invalid, use default value {DEFAULT_TASK_LOGGING_INTERVAL}"
)
while not dag_manager_completed_event.is_set():
running_tasks = [task for task in asyncio.all_tasks(loop) if not task.done()]
# get duration of running tasks
for task in running_tasks:
# Do not monitor the scheduler task
if task.get_name() == PF_ASYNC_NODE_SCHEDULER_EXECUTE_TASK_NAME:
continue
# Do not monitor sync tools, since they will run in executor thread and will
# be monitored by RepeatLogTimer.
task_stacks = task.get_stack()
if (
task_stacks
and task_stacks[-1].f_code
and task_stacks[-1].f_code.co_name == AsyncNodesScheduler._sync_function_to_async_task.__name__
):
continue
if task_start_time.get(task) is None:
flow_logger.warning(f"task {task.get_name()} has no start time, which should not happen")
else:
duration = time.time() - task_start_time[task]
if duration > logging_interval:
if (
task_last_log_time.get(task) is None
or time.time() - task_last_log_time[task] > logging_interval
):
log_stack_recursively(task, duration)
task_last_log_time[task] = time.time()
time.sleep(1)
def monitor_coroutine_after_cancellation(loop: asyncio.AbstractEventLoop):
"""Exit the process when all coroutines are done.
We add this function because if a sync tool is running in async mode,
the task will be cancelled after receiving SIGINT,
but the thread will not be terminated, which blocks the program from exiting.
:param loop: event loop of main thread
:type loop: asyncio.AbstractEventLoop
"""
# TODO: Use environment variable to ensure it is flow test scenario to avoid unexpected exit.
# E.g. Customer is integrating Promptflow in their own code, and they want to handle SIGINT by themselves.
max_wait_seconds = int(os.environ.get("PF_WAIT_SECONDS_AFTER_CANCELLATION", 30))
all_tasks_are_done = False
exceeded_wait_seconds = False
thread_start_time = time.time()
flow_logger.info(f"Start to monitor coroutines after cancellation, max wait seconds: {max_wait_seconds}s")
while not all_tasks_are_done and not exceeded_wait_seconds:
# For sync tool running in async mode, the task will be cancelled,
# but the thread will not be terminated; we exit the program regardless.
# TODO: Detect whether there is any sync tool running in async mode,
# if there is none, avoid sys.exit and let the program exit gracefully.
all_tasks_are_done = all(task.done() for task in asyncio.all_tasks(loop))
if all_tasks_are_done:
flow_logger.info("All coroutines are done. Exiting.")
# We cannot ensure persist_flow_run is called before the process exits in the case that there is
# non-daemon thread running, sleep for 3 seconds as a best effort.
# If the caller wants to ensure flow status is cancelled in storage, it should check the flow status
# after timeout and set the flow status to Cancelled.
time.sleep(3)
# Use os._exit instead of sys.exit, so that the process can stop without
# waiting for the thread created by run_in_executor to finish.
# sys.exit: https://docs.python.org/3/library/sys.html#sys.exit
# Raise a SystemExit exception, signaling an intention to exit the interpreter.
# Specifically, it does not exit non-daemon thread
# os._exit https://docs.python.org/3/library/os.html#os._exit
# Exit the process with status n, without calling cleanup handlers, flushing stdio buffers, etc.
# Specifically, it stops process without waiting for non-daemon thread.
os._exit(0)
exceeded_wait_seconds = time.time() - thread_start_time > max_wait_seconds
time.sleep(1)
if exceeded_wait_seconds:
if not all_tasks_are_done:
flow_logger.info(
f"Not all coroutines are done within {max_wait_seconds}s"
" after cancellation. Exiting the process anyway."
" Please configure the environment variable"
" PF_WAIT_SECONDS_AFTER_CANCELLATION if your tool needs"
" more time to clean up after cancellation."
)
remaining_tasks = [task for task in asyncio.all_tasks(loop) if not task.done()]
flow_logger.info(f"Remaining tasks: {[task.get_name() for task in remaining_tasks]}")
time.sleep(3)
os._exit(0)
Source: promptflow/src/promptflow/promptflow/executor/_async_nodes_scheduler.py
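The scheduler above wraps every node coroutine in `run_task_with_semaphore` so that at most `node_concurrency` tasks run at once, creating the semaphore inside the running loop. Stripped of promptflow specifics, the pattern looks like this (all names are illustrative):

```python
import asyncio

async def run_with_limit(coros, limit: int):
    """Run coroutines concurrently, at most `limit` at a time."""
    # Created inside the coroutine so the semaphore binds to the running loop,
    # matching the note in AsyncNodesScheduler.execute above.
    sem = asyncio.Semaphore(limit)

    async def guarded(coro):
        async with sem:
            return await coro

    return await asyncio.gather(*(guarded(c) for c in coros))

async def work(i: int) -> int:
    await asyncio.sleep(0)  # yield control, as a real node task would
    return i * 2

results = asyncio.run(run_with_limit([work(i) for i in range(4)], limit=2))
print(results)  # [0, 2, 4, 6]
```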
[tool.black]
line-length = 120
[tool.pytest.ini_options]
markers = [
"sdk_test",
"cli_test",
"unittest",
"e2etest",
"flaky",
"endpointtest",
"mt_endpointtest",
]
[tool.coverage.run]
omit = [
# omit anything in a _restclient directory anywhere
"*/_restclient/*",
]
Source: promptflow/src/promptflow/pyproject.toml
{
"version": "0.2",
"language": "en",
"languageId": "python",
"dictionaries": [
"powershell",
"python",
"go",
"css",
"html",
"bash",
"npm",
"softwareTerms",
"en_us",
"en-gb"
],
"ignorePaths": [
"**/*.js",
"**/*.pyc",
"**/*.log",
"**/*.jsonl",
"**/*.xml",
"**/*.txt",
".gitignore",
"scripts/docs/_build/**",
"src/promptflow/promptflow/azure/_restclient/flow/**",
"src/promptflow/promptflow/azure/_restclient/swagger.json",
"src/promptflow/tests/**",
"src/promptflow-tools/tests/**",
"**/flow.dag.yaml",
"**/setup.py",
"scripts/installer/curl_install_pypi/**",
"scripts/installer/windows/**",
"src/promptflow/promptflow/_sdk/_service/pfsvc.py"
],
"words": [
"aoai",
"amlignore",
"mldesigner",
"faiss",
"serp",
"azureml",
"mlflow",
"vnet",
"openai",
"pfazure",
"eastus",
"azureai",
"vectordb",
"Qdrant",
"Weaviate",
"env",
"e2etests",
"e2etest",
"tablefmt",
"logprobs",
"logit",
"hnsw",
"chatml",
"UNLCK",
"KHTML",
"numlines",
"azurecr",
"centralus",
"Policheck",
"azuremlsdktestpypi",
"rediraffe",
"pydata",
"ROBOCOPY",
"undoc",
"retriable",
"pfcli",
"pfutil",
"mgmt",
"wsid",
"westus",
"msrest",
"cref",
"msal",
"pfbytes",
"Apim",
"junit",
"nunit",
"astext",
"Likert",
"pfsvc"
],
"ignoreWords": [
"openmpi",
"ipynb",
"xdist",
"pydash",
"tqdm",
"rtype",
"epocs",
"fout",
"funcs",
"todos",
"fstring",
"creds",
"zipp",
"gmtime",
"pyjwt",
"nbconvert",
"nbformat",
"pypandoc",
"dotenv",
"miniconda",
"datas",
"tcgetpgrp",
"yamls",
"fmt",
"serpapi",
"genutils",
"metadatas",
"tiktoken",
"bfnrt",
"orelse",
"thead",
"sympy",
"ghactions",
"esac",
"MSRC",
"pycln",
"strictyaml",
"psutil",
"getch",
"tcgetattr",
"TCSADRAIN",
"stringio",
"jsonify",
"werkzeug",
"continuumio",
"pydantic",
"iterrows",
"dtype",
"fillna",
"nlines",
"aggr",
"tcsetattr",
"pysqlite",
"AADSTS700082",
"Pyinstaller",
"runsvdir",
"runsv",
"levelno",
"LANCZOS",
"Mobius",
"ruamel",
"gunicorn",
"pkill",
"pgrep",
"Hwfoxydrg",
"llms",
"vcrpy",
"uionly",
"llmops",
"Abhishek",
"restx",
"httpx",
"tiiuae",
"nohup",
"metagenai",
"WBITS",
"laddr",
"nrows",
"Dumpable",
"XCLASS",
"otel",
"OTLP",
"spawnv",
"spawnve",
"addrs"
],
"flagWords": [
"Prompt Flow"
],
"allowCompoundWords": true
}
Source: promptflow/.cspell.json
<!-- BEGIN MICROSOFT SECURITY.MD V0.0.8 BLOCK -->
## Security
Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/).
If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/opensource/security/definition), please report it to us as described below.
## Reporting Security Issues
**Please do not report security vulnerabilities through public GitHub issues.**
Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report).
If you prefer to submit without logging in, send email to [[email protected]](mailto:[email protected]). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/opensource/security/pgpkey).
You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://aka.ms/opensource/security/msrc).
Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:
* Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
* Full paths of source file(s) related to the manifestation of the issue
* The location of the affected source code (tag/branch/commit or direct URL)
* Any special configuration required to reproduce the issue
* Step-by-step instructions to reproduce the issue
* Proof-of-concept or exploit code (if possible)
* Impact of the issue, including how an attacker might exploit the issue
This information will help us triage your report more quickly.
If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/opensource/security/bounty) page for more details about our active programs.
## Preferred Languages
We prefer all communications to be in English.
## Policy
Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/opensource/security/cvd).
<!-- END MICROSOFT SECURITY.MD BLOCK -->
Source: promptflow/SECURITY.md